Tutorials for NVIDIA NIM for LLMs#
Use the following GitHub projects to learn about NIM.
Playbooks#
The NVIDIA NIM for Large Language Models (LLMs) playbooks demonstrate how to use NVIDIA NIM for LLMs to self-host RAG, deploy on Hugging Face, and fine-tune with LoRA.
The NVIDIA AI Blueprint: Bring Your LLM to NIM playbooks show three deployment approaches using the multi-LLM compatible NIM container.
The Build a RAG using a locally hosted NIM playbook demonstrates how to build a RAG using NVIDIA NIM for LLMs with a locally hosted Llama 3 8B Instruct NIM and deploy it using NVIDIA AI Endpoints for LangChain.
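Because a locally hosted NIM exposes an OpenAI-compatible API, a RAG chain ultimately reduces to sending a chat-completions request whose prompt includes the retrieved context. The sketch below builds such a payload; the base URL, port, and model identifier are illustrative assumptions, not values taken from the playbook.

```python
# Hedged sketch: a locally hosted Llama 3 8B Instruct NIM is assumed to
# serve an OpenAI-compatible endpoint at http://localhost:8000/v1.
# NIM_BASE_URL and MODEL are assumptions for illustration only.
import json

NIM_BASE_URL = "http://localhost:8000/v1"   # assumed local NIM address
MODEL = "meta/llama3-8b-instruct"           # assumed model identifier


def build_chat_request(question: str, context: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload that places
    retrieved context into the system prompt, as a RAG chain would."""
    return {
        "model": MODEL,
        "messages": [
            {
                "role": "system",
                "content": f"Answer using only this context:\n{context}",
            },
            {"role": "user", "content": question},
        ],
        "max_tokens": 256,
    }


payload = build_chat_request(
    "What is NIM?",
    "NVIDIA NIM is a set of inference microservices for deploying models.",
)
print(json.dumps(payload, indent=2))
```

In the actual playbook, the request is issued through NVIDIA AI Endpoints for LangChain rather than raw HTTP, but the payload shape the server receives is the same.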
For information on customizing NVIDIA NIMs for domain-specific needs, refer to Customizing NVIDIA NIM for Domain-Specific Needs with NVIDIA NeMo.
Platform Deployment Guides#
The NIM on Azure Kubernetes Service (AKS) deployment guide provides step-by-step instructions for deploying NIM on AKS.
The NIM on Azure Machine Learning (AzureML) deployment guide provides step-by-step instructions for deploying NIM on AzureML using the Azure CLI and a Jupyter Notebook.
The End to End LLM App development with Azure AI Studio, Prompt Flow and NIMs deployment guide demonstrates end-to-end LLM app development with Azure AI Studio, Prompt Flow, and NIMs.
The NIM on AWS Elastic Kubernetes Service (EKS) deployment guide provides step-by-step instructions for deploying on AWS EKS.
The NIM on AWS SageMaker deployment guide provides step-by-step instructions for deploying on AWS SageMaker using Jupyter Notebooks, Python CLI, and the shell.
The NIM on KServe deployment guide provides step-by-step instructions for deploying on KServe.