Tutorials
The NVIDIA NIM for Large Language Models (LLMs) playbooks demonstrate how to use NVIDIA NIM for LLMs to self-host RAG, deploy on Hugging Face, and fine-tune with LoRA.
The Build a RAG using a locally hosted NIM playbook demonstrates how to build a RAG pipeline with a locally hosted Llama3-8b-instruct NIM and access it using NVIDIA AI Endpoints for LangChain.
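As background for that playbook, a locally hosted NIM exposes an OpenAI-compatible chat endpoint that a RAG pipeline can query with retrieved context. A minimal stdlib sketch, assuming the Llama3-8b-instruct NIM is serving on localhost:8000 (the helper names and the example context string are illustrative):

```python
import json
from urllib import request

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local NIM address

def build_chat_request(question: str, context: str) -> dict:
    """Assemble an OpenAI-compatible chat payload that grounds the model
    in retrieved context, as a RAG pipeline would."""
    return {
        "model": "meta/llama3-8b-instruct",
        "messages": [
            {"role": "system",
             "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
        "max_tokens": 256,
    }

def query_nim(payload: dict) -> str:
    """POST the payload to the local NIM (requires a running NIM container)."""
    req = request.Request(
        NIM_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

payload = build_chat_request(
    "What is NIM?", "NIM is a set of inference microservices."
)
```

The playbook itself uses the LangChain connectors rather than raw HTTP; this only illustrates the underlying request shape.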
The Llama 3 LoRA Fine-Tuning and Deployment with NeMo Framework and NVIDIA NIM playbook demonstrates how to perform LoRA PEFT on Llama 3 8B Instruct using a biomedical question-answering dataset and deploy multiple LoRA adapters with NVIDIA NIM for LLMs.
The Llama 3.1 Law-Domain LoRA Fine-Tuning and Deployment with NeMo Framework and NVIDIA NIM playbook demonstrates how to perform LoRA PEFT on Llama 3.1 8B Instruct using a synthetically augmented version of Law StackExchange with NeMo Framework, followed by deployment with NVIDIA NIM for LLMs. As a prerequisite, complete the tutorial for data curation using NeMo Curator.
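When NIM serves multiple LoRA adapters, all of them sit behind the same OpenAI-compatible endpoint, and a request selects an adapter by name in the model field. A minimal sketch of that request shape (the adapter name llama3-8b-law-lora is a hypothetical example, not one from the playbooks):

```python
import json

def lora_request(adapter_name: str, prompt: str) -> str:
    """Serialize a completion request that routes to a specific adapter.
    With multi-LoRA serving, the `model` field can name either the base
    model or a deployed LoRA adapter."""
    body = {
        "model": adapter_name,  # e.g. "meta/llama3-8b-instruct" or an adapter name
        "prompt": prompt,
        "max_tokens": 128,
    }
    return json.dumps(body)

# Hypothetical adapter name from a law-domain fine-tune:
body = json.loads(
    lora_request("llama3-8b-law-lora", "Summarize the doctrine of estoppel.")
)
```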
The NIM on Azure Kubernetes Service (AKS) deployment guide provides step-by-step instructions for deploying NIM on AKS.
The NIM on Azure Machine Learning (AzureML) deployment guide provides step-by-step instructions for deploying NIM on AzureML using the Azure CLI and a Jupyter notebook.
The End to End LLM App development with Azure AI Studio, Prompt Flow and NIMs deployment guide walks through building a complete LLM application with Azure AI Studio, Prompt Flow, and NIM.
The NIM on AWS Elastic Kubernetes Service (EKS) deployment guide provides step-by-step instructions for deploying on AWS EKS.
The NIM on AWS SageMaker deployment guide provides step-by-step instructions for deploying on AWS SageMaker using Jupyter Notebooks, Python CLI, and the shell.
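For the SageMaker path, the deployed NIM container is invoked through the SageMaker runtime rather than a direct HTTP call. A hedged sketch, assuming an already-deployed endpoint whose name you chose at deploy time (the helper names are illustrative; the invocation requires boto3 and AWS credentials):

```python
import json

def build_sagemaker_body(prompt: str) -> bytes:
    """Serialize an OpenAI-style completion request for a NIM container
    hosted behind a SageMaker endpoint."""
    return json.dumps({
        "model": "meta/llama3-8b-instruct",
        "prompt": prompt,
        "max_tokens": 64,
    }).encode()

def invoke_nim_endpoint(endpoint_name: str, prompt: str) -> dict:
    """Call the endpoint via the SageMaker runtime client
    (endpoint_name is whatever was chosen when deploying)."""
    import boto3  # imported here so the sketch loads without AWS tooling
    client = boto3.client("sagemaker-runtime")
    resp = client.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=build_sagemaker_body(prompt),
    )
    return json.loads(resp["Body"].read())

body = json.loads(build_sagemaker_body("Hello"))
```

The notebook in the guide covers the deployment step itself; this only shows the request side once the endpoint exists.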
The NIM on KServe deployment guide provides step-by-step instructions for deploying NIM on KServe.