Deploy NVIDIA RAG Blueprint on Kubernetes with Helm from the repository#
Use the following documentation to deploy the NVIDIA RAG Blueprint by using the Helm chart from the repository.
To deploy the Helm chart with MIG support, refer to RAG Deployment with MIG Support.
To deploy with Helm from the repository, refer to Deploy Helm from the repository.
For other deployment options, refer to Deployment Options.
The following are the core services that you install:
RAG server
Ingestor server
NV-Ingest
Prerequisites#
Verify that you meet the prerequisites specified in prerequisites.
Deploy the RAG Helm chart from the repository#
If you are working directly with the source Helm chart, and you want to customize components individually, use the following procedure.
Change directory to deploy/helm/ by running the following code.
cd deploy/helm/
Create a namespace for the deployment by running the following code.
kubectl create namespace rag
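Optionally, confirm that the namespace exists before you continue. This check is an addition and is not part of the original procedure.
kubectl get namespace rag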
Add the required Helm repositories by running the following code. Before you run the commands, ensure that the NGC_API_KEY environment variable is set to your NGC API key.
helm repo add nvidia-nim https://helm.ngc.nvidia.com/nim/nvidia/ --username='$oauthtoken' --password=$NGC_API_KEY
helm repo add nim https://helm.ngc.nvidia.com/nim/ --username='$oauthtoken' --password=$NGC_API_KEY
helm repo add nemo-microservices https://helm.ngc.nvidia.com/nvidia/nemo-microservices --username='$oauthtoken' --password=$NGC_API_KEY
helm repo add baidu-nim https://helm.ngc.nvidia.com/nim/baidu --username='$oauthtoken' --password=$NGC_API_KEY
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add otel https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo add zipkin https://zipkin.io/zipkin-helm
helm repo add prometheus https://prometheus-community.github.io/helm-charts
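Optionally, list the configured repositories to confirm that each repository was added. This verification step is an addition and is not part of the original procedure.
helm repo list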
Update Helm chart dependencies by running the following code.
helm dependency update nvidia-blueprint-rag
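Optionally, confirm that the chart dependencies were resolved. This check is an addition and is not part of the original procedure.
helm dependency list nvidia-blueprint-rag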
Install the chart by running the following code.
helm upgrade --install rag -n rag nvidia-blueprint-rag/ \
  --set imagePullSecret.password=$NGC_API_KEY \
  --set ngcApiSecret.password=$NGC_API_KEY
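As an optional check that is not part of the original steps, you can inspect the release and watch the pods until they reach the Running state.
helm status rag -n rag
kubectl get pods -n rag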
Note
Refer to NIM Model Profile Configuration to set NIM LLM profile according to the GPU type and count. Set the profile explicitly to avoid any errors with NIM LLM pod deployment.
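The following is a minimal sketch of passing a profile override at install time. It assumes the chart forwards environment variables to the NIM LLM container under a nim-llm.env values key; the actual values path, and the profile ID that matches your GPU type and count, are documented in NIM Model Profile Configuration.
# Sketch only: confirm the exact values key and profile ID in NIM Model Profile Configuration.
helm upgrade --install rag -n rag nvidia-blueprint-rag/ \
  --set imagePullSecret.password=$NGC_API_KEY \
  --set ngcApiSecret.password=$NGC_API_KEY \
  --set "nim-llm.env[0].name=NIM_MODEL_PROFILE" \
  --set "nim-llm.env[0].value=<profile-id>"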
Follow the remaining instructions in Deploy on Kubernetes with Helm.