Setup

Docker

Prerequisites for running the container

  • nvidia-docker2 needs to be installed

  • NVIDIA Pascal GPU or newer

  • CUDA 11.0+

  • NVIDIA driver 450.80.02+

The single container can be run as a server, accessed through Jupyter Notebook to work with the SDK, or entered directly through a bash shell to use cuOpt. The following sections describe how to run the container for each of these options.

How to access the container

The container is hosted on NGC. Even though it is listed in the public catalog, signing in is still required to access it, so users must register with NGC before they can pull cuOpt.

Refer to the NGC documentation for more information about NGC and API keys.

Log in to the NGC container registry.

sudo docker login nvcr.io

When prompted for a user name, enter the following text:

$oauthtoken

When prompted for a password, enter the NGC API key, as shown in the following example.

The API key must be generated and saved securely; it grants access to NGC. For more information, refer to <https://docs.nvidia.com/ngc/ngc-overview/index.html#generating-api-key>.

Username: $oauthtoken
Password: MY-API-KEY

Run the following command to download the container from the NGC registry:

sudo docker pull nvcr.io/j9mrpofbmtxd/cuopt_ea_service/cuopt:<tag>
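
To confirm the pull succeeded, list the image locally:

sudo docker images nvcr.io/j9mrpofbmtxd/cuopt_ea_service/cuopt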

NVIDIA cuOpt Server

This starts cuOpt as a microservice listening on port 5000; other services can send requests to this endpoint.

sudo docker run --network=host -it --gpus all --rm nvcr.io/j9mrpofbmtxd/cuopt_ea_service/cuopt:<tag>
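
Once the container is running, you can confirm from the host that the server is listening on port 5000. This probe assumes no particular endpoint path; any HTTP status code indicates the service is up:

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5000/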

NVIDIA cuOpt through Jupyter Notebook

This runs Jupyter Notebook in the container, which can be accessed locally to work with cuOpt or try out some examples.

Example notebooks are available in /home/cuopt_user/notebooks.

sudo docker run --network=host -it --gpus all --rm nvcr.io/j9mrpofbmtxd/cuopt_ea_service/cuopt:<tag> jupyter-notebook --notebook-dir /home/cuopt_user/notebooks

A local directory can be mounted to work on examples or projects:

sudo docker run --network=host -it --gpus all -v $(pwd)/notebooks/:/notebooks --user 1000:1000 --rm nvcr.io/j9mrpofbmtxd/cuopt_ea_service/cuopt:<tag> jupyter-notebook --notebook-dir /notebooks
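
If the local notebooks directory does not already exist, create it and give ownership to UID 1000 so that the container user specified by --user 1000:1000 can write to it; adjust the UID/GID for your environment:

mkdir -p notebooks
sudo chown 1000:1000 notebooks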

NVIDIA cuOpt Python

This runs the container and opens a bash shell.

sudo docker run --network=host -it --gpus all --rm nvcr.io/j9mrpofbmtxd/cuopt_ea_service/cuopt:<tag> /bin/bash
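
From the shell, you can check that the GPU is visible inside the container and that the cuOpt Python package imports; the package name cuopt is an assumption based on the product's Python API:

nvidia-smi
python3 -c "import cuopt"   # assumes the package is importable as cuopt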

Helm Chart

The cuOpt container can be deployed using the Helm chart provided through NGC. A Kubernetes cluster is required for this deployment; NGC has instructions on creating a simple cluster locally for testing.

Create Namespace for NVIDIA cuOpt Server

Create a namespace, and an environment variable for it, to logically separate NVIDIA cuOpt deployments from other projects in the Kubernetes cluster:

kubectl create namespace <some name>
export NAMESPACE="<some name>"
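
Verify that the namespace exists:

kubectl get namespace $NAMESPACE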

Fetch Helm Chart

helm fetch https://helm.ngc.nvidia.com/nvidia/cuopt/charts/cuopt-<tag> --username='$oauthtoken' --password=<YOUR API KEY>
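
helm fetch saves the chart as a local archive, typically named cuopt-<tag>.tgz; extracting it produces the cuopt directory that the install commands below reference:

tar -xf cuopt-<tag>.tgz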

Run cuOpt as a Server

  • On ClusterIP

helm install --namespace $NAMESPACE --set ngc.apiKey=<YOUR API KEY> nvidia-cuopt-chart cuopt --values cuopt/values.yaml

It might take some time to download the container; while it downloads, the pod status shows ContainerCreating, and it changes to Running once the pod is ready. Use the following command to verify:

kubectl -n $NAMESPACE get all
NAME                                          READY   STATUS    RESTARTS   AGE
pod/cuopt-cuopt-deployment-595656b9d6-dbqcb   1/1     Running   0          21s

NAME                                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/cuopt-cuopt-deployment-cuopt-service   ClusterIP   X.X.X.X          <none>        5000/TCP,8888/TCP   21s

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cuopt-cuopt-deployment   1/1     1            1           21s

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/cuopt-cuopt-deployment-595656b9d6   1         1         1       21s
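
A ClusterIP service is reachable only from inside the cluster; one way to test it from the local machine is kubectl port-forward, using the service name from the sample output above:

kubectl -n $NAMESPACE port-forward service/cuopt-cuopt-deployment-cuopt-service 5000:5000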

  • On nodePort

    Enable the node port using the parameter enable_nodeport. The default node port for the server is 30000 and can be changed using the parameter node_port_server.

helm install --namespace $NAMESPACE --set ngc.apiKey=<YOUR API KEY> nvidia-cuopt-chart cuopt --set enable_nodeport=true --set node_port_server=30011 --values cuopt/values.yaml
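
With nodePort enabled, the server is exposed on the chosen port of every cluster node. One way to probe it from outside the cluster (the probe assumes no particular endpoint path; any HTTP status code indicates the service is up):

kubectl get nodes -o wide   # note a node's INTERNAL-IP
curl -s -o /dev/null -w "%{http_code}\n" http://<node-ip>:30011/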

Note: The Jupyter Notebook server can also be run alongside the cuOpt server with the option enable_notebook_server.

Run Jupyter Notebook

Note: values_notebook.yaml is used to configure the notebook-related settings.

  • On ClusterIP

helm install --namespace $NAMESPACE --set ngc.apiKey=<YOUR API KEY> nvidia-cuopt-chart cuopt --values cuopt/values_notebook.yaml
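
The notebook behind a ClusterIP service can likewise be reached locally through port forwarding; port 8888 and the service name follow the sample service output shown earlier:

kubectl -n $NAMESPACE port-forward service/cuopt-cuopt-deployment-cuopt-service 8888:8888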

  • On nodePort

    Enable the node port using the parameter enable_nodeport. The default node port for the notebook is 30001 and can be changed using the parameter node_port_notebook.

helm install --namespace $NAMESPACE --set ngc.apiKey=<YOUR API KEY> nvidia-cuopt-chart cuopt --set enable_nodeport=true --set node_port_notebook=30021 --values cuopt/values_notebook.yaml
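
The notebook is then reachable in a browser at http://<node-ip>:30021. A node address can be found with:

kubectl get nodes -o wide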

Note: The cuOpt server can also be run alongside the notebook with the option enable_server.

Uninstalling NVIDIA cuOpt Server

helm uninstall -n $NAMESPACE nvidia-cuopt-chart
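
To confirm the uninstall completed, check that no cuOpt resources remain in the namespace:

kubectl -n $NAMESPACE get all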

For examples and additional information, refer to the documentation.