.. _setup:

~~~~~~
Setup
~~~~~~

Docker
======

Prerequisites for running the container
---------------------------------------

* `nvidia-docker2 `_ needs to be installed
* NVIDIA Pascal GPU or newer
* CUDA 11.0+
* NVIDIA driver 450.80.02+

A single container can be spun up as a server or accessed through a Jupyter
notebook to work on the SDK. Users can also open a bash shell directly in the
container and use cuOpt from there. The following sections describe how to run
the container to access each of these options.

.. note:: Pascal support will be dropped in future releases.

Access the Container
--------------------

The container is hosted on `NGC `_, and NGC sign-in is required to access it.
Please refer to the documentation for more information about
`NGC and API keys `_.

Log in to the NGC container registry.

.. code-block:: bash

   sudo docker login nvcr.io

When prompted for a username, enter the following text:

.. code-block:: bash

   $oauthtoken

When prompted for a password, enter the NGC API key, as shown in the following
example. If you have not generated an API key, you can generate one by going to
the Setup option in your profile and choosing Get API Key. Store this key, or
generate a new one next time. More information can be found `here `_.

.. code-block:: bash
   :linenos:

   Username: $oauthtoken
   Password:

Select the container version from the NGC registry page and copy the container
image path. In a command prompt, run the following command to pull the Docker
image.

.. code-block:: bash

   sudo docker pull <image>

Ensure the pull completes successfully before proceeding to the next step.

Run cuOpt as a Microservice
---------------------------

This starts a microservice on port 5000, to which other services can send
requests.

If you have Docker 19.03 or later, a typical command to launch the container is:

.. code-block:: bash

   docker run --network=host -it --gpus all --rm <image>

* If you are running on WSL, you need explicit port mapping:
  .. code-block:: bash

     docker run -it -p 8000:5000 --gpus all --rm <image>

If you have Docker 19.02 or earlier, a typical command to launch the container is:

.. code-block:: bash

   nvidia-docker run -it --gpus all --rm --network=host <image>

* If you are running on WSL, you need explicit port mapping:

  .. code-block:: bash

     nvidia-docker run -it -p 8000:5000 --gpus all --rm <image>

Run cuOpt as a Python SDK
-------------------------

To do this, run the container and open a bash shell.

If you have Docker 19.03 or later, a typical command to launch the container is:

.. code-block:: bash

   docker run -it --gpus all --rm --network=host <image> /bin/bash

If you have Docker 19.02 or earlier, a typical command to launch the container is:

.. code-block:: bash

   nvidia-docker run -it --gpus all --rm --network=host <image> /bin/bash

Access Sample Jupyter Notebooks
-------------------------------

Sample notebooks are available to try in the container itself, in the
/home/cuopt_user/notebooks directory. They can be accessed as follows:

.. code-block:: bash

   docker run --gpus all -it --rm --network=host <image> jupyter-notebook --notebook-dir /home/cuopt_user/notebooks

Helm Chart
==========

The cuOpt container can be deployed using `Helm charts `_ provided through NGC.
Users need a Kubernetes cluster to deploy this and can refer to NGC's
instructions on creating a simple cluster locally for testing.

Create Namespace for NVIDIA cuOpt Server
----------------------------------------

Create a namespace and an environment variable for the namespace, to logically
separate NVIDIA cuOpt deployments from other projects in the Kubernetes
cluster, using the following commands:

.. code-block:: bash
   :linenos:

   kubectl create namespace <namespace_name>
   export NAMESPACE="<namespace_name>"

Fetch Helm Chart
----------------

.. code-block:: bash

   helm fetch https://helm.ngc.nvidia.com/nvidia/cuopt/charts/cuopt-<version> --username='$oauthtoken' --password=<ngc_api_key>

Run cuOpt as a Server
---------------------

- On ClusterIP
  .. code-block:: bash

     helm install --namespace $NAMESPACE --set ngc.apiKey=<ngc_api_key> nvidia-cuopt-chart cuopt --values cuopt/values.yaml

It might take some time to download the container; while it does, the status is
shown as **ContainerCreating**, after which it becomes **Running**. Use the
following commands to verify:

.. code-block:: bash
   :linenos:

   kubectl -n $NAMESPACE get all

   NAME                                          READY   STATUS    RESTARTS   AGE
   pod/cuopt-cuopt-deployment-595656b9d6-dbqcb   1/1     Running   0          21s

   NAME                                           TYPE        CLUSTER-IP   PORT(S)             AGE
   service/cuopt-cuopt-deployment-cuopt-service   ClusterIP   X.X.X.X      5000/TCP,8888/TCP   21s

   NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
   deployment.apps/cuopt-cuopt-deployment   1/1     1            1           21s

   NAME                                                DESIRED   CURRENT   READY   AGE
   replicaset.apps/cuopt-cuopt-deployment-595656b9d6   1         1         1       21s

- On nodePort

  Enable the node port using the param `enable_nodeport`. The default node port
  for the server is `30000` and can be changed to another port using the param
  `node_port_server`.

  .. code-block:: bash

     helm install --namespace $NAMESPACE --set ngc.apiKey=<ngc_api_key> nvidia-cuopt-chart cuopt --set enable_nodeport=true --set node_port_server=30011 --values cuopt/values.yaml

.. note:: The notebook can also be run alongside the cuOpt server with the option `enable_notebook_server`.

Run Jupyter Notebook
--------------------

.. note:: Use values_notebook.yaml for the notebook-related setup.

- On ClusterIP

  .. code-block:: bash

     helm install --namespace $NAMESPACE --set ngc.apiKey=<ngc_api_key> nvidia-cuopt-chart cuopt --values cuopt/values_notebook.yaml

- On nodePort

  Enable the node port using the param `enable_nodeport`. The default node port
  for the notebook is `30001` and can be changed to another port using the
  param `node_port_notebook`.

  .. code-block:: bash

     helm install --namespace $NAMESPACE --set ngc.apiKey=<ngc_api_key> nvidia-cuopt-chart cuopt --set enable_nodeport=true --set node_port_notebook=30021 --values cuopt/values_notebook.yaml

.. note:: The cuOpt server can also run alongside the notebook with the option `enable_server`.
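For quick testing against a ClusterIP-only deployment, the service ports can be
forwarded to the local machine with `kubectl port-forward`. A minimal sketch,
assuming the service name shown in the sample ``kubectl get all`` output above
(adjust it if your release name differs); the final command is echoed so the
sketch can run without a live cluster — drop the ``echo`` to execute it:

.. code-block:: bash

   # Service name as shown in the sample output above; adjust to your release
   SERVICE="service/cuopt-cuopt-deployment-cuopt-service"
   SERVER_PORT=5000      # cuOpt server port
   NOTEBOOK_PORT=8888    # Jupyter notebook port, when the notebook is enabled

   # Echoed so this sketch runs without a cluster; remove `echo` to execute
   echo kubectl -n "$NAMESPACE" port-forward "$SERVICE" \
        "$SERVER_PORT:$SERVER_PORT" "$NOTEBOOK_PORT:$NOTEBOOK_PORT"

With the forward in place, requests can be sent to localhost on the forwarded
ports just as with the ``--network=host`` Docker setup.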
Uninstalling NVIDIA cuOpt Server
--------------------------------

.. code-block:: bash

   helm uninstall -n $NAMESPACE nvidia-cuopt-chart

For examples and additional information, refer to the documentation.
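The Docker launch commands earlier on this page differ only by launcher:
Docker 19.03 and later support ``--gpus`` natively, while older versions use
``nvidia-docker``. A small helper can pick the right one from the installed
Docker version — a sketch, where ``pick_launcher`` is an illustrative helper
and ``<image>`` is a placeholder for the image path copied from NGC:

.. code-block:: bash

   #!/bin/sh
   # Pick the container launcher from a Docker server version string:
   # Docker 19.03+ supports --gpus natively; older versions use nvidia-docker.
   pick_launcher() {
       major=${1%%.*}          # e.g. "20" from "20.10.7"
       rest=${1#*.}
       minor=${rest%%.*}       # e.g. "10"
       if [ "$major" -gt 19 ] || { [ "$major" -eq 19 ] && [ "$minor" -ge 3 ]; }; then
           echo "docker"
       else
           echo "nvidia-docker"
       fi
   }

   # <image> is a placeholder; substitute the image path copied from NGC
   IMAGE="<image>"
   VERSION=$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo 0.0)
   LAUNCHER=$(pick_launcher "$VERSION")
   echo "$LAUNCHER run -it --gpus all --rm --network=host $IMAGE"

The version check mirrors the 19.03 cut-off used in the instructions above; the
composed command is only echoed so that the choice can be inspected first.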