System Requirements#
Self-Hosted Container#
nvidia-docker2 must be installed.
GPU
VRP: Ampere (A100) or Hopper (H100) architecture.
LP/MIP: Hopper (H100 SXM) architecture.
Note
On Windows, the container can be run only through WSL2.
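A quick way to confirm that the host meets the GPU requirement is to query the driver before launching the container. The following is a minimal sketch, assuming `nvidia-smi` is available on the PATH; the supported-model list is illustrative, not an official compatibility check.

```python
# Illustrative pre-flight check: list GPUs reported by the driver and warn
# if none match the architectures named above. Assumes nvidia-smi is on PATH.
import subprocess

SUPPORTED = ("A100", "H100")  # Ampere (A100) for VRP; Hopper (H100 SXM) for LP/MIP


def detected_gpus():
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]


if __name__ == "__main__":
    try:
        gpus = detected_gpus()
    except (FileNotFoundError, subprocess.CalledProcessError):
        gpus = []
    print("Detected GPUs:", gpus or "none")
    if not any(model in gpu for gpu in gpus for model in SUPPORTED):
        print("Warning: no A100/H100 detected; this host may not meet the requirements.")
```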
Self-hosted Container on Kubernetes#
cuOpt can be deployed using a Helm chart on any Kubernetes cluster that has properly configured compute nodes equipped with supported NVIDIA GPUs. The system requirements for the compute nodes are the same as listed above. The cluster may be a single-node, “all-in-one” cluster, or it may be multi-node.
One option for building a single-node cluster is to use NVIDIA’s Cloud Native Stack, available on GitHub here.
Check your CSP’s documentation for information on Kubernetes cluster deployments with NVIDIA GPUs.
Additional information from NVIDIA on Kubernetes can be found here.
Please see the self-hosted server overview for instructions on using the cuOpt Helm chart.
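Before installing the Helm chart, it can help to verify that the compute nodes actually expose their GPUs to Kubernetes. The sketch below is illustrative only; it assumes the `kubernetes` Python package and a working kubeconfig, and simply lists nodes that advertise the `nvidia.com/gpu` resource registered by the NVIDIA device plugin.

```python
# Illustrative check: list cluster nodes that advertise allocatable NVIDIA GPUs.
# Assumes the `kubernetes` package is installed and a kubeconfig is available.
from kubernetes import client, config


def gpu_nodes():
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    result = {}
    for node in v1.list_node().items:
        gpus = (node.status.allocatable or {}).get("nvidia.com/gpu", "0")
        if gpus != "0":
            result[node.metadata.name] = gpus
    return result


if __name__ == "__main__":
    nodes = gpu_nodes()
    if nodes:
        for name, count in nodes.items():
            print(f"{name}: {count} allocatable GPU(s)")
    else:
        print("No nodes expose nvidia.com/gpu; check the device plugin / GPU operator setup.")
```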
Thin-client for Self-Hosted#
OS - Ubuntu
CPU - x86
Python - 3.10.x
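These requirements can be confirmed locally from Python itself. The snippet below is a minimal illustrative sketch, not part of the cuOpt client.

```python
# Illustrative pre-flight check against the thin-client requirements above:
# Ubuntu (Linux), x86 CPU, Python 3.10.x.
import platform
import sys


def check_environment():
    issues = []
    if sys.version_info[:2] != (3, 10):
        issues.append(f"Python 3.10.x required, found {platform.python_version()}")
    if platform.machine() not in ("x86_64", "AMD64"):
        issues.append(f"x86 CPU required, found {platform.machine()}")
    if platform.system() != "Linux":
        issues.append(f"Ubuntu (Linux) required, found {platform.system()}")
    return issues


if __name__ == "__main__":
    problems = check_environment()
    print("Environment OK" if not problems else "\n".join(problems))
```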
Managed Service#
NVIDIA provides access to the cuOpt service; users only need to meet the system requirements for the thin client.
Thin-client for Managed Service#
OS - Ubuntu
CPU - x86
Python - 3.10.x