System Requirements#

Self-Hosted Container#

  • Multiple GPUs are supported. cuOpt uses one GPU per cuOpt solver process.

  • CPU - x86-64 or ARM64; >= 8 cores (Recommended)

  • Memory >= 16 GB (Recommended)

  • Minimum Storage: 20 GB (8 GB container size)

  • CUDA - 12.6

  • Compute Capability >= 9.x

  • Minimum NVIDIA Driver Version: 525.60.04

  • CUDA installation guides for Linux and for Windows

Note

On Windows, the container can be run only through WSL2.
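Before pulling the container, it can be useful to confirm that the installed NVIDIA driver meets the stated minimum (525.60.04). The sketch below is not part of cuOpt; the `nvidia-smi` query is an assumption about the host, while the version comparison itself is plain Python.

```python
# Sketch: check whether the local NVIDIA driver meets cuOpt's stated
# minimum (525.60.04). The nvidia-smi query assumes the tool is on PATH;
# the comparison logic is ordinary tuple comparison.
import subprocess

MIN_DRIVER = (525, 60, 4)

def parse_version(text):
    """Convert a dotted version string like '535.104.05' to a tuple of ints."""
    return tuple(int(part) for part in text.strip().split("."))

def meets_minimum(version_text, minimum=MIN_DRIVER):
    """True when the driver version is at or above the cuOpt minimum."""
    return parse_version(version_text) >= minimum

def installed_driver_version():
    """Query the driver version via nvidia-smi (assumes nvidia-smi is installed)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"]
    )
    return out.decode().splitlines()[0]
```

On a machine with the driver installed, `meets_minimum(installed_driver_version())` gives a quick yes/no answer.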

Self-hosted Container on Kubernetes#

cuOpt can be deployed using a Helm chart on any Kubernetes cluster with properly configured compute nodes equipped with supported NVIDIA GPUs. The system requirements for the compute nodes are the same as those listed above. The cluster may be a single-node, “all-in-one” cluster, or it may be multi-node.

One option for building a single-node cluster is to use NVIDIA’s Cloud Native Stack, available on GitHub here.

Check your CSP’s documentation for information on Kubernetes cluster deployments with NVIDIA GPUs.

Additional information from NVIDIA on Kubernetes can be found here.

Please see the self-hosted server overview for instructions on using the cuOpt Helm chart.
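Whatever chart you use, the solver pod must request a GPU so the scheduler places it on a GPU node. The fragment below is only an illustration of that idea; the key names are assumptions, not the cuOpt chart's actual schema, so consult the chart's own values.yaml.

```yaml
# Illustrative Helm values override; key names are assumptions, not the
# cuOpt chart's actual schema. Check the chart's values.yaml.
resources:
  limits:
    nvidia.com/gpu: 1          # cuOpt uses one GPU per solver process
imagePullSecrets:
  - name: ngc-registry-secret  # hypothetical pull secret for the NGC registry
```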

Thin-client for Self-Hosted#

  • OS - Ubuntu

  • CPU - x86

  • Python - 3.10.x
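Since the thin client lists a specific Python line (3.10.x), a quick interpreter check before installation can save a failed install. This is a generic sketch, not part of the client itself.

```python
# Minimal sketch: confirm the running interpreter is on the 3.10.x line
# that the thin client lists as supported.
import sys

def supported_python(version=None, required=(3, 10)):
    """True when (major, minor) matches the required line, e.g. 3.10.x."""
    if version is None:
        version = sys.version_info
    return (version[0], version[1]) == required
```

For example, `supported_python()` returns True on any 3.10.x interpreter and False otherwise.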

Managed Service#

  • NVIDIA provides access to the cuOpt service; users only need to meet the system requirements of the thin client.

Thin-client for Managed Service#

  • OS - Ubuntu

  • CPU - x86

  • Python - 3.10.x