NVIDIA cuQuantum Appliance

The NVIDIA cuQuantum Appliance is a highly performant multi-GPU, multi-node solution for quantum circuit simulation. It contains NVIDIA’s cuStateVec and cuTensorNet libraries, which optimize state vector and tensor network simulation, respectively. The cuTensorNet library functionality is accessible through Python for tensor network operations. With the cuStateVec libraries, NVIDIA provides the following simulators:

- IBM’s Qiskit Aer frontend via cusvaer, NVIDIA’s distributed state vector backend solver.
- An optimized multi-GPU Google Cirq frontend via qsim, Google’s state vector simulator.


Using NVIDIA’s cuQuantum Appliance NGC Container requires the host system to have the following installed:

- Docker Engine
- NVIDIA Container Toolkit
For supported versions, see the container release notes. No other installation, compilation, or dependency management is required.

Running the NVIDIA cuQuantum Appliance with Cirq or Qiskit

...$ docker pull nvcr.io/nvidia/cuquantum-appliance:23.10  # pull the image
...$ docker run --gpus all -it --rm nvcr.io/nvidia/cuquantum-appliance:23.10  # launch the container interactively
...$ docker run --gpus '"device=0,3"' -it --rm nvcr.io/nvidia/cuquantum-appliance:23.10  # ... interactive launch, but enumerate only GPUs 0,3

The examples are located under /home/cuquantum/examples. Confirm this with the following command:

...$ docker run --gpus all --rm nvcr.io/nvidia/cuquantum-appliance:23.10 ls -la /home/cuquantum/examples

===                 NVIDIA CUQUANTUM APPLIANCE v23.10                  ===
=== COPYRIGHT © NVIDIA CORPORATION & AFFILIATES.  All rights reserved. ===

INFO: nvidia devices detected
INFO: gpu functionality will be available

total 36
drwxr-xr-x 2 cuquantum cuquantum 4096 Nov 10 01:52 .
drwxr-x--- 1 cuquantum cuquantum 4096 Nov 10 01:54 ..
-rw-r--r-- 1 cuquantum cuquantum 2150 Nov 10 01:52 ghz.py
-rw-r--r-- 1 cuquantum cuquantum 7436 Nov 10 01:52 hidden_shift.py
-rw-r--r-- 1 cuquantum cuquantum 1396 Nov 10 01:52 qiskit_ghz.py
-rw-r--r-- 1 cuquantum cuquantum 8364 Nov 10 01:52 simon.py

Running the examples is straightforward:

#### without an interactive session:
...$ docker run --gpus all --rm nvcr.io/nvidia/cuquantum-appliance:23.10 python /home/cuquantum/examples/{example_name}.py
#### with an interactive session:
...$ docker run --gpus all --rm -it nvcr.io/nvidia/cuquantum-appliance:23.10
(cuquantum-23.10) cuquantum@...:~$ cd examples && python {example_name}.py

The examples all accept runtime arguments. To list them, pass --help to the python command for each script. For two of the examples, ghz.py and qiskit_ghz.py, the help messages are as follows:

(cuquantum-23.10) cuquantum@...:~/examples$ python ghz.py --help
usage: ghz.py [-h] [--nqubits NQUBITS] [--nsamples NSAMPLES] [--ngpus NGPUS]

GHZ circuit

  -h, --help           show this help message and exit
  --nqubits NQUBITS    the number of qubits in the circuit
  --nsamples NSAMPLES  the number of samples to take
  --ngpus NGPUS        the number of GPUs to use
(cuquantum-23.10) cuquantum@...:~/examples$ python qiskit_ghz.py --help
usage: qiskit_ghz.py [-h] [--nbits NBITS] [--precision {single,double}] [--disable-cusvaer]

Qiskit ghz.

  -h, --help            show this help message and exit
  --nbits NBITS         the number of qubits
  --precision {single,double}
                        numerical precision
  --disable-cusvaer     disable cusvaer
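For reference, the ghz.py flag interface above can be mirrored with a small argparse parser. This is a sketch, not the actual script, and the default values are assumptions:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of a parser matching the ghz.py help text; defaults are assumed."""
    parser = argparse.ArgumentParser(description="GHZ circuit")
    parser.add_argument("--nqubits", type=int, default=3,
                        help="the number of qubits in the circuit")
    parser.add_argument("--nsamples", type=int, default=1,
                        help="the number of samples to take")
    parser.add_argument("--ngpus", type=int, default=1,
                        help="the number of GPUs to use")
    return parser

args = build_parser().parse_args(["--nqubits", "30"])
print(args.nqubits, args.nsamples, args.ngpus)  # → 30 1 1
```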

Importantly, ghz.py implements the GHZ circuit using Cirq as a frontend, and qiskit_ghz.py implements it using Qiskit as a frontend. The cuQuantum Appliance modifies the backends of these frameworks, optimizing them for NVIDIA platforms. Information regarding any alterations is available in the Appliance section of the NVIDIA cuQuantum documentation.

Running cd examples && python ghz.py --nqubits 30 will create and simulate a GHZ circuit running on a single GPU. To run on 4 available GPUs, use ... python ghz.py --nqubits 30 --ngpus 4. The output will look something like this:

(cuquantum-23.10) cuquantum@...:~/examples$ python ghz.py --nqubits 30
q(0),q(1),q(2),q(3),q(4),q(5),q(6),q(7),q(8),q(9),q(10),q(11),q(12),q(13),q(14),q(15),q(16),q(17),q(18),q(19),q(20),q(21),q(22),q(23),q(24),q(25),q(26),q(27),q(28),q(29)=111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111
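What the simulator computes here can be illustrated without any GPU: the GHZ circuit is one Hadamard followed by a CNOT chain, which leaves all amplitude on |00…0⟩ and |11…1⟩, so every sampled bitstring is all zeros or all ones. A plain-NumPy sketch of the state vector (an illustration, not the Appliance’s implementation):

```python
import numpy as np

def apply_h(state, qubit, nqubits):
    """Apply a Hadamard to `qubit` (qubit 0 = most significant bit)."""
    out = np.empty_like(state)
    bit = 1 << (nqubits - 1 - qubit)
    for i in range(len(state)):
        lo, hi = i & ~bit, i | bit
        if i & bit:
            out[i] = (state[lo] - state[hi]) / np.sqrt(2)
        else:
            out[i] = (state[lo] + state[hi]) / np.sqrt(2)
    return out

def apply_cnot(state, control, target, nqubits):
    """CNOT: flip `target` where `control` is 1 (permutes basis amplitudes)."""
    out = state.copy()
    cbit = 1 << (nqubits - 1 - control)
    tbit = 1 << (nqubits - 1 - target)
    for i in range(len(state)):
        if i & cbit:
            out[i] = state[i ^ tbit]
    return out

def ghz_state(nqubits):
    state = np.zeros(2 ** nqubits, dtype=np.complex64)
    state[0] = 1.0                      # start in |00...0>
    state = apply_h(state, 0, nqubits)  # superpose qubit 0
    for k in range(nqubits - 1):        # entangle the chain
        state = apply_cnot(state, k, k + 1, nqubits)
    return state

psi = ghz_state(4)
# Only |0000> and |1111> carry amplitude, each with probability 1/2.
print(np.round(np.abs(psi) ** 2, 3))
```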

Likewise, cd examples && python qiskit_ghz.py --nbits 30 will create and simulate a GHZ circuit. This script will assign one GPU per process. To run on 4 GPUs, you need to explicitly enumerate the GPUs you want to use and execute with MPI:

#### interactively:
...$ docker run --gpus '"device=0,1,2,3"' -it --rm nvcr.io/nvidia/cuquantum-appliance:23.10
(cuquantum-23.10) cuquantum@...:~$ cd examples && mpirun -np 4 python qiskit_ghz.py --nbits 30
#### noninteractively:
...$ docker run --gpus '"device=0,1,2,3"' --rm nvcr.io/nvidia/cuquantum-appliance:23.10 mpirun -np 4 python /home/cuquantum/examples/qiskit_ghz.py --nbits 30
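The one-GPU-per-process assignment can be pictured as a simple rank-to-device mapping. This is a hypothetical illustration of the idea, not cusvaer’s actual device-selection logic; the helper name and the wrap-around policy are assumptions:

```python
def assign_device(rank: int, visible_devices: list) -> int:
    """Map MPI rank k to the k-th visible GPU, wrapping if ranks exceed GPUs."""
    return visible_devices[rank % len(visible_devices)]

# With --gpus '"device=0,1,2,3"' and mpirun -np 4, each rank gets its own GPU:
for rank in range(4):
    print(rank, "->", assign_device(rank, [0, 1, 2, 3]))
```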

The output from qiskit_ghz.py looks like this:

(cuquantum-23.10) cuquantum@...:~$ cd examples && mpirun -np 4 python qiskit_ghz.py --nbits 30
precision: single
{'000000000000000000000000000000': 520, '111111111111111111111111111111': 504}

NOTE: Qiskit may initialize CUDA contexts for all available GPUs per rank.
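The roughly even split between the two bitstrings above is just sampling from the GHZ state’s 50/50 measurement distribution. A quick NumPy check of what 1024 shots look like, independent of Qiskit and of the Appliance (the seed and shot count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1234)
shots = 1024
# A GHZ measurement yields all-zeros or all-ones, each with probability 1/2.
ones = int(rng.binomial(shots, 0.5))
counts = {"0" * 30: shots - ones, "1" * 30: ones}
print(counts)  # two counts near 512, summing to 1024
```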

More information, examples, and utilities are available in the NVIDIA cuQuantum repository on GitHub. Notably, you can find useful guides for getting started with multi-node multi-GPU simulation using the benchmark tools.

Software in the container

Default user environment

The default user in the container is cuquantum with user ID 1000. The cuquantum user is a member of the sudo group. By default, executing commands with sudo as the cuquantum user requires a password, which can be found in the file /home/cuquantum/.README, formatted as {user}:{password}.
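For scripting, a {user}:{password} line in that format can be split at the first colon. A minimal sketch; the helper name is hypothetical, and the sample credentials are placeholders:

```python
def parse_credentials(line: str):
    """Split a '{user}:{password}' line at the first colon."""
    user, _, password = line.strip().partition(":")
    return user, password

# Inside the container, one would read the real file (path from the text above):
#   with open("/home/cuquantum/.README") as f:
#       user, password = parse_credentials(f.read())
user, password = parse_credentials("cuquantum:example-password")
print(user)  # → cuquantum
```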

To install new packages, we recommend using conda install -c conda-forge ... in the default environment (cuquantum-23.10). You can clone this environment under a new name with conda create --name {new_name} --clone cuquantum-23.10, which is useful for isolating your changes from the default environment.

CUDA is available under /usr/local/cuda, a symbolic link managed by update-alternatives. To query configuration information, use update-alternatives --config cuda.


We provide Open MPI v4.1 in the container at /usr/local/openmpi. The default mpirun runtime configuration can be queried with ompi_info --all --parseable. When using the multi-GPU features of the cuQuantum Appliance, a valid and compatible mpirun runtime configuration must be exposed to, and accessible from, the container runtime.

If you observe warnings or errors as follows when calling mpirun in the container:

    [LOG_CAT_ML] You must specify a valid HCA device by setting:
    -x HCOLL_MAIN_IB=<dev_name:port> or -x UCX_NET_DEVICES=<dev_name:port>.
    If no device was specified for HCOLL (or the calling library), automatic device detection will be run.
    In case of unfounded HCA device please contact your system administrator.
    ... Error: coll_hcoll_module.c:310 - mca_coll_hcoll_comm_query() Hcol library init failed

In an interactive session of the container, set Modular Component Architecture (MCA) parameters to disable cross-memory attach (CMA) and hierarchical collectives (HCOLL):

mpirun -np ${num_gpus} \
    --mca pml ucx \
    -x UCX_TLS=^cma \
    --mca coll_hcoll_enable 0 \
    -x OMPI_MCA_coll_hcoll_enable=0 \
    python {example_name}.py

If the warnings and errors are no longer emitted, consult your system administrator to confirm the hardware and software configuration and to ensure optimal usage of the cuQuantum Appliance.

Important change notices

version == 23.10

Before v23.10, the operating system in the container was Ubuntu 20.04. In v23.10, we added support for Ubuntu 22.04 without dropping support for Ubuntu 20.04. To avoid breaking changes implied by altering the image tag, nvcr.io/nvidia/cuquantum-appliance:23.10 now points to nvcr.io/nvidia/cuquantum-appliance:23.10-devel-ubuntu22.04.

This means that for a given machine architecture, march='arm64' or march='x86_64', pulling from cuquantum-appliance:23.10-${march} is equivalent to pulling from cuquantum-appliance:23.10-devel-ubuntu22.04-${march}. The following two docker pull commands will download the same image:

docker pull nvcr.io/nvidia/cuquantum-appliance:23.10
docker pull nvcr.io/nvidia/cuquantum-appliance:23.10-devel-ubuntu22.04

Security scanning notices

Version 23.10 security scanning results summary

This section summarizes potential vulnerabilities rated high severity under the CVSS v3.1 standard. To view security scanning results for the latest container image, refer to the security scanning tab near the top of this page.

- RecursionError in email.utils.parseaddr while calling a Python object
- Remote code execution with a malicious URL using the --extra-index-url option in pip
Appliance version end of life summary

- EOL 24.03
- No new features or security remediation

Note: for a version formatted as YY.*, the notice applies to all versions with the same year.


The NVIDIA cuQuantum Appliance documentation is hosted here.
A guide for using Qiskit can be found here.
A guide and tutorials for using Cirq can be found here.
A guide to getting started with qsimcirq can be found here.

License Agreement

The image is governed by the NVIDIA End User License Agreement. By downloading the NVIDIA cuQuantum Appliance, you accept the terms and conditions of this license. The cuQuantum Appliance End User License Agreement can be viewed here. Since the image includes components licensed under open-source licenses, the source code for these components can be found here.

Citing cuQuantum

H. Bayraktar et al., “cuQuantum SDK: A High-Performance Library for Accelerating Quantum Science,” 2023 IEEE International Conference on Quantum Computing and Engineering (QCE), Bellevue, WA, USA, 2023, pp. 1050-1061, doi: 10.1109/QCE57702.2023.00119