Install cuQuantum Python from conda-forge¶
If you already have a Conda environment set up, the easiest way is to install cuQuantum Python from the conda-forge channel:
conda install -c conda-forge cuquantum-python
The Conda solver will install all required dependencies for you.
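A quick way to verify the installation is to import the package, a minimal sketch (the try/except keeps the snippet runnable even in environments where cuQuantum is absent):

```python
# Sanity check after installation: try to import cuquantum and report the result.
def cuquantum_status():
    try:
        import cuquantum
        return "cuQuantum Python version: " + cuquantum.__version__
    except ImportError:
        return "cuquantum is not installed in this environment"

print(cuquantum_status())
```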
Install cuQuantum Python from PyPI¶
Alternatively, assuming you already have a Python environment set up (it doesn’t matter if it’s a Conda env or not), you can also install cuQuantum Python this way:
pip install cuquantum-python-cu11
The pip solver will also install all dependencies for you (including both the cuTENSOR and cuQuantum wheels).
Users can still install cuQuantum Python using pip install cuquantum-python, which currently points to the cuquantum-python-cu11 wheel but is subject to change in the future. Installing wheels with the -cuXX suffix is encouraged. To manually manage all Python dependencies, append --no-deps to pip install to bypass the pip solver; see below.
Building and installing cuQuantum Python from source¶
The build-time dependencies of the cuQuantum Python package include:
CUDA Toolkit 11.x
Except for CUDA and Python, the rest of the build-time dependencies are handled by the new PEP-517-based build system (see Step 7 below).
To compile and install cuQuantum Python from source, please follow the steps below:
1. Clone the NVIDIA/cuQuantum repository: git clone https://github.com/NVIDIA/cuQuantum
2. Set CUDA_PATH to point to your CUDA installation
3. [optional] Set CUQUANTUM_ROOT to point to your cuQuantum installation
4. [optional] Set CUTENSOR_ROOT to point to your cuTENSOR installation
5. [optional] Make sure cuQuantum and cuTENSOR are visible in your LD_LIBRARY_PATH
6. Switch to the directory containing the Python implementation
7. Build and install:
   - pip install . if you skip Steps 3-5 above
   - pip install -v --no-deps --no-build-isolation . otherwise (advanced)
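Before running the build-and-install step, it can help to double-check the environment variables set in the earlier steps; a minimal stdlib sketch (the variable names follow the steps above):

```python
import os

# Report which build-related environment variables are set.
# CUDA_PATH is required; CUQUANTUM_ROOT and CUTENSOR_ROOT are optional.
def check_build_env():
    return {var: os.environ.get(var)  # None means "not set"
            for var in ("CUDA_PATH", "CUQUANTUM_ROOT", "CUTENSOR_ROOT")}

for var, value in check_build_env().items():
    print(f"{var} = {value if value is not None else '(not set)'}")
```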
For Step 7, if you are building from source for testing/developing purposes you'd likely want to insert a -e flag before the last period (so pip ... . becomes pip ... -e .):
-e: use the “editable” (in-place) mode
-v: enable more verbose output
--no-deps: avoid installing the run-time dependencies
--no-build-isolation: reuse the current Python environment instead of creating a new one for building the package (this avoids installing any build-time dependencies)
As an alternative to setting CUQUANTUM_ROOT, the environment variables CUSTATEVEC_ROOT and CUTENSORNET_ROOT can be set to point to the cuStateVec and the cuTensorNet libraries, respectively. The latter two environment variables take precedence if defined.
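The precedence rule above can be sketched as follows (a hypothetical helper with hypothetical paths; the real build system's logic may differ):

```python
import os

def resolve_root(specific_var, fallback_var="CUQUANTUM_ROOT"):
    # A library-specific variable (e.g. CUSTATEVEC_ROOT or CUTENSORNET_ROOT)
    # takes precedence over the generic CUQUANTUM_ROOT when both are defined.
    specific = os.environ.get(specific_var)
    return specific if specific is not None else os.environ.get(fallback_var)

# Example with hypothetical paths:
os.environ.pop("CUSTATEVEC_ROOT", None)            # start with it unset
os.environ["CUQUANTUM_ROOT"] = "/opt/cuquantum"
print(resolve_root("CUSTATEVEC_ROOT"))             # -> /opt/cuquantum
os.environ["CUSTATEVEC_ROOT"] = "/opt/custatevec"  # now it takes precedence
print(resolve_root("CUSTATEVEC_ROOT"))             # -> /opt/custatevec
```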
Runtime dependencies of the cuQuantum Python package include:
An NVIDIA GPU with compute capability 7.0+
Driver: Linux (450.80.02+)
CUDA Toolkit 11.x
CuPy v9.5.0+ (see installation guide)
PyTorch v1.10+ (optional, see installation guide)
Qiskit v0.24.0+ (optional, see installation guide)
Cirq v0.6.0+ (optional, see installation guide)
mpi4py v3.1.0+ (optional, see installation guide)
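To see which of the optional runtime dependencies are present in your environment, a small stdlib check can help (the import names below are assumptions based on the usual package names):

```python
import importlib.util

# Optional runtime dependencies and their assumed import names.
optional_deps = {"CuPy": "cupy", "PyTorch": "torch", "Qiskit": "qiskit",
                 "Cirq": "cirq", "mpi4py": "mpi4py"}

def available(module_name):
    # find_spec returns None when the module cannot be imported.
    return importlib.util.find_spec(module_name) is not None

for name, module in optional_deps.items():
    print(f"{name}: {'installed' if available(module) else 'not installed'}")
```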
If you install everything from conda-forge, the dependencies are taken care of for you (except for the driver).
If you install the pip wheels, cuTENSOR and cuQuantum are installed for you (but not the CUDA Toolkit or the driver; please make sure the CUDA libraries are discoverable through your LD_LIBRARY_PATH).
If you build cuQuantum Python from source, please make sure the paths to the cuQuantum and cuTENSOR libraries are added to the LD_LIBRARY_PATH environment variable.
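A quick way to confirm that a library directory is on the search path (a sketch; the directory shown is hypothetical and should be adjusted to your actual install location):

```python
import os

def on_ld_library_path(directory):
    # LD_LIBRARY_PATH is a colon-separated list of directories on Linux.
    entries = os.environ.get("LD_LIBRARY_PATH", "").split(":")
    target = os.path.normpath(directory)
    return any(os.path.normpath(e) == target for e in entries if e)

# Hypothetical install location; adjust to where cuTENSOR actually lives.
print(on_ld_library_path("/opt/cutensor/lib"))
```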
If a system has multiple copies of cuTENSOR, one of which is installed in a default system path, the Python runtime could pick it up even though cuQuantum Python is linked to another copy installed elsewhere, potentially causing a version-mismatch error. The proper fix is to remove cuTENSOR from the system paths to ensure the visibility of the proper copy. DO NOT ATTEMPT to use LD_PRELOAD to overwrite it; doing so could cause hard-to-debug behaviors!
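To see which cuTENSOR copy the dynamic loader would pick up, you can query the loader from Python (a diagnostic sketch; on a system without cuTENSOR in a default path this prints None):

```python
import ctypes.util

# find_library consults the same default search locations as the dynamic
# loader, so it can reveal a shadowing copy installed in a system path.
found = ctypes.util.find_library("cutensor")
print("cutensor resolves to:", found)
```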
In certain environments, if PyTorch is installed, import cuquantum could fail (with a segmentation fault). This is currently under investigation; a temporary workaround is to adjust the import order of the two packages.
Samples for demonstrating the usage of both low-level and high-level Python APIs are
available in the
samples directory. The low-level API samples are 1:1 translations of the corresponding
samples written in C. The high-level API samples demonstrate Pythonic usage of the cuTensorNet library.
If pytest is installed, typing
pytest tests at the command prompt in the Python source root directory will
run all tests. Some tests will be skipped if cffi is not installed or if the environment variable CUDA_PATH is not set.
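The skip conditions can be expressed as small predicates; a sketch of the logic (the actual test suite uses pytest's skip markers):

```python
import importlib.util
import os

def missing_cffi():
    # True when cffi cannot be imported, i.e. the cffi-based tests would skip.
    return importlib.util.find_spec("cffi") is None

def missing_cuda_path():
    # True when the CUDA_PATH environment variable is not set.
    return os.environ.get("CUDA_PATH") is None

print("skip cffi tests:", missing_cffi())
print("skip CUDA_PATH tests:", missing_cuda_path())
```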