Overview

This section describes the basic working principle of the cuStateVec library. For a general introduction to quantum circuits, please refer to Introduction to quantum computing.

API synchronization behavior

The cuStateVec APIs are designed for asynchronous execution. Their synchronization behavior follows the description in API synchronization behavior in the CUDA Runtime API. Developers are required to call the appropriate CUDA APIs to synchronize the API calls.

Using CUDA Stream

The execution of most cuStateVec APIs is serialized on the stream attached to the cuStateVec handle created by custatevecCreate(). The initial stream is the default stream. Users can attach a user-created stream to the cuStateVec handle by calling custatevecSetStream(). All types of streams (default, blocking, and non-blocking) are acceptable. API calls are synchronized by appropriate CUDA API calls such as cudaDeviceSynchronize(), cudaStreamSynchronize(), or cudaStreamWaitEvent().
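
A minimal sketch of this workflow is shown below (error checking omitted; the handle and stream names are illustrative):

    #include <cuda_runtime.h>
    #include <custatevec.h>

    custatevecHandle_t handle;
    cudaStream_t stream;

    custatevecCreate(&handle);            // create the cuStateVec handle
    cudaStreamCreate(&stream);            // create a user stream
    custatevecSetStream(handle, stream);  // attach the stream to the handle

    // ... asynchronous cuStateVec API calls are issued on `stream` ...

    cudaStreamSynchronize(stream);        // wait until all work on the stream completes
    custatevecDestroy(handle);
    cudaStreamDestroy(stream);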

There is one exception: the Distributed index bit swap API. The custatevecSVSwapWorkerCreate() API requires a user-created stream that is used exclusively for data transfers, so other cuStateVec API calls execute concurrently on the stream attached to the handle. Also, custatevecSVSwapWorkerExecute() blocks on the stream specified in the call to custatevecSVSwapWorkerCreate() to synchronize with the peer of the data transfers.

Description of state vectors

In the cuStateVec library, the state vector is always given as a device array and its data type is specified by a cudaDataType_t constant. It’s the user’s responsibility to manage memory for the state vector.

This version of the cuStateVec library supports 128-bit complex (complex128) and 64-bit complex (complex64) as data types of the state vector. The size of a state vector is determined by the nIndexBits argument, which corresponds to the number of qubits in a circuit. Therefore, the state vector size is expressed as \(2^{\text{nIndexBits}}\).

The type custatevecIndex_t is provided to express the state vector index; it is a typedef of a 64-bit signed integer. It is also used to express the number of state vector elements.
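
For instance, a complex128 state vector for 3 qubits can be allocated as follows (a sketch; error checking omitted):

    #include <stdint.h>
    #include <cuComplex.h>
    #include <cuda_runtime.h>
    #include <custatevec.h>

    const uint32_t nIndexBits = 3;                            // number of qubits
    const custatevecIndex_t nSvElements = 1LL << nIndexBits;  // 2^nIndexBits elements

    cuDoubleComplex* d_sv;                                    // complex128 (CUDA_C_64F)
    cudaMalloc((void**)&d_sv, nSvElements * sizeof(cuDoubleComplex));
    // ... initialize d_sv (e.g., to the |000> state) and pass it to cuStateVec APIs ...
    cudaFree(d_sv);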

Bit ordering

In the cuStateVec library, the bit ordering of the state vector index is defined in little endian order. The 0-th index bit is the least significant bit (LSB). Most functions accept arguments to specify bit positions as integer arrays. Those bit positions are specified in little endian order. Values in bit positions are in the range \([0, \text{nIndexBits})\).
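
For example, with this little endian convention, the basis state with index bit 2 = 1, index bit 1 = 1, and index bit 0 = 0 corresponds to the state vector index \((110)_2 = 6\). The helper below is purely illustrative (it is not a library API):

    #include <stdint.h>

    // Compute a state vector index from index-bit values given in little endian order:
    // bit position 0 is the least significant bit (LSB).
    int64_t indexOf(const int32_t* bitValues, const int32_t* bitPositions, int nBits)
    {
        int64_t index = 0;
        for (int i = 0; i < nBits; i++)
            index |= (int64_t)bitValues[i] << bitPositions[i];
        return index;
    }

    // Example: values {0, 1, 1} at positions {0, 1, 2} -> index (110)_2 = 6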

The cuStateVec library represents bit strings in either of the following two ways:

  • One 32-bit signed integer array for one bit string:

    Some APIs use a pair of 32-bit signed integer arrays, bitString and bitOrdering, to specify one bit string. The bitString argument specifies the bit string values as an array of 0s and 1s. The bitOrdering argument specifies the bit positions of the bitString array elements in little endian order. Both arrays are allocated on host memory.

    In the following example, “10” is specified as a bit string. Bit string values are mapped to the 2nd and 3rd index bits and can be used to specify a bit mask, \(***\cdots *10*\).

    int32_t bitString[]   = {0, 1};
    int32_t bitOrdering[] = {1, 2};
    

  • One 64-bit signed integer array for multiple bit strings:

    Some APIs take a pair of bitStrings and bitOrdering arguments in which each bit string is represented by a custatevecIndex_t (a 64-bit signed integer), allowing multiple bit strings to be handled with a small memory footprint. The bitOrdering argument is a 32-bit signed integer array that specifies the bit positions of each bit string in the bitStrings argument, in little endian order.

    The following example describes the same bit string as the previous example:

    custatevecIndex_t bitStrings[] = {0b10};
    int32_t bitOrdering[] = {1, 2};
    

    The bitStrings array is allocated on host memory, but some APIs also allow it to be allocated on device memory. For the detailed requirements, please refer to each API's description.

Supported data types

By default, computation is executed at the precision corresponding to the state vector: double precision (FP64) for complex128 and single precision (FP32) for complex64.

The cuStateVec library also provides compute types, allowing computation with reduced precision. Some cuStateVec functions accept a compute type specified via custatevecComputeType_t.

Below is the table of combinations of state vector, matrix, and compute types available in the current version of the cuStateVec library.

    State vector / cudaDataType_t    Matrix / cudaDataType_t      Compute / custatevecComputeType_t
    Complex 128 / CUDA_C_64F         Complex 128 / CUDA_C_64F     FP64 / CUSTATEVEC_COMPUTE_64F
    Complex 64 / CUDA_C_32F          Complex 128 / CUDA_C_64F     FP32 / CUSTATEVEC_COMPUTE_32F
    Complex 64 / CUDA_C_32F          Complex 64 / CUDA_C_32F      FP32 / CUSTATEVEC_COMPUTE_32F

Note

CUSTATEVEC_COMPUTE_TF32 is not available in this version.
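
For example, the second row of the table corresponds to the following combination of constants (shown only to illustrate how the enums map to the table; the API call that consumes them is omitted):

    #include <library_types.h>   // cudaDataType_t
    #include <custatevec.h>      // custatevecComputeType_t

    cudaDataType_t          svDataType     = CUDA_C_32F;             // complex64 state vector
    cudaDataType_t          matrixDataType = CUDA_C_64F;             // complex128 gate matrix
    custatevecComputeType_t computeType    = CUSTATEVEC_COMPUTE_32F; // FP32 computation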

Workspace

The cuStateVec library internally manages temporary device memory for executing functions, which is referred to as the context workspace.

The context workspace is attached to the cuStateVec context and is allocated when the context is created by calling custatevecCreate(). The default size of the context workspace is chosen to cover most typical use cases; it can be obtained by calling custatevecGetDefaultWorkspaceSize().

When the context workspace cannot provide a sufficient amount of temporary memory, or when a device memory chunk is to be shared by two or more functions, users have two options:

  • Users can provide user-managed device memory for the extra workspace. Functions that need the extra workspace have sibling functions suffixed with GetWorkspaceSize(). If these functions return a nonzero value via the extraBufferSizeInBytes argument, users are requested to allocate device memory and supply the pointer to the allocated memory to the corresponding function (a sketch of this pattern follows this list). The extra workspace should be 256-byte aligned, which is automatically satisfied by using cudaMalloc() to allocate device memory. If the size of the extra workspace is not sufficient, CUSTATEVEC_STATUS_INSUFFICIENT_WORKSPACE is returned.

  • Users can also set a device memory handler. When it is set to the cuStateVec library context, the library can directly draw memory from the pool on the user's behalf. In this case, users are not required to allocate device memory explicitly, and a null pointer (zero size) can be specified as the extra workspace (size) in the function. Please refer to custatevecDeviceMemHandler_t and custatevecSetDeviceMemHandler() for details.
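
The sketch below illustrates the first option for custatevecApplyMatrix(); handle, matrix, nIndexBits, and nTargets are assumed to have been set up elsewhere, and error checking is omitted. Please consult the custatevecApplyMatrixGetWorkspaceSize() API reference for the authoritative signature.

    // Query the extra workspace size needed by custatevecApplyMatrix() and
    // allocate device memory only if the returned size is nonzero.
    size_t extraWorkspaceSizeInBytes = 0;
    custatevecApplyMatrixGetWorkspaceSize(
        handle, CUDA_C_64F, nIndexBits, matrix, CUDA_C_64F,
        CUSTATEVEC_MATRIX_LAYOUT_ROW, 0 /* adjoint */,
        nTargets, 0 /* nControls */, CUSTATEVEC_COMPUTE_64F,
        &extraWorkspaceSizeInBytes);

    void* extraWorkspace = NULL;
    if (extraWorkspaceSizeInBytes > 0)
        cudaMalloc(&extraWorkspace, extraWorkspaceSizeInBytes);  // cudaMalloc() satisfies 256-byte alignment

    // ... pass extraWorkspace and extraWorkspaceSizeInBytes to custatevecApplyMatrix() ...

    if (extraWorkspace != NULL)
        cudaFree(extraWorkspace);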

Gate fusion

Gate applications account for a large proportion of the computation cost in quantum circuit simulators. By fusing multiple gates into one larger gate, the state vector needs to be traversed only once for the fused gate, which reduces the number of passes over the state vector and hence the overall memory access required for gate applications.

[Figure: multiple gates fused into a single larger gate]

The cuStateVec API supports such general multi-qubit gate applications. For detailed information, please refer to custatevecApplyMatrix().
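
For instance, a fused two-qubit gate can be applied as a single 4x4 matrix with one call to custatevecApplyMatrix(). The sketch below assumes a complex128 device state vector d_sv with nIndexBits qubits, a row-major 4x4 host matrix named matrix, and no control qubits (error checking and extra workspace handling omitted):

    int32_t targets[] = {0, 1};  // the two qubits acted on by the fused gate

    custatevecApplyMatrix(
        handle, d_sv, CUDA_C_64F, nIndexBits,
        matrix, CUDA_C_64F, CUSTATEVEC_MATRIX_LAYOUT_ROW,
        0 /* adjoint */,
        targets, 2 /* nTargets */,
        NULL, NULL, 0 /* no control qubits */,
        CUSTATEVEC_COMPUTE_64F,
        NULL, 0 /* no extra workspace */);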

Multi-GPU Computation

The memory usage in quantum circuit simulations increases exponentially with the number of qubits. To simulate more qubits, multiple GPUs are required. A typical approach is to divide the qubits into global and local ones.

For instance, a 3-qubit system with 8 state vector elements can be equally distributed to 4 GPUs as described in the following figure.

[Figure: a 3-qubit state vector with 8 elements equally distributed across 4 GPUs]

When an index is assigned to each sub state vector, that index represents the higher-order qubits. We refer to these qubits as global qubits and to the remaining qubits as local qubits. In the example above, there are 2 global qubits and 1 local qubit.

In general, for an \(M\)-qubit system, if each GPU can store \(2^N\) state vector elements (that is, \(N\) local qubits), then \(2^{M-N}\) GPUs (that is, \(M-N\) global qubits) are required to store the entire state vector. The \(k\)-th GPU (\(k = (i_{M-1} i_{M-2} \cdots i_{N})_2\)) stores the state vector elements \(\alpha_{i_{M-1} i_{M-2} \cdots i_{N} i_{N-1} \cdots i_{0}}\) with \(i_p \in \{0, 1\}, 0 \leq p \leq N-1\).

For instance,

  • GPU #0 handles elements from \(\alpha_{0_{M-1} \cdots 0_{N+1} 0_{N} 0_{N-1} \cdots 0_{0}}\) to \(\alpha_{0_{M-1} \cdots 0_{N+1} 0_{N} 1_{N-1} \cdots 1_{0}}\),

  • GPU #1 handles elements from \(\alpha_{0_{M-1} \cdots 0_{N+1} 1_{N} 0_{N-1} \cdots 0_{0}}\) to \(\alpha_{0_{M-1} \cdots 0_{N+1} 1_{N} 1_{N-1} \cdots 1_{0}}\),

  • GPU #2 handles elements from \(\alpha_{0_{M-1} \cdots 1_{N+1} 0_{N} 0_{N-1} \cdots 0_{0}}\) to \(\alpha_{0_{M-1} \cdots 1_{N+1} 0_{N} 1_{N-1} \cdots 1_{0}}\),

  • GPU #3 handles elements from \(\alpha_{0_{M-1} \cdots 1_{N+1} 1_{N} 0_{N-1} \cdots 0_{0}}\) to \(\alpha_{0_{M-1} \cdots 1_{N+1} 1_{N} 1_{N-1} \cdots 1_{0}}\), and so on.

Here, the indices \(i_{M-1}, i_{M-2}, \cdots, i_{N}\) belong to the global qubits, and others belong to the local qubits.
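
In code, this mapping from a global state vector index to a GPU and a local offset is a simple bit split. The helper below is purely illustrative (it is not a library API):

    #include <stdint.h>

    // Split a global index into (GPU id, local offset) for N local qubits.
    // The upper M - N bits (global qubits) select the GPU; the lower N bits
    // (local qubits) index into that GPU's sub state vector.
    void splitIndex(int64_t globalIndex, int nLocalQubits,
                    int* gpuId, int64_t* localOffset)
    {
        *gpuId       = (int)(globalIndex >> nLocalQubits);
        *localOffset = globalIndex & ((1LL << nLocalQubits) - 1);
    }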

cuStateVec provides APIs for multi-GPU qubit measurement, sampling, and qubit reordering. The measurement and sampling APIs work on a single GPU, and users are required to gather/scatter the results of each GPU. For the details of each API, please refer to Qubit Measurement, Sampling, and Single-process qubit reordering, respectively.

For those interested in multi-GPU quantum simulations, the NVIDIA cuQuantum Appliance is also available.

Note

Each GPU requires its own cuStateVec handle. Also, users are responsible for switching the CUDA device context (for example, with cudaSetDevice()).
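
A minimal sketch of this per-GPU setup is shown below (error checking omitted; the number of GPUs and variable names are illustrative):

    #include <cuda_runtime.h>
    #include <custatevec.h>

    const int nGpus = 4;
    custatevecHandle_t handles[4];

    for (int i = 0; i < nGpus; i++) {
        cudaSetDevice(i);               // switch the CUDA device context
        custatevecCreate(&handles[i]);  // one cuStateVec handle per GPU
    }

    // ... select the device with cudaSetDevice(i) before any work that uses handles[i] ...

    for (int i = 0; i < nGpus; i++) {
        cudaSetDevice(i);
        custatevecDestroy(handles[i]);
    }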

Batched state vectors simulation

cuStateVec provides gate application and qubit measurement APIs for a group of state vectors. When computing with many small state vectors, replacing per-state-vector API calls with a single batched API call is expected to improve performance.

These batched-version APIs assume that state vectors are allocated as one contiguous block of device memory and require three parameters to specify their locations:

  • nSVs, the number of state vectors in the batch.

  • nIndexBits, the number of qubits in each state vector. All the state vectors in a batch need to have the same number of qubits.

  • svStride, the offset, in number of elements, between two adjacent state vectors. It must be equal to or larger than the size of each state vector, 1 << nIndexBits.

For instance, the following figure describes a group of state vectors with nSVs = 3, nIndexBits = 2, and svStride = 8. Here, the elements of the 2-D array in the figure are ordered in column-major format.

[Figure: batched state vectors with nSVs = 3, nIndexBits = 2, and svStride = 8]
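
The address of the i-th state vector in the batch follows directly from svStride. For the layout above, with a complex64 batch, an illustrative sketch is (error checking omitted):

    #include <stdint.h>
    #include <cuComplex.h>
    #include <cuda_runtime.h>

    const uint32_t nSVs       = 3;
    const uint32_t nIndexBits = 2;
    const int64_t  svStride   = 8;   // >= (1 << nIndexBits) elements

    cuFloatComplex* d_svs;           // complex64 (CUDA_C_32F) batch, one contiguous block
    cudaMalloc((void**)&d_svs, (size_t)nSVs * svStride * sizeof(cuFloatComplex));

    // The i-th state vector occupies elements [i * svStride, i * svStride + (1 << nIndexBits)).
    cuFloatComplex* d_sv1 = d_svs + 1 * svStride;  // pointer to state vector #1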

For the details of each API, please refer to custatevecApplyMatrixBatched(), custatevecComputeExpectationBatched(), custatevecAbs2SumArrayBatched(), custatevecCollapseByBitStringBatched(), and custatevecMeasureBatched().

References

For a technical introduction to cuStateVec, please refer to the NVIDIA blog.

Citing cuQuantum

    1. Bayraktar et al., “cuQuantum SDK: A High-Performance Library for Accelerating Quantum Science,” 2023 IEEE International Conference on Quantum Computing and Engineering (QCE), Bellevue, WA, USA, 2023, pp. 1050-1061, doi: 10.1109/QCE57702.2023.00119.