Cirq

Notice

We document only what differs from Google’s qsimcirq. For more information, please review Google’s Documentation. For information on using qsim, refer to its documentation. To get started using the NVIDIA cuQuantum Appliance with Cirq, visit the overview on NGC.

API Reference

qsimcirq.QSimOptions

class QSimOptions:
    max_fused_gate_size: int = 4
    max_fused_diagonal_gate_size: int = -1
    cpu_threads: int = 1
    ev_noisy_repetitions: int = 1
    disable_gpu: bool = False
    gpu_mode: Union[int, Sequence[int]] = (0,)
    gpu_network: int = 0
    gpu_state_threads: int = 512
    gpu_data_blocks: int = 16
    verbosity: int = 0
    denormals_are_zeros: bool = False
    n_subsvs: int = -1
    use_sampler: Union[bool, None] = None

Keyword descriptions (see the combined sketch after this list):

gpu_mode
    The GPU simulator backend to use. If 1, the simulator uses the cuStateVec backend. If n, an integer greater than 1, the simulator uses the multi-GPU backend with the first n devices. If a sequence of integers, the simulator uses the multi-GPU backend with the devices whose ordinals appear in the sequence. The default is the multi-GPU backend with device 0.

n_subsvs
    The number of state-vector partitions. This option is ignored unless the multi-GPU backend is in use. If -1, the number of partitions equals the number of GPUs. For optimal performance, ensure that the number of partitions equals the number of GPUs and that both are a power of two.

use_sampler
    If None, the multi-GPU backend uses its own sampler and all other backends use their default samplers. If True, the multi-GPU backend’s sampler is used. If False, the multi-GPU backend’s sampler is disabled.

gpu_network
    The topology of the inter-GPU data-transfer network. This option takes effect only when the multi-GPU backend is in use. The supported topologies are a switch network and a full mesh network. If 0, the topology is detected automatically. If 1 or 2, the switch or full mesh network is selected, respectively. The switch network targets systems such as DGX A100 and DGX-2, in which all GPUs are connected to NVSwitch via NVLink; GPUs connected via PCIe switches are also treated as a switch network. The full mesh network targets systems such as DGX Station A100/V100, in which all devices are directly connected via NVLink.

disable_gpu
    Whether to disable the GPU simulator backend. All other GPU options are considered only when this is False (the default). Note the difference from qsimcirq’s use_gpu keyword.

max_fused_diagonal_gate_size
    The maximum number of qubits allowed per fused diagonal gate. This option is ignored unless the NGC multi-GPU backend is in use. If 0, gate fusion for diagonal gates is disabled. If -1, this parameter is adjusted automatically for better performance.
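
The multi-GPU options above can be combined. The sketch below is illustrative rather than a recommendation: it assumes a host with four GPUs, and the chosen values simply mirror the defaults or the guidance in the descriptions above.

import qsimcirq

options = qsimcirq.QSimOptions(
    gpu_mode=4,                       # multi-GPU backend with the first 4 devices
    n_subsvs=4,                       # one partition per GPU; both are a power of two
    gpu_network=0,                    # auto-detect switch vs. full mesh topology
    use_sampler=True,                 # use the multi-GPU backend's sampler
    max_fused_diagonal_gate_size=-1,  # let the backend tune diagonal-gate fusion
)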


Note

The NVIDIA cuQuantum Appliance provides GPU and multi-GPU simulators. If you need to run a CPU simulator with qsimcirq, we recommend the qsimcirq packages released by the Google Quantum AI team. If you need to run the CPU backend installed in the NVIDIA cuQuantum Appliance, specify qsimcirq.QSimOptions(disable_gpu=True).
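
For example, a minimal sketch of selecting the bundled CPU backend (the thread count is illustrative):

import qsimcirq

cpu_options = qsimcirq.QSimOptions(disable_gpu=True, cpu_threads=8)
cpu_simulator = qsimcirq.QSimSimulator(qsim_options=cpu_options)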

Some examples

options = qsimcirq.QSimOptions(gpu_mode=1)      # use cuStateVec (single-GPU) backend
options = qsimcirq.QSimOptions(gpu_mode=(0,))   # use multi-GPU backend with GPU #0
options = qsimcirq.QSimOptions(gpu_mode=2)      # use multi-GPU backend with the first 2 devices
options = qsimcirq.QSimOptions(gpu_mode=(0,2))  # use multi-GPU backend with GPU #0 & #2
options = qsimcirq.QSimOptions(gpu_mode=())     # use multi-GPU backend with GPU #0
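
As a rough end-to-end sketch (the circuit is illustrative), the options are passed to qsimcirq.QSimSimulator as usual:

import cirq
import qsimcirq

# A two-qubit Bell-state circuit with a terminal measurement.
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(cirq.H(q0), cirq.CNOT(q0, q1), cirq.measure(q0, q1, key="m"))

options = qsimcirq.QSimOptions(gpu_mode=1)               # cuStateVec (single-GPU) backend
simulator = qsimcirq.QSimSimulator(qsim_options=options)
result = simulator.run(circuit, repetitions=100)
print(result.histogram(key="m"))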