# Getting Started¶

In this section, we show how to contract a tensor network using cuTensorNet. First, we describe how to install the library and how to compile the sample code. Then, we walk through example code that performs the common steps needed to use cuTensorNet. In this example, we perform the following tensor contraction:

$R_{k,l} = A_{a,b,c,d,e,f} B_{b,g,h,e,i,j} C_{m,a,g,f,i,k} D_{l,c,h,d,j,m}$

We construct the code step by step, with each step appending code to the end of the previous one. The steps are separated by succinct multi-line comment blocks.

We recommend that the reader refer to the Overview and the cuTENSOR documentation to become familiar with the nomenclature and with cuTENSOR operations.

## Installation and Compilation¶

### Install cuTensorNet from conda-forge¶

If you already have a Conda environment set up (if not, Miniforge/Mambaforge is a great starting point), cuTensorNet can be easily installed from the conda-forge channel, via:

conda install -c conda-forge cutensornet


Alternatively, you can install cuQuantum, which contains both cuTensorNet and cuStateVec, from the conda-forge channel, via:

conda install -c conda-forge cuquantum


In any case, the conda solver will install all required dependencies for you. The package is installed to the current $CONDA_PREFIX, so you can simply update the environment variable as follows:

export CUQUANTUM_ROOT=$CONDA_PREFIX


If you install cutensornet or cuquantum together with an MPI implementation from conda-forge, e.g.,

conda install -c conda-forge cutensornet openmpi


it comes with MPI support ready: libcutensornet_distributed_interface_mpi.so is shipped with the package, and $CUTENSORNET_COMM_LIB is set properly (see below). Then, you can follow the on-screen instructions to enable CUDA-aware MPI.

Warning

It is not recommended to include Conda environment paths (such as $CONDA_PREFIX) in your LD_LIBRARY_PATH; depending on your use case, doing so might be unsafe.

Note

The mpich package on conda-forge does not support CUDA awareness. You can still install the “external” flavor using conda install -c conda-forge cutensornet "mpich=*=external_*", and supply your local build of CUDA-aware MPICH. For more information on the “external” builds, please refer to conda-forge’s documentation.

### Install cuTensorNet from NVIDIA DevZone¶

We will be using the tarball distribution to illustrate the process. Once the tarball file is downloaded to the CUQUANTUM_ROOT directory, you can unpack it via:

tar xvf cuquantum-linux-xxxxx-aa.bb.cc.dd-archive.tar.xz


and update the environment variable:

export CUQUANTUM_ROOT=/path/to/where/tarball/is/unpacked


For the installation of cuTENSOR, please follow the installation guide at https://docs.nvidia.com/cuda/cutensor/getting_started.html#installation-and-compilation.

If one has a CUDA-aware MPI library installed (e.g., a CUDA-aware version of Open MPI, MVAPICH, or MPICH), the distributed version of the cuTensorNet library can be activated by proceeding to the ${CUQUANTUM_ROOT}/distributed_interfaces directory and running the activate_mpi.sh bash script (on Linux). The activation script requires two environment variables to be set: ${CUDA_PATH} (path to the CUDA root directory) and ${MPI_PATH} (path to the CUDA-aware MPI root directory). Note that if your CUDA-aware MPI library was not installed in a single location but dispersed across system directories, ${MPI_PATH}/include must contain the mpi.h header (you can simply adjust ${MPI_PATH} to make it so). The gcc compiler must be available on the system.

Upon success, the MPI activation script will build a shared cuTensorNet-MPI wrapper library (libcutensornet_distributed_interface_mpi.so) based on the provided MPI library implementation (via ${MPI_PATH}). It will also set the environment variable ${CUTENSORNET_COMM_LIB} to point to that wrapper library. One should add the definition of this environment variable to their .bashrc script so that it is properly set upon opening a new Linux terminal. If ${CUTENSORNET_COMM_LIB} becomes unset, an attempt to use MPI parallelization inside the cuTensorNet library will cause an error.
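As a quick sanity check before running a distributed example, a program can verify that the activation step has taken effect. The snippet below is only a sketch that inspects the environment variable set by the activation script; it is not a cuTensorNet API call:

#include <cstdio>
#include <cstdlib>

// Sketch: verify that the MPI wrapper library has been activated before
// relying on cuTensorNet's distributed (MPI) execution path.
int main()
{
   const char* commLib = std::getenv("CUTENSORNET_COMM_LIB");
   if (commLib == nullptr) {
      std::printf("CUTENSORNET_COMM_LIB is not set; run activate_mpi.sh first\n");
      return 1;
   }
   std::printf("Distributed interface wrapper: %s\n", commLib);
   return 0;
}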

### Compilation¶

Assuming cuQuantum has been extracted in CUQUANTUM_ROOT and cuTENSOR in CUTENSOR_ROOT, we update the library path as follows:

export LD_LIBRARY_PATH=${CUQUANTUM_ROOT}/lib:${CUTENSOR_ROOT}/lib/11:${LD_LIBRARY_PATH}


Note

Depending on your CUDA Toolkit, you might have to choose a different library version (e.g., ${CUTENSOR_ROOT}/lib/11.0).

The serial sample code discussed below (tensornet_example.cu) can be compiled via the following command:

nvcc tensornet_example.cu -I${CUQUANTUM_ROOT}/include -I${CUTENSOR_ROOT}/include -L${CUQUANTUM_ROOT}/lib -L${CUTENSOR_ROOT}/lib/11 -lcutensornet -lcutensor -o tensornet_example


For static linking against the cuTensorNet library, use the following command (note that libmetis_static.a needs to be explicitly linked against, assuming it is installed through the NVIDIA CUDA Toolkit and accessible through $LIBRARY_PATH):

nvcc tensornet_example.cu -I${CUQUANTUM_ROOT}/include -I${CUTENSOR_ROOT}/include ${CUQUANTUM_ROOT}/lib/libcutensornet_static.a -L${CUTENSOR_ROOT}/lib/11 -lcutensor libmetis_static.a -o tensornet_example


In order to build the parallel (MPI) versions of the examples (tensornet_example_mpi_auto.cu and tensornet_example_mpi.cu), one will need to have an MPI library installed (e.g., a recent Open MPI, MVAPICH, or MPICH). In particular, the automatic parallel example requires CUDA-aware MPI; see Code Example (Automatic Slice-Based Distributed Parallelization) below. In this case, one will need to add -I${MPI_PATH}/include and -L${MPI_PATH}/lib -lmpi to the build command:

nvcc tensornet_example_mpi_auto.cu -I${CUQUANTUM_ROOT}/include -I${CUTENSOR_ROOT}/include -I${MPI_PATH}/include -L${CUQUANTUM_ROOT}/lib -L${CUTENSOR_ROOT}/lib/11 -lcutensornet -lcutensor -L${MPI_PATH}/lib -lmpi -o tensornet_example_mpi_auto
nvcc tensornet_example_mpi.cu -I${CUQUANTUM_ROOT}/include -I${CUTENSOR_ROOT}/include -I${MPI_PATH}/include -L${CUQUANTUM_ROOT}/lib -L${CUTENSOR_ROOT}/lib/11 -lcutensornet -lcutensor -L${MPI_PATH}/lib -lmpi -o tensornet_example_mpi


Warning

When running tensornet_example_mpi_auto.cu without CUDA-aware MPI, the program will crash.

Note

Depending on the source of the cuQuantum package, you may need to replace lib above with lib64.

## Code Example (Serial)¶

The following code example illustrates the common steps necessary to use cuTensorNet and also introduces typical tensor network operations. The full sample code can be found in the NVIDIA/cuQuantum repository.
### Headers and data types¶

 8#include <stdlib.h>
 9#include <stdio.h>
10
11#include <unordered_map>
12#include <vector>
13#include <cassert>
14
15#include <cuda_runtime.h>
16#include <cutensornet.h>
17
18
19#define HANDLE_ERROR(x) \
20{ const auto err = x; \
21  if( err != CUTENSORNET_STATUS_SUCCESS ) \
22  { printf("Error: %s in line %d\n", cutensornetGetErrorString(err), __LINE__); \
23    fflush(stdout); \
24  } \
25};
26
27#define HANDLE_CUDA_ERROR(x) \
28{ const auto err = x; \
29  if( err != cudaSuccess ) \
30  { printf("CUDA Error: %s in line %d\n", cudaGetErrorString(err), __LINE__); \
31    fflush(stdout); \
32  } \
33};
34
35
36struct GPUTimer
37{
38   GPUTimer(cudaStream_t stream): stream_(stream)
39   {
40      cudaEventCreate(&start_);
41      cudaEventCreate(&stop_);
42   }
43
44   ~GPUTimer()
45   {
46      cudaEventDestroy(start_);
47      cudaEventDestroy(stop_);
48   }
49
50   void start()
51   {
52      cudaEventRecord(start_, stream_);
53   }
54
55   float seconds()
56   {
57      cudaEventRecord(stop_, stream_);
58      cudaEventSynchronize(stop_);
59      float time;
60      cudaEventElapsedTime(&time, start_, stop_);
61      return time * 1e-3;
62   }
63
64   private:
65   cudaEvent_t start_, stop_;
66   cudaStream_t stream_;
67};
68
69
70int main(int argc, char **argv)
71{
72   static_assert(sizeof(size_t) == sizeof(int64_t), "Please build this sample on a 64-bit architecture!");
73
74   bool verbose = true;
75
76   // Check cuTensorNet version
77   const size_t cuTensornetVersion = cutensornetGetVersion();
78   if(verbose)
79      printf("cuTensorNet version: %ld\n", cuTensornetVersion);
80
81   // Set GPU device
82   int numDevices {0};
83   HANDLE_CUDA_ERROR( cudaGetDeviceCount(&numDevices) );
84   const int deviceId = 0;
85   HANDLE_CUDA_ERROR( cudaSetDevice(deviceId) );
86   cudaDeviceProp prop;
87   HANDLE_CUDA_ERROR( cudaGetDeviceProperties(&prop, deviceId) );
88
89   if(verbose) {
90      printf("===== device info ======\n");
91      printf("GPU-name:%s\n", prop.name);
92      printf("GPU-clock:%d\n", prop.clockRate);
93      printf("GPU-memoryClock:%d\n", prop.memoryClockRate);
94      printf("GPU-nSM:%d\n", prop.multiProcessorCount);
95      printf("GPU-major:%d\n", prop.major);
96      printf("GPU-minor:%d\n", prop.minor);
97      printf("========================\n");
98   }
99
100   typedef float floatType;
101   cudaDataType_t typeData = CUDA_R_32F;
102   cutensornetComputeType_t typeCompute = CUTENSORNET_COMPUTE_32F;
103
104   if(verbose)
105      printf("Included headers and defined data types\n");


### Define tensor network and tensor sizes¶

Next, we define the topology of the tensor network (i.e., the modes of the tensors, their extents, and their connectivity).
109   /**********************
110   * Computing: R_{k,l} = A_{a,b,c,d,e,f} B_{b,g,h,e,i,j} C_{m,a,g,f,i,k} D_{l,c,h,d,j,m}
111   **********************/
112
113   constexpr int32_t numInputs = 4;
114
115   // Create vectors of tensor modes
116   std::vector<int32_t> modesA{'a','b','c','d','e','f'};
117   std::vector<int32_t> modesB{'b','g','h','e','i','j'};
118   std::vector<int32_t> modesC{'m','a','g','f','i','k'};
119   std::vector<int32_t> modesD{'l','c','h','d','j','m'};
120   std::vector<int32_t> modesR{'k','l'};
121
122   // Set mode extents
123   std::unordered_map<int32_t, int64_t> extent;
124   extent['a'] = 16;
125   extent['b'] = 16;
126   extent['c'] = 16;
127   extent['d'] = 16;
128   extent['e'] = 16;
129   extent['f'] = 16;
130   extent['g'] = 16;
131   extent['h'] = 16;
132   extent['i'] = 16;
133   extent['j'] = 16;
134   extent['k'] = 16;
135   extent['l'] = 16;
136   extent['m'] = 16;
137
138   // Create a vector of extents for each tensor
139   std::vector<int64_t> extentA;
140   for (auto mode : modesA)
141      extentA.push_back(extent[mode]);
142   std::vector<int64_t> extentB;
143   for (auto mode : modesB)
144      extentB.push_back(extent[mode]);
145   std::vector<int64_t> extentC;
146   for (auto mode : modesC)
147      extentC.push_back(extent[mode]);
148   std::vector<int64_t> extentD;
149   for (auto mode : modesD)
150      extentD.push_back(extent[mode]);
151   std::vector<int64_t> extentR;
152   for (auto mode : modesR)
153      extentR.push_back(extent[mode]);
154
155   if(verbose)
156      printf("Defined tensor network, modes, and extents\n");


### Allocate memory and initialize data¶

Next, we allocate memory for the tensor network operands and initialize them to random values.

159   /**********************
160   * Allocating data
161   **********************/
162
163   size_t elementsA = 1;
164   for (auto mode : modesA)
165      elementsA *= extent[mode];
166   size_t elementsB = 1;
167   for (auto mode : modesB)
168      elementsB *= extent[mode];
169   size_t elementsC = 1;
170   for (auto mode : modesC)
171      elementsC *= extent[mode];
172   size_t elementsD = 1;
173   for (auto mode : modesD)
174      elementsD *= extent[mode];
175   size_t elementsR = 1;
176   for (auto mode : modesR)
177      elementsR *= extent[mode];
178
179   size_t sizeA = sizeof(floatType) * elementsA;
180   size_t sizeB = sizeof(floatType) * elementsB;
181   size_t sizeC = sizeof(floatType) * elementsC;
182   size_t sizeD = sizeof(floatType) * elementsD;
183   size_t sizeR = sizeof(floatType) * elementsR;
184   if(verbose)
185      printf("Total GPU memory used for tensor storage: %.2f GiB\n",
186             (sizeA + sizeB + sizeC + sizeD + sizeR) / 1024. / 1024. / 1024);
187
188   void* rawDataIn_d[numInputs];
189   void* R_d;
190   HANDLE_CUDA_ERROR( cudaMalloc((void**) &rawDataIn_d[0], sizeA) );
191   HANDLE_CUDA_ERROR( cudaMalloc((void**) &rawDataIn_d[1], sizeB) );
192   HANDLE_CUDA_ERROR( cudaMalloc((void**) &rawDataIn_d[2], sizeC) );
193   HANDLE_CUDA_ERROR( cudaMalloc((void**) &rawDataIn_d[3], sizeD) );
194   HANDLE_CUDA_ERROR( cudaMalloc((void**) &R_d, sizeR));
195
196   floatType *A = (floatType*) malloc(sizeof(floatType) * elementsA);
197   floatType *B = (floatType*) malloc(sizeof(floatType) * elementsB);
198   floatType *C = (floatType*) malloc(sizeof(floatType) * elementsC);
199   floatType *D = (floatType*) malloc(sizeof(floatType) * elementsD);
200   floatType *R = (floatType*) malloc(sizeof(floatType) * elementsR);
201
202   if (A == NULL || B == NULL || C == NULL || D == NULL || R == NULL)
203   {
204      printf("Error: Host memory allocation failed!\n");
205      return -1;
206   }
207
208   /*******************
209   * Initialize data
210   *******************/
211
212   memset(R, 0, sizeof(floatType) * elementsR);
213   for (uint64_t i = 0; i < elementsA; i++)
214      A[i] = ((floatType) rand()) / RAND_MAX;
215   for (uint64_t i = 0; i < elementsB; i++)
216      B[i] = ((floatType) rand()) / RAND_MAX;
217   for (uint64_t i = 0; i < elementsC; i++)
218      C[i] = ((floatType) rand()) / RAND_MAX;
219   for (uint64_t i = 0; i < elementsD; i++)
220      D[i] = ((floatType) rand()) / RAND_MAX;
221
222   HANDLE_CUDA_ERROR( cudaMemcpy(rawDataIn_d[0], A, sizeA, cudaMemcpyHostToDevice) );
223   HANDLE_CUDA_ERROR( cudaMemcpy(rawDataIn_d[1], B, sizeB, cudaMemcpyHostToDevice) );
224   HANDLE_CUDA_ERROR( cudaMemcpy(rawDataIn_d[2], C, sizeC, cudaMemcpyHostToDevice) );
225   HANDLE_CUDA_ERROR( cudaMemcpy(rawDataIn_d[3], D, sizeD, cudaMemcpyHostToDevice) );
226
227   if(verbose)
228      printf("Allocated GPU memory for data, and initialize data\n");


### cuTensorNet handle and network descriptor¶

Next, we initialize the cuTensorNet library via cutensornetCreate() and create the network descriptor with the desired tensor modes, extents, and strides, as well as the data and compute types. Note that the created library context will be associated with the currently active GPU.

231   /*************************
232   * cuTensorNet
233   *************************/
234
235   cudaStream_t stream;
236   cudaStreamCreate(&stream);
237
238   cutensornetHandle_t handle;
239   HANDLE_ERROR( cutensornetCreate(&handle) );
240
241   const int32_t nmodeA = modesA.size();
242   const int32_t nmodeB = modesB.size();
243   const int32_t nmodeC = modesC.size();
244   const int32_t nmodeD = modesD.size();
245   const int32_t nmodeR = modesR.size();
246
247   /*******************************
248   * Create Network Descriptor
249   *******************************/
250
251   const int32_t* modesIn[] = {modesA.data(), modesB.data(), modesC.data(), modesD.data()};
252   int32_t const numModesIn[] = {nmodeA, nmodeB, nmodeC, nmodeD};
253   const int64_t* extentsIn[] = {extentA.data(), extentB.data(), extentC.data(), extentD.data()};
254   const int64_t* stridesIn[] = {NULL, NULL, NULL, NULL}; // strides are optional; if no stride is provided, cuTensorNet assumes a generalized column-major data layout
255
256   // Set up tensor network
257   cutensornetNetworkDescriptor_t descNet;
258   HANDLE_ERROR( cutensornetCreateNetworkDescriptor(handle,
259                     numInputs, numModesIn, extentsIn, stridesIn, modesIn, NULL,
260                     nmodeR, extentR.data(), /*stridesOut = */NULL, modesR.data(),
261                     typeData, typeCompute,
262                     &descNet) );
263
264   if(verbose)
265      printf("Initialized the cuTensorNet library and created a tensor network descriptor\n");
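The NULL entries in stridesIn above are equivalent to explicit generalized column-major strides, in which the first mode of each tensor is the fastest-varying one. As a small illustration (not part of the sample), such strides could be computed with a helper like the one sketched below, defined outside main(), and passed via stridesIn instead of NULL:

// Sketch: generalized column-major strides (first mode varies fastest);
// for dense tensors this is equivalent to passing NULL strides.
std::vector<int64_t> columnMajorStrides(const std::vector<int64_t>& extents)
{
   std::vector<int64_t> strides(extents.size());
   int64_t stride = 1;
   for (size_t i = 0; i < extents.size(); ++i) {
      strides[i] = stride;
      stride *= extents[i];
   }
   return strides;
}

// Usage sketch: const auto stridesA = columnMajorStrides(extentA);
// and then {stridesA.data(), ...} in place of {NULL, ...} for stridesIn.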
### Optimal contraction order and slicing¶

At this stage, we can deploy the cuTensorNet optimizer to find an optimized contraction path and slicing combination. We choose a limit for the workspace needed to perform the contraction based on the available memory resources, and provide it to the optimizer as a constraint. We then create an optimizer configuration object of type cutensornetContractionOptimizerConfig_t to specify various optimizer options and provide it to the optimizer, which is invoked via cutensornetContractionOptimize(). The results from the optimizer will be returned in an optimizer info object of type cutensornetContractionOptimizerInfo_t.

268   /*******************************
269   * Choose workspace limit based on available resources.
270   *******************************/
271
272   size_t freeMem, totalMem;
273   HANDLE_CUDA_ERROR( cudaMemGetInfo(&freeMem, &totalMem) );
274   uint64_t workspaceLimit = (uint64_t)((double)freeMem * 0.9);
275   if(verbose)
276      printf("Workspace limit = %lu\n", workspaceLimit);
277
278   /*******************************
279   * Find "optimal" contraction order and slicing
280   *******************************/
281
282   cutensornetContractionOptimizerConfig_t optimizerConfig;
283   HANDLE_ERROR( cutensornetCreateContractionOptimizerConfig(handle, &optimizerConfig) );
284
285   // Set the desired number of hyper-samples (defaults to 0)
286   int32_t num_hypersamples = 8;
287   HANDLE_ERROR( cutensornetContractionOptimizerConfigSetAttribute(handle,
288                  optimizerConfig,
289                  CUTENSORNET_CONTRACTION_OPTIMIZER_CONFIG_HYPER_NUM_SAMPLES,
290                  &num_hypersamples,
291                  sizeof(num_hypersamples)) );
292
293   // Create contraction optimizer info and find an optimized contraction path
294   cutensornetContractionOptimizerInfo_t optimizerInfo;
295   HANDLE_ERROR( cutensornetCreateContractionOptimizerInfo(handle, descNet, &optimizerInfo) );
296
297   HANDLE_ERROR( cutensornetContractionOptimize(handle,
298                                                descNet,
299                                                optimizerConfig,
300                                                workspaceLimit,
301                                                optimizerInfo) );
302
303   // Query the number of slices the tensor network execution will be split into
304   int64_t numSlices = 0;
305   HANDLE_ERROR( cutensornetContractionOptimizerInfoGetAttribute(
306                  handle,
307                  optimizerInfo,
308                  CUTENSORNET_CONTRACTION_OPTIMIZER_INFO_NUM_SLICES,
309                  &numSlices,
310                  sizeof(numSlices)) );
311   assert(numSlices > 0);
312
313   if(verbose)
314      printf("Found an optimized contraction path using cuTensorNet optimizer\n");


It is also possible to bypass the cuTensorNet optimizer and import a pre-determined contraction path, as well as slicing information, directly to the optimizer info object via cutensornetContractionOptimizerInfoSetAttribute().
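For illustration, the snippet below sketches how the contraction path attribute can be queried from (and written back to) the optimizer info object; it assumes the cutensornetContractionPath_t and cutensornetNodePair_t structures from cutensornet.h. Supplying a custom path would follow the same pattern, with the pairs filled in by the user (or by an external path finder) instead of being queried:

   // Sketch: retrieve the contraction path found above, then set the same attribute explicitly.
   // A path over numInputs tensors consists of (numInputs - 1) pairwise contractions.
   std::vector<cutensornetNodePair_t> pathPairs(numInputs - 1);
   cutensornetContractionPath_t contractionPath;
   contractionPath.numContractions = numInputs - 1;
   contractionPath.data = pathPairs.data();

   HANDLE_ERROR( cutensornetContractionOptimizerInfoGetAttribute(handle, optimizerInfo,
                  CUTENSORNET_CONTRACTION_OPTIMIZER_INFO_PATH,
                  &contractionPath, sizeof(contractionPath)) );

   // A pre-determined path would be written into 'pathPairs' here instead of the queried one
   HANDLE_ERROR( cutensornetContractionOptimizerInfoSetAttribute(handle, optimizerInfo,
                  CUTENSORNET_CONTRACTION_OPTIMIZER_INFO_PATH,
                  &contractionPath, sizeof(contractionPath)) );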
### Create workspace descriptor and allocate workspace memory¶

Next, we create a workspace descriptor, compute the workspace sizes, and query the minimum workspace size needed to contract the network. We then allocate device memory for the workspace and set this in the workspace descriptor. The workspace descriptor will be provided to the contraction plan.

317   /*******************************
318   * Create workspace descriptor, allocate workspace, and set it.
319   *******************************/
320
321   cutensornetWorkspaceDescriptor_t workDesc;
322   HANDLE_ERROR( cutensornetCreateWorkspaceDescriptor(handle, &workDesc) );
323
324   uint64_t requiredWorkspaceSize = 0;
325   HANDLE_ERROR( cutensornetWorkspaceComputeContractionSizes(handle,
326                                                descNet,
327                                                optimizerInfo,
328                                                workDesc) );
329
330   HANDLE_ERROR( cutensornetWorkspaceGetSize(handle,
331                                    workDesc,
332                                    CUTENSORNET_WORKSIZE_PREF_MIN,
333                                    CUTENSORNET_MEMSPACE_DEVICE,
334                                    &requiredWorkspaceSize) );
335
336   void* work = nullptr;
337   HANDLE_CUDA_ERROR( cudaMalloc(&work, requiredWorkspaceSize) );
338
339   HANDLE_ERROR( cutensornetWorkspaceSet(handle,
340                                workDesc,
341                                CUTENSORNET_MEMSPACE_DEVICE,
342                                work,
343                                requiredWorkspaceSize) );
344
345   if(verbose)
346      printf("Allocated and set up the GPU workspace\n");


### Contraction plan and auto-tune¶

We create a tensor network contraction plan holding all pairwise contraction plans for cuTENSOR. Optionally, we can auto-tune the plan such that cuTENSOR selects the best kernel for each pairwise contraction. This contraction plan can be reused for many (possibly different) data inputs, avoiding the cost of initializing this plan redundantly.

349   /*******************************
350   * Initialize the pairwise contraction plan (for cuTENSOR).
351   *******************************/
352
353   cutensornetContractionPlan_t plan;
354   HANDLE_ERROR( cutensornetCreateContractionPlan(handle,
355                                          descNet,
356                                          optimizerInfo,
357                                          workDesc,
358                                          &plan) );
359
360   /*******************************
361   * Optional: Auto-tune cuTENSOR's cutensorContractionPlan to pick the fastest kernel
362   *           for each pairwise tensor contraction.
363   *******************************/
364   cutensornetContractionAutotunePreference_t autotunePref;
365   HANDLE_ERROR( cutensornetCreateContractionAutotunePreference(handle,
366                                          &autotunePref) );
367
368   const int numAutotuningIterations = 5; // may be 0
369   HANDLE_ERROR( cutensornetContractionAutotunePreferenceSetAttribute(
370                           handle,
371                           autotunePref,
372                           CUTENSORNET_CONTRACTION_AUTOTUNE_MAX_ITERATIONS,
373                           &numAutotuningIterations,
374                           sizeof(numAutotuningIterations)) );
375
376   // Modify the plan again to find the best pair-wise contractions
377   HANDLE_ERROR( cutensornetContractionAutotune(handle,
378                                       plan,
379                                       rawDataIn_d,
380                                       R_d,
381                                       workDesc,
382                                       autotunePref,
383                                       stream) );
384
385   HANDLE_ERROR( cutensornetDestroyContractionAutotunePreference(autotunePref) );
386
387   if(verbose)
388      printf("Created a contraction plan for cuTensorNet and optionally auto-tuned it\n");


### Tensor network contraction execution¶

Finally, we contract the tensor network as many times as needed, possibly with different input each time. Tensor network slices, captured as a cutensornetSliceGroup_t object, are computed using the same contraction plan. For convenience, NULL can be provided to the cutensornetContractSlices() function instead of a slice group when the goal is to contract all the slices in the network. We also clean up and free allocated resources.

391   /**********************
392   * Execute the tensor network contraction
393   **********************/
394
395   // Create a cutensornetSliceGroup_t object from a range of slice IDs
396   cutensornetSliceGroup_t sliceGroup{};
397   HANDLE_ERROR( cutensornetCreateSliceGroupFromIDRange(handle, 0, numSlices, 1, &sliceGroup) );
398
399   GPUTimer timer {stream};
400   double minTimeCUTENSOR = 1e100;
401   const int numRuns = 3; // number of repeats to get stable performance results
402   for (int i = 0; i < numRuns; ++i)
403   {
404      HANDLE_CUDA_ERROR( cudaMemcpy(R_d, R, sizeR, cudaMemcpyHostToDevice) ); // restore the output tensor on GPU
405      HANDLE_CUDA_ERROR( cudaDeviceSynchronize() );
406
407      /*
408      * Contract all slices of the tensor network
409      */
410      timer.start();
411
412      int32_t accumulateOutput = 0; // output tensor data will be overwritten
413      HANDLE_ERROR( cutensornetContractSlices(handle,
414                     plan,
415                     rawDataIn_d,
416                     R_d,
417                     accumulateOutput,
418                     workDesc,
419                     sliceGroup, // alternatively, NULL can also be used to contract over all slices instead of specifying a sliceGroup object
420                     stream) );
421
422      // Synchronize and measure best timing
423      auto time = timer.seconds();
424      minTimeCUTENSOR = (time > minTimeCUTENSOR) ? minTimeCUTENSOR : time;
425   }
426
427   if(verbose)
428      printf("Contracted the tensor network, each slice used the same contraction plan\n");
429
430   // Print the 1-norm of the output tensor (verification)
431   HANDLE_CUDA_ERROR( cudaStreamSynchronize(stream) );
432   HANDLE_CUDA_ERROR( cudaMemcpy(R, R_d, sizeR, cudaMemcpyDeviceToHost) ); // restore the output tensor on Host
433   double norm1 = 0.0;
434   for (int64_t i = 0; i < elementsR; ++i) {
435      norm1 += std::abs(R[i]);
436   }
437   if(verbose)
438      printf("Computed the 1-norm of the output tensor: %e\n", norm1);
439
440   /*************************/
441
442   // Query the total Flop count for the tensor network contraction
443   double flops {0.0};
444   HANDLE_ERROR( cutensornetContractionOptimizerInfoGetAttribute(
445                  handle,
446                  optimizerInfo,
447                  CUTENSORNET_CONTRACTION_OPTIMIZER_INFO_FLOP_COUNT,
448                  &flops,
449                  sizeof(flops)) );
450
451   if(verbose) {
452      printf("Number of tensor network slices = %ld\n", numSlices);
453      printf("Tensor network contraction time (ms) = %.3f\n", minTimeCUTENSOR * 1000.f);
454   }
455
456   // Free cuTensorNet resources
457   HANDLE_ERROR( cutensornetDestroySliceGroup(sliceGroup) );
458   HANDLE_ERROR( cutensornetDestroyContractionPlan(plan) );
459   HANDLE_ERROR( cutensornetDestroyWorkspaceDescriptor(workDesc) );
460   HANDLE_ERROR( cutensornetDestroyContractionOptimizerInfo(optimizerInfo) );
461   HANDLE_ERROR( cutensornetDestroyContractionOptimizerConfig(optimizerConfig) );
462   HANDLE_ERROR( cutensornetDestroyNetworkDescriptor(descNet) );
463   HANDLE_ERROR( cutensornetDestroy(handle) );
464
465   // Free Host memory resources
466   if (R) free(R);
467   if (D) free(D);
468   if (C) free(C);
469   if (B) free(B);
470   if (A) free(A);
471
472   // Free GPU memory resources
473   if (work) cudaFree(work);
474   if (R_d) cudaFree(R_d);
475   if (rawDataIn_d[0]) cudaFree(rawDataIn_d[0]);
476   if (rawDataIn_d[1]) cudaFree(rawDataIn_d[1]);
477   if (rawDataIn_d[2]) cudaFree(rawDataIn_d[2]);
478   if (rawDataIn_d[3]) cudaFree(rawDataIn_d[3]);
479
480   if(verbose)
481      printf("Freed resources and exited\n");
482
483   return 0;
484}


Recall that the full sample code can be found in the NVIDIA/cuQuantum repository.

## Code Example (Automatic Slice-Based Distributed Parallelization)¶

It is straightforward to adapt Code Example (Serial) and enable automatic parallel execution across multiple/many GPU devices (across multiple/many nodes). We will illustrate this with an example using the Message Passing Interface (MPI) as the communication layer. Below we show the minor additions that need to be made in order to enable distributed parallel execution without making any changes to the original serial source code. The full MPI-automatic sample code can be found in the NVIDIA/cuQuantum repository. To enable automatic parallelism, cuTensorNet requires that

• the environment variable $CUTENSORNET_COMM_LIB is set to the path to the wrapper shared library libcutensornet_distributed_interface_mpi.so, and

• the executable is linked to a CUDA-aware MPI library

Detailed instructions for setting these up are given in the installation guide above.

First, in addition to the headers and definitions mentioned in Headers and data types, we include the MPI header and define a macro to handle MPI errors. We also need to initialize the MPI service and assign a unique GPU device to each MPI process that will later be associated with the cuTensorNet library handle created inside the MPI process.

20#include <mpi.h>

44#define HANDLE_MPI_ERROR(x)                                       \
45{ const auto err = x;                                             \
46  if( err != MPI_SUCCESS )                                        \
47  { char error[MPI_MAX_ERROR_STRING]; int len;                    \
48    MPI_Error_string(err, error, &len);                           \
49    printf("MPI Error: %s in line %d\n", error, __LINE__);        \
50    fflush(stdout);                                               \
51    MPI_Abort(MPI_COMM_WORLD, err);                               \
52  }                                                               \
53};


The MPI service initialization must precede the first cutensornetCreate() call which creates a cuTensorNet library handle. An attempt to call cutensornetCreate() before initializing the MPI service will result in an error.

 97   // Initialize MPI
98   HANDLE_MPI_ERROR( MPI_Init(&argc, &argv) );
99   int rank {-1};
100   HANDLE_MPI_ERROR( MPI_Comm_rank(MPI_COMM_WORLD, &rank) );
101   int numProcs {0};
102   HANDLE_MPI_ERROR( MPI_Comm_size(MPI_COMM_WORLD, &numProcs) );


If multiple GPU devices located on the same node are visible to an MPI process, we need to pick an exclusive GPU device for each MPI process. If the mpirun (or mpiexec) command provided by your MPI library sets an environment variable that exposes the rank of each MPI process during its invocation (for example, Open MPI provides ${OMPI_COMM_WORLD_LOCAL_RANK} for this purpose), you can use it to set CUDA_VISIBLE_DEVICES so that each MPI process sees exactly one GPU device assigned to it exclusively; a sketch of reading such a variable inside the program is shown after the code below. Otherwise, the GPU device can be set manually, as shown below.

122   // Set GPU device based on ranks and nodes
123   int numDevices {0};
124   HANDLE_CUDA_ERROR( cudaGetDeviceCount(&numDevices) );
125   const int deviceId = rank % numDevices; // we assume that the processes are mapped to nodes in contiguous chunks
126   HANDLE_CUDA_ERROR( cudaSetDevice(deviceId) );
128   HANDLE_CUDA_ERROR( cudaGetDeviceProperties(&prop, deviceId) );
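Alternatively, the local rank can be read from the launcher's environment inside the program itself. The snippet below is a sketch that assumes Open MPI's OMPI_COMM_WORLD_LOCAL_RANK variable is available; other launchers expose different variables (e.g., SLURM_LOCALID under Slurm), so the variable name must be adapted:

   // Sketch: derive the device ID from a launcher-provided local rank, if present;
   // fall back to the contiguous-chunk assumption used above otherwise.
   const char* localRankStr = getenv("OMPI_COMM_WORLD_LOCAL_RANK"); // launcher-specific
   const int localRank = (localRankStr != NULL) ? atoi(localRankStr) : rank;
   HANDLE_CUDA_ERROR( cudaSetDevice(localRank % numDevices) );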


Next we define the tensor network as described in Define tensor network and tensor sizes. In a one GPU device per process model, the tensor network, including operands and result data, is replicated on each process. The root process initializes the input data and broadcasts it to the other processes.

253   /*******************
254   * Initialize data
255   *******************/
256
257   memset(R, 0, sizeof(floatType) * elementsR);
258   if(rank == 0)
259   {
260      for (uint64_t i = 0; i < elementsA; i++)
261         A[i] = ((floatType) rand()) / RAND_MAX;
262      for (uint64_t i = 0; i < elementsB; i++)
263         B[i] = ((floatType) rand()) / RAND_MAX;
264      for (uint64_t i = 0; i < elementsC; i++)
265         C[i] = ((floatType) rand()) / RAND_MAX;
266      for (uint64_t i = 0; i < elementsD; i++)
267         D[i] = ((floatType) rand()) / RAND_MAX;
268   }
269
270   // Broadcast input data to all ranks
271   HANDLE_MPI_ERROR( MPI_Bcast(A, elementsA, floatTypeMPI, 0, MPI_COMM_WORLD) );
272   HANDLE_MPI_ERROR( MPI_Bcast(B, elementsB, floatTypeMPI, 0, MPI_COMM_WORLD) );
273   HANDLE_MPI_ERROR( MPI_Bcast(C, elementsC, floatTypeMPI, 0, MPI_COMM_WORLD) );
274   HANDLE_MPI_ERROR( MPI_Bcast(D, elementsD, floatTypeMPI, 0, MPI_COMM_WORLD) );
275
276   // Copy data to GPU
277   HANDLE_CUDA_ERROR( cudaMemcpy(rawDataIn_d[0], A, sizeA, cudaMemcpyHostToDevice) );
278   HANDLE_CUDA_ERROR( cudaMemcpy(rawDataIn_d[1], B, sizeB, cudaMemcpyHostToDevice) );
279   HANDLE_CUDA_ERROR( cudaMemcpy(rawDataIn_d[2], C, sizeC, cudaMemcpyHostToDevice) );
280   HANDLE_CUDA_ERROR( cudaMemcpy(rawDataIn_d[3], D, sizeD, cudaMemcpyHostToDevice) );
281
282   if(verbose)
283      printf("Allocated GPU memory for data, and initialize data\n");


Once the MPI service has been initialized and the cuTensorNet library handle has subsequently been created, one can activate distributed parallel execution by calling cutensornetDistributedResetConfiguration(). Per standard practice, the user's code needs to create a duplicate MPI communicator via MPI_Comm_dup. The duplicate MPI communicator is then associated with the cuTensorNet library handle by passing a pointer to it, together with its size (in bytes), to the cutensornetDistributedResetConfiguration() call. The MPI communicator will be stored inside the cuTensorNet library handle such that all subsequent calls to the tensor network contraction path finder and tensor network contraction executor will be parallelized across all participating MPI processes (each MPI process is associated with its own GPU).

337   /*******************************
338   * Activate distributed (parallel) execution prior to
339   * calling contraction path finder and contraction executor
340   *******************************/
341   // HANDLE_ERROR( cutensornetDistributedResetConfiguration(handle, NULL, 0) ); // resets back to serial execution
342   MPI_Comm cutnComm;
343   HANDLE_MPI_ERROR( MPI_Comm_dup(MPI_COMM_WORLD, &cutnComm) ); // duplicate MPI communicator
344   HANDLE_ERROR( cutensornetDistributedResetConfiguration(handle, &cutnComm, sizeof(cutnComm)) );
345   if(verbose)
346      printf("Reset distributed MPI configuration\n");


Note

cutensornetDistributedResetConfiguration() is a collective call that must be executed by all participating MPI processes.

The API of this distributed parallelization model makes it straightforward to run source code written for serial execution on multiple GPUs/nodes. Essentially, all MPI processes execute exactly the same (serial) source code while the distributed parallelization is performed automatically inside the tensor network contraction path finder and tensor network contraction executor calls. The parallelization of the tensor network contraction path finder only occurs when the number of requested hyper-samples is larger than zero. Regardless of that, however, activation of the distributed parallelization must precede the invocation of the tensor network contraction path finder; that is, both the path finder and the contraction execution must be invoked strictly after activating the distributed parallelization via cutensornetDistributedResetConfiguration(). When the distributed configuration is set to a parallel mode, the user is normally expected to invoke tensor network contraction execution by calling the cutensornetContractSlices() function with the full range of tensor network slices, which will then be automatically distributed across all MPI processes.

Since the tensor network must be sufficiently large to benefit from distributed execution, smaller tensor networks (those that consist of only a single slice) can still be processed without distributed parallelization. This is achieved by calling cutensornetDistributedResetConfiguration() with a NULL argument in place of the MPI communicator pointer (as before, this must be done prior to calling the tensor network contraction path finder). That is, the switch between distributed parallelization and redundant serial execution can be made on a per-tensor-network basis: users can decide which (larger) tensor networks to process in parallel and which (smaller) ones to process serially and redundantly, by resetting the distributed configuration appropriately. In both cases, all MPI processes will produce the same output tensor (result) at the end of the tensor network execution.
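The sketch below illustrates this per-network toggle. The helper function name and the way the decision is made are illustrative assumptions; only the two forms of the cutensornetDistributedResetConfiguration() call are taken from this guide:

// Sketch (hypothetical helper): choose parallel or redundant-serial processing per tensor network.
// 'cutnComm' points to the MPI communicator previously duplicated via MPI_Comm_dup().
void configureDistributedExecution(cutensornetHandle_t handle, MPI_Comm* cutnComm, bool largeNetwork)
{
   if (largeNetwork) {
      // Parallelize the path finder and the slice contraction across all MPI processes
      HANDLE_ERROR( cutensornetDistributedResetConfiguration(handle, cutnComm, sizeof(*cutnComm)) );
   } else {
      // Process this (small) network redundantly on every MPI process in serial mode
      HANDLE_ERROR( cutensornetDistributedResetConfiguration(handle, NULL, 0) );
   }
   // ...then call the contraction path finder and cutensornetContractSlices() as usual
}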

Note

In the current version of the cuTensorNet library, the parallel tensor network contraction execution triggered by the cutensornetContractSlices() call will block the provided CUDA stream as well as the calling CPU thread until the execution has completed on all MPI processes. This is a temporary limitation that will be lifted in future versions of the cuTensorNet library, where the call to cutensornetContractSlices() will be fully asynchronous, similar to the serial execution case. Additionally, for an explicit synchronization of all MPI processes (barrier) one can make a collective call to cutensornetDistributedSynchronize().
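For instance, an explicit barrier after a contraction could be sketched as follows; the only assumption here is that cutensornetDistributedSynchronize() takes the library handle, like the other cuTensorNet calls in this guide:

      // Sketch: wait for the local contraction to finish, then synchronize all MPI processes
      HANDLE_CUDA_ERROR( cudaStreamSynchronize(stream) );
      HANDLE_ERROR( cutensornetDistributedSynchronize(handle) );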

Before termination, the MPI service needs to be finalized.

554   // Shut down MPI service
555   HANDLE_MPI_ERROR( MPI_Finalize() );


The complete MPI-automatic sample can be found in the NVIDIA/cuQuantum repository.

## Code Example (Manual Slice-Based Distributed Parallelization)¶

For advanced users, it is also possible (but more involved) to adapt Code Example (Serial) to explicitly parallelize execution of the tensor network contraction operation on multiple GPU devices. Here we will also use MPI as the communication layer. For brevity, we will show only the changes that need to be made on top of the serial example. The full MPI-manual sample code can be found in the NVIDIA/cuQuantum repository. Note that this sample does NOT require CUDA-aware MPI.

First, in addition to the headers and definitions mentioned in Headers and data types, we need to include the MPI header and define a macro to handle MPI errors. We also need to initialize the MPI service and associate each MPI process with its own GPU device, as explained previously.

20#include <mpi.h>

44#define HANDLE_MPI_ERROR(x)                                       \
45{ const auto err = x;                                             \
46  if( err != MPI_SUCCESS )                                        \
47  { char error[MPI_MAX_ERROR_STRING]; int len;                    \
48    MPI_Error_string(err, error, &len);                           \
49    printf("MPI Error: %s in line %d\n", error, __LINE__);        \
50    fflush(stdout);                                               \
51    MPI_Abort(MPI_COMM_WORLD, err);                               \
52  }                                                               \
53};

 97   // Initialize MPI
98   HANDLE_MPI_ERROR( MPI_Init(&argc, &argv) );
99   int rank {-1};
100   HANDLE_MPI_ERROR( MPI_Comm_rank(MPI_COMM_WORLD, &rank) );
101   int numProcs {0};
102   HANDLE_MPI_ERROR( MPI_Comm_size(MPI_COMM_WORLD, &numProcs) );

122   // Set GPU device based on ranks and nodes
123   int numDevices {0};
124   HANDLE_CUDA_ERROR( cudaGetDeviceCount(&numDevices) );
125   const int deviceId = rank % numDevices; // we assume that the processes are mapped to nodes in contiguous chunks
126   HANDLE_CUDA_ERROR( cudaSetDevice(deviceId) );
128   HANDLE_CUDA_ERROR( cudaGetDeviceProperties(&prop, deviceId) );


Next, we define the tensor network as described in Define tensor network and tensor sizes. In a one GPU device per process model, the tensor network, including operands and result data, is replicated on each process. The root process initializes the input data and broadcasts it to the other processes.

253   /*******************
254   * Initialize data
255   *******************/
256
257   memset(R, 0, sizeof(floatType) * elementsR);
258   if(rank == 0)
259   {
260      for (uint64_t i = 0; i < elementsA; i++)
261         A[i] = ((floatType) rand()) / RAND_MAX;
262      for (uint64_t i = 0; i < elementsB; i++)
263         B[i] = ((floatType) rand()) / RAND_MAX;
264      for (uint64_t i = 0; i < elementsC; i++)
265         C[i] = ((floatType) rand()) / RAND_MAX;
266      for (uint64_t i = 0; i < elementsD; i++)
267         D[i] = ((floatType) rand()) / RAND_MAX;
268   }
269
270   // Broadcast input data to all ranks
271   HANDLE_MPI_ERROR( MPI_Bcast(A, elementsA, floatTypeMPI, 0, MPI_COMM_WORLD) );
272   HANDLE_MPI_ERROR( MPI_Bcast(B, elementsB, floatTypeMPI, 0, MPI_COMM_WORLD) );
273   HANDLE_MPI_ERROR( MPI_Bcast(C, elementsC, floatTypeMPI, 0, MPI_COMM_WORLD) );
274   HANDLE_MPI_ERROR( MPI_Bcast(D, elementsD, floatTypeMPI, 0, MPI_COMM_WORLD) );
275
276   // Copy data to GPU
277   HANDLE_CUDA_ERROR( cudaMemcpy(rawDataIn_d[0], A, sizeA, cudaMemcpyHostToDevice) );
278   HANDLE_CUDA_ERROR( cudaMemcpy(rawDataIn_d[1], B, sizeB, cudaMemcpyHostToDevice) );
279   HANDLE_CUDA_ERROR( cudaMemcpy(rawDataIn_d[2], C, sizeC, cudaMemcpyHostToDevice) );
280   HANDLE_CUDA_ERROR( cudaMemcpy(rawDataIn_d[3], D, sizeD, cudaMemcpyHostToDevice) );
281
282   if(verbose)
283      printf("Allocated GPU memory for data, and initialize data\n");


Then, we create the library handle and tensor network descriptor on each process, as described in cuTensorNet handle and network descriptor.

Next, we find the optimal contraction path and slicing combination for our tensor network. We will run the cuTensorNet optimizer on all processes and determine which process has the best path in terms of the FLOP count. We will then pack the optimizer info object on this process, broadcast the packed buffer, and unpack it on all other processes. Each process now has the same optimizer info object, which we use to calculate the share of slices for each process.

358   // Compute the path on all ranks so that we can choose the path with the lowest cost. Note that since this is a tiny
359   // example with 4 operands, all processes will compute the same globally optimal path. This is not the case for large
360   // tensor networks. For large networks, hyper-optimization does become beneficial.
361
362   // Enforce tensor network slicing (for parallelization)
363   const int32_t min_slices = numProcs;
364   HANDLE_ERROR( cutensornetContractionOptimizerConfigSetAttribute(handle,
365                  optimizerConfig,
366                  CUTENSORNET_CONTRACTION_OPTIMIZER_CONFIG_SLICER_MIN_SLICES,
367                  &min_slices,
368                  sizeof(min_slices)) );
369
370   // Find an optimized tensor network contraction path on each MPI process
371   HANDLE_ERROR( cutensornetContractionOptimize(handle,
372                                       descNet,
373                                       optimizerConfig,
374                                       workspaceLimit,
375                                       optimizerInfo) );
376
377   // Query the obtained Flop count
378   double flops{-1.};
379   HANDLE_ERROR( cutensornetContractionOptimizerInfoGetAttribute(handle,
380                     optimizerInfo,
381                     CUTENSORNET_CONTRACTION_OPTIMIZER_INFO_FLOP_COUNT,
382                     &flops,
383                     sizeof(flops)) );
384
385   // Choose the contraction path with the lowest Flop cost
386   struct {
387      double value;
388      int rank;
389   } in{flops, rank}, out;
390   HANDLE_MPI_ERROR( MPI_Allreduce(&in, &out, 1, MPI_DOUBLE_INT, MPI_MINLOC, MPI_COMM_WORLD) );
391   const int sender = out.rank;
392   flops = out.value;
393
394   if (verbose)
395      printf("Process %d has the path with the lowest FLOP count %lf\n", sender, flops);
396
397   // Get the buffer size for optimizerInfo and broadcast it
398   size_t bufSize {0};
399   if (rank == sender)
400   {
401       HANDLE_ERROR( cutensornetContractionOptimizerInfoGetPackedSize(handle, optimizerInfo, &bufSize) );
402   }
403   HANDLE_MPI_ERROR( MPI_Bcast(&bufSize, 1, MPI_INT64_T, sender, MPI_COMM_WORLD) );
404
405   // Allocate a buffer
406   std::vector<char> buffer(bufSize);
407
408   // Pack optimizerInfo on sender and broadcast it
409   if (rank == sender)
410   {
411       HANDLE_ERROR( cutensornetContractionOptimizerInfoPackData(handle, optimizerInfo, buffer.data(), bufSize) );
412   }
413   HANDLE_MPI_ERROR( MPI_Bcast(buffer.data(), bufSize, MPI_CHAR, sender, MPI_COMM_WORLD) );
414
415   // Unpack optimizerInfo from the buffer
416   if (rank != sender)
417   {
418       HANDLE_ERROR( cutensornetUpdateContractionOptimizerInfoFromPackedData(handle, buffer.data(), bufSize, optimizerInfo) );
419   }
420
421   // Query the number of slices the tensor network execution will be split into
422   int64_t numSlices = 0;
423   HANDLE_ERROR( cutensornetContractionOptimizerInfoGetAttribute(
424                  handle,
425                  optimizerInfo,
426                  CUTENSORNET_CONTRACTION_OPTIMIZER_INFO_NUM_SLICES,
427                  &numSlices,
428                  sizeof(numSlices)) );
429   assert(numSlices > 0);
430
431   // Calculate each process's share of the slices
432   int64_t procChunk = numSlices / numProcs;
433   int extra = numSlices % numProcs;
434   int procSliceBegin = rank * procChunk + std::min(rank, extra);
435   int procSliceEnd = (rank == numProcs - 1) ? numSlices : (rank + 1) * procChunk + std::min(rank + 1, extra);


We now create the workspace descriptor and allocate memory as described in Create workspace descriptor and allocate workspace memory, and we create and auto-tune the contraction plan as described in Contraction plan and auto-tune.

Next, on each process, we create a slice group (see cutensornetSliceGroup_t) that corresponds to its share of the tensor network slices. We then provide this slice group object to the cutensornetContractSlices() function to get a partial contraction result on each process.

523   // Create a cutensornetSliceGroup_t object from a range of slice IDs
524   cutensornetSliceGroup_t sliceGroup{};
525   HANDLE_ERROR( cutensornetCreateSliceGroupFromIDRange(handle, procSliceBegin, procSliceEnd, 1, &sliceGroup) );

546      HANDLE_ERROR( cutensornetContractSlices(handle,
547                                 plan,
548                                 rawDataIn_d,
549                                 R_d,
550                                 accumulateOutput,
551                                 workDesc,
552                                 sliceGroup,
553                                 stream) );


Finally, we sum up the partial contributions to obtain the result of the tensor network contraction.

559      // Perform Allreduce operation on the output tensor
560      HANDLE_CUDA_ERROR( cudaStreamSynchronize(stream) );
561      HANDLE_CUDA_ERROR( cudaMemcpy(R, R_d, sizeR, cudaMemcpyDeviceToHost) ); // restore the output tensor on Host
562      HANDLE_MPI_ERROR( MPI_Allreduce(MPI_IN_PLACE, R, elementsR, floatTypeMPI, MPI_SUM, MPI_COMM_WORLD) );


Before termination, the MPI service needs to be finalized.

624   // Shut down MPI service
625   HANDLE_MPI_ERROR( MPI_Finalize() );


The complete MPI-manual sample can be found in the NVIDIA/cuQuantum repository.

## Useful Tips¶

• For debugging, the environment variable CUTENSORNET_LOG_LEVEL=n can be set. The level n = 0, 1, …, 5 corresponds to the logger level as described and used in cutensornetLoggerSetLevel(). The environment variable CUTENSORNET_LOG_FILE=<filepath> can be used to redirect the log output to a custom file at <filepath> instead of stdout.
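
For illustration, the same logging controls can also be set from within the program before the first cuTensorNet call, using the standard POSIX setenv() function rather than any cuTensorNet-specific API; the log file path below is only a placeholder:

   // Sketch: enable verbose cuTensorNet logging before creating the library handle
   setenv("CUTENSORNET_LOG_LEVEL", "5", /*overwrite=*/1);      // levels 0..5, as for cutensornetLoggerSetLevel()
   setenv("CUTENSORNET_LOG_FILE", "/tmp/cutensornet.log", 1);  // redirect log output from stdout to a file
   cutensornetHandle_t handle;
   HANDLE_ERROR( cutensornetCreate(&handle) );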