Examples¶
In this section, we show how to contract a tensor network using cuTensorNet. First, we describe how to compile the sample code. Then, we present an example that performs the common steps in cuTensorNet. In the example, we perform the following tensor contraction:

R_{k,l} = A_{a,b,c,d,e,f} B_{b,g,h,e,i,j} C_{m,a,g,f,i,k} D_{l,c,h,d,j,m}
We construct the code step by step, each step adding code at the end. The steps are separated by succinct multi-line comment blocks.
We recommend that the reader refer to the Overview and the cuTENSOR documentation to become familiar with the nomenclature and with cuTENSOR operations.
Compiling code¶
Assuming cuQuantum has been extracted in CUQUANTUM_ROOT and cuTENSOR in CUTENSOR_ROOT, we update the library path as follows:
export LD_LIBRARY_PATH=${CUQUANTUM_ROOT}/lib:${CUTENSOR_ROOT}/lib:${LD_LIBRARY_PATH}
Depending on your CUDA Toolkit, you might have to choose a different library version (e.g., ${CUTENSOR_ROOT}/lib/13.0).
The serial sample code discussed below (tensornet_example.cu) can be compiled via the following command:
nvcc tensornet_example.cu -I${CUQUANTUM_ROOT}/include -L${CUQUANTUM_ROOT}/lib -L${CUTENSOR_ROOT}/lib -lcutensornet -lcutensor -o tensornet_example
For static linking against the cuTensorNet library, use the following command (note that libmetis_static.a needs to be explicitly linked against, assuming it is installed through the NVIDIA CUDA Toolkit and accessible through $LIBRARY_PATH):
nvcc tensornet_example.cu -I${CUQUANTUM_ROOT}/include ${CUQUANTUM_ROOT}/lib/libcutensornet_static.a -L${CUTENSOR_ROOT}/lib -lcutensor libmetis_static.a -o tensornet_example
In order to build parallel (MPI) versions of the examples (tensornet_example_mpi_auto.cu and tensornet_example_mpi.cu), one will need to have an MPI library installed (e.g., recent Open MPI, MVAPICH, or MPICH). In particular, the automatic parallel example requires CUDA-aware MPI; see Code example (automatic slice-based distributed parallelization) below. In this case, one will need to add -I${MPI_PATH}/include and -L${MPI_PATH}/lib -lmpi to the build command:
nvcc tensornet_example_mpi_auto.cu -I${CUQUANTUM_ROOT}/include -I${MPI_PATH}/include -L${CUQUANTUM_ROOT}/lib -L${CUTENSOR_ROOT}/lib -lcutensornet -lcutensor -L${MPI_PATH}/lib -lmpi -o tensornet_example_mpi_auto
nvcc tensornet_example_mpi.cu -I${CUQUANTUM_ROOT}/include -I${MPI_PATH}/include -L${CUQUANTUM_ROOT}/lib -L${CUTENSOR_ROOT}/lib -lcutensornet -lcutensor -L${MPI_PATH}/lib -lmpi -o tensornet_example_mpi
Warning

When running tensornet_example_mpi_auto.cu without CUDA-aware MPI, the program will crash.
Note

Depending on the source of the cuQuantum package, you may need to replace lib above by lib64.
Code example (serial)¶
The following code example illustrates the common steps necessary to use cuTensorNet and also introduces typical tensor network operations. The full sample code can be found in the NVIDIA/cuQuantum repository.
Headers and data types¶
8#include <stdlib.h>
9#include <stdio.h>
10
11#include <unordered_map>
12#include <vector>
13#include <cassert>
14
15#include <cuda_runtime.h>
16#include <cutensornet.h>
17
18#define HANDLE_ERROR(x) \
19 do { \
20 const auto err = x; \
21 if (err != CUTENSORNET_STATUS_SUCCESS) \
22 { \
23 printf("Error: %s in line %d\n", cutensornetGetErrorString(err), __LINE__); \
24 fflush(stdout); \
25 exit(EXIT_FAILURE); \
26 } \
27 } while (0)
28
29#define HANDLE_CUDA_ERROR(x) \
30 do { \
31 const auto err = x; \
32 if (err != cudaSuccess) \
33 { \
34 printf("CUDA Error: %s in line %d\n", cudaGetErrorString(err), __LINE__); \
35 fflush(stdout); \
36 exit(EXIT_FAILURE); \
37 } \
38 } while (0)
39
40// Usage: DEV_ATTR(cudaDevAttrClockRate, deviceId)
41#define DEV_ATTR(ENUMCONST, DID) \
42 ({ int v; \
43 HANDLE_CUDA_ERROR(cudaDeviceGetAttribute(&v, ENUMCONST, DID)); \
44 v; })
45
46
47struct GPUTimer
48{
49 GPUTimer(cudaStream_t stream) : stream_(stream)
50 {
51 HANDLE_CUDA_ERROR(cudaEventCreate(&start_));
52 HANDLE_CUDA_ERROR(cudaEventCreate(&stop_));
53 }
54
55 ~GPUTimer()
56 {
57 HANDLE_CUDA_ERROR(cudaEventDestroy(start_));
58 HANDLE_CUDA_ERROR(cudaEventDestroy(stop_));
59 }
60
61 void start() { HANDLE_CUDA_ERROR(cudaEventRecord(start_, stream_)); }
62
63 float seconds()
64 {
65 HANDLE_CUDA_ERROR(cudaEventRecord(stop_, stream_));
66 HANDLE_CUDA_ERROR(cudaEventSynchronize(stop_));
67 float time;
68 HANDLE_CUDA_ERROR(cudaEventElapsedTime(&time, start_, stop_));
69 return time * 1e-3;
70 }
71
72private:
73 cudaEvent_t start_, stop_;
74 cudaStream_t stream_;
75};
76
77int main()
78{
79 static_assert(sizeof(size_t) == sizeof(int64_t), "Please build this sample on a 64-bit architecture!");
80
81 bool verbose = true;
82
83 // Check cuTensorNet version
84 const size_t cuTensornetVersion = cutensornetGetVersion();
85 if (verbose) printf("cuTensorNet version: %ld\n", cuTensornetVersion);
86
87 // Set GPU device
88 int numDevices{0};
89 HANDLE_CUDA_ERROR(cudaGetDeviceCount(&numDevices));
90 const int deviceId = 0;
91 HANDLE_CUDA_ERROR(cudaSetDevice(deviceId));
92 cudaDeviceProp prop;
93 HANDLE_CUDA_ERROR(cudaGetDeviceProperties(&prop, deviceId));
94
95 if (verbose)
96 {
97 printf("===== device info ======\n");
98 printf("GPU-local-id:%d\n", deviceId);
99 printf("GPU-name:%s\n", prop.name);
100 printf("GPU-clock:%d\n", DEV_ATTR(cudaDevAttrClockRate, deviceId));
101 printf("GPU-memoryClock:%d\n", DEV_ATTR(cudaDevAttrMemoryClockRate, deviceId));
102 printf("GPU-nSM:%d\n", prop.multiProcessorCount);
103 printf("GPU-major:%d\n", prop.major);
104 printf("GPU-minor:%d\n", prop.minor);
105 printf("========================\n");
106 }
107
108 typedef float floatType;
109 cudaDataType_t typeData = CUDA_R_32F;
110 cutensornetComputeType_t typeCompute = CUTENSORNET_COMPUTE_32F;
111
112 if (verbose) printf("Included headers and defined data types\n");
Define tensor network and tensor sizes¶
Next, we define the topology of the tensor network (i.e., the modes of the tensors, their extents, and their connectivity).
116 /**************************************************************************************
117 * Computing: R_{k,l} = A_{a,b,c,d,e,f} B_{b,g,h,e,i,j} C_{m,a,g,f,i,k} D_{l,c,h,d,j,m}
118 **************************************************************************************/
119
120 constexpr int32_t numInputs = 4;
121
122 // Create vectors of tensor modes
123 std::vector<std::vector<int32_t>> tensorModes{ // for input tensors & output tensor
124 // input tensors
125 {'a', 'b', 'c', 'd', 'e', 'f'}, // tensor A
126 {'b', 'g', 'h', 'e', 'i', 'j'}, // tensor B
127 {'m', 'a', 'g', 'f', 'i', 'k'}, // tensor C
128 {'l', 'c', 'h', 'd', 'j', 'm'}, // tensor D
129 // output tensor
130 {'k', 'l'}, // tensor R
131 };
132
133 // Set mode extents
134 int64_t sameExtent = 16; // setting same extent for simplicity. In principle extents can differ.
135 std::unordered_map<int32_t, int64_t> extent;
136 for (auto& vec : tensorModes)
137 {
138 for (auto& mode : vec)
139 {
140 extent[mode] = sameExtent;
141 }
142 }
143
144 // Create a vector of extents for each tensor
145 std::vector<std::vector<int64_t>> tensorExtents; // for input tensors & output tensor
146 tensorExtents.resize(numInputs + 1); // hold inputs + output tensors
147 for (int32_t t = 0; t < numInputs + 1; ++t)
148 {
149 for (auto& mode : tensorModes[t]) tensorExtents[t].push_back(extent[mode]);
150 }
151
152 if (verbose) printf("Defined tensor network, modes, and extents\n");
Allocate memory and initialize data¶
Next, we allocate memory for the tensor network operands and initialize them to random values.
155 /*****************
156 * Allocating data
157 *****************/
158
159 std::vector<size_t> tensorElements(numInputs + 1); // for input tensors & output tensor
160 std::vector<size_t> tensorSizes(numInputs + 1); // for input tensors & output tensor
161 size_t totalSize = 0;
162 for (int32_t t = 0; t < numInputs + 1; ++t)
163 {
164 size_t numElements = 1;
165 for (auto& mode : tensorModes[t]) numElements *= extent[mode];
166 tensorElements[t] = numElements;
167
168 tensorSizes[t] = sizeof(floatType) * numElements;
169 totalSize += tensorSizes[t];
170 }
171
172 if (verbose) printf("Total GPU memory used for tensor storage: %.2f GiB\n", (totalSize) / 1024. / 1024. / 1024);
173
174 void* tensorData_d[numInputs + 1]; // for input tensors & output tensor
175 for (int32_t t = 0; t < numInputs + 1; ++t)
176 {
177 HANDLE_CUDA_ERROR(cudaMalloc((void**)&tensorData_d[t], tensorSizes[t]));
178 }
179
180 floatType* tensorData_h[numInputs + 1]; // for input tensors & output tensor
181 for (int32_t t = 0; t < numInputs + 1; ++t)
182 {
183 tensorData_h[t] = (floatType*)malloc(tensorSizes[t]);
184 if (tensorData_h[t] == NULL)
185 {
186 printf("Error: Host memory allocation failed!\n");
187 return -1;
188 }
189 }
190
191 /*****************
192 * Initialize data
193 *****************/
194
195 // init output tensor to all 0s
196 memset(tensorData_h[numInputs], 0, tensorSizes[numInputs]);
197 // init input tensors to random values
198 for (int32_t t = 0; t < numInputs; ++t)
199 {
200 for (size_t e = 0; e < tensorElements[t]; ++e) tensorData_h[t][e] = ((floatType)rand()) / RAND_MAX;
201 }
202 // copy input data to device buffers
203 for (int32_t t = 0; t < numInputs; ++t)
204 {
205 HANDLE_CUDA_ERROR(cudaMemcpy(tensorData_d[t], tensorData_h[t], tensorSizes[t], cudaMemcpyHostToDevice));
206 }
cuTensorNet handle and network descriptor¶
Next, we initialize the cuTensorNet library via cutensornetCreate(). Note that the created library context will be associated with the currently active GPU. We then create the network descriptor via cutensornetCreateNetwork() and append the input tensors with the desired tensor modes and extents, as well as the data type. Optionally, we can set the output tensor modes (if skipped, the output modes will be inferred). We can also, optionally, set the compute type on the network (if skipped, a default compute type corresponding to the data type will be used).
209 /*************
210 * cuTensorNet
211 *************/
212
213 cudaStream_t stream;
214 HANDLE_CUDA_ERROR(cudaStreamCreate(&stream));
215
216 cutensornetHandle_t handle;
217 HANDLE_ERROR(cutensornetCreate(&handle));
218
219 if (verbose) printf("Allocated GPU memory for data, initialized data, and created library handle\n");
220
221 /****************
222 * Create Network
223 ****************/
224
225 // Set up tensor network
226 cutensornetNetworkDescriptor_t networkDesc;
227 HANDLE_ERROR(cutensornetCreateNetwork(handle, &networkDesc));
228
229 int64_t tensorIDs[numInputs]; // for input tensors
230 // attach the input tensors to the network
231 for (int32_t t = 0; t < numInputs; ++t)
232 {
233 HANDLE_ERROR(cutensornetNetworkAppendTensor(handle,
234 networkDesc,
235 tensorModes[t].size(),
236 tensorExtents[t].data(),
237 tensorModes[t].data(),
238 NULL,
239 typeData,
240 &tensorIDs[t]));
241 }
242
243 // set the output tensor
244 HANDLE_ERROR(cutensornetNetworkSetOutputTensor(handle,
245 networkDesc,
246 tensorModes[numInputs].size(),
247 tensorModes[numInputs].data(),
248 typeData));
249
250 // set the network compute type
251 HANDLE_ERROR(cutensornetNetworkSetAttribute(handle,
252 networkDesc,
253 CUTENSORNET_NETWORK_COMPUTE_TYPE,
254 &typeCompute,
255 sizeof(typeCompute)));
256 if (verbose) printf("Initialized the cuTensorNet library and created a tensor network descriptor\n");
Optimal contraction order and slicing¶
At this stage, we can deploy the cuTensorNet optimizer to find an optimized contraction path and slicing combination. We choose a limit for the workspace needed to perform the contraction based on the available memory resources, and provide it to the optimizer as a constraint. We then create an optimizer configuration object of type cutensornetContractionOptimizerConfig_t to specify various optimizer options, and provide it to the optimizer, which is invoked via cutensornetContractionOptimize(). The results from the optimizer will be returned in an optimizer info object of type cutensornetContractionOptimizerInfo_t.
259 /******************************************************
260 * Choose workspace limit based on available resources.
261 ******************************************************/
262
263 size_t freeMem, totalMem;
264 HANDLE_CUDA_ERROR(cudaMemGetInfo(&freeMem, &totalMem));
265 uint64_t workspaceLimit = (uint64_t)((double)freeMem * 0.9);
266 if (verbose) printf("Workspace limit = %lu\n", workspaceLimit);
267
268 /*******************************
269 * Find "optimal" contraction order and slicing
270 *******************************/
271
272 cutensornetContractionOptimizerConfig_t optimizerConfig;
273 HANDLE_ERROR(cutensornetCreateContractionOptimizerConfig(handle, &optimizerConfig));
274
275 // Set the desired number of hyper-samples (defaults to 0)
276 int32_t num_hypersamples = 8;
277 HANDLE_ERROR(cutensornetContractionOptimizerConfigSetAttribute(handle,
278 optimizerConfig,
279 CUTENSORNET_CONTRACTION_OPTIMIZER_CONFIG_HYPER_NUM_SAMPLES,
280 &num_hypersamples,
281 sizeof(num_hypersamples)));
282
283 // Create contraction optimizer info and find an optimized contraction path
284 cutensornetContractionOptimizerInfo_t optimizerInfo;
285 HANDLE_ERROR(cutensornetCreateContractionOptimizerInfo(handle, networkDesc, &optimizerInfo));
286
287 HANDLE_ERROR(cutensornetContractionOptimize(handle,
288 networkDesc,
289 optimizerConfig,
290 workspaceLimit,
291 optimizerInfo));
292
293 // Query the number of slices the tensor network execution will be split into
294 int64_t numSlices = 0;
295 HANDLE_ERROR(cutensornetContractionOptimizerInfoGetAttribute(handle,
296 optimizerInfo,
297 CUTENSORNET_CONTRACTION_OPTIMIZER_INFO_NUM_SLICES,
298 &numSlices,
299 sizeof(numSlices)));
300 assert(numSlices > 0);
301
302 if (verbose) printf("Found an optimized contraction path using cuTensorNet optimizer\n");
It is also possible to bypass the cuTensorNet optimizer and import a pre-determined contraction path, as well as slicing information, directly into the optimizer info object via cutensornetContractionOptimizerInfoSetAttribute(), and then attach it to the network via cutensornetNetworkSetOptimizerInfo(), as sketched below.
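The following is a minimal sketch of this alternative under two assumptions: that the cutensornetContractionPath_t / cutensornetNodePair_t members are (numContractions, data) and (first, second), respectively, and that the path follows the usual pairwise convention (as in numpy.einsum_path, where the two contracted tensors are removed from the working list and the intermediate is appended at its end). The specific path chosen below is a hypothetical choice for illustration; consult the cutensornetContractionOptimizerInfoSetAttribute() documentation for the exact path format.

// Sketch: supply a user-defined contraction path instead of running the optimizer.
// Uses the handle, networkDesc, and optimizerInfo objects created above.
cutensornetNodePair_t nodePairs[] = {
    {0, 1},  // contract A (0) with B (1); the intermediate is appended to the list
    {0, 2},  // contract C (now index 0) with the A*B intermediate (now index 2)
    {0, 1}   // contract D with the previous intermediate, producing R
};
cutensornetContractionPath_t path;
path.numContractions = 3;
path.data = nodePairs;

// Import the path into the optimizer info object ...
HANDLE_ERROR(cutensornetContractionOptimizerInfoSetAttribute(handle,
                                                             optimizerInfo,
                                                             CUTENSORNET_CONTRACTION_OPTIMIZER_INFO_PATH,
                                                             &path,
                                                             sizeof(path)));

// ... and attach the (modified) optimizer info object to the network
HANDLE_ERROR(cutensornetNetworkSetOptimizerInfo(handle, networkDesc, optimizerInfo));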
Create workspace descriptor and allocate workspace memory¶
Next, we create a workspace descriptor, compute the workspace sizes, and query the minimum workspace size needed to contract the network. We then allocate device memory for the workspace and set this in the workspace descriptor. The workspace descriptor will be provided to the contraction preparation and computation calls.
305 /*******************************
306 * Create workspace descriptor, allocate workspace, and set it.
307 *******************************/
308
309 cutensornetWorkspaceDescriptor_t workDesc;
310 HANDLE_ERROR(cutensornetCreateWorkspaceDescriptor(handle, &workDesc));
311
312 int64_t requiredWorkspaceSize = 0;
313 HANDLE_ERROR(cutensornetWorkspaceComputeContractionSizes(handle,
314 networkDesc,
315 optimizerInfo,
316 workDesc));
317
318 HANDLE_ERROR(cutensornetWorkspaceGetMemorySize(handle,
319 workDesc,
320 CUTENSORNET_WORKSIZE_PREF_MIN,
321 CUTENSORNET_MEMSPACE_DEVICE,
322 CUTENSORNET_WORKSPACE_SCRATCH,
323 &requiredWorkspaceSize));
324
325 void* work = nullptr;
326 HANDLE_CUDA_ERROR(cudaMalloc(&work, requiredWorkspaceSize));
327
328 HANDLE_ERROR(cutensornetWorkspaceSetMemory(handle,
329 workDesc,
330 CUTENSORNET_MEMSPACE_DEVICE,
331 CUTENSORNET_WORKSPACE_SCRATCH,
332 work,
333 requiredWorkspaceSize));
334
335 if (verbose) printf("Allocated and set up the GPU workspace\n");
Contraction preparation and auto-tuning¶
We prepare the tensor network contraction via cutensornetNetworkPrepareContraction(), and set the tensors' data buffers and strides via cutensornetNetworkSetInputTensorMemory() and cutensornetNetworkSetOutputTensorMemory(). Optionally, we can auto-tune the contraction via cutensornetNetworkAutotuneContraction(), such that cuTENSOR selects the best kernel for each pairwise contraction. This prepared network contraction can be reused for many (possibly different) data inputs, avoiding the cost of initializing it redundantly.
338 /**************************
339 * Prepare the contraction.
340 **************************/
341
342 HANDLE_ERROR(cutensornetNetworkPrepareContraction(handle,
343 networkDesc,
344 workDesc));
345
346 // set tensor's data buffers and strides
347 for (int32_t t = 0; t < numInputs; ++t)
348 {
349 HANDLE_ERROR(cutensornetNetworkSetInputTensorMemory(handle,
350 networkDesc,
351 tensorIDs[t],
352 tensorData_d[t],
353 NULL));
354 }
355 HANDLE_ERROR(cutensornetNetworkSetOutputTensorMemory(handle,
356 networkDesc,
357 tensorData_d[numInputs],
358 NULL));
359 /****************************************************************
360 * Optional: Auto-tune the contraction to pick the fastest kernel
361 * for each pairwise tensor contraction.
362 ****************************************************************/
363 cutensornetNetworkAutotunePreference_t autotunePref;
364 HANDLE_ERROR(cutensornetCreateNetworkAutotunePreference(handle,
365 &autotunePref));
366
367 const int numAutotuningIterations = 5; // may be 0
368 HANDLE_ERROR(cutensornetNetworkAutotunePreferenceSetAttribute(handle,
369 autotunePref,
370 CUTENSORNET_NETWORK_AUTOTUNE_MAX_ITERATIONS,
371 &numAutotuningIterations,
372 sizeof(numAutotuningIterations)));
373
374 // Modify the network again to find the best pair-wise contractions
375 HANDLE_ERROR(cutensornetNetworkAutotuneContraction(handle,
376 networkDesc,
377 workDesc,
378 autotunePref,
379 stream));
380
381 HANDLE_ERROR(cutensornetDestroyNetworkAutotunePreference(autotunePref));
382
383 if (verbose) printf("Prepared the network contraction for cuTensorNet and optionally auto-tuned it\n");
Tensor network contraction execution¶
Finally, we contract the tensor network as many times as needed via cutensornetNetworkContract(), possibly with different input data each time (reset the data buffers and strides via cutensornetNetworkSetInputTensorMemory()). Tensor network slices, captured as a cutensornetSliceGroup_t object, are computed using the same prepared network contraction. For convenience, NULL can be provided to the cutensornetNetworkContract() function instead of a slice group when the goal is to contract all the slices in the network. We also clean up and free allocated resources.
386 /****************************************
387 * Execute the tensor network contraction
388 ****************************************/
389
390 // Create a cutensornetSliceGroup_t object from a range of slice IDs
391 cutensornetSliceGroup_t sliceGroup{};
392 HANDLE_ERROR(cutensornetCreateSliceGroupFromIDRange(handle, 0, numSlices, 1, &sliceGroup));
393
394 GPUTimer timer{stream};
395 double minTimeCUTENSORNET = 1e100;
396 const int numRuns = 3; // number of repeats to get stable performance results
397 for (int i = 0; i < numRuns; ++i)
398 {
399 // reset the output tensor data
400 HANDLE_CUDA_ERROR(cudaMemcpy(tensorData_d[numInputs], tensorData_h[numInputs], tensorSizes[numInputs], cudaMemcpyHostToDevice));
401 HANDLE_CUDA_ERROR(cudaDeviceSynchronize());
402
403 /*
404 * Contract all slices of the tensor network
405 */
406 timer.start();
407
408 int32_t accumulateOutput = 0; // output tensor data will be overwritten
409 HANDLE_ERROR(cutensornetNetworkContract(handle,
410 networkDesc,
411 accumulateOutput,
412 workDesc,
413 sliceGroup, // alternatively, NULL can also be used to contract over all slices instead of specifying a sliceGroup object
414 stream));
415
416 // Synchronize and measure best timing
417 auto time = timer.seconds();
418 minTimeCUTENSORNET = (time > minTimeCUTENSORNET) ? minTimeCUTENSORNET : time;
419 }
420
421 if (verbose) printf("Contracted the tensor network, each slice used the same prepared contraction\n");
422
423 // Print the 1-norm of the output tensor (verification)
424 HANDLE_CUDA_ERROR(cudaStreamSynchronize(stream));
425 // restore the output tensor on Host
426 HANDLE_CUDA_ERROR(cudaMemcpy(tensorData_h[numInputs], tensorData_d[numInputs], tensorSizes[numInputs], cudaMemcpyDeviceToHost));
427 double norm1 = 0.0;
428 for (int64_t i = 0; i < tensorElements[numInputs]; ++i)
429 {
430 norm1 += std::abs(tensorData_h[numInputs][i]);
431 }
432 if (verbose) printf("Computed the 1-norm of the output tensor: %e\n", norm1);
433
434 /*************************/
435
436 // Query the total Flop count for the tensor network contraction
437 double flops{0.0};
438 HANDLE_ERROR(cutensornetContractionOptimizerInfoGetAttribute(handle,
439 optimizerInfo,
440 CUTENSORNET_CONTRACTION_OPTIMIZER_INFO_FLOP_COUNT,
441 &flops,
442 sizeof(flops)));
443
444 if (verbose)
445 {
446 printf("Number of tensor network slices = %ld\n", numSlices);
447 printf("Tensor network contraction time (ms) = %.3f\n", minTimeCUTENSORNET * 1000.f);
448 }
449
451 /****************
452 * Free resources
453 ****************/
454
455 // Free cuTensorNet resources
456 HANDLE_ERROR(cutensornetDestroySliceGroup(sliceGroup));
457 HANDLE_ERROR(cutensornetDestroyWorkspaceDescriptor(workDesc));
458 HANDLE_ERROR(cutensornetDestroyContractionOptimizerInfo(optimizerInfo));
459 HANDLE_ERROR(cutensornetDestroyContractionOptimizerConfig(optimizerConfig));
460 HANDLE_ERROR(cutensornetDestroyNetwork(networkDesc));
461 HANDLE_ERROR(cutensornetDestroy(handle));
462
463 HANDLE_CUDA_ERROR(cudaStreamDestroy(stream));
464
465 // Free Host and GPU memory resources
466 for (int32_t t = 0; t < numInputs + 1; ++t)
467 {
468 if (tensorData_h[t]) free(tensorData_h[t]);
469 if (tensorData_d[t]) cudaFree(tensorData_d[t]);
470 }
471 if (work) cudaFree(work);
472
473 if (verbose) printf("Freed resources and exited\n");
474
475 return 0;
476}
Recall that the full sample code can be found in the NVIDIA/cuQuantum repository.
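As noted in Tensor network contraction execution, the prepared contraction can be reused with new input data without repeating the path optimization or preparation steps. Below is a minimal sketch of such reuse, which (in the full sample) would be placed before the resources are freed; newA_d is a hypothetical device buffer holding new data for tensor A with the same modes and extents.

// Sketch: rebind a new device buffer for one input tensor and contract again,
// reusing the prepared network (handle, networkDesc, workDesc, tensorIDs,
// and stream are the objects created in the sample above).
HANDLE_ERROR(cutensornetNetworkSetInputTensorMemory(handle,
                                                    networkDesc,
                                                    tensorIDs[0],  // tensor A
                                                    newA_d,        // hypothetical new data buffer
                                                    NULL));        // NULL strides => default dense layout

int32_t accumulateOutput = 0; // overwrite the output tensor
HANDLE_ERROR(cutensornetNetworkContract(handle,
                                        networkDesc,
                                        accumulateOutput,
                                        workDesc,
                                        NULL,   // NULL => contract all slices
                                        stream));
HANDLE_CUDA_ERROR(cudaStreamSynchronize(stream));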
Code example (automatic slice-based distributed parallelization)¶
It is straightforward to adapt Code example (serial) and enable automatic parallel execution across multiple GPU devices and across multiple nodes. We will illustrate this with an example using the Message Passing Interface (MPI) as the communication layer. Below we show the minor additions that need to be made in order to enable distributed parallel execution without making any changes to the original serial source code. The full MPI-automatic sample code can be found in the NVIDIA/cuQuantum repository. To enable automatic parallelism, cuTensorNet requires that
- the environment variable $CUTENSORNET_COMM_LIB is set to the path to the wrapper shared library libcutensornet_distributed_interface_mpi.so, and
- the executable is linked to a CUDA-aware MPI library.
The detailed instructions for setting these up are given in the installation guide.
First, in addition to the headers and definitions mentioned in Headers and data types, we include the MPI header and define a macro to handle MPI errors. We also need to initialize the MPI service and assign a unique GPU device to each MPI process that will later be associated with the cuTensorNet library handle created inside the MPI process.
20#include <mpi.h>
48#define HANDLE_MPI_ERROR(x) \
49 do { \
50 const auto err = x; \
51 if (err != MPI_SUCCESS) \
52 { \
53 char error[MPI_MAX_ERROR_STRING]; \
54 int len; \
55 MPI_Error_string(err, error, &len); \
56 printf("MPI Error: %s in line %d\n", error, __LINE__); \
57 fflush(stdout); \
58 MPI_Abort(MPI_COMM_WORLD, err); \
59 } \
60 } while (0)
The MPI service initialization must precede the first cutensornetCreate() call, which creates a cuTensorNet library handle. An attempt to call cutensornetCreate() before initializing the MPI service will result in an error.
106 // Initialize MPI
107 HANDLE_MPI_ERROR(MPI_Init(&argc, &argv));
108 int rank{-1};
109 HANDLE_MPI_ERROR(MPI_Comm_rank(MPI_COMM_WORLD, &rank));
110 int numProcs{0};
111 HANDLE_MPI_ERROR(MPI_Comm_size(MPI_COMM_WORLD, &numProcs));
If multiple GPU devices located on the same node are visible to an MPI process, we need to pick an exclusive GPU device for each MPI process. If the mpirun (or mpiexec) command provided by your MPI library implementation sets up an environment variable that shows the rank of the respective MPI process during its invocation, you can use that environment variable to set CUDA_VISIBLE_DEVICES to point to a specific single GPU device assigned to the MPI process exclusively (for example, Open MPI provides ${OMPI_COMM_WORLD_LOCAL_RANK} for this purpose). Otherwise, the GPU device can be set manually, as shown below.
129 // Set GPU device based on ranks and nodes
130 int numDevices{0};
131 HANDLE_CUDA_ERROR(cudaGetDeviceCount(&numDevices));
132 const int deviceId = rank % numDevices; // we assume that the processes are mapped to nodes in contiguous chunks
133 HANDLE_CUDA_ERROR(cudaSetDevice(deviceId));
134 cudaDeviceProp prop;
135 HANDLE_CUDA_ERROR(cudaGetDeviceProperties(&prop, deviceId));
Next we define the tensor network as described in Define tensor network and tensor sizes. In a one GPU device per process model, the tensor network, including operands and result data, is replicated on each process. The root process initializes the input data and broadcasts it to the other processes.
243 /*******************
244 * Initialize data
245 *******************/
246
247 // init output tensor to all 0s
248 memset(tensorData_h[numInputs], 0, tensorSizes[numInputs]);
249 if (rank == 0)
250 {
251 // init input tensors to random values
252 for (int32_t t = 0; t < numInputs; ++t)
253 {
254 for (uint64_t i = 0; i < tensorElements[t]; ++i) tensorData_h[t][i] = ((floatType)rand()) / RAND_MAX;
255 }
256 }
257
258 // Broadcast input data to all ranks
259 for (int32_t t = 0; t < numInputs; ++t)
260 {
261 HANDLE_MPI_ERROR(MPI_Bcast(tensorData_h[t], tensorElements[t], floatTypeMPI, 0, MPI_COMM_WORLD));
262 }
263
264 // copy input data to device buffers
265 for (int32_t t = 0; t < numInputs; ++t)
266 {
267 HANDLE_CUDA_ERROR(cudaMemcpy(tensorData_d[t], tensorData_h[t], tensorSizes[t], cudaMemcpyHostToDevice));
268 }
Once the MPI service has been initialized and the cuTensorNet library handle has been created afterwards, one can activate distributed parallel execution by calling cutensornetDistributedResetConfiguration(). Per standard practice, the user's code needs to create a duplicate MPI communicator via MPI_Comm_dup. The duplicate MPI communicator is then associated with the cuTensorNet library handle by passing a pointer to the duplicate MPI communicator, together with its size (in bytes), to the cutensornetDistributedResetConfiguration() call. The MPI communicator will be stored inside the cuTensorNet library handle such that all subsequent calls to the tensor network contraction path finder and tensor network contraction executor will be parallelized across all participating MPI processes (each MPI process is associated with its own GPU).
333 /*******************************
334 * Activate distributed (parallel) execution prior to
335 * calling contraction path finder and contraction executor
336 *******************************/
337 // HANDLE_ERROR( cutensornetDistributedResetConfiguration(handle, NULL, 0) ); // resets back to serial execution
338 MPI_Comm cutnComm;
339 HANDLE_MPI_ERROR(MPI_Comm_dup(MPI_COMM_WORLD, &cutnComm)); // duplicate MPI communicator
340 HANDLE_ERROR(cutensornetDistributedResetConfiguration(handle, &cutnComm, sizeof(cutnComm)));
341 if (verbose) printf("Reset distributed MPI configuration\n");
Note

cutensornetDistributedResetConfiguration() is a collective call that must be executed by all participating MPI processes.
The API of this distributed parallelization model makes it straightforward to run source code written for serial execution on multiple GPUs/nodes. Essentially, all MPI processes will execute exactly the same (serial) source code while automatically performing distributed parallelization inside the tensor network contraction path finder and tensor network contraction executor calls. The parallelization of the tensor network contraction path finder will only occur when the number of requested hyper-samples is larger than zero. Regardless of that, however, activation of the distributed parallelization must precede the invocation of the tensor network contraction path finder. That is, the tensor network contraction path finder and tensor network contraction execution invocations must be done strictly after activating the distributed parallelization via cutensornetDistributedResetConfiguration().
When the distributed configuration is set to a parallel mode, the user is normally expected to invoke tensor network contraction execution by calling the cutensornetNetworkContract() function with the full range of tensor network slices, which will then be automatically distributed across all MPI processes. Since the tensor network must be sufficiently large to benefit from distributed execution, smaller tensor networks (those consisting of only a single slice) can still be processed without distributed parallelization, which is achieved by calling cutensornetDistributedResetConfiguration() with a NULL argument in place of the MPI communicator pointer (as before, this should be done prior to calling the tensor network contraction path finder). That is, the switch between distributed parallelization and redundant serial execution can be made on a per-tensor-network basis, as sketched below. Users can decide which (larger) tensor networks to process in a parallel manner and which (smaller) ones to process serially and redundantly, by resetting the distributed configuration appropriately. In both cases, all MPI processes will produce the same output tensor (result) at the end of the tensor network execution.
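A minimal sketch of this per-network switching, using only the calls shown above (smallNetworkDesc and largeNetworkDesc are hypothetical network descriptors):

// Sketch: redundant serial execution for a small (single-slice) network,
// distributed parallel execution for a large one. Both reset calls are collective
// over all MPI processes and must precede the corresponding path finder and
// contraction calls. cutnComm is the duplicated MPI communicator created above.

// Small network: switch back to serial execution (replicated on every rank)
HANDLE_ERROR(cutensornetDistributedResetConfiguration(handle, NULL, 0));
// ... run the path finder and contract smallNetworkDesc (hypothetical) ...

// Large network: re-activate distributed parallelization across all ranks
HANDLE_ERROR(cutensornetDistributedResetConfiguration(handle, &cutnComm, sizeof(cutnComm)));
// ... run the path finder and contract largeNetworkDesc (hypothetical) ...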
Note

In the current version of the cuTensorNet library, the parallel tensor network contraction execution triggered by the cutensornetNetworkContract() call will block the provided CUDA stream as well as the calling CPU thread until the execution has completed on all MPI processes. This is a temporary limitation that will be lifted in future versions of the cuTensorNet library, where the call to cutensornetNetworkContract() will be fully asynchronous, similar to the serial execution case. Additionally, for an explicit synchronization of all MPI processes (barrier), one can make a collective call to cutensornetDistributedSynchronize().
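For instance, a minimal sketch of such a barrier (assuming cutensornetDistributedSynchronize() takes only the library handle), placed wherever all ranks need to be aligned, e.g., before collecting timings:

// Sketch: explicit barrier across all MPI processes registered with the
// cuTensorNet distributed configuration; this is a collective call on all ranks.
HANDLE_ERROR(cutensornetDistributedSynchronize(handle));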
Before termination, the MPI service needs to be finalized.
552 // Shut down MPI service
553 HANDLE_MPI_ERROR(MPI_Finalize());
The complete MPI-automatic sample can be found in the NVIDIA/cuQuantum repository.
Code example (manual slice-based distributed parallelization)¶
For advanced users, it is also possible (but more involved) to adapt Code example (serial) to explicitly parallelize execution of the tensor network contraction operation on multiple GPU devices. Here we will also use MPI as the communication layer. For brevity, we will show only the changes that need to be made on top of the serial example. The full MPI-manual sample code can be found in the NVIDIA/cuQuantum repository. Note that this sample does NOT require CUDA-aware MPI.
First, in addition to the headers and definitions mentioned in Headers and data types, we need to include the MPI header and define a macro to handle MPI errors. We also need to initialize the MPI service and associate each MPI process with its own GPU device, as explained previously.
20#include <mpi.h>
48#define HANDLE_MPI_ERROR(x) \
49 do { \
50 const auto err = x; \
51 if (err != MPI_SUCCESS) \
52 { \
53 char error[MPI_MAX_ERROR_STRING]; \
54 int len; \
55 MPI_Error_string(err, error, &len); \
56 printf("MPI Error: %s in line %d\n", error, __LINE__); \
57 fflush(stdout); \
58 MPI_Abort(MPI_COMM_WORLD, err); \
59 } \
60 } while (0)
106 // Initialize MPI
107 HANDLE_MPI_ERROR(MPI_Init(&argc, &argv));
108 int rank{-1};
109 HANDLE_MPI_ERROR(MPI_Comm_rank(MPI_COMM_WORLD, &rank));
110 int numProcs{0};
111 HANDLE_MPI_ERROR(MPI_Comm_size(MPI_COMM_WORLD, &numProcs));
129 // Set GPU device based on ranks and nodes
130 int numDevices{0};
131 HANDLE_CUDA_ERROR(cudaGetDeviceCount(&numDevices));
132 const int deviceId = rank % numDevices; // we assume that the processes are mapped to nodes in contiguous chunks
133 HANDLE_CUDA_ERROR(cudaSetDevice(deviceId));
134 cudaDeviceProp prop;
135 HANDLE_CUDA_ERROR(cudaGetDeviceProperties(&prop, deviceId));
Next, we define the tensor network as described in Define tensor network and tensor sizes. In a one GPU device per process model, the tensor network, including operands and result data, is replicated on each process. The root process initializes the input data and broadcasts it to the other processes.
243 /*******************
244 * Initialize data
245 *******************/
246
247 // init output tensor to all 0s
248 memset(tensorData_h[numInputs], 0, tensorSizes[numInputs]);
249 if (rank == 0)
250 {
251 // init input tensors to random values
252 for (int32_t t = 0; t < numInputs; ++t)
253 {
254 for (uint64_t i = 0; i < tensorElements[t]; ++i) tensorData_h[t][i] = ((floatType)rand()) / RAND_MAX;
255 }
256 }
257
258 // Broadcast input data to all ranks
259 for (int32_t t = 0; t < numInputs; ++t)
260 {
261 HANDLE_MPI_ERROR(MPI_Bcast(tensorData_h[t], tensorElements[t], floatTypeMPI, 0, MPI_COMM_WORLD));
262 }
263
264 // copy input data to device buffers
265 for (int32_t t = 0; t < numInputs; ++t)
266 {
267 HANDLE_CUDA_ERROR(cudaMemcpy(tensorData_d[t], tensorData_h[t], tensorSizes[t], cudaMemcpyHostToDevice));
268 }
Then we create the library handle and tensor network descriptor on each process, as described in cuTensorNet handle and network descriptor. Next, we find the optimal contraction path and slicing combination for our tensor network. We run the cuTensorNet optimizer on all processes and determine which process has the best path in terms of the FLOP count. We then pack the optimizer info object on this process, broadcast the packed buffer, and unpack it on all other processes. Each process now has the same optimizer info object, which we use to calculate the share of slices for each process. Because the optimizer info object is modified as a result of unifying it across processes, we need to update the network with the modified optimizer info object via cutensornetNetworkSetOptimizerInfo().
353 // Compute the path on all ranks so that we can choose the path with the lowest cost. Note that since this is a tiny
354 // example with 4 operands, all processes will compute the same globally optimal path. This is not the case for large
355 // tensor networks. For large networks, hyper-optimization does become beneficial.
356
357 // Enforce tensor network slicing (for parallelization)
358 const int32_t min_slices = numProcs;
359 HANDLE_ERROR(cutensornetContractionOptimizerConfigSetAttribute(handle,
360 optimizerConfig,
361 CUTENSORNET_CONTRACTION_OPTIMIZER_CONFIG_SLICER_MIN_SLICES,
362 &min_slices,
363 sizeof(min_slices)));
364
365 // Find an optimized tensor network contraction path on each MPI process
366 HANDLE_ERROR(cutensornetContractionOptimize(handle,
367 networkDesc,
368 optimizerConfig,
369 workspaceLimit,
370 optimizerInfo));
371
372 // Query the obtained Flop count
373 double flops{-1.};
374 HANDLE_ERROR(cutensornetContractionOptimizerInfoGetAttribute(handle,
375 optimizerInfo,
376 CUTENSORNET_CONTRACTION_OPTIMIZER_INFO_FLOP_COUNT,
377 &flops,
378 sizeof(flops)));
379
380 // Choose the contraction path with the lowest Flop cost
381 struct
382 {
383 double value;
384 int rank;
385 } in{flops, rank}, out;
386
387 HANDLE_MPI_ERROR(MPI_Allreduce(&in, &out, 1, MPI_DOUBLE_INT, MPI_MINLOC, MPI_COMM_WORLD));
388 const int sender = out.rank;
389 flops = out.value;
390
391 if (verbose) printf("Process %d has the path with the lowest FLOP count %lf\n", sender, flops);
392
393 // Get the buffer size for optimizerInfo and broadcast it
394 size_t bufSize{0};
395 if (rank == sender)
396 {
397 HANDLE_ERROR(cutensornetContractionOptimizerInfoGetPackedSize(handle, optimizerInfo, &bufSize));
398 }
399 HANDLE_MPI_ERROR(MPI_Bcast(&bufSize, 1, MPI_INT64_T, sender, MPI_COMM_WORLD));
400
401 // Allocate a buffer
402 std::vector<char> buffer(bufSize);
403
404 // Pack optimizerInfo on sender and broadcast it
405 if (rank == sender)
406 {
407 HANDLE_ERROR(cutensornetContractionOptimizerInfoPackData(handle, optimizerInfo, buffer.data(), bufSize));
408 }
409 HANDLE_MPI_ERROR(MPI_Bcast(buffer.data(), bufSize, MPI_CHAR, sender, MPI_COMM_WORLD));
410
411 // Unpack optimizerInfo from the buffer
412 if (rank != sender)
413 {
414 HANDLE_ERROR(
415 cutensornetUpdateContractionOptimizerInfoFromPackedData(handle, buffer.data(), bufSize, optimizerInfo));
416 }
417
418 // Update the network with the modified optimizer info
419 HANDLE_ERROR(cutensornetNetworkSetOptimizerInfo(handle, networkDesc, optimizerInfo));
420
421 // Query the number of slices the tensor network execution will be split into
422 int64_t numSlices = 0;
423 HANDLE_ERROR(cutensornetContractionOptimizerInfoGetAttribute(handle,
424 optimizerInfo,
425 CUTENSORNET_CONTRACTION_OPTIMIZER_INFO_NUM_SLICES,
426 &numSlices,
427 sizeof(numSlices)));
428 assert(numSlices > 0);
429
430 // Calculate each process's share of the slices
431 int64_t procChunk = numSlices / numProcs;
432 int extra = numSlices % numProcs;
433 int procSliceBegin = rank * procChunk + std::min(rank, extra);
434 int procSliceEnd = (rank == numProcs - 1) ? numSlices : (rank + 1) * procChunk + std::min(rank + 1, extra);
We now create the workspace descriptor and allocate memory as described in Create workspace descriptor and allocate workspace memory, and prepare (and optionally auto-tune) the tensor network contraction as described in Contraction preparation and auto-tuning. Next, on each process, we create a slice group (see cutensornetSliceGroup_t) that corresponds to its share of the tensor network slices. We then provide this slice group object to the cutensornetNetworkContract() function to get a partial contraction result on each process.
525 // Create a cutensornetSliceGroup_t object from a range of slice IDs
526 cutensornetSliceGroup_t sliceGroup{};
527 HANDLE_ERROR(cutensornetCreateSliceGroupFromIDRange(handle, procSliceBegin, procSliceEnd, 1, &sliceGroup));
548 HANDLE_ERROR(cutensornetNetworkContract(handle,
549 networkDesc,
550 accumulateOutput,
551 workDesc,
552 sliceGroup,
553 stream));
Finally, we sum up the partial contributions to obtain the result of the tensor network contraction.
559 // Perform Allreduce operation on the output tensor
560 HANDLE_CUDA_ERROR(cudaStreamSynchronize(stream));
561 // restore the output tensor on Host
562 HANDLE_CUDA_ERROR(cudaMemcpy(tensorData_h[numInputs], tensorData_d[numInputs], tensorSizes[numInputs], cudaMemcpyDeviceToHost));
563 HANDLE_MPI_ERROR(MPI_Allreduce(MPI_IN_PLACE, tensorData_h[numInputs], tensorElements[numInputs], floatTypeMPI, MPI_SUM, MPI_COMM_WORLD));
Before termination, the MPI service needs to be finalized.
624 // Shut down MPI service
625 HANDLE_MPI_ERROR(MPI_Finalize());
The complete MPI-manual sample can be found in the NVIDIA/cuQuantum repository.
Code example (tensorQR)¶
Code example (tensorSVD)¶
Code example (GateSplit)¶
Code example (MPS factorization)¶
Code example (intermediate tensor reuse)¶
- Caching/Reusing constant intermediate tensors
- Headers and data types
- Define tensor network and tensor sizes
- Allocate memory, initialize data, initialize cuTensorNet handle
- Mark constant tensors and create the network descriptor
- Contraction order and slicing
- Create workspace descriptor and allocate workspace memory
- Contraction preparation and auto-tuning
- Tensor network contraction execution
- Free resources
Code example (gradients computation)¶
- Computing gradients via backward propagation
- Headers and data types
- Define tensor network and tensor sizes
- Allocate memory, initialize data, initialize cuTensorNet handle
- Construct the network
- Contraction order
- Create workspace descriptor and allocate workspace memory
- Contraction preparation and auto-tuning
- Tensor network contraction execution and gradient computation
- Free resources
Code example (amplitudes slice)¶
- Computing tensor network state amplitudes
- Headers and error handling
- Define the tensor network state and the desired slice of state amplitudes
- Initialize the cuTensorNet library handle
- Define quantum gates on GPU
- Allocate the amplitudes slice tensor on GPU
- Allocate the scratch buffer on GPU
- Create a pure tensor network state
- Apply quantum gates
- Create the state amplitudes accessor
- Configure the state amplitudes accessor
- Prepare the computation of the amplitudes slice tensor
- Set up the workspace
- Compute the specified slice of state amplitudes
- Free resources
Code example (expectation value)¶
- Computing tensor network state expectation value
- Headers and error handling
- Define the tensor network state
- Initialize the cuTensorNet library handle
- Define quantum gates on GPU
- Allocate the scratch buffer on GPU
- Create a pure tensor network state
- Apply quantum gates
- Construct a tensor network operator
- Create the expectation value object
- Configure the expectation value calculation
- Prepare the expectation value calculation
- Set up the workspace
- Compute the requested expectation value
- Free resources
Code example (marginal distribution)¶
- Computing tensor network state marginal distribution tensor
- Headers and error handling
- Define the tensor network state and the desired marginal distribution tensor
- Initialize the cuTensorNet library handle
- Define quantum gates on GPU
- Allocate the marginal distribution tensor on GPU
- Allocate the scratch buffer on GPU
- Create a pure tensor network state
- Apply quantum gates
- Create the marginal distribution object
- Configure the marginal distribution object
- Prepare the computation of the marginal distribution tensor
- Set up the workspace
- Compute the marginal distribution tensor
- Free resources
Code example (tensor network sampling)¶
- Sampling the tensor network state
- Headers and error handling
- Define the tensor network state and the desired number of output samples to generate
- Initialize the cuTensorNet library handle
- Define quantum gates on GPU
- Create a pure tensor network state
- Apply quantum gates
- Create the tensor network state sampler
- Configure the tensor network state sampler
- Prepare the tensor network state sampler
- Allocate the workspace buffer on GPU and setup the workspace
- Perform sampling of the final quantum circuit state
- Free resources
Code example (MPS amplitudes slice using simple update)¶
- Computing Matrix Product State (MPS) Amplitudes
- Headers and error handling
- Define the tensor network state and the desired slice of state amplitudes
- Initialize the cuTensorNet library handle
- Define quantum gates on GPU
- Allocate MPS tensors
- Allocate the amplitudes slice tensor on GPU
- Allocate the scratch buffer on GPU
- Create a pure tensor network state
- Apply quantum gates
- Request MPS factorization for the final quantum circuit state
- Configure MPS factorization procedure
- Prepare the computation of MPS factorization
- Compute MPS factorization
- Create the state amplitudes accessor
- Configure the state amplitudes accessor
- Prepare the computation of the amplitudes slice tensor
- Set up the workspace
- Compute the specified slice of state amplitudes
- Free resources
Code example (MPS expectation value)¶
- Computing Matrix Product State expectation value
- Headers and error handling
- Define the tensor network state
- Initialize the cuTensorNet library handle
- Define quantum gates on GPU
- Allocate MPS tensors
- Allocate the scratch buffer on GPU
- Create a pure tensor network state
- Apply quantum gates
- Request MPS factorization for the final quantum circuit state
- Configure MPS factorization procedure
- Prepare the computation of MPS factorization
- Compute MPS factorization
- Construct a tensor network operator
- Create the expectation value object
- Configure the expectation value calculation
- Prepare the expectation value calculation
- Set up the workspace
- Compute the requested expectation value
- Free resources
Code example (MPS marginal distribution)¶
- Computing Matrix Product State marginal distribution tensor
- Headers and error handling
- Define the tensor network state and the desired marginal distribution tensor
- Initialize the cuTensorNet library handle
- Define quantum gates on GPU
- Allocate MPS tensors
- Allocate the marginal distribution tensor on GPU
- Allocate the scratch buffer on GPU
- Create a pure tensor network state
- Apply quantum gates
- Request MPS factorization for the final quantum circuit state
- Configure MPS factorization procedure
- Prepare the computation of MPS factorization
- Compute MPS factorization
- Create the marginal distribution object
- Configure the marginal distribution object
- Prepare the computation of the marginal distribution tensor
- Set up the workspace
- Compute the marginal distribution tensor
- Free resources
Code example (MPS sampling)¶
- Sampling the Matrix Product State
- Headers and error handling
- Define the tensor network state and the desired number of output samples to generate
- Initialize the cuTensorNet library handle
- Define quantum gates on GPU
- Allocate MPS tensors
- Allocate the scratch buffer on GPU
- Create a pure tensor network state
- Apply quantum gates
- Request MPS factorization for the final quantum circuit state
- Configure MPS factorization procedure
- Prepare the computation of MPS factorization
- Compute MPS factorization
- Create the tensor network state sampler
- Configure the tensor network state sampler
- Prepare the tensor network state sampler
- Set up the workspace
- Perform sampling of the final quantum circuit state
- Free resources
Code example (MPS sampling QFT)¶
- Sampling the Matrix Product State (QFT Circuit)
- Headers and error handling
- Define the tensor network state and the desired number of output samples to generate
- Initialize the cuTensorNet library handle
- Define quantum gates in GPU memory
- Allocate MPS tensors in GPU memory
- Allocate the scratch buffer on GPU
- Create a pure tensor network state
- Apply quantum gates
- Request MPS factorization for the final quantum circuit state
- Configure MPS factorization procedure
- Prepare the computation of MPS factorization
- Compute MPS factorization
- Create the tensor network state sampler
- Configure the tensor network state sampler
- Prepare the tensor network state sampler
- Set up the workspace
- Perform sampling of the final quantum circuit state
- Free resources
Code example (Projection MPS Circuit DMRG)¶
- Projection MPS Circuit DMRG
- Headers, error handling, and helper functions
- Define the tensor network state and quantum circuit configuration
- Initialize the cuTensorNet library handle
- Define quantum gates on GPU
- Create initial quantum state and apply gates
- Set up and specify the MPS representation of the output state
- Initialize the MPS in the vacuum state
- Define MPS representation and allocate projection MPS tensors
- Create state projection MPS
- Perform DMRG optimization
- Free resources
Code example (MPS sampling MPO)¶
- Sampling the Matrix Product State (circuit with Matrix Product Operators)
- Headers and error handling
- Define the tensor network state and the desired number of output samples to generate
- Initialize the cuTensorNet library handle
- Define and allocate MPO tensors
- Define and allocate MPS tensors
- Allocate the scratch buffer on GPU
- Create a pure tensor network state
- Construct necessary MPO tensor network operators
- Apply MPO tensor network operators to the quantum circuit state
- Request MPS factorization for the final quantum circuit state
- Configure MPS factorization procedure
- Prepare the computation of MPS factorization
- Compute MPS factorization
- Create the tensor network state sampler
- Configure the tensor network state sampler
- Prepare the tensor network state sampler
- Set up the workspace
- Perform sampling of the final quantum circuit state
- Free resources
Useful tips¶
For debugging, one can set the environment variable CUTENSORNET_LOG_LEVEL=n. The level n = 0, 1, …, 5 corresponds to the logger level as described and used in cutensornetLoggerSetLevel(). The environment variable CUTENSORNET_LOG_FILE=<filepath> can be used to redirect the log output to a custom file at <filepath> instead of stdout.
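Equivalently, the logger level can be set programmatically from within the application; a minimal sketch using cutensornetLoggerSetLevel() (mentioned above), assuming the most verbose level 5 is desired:

// Sketch: set the cuTensorNet logger verbosity from the application itself,
// comparable in effect to exporting CUTENSORNET_LOG_LEVEL=5 before launching.
HANDLE_ERROR(cutensornetLoggerSetLevel(5)); // levels 0 (off) through 5 (most verbose)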