Performing a gate split operation with cuTensorNet adopts a workflow very similar to the QR example and the SVD example. Here we highlight the notable differences between the APIs. The full code can be found in the NVIDIA/cuQuantum repository (here).
Define tensor operands
As with QR/SVD decomposition, we first define all the tensor operands by specifying the data type, the mode partition, and the extents. In this example, we choose to perform fixed-extent truncation to a size of 16.
/***************************************************************************************
 * Gate Split: A_{i,j,k,l} B_{k,o,p,q} G_{m,n,l,o} -> A'_{i,j,x,m} S_{x} B'_{x,n,p,q}
 ***************************************************************************************/
typedef float floatType;
cudaDataType_t typeData = CUDA_R_32F;
cutensornetComputeType_t typeCompute = CUTENSORNET_COMPUTE_32F;

// Create vectors of modes
std::vector<int32_t> modesAIn{'i','j','k','l'};
std::vector<int32_t> modesBIn{'k','o','p','q'};
std::vector<int32_t> modesGIn{'m','n','l','o'}; // input, G is the gate operator

std::vector<int32_t> modesAOut{'i','j','x','m'};
std::vector<int32_t> modesBOut{'x','n','p','q'}; // SVD output

// Extents
std::unordered_map<int32_t, int64_t> extent;
extent['i'] = 16;
extent['j'] = 16;
extent['k'] = 16;
extent['l'] = 2;
extent['m'] = 2;
extent['n'] = 2;
extent['o'] = 2;
extent['p'] = 16;
extent['q'] = 16;

const int64_t maxExtent = 16; // truncate to a maximal extent of 16
extent['x'] = maxExtent;

// Create a vector of extents for each tensor
std::vector<int64_t> extentAIn;
for (auto mode : modesAIn)
   extentAIn.push_back(extent[mode]);
std::vector<int64_t> extentBIn;
for (auto mode : modesBIn)
   extentBIn.push_back(extent[mode]);
std::vector<int64_t> extentGIn;
for (auto mode : modesGIn)
   extentGIn.push_back(extent[mode]);
std::vector<int64_t> extentAOut;
for (auto mode : modesAOut)
   extentAOut.push_back(extent[mode]);
std::vector<int64_t> extentBOut;
for (auto mode : modesBOut)
   extentBOut.push_back(extent[mode]);
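
Each operand then needs a cutensornetTensorDescriptor_t. Below is a minimal sketch of creating the library handle and the input descriptors, assuming the default NULL-stride (generalized column-major) layout; the output descriptors are created the same way, as the execution loop below shows.

// A minimal sketch of handle and input-descriptor creation
cutensornetHandle_t handle;
HANDLE_ERROR( cutensornetCreate(&handle) );

const int64_t* strides = NULL; // NULL strides: default generalized column-major layout

cutensornetTensorDescriptor_t descTensorAIn, descTensorBIn, descTensorGIn;
HANDLE_ERROR( cutensornetCreateTensorDescriptor(handle,
                  (int32_t)modesAIn.size(), extentAIn.data(), strides, modesAIn.data(),
                  typeData, &descTensorAIn) );
HANDLE_ERROR( cutensornetCreateTensorDescriptor(handle,
                  (int32_t)modesBIn.size(), extentBIn.data(), strides, modesBIn.data(),
                  typeData, &descTensorBIn) );
HANDLE_ERROR( cutensornetCreateTensorDescriptor(handle,
                  (int32_t)modesGIn.size(), extentGIn.data(), strides, modesGIn.data(),
                  typeData, &descTensorGIn) );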
130
Execution
Similar to the SVD example, we can specify the SVD options in cutensornetTensorSVDConfig_t. The workspace size query can be achieved by calling cutensornetWorkspaceComputeGateSplitSizes() with the chosen cutensornetGateSplitAlgo_t. Finally, we can execute the gate split computation by calling cutensornetGateSplit().
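
The full sample creates these objects before the execution loop. The sketch below illustrates that setup; the absCutoff value and the choice of the reduced gate algorithm are illustrative rather than prescriptive, and the workspace memory calls assume a recent cuTensorNet version.

// A sketch of the setup preceding the execution loop
cutensornetTensorSVDConfig_t svdConfig;
HANDLE_ERROR( cutensornetCreateTensorSVDConfig(handle, &svdConfig) );

// Enable value-based (weighted) truncation on top of the fixed extent of mode 'x'
const double absCutoff = 1e-2; // illustrative cutoff value
HANDLE_ERROR( cutensornetTensorSVDConfigSetAttribute(handle, svdConfig,
                  CUTENSORNET_TENSOR_SVD_CONFIG_ABS_CUTOFF,
                  &absCutoff, sizeof(absCutoff)) );

// Holds runtime truncation information filled in by the computation
cutensornetTensorSVDInfo_t svdInfo;
HANDLE_ERROR( cutensornetCreateTensorSVDInfo(handle, &svdInfo) );

// Choose a gate split algorithm and query the required workspace size
cutensornetGateSplitAlgo_t gateAlgo = CUTENSORNET_GATE_SPLIT_ALGO_REDUCED;
cutensornetWorkspaceDescriptor_t workDesc;
HANDLE_ERROR( cutensornetCreateWorkspaceDescriptor(handle, &workDesc) );
HANDLE_ERROR( cutensornetWorkspaceComputeGateSplitSizes(handle,
                  descTensorAIn, descTensorBIn, descTensorGIn,
                  descTensorAOut, descTensorBOut,
                  gateAlgo, svdConfig, typeCompute, workDesc) );

// Allocate and bind device scratch memory to the workspace descriptor
int64_t workspaceSize{0};
HANDLE_ERROR( cutensornetWorkspaceGetMemorySize(handle, workDesc,
                  CUTENSORNET_WORKSIZE_PREF_MIN, CUTENSORNET_MEMSPACE_DEVICE,
                  CUTENSORNET_WORKSPACE_SCRATCH, &workspaceSize) );
void* devWork{nullptr};
cudaMalloc(&devWork, workspaceSize);
HANDLE_ERROR( cutensornetWorkspaceSetMemory(handle, workDesc,
                  CUTENSORNET_MEMSPACE_DEVICE, CUTENSORNET_WORKSPACE_SCRATCH,
                  devWork, workspaceSize) );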
/**********************
 * Execution
 **********************/

GPUTimer timer{stream};
double minTimeCUTENSOR = 1e100;
const int numRuns = 3; // to get stable perf results
for (int i = 0; i < numRuns; ++i)
{
   // Restore the output tensors
   cudaMemsetAsync(D_AOut, 0, sizeAOut, stream);
   cudaMemsetAsync(D_S, 0, sizeS, stream);
   cudaMemsetAsync(D_BOut, 0, sizeBOut, stream);

   // With value-based truncation, `cutensornetGateSplit` can potentially update the shared extent in descTensorA/BOut.
   // We here restore descTensorA/BOut to the original problem.
   HANDLE_ERROR( cutensornetDestroyTensorDescriptor(descTensorAOut) );
   HANDLE_ERROR( cutensornetDestroyTensorDescriptor(descTensorBOut) );
   HANDLE_ERROR( cutensornetCreateTensorDescriptor(handle, numModesAOut, extentAOut.data(), strides, modesAOut.data(), typeData, &descTensorAOut) );
   HANDLE_ERROR( cutensornetCreateTensorDescriptor(handle, numModesBOut, extentBOut.data(), strides, modesBOut.data(), typeData, &descTensorBOut) );

   cudaDeviceSynchronize();
   timer.start();
   HANDLE_ERROR( cutensornetGateSplit(handle,
                     descTensorAIn, D_AIn,
                     descTensorBIn, D_BIn,
                     descTensorGIn, D_GIn,
                     descTensorAOut, D_AOut,
                     D_S,
                     descTensorBOut, D_BOut,
                     gateAlgo,
                     svdConfig, typeCompute, svdInfo,
                     workDesc, stream) );
   // Synchronize and measure timing
   auto time = timer.seconds();
   minTimeCUTENSOR = (minTimeCUTENSOR < time) ? minTimeCUTENSOR : time;
}

printf("Performing Gate Split\n");
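
After the call returns, the effect of the weighted truncation can be inspected through svdInfo. A short sketch, assuming the cutensornetTensorSVDInfoGetAttribute() query API; the printed labels are illustrative.

// Query how much mode 'x' was actually truncated and the discarded weight
int64_t reducedExtent{0};
double discardedWeight{0.0};
HANDLE_ERROR( cutensornetTensorSVDInfoGetAttribute(handle, svdInfo,
                  CUTENSORNET_TENSOR_SVD_INFO_REDUCED_EXTENT,
                  &reducedExtent, sizeof(reducedExtent)) );
HANDLE_ERROR( cutensornetTensorSVDInfoGetAttribute(handle, svdInfo,
                  CUTENSORNET_TENSOR_SVD_INFO_DISCARDED_WEIGHT,
                  &discardedWeight, sizeof(discardedWeight)) );
printf("Reduced extent of mode x: %lld, discarded weight: %.6e\n",
       (long long)reducedExtent, discardedWeight);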
Note
As with cutensornetTensorSVD(), since we turned on weighted truncation options in this example, the tensor descriptors for outputs A and B need to be restored if we wish to perform the same computation multiple times.
After the computation, we always need to free up all resources.
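
A sketch of that cleanup, assuming the objects and device buffers were created with the names used in the snippets above:

// Free cuTensorNet objects and device memory
HANDLE_ERROR( cutensornetDestroyTensorDescriptor(descTensorAIn) );
HANDLE_ERROR( cutensornetDestroyTensorDescriptor(descTensorBIn) );
HANDLE_ERROR( cutensornetDestroyTensorDescriptor(descTensorGIn) );
HANDLE_ERROR( cutensornetDestroyTensorDescriptor(descTensorAOut) );
HANDLE_ERROR( cutensornetDestroyTensorDescriptor(descTensorBOut) );
HANDLE_ERROR( cutensornetDestroyTensorSVDConfig(svdConfig) );
HANDLE_ERROR( cutensornetDestroyTensorSVDInfo(svdInfo) );
HANDLE_ERROR( cutensornetDestroyWorkspaceDescriptor(workDesc) );
HANDLE_ERROR( cutensornetDestroy(handle) );

cudaFree(D_AIn); cudaFree(D_BIn); cudaFree(D_GIn);
cudaFree(D_AOut); cudaFree(D_S); cudaFree(D_BOut);
cudaFree(devWork);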