cuquantum.cutensornet.compute_gradients_backward

cuquantum.cutensornet.compute_gradients_backward(intptr_t handle, intptr_t plan, raw_data_in, intptr_t output_gradient, gradients, bool accumulate_output, intptr_t workspace, intptr_t stream)

Compute the gradients of the network w.r.t. the input tensors whose gradients are required.

The input tensors must form the tensor network prescribed by the tensor network descriptor that was used to create the contraction plan.
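Conceptually, this call plays the role of the backward pass of reverse-mode differentiation through the network contraction: given the gradient of a scalar loss w.r.t. the output tensor, it produces the gradient w.r.t. each input tensor that requires gradients. The NumPy sketch below shows the same mathematics for a hypothetical two-tensor network; it illustrates the idea only and is not the library's implementation.

    import numpy as np

    # Forward pass of a two-tensor "network": O[i,k] = sum_j A[i,j] * B[j,k]
    A = np.random.rand(3, 4)
    B = np.random.rand(4, 5)
    O = np.einsum("ij,jk->ik", A, B)

    # Backward pass: given dL/dO, each input's gradient is itself a
    # contraction of dL/dO with the remaining input tensors.
    dO = np.ones_like(O)                # stand-in for dL/dO
    dA = np.einsum("ik,jk->ij", dO, B)  # dL/dA = dO @ B.T
    dB = np.einsum("ij,ik->jk", A, dO)  # dL/dB = A.T @ dO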

Warning

This function is experimental and is subject to change in future releases.

Parameters
  • handle (intptr_t) – The library handle.

  • plan (intptr_t) – The contraction plan handle.

  • raw_data_in

    A host array of pointer addresses (as Python int), one for each input tensor (on device). It can be

    • an int as the pointer address to the pointer array, or

    • a Python sequence of int, one per input tensor

  • output_gradient (intptr_t) – The pointer address (as Python int) to the gradient w.r.t. the output tensor (on device).

  • gradients

    A host array of pointer addresses (as Python int), one for each gradient tensor (on device). It can be

    • an int as the pointer address to the pointer array, or

    • a Python sequence of int, one per gradient tensor

  • accumulate_output (bool) – If True, accumulate the computed gradients into the existing data in gradients; if False, overwrite gradients.

  • workspace (intptr_t) – The workspace descriptor.

  • stream (intptr_t) – The CUDA stream handle (cudaStream_t as Python int).
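
Example

A minimal usage sketch, assuming that handle, plan, and workspace were created beforehand, that the network has already been contracted through this plan (so the workspace holds the intermediates needed by the backward pass), and that the hypothetical CuPy arrays x0 and x1 back the two input tensors while out_grad holds the gradient w.r.t. the output tensor:

    import cupy as cp
    from cuquantum import cutensornet as cutn

    # Assumed to exist already (setup not shown): handle, plan, workspace,
    # a prior forward contraction through `plan`, and the device arrays
    # x0, x1 (inputs) and out_grad (gradient w.r.t. the output tensor).
    grad0 = cp.zeros_like(x0)  # receives the gradient w.r.t. x0
    grad1 = cp.zeros_like(x1)  # receives the gradient w.r.t. x1

    # Host sequences of device pointer addresses, one per input tensor.
    raw_data_in = [x0.data.ptr, x1.data.ptr]
    gradients = [grad0.data.ptr, grad1.data.ptr]

    stream = cp.cuda.get_current_stream().ptr
    cutn.compute_gradients_backward(
        handle, plan, raw_data_in, out_grad.data.ptr,
        gradients, False, workspace, stream)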