Einsum

Computes a summation over the elements of the inputs along dimensions specified by the equation parameter, which is written in the Einstein summation convention.

  • The equation labels each dimension of an input with an ASCII lower-case letter, in the same order as the dimensions appear; the terms for the different inputs are separated by commas.

  • The equation has the form term1,term2…->output-term, where each term corresponds to an operand tensor and the characters within a term correspond to that operand's dimensions.

  • The dimensions labeled with the same subscript must match.

  • Repeating a label across multiple inputs means that those axes will be multiplied.

  • Omitting a label from the output means values along those axes will be summed.

  • Each output subscript must appear at least once in one of the input terms and at most once in the output term.

  • In implicit mode, i.e. if the equation does not contain ->, the subscripts that appear exactly once across the inputs form the output, in increasing alphabetical order.

  • In explicit mode, the output is controlled by adding an arrow (->) followed by the subscripts for the output. For example, ij,jk->ik is equivalent to ij,jk; see the sketch after this list.

  • An empty string (“”) is valid for scalar operands.

  • The equation may contain spaces (SPC, 0x20) between the different elements (subscripts, arrow, and comma).
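
As an illustration of these rules, here is a minimal sketch using NumPy's np.einsum, which follows the same convention; the arrays are arbitrary and this is not the TensorRT API.

import numpy as np

a = np.array([[-3.0, -2.0, -1.0], [0.0, 1.0, 2.0]])  # shape (2, 3), term "ij"
b = np.array([[1.0], [2.0], [3.0]])                   # shape (3, 1), term "jk"

# "j" is repeated across inputs, so those axes are multiplied; because "j" is
# omitted from the output it is summed over, giving a matrix multiplication.
explicit = np.einsum("ij,jk->ik", a, b)  # explicit mode, shape (2, 1)

# Implicit mode: no "->"; the subscripts appearing exactly once ("i" and "k")
# form the output in increasing alphabetical order, so this is equivalent.
implicit = np.einsum("ij,jk", a, b)
assert np.array_equal(explicit, implicit)

# An empty output term sums over every axis and yields a scalar.
total = np.einsum("ij->", a)  # -3.0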

Attributes

equation: A string representing the summation equation, written in the Einstein summation convention.
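
As a brief sketch of how the attribute is used (the network and input setup follow the Examples below; the equation property is the Python API's attribute name for this setting and should be checked against the IEinsumLayer documentation):

layer = network.add_einsum(inputs=[in1, in2], equation="ik,kj->ij")
# The equation can also be read back or replaced on the layer after creation.
layer.equation = "ij,jk->ik"  # equivalent contraction with relabeled subscripts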

Inputs

inputs: tensors of type T. Up to two inputs can be set.

Outputs

output: tensor of type T.

Data Types

T: float16, float32

Shape Information

output has one dimension for each subscript in the output term, and the size of each output dimension equals the size of the input dimensions that carry the same label. For example, ik,kj->ij applied to inputs of shapes (2, 3) and (3, 1) produces an output of shape (2, 1).

Examples

Einsum
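# Transpose via einsum: "ij->ji" swaps the two axes of the (2, 3) input.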
in1 = network.add_input("input1", dtype=trt.float32, shape=(2, 3))
layer = network.add_einsum(inputs=[in1], equation="ij->ji")
network.mark_output(layer.get_output(0))

inputs[in1.name] = np.array([[-3.0, -2.0, -1.0], [0.0, 1.0, 2.0]])

outputs[layer.get_output(0).name] = layer.get_output(0).shape

expected[layer.get_output(0).name] = np.array(
    [
        [-3.0, 0.0],
        [-2.0, 1.0],
        [-1.0, 2.0],
    ]
)
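
# Matrix multiplication via einsum: "ik,kj->ij" multiplies the (2, 3) and (3, 1)
# inputs along the shared subscript k and sums over it, producing a (2, 1) output.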
in1 = network.add_input("input1", dtype=trt.float32, shape=(2, 3))
in2 = network.add_input("input2", dtype=trt.float32, shape=(3, 1))
layer = network.add_einsum(inputs=[in1, in2], equation="ik,kj->ij")
network.mark_output(layer.get_output(0))

inputs[in1.name] = np.array([[-3.0, -2.0, -1.0], [0.0, 1.0, 2.0]])

inputs[in2.name] = np.array([[1.0], [2.0], [3.0]])

outputs[layer.get_output(0).name] = layer.get_output(0).shape

expected[layer.get_output(0).name] = np.array([[-10.0], [8.0]])
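
These snippets assume a surrounding test harness: a network under construction and plain dictionaries inputs, outputs, and expected keyed by tensor name. A minimal sketch of that assumed setup follows; the logger settings and network-creation flags are assumptions and may differ between TensorRT versions.

import numpy as np
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Einsum requires an explicit-batch network; on older TensorRT versions pass
# 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH) instead of 0.
network = builder.create_network(0)

# Dictionaries used by the snippets above, keyed by tensor name.
inputs, outputs, expected = {}, {}, {}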

C++ API

For more information about the C++ IEinsumLayer operator, refer to the C++ IEinsumLayer documentation.

Python API

For more information about the Python IEinsumLayer operator, refer to the Python IEinsumLayer documentation.