ElementWise

Computes a per-element binary operation between two input tensors to produce an output tensor. When applicable, broadcasting is used (refer to Shape Information for more information).

Attributes

operation The element-wise operation to apply. Can be one of:

  • SUM \(output=input1+input2\)

  • PROD \(output=input1*input2\)

  • MAX \(output=max(input1,input2)\)

  • MIN \(output=min(input1,input2)\)

  • SUB \(output=input1-input2\)

  • DIV \(output=\frac{input1}{input2}\)

  • POWER \(output=input1^{input2}\)

  • FLOOR_DIV \(output=\lfloor\frac{input1}{input2}\rfloor\)

  • AND \(output=and(input1,input2)\)

  • OR \(output=or(input1,input2)\)

  • XOR \(output=xor(input1,input2)\)

  • EQUAL \(output=(input1==input2)\)

  • GREATER \(output=(input1>input2)\)

  • LESS \(output=(input1<input2)\)

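The semantics of a few of the operations above can be sketched with NumPy (an illustration only; TensorRT executes these operations in the engine, not through NumPy). Note in particular that FLOOR_DIV rounds toward negative infinity, and that the comparison operations produce boolean outputs:

```python
import numpy as np

a = np.array([7.0, -7.0])
b = np.array([2.0, 2.0])

sum_out = a + b                        # SUM
prod_out = a * b                       # PROD
floor_div_out = np.floor_divide(a, b)  # FLOOR_DIV: floor(input1 / input2)
greater_out = a > b                    # GREATER: boolean output (T2 == bool)
```

Here `floor_div_out` is `[3.0, -4.0]`: floor division rounds -3.5 down to -4, unlike truncating division.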
Inputs

input1: tensor of type T1

input2: tensor of type T1

Outputs

output: tensor of type T2

Data Types

When not specified explicitly, T1==T2

Operation    T1                        T2
SUM          int8, float16, float32    (same as T1)
PROD         int8, float16, float32    (same as T1)
MAX          int8, float16, float32    (same as T1)
MIN          int8, float16, float32    (same as T1)
SUB          int8, float16, float32    (same as T1)
DIV          int8, float16, float32    (same as T1)
POWER        int8, float16, float32    (same as T1)
FLOOR_DIV    int8, float16, float32    (same as T1)
AND          bool                      (same as T1)
OR           bool                      (same as T1)
XOR          bool                      (same as T1)
EQUAL        float16, float32          bool
GREATER      float16, float32          bool
LESS         float16, float32          bool

Shape Information

Inputs must have the same rank. For each dimension, their lengths must match, or one of them must be equal to 1. In the latter case, the tensor is broadcast along that axis.

The output has the same rank as the inputs. For each output dimension, its length is equal to the lengths of the corresponding input dimensions if they match, otherwise it is equal to the length that is not 1.
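The shape rule above can be sketched as a small helper function (an illustration only; the names here are hypothetical, not part of the TensorRT API):

```python
def broadcast_shape(shape1, shape2):
    """Compute the ElementWise output shape per the rule described above."""
    # Inputs must have the same rank.
    assert len(shape1) == len(shape2), "inputs must have the same rank"
    out = []
    for d1, d2 in zip(shape1, shape2):
        if d1 == d2:
            out.append(d1)          # matching lengths pass through
        elif d1 == 1:
            out.append(d2)          # broadcast input1 along this axis
        elif d2 == 1:
            out.append(d1)          # broadcast input2 along this axis
        else:
            raise ValueError(f"incompatible dimensions {d1} and {d2}")
    return tuple(out)
```

For instance, shapes (2, 3) and (1, 3) produce an output of shape (2, 3), matching the example below.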

Examples

ElementWise With Broadcast
in1 = network.add_input("input1", dtype=trt.float32, shape=(2, 3))
in2 = network.add_input("input2", dtype=trt.float32, shape=(1, 3))
layer = network.add_elementwise(in1, in2, op=trt.ElementWiseOperation.PROD)
network.mark_output(layer.get_output(0))

inputs[in1.name] = np.array([[-3.0, -2.0, -1.0], [0.0, 1.0, 2.0]])
inputs[in2.name] = np.array([[4.0, 5.0, 6.0]])

outputs[layer.get_output(0).name] = layer.get_output(0).shape

expected[layer.get_output(0).name] = np.array([[-12.0, -10.0, -6.0], [0.0, 5.0, 12.0]])

C++ API

For more information about the C++ IElementWiseLayer operator, refer to the C++ IElementWiseLayer documentation.

Python API

For more information about the Python IElementWiseLayer operator, refer to the Python IElementWiseLayer documentation.