ParametricReLU#
Apply a parametric ReLU function to input tensor A and produce an output tensor B with the same dimensions. The parametric ReLU function is a Leaky ReLU in which the slope for \(x<0\) can be defined per element.
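Equivalently, for each element:
\[B_i = \begin{cases} A_i & \text{if } A_i \geq 0 \\ \mathrm{slopes}_i \cdot A_i & \text{if } A_i < 0 \end{cases}\]
where \(\mathrm{slopes}\) is broadcast to the shape of \(A\) as described under Shape Information.
Inputs#
input: tensor of type T
slopes: tensor of type T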
Outputs#
output: tensor of type T.
Data Types#
T: int8, float16, float32, bfloat16
Shape Information#
input and output are tensors with a shape of \([a_0,...,a_n]\).
slopes is a tensor with a shape of \([b_0,...,b_n]\), where \(b_i=a_i\) or \(b_i=1\); dimensions of size 1 are broadcast across the corresponding dimension of the input.
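For illustration, a minimal NumPy sketch of the same computation with broadcasting (not the TensorRT implementation):
import numpy as np

a = np.array([[-3.0, -2.0, -1.0], [0.0, 1.0, 2.0]])  # shape (2, 3)
slopes = np.array([[2.0], [1.0]])  # shape (2, 1), broadcast across the last dimension
b = np.where(a >= 0, a, slopes * a)  # [[-6. -4. -2.], [0. 1. 2.]]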
Volume Limits#
input and slopes can have up to \(2^{31}-1\) elements.
DLA Support#
DLA FP16 and DLA INT8 are supported.
When running this layer on DLA, the slopes input must be a build-time constant.
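For example, a sketch of providing slopes as a build-time constant through add_constant, assuming a network and an input in1 like those in the examples below (the slope values are illustrative):
slopes_np = np.full((1, 3), 0.1, dtype=np.float32)  # illustrative constant slopes
slopes_const = network.add_constant(shape=(1, 3), weights=trt.Weights(slopes_np))  # build-time constant
layer = network.add_parametric_relu(in1, slopes_const.get_output(0))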
Examples#
ParametricReLU
in1 = network.add_input("input1", dtype=trt.float32, shape=(2, 3))
slopes = network.add_input("slopes", dtype=trt.float32, shape=(2, 3))
layer = network.add_parametric_relu(in1, slopes)
network.mark_output(layer.get_output(0))
inputs[in1.name] = np.array([[-3.0, -2.0, -1.0], [0.0, 1.0, 2.0]])
inputs[slopes.name] = np.array([[-1.0, 1.0, 0.0], [0.0, 1.0, 2.0]])
outputs[layer.get_output(0).name] = layer.get_output(0).shape
expected[layer.get_output(0).name] = np.array([[3.0, -2.0, 0.0], [0.0, 1.0, 2.0]])
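Each negative element is scaled by its matching slope: \(-3 \times -1 = 3\), \(-2 \times 1 = -2\), and \(-1 \times 0 = 0\); the non-negative elements pass through unchanged.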
ParametricReLU With Broadcasting
in1 = network.add_input("input1", dtype=trt.float32, shape=(2, 3))
slopes = network.add_input("slopes", dtype=trt.float32, shape=(2, 1))
layer = network.add_parametric_relu(in1, slopes)
network.mark_output(layer.get_output(0))
inputs[in1.name] = np.array([[-3.0, -2.0, -1.0], [0.0, 1.0, 2.0]])
inputs[slopes.name] = np.array([[2.0], [1.0]])
outputs[layer.get_output(0).name] = layer.get_output(0).shape
expected[layer.get_output(0).name] = np.array([[-6.0, -4.0, -2.0], [0.0, 1.0, 2.0]])
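With slopes of shape \((2, 1)\), each row's single slope broadcasts across its columns: the first row's negative elements are scaled by 2 (for example, \(-3 \times 2 = -6\)), and the second row's non-negative elements pass through unchanged.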
C++ API#
For more information about the C++ IParametricReLULayer operator, refer to the C++ IParametricReLULayer documentation.
Python API#
For more information about the Python IParametricReLULayer operator, refer to the Python IParametricReLULayer documentation.