# Activation

Applies an activation function to an input tensor A and produces an output tensor B with the same dimensions.

See also: PRelu, SoftMax

## Attributes

`type` The activation function. It can be one of the following (a brief NumPy sketch of a few of these appears after the list):

• RELU $$output=max(0, input)$$

• SIGMOID $$output=\frac{1}{1+e^{-input}}$$

• TANH $$output=\frac{1-e^{-2 \cdot input}}{1+e^{-2 \cdot input}}$$

• LEAKY_RELU $$output=input \text{ if } input\geq0 \text{ else } \alpha \cdot input$$

• ELU $$output=input \text{ if } input\geq0 \text{ else } \alpha \cdot (e^{input} -1)$$

• SELU $$output=\beta \cdot input \text{ if } input\geq0 \text{ else } \beta \cdot (\alpha \cdot e^{input} - \alpha)$$

• SOFTSIGN $$output=\frac{input}{1+|input|}$$

• SOFTPLUS $$output=\alpha \cdot log(e^{\beta \cdot input} + 1)$$

• CLIP $$output=max(\alpha, min(\beta, input))$$

• HARD_SIGMOID $$output=max(0, min(1, \alpha \cdot input +\beta))$$

• SCALED_TANH $$output=\alpha \cdot tanh(\beta \cdot input)$$

• THRESHOLDED_RELU $$output=input \text{ if } input>\alpha \text{ else } 0$$
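For reference, here is a minimal NumPy sketch of a few of these formulas, where `alpha` and `beta` are the attributes described below. This is illustrative code only, not part of the TensorRT API.

```python
import numpy as np

def leaky_relu(x, alpha):
    # output = input if input >= 0, else alpha * input
    return np.where(x >= 0, x, alpha * x)

def selu(x, alpha, beta):
    # output = beta * input if input >= 0, else beta * (alpha * e^input - alpha)
    return np.where(x >= 0, beta * x, beta * (alpha * np.exp(x) - alpha))

def clip(x, alpha, beta):
    # output = max(alpha, min(beta, input))
    return np.maximum(alpha, np.minimum(beta, x))
```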

`alpha` Parameter used when the activation function is one of: LEAKY_RELU, ELU, SELU, SOFTPLUS, CLIP, HARD_SIGMOID, SCALED_TANH, THRESHOLDED_RELU.

`beta` Parameter used when the activation function is one of: SELU, SOFTPLUS, CLIP, HARD_SIGMOID, SCALED_TANH.
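As an illustration of how these attributes are set, the following is a sketch using the TensorRT Python API. `network` and `in1` are assumed to exist as in the example at the end of this page, and the attribute values (0.1 for LEAKY_RELU, [0, 6] for CLIP) are arbitrary.

```python
import tensorrt as trt

# Sketch: LEAKY_RELU with an illustrative alpha of 0.1.
leaky = network.add_activation(in1, trt.ActivationType.LEAKY_RELU)
leaky.alpha = 0.1

# Sketch: CLIP to the range [0, 6], i.e. alpha = 0, beta = 6.
clip_layer = network.add_activation(in1, trt.ActivationType.CLIP)
clip_layer.alpha = 0.0
clip_layer.beta = 6.0
```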

## Inputs

input: tensor of type T1

## Outputs

output: tensor of type T1

## Data Types

T1: int8, float16, float32
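If a particular precision from T1 is desired, one way to request it is sketched below using the builder-config API; the flag names follow the public TensorRT Python API, and `builder` and `layer` are assumed to exist as in the example at the end of this page.

```python
import tensorrt as trt

# Sketch: allow float16 kernels and pin this layer's precision to float16.
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)
layer.precision = trt.float16
```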

## Shape Information

The output has the same shape as the input.

## DLA Restrictions

DLA supports the following activation types:

• CLIP where $$\alpha=0$$ and $$\beta\leq127$$

• RELU

• SIGMOID

• TANH

• LEAKY_RELU
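A minimal builder-config sketch for targeting DLA is shown below, assuming a device with a DLA core; `builder` is assumed to exist (see the build sketch at the end of the Examples section), and GPU fallback covers anything outside the list above.

```python
import tensorrt as trt

# Sketch: prefer DLA core 0 and fall back to the GPU for unsupported layers.
config = builder.create_builder_config()
config.default_device_type = trt.DeviceType.DLA
config.DLA_core = 0
config.set_flag(trt.BuilderFlag.GPU_FALLBACK)
```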

## Examples

Activation

```python
# `network`, `inputs`, `outputs`, and `expected` are assumed to be provided by
# the surrounding test harness.
in1 = network.add_input("input1", dtype=trt.float32, shape=(2, 3))
layer = network.add_activation(in1, trt.ActivationType.RELU)  # RELU matches the expected output below
network.mark_output(layer.get_output(0))

inputs[in1.name] = np.array([[-3.0, -2.0, -1.0], [0.0, 1.0, 2.0]])

outputs[layer.get_output(0).name] = layer.get_output(0).shape

expected[layer.get_output(0).name] = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 2.0]])
```
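For context, the fragment above can be folded into a complete engine build roughly as follows. This is a sketch only; network-creation flags and build calls vary across TensorRT versions.

```python
import tensorrt as trt

# Illustrative sketch: build a tiny serialized engine containing just this
# Activation layer.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network()

in1 = network.add_input("input1", dtype=trt.float32, shape=(2, 3))
layer = network.add_activation(in1, trt.ActivationType.RELU)
network.mark_output(layer.get_output(0))

config = builder.create_builder_config()
engine_bytes = builder.build_serialized_network(network, config)
```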