LRN

Computes a per-element Local Response Normalization (LRN) on an input tensor and produces an output tensor.

Given an input tensor with shape \([D_0,...,D_n]\) and the parameters \(w\), \(\alpha\), \(\beta\), and \(k\), the normalization is calculated as follows:

\[\huge{output_{a_0,...,a_{n-2},a_{n-1},a_{n}} = \frac{input_{a_0,...,a_{n-2},a_{n-1},a_{n}}}{\left(k+\frac{\alpha}{w} \sum_{j=\max(0,\ a_{n-2}-\lfloor w/2 \rfloor)}^{\min(D_{n-2}-1,\ a_{n-2}+\lfloor w/2 \rfloor)} input_{a_0,...,j,a_{n-1},a_{n}}^2\right)^\beta}}\]
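
The cross-channel computation can be reproduced with a short NumPy sketch. This is illustrative only: lrn_reference is a hypothetical helper name, not part of the TensorRT API, and it assumes a 4D input where the cross-channel dimension is axis 1 (which matches the formula's \(a_{n-2}\) index for 4D tensors).

import numpy as np

def lrn_reference(x, window, alpha, beta, k):
    # Reference sketch of the LRN formula above; assumes NCHW layout (channel axis = 1).
    x = np.asarray(x, dtype=np.float32)
    half = window // 2
    channels = x.shape[1]
    out = np.empty_like(x)
    for c in range(channels):
        # Window is clipped at the channel boundaries, as in the formula.
        lo = max(0, c - half)
        hi = min(channels - 1, c + half)
        # Sum of squares over the window, divided by the full window size w.
        s = np.sum(x[:, lo:hi + 1] ** 2, axis=1) / window
        out[:, c] = x[:, c] / (k + alpha * s) ** beta
    return out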

Attributes

\(w\) the size of the cross-channel window. \(w \in {\{1,3,5,7,9,11,13,15\}}\).

\(\alpha\) a normalization parameter. \(\alpha \in {[-1 \cdot 10^{20}, 1 \cdot 10^{20}]}\).

\(\beta\) a normalization parameter. \(\beta \in {[0.01, 1 \cdot 10^{5}]}\).

\(k\) a normalization parameter. \(k \in {[1 \cdot 10^{-5}, 1 \cdot 10^{10}]}\).

Inputs

input: tensor of type T1.

Outputs

output: tensor of type T1.

Data Types

T1: float32

Shape Information

input is a tensor with a shape of \([a_0,...,a_n]\).

output has the same shape as input.

DLA Restrictions

The window size is restricted to \(w \in {\{3,5,7,9\}}\).

Examples

LRN
# Assumes the surrounding harness defines `network`, `inputs`, `outputs`, and `expected`;
# `trt` is the tensorrt module and `np` is numpy.
in1 = network.add_input("input1", dtype=trt.float32, shape=(1, 5, 2, 2))
layer = network.add_lrn(in1, window=3, alpha=1, beta=1, k=0.1)
network.mark_output(layer.get_output(0))

inputs[in1.name] = np.array(
    [[[[0, 0], [0, 0]], [[1, 1], [1, 1]], [[2, 2], [2, 2]], [[3, 3], [3, 3]], [[4, 4], [4, 4]]]]
)

outputs[layer.get_output(0).name] = layer.get_output(0).shape
expected[layer.get_output(0).name] = np.array(
    [
        [
            [[0.0, 0.0], [0.0, 0.0]],
            [[0.56603765, 0.56603765], [0.56603765, 0.56603765]],
            [[0.4195804, 0.4195804], [0.4195804, 0.4195804]],
            [[0.3071672, 0.3071672], [0.3071672, 0.3071672]],
            [[0.47430828, 0.47430828], [0.47430828, 0.47430828]],
        ]
    ],
)
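
The expected value at channel 1 can be checked by hand against the formula above: with \(w=3\) the window covers channels 0 through 2, so the normalization term is \((0^2 + 1^2 + 2^2)/3 = 5/3\), and the output is \(1 / (0.1 + 1 \cdot 5/3)^1 \approx 0.56604\).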

C++ API

For more information about the C++ ILRNLayer operator, refer to the C++ ILRNLayer documentation.

Python API

For more information about the Python ILRNLayer operator, refer to the Python ILRNLayer documentation.