TensorRT 8.2.5

nvinfer1::IActivationLayer Class Reference

An Activation layer in a network definition.

#include <NvInfer.h>

Public Member Functions
void setActivationType(ActivationType type) noexcept
    Set the type of activation to be performed.

ActivationType getActivationType() const noexcept
    Get the type of activation to be performed.

void setAlpha(float alpha) noexcept
    Set the alpha parameter (must be finite).

void setBeta(float beta) noexcept
    Set the beta parameter (must be finite).

float getAlpha() const noexcept
    Get the alpha parameter.

float getBeta() const noexcept
    Get the beta parameter.
Public Member Functions inherited from nvinfer1::ILayer
LayerType getType() const noexcept
    Return the type of a layer.

void setName(const char *name) noexcept
    Set the name of a layer.

const char *getName() const noexcept
    Return the name of a layer.

int32_t getNbInputs() const noexcept
    Get the number of inputs of a layer.

ITensor *getInput(int32_t index) const noexcept
    Get the layer input corresponding to the given index.

int32_t getNbOutputs() const noexcept
    Get the number of outputs of a layer.

ITensor *getOutput(int32_t index) const noexcept
    Get the layer output corresponding to the given index.

void setInput(int32_t index, ITensor &tensor) noexcept
    Replace an input of this layer with a specific tensor.

void setPrecision(DataType dataType) noexcept
    Set the computational precision of this layer.

DataType getPrecision() const noexcept
    Get the computational precision of this layer.

bool precisionIsSet() const noexcept
    Return whether the computational precision has been set for this layer.

void resetPrecision() noexcept
    Reset the computational precision for this layer.

void setOutputType(int32_t index, DataType dataType) noexcept
    Set the output type of this layer.

DataType getOutputType(int32_t index) const noexcept
    Get the output type of this layer.

bool outputTypeIsSet(int32_t index) const noexcept
    Return whether the output type has been set for this layer.

void resetOutputType(int32_t index) noexcept
    Reset the output type for this layer.
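The precision-control methods inherited from ILayer apply to activation layers like any other layer. A minimal sketch of requesting FP16 for an existing layer; the function name is a hypothetical helper, and the effect of these hints depends on the precision flags enabled on the builder config (e.g. BuilderFlag::kFP16):

#include <NvInfer.h>

using namespace nvinfer1;

// Sketch only: request FP16 computation and output for an existing layer.
// These calls are hints; the builder honors them only when the matching
// precision flags are enabled on the builder config.
void requestFp16(IActivationLayer& layer)
{
    layer.setName("act0_fp16");               // name appears in logs and profiling
    layer.setPrecision(DataType::kHALF);      // computational precision hint
    layer.setOutputType(0, DataType::kHALF);  // type hint for output 0

    // precisionIsSet() and outputTypeIsSet(0) now return true;
    // resetPrecision() and resetOutputType(0) would undo the requests.
}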
Protected Attributes

apiv::VActivationLayer *mImpl

Protected Attributes inherited from nvinfer1::ILayer

apiv::VLayer *mLayer
Additional Inherited Members

Protected Member Functions inherited from nvinfer1::INoCopy

INoCopy(const INoCopy &other) = delete
INoCopy &operator=(const INoCopy &other) = delete
INoCopy(INoCopy &&other) = delete
INoCopy &operator=(INoCopy &&other) = delete
Detailed Description

An Activation layer in a network definition.

This layer applies a per-element activation function to its input.

The output has the same shape as the input.
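In typical use, an IActivationLayer is created with INetworkDefinition::addActivation and then configured through the members documented on this page. A minimal sketch; the helper function name is hypothetical, and the surrounding network and input tensor are assumed to exist elsewhere:

#include <NvInfer.h>

using namespace nvinfer1;

// Sketch: add a LeakyReLU activation to an existing network definition.
IActivationLayer* addLeakyRelu(INetworkDefinition& network, ITensor& input)
{
    // Create the layer with an initial activation type...
    IActivationLayer* act = network.addActivation(input, ActivationType::kLEAKY_RELU);
    // ...then configure it through the members documented here.
    act->setAlpha(0.1f);          // LeakyRelu negative slope
    act->setName("leaky_relu_0");
    return act;                   // output tensor: act->getOutput(0)
}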
Member Function Documentation

getActivationType()

ActivationType getActivationType() const noexcept    [inline]

Get the type of activation to be performed.
getAlpha()

float getAlpha() const noexcept    [inline]

Get the alpha parameter.
getBeta()

float getBeta() const noexcept    [inline]

Get the beta parameter.
setActivationType()

void setActivationType(ActivationType type) noexcept    [inline]

Set the type of activation to be performed.

On the DLA, the valid activation types are kRELU, kSIGMOID, kTANH, and kCLIP.
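For example, a sketch of steering an existing layer toward a DLA-valid type; the helper name and the kRELU fallback are illustrative choices, not part of this API:

#include <NvInfer.h>

// Sketch: force an existing activation layer onto a DLA-supported type.
// The kRELU fallback is an illustrative policy, not a recommendation.
void makeDlaFriendly(nvinfer1::IActivationLayer& layer)
{
    using nvinfer1::ActivationType;
    switch (layer.getActivationType())
    {
    case ActivationType::kRELU:
    case ActivationType::kSIGMOID:
    case ActivationType::kTANH:
    case ActivationType::kCLIP:
        break; // already valid on the DLA
    default:
        layer.setActivationType(ActivationType::kRELU);
        break;
    }
}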
setAlpha()

void setAlpha(float alpha) noexcept    [inline]

Set the alpha parameter (must be finite).

This parameter is used by the following activations: LeakyRelu, Elu, Selu, Softplus, Clip, HardSigmoid, ScaledTanh, ThresholdedRelu.

It is ignored by the other activations.
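A short sketch of the distinction, assuming a network definition and input tensor that exist elsewhere (the helper name is hypothetical):

#include <NvInfer.h>

using namespace nvinfer1;

// Sketch: alpha is meaningful for Elu but ignored for plain ReLU.
void alphaExamples(INetworkDefinition& network, ITensor& input)
{
    // Elu uses alpha: f(x) = alpha * (exp(x) - 1) for x < 0, x otherwise.
    IActivationLayer* elu = network.addActivation(input, ActivationType::kELU);
    elu->setAlpha(1.0f);

    // kRELU takes no parameters, so an alpha set here would be ignored.
    IActivationLayer* relu = network.addActivation(input, ActivationType::kRELU);
    relu->setAlpha(0.5f); // no effect on the built engine
}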
setBeta()

void setBeta(float beta) noexcept    [inline]

Set the beta parameter (must be finite).

This parameter is used by the following activations: Selu, Softplus, Clip, HardSigmoid, ScaledTanh.

It is ignored by the other activations.
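Clip is the common case that uses both parameters, with alpha as the lower bound and beta as the upper bound. A sketch, with the network and input assumed as above and a hypothetical helper name:

#include <NvInfer.h>

using namespace nvinfer1;

// Sketch: build a ReLU6-style activation with Clip.
// Clip computes f(x) = max(alpha, min(beta, x)).
IActivationLayer* addRelu6(INetworkDefinition& network, ITensor& input)
{
    IActivationLayer* clip = network.addActivation(input, ActivationType::kCLIP);
    clip->setAlpha(0.0f); // lower bound
    clip->setBeta(6.0f);  // upper bound
    return clip;
}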
Copyright © 2024 NVIDIA Corporation