NVIDIA NvNeural SDK
2022.2
GPU inference framework for NVIDIA Nsight Deep Learning Designer
IBypassLayer is an optional interface that marks a layer as one that can potentially forward its input and be skipped.
#include <nvneural/LayerTypes.h>
Public Member Functions
virtual const ILayer * getAliasedInput () const noexcept=0
Returns the input layer that should be aliased in place of this layer, or nullptr if this layer cannot be bypassed, either because it is not bypassable or because it does not meet the criteria for bypassing.
Public Member Functions inherited from IRefObject
virtual RefCount addRef () const noexcept=0
Increments the object's reference count.
virtual const void * queryInterface (TypeId interface) const noexcept=0
This is an overloaded member function, provided for convenience. It differs from the above function only in the arguments it accepts.
virtual void * queryInterface (TypeId interface) noexcept=0
Retrieves a new object interface pointer.
virtual RefCount release () const noexcept=0
Decrements the object's reference count and destroys the object if the reference count reaches zero.
Static Public Attributes
static const IRefObject::TypeId typeID = 0xeb843bc48914779eul
Interface TypeId for InterfaceOf purposes.
Static Public Attributes inherited from IRefObject
static const TypeId typeID = 0x14ecc3f9de638e1dul
Interface TypeId for InterfaceOf purposes.
Additional Inherited Members
Public Types inherited from IRefObject
using RefCount = std::uint32_t
Typedef used to track the number of active references to an object.
using TypeId = std::uint64_t
Every interface must define a unique TypeId. This should be randomized.
Protected Member Functions inherited from IRefObject
virtual ~IRefObject ()=default
A protected destructor prevents accidental stack allocation of IRefObjects or use with other smart-pointer classes like std::unique_ptr.
IBypassLayer is an optional interface that marks a layer as one that can potentially forward its input and be skipped.
This is a layer whose output will not change during inference; examples include dropout, selection, and mix layers (note that mix layers may be skippable but are not always a direct copy of a specific input). Bypassing can also apply when a layer's parameters happen to leave the output unchanged, as can occur with mix layers.
In these cases, the layer may be "bypassed", skipping the memory copy from input to output. The Network takes care of determining the correct layer dimensions; the internal dimensions will be those of either the IBypassLayer or the layer it aliases.