NVIDIA NvNeural SDK  2022.2
GPU inference framework for NVIDIA Nsight Deep Learning Designer
nvneural::IOnnxGenerationLayer Class Reference (abstract)

ONNX export interface for ILayer. More...

#include <nvneural/OnnxTypes.h>

Inheritance diagram for nvneural::IOnnxGenerationLayer: inherits nvneural::IRefObject.

Public Member Functions

virtual NeuralResult generateLayerOnnx (IOnnxGenerationHost *pOnnxHost) const noexcept=0
 Generates ONNX operators for this layer. More...
 
- Public Member Functions inherited from nvneural::IRefObject
virtual RefCount addRef () const noexcept=0
 Increments the object's reference count. More...
 
virtual const void * queryInterface (TypeId interface) const noexcept=0
 This is an overloaded member function, provided for convenience. It differs from the non-const overload below only in the const-qualification of the object and of the returned pointer.
 
virtual void * queryInterface (TypeId interface) noexcept=0
 Retrieves a new object interface pointer. More...
 
virtual RefCount release () const noexcept=0
 Decrements the object's reference count and destroys the object if the reference count reaches zero. More...
 

Static Public Attributes

static const IRefObject::TypeId typeID = 0x3846c67a6509ae19ul
 Interface TypeId for InterfaceOf purposes.
 
- Static Public Attributes inherited from nvneural::IRefObject
static const TypeId typeID = 0x14ecc3f9de638e1dul
 Interface TypeId for InterfaceOf purposes.
 

Additional Inherited Members

- Public Types inherited from nvneural::IRefObject
using RefCount = std::uint32_t
 Typedef used to track the number of active references to an object.
 
using TypeId = std::uint64_t
 Every interface must define a unique TypeId. This should be randomized.
 
- Protected Member Functions inherited from nvneural::IRefObject
virtual ~IRefObject ()=default
 A protected destructor prevents accidental stack-allocation of IRefObjects or use with other smart pointer classes like std::unique_ptr.
 

Detailed Description

ONNX export interface for ILayer.

Layers should implement this interface to support export to ONNX. By default, layers that do not implement this interface will be replaced by opaque placeholder instances.
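The following is a minimal sketch of how a layer might opt in to ONNX export. MyConvLayer and its layer base class are hypothetical illustrations; only IOnnxGenerationLayer, IOnnxGenerationHost, NeuralResult, and the generateLayerOnnx signature are taken from this page, and the nvneural namespace qualification is assumed.

    // Sketch: a hypothetical layer that supports ONNX export.
    // BaseConvLayer stands in for the layer's usual ILayer implementation.
    #include <nvneural/OnnxTypes.h>

    class MyConvLayer : public BaseConvLayer,                 // hypothetical base class
                        public nvneural::IOnnxGenerationLayer
    {
    public:
        // Emits this layer's ONNX operators into the host's graph.
        nvneural::NeuralResult generateLayerOnnx(
            nvneural::IOnnxGenerationHost* pOnnxHost) const noexcept override;

        // queryInterface should also report IOnnxGenerationLayer::typeID
        // so exporters can discover this interface at run time.
    };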

Member Function Documentation

◆ generateLayerOnnx()

virtual NeuralResult nvneural::IOnnxGenerationLayer::generateLayerOnnx ( IOnnxGenerationHost * pOnnxHost) const
pure virtual, noexcept

Generates ONNX operators for this layer.

Do not generate operators for the layer's trailing post-activation; the tool handles that.

If your layer implements multiple fused operations with an activation in between, this function must emit all of those operations, including the intermediate activation. Only the trailing activation should be omitted.

Parameters
pOnnxHost: ONNX host interface. The layer should dump its current state into the graph contained in this object.
Returns
NeuralResult::Success normally, or a failure code if something went wrong and the export should be canceled. You may return NeuralResult::Unsupported to treat this layer as though it had no IOnnxGenerationLayer interface; this is useful when the layer inherits from a larger parent class but needs to opt out of ONNX generation. Do not modify the graph if you take this path.
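As a hedged illustration of the success and opt-out paths described above, a possible implementation is sketched below. The concrete IOnnxGenerationHost emission API is not documented on this page, so the operator-emission step is left as a comment; MyConvLayer and m_exportToOnnx are hypothetical names.

    nvneural::NeuralResult MyConvLayer::generateLayerOnnx(
        nvneural::IOnnxGenerationHost* pOnnxHost) const noexcept
    {
        // Opt-out path: behave as though the layer had no
        // IOnnxGenerationLayer interface. The graph owned by
        // pOnnxHost must not be modified on this path.
        if (!m_exportToOnnx)                       // hypothetical member flag
            return nvneural::NeuralResult::Unsupported;

        // Emit this layer's operators into the host's graph here,
        // including any fused intermediate activation but excluding
        // the trailing post-activation (the tool emits that).
        // ... calls on pOnnxHost go here ...

        return nvneural::NeuralResult::Success;
    }

An exporter can detect whether a layer supports this interface at run time by calling queryInterface(IOnnxGenerationLayer::typeID) on the layer object, per the IRefObject contract summarized above.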

The documentation for this class was generated from the following file:
OnnxTypes.h