NVIDIA NvNeural SDK 2022.2
GPU inference framework for NVIDIA Nsight Deep Learning Designer
nvneural::INetworkDebugger Class Reference (abstract)

INetworkDebugger defines a callback interface for network inference.

#include <nvneural/NetworkTypes.h>

Inheritance diagram for nvneural::INetworkDebugger: inherits nvneural::IRefObject.

Public Member Functions

virtual NeuralResult beginLayer (ILayer *pLayer) noexcept=0
 Indicates that layer inference is beginning.
 
virtual NeuralResult beginNetwork (INetworkRuntime *pNetworkRuntime, ILayer *pModelLayer, ILayerList *pExternLayerList) noexcept=0
 Indicates the beginning of subnetwork inference within a pass.
 
virtual NeuralResult beginPass (ILayer **ppInferenceLayers, size_t layerCount) noexcept=0
 Indicates the beginning of an inference pass.
 
virtual NeuralResult endLayer (ILayer *pLayer) noexcept=0
 Indicates that layer inference has completed.
 
virtual NeuralResult endNetwork (INetworkRuntime *pNetworkRuntime, ILayer *pModelLayer, ILayerList *pExternLayerList) noexcept=0
 Indicates the end of subnetwork inference within a pass.
 
virtual NeuralResult endPass () noexcept=0
 Indicates the end of an inference pass.
 
virtual NeuralResult skipLayer (ILayer *pLayer) noexcept=0
 Indicates that a layer was optimized out during inference and will be skipped.
 
- Public Member Functions inherited from nvneural::IRefObject
virtual RefCount addRef () const noexcept=0
 Increments the object's reference count.
 
virtual const void * queryInterface (TypeId interface) const noexcept=0
 Retrieves a new object interface pointer (const overload).
 
virtual void * queryInterface (TypeId interface) noexcept=0
 Retrieves a new object interface pointer.
 
virtual RefCount release () const noexcept=0
 Decrements the object's reference count and destroys the object if the reference count reaches zero.
 

Static Public Attributes

static const IRefObject::TypeId typeID = 0x44b2d2279e4abca9ul
 Interface TypeId for InterfaceOf purposes.
 
- Static Public Attributes inherited from nvneural::IRefObject
static const TypeId typeID = 0x14ecc3f9de638e1dul
 Interface TypeId for InterfaceOf purposes.
 

Additional Inherited Members

- Public Types inherited from nvneural::IRefObject
using RefCount = std::uint32_t
 Typedef used to track the number of active references to an object.
 
using TypeId = std::uint64_t
 Every interface must define a unique TypeId. This should be randomized.
 
- Protected Member Functions inherited from nvneural::IRefObject
virtual ~IRefObject ()=default
 A protected destructor prevents accidental stack-allocation of IRefObjects or use with other smart pointer classes like std::unique_ptr.
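The typeID constants above are what queryInterface matches against. The following is a minimal sketch of that pattern; NeuralResult, the IRefObject reference-counting methods, and the real header layout are omitted, and all types here are hypothetical stand-ins (the actual definitions live in the SDK headers).

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-ins for the SDK types; not the real nvneural headers.
using TypeId = std::uint64_t;

struct IRefObject {
    static constexpr TypeId typeID = 0x14ecc3f9de638e1dul;  // value from the docs
    virtual void* queryInterface(TypeId interface) noexcept = 0;
    virtual ~IRefObject() = default;  // public here for brevity; protected in the SDK
};

struct INetworkDebugger : public IRefObject {
    static constexpr TypeId typeID = 0x44b2d2279e4abca9ul;  // value from the docs
};

// A concrete object answers queryInterface for every interface it implements
// and returns nullptr for ids it does not recognize.
struct MyDebugger : public INetworkDebugger {
    void* queryInterface(TypeId interface) noexcept override {
        if (interface == INetworkDebugger::typeID || interface == IRefObject::typeID)
            return static_cast<INetworkDebugger*>(this);
        return nullptr;  // unknown interface
    }
};
```

Usage: a caller holding only an IRefObject* can test for debugger support with `p->queryInterface(INetworkDebugger::typeID)` and cast the non-null result.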
 

Detailed Description

INetworkDebugger defines a callback interface for network inference.

During inference, each pass begins with a beginPass callback followed by a beginNetwork callback that initializes the nested-network stack. Nested networks (used by network layers) emit their own beginNetwork and endNetwork pairs.

Individual layers emit either a beginLayer/endLayer pair or, if bypassed, a single skipLayer call.

The endPass call is emitted at the end of successful inference.

A failure result returned from any INetworkDebugger callback aborts inference. Inference failures do not unwind debugger state; error paths should reset the debugger manually if necessary.
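The callback ordering described above can be replayed with a small self-contained sketch. NeuralResult, ILayer, INetworkRuntime, and ILayerList are hypothetical stand-ins here (the real ones come from nvneural/NetworkTypes.h), and the IRefObject base and reference counting are omitted for brevity.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical stand-ins for the SDK types; not the real nvneural headers.
enum class NeuralResult { Success, Failure };
struct INetworkRuntime {};
struct ILayerList {};
struct ILayer { std::string name; };

// A tracing debugger mirroring the INetworkDebugger callback shapes.
class TraceDebugger {
public:
    NeuralResult beginPass(ILayer** ppInferenceLayers, std::size_t layerCount) {
        (void)ppInferenceLayers;
        m_events.push_back("beginPass:" + std::to_string(layerCount));
        return NeuralResult::Success;
    }
    NeuralResult beginNetwork(INetworkRuntime*, ILayer* pModelLayer, ILayerList*) {
        // pModelLayer is nullptr for the outermost network.
        m_events.push_back(pModelLayer ? "beginNetwork:" + pModelLayer->name
                                       : "beginNetwork:<outermost>");
        return NeuralResult::Success;
    }
    NeuralResult beginLayer(ILayer* p) { m_events.push_back("beginLayer:" + p->name); return NeuralResult::Success; }
    NeuralResult endLayer(ILayer* p)   { m_events.push_back("endLayer:" + p->name);   return NeuralResult::Success; }
    NeuralResult skipLayer(ILayer* p)  { m_events.push_back("skipLayer:" + p->name);  return NeuralResult::Success; }
    NeuralResult endNetwork(INetworkRuntime*, ILayer* pModelLayer, ILayerList*) {
        m_events.push_back(pModelLayer ? "endNetwork:" + pModelLayer->name
                                       : "endNetwork:<outermost>");
        return NeuralResult::Success;
    }
    NeuralResult endPass() { m_events.push_back("endPass"); return NeuralResult::Success; }
    const std::vector<std::string>& events() const { return m_events; }
private:
    std::vector<std::string> m_events;
};

// Replay the documented order for a three-layer pass in which one layer
// was optimized out: beginPass, beginNetwork, per-layer begin/end (or
// skip), endNetwork, endPass.
std::vector<std::string> simulatePass() {
    ILayer conv{"conv1"}, act{"relu1"}, drop{"dropout1"};  // hypothetical layer names
    ILayer* layers[] = { &conv, &act, &drop };
    TraceDebugger dbg;
    dbg.beginPass(layers, 3);
    dbg.beginNetwork(nullptr, nullptr, nullptr);   // outermost network
    dbg.beginLayer(&conv); dbg.endLayer(&conv);
    dbg.beginLayer(&act);  dbg.endLayer(&act);
    dbg.skipLayer(&drop);                          // bypassed layer
    dbg.endNetwork(nullptr, nullptr, nullptr);
    dbg.endPass();
    return dbg.events();
}
```

A real implementation would additionally derive from INetworkDebugger, mark each override noexcept, and return a failure NeuralResult to abort inference.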

Member Function Documentation

◆ beginLayer()

virtual NeuralResult nvneural::INetworkDebugger::beginLayer (ILayer * pLayer)
pure virtual, noexcept

Indicates that layer inference is beginning.

Parameters
pLayer: Layer being inferenced

◆ beginNetwork()

virtual NeuralResult nvneural::INetworkDebugger::beginNetwork (INetworkRuntime * pNetworkRuntime, ILayer * pModelLayer, ILayerList * pExternLayerList)
pure virtual, noexcept

Indicates the beginning of subnetwork inference within a pass.

For consistency, this callback is also issued for the outermost network, at the same time as beginPass.

Parameters
pNetworkRuntime: The INetworkRuntime performing inference
pModelLayer: The network layer being inferenced; nullptr if this is the outermost network
pExternLayerList: The list of layers being inferenced within this network; nullptr if this is the outermost network

◆ beginPass()

virtual NeuralResult nvneural::INetworkDebugger::beginPass (ILayer ** ppInferenceLayers, size_t layerCount)
pure virtual, noexcept

Indicates the beginning of an inference pass.

Parameters
ppInferenceLayers: Ordered list of layers being inferenced
layerCount: Number of layers in ppInferenceLayers

◆ endLayer()

virtual NeuralResult nvneural::INetworkDebugger::endLayer (ILayer * pLayer)
pure virtual, noexcept

Indicates that layer inference has completed.

Parameters
pLayer: The inferenced layer

◆ endNetwork()

virtual NeuralResult nvneural::INetworkDebugger::endNetwork (INetworkRuntime * pNetworkRuntime, ILayer * pModelLayer, ILayerList * pExternLayerList)
pure virtual, noexcept

Indicates the end of subnetwork inference within a pass.

For consistency, this callback is also issued for the outermost network, at the same time as endPass. The arguments match beginNetwork.

Parameters
pNetworkRuntime: The INetworkRuntime performing inference
pModelLayer: The network layer that was inferenced; nullptr if this was the outermost network
pExternLayerList: The list of layers that were inferenced within this network; nullptr if this was the outermost network

◆ skipLayer()

virtual NeuralResult nvneural::INetworkDebugger::skipLayer (ILayer * pLayer)
pure virtual, noexcept

Indicates that a layer was optimized out during inference and will be skipped.

Parameters
pLayer: Layer being skipped
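One common use of this callback is tallying how many layers the optimizer removed versus how many actually ran. Below is a minimal hedged sketch of that bookkeeping; ILayer and NeuralResult are hypothetical stand-ins, and the IRefObject plumbing is omitted.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical stand-ins for the SDK types; not the real nvneural headers.
struct ILayer {};
enum class NeuralResult { Success };

// Debugger fragment that counts executed vs. skipped layers: endLayer fires
// once per layer that actually ran, skipLayer once per layer optimized out.
struct LayerTally {
    std::size_t executed = 0;
    std::size_t skipped  = 0;
    NeuralResult endLayer(ILayer*)  { ++executed; return NeuralResult::Success; }
    NeuralResult skipLayer(ILayer*) { ++skipped;  return NeuralResult::Success; }
};
```

Comparing the two counters after endPass shows at a glance how much of the model the optimizer bypassed.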

The documentation for this class was generated from the following file: