TensorRT 10.0.1
nvonnxparser::IParser Class Reference (abstract)

An object for parsing ONNX models into a TensorRT network definition. More...

#include <NvOnnxParser.h>

Public Member Functions

virtual bool parse (void const *serialized_onnx_model, size_t serialized_onnx_model_size, const char *model_path=nullptr)=0
 Parse a serialized ONNX model into the TensorRT network. This method has very limited diagnostics. If parsing the serialized model fails for any reason (e.g. unsupported IR version, unsupported opset, etc.), it is the user's responsibility to intercept and report the error. To obtain better diagnostics, use the parseFromFile method below. More...
 
virtual bool parseFromFile (const char *onnxModelFile, int verbosity)=0
 Parse an ONNX model file, which can be a binary protobuf or a text ONNX model; this calls the parse method internally. More...
 
virtual bool supportsModel (void const *serialized_onnx_model, size_t serialized_onnx_model_size, SubGraphCollection_t &sub_graph_collection, const char *model_path=nullptr)=0
 Check whether TensorRT supports a particular ONNX model. If the function returns True, one can proceed to engine building without having to call parse or parseFromFile. More...
 
virtual bool parseWithWeightDescriptors (void const *serialized_onnx_model, size_t serialized_onnx_model_size)=0
 Parse a serialized ONNX model into the TensorRT network with consideration of user provided weights. More...
 
virtual bool supportsOperator (const char *op_name) const =0
 Returns whether the specified operator may be supported by the parser. More...
 
virtual int getNbErrors () const =0
 Get the number of errors that occurred during prior calls to parse. More...
 
virtual IParserError const * getError (int index) const =0
 Get an error that occurred during prior calls to parse. More...
 
virtual void clearErrors ()=0
 Clear errors from prior calls to parse. More...
 
virtual ~IParser () noexcept=default
 
virtual char const *const * getUsedVCPluginLibraries (int64_t &nbPluginLibs) const noexcept=0
 Query the plugin libraries needed to implement operations used by the parser in a version-compatible engine. More...
 
virtual void setFlags (OnnxParserFlags onnxParserFlags) noexcept=0
 Set the parser flags. More...
 
virtual OnnxParserFlags getFlags () const noexcept=0
 Get the parser flags. Defaults to 0. More...
 
virtual void clearFlag (OnnxParserFlag onnxParserFlag) noexcept=0
 Clear a parser flag. More...
 
virtual void setFlag (OnnxParserFlag onnxParserFlag) noexcept=0
 Set a single parser flag. More...
 
virtual bool getFlag (OnnxParserFlag onnxParserFlag) const noexcept=0
 Returns true if the parser flag is set. More...
 
virtual nvinfer1::ITensor const * getLayerOutputTensor (char const *name, int64_t i)=0
 Return the i-th output ITensor object for the ONNX layer "name". More...
 

Detailed Description

An object for parsing ONNX models into a TensorRT network definition.
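
A minimal usage sketch (the logger class MyLogger and the model path "model.onnx" are placeholder assumptions, not part of this API):

    #include <NvInfer.h>
    #include <NvOnnxParser.h>
    #include <iostream>
    #include <memory>

    // Minimal ILogger implementation (hypothetical; any ILogger works).
    class MyLogger : public nvinfer1::ILogger
    {
        void log(Severity severity, char const* msg) noexcept override
        {
            if (severity <= Severity::kWARNING)
                std::cout << msg << std::endl;
        }
    };

    int main()
    {
        MyLogger logger;
        auto builder = std::unique_ptr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(logger));
        auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(0));

        // Attach an ONNX parser to the network definition.
        auto parser = std::unique_ptr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, logger));

        // parseFromFile provides richer diagnostics than parse().
        if (!parser->parseFromFile("model.onnx", static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
        {
            for (int i = 0; i < parser->getNbErrors(); ++i)
                std::cout << parser->getError(i)->desc() << std::endl;
            return 1;
        }
        return 0;
    }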

Constructor & Destructor Documentation

◆ ~IParser()

virtual nvonnxparser::IParser::~IParser ( )
virtual, default, noexcept

Member Function Documentation

◆ clearErrors()

virtual void nvonnxparser::IParser::clearErrors ( )
pure virtual

Clear errors from prior calls to parse.

See also
getNbErrors() getError() IParserError

◆ clearFlag()

virtual void nvonnxparser::IParser::clearFlag ( OnnxParserFlag  onnxParserFlag)
pure virtual, noexcept

Clear a parser flag.

Clears the parser flag from the enabled flags.

See also
setFlags()

◆ getError()

virtual IParserError const * nvonnxparser::IParser::getError ( int  index) const
pure virtual

Get an error that occurred during prior calls to parse.

See also
getNbErrors() clearErrors() IParserError
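
A sketch of error reporting (parser is assumed to be an existing IParser instance): the recorded errors can be printed and then cleared.

    // Print every recorded parser error, then reset the error state.
    for (int i = 0; i < parser->getNbErrors(); ++i)
    {
        nvonnxparser::IParserError const* err = parser->getError(i);
        std::cerr << "ONNX parse error " << static_cast<int>(err->code())
                  << " at node " << err->node() << ": " << err->desc() << std::endl;
    }
    parser->clearErrors();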

◆ getFlag()

virtual bool nvonnxparser::IParser::getFlag ( OnnxParserFlag  onnxParserFlag) const
pure virtual, noexcept

Returns true if the parser flag is set.

See also
getFlags()
Returns
True if flag is set, false if unset.

◆ getFlags()

virtual OnnxParserFlags nvonnxparser::IParser::getFlags ( ) const
pure virtual, noexcept

Get the parser flags. Defaults to 0.

Returns
The parser flags as a bitmask.
See also
setFlags()

◆ getLayerOutputTensor()

virtual nvinfer1::ITensor const * nvonnxparser::IParser::getLayerOutputTensor ( char const *  name,
int64_t  i 
)
pure virtual

Return the i-th output ITensor object for the ONNX layer "name".

Return the i-th output ITensor object for the ONNX layer "name". If "name" is not found or i is out of range, return nullptr. In the case of multiple nodes sharing the same name this function will return the output tensors of the first instance of the node in the ONNX graph.

Parameters
name - The name of the ONNX layer.
i - The index of the output. i must be in range [0, layer.num_outputs).
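
A short sketch (the layer name "conv1" and the parser pointer are placeholder assumptions) that looks up a layer's first output tensor:

    // Look up the first output tensor of a hypothetical ONNX layer "conv1".
    nvinfer1::ITensor const* t = parser->getLayerOutputTensor("conv1", 0);
    if (t != nullptr)
        std::cout << "Output tensor: " << t->getName() << std::endl;
    else
        std::cout << "Layer not found or output index out of range." << std::endl;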

◆ getNbErrors()

virtual int nvonnxparser::IParser::getNbErrors ( ) const
pure virtual

Get the number of errors that occurred during prior calls to parse.

See also
getError() clearErrors() IParserError

◆ getUsedVCPluginLibraries()

virtual char const *const * nvonnxparser::IParser::getUsedVCPluginLibraries ( int64_t &  nbPluginLibs) const
pure virtual, noexcept

Query the plugin libraries needed to implement operations used by the parser in a version-compatible engine.

This provides a list of plugin libraries on the filesystem needed to implement operations in the parsed network. If you are building a version-compatible engine using this network, provide this list to IBuilderConfig::setPluginsToSerialize to serialize these plugins along with the version-compatible engine, or, if you want to ship these plugin libraries externally to the engine, ensure that IPluginRegistry::loadLibrary is used to load these libraries in the appropriate runtime before deserializing the corresponding engine.

Parameters
[out] nbPluginLibs - Returns the number of plugin libraries in the array, or -1 if there was an error.
Returns
Array of nbPluginLibs C-strings describing plugin library paths on the filesystem if nbPluginLibs > 0, or nullptr otherwise. This array is owned by the IParser, and the pointers in the array are only valid until the next call to parse(), supportsModel(), parseFromFile(), or parseWithWeightDescriptors().
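
A sketch of the workflow described above (parser and config are assumed to be an existing IParser and IBuilderConfig):

    // Query the plugin libraries needed for a version-compatible engine
    // and ask the builder to serialize them with the engine.
    int64_t nbPluginLibs = 0;
    char const* const* pluginLibs = parser->getUsedVCPluginLibraries(nbPluginLibs);
    if (nbPluginLibs < 0)
    {
        std::cerr << "Failed to query version-compatible plugin libraries." << std::endl;
    }
    else if (nbPluginLibs > 0)
    {
        config->setPluginsToSerialize(pluginLibs, static_cast<int32_t>(nbPluginLibs));
    }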

◆ parse()

virtual bool nvonnxparser::IParser::parse ( void const *  serialized_onnx_model,
size_t  serialized_onnx_model_size,
const char *  model_path = nullptr 
)
pure virtual

Parse a serialized ONNX model into the TensorRT network. This method has very limited diagnostics. If parsing the serialized model fails for any reason (e.g. unsupported IR version, unsupported opset, etc.), it is the user's responsibility to intercept and report the error. To obtain better diagnostics, use the parseFromFile method below.

Parameters
serialized_onnx_model - Pointer to the serialized ONNX model
serialized_onnx_model_size - Size of the serialized ONNX model in bytes
model_path - Absolute path to the model file for loading external weights if required
Returns
true if the model was parsed successfully
See also
getNbErrors() getError()
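
A minimal sketch of a buffer-based parse (the helper function name and file path are illustrative only):

    #include <NvOnnxParser.h>
    #include <fstream>
    #include <iostream>
    #include <vector>

    // Read a serialized ONNX model into memory and parse it directly.
    bool parseFromBuffer(nvonnxparser::IParser& parser, char const* path)
    {
        std::ifstream file(path, std::ios::binary | std::ios::ate);
        std::streamsize const size = file.tellg();
        file.seekg(0, std::ios::beg);
        std::vector<char> buffer(static_cast<size_t>(size));
        file.read(buffer.data(), size);

        // The optional model_path argument lets the parser resolve external weights.
        if (!parser.parse(buffer.data(), buffer.size(), path))
        {
            // Diagnostics are limited here; see getNbErrors()/getError().
            std::cerr << "parse() reported " << parser.getNbErrors() << " errors." << std::endl;
            return false;
        }
        return true;
    }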

◆ parseFromFile()

virtual bool nvonnxparser::IParser::parseFromFile ( const char *  onnxModelFile,
int  verbosity 
)
pure virtual

Parse an ONNX model file, which can be a binary protobuf or a text ONNX model; this calls the parse method internally.

Parameters
onnxModelFile - The path of the ONNX model file to parse
verbosity - The verbosity level to use when reporting parser diagnostics
Returns
true if the model was parsed successfully
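
A brief sketch; the verbosity value is commonly an nvinfer1::ILogger::Severity cast to int, and the model path is a placeholder:

    // Parse directly from a file with verbose diagnostics.
    bool const ok = parser->parseFromFile(
        "model.onnx", static_cast<int>(nvinfer1::ILogger::Severity::kVERBOSE));
    if (!ok)
    {
        for (int i = 0; i < parser->getNbErrors(); ++i)
            std::cerr << parser->getError(i)->desc() << std::endl;
    }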

◆ parseWithWeightDescriptors()

virtual bool nvonnxparser::IParser::parseWithWeightDescriptors ( void const *  serialized_onnx_model,
size_t  serialized_onnx_model_size 
)
pure virtual

Parse a serialized ONNX model into the TensorRT network with consideration of user provided weights.

Parameters
serialized_onnx_model - Pointer to the serialized ONNX model
serialized_onnx_model_size - Size of the serialized ONNX model in bytes
Returns
true if the model was parsed successfully
See also
getNbErrors() getError()

◆ setFlag()

virtual void nvonnxparser::IParser::setFlag ( OnnxParserFlag  onnxParserFlag)
pure virtual, noexcept

Set a single parser flag.

Add the input parser flag to the already enabled flags.

See also
setFlags()

◆ setFlags()

virtual void nvonnxparser::IParser::setFlags ( OnnxParserFlags  onnxParserFlags)
pure virtual, noexcept

Set the parser flags.

The flags are listed in the OnnxParserFlag enum.

Parameters
onnxParserFlags - The flags used when parsing an ONNX model.
Note
This function will override the previously set flags, rather than bitwise ORing in the new flags.
See also
getFlags()
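
A sketch of flag manipulation (kNATIVE_INSTANCENORM is used here only as an example member of the OnnxParserFlag enum; the bitmask form assumes the usual 1U << flag encoding of OnnxParserFlags):

    // Replace all previously set flags with a single flag...
    parser->setFlags(1U << static_cast<uint32_t>(nvonnxparser::OnnxParserFlag::kNATIVE_INSTANCENORM));

    // ...or adjust individual flags without touching the others.
    parser->setFlag(nvonnxparser::OnnxParserFlag::kNATIVE_INSTANCENORM);
    if (parser->getFlag(nvonnxparser::OnnxParserFlag::kNATIVE_INSTANCENORM))
    {
        parser->clearFlag(nvonnxparser::OnnxParserFlag::kNATIVE_INSTANCENORM);
    }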

◆ supportsModel()

virtual bool nvonnxparser::IParser::supportsModel ( void const *  serialized_onnx_model,
size_t  serialized_onnx_model_size,
SubGraphCollection_t sub_graph_collection,
const char *  model_path = nullptr 
)
pure virtual

Check whether TensorRT supports a particular ONNX model. If the function returns True, one can proceed to engine building without having to call parse or parseFromFile.

Parameters
serialized_onnx_model - Pointer to the serialized ONNX model
serialized_onnx_model_size - Size of the serialized ONNX model in bytes
sub_graph_collection - Container to hold supported subgraphs
model_path - Absolute path to the model file for loading external weights if required
Returns
true if the model is supported
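
A sketch (buffer is assumed to hold the serialized model, as in the parse() example above; SubGraphCollection_t is declared in NvOnnxParser.h):

    // Check support before building; sub_graph_collection reports which
    // subgraphs are supported when the whole model is not.
    SubGraphCollection_t subGraphs;
    if (!parser->supportsModel(buffer.data(), buffer.size(), subGraphs))
    {
        std::cerr << "Model is only partially supported; "
                  << subGraphs.size() << " subgraph(s) reported." << std::endl;
    }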

◆ supportsOperator()

virtual bool nvonnxparser::IParser::supportsOperator ( const char *  op_name) const
pure virtual

Returns whether the specified operator may be supported by the parser.

Note that a result of true does not guarantee that the operator will be supported in all cases (i.e., this function may return false positives).

Parameters
op_name - The name of the ONNX operator to check for support
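
A one-line sketch (the operator name is only an example):

    // Ask whether the parser recognizes an ONNX operator at all; a true
    // result does not guarantee support for every attribute combination.
    if (!parser->supportsOperator("NonMaxSuppression"))
    {
        std::cerr << "Operator not recognized by this parser build." << std::endl;
    }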

The documentation for this class was generated from the following file: NvOnnxParser.h
