TensorRT 10.5.0
An object for parsing ONNX models into a TensorRT network definition.
#include <NvOnnxParser.h>
Public Member Functions

virtual bool parse(void const *serialized_onnx_model, size_t serialized_onnx_model_size, const char *model_path=nullptr) noexcept=0
Parse a serialized ONNX model into the TensorRT network. This method has very limited diagnostics. If parsing the serialized model fails for any reason (e.g. unsupported IR version, unsupported opset, etc.), it is the user's responsibility to intercept and report the error. For better diagnostics, use the parseFromFile method below.

virtual bool parseFromFile(const char *onnxModelFile, int verbosity) noexcept=0
Parse an ONNX model file, which can be a binary protobuf or a text ONNX model. This method calls parse internally.

virtual TRT_DEPRECATED bool supportsModel(void const *serialized_onnx_model, size_t serialized_onnx_model_size, SubGraphCollection_t &sub_graph_collection, const char *model_path=nullptr) noexcept=0
Check whether TensorRT supports a particular ONNX model. If the function returns true, one can proceed to engine building without having to call parse or parseFromFile.

virtual bool parseWithWeightDescriptors(void const *serialized_onnx_model, size_t serialized_onnx_model_size) noexcept=0
Parse a serialized ONNX model into the TensorRT network with consideration of user-provided weights.

virtual bool supportsOperator(const char *op_name) const noexcept=0
Returns whether the specified operator may be supported by the parser.

virtual int getNbErrors() const noexcept=0
Get the number of errors that occurred during prior calls to parse.

virtual IParserError const *getError(int index) const noexcept=0
Get an error that occurred during prior calls to parse.

virtual void clearErrors() noexcept=0
Clear errors from prior calls to parse.

virtual ~IParser() noexcept=default

virtual char const *const *getUsedVCPluginLibraries(int64_t &nbPluginLibs) const noexcept=0
Query the plugin libraries needed to implement operations used by the parser in a version-compatible engine.

virtual void setFlags(OnnxParserFlags onnxParserFlags) noexcept=0
Set the parser flags.

virtual OnnxParserFlags getFlags() const noexcept=0
Get the parser flags. Defaults to 0.

virtual void clearFlag(OnnxParserFlag onnxParserFlag) noexcept=0
Clear a single parser flag.

virtual void setFlag(OnnxParserFlag onnxParserFlag) noexcept=0
Set a single parser flag.

virtual bool getFlag(OnnxParserFlag onnxParserFlag) const noexcept=0
Returns true if the parser flag is set.

virtual nvinfer1::ITensor const *getLayerOutputTensor(char const *name, int64_t i) noexcept=0
Return the i-th output ITensor object for the ONNX layer "name".

virtual bool supportsModelV2(void const *serializedOnnxModel, size_t serializedOnnxModelSize, char const *modelPath=nullptr) noexcept=0
Check whether TensorRT supports a particular ONNX model. If the function returns true, one can proceed to engine building without having to call parse or parseFromFile. Results can be queried through getNbSubgraphs, isSubgraphSupported, and getSubgraphNodes.

virtual int64_t getNbSubgraphs() noexcept=0
Get the number of subgraphs. Calling this function before calling supportsModelV2 results in undefined behavior.

virtual bool isSubgraphSupported(int64_t const index) noexcept=0
Returns whether the subgraph is supported. Calling this function before calling supportsModelV2 results in undefined behavior.

virtual int64_t *getSubgraphNodes(int64_t const index, int64_t &subgraphLength) noexcept=0
Get the nodes of the specified subgraph. Calling this function before calling supportsModelV2 results in undefined behavior.
Detailed Description

An object for parsing ONNX models into a TensorRT network definition.

Member Function Documentation

virtual ~IParser() noexcept=default
virtual void clearErrors() noexcept=0
Clear errors from prior calls to parse.

virtual void clearFlag(OnnxParserFlag onnxParserFlag) noexcept=0
Clear a single parser flag.
virtual IParserError const *getError(int index) const noexcept=0
Get an error that occurred during prior calls to parse.

virtual bool getFlag(OnnxParserFlag onnxParserFlag) const noexcept=0
Returns true if the parser flag is set.

virtual OnnxParserFlags getFlags() const noexcept=0
Get the parser flags. Defaults to 0.
virtual nvinfer1::ITensor const *getLayerOutputTensor(char const *name, int64_t i) noexcept=0
Return the i-th output ITensor object for the ONNX layer "name". If "name" is not found or i is out of range, return nullptr. In the case of multiple nodes sharing the same name this function will return the output tensors of the first instance of the node in the ONNX graph.

Parameters:
    name  The name of the ONNX layer.
    i     The index of the output. i must be in range [0, layer.num_outputs).
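A brief sketch of looking up a node's output tensor by name; it assumes `parser` is an already-populated `nvonnxparser::IParser*`, and the node name "conv1" is a placeholder:

```cpp
// Assumes a model containing a node named "conv1" has already been parsed.
nvinfer1::ITensor const* out = parser->getLayerOutputTensor("conv1", 0);
if (out != nullptr)
{
    std::cout << "output tensor: " << out->getName() << std::endl;
}
// A nullptr result means the name was not found or the index was out of range.
```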
virtual int getNbErrors() const noexcept=0
Get the number of errors that occurred during prior calls to parse.

virtual int64_t getNbSubgraphs() noexcept=0
Get the number of subgraphs. Calling this function before calling supportsModelV2 results in undefined behavior.
virtual int64_t *getSubgraphNodes(int64_t const index, int64_t &subgraphLength) noexcept=0
Get the nodes of the specified subgraph. Calling this function before calling supportsModelV2 results in undefined behavior.

Parameters:
    index           Index of the subgraph.
    subgraphLength  Returns the length of the subgraph as reference.
virtual char const *const *getUsedVCPluginLibraries(int64_t &nbPluginLibs) const noexcept=0
Query the plugin libraries needed to implement operations used by the parser in a version-compatible engine.

This provides a list of plugin libraries on the filesystem needed to implement operations in the parsed network. If you are building a version-compatible engine using this network, provide this list to IBuilderConfig::setPluginsToSerialize to serialize these plugins along with the version-compatible engine. Alternatively, if you want to ship these plugin libraries externally to the engine, ensure that IPluginRegistry::loadLibrary is used to load these libraries in the appropriate runtime before deserializing the corresponding engine.

Parameters:
    [out] nbPluginLibs  Returns the number of plugin libraries in the array, or -1 if there was an error.

Returns:
    An array of nbPluginLibs C-strings describing plugin library paths on the filesystem if nbPluginLibs > 0, or nullptr otherwise. This array is owned by the IParser, and the pointers in the array are only valid until the next call to parse(), supportsModel(), parseFromFile(), or parseWithWeightDescriptors().
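A sketch of wiring this query into a version-compatible build; the helper name is hypothetical, and an existing parser and builder config are assumed:

```cpp
#include <cstdint>
#include <NvInfer.h>
#include <NvOnnxParser.h>

// Hypothetical helper: register the parser's plugin libraries for serialization
// into a version-compatible engine via IBuilderConfig::setPluginsToSerialize.
void attachVCPluginLibraries(nvonnxparser::IParser& parser, nvinfer1::IBuilderConfig& config)
{
    int64_t nbPluginLibs = 0;
    char const* const* libs = parser.getUsedVCPluginLibraries(nbPluginLibs);
    if (nbPluginLibs > 0)
    {
        config.setPluginsToSerialize(libs, static_cast<int32_t>(nbPluginLibs));
    }
    // nbPluginLibs == -1 signals an error. The returned pointers stay valid only
    // until the next parsing call, so consume them before parsing again.
}
```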
virtual bool isSubgraphSupported(int64_t const index) noexcept=0
Returns whether the subgraph is supported. Calling this function before calling supportsModelV2 results in undefined behavior.

Parameters:
    index  Index of the subgraph.
virtual bool parse(void const *serialized_onnx_model, size_t serialized_onnx_model_size, const char *model_path=nullptr) noexcept=0
Parse a serialized ONNX model into the TensorRT network. This method has very limited diagnostics. If parsing the serialized model fails for any reason (e.g. unsupported IR version, unsupported opset, etc.), it is the user's responsibility to intercept and report the error. For better diagnostics, use the parseFromFile method below.

Parameters:
    serialized_onnx_model       Pointer to the serialized ONNX model
    serialized_onnx_model_size  Size of the serialized ONNX model in bytes
    model_path                  Absolute path to the model file for loading external weights if required
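A minimal sketch of driving parse from an in-memory buffer; the logger class, smart-pointer ownership, and the "model.onnx" path are illustrative assumptions, not part of this API reference:

```cpp
#include <fstream>
#include <iostream>
#include <memory>
#include <vector>
#include <NvInfer.h>
#include <NvOnnxParser.h>

// Minimal logger; any nvinfer1::ILogger implementation works here.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, char const* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
};

int main()
{
    Logger logger;
    auto builder = std::unique_ptr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(logger));
    auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(0));
    auto parser = std::unique_ptr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, logger));

    // Read the serialized model into memory ("model.onnx" is a placeholder path).
    std::ifstream file("model.onnx", std::ios::binary | std::ios::ate);
    std::vector<char> model(static_cast<size_t>(file.tellg()));
    file.seekg(0);
    file.read(model.data(), model.size());

    // Pass the model path as well so external weights, if any, can be located.
    if (!parser->parse(model.data(), model.size(), "model.onnx"))
    {
        std::cerr << "parse failed; inspect errors via getNbErrors()/getError()" << std::endl;
        return 1;
    }
    return 0;
}
```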
virtual bool parseFromFile(const char *onnxModelFile, int verbosity) noexcept=0
Parse an ONNX model file, which can be a binary protobuf or a text ONNX model. This method calls parse internally.

Parameters:
    onnxModelFile  Name of the ONNX model file to parse
    verbosity      Verbosity level to use for logging
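Since parse offers limited diagnostics, a common pattern is to call parseFromFile and walk the error list on failure. A sketch, with a hypothetical helper name:

```cpp
#include <iostream>
#include <NvInfer.h>
#include <NvOnnxParser.h>

// Hypothetical helper: parse a file and print any accumulated parser errors.
bool parseAndReport(nvonnxparser::IParser& parser, char const* path)
{
    int const verbosity = static_cast<int>(nvinfer1::ILogger::Severity::kWARNING);
    if (parser.parseFromFile(path, verbosity))
        return true;

    // On failure, walk the accumulated errors for a full diagnostic.
    for (int i = 0; i < parser.getNbErrors(); ++i)
    {
        nvonnxparser::IParserError const* err = parser.getError(i);
        std::cerr << err->file() << ":" << err->line() << ": " << err->desc() << std::endl;
    }
    parser.clearErrors();
    return false;
}
```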
virtual bool parseWithWeightDescriptors(void const *serialized_onnx_model, size_t serialized_onnx_model_size) noexcept=0
Parse a serialized ONNX model into the TensorRT network with consideration of user-provided weights.

Parameters:
    serialized_onnx_model       Pointer to the serialized ONNX model
    serialized_onnx_model_size  Size of the serialized ONNX model in bytes
virtual void setFlag(OnnxParserFlag onnxParserFlag) noexcept=0
Set a single parser flag.

Adds the input parser flag to the already enabled flags.
virtual void setFlags(OnnxParserFlags onnxParserFlags) noexcept=0
Set the parser flags.

The flags are listed in the OnnxParserFlag enum.

Parameters:
    onnxParserFlags  The flags used when parsing an ONNX model.
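A brief sketch of the flag API, assuming `parser` is an existing `nvonnxparser::IParser*` and using kNATIVE_INSTANCENORM as a representative OnnxParserFlag value:

```cpp
// Enable a single flag before parsing; setFlags() would overwrite all flags at once.
parser->setFlag(nvonnxparser::OnnxParserFlag::kNATIVE_INSTANCENORM);
if (parser->getFlag(nvonnxparser::OnnxParserFlag::kNATIVE_INSTANCENORM))
{
    // The flag is set; clearFlag() removes it again.
    parser->clearFlag(nvonnxparser::OnnxParserFlag::kNATIVE_INSTANCENORM);
}
```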
virtual TRT_DEPRECATED bool supportsModel(void const *serialized_onnx_model, size_t serialized_onnx_model_size, SubGraphCollection_t &sub_graph_collection, const char *model_path=nullptr) noexcept=0
Check whether TensorRT supports a particular ONNX model. If the function returns true, one can proceed to engine building without having to call parse or parseFromFile.

[DEPRECATED] Deprecated in TensorRT 10.1. See supportsModelV2.

Parameters:
    serialized_onnx_model       Pointer to the serialized ONNX model
    serialized_onnx_model_size  Size of the serialized ONNX model in bytes
    sub_graph_collection        Container to hold supported subgraphs
    model_path                  Absolute path to the model file for loading external weights if required
virtual bool supportsModelV2(void const *serializedOnnxModel, size_t serializedOnnxModelSize, char const *modelPath=nullptr) noexcept=0
Check whether TensorRT supports a particular ONNX model. If the function returns true, one can proceed to engine building without having to call parse or parseFromFile. Results can be queried through getNbSubgraphs, isSubgraphSupported, and getSubgraphNodes.

Parameters:
    serializedOnnxModel      Pointer to the serialized ONNX model
    serializedOnnxModelSize  Size of the serialized ONNX model in bytes
    modelPath                Absolute path to the model file for loading external weights if required
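A sketch of the V2 support-query workflow; the helper name is hypothetical, and the parser and model buffer are assumed to exist:

```cpp
#include <cstdint>
#include <iostream>
#include <NvOnnxParser.h>

// Hypothetical helper: check model support and list each subgraph's node count.
bool checkSupport(nvonnxparser::IParser& parser, void const* model, size_t modelSize)
{
    bool const supported = parser.supportsModelV2(model, modelSize);

    // Querying subgraphs is only defined after supportsModelV2 has been called.
    for (int64_t i = 0; i < parser.getNbSubgraphs(); ++i)
    {
        int64_t length = 0;
        int64_t* nodes = parser.getSubgraphNodes(i, length);
        static_cast<void>(nodes); // indices of the ONNX nodes in this subgraph
        std::cout << "subgraph " << i << ": " << length << " nodes, supported="
                  << parser.isSubgraphSupported(i) << std::endl;
    }
    return supported;
}
```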
virtual bool supportsOperator(const char *op_name) const noexcept=0
Returns whether the specified operator may be supported by the parser.

Note that a result of true does not guarantee that the operator will be supported in all cases (i.e., this function may return false positives).

Parameters:
    op_name  The name of the ONNX operator to check for support
Copyright © 2024 NVIDIA Corporation