TensorRT 7.2.0.9
#include "NvInferRuntimeCommon.h"
Classes

class nvinfer1::Weights
    An array of weights used as a layer parameter.
class nvinfer1::IHostMemory
    Class to handle library-allocated memory that is accessible to the user.
class nvinfer1::IPlugin
    Plugin class for user-implemented layers.
class nvinfer1::IPluginExt
    Plugin class for user-implemented layers.
class nvinfer1::IDimensionExpr
class nvinfer1::IExprBuilder
class nvinfer1::DimsExprs
class nvinfer1::DynamicPluginTensorDesc
class nvinfer1::IPluginV2DynamicExt
class nvinfer1::IProfiler
    Application-implemented interface for profiling.
class nvinfer1::IRuntime
    Allows a serialized, functionally unsafe engine to be deserialized.
class nvinfer1::IRefitter
    Updates weights in an engine.
class nvinfer1::IPluginFactory
    Plugin factory for deserialization.
class nvinfer1::IOptimizationProfile
    Optimization profile for dynamic input dimensions and shape tensors.
class nvinfer1::ICudaEngine
    An engine for executing inference on a built network, with functionally unsafe features.
class nvinfer1::IExecutionContext
    Context for executing inference using an engine, with functionally unsafe features.
Namespaces

namespace nvinfer1
    The TensorRT API version 1 namespace.
Enumerations

enum nvinfer1::EngineCapability : int32_t { nvinfer1::EngineCapability::kDEFAULT = 0, nvinfer1::EngineCapability::kSAFE_GPU = 1, nvinfer1::EngineCapability::kSAFE_DLA = 2 }
    List of supported engine capability flows.
enum nvinfer1::DimensionOperation : int32_t { nvinfer1::DimensionOperation::kSUM = 0, nvinfer1::DimensionOperation::kPROD = 1, nvinfer1::DimensionOperation::kMAX = 2, nvinfer1::DimensionOperation::kMIN = 3, nvinfer1::DimensionOperation::kSUB = 4, nvinfer1::DimensionOperation::kEQUAL = 5, nvinfer1::DimensionOperation::kLESS = 6, nvinfer1::DimensionOperation::kFLOOR_DIV = 7, nvinfer1::DimensionOperation::kCEIL_DIV = 8 }
    An operation on two IDimensionExpr, which represent integer expressions used in dimension computations.
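These operations are applied to IDimensionExpr operands through IExprBuilder::operation(); when both operands are build-time constants, the result folds to a constant. The following self-contained sketch (not the real API — the enum is copied from the header, the evaluator is illustrative) shows the integer semantics of each operation:

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative copy of nvinfer1::DimensionOperation from NvInferRuntime.h.
enum class DimensionOperation : int32_t
{
    kSUM = 0, kPROD = 1, kMAX = 2, kMIN = 3, kSUB = 4,
    kEQUAL = 5, kLESS = 6, kFLOOR_DIV = 7, kCEIL_DIV = 8
};

// Hypothetical helper: evaluate one operation on two constant dimensions,
// mirroring what constant folding of two IDimensionExpr constants yields.
constexpr int32_t evalDimOp(DimensionOperation op, int32_t x, int32_t y)
{
    switch (op)
    {
    case DimensionOperation::kSUM:       return x + y;
    case DimensionOperation::kPROD:      return x * y;
    case DimensionOperation::kMAX:       return std::max(x, y);
    case DimensionOperation::kMIN:       return std::min(x, y);
    case DimensionOperation::kSUB:       return x - y;
    case DimensionOperation::kEQUAL:     return x == y ? 1 : 0; // 1 if equal, else 0
    case DimensionOperation::kLESS:      return x < y ? 1 : 0;  // 1 if x < y, else 0
    case DimensionOperation::kFLOOR_DIV: return x / y;           // floor of x / y
    case DimensionOperation::kCEIL_DIV:  return (x + y - 1) / y; // ceiling of x / y
    }
    return 0;
}
```

kFLOOR_DIV and kCEIL_DIV are the pair typically needed when a plugin computes an output extent from a strided input extent, e.g. ceil-dividing a spatial dimension by a stride.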
enum nvinfer1::WeightsRole : int32_t { nvinfer1::WeightsRole::kKERNEL = 0, nvinfer1::WeightsRole::kBIAS = 1, nvinfer1::WeightsRole::kSHIFT = 2, nvinfer1::WeightsRole::kSCALE = 3, nvinfer1::WeightsRole::kCONSTANT = 4 }
    How a layer uses particular Weights.
enum nvinfer1::DeviceType : int32_t { nvinfer1::DeviceType::kGPU, nvinfer1::DeviceType::kDLA }
    The device that this layer/network will execute on.
enum nvinfer1::OptProfileSelector : int32_t { nvinfer1::OptProfileSelector::kMIN = 0, nvinfer1::OptProfileSelector::kOPT = 1, nvinfer1::OptProfileSelector::kMAX = 2 }
    When setting or querying optimization profile parameters (such as shape tensor inputs or dynamic dimensions), select whether we are interested in the minimum, optimum, or maximum values for these parameters. The minimum and maximum specify the permitted range that is supported at runtime, while the optimum value is used for the kernel selection. This should be the "typical" value that is expected to occur at runtime.
Functions

template<>
constexpr int32_t nvinfer1::EnumMax< EngineCapability > ()
    Maximum number of elements in EngineCapability enum.
template<>
constexpr int32_t nvinfer1::EnumMax< DimensionOperation > ()
    Maximum number of elements in DimensionOperation enum.
template<>
constexpr int32_t nvinfer1::EnumMax< WeightsRole > ()
    Maximum number of elements in WeightsRole enum.
template<>
constexpr int32_t nvinfer1::EnumMax< DeviceType > ()
    Maximum number of elements in DeviceType enum.
template<>
constexpr int32_t nvinfer1::EnumMax< OptProfileSelector > ()
    Maximum number of elements in OptProfileSelector enum.
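The EnumMax<T>() specializations above follow a single idiom: a declared-but-undefined primary template plus one constexpr specialization per enum returning its enumerator count, letting callers size lookup tables or bounds-check raw integers at compile time. A minimal sketch of the same pattern, using a toy namespace so it does not collide with the real nvinfer1 definitions:

```cpp
#include <cstdint>

namespace sketch
{
// Toy enum mirroring nvinfer1::DeviceType for illustration.
enum class DeviceType : int32_t { kGPU, kDLA };

// Primary template is declared but never defined: using EnumMax with an
// enum that lacks a specialization is a compile-time error.
template <typename T>
constexpr int32_t EnumMax();

// One specialization per enum returns the number of enumerators.
template <>
constexpr int32_t EnumMax<DeviceType>()
{
    return 2; // kGPU, kDLA
}

// Example use: validate a raw integer before casting it to the enum.
constexpr bool isValidDeviceType(int32_t v)
{
    return v >= 0 && v < EnumMax<DeviceType>();
}
} // namespace sketch
```

Because everything is constexpr, expressions such as array sizes (`int table[sketch::EnumMax<sketch::DeviceType>()];`) are legal.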
IRuntime * nvinfer1::anonymous_namespace{NvInferRuntime.h}::createInferRuntime (ILogger &logger)
    Create an instance of an IRuntime class.
IRefitter * nvinfer1::anonymous_namespace{NvInferRuntime.h}::createInferRefitter (ICudaEngine &engine, ILogger &logger)
    Create an instance of an IRefitter class.
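Both factory functions return raw pointers whose lifetime the caller owns; in this API generation the objects are released by calling destroy() on them rather than delete. A common pattern wraps the returned pointer in a std::unique_ptr with a destroy()-calling deleter. The sketch below uses a hypothetical MockRuntime stand-in (not the real nvinfer1::IRuntime) so it is self-contained:

```cpp
#include <memory>

// Hypothetical stand-in for a TensorRT interface: like IRuntime, it is
// released via destroy(), not delete. The flag lets us observe the release.
struct MockRuntime
{
    explicit MockRuntime(bool& flag) : destroyedFlag(flag) {}
    bool& destroyedFlag;
    void destroy()
    {
        destroyedFlag = true;
        delete this; // the real destroy() also frees the object
    }
};

// Deleter that routes unique_ptr cleanup through destroy().
struct TrtDestroyer
{
    template <typename T>
    void operator()(T* p) const
    {
        if (p)
            p->destroy();
    }
};

template <typename T>
using TrtUniquePtr = std::unique_ptr<T, TrtDestroyer>;
```

With the real API this would read `TrtUniquePtr<nvinfer1::IRuntime> runtime(nvinfer1::createInferRuntime(logger));`, and the runtime is destroyed automatically when the pointer goes out of scope.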
This is the top-level API file for the TensorRT extended runtime library.