NVIDIA TensorRT Inference Server

The NVIDIA TensorRT Inference Server provides a cloud inferencing solution optimized for NVIDIA GPUs. The server provides an inference service via an HTTP or gRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server.
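
As a hedged illustration of the HTTP endpoint, the following Python sketch probes a running server; it assumes the default HTTP port (8000) and the version-1 REST paths /api/health/live, /api/health/ready, and /api/status, so adjust these for your deployment:

    # Minimal liveness/readiness probe against the server's HTTP endpoint.
    # The port and paths are assumptions based on the v1 HTTP API defaults.
    import requests

    BASE_URL = "http://localhost:8000"

    def is_live():
        # 200 means the server process is up.
        return requests.get(BASE_URL + "/api/health/live").status_code == 200

    def is_ready():
        # 200 means the server and its models can accept inference requests.
        return requests.get(BASE_URL + "/api/health/ready").status_code == 200

    if __name__ == "__main__":
        print("live:", is_live())
        print("ready:", is_ready())
        # /api/status reports server and per-model status.
        print(requests.get(BASE_URL + "/api/status").text)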

What’s New In 1.6.0

  • Added TensorRT 6 support, including support for TensorRT dynamic shapes.

  • Added shared memory support as an alpha feature. Input and output tensors can now be passed between client and server via shared memory instead of over the network. Currently only system (CPU) shared memory is supported.

  • Amazon S3 is now supported as a remote file system for model repositories. Use the s3:// prefix on model repository paths to reference S3 locations; a minimal upload sketch follows this list.

  • The inference server library API is available as a beta in this release. The library API allows you to link against libtrtserver.so and include all of the inference server functionality directly in your application.

  • GRPC endpoint performance improvement. The inference server’s GRPC endpoint now uses significantly less memory while delivering higher performance.

  • The ensemble scheduler is now more flexible in allowing batching and non-batching models to be composed together in an ensemble.

  • The ensemble scheduler will now keep tensors in GPU memory between models when possible. Doing so significantly increases performance of some ensembles by avoiding copies to and from system memory.

  • The performance client, perf_client, now supports models with variable-sized input tensors.
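
The S3 support only requires that the usual repository layout exist under an s3:// path; how the objects get there is up to you. Below is a minimal, hedged sketch that mirrors a local repository into S3 before the server is pointed at it. The bucket name, prefix, and local path are hypothetical, and boto3 must be able to find AWS credentials in the usual way:

    # Hypothetical sketch: mirror a local model repository into S3 so the
    # server can be given a model repository path such as
    #   s3://my-trtis-models/model_repository
    # The bucket, prefix and local path are made-up examples.
    import pathlib
    import boto3

    LOCAL_REPO = pathlib.Path("/tmp/model_repository")
    BUCKET = "my-trtis-models"
    PREFIX = "model_repository"

    s3 = boto3.client("s3")
    for path in LOCAL_REPO.rglob("*"):
        if path.is_file():
            key = PREFIX + "/" + path.relative_to(LOCAL_REPO).as_posix()
            s3.upload_file(str(path), BUCKET, key)
            print("uploaded", key)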

Features

  • Multiple framework support. The server can manage any number and mix of models (limited by system disk and memory resources). Supports TensorRT, TensorFlow GraphDef, TensorFlow SavedModel, ONNX and Caffe2 NetDef model formats. Also supports TensorFlow-TensorRT integrated models. Variable-size input and output tensors are allowed if supported by the framework. See Capabilities for detailed support information for each framework.

  • Concurrent model execution support. Multiple models (or multiple instances of the same model) can run simultaneously on the same GPU.

  • Batching support. For models that support batching, the server can accept requests for a batch of inputs and respond with the corresponding batch of outputs. The inference server also supports multiple scheduling and batching algorithms that combine individual inference requests together to improve inference throughput. These scheduling and batching decisions are transparent to the client requesting inference.

  • Custom backend support. The inference server allows individual models to be implemented with custom backends instead of by a deep-learning framework. With a custom backend a model can implement any logic desired, while still benefiting from the GPU support, concurrent execution, dynamic batching and other features provided by the server.

  • Ensemble support. An ensemble represents a pipeline of one or more models and the connection of input and output tensors between those models. A single inference request to an ensemble will trigger the execution of the entire pipeline.

  • Multi-GPU support. The server can distribute inferencing across all system GPUs.

  • The inference server monitors the model repository for any change and dynamically reloads the model(s) when necessary, without requiring a server restart. Models and model versions can be added and removed, and model configurations can be modified, while the server is running; a minimal repository layout is sketched after this list.

  • Model repositories may reside on a locally accessible file system (e.g. NFS), in Google Cloud Storage or in Amazon S3.

  • Readiness and liveness health endpoints suitable for any orchestration or deployment framework, such as Kubernetes.

  • Metrics indicating GPU utilization, server throughput, and server latency; a metrics scrape is sketched after this list.

  • C library interface allows the full functionality of the inference server to be included directly in an application.
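
For reference, a model repository is a directory tree with one subdirectory per model, numeric version subdirectories, and a config.pbtxt describing the model. The sketch below builds such a layout for a hypothetical TensorRT model; the model name, tensor names, shapes, and the /tmp path are illustrative only, and the framework-specific model file (e.g. model.plan) would be copied into the version directory. Because the server watches the repository, adding a new version directory like this is picked up without a restart:

    # Hypothetical sketch of a minimal model repository layout:
    #   /tmp/model_repository/resnet50_plan/config.pbtxt
    #   /tmp/model_repository/resnet50_plan/1/model.plan
    # Model name, tensor names, shapes and paths are illustrative only.
    import pathlib

    REPO = pathlib.Path("/tmp/model_repository")
    MODEL = REPO / "resnet50_plan"

    CONFIG_PBTXT = """\
    name: "resnet50_plan"
    platform: "tensorrt_plan"
    max_batch_size: 8
    input [
      {
        name: "input"
        data_type: TYPE_FP32
        dims: [ 3, 224, 224 ]
      }
    ]
    output [
      {
        name: "output"
        data_type: TYPE_FP32
        dims: [ 1000 ]
      }
    ]
    """

    # Version "1" of the model; the TensorRT plan file would be copied here.
    (MODEL / "1").mkdir(parents=True, exist_ok=True)
    (MODEL / "config.pbtxt").write_text(CONFIG_PBTXT)
    # e.g. shutil.copy("resnet50.plan", MODEL / "1" / "model.plan")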
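
The metrics are exposed in Prometheus text format. As a minimal sketch, the snippet below scrapes them; it assumes the default metrics port (8002), the /metrics path, and that the server's metrics are reported with an nv_ name prefix, so verify these against your deployment:

    # Minimal Prometheus-format metrics scrape; the port, path and the nv_
    # metric-name prefix are assumptions to check against your server.
    import requests

    METRICS_URL = "http://localhost:8002/metrics"

    for line in requests.get(METRICS_URL).text.splitlines():
        # Print only the nv_-prefixed metric lines (GPU utilization,
        # inference counts, latencies, ...), skipping # HELP/# TYPE comments.
        if line.startswith("nv_"):
            print(line)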
