# Backend-Platform Support Matrix
Triton supports inference across various platforms, such as cloud, data center, edge, and embedded devices, on NVIDIA GPUs, x86 and ARM CPUs, and AWS Inferentia, but it does so by relying on its backends, and not every backend supports every platform. This document describes which compute platforms each Triton backend supports. "GPU" in this document refers to an NVIDIA GPU. See the GPU, Driver, and CUDA Support Matrix to learn more about supported GPUs.
## Ubuntu 22.04
The table below lists the target device(s) each backend supports for inference on each platform.
| Backend | x86 | ARM-SBSA |
| --- | --- | --- |
| TensorRT | :heavy_check_mark: GPU | :heavy_check_mark: GPU |
| ONNX Runtime | :heavy_check_mark: GPU | :heavy_check_mark: GPU |
| TensorFlow | :heavy_check_mark: GPU | :heavy_check_mark: GPU |
| PyTorch | :heavy_check_mark: GPU | :heavy_check_mark: GPU |
| OpenVINO | :x: GPU | :x: GPU |
| Python[1] | :heavy_check_mark: GPU | :heavy_check_mark: GPU |
| DALI | :heavy_check_mark: GPU | |
| FIL | :heavy_check_mark: GPU | Unsupported |
| TensorRT-LLM | :heavy_check_mark: GPU | :heavy_check_mark: GPU |
| vLLM | :heavy_check_mark: GPU | Unsupported |
## Windows 10
Only TensorRT and ONNX Runtime backends are supported on Windows.
| Backend | x86 | ARM-SBSA |
| --- | --- | --- |
| TensorRT | :heavy_check_mark: GPU | :heavy_check_mark: GPU |
| ONNX Runtime | :heavy_check_mark: GPU | :heavy_check_mark: GPU |
## Jetson JetPack
The following backends are currently supported on Jetson JetPack:
| Backend | Jetson |
| --- | --- |
| TensorRT | :heavy_check_mark: GPU |
| ONNX Runtime | :heavy_check_mark: GPU |
| TensorFlow | :heavy_check_mark: GPU |
| PyTorch | :heavy_check_mark: GPU |
| Python[1] | :x: GPU |
For more details, see Triton Inference Server Support for Jetson and JetPack.
## AWS Inferentia
Currently, inference on AWS Inferentia is supported only via the Python backend, where the deployed Python script invokes the AWS Neuron SDK.
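As a rough illustration of that flow, below is a minimal sketch of a Python-backend `model.py` that loads a Neuron-compiled TorchScript model and runs it per request. The artifact name `model.pt`, the tensor names `INPUT0`/`OUTPUT0`, and the `torch_neuron` import are illustrative assumptions, not something this document specifies.

```python
# Minimal sketch of a Python-backend model.py for AWS Inferentia.
# Assumptions (not from this document): the model repository ships a
# torch-neuron compiled TorchScript artifact named "model.pt", and the
# model's config.pbtxt declares tensors named "INPUT0" and "OUTPUT0".
import torch
import torch_neuron  # provided by the AWS Neuron SDK; registers the Neuron runtime ops
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Load the Neuron-compiled TorchScript model once at startup.
        self.model = torch.jit.load("model.pt")

    def execute(self, requests):
        responses = []
        for request in requests:
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            # Forward pass runs on the Inferentia device via the Neuron runtime.
            result = self.model(torch.from_numpy(in0.as_numpy()))
            out0 = pb_utils.Tensor("OUTPUT0", result.detach().numpy())
            responses.append(pb_utils.InferenceResponse(output_tensors=[out0]))
        return responses
```

Triton itself never touches the Inferentia device here; the Python backend hosts this script, and the script delegates execution to the Neuron runtime.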