# Use Triton on SageMaker
The following pointers describe how to deploy Triton Inference Server on AWS SageMaker to serve trained models in production:
* See `docker/sagemaker/serve` for details on how Triton Inference Server is deployed.
* See `qa/L0_sagemaker/test.sh` for example usage and testing.
* See the AWS SageMaker documentation for more details.
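As a rough sketch of what such a deployment consumes, SageMaker hands the container a `model.tar.gz` whose contents follow Triton's model-repository layout (a directory per model containing `config.pbtxt` and numbered version subdirectories). The snippet below packages a minimal repository into that archive format; the model name `resnet`, the `config.pbtxt` contents, and the helper function are illustrative placeholders, not part of the Triton or SageMaker APIs.

```python
import os
import tarfile
import tempfile
from pathlib import Path


def package_model_repository(repo_dir: str, output_tar: str) -> None:
    """Pack a Triton model repository into a model.tar.gz whose top-level
    entries are the model directories themselves (no wrapping folder)."""
    with tarfile.open(output_tar, "w:gz") as tar:
        for entry in sorted(Path(repo_dir).iterdir()):
            tar.add(entry, arcname=entry.name)


# Build a minimal example repository: one model, one version directory.
repo = Path(tempfile.mkdtemp())
version_dir = repo / "resnet" / "1"
version_dir.mkdir(parents=True)

# Placeholder config and model artifact for illustration only.
(repo / "resnet" / "config.pbtxt").write_text(
    'name: "resnet"\nplatform: "onnxruntime_onnx"\n'
)
(version_dir / "model.onnx").write_bytes(b"")

out = os.path.join(tempfile.mkdtemp(), "model.tar.gz")
package_model_repository(str(repo), out)

with tarfile.open(out) as tar:
    print(sorted(tar.getnames()))
```

The resulting archive would then be uploaded to S3 and referenced as the `ModelDataUrl` when creating the SageMaker model, with the Triton container image as `Image`.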