# Building a Custom TensorRT-LLM Container
For the prebuilt container, see the [TensorRT-LLM Quick Start](/dynamo/v1.0.0/backends/tensor-rt-llm#quick-start).

## Building a Custom Container

If you need to build the container from source (for example, to apply custom modifications or target a different CUDA version):

```bash
# TensorRT-LLM uses git-lfs, which needs to be installed in advance.
apt-get update && apt-get -y install git git-lfs

# On an x86 machine:
python container/render.py --framework=trtllm --target=runtime --output-short-filename --cuda-version=13.1
docker build -t dynamo:trtllm-latest -f container/rendered.Dockerfile .

# On an ARM machine:
python container/render.py --framework=trtllm --target=runtime --platform=arm64 --output-short-filename --cuda-version=13.1
docker build -t dynamo:trtllm-latest -f container/rendered.Dockerfile .
```
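The only difference between the two invocations is the `--platform=arm64` flag, so the choice can be automated by inspecting the host architecture. A minimal sketch, where the `pick_platform_flag` helper is hypothetical and not part of the Dynamo repository:

```bash
# Hypothetical helper (not part of the Dynamo repo): map the output of
# `uname -m` to the render.py --platform flag used in the examples above.
pick_platform_flag() {
  case "$1" in
    x86_64)        echo "" ;;                  # x86: no extra flag needed
    aarch64|arm64) echo "--platform=arm64" ;;  # ARM: request the arm64 build
    *)             echo "unsupported: $1" >&2; return 1 ;;
  esac
}

# Example usage: render the Dockerfile for whatever host this runs on.
# python container/render.py --framework=trtllm --target=runtime \
#   $(pick_platform_flag "$(uname -m)") --output-short-filename --cuda-version=13.1
```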

Run the custom container:

```bash
./container/run.sh --framework trtllm -it
```