# Common Component Combinations

The table below shows typical adoption patterns and how the components complement one another:

| Use Case              | Core Components                                                             | Optional Additions                      |
| --------------------- | --------------------------------------------------------------------------- | --------------------------------------- |
| Basic ML Inference    | TensorRT + Triton                                                           | DALI, GPU Operator                      |
| Speech/NLP Pipeline   | Riva SDK + Triton                                                           | DALI, TensorRT                          |
| Single-Node LLM       | TensorRT-LLM + Dynamo                                                       | Model Optimizer                         |
| Distributed LLM       | Dynamo + KV Block Manager + NIXL + Router + TensorRT-LLM                    | Planner, Model Express, Model Optimizer |
| Kubernetes Deployment | GPU Operator + KAI Scheduler                                                | Network Operator, Grove                 |
| Full GenAI Stack      | Dynamo + NIXL + KV Block Manager + Router + Grove + KAI Scheduler + Planner | AIConfigurator, AIPerf                  |
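As an illustration of the simplest pairing above (TensorRT + Triton for basic ML inference), the sketch below launches a Triton Inference Server container serving TensorRT-optimized models from a local model repository. This is a minimal, hedged example: the image tag (`24.08-py3`) and the `/models` repository path are placeholders you would replace with your own, and it assumes Docker with the NVIDIA Container Toolkit is installed.

```shell
# Minimal sketch: serve TensorRT engine files from a local model repository
# with Triton. Replace the image tag and repository path with your own values.
docker run --rm --gpus all \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /path/to/models:/models \
  nvcr.io/nvidia/tritonserver:24.08-py3 \
  tritonserver --model-repository=/models
```

Port 8000 exposes Triton's HTTP endpoint, 8001 gRPC, and 8002 Prometheus metrics; adding DALI or the GPU Operator (the optional additions in that row) layers in GPU-accelerated preprocessing and Kubernetes-managed driver/runtime setup, respectively.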

## Architecture Overview

![NVIDIA Inference Architecture Overview](https://files.buildwithfern.com/nvidia-dsx.docs.buildwithfern.com/dsx/7a4fccb3867d34ea53ec1fcf7ed6b8e9060b9978c93d9d73909f65a2310c01cc/_dot_dot_/docs/guides/inference-ra/assets/images/nira-arch-overview.png)