# Diffusion

## Overview

Dynamo supports serving diffusion models across multiple backends, enabling image and video generation from text prompts. Each backend exposes its diffusion capabilities through the same Dynamo pipeline infrastructure used for LLM inference, including frontend routing, scaling, and observability.

## Support Matrix

| Modality | vLLM-Omni | SGLang | TRT-LLM |
|----------|-----------|--------|---------|
| Text-to-Text | ✅ | ✅ | ❌ |
| Text-to-Image | ✅ | ✅ | ❌ |
| Text-to-Video | ✅ | ✅ | ✅ |
| Image-to-Video | ❌ | ❌ | ❌ |

**Status:** ✅ Supported | ❌ Not supported

<Note>Image-to-video support is planned and coming soon across all backends.</Note>
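Because diffusion workers sit behind the same Dynamo frontend as LLM workers, a client interacts with them over HTTP like any other endpoint. The sketch below builds a minimal text-to-image request; the `localhost:8000` address, the `/v1/images/generations` route, and the model ID are assumptions for illustration — consult your backend's guide for the actual endpoint and model names.

```python
# Sketch of a text-to-image request against a running Dynamo frontend.
# The address, route, and model ID below are hypothetical; check the
# backend-specific guides for the real values.
import json
import urllib.request

def build_image_request(prompt: str, model: str) -> dict:
    """Assemble a minimal OpenAI-style image-generation payload."""
    return {
        "model": model,
        "prompt": prompt,
        "n": 1,                # number of images to generate
        "size": "1024x1024",   # requested output resolution
    }

payload = build_image_request(
    prompt="a watercolor painting of a mountain lake",
    model="stabilityai/stable-diffusion-3.5-large",  # hypothetical model ID
)

req = urllib.request.Request(
    "http://localhost:8000/v1/images/generations",  # assumed frontend address
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Uncomment once a diffusion worker is deployed and serving:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

The same pattern applies to video generation; only the route and payload fields differ per backend.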

## Backend Documentation

For deployment guides, configuration, and examples for each backend:

- **[vLLM-Omni](/dynamo/v1.0.0/user-guides/diffusion/v-llm-omni)**
- **[SGLang Diffusion](/dynamo/v1.0.0/user-guides/diffusion/sg-lang-diffusion)**
- **[TRT-LLM Diffusion](/dynamo/v1.0.0/user-guides/diffusion/trt-llm-diffusion)**
- **[FastVideo (custom worker)](/dynamo/v1.0.0/user-guides/diffusion/fastvideo)**