NVIDIA Optimized Frameworks

vLLM Release 26.04

The NVIDIA vLLM Release 26.04 consists of one container image available on NGC: vLLM.

Contents of the vLLM container

This container image contains the complete source for this version of vLLM in /opt/vllm. vLLM is pre-built and installed in the default system Python environment (/usr/local/lib/python3.12/dist-packages/vllm) in the container image. Visit the vLLM documentation to learn more about vLLM.

The NVIDIA vLLM container is optimized for use with NVIDIA GPUs and contains the following software for GPU acceleration:

  • Please refer to the CUDA section for the list of libraries inherited from the CUDA container.
  • vLLM: 0.19.0
  • flashinfer: 0.6.7.post3
  • transformers: 4.57.6
  • flash-attention: 2.7.4.post1
  • xgrammar: 0.1.33
  • Torch: 2.12.0a0+0291f960b6.nv26.04.48445190

Driver Requirements

Release 26.04 is based on CUDA 13.2.1. For comprehensive and up-to-date driver compatibility information, refer to the NVIDIA CUDA compatibility documentation.

Key Features and Enhancements

This vLLM release includes the following key features and enhancements.

  • Added support for Nemotron Super V3
  • Added support for Nemotron 3 Nano Omni

Announcements

  • None.

Known Issues

  • vllm serve uses aggressive GPU memory allocation by default (effectively --gpu-memory-utilization ≈ 1.0). On systems with shared/unified GPU memory (for example, DGX Spark or Jetson platforms), this can lead to out-of-memory errors. If you encounter an OOM error, start vllm serve with a lower utilization value, for example: vllm serve <model> --gpu-memory-utilization 0.7.

  • When running Nemotron Nano V3 or Nemotron Super V3 NVFP4 models on DGX Spark, you must limit the number of sequences to 4:
    • vllm serve <model> --max-num-seqs 4
  • When running Nemotron 3 Nano Omni, you must override the model architecture reference:
    • vllm serve <model> --hf-overrides='{"architectures":["NemotronH_Nano_VL_V2"]}'
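When scripting launches that combine these workarounds, it can help to build the --hf-overrides value with json.dumps and pass the command as an argument list, which avoids shell-quoting mistakes in the embedded JSON. A minimal sketch (the <model> placeholder stands in for an actual model path or Hugging Face id, as in the commands above):

```python
import json

# Placeholder for the actual model path or Hugging Face model id.
model = "<model>"

# Serializing the override dict avoids hand-quoting JSON on the command line.
overrides = json.dumps({"architectures": ["NemotronH_Nano_VL_V2"]})

# Argument list suitable for subprocess.run(cmd); list form needs no shell escaping.
cmd = [
    "vllm", "serve", model,
    "--max-num-seqs", "4",          # DGX Spark sequence limit from the note above
    "--hf-overrides", overrides,    # architecture override for Nemotron 3 Nano Omni
]
print(" ".join(cmd))
```

From here, `subprocess.run(cmd)` would start the server without any additional shell escaping.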
© Copyright 2026, NVIDIA. Last updated on Apr 27, 2026