SmolVLM#

SmolVLM is Hugging Face's compact vision-language model, designed for on-device and memory-constrained deployment and featuring an efficient image-token compression strategy.

Task: Image-Text-to-Text
Architecture: SmolVLMForConditionalGeneration
Parameters: 256M – 2B
HF Org: HuggingFaceTB

Available Models#

  • SmolVLM-Instruct: 2B

  • SmolVLM-256M-Instruct: 256M

Architecture#

  • SmolVLMForConditionalGeneration

Example HF Models#

  • SmolVLM Instruct: HuggingFaceTB/SmolVLM-Instruct

  • SmolVLM 256M Instruct: HuggingFaceTB/SmolVLM-256M-Instruct
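
To avoid blocking the first training run on a network download, you can cache the weights locally ahead of time. A minimal sketch using the Hugging Face Hub CLI; it assumes the huggingface_hub package (which provides huggingface-cli) is installed in your environment:

# Pre-download either checkpoint into the local Hugging Face cache
huggingface-cli download HuggingFaceTB/SmolVLM-Instruct
huggingface-cli download HuggingFaceTB/SmolVLM-256M-Instruct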

Try with NeMo AutoModel#

Install NeMo AutoModel and follow the fine-tuning guide to configure a recipe for this model.

1. Install NeMo AutoModel (see the Installation Guide for full instructions):

pip install nemo-automodel

2. Clone the repo to get example recipes you can adapt:

git clone https://github.com/NVIDIA-NeMo/Automodel.git
cd Automodel

3. Fine-tune by adapting a base VLM recipe — override the model ID on the CLI:

automodel --nproc-per-node=8 examples/vlm_finetune/gemma3/gemma3_vl_4b_cord_v2.yaml \
  --model.pretrained_model_name_or_path <MODEL_HF_ID>

Replace <MODEL_HF_ID> with the model ID from Example HF Models above.
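
For example, to fine-tune the 256M variant using the ID listed under Example HF Models (lower --nproc-per-node to match your GPU count):

# Fine-tune SmolVLM-256M-Instruct by overriding the recipe's model ID
automodel --nproc-per-node=8 examples/vlm_finetune/gemma3/gemma3_vl_4b_cord_v2.yaml \
  --model.pretrained_model_name_or_path HuggingFaceTB/SmolVLM-256M-Instruct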

Run with Docker#

1. Pull the container and mount a checkpoint directory:

docker run --gpus all -it --rm \
  --shm-size=8g \
  -v $(pwd)/checkpoints:/opt/Automodel/checkpoints \
  nvcr.io/nvidia/nemo-automodel:26.02.00

2. The example recipes live under /opt/Automodel/examples/. Change to the repository root:

cd /opt/Automodel

3. Fine-tune:

automodel --nproc-per-node=8 examples/vlm_finetune/gemma3/gemma3_vl_4b_cord_v2.yaml \
  --model.pretrained_model_name_or_path <MODEL_HF_ID>
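
Before launching, it can help to confirm that the container actually sees your GPUs. A one-off check using the same image; this assumes nvidia-smi is available in the container, which is standard for NGC images:

# List the GPUs visible inside the container
docker run --gpus all --rm nvcr.io/nvidia/nemo-automodel:26.02.00 nvidia-smi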

See the Installation Guide and VLM Fine-Tuning Guide.

Fine-Tuning#

See the VLM Fine-Tuning Guide.

Hugging Face Model Cards#