Fine-Tuning Tutorials#

Use the tutorials in this section to gain a deeper understanding of how the NVIDIA NeMo Customizer microservice enables fine-tuning tasks.

Tip

Tutorials are organized by complexity and typically build on one another. The tutorials reference NMP_BASE_URL, which is the base URL of your NeMo Platform deployment. Refer to the Quickstart Guide for setup instructions and for how to obtain your platform URL.
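As a minimal sketch of how the tutorials use this value, you can read NMP_BASE_URL from the environment and build endpoint URLs from it. The fallback value and the endpoint path below are illustrative assumptions, not real addresses:

```python
import os

# NMP_BASE_URL is assumed to be exported in your shell; the fallback here is
# a placeholder, not a real deployment.
NMP_BASE_URL = os.environ.get("NMP_BASE_URL", "https://nmp.example.com")

# Tutorials construct API endpoints from this base; the path is illustrative.
jobs_endpoint = f"{NMP_BASE_URL}/v1/customization/jobs"
print(jobs_endpoint)
```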


Getting Started#

Understanding Model Entities and Adapters

Learn the fundamentals of how NeMo Customizer works with Model Entities and Adapters, and how to choose the right approach for your project.

Understanding NeMo Customizer: Models, Training, and Resources

Dataset Preparation#

Format Training Datasets

Learn how to format datasets for different model types.

Format Training Dataset
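As a rough sketch of what a formatted training dataset can look like, the snippet below writes newline-delimited JSON records with "prompt" and "completion" fields, a common shape for supervised fine-tuning data. The exact schema varies by model type, so confirm it in the tutorial before preparing your own data:

```python
import json
import tempfile

# Illustrative SFT records; the "prompt"/"completion" field names are an
# assumption for this sketch -- verify the schema for your model type.
records = [
    {"prompt": "What is LoRA?", "completion": "Low-Rank Adaptation, a PEFT method."},
    {"prompt": "What is SFT?", "completion": "Supervised fine-tuning."},
]

# Write one JSON object per line (JSONL).
path = tempfile.mkstemp(suffix=".jsonl")[1]
with open(path, "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```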

Customization Jobs#

Fine-Tune a Model with Custom Data Using LoRA

Learn how to perform supervised fine-tuning with LoRA adapters using custom data.

LoRA Model Customization Job
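To give a feel for what a LoRA customization job request can contain, here is a hypothetical request body. The field names, model identifier, and hyperparameter values are assumptions for illustration; the tutorial and your deployment's API reference define the exact schema:

```python
# Hypothetical body for creating a LoRA customization job; all identifiers
# and field names below are assumptions, not a verified API contract.
lora_job = {
    "config": "meta/llama-3.1-8b-instruct",   # base model config (assumed name)
    "dataset": {"name": "my-sft-dataset"},    # dataset registered beforehand
    "hyperparameters": {
        "training_type": "sft",
        "finetuning_type": "lora",
        "epochs": 2,
        "batch_size": 8,
        "lora": {"adapter_dim": 16},          # rank of the LoRA adapter
    },
}
```

A job like this trains only the small low-rank adapter matrices rather than all model weights, which is why LoRA jobs need far less GPU memory than full SFT.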
Fine-Tune a Model with Custom Data Processing All Weights

Learn how to perform supervised fine-tuning using custom data by updating all model weights.

Full SFT Customization
Align a Model with DPO and Preference Data

Learn how to align a model with DPO (Direct Preference Optimization) to prefer certain kinds of responses over others.

DPO Customization
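As a sketch of what DPO preference data looks like, each training record pairs a prompt with a preferred and a rejected response, so the model learns to favor one style of answer over another. The field names here are assumptions; verify the schema in the dataset formatting and DPO tutorials:

```python
# One illustrative DPO preference record; field names are assumed for this
# sketch and may differ from the actual dataset schema.
preference_record = {
    "prompt": "Summarize LoRA in one sentence.",
    "chosen_response": "LoRA fine-tunes models by training small low-rank adapter matrices.",
    "rejected_response": "LoRA is a kind of radio protocol.",
}
```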
Fine-Tune an Embedding Model With Positive and Negative Samples Using LoRA

Learn how to fine-tune embedding models using LoRA merged training for improved question-answering and retrieval tasks.

Embedding Model Customization

Monitoring & Optimization#

Check Customization Job Metrics

Learn how to check job metrics using MLflow or Weights & Biases.

Checking Your Customization Job Metrics
Optimize Tokens per GPU

Learn how to optimize tokens-per-GPU throughput for a LoRA customization job.

Optimize for Tokens/GPU Throughput