Fine-Tuning Tutorials#
Use the tutorials in this section to gain a deeper understanding of how the NVIDIA NeMo Customizer microservice enables fine-tuning tasks.
Tip
Tutorials are organized by complexity and typically build on one another. The tutorials reference NMP_BASE_URL, which is the base URL of your NeMo Platform deployment. Refer to the Quickstart Guide for setup instructions and for how to obtain your platform URL.
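As a minimal sketch of how the tutorials use NMP_BASE_URL, the snippet below reads it from the environment and composes a request URL. The `/v1/customization/jobs` path and the fallback hostname are illustrative assumptions; confirm exact routes in the Customizer API reference.

```python
import os

# NMP_BASE_URL comes from your NeMo Platform deployment (see the Quickstart
# Guide). The fallback value here is a placeholder, not a real endpoint.
base_url = os.environ.get("NMP_BASE_URL", "https://nemo.example.com")

# Tutorials compose endpoint URLs from the base URL; this path is illustrative.
jobs_url = f"{base_url}/v1/customization/jobs"
print(jobs_url)
```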
Getting Started#
Learn the fundamentals of how NeMo Customizer works with Model Entities and Adapters, and how to choose the right approach for your project.
Dataset Preparation#
Learn how to format datasets for different model types.
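As a hedged illustration of what dataset preparation produces, the sketch below writes training records as JSON Lines (one JSON object per line). The field names (`prompt`/`completion`) are an assumption that applies to completion-style models; chat models typically use a different schema, so confirm the exact fields in the Dataset Preparation tutorial for your model type.

```python
import json

# Illustrative records; the required field names depend on the model type.
records = [
    {"prompt": "What is the capital of France?", "completion": "Paris"},
    {"prompt": "Translate 'hello' to Spanish.", "completion": "hola"},
]

# Training data is commonly uploaded as JSON Lines: one JSON object per line.
with open("train.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```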
Customization Jobs#
Learn how to perform supervised fine-tuning with LoRA adapters using custom data.
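The shape of a LoRA fine-tuning request can be sketched as a JSON payload like the one below. All field names and values here (`config`, `dataset`, `hyperparameters`, `adapter_dim`, the model name) are assumptions for illustration; the authoritative schema is in the Customizer API reference.

```python
import json

# Hypothetical job payload for a LoRA supervised fine-tuning run.
job = {
    "config": "meta/llama-3.1-8b-instruct",  # base model config (example name)
    "dataset": {"name": "my-sft-dataset"},   # a previously uploaded dataset
    "hyperparameters": {
        "training_type": "sft",
        "finetuning_type": "lora",
        "epochs": 3,
        "batch_size": 16,
        "learning_rate": 1e-4,
        "lora": {"adapter_dim": 16},         # low-rank adapter size
    },
}
payload = json.dumps(job)
```

In practice this payload would be POSTed to the customization jobs endpoint of your deployment.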
Learn how to perform supervised fine-tuning using custom data by modifying all of the training parameters.
Learn how to align a model with DPO (Direct Preference Optimization) to prefer certain kinds of responses over others.
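DPO trains on preference pairs: for each prompt, a preferred response and a dispreferred one. The record below is a sketch of that structure; the field names (`chosen_response`, `rejected_response`) are assumptions, so check the DPO tutorial for the exact schema Customizer expects.

```python
import json

# Illustrative preference pair for DPO; field names are assumptions.
pair = {
    "prompt": "Summarize the plot of Hamlet in one sentence.",
    "chosen_response": "A Danish prince seeks revenge for his father's murder.",
    "rejected_response": "It's a play.",
}
line = json.dumps(pair)
```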
Learn how to fine-tune embedding models using LoRA merged training for improved question-answering and retrieval tasks.
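The retrieval gain from embedding fine-tuning is typically measured with vector similarity: after training, a query embedding should score relevant passages higher than irrelevant ones. A minimal, framework-free cosine-similarity sketch with toy vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: a well-tuned embedding model should rank the relevant
# passage above the irrelevant one for a given query.
query = [0.9, 0.1, 0.0]
relevant = [0.8, 0.2, 0.1]
irrelevant = [0.0, 0.1, 0.9]
assert cosine_similarity(query, relevant) > cosine_similarity(query, irrelevant)
```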
Monitoring & Optimization#
Learn how to check job metrics using MLflow or Weights & Biases.
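As a sketch of what metric inspection looks like, here is hypothetical handling of a job-metrics payload; the response shape below is an assumption, and MLflow and Weights & Biases each provide their own UIs and APIs for the real workflow.

```python
# Hypothetical metrics payload: per-step training losses.
metrics = {
    "train_loss": [
        {"step": 100, "value": 1.92},
        {"step": 200, "value": 1.41},
        {"step": 300, "value": 1.18},
    ]
}

# A quick convergence check: is the loss still decreasing at the end?
losses = [p["value"] for p in metrics["train_loss"]]
still_improving = losses[-1] < losses[-2]
print(f"final loss={losses[-1]}, still improving={still_improving}")
```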
Learn how to optimize per-GPU token throughput for a LoRA fine-tuning job.
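The throughput figure this tutorial targets can be computed from a training run's step timing; the numbers below are made up for illustration.

```python
# Tokens processed per GPU per second from basic run parameters.
global_batch_size = 32   # sequences per optimizer step across all GPUs
sequence_length = 4096   # tokens per sequence (packed or padded)
step_time_s = 2.0        # wall-clock seconds per training step
num_gpus = 8

tokens_per_gpu_per_s = (global_batch_size * sequence_length) / (step_time_s * num_gpus)
print(tokens_per_gpu_per_s)  # 8192.0
```

Raising this number usually means tuning batch size, sequence packing, and parallelism until the GPUs stay saturated.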