Start a DPO Customization Job#

Learn how to use the NeMo Microservices Platform to create a DPO (Direct Preference Optimization) job using a custom dataset.

DPO is an advanced fine-tuning technique for preference-based alignment. If you’re new to fine-tuning, consider starting with the LoRA or Full SFT tutorials first.

About DPO Customization Jobs#

Direct Preference Optimization (DPO) is an RL-free alignment algorithm that operates on preference data. Given a prompt and a pair of chosen and rejected responses, DPO aims to increase the probability of the chosen response and decrease the probability of the rejected response relative to a frozen reference model. The actor is initialized using the reference model. For more details, refer to the DPO paper.
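To build intuition for what the job optimizes, the following is a minimal sketch of the per-example DPO loss in plain Python. It assumes the per-sequence log-probabilities under the trained policy and the frozen reference model are already computed; NeMo Customizer computes all of this internally, so the snippet is illustrative only.

import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.05):
    """Per-example DPO loss for one (chosen, rejected) pair.

    beta plays the role of the ref_policy_kl_penalty hyperparameter
    described later in this tutorial: larger values keep the trained
    policy closer to the frozen reference model.
    """
    # Implicit rewards: how much more (or less) likely each response is
    # under the policy than under the reference model, scaled by beta.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Negative log-sigmoid of the reward margin: minimized by making the
    # chosen response more likely and the rejected response less likely.
    margin = chosen_reward - rejected_reward
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss falls as the policy prefers the chosen response more strongly.
print(dpo_loss(-12.0, -15.0, ref_chosen_logp=-13.0, ref_rejected_logp=-14.0))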

DPO shares similarities with Full SFT training workflows but differs in a few key ways:

DPO vs SFT Training Comparison#

| Aspect | SFT (Supervised Fine-Tuning) | DPO (Direct Preference Optimization) |
| --- | --- | --- |
| Data Requirements | Labeled instruction-response pairs where the desired output is explicitly provided | Pairwise preference data, where for a given input, one response is explicitly preferred over another |
| Learning Objective | Directly teaches the model to generate a specific “correct” response | Directly optimizes the model to align with human preferences by maximizing the probability of preferred responses and minimizing rejected ones, without needing an explicit reward model |
| Alignment Focus | Aligns the model with the specific examples present in its training data | Aligns the model with broader human preferences, which can be more effective for subjective tasks or those without a single “correct” answer |
| Computational Efficiency | Standard fine-tuning efficiency | More computationally efficient than full RLHF methods, as it bypasses the need to train a separate reward model |

Prerequisites#

Platform Prerequisites#

New to using NeMo microservices?

NeMo microservices use an entity management system to organize all resources—including datasets, models, and job artifacts—into namespaces and projects. Without setting up these organizational entities first, you cannot use the microservices.

If you’re new to the platform, complete these foundational tutorials first:

  1. Get Started Tutorials: Learn how to deploy, customize, and evaluate models using the platform end-to-end

  2. Set Up Organizational Entities: Learn how to create namespaces and projects to organize your work

If you’re already familiar with namespaces, projects, and how to upload datasets to the platform, you can proceed directly with this tutorial.

Learn more: Entity Concepts

NeMo Customizer Prerequisites#

Microservice Setup Requirements and Environment Variables

Before starting, make sure you have:

  • Access to NeMo Customizer

  • The huggingface_hub Python package installed

  • (Optional) Weights & Biases account and API key for enhanced visualization

Set up environment variables:

# Set up environment variables
export CUSTOMIZER_BASE_URL="<your-customizer-service-url>"
export ENTITY_HOST="<your-entity-store-url>"
export DS_HOST="<your-datastore-url>"
export NAMESPACE="default"
export DATASET_NAME="test-dataset"

# Hugging Face environment variables (for dataset/model file management)
export HF_ENDPOINT="${DS_HOST}/v1/hf"
export HF_TOKEN="dummy-unused-value"  # Or your actual HF token

# Optional monitoring
export WANDB_API_KEY="<your-wandb-api-key>"

Replace the placeholder values with your actual service URLs and credentials.

Tutorial-Specific Prerequisites#

  • Access to the Deployment Management Service for model deployment


Select Model#

Find Available Configs#

First, identify which model customization configurations are available to use. Each configuration describes a model and the training techniques it supports. DPO jobs require a model that supports the following:

  • finetuning_type: all_weights

  • training_type: dpo

Note

GPU requirements are typically higher for all_weights training than for PEFT techniques such as LoRA.

  1. Get the customization configurations that support DPO.

    curl -X GET "${CUSTOMIZER_BASE_URL}/v1/customization/configs?filter%5Btraining_type%5D=dpo&filter%5Bfinetuning_type%5D=all_weights" \
      -H 'Accept: application/json' | jq
    
  2. Review the response to find a model that meets your requirements.

    Example Response
    {
      "object": "list",
      "data": [
        {
          "name": "meta/llama-3.2-1b-instruct@v1.0.0+A100",
          "namespace": "default",
          "dataset_schemas": [
            {
              "title": "Newline-Delimited JSON File",
              "type": "array",
              "items": {
                "description": "Schema for Direct Preference Optimization (DPO) training data items.",
                "properties": {
                  "prompt": {
                    "description": "The prompt for the entry",
                    "title": "Prompt",
                    "type": "string"
                  },
                  "chosen_response": {
                    "description": "The preferred response to the prompt",
                    "title": "Chosen Response",
                    "type": "string"
                  },
                  "rejected_response": {
                    "description": "The less preferred response to the prompt",
                    "title": "Rejected Response",
                    "type": "string"
                  }
                },
                "required": ["prompt", "chosen_response", "rejected_response"],
                "title": "DPODatasetItemSchema",
                "type": "object"
              }
            }
          ],
          "training_options": [
            {
              "training_type": "sft",
              "finetuning_type": "lora",
              "num_gpus": 1,
              "num_nodes": 1,
              "tensor_parallel_size": 1,
              "use_sequence_parallel": false
            },
            {
              "training_type": "sft",
              "finetuning_type": "all_weights",
              "num_gpus": 1,
              "num_nodes": 1,
              "tensor_parallel_size": 1,
              "use_sequence_parallel": false
            },
            {
              "training_type": "dpo",
              "finetuning_type": "all_weights",
              "num_gpus": 1,
              "num_nodes": 1,
              "tensor_parallel_size": 1,
              "use_sequence_parallel": false
            }
          ]
        }
      ]
    }
    

The response shows that Llama 3.2 1B Instruct supports DPO with all_weights fine-tuning and requires 1 GPU to train.
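If you prefer Python, you can issue the same query with the requests library against the endpoint shown above and print the matching config URNs:

import os
import requests

# Query configs that support DPO with all_weights fine-tuning,
# mirroring the curl command above.
resp = requests.get(
    f"{os.environ['CUSTOMIZER_BASE_URL']}/v1/customization/configs",
    params={
        "filter[training_type]": "dpo",
        "filter[finetuning_type]": "all_weights",
    },
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

for config in resp.json()["data"]:
    print(f"{config['namespace']}/{config['name']}")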

Review Dataset Schema#

You can examine the dataset_schemas field in the response to understand what data format your model requires.

The schema outlines the specific fields and data types your dataset needs to include, formatted as newline-delimited JSON (NDJSON). In the next section, we’ll walk through an example to help you understand the schema structure.

Create Datasets#

Now that we know the dataset format the model configuration expects, we can prepare our training and validation files and upload them to the dataset.

Prepare Files#

  1. Create two files, train.jsonl and validation.jsonl.

  2. Populate the files with DPO preference data in the required format.

DPO training requires preference pairs with three fields:

  • prompt: The input prompt (can be a string or array of message objects)

  • chosen_response: The preferred response

  • rejected_response: The less preferred response

Note

Each record should be on a single line in your .jsonl file, with no line breaks within the JSON objects.

{"prompt": "What is the capital of France?", "chosen_response": "The capital of France is Paris.", "rejected_response": "I'm not sure, but I think it might be Lyon or Marseille."}
{"prompt": "Explain the concept of machine learning in simple terms.", "chosen_response": "Machine learning is a way for computers to learn from data and improve their performance on tasks without being explicitly programmed. The computer recognizes patterns in examples and uses those patterns to make predictions or decisions.", "rejected_response": "Machine learning is when computers learn things. It's complicated and involves algorithms."}

Upload Training Data#

Initialize Client#

You need to upload the training files to the training path in NeMo Data Store, and validation files to the validation path. You can have multiple files in each path and they will all be used.

To set up the Hugging Face API client, you’ll need these configuration values:

  • Host URL for the entity store service

  • Host URL for the data storage service

  • A namespace to organize your resources

  • Name of your dataset

from nemo_microservices import NeMoMicroservices
from huggingface_hub import HfApi
import os
import requests

# Configuration
ENTITY_HOST = os.environ.get('ENTITY_HOST')  # Replace with the public url of your Entity Store
DS_HOST = os.environ.get('DS_HOST')  # Replace with the public url of your Datastore
NAMESPACE = os.environ.get('NAMESPACE', 'default')
DATASET_NAME = os.environ.get('DATASET_NAME', 'test-dataset')  # dataset name needs to be unique for the namespace

# Initialize NeMo Microservices client for entity operations
entity_client = NeMoMicroservices(
    base_url=ENTITY_HOST
)

# Initialize Hugging Face API client for file operations
hf_api = HfApi(endpoint=f"{DS_HOST}/v1/hf", token="")

Create Namespaces#

Create the namespace we defined in our configuration values in both the NeMo Entity Store and the NeMo Data Store so that they match.

def create_namespaces(entity_client, ds_host, namespace):
    # Create namespace in entity store using SDK
    try:
        entity_client.namespaces.create(
            id=namespace,
            description=f"Namespace for {namespace} resources"
        )
        print(f"Created namespace {namespace} in Entity Store")
    except Exception as e:
        print(f"Namespace {namespace} may already exist in Entity Store: {e}")

    # Create namespace in datastore using requests
    nds_url = f"{ds_host}/v1/datastore/namespaces"
    resp = requests.post(nds_url, data={"namespace": namespace})
    if resp.status_code in (200, 201):
        print(f"Created namespace {namespace} in Datastore")
    elif resp.status_code in (409, 422):
        print(f"Namespace {namespace} already exists in Datastore")
    else:
        print(f"Failed to create namespace in Datastore: {resp.status_code}")

create_namespaces(entity_client, DS_HOST, NAMESPACE)

Set Up Dataset Repository#

Create a dataset repository in NeMo Data Store.

def setup_dataset_repo(hf_api, entity_client, namespace, dataset_name, description="Training dataset"):
    repo_id = f"{namespace}/{dataset_name}"

    # Create the repo in datastore
    hf_api.create_repo(repo_id, repo_type="dataset", exist_ok=True)

    # Create dataset in entity store using SDK
    dataset = entity_client.datasets.create(
        name=dataset_name,
        namespace=namespace,
        files_url=f"hf://datasets/{repo_id}",
        description=description
    )

    print(f"Created dataset repository: {repo_id}")
    return repo_id

repo_id = setup_dataset_repo(hf_api, entity_client, NAMESPACE, DATASET_NAME)

Upload Files#

Upload the training and validation files to the dataset.

def upload_dataset_files(hf_api, repo_id, training_file="train.jsonl", validation_file="validation.jsonl"):
    # Upload training file
    hf_api.upload_file(
        path_or_fileobj=training_file,
        path_in_repo="training/training_file.jsonl",
        repo_id=repo_id,
        repo_type="dataset",
        revision="main",
        commit_message=f"Training file for {repo_id}"
    )

    # Upload validation file
    hf_api.upload_file(
        path_or_fileobj=validation_file,
        path_in_repo="validation/validation_file.jsonl",
        repo_id=repo_id,
        repo_type="dataset",
        revision="main",
        commit_message=f"Validation file for {repo_id}"
    )

upload_dataset_files(hf_api, repo_id)

Checkpoint

At this point, we’ve uploaded our training and validation files to the dataset and are ready to define the details of our customization job.
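Optionally, you can confirm the upload with huggingface_hub’s list_repo_files, which lists the contents of the dataset repository:

# List the files in the dataset repository to confirm both uploads landed
files = hf_api.list_repo_files(repo_id, repo_type="dataset")
print(files)
# Expect the training and validation paths, e.g.:
# ['.gitattributes', 'training/training_file.jsonl', 'validation/validation_file.jsonl']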

Start Model Customization Job#

Important

The config field must include a version, for example: meta/llama-3.2-1b-instruct@v1.0.0+A100. Omitting the version will result in an error like:

{ "detail": "Version is not specified in the config URN: meta/llama-3.2-1b-instruct" }

You can find the correct config URN (with version) by inspecting the output of the /customization/configs endpoint. Use the name and version fields to construct the URN as name@version.

Example curl:

curl -X GET "${CUSTOMIZER_BASE_URL}/v1/customization/configs?page_size=1000" -H 'Accept: application/json' | jq '.data[] | "\(.namespace)/\(.name)"'

Set Hyperparameters#

While model customization configurations come with default settings, you can customize your training by specifying additional hyperparameters in the hyperparameters field of your customization job.

To train with DPO, we must:

  1. Set the training_type to dpo (Direct Preference Optimization).

  2. Set the finetuning_type to all_weights.

To override the default DPO-specific hyperparameters, include the hyperparameters.dpo field. The available DPO parameters are:

  • ref_policy_kl_penalty: Controls how strongly the trained policy is penalized for deviating from the reference policy (default: 0.05)

  • preference_loss_weight: Scales the contribution of the preference loss (default: 1.0)

  • preference_average_log_probs: Whether to normalize by sequence length (default: false)

  • sft_loss_weight: Weight for supervised fine-tuning loss component (default: 0.0)

Note

DPO automatically uses the base model specified in config as the frozen reference model. No separate reference model parameter is needed.

Example configuration:

{
  "hyperparameters": {
    "training_type": "dpo",
    "finetuning_type": "all_weights",
    "epochs": 3,
    "batch_size": 4,
    "learning_rate": 0.00005,
    "dpo": {
      "ref_policy_kl_penalty": 0.1
    }
  }
}

Note

For more information on hyperparameter options and their description, review the Hyperparameter Options reference.

Create and Submit Training Job#

Use the following command to start a DPO training job. Replace meta/llama-3.2-1b-instruct@v1.0.0+A100 with your chosen model configuration (including the version), and test-dataset with your dataset name.

  1. Create a job using the model configuration (config), dataset, and hyperparameters we defined in the previous sections.

    from nemo_microservices import NeMoMicroservices
    import os
    
    # Initialize the client
    client = NeMoMicroservices(
        base_url=os.environ['CUSTOMIZER_BASE_URL']
    )
    
    # Set up WandB API key for enhanced visualization
    extra_headers = {}
    if os.getenv('WANDB_API_KEY'):
        extra_headers['wandb-api-key'] = os.getenv('WANDB_API_KEY')
    
    # Create a DPO customization job
    job = client.customization.jobs.create(
        config="meta/llama-3.2-1b-instruct@v1.0.0+A100",
        dataset={
            "name": "test-dataset",
            "namespace": "default"
        },
        hyperparameters={
            "training_type": "dpo",
            "finetuning_type": "all_weights",
            "epochs": 3,
            "batch_size": 4,
            "learning_rate": 0.00005,
            "dpo": {
                "ref_policy_kl_penalty": 0.1
            }
        },
        extra_headers=extra_headers
    )
    
    print(f"Created DPO job with ID: {job.id}")
    print(f"Job status: {job.status}")
    print(f"Output model: {job.output_model}")
    
    curl -X "POST" \
      "${CUSTOMIZER_BASE_URL}/v1/customization/jobs" \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -H "wandb-api-key: ${WANDB_API_KEY}" \
      -d '{
        "config": "meta/llama-3.2-1b-instruct@v1.0.0+A100",
        "dataset": {"name": "test-dataset", "namespace": "default"},
        "hyperparameters": {
          "training_type": "dpo",
          "finetuning_type": "all_weights",
          "epochs": 3,
          "batch_size": 4,
          "learning_rate": 0.00005,
          "dpo": {
            "ref_policy_kl_penalty": 0.1
          }
        }
      }' | jq
    
  2. Review the response.

    Example Response
    {
      "id": "cust-Pi95UoDbNcqwgkruAB8LY6",
      "created_at": "2025-02-19T20:10:06.278132",
      "updated_at": "2025-02-19T20:10:06.278133",
      "namespace": "default",
      "config": {
        "schema_version": "1.0",
        "id": "58bee815-0473-45d7-a5e6-fc088f6142eb",
        "namespace": "default",
        "created_at": "2025-02-19T20:10:06.454149",
        "updated_at": "2025-02-19T20:10:06.454160",
        "custom_fields": {},
        "name": "meta/llama-3.2-1b-instruct@v1.0.0+A100",
        "base_model": "meta/llama-3.2-1b-instruct",
        "model_path": "llama-3_2-1b-instruct",
        "training_types": ["sft"],
        "finetuning_types": ["lora"],
        "precision": "bf16",
        "num_gpus": 1,
        "num_nodes": 1,
        "micro_batch_size": 1,
        "tensor_parallel_size": 1,
        "max_seq_length": 4096
      },
      "dataset": { "namespace": "default", "name": "test-dataset" },
      "hyperparameters": {
        "finetuning_type": "lora",
        "training_type": "sft",
        "batch_size": 16,
        "epochs": 10,
        "learning_rate": 0.0001,
        "lora": {
          "adapter_dim": 8,
          "adapter_dropout": 0.01
        }
      },
      "output_model": "default/meta-llama-3.2-1b-instruct-test-dataset-lora@cust-Pi95UoDbNcqwgkruAB8LY6",
      "status": "created",
      "custom_fields": {}
    }
    
  3. Copy the following values from the response:

    • id

    • output_model

You can check the job status as detailed in getting the job status.
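For convenience, a simple polling loop is sketched below. It assumes the SDK exposes a jobs.retrieve method mirroring the deployment retrieve pattern used later in this tutorial, and that the returned object carries the status field shown in the example response; adjust it to match the job status documentation.

import time

# Poll the job until it reaches a terminal state (sketch; the exact
# status values are documented in the job status reference).
while True:
    job_status = client.customization.jobs.retrieve(job.id)
    print(f"Job status: {job_status.status}")
    if job_status.status in ("completed", "failed", "cancelled"):
        break
    time.sleep(30)  # Poll every 30 seconds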

Deploy the Model#

Once the job finishes, Customizer uploads the full model weights to the Data Store and makes them available for deployment through the Deployment Management Service.

Important

Unlike LoRA adapters, DPO models with full weights require a dedicated NIM deployment. The recommended approach is to use the Deployment Management Service, which automatically handles weight downloading, storage provisioning, and NIM deployment.

Prerequisites for Model Deployment#

Before deploying your fine-tuned model, ensure you have:

  • Access to the Deployment Management Service through the NeMo platform or independent base URL. Store this URL in the environment variable DEPLOYMENT_BASE_URL.

  • Access to NIM Proxy for inference testing. Store this URL in the environment variable NIM_PROXY_BASE_URL.

  • The output_model value from your completed customization job (such as default/dpo_llama_3@v1).

  • Appropriate NIM container image details for your base model (image name and tag).

Deploy Using Deployment Management Service#

The Deployment Management Service automatically detects DPO models with full weights and handles all deployment complexity, including weight downloading and storage management.

  1. Create a deployment configuration for your fine-tuned model.

    # Create deployment configuration for fine-tuned model
    deployment_config = client.deployment.configs.create(
        name="<deployment-config-name>",
        namespace="default",
        description="Configuration for fine-tuned model deployment",
        model="<output-model-name>",  # Your output_model from training job
        nim_deployment={
            "image_name": "<nim-container-image>",  # e.g., "nvcr.io/nim/meta/llama-3.2-1b-instruct"
            "image_tag": "1.6.0",  # Use appropriate NIM version
            "gpu": 1
        }
    )
    
    print(f"Created deployment config: {deployment_config.name}")
    
    # Create deployment configuration
    curl -X POST \
      "${DEPLOYMENT_BASE_URL}/v1/deployment/configs" \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '{
        "name": "<deployment-config-name>",
        "namespace": "default",
        "description": "Configuration for fine-tuned model deployment",
        "model": "'${OUTPUT_MODEL}'",
        "nim_deployment": {
          "image_name": "'${NIM_IMAGE}'",
          "image_tag": "'${NIM_TAG}'",
          "gpu": 1
        }
      }' | jq
    
  2. Deploy the fine-tuned model using the configuration.

    # Deploy the model using the configuration
    # (using the same client initialized above)
    deployment = client.deployment.model_deployments.create(
        name="sft-llama-deployment",
        namespace="default",
        description="Fine-tuned SFT Llama 3.2 1B model deployment",
        config="default/sft-llama-deploy-config"  # Reference the config
    )
    
    print(f"Created deployment: {deployment.name}")
    print(f"Status: {deployment.status_details.status}")
    
    # Deploy the model using the configuration
    curl -X POST "${NIM_PROXY_BASE_URL}/v1/deployment/model-deployments" \
      -H "Content-Type: application/json" \
      -d '{
        "name": "sft-llama-deployment",
        "namespace": "default",
        "description": "Fine-tuned SFT Llama 3.2 1B model deployment",
        "config": "default/sft-llama-deploy-config"
      }'
    
  3. Check deployment status.

    # Monitor deployment status with polling
    # (using the same client initialized above)
    import time
    
    def wait_for_deployment(client, deployment_name, namespace="default", timeout=1200):
        """Wait for deployment to complete"""
        start_time = time.time()
    
        while True:
            # Check timeout
            if time.time() - start_time > timeout:
                raise RuntimeError(f"Deployment timeout after {timeout} seconds")
    
            # Get current status
            deployment_status = client.deployment.model_deployments.retrieve(
                deployment_name, namespace=namespace
            )
    
            status = deployment_status.status_details.status
            elapsed = time.time() - start_time
    
            print(f"Deployment status: {status} after {elapsed:.1f}s")
    
            if status == "ready":
                print("✅ Deployment completed successfully!")
                break
            elif status in ["failed", "cancelled"]:
                raise RuntimeError(f"Deployment {status}")
    
            time.sleep(10)  # Poll every 10 seconds
    
        return deployment_status
    
    # Wait for deployment to complete (takes ~10 minutes first time)
    # First deployment is slower due to container image pulling
    final_status = wait_for_deployment(client, "dpo-llama-deployment")
    print(f"Model deployed as: {final_status.models}")
    
    # Monitor deployment status with polling
    while true; do
      STATUS=$(curl -s "${DEPLOYMENT_BASE_URL}/v1/deployment/model-deployments/dpo-llama-deployment?namespace=default" | \
        jq -r '.status_details.status')
      
      echo "Deployment status: $STATUS"
      
      if [ "$STATUS" = "ready" ]; then
        echo "✅ Deployment completed successfully!"
        break
      elif [ "$STATUS" = "failed" ] || [ "$STATUS" = "cancelled" ]; then
        echo "❌ Deployment $STATUS"
        exit 1
      fi
      
      sleep 10  # Poll every 10 seconds
    done
    

Test Your Fine-Tuned Model#

After the deployment shows “ready” status, test your fine-tuned model through the NIM Proxy endpoint.

# Test using OpenAI-compatible client
from openai import OpenAI
import os

# Initialize OpenAI client pointing to NIM Proxy
openai_client = OpenAI(
    base_url=f"{os.environ['NIM_PROXY_BASE_URL']}/v1",
    api_key="not-used"  # NIM doesn't require API key for local deployment
)

# Test the fine-tuned model
response = openai_client.completions.create(
    model="default/sft_llama_3@v1",  # Use your actual output_model
    prompt="Extract from the following context the minimal span word for word that best answers the question.\n- If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct.\n- If you do not know the answer to a question, please do not share false information.\n- If the answer is not in the context, the answer should be \"?\".\n- Your answer should not include any other text than the answer to the question.\n\nContext: When is the upcoming GTC event? GTC 2018 attracted over 8,400 attendees. Due to the COVID pandemic of 2020, GTC 2020 was converted to a digital event and drew roughly 59,000 registrants. The 2021 GTC keynote, which was streamed on YouTube on April 12, included a portion that was made with CGI using the Nvidia Omniverse real-time rendering platform. This next GTC will take place in the middle of March, 2023. Answer:",
    max_tokens=128,
    temperature=0.7
)

print("✅ Model inference successful!")
print(f"Response: {response.choices[0].text.strip()}")
# Test the deployed model using OpenAI-compatible API
curl -X POST "${NIM_PROXY_BASE_URL}/v1/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "default/sft_llama_3@v1",
    "prompt": "Extract from the following context the minimal span word for word that best answers the question.\n- If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct.\n- If you do not know the answer to a question, please do not share false information.\n- If the answer is not in the context, the answer should be \"?\".\n- Your answer should not include any other text than the answer to the question.\n\nContext: When is the upcoming GTC event? GTC 2018 attracted over 8,400 attendees. Due to the COVID pandemic of 2020, GTC 2020 was converted to a digital event and drew roughly 59,000 registrants. The 2021 GTC keynote, which was streamed on YouTube on April 12, included a portion that was made with CGI using the Nvidia Omniverse real-time rendering platform. This next GTC will take place in the middle of March, 2023. Answer:",
    "max_tokens": 128,
    "temperature": 0.7
  }'

Note

The Deployment Management Service automatically:

  • Creates necessary storage resources (PVC) for model weights

  • Downloads weights from the Data Store using NIMCache

  • Configures and deploys the NIM with custom weights

  • Manages the complete deployment lifecycle

This eliminates the need for manual Kubernetes operations, weight downloading, and Helm configurations.

Conclusion#

You have started a DPO job and deployed a NIM with your custom weights. You can now use the NIM endpoint to interact with your fine-tuned model and assess its performance on your specific use case.

If you included a WandB API key, you can view your training results at wandb.ai under the nvidia-nemo-customizer project.

Note

The W&B integration is optional. When enabled, we’ll send training metrics to W&B using your API key. While we encrypt your API key and don’t log it internally, please review W&B’s terms of service before use.

Next Steps#

Learn how to check customization job metrics using the id.