Update an Existing Customization Config

Prerequisites
Before you can update a customization configuration, make sure that you have:

- Access to the NeMo Customizer service
- Set the CUSTOMIZER_BASE_URL environment variable to your NeMo Customizer service endpoint:

export CUSTOMIZER_BASE_URL="https://your-customizer-service-url"
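Optionally, you can confirm the variable is visible to your Python environment before calling the service. This is a minimal sketch; the error message wording is illustrative:

import os

# Fail fast if CUSTOMIZER_BASE_URL is missing so later client calls don't
# fail with a less obvious connection error.
base_url = os.environ.get("CUSTOMIZER_BASE_URL")
if not base_url:
    raise RuntimeError("CUSTOMIZER_BASE_URL is not set")
print(f"Using NeMo Customizer endpoint: {base_url}")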
To Update an Existing Customization Config
Choose one of the following options to update an existing customization config: use the Python SDK, or call the REST API directly with curl.
import os

from nemo_microservices import NeMoMicroservices

# Initialize the client
client = NeMoMicroservices(
    base_url=os.environ['CUSTOMIZER_BASE_URL']
)

# Update customization config
updated_config = client.customization.configs.update(
    config_name="llama-3.1-8b-instruct@v1.0.0+A100",
    namespace="default",
    description="Updated description",
    max_seq_length=4096
)

print(f"Updated config: {updated_config.name}")
print(f"New description: {updated_config.description}")
print(f"Max sequence length: {updated_config.max_seq_length}")
curl -X PATCH "${CUSTOMIZER_BASE_URL}/customization/configs/default/llama-3.1-8b-instruct@v1.0.0+A100" \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "description": "Updated description",
    "max_seq_length": 4096
  }' | jq
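Whichever option you use, you can read the configuration back to confirm the change took effect. The snippet below is a minimal sketch that assumes the config is readable with a GET on the same path used by the PATCH above and uses the requests library; check the API reference for the exact route.

import os

import requests

base_url = os.environ["CUSTOMIZER_BASE_URL"]
# Assumption: the config can be fetched with a GET on the same path used by PATCH.
url = f"{base_url}/customization/configs/default/llama-3.1-8b-instruct@v1.0.0+A100"

response = requests.get(url, headers={"accept": "application/json"}, timeout=30)
response.raise_for_status()

config = response.json()
print(config["description"], config["max_seq_length"])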
Note

The update endpoint supports many additional parameters beyond description and max_seq_length, including:

- Training Options Management: training_options, add_training_options, remove_training_options
- Templates: prompt_template, chat_prompt_template
- Hardware: pod_spec, training_precision
- Metadata: project, custom_fields, ownership
- Data: dataset_schemas

Training options are identified by the combination of training_type and finetuning_type. You can update existing options, add new ones, or remove specific combinations as needed, as shown in the sketch after this note.
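For example, a single update call can add one training_type/finetuning_type combination and remove another. The following is a minimal sketch: the add_training_options and remove_training_options parameters come from the list above, but the exact shape of each option entry is assumed to mirror the training_options entries in the example response below, and the all_weights finetuning type is illustrative, so verify supported values against the API reference.

import os

from nemo_microservices import NeMoMicroservices

client = NeMoMicroservices(base_url=os.environ['CUSTOMIZER_BASE_URL'])

# Sketch: add one training option and remove another in a single update call.
updated_config = client.customization.configs.update(
    config_name="llama-3.1-8b-instruct@v1.0.0+A100",
    namespace="default",
    # Assumed option shape, mirroring the example response below
    add_training_options=[
        {
            "training_type": "sft",
            "finetuning_type": "all_weights",  # illustrative value; check supported types
            "num_gpus": 8,
            "num_nodes": 1,
        }
    ],
    # Options are identified by training_type + finetuning_type
    remove_training_options=[
        {"training_type": "sft", "finetuning_type": "lora"}
    ],
)
print(updated_config.training_options)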
Example Response
{
  "created_at": "2024-11-26T02:58:55.339737",
  "updated_at": "2024-11-26T03:58:55.339737",
  "id": "customization_config-MedVscVbr4pgLhLgKTLbv9",
  "name": "llama-3.1-8b-instruct@v1.0.0+A100",
  "namespace": "default",
  "description": "Updated description",
  "target": {
    "id": "customization_target-A5bK7mNpR8qE9sL2fG3hJ6",
    "name": "meta/llama-3.1-8b-instruct@2.0",
    "namespace": "default",
    "base_model": "meta/llama-3.1-8b-instruct",
    "enabled": true,
    "num_parameters": 8000000000,
    "precision": "bf16",
    "status": "ready"
  },
  "training_options": [
    {
      "training_type": "sft",
      "finetuning_type": "lora",
      "num_gpus": 2,
      "num_nodes": 1,
      "tensor_parallel_size": 1,
      "pipeline_parallel_size": 1,
      "micro_batch_size": 1,
      "use_sequence_parallel": false
    }
  ],
  "training_precision": "bf16",
  "max_seq_length": 4096,
  "pod_spec": {
    "node_selectors": {
      "nvidia.com/gpu.product": "NVIDIA-A100-SXM4-80GB"
    },
    "annotations": {
      "nmp/job-type": "customization"
    },
    "tolerations": [
      {
        "key": "app",
        "operator": "Equal",
        "value": "customizer",
        "effect": "NoSchedule"
      }
    ]
  },
  "prompt_template": "{input} {output}",
  "chat_prompt_template": null,
  "dataset_schemas": [],
  "project": null,
  "ownership": {}
}