# Get Customization Configs
Get a list of available customization configurations and their details.
You can review the returned customization configs to determine which models are available for fine-tuning. These configs are typically added by your cluster administrator during the initial setup of NeMo Customizer.
## Prerequisites
Before you can get a list of customization configurations, make sure that you have:

- Access to the NeMo Customizer service through the API or the Studio UI.
- Reviewed the customization config reference article.
## Get a List of Customization Configurations
Perform a `GET` request to the `/v1/customization/configs` endpoint.

```bash
BASE_MODEL="meta/llama-3.1-8b-instruct"
FINETUNING_TYPE="lora"
TRAINING_TYPE="sft"
BATCH_SIZE="8"
EPOCHS="1"
LOG_EVERY_N_STEPS="10"
DATASET="string"

curl --get \
  "https://${CUSTOMIZER_HOSTNAME}/v1/customization/configs" \
  --data-urlencode "page=1" \
  --data-urlencode "page_size=10" \
  --data-urlencode "sort=-created_at" \
  --data-urlencode "filter[base_model]=${BASE_MODEL}" \
  --data-urlencode "filter[finetuning_type]=${FINETUNING_TYPE}" \
  --data-urlencode "filter[training_type]=${TRAINING_TYPE}" \
  --data-urlencode "filter[batch_size]=${BATCH_SIZE}" \
  --data-urlencode "filter[epochs]=${EPOCHS}" \
  --data-urlencode "filter[log_every_n_steps]=${LOG_EVERY_N_STEPS}" \
  --data-urlencode "filter[dataset]=${DATASET}" \
  --data-urlencode "filter[status]=created" | jq
```
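If you prefer Python over `curl`, the same query string can be sketched with the standard library's `urllib.parse`. This is a minimal illustration only: the hostname is a placeholder, and only a subset of the filters shown above is included.

```python
from urllib.parse import urlencode

# Placeholder hostname -- substitute your own deployment's value.
CUSTOMIZER_HOSTNAME = "customizer.example.com"

# Query parameters mirroring the curl example above.
params = {
    "page": 1,
    "page_size": 10,
    "sort": "-created_at",
    "filter[base_model]": "meta/llama-3.1-8b-instruct",
    "filter[finetuning_type]": "lora",
    "filter[training_type]": "sft",
    "filter[status]": "created",
}

# urlencode percent-encodes the bracketed filter keys for you.
url = f"https://{CUSTOMIZER_HOSTNAME}/v1/customization/configs?{urlencode(params)}"
print(url)
```

You can pass the resulting URL to any HTTP client; `urlencode` handles the percent-encoding of the `filter[...]` keys that `--data-urlencode` performs in the shell version.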
You can use the `name` field of a returned config object to set the `config` parameter when creating a customization job. Review the returned customization configs.
Example Response
```json
{
  "object": "list",
  "data": [
    {
      "created_at": "2024-11-26T02:58:55.339737",
      "updated_at": "2024-11-26T02:58:55.339737",
      "name": "customization_config-MedVscVbr4pgLhLgKTLbv9",
      "base_model": "meta/llama-3.1-8b-instruct",
      "training_types": ["sft"],
      "finetuning_types": ["lora"],
      "precision": "bf16",
      "num_gpus": 2,
      "micro_batch_size": 1,
      "tensor_parallel_size": 1,
      "max_seq_length": 4096,
      "custom_fields": {}
    }
  ],
  "pagination": {
    "page": 1,
    "page_size": 10,
    "current_page_size": 2,
    "total_pages": 1,
    "total_results": 2
  },
  "sort": "created_at"
}
```
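As a quick illustration of pulling usable config names out of a parsed response, the sketch below filters the `data` array for configs that list `lora` among their `finetuning_types`. The field names come from the example response; the abbreviated payload here is just a trimmed copy for demonstration.

```python
import json

# Abbreviated copy of the example response above (most fields omitted).
response = json.loads("""
{
  "object": "list",
  "data": [
    {
      "name": "customization_config-MedVscVbr4pgLhLgKTLbv9",
      "base_model": "meta/llama-3.1-8b-instruct",
      "training_types": ["sft"],
      "finetuning_types": ["lora"]
    }
  ]
}
""")

# Collect the name of every config that supports LoRA fine-tuning;
# any of these names can be used as the `config` value for a job.
lora_configs = [
    cfg["name"]
    for cfg in response["data"]
    if "lora" in cfg["finetuning_types"]
]
print(lora_configs)
```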
Tip

`num_gpus` multiplied by `num_nodes` gives the total number of GPUs required to run a fine-tuning job.
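A one-line sketch of that arithmetic, using the `num_gpus` value from the example config above and an illustrative node count (the `num_nodes` value here is an assumption for demonstration):

```python
num_gpus = 2    # per-node GPU count from the example config above
num_nodes = 1   # illustrative node count for the job

# Total GPUs the fine-tuning job will occupy across the cluster.
total_gpus = num_gpus * num_nodes
print(total_gpus)  # 2
```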