# List Customization Configs
Get a list of available customization configurations and their details to determine which models are available for fine-tuning.
> **Tip:** These configs are typically added by your cluster administrator during the initial setup of NeMo Customizer.
## Prerequisites
Before you can get a list of customization configurations, make sure that you have:

- Access to the NeMo Customizer service
## Options

### API
Submit a `GET` request to `/v1/customization/configs`.

```bash
BASE_MODEL="meta/llama-3.1-8b-instruct"
TRAINING_TYPE="sft"
FINETUNING_TYPE="lora"

curl --get \
  "${CUSTOMIZER_SERVICE_URL}/v1/customization/configs" \
  --data-urlencode "page=1" \
  --data-urlencode "page_size=10" \
  --data-urlencode "sort=-created_at" \
  --data-urlencode "filter[base_model]=${BASE_MODEL}" \
  --data-urlencode "filter[training_type]=${TRAINING_TYPE}" \
  --data-urlencode "filter[finetuning_type]=${FINETUNING_TYPE}" \
  --data-urlencode "filter[enabled]=true" | jq
```
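If you prefer Python, the same query can be sent with the `requests` library. This is a minimal sketch: `CUSTOMIZER_SERVICE_URL` is a placeholder for your deployment's base URL, and any authentication headers your cluster requires are omitted.

```python
import requests

# Placeholder: replace with your NeMo Customizer deployment's base URL.
CUSTOMIZER_SERVICE_URL = "http://localhost:8000"

# Same query parameters as the curl example above.
params = {
    "page": 1,
    "page_size": 10,
    "sort": "-created_at",
    "filter[base_model]": "meta/llama-3.1-8b-instruct",
    "filter[training_type]": "sft",
    "filter[finetuning_type]": "lora",
    "filter[enabled]": "true",
}

response = requests.get(
    f"{CUSTOMIZER_SERVICE_URL}/v1/customization/configs", params=params
)
response.raise_for_status()
configs = response.json()  # same shape as the example response below
```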
Review the returned customization configs. When creating a customization job, you can set the `config` parameter to either the full `{namespace}/{name}` value of a returned config object, or just its `name` (which resolves against the `default` namespace).
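As an illustration only, assuming the job-creation endpoint is `POST /v1/customization/jobs` (see the job-creation documentation for the full request body), either form of the identifier can be passed as `config`:

```python
import requests

CUSTOMIZER_SERVICE_URL = "http://localhost:8000"  # placeholder for your deployment

# Both identifiers below refer to the same config, because a bare name
# resolves against the `default` namespace.
job = {
    "config": "default/llama-3.1-8b-instruct@v1.0.0+A100",
    # "config": "llama-3.1-8b-instruct@v1.0.0+A100",  # equivalent shorthand
    # ...other job fields (dataset, hyperparameters) are covered elsewhere...
}
response = requests.post(f"{CUSTOMIZER_SERVICE_URL}/v1/customization/jobs", json=job)
```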
**Example Response**

```json
{
  "object": "list",
  "data": [
    {
      "created_at": "2024-11-26T02:58:55.339737",
      "updated_at": "2024-11-26T02:58:55.339737",
      "id": "customization_config-MedVscVbr4pgLhLgKTLbv9",
      "name": "llama-3.1-8b-instruct@v1.0.0+A100",
      "namespace": "default",
      "description": "Configuration for training Llama 3.1 8B on A100 GPUs",
      "target": "meta/llama-3.1-8b-instruct@2.0",
      "training_options": [
        {
          "training_type": "sft",
          "finetuning_type": "lora",
          "num_gpus": 2,
          "num_nodes": 1,
          "tensor_parallel_size": 1,
          "micro_batch_size": 1
        }
      ],
      "training_precision": "bf16",
      "max_seq_length": 2048,
      "pod_spec": "{ object }",
      "prompt_template": "string",
      "chat_prompt_template": "string",
      "dataset_schemas": "{ object }",
      "project": "string",
      "ownership": "{ object }"
    }
  ],
  "pagination": {
    "page": 1,
    "page_size": 10,
    "current_page_size": 2,
    "total_pages": 1,
    "total_results": 2
  },
  "sort": "-created_at"
}
```
> **Tip:** `num_gpus` multiplied by `num_nodes` gives the total number of GPUs required to run a fine-tuning job.
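Applied to the example response above (continuing the Python sketch, where `configs` is the parsed response body):

```python
# For the example config above: 2 GPUs per node * 1 node = 2 GPUs total.
option = configs["data"][0]["training_options"][0]
total_gpus = option["num_gpus"] * option["num_nodes"]
```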