Customization Config

Refer to the following table to understand the fields and settings of a given customization configuration available to your organization.

Customization configurations are created by your organization admins and define the initial parameters for a customization job, such as:

  • The base model to customize

  • The training methods supported

  • The fine-tuning approaches supported

  • The precision format

  • The number of GPUs and compute nodes to use

  • The parallelization techniques employed for the distributed training

  • The size of micro-batches for training

  • The maximum sequence length for input

An organization can have multiple customization configurations, each with different settings.

Note

For parameters that you can set at the customization job level, see the Hyperparameters reference.
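To see which customization configurations are available in your deployment, you can typically query the service API. The following Python sketch assumes a REST endpoint at `{CUSTOMIZER_BASE_URL}/v1/customization/configs` and a `CUSTOMIZER_BASE_URL` environment variable; both are assumptions for illustration and may differ in your environment.

```python
import os

import requests

# Assumed base URL for the customization service; adjust for your deployment.
BASE_URL = os.environ.get("CUSTOMIZER_BASE_URL", "http://localhost:8000")

# Hypothetical endpoint for listing customization configs; the actual path
# may differ depending on the service version you run.
response = requests.get(f"{BASE_URL}/v1/customization/configs", timeout=30)
response.raise_for_status()

# Print the name and base model of each configuration returned.
for config in response.json().get("data", []):
    print(config.get("name"), "->", config.get("base_model"))
```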


Schema

| Field | Description |
| --- | --- |
| `name` | Unique identifier for the config |
| `base_model` | The base model to customize |
| `training_types` | List of supported training methods |
| `finetuning_types` | List of supported fine-tuning approaches |
| `precision` | Model precision format (`bfloat16`) |
| `num_gpus` | Number of GPUs to use for training |
| `num_nodes` | Number of compute nodes to use |
| `micro_batch_size` | Size of micro-batches for training |
| `tensor_parallel_size` | Degree of tensor parallelism |
| `use_sequence_parallel` | Enables sequence parallelism |
| `max_seq_length` | Maximum sequence length for input |
| `custom_fields` | Optional custom configuration parameters |
| `training_options` | Resource configuration for each training option |
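To make the schema concrete, the sketch below shows one way a single customization configuration might look when represented as a Python dictionary. The field names follow the table above, but every value (model name, GPU counts, sequence length, and the shape of the `training_options` entries) is an illustrative placeholder rather than a documented default.

```python
# Illustrative customization configuration. Field names follow the schema
# above; all values are placeholders, not documented defaults. The structure
# of the training_options entries is an assumption for illustration.
example_config = {
    "name": "example-base-model@v1",           # unique identifier for the config
    "base_model": "example-base-model",        # base model to customize
    "training_types": ["sft"],                 # supported training methods
    "finetuning_types": ["lora", "p_tuning"],  # supported fine-tuning approaches
    "precision": "bfloat16",                   # model precision format
    "num_gpus": 4,                             # GPUs to use for training
    "num_nodes": 1,                            # compute nodes to use
    "micro_batch_size": 1,                     # micro-batch size for training
    "tensor_parallel_size": 1,                 # degree of tensor parallelism
    "use_sequence_parallel": False,            # sequence parallelism toggle
    "max_seq_length": 4096,                    # maximum input sequence length
    "custom_fields": {},                       # optional custom parameters
    "training_options": [                      # resource configuration per training option
        {
            "training_type": "sft",
            "finetuning_type": "lora",
            "num_gpus": 4,
            "num_nodes": 1,
        }
    ],
}
```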