nemo_rl.models.policy.utils#

Module Contents#

Functions#

is_vllm_v1_engine_enabled

Check if vLLM V1 engine is enabled.

import_class_from_path

Import a class from a string path (e.g. ‘torch.optim.AdamW’).

get_gpu_info

Return information about the GPU being used by this worker.

sliding_window_overwrite

Returns configuration overrides to handle sliding window settings based on model rules.

configure_expandable_segments

Configure expandable_segments on Hopper and newer architectures (compute capability 9.x+).

get_runtime_env_for_policy_worker

Get runtime environment configuration for policy workers.

get_megatron_checkpoint_dir

Gets the default Megatron checkpoint directory for the initial HF -> Mcore conversion.

API#

nemo_rl.models.policy.utils.is_vllm_v1_engine_enabled() → bool[source]#

Check if vLLM V1 engine is enabled.

Returns:

True if the V1 engine is enabled, False otherwise (defaults to True if the controlling environment variable is not set)

Return type:

bool
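A minimal sketch of the documented default-to-True behavior; the VLLM_USE_V1 environment variable name is an assumption based on vLLM's engine-selection toggle, not taken from this module's source:

```python
import os

def is_vllm_v1_engine_enabled() -> bool:
    # Assumption: the toggle is the VLLM_USE_V1 environment variable.
    # Treat the V1 engine as enabled unless it is explicitly set to "0",
    # which matches the documented "defaults to True if not set" behavior.
    return os.environ.get("VLLM_USE_V1", "1") != "0"
```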

nemo_rl.models.policy.utils.import_class_from_path(name: str) → Any[source]#

Import a class from a string path (e.g. ‘torch.optim.AdamW’).

Parameters:

name – Full dotted path to the class, including the module path and class name

Returns:

The imported class object
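The standard recipe for this kind of lookup splits the dotted path, imports the module, and fetches the class attribute. A self-contained sketch (not the library's exact implementation), demonstrated with a stdlib class since 'torch.optim.AdamW' works the same way only when torch is installed:

```python
import importlib

def import_class_from_path(name: str):
    # Split "pkg.module.ClassName" into the module path and the class name,
    # import the module, then pull the class attribute off it.
    module_path, _, class_name = name.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

ordered_dict_cls = import_class_from_path("collections.OrderedDict")
```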

nemo_rl.models.policy.utils.get_gpu_info(model: torch.nn.Module) → dict[str, Any][source]#

Return information about the GPU being used by this worker.

nemo_rl.models.policy.utils.sliding_window_overwrite(model_name: str) → dict[str, Any][source]#

Returns configuration overrides to handle sliding window settings based on model rules.

Parameters:

model_name – The HuggingFace model name or path to load configuration from

Returns:

Dictionary with overwrite values, or empty dict if no overwrites needed

Return type:

dict
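The override rule can be sketched on a plain config dict. The real function loads a HuggingFace config by model name, so both this simplified signature and the specific rule (clear `sliding_window` when `use_sliding_window` is disabled) are assumptions for illustration:

```python
from typing import Any

def sliding_window_overwrite_sketch(config: dict[str, Any]) -> dict[str, Any]:
    # Hypothetical rule: if the model config disables sliding-window
    # attention but still carries a sliding_window value, override it to
    # None so downstream code does not apply the window. Otherwise return
    # an empty dict, meaning no overrides are needed.
    overrides: dict[str, Any] = {}
    if config.get("use_sliding_window") is False and config.get("sliding_window") is not None:
        overrides["sliding_window"] = None
    return overrides
```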

nemo_rl.models.policy.utils.configure_expandable_segments() → None[source]#

Configure expandable_segments on Hopper and newer architectures (compute capability 9.x+).

This helps with memory allocation but causes crashes on Ampere GPUs, so we only enable it on newer architectures. If PYTORCH_CUDA_ALLOC_CONF is already set, preserves existing values.
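The preserve-existing-values behavior can be sketched with stdlib-only code. The compute-capability gate (9.x+ via torch.cuda) is omitted here, and appending `expandable_segments:True` to `PYTORCH_CUDA_ALLOC_CONF` reflects PyTorch's documented allocator-config format:

```python
import os

def configure_expandable_segments_sketch() -> None:
    # PYTORCH_CUDA_ALLOC_CONF is a comma-separated list of key:value pairs.
    # Preserve whatever the user has already set and only append the
    # expandable_segments option if it is not mentioned yet.
    current = os.environ.get("PYTORCH_CUDA_ALLOC_CONF", "")
    if "expandable_segments" in current:
        return  # already configured; leave the existing choice alone
    option = "expandable_segments:True"
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = (
        f"{current},{option}" if current else option
    )
```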

nemo_rl.models.policy.utils.get_runtime_env_for_policy_worker(
policy_worker_name: str,
) → dict[str, Any][source]#

Get runtime environment configuration for policy workers.

Note: expandable_segments configuration is handled directly in the worker init methods to ensure proper GPU detection after CUDA initialization.
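A hedged sketch of the shape such a configuration takes. Ray runtime_env dicts support an "env_vars" mapping, but the worker name and the specific variable below are illustrative assumptions, not the library's actual values:

```python
from typing import Any

def get_runtime_env_for_policy_worker_sketch(policy_worker_name: str) -> dict[str, Any]:
    # Hypothetical per-worker environment: the "megatron_policy_worker" name
    # and the CUDA_DEVICE_MAX_CONNECTIONS setting are assumptions for
    # illustration only.
    env_vars: dict[str, str] = {}
    if policy_worker_name == "megatron_policy_worker":
        env_vars["CUDA_DEVICE_MAX_CONNECTIONS"] = "1"
    return {"env_vars": env_vars}
```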

nemo_rl.models.policy.utils.get_megatron_checkpoint_dir() → str[source]#

Gets the default Megatron checkpoint directory for the initial HF -> Mcore conversion.

The initial Megatron checkpoint should be saved to a path available on all nodes. The directory used will follow this order of precedence:

  1. $NRL_MEGATRON_CHECKPOINT_DIR (if set)

  2. $HF_HOME/nemo_rl (if HF_HOME is set)

  3. ~/.cache/huggingface/nemo_rl

HF_HOME is preferred since many users will already have that path mounted, which means one less directory to mount into your runtime environment.
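The precedence order above can be sketched directly; the environment variable names come from the list, while the function body is an illustration rather than the library's exact implementation:

```python
import os

def get_megatron_checkpoint_dir_sketch() -> str:
    # 1. Explicit override wins.
    explicit = os.environ.get("NRL_MEGATRON_CHECKPOINT_DIR")
    if explicit:
        return explicit
    # 2. Fall back to a subdirectory of HF_HOME when it is set.
    hf_home = os.environ.get("HF_HOME")
    if hf_home:
        return os.path.join(hf_home, "nemo_rl")
    # 3. Default HuggingFace cache location.
    return os.path.join(os.path.expanduser("~"), ".cache", "huggingface", "nemo_rl")
```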