nemo_curator.utils.gpu_utils

Module Contents

Functions

Name                            Description
ensure_cudnn_loaded             Discover and pre-load cuDNN from the nvidia-cudnn-cu12 pip package.
get_gpu_count                   Get the number of available CUDA GPUs, rounded down to a power of 2.
get_max_model_len_from_config   Try to get the max model length from HuggingFace AutoConfig.

Data

_cudnn_loaded

API

nemo_curator.utils.gpu_utils.ensure_cudnn_loaded() -> bool

Discover and pre-load cuDNN from the nvidia-cudnn-cu12 pip package.

ONNX Runtime relies on the system dynamic linker to locate libcudnn*.so files, but pip-installed packages place them inside the virtual-environment site-packages tree which is not on the default library search path.

Call this function early, before importing onnxruntime, to make those libraries visible to the linker.

This function is idempotent: repeated calls are cheap no-ops after the first successful load.

Returns

bool True if cuDNN was successfully loaded (or was already loaded), False otherwise.
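The discover-and-preload pattern behind this function can be sketched as follows. This is an illustrative standalone sketch, not the library's actual implementation: the helper name, the search under site-packages/nvidia/cudnn/lib, and the module-level flag mirroring _cudnn_loaded are assumptions based on the description above.

```python
import ctypes
import glob
import os
import site

_cudnn_loaded = False  # module-level flag, mirrors gpu_utils._cudnn_loaded


def ensure_cudnn_loaded_sketch() -> bool:
    """Sketch of discover-and-preload: find pip-installed cuDNN and dlopen it."""
    global _cudnn_loaded
    if _cudnn_loaded:  # idempotent: cheap no-op after the first successful load
        return True

    # pip places cuDNN's shared objects under site-packages/nvidia/cudnn/lib,
    # a directory the system dynamic linker does not search by default.
    candidates: list[str] = []
    for root in site.getsitepackages() + [site.getusersitepackages()]:
        candidates += glob.glob(
            os.path.join(root, "nvidia", "cudnn", "lib", "libcudnn*.so*")
        )
    if not candidates:
        return False

    try:
        # RTLD_GLOBAL exposes the symbols to libraries loaded later
        # (e.g. onnxruntime's CUDA execution provider).
        for lib in sorted(candidates):
            ctypes.CDLL(lib, mode=ctypes.RTLD_GLOBAL)
    except OSError:
        return False

    _cudnn_loaded = True
    return True
```

In real use you would call the library's ensure_cudnn_loaded() once, before the first import onnxruntime, and fall back to CPU execution if it returns False.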

nemo_curator.utils.gpu_utils.get_gpu_count() -> int

Get the number of available CUDA GPUs, rounded down to a power of 2.

Many models require a power-of-2 GPU count for tensor parallelism, so this returns the largest power of 2 <= the available GPU count.

Returns

int Power-of-2 GPU count, minimum 1.

Raises

  • RuntimeError: If no CUDA GPUs are detected.

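The power-of-2 rounding described above can be sketched as below. The helper name is illustrative; the real function would first query the CUDA device count (e.g. via torch.cuda.device_count()) rather than take it as an argument.

```python
def largest_pow2_at_most(gpu_count: int) -> int:
    """Return the largest power of 2 <= gpu_count (minimum 1)."""
    if gpu_count < 1:
        # Mirrors the documented behavior when no CUDA GPUs are detected.
        raise RuntimeError("No CUDA GPUs detected.")
    # bit_length() - 1 is the exponent of the highest set bit,
    # so shifting 1 by it yields the largest power of 2 <= gpu_count.
    return 1 << (gpu_count.bit_length() - 1)
```

For example, a node with 6 visible GPUs would be treated as 4 for tensor-parallel sizing.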
nemo_curator.utils.gpu_utils.get_max_model_len_from_config(
model: str,
cache_dir: str | None = None
) -> int | None

Try to get max model length from HuggingFace AutoConfig.

Parameters:

model
str

Model identifier (e.g., “microsoft/phi-4”)

cache_dir
str | None, defaults to None

Optional cache directory for the model config.

Returns

int | None

Max model length if found, None otherwise.

nemo_curator.utils.gpu_utils._cudnn_loaded: bool = False

Module-level flag recording whether cuDNN has already been loaded; it is what makes ensure_cudnn_loaded idempotent.