---
layout: overview
slug: nemo-curator/nemo_curator/utils/gpu_utils
title: nemo_curator.utils.gpu_utils
---
## Module Contents
### Functions
| Name | Description |
| ---------------------------------------------------------------------------------------------- | -------------------------------------------------------- |
| [`get_gpu_count`](#nemo_curator-utils-gpu_utils-get_gpu_count) | Get number of available CUDA GPUs as a power of 2. |
| [`get_max_model_len_from_config`](#nemo_curator-utils-gpu_utils-get_max_model_len_from_config) | Try to get max model length from HuggingFace AutoConfig. |
### API
```python
nemo_curator.utils.gpu_utils.get_gpu_count() -> int
```
Get number of available CUDA GPUs as a power of 2.
Many models require tensor parallelism to use power-of-2 GPU counts.
This returns the largest power of 2 that is `<=` the available GPU count.
**Returns:** `int`
Power of 2 GPU count, minimum 1.
**Raises:**
* `RuntimeError`: If no CUDA GPUs are detected.
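The power-of-2 rounding described above can be sketched as follows. This is a minimal illustration, not the library's implementation; `largest_pow2_leq` is a hypothetical helper, and the GPU count would come from a CUDA query (e.g. `torch.cuda.device_count()`) in practice:

```python
def largest_pow2_leq(gpu_count: int) -> int:
    # Largest power of 2 <= gpu_count, minimum 1.
    # Mirrors the documented behavior: raises if no GPUs are detected.
    if gpu_count < 1:
        raise RuntimeError("No CUDA GPUs are detected.")
    # bit_length() - 1 is the exponent of the highest set bit,
    # so 1 << that exponent is the largest power of 2 <= gpu_count.
    return 1 << (gpu_count.bit_length() - 1)
```

For example, a node with 6 visible GPUs would yield 4, while 8 GPUs would yield 8.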
```python
nemo_curator.utils.gpu_utils.get_max_model_len_from_config(
model: str,
cache_dir: str | None = None
) -> int | None
```
Try to get max model length from HuggingFace AutoConfig.
**Parameters:**
* `model`: Model identifier (e.g., `"microsoft/phi-4"`).
* `cache_dir`: Optional cache directory for the model config.
**Returns:** `int | None`
Max model length if found, None otherwise.