Configure NVIDIA Earth-2 FourCastNet NIM at Runtime#

Use this documentation to learn how to configure the NVIDIA Earth-2 FourCastNet NIM at runtime.

GPU Selection#

Passing --gpus all to docker run is acceptable in homogeneous environments with one or more identical GPUs. In some environments, it is preferable to run the container on specific GPUs. Expose specific GPUs inside the container by using either:

  • The --gpus flag, for example --gpus='"device=1"'.

  • The environment variable NVIDIA_VISIBLE_DEVICES, for example -e NVIDIA_VISIBLE_DEVICES=1.

Obtain the device IDs to use as inputs from the output of nvidia-smi -L:

GPU 0: Tesla H100 (UUID: GPU-b404a1a1-d532-5b5c-20bc-b34e37f3ac46)
GPU 1: NVIDIA GeForce RTX 3080 (UUID: GPU-d9a16c2f-31b8-4e6a-8f21-c05e37f3ac47)

See the NVIDIA Container Toolkit documentation for more instructions.
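For example, both approaches can be used to restrict the container to GPU 1 as follows. The container image name below is illustrative; substitute the actual FourCastNet NIM image you pulled from NGC.

```shell
# Expose only GPU 1 via the --gpus flag (image name is illustrative)
docker run --rm --gpus '"device=1"' \
    nvcr.io/nim/nvidia/fourcastnet:latest

# Equivalent selection via the NVIDIA_VISIBLE_DEVICES environment variable
docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=1 \
    nvcr.io/nim/nvidia/fourcastnet:latest
```

Device UUIDs from nvidia-smi -L can be used in place of the numeric index in either form.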

Shared Memory Flag#

The FourCastNet NIM uses Triton’s Python backend, which scales with the number of available CPU cores. You might need to increase the shared memory available to the microservice container.

The following example provides 4 GB of shared memory:

docker run ... --shm-size 4g ...

Model Profiles#

The FourCastNet NIM provides the following model profiles:

SFNO ERA5 73ch Fine-tuned#

NIM_MODEL_PROFILE: eb0719154e9c48106f33177da4913ace10a2ac03d18424d0f2ba37542b719140

A Spherical Fourier Neural Operator model for predicting atmospheric dynamics, trained on 73 ERA5 variables and fine-tuned for deterministic accuracy.

SFNO ERA5 73ch#

NIM_MODEL_PROFILE: ef1d33ffd6c1f0ba4acaf96801b79564d943481c36445625cb85cc9fa9916f12

A Spherical Fourier Neural Operator model for predicting atmospheric dynamics, trained on 73 ERA5 variables. No fine-tuning was applied.
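To select one of these profiles at launch, pass its hash in the NIM_MODEL_PROFILE environment variable. A minimal sketch, assuming an NGC API key is set in the shell and using an illustrative image name:

```shell
# Launch the NIM with the SFNO ERA5 73ch profile (image name is illustrative)
docker run --rm --gpus all \
    -e NGC_API_KEY=$NGC_API_KEY \
    -e NIM_MODEL_PROFILE=ef1d33ffd6c1f0ba4acaf96801b79564d943481c36445625cb85cc9fa9916f12 \
    nvcr.io/nim/nvidia/fourcastnet:latest
```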

Environment Variables#

The FourCastNet NIM reads a number of environment variables at container startup. Use the following variables to change the NIM behavior:

| Variable | Default | Description |
|----------|---------|-------------|
| NGC_API_KEY | | Your NGC API key with read access to the model registry for the model profile you are using. |
| NIM_MODEL_PROFILE | 9f9…c23a | The model package to load into the NIM on launch. This is downloaded from NGC, assuming that you have the correct permissions. |
| NIM_HTTP_API_PORT | 8000 | Publish the NIM service to the specified port inside the container. Make sure to adjust the port passed to the -p/--publish flag of docker run to reflect that. |
| NIM_DISABLE_MODEL_DOWNLOAD | | Disable model download on container startup. |
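As a sketch, several of these variables can be combined in one launch command. The image name is illustrative, and note that the host port in -p is matched to the value of NIM_HTTP_API_PORT:

```shell
# Launch on a non-default port; -p must match NIM_HTTP_API_PORT
docker run --rm --gpus all \
    -e NGC_API_KEY=$NGC_API_KEY \
    -e NIM_HTTP_API_PORT=8080 \
    -p 8080:8080 \
    nvcr.io/nim/nvidia/fourcastnet:latest
```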

Mounted Volumes#

The following paths inside the container can be mounted from the host to customize the runtime of the NIM:

| Container Path | Required | Description | Example |
|----------------|----------|-------------|---------|
| /opt/nim/.cache | No | The directory into which models are downloaded inside the container. It must be writable by the user the container runs as; one way to ensure this is to add the option -u $(id -u) to the docker run command. | -v ~/.cache/nim:/opt/nim/.cache |
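Putting this together, a launch command that persists the model cache on the host, so that models are not re-downloaded on every start, might look like the following. The image name is illustrative:

```shell
# Create the host cache directory and run as the current user
# so the mounted directory is writable from inside the container
mkdir -p ~/.cache/nim
docker run --rm --gpus all \
    -u $(id -u) \
    -v ~/.cache/nim:/opt/nim/.cache \
    -e NGC_API_KEY=$NGC_API_KEY \
    nvcr.io/nim/nvidia/fourcastnet:latest
```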