# Configuration
This page explains how to configure NIM for BGR using environment variables during container launch, apply per-request overrides for geometry relaxation parameters, and manage model and GPU settings.
## Environment Variables
Pass environment variables when you launch the container. For example:
```shell
docker run ... -e ALCHEMI_NIM_MODEL_TYPE="mace" ...
```
Use the environment variables in the following table to configure NIM for BGR:
| Environment Variable | Type | Default | Description |
|---|---|---|---|
| `ALCHEMI_NIM_MODEL_TYPE` | string | | Model type, for example `mace` or `tensornet`. |
| `ALCHEMI_NIM_MODEL_PATH` | string | null | Path to the model file or directory inside the container. |
| | bool | true | Periodic boundary conditions mode. |
| `ALCHEMI_NIM_BATCH_SIZE` | int | null | Fixed batch size. If unset, the NIM estimates the batch size from available GPU memory at startup. |
| | float | 0.005 | Force tolerance (eV/Å). You can override this per request. |
| | int | 2000 | Maximum optimization steps. You can override this per request. |
| | bool | false | Enable cell optimization. You can override this per request. |
| | float | 0.5 | Pressure tolerance (kBar) for cell optimization. You can override this per request. |
| | string | | Optimizer preset. |
| `ALCHEMI_NIM_DFT3_ENABLED` | bool | false | Enable DFT-D3(BJ) dispersion correction. |
| `ALCHEMI_NIM_DFT3_PARAM` | string (JSON) | null | JSON-encoded object with custom DFT-D3(BJ) parameters (for example, `'{"s8": 0.3908, "a1": 0.5660, "a2": 3.1280}'`). |
Refer to Environment Variables for a complete reference of all environment variables and their defaults.
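Several relaxation parameters above can be overridden per request instead of being fixed at container launch. The exact request field names depend on the NIM's API schema; the sketch below uses hypothetical names (`fmax`, `max_steps`, `cell_opt`) only to illustrate the pattern of merging server-side defaults with per-request overrides.

```python
# Sketch: merge documented defaults with per-request overrides.
# Field names (fmax, max_steps, cell_opt) are hypothetical; consult
# the NIM's API reference for the actual request schema.

# Defaults mirroring the environment-variable defaults in the table above.
DEFAULTS = {
    "fmax": 0.005,      # force tolerance (eV/Angstrom)
    "max_steps": 2000,  # maximum optimization steps
    "cell_opt": False,  # cell optimization disabled by default
}

def build_request(overrides=None):
    """Return a request payload with per-request overrides applied on top of defaults."""
    payload = dict(DEFAULTS)
    payload.update(overrides or {})
    return payload

# Tighten the force tolerance for a single request.
request = build_request({"fmax": 0.001})
```

Environment variables set the baseline; the payload only needs to name the parameters that differ for that request.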
## DFT-D3 Parameters
When `ALCHEMI_NIM_DFT3_ENABLED` is `true`, you can provide custom damping
parameters using the `ALCHEMI_NIM_DFT3_PARAM` environment variable. The value
must be a JSON string (for example, `'{"s8": 0.3908, "a1": 0.5660, "a2": 3.1280}'`).
For detailed information on DFT-D3(BJ) dispersion corrections, refer to the ALCHEMI Toolkit-Ops documentation.
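Because the value must be valid JSON embedded in an environment variable, it is safer to serialize it programmatically than to hand-write the quoting. A minimal sketch using the example parameters above:

```python
import json
import os

# Custom DFT-D3(BJ) damping parameters from the example above.
dft3_params = {"s8": 0.3908, "a1": 0.5660, "a2": 3.1280}

# Serialize to the JSON string expected by ALCHEMI_NIM_DFT3_PARAM.
os.environ["ALCHEMI_NIM_DFT3_ENABLED"] = "true"
os.environ["ALCHEMI_NIM_DFT3_PARAM"] = json.dumps(dft3_params)

# Round-trip check: the string parses back to the same values.
assert json.loads(os.environ["ALCHEMI_NIM_DFT3_PARAM"]) == dft3_params
```

The resulting string can be passed directly to `docker run` with `-e ALCHEMI_NIM_DFT3_PARAM`.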
| Field | Type | Default | Description |
|---|---|---|---|
| `a1` | float | null | BJ damping parameter a1 |
| `a2` | float | null | BJ damping parameter a2 (Bohr) |
| `s6` | float | null | Scaling factor for the C6 term |
| `s8` | float | null | Scaling factor for the C8 term |
| | float | 15.0 | Cutoff distance (Å) |
| | float | 0.2 | Fraction of the cutoff used for the smoothing region |
## Model Configuration
For supported models and sourcing details, refer to Supported Models. For step-by-step download hints and container launch arguments for each model type (MACE, AIMNet2, TensorNet), refer to Custom Models.
For non-bundled models (TensorNet and AIMNet2), mount the content of the
model directory into the container and set `ALCHEMI_NIM_MODEL_PATH` to the
mounted path inside the container:
```shell
docker run --rm -ti --name alchemi-bgr --gpus=all \
  -e NGC_API_KEY \
  -p 8000:8000 --shm-size=8g \
  -v /path/to/model-dir:/opt/nim/.cache/model-dir \
  -e ALCHEMI_NIM_MODEL_TYPE="tensornet" \
  -e ALCHEMI_NIM_MODEL_PATH="/opt/nim/.cache/model-dir" \
  nvcr.io/nim/nvidia/alchemi-bgr:${__container_version}
```
The mounted directory should contain the model files as downloaded from the model repository. For specific model requirements, refer to Custom Models.
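A common failure mode is launching the container with a host path that does not exist or is empty, so the mount succeeds but the model never loads. A small pre-flight check before `docker run` can catch this early; the helper below is an illustrative sketch, not part of the NIM.

```python
import os

def check_model_dir(host_path):
    """Verify the host model directory exists and is non-empty before mounting it."""
    if not os.path.isdir(host_path):
        raise FileNotFoundError(f"model directory not found: {host_path}")
    if not os.listdir(host_path):
        raise ValueError(f"model directory is empty: {host_path}")
    return True
```

Run this against the host side of the `-v` mount (for example, `/path/to/model-dir`) before starting the container.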
## GPU and Memory Configuration
Configure memory allocation and GPU utilization to manage hardware resources.
- **Shared memory:** Use `--shm-size=8g` to support large requests.
- **Batch size:** The NIM estimates the optimal and maximum batch size for available GPU memory at startup. Override this behavior by setting `ALCHEMI_NIM_BATCH_SIZE`.
- **Multi-GPU:** The NIM automatically uses all available GPUs. Each GPU runs an independent worker.