Overview#

The NVIDIA NIM for Batched Geometry Relaxation (BGR) provides a high-performance engine for batched geometry relaxation, a crucial step in materials discovery and computational materials science. By supporting multiple machine learning interatomic potentials (MLIPs), including MACE, AIMNet2, and TensorNet models optimized for NVIDIA GPUs, this NIM enables researchers to perform geometry relaxations at scale, significantly reducing the time required for these computationally intensive tasks.

The NIM supports a variety of use cases from periodic materials to isolated molecules, with optional cell optimization and dispersion corrections for accurate structure predictions.
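As a sketch of how a batch of structures might be submitted to the service, the snippet below assembles a JSON-serializable request payload. The endpoint path, field names (`structures`, `force_tolerance`, `optimize_cell`), and the toy structure are assumptions for illustration only, not the documented API; refer to the API reference for the actual schema.

```python
# Hypothetical sketch of preparing a batched relaxation request.
# All payload field names below are illustrative assumptions.

def build_relaxation_request(structures, fmax=0.05, optimize_cell=False):
    """Assemble a JSON-serializable payload for a batch of structures.

    Each structure is a dict with "atomic_numbers", "positions", and
    optionally "cell" (for periodic systems).
    """
    return {
        "structures": structures,
        "force_tolerance": fmax,        # eV/A, per-request convergence control
        "optimize_cell": optimize_cell,  # also relax lattice vectors (periodic only)
    }

# A single periodic structure (NaCl-like toy cell) as an example entry:
structure = {
    "atomic_numbers": [11, 17],
    "positions": [[0.0, 0.0, 0.0], [2.8, 2.8, 2.8]],
    "cell": [[5.6, 0, 0], [0, 5.6, 0], [0, 0, 5.6]],
}
payload = build_relaxation_request([structure], fmax=0.02)
# The payload would then be POSTed to the running NIM, for example:
#   requests.post("http://localhost:8000/v1/relax", json=payload)
```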

The NIM is primarily recommended for the following use cases:

  • High-throughput geometry relaxation: Ideal for tasks requiring the relaxation of a large number of atomistic structures with GPU acceleration.

  • Large-scale materials discovery: Facilitates the exploration of vast chemical spaces by enabling rapid screening of material candidates.

  • Computational materials science research: Provides a powerful tool for researchers investigating material properties and behaviors through geometry optimization.

  • Molecular structure optimization: Supports organic molecular systems with specialized models like AIMNet2.

Key Features of NIM for BGR#

The NIM for BGR includes the following key features:

  • Multiple MLIP Models: Supports various machine learning interatomic potentials including MACE, AIMNet2 (with NSE variant), and TensorNet models, allowing users to choose the best model for their specific application.

  • Dynamic Batching: Optimizes GPU utilization by dynamically estimating and adjusting batch sizes based on available GPU memory and structure sizes. This enables processing multiple structures concurrently, maximizing throughput and efficiency.

  • GPU-based FIRE2 Optimizer: Implements the Fast Inertial Relaxation Engine (FIRE2) optimizer directly on the GPU, significantly accelerating the relaxation process compared to CPU-based alternatives.

  • Cell Optimization: Optionally optimizes unit cell parameters alongside atomic positions for periodic systems, enabling full structural relaxation under pressure constraints.

  • DFT-D3 Dispersion Corrections: Supports DFT-D3(BJ) dispersion corrections for improved accuracy in systems where van der Waals interactions are important.

  • Flexible Convergence Criteria: Per-request force tolerance and pressure tolerance parameters allow fine-grained control over optimization convergence.
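To make the optimizer feature concrete, the following is a minimal single-structure NumPy sketch of the FIRE update rule. It is a simplification of FIRE2 (which adds a backtracking half-step on uphill moves), and the parameter names and defaults are illustrative, not the NIM's GPU-batched implementation.

```python
import numpy as np

def fire_relax(positions, force_fn, fmax=0.05, dt=0.1, dt_max=1.0,
               n_min=5, f_inc=1.1, f_dec=0.5, alpha_start=0.1,
               f_alpha=0.99, max_steps=1000):
    """Relax one structure with a simplified FIRE scheme.

    force_fn(positions) -> forces, shape (N, 3). Unit masses assumed.
    Returns (final_positions, converged_flag).
    """
    v = np.zeros_like(positions)
    alpha = alpha_start
    n_pos = 0
    for _ in range(max_steps):
        f = force_fn(positions)
        # Converged when the largest per-atom force norm drops below fmax.
        if np.max(np.linalg.norm(f, axis=1)) < fmax:
            return positions, True
        p = np.vdot(f, v)  # power F . v: are we still moving downhill?
        if p > 0:
            n_pos += 1
            if n_pos > n_min:          # accelerate after sustained descent
                dt = min(dt * f_inc, dt_max)
                alpha *= f_alpha
        else:                          # uphill: brake hard and restart
            n_pos = 0
            dt *= f_dec
            alpha = alpha_start
            v[:] = 0.0
        # Semi-implicit Euler step with velocity mixing toward the force.
        v = v + dt * f
        f_norm = np.linalg.norm(f)
        if f_norm > 0:
            v = (1 - alpha) * v + alpha * np.linalg.norm(v) * f / f_norm
        positions = positions + dt * v
    return positions, False
```

The batched GPU version applies the same logic to many structures at once, with per-structure convergence masks so finished structures stop consuming work.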

Supported Models#

Pre-bundled model: MACE-MP-0b2-Large is bundled and auto-downloaded when the container is run with an NGC API key, so inference can start immediately without downloading or mounting any model files.

NIM for BGR supports multiple MLIP model architectures. The ALCHEMI_NIM_MODEL_TYPE environment variable selects the architecture; the specific model variant is determined by the files mounted into the container (or the bundled MACE-MP-0b2-Large when using the default).
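A launch along the following lines selects the architecture at container start. This is a hypothetical sketch: the image name, tag, and port are placeholders, not the published values; consult the NIM's deployment documentation for the exact `docker run` invocation.

```shell
# Placeholder image name and port -- check the official deployment docs.
export NGC_API_KEY=<your-key>

# ALCHEMI_NIM_MODEL_TYPE is one of: mace | tensornet | aimnet2
docker run --rm --gpus all \
  -e NGC_API_KEY \
  -e ALCHEMI_NIM_MODEL_TYPE=mace \
  -p 8000:8000 \
  nvcr.io/nim/nvidia/alchemi-bgr:latest
```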

The following table lists the supported models and their capabilities:

| ALCHEMI_NIM_MODEL_TYPE | Supported Models | Periodic Only | Non-periodic | Bundled | Reference |
|---|---|---|---|---|---|
| `mace` | MACE-MP-0b2-Large (built-in); other MACE models (for example, MACE-MPA-0) can be mounted | Yes | No | MACE-MP-0b2-Large auto-downloaded | MACE-MP-0b2-Large, MACE |
| `tensornet` | TensorNet-MatPES-PBE-v2025.1-PES, TensorNet-MatPES-r2SCAN-v2025.1-PES | Yes | No | No | MatPES, TensorNet |
| `aimnet2` | All AIMNet2 models | No | Yes | No | AIMNet2 |

TensorNet models are MatPES potentials from materialyzeai/matgl (PyG version, not DGL).

AIMNet2 supports all models listed at isayevlab/aimnetcentral.

For non-bundled models, refer to Model Configuration for mounting instructions. For download hints and container launch arguments by model type, refer to Custom Models.

References#

Advantages of NIMs#

NIMs offer a simple and easy-to-deploy route for self-hosted AI applications. Two major advantages that NIMs offer for system administrators and developers are:

  • Increased productivity — NIMs allow developers to build generative AI applications quickly, in minutes rather than weeks, by providing a standardized way to add AI capabilities to their applications.

  • Simplified deployment — NIMs provide containers that can be easily deployed on various platforms, including clouds, data centers, or workstations, making it convenient for developers to test, deploy, and scale their applications.

The NIM for BGR provides fast, accurate models behind a consistent API for high-throughput geometry relaxation tasks. As part of the broader NVIDIA NIM ecosystem, NIM for BGR can be used in conjunction with other NIMs to build pipelines that accelerate atomistic modeling workflows.