Scalable Molecular Docking (Latest)

Configure NIM

This section provides additional details for launching the DiffDock NIM container.

As with other container images on NGC, you can list the available image tags with the command below.


ngc registry image info nvcr.io/nim/nvidia/bionemo-diffdock:1.2.0

Pull the container image using one of the following commands:

Docker


docker pull nvcr.io/nim/nvidia/bionemo-diffdock:1.2.0

NGC


ngc registry image pull nvcr.io/nim/nvidia/bionemo-diffdock:1.2.0

As in the Quickstart Guide, you can run the following command to start the NIM.


docker run --rm -it --name diffdock-nim \
    --runtime=nvidia \
    -e NVIDIA_VISIBLE_DEVICES=0 \
    -e NGC_API_KEY=$NGC_API_KEY \
    -p 8000:8000 \
    nvcr.io/nim/nvidia/bionemo-diffdock:1.2.0
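Once the container has started, you can check that it is ready to accept requests. The health endpoint path below follows the common NIM convention and is an assumption here; consult the API Reference for the exact route in your version.

```shell
# Readiness check (endpoint path assumed from the common NIM convention)
curl http://localhost:8000/v1/health/ready
```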


We can break down the command into the following components. Some may be modified in a production setting to better suit the user’s desired application.

  • docker run: This is the command to run a new container from a Docker image.

  • --rm: This flag tells Docker to automatically remove the container when it exits. This property is useful for one-off runs or testing, as it prevents the container from being left behind.

  • -it: These flags combine to create an interactive terminal session within the container. -i keeps the standard input open, and -t allocates a pseudo-TTY.

  • --name diffdock-nim: This flag gives the container the name “diffdock-nim”.

  • --runtime=nvidia: This flag specifies the runtime to use for the container. In this case, it is set to “nvidia”, which is used for GPU acceleration.

  • -e NVIDIA_VISIBLE_DEVICES=0: The DiffDock NIM uses a single GPU for inference, so the NVIDIA_VISIBLE_DEVICES environment variable must be specified to control which GPU device is visible to the container. In this case, it is set to 0, meaning the container will use only the first GPU (if available).

  • -e NGC_API_KEY: This flag sets the NGC_API_KEY environment variable, which is used for authentication with NVIDIA’s NGC (NVIDIA GPU Cloud) service.

  • -p 8000:8000: The DiffDock NIM container listens on port 8000 for inference requests. The -p [host_port]:[container_port] option maps the container’s internal port to the host, so other applications and processes on the host can reach the service. In this case, the container port is mapped to the same port (8000) on the host.

  • -e NIM_HTTP_API_PORT=[PORT_NUMBER]: This variable overrides the default HTTP port (8000) for inference requests. Note that this port number must match the container port in the -p option so it can be correctly mapped to the host network.
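For example, the HTTP port could be changed as follows; port 8080 here is an arbitrary choice for illustration:

```shell
# Serve on port 8080 instead of the default 8000
# (NIM_HTTP_API_PORT and the container side of -p must match)
docker run --rm -it --name diffdock-nim \
    --runtime=nvidia \
    -e NVIDIA_VISIBLE_DEVICES=0 \
    -e NGC_API_KEY=$NGC_API_KEY \
    -e NIM_HTTP_API_PORT=8080 \
    -p 8080:8080 \
    nvcr.io/nim/nvidia/bionemo-diffdock:1.2.0
```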

On a multi-GPU platform, multiple container instances can be launched for scalability. However, each instance must be bound to a different GPU device (NVIDIA_VISIBLE_DEVICES) and mapped to a different port (-p) on the host. Below is an example of launching two instances, using GPU devices 0/1 and host ports 60001/60002, respectively.


# Launch instance #1
docker run --rm -it --name diffdock-nim-1 \
    --runtime=nvidia \
    -e NVIDIA_VISIBLE_DEVICES=0 \
    -e NGC_API_KEY=$NGC_API_KEY \
    -p 60001:8000 \
    nvcr.io/nim/nvidia/bionemo-diffdock:1.2.0



# Launch instance #2
docker run --rm -it --name diffdock-nim-2 \
    --runtime=nvidia \
    -e NVIDIA_VISIBLE_DEVICES=1 \
    -e NGC_API_KEY=$NGC_API_KEY \
    -p 60002:8000 \
    nvcr.io/nim/nvidia/bionemo-diffdock:1.2.0
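With multiple instances running, a client can spread inference requests across them. The sketch below is a hypothetical helper (not part of the NIM) that cycles through the two host ports used above in round-robin order; the endpoint URLs are assumptions based on the launch commands.

```python
from itertools import cycle

# Host-side endpoints of the two instances launched above (assumed reachable)
ENDPOINTS = [
    "http://localhost:60001",
    "http://localhost:60002",
]

_pool = cycle(ENDPOINTS)

def next_endpoint() -> str:
    """Return the next instance URL in round-robin order."""
    return next(_pool)
```

Each incoming request can then be dispatched to `next_endpoint()`, alternating between the two GPUs without any single instance becoming a bottleneck.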


On initial startup, the container will download the model checkpoint from NGC. You can skip this download step on future runs by caching the model weights locally using a cache directory as in the example below.


# Create the cache directory on the host machine
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
chmod 777 "$LOCAL_NIM_CACHE"

# Run the container with the cache directory mounted in the appropriate location
docker run --rm -it --name diffdock-nim \
    --runtime=nvidia \
    -e CUDA_VISIBLE_DEVICES=0 \
    -e NGC_API_KEY=$NGC_API_KEY \
    -v "$LOCAL_NIM_CACHE:/home/nvs/.cache/nim" \
    -p 8000:8000 \
    nvcr.io/nim/nvidia/bionemo-diffdock:1.2.0


The logging level for the NIM can be controlled using the environment variable NIM_LOG_LEVEL. This variable allows you to specify the level of logging detail you want to see in the container’s logs.

The following logging levels are available:

  • DEBUG: This level will log all inputs and outputs for each endpoint of the server. This can be useful for debugging purposes, but it can also produce very large logs and should only be used when necessary.

  • INFO: This level will log important events and information about the server’s operation.

  • WARNING: This level will log warnings about potential issues or errors.

  • ERROR: This level will log errors that occur during the server’s operation.

  • CRITICAL: This level will log critical errors that prevent the server from functioning properly.

If no value is provided for NIM_LOG_LEVEL, the default logging level is INFO. To suppress most of the runtime information on the screen, set it to ERROR or CRITICAL; to see verbose messages, set it to DEBUG.
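A small helper like the following (hypothetical, for illustration only) can validate the variable on the host before launching the container, mirroring the default-to-INFO behavior described above:

```python
import os

# Logging levels accepted by the NIM, per the list above
VALID_LEVELS = ("DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL")

def resolve_log_level(env=None):
    """Return the effective NIM log level, defaulting to INFO."""
    env = os.environ if env is None else env
    level = env.get("NIM_LOG_LEVEL", "INFO").upper()
    if level not in VALID_LEVELS:
        raise ValueError(f"Unknown NIM_LOG_LEVEL: {level!r}")
    return level
```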

To set the logging level, you can pass the NIM_LOG_LEVEL environment variable when starting the NIM. For example:


docker run ... -e NIM_LOG_LEVEL=DEBUG ...


This will set the logging level to DEBUG, which will log all inputs and outputs for each endpoint of the server.

When setting the logging level, you should consider the trade-off between logging detail and log size. If you set the logging level to DEBUG, you may generate very large logs that can be difficult to manage. However, if you set the logging level to a higher level (such as INFO or WARNING), you may miss important debugging information.

Last updated on Jul 25, 2024.