Using Clara requires the following:

System requirements

Driver requirements

Clara is based on NVIDIA CUDA 10.1.243, which requires NVIDIA Driver release 418.xx. However, if you are running on Tesla hardware (for example, a T4 or any other Tesla board), you may use NVIDIA driver release 396, 384.111+, or 410. The CUDA driver's compatibility package only supports particular drivers; for a complete list of supported drivers, see the CUDA Application Compatibility topic. For more information, see CUDA Compatibility and Upgrades.

GPU requirements

Clara supports CUDA compute capability 6.0 and higher. This corresponds to GPUs in the Pascal, Volta, and Turing families. Specifically, for a list of GPUs that this compute capability corresponds to, see CUDA GPUs. For additional support details, see Deep Learning Frameworks Support Matrix.

Software requirements

nvidia-docker 2.0 must be installed; see the installation instructions.
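To verify the installation, a common smoke test is to run nvidia-smi inside a CUDA base image via the nvidia runtime; the image tag below is an assumption chosen to match CUDA 10.1, so substitute whichever CUDA base image you have available:

```shell
# Smoke test for nvidia-docker 2.0: nvidia-smi inside the container should
# list the host's GPUs. Skipped when Docker is not installed.
cuda_image=nvidia/cuda:10.1-base   # assumed tag; any CUDA base image works
if command -v docker >/dev/null; then
  docker run --runtime=nvidia --rm "$cuda_image" nvidia-smi
fi
```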

Download the docker container

  • export

  • docker pull $dockerImage
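Putting the two steps above together, a minimal sketch looks like the following; the image tag is a placeholder, so use the exact image path and tag shown on the Clara page on NGC:

```shell
# Set the image to pull, then fetch it. The tag "v1.0" is a placeholder —
# copy the real one from NGC. Skipped when Docker is not installed.
export dockerImage=nvcr.io/nvidia/clara-train-sdk:v1.0
if command -v docker >/dev/null; then
  docker pull "$dockerImage"
fi
```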

Running the container

Once the image is downloaded, run the container with this command:

docker run -it --rm --shm-size=1G --ulimit memlock=-1 --ulimit stack=67108864 --ipc=host --net=host --mount type=bind,source=/your/dataset/location,target=/workspace/data $dockerImage /bin/bash

By default, the container starts in the /opt/nvidia folder. To access local directories from within the container, they must be mounted into it.

To mount a directory, use the -v <source_dir>:<mount_dir> option. Here is an example:

docker run --shm-size=1G --ulimit memlock=-1 --ulimit stack=67108864 -it --rm -v /home/<username>/clara-experiments:/workspace/clara-experiments $dockerImage /bin/bash

This mounts the /home/<username>/clara-experiments directory on your disk to /workspace/clara-experiments inside the container.


More information about mounting directories can be found in the Docker documentation.
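For datasets the container should only read, Docker's -v option also accepts an :ro suffix that makes the mount read-only. A sketch, with placeholder paths (the image variable is assumed to be set as in the earlier pull step):

```shell
# Read-only variant of the mount shown above: the ":ro" suffix prevents
# jobs inside the container from modifying the host directory.
src="$HOME/clara-experiments"       # host directory (placeholder)
dst=/workspace/clara-experiments    # path inside the container
if command -v docker >/dev/null; then
  docker run -it --rm -v "$src:$dst:ro" "$dockerImage" /bin/bash
fi
```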

If you are on a network that uses a proxy server to connect to the Internet, you can provide proxy server details when launching the container.

docker run -it --rm -e HTTPS_PROXY=https_proxy_server_ip:https_proxy_server_port -e HTTP_PROXY=http_proxy_server_ip:http_proxy_server_port --shm-size=1G --ulimit memlock=-1 --ulimit stack=67108864 $dockerImage /bin/bash

For GPU isolation in Docker, you may use the --gpus option (available in Docker 19.03 and later) as shown here.

docker run -it --rm --gpus=1 --shm-size=1G --ulimit memlock=-1 --ulimit stack=67108864 $dockerImage /bin/bash
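On older Docker releases that rely on nvidia-docker 2.0, the equivalent of --gpus is the nvidia runtime together with the NVIDIA_VISIBLE_DEVICES environment variable. A sketch, with the GPU index as a placeholder:

```shell
# GPU isolation with nvidia-docker 2.0: expose only the selected GPU
# inside the container. Skipped when Docker is not installed.
gpu_id=0   # index of the GPU to expose (placeholder)
if command -v docker >/dev/null; then
  docker run -it --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=$gpu_id \
    --shm-size=1G --ulimit memlock=-1 --ulimit stack=67108864 \
    "$dockerImage" /bin/bash
fi
```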

Downloading the models

The NGC models page has models available for direct download as a zip archive.

You can also download models from inside the container with the built-in ngc commands, which retrieve or list models hosted on NGC.

Use this command to list the available models:

root@03c5db1ddbcc:/opt/nvidia# ngc registry model list nvidia/med/*

To download a model, choose one from the list, set MODEL_NAME and VERSION for it, and run the command below. The --dest option sets the download directory (/var/tmp in this example):

ngc registry model download-version nvidia/med/$MODEL_NAME:$VERSION --dest /var/tmp

Downloaded 49.74 MB in 4s, Download speed: 12.4 MB/s
Transfer id: clara_mri_seg_brain_tumors_br16_full_amp_v1 Download status: Completed.
Downloaded local path: /var/tmp/clara_mri_seg_brain_tumors_br16_full_amp_v1
Total files downloaded: 22
Total downloaded size: 49.74 MB
Started at: 2020-01-13 19:01:06.519897
Completed at: 2020-01-13 19:01:10.526815
Duration taken: 4s
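The download step above can be scripted; the model name here is taken from the transfer log shown above, and the version matches its "_v1" suffix:

```shell
# Download one NGC model non-interactively. Skipped when the ngc CLI is
# not installed. Model name/version are from the example output above.
MODEL_NAME=clara_mri_seg_brain_tumors_br16_full_amp
VERSION=1
if command -v ngc >/dev/null; then
  ngc registry model download-version "nvidia/med/$MODEL_NAME:$VERSION" --dest /var/tmp
fi
```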

Browse the NGC models page for the most up-to-date models and information.