Installation Requirements
- Access to the internet. 
- Any NVIDIA GPU that supports CUDA architecture 60, 70, 75 or 80 (i.e. compute capability 6.0, 7.0, 7.5, or 8.0) and has at least 12GB of GPU RAM. Parabricks has been tested on NVIDIA V100, NVIDIA A100, and NVIDIA T4 GPUs. 
- System Requirements: 
  - A 2 GPU server should have at least 100GB CPU RAM and at least 24 CPU threads. 
  - A 4 GPU server should have at least 196GB CPU RAM and at least 32 CPU threads. 
  - An 8 GPU server should have at least 392GB CPU RAM and at least 48 CPU threads. 
 
Please note that Clara Parabricks is not supported on virtual (vGPU) or MIG (Multi-Instance) GPUs.
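To quickly confirm that each GPU meets the memory requirement and is not running in MIG mode, you can query nvidia-smi directly. This is a minimal sketch; the mig.mode.current field typically reports N/A on GPUs that do not support MIG.
            
            $ nvidia-smi --query-gpu=name,memory.total,mig.mode.current --format=csv
    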
The following are software requirements for running Clara Parabricks.
- An NVIDIA driver that supports CUDA 10.1 or higher. If you're using an Ampere GPU, a driver that supports CUDA 11.0 or higher is required. 
- Any Linux Operating System that supports one of the following: 
  - nvidia-docker2 
  - singularity version 3.0 (or higher) 
  - Bare metal installation (supported for Ubuntu 18.04 only) 
 
- Python 3 
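If you plan to use the Singularity route, you can confirm that the installed Singularity version meets the 3.0 minimum:
            
            $ singularity --version
    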
Checking available NVIDIA hardware and driver
To check which NVIDIA hardware and driver version you have, use the nvidia-smi command:
            
            $ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.119.04   Driver Version: 450.119.04   CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla V100-DGXS...  On   | 00000000:07:00.0 Off |                    0 |
| N/A   44C    P0    38W / 300W |     74MiB / 16155MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-DGXS...  On   | 00000000:08:00.0 Off |                    0 |
| N/A   44C    P0    37W / 300W |      6MiB / 16158MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  Tesla V100-DGXS...  On   | 00000000:0E:00.0 Off |                    0 |
| N/A   44C    P0    39W / 300W |      6MiB / 16158MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  Tesla V100-DGXS...  On   | 00000000:0F:00.0 Off |                    0 |
| N/A   44C    P0    38W / 300W |      6MiB / 16158MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      3019      G   /usr/lib/xorg/Xorg                 56MiB |
|    0   N/A  N/A      3350      G   /usr/bin/gnome-shell               16MiB |
|    1   N/A  N/A      3019      G   /usr/lib/xorg/Xorg                  4MiB |
|    2   N/A  N/A      3019      G   /usr/lib/xorg/Xorg                  4MiB |
|    3   N/A  N/A      3019      G   /usr/lib/xorg/Xorg                  4MiB |
+-----------------------------------------------------------------------------+
    
This shows the following important information:
- The NVIDIA driver version is 450.119.04. 
- The CUDA version is 11.0. 
- There are four Tesla V100 GPUs. 
- Each GPU has 16 GB of memory. 
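The same fields can be extracted directly with nvidia-smi query options, which is convenient if you want to script these checks (a minimal sketch using standard query fields):
            
            $ nvidia-smi --query-gpu=index,name,driver_version,memory.total --format=csv
    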
Checking available CPU RAM and threads
To see how much RAM and how many CPU threads your machine has, run the following:
            
            # To check available memory
$ cat /proc/meminfo | grep MemTotal
# To check the number of CPU threads
$ cat /proc/cpuinfo | grep processor | wc -l
    
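Equivalently, if the free and nproc utilities are available on your system, they report the same information in a more readable form:
            
            # Total RAM in human-readable units
$ free -h
# Number of available CPU threads
$ nproc
    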
Checking nvidia-docker2 installation
To make sure you have nvidia-docker2 installed, run this command:
            
            $ docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
    
When it finishes downloading the container, it will run the nvidia-smi command and show you
the same output as above.
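On Debian-based systems such as Ubuntu, you can also verify that the nvidia-docker2 package itself is installed (an optional extra check; the package manager command differs on other distributions):
            
            $ dpkg -s nvidia-docker2 | grep -E 'Status|Version'
    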
Checking Python version
To see which version of Python you have, enter the following command:
            
            $ python3 --version
    
Make sure it's at least version 3 (for example, 3.6.9 or 3.7).
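If you want to script this check, a one-liner like the following exits with an error when the interpreter is too old (the 3.6 floor is an assumption based on the example versions above):
            
            $ python3 -c 'import sys; assert sys.version_info >= (3, 6), sys.version'
    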
There are two types of Parabricks installation licenses:
- Node Locked: licenses are tied to a specific set of GPUs on a server. 
- Flexera based: licenses allow a set number of GPUs to be used at once through a license server. This uses the NVIDIA License Server; see its documentation for more details (optional). 
The software can be installed and run in three ways:
- Docker container 
- Singularity container 
- Bare-metal Debian package (.deb)
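For the two container-based options, a typical workflow is to pull the Parabricks image with Docker, or to build a Singularity image from that same Docker image. The image name below is a placeholder; substitute the image and tag provided with your Parabricks release and license.
            
            # Placeholder image name -- use the image provided with your release
$ docker pull <parabricks-image>
# Build a Singularity image file (SIF) from the same Docker image
$ singularity build parabricks.sif docker://<parabricks-image>
    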