Parabricks containers are compatible with WDL and Nextflow, making it possible to build customized workflows that combine GPU- and CPU-powered tasks with different compute requirements and to deploy them at scale.
These workflow languages allow pipelines to run on cloud batch services as well as local clusters (e.g. SLURM) in a well-managed process, pulling from a combination of Parabricks and third-party containers and running each task on appropriately provisioned nodes.
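As a minimal sketch of what this looks like in practice, the Nextflow process below wraps the Parabricks `fq2bam` alignment tool. The container tag, GPU type, and resource values are illustrative assumptions, not prescriptions — check the NGC catalog and the reference workflows repository for current tags and recommended configurations, and note that the reference genome would also need its accompanying index files staged in.

```nextflow
// Hypothetical sketch: run Parabricks fq2bam inside its container on a GPU node.
// Container tag, GPU type, and resource values are assumptions; adjust for your environment.
process pbrun_fq2bam {
    container 'nvcr.io/nvidia/clara/clara-parabricks:4.1.1-1'  // assumed tag; check NGC
    accelerator 1, type: 'nvidia-tesla-t4'                     // GPU request for cloud batch
    cpus 16
    memory '64 GB'

    input:
    path ref                        // reference FASTA (index files omitted for brevity)
    tuple path(fq1), path(fq2)      // paired-end FASTQ files

    output:
    path 'sample.bam'

    script:
    """
    pbrun fq2bam --ref ${ref} --in-fq ${fq1} ${fq2} --out-bam sample.bam
    """
}
```

CPU-only tasks (for example, third-party QC tools) can be declared as sibling processes without the `accelerator` directive, and the executor configuration decides whether each task lands on a cloud batch service or a SLURM partition.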
![Workflow diagram.png](https://docscontent.nvidia.com/dims4/default/62ee00f/2147483647/strip/true/crop/3840x1344+0+0/resize/1440x504!/quality/90/?url=https%3A%2F%2Fk3-prod-nvidia-docs.s3.us-west-2.amazonaws.com%2Fbrightspot%2Fsphinx%2F0000018b-2b9c-dfbc-afcb-2fbe0e260000%2Fclara%2Fparabricks%2F4.1.1%2F_images%2FWorkflow%20diagram.png)
For further information on running these workflows, and to see the open-source reference workflows, visit the Clara Parabricks Workflows repository. It includes recommended instance configurations for deploying the GPU-based tools on cloud and can be easily forked and edited for your own purposes.