Parabricks containers are compatible with WDL and Nextflow, allowing you to build customized workflows that interleave GPU- and CPU-powered tasks with different compute requirements and deploy them at scale.
These workflows can run on cloud batch services as well as local clusters (e.g., SLURM) in a well-managed process, pulling from a combination of Parabricks and third-party containers and running each task on pre-defined nodes.
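As a minimal sketch, a Nextflow process might pair a GPU-accelerated Parabricks step with standard CPU tasks by pointing the process at a Parabricks container and requesting GPU resources. The container tag, file names, and accelerator type below are illustrative assumptions, not a definitive configuration:

```nextflow
// Hypothetical sketch of a GPU-powered Parabricks alignment step in Nextflow.
// The container tag, accelerator type, and file names are assumptions;
// adjust them for your registry, cloud executor, and data.
process fq2bam {
    container 'nvcr.io/nvidia/clara/clara-parabricks:4.4.0-1'
    accelerator 1, type: 'nvidia-tesla-t4'   // request one GPU on a GPU node

    input:
    tuple path(fq1), path(fq2)
    path ref_dir                             // reference FASTA plus its indexes

    output:
    path 'sample.bam'

    script:
    """
    pbrun fq2bam \
        --ref ${ref_dir}/ref.fasta \
        --in-fq ${fq1} ${fq2} \
        --out-bam sample.bam
    """
}
```

CPU-only steps (for example, QC or reporting from third-party containers) can be defined as ordinary processes without the `accelerator` directive, and the executor configuration decides which node types each process lands on.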
![Workflow_diagram.png](https://docscontent.nvidia.com/dims4/default/d25a4e7/2147483647/strip/true/crop/3840x1344+0+0/resize/1440x504!/quality/90/?url=https%3A%2F%2Fk3-prod-nvidia-docs.s3.us-west-2.amazonaws.com%2Fbrightspot%2Fsphinx%2F00000190-5ac7-d553-a9b1-5ee79a740000%2Fclara%2Fparabricks%2Flatest%2F_images%2FWorkflow_diagram.png)
For further information on running these workflows, and to see the open-source reference workflows, visit the Parabricks Workflows repository. It includes recommended instance configurations for deploying the GPU-based tools on cloud and can easily be forked and adapted for your own purposes.