Installation#
There are two options for installing Cosmos-Predict2: using a Conda environment or using a Docker container.
Note
Ensure you have the necessary hardware and software prerequisites before installation.
Note
ARM platforms like GB200 require installation of the decord package, which also downloads the NVIDIA Video Codec SDK in the repository root. The installation process will be handled by the Conda scripts or Dockerfile below.
Option 1: Conda Environment Setup#
Clone the Cosmos-Predict2 repository from GitHub.
```shell
git clone git@github.com:nvidia-cosmos/cosmos-predict2.git
cd cosmos-predict2
```
Create and activate the conda environment.
```shell
conda env create --file cosmos-predict2.yaml
conda activate cosmos-predict2
```
Install Cosmos-Predict2 dependencies for the environment.
```shell
# Try to install decord when on ARM platform
bash scripts/install_decord_arm.sh

pip install -r requirements-conda.txt
pip install flash-attn==2.6.3 --no-build-isolation

# Transformer Engine
ln -sf $CONDA_PREFIX/lib/python3.10/site-packages/nvidia/*/include/* $CONDA_PREFIX/include/
ln -sf $CONDA_PREFIX/lib/python3.10/site-packages/nvidia/*/include/* $CONDA_PREFIX/include/python3.10
CUDA_HOME=$CONDA_PREFIX pip install transformer-engine[pytorch]==1.13.0

# NATTEN
CUDA_HOME=$CONDA_PREFIX pip install natten==0.20.1
```
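The decord step only applies on ARM platforms such as GB200. A minimal sketch of the architecture check this implies is shown below; the `decord_needed` helper is illustrative and not part of the repository:

```shell
# Hypothetical helper: decide whether the ARM-only decord step applies.
# Returns 0 (true) only for the aarch64 architecture string.
decord_needed() {
    [ "$1" = "aarch64" ]
}

# Run the install script only on ARM machines where the script is present.
if decord_needed "$(uname -m)" && [ -f scripts/install_decord_arm.sh ]; then
    bash scripts/install_decord_arm.sh  # also fetches the NVIDIA Video Codec SDK
else
    echo "Non-ARM platform or script not found: skipping decord install"
fi
```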
Install the Apex library for training. This step is optional if you are using Cosmos-Predict2 for inference only.
```shell
CUDA_HOME=$CONDA_PREFIX pip install -v --disable-pip-version-check --no-cache-dir \
  --no-build-isolation --config-settings "--build-option=--cpp_ext --cuda_ext" \
  git+https://github.com/NVIDIA/apex.git
```
Test the environment using the test_environment.py script.

```shell
CUDA_HOME=$CONDA_PREFIX PYTHONPATH=$(pwd) python scripts/test_environment.py
```
Ensure the CUDA_HOME environment variable points to your Conda installation directory.
```shell
export CUDA_HOME=$CONDA_PREFIX
```
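As a quick sanity check, you can confirm the variable is set and matches the active Conda environment. The `check_cuda_home` helper below is illustrative, not part of the repository:

```shell
# Illustrative check: CUDA_HOME should be non-empty and equal to CONDA_PREFIX.
check_cuda_home() {
    [ -n "$1" ] && [ "$1" = "$2" ]
}

export CUDA_HOME=$CONDA_PREFIX
if check_cuda_home "$CUDA_HOME" "$CONDA_PREFIX"; then
    echo "CUDA_HOME correctly points to $CUDA_HOME"
else
    echo "CUDA_HOME is unset or does not match CONDA_PREFIX" >&2
fi
```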
Option 2: Docker Container Setup#
Set up the Docker container by either pulling the pre-built container or building the container using the provided Dockerfile.
Pull the pre-built container:

```shell
docker pull nvcr.io/nvidia/cosmos/cosmos-predict2-container:1.1
```
Build the container from the Dockerfile (ensure you run this command from the root directory of the repository):
```shell
docker build -t cosmos-predict2-local -f Dockerfile .
```
Run the Docker container:
Note
Replace [CONTAINER_NAME] with either nvcr.io/nvidia/cosmos/cosmos-predict2-container:1.1 or cosmos-predict2-local.

Note

Replace /path/to/cosmos-predict2, /path/to/datasets, and /path/to/checkpoints with your actual local paths.

```shell
docker run --gpus all -it --rm \
  -v /path/to/cosmos-predict2:/workspace \
  -v /path/to/datasets:/workspace/datasets \
  -v /path/to/checkpoints:/workspace/checkpoints \
  [CONTAINER_NAME]
```
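With three mounts to substitute, it can help to parameterize the invocation. The sketch below builds (but does not execute) the `docker run` command from variables; the variable names and wrapper function are hypothetical, and the `/path/to/...` values are placeholders you must replace:

```shell
# Hypothetical wrapper: assemble the docker run command from local paths.
# Edit these variables to match your machine before using the output.
COSMOS_ROOT=/path/to/cosmos-predict2
DATASETS=/path/to/datasets
CHECKPOINTS=/path/to/checkpoints
CONTAINER=nvcr.io/nvidia/cosmos/cosmos-predict2-container:1.1

build_run_cmd() {
    # Prints the full command; pipe to sh (or copy-paste) to actually run it.
    echo "docker run --gpus all -it --rm" \
         "-v $1:/workspace" \
         "-v $2:/workspace/datasets" \
         "-v $3:/workspace/checkpoints" \
         "$4"
}

build_run_cmd "$COSMOS_ROOT" "$DATASETS" "$CHECKPOINTS" "$CONTAINER"
```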
Test the environment using the test_environment.py script inside the container.

```shell
python /workspace/scripts/test_environment.py
```
Next Steps#
If you haven't done so already, determine which model you want to work with; refer to the Model Matrix page for more information.
You can then move on to the Predict2 Quickstart Guide.