Please follow the instructions in Clara installation to install the Docker image and start the container.
Additionally, you may want to mount a shared disk or folder to persist all the models, logs, and configurations for the AIAA server. This lets you stop and restart the container at any time without losing your AIAA models or configurations.
export OPTIONS="--shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864"
export SOURCE_DIR=<source dir to store>
export MOUNT_DIR=/aiaa-experiments
export LOCAL_PORT=<the port you want to use>
export REMOTE_PORT=80
export DOCKER_IMAGE="nvcr.io/nvidia/clara-train-sdk:<version here>"

docker run $OPTIONS --gpus=1 -it --rm \
  -p $LOCAL_PORT:$REMOTE_PORT \
  -v $SOURCE_DIR:$MOUNT_DIR \
  --ipc=host \
  $DOCKER_IMAGE \
  /bin/bash
The system requirements of AIAA depend on how many models you want to load in the server. If your models are large and you do not have enough system RAM or GPU memory, load one model at a time.
When you run Docker, make sure the container ports (e.g. http-port: 80) are mapped to the host machine for external access. You can do that with the Docker option -p [LOCAL_PORT]:[REMOTE_PORT]. The local port is the host port you want to use, while the remote port is the port AIAA listens on inside the Docker container (80 for HTTP, 443 for HTTPS).
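As a quick sanity check of the mapping syntax, the snippet below builds the -p argument from the two variables; the port value 5000 is a hypothetical host port chosen for illustration, not a requirement.

```shell
# Hypothetical example: map host port 5000 to the AIAA HTTP port 80
# inside the container. Only the -p argument is constructed here.
LOCAL_PORT=5000
REMOTE_PORT=80
echo "-p ${LOCAL_PORT}:${REMOTE_PORT}"
# prints: -p 5000:80
# Passed to docker run, the server would then be reachable on the
# host at http://127.0.0.1:5000
```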
You can use -e NVIDIA_VISIBLE_DEVICES=<ids of the GPU you want to use> or specify an instance group in the "trtis" section of the model config for fine-grained resource control.
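As a rough sketch, an instance group in the "trtis" section might look like the fragment below. The field names (count, kind, gpus) follow Triton/TRTIS instance-group conventions; the exact schema may vary between Clara versions, so treat this as an assumption to verify against your release's model-config reference.

```json
{
  "trtis": {
    "instance_group": [
      {
        "count": 1,
        "kind": "KIND_GPU",
        "gpus": [0]
      }
    ]
  }
}
```

Here one model instance is pinned to GPU 0; raising count or listing more GPU ids would trade memory for throughput.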