Use OCI S3 Object Storage#
NVIDIA NIM for VLMs supports loading models from Oracle Cloud Infrastructure (OCI) Object Storage using the Amazon S3 Compatibility API. Use this option when your models are hosted in an OCI Object Storage bucket and you want NVIDIA NIM for VLMs to fetch and serve them directly through the S3-compatible API. For more information, refer to the OCI Object Storage Amazon S3 Compatibility API documentation.
Requirements#
Ensure that your environment meets the following requirements:
An OCI S3-compatible endpoint: The URL of your OCI Object Storage bucket that supports the S3 API. NVIDIA NIM for VLMs uses this endpoint to connect to your OCI Object Storage.
A model path: The path to the model repository in the OCI Object Storage bucket. This tells NVIDIA NIM for VLMs where to find and load your models.
Credentials: AWS-style credentials for the S3-compatible endpoint. Provide these using either environment variables or a shared credentials file.
OCI S3-Compatible Endpoint#
Set AWS_ENDPOINT_URL to the OCI Object Storage S3-compatible endpoint:
-e AWS_ENDPOINT_URL="https://<namespace>.compat.objectstorage.<region>.oraclecloud.com"
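The endpoint URL is assembled from your Object Storage namespace and region. The following sketch shows the pattern with placeholder values (the namespace and region shown are illustrative, not real values; you can look up your namespace with the OCI CLI command `oci os ns get`):

```shell
# Placeholder values; substitute your own namespace and region.
NAMESPACE="mynamespace"
REGION="us-sanjose-1"
export AWS_ENDPOINT_URL="https://${NAMESPACE}.compat.objectstorage.${REGION}.oraclecloud.com"
echo "$AWS_ENDPOINT_URL"
```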
Model Path#
Set NIM_MODEL_NAME to the model repository directory in the following format:
s3repo://<org>/<model-repo>[:<version>]
Review the following model path examples:
s3repo://mistralai/mistralai-ministral-14b
s3repo://mistralai/mistralai-ministral-14b:1.7.0
Alternatively, you can specify the bucket explicitly:
s3repo://<bucket>/<org>/<model-name>[:<version>]
Review the following explicit bucket example:
s3repo://mistral-models/mistralai/mistralai-ministral-14b
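To see how the `s3repo://<org>/<model-repo>[:<version>]` format breaks down, the following sketch splits an example path into its parts using shell parameter expansion. The variable names are ours, chosen for illustration; they are not part of the NIM configuration:

```shell
# Split s3repo://<org>/<model-repo>[:<version>] into components.
MODEL="s3repo://mistralai/mistralai-ministral-14b:1.7.0"
rest="${MODEL#s3repo://}"             # strip the scheme prefix
org="${rest%%/*}"                     # text before the first "/"
repo_and_version="${rest#*/}"         # text after the first "/"
repo="${repo_and_version%%:*}"        # text before the optional ":"
version=""
case "$repo_and_version" in
  *:*) version="${repo_and_version##*:}" ;;  # optional version suffix
esac
echo "org=$org repo=$repo version=$version"
```

With the explicit-bucket form, the first path segment is the bucket name instead of the organization, and the organization and model name follow it.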
Loading Credentials#
Configure the connectivity region and credentials for the S3-compatible endpoint using the following environment variables:
AWS_REGION
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN (optional, for temporary credentials)
Note
For more details about these environment variables, refer to Configure Your NIM.
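If you prefer a shared credentials file over environment variables, the following sketch writes one and points the AWS SDK at it via the standard AWS_SHARED_CREDENTIALS_FILE variable. The file location and the placeholder key values are ours; for OCI, the access key and secret come from an OCI Customer Secret Key:

```shell
# Sketch: use a shared credentials file instead of env vars.
# Placeholder values; substitute your OCI Customer Secret Key pair.
export AWS_SHARED_CREDENTIALS_FILE="$PWD/oci-credentials"
cat > "$AWS_SHARED_CREDENTIALS_FILE" <<'EOF'
[default]
aws_access_key_id = <customer-secret-key-id>
aws_secret_access_key = <customer-secret-key>
EOF
```

When running in a container, remember to mount the credentials file and pass AWS_SHARED_CREDENTIALS_FILE through so the file is visible at that path inside the container.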
Example#
The following example uses AWS environment variables to load credentials and mounts a local cache directory to persist downloads:
# Choose a container name for bookkeeping
export CONTAINER_NAME=mistralai-ministral-3-14b-instruct-2512
# The repository name and tag from the previous ngc registry image list command
Repository="ministral-3-14b-instruct-2512"
Latest_Tag="1.7.0"
# Choose a VLM NIM Image from NGC
export IMG_NAME="nvcr.io/nim/mistralai/${Repository}:${Latest_Tag}"
# Choose a path on your system to cache the downloaded models
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "$LOCAL_NIM_CACHE"
docker run -it --rm --name=$CONTAINER_NAME \
--runtime=nvidia \
--gpus all \
--shm-size=16GB \
-e NGC_API_KEY=$NGC_API_KEY \
-e AWS_REGION="us-sanjose-1" \
-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
-e AWS_ENDPOINT_URL="https://<namespace>.compat.objectstorage.<region>.oraclecloud.com" \
-e NIM_MODEL_NAME="s3repo://mistralai/mistralai-ministral-14b:1.7.0" \
-v "$LOCAL_NIM_CACHE:/opt/nim/.cache" \
-u $(id -u) \
-p 8000:8000 \
$IMG_NAME