Getting Started#
Prerequisites#
Check the Support Matrix to make sure that you have the supported hardware and software stack.
NGC Authentication#
Generate an API Key#
An NGC API key is required to access NGC resources. You can generate a key at NGC API Keys.
When creating an NGC API Personal key, ensure that at least “NGC Catalog” is selected from the “Services Included” dropdown. More services can be included if this key is to be reused for other purposes.
Note
Personal keys allow you to configure an expiration date, revoke or delete the key using an action button, and rotate the key as needed. For more information about key types, refer to the NGC User Guide.
Export the API Key#
Pass the value of the API key to the `docker run` command in the next section as the `NGC_API_KEY` environment variable to download the appropriate models and resources when starting the NIM.
If you are not familiar with how to create the `NGC_API_KEY` environment variable, the simplest way is to export it in your terminal:
export NGC_API_KEY=<value>
Run one of the following commands to make the key available at startup:
# If using bash
echo "export NGC_API_KEY=<value>" >> ~/.bashrc
# If using zsh
echo "export NGC_API_KEY=<value>" >> ~/.zshrc
Note
Other, more secure options include saving the value in a file, so that you can retrieve it with `cat $NGC_API_KEY_FILE`, or using a password manager.
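As a minimal sketch of the file-based approach (the file path and the `NGC_API_KEY_FILE` variable name here are illustrative, not required by the NIM):

```shell
# Store the key once, readable only by your user (path is illustrative)
echo "<value>" > "$HOME/.ngc_api_key"
chmod 600 "$HOME/.ngc_api_key"

# Point NGC_API_KEY_FILE at the file and load the key only when needed
export NGC_API_KEY_FILE="$HOME/.ngc_api_key"
export NGC_API_KEY="$(cat "$NGC_API_KEY_FILE")"
```

This keeps the key out of your shell history and startup files while still making it available to `docker run`.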
Docker Login to NGC#
To pull the NIM container image from NGC, first authenticate with the NVIDIA Container Registry with the following command:
echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
Use `$oauthtoken` as the username and `NGC_API_KEY` as the password. The `$oauthtoken` username is a special name that indicates that you will authenticate with an API key and not a username and password.
Launching the NIM Container#
The following command launches the Studio Voice NIM container with the gRPC service. Refer to Runtime Parameters for the Container below for the flags used in the command.
Transactional NIM:
docker run -it --rm --name=studio-voice \
--runtime=nvidia \
--gpus all \
--shm-size=8GB \
-e NGC_API_KEY=$NGC_API_KEY \
-e NIM_MODEL_PROFILE=<nim_model_profile> \
-e FILE_SIZE_LIMIT=36700160 \
-e STREAMING=false \
-p 8000:8000 \
-p 8001:8001 \
nvcr.io/nim/nvidia/maxine-studio-voice:latest
Streaming NIM:
docker run -it --rm --name=studio-voice \
--runtime=nvidia \
--gpus all \
--shm-size=8GB \
-e NGC_API_KEY=$NGC_API_KEY \
-e NIM_MODEL_PROFILE=<nim_model_profile> \
-e STREAMING=true \
-p 8000:8000 \
-p 8001:8001 \
nvcr.io/nim/nvidia/maxine-studio-voice:latest
Note
Note that `NIM_MODEL_PROFILE` is an optional parameter. If `NIM_MODEL_PROFILE` is not provided, the NIM automatically selects a matching profile based on the target hardware architecture.
If `STREAMING` is set to `true`, the NIM automatically selects a profile that is compatible with your GPU with Model Type `48k-LL`.
If `STREAMING` is set to `false`, the NIM automatically selects a profile that is compatible with your GPU with Model Type `48k-HQ`.
However, if `NIM_MODEL_PROFILE` is used, ensure that the associated GPU architecture is compatible with the target hardware. If an incorrect `NIM_MODEL_PROFILE` is used, a deserialization error will occur on inference.
For more information about `NIM_MODEL_PROFILE`, refer to the NIM Model Profile Table.
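The automatic selection described above can be sketched as a simple mapping (this mirrors the documented behavior only; it is not the NIM's actual selection code):

```python
def auto_model_type(streaming: bool) -> str:
    """Documented default: streaming mode maps to the low-latency 48 kHz
    model, transactional mode to the high-quality 48 kHz model."""
    return "48k-LL" if streaming else "48k-HQ"

print(auto_model_type(True))   # 48k-LL
print(auto_model_type(False))  # 48k-HQ
```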
Note
The flag `--gpus all` assigns all available GPUs to the Docker container. This fails on multi-GPU systems unless all GPUs are identical. To assign specific GPUs to the container (when multiple, different GPUs are available in your machine), use `--gpus '"device=0,1,2..."'`.
Note
The VRAM required to run the NIM varies between 2.3 GB and 3.7 GB, depending on the model profile.
Note
If a non-supported GPU is used for launching the NIM, you get an error:
nimlib.exceptions.NIMProfileIDNotFound: Could not match a profile in manifest at /opt/nim/etc/default/model_manifest.yaml
If the command runs successfully, the output ends with lines similar to the following:
I1126 09:22:21.048202 31 grpc_server.cc:2558] "Started GRPCInferenceService at 127.0.0.1:9001"
I1126 09:22:21.048377 31 http_server.cc:4704] "Started HTTPService at 127.0.0.1:9000"
I1126 09:22:21.089295 31 http_server.cc:362] "Started Metrics Service at 127.0.0.1:9002"
Maxine GRPC Service: Listening to 0.0.0.0:8001
Note
By default, the Studio Voice gRPC service is hosted on port `8001`. You must use this port for inference requests.
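Before sending inference requests, a quick TCP check against the published port can confirm that the service is reachable (a generic sketch, not part of the NIM tooling):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 8001 is the default Studio Voice gRPC port published by the container
    print("gRPC port open:", port_open("localhost", 8001))
```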
Environment Variables#
The following table describes the environment variables that can be passed to the NIM as `-e` arguments to the `docker run` command:
| ENV | Required? | Default | Notes |
|---|---|---|---|
| `NGC_API_KEY` | Yes | None | You must set this variable to the value of your personal NGC API key. |
| | No | | Location (in the container) where the container caches model artifacts. |
| `NIM_MODEL_PROFILE` | No | None | Set this model profile to download the specific model type supported on your GPU. For more information, refer to the NIM Model Profile Table. |
| `FILE_SIZE_LIMIT` | No | `36700160` | Maximum size of the input audio file in bytes. Applicable only in transactional mode. Defaults to 35 MB. |
| `STREAMING` | No | `false` | Enables audio streaming mode on the gRPC endpoint when set to `true`. |
| | No | `disabled` | Sets the SSL security mode for the endpoints. |
| | No | None | Path to the CA root certificate inside the NIM. |
| | No | None | Path to the server’s public SSL certificate inside the NIM. Required only when an SSL mode is enabled. |
| | No | None | Path to the server’s private key inside the NIM. Required only when an SSL mode is enabled. |
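As a quick check of the default `FILE_SIZE_LIMIT` value used in the launch command:

```python
# Default FILE_SIZE_LIMIT from the docker run command, in bytes
FILE_SIZE_LIMIT = 36_700_160

# 36700160 bytes is exactly 35 * 1024 * 1024, i.e. the documented 35 MB
print(FILE_SIZE_LIMIT == 35 * 1024 * 1024)  # True
print(FILE_SIZE_LIMIT / (1024 * 1024))      # 35.0
```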
Runtime Parameters for the Container#
| Flags | Description |
|---|---|
| `-it` | Run the container interactively with a pseudo-TTY (`--interactive` plus `--tty`). |
| `--rm` | Delete the container after it stops (see Docker docs). |
| `--name=studio-voice` | Give a name to the NIM container. Use any preferred value. |
| `--runtime=nvidia` | Ensure NVIDIA drivers are accessible in the container. |
| `--gpus all` | Expose NVIDIA GPUs inside the container. If you are running on a host with multiple GPUs, specify which GPU to use; you can also specify multiple GPUs. See GPU Enumeration for further information on mounting specific GPUs. |
| `--shm-size=8GB` | Allocate host memory for multi-process communication. |
| `-e NGC_API_KEY=$NGC_API_KEY` | Provide the container with the token necessary to download adequate models and resources from NGC. See above. |
| `-p 8000:8000 -p 8001:8001` | Make ports published by the container directly accessible on the host. |
Stopping the Container#
Use the following commands to stop and remove the container:
docker stop $CONTAINER_NAME
docker rm $CONTAINER_NAME