Loading Models
To load an AIAA model, you need a model config that describes the inference workflow and, usually, a model file that contains either the weights or the whole network structure.
There are multiple options to load a model into AIAA.
AIAA allows you to load the model directly from NVIDIA GPU Cloud (NGC).
A list of available pre-trained models is available here.
(“Annotation” models that require user inputs are listed here.)
You can also use the NGC CLI to get a list of models.
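For example (the quotes keep your shell from expanding the pattern as a local glob; the exact output columns depend on your NGC CLI version):
ngc registry model list "nvidia/med/clara_*"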
The following example loads the clara_ct_seg_spleen_amp pre-trained model.
# note that the version in this command refers to the version on NGC,
# which differs from the Clara Train version
curl -X PUT "http://127.0.0.1:$LOCAL_PORT/admin/model/clara_ct_seg_spleen_amp" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-d '{"path":"nvidia/med/clara_ct_seg_spleen_amp","version":"1"}'
You can also download the model from NGC and then load it from your local disk. (Follow the NGC CLI installation guide to set up the NGC CLI first.)
ngc registry model download-version nvidia/med/clara_ct_seg_spleen_amp:1
curl -X PUT "http://127.0.0.1:$LOCAL_PORT/admin/model/clara_ct_seg_spleen_amp" \
-F "config=@clara_ct_seg_spleen_amp_v1/config/config_aiaa.json;type=application/json" \
-F "data=@clara_ct_seg_spleen_amp_v1/models/model.trt.pb"
If you have already downloaded the MMAR to a local disk, you can load it from disk as follows.
# loading segmentation spleen model
curl -X PUT "http://127.0.0.1:$LOCAL_PORT/admin/model/clara_ct_seg_spleen_amp" \
-F "data=@clara_ct_seg_spleen_amp.with_models.tgz"
# loading DeepGrow model
curl -X PUT "http://127.0.0.1:$LOCAL_PORT/admin/model/clara_deepgrow" \
-F "data=@clara_train_deepgrow_aiaa_inference_only.zip"
If you have trained a TensorFlow (TF) model and zipped the model checkpoint files into an archive (e.g., zip, tar, tar.gz), you can load it into AIAA as follows.
# Zip the checkpoint files
zip model.zip \
model.ckpt.data-00000-of-00001 \
model.ckpt.index \
model.ckpt.meta
curl -X PUT "http://127.0.0.1:$LOCAL_PORT/admin/model/clara_ct_seg_spleen_amp" \
-F "config=@config_aiaa.json;type=application/json" \
-F "data=@model.zip"
If you upload TF checkpoints to AIAA, they will be automatically converted to a TF-TRT model.
If you have a model.trt.pb file (TF-TRT format), you can load it into AIAA as follows.
curl -X PUT "http://127.0.0.1:$LOCAL_PORT/admin/model/clara_ct_seg_spleen_amp" \
-F "config=@config_aiaa.json;type=application/json" \
-F "data=@model.trt.pb"
To create a TF-TRT model, see the TF-TRT user guide: https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html. Note that this model format is classified as “tensorflow_graphdef” in TRTIS.
If you are using Clara to train your models, you can also use export.sh to convert your model to a TF-TRT model.
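A minimal sketch, assuming your MMAR follows the standard Clara layout with an export script under commands/ (the MMAR name below is an example; adjust paths to your setup):
# run from the directory containing your MMAR
cd clara_ct_seg_spleen_amp/commands
./export.sh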
Before running inference or using clients, make sure you can see your models at http://127.0.0.1:$LOCAL_PORT/v1/models, as shown below. If not, please follow the instructions in the Frequently Asked Questions to debug.
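For example, a quick check from the command line (this queries the same endpoint; -s just silences curl's progress output):
curl -s "http://127.0.0.1:$LOCAL_PORT/v1/models"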