Frequently Asked Questions

Q1: Why does my model not show up in /v1/models?

Follow the steps below to debug your model:

  1. Check your config.json file:

  • Check that it is in valid JSON format:

    import json

    with open('config.json', 'r') as f:
        config = json.load(f)

    This code should run without raising an exception if the file is valid JSON.

  2. Check your model file:

  3. Upload model to AIAA again:

Once you make sure all the pieces are correct, upload your model again to AIAA.

  4. Increase triton_model_timeout:

AIAA polls Triton for this amount of time before it declares that the model was not imported correctly. If you are using the Triton engine, you can try a larger timeout to give the model more time to import successfully. (Modify TRITON_MODEL_TIMEOUT in “docker-compose.env”.)

  5. Check your logs:

If none of the above steps work, start AIAA with the --debug flag and check the log files in <AIAA workspace>/logs. You can also ask for help on the NVIDIA Developer Forums.
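
At any point you can confirm whether the model was imported by querying the /v1/models endpoint directly. Below is a minimal sketch, assuming the AIAA server runs locally on port 5000 (adjust the host and port to your deployment):

    import requests

    # List the models currently loaded in AIAA; your model should appear in the response
    resp = requests.get("http://127.0.0.1:5000/v1/models")
    resp.raise_for_status()
    print(resp.json())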

Note

Currently, AIAA requires models to have a single input and a single output. Multi-class segmentation can be achieved by having multiple channels in the output.

Q2: Why are the models returning bad results?

Most of the time, this is caused by a mismatch in the data. Make sure your testing data in AIAA has the same characteristics as the data you used to train your models.

That would include the following:

  1. Resolution/Spacing

  2. Orientation

  3. Contrast/Phase

For example, the pre-trained segmentation models on NGC use data from the Medical Segmentation Decathlon.

We re-scale the images to have a spacing of [1.0, 1.0, 1.0] and make sure the affine matrices of the NIfTI files have all positive values.

Hint

MONAI provides some nice transforms to tackle the resolution and orientation problems.
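
For example, here is a minimal MONAI preprocessing sketch that resamples to a [1.0, 1.0, 1.0] spacing and reorients the volume to RAS (the dictionary key "image" and the file name are placeholders):

    from monai.transforms import (
        Compose, LoadImaged, EnsureChannelFirstd, Orientationd, Spacingd
    )

    # Match the training preprocessing: consistent orientation and 1.0 mm isotropic spacing
    preprocess = Compose([
        LoadImaged(keys=["image"]),
        EnsureChannelFirstd(keys=["image"]),
        Orientationd(keys=["image"], axcodes="RAS"),
        Spacingd(keys=["image"], pixdim=(1.0, 1.0, 1.0), mode="bilinear"),
    ])
    data = preprocess({"image": "subject.nii.gz"})
    print(data["image"].shape)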

Q3: Does AIAA support 2D models?

Yes, the AIAA server supports 2D models. You can use HTTP requests to interact directly with the AIAA server API.
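
For instance, here is a rough sketch of a direct inference request. The /v1/inference endpoint and the "params"/"image" multipart fields are assumptions based on the server APIs page, so check that page for the exact contract; the host, port, model name, and file are placeholders:

    import requests

    # Send one image to a 2D model for inference (endpoint and field names are assumptions)
    url = "http://127.0.0.1:5000/v1/inference"
    with open("slice.png", "rb") as image_file:
        resp = requests.post(
            url,
            params={"model": "my_2d_model"},                      # placeholder model name
            files={"params": (None, "{}"), "image": image_file},  # empty params JSON + image
        )
    resp.raise_for_status()
    print(resp.status_code, resp.headers.get("Content-Type"))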

Q4: What if my GPU card does not have enough memory?

If your GPU card is very tight on memory, you can try some of the following to alleviate this:

  1. Load fewer models in the AIAA server

  2. Reduce roi (the size of the scanning window) in config_aiaa.json, as sketched after this list

  3. Try to reduce your network size
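
For point 2, here is a rough sketch of shrinking the scanning window. The exact location and name of the roi field depend on your model config, so treat the key below as an example only:

    import json

    with open("config_aiaa.json", "r") as f:
        config = json.load(f)

    # Hypothetical top-level "roi" key; a smaller window needs less GPU memory per inference
    print("current roi:", config.get("roi"))
    config["roi"] = [128, 128, 128]

    with open("config_aiaa.json", "w") as f:
        json.dump(config, f, indent=2)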

Q5: Why can’t I start AIAA?

Make sure $AIAA_PORT is not used by other processes.
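
A quick way to check the port, assuming the server runs on the local machine (the fallback of 5000 below is only an example):

    import os
    import socket

    # Try to connect to the AIAA port; success means another process is already listening on it
    port = int(os.environ.get("AIAA_PORT", "5000"))
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        in_use = s.connect_ex(("127.0.0.1", port)) == 0
    print(f"Port {port} is {'already in use' if in_use else 'free'}")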

Q6: How can I start the AIAA server clean?

To start it all clean, remove the workspace folder and create a new one. Then start the AIAA server with the new workspace.
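
A minimal sketch of that reset (the workspace path is a placeholder; make sure nothing you still need is stored there before deleting it):

    import os
    import shutil

    workspace = "/path/to/aiaa_workspace"         # placeholder path
    shutil.rmtree(workspace, ignore_errors=True)  # remove the old workspace
    os.makedirs(workspace, exist_ok=True)         # create a fresh, empty one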

Q7: Why is AIAA occupying all the GPU memory when I am not running any inference?

When AIAA runs with the Triton backend, it puts one model instance on every GPU that is visible inside the Docker container.

Users can modify the “device_ids” entry under the “deploy” section of the “tritonserver” service to control which GPUs are visible. To change the number of model instances on each GPU, users can modify gpu_instance_count under triton_model_config in their model configs.

When a model instance is loaded on a GPU, it occupies some amount of GPU memory even if it is not serving any inference requests at that moment. As a result, if users want to free that GPU memory, they have to either stop the AIAA server or unload some models (using the DELETE model API, as sketched below).
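
Here is a sketch of unloading one model over HTTP. The endpoint path (/admin/model/<name>) is an assumption, so confirm it against the server APIs documentation; admin credentials may be required, and the host, port, and model name are placeholders:

    import requests

    # Ask AIAA to unload this model so its GPU memory can be released
    model_name = "my_large_model"  # placeholder name
    resp = requests.delete(f"http://127.0.0.1:5000/admin/model/{model_name}")
    resp.raise_for_status()
    print("unloaded", model_name)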

Q9: Can I run multiple containers of AIAA in the same host?

Yes, but you have to make sure the containers use different ports so they do not conflict.

Q10: How do I create datasets for Clara Train with Clara AIAA?

For the full cycle of annotation, training, and fine-tuning, please use the MONAI Label client/plugins for user interaction.

Q11: If we train our own model and load it into AIAA, is it foreseeable that a user could download the model and compromise our IP?

In the MMAR, the model weights are saved explicitly, and when the model is loaded into the AIAA server they are stored inside the AIAA workspace. However, admins can limit access to the machine running the AIAA server.

Regular users only have access to the REST API which abstracts them away from the MMAR.

There is a GET Model command (https://docs.nvidia.com/clara/clara-train-sdk/aiaa/server_apis.html); however, it only works for clients with admin rights.

Moreover, it would only return the config, not the actual trained model weights.

Q12: How can we use SSL with AIAA?

Please refer to Running AIAA with SSL.

Hint

More discussions can be found on the NVIDIA Developer Forums.