Release Notes

  • 4.1
    • Changes:
      • To run AIAA in background mode, use OS-specific features such as & on Ubuntu

      • AIAA defaults to running as the current user/group

      • AIAA translates MMARs into MONAI Label apps and runs MONAI Label

      • Apache is replaced by Uvicorn

      • The default engine is NATIVE when you start the AIAA server

      • The contents of the workspace have changed; the workspace now represents a MONAI Label app

      • For training/fine-tuning NGC/MMAR models, please use the MONAI Label client plugins

      • Access all MONAI Label features at http://127.0.0.1:$AIAA_PORT/monailabel

      • The AIAA admin PUT model API has changed; please refer to Loading Models for details

      • The AIAA admin PATCH model API is deprecated; either modify the code directly in the workspace or use the PUT API to update the model
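The background-mode and endpoint changes above can be sketched as follows. This is a minimal illustration: the `start_aiaa.sh` name, the `&` technique, and the `/monailabel` path come from the notes, while the port value and log file name are assumptions.

```shell
# Background mode: AIAA no longer daemonizes itself, so use shell job
# control. nohup keeps the server alive after the terminal closes.
# (Commented out here; start_aiaa.sh exists only inside the container.)
# nohup ./start_aiaa.sh > aiaa.log 2>&1 &

# MONAI Label features are served under the /monailabel path:
AIAA_PORT=5000   # assumed value; use whatever port your server listens on
MONAILABEL_URL="http://127.0.0.1:${AIAA_PORT}/monailabel"
echo "${MONAILABEL_URL}"
```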

  • 4.0
    • Changes:
      • start_aas.sh/stop_aas.sh are renamed to start_aiaa.sh/stop_aiaa.sh

      • AIAA defaults to running as user:group nvidia:nvidia

      • The default listening port inside the container changes from 80 to 5000 (SSL: 443 to 5001)

      • The dashboard functionality (--monitor) is removed

      • Debug mode is now enabled with --debug instead of --debug 1

      • The Triton inference server is separated from the Clara Train Docker image

      • The version field of the model config changes from string to integer

      • The inference section of the model config is refactored; please refer to Model Config for details

      • Supported models change from TensorFlow-based to PyTorch-based
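The port and flag changes above can be sketched as an updated container launch command. This is only an illustration: the container ports, script name, and bare `--debug` flag come from the notes, while the image name placeholder and host port are assumptions.

```shell
# The container now listens on 5000 (HTTP) and 5001 (SSL) instead of 80/443,
# so host-side port mappings must target the new container ports.
HOST_PORT=8080        # illustrative host port
HTTP_PORT=5000        # was 80 before 4.0

# Debug mode is now a bare flag: --debug (not --debug 1).
# <aiaa-image> is a placeholder for your Clara Train image tag.
RUN_CMD="docker run -d -p ${HOST_PORT}:${HTTP_PORT} <aiaa-image> start_aiaa.sh --debug"
echo "${RUN_CMD}"
```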