Key Features
AIAA implements a set of features that aim at better results and quicker response times. AIAA also enables users to bring their own models, which provides a great deal of flexibility.
Scanning window

AIAA uses a scanning window mechanism for auto-segmentation. AIAA first chunks the input image volume into small cubes. Each cube is then sent to TRTIS for model inference. Finally, the results are aggregated and returned as one segmentation mask. Note that the requests to TRTIS run asynchronously.
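To make the mechanism concrete, here is a minimal sketch of scanning-window inference in Python with NumPy. It is not AIAA's actual implementation: the cube size and the `run_model` placeholder (which stands in for the asynchronous call to TRTIS) are assumptions for illustration only.

```python
# Minimal sketch of scanning-window inference; NOT AIAA's implementation.
import numpy as np

def run_model(cube: np.ndarray) -> np.ndarray:
    """Placeholder for the per-cube TRTIS inference call."""
    return np.zeros(cube.shape, dtype=np.uint8)

def scanning_window_segmentation(volume: np.ndarray, roi=(64, 64, 64)) -> np.ndarray:
    """Chunk `volume` into cubes, infer each cube, aggregate into one mask."""
    mask = np.zeros(volume.shape, dtype=np.uint8)
    for z in range(0, volume.shape[0], roi[0]):
        for y in range(0, volume.shape[1], roi[1]):
            for x in range(0, volume.shape[2], roi[2]):
                cube = volume[z:z + roi[0], y:y + roi[1], x:x + roi[2]]
                # Each cube's result is written back into the full mask.
                mask[z:z + roi[0], y:y + roi[1], x:x + roi[2]] = run_model(cube)
    return mask
```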
DExtr3D

DExtr3D is the abbreviation for Deep Extreme cut in 3D. We combine [1] and [2] to train a powerful model in 3D space. Once the user provides six extreme points (two on each axis) around the object of interest, the model can quickly produce a high-quality segmentation mask.
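As a rough illustration of how a client might request a DExtr3D annotation over HTTP, here is a hedged sketch using the Python `requests` library. The server address, model name, and multipart field names (`params`, `datapoint`) are assumptions and may differ across AIAA versions; consult the API reference of your release.

```python
# Hedged sketch of a DExtr3D request; endpoint shape and field names
# are assumptions, not the definitive AIAA client API.
import json
import requests

AIAA_SERVER = "http://127.0.0.1:5000"      # assumed server address
MODEL = "annotation_ct_spleen"             # assumed model name

# Six extreme points (two per axis) around the object, in voxel coordinates.
points = [[57, 100, 40], [89, 100, 40],
          [73, 80, 40], [73, 120, 40],
          [73, 100, 30], [73, 100, 50]]

with open("image.nii.gz", "rb") as f:
    response = requests.post(
        f"{AIAA_SERVER}/v1/dextr3d",
        params={"model": MODEL},
        files={
            "params": (None, json.dumps({"points": points})),
            "datapoint": ("image.nii.gz", f),
        },
    )

# NOTE: real AIAA responses may be multipart; parsing is omitted here
# for brevity. This simply persists the raw payload for inspection.
with open("mask_response.bin", "wb") as out:
    out.write(response.content)
```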
DeepGrow

Some users may start with entirely unlabelled data. To help with this cold-start problem, AIAA provides the DeepGrow API. Users can click on the object of interest (foreground points) and get a reasonable initial segmentation mask in just a few seconds.
If the model over-segments, users can also specify background points to correct the result.
Our model is trained using [3]. In addition to the 3D volume, the network takes both foreground and background points into consideration.
Note that this algorithm does not focus on any specific organ.
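The interaction loop might look roughly like the following sketch, which reuses the same request pattern as the DExtr3D example above. The endpoint path, model name, and the `foreground`/`background` parameter names are assumptions for illustration, not the exact API.

```python
# Hedged sketch of an interactive DeepGrow call; field names and
# payload layout are assumptions and may vary by AIAA version.
import json
import requests

AIAA_SERVER = "http://127.0.0.1:5000"   # assumed server address

params = {
    "foreground": [[66, 180, 105]],     # clicks inside the object of interest
    "background": [[66, 150, 105]],     # corrective clicks if it over-segments
}

with open("image.nii.gz", "rb") as f:
    response = requests.post(
        f"{AIAA_SERVER}/v1/deepgrow",
        params={"model": "clara_deepgrow"},   # assumed model name
        files={
            "params": (None, json.dumps(params)),
            "datapoint": ("image.nii.gz", f),
        },
    )
# Response handling (multipart parsing) omitted for brevity.
```

In practice, users would repeat this call, adding foreground or background clicks each round, until the returned mask is acceptable.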
Inference pipeline

If your model does not belong to these categories, or if you want a more sophisticated workflow, you can define your own inference procedure or cascade multiple inferences using the inference pipeline.
Please refer to Bring your own models for more details.
To support generic inference, AIAA adds a new API called “/inference”. This API can be used with classification models, custom inference procedures, or custom inference pipelines. The current segmentation/annotation/deepgrow workflows remain unchanged.
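As a hedged sketch, a call to the generic inference API could look like the following. The endpoint path, model name, and payload layout are assumptions, and the content of the model-specific `params` depends entirely on your own model or pipeline.

```python
# Hedged sketch of the generic "/inference" API, e.g. for a
# classification model; names below are hypothetical.
import json
import requests

AIAA_SERVER = "http://127.0.0.1:5000"   # assumed server address

with open("image.nii.gz", "rb") as f:
    response = requests.post(
        f"{AIAA_SERVER}/v1/inference",
        params={"model": "classification_ct_chest"},   # hypothetical model
        files={
            "params": (None, json.dumps({})),          # model-specific params
            "datapoint": ("image.nii.gz", f),
        },
    )
print(response.status_code)
```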