Key Features

AIAA implements a set of useful features that aim at better results and quicker response times. AIAA also enables users to bring their own models, which provides a great deal of flexibility.

AIAA uses a scanning-window mechanism for auto-segmentation. AIAA first chunks the input image volume into small cubes. Each cube is then sent to Triton for model inference. Finally, the results are aggregated and returned as one segmentation mask.
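A minimal sketch of the idea is shown below, assuming a hypothetical run_inference(cube) callable that wraps the per-cube Triton request (the cube size and the callable are illustration-only assumptions, not AIAA's actual internals):

```python
import itertools
import numpy as np

def scanning_window_segmentation(volume, cube_size, run_inference):
    """Chunk `volume` into cubes, infer each cube, and stitch the results."""
    mask = np.zeros(volume.shape, dtype=np.uint8)
    for z, y, x in itertools.product(
            range(0, volume.shape[0], cube_size),
            range(0, volume.shape[1], cube_size),
            range(0, volume.shape[2], cube_size)):
        cube = volume[z:z + cube_size, y:y + cube_size, x:x + cube_size]
        # Each cube is sent for model inference; run_inference must return a
        # mask of the same shape as the cube it received.
        mask[z:z + cube_size,
             y:y + cube_size,
             x:x + cube_size] = run_inference(cube)
    return mask  # aggregated into one segmentation mask
```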

DExtr3D is the abbreviation for Deep Extreme cut in 3D. We combine [1] and [2] to train a powerful model in 3D space. Once the user provides 6 extreme points (2 on each axis) around the object of interest, the model can quickly produce a high-quality segmentation mask.
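As a rough illustration, a DExtr3D request could look like the following. The endpoint path (/v1/dextr3d), the model name, the payload key "points", and the multipart field names ("params", "datapoint") are assumptions about the server layout, so verify them against your AIAA version:

```python
import json
import requests

server = "http://127.0.0.1:5000"  # assumed AIAA server address
# Six extreme points around the object, two per axis, in voxel coordinates.
points = [[57, 120, 30], [89, 120, 30],   # min/max along the first axis
          [73, 90, 30], [73, 150, 30],    # min/max along the second axis
          [73, 120, 12], [73, 120, 48]]   # min/max along the third axis

with open("image.nii.gz", "rb") as f:
    response = requests.post(
        f"{server}/v1/dextr3d?model=annotation_ct_spleen",  # hypothetical model name
        files={
            "params": (None, json.dumps({"points": points})),
            "datapoint": ("image.nii.gz", f),
        },
    )
response.raise_for_status()
with open("result.nii.gz", "wb") as f:
    f.write(response.content)  # segmentation mask returned by the server
```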

Some users may start with entirely unlabelled data. To help with this cold-start problem, AIAA provides the DeepGrow API. Users can click on the object of interest (foreground points) and get a reasonable initial segmentation mask in just a few seconds.

If the model over-segments, users can also specify background points to correct the result.

Our model is trained using [3]. In addition to the 3D volume, the network takes into consideration both foreground and background points.

Note that this algorithm does not focus on any specific organ.
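Continuing the sketch above, a DeepGrow call would differ mainly in the payload, which carries the clicked foreground and background points. The endpoint path, model name, payload keys, and field names are again assumptions:

```python
import json
import requests

server = "http://127.0.0.1:5000"  # assumed AIAA server address
params = {
    "foreground": [[66, 180, 105]],  # clicks inside the object of interest
    "background": [[66, 210, 150]],  # clicks correcting over-segmentation
}
with open("image.nii.gz", "rb") as f:
    response = requests.post(
        f"{server}/v1/deepgrow?model=clara_deepgrow",  # hypothetical model name
        files={
            "params": (None, json.dumps(params)),
            "datapoint": ("image.nii.gz", f),
        },
    )
response.raise_for_status()
with open("result.nii.gz", "wb") as f:
    f.write(response.content)  # refined segmentation mask
```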

To support generic inference, AIAA adds a new API called “/inference”. This new API can be used with classification models, custom inference procedures, or custom inference pipelines. The existing segmentation/annotation/deepgrow workflows remain unchanged.
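A hedged sketch of what a call to the new endpoint could look like for a classification model; the model name and the expectation of a JSON response are assumptions for illustration:

```python
import json
import requests

server = "http://127.0.0.1:5000"  # assumed AIAA server address
with open("image.nii.gz", "rb") as f:
    response = requests.post(
        f"{server}/v1/inference?model=classification_chest_xray",  # hypothetical model
        files={
            "params": (None, json.dumps({})),  # model-specific parameters, if any
            "datapoint": ("image.nii.gz", f),
        },
    )
response.raise_for_status()
print(response.json())  # e.g. class labels/probabilities for a classification model
```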

This feature implements a polygon-editing tool. Given a current segmentation result, the tool automatically “snaps” polygon vertices within a certain neighborhood to tissue boundaries based on the user's input.
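The snapping step can be pictured with the minimal 2D sketch below, which moves one vertex to the nearest boundary pixel of the current mask within a small neighborhood. This illustrates the behavior described above, not AIAA's actual implementation; the function name and the neighborhood size are illustration-only:

```python
import numpy as np

def snap_vertex(mask, vertex, neighborhood=8):
    """Snap a (row, col) vertex to the closest boundary pixel of `mask`."""
    r, c = vertex
    r0, r1 = max(r - neighborhood, 0), min(r + neighborhood + 1, mask.shape[0])
    c0, c1 = max(c - neighborhood, 0), min(c + neighborhood + 1, mask.shape[1])
    window = mask[r0:r1, c0:c1].astype(bool)
    # Boundary pixels: foreground with at least one background 4-neighbor.
    pad = np.pad(window, 1, constant_values=False)
    boundary = window & ~(pad[:-2, 1:-1] & pad[2:, 1:-1] &
                          pad[1:-1, :-2] & pad[1:-1, 2:])
    rows, cols = np.nonzero(boundary)
    if rows.size == 0:
        return vertex  # no tissue boundary nearby; leave the vertex unchanged
    dist = (rows + r0 - r) ** 2 + (cols + c0 - c) ** 2
    nearest = int(np.argmin(dist))
    return (int(rows[nearest] + r0), int(cols[nearest] + c0))
```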
