NVIDIA Clara Train 4.1

Continuous Learning

AIAA supports a continuous learning workflow: users can load pretrained models into the AIAA server, annotate unlabelled images with the MONAI Label client, and use the newly annotated data to train a better model.

AIAA supports this feature through Clara Train's MMAR.

A valid MMAR for continuous learning should contain the following files:

ROOT
├── config
│   ├── config_train.json
│   └── config_aiaa.json
└── commands
    ├── prepare_dataset.sh
    ├── train.sh
    ├── train_multi_gpu.sh
    └── export.sh
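Before packaging, a short script can verify that the expected files are present. This is only a sketch: the demo MMAR directory it builds is a stand-in for your real MMAR folder.

```shell
# Build a throwaway demo MMAR layout (stand-in for a real MMAR folder),
# then check for the files a continuous-learning MMAR should contain.
MMAR_DIR=$(mktemp -d)
mkdir -p "$MMAR_DIR/config" "$MMAR_DIR/commands"
touch "$MMAR_DIR/config/config_train.json" "$MMAR_DIR/config/config_aiaa.json"
touch "$MMAR_DIR/commands/prepare_dataset.sh" "$MMAR_DIR/commands/train.sh" \
      "$MMAR_DIR/commands/train_multi_gpu.sh" "$MMAR_DIR/commands/export.sh"

# Verify each required file exists; report anything missing.
missing=0
for f in config/config_train.json config/config_aiaa.json \
         commands/prepare_dataset.sh commands/train.sh \
         commands/train_multi_gpu.sh commands/export.sh; do
  [ -f "$MMAR_DIR/$f" ] || { echo "missing: $f"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "MMAR layout looks complete"
```

Running the same loop against your own MMAR folder (with `MMAR_DIR` pointing at it) catches a missing script before the archive is uploaded.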

After starting the AIAA server, we can archive the MMAR folder and load it into the server using the following commands:

# zip the folder
tar -zcvf mmar.tgz [/path/to/your/mmar/folder]

# load the model into AIAA server
curl -X PUT "http://127.0.0.1:$AIAA_PORT/admin/model/mmar_train" \
     -F "data=@mmar.tgz"

Users can then invoke model training with a POST request, for example:

curl -X POST "http://127.0.0.1:$AIAA_PORT/admin/train/mmar_train"

A friendlier way is to install the MONAI Label client and interact directly with the client-side application.

By default, fine-tuning runs only on newly added data. To train with all of the data, create symbolic links (or copy the data) into AIAA's workspace/mmars folder.

The links should be created in the following structure:

[AIAA WORKSPACE FOLDER]
└── mmars
    └── [the mmar name]
        └── dataset
            └── training
                ├── images
                │   ├── image_1   <- creates inside here
                │   ├── image_2   <- creates inside here
                │   └── ...
                └── labels
                    ├── label_1   <- creates inside here
                    ├── label_2   <- creates inside here
                    └── ...
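The linking step can be sketched as follows. All paths here are illustrative: the demo source dataset and mock workspace are created only so the snippet is self-contained; substitute your real dataset and AIAA workspace locations.

```shell
# Demo source data and a mock workspace (stand-ins for real paths).
SRC=$(mktemp -d)
WORKSPACE=$(mktemp -d)
mkdir -p "$SRC/images" "$SRC/labels"
touch "$SRC/images/image_1" "$SRC/images/image_2" \
      "$SRC/labels/label_1" "$SRC/labels/label_2"

# Link every image/label into the MMAR's training folder so that
# fine-tuning sees the full dataset, not just newly added samples.
# "mmar_train" matches the model name used when loading the MMAR.
DEST="$WORKSPACE/mmars/mmar_train/dataset/training"
mkdir -p "$DEST/images" "$DEST/labels"
for f in "$SRC"/images/*; do ln -s "$f" "$DEST/images/$(basename "$f")"; done
for f in "$SRC"/labels/*; do ln -s "$f" "$DEST/labels/$(basename "$f")"; done
```

Symbolic links avoid duplicating large imaging volumes on disk; use a plain copy instead if the source data may be moved or deleted while training runs.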

© Copyright 2021, NVIDIA. Last updated on Feb 2, 2023.