Bring your own models

Each AIAA model consists of four parts: “Pre-transforms”, “Inference”, “Post-transforms”, and “Writer”. AIAA gives users the flexibility to customize their own transforms and inference procedure. Please refer to the following sections to customize each individual component:
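
For orientation, the sketch below shows how these four parts typically map onto a model configuration. It is illustrative only: it is written here as a Python dict, and the key names (pre_transforms, inference, post_transforms, writer), component names, and name/args fields are assumptions for illustration; the exact schema and available components are described in the sections referenced above.

```python
# Illustrative sketch of an AIAA model configuration, expressed as a Python dict.
# Key names and component names are examples only; consult the AIAA documentation
# for the exact schema supported by your server version.
model_config = {
    "pre_transforms": [
        {"name": "LoadImage", "args": {"fields": "image"}},
        {"name": "ScaleIntensity", "args": {"fields": "image"}},
    ],
    "inference": {
        "name": "SimpleInference",   # or a custom inference class from <workspace>/lib
        "args": {"input": "image", "output": "model"},
    },
    "post_transforms": [
        {"name": "Argmax", "args": {"fields": "model"}},
    ],
    "writer": {
        "name": "WriteResult",
        "args": {"field": "model"},
    },
}
```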

Note

Custom code should be placed inside the <workspace>/lib folder. The <workspace>/transforms folder is still supported but will be deprecated.
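
As an example of custom code that could live in <workspace>/lib, the sketch below shows a hypothetical pre-transform implemented as a callable class operating on a data dictionary. The module name, class name, and dictionary fields are assumptions for illustration; the exact interface AIAA expects is covered in the transform customization section.

```python
# <workspace>/lib/my_transforms.py
# Hypothetical custom pre-transform; the exact interface AIAA expects is
# described in the transform customization section.
import numpy as np


class ClipIntensity:
    """Clip image intensities to a fixed range (illustrative example)."""

    def __init__(self, field="image", minimum=-1000.0, maximum=1000.0):
        self.field = field
        self.minimum = minimum
        self.maximum = maximum

    def __call__(self, data):
        # `data` is assumed to be a dict of named arrays passed along the pipeline.
        img = np.asarray(data[self.field])
        data[self.field] = np.clip(img, self.minimum, self.maximum)
        return data
```

Such a transform would then be referenced from the model configuration by its module path, for example my_transforms.ClipIntensity (again, illustrative).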

Attention

<workspace> refers to the workspace path specified when launching the AIAA server. By default, it is /var/nvidia/aiaa inside the Docker container; please refer to Running AIAA for more information.
