Bring your own models
Each AIAA model consists of four parts: "Pre-transforms", "Inference", "Post-transforms", and "Writer". AIAA gives users the flexibility to customize their own transforms and inference procedures. Please refer to the following sections for details on customizing each component:
- Bring your own Transforms
- Bring your own Inference
- Bring your own Writer
- Bring your own InferencePipeline
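As an illustration of the kind of component you can plug in, here is a minimal sketch of a custom pre-transform. The class name, constructor parameters, and the dictionary-in/dictionary-out calling convention below are assumptions for illustration only; consult "Bring your own Transforms" for the actual AIAA interface.

```python
# Hypothetical pre-transform sketch; the real AIAA transform interface is
# described in "Bring your own Transforms". Names here are illustrative.
import numpy as np


class ClipIntensity:
    """Clip image intensities to a fixed range before inference."""

    def __init__(self, field: str, minimum: float, maximum: float):
        self.field = field      # key of the image array in the data dict
        self.minimum = minimum
        self.maximum = maximum

    def __call__(self, data: dict) -> dict:
        image = np.asarray(data[self.field], dtype=np.float32)
        data[self.field] = np.clip(image, self.minimum, self.maximum)
        return data


# Usage: transforms like this are applied in sequence to a data dictionary.
sample = {"image": np.array([-500.0, 0.0, 2000.0])}
result = ClipIntensity("image", -100.0, 1000.0)(sample)
```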
The custom code should be put inside the designated folder under <workspace> (the <workspace>/transforms folder is still supported but will be deprecated).
<workspace> refers to the workspace path specified when launching the AIAA server. By default it is /var/nvidia/aiaa inside the Docker container; please refer to Running AIAA for more information.
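To make the default concrete, the sketch below shows one way a server could resolve its workspace path. The `AIAA_WORKSPACE` environment variable and the `resolve_workspace` helper are hypothetical, invented for this example; only the default path `/var/nvidia/aiaa` comes from the text above.

```python
import os
from pathlib import Path
from typing import Optional

# Default workspace inside the container, per the documentation above.
DEFAULT_WORKSPACE = "/var/nvidia/aiaa"


def resolve_workspace(cli_arg: Optional[str] = None) -> Path:
    """Pick the workspace path: an explicit argument wins, then a
    (hypothetical) AIAA_WORKSPACE environment variable, then the default."""
    return Path(cli_arg or os.environ.get("AIAA_WORKSPACE", DEFAULT_WORKSPACE))


# An explicit path overrides the default.
workspace = resolve_workspace("/tmp/my_workspace")
```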