Model Config

The model config (config_aiaa.json in the MMAR) describes an inference workflow in AIAA. It is divided into the following sections (bold means required):

  1. Basic Information:

    • version: The version of the model.

    • type: The type of the model. The following types are currently supported in AIAA:

      • segmentation

      • annotation

      • classification

      • deepgrow

      • others

    • labels: The organs/subjects of interest for this model.

    • description: A text description of this model.

    Note

    The type of DExtr3D models is “annotation”.
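    Put together, the basic-information fields form a fragment like the following (values drawn from the spleen example at the end of this page; this is a fragment only, not a complete config):

```json
{
  "version": 1,
  "type": "segmentation",
  "labels": [ "spleen" ],
  "description": "A pre-trained model for volumetric (3D) segmentation of the spleen from CT image"
}
```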


  2. Pre-Transforms (pre_transforms):

    The transforms to apply before the data is passed into the model. Each transform has:

    • name: Name of the transform

    • args: Arguments passed when creating the transform object

    Example:


    { "name": "monai.transforms.LoadImaged", "args": { "keys": "image" } }
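    Each name/args pair can be instantiated generically by resolving the dotted class name and passing the arguments as keywords. The following is an illustrative sketch of that mechanism only, not the actual AIAA loader:

```python
import importlib

def build_transform(spec):
    """Instantiate an object from a {"name": ..., "args": ...} entry.
    Illustrative sketch only -- not the actual AIAA loader."""
    module_name, _, class_name = spec["name"].rpartition(".")
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls(**spec.get("args", {}))

# With MONAI installed, the entry above would resolve as:
#   build_transform({"name": "monai.transforms.LoadImaged", "args": {"keys": "image"}})
# Demonstrated here with a standard-library class so the sketch runs anywhere:
t = build_transform({"name": "datetime.timedelta", "args": {"days": 2}})
print(t.days)  # 2
```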


  3. Inference (inference):

    • input: The key(s) for the input images/volumes. (default: "image")

    • output: The key(s) for the output images/volumes. (default: "model")

    • AIAA (required for AIAA backend):

      • name: Name of the inference class. The built-in option is:

        • PyTorchInference: Native PyTorch inference, which supports both simple and scanning-window inference logic

      • args: Arguments passed when creating the inference object

    • TRITON (required for TRITON backend):

      • name: Name of the inference class. The built-in option is:

        • TritonInference: Triton inference, which supports both simple and scanning-window inference logic

      • args: Arguments passed when creating the inference object

      • triton_model_config: The Triton model configuration for this model. Refer to Triton Model Configuration to add any Triton-specific settings

    • meta_data: Additional information passed to the client as part of the model info
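    A minimal inference section for the AIAA backend (a fragment drawn from the spleen example at the end of this page) looks like:

```json
{
  "inference": {
    "input": "image",
    "output": "pred",
    "AIAA": {
      "name": "aiaa.inference.PyTorchInference",
      "args": {
        "scanning_window": true,
        "roi": [ 160, 160, 160 ],
        "overlap": 0.6
      }
    }
  }
}
```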

  4. Post-Transforms (post_transforms):

    The transforms to apply after prediction. The semantics are the same as for pre-transforms.
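    For instance, a post-transform entry that applies a softmax to the prediction (taken from the spleen example at the end of this page):

```json
{ "name": "monai.transforms.Activationsd", "args": { "keys": "pred", "softmax": true } }
```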

  5. Writer (writer):

    The writer used to write the results out.
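    A writer entry from the spleen example at the end of this page, which writes out the "pred" image and the "result" JSON:

```json
{
  "name": "aiaa.transforms.Writer",
  "args": { "image": "pred", "json": "result" }
}
```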

Note

For PyTorch models with Triton, the platform of the Triton model config should be set to “pytorch_libtorch”.

For Triton-related attributes, refer to the Triton documentation.

Attention

4.0+ does not support config_aiaa.json files or models from previous versions.

If you are using the Triton backend, follow these guidelines:

  • The config needs to have a “triton” section, and the “platform” needs to be “pytorch_libtorch”.

  • The Triton inputs are named “INPUT__x”, where x starts from 0.

  • The Triton outputs are named “OUTPUT__x”, where x starts from 0.

  • The inference inputs are the keys you want to pass into the network. They are mapped to the Triton inputs in the order they are specified.

  • The inference outputs are the keys under which the results from the network are stored. They are mapped to the Triton outputs in the order they are specified.

For example:


{
  "inference": {
    "input": ["image", "label"],
    "output": ["model", "logit"],
    "TRITON": {
      "name": "TritonInference",
      "args": {},
      "triton_model_config": {
        "platform": "pytorch_libtorch",
        "input": [
          { "name": "INPUT__0", "data_type": "TYPE_FP32", "dims": [3, 256, 256] },
          { "name": "INPUT__1", "data_type": "TYPE_FP32", "dims": [3, 256, 256] }
        ],
        "output": [
          { "name": "OUTPUT__0", "data_type": "TYPE_FP32", "dims": [1, 256, 256] },
          { "name": "OUTPUT__1", "data_type": "TYPE_FP32", "dims": [1, 256, 256] }
        ]
      }
    }
  }
}

The “image” will be fed into the network’s “INPUT__0”, while “label” will be fed into the network’s “INPUT__1”.

The result of “OUTPUT__0” will be stored into “model”, while “OUTPUT__1” will be stored into “logit”.
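This positional mapping can be expressed as a small sketch (illustrative only, not the actual AIAA implementation):

```python
# Map inference keys to Triton tensor names purely by position.
# Illustrative sketch only -- not the actual AIAA implementation.
def map_keys(keys, prefix):
    return {key: f"{prefix}__{i}" for i, key in enumerate(keys)}

inputs = map_keys(["image", "label"], "INPUT")
outputs = map_keys(["model", "logit"], "OUTPUT")
print(inputs)   # {'image': 'INPUT__0', 'label': 'INPUT__1'}
print(outputs)  # {'model': 'OUTPUT__0', 'logit': 'OUTPUT__1'}
```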

The following is an example of config_aiaa.json for the model clara_pt_spleen_ct_segmentation:


{
  "version": 1,
  "type": "segmentation",
  "labels": [ "spleen" ],
  "description": "A pre-trained model for volumetric (3D) segmentation of the spleen from CT image",
  "pre_transforms": [
    { "name": "monai.transforms.LoadImaged", "args": { "keys": "image" } },
    { "name": "monai.transforms.AddChanneld", "args": { "keys": "image" } },
    { "name": "monai.transforms.Spacingd", "args": { "keys": "image", "pixdim": [ 1.0, 1.0, 1.0 ] } },
    { "name": "monai.transforms.ScaleIntensityRanged", "args": { "keys": "image", "a_min": -57, "a_max": 164, "b_min": 0.0, "b_max": 1.0, "clip": true } }
  ],
  "inference": {
    "input": "image",
    "output": "pred",
    "AIAA": {
      "name": "aiaa.inference.PyTorchInference",
      "args": { "scanning_window": true, "roi": [ 160, 160, 160 ], "overlap": 0.6, "device": "cpu", "sw_device": "cuda" }
    },
    "TRITON": {
      "name": "aiaa.inference.TritonInference",
      "args": { "scanning_window": true, "roi": [ 160, 160, 160 ], "overlap": 0.1 },
      "triton_model_config": {
        "platform": "pytorch_libtorch",
        "max_batch_size": 1,
        "input": [ { "name": "INPUT__0", "data_type": "TYPE_FP32", "dims": [ 1, 160, 160, 160 ] } ],
        "output": [ { "name": "OUTPUT__0", "data_type": "TYPE_FP32", "dims": [ 2, 160, 160, 160 ] } ]
      }
    }
  },
  "post_transforms": [
    { "name": "monai.transforms.AddChanneld", "args": { "keys": "pred" } },
    { "name": "monai.transforms.Activationsd", "args": { "keys": "pred", "softmax": true } },
    { "name": "monai.transforms.AsDiscreted", "args": { "keys": "pred", "argmax": true } },
    { "name": "monai.transforms.SqueezeDimd", "args": { "keys": "pred", "dim": 0 } },
    { "name": "monai.transforms.ToNumpyd", "args": { "keys": "pred" } },
    { "name": "aiaa.transforms.Restored", "args": { "keys": "pred", "ref_image": "image" } },
    { "name": "aiaa.transforms.ExtremePointsd", "args": { "keys": "pred", "result": "result", "points": "points" } },
    { "name": "aiaa.transforms.BoundingBoxd", "args": { "keys": "pred", "result": "result", "bbox": "bbox" } }
  ],
  "writer": {
    "name": "aiaa.transforms.Writer",
    "args": { "image": "pred", "json": "result" }
  }
}

© Copyright 2021, NVIDIA. Last updated on Feb 2, 2023.