Model Config

The model config describes an inference workflow in AIAA. It is divided into the following sections (bold means required):

  1. Basic Information:

    • version: The AIAA config version; must be 2 or 3.

    • type: The model types currently supported in AIAA are:

      • segmentation

      • annotation

      • classification

      • deepgrow

      • others

      • pipeline

    • labels: The organs/subjects of interest for this model.

    • description: A text description of this model.

    Note

    For compatibility reasons, the type of a DExtr3D model is “annotation”.
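    Taken together, these fields form the top of the config file. A minimal sketch (values taken from the spleen example at the end of this page):

    {
      "version": "3",
      "type": "segmentation",
      "labels": ["spleen"],
      "description": "A pre-trained model for volumetric (3D) segmentation of the spleen from CT image"
    }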

  2. Pre-Transforms:

    The transforms to apply before the data is passed into the model. Each transform has:

    • name: Name of the transform. It can be a short name or a fully qualified name with class path (when bringing your own transforms).

    • args: Arguments passed during creation of the transform object.

    Example:

    {
      "name": "LoadNifti",
      "args": {
        "fields": "image",
        "as_closest_canonical": "false"
      }
    }
    

    Note

    A transform should be either a callable object or derived from the Clara Train transforms. Please refer to Bring your own Transforms for details.
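    When bringing your own transform, the name carries the full class path. A sketch, where myproject.transforms.MyNormalize is a hypothetical user-defined transform:

    {
      "name": "myproject.transforms.MyNormalize",
      "args": {
        "fields": "image"
      }
    }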

  3. Inference:

    • image: The input field for images/volumes. (default: "image")

    • image_format: The shape format of the input image. This is required only if you don’t specify the shape format in your pre-transforms.

    • name: Name of the Inference. Built-ins are:

      • TRTISInference: TRTIS inference (strongly recommended), which supports simple and scanning-window inference

      • TFInference: Native TensorFlow inference, which supports simple and scanning-window inference

      • PTInference: Native PyTorch inference for ad-hoc experiments

    • args: Arguments passed during creation of the Inference object.

    • node_mapping: Map from field to model tensor names. (default: {"NV_MODEL_INPUT": "image", "NV_MODEL_OUTPUT": "model"})

    • additional_info: Additional info passed to the client as part of the model info.

    • trtis (required for TRTIS engine):

      Please refer to TRTIS Model Configuration to add any TRTIS-specific settings.

    • tf (TF model node mapping; required in native mode):

      • input_nodes: Map from field to model input tensor names. Usually "image": "NV_MODEL_INPUT".

      • output_nodes: Map from field to model output tensor names. Usually "model": "NV_MODEL_OUTPUT".

    • pt (PyTorch model settings, used in native mode):

      • network:

        • name: The network to load to run prediction.

        • args: Arguments passed during creation of the network object.

    Note

    In case of a custom Inference, you can provide its fully qualified name with class path.

    Such a custom Inference should be either a callable object or implement the AIAA Inference interface.
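    For native PyTorch inference, the pt section might look like the following sketch; SegResNet and its args here are hypothetical placeholders for your own network:

    "inference": {
      "image": "image",
      "name": "PTInference",
      "pt": {
        "network": {
          "name": "SegResNet",
          "args": {
            "in_channels": 1,
            "out_channels": 2
          }
        }
      }
    }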

  4. Post-Transforms:

    The transforms to apply after prediction. The semantics are the same as for Pre-Transforms.

  5. Writer:

    The writer used to write the results out.
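    For example, the spleen model at the end of this page writes its result out as a NIfTI file:

    "writer": {
      "name": "WriteNifti",
      "args": {
        "field": "model",
        "dtype": "uint8"
      }
    }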

Note

For PyTorch models, the TRTIS platform should be set to “pytorch_libtorch”.

For TRTIS-related attributes, refer to the TRTIS documentation.
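For a PyTorch model served through TRTIS, the platform in the trtis section would be set as in this sketch (other attributes follow the TRTIS documentation):

"trtis": {
  "platform": "pytorch_libtorch",
  "max_batch_size": 1
}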

Attention

If you want to upload a TensorFlow checkpoint (ckpt) model, you need to specify the tf section.

Attention

The config_aiaa.json in v1.x is not compatible with the current release. Check Converting from previous TLT to modify your config file.

The following is an example of config_aiaa.json for the model clara_ct_seg_spleen_amp:

{
  "version": "3",
  "type": "segmentation",
  "labels": [
    "spleen"
  ],
  "description": "A pre-trained model for volumetric (3D) segmentation of the spleen from CT image",
  "pre_transforms": [
    {
      "name": "LoadNifti",
      "args": {
        "fields": "image"
      }
    },
    {
      "name": "ConvertToChannelsFirst",
      "args": {
        "fields": "image"
      }
    },
    {
      "name": "ScaleByResolution",
      "args": {
        "fields": "image",
        "target_resolution": [1.0, 1.0, 1.0]
      }
    },
    {
      "name": "ScaleIntensityRange",
      "args": {
        "fields": "image",
        "a_min": -57,
        "a_max": 164,
        "b_min": 0.0,
        "b_max": 1.0,
        "clip": true
      }
    }
  ],
  "inference": {
    "image": "image",
    "name": "TRTISInference",
    "args": {
      "scanning_window": true,
      "batch_size": 1,
      "roi": [160, 160, 160]
    },
    "trtis": {
      "platform": "tensorflow_graphdef",
      "max_batch_size": 1,
      "input": [
        {
          "name": "NV_MODEL_INPUT",
          "data_type": "TYPE_FP32",
          "dims": [1, 160, 160, 160]
        }
      ],
      "output": [
        {
          "name": "NV_MODEL_OUTPUT",
          "data_type": "TYPE_FP32",
          "dims": [2, 160, 160, 160]
        }
      ],
      "instance_group": [
        {
          "count": 1,
          "kind": "KIND_AUTO"
        }
      ]
    },
    "tf": {
      "input_nodes": {
        "image": "NV_MODEL_INPUT"
      },
      "output_nodes": {
        "model": "NV_MODEL_OUTPUT"
      }
    }
  },
  "post_transforms": [
    {
      "name": "ArgmaxAcrossChannels",
      "args": {
        "fields": "model"
      }
    },
    {
      "name": "FetchExtremePoints",
      "args": {
        "image_field": "image",
        "label_field": "model",
        "points": "points"
      }
    },
    {
      "name": "CopyProperties",
      "args": {
        "fields": ["model"],
        "from_field": "image",
        "properties": ["affine"]
      }
    },
    {
      "name": "RestoreOriginalShape",
      "args": {
        "field": "model",
        "src_field": "image",
        "is_label": true
      }
    }
  ],
  "writer": {
    "name": "WriteNifti",
    "args": {
      "field": "model",
      "dtype": "uint8"
    }
  }
}