9.32. Digital Pathology Nuclei Segmentation Operator

9.32.1. Overview

The Digital Pathology Nuclei Segmentation Operator is a reference application that uses the Clara Pipeline Driver and OpenSlide for digital pathology image segmentation (cell nuclei segmentation).

This application, delivered as a Docker container, is designed to work with the Clara (CPDriver) orchestration engine so that it can use FastIO's features, but it can also run standalone with Docker by setting the environment variable NVIDIA_CLARA_NOSYNCLOCK to TRUE.
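A minimal sketch of how such a standalone-mode check could look in Python; the variable name comes from the text above, but the helper function itself is hypothetical, not part of the operator's documented API:

```python
import os

def is_standalone() -> bool:
    """Return True when NVIDIA_CLARA_NOSYNCLOCK requests standalone mode.

    When running under the Clara orchestrator this variable is normally
    unset; setting it to 'TRUE' (case-insensitive here, as an assumption)
    skips the FastIO sync-lock handshake.
    """
    return os.environ.get("NVIDIA_CLARA_NOSYNCLOCK", "").strip().upper() == "TRUE"

# Example: simulate standalone execution with Docker's -e flag
os.environ["NVIDIA_CLARA_NOSYNCLOCK"] = "TRUE"
print(is_standalone())  # True
```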

It uses the following model/packages:

The main code is available at /app/main.py and is executed inside the container with parameters as shown below:

/bin/bash -c 'python -u /app/main.py <command name>'
usage: main.py [-h] [-d DEBUG_LEVEL] [--input-path INPUT_PATH]
               [--output-path OUTPUT_PATH] [--config-path CONFIG_PATH]
               [-w NUM_WORKERS]
               [--mask-pixel-count-limit MASK_PIXEL_COUNT_LIMIT]
               [-t TILE_SIZE] [-m MODEL_NAME] [-o OVERLAP]

positional arguments:
  command               Command to execute

optional arguments:
  -h, --help            show this help message and exit
  -d DEBUG_LEVEL, --debug-level DEBUG_LEVEL
                        Set debug level (e.g., 'INFO', 'DEBUG')
  --input-path INPUT_PATH
                        Input folder path. Default is '/input'
  --output-path OUTPUT_PATH
                        Output folder path. Default is '/output'
  --config-path CONFIG_PATH
                        Config folder path. Default is '/config'
  -w NUM_WORKERS, --num-workers NUM_WORKERS
                        Number of workers. Default is (# of cpus)
  --mask-pixel-count-limit MASK_PIXEL_COUNT_LIMIT
                        Mask pixel count limit. Default is 1024 * 1024
  -t TILE_SIZE, --tile-size TILE_SIZE
                        Tile size. Default is 256
  -m MODEL_NAME, --model-name MODEL_NAME
                        Model name. Default is 'segmentation_unet_nuclei'
  -o OVERLAP, --overlap OVERLAP
                        Overlap size. Default is 0. Not used for now
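The options above map onto a standard argparse parser. The following is an illustrative reconstruction based only on the help text, not the actual /app/main.py source; where the help text states no default (e.g., the debug level), none is assumed:

```python
import argparse
import os

def build_parser() -> argparse.ArgumentParser:
    # Reconstructed from the help text; option names and defaults follow it.
    parser = argparse.ArgumentParser(prog="main.py")
    parser.add_argument("command", help="Command to execute")
    parser.add_argument("-d", "--debug-level",
                        help="Set debug level (e.g., 'INFO', 'DEBUG')")
    parser.add_argument("--input-path", default="/input")
    parser.add_argument("--output-path", default="/output")
    parser.add_argument("--config-path", default="/config")
    parser.add_argument("-w", "--num-workers", type=int,
                        default=os.cpu_count())  # default is # of CPUs
    parser.add_argument("--mask-pixel-count-limit", type=int,
                        default=1024 * 1024)
    parser.add_argument("-t", "--tile-size", type=int, default=256)
    parser.add_argument("-m", "--model-name",
                        default="segmentation_unet_nuclei")
    parser.add_argument("-o", "--overlap", type=int, default=0)
    return parser

args = build_parser().parse_args(["segmentation"])
print(args.tile_size, args.model_name)  # 256 segmentation_unet_nuclei
```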

Depending on the given <command>, the application performs a different job, and each command acts as a stage in the pipeline.

9.32.2. Commands

segmentation

This executes all the operations (load/filter/stitch) at once.

This command loads a multi-resolution SVS file, tiles it, performs inference with Triton, and then writes the multi-resolution/tiled image to the file system. This process consists of the following three stages:

  • Pre-processing: Loads the whole slide image at low resolution to generate a mask. The generated mask image is used to skip inference on background tiles. For each tile, filters (color conversion, normalization, and so on) are applied before inference.
  • Inferencing: For each tile (256x256x3, uint8), Triton-based inference is used to segment the nuclei in the tile.
  • Post-processing: For each tile's segmentation result, the segmented regions are overlaid on top of the original image. Each post-processed tile is saved into a single multi-resolution/tiled TIFF file using the tifffile library.
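The background-skipping idea in the pre-processing stage can be sketched as follows. This is a schematic NumPy illustration with hypothetical helper names and an illustrative threshold value; the real operator reads the slide with OpenSlide and applies additional filters:

```python
import numpy as np

def make_background_mask(low_res: np.ndarray, threshold: int = 230) -> np.ndarray:
    """Mark pixels darker than `threshold` as tissue (True).

    `low_res` is an (H, W) grayscale thumbnail of the whole slide; the
    threshold value here is illustrative, not taken from the operator.
    """
    return low_res < threshold

def tissue_tiles(mask: np.ndarray, tile_size: int):
    """Yield (row, col) coordinates of tiles whose mask region contains tissue."""
    h, w = mask.shape
    for r in range(0, h, tile_size):
        for c in range(0, w, tile_size):
            if mask[r:r + tile_size, c:c + tile_size].any():
                yield r, c  # only these tiles would be sent to inference

# Example: a 4x4 thumbnail with tissue only in the top-left quadrant
thumb = np.full((4, 4), 255, dtype=np.uint8)
thumb[:2, :2] = 100
print(list(tissue_tiles(make_background_mask(thumb), 2)))  # [(0, 0)]
```

Only the tiles yielded here would be inferenced; everything else is treated as background and skipped.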

Input

The input is a folder (mounted at /input inside the container) containing the following files:

  • .tif or .svs - Input image file
  • config_render.json - Configuration for Render Server

This command expects a Triton server to be running, with the server's HTTP API address available through the environment variable NVIDIA_TRITONURI, so that inference calls are made against the model specified by the model name parameter (-m or --model-name).
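A sketch of how the operator might resolve the Triton endpoint from the environment; the helper function and the fail-fast behavior are assumptions for illustration, and the example endpoint value is hypothetical:

```python
import os

def triton_uri() -> str:
    """Return the Triton HTTP endpoint, failing fast if it is not configured."""
    uri = os.environ.get("NVIDIA_TRITONURI", "").strip()
    if not uri:
        raise RuntimeError(
            "NVIDIA_TRITONURI is not set; the segmentation command "
            "cannot reach the Triton inference server")
    return uri

os.environ["NVIDIA_TRITONURI"] = "localhost:8000"  # illustrative value only
print(triton_uri())  # localhost:8000
```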

Output

The following files will be stored in the /output folder inside the container:

  • image.tif - Output image file
  • config.meta - Metadata for Render Server
  • config_render.json - Configuration for Render Server