9.34. Clara Deploy DICOM Report Object Writer

CAUTION: Investigational device, not for diagnostic use. Limited by Federal (or United States) law to investigational use.

This research-use-only software has not been cleared or approved by the FDA or any regulatory agency.

This asset requires the Clara Deploy SDK. Follow the instructions on the Clara Bootstrap page to install the Clara Deploy SDK.

9.34.1. Overview

This example application creates a DICOM encapsulated PDF object as well as DICOM Comprehensive 3D Structure Report object for AI classification results. The created DICOM objects are saved in DICOM Part 10 files.

The design and implementation of this application follow the guidance in the Integrating the Healthcare Enterprise (IHE) Radiology Technical Framework Supplement AI Results (AIR) Revision 1.1 - Trial Implementation. This AI Results Profile addresses the capture, distribution, and display of medical imaging analysis results. The central use case involves results generated by artificial intelligence (AI Model) algorithms.


NOTE

The DICOM SR Writer is an experimental implementation, specifically in its writing of the SR Document modules. This is partly due to the lack of applicable codes for AI classification results, and to the need for the content to be specific to the requested procedure. It is therefore advised that this operator be customized for a specific AI model before use.


9.34.2. Inputs

This application, in the form of a Docker container, expects the following inputs:

  • in the folder /input, by default, a single AI classification results file. Text file types, .txt and .csv, are supported.

  • in the folder /dcm, by default, the original DICOM Study instance files that were analyzed for the classification results. The instance files can be in subfolders.

  • Optionally, information about the AI model used in the analysis can be provided to the application through environment variables. See the section below on environment variables.

Both the /input and /dcm folders must be mapped to host folders when the Docker container is started.
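The exact layout of the classification results file depends on the AI model that produced it. As a purely hypothetical illustration (the column names below are assumptions, not the operator's required format), a .csv such as test-data/classification/preds_model.csv might hold one class label and score per row:

```python
import csv
import io

# Hypothetical classification results; the real column layout depends on
# the AI model that produced the file and may differ.
sample_csv = "class,probability\nCOVID-19,0.91\nnormal,0.09\n"

def parse_results(text):
    """Parse a simple label,probability CSV into a list of (label, score)."""
    reader = csv.DictReader(io.StringIO(text))
    return [(row["class"], float(row["probability"])) for row in reader]

results = parse_results(sample_csv)
print(results)  # [('COVID-19', 0.91), ('normal', 0.09)]
```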

9.34.3. Outputs

This application saves the DICOM objects to the output folder, /output by default, in the DICOM Part 10 File Format. Each file name is generated by suffixing the input file name with -DICOMReport-PDF or -DICOMReport-SR and the extension .dcm. The output folder must be mapped to a host folder.

Logs generated by the application are saved in the folder /logs by default, which similarly must be mapped to a host folder.
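The output naming rule can be sketched in a few lines of Python. The helper below is illustrative only, not the operator's actual code:

```python
import os

def report_file_name(input_file, report_type):
    """Derive the output DICOM file name from the input results file name.

    Mirrors the documented rule: suffix the input name with
    -DICOMReport-PDF or -DICOMReport-SR and use the .dcm extension.
    """
    stem, _ = os.path.splitext(os.path.basename(input_file))
    suffix = {"pdf": "-DICOMReport-PDF", "sr": "-DICOMReport-SR"}[report_type]
    return stem + suffix + ".dcm"

print(report_file_name("/input/preds_model.csv", "pdf"))
# preds_model-DICOMReport-PDF.dcm
```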

9.34.4. Environment Variables

The application supports the following environment variables for customizing IO and DICOM report IOD types, as well as getting AI model information (default value in parentheses):

  • NVIDIA_CLARA_INPUT ('/input'): The root folder where the application searches for the AI results file.

  • NVIDIA_CLARA_OUTPUT ('/output'): The folder where the application saves generated DICOM instance files.

  • NVIDIA_CLARA_LOGS ('/logs'): The folder for application logs.

  • NVIDIA_CLARA_DCM ('/dcm'): The folder where the application searches for the original DICOM study instance files.

  • NVIDIA_DICOM_REPORT_TYPE ('pdf'): The type of report to be generated by the application, pdf or sr. When left blank, all supported types are generated.

  • NVIDIA_AI_MODEL_CREATOR (''): Creator of the AI model, used for populating the Contributing Equipment Sequence in the DICOM report along with the next few variables. This is recommended by [IHE AI Results (AIR) Revision 1.1 - Trial Implementation](https://www.ihe.net/uploadedFiles/Documents/Radiology/IHE_RAD_Suppl_AIR.pdf).

  • NVIDIA_AI_MODEL_NAME (''): Name of the AI model.

  • NVIDIA_AI_MODEL_VERSION (''): Version of the AI model.

  • NVIDIA_AI_MODEL_UID (''): Unique identifier of the AI model.
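Reading these variables with their documented defaults is a simple environment lookup. The snippet below is a sketch of that pattern, not the actual runtime_envs.py module:

```python
import os

# Documented defaults for the I/O folders and report type.
_DEFAULTS = {
    "NVIDIA_CLARA_INPUT": "/input",
    "NVIDIA_CLARA_OUTPUT": "/output",
    "NVIDIA_CLARA_LOGS": "/logs",
    "NVIDIA_CLARA_DCM": "/dcm",
    "NVIDIA_DICOM_REPORT_TYPE": "pdf",
}

def get_env(name):
    """Return the environment variable's value, falling back to the default."""
    return os.environ.get(name, _DEFAULTS.get(name, ""))

print(get_env("NVIDIA_CLARA_INPUT"))  # /input unless overridden in the environment
```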

9.34.5. Directory Structure

The directories in the container are shown below. The core of the application code is under the folder dicomreport.

/app
├── buildContainers.sh
├── dicomreport
│   ├── app.py
│   ├── dicom_iod_writer.py
│   ├── dicom_parser.py
│   ├── dicom_pdf_writer.py
│   ├── dicom_sr_writer.py
│   ├── __init__.py
│   └── runtime_envs.py
├── Dockerfile
├── __init__.py
├── logging_config.json
├── logs
│   ├── errors.log
│   ├── info.log
│   └── report_content.pdf
├── main.py
├── ngc
│   ├── metadata.json
│   └── overview.md
├── output
│   ├── preds_model-DICOMReport-PDF.dcm
│   └── preds_model-DICOMReport-SR.dcm
├── public
│   └── docs
│       └── README.md
├── requirements.txt
├── run_app_docker.sh
└── test-data
    ├── classification
    │   └── preds_model.csv
    └── dcm
        └── CT000000.dcm

9.34.6. Executing the Operator Docker Image

9.34.6.1. Prerequisites

  • The classification result file, .txt or .csv type.

  • At least one of the original DICOM instance files from the DICOM study used in the AI inference.

9.34.6.2. Step 1

Change to your working directory (e.g. my_test).

9.34.6.3. Step 2

Create, if they do not exist, the following directories under your working directory:

  • input, and copy over the classification file.

  • dcm, and copy over the dcm files of the original DICOM series.

  • output for the generated DICOM report dcm file(s).

  • logs for log files.

9.34.6.4. Step 3

In your working directory, create a shell script (e.g. run_app_docker.sh, or another name if you prefer), copy and paste the sample content below, change the variable APP_NAME to the name and tag of your Docker image, and save the file.

SCRIPT_DIR=$(dirname "$(readlink -f "$0")")
TESTDATA_DIR=$(readlink -f "${SCRIPT_DIR}"/test-data)
APP_NAME="dicomreport_writer:latest"
INPUT_TYPE="classification"

# Build Docker image, not needed if the Docker image has been pulled.
# docker build -t ${APP_NAME} -f ${SCRIPT_DIR}/Dockerfile ${SCRIPT_DIR}

# Run ${APP_NAME} container.
# Container names cannot contain ':', so strip the tag from APP_NAME.
docker run --name "${APP_NAME%%:*}" -t --rm \
    -v ${TESTDATA_DIR}/${INPUT_TYPE}:/input \
    -v ${SCRIPT_DIR}/output:/output \
    -v ${SCRIPT_DIR}/logs:/logs \
    -v ${TESTDATA_DIR}/dcm:/dcm \
    -e DEBUG_VSCODE \
    -e DEBUG_VSCODE_PORT \
    -e NVIDIA_CLARA_NOSYNCLOCK=TRUE \
    -e NVIDIA_DICOM_REPORT_TYPE='pdf' \
    -e NVIDIA_AI_MODEL_CREATOR='NVIDIA/NIH' \
    -e NVIDIA_AI_MODEL_NAME='COVID-19 Classification' \
    -e NVIDIA_AI_MODEL_VERSION=1.0 \
    ${APP_NAME}

echo "${APP_NAME} has finished."

9.34.6.5. Step 4

Execute the script below and wait for the application container to finish:

./run_app_docker.sh

9.34.6.6. Step 5

Check for the following output files in the output directory:

  • File(s) with the same name as the input file, suffixed with -DICOMReport-PDF or -DICOMReport-SR, and with the extension .dcm.

9.34.6.7. Step 6

To visualize the results, use MicroDicom or another DICOM viewer. For detailed steps, see the viewer documentation. The key steps are as follows:

  1. Import the DICOM instance file (dcm file) as well as the original DICOM series.

  2. Open the series for the report to view metadata and the report.

  3. You may have to open an external viewer to display the PDF.

9.34.7. Executing the Operator Docker Image Interactively

If you want to see the internals of the container and/or manually run the application inside the container, follow these steps:

  1. Start the container in an interactive session. To do this, modify the sample script above by replacing docker run -t with docker run -it --entrypoint /bin/bash, and then run the script file.

  2. Once in the container terminal, ensure the current directory is /app.

  3. Check /input and /dcm have the expected input file(s) and DICOM instance files respectively.

  4. Check /output and /logs folders and remove existing files if any.

  5. Enter the command python ./main.py, and watch the application execute and finish in a few seconds.

  6. Check the output folder for the newly created DICOM file.

  7. Enter command exit to exit the container.

9.34.8. License

An End User License Agreement is included with the product. By pulling and using the Clara Deploy asset on NGC, you accept the terms and conditions of these licenses.

9.34.9. Suggested Reading

Release Notes, the Getting Started Guide, and the SDK itself are available at the NVIDIA Developer forum.

For answers to any questions you may have about this release, visit the NVIDIA Devtalk forum.