Clara Holoscan Deploy 0.8.1 EA

10.34. Clara Deploy DICOM Report Object Writer

CAUTION: Investigational device, not for diagnostic use. Limited by Federal (or United States) law to investigational use.

This research use only software has not been cleared or approved by FDA or any regulatory agency.

This asset requires the Clara Deploy SDK. Follow the instructions on the Clara Bootstrap page to install the Clara Deploy SDK.

This example application creates a DICOM encapsulated PDF object as well as DICOM Comprehensive 3D Structure Report object for AI classification results. The created DICOM objects are saved in DICOM Part 10 files.

The design and implementation of this application follow the guidance in the Integrating the Healthcare Enterprise (IHE) Radiology Technical Framework Supplement AI Results (AIR) Revision 1.1 - Trial Implementation. This AI Results Profile addresses the capture, distribution, and display of medical imaging analysis results. The central use case involves results generated by artificial intelligence (AI Model) algorithms.


The DICOM SR Writer is an experimental implementation, specifically in its writing of the SR Document modules. This is partly due to the lack of applicable codes for AI classification results, and to the need for the report to be specific to the requested procedure. It is therefore advised that this operator be customized for a specific AI model before use.

This application, in the form of a Docker container, expects the following inputs:

  • in the folder /input, by default, a single AI classification results file. Text file types .txt and .csv are supported

  • in the folder /dcm, by default, the original DICOM Study instance files that were analyzed for the classification results. The instance files can be in subfolders

  • in the folder /series_selection, by default, the selected series image file, selected-images.json, output of DICOM Parser or Series Selection operator, whichever is used to select the series and its image for inference

  • Optionally, the information of the AI model used in the analysis can be provided to the application through environment variables. See the section below on environment variables

The /input and /dcm folders need to be mapped to host folders when the Docker container is started. If multiple series are in /dcm and series selection has been used to select specific series, then /series_selection also needs to be mapped to a folder containing the selected-images.json file.

This application saves the DICOM objects to the output folder /output by default in DICOM Part 10 File Format. Each file name is generated by suffixing the input file name with the report type, -DICOMReport-PDF or -DICOMReport-SR, and the extension .dcm. The output folder must be mapped to a host folder.
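For illustration, the naming scheme can be sketched in a few lines of Python; the helper name is an assumption for this example, not taken from the application source, and the suffixes are those seen in the sample output files.

```python
from pathlib import Path

def output_name(input_file: str, suffix: str) -> str:
    """Build the output file name: input file stem + report suffix + '.dcm'."""
    return Path(input_file).stem + suffix + ".dcm"

# e.g. preds_model.csv -> preds_model-DICOMReport-PDF.dcm
print(output_name("preds_model.csv", "-DICOMReport-PDF"))
```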

Logs generated by the application are saved in the folder /logs by default, which similarly must be mapped to a host folder.

The application supports the following environment variables for customizing IO and DICOM report IOD types, as well as getting AI model information (default value in parentheses):

  • NVIDIA_CLARA_INPUT: The root folder where the application searches for AI result file, default /input

  • NVIDIA_CLARA_OUTPUT: The folder where the application saves generated DICOM instance files, default /output

  • NVIDIA_CLARA_LOGS: The folder for application logs, default /logs

  • NVIDIA_CLARA_DCM: The folder where the application searches for the original DICOM study instance files, default /dcm

  • NVIDIA_CLARA_SERIES_SELECTION: The folder where the application searches for the selected series JSON file, default /series_selection

  • NVIDIA_DICOM_REPORT_TYPE: The type of report to be generated by the application, pdf or sr, default pdf. When set to blank, all supported types are generated

  • NVIDIA_AI_MODEL_CREATOR: Creator of the AI model, used for populating the Contributing Equipment Sequence in the DICOM report along with the next few variables. This is recommended by IHE AI Results (AIR) Revision 1.1 - Trial Implementation, default blank

  • NVIDIA_AI_MODEL_NAME: Name of the AI model, default blank

  • NVIDIA_AI_MODEL_VERSION: Version of the AI model, default blank

  • NVIDIA_AI_MODEL_UID: Unique identifier of the AI model, default blank
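The folder and report-type variables above can be resolved with a short stdlib-only sketch; the defaults mirror the documented values, but the helper names are illustrative assumptions, not taken from the actual application code.

```python
import os

# Defaults as documented above.
DEFAULTS = {
    "NVIDIA_CLARA_INPUT": "/input",
    "NVIDIA_CLARA_OUTPUT": "/output",
    "NVIDIA_CLARA_LOGS": "/logs",
    "NVIDIA_CLARA_DCM": "/dcm",
    "NVIDIA_CLARA_SERIES_SELECTION": "/series_selection",
}

def get_path(name: str) -> str:
    """Return the configured folder, falling back to the documented default."""
    return os.environ.get(name, DEFAULTS[name])

def report_types() -> list:
    """'pdf' or 'sr' selects one report type; blank selects all supported types."""
    value = os.environ.get("NVIDIA_DICOM_REPORT_TYPE", "pdf").strip().lower()
    return ["pdf", "sr"] if value == "" else [value]
```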

The directories in the container are shown below. The core of the application code is under the folder dicomreport.


/app
├── 
├── dicomreport
│   ├── 
│   ├── 
│   ├── 
│   ├── 
│   ├── 
│   ├── 
│   └── 
├── Dockerfile
├── 
├── logging_config.json
├── logs
│   ├── errors.log
│   ├── info.log
│   └── report_content.pdf
├── 
├── ngc
│   ├── metadata.json
│   └── 
├── output
│   ├── preds_model-DICOMReport-PDF.dcm
│   └── preds_model-DICOMReport-SR.dcm
├── public
│   └── docs
│       └── 
├── requirements.txt
├── 
└── test-data
    ├── classification
    │   └── preds_model.csv
    └── dcm
        └── CT000000.dcm

  • The classification result file, .txt or .csv type.

  • At least one of the original DICOM instance files from the DICOM study used in the AI inference.
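Locating these inputs can be sketched with stdlib Python; the function names are assumptions for illustration, not the application's own, and the search mirrors the documented behavior (result file in the input folder, DICOM instances possibly in subfolders of /dcm).

```python
from pathlib import Path

def find_result_file(input_dir="/input"):
    """Return the first .txt or .csv classification result file, if any."""
    for pattern in ("*.txt", "*.csv"):
        for f in sorted(Path(input_dir).glob(pattern)):
            return f
    return None

def find_dicom_files(dcm_dir="/dcm"):
    """Collect DICOM instance files; subfolders are searched recursively."""
    return sorted(Path(dcm_dir).rglob("*.dcm"))
```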

Change to your working directory (e.g. my_test).

Create, if they do not exist, the following directories under your working directory:

  • input, and copy over the classification file.

  • dcm, and copy over the dcm files of the original DICOM series.

  • output for the generated DICOM report dcm file(s).

  • logs for log files.

In your working directory, create a shell script, copy and paste the sample content below, modify the variable APP_NAME to the name and tag of the Docker image, and save the file.


SCRIPT_DIR=$(dirname "$(readlink -f "$0")")
TESTDATA_DIR=$(readlink -f "${SCRIPT_DIR}"/test-data)
APP_NAME="dicomreport_writer:latest"
INPUT_TYPE="classification"

# Build Docker image, not needed if the Docker image has been pulled.
# docker build -t ${APP_NAME} -f ${SCRIPT_DIR}/Dockerfile ${SCRIPT_DIR}

# Run ${APP_NAME} container.
docker run --name ${APP_NAME} -t --rm \
    -v ${TESTDATA_DIR}/${INPUT_TYPE}:/input \
    -v ${SCRIPT_DIR}/output:/output \
    -v ${SCRIPT_DIR}/logs:/logs \
    -v ${TESTDATA_DIR}/dcm:/dcm \
    -e DEBUG_VSCODE \
    -e DEBUG_VSCODE_PORT \
    -e NVIDIA_CLARA_NOSYNCLOCK=TRUE \
    -e NVIDIA_DICOM_REPORT_TYPE='pdf' \
    -e NVIDIA_AI_MODEL_CREATOR='NVIDIA/NIH' \
    -e NVIDIA_AI_MODEL_NAME='COVID-19 Classification' \
    -e NVIDIA_AI_MODEL_VERSION=1.0 \
    ${APP_NAME}

echo "${APP_NAME} has finished."

Execute the script and wait for the application container to finish:


./

Check for the following output files:

  • The results are in the output directory

  • File(s) with the same name as the input file, suffixed with -DICOMReport-PDF or -DICOMReport-SR, and with extension .dcm.
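As a quick sanity check on the output files, a stdlib-only sketch can verify that each generated file is in DICOM Part 10 File Format, i.e. a 128-byte preamble followed by the 'DICM' magic bytes; the function names here are illustrative assumptions.

```python
from pathlib import Path

def is_part10_dicom(path) -> bool:
    """A DICOM Part 10 file has a 128-byte preamble followed by b'DICM'."""
    with open(path, "rb") as f:
        f.seek(128)
        return f.read(4) == b"DICM"

def check_outputs(output_dir="output") -> dict:
    """Map each .dcm file in the output folder to its Part 10 validity."""
    out = Path(output_dir)
    if not out.is_dir():
        return {}
    return {p.name: is_part10_dicom(p) for p in sorted(out.glob("*.dcm"))}
```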

To visualize the results, use MicroDicom or another DICOM viewer. For detailed steps, see the viewer documentation. The key steps are as follows:

  1. Import the DICOM instance file (dcm file) as well as the original DICOM series.

  2. Open the series for the report to view metadata and the report.

  3. You may have to open an external viewer to display the PDF.

If you want to see the internals of the container and/or manually run the application inside the container, follow these steps:

  1. Start the container in an interactive session. To do this, modify the sample script above by replacing docker run -t with docker run -it --entrypoint /bin/bash, and then run the script file.

  2. Once in the container terminal, ensure the current directory is /app.

  3. Check /input and /dcm have the expected input file(s) and DICOM instance files respectively.

  4. Check /output and /logs folders and remove existing files if any.

  5. Enter the command python ./, and watch the application execute and finish in a few seconds.

  6. Check the output folder for the newly created DICOM file.

  7. Enter command exit to exit the container.

An End User License Agreement is included with the product. By pulling and using the Clara Deploy asset on NGC, you accept the terms and conditions of these licenses.

Release Notes, the Getting Started Guide, and the SDK itself are available at the NVIDIA Developer forum.

For answers to any questions you may have about this release, visit the NVIDIA Devtalk forum.

© Copyright 2018-2020, NVIDIA Corporation. All rights reserved. Last updated on Feb 1, 2023.