10.34. Clara Deploy DICOM Report Object Writer
CAUTION: Investigational device, not for diagnostic use. Limited by Federal (or United States) law to investigational use.
This research use only software has not been cleared or approved by FDA or any regulatory agency.
This asset requires the Clara Deploy SDK. Follow the instructions on the Clara Bootstrap page to install the Clara Deploy SDK.
This example application creates a DICOM Encapsulated PDF object as well as a DICOM Comprehensive 3D Structured Report (SR) object for AI classification results. The created DICOM objects are saved in DICOM Part 10 files.
The design and implementation of this application follow the guidance in the Integrating the Healthcare Enterprise (IHE) Radiology Technical Framework Supplement AI Results (AIR) Revision 1.1 - Trial Implementation. This AI Results Profile addresses the capture, distribution, and display of medical imaging analysis results. The central use case involves results generated by artificial intelligence (AI Model) algorithms.
NOTE
The DICOM SR Writer is an experimental implementation, specifically in its writing of the SR Document modules. This is partly due to the lack of applicable codes for the AI classification results and the need to be specific to the requested procedure. It is therefore advised that this operator be customized for a specific AI model before use.
This application, in the form of a Docker container, expects the following inputs:
- In the folder /input, by default, a single AI classification results file. Text file types, .txt and .csv, are supported.
- In the folder /dcm, by default, the original DICOM study instance files that were analyzed for the classification results. The instance files can be in subfolders.
- In the folder /series_selection, by default, the selected series image file, selected-images.json, which is the output of the DICOM Parser or Series Selection operator, whichever is used to select the series and its image for inference.
- Optionally, the information of the AI model used in the analysis can be provided to the application through environment variables. See the section on environment variables below.
The /input and /dcm folders need to be mapped to host folders when the Docker container is started. If multiple series are present in /dcm and series selection has been used to select a specific series, then /series_selection also needs to be mapped to a host folder containing the selected-images.json file.
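A minimal sketch of the required volume mappings, assuming the host folders sit under the current working directory and using the image name from the sample script in Step 3 below (adjust paths, image name, and tag as needed):

docker run --rm \
    -v "$(pwd)/input":/input \
    -v "$(pwd)/dcm":/dcm \
    -v "$(pwd)/series_selection":/series_selection \
    -v "$(pwd)/output":/output \
    -v "$(pwd)/logs":/logs \
    dicomreport_writer:latest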
This application saves the DICOM objects to the output folder, /output by default, in DICOM Part 10 File Format. The file name is generated by suffixing the input file name with -DICOMReport-PDF or -DICOMReport-SR and the extension .dcm. The output folder must be mapped to a host folder.
Logs generated by the application are saved in the folder /logs by default, which similarly must be mapped to a host folder.
The application supports the following environment variables for customizing I/O and the DICOM report IOD types, as well as for providing AI model information (default value in parentheses; a usage sketch follows the list):
- NVIDIA_CLARA_INPUT: The root folder where the application searches for the AI result file (/input)
- NVIDIA_CLARA_OUTPUT: The folder where the application saves generated DICOM instance files (/output)
- NVIDIA_CLARA_LOGS: The folder for application logs (/logs)
- NVIDIA_CLARA_DCM: The folder where the application searches for the original DICOM study instance files (/dcm)
- NVIDIA_CLARA_SERIES_SELECTION: The folder where the application searches for the selected series JSON file (/series_selection)
- NVIDIA_DICOM_REPORT_TYPE: The type of report to be generated by the application, pdf or sr (pdf). When set to blank, all supported types are generated.
- NVIDIA_AI_MODEL_CREATOR: Creator of the AI model, used, along with the next few variables, for populating the Contributing Equipment Sequence in the DICOM report, as recommended by IHE AI Results (AIR) Revision 1.1 - Trial Implementation (https://www.ihe.net/uploadedFiles/Documents/Radiology/IHE_RAD_Suppl_AIR.pdf) (blank)
- NVIDIA_AI_MODEL_NAME: Name of the AI model (blank)
- NVIDIA_AI_MODEL_VERSION: Version of the AI model (blank)
- NVIDIA_AI_MODEL_UID: Unique identifier of the AI model (blank)
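As an illustration, the report type and model information might be overridden like this when running the application manually inside the container (see the interactive-session steps at the end of this page); the values below are illustrative, not defaults:

export NVIDIA_DICOM_REPORT_TYPE=''                          # blank: generate both PDF and SR
export NVIDIA_AI_MODEL_CREATOR='ExampleOrg'                 # illustrative value
export NVIDIA_AI_MODEL_NAME='Example Classification Model'  # illustrative value
export NVIDIA_AI_MODEL_VERSION='1.0'                        # illustrative value
export NVIDIA_AI_MODEL_UID='1.2.3.4.5.6.7.8'                # illustrative UID
python ./main.py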
The directories in the container are shown below. The core of the application code is under the folder dicomreport.
/app
├── buildContainers.sh
├── dicomreport
│ ├── app.py
│ ├── dicom_iod_writer.py
│ ├── dicom_parser.py
│ ├── dicom_pdf_writer.py
│ ├── dicom_sr_writer.py
│ ├── __init__.py
│ └── runtime_envs.py
├── Dockerfile
├── __init__.py
├── logging_config.json
├── logs
│ ├── errors.log
│ ├── info.log
│ └── report_content.pdf
├── main.py
├── ngc
│ ├── metadata.json
│ └── overview.md
├── output
│ ├── preds_model-DICOMReport-PDF.dcm
│ └── preds_model-DICOMReport-SR.dcm
├── public
│ └── docs
│ └── README.md
├── requirements.txt
├── run_app_docker.sh
└── test-data
├── classification
│ └── preds_model.csv
└── dcm
└── CT000000.dcm
10.34.6.1. Prerequisites
- The classification result file, of .txt or .csv type.
- At least one of the original DICOM instance files from the DICOM study used in the AI inference.
10.34.6.2. Step 1
Change to your working directory (e.g. my_test).
10.34.6.3. Step 2
Create, if they do not exist, the following directories under your working directory (a sketch of the commands follows the list):
- input, and copy over the classification file.
- dcm, and copy over the dcm files of the original DICOM series.
- output, for the generated DICOM report dcm file(s).
- logs, for log files.
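A minimal sketch of these steps as shell commands; the source paths are placeholders for your own classification result and DICOM series files:

mkdir -p input dcm output logs
cp /path/to/preds_model.csv input/
cp /path/to/original_series/*.dcm dcm/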
10.34.6.4. Step 3
In your working directory, create a shell script (e.g. run_app_docker.sh, or another name if you prefer), copy and paste the sample content below, modify the variable APP_NAME to the Docker image name and tag, and save the file.
#!/bin/bash
SCRIPT_DIR=$(dirname "$(readlink -f "$0")")
TESTDATA_DIR=$(readlink -f "${SCRIPT_DIR}"/test-data)
APP_NAME="dicomreport_writer:latest"
INPUT_TYPE="classification"
# Build Docker image, not needed if the Docker image has been pulled.
# docker build -t ${APP_NAME} -f ${SCRIPT_DIR}/Dockerfile ${SCRIPT_DIR}
# Run ${APP_NAME} container.
docker run -t --rm \
-v ${TESTDATA_DIR}/${INPUT_TYPE}:/input \
-v ${SCRIPT_DIR}/output:/output \
-v ${SCRIPT_DIR}/logs:/logs \
-v ${TESTDATA_DIR}/dcm:/dcm \
-e DEBUG_VSCODE \
-e DEBUG_VSCODE_PORT \
-e NVIDIA_CLARA_NOSYNCLOCK=TRUE \
-e NVIDIA_DICOM_REPORT_TYPE='pdf' \
-e NVIDIA_AI_MODEL_CREATOR='NVIDIA/NIH' \
-e NVIDIA_AI_MODEL_NAME='COVID-19 Classification' \
-e NVIDIA_AI_MODEL_VERSION=1.0 \
${APP_NAME}
echo "${APP_NAME}has finished."
10.34.6.5. Step 4
Execute the script below and wait for the application container to finish:
./run_app_docker.sh
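If the script is not yet executable, it can be made so and run in one step:

chmod +x run_app_docker.sh
./run_app_docker.sh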
10.34.6.6. Step 5
Check for the following output files (an example listing follows):
- The results are in the output directory.
- File(s) with the same name as the input file, suffixed with -DICOMReport-PDF or -DICOMReport-SR, and with extension .dcm.
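For example, with the sample input file preds_model.csv and NVIDIA_DICOM_REPORT_TYPE left blank, the output directory would be expected to contain files named like these:

ls output/
# preds_model-DICOMReport-PDF.dcm
# preds_model-DICOMReport-SR.dcm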
10.34.6.7. Step 6
To visualize the results, use MicroDicom or another DICOM viewer. For detailed steps, see the viewer documentation. The key steps are as follows:
Import the DICOM instance file (dcm file) as well as the original DICOM series.
Open the series for the report to view metadata and the report.
You may have to open an external viewer to display the PDF.
If you want to see the internals of the container and/or manually run the application inside the container, follow these steps (a sketch of the commands follows the list):
- Start the container in an interactive session. To do this, modify the sample script above by replacing docker run -t with docker run -it --entrypoint /bin/bash, and then run the script file.
- Once in the container terminal, ensure the current directory is /app.
- Check that /input and /dcm have the expected input file(s) and DICOM instance files, respectively.
- Check the /output and /logs folders and remove existing files, if any.
- Enter the command python ./main.py, and watch the application execute and finish in a few seconds.
- Check the output folder for the newly created DICOM file.
- Enter the command exit to exit the container.
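A sketch of this flow, assuming the variables SCRIPT_DIR, TESTDATA_DIR, INPUT_TYPE, and APP_NAME are defined as in the Step 3 script:

# Interactive variant of the sample docker run command
docker run -it --rm --entrypoint /bin/bash \
    -v ${TESTDATA_DIR}/${INPUT_TYPE}:/input \
    -v ${SCRIPT_DIR}/output:/output \
    -v ${SCRIPT_DIR}/logs:/logs \
    -v ${TESTDATA_DIR}/dcm:/dcm \
    -e NVIDIA_CLARA_NOSYNCLOCK=TRUE \
    ${APP_NAME}

# Then, inside the container:
cd /app
ls /input /dcm            # confirm the input file(s) and DICOM instances are present
rm -f /output/* /logs/*   # clear any existing output and log files
python ./main.py          # run the application
ls /output                # check for the newly created DICOM file(s)
exit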
An End User License Agreement is included with the product. By pulling and using the Clara Deploy asset on NGC, you accept the terms and conditions of these licenses.
Release Notes, the Getting Started Guide, and the SDK itself are available at the NVIDIA Developer forum.
For answers to any questions you may have about this release, visit the NVIDIA Devtalk forum.