7.4. Clara Deploy SDK CT Recon Operator

7.4.1. Overview

The CT reconstruction operator within the Clara Deploy SDK uses the Reconstruction Toolkit (RTK). RTK is open-source, cross-platform software based on the Insight Toolkit (ITK) that performs fast circular cone-beam CT reconstruction.

The CT reconstruction design in Clara uses pre-compiled binaries and applications from RTK and ITK; a wrapper is provided that exposes several options to the user. A reconstruction pipeline definition specifies the reconstruction parameters and I/O details to the Clara Deploy SDK. The Clara Deploy SDK spins up the reconstruction container as specified in the pipeline and executes it with the given parameters and data. Output from the reconstruction operator is written to the mounted folder. A brief schema is shown in the figure below.


7.4.2. Data Input

The Reconstruction Operator runs on raw CT projections acquired with a cone-beam geometry. A geometry file is created by the reconstruction container and fed into the reconstruction algorithm. One projection from the sequence of projections in the sample dataset is shown in the figure below.


Projections are fed into the reconstruction algorithm as an MHD volume. The DICOM form of the sample dataset (nvrtk_sample_d1_dicom_raw_0.0.1.zip) is available in the test-data folder of the SDK zip.
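An MHD file is a plain-text MetaImage header that describes the raw projection data. The following sketch parses such a header with standard-library Python; the field names follow the MetaIO convention (ObjectType, NDims, DimSize, ElementSpacing, ElementDataFile), and the sample values shown are illustrative, not taken from the shipped dataset.

```python
# Minimal sketch of reading a MetaImage (.mhd) header, the format the
# recon operator expects for input projections. Field names follow the
# MetaIO "Key = Value" convention.

def parse_mhd_header(text):
    """Parse 'Key = Value' lines of an .mhd header into a dict."""
    header = {}
    for line in text.splitlines():
        if "=" not in line:
            continue
        key, value = (part.strip() for part in line.split("=", 1))
        header[key] = value
    return header

# Illustrative header; the real sample dataset's values may differ.
sample = """\
ObjectType = Image
NDims = 3
DimSize = 512 512 180
ElementSpacing = 1.0 1.0 1.0
ElementType = MET_FLOAT
ElementDataFile = projections.raw
"""

header = parse_mhd_header(sample)
dims = [int(v) for v in header["DimSize"].split()]
print(dims)  # the third value is the number of projections in the stack
```

For a projection stack, the third DimSize entry corresponds to the number of projections, which must agree with the number-of-projections parameter described below.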

7.4.3. Data Output

The pipeline outputs a reconstructed volume in MHD format. A slice from the volume reconstructed from the sample dataset is shown in the figure below.


7.4.4. Parameters

The following reconstruction parameters are supported:

  • Number of projections: The number of projections used to create the cone-beam geometry. The geometry is created internally by the reconstruction container and used in the reconstruction process. Range: [1, 1024].

  • Recon algorithms: Clara R3 supports two reconstruction algorithms: FDK (Feldkamp, Davis, and Kress) and Iterative FDK.

  • Input projections: Input projection data in MHD format. The user must provide a projection volume (from a cone-beam geometry) that is consistent with the number-of-projections parameter.

  • Spacing of reconstructed volume: (X,Y,Z) spacing information of the volume to be reconstructed.

  • Origin of reconstructed volume: (X,Y,Z) origin information of the volume to be reconstructed.

  • Dimensions of reconstructed volume: (X,Y,Z) dimension information of the volume to be reconstructed.

  • Use GPU: Hardware to perform the reconstruction. Options are CPU or GPU.

  • Number of iterations: Applicable only for Iterative FDK.

  • Pre-processing filter parameters: To process the input projections before reconstruction starts, a Hann (Hanning) filter is supported in Clara R3. The filter smoothing parameter can be set to a value between 0 and 1.
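To give a feel for what the Hann smoothing parameter controls, the sketch below tapers normalized frequency components with a Hann window under an assumed cut-off. This illustrates the general idea behind Hann filtering of projections, not RTK's exact implementation, whose weighting may differ.

```python
import numpy as np

# Sketch of a Hann window over a normalized frequency axis [0, 1].
# Frequencies above the assumed cut-off are zeroed; those below are
# tapered by 0.5 * (1 + cos(pi * f / cutoff)). This mirrors the idea
# behind Hann pre-filtering, not RTK's exact implementation.

def hann_weights(freqs, cutoff):
    """Return Hann taper weights for normalized frequencies in [0, 1]."""
    freqs = np.asarray(freqs, dtype=float)
    weights = 0.5 * (1.0 + np.cos(np.pi * freqs / cutoff))
    weights[freqs > cutoff] = 0.0
    return weights

freqs = np.linspace(0.0, 1.0, 5)        # 0, 0.25, 0.5, 0.75, 1.0
print(hann_weights(freqs, cutoff=1.0))  # tapers from 1.0 down to 0.0
```

A smaller cut-off suppresses more of the high-frequency content, trading resolution for noise reduction in the reconstructed volume.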

7.4.5. Dependencies

The algorithms depend on the ITK libraries. All required libraries are pre-built and part of the Docker container.

7.4.6. Directory Structure

This sample includes the following folders and files:

  • wrapper/

    • nvrtk.py: A script that interfaces with the RTK and ITK applications. The script is the entry point to the Docker container; it extracts parameters from environment variables and invokes the required applications for processing. Parameters can be specified by the user in the pipeline definition file; they are then processed by the Clara Deploy SDK and made available to the reconstruction container via environment variables.

    The supported parameters, exposed as environment variables, are defined in the Dockerfile definition below.

  • Dockerfile

    • This script creates the Docker image.

    • All required RTK applications and ITK libraries are pulled from the artifactory and linked appropriately during image creation.

    • Default environment variables are specified in the Docker image creation. Environment variables can be updated during run time, either locally or within the Clara Platform. The following environment variables are supported:

      ENV nvrtkindir data # Directory location of input projections in MHD format
      ENV nvrtkoutdir data # Directory for output reconstructed files
      ENV nvrtkgeomdir data # Directory for geometry files
      ENV nvrtkinfile projections.mhd # Input MHD file name
      ENV nvrtkoutfile recon.mhd # Reconstructed output file name in MHD
      ENV nvrtkgeomfile geometry.xml # Geometry file name in XML
      ENV nvrtkalgo FDK # Specify reconstruction algorithm
      ENV nvrtkspacing 1.0,1.0,1.0 # Reconstructed Volume Spacing (x,y,z)
      ENV nvrtkdimension 512,512,100 # Dimension of reconstructed volume (x,y,z)
      ENV nvrtkorigin 0.0,0.0,0.0 # Origin of reconstructed volume (x,y,z)
      ENV nvrtkhann 0.0 # Frequency of hann window [0,1]
      ENV nvrtkhardware gpu # GPU or CPU for reconstruction
      ENV nvrtkniter 3 # number of iterations
      ENV nvrtkproj 180 # number of input projections
      ENV nvrtklogs data # Directory for recon logs
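The way a wrapper script consumes these variables can be sketched as follows. This is a hypothetical illustration, not the actual contents of nvrtk.py; the defaults simply mirror the Dockerfile ENV lines above.

```python
import os

# Hypothetical sketch of how a wrapper like nvrtk.py might read the
# environment variables defined in the Dockerfile (the real script's
# internals may differ). Defaults mirror the ENV lines above.

def get_recon_params(env=os.environ):
    return {
        "algo": env.get("nvrtkalgo", "FDK"),
        "hardware": env.get("nvrtkhardware", "gpu"),
        "spacing": [float(v) for v in env.get("nvrtkspacing", "1.0,1.0,1.0").split(",")],
        "dimension": [int(v) for v in env.get("nvrtkdimension", "512,512,100").split(",")],
        "origin": [float(v) for v in env.get("nvrtkorigin", "0.0,0.0,0.0").split(",")],
        "hann": float(env.get("nvrtkhann", "0.0")),
        "niter": int(env.get("nvrtkniter", "3")),
        "nproj": int(env.get("nvrtkproj", "180")),
    }

params = get_recon_params({})  # empty env -> all defaults
print(params["dimension"])     # [512, 512, 100]
```

Because the platform injects the pipeline's "variables" as environment variables at runtime, overriding a parameter requires no change to the container image itself.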

7.4.7. Execution of the Reconstruction Operator Docker Image

A pre-built reconstruction Docker image is shipped with the SDK. All parameters of the reconstruction operator (as defined above) are available to the user. Users can run the reconstruction operator using the pipeline definition files and sample datasets, described below.

Before executing the reconstruction pipeline, ensure that all environment variables are correctly set and the appropriate data folders are mounted. This can be done in the “variables” section of the recon-operator within the pipeline definition file.

Get the data: have projection data ready for execution. The sample dataset (nvrtk_sample_d1_dicom_raw_0.0.1.zip) can be extracted from the test-data folder of the SDK zip; unzip it before use.

7.4.8. Execution of the Reconstruction Pipeline within Clara

A reconstruction pipeline definition is required to run reconstruction on the Clara platform. These pipelines are included in the SDK zip under the clara-reference-pipelines folder.

Reconstruction Pipeline Definition

Users must define the reconstruction pipeline using the Pipeline definition language (that Clara supports).

Recon pipeline definition consists of 4 operators (dicom-reader, recon-operator, dicom-writer, register-dicom-results). Dicom-reader operator converts input DICOM data into MHD format. Recon-operator reconstructs the input data. Dicom-writer converts the reconstructed volume into DICOM format. Register-dicom-results operator transfers the DICOM volume to the configured PACS destination.

The recon-operator defines the reconstruction parameters under ‘variables’. These variables are passed by platform as environment variables to the recon-operator at runtime.
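A hypothetical fragment of the recon-operator's "variables" section is sketched below. Field names other than "variables" and the nvrtk parameter names are illustrative; consult the shipped ct-recon.yaml for the actual schema.

```yaml
# Hypothetical sketch only -- structure may differ from the shipped
# ct-recon.yaml; see that file for the real pipeline definition.
- name: recon-operator
  variables:
    nvrtkalgo: FDK        # reconstruction algorithm
    nvrtkproj: "180"      # number of input projections
    nvrtkhann: "0.0"      # Hann window smoothing [0,1]
    nvrtkhardware: gpu    # run reconstruction on GPU
```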

The DICOM Adapter kick-starts the pipeline. The dicom-reader operator reads the input DICOM images and converts them into MHD. The output of the dicom-reader becomes the input of the recon-operator, as specified in the pipeline. The recon-operator has three output folders: the ‘out’ folder receives the reconstructed volume, the ‘logs’ folder receives all logs from the recon-operator, and the ‘geom’ folder contains the geometry file created by the recon-operator.

Refer to the Clara Deploy SDK User manual for details on creating a new pipeline and details of the pipeline definition language.

Users can deploy their own reconstruction containers by creating Docker images and uploading them to the Clara Deploy SDK. The user’s reconstruction Docker image becomes one of the operators in the pipeline definition and can replace the sample reconstruction Docker image. Clara R3 supports input data in DICOM format, so users must convert their input data to DICOM before deploying.

Execution Steps on Clara Platform

The following are the execution steps for the CT Reconstruction Pipeline:

  • Ensure that the Clara Platform is correctly installed and all required containers and services are deployed. Refer to the installation section of the Clara SDK user’s guide.

  • Review the pipeline definition file ct-recon.yaml. The pipelines are shipped with the SDK and located in the clara-reference-pipelines folder of the SDK zip. Edit any parameters if required.

  • Create a pipeline ID for execution.

    clara create pipelines -p ct-recon.yaml
  • Verify the pipeline ID creation. Type the following command on the shell.

    kubectl get pl

    The pipeline ID created in the previous step is visible in the list of pipelines returned by kubectl.

  • Update the dicom-adapter configuration with the pipeline ID. Use an existing AE title or create a new one (if desired).

  • Restart the dicom-adapter service

    clara dicom stop
    clara dicom start
  • Execute the store-scu command to kick start the pipeline on Clara.

    # Send the input DICOM series to the DICOM Adapter to trigger the pipeline
    storescu -v +sd +r -xb -aet "DCM4CHEE" -aec <AE-TITLE> <LOCAL_IP> <PORT> <INPUT_DICOM_DIRECTORY>
  • Open http://localhost:8000 and visualize the operators running in the pipeline. Once all operators are marked green, the pipeline is complete. Related logs can be seen on the webpage by clicking on individual operators.

  • Open the configured PACS and verify the output; the register-dicom-results operator should have pushed the reconstructed volume.