NVIDIA Clara Pipeline Driver Guidance

The NVIDIA Clara Pipeline Driver, or WFD, is an essential piece of pipeline orchestration. The WFD is provided as a library to be included as part of your pipeline stage’s worker process.

  1. Bind the library to your source code. In C or C++ this is as easy as adding libnvclara.so to your makefile and including clara.h in your source code. If you’re using a language like Python, Java, Node.js, or C#, use that language’s method of binding compiled binaries:

    • Python uses ctypes. For more information, see the Python Standard Library documentation. Additionally, NVIDIA provides a pre-generated Python library (see the Python APIs section).

  2. Once your process has started, call nvidia_clara_wfd__create with the appropriate function pointers, and keep a handle on the wfd_out value.

  3. Have your code do whatever it needs to do to get started, then have it wait on the callbacks from WFD.

  4. WFD calls your provided functions in a specific event order: startup, prepare, execute, cleanup.

  5. When the process has completed, call nvidia_clara_wfd__release with the reference you took from nvidia_clara_wfd__create.

  6. Optionally, your process can block the current thread and wait for callback completion to occur by calling nvidia_clara_wfd__wait_for_completion and passing in the WFD pointer from nvidia_clara_wfd__create. This blocks the calling thread until all other life-cycle events have completed.
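
The steps above combine into a small skeleton. The sketch below registers all four life-cycle callbacks and then blocks until completion. It is illustrative only: the callback parameter order of nvidia_clara_wfd__create (startup, prepare, execute, cleanup, error, wfd_out) is inferred from the C examples later in this document, and the startup, prepare, and cleanup callback signatures shown here are assumptions; only the execute and error callback signatures appear in those examples.

/* Illustrative sketch only: the start-up, prepare, and clean-up callback
   signatures below are assumptions; the nvidia_clara_wfd__create argument
   order is inferred from the C examples later in this document. */
#include <stdio.h>
#include "clara.h"

// Assumed start-up callback: no pipeline-specific data is available yet.
int my_start_up_callback(void) { printf("start up\n"); return 0; }

// Assumed prepare callback: all stages are initialized and job inputs are supplied.
int my_prepare_callback(void) { printf("prepare\n"); return 0; }

// Execute callback, using the signature shown in the examples below.
int my_execute_callback(nvidia_clara_payload *payload) { printf("execute\n"); return 0; }

// Assumed clean-up callback: release working resources, but keep results.
int my_clean_up_callback(void) { printf("clean up\n"); return 0; }

int main(int argc, char *argv[])
{
    nvidia_clara_wfd *wfd;

    // Register the life-cycle callbacks; no error callback in this sketch.
    if (nvidia_clara_wfd__create(my_start_up_callback,
                                 my_prepare_callback,
                                 my_execute_callback,
                                 my_clean_up_callback,
                                 NULL,
                                 &wfd) != 0)
    {
        return -1;
    }

    // Block until startup, prepare, execute, and cleanup have all completed.
    nvidia_clara_wfd__wait_for_completion(wfd);

    // Release the WFD instance created above.
    nvidia_clara_wfd__release(wfd);
    return 0;
}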

WFD provides life-cycle orchestration for your process. Once the WFD instance has been created, your process receives event callbacks in the following order:

  1. Start Up

    This is an early initialization phase where your process can perform any required early-stage setup. No pipeline-specific data or information is available at this time.

  2. Prepare

    This is a pre-execute phase. By this point in the life-cycle all other pipeline stages have been initialized, and the creator of the job (pipeline instance) has supplied all required inputs.

    Stage inputs are not yet available.

  3. Execute

    This is the execution phase where your process most likely does the majority of its work.

    Stage inputs are available, and output streams are awaiting results. When Clara Info indicates that a render service is available, properly prepared study data can be published to the service for 3D visualization. To publish study data, see Publishing Study Data for Visualization below.

    Once this phase completes, the next stage is told to execute. If this is the last stage in the pipeline, the job creator is notified that its job has completed.

  4. Clean Up

    This phase occurs after the execute phase has completed and the job has been notified that this stage is complete. Release or free any resources your stage no longer needs.

    Do not delete your results, as other stages or the job creator may need them.

example:

/* The example provided is for the Clara Pipeline Driver C Client. */
#include <stdio.h>
#include "clara.h"

// Execute callback provided to WFD for handling execution.
int my_execute_callback(nvidia_clara_payload *payload)
{
    nvidia_clara_payload_stream **streams = NULL;
    int streams_count = 0;
    result r = 0;

    r = nvidia_clara_payload__get_inputs(payload, &streams, &streams_count);
    if (r != 0)
    {
        printf("error: Failed to read input streams from payload (%d)\n", r);
        return -1;
    }

    printf("Read %d input stream(s) from the payload.\n", streams_count);

    r = nvidia_clara_payload__get_outputs(payload, &streams, &streams_count);
    if (r != 0)
    {
        printf("error: Failed to read output streams from payload (%d)\n", r);
        return -1;
    }

    printf("Read %d output stream(s) from the payload.\n", streams_count);

    return 0;
}

// Error callback provided to WFD for reporting errors and informational messages.
void error_callback(int code, char *message, char *source, int is_fatal)
{
    const char _error[] = "error";
    const char _fatal[] = "fatal";
    const char _info[] = "info";
    const char *prefix = NULL;

    if (is_fatal)
    {
        prefix = _fatal;
    }
    else if (code == 0)
    {
        prefix = _info;
    }
    else
    {
        prefix = _error;
    }

    printf("%s: [%s (%d)] %s\n", prefix, source, code, message);
}

// Entry-point for the application.
int main(int argc, char *argv[])
{
    nvidia_clara_wfd *wfd;

    // Create the Pipeline driver instance; this will start the client life-cycle.
    if (nvidia_clara_wfd__create(NULL, NULL, my_execute_callback, NULL, error_callback, &wfd) == 0)
    {
        // Block the current thread and wait for the pipeline driver to complete.
        nvidia_clara_wfd__wait_for_completion(wfd);

        // Clean up our WFD allocation.
        nvidia_clara_wfd__release(wfd);

        // Return 0 (success!)
        return 0;
    }

    // Return -1 (error)
    return -1;
}

Publishing Study Data for Visualization

NVIDIA Clara supports rendering of specific kinds of study data as 3D visualizations. Visualizations are accessed through the web-based dashboard. While pipeline jobs do not have direct access to visualization services, they can publish study data to the Clara-provided Render Server.

Publishing study data is done as part of the Execute phase of a pipeline stage. If a pipeline stage has or produces content to be published for visualization, the nvidia_clara_study_data__publish API can make the study available to the Render Server for visualization.

example:

/* The example provided is for the Clara Pipeline Driver C Client. */
#include <stdio.h>
#include "clara.h"

// Execute callback provided to WFD for handling execution.
int publish_stage_execute_callback(nvidia_clara_payload *payload)
{
    nvidia_clara_wfd *wfd = NULL;
    nvidia_clara_study_data *study = NULL;
    nvidia_clara_payload_stream **streams = NULL;
    int streams_count = 0;
    int success = -1;

    // First get the WFD reference from the payload.
    if (nvidia_clara_payload__get_wfd(payload, &wfd) == 0)
    {
        // Next use the WFD reference to create a study.
        if (nvidia_clara_study_data__create(wfd, &study) == 0)
        {
            // Link all of the inputs to the study for publication.
            if (nvidia_clara_payload__get_inputs(payload, &streams, &streams_count) == 0)
            {
                int copied = 0;

                for (int i = 0; i < streams_count; i += 1)
                {
                    if (nvidia_clara_study_data__add_stream(study, streams[i]) == 0)
                    {
                        copied += 1;
                    }
                    else
                    {
                        printf("Failed to copy input stream %d to study.\n", i);
                    }
                }

                printf("Successfully copied %d input stream(s) to study for publication.\n", copied);

                // Publish the study for visualization by Clara Render Server.
                if (nvidia_clara_study_data__publish(study) == 0)
                {
                    success = 0;
                }
                else
                {
                    printf("Failed to publish study to visualization service.\n");
                }
            }
            else
            {
                printf("Failed to read input stream data from payload.\n");
            }

            // Finally release the study allocation.
            nvidia_clara_study_data__release(study);
        }
    }

    return success;
}

// Entry-point for the application.
int main(int argc, char *argv[])
{
    nvidia_clara_wfd *wfd;

    // Create the Pipeline driver instance; this will start the client life-cycle.
    if (nvidia_clara_wfd__create(NULL, NULL, publish_stage_execute_callback, NULL, NULL, &wfd) == 0)
    {
        // Wait for the pipeline driver to complete.
        nvidia_clara_wfd__wait_for_completion(wfd);

        // Clean up our WFD allocation.
        nvidia_clara_wfd__release(wfd);

        // Return 0 (success!)
        return 0;
    }

    // Return -1 (error)
    return -1;
}

Clara Info

The NVIDIA Clara Pipeline Driver provides a query interface for discovering the state of the current pipeline job and/or stage. The query interface is provided by nvidia_clara_info__read, with the nvidia_clara_info info_kind parameter determining the type of information returned.

The API provides the following information:

  • Job Identifier

    This is the unique identifier for the currently running pipeline job. Unique identifiers are 32-character hexadecimal strings which represent 128-bit values.

  • Job Name

    This is the human-readable name given to the currently running pipeline job. This value is intended to provide an easy reference point for humans when looking at user interfaces or log files.

  • Stage Name

    This is the human-readable name given to the currently running pipeline stage. This value is intended to provide an easy reference point for humans when looking at user interfaces or log files.

  • Stage Timeout

    This is a string representing the number of seconds the currently running pipeline stage has been allocated for completion. The stage can be terminated if it exceeds this value. If this value is null or empty, then there is no assigned timeout.

  • TensorRT Inference Server

    This is the URL of the TensorRT Inference Server (TRTIS) associated with the currently running pipeline job. If the value is null or empty, then TRTIS is unavailable and no TRTIS instance is associated with the currently running pipeline job.

  • Render Service Availability

    This returns a Boolean value which indicates if the currently running pipeline stage has access to publish study data to a visualization service.

NOTE: This API is in its pre-alpha stages, and is subject to changes in future releases of NVIDIA Clara Pipeline Driver.

example:

/* The example provided is for the Clara Pipeline Driver C Client. */
#include <stdio.h>
#include "clara.h"

// Entry-point for the application.
int main(int argc, char *argv[])
{
    char allocation[sizeof(strbuf) + sizeof(char) * 4096]; // 4KiB allocation to use as a buffer.
    strbuf *buffer = (strbuf *)allocation;                 // Utilize our local buffer as a string buffer to query information from Clara.
    result r = 0;

    buffer->size = 4096;

    /* Query Clara to discover the unique identifier for the current job. */
    if ((r = nvidia_clara_info__read(NVIDIA_CLARA_INFO_JOB_ID, buffer)) != 0)
    {
        printf("Querying Clara for Job ID failed with error: %d\n", r);
    }
    printf("The current Job ID is %s.\n", buffer);

    /* Query Clara to discover the name of the current job. */
    if ((r = nvidia_clara_info__read(NVIDIA_CLARA_INFO_JOB_NAME, buffer)) != 0)
    {
        printf("Querying Clara for Job Name failed with error: %d\n", r);
    }
    printf("The current Job Name is %s.\n", buffer);

    /* Query Clara to discover the name of the current job stage. */
    if ((r = nvidia_clara_info__read(NVIDIA_CLARA_INFO_STAGE_NAME, buffer)) != 0)
    {
        printf("Querying Clara for Stage Name failed with error: %d\n", r);
    }
    printf("The current Stage Name is %s.\n", buffer);

    /* Query Clara to discover how long the current job stage has to complete its work. */
    if ((r = nvidia_clara_info__read(NVIDIA_CLARA_INFO_STAGE_TIMEOUT, buffer)) != 0)
    {
        printf("Querying Clara for Stage Timeout failed with error: %d\n", r);
    }
    printf("The current Stage Timeout is %s seconds.\n", buffer);

    /* Query Clara to discover if TRTIS is available to the current job stage. */
    if ((r = nvidia_clara_info__read(NVIDIA_CLARA_INFO_TRTIS_SERVICE, buffer)) != 0)
    {
        printf("Querying Clara for TRTIS URL failed with error: %d\n", r);
    }
    printf("The current TensorRT Inference Server URL is %s.\n", buffer);

    /* Query Clara to discover if the current job stage supports study publication. */
    /* Publication is available when the query result is 0. */
    if ((r = nvidia_clara_info__read(NVIDIA_CLARA_INFO_RENDER_SERVICE, NULL)) == 0)
    {
        printf("Study publication is available.\n");
    }
    else
    {
        printf("Study publication is not available.\n");
    }

    return 0;
}

© Copyright 2018-2019, NVIDIA Corporation. All rights reserved. Last updated on Feb 1, 2023.