Clara Holoscan Deploy 0.7.4

Clara Deploy SDK API Overview

The Clara Deploy SDK API is provided to facilitate operations related to pipelines, pipeline jobs, and payloads. The API is based on Google's GRPC standard, which is both platform and language agnostic: clients for Clara Deploy SDK can be developed in the language of your choice, on the platform of your choice, targeting whatever platform you need.

GRPC-based solutions are provided as a set of remote procedure calls (RPCs) grouped together into services. Each RPC defines a request and a response message. GRPC messages are structured data definitions based on Google's binary wire-format encoding, Protocol Buffers.

When calling a service's RPC, the requester provides a populated request message to the RPC. The RPC, in turn, handles the communication with the remote service provider and returns a response message.

All Clara Deploy SDK response messages include a standard response header, which can be used to determine whether the call succeeded. Any additional information depends on the RPC called and the response message it returns. See the service listing below for details regarding each RPC.
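
For illustration, here is a minimal sketch of that convention in Python using the grpcio package, assuming client code has been generated from the SDK's Protocol Buffer definitions with grpcio-tools. The module, stub, message, and field names shown (clara_pb2, clara_pb2_grpc, JobsStub, JobsStatusRequest) and the server address are placeholders, not the SDK's published names; substitute the names produced by your own code generation. The same request-header/response-header pattern appears throughout the pseudo-code examples further below.

import grpc

import clara_pb2        # hypothetical module generated from the SDK's .proto files
import clara_pb2_grpc   # hypothetical module containing the generated service stubs


def get_job_status(target: str, job_id: str):
    # Open a channel to the Clara Deploy SDK API endpoint (address is an assumption).
    with grpc.insecure_channel(target) as channel:
        client = clara_pb2_grpc.JobsStub(channel)

        # Every request carries the standard request header.
        request = clara_pb2.JobsStatusRequest(
            header=clara_pb2.RequestHeader(user_agent="custom-client-example"),
            job_id=job_id,
        )

        response = client.Status(request)

        # Every response carries the standard response header; a negative
        # code indicates the call failed.
        if response.header.code < 0:
            raise RuntimeError(f"RPC failed with error code {response.header.code}")

        return response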

Services

The Clara Deploy SDK provides three separate, but related, services: Pipelines Service, Jobs Service, and Payloads Service. Those familiar with object-oriented software development will recognize the practice of separating concerns into distinct interfaces.

Pipelines Service

Provides functionality related to pipeline definitions, such as the creation, enumeration, and removal of pipeline definitions.

  • Create

    Requests the creation of a new pipeline, based on a definition provided to the service.

  • Details

    Requests details of a pipeline.

  • List

    Requests a listing of all pipelines known by the service.

  • Remove

    Requests the removal of a pipeline definition from the service (see the sketch following this list).

  • Update

    Requests an update to a pipeline definition known by the service.
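
The Create and List RPCs are demonstrated in the Examples section below. As a complement, here is a minimal Python sketch of the Remove RPC using grpcio, assuming client code generated from the SDK's Protocol Buffer definitions; the module, stub, message, and field names (pipelines_pb2, PipelinesStub, PipelinesRemoveRequest) and the server address are assumptions, so substitute the names from your own generated code.

import grpc

import pipelines_pb2        # hypothetical generated messages module
import pipelines_pb2_grpc   # hypothetical generated stubs module


def remove_pipeline(target: str, pipeline_id: str) -> None:
    with grpc.insecure_channel(target) as channel:
        client = pipelines_pb2_grpc.PipelinesStub(channel)

        # Identify the pipeline to remove and include the standard header.
        request = pipelines_pb2.PipelinesRemoveRequest(
            header=pipelines_pb2.RequestHeader(user_agent="custom-client-example"),
            pipeline_id=pipeline_id,
        )

        response = client.Remove(request)
        if response.header.code < 0:
            raise RuntimeError(
                f"Failed to remove pipeline {pipeline_id}; "
                f"error code {response.header.code}"
            )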

Jobs Service

Provides functionality related to pipeline jobs, such as the creation, starting, and status inspection of pipeline jobs.

  • Cancel

    Requests cancellation of a pending or running job by its identifier.

  • Create

    Requests creation of a new job based on a known pipeline.

  • List

    Requests a filtered list of all known jobs, or a list of all running jobs when no filter is provided (see the sketch following this list).

  • Start

    Requests starting a job.

  • Status

    Requests the status of a known job by its identifier.

  • ClaraStop

    Requests that all pipeline services deployed during pipeline job initialization be halted and their associated resources released.
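
Job creation, starting, and status queries are demonstrated in the Examples section below. For the List RPC, here is a minimal Python sketch using grpcio, assuming client code generated from the SDK's Protocol Buffer definitions. The module, stub, and message names (jobs_pb2, JobsStub, JobsListRequest), the assumption that List streams one response message per job, and the server address are all placeholders to be replaced with the names and shapes from your own generated code.

import grpc

import jobs_pb2        # hypothetical generated messages module
import jobs_pb2_grpc   # hypothetical generated stubs module


def list_running_jobs(target: str) -> None:
    with grpc.insecure_channel(target) as channel:
        client = jobs_pb2_grpc.JobsStub(channel)

        # With no filter provided, the service lists running jobs.
        request = jobs_pb2.JobsListRequest(
            header=jobs_pb2.RequestHeader(user_agent="custom-client-example"),
        )

        # Assumes List is a server-streaming RPC yielding one message per job;
        # the exact fields of each message depend on the generated code, so the
        # whole message is printed here.
        for response in client.List(request):
            print(response)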

Payloads Service

Provides functionality related to pipeline payloads, such as the uploading and downloading of data, and the enumeration of a payload's contents.

  • Delete

    Requests the deletion of a known payload by its identifier (see the sketch following this list).

  • Details

    Requests the details (file listing) of a known payload by its identifier.

  • Download

    Requests the download of a blob (file) from a known payload by its identifier and path.

  • Upload

    Requests the upload of a blob (file) to a known payload.
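
Payload upload, details, and download are demonstrated in the Examples section below. For the Delete RPC, here is a minimal Python sketch using grpcio, assuming client code generated from the SDK's Protocol Buffer definitions; the module, stub, message, and field names (payloads_pb2, PayloadsStub, PayloadsDeleteRequest) and the server address are assumptions to be replaced with your own generated names.

import grpc

import payloads_pb2        # hypothetical generated messages module
import payloads_pb2_grpc   # hypothetical generated stubs module


def delete_payload(target: str, payload_id: str) -> None:
    with grpc.insecure_channel(target) as channel:
        client = payloads_pb2_grpc.PayloadsStub(channel)

        # Identify the payload to delete and include the standard header.
        request = payloads_pb2.PayloadsDeleteRequest(
            header=payloads_pb2.RequestHeader(user_agent="custom-client-example"),
            payload_id=payload_id,
        )

        response = client.Delete(request)
        if response.header.code < 0:
            raise RuntimeError(
                f"Failed to delete payload {payload_id}; "
                f"error code {response.header.code}"
            )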

Examples

All examples below are written in a kind of "pseudo-code", meaning that none of them are expected to work as-is. Each example is meant to be interpreted and rewritten in your language of choice using the freely downloadable GRPC tools (see Useful Links below).

Creating a pipeline by streaming its definition files to the Pipelines service:

// Create a pipelines service GRPC client.
var client = PipelinesServiceClient(connection_string)

// Create a request stream object.
var request_stream = client.create()

// Add each file that composes the pipeline definition
// to the request stream.
for (var filename in pipeline_filename_list)
{
    // Create a 64 kilobyte (chunk size limit) array.
    var buffer = new char[64 * 1024 / sizeof(char)];

    // Open the file for reading because we need the
    // content of the file.
    var reader = File.openRead(filename)

    int read = 0;

    // Read up to 64 KiB of data from the file. The GRPC message
    // definition assumes the native encoding of your language
    // of choice, and will handle re-encoding the content as part
    // of the send to the server.
    while ((read = reader.read(buffer, 0, buffer.Length)) > 0)
    {
        // Streams are composed of a series of messages, or chunks.
        // Create a `PipelineCreateRequest` message for each
        // chunk of the stream.
        var request = PipelineCreateRequest();

        // Create a `PipelineDefinitionFile` message and populate
        // it with the data read this time around the loop. It is
        // important to use the same `path` value for content from
        // the same file, when a file exceeds the chunk size limit.
        request.definition = PipelineDefinitionFile()
        request.header = RequestHeader()
        request.header.user_agent = "custom-client-example"
        request.definition.path = filename
        request.definition.content = String(buffer, 0, read)

        request_stream.append(request);
    }
}

var response = request_stream.close()

Listing all pipelines known to the service:

// Create a pipelines service GRPC client.
var client = PipelinesServiceClient(connection_string)

// Create a pipelines list request object.
var request = PipelinesListRequest()
request.header = RequestHeader()
request.header.user_agent = "custom-client-example"

// Request data from Clara Deploy SDK.
var response_stream = client.list(request)

// Loop over the response messages.
// Each pipeline registered with Clara Deploy SDK will
// be described in a separate message.
while (var response = response_stream.read_next())
{
    print("Pipeline {response.details.name} ({response.details.pipeline_id})\n")
}

Creating a job from a known pipeline, uploading its input payload, and starting the job:

// Create a jobs service GRPC client.
var jobs_client = JobsServiceClient(connection_string)

// Create a jobs create request object.
var create_request = JobsCreateRequest()
create_request.header = RequestHeader()
create_request.header.user_agent = "custom-client-example"

// Assign the local `job_pipeline_id` value to the request's `pipeline_id`.
create_request.pipeline_id = job_pipeline_id

// Assign the local `job_name` value to the request's `name`.
create_request.name = job_name;

// Use the client to create the job, capturing the response from the client.
var create_response = jobs_client.Create(create_request)

// Clara Deploy SDK will send a `ResponseHeader` along with its response.
// When the header's response code value is less than zero, an error has occurred.
if (create_response.header.code < 0)
    throw error("Failed to create job. Clara Deploy SDK responded with an error ({create_response.header.code}).")

// Capture the job and payload identifiers.
var job_id = create_response.job_id
var payload_id = create_response.payload_id

// Since the pipeline has data it needs as input,
// we need to upload that data to the payload prior to
// starting the job.
var payloads_client = PayloadsServiceClient(connection_string)

// Use the client to create an upload request stream.
var request_stream = payloads_client.upload()

// Allocate a buffer for copying data to each stream chunk.
// We'll use 64 KiB because that's very close to the request
// message size limit.
var buffer = new byte[64 * 1024]

// Loop over all of the files we need to push to
// Clara Deploy SDK prior to starting the job.
for (var local_file_path in job_input_files)
{
    // Open the file and get its size.
    var reader = file.open_read(local_file_path)
    var file_size = file.get_size(local_file_path)

    var read = 0;

    // Read the file in 64 KiB sized chunks, and send each one
    // to Clara Deploy SDK to store in the pipeline's input folder.
    while ((read = reader.read(buffer, 0, buffer.length)) > 0)
    {
        // Clara Deploy SDK only wants the name of the file. Any
        // included path information could cause issues for the
        // pipeline operators.
        var file_name = file.get_name(local_file_path)

        // Create the request message. Notice that at least one
        // message per input file will be created. Multiple messages
        // will be created for larger files.
        // Clara Deploy SDK will reassemble larger files based on
        // the file name in the request's `details` message.
        var request = PayloadUploadRequest()
        request.header = RequestHeader()
        request.header.user_agent = "custom-client-example"
        request.payload_id = payload_id
        request.details = PayloadFileDetails()
        request.details.size = file_size
        request.details.name = file_name
        request.data = buffer.range(0, read)

        request_stream.append(request)
    }
}

// Close the request to inform Clara Deploy SDK that no more
// input data will be coming.
request_stream.close()

// Now it is time to actually start the job.
// Clara Deploy SDK will not start the job immediately, but
// will queue the job if there are currently insufficient
// resources available.
var start_request = JobsStartRequest()
start_request.header = RequestHeader()
start_request.header.user_agent = "custom-client-example"
start_request.job_id = job_id

var start_response = jobs_client.Start(start_request)
if (start_response.header.code < 0)
    throw error("Failed to start job ({job_id}). Clara Deploy SDK responded with an error ({start_response.header.code}).")

Querying the status of a job by its identifier:

// Create a jobs service GRPC client.
var client = JobsServiceClient(connection_string)

// Create a jobs service status request object.
var request = JobsStatusRequest()
request.header = RequestHeader()
request.header.user_agent = "custom-client-example"
request.job_id = my_job_id

// Send the request to Clara Deploy SDK and
// receive a response message back.
var response = client.Status(request)

// Check the response header for an error code.
if (response.header.code < 0)
    throw error("Failed to get job status ({my_job_id}). Clara Deploy SDK responded with error ({response.header.code}).")

// Print out the details of the job status response.
print("job status\n")
print(" name: {response.name}\n")
print(" id: {response.job_id}\n")
print(" payload: {response.payload_id}\n")
print(" pipeline: {response.pipeline_id}\n")
print(" state: {response.state}\n")
print(" status: {response.status}\n")

if (response.messages.count > 0)
{
    print(" messages:\n")
    for (var message in response.messages)
    {
        print("  {message}\n")
    }
}

Listing the files contained in a payload:

// Create a payloads service GRPC client.
var client = PayloadsServiceClient(connection_string)

// Create a payloads details request object.
var request = PayloadsDetailsRequest()
request.header = RequestHeader()
request.header.user_agent = "custom-client-example"
request.payload_id = my_payload_id

// Request the data from Clara Deploy SDK.
var response_stream = client.Details(request)

// Loop over each response message sent.
// Each file in the payload will have a corresponding
// message in the response.
while (var response = response_stream.read_next())
{
    print(" {response.file.name} {response.file.size}\n")
}

Downloading a file from a payload:

// Create a payloads service GRPC client.
var client = PayloadsServiceClient(connection_string)

// Create a payloads download request object.
// The payload's identifier and the name of the file
// contained by the payload are required to download
// a payload file's contents.
// See the example above to see how to get the names
// of files contained in a payload.
var request = PayloadsDownloadRequest()
request.header = RequestHeader()
request.header.user_agent = "custom-client-example"
request.payload_id = my_payload_id
request.name = payload_file_name

var response_stream = client.Download(request)

// Create a local file for the downloaded data.
var writer = file.create(local_file_name)

// Loop over the stream response messages. Each
// message will be a chunk of the file. Appending
// each in order will restore the file locally.
while (var response = response_stream.read_next())
{
    // Check each response's header for an error code.
    if (response.header.code < 0)
        throw error("Failed to download file. Clara Deploy SDK returned error ({response.header.code}).")

    writer.write(response.data)
}

writer.close()

© Copyright 2018-2020, NVIDIA Corporation. All rights reserved. Last updated on Feb 1, 2023.