Class InferGrpcStreamContext

Class Documentation

class InferGrpcStreamContext

InferGrpcStreamContext is the streaming instantiation of InferGrpcContext.

All synchronous and asynchronous requests sent from this context will be sent in the same stream.

Public Static Functions

static Error Create(std::unique_ptr<InferContext> *ctx, const std::string &server_url, const std::string &model_name, int64_t model_version = -1, bool verbose = false)

Create a streaming context that performs inference for a non-sequence model using the GRPC protocol.

Return

Error object indicating success or failure.

Parameters
  • ctx: Returns a new InferGrpcStreamContext object.

  • server_url: The inference server name and port.

  • model_name: The name of the model to use for inference.

  • model_version: The version of the model to use for inference, or -1 to indicate that the latest (i.e. highest version number) version should be used.

  • verbose: If true, generate verbose output when contacting the inference server.
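As a hedged sketch of how Create might be called for a non-sequence model: the header path, the `nic` namespace alias, the server address, and the model name below are assumptions based on common client-library conventions, not taken from this documentation.

```cpp
#include <iostream>
#include <memory>
#include <string>

#include "request_grpc.h"  // assumed client header; the install path may differ

// Assumed namespace alias for the client library.
namespace nic = nvidia::inferenceserver::client;

int main() {
  std::unique_ptr<nic::InferContext> ctx;

  // Create a streaming GRPC context for a non-sequence model.
  // "localhost:8001" and "my_model" are placeholder values.
  nic::Error err = nic::InferGrpcStreamContext::Create(
      &ctx,
      "localhost:8001",  // server_url: inference server name and GRPC port
      "my_model",        // model_name
      -1,                // model_version: -1 selects the latest version
      false);            // verbose
  if (!err.IsOk()) {
    std::cerr << "failed to create streaming context" << std::endl;
    return 1;
  }

  // All subsequent synchronous and asynchronous requests issued through
  // ctx travel over the same GRPC stream.
  return 0;
}
```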

static Error Create(std::unique_ptr<InferContext> *ctx, CorrelationID correlation_id, const std::string &server_url, const std::string &model_name, int64_t model_version = -1, bool verbose = false)

Create a streaming context that performs inference for a sequence model, using a given correlation ID and the GRPC protocol.

Return

Error object indicating success or failure.

Parameters
  • ctx: Returns a new InferGrpcStreamContext object.

  • correlation_id: The correlation ID to use for all inferences performed with this context. A value of 0 (zero) indicates that no correlation ID should be used.

  • server_url: The inference server name and port.

  • model_name: The name of the model to use for inference.

  • model_version: The version of the model to use for inference, or -1 to indicate that the latest (i.e. highest version number) version should be used.

  • verbose: If true, generate verbose output when contacting the inference server.
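The sequence-model overload differs only in the extra correlation_id argument. A hedged sketch follows; the header path, the `nic` namespace alias, the server address, the model name, and the correlation ID value are illustrative assumptions, not values from this documentation.

```cpp
#include <iostream>
#include <memory>
#include <string>

#include "request_grpc.h"  // assumed client header; the install path may differ

// Assumed namespace alias for the client library.
namespace nic = nvidia::inferenceserver::client;

int main() {
  std::unique_ptr<nic::InferContext> ctx;

  // A non-zero correlation ID ties every inference made through this
  // context to one sequence; 0 would mean "no correlation ID".
  nic::CorrelationID correlation_id = 42;  // placeholder value

  // Create a streaming GRPC context for a sequence model.
  // "localhost:8001" and "my_sequence_model" are placeholder values.
  nic::Error err = nic::InferGrpcStreamContext::Create(
      &ctx,
      correlation_id,
      "localhost:8001",      // server_url: inference server name and GRPC port
      "my_sequence_model",   // model_name
      -1,                    // model_version: -1 selects the latest version
      false);                // verbose
  if (!err.IsOk()) {
    std::cerr << "failed to create streaming context" << std::endl;
    return 1;
  }

  // Requests sent through ctx share one GRPC stream and carry the
  // correlation ID chosen above.
  return 0;
}
```

Because the correlation ID is fixed at creation time, one streaming context serves exactly one sequence; use a separate context (with a different ID) per concurrent sequence.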