Function TRITONSERVER_InferenceRequestNew

Function Documentation

TRITONSERVER_Error *TRITONSERVER_InferenceRequestNew(TRITONSERVER_InferenceRequest **inference_request, TRITONSERVER_Server *server, const char *model_name, const int64_t model_version)

Create a new inference request object.

Return

a TRITONSERVER_Error indicating success or failure.

Parameters
  • inference_request: Returns the new request object.

  • server: The inference server object.

  • model_name: The name of the model to use for the request.

  • model_version: The version of the model to use for the request. If -1 then the server will choose a version based on the model’s policy.
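
The following is a minimal sketch of how this call might be used. It assumes a server handle that was already created and started (for example via TRITONSERVER_ServerNew), a loaded model named "simple", and a header path of "tritonserver.h"; all three are illustrative assumptions, not part of this function's documentation.

```c
#include <stdio.h>
#include "tritonserver.h"

/* Sketch: create an inference request for an already-running server.
 * `server` is assumed to have been created elsewhere; "simple" is a
 * hypothetical model name used only for illustration. */
TRITONSERVER_Error*
create_request(
    TRITONSERVER_Server* server, TRITONSERVER_InferenceRequest** request)
{
  /* Passing -1 lets the server pick a version based on the model's policy. */
  TRITONSERVER_Error* err = TRITONSERVER_InferenceRequestNew(
      request, server, "simple", -1 /* model_version */);
  if (err != NULL) {
    fprintf(
        stderr, "failed to create inference request: %s\n",
        TRITONSERVER_ErrorMessage(err));
    return err; /* caller owns the error and must delete it */
  }
  /* ... add inputs and requested outputs, then submit the request ... */
  return NULL; /* success */
}
```

On success the new request object is returned through the first argument and must eventually be released with TRITONSERVER_InferenceRequestDelete; on failure the returned TRITONSERVER_Error describes the problem and should be freed with TRITONSERVER_ErrorDelete.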