Function TRITONSERVER_InferenceResponseOutput
Defined in File tritonserver.h
Function Documentation
TRITONSERVER_Error* TRITONSERVER_InferenceResponseOutput(TRITONSERVER_InferenceResponse* inference_response, const uint32_t index, const char** name, TRITONSERVER_DataType* datatype, const int64_t** shape, uint64_t* dim_count, uint32_t* batch_size, const void** base, size_t* byte_size, TRITONSERVER_MemoryType* memory_type, int64_t* memory_type_id, void** userp)

Get all information about an output tensor.
The tensor data is returned as the base pointer to the data and the size, in bytes, of the data. The caller does not own any of the returned values and must not modify or delete them. The lifetime of all returned values extends until ‘inference_response’ is deleted.
- Return
a TRITONSERVER_Error indicating success or failure.
- Parameters

inference_response : The response object.

index : The index of the output tensor, must be 0 <= index < count, where ‘count’ is the value returned by TRITONSERVER_InferenceResponseOutputCount.

name : Returns the name of the output.

datatype : Returns the type of the output.

shape : Returns the shape of the output.

dim_count : Returns the number of dimensions of the returned shape.

batch_size : Returns the batch size of the output as understood by Triton. If the model does not support batching in a way that Triton understands, the value will be 0.

base : Returns the tensor data for the output.

byte_size : Returns the size, in bytes, of the data.

memory_type : Returns the memory type of the data.

memory_type_id : Returns the memory type id of the data.

userp : The user-specified value associated with the buffer in TRITONSERVER_ResponseAllocatorAllocFn_t.
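
A minimal sketch of how this function might be used, following the signature documented above: the loop below walks every output of a completed response and prints its metadata. TRITONSERVER_InferenceResponseOutputCount is referenced above; TRITONSERVER_ErrorMessage, TRITONSERVER_ErrorDelete, and TRITONSERVER_DataTypeString are assumed from the same tritonserver.h API. This is an illustrative pattern, not code prescribed by this page.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#include "tritonserver.h"

/* Print the metadata of every output tensor in a completed response.
 * All pointers returned by TRITONSERVER_InferenceResponseOutput are
 * owned by the response and stay valid until the response is deleted. */
static void
PrintResponseOutputs(TRITONSERVER_InferenceResponse* response)
{
  uint32_t count = 0;
  TRITONSERVER_Error* err =
      TRITONSERVER_InferenceResponseOutputCount(response, &count);
  if (err != NULL) {
    fprintf(
        stderr, "output count failed: %s\n", TRITONSERVER_ErrorMessage(err));
    TRITONSERVER_ErrorDelete(err);
    return;
  }

  for (uint32_t idx = 0; idx < count; ++idx) {
    const char* name = NULL;
    TRITONSERVER_DataType datatype = TRITONSERVER_TYPE_INVALID;
    const int64_t* shape = NULL;
    uint64_t dim_count = 0;
    uint32_t batch_size = 0;
    const void* base = NULL;
    size_t byte_size = 0;
    TRITONSERVER_MemoryType memory_type = TRITONSERVER_MEMORY_CPU;
    int64_t memory_type_id = 0;
    void* userp = NULL;

    err = TRITONSERVER_InferenceResponseOutput(
        response, idx, &name, &datatype, &shape, &dim_count, &batch_size,
        &base, &byte_size, &memory_type, &memory_type_id, &userp);
    if (err != NULL) {
      fprintf(
          stderr, "output %u failed: %s\n", idx,
          TRITONSERVER_ErrorMessage(err));
      TRITONSERVER_ErrorDelete(err);
      continue;
    }

    printf(
        "output %u: name=%s, datatype=%s, batch_size=%u, byte_size=%zu, "
        "dims=[",
        idx, name, TRITONSERVER_DataTypeString(datatype), batch_size,
        byte_size);
    for (uint64_t d = 0; d < dim_count; ++d) {
      printf(d == 0 ? "%" PRId64 : ",%" PRId64, shape[d]);
    }
    printf("]\n");

    /* 'base' points at the raw tensor bytes; check 'memory_type' before
     * dereferencing, since for TRITONSERVER_MEMORY_GPU it is a device
     * address. 'userp' echoes back the value supplied by the response
     * allocator's TRITONSERVER_ResponseAllocatorAllocFn_t. */
    (void)base;
    (void)userp;
    (void)memory_type_id;
  }
}

Because the response owns every returned pointer, any tensor data needed beyond the response's lifetime must be copied out before the response is deleted with TRITONSERVER_InferenceResponseDelete.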