Class ServerStatusHttpContext
- Defined in File request.h
Inheritance Relationships

Base Type
- public nvidia::inferenceserver::client::ServerStatusContext (Class ServerStatusContext)
Class Documentation

class ServerStatusHttpContext : public nvidia::inferenceserver::client::ServerStatusContext

ServerStatusHttpContext is the HTTP instantiation of ServerStatusContext.
Public Functions

Error GetServerStatus(ServerStatus *status)

Contact the inference server and get status.
- Return: Error object indicating success or failure.
- Parameters:
  - status: Returns the status.
Public Static Functions

static Error Create(std::unique_ptr<ServerStatusContext> *ctx, const std::string &server_url, bool verbose = false)

Create a context that returns information about an inference server and all models on the server using the HTTP protocol.
- Return: Error object indicating success or failure.
- Parameters:
  - ctx: Returns a new ServerStatusHttpContext object.
  - server_url: The inference server name and port.
  - verbose: If true, generate verbose output when contacting the inference server.
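As an illustration, a minimal sketch of creating a status context for all models might look like the following. This is not taken from the library's examples: the include path, the placeholder URL "localhost:8000", and the Error::IsOk error-check idiom are assumptions; consult the client headers for the exact usage.

```cpp
#include <iostream>
#include <memory>
#include <string>

#include "request.h"  // client library header named above; install path may differ

namespace nic = nvidia::inferenceserver::client;

int main() {
  std::unique_ptr<nic::ServerStatusContext> ctx;

  // "localhost:8000" is a placeholder; use your server's host and HTTP port.
  nic::Error err = nic::ServerStatusHttpContext::Create(
      &ctx, "localhost:8000", /*verbose=*/false);
  if (!err.IsOk()) {  // assumed error-check idiom
    std::cerr << "failed to create status context" << std::endl;
    return 1;
  }
  return 0;
}
```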
static Error Create(std::unique_ptr<ServerStatusContext> *ctx, const std::string &server_url, const std::string &model_name, bool verbose = false)

Create a context that returns information about an inference server and one model on the server using the HTTP protocol.
- Return: Error object indicating success or failure.
- Parameters:
  - ctx: Returns a new ServerStatusHttpContext object.
  - server_url: The inference server name and port.
  - model_name: The name of the model to get status for.
  - verbose: If true, generate verbose output when contacting the inference server.
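Putting the two calls together, a hedged sketch of querying the status of a single model could look like this. The placeholder server URL and model name, the namespace assumed for the ServerStatus message, and the error-handling idiom are illustrative assumptions, not details confirmed by this page.

```cpp
#include <iostream>
#include <memory>
#include <string>

#include "request.h"  // client library header; install path may differ

namespace nic = nvidia::inferenceserver::client;

int main() {
  std::unique_ptr<nic::ServerStatusContext> ctx;

  // Create a context scoped to a single model. "localhost:8000" and
  // "my_model" are placeholders for your server address and model name.
  nic::Error err = nic::ServerStatusHttpContext::Create(
      &ctx, "localhost:8000", "my_model", /*verbose=*/true);
  if (!err.IsOk()) {  // assumed error-check idiom
    std::cerr << "failed to create status context" << std::endl;
    return 1;
  }

  // GetServerStatus fills in a ServerStatus message; the namespace of the
  // status type is an assumption here -- check the library headers.
  nvidia::inferenceserver::ServerStatus status;
  err = ctx->GetServerStatus(&status);
  if (!err.IsOk()) {
    std::cerr << "failed to get server status" << std::endl;
    return 1;
  }
  std::cout << "server status retrieved" << std::endl;
  return 0;
}
```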
-
Error