Inference Models Resource

This resource corresponds to the NIM Proxy microservice's v1/models endpoint, which lists the models available for inference.

Sync Models Resource

class nemo_microservices.lib.custom_resources.inference.ModelsResource(client: NeMoMicroservices)

Bases: SyncAPIResource

property with_raw_response: ModelsResourceWithRawResponse

This property can be used as a prefix for any HTTP method call to return the raw response object instead of the parsed content.

For more information, see https://docs.nvidia.com/nemo/microservices/latest/pysdk/index.html#accessing-raw-response-data-e-g-headers
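
A minimal sketch of using .with_raw_response, assuming the resource is exposed on the client as client.inference.models and using a placeholder base URL (both are assumptions for illustration, not confirmed by this page):

from nemo_microservices import NeMoMicroservices

# Placeholder base URL; point this at your NeMo Microservices deployment.
client = NeMoMicroservices(base_url="http://nemo.example.com")

# Prefixing the method call with .with_raw_response returns the raw HTTP
# response instead of the parsed content.
response = client.inference.models.with_raw_response.list()
print(response.headers)    # inspect raw HTTP headers
models = response.parse()  # parse the body into the usual ModelListResponse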

property with_streaming_response: ModelsResourceWithStreamingResponse

An alternative to .with_raw_response that doesn’t eagerly read the response body.

For more information, see https://docs.nvidia.com/nemo/microservices/latest/pysdk/index.html#with_streaming_response
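
A sketch of the streaming variant under the same assumptions (client attribute path and base URL are placeholders). Unlike .with_raw_response, the body is not read until you ask for it:

from nemo_microservices import NeMoMicroservices

client = NeMoMicroservices(base_url="http://nemo.example.com")  # placeholder

with client.inference.models.with_streaming_response.list() as response:
    print(response.headers)  # headers are available before the body is read
    body = response.read()   # the response body is read here, on demand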

list(
*,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) → ModelListResponse

Returns a list of models available for inference.

Note: this endpoint doesn’t support pagination or filtering.
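
A minimal sketch of calling list(), assuming the resource is exposed as client.inference.models; the base URL is a placeholder, and the .data / .id field names are assumptions based on the OpenAI-style /v1/models schema:

from nemo_microservices import NeMoMicroservices

client = NeMoMicroservices(base_url="http://nemo.example.com")  # placeholder

# Optional per-request overrides such as timeout are keyword-only.
models = client.inference.models.list(timeout=30.0)
for model in models.data:  # assumed field names, see note above
    print(model.id)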

create_from_dict(data: dict[str, object]) → object

Async Models Resource

class nemo_microservices.lib.custom_resources.inference.AsyncModelsResource(client: AsyncNeMoMicroservices)

Bases: AsyncAPIResource

property with_raw_response: AsyncModelsResourceWithRawResponse

This property can be used as a prefix for any HTTP method call to return the raw response object instead of the parsed content.

For more information, see https://docs.nvidia.com/nemo/microservices/latest/pysdk/index.html#accessing-raw-response-data-e-g-headers

property with_streaming_response: AsyncModelsResourceWithStreamingResponse

An alternative to .with_raw_response that doesn’t eagerly read the response body.

For more information, see https://docs.nvidia.com/nemo/microservices/latest/pysdk/index.html#with_streaming_response

async list(
*,
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) → ModelListResponse

Returns a list of models available for inference.

Note: this endpoint doesn’t support pagination or filtering.
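
The async client mirrors the sync interface; calls are awaited. A minimal sketch, again assuming the client.inference.models attribute path and a placeholder base URL:

import asyncio

from nemo_microservices import AsyncNeMoMicroservices


async def main() -> None:
    # Placeholder base URL; the attribute path is an assumption.
    client = AsyncNeMoMicroservices(base_url="http://nemo.example.com")
    models = await client.inference.models.list()
    print(models)


asyncio.run(main())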

create_from_dict(data: dict[str, object]) → object