nemo_microservices.resources.v2.inference.gateway.openai#
Module Contents#
Classes#
- AsyncOpenAIResource
- AsyncOpenAIResourceWithRawResponse
- AsyncOpenAIResourceWithStreamingResponse
- OpenAIResource
- OpenAIResourceWithRawResponse
- OpenAIResourceWithStreamingResponse
API#
- class nemo_microservices.resources.v2.inference.gateway.openai.AsyncOpenAIResource()#
Bases: nemo_microservices._resource.AsyncAPIResource
Initialization
- async delete(
- trailing_uri: str,
- *,
- extra_headers: nemo_microservices._types.Headers | None = None,
- extra_query: nemo_microservices._types.Query | None = None,
- extra_body: nemo_microservices._types.Body | None = None,
- timeout: float | httpx.Timeout | None | nemo_microservices._types.NotGiven = not_given,
- )
Proxy requests to OpenAI-compatible inference endpoints.
This is a stub implementation that returns request details.
Args:
- extra_headers: Send extra headers.
- extra_query: Add additional query parameters to the request.
- extra_body: Add additional JSON properties to the request.
- timeout: Override the client-level default timeout for this request, in seconds.
- async get(
- trailing_uri: str,
- *,
- extra_headers: nemo_microservices._types.Headers | None = None,
- extra_query: nemo_microservices._types.Query | None = None,
- extra_body: nemo_microservices._types.Body | None = None,
- timeout: float | httpx.Timeout | None | nemo_microservices._types.NotGiven = not_given,
- )
Proxy requests to OpenAI-compatible inference endpoints.
This is a stub implementation that returns request details.
Args:
- extra_headers: Send extra headers.
- extra_query: Add additional query parameters to the request.
- extra_body: Add additional JSON properties to the request.
- timeout: Override the client-level default timeout for this request, in seconds.
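A minimal usage sketch for this method. It assumes the async client class is named AsyncNeMoMicroservices, that the resource is reachable as client.v2.inference.gateway.openai (inferred from the module path), and that "models" is a valid trailing URI on your deployment; per the docstring above, the current implementation is a stub that returns request details.

```python
import asyncio

from nemo_microservices import AsyncNeMoMicroservices  # assumed async client class name


async def main() -> None:
    # Placeholder base_url; point this at your NeMo Microservices deployment.
    client = AsyncNeMoMicroservices(base_url="http://nemo.example.com")

    # "models" is an illustrative trailing URI for an OpenAI-compatible listing endpoint.
    result = await client.v2.inference.gateway.openai.get("models")
    print(result)


asyncio.run(main())
```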
- async patch(
- trailing_uri: str,
- *,
- extra_headers: nemo_microservices._types.Headers | None = None,
- extra_query: nemo_microservices._types.Query | None = None,
- extra_body: nemo_microservices._types.Body | None = None,
- timeout: float | httpx.Timeout | None | nemo_microservices._types.NotGiven = not_given,
- )
Proxy requests to OpenAI-compatible inference endpoints.
This is a stub implementation that returns request details.
Args:
- extra_headers: Send extra headers.
- extra_query: Add additional query parameters to the request.
- extra_body: Add additional JSON properties to the request.
- timeout: Override the client-level default timeout for this request, in seconds.
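A hedged sketch of a PATCH through the gateway. The trailing URI and payload are hypothetical, the client class name AsyncNeMoMicroservices is assumed, and the plain float timeout simply overrides the client default for this one call.

```python
import asyncio

from nemo_microservices import AsyncNeMoMicroservices  # assumed async client class name


async def main() -> None:
    client = AsyncNeMoMicroservices(base_url="http://nemo.example.com")  # placeholder URL

    # Hypothetical trailing URI and body; timeout=60.0 overrides the client-level default.
    result = await client.v2.inference.gateway.openai.patch(
        "fine_tuning/jobs/job-123",
        extra_body={"metadata": {"team": "example"}},
        timeout=60.0,
    )
    print(result)


asyncio.run(main())
```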
- async post(
- trailing_uri: str,
- *,
- extra_headers: nemo_microservices._types.Headers | None = None,
- extra_query: nemo_microservices._types.Query | None = None,
- extra_body: nemo_microservices._types.Body | None = None,
- timeout: float | httpx.Timeout | None | nemo_microservices._types.NotGiven = not_given,
- )
Proxy requests to OpenAI-compatible inference endpoints.
This is a stub implementation that returns request details.
Args:
- extra_headers: Send extra headers.
- extra_query: Add additional query parameters to the request.
- extra_body: Add additional JSON properties to the request.
- timeout: Override the client-level default timeout for this request, in seconds.
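Because the method exposes no dedicated body parameter, this sketch passes the JSON payload through extra_body. The trailing URI, model name, and AsyncNeMoMicroservices class name are illustrative assumptions.

```python
import asyncio

from nemo_microservices import AsyncNeMoMicroservices  # assumed async client class name


async def main() -> None:
    client = AsyncNeMoMicroservices(base_url="http://nemo.example.com")  # placeholder URL

    # Illustrative chat-completions proxy call; the payload travels in extra_body.
    result = await client.v2.inference.gateway.openai.post(
        "chat/completions",
        extra_body={
            "model": "meta/llama-3.1-8b-instruct",  # hypothetical model name
            "messages": [{"role": "user", "content": "Hello"}],
        },
    )
    print(result)


asyncio.run(main())
```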
- async put(
- trailing_uri: str,
- *,
- extra_headers: nemo_microservices._types.Headers | None = None,
- extra_query: nemo_microservices._types.Query | None = None,
- extra_body: nemo_microservices._types.Body | None = None,
- timeout: float | httpx.Timeout | None | nemo_microservices._types.NotGiven = not_given,
- )
Proxy requests to OpenAI-compatible inference endpoints.
This is a stub implementation that returns request details.
Args:
- extra_headers: Send extra headers.
- extra_query: Add additional query parameters to the request.
- extra_body: Add additional JSON properties to the request.
- timeout: Override the client-level default timeout for this request, in seconds.
- property with_raw_response: nemo_microservices.resources.v2.inference.gateway.openai.AsyncOpenAIResourceWithRawResponse#
This property can be used as a prefix for any HTTP method call to return the raw response object instead of the parsed content.
For more information, see https://docs.nvidia.com/nemo/microservices/latest/pysdk/index.html#accessing-raw-response-data-e-g-headers
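A sketch of the raw-response prefix on the async resource. The AsyncNeMoMicroservices class name, the .headers attribute, and the .parse() helper are assumptions based on the raw-response documentation linked above.

```python
import asyncio

from nemo_microservices import AsyncNeMoMicroservices  # assumed async client class name


async def main() -> None:
    client = AsyncNeMoMicroservices(base_url="http://nemo.example.com")  # placeholder URL

    # Prefixing the call with .with_raw_response returns the raw response object.
    response = await client.v2.inference.gateway.openai.with_raw_response.get("models")
    print(response.headers)  # inspect raw HTTP headers
    print(response.parse())  # recover the usual return value (assumed helper)


asyncio.run(main())
```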
- property with_streaming_response: nemo_microservices.resources.v2.inference.gateway.openai.AsyncOpenAIResourceWithStreamingResponse#
An alternative to .with_raw_response that doesn't eagerly read the response body.
For more information, see https://docs.nvidia.com/nemo/microservices/latest/pysdk/index.html#with_streaming_response
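A sketch of the streaming variant, which defers reading the body until it is consumed inside the context manager. The iter_lines() helper and the AsyncNeMoMicroservices class name are assumptions based on the linked documentation.

```python
import asyncio

from nemo_microservices import AsyncNeMoMicroservices  # assumed async client class name


async def main() -> None:
    client = AsyncNeMoMicroservices(base_url="http://nemo.example.com")  # placeholder URL

    # The body is not read eagerly; consume it inside the context manager.
    async with client.v2.inference.gateway.openai.with_streaming_response.get("models") as response:
        print(response.headers)
        async for line in response.iter_lines():  # assumed streaming helper
            print(line)


asyncio.run(main())
```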
- class nemo_microservices.resources.v2.inference.gateway.openai.AsyncOpenAIResourceWithRawResponse()#
Initialization
- class nemo_microservices.resources.v2.inference.gateway.openai.AsyncOpenAIResourceWithStreamingResponse()#
Initialization
- class nemo_microservices.resources.v2.inference.gateway.openai.OpenAIResource(client: nemo_microservices._client.NeMoMicroservices)#
Bases: nemo_microservices._resource.SyncAPIResource
Initialization
- delete(
- trailing_uri: str,
- *,
- extra_headers: nemo_microservices._types.Headers | None = None,
- extra_query: nemo_microservices._types.Query | None = None,
- extra_body: nemo_microservices._types.Body | None = None,
- timeout: float | httpx.Timeout | None | nemo_microservices._types.NotGiven = not_given,
- )
Proxy requests to OpenAI-compatible inference endpoints.
This is a stub implementation that returns request details.
Args:
- extra_headers: Send extra headers.
- extra_query: Add additional query parameters to the request.
- extra_body: Add additional JSON properties to the request.
- timeout: Override the client-level default timeout for this request, in seconds.
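A sketch of a per-request timeout override on the sync resource. The trailing URI is hypothetical, and the attribute path client.v2.inference.gateway.openai is inferred from the module path.

```python
import httpx

from nemo_microservices import NeMoMicroservices

client = NeMoMicroservices(base_url="http://nemo.example.com")  # placeholder URL

# "files/file-123" is a hypothetical trailing URI; timeout accepts a float or an httpx.Timeout.
result = client.v2.inference.gateway.openai.delete(
    "files/file-123",
    timeout=httpx.Timeout(30.0),
)
print(result)
```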
- get(
- trailing_uri: str,
- *,
- extra_headers: nemo_microservices._types.Headers | None = None,
- extra_query: nemo_microservices._types.Query | None = None,
- extra_body: nemo_microservices._types.Body | None = None,
- timeout: float | httpx.Timeout | None | nemo_microservices._types.NotGiven = not_given,
- )
Proxy requests to OpenAI-compatible inference endpoints.
This is a stub implementation that returns request details.
Args:
- extra_headers: Send extra headers.
- extra_query: Add additional query parameters to the request.
- extra_body: Add additional JSON properties to the request.
- timeout: Override the client-level default timeout for this request, in seconds.
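A sketch showing extra_query on the sync resource; the trailing URI, query parameter, and base URL are illustrative.

```python
from nemo_microservices import NeMoMicroservices

client = NeMoMicroservices(base_url="http://nemo.example.com")  # placeholder URL

# extra_query appends query parameters to the proxied request.
result = client.v2.inference.gateway.openai.get(
    "models",
    extra_query={"limit": 10},
)
print(result)
```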
- patch(
- trailing_uri: str,
- *,
- extra_headers: nemo_microservices._types.Headers | None = None,
- extra_query: nemo_microservices._types.Query | None = None,
- extra_body: nemo_microservices._types.Body | None = None,
- timeout: float | httpx.Timeout | None | nemo_microservices._types.NotGiven = not_given,
- )
Proxy requests to OpenAI-compatible inference endpoints.
This is a stub implementation that returns request details.
Args:
- extra_headers: Send extra headers.
- extra_query: Add additional query parameters to the request.
- extra_body: Add additional JSON properties to the request.
- timeout: Override the client-level default timeout for this request, in seconds.
- post(
- trailing_uri: str,
- *,
- extra_headers: nemo_microservices._types.Headers | None = None,
- extra_query: nemo_microservices._types.Query | None = None,
- extra_body: nemo_microservices._types.Body | None = None,
- timeout: float | httpx.Timeout | None | nemo_microservices._types.NotGiven = not_given,
- )
Proxy requests to OpenAI-compatible inference endpoints.
This is a stub implementation that returns request details.
Args:
- extra_headers: Send extra headers.
- extra_query: Add additional query parameters to the request.
- extra_body: Add additional JSON properties to the request.
- timeout: Override the client-level default timeout for this request, in seconds.
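A synchronous counterpart of the chat-completions proxy sketch shown for the async resource; the payload travels in extra_body because the method exposes no dedicated body parameter, and the header, model name, and trailing URI are illustrative.

```python
from nemo_microservices import NeMoMicroservices

client = NeMoMicroservices(base_url="http://nemo.example.com")  # placeholder URL

# Illustrative proxy of a chat-completions request; extra_headers adds a custom header.
result = client.v2.inference.gateway.openai.post(
    "chat/completions",
    extra_headers={"X-Request-Id": "example-123"},  # hypothetical header
    extra_body={
        "model": "meta/llama-3.1-8b-instruct",  # hypothetical model name
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(result)
```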
- put(
- trailing_uri: str,
- *,
- extra_headers: nemo_microservices._types.Headers | None = None,
- extra_query: nemo_microservices._types.Query | None = None,
- extra_body: nemo_microservices._types.Body | None = None,
- timeout: float | httpx.Timeout | None | nemo_microservices._types.NotGiven = not_given,
- )
Proxy requests to OpenAI-compatible inference endpoints.
This is a stub implementation that returns request details.
Args:
- extra_headers: Send extra headers.
- extra_query: Add additional query parameters to the request.
- extra_body: Add additional JSON properties to the request.
- timeout: Override the client-level default timeout for this request, in seconds.
- property with_raw_response: nemo_microservices.resources.v2.inference.gateway.openai.OpenAIResourceWithRawResponse#
This property can be used as a prefix for any HTTP method call to return the raw response object instead of the parsed content.
For more information, see https://docs.nvidia.com/nemo/microservices/latest/pysdk/index.html#accessing-raw-response-data-e-g-headers
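A sync sketch of the raw-response prefix; the .headers attribute and .parse() helper are assumed from the raw-response documentation linked above, and the trailing URI is illustrative.

```python
from nemo_microservices import NeMoMicroservices

client = NeMoMicroservices(base_url="http://nemo.example.com")  # placeholder URL

# The .with_raw_response prefix returns the raw response instead of the parsed content.
response = client.v2.inference.gateway.openai.with_raw_response.get("models")
print(response.headers)  # inspect raw HTTP headers
print(response.parse())  # recover the usual return value (assumed helper)
```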
- property with_streaming_response: nemo_microservices.resources.v2.inference.gateway.openai.OpenAIResourceWithStreamingResponse#
An alternative to .with_raw_response that doesn't eagerly read the response body.
For more information, see https://docs.nvidia.com/nemo/microservices/latest/pysdk/index.html#with_streaming_response
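A sync streaming sketch; iter_lines() is an assumed helper, per the with_streaming_response documentation linked above, and the trailing URI is illustrative.

```python
from nemo_microservices import NeMoMicroservices

client = NeMoMicroservices(base_url="http://nemo.example.com")  # placeholder URL

# The response body is not read eagerly; consume it inside the context manager.
with client.v2.inference.gateway.openai.with_streaming_response.get("models") as response:
    print(response.headers)
    for line in response.iter_lines():  # assumed streaming helper
        print(line)
```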
- class nemo_microservices.resources.v2.inference.gateway.openai.OpenAIResourceWithRawResponse()#
Initialization
- class nemo_microservices.resources.v2.inference.gateway.openai.OpenAIResourceWithStreamingResponse()#
Initialization