nemo_microservices.resources.guardrail.guardrail
Module Contents
Classes
API
- class nemo_microservices.resources.guardrail.guardrail.AsyncGuardrailResource()
Bases: nemo_microservices._resource.AsyncAPIResource
- async check(
- *,
- messages: Iterable[nemo_microservices.types.guardrail_check_params.Message],
- model: str,
- best_of: int | nemo_microservices._types.Omit = omit,
- echo: bool | nemo_microservices._types.Omit = omit,
- frequency_penalty: float | nemo_microservices._types.Omit = omit,
- function_call: str | Dict[str, object] | nemo_microservices._types.Omit = omit,
- guardrails: nemo_microservices.types.guardrails_data_param.GuardrailsDataParam | nemo_microservices._types.Omit = omit,
- ignore_eos: bool | nemo_microservices._types.Omit = omit,
- logit_bias: Dict[str, float] | nemo_microservices._types.Omit = omit,
- logprobs: bool | nemo_microservices._types.Omit = omit,
- max_tokens: int | nemo_microservices._types.Omit = omit,
- n: int | nemo_microservices._types.Omit = omit,
- presence_penalty: float | nemo_microservices._types.Omit = omit,
- response_format: Dict[str, str] | nemo_microservices._types.Omit = omit,
- seed: int | nemo_microservices._types.Omit = omit,
- stop: str | nemo_microservices._types.SequenceNotStr[str] | nemo_microservices._types.Omit = omit,
- stream: bool | nemo_microservices._types.Omit = omit,
- suffix: str | nemo_microservices._types.Omit = omit,
- system_fingerprint: str | nemo_microservices._types.Omit = omit,
- temperature: float | nemo_microservices._types.Omit = omit,
- tool_choice: str | Dict[str, object] | nemo_microservices._types.Omit = omit,
- tools: nemo_microservices._types.SequenceNotStr[str] | nemo_microservices._types.Omit = omit,
- top_logprobs: int | nemo_microservices._types.Omit = omit,
- top_p: float | nemo_microservices._types.Omit = omit,
- user: str | nemo_microservices._types.Omit = omit,
- vision: bool | nemo_microservices._types.Omit = omit,
- extra_headers: nemo_microservices._types.Headers | None = None,
- extra_query: nemo_microservices._types.Query | None = None,
- extra_body: nemo_microservices._types.Body | None = None,
- timeout: float | httpx.Timeout | None | nemo_microservices._types.NotGiven = not_given,
)
Chat completion for the provided conversation. (A usage sketch follows the parameter reference below.)
Args: messages: A list of messages comprising the conversation so far.
model: The model to use for completion. Must be one of the available models.
best_of: Not supported. Generates best_of completions server-side and returns the “best” (the one with the highest log probability per token). Results cannot be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return; best_of must be greater than n.
echo: Not supported. If echo is true, the response will include the prompt and, optionally, its token IDs and logprobs.
frequency_penalty: Positive values penalize new tokens based on their existing frequency in the text.
function_call: Not supported. Deprecated in favor of tool_choice. ‘none’ means the model will not call a function and instead generates a message. ‘auto’ means the model can pick between generating a message or calling a function. Specifying a particular function via {‘name’: ‘my_function’} forces the model to call that function.
guardrails: Guardrails-specific options for the request.
ignore_eos: Ignore the end-of-sequence (EOS) token during generation.
logit_bias: Not supported. Modify the likelihood of specified tokens appearing in the completion.
logprobs: Whether to return log probabilities of the output tokens. If true, returns the log probabilities of each output token in the content of the message.
max_tokens: The maximum number of tokens that can be generated in the chat completion.
n: How many chat completion choices to generate for each input message.
presence_penalty: Positive values penalize new tokens based on whether they appear in the text so far.
response_format: Format of the response, can be ‘json_object’ to force the model to output valid JSON.
seed: If specified, attempts to sample deterministically.
stop: Up to 4 sequences where the API will stop generating further tokens.
stream: If set, partial message deltas will be sent, like in ChatGPT.
suffix: Not supported. If echo is set, the prompt is returned with the completion.
system_fingerprint: Represents the backend configuration that the model runs with. Used with seed for determinism.
temperature: What sampling temperature to use, between 0 and 2.
tool_choice: Not supported. Preferred over function_call. Controls which (if any) function is called by the model.
tools: A list of tools the model may call.
top_logprobs: The number of most likely tokens to return at each token position.
top_p: An alternative to sampling with temperature, called nucleus sampling.
user: Not supported. A unique identifier representing your end-user.
vision: Whether this is a vision-capable request with image inputs.
extra_headers: Send extra headers with the request.
extra_query: Add additional query parameters to the request.
extra_body: Add additional JSON properties to the request.
timeout: Override the client-level default timeout for this request, in seconds.
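A minimal usage sketch for the async variant, assuming the package root exports an AsyncNeMoMicroservices client (not confirmed by this page) and a Guardrails deployment reachable at the placeholder base URL; the model name and config_id value are illustrative:

```python
import asyncio

from nemo_microservices import AsyncNeMoMicroservices


async def main() -> None:
    # Placeholder endpoint; point this at your own deployment (add
    # credentials here if your deployment requires them).
    client = AsyncNeMoMicroservices(base_url="http://localhost:8080")

    response = await client.guardrail.check(
        model="meta/llama-3.1-8b-instruct",  # placeholder model name
        messages=[{"role": "user", "content": "How do I reset my password?"}],
        guardrails={"config_id": "default"},  # hypothetical guardrails config id
        temperature=0.2,
        max_tokens=256,
    )
    print(response)


asyncio.run(main())
```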
- property completions: nemo_microservices.resources.guardrail.completions.AsyncCompletionsResource
- property configs: nemo_microservices.resources.guardrail.configs.AsyncConfigsResource
- property models: nemo_microservices.resources.guardrail.models.AsyncModelsResource
- property with_raw_response: nemo_microservices.resources.guardrail.guardrail.AsyncGuardrailResourceWithRawResponse
This property can be used as a prefix for any HTTP method call to return the raw response object instead of the parsed content.
For more information, see https://docs.nvidia.com/nemo/microservices/latest/pysdk/index.html#accessing-raw-response-data-e-g-headers
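For instance, a sketch continuing the async client from the example above (the parse() and headers accessors follow the pattern described in the linked docs):

```python
# Inside an async function, with `client` constructed as above.
raw = await client.guardrail.with_raw_response.check(
    model="meta/llama-3.1-8b-instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(raw.headers)     # raw HTTP response headers
checked = raw.parse()  # parse back into the typed response object
```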
- property with_streaming_response: nemo_microservices.resources.guardrail.guardrail.AsyncGuardrailResourceWithStreamingResponse
An alternative to .with_raw_response that doesn’t eagerly read the response body.
For more information, see https://docs.nvidia.com/nemo/microservices/latest/pysdk/index.html#with_streaming_response
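A sketch of the streaming variant, again assuming the async client from the first example; the response is used as a context manager, so the body is only read on demand:

```python
# Inside an async function, with `client` constructed as above.
async with client.guardrail.with_streaming_response.check(
    model="meta/llama-3.1-8b-instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Hello"}],
) as response:
    print(response.headers)  # available before the body is read
    async for line in response.iter_lines():  # body streamed lazily
        print(line)
```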
- class nemo_microservices.resources.guardrail.guardrail.AsyncGuardrailResourceWithRawResponse()
- property completions: nemo_microservices.resources.guardrail.completions.AsyncCompletionsResourceWithRawResponse
- class nemo_microservices.resources.guardrail.guardrail.AsyncGuardrailResourceWithStreamingResponse()
- property chat: nemo_microservices.resources.guardrail.chat.chat.AsyncChatResourceWithStreamingResponse
- property completions: nemo_microservices.resources.guardrail.completions.AsyncCompletionsResourceWithStreamingResponse
- class nemo_microservices.resources.guardrail.guardrail.GuardrailResource()
Bases: nemo_microservices._resource.SyncAPIResource
- check(
- *,
- messages: Iterable[nemo_microservices.types.guardrail_check_params.Message],
- model: str,
- best_of: int | nemo_microservices._types.Omit = omit,
- echo: bool | nemo_microservices._types.Omit = omit,
- frequency_penalty: float | nemo_microservices._types.Omit = omit,
- function_call: str | Dict[str, object] | nemo_microservices._types.Omit = omit,
- guardrails: nemo_microservices.types.guardrails_data_param.GuardrailsDataParam | nemo_microservices._types.Omit = omit,
- ignore_eos: bool | nemo_microservices._types.Omit = omit,
- logit_bias: Dict[str, float] | nemo_microservices._types.Omit = omit,
- logprobs: bool | nemo_microservices._types.Omit = omit,
- max_tokens: int | nemo_microservices._types.Omit = omit,
- n: int | nemo_microservices._types.Omit = omit,
- presence_penalty: float | nemo_microservices._types.Omit = omit,
- response_format: Dict[str, str] | nemo_microservices._types.Omit = omit,
- seed: int | nemo_microservices._types.Omit = omit,
- stop: str | nemo_microservices._types.SequenceNotStr[str] | nemo_microservices._types.Omit = omit,
- stream: bool | nemo_microservices._types.Omit = omit,
- suffix: str | nemo_microservices._types.Omit = omit,
- system_fingerprint: str | nemo_microservices._types.Omit = omit,
- temperature: float | nemo_microservices._types.Omit = omit,
- tool_choice: str | Dict[str, object] | nemo_microservices._types.Omit = omit,
- tools: nemo_microservices._types.SequenceNotStr[str] | nemo_microservices._types.Omit = omit,
- top_logprobs: int | nemo_microservices._types.Omit = omit,
- top_p: float | nemo_microservices._types.Omit = omit,
- user: str | nemo_microservices._types.Omit = omit,
- vision: bool | nemo_microservices._types.Omit = omit,
- extra_headers: nemo_microservices._types.Headers | None = None,
- extra_query: nemo_microservices._types.Query | None = None,
- extra_body: nemo_microservices._types.Body | None = None,
- timeout: float | httpx.Timeout | None | nemo_microservices._types.NotGiven = not_given,
)
Chat completion for the provided conversation. (A usage sketch follows the parameter reference below.)
Args: messages: A list of messages comprising the conversation so far.
model: The model to use for completion. Must be one of the available models.
best_of: Not supported. Generates best_of completions server-side and returns the “best” (the one with the highest log probability per token). Results cannot be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return; best_of must be greater than n.
echo: Not supported. If echo is true, the response will include the prompt and, optionally, its token IDs and logprobs.
frequency_penalty: Positive values penalize new tokens based on their existing frequency in the text.
function_call: Not supported. Deprecated in favor of tool_choice. ‘none’ means the model will not call a function and instead generates a message. ‘auto’ means the model can pick between generating a message or calling a function. Specifying a particular function via {‘name’: ‘my_function’} forces the model to call that function.
guardrails: Guardrails-specific options for the request.
ignore_eos: Ignore the end-of-sequence (EOS) token during generation.
logit_bias: Not supported. Modify the likelihood of specified tokens appearing in the completion.
logprobs: Whether to return log probabilities of the output tokens. If true, returns the log probabilities of each output token in the content of the message.
max_tokens: The maximum number of tokens that can be generated in the chat completion.
n: How many chat completion choices to generate for each input message.
presence_penalty: Positive values penalize new tokens based on whether they appear in the text so far.
response_format: Format of the response, can be ‘json_object’ to force the model to output valid JSON.
seed: If specified, attempts to sample deterministically.
stop: Up to 4 sequences where the API will stop generating further tokens.
stream: If set, partial message deltas will be sent, like in ChatGPT.
suffix: Not supported. If echo is set, the prompt is returned with the completion.
system_fingerprint: Represents the backend configuration that the model runs with. Used with seed for determinism.
temperature: What sampling temperature to use, between 0 and 2.
tool_choice: Not supported. Preferred over function_call. Controls which (if any) function is called by the model.
tools: A list of tools the model may call.
top_logprobs: The number of most likely tokens to return at each token position.
top_p: An alternative to sampling with temperature, called nucleus sampling.
user: Not supported. A unique identifier representing your end-user.
vision: Whether this is a vision-capable request with image inputs.
extra_headers: Send extra headers with the request.
extra_query: Add additional query parameters to the request.
extra_body: Add additional JSON properties to the request.
timeout: Override the client-level default timeout for this request, in seconds.
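A minimal usage sketch for the synchronous variant, assuming the package root exports a NeMoMicroservices client (not confirmed by this page); the base URL, model name, and config_id value are illustrative:

```python
from nemo_microservices import NeMoMicroservices

# Placeholder endpoint; point this at your own deployment (add
# credentials here if your deployment requires them).
client = NeMoMicroservices(base_url="http://localhost:8080")

response = client.guardrail.check(
    model="meta/llama-3.1-8b-instruct",  # placeholder model name
    messages=[{"role": "user", "content": "How do I reset my password?"}],
    guardrails={"config_id": "default"},  # hypothetical guardrails config id
)
print(response)
```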
- property completions: nemo_microservices.resources.guardrail.completions.CompletionsResource
- property configs: nemo_microservices.resources.guardrail.configs.ConfigsResource
- property models: nemo_microservices.resources.guardrail.models.ModelsResource
- property with_raw_response: nemo_microservices.resources.guardrail.guardrail.GuardrailResourceWithRawResponse
This property can be used as a prefix for any HTTP method call to return the raw response object instead of the parsed content.
For more information, see https://docs.nvidia.com/nemo/microservices/latest/pysdk/index.html#accessing-raw-response-data-e-g-headers
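For instance, a sketch continuing the synchronous client from the example above (the parse() and headers accessors follow the pattern described in the linked docs):

```python
# With `client` constructed as above.
raw = client.guardrail.with_raw_response.check(
    model="meta/llama-3.1-8b-instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(raw.headers)     # raw HTTP response headers
checked = raw.parse()  # parse back into the typed response object
```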
- property with_streaming_response: nemo_microservices.resources.guardrail.guardrail.GuardrailResourceWithStreamingResponse
An alternative to .with_raw_response that doesn’t eagerly read the response body.
For more information, see https://docs.nvidia.com/nemo/microservices/latest/pysdk/index.html#with_streaming_response
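A sketch of the streaming variant, assuming the synchronous client from the example above; the response is used as a context manager, so the body is only read on demand:

```python
# With `client` constructed as above.
with client.guardrail.with_streaming_response.check(
    model="meta/llama-3.1-8b-instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Hello"}],
) as response:
    print(response.headers)  # available before the body is read
    for line in response.iter_lines():  # body streamed lazily
        print(line)
```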
- class nemo_microservices.resources.guardrail.guardrail.GuardrailResourceWithRawResponse()
- property completions: nemo_microservices.resources.guardrail.completions.CompletionsResourceWithRawResponse
- class nemo_microservices.resources.guardrail.guardrail.GuardrailResourceWithStreamingResponse()
- property completions: nemo_microservices.resources.guardrail.completions.CompletionsResourceWithStreamingResponse