GuardrailCheckParams

class nemo_microservices.types.GuardrailCheckParams

Bases: TypedDict

messages: Required[Iterable[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]]

A list of messages comprising the conversation so far.

model: Required[str]

The model to use for completion. Must be one of the available models.
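
For illustration, a minimal params dict supplies only these two required fields. This is a sketch: the message shapes follow the usual chat-completion convention, and the model name is a placeholder for one of your deployment's available models.

    from nemo_microservices.types import GuardrailCheckParams

    # Minimal request: only the two required fields are set. The model
    # name is a placeholder; substitute an available model.
    params: GuardrailCheckParams = {
        "model": "meta/llama-3.1-8b-instruct",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What can you help me with?"},
        ],
    }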

best_of: int

Not supported.

Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return; best_of must be greater than n.

echo: bool

Not supported.

If echo is true, the response will include the prompt and optionally its token IDs and logprobs.

frequency_penalty: float

Positive values penalize new tokens based on their existing frequency in the text.

function_call: str | object

Not supported.

Deprecated in favor of tool_choice. 'none' means the model will not call a function and instead generates a message. 'auto' means the model can pick between generating a message or calling a function. Specifying a particular function via {'name': 'my_function'} forces the model to call that function.

guardrails: GuardrailsDataParam

Guardrails specific options for the request.
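
As a sketch of how this might look: the nested "config_id" key below is an assumption about the shape of GuardrailsDataParam made for illustration, not something this page specifies; consult GuardrailsDataParam for the actual fields.

    # Hypothetical guardrails options: "config_id" and its value are
    # assumed for illustration; check GuardrailsDataParam for the real fields.
    params["guardrails"] = {"config_id": "self-check"}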

ignore_eos: bool

Whether to ignore the end-of-sequence (EOS) token and continue generating after it is produced.

logit_bias: Dict[str, float]

Not supported.

Modify the likelihood of specified tokens appearing in the completion.

logprobs: bool

Whether to return log probabilities of the output tokens or not.

If true, returns the log probabilities of each output token in the content of message.

max_tokens: int

The maximum number of tokens that can be generated in the chat completion.

n: int

How many chat completion choices to generate for each input message.

presence_penalty: float

Positive values penalize new tokens based on whether they appear in the text so far.

response_format: Dict[str, str]

Format of the response; can be 'json_object' to force the model to output valid JSON.
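
The exact dict shape is not spelled out here; a sketch assuming the OpenAI-style {"type": ...} convention would be:

    # Assumes the OpenAI-style shape for response_format; verify
    # against your SDK version.
    params["response_format"] = {"type": "json_object"}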

seed: int

If specified, attempts to sample deterministically.

stop: List[str] | str

Up to 4 sequences where the API will stop generating further tokens.

stream: bool

If set, partial message deltas will be sent, like in ChatGPT.

suffix: str

Not supported. The suffix that comes after a completion of inserted text.

system_fingerprint: str

Represents the backend configuration that the model runs with.

Used with seed for determinism.

temperature: float

What sampling temperature to use, between 0 and 2.

tool_choice: str | object

Not supported.

Supersedes the deprecated function_call. Controls which (if any) function is called by the model.

tools: List[str]

A list of tools the model may call.

top_logprobs: int

The number of most likely tokens to return at each token position.

top_p: float

An alternative to sampling with temperature, called nucleus sampling.

user: str

Not supported. A unique identifier representing your end-user.

vision: bool

Whether this is a vision-capable request with image inputs.
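
Putting it together, an end-to-end sketch: the NeMoMicroservices client, its guardrail.check method, the base URL, and the config id are all assumptions based on the SDK's naming conventions, so verify them against your installed version.

    from nemo_microservices import NeMoMicroservices

    # Assumed client construction and endpoint; adjust for your deployment.
    client = NeMoMicroservices(base_url="http://localhost:8080")

    # Assumes a guardrail.check(...) method that accepts these params
    # as keyword arguments.
    response = client.guardrail.check(
        model="meta/llama-3.1-8b-instruct",      # placeholder model
        messages=[
            {"role": "user", "content": "Summarize your safety policies."},
        ],
        guardrails={"config_id": "self-check"},  # hypothetical config id
        temperature=0.2,
        max_tokens=256,
    )
    print(response)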