nemo_microservices.types.guardrails_data_param#

Module Contents#

Classes#

Data#

API#

nemo_microservices.types.guardrails_data_param.Config: typing_extensions.TypeAlias#


class nemo_microservices.types.guardrails_data_param.GuardrailsDataParam#

Bases: typing_extensions.TypedDict

config: nemo_microservices.types.guardrails_data_param.Config#

The ID of the configuration to use, or its dict representation.

config_id: str#

The ID of the configuration to use.

config_ids: nemo_microservices._types.SequenceNotStr[str]#

The list of configuration IDs to use.

If set, the configurations will be combined.
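Since GuardrailsDataParam is a TypedDict, a plain dict with these keys suffices. A minimal sketch of the three ways to select a configuration described above; the IDs are made up for illustration, and the inline config body is left empty because its schema is not part of this reference:

```python
# 1. A single configuration selected by ID:
by_id = {"config_id": "content-safety"}

# 2. Several configurations combined (config_ids):
combined = {"config_ids": ["content-safety", "topic-control"]}

# 3. An inline configuration dict (fields omitted; see the
#    guardrails configuration schema for its actual shape):
inline = {"config": {}}
```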

context: Dict[str, object]#


Additional context data to be added to the conversation.

options: nemo_microservices.types.generation_options_param.GenerationOptionsParam#


A set of options that should be applied during a generation.

The GenerationOptions control various things such as which rails are enabled, additional parameters for the main LLM, whether the rails should be enforced or run in parallel, what to include in the generation log, etc.

return_choice: bool#

If set, guardrails data will be included as JSON in the choices array.

state: Dict[str, object]#


A state object that should be used to continue the interaction.

stream: bool#


If set, partial message deltas will be sent, like in ChatGPT.

Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
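Putting the remaining optional fields together, a hedged sketch of a complete param dict; all values here are illustrative placeholders, not defaults from the SDK:

```python
# Illustrative GuardrailsDataParam-shaped dict; every key is optional
# and the values below are made up for the example.
data = {
    "config_id": "content-safety",      # which guardrails config to apply
    "context": {"user_name": "alice"},  # extra context for the conversation
    "return_choice": True,              # include guardrails data in choices
    "stream": True,                     # request data-only server-sent events
}
```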