morpheus.llm.services.openai_chat_service.OpenAIChatClient
- class OpenAIChatClient(parent, *, model_name, set_assistant=False, max_retries=10, json=False, **model_kwargs)[source]
Bases:
morpheus.llm.services.llm_service.LLMClient
Client for interacting with a specific OpenAI chat model. This class should be constructed with the
OpenAIChatService.get_client
method.
- Parameters
- parent
The parent service for this client.
- model_name
The name of the model to interact with.
- set_assistant: bool, optional
When True, a second input field named assistant will be used to provide additional context to the model, by default False.
- max_retries: int, optional
The maximum number of retries to attempt when making a request to the OpenAI API, by default 10.
- json: bool, optional
When True, the response will be returned as a JSON object, by default False.
- model_kwargs
Additional keyword arguments to pass to the model when generating text.
- Attributes
- model_kwargs
Get the keyword args that will be passed to the model when calling generation functions.
- model_name
Get the name of the model associated with this client.
Methods
- generate(**input_dict)
Issue a request to generate a response based on a given prompt.
- generate_async(**input_dict)
Issue an asynchronous request to generate a response based on a given prompt.
- generate_batch(inputs, return_exceptions)
Issue a request to generate a list of responses based on a list of prompts.
- generate_batch_async(inputs, return_exceptions)
Issue an asynchronous request to generate a list of responses based on a list of prompts.
- get_input_names()
Returns the names of the inputs to the model.
- generate(**input_dict)[source]
Issue a request to generate a response based on a given prompt.
- Parameters
- input_dict
Input containing prompt data.
- async generate_async(**input_dict)[source]
Issue an asynchronous request to generate a response based on a given prompt.
- Parameters
- input_dict
Input containing prompt data.
- generate_batch(inputs: dict[str, list], return_exceptions: Literal[True] = True) → list[str | BaseException][source]
- generate_batch(inputs: dict[str, list], return_exceptions: Literal[False] = False) → list[str]
Issue a request to generate a list of responses based on a list of prompts.
- Parameters
- inputs
Inputs containing prompt data.
- return_exceptions
Whether to return exceptions in the output list or raise them immediately.
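The return_exceptions contract described above can be sketched in plain Python. This is an illustrative stand-in, not Morpheus code; the per-prompt failure condition is invented for the example.

```python
# Sketch of the documented return_exceptions semantics: with
# return_exceptions=True, a failure for one prompt is placed in the
# output list; with False, it is raised immediately.
def generate_batch_sketch(prompts, return_exceptions=True):
    results = []
    for prompt in prompts:
        try:
            if not prompt:
                raise ValueError("empty prompt")  # hypothetical failure
            results.append(f"response to: {prompt}")
        except Exception as exc:
            if return_exceptions:
                results.append(exc)  # collected alongside successes
            else:
                raise                # surfaced to the caller
    return results


out = generate_batch_sketch(["hello", ""])
# out[0] is a response string; out[1] is the ValueError instance
```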
- async generate_batch_async(inputs: dict[str, list], return_exceptions: Literal[True] = True) → list[str | BaseException][source]
- async generate_batch_async(inputs: dict[str, list], return_exceptions: Literal[False] = False) → list[str]
Issue an asynchronous request to generate a list of responses based on a list of prompts.
- Parameters
- inputs
Inputs containing prompt data.
- return_exceptions
Whether to return exceptions in the output list or raise them immediately.
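For the asynchronous variant, the same contract matches the standard-library behavior of asyncio.gather's return_exceptions flag. The fake_generate coroutine below is a hypothetical stand-in for a per-prompt model call; it is not part of the Morpheus API.

```python
import asyncio


async def fake_generate(prompt):
    # Hypothetical per-prompt call standing in for the real model request.
    if not prompt:
        raise ValueError("empty prompt")
    return f"response to: {prompt}"


async def generate_batch_async_sketch(prompts, return_exceptions=True):
    # asyncio.gather implements the same semantics the docs describe:
    # exceptions appear in the result list, or the first one is raised.
    return await asyncio.gather(
        *(fake_generate(p) for p in prompts),
        return_exceptions=return_exceptions,
    )


results = asyncio.run(generate_batch_async_sketch(["hello", ""]))
# results[0] is a response string; results[1] is the ValueError instance
```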
- get_input_names()[source]
Returns the names of the inputs to the model.
- Returns
- list[str]
List of input names.
- property model_kwargs
Get the keyword args that will be passed to the model when calling generation functions.
- Returns
- dict
The keyword arguments dictionary.
- property model_name
Get the name of the model associated with this client.
- Returns
- str
The name of the model.