morpheus_llm.llm.services.openai_chat_service.OpenAIChatClient#
- class OpenAIChatClient(
- parent,
- *,
- model_name,
- set_assistant=False,
- max_retries=10,
- json=False,
- **model_kwargs,
- )
Bases:
LLMClient

Client for interacting with a specific OpenAI chat model. This class should be constructed with the
OpenAIChatService.get_client method.
- Parameters:
- parent : OpenAIChatService
The parent service for this client.
- model_name : str
The name of the model to interact with.
- set_assistant : bool, optional
When True, a second input field named assistant will be used to provide additional context to the model, by default False.
- max_retries : int, optional
The maximum number of retries to attempt when making a request to the OpenAI API, by default 10.
- json : bool, optional
When True, the response will be returned as a JSON object, by default False.
- model_kwargs : dict[str, typing.Any]
Additional keyword arguments to pass to the model when generating text.
- Attributes:
model_kwargs
Get the keyword args that will be passed to the model when calling generation functions.
model_name
Get the name of the model associated with this client.
Methods
generate(**input_dict)
Issue a request to generate a response based on a given prompt.
generate_async(**input_dict)
Issue an asynchronous request to generate a response based on a given prompt.
generate_batch(inputs[, return_exceptions])
Issue a request to generate a list of responses based on a list of prompts.
generate_batch_async(inputs[, return_exceptions])
Issue an asynchronous request to generate a list of responses based on a list of prompts.
get_input_names()
Returns the names of the inputs to the model.
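The construction and call pattern the methods above describe can be sketched with a minimal, hypothetical stand-in for the client interface. A real instance comes from OpenAIChatService.get_client and calls the OpenAI API; the EchoChatClient below, the "gpt-4" model name, and the temperature keyword are illustrative assumptions only:

```python
class EchoChatClient:
    """Hypothetical stand-in mirroring the documented OpenAIChatClient interface."""

    def __init__(self, model_name: str, set_assistant: bool = False, **model_kwargs):
        self._model_name = model_name
        self._set_assistant = set_assistant
        self._model_kwargs = model_kwargs

    @property
    def model_name(self) -> str:
        return self._model_name

    @property
    def model_kwargs(self) -> dict:
        return self._model_kwargs

    def generate(self, **input_dict) -> str:
        # A real client would send input_dict to the OpenAI API here.
        return f"{self._model_name}: {input_dict['prompt']}"


client = EchoChatClient(model_name="gpt-4", temperature=0.0)
print(client.model_name)                # gpt-4
print(client.model_kwargs)              # {'temperature': 0.0}
print(client.generate(prompt="hello"))  # gpt-4: hello
```

Note how extra keyword arguments given at construction are captured as model_kwargs and would be forwarded to the model on each generation call.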
- generate(**input_dict)[source]#
Issue a request to generate a response based on a given prompt.
- Parameters:
- input_dict : dict
Input containing prompt data.
- async generate_async(**input_dict)[source]#
Issue an asynchronous request to generate a response based on a given prompt.
- Parameters:
- input_dict : dict
Input containing prompt data.
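Because generate_async is a coroutine, it must be awaited; the usual payoff is issuing several requests concurrently. A sketch of that pattern, with a hypothetical stand-in coroutine in place of the real client method:

```python
import asyncio


async def fake_generate_async(**input_dict) -> str:
    """Hypothetical stand-in for OpenAIChatClient.generate_async."""
    await asyncio.sleep(0)  # a real client would await the OpenAI API here
    return input_dict["prompt"].upper()


async def main() -> list[str]:
    # Launch several requests concurrently instead of awaiting them one by one.
    return await asyncio.gather(
        fake_generate_async(prompt="first"),
        fake_generate_async(prompt="second"),
    )


results = asyncio.run(main())
print(results)  # ['FIRST', 'SECOND']
```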
- generate_batch(inputs, return_exceptions) → list[str | BaseException][source]#
- generate_batch(inputs, return_exceptions) → list[str]
- generate_batch(inputs, return_exceptions) → list[str] | list[str | BaseException]
Issue a request to generate a list of responses based on a list of prompts.
- Parameters:
- inputs : dict
Inputs containing prompt data.
- return_exceptions : bool
Whether to return exceptions in the output list or raise them immediately.
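The return_exceptions flag chooses between failing fast and collecting per-prompt failures in place in the output list. A hypothetical sketch of that behavior (the real method calls the OpenAI API; the "prompt" key holding a list of prompts, and the empty-prompt failure, are assumptions for illustration):

```python
def generate_batch_sketch(inputs: dict, return_exceptions: bool = False):
    """Hypothetical sketch of the documented return_exceptions behavior."""
    results: list = []
    for prompt in inputs["prompt"]:
        try:
            if not prompt:
                raise ValueError("empty prompt")
            results.append(prompt.upper())  # stand-in for a model response
        except Exception as exc:
            if not return_exceptions:
                raise  # surface the failure immediately
            results.append(exc)  # keep going, recording the failure in place
    return results


ok = generate_batch_sketch({"prompt": ["a", "b"]})
mixed = generate_batch_sketch({"prompt": ["a", ""]}, return_exceptions=True)
print(ok)                             # ['A', 'B']
print(isinstance(mixed[1], ValueError))  # True
```

With return_exceptions=True the result list stays aligned with the input prompts, so a failure for one prompt does not discard the successful responses around it.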
- async generate_batch_async(inputs, return_exceptions) → list[str | BaseException][source]#
- async generate_batch_async(inputs, return_exceptions) → list[str]
- async generate_batch_async(inputs, return_exceptions) → list[str] | list[str | BaseException]
Issue an asynchronous request to generate a list of responses based on a list of prompts.
- Parameters:
- inputs : dict
Inputs containing prompt data.
- return_exceptions : bool
Whether to return exceptions in the output list or raise them immediately.
- get_input_names()[source]#
Returns the names of the inputs to the model.
- Returns:
- list[str]
List of input names.
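One use for get_input_names is assembling the keyword arguments for generate dynamically. The helper below is hypothetical, as are the "prompt" and "assistant" names, which are inferred from the set_assistant description above:

```python
def build_inputs(input_names: list, values: dict) -> dict:
    """Hypothetical helper: pick out exactly the inputs the client reports needing."""
    missing = [name for name in input_names if name not in values]
    if missing:
        raise KeyError(f"missing inputs: {missing}")
    # Keep only the fields the model expects; anything else is dropped.
    return {name: values[name] for name in input_names}


inputs = build_inputs(["prompt", "assistant"],
                      {"prompt": "hi", "assistant": "ctx", "extra": "ignored"})
print(inputs)  # {'prompt': 'hi', 'assistant': 'ctx'}
```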
- property model_kwargs#
Get the keyword args that will be passed to the model when calling generation functions.
- Returns:
- dict
The keyword arguments dictionary.
- property model_name#
Get the name of the model associated with this client.
- Returns:
- str
The name of the model.