morpheus_llm.llm.services.openai_chat_service.OpenAIChatClient#

class OpenAIChatClient(
parent,
*,
model_name,
set_assistant=False,
max_retries=10,
json=False,
**model_kwargs,
)[source]#

Bases: LLMClient

Client for interacting with a specific OpenAI chat model. This class should be constructed with the OpenAIChatService.get_client method.
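For instance, a client might be obtained like this (a minimal sketch; the model name and temperature value are illustrative, and OpenAI credentials are assumed to be available via the environment):

    from morpheus_llm.llm.services.openai_chat_service import OpenAIChatService

    # Build the client through the parent service rather than
    # constructing OpenAIChatClient directly.
    service = OpenAIChatService()
    client = service.get_client(model_name="gpt-3.5-turbo", temperature=0.0)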

Parameters:
parent: OpenAIChatService

The parent service for this client.

model_name: str

The name of the model to interact with.

set_assistant: bool, optional

When True, a second input field named assistant will be used to provide additional context to the model, by default False

max_retries: int, optional

The maximum number of retries to attempt when making a request to the OpenAI API, by default 10

json: bool, optional

When True, the response will be returned as a JSON object, by default False

model_kwargs: dict[str, typing.Any]

Additional keyword arguments to pass to the model when generating text.

Attributes:
model_kwargs

Get the keyword args that will be passed to the model when calling generation functions.

model_name

Get the name of the model associated with this client.

Methods

generate(**input_dict)

Issue a request to generate a response based on a given prompt.

generate_async(**input_dict)

Issue an asynchronous request to generate a response based on a given prompt.

generate_batch(inputs[, return_exceptions])

Issue a request to generate a list of responses based on a list of prompts.

generate_batch_async(inputs[, return_exceptions])

Issue an asynchronous request to generate a list of responses based on a list of prompts.

get_input_names()

Returns the names of the inputs to the model.

generate(**input_dict)[source]#

Issue a request to generate a response based on a given prompt.

Parameters:
input_dict: dict

Input containing prompt data.
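A hedged example of a single-prompt call using the client obtained above (the "prompt" key is an assumption; use get_input_names() to confirm the expected fields for your client):

    # Single synchronous generation; the keys in input_dict must match
    # the names returned by client.get_input_names().
    response = client.generate(prompt="What does this alert indicate?")
    print(response)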

async generate_async(**input_dict)[source]#

Issue an asynchronous request to generate a response based on a given prompt.

Parameters:
input_dict: dict

Input containing prompt data.
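A minimal sketch of the asynchronous variant, assuming the same "prompt" input field:

    import asyncio

    async def main():
        # Awaitable counterpart of generate(); useful inside async pipelines.
        response = await client.generate_async(prompt="Summarize this log entry.")
        print(response)

    asyncio.run(main())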

generate_batch(
inputs: dict[str, list],
return_exceptions: Literal[True],
) → list[str | BaseException][source]#
generate_batch(
inputs: dict[str, list],
return_exceptions: Literal[False],
) → list[str]
generate_batch(
inputs: dict[str, list],
return_exceptions: bool = False,
) → list[str] | list[str | BaseException]

Issue a request to generate a list of responses based on a list of prompts.

Parameters:
inputs: dict

Inputs containing prompt data.

return_exceptions: bool

Whether to return exceptions in the output list or raise them immediately.
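A hedged batch example; the "prompt" key is an assumption, and each list element is one prompt. With return_exceptions=True, failed requests appear in the output list instead of being raised:

    results = client.generate_batch(
        inputs={"prompt": ["First question?", "Second question?"]},
        return_exceptions=True,
    )

    for result in results:
        if isinstance(result, BaseException):
            print(f"request failed: {result}")
        else:
            print(result)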

async generate_batch_async(
inputs: dict[str, list],
return_exceptions: Literal[True],
) → list[str | BaseException][source]#
async generate_batch_async(
inputs: dict[str, list],
return_exceptions: Literal[False],
) → list[str]
async generate_batch_async(
inputs: dict[str, list],
return_exceptions: bool = False,
) → list[str] | list[str | BaseException]

Issue an asynchronous request to generate a list of responses based on a list of prompts.

Parameters:
inputs: dict

Inputs containing prompt data.

return_exceptions: bool

Whether to return exceptions in the output list or raise them immediately.
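The asynchronous counterpart, sketched under the same assumptions; with the default return_exceptions=False, the first failure propagates as a raised exception:

    import asyncio

    async def main():
        responses = await client.generate_batch_async(
            inputs={"prompt": ["Prompt A", "Prompt B"]},
        )
        print(responses)  # list[str], one response per prompt

    asyncio.run(main())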

get_input_names()[source]#

Returns the names of the inputs to the model.

Returns:
list[str]

List of input names.
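For example (output shown is illustrative; a client constructed with set_assistant=True would be expected to also list an assistant field):

    # Discover which keyword arguments the generate* methods expect.
    print(client.get_input_names())  # e.g. ["prompt"]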

property model_kwargs#

Get the keyword args that will be passed to the model when calling generation functions.

Returns:
dict

The keyword arguments dictionary.

property model_name#

Get the name of the model associated with this client.

Returns:
str

The name of the model.
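Both properties are read-only introspection helpers, e.g. (values shown are illustrative):

    print(client.model_name)    # e.g. "gpt-3.5-turbo"
    print(client.model_kwargs)  # e.g. {"temperature": 0.0}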