NVIDIA Morpheus (25.02.01)

morpheus_llm.llm.services.llm_service.LLMClient

class LLMClient[source]

Bases: abc.ABC

Abstract interface for clients that interact with LLM models. Each concrete implementation of this class has an associated LLMService implementation that constructs instances of it.
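Concrete clients are not constructed directly; they are obtained from a concrete LLMService implementation. A minimal sketch, assuming the OpenAIChatService implementation and its get_client() factory (the service class, its configuration, and the model name are assumptions, not part of this abstract interface):

```python
# Sketch only: OpenAIChatService, its default configuration (e.g. an API key
# taken from the environment) and the model name are assumptions; any concrete
# LLMService implementation exposes a similar get_client() factory.
from morpheus_llm.llm.services.openai_chat_service import OpenAIChatService

service = OpenAIChatService()                           # concrete LLMService
client = service.get_client(model_name="gpt-4o-mini")   # concrete LLMClient
```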

Methods

generate(**input_dict)
Issue a request to generate a response based on a given prompt.

generate_async(**input_dict)
Issue an asynchronous request to generate a response based on a given prompt.

generate_batch(inputs[, return_exceptions])
Issue a request to generate a list of responses based on a list of prompts.

generate_batch_async(inputs[, return_exceptions])
Issue an asynchronous request to generate a list of responses based on a list of prompts.

get_input_names()
Returns the names of the inputs to the model.
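For implementers, the sketch below shows a minimal concrete subclass. The echo backend is purely illustrative, and the sketch assumes the five methods listed above are the complete abstract interface.

```python
# Toy subclass for illustration only; a real client would call an actual LLM
# backend instead of echoing the prompt back.
from morpheus_llm.llm.services.llm_service import LLMClient


class EchoLLMClient(LLMClient):

    def get_input_names(self) -> list[str]:
        return ["prompt"]

    def generate(self, **input_dict) -> str:
        return f"echo: {input_dict['prompt']}"

    async def generate_async(self, **input_dict) -> str:
        return self.generate(**input_dict)

    def generate_batch(self, inputs: dict[str, list], return_exceptions: bool = False):
        results = []
        for prompt in inputs["prompt"]:
            try:
                results.append(self.generate(prompt=prompt))
            except BaseException as ex:
                if not return_exceptions:
                    raise
                results.append(ex)
        return results

    async def generate_batch_async(self, inputs: dict[str, list], return_exceptions: bool = False):
        return self.generate_batch(inputs, return_exceptions=return_exceptions)
```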

abstract generate(**input_dict)[source]

Issue a request to generate a response based on a given prompt.

Parameters
input_dict : dict

Input containing prompt data.

Returns
str

Generated response for the prompt.
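A short usage sketch, assuming the client obtained above exposes a single "prompt" input (check get_input_names() for the actual names):

```python
# "prompt" is an assumed input name; pass one keyword argument per input name.
response = client.generate(prompt="What is NVIDIA Morpheus?")
print(response)  # a single generated string
```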

abstract async generate_async(**input_dict)[source]

Issue an asynchronous request to generate a response based on a given prompt.

Parameters
input_dict : dict

Input containing prompt data.

Returns
str

Generated response for the prompt.
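The asynchronous variant is awaited inside a coroutine so the event loop is not blocked while the request is in flight; a sketch using asyncio and the same assumed "prompt" input:

```python
import asyncio


async def main() -> None:
    # "prompt" is an assumed input name, as in the synchronous example above.
    response = await client.generate_async(prompt="What is NVIDIA Morpheus?")
    print(response)


asyncio.run(main())
```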

abstract generate_batch(inputs: dict[str, list], return_exceptions: Literal[True]) → list[str | BaseException][source]
abstract generate_batch(inputs: dict[str, list], return_exceptions: Literal[False]) → list[str]
abstract generate_batch(inputs: dict[str, list], return_exceptions: bool = False) → list[str] | list[str | BaseException]

Issue a request to generate a list of responses based on a list of prompts.

Parameters
inputs : dict

Inputs containing prompt data.

return_exceptions : bool

Whether to return exceptions in the output list or raise them immediately.

Returns
list[str] | list[str | BaseException]

List of responses, or a mixed list of responses and exceptions when return_exceptions is True.
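A sketch of batched generation with failures captured in the result list instead of raised, again assuming a single "prompt" input:

```python
# inputs maps each input name (assumed to be "prompt" here) to a list of values;
# with return_exceptions=True a failed prompt yields its exception in place.
inputs = {"prompt": ["Define phishing.", "Define spear phishing."]}

results = client.generate_batch(inputs, return_exceptions=True)
for prompt, result in zip(inputs["prompt"], results):
    if isinstance(result, BaseException):
        print(f"failed {prompt!r}: {result}")
    else:
        print(f"ok {prompt!r}: {result[:60]}")
```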

abstract async generate_batch_async(inputs: dict[str, list], return_exceptions: Literal[True]) → list[str | BaseException][source]
abstract async generate_batch_async(inputs: dict[str, list], return_exceptions: Literal[False]) → list[str]
abstract async generate_batch_async(inputs: dict[str, list], return_exceptions: bool = False) → list[str] | list[str | BaseException]

Issue an asynchronous request to generate a list of responses based on a list of prompts.

Parameters
inputs : dict

Inputs containing prompt data.

return_exceptions : bool

Whether to return exceptions in the output list or raise them immediately.

Returns
list[str] | list[str | BaseException]

List of responses, or a mixed list of responses and exceptions when return_exceptions is True.
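Because the batch call itself is awaitable, several batches can be kept in flight at once, for example with asyncio.gather; a sketch with the same assumed "prompt" input:

```python
import asyncio


async def run_batches() -> None:
    batch_a = {"prompt": ["Summarize CVE-2021-44228.", "Summarize CVE-2014-0160."]}
    batch_b = {"prompt": ["What is a DGA domain?", "What is credential stuffing?"]}

    # Both batches are issued concurrently; each call returns a list of responses.
    results_a, results_b = await asyncio.gather(
        client.generate_batch_async(batch_a),
        client.generate_batch_async(batch_b),
    )
    print(len(results_a), len(results_b))


asyncio.run(run_batches())
```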

abstract get_input_names()[source]

Returns the names of the inputs to the model.

Returns
list[str]

List of input names.
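The reported names are the keyword arguments expected by generate() and the keys expected by the batch methods; a sketch that builds an inputs dict from them:

```python
names = client.get_input_names()        # e.g. ["prompt"]
assert names == ["prompt"], "this sketch assumes a single 'prompt' input"

inputs = {names[0]: ["First question?", "Second question?"]}
responses = client.generate_batch(inputs)
```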
