class LLMGenerateNode(llm_client)

Bases: morpheus._lib.llm.LLMNodeBase

Generates responses from an LLM using the provided llm_client instance based on prompts provided as input from upstream nodes.

llm_client
    The client instance to use to generate responses.

Inputs
    The names of the inputs to this node. Defaults to ["prompt"].

Methods:

execute(self, context)
    Execute the current node with the given context instance.
get_input_names(self)
    Get the input names for the node.
async execute(self: morpheus._lib.llm.LLMNodeBase, context: morpheus._lib.llm.LLMContext) → Awaitable[morpheus._lib.llm.LLMContext]

Execute the current node with the given context instance.

All inputs for the given node should be fetched from the context, typically by calling either context.get_inputs, which fetches all inputs as a dict, or context.get_input, which fetches a specific input by name.

Similarly, the output of the node is written to the context using context.set_output.

Parameters:

context : morpheus._lib.llm.LLMContext
    Context instance to use for the execution.
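The fetch-inputs, generate, set-output contract described above can be sketched in plain Python. The `FakeContext`, `EchoClient`, and `GenerateNodeSketch` classes below are hypothetical stand-ins for illustration only; a real node receives a `morpheus._lib.llm.LLMContext` and is implemented in `morpheus._lib.llm`:

```python
import asyncio


class FakeContext:
    """Hypothetical stand-in for morpheus._lib.llm.LLMContext."""

    def __init__(self, inputs):
        self._inputs = dict(inputs)
        self.outputs = {}

    def get_inputs(self):
        # Fetch all inputs as a dict.
        return dict(self._inputs)

    def get_input(self, name="prompt"):
        # Fetch a single named input.
        return self._inputs[name]

    def set_output(self, value):
        # Write the node's output back to the context.
        self.outputs["output"] = value


class EchoClient:
    """Hypothetical LLM client that returns canned completions."""

    async def generate(self, prompts):
        return [f"response to: {p}" for p in prompts]


class GenerateNodeSketch:
    """Mimics the execute() contract of LLMGenerateNode."""

    def __init__(self, llm_client):
        self._client = llm_client

    def get_input_names(self):
        # A single "prompt" input, matching the documented default.
        return ["prompt"]

    async def execute(self, context):
        # Fetch the prompts, generate responses, write them back.
        prompts = context.get_input("prompt")
        responses = await self._client.generate(prompts)
        context.set_output(responses)
        return context


ctx = FakeContext({"prompt": ["Hello"]})
node = GenerateNodeSketch(EchoClient())
asyncio.run(node.execute(ctx))
print(ctx.outputs["output"])  # ['response to: Hello']
```

This only models the data flow through the context; the real node delegates generation to whatever llm_client implementation it was constructed with.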

get_input_names(self: morpheus._lib.llm.LLMNodeBase) → List[str]

Get the input names for the node.

Returns:

    The input names for the node.
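One way an engine could use get_input_names() is to verify that upstream nodes supply every input a node declares before execution. The `check_wiring` helper below is purely illustrative and not a morpheus API:

```python
def check_wiring(node_input_names, upstream_outputs):
    """Return the inputs a node declares that upstream does not provide."""
    return [name for name in node_input_names if name not in upstream_outputs]


# An LLMGenerateNode-style node declares a single "prompt" input by default.
missing = check_wiring(["prompt"], upstream_outputs={"prompt": ["Hi"]})
print(missing)  # [] -> the wiring is complete
```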

© Copyright 2024, NVIDIA. Last updated on Apr 25, 2024.