morpheus.llm

All objects related to using LLMs in Morpheus

class InputMap
Attributes
external_name

The name of the node that will be mapped to this input.

internal_name

The internal node name that the external node maps to.

property external_name

The name of the node that will be mapped to this input. Use a leading ‘/’ to indicate a sibling node; otherwise it is treated as referring to the parent node. A specific node output can also be given, such as ‘/sibling_node/output1’ to map the output ‘output1’ of ‘sibling_node’ to this input, and a wildcard such as ‘/sibling_node/*’ can be used to match all internal node names.

property internal_name

The internal node name that the external node maps to. Must match an input returned from get_input_names() of the desired node. Defaults to ‘-’, which is a placeholder for the default input of the node. Use a wildcard ‘*’ to match all inputs of the node (a wildcard must also be used on the external mapping).
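
As a rough illustration of this mapping syntax, the sketch below shows the forms typically passed through the inputs argument of add_node. It is a hedged example only: it assumes an existing LLMEngine named engine and a node instance my_node from a prior setup, and the node names "extracter" and "prompts" are hypothetical. Each call shows one alternative form; in practice a node is added once.

    # Hedged sketch; engine, my_node, "extracter" and "prompts" are assumed
    # from a prior setup and are not part of the API.

    # Default output of the sibling node "extracter" mapped to this node's
    # default ('-') input:
    engine.add_node("prompts", inputs=["/extracter"], node=my_node)

    # A specific sibling output mapped to a named input of this node:
    engine.add_node("prompts", inputs=[("/extracter/output1", "input1")], node=my_node)

    # Wildcard: every output of "extracter" mapped to the matching inputs of
    # this node (a wildcard is required on both the external and internal side):
    engine.add_node("prompts", inputs=[("/extracter/*", "*")], node=my_node)

    # Without a leading '/', the external name refers to an input of the
    # parent node rather than a sibling:
    engine.add_node("prompts", inputs=[("question", "question")], node=my_node)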

class LLMContext
Attributes
full_name

input_map

name

parent

view_outputs

Methods

get_input(*args, **kwargs) Overloaded function.
get_inputs(self)

message(self)

push(self, name, inputs)

set_output(*args, **kwargs) Overloaded function.
task(self)

property full_name

get_input(*args, **kwargs)

Overloaded function.

  1. get_input(self: morpheus._lib.llm.LLMContext) -> object

  2. get_input(self: morpheus._lib.llm.LLMContext, node_name: str) -> object

get_inputs(self: morpheus._lib.llm.LLMContext) → dict

property input_map

message(self: morpheus._lib.llm.LLMContext) → morpheus._lib.messages.ControlMessage

property name

property parent

push(self: morpheus._lib.llm.LLMContext, name: str, inputs: List[morpheus._lib.llm.InputMap]) → morpheus._lib.llm.LLMContext

set_output(*args, **kwargs)

Overloaded function.

  1. set_output(self: morpheus._lib.llm.LLMContext, outputs: object) -> None

  2. set_output(self: morpheus._lib.llm.LLMContext, output_name: str, output: object) -> None

task(self: morpheus._lib.llm.LLMContext) → morpheus._lib.llm.LLMTask

property view_outputs

class LLMEngine

Methods

add_node(self, name, *[, inputs], node[, is_output]) Add an LLMNode to the current node.
add_task_handler(self, inputs, handler)

execute(self, context) Execute the current node with the given context instance.
get_input_names(self) Get the input names for the node.
run(self, message)

add_task_handler(self: morpheus._lib.llm.LLMEngine, inputs: List[Union[morpheus._lib.llm.InputMap, str, Tuple[str, str], morpheus._lib.llm.LLMNodeRunner]], handler: morpheus._lib.llm.LLMTaskHandler) → None

run(self: morpheus._lib.llm.LLMEngine, message: morpheus._lib.messages.ControlMessage) → Awaitable[List[morpheus._lib.messages.ControlMessage]]
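
A minimal end-to-end sketch of wiring and running an engine is shown below. It is hedged: it assumes LLMEngine, LLMLambdaNode and LLMTaskHandler are importable from morpheus.llm, that LLMLambdaNode wraps an async function, and that run() consumes tasks of type "llm_engine" from the ControlMessage. The node names, the placeholder handler and the task payload are hypothetical.

    import asyncio

    from morpheus.llm import LLMEngine, LLMLambdaNode, LLMTaskHandler
    from morpheus.messages import ControlMessage

    async def make_prompt() -> list[str]:
        # A source node with no inputs; it simply produces a value.
        return ["What is Morpheus?"]

    async def mock_llm(prompt: list[str]) -> list[str]:
        # Stand-in for a real LLM call.
        return [f"echo: {p}" for p in prompt]

    class PassThroughHandler(LLMTaskHandler):
        # Placeholder handler; a fuller sketch appears under LLMTaskHandler below.
        def get_input_names(self):
            return ["response"]

        async def try_handle(self, context):
            return [context.message()]

    engine = LLMEngine()
    engine.add_node("prompt", node=LLMLambdaNode(make_prompt))
    engine.add_node("llm", inputs=[("/prompt", "prompt")], node=LLMLambdaNode(mock_llm), is_output=True)
    engine.add_task_handler(inputs=[("/llm", "response")], handler=PassThroughHandler())

    async def main():
        message = ControlMessage()
        # run() pops tasks of this type from the message (hypothetical payload).
        message.add_task("llm_engine", {"task_type": "example"})
        return await engine.run(message)

    results = asyncio.run(main())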

class LLMLambdaNode

Methods

execute(self, context)

get_input_names(self)

execute(self: morpheus._lib.llm.LLMLambdaNode, context: morpheus._lib.llm.LLMContext) → Awaitable[morpheus._lib.llm.LLMContext]

get_input_names(self: morpheus._lib.llm.LLMLambdaNode) → List[str]
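
As a small, hedged example of the wrapped-function pattern (assuming the constructor accepts an async function and derives the input names from its parameters):

    from morpheus.llm import LLMLambdaNode

    async def concat(prefix: str, text: str) -> str:
        return prefix + text

    node = LLMLambdaNode(concat)
    # Expected to reflect the wrapped coroutine's parameter names:
    print(node.get_input_names())  # ['prefix', 'text']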

class LLMNode

Methods

add_node(self, name, *[, inputs], node[, is_output]) Add an LLMNode to the current node.
execute(self, context) Execute the current node with the given context instance.
get_input_names(self) Get the input names for the node.
add_node(self: morpheus._lib.llm.LLMNode, name: str, *, inputs: object = None, node: morpheus._lib.llm.LLMNodeBase, is_output: bool = False) → morpheus._lib.llm.LLMNodeRunner

Add an LLMNode to the current node.

Parameters
name

The name of the node to add

inputs

List of input mappings to use for the node, in the form of [(external_name, internal_name), ...]. If unspecified, the node’s input_names will be used.

node

The node to add

is_output

Indicates if the node is an output node, by default False
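
A hedged sketch of composing sub-nodes is shown below: an LLMNode subclass wires two hypothetical LLMLambdaNode instances together in its constructor. The node names and helper coroutines are illustrative, not part of the API.

    from morpheus.llm import LLMLambdaNode, LLMNode

    async def build_prompt(question: str) -> str:
        return f"Answer concisely: {question}"

    async def fake_llm(prompt: str) -> str:
        return f"(model output for: {prompt})"

    class PromptAndGenerate(LLMNode):

        def __init__(self):
            super().__init__()

            # "question" (no leading '/') is expected from this node's parent.
            self.add_node("prompt", inputs=[("question", "question")], node=LLMLambdaNode(build_prompt))

            # The sibling node "prompt" feeds the "prompt" input; is_output=True
            # exposes this sub-node's result as the composite node's output.
            self.add_node("generate", inputs=[("/prompt", "prompt")], node=LLMLambdaNode(fake_llm), is_output=True)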

class LLMNodeBase

Methods

execute(self, context) Execute the current node with the given context instance.
get_input_names(self) Get the input names for the node.
execute(self: morpheus._lib.llm.LLMNodeBase, context: morpheus._lib.llm.LLMContext) → Awaitable[morpheus._lib.llm.LLMContext]

Execute the current node with the given context instance.

All inputs for the given node should be fetched from the context, typically by calling either context.get_inputs to fetch all inputs as a dict, or context.get_input to fetch a specific input.

Similarly, the output of the node is written to the context using context.set_output.

Parameters
context : morpheus._lib.llm.LLMContext

Context instance to use for the execution

get_input_names(self: morpheus._lib.llm.LLMNodeBase) → List[str]

Get the input names for the node.

Returns
list[str]

The input names for the node
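
A minimal sketch of this pattern is shown below, assuming LLMNodeBase can be subclassed from Python with get_input_names() and an async execute(); the transformation itself is purely illustrative.

    from morpheus.llm import LLMContext, LLMNodeBase

    class UppercaseNode(LLMNodeBase):

        def get_input_names(self) -> list[str]:
            # The single input this node expects to be mapped in the context.
            return ["text"]

        async def execute(self, context: LLMContext) -> LLMContext:
            # Fetch all mapped inputs as a dict; context.get_input("text")
            # would fetch just this one value.
            inputs = context.get_inputs()

            # Write the result to the context's default output for downstream
            # nodes (set_output("name", value) would set a named output instead).
            context.set_output(inputs["text"].upper())

            return context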

class LLMNodeRunner
Attributes
inputs

name

parent_input_names

sibling_input_names

Methods

execute(self, context)

execute(self: morpheus._lib.llm.LLMNodeRunner, context: morpheus._lib.llm.LLMContext) → Awaitable[morpheus._lib.llm.LLMContext]

property inputs

property name

property parent_input_names

property sibling_input_names

class LLMTask
Attributes
task_type

Methods

get(*args, **kwargs) Overloaded function.
get(*args, **kwargs)

Overloaded function.

  1. get(self: morpheus._lib.llm.LLMTask, key: str) -> object

  2. get(self: morpheus._lib.llm.LLMTask, key: str, default_value: object) -> object

property task_type

class LLMTaskHandler

Acts as a sink for an LLMEngine, emitting results as a ControlMessage.

Methods

get_input_names(self) Get the input names for the task handler.
try_handle(self, context) Convert the given context into a list of ControlMessage instances.
get_input_names(self: morpheus._lib.llm.LLMTaskHandler) → List[str]

Get the input names for the task handler.

Returns
list[str]

The input names for the task handler.

try_handle(self: morpheus._lib.llm.LLMTaskHandler, context: morpheus._lib.llm.LLMContext) → Awaitable[Optional[List[morpheus._lib.messages.ControlMessage]]]

Convert the given context into a list of ControlMessage instances.

Parameters
context : morpheus._lib.llm.LLMContext

Context instance to use for the execution

Returns
Task[Optional[list[ControlMessage]]]
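
A hedged sketch of a concrete handler following this contract is shown below, assuming LLMTaskHandler can be subclassed from Python with an async try_handle() and that ControlMessage.set_metadata is available; the metadata key is hypothetical.

    from morpheus.llm import LLMContext, LLMTaskHandler

    class ResponseHandler(LLMTaskHandler):

        def get_input_names(self) -> list[str]:
            return ["response"]

        async def try_handle(self, context: LLMContext):
            # Fetch the upstream node output that was mapped to this handler.
            response = context.get_input("response")

            # Annotate the original message and emit it downstream. Returning
            # None instead would indicate the context could not be handled.
            message = context.message()
            message.set_metadata("llm_response", response)

            return [message]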

Modules

morpheus.llm.nodes

morpheus.llm.services

morpheus.llm.task_handlers
