LLM Flows

This section explains how to create LLM-driven flows in Colang 2.0.

Using Colang, you can describe complex patterns of interaction. However, as a developer, you can never describe all the potential paths an interaction can take. This is where the LLM can help, by generating continuations of a flow at runtime.

The Dialog Rails and the Input Rails examples show how to use the LLM to generate continuations dynamically. The example below is similar to the dialog rails example, but it instructs the LLM to generate the bot response directly:

examples/v2_x/tutorial/llm_flows/main.co
import core
import llm

flow main
  """You are an assistant that should talk to the user about cars.
  Politely decline to talk about anything else.

  Last user question is: "{{ question }}"
  Generate the output in the following format:

  bot say "<<the response>>"
  """
  $question = await user said something
  ...

The main flow above waits for the user said something flow to match a user utterance, stores the result in the $question local variable, and then invokes the LLM through the ... generation operator to generate the continuation of the flow.
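For illustration, if the user asks "what can you do?", the continuation generated by the LLM at the ... point might be a single statement in the format requested by the NLD (the exact wording is not fixed; it depends on the model and the prompt):

bot say "I am an assistant that can talk to you about cars. Is there anything specific you would like to know?"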

Note

Context variables can be included in the NLD (Natural Language Description) of a flow (the equivalent of docstrings in Python) using double curly braces (the Jinja2 syntax).
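For example, the main flow above could be varied to also template the topic of the conversation through a variable set inside the flow. This is only a hypothetical sketch to illustrate the templating; the $topic variable is not part of the tutorial example:

import core
import llm

flow main
  """You are an assistant that should only talk to the user about {{ topic }}.
  Politely decline to talk about anything else.

  Last user question is: "{{ question }}"
  Generate the output in the following format:

  bot say "<<the response>>"
  """
  $topic = "cars"
  $question = await user said something
  ...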

Testing

$ nemoguardrails chat --config=examples/v2_x/tutorial/llm_flows

> hi

Hello! How can I assist you with cars today?

> what can you do?

I am an assistant that can talk to you about cars. Is there anything specific you would like to know?

This section concludes the Colang 2.0 getting started guide. Check out the Recommended Next Steps for ways to continue learning about Colang 2.0.