Workflow Configuration#
AgentIQ workflows are defined by a YAML configuration file, which specifies which entities (functions, LLMs, embedders, etc.) to use in the workflow, along with general configuration settings.
The configuration attributes of each entity in AgentIQ are defined by a Configuration Object. This object defines the type, and optionally the default value, of each attribute. Any attribute without a default value must be specified in the configuration file.
Configuration Object#
Each AgentIQ tool requires a configuration object which inherits from `FunctionBaseConfig`. The `FunctionBaseConfig` class, and ultimately all AgentIQ configuration objects, are subclasses of the `pydantic.BaseModel` class from the Pydantic library, which provides a way to define and validate configuration objects. Each configuration object defines the parameters used to create runtime instances of functions (or other component types), each with different functionality based on configuration settings. It is possible to define nested functions that access other component runtime instances by name. These could be other `functions`, `llms`, `embedders`, `retrievers`, or `memory`. To facilitate nested runtime instance discovery, each component must be initialized in order based on the dependency tree. To enable this, configuration object parameters that refer to other component instances by name must use a `ComponentRef` type that matches the referred component's type. The supported `ComponentRef` types are enumerated below:
- `FunctionRef`: Refers to a registered function by its instance name in the `functions` section of the configuration object.
- `LLMRef`: Refers to a registered LLM by its instance name in the `llms` section of the configuration object.
- `EmbedderRef`: Refers to a registered embedder by its instance name in the `embedders` section of the configuration object.
- `RetrieverRef`: Refers to a registered retriever by its instance name in the `retrievers` section of the configuration object.
- `MemoryRef`: Refers to a registered memory by its instance name in the `memory` section of the configuration object.
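As a concrete illustration, the excerpt below is taken from the simple workflow configuration examined in the next section. The `embedder_name` parameter is an `EmbedderRef` that resolves to an instance defined in the `embedders` section, and `llm_name` is an `LLMRef` that resolves to an instance defined in the `llms` section; because of these references, AgentIQ initializes the referred instances first.

```yaml
functions:
  webpage_query:
    _type: webpage_query
    # EmbedderRef: resolves by name to embedders.nv-embedqa-e5-v5 below
    embedder_name: nv-embedqa-e5-v5

embedders:
  nv-embedqa-e5-v5:
    _type: nim
    model_name: nvidia/nv-embedqa-e5-v5

workflow:
  _type: react_agent
  # LLMRef: resolves by name to llms.nim_llm below
  llm_name: nim_llm

llms:
  nim_llm:
    _type: nim
    model_name: meta/llama-3.1-70b-instruct
```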
Workflow Configuration File#
The workflow configuration file is a YAML file that specifies the tools and models to use in the workflow, along with general configuration settings. To illustrate how these are organized, we will examine the configuration of the simple workflow, `examples/simple/configs/config.yml`:
```yaml
functions:
  webpage_query:
    _type: webpage_query
    webpage_url: https://docs.smith.langchain.com/user_guide
    description: "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!"
    embedder_name: nv-embedqa-e5-v5
    chunk_size: 512
  current_datetime:
    _type: current_datetime

llms:
  nim_llm:
    _type: nim
    model_name: meta/llama-3.1-70b-instruct
    temperature: 0.0

embedders:
  nv-embedqa-e5-v5:
    _type: nim
    model_name: nvidia/nv-embedqa-e5-v5

workflow:
  _type: react_agent
  tool_names: [webpage_query, current_datetime]
  llm_name: nim_llm
  verbose: true
  retry_parsing_errors: true
  max_retries: 3
```
From the above we see that the file is divided into four sections: `functions`, `llms`, `embedders`, and `workflow`. There are additional optional sections not used in the above example: `general`, `memory`, `retrievers`, and `eval`.
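Taken together, a complete configuration file has the following overall shape (a sketch only; each section's contents depend on the components used, and the empty mappings here are placeholders):

```yaml
general: {}     # optional: global AgentIQ settings
functions: {}   # tools available to the workflow
llms: {}        # LLM model instances
embedders: {}   # embedding model instances
retrievers: {}  # optional: vector store retrievers
memory: {}      # optional: memory layer integrations
eval: {}        # optional: evaluation settings
workflow: {}    # ties the functions and llms together
```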
functions#

The `functions` section contains the tools used in the workflow; in our example we have two: `webpage_query` and `current_datetime`. By convention, the key matches the `_type` value; however, this is not a strict requirement, and distinct keys can be used to include multiple instances of the same tool, as shown in the sketch below.
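For instance, a configuration could define two `webpage_query` instances pointed at different pages (the keys and the second URL here are hypothetical):

```yaml
functions:
  langsmith_query:
    _type: webpage_query
    webpage_url: https://docs.smith.langchain.com/user_guide
    description: "Search for information about LangSmith."
    embedder_name: nv-embedqa-e5-v5
    chunk_size: 512
  other_docs_query:
    _type: webpage_query
    webpage_url: https://example.com/other-docs   # hypothetical URL
    description: "Search for information in the other documentation."
    embedder_name: nv-embedqa-e5-v5
    chunk_size: 512
```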
llms#

This section contains the models used in the workflow. The `_type` value refers to the API hosting the model; in this case, `nim` refers to a NIM model hosted on `build.nvidia.com`. The supported API types are `nim` and `openai`.

The `model_name` value then needs to match a model hosted by the API; in our example we are using the `meta/llama-3.1-70b-instruct` model.
Both the `nim` and `openai` APIs support API-specific attributes. For `nim` these are defined in the `NIMModelConfig` class, and for `openai` these are defined in the `OpenAIModelConfig` class.
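For example, a configuration could define one model instance per API type. This is a sketch; the `openai_llm` key and its model name are illustrative, and the full attribute sets are defined by the `NIMModelConfig` and `OpenAIModelConfig` classes:

```yaml
llms:
  nim_llm:
    _type: nim
    model_name: meta/llama-3.1-70b-instruct
    temperature: 0.0
  openai_llm:
    _type: openai
    model_name: gpt-4o   # illustrative model name
    temperature: 0.0
```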
embedders#

This section follows the same structure as the `llms` section and serves as a way to separate the embedding models from the LLMs. In our example, we are using the `nvidia/nv-embedqa-e5-v5` model.
workflow#

This section ties the previous sections together by defining the tools and LLM models to use. The `tool_names` attribute lists the tool names from the `functions` section, while the `llm_name` attribute specifies the LLM model to use.

The `_type` value refers to the workflow type; in our example we are using a `react_agent` workflow. You can also use the `tool_calling_agent` workflow type. The parameters for each are specified by the `ReActAgentWorkflowConfig` and `ToolCallAgentWorkflowConfig` classes, respectively.
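A `tool_calling_agent` version of the example's `workflow` section might look like the following. This is a sketch that assumes the two agent types share the parameters shown; the authoritative parameter set is defined by `ToolCallAgentWorkflowConfig`, and the `react_agent`-specific `retry_parsing_errors` and `max_retries` parameters are omitted:

```yaml
workflow:
  _type: tool_calling_agent
  tool_names: [webpage_query, current_datetime]
  llm_name: nim_llm
  verbose: true
```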
general#

This section contains general configuration settings for AgentIQ which are not specific to any workflow. The parameters for this section are specified by the `GeneralConfig` class.
Note

The `use_uvloop` parameter specifies whether to use the `uvloop` event loop. It is set to `true` by default and can provide a significant speedup in some cases; however, it can also make it difficult to debug workflow issues. For debugging purposes it is recommended to set this to `false`:

```yaml
general:
  use_uvloop: false
```
eval#
This section contains the evaluation settings for the workflow. Refer to Evaluating AgentIQ Workflows for more information.
memory#

This section configures the integration of memory layers with tools such as the Mem0 Platform. It follows the same format as the `llms` section. Refer to `docs/source/concepts/memory.md` for an example of how this is used.
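A hypothetical `memory` section following the `llms` pattern might look like this; the instance name and `_type` value are illustrative, so refer to the memory documentation for the actual registered type names and attributes:

```yaml
memory:
  saas_memory:
    _type: mem0_memory   # illustrative _type for a Mem0 Platform integration
```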
retrievers#

This section configures retrievers for vector stores. It follows the same format as the `llms` section. Refer to the `examples/rag_demo` example workflow for an example of how this is used.
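Following the same pattern, a hypothetical `retrievers` section might look like the following; the instance name, `_type`, and attribute names are all illustrative, and the actual attributes depend on the retriever implementation:

```yaml
retrievers:
  my_retriever:
    _type: milvus_retriever        # illustrative retriever type
    uri: http://localhost:19530    # illustrative connection settings
    collection_name: my_documents
    embedding_model: nv-embedqa-e5-v5
```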