nat.agent.base#
Attributes#
Classes#
- AgentDecision: Create a collection of name/value pairs.
- BaseAgent: Helper class that provides a standard way to create an ABC using inheritance.
Module Contents#
- logger#
- TOOL_NOT_FOUND_ERROR_MESSAGE = 'There is no tool named {tool_name}. Tool must be one of {tools}.'#
- INPUT_SCHEMA_MESSAGE = '. Arguments must be provided as a valid JSON object following this format: {schema}'#
- NO_INPUT_ERROR_MESSAGE = 'No human input received to the agent, Please ask a valid question.'#
- AGENT_LOG_PREFIX = '[AGENT]'#
- AGENT_CALL_LOG_MESSAGE#
- TOOL_CALL_LOG_MESSAGE#
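The module-level message constants above are plain Python format strings. As a quick sketch (the tool names below are hypothetical, not part of the module):

```python
TOOL_NOT_FOUND_ERROR_MESSAGE = 'There is no tool named {tool_name}. Tool must be one of {tools}.'
INPUT_SCHEMA_MESSAGE = '. Arguments must be provided as a valid JSON object following this format: {schema}'

# Fill the template with str.format(); "search" and the tool list are illustrative.
msg = TOOL_NOT_FOUND_ERROR_MESSAGE.format(tool_name="search", tools=["calculator", "wiki"])
print(msg)  # There is no tool named search. Tool must be one of ['calculator', 'wiki'].
```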
- class AgentDecision(*args, **kwds)#
Bases: enum.Enum
Create a collection of name/value pairs.
Example enumeration:

>>> class Color(Enum):
...     RED = 1
...     BLUE = 2
...     GREEN = 3

Access them by:

attribute access:

>>> Color.RED
<Color.RED: 1>

value lookup:

>>> Color(1)
<Color.RED: 1>

name lookup:

>>> Color['RED']
<Color.RED: 1>

Enumerations can be iterated over, and know how many members they have:

>>> len(Color)
3
>>> list(Color)
[<Color.RED: 1>, <Color.BLUE: 2>, <Color.GREEN: 3>]
Methods can be added to enumerations, and members can have their own attributes – see the documentation for details.
- TOOL = 'tool'#
- END = 'finished'#
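An agent's routing step typically branches on this decision value. A minimal sketch, using a local stand-in enum that mirrors the two documented members rather than importing the real class:

```python
from enum import Enum

# Local stand-in mirroring the documented members of nat.agent.base.AgentDecision.
class AgentDecision(Enum):
    TOOL = 'tool'
    END = 'finished'

# Route on the decision: dispatch to a tool call or terminate.
decision = AgentDecision.TOOL
if decision is AgentDecision.TOOL:
    next_node = 'call_tool'   # hypothetical graph node name
else:
    next_node = 'end'         # hypothetical graph node name

print(AgentDecision('finished'))  # value lookup: AgentDecision.END
```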
- class BaseAgent(
- llm: langchain_core.language_models.BaseChatModel,
- tools: list[langchain_core.tools.BaseTool],
- callbacks: list[langchain_core.callbacks.AsyncCallbackHandler] | None = None,
- detailed_logs: bool = False,
- )#
Bases: abc.ABC
Helper class that provides a standard way to create an ABC using inheritance.
- llm#
- tools#
- callbacks = []#
- detailed_logs = False#
- graph = None#
- async _stream_llm(
- runnable: Any,
- inputs: dict[str, Any],
- config: langchain_core.runnables.RunnableConfig | None = None,
- )#
Stream from LLM runnable. Retry logic is handled automatically by the underlying LLM client.
Parameters#
- runnable : Any
The LLM runnable (prompt | llm or similar)
- inputs : dict[str, Any]
The inputs to pass to the runnable
- config : RunnableConfig | None
The config to pass to the runnable (should include callbacks)
Returns#
- AIMessage
The LLM response
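The streaming-then-single-message shape described above can be sketched without any LLM client: consume an async iterator of content chunks and accumulate them into one final response. The stub generator below stands in for a real runnable's `astream(...)`; all names are illustrative.

```python
import asyncio

async def fake_stream(inputs):
    # Stand-in for a runnable's token stream; yields content chunks.
    for chunk in ["Hel", "lo, ", "world"]:
        yield chunk

async def stream_llm(inputs):
    # Accumulate streamed chunks into one final message, matching the
    # documented contract of returning a single response at the end.
    parts = []
    async for chunk in fake_stream(inputs):
        parts.append(chunk)
    return "".join(parts)

result = asyncio.run(stream_llm({"question": "hi"}))
print(result)  # Hello, world
```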
- async _call_llm(
- messages: list[langchain_core.messages.BaseMessage],
- )#
Call the LLM directly. Retry logic is handled automatically by the underlying LLM client.
Parameters#
- messages : list[BaseMessage]
The messages to send to the LLM
Returns#
- AIMessage
The LLM response
- async _call_tool(
- tool: langchain_core.tools.BaseTool,
- tool_input: dict[str, Any] | str,
- config: langchain_core.runnables.RunnableConfig | None = None,
- max_retries: int = 3,
- )#
Call a tool with retry logic and error handling.
Parameters#
- tool : BaseTool
The tool to call
- tool_input : dict[str, Any] | str
The input to pass to the tool
- config : RunnableConfig | None
The config to pass to the tool
- max_retries : int
Maximum number of retry attempts (default: 3)
Returns#
- ToolMessage
The tool response
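The retry-and-error-handling behaviour described for `_call_tool` can be sketched with a plain retry loop; this is an illustrative stand-in, not the actual implementation, and the tool here is just an async callable:

```python
import asyncio

async def call_tool_with_retries(tool, tool_input, max_retries=3):
    # Re-invoke the tool on failure, up to max_retries attempts; on
    # exhaustion, surface the last error as a string instead of raising.
    last_error = None
    for attempt in range(1, max_retries + 1):
        try:
            return await tool(tool_input)
        except Exception as exc:  # real code would catch narrower error types
            last_error = exc
    return f"Tool failed after {max_retries} attempts: {last_error}"

# A flaky stand-in tool that fails twice, then succeeds on the third call.
calls = {"n": 0}
async def flaky_tool(tool_input):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return f"ok: {tool_input}"

result = asyncio.run(call_tool_with_retries(flaky_tool, "query"))
print(result)  # ok: query
```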
- _log_tool_response(tool_name: str, tool_input: Any, tool_response: str, max_chars: int = 1000) -> None#
Log tool response with consistent formatting and length limits.
Parameters#
- tool_name : str
The name of the tool that was called
- tool_input : Any
The input that was passed to the tool
- tool_response : str
The response from the tool
- max_chars : int
Maximum number of characters to log (default: 1000)
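The length-limiting behaviour can be sketched by clipping the response to `max_chars` before formatting the log line. The function name and log layout below are assumptions for illustration, not the actual implementation; only the `[AGENT]` prefix comes from the documented constant:

```python
def format_tool_log(tool_name, tool_input, tool_response, max_chars=1000):
    # Clip the response so long tool outputs don't flood the logs.
    if len(tool_response) > max_chars:
        tool_response = tool_response[:max_chars] + "..."
    return f"[AGENT] Tool: {tool_name} | Input: {tool_input} | Response: {tool_response}"

line = format_tool_log("search", {"q": "x"}, "y" * 2000, max_chars=10)
print(line)  # response is truncated to 10 characters plus an ellipsis
```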
- _parse_json(json_string: str) -> dict[str, Any]#
Safely parse JSON with graceful error handling. If JSON parsing fails, returns an empty dict or error info.
Parameters#
- json_string : str
The JSON string to parse
Returns#
- dict[str, Any]
The parsed JSON or error information
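The documented contract, "return the parsed JSON on success, or error information on failure," can be sketched as follows; this is an illustrative version, not the actual implementation:

```python
import json
from typing import Any

def parse_json_safely(json_string: str) -> dict[str, Any]:
    # On success return the parsed object; on failure return a dict
    # describing the error instead of raising.
    try:
        parsed = json.loads(json_string)
        # Wrap non-dict values so the return type stays dict[str, Any].
        return parsed if isinstance(parsed, dict) else {"value": parsed}
    except json.JSONDecodeError as exc:
        return {"error": str(exc)}

print(parse_json_safely('{"a": 1}'))  # {'a': 1}
print(parse_json_safely('not json'))  # dict containing an "error" key
```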