nat.plugins.autogen.callback_handler#

AutoGen callback handler for usage statistics collection.

This module provides profiling instrumentation for AutoGen agents by monkey-patching LLM client and tool classes to collect telemetry data.

Supported LLM Clients#

  • OpenAIChatCompletionClient: OpenAI and OpenAI-compatible APIs (NIM, LiteLLM)

  • AzureOpenAIChatCompletionClient: Azure OpenAI deployments

  • AnthropicBedrockChatCompletionClient: AWS Bedrock (Anthropic models)

Supported Methods#

  • create: Non-streaming LLM completions

  • create_stream: Streaming LLM completions

  • BaseTool.run_json: Tool executions

Attributes#

logger

Classes#

ClientPatchInfo

Stores original method references for a patched client class.

PatchedClients

Stores all patched client information for restoration.

AutoGenProfilerHandler

Callback handler for AutoGen that intercepts LLM and tool calls for profiling.

Module Contents#

logger#
class ClientPatchInfo#

Stores original method references for a patched client class.

create: collections.abc.Callable[..., Any] | None = None#
create_stream: collections.abc.Callable[..., Any] | None = None#
class PatchedClients#

Stores all patched client information for restoration.

openai: ClientPatchInfo#
azure: ClientPatchInfo#
bedrock: ClientPatchInfo#
tool: collections.abc.Callable[..., Any] | None = None#
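To see how these two containers cooperate during a patch/restore cycle, here is a minimal sketch. It assumes both classes are dataclasses with keyword constructors (which the attribute listings suggest) and uses a stub in place of a real AutoGen client:

    from nat.plugins.autogen.callback_handler import ClientPatchInfo, PatchedClients

    class StubClient:
        """Stand-in for OpenAIChatCompletionClient."""
        async def create(self, *args, **kwargs): ...

    # instrument(): remember the original method before replacing it.
    # Assumes dataclass keyword constructors.
    patched = PatchedClients(openai=ClientPatchInfo(),
                             azure=ClientPatchInfo(),
                             bedrock=ClientPatchInfo())
    patched.openai.create = StubClient.create
    # ... StubClient.create would be swapped for a profiling wrapper here ...

    # uninstrument(): put the saved original back.
    if patched.openai.create is not None:
        StubClient.create = patched.openai.create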
class AutoGenProfilerHandler#

Bases: nat.profiler.callbacks.base_callback_class.BaseProfilerCallback

Callback handler for AutoGen that intercepts LLM and tool calls for profiling.

This handler monkey-patches AutoGen client classes to collect usage statistics including token usage, inputs, outputs, and timing information.

Supported clients:
  • OpenAIChatCompletionClient (OpenAI, NIM, LiteLLM)

  • AzureOpenAIChatCompletionClient (Azure OpenAI)

  • AnthropicBedrockChatCompletionClient (AWS Bedrock)

Supported methods:
  • create (non-streaming)

  • create_stream (streaming)

  • BaseTool.run_json (tool execution)

Example:
>>> handler = AutoGenProfilerHandler()
>>> handler.instrument()
>>> # ... run AutoGen workflow ...
>>> handler.uninstrument()

Initialize the AutoGenProfilerHandler.

_lock#
last_call_ts#
step_manager#
_patched#
_instrumented = False#
instrument() → None#

Monkey-patch AutoGen methods with usage-stat collection logic.

Patches the following classes if available:
  • OpenAIChatCompletionClient.create, create_stream

  • AzureOpenAIChatCompletionClient.create, create_stream

  • AnthropicBedrockChatCompletionClient.create

  • BaseTool.run_json

Does nothing if already instrumented or if imports fail.
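A hedged sketch of the guard pattern described above: imports that fail simply disable instrumentation rather than break the workflow. The import path is AutoGen's; the function body is illustrative only, not the actual implementation:

    import logging

    logger = logging.getLogger(__name__)

    def instrument_sketch() -> None:
        try:
            from autogen_ext.models.openai import OpenAIChatCompletionClient
        except ImportError:
            logger.debug("autogen_ext not installed; skipping instrumentation")
            return
        # ... patch OpenAIChatCompletionClient.create / create_stream here ...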

uninstrument() → None#

Restore original AutoGen methods.

Should be called to clean up monkey patches, especially in test environments.
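In test suites, wrapping the pair in a fixture guarantees the patches are removed even when a test fails; a small pytest sketch using only the documented public API:

    import pytest

    from nat.plugins.autogen.callback_handler import AutoGenProfilerHandler

    @pytest.fixture
    def profiler_handler():
        handler = AutoGenProfilerHandler()
        handler.instrument()
        yield handler
        handler.uninstrument()  # always restore the original methods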

_extract_model_name(client: Any) → str#

Extract model name from AutoGen client instance.

Args:

client: AutoGen chat completion client instance

Returns:

str: Model name, or 'unknown_model' if extraction fails
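The lookup is presumably defensive, probing a few plausible attribute locations before giving up; the attribute names in this sketch are assumptions, not guaranteed client internals:

    from typing import Any

    def extract_model_name(client: Any) -> str:
        # Probe common spots where a client might keep the model id (assumed names).
        for attr in ("_model", "model", "_create_args"):
            value = getattr(client, attr, None)
            if isinstance(value, str) and value:
                return value
            if isinstance(value, dict) and value.get("model"):
                return value["model"]
        return "unknown_model"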

_extract_input_text(messages: list[Any]) → str#

Extract text content from message list.

Handles both dict-style messages and AutoGen typed message objects (UserMessage, AssistantMessage, SystemMessage).

Args:

messages: List of message dictionaries or AutoGen message objects

Returns:

str: Concatenated text content from messages
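A sketch of the two-shape handling: dict-style messages expose content by key, while typed objects such as UserMessage expose a content attribute (the exact shapes are assumptions):

    from typing import Any

    def extract_input_text(messages: list[Any]) -> str:
        parts: list[str] = []
        for msg in messages:
            if isinstance(msg, dict):                    # dict-style message
                content = msg.get("content", "")
            else:                                        # typed message object
                content = getattr(msg, "content", "")
            if isinstance(content, str) and content:
                parts.append(content)
        return "\n".join(parts)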

_extract_output_text(output: Any) → str#

Extract text content from LLM response.

Args:

output: LLM response object

Returns:

str: Concatenated text content from response

_extract_usage(output: Any) → dict[str, Any]#

Extract token usage from LLM response.

Args:

output: LLM response object

Returns:

dict: Token usage dictionary
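AutoGen's completion results carry a usage object with prompt and completion token counts; a hedged sketch of flattening that into a dict (the field names are assumptions, and the sibling _extract_output_text and _extract_chat_response helpers would follow the same defensive getattr pattern):

    from typing import Any

    def extract_usage(output: Any) -> dict[str, Any]:
        usage = getattr(output, "usage", None)
        if usage is None:
            return {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0}
        prompt = getattr(usage, "prompt_tokens", 0)
        completion = getattr(usage, "completion_tokens", 0)
        return {
            "prompt_tokens": prompt,
            "completion_tokens": completion,
            "total_tokens": prompt + completion,
        }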

_extract_chat_response(output: Any) → dict[str, Any]#

Extract chat response metadata from LLM response.

Args:

output: LLM response object

Returns:

dict: Chat response metadata

_create_llm_wrapper(original_func: collections.abc.Callable[..., Any]) → collections.abc.Callable[..., Any]#

Create wrapper for non-streaming LLM calls.

Args:

original_func: Original create method to wrap

Returns:

Callable: Wrapped function with profiling
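The wrapper presumably times the original coroutine and records inputs, outputs, and usage around it; a minimal sketch of that shape (the print call stands in for the real stat collection):

    import time
    from collections.abc import Callable
    from typing import Any

    def create_llm_wrapper(original_func: Callable[..., Any]) -> Callable[..., Any]:
        async def wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
            start = time.time()
            output = await original_func(self, *args, **kwargs)
            elapsed = time.time() - start
            print(f"LLM call finished in {elapsed:.3f}s")  # stand-in for stat collection
            return output
        return wrapper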

_create_stream_wrapper(original_func: collections.abc.Callable[..., Any]) → collections.abc.Callable[..., Any]#

Create wrapper for streaming LLM calls.

Args:

original_func: Original create_stream method to wrap

Returns:

Callable: Wrapped function with profiling
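Streaming calls return an async iterator, so the wrapper must yield chunks through unchanged while accumulating enough state to record usage once the stream ends; a hedged sketch:

    from collections.abc import AsyncIterator, Callable
    from typing import Any

    def create_stream_wrapper(original_func: Callable[..., Any]) -> Callable[..., Any]:
        async def wrapper(self: Any, *args: Any, **kwargs: Any) -> AsyncIterator[Any]:
            chunks: list[Any] = []
            async for chunk in original_func(self, *args, **kwargs):
                chunks.append(chunk)   # keep chunks for stats after the stream ends
                yield chunk            # pass each chunk through unchanged
            print(f"stream produced {len(chunks)} chunks")  # stand-in for stat collection
        return wrapper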

_create_tool_wrapper(original_func: collections.abc.Callable[..., Any]) → collections.abc.Callable[..., Any]#

Create wrapper for tool execution calls.

Args:

original_func: Original run_json method to wrap

Returns:

Callable: Wrapped function with profiling
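Tool execution is wrapped the same way; assuming run_json is a coroutine, as in recent AutoGen releases, the sketch mirrors the LLM wrapper but records the tool's name alongside its timing:

    import time
    from collections.abc import Callable
    from typing import Any

    def create_tool_wrapper(original_func: Callable[..., Any]) -> Callable[..., Any]:
        async def wrapper(self: Any, *args: Any, **kwargs: Any) -> Any:
            start = time.time()
            result = await original_func(self, *args, **kwargs)
            name = getattr(self, "name", "unknown_tool")
            print(f"tool {name} ran in {time.time() - start:.3f}s")  # stand-in for stat collection
            return result
        return wrapper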