nat.plugins.autogen.llm#

AutoGen LLM client registrations for NAT.

This module provides AutoGen-compatible LLM client wrappers for the following providers:

Supported Providers#

  • OpenAI: Direct OpenAI API integration via OpenAIChatCompletionClient

  • Azure OpenAI: Azure-hosted OpenAI models via AzureOpenAIChatCompletionClient

  • NVIDIA NIM: OpenAI-compatible endpoints for NVIDIA models

  • LiteLLM: Unified interface to multiple LLM providers via OpenAI-compatible client

  • AWS Bedrock: Amazon Bedrock models (Claude/Anthropic) via AnthropicBedrockChatCompletionClient

Each wrapper (as sketched below):

  • Patches clients with NAT retry logic from RetryMixin

  • Injects chain-of-thought prompts when ThinkingMixin is configured

  • Removes NAT-specific config keys before instantiating AutoGen clients
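
A minimal sketch of this flow, assuming only a client whose async create method accepts a list of messages (as AutoGen chat-completion clients do). The setting names num_retries and thinking_prompt, and the dict-shaped system message, are illustrative assumptions rather than NAT's actual mixin API:

```python
import asyncio
from typing import Any


def sketch_patch_client(client: Any, num_retries: int, thinking_prompt: str | None) -> Any:
    """Illustrative stand-in for the per-wrapper patching step."""
    original_create = client.create

    async def patched_create(messages: list[Any], **kwargs: Any) -> Any:
        # ThinkingMixin-style behavior: prepend a chain-of-thought system prompt.
        if thinking_prompt:
            messages = [{"role": "system", "content": thinking_prompt}, *messages]
        # RetryMixin-style behavior: retry failed calls with exponential backoff.
        for attempt in range(num_retries + 1):
            try:
                return await original_create(messages, **kwargs)
            except Exception:
                if attempt == num_retries:
                    raise
                await asyncio.sleep(2 ** attempt)

    client.create = patched_create
    return client
```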

Attributes#

logger

ModelType

Functions#

_patch_autogen_client_based_on_config(→ ModelType)

Patch AutoGen client with NAT mixins (retry, thinking).

_close_autogen_client(→ None)

Close an AutoGen client if it has a close method.

openai_autogen(...)

Create OpenAI client for AutoGen integration.

azure_openai_autogen(...)

Create Azure OpenAI client for AutoGen integration.

_strip_strict_from_tools_deep(→ dict[str, Any])

Remove 'strict' field from tool definitions in request kwargs for NIM compatibility.

_patch_nim_client_for_tools(→ ModelType)

Patch AutoGen client's underlying OpenAI client to strip 'strict' from tools for NIM.

nim_autogen(...)

Create NVIDIA NIM client for AutoGen integration.

litellm_autogen(...)

Create LiteLLM client for AutoGen integration.

bedrock_autogen(...)

Create AWS Bedrock client for AutoGen integration.

Module Contents#

logger#
ModelType#
_patch_autogen_client_based_on_config(
client: ModelType,
llm_config: nat.data_models.llm.LLMBaseConfig,
) → ModelType#

Patch AutoGen client with NAT mixins (retry, thinking).

Args:

client (ModelType): The AutoGen LLM client to patch.

llm_config (LLMBaseConfig): The LLM configuration containing mixin settings.

Returns:

ModelType: The patched AutoGen LLM client.

async _close_autogen_client(client: Any) → None#

Close an AutoGen client if it has a close method.

Args:

client (Any): The AutoGen client to close.
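
A minimal sketch of this duck-typed shutdown, assuming nothing beyond an optional async close() on the client:

```python
from typing import Any


async def sketch_close_client(client: Any) -> None:
    # Only call close() when the client actually provides one; clients with
    # no network resources to release are left untouched.
    close = getattr(client, "close", None)
    if close is not None:
        await close()
```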

async openai_autogen(
llm_config: nat.llm.openai_llm.OpenAIModelConfig,
_builder: nat.builder.builder.Builder,
) → collections.abc.AsyncGenerator[ModelType, None]#

Create OpenAI client for AutoGen integration.

Args:

llm_config (OpenAIModelConfig): OpenAI model configuration.

_builder (Builder): NAT builder instance.

Yields:

AsyncGenerator[ModelType, None]: Configured AutoGen OpenAI client
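
A hedged usage sketch. In a real NAT workflow the client is resolved from configuration rather than by calling the factory directly; the model_name field and the UserMessage import path are assumptions about the current OpenAIModelConfig and AutoGen 0.4+ APIs:

```python
from autogen_core.models import UserMessage  # assumed AutoGen 0.4+ import path

from nat.llm.openai_llm import OpenAIModelConfig
from nat.plugins.autogen.llm import openai_autogen


async def demo(builder) -> None:
    llm_config = OpenAIModelConfig(model_name="gpt-4o")  # field name is an assumption
    # The factory yields exactly one patched client and closes it when the
    # generator is finalized.
    async for client in openai_autogen(llm_config, builder):
        result = await client.create([UserMessage(content="Hello!", source="user")])
        print(result.content)
```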

async azure_openai_autogen(
llm_config: nat.llm.azure_openai_llm.AzureOpenAIModelConfig,
_builder: nat.builder.builder.Builder,
) → collections.abc.AsyncGenerator[ModelType, None]#

Create Azure OpenAI client for AutoGen integration.

Args:

llm_config (AzureOpenAIModelConfig): Azure OpenAI model configuration.

_builder (Builder): NAT builder instance.

Yields:

AsyncGenerator[ModelType, None]: Configured AutoGen Azure OpenAI client

_strip_strict_from_tools_deep(
kwargs: dict[str, Any],
) → dict[str, Any]#

Remove ‘strict’ field from tool definitions in request kwargs for NIM compatibility.

NIM’s API doesn’t support OpenAI’s ‘strict’ parameter in tool/function definitions. AutoGen adds this field automatically, so we strip it before sending to NIM.

Args:

kwargs: The request keyword arguments dictionary

Returns:

kwargs with ‘strict’ field removed from tool function definitions
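
A sketch of the transformation, assuming the standard OpenAI tools shape ({"type": "function", "function": {...}}); the real helper may recurse more deeply, as its name suggests:

```python
from typing import Any


def sketch_strip_strict(kwargs: dict[str, Any]) -> dict[str, Any]:
    # Drop the 'strict' key from each function definition in the tools list;
    # all other request kwargs pass through unchanged.
    for tool in kwargs.get("tools") or []:
        fn = tool.get("function") if isinstance(tool, dict) else None
        if isinstance(fn, dict):
            fn.pop("strict", None)
    return kwargs


request = {"tools": [{"type": "function",
                      "function": {"name": "get_weather", "strict": True,
                                   "parameters": {"type": "object", "properties": {}}}}]}
sketch_strip_strict(request)
assert "strict" not in request["tools"][0]["function"]
```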

_patch_nim_client_for_tools(client: ModelType) → ModelType#

Patch AutoGen client’s underlying OpenAI client to strip ‘strict’ from tools for NIM.

This patches at the lowest level (the actual OpenAI AsyncClient) to ensure the ‘strict’ field is removed after AutoGen’s internal processing.

Args:

client: The AutoGen OpenAI client to patch

Returns:

The patched client (unmodified if patching fails)
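
One way such a low-level patch might look, reusing sketch_strip_strict from the previous sketch. The _client attribute name is an assumption about AutoGen's internals, which is also why the function falls back to returning the client unmodified:

```python
import functools
from typing import Any


def sketch_patch_for_nim(client: Any) -> Any:
    # Reach for the underlying OpenAI SDK client (attribute name assumed).
    inner = getattr(client, "_client", None)
    if inner is None:
        return client  # unexpected shape: return the client unmodified

    original = inner.chat.completions.create

    @functools.wraps(original)
    async def create(*args: Any, **kwargs: Any) -> Any:
        # Strip 'strict' after all of AutoGen's own request processing.
        return await original(*args, **sketch_strip_strict(kwargs))

    inner.chat.completions.create = create
    return client
```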

async nim_autogen(
llm_config: nat.llm.nim_llm.NIMModelConfig,
_builder: nat.builder.builder.Builder,
) → collections.abc.AsyncGenerator[ModelType, None]#

Create NVIDIA NIM client for AutoGen integration.

Args:

llm_config (NIMModelConfig): NIM model configuration.

_builder (Builder): NAT builder instance.

Yields:

Configured AutoGen NIM client (via OpenAI compatibility)

async litellm_autogen(
llm_config: nat.llm.litellm_llm.LiteLlmModelConfig,
_builder: nat.builder.builder.Builder,
) → collections.abc.AsyncGenerator[ModelType, None]#

Create LiteLLM client for AutoGen integration.

LiteLLM provides a unified interface to multiple LLM providers. This integration uses AutoGen’s OpenAI-compatible client since LiteLLM exposes an OpenAI-compatible API endpoint.
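
For example, AutoGen's stock OpenAI client could be pointed at a locally running LiteLLM proxy like this; the URL, port, and key are placeholders, not NAT defaults:

```python
from autogen_ext.models.openai import OpenAIChatCompletionClient

# LiteLLM's proxy serves an OpenAI-compatible API (port 4000 by default),
# so AutoGen's standard OpenAI client can talk to it directly.
client = OpenAIChatCompletionClient(
    model="gpt-4o",
    base_url="http://localhost:4000/v1",
    api_key="sk-placeholder",
)
```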

Args:

llm_config (LiteLlmModelConfig): LiteLLM model configuration.

_builder (Builder): NAT builder instance.

Yields:

AsyncGenerator[ModelType, None]: Configured AutoGen client via LiteLLM

async bedrock_autogen(
llm_config: nat.llm.aws_bedrock_llm.AWSBedrockModelConfig,
_builder: nat.builder.builder.Builder,
) → collections.abc.AsyncGenerator[ModelType, None]#

Create AWS Bedrock client for AutoGen integration.

Uses AutoGen’s AnthropicBedrockChatCompletionClient, which supports Anthropic Claude models hosted on AWS Bedrock. Credentials are loaded in the following priority (a sketch follows the list):

  1. Explicit credentials loaded from the AWS profile named by credentials_profile_name.

  2. Standard environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN).

  3. Ambient credentials provided by the compute environment (IAM role).
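
A sketch of that resolution order; credentials_profile_name comes from the list above, while the return shape and the use of boto3 here are illustrative assumptions:

```python
import os
from typing import Any

import boto3


def sketch_resolve_credentials(llm_config: Any) -> dict[str, Any]:
    # 1. Explicit profile named in the configuration.
    profile = getattr(llm_config, "credentials_profile_name", None)
    if profile:
        creds = boto3.Session(profile_name=profile).get_credentials().get_frozen_credentials()
        return {"aws_access_key": creds.access_key,
                "aws_secret_key": creds.secret_key,
                "aws_session_token": creds.token}
    # 2. Standard environment variables.
    if os.environ.get("AWS_ACCESS_KEY_ID"):
        return {"aws_access_key": os.environ["AWS_ACCESS_KEY_ID"],
                "aws_secret_key": os.environ.get("AWS_SECRET_ACCESS_KEY"),
                "aws_session_token": os.environ.get("AWS_SESSION_TOKEN")}
    # 3. Pass nothing: the SDK falls back to ambient credentials (e.g. an IAM role).
    return {}
```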

Args:

llm_config (AWSBedrockModelConfig): AWS Bedrock model configuration.

_builder (Builder): NAT builder instance.

Yields:

AsyncGenerator[ModelType, None]: Configured AutoGen Bedrock client