nat.plugins.langchain.llm#

Attributes#

Functions#

_patch_llm_based_on_config(client, llm_config) → ModelType

aws_bedrock_langchain(llm_config, _builder)

azure_openai_langchain(llm_config, _builder)

nim_langchain(llm_config, _builder)

openai_langchain(llm_config, _builder)

dynamo_langchain(llm_config, _builder)

Create a LangChain ChatOpenAI client for Dynamo with automatic agent hint injection.

litellm_langchain(llm_config, _builder)

huggingface_langchain(llm_config, _builder)

huggingface_inference_langchain(llm_config, _builder)

LangChain client for HuggingFace Inference API.

Module Contents#

logger#
ModelType#
_patch_llm_based_on_config(
client: ModelType,
llm_config: nat.data_models.llm.LLMBaseConfig,
) → ModelType#
async aws_bedrock_langchain(
llm_config: nat.llm.aws_bedrock_llm.AWSBedrockModelConfig,
_builder: nat.builder.builder.Builder,
)#
async azure_openai_langchain(
llm_config: nat.llm.azure_openai_llm.AzureOpenAIModelConfig,
_builder: nat.builder.builder.Builder,
)#
async nim_langchain(
llm_config: nat.llm.nim_llm.NIMModelConfig,
_builder: nat.builder.builder.Builder,
)#
async openai_langchain(
llm_config: nat.llm.openai_llm.OpenAIModelConfig,
_builder: nat.builder.builder.Builder,
)#
async dynamo_langchain(
llm_config: nat.llm.dynamo_llm.DynamoModelConfig,
_builder: nat.builder.builder.Builder,
)#

Create a LangChain ChatOpenAI client for Dynamo with automatic agent hint injection.

This client injects Dynamo routing hints via nvext.agent_hints at the HTTP transport level, enabling KV cache optimization and request routing.
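The docstring above describes hints injected into the request body under an nvext.agent_hints key. The sketch below illustrates what such a chat-completions payload could look like; the nesting follows the docstring, but the individual hint keys (e.g. agent_id) are illustrative assumptions, not the toolkit's actual schema.

```python
import json

# Hypothetical chat-completions request body carrying Dynamo routing hints.
# Only the "nvext" / "agent_hints" nesting is taken from the docstring; the
# hint contents are placeholders.
payload = {
    "model": "meta/llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "Hello"}],
    "nvext": {
        "agent_hints": {
            "agent_id": "my-agent",  # illustrative hint key, not the real schema
        }
    },
}

# The hints ride along with the normal request body at the HTTP layer,
# so they survive ordinary JSON serialization untouched.
body = json.dumps(payload)
print(json.loads(body)["nvext"]["agent_hints"]["agent_id"])
```

Because the hints live inside the request body rather than in headers, any OpenAI-compatible transport can carry them without changes to the client interface.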

async litellm_langchain(
llm_config: nat.llm.litellm_llm.LiteLlmModelConfig,
_builder: nat.builder.builder.Builder,
)#
async huggingface_langchain(
llm_config: nat.llm.huggingface_llm.HuggingFaceConfig,
_builder: nat.builder.builder.Builder,
)#
async huggingface_inference_langchain(
llm_config: nat.llm.huggingface_inference_llm.HuggingFaceInferenceLLMConfig,
_builder: nat.builder.builder.Builder,
) → collections.abc.AsyncIterator[Any]#

LangChain client for HuggingFace Inference API.

Uses langchain_huggingface.HuggingFaceEndpoint to support the Serverless Inference API, dedicated Inference Endpoints, and self-hosted TGI (Text Generation Inference) servers.
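The return annotation above (collections.abc.AsyncIterator[Any]) indicates the registration function is an async generator that yields the configured client. The stdlib-only sketch below illustrates that pattern; FakeClient is a placeholder standing in for the real langchain_huggingface.HuggingFaceEndpoint, and the function name is invented for illustration.

```python
import asyncio
from collections.abc import AsyncIterator


class FakeClient:
    """Placeholder for the real HuggingFaceEndpoint client."""

    def __init__(self, endpoint_url: str):
        self.endpoint_url = endpoint_url


async def huggingface_inference_client(endpoint_url: str) -> AsyncIterator[FakeClient]:
    # Build the client once, yield it to the caller, and run any
    # teardown after the caller is done with it.
    client = FakeClient(endpoint_url)
    try:
        yield client
    finally:
        pass  # teardown (close sessions, etc.) would go here


async def main() -> str:
    async for client in huggingface_inference_client("http://localhost:8080"):
        return client.endpoint_url


print(asyncio.run(main()))  # http://localhost:8080
```

Yielding instead of returning lets the framework manage the client's lifetime: setup runs before the yield, teardown after, without the caller needing to know either exists.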