nat.plugins.langchain.llm#
Attributes#

| Attribute | Summary |
| --- | --- |
| `logger` | |
| `ModelType` | |

Functions#

| Function | Summary |
| --- | --- |
| `_patch_llm_based_on_config` | |
| `aws_bedrock_langchain` | |
| `azure_openai_langchain` | |
| `nim_langchain` | |
| `openai_langchain` | |
| `dynamo_langchain` | Create a LangChain ChatOpenAI client for Dynamo with automatic agent hint injection. |
| `litellm_langchain` | |
| `huggingface_langchain` | |
| `huggingface_inference_langchain` | LangChain client for HuggingFace Inference API. |
Module Contents#
- logger#
- ModelType#
- _patch_llm_based_on_config(client: ModelType, llm_config: nat.data_models.llm.LLMBaseConfig)
- async aws_bedrock_langchain(llm_config: nat.llm.aws_bedrock_llm.AWSBedrockModelConfig, _builder: nat.builder.builder.Builder)
- async azure_openai_langchain(llm_config: nat.llm.azure_openai_llm.AzureOpenAIModelConfig, _builder: nat.builder.builder.Builder)
- async nim_langchain(llm_config: nat.llm.nim_llm.NIMModelConfig, _builder: nat.builder.builder.Builder)
- async openai_langchain(llm_config: nat.llm.openai_llm.OpenAIModelConfig, _builder: nat.builder.builder.Builder)
- async dynamo_langchain(llm_config: nat.llm.dynamo_llm.DynamoModelConfig, _builder: nat.builder.builder.Builder)
Create a LangChain ChatOpenAI client for Dynamo with automatic agent hint injection.
This client injects Dynamo routing hints via nvext.agent_hints at the HTTP transport level, enabling KV cache optimization and request routing.
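As a rough sketch of what transport-level injection means here: the hints are merged into the outgoing chat-completions request body under `nvext.agent_hints` rather than passed through the LangChain API. Only the `nvext.agent_hints` location comes from the description above; the helper name and the hint keys (e.g. `agent_id`) are hypothetical, for illustration.

```python
import json


def inject_agent_hints(body: bytes, agent_hints: dict) -> bytes:
    """Merge routing hints into an outgoing chat-completions request body.

    Sketch only: the shape of the values under ``nvext.agent_hints`` is an
    assumption; the ``nvext.agent_hints`` key itself is from the docs above.
    """
    payload = json.loads(body)
    # Preserve any nvext fields the caller already set.
    nvext = payload.setdefault("nvext", {})
    nvext.setdefault("agent_hints", {}).update(agent_hints)
    return json.dumps(payload).encode()


# Tag a request with a hypothetical agent hint so the router can
# co-locate related requests and reuse KV cache.
original = json.dumps({"model": "my-model", "messages": []}).encode()
patched = inject_agent_hints(original, {"agent_id": "researcher"})
```

Doing this at the HTTP layer keeps the LangChain `ChatOpenAI` interface unchanged while still letting every request carry routing metadata.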
- async litellm_langchain(llm_config: nat.llm.litellm_llm.LiteLlmModelConfig, _builder: nat.builder.builder.Builder)
- async huggingface_langchain(llm_config: nat.llm.huggingface_llm.HuggingFaceConfig, _builder: nat.builder.builder.Builder)
- async huggingface_inference_langchain(llm_config: nat.llm.huggingface_inference_llm.HuggingFaceInferenceLLMConfig, _builder: nat.builder.builder.Builder)
LangChain client for HuggingFace Inference API.
Uses `langchain_huggingface.HuggingFaceEndpoint` for the Serverless API, Inference Endpoints, and TGI servers.