# Plan and Execute (LangGraph) Bot
This example chatbot showcases a Plan & Execute LangGraph pipeline that uses an LLM and Tavily search to answer complex questions. It first plans the steps needed to answer the question, then executes them using internet searches via Tavily. Finally, it uses the LLM to determine whether the answer has been found or whether the question needs to be re-planned.
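The plan → execute → check/re-plan loop described above can be sketched in plain Python. Everything below (`plan`, `execute`, `answer_found`) is a placeholder assumption for illustration, not the actual LangGraph or ACE Agent API; a real bot would plan with an LLM and execute each step with a Tavily search:

```python
# Minimal sketch of a plan-and-execute loop (placeholder logic, not the
# real LangGraph graph shipped with this sample).

def plan(question):
    """Placeholder planner: split the question into search steps."""
    return [f"search: {part.strip()}" for part in question.split(" and ")]

def execute(step):
    """Placeholder executor: pretend to run an internet search."""
    return f"result for '{step}'"

def answer_found(results):
    """Placeholder check: in the real bot, an LLM judges completeness."""
    return len(results) > 0

def run(question, max_replans=3):
    """Plan, execute each step, then either answer or re-plan."""
    results = []
    for _ in range(max_replans):
        steps = plan(question)
        results = [execute(s) for s in steps]
        if answer_found(results):  # LLM decides: answer found, or re-plan
            return results
    return results

print(run("capital of France and population of Paris"))
```

In the shipped sample, this loop is expressed as a LangGraph graph in the Plugin directory, with the planner, executor, and re-plan decision each implemented as graph nodes.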
The Plan and Execute bot showcases the following ACE Agent features:

- Integrating a LangGraph graph/runnable with ACE Agent
- Installing custom dependencies in the Plugin server
- Deployment in either the Event Architecture or the Plugin Server Architecture
## Prerequisites
Create a Tavily API key on the Tavily website. We use Tavily to perform internet searches to answer questions.

```shell
export TAVILY_API_KEY=...
```

Add it to deploy/docker/.env:

```shell
TAVILY_API_KEY=${TAVILY_API_KEY}
```
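Before deploying, you can confirm the key is actually visible to a Python process. This check is an illustrative assumption, not part of the ACE Agent tooling:

```python
import os

def check_tavily_key():
    """Return 'set' if TAVILY_API_KEY is exported, else 'missing'."""
    # Docker compose forwards this value to the Plugin server container
    # via deploy/docker/.env, so it must be present in the host shell.
    return "set" if os.environ.get("TAVILY_API_KEY") else "missing"

print(check_tavily_key())
```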
## Docker-based bot deployment
Set the OPENAI_API_KEY environment variable with your OpenAI API key before launching the bot.

```shell
export OPENAI_API_KEY=...
```
Copy the requirements from langgraph_plan_and_execute/plugin/requirements_dev.txt into deploy/docker/dockerfiles/plugin_server.Dockerfile:

```dockerfile
##############################
# Install custom dependencies
##############################
RUN pip3 install \
    tavily-python==0.3.3 \
    langgraph==0.0.31 \
    langchain-openai==0.1.2 \
    langchain==0.1.12
```
Prepare the environment for the Docker compose commands.

```shell
export BOT_PATH=./samples/langgraph_plan_and_execute/
source deploy/docker/docker_init.sh
```
Deploy the ACE Agent microservices: the Chat Engine, Plugin server, and NLP server.

```shell
docker compose -f deploy/docker/docker-compose.yml up --build event-bot -d
```
Interact with the bot using the URL `http://<workstation IP>:7006/`.
![LangGraph bot sample conversation](../../../_images/langgraph-bot-conversation.png)
Note
Due to the complex nature of the LangGraph example, this bot has high latency, which may make it unsuitable for speech pipelines. Latency can be reduced by simplifying the LangGraph graph in the Plugin directory.