Plan and Execute (LangGraph) Bot

This example chatbot showcases a Plan and Execute LangGraph pipeline that uses an LLM and Tavily search to answer complex questions. It first plans the steps needed to solve the question, then executes them using internet searches via Tavily. Finally, it uses the LLM to determine whether the answer has been found or whether the question needs to be re-planned.
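The plan, execute, and re-plan loop described above can be sketched in plain Python. The `plan()`, `execute_step()`, and `check_answer()` functions below are hypothetical stubs standing in for the LLM and Tavily-search calls; in the actual sample these are nodes in a LangGraph state graph.

```python
def plan(question):
    """LLM stub: break the question into search steps."""
    return [f"search: {question}"]

def execute_step(step):
    """Tavily stub: run an internet search for one step."""
    return f"result for [{step}]"

def check_answer(question, results):
    """LLM stub: judge whether the collected results answer the question."""
    return len(results) > 0

def plan_and_execute(question, max_replans=3):
    """Plan, execute each step, and re-plan until an answer is found."""
    results = []
    for _ in range(max_replans):
        for step in plan(question):
            results.append(execute_step(step))
        if check_answer(question, results):
            return results
        # otherwise loop: re-plan using what has been gathered so far
    return results
```

The real pipeline wires these stages together as LangGraph nodes with conditional edges rather than a plain loop, but the control flow is the same.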

The Plan and Execute bot showcases several ACE Agent features.

Prerequisites

  1. Create a Tavily API key from the Tavily website. We use Tavily to perform internet searches to answer questions.

    export TAVILY_API_KEY=...
    
  2. Add it to deploy/docker/.env.

    TAVILY_API_KEY=${TAVILY_API_KEY}
    

Docker-based bot deployment

  1. Set the OPENAI_API_KEY environment variable with your OpenAI API key before launching the bot.

    export OPENAI_API_KEY=...
    
  2. Copy the requirements from langgraph_plan_and_execute/plugin/requirements_dev.txt into deploy/docker/dockerfiles/plugin_server.Dockerfile.

    ##############################
    # Install custom dependencies
    ##############################
    RUN pip3 install \
        tavily-python==0.3.3 \
        langgraph==0.0.31 \
        langchain-openai==0.1.2 \
        langchain==0.1.12
    
  3. Prepare the environment for the Docker compose commands.

    export BOT_PATH=./samples/langgraph_plan_and_execute/
    source deploy/docker/docker_init.sh
    
  4. Deploy the ACE Agent microservices: the Chat Engine, Plugin server, and NLP server.

    docker compose -f deploy/docker/docker-compose.yml up --build event-bot -d
    
  5. Interact with the bot using the URL http://<workstation IP>:7006/.
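If the Plugin server misbehaves after deployment, a quick check inside the container can confirm that the dependencies pinned in step 2 were actually installed at the expected versions. This is an illustrative snippet, not part of the sample; `check_pins` is a hypothetical helper built on the standard library.

```python
from importlib.metadata import PackageNotFoundError, version

def check_pins(pins):
    """Report whether each pinned package is installed at the expected version."""
    report = {}
    for name, expected in pins.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            report[name] = "missing"
        else:
            report[name] = "ok" if installed == expected else f"mismatch ({installed})"
    return report

# Pins copied from the plugin_server.Dockerfile additions in step 2
PINS = {
    "tavily-python": "0.3.3",
    "langgraph": "0.0.31",
    "langchain-openai": "0.1.2",
    "langchain": "0.1.12",
}
```

Running `check_pins(PINS)` inside the Plugin server container should report `"ok"` for all four packages.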

LangGraph bot sample conversation

Note

Due to its complexity, this LangGraph example bot has high latency, which may make it unsuitable for speech pipelines. Latency can be reduced by simplifying the LangGraph graph in the Plugin directory.
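Before simplifying the graph, it helps to know which stage dominates the latency. One way, sketched below with only the standard library, is a small timing context manager wrapped around each stage; the `plan`/`execute` labels and the `time.sleep` calls are placeholders for the real LLM and search calls.

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, log):
    """Record the wall-clock duration of the enclosed block under `label`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        log[label] = time.perf_counter() - start

# Hypothetical usage around the pipeline stages:
timings = {}
with timed("plan", timings):
    time.sleep(0.01)   # stand-in for the LLM planning call
with timed("execute", timings):
    time.sleep(0.01)   # stand-in for the Tavily search call
```

Inspecting `timings` after a request shows which node to simplify first, e.g. capping the number of re-plan iterations if the planning stage dominates.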