DuckDuckGo LangChain Bot

This example chatbot showcases a LangChain agent that uses conversation history and the DuckDuckGo tool to answer questions. The agent first rephrases the question based on the conversation history, poses the rephrased question to DuckDuckGo, and generates a final answer based on the DuckDuckGo output. It relies on a custom plugin endpoint that streams the response from the agent, and it uses an OpenAI chat model for rephrasing and for forming the final response.

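The flow described above can be pictured with a short Python sketch. This is not the sample's actual plugin code; the prompts, function name, and model settings below are illustrative assumptions, and the sketch relies on the langchain-community and duckduckgo-search packages that are pinned later in this guide.

    # Minimal sketch (not the sample's actual plugin code) of the rephrase -> search -> answer flow.
    from langchain_community.chat_models import ChatOpenAI
    from langchain_community.tools import DuckDuckGoSearchRun

    llm = ChatOpenAI(temperature=0)   # reads OPENAI_API_KEY from the environment
    search = DuckDuckGoSearchRun()    # DuckDuckGo search tool

    def answer(question: str, history: list[tuple[str, str]]) -> str:
        # 1. Rephrase the follow-up question into a standalone query using the history.
        history_text = "\n".join(f"User: {u}\nBot: {b}" for u, b in history)
        standalone = llm.invoke(
            "Rewrite the last user question as a standalone search query.\n"
            f"Conversation:\n{history_text}\nQuestion: {question}"
        ).content
        # 2. Pose the rephrased question to DuckDuckGo.
        results = search.run(standalone)
        # 3. Generate the final answer from the DuckDuckGo output.
        return llm.invoke(
            f"Answer the question using only these search results:\n{results}\n\nQuestion: {standalone}"
        ).content
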
The DuckDuckGo LangChain bot showcases the following ACE Agent features:

  • Integrating a LangChain agent with ACE Agent

  • Handling conversation history in the agent

  • Installing custom dependencies in the Plugin server

  • Plugin Server Architecture

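The custom plugin endpoint mentioned above streams the agent's response back through the Plugin server. The actual plugin interface and schema are defined in the sample's plugins/ directory; the FastAPI-style sketch below only illustrates the general idea of streaming chat-model output over HTTP, and the route name and request schema are assumptions rather than the real endpoint.

    # Illustrative only: a generic FastAPI endpoint that streams chat-model output.
    # The real ddg_langchain_bot plugin endpoint and its request/response schema may differ.
    from fastapi import FastAPI
    from fastapi.responses import StreamingResponse
    from langchain_community.chat_models import ChatOpenAI
    from pydantic import BaseModel

    app = FastAPI()
    llm = ChatOpenAI(temperature=0)  # reads OPENAI_API_KEY from the environment

    class ChatRequest(BaseModel):
        question: str  # hypothetical request schema

    @app.post("/chat")  # hypothetical route name
    async def chat(req: ChatRequest) -> StreamingResponse:
        async def token_stream():
            # astream() yields message chunks as the model produces them
            async for chunk in llm.astream(req.question):
                yield chunk.content
        return StreamingResponse(token_stream(), media_type="text/plain")
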
Docker-based bot deployment

  1. Set the OPENAI_API_KEY environment variable with your OpenAI API key before launching the bot.

    export OPENAI_API_KEY=...
    
  2. Copy the requirements from ddg_langchain_bot/plugins/requirements_dev.txt into deploy/docker/dockerfiles/plugin_server.Dockerfile.

    ##############################
    # Install custom dependencies
    ##############################
    RUN pip3 install \
        langchain==0.1.1 \
        langchain-community==0.0.13 \
        langchain-core==0.1.12 \
        duckduckgo-search==5.3.1b1
    

Note

If you see a crash in the plugin server or an issue with fetching a response from DuckDuckGo, try using a more recent duckduckgo-search version.
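
To confirm whether a failure comes from the duckduckgo-search package itself rather than from the bot, you can run a standalone query with the same package version outside ACE Agent. The snippet below is a hypothetical check, not part of the sample.

    # Hypothetical standalone check for the duckduckgo-search package; run it with the
    # same version that is installed in the Plugin server image.
    from duckduckgo_search import DDGS

    with DDGS() as ddgs:
        for result in ddgs.text("NVIDIA ACE Agent", max_results=3):
            print(result["title"], result["href"])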

  3. Prepare the environment for the Docker compose commands.

    export BOT_PATH=./samples/ddg_langchain_bot/
    source deploy/docker/docker_init.sh
    
  4. Deploy the Speech models.

    docker compose -f deploy/docker/docker-compose.yml up model-utils-speech
    
  5. Deploy the ACE Agent microservices: the Chat Controller, Chat Engine, Plugin server, and NLP server.

    docker compose -f deploy/docker/docker-compose.yml up speech-bot -d
    
  6. Wait a few minutes for all services to be ready; you can check the Docker logs of the individual microservices to confirm. The Docker logs for the Chat Controller container will show Server listening on 0.0.0.0:50055 when it is ready.

  7. Try out the bot using a web browser. You can deploy a sample frontend application with voice capture and playback support, as well as text input and output, using the following command.

    docker compose -f deploy/docker/docker-compose.yml up frontend-speech
    
  8. Interact with the bot using the URL http://<workstation IP>:9001/. To access the mic in the browser, you must either convert the HTTP endpoint to HTTPS by adding SSL validation, or update chrome://flags/ or edge://flags/ to allow http://<workstation IP>:9001 as a secure endpoint.