DuckDuckGo LangChain Bot
This example chatbot showcases a LangChain agent that uses conversation history and the DuckDuckGo tool to answer questions. The agent first rephrases the question based on the conversation history, poses the rephrased question to DuckDuckGo, and generates a final answer from the DuckDuckGo output. It relies on a custom plugin endpoint that streams the response from the agent, and it uses an OpenAI chat model for both rephrasing and final response formation.
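The rephrase-search-answer flow described above can be sketched in plain Python. This is an illustration only: the function names (`run_agent`, `rephrase`, `search`, `answer`) are hypothetical stand-ins for the LangChain runnables in the actual bot, not part of its API, and stubs replace the OpenAI model and the DuckDuckGo tool so the flow can be shown without network access.

```python
def run_agent(question, history, rephrase, search, answer):
    """Illustrative rephrase -> search -> answer pipeline.

    `rephrase`, `search`, and `answer` are injected callables; the real
    bot wires these to an OpenAI chat model and the DuckDuckGo tool.
    """
    # Step 1: make the question standalone using the conversation history.
    standalone = rephrase(question, history) if history else question
    # Step 2: query the search tool with the rephrased question.
    results = search(standalone)
    # Step 3: form the final answer from the search output.
    return answer(standalone, results)


# Minimal demonstration with stub callables in place of the LLM and tool.
reply = run_agent(
    "How tall is it?",
    ["User asked about the Eiffel Tower."],
    rephrase=lambda q, h: "How tall is the Eiffel Tower?",
    search=lambda q: "The Eiffel Tower is about 330 m tall.",
    answer=lambda q, r: f"Based on search: {r}",
)
print(reply)
```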
The DuckDuckGo LangChain bot showcases the following ACE Agent features:
- Integrating a LangChain agent with ACE Agent
- Handling conversation history in the agent
- Installing custom dependencies in the Plugin server
- Docker-based bot deployment
Set the OPENAI_API_KEY environment variable with your OpenAI API key before launching the bot.
export OPENAI_API_KEY=...
Copy the requirements from ddg_langchain_bot/plugins/requirements_dev.txt into deploy/docker/dockerfiles/plugin_server.Dockerfile.

```dockerfile
##############################
# Install custom dependencies
##############################
RUN pip3 install \
    langchain==0.1.1 \
    langchain-community==0.0.13 \
    langchain-core==0.1.12 \
    duckduckgo-search==5.3.1b1
```
Note
If you see a crash in the Plugin server or an issue fetching a response from DuckDuckGo, try using a more recent duckduckgo-search version.

If you do not make these changes in the Dockerfile, you will see import errors similar to ModuleNotFoundError: No module named 'duckduckgo_search'.
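To confirm that the custom dependencies made it into the Plugin server image, a quick check like the following can be run inside the container. This is an illustrative helper, not part of ACE Agent; the module list reflects the packages installed in the Dockerfile snippet above.

```python
import importlib.util


def missing_modules(names):
    """Return the modules from `names` that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]


# Top-level modules the DuckDuckGo bot's plugin expects to be installed.
missing = missing_modules(["duckduckgo_search", "langchain", "langchain_community"])
if missing:
    print(f"Missing plugin dependencies: {missing}")  # rebuild the image
else:
    print("All custom plugin dependencies are installed.")
```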
Prepare the environment for the Docker compose commands.
```shell
export BOT_PATH=./samples/ddg_langchain_bot/
source deploy/docker/docker_init.sh
```
For Plugin server architecture based bots, we need to use the speech_lite pipeline configuration for the Chat Controller microservice. Update the PIPELINE variable in deploy/docker/docker_init.sh, or override it by setting the PIPELINE environment variable manually.

```shell
export PIPELINE=speech_lite
```
Deploy the Speech models.
docker compose -f deploy/docker/docker-compose.yml up model-utils-speech
Deploy the ACE Agent microservices: the Chat Controller, Chat Engine, Plugin server, and NLP server.
docker compose -f deploy/docker/docker-compose.yml up speech-bot -d --build
Wait a few minutes for all services to become ready; you can check the Docker logs of the individual microservices to confirm. You will see the log line Server listening on 0.0.0.0:50055 in the Docker logs for the Chat Controller container.

Interact with the bot using the URL http://<workstation IP>:7006/. To access the microphone in the browser, either convert the http endpoint to https by adding SSL validation, or update chrome://flags/ or edge://flags/ to allow http://<workstation IP>:7006 as a secure endpoint.
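Rather than re-reading the logs, a small script can poll the relevant ports until they accept connections. This is an illustrative sketch, not part of ACE Agent; the host, the port numbers (50055 for the Chat Controller from the log line above, 7006 for the web UI), and the timeout are assumptions for a local deployment.

```python
import socket
import time


def wait_for_port(host, port, timeout=120.0, interval=2.0):
    """Poll until a TCP connection to host:port succeeds; False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False


# Assumed ports: 50055 (Chat Controller gRPC), 7006 (web UI).
# for port in (50055, 7006):
#     print(port, "ready" if wait_for_port("localhost", port) else "not ready")
```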