Installation#
This guide walks through setting up the AI-Q blueprint for local development. For containerized or production deployments, refer to Deployment.
Prerequisites#
| Requirement | Version | Notes |
|---|---|---|
| Python | 3.11 – 3.13 | 3.13 recommended |
| uv | latest | Python package manager (installed automatically by the setup script if missing) |
| Git | 2.x+ | |
| Node.js | 22+ | Optional – only needed for the web UI |
You also need at least one LLM API key. Refer to API key setup below.
Hardware Requirements#
When using NVIDIA API Catalog (the default), inference runs on NVIDIA-hosted infrastructure and there are no local GPU requirements. The hardware requirements below apply only when self-hosting models via NVIDIA NIM.
| Component | Default Model | Self-Hosted Hardware Reference |
|---|---|---|
| LLM (intent classifier, orchestrator, planner) | | |
| LLM (deep research researcher) | | |
| LLM (deep research orchestrator/planner, optional) | | |
| Document summary (optional) | | |
| Text embedding | | |
| VLM (image/chart extraction, optional) | | |
| Knowledge layer (Foundational RAG, optional) | – | |
Automated Setup (Recommended)#
The setup script handles everything – virtual environment, Python dependencies, and UI dependencies:
```bash
git clone <repository-url>
cd aiq
./scripts/setup.sh
```
The script performs the following steps:
- Installs `uv` if not already present
- Creates a Python 3.13 virtual environment at `.venv/`
- Installs the core package with dev dependencies
- Installs all frontends (CLI, debug console, API server)
- Installs benchmark packages (freshqa, deepsearch_qa)
- Installs all data source plugins (Tavily, Google Scholar, knowledge layer)
- Sets up pre-commit hooks
- Copies `deploy/.env.example` to `deploy/.env` if no `.env` file exists
- Installs UI npm dependencies (if Node.js is available)
After the script completes, activate the virtual environment:
```bash
source .venv/bin/activate
```
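Activation can silently fail to take effect (for example, when run in a different shell than the one you continue working in). As a quick sanity check — a sketch, not part of the project tooling — you can confirm that a `.venv/bin` directory is on your `PATH`:

```shell
# Returns success if a .venv/bin directory is on PATH (i.e. a venv is active).
venv_active() {
  case ":$PATH:" in
    *"/.venv/bin:"*) return 0 ;;
    *) return 1 ;;
  esac
}

if venv_active; then
  echo "venv active"
else
  echo "venv not active - run: source .venv/bin/activate"
fi
```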
Manual Setup#
If you prefer to install components selectively, follow these steps.
1. Clone the Repository#
```bash
git clone https://github.com/NVIDIA-AI-Blueprints/aiq.git
cd aiq
```
2. Create the Virtual Environment#
```bash
uv venv --python 3.13 .venv
source .venv/bin/activate
```
3. Install Dependencies#
Install the core package and only the frontends, benchmarks, and data sources you need:
```bash
# Core with development dependencies
uv pip install -e ".[dev]"

# Frontends (pick what you need)
uv pip install -e ./frontends/cli       # CLI interface
uv pip install -e ./frontends/debug     # Debug console
uv pip install -e ./frontends/aiq_api   # Unified API server (includes debug)

# Data sources (pick what you need)
uv pip install -e ./sources/tavily_web_search
uv pip install -e ./sources/google_scholar_paper_search
uv pip install -e "./sources/knowledge_layer[llamaindex,foundational_rag]"

# Benchmarks (optional)
uv pip install -e ./frontends/benchmarks/freshqa
uv pip install -e ./frontends/benchmarks/deepsearch_qa
```
4. Set Up Pre-Commit Hooks (Development)#
```bash
pre-commit install
```
API Key Setup#
AI-Q needs API keys to access LLMs and search providers. Create an environment file from the provided template:
```bash
cp deploy/.env.example deploy/.env
```
Then edit `deploy/.env` and fill in your keys.
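A minimal `deploy/.env` might look like the following — the values shown are illustrative placeholders, and the authoritative variable set is whatever `deploy/.env.example` defines:

```shell
# deploy/.env -- values below are placeholders, not real keys
NVIDIA_API_KEY=nvapi-your-key-here   # required: LLM inference
TAVILY_API_KEY=tvly-your-key-here    # required: web search
#SERPER_API_KEY=your-key-here        # optional: Google Scholar paper search
```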
Required Keys#
| Variable | Provider | How to obtain |
|---|---|---|
| `NVIDIA_API_KEY` | NVIDIA API Catalog | Sign in, click any model, select Deploy > Get API Key > Generate Key |
Optional Keys#
| Variable | Provider | Purpose |
|---|---|---|
| `TAVILY_API_KEY` | Tavily | Web search |
| `SERPER_API_KEY` | Serper | Academic paper search (Google Scholar). To enable, uncomment the corresponding section in your config file. |
At minimum, you need `NVIDIA_API_KEY` for LLM inference and `TAVILY_API_KEY` for web search. Paper search (`SERPER_API_KEY`) is disabled by default in the shipped configs – refer to the comments in your config file to enable it.
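If the server later fails with authentication errors, missing keys are a common cause. A quick pre-flight check — a hypothetical helper, not part of the project scripts — is to confirm the required variables are set in your environment before starting AI-Q:

```shell
# Print any required key that is not set in the current environment.
check_required_keys() {
  missing=0
  for var in "$@"; do
    if ! printenv "$var" >/dev/null; then
      echo "Missing: $var"
      missing=1
    fi
  done
  return $missing
}

check_required_keys NVIDIA_API_KEY TAVILY_API_KEY || \
  echo "Set the variables above in deploy/.env (or export them) first."
```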
Verify Installation#
Confirm that the NeMo Agent Toolkit CLI is available and can find the project plugins:
```bash
# Must use the project venv, not the system nat
.venv/bin/nat --help
```
You should see the `nat` CLI help output with available commands (`run`, `serve`, `eval`, etc.).
To verify plugins are registered:
```bash
.venv/bin/nat run --help
```
This should list available workflow configurations.
Building the Documentation#
The project documentation is built with Sphinx and uses MyST-Parser for Markdown support. To build the HTML docs locally:
```bash
# Install docs dependencies and build in one step
uv run --extra docs sphinx-build -M html docs/source docs/build
```
The generated site is written to `docs/build/html/`. Open `docs/build/html/index.html` in a browser to view it.
If you already have the virtual environment activated with docs extras installed, you can also run:
```bash
sphinx-build -M html docs/source docs/build
```
Next Steps#
- Quick Start – Run your first research query in 5 minutes
- Developer Guide – Recommended reading path through the documentation
- Deployment – Docker Compose deployment