Tutorials#
Learn how to develop with the Aerial Framework using hands-on Jupyter notebook tutorials.
Running Tutorials#
Step 1: Set Up Container (First Time Only)
From the top-level aerial-framework directory, configure and pull/build the
Docker container (first time may take a few minutes):
bash container/setup_container.sh
The setup script will:
Check Docker and GPU requirements (compute capability >= 8.0)
Detect optional networking configurations (InfiniBand, VFIO, GDRCopy, Hugepages)
Create a .env file with auto-detected settings
Pull or build the container image
If networking devices are missing, you’ll see a warning but the script continues. The Fronthaul and DPDK/DOCA tests are optional and require NIC hardware.
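The compute-capability requirement can also be checked by hand before running the setup script. A minimal sketch using nvidia-smi (the compute_cap query field is assumed to be supported by your driver version; the setup script performs its own, more thorough checks):

```shell
# Standalone sketch of the GPU check setup_container.sh performs.
# Assumes nvidia-smi supports the compute_cap query field.
meets_min_cap() {
  # succeed (exit 0) when $1 >= $2, compared as decimal numbers
  awk -v cap="$1" -v min="$2" 'BEGIN { exit !(cap + 0 >= min + 0) }'
}

cap=$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader | head -n1)
if meets_min_cap "$cap" 8.0; then
  echo "GPU compute capability $cap meets the >= 8.0 requirement"
else
  echo "GPU compute capability '$cap' is below 8.0 (or not detected)" >&2
fi
```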
Step 2: Start Container
Stop any existing container with the same name, then start a new container in the background:
docker stop aerial-framework-base-$USER || true # ignore error if container not already running
docker compose -f container/compose.yaml run -d --rm --name aerial-framework-base-$USER aerial-framework-base
Step 3: Convert Notebooks
Convert Python source files to notebooks:
docker exec aerial-framework-base-$USER bash -c "uv run ./scripts/setup_python_env.py jupytext_convert docs"
Option 1: VS Code with Dev Containers (Recommended)
Important
In step 5, you must open /opt/nvidia/aerial-framework/docs, not /opt/nvidia/aerial-framework/.
In step 7, select the “framework-docs” notebook kernel. Selecting the wrong kernel
and venv will cause ModuleNotFoundError when running notebooks.
1. Install the “Dev Containers” extension in VS Code
2. If working on a remote machine, first connect via the Remote-SSH extension
3. Press Ctrl+Shift+P (or Cmd+Shift+P on Mac) and select “Dev Containers: Attach to Running Container…”
4. Select aerial-framework-base-<your-username> from the list
5. Once attached, open the /opt/nvidia/aerial-framework/docs folder
6. Open any .ipynb file from tutorials/generated/
7. Click the kernel selector (top right) and choose “framework-docs” (.venv/bin/python)
8. The notebooks will run using the docs environment, which already has ipykernel installed
Note: First-time setup will download the VS Code Server, which may take a few minutes.
Option 2: JupyterLab
# For local machine
docker exec aerial-framework-base-$USER bash -c "uv run --directory docs jupyter-lab"
# For remote machine (accessible over network)
docker exec aerial-framework-base-$USER bash -c "uv run --directory docs jupyter-lab --ip='0.0.0.0' --no-browser"
JupyterLab will open in the docs directory. Navigate to tutorials/generated
in the file browser to access the notebooks.
For remote machines: Copy the URL that shows the hostname (e.g., http://<remote-host>:8888/lab?token=...) and open it in your local browser.
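If the remote port is not reachable directly, an SSH tunnel is a common alternative: open it with ssh -L 8888:localhost:8888 <remote-host>, then rewrite the token URL for local use. The helper below is an illustrative sketch (the function name, hostname, and token are placeholders, not part of the framework tooling):

```shell
# Rewrite the remote JupyterLab URL so the token link works in a local
# browser through an SSH tunnel (ssh -L 8888:localhost:8888 <remote-host>).
# Illustrative helper, not part of the framework.
to_local_url() {
  # swap the remote hostname for localhost, keeping port, path, and token
  echo "$1" | sed -E 's#^(https?://)[^:/]+#\1localhost#'
}

to_local_url "http://remote-host:8888/lab?token=abc123"
# prints http://localhost:8888/lab?token=abc123
```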
Resources:
Attach to a running container - VS Code documentation on attaching to containers
Working with Jupyter Notebooks in VS Code - Guide for running Jupyter notebooks
Jupyter Kernel Management - Selecting Python interpreters for notebooks
1. Getting Started Guide#
Setting up the Docker development container
Configuring your environment
Building the project
Running tests
2. Reference PUSCH Receiver#
Installing the RAN Python package
Loading test vector data
Processing PUSCH inner receiver blocks (channel estimation, equalization, soft demapping)
Processing PUSCH outer receiver blocks (descramble, derate, LDPC decoding, CRC)
3. MLIR-TensorRT#
Defining a simple JAX function (FIR filter)
Compiling to TensorRT
Executing and verifying correctness
4. PUSCH Receiver Lowering#
Compiling the complete PUSCH inner receiver pipeline to TensorRT
Executing with different backends (JAX CUDA and TensorRT)
Benchmarking with NVIDIA Nsight Systems
5. AI Channel Filter Training#
Training a custom AI channel filter for channel estimation
Evaluating the performance of the trained AI channel filter
Benchmarking the performance of the trained AI channel filter
Profiling the performance of the trained AI channel filter
6. PUSCH Channel Filter Lowering#
Designing custom PUSCH channel estimation filters in JAX
Compiling channel estimators to TensorRT engines with MLIR-TensorRT
Testing channel filter performance with CDL datasets from Sionna
GPU profiling and benchmark analysis with NVIDIA Nsight Systems
7. Running the Complete PUSCH Pipeline#
Running a PUSCH processing pipeline with a mixture of hand-written CUDA code and compiled TensorRT layers
Running inner and outer PUSCH receiver blocks together
Performance analysis and profiling
Validating end-to-end results
8. Fronthaul and RU Emulator Testing#
Real-time system setup with GH200 and BlueField-3 NIC
O-RAN fronthaul C-Plane and U-Plane interfaces
DPDK and DOCA GPUNetIO for network processing
FAPI capture and C-Plane packet preparation
GPU-accelerated U-Plane processing with order kernel
Real-time task scheduling with timed triggers
Running fronthaul integration tests
9. Top Level PHY RAN Application#
Integrating fronthaul with PUSCH processing
Testing with MAC and RU emulators
Performance tuning and optimization