Introduction

Aerial SDK 23-4

This document describes the supported configurations, test-vector configurations, and limitations for this release of the NVIDIA® cuBB® SDK.

Release Version: 23-4

The following software components and versions apply to this release:

Host OS
  • x86 platform: Ubuntu 22.04 with 5.15.0-1042-nvidia-lowlatency kernel
  • Grace Hopper platform: Ubuntu 22.04 with 6.2.0-1012-nvidia-64k kernel
AX800
  • CUDA Toolkit: 12.2.0
  • GPU Driver (OpenRM): 535.54.03
  • BFB: DOCA_2.5.0_BSP_4.5.0_Ubuntu_22.04-1.23-10.prod.bfb
  • NIC FW: 32.39.2048
A100X
  • CUDA Toolkit: 12.2.0
  • GPU Driver (OpenRM): 535.54.03
  • BFB: DOCA_2.5.0_BSP_4.5.0_Ubuntu_22.04-1.23-10.prod.bfb
  • NIC FW: 24.39.2048
A100
  • CUDA Toolkit: 12.2.0
  • GPU Driver (OpenRM): 535.54.03
BF3 NIC
  • BFB: DOCA_2.5.0_BSP_4.5.0_Ubuntu_22.04-1.23-10.prod.bfb
  • NIC FW: 32.39.2048
CX6-DX NIC
  • NIC FW: 22.39.2048
  Note: If the CX6-DX NIC is used to run the RU emulator on dual ports, downgrade the NIC FW to 22.35.1012 due to a known issue.
DOCA OFED
  • 23.10-1.1.9
  Note: DOCA OFED is required only on the Grace Hopper platform; it is not required on the x86 platform.
NVIDIA-peermem
  Note: Aerial has used kernel DMA-buf instead of nvidia-peermem since the 23-4 release, so nvidia-peermem is no longer required.
GDRCopy 2.4.1
DPDK 22.11 (Included in Mellanox DOCA)
DOCA 2.5
NVIDIA Container Toolkit 1.14
SCF 222.10.02 (partial upgrade to 222.10.04)
Server
  • Gigabyte (E251-U70) with Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz
  • Dell PowerEdge R750 with dual Intel(R) Xeon(R) Gold 6336Y CPU @ 2.40GHz
  • Supermicro Grace Hopper MGX ARS-111GL-NHR (Config 2)
GPU
  • AX800, A100X, A100, GH200 (Early Access)
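
To confirm that an installed stack matches the CUDA Toolkit and GPU driver versions listed above, the standard CUDA runtime API can be queried. The following is a minimal illustrative sketch, not part of the SDK; the file name is arbitrary.

// check_versions.cpp -- build with: nvcc check_versions.cpp -o check_versions
// Prints the CUDA runtime/driver versions and the detected GPUs so they can be
// compared against the versions in this release (CUDA 12.2.0, driver 535.54.03).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int runtimeVer = 0, driverVer = 0;
    cudaRuntimeGetVersion(&runtimeVer);  // toolkit version linked into this binary
    cudaDriverGetVersion(&driverVer);    // max CUDA version the installed driver supports
    std::printf("CUDA runtime: %d.%d\n", runtimeVer / 1000, (runtimeVer % 100) / 10);
    std::printf("CUDA driver : %d.%d\n", driverVer / 1000, (driverVer % 100) / 10);

    int count = 0;
    if (cudaGetDeviceCount(&count) == cudaSuccess) {
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            std::printf("GPU %d: %s\n", i, prop.name);  // e.g. A100, AX800, GH200
        }
    }
    return 0;
}
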
Note

Since the 23-4 release, Aerial uses DMA-buf with the inbox kernel driver and the OpenRM GPU driver, so MOFED and nvidia-peermem are no longer needed. On the x86 platform, the 5.15 kernel is used with DMA-buf and the inbox driver. On the Grace Hopper platform, the 6.2 kernel is used with DMA-buf and DOCA OFED.
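
Whether a given GPU and driver combination exposes DMA-buf support can be queried through the CUDA driver API. Below is a minimal sketch assuming a CUDA 12.x toolchain; it is an illustrative check, not an Aerial SDK tool.

// dmabuf_check.cpp -- build with: nvcc dmabuf_check.cpp -lcuda -o dmabuf_check
#include <cstdio>
#include <cuda.h>

int main() {
    cuInit(0);             // initialize the CUDA driver API
    CUdevice dev;
    cuDeviceGet(&dev, 0);  // first GPU
    int supported = 0;
    // Reports whether device memory can be exported via Linux DMA-buf,
    // the mechanism Aerial uses in place of nvidia-peermem since 23-4.
    cuDeviceGetAttribute(&supported, CU_DEVICE_ATTRIBUTE_DMA_BUF_SUPPORTED, dev);
    std::printf("DMA-buf supported: %s\n", supported ? "yes" : "no");
    return 0;
}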

The following components apply to cloud-native (Kubernetes) deployments of the SDK:

Host OS Ubuntu 22.04 with 5.15.0-1042-nvidia-lowlatency
Container OS Ubuntu 22.04
Containerd 1.5.8
Kubernetes 1.23
Helm 3.8
Network Operator 23.4.0
CX6-DX NIC FW 22.39.2048
A100X NIC FW 24.39.2048
GPU Operator 23.6.0
CUDA Toolkit 12.2.0
NVIDIA GPU Driver 535.54.03

This section defines common acronyms, abbreviations, and terms that are used in this cuBB SDK documentation.

Aerial: SDK that accelerates 5G RAN functions with the GPU
cuBB: CUDA GPU software libraries/tools that accelerate 5G RAN compute-intensive processing
cuPHY: CUDA 5G PHY layer software library of the cuBB
cuPHY-CP: cuPHY control-plane software
HDF5: A data file format used for storing test vectors. The HDF5 software library provides functions for reading and writing the test vector files (a minimal read sketch follows this glossary)
CMake: A software tool for configuring the makefiles that build the SDK CUDA examples
DPDK: Data Plane Development Kit
CX6-DX: Mellanox ConnectX-6 Dx NIC
CDM/FDM/TDM: Code-Division Multiplexing, Frequency-Division Multiplexing, Time-Division Multiplexing
MU-MIMO: Multi-User Multiple-Input Multiple-Output
SU-MIMO: Single-User Multiple-Input Multiple-Output
RB: Resource Block
PRB: Physical Resource Block
RE: Resource Element
REG: Resource Element Group
CORESET: Control Resource Set
DCI: Downlink Control Information
DMRS: Demodulation Reference Signal
eCPRI: Enhanced Common Public Radio Interface
MIB: Master Information Block
O-RAN: Open Radio Access Network
SIB/SIB1: System Information Block
TTI: Transmission Time Interval
LDPC: Low-Density Parity-Check Code
PDCCH: Physical Downlink Control Channel
PDSCH: Physical Downlink Shared Channel
PUCCH: Physical Uplink Control Channel
PUSCH: Physical Uplink Shared Channel
PRACH: Physical Random Access Channel
UCI: Uplink Control Information
UE-EM: UE Emulator Test Equipment
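
Because test vectors ship as HDF5 files (see the HDF5 entry above), a basic read can be sketched with the standard HDF5 C API. This is a minimal illustration; "TV_example.h5" and the dataset name "/X_tf" are hypothetical placeholders, not actual cuBB test-vector names.

// read_tv.cpp -- build with: g++ read_tv.cpp -lhdf5 -o read_tv
// Minimal HDF5 read: opens a test-vector file and reads one float dataset.
#include <cstdio>
#include <vector>
#include <hdf5.h>

int main() {
    hid_t file = H5Fopen("TV_example.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
    if (file < 0) { std::fprintf(stderr, "cannot open file\n"); return 1; }

    hid_t dset  = H5Dopen2(file, "/X_tf", H5P_DEFAULT);  // dataset to inspect
    hid_t space = H5Dget_space(dset);
    hssize_t n  = H5Sget_simple_extent_npoints(space);   // total element count

    std::vector<float> data(static_cast<size_t>(n));
    // Read the whole dataset as native floats into host memory.
    H5Dread(dset, H5T_NATIVE_FLOAT, H5S_ALL, H5S_ALL, H5P_DEFAULT, data.data());
    std::printf("read %lld elements\n", static_cast<long long>(n));

    H5Sclose(space);
    H5Dclose(dset);
    H5Fclose(file);
    return 0;
}
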
© Copyright 2022-2023, NVIDIA. Last updated on Apr 20, 2024.