cuPHY (Latest Release)

Overview


Aerial SDK (ASDK) is a set of software-defined libraries optimized to run 5G gNB workloads on the GPU. These libraries include cuPHY, cuMAC, and pyAerial. In this document, we focus on the layer-1 (L1), or physical (PHY), layer of the 5G gNB software stack.

cuPHY is the 5G L1 library of ASDK. It is designed as an inline accelerator that runs on the GPU and does not require any additional hardware accelerator. It is implemented according to the O-RAN 7.2 split option [cite ORAN spec]. It takes advantage of the massively parallel GPU architecture to accelerate computationally heavy signal processing tasks. It also makes use of the fast GPU I/O interface between the NVIDIA BlueField-3 (BF3) NIC and the GPU (GPUDirect RDMA [ref to go here]) to improve latency.
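
To illustrate the inline acceleration model, the sketch below expresses a simple PHY-style operation, per-resource-element channel equalization, as a CUDA kernel with one thread per resource element. This is not cuPHY source code; the kernel name, buffer layout, and launch parameters are hypothetical.

    // Hypothetical sketch: per-resource-element equalization as a CUDA kernel.
    // This is NOT cuPHY source code; names and data layout are illustrative only.
    #include <cuComplex.h>
    #include <cuda_runtime.h>

    // One thread per resource element: yEq[i] = y[i] * conj(h[i]) / |h[i]|^2
    __global__ void equalizeKernel(const cuFloatComplex* y,
                                   const cuFloatComplex* h,
                                   cuFloatComplex* yEq,
                                   int numRe)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < numRe) {
            cuFloatComplex num = cuCmulf(y[i], cuConjf(h[i]));
            float denom = h[i].x * h[i].x + h[i].y * h[i].y + 1e-9f;
            yEq[i] = make_cuFloatComplex(num.x / denom, num.y / denom);
        }
    }

    // Host-side launch helper: one thread per resource element on a given stream.
    void equalizeSlot(const cuFloatComplex* dY, const cuFloatComplex* dH,
                      cuFloatComplex* dYEq, int numRe, cudaStream_t stream)
    {
        int block = 256;
        int grid  = (numRe + block - 1) / block;
        equalizeKernel<<<grid, block, 0, stream>>>(dY, dH, dYEq, numRe);
    }

Because every resource element is processed independently, this kind of workload maps naturally onto thousands of GPU threads, which is the property cuPHY exploits for its heavier signal processing stages.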

The BF3 NIC provides fronthaul (FH) connectivity as well as IEEE 1588-compliant timing synchronization [cite BF3 product summary page]. The BF3 NIC also has built-in SyncE and eCPRI windowing functionality, which meets the ITU-T G.8273.2 timing requirements.

In the following, we first give an overview of the ASDK L1 software stack. We then go into the details of each software component and explain how the components interact with each other. Finally, we go over the cuPHY library, which implements the signal processing algorithms in CUDA kernels.

Figure 1. Aerial SDK software stack (aerial_sdk_sw_stack.png)

Acronym: Description
3GPP: Third Generation Partnership Project
5G NR: Fifth Generation New Radio
CUDA: Compute Unified Device Architecture
cuBB: CUDA Baseband (the L1 software stack consisting of the L2 adapter, PHY control layer, and PHY layer)
cuPHY: CUDA PHY (L1 functionality on the GPU accelerator in inline mode)
DU or O-DU: O-RAN Distributed Unit (a logical node hosting the RLC/MAC/High-PHY layers based on a lower-layer functional split)
eCPRI: Ethernet Common Public Radio Interface
eAxC: Extended Antenna Carrier (a data flow for a single antenna, or spatial stream, for a single carrier in a single sector)
FAPI: Functional Application Programming Interface
FH: Fronthaul
NIC: Network Interface Card
O-RAN: Open RAN
RAN: Radio Access Network
RU or O-RU: O-RAN Radio Unit (a logical node hosting the Low-PHY layer and RF processing based on a lower-layer functional split)
SCF: Small Cell Forum
SyncE: Synchronous Ethernet (an ITU-T standard for providing a synchronization signal to network resources)

The ASDK L1 software stack architecture is shown in Figure 2. It consists of the L2 adapter, the cuPHY driver, cuPHY, and the cuPHY controller.

The interface between L2 and L1 goes through the nvipc interface, which is provided as a separate library. L2 and L1 communicate using the FAPI protocol [cite SCF FAPI spec]. The L2 adapter takes in slot commands from L2 and translates them into L1 tasks, which are then consumed by the cuPHY driver. Similarly, L1 task results are sent from the cuPHY driver to the L2 adapter, which communicates them back to L2.
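
As a rough illustration of this translation step, the sketch below turns a simplified slot command into an L1 task descriptor. The struct and field names are hypothetical; they are not the SCF FAPI message definitions or the nvipc API used by Aerial.

    // Hypothetical sketch of the L2 adapter's translation step. The message and
    // task structs are illustrative placeholders only.
    #include <cstdint>
    #include <vector>

    struct SlotCommandMsg {          // simplified "DL_TTI.request"-style message
        uint16_t sfn;                // system frame number
        uint16_t slot;               // slot index within the frame
        uint16_t numPdus;            // number of PDUs scheduled in this slot
        // ... per-PDU payload would follow in a real message
    };

    struct L1Task {                  // unit of work handed to the cuPHY driver
        uint16_t sfn;
        uint16_t slot;
        std::vector<uint32_t> pduIds;
    };

    // L2 adapter: consume a slot command from L2 and emit an L1 task.
    L1Task translateSlotCommand(const SlotCommandMsg& msg)
    {
        L1Task task;
        task.sfn  = msg.sfn;
        task.slot = msg.slot;
        task.pduIds.reserve(msg.numPdus);
        for (uint32_t i = 0; i < msg.numPdus; ++i) {
            task.pduIds.push_back(i);  // placeholder: real code parses PDU bodies
        }
        return task;
    }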

The user transport block (TB) data in both the DL and UL directions goes through the same nvipc interface. The data exchange happens directly between cuPHY and L2 under the control of the cuPHY driver.

The cuPHY driver controls the execution of the cuPHY L1 kernels and manages the movement of data into and out of these kernels. The interface between the cuPHY L1 kernels and the NIC is also managed by the cuPHY driver, using the FH driver, which is provided as a library.
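
At a conceptual level, such a driver layer sequences data movement and kernel launches on a CUDA stream, as in the sketch below. The function names, buffers, and commented-out kernel calls are hypothetical placeholders, not the cuPHY driver's actual code.

    // Conceptual sketch of sequencing data movement and kernel execution on a
    // CUDA stream. Names are hypothetical; this is not the cuPHY driver.
    #include <cuda_runtime.h>

    void runUlTask(const void* hIqIn, void* dIqIn, void* dTbOut, void* hTbOut,
                   size_t iqBytes, size_t tbBytes, cudaStream_t stream)
    {
        // 1. Stage input IQ samples on the GPU (in practice this copy can be
        //    bypassed with GPUDirect RDMA from the NIC).
        cudaMemcpyAsync(dIqIn, hIqIn, iqBytes, cudaMemcpyHostToDevice, stream);

        // 2. Launch the L1 processing kernels on the same stream so they run
        //    in order without host-side synchronization between stages.
        // channelEstimateKernel<<<grid, block, 0, stream>>>(...);
        // equalizeKernel<<<grid, block, 0, stream>>>(...);
        // decodeKernel<<<grid, block, 0, stream>>>(...);

        // 3. Return the decoded transport block to host memory for L2.
        cudaMemcpyAsync(hTbOut, dTbOut, tbBytes, cudaMemcpyDeviceToHost, stream);

        // 4. Wait for the slot's work to complete before reporting results.
        cudaStreamSynchronize(stream);
    }

Keeping all stages on one stream preserves their ordering on the GPU and avoids host-side synchronization between stages, which matters for meeting slot-level latency budgets.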

The cuPHY controller is the main application. It initializes the cell configurations and the FH buffers and configures all threads used by the L1 control tasks.
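
One concrete example of this kind of thread configuration is pinning an L1 control thread to a dedicated CPU core. The sketch below shows how that can be done with standard pthread affinity calls; the core number and thread role are hypothetical, and the actual controller derives such settings from its configuration.

    // Illustrative sketch of pinning an L1 control thread to a CPU core.
    // Core numbers and thread roles are hypothetical.
    #include <pthread.h>
    #include <sched.h>
    #include <thread>

    void pinThreadToCore(std::thread& t, int core)
    {
        cpu_set_t cpuSet;
        CPU_ZERO(&cpuSet);
        CPU_SET(core, &cpuSet);
        pthread_setaffinity_np(t.native_handle(), sizeof(cpu_set_t), &cpuSet);
    }

    int main()
    {
        // Hypothetical L1 control thread; in practice this would run the
        // slot-timing loop that drives the cuPHY driver.
        std::thread l1ControlThread([] { /* slot processing loop */ });
        pinThreadToCore(l1ControlThread, 2);  // pin to an isolated core
        l1ControlThread.join();
        return 0;
    }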

The functionality of each of these components is explained in more detail in the following sections.

[TODO]

  • OAM interface from the L2 adapter – does it go through FAPI?

    • Hybrid-mode M-plane – how should it be included in this figure?

Figure 2. Aerial L1 software stack architecture (aerial_l1_sw_stack_arch.png)
