NVIDIA Aerial CUDA-Accelerated RAN: 5G/6G Inline PHY Acceleration Library (cuPHY)

NVIDIA Aerial cuPHY is a cloud-native, software-defined platform optimized to run 5G/6G-compatible gNB physical layer (L1/PHY) workloads on NVIDIA DPU/NIC and GPU hardware. It delivers on the promise of Open RAN by providing complete source-code access to a standards-compliant, multi-vendor interoperable, high-performance solution. Aerial cuPHY has been optimized for commercial deployments, providing a competitive total cost of ownership (TCO) when deployed on NVIDIA’s commercial-off-the-shelf (COTS) hardware. cuPHY is part of the Aerial CUDA-Accelerated RAN platform, which provides industry-leading performance per watt. Figure 1 shows how cuPHY fits within the overall Aerial CUDA-Accelerated RAN platform, and how L1 and L2 acceleration are enabled on the same platform.

ran_platform.png

Figure 1. Aerial CUDA-Accelerated RAN Platform

NVIDIA Aerial cuPHY employs an innovative “inline acceleration” architecture that directly processes high-throughput, time-critical radio access network (RAN) fronthaul (FH) traffic on the GPU. In this approach, the entire PHY pipeline runs in a hardware-accelerated manner, avoiding the throughput and latency bottlenecks associated with traditional CPU-only and look-aside-acceleration architectures. For a detailed analysis of the advantages of inline acceleration, please refer to this blog post.

cuPHY takes advantage of the massively parallel GPU architecture to efficiently accelerate computationally heavy signal processing tasks. It implements the RAN FH interface by leveraging a high-speed, low-latency input/output (I/O) path between the NVIDIA DPU/NIC and GPU using the DOCA GPUNetIO and GPUDirect RDMA libraries. The NVIDIA DPU/NIC provides O-RAN 7.2x FH connectivity and IEEE 1588-compliant timing synchronization, and meets G.8273.2 timing requirements with built-in SyncE and eCPRI windowing functionality.
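To make the inline-versus-look-aside distinction concrete, here is a hedged, back-of-envelope model of data movement per slot. The stage names and copy counts are illustrative assumptions, not Aerial measurements: look-aside acceleration ships each offloaded stage's data across PCIe and back, while inline processing keeps the pipeline resident on the GPU.

```python
# Back-of-envelope comparison of data movement in look-aside vs. inline
# PHY acceleration. Stage list and copy counts are illustrative only.

PHY_STAGES = ["channel_estimation", "equalization", "demodulation", "ldpc_decode"]

def lookaside_copies(stages):
    # Look-aside: the CPU owns the pipeline and ships each offloaded
    # stage's data to the accelerator and back (2 PCIe copies per stage).
    return 2 * len(stages)

def inline_copies(stages):
    # Inline: fronthaul packets land directly in GPU memory (GPUDirect
    # RDMA) and the whole pipeline runs on the GPU; only the decoded
    # output leaves at the end (1 copy).
    return 1

print("look-aside copies per slot:", lookaside_copies(PHY_STAGES))  # 8
print("inline copies per slot:   ", inline_copies(PHY_STAGES))      # 1
```

The gap widens with every stage added to the pipeline, which is why inline architectures scale better for throughput- and latency-critical L1 workloads.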

Key Features

Advanced Virtualized RAN Physical layer

AI-First Platform

  • Powerful AI acceleration framework for xApps & enhanced vRAN signal processing.

  • AI in RAN: GPU acceleration of cuPHY also enables AI-assisted PHY modules, as well as the replacement of conventional modules with trained neural network models.

Multi-Tenancy

  • Employs NVIDIA Multi-Instance GPU (MIG) technology to provide deterministic performance for multiple simultaneously accelerated workloads, including RAN and generative AI applications.

  • MIG enables true wireless operator multi-tenancy through hardware-isolated GPU compute and memory resources, providing QoS/QoE even with disparate radio access technologies and numerologies.
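As a sketch of how such a partition might be created, the standard `nvidia-smi mig` workflow is shown below. The profile IDs are illustrative (they vary by GPU model), and none of this is Aerial-specific configuration; consult `nvidia-smi mig -lgip` on the target system.

```shell
# Illustrative MIG partitioning with nvidia-smi (profile IDs vary by
# GPU model; this is a generic sketch, not Aerial deployment guidance).
sudo nvidia-smi -i 0 -mig 1     # enable MIG mode on GPU 0
nvidia-smi mig -lgip            # list the GPU instance profiles available
# Carve two hardware-isolated instances, e.g. one for RAN L1 and one for
# an AI application. -cgi takes profile IDs from the listing above; -C
# also creates the matching compute instances.
sudo nvidia-smi mig -cgi 9,9 -C
nvidia-smi -L                   # show the resulting MIG devices
```

Each resulting MIG device has its own dedicated compute slices and memory, which is what provides the deterministic isolation between tenants described above.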

Fronthaul Reach Extension

  • NVIDIA Aerial cuPHY can be easily configured to trade off multicell capacity against FH reach, providing regional coverage with fewer datacenters and greater deployment flexibility.

  • This allows dynamic consolidation of workloads, increased workload pooling, and larger resiliency failover groups (N+1), while avoiding fractional-usage base stations across datacenters.
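The capacity-versus-reach tradeoff follows from the fronthaul latency budget: time spent on fiber propagation cannot be spent on L1 processing, and vice versa. A back-of-envelope sketch, using the commonly cited figure of roughly 100 µs one-way transport budget for split 7.2x and ~5 µs/km one-way delay in fiber (illustrative numbers, not Aerial configuration parameters):

```python
# Back-of-envelope fronthaul reach vs. processing-time tradeoff.
# 100 us budget and 5 us/km are commonly cited ballpark figures,
# not Aerial-specific parameters.

FIBER_DELAY_US_PER_KM = 5.0   # ~5 us/km one-way propagation in fiber

def max_reach_km(latency_budget_us, processing_margin_us):
    """One-way fiber reach left after reserving processing time out of
    the fronthaul latency budget."""
    remaining_us = latency_budget_us - processing_margin_us
    return max(0.0, remaining_us / FIBER_DELAY_US_PER_KM)

print(max_reach_km(100, 0))    # 20.0 km with no processing margin
print(max_reach_km(100, 50))   # 10.0 km if 50 us is reserved for L1 work
```

Lowering the per-server cell count frees processing time, which converts directly into extra kilometers of reach and lets fewer, more centralized datacenters cover a region.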

SU-/MU-MIMO Boost

  • NVIDIA Aerial’s Reproducing Kernel Hilbert Space (RKHS) channel estimation and prediction focuses on the dominant clusters of reflections to build a clean picture of the wireless channel, providing compelling spectral-efficiency gains.
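To illustrate the general idea of suppressing everything but the dominant reflections, here is a deliberately simple thresholding sketch in plain Python. This is not the RKHS algorithm itself, and the tap positions, gains, and noise level are made-up assumptions; it only shows why discarding noise-only components cleans up a channel estimate.

```python
# Toy "keep the dominant taps" channel cleanup. Illustrative only:
# NOT the RKHS method; taps, gains, and noise level are assumptions.
import random

random.seed(0)  # deterministic noise for reproducibility

# True channel: a few strong taps (dominant reflections) among many zeros.
N = 32
true_h = [0j] * N
for tap, gain in [(2, 1.0), (7, 0.6), (13, 0.3)]:
    true_h[tap] = complex(gain, 0.0)

# Noisy raw estimate, e.g. as obtained from pilot correlation.
noisy_h = [h + complex(random.gauss(0, 0.05), random.gauss(0, 0.05))
           for h in true_h]

# Keep only taps well above the noise floor; zero the rest.
THRESHOLD = 0.15
clean_h = [h if abs(h) > THRESHOLD else 0j for h in noisy_h]

def mse(a, b):
    return sum(abs(x - y) ** 2 for x, y in zip(a, b)) / len(a)

print("raw MSE:  ", mse(noisy_h, true_h))
print("clean MSE:", mse(clean_h, true_h))  # lower: noise-only taps removed
```

The cleaned estimate has lower error because the many noise-only taps contribute nothing but estimation noise; methods like RKHS achieve the same effect in a far more principled way and additionally enable prediction across time.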

Cloud Native

Many-to-One L2 containers - FAPI

  • Fully utilize Aerial cuPHY multicell capacity without rearchitecting existing MAC schedulers: associate several L2 containers with a single L1 cuPHY container.

Performant Cloud Solution

  • Release qualification via the Canonical cloud stack, including multicell capacity validation on Kubernetes.

  • Full compatibility with cloud OS solutions such as Wind River and Red Hat.

Open-Source Powered

  • The GPU Operator and Network Operator provide upstreamed cloud-native frameworks, as shown in Figure 2. Leverages open-source network and GPU drivers included with core Linux distributions (dma_buf, inbox_driver, OpenRM).

aerial_cuphy_overview.png

Figure 2. Aerial cuPHY - cloud native and open-source powered

Compelling Performance

Industry Leading Multicell Capacity

  • The NVIDIA GH200 Grace Hopper Superchip provides a peak of 20 cells of TDD 4T4R (4DL/2UL) at 100 MHz.

  • NVIDIA GPUs efficiently accelerate the entire physical layer workload by leveraging its inherent algorithmic parallelism.

  • NVIDIA DPU/NICs provide high-throughput packet processing and accurate send scheduling. DOCA GPUNetIO achieves 4x the packet-processing performance of CPU-based DPDK.

Competitive TCO

  • NVIDIA Aerial cuPHY is a 3GPP-compliant and O-RAN-conformant solution delivering competitive spectral efficiency per watt on a COTS-based platform.

Commercial Ready

Unified Software-Defined Solution

  • Single architecture for all RAN technologies, channel bandwidths, and frequency bands, supporting radio units from 4T4R through 64T64R massive MIMO.

  • The solution is agnostic to host CPU architecture and SKU: Arm or x86.

  • Aerial cuPHY is forward and backward compatible across NVIDIA products and generations.

  • Enables virtualized RAN (vRAN) independent software vendors (ISVs) to develop and maintain a single software branch.

Telecom Production Grade

  • Aerial cuPHY is available as a robust, hardened vRAN solution that has passed top-tier operator network qualification processes and is deployed commercially in some of the world’s most demanding wireless environments.

Enterprise Support Services

  • NVIDIA Aerial offers enterprise support services for the whole stack, including all NVIDIA software and underlying open-source dependencies.

Target Audience

  • Telecom Operators

  • Network Equipment Providers

  • Test & Measurement Equipment Providers

  • Network Planning ISVs

  • Academia & Universities

Value Proposition

  • Aerial cuPHY provides a cost-effective and efficient solution for running 5G gNB workloads on GPU.

  • Aerial cuPHY offers high performance and low latency, making it an attractive option for telecom operators and network equipment providers.

  • Its compliance with industry standards such as O-RAN ensures interoperability and ease of integration.

© Copyright 2024, NVIDIA. Last updated on Apr 19, 2024.