Aerial CUDA-Accelerated RAN

Aerial CUDA-Accelerated RAN brings together NVIDIA Aerial software for 5G and AI frameworks with the NVIDIA accelerated computing platform, enabling telcos to reduce TCO and monetize their infrastructure.

Aerial CUDA-Accelerated RAN has the following key features:

  • Software-defined, scalable, modular, highly programmable, and cloud-native, with no fixed-function accelerators, enabling the ecosystem to flexibly adopt the modules needed for their commercial products.

  • Full-stack acceleration of DU L1, DU L2+, CU, UPF, and other network functions, enabling workload consolidation for maximum performance and spectral efficiency and delivering best-in-class system TCO.

  • General-purpose, multi-tenant infrastructure that can power both traditional workloads and cutting-edge AI applications for best-in-class return on assets (RoA).

What’s New in 24-1

Now Available in Release 24-1 for Aerial CUDA-Accelerated RAN

  • Aerial cuPHY: CUDA-accelerated inline PHY

    • 64T64R Massive MIMO (early access)

    • Enhanced L1-L2 interface

    • 4T4R @ 100 MHz multi-cell capacity on Grace Hopper

    • CSI-P2 enhancement

    • O-RAN Fronthaul

    • Grace Hopper MIG support

    • 4T4R new feature support

  • Aerial cuMAC: CUDA-accelerated MAC scheduler

  • pyAerial: Python interface to cuPHY modules and pipeline

  • Aerial Data Lake: Data collection service for the PHY layer to enable AI/ML training (a usage sketch follows this list)
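
The Data Lake bullet above only names the service; as a rough, hypothetical illustration of the kind of PHY-layer data it targets, the sketch below packages synthetic frequency-domain IQ captures into arrays suitable for ML training. The shapes, helper names, and file layout are assumptions chosen for illustration (a 4T4R, 100 MHz carrier with 273 PRBs) and are not the actual Aerial Data Lake schema or API.

```python
import numpy as np

# Assumed capture parameters for illustration only:
# 4 receive antennas, 273 PRBs x 12 subcarriers (100 MHz @ 30 kHz SCS),
# 14 OFDM symbols per slot.
NUM_RX_ANT = 4
NUM_SUBCARRIERS = 273 * 12
NUM_SYMBOLS = 14


def load_slot_iq(num_slots: int, rng: np.random.Generator) -> np.ndarray:
    """Stand-in for reading frequency-domain IQ captures from a data lake.

    Returns complex64 samples shaped (slot, rx_antenna, symbol, subcarrier).
    Here the data is synthetic noise; a real pipeline would read recorded slots.
    """
    shape = (num_slots, NUM_RX_ANT, NUM_SYMBOLS, NUM_SUBCARRIERS)
    iq = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
    return iq.astype(np.complex64)


def to_training_tensors(iq: np.ndarray) -> np.ndarray:
    """Split complex IQ into real/imaginary channels for an ML framework."""
    return np.stack([iq.real, iq.imag], axis=-1).astype(np.float32)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    iq = load_slot_iq(num_slots=8, rng=rng)
    x = to_training_tensors(iq)
    print(iq.shape, "->", x.shape)  # (8, 4, 14, 3276) -> (8, 4, 14, 3276, 2)
```

The same pattern applies regardless of which PHY channel is captured: record per-slot tensors with a known layout, then convert them to real-valued arrays before feeding a training framework.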

© Copyright 2024, NVIDIA. Last updated on Jun 6, 2024.