Installation Guide

Integrating and deploying Aerial Research Cloud network for Advanced 5G and 6G

Integrating and deploying Aerial Research Cloud for Advanced 5G and 6G research can be described in the following steps:

  • Chapter 1: Procure all the required hardware based on the published BOM in this document

  • Chapter 2: Configure the network hardware

  • Chapter 3: Install the software to match the published release manifest

  • Chapter 4: Validate the setup by successfully running bi-directional UDP traffic as described

The rest of this document provides step-by-step instructions for staging, integrating, and configuring an early research testbed, and for validating network go-live with IP traffic.
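The go-live validation in the final step exercises bi-directional UDP traffic. As a preview, a minimal sketch using iperf3 is shown below; the server address 12.1.1.2 and the 100 Mbit/s rate are illustrative assumptions, not values from this guide.

```shell
# Server side (the iPerf laptop behind the switch), start a listener:
iperf3 -s

# Client side (a host behind the UE); -u selects UDP, -R reverses direction:
iperf3 -c 12.1.1.2 -u -b 100M -t 60        # uplink test
iperf3 -c 12.1.1.2 -u -b 100M -t 60 -R     # downlink (reverse) test
```

Run the server and client commands on their respective machines; the two client invocations together give the bi-directional check.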

Procure all the hardware listed in the BOM below.

5G Infrastructure Blueprint HW BOM

Note

Unless a use-case-specific solution architecture dictates otherwise, each component is required in a quantity of one.

Aerial gNB

Gigabyte Edge E251-U70 server x1 (CPU: Intel Xeon Gold 6240R, 2.4 GHz, 24C/48T; memory: 96 GB DDR4; storage: 480 GB LiteOn SSD x1; GPU: GA100 x1; NIC: Mellanox CX6-DX MCX623106AE-CDAT x1)

CN

Dell PowerEdge R750 Server

Fronthaul (FH) Switch

Dell PowerSwitch S5248F-ON

Fibrolan Falcon RX

GrandMaster (GM)

QULSAR Qg 2 Multi-Sync Gateway

O-RUs supported

  • 5G Infra Component: Foxconn

  • HW Manifest: RPQN-7801E

  • ORU Configuration: 4T4R

  • Freq Band: 3.7 GHz - 3.8 GHz (indoors)

UEs supported

  • Handset: OnePlus Nord 5G (AC2003, EU/UK model) - SU-MIMO, 2DL/1UL

  • Handset: Oppo Reno5 Pro 5G (model CPH2201) - SU-MIMO, 2DL/1UL

  • Quectel RM500Q-GL UE - SU-MIMO, 2DL/1UL

Cables

Dell C2G 1m LC-LC 50/125 Duplex Multimode OM4 Fiber Cable - Aqua - 3ft – Optical patch cable

NVIDIA MCP1600-C001E30N DAC Cable Ethernet 100GbE QSFP28 1m

Beyondtech 5m (16ft) LC UPC to LC UPC Duplex OM3 Multimode PVC (OFNR) 2.0mm Fiber Optic Patch Cable

CableCreation 3ft Cat5/Cat6 Ethernet Cables

PDUs

Tripp Lite 1.4kW Single-Phase Monitored PDU with LX Platform Interface, 120V Outlets (8 5-15R), 5-15P, 12ft Cord, 1U Rack-Mount, TAA

Transceivers

Finisar SFP-to-RJ45 Transceiver

Intel Ethernet SFP+SR Optics

Dell SFP28-25G-SR Transceiver

Ethernet Switch

Netgear ProSafe Plus JGS524E Rackmount

iPerf Laptop

Connected to the switch (10G ethernet)

To procure the hardware items in the blueprint BOM, please contact the Aerial Research Cloud team at aria@nvidia.com. In the email, please include your full name, company name, preferred email contact, and country/region.

Refer to the tutorials for help with these installation steps.

Configuration Steps

  1. Set up the GrandMaster

  2. Set up the switch

  3. Set up PTP

  4. Set up the Foxconn O-RU

Chapter 2.1 Set up the Qulsar GrandMaster

Step 1.

Follow the user guide to set up the MGMT connection.

image1.png


Step 2.

Set the operating mode to GNSS Only and the other fields as shown, then run Start Engine.

image2.png


Step 3.

Enable the ports on the GrandMaster with the 8275.1 Profile configurations

image3.png


Step 4.

Configure the clock settings as shown:

image4.png


Step 5.

The GPS configuration values were left unchanged from the default settings of the Qg 2.

image5.png


Step 6.

Verify that the GPS Signal reaches the GrandMaster:

image6.png

Chapter 2.2 Switch setup

Chapter 2.2.1 Dell Switch

In the following example the RUs are on ports 1 and 7, the GrandMaster is on port 5, the CN is on ports 11 and 12, and the gNB ports are connected to ports 49 and 51 all on vlan 2.

Set up MGMT access to the switch

Enable PTP on the switch:

OS10# configure terminal
OS10(config)# ptp clock boundary profile g8275.1
OS10(config)# ptp domain 24
OS10(config)# ptp system-time enable
!

Configure the GrandMaster port:

OS10(config)# interface ethernet 1/1/5:1
 no shutdown
 no switchport
 ip address 169.254.2.1/24
 flowcontrol receive off
 ptp delay-req-min-interval -4
 ptp enable
 ptp sync-interval -4
 ptp transport layer2
!

Configure the fronthaul network by creating a VLAN.

Note

If the VLAN ID is changed, remember to modify the ASDK yaml file and the O-RU configuration.

Create vlan 2:

OS10# configure terminal
OS10(config)# interface vlan 2
OS10(conf-if-vl-2)# <165>1 2023-03-16T16:51:36.458730+00:00 OS10 dn_alm 813 - - Node.1-Unit.1:PRI [event], Dell EMC (OS10) %IFM_ASTATE_UP: Interface admin state up :vlan2
OS10(conf-if-vl-2)# show configuration
!
interface vlan2
 no shutdown
OS10(conf-if-vl-2)# exit

Configure the RU port:

OS10(config)# interface ethernet 1/1/1
 mode eth 10g-4x
 no shutdown
 no switchport
 ip address 169.254.2.1/24
 flowcontrol receive off
 ptp delay-req-min-interval -4
 ptp enable
 ptp sync-interval -4
 ptp transport layer2
!

Configure the other ports (repeat as necessary):

RU Port should look like the following:

no shutdown
switchport mode trunk
switchport trunk allowed vlan 2
mtu 8192
speed 10000
flowcontrol receive off
ptp enable
ptp transport layer2
!

Check the PTP status:

OS10# show ptp | no-more
PTP Clock                  : Boundary
Clock Identity             : b0:4f:13:ff:ff:46:63:5f
GrandMaster Clock Identity : fc:af:6a:ff:fe:02:bc:8d
Clock Mode                 : One-step
Clock Quality
  Class                      : 135
  Accuracy                   : <=100ns
  Offset Log Scaled Variance : 65535
Domain                     : 24
Priority1                  : 128
Priority2                  : 128
Profile                    : G8275-1(Local-Priority:-128)
Steps Removed              : 1
Mean Path Delay(ns)        : 637
Offset From Master(ns)     : 1
Number of Ports            : 8
----------------------------------------------------------------------------
Interface        State    Port Identity
----------------------------------------------------------------------------
Ethernet1/1/1:1  Master   b0:4f:13:ff:ff:46:63:5f:1
Ethernet1/1/3:1  Master   b0:4f:13:ff:ff:46:63:5f:3
Ethernet1/1/5:1  Slave    b0:4f:13:ff:ff:46:63:5f:5
Ethernet1/1/7:1  Master   b0:4f:13:ff:ff:46:63:5f:8
Ethernet1/1/11   Master   b0:4f:13:ff:ff:46:63:5f:4
Ethernet1/1/49   Master   b0:4f:13:ff:ff:46:63:5f:9
Ethernet1/1/51   Master   b0:4f:13:ff:ff:46:63:5f:10
Ethernet1/1/54   Master   b0:4f:13:ff:ff:46:63:5f:2
----------------------------------------------------------------------------
Number of slave ports  : 1
Number of master ports : 7


Chapter 2.2.2 Fibrolan Falcon RX Setup

Although the Fibrolan switch has not been qualified in the NVIDIA lab, OAI labs use the following switch and configuration for interoperability.

fibrolan_1.png

To get started follow the Fibrolan Getting Started Guide.

In our setup the Qulsar GrandMaster is connected to port 4, the Aerial SDK server to port 17, and the Foxconn O-RU to port 16 (C/U plane) and port 15 (S/M plane). You can ignore all other ports in Figures A and B below.

VLAN setup

In the following we assume that the VLAN tag for both the control plane and the user plane of the O-RAN CU plane is 2. VLAN 80 is used for everything else.

fibrolan_2.png

Figure A - Vlan Setup

Open the configuration page of the Fibrolan switch and go to Configuration -> VLANs. Port 4 (the Qulsar GrandMaster) needs to be configured in Access mode with the port VLAN set to 80.

fibrolan_3.png

Figure B - Vlan Setup

Use the same configuration for port 15 (RU S/M plane).

Ports 16 and 17 need to be configured in Trunk mode, port VLAN 80, Untag Port VLAN, Allowed VLANs 80,2

DHCP setup

The RU M-plane requires a DHCP server. Go to Configuration -> DHCP -> Server -> Pool and create a new DHCP server with the following settings:

fibrolan_4.png


PTP setup

For the PTP setup, first follow the Fibrolan “PTP Boundary Clock Configuration” guide with the following specific settings:

  • Device Type “Ord-Bound”

  • Profile “G8275.1”

  • Clock domain 24

  • VLAN 80

Also make sure you enable the ports in use (4, 15, 16, and 17 in our case).

We also recommend using “hybrid mode” as the sync mode.

If everything is configured correctly, the Sync Center should be green

fibrolan_5.png

Chapter 2.3 PTP Setup

Step 1.

Enter these commands to configure PTP4L assuming the ens6f0 NIC interface and CPU core 20 are used for PTP:

cat <<EOF | sudo tee /etc/ptp.conf
[global]
priority1 128
priority2 128
domainNumber 24
tx_timestamp_timeout 30
dscp_event 46
dscp_general 46
logging_level 6
verbose 1
use_syslog 0
logMinDelayReqInterval 1
[ens6f0]
logAnnounceInterval -3
announceReceiptTimeout 3
logSyncInterval -4
logMinDelayReqInterval -4
delay_mechanism E2E
network_transport L2
EOF

cat <<EOF | sudo tee /lib/systemd/system/ptp4l.service
[Unit]
Description=Precision Time Protocol (PTP) service
Documentation=man:ptp4l
[Service]
Restart=always
RestartSec=5s
Type=simple
ExecStart=/usr/bin/taskset -c 20 /usr/sbin/ptp4l -f /etc/ptp.conf
[Install]
WantedBy=multi-user.target
EOF

$ sudo systemctl daemon-reload
$ sudo systemctl restart ptp4l.service
$ sudo systemctl enable ptp4l.service


Step 2.

The server will follow the grandmaster clock as shown here:

$ sudo systemctl status ptp4l.service
• ptp4l.service - Precision Time Protocol (PTP) service
     Loaded: loaded (/lib/systemd/system/ptp4l.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2022-02-03 22:41:12 UTC; 5min ago
       Docs: man:ptp4l
   Main PID: 1112 (ptp4l)
      Tasks: 1 (limit: 94582)
     Memory: 812.0K
     CGroup: /system.slice/ptp4l.service
             └─1112 /usr/sbin/ptp4l -f /etc/ptp.conf
Feb 03 22:46:30 dc6-aerial-devkit-17 taskset[1112]: ptp4l[444.474]: rms 5 max 11 freq +2450 +/- 8 delay 259 +/- 1
Feb 03 22:46:31 dc6-aerial-devkit-17 taskset[1112]: ptp4l[445.475]: rms 5 max 12 freq +2447 +/- 9 delay 260 +/- 1
Feb 03 22:46:32 dc6-aerial-devkit-17 taskset[1112]: ptp4l[446.475]: rms 6 max 13 freq +2461 +/- 7 delay 258 +/- 0
Feb 03 22:46:33 dc6-aerial-devkit-17 taskset[1112]: ptp4l[447.475]: rms 5 max 10 freq +2457 +/- 9 delay 260 +/- 0
Feb 03 22:46:34 dc6-aerial-devkit-17 taskset[1112]: ptp4l[448.475]: rms 3 max 6 freq +2454 +/- 4 delay 261 +/- 1
Feb 03 22:46:35 dc6-aerial-devkit-17 taskset[1112]: ptp4l[449.475]: rms 4 max 7 freq +2458 +/- 6 delay 259 +/- 0
Feb 03 22:46:36 dc6-aerial-devkit-17 taskset[1112]: ptp4l[450.475]: rms 4 max 6 freq +2454 +/- 6 delay 259 +/- 1
Feb 03 22:46:37 dc6-aerial-devkit-17 taskset[1112]: ptp4l[451.475]: rms 4 max 8 freq +2452 +/- 6 delay 258 +/- 0
Feb 03 22:46:38 dc6-aerial-devkit-17 taskset[1112]: ptp4l[452.475]: rms 3 max 7 freq +2454 +/- 6 delay 258 +/- 0
Feb 03 22:46:39 dc6-aerial-devkit-17 taskset[1112]: ptp4l[453.475]: rms 6 max 14 freq +2460 +/- 9 delay 258 +/- 1


Step 3.

Enter the commands to turn off NTP:

$ sudo timedatectl set-ntp false
$ timedatectl
               Local time: Thu 2022-02-03 22:30:58 UTC
           Universal time: Thu 2022-02-03 22:30:58 UTC
                 RTC time: Thu 2022-02-03 22:30:58
                Time zone: Etc/UTC (UTC, +0000)
System clock synchronized: no
              NTP service: inactive
          RTC in local TZ: no


Step 4.

Run PHC2SYS as service:

# If more than one instance is already running, kill the existing
# PHC2SYS sessions.
# The command used can be found in /lib/systemd/system/phc2sys.service
# Update the ExecStart line to the following, assuming the ens6f0 interface is used.
$ sudo nano /lib/systemd/system/phc2sys.service
[Unit]
Description=Synchronize system clock or PTP hardware clock (PHC)
Documentation=man:phc2sys
After=ntpdate.service
Requires=ptp4l.service
After=ptp4l.service
[Service]
Restart=always
RestartSec=5s
Type=simple
ExecStart=/usr/sbin/phc2sys -a -r -n 24 -R 256 -u 256
[Install]
WantedBy=multi-user.target

# Note: If there is more than one ptp4l service running on the server, the
# port must be explicitly specified, e.g.:
# ExecStart=/bin/sh -c "/usr/sbin/phc2sys -s /dev/ptp$(ethtool -T ens6f0 | grep PTP | awk '{print $4}') -c CLOCK_REALTIME -n 24 -O 0 -R 256 -u 256"

# Once that file is changed, run the following:
$ sudo systemctl daemon-reload
$ sudo systemctl restart phc2sys.service
# Set to start automatically on reboot
$ sudo systemctl enable phc2sys.service

# Check that the service is active and has a low rms value (<30):
$ sudo systemctl status phc2sys.service
• phc2sys.service - Synchronize system clock or PTP hardware clock (PHC)
     Loaded: loaded (/lib/systemd/system/phc2sys.service; disabled; vendor preset: enabled)
     Active: inactive (dead)
       Docs: man:phc2sys

# If the service is already running as below then you don't need to change
# anything:
$ sudo systemctl status phc2sys.service
• phc2sys.service - Synchronize system clock or PTP hardware clock (PHC)
     Loaded: loaded (/lib/systemd/system/phc2sys.service; disabled; vendor preset: enabled)
     Active: active (running) since Fri 2021-04-30 14:28:57 UTC; 17s ago
       Docs: man:phc2sys
   Main PID: 1180983 (sh)
      Tasks: 2 (limit: 94582)
     Memory: 2.2M
     CGroup: /system.slice/phc2sys.service
             └─1181087 /usr/sbin/phc2sys -a -r -n 24 -R 256 -u 256
Apr 30 14:29:05 aerial-devkit-16 phc2sys[1181087]: [53625.834] CLOCK_REALTIME rms 10 max 24 freq +35384 +/- 42 delay 1769 +/- 11
Apr 30 14:29:06 aerial-devkit-16 phc2sys[1181087]: [53626.850] CLOCK_REALTIME rms 9 max 26 freq +35355 +/- 41 delay 1774 +/- 9
Apr 30 14:29:07 aerial-devkit-16 phc2sys[1181087]: [53627.866] CLOCK_REALTIME rms 8 max 23 freq +35378 +/- 23 delay 1778 +/- 7
Apr 30 14:29:08 aerial-devkit-16 phc2sys[1181087]: [53628.881] CLOCK_REALTIME rms 9 max 22 freq +35358 +/- 26 delay 1761 +/- 13
Apr 30 14:29:09 aerial-devkit-16 phc2sys[1181087]: [53629.897] CLOCK_REALTIME rms 8 max 20 freq +35372 +/- 14 delay 1760 +/- 12
Apr 30 14:29:10 aerial-devkit-16 phc2sys[1181087]: [53630.913] CLOCK_REALTIME rms 9 max 25 freq +35374 +/- 15 delay 1764 +/- 12
Apr 30 14:29:11 aerial-devkit-16 phc2sys[1181087]: [53631.929] CLOCK_REALTIME rms 9 max 21 freq +35371 +/- 21 delay 1759 +/- 8
Apr 30 14:29:12 aerial-devkit-16 phc2sys[1181087]: [53632.945] CLOCK_REALTIME rms 9 max 23 freq +35364 +/- 22 delay 1760 +/- 9
Apr 30 14:29:13 aerial-devkit-16 phc2sys[1181087]: [53633.961] CLOCK_REALTIME rms 9 max 23 freq +35373 +/- 16 delay 1756 +/- 9
Apr 30 14:29:14 aerial-devkit-16 phc2sys[1181087]: [53634.976] CLOCK_REALTIME rms 10 max 24 freq +35354 +/- 33 delay 1757 +/- 9


Step 5.

Verify whether the system clock is synchronized:

$ timedatectl
               Local time: Thu 2022-02-03 22:30:58 UTC
           Universal time: Thu 2022-02-03 22:30:58 UTC
                 RTC time: Thu 2022-02-03 22:30:58
                Time zone: Etc/UTC (UTC, +0000)
System clock synchronized: yes
              NTP service: inactive
          RTC in local TZ: no

Chapter 2.4 Set up the Foxconn O-RU

image7.jpg

Foxconn RPQN-7801E

Connections and Settings

image8.png

Connections:

  • 10G SFP: C/U plane (will support S/M plane after FW upgrade)

  • 1G RJ45: S/M plane

  • 10G RJ45: PoE only

  • Micro-USB: USB to serial for debugging (115200, 8, 1, none, flow control off)
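The debug console can be opened with any terminal emulator at the settings listed above; a sketch with minicom follows. The /dev/ttyUSB0 device path is an assumption; check the kernel log for the actual device after plugging in.

```shell
# 115200 baud, 8 data bits, 1 stop bit, no parity, flow control off.
# /dev/ttyUSB0 is an assumption; run `dmesg | grep tty` to find the real device.
sudo minicom -D /dev/ttyUSB0 -b 115200
# or, with screen:
sudo screen /dev/ttyUSB0 115200
```

Either command attaches to the same serial console; exit minicom with Ctrl-A X, screen with Ctrl-A K.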

GrandMaster settings (Qulsar):

  • PTP timing port: Disable VLAN

  • Two steps: OFF

  • Domain number: 24 (this must also be configured on the O-RU)

  • IPv4, Unicast, etc.

/home/root/sdcard/RRHconfig_xran.xml:

  • RRH_PTPV2_GRAND_MASTER_IP = 20.0.0.8

  • RRH_PTPV2_SUB_DOMAIN_NUM = 24

  • C/U plane VLAN tag

  • RRH_LO_FREQUENCY_KHZ = 3750000

Configure VLAN and IP address on the gNB server

  1. Add these instructions to the server startup script ‘/etc/rc.local’ so they are run automatically on reboot.

  2. Configure this on the fronthaul port.

  3. Make sure you use a unique IP address; the address below is only an example.

sudo ip link add link ens6f0 name ens6f0.2 type vlan id 2
sudo ip addr add 169.254.1.103/24 dev ens6f0.2
sudo ip link set up ens6f0.2


O-RU M-Plane Setup

Add the following to the bottom of /etc/profile, and comment out the line with set_qse.sh if there is one. The interface should initially be set to eth0 for firmware version 1, and to qse-eth after upgrading to firmware version ≥ 2.

interface=eth0
vlanid=2
ipLastOctet=20
ip link add link ${interface} name ${interface}.$vlanid type vlan id $vlanid
ip addr flush dev ${interface}
ip addr add 169.254.0.0/24 dev ${interface}
ip addr add 169.254.1.${ipLastOctet}/24 dev ${interface}.$vlanid
ip link set up ${interface}.$vlanid

Reboot the O-RU using the command ./reboot.sh and check the network configuration:

# ip r
169.254.1.0/24 dev eth0.2 src 169.254.1.20


Firmware Update

The Foxconn O-RU needs to be upgraded to version 2.6.9 to support the M- and S-planes on the 10G interface.

The following steps should be executed on the serial port.

  1. Download the install_eng_v3_1_6_1q_524_202207260927.run and install_eng_v2_6_9q_524.run from Mantis.

  2. Copy the executables from the gNB to the O-RU using the commands below:

scp -oCiphers=aes128-ctr -P 830 install_eng_v2_6_9q_524.run root@169.254.1.20:/home/root/test/
scp -oCiphers=aes128-ctr -P 830 install_eng_v3_1_6_1q_524_202207260927.run root@169.254.1.20:/home/root/test/

  3. Execute install_eng_v3_1_6_1q_524_202207260927.run under /home/root/test first and wait for the reboot.

  4. Execute install_eng_v2_6_9q_524.run under /home/root/test and wait for the reboot.

  5. After the above steps, the RU firmware is upgraded to v2.6.9q.524 with the OAM packages installed.

    Run the following to check the version:

root@ae-oru-2:~/test# cat version.txt
branch: 328-change_default_clock_out_to_10mhz
version: 60635d6be38bd0480968c344d5ecc3aec1a29fe1
tag: v2.6.9q.524-oam

  6. Change /etc/profile to reflect the correct interface and reboot:


interface=qse-eth

  7. Confirm that the correct interface is set to VLAN 2:

# ip r
169.254.1.0/24 dev qse-eth.2 src 169.254.1.20

  8. Confirm that you can ping and SSH to the O-RU using this interface:

$ ping 169.254.1.20
PING 169.254.1.20 (169.254.1.20) 56(84) bytes of data.
64 bytes from 169.254.1.20: icmp_seq=1 ttl=64 time=0.165 ms
64 bytes from 169.254.1.20: icmp_seq=2 ttl=64 time=0.160 ms
64 bytes from 169.254.1.20: icmp_seq=3 ttl=64 time=0.148 ms

$ ssh root@169.254.1.20
root@169.254.1.20's password:
Last login: Thu Apr 20 16:40:15 2023 from 169.254.1.103
ip: RTNETLINK answers: File exists
ip: RTNETLINK answers: File exists
root@arria10:~/test#

Update O-RU configuration

Update the configuration in /home/root/sdcard/RRHconfig_xran.xml:

root@arria10:~/test# grep -v '<!-' ../sdcard/RRHconfig_xran.xml
RRH_DST_MAC_ADDR = 08:c0:eb:71:e7:d4   # To match fronthaul interface of DU
RRH_SRC_MAC_ADDR = 6C:AD:AD:00:04:6C   # To match qse-eth of RU
RRH_EN_EAXC_ID = 0
RRH_EAXC_ID_TYPE1 = 0x0, 0x1, 0x2, 0x3
RRH_EAXC_ID_TYPE3 = 0x8, 0x9, 0xA, 0xB
RRH_EN_SPC = 1
RRH_RRH_LTE_OR_NR = 1
RRH_TRX_EN_BIT_MASK = 0x0f
RRH_RF_EN_BIT_MASK = 0x0f
RRH_CMPR_HDR_PRESENT = 0
RRH_CMPR_TYPE = 1, 1
RRH_CMPR_BIT_LENGTH = 9, 9
RRH_UL_INIT_SYM_ID = 0
RRH_TX_TRUNC_BITS = 4
RRH_RX_TRUNC_BITS = 4
RRH_MAX_PRB = 273
RRH_C_PLANE_VLAN_TAG = 0x0002   # To match vlan id set in cuphycontroller yaml file
RRH_U_PLANE_VLAN_TAG = 0x0002   # To match vlan id set in cuphycontroller yaml file
RRH_SLOT_TICKS_IN_SEC = 2000
RRH_SLOT_PERIOD_IN_SAMPLE = 61440
RRH_LO_FREQUENCY_KHZ = 3750000, 0
RRH_TX_POWER = 24, 24
RRH_TX_ATTENUATION = 12.0, 12.0, 12.0, 12.0
RRH_RX_ATTENUATION = 0.0, 0.0, 0.0, 0.0
RRH_BB_GENERAL_CTRL = 0x0, 0x0, 0x0, 0x0
RRH_RF_GENERAL_CTRL = 0x3, 0x1, 0x0, 0x0
RRH_PTPV2_GRAND_MASTER_MODE = 3
RRH_PTPV2_JITTER_LEVEL = 0
RRH_PTPV2_VLAN_ID = 0
RRH_PTPV2_IP_MODE = 4
RRH_PTPV2_GRAND_MASTER_IP = 192.167.27.150
RRH_PTPV2_SUB_DOMAIN_NUM = 24
RRH_PTPV2_ACCEPTED_CLOCK_CLASS = 135
RRH_TRACE_PERIOD = 10


Reboot O-RU

cd /home/root/test/
./reboot

Run the following to enable the configuration:

cd /home/root/test/
./init_rrh_config_enable_cuplane

At this point the console becomes unresponsive and fills with prints related to PTP, AFE initialization, and finally packet counters.

This section describes how to set up the Aerial private 5G network, which consists of:

  • Aerial SDK L1

  • Remaining components of OAI gNB

  • OAI Core Network

  • User Equipment (UE)

  • Edge server applications (e.g., iPerf)

image5b.png

These instructions assume that the core network and gNB can be deployed on the same host server.

ARC Software Release Manifest

  • Aerial SDK (ASDK) PHY: 22-4

  • OAI gNB: OAI_Aerial_v1.0

  • OAI CN: 1.5

Set up Aerial SDK L1

Please follow the step-by-step installation guide for cuBB located at the NVIDIA Developer Zone - Aerial SDK. Refer to the ASDK release listed in the ARC software release manifest above and find the instructions in the archive section of the link below.

https://developer.nvidia.com/docs/gputelecom/aerial-sdk/text/cubb_install/index.html

Running the cuBB docker container

GPU_FLAG="--gpus all"
cuBB_SDK=/opt/nvidia/cuBB
AERIAL_CUBB_CONTAINER=cuBB
AERIAL_CUBB_IMAGE=c_aerial_aerial:22.4

sudo docker run --detach --privileged \
    -it $GPU_FLAG --name cuBB \
    --hostname c_aerial_$USER \
    --add-host c_aerial_$USER:127.0.0.1 \
    --network host \
    --shm-size=4096m \
    -e cuBB_SDK=$cuBB_SDK \
    -w $cuBB_SDK \
    -v $(echo ~):$(echo ~) \
    -v /dev/hugepages:/dev/hugepages \
    -v /usr/src:/usr/src \
    -v /lib/modules:/lib/modules \
    -v ~/share:/opt/cuBB/share \
    --userns=host \
    --ipc=host \
    -v /var/log/aerial:/var/log/aerial \
    $AERIAL_CUBB_IMAGE

docker exec -it $AERIAL_CUBB_CONTAINER bash

cuBB Installation Guide: From System Requirements to Troubleshooting

Since the cuBB 22.2.2 release, test vectors (TVs) are not included in the SDK. The developer needs to generate the TV files before running cuPHY examples or the cuBB end-to-end test.

Using Aerial Python mcore Module

No MATLAB license is required to generate TV files using the Aerial Python mcore module. The cuBB container already has aerial_mcore installed. To generate the TV files, run the following commands inside the Aerial container.

Note

TV generation may take a few hours on the devkit with the current isolcpus parameter setting in the kernel command line. Please also ensure the host has sufficient space for 111 GB of TV files.
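Given the disk-space note above, a quick pre-check can save an aborted multi-hour run. A small sketch follows; the 111 GB figure is from the note, and checking the current directory's filesystem is an assumption about where the TV files will land.

```shell
# Compare free space on the current filesystem against the ~111 GB of TV files
need_gib=111
avail_kib=$(df --output=avail -k . | tail -1 | tr -d ' ')
avail_gib=$((avail_kib / 1024 / 1024))
if [ "$avail_gib" -ge "$need_gib" ]; then
    echo "OK: ${avail_gib} GiB free"
else
    echo "WARNING: only ${avail_gib} GiB free, need ~${need_gib} GiB"
fi
```

Run it from the directory where GPU_test_input/ will be written before starting the regression.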

cd ${cuBB_SDK}/5GModel/aerial_mcore/examples
source ../scripts/setup.sh
export REGRESSION_MODE=1
time python3 ./example_5GModel_regression.py allChannels
echo $?
ls -alF GPU_test_input/
du -h GPU_test_input/

Example output is shown below. The “real” time is less than one hour on a 24-core x86 host. “echo $?” shows the exit code of the process, which should be 0; a non-zero exit code indicates a failure.

Channel  Compliance_Test  Error  Test_Vector  Error  Performance_Test  Fail
------------------------------------------------------------------------------
SSB             37          0         42        0           0            0
PDCCH           71          0         80        0           0            0
PDSCH          274          0        286        0           0            0
CSIRS           86          0         87        0           0            0
DLMIX            0          0       1049        0           0            0
PRACH           60          0         60        0          48            0
PUCCH          469          0        469        0          96            0
PUSCH          388          0        398        0          41            0
SRS            125          0        125        0           0            0
ULMIX            0          0        576        0           0            0
BFW             58          0         58        0           0            0
------------------------------------------------------------------------------
Total         1568          0       3230        0         185            0
Total time for runRegression is 2147 seconds
Parallel pool using the 'local' profile is shutting down.

real    36m51.931s
user    585m1.704s
sys     10m28.322s

Generate the launch pattern for each test case using cubb_scripts:

cd $cuBB_SDK
cd cubb_scripts
python3 auto_lp.py -i ../5GModel/aerial_mcore/examples/GPU_test_input -t launch_pattern_nrSim.yaml

Then copy the launch pattern and TV files to testVectors repo.

cd $cuBB_SDK
cp ./5GModel/aerial_mcore/examples/GPU_test_input/TVnr_* ./testVectors/.
cp ./5GModel/aerial_mcore/examples/GPU_test_input/launch_pattern* ./testVectors/multi-cell/.


Using Matlab

To generate TV files using Matlab, run the following command in Matlab:

cd('nr_matlab');
startup;
[nTC, errCnt] = runRegression({'TestVector'}, {'allChannels'}, 'compact', [0, 1]);

All the cuPHY TVs are generated and stored under nr_matlab/GPU_test_input.

Generate the launch pattern for each test case using cubb_scripts:

cd $cuBB_SDK
cd cubb_scripts
python3 auto_lp.py -i ../5GModel/nr_matlab/GPU_test_input -t launch_pattern_nrSim.yaml

Copy the launch pattern and TV files to testVectors repo.

cd $cuBB_SDK
cp ./5GModel/nr_matlab/GPU_test_input/TVnr_* ./testVectors/.
cp ./5GModel/nr_matlab/GPU_test_input/launch_pattern* ./testVectors/multi-cell/.

PTP slave setup

Please refer to installation instructions in the Aerial SDK documentation based on ARC release manifest above. In the below link, replace “aerial-sdk-X-X” with the appropriate ASDK release (e.g. “aerial-sdk-22-4”).

https://developer.nvidia.com/docs/gputelecom/aerial-sdk/aerial-sdk-archive/aerial-sdk-X-X/text/cubb_install/installing_tools.html#install-ptp4l-and-phc2sys

Set up OAI gNB

Install Ubuntu on both servers

  1. https://releases.ubuntu.com/20.04.4/ubuntu-20.04.4-desktop-amd64.iso

  2. Run the following:

    sudo apt update
    sudo apt dist-upgrade
    sudo apt autoremove

Prepare gNB docker images

Build gNB docker image

Check out the OpenAirInterface5G repository

git clone https://gitlab.eurecom.fr/rssilva/openairinterface5g.git
cd openairinterface5g
git checkout OAI_Aerial_v1.0

Build the docker image


docker build . -f docker/Dockerfile.aerial.ubuntu20


gNB configuration file

vnf.sa.band78.fr1.273PRB.Aerial.conf

A docker-compose yaml file and an entrypoint script for the docker container are targeted for a future release.

Set up OAI CN5G

Do this iptables setup below every time after a system reboot. It is also possible to make this permanent in Ubuntu system configuration.

# On the CN5G server, configure it to allow incoming traffic by adding this
# rule to iptables. On CN5G, upon startup:
sudo sysctl net.ipv4.conf.all.forwarding=1
sudo iptables -P FORWARD ACCEPT
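The text above notes that this setup can be made permanent. One way to do that on Ubuntu is sketched below; the iptables-persistent package and the sysctl.d file name are assumptions, not part of this guide.

```shell
# Persist IPv4 forwarding across reboots via a sysctl drop-in
echo 'net.ipv4.conf.all.forwarding=1' | sudo tee /etc/sysctl.d/99-cn5g.conf

# Persist the iptables policy (Debian/Ubuntu iptables-persistent package)
sudo apt install -y iptables-persistent
sudo iptables -P FORWARD ACCEPT
sudo netfilter-persistent save
```

After this, the forwarding and firewall settings survive a system reboot without manual re-entry.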

Install the core network by following these steps.

The user-configurable files are:

  • ~/oai-cn5g-fed/docker-compose/docker-compose-basic-nrf.yaml

  • ~/oai-cn5g-fed/docker-compose/database/oai_db.sql

Configuring OAI gNB and CN5G

To clarify which address is which in the example configuration settings and commands below, we assume the gNB and CN5G servers have the following interface names and IP addresses.

CN5G Server

eno1: 10.31.66.x    = SSH management port for terminal
eno2: 169.254.200.6 = BH connection on SFP switch for gNB-CN5G traffic

gNB Server

eno1: 10.31.66.x          = SSH management port for terminal
ens6f0: b8:ce:f6:4e:75:40 = FH MAC address
ens6f0.2: 169.254.1.105   = FH IP address
ens6f1: 169.254.200.5     = BH connection on SFP switch for gNB-CN5G traffic

Set a static route on the gNB

On the gNB server, add this static route for a path to the CN5G server. This route must be re-applied after each server power-on.

# Syntax:
sudo ip route add 192.168.70.128/26 via <CN5G IP> dev <gNB interface for CN5G>
# Example:
sudo ip route add 192.168.70.128/26 via 169.254.200.6 dev ens6f1
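Since the route must be re-applied after every power-on, one option is to append it to /etc/rc.local, the same mechanism this guide already uses for the gNB VLAN setup. A sketch, assuming /etc/rc.local exists and is executable:

```shell
# Append the CN5G route so it is restored at boot; addresses match the
# example route above
echo 'ip route add 192.168.70.128/26 via 169.254.200.6 dev ens6f1' | sudo tee -a /etc/rc.local
```

Verify after the next reboot with `ip route | grep 192.168.70.128`.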

Set the CN5G server the gNB uses for AMF

Edit the gNB configuration file: targets/PROJECTS/GENERIC-NR-5GC/CONF/vnf.sa.band78.fr1.273PRB.Aerial.conf

Below is an example with lab-specific network parameters. Your IP address and interface names may differ.

GNB_INTERFACE_NAME_FOR_NG_AMF = "ens6f1";        # gNB-side interface name of the SFP port toward CN (was eno1)
GNB_IPV4_ADDRESS_FOR_NG_AMF   = "169.254.200.5"; # gNB-side IP address of the interface above (was 172.21.16.130)
GNB_INTERFACE_NAME_FOR_NGU    = "ens6f1";        # gNB-side interface name of the SFP port toward CN (was eno1)
GNB_IPV4_ADDRESS_FOR_NGU      = "169.254.200.5"; # Same IP as GNB_IPV4_ADDRESS_FOR_NG_AMF above (was 172.21.16.130)

Remove SD parameter from gNB configuration file

In the same gNB configuration file, if the line “sd = 0x1;” exists, delete it when using the latest CN5G.

plmn_list = ({
    mcc = 001;
    mnc = 01;
    mnc_length = 2;
    snssaiList = (
        {
            sst = 1;
            sd = 0x1;  // 0 false, else true
        }
    );
});


Running CN5G

To start CN5G

cd ~/oai-cn5g-fed/docker-compose
python3 core-network.py --type start-basic --scenario 1

Or alternatively:


docker-compose up -d


To Stop CN5G

cd ~/oai-cn5g-fed/docker-compose
python3 core-network.py --type stop-basic --scenario 1

Or alternatively:


docker-compose down


To monitor CN5G logs while running

docker logs oai-amf -f

To capture PCAPs

docker exec -it oai-amf /bin/bash
apt update && apt install tcpdump -y
tcpdump -i any -w /tmp/amf.pcap

Then copy the pcap out of the container:


docker cp oai-amf:/tmp/amf.pcap .

Example Screenshot of Starting CN5G

Copy
Copied!
            

aerial@:~/oai-cn5g-fed/docker-compose$ python3 core-network.py --type start-basic --scenario 1
[2022-11-16 01:17:22,058] root:DEBUG: Starting 5gcn components... Please wait....
[2022-11-16 01:17:22,058] root:DEBUG: docker-compose -f docker-compose-basic-nrf.yaml up -d
Creating network "demo-oai-public-net" with driver "bridge"
Pulling mysql (mysql:5.7)...
Creating oai-nrf ... done
Creating mysql ... done
Creating oai-udr ... done
Creating oai-udm ... done
Creating oai-ausf ... done
Creating oai-amf ... done
Creating oai-smf ... done
Creating oai-spgwu ... done
Creating oai-ext-dn ... done
5.7: Pulling from library/mysql
Digest: sha256:0e3435e72c493aec752d8274379b1eac4d634f47a7781a7a92b8636fa1dc94c1
Status: Downloaded newer image for mysql:5.7
[2022-11-16 01:17:35,693] root:DEBUG: OAI 5G Core network started, checking the health status of the containers... takes few secs....
[2022-11-16 01:17:35,693] root:DEBUG: docker-compose -f docker-compose-basic-nrf.yaml ps -a
[2022-11-16 01:17:48,674] root:DEBUG: All components are healthy, please see below for more details....
Name         Command                          State          Ports
-----------------------------------------------------------------------------------------
mysql        docker-entrypoint.sh mysqld      Up (healthy)   3306/tcp, 33060/tcp
oai-amf      /bin/bash /openair-amf/bin ...   Up (healthy)   38412/sctp, 80/tcp, 9090/tcp
oai-ausf     /bin/bash /openair-ausf/bi ...   Up (healthy)   80/tcp
oai-ext-dn   /bin/bash -c ip route add ...    Up (healthy)
oai-nrf      /bin/bash /openair-nrf/bin ...   Up (healthy)   80/tcp, 9090/tcp
oai-smf      /bin/bash /openair-smf/bin ...   Up (healthy)   80/tcp, 8080/tcp, 8805/udp
oai-spgwu    /bin/bash /openair-spgwu-t ...   Up (healthy)   2152/udp, 8805/udp
oai-udm      /bin/bash /openair-udm/bin ...   Up (healthy)   80/tcp
oai-udr      /bin/bash /openair-udr/bin ...   Up (healthy)   80/tcp
[2022-11-16 01:17:48,674] root:DEBUG: Checking if the containers are configured....
[2022-11-16 01:17:48,674] root:DEBUG: Checking if AMF, SMF and UPF registered with nrf core network....
[2022-11-16 01:17:48,674] root:DEBUG: curl -s -X GET http://192.168.70.130/nnrf-nfm/v1/nf-instances?nf-type="AMF" | grep -o "192.168.70.132"
192.168.70.132
[2022-11-16 01:17:48,692] root:DEBUG: curl -s -X GET http://192.168.70.130/nnrf-nfm/v1/nf-instances?nf-type="SMF" | grep -o "192.168.70.133"
192.168.70.133
[2022-11-16 01:17:48,708] root:DEBUG: curl -s -X GET http://192.168.70.130/nnrf-nfm/v1/nf-instances?nf-type="UPF" | grep -o "192.168.70.134"
192.168.70.134
[2022-11-16 01:17:48,718] root:DEBUG: Checking if AUSF, UDM and UDR registered with nrf core network....
[2022-11-16 01:17:48,718] root:DEBUG: curl -s -X GET http://192.168.70.130/nnrf-nfm/v1/nf-instances?nf-type="AUSF" | grep -o "192.168.70.138"
192.168.70.138
[2022-11-16 01:17:48,733] root:DEBUG: curl -s -X GET http://192.168.70.130/nnrf-nfm/v1/nf-instances?nf-type="UDM" | grep -o "192.168.70.137"
192.168.70.137
[2022-11-16 01:17:48,747] root:DEBUG: curl -s -X GET http://192.168.70.130/nnrf-nfm/v1/nf-instances?nf-type="UDR" | grep -o "192.168.70.136"
192.168.70.136
[2022-11-16 01:17:48,758] root:DEBUG: AUSF, UDM, UDR, AMF, SMF and UPF are registered to NRF....
[2022-11-16 01:17:48,758] root:DEBUG: Checking if SMF is able to connect with UPF....
[2022-11-16 01:17:48,829] root:DEBUG: UPF did answer to N4 Association request from SMF....
[2022-11-16 01:17:48,866] root:DEBUG: SMF receiving heathbeats from UPF....
[2022-11-16 01:17:48,867] root:DEBUG: OAI 5G Core network is configured and healthy....

aerial@:~/oai-cn5g-fed/docker-compose$ docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                        PORTS                          NAMES
c6a7eca08187   trf-gen-cn5g:latest      "/bin/bash -c ' ip r…"   About a minute ago   Up About a minute (healthy)                                  oai-ext-dn
5fa931ffb5f1   oai-spgwu-tiny:develop   "/bin/bash /openair-…"   About a minute ago   Up About a minute (healthy)   2152/udp, 8805/udp             oai-spgwu
70b48ac70b63   oai-smf:develop          "/bin/bash /openair-…"   About a minute ago   Up About a minute (healthy)   80/tcp, 8080/tcp, 8805/udp     oai-smf
f18566936f62   oai-amf:develop          "/bin/bash /openair-…"   About a minute ago   Up About a minute (healthy)   80/tcp, 9090/tcp, 38412/sctp   oai-amf
a75c40af3268   oai-ausf:develop         "/bin/bash /openair-…"   About a minute ago   Up About a minute (healthy)   80/tcp                         oai-ausf
a3d796819591   oai-udm:develop          "/bin/bash /openair-…"   About a minute ago   Up About a minute (healthy)   80/tcp                         oai-udm
5442e9a1a2d8   oai-udr:develop          "/bin/bash /openair-…"   About a minute ago   Up About a minute (healthy)   80/tcp                         oai-udr
7bfb07becff3   mysql:5.7                "docker-entrypoint.s…"   About a minute ago   Up About a minute (healthy)   3306/tcp, 33060/tcp            mysql
ea55f52bfcc6   oai-nrf:develop          "/bin/bash /openair-…"   About a minute ago   Up About a minute (healthy)   80/tcp, 9090/tcp               oai-nrf
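When scripting a go/no-go check around the startup output above, the health column can be parsed directly. A minimal sketch, where the sample text stands in for real `docker ps` output on the core-network host:

```shell
# Sketch: go/no-go check on `docker ps`-style output. SAMPLE is
# illustrative text, not captured from a live system; on the host
# you would feed the real `docker ps` output instead.
SAMPLE='oai-amf   Up About a minute (healthy)
oai-smf   Up About a minute (healthy)
oai-spgwu Up About a minute (healthy)'
# grep exits non-zero when the count is 0, hence the `|| true`
UNHEALTHY=$(printf '%s\n' "$SAMPLE" | grep -cv '(healthy)' || true)
if [ "$UNHEALTHY" -eq 0 ]; then
  echo "all containers healthy"
else
  echo "$UNHEALTHY container(s) not healthy"
fi
```

This is the same criterion the `core-network.py` script reports ("All components are healthy"), reduced to a one-line grep for use in automation.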


Step 1: Add the SIM User Profile

Modify:

  • oai_db.sql (with a plain text editor)

    There are currently 3 UEs pre-configured here; search for 001010000000001 to find them, and add or edit entries as needed.

  • docker-compose-basic-nrf.yaml

    MCC, MNC, OPERATOR_KEY (these must be changed in several places in the file)

  • On the gNB server, change the MCC and MNC in the gNB config file ./targets/PROJECTS/GENERIC-NR-5GC/CONF/vnf.sa.band78.fr1.273PRB.Aerial.conf


plmn_list = ({
-  mcc = 208;
-  mnc = 98;
+  mcc = 001;
+  mnc = 01;
   mnc_length = 2;
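For the oai_db.sql edit in Step 1, the new row can be generated rather than typed by hand. A minimal sketch, assuming the AuthenticationSubscription column layout recalled from the OAI CN5G schema (verify the column names against your copy of oai_db.sql before use; the IMSI below is a hypothetical new subscriber):

```shell
# Sketch: generate an extra subscriber row for oai_db.sql.
# Column names are assumptions from the OAI CN5G schema -- check
# them against the existing INSERT statements in your oai_db.sql.
IMSI="001010000000002"                  # hypothetical new IMSI
KI="fec86ba6eb707ed08905757b1bb44b8f"   # must match the SIM's Ki
OPC="C42449363BBAD02B66D16BC975D77CC1"  # must match the SIM's OPc

cat > /tmp/add_subscriber.sql <<EOF
INSERT INTO AuthenticationSubscription
  (ueid, authenticationMethod, encPermanentKey, protectionParameterId,
   sequenceNumber, authenticationManagementField, algorithmId, encOpcKey)
VALUES
  ('${IMSI}', '5G_AKA', '${KI}', '${KI}',
   '{"sqn": "000000000020", "sqnScheme": "NON_TIME_BASED", "lastIndexes": {"ausf": 0}}',
   '8000', 'milenage', '${OPC}');
EOF
grep -c "${IMSI}" /tmp/add_subscriber.sql
```

The Ki and OPc values must match what is programmed onto the SIM in Step 2, otherwise NAS authentication will fail.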


Step 2: Set Up the UE and SIM Card

For reference, see:

*SIM cards – 4G and 5G reference software (open-cells.com)*

Program the SIM card with the Open Cells Project application “uicc-v2.6”: https://open-cells.com/d5138782a8739209ec5760865b1e53b0/uicc-v2.6.tgz

Use the ADM code specific to the SIM card. If the wrong ADM code is entered 8 times, the SIM card is permanently locked.


sudo ./program_uicc --adm 12345678 --imsi 001010000000001 --isdn 00000001 --acc 0001 \
     --key fec86ba6eb707ed08905757b1bb44b8f --opc C42449363BBAD02B66D16BC975D77CC1 \
     --spn "OpenAirInterface" --authenticate
Existing values in USIM
ICCID: 89860061100000000191
WARNING: iccid luhn encoding of last digit not done
USIM IMSI: 208920100001191
USIM MSISDN: 00000191
USIM Service Provider Name: OpenCells191
Setting new values
Reading UICC values after uploading new values
ICCID: 89860061100000000191
WARNING: iccid luhn encoding of last digit not done
USIM IMSI: 001010000000001
USIM MSISDN: 00000001
USIM Service Provider Name: OpenAirInterface
Succeeded to authentify with SQN: 64
set HSS SQN value as: 96

CUE Configuration Setup

Install the “Magic iPerf” application on the UE:

  1. To test with the CUE, a test SIM card with Milenage support is required. The following must be provisioned on the SIM and must match the core network settings: mcc, mnc, IMSI, Ki, OPc

  2. The APN on the CUE should be configured according to the core network settings.

  3. Set the DNS (the core network should assign the mobile IP address and DNS; if no DNS is assigned, set one with another Android app).
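The "must match" requirement in item 1 is a frequent source of failed attaches. A minimal sketch of a consistency check, assuming the 001/01 test PLMN used elsewhere in this guide (the IMSI must begin with MCC followed by MNC):

```shell
# Sketch: sanity-check that the SIM IMSI begins with the PLMN
# (MCC + MNC) configured in the core and gNB. Values here are the
# test PLMN and IMSI from this guide, not from a live deployment.
MCC="001"
MNC="01"
IMSI="001010000000001"
case "$IMSI" in
  "${MCC}${MNC}"*) echo "IMSI matches PLMN ${MCC}/${MNC}" ;;
  *)               echo "IMSI does NOT match PLMN ${MCC}/${MNC}" ;;
esac
```

Ki and OPc cannot be read back from the SIM, so they must be checked against the provisioning records instead.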

Step 3: Running End-to-End OTA

This section describes how to run end-to-end traffic from the UE to the edge core network.

Start OAI CN5G Core Network

Start CN5G Network

sudo sysctl net.ipv4.conf.all.forwarding=1
sudo iptables -P FORWARD ACCEPT
cd ~/oai-cn5g-fed/docker-compose
python3 core-network.py --type start-basic --scenario 1


Start CN5G Edge Application

After the CN5G is started, use the oai-ext-dn container to run iperf:


docker exec -it oai-ext-dn /bin/bash

Start NVIDIA Aerial cuBB on the gNB


# Run on host: start a docker terminal
docker exec -it cuBB /bin/bash

# Run in docker container
export CUDA_DEVICE_MAX_CONNECTIONS=16
$cuBB_SDK/build/cuPHY-CP/cuphycontroller/examples/cuphycontroller P5G

# Wait until the console log shows:
====> PhyDriver initialized!
16:29:35.913840 C [NVIPC:DEBUG] ipc_debug_open: pcap enabled: fapi_type=1 fapi_tb_loc=1
16:29:36.141657 C [NVIPC:SHM] shm_ipc_open: forward_enable=0 fw_max_msg_buf_count=0 fw_max_data_buf_count=0
16:29:36.153808 C [CTL.SCF] cuPHYController configured for 1 cells
16:29:36.153816 C [CTL.SCF] ====> cuPHYController initialized, L1 is ready!
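The "wait until the log shows L1 is ready" step above can be scripted when automating bring-up. A minimal sketch, assuming the cuphycontroller output has been redirected to a log file (the path is hypothetical; here the file is pre-populated with the marker line so the loop terminates):

```shell
# Sketch: block until the "L1 is ready" marker appears in a saved
# cuphycontroller log before launching the L2 stack. The log path
# is a hypothetical choice; in a real run you would redirect the
# cuphycontroller stdout to it instead of writing it here.
L1_LOG=/tmp/cuphycontroller.log
printf '%s\n' '====> cuPHYController initialized, L1 is ready!' > "$L1_LOG"
until grep -q 'L1 is ready' "$L1_LOG" 2>/dev/null; do
  sleep 1
done
echo "L1 ready, safe to start nr-softmodem"
```

Starting the OAI L2 stack before this marker appears leaves nr-softmodem with no L1 to attach to, so gating on the log line is a cheap safeguard.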


Start OAI gNB L2 Stack on the gNB

Start up the OAI container:


docker run -dP --privileged --ipc container:cuBB \
    --gpus all --network host --shm-size=4096m -it \
    -v /lib/modules:/lib/modules \
    -v /dev/hugepages:/dev/hugepages \
    -v /usr/src:/usr/src \
    -v ~/openairinterface5g:/opt/oai/ \
    -v ~/share:/opt/nvidia/cuBB/share \
    --cpuset-cpus=6-13 \
    --name i_oai_aerial c_oai_aerial:latest

Then start the OAI nr-softmodem: enter the container and run it using the configuration file mounted from the host.


GPU_FLAG="--gpus all"
OAI_GNB_CONTAINER=i_oai_aerial
OAI_GNB_IMAGE=c_oai_aerial:latest

docker run --detach --privileged --rm \
    --ipc container:$AERIAL_CUBB_CONTAINER $GPU_FLAG \
    --network host --shm-size=4096m -it \
    --cpuset-cpus=13-20 \
    --name $OAI_GNB_CONTAINER \
    -v /lib/modules:/lib/modules \
    -v /dev/hugepages:/dev/hugepages \
    -v /usr/src:/usr/src \
    -v ~/openairinterface5g:/opt/oai/ \
    -v ~/share:/opt/nvidia/cuBB/share \
    $OAI_GNB_IMAGE


docker exec -it $OAI_GNB_CONTAINER bash
# cd to the openairinterface directory
source oaienv
cd cmake_targets/ran_build/build/
./nr-softmodem -O ../../../targets/PROJECTS/GENERIC-NR-5GC/CONF/vnf.sa.band78.fr1.273PRB.Aerial.conf --nfapi aerial --sa

To stop the container:


docker stop i_oai_aerial
docker rm i_oai_aerial


CUE Connecting to 5G Network

Take the CUE out of Airplane mode so that the UE starts attaching to the network.

Observe 5G Connect Status

Check for the preamble log in the cuphycontroller console output.

Check the core network or CUE logs to see whether NAS authentication and PDU session establishment succeed.

Running E2E IPERF Traffic

Start ping, iperf, or other network app tests after the PDU session is connected successfully.

One can install and run the “Magic iPerf” Android application on the CUE for this purpose.

Ping Test

Ping the UE from the CN:


docker exec -it oai-ext-dn ping 12.1.1.2

Ping from the UE to the CN:


ping 192.168.70.135 -t -S 12.1.1.2


IPERF Downlink Test

UE Side:


iperf -s -u -i 1 -B 12.1.1.2

CN5G Side:


docker exec -it oai-ext-dn iperf -u -t 360 -i 1 -fk -B 192.168.70.135 -b 4M -c 12.1.1.2


IPERF Uplink Test

CN5G Side:


docker exec -it oai-ext-dn iperf -s -u -i 1 -B 192.168.70.135

UE Side:


iperf -u -t 360 -i 1 -fk -b 20M -c 192.168.70.135 -B 12.1.1.2
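When validating go-live, it is convenient to pull the measured bandwidth out of saved iperf output rather than reading it by eye. A minimal sketch, where the sample interval line is illustrative rather than captured from a live run:

```shell
# Sketch: extract the bandwidth column from an iperf UDP interval
# line. LINE is a made-up sample in the iperf -fk report format;
# on the testbed you would pipe a saved iperf log through the awk
# instead.
LINE='[  3]  0.0- 1.0 sec   489 KBytes  4006 Kbits/sec   0.012 ms    0/  341 (0%)'
printf '%s\n' "$LINE" | awk '{ for (i = 2; i <= NF; i++) if ($i ~ /bits\/sec/) print $(i-1), $i }'
```

The same one-liner works for both the downlink and uplink tests above, since iperf prints the rate immediately before the `Kbits/sec` (or `Mbits/sec`) unit field.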

© Copyright 2023, NVIDIA. Last updated on Apr 28, 2023.