Installation#
The Aerial Omniverse Digital Twin (AODT) Installer is a way to get up and running quickly with fresh installations on qualified systems, both in the cloud and on-prem. There are several components that must be installed and configured in order for a deployed system to run AODT. This section will detail how to use the AODT Installer for each of the qualified system configurations.
System Requirements#
AODT can be installed in the cloud or on-prem. The installation involves deploying a set of frontend components and a set of backend components, each of which requires an NVIDIA GPU. The frontend components and backend components can be deployed to either the same node (i.e., colocated) or to separate nodes (i.e., multi-node). The frontend and backend can share a single GPU, in which case the simulation is driven through the databases (replay); to run the frontend and the backend concurrently, at least 2 GPUs are needed. See the Database Replay section in this guide for more details. The following table details the GPU requirements for each case:
| System Type | GPU Qty | GPU vRAM | GPU Requirement | GPU Notes |
|---|---|---|---|---|
| Frontend alone | 1 | 12 GB+ | GTX/RTX | e.g. RTX 6000 Ada, A10, L40 |
| Backend alone | 1 | 48 GB+ | e.g. RTX 6000 Ada, A100, H100, L40 | |
| Frontend and backend replay | 1 | 48 GB+ | e.g. RTX 6000 Ada, L40 | |
| Frontend and backend colocated | 2 | see note | see note | 1x frontend-capable GPU, 1x backend GPU |
The following tables describe the GPU driver versions used by the installation scripts
| System Type | Operating System | Deployed Driver Version |
|---|---|---|
| Frontend Azure | Linux | 550.127.05 |
| Frontend Azure | Windows | 552.55 |
| Backend Azure | Linux | 560.35.05 |
| Frontend and backend replay | Linux | 560.35.05 |
| Frontend and backend colocated | Linux | 560.35.03 |
and the OS support for each type.
| System Type | OS |
|---|---|
| Frontend alone | Windows 11, Windows Server 2022, Ubuntu 22.04 |
| Backend alone | Ubuntu 22.04 |
| Frontend and backend replay | Ubuntu 22.04 |
| Frontend and backend colocated | Ubuntu 22.04 |
For memory and CPU requirements, we recommend looking at the qualified systems in the next section.
Additional information#
The AODT backend supports the following streaming multiprocessor (SM) architectures: 80, 86, 89, and 90. The runtime logic performs the following checks:

- Startup verification: Upon startup, the backend confirms that it is running on a system with a supported SM architecture. If an unsupported architecture is detected, it emits an error to the standard error console and the application terminates.
- Compile-time vs. run-time check: The backend also compares the compile-time SM architecture against the run-time one. If the two differ, the backend prints an error and exits. If the compile-time build specifies multiple values, the runtime logic picks the highest value to compare against the run-time architecture.
- Compilation option: For users wishing to run the backend on the same system used for its compilation, or on a different system with the same SM architecture, the CMake build system allows specifying -DCMAKE_CUDA_ARCHITECTURES="native" during the CMake generation phase. This ensures that the compiled version aligns with the device architecture, preventing the aforementioned errors. "native" is also the default value when the user does not specify -DCMAKE_CUDA_ARCHITECTURES at CMake generation time.
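For reference, a minimal sketch of how to check the local SM architecture and build with the native setting, assuming the CUDA toolkit and driver are installed and the commands are run from the backend source directory (the build directory name is illustrative):

# Query the GPU compute capability; e.g. 8.0/8.6/8.9/9.0 correspond to SM 80/86/89/90
nvidia-smi --query-gpu=name,compute_cap --format=csv
# Configure and build so the compiled SM architecture matches the local GPU
cmake -B build -S . -DCMAKE_CUDA_ARCHITECTURES="native"
cmake --build build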
Deployment#
The following qualified systems have been tested and are directly supported with the AODT Installer:
| Qualified system | Node 1 | Node 2 |
|---|---|---|
| Azure VM (Multi-Node) | | |
| Dell R750 (Colocated) | | N/A |
Note: installations on Microsoft Azure A10 VMs require NVIDIA GRID drivers.
Azure#
The Aerial Omniverse Digital Twin can be installed on Microsoft Azure using the Azure Installer. The Azure Installer in turn can be downloaded from NGC - Aerial Omniverse DT Installer using version tag 1.2.0.
Specifically, the user can first download and unzip the file from the Azure folder into a local directory, create a file called .secrets in that directory, and define the following environment variables:
RESOURCEGROUP=
WINDOWS_PASSWORD=
SSH_KEY_NAME=
LOCAL_IP=
GUI_OS=
NGC_CLI_API_KEY=
where
| Variable | Description |
|---|---|
| RESOURCEGROUP | Microsoft Azure Resource Group |
| SSH_KEY_NAME | Name of SSH key stored in Microsoft Azure |
| WINDOWS_PASSWORD | Password must be between 12 and 72 characters long and satisfy 3 of the following conditions: 1 lower case character, 1 upper case character, 1 number and 1 special character |
| LOCAL_IP | IP address (as seen by Azure) of the host that will run the provisioning scripts |
| GUI_OS | Windows |
| NGC_CLI_API_KEY | NGC API key |
More information on NGC_CLI_API_KEY can be found here: NGC - User’s Guide.
Also, if necessary, the following command can be used to find LOCAL_IP, the external IP address of the local machine that will be used to connect to the VMs.
curl ifconfig.me
The private SSH key must be stored in a location accessible to the installation bundle, e.g. ~/.ssh/azure.pem.
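For reference, a hypothetical .secrets file might look as follows; all values are placeholders and must be replaced with values from your own Azure and NGC accounts:

RESOURCEGROUP=aodt-demo-rg
WINDOWS_PASSWORD=Example-Passw0rd-123
SSH_KEY_NAME=azure
LOCAL_IP=203.0.113.25
GUI_OS=Windows
NGC_CLI_API_KEY=<your NGC API key>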
Once the variables above are configured, we can use the mcr.microsoft.com/azure-cli:cbl-mariner2.0 docker image to run the provisioning scripts.
docker run --rm -it --env-file .secrets -v ./aodt_1.2.0:/aodt -w /aodt/azure mcr.microsoft.com/azure-cli:cbl-mariner2.0
The docker container will mount the downloaded scripts.
When using Windows for GUI_OS, the AODT Azure frontend installation uses the NVIDIA nvidia-quadro-vws-win2022 (win2022-23-06-vgpu17-2:17.2.0) VM image. We can find details of this image here. Before using this image, users must review and accept the Azure Marketplace image terms. One way to do that is by running the following commands inside the azure-cli docker container:
$ az login
$ az vm image terms show --publisher nvidia --offer nvidia-quadro-vws-win2022 --plan ove
$ # Review the terms as needed, and then accept the terms
$ az vm image terms accept --publisher nvidia --offer nvidia-quadro-vws-win2022 --plan ove
Inside the docker container, we can run the following commands:
$ az login
$ bash -e azure_install.sh
and the script will create the VMs, configure the network inbound ports, and download the scripts needed for the next step. At the end of the execution it will output something like this:
To install AODT 1.2.0 on the VMs, execute the following command:
BACKEND_IP=<backend-ip> FRONTEND_IP=<frontend-ip> bash -e ./aodt_install.sh
where backend-ip and frontend-ip are the IP addresses assigned to the VMs during provisioning.
Still in the docker container, execute the given command to continue the installation:
$ BACKEND_IP=<IP> FRONTEND_IP=<IP> bash -e ./aodt_install.sh
The script is expected to take several minutes to complete. At the end, it will show:
Use Microsoft Remote Desktop Connection to connect to <ip-address>
Username: .\aerial
Password: REDACTED-check-secrets-file
BACKEND_IP=<ip-address>
Logging into the Azure VM#
We can use the Microsoft Remote Desktop client to connect to the IP address shown at the end of the installation, using the username and password configured in the .secrets file.
Once successfully logged in to the remote desktop session, wait for the installation scripts to complete, ignoring any pop-up windows prompting for an NVIDIA Omniverse email address.
Once the installation is complete, the script will launch the AODT application and open a Jupyter notebook in the browser.
Dell R750#
For a full deployment on-prem, we can select the pre-qualified Dell PowerEdge R750 server. Install Ubuntu-22.04.3 Server using the default options in the Ubuntu installer. When loading the Ubuntu 22.04 Server ISO, we may use a bootable USB or the server’s virtual media function. For instructions on installing Ubuntu 22.04 Server and creating the bootable USB, we can follow the official Ubuntu documentation here. For instructions on using the R750’s virtual media function, we can follow Dell’s official documentation here. After installing Ubuntu-22.04.3 Server, we can log in using SSH and run the following commands:
sudo apt-get install -y jq unzip
export NGC_CLI_API_KEY=<NGC_CLI_API_KEY>
AUTH_URL="https://authn.nvidia.com/token?service=ngc&scope=group/ngc:esee5uzbruax&group/ngc:esee5uzbruax/"
TOKEN=$(curl -s -u "\$oauthtoken":"$NGC_CLI_API_KEY" -H "Accept:application/json" "$AUTH_URL" | jq -r '.token')
versionTag="1.2.0"
downloadedZip="$HOME/aodt_1.2.0.zip"
curl -L "https://api.ngc.nvidia.com/v2/org/esee5uzbruax/resources/aodt-installer/versions/$versionTag/files/aodt_1.2.0.zip" -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" -o $downloadedZip
# Unzip the downloaded file
unzip -o $downloadedZip || jq -r . $downloadedZip
Again, more information on NGC_CLI_API_KEY can be found here: NGC - User’s Guide.
Once aodt_1.2.0.zip has been downloaded and extracted, we can continue by running the following commands:
cd aodt_1.2.0
./make_install.sh
./install.sh
When the installation is complete, we can use a VNC client to connect to the VNC server on port 5901. The VNC password is nvidia.
We will find that the script has already launched the AODT application and opened up a Jupyter notebook in the browser.
If the server was rebooted and we find that the AODT application is not running, we can open a terminal and issue the command:
~/aodt_1.2.0/frontend/start.sh
As an alternative, we can use the AODT-1.2.0 desktop icon to start the AODT application with debug logs printed to a terminal window. We might need to right click the icon and select “Allow Launching”.
Validation#
Once the Aerial Omniverse Digital Twin graphical interface is running, we can click on the toolbar icon showing the gears and connect to the RAN digital twin.
If asked for credentials, we can use the following:
username: omniverse
password: aerial_123456
Once successfully logged in, we can then select the Content tab (refer to the Graphical User Interface section for further details) and click Add New Connection. In the dialog window, we can then

- type omniverse-server
- click OK
- expand the omniverse-server tree view
- right click on omniverse://omniverse-server/Users/aerial/plateau/tokyo.usd
- and open the map.
Once the map is loaded, we will continue by

- selecting the Viewport tab
- right clicking on the Stage widget
- and selecting Aerial > Create Panel twice from the context menu.
The first panel will be used - by default (refer to the /Scenario scope in the Stage widget) - for the user equipment (UE) and the second for the radio unit (RU).
With the panels defined, we then can

- right click in the Viewport
- select Aerial > Deploy RU from the context menu
- and click on the final location where we would like to place the RU.

With the RU deployed, we will then select it from the Stage widget and enable the Show Raypaths checkbox from the Property widget.

Similarly, we will

- right click on the Viewport
- and select Aerial > Deploy UE from the context menu.

Unlike the procedure for the RU, however, this will drop the UE in the location where the right click took place.
Finally, we can

- select the Scenario entry in the Stage widget
- set Duration equal to 10.0 and Interval to 0.1
- click the Generate UEs icon in the toolbar
- click the Start UE Mobility icon.
This will start a simulation and update the graphical interface as in the figure below.
By clicking on the Play button in the toolbar, we can then inspect the evolution of the mobility of the UE and the corresponding rays that illustrate how the radiation emitted by the RU reaches the UE.
Detailed Explanation#
The installation process as described above is intended to be automatic and to abstract away as much of the underlying provisioning as possible. However, there are times when these extra details are helpful. This next section goes into detail on some of the most common sources of problems.
Azure - Subscriptions#
This installation assumes that the user has a single subscription. If there are multiple subscriptions, the active subscription can be changed with az account set.
Change the active subscription using the subscription ID#
Inside the azure cli docker container, issue the following command before provisioning.
az account set --subscription "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
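If the subscription ID is not known, the available subscriptions and their IDs can be listed first:

az account list --output table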
Azure - Firewall#
Part of the Azure installation script sets up the firewall configurations used by the frontend and backend VMs in Azure. The default configuration opens only very specific ports, and only for very specific IPs. The ports differ between the backend VM and the frontend VM. To see these firewall configurations, use the Azure portal, select the VM, and then select the VM’s network configurations.
The default IPs used for the firewall configuration include the IP assigned to the frontend VM, the IP assigned to the backend VM, and the IP of the machine doing the provisioning. This last IP is the same as the LOCAL_IP that is stored in the .secrets file. The above instructions show how to find the LOCAL_IP by using the “curl” command line tool to query a public endpoint. This will find the current IP address and that IP is used in the rest of the provisioning process.
However, there are many reasons why that IP address might change, for example, if working from a different location, or from a second computer, or if the DHCP lease changes after some period of inactivity. If the LOCAL_IP changes, then the firewalls will not allow connectivity. This is by design, but can be a problem if using a setup with a LOCAL_IP that changes frequently. There are several things that can resolve this:
- Update the LOCAL_IP in Azure’s firewalls to include the exact IP each time it changes. This can be done using the Azure portal, for example.
- Change the LOCAL_IP in the firewall to something less restrictive if the subnet of IPs that LOCAL_IP can pull from is known, e.g. a /24 subnet if applicable, as sketched below.
There is no one-size-fits-all solution to the firewall posture. Your local IT department may be able to suggest something that fits your needs.
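As an illustration of the options above, an existing inbound rule can also be updated from the Azure CLI inside the azure-cli container; the network security group and rule names below are placeholders that must be looked up in the Azure portal for the actual deployment:

# Hypothetical NSG and rule names; use a /32 prefix for a single IP or e.g. a /24 for a subnet
az network nsg rule update \
  --resource-group "$RESOURCEGROUP" \
  --nsg-name <frontend-nsg-name> \
  --name <allow-local-ip-rule> \
  --source-address-prefixes 203.0.113.0/24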
Docker Compose#
The backend VM utilizes Docker Compose to control all the various applications. This includes the aodt-sim container, the aodt-gis container, the clickhouse database, the Jupyter Notebook, and the Nucleus server. Docker and Docker Compose can be used to troubleshoot these services - either by reading logs, restarting containers, or modifying configurations.
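For example, assuming the Docker Compose project is located in the backend installation directory on the backend VM (the exact path and service names may differ between releases and can be checked with docker compose ps), typical troubleshooting commands look like this:

# List the backend services and their status
docker compose ps
# Follow the logs of a specific service (service name shown here is an example)
docker compose logs -f aodt-sim
# Restart a service after modifying its configuration
docker compose restart aodt-sim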
Make Install Scripts#
The installation process has been split into two steps. The first step, make_install.sh, probes the system and creates an installation script. The second step uses the generated install.sh script to install the software.
If needed, users can inspect the generated install script before running it, in case some of the modules need customization. The make_install.sh script can be used to generate installation scripts for three different scenarios:
- ./make_install.sh frontend : Generate install.sh for frontend components only
- ./make_install.sh backend : Generate install.sh for backend components only
- ./make_install.sh : Generate install.sh for both backend and frontend components (default)
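For instance, a backend-only node could be provisioned in two steps, reviewing the generated script in between:

./make_install.sh backend   # probe the system and generate install.sh for the backend components
less install.sh             # optional: inspect the generated script before running it
./install.sh                # run the generated installer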