Overview
Universal Pass Through (UPT) is a flagship feature of VMware vSphere 8.0, a component of VMware Cloud Foundation. UPT gives users direct-passthrough-like network performance on NVIDIA DPUs without preventing them from using vSphere management features such as vMotion or DRS.
With Project Monterey, NVIDIA BlueField DPUs, and the associated release of vSphere 8.0 with Universal Pass Through, VMware has made it possible to realize the benefits of fast-pathing network traffic past the hypervisor's network stack without sacrificing essential operational tools such as vMotion. For VMware Cloud Foundation administrators, the evolution of Project Monterey will feel like a seamless transition.
Today, as vSphere administrators, we will walk through the process of setting up, configuring, and using UPT with the vMotion service. First, you will see a walkthrough of a failed attempt to vMotion a virtual machine (VM) that uses a network adapter configured for standard SR-IOV PCI passthrough. Then, we will go through a guided lab to demonstrate a successful vMotion using a UPT-enabled adapter.
There are five main segments of the lab demonstration:
Configure a vSphere Distributed Switch (VDS) within vCenter (a scripted sketch of this step follows the list).
Create an NSX overlay segment in NSX-T and assign it to the switch.
Configure your cluster to use the VDS.
Create a VM in the cluster that is configured with a UPT-enabled network adapter assigned to your overlay segment.
vMotion the VM from one host to another.
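If you prefer to script the first step rather than click through the vSphere Client, a minimal pyVmomi sketch is shown below. The vCenter address, credentials, datacenter name, and switch name are placeholders, and the sketch only covers creating the switch itself; host and uplink assignment happen later when the cluster is configured to use the VDS.

```python
# Minimal pyVmomi sketch of lab step 1: create a distributed switch.
# The vCenter address, credentials, and object names are placeholders;
# adjust them for your own environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
content = si.RetrieveContent()

# Assumes the datacenter sits directly under the root folder.
dc = next(d for d in content.rootFolder.childEntity
          if isinstance(d, vim.Datacenter) and d.name == "Lab-Datacenter")

# Build the switch spec; an 8.0.0 switch is used so it can later be
# paired with the DPU-backed hosts in the cluster.
config = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(name="UPT-VDS")
spec = vim.DistributedVirtualSwitch.CreateSpec(
    configSpec=config,
    productInfo=vim.dvs.ProductSpec(version="8.0.0"))

# The switch is created in the datacenter's network folder.
WaitForTask(dc.networkFolder.CreateDVS_Task(spec))
Disconnect(si)
```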
In the images below, you will see a demonstration of a vMotion failure when a VM uses a PCI passthrough network device. While PCI passthrough is becoming the standard for performance-oriented applications and workloads, it depends on directly binding the VM to a PCI device on the host, which limits the use of popular vSphere functions such as vMotion and Distributed Resource Scheduler (DRS).
The steps in this section illustrate the failed vMotion use case. They are not part of the hands-on exercises for this lab.
In our non-UPT cluster, we click on a host and note that NVIDIA DPU offloading is present via the BlueField-2.
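The same check can be made from the API. The hedged sketch below (the host name is a placeholder) lists a host's PCI devices and returns any NVIDIA/Mellanox BlueField entries; it only confirms the DPU hardware is reported by the host, not how offloading is configured.

```python
# Sketch: list a host's PCI devices and pick out BlueField DPUs.
# Reuses the `content` object from the connection sketch above;
# the host name is a placeholder.
from pyVmomi import vim

def find_bluefield_devices(content, host_name="esxi-01.lab.local"):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == host_name)
    # hardware.pciDevice enumerates every PCI device the host reports.
    return [(dev.id, dev.vendorName, dev.deviceName)
            for dev in host.hardware.pciDevice
            if "BlueField" in dev.deviceName or "Mellanox" in dev.vendorName]
```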
We access the settings of one of our configured VMs in the cluster. This VM has been configured with a PCI passthrough network adapter that is assigned to one of our NSX overlay segments. Note that this network adapter is bound to one of our BlueField ports via a virtual function. vCenter reminds us that these PCI device bindings will remove our ability to vMotion this VM.
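The same binding can be confirmed programmatically. The sketch below walks a VM's virtual hardware and reports any SR-IOV passthrough network adapters; `vm` is assumed to be a vim.VirtualMachine object already looked up, for example through a container view as in the previous sketch.

```python
# Sketch: report SR-IOV passthrough network adapters on a VM.
from pyVmomi import vim

def sriov_adapters(vm):
    adapters = []
    for dev in vm.config.hardware.device:
        # VirtualSriovEthernetCard is the device type behind an
        # "SR-IOV passthrough" adapter in the vSphere Client.
        if isinstance(dev, vim.vm.device.VirtualSriovEthernetCard):
            adapters.append((dev.deviceInfo.label, dev.deviceInfo.summary))
    return adapters

# A non-empty result explains the warning: the adapter is bound to a
# virtual function on the host's BlueField card, which blocks vMotion.
```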
Lastly, we attempt to migrate our VM to a different host within the cluster. vCenter reports compatibility issues with this migration due to the backing devices (i.e., PCI devices) attached to the VM; in our case, this refers to the SR-IOV configured network adapter. We are unable to progress any further through the vMotion steps.
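The same incompatibility surfaces if the migration is attempted through the API instead of the wizard. In the sketch below, `vm` and `target_host` are assumed to be already-resolved vim.VirtualMachine and vim.HostSystem objects; for the SR-IOV-backed VM the task is expected to fail, while the UPT-enabled VM created later in the lab should complete the same call.

```python
# Sketch: attempt a vMotion through the API and surface the task fault.
# For the PCI/SR-IOV backed VM this is expected to fail.
from pyVim.task import WaitForTask
from pyVmomi import vim, vmodl

def try_vmotion(vm, target_host):
    try:
        task = vm.MigrateVM_Task(
            pool=None,          # leave the VM in its current resource pool
            host=target_host,   # destination vim.HostSystem
            priority=vim.VirtualMachine.MovePriority.defaultPriority)
        WaitForTask(task)
        print("vMotion succeeded")
    except vmodl.MethodFault as fault:
        # vCenter rejects the migration because of the passthrough backing.
        print("vMotion failed:", fault.msg)
```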
In the next lab sections, we will show how UPT lets us get around this limitation without sacrificing our ability to use DPU offload or NSX capabilities.