Support for this feature is at beta level.
This section describes how to set up and perform live migration on VMs with SR-IOV while traffic is actively running.
The following are the requirements for working with SR-IOV live migration:
- VM network persistency for applications - VM’s applications must survive the migration process
- No internal VM admin configuration
- Support for ASAP2 solution (kernel OVS hardware offload)
- No use of physical function (PF) network device in Paravirtual (PV) path for failover network traffic
- Use of a sub-function (SF) on the hypervisor as the failover PV path
- ConnectX-5 or higher adapter cards
- Hypervisor host running RedHat/CentOS 8.0 or later, with MLNX_OFED 5.4 or later
- VMs that run on CentOS v8.0 or higher
- Hypervisor host with the latest libvirt from https://libvirt.org/news.html#v6-1-0-2020-03-03
- Hypervisor host with the latest QEMU from https://www.qemu.org/2021/04/30/qemu-6-0-0/
This section consists of the following steps.
- Host Servers Configuration.
- VM OS Installation Using "virt-manager".
- VFs to VMs Deployment.
- ASAP2 with OVS Deployment.
- Live Migration with Paravirtual Path and Traffic.
Host Servers Configuration
The following steps should be performed on both host servers.
- Install RedHat/CentOS v8.0 on the host server.
The CentOS 8.0 ISO image can be downloaded from one of the CentOS mirror sites.
- Connect the host servers to the Ethernet switch.
The two host servers, HV1 (eth0: 10.20.1.118) and HV2 (eth0: 10.20.1.119), are connected via an Ethernet switch, with the switch ports configured as VLAN members (for example, VLAN 100).
- Install the latest MLNX_OFED version.
Download and install the NVIDIA MLNX_OFED driver for the RHEL/CentOS 8.0 distribution.
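A minimal sketch of the driver installation, assuming the MLNX_OFED 5.4 package for RHEL/CentOS 8.0 has already been downloaded to the host (the archive name below is illustrative):

```
# Hypothetical example: extract and install MLNX_OFED (exact archive name depends on the release)
tar -xzf MLNX_OFED_LINUX-5.4-*-rhel8.0-x86_64.tgz
cd MLNX_OFED_LINUX-5.4-*-rhel8.0-x86_64
./mlnxofedinstall --add-kernel-support
# Reload the driver stack after installation
/etc/init.d/openibd restart
```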
- Configure the host server and NVIDIA NIC with SR-IOV as instructed in the SR-IOV section of the MLNX_OFED documentation.
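As a hedged outline of that procedure, SR-IOV is typically enabled in the adapter firmware with mlxconfig and the VFs are then created through sysfs (the MST device name and PF netdev name below are assumptions for this PoC):

```
# Hypothetical example: enable SR-IOV in firmware (a reboot or firmware reset is required afterwards)
mst start
mlxconfig -d /dev/mst/mt4119_pciconf0 set SRIOV_EN=1 NUM_OF_VFS=4
# After reboot, create the VFs on the PF netdev (assumed to be "ens2")
echo 4 > /sys/class/net/ens2/device/sriov_numvfs
```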
- Configure storage for the VMs' images as shared.
The default location for VMs' images is /var/lib/libvirt/images, which is a shared location that is set up as an NFS directory in this PoC. For example:
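A hedged sketch of sharing /var/lib/libvirt/images over NFS, assuming HV1 (10.20.1.118) exports the directory and HV2 mounts it (the export options are illustrative):

```
# On HV1: export the images directory (hypothetical export options)
echo '/var/lib/libvirt/images 10.20.1.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra
systemctl enable --now nfs-server

# On HV2: mount the shared directory at the same path
mount -t nfs 10.20.1.118:/var/lib/libvirt/images /var/lib/libvirt/images
```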
- Set up the network bridge "installation" for VMs to enable external communication.
VMs must be able to download and install extra required packages from external sources.
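One possible way to create the "installation" bridge on top of eth0 with NetworkManager (connection names are assumptions):

```
# Hypothetical example: create a Linux bridge named "installation" and enslave eth0 to it
nmcli connection add type bridge ifname installation con-name installation
nmcli connection add type bridge-slave ifname eth0 master installation
nmcli connection up installation
```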
- Download the CentOS v8.0 ISO image for VM installation.
Download the CentOS v8.0 ISO image from one of the mirror sites to the local host.
VM OS Installation Using "virt-manager"
Launch "virt-manager" to create a new VM. Click the icon as shown below.
Choose “Local install media (ISO images or CDROM)”.
- Specify the location of the downloaded CentOS 8.0 ISO image.
- Fill in the fields under "Choose Memory and CPU settings" for the VM.
- Create the disk image at the default root user location, for example: /var/lib/libvirt/images (NFS mount point). Make sure the storage pool has more than 120 GB available and that the virtual disk image is 60 GB.
- In the VM Name field, add "vm-01", and for the network selection, choose "Bridge installation: Host device eth0".
- Click "vm-01", then "Open".
- Follow the installation with “Minimal Install” and “Virtual Block Device” selection.
- Click "Begin Installation".
- Reboot VM "vm-01" after installation is completed.
Use the VM's console terminal to enable external communication.
Shut down VM "vm-01" and clone it to VM "vm-02".
- Clone the virtual disk VM-01.qcow to VM-02.qcow.
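A hedged example using virt-clone, following the disk path and VM names used in this PoC:

```
# Hypothetical example: clone vm-01 to vm-02, including its virtual disk
virsh shutdown vm-01
virt-clone --original vm-01 --name vm-02 --file /var/lib/libvirt/images/VM-02.qcow
```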
- Test the VM installation and migration without VFs.
Boot both VMs and run ping to each other.
Perform live migration using the "virsh" command line on HV1, where VM "vm-01" resides.
Verify that vm-01 is migrated and resides at HV2.
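A hedged example of the migration and verification commands, assuming HV2 is reachable over SSH at 10.20.1.119 and the VM images are on shared storage:

```
# On HV1: live-migrate vm-01 to HV2 (hypothetical connection URI; adjust to your environment)
virsh migrate --live --persistent vm-01 qemu+ssh://10.20.1.119/system

# On HV2: verify that vm-01 is now running here
virsh list --all
```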
VFs to VMs Deployment
Make sure SR-IOV is enabled and VFs are available.
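For example, the VFs can be checked from the hypervisor (the PF netdev name "ens2" is an assumption):

```
# List the Virtual Functions visible on the PCI bus
lspci | grep -i "Virtual Function"
# Check how many VFs are currently configured on the PF
cat /sys/class/net/ens2/device/sriov_numvfs
```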
Enable the usage of VF2 "0000:06:00.3" and VF3 "0000:06:00.4" to assign to VM-01 and VM-02 respectively.
Attach VF to VM with XML file using "virsh" command line.
Create VM-01-vf.xml and VM-02-vf.xml files to assign VF1 to VM-01 and VF2 to VM-02 as "hostdev" devices with assigned MAC addresses.
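A hedged sketch of what VM-01-vf.xml might look like, assigning the VF at PCI address 0000:06:00.3 as a "hostdev" interface; the MAC address is an arbitrary example, and VM-02-vf.xml is analogous with the other VF's PCI address and MAC:

```
<!-- Hypothetical VM-01-vf.xml: VF 0000:06:00.3 passed through as a hostdev interface -->
<interface type='hostdev' managed='yes'>
  <mac address='52:54:00:aa:bb:01'/>
  <source>
    <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x3'/>
  </source>
</interface>
```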
- Assign the VF to the VM by running the "virsh" command line.
Before attaching the VF, VM-01 and VM-02 should have a single network interface.
On HV1 host, assign VF1 to VM-01.
On HV2 host, assign VF2 to VM-02.
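A hedged example of the attach commands, using the XML file names from the previous step:

```
# On HV1: attach VF1 to VM-01 (takes effect immediately and persists in the VM definition)
virsh attach-device vm-01 VM-01-vf.xml --live --config

# On HV2: attach VF2 to VM-02
virsh attach-device vm-02 VM-02-vf.xml --live --config
```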
After attaching the VFs, VM-01 and VM-02 should have two network interfaces. The second interface, "ens12", is the VF with the assigned MAC address.
Connect to the VMs' consoles, configure the IP address of the VF network interface, and run traffic.
On both VMs, install iperf, configure the IP addresses, and run traffic between them.
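A hedged example, assuming the VF interface is named "ens12" in both VMs and using an arbitrary 11.11.11.0/24 test subnet:

```
# On VM-01: configure the VF interface and start an iperf3 server
ip addr add 11.11.11.1/24 dev ens12
ip link set ens12 up
iperf3 -s

# On VM-02: configure the VF interface and run iperf3 traffic toward VM-01
ip addr add 11.11.11.2/24 dev ens12
ip link set ens12 up
iperf3 -c 11.11.11.1 -t 3600
```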
NVIDIA ASAP2 with OVS Deployment
Perform the NVIDIA ASAP2 installation and configuration on both HV1 and HV2.
Download, build, and install Open vSwitch 2.12.0.
Configure OVS with a single vSwitch "ovs-sriov" with hardware offload enabled.
Create the OVS bridge "ovs-sriov" and set hardware offload to true.
Enable SR-IOV and SWITCHDEV mode by executing the "asap_config.sh" script for PF port 1.
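The "asap_config.sh" script is environment specific; as a hedged outline, switching the PF's eSwitch to SWITCHDEV mode is typically done with devlink after the VFs are created and unbound (the PCI addresses below are assumptions matching this PoC):

```
# Hypothetical sketch of what such a script typically does for PF port 1 (assumed at 0000:06:00.0)
# Unbind the VFs so the eSwitch mode can be changed
echo 0000:06:00.3 > /sys/bus/pci/drivers/mlx5_core/unbind
echo 0000:06:00.4 > /sys/bus/pci/drivers/mlx5_core/unbind
# Switch the PF eSwitch to switchdev mode (exposes the VF representor netdevs)
devlink dev eswitch set pci/0000:06:00.0 mode switchdev
# Re-bind the VFs
echo 0000:06:00.3 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0000:06:00.4 > /sys/bus/pci/drivers/mlx5_core/bind
```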
Create a sub-function on PF port 1 with the script "create_sf.sh".
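The "create_sf.sh" script is also environment specific; with a recent upstream kernel, a sub-function can be created through devlink along these lines (the port index, sfnum, and MAC address are illustrative, and some OFED releases use a different tool such as mlxdevm):

```
# Hypothetical sketch: add a sub-function on PF port 1 (assumed at 0000:06:00.0)
devlink port add pci/0000:06:00.0 flavour pcisf pfnum 0 sfnum 88
# The "add" command prints the new port index (32768 is used here as an example);
# assign a MAC address to the SF port function and activate it
devlink port function set pci/0000:06:00.0/32768 hw_addr 00:00:00:00:88:88
devlink port function set pci/0000:06:00.0/32768 state active
```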
Rename the sub-function's netdevices.
Live Migration with Paravirtual Path and Traffic
Create bonding devices of VF and sub-function (SF) representors.
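A hedged sketch of creating an active-backup bond of the VF representor and SF representor for VM-01; the representor netdev names are assumptions, and the same is repeated per VM:

```
# Hypothetical example: bond the VF representor (ens2_0) and the SF representor (ens2_sf88)
ip link add bond-vm01 type bond mode active-backup miimon 100
ip link set ens2_0 down
ip link set ens2_sf88 down
ip link set ens2_0 master bond-vm01
ip link set ens2_sf88 master bond-vm01
ip link set bond-vm01 up
ip link set ens2_0 up
ip link set ens2_sf88 up
```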
Add Uplink "ens2" and bonding devices to "ovs-sriov" bridge.
Modify VM-01's XML file to add the SF's macvtap-based virtio netdevice as the default device.
Edit VM-01 configuration with the same MAC address assigned to VF-1.
Make sure the alias name has the prefix "ua-".
Edit VM-02 configuration with the same MAC address assigned to VF-2.
Make sure the alias name has the prefix "ua-".
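A hedged sketch of the interface element added to each VM's definition (via "virsh edit"), assuming the SF netdev is named "ens2sf88", the MAC address matches the VF from the earlier step, and the alias is "ua-net0"; libvirt's teaming type='persistent' marks this virtio device as the persistent side of the failover pair:

```
<!-- Hypothetical snippet for VM-01: macvtap-based virtio device backed by the SF netdev -->
<interface type='direct'>
  <mac address='52:54:00:aa:bb:01'/>
  <source dev='ens2sf88' mode='passthrough'/>
  <model type='virtio'/>
  <teaming type='persistent'/>
  <alias name='ua-net0'/>
</interface>
```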
Restart the VMs and verify that the PV path exists in both VMs and that it is accessible.
Each VM should have two extra netdevs, for example eth0 and eth1, where eth0 is the master and eth1 is automatically enslaved to it.
Configure the IP address and run iperf on the VMs over the SF PV path.
Modify the XML file of the VFs to link to the persistent device of the SFs.
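A hedged sketch of the corresponding change to VM-01-vf.xml, where teaming type='transient' points at the "ua-" alias of the persistent (SF-backed) device defined above:

```
<!-- Hypothetical VM-01-vf.xml after the change: VF hostdev linked to the persistent device -->
<interface type='hostdev' managed='yes'>
  <mac address='52:54:00:aa:bb:01'/>
  <source>
    <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x3'/>
  </source>
  <teaming type='transient' persistent='ua-net0'/>
</interface>
```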
Attach the VFs to VM-01 and VM-02, and switch to the direct path on hypervisors HV1 and HV2. I/O traffic should continue after the VFs have been successfully attached to the VMs.
Each VM should have one extra netdev from the attached VF that is automatically enslaved to eth0.
Detach the VF from the VM and switch to the SF PV path on hypervisors HV1 and HV2. I/O traffic should pause for about 0.5 seconds and then resume.
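A hedged example of the detach command on HV1 (HV2 is analogous):

```
# Detach the VF from vm-01; traffic fails over to the SF-backed PV path
virsh detach-device vm-01 VM-01-vf.xml --live --config
```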
Perform Live Migration on VM-01. iperf traffic should run as usual.
Attach the VF to the VM again and switch to the direct path on the hypervisor. I/O traffic should run as usual.