ASAP OVS Offload
The ASAP2 solution combines the performance and efficiency of server/storage networking hardware with the flexibility of virtual switching software. ASAP2 offers up to 10 times better performance than non-offloaded OVS solutions, delivering software-defined networks with the highest total infrastructure efficiency, deployment flexibility, and operational simplicity.
Starting from NVIDIA® Mellanox® ConnectX®-5 NICs, Mellanox supports accelerated virtual switching in server NIC hardware through the ASAP2 feature.
While accelerating the data plane, ASAP2 keeps the SDN control plane intact, staying completely transparent to applications and maintaining flexibility and ease of deployment.
The following OverCloud operating system packages are supported:
Item | Version |
Kernel | 5.7.12 or above with connection tracking modules |
Open vSwitch | 2.13.0 or above |
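The minimums above can be checked mechanically with a `sort -V` comparison. In the sketch below the `have_*` values are placeholders for real output from `uname -r` and `ovs-vswitchd --version` on the overcloud node:

```shell
# Placeholder values; substitute real output from the overcloud node:
#   have_kernel=$(uname -r | cut -d- -f1)
#   have_ovs=$(ovs-vswitchd --version | awk 'NR==1 {print $NF}')
have_kernel="5.10.0"
have_ovs="2.13.0"

meets() {  # meets <installed> <minimum>: succeeds if installed >= minimum
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

meets "$have_kernel" "5.7.12" && echo "kernel OK" || echo "kernel too old"
meets "$have_ovs" "2.13.0" && echo "OVS OK" || echo "OVS too old"
```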
Supported Network Interface Cards and Firmware
OVS Hardware Offload with Kernel Datapath
OVS-Kernel uses Linux Traffic Control (TC) to program the NIC with offloaded flows.
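Because OVS-Kernel programs offloads through TC, you can inspect what was actually pushed to hardware from the compute node. A minimal sketch, assuming a hypothetical VF representor name `enp3s0f0_0`; off-node, each command degrades to a placeholder message instead of failing:

```shell
# Show flows OVS offloaded to the NIC vs. those still handled in software.
# 'enp3s0f0_0' is a hypothetical VF representor; adjust for your host.
for cmd in \
    "tc -s filter show dev enp3s0f0_0 ingress" \
    "ovs-appctl dpctl/dump-flows type=offloaded" \
    "ovs-appctl dpctl/dump-flows type=ovs"
do
    echo "== $cmd"
    $cmd 2>/dev/null || echo "   (not available on this host)"
done
```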
NVIDIA® Mellanox® support for TripleO Ussuri covers the following Mellanox NICs and their corresponding firmware versions:
NICs | Supported Protocols | Recommended Firmware Rev. |
ConnectX®-6 | Ethernet | 20.26.1040 |
ConnectX®-6 Dx | Ethernet | 22.28.1002 |
ConnectX®-5 | Ethernet | 18.25.1040 |
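On a running node, the installed firmware revision is reported by `ethtool -i <interface>`. The here-doc below stands in for that output (values illustrative), and the same `awk` filter applies to the real command:

```shell
# Parse the firmware revision out of `ethtool -i <iface>` output.
# The here-doc stands in for real output on a node with a ConnectX NIC.
sample=$(cat <<'EOF'
driver: mlx5_core
version: 5.1-2.3.7
firmware-version: 22.28.1002 (MT_0000000436)
bus-info: 0000:02:00.0
EOF
)
fw=$(printf '%s\n' "$sample" | awk '/^firmware-version:/ {print $2}')
echo "$fw"
```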
Configuring ASAP2 on an Open vSwitch
Starting from a fresh bare metal server, install and configure the undercloud, as instructed in Deploying SR-IOV Technologies in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Please make sure you review the following chapters before resuming:
Section 6.4: Configuring OVS Hardware Offload
Section 6.5: Tuning Examples for OVS Hardware Offload
Section 6.6: Components of OVS Hardware Offload
Section 6.7: Troubleshooting OVS Hardware Offload
Section 6.8: Debugging HW Offload Flow
Since the OVS mechanism driver is not the default, the following environment file must be included in the deployment command to use it:
/usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml
Use the ovs-hw-offload.yaml file, and configure it for your VLAN or VXLAN setup as follows.
For a VLAN setup, configure ovs-hw-offload.yaml:
$ vi /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml
parameter_defaults:
  NeutronFlatNetworks: datacentre
  NeutronNetworkType:
    - vlan
  NeutronTunnelTypes: ''
  NeutronOVSFirewallDriver: openvswitch
  NovaSchedulerDefaultFilters:
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - ServerGroupAntiAffinityFilter
    - ServerGroupAffinityFilter
    - PciPassthroughFilter
    - NUMATopologyFilter
  NovaSchedulerAvailableFilters:
    - nova.scheduler.filters.all_filters
    - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
  NovaPCIPassthrough:
    - devname: <interface_name>
      physical_network: datacentre
  ComputeSriovParameters:
    NeutronBridgeMappings:
      - 'datacentre:br-ex'
    OvsHwOffload: true
In the case of a VXLAN setup, perform the following:
Configure the ovs-hw-offload.yaml:
parameter_defaults:
  NeutronFlatNetworks: datacentre
  NovaSchedulerDefaultFilters:
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - ServerGroupAntiAffinityFilter
    - ServerGroupAffinityFilter
    - PciPassthroughFilter
    - NUMATopologyFilter
  NovaSchedulerAvailableFilters:
    - nova.scheduler.filters.all_filters
    - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
  NovaPCIPassthrough:
    - devname: <interface_name>
      physical_network: null
  ComputeSriovParameters:
    NeutronBridgeMappings:
      - datacentre:br-ex
    OvsHwOffload: true
Configure the interface names in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml file, by adding the following code to move the tenant network from a VLAN on a bridge to a separate interface:
- type: interface
  name: <interface_name>
  addresses:
    - ip_netmask:
        get_param: TenantIpSubnet
Then, in the same file, add the PF configuration:
- type: sriov_pf
  name: enp3s0f0
  link_mode: switchdev
  numvfs: 64
  promisc: true
  use_dhcp: false
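After the overcloud is deployed, you can confirm that the PF was actually switched to switchdev mode. A sketch assuming a hypothetical PF at PCI address 0000:02:00.0; the check skips gracefully on hosts without `devlink`:

```shell
# Query the eswitch mode for a (hypothetical) PF at 0000:02:00.0.
# Expected output on a correctly configured node: "... mode switchdev ...".
if command -v devlink >/dev/null 2>&1; then
    devlink dev eswitch show pci/0000:02:00.0 2>/dev/null \
        || echo "device not present on this host"
else
    echo "devlink not installed; run this on the compute node"
fi
```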
Create a new role for the compute node and change it to ComputeSriov:
$ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov
Update the ~/cloud-names.yaml file accordingly. You may use the following example:
parameter_defaults:
  ComputeSriovCount: 2
  OvercloudComputeSriovFlavor: compute
Assign the compute.yaml file to the ComputeSriov role by updating the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file with the following line:
OS::TripleO::ComputeSriov::Net::SoftwareConfig: ../network/config/single-nic-vlans/compute.yaml
Run the overcloud-prep-containers.sh file.
Deploying the Overcloud
The default deployment of the overcloud uses the appropriate templates and YAML files from /usr/share/openstack-tripleo-heat-templates/, as shown in the following example:
openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates/ \
--libvirt-type kvm -r ~/roles_data.yaml \
-e /home/stack/containers-default-parameters.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml \
--timeout 180 \
-e /home/stack/cloud-names.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e /home/stack/network-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
--validation-warnings-fatal \
--ntp-server pool.ntp.org
Booting the VM
To boot the VM on the undercloud machine, perform the following steps:
Load the overcloudrc configuration:
$ source ./overcloudrc
Create a flavor:
# openstack flavor create m1.small --id 3 --ram 2048 --disk 20 --vcpus 1
Create a "cirros" image:
$ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox
Create a network.
In case of a VLAN network:
$ openstack network create private --provider-physical-network datacentre --provider-network-type vlan --share
In case of a VXLAN network:
$ openstack network create private --provider-physical-network datacentre --provider-network-type vxlan --share
Create a subnet:
$ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24
Boot a VM on the overcloud using the following command after creating the direct port accordingly.
For the first VM:
$ direct_port1=`openstack port create direct1 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port1 vm1
For the second VM:
$ direct_port2=`openstack port create direct2 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port2 vm2
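The `grep ' id ' | awk '{print $4}'` pipeline above simply extracts the port UUID from the client's table output. A self-contained illustration on sample output (the UUID is invented):

```shell
# Sample `openstack port create` table output (truncated, UUID invented).
sample=$(cat <<'EOF'
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| admin_state_up  | UP                                   |
| id              | 3d4f0b8a-1c2e-4d5f-9a6b-7c8d9e0f1a2b |
+-----------------+--------------------------------------+
EOF
)
# ' id ' (with spaces) matches only the id row, not fields like admin_state_up;
# $4 is the value column once awk splits on whitespace.
port_id=$(printf '%s\n' "$sample" | grep ' id ' | awk '{print $4}')
echo "$port_id"
```

In practice, `openstack port create ... -f value -c id` returns the bare UUID and avoids the parsing entirely.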
Configuration
Starting from a fresh bare metal server, install and configure the undercloud, as instructed in Deploying SR-IOV Technologies in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Please make sure you review the following chapters before resuming:
Section 6.4: Configuring OVS Hardware Offload
Section 6.5: Tuning Examples for OVS Hardware Offload
Section 6.6: Components of OVS Hardware Offload
Section 6.7: Troubleshooting OVS Hardware Offload
Section 6.8: Debugging HW Offload Flow
Use the ovs-hw-offload.yaml file from the following location:
/usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml
Configure it for the OVN setup in the following way:
parameter_defaults:
  NeutronOVSFirewallDriver: openvswitch
  NeutronFlatNetworks: datacentre
  NeutronNetworkType:
    - geneve
    - flat
  NeutronTunnelTypes: 'geneve'
  NovaPCIPassthrough:
    - devname: "enp3s0f0"
      physical_network: null
  NovaSchedulerDefaultFilters:
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - ServerGroupAntiAffinityFilter
    - ServerGroupAffinityFilter
    - PciPassthroughFilter
    - NUMATopologyFilter
  NovaSchedulerAvailableFilters:
    - nova.scheduler.filters.all_filters
    - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
  ComputeSriovParameters:
    NeutronBridgeMappings:
      - datacentre:br-ex
    TunedProfileName: "throughput-performance"
    KernelArgs: "intel_iommu=on iommu=pt"
    OvsHwOffload: true
Configure the interface names in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/control.yaml file, by adding the following code to move the tenant network from a VLAN on a bridge to a separate interface:
- type: interface
  name: <interface_name>
  addresses:
    - ip_netmask:
        get_param: TenantIpSubnet
Configure the interface names in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml file, by adding the following code:
- type: sriov_pf
  name: enp3s0f0
  link_mode: switchdev
  numvfs: 16
  promisc: true
  use_dhcp: false
Create a new role for the compute node, and change it to ComputeSriov:
$ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov
Update the ~/cloud-names.yaml file accordingly. You may use the following example:
parameter_defaults:
  ComputeSriovCount: 2
  OvercloudComputeSriovFlavor: compute
  ControllerCount: 3
  OvercloudControllerFlavor: control
Assign the compute.yaml file to the ComputeSriov role by updating the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file with the following line:
OS::TripleO::ComputeSriov::Net::SoftwareConfig: ../network/config/single-nic-vlans/compute.yaml
Deploying the Overcloud
Deploy the overcloud using the appropriate templates and yamls from /usr/share/openstack-tripleo-heat-templates/, as shown in the following example:
openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates/ \
--libvirt-type kvm \
-r /home/stack/roles_data.yaml \
--timeout 240 \
-e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
--validation-warnings-fatal \
-e /home/stack/cloud-names.yaml \
-e /home/stack/overcloud_storage_params.yaml \
-e /home/stack/containers-prepare-parameter.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/network-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e /home/stack/overcloud-selinux-config.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml
Booting the VM
To boot the VM on the undercloud machine, perform the following steps:
Load the overcloudrc configuration:
$ source ./overcloudrc
Create a flavor:
$ openstack flavor create m1.small --id 3 --ram 2048 --disk 20 --vcpus 1
Create a "cirros" image:
$ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox
Create a network:
$ openstack network create private --provider-network-type geneve --share
Create a subnet:
$ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24
Boot a VM on the overcloud, using the following command after creating the port accordingly:
For the first VM:
$ direct_port1=`openstack port create direct1 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port1 vm1
For the second VM:
$ direct_port2=`openstack port create direct2 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port2 vm2
Configuration
Starting from a fresh bare metal server, install and configure the undercloud, as instructed in Deploying SR-IOV Technologies in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Please make sure you review the following chapters before resuming:
Section 6.4: Configuring OVS Hardware Offload
Section 6.5: Tuning Examples for OVS Hardware Offload
Section 6.6: Components of OVS Hardware Offload
Section 6.7: Troubleshooting OVS Hardware Offload
Section 6.8: Debugging HW Offload Flow
Use the ovs-hw-offload.yaml file from the following location:
/usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml
Configure it for the OVN setup in the following way:
parameter_defaults:
  NeutronFlatNetworks: datacentre
  NeutronNetworkType:
    - geneve
    - flat
  NeutronTunnelTypes: 'geneve'
  NovaPCIPassthrough:
    - devname: "enp3s0f0"
      physical_network: null
  NovaSchedulerDefaultFilters:
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - ServerGroupAntiAffinityFilter
    - ServerGroupAffinityFilter
    - PciPassthroughFilter
    - NUMATopologyFilter
  NovaSchedulerAvailableFilters:
    - nova.scheduler.filters.all_filters
    - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
  ComputeSriovParameters:
    NeutronBridgeMappings:
      - datacentre:br-ex
    TunedProfileName: "throughput-performance"
    KernelArgs: "intel_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=1024"
    OvsHwOffload: true
Configure the interface names in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/control.yaml file, by adding the following code to move the tenant network from a VLAN on a bridge to a separate interface:
- type: interface
  name: <interface_name>
  addresses:
    - ip_netmask:
        get_param: TenantIpSubnet
Configure the interface names in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml file, by adding the following code:
- type: sriov_pf
  name: enp3s0f0
  link_mode: switchdev
  numvfs: 64
  promisc: true
  use_dhcp: false
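The `KernelArgs` above pin two hugepage pools at boot; a quick sanity check of how much memory that reserves:

```shell
# 8 x 1 GiB pages plus 1024 x 2 MiB pages, expressed in MiB.
pool_1g=$((8 * 1024))
pool_2m=$((1024 * 2))
total=$((pool_1g + pool_2m))
echo "1G pool: ${pool_1g} MiB, 2M pool: ${pool_2m} MiB, total: ${total} MiB"
```

The 1 GiB pool backs the VM guest memory (`hw:mem_page_size=1GB` in the flavor below), while the 2 MiB pool is available for the forwarder's datapath.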
Create a new role for the compute node, and change it to ComputeSriov:
$ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov
Update the ~/cloud-names.yaml accordingly. You may refer to the following example:
parameter_defaults:
  ComputeSriovCount: 2
  OvercloudComputeSriovFlavor: compute
  ControllerCount: 3
  OvercloudControllerFlavor: control
Assign the compute.yaml file to the ComputeSriov role by updating the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file with the following line:
OS::TripleO::ComputeSriov::Net::SoftwareConfig: ../network/config/single-nic-vlans/compute.yaml
Customizing the Overcloud Images with MOFED
To customize the overcloud images with MOFED, run:
$ sudo su
$ yum install -y libguestfs-tools
$ export LIBGUESTFS_BACKEND=direct
$ cd /home/stack/images/
$ wget https://www.mellanox.com/downloads/ofed/MLNX_OFED-5.1-2.3.7.1/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz
$ virt-copy-in -a overcloud-full.qcow2 MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz /tmp
$ virt-customize -v -a overcloud-full.qcow2 --run-command 'yum install pciutils tcl tcsh pkgconf-pkg-config gcc-gfortran make tk -y'
$ virt-customize -v -a overcloud-full.qcow2 --run-command 'cd /tmp && tar -xf MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz && rm -rf /tmp/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz'
$ virt-customize -v -a overcloud-full.qcow2 --run-command '/tmp/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64/mlnxofedinstall --force'
$ virt-customize -v -a overcloud-full.qcow2 --run-command '/etc/init.d/openibd restart'
$ virt-customize -a overcloud-full.qcow2 --selinux-relabel
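To sanity-check that MLNX_OFED actually landed in the image, you can peek inside it with libguestfs. A sketch that assumes OFED's usual `/etc/infiniband/info` marker file and skips gracefully where `virt-cat` is unavailable:

```shell
# Look for the OFED marker file inside the customized image.
# The path is an assumption based on where MLNX_OFED normally installs it.
if command -v virt-cat >/dev/null 2>&1; then
    virt-cat -a overcloud-full.qcow2 /etc/infiniband/info 2>/dev/null \
        || echo "marker file not found; re-check the mlnxofedinstall step"
else
    echo "virt-cat not available; run this on the undercloud"
fi
```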
Customizing the Overcloud Image with os-net-config
For vDPA, all VFs must be bound before the OVS container starts. For this purpose, use this patch in os-net-config. Since it is not merged yet, apply it manually on the overcloud image:
$ cat << EOF > os-net-config-sriov-bind
#!/bin/python3
import sys

from os_net_config.sriov_bind_config import main

if __name__ == "__main__":
    sys.exit(main())
EOF
$ chmod 755 os-net-config-sriov-bind
$ virt-copy-in -a overcloud-full.qcow2 os-net-config-sriov-bind /usr/bin/
Deploying the Overcloud
Deploy the overcloud, using the appropriate templates and yamls from /usr/share/openstack-tripleo-heat-templates/, as shown in the following example:
openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates \
--libvirt-type kvm \
-r /home/stack/roles_data.yaml \
--timeout 240 \
-e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
--validation-warnings-fatal \
-e /home/stack/cloud-names.yaml \
-e /home/stack/overcloud_storage_params.yaml \
-e /home/stack/containers-prepare-parameter.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/network-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e /home/stack/overcloud-selinux-config.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml
Apply OpenStack patches:
Perform the following on all compute nodes:
$ echo 'group = "hugetlbfs"' >> /var/lib/config-data/puppet-generated/nova_libvirt/etc/libvirt/qemu.conf
$ podman exec -it -u root nova_compute bash
$ git clone https://github.com/Mellanox/containerized-ovs-forwarder.git
$ yum install patch -y
$ cd /usr/lib/python3.6/site-packages/nova
$ patch -p2 < /containerized-ovs-forwarder/openstack/ussuri/ovs-kernel/nova_os_vif_util.patch
$ cp -a /containerized-ovs-forwarder/python/ovs_module /usr/lib/python3.6/site-packages/
$ cd /usr/lib/python3.6/site-packages/vif_plug_ovs
$ patch < /containerized-ovs-forwarder/openstack/ussuri/ovs-kernel/os-vif.patch
$ exit
$ podman restart nova_compute nova_libvirt
$ mkdir -p /var/lib/vhost_sockets/
$ chmod 775 /var/lib/vhost_sockets/
$ chown qemu:hugetlbfs /var/lib/vhost_sockets/
Perform the following on all controller nodes:
$ podman exec -it -u root neutron_api bash
$ git clone https://github.com/Mellanox/containerized-ovs-forwarder.git
$ yum install patch -y
$ cd /usr/lib/python3.6/site-packages/neutron
$ patch -p1 < /containerized-ovs-forwarder/openstack/ussuri/ovs-kernel/networking-ovn.patch
$ exit
$ podman restart neutron_api
Preparing the OVS-Forwarder Container
To prepare the OVS-forwarder container, perform the following on all compute nodes:
Pull the ovs-forwarder image from docker.io with a specific tag:
$ podman pull mellanox/ovs-forwarder:51237
Create the ovs-forwarder container with the PCI address of the SR-IOV PF and the VF range: --pci-args <pci_address> <vfs_range>:
$ mkdir -p /forwarder/var/run/openvswitch/
$ podman container create \
  --privileged \
  --network host \
  --name ovs_forwarder_container \
  --restart always \
  -v /dev/hugepages:/dev/hugepages \
  -v /var/lib/vhost_sockets/:/var/lib/vhost_sockets/ \
  -v /forwarder/var/run/openvswitch/:/var/run/openvswitch/ \
  ovs-forwarder:51237 \
  --pci-args 0000:02:00.0 0-3
Note: In the case of VF-LAG, pass the PCI address and VF range for the second port as well:
--pci-args 0000:02:00.0 0-3 --pci-args 0000:02:00.1 0-3
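The second argument to `--pci-args` is a VF index range; expanding it shows which VFs of the (hypothetical) PF at 0000:02:00.0 the forwarder claims:

```shell
# Expand a VF range like "0-3" into the individual VF indices.
range="0-3"
first=${range%-*}   # text before the dash
last=${range#*-}    # text after the dash
for vf in $(seq "$first" "$last"); do
    echo "pci 0000:02:00.0 vf $vf"
done
```

The range must stay within the `numvfs` value configured on the PF in compute.yaml.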
Start the ovs-forwarder container:
$ podman start ovs_forwarder_container
Booting the VM
To boot the VM on the undercloud machine, perform the following steps:
Load the overcloudrc configuration:
$ source ./overcloudrc
Create a flavor:
$ openstack flavor create --ram 1024 --vcpus 1 --property dpdk=true --property hw:mem_page_size=1GB --public dpdk.1g
Create a "cirros" image:
$ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox
Create a network:
$ openstack network create private --provider-network-type geneve --share
Create a subnet:
$ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24
Boot a VM on the overcloud, using the following command after creating the vDPA port accordingly:
For the first VM:
$ virtio_port0=`openstack port create virtio_port --vnic-type=virtio-forwarder --network private | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor dpdk.1g --image mellanox --nic port-id=$virtio_port0 --availability-zone nova:overcloud-computesriov-0.localdomain vm0
For the second VM:
$ virtio_port1=`openstack port create virtio_port --vnic-type=virtio-forwarder --network private | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor dpdk.1g --image mellanox --nic port-id=$virtio_port1 --availability-zone nova:overcloud-computesriov-0.localdomain vm1