ASAP2 OVS Offload
The ASAP2 solution combines the performance and efficiency of server/storage networking hardware with the flexibility of virtual switching software. ASAP2 offers up to 10 times better performance than non-offloaded OVS solutions, delivering software-defined networks with the highest total infrastructure efficiency, deployment flexibility and operational simplicity.
Starting with NVIDIA® ConnectX®-5 NICs, NVIDIA supports accelerated virtual switching in server NIC hardware through the ASAP2 feature.
While accelerating the data plane, ASAP2 keeps the SDN control plane intact, thus remaining completely transparent to applications and maintaining flexibility and ease of deployment.
The following overcloud operating system packages are supported:
Item | Version |
Kernel | 5.7.12 or above with connection tracking modules |
Open vSwitch | 2.13.0 or above |
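To verify that an overcloud node meets these requirements, you can check the running kernel, the Open vSwitch version and the connection tracking modules directly on the node; a minimal sanity check could look like this:
$ uname -r
$ ovs-vswitchd --version
$ lsmod | grep -E 'nf_conntrack|nf_flow_table'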
OVS Hardware Offload with Kernel Datapath
OVS-Kernel uses Linux Traffic Control (TC) to program the NIC with offload flows.
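Outside of TripleO, this is the mechanism you would exercise by hand: hardware offload is switched on in the OVS database and the resulting rules show up as TC filters on the VF representor ports. A minimal sketch (enp3s0f0 is an assumed PF name and enp3s0f0_0 an assumed representor name):
$ ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
$ systemctl restart openvswitch
$ ethtool -K enp3s0f0 hw-tc-offload on
$ tc filter show dev enp3s0f0_0 ingress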
Supported Network Interface Cards and Firmware
NVIDIA® support for TripleO Victoria covers the following NVIDIA NICs and their corresponding firmware versions:
NICs | Supported Protocols | Recommended Firmware Rev. |
ConnectX®-6 | Ethernet | 20.26.1040 |
ConnectX®-6 Dx | Ethernet | 22.28.1002 |
ConnectX®-5 | Ethernet | 18.25.1040 |
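To check which adapter and firmware revision a given node actually runs, standard tools are sufficient; for example (enp3s0f0 is an assumed interface name):
$ lspci | grep -i mellanox
$ ethtool -i enp3s0f0 | grep firmware-version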
Configuring ASAP2 on Open vSwitch
Starting from a fresh bare metal server, install and configure the undercloud, as instructed in Deploying SR-IOV Technologies in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Please make sure you review the following chapters before resuming:
Section 6.4: Configuring OVS Hardware Offload
Section 6.5: Tuning Examples for OVS Hardware Offload
Section 6.6: Components of OVS Hardware Offload
Section 6.7: Troubleshooting OVS Hardware Offload
Section 6.8: Debugging HW Offload Flow
As the OVS mechanism driver is not the default one, the following file must be included in the deployment command in order to use the OVS mechanism driver:
/usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml
Use the ovs-hw-offload.yaml file, and configure it for the VLAN/VXLAN setup in the following way:
In case of a VLAN setup, configure the ovs-hw-offload.yaml file:
$ vi /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml
parameter_defaults:
  NeutronFlatNetworks: datacentre
  NeutronNetworkType:
    - vlan
  NeutronTunnelTypes: ''
  NeutronOVSFirewallDriver: openvswitch
  NovaSchedulerDefaultFilters:
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - ServerGroupAntiAffinityFilter
    - ServerGroupAffinityFilter
    - PciPassthroughFilter
    - NUMATopologyFilter
  NovaSchedulerAvailableFilters:
    - nova.scheduler.filters.all_filters
    - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
  NovaPCIPassthrough:
    - devname: <interface_name>
      physical_network: datacentre
  ComputeSriovParameters:
    NeutronBridgeMappings:
      - 'datacentre:br-ex'
    OvsHwOffload: true
In the case of a VXLAN setup, perform the following:
Configure the ovs-hw-offload.yaml:
parameter_defaults:
  NeutronFlatNetworks: datacentre
  NovaSchedulerDefaultFilters:
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - ServerGroupAntiAffinityFilter
    - ServerGroupAffinityFilter
    - PciPassthroughFilter
    - NUMATopologyFilter
  NovaSchedulerAvailableFilters:
    - nova.scheduler.filters.all_filters
    - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
  NovaPCIPassthrough:
    - devname: <interface_name>
      physical_network: null
  ComputeSriovParameters:
    NeutronBridgeMappings:
      - datacentre:br-ex
    OvsHwOffload: true
Make sure to move the tenant network from a VLAN on a bridge to a separate interface by adding the following section to your controller.j2.yaml file:
- type: interface
  name: enp3s0f0
  addresses:
    - ip_netmask: {{ tenant_ip ~ '/' ~ tenant_cidr }}
Make sure to move the tenant network from a VLAN on a bridge to a separate interface by adding the following section to your compute.j2.yaml file:
- type: sriov_pf
  name: enp3s0f0
  addresses:
    - ip_netmask: {{ tenant_ip ~ '/' ~ tenant_cidr }}
  link_mode: switchdev
  numvfs: 64
  promisc: true
  use_dhcp: false
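After the node is deployed with this template, you can confirm that the PF was switched to switchdev mode and that the VFs were created; a quick check, assuming enp3s0f0 sits at PCI address 0000:03:00.0:
$ devlink dev eswitch show pci/0000:03:00.0
$ cat /sys/class/net/enp3s0f0/device/sriov_numvfs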
Create a new role for the compute node and change it to ComputeSriov:
$ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov
Update the /home/stack/overcloud_baremetal_deploy.yaml file accordingly. You may use the following example:
- name: Controller
  count: 1
  instances:
    - name: control-0
- name: ComputeSriov
  count: 2
  instances:
    - name: compute-0
    - name: compute-1
Assign the compute.j2.yaml file to the ComputeSriov role. Update the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file by adding the following line:
ComputeSriovNetworkConfigTemplate: '/home/stack/new/nic_configs/compute.j2.yaml'
Deploying the Overcloud
The default deployment of the overcloud is done by using the appropriate templates and yamls from /usr/share/openstack-tripleo-heat-templates/, as shown in the following example:
openstack overcloud node provision --stack overcloud --output /home/stack/overcloud-baremetal-deployed.yaml /home/stack/overcloud_baremetal_deploy.yaml
openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates/ \
--libvirt-type kvm -r ~/roles_data.yaml \
-e /home/stack/overcloud-baremetal-deployed.yaml \
-e /home/stack/containers-prepare-parameter.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml \
--timeout 180 \
-e /home/stack/cloud-names.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e /home/stack/network-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
--validation-warnings-fatal \
--ntp-server pool.ntp.org
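Once the deployment completes, it is worth confirming on the ComputeSriov node that the OvsHwOffload parameter took effect; for example (enp3s0f0 is the assumed offloaded PF):
$ ovs-vsctl get Open_vSwitch . other_config:hw-offload
$ ethtool -k enp3s0f0 | grep hw-tc-offload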
Booting the VM
To boot a VM on the overcloud, perform the following steps from the undercloud machine:
Load the overcloudrc configuration:
source ./overcloudrc
Create a flavor:
$ openstack flavor create m1.small --id 3 --ram 2048 --disk 20 --vcpus 1
Create a CirrOS image:
$ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox
Create a network:
In case of a VLAN network:
$ openstack network create private --provider-physical-network datacentre --provider-network-type vlan --share
In case of a VXLAN network:
$ openstack network create private --provider-network-type vxlan --share
Create a subnet:
$ openstack subnet create private_subnet --dhcp --network
private
--subnet-range11.11
.11.0
/24
Boot a VM on the overcloud using the following command after creating the direct port accordingly.
For the first VM:
$ direct_port1=`openstack port create direct1 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port1 vm1
For the second VM:
$ direct_port2=`openstack port create direct2 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port2 vm2
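Once traffic runs between the two VMs (a simple ping is enough), you can verify on the compute node that the flows are offloaded to the NIC rather than handled in software; a hedged example, with enp3s0f0_0 as an assumed VF representor name:
$ ovs-appctl dpctl/dump-flows type=offloaded
$ tc -s filter show dev enp3s0f0_0 ingress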
Configuration
Starting from a fresh bare metal server, install and configure the undercloud, as instructed in Deploying SR-IOV Technologies in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Please make sure you review the following chapters before resuming:
Section 6.4: Configuring OVS Hardware Offload
Section 6.5: Tuning Examples for OVS Hardware Offload
Section 6.6: Components of OVS Hardware Offload
Section 6.7: Troubleshooting OVS Hardware Offload
Section 6.8: Debugging HW Offload Flow
Use the ovs-hw-offload.yaml file from the following location:
/usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml
Configure it for an OVN setup in the following way:
parameter_defaults:
  NeutronOVSFirewallDriver: openvswitch
  NeutronFlatNetworks: datacentre
  NeutronNetworkType:
    - geneve
    - flat
  NeutronTunnelTypes: 'geneve'
  NovaPCIPassthrough:
    - devname: "enp3s0f0"
      physical_network: null
  NovaSchedulerDefaultFilters:
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - ServerGroupAntiAffinityFilter
    - ServerGroupAffinityFilter
    - PciPassthroughFilter
    - NUMATopologyFilter
  NovaSchedulerAvailableFilters:
    - nova.scheduler.filters.all_filters
    - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
  ComputeSriovParameters:
    NeutronBridgeMappings:
      - datacentre:br-ex
    TunedProfileName: "throughput-performance"
    KernelArgs: "intel_iommu=on iommu=pt"
    OvsHwOffload: true
Make sure to move the tenant network from a VLAN on a bridge to a separate interface by adding the following section to your controller.j2.yaml file:
- type: interface
  name: enp3s0f0
  addresses:
    - ip_netmask: {{ tenant_ip ~ '/' ~ tenant_cidr }}
Make sure to move the tenant network from a VLAN on a bridge to a separate interface by adding the following section to your compute.j2.yaml file:
- type: sriov_pf
  name: enp3s0f0
  addresses:
    - ip_netmask: {{ tenant_ip ~ '/' ~ tenant_cidr }}
  link_mode: switchdev
  numvfs: 64
  promisc: true
  use_dhcp: false
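Since this role also sets KernelArgs, you may want to confirm after deployment that the IOMMU parameters were applied to the compute node's kernel command line, for example:
$ cat /proc/cmdline
$ dmesg | grep -i -e DMAR -e IOMMU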
Create a new role for the compute node, and change it to ComputeSriov:
$ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov
Update the /home/stack/overcloud_baremetal_deploy.yaml file accordingly. You may use the following example:
- name: Controller
  count: 1
  instances:
    - name: control-0
- name: ComputeSriov
  count: 2
  instances:
    - name: compute-0
    - name: compute-1
Assign the compute.j2.yaml file to the ComputeSriov role. Update the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file, by adding the following line:
ComputeSriovNetworkConfigTemplate: '/home/stack/new/nic_configs/compute.j2.yaml'
Deploying the Overcloud
Deploy the overcloud using the appropriate templates and yamls from /usr/share/openstack-tripleo-heat-templates/, as shown in the following example:
openstack overcloud node provision --stack overcloud --output /home/stack/overcloud-baremetal-deployed.yaml /home/stack/overcloud_baremetal_deploy.yaml
openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates/ \
--libvirt-type kvm \
-r /home/stack/roles_data.yaml \
--timeout 240 \
-e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
--validation-warnings-fatal \
-e /home/stack/cloud-names.yaml \
-e /home/stack/overcloud_storage_params.yaml \
-e /home/stack/overcloud-baremetal-deployed.yaml \
-e /home/stack/containers-prepare-parameter.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/network-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e /home/stack/overcloud-selinux-config.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml
Booting the VM
To boot a VM on the overcloud, perform the following steps from the undercloud machine:
Load the overcloudrc configuration:
$ source ./overcloudrc
Create a flavor:
$ openstack flavor create m1.small --id 3 --ram 2048 --disk 20 --vcpus 1
Create a CirrOS image:
$ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox
Create a network:
$ openstack network create private --provider-network-type geneve --share
Create a subnet:
$ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24
Boot a VM on the overcloud, using the following command after creating the port accordingly:
For the first VM:
$ direct_port1=`openstack port create direct1 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port1 vm1
For the second VM:
$ direct_port2=`openstack port create direct2 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port2 vm2
Configuration
Starting from a fresh bare metal server, install and configure the undercloud, as instructed in Deploying SR-IOV Technologies in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Please make sure you review the following chapters before resuming:
Section 6.4: Configuring OVS Hardware Offload
Section 6.5: Tuning Examples for OVS Hardware Offload
Section 6.6: Components of OVS Hardware Offload
Section 6.7: Troubleshooting OVS Hardware Offload
Section 6.8: Debugging HW Offload Flow
Use the ovs-hw-offload.yaml file from the following location:
/usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml
Configure it for an OVN setup in the following way:
parameter_defaults:
  NeutronFlatNetworks: datacentre
  NeutronNetworkType:
    - geneve
    - flat
  NeutronTunnelTypes: 'geneve'
  NovaPCIPassthrough:
    - devname: "enp3s0f0"
      physical_network: null
  NovaSchedulerDefaultFilters:
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - ServerGroupAntiAffinityFilter
    - ServerGroupAffinityFilter
    - PciPassthroughFilter
    - NUMATopologyFilter
  NovaSchedulerAvailableFilters:
    - nova.scheduler.filters.all_filters
    - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
  ComputeSriovParameters:
    NeutronBridgeMappings:
      - datacentre:br-ex
    TunedProfileName: "throughput-performance"
    KernelArgs: "intel_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=8"
    OvsHwOffload: true
Make sure to move the tenant network from a VLAN on a bridge to a separate interface by adding the following section to your controller.j2.yaml file:
- type: interface
  name: enp3s0f0
  addresses:
    - ip_netmask: {{ tenant_ip ~ '/' ~ tenant_cidr }}
Make sure to move the tenant network from a VLAN on a bridge to a separate interface by adding the following section to your compute.j2.yaml file:
- type: sriov_pf
  name: enp3s0f0
  addresses:
    - ip_netmask: {{ tenant_ip ~ '/' ~ tenant_cidr }}
  link_mode: switchdev
  numvfs: 64
  promisc: true
  use_dhcp: false
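Because this configuration also reserves 1G hugepages through KernelArgs (needed for the vhost-user sockets used by the OVS-forwarder container later on), you can confirm that the pages were allocated after the node reboots, for example:
$ grep Huge /proc/meminfo
$ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages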
Create a new role for the compute node, and change it to ComputeSriov:
$ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov
Update the /home/stack/overcloud_baremetal_deploy.yaml file accordingly. You may use the following example:
- name: Controller
  count: 1
  instances:
    - name: control-0
- name: ComputeSriov
  count: 2
  instances:
    - name: compute-0
    - name: compute-1
Assign the compute.j2.yaml file to the ComputeSriov role. Update the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file, by adding the following line:
ComputeSriovNetworkConfigTemplate: '/home/stack/new/nic_configs/compute.j2.yaml'
Customizing the Overcloud Images with MOFED
To customize the overcloud images with MOFED, run:
$ sudo su
$ yum install -y libguestfs-tools
$ export LIBGUESTFS_BACKEND=direct
$ cd /home/stack/images/
$ wget https://www.mellanox.com/downloads/ofed/MLNX_OFED-5.1-2.3.7.1/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz
$ virt-copy-in -a overcloud-full.qcow2 MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz /tmp
$ virt-customize -v -a overcloud-full.qcow2 --run-command 'yum install pciutils tcl tcsh pkgconf-pkg-config gcc-gfortran make tk -y'
$ virt-customize -v -a overcloud-full.qcow2 --run-command 'cd /tmp && tar -xf MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz && rm -rf /tmp/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz'
$ virt-customize -v -a overcloud-full.qcow2 --run-command '/tmp/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64/mlnxofedinstall --force'
$ virt-customize -v -a overcloud-full.qcow2 --run-command ' /etc/init.d/openibd restart'
$ virt-customize -a overcloud-full.qcow2 --selinux-relabel
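To sanity-check that MLNX_OFED was actually installed into the image before uploading it, one option is to query the OFED version from inside the image with the libguestfs tools installed above (a hedged sketch; /tmp/ofed_version.txt is just a scratch file):
$ virt-customize -a overcloud-full.qcow2 --run-command 'ofed_info -s > /tmp/ofed_version.txt'
$ virt-cat -a overcloud-full.qcow2 /tmp/ofed_version.txt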
Deploying the Overcloud
Deploy the overcloud, using the appropriate templates and yamls from /usr/share/openstack-tripleo-heat-templates/, as shown in the following example:
openstack overcloud node provision --stack overcloud --output /home/stack/overcloud-baremetal-deployed.yaml /home/stack/overcloud_baremetal_deploy.yaml

openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates \
--libvirt-type kvm \
-r /home/stack/roles_data.yaml \
--timeout 240 \
-e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
--validation-warnings-fatal \
-e /home/stack/cloud-names.yaml \
-e /home/stack/overcloud_storage_params.yaml \
-e /home/stack/overcloud-baremetal-deployed.yaml \
-e /home/stack/containers-prepare-parameter.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/network-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e /home/stack/overcloud-selinux-config.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml
Apply OpenStack patches:
Perform the following on all compute nodes:
$ echo 'group = "hugetlbfs"' >> /var/lib/config-data/puppet-generated/nova_libvirt/etc/libvirt/qemu.conf
$ podman exec -it -u root nova_compute bash
$ git clone https://github.com/Mellanox/containerized-ovs-forwarder.git
$ yum install patch -y
$ cd /usr/lib/python3.6/site-packages/nova
$ patch -p2 < /containerized-ovs-forwarder/openstack/victoria/ovs-kernel/nova_os_vif_util.patch
$ cp -a /containerized-ovs-forwarder/python/ovs_module /usr/lib/python3.6/site-packages/
$ cd /usr/lib/python3.6/site-packages/vif_plug_ovs
$ patch < /containerized-ovs-forwarder/openstack/victoria/ovs-kernel/os-vif.patch
$ exit
$ podman restart nova_compute nova_libvirt
$ chmod 775 /var/lib/vhost_sockets/
$ chown qemu:hugetlbfs /var/lib/vhost_sockets/
Perform the following on all controller nodes:
$ podman exec -it -u root neutron_api bash
$ git clone https://github.com/Mellanox/containerized-ovs-forwarder.git
$ yum install patch -y
$ cd /usr/lib/python3.6/site-packages/neutron
$ patch -p1 < /containerized-ovs-forwarder/openstack/victoria/ovs-kernel/networking-ovn.patch
$ exit
$ podman restart neutron_api
Preparing the OVS-Forwarder Container
For vDPA, make sure you have the related PFs configured with ASAP2 and all VFs bound before creating and starting the OVS container.
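A minimal way to verify this on the compute node before creating the container, assuming the PF enp3s0f0 sits at PCI address 0000:02:00.0 as in the examples below:
$ cat /sys/class/net/enp3s0f0/device/sriov_numvfs
$ devlink dev eswitch show pci/0000:02:00.0
$ ls /sys/bus/pci/drivers/mlx5_core/ | grep 0000:02:00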
To prepare the OVS-forwarder container, perform the following on all compute nodes:
Pull the ovs-forwarder image from docker.io with a specific tag:
$ podman pull mellanox/ovs-forwarder:52220
Create the ovs-forwarder container with the right PCI address of the SR-IOV PF and the range of VFs: --pci-args <pci_address> pf0vf[<vfs_range>]:
$ mkdir -p /forwarder/var/run/openvswitch/
$ podman container create \
--privileged \
--network host \
--name ovs_forwarder_container \
--restart always \
-v /dev/hugepages:/dev/hugepages \
-v /var/lib/vhost_sockets/:/var/lib/vhost_sockets/ \
-v /forwarder/var/run/openvswitch/:/var/run/openvswitch/ \
ovs-forwarder:52220 \
--pci-args 0000:02:00.0 pf0vf[0-3]
Note: In the VF-LAG case, the --pci-args flag for the second PF should use the PCI address of the first PF and the physical name of the second PF, for example --pci-args 0000:02:00.0 pf1vf[0-3]:
$ podman container create \
--privileged \
--network host \
--name ovs_forwarder_container \
--restart always \
-v /dev/hugepages:/dev/hugepages \
-v /var/lib/vhost_sockets/:/var/lib/vhost_sockets/ \
-v /forwarder/var/run/openvswitch/:/var/run/openvswitch/ \
ovs-forwarder:52220 \
--pci-args 0000:02:00.0 pf0vf[0-3] --pci-args 0000:02:00.0 pf1vf[0-3]
Start the ovs-forwarder container:
$ podman start ovs_forwarder_container
Create the ovs-forwarder container service:
$ wget https://raw.githubusercontent.com/Mellanox/containerized-ovs-forwarder/master/openstack/ovs_forwarder_container_service_create.sh
$ bash ovs_forwarder_container_service_create.sh
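You can then confirm that the forwarder is running and, assuming the image ships the standard ovs-vsctl client, that its OVS instance is reachable:
$ podman ps --filter name=ovs_forwarder_container
$ podman exec ovs_forwarder_container ovs-vsctl show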
Booting the VM
To boot a VM on the overcloud, perform the following steps from the undercloud machine:
Load the overcloudrc configuration:
$ source ./overcloudrc
Create a flavor:
$ openstack flavor create --ram 1024 --vcpus 1 --property hw:mem_page_size=1GB --public dpdk.1g
Create a CirrOS image:
$ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox
Create a network:
$ openstack network create private --provider-network-type geneve --share
Create a subnet:
$ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24
Boot a VM on the overcloud, using the following command after creating the vDPA port accordingly:
For the first VM:
$ virtio_port0=`openstack port create virtio_port --vnic-type=virtio-forwarder --network private | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor dpdk.1g --image mellanox --nic port-id=$virtio_port0 --availability-zone nova:overcloud-computesriov-0.localdomain vm0
For the second VM:
$ virtio_port1=`openstack port create virtio_port --vnic-type=virtio-forwarder --network private | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor dpdk.1g --image mellanox --nic port-id=$virtio_port1 --availability-zone nova:overcloud-computesriov-0.localdomain vm1
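Finally, you can verify that both instances came up and that their ports were bound as expected; for example:
$ openstack server list
$ openstack port show $virtio_port0 -c status -c binding_vif_type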