ML2 OVN OVS-Kernel
Configuration
Starting from a fresh bare metal server, install and configure the undercloud, as instructed in Deploying SR-IOV Technologies in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Make sure to review the following sections before resuming:
Section 6.4: Configuring OVS Hardware Offload
Section 6.5: Tuning Examples for OVS Hardware Offload
Section 6.6: Components of OVS Hardware Offload
Section 6.7: Troubleshooting OVS Hardware Offload
Section 6.8: Debugging HW Offload Flow
Use the ovs-hw-offload.yaml file from the following location:
/usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml
Configure it on top of the OVN setup as follows:
parameter_defaults:
  NeutronOVSFirewallDriver: openvswitch
  NeutronFlatNetworks: datacentre
  NeutronNetworkType:
    - geneve
    - flat
  NeutronTunnelTypes: 'geneve'
  NovaPCIPassthrough:
    - devname: "enp3s0f0"
      physical_network: null
  NovaSchedulerDefaultFilters:
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - ServerGroupAntiAffinityFilter
    - ServerGroupAffinityFilter
    - PciPassthroughFilter
    - NUMATopologyFilter
  NovaSchedulerAvailableFilters:
    - nova.scheduler.filters.all_filters
    - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
  ComputeSriovParameters:
    NeutronBridgeMappings:
      - datacentre:br-ex
    TunedProfileName: "throughput-performance"
    KernelArgs: "intel_iommu=on iommu=pt"
    OvsHwOffload: True
Make sure to move the tenant network from a VLAN on a bridge to a separate interface by adding the following section to your controller.j2.yaml file:
- type: interface
  name: enp3s0f0
  addresses:
    - ip_netmask: {{ tenant_ip ~ '/' ~ tenant_cidr }}
Make sure to move the tenant network from a VLAN on a bridge to a separate interface by adding the following section to your compute.j2.yaml file:
- type: sriov_pf
  name: enp3s0f0
  addresses:
    - ip_netmask: {{ tenant_ip ~ '/' ~ tenant_cidr }}
  link_mode: switchdev
  numvfs: 64
  promisc: true
  use_dhcp: false
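Once the node is deployed, you can confirm that the PF was moved to switchdev mode on the compute node. A minimal check, assuming enp3s0f0 has PCI address 0000:03:00.0 (adjust to your system):
# Expect "mode switchdev" in the output
$ devlink dev eswitch show pci/0000:03:00.0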
Generate a new roles data file that uses the ComputeSriov role for the compute nodes:
$ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov
Update the /home/stack/overcloud_baremetal_deploy.yaml file accordingly. You may use the following example:
- name: Controller
  count: 1
  instances:
    - name: control-0
- name: ComputeSriov
  count: 2
  instances:
    - name: compute-0
    - name: compute-1
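Since indentation errors in this file are easy to introduce, a quick YAML sanity check can save a failed provision. This is an optional suggestion, assuming PyYAML is available on the undercloud:
# Parses the file and prints OK if it is valid YAML
$ python3 -c 'import yaml; yaml.safe_load(open("/home/stack/overcloud_baremetal_deploy.yaml")); print("OK")'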
Assign the compute.j2.yaml file to the ComputeSriov role. Update the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file, by adding the following line:
ComputeSriovNetworkConfigTemplate: '/home/stack/new/nic_configs/compute.j2.yaml'
Deploying the Overcloud
Deploy the overcloud using the appropriate templates and YAML files from /usr/share/openstack-tripleo-heat-templates/, as shown in the following example:
openstack overcloud node provision --stack overcloud --output /home/stack/overcloud-baremetal-deployed.yaml /home/stack/overcloud_baremetal_deploy.yaml
openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  --libvirt-type kvm \
  -r /home/stack/roles_data.yaml \
  --timeout 240 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
  --validation-warnings-fatal \
  -e /home/stack/cloud-names.yaml \
  -e /home/stack/overcloud_storage_params.yaml \
  -e /home/stack/overcloud-baremetal-deployed.yaml \
  -e /home/stack/containers-prepare-parameter.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e /home/stack/overcloud-selinux-config.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml
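Once the deployment completes, you can verify that hardware offload is enabled on a compute node. A minimal check run from the undercloud; the hostname and the heat-admin user are assumptions matching a default TripleO deployment:
# "true" indicates OVS hardware offload is active on the node
$ ssh heat-admin@overcloud-computesriov-0 sudo ovs-vsctl get Open_vSwitch . other_config:hw-offload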
Booting the VM
To boot a VM on the overcloud, perform the following steps from the undercloud machine:
Load the overcloudrc configuration:
$ source ./overcloudrc
Create a flavor:
$ openstack flavor create m1.small --id 3 --ram 2048 --disk 20 --vcpus 1
Create a "CirrOS" image:
$ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox
Create a network:
$ openstack network create private --provider-network-type geneve --share
Create a subnet:
$ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24
Boot a VM on the overcloud, using the following command after creating the port accordingly:
For the first VM:
$ direct_port1=`openstack port create direct1 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port1 vm1
For the second VM:
$ direct_port2=`openstack port create direct2 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port2 vm2
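With traffic running between the two VMs, you can check on the compute node that flows are actually being offloaded to the NIC. This uses standard OVS tooling:
# Lists datapath flows that were offloaded to hardware
$ ovs-appctl dpctl/dump-flows type=offloaded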
ML2 OVN vDPA
Configuration
Starting from a fresh bare metal server, install and configure the undercloud, as instructed in Deploying SR-IOV Technologies in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Make sure to review the following sections before resuming:
Section 6.4: Configuring OVS Hardware Offload
Section 6.5: Tuning Examples for OVS Hardware Offload
Section 6.6: Components of OVS Hardware Offload
Section 6.7: Troubleshooting OVS Hardware Offload
Section 6.8: Debugging HW Offload Flow
Use the ovs-hw-offload.yaml file from the following location:
/usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml
Configure it on top of the OVN setup as follows:
parameter_defaults:
  NeutronFlatNetworks: datacentre
  NeutronNetworkType:
    - geneve
    - flat
  NeutronTunnelTypes: 'geneve'
  NovaPCIPassthrough:
    - devname: "enp3s0f0"
      physical_network: null
  NovaSchedulerDefaultFilters:
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - ServerGroupAntiAffinityFilter
    - ServerGroupAffinityFilter
    - PciPassthroughFilter
    - NUMATopologyFilter
  NovaSchedulerAvailableFilters:
    - nova.scheduler.filters.all_filters
    - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
  ComputeSriovParameters:
    NeutronBridgeMappings:
      - datacentre:br-ex
    TunedProfileName: "throughput-performance"
    KernelArgs: "intel_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=8"
    OvsHwOffload: True
Warning: Due to a limitation in ovs-dpdk, only the first PF can be used for switchdev when VF-LAG is not used.
Make sure to move the tenant network from a VLAN on a bridge to a separate interface by adding the following section to your controller.j2.yaml file:
- type: interface
  name: enp3s0f0
  addresses:
    - ip_netmask: {{ tenant_ip ~ '/' ~ tenant_cidr }}
Make sure to move the tenant network from a VLAN on a bridge to a separate interface by adding the following section to your compute.j2.yaml file:
- type: sriov_pf
  name: enp3s0f0
  addresses:
    - ip_netmask: {{ tenant_ip ~ '/' ~ tenant_cidr }}
  link_mode: switchdev
  numvfs: 64
  promisc: true
  use_dhcp: false
Generate a new roles data file that uses the ComputeSriov role for the compute nodes:
$ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov
Update the /home/stack/overcloud_baremetal_deploy.yaml file accordingly. You may use the following example:
- name: Controller
  count: 1
  instances:
    - name: control-0
- name: ComputeSriov
  count: 2
  instances:
    - name: compute-0
    - name: compute-1
Assign the compute.j2.yaml file to the ComputeSriov role. Update the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file by adding the following lines:
ComputeSriovNetworkConfigTemplate: '/home/stack/new/nic_configs/compute.j2.yaml'
OS::TripleO::ComputeSriov::Net::SoftwareConfig: ../network/config/single-nic-vlans/compute.yaml
Customizing the Overcloud Images with MOFED
To customize the overcloud Images with MOFED, run:
$ sudo su
$ yum install -y libguestfs-tools
$ export LIBGUESTFS_BACKEND=direct
$ cd /home/stack/images/
$ wget https://content.mellanox.com/ofed/MLNX_OFED-5.2-2.2.0.0/MLNX_OFED_LINUX-5.2-2.2.0.0-rhel8.3-x86_64.tgz
$ virt-copy-in -a overcloud-full.qcow2 MLNX_OFED_LINUX-5.2-2.2.0.0-rhel8.3-x86_64.tgz /tmp
$ virt-customize -v -a overcloud-full.qcow2 --run-command 'yum install pciutils tcl tcsh pkgconf-pkg-config gcc-gfortran make tk perl -y'
$ virt-customize -v -a overcloud-full.qcow2 --run-command 'cd /tmp && tar -xf MLNX_OFED_LINUX-5.2-2.2.0.0-rhel8.3-x86_64.tgz && rm -rf /tmp/MLNX_OFED_LINUX-5.2-2.2.0.0-rhel8.3-x86_64.tgz'
$ virt-customize -v -a overcloud-full.qcow2 --run-command '/tmp/MLNX_OFED_LINUX-5.2-2.2.0.0-rhel8.3-x86_64/mlnxofedinstall --force'
$ virt-customize -v -a overcloud-full.qcow2 --run-command '/etc/init.d/openibd restart'
$ virt-customize -a overcloud-full.qcow2 --selinux-relabel
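To confirm that MLNX_OFED was actually installed into the image, one option (an illustrative check, not part of the original procedure) is to record the OFED version inside the image and read it back:
# Write the installed OFED version to a file inside the image, then print it
$ virt-customize -a overcloud-full.qcow2 --run-command 'ofed_info -s > /tmp/ofed_version.txt'
$ virt-cat -a overcloud-full.qcow2 /tmp/ofed_version.txt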
For vDPA, all VFs must be bound before starting the OVS container. To do that, use this patch in os-net-config. Since it has not been merged yet, it must be applied manually on the overcloud image.
Customizing the Overcloud Image with os-net-config
To customize the overcloud image with os-net-config, run:
$ cat << EOF > os-net-config-sriov-bind
#!/bin/python3
import sys
from os_net_config.sriov_bind_config import main

if __name__ == "__main__":
    sys.exit(main())
EOF
$ chmod 755 os-net-config-sriov-bind
$ virt-copy-in -a overcloud-full.qcow2 os-net-config-sriov-bind /usr/bin/
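You can verify that the helper script landed in the image with libguestfs tooling (an optional sanity check):
# Confirm the script is present in the image's /usr/bin
$ virt-ls -a overcloud-full.qcow2 /usr/bin/ | grep os-net-config-sriov-bind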
Deploying the Overcloud
Deploy the overcloud using the appropriate templates and YAML files from /usr/share/openstack-tripleo-heat-templates/, as shown in the following example:
openstack overcloud node provision --stack overcloud --output /home/stack/overcloud-baremetal-deployed.yaml /home/stack/overcloud_baremetal_deploy.yaml
openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates \
  --libvirt-type kvm \
  -r /home/stack/roles_data.yaml \
  --timeout 240 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
  --validation-warnings-fatal \
  -e /home/stack/cloud-names.yaml \
  -e /home/stack/overcloud_storage_params.yaml \
  -e /home/stack/containers-prepare-parameter.yaml \
  -e /home/stack/overcloud-baremetal-deployed.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e /home/stack/overcloud-selinux-config.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml
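Because this deployment boots the compute nodes with 1G hugepages, it is worth confirming the pages were allocated after deployment. A minimal check; the hostname is an example matching a default TripleO naming scheme:
# HugePages_Total should match the hugepages=8 kernel argument
$ ssh heat-admin@overcloud-computesriov-0 grep -i huge /proc/meminfo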
Applying the OpenStack Patches
To apply the patches on all compute nodes, run the following commands:
$ echo 'group = "hugetlbfs"' >> /var/lib/config-data/puppet-generated/nova_libvirt/etc/libvirt/qemu.conf
$ podman exec -it -u root nova_compute bash
$ git clone https://github.com/Mellanox/containerized-ovs-forwarder.git
$ yum install patch -y
$ cd /usr/lib/python3.6/site-packages/nova
$ patch -p2 < /containerized-ovs-forwarder/openstack/victoria/ovs-kernel/nova_os_vif_util.patch
$ cp -a /containerized-ovs-forwarder/python/ovs_module /usr/lib/python3.6/site-packages/
$ cd /usr/lib/python3.6/site-packages/vif_plug_ovs
$ patch < /containerized-ovs-forwarder/openstack/victoria/ovs-kernel/os-vif.patch
$ exit
$ podman restart nova_compute nova_libvirt
$ chmod 775 /var/lib/vhost_sockets/
$ chown qemu:hugetlbfs /var/lib/vhost_sockets/
To apply the patches on all controller nodes, run the following commands:
$ podman exec -it -u root neutron_api bash
$ git clone https://github.com/Mellanox/containerized-ovs-forwarder.git
$ yum install patch -y
$ cd /usr/lib/python3.6/site-packages/neutron
$ patch -p1 < /containerized-ovs-forwarder/openstack/victoria/ovs-kernel/networking-ovn.patch
$ exit
$ podman restart neutron_api
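After the restarts, you can confirm that the patched containers came back up. A simple check, not part of the original instructions:
# Each container should report a recent "Up" status
$ podman ps --format '{{.Names}}: {{.Status}}' | egrep 'nova_compute|nova_libvirt|neutron_api'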
Preparing the OVS-Forwarder Container
To prepare the OVS-Forwarder container on all compute nodes, do the following:
Pull the ovs-forwarder image from docker.io with the specific tag:
$ podman pull mellanox/ovs-forwarder:52220
Create the ovs-forwarder container with the correct PCI address of the SR-IOV PF and the VF range, using --pci-args <pci_address> pf0vf[<vfs_range>]:
$ mkdir -p /forwarder/var/run/openvswitch/
$ podman container create \
    --privileged \
    --network host \
    --name ovs_forwarder_container \
    --restart always \
    -v /dev/hugepages:/dev/hugepages \
    -v /var/lib/vhost_sockets/:/var/lib/vhost_sockets/ \
    -v /forwarder/var/run/openvswitch/:/var/run/openvswitch/ \
    ovs-forwarder:52220 \
    --pci-args 0000:02:00.0 pf0vf[0-3]
Note: In the VF-LAG case, pass the PCI address and VF range of the second port as well:
$ podman container create \
    --privileged \
    --network host \
    --name ovs_forwarder_container \
    --restart always \
    -v /dev/hugepages:/dev/hugepages \
    -v /var/lib/vhost_sockets/:/var/lib/vhost_sockets/ \
    -v /forwarder/var/run/openvswitch/:/var/run/openvswitch/ \
    ovs-forwarder:52220 \
    --pci-args 0000:02:00.0 pf0vf[0-3] --pci-args 0000:02:00.0 pf1vf[0-3]
Start the ovs-forwarder container:
$ podman start ovs_forwarder_container
Create the ovs-forwarder container service:
$ wget https://raw.githubusercontent.com/Mellanox/containerized-ovs-forwarder/master/openstack/ovs_forwarder_container_service_create.sh
$ bash ovs_forwarder_container_service_create.sh
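To confirm the forwarder container is running (an optional sanity check):
# Expect an "Up" status for ovs_forwarder_container
$ podman ps --filter name=ovs_forwarder_container --format '{{.Names}}: {{.Status}}'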
Booting the VM
To boot a VM on the overcloud, perform the following steps from the undercloud machine:
Load the overcloudrc configuration:
$ source ./overcloudrc
Create a flavor:
$ openstack flavor create --ram 1024 --vcpus 1 --property hw:mem_page_size=1GB --public dpdk.1g
Create a "CirrOS" image:
$ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox
Create a network:
$ openstack network create private --provider-network-type geneve --share
Create a subnet:
$ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24
Boot a VM on the overcloud, using the following command after creating the vDPA port accordingly:
For the first VM:
$ virtio_port0=`openstack port create virtio_port --vnic-type=virtio-forwarder --network private | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor dpdk.1g --image mellanox --nic port-id=$virtio_port0 --availability-zone nova:overcloud-computesriov-0.localdomain vm0
For the second VM:
$ virtio_port1=`openstack port create virtio_port --vnic-type=virtio-forwarder --network private | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor dpdk.1g --image mellanox --nic port-id=$virtio_port1 --availability-zone nova:overcloud-computesriov-0.localdomain vm1
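Finally, you can verify that both VMs reached the ACTIVE state:
# Both servers should report status ACTIVE
$ openstack server list -c Name -c Status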