NVIDIA Support for TripleO Ussuri Application Notes

ML2 OVN OVS-Kernel

Configuration

Starting from a fresh bare metal server, install and configure the undercloud, as instructed in Deploying SR-IOV Technologies in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Please make sure you review the following chapters before proceeding:

  • Section 6.4: Configuring OVS Hardware Offload

  • Section 6.5: Tuning Examples for OVS Hardware Offload

  • Section 6.6: Components of OVS Hardware Offload

  • Section 6.7: Troubleshooting OVS Hardware Offload

  • Section 6.8: Debugging HW Offload Flow

  1. Use the ovs-hw-offload.yaml file from the following location:

    /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml

    Configure it on top of the OVN setup as follows:

    parameter_defaults:
      NeutronOVSFirewallDriver: openvswitch
      NeutronFlatNetworks: datacentre
      NeutronNetworkType:
        - geneve
        - flat
      NeutronTunnelTypes: 'geneve'
      NovaPCIPassthrough:
        - devname: "enp3s0f0"
          physical_network: null
      NovaSchedulerDefaultFilters:
        - AvailabilityZoneFilter
        - ComputeFilter
        - ComputeCapabilitiesFilter
        - ImagePropertiesFilter
        - ServerGroupAntiAffinityFilter
        - ServerGroupAffinityFilter
        - PciPassthroughFilter
        - NUMATopologyFilter
      NovaSchedulerAvailableFilters:
        - nova.scheduler.filters.all_filters
        - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
      ComputeSriovParameters:
        NeutronBridgeMappings:
          - datacentre:br-ex
        TunedProfileName: "throughput-performance"
        KernelArgs: "intel_iommu=on iommu=pt"
        OvsHwOffload: True
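
    Note: After the overcloud is deployed, you can optionally confirm on a ComputeSriov node that hardware offload was enabled in OVS. This is a generic Open vSwitch check rather than part of the original procedure; the command should return "true":

    $ ovs-vsctl get Open_vSwitch . other_config:hw-offload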

  2. Configure the interface names in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/control.yaml file, by adding the following code to move the tenant network from a VLAN on a bridge to a separate interface:

    - type: interface
      name: <interface_name>
      addresses:
      - ip_netmask:
          get_param: TenantIpSubnet

  3. Configure the interface names in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml file, by adding the following code to move the tenant network from a VLAN on a bridge to a separate interface:

    - type: sriov_pf
      name: enp3s0f0
      link_mode: switchdev
      numvfs: 16
      promisc: true
      use_dhcp: false
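
    Note: Once the node is deployed, you can optionally verify that the PF was switched to switchdev mode. This is a generic devlink check rather than part of the original procedure; replace <pf_pci_address> with the PCI address of enp3s0f0, and expect the output to report "mode switchdev":

    $ devlink dev eswitch show pci/<pf_pci_address>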

  4. Create a new role for the compute node, and change it to ComputeSriov:

    $ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov

  5. Update the ~/cloud-names.yaml file accordingly. You may use the following example:

    parameter_defaults:
      ComputeSriovCount: 2
      OvercloudComputeSriovFlavor: compute
      ControllerCount: 3
      OvercloudControllerFlavor: control

  6. Assign the compute.yaml file to the ComputeSriov role. Update the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file, by adding the following line:

    OS::TripleO::ComputeSriov::Net::SoftwareConfig: ../network/config/single-nic-vlans/compute.yaml

Deploying the Overcloud

Deploy the overcloud using the appropriate templates and yamls from /usr/share/openstack-tripleo-heat-templates/, as shown in the following example:

openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  --libvirt-type kvm \
  -r /home/stack/roles_data.yaml \
  --timeout 240 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
  --validation-warnings-fatal \
  -e /home/stack/cloud-names.yaml \
  -e /home/stack/overcloud_storage_params.yaml \
  -e /home/stack/containers-prepare-parameter.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e /home/stack/overcloud-selinux-config.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml
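
As an optional sanity check (not part of the original flow), you can watch the overcloud Heat stack from the undercloud with the stackrc credentials until it reaches CREATE_COMPLETE:

$ source ~/stackrc
$ openstack stack list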

Booting the VM

To boot the VM on the undercloud machine, perform the following steps:

  1. Load the overcloudrc configuration:

    $ source ./overcloudrc

  2. Create a flavor:

    $ openstack flavor create m1.small --id 3 --ram 2048 --disk 20 --vcpus 1

  3. Create a CirrOS image:

    $ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox

  4. Create a network.

    $ openstack network create private --provider-network-type geneve --share

  5. Create a subnet.

    $ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24

  6. Boot a VM on the overcloud, using the following command after creating the port accordingly:

  • For the first VM:

    $ direct_port1=`openstack port create direct1 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port1 vm1

  • For the second VM:

    $ direct_port2=`openstack port create direct2 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port2 vm2
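
    Note: To confirm that traffic between the two VMs is actually offloaded to the NIC, you can dump the offloaded datapath flows on the compute node while traffic is running. This is a generic OVS check rather than part of the original procedure:

    $ ovs-appctl dpctl/dump-flows type=offloaded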

Configuration

Starting from a fresh bare metal server, install and configure the undercloud, as instructed in Deploying SR-IOV Technologies in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Please make sure you review the following chapters before proceeding:

  • Section 6.4: Configuring OVS Hardware Offload

  • Section 6.5: Tuning Examples for OVS Hardware Offload

  • Section 6.6: Components of OVS Hardware Offload

  • Section 6.7: Troubleshooting OVS Hardware Offload

  • Section 6.8: Debugging HW Offload Flow

  1. Use the ovs-hw-offload.yaml file from the following location:

    /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml

  2. Configure it on top of the OVN setup as follows:

    parameter_defaults:
      NeutronFlatNetworks: datacentre
      NeutronNetworkType:
        - geneve
        - flat
      NeutronTunnelTypes: 'geneve'
      NovaPCIPassthrough:
        - devname: "enp3s0f0"
          physical_network: null
      NovaSchedulerDefaultFilters:
        - AvailabilityZoneFilter
        - ComputeFilter
        - ComputeCapabilitiesFilter
        - ImagePropertiesFilter
        - ServerGroupAntiAffinityFilter
        - ServerGroupAffinityFilter
        - PciPassthroughFilter
        - NUMATopologyFilter
      NovaSchedulerAvailableFilters:
        - nova.scheduler.filters.all_filters
        - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
      ComputeSriovParameters:
        NeutronBridgeMappings:
          - datacentre:br-ex
        TunedProfileName: "throughput-performance"
        KernelArgs: "intel_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=1024"
        OvsHwOffload: True
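
    Note: After the ComputeSriov node boots with these KernelArgs, you can optionally confirm that the hugepages were allocated. This is a generic check rather than part of the original procedure:

    $ grep Huge /proc/meminfo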

  3. Configure the interface names in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/control.yaml file by adding the following code to move the tenant network from a VLAN on a bridge to a separate interface:

    - type: interface
      name: <interface_name>
      addresses:
      - ip_netmask:
          get_param: TenantIpSubnet

  4. Configure the interface names in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml file by adding the following code to move the tenant network from a VLAN on a bridge to a separate interface:

    - type: sriov_pf
      name: enp3s0f0
      link_mode: switchdev
      numvfs: 16
      promisc: true
      use_dhcp: false
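
    Note: Once os-net-config has applied this template, you can optionally confirm that the 16 VFs were created; the sysfs path below assumes the PF name enp3s0f0 used above:

    $ cat /sys/class/net/enp3s0f0/device/sriov_numvfs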

  5. Create a new role for the compute node and change it to ComputeSriov:

    $ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov

  6. Update the ~/cloud-names.yaml file. You may use the following example as reference:

    parameter_defaults:
      ComputeSriovCount: 2
      OvercloudComputeSriovFlavor: compute
      ControllerCount: 3
      OvercloudControllerFlavor: control

  7. Assign the compute.yaml file to the ComputeSriov role. Update the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file by adding the following line:

    OS::TripleO::ComputeSriov::Net::SoftwareConfig: ../network/config/single-nic-vlans/compute.yaml

Customizing the Overcloud Images with MOFED

To customize the overcloud images with MOFED, run:

$ sudo su
$ yum install -y libguestfs-tools
$ export LIBGUESTFS_BACKEND=direct
$ cd /home/stack/images/
$ wget https://www.mellanox.com/downloads/ofed/MLNX_OFED-5.1-2.3.7.1/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz
$ virt-copy-in -a overcloud-full.qcow2 MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz /tmp
$ virt-customize -v -a overcloud-full.qcow2 --run-command 'yum install pciutils tcl tcsh pkgconf-pkg-config gcc-gfortran make tk -y'
$ virt-customize -v -a overcloud-full.qcow2 --run-command 'cd /tmp && tar -xf MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz && rm -rf /tmp/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz'
$ virt-customize -v -a overcloud-full.qcow2 --run-command '/tmp/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64/mlnxofedinstall --force'
$ virt-customize -v -a overcloud-full.qcow2 --run-command '/etc/init.d/openibd restart'
$ virt-customize -a overcloud-full.qcow2 --selinux-relabel
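
As an optional check (not part of the original flow), you can confirm which MLNX_OFED version was baked into the image before uploading it; this assumes the ofed_info tool was installed by the step above:

$ virt-customize -a overcloud-full.qcow2 --run-command 'ofed_info -s > /tmp/ofed_version'
$ virt-cat -a overcloud-full.qcow2 /tmp/ofed_version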

For vDPA, all VFs must be bound before the OVS container is started. To do that, use this patch in os-net-config. Since the patch has not been merged yet, it must be applied manually to the overcloud image.

Customizing the Overcloud Image with os-net-config

To customize the overcloud image with os-net-config, run:

$ cat << EOF > os-net-config-sriov-bind
#!/bin/python3
import sys
from os_net_config.sriov_bind_config import main
if __name__ == "__main__":
    sys.exit(main())
EOF
$ chmod 755 os-net-config-sriov-bind
$ virt-copy-in -a overcloud-full.qcow2 os-net-config-sriov-bind /usr/bin/
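
To optionally confirm that the helper script landed in the image (a generic libguestfs check, not part of the original flow), run:

$ virt-ls -a overcloud-full.qcow2 /usr/bin/ | grep os-net-config-sriov-bind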

Deploying the Overcloud

Deploy the overcloud using the appropriate templates and yamls from /usr/share/openstack-tripleo-heat-templates/, as shown in the following example:

openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates \
  --libvirt-type kvm \
  -r /home/stack/roles_data.yaml \
  --timeout 240 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
  --validation-warnings-fatal \
  -e /home/stack/cloud-names.yaml \
  -e /home/stack/overcloud_storage_params.yaml \
  -e /home/stack/containers-prepare-parameter.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e /home/stack/overcloud-selinux-config.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml

Applying the OpenStack Patches

  • To apply the patches on all compute nodes, run the following commands:

    $ echo 'group = "hugetlbfs"' >> /var/lib/config-data/puppet-generated/nova_libvirt/etc/libvirt/qemu.conf
    $ podman exec -it -u root nova_compute bash
    $ git clone https://github.com/Mellanox/containerized-ovs-forwarder.git
    $ yum install patch -y
    $ cd /usr/lib/python3.6/site-packages/nova
    $ patch -p2 < /containerized-ovs-forwarder/openstack/ussuri/ovs-kernel/nova_os_vif_util.patch
    $ cp -a /containerized-ovs-forwarder/python/ovs_module /usr/lib/python3.6/site-packages/
    $ cd /usr/lib/python3.6/site-packages/vif_plug_ovs
    $ patch < /containerized-ovs-forwarder/openstack/ussuri/ovs-kernel/os-vif.patch
    $ exit
    $ podman restart nova_compute nova_libvirt
    $ mkdir -p /var/lib/vhost_sockets/
    $ chmod 775 /var/lib/vhost_sockets/
    $ chown qemu:hugetlbfs /var/lib/vhost_sockets/
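
    Note: As an optional check (not part of the original procedure), you can confirm that the hugetlbfs group setting was appended to the libvirt QEMU configuration:

    $ grep hugetlbfs /var/lib/config-data/puppet-generated/nova_libvirt/etc/libvirt/qemu.conf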

  • To apply the patches on all controller nodes, run the following commands:

    $ podman exec -it -u root neutron_api bash
    $ git clone https://github.com/Mellanox/containerized-ovs-forwarder.git
    $ yum install patch -y
    $ cd /usr/lib/python3.6/site-packages/neutron
    $ patch -p1 < /containerized-ovs-forwarder/openstack/ussuri/ovs-kernel/networking-ovn.patch
    $ exit
    $ podman restart neutron_api

Preparing the OVS-Forwarder Container

  • To prepare the OVS-Forwarder container on all compute nodes, do the following:

  1. Pull the ovs-forwarder image from the registry with the specific tag:

    $ podman pull mellanox/ovs-forwarder:51237

  2. Create the ovs-forwarder container with the PCI address of the SR-IOV PF and the VF range (--pci-args <pci_address> <vfs_range>):

    $ mkdir -p /forwarder/var/run/openvswitch/
    $ podman container create \
        --privileged \
        --network host \
        --name ovs_forwarder_container \
        --restart always \
        -v /dev/hugepages:/dev/hugepages \
        -v /var/lib/vhost_sockets/:/var/lib/vhost_sockets/ \
        -v /forwarder/var/run/openvswitch/:/var/run/openvswitch/ \
        ovs-forwarder:51237 \
        --pci-args 0000:02:00.0 0-3

    Note: In the case of VF-LAG, pass the PCI address and VF range of the second port as well, for example:

    --pci-args 0000:02:00.0 0-3 --pci-args 0000:02:00.1 0-3

  3. Start the ovs-forwarder container:

    $ podman start ovs_forwarder_container
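
    Note: You can optionally confirm that the container is up; this is a generic podman check rather than part of the original procedure:

    $ podman ps --filter name=ovs_forwarder_container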

Booting the VM

  • To boot the VM on the undercloud machine, perform the following actions:

  1. Load the overcloudrc configuration:

    $ source ./overcloudrc

  2. Create a flavor:

    $ openstack flavor create --ram 1024 --vcpus 1 --property dpdk=true --property hw:mem_page_size=1GB --public dpdk.1g

  3. Create a CirrOS image:

    $ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox

  4. Create a network:

    $ openstack network create private --provider-network-type geneve --share

  5. Create a subnet:

    $ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24

  6. Boot a VM on the overcloud, using the following command after creating the vDPA port accordingly:

  • For the first VM:

    $ virtio_port0=`openstack port create virtio_port --vnic-type=virtio-forwarder --network private | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor dpdk.1g --image mellanox --nic port-id=$virtio_port0 --availability-zone nova:overcloud-computesriov-0.localdomain vm0

  • For the second VM:

    $ virtio_port1=`openstack port create virtio_port --vnic-type=virtio-forwarder --network private | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor dpdk.1g --image mellanox --nic port-id=$virtio_port1 --availability-zone nova:overcloud-computesriov-0.localdomain vm1
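
    Note: Once the VMs are up, you can optionally confirm on the compute node that the vhost-user sockets for the virtio-forwarder ports were created under the directory prepared earlier:

    $ ls -l /var/lib/vhost_sockets/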

© Copyright 2023, NVIDIA. Last updated on May 23, 2023.