NVIDIA Support for TripleO Victoria Application Notes

Full Offload and VDPA

Before following the instructions in this chapter, read the following chapters in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide:

  • Chapter 7: Planning Your OVS-DPDK Deployment

  • Chapter 8: Configuring an OVS-DPDK Deployment

  1. Use the ovs-hw-offload.yaml file that is available in the following location:

    /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml

  2. Configure the file over VXLAN setup in the following way:

    parameter_defaults:
      NeutronTunnelTypes: 'vxlan'
      NeutronNetworkType:
      - flat
      - vxlan
      NeutronOVSFirewallDriver: openvswitch
      OvsDpdkDriverType: mlx5_core
      NovaPCIPassthrough:
      - devname: "enp3s0f1"
        physical_network: null
      ComputeOvsDpdkParameters:
        KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=16 iommu=pt intel_iommu=on isolcpus=2-5,7-11,14-17,19-23"
        TunedProfileName: "cpu-partitioning"
        IsolCpusList: "2-5,7-11,14-17,19-23"
        NovaReservedHostMemory: 4096
        OvsDpdkSocketMemory: "1024,1024"
        OvsDpdkMemoryChannels: "4"
        OvsHwOffload: True
        OvsPmdCoreList: "4,7,8,16,19,20"
        NovaComputeCpuDedicatedSet: 2,3,5,9,10,11,14,15,17,21,22,23
        OvsDpdkCoreList: "0,1,6,12,13,18"
        NovaComputeCpuSharedSet: 0,1,6,12,13,18
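
    After the overcloud is deployed, you can optionally confirm on a compute node that the hugepage, CPU isolation, and tuned settings above took effect. This is a minimal sketch of a sanity check, not part of the original procedure:

    $ grep Huge /proc/meminfo
    $ grep isolcpus /proc/cmdline
    $ tuned-adm active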

  3. Configure the interface names in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/control.yaml file by adding the following code, which moves the tenant network from a VLAN on a bridge to a separate interface:

    - type: interface
      name: <interface_name>
      addresses:
      - ip_netmask:
          get_param: TenantIpSubnet

  4. Configure the interface names in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml file by adding the following code, which moves the tenant network from a VLAN on a bridge to a separate interface:

    - type: ovs_user_bridge
      name: br-link
      members:
      - type: ovs_dpdk_port
        name: dpdk0
        driver: mlx5_core
        members:
        - type: interface
          name: enp3s0f1
          primary: true
          use_dhcp: false
      addresses:
      - ip_netmask:
          get_param: TenantIpSubnet
    - type: sriov_pf
      name: enp3s0f1
      link_mode: switchdev
      numvfs: 8
      use_dhcp: false
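
    After the node is deployed, you can optionally verify that the PF is in switchdev mode and that the VFs exist. This is a hedged sketch; <pf_pci_address> stands for the PCI address of enp3s0f1 on your system and is not taken from the original document:

    $ cat /sys/class/net/enp3s0f1/device/sriov_numvfs
    $ devlink dev eswitch show pci/<pf_pci_address>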

  5. Create a new roles data file, changing the compute role to ComputeOvsDpdk:

    $ openstack overcloud roles generate -o roles_data.yaml Controller ComputeOvsDpdk
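
    Optionally, confirm that the generated roles data file contains the new role, for example:

    $ grep -A 2 'name: ComputeOvsDpdk' roles_data.yaml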

  6. Update the ~/cloud-names.yaml file accordingly. You may refer to the following example:

    parameter_defaults:
      ComputeOvsDpdkCount: 2
      OvercloudComputeOvsDpdkFlavor: compute
      ControllerCount: 1
      OvercloudControllerFlavor: control

  7. Assign the compute.yaml file to the ComputeOvsDpdk role. Update the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file by adding the following line:

    OS::TripleO::ComputeOvsDpdk::Net::SoftwareConfig: ../network/config/single-nic-vlans/compute.yaml

To customize the overcloud image with MLNX_OFED (MOFED), run:

$ sudo su
$ yum install -y libguestfs-tools
$ export LIBGUESTFS_BACKEND=direct
$ cd /home/stack/images/
$ wget https://www.mellanox.com/downloads/ofed/MLNX_OFED-5.1-2.3.7.1/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz
$ virt-copy-in -a overcloud-full.qcow2 MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz /tmp
$ virt-customize -v -a overcloud-full.qcow2 --run-command 'yum install pciutils tcl tcsh pkgconf-pkg-config unbound gcc-gfortran make tk -y'
$ virt-customize -v -a overcloud-full.qcow2 --run-command 'cd /tmp && tar -xf MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz && rm -rf /tmp/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz'
$ virt-customize -v -a overcloud-full.qcow2 --run-command '/tmp/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64/mlnxofedinstall --force --ovs-dpdk'
$ virt-customize -v -a overcloud-full.qcow2 --run-command 'systemctl enable openvswitch'
$ virt-customize -v -a overcloud-full.qcow2 --run-command '/etc/init.d/openibd restart'
$ virt-customize -a overcloud-full.qcow2 --selinux-relabel
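
As an optional sanity check (a sketch, not part of the original procedure), you can confirm that MLNX_OFED and Open vSwitch are present in the customized image:

$ virt-customize -v -a overcloud-full.qcow2 --run-command 'ofed_info -s'
$ virt-customize -v -a overcloud-full.qcow2 --run-command 'rpm -q openvswitch'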

For OVS-DPDK with SR-IOV, all VFs must be bound before creating VMs. For this purpose, you may use this patch to os-net-config. Since it is not merged yet, apply it manually to the overcloud image, as shown in the following example:

$ cat << EOF > os-net-config-sriov-bind
#!/bin/python3
import sys
from os_net_config.sriov_bind_config import main
if __name__ == "__main__":
    sys.exit(main())
EOF
$ chmod 755 os-net-config-sriov-bind
$ virt-copy-in -a overcloud-full.qcow2 os-net-config-sriov-bind /usr/bin/
$ virt-customize -v -a overcloud-full.qcow2 --run-command "yum install patch -y"
$ wget https://raw.githubusercontent.com/wmousa/os-net-config-bind/main/os-net-config.patch
$ virt-copy-in -a overcloud-full.qcow2 os-net-config.patch /usr/lib/python3.6/site-packages/
$ virt-customize -v -a overcloud-full.qcow2 --run-command "cd /usr/lib/python3.6/site-packages/ && patch -p1 < os-net-config.patch"
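
Optionally, you can check that the patched module and the helper script are present in the image. These commands are a sketch and assume the patch adds the os_net_config.sriov_bind_config module used by the script above:

$ virt-customize -v -a overcloud-full.qcow2 --run-command 'python3 -c "from os_net_config.sriov_bind_config import main"'
$ virt-customize -v -a overcloud-full.qcow2 --run-command 'ls -l /usr/bin/os-net-config-sriov-bind'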

Deploy the overcloud using the appropriate templates and yaml files from /usr/share/openstack-tripleo-heat-templates/, as shown in the following example:

openstack overcloud deploy \
    --templates /usr/share/openstack-tripleo-heat-templates/ \
    --libvirt-type kvm \
    -r /home/stack/roles_data.yaml \
    --timeout 240 \
    -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
    --validation-warnings-fatal \
    -e /home/stack/cloud-names.yaml \
    -e /home/stack/overcloud_storage_params.yaml \
    -e /home/stack/containers-prepare-parameter.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e /home/stack/network-environment.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
    -e /home/stack/overcloud-selinux-config.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs-dpdk.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml
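
While the deployment runs, you can follow its progress from the undercloud with standard commands, for example:

$ source ~/stackrc
$ openstack stack list
$ openstack server list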

  • Run the following on all compute nodes:

    $ podman exec -it -u root nova_compute bash
    $ git clone https://github.com/wmousa/openstack-ovs-dpdk.git
    $ yum install patch -y
    $ cd /usr/lib/python3.6/site-packages/
    $ patch -p1 < /openstack-ovs-dpdk/victoria/nova.patch
    $ patch -p1 < /openstack-ovs-dpdk/victoria/os-vif.patch
    $ exit
    $ podman restart nova_compute
    $ mkdir -p /var/lib/vhost_sockets/
    $ chmod 775 /var/lib/vhost_sockets/
    $ chown qemu:hugetlbfs /var/lib/vhost_sockets/
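
    Optionally, verify on the compute node that hardware offload and DPDK are enabled in Open vSwitch (a minimal check based on the parameters configured earlier):

    $ ovs-vsctl get Open_vSwitch . other_config:hw-offload
    $ ovs-vsctl get Open_vSwitch . other_config:dpdk-init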

  • Run the following on all controller nodes:

    $ podman exec -it -u root neutron_api bash
    $ git clone https://github.com/mellanox/openstack-ovs-dpdk.git
    $ yum install patch -y
    $ cd /usr/lib/python3.6/site-packages/
    $ patch -p1 < /openstack-ovs-dpdk/victoria/neutron.patch
    $ exit
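
    Depending on your environment, you may need to restart the neutron_api container for the patch to take effect, for example:

    $ podman restart neutron_api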

To boot VMs on the overcloud, perform the following steps from the undercloud machine:

  1. Load the overcloudrc configuration:

    $ source ./overcloudrc

  2. Create a flavor:

    $ openstack flavor create --ram 1024 --vcpus 1 --property hw:mem_page_size=1GB --public dpdk.1g
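
    Optionally, confirm the hugepage property on the flavor:

    $ openstack flavor show dpdk.1g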

  3. Create a CirrOS image:

    $ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox

  4. Create a network:

    $ openstack network create private --provider-network-type vxlan --share

  5. Create a subnet:

    $ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24

  6. To boot a VM with an SR-IOV port, create the port and boot the server using the following commands:

    • For the first VM:

      $ direct_port1=`openstack port create direct1 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
      $ openstack server create --flavor dpdk.1g --image mellanox --nic port-id=$direct_port1 vm1

    • For the second VM:

      $ direct_port2=`openstack port create direct1 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
      $ openstack server create --flavor dpdk.1g --image mellanox --nic port-id=$direct_port2 vm2
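
    After the VMs are up, you can optionally check their status from the undercloud and, on the compute node, look for offloaded flows once traffic is flowing. This is a sketch, not part of the original procedure:

      $ openstack server list
      $ ovs-appctl dpctl/dump-flows type=offloaded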

  7. To boot a VM with a VDPA port, create the port and boot the server using the following commands:

    • For the first VM:

      $ virtio_port0=`openstack port create virtio_port --vnic-type=virtio-forwarder --network private | grep ' id ' | awk '{print $4}'`
      $ openstack server create --flavor dpdk.1g --image mellanox --nic port-id=$virtio_port0 --availability-zone nova:overcloud-computesriov-0.localdomain vm0

    • For the second VM:

      $ virtio_port1=`openstack port create virtio_port --vnic-type=virtio-forwarder --network private | grep ' id ' | awk '{print $4}'`
      $ openstack server create --flavor dpdk.1g --image mellanox --nic port-id=$virtio_port1 --availability-zone nova:overcloud-computesriov-0.localdomain vm1
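
    After booting, you can optionally inspect the port and the vhost sockets on the compute node (a sketch under the assumptions above; /var/lib/vhost_sockets/ is the directory created earlier):

      $ openstack port show $virtio_port1
      $ ls -l /var/lib/vhost_sockets/
      $ ovs-vsctl show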

© Copyright 2023, NVIDIA. Last updated on Sep 8, 2023.