ML2 OVS - Kernel - Full Offload

Starting from a fresh bare-metal server, install and configure the undercloud, as instructed in the "Deploying SR-IOV Technologies" section of the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Please make sure you review the relevant chapters before resuming, then perform the following steps:

  1. As the OVS mechanism driver is not the default one, the following file must be passed in the deployment command to enable it:

    /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml

  2. Use the ovs-hw-offload.yaml file, and configure it for the VLAN or VXLAN setup in the following way:

    1. In the case of a VLAN setup, configure the ovs-hw-offload.yaml file:

      $ vi /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml

      parameter_defaults:
        NeutronFlatNetworks: datacentre
        NeutronNetworkType:
          - vlan
        NeutronTunnelTypes: ''
        NeutronOVSFirewallDriver: openvswitch
        NovaSchedulerDefaultFilters:
          - AvailabilityZoneFilter
          - ComputeFilter
          - ComputeCapabilitiesFilter
          - ImagePropertiesFilter
          - ServerGroupAntiAffinityFilter
          - ServerGroupAffinityFilter
          - PciPassthroughFilter
          - NUMATopologyFilter
        NovaSchedulerAvailableFilters:
          - nova.scheduler.filters.all_filters
          - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
        NovaPCIPassthrough:
          - devname: <interface_name>
            physical_network: datacentre
        ComputeSriovParameters:
          NeutronBridgeMappings:
            - 'datacentre:br-ex'
          OvsHwOffload: true

    2. In the case of a VXLAN setup, perform the following:

      1. Configure the ovs-hw-offload.yaml:

        parameter_defaults:
          NeutronFlatNetworks: datacentre
          NovaSchedulerDefaultFilters:
            - AvailabilityZoneFilter
            - ComputeFilter
            - ComputeCapabilitiesFilter
            - ImagePropertiesFilter
            - ServerGroupAntiAffinityFilter
            - ServerGroupAffinityFilter
            - PciPassthroughFilter
            - NUMATopologyFilter
          NovaSchedulerAvailableFilters:
            - nova.scheduler.filters.all_filters
            - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
          NovaPCIPassthrough:
            - devname: <interface_name>
              physical_network: null
          ComputeSriovParameters:
            NeutronBridgeMappings:
              - datacentre:br-ex
            OvsHwOffload: True

      2. Configure the interface names in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml file, by adding the following code to move the tenant network from a VLAN on a bridge to a separate interface:

        - type: interface
          name: <interface_name>
          addresses:
            - ip_netmask:
                get_param: TenantIpSubnet

      3. In the same /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml file, add the following code to configure the PF interface in switchdev mode:

        - type: sriov_pf
          name: enp3s0f0
          link_mode: switchdev
          numvfs: 16
          promisc: true
          use_dhcp: false
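
        After the overcloud deployment completes, you can optionally verify on the compute node that the PF actually entered switchdev mode. The PCI address 0000:03:00.0 below is only an example; substitute the address of your NIC (for example, as reported in the bus-info field of `ethtool -i enp3s0f0`):

        ```shell
        # Query the eSwitch mode of the PF; the output should report "mode switchdev"
        devlink dev eswitch show pci/0000:03:00.0
        ```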

  3. Generate a new roles file in which the compute role is replaced with ComputeSriov:

    $ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov

  4. Update the ~/cloud-names.yaml file accordingly. You may use the following example:

    parameter_defaults:
      ComputeSriovCount: 2
      OvercloudComputeSriovFlavor: compute

  5. Assign the compute.yaml file to the ComputeSriov role. Update the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file, by adding the following line:

    OS::TripleO::ComputeSriov::Net::SoftwareConfig: ../network/config/single-nic-vlans/compute.yaml

  6. Run the overcloud-prep-containers.sh script.

The default deployment of the overcloud is done using the appropriate templates and YAML files from ~/heat-templates, as shown in the following example:

openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  --libvirt-type kvm \
  -r ~/roles_data.yaml \
  -e /home/stack/containers-default-parameters.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml \
  --timeout 180 \
  -e /home/stack/cloud-names.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e /home/stack/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
  --validation-warnings-fatal \
  --ntp-server pool.ntp.org

To boot a VM on the overcloud, perform the following steps from the undercloud machine:

  1. Load the overcloudrc configuration:

    source ./overcloudrc

  2. Create a flavor:

    $ openstack flavor create m1.small --id 3 --ram 2048 --disk 20 --vcpus 1

  3. Create a "cirros" image:

    $ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox

  4. Create a network.

    1. In the case of a VLAN network:

      $ openstack network create private --provider-physical-network datacentre --provider-network-type vlan --share

    2. In case of a VXLAN network:

      $ openstack network create private --provider-network-type vxlan --share

  5. Create a subnet:

    $ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24

  6. Boot a VM on the overcloud by first creating a direct port with the switchdev capability, then booting the VM with that port, as shown in the following commands.

  • For the first VM:

    $ direct_port1=`openstack port create direct1 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port1 vm1

  • For the second VM:

    $ direct_port2=`openstack port create direct2 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port2 vm2
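
To confirm that traffic between the two VMs is actually offloaded to hardware rather than handled by the OVS slow path, you can inspect the datapath flows on the compute node hosting the VMs. This is a hedged example: the `type=offloaded` filter requires OVS 2.9 or later, and the representor name `eth0_0` is only an illustration of typical VF-representor naming on your compute node:

```shell
# List only the datapath flows that were offloaded to hardware
ovs-appctl dpctl/dump-flows type=offloaded

# Alternatively, inspect the TC ingress rules installed on a VF representor
tc filter show dev eth0_0 ingress
```

If the first command prints flows while traffic is running (for example, a ping between vm1 and vm2), full offload is active.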

© Copyright 2023, NVIDIA. Last updated on May 23, 2023.