ML2 OVS - Kernel - Full Offload
Starting from a fresh bare metal server, install and configure the undercloud, as instructed in "Deploying SR-IOV Technologies" in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Please make sure you review the relevant chapters before resuming.
Since the OVS mechanism driver is not the default, the following file must be included in the deployment command to enable it:
/usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml
Use the ovs-hw-offload.yaml file and configure it for a VLAN or VXLAN setup in the following way:
In the case of a VLAN setup, configure ovs-hw-offload.yaml:
$ vi /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml
parameter_defaults:
  NeutronFlatNetworks: datacentre
  NeutronNetworkType:
    - vlan
  NeutronTunnelTypes: ''
  NeutronOVSFirewallDriver: openvswitch
  NovaSchedulerDefaultFilters:
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - ServerGroupAntiAffinityFilter
    - ServerGroupAffinityFilter
    - PciPassthroughFilter
    - NUMATopologyFilter
  NovaSchedulerAvailableFilters:
    - nova.scheduler.filters.all_filters
    - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
  NovaPCIPassthrough:
    - devname: <interface_name>
      physical_network: datacentre
  ComputeSriovParameters:
    NeutronBridgeMappings:
      - 'datacentre:br-ex'
    OvsHwOffload: true

In the case of a VXLAN setup, perform the following:
Configure the ovs-hw-offload.yaml:
parameter_defaults:
  NeutronFlatNetworks: datacentre
  NovaSchedulerDefaultFilters:
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - ServerGroupAntiAffinityFilter
    - ServerGroupAffinityFilter
    - PciPassthroughFilter
    - NUMATopologyFilter
  NovaSchedulerAvailableFilters:
    - nova.scheduler.filters.all_filters
    - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
  NovaPCIPassthrough:
    - devname: <interface_name>
      physical_network: null
  ComputeSriovParameters:
    NeutronBridgeMappings:
      - datacentre:br-ex
    OvsHwOffload: true

Configure the interface names in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml file by adding the following code, which moves the tenant network from a VLAN on a bridge to a separate interface:
- type: interface
  name: <interface_name>
  addresses:
    - ip_netmask:
        get_param: TenantIpSubnet

In the same /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml file, add the following code to configure the physical function interface in switchdev mode:
- type: sriov_pf
  name: enp3s0f0
  link_mode: switchdev
  numvfs: 16
  promisc: true
  use_dhcp: false
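Typos in the offload-related keys above only surface during deployment, so a quick grep can confirm they landed in the file. A minimal sketch, run here against a stand-in copy of the fragment (on a real undercloud, point the grep at the compute.yaml path above instead):

```shell
# Stand-in copy of the sriov_pf fragment added to compute.yaml above;
# on the undercloud, grep the real template path instead.
cat > /tmp/compute-fragment.yaml <<'EOF'
- type: sriov_pf
  name: enp3s0f0
  link_mode: switchdev
  numvfs: 16
  promisc: true
  use_dhcp: false
EOF

# Hardware offload depends on these two settings; confirm both are present.
grep -E 'link_mode: switchdev|numvfs:' /tmp/compute-fragment.yaml
```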
Create a new ComputeSriov role for the compute nodes:
$ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov
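As a quick sanity check, the generated roles_data.yaml should now contain a ComputeSriov role entry. A minimal sketch, demonstrated on a stand-in file rather than real generator output (the real file carries full role definitions; this keeps only the names):

```shell
# Stand-in for the roles_data.yaml produced by `openstack overcloud roles generate`.
cat > /tmp/roles_data.yaml <<'EOF'
- name: Controller
- name: ComputeSriov
EOF

grep -q 'name: ComputeSriov' /tmp/roles_data.yaml && echo "ComputeSriov role present"
```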
Update the ~/cloud-names.yaml file accordingly. You may use the following example:
parameter_defaults:
  ComputeSriovCount: 2
  OvercloudComputeSriovFlavor: compute

Assign the compute.yaml file to the ComputeSriov role. Update the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file by adding the following line:
OS::TripleO::ComputeSriov::Net::SoftwareConfig: ../network/config/single-nic-vlans/compute.yaml
Run the overcloud-prep-containers.sh script.
The default overcloud deployment uses the appropriate templates and YAML files from ~/heat-templates, as shown in the following example:
openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates/ \
--libvirt-type kvm -r ~/roles_data.yaml \
-e /home/stack/containers-default-parameters.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml \
--timeout 180 \
-e /home/stack/cloud-names.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e /home/stack/network-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
--validation-warnings-fatal \
--ntp-server pool.ntp.org
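A mistyped environment-file path only surfaces when the deploy command fails, so it can help to verify every file passed with -e beforehand. A minimal sketch, using a hypothetical check_envs helper and demo paths (not part of TripleO):

```shell
# Minimal sketch: fail fast if any environment file passed with -e is missing.
# check_envs is a hypothetical helper, not part of TripleO.
check_envs() {
  missing=0
  for f in "$@"; do
    [ -f "$f" ] || { echo "missing: $f"; missing=1; }
  done
  return $missing
}

# Demonstration with one file that exists and one that does not:
touch /tmp/demo-env.yaml
check_envs /tmp/demo-env.yaml /tmp/no-such-env.yaml || echo "fix paths before deploying"
```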
To boot the VM on the undercloud machine, perform the following steps:
Load the overcloudrc configuration:
$ source ./overcloudrc
Create a flavor:
$ openstack flavor create m1.small --id 3 --ram 2048 --disk 20 --vcpus 1

Create a CirrOS image:
$ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox

Create a network.
In case of a VLAN network:
$ openstack network create private --provider-physical-network datacentre --provider-network-type vlan --share

In case of a VXLAN network:
$ openstack network create private --provider-network-type vxlan --share
Create a subnet:
$ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24

Boot a VM on the overcloud using the following command, after creating the direct port accordingly.
For the first VM:
$ direct_port1=`openstack port create direct1 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port1 vm1

For the second VM:
$ direct_port2=`openstack port create direct2 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port2 vm2
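The backtick pipelines above scrape the human-readable table that `openstack port create` prints, taking the fourth whitespace-separated field of the `id` row. A self-contained demonstration on a sample row (the UUID is made up); on a real deployment, openstackclient's `-f value -c id` flags return the ID directly and are less brittle:

```shell
# Sample row in the table format that `openstack port create` prints.
row='| id | 9fe1a9ed-0000-0000-0000-000000000000 |'

# Same extraction as the guide: match the ' id ' row, print field 4.
echo "$row" | grep ' id ' | awk '{print $4}'
# Prints: 9fe1a9ed-0000-0000-0000-000000000000

# Sturdier alternative on a real cloud (machine-readable output):
#   openstack port create direct1 --vnic-type=direct --network private \
#     --binding-profile '{"capabilities":["switchdev"]}' -f value -c id
```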