Configuring ASAP2 on an Open vSwitch
Starting from a fresh bare metal server, install and configure the undercloud as instructed in the "Deploying SR-IOV Technologies" chapter of the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Make sure you review the relevant chapters of that guide before resuming.
As the OVS mechanism driver is not the default one, the following file must be passed to the deployment command to enable it:
/usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml
- Use the
ovs-hw-offload.yaml
file and configure it for your VLAN or VXLAN setup in the following way.
In case of a VLAN setup, configure ovs-hw-offload.yaml:
$ vi /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml
parameter_defaults:
  NeutronFlatNetworks: datacentre
  NeutronNetworkType:
    - vlan
  NeutronTunnelTypes: ''
  NeutronOVSFirewallDriver: openvswitch
  NovaSchedulerDefaultFilters:
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - ServerGroupAntiAffinityFilter
    - ServerGroupAffinityFilter
    - PciPassthroughFilter
    - NUMATopologyFilter
  NovaSchedulerAvailableFilters:
    - nova.scheduler.filters.all_filters
    - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
  NovaPCIPassthrough:
    - devname: <interface_name>
      physical_network: datacentre
  ComputeSriovParameters:
    NeutronBridgeMappings:
      - 'datacentre:br-ex'
    OvsHwOffload: true
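The <interface_name> placeholder in NovaPCIPassthrough must match the PF network device name of the offload-capable NIC on the compute node. If you are unsure of the name, the following sketch is a quick way to identify candidates (the device name enp3s0f0 below is only an example, assuming a ConnectX NIC driven by mlx5_core):
$ ip -br link show                     # list all network devices
$ ethtool -i enp3s0f0 | grep driver    # expect "driver: mlx5_core" on an offload-capable port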
- In the case of a VXLAN setup, configure ovs-hw-offload.yaml:
parameter_defaults:
  NeutronFlatNetworks: datacentre
  NovaSchedulerDefaultFilters:
    - AvailabilityZoneFilter
    - ComputeFilter
    - ComputeCapabilitiesFilter
    - ImagePropertiesFilter
    - ServerGroupAntiAffinityFilter
    - ServerGroupAffinityFilter
    - PciPassthroughFilter
    - NUMATopologyFilter
  NovaSchedulerAvailableFilters:
    - nova.scheduler.filters.all_filters
    - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
  NovaPCIPassthrough:
    - devname: <interface_name>
      physical_network: null
  ComputeSriovParameters:
    NeutronBridgeMappings:
      - datacentre:br-ex
    OvsHwOffload: true
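After the overcloud is deployed, you can confirm that the OvsHwOffload parameter took effect on a compute node. A minimal check, valid for both the VLAN and VXLAN variants:
$ sudo ovs-vsctl get Open_vSwitch . other_config:hw-offload    # expected output: "true"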
Configure the interface names in the
/usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml
file by adding the following code, which moves the tenant network from a VLAN on a bridge to a separate interface:
- type: interface
  name: <interface_name>
  addresses:
    - ip_netmask:
        get_param: TenantIpSubnet
In the same
/usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml
file, add the following code to configure the PF in switchdev mode:
- type: sriov_pf
  name: enp3s0f0
  link_mode: switchdev
  numvfs: 16
  promisc: true
  use_dhcp: false
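Once the node configuration is applied, you can confirm that the PF has moved to switchdev mode using devlink. A minimal check, assuming the PCI address of enp3s0f0 is 0000:03:00.0 (verify yours with ethtool -i enp3s0f0):
$ sudo devlink dev eswitch show pci/0000:03:00.0    # expected output includes "mode switchdev"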
Create a new role for the compute node and change it to ComputeSriov:
$ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov
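A quick way to verify that the generated file contains the new role:
$ grep '^- name:' ~/roles_data.yaml    # expected: Controller and ComputeSriov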
Update the
~/cloud-names.yaml
file accordingly. You may use the following example:
parameter_defaults:
  ComputeSriovCount: 2
  OvercloudComputeSriovFlavor: compute
Assign the
compute.yaml
file to the ComputeSriov role by updating the
/usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml
file with the following line:
OS::TripleO::ComputeSriov::Net::SoftwareConfig: ../network/config/single-nic-vlans/compute.yaml
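You can sanity-check that the mapping is in place before deploying:
$ grep ComputeSriov /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml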
- Run the
overcloud-prep-containers.sh
script.
Deploying the Overcloud
The default deployment of the overcloud uses the appropriate templates and YAML files from ~/heat-templates, as shown in the following example:
openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates/ \
--libvirt-type kvm \
-r ~/roles_data.yaml \
-e /home/stack/containers-default-parameters.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml \
--timeout 180 \
-e /home/stack/cloud-names.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e /home/stack/network-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
--validation-warnings-fatal \
--ntp-server pool.ntp.org
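Once the deployment completes, a minimal sanity check from the undercloud is:
$ source ~/stackrc
$ openstack stack list     # the overcloud stack should report CREATE_COMPLETE
$ openstack server list    # all overcloud nodes should be ACTIVE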
Booting the VM
To boot a VM, perform the following steps from the undercloud machine:
Load the overcloudrc configuration:
source ./overcloudrc
Create a flavor:
# openstack flavor create m1.small --id 3 --ram 2048 --disk 20 --vcpus 1
Create a "cirrios" image:
$ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox
- Create a network.
In case of a VLAN network:
$ openstack network create private --provider-physical-network datacentre --provider-network-type vlan --share
In case of a VXLAN network:
$ openstack network create private --provider-network-type vxlan --share
Create a subnet:
$ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24
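To confirm the network and subnet were created as intended:
$ openstack network show private
$ openstack subnet show private_subnet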
- Boot VMs on the overcloud using the following commands, creating the direct port first in each case.
For the first VM:
$ direct_port1=`openstack port create direct1 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port1 vm1
For the second VM:
$ direct_port2=`openstack port create direct2 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
$ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port2 vm2
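To verify that traffic between the two VMs is actually offloaded, generate traffic (for example, ping vm2 from vm1) and dump the offloaded datapath flows on the compute node. A minimal check, assuming OVS 2.8 or later:
$ sudo ovs-appctl dpctl/dump-flows type=offloaded
Flows listed here are processed in hardware; only the first packet of each flow should traverse the software datapath.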