NVIDIA Support for TripleO Ussuri Application Notes

InfiniBand Virtualization with SR-IOV

Starting from a fresh bare metal server, install and configure the undercloud according to the official TripleO installation documentation. Use the Ussuri OpenStack release TripleO repositories:

$ sudo -E tripleo-repos -b ussuri current
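
If the undercloud has not been deployed yet, the usual flow is to install the TripleO client, prepare an undercloud.conf, and run the installer. The outline below is a minimal sketch based on the upstream documentation; the package name and sample configuration path are the upstream defaults and may differ in your environment:

# Install the TripleO client and start from the sample undercloud configuration
$ sudo dnf install -y python3-tripleoclient
$ cp /usr/share/python-tripleoclient/undercloud.conf.sample ~/undercloud.conf

# Adjust undercloud.conf for the local environment, then install the undercloud
$ openstack undercloud install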

Create the following directories and Dockerfiles on the undercloud node; they will be used for building custom Neutron components with InfiniBand support:

# dhcp-agent
$ mkdir -p /home/stack/neutron_components_custom/dhcp
$ cat > /home/stack/neutron_components_custom/dhcp/Dockerfile <<EOF
FROM 192.168.24.1:8787/tripleou/centos-binary-neutron-dhcp-agent:current-tripleo
USER root
RUN pip3 install networking-mlnx
USER neutron
EOF

# l3-agent
$ mkdir -p /home/stack/neutron_components_custom/l3
$ cat > /home/stack/neutron_components_custom/l3/Dockerfile <<EOF
FROM 192.168.24.1:8787/tripleou/centos-binary-neutron-l3-agent:current-tripleo
USER root
RUN pip3 install networking-mlnx
USER neutron
EOF

# mlnx-agent
$ mkdir -p /home/stack/neutron_mlnx_agent_custom
$ cat > /home/stack/neutron_mlnx_agent_custom/Dockerfile <<EOF
FROM 192.168.24.1:8787/tripleou/centos-binary-neutron-mlnx-agent:current-tripleo
USER root
RUN pip3 install --upgrade pyroute2
USER neutron
EOF

Edit the "containers-prepare-parameter.yaml" file and add the following lines:

# global
parameter_defaults:
  ContainerImagePrepare:
  - push_destination: 192.168.24.1:8787
    set:
      name_prefix: centos-binary-
      name_suffix: ''
      namespace: docker.io/tripleou
      neutron_driver: null
      tag: current-tripleo
      tag_from_label: rdo_version
  # nova + neutron server
  - push_destination: "192.168.24.1:8787"
    set:
      tag: "current-tripleo"
      namespace: "docker.io/tripleou"
      name_prefix: "centos-binary-"
      name_suffix: ""
      rhel_containers: "false"
    includes:
    - nova-compute
    - neutron-server
    modify_role: tripleo-modify-image
    modify_append_tag: "-updated"
    modify_vars:
      tasks_from: yum_install.yml
      yum_repos_dir_path: /etc/yum.repos.d
      yum_packages: ['python3-networking-mlnx']
  # dhcp-agent
  - push_destination: "192.168.24.1:8787"
    set:
      tag: "current-tripleo"
      namespace: "docker.io/tripleou"
      name_prefix: "centos-binary-"
      name_suffix: ""
      rhel_containers: "false"
    includes:
    - neutron-dhcp-agent
    modify_role: tripleo-modify-image
    modify_append_tag: "-updated"
    modify_vars:
      tasks_from: modify_image.yml
      modify_dir_path: /home/stack/neutron_components_custom/dhcp
  # l3-agent
  - push_destination: "192.168.24.1:8787"
    set:
      tag: "current-tripleo"
      namespace: "docker.io/tripleou"
      name_prefix: "centos-binary-"
      name_suffix: ""
      rhel_containers: "false"
    includes:
    - neutron-l3-agent
    modify_role: tripleo-modify-image
    modify_append_tag: "-updated"
    modify_vars:
      tasks_from: modify_image.yml
      modify_dir_path: /home/stack/neutron_components_custom/l3
  # mlnx-agent
  - push_destination: "192.168.24.1:8787"
    set:
      tag: "current-tripleo"
      namespace: "docker.io/tripleou"
      name_prefix: "centos-binary-"
      name_suffix: ""
      rhel_containers: "false"
    includes:
    - neutron-mlnx-agent
    modify_role: tripleo-modify-image
    modify_append_tag: "-updated"
    modify_vars:
      tasks_from: modify_image.yml
      modify_dir_path: /home/stack/neutron_mlnx_agent_custom
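
The modified container images are built and pushed to the local registry automatically when the overcloud is deployed with this file. If you want to build and inspect them ahead of time, a manual run along the following lines should work; this is a sketch, and the output file path and the grep check are assumptions to adapt to your environment:

# Build and push the images described in containers-prepare-parameter.yaml ahead of deployment
$ openstack tripleo container image prepare \
    -e /home/stack/containers-prepare-parameter.yaml \
    --output-env-file /home/stack/containers-default-parameters.yaml

# Sanity check: the customized Neutron images should be present locally
$ sudo podman images | grep neutron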

As shown in the following configuration example, set the physical network "datacentre" to Ethernet while creating a new physical InfiniBand network, "ibnet".

Warning

Make sure that the following environment file is included in the overcloud deployment command:

-e /home/stack/tripleo-heat-templates/environments/services/neutron-mlnx-agent.yaml

resource_registry:
  'OS::TripleO::Services::NeutronMlnxAgent': ../../deployment/neutron/neutron-mlnx-agent-container-puppet.yaml
  'OS::TripleO::Services::NeutronAgentsIBConfig': ../../deployment/neutron/neutron-agents-ib-config-container-puppet.yaml

parameter_defaults:
  NeutronMechanismDrivers:
  - mlnx_sdn_assist
  - mlnx_infiniband
  NeutronPhysicalDevMappings:
  - 'datacentre:ib0'
  NovaSchedulerDefaultFilters:
  - AvailabilityZoneFilter
  - ComputeFilter
  - ComputeCapabilitiesFilter
  - ImagePropertiesFilter
  - ServerGroupAntiAffinityFilter
  - ServerGroupAffinityFilter
  - PciPassthroughFilter
  - NUMATopologyFilter
  NovaSchedulerAvailableFilters:
  - nova.scheduler.filters.all_filters
  - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
  MultiInterfaceEnabled: true
  BindNormalPortsPhysnet: datacentre
  MultiInterfaceDriverMappings:
  - 'datacentre:ipoib'
  IPoIBPhysicalInterface: ib0
  NovaPCIPassthrough:
  - devname: ib0
    physical_network: ibnet
  ComputeSriovIBParameters:
    KernelArgs: "intel_iommu=on iommu=pt"
    TunedProfileName: throughput-performance

In case of a mixed environment (Ethernet and InfiniBand) please use the following parameters:

parameter_defaults:
  NeutronMechanismDrivers:
  - mlnx_sdn_assist
  - mlnx_infiniband
  - openvswitch
  NeutronBridgeMappings: 'datacentre:br-ex'
  NeutronNetworkVLANRanges: 'ibnet:350:360'
  NeutronPhysicalDevMappings:
  - 'ibnet:ib0'
  BindNormalPortsPhysnet: ibnet
  MultiInterfaceDriverMappings:
  - 'ibnet:ipoib,datacentre:openvswitch'
  NovaPCIPassthrough:
  - devname: ib0
    physical_network: ibnet

  1. Download and install MLNX_OFED on the overcloud image using the following commands on the undercloud node:

    export LIBGUESTFS_BACKEND=direct
    virt-copy-in -a overcloud-full.qcow2 MLNX_OFED_LINUX-4.7-1.0.0.1-rhel7.7-x86_64.tgz /tmp
    virt-customize -v -a overcloud-full.qcow2 --run-command 'yum install gtk2 atk cairo tcl gcc-gfortran tcsh tk -y'
    virt-customize -v -a overcloud-full.qcow2 --run-command 'cd /tmp && tar -xf MLNX_OFED_LINUX-4.7-1.0.0.1-rhel7.7-x86_64.tgz'
    virt-customize -v -a overcloud-full.qcow2 --run-command '/tmp/MLNX_OFED_LINUX-4.7-1.0.0.1-rhel7.7-x86_64/mlnxofedinstall --force'

  2. Upload the modified image to the undercloud:

    $ openstack overcloud image upload --image-path <overcloud images folder> --update-existing
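
To confirm that the modified image has been uploaded (or updated), list the images registered on the undercloud; a quick sanity check, assuming the default credential file and image names:

$ source ~/stackrc
$ openstack image list | grep overcloud-full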

Generate the required roles [Controller, ComputeSriovIB] to use in the overcloud deployment command:

$ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriovIB

Update the "~/cloud-names.yaml" file to include the following lines:

parameter_defaults:
  ComputeSriovIBCount: 1
  OvercloudComputeSriovIBFlavor: compute

  1. Assign a NIC configuration yaml file to the ComputeSriovIB role, as shown in the next step.

  2. Update the "~/heat-templates/environments/net-single-nic-with-vlans.yaml" file by adding the following line:

    OS::TripleO::ComputeSriovIB::Net::SoftwareConfig: /home/stack/nic_configs/compute.yaml

  3. Update the controller.yaml file by creating an IPoIB child interface for the provision network default gateway:

    - type: ib_child_interface
      parent: ib0
      use_dhcp: false
      pkey_id:
        get_param: OcProvisioningNetworkVlanID
      addresses:
      - ip_netmask:
          get_param: OcProvisioningIpSubnet

Configure the bare metal node by adding the node's information to a new file: "~/bm-nodes.yaml", as shown in the following example:

nodes:
  - name: node-0
    driver: ipmi
    network_interface: neutron
    driver_info:
      ipmi_address: 10.209.225.81
      ipmi_username: ADMIN
      ipmi_password: ADMIN
    resource_class: baremetal
    properties:
      cpu_arch: x86_64
      local_gb: 930
      memory_mb: '32768'
      cpus: 24
    ports:
      - address: 'ec:0d:9a:bf:54:d4'
        pxe_enabled: true
        physical_network: ibnet

At this stage, the node must be added to the overcloud. This can be done by sourcing the overcloud credentials and executing the following commands:

source overcloudrc
openstack baremetal create bm-nodes.yaml
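
Nodes enrolled from a file start in the "enroll" provisioning state. Before they can be used for deployment, they typically need to be moved to "available"; a short sketch using the node name from the example file (whether cleaning runs first depends on your Ironic configuration):

openstack baremetal node manage node-0
openstack baremetal node provide node-0
openstack baremetal node list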

Warning

Before continuing to the next step, log in to the UFM Web UI and make sure to create PKeys for all infrastructure networks, adding the overcloud nodes' GUIDs as members. This is required to allow overcloud node connectivity during deployment.
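
For reference only, PKeys can also be managed through the UFM REST API instead of the Web UI. The request below is a sketch based on the UFM REST API documentation and should be verified against your UFM version; the UFM address, credentials, PKey value, and GUID are placeholders to replace with your own:

$ curl -k -u admin:<password> -X POST https://<ufm_ip>/ufmRest/resources/pkeys \
    -H "Content-Type: application/json" \
    -d '{"pkey": "0x7fff", "guids": ["ec0d9a0300bf54d4"], "membership": "full", "index0": true}'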

Deploy the overcloud, using the appropriate templates and yaml file from the heat templates, as shown in the following example:

openstack overcloud deploy \
  --templates /home/stack/tripleo-heat-templates \
  --libvirt-type kvm \
  -r ~/roles_data.yaml \
  --timeout 180 \
  --validation-warnings-fatal \
  -e /home/stack/cloud-names.yaml \
  -e /home/stack/containers-prepare-parameter.yaml \
  -e /home/stack/tripleo-heat-templates/environments/docker.yaml \
  -e /home/stack/tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e /home/stack/network-environment.yaml \
  -e /home/stack/nic_configs/network.yaml \
  -e /home/stack/overcloud-selinux-config.yaml \
  -e /home/stack/tripleo-heat-templates/environments/services/neutron-ovs.yaml \
  -e /home/stack/tripleo-heat-templates/environments/neutron-ml2-mlnx-sdn.yaml \
  -e /home/stack/tripleo-heat-templates/environments/services/neutron-mlnx-agent.yaml
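
After the deployment finishes, you can verify that the Neutron agents (including the Mellanox agent) registered correctly; a quick check from the undercloud node, assuming the default overcloudrc location:

$ source ~/overcloudrc
$ openstack network agent list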

On the Undercloud machine:

  1. Load the overcloudrc data.

    $ source overcloudrc

  2. Create a flavor.

    $ openstack flavor create m1.small --id 3 --ram 2048 --disk 20 --vcpus 1

  3. Create a guest image with MLNX_OFED installed. You can use disk-image-builder for this purpose. Once the image is ready, upload it to the cloud image store.

    $ openstack image create --public --file <image file> --disk-format qcow2 --container-format bare <image name>

  4. Create the InfiniBand network.

    $ openstack network create ibnet --provider-physical-network ibnet --provider-network-type vlan
    $ openstack subnet create ibnet_subnet --dhcp --network ibnet --subnet-range 11.11.11.0/24

  5. Create a direct port.

    $ openstack port create direct1 --vnic-type=direct --network ibnet


  6. Create an instance.

    $ openstack server create --flavor m1.small --image <image_name> --port direct1 vm1
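
Once the instance is active, you can confirm that it booted and that the direct port was bound; a short check from the same shell (the columns shown are standard OpenStack client fields):

$ openstack server show vm1 -c status
$ openstack port show direct1 -c status -c binding_vif_type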

Troubleshooting

Issue: The InfiniBand interface is missing inside the guest VM.

Cause: The required InfiniBand kernel modules are missing.

Solution: Make sure that the required InfiniBand kernel modules (mlx5_ib, ib_core, ib_ipoib) are installed and loaded on the instance.
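
To check from inside the guest whether these modules are present and loaded, and to load IPoIB manually if it is missing:

$ lsmod | grep -E 'mlx5_ib|ib_core|ib_ipoib'
$ sudo modprobe ib_ipoib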

Known Issues and Limitations

  • InfiniBand IPv6 tenant networks are not supported

  • Distributed Virtual Router (DVR) is not supported

  • RHEL/CentOS 8 images do not receive DHCP addresses

  • The number of networks a user can create is limited to 128 due to a limitation of the ConnectX adapter

For more details please refer to the following wiki page: https://wiki.openstack.org/wiki/Mellanox-Neutron-Train-InfiniBand

© Copyright 2023, NVIDIA. Last updated on May 23, 2023.