NVIDIA Support for TripleO Victoria Application Notes

ASAP OVS Offload

The ASAP2 solution combines the performance and efficiency of server/storage networking hardware with the flexibility of virtual switching software. ASAP2 offers up to 10 times the performance of non-offloaded OVS solutions, delivering software-defined networks with the highest total infrastructure efficiency, deployment flexibility and operational simplicity.

Starting from NVIDIA® ConnectX®-5 NICs, NVIDIA supports accelerated virtual switching in server NIC hardware through the ASAP2 feature.

While accelerating the data plane, ASAP2 keeps the SDN control plane intact, thus staying completely transparent to applications and maintaining flexibility and ease of deployment.
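
ASAP2 offloads OVS flows onto the NIC embedded switch (eswitch), which must run in switchdev mode. The TripleO templates below configure this automatically; for reference, this is roughly what happens on a compute node (the interface name, PCI address and VF count are illustrative assumptions, not values from this guide):

$ # Create VFs on the PF, then move the eswitch to switchdev mode
$ echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs
$ devlink dev eswitch set pci/0000:03:00.0 mode switchdev
$ # Verify that the mode took effect
$ devlink dev eswitch show pci/0000:03:00.0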

The following overcloud operating system packages are supported:

Item              Version
Kernel            5.7.12 or above, with connection tracking modules
Open vSwitch      2.13.0 or above
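
A quick sanity check on a deployed node can confirm that these requirements are met (an optional check, not part of the official procedure):

$ uname -r
$ ovs-vsctl --version
$ # Connection tracking offload relies on the TC conntrack modules
$ lsmod | grep -E 'act_ct|nf_flow_table'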

OVS Hardware Offload with Kernel Datapath

OVS-Kernel uses Linux Traffic Control (TC) to program the NIC with offloaded flows.
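
The OvsHwOffload parameter used in the templates below maps to the hw-offload option in the Open vSwitch database. For reference, this is how it can be checked or set manually on a node (TripleO normally handles this for you):

$ ovs-vsctl get Open_vSwitch . other_config:hw-offload
$ # Enable manually if needed, then restart OVS
$ ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
$ systemctl restart openvswitch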

Supported Network Interface Cards and Firmware

NVIDIA® support for TripleO Victoria covers the following NVIDIA NICs and their corresponding firmware versions:

NICs              Supported Protocols    Recommended Firmware Rev.
ConnectX®-6       Ethernet               20.26.1040
ConnectX®-6 Dx    Ethernet               22.28.1002
ConnectX®-5       Ethernet               18.25.1040
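
To check which firmware a NIC is currently running, query the interface with ethtool (the interface name below is an example):

$ ethtool -i enp3s0f0 | grep -E 'driver|firmware-version'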

Configuring ASAP2 on an Open vSwitch

Starting from a fresh bare metal server, install and configure the undercloud, as instructed in Deploying SR-IOV Technologies in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide, and make sure you review the relevant chapters before resuming. Then proceed as follows:

  1. As the OVS mechanism driver is not the default, the following file must be included in the deployment command to use it:

    /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml

  2. Use the ovs-hw-offload.yaml file, and configure it for a VLAN or VXLAN setup in the following way:

    1. In case of a VLAN setup, configure the ovs-hw-offload.yaml:

      $ vi /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml

      parameter_defaults:
        NeutronFlatNetworks: datacentre
        NeutronNetworkType:
          - vlan
        NeutronTunnelTypes: ''
        NeutronOVSFirewallDriver: openvswitch
        NovaSchedulerDefaultFilters:
          - AvailabilityZoneFilter
          - ComputeFilter
          - ComputeCapabilitiesFilter
          - ImagePropertiesFilter
          - ServerGroupAntiAffinityFilter
          - ServerGroupAffinityFilter
          - PciPassthroughFilter
          - NUMATopologyFilter
        NovaSchedulerAvailableFilters:
          - nova.scheduler.filters.all_filters
          - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
        NovaPCIPassthrough:
          - devname: <interface_name>
            physical_network: datacentre
        ComputeSriovParameters:
          NeutronBridgeMappings:
            - 'datacentre:br-ex'
          OvsHwOffload: true

    2. In the case of a VXLAN setup, perform the following:

      1. Configure the ovs-hw-offload.yaml:

        parameter_defaults:
          NeutronFlatNetworks: datacentre
          NovaSchedulerDefaultFilters:
            - AvailabilityZoneFilter
            - ComputeFilter
            - ComputeCapabilitiesFilter
            - ImagePropertiesFilter
            - ServerGroupAntiAffinityFilter
            - ServerGroupAffinityFilter
            - PciPassthroughFilter
            - NUMATopologyFilter
          NovaSchedulerAvailableFilters:
            - nova.scheduler.filters.all_filters
            - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
          NovaPCIPassthrough:
            - devname: <interface_name>
              physical_network: null
          ComputeSriovParameters:
            NeutronBridgeMappings:
              - datacentre:br-ex
            OvsHwOffload: True

      2. Make sure to move the tenant network from VLAN on a bridge to a separate interface by including the following section in your controller.j2.yaml file:

        - type: interface
          name: enp3s0f0
          addresses:
            - ip_netmask: {{ tenant_ip ~ '/' ~ tenant_cidr }}

      3. Make sure to move the tenant network from VLAN on a bridge to a separate interface by including the following section in your compute.j2.yaml file:

        - type: sriov_pf
          name: enp3s0f0
          addresses:
            - ip_netmask: {{ tenant_ip ~ '/' ~ tenant_cidr }}
          link_mode: switchdev
          numvfs: 64
          promisc: true
          use_dhcp: false

  3. Create a new role for the compute node and change it to ComputeSriov:

    $ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov

  4. Update the /home/stack/overcloud_baremetal_deploy.yaml file accordingly. You may use the following example:

    - name: Controller
      count: 1
      instances:
        - name: control-0
    - name: ComputeSriov
      count: 2
      instances:
        - name: compute-0
        - name: compute-1

  5. Assign the compute.j2.yaml file to the ComputeSriov role. Update the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file by adding the following line:

    ComputeSriovNetworkConfigTemplate: '/home/stack/new/nic_configs/compute.j2.yaml'

Deploying the Overcloud

The default deployment of the overcloud uses the appropriate templates and YAML files from /usr/share/openstack-tripleo-heat-templates/, as shown in the following example:

openstack overcloud node provision --stack overcloud --output /home/stack/overcloud-baremetal-deployed.yaml /home/stack/overcloud_baremetal_deploy.yaml

openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  --libvirt-type kvm \
  -r ~/roles_data.yaml \
  -e /home/stack/overcloud-baremetal-deployed.yaml \
  -e /home/stack/containers-prepare-parameter.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml \
  --timeout 180 \
  -e /home/stack/cloud-names.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e /home/stack/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
  --validation-warnings-fatal \
  --ntp-server pool.ntp.org
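
Once the deployment completes, you can confirm on a ComputeSriov node that hardware offload is active; the following is expected to return "true":

$ ovs-vsctl get Open_vSwitch . other_config:hw-offload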

Booting the VM

To boot a VM on the overcloud, perform the following steps from the undercloud machine:

  1. Load the overcloudrc configuration:

    source ./overcloudrc

  2. Create a flavor:

    $ openstack flavor create m1.small --id 3 --ram 2048 --disk 20 --vcpus 1

  3. Create a "cirros" image:

    $ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox

  4. Create a network:

    1. In case of a VLAN network:

      $ openstack network create private --provider-physical-network datacentre --provider-network-type vlan --share

    2. In case of a VXLAN network:

      $ openstack network create private --provider-network-type vxlan --share

  5. Create a subnet:

    $ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24

  6. Boot a VM on the overcloud using the following commands, after creating the direct port accordingly. A verification sketch for the offloaded datapath follows this list.

  • For the first VM:

    $ direct_port1=`openstack port create direct1 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port1 vm1

  • For the second VM:

    $ direct_port2=`openstack port create direct2 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port2 vm2
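
With a ping running between vm1 and vm2, traffic offload can be verified on the compute node; only the first packets of each flow should appear in software on the VF representor. The representor name below is an assumption and depends on your setup:

$ ovs-appctl dpctl/dump-flows type=offloaded
$ tcpdump -nnei enp3s0f0_0 icmp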

Configuration

Starting from a fresh bare metal server, install and configure the undercloud, as instructed in Deploying SR-IOV Technologies in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Please make sure you review the following chapters before resuming:

  • Section 6.4: Configuring OVS Hardware Offload

  • Section 6.5: Tuning Examples for OVS Hardware Offload

  • Section 6.6: Components of OVS Hardware Offload

  • Section 6.7: Troubleshooting OVS Hardware Offload

  • Section 6.8: Debugging HW Offload Flow

  1. Use the ovs-hw-offload.yaml file from the following location:

    /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml

    Configure it for an OVN setup in the following way:

    parameter_defaults:
      NeutronOVSFirewallDriver: openvswitch
      NeutronFlatNetworks: datacentre
      NeutronNetworkType:
        - geneve
        - flat
      NeutronTunnelTypes: 'geneve'
      NovaPCIPassthrough:
        - devname: "enp3s0f0"
          physical_network: null
      NovaSchedulerDefaultFilters:
        - AvailabilityZoneFilter
        - ComputeFilter
        - ComputeCapabilitiesFilter
        - ImagePropertiesFilter
        - ServerGroupAntiAffinityFilter
        - ServerGroupAffinityFilter
        - PciPassthroughFilter
        - NUMATopologyFilter
      NovaSchedulerAvailableFilters:
        - nova.scheduler.filters.all_filters
        - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
      ComputeSriovParameters:
        NeutronBridgeMappings:
          - datacentre:br-ex
        TunedProfileName: "throughput-performance"
        KernelArgs: "intel_iommu=on iommu=pt"
        OvsHwOffload: True
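
    After deployment, the KernelArgs above can be verified on the compute node (a quick sanity check):

    $ # Expect intel_iommu=on iommu=pt in the kernel command line
    $ cat /proc/cmdline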

  2. Make sure to move the tenant network from VLAN on a bridge to a separate interface by including the following section in your controller.j2.yaml file:

    - type: interface
      name: enp3s0f0
      addresses:
        - ip_netmask: {{ tenant_ip ~ '/' ~ tenant_cidr }}

  3. Make sure to move the tenant network from VLAN on a bridge to a separate interface by including the following section in your compute.j2.yaml file:

    - type: sriov_pf
      name: enp3s0f0
      addresses:
        - ip_netmask: {{ tenant_ip ~ '/' ~ tenant_cidr }}
      link_mode: switchdev
      numvfs: 64
      promisc: true
      use_dhcp: false

  4. Create a new role for the compute node, and change it to ComputeSriov:

    $ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov

  5. Update the /home/stack/overcloud_baremetal_deploy.yaml file accordingly. You may use the following example:

    - name: Controller
      count: 1
      instances:
        - name: control-0
    - name: ComputeSriov
      count: 2
      instances:
        - name: compute-0
        - name: compute-1

  6. Assign the compute.j2.yaml file to the ComputeSriov role. Update the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file, by adding the following line:

    ComputeSriovNetworkConfigTemplate: '/home/stack/new/nic_configs/compute.j2.yaml'

Deploying the Overcloud

Deploy the overcloud using the appropriate templates and YAML files from /usr/share/openstack-tripleo-heat-templates/, as shown in the following example:

openstack overcloud node provision --stack overcloud --output /home/stack/overcloud-baremetal-deployed.yaml /home/stack/overcloud_baremetal_deploy.yaml

openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  --libvirt-type kvm \
  -r /home/stack/roles_data.yaml \
  --timeout 240 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
  --validation-warnings-fatal \
  -e /home/stack/cloud-names.yaml \
  -e /home/stack/overcloud_storage_params.yaml \
  -e /home/stack/overcloud-baremetal-deployed.yaml \
  -e /home/stack/containers-prepare-parameter.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e /home/stack/overcloud-selinux-config.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml

Booting the VM

To boot a VM on the overcloud, perform the following steps from the undercloud machine:

  1. Load the overcloudrc configuration:

    $ source ./overcloudrc

  2. Create a flavor:

    $ openstack flavor create m1.small --id 3 --ram 2048 --disk 20 --vcpus 1

  3. Create a "cirros" image:

    $ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox

  4. Create a network:

    $ openstack network create private --provider-network-type geneve --share

  5. Create a subnet:

    $ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24

  6. Boot a VM on the overcloud, using the following commands after creating the port accordingly:

  • For the first VM:

    $ direct_port1=`openstack port create direct1 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port1 vm1

  • For the second VM:

    $ direct_port2=`openstack port create direct2 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port2 vm2

Configuration

Starting from a fresh bare metal server, install and configure the undercloud, as instructed in Deploying SR-IOV Technologies in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Please make sure you review the following chapters before resuming:

  • Section 6.4: Configuring OVS Hardware Offload

  • Section 6.5: Tuning Examples for OVS Hardware Offload

  • Section 6.6: Components of OVS Hardware Offload

  • Section 6.7: Troubleshooting OVS Hardware Offload

  • Section 6.8: Debugging HW Offload Flow

  1. Use the ovs-hw-offload.yaml file from the following location:

    /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml

  2. Configure it for an OVN setup in the following way:

    parameter_defaults:
      NeutronFlatNetworks: datacentre
      NeutronNetworkType:
        - geneve
        - flat
      NeutronTunnelTypes: 'geneve'
      NovaPCIPassthrough:
        - devname: "enp3s0f0"
          physical_network: null
      NovaSchedulerDefaultFilters:
        - AvailabilityZoneFilter
        - ComputeFilter
        - ComputeCapabilitiesFilter
        - ImagePropertiesFilter
        - ServerGroupAntiAffinityFilter
        - ServerGroupAffinityFilter
        - PciPassthroughFilter
        - NUMATopologyFilter
      NovaSchedulerAvailableFilters:
        - nova.scheduler.filters.all_filters
        - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
      ComputeSriovParameters:
        NeutronBridgeMappings:
          - datacentre:br-ex
        TunedProfileName: "throughput-performance"
        KernelArgs: "intel_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=8"
        OvsHwOffload: True
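
    After deployment, confirm on the compute node that the kernel arguments were applied and the 1 GB hugepages were allocated (an optional sanity check):

    $ cat /proc/cmdline
    $ grep -i hugepages /proc/meminfo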

  3. Make sure to move the tenant network from VLAN on a bridge to a separate interface by including the following section in your controller.j2.yaml file:

    - type: interface
      name: enp3s0f0
      addresses:
        - ip_netmask: {{ tenant_ip ~ '/' ~ tenant_cidr }}

  4. Make sure to move the tenant network from VLAN on a bridge to a separate interface by including the following section in your compute.j2.yaml file:

    - type: sriov_pf
      name: enp3s0f0
      addresses:
        - ip_netmask: {{ tenant_ip ~ '/' ~ tenant_cidr }}
      link_mode: switchdev
      numvfs: 64
      promisc: true
      use_dhcp: false

  5. Create a new role for the compute node, and change it to ComputeSriov:

    $ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov

  6. Update the /home/stack/overcloud_baremetal_deploy.yaml file accordingly. You may use the following example:

    - name: Controller
      count: 1
      instances:
        - name: control-0
    - name: ComputeSriov
      count: 2
      instances:
        - name: compute-0
        - name: compute-1

  7. Assign the compute.j2.yaml file to the ComputeSriov role. Update the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file, by adding the following line:

    ComputeSriovNetworkConfigTemplate: '/home/stack/new/nic_configs/compute.j2.yaml'

Customizing the Overcloud Images with MOFED

To customize the overcloud images with MOFED, run:

$ sudo su
$ yum install -y libguestfs-tools
$ export LIBGUESTFS_BACKEND=direct
$ cd /home/stack/images/
$ wget https://www.mellanox.com/downloads/ofed/MLNX_OFED-5.1-2.3.7.1/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz
$ virt-copy-in -a overcloud-full.qcow2 MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz /tmp
$ virt-customize -v -a overcloud-full.qcow2 --run-command 'yum install pciutils tcl tcsh pkgconf-pkg-config gcc-gfortran make tk -y'
$ virt-customize -v -a overcloud-full.qcow2 --run-command 'cd /tmp && tar -xf MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz && rm -rf /tmp/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz'
$ virt-customize -v -a overcloud-full.qcow2 --run-command '/tmp/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64/mlnxofedinstall --force'
$ virt-customize -v -a overcloud-full.qcow2 --run-command '/etc/init.d/openibd restart'
$ virt-customize -a overcloud-full.qcow2 --selinux-relabel
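
To confirm that MLNX_OFED was installed into the image, you can query the OFED version inside it; ofed_info is installed by the MLNX_OFED installer (this check is optional):

$ virt-customize -v -a overcloud-full.qcow2 --run-command 'ofed_info -s'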

Deploying the Overcloud

  1. Deploy the overcloud, using the appropriate templates and YAML files from /usr/share/openstack-tripleo-heat-templates/, as shown in the following example:

    openstack overcloud node provision --stack overcloud --output /home/stack/overcloud-baremetal-deployed.yaml /home/stack/overcloud_baremetal_deploy.yaml

    openstack overcloud deploy \
      --templates /usr/share/openstack-tripleo-heat-templates \
      --libvirt-type kvm \
      -r /home/stack/roles_data.yaml \
      --timeout 240 \
      -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
      --validation-warnings-fatal \
      -e /home/stack/cloud-names.yaml \
      -e /home/stack/overcloud_storage_params.yaml \
      -e /home/stack/overcloud-baremetal-deployed.yaml \
      -e /home/stack/containers-prepare-parameter.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
      -e /home/stack/network-environment.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
      -e /home/stack/overcloud-selinux-config.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml

  2. Apply OpenStack patches:

  • Perform the following on all compute nodes:

    $ echo 'group = "hugetlbfs"' >> /var/lib/config-data/puppet-generated/nova_libvirt/etc/libvirt/qemu.conf
    $ podman exec -it -u root nova_compute bash
    $ git clone https://github.com/Mellanox/containerized-ovs-forwarder.git
    $ yum install patch -y
    $ cd /usr/lib/python3.6/site-packages/nova
    $ patch -p2 < /containerized-ovs-forwarder/openstack/victoria/ovs-kernel/nova_os_vif_util.patch
    $ cp -a /containerized-ovs-forwarder/python/ovs_module /usr/lib/python3.6/site-packages/
    $ cd /usr/lib/python3.6/site-packages/vif_plug_ovs
    $ patch < /containerized-ovs-forwarder/openstack/victoria/ovs-kernel/os-vif.patch
    $ exit
    $ podman restart nova_compute nova_libvirt
    $ chmod 775 /var/lib/vhost_sockets/
    $ chown qemu:hugetlbfs /var/lib/vhost_sockets/
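
    Since these patches modify files inside running containers, it may be worth confirming that a patch applies cleanly before changing anything; patch supports a dry run, for example:

    $ patch -p2 --dry-run < /containerized-ovs-forwarder/openstack/victoria/ovs-kernel/nova_os_vif_util.patch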

  • Perform the following on all controller nodes:

    $ podman exec -it -u root neutron_api bash
    $ git clone https://github.com/Mellanox/containerized-ovs-forwarder.git
    $ yum install patch -y
    $ cd /usr/lib/python3.6/site-packages/neutron
    $ patch -p1 < /containerized-ovs-forwarder/openstack/victoria/ovs-kernel/networking-ovn.patch
    $ exit
    $ podman restart neutron_api

Preparing the OVS-Forwarder Container

  • For vDPA, make sure you have the related PFs configured with ASAP2 and all VFs bound before creating and starting the OVS container.

  • To prepare the OVS-forwarder container, perform the following on all compute nodes:

  1. Pull the ovs-forwarder image from docker.io with a specific tag:

    $ podman pull mellanox/ovs-forwarder:52220

  2. Create the ovs-forwarder container with the correct PCI address of the SR-IOV PF and the range of VFs, using --pci-args <pci_address> pf0vf[<vfs_range>]:

    $ mkdir -p /forwarder/var/run/openvswitch/
    $ podman container create \
        --privileged \
        --network host \
        --name ovs_forwarder_container \
        --restart always \
        -v /dev/hugepages:/dev/hugepages \
        -v /var/lib/vhost_sockets/:/var/lib/vhost_sockets/ \
        -v /forwarder/var/run/openvswitch/:/var/run/openvswitch/ \
        ovs-forwarder:52220 \
        --pci-args 0000:02:00.0 pf0vf[0-3]

    Note: In the VF-LAG case, the --pci-args flag for the second PF should use the PCI address of the first PF and the physical port name of the second PF, i.e. --pci-args 0000:02:00.0 pf1vf[0-3]:

    $ podman container create \
        --privileged \
        --network host \
        --name ovs_forwarder_container \
        --restart always \
        -v /dev/hugepages:/dev/hugepages \
        -v /var/lib/vhost_sockets/:/var/lib/vhost_sockets/ \
        -v /forwarder/var/run/openvswitch/:/var/run/openvswitch/ \
        ovs-forwarder:52220 \
        --pci-args 0000:02:00.0 pf0vf[0-3] --pci-args 0000:02:00.0 pf1vf[0-3]

  3. Start the ovs-forwarder container:

    $ podman start ovs_forwarder_container

  4. Create the ovs-forwarder container service:

    $ wget https://raw.githubusercontent.com/Mellanox/containerized-ovs-forwarder/master/openstack/ovs_forwarder_container_service_create.sh
    $ bash ovs_forwarder_container_service_create.sh
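
    To verify that the forwarder is running and registered as a service (the service name is created by the script above and may differ in your environment):

    $ podman ps --filter name=ovs_forwarder_container
    $ systemctl list-units | grep -i ovs_forwarder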

Booting the VM

To boot a VM on the overcloud, perform the following steps from the undercloud machine:

  1. Load the overcloudrc configuration:

    $ source ./overcloudrc

  2. Create a flavor:

    $ openstack flavor create --ram 1024 --vcpus 1 --property hw:mem_page_size=1GB --public dpdk.1g

  3. Create a "cirros" image:

    $ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox

  4. Create a network:

    $ openstack network create private --provider-network-type geneve --share

  5. Create a subnet:

    $ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24

  6. Boot a VM on the overcloud, using the following commands after creating the vDPA port accordingly. A connectivity check follows this list.

  • For the first VM:

    $ virtio_port0=`openstack port create virtio_port --vnic-type=virtio-forwarder --network private | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor dpdk.1g --image mellanox --nic port-id=$virtio_port0 --availability-zone nova:overcloud-computesriov-0.localdomain vm0

  • For the second VM:

    $ virtio_port1=`openstack port create virtio_port --vnic-type=virtio-forwarder --network private | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor dpdk.1g --image mellanox --nic port-id=$virtio_port1 --availability-zone nova:overcloud-computesriov-0.localdomain vm1
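
Once both VMs are active, a simple way to validate the vDPA datapath is to ping between them over the private subnet created earlier (the address below is an illustrative example from 11.11.11.0/24):

$ openstack server list
$ # From the console of vm0, e.g. via "openstack console url show vm0":
$ ping 11.11.11.12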

© Copyright 2023, NVIDIA. Last updated on Sep 8, 2023.