ASAP2 OVS Offload

The ASAP2 solution combines the performance and efficiency of server/storage networking hardware with the flexibility of virtual switching software. ASAP2 offers up to 10 times better performance than non-offloaded OVS solutions, delivering software-defined networks with the highest total infrastructure efficiency, deployment flexibility, and operational simplicity.

Starting from NVIDIA® Mellanox® ConnectX®-5 NICs, Mellanox supports accelerated virtual switching in server NIC hardware through the ASAP2 feature.

While accelerating the data plane, ASAP2 keeps the SDN control plane intact, thus remaining completely transparent to applications and maintaining flexibility and ease of deployment.

The following overcloud operating system packages are supported:

Item            Version
Kernel          5.7.12 or above, with connection tracking modules
Open vSwitch    2.13.0 or above
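
To confirm that an overcloud node meets these requirements, the versions can be checked directly on the node. A minimal sketch; the connection tracking module names below are assumptions:

    # Check the running kernel version (must be 5.7.12 or above)
    $ uname -r

    # Check the Open vSwitch version (must be 2.13.0 or above)
    $ ovs-vswitchd --version

    # Confirm the connection tracking modules are available (assumed module names)
    $ modinfo nf_conntrack nf_flow_table | grep ^filename: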


OVS hardware offload with the kernel datapath: OVS-Kernel uses Linux Traffic Control (TC) to program the NIC with offloaded flows.
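
On a deployed node, a quick way to see this TC-based offloading in action is to query OVS and TC directly. A sketch; the VF representor device name below is a placeholder:

    # Confirm OVS hardware offload is enabled
    $ ovs-vsctl get Open_vSwitch . other_config:hw-offload

    # Dump the flows OVS has offloaded to the NIC
    $ ovs-appctl dpctl/dump-flows type=offloaded

    # Inspect the TC rules programmed on a VF representor (placeholder name)
    $ tc -s filter show dev enp3s0f0_0 ingress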

Supported Network Interface Cards and Firmware

NVIDIA® Mellanox® support for TripleO Ussuri covers the following Mellanox NICs and their corresponding firmware versions:

NICs              Supported Protocols    Recommended Firmware Rev.
ConnectX®-6       Ethernet               20.26.1040
ConnectX®-6 Dx    Ethernet               22.28.1002
ConnectX®-5       Ethernet               18.25.1040
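
The firmware version of an installed NIC can be compared against this table with ethtool. A sketch, assuming the interface is named enp3s0f0:

    # Query driver and firmware information for the Mellanox interface
    $ ethtool -i enp3s0f0 | grep -E 'driver|firmware-version'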

Configuring ASAP2 on Open vSwitch

Starting from a fresh bare metal server, install and configure the undercloud, as instructed in Deploying SR-IOV Technologies in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Make sure you review the relevant chapters before resuming, then perform the following steps:

  1. Since the OVS mechanism driver is not the default one, the following file must be included in the deployment command to enable it:

    /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml

  2. Use the ovs-hw-offload.yaml file, and configure it for a VLAN or VXLAN setup in the following way:

    1. In the case of a VLAN setup, configure ovs-hw-offload.yaml:

      $ vi /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml

      parameter_defaults:
        NeutronFlatNetworks: datacentre
        NeutronNetworkType:
          - vlan
        NeutronTunnelTypes: ''
        NeutronOVSFirewallDriver: openvswitch
        NovaSchedulerDefaultFilters:
          - AvailabilityZoneFilter
          - ComputeFilter
          - ComputeCapabilitiesFilter
          - ImagePropertiesFilter
          - ServerGroupAntiAffinityFilter
          - ServerGroupAffinityFilter
          - PciPassthroughFilter
          - NUMATopologyFilter
        NovaSchedulerAvailableFilters:
          - nova.scheduler.filters.all_filters
          - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
        NovaPCIPassthrough:
          - devname: <interface_name>
            physical_network: datacentre
        ComputeSriovParameters:
          NeutronBridgeMappings:
            - 'datacentre:br-ex'
          OvsHwOffload: true

    2. In the case of a VXLAN setup, perform the following:

      1. Configure the ovs-hw-offload.yaml:

        parameter_defaults:
          NeutronFlatNetworks: datacentre
          NovaSchedulerDefaultFilters:
            - AvailabilityZoneFilter
            - ComputeFilter
            - ComputeCapabilitiesFilter
            - ImagePropertiesFilter
            - ServerGroupAntiAffinityFilter
            - ServerGroupAffinityFilter
            - PciPassthroughFilter
            - NUMATopologyFilter
          NovaSchedulerAvailableFilters:
            - nova.scheduler.filters.all_filters
            - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
          NovaPCIPassthrough:
            - devname: <interface_name>
              physical_network: null
          ComputeSriovParameters:
            NeutronBridgeMappings:
              - datacentre:br-ex
            OvsHwOffload: True

      2. Configure the interface names in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml file, by adding the following code to move the tenant network from a VLAN on a bridge to a separate interface:

        - type: interface
          name: <interface_name>
          addresses:
          - ip_netmask:
              get_param: TenantIpSubnet

      3. In the same compute.yaml file, configure the PF interface in switchdev mode, by adding the following code (a verification sketch follows this procedure):

        - type: sriov_pf
          name: enp3s0f0
          link_mode: switchdev
          numvfs: 64
          promisc: true
          use_dhcp: false

  3. Create a new role for the compute node and change it to ComputeSriov:

    $ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov

  4. Update the ~/cloud-names.yaml file accordingly. You may use the following example:

    parameter_defaults:
      ComputeSriovCount: 2
      OvercloudComputeSriovFlavor: compute

  5. Assign the compute.yaml file to the ComputeSriov role. Update the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file, by adding the following line:

    OS::TripleO::ComputeSriov::Net::SoftwareConfig: ../network/config/single-nic-vlans/compute.yaml

  6. Run the overcloud-prep-containers.sh script.
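
After the overcloud is deployed, the switchdev mode set in step 2 can be verified with devlink on the compute node. A sketch, assuming the PF sits at PCI address 0000:03:00.0:

    # The eswitch mode should report "switchdev" rather than the default "legacy"
    $ devlink dev eswitch show pci/0000:03:00.0

    # The VF representors (e.g. enp3s0f0_0 ... enp3s0f0_63) should now be listed
    $ ip -br link show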

Deploying the Overcloud

The default deployment of the overcloud is done by using the appropriate templates and YAML files from /usr/share/openstack-tripleo-heat-templates/, as shown in the following example:

openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  --libvirt-type kvm \
  -r ~/roles_data.yaml \
  -e /home/stack/containers-default-parameters.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml \
  --timeout 180 \
  -e /home/stack/cloud-names.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e /home/stack/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
  --validation-warnings-fatal \
  --ntp-server pool.ntp.org

Booting the VM

To boot the VM on the undercloud machine, perform the following steps:

  1. Load the overcloudrc configuration:

    $ source ./overcloudrc

  2. Create a flavor:

    $ openstack flavor create m1.small --id 3 --ram 2048 --disk 20 --vcpus 1

  3. Create a CirrOS image:

    $ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox

  4. Create a network.

    1. In case of a VLAN network:

      $ openstack network create private --provider-physical-network datacentre --provider-network-type vlan --share

    2. In case of a VXLAN network:

      $ openstack network create private --provider-network-type vxlan --share

  5. Create a subnet:

    $ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24

  6. Boot a VM on the overcloud using the following commands after creating the direct port accordingly (an offload-verification sketch follows these steps).

  • For the first VM:

    $ direct_port1=`openstack port create direct1 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port1 vm1

  • For the second VM:

    $ direct_port2=`openstack port create direct2 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port2 vm2
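
Once both VMs are up and passing traffic (e.g. a ping between them), offloading can be confirmed from the compute node. A sketch; datapath flows are only visible while traffic is flowing:

    # Flows currently offloaded to the NIC hardware
    $ ovs-appctl dpctl/dump-flows type=offloaded

    # Flows still handled in software by the kernel datapath
    $ ovs-appctl dpctl/dump-flows type=ovs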

Configuration

Starting from a fresh bare metal server, install and configure the undercloud, as instructed in Deploying SR-IOV Technologies in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Make sure you review the following chapters before resuming:

  • Section 6.4: Configuring OVS Hardware Offload

  • Section 6.5: Tuning Examples for OVS Hardware Offload

  • Section 6.6: Components of OVS Hardware Offload

  • Section 6.7: Troubleshooting OVS Hardware Offload

  • Section 6.8: Debugging HW Offload Flow

  1. Use the ovs-hw-offload.yaml file from the following location:

    /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml

    Configure it for an OVN setup in the following way:

    parameter_defaults:
      NeutronOVSFirewallDriver: openvswitch
      NeutronFlatNetworks: datacentre
      NeutronNetworkType:
        - geneve
        - flat
      NeutronTunnelTypes: 'geneve'
      NovaPCIPassthrough:
        - devname: "enp3s0f0"
          physical_network: null
      NovaSchedulerDefaultFilters:
        - AvailabilityZoneFilter
        - ComputeFilter
        - ComputeCapabilitiesFilter
        - ImagePropertiesFilter
        - ServerGroupAntiAffinityFilter
        - ServerGroupAffinityFilter
        - PciPassthroughFilter
        - NUMATopologyFilter
      NovaSchedulerAvailableFilters:
        - nova.scheduler.filters.all_filters
        - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
      ComputeSriovParameters:
        NeutronBridgeMappings:
          - datacentre:br-ex
        TunedProfileName: "throughput-performance"
        KernelArgs: "intel_iommu=on iommu=pt"
        OvsHwOffload: True

  2. Configure the interface names in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/control.yaml file, by adding the following code to move the tenant network from a VLAN on a bridge to a separate interface:

    - type: interface
      name: <interface_name>
      addresses:
      - ip_netmask:
          get_param: TenantIpSubnet

  3. Configure the PF interface in switchdev mode in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml file, by adding the following code:

    - type: sriov_pf
      name: enp3s0f0
      link_mode: switchdev
      numvfs: 16
      promisc: true
      use_dhcp: false

  4. Create a new role for the compute node, and change it to ComputeSriov:

    $ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov

  5. Update the ~/cloud-names.yaml file accordingly. You may use the following example:

    parameter_defaults:
      ComputeSriovCount: 2
      OvercloudComputeSriovFlavor: compute
      ControllerCount: 3
      OvercloudControllerFlavor: control

  6. Assign the compute.yaml file to the ComputeSriov role. Update the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file, by adding the following line:

    OS::TripleO::ComputeSriov::Net::SoftwareConfig: ../network/config/single-nic-vlans/compute.yaml

Deploying the Overcloud

Deploy the overcloud using the appropriate templates and YAML files from /usr/share/openstack-tripleo-heat-templates/, as shown in the following example:

openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates/ \
  --libvirt-type kvm \
  -r /home/stack/roles_data.yaml \
  --timeout 240 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
  --validation-warnings-fatal \
  -e /home/stack/cloud-names.yaml \
  -e /home/stack/overcloud_storage_params.yaml \
  -e /home/stack/containers-prepare-parameter.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e /home/stack/overcloud-selinux-config.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml
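
Once the deployment completes, the kernel arguments and tuned profile requested in ovs-hw-offload.yaml can be verified on a compute node. A sketch:

    # The kernel command line should include the IOMMU settings from KernelArgs
    $ cat /proc/cmdline

    # The active tuned profile should match TunedProfileName
    $ tuned-adm active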

Booting the VM

To boot the VM on the undercloud machine, perform the following steps:

  1. Load the overcloudrc configuration:

    $ source ./overcloudrc

  2. Create a flavor:

    $ openstack flavor create m1.small --id 3 --ram 2048 --disk 20 --vcpus 1

  3. Create a CirrOS image:

    $ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox

  4. Create a network:

    $ openstack network create private --provider-network-type geneve --share

  5. Create a subnet:

    $ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24

  6. Boot a VM on the overcloud, using the following commands after creating the port accordingly (a port-verification sketch follows these steps):

  • For the first VM:

    $ direct_port1=`openstack port create direct1 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port1 vm1

  • For the second VM:

    $ direct_port2=`openstack port create direct2 --vnic-type=direct --network private --binding-profile '{"capabilities":["switchdev"]}' | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor 3 --image mellanox --nic port-id=$direct_port2 vm2
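
To confirm that the ports were created with the switchdev capability, the binding profile can be inspected. A sketch:

    # The binding_profile field should contain {"capabilities": ["switchdev"]}
    $ openstack port show direct1 -c binding_profile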

Configuration

Starting from a fresh bare metal server, install and configure the undercloud, as instructed in Deploying SR-IOV Technologies in the latest Red Hat Network Functions Virtualization Planning and Configuration Guide. Make sure you review the following chapters before resuming:

  • Section 6.4: Configuring OVS Hardware Offload

  • Section 6.5: Tuning Examples for OVS Hardware Offload

  • Section 6.6: Components of OVS Hardware Offload

  • Section 6.7: Troubleshooting OVS Hardware Offload

  • Section 6.8: Debugging HW Offload Flow

  1. Use the ovs-hw-offload.yaml file from the following location:

    /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml

  2. Configure it for an OVN setup in the following way:

    parameter_defaults:
      NeutronFlatNetworks: datacentre
      NeutronNetworkType:
        - geneve
        - flat
      NeutronTunnelTypes: 'geneve'
      NovaPCIPassthrough:
        - devname: "enp3s0f0"
          physical_network: null
      NovaSchedulerDefaultFilters:
        - AvailabilityZoneFilter
        - ComputeFilter
        - ComputeCapabilitiesFilter
        - ImagePropertiesFilter
        - ServerGroupAntiAffinityFilter
        - ServerGroupAffinityFilter
        - PciPassthroughFilter
        - NUMATopologyFilter
      NovaSchedulerAvailableFilters:
        - nova.scheduler.filters.all_filters
        - nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
      ComputeSriovParameters:
        NeutronBridgeMappings:
          - datacentre:br-ex
        TunedProfileName: "throughput-performance"
        KernelArgs: "intel_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=1024"
        OvsHwOffload: True

  3. Configure the interface names in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/control.yaml file, by adding the following code to move the tenant network from a VLAN on a bridge to a separate interface:

    - type: interface
      name: <interface_name>
      addresses:
      - ip_netmask:
          get_param: TenantIpSubnet

  4. Configure the PF interface in switchdev mode in the /usr/share/openstack-tripleo-heat-templates/network/config/single-nic-vlans/compute.yaml file, by adding the following code:

    - type: sriov_pf
      name: enp3s0f0
      link_mode: switchdev
      numvfs: 64
      promisc: true
      use_dhcp: false

  5. Create a new role for the compute node, and change it to ComputeSriov:

    $ openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov

  6. Update the ~/cloud-names.yaml file accordingly. You may refer to the following example:

    parameter_defaults:
      ComputeSriovCount: 2
      OvercloudComputeSriovFlavor: compute
      ControllerCount: 3
      OvercloudControllerFlavor: control

  7. Assign the compute.yaml file to the ComputeSriov role. Update the /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml file, by adding the following line:

    OS::TripleO::ComputeSriov::Net::SoftwareConfig: ../network/config/single-nic-vlans/compute.yaml

Customizing the Overcloud Images with MOFED

To customize the overcloud images with MOFED, run:

$ sudo su
$ yum install -y libguestfs-tools
$ export LIBGUESTFS_BACKEND=direct
$ cd /home/stack/images/
$ wget https://www.mellanox.com/downloads/ofed/MLNX_OFED-5.1-2.3.7.1/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz
$ virt-copy-in -a overcloud-full.qcow2 MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz /tmp
$ virt-customize -v -a overcloud-full.qcow2 --run-command 'yum install pciutils tcl tcsh pkgconf-pkg-config gcc-gfortran make tk -y'
$ virt-customize -v -a overcloud-full.qcow2 --run-command 'cd /tmp && tar -xf MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz && rm -rf /tmp/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64.tgz'
$ virt-customize -v -a overcloud-full.qcow2 --run-command '/tmp/MLNX_OFED_LINUX-5.1-2.3.7.1-rhel8.2-x86_64/mlnxofedinstall --force'
$ virt-customize -v -a overcloud-full.qcow2 --run-command '/etc/init.d/openibd restart'
$ virt-customize -a overcloud-full.qcow2 --selinux-relabel
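
To confirm that MLNX_OFED was actually installed into the image, its contents can be listed with the libguestfs tooling already in use here. A sketch, assuming OFED installs its utilities under /usr/bin:

    # The OFED utilities (e.g. ofed_info) should now be present in the image
    $ virt-ls -a overcloud-full.qcow2 /usr/bin/ | grep -i ofed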

Customizing the Overcloud Image with os-net-config

For vDPA, all VFs must be bound before starting the OVS container. For this purpose, a patch to os-net-config is required. Since it has not been merged yet, apply it manually on the overcloud image:

$ cat << EOF > os-net-config-sriov-bind
#!/bin/python3
import sys

from os_net_config.sriov_bind_config import main

if __name__ == "__main__":
    sys.exit(main())
EOF
$ chmod 755 os-net-config-sriov-bind
$ virt-copy-in -a overcloud-full.qcow2 os-net-config-sriov-bind /usr/bin/
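
A quick check that the helper script landed in the image; a sketch:

    # Print the script back out of the image to confirm the copy succeeded
    $ virt-cat -a overcloud-full.qcow2 /usr/bin/os-net-config-sriov-bind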

Deploying the Overcloud

  • Deploy the overcloud, using the appropriate templates and YAML files from /usr/share/openstack-tripleo-heat-templates/, as shown in the following example:

    openstack overcloud deploy \
      --templates /usr/share/openstack-tripleo-heat-templates \
      --libvirt-type kvm \
      -r /home/stack/roles_data.yaml \
      --timeout 240 \
      -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
      --validation-warnings-fatal \
      -e /home/stack/cloud-names.yaml \
      -e /home/stack/overcloud_storage_params.yaml \
      -e /home/stack/containers-prepare-parameter.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
      -e /home/stack/network-environment.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
      -e /home/stack/overcloud-selinux-config.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml

    Apply the OpenStack patches:

  • Perform the following on all compute nodes:

    $ echo 'group = "hugetlbfs"' >> /var/lib/config-data/puppet-generated/nova_libvirt/etc/libvirt/qemu.conf
    $ podman exec -it -u root nova_compute bash
    $ git clone https://github.com/Mellanox/containerized-ovs-forwarder.git
    $ yum install patch -y
    $ cd /usr/lib/python3.6/site-packages/nova
    $ patch -p2 < /containerized-ovs-forwarder/openstack/ussuri/ovs-kernel/nova_os_vif_util.patch
    $ cp -a /containerized-ovs-forwarder/python/ovs_module /usr/lib/python3.6/site-packages/
    $ cd /usr/lib/python3.6/site-packages/vif_plug_ovs
    $ patch < /containerized-ovs-forwarder/openstack/ussuri/ovs-kernel/os-vif.patch
    $ exit
    $ podman restart nova_compute nova_libvirt
    $ mkdir -p /var/lib/vhost_sockets/
    $ chmod 775 /var/lib/vhost_sockets/
    $ chown qemu:hugetlbfs /var/lib/vhost_sockets/

  • Perform the following on all controller nodes:

    $ podman exec -it -u root neutron_api bash
    $ git clone https://github.com/Mellanox/containerized-ovs-forwarder.git
    $ yum install patch -y
    $ cd /usr/lib/python3.6/site-packages/neutron
    $ patch -p1 < /containerized-ovs-forwarder/openstack/ussuri/ovs-kernel/networking-ovn.patch
    $ exit
    $ podman restart neutron_api

Preparing the OVS-Forwarder Container

  • To prepare the OVS-forwarder container, perform the following on all compute nodes:

  1. Pull the ovs-forwarder image from docker.io with a specific tag:

    $ podman pull mellanox/ovs-forwarder:51237

  2. Create the ovs-forwarder container with the PCI address of the SR-IOV PF and the VF range, passed as --pci-args <pci_address> <vfs_range>:

    $ mkdir -p /forwarder/var/run/openvswitch/
    $ podman container create \
      --privileged \
      --network host \
      --name ovs_forwarder_container \
      --restart always \
      -v /dev/hugepages:/dev/hugepages \
      -v /var/lib/vhost_sockets/:/var/lib/vhost_sockets/ \
      -v /forwarder/var/run/openvswitch/:/var/run/openvswitch/ \
      ovs-forwarder:51237 \
      --pci-args 0000:02:00.0 0-3

    Note: In the case of VF-LAG, pass the PCI address and VF range of the second port as well, for example:

    --pci-args 0000:02:00.0 0-3 --pci-args 0000:02:00.1 0-3

  3. Start the ovs-forwarder container (a status-check sketch follows these steps):

    $ podman start ovs_forwarder_container
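
To verify the forwarder is running and set to restart with the host, check the container status. A sketch:

    # The container should be listed with status "Up"
    $ podman ps --filter name=ovs_forwarder_container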

Booting the VM

To boot the VM on the undercloud machine, perform the following steps:

  1. Load the overcloudrc configuration:

    $ source ./overcloudrc

  2. Create a flavor:

    $ openstack flavor create --ram 1024 --vcpus 1 --property dpdk=true --property hw:mem_page_size=1GB --public dpdk.1g

  3. Create a CirrOS image:

    $ openstack image create --public --file cirros-mellanox_eth.img --disk-format qcow2 --container-format bare mellanox

  4. Create a network:

    $ openstack network create private --provider-network-type geneve --share

  5. Create a subnet:

    $ openstack subnet create private_subnet --dhcp --network private --subnet-range 11.11.11.0/24

  6. Boot a VM on the overcloud, using the following commands after creating the vDPA port accordingly (a socket-verification sketch follows these steps):

  • For the first VM:

    $ virtio_port0=`openstack port create virtio_port --vnic-type=virtio-forwarder --network private | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor dpdk.1g --image mellanox --nic port-id=$virtio_port0 --availability-zone nova:overcloud-computesriov-0.localdomain vm0

  • For the second VM:

    $ virtio_port1=`openstack port create virtio_port --vnic-type=virtio-forwarder --network private | grep ' id ' | awk '{print $4}'`
    $ openstack server create --flavor dpdk.1g --image mellanox --nic port-id=$virtio_port1 --availability-zone nova:overcloud-computesriov-0.localdomain vm1
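
After the VMs boot, each virtio-forwarder port should appear as a vhost-user socket in the directory created earlier on the compute node. A sketch:

    # One socket per vDPA port is expected here
    $ ls -l /var/lib/vhost_sockets/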

© Copyright 2023, NVIDIA. Last updated on May 23, 2023.