NVIDIA Support for TripleO Ussuri Application Notes

Overview

As a leading supplier of end-to-end InfiniBand and Ethernet network and storage solutions, Mellanox is working with the OpenStack community to develop advanced and innovative network and storage capabilities. Mellanox is also working closely with the leading OpenStack distribution vendors to integrate and automate the provisioning of these capabilities in their commercial OpenStack distributions.

Document Revision History

Revision | Date              | Description
1.0      | November 22, 2020 | Initial version of this document

Definitions, Acronyms, and Abbreviations

SR-IOV

Single Root I/O Virtualization (SR-IOV) is a specification that allows a single physical PCI device to be presented to multiple Virtual Machines (VMs), each of which is given its own virtual function. The specification defines virtual functions (VFs) for the VMs and a physical function (PF) for the hypervisor. Using SR-IOV in a cloud infrastructure helps achieve higher performance, since VM traffic bypasses the TCP/IP stack in the kernel (a minimal sketch of enabling VFs through sysfs follows these definitions).

RoCE

RDMA over Converged Ethernet (RoCE) is a standard protocol that enables RDMA's efficient data transfer over Ethernet networks, allowing transport offload to a hardware RDMA engine for superior performance. RoCE is defined in the InfiniBand Trade Association (IBTA) standard. RoCE makes use of UDP encapsulation, allowing it to traverse Layer 3 networks. RDMA is a key capability natively used by the InfiniBand interconnect technology. Both InfiniBand and Ethernet RoCE share a common user API but have different physical and link layers.

Open vSwitch (OVS)

Open vSwitch (OVS) allows Virtual Machines (VMs) to communicate with each other and with the outside world. OVS traditionally resides in the hypervisor, and switching is based on twelve-tuple matching on flows. The OVS software-based solution is CPU intensive, which affects system performance and prevents full utilization of the available bandwidth.

OVS-DPDK

OVS-DPDK extends Open vSwitch performance by interconnecting with the Mellanox DPDK Poll Mode Driver (PMD). It accelerates the hypervisor networking layer for lower latency and a higher packet rate, while maintaining Open vSwitch data-plane networking characteristics (a minimal bring-up sketch follows these definitions).

ASAP2

Mellanox ASAP2 Accelerated Switching And Packet Processing® technology offloads OVS by handling the OVS data plane in Mellanox ConnectX-5 (and later) NIC hardware (the Mellanox Embedded Switch, or eSwitch), while leaving the OVS control plane unmodified. The result is significantly higher OVS performance without the associated CPU load.

The actions currently supported by ASAP2 include packet parsing and matching, forward and drop, along with VLAN push/pop and VXLAN encapsulation/decapsulation (a minimal sketch of enabling OVS hardware offload follows these definitions).

vDPA

Virtual Data Path Acceleration (vDPA) is an approach to standardizing the NIC SR-IOV data plane using the virtio ring layout. It places a single standard virtio driver in the guest, decoupled from any vendor implementation, while adding a generic control plane and software infrastructure to support it.

InfiniBand

A computer networking communications standard used in high-performance computing that features very high throughput and very low latency.

IPoIB

IP over InfiniBand (IPoIB) is an Upper Layer Protocol (ULP) driver that implements a network interface over InfiniBand.
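
To make the SR-IOV entry above concrete, the following Python sketch enables a number of virtual functions on a physical-function port through the kernel's standard sysfs attributes (sriov_totalvfs and sriov_numvfs). The interface name and VF count are illustrative assumptions; in a TripleO deployment these values would normally come from the deployment templates rather than a hand-run script.

```python
#!/usr/bin/env python3
"""Minimal sketch: enable SR-IOV virtual functions (VFs) on a NIC via sysfs.

The interface name and VF count are illustrative assumptions; adjust them
for the actual deployment. Requires root privileges.
"""
from pathlib import Path

IFACE = "ens1f0"  # assumed name of the SR-IOV capable physical function (PF) port
NUM_VFS = 4       # assumed number of VFs to create


def enable_vfs(iface: str, num_vfs: int) -> None:
    # /sys/class/net/<iface>/device is a symlink to the PF's PCI device directory.
    pci_dev = Path("/sys/class/net") / iface / "device"

    total = int((pci_dev / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"{iface} supports at most {total} VFs")

    # Writing to sriov_numvfs asks the driver to instantiate that many VFs.
    # The value must be reset to 0 before it can be changed to a new non-zero value.
    numvfs = pci_dev / "sriov_numvfs"
    if int(numvfs.read_text()) != 0:
        numvfs.write_text("0")
    numvfs.write_text(str(num_vfs))
    print(f"Enabled {num_vfs} VFs on {iface}")


if __name__ == "__main__":
    enable_vfs(IFACE, NUM_VFS)
```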

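For the OVS-DPDK entry above, the following sketch shows the standard Open vSwitch knobs for switching a node to the userspace (netdev) datapath with DPDK: enabling dpdk-init, reserving socket memory, and attaching a physical port by PCI address. The bridge name, port name, PCI address, and memory sizes are illustrative assumptions; in a TripleO deployment these values are rendered from the heat templates, so this is a minimal sketch rather than the generated configuration.

```python
#!/usr/bin/env python3
"""Minimal sketch: bring up an OVS-DPDK datapath on a compute node.

Bridge name, port name, PCI address, and socket memory are illustrative
assumptions. Requires root privileges and an OVS build with DPDK support.
"""
import subprocess
from typing import List

PCI_ADDR = "0000:03:00.0"  # assumed PCI address of the DPDK-bound NIC port
BRIDGE = "br-phy"          # assumed bridge name for the physical network


def run(cmd: List[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def setup_ovs_dpdk() -> None:
    # Initialize DPDK inside ovs-vswitchd and reserve hugepage memory per NUMA socket.
    run(["ovs-vsctl", "--no-wait", "set", "Open_vSwitch", ".",
         "other_config:dpdk-init=true"])
    run(["ovs-vsctl", "--no-wait", "set", "Open_vSwitch", ".",
         "other_config:dpdk-socket-mem=1024,1024"])
    run(["systemctl", "restart", "openvswitch"])

    # Create a userspace-datapath bridge and attach the physical port as a DPDK port.
    run(["ovs-vsctl", "add-br", BRIDGE, "--",
         "set", "bridge", BRIDGE, "datapath_type=netdev"])
    run(["ovs-vsctl", "add-port", BRIDGE, "dpdk0", "--",
         "set", "Interface", "dpdk0", "type=dpdk",
         f"options:dpdk-devargs={PCI_ADDR}"])


if __name__ == "__main__":
    setup_ovs_dpdk()
```
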
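For the ASAP2 entry above, the sketch below shows the two switches that enable OVS hardware offload on a compute node: TC hardware offload on the physical-function interface via ethtool, and the other_config:hw-offload=true setting in Open vSwitch, followed by a service restart. The interface name is an illustrative assumption; again, TripleO normally applies these settings through its templates.

```python
#!/usr/bin/env python3
"""Minimal sketch: enable OVS hardware offload (ASAP2) on a compute node.

The interface name is an illustrative assumption. Requires root privileges.
"""
import subprocess
from typing import List

PF_IFACE = "ens1f0"  # assumed physical-function interface backing the offloaded ports


def run(cmd: List[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def enable_hw_offload(iface: str) -> None:
    # Enable TC hardware offload on the NIC so kernel flow rules
    # can be pushed down to the embedded switch (eSwitch).
    run(["ethtool", "-K", iface, "hw-tc-offload", "on"])

    # Tell Open vSwitch to offload datapath flows to hardware.
    run(["ovs-vsctl", "set", "Open_vSwitch", ".", "other_config:hw-offload=true"])

    # The hw-offload setting takes effect after Open vSwitch is restarted.
    run(["systemctl", "restart", "openvswitch"])


if __name__ == "__main__":
    enable_hw_offload(PF_IFACE)
```

Once offload is active, offloaded datapath flows can be inspected with ovs-appctl dpctl/dump-flows type=offloaded.
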
© Copyright 2023, NVIDIA. Last updated on Sep 5, 2023.