PCI Express (PCIe)
Uses PCIe Gen 3.0 (8GT/s) or Gen 4.0 (16GT/s) through x16 lanes (TBD: two B2B FCI x8 connectors). Gen 1.1 and 2.0 compatible.
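On a Linux host, the link speed and width that were actually negotiated can be read back from standard PCI sysfs attributes. A minimal sketch, assuming a hypothetical PCI address for the adapter:

    from pathlib import Path

    # Hypothetical PCI address of the ConnectX-5 adapter; replace with the
    # address reported under /sys/bus/pci/devices/ on your system.
    PCI_ADDR = "0000:3b:00.0"

    dev = Path("/sys/bus/pci/devices") / PCI_ADDR
    for attr in ("current_link_speed", "current_link_width",
                 "max_link_speed", "max_link_width"):
        # Each attribute is a small text file, e.g. "8.0 GT/s" or "16".
        print(attr, "=", (dev / attr).read_text().strip())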

100Gb/s Virtual Protocol Interconnect (VPI) Adapter
ConnectX-5 offers the highest throughput VPI adapter, supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet and enabling any standard networking, clustering, or storage to operate seamlessly over any converged network leveraging a consolidated software stack.
InfiniBand EDR
A standard InfiniBand data rate, where each lane of a 4X port runs a bit rate of 25.78125Gb/s with 64b/66b encoding, resulting in an effective bandwidth of 100Gb/s.
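The 100Gb/s figure follows directly from the lane rate and the encoding overhead; a quick arithmetic check in Python:

    # EDR: 4 lanes x 25.78125 Gb/s raw signaling, 64b/66b line encoding
    lanes = 4
    lane_rate_gbps = 25.78125
    effective_gbps = lanes * lane_rate_gbps * 64 / 66   # encoding efficiency 64/66
    print(effective_gbps)                                # 100.0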
Up to 100 Gigabit Ethernet

NVIDIA adapters comply with the following IEEE 802.3 standards:

• 100GbE/ 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
• IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
• IEEE 802.3by, Ethernet Consortium 25, 50 Gigabit Ethernet, supporting all FEC modes
• IEEE 802.3ba 40 Gigabit Ethernet
• IEEE 802.3by 25 Gigabit Ethernet
• IEEE 802.3ae 10 Gigabit Ethernet
• IEEE 802.3ap based auto-negotiation and KR startup
• Proprietary Ethernet protocols (20/40GBASE-R2, 50GBASE-R4)
• IEEE 802.3ad, 802.1AX Link Aggregation
• IEEE 802.1Q, 802.1P VLAN tags and priority
• IEEE 802.1Qau (QCN), Congestion Notification
• IEEE 802.1Qaz (ETS)
• IEEE 802.1Qbb (PFC)
• IEEE 802.1Qbg
• IEEE 1588v2
• Jumbo frame support (9.6KB)

• SPI - includes a 128Mb SPI Flash device (W25Q128FVSIG by WINBOND-NUVOTON).
• FRU EEPROM capacity is 2Kb.
Overlay Networks
In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. ConnectX-5 effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocol.
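For reference, the VXLAN encapsulation handled by these engines is an 8-byte header carried over UDP (RFC 7348). A minimal software sketch of building that header, using a hypothetical VNI value:

    import struct

    def vxlan_header(vni: int) -> bytes:
        """Build the 8-byte VXLAN header from RFC 7348: flags byte (0x08 =
        valid-VNI bit), 3 reserved bytes, 24-bit VNI, 1 reserved byte."""
        if not 0 <= vni < 2 ** 24:
            raise ValueError("VNI must fit in 24 bits")
        return struct.pack("!II", 0x08 << 24, vni << 8)

    # Example: VNI 5001 (hypothetical tenant network identifier)
    print(vxlan_header(5001).hex())   # -> '0800000000138900'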
RDMA and RDMA over Converged Ethernet (RoCE)
ConnectX-5, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low latency and high performance over InfiniBand and Ethernet networks. Leveraging Data Center Bridging (DCB) capabilities as well as ConnectX-5 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
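On Linux, a ConnectX-5 port appears as an RDMA device whether it runs native InfiniBand or RoCE; the standard sysfs link_layer attribute distinguishes the two. A small sketch that walks the installed RDMA devices:

    from pathlib import Path

    # Report whether each RDMA port runs native InfiniBand or RoCE
    # (RoCE ports expose link_layer == "Ethernet").
    for dev in sorted(Path("/sys/class/infiniband").iterdir()):
        for port in sorted((dev / "ports").iterdir()):
            link_layer = (port / "link_layer").read_text().strip()
            kind = "RoCE" if link_layer == "Ethernet" else "native InfiniBand"
            print(f"{dev.name} port {port.name}: {link_layer} ({kind})")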
NVIDIA PeerDirect™
PeerDirect™ communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-5 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
CPU Offload
Adapter functionality that reduces CPU overhead, leaving more CPU cycles available for computation tasks.
• Open vSwitch (OVS) offload using ASAP2™
• Flexible match-action flow tables (see the flow-rule sketch after this list)
• Tunneling encapsulation/decapsulation
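To illustrate the match-action model that ASAP2 accelerates, the sketch below installs a trivial OpenFlow rule with the standard ovs-ofctl tool. The bridge name and port numbers are hypothetical, and whether a given rule is actually offloaded to the adapter depends on the OVS and driver configuration:

    import subprocess

    # Match packets arriving on OpenFlow port 1 and forward them to port 2.
    # ovs-ofctl is part of a standard Open vSwitch installation.
    rule = "in_port=1,actions=output:2"
    subprocess.run(["ovs-ofctl", "add-flow", "br0", rule], check=True)

    # Dump the flow table to confirm the rule was installed.
    subprocess.run(["ovs-ofctl", "dump-flows", "br0"], check=True)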
Quality of Service (QoS)
Support for port-based Quality of Service, enabling various application requirements for latency and SLA.
Hardware-based I/O Virtualization
ConnectX-5 provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server.
Storage Acceleration
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage RDMA for high-performance storage access.
• NVMe over Fabrics offloads for the target machine (see the configfs sketch after this list)
• Erasure Coding
• T10-DIF Signature Handover
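On a Linux target, NVMe over Fabrics is configured through the kernel nvmet configfs tree; a rough sketch of a minimal RDMA-transport subsystem, with hypothetical values for the NQN, backing device, and port address (offload-specific tuning is driver dependent and not shown):

    from pathlib import Path

    NVMET = Path("/sys/kernel/config/nvmet")
    NQN = "nqn.2016-06.io.example:subsys1"      # hypothetical subsystem NQN

    # 1. Create the subsystem and allow any host to connect (lab setup only).
    subsys = NVMET / "subsystems" / NQN
    subsys.mkdir(parents=True)
    (subsys / "attr_allow_any_host").write_text("1")

    # 2. Add a namespace backed by a local NVMe drive (hypothetical path).
    ns = subsys / "namespaces" / "1"
    ns.mkdir(parents=True)
    (ns / "device_path").write_text("/dev/nvme0n1")
    (ns / "enable").write_text("1")

    # 3. Create an RDMA port on the ConnectX-5 interface address.
    port = NVMET / "ports" / "1"
    port.mkdir(parents=True)
    (port / "addr_trtype").write_text("rdma")
    (port / "addr_adrfam").write_text("ipv4")
    (port / "addr_traddr").write_text("192.168.1.10")   # hypothetical IP
    (port / "addr_trsvcid").write_text("4420")

    # 4. Expose the subsystem on the port.
    (port / "subsystems" / NQN).symlink_to(subsys)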
SR-IOV
ConnectX-5 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VM) within the server.
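Virtual Functions are typically instantiated through the standard Linux sriov_numvfs sysfs interface. A minimal sketch, assuming a hypothetical netdev name and that SR-IOV is already enabled in firmware and BIOS:

    from pathlib import Path

    IFACE = "enp59s0f0"          # hypothetical ConnectX-5 netdev name
    NUM_VFS = 4                  # number of Virtual Functions to create

    dev = Path("/sys/class/net") / IFACE / "device"
    total = int((dev / "sriov_totalvfs").read_text())
    print(f"{IFACE}: up to {total} VFs supported")

    # Writing to sriov_numvfs creates the VFs (writing 0 destroys them).
    (dev / "sriov_numvfs").write_text(str(NUM_VFS))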
NC-SI
The adapter supports a Network Controller Sideband Interface (NC-SI), MCTP over SMBus, and MCTP over PCIe Baseboard Management Controller interfaces.
High-Performance Accelerations
• Tag Matching and Rendezvous Offloads
• Adaptive Routing on Reliable Transport
• Burst Buffer Offloads for Background Checkpointing
Wake-on-LAN (WoL)
Supported
Reset-on-LAN (RoL)
Supported
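Wake-on-LAN can usually be enabled per port with the standard ethtool utility (magic-packet mode). A small sketch with a hypothetical interface name:

    import subprocess

    IFACE = "enp59s0f0"          # hypothetical interface name

    # Enable magic-packet Wake-on-LAN, then print the current settings
    # (the output includes a "Wake-on: g" line when enabled).
    subprocess.run(["ethtool", "-s", IFACE, "wol", "g"], check=True)
    subprocess.run(["ethtool", IFACE], check=True)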