Clusters built from commodity servers and storage systems are seeing widespread deployment in large and growing markets such as high-performance computing, data warehousing, online transaction processing, financial services, and large-scale Web 2.0 deployments. To enable transparent distributed computing with maximum efficiency, applications in these markets require the highest I/O bandwidth and the lowest possible latency. These requirements are compounded by the need to support a large, interoperable ecosystem of networking, storage, and other applications and interfaces. NVIDIA® offers a robust, full set of protocol software and drivers for FreeBSD for NVIDIA® ConnectX®-4 and later host adapters, supporting Ethernet, InfiniBand, and RoCE.
The driver release introduces the following capabilities:
- Single/Dual port
- Number of RX queues per port scaled to the number of CPUs
- Number of TX queues per port scaled to the number of CPUs
- MSI-X or INTx interrupt modes
- Hardware Tx/Rx checksum calculation
- Large Send Offload (i.e., TCP Segmentation Offload)
- Large Receive Offload
- VLAN Tx/Rx acceleration (Hardware VLAN stripping/insertion)
- ifnet statistics
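As a brief sketch, the hardware offloads listed above can typically be inspected and toggled per interface with the standard FreeBSD ifconfig(8) capability flags, and the ifnet counters are visible through netstat(1). The interface name `mce0` below is an assumption (mlx5en interfaces commonly enumerate as `mceN`); check `ifconfig -l` on your system.

```shell
# Show current capabilities and enabled options on the first port
# ("mce0" is an assumed interface name; verify with `ifconfig -l`)
ifconfig mce0

# Enable hardware Tx/Rx checksum offload, Large Send Offload (TSO),
# Large Receive Offload (LRO), and hardware VLAN tag stripping/insertion
ifconfig mce0 txcsum rxcsum tso lro vlanhwtag

# Disable LRO, e.g., when the host forwards packets between interfaces
ifconfig mce0 -lro

# Display per-interface (ifnet) traffic statistics, including byte counts
netstat -I mce0 -b
```

Offload settings changed this way do not persist across reboots; to make them permanent, add the corresponding options to the interface's `ifconfig_*` line in /etc/rc.conf.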
Further information on this product can be found in the NVIDIA® FreeBSD driver documentation.
Please visit http://www.mellanox.com → Products → Software → InfiniBand (Learn More) → Mellanox for FreeBSD Driver
Document Revision History
For the list of changes made to the User Manual, refer to User Manual Revision History.
For the list of changes made to the Release Notes, refer to Release Notes Revision History.