About This Manual
This User Manual describes the NVIDIA® BlueField® BF1600 Controller Card. It provides details on the interfaces of the board, its specifications, the software and firmware required to operate the card, hardware installation, driver installation, and bring-up instructions.
Overview of Document Content
|Section|Description|
|---|---|
|Introduction|Provides a general overview of the BlueField BF1600 Controller Card and discusses its benefits and features.|
|Supported Interfaces|Describes all supported interfaces.|
|Pin Description|Provides pinout descriptions for the card's interfaces.|
|Configuration Scenarios|Provides examples of different configuration scenarios for NVMe SSD connectivity.|
|Cables and Cabling Configurations|Describes the required cables and cabling configurations.|
|Thermal Sensors|Describes the thermal sensors available on the BlueField SoC.|
|Hardware Installation|Describes the procedures for installing and uninstalling the controller card.|
|Bring-Up and Driver Installation|Describes driver installation and bring-up instructions.|
|Troubleshooting|Describes potential system problems and how to identify and resolve them.|
|Specifications|Lists the physical, electrical, operational, and regulatory specifications of the BlueField BF1600 Controller Card.|
This manual is intended for the installer and user of these cards. It assumes basic familiarity with Ethernet networks and architecture specifications.
Customers who purchased NVIDIA products directly from NVIDIA are invited to contact us through the following methods:

- Customers who purchased Mellanox M-1 Global Support Services should refer to their contract for details regarding Technical Support.
- Customers who purchased Mellanox products through a Mellanox-approved reseller should first seek assistance through their reseller.
|Document|Description|
|---|---|
|IEEE Std 802.3 Specification|IEEE Ethernet specification, available at http://standards.ieee.org|
|PCI Express Specifications|Industry-standard PCI Express Base and Card Electromechanical Specifications, available at https://pcisig.com/specifications|
|Mellanox LinkX Interconnect Solutions|Mellanox LinkX InfiniBand cables and transceivers are designed to maximize the performance of high-performance computing networks, which require high-bandwidth, low-latency connections between compute nodes and switch nodes. Mellanox offers one of the industry's broadest portfolios of QDR/FDR10 (40Gb/s), FDR (56Gb/s), EDR/HDR100 (100Gb/s), and HDR (200Gb/s) cables, including Direct Attach Copper cables (DACs), copper splitter cables, Active Optical Cables (AOCs), and transceivers, in a wide range of lengths from 0.5m to 10km. In addition to meeting IBTA standards, Mellanox tests every product in an end-to-end environment to ensure a Bit Error Rate of less than 1E-15. Read more at https://www.nvidia.com/en-us/networking/interconnect/|
When discussing memory sizes, MB and MBytes are used in this document to mean size in MegaBytes, while Mb and Mbits (lowercase "b") indicate size in MegaBits. In this document, PCIe is used to mean PCI Express.
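As a minimal illustration of this convention (the function name is ours, not part of the manual), a megabyte is eight times larger than a megabit, since one byte is eight bits:

```python
def mbytes_to_mbits(mbytes: float) -> float:
    """Convert a size in MegaBytes (MB) to MegaBits (Mb).

    1 Byte = 8 bits, so the conversion is a simple multiplication.
    """
    return mbytes * 8

print(mbytes_to_mbits(16))  # 16 MBytes = 128 Mbits
```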
A list of the changes made to this document is provided in the Document Revision History.