
NVLink4 Glossary

NVLink4 Terminology and Acronyms

The following table lists terms and acronyms used throughout the NVLink4 user documentation.

Access NVLink: An NVLink between a GPU and an NVSwitch.
Compute node: A server system with a DGX A100 baseboard.
FM: Fabric Manager.
GFM: Global Fabric Manager. There is one GFM per NVLink domain (cluster).
L1 NVSwitch: First-level NVSwitch; for example, the NVSwitches on compute nodes.
L2 NVSwitch: Second-level NVSwitch; the NVSwitches in the NVLink rack switch are L2.
LFM: Local Fabric Manager. Runs on each compute node to manage its NVSwitches. Only the GFM communicates directly with LFMs.
NVLink domain (cluster): A set of nodes that can communicate over NVLink.
NVOS: NVIDIA Networking OS, formerly known as MLNX-OS.
OSFP port/NVLink: Octal Small Form Factor Pluggable (OSFP)-based NVLink ports attached to an NVIDIA GPU baseboard.
Rack switch node: A rack switch with two NVSwitch devices and multiple OSFP ports.
Trunk NVLink: An NVLink between two NVSwitch devices.

For more information, refer to the Fabric Manager User Guide.