Introduction

NVIDIA Spectrum®-based 1U switch systems are an ideal spine and Top of Rack (ToR) solution, allowing maximum flexibility, with port speeds spanning from 10Gb/s to 100Gb/s per port and port density that enables full rack connectivity to any server at any speed. The uplink ports allow a variety of blocking ratios to suit any application requirement. Powered by the NVIDIA Spectrum ASIC, the systems deliver very high switching and processing capacity in a compact 1U form factor.

In keeping with the NVIDIA tradition of record-setting switch systems, the NVIDIA Spectrum-based systems introduce the world’s lowest latency for 100GbE switching and routing elements, while also offering the lowest power consumption on the market. They enable the use of 10, 25, 40, 50, and 100GbE at large scale without changes to power infrastructure facilities.

The NVIDIA Spectrum-based 1U switch systems are part of NVIDIA’s complete end-to-end solution, which provides 10GbE through 100GbE interconnectivity within the data center. Other devices in this solution include ConnectX®-4 based network interface cards and LinkX® copper or fiber cabling/transceivers. The solution is topped with NEO, a management application that removes some of the major obstacles encountered when deploying a network; NEO enables a fully certified and interoperable design and speeds up time to service and return on investment (ROI). The systems also introduce hardware capabilities for multiple tunneling protocols that increase reachability and scalability for today’s data centers. Implementing MPLS, NVGRE, and VXLAN tunneling encapsulations in the data center’s network layer allows a tunnel to be terminated by the network, in addition to termination on the server endpoint, giving greater flexibility.
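
To make the tunnel-termination point concrete, the sketch below shows one way a VXLAN endpoint could be created on a Linux-based network OS (such as the Cumulus Linux or SONiC options mentioned later in this section), where front-panel ports appear as ordinary Linux network interfaces. It uses the pyroute2 library; the interface name, VNI, and multicast group are illustrative assumptions rather than values from this manual, and this is not the switches’ native configuration workflow.

```python
#!/usr/bin/env python3
"""Illustrative sketch only: terminate a VXLAN tunnel on a Linux-based NOS.
Assumes front-panel ports show up as standard Linux netdevs (the name "swp1"
is an assumption) and that the pyroute2 package is installed."""
from pyroute2 import IPRoute

VNI = 100                 # assumed VXLAN Network Identifier
UNDERLAY_IF = "swp1"      # assumed underlay (uplink) port
VTEP_GROUP = "239.1.1.1"  # assumed multicast group for BUM traffic

ipr = IPRoute()

# Create a VXLAN device bound to the underlay interface.
underlay_idx = ipr.link_lookup(ifname=UNDERLAY_IF)[0]
ipr.link(
    "add",
    ifname=f"vxlan{VNI}",
    kind="vxlan",
    vxlan_id=VNI,
    vxlan_link=underlay_idx,
    vxlan_group=VTEP_GROUP,
)

# Bring the tunnel endpoint up; it can then be bridged to access ports so
# that the network, rather than the server endpoint, terminates the tunnel.
vx_idx = ipr.link_lookup(ifname=f"vxlan{VNI}")[0]
ipr.link("set", index=vx_idx, state="up")
```

In practice these systems are configured through the network operating system’s own management interfaces; the sketch only illustrates that tunnel termination can live in the network layer instead of on the server.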

While the NVIDIA Spectrum ASIC provides the thrust and acceleration that power the switch systems, they gain a further dimension of capability from a powerful x86-based processor, which makes them not only the highest-performing switch fabric elements but also able to incorporate a Linux-running server into the same device. This opens up multiple applications that combine high CPU processing power with a best-in-class switching fabric, creating a powerful machine with unique appliance capabilities that can improve numerous network implementation paradigms. The NVIDIA Spectrum-based 1U switch systems support the Open Network Install Environment (ONIE) for zero-touch installation of network operating systems. While all Ethernet systems can be purchased preloaded with NVIDIA Onyx (MLNX-OS), the SN2000 switches are also offered with Cumulus Linux, and they support SONiC (Software for Open Networking in the Cloud) as an alternative operating system.
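
As a small illustration of the "Linux server inside the switch" point above, the following sketch assumes a Linux-based NOS (for example Cumulus Linux or SONiC) that exposes switch ports as standard Linux network interfaces, and reads their byte counters from sysfs directly on the switch’s x86 CPU. Interface naming and the choice to run such a script on the switch are assumptions made for illustration only.

```python
#!/usr/bin/env python3
"""Minimal sketch: read per-port traffic counters directly on the switch CPU.
Assumes a Linux-based NOS where ports appear under /sys/class/net (true for
standard Linux network devices; exact port names vary by NOS)."""
from pathlib import Path

SYSFS_NET = Path("/sys/class/net")

def read_counter(ifname: str, counter: str) -> int:
    # Standard Linux sysfs statistics file, e.g.
    # /sys/class/net/<ifname>/statistics/rx_bytes
    return int((SYSFS_NET / ifname / "statistics" / counter).read_text())

if __name__ == "__main__":
    for ifdir in sorted(SYSFS_NET.iterdir()):
        name = ifdir.name
        try:
            rx = read_counter(name, "rx_bytes")
            tx = read_counter(name, "tx_bytes")
        except OSError:
            continue  # interface disappeared or counters unavailable
        print(f"{name}: rx={rx} bytes, tx={tx} bytes")
```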

For a full list of all available ordering options, see Ordering Information.

Front View

SN2700

[SN2700 front view image]

SN2740

[SN2740 front view image]

SN2410

[SN2410 front view image]

SN2100

[SN2100 front view image]

SN2010

[SN2010 front view image]

Rear View

SN2700 and SN2410

[SN2700 and SN2410 rear view image]

SN2740

[SN2740 rear view image]

SN2100

[SN2100 rear view image]

SN2010

[SN2010 rear view image]

SN2201

[SN2201 front view image]

[SN2201 rear view image]

The table below describes maximum throughput and interface speed per system model.

| System Model | 1GbT RJ45 Interfaces | 1/10/25GbE SFP28 Interfaces* | 40/50/100GbE QSFP28 Interfaces* | Max Throughput |
|---|---|---|---|---|
| SN2700 | 32** | 64 (using QSFP to SFP splitter cables) | 32 | 6.4Tb/s |
| SN2740 | 32** | 64 (using QSFP to SFP splitter cables) | 32 | 6.4Tb/s |
| SN2410 | 48*+8** | 64 total: 48 SFP28 + 16 (using QSFP to SFP splitter cables) | 8 | 4Tb/s |
| SN2100 | 16** | 64 (using QSFP to SFP splitter cables) | 16 (or 32 50GbE interfaces when using QSFP to 2xQSFP splitter cables) | 3.2Tb/s |
| SN2010 | 18*+4** | 34 total: 18 SFP28 + 16 (using QSFP to SFP splitter cables) | 4 (or 8 50GbE interfaces when using QSFP to 2xQSFP splitter cables) | 1.7Tb/s |
| SN2201 | 48+4** | 16 (using QSFP to SFP splitter cables) | 4 (or 8 50GbE interfaces when using QSFP to 2xQSFP splitter cables) | 448Gb/s |

(*) Requires a 1GbT SFP module.

(**) Requires a QSA adapter and a 1GbT SFP module.
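
For readers who want to see where the Max Throughput column comes from, the short sketch below reproduces two of the figures, under the assumption (ours, not stated in the table) that the quoted throughput counts both traffic directions at each port’s maximum speed.

```python
# Rough sanity check of the Max Throughput column, assuming the quoted
# figures count both directions (full duplex) at each port's maximum speed.
def bidirectional_throughput_gbps(ports_by_speed: dict[int, int]) -> int:
    """ports_by_speed maps port speed in Gb/s to the number of such ports."""
    return 2 * sum(speed * count for speed, count in ports_by_speed.items())

# SN2700: 32 QSFP28 ports at 100GbE -> 6400 Gb/s = 6.4Tb/s
print(bidirectional_throughput_gbps({100: 32}))

# SN2010: 18 SFP28 ports at 25GbE + 4 QSFP28 ports at 100GbE -> 1700 Gb/s = 1.7Tb/s
print(bidirectional_throughput_gbps({25: 18, 100: 4}))
```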

The table below lists the various management interfaces, PSUs and fans per system model.

| System Model | USB | MGT | Console | PSUs | Fans |
|---|---|---|---|---|---|
| SN2700 | Rear | Rear (2 ports) | Rear | 2 | 4 |
| SN2740 | Front | Front (1 port) | Front | 2 | 4 |
| SN2410 | Rear | Rear (2 ports) | Rear | 2 | 4 |
| SN2100 | Front (mini USB) | Front (1 port) | Front | 2 (non-replaceable) | 4 (non-replaceable) |
| SN2010 | Front | Front (1 port) | Front | 2 (non-replaceable) | 4 (non-replaceable) |
| SN2201 | Front | Front (1 port) | Front | 2 | 4 |

For a full feature list, please refer to the system’s product brief. Go to https://www.nvidia.com/en-us/networking/. In the main menu, click on Products > Ethernet Switch Systems, and select the desired product page.

The list of certifications (such as EMC, Safety and others) per system for different regions of the world is located on the NVIDIA website at https://www.nvidia.com/en-us/networking/environmental-and-regulatory-compliance/.

© Copyright 2024, NVIDIA. Last updated on Mar 26, 2024.