DGX OS Software Release Notes

Release information for all users of DGX OS software.

Current status, information about included software, and known issues for DGX OS software.

1. DGX OS Releases and Versioning

This information helps you understand the DGX OS release numbering convention and your options to upgrade your DGX OS software.

DGX OS Releases

DGX OS is a customized Linux distribution that is based on Ubuntu Linux. It includes platform-specific configurations, diagnostic and monitoring tools, and the drivers that are required to provide the stable, tested, and supported OS to run AI, machine learning, and analytics applications on DGX systems.

DGX OS is typically released twice a year and is often aligned with major feature enhancements of the components and new hardware introductions.

Release Versions

The DGX OS release numbering convention is MAJOR.MINOR, and it defines the following types of releases:

  • Major releases are typically based on Ubuntu releases, which include new kernel versions and new features that are not always backwards compatible.
    For example:
    • DGX OS 5.x releases are based on Ubuntu 20.04.
    • DGX OS 4.x is based on Ubuntu 18.04.
  • Minor releases include mostly new NVIDIA features and accumulated bug fixes and security updates.

    These releases are incremental and always include all previous software changes.

    • In DGX OS 4 and earlier, minor releases were also typically aligned with NVIDIA Graphics Drivers for Linux releases.
    • In DGX OS 5, you now have the option to install newer NVIDIA Graphics Drivers independently of the DGX OS release.

DGX OS Release Mechanisms

This section provides information about the DGX OS release mechanisms that are available to install or upgrade to the latest version of the DGX OS.

The ISO Image

DGX OS is released as an ISO image that includes the necessary packages and an autonomous installer. Updated versions of the ISO image are also released that:

  • Provide bug fixes and security mitigations.

  • Improve the installation experience.

  • Provide hardware configuration support.

You should always use the latest ISO image, except when you need to restore the system to an earlier version.

Warning: This image allows you to install or reimage a DGX system to restore the system to a default state, but the process erases all of the changes that you applied to the OS.

The Linux Software Repositories

Upgrades to DGX OS are provided through the software repositories. Software repositories are storage locations from which your system retrieves and installs OS updates and applications. The repositories used by DGX OS are hosted by Canonical for the Ubuntu OS and by NVIDIA for DGX-specific software and other NVIDIA software. Each repository is a collection of software packages that are used to install additional software and to update the software on DGX systems.

New versions of these packages, which contain bug fixes and security updates, provide an update to DGX OS releases. The repositories are also updated to include hardware enablement, which might add support for a new system or a new hardware component, such as a network card or disk drive. This update does not affect existing hardware configurations.

System upgrades are cumulative, which means that your systems will always receive the latest version of all of the updated software components.

Important: We recommend that you do not update only individual components.

Before you update a system, refer to the DGX OS Software Release Notes for a list of the available updates. For more information on displaying available updates and upgrade instructions, refer to the DGX OS 5 User Guide.
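
For example, you can display the updates currently available from the configured repositories with standard apt commands (a quick sketch; the DGX OS 5 User Guide describes the supported upgrade procedure):

$ sudo apt update          # refresh the package lists from the Canonical and NVIDIA repositories
$ apt list --upgradable    # list the packages for which newer versions are available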

2. DGX OS 5 Releases

2.1. DGX OS Release 5.0

2.1.1. New Features in DGX OS Release 5.0

Here are the new features in DGX OS 5.0:

  • First release to support all NVIDIA DGX servers as well as DGX Station
  • Ubuntu 20.04 LTS
  • NVIDIA GPU driver Release 450.
  • Supports the CUDA Toolkit up to 11.0 natively, or newer versions via the compatibility module.
  • Added rootfs encryption option, configurable during the re-imaging process.
  • Added option to password protect the GRUB menu, configurable during the first boot process.
  • Updated NVSM
  • Added support for custom drive partitioning
  • Added monitoring of firmware health
  • Includes MOFED 5.1.
  • Updated the default InfiniBand network naming policy.

    The InfiniBand interfaces, which were enumerated as ibx in previous releases, now enumerate as ibpxsy (similar to the Ethernet enpxsy naming). Refer to the DGX A100 User Guide for the new naming, and see the quick check below.
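
    As a quick check of the new naming after an upgrade, you can list the network interfaces (a sketch using standard iproute2 commands; the exact interface names depend on your hardware):

    $ ip -br link show | grep '^ib'    # InfiniBand interfaces now appear as, for example, ibp12s0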

Update Advisement

  • IMPORTANT: This release is a major DGX OS release and incorporates the following updates.
    • Ubuntu 20.04 LTS
    • Mellanox OFED 5.1

    Customers are advised to consider these updates and any effect they may have on their applications. For example, some MOFED-dependent applications may be affected.

    A best practice is to upgrade on select systems and verify that your applications work as expected before deploying on more systems.

  • NVIDIA KVM not Supported

    This release does not support the Linux Kernel-based Virtual Machine (KVM) on DGX systems.

    Note: NVIDIA KVM is available only with DGX-2 systems. DGX-2 customers who require this feature should stay with the latest DGX OS Server 4.x release.
  • Update DGX OS on DGX A100 prior to updating VBIOS

    DGX A100 systems running DGX OS earlier than version 4.99.8 should be updated to the latest version before updating the VBIOS to version 92.00.18.00.0 or later (via the DGX A100 firmware update container version 20.05.12.x). Failure to do so will result in the GPUs not getting recognized.

  • NGC Containers

    With DGX OS 5.0, customers should update their NGC containers to container release 20.10 if they are using multi-node training. For all other use cases, refer to the NGC Framework Containers Support Matrix.

    Refer to the NVIDIA Deep Learning Frameworks documentation for information about the latest container releases and how to access the releases.

  • Ubuntu Security Updates

    Customers are responsible for keeping the DGX server up to date with the latest Ubuntu security updates by using the apt full-upgrade procedure, as shown below. See the Ubuntu Wiki Upgrades web page for more information. Also, the Ubuntu Security Notice site (https://usn.ubuntu.com/) lists known Common Vulnerabilities and Exposures (CVEs), including those that can be resolved by updating the DGX OS software.
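
    A minimal sketch of this procedure using standard apt commands (plan for a reboot, because kernel and driver updates take effect only after one):

    $ sudo apt update          # refresh package lists, including the Ubuntu security repositories
    $ sudo apt full-upgrade    # install all pending updates, adding or removing packages as needed
    $ sudo reboot              # if a new kernel or new drivers were installed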

2.1.2. Current Version

Here is the current list of the main DGX software stack component versions in the software repositories:
  • GPU Driver: 450.102.04. Refer to the NVIDIA Tesla documentation.
  • CUDA Toolkit: 11.0. Refer to the NVIDIA CUDA Toolkit Release Notes.

    Note: For DGX servers, CUDA is updated only if it has been previously installed.

  • Docker Engine: 19.03.14. Refer to v19.03.14.
  • NVIDIA Container Toolkit:
    • libnvidia-container1: 1.3.0-1
    • libnvidia-container-tools: 1.3.0-1
    • nvidia-container-runtime: 3.4.0-1
    • nvidia-container-toolkit: 1.3.0-1
    • nvidia-docker2: 2.5.0-1

    Refer to the NVIDIA Container Toolkit documentation.
  • NVSM: 20.09.20. Refer to the NVIDIA System Management Documentation.
  • DCGM: 2.0.14. Refer to the DCGM Release Notes.
  • NVIDIA System Tools: 20.09-1
  • Mellanox OFED: MLNX 5.1-2.6.2.0. Refer to MLNX_OFED v5.1-2.6.2.0.
  • GPUDirect Storage (GDS): 0.95v1 (Preview). Refer to the GDS documentation.

In addition to upgrading the components listed in the table, performing an over-the-network update also upgrades the Ubuntu 20.04 LTS base OS and the Ubuntu kernel, depending on when the upgrade is performed.

For a list of the updates to DGX OS 5.0, see Update History.

2.1.3. Update History

This section provides information about the updates to DGX OS 5.0.

The updates listed include:

  • Major component updates in the NVIDIA repositories.
  • NVIDIA driver updates in the Ubuntu repository.

2.1.3.1. Update: April 13, 2021

The following changes were made to the NVIDIA repositories:

2.1.3.2. Update: March 30, 2021

The following changes were made to the NVIDIA repositories:
  • MOFED: MLNX 5.1-2.5.8.0.47

    If you have already updated to the latest Ubuntu kernel (uname -r reports 5.4.0-67 or later), then you need to uninstall MOFED and then reinstall it as follows.
    $ sudo apt-get purge mlnx-ofed-all mlnx-ofed-kernel-dkms --auto-remove
    $ sudo apt-get update
    $ sudo apt-get install mlnx-ofed-all nvidia-peer-memory-dkms

2.1.3.3. Update: March 2, 2021

2.1.3.4. Update: February 23, 2021

The following change was made to NVIDIA repositories:
  • NVSM: 20.09.17

2.1.3.5. Update: January 20, 2021

The following change was made in the Ubuntu repositories:
  • NVIDIA GPU Driver: 450.102.04

2.1.3.6. Update: December 11, 2020

The following changes were made in the NVIDIA repositories:

  • MOFED: MLNX 5.1-2.5.8.0

    When the update is made, the Mellanox FW updater updates the ConnectX card firmware as follows:

      • ConnectX-4: 12.28.2006
      • ConnectX-5: 16.28.4000
      • ConnectX-6: 20.28.4000

    To force a downgrade of the ConnectX-4 firmware, see Downgrading Firmware for Mellanox ConnectX-4 Cards for more information.
  • Docker: docker-ce 19.03.14

    This addresses CVE-2020-15257.

2.1.3.7. DGX OS 5.0 Release: October 31, 2020

DGX OS 5.0 was released with the DGX OS 5.0.0 ISO, which contains the following components:

  • Ubuntu: 20.04 LTS. Refer to the Ubuntu 20.04 Desktop Guide.
  • Ubuntu kernel: 5.4.0-52-generic. See linux 5.4.0-52-generic.
  • GPU Driver: 450.80.02. Refer to the NVIDIA Tesla documentation.
  • CUDA Toolkit: 11.0. Refer to the NVIDIA CUDA Toolkit Release Notes.

    Note: CUDA is installed from the ISO only on DGX Station systems, including DGX Station A100.

  • Docker Engine: 19.03.13. Refer to v19.03.13.
  • NVIDIA Container Toolkit:
    • libnvidia-container1: 1.3.0-1
    • libnvidia-container-tools: 1.3.0-1
    • nvidia-container-runtime: 3.4.0-1
    • nvidia-container-toolkit: 1.3.0-1
    • nvidia-docker2: 2.5.0-1

    Refer to the NVIDIA Container Toolkit documentation.
  • NVSM: 20.07.40. Refer to the NVIDIA System Management Documentation.
  • DCGM: 2.0.13. Refer to the DCGM Release Notes.
  • NVIDIA System Tools: 20.09-1
  • Mellanox OFED: MLNX 5.1-2.4.6.0

2.1.4. Known Issues

2.1.4.1. USB Errors are Logged When Shutting Down the System

Issue

Reported in release 4.99.9.

When rebooting the system, "USB 3-1-port1" error messages appear on the console. This occurs even when no physical USB flash drive is plugged in and no BMC ISO image is mounted.

Explanation

This issue will be addressed in a future DGX OS release. The error messages can be ignored as the system will still boot.

2.1.4.2. Erroneous Insufficient Power Error May Occur for PCIe Slots

Issue

Reported in release 4.99.9.

The DGX A100 server reports "Insufficient power" on PCIe slots when network cables are connected.

Explanation

This may occur with optical cables and indicates that the calculated power of the card + 2 optical cables is higher than what the PCIe slot can provide.

The message can be ignored.

2.1.4.3. AMD Crypto Co-processor is not Supported

Issue

Reported in release 4.99.9.

The DGX A100 currently does not support the AMD Cryptographic Co-processor (CCP). When booting the system, you may see the following error message in the syslog:

ccp initialization failed 
Explanation

Even if the message does not appear, CCP is still not supported. The SBIOS makes zero CCP queues available to the driver, so CCP cannot be activated.

2.1.4.5. cuMemFree CUDA API Performance Regression

Issue

Reported in release 4.99.10.

In cases where NVLink peers are enabled, there is a performance regression in the cuMemFree CUDA API.

Explanation

The cuMemFree API is usually used during application teardown and is discouraged from being used in performance-critical paths, so the regression should not impact application end-to-end performance.

2.1.4.6. nvsm show health Reports Firmware as Not Authenticated

Issue

Reported in release 5.0.

When issuing nvsm show health, the output shows CEC firmware components as Not Authenticated, even when they have passed authentication.

Example:
CEC:
 CEC Version: 3.5
 EC_FW_TAG0: Not Authenticated
 EC_FW_TAG1: Not Authenticated
 BMC FW authentication state: Not Authenticated 
Explanation

The message can be ignored and does not affect the overall nvsm health output status.

2.1.4.7. Running NGC Containers Older than 20.10 May Produce “Incompatible MOFED Driver” Message

Issue

Reported in release 5.0.

DGX OS 5.0 incorporates Mellanox OFED 5.1 for high performance multi-node connectivity. Support for this version of OFED was added in NGC containers 20.10, so when running on earlier versions (or containers derived from earlier versions), a message similar to the following may appear.

ERROR: Detected MOFED driver 5.1-2.4.6, but this container has version 4.6-1.0.1.
 Unable to automatically upgrade this container.
 Multi-node communication may be unreliable or may result in crashes with this version.
 This incompatibility will be resolved in an upcoming release.
Explanation

For applications that rely on OFED (typically those used in multi-node jobs), this is an indication that an update to NGC containers 20.10 or greater is required. For most other applications, this error can be ignored.

Some applications may return an error such as the following when running with NCCL debug messages enabled:

export NCCL_DEBUG=WARN

misc/ibvwrap.cc:284 NCCL WARN Call to ibv_modify_qp failed with error No such device
...
common.cu:777 'unhandled system error'

This may occur even for single-node training jobs. To work around this, issue the following:

export NCCL_IB_DISABLE=1

2.1.4.8. System May Slow Down When Using mpirun

Issue

Customers running Message Passing Interface (MPI) workloads may experience the OS becoming very slow to respond. When this occurs, a log message similar to the following would appear in the kernel log:

 kernel BUG at /build/linux-fQ94TU/linux-4.4.0/fs/ext4/inode.c:1899!
Explanation

Due to the current design of the Linux kernel, the condition may be triggered when get_user_pages is used on a file that is on persistent storage. For example, this can happen when cudaHostRegister is used on a file path that is stored in an ext4 filesystem. DGX systems implement /tmp on a persistent ext4 filesystem.

Note: If you performed this workaround on a previous DGX OS software version, you do not need to do it again after updating to the latest DGX OS version.

In order to avoid using persistent storage, MPI can be configured to use shared memory at /dev/shm (this is a temporary filesystem).

If you are using Open MPI, then you can solve the issue by configuring the Modular Component Architecture (MCA) parameters so that mpirun uses the temporary file system in memory.

For details on how to accomplish this, see the Knowledge Base Article DGX System Slows Down When Using mpirun (requires login to the NVIDIA Enterprise Support portal).
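
A minimal sketch of this configuration, assuming Open MPI and its orte_tmpdir_base MCA parameter for placing session files in /dev/shm (the application name is illustrative):

$ mpirun --mca orte_tmpdir_base /dev/shm -np 8 ./my_mpi_app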

2.1.4.9. Software Power Cap Not Reported Correctly by nvidia-smi

Issue

On DGX-1 systems with Pascal GPUs, nvidia-smi does not report Software Power Cap as "Active" when clocks are throttled by power draw.

Explanation

This issue is with nvidia-smi reporting and not with the actual functionality.

2.1.4.10. Forced Reboot Hangs the OS

Issue

When issuing reboot -f (forced reboot), I/O error messages appear on the console and then the system hangs.

The system reboots normally when issuing reboot.

Explanation

This issue will be resolved in a future version of the DGX OS.

2.1.4.11. NVSM Enumerates NVSwitches as 8-13 Instead of 0-5

Issue

Reported in release 4.99.9.

NVSM commands that list the NVSwitches (such as nvsm show nvswitches) will return the switches with 8-13 enumeration.

Example:

nvsm show /systems/localhost/nvswitches
/systems/localhost/nvswitches
Targets:
 NVSwitch10
 NVSwitch11
 NVSwitch12
 NVSwitch13
 NVSwitch8
 NVSwitch9
Explanation

Currently, NVSM recognizes NVSwitches as graphics devices, and enumerates them as a continuation of the GPU 0-7 enumeration.

2.1.4.12. Applications that call the cuCTXCreate API Might Experience a Performance Drop

Issue

Reported in release 5.0.

When some applications call cuCtxCreate, cuGLCtxCreate, or cuCtxDestroy, there might be a drop in performance.

Explanation

This issue occurs with Ubuntu 20.04, but not with previous versions. The issue affects applications that perform graphics/compute interoperations or have a plugin mechanism for CUDA, where every plugin creates its own context, or video streaming applications where computations are needed. Examples include ffmpeg, Blender, simpleDrvRuntime, and cuSolverSp_LinearSolver.

This issue is not expected to impact deep learning training.

2.1.4.13. NVSM Platform Displays as Unsupported

Issue

Reported in release 5.0.

On DGX Station, when you run the following command, the platform field displays Unsupported instead of DGX Station:
$ nvsm show version
Explanation

You can ignore this message.

2.1.4.15. NVSM Fails to Show CPU Information on Non-English Locales

Issue

Reported in releases 4.1.0 and 5.0 update 3.

If the locale is set to a language other than English, the nvsm show cpu command reports that the target processor does not exist.

$ sudo nvsm show cpu
ERROR:nvsm:Not Found for target address /systems/localhost/processors
ERROR:nvsm:Target address "/systems/*/processors/*" does not exist
Explanation

To work around this issue, set the locale to English before issuing nvsm show cpu.
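
A one-off sketch, assuming the en_US.UTF-8 locale is available on the system (LANG is preserved across sudo by the default Ubuntu sudoers env_keep settings; use update-locale for a persistent change):

$ sudo LANG=en_US.UTF-8 nvsm show cpu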

2.1.4.16. NVIDIA Desktop Shortcuts Not Updated After a DGX OS Release Upgrade

Issue

Reported in release 4.0.4.

In DGX OS 4 releases, the NVIDIA desktop shortcuts have been updated to reflect current information about NVIDIA DGX systems and containers for deep learning frameworks. These desktop shortcuts are also organized in a single folder on the desktop.

After a DGX OS release upgrade, the NVIDIA desktop shortcuts for existing users are not updated. However, the desktop for a user added after the upgrade will have the current desktop shortcuts in a single folder.

Explanation

If you want quick access to current information about NVIDIA DGX systems and containers from your desktop, replace the old desktop shortcuts with the new desktop shortcuts.

  1. Change to your desktop directory.

    $ cd /home/your-user-login-id/Desktop
  2. Remove the existing NVIDIA desktop shortcuts.

    $ rm dgx-container-registry.desktop \
    dgxstation-userguide.desktop \
    dgx-container-registry-userguide.desktop \
    nvidia-customer-support.desktop
  3. Copy the folder that contains the new NVIDIA desktop shortcuts and its contents to your desktop directory.

    $ cp -rf /etc/skel/Desktop/Getting\ Started/ .

2.1.4.17. Unable to Set a Separate/Xinerama Mode through the xorg.conf File or through nvidia-settings

Issue

Reported in release 5.0.2

On DGX Station A100, when the BIOS option OnBrd/Ext VGA Select is set to Auto or External, the nvidia-conf-xconfig service sets up Xorg to use only the display adapter.

Explanation
Manually edit the existing /etc/X11/xorg.conf.d/xorg-nvidia.conf file with the following settings:
--- xorg-nvidia.conf    2020-12-10 02:42:25.585721167 +0530
+++ /root/working-xinerama-xorg-nvidia.conf     2020-12-10 02:38:05.368218170 +0530
@@ -8,8 +8,10 @@
 Section "ServerLayout"
     Identifier     "Layout0"
     Screen      0  "Screen0"
+    Screen      1  "Screen0 (1)" RightOf "Screen0"
     InputDevice    "Keyboard0" "CoreKeyboard"
     InputDevice    "Mouse0" "CorePointer"
+    Option         "Xinerama" "1"
 EndSection

 Section "Files"
@@ -43,6 +45,7 @@
     Driver         "nvidia"
     BusID          "PCI:2:0:0"
     VendorName     "NVIDIA Corporation"
+    Screen          0
 EndSection

 Section "Screen"
@@ -51,6 +54,25 @@
     Monitor        "Monitor0"
     DefaultDepth    24
     Option         "AllowEmptyInitialConfiguration" "True"
+    SubSection     "Display"
+        Depth       24
+    EndSubSection
+EndSection
+
+Section "Device"
+    Identifier     "Device0 (1)"
+    Driver         "nvidia"
+    BusID          "PCI:2:0:0"
+    VendorName     "NVIDIA Corporation"
+    Screen          1
+EndSection
+
+Section "Screen"
+    Identifier     "Screen0 (1)"
+    Device         "Device0 (1)"
+    Monitor        "Monitor0"
+    DefaultDepth    24
+    Option         "AllowEmptyInitialConfiguration" "True"
     SubSection     "Display"
         Depth       24
EndSubSection

2.1.4.18. NSCQ Library and Fabric Manager Might Not Install When Installing a New NVIDIA Driver

Issue

When you install a new NVIDIA Driver from the Ubuntu repository, the NSCQ library and Fabric Manager might not install.

Explanation

The libnvidia-nscq-XXX packages provide the same /usr/lib/x86_64-linux-gnu/libnvidia-nscq.so file, so multiple packages cannot exist on your DGX system at the same time.

We recommend that you remove the old packages before installing the new driver branch. Refer to Upgrading your NVIDIA Data Center GPU Driver to a Newer Branch for instructions.
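
A hedged sketch of the cleanup, assuming the previously installed branch was 450 (use dpkg to confirm which branch is actually present; package names follow the libnvidia-nscq-BRANCH pattern):

$ dpkg -l | grep -E 'nscq|fabricmanager'                     # identify the installed branch
$ sudo apt-get purge libnvidia-nscq-450 nvidia-fabricmanager-450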

2.1.5. DGX OS Resolved Issues

Here are the issues that were resolved in the latest release.

  • [DGX A100]: BMC is not Detectable After Restoring BMC to Default
  • [DGX A100]: A System with Encrypted rootfs May Fail to Boot if one of the M.2 drives is Corrupted
  • [All DGX systems]: When starting the DCGM service, a version mismatch error message similar to the following will appear:
    [78075.772392] nvidia-nvswitch: Version mismatch, kernel version 450.80.02 user version 450.51.06
  • [All DGX systems]: When issuing nvsm show health, the nvsmhealth_log.txt log file reports that the /proc/driver/ folders are empty.
  • [DGX A100]: The Mellanox software that is included in the DGX OS installed on DGX A100 system does not automatically update the Mellanox firmware as needed when the Mellanox driver is installed.
  • [DGX A100]: nvsm stress-test Does not Stress the System if MIG is Enabled

    Reported in 4.99.10

  • [DGX A100]: With eight U.2 NVMe drives installed, the nvsm-plugin-pcie service reports "ERROR: Device not found in mapping table" for the additional four drives (for example, in response to systemctl status nvsm*).

    Reported in 4.99.11

  • [DGX A100]: When starting the Fabric Manager service, the following error is reported: detected NVSwitch non-fatal error 10003 on NVSwitch pci.

    Reported in 4.99.9

2.1.5.1. BMC is not Detectable After Restoring BMC to Default

Issue

Reported in release 4.99.8. Fixed with BMC 0.13.06.

After using the BMC Web UI dashboard to restore the factory defaults (Maintenance > Restore Factory Defaults), the BMC can no longer be detected and the system is rendered unusable.

Explanation

Do not attempt to restore the factory defaults using the BMC Web UI dashboard.

2.1.5.2. A System with Encrypted rootfs May Fail to Boot if one of the M.2 drives is Corrupted

Issue

Reported in release 4.99.9. Fixed in 5.0.2.

On systems with encrypted rootfs, if one of the M.2 drives is corrupted, the system stops at the BusyBox shell when booting.

Explanation

The inactive RAID array (due to the corrupted M.2 drive) is not getting converted to a degraded RAID array.

To work around this issue, perform the following steps within the BusyBox shell.

  1. Issue the following.
     $ mdadm --run /dev/md?*
  2. Wait a few seconds for the RAID and crypt to be discovered.
  3. Exit.
     $ exit

2.1.6. Known Limitations

2.1.6.1. [DGX A100]: Hot-plugging of Storage Drives not Supported

Issue

Hot-plugging or hot-swapping one of the storage drives might result in system instability or incorrect device reporting.

Explanation and Workaround

Turn off the system before removing and replacing any of the storage drives.

2.1.6.2. [DGX A100]: Syslog Contains Numerous "SM LID is 0, maybe no SM is running" Error Messages

Issue

The system log (/var/log/syslog) contains multiple "SM LID is 0, maybe no SM is running" error message entries.

Explanation and Workaround

This issue is the result of the srp_daemon within the Mellanox driver. The daemon is used to discover and connect to InfiniBand SCSI RDMA Protocol (SRP) targets.

If you are not using RDMA, then disable the srp_daemon as follows.
$ sudo systemctl disable srp_daemon.service
$ sudo systemctl disable srptools.service

2.1.6.3. [DGX-2]: Serial Over LAN Does not Work After Cold Resetting the BMC

Issue

After performing a cold reset on the BMC (ipmitool mc reset cold) while serial over LAN (SOL) is active, you cannot restart the SOL session.

Explanation and Workaround
To reactivate SOL, either
  • Reboot the system, or
  • Kill and then restart the process as follows.
    1. Identify the Process ID of the SOL TTY process by running the following.
      ps -ef | grep "/sbin/agetty -o -p -- \u --keep-baud 115200,38400,9600 ttyS0 vt220" 
    2. Kill the process.
      kill <PID>
      where <PID> is the Process ID returned by the previous command.
    3. Either wait for the cron job to respawn the process or manually restart the process by running
      /sbin/agetty -o -p -- \u --keep-baud 115200,38400,9600 ttyS0 vt220 

2.1.6.5. [DGX-2]: Applications Cannot be Run Immediately Upon Powering on the DGX-2

Issue
When attempting to run an application that uses the GPUs immediately upon powering on the DGX-2 system, you may encounter the following error.
CUDA_ERROR_SYSTEM_NOT_READY 
Explanation and Workaround
The DGX-2 uses a fabric manager service to manage communication between all the GPUs in the system. When the DGX-2 system is powered on, the fabric manager initializes all the GPUs. This can take approximately 45 seconds. Until the GPUs are initialized, applications that attempt to use them will fail.
If you encounter the error, wait and launch the application again.
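
A minimal way to wait for initialization before launching work, assuming the fabric manager runs as the nvidia-fabricmanager systemd service (the application name is illustrative):

$ while [ "$(systemctl is-active nvidia-fabricmanager)" != "active" ]; do sleep 5; done
$ ./my_gpu_app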

2.1.6.6. [DGX-1]: Script Cannot Recreate RAID Array After Re-inserting a Known Good SSD

Issue

When a good SSD is removed from the DGX-1 RAID 0 array and then re-inserted, the script to recreate the array fails.

Explanation and Workaround
After re-inserting the SSD back into the system, the RAID controller sets the array to offline and marks the re-inserted SSD as Unconfigured_Bad (UBad). The script fails when attempting to rebuild an array while one or more of the SSDs are marked UBad.
To recreate the array in this case:
  1. Set the drive back to a good state.
     $ sudo /opt/MegaRAID/storcli/storcli64 /c0/e<enclosure_id>/s<drive_slot> set good
  2. Run the script to recreate the array.
    $ sudo /usr/bin/configure_raid_array.py -c -f

2.1.6.7. [DGX Station A100] Suspend and Power Button Section Appears in Power Settings

Issue

Reported in release 5.0.2.

In the Power Settings page of the DGX Station A100 GUI, the Suspend & Power Button section is displayed even though the options do not work.

Explanation

Suspend and sleep modes are not supported on the DGX Station A100.

A. Downgrading Firmware for Mellanox ConnectX-4 Cards

DGX OS 5.0.0 provides the mlnx-fw-updater package version 5.1-2.4.6.0 which automatically installs firmware version 12.28.2040 on ConnectX-4 devices.

Because 12.28.2006 is the recommended firmware version, the updater package was updated on December 15 to install version 12.28.2006 instead. However, if the firmware has already been updated to 12.28.2040, the updater will not install the downlevel firmware version, because a newer version is already installed.

In this case, you will need to force the downgrade as explained in this section.

A.1. Checking the Device Type

You can use the mlxfwmanager tool to verify whether ConnectX-4 devices are installed on your DGX system.

Run the following command.
:~$ sudo mlxfwmanager
Querying Mellanox devices firmware ...
Device #1:
----------
 Device Type: ConnectX4
 Part Number: MCX455A-ECA_Ax
 Description: ConnectX-4 VPI adapter card; EDR IB (100Gb/s) and
100GbE; single-port QSFP28; PCIe3.0 x16; ROHS R6
 PSID: MT_2180110032
 PCI Device Name: /dev/mst/mt4115_pciconf1
 Base GUID: 248a070300945e60
 Versions: Current Available
 FW 12.28.2040 N/A
 PXE 3.6.0102 N/A
 UEFI 14.21.0017 N/A

A.2. Downgrading the Firmware

If the output indicates that ConnectX-4 devices are installed, you need to downgrade the firmware.

To downgrade the firmware:

  1. Determine the correct firmware package name.
    1. Switch to the /opt/mellanox/mlnx-fw-updater/firmware directory, where the updater installs the firmware files, and list the contents.
      :/opt/mellanox/mlnx-fw-updater/firmware$ ls

    2. Identify the correct package from the output.
      mlxfwmanager_sriov_dis_x86_64_4115 mlxfwmanager_sriov_dis_x86_64_4119
      mlxfwmanager_sriov_dis_x86_64_4123 mlxfwmanager_sriov_dis_x86_64_4127
      mlxfwmanager_sriov_dis_x86_64_41686 mlxfwmanager_sriov_dis_x86_64_4117
      mlxfwmanager_sriov_dis_x86_64_4121 mlxfwmanager_sriov_dis_x86_64_4125
      mlxfwmanager_sriov_dis_x86_64_41682
  2. Run the firmware package with the -f flag to force the downgrade.
    :/opt/mellanox/mlnx-fw-updater/firmware$ sudo ./mlxfwmanager_sriov_dis_x86_64_4115 -f

    The software queries the current firmware and then updates (downgrades) the firmware.

    Querying Mellanox devices firmware ...
    …
    ---------
    Found 2 device(s) requiring firmware update...
    Device #1: Updating FW ...
    Initializing image partition - OK
    Writing Boot image component - OK
    Done
    Device #2: Updating FW ...
    Initializing image partition - OK
    Writing Boot image component - OK
    Done
  3. Reboot the system to allow the updates to take effect.
    $ sudo reboot

Notices

Notice

This document is provided for information purposes only and shall not be regarded as a warranty of a certain functionality, condition, or quality of a product. NVIDIA Corporation (“NVIDIA”) makes no representations or warranties, expressed or implied, as to the accuracy or completeness of the information contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall have no liability for the consequences or use of such information or for any infringement of patents or other rights of third parties that may result from its use. This document is not a commitment to develop, release, or deliver any Material (defined below), code, or functionality.

NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice.

Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete.

NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by authorized representatives of NVIDIA and customer (“Terms of Sale”). NVIDIA hereby expressly objects to applying any customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed either directly or indirectly by this document.

NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military, aircraft, space, or life support equipment, nor in applications where failure or malfunction of the NVIDIA product can reasonably be expected to result in personal injury, death, or property or environmental damage. NVIDIA accepts no liability for inclusion and/or use of NVIDIA products in such equipment or applications and therefore such inclusion and/or use is at customer’s own risk.

NVIDIA makes no representation or warranty that products based on this document will be suitable for any specified use. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer’s sole responsibility to evaluate and determine the applicability of any information contained in this document, ensure the product is suitable and fit for the application planned by customer, and perform the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer’s product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document or (ii) customer product designs.

No license, either expressed or implied, is granted under any NVIDIA patent right, copyright, or other NVIDIA intellectual property right under this document. Information published by NVIDIA regarding third-party products or services does not constitute a license from NVIDIA to use such products or services or a warranty or endorsement thereof. Use of such information may require a license from a third party under the patents or other intellectual property rights of the third party, or a license from NVIDIA under the patents or other intellectual property rights of NVIDIA.

Reproduction of information in this document is permissible only if approved in advance by NVIDIA in writing, reproduced without alteration and in full compliance with all applicable export laws and regulations, and accompanied by all associated conditions, limitations, and notices.

THIS DOCUMENT AND ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT WILL NVIDIA BE LIABLE FOR ANY DAMAGES, INCLUDING WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF NVIDIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA’s aggregate and cumulative liability towards customer for the products described herein shall be limited in accordance with the Terms of Sale for the product.

Trademarks

NVIDIA, the NVIDIA logo, DGX, DGX-1, DGX-2, DGX A100, DGX Station, and DGX Station A100 are trademarks and/or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.