NVIDIA GPUDirect Storage Installation and Troubleshooting Guide

The NVIDIA® GPUDirect® Storage Installation and Troubleshooting Guide describes how to debug and isolate performance and functional problems related to GDS and is intended for systems administrators and developers.

1. Introduction

This guide describes how to debug and isolate the NVIDIA® GPUDirect® Storage (GDS)-related performance and functional problems and is intended for systems administrators and developers.

GDS enables a direct data path for direct memory access (DMA) transfers between GPU memory and storage, which avoids a bounce buffer through the CPU. This direct path increases system bandwidth and decreases the latency and utilization load on the CPU.

Creating this direct path involves distributed filesystems like DDN EXAScaler® parallel filesystem solutions (based on the Lustre filesystem) and WekaFS, so the GDS environment is composed of multiple software and hardware components. This guide addresses questions that are related to the GDS installation and helps you triage functionality and performance issues. For non-GDS issues, contact the respective OEM or filesystems vendor to understand and debug the issue.

For GDS support, contact GPUDirectStorageSupportExt@nvidia.com.

Additional GDS technical specifications, guides, and blogs provide context for the optimal use and understanding of the solution.

2. Installing GPUDirect Storage

The following scenarios might occur during a GDS installation.

2.1. Before You Install GDS

Here are the software requirements for installing GDS:

GDS is currently tested with the Ubuntu 18.04 and 20.04 Linux distributions. Linux kernels from 4.15.0-xx-generic through 5.4.0-xx-generic are supported on x86_64-based platforms. GDS depends on MOFED 4.6 and later for the EXAScaler® and WekaIO filesystems.
# MOFED 4.7 installation
$ sudo ./mlnxofedinstall --upstream-libs --force
$ ofed_info -s
MLNX_OFED_LINUX-4.7-3.2.9.0:

# MOFED 4.6 installation
$ sudo ./mlnxofedinstall --dpdk --upstream-libs --force
$ ofed_info -s
MLNX_OFED_LINUX-4.6-1.0.1.1:

# MOFED 5.1 installation
$ sudo ./mlnxofedinstall --with-nvmf --with-nfsrdma --enable-gds --add-kernel-support

$ ofed_info -s
MLNX_OFED_LINUX-5.1-2.3.7.1:

# NVIDIA driver version:
$ nvidia-smi

-----------------------------------------------------------------
NVIDIA-SMI 418.100       Driver Version: 418.100       CUDA Version: 10.1     

Software Requirements

  • Ubuntu 18.04 and 20.04
  • MOFED 5.1-0.6.6.0 and later, which supports NVMe, NVMeOF, and NFSoRDMA (VAST) on Linux kernels 4.15.x and 5.4.x

    You need to install MOFED before you install GDS. Refer to Installing GPUDirect Storage for more information about installing MOFED.

  • The following distributed filesystems:
    • WekaFS 3.8.0
    • DDN Exascaler 5.2
    • VAST 3.4

2.2. Installed GDS Libraries and Tools

Here is some information about how you can determine which GDS libraries and tools you installed.

Note: GPUDirect Storage packages are installed at /usr/local/cuda-X.Y/gds, where X is the major version of the CUDA toolkit, and Y is the minor version.
The GDS userspace libraries are located in the /usr/local/cuda-X.Y/targets/x86_64-linux/lib/ directory.
$ ls -1 /usr/local/cuda-X.Y/targets/x86_64-linux/lib/*cufile*
cufile.h
libcufile.so
libcufile.so.0
libcufile.so.0.9.0
libcufile_rdma.so
libcufile_rdma.so.0
libcufile_rdma.so.0.9.0
The GDS tools and samples are located in the /usr/local/cuda-X.Y/gds directory.
$ ls -lh /usr/local/cuda-X.Y/gds/
total 20K
-rw-r--r-- 1 root root 8.4K Oct 15 23:08 README
drwxr-xr-x 2 root root 4.0K Oct 19 15:24 samples
drwxr-xr-x 2 root root 4.0K Oct 19 16:18 tools

2.3. Verifying a Successful GDS Installation

This section provides information about how you can verify whether your GDS installation was successful.

To verify that the GDS installation was successful, run gdscheck:
$ /usr/local/cuda-X.Y/gds/tools/gdscheck

The output of this command will show whether a supported filesystem or device that was installed on the system will support GDS. The output also shows whether PCIe ACS is enabled on any of the PCI switches.

Note: For best GDS performance, disable PCIe ACS.
Run the following command:
$ /usr/local/cuda-X.Y/gds/tools/gdscheck -p

Here is the sample output for this example:

GDS release version (beta): 0.9.0.16
nvidia_fs version:  2.3 libcufile version: 2.3
cuFile CONFIGURATION:
NVMe           : Supported
NVMeOF         : Supported
SCSI           : Unsupported
SCALEFLUX CSD  : Unsupported
LUSTRE         : Supported
NFS            : Supported
WEKAFS         : Supported
USERSPACE RDMA : Supported
--MOFED peer direct  : enabled
--rdma library       : Loaded (libcufile_rdma.so)
--rdma devices       : Configured
--rdma_device_status : Up: 8 Down: 0
properties.use_compat_mode : 0
properties.use_poll_mode : 0
properties.poll_mode_max_size_kb : 4
properties.max_batch_io_timeout_msecs : 5
properties.max_direct_io_size_kb : 16384
properties.max_device_cache_size_kb : 131072
properties.max_device_pinned_mem_size_kb : 33554432
properties.posix_pool_slab_size_kb : 4096 1048576 16777216
properties.posix_pool_slab_count : 128 64 32
properties.rdma_peer_affinity_policy : RoundRobin
fs.generic.posix_unaligned_writes : 0
fs.lustre.posix_gds_min_kb: 0
fs.weka.rdma_write_support: 0
profile.nvtx : 0
profile.cufile_stats : 3
miscellaneous.api_check_aggressive : 0
GPU INFO:
GPU index 0 A100-SXM4-40GB bar:1 bar size (MiB):65536 supports GDS
GPU index 1 A100-SXM4-40GB bar:1 bar size (MiB):65536 supports GDS
GPU index 2 A100-SXM4-40GB bar:1 bar size (MiB):65536 supports GDS
GPU index 3 A100-SXM4-40GB bar:1 bar size (MiB):65536 supports GDS
GPU index 4 A100-SXM4-40GB bar:1 bar size (MiB):65536 supports GDS
GPU index 5 A100-SXM4-40GB bar:1 bar size (MiB):65536 supports GDS
GPU index 6 A100-SXM4-40GB bar:1 bar size (MiB):65536 supports GDS
GPU index 7 A100-SXM4-40GB bar:1 bar size (MiB):65536 supports GDS
IOMMU : disabled
Platform verification succeeded

2.4. JSON Config Parameters Used by GPUDirect Storage

Here is a list of the JSON configuration parameters that are used by GDS.

Note: Consider compat_mode for systems or mounts that are not yet set up with GDS support.
Table 1. JSON Configuration Parameters
Configuration Parameter Description
logging:dir

The log directory for the cufile.log file.

If this parameter is not set, the log file is created in the current working directory of the application, which is the default.

logging:level The level indicates the type of messages that will be logged:
  • ERROR logs only critical errors.
  • INFO logs informational messages, including errors.
  • DEBUG logs error, informational, and debug messages from the library.
  • TRACE is the lowest log level and logs all possible messages in the library.

The default value is ERROR.

properties:max_direct_io_size_kb

This parameter indicates the maximum IO unit size, in KB, that is exchanged between the cuFile library and the storage system.

The default value is 16MB.

properties:max_device_cache_size_kb

This indicates the maximum per GPU memory size in KB that can be reserved for internal bounce buffers.

The default value is 128MB.

properties:max_device_pinned_mem_size_kb

This indicates the maximum per GPU memory size in KB, including the memory for the internal bounce buffers, that can be pinned.

The default value is 32GB.

properties:use_poll_mode

Boolean that indicates whether the cuFile library uses polling or synchronous wait for the storage to complete IO. Polling might be useful for small IO transactions.

The default value is false.

properties:poll_mode_max_size_kb The maximum IO size in KB that is used as the threshold when polling mode is set to true.
properties:allow_compat_mode

If this parameter is set to true, the cuFile APIs remain functional even when the nvidia-fs driver is not available, by falling back to the POSIX IO path.

The purpose is to test newer filesystems, support environments where GDS applications run without the kernel driver installed, or complete comparison tests.

properties:rdma_dev_addr_list This parameter provides the list of IPv4 addresses for all the interfaces that can be used for RDMA.
properties:rdma_load_balancing_policy This parameter specifies the load balancing policy for RDMA memory registration. The default value is RoundRobin, which uses only the NICs closest to the GPU for memory registration, in a round-robin fashion.
fs:generic:posix_unaligned_writes If this parameter is set to true, the GDS path is disabled for unaligned writes and will go through the POSIX compatibility mode.
fs:lustre:posix_gds_min_kb This parameter is applicable only for the EXAScaler® filesystem and is an option to fallback to the POSIX compatible mode for IO sizes that are smaller than the set KB value. This is applicable for reads and writes.
fs:weka:rdma_write_support If this parameter is set to true, cuFileWrite uses RDMA writes instead of falling back to POSIX writes for a WekaFS mount.
denylist:drivers This parameter is an administrative setting that disables supported storage drivers on the node.
denylist:devices

This parameter is an administrative setting that disables specific supported block devices on the node.

Not applicable for DFS.

denylist:mounts This parameter is an administrative setting that disables specific mounts in the supported GDS enabled filesystems on the node.
denylist:filesystems This parameter is an administrative setting that disables specific supported GDS-ready filesystems on the node.
profile.nvtx This parameter is a boolean that, if set to true, generates NVTX traces for profiling.
profile.cufile_stats This parameter is an integer between 0 and 3, in increasing order of verbosity, that controls the GDS per-process user-space statistics.

CUFILE_ENV_PATH_JSON is an environment variable that overrides the default /etc/cufile.json path for a specific application instance. Use it to apply different settings per application, for example to further restrict filesystems or mount paths through the denylist options when the application is not ready for them.
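For example, a per-application cufile.json referenced through CUFILE_ENV_PATH_JSON might restrict mounts or filesystems that the application is not ready for. This is a sketch; the path and the exact shape of the denylist values are illustrative, so check your installed /etc/cufile.json for the precise key layout:

```json
{
    "logging": {
        "level": "ERROR"
    },
    "denylist": {
        // hypothetical mount and filesystem entries
        "mounts": [ "/mnt/unready-fs" ],
        "filesystems": [ "lustre" ]
    }
}
```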

2.5. Determining Which Version of GDS is Installed

Here is some information about how you can determine your GDS version.

To determine which version of GDS you have, run the following command:
gdscheck -v
Review the output, for example:
GDS release version (beta): 0.9.0.16
 nvidia_fs version:  2.3 libcufile version: 2.3

3. API Errors

This section provides information about the API errors you might get when using GDS.

3.1. CU_FILE_DRIVER_NOT_INITIALIZED

Here is some information about the CU_FILE_DRIVER_NOT_INITIALIZED API error.

If the cuFileDriverOpen API is not called, errors that are encountered in the implicit call to driver initialization will be reported as cuFile errors that were encountered when calling cuFileBufRegister or cuFileHandleRegister.

3.2. CU_FILE_DEVICE_NOT_SUPPORTED

Here is some information about the CU_FILE_DEVICE_NOT_SUPPORTED error.

GDS is supported only on NVIDIA Tesla® or Quadro® model GPUs that support compute mode and have a compute major capability of 6 or later.

Note: This includes V100 and T4 cards.

3.3. CU_FILE_IO_NOT_SUPPORTED

Here is some information about the CU_FILE_IO_NOT_SUPPORTED error.

GDS is currently only supported on the supported filesystems and block devices. See Before You Install GDS for a list of the supported filesystems. If the file descriptor is from a local filesystem, or a mount that is not GDS ready, the API returns this error.

Here is a list of common reasons for this error:
  • The file descriptor belongs to an unsupported filesystem.
  • The specified fd is not a regular UNIX file.
  • O_DIRECT is not specified on the file.
  • Any combination of encryption, compression, or compliance settings on the fd is set.

    For example, FS_COMPR_FL | FS_ENCRYPT_FL | FS_APPEND_FL | FS_IMMUTABLE_FL.

    Note: These settings are allowed when compat_mode is set to true.
  • Any combination of unsupported file modes is specified in the open call for the fd:
O_APPEND | O_NOCTTY | O_NONBLOCK | O_DIRECTORY | O_NOFOLLOW | O_TMPFILE

3.4. CU_FILE_CUDA_MEMORY_TYPE_INVALID

Here is some information about the CU_FILE_CUDA_MEMORY_TYPE_INVALID error.

Physical memory for cudaMallocManaged memory is allocated dynamically at the first use and currently does not provide a mechanism to expose physical memory or Base Address Register (BAR) memory to pin for use in GDS. However, GDS indirectly supports cudaMallocManaged memory when the memory is used as an unregistered buffer with cuFileWrite and cuFileRead.

4. Basic Troubleshooting

This section provides information about basic troubleshooting for GDS.

4.1. Log Files for the GDS Library

Here is some information about troubleshooting the GDS library log files.

A cufile.log file is created in the same location where the application binaries are located. Currently the maximum log file size is 32MB. If the log file size increases to greater than 32MB, the log file is truncated and logging is resumed on the same file.

4.2. Enabling a Different cufile.log File for Each Application

You can enable a different cufile.log file for each application.

There are several relevant cases:
  • If the logging:dir property in the default /etc/cufile.json file is not set, by default, the cufile.log file is generated in the current working directory of the application.
  • If the logging:dir property is set in the default /etc/cufile.json file, the log file is created in the specified directory path.
Note: This is usually not recommended for scenarios where multiple applications use the libcufile.so library.
For example:
 "logging": {
    // log directory; if not set, the log file is
    // created under the current working directory
    "dir": "/opt/gdslogs/"
 }

The log file will be created as /opt/gdslogs/cufile.log.

To enable a different cufile.log for each application, override the default JSON path by completing the following steps:

  1. Export CUFILE_ENV_PATH_JSON="/opt/myapp/cufile.json".
  2. Edit the /opt/myapp/cufile.json file.
    "logging": {
        // log directory, if not enabled 
        // will create log file under current working 
        // directory
        "dir": "/opt/myapp",
    }
  3. Run the application.
  4. To check for logs, run $ ls -l /opt/myapp/cufile.log.
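The four steps above can be sketched as a shell session; /tmp/myapp is used here in place of /opt/myapp so the sketch can run without special privileges, and the application launch itself is omitted:

```shell
# Create a per-application cufile.json (directory is an example).
mkdir -p /tmp/myapp
cat > /tmp/myapp/cufile.json <<'EOF'
{
    "logging": {
        "dir": "/tmp/myapp",
        "level": "ERROR"
    }
}
EOF
# Point libcufile at the per-application config before running the app.
export CUFILE_ENV_PATH_JSON="/tmp/myapp/cufile.json"
echo "using config: $CUFILE_ENV_PATH_JSON"
```

After the application runs, check for the log with ls -l /tmp/myapp/cufile.log.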

4.3. Enabling Tracing GDS Library API Calls

There are different logging levels, which can be enabled in the /etc/cufile.json file.

By default, the logging level is set to ERROR. Logging has a performance impact at the higher verbosity levels (INFO, DEBUG, and TRACE) and should be increased only to debug field issues.
Configure tracing and run the following:
"logging": {
// log directory, if not enabled // will create log file under local directory//"dir": "/home/<xxxx>",

// ERROR|WARN|INFO|DEBUG|TRACE (in decreasing order of priority)
"level": "ERROR"
},

4.4. cuFileHandleRegister Error

Here is some information about the cuFileHandleRegister error.

If you see this error in the cufile.log file when an IO is issued:
"cuFileHandleRegister error: GPUDirect Storage not supported on current file."
Here are some reasons why this error might occur:
  • The filesystem is not supported by GDS.

    See CU_FILE_DEVICE_NOT_SUPPORTED for more information.

  • The DIRECT_IO functionality is not supported for the mount on which the file resides.

For more information, enable tracing in the /etc/cufile.json file.

4.5. Troubleshooting Applications that Return cuFile Errors

Here is some information about how to troubleshoot cuFile errors.

To debug these errors:

  1. See the cufile.h file for more information about errors that are returned by the API.
  2. If the IO was submitted to the GDS driver, check whether there are any errors in GDS stats.

    If the IO fails, the error stats should provide information about the type of error.

    See Finding the GDS Driver Statistics for more information.

  3. Enable GDS library tracing and monitor the cufile.log file.
  4. Enable GDS Driver debugging:
    $ echo 1 >/sys/module/nvidia_fs/parameters/dbg_enabled
After the driver debug logs are enabled, you might get more information about the error.

4.6. cuFile-* Errors with No Activity in GPUDirect Storage Statistics

This section provides information about a scenario where there are cuFile errors but no activity in the GDS statistics.

This issue means that the API failed in the GDS library. You can enable tracing by setting the appropriate logging level in the /etc/cufile.json file to get more information about the failure in cufile.log.

4.7. CUDA Runtime and Driver Mismatch with Error Code 35

Here is some information about how to resolve CUDA error 35.

Error code 35 from the CUDA documentation points to cudaErrorInsufficientDriver, which indicates that the installed NVIDIA CUDA® driver is older than the CUDA runtime library. This is not a supported configuration. For the application to run, you must update the NVIDIA display driver.

Note: cuFile tools depend on CUDA runtime 10.1 and later. You must ensure that the installed CUDA runtime is compatible with the installed CUDA driver and is at the recommended version.

4.8. CUDA API Errors when Running the cuFile-* APIs

Here is some information about CUDA API errors.

The GDS library uses the CUDA driver APIs. If a CUDA API fails, an error code is returned; refer to the error codes in the CUDA Libraries documentation for more information.

4.9. Finding GDS Driver Statistics

Here is some information about how you can find the driver statistics.

To find the GDS Driver Statistics, run the following command:
$ cat /proc/driver/nvidia-fs/stats

GDS Driver kernel statistics for READ/WRITE are available only for the EXAScaler® filesystem. See Troubleshooting and FAQ for the WekaIO Filesystem for more information about READ/WRITE.

4.10. Tracking IO Activity that Goes Through the GDS Driver

Here is some information about tracking IO activity.

In the GDS Driver statistics, the ops row shows the active IO operations. The Read and Write fields show the operations currently in flight, which indicates how many total IOs are in flight across all applications in the kernel. If there is a bottleneck in user space, the number of active IOs will be less than the number of threads that are submitting IO. For more detail about the Read and Write bandwidth numbers, review the counters in the Read/Write rows.

4.11. Read/Write Bandwidth and Latency Numbers in GDS Stats

Here is some information about Read/Write bandwidth and latency numbers in GDS.

Measured latencies begin when the IO is submitted and end when the IO completion is received by the GDS kernel driver. Userspace latencies are not reported. This should indicate whether the bottleneck is in user space or in the backend disks and fabric.

Note: The WekaIO filesystem reads do not go through the nvidia-fs driver, so Read/Write bandwidth stats are not available for WekaIO filesystem by using this interface.

See the Troubleshooting and FAQ for the WekaIO Filesystem for more information.

4.12. Tracking Registration and Deregistration of GPU Buffers

This section provides information about registering and deregistering GPU buffers.

In the GDS Driver stats, look for the active field in the BAR1-map stats row. Pinning and unpinning GPU memory through cuFileBufRegister and cuFileBufDeregister is an expensive operation. If you notice a large number of registrations (n) and deregistrations (free) in the nvidia-fs stats, performance can suffer. Refer to the GPUDirect Storage Best Practices Guide for more information about using the cuFileBufRegister API.

5. Advanced Troubleshooting

This section provides information about troubleshooting some advanced issues.

5.1. Resolving Hung cuFile* APIs with No Response

Here is some information about how to resolve hung cuFile APIs.

  1. Check whether there are any kernel panics/warnings in dmesg:
    $ dmesg > warnings.txt; less warnings.txt
  2. Check whether the application process is in the 'D' (uninterruptible sleep) state.
  3. If the process is in the 'D' state:
    1. Get the PID of the process by running the following command:
      $ ps axf | grep ' D'
    2. As a root user, get the backtrace of the 'D' state process:
      $ su root
      $ cat /proc/<pid>/stack
  4. Verify whether the threads are stuck in the kernel or in user space by reviewing the backtrace of the 'D' state threads.
  5. Check whether any threads are showing heavy CPU usage.
    1. The htop/mpstat tools should show CPU usage per core.
    2. Get the call graph of where the CPUs are getting used. The following code snippet should narrow down whether the threads are hung in user space or in the kernel:
      $ perf top -g 

5.2. Sending Relevant Data to Customer Support

Here is the information to collect and send to customer support, for example when you experience a kernel panic with stack traces.

  1. Run the following commands and collect the kernel version and distribution information.
    uname -a
    lsb_release -a
  2. Run the following command and collect the output.
    cat /proc/cmdline
  3. Run the following command and collect the kernel model output.
    modinfo nvidia_fs
  4. Run the following command and collect the output.
    modinfo nvme nvme_rdma rpcrdma wekafs lustre lnet
  5. Run the following command and capture the dmesg output.
    dmesg > gds_dmesg.txt
  6. Run the following command and collect the output.
    gdscheck.py -pvV
  7. Run the following commands and collect the following GDS-related kernel stats.
    • /proc/driver/nvidia-fs/stats
    • /proc/driver/nvidia-fs/peer_affinity
    • /proc/driver/nvidia-fs/peer_distance
  8. Run the following command and collect the user space-related stats.
    gds_stats -p <PID> -l 3
  9. Collect the relevant cufile.log files.
  10. Collect the /etc/cufile.json file or the cufile.json file that is used by the application.
  11. Run the following command and collect the output.
    nvidia-smi
  12. Run the following command and collect the output.
    nvidia-smi topo -m
  13. Collect the /var/log/syslog and /var/log/kern.log files.
  14. Run the following command and collect the output.
    lspci -vvv
  15. Run the following commands and collect the outputs.
    • /proc/cpuinfo
    • /proc/meminfo
  16. Run the following commands and collect the outputs.
    • ofed_info -s
    • ibstatus
    • ibdev2netdev
  17. Collect the available kernel core, application crash dump files, and relevant System.map files.
  18. As a super user, collect stack traces for hung user space processes by using the following command.
    cat /proc/<pid>/task/*/stack
  19. Send all of the logs that you collected in a tar file to the GDS support team.
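The collection steps above can be wrapped in a small script. This is a sketch, not an official NVIDIA tool: it skips commands and files that are unavailable on the node, and the output file names are arbitrary:

```shell
# Gather the requested outputs into one tarball for the GDS support team.
OUT=$(mktemp -d)
run() { out="$OUT/$1"; shift; "$@" > "$out" 2>&1 || true; }
run uname.txt uname -a
run cmdline.txt cat /proc/cmdline
run modinfo_nvidia_fs.txt modinfo nvidia_fs
run dmesg.txt dmesg
run nvidia-smi.txt nvidia-smi
run nvidia-smi-topo.txt nvidia-smi topo -m
run ofed_info.txt ofed_info -s
# Copy whichever GDS kernel stats and config files exist and are readable.
for f in /proc/driver/nvidia-fs/stats /proc/driver/nvidia-fs/peer_affinity \
         /proc/driver/nvidia-fs/peer_distance /etc/cufile.json; do
    [ -r "$f" ] && cp "$f" "$OUT/" || true
done
tar -czf gds_debug_logs.tar.gz -C "$OUT" .
echo "collected into gds_debug_logs.tar.gz"
```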

5.3. Resolving an IO Failure with EIO and Stack Trace Warning

Here is some information about how to resolve an IO failure with EIO and a stack trace warning that includes the nvfs_mgroup_check_and_set function.

This might mean that the EXAScaler® filesystem did not honor O_DIRECT and fell back to page cache mode. GDS tracks this information in the driver and returns EIO.

Note: The WARNING stack trace is observed only once during the lifetime of the kernel module. You will get an Error: Input/Output (EIO), but the trace message will be printed only once. If you consistently experience this issue, contact the GDS support team.

5.4. Controlling GPU BAR Memory Usage

Here is some information about how to manage and control your GPU BAR memory usage.

  1. To show how much BAR Memory is available per GPU, run the following command:
    /usr/local/cuda-x.y/gds/tools/gdscheck
  2. Review the output, for example:
    GPU INFO:
     GPU Index: 0 bar:1 bar size (MB):32768
     GPU Index: 1 bar:1 bar size (MB):32768
    GDS uses BAR memory in the following cases:
    • When the process invokes cuFileBufRegister.
    • When GDS uses the cache internally to allocate bounce buffers per GPU.
    Each process can control the usage of BAR memory via the configurable property in the /etc/cufile.json file:
    "properties": {
    
    // device memory size for reserving bounce buffers for the entire GPU (in KB)
    "max_device_cache_size" : 131072,
    // limit on maximum memory that can be pinned for a given process (in KB)
    "max_device_pinned_mem_size" : 33554432                            
    
    }
    Note: This configuration is per process, and the configuration is set across all GPUs.

5.5. Determining the Amount of Cache to Set Aside

Here is some information about how to determine how much cache to set aside.

By default, 128 MB of cache is set in the configurable max_device_cache_size property. However, this does not mean that GDS pre-allocates 128MB of memory per GPU up front. Memory allocation is done on the fly and is based on need. After the allocation is complete, there is no purging of the cache.

Note: The configured value caps how large the cache can grow.

By default, since 128 MB is set, the cache can grow up to 128 MB. The right cache size is application specific and depends on the workload. Refer to the GPUDirect Storage Best Practices Guide to understand when the cache is needed and how to set the limit.

5.6. Monitoring BAR Memory Usage

Here is some information about monitoring BAR memory usage.

There is no way to monitor the BAR memory usage per process. However, GDS Stats tracks the global BAR usage across all processes. For more information, see the following stat output from /proc/driver/nvidia-fs/stats for the GPU with B:D:F 0000:34:00.0:
GPU 0000:34:00.0  uuid:12a86a5e-3002-108f-ee49-4b51266cdc07 : Registered_MB=32 Cache_MB=10

Registered_MB tracks how much BAR memory is used when applications are explicitly using the cuFileBufRegister API.

Cache_MB tracks GDS usage of BAR memory for internal cache.
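As a sketch, the two fields can be extracted from a stats line with standard text tools; the sample line below is the one shown above:

```shell
# Parse per-GPU BAR usage out of an nvidia-fs stats line.
line='GPU 0000:34:00.0  uuid:12a86a5e-3002-108f-ee49-4b51266cdc07 : Registered_MB=32 Cache_MB=10'
registered=$(printf '%s\n' "$line" | grep -oE 'Registered_MB=[0-9]+' | cut -d= -f2)
cache=$(printf '%s\n' "$line" | grep -oE 'Cache_MB=[0-9]+' | cut -d= -f2)
echo "Registered_MB=$registered Cache_MB=$cache"
```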

5.7. Resolving an ENOMEM Error Code

Here is some information about the -12 ENOMEM error code.

Each GPU has some BAR memory reserved. The cuFileBufRegister function makes the pages that underlie a range of GPU virtual memory accessible to a third-party device. This process is completed by pinning the GPU device memory in BAR space by using the nvidia_p2p_get_pages API. If the application tries to pin memory beyond the available BAR space, the nvidia_p2p_get_pages API returns an -12 (ENOMEM) error code.

To avoid running out of BAR memory, developers should use the BAR usage statistics to manage how much memory is pinned by an application. Administrators can use these statistics to investigate how to limit the pinned memory for different applications.

5.8. GDS and Compatibility Mode

Here is some information about GDS and the compatibility mode.

To determine the compatibility mode, complete the following tasks:
  • In the /etc/cufile.json file, verify that allow_compat_mode is set to true.
  • gdscheck -p displays whether the allow_compat_mode property is set to true.
  • Check the cufile.log file for the cufile IO mode: POSIX message.

    This message is in the hot IO path, where logging each instance significantly impacts performance, so the message is only logged when logging:level is explicitly set to the TRACE mode in the /etc/cufile.json file.

5.9. Enabling Compatibility Mode

Here is some information about how to enable the compatibility mode.

Compatibility mode can be used by application developers to test the applications with cuFile-enabled libraries under the following conditions:
  • When there is no support for GDS for a specific filesystem.
  • The nvidia-fs.ko driver is not enabled in the system by the administrator.

To enable compatibility mode:

  1. Remove the nvidia-fs kernel driver:
    $ rmmod nvidia-fs
  2. In the /etc/cufile.json file, set allow_compat_mode to true.
The IO through cuFileRead/cuFileWrite will now fall back to the CPU path.
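For step 2, the relevant cufile.json fragment looks like the following sketch; keep the rest of your existing configuration unchanged:

```json
{
    "properties": {
        "allow_compat_mode": true
    }
}
```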

5.10. Tracking the IO After Enabling Compatibility Mode

Here is some information about tracking the IO after you enable the compatibility mode.

When GDS is used in compatibility mode, and cufile_stats is enabled in the /etc/cufile.json file, you can use gds_stats or other standard Linux tools, such as strace, iostat, iotop, sar, ftrace, and perf. You can also use the BPF compiler collection tools to track and monitor the IO.

When compatibility mode is enabled, internally, cuFileRead and cuFileWrite use POSIX pread and pwrite system calls, respectively.

5.11. Bypassing GPUDirect Storage

There are some scenarios in which you can bypass GDS.

Some tunables allow GDS IO and POSIX IO to proceed simultaneously.

Here are the cases where GDS can be bypassed without having to remove the GDS driver:
  • On supported filesystems and block devices.

    In the /etc/cufile.json file, if the posix_unaligned_writes config property is set to true, the unaligned writes will fall back to the compatibility mode and will not go through GDS. See Before You Install GDS for a list of supported filesystems.

  • On an EXAScaler® filesystem

    In the /etc/cufile.json file, if the posix_gds_min_kb config property is set to a certain value (in KB), IOs with a size that is less than or equal to the set value will fall back to POSIX mode. For example, if posix_gds_min_kb is set to 8KB, IOs with a size of 8KB or less will fall back to the POSIX mode. This is applicable for reads and writes.

  • On a WekaIO filesystem:
    Note: Currently, cuFileWrite will always fallback to the POSIX mode.
    In the /etc/cufile.json file, if the allow_compat_mode config property is set to true:
    • If RDMA connections and/or memory registrations cannot be established, cuFileRead will fall back to the POSIX mode.
    • If cuFileRead fails to allocate an internal bounce buffer for non-4K-aligned GPU VA addresses, it will fall back to the POSIX mode.
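As a sketch, the fallback tunables described in this section sit under the fs section of /etc/cufile.json; the values below are examples:

```json
{
    "fs": {
        "generic": {
            // unaligned writes fall back to the POSIX path
            "posix_unaligned_writes": true
        },
        "lustre": {
            // EXAScaler IOs of 8 KB or less fall back to the POSIX path
            "posix_gds_min_kb": 8
        }
    }
}
```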

Refer to the GPUDirect Storage Best Practices Guide for more information.

5.12. GDS Does Not Work for a Mount

Here is some information to help you understand why GDS is not working for a mount.

GDS will not be used for a mount in the following cases:
  • When the necessary GDS drivers are not loaded on the system.
  • The filesystem associated with that mount is not supported by GDS.
  • The mount point is denylisted in the /etc/cufile.json file.

5.13. Simultaneously Running the GPUDirect Storage IO and POSIX IO on the Same File

Since files are opened in O_DIRECT mode for GDS, applications should avoid mixing O_DIRECT and normal IO to the same file, especially to overlapping byte regions in the same file.

Even when the filesystem correctly handles the coherency issues in this situation, overall I/O throughput might be slower than using either mode alone. Similarly, applications should avoid mixing mmap(2) of files with direct I/O to the same files. Refer to the filesystem-specific documentation for information about additional O_DIRECT limitations.

5.14. Running Data Verification Tests Using GPUDirect Storage

Here is some information about how you can run data verification tests by using GDS.

GDS has an internal data verification utility, gdsio_verify, which is used to test data integrity of reads and writes. Run gdsio_verify -h for detailed usage information.

Here is an example:
$ /usr/local/cuda-x.y/gds/tools/gdsio_verify -f /mnt/ai200/fio-seq-writes-1 -d 0 -o 0 -s 1G -n 1 -m 1
Here is the sample output:
gpu index :0,file :/mnt/ai200/fio-seq-writes-1, RING buffer size :0, 
gpu buffer alignment :0, gpu buffer offset :0, file offset :0, 
io_requested :1073741824, bufregister :true, sync :1, nr ios :1,
fsync :0,
address = 0x560d32c17000
Data Verification Success
Note: This test completes data verification of reads and writes through GDS.

6. Troubleshooting Performance

This section covers issues related to performance.

6.1. Running Performance Benchmarks with GDS

You can run performance benchmarks with GDS and compare the results with CPU numbers.

GDS has a homegrown benchmarking utility, /usr/local/cuda-x.y/gds/tools/gdsio, which helps you compare GDS IO throughput numbers with CPU IO throughput. Run gdsio -h for detailed usage information.

Here are some examples:

GDS: Storage --> GPU Memory
$ /usr/local/cuda-x.y/gds/tools/gdsio -f /mnt/ai200/fio-seq-writes-1 -d 0 -w 4 -s 10G -i 1M -I 0 -x 0
Storage --> CPU Memory
$ /usr/local/cuda-x.y/gds/tools/gdsio -f /mnt/ai200/fio-seq-writes-1 -d 0 -w 4 -s 10G -i 1M -I 0 -x 1
Storage --> CPU Memory --> GPU Memory
$ /usr/local/cuda-x.y/gds/tools/gdsio -f /mnt/ai200/fio-seq-writes-1 -d 0 -w 4 -s 10G -i 1M -I 0 -x 2

6.2. Tracking Whether GPUDirect Storage is Using an Internal Cache

You can determine whether GDS is using an internal cache.

Prerequisite: Before you start, read the GPUDirect Storage Best Practices Guide.

GDS Stats has per-GPU statistics, and the bus:device.function (BDF) information for each GPU is displayed. If the cache_MB field is active for a GPU, GDS is using the internal cache to complete the IO.

GDS might use the internal cache when one of the following conditions is true:
  • The file_offset that was issued in cuFileRead/cuFileWrite is not 4K aligned.
  • The size in the cuFileRead/cuFileWrite call is not 4K aligned.
  • The devPtr_base that was issued in cuFileRead/cuFileWrite is not 4K aligned.
  • The devPtr_base+devPtr_offset that was issued in cuFileRead/cuFileWrite is not 4K aligned.
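
These alignment conditions can be expressed directly. The following sketch is illustrative only; it mirrors the listed conditions, not the cuFile library's internal check:

```python
ALIGN = 4096  # GDS stages IO through its internal cache when any
              # of the parameters below is not 4K aligned

def aligned_4k(value):
    """True if value is a multiple of 4096."""
    return value % ALIGN == 0

def uses_internal_cache(file_offset, size, dev_ptr_base, dev_ptr_offset):
    """Return True if a cuFileRead/cuFileWrite with these parameters
    would hit one of the unaligned cases listed above (sketch)."""
    return not (aligned_4k(file_offset)
                and aligned_4k(size)
                and aligned_4k(dev_ptr_base)
                and aligned_4k(dev_ptr_base + dev_ptr_offset))
```

For example, a fully 4K-aligned transfer avoids the cache, while a 512-byte file offset triggers it.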

6.3. Tracking when IO Crosses the PCIe Root Complex and Impacts Performance

You can track when the IO crosses the PCIe root complex and affects performance.

See Review Peer Affinity Stats for a Kernel Filesystem and Storage Devices for more information.

6.4. Using GPUDirect Statistics to Monitor CPU Activity

Although you cannot use GDS statistics to monitor CPU activity, you can use some Linux tools to complete this task.

Here are the Linux tools you can use:
  • htop
  • perf
  • mpstat

6.5. Monitoring Performance and Tracing with cuFile-* APIs

You can monitor performance and tracing with the cuFile-* APIs.

You can use the ftrace, perf, or BCC-BPF tools to monitor performance and tracing. Ensure that you have the symbols that you can use to track and monitor the performance with a standard Linux IO tool.

6.6. Example: Using Linux Tracing Tools

The cuFileBufRegister function makes the pages that underlie a range of GPU virtual memory accessible to a third-party device. This process is completed by pinning the GPU device memory in the BAR space, which is an expensive operation and can take up to a few milliseconds.

You can use the BCC/BPF tools to trace the cuFileBufRegister API, understand what is happening in the Linux kernel, and understand why this process is expensive.

Scenario

  1. You are running a workload with 8 threads where each thread is issuing cuFileBufRegister to pin to the GPU memory.
    $ ./gdsio -f /mnt/ai200/seq-writes-1 -d 0 -w 8 -s 10G -i 1M -I 0 -x 0
  2. When IO is in progress, use a tracing tool to understand what is going on with cuFileBufRegister:
    $ /usr/share/bcc/tools# ./funccount -Ti 1 nvfs_mgroup_pin_shadow_pages 
  3. Review the sample output:
    15:04:56
    FUNC                                    COUNT
    nvfs_mgroup_pin_shadow_pages                         8

    As you can see, the nvfs_mgroup_pin_shadow_pages function was invoked 8 times in the one-second interval, once per thread.

  4. To see the latency for that function, run:
    $ /usr/share/bcc/tools# ./funclatency -i 1 nvfs_mgroup_pin_shadow_pages
  5. Review the output:
    Tracing 1 functions for "nvfs_mgroup_pin_shadow_pages"... Hit Ctrl-C to end.
    
         nsecs               : count     distribution
             0 -> 1          : 0        |                                        |
             2 -> 3          : 0        |                                        |
             4 -> 7          : 0        |                                        |
             8 -> 15         : 0        |                                        |
            16 -> 31         : 0        |                                        |
            32 -> 63         : 0        |                                        |
            64 -> 127        : 0        |                                        |
           128 -> 255        : 0        |                                        |
           256 -> 511        : 0        |                                        |
           512 -> 1023       : 0        |                                        |
          1024 -> 2047       : 0        |                                        |
          2048 -> 4095       : 0        |                                        |
          4096 -> 8191       : 0        |                                        |
          8192 -> 16383      : 1        |*****                                   |
         16384 -> 32767      : 7        |****************************************|
    

    Seven of the eight calls to the nvfs_mgroup_pin_shadow_pages function took about 16-32 microseconds. This time is probably spent in the Linux kernel function get_user_pages_fast, which is used to pin the shadow pages.

    cuFileBufRegister invokes the nvidia_p2p_get_pages NVIDIA driver function to pin GPU device memory in the BAR space. This information was obtained by running $ perf top -g and getting the call graph of cuFileBufRegister.

Here is the overhead of the nvidia_p2p_get_pages:
$ /usr/share/bcc/tools# ./funclatency -Ti 1 nvidia_p2p_get_pages

15:45:19
nsecs               : count     distribution
      0 -> 1          : 0        |                                        |
      2 -> 3          : 0        |                                        |
      4 -> 7          : 0        |                                        |
      8 -> 15         : 0        |                                        |
     16 -> 31         : 0        |                                        |
     32 -> 63         : 0        |                                        |
     64 -> 127        : 0        |                                        |
    128 -> 255        : 0        |                                        |
    256 -> 511        : 0        |                                        |
    512 -> 1023       : 0        |                                        |
   1024 -> 2047       : 0        |                                        |
   2048 -> 4095       : 0        |                                        |
   4096 -> 8191       : 0        |                                        |
   8192 -> 16383      : 0        |                                        |
  16384 -> 32767      : 0        |                                        |
  32768 -> 65535      : 0        |                                        |
  65536 -> 131071     : 0        |                                        |
 131072 -> 262143     : 0        |                                        |
 262144 -> 524287     : 2        |*************                           |
 524288 -> 1048575    : 6        |****************************************|

6.7. Tracing the cuFile-* APIs

You can trace the cuFile-* APIs by using NVTX static tracepoints.

The cuFile-* APIs are not integrated into the existing CUDA visibility tools, so they will not show up in nvprof or the NVIDIA Nsight tools by default.

However, NVTX static tracepoints are available for the public interfaces in the libcufile.so library. After these static tracepoints are enabled, you can view the traces in NVIDIA Nsight just like any other CUDA® symbols.

You can enable the NVTX tracing using the JSON configuration at /etc/cufile.json:
"profile": {
                            // nvtx profiling on(true)/off(false)
                            "nvtx": true, 
            },

7. Troubleshooting IO Activity

This section covers issues that are related to IO activity and the interactions with the rest of Linux.

7.1. Managing Coherency with the Page Cache

Here is some information about how filesystems maintain the coherency of data in the page cache and the data on disk.

When GDS is used, files are opened in O_DIRECT mode, so IO that completes through the direct path bypasses the page cache.

  • On the EXAScaler® filesystem:
    • For reads, IO bypasses the page cache and fetches the data directly from backend storage.
    • When writes are issued, the nvidia-fs drivers will try to flush the data in the page cache for the range of offset-length before issuing writes to the VFS subsystem.
    • The stats that track this information are:
      • pg_cache
      • pg_cache_fail
      • pg_cache_eio
  • On the WekaIO filesystem:
    • For reads, IO bypasses the page cache and fetches the data directly from backend storage.
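
When issuing the POSIX side of comparison or mixed IO with O_DIRECT, user buffers generally must be 4K aligned. Here is a minimal host-side sketch (tooling illustration only, not part of GDS) that obtains a page-aligned buffer via an anonymous mmap, which the kernel always places on a page boundary:

```python
import ctypes
import mmap

def alloc_aligned(size, align=4096):
    """Allocate a writable buffer whose start address is a multiple
    of `align`. Anonymous mmap allocations are page-aligned, which
    satisfies the typical 4K alignment that O_DIRECT expects."""
    buf = mmap.mmap(-1, size)  # anonymous, page-aligned mapping
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    assert addr % align == 0, "mmap returned an unaligned address"
    return buf, addr
```

A buffer from alloc_aligned can then be used with a file descriptor opened with O_DIRECT for the CPU-path side of a comparison test.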

7.2. GPUDirect Storage API Integration with Benchmark Tools

Here is some information about whether the GDS API can be integrated with benchmark tools such as FIO and IOZONE.

Unfortunately, because of licensing issues, we cannot ship the cuFile APIs already integrated with FIO. However, you can use the gdsio binary, which is an internally developed GDS-specific tool, to run the benchmarks.

8. EXAScaler® Filesystem LNet Troubleshooting

This section provides information about how to troubleshoot issues with the EXAScaler® Filesystem.

8.1. Determining the EXAScaler® Filesystem Client Module Version

You can determine the version of the EXAScaler® Filesystem Client module.

To check the EXAScaler® filesystem Client version, check dmesg after you install the EXAScaler® filesystem.

Note: The EXAScaler® server version should be EXA-5.2.

This table provides a list of the client kernel module versions that have been tested with DDN AI200 and DDN AI400 systems:

Table 2. Tested Kernel Module Versions

DDN Client Version    Kernel Version    MOFED Version
2.12.3_ddn28          4.15.0            MOFED 4.7
2.12.3_ddn29          4.15.0            MOFED 4.7
2.12.3_ddn39          4.15.0            MOFED 5.1
2.12.5_ddn4           5.4.0             MOFED 5.1
To verify the client version, run the following command:
$ sudo lctl get_param version
Here is the output:
Lustre version: 2.12.3_ddn39

8.2. Checking the LNet Network Setup on a Client

You can check the LNet network setup on the client.

  1. Run the following command:
    $ lnetctl net show
  2. Review the output, for example:
    net:
       - net type: lo

8.3. Checking the Health of the Peers

Here is some information about checking the health of your interface.

An LNet health value of 1000 is the best possible value that can be reported for a network interface. Anything less than 1000 indicates that the interface is running in a degraded mode and has encountered some errors.
  1. Run the following command:
    $ lnetctl net show -v 3 | grep health
    
  2. Review the output, for example:
        health stats:
              health stats:
                  health value: 1000
              health stats:
                  health value: 1000
              health stats:
                  health value: 1000
              health stats:
                  health value: 1000
              health stats:
                  health value: 1000
              health stats:
                  health value: 1000
              health stats:
                  health value: 1000
              health stats:
                  health value: 1000
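
Scanning this output by hand is tedious on machines with many interfaces. The following sketch (illustrative; it parses the text output of `lnetctl net show -v 3 | grep health`) collects any health values below 1000:

```python
import re

def degraded_health_values(lnetctl_output, best=1000):
    """Return the list of health values below `best` found in
    'health value: N' lines of lnetctl text output."""
    values = [int(v) for v in
              re.findall(r"health value:\s*(\d+)", lnetctl_output)]
    return [v for v in values if v < best]
```

An empty result means every interface reports the best health value; any entries indicate degraded NICs worth investigating.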

8.4. Checking for Multi-Rail Support

You can verify whether multi-rail is supported.

  1. Run the following command:
    $ lnetctl peer show | grep -i Multi-Rail:
  2. Review the output, for example:
    Multi-Rail: True

8.5. Checking GDS Peer Affinity

For peer affinity, you need to check whether the expected interfaces are being used for the associated GPUs.

The code snippet below is a description of a test that runs load on a specific GPU. The test validates whether the interface that is performing the send and receive is the interface that is the closest, and is correctly mapped, to the GPU. See Resetting the nvidia-fs Statistics and Reviewing Peer Affinity Stats for a Kernel File System and Storage Drivers for more information about the metrics that are used to check peer affinity.

You can run a gdsio test from the tools directory and monitor the LNet stats. See the readme file for more information. In the gdsio test below, a write test was completed on GPU 0. The expected NIC interface for GPU 0 is ib0 on the NVIDIA DGX-2™ platform. The lnetctl net show statistics were captured before the test, and after the gdsio test you can see that the RPC sends and receives have happened over ib0.

  1. Run the gdsio test.
  2. Review the output, for example:
    $ sudo lustre_rmmod
    $ sudo mount -t lustre 192.168.1.61@o2ib,192.168.1.62@o2ib:/ai200 /mnt/ai200/
    $ lnetctl net show -v 3 | grep health
              health stats:
                  health value: 0
              health stats:
                  health value: 1000
              health stats:
                  health value: 1000
              health stats:
                  health value: 1000
              health stats:
                  health value: 1000
              health stats:
                  health value: 1000
              health stats:
                  health value: 1000
              health stats:
                  health value: 1000
    
    $ sudo lnetctl net show -v 3 | grep -B 2 -i 'send_count\|recv_count'
              status: up
              statistics:
                  send_count: 0
                  recv_count: 0
    --
                  0: ib0
              statistics:
                  send_count: 3
                  recv_count: 3
    --
                  0: ib2
              statistics:
                  send_count: 3
                  recv_count: 3
    --
                  0: ib3
              statistics:
                  send_count: 2
                  recv_count: 2
    --
                  0: ib4
              statistics:
                  send_count: 13
                  recv_count: 13
    --
                  0: ib5
              statistics:
                  send_count: 12
                  recv_count: 12
    --
                  0: ib6
              statistics:
                  send_count: 12
                  recv_count: 12
    --
                  0: ib7
              statistics:
                  send_count: 11
                  recv_count: 11
     
    $ echo 1 > /sys/module/nvidia_fs/parameters/peer_stats_enabled
     
    $ /usr/local/cuda-x.y/tools/gdsio -f /mnt/ai200/test -d 0 -n 0 -w 1 -s 1G -i 4K -x 0 -I 1
    IoType: WRITE XferType: GPUD Threads: 1  DataSetSize: 1073741824/1073741824 IOSize: 4(KB),Throughput: 0.004727 GB/sec, Avg_Latency: 807.026154 usecs ops: 262144 total_time 211562847.000000 usecs
    
     
    
    $ lnetctl net show -v 3 | grep -B 2 -i 'send_count\|recv_count'
    
              status: up
              statistics:
                  send_count: 0
                  recv_count: 0
    --
                  0: ib0
              statistics:
                  send_count: 262149
                  recv_count: 524293
    --
                  0: ib2
              statistics:
                  send_count: 6
                  recv_count: 6
    --
                  0: ib3
              statistics:
                  send_count: 6
                  recv_count: 6
    --
                  0: ib4
              statistics:
                  send_count: 33
                  recv_count: 33
    --
                  0: ib5
              statistics:
                  send_count: 32
                  recv_count: 32
    --
                  0: ib6
              statistics:
                  send_count: 32
                  recv_count: 32
    --
                  0: ib7
              statistics:
                  send_count: 32
                  recv_count: 32
    
    $ cat /proc/driver/nvidia-fs/peer_affinity
    GPU P2P DMA distribution based on pci-distance
    
    (last column indicates p2p via root complex)
    GPU :0000:be:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:3b:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:e7:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:e5:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:e0:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:57:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:39:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:36:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:e2:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:59:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:b7:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:b9:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:bc:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:34:00.0 :0 0 23872512 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:5e:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:5c:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
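
The peer_affinity rows above can be summarized programmatically. This sketch assumes each row is a GPU BDF followed by counters bucketed by pci-distance, with the last column indicating p2p traffic via the root complex (as the note in the output states); the exact field layout may vary by driver version:

```python
def parse_peer_affinity(text):
    """Parse /proc/driver/nvidia-fs/peer_affinity-style rows into
    {gpu_bdf: (total_dma, via_root_complex)} (sketch based on the
    sample output above)."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("GPU :"):
            continue
        # Row format: 'GPU :<bdf> :<count> <count> ...'
        bdf, _, counts = line[len("GPU :"):].partition(" :")
        values = [int(v) for v in counts.split()]
        result[bdf.strip()] = (sum(values), values[-1])
    return result
```

A nonzero second element for a GPU means some of its p2p DMA traffic crossed the root complex, which usually hurts performance.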
    

8.6. Checking for LNet-Level Errors

Here is some information about how you can determine whether there are LNet-level errors.

The errors impact the health of individual NICs and affect how the EXAScaler® filesystem selects the best peer, which impacts GDS performance.

Note: To run these commands, you must have sudo privileges.
  1. Run the following command:
    $ cat /proc/driver/nvidia-fs/peer_affinity
  2. Review the output, for example:
    GPU P2P DMA distribution based on pci-distance
    (last column indicates p2p via root complex)
    GPU :0000:be:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:3b:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:e7:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:e5:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:e0:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:57:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1276417
    
    (Note: if peer traffic goes over the root port, one of the reasons might be that the health of the nearest NIC is affected.)
    GPU :0000:39:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:36:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:e2:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:59:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:b7:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:b9:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:bc:00.0 :0 0 7056141 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:34:00.0 :0 0 8356175 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:5e:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:5c:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    
    $ lnetctl stats show
    statistics:
        msgs_alloc: 1
        msgs_max: 126
        rst_alloc: 25
        errors: 0
        send_count: 243901
        resend_count: 1
        response_timeout_count: 1935
        local_interrupt_count: 0
        local_dropped_count: 208
        local_aborted_count: 0
        local_no_route_count: 0
        local_timeout_count: 1730
        local_error_count: 0
        remote_dropped_count: 0
        remote_error_count: 0
        remote_timeout_count: 0
        network_timeout_count: 0
        recv_count: 564436
        route_count: 0
        drop_count: 0
        send_length: 336176013248
        recv_length: 95073248
        route_length: 0
        drop_length: 0
    $ lnetctl net show -v 4
    
    net:
        - net type: o2ib
          local NI(s):
            - nid: 192.168.1.71@o2ib
              status: up
              interfaces:
                  0: ib0
              statistics:
                  send_count: 171621
                  recv_count: 459717
                  drop_count: 0
              sent_stats:
                  put: 119492
                  get: 52129
                  reply: 0
                  ack: 0
                  hello: 0
              received_stats:
                  put: 119492
                  get: 0
                  reply: 340225
                  ack: 0
                  hello: 0
              dropped_stats:
                  put: 0
                  get: 0
                  reply: 0
                  ack: 0
                  hello: 0
              health stats:
                  health value: 1000
                  interrupts: 0
                  dropped: 0
                  aborted: 0
                  no route: 0
                  timeouts: 0
                  error: 0
              tunables:
                  peer_timeout: 180
                  peer_credits: 32
                  peer_buffer_credits: 0
                  credits: 256
                  peercredits_hiw: 16
                  map_on_demand: 1
                  concurrent_sends: 64
                  fmr_pool_size: 512
                  fmr_flush_trigger: 384
                  fmr_cache: 1
                  ntx: 512
                  conns_per_peer: 1
              lnd tunables:
              dev cpt: 0
              tcp bonding: 0
              CPT: "[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]"
            - nid: 192.168.2.71@o2ib
              status: up
              interfaces:
                  0: ib1
              statistics:
                  send_count: 79
                  recv_count: 79
                  drop_count: 0
              sent_stats:
                  put: 78
                  get: 1
                  reply: 0
                  ack: 0
                  hello: 0
              received_stats:
                  put: 78
                  get: 0
                  reply: 1
                  ack: 0
                  hello: 0
              dropped_stats:
                  put: 0
                  get: 0
                  reply: 0
                  ack: 0
                  hello: 0
              health stats:
                  health value: 979
                  interrupts: 0
                  dropped: 0
                  aborted: 0
                  no route: 0
                  timeouts: 1
                  error: 0
              tunables:
                  peer_timeout: 180
                  peer_credits: 32
                  peer_buffer_credits: 0
                  credits: 256
                  peercredits_hiw: 16
                  map_on_demand: 1
                  concurrent_sends: 64
                  fmr_pool_size: 512
                  fmr_flush_trigger: 384
                  fmr_cache: 1
                  ntx: 512
                  conns_per_peer: 1
              lnd tunables:
              dev cpt: 0
              tcp bonding: 0
              CPT: "[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]"
            - nid: 192.168.2.72@o2ib
              status: up
              interfaces:
                  0: ib3
              statistics:
                  send_count: 52154
                  recv_count: 52154
                  drop_count: 0
              sent_stats:
                  put: 25
                  get: 52129
                  reply: 0
                  ack: 0
                  hello: 0
              received_stats:
                  put: 25
                  get: 52129
                  reply: 0
                  ack: 0
                  hello: 0
              dropped_stats:
                  put: 0
                  get: 0
                  reply: 0
                  ack: 0
                  hello: 0
              health stats:
                  health value: 66
                  interrupts: 0
                  dropped: 208
                  aborted: 0
                  no route: 0
                  timeouts: 1735
                  error: 0
              tunables:
                  peer_timeout: 180
                  peer_credits: 32
                  peer_buffer_credits: 0
                  credits: 256
                  peercredits_hiw: 16
                  map_on_demand: 1
                  concurrent_sends: 64
                  fmr_pool_size: 512
                  fmr_flush_trigger: 384
                  fmr_cache: 1
                  ntx: 512
                  conns_per_peer: 1
              lnd tunables:
              dev cpt: 0
              tcp bonding: 0
             CPT: "[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]"  
If you see incrementing error stats, capture the net logging and provide this information for debugging:
$ lctl set_param debug=+net
# reproduce the problem
$ lctl dk > log
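
To spot incrementing error stats without eyeballing the full dump, you can filter the lnetctl stats show output. The following sketch is illustrative; the counter names are taken from the sample output above:

```python
import re

# Error-related counters from the 'lnetctl stats show' sample output
ERROR_KEYS = (
    "errors", "local_dropped_count", "local_timeout_count",
    "local_error_count", "remote_error_count", "remote_timeout_count",
    "network_timeout_count", "response_timeout_count", "drop_count",
)

def nonzero_error_counters(stats_text):
    """Return {counter: value} for error-related counters that have a
    nonzero value in 'lnetctl stats show' text output (sketch)."""
    found = {}
    for key in ERROR_KEYS:
        m = re.search(rf"^\s*{key}:\s*(\d+)", stats_text, re.MULTILINE)
        if m and int(m.group(1)) > 0:
            found[key] = int(m.group(1))
    return found
```

Comparing two snapshots of this dictionary taken before and after a workload shows which counters are incrementing.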

8.7. Resolving LNet NIDs Health Degradation from Timeouts

With large machines, such as DGX™ that have multiple interfaces, if Linux routing is not correctly set up, there might be connection failures and other unexpected behavior.

Here is the typical network setting that is used to resolve local connection timeouts:
sysctl -w net.ipv4.conf.all.accept_local=1
There are also generic pointers for resolving LNet Network issues. Refer to MR Cluster Setup for more information.

8.8. Configuring LNet Networks with Multiple OSTs for Optimal Peer Selection

Here is some information about how to configure LNET networks that have multiple Object Storage Target (OSTs).

When there are multiple OSTs, and each OST has dual interfaces, each OST needs to have one interface on each of the LNets for which the client is configured.

For example, you have the following two LNet Subnets on the client side:
  • o2ib
  • o2ib1
The server has only one LNet subnet, o2ib. In this situation, the routing is not optimal, because you are restricting the IB selection logic to a set of devices that might not be closest to the GPU. There is no way to reach OST2 except over the LNet to which it is connected.

The traffic that goes to this OST will never be optimal, and this configuration might affect overall throughput and latency. If, however, you configure the server to use two networks (o2ib0 and o2ib1), OST1 and OST2 can be reached over both networks. When the selection algorithm runs, it will determine that the best path is, for example, OST2 over o2ib1.

  1. To configure the client-side LNET, run the following command:
    $ sudo lnetctl net show
  2. Review the output, for example:
    net:
        - net type: lo
          local NI(s):
            - nid: 0@lo
              status: up
        - net type: o2ib
          local NI(s):
            - nid: 192.168.1.71@o2ib
              status: up
              interfaces:
                  0: ib0
            - nid: 192.168.1.72@o2ib
              status: up
              interfaces:
                  0: ib2
            - nid: 192.168.1.73@o2ib
              status: up
              interfaces:
                  0: ib4
            - nid: 192.168.1.74@o2ib
              status: up
              interfaces:
                  0: ib6
        - net type: o2ib1
          local NI(s):
            - nid: 192.168.2.71@o2ib1
              status: up
              interfaces:
                  0: ib1
            - nid: 192.168.2.72@o2ib1
              status: up
              interfaces:
                  0: ib3
            - nid: 192.168.2.73@o2ib1
              status: up
              interfaces:
                  0: ib5
            - nid: 192.168.2.74@o2ib1
              status: up
              interfaces:
                  0: ib7

For an optimal configuration, the LNet peer should show two LNet subnets.

In this case, each peer's primary nid is on the single o2ib subnet:
$ sudo lnetctl peer show
Here is the sample output:
peer:
    - primary nid: 192.168.1.62@o2ib
      Multi-Rail: True
      peer ni:
        - nid: 192.168.1.62@o2ib
          state: NA
        - nid: 192.168.2.62@o2ib1
          state: NA
    - primary nid: 192.168.1.61@o2ib
      Multi-Rail: True
      peer ni:
        - nid: 192.168.1.61@o2ib
          state: NA
        - nid: 192.168.2.61@o2ib1
          state: NA
From the server side, here is an example of sub-optimal LNet configuration:
[root@ai200-090a-vm01 ~]# lnetctl net show
net:
    - net type: lo
      local NI(s):
        - nid: 0@lo
          status: up
    - net type: o2ib (o2ib1 is not present)
      local NI(s):
        - nid: 192.168.1.62@o2ib
          status: up
          interfaces:
              0: ib0
        - nid: 192.168.2.62@o2ib
          status: up
          interfaces:
              0: ib1
Here is an example of an IB configuration for a non-optimal case, where a file is striped over two OSTs, and there are sequential reads:
$ ibdev2netdev -v

0000:b8:00.1 mlx5_13 (MT4123 - MCX653106A-ECAT) ConnectX-6 VPI adapter card, 100Gb/s (HDR100, EDR IB and 100GbE), dual-port QSFP56 fw 20.26.4012 port 1 (ACTIVE) ==> ib4 (Up) (o2ib)

ib4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 2044

        inet 192.168.1.73  netmask 255.255.255.0  broadcast 192.168.1.255

0000:bd:00.1 mlx5_15 (MT4123 - MCX653106A-ECAT) ConnectX-6 VPI adapter card, 100Gb/s (HDR100, EDR IB and 100GbE), dual-port QSFP56 fw 20.26.4012 port 1 (ACTIVE) ==> ib5 (Up) (o2ib1)

ib5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 2044

        inet 192.168.2.73  netmask 255.255.255.0  broadcast 192.168.2.255

        

$ cat /proc/driver/nvidia-fs/peer_distance  | grep 0000:be:00.0 | grep network

0000:be:00.0    0000:58:00.1    138     0       network
0000:be:00.0    0000:58:00.0    138     0       network
0000:be:00.0    0000:86:00.1    134     0       network
0000:be:00.0    0000:35:00.0    138     0       network
0000:be:00.0    0000:5d:00.0    138     0       network
0000:be:00.0    0000:bd:00.0    3       0       network
0000:be:00.0    0000:b8:00.1    7       30210269        network (ib4) (chosen peer)
0000:be:00.0    0000:06:00.0    134     0       network
0000:be:00.0    0000:0c:00.1    134     0       network
0000:be:00.0    0000:e6:00.0    138     0       network
0000:be:00.0    0000:3a:00.1    138     0       network
0000:be:00.0    0000:e1:00.0    138     0       network
0000:be:00.0    0000:bd:00.1    3       4082933 network (ib5) (best peer)
0000:be:00.0    0000:e6:00.1    138     0       network
0000:be:00.0    0000:86:00.0    134     0       network
0000:be:00.0    0000:35:00.1    138     0       network
0000:be:00.0    0000:e1:00.1    138     0       network
0000:be:00.0    0000:0c:00.0    134     0       network
0000:be:00.0    0000:b8:00.0    7       0       network
0000:be:00.0    0000:5d:00.1    138     0       network
0000:be:00.0    0000:3a:00.0    138     0       network
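
The peer selection shown in this peer_distance output (the closest NIC by pci-distance wins) can be sketched as follows. This is an illustration of the idea, not the nvidia-fs driver's actual algorithm, and it assumes lines of the form '<gpu-bdf> <peer-bdf> <distance> <traffic> network':

```python
def closest_network_peer(peer_distance_text):
    """From /proc/driver/nvidia-fs/peer_distance-style lines, return
    the (peer_bdf, distance) pair with the smallest pci-distance
    (sketch; the real selection also depends on NIC health and the
    LNet configuration, as the example above shows)."""
    best = None
    for line in peer_distance_text.splitlines():
        parts = line.split()
        if len(parts) < 5 or parts[4] != "network":
            continue
        peer, distance = parts[1], int(parts[2])
        if best is None or distance < best[1]:
            best = (peer, distance)
    return best
```

In the non-optimal case above, the closest peer by pci-distance (ib5's port) is not the one carrying most of the traffic, which is exactly what the LNet subnet mismatch causes.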
Here is an example of an optimal LNet configuration:
[root@ai200-090a-vm00 ~]# lnetctl net show
net:
    - net type: lo
      local NI(s):
        - nid: 0@lo
          status: up
    - net type: o2ib
      local NI(s):
        - nid: 192.168.1.61@o2ib
          status: up
          interfaces:
              0: ib0
    - net type: o2ib1
      local NI(s):
        - nid: 192.168.2.61@o2ib1
          status: up
          interfaces:
              0: ib1

9. Understanding EXAScaler® Filesystem Performance

Depending on the type of host channel adapter (HCA), commonly known as a NIC, there are module parameters that can be tuned for LNet. The NICs that you select should be up and healthy.

To verify their health, mount the filesystem, run some basic tests, and check the lnetctl health statistics. To review the LNet module parameters, run the following command:
$ cat /etc/modprobe.d/lustre.conf
Review the output, for example:
options libcfs cpu_npartitions=24 cpu_pattern=""                        
options lnet networks="o2ib0(ib1,ib2,ib3,ib4,ib6,ib7,ib8,ib9)"                         
options ko2iblnd peer_credits=32 concurrent_sends=64 peer_credits_hiw=16 map_on_demand=0 

9.1. osc Tuning Performance Parameters

Here is some information about tuning filesystem parameters.

Note: To maximize the throughput, you can tune the following EXAScaler® filesystem client parameters, based on the network.
  1. Run the following command:
    $ lctl get_param osc.*.max* osc.*.checksums
    
  2. Review the output, for example:
    $ lctl get_param osc.*.max* osc.*.checksums
    
    osc.ai400-OST0024-osc-ffff916f6533a000.max_pages_per_rpc=4096
    osc.ai400-OST0024-osc-ffff916f6533a000.max_dirty_mb=512
    osc.ai400-OST0024-osc-ffff916f6533a000.max_rpcs_in_flight=32
    osc.ai400-OST0024-osc-ffff916f6533a000.checksums=0
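
The values shown above can be applied with lctl set_param; a sketch, assuming the same example values (these settings do not persist across reboots, so also record them in your client configuration):

```shell
# Example client tunables from the output above; adjust for your network.
lctl set_param osc.*.max_pages_per_rpc=4096
lctl set_param osc.*.max_dirty_mb=512
lctl set_param osc.*.max_rpcs_in_flight=32
lctl set_param osc.*.checksums=0
```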

To check llite parameters, run $ lctl get_param llite.*.*.

9.2. Miscellaneous Commands for osc, mdc, and stripesize

After the tuning parameters are set, you can use the following commands to observe client behavior.

  1. To get overall EXAScaler® filesystem client-side statistics, run the following command:
    $ lctl get_param osc.*.import
    Note: The command includes rpc information.
  2. Review the output, for example:
    $ watch -d 'lctl get_param osc.*.import | grep -B 1 inflight'
        rpcs:
           inflight: 5
        rpcs:
           inflight: 33
    
  1. To get the maximum number of pages that can be transferred per RPC in an EXAScaler® filesystem client, run the following command:
    $ lctl get_param osc.*.max_pages_per_rpc
  1. To get the overall RPC statistics from an EXAScaler® filesystem client, run the following command:
    $ lctl set_param osc.*.rpc_stats=clear (to reset osc stats)
    $ lctl get_param osc.*.rpc_stats
  2. Review the output, for example:
    osc.ai200-OST0000-osc-ffff8e0b47c73800.rpc_stats=
    snapshot_time:         1589919461.185215594 (secs.nsecs)
    read RPCs in flight:  0
    write RPCs in flight: 0
    pending write pages:  0
    pending read pages:   0
    
                            read                    write
    
    pages per rpc         rpcs   % cum % |       rpcs   % cum %
    1:                14222350  77  77   |          0   0   0
    2:                       0   0  77   |          0   0   0
    4:                       0   0  77   |          0   0   0
    8:                       0   0  77   |          0   0   0
    16:                      0   0  77   |          0   0   0
    32:                      0   0  77   |          0   0   0
    64:                      0   0  77   |          0   0   0
    128:                     0   0  77   |          0   0   0
    256:               4130365  22 100   |          0   0   0
    
                            read                    write
    
    rpcs in flight        rpcs   % cum % |       rpcs   % cum %
    0:                       0   0   0   |          0   0   0
    1:                 3236263  17  17   |          0   0   0
    2:                  117001   0  18   |          0   0   0
    3:                  168119   0  19   |          0   0   0
    4:                  153295   0  20   |          0   0   0
    5:                   91598   0  20   |          0   0   0
    6:                   42476   0  20   |          0   0   0
    7:                   17578   0  20   |          0   0   0
    8:                    9454   0  20   |          0   0   0
    9:                    7611   0  20   |          0   0   0
    10:                   7772   0  20   |          0   0   0
    11:                   8914   0  21   |          0   0   0
    12:                   9350   0  21   |          0   0   0
    13:                   8559   0  21   |          0   0   0
    14:                   8734   0  21   |          0   0   0
    15:                  10784   0  21   |          0   0   0
    16:                  11386   0  21   |          0   0   0
    17:                  13148   0  21   |          0   0   0
    18:                  15473   0  21   |          0   0   0
    19:                  17619   0  21   |          0   0   0
    20:                  18851   0  21   |          0   0   0
    21:                  21853   0  21   |          0   0   0
    22:                  21236   0  21   |          0   0   0
    23:                  21588   0  22   |          0   0   0
    24:                  23859   0  22   |          0   0   0
    25:                  24049   0  22   |          0   0   0
    26:                  26232   0  22   |          0   0   0
    27:                  29853   0  22   |          0   0   0
    28:                  31992   0  22   |          0   0   0
    29:                  43626   0  22   |          0   0   0
    30:                 116116   0  23   |          0   0   0
    31:               14018326  76 100   |          0   0   0
  1. To get statistics that are related to client metadata operations, run the following command:
    Note: MetaDataClient (MDC) is the client side counterpart of MetaData Server (MDS).
    $ lctl get_param mdc.*.md_stats
  1. To get the stripe layout of the file on the EXAScaler® filesystem, run the following command:
    $ lfs getstripe /mnt/ai200 
    

9.3. Getting the Number of Configured Object-Based Disks

Here is some information about how you can get the number of configured object-based disks.

  1. Run the following command:
    $ lctl get_param lov.*.target_obd
    
  2. Review the output, for example:
    0: ai200-OST0000_UUID ACTIVE
    1: ai200-OST0001_UUID ACTIVE

9.4. Getting Additional Statistics related to the EXAScaler® Filesystem

You can get additional statistics that are related to the EXAScaler® Filesystem.

Refer to the Lustre Monitoring and Statistics Guide for more information.

9.5. Getting Metadata Statistics

Here is some information about how you can get metadata statistics.

  1. Run the following command:
    $ lctl get_param lmv.*.md_stats
  2. Review the output, for example:
    snapshot_time             1571271931.653827773 secs.nsecs
    close                     8 samples [reqs]
    create                    1 samples [reqs]
    getattr                   1 samples [reqs]
    intent_lock               81 samples[reqs]
    read_page                 3 samples [reqs]
    revalidate_lock           1 samples [reqs]

9.6. Checking for an Existing Mount

Here is some information about how you can check for an existing mount in the EXAScaler® Filesystem.

  1. Run the following command:
    $ mount | grep lustre   
  2. Review the output, for example:
    192.168.1.61@o2ib,192.168.1.62@o2ib1:/ai200 on /mnt/ai200 type lustre
    (rw,flock,lazystatfs)

9.7. Unmounting an EXAScaler® Filesystem Cluster

Here is some information about how to unmount an EXAScaler® filesystem cluster.

Run the following command.
$ sudo umount /mnt/ai200

9.8. Getting a Summary of EXAScaler Filesystem Statistics

You can get a summary of statistics for the EXAScaler® filesystem.

Refer to the Lustre Monitoring and Statistics Guide for more information about EXAScaler® filesystem statistics.

9.9. Using GPUDirect Storage in Poll Mode

Here is some information about how to use GDS in Poll Mode with EXAScaler® filesystem files that have a Stripe Count greater than 1.

Currently, if poll mode is enabled, cuFileRead or cuFileWrite might return fewer bytes than were requested. This behavior is POSIX compliant and is observed with files that have a stripe count greater than 1 in their layout. If this behavior occurs, we recommend that the application check the returned byte count and continue issuing IO until all of the data is consumed. You can also set the properties.poll_mode_max_size_kb value (for example, 1024 KB) to the lowest stripe size in the directory, which ensures that IO sizes that exceed this limit are not polled.
  1. To check EXAScaler® filesystem file layout, run the following command.
    $ lfs getstripe <file-path>
  2. Review the output, for example:
    lfs getstripe /mnt/ai200/single_stripe/md1.0.0
    /mnt/ai200/single_stripe/md1.0.0
    lmm_stripe_count:  1
    lmm_stripe_size:   1048576
    lmm_pattern:       raid0
    lmm_layout_gen:    0
    lmm_stripe_offset: 0
            obdidx           objid           objid           group
                 0            6146         0x1802                0
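
The properties.poll_mode_max_size_kb limit described above is set in /etc/cufile.json. A minimal sketch, assuming the 1 MiB stripe size found with lfs getstripe above:

```json
{
    "properties": {
        // enable poll mode for small IO
        "poll": true,
        // IOs larger than this size (in KB) are not polled
        "poll_mode_max_size_kb": 1024
    }
}
```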

10. Troubleshooting and FAQ for the WekaIO Filesystem

This section provides troubleshooting and FAQ information about the WekaIO file system.

10.1. Downloading the WekaIO Client Package

Here is some information about how to download the WekaIO client package.

Run the following command:
$ curl http://<IP of one of the WekaIO hosts' IB interface>:14000/dist/v1/install | sh

For example, $ curl http://172.16.8.1:14000/dist/v1/install | sh.

10.2. Determining Whether the WekaIO Version is Ready for GDS

Here is some information about how to determine whether the WekaIO version is ready for GDS.

Currently, the only WekaIO FS version that supports GDS is 3.6.2.5-rdma-beta:
  1. Run the following command:
    $ weka version
    
  2. Review the output, for example:
    * 3.6.2.5-rdma-beta

10.3. Mounting a WekaIO File System Cluster

Here is some information about how to mount a WekaIO file system cluster.

The WekaIO filesystem can take a parameter to reserve a fixed number of cores for the user space process.
  1. To mount a server_ip 172.16.8.1 with two dedicated cores, run the following command:
    $ mkdir -p /mnt/weka
    $ sudo mount -t wekafs -o num_cores=2 -o net=ib0,net=ib1,net=ib2,net=ib3,net=ib4,net=ib5,net=ib6,net=ib7 
    172.16.8.1/fs01 /mnt/weka
    
  2. Review the output, for example:
    Mounting 172.16.8.1/fs01 on /mnt/weka
    Creating weka container
    Starting container
    Waiting for container to join cluster
    Container "client" is ready (pid = 47740)
    Calling the mount command
    Mount completed successfully

10.4. Resolving a Failing Mount

Here is some information about how you can resolve a failing mount.

  1. Before you use the IB interfaces in the net=<interface> mount options, verify that the interfaces are up:
    $ sudo mount -t wekafs -o num_cores=2 -o 
    net=ib0,net=ib1,net=ib2,net=ib3,net=ib4,net=ib5,net=ib6,net=ib7 
    172.16.8.1/fs01 /mnt/weka
  2. Review the output, for example:
    Mounting 172.16.8.1/fs01 on /mnt/weka
    Creating weka container
    Starting container
    Waiting for container to join cluster
    error: Container "client" has run into an error: Resources 
    assignment failed: IB/MLNX network devices should have 
    pre-configured IPs and ib4 has none
  3. Remove interfaces that do not have network connectivity from the mount options.
    $ ibdev2netdev
    
    mlx5_0 port 1 ==> ib0 (Up)
    mlx5_1 port 1 ==> ib1 (Up)
    mlx5_2 port 1 ==> ib2 (Up)
    mlx5_3 port 1 ==> ib3 (Up)
    mlx5_4 port 1 ==> ib4 (Down)
    mlx5_5 port 1 ==> ib5 (Down)
    mlx5_6 port 1 ==> ib6 (Up)
    mlx5_7 port 1 ==> ib7 (Up)
    mlx5_8 port 1 ==> ib8 (Up)
    mlx5_9 port 1 ==> ib9 (Up)
    

10.5. Resolving 100% Usage for WekaIO for Two Cores

If you have two cores, and you are experiencing 100% CPU usage, here is some information about how to resolve this situation.

  1. Run the following command.
    $ top
  2. Review the output, for example:
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
    54816 root      20   0 11.639g 1.452g 392440 R  94.4  0.1 781:06.06 wekanode
    54825 root      20   0 11.639g 1.452g 392440 R  94.4  0.1 782:00.32 wekanode

    When the num_cores=2 parameter is specified, two cores are used for the user mode poll driver for WekaIO FE networking. This process improves the latency and performance. Refer to the WekaIO documentation for more information.

10.6. Checking for an Existing Mount in the Weka File System

Here is some information about how to check for an existing mount in the WekaIO file system.

  1. Run the following command:
    $ mount | grep wekafs
  2. Review the output, for example:
    172.16.8.1/fs01 on /mnt/weka type wekafs (
    rw,relatime,writecache,inode_bits=auto,dentry_max_age_positive=1000,
    dentry_max_age_negative=0)

10.7. Checking for a Summary of the WekaIO Filesystem Status

Here is some information about how you can check for a summary of the WekaIO file system status.

  1. Run the following command:
    $ weka status
  2. Review the output, for example:
    WekaIO v3.6.2.5-rdma-beta (CLI build 3.6.2.5-rdma-beta)
           cluster: Nvidia (e4a4e227-41d0-47e5-aa70-b50688b31f40)
            status: OK (12 backends UP, 72 drives UP)
        protection: 8+2
         hot spare: 2 failure domains (62.84 TiB)
     drive storage: 62.84 TiB total, 819.19 MiB unprovisioned
             cloud: connected
           license: Unlicensed
    
         io status: STARTED 1 day ago (1584 buckets UP, 228 io-nodes UP)
        link layer: InfiniBand
           clients: 1 connected
             reads: 61.54 GiB/s (63019 IO/s)
            writes: 0 B/s (0 IO/s)
        operations: 63019 ops/s
        alerts: 3 active alerts, use `weka alerts` to list them
    

10.8. Displaying the Summary of the WekaIO Filesystem Statistics

You can display a summary of the status of the WekaIO filesystem.

  1. Run the following command.
    $ cat /proc/wekafs/stat
  2. Review the output, for example:
    IO type:      UM Average      UM Longest      KM Average      KM Longest                IO count
    --------------------------------------------------------------------------------------------------------------------------------
              total:          812 us       563448 us         9398 ns     10125660 ns               718319292 (63260 IOPS, 0 MB/sec)
             lookup:          117 us         3105 us         6485 ns       436709 ns                4079 (12041)
            readdir:            0 us            0 us            0 ns            0 ns                       0
              mknod:          231 us          453 us         3970 ns         6337 ns                      96
               open:            0 us            0 us            0 ns            0 ns                   0 (3232)
            release:            0 us            0 us            0 ns            0 ns                   0 (2720)
               read:            0 us            0 us            0 ns            0 ns                       0
              write:        18957 us       563448 us       495291 ns       920127 ns              983137 (983041)
            getattr:           10 us           10 us         6771 ns         6771 ns                   1 (9271)
            setattr:          245 us          424 us         4991 ns        48222 ns                      96
              rmdir:            0 us            0 us            0 ns            0 ns                       0
             unlink:            0 us            0 us            0 ns            0 ns                       0
             rename:            0 us            0 us            0 ns            0 ns                       0
            symlink:            0 us            0 us            0 ns            0 ns                       0
           readlink:            0 us            0 us            0 ns            0 ns                       0
           hardlink:            0 us            0 us            0 ns            0 ns                       0
             statfs:         4664 us         5072 us        38947 ns        59618 ns                       7
         SG_release:            0 us            0 us            0 ns            0 ns                       0
        SG_allocate:         1042 us         7118 us         2161 ns       110282 ns                  983072
             falloc:          349 us          472 us         4184 ns        10239 ns                      96
        atomic_open:            0 us            0 us            0 ns            0 ns                       0
              flock:            0 us            0 us            0 ns            0 ns                       0
           backcomm:            0 us            0 us            0 ns            0 ns                       0
            getroot:        19701 us        19701 us        57853 ns        57853 ns                       1
              trace:            0 us            0 us            0 ns            0 ns                       0
        jumbo alloc:            0 us            0 us            0 ns            0 ns                       0
      jumbo release:            0 us            0 us            0 ns            0 ns                       0
        jumbo write:            0 us            0 us            0 ns            0 ns                       0
         jumbo read:            0 us            0 us            0 ns            0 ns                       0
          keepalive:           46 us      1639968 us         1462 ns        38996 ns                  184255
              ioctl:          787 us        50631 us         8732 ns     10125660 ns               717328710
           setxattr:            0 us            0 us            0 ns            0 ns                       0
           getxattr:            0 us            0 us            0 ns            0 ns                       0
          listxattr:            0 us            0 us            0 ns            0 ns                       0
        removexattr:            0 us            0 us            0 ns            0 ns                       0
      setfileaccess:          130 us         3437 us         6440 ns        71036 ns                    3072
            unmount:            0 us            0 us            0 ns            0 ns                       0
    
    

10.9. Understanding Why WekaIO Writes Go Through POSIX

Here is some information to help you understand why, for GDS, WekaIO writes are going through POSIX.

For the WekaIO filesystem, GDS supports RDMA-based reads and writes. You can use the fs:weka:rdma_write_support JSON property to enable writes on supported Weka filesystems. This option is disabled by default. If this option is set to false, writes are internally staged through system memory, and the cuFile library uses pwrite POSIX calls internally for writes.

10.10. Checking for nvidia-fs.ko Support for Memory Peer Direct

Here is some information about how you can check for nvidia-fs.ko support for memory peer direct.

  1. Run the following command:
    $ lsmod | grep nvidia_fs | grep ib_core && echo "Ready for Memory Peer Direct"
  2. Review the output, for example:
    ib_core               319488  16 
    rdma_cm,ib_ipoib,mlx4_ib,ib_srp,iw_cm,nvidia_fs,ib_iser,ib_umad,
    rdma_ucm,ib_uverbs,mlx5_ib,ib_cm,ib_ucm
    Ready for Memory Peer Direct

10.11. Checking Memory Peer Direct Stats

Here is some information about how to check memory peer statistics.

  1. Run the following script, which shows the counters for memory peer direct statistics:
    list=`ls /sys/kernel/mm/memory_peers/nvidia-fs/`; for stat in $list;
    do echo "$stat value: " $(cat /sys/kernel/mm/memory_peers/nvidia-fs/$stat); done
  2. Review the output.
    num_alloc_mrs value:  1288
    num_dealloc_mrs value:  1288
    num_dereg_bytes value:  1350565888
    num_dereg_pages value:  329728
    num_free_callbacks value:  0
    num_reg_bytes value:  1350565888
    num_reg_pages value:  329728
    version value:  1.0
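As a sanity check, num_reg_bytes divided by num_reg_pages should equal the 4 KiB system page size; the example values above satisfy this:

```shell
# Values taken from the example output above.
reg_bytes=1350565888
reg_pages=329728
page_size=$(( reg_bytes / reg_pages ))
echo "page size: $page_size bytes"   # 4096
```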

10.12. Checking for Relevant nvidia-fs Statistics for the WekaIO Filesystem

Here is some information about how you can check for relevant nvidia-fs statistics for the WekaIO file system.

Note: Reads, Writes, Ops, and Error counters are not available through this interface for the WekaIO filesystem, so the value will be zero. See Displaying the Summary of the WekaIO Filesystem Statistics about using the Weka status for reads and writes.
  1. Run the following command:
    $ cat /proc/driver/nvidia-fs/stats | egrep -v 'Reads|Writes|Ops|Error'
  2. Review the output, for example:
    GDS Version: 0.7.1
    NVFS statistics(ver: 1.0)
    NVFS Driver(version: 2:0)
    Active Shadow-Buffer (MB): 256
    Active Process: 1
    Mmap            : n=2088 ok=2088 err=0 munmap=1832
    Bar1-map        : n=2088 ok=2088 err=0 free=1826 callbacks=6 active=256
    GPU 0000:34:00.0  uuid:12a86a5e-3002-108f-ee49-4b51266cdc07 : Registered_MB=32 Cache_MB=0 max_pinned_MB=1977
    GPU 0000:e5:00.0  uuid:4c2c6b1c-27ac-8bed-8e88-9e59a5e348b5 : Registered_MB=32 Cache_MB=0 max_pinned_MB=32
    GPU 0000:b7:00.0  uuid:b224ba5e-96d2-f793-3dfd-9caf6d4c31d8 : Registered_MB=32 Cache_MB=0 max_pinned_MB=32
    GPU 0000:39:00.0  uuid:e8fac7f5-d85d-7353-8d76-330628508052 : Registered_MB=32 Cache_MB=0 max_pinned_MB=32
    GPU 0000:5c:00.0  uuid:2b13ed25-f0ab-aedb-1f5c-326745b85176 : Registered_MB=32 Cache_MB=0 max_pinned_MB=32
    GPU 0000:e0:00.0  uuid:df46743a-9b22-30ce-6ea0-62562efaf0a2 : Registered_MB=32 Cache_MB=0 max_pinned_MB=32
    GPU 0000:bc:00.0  uuid:c4136168-2a1d-1f3f-534c-7dd725fedbff : Registered_MB=32 Cache_MB=0 max_pinned_MB=32
    GPU 0000:57:00.0  uuid:54e472f2-e4ee-18dc-f2a1-3595fa8f3d33 : Registered_MB=32 Cache_MB=0 max_pinned_MB=32

10.13. Conducting a Basic WekaIO Filesystem Test

Here is some information about how to conduct a basic WekaIO filesystem test.

  1. Run the following command:
    $ /usr/local/cuda-x.y/tools/gdsio_verify  -f /mnt/weka/gdstest/tests/reg1G 
    -n 1 -m 0 -s 1024 -o 0  -d 0 -t 0 -S -g 4K
  2. Review the output, for example:
    gpu index :0,file :/mnt/weka/gdstest/tests/reg1G, RING buffer size :0, 
    gpu buffer alignment :4096, gpu buffer offset :0, file offset :0, 
    io_requested :1024, bufregister :false, sync :0, nr ios :1,fsync :0,
    address = 0x564ffc5e76c0
    Data Verification Success

10.14. Unmounting a WekaIO File System Cluster

Here is some information about how to unmount a WekaIO file system cluster.

  1. Run the following command.
    $ sudo umount /mnt/weka
    
  2. Review the output, for example:
    Unmounting /mnt/weka
    Calling the umount command
    umount successful, stopping and deleting client container
    Umount completed successfully
    

10.15. Verify the Installed Libraries for the WekaIO Filesystem

Here is some information about verifying the installed libraries for the WekaIO filesystem.

Table 3. Verifying the Installed Libraries for WekaIO Filesystems
Task Output
Check the WekaIO version.
$ weka status
WekaIO v3.6.2.5-rdma-beta (CLI build 3.6.2.5-rdma-beta)
 
Check whether GDS support for WekaFS is present.
$ /usr/local/cuda-x.y/gdscheck -p

GDS release version (beta): 0.9.0
     nvidia_fs version:  2.3 libcufile version: 2.2
     cuFile CONFIGURATION:
     NVM: Supported
     NVMeOF: Unsupported
     SCSI : Unsupported
     LUSTRE: Unsupported
     NFS : Unsupported
     WEKAFS: Supported
     USERSPACE RDMA: Supported
     --MOFED peer direct: enabled
     --rdma lib: (libcufile_rdma.so)  Loaded
     --rdma devices       : Configured
     --rdma_device_status : Up: 8 , Down: 0
Check for MLNX_OFED information (ofed_info -s).
Currently supported with MLNX_OFED_LINUX-4.6-1.0.1.1, MLNX_OFED_LINUX-4.7-3.2.9.0, and MLNX_OFED_LINUX-5.1-0.6.6.0.
$ ofed_info -s
MLNX_OFED_LINUX-4.7-3.2.9.0:
Check for the nvidia-fs.ko driver.
$ lsmod | grep nvidia_fs | grep ib_core && echo "Ready for Memory Peer Direct"
Check for libibverbs.so
$ dpkg -s libibverbs-dev
Package: libibverbs-dev
Status: install ok installed
Priority: optional
Section: libdevel
Installed-Size: 1151
Maintainer: Linux RDMA Mailing List <linux-rdma@vger.kernel.org>
Architecture: amd64
Multi-Arch: same
Source: rdma-core
Version: 47mlnx1-1.47329

10.16. GDS Configuration File Changes to Support the WekaIO Filesystem

Here is some information about the GDS configuration file changes that are required to support the WekaIO filesystem.

  1. By default, the configuration for Weka RDMA-based writes is disabled.
    "fs": {
        "weka": {
            // enable/disable WekaFs rdma write
            "rdma_write_support" : false
        }
    }
  2. Change the configuration to add a new property, rdma_dev_addr_list.
    "properties": {
        // allow compat mode,
        // this will enable use of cufile posix read/writes
        //"allow_compat_mode": true,
        "rdma_dev_addr_list": [
            "172.16.8.88", "172.16.8.89",
            "172.16.8.90", "172.16.8.91",
            "172.16.8.92", "172.16.8.93",
            "172.16.8.94", "172.16.8.95"
        ]
    }
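
After editing, you can sanity-check the file's syntax. Note that /etc/cufile.json allows // comments, which strict JSON parsers reject, so strip them first. A sketch using a sample fragment (the sed pattern assumes // does not appear inside string values):

```shell
# Sample cufile.json fragment with a // comment, as in the examples above.
cat > cufile_sample.json <<'EOF'
{
    "properties": {
        // this will enable use of cufile posix read/writes
        "rdma_dev_addr_list": [ "172.16.8.88", "172.16.8.89" ]
    }
}
EOF
# Strip // comments, then parse; python3 -m json.tool exits nonzero on bad JSON.
check=$(sed 's|//.*$||' cufile_sample.json | python3 -m json.tool > /dev/null 2>&1 && echo OK || echo BAD)
echo "JSON syntax: $check"
```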

10.17. Check for Relevant User-Space Statistics for the WekaIO Filesystem

Here is some information about how you can check for relevant user-space statistics for the WekaIO filesystem.

Issue the following command:
$ ./gds_stats -p <pid> -l 3 | grep GPU

See GDS User-Space RDMA Counters for more information about statistics.

10.18. Check for WekaFS Support

Here is some information about how to check for WekaFS support.

If WekaFS support does not exist, the following issues are possible:

Table 4. Weka Filesystem Support Issues
Issue Action
MOFED peer direct is not enabled.

Check whether MOFED is installed (ofed_info -s).

This issue can occur if the nvidia-fs Debian package was installed before MOFED was installed. When this issue occurs, uninstall and reinstall the nvidia-fs package.

RDMA devices are not populated in the /etc/cufile.json file. Add IP addresses to properties.rdma_dev_addr_list. Currently, only IPv4 addresses are supported.
None of the configured RDMA devices are UP. Check IB connectivity for the interfaces.

11. Setting Up and Troubleshooting VAST Data (NFSoRDMA+MultiPath)

This section provides information about how to set up and troubleshoot VAST data (NFSoRDMA+MultiPath).

11.1. Installing MOFED and VAST NFSoRDMA+Multipath packages

This section provides information about system requirements and how to install MOFED and VAST NFSoRDMA+Multipath packages.

11.1.1. Client Software Requirements

Here are the minimum client software requirements.

Table 5. Minimum Client Requirements
NFS Connection Type Linux Kernel MOFED
NFSoRDMA + Multipath
The following kernel versions are supported:
  • 3.10
  • 4.15
  • 4.18
  • 5.4
The following MOFED versions are supported:
  • 4.6
  • 4.7
  • 5.0
  • 5.1

For the most up to date supportability matrix and client configuration steps and package downloads, refer to: https://support.vastdata.com/hc/en-us/articles/360016813140-NFSoRDMA-with-Multipath.

MOFED must be installed for the VAST NFSoRDMA+Multipath package to function optimally. It is also important to download the correct VAST software packages to match your kernel+MOFED version combination. See Troubleshooting and FAQ for NVMe and NVMeOF support for information about how to install MOFED with GDS support.

  • To verify the current version of MOFED, issue the following command:
    $ ofed_info -s
    MLNX_OFED_LINUX-5.1-2.3.7.1:
    
  • To verify the currently installed Linux kernel version, issue the following command:
    $ uname -r -v
    5.4.0-48-generic #52-Ubuntu SMP Thu Sep 10 10:58:49 UTC 2020
    

After you verify that your system has the correct combination of kernel and MOFED, you can install the VAST Multipath package.

11.1.2. Install the VAST Multipath Package

Here is the procedure to install the VAST Multipath package.

Although the VAST Multipath with NFSoRDMA package has been submitted upstream for inclusion in a future kernel release, it is currently only available as a download from: https://support.vastdata.com/hc/en-us/articles/360016813140-NFSoRDMA-with-Multipath.

Be sure to download the correct .deb file based on your kernel and MOFED version.

  1. Install the VAST NFSoRDMA+Multipath package.
    $ sudo apt-get install mlnx-nfsrdma-*.deb
  2. Generate a new initramfs image.
    $ sudo update-initramfs -u -k `uname -r`
  3. Verify that the package is installed, and the version is the number that you expected.
    $ dpkg -l | grep mlnx-nfsrdma
    ii  mlnx-nfsrdma-dkms         5.1-OFED.5.1.2.3.7.1      all  DKMS support for NFS RDMA kernel module
  4. Reboot the host and run the following commands to verify that the correct version is loaded.
    Note: The versions shown by each command should match.
    $ cat /sys/module/sunrpc/srcversion
    4CC8389C7889F82F5A59269
    $ modinfo sunrpc | grep srcversion
    srcversion:     4CC8389C7889F82F5A59269
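
The comparison in step 4 can be scripted; a sketch with the srcversion values hard-coded from the example output (on a live system, read them from /sys/module/sunrpc/srcversion and modinfo sunrpc instead):

```shell
# Hard-coded from the example output above; substitute live reads on a real host.
loaded="4CC8389C7889F82F5A59269"     # cat /sys/module/sunrpc/srcversion
packaged="4CC8389C7889F82F5A59269"   # modinfo sunrpc | awk '/^srcversion/{print $2}'
if [ "$loaded" = "$packaged" ]; then
    result="match"
else
    result="MISMATCH: reboot or reinstall the package"
fi
echo "sunrpc srcversion: $result"
```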

11.2. Set Up the Networking

This section provides information about how to set up client networking for VAST for GDS.

To ensure optimal GPU-to-storage performance while leveraging GDS, you need to configure VAST and client networking in a balanced manner.

11.2.1. VAST Network Configuration

Here is some information about the VAST network configuration.

VAST is a multi-node architecture. Each node has multiple high-speed (IB-HDR100 or 100GbE) interfaces, which can host client-facing Virtual IPs. Refer to VAST-Managing Virtual IP (VIP) Pools for more information.

Here is the typical workflow:
  1. Multiply the number of VAST nodes by 2 (one VIP per interface).
  2. Create a VIP Pool with the resulting IP count.
  3. Place the VAST-VIP Pool on the same IP-subnet as the client.

11.2.2. Client Network Configuration

Here is some information about client network configuration.

Typically, GPU-optimized clients (such as the NVIDIA DGX-2 and DGX A100) are configured with multiple high-speed network interface cards (NICs). In the following example, the system contains 8 separate NICs that were selected for optimal balance of NIC-to-GPU and NIC-to-CPU bandwidth.

$ sudo ibdev2netdev
mlx5_0 port 1 ==> ibp12s0 (Up)
mlx5_1 port 1 ==> ibp18s0 (Up)
mlx5_10 port 1 ==> ibp225s0f0 (Down)
mlx5_11 port 1 ==> ibp225s0f1 (Down)
mlx5_2 port 1 ==> ibp75s0 (Up)
mlx5_3 port 1 ==> ibp84s0 (Up)
mlx5_4 port 1 ==> ibp97s0f0 (Down)
mlx5_5 port 1 ==> ibp97s0f1 (Down)
mlx5_6 port 1 ==> ibp141s0 (Up)
mlx5_7 port 1 ==> ibp148s0 (Up)
mlx5_8 port 1 ==> ibp186s0 (Up)
mlx5_9 port 1 ==> ibp202s0 (Up)

Not all interfaces are connected; this is intentional and ensures optimal bandwidth.

When using the aforementioned VAST NFSoRDMA+Multipath package, it is recommended to assign static IPs to each interface on the same subnet, which should also match the subnet configured on the VAST VIP Pool. If you are using GDS with NVIDIA DGX A100 systems, a simple netplan is all that is required, for example:

    ibp12s0:
      addresses: [172.16.0.17/24]
      dhcp4: no
    ibp141s0:
      addresses: [172.16.0.18/24]
      dhcp4: no
    ibp148s0:
      addresses: [172.16.0.19/24]
      dhcp4: no

However, if you are using other systems, or non-GDS code, you need to apply the following configuration to ensure that the proper interfaces are used for traffic from the client to VAST.

Note: See the routes section for each interface in the following sample.
$ cat /etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp226s0:
      dhcp4: yes
    ibp12s0:
      addresses: [172.16.0.25/24]
      dhcp6: no
      routes:
         - to: 172.16.0.0/24
           via: 172.16.0.25
           table: 101
      routing-policy:
          - from: 172.16.0.25
            table: 101
    ibp18s0:
      addresses: [172.16.0.26/24]
      dhcp4: no
      routes:
         - to: 172.16.0.0/24
           via: 172.16.0.26
           table: 102
      routing-policy:
          - from: 172.16.0.26
            table: 102
    ibp75s0:
      addresses: [172.16.0.27/24]
      dhcp4: no
      routes:
         - to: 172.16.0.0/24
           via: 172.16.0.27
           table: 103
      routing-policy:
          - from: 172.16.0.27
           table: 103
    ibp84s0:
      addresses: [172.16.0.28/24]
      dhcp4: no
      routes:
         - to: 172.16.0.0/24
           via: 172.16.0.28
           table: 104
      routing-policy:
          - from: 172.16.0.28
           table: 104
    ibp141s0:
      addresses: [172.16.0.29/24]
      dhcp4: no
      routes:
         - to: 172.16.0.0/24
           via: 172.16.0.29
           table: 105
      routing-policy:
          - from: 172.16.0.29
           table: 105
    ibp148s0:
      addresses: [172.16.0.30/24]
      dhcp4: no
      routes:
         - to: 172.16.0.0/24
           via: 172.16.0.30
           table: 106
      routing-policy:
          - from: 172.16.0.30
           table: 106
    ibp186s0:
      addresses: [172.16.0.31/24]
      dhcp4: no
      routes:
         - to: 172.16.0.0/24
           via: 172.16.0.31
           table: 107
      routing-policy:
          - from: 172.16.0.31
           table: 107
    ibp202s0:
      addresses: [172.16.0.32/24]
      dhcp4: no
      routes:
         - to: 172.16.0.0/24
           via: 172.16.0.32
           table: 108
      routing-policy:
          - from: 172.16.0.32
           table: 108
After making changes to the netplan, ensure that you have an IPMI/console connection to the client before issuing the following command:
$ sudo netplan apply

11.2.3. Verify Network Connectivity

Here is some information about how you can verify network connectivity.

Once the proper netplan is applied, verify connectivity between all client interfaces and all VAST-VIPs with a ping loop:
# Replace with appropriate interface names

$ export IFACES="ibp12s0 ibp18s0 ibp75s0 ibp84s0 ibp141s0 ibp148s0 ibp186s0 ibp202s0"
# replace with appropriate VAST-VIPs

$ export VIPS=$(echo 172.16.0.{101..116})

$ echo "starting pingtest" > pingtest.log

$ for i in $IFACES;do for v in $VIPS; do echo $i >> pingtest.log; ping -c 1 $v -W 0.2 -I $i|grep loss >> pingtest.log;done;done;

# Verify no failures:
$ grep '100%' pingtest.log
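Beyond eyeballing the grep output, the log can be summarized with a short check. The sketch below fabricates a two-entry sample log in the same format the loop above produces, purely for illustration; on a live client, skip the heredoc and run the count against the real pingtest.log:

```shell
# Sample log for illustration only; the real file is produced by the ping loop.
cat > pingtest.log <<'EOF'
ibp12s0
1 packets transmitted, 1 received, 0% packet loss, time 0ms
ibp18s0
1 packets transmitted, 0 received, 100% packet loss, time 0ms
EOF
# Count lines reporting total packet loss; 0 means every interface/VIP pair works.
fails=$(grep -c '100%' pingtest.log)
echo "failed pings: $fails"
```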
You should also verify that one of the following conditions is met:
  • All client interfaces are directly cabled to the same IB switches as VAST.
  • There are sufficient Inter-Switch Links (ISLs) between the client switches and the switches to which VAST is connected.
To verify the current IB switch topology, issue the following command:
$ sudo ibnetdiscover 
<output trimmed>

[37] "H-b8599f0300c3f4cb"[1](b8599f0300c3f4cb) # "vastraplab-cn1 HCA-2" lid 55 2xHDR  # <-- example of Vast-Node

[43] "S-b8599f0300e361f2"[43]           # "MF0;RL-QM87-C20-U33:MQM8700/U1" lid 1 4xHDR # <-- example of ISL

[67] "H-1c34da030073c27e"[1](1c34da030073c27e) # "rl-dgxa-c21-u19 mlx5_9" lid 23 4xHDR # <-- example of client

11.3. Mount VAST NFS

Here is some information about how to mount VAST NFS.

To fully utilize available VAST VIPs, you must mount the filesystem by issuing the following command:
$ sudo mount -o proto=rdma,port=20049,vers=3 \
  -o noidlexprt,nconnect=40 \
  -o localports=172.16.0.25-172.16.0.32 \
  -o remoteports=172.16.0.101-172.16.0.140 \
  172.16.0.101:/ /mnt/vast
Here is additional information about the options:
proto
RDMA must be specified.
port=20049
Must be specified; this is the RDMA control port.
noidlexprt
Do not disconnect idle connections. This is used to detect and recover failing connections when there are no pending I/Os.
nconnect
Number of concurrent connections. For the best balance, this value should be evenly divisible by the number of remoteports specified below.
localports
A list of local IPv4 addresses to bind to.
remoteports
A list of NFS server IPv4 addresses to bind to.

For both localports and remoteports, you can specify an inclusive range with the - delimiter, for example, FIRST-LAST. Multiple ranges or individual IP addresses can be separated by ~ (a tilde).
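As a quick sanity check on the nconnect guidance above, the balance arithmetic can be verified in the shell. The values below are taken from the example mount command (40 connections, remote VIPs .101 through .140); adjust them to your own VIP pool:

```shell
# Verify that nconnect divides evenly across the remoteports range.
nconnect=40
first=101 last=140                  # remoteports=172.16.0.101-172.16.0.140
nports=$(( last - first + 1 ))      # number of remote ports in the range
remainder=$(( nconnect % nports ))
echo "remote ports: $nports, remainder: $remainder"  # remainder 0 = balanced
```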

11.4. Debugging and Monitoring

Here is some information about debugging and monitoring.

Typically, mountstats under /proc shows xprt statistics. However, rather than modifying mountstats in a way that is incompatible with the nfsstat utility, the VAST Multipath package extends mountstats with extra state reporting, which is exclusively accessed from /sys/kernel/debug.

A stats node was added for each RPC client, and RPC client 0 corresponds to the completed mount:

$ sudo cat /sys/kernel/debug/sunrpc/rpc_clnt/0/stats

The added information is multipath IP address information per xprt and xprt state in string format.

For example:
       xprt:   rdma 0 0 1 0 24 3 3 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 11 0 0 0
               172.25.1.101 -> 172.25.1.1, state: CONNECTED BOUND
       xprt:   rdma 0 0 1 0 24 1 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 11 0 0 0
               172.25.1.102 -> 172.25.1.2, state: CONNECTED BOUND
       xprt:   rdma 0 0 1 0 23 1 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 11 0 0 0
               172.25.1.103 -> 172.25.1.3, state: CONNECTED BOUND
       xprt:   rdma 0 0 1 0 22 1 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 11 0 0 0
               172.25.1.104 -> 172.25.1.4, state: CONNECTED BOUND
       xprt:   rdma 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
               172.25.1.101 -> 172.25.1.5, state: BOUND
       xprt:   rdma 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
               172.25.1.102 -> 172.25.1.6, state: BOUND
       xprt:   rdma 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
               172.25.1.103 -> 172.25.1.7, state: BOUND
       xprt:   rdma 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
               172.25.1.104 -> 172.25.1.8, state: BOUND

12. Troubleshooting and FAQ for NVMe and NVMeOF Support

This section provides troubleshooting information for NVMe and NVMeOF support.

12.1. MOFED Requirements and Installation

Here is some information about the requirements to install MOFED.

  • To enable GDS support for NVMe and NVMeOF, you need to install at least MOFED 5.1.
  • You must install MOFED with support for GDS.

After installation is complete, for the changes to take effect, update the initramfs and reboot. The Linux kernel versions that were tested with MOFED 5.1 are 4.15.0-x and 5.4.0-x. Issue the following commands:

$ sudo ./mlnxofedinstall --with-nvmf --with-nfsrdma --enable-gds --add-kernel-support
$ sudo update-initramfs -u -k `uname -r`
$ reboot
Here is the output:
$ /usr/local/cuda-x.y/gds/tools/gdscheck

GDS release version (beta): 0.9v1
nvidia_fs version:  2.2 libcufile version: 2.2
cuFile CONFIGURATION:
NVMe    : Supported
NVMeOF  : Supported

12.2. Determining Whether the NVMe device is Supported for GDS

Here is some information about how to determine whether an NVMe device is supported for GDS.

NVMe devices must be compatible with GDS; the device cannot have the block device integrity capability. For devices with integrity support, the Linux block layer completes the metadata processing based on the payload in host memory. This is a deviation from the standard GDS IO path and, as a result, GDS cannot accommodate these devices. The cuFile file registration fails when this type of underlying device is detected, with an appropriate error logged in the cufile.log file.

$ cat /sys/block/devices/<nvme>/device/integrity_check

12.3. Check for the RAID Level

Here is some information about RAID support in GDS.

Currently GDS only supports RAID 0.

12.4. Mounting an EXT4 Filesystem for GDS

Here is some information about how to mount an EXT4 filesystem for GDS.

Currently, EXT4 is the only block-device-based filesystem that GDS supports. Because of Direct IO semantics, the filesystem must be mounted with the journaling mode set to data=ordered. This must be explicitly part of the mount options so that the library can recognize it:

$ sudo mount -o data=ordered /dev/nvme0n1 /mnt

If the EXT4 journaling mode is not in the expected mode, cuFileHandleRegister will fail, and an appropriate error message will be logged in the log file. For instance, in the following case, /mnt1 is mounted with data=writeback, and GDS returns an error:

$ mount | grep /mnt1
/dev/nvme0n1p2 on /mnt1 type ext4 (rw,relatime,data=writeback)

$ ./cufile_sample_001 /mnt1/foo 0
opening file /mnt1/foo
file register error:GPUDirect Storage not supported on current file

12.5. Check for an Existing Mount

Here is some information about how to check for an existing mount.

$ mount | grep ext4
/dev/sda2 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/nvme1n1 on /mnt type ext4 (rw,relatime,data=ordered)
/dev/nvme0n1p2 on /mnt1 type ext4 (rw,relatime,data=writeback)
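The journaling mode of each ext4 mount can also be checked programmatically. This sketch parses sample `mount` output matching the listing above and flags mounts that GDS would reject; on a live system, replace the sample variable with the output of `mount | grep 'type ext4'`:

```shell
# Sample mount output for illustration; on a real system use `mount | grep 'type ext4'`.
mounts='/dev/nvme1n1 on /mnt type ext4 (rw,relatime,data=ordered)
/dev/nvme0n1p2 on /mnt1 type ext4 (rw,relatime,data=writeback)'
# GDS requires data=ordered to appear explicitly in the mount options.
bad=$(printf '%s\n' "$mounts" | grep 'type ext4' | grep -v 'data=ordered')
printf 'not usable by GDS:\n%s\n' "$bad"
```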

12.6. Check for IO Statistics with Block Device Mount

Here is a partial log that shows you how to obtain the I/O statistics:

$ sudo iotop
Actual DISK READ:       0.00 B/s | Actual DISK WRITE:     193.98 K/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
  881 be/3 root        0.00 B/s   15.52 K/s  0.00 %  0.01 % [jbd2/sda2-8]
    1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % init splash

12.7. RAID Group Configuration for GPU Affinity

Here is some information about RAID group configuration for GPU affinity.

Creating one RAID group from the available NVMe devices might not be optimal for GDS performance. You might need to create RAID groups that consist of devices that have PCI affinity with the specified GPU. This is required to prevent cross-node P2P traffic between the GPU and the NVMe devices.

If affinity is not enforced, GDS uses an internal mechanism of device bounce buffers: data is copied from the NVMe devices to an intermediate device that is closest to the drives and is then copied back to the actual GPU. If NVLink is enabled, these transfers are faster.

12.8. Conduct a Basic EXT4 Filesystem Test

Here is some information about how you can conduct a basic EXT4 filesystem test.

Issue the following command:

$ /usr/local/cuda-x.y/gds/tools/gdsio_verify  -f /mnt/nvme/gdstest/tests/reg1G -n 1 -m 0 -s 1024 -o 0  -d 0 -t 0 -S -g 4K

Here is the output:

gpu index :0,file :/mnt/nvme/gdstest/tests/reg1G, RING buffer size :0, gpu buffer alignment :4096, gpu buffer offset :0, file offset :0, io_requested :1024, bufregister :false, sync :0, nr ios :1,fsync :0,
address = 0x564ffc5e76c0
Data Verification Success

12.9. Unmount an EXT4 Filesystem

Here is some information about how to unmount an EXT4 filesystem.

Issue the following command:

$ sudo umount /mnt/

12.10. Udev Device Naming for a Block Device

Here is some information about the Udev device naming for a block device.

The library has a limitation when identifying the NVMe-based block devices in that it expects device names to have the nvme prefix as part of the naming convention.

13. Displaying GDS NVIDIA FS Driver Statistics

GDS exposes the IO statistics information on the procfs filesystem.

  1. Run the following command.
    $ cat /proc/driver/nvidia-fs/stat
  2. Review the output, for example:
    GDS Version: 0.9.0.16
    NVFS statistics(ver: 2.0)
    NVFS Driver(version: 2:3:1)
    
    Active Shadow-Buffer (MB): 0
    Active Process: 0
    Reads                : n=38969972 ok=38969972 err=0 readmb=38969972 io_state_err=0
    Reads                : Bandwidth(MB/s)=4433 Avg-Latency(usec)=445
    Sparse Reads         : n=0 io=0 holes=0 pages=0
    Writes               : n=128102910 ok=128102910 err=0 writemb=128102910 io_state_err=0 pg-cache=6 pg-cache-fail=0 pg-cache-eio=0
    Writes               : Bandwidth(MB/s)=3809 Avg-Latency(usec)=420
    Mmap                 : n=1766 ok=1766 err=0 munmap=1766
    Bar1-map             : n=1761 ok=1761 err=0 free=1013 callbacks=748 active=0
    Error                : cpu-gpu-pages=0 sg-ext=0 dma-map=0
    Ops                  : Read=0 Write=0
    GPU 0000:5e:00:0  uuid:dc87fe99-4d68-247b-b5d2-63f96d2adab1 : pinned_MB=0 cache_MB=0 max_pinned_MB=79
    GPU 0000:b7:00:0  uuid:b3a6a195-d08c-09d1-bf8f-a5423c277c04 : pinned_MB=0 cache_MB=0 max_pinned_MB=76
    GPU 0000:e7:00:0  uuid:7c432aed-a612-5b18-76e7-402bb48f21db : pinned_MB=0 cache_MB=0 max_pinned_MB=80
    GPU 0000:57:00:0  uuid:aa871613-ee53-9a0c-a546-851d1afe4140 : pinned_MB=0 cache_MB=0 max_pinned_MB=80
    GPU 0000:be:00:0  uuid:911ce8ad-e74f-45cc-7af0-0147ce393763 : pinned_MB=0 cache_MB=0 max_pinned_MB=80
    GPU 0000:e0:00:0  uuid:fd66f293-e3e9-2352-9d1f-776c04d69abc : pinned_MB=0 cache_MB=0 max_pinned_MB=77
    GPU 0000:3b:00:0  uuid:64a5ffb0-795b-9206-f98f-24e1b48c45f3 : pinned_MB=0 cache_MB=0 max_pinned_MB=80
    GPU 0000:34:00:0  uuid:df17c3d9-67bc-8b4a-4cea-420d61cda78f : pinned_MB=0 cache_MB=0 max_pinned_MB=74
    

13.1. Understanding nvidia-fs Statistics

Here is some information to help you understand nvidia-fs statistics.

Table 6. NVIDIA-FS Statistics
Type Statistics Description
Reads n Total number of read requests.
  ok Total number of successful read requests.
  err Total number of read errors.
  readmb (MB) Total data read into the GPUs, in MB.
  io_state_err

Read errors that were seen.

Some pages might have been in the page cache.

Reads Bandwidth (MB/s)

Active Read Bandwidth when IO is in flight. This is the period from when IO was submitted to the GDS kernel driver until the IO completion was received by the GDS kernel driver.

There was no userspace involved.

  Avg-Latency (usec)

Active Read latency when IO is in flight. This is from the period from when IO was submitted to the GDS kernel driver until the IO completion is received by the GDS kernel driver.

There was no userspace involved.

Sparse Reads n Total number of sparse read requests.
  holes Total number of holes that were observed during reads.
  pages Total number of pages that span the holes.
Writes n Total number of write requests.
  ok Total number of successful write requests.
  err Total number of write errors.
  Writemb (mb) Total data that was written from the GPUs to the disk.
  io_state_err

Write errors that were seen.

Some pages might have been in the page cache.

  pg-cache Total number of write requests that were found in the page cache.
  pg-cache-fail Total number of write requests that were found in the page cache but could not be flushed.
  pg-cache-eio Total number of write requests that were found in the page-cache, but could not be flushed after multiple retries, and IO failed with EIO.
Writes Bandwidth (MB/s)

Active Write Bandwidth when IO is in flight. This is the period from when IO is submitted to the GDS kernel driver until the IO completion is received by the GDS kernel driver.

There was no userspace involved.

  Avg-Latency (usec)

Active Write latency when IO is in flight. This is the period from when IO is submitted to the GDS kernel driver until the IO completion is received by the GDS kernel driver.

There was no userspace involved.

Mmap
n
Total number of mmap system calls that were issued.
  ok Total number of successful mmap system calls.
  err Errors that were observed through the mmap system call.
  munmap Total number of munmap that were issued.
Bar1-map n Total number of times the GPU BAR memory was pinned.
  ok Total number of times the GPU BAR memory was successfully pinned.
  err Total errors that were observed during BAR1 pinning.
  free Total number of times the BAR1 memory was unpinned.
  callbacks
Total number of times the NVIDIA kernel driver invoked the callback to the GDS driver. The callback is invoked in the following instances:
  • When the process crashes or is abruptly killed.
  • When cudaFree is invoked on memory that was pinned through cuFileBufRegister but cuFileBufDeregister was not invoked.
  active

Number of BAR1 memory pinnings that are currently active.

(This value is a count, not the total memory.)

Error cpu-gpu-pages Number of IO requests that had a mix of CPU-GPU pages when nvfs_dma_map_sg_attrs is invoked.
  sg-ext Scatterlist that could not be expanded because the number of GPU pages is greater than blk_rq_nr_phys_segments.
  dma-map A DMA map error.
Ops Read Total number of Active Read IOs in flight.
  Write Total number of Active Write IOs in flight.

13.2. Analyze Statistics for each GPU

You can analyze the statistics for each GPU to better understand what is happening in that GPU.

Consider the following example output:
GPU 0000:5e:00:0  uuid:dc87fe99-4d68-247b-b5d2-63f96d2adab1 : pinned_MB=0 cache_MB=0 max_pinned_MB=79
GPU 0000:b7:00:0  uuid:b3a6a195-d08c-09d1-bf8f-a5423c277c04 : pinned_MB=0 cache_MB=0 max_pinned_MB=76
GPU 0000:e7:00:0  uuid:7c432aed-a612-5b18-76e7-402bb48f21db : pinned_MB=0 cache_MB=0 max_pinned_MB=80
GPU 0000:57:00:0  uuid:aa871613-ee53-9a0c-a546-851d1afe4140 : pinned_MB=0 cache_MB=0 max_pinned_MB=80

In this sample output, 0000:5e:00:0 is the PCI BDF of the GPU with UUID dc87fe99-4d68-247b-b5d2-63f96d2adab1. This is the same UUID that can be used to observe nvidia-smi statistics for this GPU.

Here is some additional information about the statistics:
  • pinned_MB shows the active GPU memory that is pinned by using nvidia_p2p_get_pages from the GDS driver, in MB, across all active processes.
  • cache_MB shows the active GPU memory that is pinned by using nvidia_p2p_get_pages, but this memory is used as the internal cache by GDS across all active processes.
  • max_pinned_MB shows the max GPU memory that is pinned by GDS at any point in time on this GPU across multiple processes.

    This value indicates the maximum BAR size used, which administrators can use for system sizing purposes.
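The per-GPU lines can be parsed with standard tools for sizing reports. This sketch extracts the PCI BDF and max_pinned_MB from one sample line copied from the output above; on a live system, loop over the GPU lines of /proc/driver/nvidia-fs/stats instead:

```shell
# Sample per-GPU stats line; on a real system read /proc/driver/nvidia-fs/stats.
line='GPU 0000:5e:00:0  uuid:dc87fe99-4d68-247b-b5d2-63f96d2adab1 : pinned_MB=0 cache_MB=0 max_pinned_MB=79'
# Pull out the PCI BDF and the max pinned BAR1 memory in MB.
summary=$(echo "$line" | sed -n 's/^GPU \([^ ]*\).*max_pinned_MB=\([0-9]*\).*/\1 max_pinned_MB=\2/p')
echo "$summary"
```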

13.3. Resetting the nvidia-fs Statistics

You can reset the nvidia-fs statistics.

Run the following command:
$ sudo bash
$ echo 1 >/proc/driver/nvidia-fs/stats

13.4. Review Peer Affinity Stats for a Kernel Filesystem and Storage Drivers

Here is some information about reviewing nvidia-fs PCI peer affinity statistics for a kernel file system and storage drivers.

The following proc files contain information about peer affinity DMA statistics via nvidia-fs callbacks:
  • nvidia-fs/stats
  • nvidia-fs/peer_affinity
  • nvidia-fs/peer_distance
To enable the statistics, run the following command:
$ sudo bash
$ echo 1 > /sys/module/nvidia_fs/parameters/peer_stats_enabled
To view consolidated statistics as a regular user, run the following command:
$ cat /proc/driver/nvidia-fs/stats
Here is the sample output:
GDS Version: 0.9.0.16
NVFS statistics(ver: 1.0)
NVFS Driver(version: 2:0)

Active Shadow-Buffer (MB): 0
Active Process: 0
Reads                         : n=26948 ok=26545 err=435 readmb=27367 io_state_err=0
Reads                         : Bandwidth(MB/s)=0 Avg-Latency(usec)=70
Sparse Reads                  : n=24393 io=9051 holes=10151 pages=2273716
Writes                        : n=12795 ok=12737 err=26 writemb=72272 io_state_err=0 pg-cache=6 pg-cache-fail=0 pg-cache-eio=0
Writes                        : Bandwidth(MB/s)=2 Avg-Latency(usec)=932
Mmap                          : n=2016 ok=1980 err=36 munmap=1980
Bar1-map                      : n=1950 ok=1950 err=0 free=1846 callbacks=104 active=0
Error                         : cpu-gpu-pages=0 sg-ext=0 dma-map=0
Ops                           : Read=4294967264 Write=32
GPU 0000:00:00:0  uuid:a096002c-5727-8728-2422-1476af4f441a : pinned_MB=0 cache_MB=0 max_pinned_MB=10 cross_root_port(%)=0
GPU 0000:34:00:0  uuid:98bb4b5c-4576-b996-3d84-4a5d778fa970 : pinned_MB=0 cache_MB=0 max_pinned_MB=20486 cross_root_port(%)=0
GPU 0000:36:00:0  uuid:e5e78eec-7047-9ad1-d53d-52b3e54efdc2 : pinned_MB=0 cache_MB=0 max_pinned_MB=10 cross_root_port(%)=0

The cross_root_port (%) value is the percentage of the total DMA traffic, through nvidia-fs callbacks, that crosses PCIe root ports between the GPU and its peers, such as an HCA.

  • This can be a major reason for low throughput on certain platforms.
  • This does not consider the DMA traffic that is initiated via cudaMemcpyDeviceToDevice or cuMemcpyPeer with the specified GPU.

13.5. Checking the Peer Affinity Usage for a Kernel File System and Storage Drivers

Here is some information about how you can check peer affinity usage for a kernel file system and storage drivers.

  1. To get the peer affinity usage, run the following command:
    $ cat /proc/driver/nvidia-fs/peer_affinity
  2. Review the sample output:
    GPU P2P DMA distribution based on pci-distance
    
    (last column indicates p2p via root complex)
    GPU :0000:be:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:3b:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:e7:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:e5:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:e0:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:57:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:39:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:36:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:e2:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:59:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:b7:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:b9:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:bc:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:34:00.0 :0 0 2621440 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:5e:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    GPU :0000:5c:00.0 :0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Each row represents a GPU entry, and the columns indicate the peer ranks in ascending order. The lower the rank, the better the affinity. Each column entry is the total number of DMA transactions that occurred between the specified GPU and the peers that belong to the same rank.

For example, the row with GPU 0000:34:00.0 has 2621440 IO operations through the peer with rank 3. Non-zero values in the last column indicate that the IO is routed through the root complex.
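A short awk filter can flag any GPU whose last column is non-zero, that is, any GPU whose P2P traffic is routed through the root complex. The two rows below are illustrative only (the non-zero last entry is hypothetical); on a live system, pipe in /proc/driver/nvidia-fs/peer_affinity instead:

```shell
# Illustrative rows; the non-zero last column on the first GPU is hypothetical.
rows='GPU :0000:34:00.0 :0 0 2621440 0 0 0 0 512
GPU :0000:5e:00.0 :0 0 0 0 0 0 0 0'
# Print the BDF of every GPU with P2P DMA via the root complex (last column != 0).
flagged=$(printf '%s\n' "$rows" | awk '$1 == "GPU" && $NF != 0 {print $2}')
echo "cross-root-complex traffic on: $flagged"
```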

Here are some examples:

Run the following command:

$ /usr/local/cuda-x.y/samples/cufile_sample_001 /mnt/lustre/test 0
$ cat /proc/driver/nvidia-fs/stats
Here is the output:
GDS Version: 0.9.0.16
NVFS statistics(ver: 2.0)
NVFS Driver(version: 2:3:1)

Active Shadow-Buffer (MB): 0
Active Process: 0
Reads                           : n=0 ok=0 err=0 readmb=0 io_state_err=0
Reads                           : Bandwidth(MB/s)=0 Avg-Latency(usec)=0
Sparse Reads                : n=0 io=0 holes=0 pages=0
Writes                          : n=1 ok=1 err=0 writemb=0 io_state_err=0 pg-cache=0 pg-cache-fail=0 
pg-cache-eio=0
Writes                          : Bandwidth(MB/s)=0 Avg-Latency(usec)=0 
Mmap                            : n=1 ok=1 err=0 munmap=1
Bar1-map                       : n=1 ok=1 err=0 free=1 callbacks=0 active=0
Error                : cpu-gpu-pages=0 sg-ext=0 dma-map=0
Ops                             : Read=0 Write=0
GPU 0000:34:00:0  uuid:98bb4b5c-4576-b996-3d84-4a5d778fa970 : pinned_MB=0 cache_MB=0 max_pinned_MB=0 cross_root_port(%)=100

Run the following command:

$ cat /proc/driver/nvidia-fs/peer_affinity
Here is the output:
GPU peer affinity distribution by pci-distance (version: 1.0)
GPU :0000:b7:00:0 :0 0 0 0 0 0 0 0 0 0 0 0
GPU :0000:b9:00:0 :0 0 0 0 0 0 0 0 0 0 0 0
GPU :0000:bc:00:0 :0 0 0 0 0 0 0 0 0 0 0 0
GPU :0000:be:00:0 :0 0 0 0 0 0 0 0 0 0 0 0
GPU :0000:e0:00:0 :0 0 0 0 0 0 0 0 0 0 0 0
GPU :0000:e2:00:0 :0 0 0 0 0 0 0 0 0 0 0 0
GPU :0000:e5:00:0 :0 0 0 0 0 0 0 0 0 0 0 0
GPU :0000:e7:00:0 :0 0 0 0 0 0 0 0 0 0 0 0
GPU :0000:34:00:0 :0 0 0 0 0 0 0 0 0 0 0 2
GPU :0000:36:00:0 :0 0 0 0 0 0 0 0 0 0 0 0
GPU :0000:39:00:0 :0 0 0 0 0 0 0 0 0 0 0 0
GPU :0000:3b:00:0 :0 0 0 0 0 0 0 0 0 0 0 0
GPU :0000:57:00:0 :0 0 0 0 0 0 0 0 0 0 0 0
GPU :0000:59:00:0 :0 0 0 0 0 0 0 0 0 0 0 0
GPU :0000:5c:00:0 :0 0 0 0 0 0 0 0 0 0 0 0
GPU :0000:5e:00:0 :0 0 0 0 0 0 0 0 0 0 0 0

In the above example, there are DMA transactions between the GPU (34:00.0) and one of its peers. The peer device has the highest possible rank, which indicates that it is farthest from the respective GPU in terms of PCI distance.

To check the percentage of traffic, check the cross_root_port % in /proc/driver/nvidia-fs/stats. In the example above, this value is 100%, which means that the peer-to-peer traffic is happening over QPI links.

13.6. Display the GPU-to-Peer Distance Table

The peer_distance table displays the device-wise IO distribution for each peer with its rank for the specified GPU, and it complements the rank-based stats.

The ranking is done in the following order:
  1. Primary priority is given to the p2p distance (upper 2 bytes).
  2. Secondary priority is given to the device bandwidth (lower 2 bytes).

For peer paths that cross the root port, a fixed cost (127) is added to the p2p distance. This induces a preference for paths under one CPU root port relative to paths that cross the CPU root ports.

Issue the following command:
$ cat /proc/driver/nvidia-fs/peer_distance
Here is the output:
   gpu             peer            peerrank                p2pdist np2p    link    gen     class
    0000:af:00.0    0000:86:00.0    0x820088                0x82    0       0x8     0x3     network
    0000:af:00.0    0000:18:00.0    0x820088                0x82    0       0x8     0x3     nvme
    0000:af:00.0    0000:86:00.1    0x820088                0x82    0       0x8     0x3     network
    0000:af:00.0    0000:19:00.1    0x820088                0x82    0       0x8     0x3     network
    0000:af:00.0    0000:87:00.0    0x820088                0x82    0       0x8     0x3     nvme
    0000:af:00.0    0000:19:00.0    0x820088                0x82    0       0x8     0x3     network
    0000:3b:00.0    0000:86:00.0    0x820088                0x82    0       0x8     0x3     network
    0000:3b:00.0    0000:18:00.0    0x820088                0x82    0       0x8     0x3     nvme
    0000:3b:00.0    0000:86:00.1    0x820088                0x82    0       0x8     0x3     network
    0000:3b:00.0    0000:19:00.1    0x820088                0x82    0       0x8     0x3     network
    0000:3b:00.0    0000:87:00.0    0x820088                0x82    0       0x8     0x3     nvme
    0000:3b:00.0    0000:19:00.0    0x820088                0x82    0       0x8     0x3     network
    0000:5e:00.0    0000:86:00.0    0x820088                0x82    0       0x8     0x3     network
    0000:5e:00.0    0000:18:00.0    0x820088                0x82    0       0x8     0x3     nvme
    0000:5e:00.0    0000:86:00.1    0x820088                0x82    0       0x8     0x3     network
    0000:5e:00.0    0000:19:00.1    0x820088                0x82    0       0x8     0x3     network
    0000:5e:00.0    0000:87:00.0    0x820088                0x82    0       0x8     0x3     nvme
    0000:5e:00.0    0000:19:00.0    0x820088                0x82    0       0x8     0x3     network
    0000:d8:00.0    0000:86:00.0    0x820088                0x82    0       0x8     0x3     network
    0000:d8:00.0    0000:18:00.0    0x820088                0x82    0       0x8     0x3     nvme
    0000:d8:00.0    0000:86:00.1    0x820088                0x82    0       0x8     0x3     network
    0000:d8:00.0    0000:19:00.1    0x820088                0x82    0       0x8     0x3     network
    0000:d8:00.0    0000:87:00.0    0x820088                0x82    0       0x8     0x3     nvme
    0000:d8:00.0    0000:19:00.0    0x820088                0x82    0       0x8     0x3     network
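The peerrank encoding described above can be unpacked with shell arithmetic. Using the 0x820088 value from the table, the upper 2 bytes recover the p2p distance shown in the p2pdist column and the lower 2 bytes the bandwidth component:

```shell
# Decode a peerrank value: upper 2 bytes = p2p distance (which includes the
# fixed 127 cross-root cost when the path crosses a root port),
# lower 2 bytes = device bandwidth component.
rank=$(( 0x820088 ))
p2pdist=$(( rank >> 16 ))     # 0x82, matching the p2pdist column
bw=$(( rank & 0xFFFF ))       # 0x88
printf 'p2pdist=0x%x bw=0x%x\n' "$p2pdist" "$bw"
```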

13.7. The GDSIO Tool

Here is some information about the GDSIO tool.

GDSIO is a synthetic IO benchmarking tool that uses cuFile APIs for IO. The tool can be found in the /usr/local/cuda-x.y/gds/tools directory. For more information about how to use this tool, run $ /usr/local/cuda-x.y/gds/tools/gdsio -h or review the gdsio section in the /usr/local/cuda-x.y/gds/tools/README file. In the examples below, the files are created on an ext4 filesystem.

Issue the following command:
# ./gdsio -f /root/sg/test -d 0 -w 4 -s 1M -x 0 -i 4K:32K:1K -I 1
Here is the output:
IoType: WRITE XferType: GPUD Threads: 4 DataSetSize: 671/1024(KiB) IOSize: 4-32-1(KiB) Throughput: 0.044269 GiB/sec, Avg_Latency: 996.094925 usecs ops: 60 total_time 0.014455 secs

This command performs write IO (-I 1) on a file named test of size 1 MiB (-s 1M) with an IO size that varies from 4 KiB to 32 KiB in steps of 1 KiB (-i 4K:32K:1K). The transfer is performed using GDS (-x 0) with 4 threads (-w 4) on GPU 0 (-d 0).

Here are some of additional features of the tool:
  • Support for read/write at random offsets in a file.

    The gdsio tool provides options to perform a read and write to a file at random offsets.

    • Using the -I 2 and -I 3 options performs file read and write operations, respectively, at random offsets, but the random offsets are always 4 KiB aligned.
      # ./gdsio -f /root/sg/test -d 0 -w 4 -s 1M -x 0 -i 4K:32K:1K -I 3
      IoType: RANDWRITE XferType: GPUD Threads: 4 DataSetSize: 706/1024(KiB) IOSize: 4-32-1(KiB) Throughput: 0.079718 GiB/sec, Avg_Latency: 590.853274 usecs ops: 44 total_time 0.008446 secs
    • To perform random reads and writes at unaligned 4 KiB offsets, the -U option can be used with -I 0 or -I 1 for read and write, respectively.
      # ./gdsio -f /root/sg/test -d 0 -w 4 -s 1M -x 0 -i 4K:32K:1K -I 1 -U
      
      IoType: RANDWRITE XferType: GPUD Threads: 4 DataSetSize: 825/1024(KiB) IOSize: 4-32-1(KiB) Throughput: 0.055666 GiB/sec, Avg_Latency: 919.112500 usecs ops: 49 total_time 0.014134 secs
    • Random buffer fill for dedupe and compression.

      Using the -R option fills the IO-size buffer (-i) with random data. This random data is then written to the file at different file offsets.

      # ./gdsio -f /root/sg/test -d 0 -w 4 -s 1M -x 0 -i 4K:32K:1K -I 1 -R
      
      IoType: WRITE XferType: GPUD Threads: 4 DataSetSize: 841/1024(KiB) IOSize: 4-32-1(KiB) Throughput: 0.059126 GiB/sec, Avg_Latency: 788.884580 usecs ops: 69 total_time 0.013565 secs
    • Using the -F option will fill the entire file with random data.
      # ./gdsio -f /root/sg/test -d 0 -w 4 -s 1M -x 0 -i 4K:32K:1K -I 1 -F
      
      IoType: WRITE XferType: GPUD Threads: 4 DataSetSize: 922/1024(KiB) IOSize: 4-32-1(KiB) Throughput: 0.024376 GiB/sec, Avg_Latency: 1321.104532 usecs ops: 73 total_time 0.036072 secs
      

      This is useful for file systems that use dedupe and compression algorithms to minimize disk access. Using random data increases the probability that these file systems will hit the backend disk more often.

  • Variable block size.

    To perform a read or a write on a file, you can specify the block size (-i), which means that IO will be performed in chunks of block-sized lengths. To check which block sizes were used, use the gds_stats tool. Ensure that cufile_stats is set to 3 in the /etc/cufile.json file:

    # ./gds_stats -p <pid of the gdsio process> -l 3
    Here is the output:
    0-4(KiB): 0  0 
    4-8(KiB): 0  17205
    8-16(KiB): 0  45859
    16-32(KiB): 0  40125
    32-64(KiB): 0  0
    64-128(KiB): 0  0
    128-256(KiB): 0  0
    256-512(KiB): 0  0
    512-1024(KiB): 0  0
    1024-2048(KiB): 0  0
    2048-4096(KiB): 0  0
    4096-8192(KiB): 0  0
    8192-16384(KiB): 0  0
    16384-32768(KiB): 0  0
    32768-65536(KiB): 0  0
    65536-...(KiB): 0  0
    

    The non-zero counters show that, for the command above, the block sizes that are used for file IO are in the 4-32 KiB range.

  • Verification mode usage and limitations.
    To ensure data integrity, there is an option to perform IO in a Write and Read in verify mode using the -V option. Here is an example:
    # ./gdsio -V -f /root/sg/test -d 0 -w 1 -s 2G -o 0 -x 0 -k 0 -i 4K:32K:1K -I 1
    IoType: WRITE XferType: GPUD Threads: 1 DataSetSize: 2097144/2097152(KiB) IOSize: 4-32-1(KiB) Throughput: 0.074048 GiB/sec, Avg_Latency: 231.812570 usecs ops: 116513 total_time 27.009349 secs
    Verifying data
    IoType: READ XferType: GPUD Threads: 1 DataSetSize: 2097144/2097152(KiB) IOSize: 4-32-1(KiB) Throughput: 0.103465 GiB/sec, Avg_Latency: 165.900663 usecs ops: 116513 total_time 19.330184 secs

    The command above performs a write followed by a read-verify test.

    While using the verify mode, remember the following points:
    • A read test (-I 0) with the verify option (-V) should be used with files that were written (-I 1) with the -V option.
    • A random read test (-I 2) with the verify option (-V) should be used with files that were written (-I 3) with the -V option, with the same random seed (-k), the same number of threads, and the same offset and data size.
    • A write test (-I 1/3) with the verify option (-V) performs writes followed by reads.
    • Verify mode cannot be used in timed mode (-T option).

      If verify mode is used in timed mode, it is ignored.

  • The configuration file

    GDSIO has an option to read the parameters that are needed to perform IO from a configuration file and run the IO using those parameters. The configuration file can define multiple jobs, where each job has different configurations.

    The configuration file has global parameters and job specific parameter support. For example, with a configuration file, you can configure each job to perform on a GPU and with a different number of threads. The global parameters, such as IO Size and transfer mode, remain the same for each job. For more information, refer to /usr/local/cuda-x.y/tools/README and /usr/local/cuda-x.y/tools/rw-sample.gdsio files. After configuring the parameters, to perform the IO operation using the configuration file, run the following command:
    #./gdsio <config file name>

See Tabulated Fields for a list of the tabulated fields.

13.8. Tabulated Fields

Here is the list of tabulated fields after you run the #./gdsio <config file name> command.

Table 7. Tabulated Fields
Global Option Description
xfer_type GDSIO Transfer types:
  • 0 : Storage->GPU
  • 1 : Storage->CPU
  • 2 : Storage->CPU->GPU
  • 3 : Storage->CPU->GPU_ASYNC
  • 4 : Storage->PAGE_CACHE->CPU->GPU
  • 5 : Storage->GPU_ASYNC
rw IO type, rw=read, rw=write, rw=randread, rw=randwrite
bs Block size, for example, bs=1M. For a variable block size, specify a range, for example, bs=1M:4M:1M (1M: start block size, 4M: end block size, 1M: step in which the size is varied)
size File-size, for example, size=2G
runtime Duration in seconds
do_verify Use 1 for enabling verification
skip_bufregister Skip cufile buffer registration, ignored in cpu mode
enable_nvlinks Set up NVLinks. This field is recommended if p2p traffic is cross-node.

random_seed Use a random seed, for example, 1234
refill_buffer Refill io buffer after every write
fill_random Fill request buffer with random data
unaligned_random Use random offsets which are not page-aligned
start_offset File offset to start read/write from.
Per-Job Options Description
numa_node numa node
gpu_dev_id GPU device index (check nvidia-smi)
num_threads Number of IO Threads per job
directory Directory name where files are present. Each thread will work on a per file basis.
filename Filename for single file mode, where threads share the same file. (Note: directory mode and file mode should not be mixed across jobs.)
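As a sketch only (the authoritative syntax is in the /usr/local/cuda-x.y/tools/README and rw-sample.gdsio files), the fields above might combine into a job file along these lines, with global parameters shared by two per-GPU jobs. The directory paths are placeholders:

```ini
[global]
xfer_type=0
rw=read
bs=1M
size=2G

[job1]
numa_node=0
gpu_dev_id=0
num_threads=4
directory=/mnt/gds/job1

[job2]
numa_node=1
gpu_dev_id=1
num_threads=4
directory=/mnt/gds/job2
```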

13.9. GDSCHECK

Here is some information about the GDSCHECK tool.

The /usr/local/cuda-x.y/tools/gdscheck.py tool is used to perform a GDS platform check and has other options that can be found by using the -h option.

$ ./gdscheck.py -h
usage: gdscheck.py [-h] [-p] [-f FILE] [-v] [-V]
GPUDirectStorage platform checker
optional arguments:
  -h, --help  show this help message and exit
  -p          gds platform check
  -f FILE     gds file check
  -v          gds version checks
  -V          gds fs checks
To perform a GDS platform check, issue the following command and expect the output in the following format:
# ./gdscheck.py -p
GDS release version (beta): 0.9.0
 nvidia_fs version:  2.3 libcufile version: 2.3
 cuFile CONFIGURATION:
 NVMe    : Supported
 NVMeOF  : Unsupported
 SCSI    : Unsupported
 LUSTRE  : Unsupported
 NFS     : Unsupported
 WEKAFS  : Unsupported
 USERSPACE RDMA : Unsupported
 --MOFED peer direct  : enabled
 --rdma library       : Not Loaded (libcufile_rdma.so)
 --rdma devices       : Not configured
 --rdma_device_status : Up: 0 Down: 0
 properties.use_compat_mode : 0
 properties.use_poll_mode : 0
 properties.poll_mode_max_size_kb : 4
 properties.max_batch_io_timeout_msecs : 5
 properties.max_direct_io_size_kb : 16384
 properties.max_device_cache_size_kb : 131072
 properties.max_device_pinned_mem_size_kb : 33554432
 properties.rdma_peer_affinity_policy : RoundRobin
 fs.generic.posix_unaligned_writes : 0
 fs.lustre.posix_gds_min_kb: 0
 fs.weka.rdma_write_support: 1
 profile.nvtx : 0
 profile.cufile_stats : 3
 miscellaneous.api_check_aggressive : 0
 GPU INFO:
 GPU index 0 Tesla T4 bar:1 bar size (MiB):256 supports GDS
 Found ACS enabled for switch 0000:40:01.1
 IOMMU : disabled
 Platform verification succeeded

13.10. NFS Support with GPUDirect Storage

This section provides information about NFS support with GDS.

13.10.1. Install Linux NFS server with RDMA Support on MOFED 5.1

Here is some information about how to install a Linux NFS server with RDMA support on MOFED 5.1.

To install a standard Linux kernel-based NFS server with RDMA support, complete the following steps:
Note: The server must have a Mellanox ConnectX-4/5 NIC with MLNX_OFED 5.1 installed.
  1. Issue the following command:
    $ ofed_info -s MLNX_OFED_LINUX-5.1-0.6.6.0:
  2. Review the output to ensure that the server was installed.
    $ sudo apt-get install nfs-kernel-server
    $ mkfs.ext4 /dev/nvme0n1
    $ mount -o data=ordered /dev/nvme0n1 /mnt/nvme
    $ cat /etc/exports
      /mnt/nvme *(rw,async,insecure,no_root_squash,no_subtree_check)
    $ service nfs-kernel-server restart
    $ modprobe rpcrdma
    $ echo rdma 20049 > /proc/fs/nfsd/portlist
    

13.10.2. Install GPUDirect Storage support for the NFS Client

Here is some information about installing GDS support for the NFS client.

To install a NFS client with GDS support complete the following steps:

Note: The client must have a Mellanox ConnectX-4/5 NIC with MLNX_OFED 5.1 installed.
  1. Issue the following command:
    $ ofed_info -s MLNX_OFED_LINUX-5.1-0.6.6.0:
  2. Review the output to ensure that the support exists.
    $ sudo apt-get install nfs-common
    $ modprobe rpcrdma
    $ mkdir -p /mnt/nfs_rdma_gds
    $ sudo mount -v -o proto=rdma,port=20049,vers=3 172.16.0.101:/ /mnt/nfs_rdma_gds
    To mount with nconnect by using the VAST NFS client package, for example, with client IB interfaces 172.16.0.17, 172.16.0.18, 172.16.0.19, 172.16.0.20, 172.16.0.21, 172.16.0.22, 172.16.0.23, and 172.16.0.24:
    $ sudo mount -v -o proto=rdma,port=20049,vers=3,nconnect=20,localports=172.16.0.17-172.16.0.24,remoteports=172.16.0.101-172.16.0.120 172.16.0.101:/ /mnt/nfs_rdma_gds
    

13.11. NFS GPUDirect Storage Statistics and Debugging

Here is some information about NFS and GDS statistics and debugging.

NFS IO can be observed using regular Linux tools that are used for monitoring IO, such as iotop and nfsstat.
  • To enable NFS RPC stats debugging, run the following command.
    $ rpcdebug -v
  • To observe GDS-related IO stats, run the following command.
    $ cat /proc/driver/nvidia-fs/stats
  • To determine GDS statistics per process, run the following command.
    $ /usr/local/cuda-x.y/tools/gds_stats -p <PID> -l 3

13.12. GPUDirect Storage IO Behavior

This section provides information about IO behavior in GDS.

13.12.1. Read/Write Atomicity Consistency with GPUDirect Storage Direct IO

Here is some information about read/write atomicity consistency with GDS.

In GDS, the max_direct_io_size_kb property controls the IO unit size in which IO is issued to the underlying filesystem. By default, this value is 16MB. This implies that, from a Linux VFS perspective, the atomicity of IO is limited to the max_direct_io_size_kb size and not the original request size. This limitation exists in the standard GDS path and in compatible mode.
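How a large request is broken up can be pictured with a short Python sketch (illustrative, not the cuFile implementation): a 40 MiB request is issued to the filesystem as 16 MiB units, so atomicity holds per unit rather than for the whole request.

```python
MIB = 1024 * 1024

def split_request(offset, length, max_direct_io_size=16 * MIB):
    """Break one IO request into the units issued to the underlying filesystem."""
    chunks = []
    end = offset + length
    while offset < end:
        chunk_len = min(max_direct_io_size, end - offset)
        chunks.append((offset, chunk_len))
        offset += chunk_len
    return chunks

# A 40 MiB write becomes three separate filesystem IOs: 16 + 16 + 8 MiB.
print(split_request(0, 40 * MIB))
```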

13.12.2. Write with a File Opened in O_APPEND Mode (cuFileWrite)

Here is some information about writing to a file that is opened in O_APPEND mode.

For a file that is opened in O_APPEND mode with concurrent writers, if the IO size that is used is larger than the max_direct_io_size_kb property, because of the write atomicity limitations, the file might have interleaved data from multiple writers. This cannot be prevented even if the underlying file-system has locking guarantees.

13.12.3. GPU to NIC Peer Affinity

Here is some information about GPU to NIC peer affinity.

The library maintains a peer affinity table, which is a pci-distance-based ranking of the available NICs in the platform against each GPU for RDMA. Currently, a limitation of the ranking is that it does not consider NUMA attributes for the NICs. For a NIC that does not share a common root port with a GPU, the P2P traffic might get routed cross-socket over QPI links even if a NIC resides on the same CPU socket as the GPU.
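A toy ranking (hypothetical, not the library's actual code) makes the limitation concrete: when neither NIC shares a root port with the GPU, both can sit at the same PCI distance, and because socket locality is ignored, the cross-socket NIC can win the tie.

```python
def pick_nic(nics):
    """Rank candidate NICs by PCI distance to the GPU only; the socket
    attribute deliberately plays no part, mirroring the limitation above."""
    return sorted(nics, key=lambda nic: nic["pci_distance"])[0]

# Hypothetical topology: the GPU is on socket 0, and neither NIC shares
# its root port, so both report the same PCI distance.
nics = [
    {"name": "mlx5_1", "pci_distance": 4, "socket": 1},  # cross-socket NIC
    {"name": "mlx5_0", "pci_distance": 4, "socket": 0},  # same-socket NIC
]

# The sort is stable, so the cross-socket NIC listed first wins the tie,
# and P2P traffic would cross the QPI link.
print(pick_nic(nics)["name"])  # mlx5_1
```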

13.12.4. Compatible Mode with Unregistered Buffers

Here is some information about the compatible mode with unregistered buffers.

Currently, in compatible mode, the IO path with non-registered buffers does not have optimal performance because it allocates and deallocates buffers in every cuFileRead or cuFileWrite call.

13.12.5. Unaligned writes with Non-Registered Buffers

Here is some information about unaligned writes with non-registered buffers.

For unaligned writes, performance with unregistered buffers might not be optimal compared to registered buffers.

13.12.6. Process Hang with NFS

Here is some information about process hangs with NFS.

A process hang is observed in NFS environments when the application crashes.

13.12.7. Tools Support Limitations for CUDA 9 and Earlier

Here is some information about the tool support issues with CUDA 9 and earlier.

The gdsio binary is built against the CUDA 10.1 runtime, so it requires a CUDA runtime environment of version 10.1 or later. Otherwise, the tool reports a driver dependency error.

13.13. Installing and Uninstalling the Debian Package

This section provides information about how to install and uninstall the Debian packages.

The following packages are shipped as part of the Debian package:
  1. nvidia-fs_2.2_amd64.deb
  2. gds_0.8.0_amd64.deb
  3. gds-tools_0.8.0_amd64.deb

Currently, NVIDIA only supports the Debian installation with Ubuntu 18.04 and 20.04. The first package, nvidia-fs_2.2_amd64.deb, can be installed or uninstalled without any dependencies. You must install the remaining packages in the order listed above and uninstall them in the reverse order.

For more information about how to install and uninstall the Debian package, see:

13.13.1. Install the Debian Package

Here are the steps to install the Debian package.

Before you install the Debian package, complete the following tasks:
  • Ensure that the NVIDIA driver is installed by using the APT package manager.
  • If you installed the NVIDIA driver by using the NVIDIA-Linux-x86_64-<version>.run file, note that the .run file is not supported with the nvidia-gds package.

  • Ensure that you download the correct GDS debian package based on your Ubuntu distribution and CUDA toolkit.
    • For 20.04:
       $ sudo dpkg -i gpudirect-storage-local-repo-ubuntu2004-cuda-x.y-0.9.0_1.0-1_amd64.deb
    • For 18.04:
       $ sudo dpkg -i gpudirect-storage-local-repo-ubuntu1804-cuda-x.y-0.9.0_1.0-1_amd64.deb
The following packages, which are shipped in the Debian package, will be installed:
  • nvidia-fs_2.3_amd64.deb
  • gds_0.9.0_amd64.deb
  • gds-tools_0.9.0_amd64.deb
  1. Download the Debian packages to the local client.
    $ sudo apt-key add /var/gpudirect-storage-local-repo-*/7fa2af80.pub
    $ sudo apt-get update
  2. Install all the GDS-related packages by installing the nvidia-gds metapackage.
    If you installed version 0.8.0, run the following commands before you upgrade to version 0.9.0:
    $ sudo dpkg --purge nvidia-fs
     $ sudo dpkg --purge gds-tools
     $ sudo dpkg --purge gds
  3. To get the current NVIDIA driver version on the system, run the following command:
    $ NVIDIA_DRV_VERSION=$(cat /proc/driver/nvidia/version | grep Module | awk '{print $8}' | cut -d '.' -f 1)
    On DGX-based systems or systems with nvidia prebuilt kernels, run the following commands to install nvidia-gds with correct dependencies:
    $ sudo apt install nvidia-gds nvidia-dkms-${NVIDIA_DRV_VERSION}-server
    $ sudo modprobe nvidia_fs
    For systems with the nvidia-dkms-${NVIDIA_DRV_VERSION} package installed:
    $ sudo apt install nvidia-gds
    $ sudo modprobe nvidia_fs
  4. Verify the package installation.
    $ dpkg -s nvidia-gds
    Package: nvidia-gds
    Status: install ok installed
    Priority: optional
    Section: multiverse/devel
    Installed-Size: 7
    Maintainer: cudatools <cudatools@nvidia.com>
    Architecture: amd64
    Source: gds-ubuntu1804
    Version: 0.9.0.15-1
    Provides: gds
    Depends: libcufile0, gds-tools, nvidia-fs
    Description: Metapackage for GPU Direct Storage
    GPU Direct Storage metapackage
  5. To verify the GDS installation, run gdscheck:
    $ /usr/local/cuda-x.y/gds/tools/gdscheck.py -p
    Here is the output:
    GDS release version (beta): 0.9.0.15
    nvidia_fs version:  2.3 libcufile version: 2.3
    cuFile CONFIGURATION:
    NVMe           : Supported
    NVMeOF         : Unsupported
    SCSI           : Unsupported
    SCALEFLUX CSD  : Unsupported
    LUSTRE         : Unsupported
    NFS            : Unsupported
    WEKAFS         : Supported
    USERSPACE RDMA : Supported
    --MOFED peer direct  : enabled
    --rdma library       : Loaded (libcufile_rdma.so)
    --rdma devices       : Configured
    --rdma_device_status : Up: 1 Down: 0
    properties.use_compat_mode : 1
    properties.use_poll_mode : 0
    properties.poll_mode_max_size_kb : 4
    properties.max_batch_io_timeout_msecs : 5
    properties.max_direct_io_size_kb : 16384
    properties.max_device_cache_size_kb : 131072
    properties.max_device_pinned_mem_size_kb : 33554432
    properties.posix_pool_slab_size_kb : 4096 1048576 16777216
    properties.posix_pool_slab_count : 128 64 32
    properties.rdma_peer_affinity_policy : RoundRobin
    fs.generic.posix_unaligned_writes : 0
    fs.lustre.posix_gds_min_kb: 0
    fs.weka.rdma_write_support: 0
    profile.nvtx : 0
    profile.cufile_stats : 3
    miscellaneous.api_check_aggressive : 0
    GPU INFO:
    GPU index 0 Tesla T4 bar:1 bar size (MB):256 supports GDS
    GPU index 1 Tesla T4 bar:1 bar size (MB):256 supports GDS
    GPU index 2 Tesla T4 bar:1 bar size (MB):256 supports GDS
    GPU index 3 Tesla T4 bar:1 bar size (MB):256 supports GDS
    IOMMU : disabled
    Platform verification succeeded

13.13.2. Uninstall the Debian Package

Here is the information about how to uninstall the Debian package.

Before you install GDS version 0.9, you must uninstall GDS version 0.8 or earlier, which removes the following packages:
  • nvidia-fs_2.2_amd64.deb
  • gds_0.8.0_amd64.deb
  • gds-tools_0.8.0_amd64.deb
To uninstall the Debian package, run the following command:
$ sudo dpkg --purge nvidia-gds

14. GDS Library Tracing

The GDS Library has USDT (static tracepoints), which can be used with Linux tools such as lttng, bcc/bpf, perf. This section assumes familiarity with these tools.

The examples in this section show tracing by using the bcc/bpf tracing facility. GDS does not ship these tracing tools. Refer to Installing BCC for more information about installing bcc/bpf tools. Users must have root privileges to install.

Note: The user must also have sudo access to use these tools.

14.1.1. Example: Tracepoint Arguments

Here are examples of tracepoint arguments.

cufio_px_read

This tracepoint tracks POSIX IO reads and takes the following arguments:
  • Arg1: File descriptor
  • Arg2: File offset
  • Arg3: Read size
  • Arg4: GPU Buffer offset
  • Arg5: Return value
  • Arg6: GPU ID for which IO is done

cufio_rdma_read

This tracepoint tracks IO reads going through the WekaFS filesystem and takes the following arguments:
  • Arg1: File descriptor
  • Arg2: File offset
  • Arg3: Read size
  • Arg4: GPU Buffer offset
  • Arg5: Return value
  • Arg6: GPU ID for which IO is done
  • Arg7: Is the IO done to GPU Bounce buffer

cufio_gds_read

This tracepoint tracks IO reads going through the GDS kernel driver and takes the following arguments:
  • Arg1: File descriptor
  • Arg2: File offset
  • Arg3: Read size
  • Arg4: GPU Buffer offset
  • Arg5: Return value
  • Arg6: GPU ID for which IO is done
  • Arg7: Is the IO done to GPU Bounce buffer

cufio_gds_read_async

This tracepoint tracks IO reads going through the GDS kernel driver when poll mode is set, and takes the following arguments:
  • Arg1: File descriptor
  • Arg2: File offset
  • Arg3: Read size
  • Arg4: GPU Buffer offset
  • Arg5: Return value
  • Arg6: GPU ID for which IO is done
  • Arg7: Is the IO done to GPU Bounce buffer

cufio_px_write

This tracepoint tracks POSIX IO writes and takes the following arguments:
  • Arg1: File descriptor
  • Arg2: File offset
  • Arg3: Write size
  • Arg4: GPU Buffer offset
  • Arg5: Return value
  • Arg6: GPU ID for which IO is done

cufio_gds_write

This tracepoint tracks IO writes going through the GDS kernel driver and takes the following arguments:
  • Arg1: File descriptor
  • Arg2: File offset
  • Arg3: Write size
  • Arg4: GPU Buffer offset
  • Arg5: Return value
  • Arg6: GPU ID for which IO is done
  • Arg7: Is the IO done to GPU Bounce buffer

cufio_gds_write_async

This tracepoint tracks IO writes going through the GDS kernel driver when poll mode is set, and takes the following arguments:
  • Arg1: File descriptor
  • Arg2: File offset
  • Arg3: Write size
  • Arg4: GPU Buffer offset
  • Arg5: Return value
  • Arg6: GPU ID for which IO is done
  • Arg7: Is the IO done to GPU Bounce buffer

cufio-internal-write-bb

This tracepoint tracks IO writes going through internal GPU bounce buffers and is specific to the EXAScaler® filesystem and block device-based filesystems. This tracepoint is in the hot IO path, tracking every IO, and takes the following arguments:
  • Arg1: Application GPU (GPU ID)
  • Arg2: GPU Bounce buffer (GPU ID)
  • Arg3: File descriptor
  • Arg4: File offset
  • Arg5: Write size
  • Arg6: Application GPU Buffer offset
  • Arg7: Size in bytes transferred from the application GPU buffer to the target GPU bounce buffer.
  • Arg8: Total size in bytes transferred so far through the bounce buffer.
  • Arg9: Pending IO count in this transaction

cufio-internal-read-bb

This tracepoint tracks IO reads going through internal GPU bounce buffers and is specific to the EXAScaler® filesystem and block device-based filesystems. This tracepoint is in the hot IO path, tracking every IO, and takes the following arguments:
  • Arg1: Application GPU (GPU ID)
  • Arg2: GPU bounce buffer (GPU ID)
  • Arg3: File descriptor
  • Arg4: File offset
  • Arg5: Read size
  • Arg6: Application GPU Buffer offset
  • Arg7: Size in bytes transferred from the GPU bounce buffer to the application GPU buffer.
  • Arg8: Total size in bytes transferred so far through the bounce buffer.
  • Arg9: Pending IO count in this transaction.

cufio-internal-bb-done

This tracepoint tracks all IO going through bounce buffers and is invoked when IO is completed through bounce buffers. The tracepoint can be used to track all IO going through bounce buffers and takes the following arguments:
  • Arg1: IO-type READ - 0, WRITE - 1
  • Arg2: Application GPU (GPU ID)
  • Arg3: GPU Bounce buffer (GPU ID)
  • Arg4: File descriptor
  • Arg5: File offset
  • Arg6: Read/Write size
  • Arg7: GPU buffer offset
  • Arg8: IO is unaligned (1 - True, 0 - False)
  • Arg9: Buffer is registered (1 - True, 0 - False)

cufio-internal-io-done

This tracepoint tracks all IO going through the GDS kernel driver. This tracepoint is invoked when the IO is completed and takes the following arguments:
  • Arg1: IO-type READ - 0, WRITE - 1
  • Arg2: GPU ID for which IO is done
  • Arg3: File descriptor
  • Arg4: File offset
  • Arg5: Total bytes transferred

cufio-internal-map

This tracepoint tracks GPU buffer registration using cuFileBufRegister and takes the following arguments:
  • Arg1: GPU ID
  • Arg2: GPU Buffer size for which registration is done
  • Arg3: max_direct_io_size that was used for this buffer.

    The shadow memory size is set in the /etc/cufile.json file.

  • Arg4: boolean value indicating whether buffer is pinned.
  • Arg5: boolean value indicating whether this buffer is a GPU bounce buffer.
  • Arg6: GPU offset.
The data type of each argument in these tracepoints can be found by running the following command:
# ./tplist -l /usr/local/cuda-x.y/lib/libcufile.so -vvv | grep cufio_px_read -A 7
Here is the output:
cufio:cufio_px_read [sema 0x0]
  location #1 /usr/local/cuda-x.y/lib/libcufile.so 0x16437c
    argument #1 4 signed   bytes @ dx
    argument #2 8 signed   bytes @ cx
    argument #3 8 unsigned bytes @ si
    argument #4 8 signed   bytes @ di
    argument #5 8 signed   bytes @ r8
    argument #6 4 signed   bytes @ ax

14.2. Example: Track the IO Activity of a Process that Issues cuFileRead/ cuFileWrite

This example provides information about how you can track the IO activity of a process that issues the cuFileRead or the cuFileWrite API.

  1. Run the following command.
    # ./funccount u:/usr/local/cuda-x.y/lib/libcufile.so:cufio_* -i 1 -T -p 59467
    Tracing 7 functions for "u:/usr/local/cuda-x.y/lib/libcufile.so:cufio_*"... Hit Ctrl-C to end.
    
    
  2. Review the output, for example:
    cufio_gds_write                          1891
    
    16:21:13
    FUNC                                    COUNT
    cufio_gds_write                          1852
    
    16:21:14
    FUNC                                    COUNT
    cufio_gds_write                          1865
    ^C
    16:21:14
    FUNC                                    COUNT
    cufio_gds_write                          1138
    Detaching…

14.3. Example: Display the IO Pattern of all the IOs that Go Through GDS

This example provides information about how you can display and understand the IO pattern of all IOs that go through GDS.

  1. Run the following command:
    # ./argdist -C 'u:/usr/local/cuda-x.y/lib/libcufile.so:cufio_gds_read():size_t:arg3# Size Distribution'
    
  2. Review the output, for example:
    [16:38:22]
    IO Size Distribution
        COUNT      EVENT
        4654       arg3 = 1048576
        7480       arg3 = 131072
        9029       arg3 = 65536
        13561      arg3 = 8192
        14200      arg3 = 4096
    [16:38:23]
    IO Size Distribution
        COUNT      EVENT
        4682       arg3 = 1048576
        7459       arg3 = 131072
        9049       arg3 = 65536
        13556      arg3 = 8192
        14085      arg3 = 4096
    [16:38:24]
    IO Size Distribution
        COUNT      EVENT
        4678       arg3 = 1048576
        7416       arg3 = 131072
        9018       arg3 = 65536
        13536      arg3 = 8192
        14082      arg3 = 4096
The 1M, 128K, 64K, 8K, and 4K IOs are all completing reads through GDS.

14.4. Understand the IO Pattern of a Process

You can review the output to understand the IO pattern of a process.

  1. Run the following command.
    # ./argdist -C 'u:/usr/local/cuda-x.y/lib/libcufile.so:cufio_gds_read():size_t:arg3#IO 
    Size Distribution' -p 59702
    
  2. Review the output.
    [16:40:46]
    IO Size Distribution
        COUNT      EVENT
        20774      arg3 = 4096
    [16:40:47]
    IO Size Distribution
        COUNT      EVENT
        20727      arg3 = 4096
    [16:40:48]
    IO Size Distribution
        COUNT      EVENT
        20713      arg3 = 4096
Process 59702 issues 4K IOs.

14.5. Understand the IO Pattern of a Process with the File Descriptor on Different GPUs

You can review the output to understand the IO pattern of a process across file descriptors on different GPUs.

  1. Run the following command.
    # ./argdist -C 
    'u:/usr/local/cuda-x.y/lib/libcufile.so:cufio_gds_read():int,int,size_t:arg1,
    arg6,arg3#IO Size Distribution arg1=fd, arg6=GPU# arg3=IOSize' -p `pgrep -n gdsio`
  2. Review the output, for example:
    [17:00:03]
    u:/usr/local/cuda-x.y/lib/libcufile.so:cufio_gds_read():int,int,size_t:arg1,arg6,arg3#IO Size Distribution arg1=fd, arg6=GPU# arg3=IOSize
        COUNT      EVENT
        5482       arg1 = 87, arg6 = 2, arg3 = 131072
        7361       arg1 = 88, arg6 = 1, arg3 = 65536
        9797       arg1 = 89, arg6 = 0, arg3 = 8192
        11145      arg1 = 74, arg6 = 3, arg3 = 4096
    [17:00:04]
    u:/usr/local/cuda-x.y/lib/libcufile.so:cufio_gds_read():int,int,size_t:arg1,arg6,arg3#IO Size Distribution arg1=fd, arg6=GPU# arg3=IOSize
        COUNT      EVENT
        5471       arg1 = 87, arg6 = 2, arg3 = 131072
        7409       arg1 = 88, arg6 = 1, arg3 = 65536
        9862       arg1 = 89, arg6 = 0, arg3 = 8192
        11079      arg1 = 74, arg6 = 3, arg3 = 4096
    [17:00:05]
    u:/usr/local/cuda-x.y/lib/libcufile.so:cufio_gds_read():int,int,size_t:arg1,arg6,arg3#IO Size Distribution arg1=fd, arg6=GPU# arg3=IOSize
        COUNT      EVENT
        5490       arg1 = 87, arg6 = 2, arg3 = 131072
        7402       arg1 = 88, arg6 = 1, arg3 = 65536
        9827       arg1 = 89, arg6 = 0, arg3 = 8192
        11131      arg1 = 74, arg6 = 3, arg3 = 4096
gdsio issues reads on four files (fd=87, 88, 89, and 74) to GPUs 2, 1, 0, and 3, with IO sizes of 128K, 64K, 8K, and 4K respectively.

14.6. Determine the IOPS and Bandwidth for a Process in a GPU

You can determine the IOPS and bandwidth for each process in a GPU.

  1. Run the following command.
    #./argdist -C 
    'u:/usr/local/cuda-x.y/lib/libcufile.so:cufio_gds_read():int,int,size_t:arg1,
    arg6,arg3:arg6==0||arg6==3#IO Size Distribution arg1=fd, arg6=GPU# 
    arg3=IOSize' -p `pgrep -n gdsio`
    
  2. Review the output.
    [17:49:33]
    u:/usr/local/cuda-x.y/lib/libcufile.so:cufio_gds_read():int,int,size_t:arg1,arg6,arg3:arg6==0||arg6==3#IO Size Distribution arg1=fd, arg6=GPU# arg3=IOSize
        COUNT      EVENT
        9826       arg1 = 89, arg6 = 0, arg3 = 8192
        11168      arg1 = 86, arg6 = 3, arg3 = 4096
    [17:49:34]
    u:/usr/local/cuda-x.y/lib/libcufile.so:cufio_gds_read():int,int,size_t:arg1,arg6,arg3:arg6==0||arg6==3#IO Size Distribution arg1=fd, arg6=GPU# arg3=IOSize
        COUNT      EVENT
        9815       arg1 = 89, arg6 = 0, arg3 = 8192
        11141      arg1 = 86, arg6 = 3, arg3 = 4096
    [17:49:35]
    u:/usr/local/cuda-x.y/lib/libcufile.so:cufio_gds_read():int,int,size_t:arg1,arg6,arg3:arg6==0||arg6==3#IO Size Distribution arg1=fd, arg6=GPU# arg3=IOSize
        COUNT      EVENT
        9914       arg1 = 89, arg6 = 0, arg3 = 8192
        11194      arg1 = 86, arg6 = 3, arg3 = 4096
  • gdsio is doing IO on all 4 GPUs, and the output is filtered for GPU 0 and GPU 3.
  • For GPU 0, 9826 IOPS at an 8K block size gives a bandwidth of approximately 80 MB/s.
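The arithmetic behind that estimate, as a quick Python check:

```python
# Bandwidth = IOPS x block size; for GPU 0 in the output above,
# 9826 IOPS at an 8 KiB block size works out to roughly 80 MB/s.
iops = 9826
block_size = 8 * 1024  # 8 KiB in bytes

bandwidth_mb_per_s = iops * block_size / 1e6
print(round(bandwidth_mb_per_s))  # 80
```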

14.7. Display the Frequency of Reads by Processes that Issue cuFileRead

You can display information about the frequency of reads by processes that issue the cuFileRead API.

  1. Run the following command.
    #./argdist -C 'r:/usr/local/cuda-x.y/lib/libcufile.so:cuFileRead():u32:$PID'
    
  2. Review the output, for example:
    [17:58:01]
    r:/usr/local/cuda-x.y/lib/libcufile.so:cuFileRead():u32:$PID
        COUNT      EVENT
        31191      $PID = 60492
        31281      $PID = 60593
    [17:58:02]
    r:/usr/local/cuda-x.y/lib/libcufile.so:cuFileRead():u32:$PID
        COUNT      EVENT
        11741      $PID = 60669
        30447      $PID = 60593
        30670      $PID = 60492
    [17:58:03]
    r:/usr/local/cuda-x.y/lib/libcufile.so:cuFileRead():u32:$PID
        COUNT      EVENT
        29887      $PID = 60593
        29974      $PID = 60669
        30017      $PID = 60492
    [17:58:04]
    r:/usr/local/cuda-x.y/lib/libcufile.so:cuFileRead():u32:$PID
        COUNT      EVENT
        29972      $PID = 60593
        30062      $PID = 60492
        30068      $PID = 60669

14.8. Display the Frequency of Reads when cuFileRead Takes More than 0.1 ms

You can display the frequency of reads when the cuFileRead API takes more than 0.1 ms.

  1. Run the following command.
    #./argdist -C 'r:/usr/local/cuda-x.y/lib/libcufile.so:cuFileRead():u32:$PID:$latency > 100000'
  2. Review the output, for example:
    [18:07:35]
    r:/usr/local/cuda-x.y/lib/libcufile.so:cuFileRead():u32:$PID:$latency > 100000
        COUNT      EVENT
        17755      $PID = 60772
    [18:07:36]
    r:/usr/local/cuda-x.y/lib/libcufile.so:cuFileRead():u32:$PID:$latency > 100000
        COUNT      EVENT
        17884      $PID = 60772
    [18:07:37]
    r:/usr/local/cuda-x.y/lib/libcufile.so:cuFileRead():u32:$PID:$latency > 100000
        COUNT      EVENT
        17748      $PID = 60772
    [18:07:38]
    r:/usr/local/cuda-x.y/lib/libcufile.so:cuFileRead():u32:$PID:$latency > 100000
        COUNT      EVENT
        17898      $PID = 60772
    [18:07:39]
    r:/usr/local/cuda-x.y/lib/libcufile.so:cuFileRead():u32:$PID:$latency > 100000
        COUNT      EVENT
        17811      $PID = 60772

14.9. Displaying the Latency of cuFileRead for Each Process

You can display the latency of the cuFileRead API for each process.

  1. Run the following command.
    #./funclatency /usr/local/cuda-x.y/lib/libcufile.so:cuFileRead -i 1 -T -u
  2. Review the output, for example:
    Tracing 1 functions for 
    "/usr/local/cuda-x.y/lib/libcufile.so:cuFileRead"... Hit Ctrl-C to end.
    

    Here are two processes, with PID 60999 and PID 60894, that are issuing cuFileRead:

    18:12:11
    Function = cuFileRead [60999]
         usecs               : count     distribution
             0 -> 1          : 0        |                                        |
             2 -> 3          : 0        |                                        |
             4 -> 7          : 0        |                                        |
             8 -> 15         : 0        |                                        |
            16 -> 31         : 0        |                                        |
            32 -> 63         : 0        |                                        |
            64 -> 127        : 17973    |****************************************|
           128 -> 255        : 13383    |*****************************           |
           256 -> 511        : 27       |                                        |
    Function = cuFileRead [60894]
         usecs               : count     distribution
             0 -> 1          : 0        |                                        |
             2 -> 3          : 0        |                                        |
             4 -> 7          : 0        |                                        |
             8 -> 15         : 0        |                                        |
            16 -> 31         : 0        |                                        |
            32 -> 63         : 0        |                                        |
            64 -> 127        : 17990    |****************************************|
           128 -> 255        : 13329    |*****************************           |
           256 -> 511        : 19       |                                        |
    
    18:12:12
    Function = cuFileRead [60999]
         usecs               : count     distribution
             0 -> 1          : 0        |                                        |
             2 -> 3          : 0        |                                        |
             4 -> 7          : 0        |                                        |
             8 -> 15         : 0        |                                        |
            16 -> 31         : 0        |                                        |
            32 -> 63         : 0        |                                        |
            64 -> 127        : 18209    |****************************************|
           128 -> 255        : 13047    |****************************            |
           256 -> 511        : 58       |                                        |
    
    Function = cuFileRead [60894]
         usecs               : count     distribution
             0 -> 1          : 0        |                                        |
             2 -> 3          : 0        |                                        |
             4 -> 7          : 0        |                                        |
             8 -> 15         : 0        |                                        |
            16 -> 31         : 0        |                                        |
            32 -> 63         : 0        |                                        |
            64 -> 127        : 18199    |****************************************|
           128 -> 255        : 13015    |****************************            |
           256 -> 511        : 46       |                                        |
           512 -> 1023       : 1        |                                        |
    

14.10. Example: Tracking the Processes that Issue cuFileBufRegister

This example provides information about how you can track processes that issue the cuFileBufRegister API.

  1. Run the following command:
    # ./trace 'u:/usr/local/cuda-x.y/lib/libcufile.so:cufio-internal-map "GPU 
    %d Size %d Bounce-Buffer %d",arg1,arg2,arg5'
  2. Review the output, for example:
    PID     TID     COMM     FUNC             -
    62624   62624   gdsio_verify    cufio-internal-map GPU 0 Size 1048576 Bounce-Buffer 1
    62659   62726   fio             cufio-internal-map GPU 0 Size 8192 Bounce-Buffer 0
    62659   62728   fio             cufio-internal-map GPU 2 Size 131072 Bounce-Buffer 0
    62659   62727   fio             cufio-internal-map GPU 1 Size 65536 Bounce-Buffer 0
    62659   62725   fio             cufio-internal-map GPU 3 Size 4096 Bounce-Buffer 0
gdsio_verify issued an IO, but it did not register GPU memory using cuFileBufRegister. As a result, the GDS library pinned a 1 MB bounce buffer on GPU 0. fio, on the other hand, issued a cuFileBufRegister of 128 KB on GPU 2.

14.11. Example: Tracking Whether the Process is Constant when Invoking cuFileBufRegister

You can track whether the process is constant when invoking the cuFileBufRegister API.

  1. Run the following command:
    # ./trace 'u:/usr/local/cuda-x.y/lib/libcufile.so:cufio-internal-map (arg5 == 0) 
    "GPU %d Size %d",arg1,arg2'
  2. Review the output, for example:
    PID     TID     COMM            FUNC             -
    444     472     cufile_sample_0 cufio-internal-map GPU 0 Size 1048576
    444     472     cufile_sample_0 cufio-internal-map GPU 0 Size 1048576
    444     472     cufile_sample_0 cufio-internal-map GPU 0 Size 1048576
    444     472     cufile_sample_0 cufio-internal-map GPU 0 Size 1048576
    444     472     cufile_sample_0 cufio-internal-map GPU 0 Size 1048576
    444     472     cufile_sample_0 cufio-internal-map GPU 0 Size 1048576
    444     472     cufile_sample_0 cufio-internal-map GPU 0 Size 1048576
    444     472     cufile_sample_0 cufio-internal-map GPU 0 Size 1048576
    444     472     cufile_sample_0 cufio-internal-map GPU 0 Size 1048576
    444     472     cufile_sample_0 cufio-internal-map GPU 0 Size 1048576
    444     472     cufile_sample_0 cufio-internal-map GPU 0 Size 1048576
    444     472     cufile_sample_0 cufio-internal-map GPU 0 Size 1048576
    444     472     cufile_sample_0 cufio-internal-map GPU 0 Size 1048576
    444     472     cufile_sample_0 cufio-internal-map GPU 0 Size 1048576
    444     472     cufile_sample_0 cufio-internal-map GPU 0 Size 1048576
As seen in this example, one thread in the process continuously issues 1 MB cuFileBufRegister calls on GPU 0. This might mean that the API is called in a loop, which might impact performance.
Note: cuFileBufRegister involves pinning GPU memory, which is an expensive operation.

14.12. Example: Monitoring IOs that are Going Through the Bounce Buffer

This example provides information about how you can monitor whether IOs are going through the bounce buffer.

  1. Run the following command:
    # ./trace 'u:/usr/local/cuda-x.y/lib/libcufile.so:cufio-internal-bb-done 
    "Application GPU %d Bounce-Buffer GPU %d Transfer Size %d Unaligned %d Registered %d", 
    arg2,arg3,arg8,arg9,arg10'
  2. Review the output, for example:
    PID TID COMM  FUNC             -
    1013 1041  gdsio App-GPU 0 Bounce-Buffer GPU 0 Transfer Size 1048576 Unaligned 1 Registered 0
    1013 1042  gdsio App-GPU 3 Bounce-Buffer GPU 3 Transfer Size 1048576 Unaligned 1 Registered 0
    1013 1041  gdsio App-GPU 0 Bounce-Buffer GPU 0 Transfer Size 1048576 Unaligned 1 Registered 0
    1013 1042  gdsio App-GPU 3 Bounce-Buffer GPU 3 Transfer Size 1048576 Unaligned 1 Registered 0

    The gdsio app has two threads, and both are doing unaligned IO on GPU 0 and GPU 3. Because the IO is unaligned, the bounce buffers are allocated from the same application GPUs.
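Whether an IO takes the bounce-buffer path can often be predicted by checking alignment. Here is a minimal sketch of that check; the 4 KiB constant matches the alignment requirement described in this guide, and the helper name is illustrative, not part of the cuFile API:

```python
# Sketch: check whether cuFile IO parameters are 4 KiB aligned.
# An IO is "unaligned" if the size, file offset, or device pointer
# is not a multiple of 4096; such IOs may be staged through a
# bounce buffer on the application GPU.

ALIGNMENT = 4096  # 4 KiB

def is_unaligned(size: int, file_offset: int, dev_ptr: int) -> bool:
    """Return True if any IO parameter breaks 4 KiB alignment."""
    return any(v % ALIGNMENT != 0 for v in (size, file_offset, dev_ptr))

# A 1 MiB transfer at offset 0 from an aligned pointer is aligned:
print(is_unaligned(1048576, 0, 0x7f0000000000))   # False
# The same transfer at a 512-byte file offset is unaligned:
print(is_unaligned(1048576, 512, 0x7f0000000000)) # True
```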

14.13. Example: Tracing cuFileRead and cuFileWrite Failures with Error Codes and Time of Failure

This example shows you how to trace cuFileRead and cuFileWrite failures and print the error codes and time of each failure.

  1. Run the following command:
    # ./trace 'r:/usr/local/cuda-x.y/lib/libcufile.so:cuFileRead ((int)retval < 0) 
    "cuFileRead failed: %d", retval' 'r:/usr/local/cuda-x.y/lib/libcufile.so:cuFileWrite ((int)retval < 0) 
    "cuFileWrite failed: %d", retval' -T
  2. Review the output, for example:
    TIME      PID      TID               COMM                     FUNC             -
    23:22:16 4201    4229    gdsio           cuFileRead       cuFileRead failed: -5
    23:22:42 4237    4265    gdsio           cuFileWrite      cuFileWrite failed: -5
In this example, two failures were observed, each returning EIO (-5), along with their timestamps.
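The negative return values map to standard Linux errno codes; a quick check with the Python stdlib errno module confirms that -5 corresponds to EIO:

```python
import errno
import os

# cuFileRead/cuFileWrite return negative errno values on IO errors.
# Decode the -5 seen in the trace output above:
code = 5                       # magnitude of the -5 return value
print(errno.errorcode[code])   # EIO
print(os.strerror(code))       # Input/output error
```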

14.14. Example: User-Space Statistics for Each GDS Process

This example provides information about the user-space statistics for each GDS process.

The cuFile library exports user-level statistics in the form of API-level counters for each process. In addition to the regular GDS IO path, there are paths for user-space file systems and IO compatibility modes that use POSIX reads/writes, which do not go through the nvidia-fs driver. The user-level statistics are especially useful in these scenarios.

There is a verbosity level for the counters, which users can enable and set through the JSON configuration file. The following table describes the verbosity levels.

Table 8. User-Space Statistics for Each GDS Process
Level Description
Level 0 cuFile stats are disabled.
Level 1 cuFile stats report only Global Counters, such as overall throughput, average latency, and error counts.
Level 2 In addition to the Global Counters, an IO Size histogram is reported for information on access patterns.
Level 3 In addition to the above, per-GPU counters and live usage of cuFile internal pool buffers are reported.

Here is the JSON configuration key to enable GDS statistics by using the /etc/cufile.json file:
"profile": {
                // cufile stats level(0-3)
                "cufile_stats": 3
            },
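The stats level can be read back programmatically. Here is a hedged sketch (the function name is illustrative); note that cufile.json permits //-style comments, which the stdlib json parser rejects, so the sketch strips them first:

```python
import json
import re

def read_cufile_stats_level(path: str) -> int:
    """Read profile.cufile_stats from a cufile.json-style file.

    cufile.json allows //-style comments, which the stdlib json
    parser rejects, so strip them before parsing (a simplification:
    this would also strip "//" occurring inside string values).
    """
    with open(path) as f:
        text = re.sub(r"//[^\n]*", "", f.read())
    level = json.loads(text).get("profile", {}).get("cufile_stats", 0)
    if not 0 <= level <= 3:
        raise ValueError(f"cufile_stats must be 0-3, got {level}")
    return level
```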

14.15. Example: Viewing GDS User-Level Statistics for a Process

This example provides information about how you can use the gds_stats tool to display user-level statistics for a process.

Prerequisite: Before you run the tool, ensure that the IO application is active, and the gds_stats has the same user permissions as the application.

The gds_stats tool can be used to read statistics that are exported by libcufile.so.

The output of the statistics is displayed in the standard output. If the user permissions are not the same, there might not be sufficient privilege to view the stats. A future version of gds_stats will integrate nvidia-fs kernel level statistics into this tool.

To use the tool, run the following command:
$ /usr/local/cuda-x.y/tools/gds_stats -p <pidof application> -l <stats_level(1-3)>

When specifying the statistics level, ensure that the corresponding level (profile.cufile_stats) is also enabled in the /etc/cufile.json file.

The GDS user-level statistics are logged once to the cufile.log file when the library is shut down or when the cuFileDriverClose API is run. To view statistics in the log file, set the log level to INFO.

14.16. Example: Displaying Sample User-Level Statistics for each GDS Process

This example shows you how to display sample user-level statistics for each GDS process.

  1. Run the following command:
    $ ./gds_stats -p 23198 -l 3
  2. Review the sample output, for example:
    cuFile STATS VERSION : 1
    GLOBAL STATS:
    Total Files: 1
    Total Read Errors : 0
    Total Read Size (MB): 0
    Read BandWidth (GiB/s): 0
    Avg Read Latency (us): 0
    Total Write Errors : 0
    Total Write Size (MB): 175
    Write BandWidth (GiB/s): 0.0145721
    Avg Write Latency (us): 494
    READ-WRITE SIZE HISTOGRAM :
    0-4(KB): 0  0
    4-8(KB): 0  44942
    8-16(KB): 0  0
    16-32(KB): 0  0
    32-64(KB): 0  0
    64-128(KB): 0  0
    128-256(KB): 0  0
    256-512(KB): 0  0
    512-1024(KB): 0  0
    1024-2048(KB): 0  0
    2048-4096(KB): 0  0
    4096-8192(KB): 0  0
    8192-16384(KB): 0  0
    16384-32768(KB): 0  0
    32768-65536(KB): 0  0
    65536-...(KB): 0  0
    PER_GPU STATS
    GPU 0 Read: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0.0145645 n=44942 posix=0 unalign=0 err=0 mb=175 BufRegister: n=2 err=0 free=0 mb=0
    GPU 1 Read: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 BufRegister: n=0 err=0 free=0 mb=0
    GPU 2 Read: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 BufRegister: n=0 err=0 free=0 mb=0
    GPU 3 Read: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 BufRegister: n=0 err=0 free=0 mb=0
    GPU 4 Read: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 BufRegister: n=0 err=0 free=0 mb=0
    GPU 5 Read: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 BufRegister: n=0 err=0 free=0 mb=0
    GPU 6 Read: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 BufRegister: n=0 err=0 free=0 mb=0
    GPU 7 Read: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 BufRegister: n=0 err=0 free=0 mb=0
    GPU 8 Read: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 BufRegister: n=0 err=0 free=0 mb=0
    GPU 9 Read: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 BufRegister: n=0 err=0 free=0 mb=0
    GPU 10 Read: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 BufRegister: n=0 err=0 free=0 mb=0
    GPU 11 Read: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 BufRegister: n=0 err=0 free=0 mb=0
    GPU 12 Read: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 BufRegister: n=0 err=0 free=0 mb=0
    GPU 13 Read: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 BufRegister: n=0 err=0 free=0 mb=0
    GPU 14 Read: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 BufRegister: n=0 err=0 free=0 mb=0
    GPU 15 Read: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0 n=0 posix=0 unalign=0 err=0 mb=0 BufRegister: n=0 err=0 free=0 mb=0
    PER_GPU POOL BUFFER STATS:
    GPU : 12 pool_size_mb : 16 usage : 1/2 used_mb : 8

15. User-Space Counters in GPUDirect Storage

This section provides information about user-space counters in GDS.

Table 9. Global cuFile Counters
Counter Name Description
Total Files Total number of files registered successfully with cuFileHandleRegister. This is a cumulative counter; cuFileHandleDeregister does not change it.

Total Read Errors Total number of cuFileRead errors.
Total Read Size Total number of bytes read in MB using cuFileRead.
Read Bandwidth Average overall read throughput in GiB/s over one second time period.
Avg Read Latency Overall average read latency in microseconds over one second time period.
Total Write Errors Total number of cuFileWrite errors.
Total Write Size Total number of bytes written in MB using cuFileWrite.
Write Bandwidth Overall average write throughput in GiB/s over one second time period.
Avg Write Latency Overall average write latency in microseconds over one second time period.
Table 10. IO-Size Histogram
Counter Name Description
Read Distribution of number of cuFileRead requests based on IO size. Bin Size uses a 4K log scale.
Write Distribution of number of cuFileWrite requests based on IO size. Bin Size uses a 4K log scale.
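The "4K log scale" binning doubles the bin width starting at 4 KiB, so bin 0 covers [0, 4K), bin 1 covers [4K, 8K), and so on, matching the histogram labels in the sample output earlier. A small sketch of the binning (the function name is illustrative):

```python
# Sketch: map an IO size in bytes to the IO-size histogram bin used
# by the cuFile stats ("4K log scale"): bin 0 is [0, 4K), bin 1 is
# [4K, 8K), bin 2 is [8K, 16K), and so on, doubling each bin.

def histogram_bin(size_bytes: int) -> int:
    bin_idx = 0
    upper = 4096               # upper bound of bin 0 (4 KiB)
    while size_bytes >= upper:
        bin_idx += 1
        upper *= 2
    return bin_idx

print(histogram_bin(4096))     # 1 -> the "4-8(KB)" bin
print(histogram_bin(1048576))  # 9 -> the "1024-2048(KB)" bin
```

This is consistent with the earlier gdsio sample, where 4 KB writes were counted in the 4-8(KB) bin.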
Table 11. Per-GPU Counters
Counter Name Description
Read.bw/Write.bw Average GPU read/write bandwidth in GiB/s per GPU.
Read.util/Write.util Average per GPU read/write utilization in %. If A is the total length of time the resource was busy in a time interval T, then utilization is defined as A/T. Here the utilization is reported over one second period.
Read.n/Write.n Number of cuFileRead/cuFileWrite requests per GPU.
Read.posix/Write.posix Number of cuFileRead/cuFileWrite using POSIX read/write APIs per GPU.
Read.unalign/Write.unalign Number of cuFileRead/cuFileWrite per GPU which have at least one IO parameter not 4K aligned. It can be size, file offset or device pointer.
Read.error/Write.error Number of cuFileRead/cuFileWrite errors per GPU.
Read.mb/Write.mb Total number of bytes in MB read/written using cuFileRead/cuFileWrite per GPU.
BufRegister.n Total number of cuFileBufRegister calls per GPU.
BufRegister.err Total number of errors per GPU seen with cuFileBufRegister.
BufRegister.free Total number of cuFileBufDeregister calls per GPU.
BufRegister.mb Total number of bytes in MB currently registered per GPU.
Table 12. Bounce Buffer Counters Per GPU
Counter Name Description
pool_size_mb Total size of buffers allocated for per GPU bounce buffers in MB.
used_mb Total size of buffers currently used per GPU for bounce buffer based IO.
usage Fraction of bounce buffers used currently.

15.1. Distribution of IO Usage in Each GPU

Here is some information about how to display the distribution of IO usage in each GPU.

The cuFile library provides a per-GPU IO utilization metric for the application. This metric indicates the percentage of time that the cuFile resource was busy with IO.

To run a single-threaded gdsio test, run the following command:
$./gdsio -f /mnt/md1/test -d 0 -n 0 -w 1 -s 10G -i 4K -x 0 -I 1
Here is the sample output:
PER_GPU STATS
GPU 0 Read: bw=0 util(%)=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0.154598 
util(%)=89 n=510588 posix=0 unalign=0 err=0 mb=1994 BufRegister: n=1 err=0 free=0 mb=0

The util metric shows that the application was performing IO on GPU 0 for 89% of the time.

To run a two-threaded gdsio test, run the following command:
$./gdsio -f /mnt/md1/test -d 0 -n 0 -w 2 -s 10G -i 4K -x 0 -I 1
Here is the sample output:
PER_GPU STATS
GPU 0 Read: bw=0 util(%)=0 n=0 posix=0 unalign=0 err=0 mb=0 Write: bw=0.164967 util(%)=186 n=140854 posix=0 unalign=0 err=0 mb=550 BufRegister: n=2 err=0 free=0 mb=0 

Now the utilization is ~186%, which indicates the degree of parallelism with which the GPU is used for IO.
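The utilization arithmetic follows directly from the definition above, util = A/T, where A sums busy time across all threads on the GPU, so multiple busy threads can push utilization past 100%. A minimal sketch (the helper name is illustrative):

```python
# Sketch: per-GPU utilization as defined above, util = A / T, where
# A is total busy time across all threads doing IO on the GPU and T
# is the measurement interval. With multiple threads, A can exceed
# T, so utilization above 100% indicates IO parallelism.

def utilization_pct(busy_times_ms: list[int], interval_ms: int) -> float:
    return 100 * sum(busy_times_ms) / interval_ms

# One thread busy 890 ms of a 1 s interval -> 89% (single-thread run):
print(utilization_pct([890], 1000))        # 89.0
# Two threads busy 930 ms each -> 186% (two-thread run):
print(utilization_pct([930, 930], 1000))   # 186.0
```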

16. User-Space RDMA Counters in GPUDirect Storage

This section provides information about user-space RDMA counters in GDS.

The library provides counters to monitor RDMA traffic at a per-GPU level; these counters require the cuFile stats verbosity level to be set to 3.

The per-GPU RDMA counter table provides the following information:
  • Each column stores the total number of bytes that are sent/received between a GPU and a NIC.
  • Each row shows the distribution of the RDMA load for a GPU across all NICs.
  • The order of entries in each row reflects the affinity that the GPU has with each NIC.

    Ideally, all traffic should be routed through the NIC with the best affinity, that is, the NIC closest to the GPU, as shown in Example 1 in cuFile RDMA IO Counters (PER_GPU RDMA STATS).

In the annotation of each NIC entry in the table, the major number is the pci-distance, in number of hops, between the GPU and the NIC, and the minor number indicates the current bandwidth of the NIC (link_width multiplied by pci-generation). The NICs that the GPUs use for RDMA are loaded from the rdma_dev_addr_list property in cufile.json:
"rdma_dev_addr_list": [ 
      "172.172.1.240",
      "172.172.1.241",
      "172.172.1.242",
      "172.172.1.243",
      "172.172.1.244",
      "172.172.1.245",
      "172.172.1.246",
      "172.172.1.247" ],

Each IP address corresponds to an IB device that appears as a column entry in the RDMA counter table.
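Each table entry has the form nic_name (pci-distance:bandwidth):bytes. Here is a parsing sketch; the format is inferred from Example 1 below, so treat the regular expression as an assumption rather than a documented format:

```python
import re

# Sketch: decode a PER_GPU RDMA STATS entry of the form
#   "mlx5_3 (3:48):6293"
# where 3 is the pci-distance (hops) between GPU and NIC, 48 is the
# NIC bandwidth indicator (link_width * pci-generation), and 6293 is
# the number of bytes sent/received. Format inferred from Example 1.

ENTRY_RE = re.compile(r"(\w+)\s*\((\d+):(\d+)\):(\d+)")

def parse_entry(entry: str) -> dict:
    m = ENTRY_RE.fullmatch(entry.strip())
    if m is None:
        raise ValueError(f"unrecognized RDMA counter entry: {entry!r}")
    nic, hops, bw, nbytes = m.groups()
    return {"nic": nic, "pci_distance": int(hops),
            "bandwidth_indicator": int(bw), "bytes": int(nbytes)}

print(parse_entry("mlx5_3 (3:48):6293"))
```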

16.1. cuFile RDMA IO Counters (PER_GPU RDMA STATS)

Here is some information about cuFile RDMA IO counters.

Table 13. cuFile RDMA IO Counters (PER_GPU RDMA STATS)
Entry Description
GPU Bus device function
NIC
+)Bus device function
+)Device Attributes
   ++)pci-distance between GPU and NIC
   ++)device bandwidth indicator
+)Send/Receive bytes
Table 14. Example 1
GPU 0000:34:00.0 :
mlx5_3 (3:48):6293
mlx5_5 (7:48):0
mlx5_15 (138:48):0
mlx5_19 (138:48):0
mlx5_17 (138:48):0
mlx5_9 (138:48):0 
mlx5_13 (138:48):0
mlx5_7 (138:12):0
GPU 0000:36:00.0 :
mlx5_3 (3:48):6077
mlx5_5 (7:48):1858
mlx5_15 (138:48):0
mlx5_19 (138:48):0
mlx5_17 (138:48):0
mlx5_9 (138:48):0 
mlx5_13 (138:48):0
mlx5_7 (138:12):0
GPU 0000:3b:00.0
mlx5_5 (3:48):5467
mlx5_3 (7:48):0
mlx5_15 (138:48):0
mlx5_19 (138:48):0
mlx5_17 (138:48):0
mlx5_9 (138:48):0
mlx5_13 (138:48):0
mlx5_7 (138:12):0
GPU 0000:57:00.0
mlx5_7 (3:12):4741
mlx5_9 (7:48):0
mlx5_15 (138:48):0
mlx5_19 (138:48):0
mlx5_5 (138:48):0
mlx5_17 (138:48):0
mlx5_13 (138:48):0
mlx5_3 (138:48):0
GPU 0000:59:00.0
mlx5_7 (3:12):4744
mlx5_9 (7:48):1473
mlx5_15 (138:48):0
mlx5_19 (138:48):0
mlx5_5 (138:48):0
mlx5_17 (138:48):0
mlx5_13 (138:48):0
mlx5_3 (138:48):0
GPU 0000:5c:00.0
mlx5_9 (3:48):4707
mlx5_7 (7:12):0
mlx5_15 (138:48):0
mlx5_19 (138:48):0
mlx5_5 (138:48):0
mlx5_17 (138:48):0
mlx5_13 (138:48):0
mlx5_3 (138:48):0
GPU 0000:5e:00.0
mlx5_9 (3:48):4700
mlx5_7 (7:12):0
mlx5_15 (138:48):0
mlx5_19 (138:48):0
mlx5_5 (138:48):0
mlx5_17 (138:48):0
mlx5_13 (138:48):0
mlx5_3 (138:48):0

16.2. cuFile RDMA Memory Registration Counters (RDMA MRSTATS)

Here is some information about cuFile RDMA memory registration counters.

Table 15. cuFile RDMA Memory Registration Counters (RDMA MRSTATS)
Entry Description
peer name System name of the NIC.
nr_mrs Count of active memory registrations per NIC.
mr_size(mb) Total size of active memory registrations per NIC, in MB.
Table 16. Example 2
peer name nr_mrs mr_size (mb)
mlx5_3 128 128
mlx5_5 128 128
mlx5_11 0 0
mlx5_1 0 0
mlx5_15 128 128
mlx5_19 128 128
mlx5_17 128 128
mlx5_9 128 128
mlx5_13 128 128
mlx5_7 128 128

Notices

Notice

This document is provided for information purposes only and shall not be regarded as a warranty of a certain functionality, condition, or quality of a product. NVIDIA Corporation (“NVIDIA”) makes no representations or warranties, expressed or implied, as to the accuracy or completeness of the information contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall have no liability for the consequences or use of such information or for any infringement of patents or other rights of third parties that may result from its use. This document is not a commitment to develop, release, or deliver any Material (defined below), code, or functionality.

NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice.

Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete.

NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by authorized representatives of NVIDIA and customer (“Terms of Sale”). NVIDIA hereby expressly objects to applying any customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed either directly or indirectly by this document.

NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military, aircraft, space, or life support equipment, nor in applications where failure or malfunction of the NVIDIA product can reasonably be expected to result in personal injury, death, or property or environmental damage. NVIDIA accepts no liability for inclusion and/or use of NVIDIA products in such equipment or applications and therefore such inclusion and/or use is at customer’s own risk.

NVIDIA makes no representation or warranty that products based on this document will be suitable for any specified use. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer’s sole responsibility to evaluate and determine the applicability of any information contained in this document, ensure the product is suitable and fit for the application planned by customer, and perform the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer’s product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document or (ii) customer product designs.

No license, either expressed or implied, is granted under any NVIDIA patent right, copyright, or other NVIDIA intellectual property right under this document. Information published by NVIDIA regarding third-party products or services does not constitute a license from NVIDIA to use such products or services or a warranty or endorsement thereof. Use of such information may require a license from a third party under the patents or other intellectual property rights of the third party, or a license from NVIDIA under the patents or other intellectual property rights of NVIDIA.

Reproduction of information in this document is permissible only if approved in advance by NVIDIA in writing, reproduced without alteration and in full compliance with all applicable export laws and regulations, and accompanied by all associated conditions, limitations, and notices.

THIS DOCUMENT AND ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT WILL NVIDIA BE LIABLE FOR ANY DAMAGES, INCLUDING WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF NVIDIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA’s aggregate and cumulative liability towards customer for the products described herein shall be limited in accordance with the Terms of Sale for the product.

VESA DisplayPort

DisplayPort and DisplayPort Compliance Logo, DisplayPort Compliance Logo for Dual-mode Sources, and DisplayPort Compliance Logo for Active Cables are trademarks owned by the Video Electronics Standards Association in the United States and other countries.

HDMI

HDMI, the HDMI logo, and High-Definition Multimedia Interface are trademarks or registered trademarks of HDMI Licensing LLC.

OpenCL

OpenCL is a trademark of Apple Inc. used under license to the Khronos Group Inc.

Trademarks

NVIDIA, the NVIDIA logo, DGX, DGX-1, DGX-2, Tesla, and Quadro are trademarks and/or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.