External Storage

As an organization scales out its GPU-enabled data center, there are many shared storage technologies that pair well with GPU applications. Because a GPU-enabled server delivers far more performance than a traditional CPU server, take special care to ensure that the storage system does not become a bottleneck in your workflow.

Different data types require different considerations for efficient access from filesystems. For example:

  • Running parallel HPC applications may require the storage technology to support multiple processes accessing the same files simultaneously.

  • To support accelerated analytics, storage technologies often need to support many threads with quick access to small pieces of data.

  • For vision-based deep learning, accessing the images or video used in classification, object detection, or segmentation may require high streaming bandwidth, fast random access, or fast memory-mapped (mmap()) performance.

Other deep learning techniques, such as recurrent networks for text or speech, can require any combination of high bandwidth and fast random access to small files. HPC workloads typically drive high simultaneous multi-system write performance and benefit greatly from traditional scalable parallel file system solutions. Size HPC storage and network performance to meet the increased needs of dense GPU compute. It is not uncommon to see per-node performance increases of 10-40x for a 4-GPU system versus a CPU-only system for many HPC applications.

Data Analytics workloads, similar to HPC, drive high simultaneous access, but are more read-focused. Again, it is important to size Data Analytics storage to match the dense compute performance of GPU servers. As you adopt accelerated analytics technologies such as GPU-enabled in-memory databases, make sure that you can populate the database quickly from your data warehousing solution to minimize startup time when you change database schemas. This may require a 10 GbE or faster network. To support clients at this rate, you may have to revisit your data warehouse architecture to identify and eliminate bottlenecks.

Deep learning is a fast-evolving computational paradigm, and it is important to know your near- and long-term requirements to properly architect a storage system. The ImageNet database is often used as a reference when benchmarking deep learning frameworks and networks. The resolution of the images in ImageNet is 256x256. However, it is more common to find images at 1080p or 4k. Images in 1080p resolution are roughly 30 times larger than those in ImageNet. Images in 4k resolution are 4 times larger than that (about 120 times the size of ImageNet images). Uncompressed images are 5-10 times larger than compressed images. If your data cannot be compressed for some reason, for example if you are using a custom image format, the bandwidth requirements increase dramatically.
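
As a rough sanity check on these ratios, the per-frame pixel counts can be compared directly. This is a minimal sketch; the frame dimensions assume 1920x1080 for 1080p and 3840x2160 for 4k:

    # Approximate per-frame pixel ratios versus a 256x256 ImageNet image
    echo $(( (1920 * 1080) / (256 * 256) ))   # ~31x more pixels per 1080p frame
    echo $(( (3840 * 2160) / (256 * 256) ))   # ~126x more pixels per 4k frame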

For AI-driven storage, use the deep learning framework features that build databases and archives rather than accessing many small files directly; reading and writing many small files reduces performance on both the network and local file systems. Storing data in formats such as HDF5, LMDB, or TFRecord reduces metadata accesses to the filesystem, which helps performance. However, these formats can bring their own challenges, such as additional memory overhead or the need for fast mmap() performance. In practice, plan to read data at 150-200 MB/s per GPU for files at 1080p resolution, and more if you are working with 4k or uncompressed files.
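
As a back-of-the-envelope sizing sketch, the per-node read requirement simply scales with the GPU count; the GPU count and per-GPU rate below are assumptions to replace with your own numbers:

    # Rough per-node read bandwidth estimate for 1080p training data
    GPUS_PER_NODE=8         # assumption: an 8-GPU server
    MB_PER_SEC_PER_GPU=200  # upper end of the 150-200 MB/s per-GPU guideline
    echo "$(( GPUS_PER_NODE * MB_PER_SEC_PER_GPU )) MB/s aggregate read bandwidth per node"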

NFS Storage

NFS can provide a good starting point for AI workloads on small GPU server configurations with properly sized storage and network bandwidth. NFS-based solutions can scale well for larger deployments, but be aware of your single-node and aggregate bandwidth requirements and make sure that your vendor of choice can meet them. As your data center grows to need more than 10 GB/s of aggregate bandwidth, or grows to hundreds or thousands of nodes, other technologies may be more efficient and scale better.

Generally, it is a good idea to start with NFS using one or more of the Gigabit Ethernet connections on the DGX family. After this is configured, it is recommended that you run your applications and check if IO performance is a bottleneck. Typically, NFS over 10Gb/s Ethernet provides up to 1.25 GB/s of IO throughput for large block sizes. If, in your testing, you see NFS performance that is significantly lower than this, check the network between the NFS server and a DGX server to make sure there are no bottlenecks (for example, a 1 GigE network connection somewhere, a misconfigured NFS server, or a smaller MTU somewhere in the network).
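
One simple way to measure NFS throughput from the client is to time large-block writes and reads directly against the NFS mount and compare the result with the roughly 1.25 GB/s expected from a healthy 10 Gb/s link. The mount point and file size below are placeholders, and direct IO is used so that the client page cache is not what gets measured:

    # Write, then read, a 16 GiB test file on the NFS mount using 1 MiB blocks
    dd if=/dev/zero of=/mnt/nfs/ddtest bs=1M count=16384 oflag=direct
    dd if=/mnt/nfs/ddtest of=/dev/null bs=1M iflag=direct
    rm /mnt/nfs/ddtest   # remove the test file when done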

There are a number of online articles, such as Optimizing Your NFS Filesystem, that list some suggestions for tuning NFS performance on both the client and the server. For example:

  • Increasing read and write buffer sizes

  • TCP optimizations including larger buffer sizes

  • Increasing the MTU size to 9000

  • Sync vs. Async

  • NFS Server options

  • Increasing the number of NFS server daemons

  • Increasing the amount of NFS server memory

Linux is very flexible and by default most distributions are conservative about their choice of IO buffer sizes since the amount of memory on the client system is unknown. A quick example is increasing the size of the read buffers on the DGX (the NFS client). This can be achieved with the following system parameters:

  • net.core.rmem_max=67108864

  • net.core.rmem_default=67108864

  • net.core.optmem_max=67108864

The values after the variable are example values (they are in bytes). You can change these values on the NFS client and the NFS server, and then run experiments to determine if the IO performance improves.

The previous examples are for the kernel read buffer values. You can do the same thing for the write buffers by using wmem instead of rmem.

You can also tune the TCP parameters on the NFS client to make them larger. For example, you could change the net.ipv4.tcp_rmem="4096 87380 33554432" system parameter. This sets the TCP receive buffer size, for IPv4, to 4,096 bytes minimum, 87,380 bytes default, and 33,554,432 bytes maximum.
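
A minimal sketch of applying these values with sysctl on the NFS client (and, where appropriate, on the server) follows. The numbers are the example values discussed above; the tcp_wmem line is an assumed companion setting for the write path. Adding the same lines to /etc/sysctl.conf (or a file under /etc/sysctl.d/) makes them persist across reboots:

    # Apply the example socket and TCP buffer sizes at runtime
    sudo sysctl -w net.core.rmem_max=67108864
    sudo sysctl -w net.core.rmem_default=67108864
    sudo sysctl -w net.core.optmem_max=67108864
    sudo sysctl -w net.core.wmem_max=67108864
    sudo sysctl -w net.core.wmem_default=67108864
    sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 33554432"
    sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 33554432"   # assumed example values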

If you can control the NFS server, one suggestion is to increase the number of NFS daemons on the server.

One way to determine whether more NFS threads would help performance is to check the /proc/net/rpc/nfs entry for data on the load of the NFS daemons. The output line that starts with th lists the number of threads, and the last 10 numbers are a histogram of the number of seconds the first 10% of threads were busy, the second 10%, and so on.

Ideally, you want the last two numbers to be zero or close to zero, indicating that all of the threads are rarely busy at the same time and the server is not starved for threads. If the last two numbers are fairly high, you should add NFS daemons, because the NFS server has become the bottleneck. If the last several numbers are zero, some threads are probably never used and you may have more daemons than you need.
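
A quick way to inspect that histogram and then raise the daemon count is sketched below. Where the thread count is configured varies by distribution (for example, RPCNFSDCOUNT in /etc/default/nfs-kernel-server, or a threads= setting under [nfsd] in /etc/nfs.conf), so the paths here are illustrative:

    # On the NFS server: show the nfsd thread count and busy-time histogram
    grep '^th' /proc/net/rpc/nfs

    # Raise the number of nfsd threads for the running server (example: 16 threads)
    sudo rpc.nfsd 16

    # To persist the change, set RPCNFSDCOUNT=16 (or threads=16 under [nfsd] in
    # /etc/nfs.conf, depending on your distribution) and restart the NFS server service.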

One other option, while a little more complex, can prove to be useful if the IO pattern becomes more write intensive. If you are not getting the IO performance you need, change the mount behavior on the NFS clients from “sync” to “async”.

Warning

By default, NFS file systems are mounted “sync”, which means the NFS client is told the data is on the NFS server only after it has actually been written to the storage, indicating that the data is safe. Some systems will report that the data is safe once it has reached the write buffer on the NFS server rather than the actual storage.

Switching from “sync” to “async” means that the NFS server responds to the NFS client that the data has been received once the data is in the NFS buffers on the server (in other words, in memory). The data hasn’t actually been written to the storage yet; it’s still in memory. Typically, writing to the storage is much slower than writing to memory, so write performance with “async” is much faster than with “sync”. However, if, for some reason, the NFS server goes down before the data in memory is written to the storage, then the data is lost.

If you try using “async” on the NFS client (in other words, the DGX system), ensure that the data on the NFS server is replicated somewhere else so that if the server goes down, there is always a copy of the original data. The reason is that if the NFS clients are using “async” and the NFS server goes down, any data still in memory on the NFS server will be lost and cannot be recovered.

NFS “async” mode is very useful for write IO, both streaming (sequential) and random. It is also very useful for “scratch” file systems where data is stored only temporarily (in other words, storage that is not permanent and is not replicated or backed up).
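
A minimal example of mounting a scratch export with “async” on the client follows; the server name, export path, and mount point are placeholders, and the rsize/wsize values are common large-block choices rather than requirements:

    # Example /etc/fstab entry for an NFS scratch area mounted with async writes:
    #   nfs-server:/export/scratch  /mnt/scratch  nfs  rw,async,hard,rsize=1048576,wsize=1048576  0 0

    # Or switch an existing mount to async without editing /etc/fstab:
    sudo mount -o remount,async /mnt/scratch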

If you find that the IO performance is not what you expected and your applications are spending a great deal of time waiting for data, then you can also connect NFS to the DGX system over InfiniBand using IPoIB (IP over IB). This is part of the DGX family software stack and can be easily configured. The main point is that the NFS server should be InfiniBand attached as well as the NFS clients. This can greatly improve IO performance.
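
A sketch of what the IPoIB path can look like once the InfiniBand drivers are configured; the interface name, address, and paths are placeholders, and the assumption is that the NFS server exports over its own IPoIB address:

    # Confirm the IPoIB interface on the client is up and has an address
    ip addr show ib0

    # Mount the export using the NFS server's IPoIB address
    sudo mount -t nfs -o rw,hard,rsize=1048576,wsize=1048576 192.168.100.10:/export/data /mnt/data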

Distributed Filesystem

Distributed filesystems such as EXAScaler, GRIDScaler, Ceph (https://ceph.com/), Lustre, MapR-FS, General Parallel File System, Weka.io, and Gluster can provide features like improved aggregate IO performance, scalability, and/or reliability (fault tolerance). These filesystems are supported by their respective providers unless otherwise noted.

Scaling Out Recommendations

Based on the general IO patterns of deep learning frameworks, the following table suggests storage configurations for each use case. These are suggestions only and should be viewed as general guidelines.

Scaling out suggestions and guidelines

| Use case | Adequate read cache? | Recommended network type | Network file system options |
| --- | --- | --- | --- |
| Data Analytics | NA | 10 GbE | Object storage, NFS, or other system with good multithreaded read and small-file performance |
| HPC | NA | 10/40/100 GbE, InfiniBand | NFS or HPC-targeted filesystem with support for large numbers of clients and fast single-node performance |
| DL, 256x256 images | Yes | 10 GbE | NFS or storage with good small-file support |
| DL, 1080p images | Yes | 10/40 GbE, InfiniBand | High-end NFS, HPC filesystem, or storage with fast streaming performance |
| DL, 4k images | Yes | 40 GbE, InfiniBand | HPC filesystem, high-end NFS, or storage with fast streaming performance capable of 3+ GB/s per node |
| DL, uncompressed images | Yes | InfiniBand, 40/100 GbE | HPC filesystem, high-end NFS, or storage with fast streaming performance capable of 3+ GB/s per node |
| DL, datasets that are not cached | No | InfiniBand, 10/40/100 GbE | Same as above; aggregate storage performance must scale to serve all applications simultaneously |

As always, it is best to understand your own applications’ requirements to architect the optimal storage system.

Lastly, this discussion has focused only on performance needs. Reliability, resiliency and manageability are as important as the performance characteristics. When choosing between different solutions that meet your performance needs, make sure that you have considered all aspects of running a storage system and the needs of your organization to select the solution that will provide the maximum overall value.