Magnum IO GPUDirect Storage

A Direct Path Between Storage and GPU Memory

As datasets increase in size, the time spent loading data can dominate application runtime. GPUDirect® Storage creates a direct data path between local or remote storage, such as NVMe or NVMe over Fabrics (NVMe-oF), and GPU memory. By using a direct-memory access (DMA) engine near the network adapter or storage, it moves data into or out of GPU memory without burdening the CPU.

GPUDirect Storage enables a direct data path between storage and GPU memory and avoids extra copies through a bounce buffer in the CPU’s memory.
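To make the direct path concrete, below is a minimal sketch of reading a file straight into GPU memory with the cuFile API that ships with GDS. The file path is hypothetical, error handling is abbreviated, and running it requires a GDS-enabled system with `cufile.h` available and linking against `-lcufile -lcudart`.

```c
/* Sketch: DMA a file from NVMe storage directly into GPU memory via cuFile.
 * Assumes a GDS-enabled system; error checks are abbreviated for brevity. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <cuda_runtime.h>
#include <cufile.h>

int main(void) {
    const size_t size = 1 << 20;  /* read 1 MiB */
    /* O_DIRECT bypasses the page cache, as GDS requires */
    int fd = open("/mnt/nvme/data.bin", O_RDONLY | O_DIRECT); /* hypothetical path */

    cuFileDriverOpen();                         /* initialize the GDS driver */

    CUfileDescr_t descr;
    memset(&descr, 0, sizeof(descr));
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;

    CUfileHandle_t fh;
    cuFileHandleRegister(&fh, &descr);          /* import the fd into cuFile */

    void *devPtr = NULL;
    cudaMalloc(&devPtr, size);
    cuFileBufRegister(devPtr, size, 0);         /* pin the GPU buffer for DMA */

    /* Storage-to-GPU transfer with no bounce buffer in CPU memory */
    ssize_t n = cuFileRead(fh, devPtr, size,
                           0 /* file offset */, 0 /* device offset */);

    cuFileBufDeregister(devPtr);
    cudaFree(devPtr);
    cuFileHandleDeregister(fh);
    close(fd);
    cuFileDriverClose();
    return n < 0;
}
```

Note the contrast with a conventional `read()` into host memory followed by `cudaMemcpy()`: here the DMA engine writes directly into the registered GPU buffer, which is exactly the extra-copy elimination the figure above describes.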

Partner Ecosystem

  • GA: NVIDIA GPUDirect Storage integrated solution in production.
  • In Development

Key Features of v1.1

The following features have been added in v1.1:

  • Added the XFS file system to the list of supported file systems, at a beta support level.
  • Improved support for unregistered buffers.
  • Added per-job options (start_offset and io_size) to the gdsio configuration file.
  • Improved performance for 4K and 8K I/O sizes on local file systems.
  • Added user-configurable priority for internal cuFile CUDA streams.

Software Download

GPUDirect Storage v1.1 Release

NVIDIA Magnum IO GPUDirect® Storage (GDS) is now part of CUDA.
See https://docs.nvidia.com/gpudirect-storage/index.html for more information.

GDS is currently supported on the Linux x86-64 distributions RHEL 8, Ubuntu 18.04, and Ubuntu 20.04; it is not supported on Windows. When choosing which CUDA packages to download, select Linux first, then x86-64, then either the RHEL or Ubuntu distribution, along with the desired packaging format(s).
