Magnum IO GPUDirect Storage
A Direct Path Between Storage and GPU Memory
As datasets increase in size, the time spent loading data can impact application performance. GPUDirect® Storage creates a direct data path between local or remote storage, such as NVMe or NVMe over Fabrics (NVMe-oF), and GPU memory. By enabling a direct memory access (DMA) engine near the network adapter or storage, it moves data into or out of GPU memory without burdening the CPU.
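Applications use this path through the cuFile API in CUDA. Below is a minimal sketch of a direct read from a file into GPU memory; error handling is abbreviated, and the file path is a placeholder for illustration.

```c
/* Minimal GDS read sketch using the cuFile API (cufile.h, shipped with CUDA).
 * Build with: nvcc gds_read.c -o gds_read -lcufile
 * The path /mnt/nvme/data.bin is hypothetical. */
#define _GNU_SOURCE
#include <cufile.h>
#include <cuda_runtime.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    const size_t size = 1 << 20;                 /* read 1 MiB */
    int fd = open("/mnt/nvme/data.bin", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    cuFileDriverOpen();                          /* initialize the GDS driver */

    CUfileDescr_t descr = {0};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t fh;
    cuFileHandleRegister(&fh, &descr);           /* register the file with cuFile */

    void *devPtr = NULL;
    cudaMalloc(&devPtr, size);
    cuFileBufRegister(devPtr, size, 0);          /* optional: pre-register the GPU buffer */

    /* DMA directly from storage into GPU memory, bypassing a CPU bounce buffer */
    ssize_t n = cuFileRead(fh, devPtr, size, 0 /* file offset */, 0 /* devPtr offset */);
    printf("read %zd bytes into GPU memory\n", n);

    cuFileBufDeregister(devPtr);
    cuFileHandleDeregister(fh);
    cudaFree(devPtr);
    cuFileDriverClose();
    close(fd);
    return 0;
}
```

The cuFileBufRegister call is optional: cuFile also accepts unregistered buffers (one of the areas improved in v1.1 below), at some cost in per-IO overhead.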
Key Features of v1.1
The following features have been added in v1.1:
- Added beta support for the XFS file system.
- Improved support for unregistered buffers.
- Added per-job options (start_offset and io_size) to the gdsio configuration file; see the sketch after this list.
- Improved performance for 4K and 8K IO sizes on local file systems.
- Added a user-configurable priority for internal cuFile CUDA streams.
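A sketch of a gdsio parameter file using the new per-job options. Only start_offset and io_size are confirmed by this release note; the other key names follow the gdsio examples in the GDS documentation and should be checked against your installed version.

```
[global]
name=gdsio-sample
# 0 = GPUDirect Storage transfer
xfer_type=0
# IO type: read, write, randread, randwrite
rw=read
# block size
bs=1M
# file size
size=4G

[job1]
numa_node=0
gpu_dev_id=0
num_threads=4
directory=/mnt/nvme/gds
# new in v1.1: per-job starting offset and IO size
start_offset=0
io_size=1G
```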
GPUDirect Storage v1.1 Release
NVIDIA Magnum IO GPUDirect® Storage (GDS) is now part of CUDA.
See https://docs.nvidia.com/gpudirect-storage/index.html for more information.
- Read the blog: Accelerating IO in the Modern Data Center - Magnum IO Storage Partnerships
- NVIDIA Magnum IO™ SDK
- Read the blog: Optimizing Data Movement in GPU Applications with the NVIDIA Magnum IO Developer Environment
- Read the blog: Accelerating IO in the Modern Data Center: Magnum IO Architecture
- Watch the webinar: NVIDIA GPUDirect Storage: Accelerating the Data Path to the GPU
- NVIDIA-Certified Systems Configuration Guide
- NVIDIA-Certified Systems
- Contact us at firstname.lastname@example.org