Unmatched Storage for NVIDIA
Unrivaled Performance and Scale for NVIDIA GPU Systems

In performance testing validated by NVIDIA, the Pavilion HyperParallel Data Platform demonstrated unrivaled performance with NVIDIA DGX A100 systems.

Unleash the Power of AI
The Pavilion HyperParallel Data Platform™ delivers unmatched performance, density, scalability, and flexibility to NVIDIA systems. Pavilion enables organizations to run AI and Analytics operations faster and on any size dataset, so they can do more with their data than they ever thought possible.
Validated by NVIDIA
High Performance Ingest
Capable of writing data faster than most competitors can read it, Pavilion can ingest more data than any other solution and make it available for AI inference and analytics.
Data at Scale
AI and big data analytics are powered by massive data sets. Pavilion enables DGX systems to quickly process datasets of any size with unmatched performance.
"High performance computing requires high performance IO. The rule is simple: the higher the processing power of the compute element, the more data it can process, and hence the faster the data delivery required. Fast delivery is just one part; efficient network interfaces are required to keep computing going. RDMA and GPUDirect technologies enable direct communication between GPU-powered compute elements and the cluster."
–Michael Kagan, NVIDIA CTO, at SuperComputing20
High Performance
120GB/s Read @100µs
90GB/s Write @25µs
Efficient Networking
40 x 100GbE/EDR ports or 10 x 200Gb HDR ports
NFS RDMA, NVMe-RoCE, NVMe-RDMA
Magnum IO GPUDirect Storage for block and file data
Pavilion Data Takes Pole Position in GPUDirect Race
Blocks and Files says:
“Pavilion Data, the high-end storage array maker, sends data to an Nvidia DGX-A100 GPU server faster than DDN, VAST Data and WekaIO, according to a Nvidia-validated test result.”
–Blocks and Files, January 26, 2021
Ultra-Low Latency
GPU-powered systems process data in parallel, making low-latency IO more critical than ever. With this in mind, NVIDIA developed Magnum IO GPUDirect Storage to deliver data with ultra-low latency. In NVIDIA-validated testing using GDSIO, Pavilion demonstrated the lowest latency available for both block and file data.
Block
Read: 182GB/s @ 3.87ms
Write: 149GB/s @ 4.51ms
File
Read: 191GB/s @ 1.75ms
Write: 118GB/s @ 5.60ms
For environments that have not adopted Magnum IO GPUDirect Storage, Pavilion still delivers ultra-low latency through NVMe-oF/RoCE, so customers see this benefit whether they use GPUDirect Storage or not.
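
As an illustration of the I/O path the GDSIO results above exercise, the sketch below shows a minimal GPUDirect Storage read using NVIDIA's cuFile API, moving data from a file straight into GPU memory without a host bounce buffer. This is a simplified sketch, not the validated test configuration: the file path, transfer size, and omitted error handling are placeholders.

    // Minimal Magnum IO GPUDirect Storage read sketch using the cuFile API.
    // Assumes the CUDA toolkit with libcufile is installed; the path and size are placeholders.
    #define _GNU_SOURCE
    #include <cufile.h>
    #include <cuda_runtime.h>
    #include <fcntl.h>
    #include <string.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        const size_t size = 1 << 20;                    // 1 MiB transfer (placeholder)

        cuFileDriverOpen();                             // initialize the GPUDirect Storage driver

        // Open the source file with O_DIRECT so the GDS path can be used.
        int fd = open("/mnt/pavilion/dataset.bin", O_RDONLY | O_DIRECT);

        CUfileDescr_t descr;
        memset(&descr, 0, sizeof(descr));
        descr.handle.fd = fd;
        descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
        CUfileHandle_t handle;
        cuFileHandleRegister(&handle, &descr);          // register the file with cuFile

        void *gpu_buf = NULL;
        cudaMalloc(&gpu_buf, size);                     // destination buffer in GPU memory
        cuFileBufRegister(gpu_buf, size, 0);            // register the GPU buffer for DMA

        // Data moves from storage directly into GPU memory, bypassing a CPU bounce buffer.
        ssize_t n = cuFileRead(handle, gpu_buf, size, 0 /* file offset */, 0 /* buffer offset */);
        printf("read %zd bytes directly into GPU memory\n", n);

        cuFileBufDeregister(gpu_buf);
        cudaFree(gpu_buf);
        cuFileHandleDeregister(handle);
        close(fd);
        cuFileDriverClose();
        return 0;
    }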
Linear Scalability for Any Size Dataset
AI engines need to process massive amounts of data to deliver meaningful results, but GPU-based systems with limited internal storage cannot contain the volumes of data needed. The Pavilion HyperParallel Data Platform, with up to 2PB of usable capacity per system, delivers the performance of internal storage with the ability to scale performance and capacity linearly, so organizations can use any size dataset.
To learn more, read the Pavilion Scalability Solution Brief.
Pavilion and NVIDIA showcase how organizations can maximize investments and accelerate applications
Pavilion partnered with NVIDIA to showcase how organizations can maximize investments and accelerate applications running on NVIDIA platforms.
High performance: How a multi-controller storage architecture shatters expectations for modern applications
How High Performance NVMe-oF Storage Accelerates CPU & GPU-Powered Virtualized Environments Demonstrated by Pavilion
Visually investigating patterns in logs at scale with Graphistry, RAPIDS, and Pavilion
How to respond to rapidly scaling Geospatial Intelligence with OmniSci and Pavilion
“High performance compute requires high performance IO.”
–Michael Kagan, CTO, NVIDIA