NVMe-oF and RoCE Storage Performance for GPU and GPUDirect at NVIDIA GTC, Session #SS33030 (click to add it to your conference agenda)
AI/ML/DL, analytics, and virtualization with GPUs are transforming how we achieve extraordinary insights into extremely complex questions. These new insights span industries like Federal, HPC & supercomputing, Large Enterprise, and M&E. Yet data storage I/O, in both performance density and capacity density, has not kept pace.
It is time to think differently about data storage for GPU-based applications.
I/O must keep up. You need modern storage to ingest, analyze, visualize, and respond in real time at scale.
After just a few days or weeks of ingest on a DGX system, you run out of local NVMe capacity for GPU-based analytics and visualization tools. You need storage that performs like locally attached NVMe yet scales up and out linearly, using low-latency, simple, and cost-effective NVMe over RoCE to keep hungry GPUs satiated.
In the upcoming GTC21 session, "High Performance: How a Multi-Controller Storage Architecture Shatters Expectations for Modern Applications," Pavilion will describe how its HyperParallel Data Platform:
- Offers months and years (not days and weeks) of analytics at the speed of thought.
- Delivers best-in-class performance across all vectors for block, file, and object storage.
- Is proven with OmniSci, Graphistry, VMware, and PixiT Media for your organization.
For those looking to maximize their GPU investments and optimize their pipelines, this is a must-attend session.
If you want to learn more about how Pavilion is shattering expectations for GPU-based workloads, check out our other sessions and pre-session blogs: