Achieve superior GPU sharing and aggregation by using Pavilion, VMware®, and NVIDIA™ Magnum IO GPUDirect® Storage

Artificial intelligence is changing everything about the world. In this new, data-centric era, AI has redefined how organizations use data to make decisions. As a result, every major organization either has AI initiatives underway or is planning to start them.

The Pavilion HyperParallel™ Flash Array (HFA) is the only solution that delivers industry-leading high performance for block, file, and object data simultaneously, all in 4RU. Pavilion is the performance leader for each data type, with unrivaled performance density, making it ideal for AI applications.

The leading AI platform, the NVIDIA DGX A100, delivers impressive specifications. But even its internal NVMe SSDs cannot match the industry-leading performance Pavilion delivers to AI workflows using NVIDIA's own GPUDirect Storage protocol for block and file data.
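To make the data path concrete, the sketch below shows how an application reads from GPUDirect Storage-capable storage such as a Pavilion array using NVIDIA's cuFile API, which DMAs data from storage directly into GPU memory with no host bounce buffer. This is a minimal illustration, not a complete program: it assumes a CUDA Toolkit with libcufile, a GDS-enabled mount, and a hypothetical file path, and it elides error handling.

```c
/* Minimal GPUDirect Storage read sketch using NVIDIA's cuFile API.
   Assumptions: CUDA Toolkit with cuFile (link with -lcufile -lcudart),
   a GDS-capable filesystem mount, and a hypothetical dataset path.
   Error handling is elided for brevity. */
#define _GNU_SOURCE           /* for O_DIRECT */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <cuda_runtime.h>
#include <cufile.h>

int main(void) {
    const size_t size = 1 << 20;            /* read 1 MiB */
    void *dev_buf;

    cuFileDriverOpen();                      /* initialize the GDS driver */
    cudaMalloc(&dev_buf, size);
    cuFileBufRegister(dev_buf, size, 0);     /* register GPU buffer for DMA */

    /* O_DIRECT bypasses the page cache, as GDS requires */
    int fd = open("/mnt/pavilion/dataset.bin", O_RDONLY | O_DIRECT);

    CUfileDescr_t descr;
    memset(&descr, 0, sizeof(descr));
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    descr.handle.fd = fd;
    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);

    /* DMA directly from storage into GPU memory */
    cuFileRead(handle, dev_buf, size, /*file_offset=*/0, /*buf_offset=*/0);

    cuFileHandleDeregister(handle);
    close(fd);
    cuFileBufDeregister(dev_buf);
    cudaFree(dev_buf);
    cuFileDriverClose();
    return 0;
}
```

The key design point is that `cuFileRead` targets a registered GPU buffer directly, which is what removes the CPU memory copy from the storage-to-GPU path.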

Pavilion's latest certifications and proof points for VMware® and NVIDIA enable greater productivity for global workforces. They bring industry-leading performance to AI operations, speeding training and inference in the data center and at the edge while supporting comprehensive data analytics and machine learning, accelerating time-to-results. As NVIDIA CTO Michael Kagan said:

"High-performance computing requires high-performance I/O."

Using NVIDIA AI Enterprise software running on VMware vSphere 7 Update 2 or later, customer workloads can access NVIDIA CUDA applications, AI frameworks, pretrained models, and software development kits. With Pavilion's VMware-certified NVMe-oF and NFS-over-RDMA drivers, organizations can share, aggregate, and automate GPU utilization while managing the entire environment, including GPUs and vGPUs, under Tanzu and ESXi with the latest NVIDIA A100 Tensor Core GPUs.
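For context on what connecting a host to these fabric services involves, the fragment below sketches the two transports named above using standard Linux tooling: an NVMe-oF connection over RDMA with `nvme-cli`, and an NFS mount over RDMA. The addresses, NQN, and export path are hypothetical placeholders; substitute the values reported by your array.

```shell
# Hypothetical addresses, NQN, and export path; replace with values
# from your storage array's management interface.

# Attach an NVMe-oF namespace over RDMA (RoCE) with nvme-cli:
nvme connect -t rdma -a 192.168.10.20 -s 4420 \
    -n nqn.2020-01.com.example:subsystem1

# Mount an NFS export over RDMA (NFSoRDMA conventionally uses port 20049):
mount -t nfs -o rdma,port=20049 192.168.10.20:/export/ai /mnt/ai
```

These are host-side fabric configuration steps, not Pavilion-specific commands; in a vSphere deployment the equivalent connections are configured through the certified drivers and ESXi storage adapters rather than run by hand.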