Meeting the Storage Challenge of Nvidia GPU Computing

Nvidia is helping to lead the growing adoption of artificial intelligence (AI), machine learning (ML), and deep learning, key technologies that will help humanity solve the kinds of challenges we have all faced this year.

Nvidia’s GPU-based systems are fundamentally different from CPU-powered environments and require storage that can deliver data in parallel to take advantage of the performance these systems offer. This is one of the major challenges for GPU-based systems: providing storage that can keep the GPUs saturated. Traditional storage, including AFAs, scale-out NAS, and even direct-attach solutions, either cannot scale performantly to meet the demands of these modern environments or does so only by exploding the cost, complexity, and footprint of the data center. And if your GPUs are not fully utilized, you are missing out on the benefits they offer.

If any of these describes you:

  • You use a parallel file system, such as Spectrum Scale (GPFS) or Lustre
  • You use a scale-out NFS solution
  • You need access to high-performance local S3 storage
  • You have a rackscale, direct-attach solution

You will discover how Pavilion can uniquely solve the challenge of delivering data to GPU-based systems.

Learn more about how the Pavilion Hyperparallel Flash Array can meet the needs of GPU-based systems by watching this GTC session, Performant Architectures for GPU Computing (Nvidia account required), presented by Carahsoft. Available on demand.