Which solution offers customers the most performant, dense, scalable, and flexible data storage platform in the universe?
It’s Pavilion, with the Pavilion HyperParallel Data Platform.
With a claim like that, we had better be able to back it up.
This is the third of a four-part blog series, each addressing a specific component of that claim. The focus of this blog will be on scalability.
The Pavilion HyperParallel Data Platform is the only high-performance storage platform to offer independent, linear scale-up and scale-out of capacity and/or performance across any number of systems, with any combination of block, file, and object workloads.
No other solution can match what Pavilion delivers in a single, high density platform.
Independent Scale
Leveraging the power of the unique network-based architecture of the Pavilion HyperParallel Data Platform, customers can add capacity or processing power independently of each other. They can start with as few as 18 drives and then scale their capacity up to 72 drives in each 4RU system, for a total of up to 2PB.
Customers scale the processing power of the Pavilion HyperParallel Data Platform, which accelerates throughput and IOPS, by increasing the number of controllers. Each controller is completely independent and comes with its own processor, memory, networking, and OS instance. They can begin with as few as four controllers and increase throughput and IOPS as needed by adding more controllers, up to a total of 20 in each system.
Linear Scale
As capacity and performance are added to the Pavilion HyperParallel Data Platform, each incremental increase is a linear addition to the system total. Depending on drive capacity, 18 drives can provide 500TB of capacity, 36 drives can offer 1PB, and a full system with 72 drives supports up to 2PB of usable capacity.
Performance increases are also linear. Each controller delivers up to 1M IOPS and 6GB/s of read performance, and each additional controller increases the total by that amount. Ten controllers provide up to 10M IOPS and 60GB/s of throughput. A complete system with 20 controllers offers up to 20M IOPS and 120GB/s of performance.
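The linear math above is simple enough to sketch. The following is a back-of-the-envelope illustration (not a sizing tool) using the per-controller figures quoted in this post; the function name and bounds are ours, chosen to match the 4-to-20-controller range described above:

```python
# Illustrative model of linear performance scaling within one system,
# using the per-controller figures quoted above.
IOPS_PER_CONTROLLER = 1_000_000       # up to 1M IOPS per controller
READ_GBPS_PER_CONTROLLER = 6          # up to 6GB/s read per controller

def system_performance(controllers: int) -> tuple[int, int]:
    """Return (total IOPS, total read GB/s) for a given controller count."""
    if not 4 <= controllers <= 20:
        raise ValueError("a system holds between 4 and 20 controllers")
    return (controllers * IOPS_PER_CONTROLLER,
            controllers * READ_GBPS_PER_CONTROLLER)

print(system_performance(10))  # (10000000, 60) -> 10M IOPS, 60GB/s
print(system_performance(20))  # (20000000, 120) -> 20M IOPS, 120GB/s
```

Because each controller is independent, with its own processor, memory, networking, and OS instance, the aggregate really is just a multiple of the per-controller numbers.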
Scale Up or Scale Out
Add controllers or drives to scale up within a single system. Then scale out across any number of additional systems. Block workloads can scale out using an external file system such as Spectrum Scale, Lustre, or BeeGFS. Pavilion customers can also use Pavilion HyperOS 3 to scale across systems.
File and object workloads use the global namespace in Pavilion HyperOS 3 to scale across systems in any combination.
Performance and capacity increases scale linearly across systems. Within the global namespace, the Pavilion HyperParallel Data Platform delivers up to 75GB/s of read throughput for file workloads from each system. So, two systems deliver up to 150GB/s of throughput, five systems offer up to 375GB/s, and ten systems provide up to 750GB/s of file read performance. At rack scale, customers can scale across multiple racks to support data sets of any size.
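The cross-system scale-out follows the same linear pattern. As an illustrative sketch (function name ours), aggregate file read throughput within the global namespace is the quoted 75GB/s per-system figure times the number of systems:

```python
# Illustrative model of linear scale-out across systems in the
# global namespace, using the per-system figure quoted above.
FILE_READ_GBPS_PER_SYSTEM = 75  # up to 75GB/s file read per system

def namespace_throughput(systems: int) -> int:
    """Aggregate file read throughput (GB/s) across N systems."""
    if systems < 1:
        raise ValueError("need at least one system")
    return systems * FILE_READ_GBPS_PER_SYSTEM

for n in (1, 2, 5, 10):
    print(n, "systems:", namespace_throughput(n), "GB/s")
```

Running this reproduces the figures above: 150GB/s at two systems, 375GB/s at five, and 750GB/s at ten.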
Scale Across Data Types
The Pavilion HyperParallel Data Platform uniquely gives users the flexibility to support block, file, and object workloads with simultaneous high performance. These workloads can scale linearly across controllers in multiple systems, in any configuration.
The Pavilion Difference
The Pavilion HyperParallel Data Platform delivers unmatched performance for block, file, and object workloads. Best-in-class performance for each workload, simultaneously. That performance is delivered in a compact 4RU footprint and scales linearly across any number of systems, in any combination, without limits. That is the Pavilion difference.
To learn more about how the unmatched performance of the Pavilion HyperParallel Data Platform can enable your organization to scale without limits, read our scalability solution brief.
Want to learn more about how Pavilion can say it offers the most performant, dense, scalable, and flexible data storage platform in the universe? Read the other blogs associated with this series: