In 1943, Abraham Maslow published “A Theory of Human Motivation.” In short, he suggests five hierarchical levels of basic human needs (motivators) that must be satisfied in a strict sequence, ranging from the most fundamental level (survival) to the highest level (self-actualization). In the world of storage technologies, we think Maslow’s Hierarchy provides a compelling analogy for the data storage hierarchy, and a way to frame some of the most exciting and transformative developments occurring today.
Before digging into Maslow, recall two major concepts from previous blog posts:
The NVMe specification, which standardizes high-performance access to direct-attached NVMe devices, and its evolution into NVMe-oF, which allows those NVMe devices to be disaggregated from the server’s captive PCIe bus onto shareable, low-latency fabrics.
For all practical purposes, Ethernet fabrics are the only game in the data center. To further sweeten the pot, the upcoming NVMe over TCP (NVMe/TCP) standard allows a smoother transition from Distributed Direct-Attached Storage (DDAS) without requiring an upgrade to an RDMA fabric. Meanwhile, Ethernet marches on toward terabit speeds. The answer is blowing in the wind: Ethernet is the way forward for data center storage.
Modern scale-out applications broke with tradition and embraced DDAS with HDDs and SSDs. The DDAS architecture often provides data protection and availability through 3-factor replication performed by the applications themselves. Beyond the 3x capacity bloat, DAS has other serious downsides. Server-side storage reliability and availability are limited by the individual servers that own the data: server crashes, as well as replacement, maintenance, and end-of-life cycles, all translate into storage recovery episodes. To make matters worse, the recovery traffic snarls the network and consumes server CPU, degrading both server and storage performance.
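The capacity bloat is easy to quantify. A minimal back-of-the-envelope sketch comparing the usable fraction of raw capacity under 3-factor replication versus a RAID6 layout (the pool size and the 8+2 geometry below are illustrative assumptions, not measurements from any particular deployment):

```python
def usable_fraction_replication(copies: int) -> float:
    """Usable fraction of raw capacity with N-way replication."""
    return 1 / copies

def usable_fraction_raid6(data_drives: int) -> float:
    """Usable fraction for RAID6: data drives over data + 2 parity drives."""
    return data_drives / (data_drives + 2)

raw_tb = 100  # hypothetical raw pool size in TB

print(f"3x replication: {raw_tb * usable_fraction_replication(3):.1f} TB usable")
print(f"RAID6 (8+2):    {raw_tb * usable_fraction_raid6(8):.1f} TB usable")
```

With 3-way replication, two thirds of the raw flash is overhead; an 8+2 RAID6 stripe spends only 20% on parity, which is the gap the rest of this post returns to.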
For the more latency-sensitive applications, replacing DAS HDDs with more expensive, more reliable DAS NVMe SSDs further exacerbates the pre-existing cost of poor utilization by stranding the most expensive flash media, while magnifying the existing reliability gap between servers and flash-based media.
The next logical step is to disaggregate the DAS NVMe devices from the server, which is precisely the opportunity NVMe-oF offers. The fault domains remain separate, and storage is no longer hostage to servers.
Back to Maslow: an immediate step up the needs pyramid is the ability to both pool and virtualize the capacity of remote NVMe drives without adding latency. This mitigates over-provisioning, improves asset utilization, and enables consistent application performance by freeing up host CPU cycles currently consumed by storage management.
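To make the pooling idea concrete, here is a toy sketch of carving virtual volumes from a shared pool of disaggregated drives. The class and method names are illustrative only, not any vendor’s API; the point is that a volume can be larger than any single drive, so no capacity is stranded per server:

```python
class CapacityPool:
    """Toy model of remote NVMe drives pooled into one allocatable space."""

    def __init__(self):
        self._drives_gb = []
        self._allocated_gb = 0

    def add_drive(self, capacity_gb: int) -> None:
        self._drives_gb.append(capacity_gb)

    def allocate_volume(self, size_gb: int) -> int:
        """Allocate a virtual volume; only total pool capacity matters."""
        if self._allocated_gb + size_gb > sum(self._drives_gb):
            raise ValueError("pool exhausted")
        self._allocated_gb += size_gb
        return size_gb

pool = CapacityPool()
for _ in range(4):
    pool.add_drive(1000)          # four hypothetical 1 TB drives

pool.allocate_volume(2500)        # fits, though no single drive could hold it
```

In a per-server DAS layout, that 2.5 TB volume simply could not exist; in a pool it is an ordinary allocation.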
Or does it? Under normal circumstances, anyone paying 3x the going rate for “protection” would consider it to be extortion! But such are the times.
The obvious choice is using well-established RAID techniques to offer data protection.
Pavilion offers a high-performance built-in RAID6. This has numerous advantages over “host-side” RAID6 implementations. To start, performing compute-intensive RAID6 on the host robs the server of CPU cycles, especially during drive-failure and rebuild episodes.
Host-based RAID6 continually generates incremental IOs (up to 6x for RAID6), further congesting precious network capacity.
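Where does the 6x come from? For a small write, the textbook RAID6 read-modify-write path must read the old data block and both parity blocks, then write all three back. A minimal sketch of that IO accounting (the counts are the standard textbook path, shown for illustration):

```python
def raid6_small_write_ios() -> dict:
    """IO count for one small logical write on a RAID6 stripe
    using the read-modify-write path."""
    return {
        "reads": 3,   # old data block, old P parity, old Q parity
        "writes": 3,  # new data block, new P parity, new Q parity
    }

ios = raid6_small_write_ios()
total = ios["reads"] + ios["writes"]
print(f"1 logical write -> {total} physical IOs ({total}x amplification)")
```

When the RAID layer runs on the host, every one of those physical IOs crosses the network; when it runs inside the array, they stay on the array’s internal fabric.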
Pavilion uses an ultra-low-latency internal PCIe fabric that stitches together up to 20 active-active IO controllers within a single 4U system, managing the RAID6 micro-metadata as distributed persistent memory.
On the other hand, unlike classic controller-based arrays that provide 1+1 HA, Pavilion Data uses 20 controllers to provide *endless* availability to your data.
Any way you look at it, Pavilion Data offers a superior solution.
Pavilion Data offers 20-way HA protection and ultra-fast zero-host footprint RAID6. Perhaps, the Maslow in you is satisfied, or perhaps it yearns for more.
We know and expect that our customers and users will want to do much more with their data sitting in the Pavilion Memory Array.
Pavilion offers snapshots and clones as first-class native objects, so their performance is on par with the original volume. In addition, Pavilion snapshots are fine-grained and truly instantaneous, unlike the “instantaneous” host-side snapshots that require multiple nodes (data owners, accessors, and a metadata node) to coordinate and recall or surrender leases, and that use coarse granularity for copy-on-write (CoW).
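A minimal sketch of why map-based snapshots can be instantaneous and fine-grained: taking a snapshot copies only the block map (references), not the data, and later writes diverge at individual-block granularity. The names are illustrative, not Pavilion’s internals:

```python
class Volume:
    """Toy volume whose snapshots share block references with the live copy."""

    def __init__(self, nblocks: int):
        self.blocks = {i: b"\x00" for i in range(nblocks)}
        self.snapshots = []

    def snapshot(self) -> dict:
        # Copy the block *map* only -- cheap and effectively instantaneous.
        snap = dict(self.blocks)
        self.snapshots.append(snap)
        return snap

    def write(self, idx: int, data: bytes) -> None:
        # New data lands in the live map; the snapshot keeps its own
        # reference to the old block, so divergence is per-block.
        self.blocks[idx] = data

v = Volume(4)
snap = v.snapshot()
v.write(0, b"new")
print(v.blocks[0], snap[0])  # live volume sees b'new'; snapshot keeps b'\x00'
```

Because nothing but a map is copied at snapshot time, the cost does not grow with volume size, and no cross-node lease coordination is involved.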
Pavilion’s snapshots and clones can be accessed through any of the 20 IO controllers.
This is cool because it separates the backup stream from the production stream.
It gives the customer the unprecedented capability to share and modify large data sets as needed *without* getting bottlenecked on IOPS. This ability to share very large data sets [a PB in 4U] with a scalable IOPS pool is yin and yang.