Performance Density – A New Metric for Rack-Scale Design

I’m constantly amazed by the specsmanship of the data storage industry. Every month we hear about some new system that can achieve a gazillion IOPS or store hundreds of petabytes. We revel in our own glory, often without considering the consequences. Current examples are NVMe All-Flash Array and Software-Defined Storage (SDS) marketeers running amok.

Our industry is consistently rewarded for storage capacity density. This is the number of TBs that can be crammed into a shelf or rack unit (RU). It is why drive makers constantly increase capacities for every form factor and Big Storage is cleverly stuffing as many drives as possible into a chassis. Runaway data growth continually fuels this paradigm.

Datacenter real estate is costly and precious. Each floor tile has power, cooling, conduit, and more running to it. There are hard limits on how much data center space is available. This space is commonly measured and allocated in RUs. RU consumption has a high operating cost (OpEx). Each RU consumed requires a greater percentage of fixed data center allocation and personnel costs along with the equipment costs.

Increasing storage capacity per RU does little for the performance side of the equation. In fact, the opposite is true. Performance hard disk drive shipments are rapidly declining because as drives got bigger, speed did not keep up. The same trend is now occurring with SSDs. A 1 TB SSD has roughly the same performance as a 15 TB SSD, which means that per terabyte, the 15 TB SSD is 15 times slower. Increasing storage capacity per RU erodes storage performance, making a new metric – performance density per RU – more important than ever.
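To make the per-terabyte comparison concrete, here is a minimal sketch. The 500K IOPS figure is an illustrative placeholder for a typical NVMe SSD whose controller speed does not scale with capacity, not a measured spec:

```python
# Illustrative only: assumed ~500K random-read IOPS for both drive sizes,
# since SSD performance is bounded by the controller, not capacity.
def iops_per_tb(iops: int, capacity_tb: int) -> float:
    """Performance normalized by capacity (IOPS per terabyte)."""
    return iops / capacity_tb

small_drive = iops_per_tb(500_000, 1)    # 1 TB drive
large_drive = iops_per_tb(500_000, 15)   # 15 TB drive, same controller

# The bigger drive delivers one-fifteenth the performance per terabyte.
print(small_drive / large_drive)
```

The same normalization applies at the shelf level: divide aggregate IOPS or bandwidth by capacity (or by RUs consumed) and the trade-off becomes visible immediately.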

Higher storage performance density means fewer application server nodes; fewer network switch ports and switches; fewer cables and transceivers; and less RU consumption for applications. Limited storage performance density results in lower performance, which harms time-to-market, worker productivity, business reputation, and more.

Pavilion Data solves the storage performance density problem. Our NVMe-oF Storage Platform offers 120 GB/s of bandwidth, 20M IOPS, and nearly 1 PB with your choice of NVMe SSDs in a 4U footprint. This eliminates the complicated scaling problems associated with AFAs and SDS.
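Dividing the quoted figures by the 4U footprint gives the per-RU performance density – a simple sketch using only the numbers stated above:

```python
# Per-RU performance density derived from the quoted chassis specs:
# 120 GB/s, 20M IOPS, and ~1 PB in a 4U footprint.
RACK_UNITS = 4
bandwidth_gb_s = 120      # aggregate bandwidth, GB/s
iops_millions = 20        # aggregate IOPS, millions
capacity_tb = 1000        # ~1 PB expressed in TB

print(bandwidth_gb_s / RACK_UNITS)   # GB/s per RU
print(iops_millions / RACK_UNITS)    # M IOPS per RU
print(capacity_tb / RACK_UNITS)      # TB per RU
```

That works out to 30 GB/s, 5M IOPS, and 250 TB per rack unit – the kind of normalized figures worth asking for in any vendor pitch.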

With performance so close to that of embedded SSDs, application servers can boot directly from the array. This creates a new paradigm in which server nodes can be a smaller form factor, such as a 1 RU server, half-RU microserver, or blade server, at much lower total cost. It also improves application availability: when a server node fails, just swap out the server, boot it from the Pavilion NVMe-oF Storage Platform, and point it at its volumes. The resulting savings in server hardware, maintenance, RUs, power, cooling, etc. practically pay for the system.

Next time you are subjected to a chest-pounding elevator pitch, be sure to ask for one “little” metric – Performance Density.

To learn more about Storage Performance Density, check out our White Paper here.