Next Generation Storage with No Compromise

Businesses are going digital and are constantly looking for a competitive edge. One important and popular way to get it is to do more with the data they already have. The volume, velocity, and variety of that data continue to grow unabated, and maintaining a competitive edge in the digital era means taking control of this deluge and analyzing it to deliver accurate insights that serve the business. That has turned attention back to storage software, storage media, and systems, which have proven to be one of the big bottlenecks.

It is no secret that storage performance has not kept up with CPU and network speeds. A perfect storm is brewing: an over-served market ripe for disruption. The innovation around NVMe-based flash and storage-class memory such as Optane, together with the NVMe protocol, positions us to handle these issues for the near future. Many established companies have either announced retrofitted products or updated their roadmaps with new storage arrays that use some of these innovations, each touting how many millions of IOPS its storage delivers. It is difficult to work out what each one actually delivers, what the compromises are, and whether they are designed to extract the true benefits of NVMe-based media. With NVMe and NVMe over Fabrics, delivering a comparatively high-performance, low-cost system that can store half a PB of data is not very difficult.
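
As a rough sanity check on that capacity claim, the arithmetic is simple. The sketch below assumes an illustrative 8 TB per-drive capacity; that figure is ours, chosen for illustration, and not the spec of any particular product:

```python
# Back-of-envelope capacity check: how many NVMe drives hold half a PB?
# The per-drive capacity is an assumption chosen only for illustration.
TARGET_TB = 500        # 0.5 PB, expressed in terabytes
DRIVE_TB = 8           # assumed capacity of a single commodity NVMe SSD

drives = -(-TARGET_TB // DRIVE_TB)   # ceiling division
print(f"{drives} x {DRIVE_TB} TB drives ~= {drives * DRIVE_TB} TB raw")
# -> 63 x 8 TB drives ~= 504 TB raw: a few rack units of commodity media.
```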

Our goal was to do more for the customer by leveraging the new media and protocol along with new architectural paradigms around rack-scale design. We wanted not only to satisfy the table stakes for entry into the market, namely performance, capacity, and resiliency at an affordable cost, but also to go a step further and alleviate some of the pain points that data center architects and operators feel on a day-to-day basis.

Given the buzz and mindshare around Software-Defined Storage, our instinct was to go down the same route: it keeps costs down, and it aligns with the common belief that hardware innovation is no longer needed and that pure software is "modern". But for us it was not about being cool; it was about delivering something that adds value and simplifies life for data center planners, architects, and operators. Our goal was ultimately to produce a system that not only provides the basics of any storage system, performance, resiliency, and cost, but does so in a form factor ideal for the rack scale, with data management moved off the host servers and into our system. With applications growing ever more CPU- and memory-hungry, we did not want server resources consumed by storage data services such as metadata management, snapshots, and RAID.

Granted, one school of thought would argue that "CPU is cheap", so we could simply add more servers; however, we felt that would induce server sprawl and create a storage capacity management problem. In addition, we wanted to work with the host operating system as-is, with no additional software. All software eventually needs to be patched, upgraded, and maintained, and the fewer the touchpoints on the server tier, the less the operational headache for our customers. To keep costs down and to facilitate rapid innovation, we also did not want to embark on developing new ASICs or FPGAs. This meant being smart about using readily available off-the-shelf components and developing a purpose-built software stack on top of them.
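
To make the zero-touch host concrete: a stock Linux server can attach NVMe-oF namespaces using nothing beyond the inbox kernel initiator and the standard nvme-cli utility. The sketch below is illustrative only; the transport, address, and NQN are placeholders, not values from any real deployment:

```python
# Minimal sketch: attach an NVMe-oF namespace from a stock Linux host with
# only the inbox initiator and the standard nvme-cli tool -- no vendor agent.
# The transport, IP address, and NQN are placeholders for illustration.
import subprocess

TARGET_ADDR = "192.0.2.10"                      # placeholder array address
TARGET_NQN = "nqn.2014-08.org.example:array1"   # placeholder subsystem NQN

# Discover the subsystems the array exports over RDMA.
subprocess.run(["nvme", "discover", "-t", "rdma",
                "-a", TARGET_ADDR, "-s", "4420"], check=True)

# Connect; the namespace then appears as a regular /dev/nvmeXnY block device,
# so data services (snapshots, RAID, metadata) stay on the array side.
subprocess.run(["nvme", "connect", "-t", "rdma", "-n", TARGET_NQN,
                "-a", TARGET_ADDR, "-s", "4420"], check=True)
```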

Our storage system design journey essentially started with a fundamentally simple question: what class of device in the data center is readily available and has enough CPU and memory to ingress a terabit of data and egress a terabit of data every second, at line rate? The answer was evident: networking devices. And NVMe media, when accessed in parallel across many drives, can indeed absorb a Tb/s. Networking devices are modular, expandable, and purpose-built for what they do. Even though software-defined networking exists, people still buy and consume switches from successful vendors.
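
To see why NVMe media in parallel can keep pace with a terabit per second, a quick back-of-envelope calculation helps. The per-drive write bandwidth below is an assumption for illustration, not a measured figure for any specific drive:

```python
# Back-of-envelope throughput check: drives needed to absorb 1 Tb/s.
LINE_RATE_GBITS = 1000                    # 1 Tb/s in gigabits per second
line_rate_gbytes = LINE_RATE_GBITS / 8    # -> 125 GB/s of data to land

DRIVE_GBYTES_PER_S = 2.0   # assumed sustained write bandwidth per NVMe SSD

drives_needed = int(-(-line_rate_gbytes // DRIVE_GBYTES_PER_S))  # ceiling
print(f"~{drives_needed} drives in parallel to sustain 1 Tb/s line rate")
# -> ~63 drives: well within the drive count of a dense rack-scale chassis.
```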

However, simply replacing egress ports with fast media was not going to cut it: we needed real CPU horsepower to perform all the data management at ingress speeds, and that was a daunting task. We consciously borrowed design concepts from the networking world, drew on our experience with blade chassis and server designs, and built an architecture with enough CPU, memory, internal bandwidth, and storage capacity, along with purpose-built clustered storage software, to deliver a shared accelerated storage device that provides not only performance and resiliency but also what is needed to serve storage that is manageable at scale. All of this uses commodity components and is fully standards-compliant.

To summarize, ours is the only NVMe over Fabrics shared accelerated storage array that is performant, modular, dense, resilient, field-serviceable, and economical, with zero host-side footprint and built-in data management, in a form factor that is right for the rack scale. A high-performance storage system with absolutely no compromise is finally here!