In a blog from VMware, Jason Massae, Core Storage Technical Marketing Architect for the Storage and Availability Business Unit at VMware, writes, “vSphere now supports NVMe over Fabrics allowing connectivity to external NVMe arrays using either FC or RDMA (RoCE v2). As NVMe continues to grow and become the preferred storage, being able to connect to external NVMe arrays is critical. With this, first iteration partners and customers will be able to evaluate NVMeoF.”
This statement makes a lot of sense, not just because VMware now supports NVMe-oF, but because the move to NVMe-oF is, as he says, “critical.”
NVMe-oF is the final piece of the puzzle that is transforming what enterprise storage is capable of, making it possible, for the first time, for organizations to extract maximum value from their entire datasets in a practical time frame and at a reasonable cost.
That transformation began with the adoption of flash storage, which, while fundamentally different from legacy spinning media, was initially constrained by legacy disk interfaces such as SAS and SATA. NVMe was then developed specifically to take advantage of the high performance and parallelism that flash memory offers, finally enabling SSDs to deliver the performance that flash promised but never could behind legacy interfaces.
Still, the transition to NVMe only exposed the next set of bottlenecks. Traditional all-flash arrays (AFAs), which have not progressed beyond the dual-controller design of the disk-drive era, are unable to take advantage of the parallel access that flash offers. Worse, even AFAs equipped with NVMe drives still use legacy SAN interfaces, such as iSCSI and Fibre Channel, ultimately limiting their potential.
The advent of hyperparallel flash arrays (HFAs) removed the dual-controller architecture as a barrier to performance, allowing multiple controllers to access the flash in parallel. The difference is dramatic: a typical legacy AFA delivers about 1M IOPS at roughly 1 ms of latency, while an HFA can deliver 20M IOPS at only 40 μs. This is the power of an HFA.
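To put those figures in perspective, Little's Law (outstanding I/Os = IOPS × latency) relates throughput and latency to the concurrency a system must sustain. The sketch below is just illustrative arithmetic using the numbers quoted above; it is not a benchmark of any particular array.

```python
# Compare the legacy-AFA and HFA figures quoted above using Little's Law:
# average I/Os in flight = IOPS x latency (in seconds).

def concurrency(iops: float, latency_s: float) -> float:
    """Average number of outstanding I/Os needed to sustain this rate."""
    return iops * latency_s

afa_iops, afa_latency = 1_000_000, 1e-3      # ~1M IOPS at ~1 ms
hfa_iops, hfa_latency = 20_000_000, 40e-6    # 20M IOPS at 40 us

print(f"AFA: {concurrency(afa_iops, afa_latency):,.0f} I/Os in flight")
print(f"HFA: {concurrency(hfa_iops, hfa_latency):,.0f} I/Os in flight")
print(f"IOPS gain: {hfa_iops / afa_iops:.0f}x, "
      f"latency reduction: {afa_latency / hfa_latency:.0f}x")
```

Notably, the HFA sustains 20× the IOPS while each request completes 25× faster, so the array achieves that throughput without demanding more outstanding I/Os from its hosts.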
Now, NVMe-oF completes the puzzle. NVMe-oF extends the NVMe protocol over a fabric to enable an HFA to act as shared storage. RDMA is then used to enable the storage controller on an HFA to fulfill read requests by placing data directly into the memory buffer of a server node on the network, significantly reducing latency while increasing overall performance.
RDMA also takes advantage of zero copy, which moves data from the HFA, over the network, and directly into the memory of the server node without intermediate buffer copies or involvement of the host’s kernel network stack. Bypassing the kernel in this way greatly reduces latency and CPU overhead. RDMA over Converged Ethernet (RoCE), pronounced “rocky”, is the standard that enables RDMA to run over Ethernet networks.
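The zero-copy principle itself can be illustrated outside of RDMA. As a loose analogy, the sketch below, using only the Python standard library, contrasts slicing bytes out of a buffer (which duplicates the data) with a `memoryview` (which exposes the same memory without copying), much as RDMA gives the NIC direct access to application buffers rather than staging data through intermediate copies.

```python
# A small analogy for zero copy: memoryview exposes an existing buffer
# without duplicating it, whereas slicing bytes out of it makes a copy.
buf = bytearray(b"A" * 16)

copied = bytes(buf[:4])      # copies 4 bytes out of the buffer
view = memoryview(buf)[:4]   # no copy: a window onto the same memory

buf[0] = ord(b"Z")           # mutate the underlying buffer

print(copied)          # b'AAAA' -> the copy is stale
print(view.tobytes())  # b'ZAAA' -> the view sees the change immediately
```

The copy went stale the moment the buffer changed, while the view tracked it for free; in RDMA, avoiding exactly those staging copies is where much of the latency saving comes from.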
These revolutionary changes can have a dramatic effect in virtualized environments. Imagine an external array that delivers all the benefits of shared storage with the same or better performance than DAS: all the speed of local NVMe drives, without any of the drawbacks of direct-attached storage, and support for hundreds of thousands of VMs on a single array. That is what a Pavilion HFA with NVMe-oF can deliver.