Quantum Leap in Performance with VMware vSphere 7.0 NVMe-oF

VMware is releasing version 7.0 of its virtualization and cloud management software, vSphere. The company has talked a lot about the new support for containers (Kubernetes), vMotion upgrades, and a myriad of other new features designed to make IT admins more productive and happier. It’s an exciting release, for sure.

vSphere 7.0 also adds a new feature that can massively increase the performance of I/O-bound VMs (such as databases or business intelligence): NVMe over Fabrics (NVMe-oF). With NVMe-oF, it is now possible to virtualize I/O-intensive workloads that previously had to run on bare metal. NVMe-oF can also boost the performance of more traditional VMs, allowing more of them to run, faster, on the same hardware.

What’s even cooler is that the Pavilion HFA is one of the first NVMe-oF all-flash arrays to be certified by VMware for use with vSphere 7.0. I’ve run some basic tests and seen dramatic performance improvements on the exact same hardware simply by converting the VMware datastore to NVMe-oF.

NVMe-over-Fabrics in 100 Words or Less

NVMe was built to accelerate flash storage without being tied to legacy architectures. Most SSDs today use it to deliver multiple gigabytes per second of bandwidth at microseconds of latency. But NVMe has a major limitation for virtualization users: it only covers direct-attached storage (DAS) and isn’t shareable.

NVMe-oF extends NVMe to work over a fabric, enabling a Hyperparallel Flash Array (HFA) to act as shared storage. NVMe-oF uses RDMA, which allows near-zero-overhead data transfers directly into VM memory. NVMe-oF can deliver performance rivaling even locally attached SSDs while preserving the shareability, reliability, and availability guarantees of traditional SANs.
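For background on what the fabric attach itself looks like, here is a minimal sketch run on a plain Linux host (ESXi uses its own esxcli workflow) that drives the standard nvme-cli tool to discover and connect to an NVMe-oF target over RDMA. The target address and port are placeholders, not values from our lab, and the JSON output option assumes a reasonably recent nvme-cli.

```python
# Hypothetical example: discover and connect to an NVMe-oF target over RDMA
# using nvme-cli from Python. Address and port below are placeholders.
import json
import subprocess

TARGET_IP = "192.0.2.10"   # placeholder array data-port address
TARGET_PORT = "4420"       # conventional NVMe-oF service port

def discover_subsystems():
    """Ask the target which NVMe subsystems it exports over the fabric."""
    out = subprocess.run(
        ["nvme", "discover", "-t", "rdma", "-a", TARGET_IP, "-s", TARGET_PORT,
         "-o", "json"],
        check=True, capture_output=True, text=True).stdout
    return [rec["subnqn"] for rec in json.loads(out).get("records", [])]

def connect(subnqn):
    """Attach the remote namespace; it then appears as a local /dev/nvmeXnY."""
    subprocess.run(
        ["nvme", "connect", "-t", "rdma", "-a", TARGET_IP, "-s", TARGET_PORT,
         "-n", subnqn],
        check=True)

if __name__ == "__main__":
    for nqn in discover_subsystems():
        print("connecting to", nqn)
        connect(nqn)
```

Once connected, the remote namespaces behave like local NVMe block devices, which is exactly what makes the shared-storage model work.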

vSphere 7.0 NFS vs. NVMe-oF Performance with the Pavilion HFA

Because the Pavilion HFA supports multiple protocols, including NFS, we’re in a unique position to compare VM performance apples-to-apples. Since we’ve been working behind the scenes with VMware during the development of NVMe-oF for vSphere 7.0, we have had the opportunity to do lots of compatibility and performance testing in our labs.

Since the Pavilion HFA supports both NFS and NVMe-oF on all controllers, we could compare the performance of the exact same storage hardware and virtual machine using either an NFS volume or an NVMe-oF volume. That’s as close to a perfect A/B test as I’ve seen.

A virtual machine was built with 16 GB of RAM, 48 virtual cores, and a standard HDD-based boot disk running CentOS 8.1, and two VMDKs were attached: one on an NFS datastore and one on an NVMe-oF datastore. We ran ezFIO, which takes the industry-standard FIO tool, runs a complete series of I/O performance measurements, and generates a spreadsheet with graphs and raw data. The same test was run to completion on each of the two virtual disks, and the results were divided to calculate the relative percentage improvement.
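ezFIO drives the full test matrix, but as a rough sketch of the core measurement, the snippet below runs a sustained 4K random 70/30 fio job against each virtual disk and divides the results, as described above. The device paths, queue depth, job count, and runtime are illustrative assumptions rather than our exact ezFIO settings, and the JSON field names assume fio 3.x.

```python
# Simplified stand-in for the ezFIO methodology: identical fio jobs against the
# NFS-backed and NVMe-oF-backed disks, then relative improvement by division.
# WARNING: this writes raw data to the devices, so only point it at scratch disks.
import json
import subprocess

def run_fio(device):
    """Run a sustained 4K random 70% read / 30% write job; return (IOPS, read latency in us)."""
    out = subprocess.run(
        ["fio", "--name=mixed4k", f"--filename={device}", "--direct=1",
         "--ioengine=libaio", "--rw=randrw", "--rwmixread=70", "--bs=4k",
         "--iodepth=32", "--numjobs=8", "--group_reporting",
         "--time_based", "--runtime=300", "--output-format=json"],
        check=True, capture_output=True, text=True).stdout
    job = json.loads(out)["jobs"][0]
    iops = job["read"]["iops"] + job["write"]["iops"]
    read_lat_us = job["read"]["clat_ns"]["mean"] / 1000.0
    return iops, read_lat_us

nfs_iops, nfs_lat = run_fio("/dev/sdb")     # VMDK on the NFS datastore
nvme_iops, nvme_lat = run_fio("/dev/sdc")   # VMDK on the NVMe-oF datastore

print(f"IOPS improvement: {nvme_iops / nfs_iops:.2f}x")
print(f"Read latency:     {100.0 * (1 - nvme_lat / nfs_lat):.0f}% lower")
```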

The relative performance difference was dramatic.

In a mixed 70% read / 30% write 4K random workload, NVMe-oF provided up to 3.5x the IOPS of NFS.

[Chart: sustained random 4K mixed workload]

Read latencies were up to 67% lower on the NVMe-oF volume vs. the NFS one.

[Chart: sustained random 4K read]
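To put that in the same terms as the IOPS multiplier: a 67% latency reduction means reads complete in roughly one third the time, or about a 3x improvement in response time.

```python
# Convert "X% lower latency" into an equivalent speed-up factor.
reduction = 0.67                       # 67% lower read latency on NVMe-oF
speedup = 1.0 / (1.0 - reduction)      # NFS latency divided by NVMe-oF latency
print(f"{speedup:.1f}x faster reads")  # -> 3.0x faster reads
```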

And sustained write performance was up to almost 4x that of NFS:

[Chart: sustained random 4K write]

What’s Next?

In a nutshell, using NVMe-over-Fabrics with VMware vSphere 7.0 can deliver up to four times the performance of NFS on the exact same Pavilion HFA hardware. That performance multiplier can let you run your I/O-bound virtual workloads several times faster and consolidate more workloads per ESXi server. Thanks to VMware’s work bringing NVMe-oF to vSphere, enterprises can now benefit from its higher throughput and lower latency.

For more information on NVMe and NVMe-oF, please check out our library of white papers and solutions briefs here.