Fibre Channel (FC) has become the compact disc of the data center. CDs once revolutionized how we listen to music: they sounded better, lasted longer, and held more music than the cassette tapes they replaced. In much the same way, Fibre Channel revolutionized storage by making SANs practical, even though it was not originally intended to be a storage interconnect.
Now, just as CDs have given way to streaming, Fibre Channel SANs are being replaced by a superior technology: NVMe over TCP/RoCE. Bottom line: the technologies behind both CDs and Fibre Channel are obsolete.
Ever since iSCSI was standardized in the early 2000s, there has been debate over which to use as a storage interconnect: Fibre Channel or Ethernet. For most data centers, particularly those that demanded the highest performance, the answer was Fibre Channel. It became the performance storage fabric of choice and was deployed in data centers around the world. Right or wrong, iSCSI over TCP was seen as a low-cost option for block-level, fabric-attached storage, but not a performant one.
Now that is all changing.
NVMe has replaced SAS and SATA as the drive interface of choice for SSDs, delivering a significant performance improvement at the drive level. NVMe over Fabrics (NVMe-oF) extends the NVMe protocol across the fabric, bringing higher throughput and lower latency to the storage network.
NVMe-oF can use either Fibre Channel or Ethernet as a transport. However, the overwhelming performance and latency advantages of Ethernet, particularly when using RDMA, make it the only real choice for today’s data centers. The most common Fibre Channel implementation today is 32GFC; 64GFC HBAs are available, and vendors offer switch speeds of up to 128GFC. In contrast, 200Gb Ethernet is available now, and some switch vendors are already offering 400GbE, giving Ethernet far greater potential bandwidth than even 128GFC.
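As a rough, back-of-the-envelope illustration, the nominal link rates cited above can be compared directly; keep in mind these are marketing link speeds, not measured throughput, and usable bandwidth per link will differ in practice.

```python
# Back-of-the-envelope comparison of the nominal link rates cited above.
# These are marketing link speeds, not measured throughput.
fc_links_gbps = {"32GFC": 32, "64GFC": 64, "128GFC": 128}
eth_links_gbps = {"200GbE": 200, "400GbE": 400}

fastest_fc = max(fc_links_gbps.values())
fastest_eth = max(eth_links_gbps.values())

print(f"Fastest FC link cited:       {fastest_fc} Gb/s")
print(f"Fastest Ethernet link cited: {fastest_eth} Gb/s")
print(f"Ethernet headroom:           {fastest_eth / fastest_fc:.1f}x")  # ~3.1x
```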
Plus, Ethernet’s performance advantages go beyond bandwidth. RDMA over Converged Ethernet (RoCE) dramatically reduces the latency of the storage network by allowing the storage controller to place data directly into the server’s system memory. NVMe over FC cannot use RDMA, meaning that, all other things being equal, Fibre Channel will always carry more latency than an RDMA-capable Ethernet fabric.
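To make the transport options concrete, here is a minimal sketch of how a Linux host might attach a remote NVMe-oF subsystem over TCP or RoCE using the standard nvme-cli tool. The address, port, and NQN below are placeholders rather than values for any particular array, and the exact procedure for a given product may differ.

```python
import subprocess

# Hypothetical target details -- placeholders, replace with values from your environment.
TARGET_ADDR = "192.0.2.10"   # array data portal (documentation address)
TARGET_PORT = "4420"         # default NVMe-oF service port
TARGET_NQN = "nqn.2014-08.org.example:subsystem1"  # placeholder subsystem NQN


def nvme_connect(transport: str) -> None:
    """Attach the remote NVMe subsystem over the given transport ('tcp' or 'rdma')."""
    subprocess.run(
        [
            "nvme", "connect",
            f"--transport={transport}",   # 'tcp' for NVMe/TCP, 'rdma' for RoCE
            f"--traddr={TARGET_ADDR}",
            f"--trsvcid={TARGET_PORT}",
            f"--nqn={TARGET_NQN}",
        ],
        check=True,
    )


if __name__ == "__main__":
    # The command shape is identical for both Ethernet transports;
    # only the transport flag changes.
    nvme_connect("tcp")     # NVMe over TCP
    # nvme_connect("rdma")  # NVMe over RoCE, if NICs and switches support RDMA
```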
And if much greater bandwidth and far lower latency weren’t enough, Ethernet is also significantly less expensive than FC.
The ubiquity of Ethernet as a data network has produced economies of scale that Fibre Channel can never approach. As a result, Ethernet networking components are, on average, far more cost-effective than their Fibre Channel counterparts. When it comes to price/performance, there is simply no longer any comparison between Fibre Channel and Ethernet.
The end of Fibre Channel has long been predicted, yet its historical advantages have helped it maintain its dominance over Ethernet as a storage interface. Those advantages included a lossless protocol, a large installed base, isolation of block storage traffic, and a well-defined roadmap. Today, they no longer exist.
Converged Ethernet now provides lossless delivery, guaranteeing that packets will not be dropped when a switch is overwhelmed with incoming traffic. Fibre Channel does still have a large installed base, but the ubiquity of Ethernet as a data network makes Ethernet far more common overall. While it is a best practice to run storage and general network traffic on separate switches, the two can also be safely segregated with VLANs. The final advantage of Fibre Channel, a well-defined roadmap, has also reached its end: while a Gen 8 is being discussed, as of this writing there is no official roadmap beyond 128GFC.
With all that said, what are the real-world differences between these solutions? Let’s compare a leading all-flash, fabric-attached solution that uses NVMe over FC to the Pavilion HyperParallel Flash Array (HFA). The former is the top-of-the-line all-flash fabric solution from a leading brand-name vendor. Its data sheet highlights 300GB/s of read throughput, 11.4M IOPS, and 100µs latency for a cluster with a 48RU footprint. These are amazing numbers when compared to traditional SCSI FCP.
When we compare that to the Pavilion HFA, which uses NVMe over TCP, we get a very different result. The HFA delivers 120GB/s of read throughput, 20M IOPS, and latency as low as 25µs in a single 4RU solution. This means that in less than 1/10th the rack space, the Pavilion HFA nearly doubles the IOPS and cuts latency by 75%. Those are meaningful numbers. Granted, it takes two HFAs to approach the same throughput, but that is still a reduction from 48RU to 8RU. Plus, because the HFA scales linearly, two arrays deliver 40M IOPS. There is clearly no comparison.
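One way to see how lopsided the comparison is, using only the data-sheet figures quoted above, is to normalize each system by its rack footprint:

```python
# Per-rack-unit comparison using the data-sheet figures quoted above.
systems = {
    "NVMe over FC array (48RU cluster)": {"ru": 48, "read_gb_s": 300, "m_iops": 11.4, "latency_us": 100},
    "Pavilion HFA, NVMe over TCP (4RU)": {"ru": 4,  "read_gb_s": 120, "m_iops": 20.0, "latency_us": 25},
}

for name, s in systems.items():
    print(f"{name}:")
    print(f"  throughput density: {s['read_gb_s'] / s['ru']:.1f} GB/s per RU")  # 6.2 vs 30.0
    print(f"  IOPS density:       {s['m_iops'] / s['ru']:.2f} M IOPS per RU")   # 0.24 vs 5.00
    print(f"  latency:            {s['latency_us']} µs")                        # 100 vs 25
```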
NVMe over TCP/RoCE offers significantly greater performance, lower latency, lower cost, and a stronger technology roadmap for the future. Fibre Channel has had a good run; it has been the backbone of storage networking for two decades. But now, just as streaming replaced CDs for music, the overwhelming advantages of NVMe over TCP/RoCE have made FC obsolete.
For more information on how NVMe over TCP/RoCE is replacing FC, read our blog on configuring the next-generation storage network.