PCIe and SSD: Can the Combo Replace Your SAN?

Monday Oct 15th 2012 by David Chernicoff

The impact of storage as a business technology has most often been driven by two factors: price and capacity, with a third, performance, being a variable that depends on the needs of the business and its applications. The rapid adoption of SATA technology brought down the cost of large storage arrays, and the current move from the 3 Gbps SATA 2 specification up to 6 Gbps SATA 3 is providing a major jump in storage bandwidth at minimal cost.
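That bandwidth jump is easy to sanity-check: both SATA generations use 8b/10b line encoding, so ten bits on the wire carry eight bits of data. A minimal back-of-envelope sketch:

```python
# Back-of-envelope: SATA effective bandwidth after 8b/10b line encoding.
# SATA 2 and SATA 3 both use 8b/10b, so 10 line bits carry 8 data bits.

def sata_effective_mb_per_s(line_rate_gbps: float) -> float:
    """Convert a SATA line rate (Gbps) to effective payload bandwidth (MB/s)."""
    bits_per_second = line_rate_gbps * 1e9
    data_bits = bits_per_second * 8 / 10  # strip the 8b/10b encoding overhead
    return data_bits / 8 / 1e6            # bits -> bytes -> MB

print(sata_effective_mb_per_s(3.0))  # SATA 2: 300.0 MB/s
print(sata_effective_mb_per_s(6.0))  # SATA 3: 600.0 MB/s
```

Doubling the line rate doubles the usable bandwidth, since the encoding overhead is unchanged between the two generations.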

But the real game changer in the storage world isn't going to be a hard drive or even a specific storage technology. It's the availability of integrated PCIe 3.0 and 10 Gigabit Ethernet (10 GbE) as a standard server feature that will change the way storage is approached.

Today's state-of-the-art 16 Gb Fibre Channel technology delivers the best SAN performance available. For applications that require the ultimate in performance and reliability, FC has been, and will likely remain, the technology of choice for some time to come. But for almost everything else, iSCSI has been gaining market share as the preferred method for deploying storage networking, thanks to its simpler implementation and lower deployment cost.

In its original form, iSCSI was implemented over standard Ethernet connections using common Ethernet adapters. As it became more successful and users wanted better performance from servers using iSCSI-attached storage, technologies such as iSCSI host bus adapters (HBAs) and TCP offload engines were deployed to lessen the CPU overhead of managing data transfer across the iSCSI storage network. These technologies reduced the workload on the server CPU at the cost of adding complexity to the iSCSI deployment.

As 10 Gb Ethernet has become more prevalent, primarily as a datacenter interconnect, the Fibre Channel world took notice and developed Fibre Channel over Ethernet (FCoE), a technology for encapsulating Fibre Channel frames over an Ethernet connection. Specialized host bus adapters that contain both Fibre Channel and Ethernet modules can be used to provide this connectivity. FCoE can also be deployed on standard Ethernet adapters via a software implementation, at the cost of a performance hit on the server CPUs. FCoE is primarily being adopted by users who already have a large investment in Fibre Channel and are looking to extend their storage networking capabilities.
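One practical consequence of that encapsulation is frame size: a full-size Fibre Channel frame no longer fits inside a standard Ethernet frame, which is why FCoE deployments require jumbo (or "baby jumbo") frames on the Ethernet side. A rough sketch of the arithmetic, using the T11 FC-BB-5 encapsulation sizes:

```python
# Why FCoE needs "baby jumbo" Ethernet frames: a maximum-size Fibre Channel
# frame, once encapsulated, exceeds the standard 1518-byte Ethernet frame.
# Byte counts below follow the FC-BB-5 encapsulation; treat this as a sketch.

FC_FRAME = 24 + 2112 + 4         # FC header + max FC payload + FC CRC
FCOE_OVERHEAD = 14 + 4 + 14 + 4  # Ethernet header + 802.1Q tag + FCoE header (incl. SOF) + FCoE trailer (incl. EOF)
ETH_FCS = 4                      # Ethernet frame check sequence

fcoe_frame = FC_FRAME + FCOE_OVERHEAD + ETH_FCS
print(fcoe_frame)         # 2180 bytes
print(fcoe_frame > 1518)  # True: too big for a standard Ethernet frame
```

Since Fibre Channel has no concept of fragmenting a frame, the Ethernet links carrying FCoE must be configured with an MTU large enough to carry the whole encapsulated frame.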

A third Ethernet-hosted storage technology, ATA over Ethernet (AoE), has also made some inroads in the storage networking universe, but support for it lags behind both iSCSI and FCoE.

So what we have now is a set of technologies that can run over standard Ethernet, that work best when deployed with specialized hardware, and that require considerable bandwidth for optimal performance. Users deploying these technologies need to take all of these issues into account when planning Ethernet-based storage networks. Even so, in most cases the cost advantages over a traditional Fibre Channel network still make these options attractive.

But with the release of the latest generation of Intel Xeon processors and some associated technologies, high-performance storage area networking over Ethernet will become much easier to deploy and an even more cost-effective solution.

The Intel E5 family of Xeon processors is the first generation of Xeon to support PCIe 3.0. This is significant on its face, as the new technology provides double the bandwidth of the previous PCIe generation. But Intel also made additional I/O-specific advancements in the E5 Xeon, including Intel Integrated I/O and Intel Data Direct I/O. With these advances combined, Intel has seen I/O bandwidth triple compared to the previous generation of Xeon processors.
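The doubling is not just a faster clock. PCIe 3.0 raises the signaling rate from 5 GT/s to 8 GT/s and replaces 8b/10b encoding with the much leaner 128b/130b scheme; the two changes together roughly double per-lane bandwidth. A quick sketch of the math:

```python
# Per-lane PCIe bandwidth: signaling rate adjusted for line-encoding overhead.

def pcie_lane_mb_per_s(gt_per_s: float, data_bits: int, total_bits: int) -> float:
    """Effective per-lane bandwidth in MB/s for a given rate and encoding."""
    return gt_per_s * 1e9 * data_bits / total_bits / 8 / 1e6

gen2 = pcie_lane_mb_per_s(5, 8, 10)     # PCIe 2.0: 8b/10b encoding
gen3 = pcie_lane_mb_per_s(8, 128, 130)  # PCIe 3.0: 128b/130b encoding

print(gen2)         # 500.0 MB/s per lane
print(round(gen3))  # 985 MB/s per lane
```

The signaling rate rises only 1.6x, but cutting the encoding overhead from 20% to under 2% brings the effective gain to almost exactly 2x per lane.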

Intel has reduced I/O latency by as much as 30% with its Integrated I/O, which moved the I/O controller from the motherboard chipset onto the physical CPU. This change in how I/O is handled would, by itself, yield a significant improvement in the performance of SAN-over-Ethernet solutions, but Intel's second technology is even more important.

The motherboard technology shipped with the new Xeon processors changed the onboard networking standard from 1 GbE to 10 GbE, with the ability to run 10 GbE over appropriate copper cabling (Intel has said its studies show that more than 90% of datacenters already have suitable cabling in place). But the Intel X540 10GBASE-T Ethernet controller, which is available both for add-on NICs and for onboard deployment, goes a step further: it uses Intel Data Direct I/O technology, which allows it to write network data directly into the CPU's L3 cache.

Intel has tested the technology with up to 16 PCIe 3.0 Ethernet controllers in a single system and demonstrated more than 250 Gbps of throughput. The availability of this much bandwidth will greatly simplify the deployment of SAN-over-Ethernet technologies as the standard server NIC moves from 1 GbE to 10 GbE. And the way the I/O is handled should reduce the need for additional hardware, in the form of costly HBAs or TOE (TCP offload engine) enabled NICs, to get adequate performance from the systems handling storage connectivity.
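A quick sanity check on that figure, assuming the throughput is measured in gigabits per second and that each controller is a dual-port 10 GbE part (the X540 ships as a dual-port controller):

```python
# Aggregate line rate of the test configuration described above, under the
# assumption of dual-port 10GbE controllers; the 250 Gbps figure is from
# the article, not a measurement reproduced here.

controllers = 16
ports_per_controller = 2   # assumption: dual-port parts, as the X540 is
port_rate_gbps = 10

raw_gbps = controllers * ports_per_controller * port_rate_gbps
measured_gbps = 250        # throughput figure quoted in the article

print(raw_gbps)                            # 320 Gbps of raw line rate
print(round(measured_gbps / raw_gbps, 2))  # ~0.78 of line rate achieved
```

Under those assumptions, the demonstrated throughput works out to roughly 78% of the aggregate line rate, a plausible figure for a real multi-controller workload.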

More importantly, it makes hardware standardization simpler, as the need to have specialized hardware in place to support day-to-day SAN operation can go away for the majority of SAN data storage needs.
