
QLogic Mt. Rainier FabricCache pools PCIe flash cache across servers

Todd Erickson

QLogic Corp. today launched the first product based on its Mt. Rainier server-side flash cache-sharing technology. The FabricCache QLE10000 Fibre Channel host bus adapter enables pooling of cache across solid-state PCI Express cards.

QLogic refers to the FabricCache card as a "caching SAN adapter." Clustered servers running FabricCache adapters can access and utilize all of the combined caches in the cluster.

"Every FabricCache-equipped server in a SAN knows what other FabricCache adapters are out there on the SAN, and it pools the capacity together so that each individual FabricCache adapter has access to this combined cache," said Chris Humphrey, QLogic's vice president of marketing. "That gives you the ability to support clustered applications and workloads that cross multiple virtual machines."

Customers can move virtual machines (VMs) running in the cluster between physical hosts while the cache stays "warm" because there is only one active cache copy in the cluster.
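
The pooled-cache behavior Humphrey describes can be illustrated with a short, purely conceptual Python sketch: each adapter checks its own flash first, then its peers' caches, and only falls back to the SAN on a cluster-wide miss, which is why a migrated VM still finds its data warm. The CacheAdapter class, the read_from_san() helper and the block numbers below are hypothetical stand-ins, not QLogic's firmware or API.

# Conceptual sketch (not QLogic firmware): how a pooled, cluster-wide cache
# keeps a VM's working set "warm" across a live migration. All names here
# are hypothetical illustrations.

def read_from_san(block):
    # Stand-in for a Fibre Channel read from the back-end array.
    return f"data@{block}"

class CacheAdapter:
    """One FabricCache-style adapter: local flash plus a view of its peers."""

    def __init__(self, name):
        self.name = name
        self.local = {}   # block address -> data cached on this adapter's flash
        self.peers = []   # other adapters discovered in the same cluster

    def join(self, cluster):
        """Learn about every other adapter so their caches become one pool."""
        self.peers = [peer for peer in cluster if peer is not self]

    def read(self, block):
        """Serve from local cache, then the pooled peer caches, else the SAN."""
        if block in self.local:
            return self.local[block]
        for peer in self.peers:
            if block in peer.local:
                # Only one active copy exists in the cluster, so a VM that just
                # migrated hosts still finds its warm data on the old host's card.
                return peer.local[block]
        data = read_from_san(block)
        self.local[block] = data
        return data

# A VM caches blocks while on host A; after migrating to host B, its reads
# still hit host A's flash through the pool instead of going back to the array.
host_a, host_b = CacheAdapter("hostA"), CacheAdapter("hostB")
for adapter in (host_a, host_b):
    adapter.join([host_a, host_b])

host_a.read(42)                      # populates host A's local flash
assert host_b.read(42) == "data@42"  # warm hit via the pooled cache

The design point the sketch tries to capture is that a read is answered by whichever adapter already holds the single active copy, so a migration never invalidates the cache.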

Humphrey said the QLogic Mt. Rainier FabricCache performs all of the caching, routing and I/O management processing on the card and appears to the server simply as a QLogic host bus adapter (HBA). Each adapter works in initiator and target modes simultaneously, but only the initiator mode is exposed to the host; operating in both modes allows clustered adapters to discover each other and their attached flash cards using standard SCSI commands.
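
The dual-mode arrangement can be sketched the same way: each card runs a target personality that answers peer queries about the flash it contributes, alongside the initiator personality that the host actually sees. The class names and the "REPORT LUNS" string below are illustrative stand-ins for the standard SCSI discovery Humphrey mentions, not QLogic's actual implementation.

# Conceptual sketch of the dual initiator/target arrangement; the classes and
# the "REPORT LUNS" string are illustrative stand-ins, not QLogic's firmware.

class TargetPersonality:
    """Target mode: lets peer adapters query this card's attached flash."""

    def __init__(self, flash_luns):
        self.flash_luns = flash_luns

    def handle_scsi(self, command):
        # Peers use standard discovery commands to learn what this card exposes.
        if command == "REPORT LUNS":
            return sorted(self.flash_luns)
        raise NotImplementedError(command)

class InitiatorPersonality:
    """Initiator mode: the only side the host operating system ever sees."""

    def __init__(self):
        self.peers = []

    def discover_peer_cache(self):
        # Ask each peer's target side what flash it contributes to the pool.
        return {peer.name: peer.target.handle_scsi("REPORT LUNS")
                for peer in self.peers}

class FabricCacheAdapter:
    """One card running both personalities simultaneously."""

    def __init__(self, name, flash_luns):
        self.name = name
        self.target = TargetPersonality(flash_luns)   # visible to other adapters
        self.initiator = InitiatorPersonality()       # visible to the host as an HBA

# Two adapters on the same fabric: each host sees an ordinary HBA, while the
# cards quietly enumerate one another's flash through the target side.
card_a = FabricCacheAdapter("cardA", flash_luns={1, 2, 3})
card_b = FabricCacheAdapter("cardB", flash_luns={7, 8, 9})
card_a.initiator.peers.append(card_b)

print(card_a.initiator.discover_peer_cache())   # {'cardB': [7, 8, 9]}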

The company is selling the QLogic Mt. Rainier FabricCache adapter by itself, or bundled with 200 GB or 400 GB PCI Express (PCIe) flash cards attached to the adapters with PCIe cables. The FabricCache QLE10000 is an 8 Gbps dual-port Fibre Channel (FC) HBA with double data rate (DDR) SDRAM and a four-core system-on-a-chip (SoC) for the caching and I/O management. The QLE10522-C and the QLE10542-C add 200 GB and 400 GB solid-state single-level cell (SLC) PCIe flash cards, respectively.

Although most flash sold today is the lower-cost multi-level cell type, Humphrey said QLogic selected SLC flash because of its higher level of endurance. QLogic has an undisclosed OEM partner for its PCIe cards.

Humphrey called existing server-side caching systems "first-generation solutions" because they typically can't be shared. "It's really a direct-attached storage model that doesn't fit well into SAN [storage area network] storage, data protection and compliance policies," he said. "If you want to bring server-based acceleration to the vast majority of enterprise applications, you need to make cache a shared resource, and you need to support clustered application deployments."

Wikibon senior analyst Stuart Miniman said FabricCache fills a market need. "Looking at the flash marketplace, the number one gap in server-based flash is all the problems you have with direct-attached storage -- you can't share it," he said. "And this directly attacks that problem."

Miniman said this is the only network-based server-side flash-sharing product he has seen, although there are software products that aim to enable flash sharing. Virident Systems Inc.'s FlashMAX Connect Suite software shares server-side flash, and PernixData Inc.'s Flash Virtualization Platform (FVP), now in beta, puts software in the vSphere hypervisor and pools flash across servers.

EMC recently scrapped its "Project Thunder" product, which would have shared flash across servers over the network, saying it was unnecessary given its other server-side and array-based flash systems and software.

FabricCache adapters can be managed with QLogic's QConvergeConsole, a command-line interface, an open application programming interface (API), or QLogic plug-ins for VMware Inc.'s vSphere, Microsoft Corp.'s Hyper-V and Citrix Systems Inc.'s XenServer server virtualization platforms.

Humphrey said QLogic plans to announce certifications with industry-standard solid-state cards within two months. He also said QLogic plans to add FabricCache adapters for 16 Gbps FC, as well as 10 Gigabit Ethernet (GigE) and iSCSI. No timeframes have been set, but Humphrey said the 10 GigE adapter would probably be next and would begin sampling by the end of 2013.

