Nimbus Data has returned from self-imposed all-flash exile with a new architecture built around petabyte-sized nodes that the company says can hit 50 cents per gigabyte of raw capacity.
Nimbus' new ExaFlash architecture was developed over the last two-plus years while the vendor went publicly silent, although founder and CEO Tom Isakovich said it continued to sell its previous-generation Gemini arrays. Nimbus became one of the first all-flash array vendors in 2010, but was pushed into the background by fellow startups such as Pure Storage, Violin Memory and Nimble Storage, and by the large legacy vendors that jumped into the flash market. Isakovich ran Nimbus lean without a large marketing budget, and the company got lost in the flash hype.
Isakovich said Nimbus went quiet in mid-2014 to rewrite its flash strategy.
"The last two-and-half years we put our nose to the grindstone and got to work on the next big thing," Isakovich said. "We had to rethink our software and hardware architecture to take advantage of where flash is going. Now we think we have the platform that will take us to the next 10 years with flash."
The result was ExaFlash, an API-driven architecture designed to scale from terabytes to exabytes of storage. "This is about how to deliver flash memory at scale," Isakovich said of ExaFlash.
"We've sold [Gemini arrays] to some of the biggest internet companies out there -- including eBay and PayPal -- and they told us, 'We're thinking petabytes, we're even thinking beyond petabytes.' What does that architecture look like? How does a company deploy flash at scale while taking full advantage of the performance and energy saving attributes of flash?"
Isakovich said the changes reflect the evolution of all-flash arrays from being all about speed in 2010 to emphasizing scale today. He said ExaFlash is built for modern data centers -- petabyte-range capacity, API-driven, high performance and approaching the price of hard disk drives. The target use cases are cloud, virtualization, databases, and big data analytics.
ExaFlash arrays support block and file storage with object on the 2017 roadmap. They support Fibre Channel, Ethernet and InfiniBand connectivity.
There are four ExaFlash node configurations -- the A Series, B Series, C Series and D Series. The 2U A and B Series use standard 2.5-inch SAS SSD drives in 4 TB, 8 TB and 16 TB capacities. The 4U C and D Series use Nimbus-developed 3.5-inch 50 TB drives.
The A Series scales from 50 TB to 100 TB raw capacity, the B Series scales from 125 TB to 1 PB, the 60-slot C Series scales from 1.5 PB to 3 PB and the 90-slot D Series holds 4.5 PB. All except the D Series can be purchased half or fully populated.
Each node has dual controllers for availability, and customers can mix nodes in a cluster.
Pricing starts at $50,000 for the A Series, $125,000 for the B Series, $957,000 for the C Series and $2.25 million for the D Series. That comes to $0.95 per raw GB for the A and B, $0.65 per GB for the C and $0.50 per GB for the D. Nimbus also estimates an average data reduction ratio of around 3:1 from deduplication and compression. Nimbus is using multi-level cell (MLC) flash, and Isakovich said the price can be reduced in the future by adopting triple-level cell (TLC) and other lower-cost types of flash.
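As a quick sanity check on these figures, the per-GB cost after data reduction can be derived from the quoted raw prices and the estimated 3:1 ratio. A minimal Python sketch (the series labels and rounding here are illustrative, not vendor terminology):

```python
# Raw per-GB prices quoted for each ExaFlash series (USD).
raw_price_per_gb = {"A": 0.95, "B": 0.95, "C": 0.65, "D": 0.50}

# Nimbus' estimated average deduplication + compression ratio (3:1).
reduction_ratio = 3.0

# Effective cost per usable GB, assuming the 3:1 ratio holds across workloads.
effective_price_per_gb = {
    series: round(price / reduction_ratio, 2)
    for series, price in raw_price_per_gb.items()
}

print(effective_price_per_gb)
# → {'A': 0.32, 'B': 0.32, 'C': 0.22, 'D': 0.17}
```

At the estimated reduction ratio, even the smallest node lands around 32 cents per effective GB, which is the basis for the hard-disk-price comparison Isakovich draws.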
The ExaFlash arrays include FPGA hardware acceleration for data reduction, encryption and metadata checksum calculations.
"We've been adamant for some time that to do all-flash right, you need to do your own hardware," Isakovich said. "These are not off-the-shelf servers. Intel CPUs are not optimized to do these calculations."
ExaFlash's other storage features include monitoring, snapshots, replication, cloning, and thin provisioning.
Isakovich said ExaFlash is neither a scale-up nor a scale-out architecture. Because data flow is decoupled from metadata, data does not flow between nodes; the nodes exchange only management information, so there is no interdependency between them. There is also no need to mirror data between nodes for availability. Customers can't extend LUNs across nodes, but Isakovich said the vendor's petabyte-sized nodes alleviate the need to do so.
"That's the one negative if someone wants to point to a negative," he said. "But because we've built such big nodes, we don't see that as a realistic compromise. We don't anticipate customers saying 'We need to make a 50 petabtye LUN and we can't do that with you.'"
Enterprise Strategy Group senior analyst Scott Sinclair said ExaFlash systems will have to be tested in the field before they can be properly judged, but their density and pricing promises make them worth considering.
"If they can hit the price points and density points they talk about with their 4u box, that's super compelling," he said. "They don't have scale out or scale up, and it's easy to look at that and say they're missing features. But they can aggregate management of multiple systems. A vast majority of organizations probably don't need LUNs that span individual systems, so they benefit from aggregating systems."
Perhaps what makes Nimbus stand out most in the industry is its business model. The company has no venture funding, but Isakovich said it is profitable from sales of its Gemini arrays. Its sales and marketing budget is a mere fraction of that of Pure Storage and Nimble Storage, let alone EMC, Hewlett Packard Enterprise, NetApp and IBM. Isakovich won't say how many people work for Nimbus, but claims the "lean and clean" business model is better suited for the long run than the likes of Pure, Nimble and Violin Memory -- all public companies losing money.
"It's a balancing act," Sinclair said of the Nimbus business model. "With its lean sales model, Nimbus won't get the breadth of visibility the others do, but it will be cost-competitive in accounts it goes after."