When EMC launched its VFCache server SSD product this week, the storage vendor identified Fusion-io as its main competition in the market. But Fusion-io CEO David Flynn said his company has a more mature flash caching product, and that customers will resist EMC’s attempt at extending its vendor lock to the server.
SearchSolidStateStorage.com: How does it feel to have EMC bearing down on you with VFCache and “Project Thunder”?
Flynn: We view it like when EMC had to respond to NAS, NetApp really took off. When EMC had to respond to dedupe, Data Domain really took off. The pattern is already set for how these things go. They’ll try a few times to enter the market. EMC represents a closed system and a vendor lock that most folks are happy to be leaving as they go to more scale-out architectures and ‘big data’ architectures that are not dependent on what are basically storage mainframes. The big storage array is a vertically integrated, vendor locked environment just like the mainframes were.
We’ve been competing with EMC from Day 1. The day we came out of stealth mode and announced our product was the same month EMC announced they were putting flash in their storage array. This was late 2007. We both entered the market for enterprise solid state at the same time. Fusion-io was a 15-person company at the time, and we have outsold them.
That’s because we have had a singularity of focus on maximizing the disruptive potential of flash while EMC has to worry about its existing revenue sources in storage arrays.
We said put the flash in the server -- it’s more potent there and it will accelerate applications better. EMC said put it in the storage array. They did that because the flash sits inside their box, so they can charge for it and mark it up.
They have now admitted the flash is better deployed at the server. But they’re still trying to propagate their vendor lock.
That Micron card [used in VFCache] could be sold by a server vendor, why buy it from EMC? Why doesn’t EMC just sell the software? Because they’re trying to keep their hands around the premium flash and propagate the vendor lock.
I don’t think the customers who buy servers want to be held hostage by EMC the way they’ve watched their storage administrator counterparts be held hostage.
SearchSolidStateStorage.com: EMC first previewed VFCache as “Project Lightning” last May. Were there any surprises in the official launch this week?
Flynn: I was surprised at its low capacity point and that it is SLC [single-level cell]-only. It shows EMC does not trust the vendors who are selling them the flash devices to use cheaper, higher-capacity MLC [multi-level cell].
The fact they do not support [VMware] vMotion and sharing of the cache among guests shows a rush to get it to market. Those are key things in virtualized environments. We chose not to go to market with something with roughly the same feature set and waited until we had vMotion and such. It was only late last year that we brought the IO Turbine product to market with a whole lot of sophistication around supporting virtualized environments.
SearchSolidStateStorage.com: EMC’s webcast presentation Monday included a graphic pitting Micron’s card head-to-head against Fusion-io’s, showing performance advantages to the Micron card.
Flynn: They showed our first-generation card that’s been in the market four years. It’s not indicative of our modern stuff. That’s one of the problems with having been in the market so long -- these guys can cherry-pick old stuff and compare to it. It’s not a fair comparison. The other thing is they’re comparing MLC performance levels to SLC, so they’re comparing apples to oranges from an underlying NAND flash perspective.
The other thing that’s interesting is they talked about shipping 24 petabytes of flash in 2011. We shipped more than 50 petabytes.
SearchSolidStateStorage.com: EMC claims VFCache is faster than Fusion-io because VFCache handles flash management and wear leveling, while Fusion-io cards offload this to the server CPU.
Flynn: It’s exactly the opposite. Our caching technology bypasses the guest operating system and the hypervisor. We get I/Os directly from the application and do not have to traverse NTFS -- the Windows NT file system -- or VMFS -- the VMware file system in ESX. By getting rid of that, we have direct access to those I/O requests and service them with much less CPU overhead. EMC is focused on the wrong thing if they’re focused on the CPU required to access the flash. They should be focused on bypassing the layers of operating system that get in the way of receiving the I/Os from the application.
The whole CPU usage claim is not true. We accelerate applications better than other flash devices because of our architecture. But it is counter-intuitive, and that leads people to think it’s actually a weakness when it’s a strength in how we run our flash.
SearchSolidStateStorage.com: Another point EMC makes is using server SSDs with a storage array enables customers to have data protection and high availability that they can’t get using a Fusion-io card with only the server and direct attached storage.
Flynn: We do provide those capabilities with our IO Turbine caching software and direct cache. When you look at the growth markets in IT, it’s scale-out, cloud, software as a service, big data, and web companies like Apple and Facebook. Those guys get high availability in a way that does not require a high-end storage array. Even traditional enterprises are trying to emulate that architecture by adopting things like Hadoop, or they’re leveraging those services through software-as-a-service providers.
SearchSolidStateStorage.com: What do you think of the Project Thunder product?
Flynn: Building an all-flash storage appliance is too little, too late to protect the storage mainframe business. Put flash in a server and you have a high-speed storage device -- you don’t need EMC to do that. EMC is putting its own VFCache cards in a box. Well, guess what? Put them in a regular server and you have something that doesn’t require the vendor lock; it’s an open architecture.