
Micron Ceph block storage expands NVMe configurations

Micron Ceph storage comes as a four-node storage cluster that incorporates object features of Red Hat's Ceph Jewel release. A single cluster scales to 96 TB of raw capacity.

Micron Technology Inc. is adding an all-flash reference architecture for Red Hat Ceph block storage, the third offering in Micron's Accelerated Solutions portfolio that launched in 2016.

Product SKUs are being finalized for the Micron Accelerated Ceph Storage Solutions package, which will be sold as a four-node storage cluster with three attached 1U monitor nodes.

The recommended architecture combines qualified Supermicro Ultra Server or SuperServer storage servers, Micron 9100 MAX PCI Express-based NVMe SSDs and Red Hat Ceph Storage 2.1 software-defined storage. It is designed as a building block for scale-out OpenStack cloud infrastructure.

Each Ceph storage node accommodates 10 2.4 TB U.2 SSDs, for 96 TB of raw storage across the four-node cluster. Micron recommends nodes be configured with two Intel Xeon E5-2699 v4 processors, eight 32 GB DDR4 registered dual inline memory modules and two Mellanox single-port 50 Gigabit Ethernet network adapters.
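The capacity figure is simple arithmetic on the drive counts above. A quick sketch checks it, and also estimates usable space under Ceph's default three-way replication (an assumption here; the article quotes only the raw number):

```python
# Back-of-the-envelope check of the cluster capacity quoted above.
DRIVES_PER_NODE = 10
DRIVE_TB = 2.4          # Micron 9100 MAX U.2 SSD capacity
NODES = 4
REPLICAS = 3            # Ceph's default replication factor (assumption)

raw_tb = DRIVES_PER_NODE * DRIVE_TB * NODES
usable_tb = raw_tb / REPLICAS
print(raw_tb, usable_tb)  # roughly 96 TB raw, 32 TB usable at 3x replication
```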

Micron, Red Hat engineers tackle Ceph performance

"Historically, Ceph block storage has been dinged for poor performance on small block reads. We positioned this as a high-performance storage back end for OpenStack Cinder, but it also gets pretty good throughput," said Greg Kincade, senior product line manager of Micron's storage business unit.

Micron, based in Boise, Idaho, claims configuration testing delivered up to 1.1 million IOPS on 4K small block random reads and 1.75 Gbps on 4 MB object random reads.


Ceph presents storage capacity in aggregate across all nodes in the cluster. It assigns an object storage daemon (OSD) to each disk to handle replication and serve data over the network. Write and replication journals are housed on SSDs.
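The distribution idea can be illustrated with a toy placement function. Real Ceph uses its CRUSH algorithm and placement groups to map objects to OSDs; the sketch below is only a deterministic-hash stand-in for that concept, with all names hypothetical:

```python
import hashlib

def place_object(object_name: str, osd_count: int, replicas: int = 3) -> list[int]:
    """Toy stand-in for Ceph's CRUSH placement: deterministically map an
    object name to `replicas` distinct OSD IDs. Real Ceph hashes objects
    into placement groups and maps those through a CRUSH map instead."""
    digest = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    primary = digest % osd_count
    # Replicas land on successive OSDs; CRUSH would honor failure domains.
    return [(primary + i) % osd_count for i in range(replicas)]

# Any client computing the same hash gets the same placement -- no lookup
# table is needed, which is what lets capacity aggregate across all nodes.
print(place_object("volume-0001/block-42", osd_count=40))
```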

"That combination [of disk and flash] works extremely well with object storage, which is Ceph's primary use case. But there is a huge performance penalty with small block objects," said Ryan Meredith, a Micron principal storage solutions engineer.

Micron and Red Hat engineers teamed up to exploit object features added in the Ceph Jewel release with NVMe flash drives. Meredith said the work predominantly involved an "ugly, difficult, iterative process" of tuning configuration files.
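That tuning happens largely in ceph.conf. The fragment below is illustrative only: the parameter names are real Jewel-era tunables, but the values are examples, not Micron's published settings:

```ini
# Illustrative ceph.conf fragment -- example values, not Micron's settings.
[global]
debug ms = 0/0                      # silence messenger logging on the hot path

[osd]
osd op num shards = 8               # spread small-block I/O across more cores
osd op num threads per shard = 2
filestore queue max ops = 5000      # deepen the FileStore queue for NVMe
```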

The Accelerated Solutions portfolio also includes the Micron Accelerated VMware vSAN Ready Node Solution and Micron Accelerated NexentaEdge Solution. The expanded Red Hat Ceph block storage comes one week after Micron introduced its NVMe-over-Fabrics-based SolidScale architecture, which the vendor views as a replacement array for general-purpose workloads.
