3 posts tagged with "CES"

· 2 min read
Joachim Kraftmayer

FOSDEM 2024, 3rd & 4th of February

We will be at FOSDEM again this year, and this time we will also be giving a presentation on one of our own open source projects, CHORUS. The topic is the management and distribution of data, and its life cycle, across multiple object stores. Drawing on a real-life case, we will explain how we used CHORUS to migrate a production object store without interrupting operations.

Presentation details

Chorus - Effortless Ceph S3 Petabyte Migration

Room: K.3.201
Date: Saturday, 3rd February, 15:30–16:00 (Europe/Brussels)
Video conference: k3201

Efficiently migrating petabytes of object storage data between two production Ceph clusters posed a significant challenge: with live data being written to both clusters, a seamless process was needed to minimize disruptions. The migration strategy involved extracting user accounts, including access and secret keys, from the old cluster and transferring them to the new one. Synchronization of buckets and live data was improved by extending and enhancing powerful tools such as rclone, executed in parallel. This migration not only resulted in the successful transfer of vast amounts of data but also paved the way for the creation of a robust tool named Chorus. Chorus, designed specifically for synchronizing S3 data, emerged as a versatile solution capable of harmonizing data across multiple cloud storage backends, effectively bridging data between Ceph clusters and demonstrating the adaptability and scalability required for modern data management challenges. Key highlights of Chorus include persistence of migrations, execution of migrations across multiple machines, and rate limiting of RAM and network usage during migration.
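To make the rclone-based part of the approach concrete, here is a minimal sketch of running per-bucket syncs in parallel. It assumes two hypothetical rclone S3 remotes, `old-ceph` and `new-ceph`, already configured with the migrated access and secret keys; the bucket names, flags, and limits are illustrative and this is not Chorus itself.

```python
#!/usr/bin/env python3
"""Sketch: sync a list of buckets between two Ceph S3 endpoints in parallel.

Assumes rclone is installed and two S3 remotes ("old-ceph" and "new-ceph")
are already configured in rclone.conf with the users' access/secret keys.
All names and limits below are illustrative.
"""
import subprocess
from concurrent.futures import ThreadPoolExecutor

BUCKETS = ["images", "backups", "logs"]   # hypothetical bucket names

def sync_bucket(bucket: str) -> int:
    # rclone copies only missing or changed objects, so re-runs are incremental.
    cmd = [
        "rclone", "sync",
        f"old-ceph:{bucket}", f"new-ceph:{bucket}",
        "--transfers", "16",   # parallel object transfers per bucket
        "--checkers", "32",    # parallel metadata comparisons
        "--bwlimit", "200M",   # cap bandwidth so production traffic is unaffected
    ]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    # Run several bucket syncs concurrently; repeat until the clusters converge,
    # then do a final pass once clients have been redirected to the new cluster.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(sync_bucket, BUCKETS))
    failed = [b for b, rc in zip(BUCKETS, results) if rc != 0]
    print("failed buckets:", failed or "none")
```

Chorus builds on this pattern and adds the persistence, multi-machine execution, and rate limiting highlighted above.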

FOSDEM 2024 - Chorus - Effortless Ceph S3 Petabyte Migration

· 9 min read
Mark Nelson

Hello Ceph community! It's time again for another blog post! One of the most common questions we've gotten over the years is whether or not users should deploy multiple OSDs per flash drive. This topic is especially complicated because our advice has changed over the years. Back in the Ceph Nautilus era, we often recommended 2 or even 4 OSDs per flash drive. There were obvious and significant performance advantages at the time when deploying multiple OSDs per flash device, especially when using NVMe drives.
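As context for what "multiple OSDs per flash drive" means in practice, here is a minimal sketch, assuming a node with two NVMe devices, of how such a layout could be provisioned with ceph-volume. The device paths and OSD count are illustrative assumptions, not a recommendation from the post.

```python
#!/usr/bin/env python3
"""Sketch: provision two OSDs per NVMe device with ceph-volume.

Device paths and the OSDs-per-device count are illustrative assumptions;
run on an OSD host where ceph-volume is available.
"""
import subprocess

NVME_DEVICES = ["/dev/nvme0n1", "/dev/nvme1n1"]  # hypothetical device paths
OSDS_PER_DEVICE = 2

cmd = [
    "ceph-volume", "lvm", "batch",
    "--osds-per-device", str(OSDS_PER_DEVICE),
    *NVME_DEVICES,
]
# --report only prints what would be created; drop it to actually apply the layout.
subprocess.run(cmd + ["--report"], check=True)
```

With cephadm-managed clusters, the same layout can typically be expressed through an OSD service specification using its osds_per_device setting.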

· One min read
Mark Nelson

Hello Ceph community! Here at Clyso we’ve been thinking quite a bit about the tuning defaults and hardware/software recommendations we will be making for users of our upcoming Clyso Enterprise Storage (CES) product based on Ceph. We decided that given how useful some of this information is both for CES and for the upstream project, we’d open the document up to the community for feedback and to help us build a better product. We’ll be adding more content as time goes on. Feel free to reach out at mark.nelson at clyso.com if you have thoughts or questions!

Download