S3 Migration with Chorus
An S3 system holds data; call it `source`. It keeps applications running, but migration is needed, maybe due to scale limits or costs creeping up. A new S3 setup, `target`, is set to replace it. The challenge is to move all data from `source` to `target` with no downtime, no data lost, and no breaks for the apps using `source`. What can get this done?
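For the sketches later in this post, picture `source` and `target` as two independent S3 endpoints. A minimal setup in Python with boto3, where the endpoint URLs and credentials are placeholders rather than anything from a real migration:

```python
import boto3

# Placeholder endpoints and credentials; substitute your own systems.
source = boto3.client(
    "s3",
    endpoint_url="https://source-s3.example.com",
    aws_access_key_id="SOURCE_ACCESS_KEY",
    aws_secret_access_key="SOURCE_SECRET_KEY",
)
target = boto3.client(
    "s3",
    endpoint_url="https://target-s3.example.com",
    aws_access_key_id="TARGET_ACCESS_KEY",
    aws_secret_access_key="TARGET_SECRET_KEY",
)
```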
What Are the Challenges?
Migrating S3 data brings several key difficulties:
- Data and Metadata Consistency: It’s essential to make sure that all data is copied correctly: no objects lost or corrupted. Besides this, applications relying on metadata (e.g., ACLs, versions, timestamps) need to keep working as expected, so the metadata has to be spot-on too (a copy sketch below this list makes this concrete).
- Ongoing Writes: Applications don’t stop writing to `source` storage during migration, and all that new data needs to reach `target` storage too. Synchronous replication sounds good but can get slow and messy: what’s the right response to a user if a `PUT` works on `source` storage but flops on `target` (see the sketch right after this list)? The alternative is downtime: pause writes to `source` storage and copy everything at once. For some applications, though, downtime just isn’t an option.
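The awkward part of synchronous replication is easy to see in code. Here is a hedged, proxy-style dual-write sketch, assuming the `source` and `target` clients from the earlier snippet; the function name is made up for illustration, and the error-handling question is exactly the one raised above:

```python
def dual_write_put(source, target, bucket: str, key: str, body: bytes):
    """Write to source first, then mirror to target (illustrative only)."""
    source.put_object(Bucket=bucket, Key=key, Body=body)
    try:
        target.put_object(Bucket=bucket, Key=key, Body=body)
    except Exception:
        # The object now exists on source but not on target.
        # Fail the client's request? Return success and retry the mirror
        # later? Either answer adds latency, complexity, or inconsistency.
        raise
```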
These issues don’t stand alone—they tangle together. Verifying data integrity gets tricky when ongoing writes shift the dataset mid-migration.
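The consistency side can be made just as concrete. Below is a minimal, illustrative helper (not Chorus code) that copies one object while carrying user metadata, content type, and tags across, then compares ETags as a rough integrity check; the helper name is an assumption, and versioned buckets, ACLs, and multipart uploads would all need extra handling:

```python
from urllib.parse import urlencode


def copy_object_with_metadata(source, target, bucket: str, key: str) -> None:
    """Copy one object from `source` to `target`, keeping metadata and tags."""
    obj = source.get_object(Bucket=bucket, Key=key)
    tags = source.get_object_tagging(Bucket=bucket, Key=key)["TagSet"]

    # Re-upload the body together with user metadata, content type, and tags,
    # so applications reading from target see the same attributes.
    extra = {}
    if tags:
        extra["Tagging"] = urlencode({t["Key"]: t["Value"] for t in tags})
    target.put_object(
        Bucket=bucket,
        Key=key,
        Body=obj["Body"].read(),
        Metadata=obj.get("Metadata", {}),
        ContentType=obj.get("ContentType", "binary/octet-stream"),
        **extra,
    )

    # Rough integrity check: ETags match for non-multipart uploads.
    copied = target.head_object(Bucket=bucket, Key=key)
    if obj["ETag"] != copied["ETag"]:
        raise RuntimeError(f"ETag mismatch for {bucket}/{key}")
```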
Regarding Tools and Strategies
Two high-level approaches can tackle these challenges:
- Do It Bucket by Bucket: This approach leans on a canary deployment strategy. Copy one bucket, switch the application to `target` storage, and check if everything runs as expected. If it does, move on to the next bucket; if not, flip back to `source` storage and dig into the problem. It cuts downtime too: copy a bucket at a time and switch the application only when all its data is in place.
- Do It in Two Phases: Say a bucket holds 10 million objects, and copying takes a day. During that time, ongoing writes mean about 5% of objects get added, updated, or removed. So, copy all the data once without stopping writes. Then, use a short downtime to figure out which objects changed and copy just those. Call the first pass *initial replication* and the second *event replication* (a sketch of this two-phase flow follows this list).
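As a rough illustration of the two-phase idea (a sketch under simplifying assumptions, not Chorus’s actual implementation), the helpers below do a full first pass while writes continue, then, during the short downtime window, recopy only objects whose `LastModified` falls after the first pass started. Function names are made up, and deletions, versioning, and parallelism are all left out:

```python
from datetime import datetime, timezone


def list_keys(client, bucket: str):
    """Yield (key, last_modified) pairs for every object in the bucket."""
    paginator = client.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for item in page.get("Contents", []):
            yield item["Key"], item["LastModified"]


def copy_key(source, target, bucket: str, key: str) -> None:
    """Naive single-object copy; see the metadata-preserving sketch above."""
    obj = source.get_object(Bucket=bucket, Key=key)
    target.put_object(Bucket=bucket, Key=key, Body=obj["Body"].read())


def migrate_bucket(source, target, bucket: str) -> None:
    # Phase 1: initial replication, while applications keep writing to source.
    started_at = datetime.now(timezone.utc)
    for key, _ in list_keys(source, bucket):
        copy_key(source, target, bucket, key)

    # --- short downtime starts here: writes to `bucket` must be paused ---

    # Phase 2: event replication. Recopy only objects changed since phase 1
    # began; everything else is already on target.
    for key, last_modified in list_keys(source, bucket):
        if last_modified >= started_at:
            copy_key(source, target, bucket, key)

    # --- downtime ends: point applications at target and resume writes ---
```

In the 10-million-object example above, the second pass touches only the roughly 5% of objects that changed during the first pass, which is what keeps the downtime short.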
These ideas sound simple, but execution isn’t. Questions arise:
- If applications expect one URL for all buckets, how does bucket-by-bucket work?
- How can writes to one bucket be stopped for downtime?
- How are changes (aka events) tracked for *event replication*?
- How can 10 million objects be copied fast enough?
- How does this scale to 10,000 buckets automatically?
Now, let’s see how Chorus handles these challenges and strategies.