
103 posts tagged with "ceph"


Post Mortem: Tentacle v20.2.0 OSD crashing due to EC Bug

· 6 min read
Joshua Blanch
Software Engineer at Clyso
Zac Dover
Technical Writer at Clyso

On January 11, 2026, at 2:48 PST, an emergency support request was opened for OSD crashes in v20.2.0 that rendered CephFS inaccessible.

The incident was resolved, restoring cluster availability.

A secondary post-recovery issue related to scrubbing errors was subsequently identified and fixed.

The fix involved deploying a new build of Ceph containing patches for both bugs. The engineering team used Clyso's new build system to deliver the fix to the client as quickly as possible.

Critical Known Bugs in Ceph Quincy, Reef and Squid Versions

· 4 min read
Joshua Blanch
Software Engineer at Clyso

Below is a list of known bugs in Ceph versions that we want to highlight. As of this writing, the latest versions for each release are the following:

  • Reef: 18.2.7
  • Squid: 19.2.3

This list is not exhaustive; there are more known bugs in Ceph, but we've highlighted a few that we wanted to share.
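
To check which releases your cluster is actually running before comparing against this list, the standard Ceph CLI can report it. A minimal sketch (standard commands, nothing specific to these bugs):

    # Report the Ceph version of every running daemon in the cluster,
    # grouped by daemon type (mon, mgr, osd, mds, rgw).
    ceph versions

    # Report the version of the locally installed Ceph binaries.
    ceph --version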

Cross-version Issues

These critical bugs affect multiple major versions of Ceph.

RadosGW --bypass-gc Data Loss Bug

Severity: Critical
Affected Versions: Quincy (17.2.x), Reef (18.2.x), Squid (19.2.x)
Bug Tracker: https://tracker.ceph.com/issues/73348
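
As a hedged illustration of the risk surface (the bucket name is hypothetical, and the exact trigger conditions are described in the tracker issue above): --bypass-gc tells radosgw-admin to delete bucket objects immediately instead of handing them to the RGW garbage collector, so on affected versions the safer course is to avoid the flag and let GC handle reclamation.

    # RISKY on affected versions: --bypass-gc skips the RGW garbage
    # collector and deletes objects immediately.
    radosgw-admin bucket rm --bucket=example-bucket --purge-objects --bypass-gc

    # Safer alternative: omit --bypass-gc so deletions go through the
    # normal GC path, then (optionally) run GC manually.
    radosgw-admin bucket rm --bucket=example-bucket --purge-objects
    radosgw-admin gc process --include-all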

Adding Capacity to Ceph -- the CLYSO Way!

· 2 min read
Dan van der Ster
CTO at Clyso

One of my favourite things to assist users with is simplifying their workflows for making major changes to their Ceph clusters, such as adding or removing multiple hosts at once. Ceph is inherently excellent at handling these tasks – one of its greatest strengths is the ability to transparently add or remove capacity, replace servers, and perform maintenance, all without downtime.
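
For a sense of what such a change looks like in practice, here is a minimal sketch of one common pattern for adding a host without immediate client impact (the hostname is hypothetical, and the post's actual workflow may differ):

    # 1. Pause rebalancing so the new OSDs don't trigger immediate data movement.
    ceph osd set norebalance

    # 2. Add the new host via the orchestrator; its OSDs join the CRUSH map.
    ceph orch host add ceph-node-11

    # 3. Re-enable rebalancing and let backfill proceed.
    ceph osd unset norebalance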