
ceph blocked requests in the cluster caused by deep-scrubbing operations

· 3 min read
Joachim Kraftmayer
Managing Director at Clyso

With the default options you will see blocked requests in the cluster caused by deep-scrubbing operations.

recommended deep-scrub options to minimize the impact of scrub/deep-scrub in the ceph cluster

the following options define the scrub and deep-scrub behaviour based on CPU load, OSD scheduler priority, defined check intervals and the ceph cluster health state

```ini
[osd]
# reduce scrub impact
osd max scrubs = 1
osd scrub during recovery = false
osd scrub max interval = 4838400 # 56 days
osd scrub min interval = 2419200 # 28 days
osd deep scrub interval = 2419200
osd scrub interval randomize ratio = 1.0
# osd deep scrub randomize ratio = 1.0
osd scrub priority = 1
osd scrub chunk max = 1
osd scrub chunk min = 1
osd deep scrub stride = 1048576 # 1 MB
osd scrub load threshold = 5.0
osd scrub sleep = 0.3
```
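The [osd] section goes into ceph.conf and takes effect after an OSD restart. To apply the same values to running OSDs without a restart, one possible approach (a sketch, assuming admin access to a running cluster; the values mirror the section above) is:

```shell
# Inject the scrub settings into all running OSDs. This takes effect
# immediately but does not persist across restarts -- keep ceph.conf in sync.
ceph tell osd.* injectargs '--osd_max_scrubs 1 --osd_scrub_during_recovery false --osd_scrub_priority 1 --osd_scrub_chunk_min 1 --osd_scrub_chunk_max 1 --osd_scrub_load_threshold 5.0 --osd_scrub_sleep 0.3'

# Verify on a single OSD that the value arrived
ceph daemon osd.0 config get osd_scrub_sleep
```

Some of these options may be reported as "not observed" and only apply to newly scheduled scrubs, not ones already in flight.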
osd max scrubs

Description: The maximum number of simultaneous scrub operations for a Ceph OSD Daemon.

Type: 32-bit Int

Default: 1
osd scrub during recovery

Description: Allow scrub during recovery. Setting this to false will disable scheduling new scrubs (and deep-scrubs) while there is active recovery. Scrubs that are already running will continue. This can be useful to reduce load on busy clusters.

Type: Boolean

Default: true
osd scrub min interval

Description: The minimal interval in seconds for scrubbing the Ceph OSD Daemon when the Ceph Storage Cluster load is low.

Type: Float

Default: Once per day. 60*60*24

osd scrub interval randomize ratio

Description: Add a random delay to osd scrub min interval when scheduling the next scrub job for a placement group. The delay is a random value less than osd scrub min interval * osd scrub interval randomize ratio. So the default setting practically spreads the scrubs randomly over the allowed time window of [1, 1.5] * osd scrub min interval.

Type: Float

Default: 0.5
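The arithmetic behind this option can be checked directly. A small sketch using the values from the config section above (randomize ratio 1.0 rather than the 0.5 default, kept as integer math for the shell):

```shell
# With osd scrub min interval = 2419200 s (28 days) and a randomize ratio of
# 1.0, the random delay is up to 1.0 * 2419200 s, so a PG's next scrub lands
# somewhere in the [28, 56] day window.
min_interval=2419200
ratio=1                                   # randomize ratio 1.0 as an integer
max_delay=$((min_interval * ratio))
echo "earliest: ${min_interval} s"        # 28 days
echo "latest:   $((min_interval + max_delay)) s"  # 56 days
```

The latest possible scrub time, 4838400 s, matches the osd scrub max interval of 56 days chosen above, so no PG ever exceeds the hard limit.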
osd scrub priority

Description: The priority set for scrub operations. It is relative to osd client op priority.

Type: 32-bit Integer

Default: 5

Valid Range: 1-63
osd scrub chunk max

Description: The maximum number of object store chunks to scrub during a single operation.

Type: 32-bit Integer

Default: 25
osd scrub chunk min

Description: The minimal number of object store chunks to scrub during a single operation. Ceph blocks writes to a single chunk during scrub.


Default: 5
osd deep scrub stride

Description: Read size when doing a deep scrub.

Type: 32-bit Integer

Default: 512 KB. 524288
osd scrub load threshold

Description: The normalized maximum load. Ceph will not scrub when the system load (as defined by getloadavg() / number of online CPUs) is higher than this number.

Type: Float

Default: 0.5
osd scrub sleep

Description: Time to sleep before scrubbing the next group of chunks. Increasing this value will slow down the whole scrub operation, while client operations will be less impacted.

Type: Float

Default: 0

SOURCES

http://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/

https://indico.cern.ch/event/588794/contributions/2374222/attachments/1383112/2103509/Configuring_Ceph.pdf

https://github.com/ceph/ceph/blob/master/src/common/options.cc#L3130

ceph - delete radosgw S3 and SWIFT buckets with content

· One min read
Joachim Kraftmayer
Managing Director at Clyso

You can delete buckets and their contents with S3 tools or with Ceph's own on-board tools.

via S3 API

With the popular command line tool s3cmd, you can delete buckets with content via S3 API call as follows:

s3cmd rb --recursive s3://clyso_bucket

via radosgw-admin command

radosgw-admin talks directly to the Ceph cluster, does not require a running radosgw process, and is also the faster way to delete buckets with content from the Ceph cluster.

radosgw-admin bucket rm --bucket=clyso_bucket --purge-objects

If you want to delete an entire user and his or her data from the system, you can do so with the following command:

radosgw-admin user rm --uid=<username> --purge-data
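Since --purge-data is irreversible, it can be worth checking first what would be removed. A possible pre-check, assuming the same radosgw-admin access (<username> is a placeholder as above):

```shell
# List the buckets owned by the user before deleting anything
radosgw-admin bucket list --uid=<username>

# Show object count and total size for a single bucket
radosgw-admin bucket stats --bucket=clyso_bucket
```

Only once the output matches what you expect should the rm commands above be run.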

Use this command wisely!

linux - boot into single-user mode

· One min read
Joachim Kraftmayer
Managing Director at Clyso

First you need access to the boot screen.

Then reboot your server. When the boot loader screen appears, select the recovery mode entry.

Type e to edit the entry and add the following option to the kernel boot options:

init=/bin/bash

Press Enter to exit edit mode and boot into single-user mode.

This will boot the kernel with /bin/bash instead of the standard init.
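With init=/bin/bash the root filesystem is typically still mounted read-only, so a common next step (a sketch, not part of the original article) before making any repair is:

```shell
# Remount the root filesystem read-write so changes (e.g. passwd root) persist
mount -o remount,rw /

# ... perform your repair here ...

# Remount read-only again so all writes are flushed before the reset
mount -o remount,ro /
```

Because /bin/bash is running as PID 1, a clean shutdown is not available; after remounting read-only, a hard reset (or `exec /sbin/init` where available) gets the system back to a normal boot.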