101 posts tagged with "ceph"
official debian packages for ceph nautilus available
For some time now there have been no official packages for Debian on the ceph.io site. The reason for this is the switch to a newer C++ standard, which Debian only supports from Buster onwards. It is all the more pleasing that Bernd Zeimetz has been working on the Ceph package for Debian since 28.11.2019 and is currently maintaining it for the current Nautilus versions, starting with 14.2.4-1, for Bullseye and as Buster backports.
See the project's changelog:
Ceph Client Sessions on the Ceph Monitor (ceph-mon)
ceph daemon mon.clyso-mon1 sessions
This is useful if you are looking for the IP addresses behind the output of ceph features.
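A sketch of how the two commands can be combined (the monitor name mon.clyso-mon1 is from the post; the exact session output format varies between Ceph versions):

```shell
# Cluster-wide summary of client feature releases (no addresses):
ceph features

# Per-session detail on a monitor; each session entry includes the
# client address and its feature bits, so you can grep for a release:
ceph daemon mon.clyso-mon1 sessions | grep luminous
```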
ceph promo videos
Extension of a Ceph cluster
Before the Luminous Release
- Ceph Cluster is in status HEALTH_OK
- Add all OSDs with weight 0 to the Ceph cluster
- Gradually increase the weight of all new OSDs in steps of 0.1 up to 1.0, depending on the base load of the cluster.
- Wait until the Ceph cluster has reached the status HEALTH_OK again or all PGs have reached the status active+clean
- Repeat the weight increase for the new OSDs until you have achieved the desired weighting.
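The steps above can be sketched as follows (OSD id osd.10 and the host bucket are hypothetical; run against a healthy cluster only):

```shell
# 1. Add the new OSD with CRUSH weight 0 so no data moves yet:
ceph osd crush add osd.10 0.0 host=node1

# 2. Raise the weight in 0.1 steps, waiting for the cluster to settle
#    (all PGs active+clean / HEALTH_OK) before each next step:
for w in $(seq 0.1 0.1 1.0); do
    ceph osd crush reweight osd.10 "$w"
    while ! ceph health | grep -q HEALTH_OK; do sleep 60; done
done
```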
Since the Luminous Release
- Ceph cluster is in HEALTH_OK status
- Set the 'norebalance' flag (and normally also 'nobackfill')
- Add the new OSDs to the cluster
- Wait until the PGs start peering with each other (this can take a few minutes)
- Remove the norebalance and nobackfill flag
- Wait until the Ceph cluster has reached the HEALTH_OK status again
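A minimal sketch of this procedure (the deployment step for the new OSDs depends on your tooling and is only indicated as a comment):

```shell
# Run on a HEALTH_OK cluster. Prevent data movement while adding OSDs:
ceph osd set norebalance
ceph osd set nobackfill

# ... add the new OSDs here (e.g. with ceph-volume or your deployment tool) ...

# Once the new PGs have finished peering, allow data movement again:
ceph osd unset nobackfill
ceph osd unset norebalance

# Watch the cluster until it returns to HEALTH_OK:
ceph -s
```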
Since the Nautilus Release
With the Nautilus release PG splitting and merging was introduced and the following default values were set:
"osd_pool_default_pg_num": "8"
"osd_pool_default_pgp_num": "0"
Furthermore, the osd_pool_default_pg_num should be set to a value that makes sense for the respective Ceph cluster.
A value of 0 for osd_pool_default_pgp_num indicates that pgp_num is now monitored automatically by the Ceph cluster and adjusted as described in the Nautilus release notes:
Starting in Nautilus, this second step is no longer necessary: as long as pgp_num and pg_num currently match, pgp_num will automatically track any pg_num changes. More importantly, the adjustment of pgp_num to migrate data and (eventually) converge to pg_num is done gradually to limit the data migration load on the system based on the new target_max_misplaced_ratio config option (which defaults to .05, or 5%). That is, by default, Ceph will try to have no more than 5% of the data in a "misplaced" state and queued for migration, limiting the impact on client workloads.
Source: ceph.com/rados/new-in-nautilus-pg-merging-and-autotuning/
note
Before the Nautilus release, the number of PGs had to be adjusted manually for the respective pools. With Nautilus, the Ceph Manager module pg_autoscaler can take over.
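Enabling the autoscaler can be sketched as follows (the pool name is a placeholder):

```shell
# Enable the pg_autoscaler mgr module (Nautilus and later):
ceph mgr module enable pg_autoscaler

# Turn it on per pool, or make it the default for new pools:
ceph osd pool set <pool-name> pg_autoscale_mode on
ceph config set global osd_pool_default_pg_autoscale_mode on

# Review its current recommendations per pool:
ceph osd pool autoscale-status
```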
Ceph mgr balancer module - pg-upmap
Since version Luminous 12.2.x, pg-upmap is available in the ceph mgr balancer module.
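Activating the upmap mode can be sketched as follows; note that upmap requires all clients to understand it, which is why the minimum compatible client release must be raised first:

```shell
# upmap entries are only understood by Luminous (and newer) clients:
ceph osd set-require-min-compat-client luminous

# Switch the balancer to upmap mode and enable it:
ceph balancer mode upmap
ceph balancer on

# Check what the balancer is doing:
ceph balancer status
```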
Ceph (I/O latency) for RBD
When commissioning new Ceph clusters, our standard tests also include measuring the I/O latency for RBD.
We also always measure the performance values for the entire stack. Over the years, various tests have shown the results of the hard work that has gone into improving the Ceph OSD.
For our tests, we create a temporary work file and read random blocks with non-cached read operations from it.
We are now measuring latencies of 300 to 600 microseconds.
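The post does not name the benchmark tool; one common way to run such a test (a temporary work file, random reads, page cache bypassed) is fio, sketched here with illustrative parameters:

```shell
# Random 4k reads with O_DIRECT (non-cached) at queue depth 1,
# which makes the reported latency directly comparable:
fio --name=rbd-lat-test \
    --filename=/mnt/rbd/testfile --size=1G \
    --rw=randread --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=1 \
    --runtime=60 --time_based
```

The clat percentiles in the fio output correspond to the microsecond latencies quoted above.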
Cephalocon - Barcelona 2019
On the following page you will find the agenda of Cephalocon 2019 in Barcelona. Each subitem contains the slides of the corresponding presentation.
ceph balancer upmap
upmap is supported by the kernel client from kernel version 4.13.
ceph features - wrong display
Ceph tries to determine the Ceph client version based on the feature flags. However, the kernel Ceph client does not follow the same code stream, so the output is not always correct.
Bluestore Metadata Database sizing
RocksDB size targets are usually exponentially increasing:
300 MB, 3GB, 30GB, 300GB, ...
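Because each level is ten times the size of the previous one, a DB device only helps if it can hold complete levels. A rough cumulative calculation:

```shell
# Space needed to hold levels L1 + L2 + L3 entirely
# (300 MB + 3 GB + 30 GB, in MB):
echo $((300 + 3000 + 30000))
# 33300 MB, i.e. roughly 33 GB - which is why ~30-60 GB is a common
# BlueStore DB partition recommendation for the 30 GB tier.
```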
SST = Static Sorted Table
BlockBasedTable is RocksDB's default SST format.
github.com/facebook/rocksdb/wiki/Leveled-Compaction
github.com/facebook/rocksdb/wiki/Rocksdb-BlockBasedTable-Format