Improved Procedure for Adding Hosts or OSDs
Problem
When I add many hosts with new capacity to Ceph, way too much data needs to be backfilled and my cluster becomes unstable.
Solution
CLYSO recommends the following improved approach when adding capacity to a Ceph cluster. This procedure makes use of an external tool ("upmap-remapped.py") and the MGR balancer in order to gain more control over the data movement needed to add hosts to an existing cluster.
Before you start, first apply our recommended MGR balancer configuration. The balancer is the Ceph component that moves data around to achieve a uniform distribution across OSDs.
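Before changing anything, it can be useful to see what the balancer is currently doing. As a minimal sanity check (assuming you run the ceph CLI from a node with admin access; the exact output varies per cluster), the following shows whether the balancer is active and which mode it is using:
ceph balancer status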
Recommended Balancer Configuration
It is best to configure the balancer to leave some idle time each week, so that internal data structures ("osdmaps") can be trimmed regularly. For example, with this config the balancer will pause on Saturdays:
ceph config set mgr mgr/balancer/begin_weekday 0
ceph config set mgr mgr/balancer/end_weekday 5
Alternatively, you may choose to balance PGs only during certain hours each day, for example so that the backfilling PGs can complete each night:
ceph config set mgr mgr/balancer/begin_time 0830
ceph config set mgr mgr/balancer/end_time 1800
Next, decrease the max misplaced ratio from its default of 5% to 0.5%, to minimize the IO impact of backfilling and to ensure the tail of backfilling PGs can finish over the weekend or overnight. You may increase this percentage if you find that 0.5% is too small for your cluster.
ceph config set mgr target_max_misplaced_ratio 0.005
Lastly, configure the mgr to balance until you have +/- 1 PG per pool per OSD -- this is the best uniformity we can hope for with the mgr balancer.
ceph config set mgr mgr/balancer/upmap_max_deviation 1
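Once applied, you can read the settings back to confirm they took effect. This is just a quick verification sketch; any of the keys set above can be queried the same way:
ceph config get mgr mgr/balancer/upmap_max_deviation
ceph config get mgr target_max_misplaced_ratio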
Procedure to add hosts with upmap-remapped
With the above balancer configuration in place, you can use this procedure to add hosts gracefully using upmap-remapped.
- Set these flags to prevent data from moving immediately when we add new OSDs:
ceph osd set norebalance
ceph balancer off
- Add the new OSDs using cephadm or your preferred management tool (one possible cephadm invocation is sketched after this list). Note -- we always recommend having watch ceph -s running in a window whenever making any changes to your ceph cluster.
- Download upmap-remapped.py from here. Run it wherever you run ceph CLI commands, and inspect its output:
./upmap-remapped.py
It should output several lines like ceph osd pg-upmap-items .... If not, reach out for help.
- Now we run upmap-remapped for real, normally twice in order to get the minimal number of misplaced objects:
./upmap-remapped.py | sh -x
./upmap-remapped.py | sh -x
While the above commands are running, you should see the % misplaced objects decreasing in your ceph -s terminal. Ideally it will go to 0, meaning all PGs are active+clean and the cluster is fully healthy.
- Finally, unset the flags so data starts rebalancing again. At this point, the mgr balancer will move data in a controlled manner to your new empty OSDs:
ceph osd unset norebalance
ceph balancer on
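As referenced in the OSD-adding step above, here is one possible way to add the new OSDs with cephadm. This is a minimal sketch: the hostname, IP address, and device path (ceph-node-11, 10.0.0.21, /dev/sdb) are hypothetical placeholders, and you may prefer an OSD service specification instead of adding devices one by one:
ceph orch host add ceph-node-11 10.0.0.21
ceph orch daemon add osd ceph-node-11:/dev/sdb
The first command adds the host to the orchestrator inventory; the second creates an OSD on a specific device on that host. Because norebalance is set and the balancer is off, the new OSDs join the cluster and the CRUSH map, but backfill to them does not start until the flags are unset in the final step.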
Discussion
Placement Groups, Upmap, and the Balancer are all complex topics but offer very powerful tools to optimize Ceph operations. CLYSO has presented on this topic regularly -- feel free to reach out if you have any questions: