Limit test: how Ceph behaves with billions of RADOS objects:
blocksandfiles.com/2020/09/22/ceph-scales-to-10-billion-objects/
When commissioning new Ceph clusters, our standard tests also include measuring the I/O latency for RBD, and we always measure the performance of the entire stack. Over the years, the results of our work on improving Ceph OSD performance have shown up clearly in these tests.
For our tests, we create a temporary work file and read random blocks from it using non-cached read operations.
We are now measuring latencies of 300 to 600 microseconds.
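A minimal sketch of such a latency measurement, assuming fio is installed and the RBD image is mounted with a filesystem under /mnt/rbd (the path, file size, and runtime are placeholders, not our exact test setup):

fio --name=rbd-latency --directory=/mnt/rbd --size=4G \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=1 --runtime=60 --time_based

With --direct=1 the page cache is bypassed, so the reported completion latencies reflect the full RBD and OSD stack rather than cached reads.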
When commissioning a cluster, it is always advisable to log and evaluate the ceph osd bench results.
The values can also be helpful for performance analysis in a production Ceph cluster.
ceph tell osd.<id|*> bench {<count>} {<size>} {<object_size>}
OSD benchmark: write <count> bytes in <size>-byte objects (default: count 1G, size 4MB)
The object size is capped by osd_bench_max_block_size (default 65536 kB, i.e. 64 MB).
Examples:

1G total, 4MB objects (default):
ceph tell osd.* bench

1G total, 64MB objects:
ceph tell osd.* bench 1073741824 67108864
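A sketch of how the bench results could be logged and compared per OSD, assuming jq is available; the bytes_per_sec field is present in the JSON output of recent Ceph releases, older releases may only report bytes_written and elapsed_sec:

# Run the default bench on every OSD and append one line per OSD to a log file.
for osd in $(ceph osd ls); do
    echo "$(date -Is) osd.$osd $(ceph tell osd.$osd bench -f json | jq -r '.bytes_per_sec') bytes/s" >> osd-bench.log
done

Comparing the per-OSD values in such a log makes unusually slow disks or misbehaving OSDs stand out quickly.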