
24 posts tagged with "osd"


· One min read
Joachim Kraftmayer

When commissioning a cluster, it is always advisable to log and evaluate the ceph osd bench results.

The values can also be helpful for performance analysis in a production Ceph cluster.

ceph tell osd.<int|*> bench {<int>} {<int>} {<int>}

OSD benchmark: write <count> <size>-byte objects (default: count = 1G, size = 4MB)

osd_bench_max_block_size = 65536 kB (64 MB)

Example:

1G size 4MB (default)

ceph tell osd.* bench

1G size 64MB

ceph tell osd.* bench 1073741824 67108864
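The byte arguments in the 64 MB example are just powers of two; a quick sketch of where they come from:

```python
# Derive the byte values passed to "ceph tell osd.* bench".
GiB = 1024 ** 3   # gibibyte
MiB = 1024 ** 2   # mebibyte

total_bytes = 1 * GiB   # total amount to write (the default)
block_size = 64 * MiB   # block size for the second example

print(total_bytes)  # 1073741824
print(block_size)   # 67108864
```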

· One min read
Joachim Kraftmayer
osd max object size

Description: The maximum size of a RADOS object in bytes.
Type: 32-bit Unsigned Integer
Default: 128MB

Before the Ceph Luminous release, the default value was 100 GB. It has since been reduced to 128 MB, which prevents unpleasant performance problems right from the start.

github.com/ceph/ceph/pull/15520

docs.ceph.com/docs/master/releases/luminous/

docs.ceph.com/docs/master/rados/configuration/osd-config-ref/
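For reference, the 128 MB default expressed in bytes (the value a 32-bit unsigned integer option actually holds):

```python
# osd_max_object_size: 128 MB default, stored as a 32-bit unsigned integer.
default_bytes = 128 * 1024 * 1024
print(default_bytes)            # 134217728
print(default_bytes < 2 ** 32)  # True - fits in 32 bits
```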

· 2 min read
Joachim Kraftmayer

size & size_kb: sum of all object sizes in the bucket/container = output of swift stat <bucket/container> | grep Bytes

size_actual & size_kb_actual: account for compression and encryption, showing the nearest 4k alignment = output of swift stat <bucket/container> | grep X-Container-Bytes-Used-Actual

num_objects: number of objects = output of swift stat <bucket/container> | grep Objects

size_utilized & size_kb_utilized: the total size of the compressed data in bytes and kilobytes => we don't use compression, so size = size_utilized

The size does not include the overhead of the underlying 3× replication or erasure coding.
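To estimate raw cluster usage, multiply by the replication factor. A rough sketch: the 3× factor is the pool's assumed replica count (not reported by bucket stats), and BlueStore/metadata overhead is ignored:

```python
# Rough estimate of raw cluster usage for the example bucket below.
size_actual = 16384   # bytes, from the bucket stats (4k-aligned)
replication = 3       # assumed pool size; not part of the bucket stats

raw_usage = size_actual * replication
print(raw_usage)  # 49152 bytes actually stored on OSDs
```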

ceph-rgw4:~/clyso # radosgw-admin bucket stats --bucket=size-container
{
    "bucket": "size-container",
    "zonegroup": "226fe09d-0ebf-4f30-a93b-d136f24a04d3",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "d667b6f1-5737-4f5e-bad0-fc030f0a4e94.11750341.561",
    "marker": "d667b6f1-5737-4f5e-bad0-fc030f0a4e94.11750341.561",
    "index_type": "Normal",
    "owner": "0fdfa377cd56439ab3e3e65c69787e92",
    "ver": "0#7",
    "master_ver": "0#0",
    "mtime": "2018-09-03 12:37:37.744221",
    "max_marker": "0#",
    "usage": {
        "rgw.main": {
            "size": 4149,
            "size_actual": 16384,
            "size_utilized": 4149,
            "size_kb": 5,
            "size_kb_actual": 16,
            "size_kb_utilized": 5,
            "num_objects": 3
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": true,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}
ceph-rgw4:~/clyso # swift stat size-container
Account: v1
Container: size-container
Objects: 3
Bytes: 4149
Read ACL:
Write ACL:
Sync To:
Sync Key:
Accept-Ranges: bytes
X-Storage-Policy: default-placement
X-Container-Bytes-Used-Actual: 16384
X-Timestamp: 1535967792.05717
X-Trans-Id: tx00000000000000002378a-005b8e218c-b2faf1-eu-de-997-1
Content-Type: text/plain; charset=utf-8
X-Openstack-Request-Id: tx00000000000000002378a-005b8e218c-b2faf1-eu-de-997-1

We first uploaded a 26-byte object, then another 26-byte object, and then a 4097-byte object (one byte past a 4k boundary).

The output of the sizes was as follows:

1 object

"size": 26,
"size_actual": 4096,
"size_utilized": 26,
"size_kb": 1,
"size_kb_actual": 4,
"size_kb_utilized": 1,
"num_objects": 1

2 objects

"size": 52,
"size_actual": 8192,
"size_utilized": 52,
"size_kb": 1,
"size_kb_actual": 8,
"size_kb_utilized": 1,
"num_objects": 2

3 objects

"size": 4149,
"size_actual": 16384,
"size_utilized": 4149,
"size_kb": 5,
"size_kb_actual": 16,
"size_kb_utilized": 5,
"num_objects": 3