
7 posts tagged with "radosgw-admin"


· One min read
Joachim Kraftmayer

radosgw-admin key create --uid=clyso-user-id --key-type=s3 --gen-access-key --gen-secret

...

"keys": [
{
"user": "clyso-user-id",
"access_key": "VO8C17LBI9Y39FSODOU5",
"secret_key": "zExCLO1bLQJXoY451ZiKpeoePLSQ1khOJG4CcT3N"
}
],

...
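
To confirm that the generated key is attached to the user, you can display the user afterwards (a minimal check using the example uid from above):

radosgw-admin user info --uid=clyso-user-id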

access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/object_gateway_guide_for_red_hat_enterprise_linux/administration_cli#create_a_key

· One min read
Joachim Kraftmayer

List of users:

radosgw-admin metadata list user

List of buckets:

radosgw-admin metadata list bucket

List of bucket instances:

radosgw-admin metadata list bucket.instance

All necessary information:

  • user-id = output from the list of users
  • bucket-id = output from the list of bucket instances
  • bucket-name = output from the list of buckets or bucket instances

Change the owner of the bucket instance:

radosgw-admin bucket link --bucket <bucket-name> --bucket-id <default-uuid>.267207.1 --uid=<user-uid>

Example:

radosgw-admin bucket link --bucket test-clyso-test --bucket-id aa81cf7e-38c5-4200-b26b-86e900207813.267207.1 --uid=c19f62adbc7149ad9d19-8acda2dcf3c0

If you compare the buckets before and after the change, the following values are changed:

  • ver: is incremented
  • mtime: is updated
  • owner: is set to the new uid
  • user.rgw.acl: the ACL stored in the user.rgw.acl key is reset for the new owner
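
To compare the bucket before and after the relink, you can dump its metadata and stats and check the fields listed above (a sketch using the example bucket name from above):

radosgw-admin metadata get bucket:test-clyso-test
radosgw-admin bucket stats --bucket=test-clyso-test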

· 3 min read
Joachim Kraftmayer

We encountered the first large omap objects in one of our Luminous Ceph clusters in Q3 2018 and worked with a couple of Ceph Core developers on the solution for internal management of RadosGW objects. This included topics such as large omap objects, dynamic resharding, multisite, deleting old object instances in the RadosGW index pool, and many small changes that were included in the Luminous, Mimic, and subsequent versions.

Here is a step-by-step guide on how to identify large omap objects and the buckets they belong to, and then manually reshard the affected buckets.

output ceph status

ceph -s

  cluster:
    id:     52296cfd-d6c6-3129-bf70-db16f0e4423d
    health: HEALTH_WARN
            1 large omap object

output ceph health detail

ceph health detail
HEALTH_WARN 1 large omap objects
1 large objects found in pool 'clyso-test-sin-1.rgw.buckets.index'
Search the cluster log for 'Large omap object found' for more details.

Search the ceph.log of the Ceph cluster:

2018-09-26 12:10:38.440682 mon.clyso1-mon1 mon.0 192.168.130.20:6789/0 77104 : cluster [WRN] Health check failed: 1 large omap objects (LARGE_OMAP_OBJECTS)
2018-09-26 12:10:35.037753 osd.1262 osd.1262 192.168.130.31:6836/10060 152 : cluster [WRN] Large omap object found. Object: 28:18428495:::.dir.143112fc-1178-40e1-b209-b859cd2c264c.38511450.376:head Key count: 2928429 Size (bytes): 861141085
2018-09-26 13:00:00.000103 mon.clyso1-mon1 mon.0 192.168.130.20:6789/0 77505 : cluster [WRN] overall HEALTH_WARN 1 large omap objects
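
The log entries above can be found, for example, by grepping the cluster log on a monitor node (the path assumes a default package installation and may differ in your environment):

grep "Large omap object found" /var/log/ceph/ceph.log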

From the ceph.log we extract the bucket instance ID, in this case:

143112fc-1178-40e1-b209-b859cd2c264c.38511450.376

and look for it in the RadosGW metadata:

root@salt-master1.clyso.test:~ # radosgw-admin metadata list "bucket.instance" | egrep "143112fc-1178-40e1-b209-b859cd2c264c.38511450.376"
"b1868d6d-9d61-49b0-b101-c89207009b16:143112fc-1178-40e1-b209-b859cd2c264c.38511450.376"
root@salt-master1.clyso.test:~ #

The instance exists, so we check its metadata.

root@salt-master1.clyso.test:~ # radosgw-admin metadata get bucket.instance:b1868d6d-9d61-49b0-b101-c89207009b16:143112fc-1178-40e1-b209-b859cd2c264c.38511450.376
{
    "key": "bucket.instance:b1868d6d-9d61-49b0-b101-c89207009b16:143112fc-1178-40e1-b209-b859cd2c264c.38511450.376",
    "ver": {
        "tag": "_Ehz5PYLhHBxpsJ_s39lePnX",
        "ver": 7
    },
    "mtime": "2018-04-24 10:02:32.362129Z",
    "data": {
        "bucket_info": {
            "bucket": {
                "name": "b1868d6d-9d61-49b0-b101-c89207009b16",
                "marker": "143112fc-1178-40e1-b209-b859cd2c264c.38511450.376",
                "bucket_id": "143112fc-1178-40e1-b209-b859cd2c264c.38511450.376",
                "tenant": "",
                "explicit_placement": {
                    "data_pool": "",
                    "data_extra_pool": "",
                    "index_pool": ""
                }
            },
            "creation_time": "2018-02-20 20:58:51.125791Z",
            "owner": "d7a84e1aed9144919f8893b7d6fc5b02",
            "flags": 0,
            "zonegroup": "1c44aba5-fe64-4db3-9ef7-f0eb30bf5d80",
            "placement_rule": "default-placement",
            "has_instance_obj": "true",
            "quota": {
                "enabled": true,
                "check_on_raw": true,
                "max_size": 54975581388800,
                "max_size_kb": 53687091200,
                "max_objects": -1
            },
            "num_shards": 0,
            "bi_shard_hash_type": 0,
            "requester_pays": "false",
            "has_website": "false",
            "swift_versioning": "false",
            "swift_ver_location": "",
            "index_type": 0,
            "mdsearch_config": [],
            "reshard_status": 0,
            "new_bucket_instance_id": ""
        },
        "attrs": [
            {
                "key": "user.rgw.acl",
                "val": "AgK4A.....AAAAAAA="
            },
            {
                "key": "user.rgw.idtag",
                "val": ""
            },
            {
                "key": "user.rgw.x-amz-read",
                "val": "aW52YWxpZAA="
            },
            {
                "key": "user.rgw.x-amz-write",
                "val": "aW52YWxpZAA="
            }
        ]
    }
}
root@salt-master1.clyso.test:~ #
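
The relevant field in this output is num_shards: 0, i.e. the bucket index is still unsharded. If jq is installed, the field can be extracted directly (a convenience sketch, not part of the original walkthrough):

radosgw-admin metadata get bucket.instance:b1868d6d-9d61-49b0-b101-c89207009b16:143112fc-1178-40e1-b209-b859cd2c264c.38511450.376 | jq '.data.bucket_info.num_shards'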

get the metadata info for the bucket

root@salt-master1.clyso.test:~ # radosgw-admin metadata get bucket:b1868d6d-9d61-49b0-b101-c89207009b16
{
    "key": "bucket:b1868d6d-9d61-49b0-b101-c89207009b16",
    "ver": {
        "tag": "_WaSWh9mb21kEjHCisSzhWs8",
        "ver": 1
    },
    "mtime": "2018-02-20 20:58:51.152766Z",
    "data": {
        "bucket": {
            "name": "b1868d6d-9d61-49b0-b101-c89207009b16",
            "marker": "143112fc-1178-40e1-b209-b859cd2c264c.38511450.376",
            "bucket_id": "143112fc-1178-40e1-b209-b859cd2c264c.38511450.376",
            "tenant": "",
            "explicit_placement": {
                "data_pool": "",
                "data_extra_pool": "",
                "index_pool": ""
            }
        },
        "owner": "d7a84e1aed9144919f8893b7d6fc5b02",
        "creation_time": "2018-02-20 20:58:51.125791Z",
        "linked": "true",
        "has_bucket_info": "false"
    }
}
root@salt-master1.clyso.test:~ #

grep for the bucket_id in the radosgw index pool

root@salt-master1.clyso.test:~ # rados -p clyso-test-sin-1.rgw.buckets.index ls | egrep "143112fc-1178-40e1-b209-b859cd2c264c.38511450.376" | wc -l
1
root@salt-master1.clyso.test:~ #

The bucket index RADOS object that has to be resharded:

143112fc-1178-40e1-b209-b859cd2c264c.38511450.376
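
With the bucket name and index object identified, the manual reshard itself could look like the following sketch. The shard count of 64 is an assumption and should be chosen so that each shard ends up well below roughly 100,000 keys; radosgw-admin creates a new, sharded bucket instance and rewrites the index:

# optional sanity check: count the omap keys of the affected index object
rados -p clyso-test-sin-1.rgw.buckets.index listomapkeys .dir.143112fc-1178-40e1-b209-b859cd2c264c.38511450.376 | wc -l

# reshard the bucket into e.g. 64 index shards
radosgw-admin bucket reshard --bucket=b1868d6d-9d61-49b0-b101-c89207009b16 --num-shards=64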

· One min read
Joachim Kraftmayer

incomplete state

The Ceph cluster has detected that a placement group (PG) is missing important information: either information about writes that may have occurred, or there are no error-free copies of the PG's data.

The recommendation is to bring all OSDs that are in the down or out state back into the Ceph cluster, as these could contain the required information. In the case of an Erasure Coding (EC) pool, temporarily reducing min_size can enable recovery; however, min_size cannot be smaller than the number of data chunks (k) defined for this pool.
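
A minimal sketch of that procedure, assuming an EC pool named clyso-ec-pool with k=4 data chunks and a current min_size of 5 (pool name and values are placeholders; restore the original min_size once the PGs have recovered):

# check the current min_size of the pool
ceph osd pool get clyso-ec-pool min_size

# temporarily lower it to the number of data chunks (k) so recovery can proceed
ceph osd pool set clyso-ec-pool min_size 4

# after the PGs are active+clean again, restore the previous value
ceph osd pool set clyso-ec-pool min_size 5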

Sources

https://docs.ceph.com/docs/master/rados/operations/pg-states/
https://docs.ceph.com/docs/master/rados/operations/erasure-code/

· 2 min read
Joachim Kraftmayer

find user in user list

root@master.qa.cloud.clyso.com:~ # radosgw-admin user list
[
    ...
    "57574cda626b45fba1cd96e68a57ced2",
    ...
    "admin",
    ...
]

get info for a specific user

radosgw-admin user info --uid=57574cda626b45fba1cd96e68a57ced2
{
    "user_id": "57574cda626b45fba1cd96e68a57ced2",
    "display_name": "qa-clyso-backup",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "keystone"
}

set the quota for one specific user

root@master.qa.cloud.clyso.com:~ # radosgw-admin quota set --quota-scope=user --uid=57574cda626b45fba1cd96e68a57ced2 --max-size=32985348833280
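
The value passed to --max-size is in bytes: 32985348833280 bytes is exactly 30 TiB (the max_size_kb of 32212254720 shown below is the same value divided by 1024). A quick shell check:

echo $((30 * 1024 ** 4))   # prints 32985348833280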

verify the set quota max_size and max_size_kb

root@master.qa.cloud.clyso.com:~ # radosgw-admin user info --uid=57574cda626b45fba1cd96e68a57ced2
{
    "user_id": "57574cda626b45fba1cd96e68a57ced2",
    "display_name": "qa-clyso-backup",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": 32985348833280,
        "max_size_kb": 32212254720,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "keystone"
}

enable quota for one specific user

root@master.qa.cloud.clyso.com:~ # radosgw-admin quota enable --quota-scope=user --uid=57574cda626b45fba1cd96e68a57ced2
root@master.qa.cloud.clyso.com:~ # radosgw-admin user info --uid=57574cda626b45fba1cd96e68a57ced2
{
    "user_id": "57574cda626b45fba1cd96e68a57ced2",
    "display_name": "qa-clyso-backup",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": true,
        "check_on_raw": false,
        "max_size": 32985348833280,
        "max_size_kb": 32212254720,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "keystone"
}

synchronize stats for one specific user

root@master.qa.cloud.clyso.com:~ # radosgw-admin user stats --uid=57574cda626b45fba1cd96e68a57ced2 --sync-stats
{
    "stats": {
        "total_entries": 10404,
        "total_bytes": 54915680,
        "total_bytes_rounded": 94674944
    },
    "last_stats_sync": "2017-08-21 07:09:58.909073Z",
    "last_stats_update": "2017-08-21 07:09:58.906372Z"
}
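
The same mechanism also works per bucket: with --quota-scope=bucket the limits end up in the bucket_quota section of the user info shown above (a sketch; the max-objects value is only an example):

radosgw-admin quota set --quota-scope=bucket --uid=57574cda626b45fba1cd96e68a57ced2 --max-objects=1000000
radosgw-admin quota enable --quota-scope=bucket --uid=57574cda626b45fba1cd96e68a57ced2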

Sources

https://docs.ceph.com/en/latest/radosgw/admin/

· One min read
Joachim Kraftmayer

You can delete buckets and their contents either with S3 tools or with Ceph's own built-in tools.

via S3 API

With the popular command-line tool s3cmd, you can delete a bucket together with its contents via the S3 API as follows:

s3cmd rb --recursive s3://clyso_bucket

via radosgw-admin command

radosgw-admin talks directly to the Ceph cluster and does not require a running radosgw process. It is also the faster way to delete a bucket and its contents from the Ceph cluster.

radosgw-admin bucket rm --bucket=clyso_bucket --purge-objects
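
Before purging, it can be useful to check how many objects the bucket actually contains (using the example bucket name from above):

radosgw-admin bucket stats --bucket=clyso_bucket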

If you want to delete an entire user and his or her data from the system, you can do so with the following command:

radosgw-admin user rm --uid=<username> --purge-data

Use this command wisely!
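
To see beforehand which buckets, and therefore which data, would be affected, you can list the user's buckets first (a sketch; the uid is a placeholder):

radosgw-admin bucket list --uid=<username>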