
Object Storage

CES and Ceph KB articles related to S3/Swift-compatible object storage, covering the RGW (RADOS Gateway).


RGW - DNS-Style Buckets with Multiple Domains

Problem

The RGW supports two URL styles for specifying the bucket name used in an operation:

  • Path-Style: https://s3.domain.com/bucketname/path/to/object
  • DNS-Style (aka Virtual Hosted-Style): https://bucketname.s3.domain.com/path/to/object

In order to support DNS-style buckets, the RGW is normally configured using:

# ceph config set rgw rgw_dns_name s3.domain.com
# ceph config set rgw rgw_dns_s3website_name s3-website.domain.com
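The client must also be configured to use virtual hosted-style addressing. As a minimal sketch, assuming the s3.domain.com endpoint from above, the relevant entries in an s3cmd configuration (~/.s3cfg) would look like:

host_base = s3.domain.com
host_bucket = %(bucket)s.s3.domain.com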

However, rgw_dns_name accepts only a single domain. If a user wants to configure multiple domains for DNS-style buckets, an alternate configuration must be used.

Solution

Set the hostnames and hostnames_s3website fields in the zonegroup, for example:

# radosgw-admin zonegroup get | tee zonegroup.txt
{
    "id": "c294f981-1dc9-4af3-9a56-3a8ae0f44c89",
    "name": "default",
    "api_name": "default",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [],
...
# vim zonegroup.txt # edit the hostnames field accordingly (see below).
# radosgw-admin zonegroup set < zonegroup.txt
# radosgw-admin zonegroup get
{
    "id": "c294f981-1dc9-4af3-9a56-3a8ae0f44c89",
    "name": "default",
    "api_name": "default",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [
        "s3.domain1.com",
        "s3.domain2.com",
        "s3.domain3.com"
    ],
...
#
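If the cluster runs a multisite configuration (i.e. a realm and period are in use), the zonegroup change additionally needs to be committed to the period; a sketch, assuming such a setup:

# radosgw-admin period update --commit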

Finally, restart all RGW daemons:

# ceph orch ps
# ceph orch daemon restart <rgw>
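To verify that DNS-style buckets resolve on one of the newly added domains, a quick client-side check with s3cmd could look like this (bucket and domain names are placeholders):

# s3cmd --host=s3.domain2.com --host-bucket='%(bucket)s.s3.domain2.com' ls s3://mybucket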

RGW - Log All S3 Operations

Problem

The customer noticed that the default RGW access log does not contain useful information such as the bucket name. How can more information be logged for all S3 operations?

Solution

Enable the RGW ops-log-to-file feature as follows:

# ceph config set global rgw_ops_log_rados false
# ceph config set global rgw_ops_log_file_path '/var/log/ceph/ops-log-$cluster-$name.log'
# ceph config set global rgw_enable_ops_log true

If you are using cephadm and want the RGW ops log to go to the container logs, use /dev/stderr or /dev/stdout as the file path:

# ceph config set global rgw_ops_log_file_path '/dev/stdout'

Then restart all radosgw daemons.

# ceph orch ps
# ceph orch daemon restart <rgw>

Following this configuration change, the radosgw will log operations to the configured ops log file, for example:

root@ceph-rgw-1:~# tail -n1 /var/log/ceph/d6e57b01-8e9a-46c6-88ae-14476be461cc/ceph-rgw-ops.json.log
{"bucket":"mybucketname","time":"2023-10-31T22:02:43.565188Z","time_local":"2023-10-31T22:02:43.565188+0000","remote_addr":"1.2.3.4","user":"myusername","operation":"delete_obj","uri":"DELETE /path/to/my/object?x-id=DeleteObject?x-id=DeleteObject HTTP/1.1","http_status":"204","error_code":"NoContent","bytes_sent":0,"bytes_received":0,"object_size":0,"total_time":3,"user_agent":"aws-sdk-js/3.331.0 os/linux/4.19.0-24-amd64 lang/js md/nodejs/18.17.1 api/s3/3.331.0","referrer":"","trans_id":"tx00000****************-**********-*******-default","authentication_type":"Local","access_key_id":"********************","temp_url":false}

RGW - Increased IOPS on RGW Meta Pool after Upgrading to Pacific

Problem

A customer noticed that after upgrading from Nautilus to Pacific, the amount of read IOPS on the .rgw pool (aka the RGW Meta Pool) increased by a large factor. This was leading to a performance problem, with disks at nearly 100% IO utilization.

Debugging with debug_rgw=10 revealed that the RGW LRU cache was thrashing:

2023-10-06T14:21:38.749-0700 7f88225f0700 10 req 13873390272645627794 115.349494934s :get_bucket_info cache put: name=.rgw++scorpio-9B31 info.flags=0x6
2023-10-06T14:21:38.749-0700 7f88225f0700 10 removing entry: name=.rgw++convention823 from cache LRU

Solution

Pacific makes heavier use of the internal RGW bucket instance cache, and in a customer environment with many tens of thousands of buckets the default cache size of 10000 entries is too small. The RGW cache size can be increased as follows:

# ceph config set global rgw_cache_lru_size 100000
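The value stored in the configuration database can be confirmed afterwards, and since the cache is sized when the daemon starts, the RGW daemons likely need a restart to pick it up; a sketch:

# ceph config dump | grep rgw_cache_lru_size
# ceph orch ps
# ceph orch daemon restart <rgw>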

RGW - S3 API: Configuring a Bucket Lifecycle Policy to Delete Incomplete Multipart Uploads

Problem

During normal operation, multipart uploads are repeatedly left incomplete, and without appropriate countermeasures these artifacts accumulate over time.

Users usually notice that the total number of objects reported for a bucket is higher than the number of visible S3 objects. This difference is explained by multipart uploads in the bucket that were never completed or properly aborted.
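One way to make the difference visible is to compare the object count as RGW accounts it (which includes multipart artifacts) with the listing seen through the S3 API; a sketch, using the bucket name from the examples further below:

root@ceph-clyso # radosgw-admin bucket stats --bucket=quota-clyso
root@ceph-clyso # s3cmd ls s3://quota-clyso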

Solution

Multipart uploads are used when a client uploads objects larger than its multipart threshold (commonly 8 MB by default): the object is split into several parts which are uploaded individually. If such an upload is interrupted, the storage does not know whether the upload will be continued or how long it should hold the data. The result is incomplete multipart upload artifacts that have to be cleaned up by the owner.

The owner of the bucket is responsible for creating a lifecycle policy or aborting the multipart uploads manually.

define an S3 lifecycle policy to delete (abort) incomplete multipart uploads after 3 days

abort-lc-mp-3days.xml

<LifecycleConfiguration>
    <Rule>
        <ID>abort-multipartupload-3days</ID>
        <Prefix></Prefix>
        <Status>Enabled</Status>
        <AbortIncompleteMultipartUpload>
            <DaysAfterInitiation>3</DaysAfterInitiation>
        </AbortIncompleteMultipartUpload>
    </Rule>
</LifecycleConfiguration>
check if lc already exists
root@ceph-clyso # s3cmd getlifecycle s3://quota-clyso
ERROR: S3 error: 404 (NoSuchLifecycleConfiguration)
root@ceph-clyso #

So no lc configuration exists for this bucket.

upload the lc policy file
root@ceph-clyso # s3cmd setlifecycle abort-lc-mp-3days.xml s3://quota-clyso
s3://quota-clyso/: Lifecycle Policy updated
root@ceph-clyso #
root@ceph-clyso # s3cmd getlifecycle s3://quota-clyso
<?xml version="1.0" ?>
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Rule>
        <ID>abort-multipartupload-3days</ID>
        <Prefix/>
        <Status>Enabled</Status>
        <AbortIncompleteMultipartUpload>
            <DaysAfterInitiation>3</DaysAfterInitiation>
        </AbortIncompleteMultipartUpload>
    </Rule>
</LifecycleConfiguration>
root@ceph-clyso #
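The same rule can also be applied with other S3 clients. As a sketch, an equivalent call with the AWS CLI in its JSON form (the endpoint URL is a placeholder) would be:

root@ceph-clyso # aws --endpoint-url https://s3.domain.com s3api put-bucket-lifecycle-configuration --bucket quota-clyso --lifecycle-configuration '{"Rules":[{"ID":"abort-multipartupload-3days","Filter":{"Prefix":""},"Status":"Enabled","AbortIncompleteMultipartUpload":{"DaysAfterInitiation":3}}]}'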

manual commands to delete (abort) incomplete multipart uploads

list incomplete multipart uploads for a bucket
root@ceph-clyso# s3cmd multipart s3://quota-clyso/10g-multipart.bin
s3://quota-clyso/10g-multipart.bin
Initiated Path Id
2024-06-20T22:16:43.567Z s3://quota-clyso/10-9 2~W2b888tDQMnMy0g8n6XPQeiQSC1pQ9w
root@ceph-clyso#
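The same command can also be run against the bucket as a whole to list every incomplete upload in it; a sketch:

root@ceph-clyso# s3cmd multipart s3://quota-clyso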
abort incomplete multipart uploads for a bucket
root@ceph-clyso# s3cmd abortmp s3://quota-clyso/10-9 2~W2b888tDQMnMy0g8n6XPQeiQSC1pQ9w
s3://quota-clyso/10-9
root@ceph-clyso#
verify incomplete multipart uploads for a bucket
root@ceph-clyso# s3cmd multipart s3://quota-clyso/10g-multipart.bin
s3://quota-clyso/10g-multipart.bin
Initiated Path Id
root@ceph-clyso#
tip

For Ceph/radosgw administrators, the following commands are useful for checking the status and processing of lifecycle policies inside radosgw:

  • radosgw-admin lc list
  • radosgw-admin lc process