
ceph radosgw tuning

· One min read
Joachim Kraftmayer

The aim is to scale the RGW instances of the production system so that 10,000 active connections are possible.

As a result of various test runs, the following configuration emerged for our setup:

[client.rgw.<id>]
keyring = /etc/ceph/ceph.client.rgw.keyring
rgw content length compat = true
rgw dns name = <rgw.hostname.clyso.com>
rgw enable ops log = false
rgw enable usage log = false
rgw frontends = civetweb port=80 error_log_file=/var/log/radosgw/civetweb.error.log
rgw num rados handles = 8
rgw swift url = http://<rgw.hostname.clyso.com>
rgw thread pool size = 512

Notes on the configuration

rgw thread pool size is the default value for num_threads of the civetweb web server.

See line 54 of https://github.com/ceph/ceph/blob/master/src/rgw/rgw_civetweb_frontend.cc:

set_conf_default(conf_map, "num_threads",
                 std::to_string(g_conf->rgw_thread_pool_size));
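To make the effect of the thread pool size tangible, here is a small self-contained Python sketch (not RGW code, purely illustrative): a fixed-size worker pool caps how many requests can be handled at the same time, which is exactly why num_threads bounds the number of active connections a civetweb-fronted RGW can serve concurrently.

```python
import concurrent.futures
import threading
import time

def max_concurrency_with_pool(pool_size, num_requests):
    """Simulate a server whose worker pool has `pool_size` threads.

    Returns the peak number of requests that were being handled
    at the same moment.
    """
    lock = threading.Lock()
    active = 0
    peak = 0

    def handle_request(_):
        nonlocal active, peak
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)  # pretend to do some I/O
        with lock:
            active -= 1

    with concurrent.futures.ThreadPoolExecutor(max_workers=pool_size) as pool:
        list(pool.map(handle_request, range(num_requests)))
    return peak

# With a pool of 8 threads and 100 queued requests,
# no more than 8 are ever in flight at once.
print(max_concurrency_with_pool(8, 100))
```

Queued requests beyond the pool size simply wait, so raising rgw thread pool size (or num_threads directly) is the lever for more concurrent connections.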
[client.radosgw]
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw content length compat = true
rgw dns name = <fqdn hostname>
rgw enable ops log = false
rgw enable usage log = false
rgw frontends = civetweb port=8080 num_threads=512 error_log_file=/var/log/radosgw/civetweb.error.log
rgw num rados handles = 8
rgw swift url = http://<fqdn hostname>
rgw thread pool size = 512
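Note that the civetweb options (port, num_threads, error_log_file, ...) are space-separated key=value pairs that all belong on the one rgw frontends line. A simplified Python sketch of how such a value decomposes (the real RGW parser does more, e.g. handling multiple frontends; this is only for illustration):

```python
def parse_frontends(value):
    """Split an 'rgw frontends' value such as
    'civetweb port=8080 num_threads=512 error_log_file=...'
    into the frontend name and a dict of its options.
    Simplified sketch; not the actual RGW parser.
    """
    parts = value.split()
    name, opts = parts[0], {}
    for token in parts[1:]:
        key, _, val = token.partition("=")
        opts[key] = val
    return name, opts

name, opts = parse_frontends(
    "civetweb port=8080 num_threads=512 "
    "error_log_file=/var/log/radosgw/civetweb.error.log")
print(name, opts["num_threads"])  # civetweb 512
```

If num_threads is set here, it overrides the rgw thread pool size default shown above.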

sources

https://github.com/ceph/ceph/blob/master/doc/radosgw/config-ref.rst

http://docs.ceph.com/docs/master/radosgw/config-ref/

https://github.com/ceph/ceph/blob/master/src/rgw/rgw_civetweb_frontend.cc

https://indico.cern.ch/event/578974/contributions/2695212/attachments/1521538/2377177/Ceph_pre-gdb_2017.pdf

http://www.osris.org/performance/rgw.html

https://www.swiftstack.com/docs/integration/python-swiftclient.html

https://github.com/civetweb/civetweb/tree/master/docs