
48 posts tagged with "operation"


· One min read
Joachim Kraftmayer

Create an erasure coded rbd pool

ceph osd pool create ec-pool 1024 1024 erasure 8-3
ceph osd pool set ec-pool allow_ec_overwrites true
rbd pool init ec-pool
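
The profile referenced above (here 8-3) must already exist when the pool is created. Assuming it stands for k=8 data chunks and m=3 coding chunks, it could have been created beforehand along these lines (the failure domain is an assumption):

ceph osd erasure-code-profile set 8-3 k=8 m=3 crush-failure-domain=host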

note

Many settings in Ceph can be changed later at runtime. However, the distribution of data and coding chunks must be defined when the EC pool is created. This means you should think carefully about what you plan to do with the pool in the future.

Create an erasure coded RBD image: the data lives in the EC data pool, while the metadata (OMAP objects) needs the replicated target-pool:

rbd create --size 25G --data-pool ec-pool target-pool/new-image
rbd info target-pool/new-image
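
To confirm that the image really uses the EC pool for its data, rbd info should list a data_pool entry, for example:

rbd info target-pool/new-image | grep data_pool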

Sources

docs.ceph.com/en/latest/rados/operations/erasure-code/

· 3 min read
Joachim Kraftmayer

Preliminary remark: Perhaps some people still know the ceph-iscsi project under the name ceph-iscsi-cli.

Installation of necessary Debian packages

apt install ca-certificates
apt install librbd1 libkmod2 python-pyparsing python-kmodpy python-pyudev python-gobject python-urwid python-rados python-rbd python-netifaces python-crypto python-requests python-flask python-openssl python-rpm ceph-common

Ceph setup with pool and user

ceph-iscsi handles the mapping between iSCSI devices and the underlying RBD images. For this we need a separate Ceph pool and a separate user. Contrary to the standard documentation, I do not use client.admin but create a restricted user client.iscsi.

Pool

The default pool is named rbd; here we name it iscsi.

ceph osd pool create <pool-name> 2048 2048 replicated <rule-name>

User

The user iscsi is created with the necessary authorizations for RBD on the pool iscsi:

ceph auth add client.iscsi mon 'profile rbd' osd 'profile rbd pool=<pool-name>'
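
With the placeholders filled in for this setup, the commands could look as follows (the CRUSH rule name replicated_rule is just the default and an assumption here; enabling the rbd application on the pool is an extra step not covered above):

ceph osd pool create iscsi 2048 2048 replicated replicated_rule
ceph osd pool application enable iscsi rbd
ceph auth add client.iscsi mon 'profile rbd' osd 'profile rbd pool=iscsi'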

Installation of necessary Debian packages for ceph-iscsi

apt install tcmu-runner targetcli-fb python-rtslib-fb

Manual installation of ceph-iscsi

apt install git
git clone https://github.com/ceph/ceph-iscsi.git
apt install python-setuptools python-configshell-fb
apt install librbd1 libkmod2 python-pyparsing python-kmodpy python-pyudev python-gobject python-urwid python-rados python-rbd python-netifaces python-crypto python-requests python-flask python-openssl python-rpm ceph-common
cd ceph-iscsi
python setup.py install --install-scripts=/usr/bin
cp usr/lib/systemd/system/rbd-target-gw.service /lib/systemd/system
cp usr/lib/systemd/system/rbd-target-api.service /lib/systemd/system
systemctl daemon-reload
systemctl enable rbd-target-gw
systemctl start rbd-target-gw
systemctl enable rbd-target-api
systemctl start rbd-target-api

ISCSI configuration

The configuration goes into /etc/ceph/iscsi-gateway.cfg on each gateway node:

[config]
# Name of the *.conf file. A suitable conf file allowing access to the ceph
# cluster from the gateway node is required.
cluster_name = ceph

# Pool name where the internal gateway.conf object is stored
# pool = rbd
pool = rbd

# CephX client name
# cluster_client_name = client.<name>  # E.g.: client.admin
cluster_client_name = client.iscsi

# API settings.
# The API supports a number of options that allow you to tailor it to your
# local environment. If you want to run the API under https, you will need to
# create crt/key files that are compatible for each gateway node (i.e. not
# locked to a specific node). SSL crt and key files must be called
# iscsi-gateway.crt and iscsi-gateway.key and placed in /etc/ceph on each
# gateway node. With the SSL files in place, you can use api_secure = true
# to switch to https mode.
# To support the API, the bare minimum settings are:
api_secure = false

# Additional API configuration options are as follows (defaults shown):
# api_user = admin
# api_password = admin
# api_port = 5000
# trusted_ip_list = IP,IP
trusted_ip_list = 10.27.252.176, 127.0.0.1

# Refer to the ceph-iscsi-config/settings module for more options
logger_level = DEBUG
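
After adjusting the configuration, restarting the two gateway services and listing the configuration tree with gwcli (installed as part of ceph-iscsi) is a reasonable sanity check; a sketch:

systemctl restart rbd-target-gw rbd-target-api
gwcli ls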

Sources

github.com/ceph/ceph-iscsi

https://docs.ceph.com/docs/master/rbd/iscsi-initiator-esx/

https://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/

https://docs.ceph.com/docs/luminous/rbd/iscsi-target-cli/

· One min read
Joachim Kraftmayer

Log in to one Ceph monitor node and create a new recovery client.

You can do this with client.admin, but I prefer to create a separate recovery client.

Cephadm / Docker host:

ceph -n mon. --keyring /var/lib/ceph/<fsid>/mon/<mon-name>/keyring auth get-or-create client.recovery mon 'allow *' mds 'allow *' mgr 'allow *' osd 'allow *'

Standard Ceph host:

ceph -n mon. --keyring /var/lib/ceph/mon/<mon-name>/keyring auth get-or-create client.recovery mon 'allow *' mds 'allow *' mgr 'allow *' osd 'allow *'

install ceph-common:

apt install ceph-common

create two files:

/etc/ceph/ceph.conf

[global]
fsid = <cluster fsid; see the ceph_fsid file in any OSD, MON, or MGR data directory>
mon_host = [v2:<ip addr of the active ceph monitor>:3300/0,v1:<ip addr of the active ceph monitor>:6789/0]
/etc/ceph/ceph.client.recovery.keyring (add the output of the ceph auth get-or-create command; replace the ':' with '=' and put the client name in square brackets)
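
A minimal sketch of what the keyring file should look like (the key itself is a placeholder and comes from the get-or-create output):

[client.recovery]
    key = <key from the get-or-create output>

Afterwards the new client can be tested with, for example, ceph -n client.recovery -s.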

· One min read
Joachim Kraftmayer

This guide details the process of adding OSD nodes to an existing cluster running Red Hat Ceph Storage 4 (Nautilus). The process can be completed without taking the cluster out of production.

Set the Ceph cluster into maintenance mode

ceph osd set norebalance

ceph osd set nobackfill

ceph osd set norecover

Verify ceph cluster status

ceph status

Make sure that the new Ceph node is resolvable, e.g. via the /etc/hosts file, and add it to the ceph-ansible inventory:

vim /usr/share/ceph-ansible/hosts
[mons]
...
[mgrs]
...
[osds]
ceph-node1
ceph-node2
ceph-node3
ceph-node4
...

Ping test before the Ansible playbook execution
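
A quick reachability check with Ansible could look like this (a sketch, assuming the inventory file above):

ansible -i /usr/share/ceph-ansible/hosts ceph-node4 -m ping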


ansible-playbook site-container.yml --limit ceph-node4

Unset maintenance mode

ceph osd unset nobackfill

ceph osd unset norecover

ceph osd unset norebalance

Verify that all OSDs and their hard drives have been added as expected

ceph osd tree
ceph osd crush tree
ceph osd df
ceph -s

Verify that all services use the same version

ceph versions

Sources

docs.ceph.com/projects/ceph-ansible/en/latest/day-2/osds.html

docs.ceph.com/projects/ceph-ansible/en/latest/

· 2 min read
Joachim Kraftmayer

Perhaps someone has already thought about using EC (erasure coding) for Ceph pools, so that the overhead for storing data securely is not too high. This has been a topic in many of the trainings we have held in recent years.

But what most people forget after creating EC pools is how to get all the information about an existing pool.

ceph osd pool ls

or

ceph osd pool ls detail

don't really give information about the configuration of erasure coding pools. However, there is a small option that lets ceph spill the beans a bit more.

ceph osd pool ls detail --format=json

you might get more information than you want.

But with

ceph osd pool ls detail --format=json | jq '.'

the whole thing looks much more friendly to the eyes.

And here we find more information about the erasure coded pools:

ceph osd pool ls detail --format=json | jq '.' | grep erasure_code_profile
"erasure_code_profile": "clyso-costum-profile",
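
If jq is at hand anyway, a more targeted query is possible; a sketch, assuming the field names pool_name and erasure_code_profile as seen above:

ceph osd pool ls detail --format=json | jq -r '.[] | select(.erasure_code_profile != null and .erasure_code_profile != "") | "\(.pool_name): \(.erasure_code_profile)"'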

If you want to list all defined profiles, then use

ceph osd erasure-code-profile ls

You can get detailed information about an erasure code profile with:

ceph osd erasure-code-profile get clyso-costum-profile

· One min read
Joachim Kraftmayer

We had the problem of getting the correct authorizations for the Ceph CSI user on the pools.

We then found the following bug for versions prior to 14.2.12.

https://github.com/ceph/ceph/pull/36413/files#diff-1ad4853f970880c78ea0e52c81e621b4

It was then solved with version 14.2.12.

https://tracker.ceph.com/issues/46321

· One min read
Joachim Kraftmayer
monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2]

We see this example again and again with customers who copy their keyring file directly from the output of:

 ceph auth ls

In the client.<name>.keyring file the name is enclosed in square brackets and the key is assigned with an equals sign, whereas ceph auth ls prints it with a colon.
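
As an illustration, the same entry in both formats (client name and key are placeholders):

ceph auth ls prints

client.rbd-user
    key: AQD...example...==

while /etc/ceph/ceph.client.rbd-user.keyring has to look like

[client.rbd-user]
    key = AQD...example...==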