100 posts tagged with "ceph"
Where can I download ceph client for windows
Ceph, the leading open source distributed storage system, has been ported to Windows, including RBD and CephFS. This opens new interoperability scenarios where both Linux and Windows systems can benefit from a unified distributed storage strategy, without performance compromises.
Ceph for Windows - Cloudbase Solutions
Ceph message "daemons have recently crashed"
The crash module collects information about daemon crashdumps and stores it in the Ceph cluster for later analysis.
If you see this message in the status of Ceph (ceph -s), you should first execute the following command to list all collected crashes:
ceph crash ls
The output shows which OSD(s) had or still have problems, together with the time each crash occurred.
You can get more information about a specific crash event with
ceph crash info <ID>
If a crash is no longer relevant, it can be acknowledged with
ceph crash archive <ID>
or, for all recorded crashes at once,
ceph crash archive-all
After that the warning disappears from the ceph status output.
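As a compact recap, ceph crash ls-new can be used as a variant of ceph crash ls that lists only crashes which have not been archived yet; the crash ID below is a placeholder taken from that output:
ceph crash ls-new       # list only crashes that have not been archived yet
ceph crash info <ID>    # inspect a single crash
ceph crash archive <ID> # acknowledge it once it has been dealt with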
Howto create an erasure coded rbd pool
Create an erasure coded rbd pool
ceph osd pool create ec-pool 1024 1024 erasure 8-3
ceph osd pool set ec-pool allow_ec_overwrites true
rbd pool init ec-pool
note
Many settings in Ceph can be changed later at runtime. However, the distribution of data and coding chunks must be defined when the EC pool is created. This means you should think carefully about what you plan to do with the pool in the future.
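The profile name 8-3 used for the pool above is not created by these commands. Assuming it stands for 8 data and 3 coding chunks, a matching profile could be defined beforehand like this; the failure domain is an assumption, adjust it to your CRUSH layout:
ceph osd erasure-code-profile set 8-3 k=8 m=3 crush-failure-domain=host
ceph osd erasure-code-profile get 8-3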
Create an erasure coded rbd image: the image data lives in the EC data pool, while the metadata (OMAP objects) needs the replicated target-pool:
rbd create --size 25G --data-pool ec-pool target-pool/new-image
rbd info target-pool/new-image
Install Ceph iSCSI Gateways under Debian with ceph-iscsi
Preliminary remark: some people may still know the ceph-iscsi project under its former name ceph-iscsi-cli.
Installation of necessary Debian packages
apt install ca-certificates
apt install librbd1 libkmod2 python-pyparsing python-kmodpy python-pyudev python-gobject python-urwid python-rados python-rbd python-netifaces python-crypto python-requests python-flask python-openssl python-rpm ceph-common
Ceph setup with pool and user
ceph-iscsi handles the administration of iSCSI devices and their mapping to rbd images. For this we need a separate Ceph pool and a separate user. Contrary to the standard documentation, I do not use client.admin but create a restricted user client.iscsi.
Pool
The standard pool has the name rbd; here we give it the name iscsi.
ceph osd pool create <pool-name> 2048 2048 replicated <rule-name>
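Depending on the Ceph release, the new pool may also need to be tagged with the rbd application so that Ceph does not warn about an untagged pool; the pool name iscsi matches the name chosen above:
ceph osd pool application enable iscsi rbd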
User
The user client.iscsi is created with the necessary rbd authorizations on the pool iscsi:
ceph auth add client.iscsi mon 'profile rbd' osd 'profile rbd pool=<pool-name>'
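To verify the capabilities and to place the keyring on the gateway node, where ceph-iscsi expects it under /etc/ceph, something like the following can be used; the output file name follows the usual ceph.<client>.keyring convention and is an assumption:
ceph auth get client.iscsi
ceph auth get client.iscsi -o /etc/ceph/ceph.client.iscsi.keyring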
Installation of necessary Debian packages for ceph-iscsi
apt install tcmu-runner targetcli-fb python-rtslib-fb
Manual installation of ceph-iscsi
apt install git
git clone https://github.com/ceph/ceph-iscsi.git
apt install python-setuptools python-configshell-fb
cd ceph-iscsi
python setup.py install --install-scripts=/usr/bin
cp usr/lib/systemd/system/rbd-target-gw.service /lib/systemd/system
cp usr/lib/systemd/system/rbd-target-api.service /lib/systemd/system
systemctl daemon-reload
systemctl enable rbd-target-gw
systemctl start rbd-target-gw
systemctl enable rbd-target-api
systemctl start rbd-target-api
iSCSI configuration
The gateway is configured via /etc/ceph/iscsi-gateway.cfg. It sets the pool, the CephX client and the API options:
[config]
# Name of the *.conf file. A suitable conf file allowing access to the ceph
# cluster from the gateway node is required.
cluster_name = ceph

# Pool name where the internal gateway.conf object is stored
pool = rbd

# CephX client name (e.g. client.admin)
cluster_client_name = client.iscsi

# API settings.
# The api supports a number of options that allow you to tailor it to your
# local environment. If you want to run the api under https, you will need to
# create crt/key files that are compatible for each gateway node (i.e. not
# locked to a specific node). SSL crt and key files must be called
# iscsi-gateway.crt and iscsi-gateway.key and placed in /etc/ceph on each
# gateway node. With the SSL files in place, you can use api_secure = true
# to switch to https mode.

# To support the api, the bare minimum settings are:
api_secure = false

# Additional API configuration options are as follows (defaults shown):
# api_user = admin
# api_password = admin
# api_port = 5000
# trusted_ip_list = IP,IP
trusted_ip_list = 10.27.252.176, 127.0.0.1

# Refer to the ceph-iscsi-config/settings module for more options
logger_level = DEBUG
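With the configuration file in place, the services can be restarted and the gateway checked. gwcli is installed by the setup.py step above; these are only basic sanity checks, and the exact gwcli output depends on your ceph-iscsi version:
systemctl restart rbd-target-gw rbd-target-api
systemctl status rbd-target-gw rbd-target-api
gwcli ls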
Sources
github.com/ceph/ceph-iscsi
https://docs.ceph.com/docs/master/rbd/iscsi-initiator-esx/
https://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/
https://docs.ceph.com/docs/luminous/rbd/iscsi-target-cli/
Kubernetes resize pvc – cephfs
The MySQL database was running out of space, so we had to increase the size of its PVC.
Nothing simpler than that. We just verified that enough free space was left,
edited the PVC definition of the MySQL StatefulSet and set the storage size in the spec to 20Gi.
A few seconds later the MySQL database had twice the space.
Ceph version: 15.2.8
ceph-csi version: 3.2.1
kubectl edit pvc data-mysql-0 -n mysql
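The same change can also be applied non-interactively; the namespace and PVC name below match the example above, and the patch simply bumps the storage request:
kubectl -n mysql patch pvc data-mysql-0 -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'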
How to recover an accidentally deleted client.admin keyring
Log in to one of the Ceph monitor nodes and create a new recovery client.
You could also do this with client.admin directly, but I prefer to create a separate recovery client.
On a cephadm (Docker) host:
ceph -n mon. --keyring /var/lib/ceph/<fsid>/mon/<mon-name>/keyring auth get-or-create client.recovery mon 'allow *' mds 'allow *' mgr 'allow *' osd 'allow *'
On a standard Ceph host:
ceph -n mon. --keyring /var/lib/ceph/mon/<mon-name>/keyring auth get-or-create client.recovery mon 'allow *' mds 'allow *' mgr 'allow *' osd 'allow *'
Install ceph-common:
apt install ceph-common
Create the following two files:
/etc/ceph/ceph.conf
[global]
fsid = <cluster fsid; you can find it in the ceph_fsid file in every OSD, MON or MGR data directory>
mon_host = [v2:<ip addr of the active ceph monitor>:3300/0,v1:<ip addr of the active ceph monitor>:6789/0]
/etc/ceph/ceph.client.recovery.keyring (paste the output of the auth get-or-create command; if the key is listed with a :, replace the : with = and put the client name in square brackets)
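The resulting keyring file should look roughly like this; the key value is a placeholder and must be replaced with the one returned for client.recovery:
[client.recovery]
	key = <key from the auth get-or-create output>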
erasure coding, recovery under min_size
Starting with version 15.2.0, Ceph Octopus attempts to recover data in EC pools even if min_size is not reached, of course only for objects that still have enough shards available.
Windows drivers for RBD and maybe soon for cephfs
A presentation from 2017 was given again at SUSECON Digital 2020.
www.youtube.com/watch?v=BWZIwXLcNts
Sources
suse.com/betaprogram/suse-enterprise-storage-windows-driver-beta/
rbd mirroring - snapshot based
The Ceph Octopus release 15.2.5 introduces the new feature rbd mirroring based on snapshots.
The new method no longer uses the journal to synchronize the data. It synchronizes the data between two snapshots using the fast-diff and delta-export features.
This type of synchronization requires fewer IOPS and does not directly affect the performance of the running system, as journal-based mirroring does.
The new implementation works directly with Ceph's kernel RBD features and does not depend on user-space components such as librbd or rbd-nbd.
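A minimal sketch of enabling snapshot-based mirroring for a single image; pool and image names are placeholders, and the peer relationship between the two clusters is assumed to be configured already:
rbd mirror pool enable <pool> image               # per-image mirroring mode on the pool
rbd mirror image enable <pool>/<image> snapshot   # switch the image to snapshot-based mirroring
rbd mirror image snapshot <pool>/<image>          # create a mirror snapshot manually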