SUSE Enterprise Storage to CES

SES (SUSE Enterprise Storage) Ceph uses cephadm, so the procedure is similar to the one described above for migrating a cephadm-based cluster to CES.

Before Migration

  1. Remove the SES-specific container image configuration for the monitoring and HA NFS Ganesha services. This configuration must be removed before migration.

  2. The SES Ceph orchestrator uses the cephadm user (this can be checked with the ceph cephadm get-user command, shown after this list), which requires sudo privileges.

  3. Reconfigure ceph-iscsi targets to use the user:rbd backstore.
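
To check which user the orchestrator currently uses for SSH (a stock SES deployment typically reports cephadm):

# print the SSH user the cephadm orchestrator connects as
ceph cephadm get-user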

1. Remove SES container image configuration for monitoring and HA NFS Ganesha services

To check whether there is SES-specific container image configuration, run ceph config dump (optionally filtered with ceph config dump | grep container_image) and look for entries like these:

mgr  advanced  mgr/cephadm/container_image_alertmanager     registry.suse.de/devel/storage/7.0/pacific/containers/ses/7.1/ceph/prometheus-alertmanager:latest  *
mgr  advanced  mgr/cephadm/container_image_grafana          registry.suse.de/devel/storage/7.0/pacific/containers/ses/7.1/ceph/grafana:latest  *
mgr  advanced  mgr/cephadm/container_image_haproxy          registry.suse.de/devel/storage/7.0/pacific/containers/ses/7.1/ceph/haproxy:latest  *
mgr  advanced  mgr/cephadm/container_image_keepalived       registry.suse.de/devel/storage/7.0/pacific/containers/ses/7.1/ceph/keepalived:latest  *
mgr  advanced  mgr/cephadm/container_image_node_exporter    registry.suse.de/devel/storage/7.0/pacific/containers/ses/7.1/ceph/prometheus-node-exporter:latest  *
mgr  advanced  mgr/cephadm/container_image_prometheus       registry.suse.de/devel/storage/7.0/pacific/containers/ses/7.1/ceph/prometheus-server:latest  *
mgr  advanced  mgr/cephadm/container_image_snmp_gateway     registry.suse.de/devel/storage/7.0/pacific/containers/ses/7.1/ceph/prometheus-snmp_notifier:latest  *

To make cephadm deploy these services with the default CES (upstream) images, remove all these custom settings:

ceph config rm mgr mgr/cephadm/container_image_alertmanager
ceph config rm mgr mgr/cephadm/container_image_grafana
ceph config rm mgr mgr/cephadm/container_image_haproxy
ceph config rm mgr mgr/cephadm/container_image_keepalived
ceph config rm mgr mgr/cephadm/container_image_node_exporter
ceph config rm mgr mgr/cephadm/container_image_prometheus
ceph config rm mgr mgr/cephadm/container_image_snmp_gateway
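
Equivalently, all seven settings can be removed in one loop over the option names above:

# remove every custom container_image override in one pass
for opt in alertmanager grafana haproxy keepalived node_exporter prometheus snmp_gateway; do
    ceph config rm mgr mgr/cephadm/container_image_$opt
done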

2. Update sudo privileges for cephadm user

Using a non-root user requires sudo privileges so that the cephadm orchestrator can run commands as root. For this, SES deploys the /etc/sudoers.d/ceph-salt file on every node. The list of commands that need root privileges depends on the Ceph version (it grows in newer versions). Given that the list already allows running any Python script, which effectively means any command, as root, there is little point in keeping this granularity, and we recommend simply adding this line to /etc/sudoers:

cephadm ALL=(ALL) NOPASSWD: ALL
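
For example, the rule can be installed as a sudoers drop-in on every node (a minimal sketch; the file name cephadm-ces is arbitrary):

# run as root on each node
echo 'cephadm ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/cephadm-ces
chmod 0440 /etc/sudoers.d/cephadm-ces
# validate the sudoers syntax after the change
visudo -c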

2(b) An alternative is to switch the cephadm user to root

Add the cephadm public key to root's authorized_keys on all hosts:

cat /var/lib/cephadm/.ssh/authorized_keys >> /root/.ssh/authorized_keys
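
If you prefer not to log in to each node by hand, the same can be done in a loop over the orchestrator's host list (a sketch assuming you can already SSH to the hosts as root and have jq installed):

# append the cephadm orchestrator key to root's authorized_keys on every host
for host in $(ceph orch host ls --format json | jq -r '.[].hostname'); do
    ssh root@"$host" 'cat /var/lib/cephadm/.ssh/authorized_keys >> /root/.ssh/authorized_keys'
done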

Alternatively, you may configure cephadm to use a key that already exists in root's authorized_keys, e.g.:

ceph cephadm clear-key
ceph cephadm set-priv-key -i /root/.ssh/id_rsa
ceph cephadm set-pub-key -i /root/.ssh/id_rsa.pub

and then set the cephadm user to root:

ceph cephadm set-user root
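
Afterwards, verify that the orchestrator connects as root and that all hosts are still reachable:

# should now print: root
ceph cephadm get-user

# hosts should be listed without connection errors
ceph orch host ls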

3. Reconfigure ceph-iscsi targets

SES ceph-iscsi supports both the rbd (kernel-based) and user:rbd (tcmu-runner) backstores [1], with rbd being the default, while CES and upstream support only the user:rbd (tcmu-runner) backstore. So if you have ceph-iscsi targets configured to use the rbd backstore, you need to reconfigure them to use the user:rbd backstore before migration. You may do this via either gwcli or the Ceph dashboard; please refer to the SES documentation for details.
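
For orientation, here is a hedged gwcli sketch (pool and image names are placeholders; depending on the ceph-iscsi version a disk's backstore cannot be changed in place, so you may have to detach the disk from its targets and recreate it, as described in the SES documentation):

# dump the gateway configuration tree and check the backstore listed for each disk
gwcli ls

# recreate a disk with the user:rbd backstore (illustrative names; assumes the
# create command accepts a backstore parameter in your ceph-iscsi version)
gwcli /disks create pool=iscsi-pool image=disk_1 size=10G backstore=user:rbd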

After all pre-migration conditions are fulfilled, the migration itself is similar to an upgrade, i.e.:

ceph orch upgrade start --image harbor.clyso.com/ces/ces:v1.0.0-rc1-x86_64  
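
Progress can be monitored with the standard cephadm upgrade commands:

# show migration (upgrade) progress
ceph orch upgrade status

# watch overall cluster health while daemons are redeployed
ceph -s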

References:

[1] SUSE documentation: Exporting RADOS Block Device images using tcmu-runner