Install CES

info

The following instructions are a condensed version of how to set up a Ceph cluster with Clyso Enterprise Storage (CES). If you already have an existing Ceph cluster, head over to the Migrate to CES section.

Install with cephadm

Clyso Enterprise Storage is installed using the cephadm orchestrator. The following steps will guide you through the installation process. First, you need to acquire an initial copy of cephadm from Clyso. This will automatically obtain the latest version of CES from the Clyso registry.

Step 1: Install cephadm via Curl

Download the cephadm script and ensure the file is executable.

curl --silent --remote-name --location <INSERT LINK HERE>
chmod +x cephadm
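Before running any cluster operations, it can help to confirm the script is in place and executable. This is only a sketch, assuming the script was downloaded into the current directory:

```shell
# Verify the downloaded script exists and carries the execute bit
# before attempting to bootstrap with it.
if [ -x ./cephadm ]; then
    echo "cephadm is ready"
else
    echo "cephadm is missing or not executable" >&2
fi
```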

Step 2: Setting Up and Bootstrapping a New Cluster

Any operation run through the cephadm script pulls the CES image, so bootstrapping a new cluster automatically installs the latest version of CES. For more information, see the upstream Ceph documentation on bootstrapping.

./cephadm bootstrap --mon-ip <mon-ip>
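For example, bootstrapping on a first node whose monitor address is 10.10.0.101 (a hypothetical address; substitute the IP of your first cluster node) might look like:

```shell
# Bootstrap the first cluster node.
# 10.10.0.101 is a placeholder; use the node's actual IP address.
./cephadm bootstrap --mon-ip 10.10.0.101
```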

Step 3: Adding Hosts

To add hosts to the cluster, follow these steps:

  1. Install the cluster’s public SSH key in the new host’s root user’s authorized_keys file:

ssh-copy-id -f -i /etc/ceph/ceph.pub root@<new-host>

For example:

ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@host3

  2. Tell Ceph that the new host is part of the cluster:

ceph orch host add <newhost> [<ip>] [<label1> ...]
note

One or more labels can also be included to immediately label the new host. For example, adding the _admin label makes cephadm maintain a copy of the ceph.conf file and a client.admin keyring file in /etc/ceph on that host:

ceph orch host add host4 10.10.0.104 --labels _admin

For a more in-depth guide on host management, check the upstream Ceph docs.
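The two host-addition steps above can be combined into a small loop. This is only a sketch; the hostnames host2 and host3 are hypothetical:

```shell
# Distribute the cluster SSH key and register each new host in one pass.
# Hostnames below are placeholders; replace with your own.
for host in host2 host3; do
    ssh-copy-id -f -i /etc/ceph/ceph.pub "root@${host}"
    ceph orch host add "${host}"
done

# Confirm the hosts were added.
ceph orch host ls
```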

Step 4: Adding Storage (OSDs)

There are a few ways to create new OSDs:

Tell Ceph to consume any available and unused storage device automatically:

ceph orch apply osd --all-available-devices
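Before letting Ceph consume every device, it can be worth reviewing which devices the orchestrator considers available:

```shell
# List storage devices on all hosts; the AVAILABLE column shows which
# devices would be consumed by --all-available-devices.
ceph orch device ls
```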

Alternatively, you can specify the device to use:

ceph orch daemon add osd <host>:<device-path>

For example:

ceph orch daemon add osd host1:/dev/sdb

For more information on adding OSDs, see the upstream Ceph documentation.
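After adding OSDs, their placement and status can be verified with standard Ceph commands, for example:

```shell
# Show the OSD hierarchy (hosts and their OSDs) and overall cluster health.
ceph osd tree
ceph -s
```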