Install Clyso Enterprise Storage (CES) with Rook

Powered by Rook

This guide outlines the deployment of CES Community Edition, a hardened storage solution utilizing the Rook operator.

Prerequisites

  • A running Kubernetes cluster (v1.25+ recommended).
  • Validation (Recommended): Qualify your Kubernetes environment using the Clyso Kubernetes Analyzer (https://analyzer.clyso.com/#/analyzer/kubernetes) to ensure storage and network readiness.
  • Nodes with raw storage (unformatted disks) for OSDs.
  • kubectl and helm installed.
  • The CES Community Edition image must be available in a registry reachable from your Kubernetes nodes (e.g., harbor.clyso.com/ces/ceph/ceph:ces-v25.03.2).

Step 1: Install the Rook Operator

The Rook operator deploys and manages all Ceph daemons in the cluster. It must be running before you define your cluster.

# Add the Rook Helm repository
helm repo add rook-release https://charts.rook.io/release
helm repo update

# Install the operator in the rook-ceph namespace
helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph
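
Before moving on, you can confirm the operator pod is ready; this is a quick sketch using the same label selector the troubleshooting section references:

# Wait until the operator pod reports Ready (adjust the timeout to taste)
kubectl -n rook-ceph wait --for=condition=Ready pod -l app=rook-ceph-operator --timeout=300s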

Step 2: Install CES Community Edition with Rook

The CephCluster resource defines the footprint of your Community Edition cluster. By pointing to the validated Clyso Enterprise Storage image, you leverage a distribution optimized for production stability.

The cluster configuration is pulled from the central Clyso Manifest repository to ensure the use of validated enterprise images and parameters.
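
If you want to review the validated parameters before installing, you can download the values file and inspect it locally; the grep pattern below is only illustrative:

# Download the Clyso-provided Helm values and check which CES image and settings it pins
curl -LO https://raw.githubusercontent.com/clyso/ces-manifests/refs/heads/main/rook/helm-charts/rook-ceph-cluster-values.yaml
grep -n "image:" rook-ceph-cluster-values.yaml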

Install the helm chart:

helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace --namespace rook-ceph rook-ceph-cluster \
rook-release/rook-ceph-cluster \
--set operatorNamespace=rook-ceph \
-f https://raw.githubusercontent.com/clyso/ces-manifests/refs/heads/main/rook/helm-charts/rook-ceph-cluster-values.yaml

Step 3: Verify the Installation

Rook will now pull the CES image and start the Monitors (MONs), Managers (MGRs), and OSDs.
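
You can watch the rollout until all daemons are up; the initial image pull and OSD preparation can take several minutes:

# Watch the Ceph pods come up; press Ctrl+C once MONs, MGRs and OSDs are Running
kubectl -n rook-ceph get pods -w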

Check Pod Status: Ensure all pods are running the CES image.

kubectl -n rook-ceph get pods -o jsonpath='{.items[*].spec.containers[*].image}' | tr ' ' '\n' | sort -u

Check Cluster Health: Use the toolbox pod to run native Ceph commands and confirm the cluster is healthy.

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
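
The CephCluster custom resource also reports phase and health, which gives a quick check without entering the toolbox:

# The HEALTH column should report HEALTH_OK once the cluster has settled
kubectl -n rook-ceph get cephcluster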

Troubleshooting

  • CrashLoopBackOff: If the pods fail to start, check the logs of the Rook operator (kubectl logs -l app=rook-ceph-operator -n rook-ceph). If allowUnsupported: true is not set in the CephCluster spec, you will see an error stating "failed the ceph version check"; see the sketch after this list for one way to set it.
  • ImagePullBackOff: If your registry is private, ensure its credentials are added as an imagePullSecret in the CephCluster spec or on the service account; a sketch follows below.
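
As a rough sketch of the version-check fix, assuming you install via the rook-ceph-cluster chart (its cephClusterSpec key maps onto the CephCluster spec):

# Re-run the cluster install with the Ceph version check relaxed
helm upgrade --install --namespace rook-ceph rook-ceph-cluster \
rook-release/rook-ceph-cluster \
--set operatorNamespace=rook-ceph \
--set cephClusterSpec.cephVersion.allowUnsupported=true \
-f https://raw.githubusercontent.com/clyso/ces-manifests/refs/heads/main/rook/helm-charts/rook-ceph-cluster-values.yaml

For a private registry, one common approach is to create a docker-registry secret and attach it to the service account used by the Ceph pods. The secret name ces-registry-creds is just a placeholder, and which service account the daemon pods use depends on your Rook version; the default service account is shown here as an assumption:

# Create the pull secret in the rook-ceph namespace (replace the placeholder credentials)
kubectl -n rook-ceph create secret docker-registry ces-registry-creds \
--docker-server=harbor.clyso.com \
--docker-username=<user> \
--docker-password=<password>

# Attach it to the service account so the Ceph daemon pods can pull the image
kubectl -n rook-ceph patch serviceaccount default \
-p '{"imagePullSecrets": [{"name": "ces-registry-creds"}]}'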