Setting Memory Limits on Cephadm Services

Cephadm manages Ceph daemons as containers, but by default these containers have no memory constraints. This article demonstrates how to apply hard memory limits at the container level using the extra_container_args parameter.

Overview

Cephadm supports injecting native container runtime arguments (Docker or Podman) into service specifications via the extra_container_args field. This enables fine-grained control over container resource limits, including memory.

When a memory limit is applied, the container runtime enforces it through Linux cgroups. If a daemon exceeds its limit, the kernel's OOM killer terminates the process rather than allowing it to consume unbounded memory.
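
As a quick way to see the cgroup mechanism at work, you can read the enforced limit from inside the container. This is a sketch assuming a cgroup v2 host and the container naming used later in this article; on cgroup v1 hosts the file is memory/memory.limit_in_bytes instead:

docker exec ceph-<fsid>-osd-0 cat /sys/fs/cgroup/memory.max

With the 4 GiB limit from the example below applied, the file contains 4294967296.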

Configuration

Add the extra_container_args field to your service specification with the --memory flag:

service_type: osd
service_id: all-available-devices
service_name: osd.all-available-devices
placement:
  host_pattern: '*'
extra_container_args:
  - --memory=4g

Apply the specification:

ceph orch apply -i osd-spec.yaml

Cephadm will redeploy the affected daemons with the new container runtime flags.
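
To confirm that the stored specification includes the new arguments, you can export it back out. This is a sketch; the positional service-type filter assumes a reasonably recent Ceph release:

ceph orch ls osd --export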

Verification

After redeployment, verify the limit is active using these methods:

Check the container runtime arguments:

ps aux | grep ceph-osd

The output should include the memory flag:

root ... /usr/bin/docker run ... --memory=4g ... /usr/bin/ceph-osd ...
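
Alternatively, query the container's configuration directly. This assumes Docker and the container name used in the next step; the Memory field is reported in bytes, so a 4 GiB limit appears as 4294967296:

docker inspect ceph-<fsid>-osd-0 --format '{{.HostConfig.Memory}}'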

Inspect container stats directly:

docker stats ceph-<fsid>-osd-0 --no-stream

Example output:

CONTAINER ID   NAME                    MEM USAGE / LIMIT   MEM %
79d25fde1ca1   ceph-049...-osd-0       2.1GiB / 4GiB       52.50%
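
On hosts where cephadm uses Podman rather than Docker, the equivalent check works the same way, assuming the same container naming:

podman stats ceph-<fsid>-osd-0 --no-stream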

OOM Behavior

When a daemon exceeds its memory limit, the kernel terminates it via cgroup enforcement. You can observe this in dmesg:

Memory cgroup out of memory: Killed process 358433 (ceph-osd) ...

Because cephadm runs each daemon under a systemd unit with a restart-on-failure policy, the daemon is restarted automatically after an OOM termination.
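
You can confirm the restart in the systemd journal. This sketch assumes cephadm's standard unit naming of ceph-<fsid>@<daemon>, here for OSD 0:

journalctl -u ceph-<fsid>@osd.0 --since "10 minutes ago"

Look for a "Main process exited" entry followed by a scheduled restart of the service.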
