
2 posts tagged with "worker nodes"


Joachim Kraftmayer · 2 min read

Validate whether the RBD cache is active on your client

The cache has been enabled by default since version 0.87.

To enable the cache on the client side, add the following configuration to /etc/ceph/ceph.conf:

[client]
rbd cache = true
rbd cache writethrough until flush = true

Add a local admin socket

To be able to verify the cache status on the client side as well, add the following two parameters:

[client]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/ceph/

Configure permissions and security

Both paths must be writable by the user that uses the RBD library, and security frameworks such as SELinux or AppArmor must be configured to allow access to them.
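As a rough sketch, assuming the process that links against librbd runs as the qemu user (substitute whatever user your application actually runs as), the directories could be prepared like this:

# Hypothetical example: make the admin socket and log directories from
# ceph.conf writable for the user that loads librbd (here: qemu).
sudo mkdir -p /var/run/ceph /var/log/ceph
sudo chown qemu:qemu /var/run/ceph /var/log/ceph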

Request information via the admin socket

Once this is done, start the application that is supposed to use librbd (KVM, Docker, Podman, ...) and query the information via the admin daemon socket:

$ sudo ceph --admin-daemon /var/run/ceph/ceph-client.admin.66606.140190886662256.asok config show | grep rbd_cache
    "rbd_cache": "true",
    "rbd_cache_writethrough_until_flush": "true",
    "rbd_cache_size": "33554432",
    "rbd_cache_max_dirty": "25165824",
    "rbd_cache_target_dirty": "16777216",
    "rbd_cache_max_dirty_age": "1",
    "rbd_cache_max_dirty_object": "0",
    "rbd_cache_block_writes_upfront": "false",
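The socket name embeds the client's PID and instance counter, so it changes between runs. If you are not sure which .asok file to query, list the socket directory configured above first:

# Show all admin sockets currently present on this client
ls -l /var/run/ceph/*.asok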

Verify the cache behaviour

To measure the performance difference, you can deactivate the cache in the [client] section of your ceph.conf as follows:

[client]
rbd cache = false

Then run a fio benchmark with the following command:

fio --name=rbd-cache-test --ioengine=rbd --pool=<pool-name> --rbdname=rbd1 --direct=1 --fsync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based

Finally, run this test once with the RBD client cache enabled and once with it disabled; you should notice a significant difference.
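Instead of editing ceph.conf between runs, the cache setting can also be toggled per invocation through the CEPH_ARGS environment variable, which librados-based clients such as the fio rbd engine read at startup. A sketch, assuming the same placeholder pool and image as above:

# Run the identical fio job once with and once without the RBD client cache.
for cache in true false; do
  echo "=== rbd cache = ${cache} ==="
  CEPH_ARGS="--rbd_cache=${cache}" \
  fio --name=rbd-cache-test --ioengine=rbd --pool=<pool-name> --rbdname=rbd1 \
      --direct=1 --fsync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 \
      --runtime=60 --time_based
done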

Sources

https://www.sebastien-han.fr/blog/2015/09/02/ceph-validate-that-the-rbd-cache-is-active/

Joachim Kraftmayer · 2 min read

What are hugepages?

Huge pages let the kernel back memory with pages that are much larger than the default page size, so fewer TLB entries are needed to map the same amount of memory. The kernel documentation explains it as follows:

For example, x86 CPUs normally support 4K and 2M (1G if architecturally supported) page sizes, ia64 architecture supports multiple page sizes 4K, 8K, 64K, 256K, 1M, 4M, 16M, 256M and ppc64 supports 4K and 16M. A TLB is a cache of virtual-to-physical translations. Typically this is a very scarce resource on a processor. Operating systems try to make the best use of the limited number of TLB resources. This optimization is more critical now as bigger and bigger physical memories (several GBs) are more readily available.

Source: https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt
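To check which huge page sizes the running kernel actually supports on a given machine, you can list the corresponding sysfs directory:

# Each supported size shows up as its own subdirectory,
# e.g. hugepages-2048kB and, where available, hugepages-1048576kB.
ls /sys/kernel/mm/hugepages/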

How to configure huge pages

clyso@compute-21:~$ grep Hugepagesize /proc/meminfo
Hugepagesize: 2048 kB
clyso@compute-21:~$

Run the following as root to reserve 1024 huge pages and to persist the setting across reboots:

echo 1024 > /proc/sys/vm/nr_hugepages
echo "vm.nr_hugepages=1024" > /etc/sysctl.d/hugepages.conf

With a huge page size of 2048 kB, reserving 1024 pages sets aside 2 GiB of memory, which matches the hugepages-2Mi capacity of 2Gi that Kubernetes reports below.

total huge pages

clyso@compute-21:/etc/sysctl.d# grep HugePages_Total /proc/meminfo
HugePages_Total: 1024
clyso@compute-21:/etc/sysctl.d#

free hugepages

clyso@compute-21:/etc/sysctl.d# grep HugePages_Free /proc/meminfo
HugePages_Free: 1024
clyso@compute-21:/etc/sysctl.d#

free memory

clyso@compute-21:/etc/sysctl.d# grep MemFree /proc/meminfo
MemFree: 765177380 kB
clyso@compute-21:/etc/sysctl.d#

How to make huge pages available in Kubernetes?

Restart the Kubernetes kubelet on the worker node

sudo systemctl restart kubelet.service

Verify in Kubernetes

Allocated resources

clyso@compute-21:~$ kubectl describe node compute-21 | grep -A 8 "Allocated resources:"
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests      Limits
  --------           --------      ------
  cpu                4950m (10%)   15550m (32%)
  memory             27986Mi (3%)  292670Mi (37%)
  ephemeral-storage  0 (0%)        0 (0%)
  hugepages-1Gi      0 (0%)        0 (0%)
  hugepages-2Mi      400Mi (19%)   400Mi (19%)
clyso@compute-21:~$

Capacity

clyso@compute-21:~$ kubectl describe node compute-21 | grep -A 13 "Capacity:"
Capacity:
cpu: 48
ephemeral-storage: 1536640244Ki
hugepages-1Gi: 0
hugepages-2Mi: 2Gi
memory: 792289900Ki
pods: 110
Allocatable:
cpu: 48
ephemeral-storage: 1416167646526
hugepages-1Gi: 0
hugepages-2Mi: 2Gi
memory: 790090348Ki
pods: 110
clyso@compute-21:~$
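Once the node advertises hugepages-2Mi capacity, a workload consumes huge pages by requesting the hugepages-2Mi resource and mounting a HugePages-backed volume. A minimal sketch along the lines of the upstream "Manage HugePages" documentation; the pod name and image are placeholders, and the 400Mi size simply mirrors the allocation shown above:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: hugepage
      mountPath: /hugepages
    resources:
      requests:
        memory: 100Mi
        hugepages-2Mi: 400Mi
      limits:
        memory: 100Mi
        hugepages-2Mi: 400Mi
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
EOF

Note that huge page requests must equal their limits, and the page size in the resource name (2Mi here) has to match what the node actually provides.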

Sources:

Manage HugePages
Brief summary of hugetlbpage support in the Linux kernel
Configuring Huge Pages in Red Hat Enterprise Linux 4 or 5