
17 posts tagged with "k8s"


· One min read
Joachim Kraftmayer

Use programming languages to define customised infrastructure environments with Terraform, relying on language constructs and tools that follow Infrastructure as Code (IaC) design patterns.

Use functions and libraries within these programming languages to develop complex infrastructure projects.

Use programming languages such as TypeScript, Python and Go to model multi-cloud environments, with a modular and open architecture that covers hundreds of providers and thousands of module definitions.
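In practice this is the CDK for Terraform (CDKTF) workflow, which the post describes but does not name. A minimal sketch of bootstrapping and deploying such a project, assuming the cdktf CLI is installed:

# scaffold a new TypeScript project (python and go templates exist as well)
cdktf init --template=typescript

# generate the provider bindings declared in cdktf.json, then synthesize and deploy
cdktf get
cdktf deploy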

· One min read
Joachim Kraftmayer

After more than 10 years of experience with different evolutionary stages of time-series monitoring systems and the growing need for metrics, we decided to replace database- and file-based solutions.

To store metrics for the long term, we rely on object storage technology to provide almost unlimited storage capacity in the backend.

The object storage is provided to us by multiple Ceph clusters. We are now also able to dynamically connect alternative storage locations, such as AWS and/or GCP, as needed.
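As an illustration of this architecture: if the long-term metrics store were, for example, Thanos (an assumption; the post does not name the tool), pointing it at a Ceph RGW bucket only requires an S3-compatible object store configuration. Bucket, endpoint and credentials below are placeholders:

# assumption: Thanos-style S3 object store configuration; all values are placeholders
type: S3
config:
  bucket: "metrics-long-term"
  endpoint: "rgw.example.com:443"
  access_key: "<ACCESS_KEY>"
  secret_key: "<SECRET_KEY>"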

· 2 min read
Joachim Kraftmayer

validate if the RBD Cache is active on your client

Since version 0.87, the cache is enabled by default.

To enable the cache on the client side, you have to add the following config to /etc/ceph/ceph.conf:

[client]
rbd cache = true
rbd cache writethrough until flush = true

add local admin socket

So that you can also verify the status on the client side, you must add the following two parameters:

[client]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/ceph/

configure permissions and security

Both paths must be writable by the user running the RBD library. Security frameworks such as SELinux or AppArmor must be configured accordingly.
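For example, if the librbd client is QEMU/KVM running as the qemu user (an assumption; adjust user and group to your distribution and workload), the directories can be prepared like this:

# assumption: librbd runs as the qemu user; adjust ownership to your setup
sudo mkdir -p /var/run/ceph /var/log/ceph
sudo chown qemu:qemu /var/run/ceph /var/log/ceph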

request info via the admin socket

Once this is done, run your application that is supposed to use librbd (kvm, docker, podman, ...) and request the information via the admin daemon socket:

$ sudo ceph --admin-daemon /var/run/ceph/ceph-client.admin.66606.140190886662256.asok config show | grep rbd_cache
"rbd_cache": "true",
"rbd_cache_writethrough_until_flush": "true",
"rbd_cache_size": "33554432",
"rbd_cache_max_dirty": "25165824",
"rbd_cache_target_dirty": "16777216",
"rbd_cache_max_dirty_age": "1",
"rbd_cache_max_dirty_object": "0",
"rbd_cache_block_writes_upfront": "false",

Verify the cache behaviour

To measure the performance difference, you can deactivate the cache in the [client] section of your ceph.conf as follows:

[client]
rbd cache = false

Then run a fio benchmark with the following command:

fio --name=rbd-cache-test --ioengine=rbd --pool=<pool-name> --rbdname=rbd1 --direct=1 --fsync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based

Finally, run this test with the RBD client cache enabled and disabled, and you should notice a significant difference.

Sources

https://www.sebastien-han.fr/blog/2015/09/02/ceph-validate-that-the-rbd-cache-is-active/

· One min read
Joachim Kraftmayer

We were speakers at the first edition of Cloudland

Cloudland is the festival of the German-speaking Cloud Native Community (DCNC), which aims to communicate the current status quo in the use of cloud technologies and to focus in particular on future challenges.

Our contribution on Multi Cloud Deployment met with great interest at the "Container & Cloud Technologies" theme day.

· 2 min read
Joachim Kraftmayer

What are hugepages?

Huge pages are memory pages larger than the default page size, built on the multiple page size support provided by most modern architectures. For example, x86 CPUs normally support 4K and 2M (1G if architecturally supported) page sizes, the ia64 architecture supports multiple page sizes (4K, 8K, 64K, 256K, 1M, 4M, 16M, 256M), and ppc64 supports 4K and 16M. A TLB is a cache of virtual-to-physical translations and is typically a very scarce resource on a processor, so operating systems try to make the best use of the limited number of TLB entries. This optimization is even more critical now that bigger and bigger physical memories (several GBs) are readily available. https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt

How to configure huge pages

clyso@compute-21:~$ grep Hugepagesize /proc/meminfo
Hugepagesize: 2048 kB
clyso@compute-21:~$
echo 1024 > /proc/sys/vm/nr_hugepages
echo "vm.nr_hugepages=1024" &gt; /etc/sysctl.d/hugepages.conf

total huge pages

clyso@compute-21:/etc/sysctl.d# grep HugePages_Total /proc/meminfo
HugePages_Total: 1024
clyso@compute-21:/etc/sysctl.d#

free hugepages

clyso@compute-21:/etc/sysctl.d# grep HugePages_Free /proc/meminfo
HugePages_Free: 1024
clyso@compute-21:/etc/sysctl.d#

free memory

clyso@compute-21:/etc/sysctl.d# grep MemFree /proc/meminfo
MemFree: 765177380 kB
clyso@compute-21:/etc/sysctl.d#

How to make huge pages available in Kubernetes?

restart the kubelet on the worker node

sudo systemctl restart kubelet.service

verify in Kubernetes

Allocated resources

clyso@compute-21:~$ kubectl describe node compute-21 | grep -A 8 "Allocated resources:"
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 4950m (10%) 15550m (32%)
memory 27986Mi (3%) 292670Mi (37%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 400Mi (19%) 400Mi (19%)
clyso@compute-21:~$

Capacity

clyso@compute-21:~$ kubectl describe node compute-21 | grep -A 13 "Capacity:"
Capacity:
cpu: 48
ephemeral-storage: 1536640244Ki
hugepages-1Gi: 0
hugepages-2Mi: 2Gi
memory: 792289900Ki
pods: 110
Allocatable:
cpu: 48
ephemeral-storage: 1416167646526
hugepages-1Gi: 0
hugepages-2Mi: 2Gi
memory: 790090348Ki
pods: 110
clyso@compute-21:~$
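Once the node advertises hugepages-2Mi capacity, a pod can request huge pages via its resource limits and mount them as a volume. A minimal sketch, with pod name and image as placeholders:

# illustrative pod requesting 2Mi huge pages; name and image are placeholders
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /hugepages
      name: hugepage
    resources:
      limits:
        hugepages-2Mi: 100Mi
        memory: 100Mi
      requests:
        memory: 100Mi
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages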

Sources:

Manage HugePages
Brief summary of hugetlbpage support in the Linux kernel
Configuring Huge Pages in Red Hat Enterprise Linux 4 or 5

· One min read
Joachim Kraftmayer

The MySQL database was running out of space and we had to increase the PVC size for it.

Nothing simpler than that: we just verified that we had enough free space left.

We edited the PVC definition of the MySQL StatefulSet and set the spec storage size to 20Gi.

A few seconds later, the MySQL database space had doubled.

ceph version: 15.2.8

ceph-csi version: 3.2.1

kubectl edit pvc data-mysql-0 -n mysql
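The relevant change is the storage request in the PVC spec (assuming the original size was 10Gi; the StorageClass must have allowVolumeExpansion: true for the resize to take effect):

# excerpt of the edited PVC spec; 20Gi is the new size
spec:
  resources:
    requests:
      storage: 20Gi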

· One min read
Joachim Kraftmayer

We had problems getting the correct authorizations for the Ceph CSI user on the pools.

We then found the following bug, affecting versions prior to 14.2.12:

https://github.com/ceph/ceph/pull/36413/files#diff-1ad4853f970880c78ea0e52c81e621b4

It was solved in version 14.2.12:

https://tracker.ceph.com/issues/46321
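For reference, a typical way to grant a Ceph CSI RBD user pool-scoped caps looks like the following sketch; user and pool names are placeholders and the exact caps depend on your ceph-csi deployment:

# placeholder user and pool names; adjust caps to your ceph-csi version and setup
ceph auth get-or-create client.csi-rbd-node mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes'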