13 posts tagged with "kubernetes"

· One min read
Joachim Kraftmayer

Today, Kubernetes is the first choice for running microservices in the public or private cloud. More and more developers and enterprises are building their applications on a modern microservice architecture.

Many of them use Kubernetes for automated deployment of their workloads and want to benefit from the resulting flexibility and robustness. We are working on a solution that simplifies and unifies Day One and Day Two operations for our customers. As the number of clusters grows, management, updates, and monitoring must keep pace efficiently.

· One min read
Joachim Kraftmayer

Since 2018, we have been closely following the development of Rook.io and have been in direct exchange with various members of the project at Cephalocon in Beijing (2018) and Barcelona (2019).

In 2019, we began serving customers in production who use Rook.io to manage Ceph.

Storage Operators for Kubernetes

Rook transforms distributed storage systems into self-managing, self-scaling and self-healing storage services. It automates the tasks of a storage administrator: provisioning, bootstrapping, configuring, deploying, scaling, upgrading, migrating, disaster recovery, monitoring, and resource management. Rook leverages the power of the Kubernetes platform to deliver its services to any storage provider through a Kubernetes operator.

[https://rook.io/](https://rook.io/)
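In practice, the Rook operator is driven by Kubernetes custom resources. A minimal sketch of a CephCluster resource, with the Ceph image version and the storage selection as illustrative assumptions rather than recommendations:

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v17.2.6  # assumption: pin the Ceph release you target
  dataDirHostPath: /var/lib/rook      # where the mons persist state on the host
  mon:
    count: 3                          # three monitors for a quorum
  storage:
    useAllNodes: true                 # assumption: consume all nodes ...
    useAllDevices: true               # ... and all empty devices on them

Once this manifest is applied, the operator bootstraps the monitors, managers, and OSDs on its own.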

Since 2020, we have been working on improving the automated operation of Ceph with Rook.io. We also plan to have the platform fully audited by various certification bodies.

· One min read
Joachim Kraftmayer

After the acquisition of CoreOS by Red Hat, support for CoreOS Container Linux was discontinued:

"CoreOS Container Linux will reach its end of life on May 26, 2020 and will no longer receive updates."
(source: https://coreos.com/releases/)

Together with our customer, we decided to provide an alternative to CoreOS before May 26, 2020.

The current project name is Gardenlinux, although the name may still change before the public release.

Gardenlinux is a full replacement for CoreOS, built on Debian and not biased towards any particular target architecture.

Currently, the project supports the following platforms: bare metal, AWS, GCP, Azure, VMware, OpenStack, KVM, and Docker.

Ports for Alibaba Cloud are still under development.

In production, Gardenlinux has already proven itself on bare metal, KVM, and AWS.

· One min read
Joachim Kraftmayer

Even though microservices on Kubernetes are gaining traction in the cloud environment, we still see high demand for managing virtual machines.

To keep the technology stack as lean as possible, we are phasing out our cloud controller environments and managing virtual machines with Kubernetes.

As a further step, this allows us to model complete environments, with both microservices and virtual machines, through one technology: Kubernetes.
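The post does not name the tooling; KubeVirt is one widely used operator for running virtual machines on Kubernetes, so the following is a sketch under that assumption. A minimal VirtualMachine resource:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm                 # hypothetical name for illustration
spec:
  running: true                 # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:        # demo image shipped as a container
            image: quay.io/kubevirt/cirros-container-disk-demo

The VM then appears as a pod-backed workload and can be managed with the same kubectl workflows as the microservices next to it.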

· One min read
Joachim Kraftmayer

On behalf of a customer, we provided a Kubernetes platform tailored to ONAP as a managed service.

ONAP is a comprehensive platform for orchestration, management, and automation of network and edge computing services for network operators, cloud providers, and enterprises. Real-time, policy-driven orchestration and automation of physical and virtual network functions enables rapid automation of new services and complete lifecycle management critical for 5G and next-generation networks.

· One min read
Joachim Kraftmayer

We use programming languages to define customised infrastructure environments with Terraform, relying on language constructs and tools that follow Infrastructure as Code (IaC) design patterns.

Functions and libraries within these languages let us develop complex infrastructure projects.

We use languages such as TypeScript, Python, and Go to map multi-cloud environments, with a modular and open architecture that includes hundreds of providers and thousands of module definitions.
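This description matches CDK for Terraform (CDKTF), although the post does not name the tool explicitly. A minimal sketch in TypeScript, assuming the cdktf and @cdktf/provider-aws packages and a placeholder AMI id:

import { Construct } from "constructs";
import { App, TerraformStack } from "cdktf";
import { AwsProvider } from "@cdktf/provider-aws/lib/provider";
import { Instance } from "@cdktf/provider-aws/lib/instance";

class WebStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // Configure the AWS provider for a single region.
    new AwsProvider(this, "aws", { region: "eu-central-1" });

    // Declare an EC2 instance; the AMI id is a placeholder for illustration.
    new Instance(this, "web", {
      ami: "ami-0123456789abcdef0",
      instanceType: "t3.micro",
    });
  }
}

const app = new App();
new WebStack(app, "web-stack");
app.synth(); // `cdktf synth` renders this into Terraform configuration

Because the stack is ordinary TypeScript, it can be split into functions and reusable constructs, which is what makes larger multi-cloud projects manageable.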

· 2 min read
Joachim Kraftmayer

Validate that the RBD cache is active on your client

The cache has been enabled by default since version 0.87.

To enable the cache on the client side, add the following to /etc/ceph/ceph.conf:

[client]
rbd cache = true                            # enable librbd client-side caching
rbd cache writethrough until flush = true   # stay in writethrough mode until the first flush arrives

add local admin socket

To be able to verify the cache status on the client side as well, add the following two parameters:

[client]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok   # one socket per librbd client process
log file = /var/log/ceph/

configure permissions and security

Both paths must be writable by the user running the RBD library, and security frameworks such as SELinux or AppArmor must be configured to allow this access.
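A minimal sketch of that setup, assuming the librbd client runs as the hypothetical user qemu (adjust user and group to your environment):

sudo mkdir -p /var/run/ceph /var/log/ceph            # socket and log directories
sudo chown qemu:qemu /var/run/ceph /var/log/ceph     # assumption: librbd runs as user 'qemu'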

request info via the admin socket

Once this is done, start the application that uses librbd (kvm, docker, podman, ...) and request the information via the admin daemon socket:

$ sudo ceph --admin-daemon /var/run/ceph/ceph-client.admin.66606.140190886662256.asok config show | grep rbd_cache
"rbd_cache": "true",
"rbd_cache_writethrough_until_flush": "true",
"rbd_cache_size": "33554432",
"rbd_cache_max_dirty": "25165824",
"rbd_cache_target_dirty": "16777216",
"rbd_cache_max_dirty_age": "1",
"rbd_cache_max_dirty_object": "0",
"rbd_cache_block_writes_upfront": "false",

Verify the cache behaviour

To measure the performance difference, you can deactivate the cache in the [client] section of your ceph.conf as follows:

[client]
rbd cache = false

Then run a fio benchmark with the following command (the job name is arbitrary, but fio requires one):

fio --name=rbd-cache-test --ioengine=rbd --pool=<pool-name> --rbdname=rbd1 --direct=1 --fsync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based

Finally, run the test once with the RBD client cache enabled and once with it disabled; you should notice a significant difference.

Sources

https://www.sebastien-han.fr/blog/2015/09/02/ceph-validate-that-the-rbd-cache-is-active/