
99 posts tagged with "ceph"


· One min read
Joachim Kraftmayer

On behalf of the customer, we provided an optimised Kubernetes platform for ONAP as a managed service.

ONAP is a comprehensive platform for the orchestration, management, and automation of network and edge computing services for network operators, cloud providers, and enterprises. Real-time, policy-driven orchestration and automation of physical and virtual network functions enables rapid automation of new services and the complete lifecycle management that is critical for 5G and next-generation networks.

· One min read
Joachim Kraftmayer

When moving a customer’s on-premises data centre to the public cloud, we were commissioned to plan the Ceph clusters for the Microsoft Azure environment for use with Microsoft Azure AKS. After an intensive testing and optimisation phase, we put the Ceph clusters into production and took over the migration to the Microsoft Azure AKS offering.

· One min read
Joachim Kraftmayer
*TISAX* (Trusted Information Security Assessment Exchange) is an information security standard defined by the automotive industry. Since 2017, a large number of manufacturers and suppliers in the German automotive industry have required their business partners to hold a valid TISAX certification.
[https://de.wikipedia.org/wiki/TISAX](https://de.wikipedia.org/wiki/TISAX)

We are supporting the customer in replacing its existing storage solution and in introducing and commissioning Ceph as a future-proof, TISAX-compliant storage solution for its internal processes and data volumes.

The customer decided to connect its existing environment via NFS with Kerberos authentication, and its private cloud via RBD.
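A minimal sketch of what the RBD side of such a setup can look like; the pool and image names and the image size are hypothetical, not taken from the customer environment. The NFS side is typically served by NFS-Ganesha, where Kerberos is enabled via its SecType setting.

```bash
# Create and initialise a pool for RBD (pool/image names are hypothetical)
ceph osd pool create rbd-private-cloud
rbd pool init rbd-private-cloud

# Create a block device image and map it on a client node
rbd create --size 100G rbd-private-cloud/vol01
sudo rbd map rbd-private-cloud/vol01   # exposes the image, e.g. as /dev/rbd0
```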

· 11 min read
Mark Nelson

Hello Ceph community! It's that time again for another blog post! Recently, a user on the ceph subreddit asked whether Ceph could deliver 10K IOPS in a combined random read/write FIO workload from one client. The setup consists of 6 nodes with 2× 4TB FireCuda NVMe drives each. They wanted to know if anyone would mind benchmarking a similar setup and reporting the results. Here at Clyso we are actively working on improving the Ceph code to achieve higher performance. We have our own tests and configurations for evaluating our changes to the code, but it just so happens that one of the places we do our work (the upstream ceph community performance lab!) appears to be a good match for testing this user's request. We decided to sit down for a couple of hours and give it a try. u/DividedbyPi, one of our friends over at 45drives.com, wrote that they are also going to give it a shot and report the results on their YouTube channel in the coming weeks. We figure this could be a fun way to get results from multiple vendors. Let's see what happens!
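For reference, a combined random read/write FIO run of the kind described could look like the sketch below; the pool and image names, the 70/30 read/write mix, the 4K block size, and the queue depth are assumptions, not the exact parameters from the Reddit thread.

```bash
# Hypothetical mixed 4K random read/write benchmark from a single client,
# using fio's librbd engine against a pre-created RBD image.
fio --name=randrw-test \
    --ioengine=rbd --clientname=admin --pool=rbd --rbdname=bench01 \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=64 --numjobs=1 \
    --time_based --runtime=300 --group_reporting
```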

· One min read
Joachim Kraftmayer

After more than four years of development, mClock is the default scheduler for Ceph Quincy (version 17). If you don't want to use it, you can disable it via the osd_op_queue option.

WPQ was the default scheduler before Ceph Quincy, and changing the option requires a restart of the OSDs.
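A short sketch of switching back to WPQ; the OSD id and the systemd unit name are placeholders and may differ depending on how the cluster is deployed (e.g. under cephadm).

```bash
# Switch the OSD op queue scheduler back to WPQ
ceph config set osd osd_op_queue wpq

# Confirm the configured value
ceph config get osd osd_op_queue

# The change only takes effect after restarting the OSDs
sudo systemctl restart ceph-osd@0   # repeat for each OSD id
```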

Sources:

https://docs.ceph.com/en/quincy/rados/configuration/osd-config-ref/#confval-osd_op_queue

https://docs.ceph.com/en/quincy/rados/configuration/osd-config-ref/#qos-based-on-mclock