The End of Kubernetes as We Know It: A New Era of Multi‑Tenant Control Planes

12 min read
Mangirdas Judeikis

Kubernetes is evolving. For nearly a decade, it has been the default choice for deploying and managing containers at scale – the go-to platform for container orchestration. However, Kubernetes today is no longer just a container orchestrator; it has grown into a foundational technology for modern infrastructure. While some people still try to create unified cloud standards or agree on shared communication protocols[^10], they overlook the fact that we already have such a standard – not by explicit choice, but through Kubernetes' success and adoption. Industry experts foresee Kubernetes becoming the common control plane spanning clouds, datacenters, and the edge[^1]. This means Kubernetes provides a consistent way to manage containers, and all kinds of workloads and resources, across diverse environments, making it the de facto standard for managing services and resources.

This transformation didn't happen overnight. The Kubernetes API and its resource model – often called the Kubernetes Resource Model (KRM) – have become an industry standard for declarative infrastructure management. Many open-source projects now extend Kubernetes well beyond running containers. For example, Crossplane, a Cloud Native Computing Foundation (CNCF) project, builds on Kubernetes to orchestrate "anything… not just containers"[^2], leveraging Kubernetes' reliability and its integration capabilities with other tools. Likewise, projects for certificates (cert-manager), secrets management, databases, and more use Kubernetes as a universal automation engine. The message is clear: Kubernetes has grown into a powerful general-purpose platform API for infrastructure.
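
To make this concrete, here is a minimal sketch of what such a declarative, KRM-style resource can look like. The `Database` kind and the `example.org` API group are hypothetical, invented purely for illustration; real projects such as Crossplane define their own groups and kinds:

```yaml
# A hypothetical custom resource following the Kubernetes Resource Model:
# the user declares desired state under spec, and a controller continuously
# reconciles the real-world object (here, a database) to match it.
apiVersion: example.org/v1alpha1  # hypothetical API group, for illustration only
kind: Database
metadata:
  name: orders-db
  namespace: team-payments
spec:
  engine: postgres
  version: "15"
  storageGB: 50
```

Because every such resource shares the same shape – apiVersion, kind, metadata, spec – generic tooling like kubectl, RBAC, admission control, and GitOps pipelines works unchanged even for resources that have nothing to do with containers.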

So what's the catch? As organizations embrace Kubernetes for more and more use cases, new challenges have surfaced in how we traditionally run it.

Lifecycle Management - Configuration and Installation Reconsidered

22 min read
Vasu Chandrasekhara
Uwe Krüger

Software configuration – together with the procedures, practices, and tools for first-time installation, patching, and updating – is at the heart of software lifecycle management.

Generally, lifecycle management involves the parametrization and adaptation of a generic (installation) procedure. There are plenty of popular practices, tools, and environments for this, such as Terraform, Chef, Ansible, package managers like APT, Crossplane, or even Kubernetes (we will see later why it appears in this list). Configuration is then used to adapt the generic installation procedure to the needs of a particular application or installation, as sketched below.
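
As a sketch of what such a configuration can look like, consider a hypothetical values file consumed by a generic installation procedure; every key and value below is invented for illustration:

```yaml
# Hypothetical configuration adapting one generic installation procedure
# to one concrete installation; the installer reads these values and
# parametrizes its steps accordingly.
installation:
  environment: production
  region: eu-central-1
application:
  replicas: 3
  version: "1.4.2"
database:
  host: db.internal.example.com
  port: 5432
features:
  metrics: true
  tracing: false
```

The generic procedure itself stays untouched; only this value structure changes from installation to installation.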

There are several approaches to simplifying the description that controls the installation procedure. They range from templating, which leverages patterns and rules to avoid duplicating information, to complete general-purpose or product-specific Domain Specific Languages (DSLs) that express complex configuration descriptions closer to the problem domain than simple value structures do. Such DSLs can be generalized by frameworks, shifting installer development from a general-purpose language to the composition of DSL elements.
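
To illustrate the templating end of this spectrum, here is a minimal Helm-style sketch that renders values like the ones above into a Kubernetes manifest; the template is illustrative and not taken from any real chart:

```yaml
# A generic manifest parametrized by configuration values; the {{ ... }}
# expressions are substituted at install time, so installation-specific
# details live in one values file instead of being duplicated in manifests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: {{ .Values.application.replicas }}
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: "registry.example.com/myapp:{{ .Values.application.version }}"
```

A DSL-based approach goes further and replaces the textual template with typed language constructs, but the division of labor stays the same: a generic procedure plus installation-specific configuration.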

In the following, we will systematically describe what configurations look like and how they work together with installation and update procedures.

While complex initial installations may be covered by this approach, we conclude with the argument that optimizing configurations and configuration DSLs will not solve the automation problem for complex updates. Instead, the crucial aspect is the abstraction level between the configuration elements, which focus on the problem domain, and the elements ultimately maintained in the target environment. Complex updates require flexible, problem-domain-specific coding.