
Blog

  • Garden Linux: Enabling AI on Kubernetes with NVIDIA GPUs

    Artificial Intelligence (AI) has become essential for business innovation, enabling companies to unlock new revenue streams, automate processes, and make data-driven decisions at scale.

    There is industry-wide agreement that Kubernetes provides an ideal platform for running AI workloads (see the Cloud Native AI Whitepaper). Furthermore, the CNCF community is in the process of defining an infrastructure-level AI Conformance, which will make Kubernetes ubiquitous for AI workloads.

    But for Kubernetes to support GPUs, the worker nodes' operating systems must be equipped with the right GPU drivers and the associated access frameworks.
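    Once the node OS ships the drivers and the NVIDIA device plugin is deployed, workloads request GPUs through the standard extended resource name. A minimal sketch (pod name and image are illustrative):

    ```yaml
    # Pod requesting one NVIDIA GPU via the extended resource exposed
    # by the NVIDIA device plugin; only schedulable on GPU-enabled nodes.
    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-example
    spec:
      restartPolicy: Never
      containers:
        - name: cuda-test
          image: nvidia/cuda:12.4.1-base-ubuntu22.04
          command: ["nvidia-smi"]
          resources:
            limits:
              nvidia.com/gpu: 1
    ```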

  • From Desired State to the Status of a Resource

    In the previous blog post, we looked into the heart of Kubernetes' functionality: its API-centric core model, which deals only with the CRUD of typed documents; the externalized business logic in reconciling controllers; and their archetypes. In this blog post, we will discuss the layout and specifics of Custom Resource Definitions (CRDs) used to extend the Kubernetes Resource Model.

  • Kubernetes API Server and Controller Archetypes

    Why is the status of a resource in Kubernetes so difficult to determine? Hard to believe, since we are talking about Kubernetes, the orchestration platform that made it so easy to deploy, scale, and manage containerized applications. In the next few blog posts we will take a journey, starting by exploring the Kubernetes Resource Model, the heart of Kubernetes' functionality. We will discover its responsibilities and its declarative nature, then visit the controllers, and finally look at the resource manifests, where users define the desired state and where the status also has its home. When extending Kubernetes, you are responsible for making the status meaningful; therefore we close with some best practices for the status subresource.
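    To make the desired-state/observed-state split concrete, here is a sketch of a custom resource whose controller reports back through the status subresource, following the common Kubernetes conditions convention. The resource kind and all field values are hypothetical:

    ```yaml
    # spec: what the user wants; status: what the controller observed.
    apiVersion: example.org/v1alpha1
    kind: Database
    metadata:
      name: orders-db
      generation: 4
    spec:
      version: "16"
      replicas: 3
    status:
      observedGeneration: 4   # which spec generation the controller last acted on
      readyReplicas: 3
      conditions:
        - type: Ready
          status: "True"
          reason: AllReplicasUp
          message: All 3 replicas are serving traffic
          lastTransitionTime: "2025-01-01T00:00:00Z"
    ```

    Comparing `status.observedGeneration` with `metadata.generation` tells clients whether the reported status refers to the current spec or to a stale one.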

  • Open Resource Discovery (ORD): A New Open Standard for Discovering Services, Events, and Data Products

    In complex enterprise environments, knowing which digital services, events, and data products are available — and how to integrate with them — is a well-known challenge. As businesses evolve, the ability to discover, understand, and consume digital offerings becomes increasingly critical. Open Resource Discovery (ORD), now an independent open standard under the Linux Foundation (via the NeoNephos Foundation), was designed to solve this challenge. It provides a standard, automated way for industries to describe and publish services, events, and data products directly from their source systems, making it an ideal foundation for digital collaboration and integration across platforms.

  • Namespace Support in OpenBao: A Key Building Block for Apeiro Security Architecture

    The OpenBao project has just published the 2.3 Beta release with support for namespaces, a feature that introduces logical isolation within a single deployment. This enhancement allows the Apeiro Reference Architecture to securely separate sensitive information from different operator groups and trust domains across layers of the Apeiro stack.

    Major parts of the namespace feature were contributed by members of the Apeiro project, providing a solid foundation for multi-tenancy that extends beyond the use cases of the multi-provider cloud-edge continuum.

  • The End of Kubernetes as We Know It: A New Era of Multi‑Tenant Control Planes

    Kubernetes is evolving. For nearly a decade, it has been the default choice for deploying and managing containers at scale – the go-to platform for container orchestration. However, Kubernetes today is no longer just a container orchestrator; it has grown into a foundational technology for modern infrastructure. While some people still try to create unified cloud standards or agree on shared communication protocols[^10], they ignore the fact that we already have this standard – not by explicit choice, but through its success and adoption. Industry experts foresee Kubernetes becoming the common control plane spanning clouds, datacenters, and the edge[^1]. This means Kubernetes provides a consistent way to manage containers and all kinds of workloads and resources across diverse environments, making it de facto the most widely adopted standard for managing services and resources.

    This transformation didn't happen overnight. The Kubernetes API and its resource model - often called the Kubernetes Resource Model (KRM) - have become an industry standard for declarative infrastructure management. Many open-source projects now extend Kubernetes well beyond running containers. For example, Crossplane, a Cloud Native Computing Foundation (CNCF) project, builds on Kubernetes to orchestrate "anything… not just containers"[^2], leveraging Kubernetes' reliability and integration capabilities with other tools. Likewise, projects for certificates (cert-manager), secrets management, databases, and more are leveraging Kubernetes as a universal automation engine. The message is clear: Kubernetes has grown into a powerful general-purpose platform API for infrastructure.

    So what's the catch? As organizations embrace Kubernetes for more use cases, new challenges have surfaced with how we traditionally run Kubernetes.

  • Lifecycle Management - Configuration and Installation Reconsidered

    Software configuration and the procedures, practices, and tools for (first-time) installation, patching, or updating are at the heart of software lifecycle management.

    Generally, lifecycle management involves the parametrization and adaptation of a generic (installation) procedure. There are plenty of popular practices, tools, and environments, like Terraform, Chef, Ansible, package managers like APT, Crossplane, or even Kubernetes (we will see later why it appears in this list). Configuration is then used to adapt the installation procedure to the needs of particular applications or their installations.

    There are several approaches to simplifying the descriptions that control the installation procedure: from templating, which leverages patterns and rules to avoid duplicated information, to complete general or product-specific Domain-Specific Languages (DSLs) that express complex configuration descriptions closer to the problem domain than simple value structures. Those DSLs can be generalized by frameworks to shift installer development from a general-purpose language to the composition of DSL elements.
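    A hypothetical Helm-style template illustrates the templating approach: the values that vary per installation are factored out so they are stated once and substituted wherever needed, rather than duplicated across the manifest.

    ```yaml
    # Deployment template; {{ .Release.Name }} and {{ .Values.* }} are
    # filled in from a per-installation values file at render time.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ .Release.Name }}-web
    spec:
      replicas: {{ .Values.replicas }}
      selector:
        matchLabels:
          app: {{ .Release.Name }}-web
      template:
        metadata:
          labels:
            app: {{ .Release.Name }}-web
        spec:
          containers:
            - name: web
              image: "myapp:{{ .Values.image.tag }}"
    ```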

    In the following, we will systematically describe what configurations look like and how they work together with installation and update procedures.

    While complex initial installations may be covered by this approach, we conclude with the argument that optimizing configurations and configuration DSLs will not solve the automation problem for complex updates. Instead, the crucial aspect is the abstraction level between the configuration elements, which focus on the problem domain, and the elements ultimately maintained in their target environment. Complex updates require problem-domain-specific, flexible coding.

Funded by the European Union, NextGenerationEU; supported by the Federal Ministry of Economic Affairs and Energy on the basis of a decision by the German Bundestag


The views and opinions expressed are solely those of the author(s) and do not necessarily reflect the views of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them.
