Multi-tenancy options for Kubernetes: Tenant, or Namespaces? Pick at least one!

By Dario Tranchitella, 13 Jan 2022.

With the accelerating adoption of Kubernetes, the question of multi-tenancy is gaining traction and becoming ever more relevant. Implementing a multi-tenancy architecture is proving to be one of the more strategic decisions to consider when building Kubernetes-based infrastructure, as it has long-term implications for operational efficiency, scaling velocity, and costs.
From experience with our customer base, adopting a multi-tenancy architecture results in deploying 70% fewer clusters, with greater operational agility and lower costs. And adding multi-tenancy to a Kubernetes cluster can take less than 10 minutes!

The journey to a Kubernetes-based infrastructure usually starts small and scales rapidly after validation. While the lack of a robust multi-tenancy architecture may not hurt in smaller deployments, it becomes a serious issue as companies scale, need to respond rapidly to changing requirements, and ultimately have to manage and pay for larger infrastructure. Hence, making a multi-tenancy decision early is very important.

There are different approaches to achieving multi-tenancy. As with everything, there are nuances and differences that need to be understood to make the right business decision. In this blog, I want to compare multi-tenancy achieved through Kubernetes Namespaces with how Clastix approached the architecture at the Tenant level.

Kubernetes Namespaces

Kubernetes Namespace resources provide a way to group workloads and scoped resources in something that, in the past, was referred to as a virtual cluster. That definition is not accurate, though, and is in fact misleading.

The community has been actively looking for ways to address the need for multi-tenancy and started a working group (WG) named multitenancy, which is leading the following projects:

  • Benchmarks: a set of compliance tests to determine if a cluster is well-configured for different levels of multi-tenancy
  • Hierarchical Namespace (aka HNC): a project that graduated in May 2021, allowing Namespace resources to own each other and form a tree of resources
  • Virtual Cluster: also graduated in May 2021, aimed at offering harder multi-tenancy using virtualized control planes

Hierarchical Namespace Controller, also: go with Namespaces

In the simplest case, a Kubernetes cluster is used by a single operator who connects to it from their terminal, and everything is smooth and dead simple; the same holds for a small set of operators acting together as a single tenant.

Things start getting more complicated when multiple tenants need their own portion of the cluster and the Namespace resource shows its limits, leading to the inglorious kube-sprawl, or cluster-sprawl.

Cluster sprawl is the unrestricted growth of Kubernetes clusters created to meet the multi-tenancy requirements of enterprise companies, putting more operational toil on the IT department, which has to monitor, operate, and maintain every one of them: a hassle that grows with each new cluster.

The HNC finally offers a solution to prevent this, allowing a single cluster to serve many tenants: thanks to a controller, the cluster administrator can create a hierarchical tree of Namespace resources, preparing the ground for policy enforcement such as resource quotas, Secret propagation, and Limit Ranges.
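As a sketch of how this looks in practice, HNC creates a child Namespace when an anchor object is placed in the parent. The API group, version, and the example names below are illustrative assumptions; check the release of HNC you have installed:

```yaml
# Hypothetical example: creates a child Namespace "team-a" under an
# existing parent Namespace "tenants"; HNC then propagates policy
# objects (e.g. RoleBindings, NetworkPolicies) from parent to child.
apiVersion: hnc.x-k8s.io/v1alpha2
kind: SubnamespaceAnchor
metadata:
  name: team-a        # name of the child Namespace to create
  namespace: tenants  # parent Namespace that will own the child
```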

While Namespaces are a great start for addressing multi-tenancy, they still present limitations, especially in regulated environments.

Meet the Tenant approach by Clastix's Capsule

Despite the limitations of the Namespace resource being mitigated by several tools such as HNC, Kubernetes still struggles to provide a native abstraction that addresses tenant requirements, such as partitioning the cluster according to business needs. This puts additional burden on the administrators supporting the developer community.

In the absence of such an abstraction, Capsule introduces the Tenant resource: a collection of Namespaces bound to a specification that enforces user-defined policies and accounts for quotas and limits at the Tenant level.

Capsule has been architected not only to address the multi-tenancy issue, but to do so while maintaining the original developer experience so beloved by end users, enabling them to operate in a self-service model, thus increasing velocity and reducing the workload on administrators.

Extending Kubernetes is tricky, and prone to breaking the developer experience expected and beloved by end users who want to stick to the official documentation. With Capsule, once a Tenant is created, any user belonging to that Tenant can provision their own Namespace resources in a self-service manner, without external tools or plugins.
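For example, a minimal Tenant manifest could look like the following. The field names follow Capsule's v1beta1 API, and the tenant name "gas" is an illustrative assumption; treat this as a sketch rather than a reference:

```yaml
# Illustrative Tenant: user "alice" becomes the owner and can
# create Namespaces inside this Tenant in a self-service fashion.
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: gas
spec:
  owners:
    - name: alice
      kind: User
```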

$: kubectl apply -f path/to/my/tenant.yaml
tenant.capsule.clastix.io/gas created

$: kubectl --as alice --as-group capsule.clastix.io create namespace gas-production
namespace/gas-production created

With just a single Custom Resource Definition, there's no need for custom binaries, plugins, or anything else outside upstream: the single source of truth for the multi-tenant policies is the Tenant manifest, which allows specifying a set of trusted registries, limiting the number of Namespaces owned by a Tenant, and much more.
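To give an idea of what the Tenant manifest can express, here is a sketch combining a Namespace quota with a trusted-registries policy. The field names follow Capsule's v1beta1 API and the registry hostname is hypothetical; verify against the Capsule version you run:

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: gas
spec:
  owners:
    - name: alice
      kind: User
  namespaceOptions:
    quota: 3                # at most 3 Namespaces in this Tenant
  containerRegistries:
    allowed:                # Pods may only pull from these registries
      - docker.io
      - registry.internal.example.com   # hypothetical private registry
```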

More examples of how to design your multi-tenancy are available in the Capsule documentation.

But how can resource quotas and limits be managed, given that the corresponding resources (such as ResourceQuota) are Namespace-scoped and not aware of being shared by a Tenant? Capsule is smart enough to calculate the usage across all the Namespaces belonging to a Tenant and update the related quotas, without the need for additional Custom Resources, eliminating the need to create one cluster per tenant.
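A sketch of such a Tenant-scoped quota follows. The spec.resourceQuotas shape is taken from Capsule's v1beta1 API and the figures are illustrative assumptions; check your installed version:

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: gas
spec:
  owners:
    - name: alice
      kind: User
  resourceQuotas:
    scope: Tenant        # usage is summed across all the Tenant's Namespaces
    items:
      - hard:
          limits.cpu: "8"      # total CPU limit for the whole Tenant
          limits.memory: 16Gi  # total memory limit for the whole Tenant
```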

Tenant, or no Tenant, that is the question

Although the community is addressing these needs upstream by providing tools and technologies, companies must start thinking seriously about how to structure their Kubernetes clusters to empower their developer and operations teams.

Clastix has gained significant experience around multi-tenancy, and Capsule is the community's response to these needs, acting as a transparent framework for companies that want to get the best out of Kubernetes while limiting the operational burden and giving their developers the velocity to provision their Namespaces with a single terminal command.

Please contact us for a demo of Capsule using the following link, and meet the maintainers to learn more about how your company can benefit from addressing multi-tenancy!

And stay tuned, we are far from done creating value with multi-tenancy, as we have some exciting developments in progress…