Expanding Kubernetes ResourceQuota with Capsule v0.1.1

As multi-tenancy continues to remain a hot topic in the market, we are proud to announce the next release of Capsule multi-tenancy operator v0.1.1!

Tuesday, January 25, 2022, by Dario Tranchitella

While customers continue to find value in Capsule multi-tenancy, which enables them to operate with 70-90% fewer Kubernetes clusters, we keep receiving requests for new and interesting capabilities.

A core capability in this version expands the native ResourceQuota options offered by Kubernetes. From a practical standpoint, users can now easily limit the number of resources deployed in a Namespace and, subsequently, in a Tenant!

While it is always challenging to deliver new software, we are very proud of the continued interest and positive feedback that we are receiving on our Capsule offering, with over 650 stargazers from all over the world. Our stargazers have suggested new feature designs backed by real use-case scenarios, guidance we really appreciate, and have helped us address issues. So thank you to all the stargazers, and please keep your ideas coming!

Now, let's get more specific about Clastix's new Custom ResourceQuota capabilities in Capsule v0.1.1.


The current state with ResourceQuota API

ResourceQuota is a Namespace-scoped API that limits the number of resources that can be created: these can be Pods, Services, ConfigMaps, and other Kubernetes core resources. The goal is dead simple: avoid the unlimited sprawl of resources that could fill the cluster capacity, creating safety boundaries between Namespaces.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: resources
spec:
  hard:
    limits.cpu: "8"
    limits.memory: 16Gi
    requests.cpu: "8"
    requests.memory: 16Gi
  scopes:
  - NotTerminating

Looking at the example above, the ResourceQuota named resources ensures the creation of Pods is blocked whenever one of the specified quotas would exceed its threshold.

More interesting is the ability to assign scopes, a sort of selector matching the resources that must be targeted during the quota calculation: this is something we cared a lot about when designing Capsule, providing a non-opinionated way to build your business logic.

ResourceQuota is the API to use when limiting resources natively supported by Kubernetes, but it's not the right one for complex situations such as multi-tenant environments: there's no way to limit the usage of Custom Resources managed by Operators at the Tenant scope.


Presenting Capsule's Custom ResourceQuota

With the new v0.1.1 Capsule release, we wanted to provide further tools to help cluster administrators build enforced Tenant-based platforms with a single source of truth, avoiding the need to write their own admission controllers or operators.

The new annotation prefix quota.resources.capsule.clastix.io has been introduced, providing a way to define a limit on Custom Resources in a Tenant, leveraging Kubernetes capabilities at a wider scope.

apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: solar
  annotations:
    quota.resources.capsule.clastix.io/stations.green.energy.io_v1: "3"
spec:
  owners:
  - kind: User
    name: alice

Let's imagine Alice, the owner of the Tenant named solar, who operates energy plants: in their Namespaces, multiple Custom Resources of kind Station must be deployed, with a constraint of, say, a maximum of 3 instances across their Tenant, which is made of 3 Namespaces.

Alice starts by deploying their first Station resource; afterwards, the Tenant reports the usage as follows.

$: kubectl get tenants.capsule.clastix.io solar -o yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: solar
  annotations:
    quota.resources.capsule.clastix.io/stations.green.energy.io_v1: "3"
    used.resources.capsule.clastix.io/stations.green.energy.io_v1: "1"
  creationTimestamp: "2022-01-25T18:37:17Z"
  resourceVersion: "2664"
  uid: b51ced1d-8901-4ef4-980b-cbbdb17a1ff6
spec:
  owners:
  - kind: User
    name: alice

Alice needs to deploy more instances and proceeds as follows.

$: kubectl --namespace solar-site-2 apply -f station.yaml
stations.green.energy.io/collector created

$: kubectl --namespace solar-site-3 apply -f station.yaml
stations.green.energy.io/collector created

Everything runs smoothly so far, but what happens if Alice tries to deploy an additional resource, exceeding the assigned quota? Let's find out.

$: kubectl --namespace solar-site-1 apply -f new-station.yaml
Error from server: admission webhook "cordoning.tenant.capsule.clastix.io" denied the request: Resource solar-site-1/new-collector in API group green.energy.io_v1 cannot be created, limit usage of 3 has been reached

As you can see, Capsule kicks in, blocking the creation of additional resources that would exceed the Tenant's assigned quota. This is made possible by the Capsule Policy Engine, which leverages Kubernetes Admission Controllers to build a set of rules according to the multi-tenancy logic put in place by the cluster administrators: in our case, limiting the number of Station instances.

(Figure: the Capsule Policy Engine)

Quotas and their usage are stored in the Tenant's annotations map and can be easily parsed out.

$: kubectl get tenants.capsule.clastix.io solar -o yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: solar
  annotations:
    quota.resources.capsule.clastix.io/stations.green.energy.io_v1: "3"
    used.resources.capsule.clastix.io/stations.green.energy.io_v1: "3"
  creationTimestamp: "2022-01-25T18:37:17Z"
  resourceVersion: "2664"
  uid: b51ced1d-8901-4ef4-980b-cbbdb17a1ff6
spec:
  owners:
  - kind: User
    name: alice
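The quota and used annotation pairs above can be consumed by client-side tooling with a few lines of code. Here is a minimal illustrative sketch, assuming the Tenant object is available as a plain Python dictionary (the shape returned by a generic Kubernetes client); the helper name remaining_quota is ours, not part of Capsule:

```python
# Annotation prefixes used by Capsule v0.1.1 for custom resource quotas.
QUOTA_PREFIX = "quota.resources.capsule.clastix.io/"
USED_PREFIX = "used.resources.capsule.clastix.io/"

def remaining_quota(tenant: dict) -> dict:
    """Return, per custom resource key, how many instances can still be created."""
    annotations = tenant.get("metadata", {}).get("annotations", {})
    remaining = {}
    for key, value in annotations.items():
        if key.startswith(QUOTA_PREFIX):
            resource = key[len(QUOTA_PREFIX):]
            # Missing "used" annotation means no instance has been created yet.
            used = int(annotations.get(USED_PREFIX + resource, "0"))
            remaining[resource] = int(value) - used
    return remaining
```

For the solar Tenant shown above, this yields 0 remaining stations.green.energy.io_v1 instances, matching the admission webhook's refusal.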


Keep in mind that the annotation key suffix, composed from the API Group, Version, and plural Kind, follows the pattern ${plural_kind}.${api_group}_${version}.
A more structured declaration will be provided in future Capsule API versions, offering seamless integration and a smoother developer experience.
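To make the pattern concrete, here is a small illustrative sketch; the function names quota_annotation_key and parse_resource_suffix are hypothetical, not part of the Capsule codebase, and the parsing assumes the version is the segment after the last underscore:

```python
def quota_annotation_key(plural: str, group: str, version: str) -> str:
    """Compose the Tenant annotation key following ${plural_kind}.${api_group}_${version}."""
    return f"quota.resources.capsule.clastix.io/{plural}.{group}_{version}"

def parse_resource_suffix(suffix: str) -> tuple:
    """Split '<plural>.<group>_<version>' back into (plural, group, version)."""
    resource, _, version = suffix.rpartition("_")   # version after the last "_"
    plural, _, group = resource.partition(".")      # plural before the first "."
    return plural, group, version
```

For example, quota_annotation_key("stations", "green.energy.io", "v1") produces the quota.resources.capsule.clastix.io/stations.green.energy.io_v1 key used throughout this post.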

This feature enables fine-grained control over Custom Resources that is not offered natively by Kubernetes and, moreover, extends quota enforcement to the Tenant level, a concept that doesn't exist in the Kubernetes domain and that Capsule, instead, addresses.

Thanks to the Custom ResourceQuota offered by Capsule, cluster administrators can:

  • offer a static billing approach with a hard quota

  • offer in advance a pool of instances, mitigating resource exhaustion

  • create quota rules at the Tenant level, not only at the Namespace one


What more to expect from Capsule?

With more adopters and end-users super-charging the Kubernetes multi-tenancy capabilities thanks to Capsule, we're excited to continue our development progress, adding new features and supporting more use-cases.

CLASTIX, the company behind Capsule, helps companies and cloud providers build resilient and fault-tolerant Kubernetes platforms, powered by our Open Source tools and state-of-the-art Cloud Native solutions.

As a reminder, it takes less than 10 minutes to add multi-tenancy to a Kubernetes cluster with Capsule!

To learn more, please schedule a 30-minute demo with one of our experts to understand how multi-tenancy can boost your operational efficiency. We are confident this investment of time will pay big dividends in a very short period!