A recipe for a bespoke on-prem Kubernetes cluster

So you want to build yourself a Kubernetes cluster? You have your reasons. Some may want to utilize the hardware they own, some may not fully trust the fancy cloud services, or simply want to have a choice and build themselves a hybrid solution. There are a couple of products available that I’ve reviewed, but you’ve decided to build a platform from scratch. And again, there are a myriad of reasons why it might be a good idea and also many that would convince you it’s not worth your precious time. In this article, I will focus on providing a list of things to consider when starting a project to build a Kubernetes-based platform using only the most popular open source components.

Target groups

Before we jump into the technicalities, I want to describe three target groups that are referred to in the sections below.

  • Startups (SUP) – very small companies or those with basic needs; their focus is on using the basic Kubernetes API and facilitating services around it
  • Medium businesses (MBU) – mid-sized companies that want to leverage Kubernetes to boost their growth and innovation; their focus is on building a scalable platform that is also easy to maintain and extend
  • Enterprises (ENT) – big companies with even bigger needs, scale, many policies, and regulations; they are the most demanding and are focused on repeatability, security, and scalability (in terms of the growing number of developers and teams working on their platform)

All these groups have different needs and thus they should build their platform in a slightly different way with different solutions applied to particular areas. I will refer to them using their abbreviations or as ALL when referring to all of them.

Installation

When to apply: Mandatory for ALL
Purpose: To have a robust and automated way of managing your cluster(s)

When deciding on installing Kubernetes without using any available distribution, you have a fairly limited choice of installers. You can try using kubeadm directly or use the more generic kubespray. The latter will help you not only install, but also maintain your cluster (upgrades, node replacement, cluster configuration management). Both of these are universal and are unaware of how cluster nodes are provisioned. If you wish to use an automated solution that also handles provisioning of cluster nodes, then Metal3 could be something you might want to try. It’s still in the alpha stage, but it looks promising.
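
To illustrate the kubeadm route, here is a minimal sketch of a cluster configuration file; the Kubernetes version, endpoint, and subnets are placeholders, and the API version differs between releases, so treat it as a starting point rather than a ready-to-use file.

```yaml
# kubeadm-config.yaml – hypothetical starting point, adjust before use
apiVersion: kubeadm.k8s.io/v1beta2            # newer releases use v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.20.4                    # match the kubelet/kubectl version you install
controlPlaneEndpoint: "k8s-api.example.local:6443"   # load-balanced API endpoint for HA
networking:
  podSubnet: "10.244.0.0/16"                  # must match the CNI plugin configuration
  serviceSubnet: "10.96.0.0/12"
etcd:
  local:
    dataDir: /var/lib/etcd
```

It would then be applied on the first control plane node with `kubeadm init --config kubeadm-config.yaml`.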

If you want a better and more cloud-native way of managing your clusters that enables easy scaling, then you may want to try the Cluster API project. It supports multiple cloud providers, but it can also be used in on-prem environments with the aforementioned Metal3, vSphere, or OpenStack.

One more thing worth noting here is the operating system used by cluster nodes. Since the future of CentOS seems unclear, Ubuntu has become the main building block for bespoke Kubernetes clusters. Some may prefer a slim alternative that has replaced CoreOS – Flatcar Linux.

Cluster autoscaler

When to apply: Highly recommended for ENT, optional for others
Purpose: Scale your platform up and down automatically

If you choose Cluster API, or your cluster otherwise uses an API to manage cluster nodes (e.g. vSphere, OpenStack, etc.), then you should also use the cluster autoscaler component. It is almost a mandatory feature for ENT, but it can also be useful for MBU organizations. By forcing nodes to be ephemeral entities that can be easily replaced/removed/added, you decrease the maintenance costs.
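
As a hedged illustration of the Cluster API case: the cluster autoscaler (run with its `clusterapi` cloud provider) discovers node groups through annotations on MachineDeployments. The annotation keys and the abbreviated spec below follow how the integration is commonly documented, but verify them against the autoscaler and Cluster API versions you actually deploy.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1          # use the API version matching your Cluster API release
kind: MachineDeployment
metadata:
  name: on-prem-cluster-workers
  annotations:
    # Range within which the cluster autoscaler may resize this node group
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "3"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "10"
spec:
  clusterName: on-prem-cluster
  replicas: 3
  selector:
    matchLabels: null
  template:
    spec:
      clusterName: on-prem-cluster
      version: v1.20.4
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: on-prem-cluster-workers
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: Metal3MachineTemplate           # or VSphereMachineTemplate / OpenStackMachineTemplate
        name: on-prem-cluster-workers
```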

Network CNI plugin

When to apply: Mandatory for ALL
Purpose: Connect containers, with optional additional features such as encryption

The networking plugin is one of the decisions that need to be taken prudently, as it cannot be easily changed afterward. To make things brief, I would shorten the list to two plugins – Calico or Cilium. Calico is older and maybe a little bit more mature, but Cilium looks very promising and utilizes the Linux kernel’s BPF. For a more detailed comparison I would suggest reading this review of multiple plugins. Choose wisely and avoid CNI plugins without NetworkPolicy support – having a Kubernetes cluster without the possibility to implement firewall rules is a bad idea. Both Calico and Cilium support encryption, which is a nice thing to have, but Cilium is able to encrypt all the traffic (Calico encrypts only pod-to-pod).
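
Whichever plugin you pick, make sure NetworkPolicy actually works on your cluster. A common sanity check is a default-deny policy per namespace followed by explicit allow rules; the namespace and labels below are just examples:

```yaml
# Deny all ingress traffic to pods in the "production" namespace by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}            # selects every pod in the namespace
  policyTypes:
    - Ingress
---
# Explicit allow rule: only pods labelled app=frontend may reach app=backend on port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - port: 8080
```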

Ingress controller

When to apply: Mandatory for ALL
Purpose: Provide an easy and flexible way to expose web applications, with optional advanced features

The Ingress controller is a component that can be easily swapped out while the cluster is running. Actually, you can have multiple Ingress controllers by leveraging the IngressClass resource introduced in Kubernetes 1.18. A comprehensive comparison can be found here, but I would limit it to a select few controllers depending on your needs.

For those looking for compatibility with other Kubernetes clusters (e.g. a hybrid solution), I would suggest starting with the most mature and battle-tested controller – the NGINX Ingress Controller. The reason is simple – you need only the basic features described in the Ingress API that have to be implemented by every Ingress controller. That should cover 90% of cases, especially for the SUP group.
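
For that basic case, an Ingress resource relying only on the standard API looks like the sketch below (the host, service name, and class name are placeholders); since Kubernetes 1.18 the `ingressClassName` field selects which controller handles it:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  namespace: production
spec:
  ingressClassName: nginx          # matches an IngressClass created by the controller
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp
                port:
                  number: 80
```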

If more features are required (sophisticated HTTP routing, authentication, authorization, etc.), then the following options are the most promising:

  • Contour – it’s the only Ingress controller that is a CNCF project at the Incubating maturity level. And it’s based on Envoy, which is the most flexible proxy out there.
  • Ambassador – has nice features, but many of them are available only in the paid version. And yes – it also uses Envoy.
  • HAProxy from HAProxy Technologies – for those who are familiar with HAProxy and want to leverage it to provide a robust Ingress controller
  • Traefik – they have an awesome logo, and if you’ve been using it for some Docker load balancing then you may find it really useful for Ingress as well

Monitoring

When to apply: Mandatory for ALL (unless you already have a monitoring solution compatible with Kubernetes)
Purpose: Provide insights on cluster state for operations teams

There is one king here – just use Prometheus. Probably the best approach would be to use an operator that also installs Grafana alongside some predefined dashboards.
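
With the Prometheus Operator in place, adding scrape targets comes down to creating ServiceMonitor objects instead of editing the Prometheus configuration by hand. A rough example follows; the label and port names are assumptions about your application and about how the operator was installed:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: webapp
  namespace: monitoring
  labels:
    release: prometheus        # must match the serviceMonitorSelector of your Prometheus instance
spec:
  selector:
    matchLabels:
      app: webapp              # Services carrying this label will be scraped
  namespaceSelector:
    matchNames:
      - production
  endpoints:
    - port: metrics            # named port on the Service exposing /metrics
      interval: 30s
```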

Logging

When to apply: Mandatory for ALL (unless a central logging solution already exists)
Purpose: Provide insights on cluster state for operations teams

It’s quite similar to monitoring – the majority of solutions are based on Elasticsearch, Fluentd, and Kibana. This suite has broad community support, and many problems have already been solved and described thoroughly in posts all over the web. ALL should have a logging solution for their platforms, and the easiest way to implement it is to use an operator like this one or a Helm chart like this one based on Open Distro (an equivalent of Elasticsearch with a more lenient, open source license).

Tracing

When to apply: Optional for ALL
Purpose: Provide insights and additional metrics useful for application troubleshooting and performance tuning

Tracing is a feature that will be highly coveted in really big and complex environments. That’s why ENT organizations should adopt it, and the best way is to implement it using Jaeger. It’s one of the graduated CNCF projects, which only makes it more appealing, as it has proven to be not only highly popular but also to have a healthy community around it. Implementation requires some work on the application’s part, but the service itself can be easily installed and maintained using this operator.
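
Once the Jaeger Operator is installed, a development-grade, all-in-one instance is a single short manifest. Production setups would instead use the production strategy with external storage, so treat this as the minimal sketch from the operator’s own examples:

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simplest
  namespace: observability
# With no spec, the operator deploys the all-in-one strategy with in-memory storage,
# which is fine for trying things out but not for production workloads.
```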

Backup

When to apply: Mandatory for ENT, optional for the rest
Purpose: Apply the “redundancy is not a backup solution” principle

ALL should remember that redundancy is not a backup solution. Although disaster recovery can be simplified with a properly implemented GitOps approach, where each change to the cluster state goes through a dedicated git repository, in many cases it’s not enough. For those who plan to use persistent storage, I would recommend implementing Velero.
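
A sketch of what using Velero looks like in practice, assuming the server component and a backup storage location are already configured; the schedule, namespaces, and retention below are arbitrary examples:

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly-backup
  namespace: velero
spec:
  schedule: "0 2 * * *"          # cron expression: every night at 02:00
  template:
    includedNamespaces:
      - "*"                      # back up all namespaces
    snapshotVolumes: true        # also snapshot persistent volumes (requires a volume snapshotter)
    ttl: 720h0m0s                # keep backups for 30 days
```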

Storage

When to apply: For ALL, if stateful applications are planned to be used
Purpose: Provide flexible storage for stateful applications and services

The easiest use case for Kubernetes is stateless applications that don’t need any storage for keeping their state. Most microservices use some external service (such as a database) that can be deployed outside of a cluster. If persistent storage is required, it can still be provided using already existing solutions from outside a Kubernetes cluster. Many of them have drawbacks, though (e.g. the need to provision persistent volumes manually, less reliability and flexibility), and that’s why keeping storage inside a cluster can be a viable and efficient alternative. I would limit the choices for such storage to the following projects:

Rook is the most popular, and when properly implemented (e.g. deployed on a dedicated cluster or on a dedicated node pool with monitoring, alerting, etc.) it can be a great way of providing storage for any kind of workload, including even production databases (although this topic is still controversial and we all need time to get accustomed to this way of running them).
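
From the application’s perspective, consuming Rook-provided storage looks like any other dynamic provisioner – you simply reference a StorageClass created during the Rook/Ceph installation. The class name below (`rook-ceph-block`) follows the Rook examples but may differ in your setup:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
  namespace: production
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block   # StorageClass backed by a Ceph RBD pool managed by Rook
  resources:
    requests:
      storage: 20Gi
```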

Security

This part is crucial for organizations that are focused on providing secure platforms for the most sensitive parts of their systems.

Non-root containers

When to apply: Mandatory for ENT and probably MBU
Purpose: Decrease the risk of exploitation of vulnerabilities found in applications or the operating system they use

OpenShift made a very brave and good decision by providing a default setting that forbids running containers under the root account. I think this setting should also be implemented by ALL organizations that want to increase the security level of the workloads running on their Kubernetes clusters. It is quite easy to achieve by enabling the PodSecurityPolicy admission controller and applying proper rules. It’s not even an external project – it’s a low-hanging fruit that should be mandatory for larger organizations. This, however, has consequences for which images can be used on the platform. Most ”official” images available on Docker Hub run as root, but I can see this changing, and hopefully it will keep improving in the future.
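
A minimal sketch of such a restrictive PodSecurityPolicy (the admission controller must be enabled on the API server, and workloads’ service accounts need RBAC permission to use the policy); the ranges and allowed volume types are illustrative:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-nonroot
spec:
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot        # reject any container trying to run as UID 0
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
      - min: 1
        max: 65535
  volumes:
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```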

Enforcing policies with Open Policy Agent

When to apply: Mandatory for ENT, optional for others
Purpose: Enforce security and internal policies

Many organizations produce tons of security policies written down in documents. They are often enforced by processes and audited yearly or even more rarely. In many cases, they aren’t adjusted to the real world and were created mostly to meet some requirements instead of protecting systems and ensuring best security practices are in place. It’s time to start enforcing these policies at the API level, and that’s where Open Policy Agent (OPA) comes into play. It’s probably not required for small organizations, but it’s definitely mandatory for larger ones where the risks are much higher. In such organizations, properly configured rules may, for example (see the sketch after this list):

  • prevent pulling images from untrusted container registries
  • prevent pulling images outside of a list of allowed container images
  • enforce the use of specific labels describing a project and its owner
  • enforce best practices that may have an impact on the platform’s reliability (e.g. defining resources and limits, use of liveness and readiness probes)
  • granularly restrict the use of the platform’s API (Kubernetes RBAC can’t be used to specify exceptions)
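
As a concrete illustration of the label rule above, here is what a constraint could look like with OPA Gatekeeper, assuming a `K8sRequiredLabels` ConstraintTemplate like the one from the Gatekeeper documentation has already been installed; the label names are examples:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: namespaces-must-declare-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]        # enforce on namespaces; could equally target Deployments, etc.
  parameters:
    labels: ["owner", "project"]    # every matched object must carry these labels
```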

Authentication

When to apply: Mandatory for ALL, for some SUP it may be optional
Purpose: Provide a way for users to authenticate and be authorized on the platform

This is actually a mandatory component for all organizations. One thing that may surprise many is how Kubernetes treats authentication and how it relies on external sources to provide information about users. This means almost unlimited flexibility, but at the same time it adds even more work and requires a few decisions to be made. To make it short – you probably want something like Dex, which acts as a proxy to your real identity provider (Dex supports many of these, including LDAP, SAML 2.0, and the most popular OIDC providers). To make it easier to use you can add Gangway. They are a pair of projects that are often used together.
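
A fragment of a Dex configuration showing the general idea – Dex fronts an LDAP server and issues OIDC tokens that Kubernetes (and Gangway) can consume. All hostnames, DNs, and the client entry are made-up examples, and the exact fields should be checked against the Dex connector documentation for the version you deploy:

```yaml
issuer: https://dex.example.com
storage:
  type: kubernetes
  config:
    inCluster: true
connectors:
  - type: ldap
    id: corporate-ldap
    name: "Corporate LDAP"
    config:
      host: ldap.example.com:636
      bindDN: cn=dex,ou=services,dc=example,dc=com
      bindPW: "read-only-bind-password"       # placeholder – keep the real value out of git
      userSearch:
        baseDN: ou=people,dc=example,dc=com
        filter: "(objectClass=person)"
        username: uid
        idAttr: uid
        emailAttr: mail
        nameAttr: cn
staticClients:
  - id: gangway
    name: Gangway
    secret: example-client-secret             # placeholder
    redirectURIs:
      - https://gangway.example.com/callback
```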

You may find Keycloak to be a more powerful alternative, but at the same time it is also more complex and difficult to configure.

Better secret management

When to apply: Mandatory for ENT
Purpose: Provide a better and more secure way of handling confidential information on the platform

For smaller projects and organizations, encrypting Secrets in the repo where they are stored should be sufficient. Tools such as git-crypt, git-secret, or SOPS do a great job of securing these objects. I especially recommend the last one – SOPS is very universal and, combined with GPG, can be used to create a very robust solution. For larger organizations, I would recommend implementing HashiCorp Vault, which can be easily integrated with any Kubernetes cluster. It requires a bit of work, and thus using it for small clusters with few applications makes little sense. For those who have dozens or even hundreds of credentials or other pieces of confidential data to store, Vault can make life easier. Auditing, built-in versioning, seamless integration, and – the killer feature – dynamic secrets. By implementing access to external services (e.g. various cloud providers, LDAP, RabbitMQ, ssh and database servers) using credentials created on demand with a short lifetime, you reach a whole different level of security for your platform.
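
For the SOPS approach, a `.sops.yaml` file at the root of the repository tells the tool which files to encrypt and with which keys; the path pattern and fingerprint below are placeholders:

```yaml
# .sops.yaml – encrypt only the data/stringData fields of Kubernetes Secret manifests
creation_rules:
  - path_regex: .*/secrets/.*\.yaml$
    encrypted_regex: "^(data|stringData)$"
    pgp: "REPLACE_WITH_YOUR_GPG_KEY_FINGERPRINT"
```

Encrypting a manifest then comes down to `sops --encrypt --in-place secrets/db-credentials.yaml`, and the encrypted file can be safely committed.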

Security audits

When to apply: Mandatory for ENT and MBU
Purpose: Get more information on potential security breaches

When handling a big environment, especially one that needs to be compliant with some security standards, providing a way to report suspicious activity is one of the most important requirements. Setting up auditing for Kubernetes is quite easy, and it can even be enhanced by generating more granular information on specific events produced not by API components, but by containers running on the cluster. The project that brings these additional features is Falco. It’s really amazing how powerful this tool is – it uses the Linux kernel’s internal API to trace all activity of a container, such as access to files, sending or receiving network traffic, access to the Kubernetes API, and much more. The built-in rules already provide some useful information, but they need to be adjusted to specific needs to get rid of false positives and to trigger alerts when unusual activities are discovered on the cluster.
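
Falco rules themselves are short YAML snippets. Below is a hedged example of a custom rule, written in the style of Falco’s default ruleset (the `spawned_process` and `container` macros come from that ruleset), which flags interactive shells started inside containers:

```yaml
- rule: Shell spawned in container (custom)
  desc: Detect an interactive shell started inside a running container
  condition: >
    spawned_process and container
    and proc.name in (bash, sh, zsh)
  output: >
    Shell spawned in a container
    (user=%user.name container=%container.name image=%container.image.repository command=%proc.cmdline)
  priority: WARNING
  tags: [container, shell]
```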

Container images security scanning

When to apply: Mandatory for ALL
Purpose: Don’t allow containers with critical vulnerabilities to run

Platform security mostly comes down to vulnerabilities in the containers running on it. That’s why it is so important to ensure that the images used to run these containers are scanned for the most critical vulnerabilities. This can be achieved in two ways – one is by scanning the images on a container registry and the other is by including an additional step in the CI/CD pipeline used for the deployment.

It’s worth considering keeping container images outside of the cluster and relying on existing container registries such as Docker Hub, Amazon ECR, Google GCR, or Azure ACR. Yes – even when building an on-prem environment it is sometimes just easier to use a service from a public cloud provider. It is especially beneficial for smaller organizations that don’t want to invest too much time in building a container registry and at the same time want to provide a proper level of security and reliability.

There is one major player in the on-prem container registry market that should be considered when building such a service. It’s Harbor, which has plenty of features, including security scanning, mirroring of other registries, and replication that allows adding more nines to its availability SLO. Harbor has a built-in Trivy scanner that works pretty well and is able to find vulnerabilities at the operating system level and also in application packages.

Trivy can also be used as a standalone tool in a CI/CD pipeline to scan the container image built by one of the stages. This one-line command might protect you from serious trouble, as many would be surprised by the number of critical vulnerabilities that exist even in official Docker images.
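
As a sketch of the CI approach, here is what such a stage could look like in GitLab CI; the variables are GitLab’s predefined ones and the stage name is arbitrary, so adapt it for your own CI system:

```yaml
container_scanning:
  stage: test
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]           # the image's default entrypoint is trivy itself
  script:
    # Fail the pipeline if any CRITICAL vulnerability is found in the freshly built image
    - trivy image --exit-code 1 --severity CRITICAL "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```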

Extra addons

On top of the basic Kubernetes features there are some interesting addons that extend them.

User-friendly interface

When to apply: Mandatory for ENT and MBU
Purpose: Allow less experienced users to use the platform

Who doesn’t like a nice GUI that helps to get a quick overview of what’s going on with your cluster and the applications running on it? Even I crave such interfaces, and I spend most of my time on the command line or in my editor. These interfaces, when designed properly, can speed up administration and just make working with the Kubernetes environment much more pleasant. The ”official” Kubernetes Dashboard project is very basic, and it’s not the tool that I would recommend for beginners, as it may actually scare people off instead of drawing them to Kubernetes. I still believe that OpenShift’s web console is one of the best, but unfortunately it cannot be easily installed on any Kubernetes cluster. If it were possible, it would definitely be my first choice. Octant looks like an interesting project that is extensible, and there are already useful plugins available (e.g. Aqua Security Starboard). It’s more of a platform than a simple web console, as it doesn’t actually run inside a cluster, but on a workstation. The other contestant in the UI category is Lens. It’s also a standalone application. It works pretty well and shows nice graphs when Prometheus is installed on the cluster.

Service mesh

When to apply: Optional for ALL
Purpose: Enable more advanced traffic management, more security, and flexibility for the applications running on the platform

Before any project name appears, there’s a fundamental question that needs to be asked – do you really need a service mesh for your applications? I wouldn’t recommend it for organizations that are just starting their journey with cloud native workloads. Having an additional layer can make the not-so-trivial management of containers even more complex and difficult. Maybe you want to use a service mesh only to encrypt traffic? Consider a proper CNI plugin that would bring this feature transparently. Maybe advanced deployment strategies seem like a good idea, but did you know that even the basic NGINX Ingress controller supports canary releases? Introduce a service mesh only when you really need a specific feature (e.g. multi-cluster communication, traffic policy, circuit breakers, etc.). Most readers would probably be better off without a service mesh, and for those prepared for the additional effort related to increased complexity the choice is limited to a few solutions. The first and most obvious one is Istio. The other that I can recommend is Consul Connect from HashiCorp. The former is also the most popular one and is often provided as an add-on in the managed Kubernetes services in the cloud. The latter seems to be much simpler and also easier to use. It’s also a part of Consul, and together they enable the creation and management of multi-cluster environments.

External DNS

When to apply: Optional for ALL, recommended for dynamic environments
Purpose: Decrease the operational work involved with managing new DNS entries

Smaller environments will probably not need many DNS records for external access via load balancers or Ingress services. For larger and more dynamic ones, having a dedicated service managing these DNS records may save a lot of time. This service is external-dns, and it can be configured to manage DNS records on most DNS services available in the cloud and also on traditional DNS servers such as BIND. This addon works best with the next one, which adds TLS certificates to your web applications.
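
In practice it works through annotations: external-dns watches Services and Ingresses and creates the corresponding records. A minimal example for a LoadBalancer Service (the hostname and labels are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp
  namespace: production
  annotations:
    # external-dns will create and maintain this record pointing at the load balancer address
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  type: LoadBalancer
  selector:
    app: webapp
  ports:
    - port: 443
      targetPort: 8443
```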

Cert-manager

When to apply: Optional for ALL, recommended for dynamic environments
Purpose: Get trusted SSL certificates for free!

Do you still want to pay for your SSL/TLS certificates? Thanks to Let’s Encrypt you don’t need to. Use of Let’s Encrypt has been growing rapidly over the past few years, and the reason it should at least be considered as part of a modern Kubernetes platform is how easy it is to automate. There’s a dedicated operator called cert-manager that makes the whole process of requesting and renewing certificates very quick and transparent to applications. Having trusted certificates saves a lot of time and trouble for those who manage many web services exposed externally, including test environments. Just ask anyone who had to inject custom certificate authority certificates into dozens of places to make all the components talk to each other without any additional effort. And cert-manager can be used for internal Kubernetes components as well. It’s one of my favourite addons and I hope many will appreciate it as much as I do.
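
A sketch of the main piece involved: a ClusterIssuer pointing at Let’s Encrypt (the email address and secret name are placeholders, and the HTTP-01 solver assumes the nginx Ingress controller):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: platform-team@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key   # ACME account key stored by cert-manager
    solvers:
      - http01:
          ingress:
            class: nginx                   # solve challenges via the nginx Ingress controller
```

On the Ingress side it is then enough to add the `cert-manager.io/cluster-issuer: letsencrypt-prod` annotation and list the host under `spec.tls`; cert-manager takes care of the rest.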

Additional cluster metrics

When to apply: Mandatory for ALL
Purpose: Get more insights and enable autoscaling

There are two additional components that should be installed on clusters used in production: metrics-server and kube-state-metrics. The first is required for the internal autoscaler (HorizontalPodAutoscaler) to work, as metrics-server exposes resource usage metrics (CPU and memory) gathered from the kubelets. The second exposes metrics about the state of the objects in the cluster (deployments, pods, nodes, and so on). I can’t imagine working with a production cluster that lacks these metrics – they should be a part of standard review processes and alerting systems.
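
With metrics-server in place, a HorizontalPodAutoscaler only needs a target and thresholds. The numbers below are illustrative, and on older clusters the API version may be `autoscaling/v2beta2` instead of `autoscaling/v2`:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # scale out when average CPU usage exceeds 70% of requests
```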

GitOps management

When to apply: Optional for ALL, recommended for ENT
Purpose: Decrease the operational work involved with cluster management

It is not that popular yet, but cluster and environment management is going to be an important topic, especially for larger organizations where there are dozens of clusters and namespaces and hundreds of developers working on them. Management techniques that use git repositories as the source of truth are known as GitOps, and they leverage the declarative nature of Kubernetes. It looks like ArgoCD has become a major player in this area, and installing it on the cluster may bring many benefits for the teams responsible for maintenance, but also for the security of the whole platform.
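
An ArgoCD Application is essentially a pointer from the cluster to a path in a git repository. A sketch that keeps a set of platform addons continuously synced (the repository URL and paths are made up):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: platform-addons
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/cluster-config.git
    targetRevision: main
    path: clusters/prod/addons
  destination:
    server: https://kubernetes.default.svc   # the cluster ArgoCD itself runs in
    namespace: kube-system
  syncPolicy:
    automated:
      prune: true      # delete resources removed from git
      selfHeal: true   # revert manual changes made outside of git
```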

Conclusion

The aforementioned projects do not even begin to exhaust the subject of the solutions available for Kubernetes. This list merely shows how many possibilities are out there, how rich the Kubernetes ecosystem is, and finally how quickly it evolves. For some it may also be surprising how many features required for running production workloads standard Kubernetes lacks. Even the multiple flavours of Kubernetes-as-a-Service available on major cloud platforms are missing most of these features, let alone the clusters that are built from scratch for on-prem environments. It shows how difficult the process of building a bespoke Kubernetes platform can become, but at the same time those who manage to put it all together can be assured that their creation will bring their organization to the next level of automation, reliability, security, and flexibility. For the rest there’s another, easier path – using a Kubernetes-based product that has most of these features built in.

Which Kubernetes distribution to choose for on-prem environments?

Most people think that Kubernetes was designed to bring more features and more abstraction layers to cloud environments. Well, I think the biggest benefits can be achieved in on-premise environments, because of the big gap between those environments and the ones that can be easily created in the cloud. This opens up many excellent opportunities for organizations which, for various reasons, choose to stay outside of the public cloud. In order to leverage Kubernetes on on-premise hardware, one of the biggest decisions that needs to be made is which software platform to use for Kubernetes. According to the official listing of available Kubernetes distributions, there are dozens of options. If you look closely at them, however, there are only a few viable ones, as many of them are either inactive or have been merged with other projects (e.g. Pivotal Kubernetes Service merged into VMware Tanzu). I expect that 3-5 of these distributions will eventually prevail in the next 2 years, each targeting its own niche market segment. Let’s have a look at those that have stayed in the game and can be used as a foundation for a highly automated on-premise platform.

1. OpenShift

I’ll start with the obvious and probably the best choice there is – OpenShift Container Platform. I’ve written about this product many times, and there’s still no Kubernetes distribution available on the market that is so rich in features. This also comes with its biggest disadvantage – the price, which for some is just too high. OpenShift is Red Hat’s flagship product and is targeted at enterprises. Of course they sell it to medium or even small companies, but the main target group is big enterprises with a big budget. It has also become a platform for Red Hat’s other products and other vendors’ services that are easily installable and available at https://www.operatorhub.io/. OpenShift can be installed in the cloud, but on-premise environments are where it shows its most powerful features. Almost every piece of it is highly automated, and this enables easy maintenance of clusters (installation, upgrades, and scaling), rapid deployment of supplementary services (databases, service mesh), and platform configuration. There is no other distribution that has achieved that level of automation. OpenShift is also the most complete solution, including integrated logging, monitoring, and CI/CD (although they are still working on switching from Jenkins to the Tekton engine, which is not that feature-rich yet).

When to choose OpenShift

  • If you have a big budget – money can’t buy happiness, but it can buy you the best Kubernetes distribution, so why hesitate?
  • If you want to have the easiest and smoothest experience with Kubernetes – a user-friendly web console that is second to none and comprehensive documentation.
  • You don’t plan to scale rapidly but you need a bulletproof solution – OpenShift can be great even for small environments, and as long as they don’t grow it can be financially reasonable
  • Your organization has few DevOps/Ops people – OpenShift is less demanding from a maintenance perspective and may help to overcome problems with finding highly skilled Kubernetes and infrastructure experts
  • The systems that your organization builds are complex – in cases where the development and deployment processes require a lot of additional services, there’s no better way to create and maintain clusters in on-premise environments than by using operators (and buying additional support for them if needed)
  • If you need support (?) – I’ve put it here just for the sake of providing some reasonable justification for the high price of an OpenShift subscription, but unfortunately many customers are not satisfied with the level of product support and thus it’s not the biggest advantage here

When to avoid OpenShift

  • All you need is the Kubernetes API – maybe all these fancy features are just superfluous and a plain Kubernetes distribution is enough, provided that you have a team of skilled people who can build and maintain it
  • If your budget is tight – that’s obvious, but many believe they can somehow overcome the high price of OpenShift by efficiently bin-packing their workloads onto smaller clusters or by getting a real bargain when ordering their subscriptions (I guess it’s possible, but only for really big orders of hundreds of nodes)
  • Your organization is an avid supporter of open source projects and avoids any potential vendor lock-in – although OpenShift includes Kubernetes and can be fully compatible with other Kubernetes distributions, there are some areas where a potential vendor lock-in can occur (e.g. reliance on built-in operators and their APIs)

2. OKD

Back in the day, Red Hat used an upstream-downstream strategy for product development, where open source upstream projects were free to use and their downstream, commercial products were heavily dependent on those upstreams and built on top of them. That changed with OpenShift 4, where its open source equivalent – OKD – was released months after OpenShift itself had been redesigned with help from the CoreOS team (Red Hat acquired CoreOS in 2018). So OKD is an open source version of OpenShift and it’s free. It’s a similar strategy to the one Red Hat has been using for years – attract people and get them accustomed to the free (upstream) versions while giving them an experience very similar to the paid products. The only difference is, of course, the lack of support and a few features that are available only in OpenShift. That’s one of the key factors to consider when deciding on a Kubernetes platform – does your organization need support, or will it get by without it? Things got a little more complicated after Red Hat (which owns the CentOS project) announced that CentOS 8 will cease to exist in the form that has been known for years. CentOS is widely used by many companies as a free version of RHEL (Red Hat Enterprise Linux), and it looks like that has changed, and we don’t know what IBM will do with OKD (I suspect it was their business decision to pull the plug). There’s a risk that OKD will also no longer be developed, or at least that it will not resemble OpenShift like it does now. For now, being still very similar to OpenShift, OKD can also be considered one of the best Kubernetes platforms to use for on-premise installations.

When to choose OKD

  • You don’t care about Red Hat addons, but still need a highly automated platform – OKD can still bring your environment to a completely different level by leveraging operators and built-in services (i.e. logging, monitoring)
  • You don’t need support, because you have really smart people with Kubernetes skills – either you pay Red Hat for its support or you build an internal team that acts as the 1st, 2nd, and 3rd line of support (not to mention the vast resources available on the web)
  • You plan to run internal workloads only, without exposing them outside – Red Hat brags about providing a curated list of container images, while OKD relies on the community’s work to provide security patches, which causes some delays; for some this can be an acceptable risk, especially if the platform is used internally
  • You need a Kubernetes distribution that is user-friendly – the web console in OKD is almost identical to the one in OpenShift, which I already described as second to none; it often helps less experienced users, and even more experienced ones can use it to perform daily tasks faster by leveraging all the information gathered in a concise form
  • You want to decrease the costs of OpenShift and use OKD for testing environments only – this idea seems reasonable from an economic point of view, and if planned and executed well it makes sense; there are some caveats though (e.g. it is against Red Hat’s license to use most of their container images)

When to avoid OKD

  • Plain Kubernetes is all you need – with all these features comes complexity that may be just not what your organization needs and you’d be better off with some simpler Kubernetes distribution
  • You expect quick fixes and patches – don’t get me wrong, it looks like they are delivered, but it’s not guaranteed and relies solely on the community (e.g. for OpenShift Origin 3, a predecessor of OKD, some container images used internally by the platform hadn’t been updated for months, whereas OpenShift provided updates fairly quickly)
  • You need a stable and predictable platform – nobody expected CentOS 8 would no longer be an equivalent of RHEL, and similar decisions by IBM executives could affect OKD; there’s a risk that sometime in the future all OKD users would have no choice but to migrate to some other solution

3. Rancher

After Rancher was acquired by SUSE, a new chapter opened for this niche player on the market. Although SUSE already had their own Kubernetes solution, it’s likely that they will only keep a single offering of that type, and it’s going to be Rancher. Basically, Rancher offers easy management of multiple Kubernetes clusters that can be provisioned manually and imported into the Cluster Manager management panel, or provisioned by Rancher using its own Kubernetes distribution. They call it RKE – Rancher Kubernetes Engine – and it can be installed on most major cloud providers, but also on vSphere. Managing multiple clusters using Rancher is very easy, and combining it with plenty of authentication options makes it a really compelling solution for those who plan to manage hybrid, multi-cluster, or even multi-cloud environments. I think that Rancher has initiated many interesting projects, including K3s (a simpler Kubernetes control plane targeted at edge computing), RKE (the aforementioned Kubernetes distribution), and Longhorn (distributed storage). You can see they are in the middle of an intensive development cycle – even by looking at Rancher’s inconsistent UI, which is divided into two parts: Cluster Manager, with a fresh look and a decent list of options, and Cluster Explorer, which is less pleasant but offers more insights. Let’s hope they will continue improving Rancher and RKE to be even more usable, so that it becomes an even more compelling Kubernetes platform for on-premise environments.

When to choose Rancher

  • If you already have VMware vSphere – Rancher makes it very easy to spawn new on-premise clusters by leveraging vSphere API
  • If you plan to maintain many clusters (all on-premise, hybrid or multi-cloud) – it’s just easier to manage them from a single place where you log in using unified credentials (it’s very easy to set up authentication against various services)
  • You focus on platform maintenance more than on features supporting development – with a nice integrated backup solution, a CIS benchmark engine, and only a few developer-focused solutions (I think their CI/CD solution was put there just for marketing purposes – it’s barely usable), it’s just more appealing to infrastructure teams
  • If you really need paid support for your Kubernetes environment – Rancher provides support for its product, including their own Kubernetes distribution (RKE) as well as custom installations; when it comes to price, it’s a mystery that will be revealed when you contact Sales
  • You need browser-optimized access to your environment – with the built-in shell it’s very easy to access cluster resources without configuring anything on a local machine

When to avoid Rancher

  • You don’t care about fancy features – although there are significantly fewer features in Rancher than in OpenShift or OKD, it is still more than just a nice UI, and some may find it redundant and can get by without it
  • You’re interested in more mature products – it looks like Rancher has been in active development over the past few months, and it will probably be redesigned at some point, just like OpenShift was (versions 3 and 4 are very different)
  • You don’t plan or need to use multiple clusters – maybe one is enough?

4. VMware Tanzu

The last contender is Tanzu, from the biggest on-premise virtualization software vendor. When they announced project Pacific I knew it was going to be huge. And it is. Tanzu is a set of products that leverage Kubernetes and integrate it with vSphere. The product that manages Kubernetes clusters is called Tanzu Kubernetes Grid (TKG), and it’s just the beginning of the Tanzu offering. There’s Tanzu Mission Control for managing multiple clusters, Tanzu Observability for… observability, Tanzu Service Mesh for… yes, it’s their service mesh, and many more. For anyone familiar with enterprise offerings it may resemble any other product suite from a big giant like IBM, Oracle, and so on. Let’s be honest here – Tanzu is not for anyone who is interested in “some” Kubernetes; it’s for enterprises accustomed to enterprise products and everything that comes with them (i.e. sales, support, software that can be downloaded only by authorized users, etc.). And it’s especially designed for those whose infrastructure is based on the VMware ecosystem – it’s a perfect addition that meets the requirements of development teams within an organization, but also addresses operations teams’ concerns with the same tools that have been known for over a decade now. When it comes to features, they are pretty standard – easy authentication, cluster scaling, build services based on buildpacks, networking integrated with VMware NSX, storage integrated with vSphere – wait, it’s starting to sound like a feature list of another vSphere addon. I guess it is an addon. For those looking for fancy features I suggest waiting a bit longer for VMware to come up with new Tanzu products (or for a new acquisition of another company from the cloud native world, like they did with Bitnami).

When to choose Tanzu

  • When your company already uses VMware vSphere – just contact your VMware sales guy, who will prepare an offer for you, and the team that takes care of your infrastructure will do the rest
  • If you don’t plan to deploy anything outside of your own infrastructure – although VMware tries to be a hybrid provider by enabling integration with AWS or GCP, it will stay focused on the on-premise market, where it’s undeniably the leader
  • If you wish to use multiple clusters – Tanzu enables easy creation of Kubernetes clusters that can be assigned to development teams
  • If you need support – it’s an enterprise product with enterprise support

When to avoid Tanzu

  • If you don’t already have vSphere in your organization – you need vSphere and its ecosystem, which Tanzu is a part of, to start working with VMware’s Kubernetes services; otherwise it will cost you a lot of time and resources to install it just to leverage them
  • When you need more features integrated with the platform – although Tanzu provides interesting features (my favourite is Tanzu Build Service), it still lacks some distinguishing ones (although they provide some for you to install on your own from the Solutions Hub) that would make it more appealing

Conclusion

I have chosen these four solutions for a Kubernetes on-premise platform because I believe they provide a real alternative to custom-built clusters. These products make it easier to build and maintain production clusters, and in many cases they also help to speed up the development process and provide insights into the deployment process as well. So here’s what I would do if I were to choose one:

  • if I had a big budget I would go with OpenShift, as it’s just the best
  • if I had a big budget and already existing VMware vSphere infrastructure I would consider Tanzu
  • if I had skilled Kubernetes people in my organization and I wanted an easy way to manage my clusters (provisioned manually) without vSphere, I would choose Rancher (and optionally buy support for those clusters when going to prod)
  • if I had skilled Kubernetes people in my organization and wanted to use those fancy OpenShift features, I would go with OKD, as it’s the best alternative to a custom-built Kubernetes cluster

That’s not all. Of course you can build your own Kubernetes cluster, and it’s a path that is chosen by many organizations. There are many caveats and conditions that need to be met (e.g. the scale of such an endeavour, the type of workloads to be deployed on it) for this to succeed. But that’s a different story, which I hope to cover in another article.