Maintaining big Kubernetes environments with factories

People are fascinated by containers, Kubernetes and the cloud-native approach for different reasons. It could be enhanced security, real portability, greater extensibility or more resilience. For me personally, and for organizations delivering software products to their customers, there is one reason that is far more important – the speed they can gain. That leads straight to a decreased Time To Market, so highly appreciated and coveted by business people, and to even more job satisfaction for the people building applications and platforms for them.

It starts with code

So how do you speed it up? By leveraging this new technology and all the goodies that come with it. The real game-changer here is the way you can manage your platform, environments, and the applications that run there. With Kubernetes-based platforms you do it in a declarative manner, which means you define your desired state rather than the particular steps leading to it (as is done in imperative systems). That opens up a way to manage the whole system with code. Your primary job is to define your system state and let Kubernetes do its magic. You probably want to keep it in files in a versioned git repository (or repositories), and this article shows how you can build your platform by efficiently splitting up the code into multiple repositories.

Code converted by Kubernetes to various resources

Areas to manage

Since we can manage everything from code, we can distinguish a few areas and delegate control over their code to different teams.
Let’s consider these three areas:

1. Platform

This is the part where all platform and cluster-wide configuration is defined. It affects all environments and their security. It can also include configuration for multiple clusters (e.g. when using OpenShift’s Machine Operator or Cluster API to install and manage clusters).

Examples of objects kept here:

  • LimitRange, ResourceQuota
  • NetworkPolicy, EgressNetworkPolicy
  • ClusterRole, ClusterRoleBinding
  • PersistentVolume – static pool
  • MachineSet, Machine, MachineHealthCheck, Cluster
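To make this concrete, here is a minimal sketch of two platform-area objects that an FBE could apply to every managed namespace – a ResourceQuota and a LimitRange. All names and values are illustrative, not taken from any real cluster:

```yaml
# Illustrative quota applied by the platform team to a managed namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default-quota
  namespace: team-a-dev
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
# Default container limits, so pods without explicit requests still get sane values
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: team-a-dev
spec:
  limits:
  - type: Container
    default:
      cpu: 500m
      memory: 512Mi
    defaultRequest:
      cpu: 100m
      memory: 128Mi
```

Keeping these in the platform repository (rather than with the applications) is what prevents individual teams from loosening their own resource constraints.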

2. Environments (namespaces) management

Here we define how particular namespaces should be configured to run applications or services while keeping them secure and under control.

Examples of objects kept here:

  • ConfigMap, Secret
  • Role, RoleBinding
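As a sketch, a namespace-scoped Role and RoleBinding granting a development team control over Deployments in their environment might look like this (the group and namespace names are made up for illustration):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-deployer
  namespace: team-a-dev
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
# Bind the role to the team's group from the identity provider
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-deployer-binding
  namespace: team-a-dev
subjects:
- kind: Group
  name: team-a
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-deployer
  apiGroup: rbac.authorization.k8s.io
```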

3. CI/CD system

All other configuration that is specific to an application. Also, the pipeline definition is kept here with the details on how to build an application artifact from code, put it in a container image and push it to a container registry.

Examples of objects kept here:

  • Jenkinsfile
  • Jenkins shared library
  • Tekton objects: Pipeline, PipelineRun, Task, ClusterTask, TaskRun, PipelineResource
  • BuildConfig
  • Deployment, Ingress, Service
  • Helm Charts
  • Kustomize overlays

Note that environment-specific configuration is kept elsewhere.
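For illustration, the application-specific part can be as small as a Deployment and a Service kept next to the pipeline definition – the image reference and names below are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/team-a/my-app:1.0.0
        ports:
        - containerPort: 8080
---
# Expose the pods inside the cluster; an Ingress/Route would sit on top of this
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```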


Our goal here is simple – leverage containers and Kubernetes features to quickly deliver applications to production environments and keep it all as code. To do so, we can delegate the management of particular areas to special entities – let’s call them factories.
We can have two types of factories:

  • Factory Building Environments (FBE) – responsible for maintaining objects from area 1 (platform).
  • Factory Building Applications (FBA) – responsible for maintaining objects from area 2 (environments) and area 3 (CI/CD).

Factory Building Environments

The first one is the Factory Building Environments. In general, a single factory of this type is sufficient, as it can maintain multiple environments and multiple clusters.
It exists for the following main reasons:

  • To delegate control over critical objects (especially security-related) to a team of people responsible for platform stability and security
  • To keep track of changes and protect global configuration that affects all the shared services and applications running on a cluster (or clusters)
  • To ease management of multiple clusters and dozens (or even hundreds) of namespaces

FBE takes care of environments and cluster configuration

Main tasks

So what kind of tasks is this factory responsible for? Here are the most important ones.

Build and maintain shared images

There are a couple of container images that are used by many services inside your cluster and have a critical impact on platform security or stability. In particular, these could be:

  • a modified Fluentd container image
  • a base image for all your Java (or other) applications with your custom CA added to the system-level PKI configuration
  • similarly – a custom s2i (Source-to-Image) builder
  • a customized Jenkins image with a predefined list of plugins and even seed jobs

Apply security and global configuration

This is actually the biggest and most important task of this factory. It should read a dedicated repository where all the files are kept and apply them either at the cluster level or to a particular set of environments (namespaces).

Provide credentials

In some cases, this should also be the place where certain credentials are configured in environments – for example, database credentials that shouldn’t be visible to developers or stored in an application repository.

Build other factories

Finally, this factory also builds other factories (FBA). This task includes creating new namespaces, maintaining their configuration and deploying required objects forming a new factory.

How to implement

FBE is just a concept that can be implemented in many ways. Here’s a list of possible solutions:

  1. The simplest case – a dedicated repository with restricted access, code review policy, and manual provisioning process.
  2. The previous solution can be extended with a hook attached to the pull-request merge event that will apply all changes automatically.
  3. As a part of git integration, there can be a dedicated job on a CI/CD server (e.g. Jenkins) that tracks a particular branch of the repo and applies it automatically or on-demand.
  4. The last solution is the most advanced and also follows the best practices of the cloud-native approach. It is a dedicated operator that tracks the repository and applies it from inside a container. There could be different Custom Resources responsible for different parts of the configuration (e.g. namespace configuration, global security settings of a cluster, etc.).
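To illustrate the operator-based option, a hypothetical Custom Resource consumed by such an operator could look like the following. The API group, kind, and all fields here are entirely made up – they depend on how you design the operator:

```yaml
# Hypothetical CR: one object per namespace managed by the FBE operator
apiVersion: factories.example.com/v1alpha1
kind: ManagedNamespace
metadata:
  name: team-a-dev
spec:
  # Git repository holding the declarative configuration for this namespace
  configRepo: https://git.example.com/platform/team-a-config.git
  branch: main
  # Named profiles the operator translates into quotas and network policies
  quotaProfile: small
  networkProfile: restricted
```

The operator would watch these objects, clone the referenced repository, and reconcile the namespace to match – the same loop Kubernetes itself uses for built-in resources.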

Factory Building Applications

The second type of factory is the Factory Building Applications. It is designed to deliver applications to end-users on production environments. It addresses the following challenges of the delivery and deployment processes:

  • Brings more autonomy for development teams who can use a dedicated set of namespaces for their delivery process
  • Creates a safe place for experiments with preview environments created on-demand
  • Eases the configuration process by reducing duplication of config files and providing default settings shared by applications and environments
  • Enables grouping of applications/microservices under a single, manageable group of environments with shared settings and an aggregated view on deployment pipelines runs
  • Separates configuration of Kubernetes objects from a global configuration (maintained by FBE) and application code to keep track of changes

FBA produces applications and deploys them to multiple environments

Main tasks

Let’s have a look at the main tasks of this factory.

Build and deploy applications

The most important job is to build applications from code, perform tests, put them in a container image and publish it. When a new container image is ready, it can be deployed to the multiple environments managed by this factory. Essentially, this is where the CI/CD tasks for a set of applications are implemented.

Provide common configuration for applications and services

This factory should provide an easy way of creating a new environment for an application with a set of config files defining required resources (see examples of objects in area 2 and 3).

Namespace management

FBA manages two types of environments (namespaces):

  • permanent environments – they are a part of CI/CD pipeline for a set of applications (or a single app) and their services
  • preview environments – these are environments that are not a part of CI/CD pipeline but are created on-demand and used for different purposes (e.g. feature branch tests, performance tests, custom scenario tests, etc.)

It creates multiple preview environments and destroys them when they are no longer needed. For permanent environments, it ensures that they have a proper configuration but never deletes them (they are special and protected).

How to implement

Here are some implementation tips and more technical details.

  1. A factory can be created to maintain environments and a CI/CD pipeline for a single application. However, many applications are often either developed by a single team or part of a single business domain, so it is convenient to keep all the environments and deployment processes in a single place.
  2. A factory consists of multiple namespaces, for example:
    • FN-cicd – namespace where all build-related and delivery activities take place (FN could be a factory name or some other prefix shared by namespaces managed by it)
    • FN-test, FN-stage, FN-prod – permanent environments
    • a varying number of preview environments
  3. Main tasks can be implemented by Jenkins running inside FN-cicd namespace and can be defined either by independent Jenkinsfiles or with jobs defined in a shared library configured on a Jenkins instance.
    In OpenShift it’s even easier, as you can use BuildConfig objects of Pipeline type which will create proper jobs inside a Jenkins instance.
  4. A dedicated operator again seems to be the best solution. It could be implemented as the same operator that maintains the FBE, with a set of Custom Resources for managing namespaces, pipelines and so on.
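As a sketch of the OpenShift variant mentioned in point 3, a BuildConfig of the Pipeline type creates a corresponding job inside the factory’s Jenkins instance. The repository URL and names are placeholders (namespace names must be lowercase, hence fn-cicd for the FN-cicd namespace):

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app-pipeline
  namespace: fn-cicd
spec:
  source:
    git:
      uri: https://git.example.com/team-a/my-app.git
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      # Pipeline definition lives in the application repository
      jenkinsfilePath: Jenkinsfile
```

Creating this object is enough for OpenShift to sync a matching pipeline job into Jenkins; triggering a build runs the Jenkinsfile from the repo.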


A couple of years ago, before Docker and containers, I was a huge fan of Infrastructure as Code. With Kubernetes, operators, and thanks to its declarative nature, it is now possible to manage every aspect of building applications, deploying them, managing the environments they run in, and even whole clusters deployed across multiple datacenters or clouds. It is becoming increasingly important how you handle the code responsible for maintaining all of this. The idea of using multiple factories is helpful for organizations with many teams and applications: it allows both to scale easily while keeping everything manageable.

Honest review of OpenShift 4

We waited over 7 months for the OpenShift Container Platform 4 release. We even got version 4.1 right away, because Red Hat decided not to release version 4.0. And when it was finally released, we almost got a new product. This is a result of Red Hat’s acquisition of CoreOS, announced at the beginning of 2018. I believe that most of the new features in OpenShift 4 come from the hands of a new army of developers from CoreOS and their approach to building innovative platforms.
But is it really that good? Let me go through the most interesting features, and also the things that are not as good as we’d expect from over seven months of development (OpenShift 3.11 was released in October 2018).

If it ain’t broke, don’t fix it

Most parts of OpenShift haven’t changed, or have changed very little. In my comparison of OpenShift and Kubernetes I pointed out its most interesting features, with a few remarks on version 4 as well.
To keep it short, here’s my personal list of the best OpenShift features that have stayed at the same good level compared to version 3:

  • Integrated Jenkins – makes it easy to build, test and deploy your containerized apps
  • BuildConfig objects used to create container images – with Source-To-Image (s2i) it is very simple and easy to maintain
  • ImageStreams as an abstraction level that eases the pain of upgrading or moving images between registries (e.g. automatic updates)
  • Tightened security rules with SCC that disallows running containers as root user. Although it’s a painful experience at first this is definitely a good way of increasing overall security level.
  • Monitoring handled by the best monitoring software dedicated to container environments – Prometheus
  • Built-in OAuth support for services such as Prometheus, Kibana and others. Unified way of managing your users, roles and permissions is something you’ll appreciate when you start to manage access for dozens of users

Obviously, we can also leverage Kubernetes features remembering that some of them are not supported by Red Hat and you’ll be on your own with any problems they may cause.

The best features

I’ll start with features I consider to be the best and sometimes revolutionary, especially when comparing it to other Kubernetes-based platforms or even previous version 3 of OpenShift.

New flexible and very fast installer

This is huge and probably one of the best features. If you’ve ever worked with Ansible installer available in version 3, then you’d be pleasantly surprised or even relieved you don’t need to touch it ever again. Its code was messy, upgrades were painful and often even small changes took a long time (sometimes resulting in failures at the end) to apply.
Now it’s something far better. Not only does it use Terraform underneath (the best tool available for this purpose), making it faster and more predictable, but it’s also easier to operate. Because the whole installation is performed by a dedicated operator, all you need to do is provide a fairly short yaml file with the necessary details.
Here’s the whole file that is sufficient to install a multi-node cluster on AWS:

apiVersion: v1
compute:
- hyperthreading: Enabled
  name: worker
  platform:
    aws:
      rootVolume:
        size: 50
        type: gp2
      type: t3.large
  replicas: 3
controlPlane:
  hyperthreading: Enabled
  name: master
  platform:
    aws:
      rootVolume:
        size: 50
        type: gp2
      type: t3.xlarge
  replicas: 3
metadata:
  creationTimestamp: null
  name: ocp4demo
networking:
  clusterNetwork:
  - cidr:
    hostPrefix: 23
  networkType: OpenShiftSDN
platform:
  aws:
    region: us-east-1
pullSecret: '{"auths":{"":{"auth":"REDACTED","email":"[email protected]"}}}'
sshKey: |
  ssh-rsa REDACTED [email protected]

The second and even more interesting thing about the installer is that it uses Red Hat Enterprise Linux CoreOS (RHCOS) as the base operating system. The biggest difference from classic Red Hat Enterprise Linux (RHEL) is how it’s configured and maintained. While RHEL is a traditional system you operate manually with ssh and Linux commands (sometimes executed by config management tools such as Ansible), RHCOS is configured with Ignition (a custom bootstrap and config tool developed by CoreOS) at first boot and shouldn’t be configured in any other way. That basically allows you to create a platform that follows the immutable infrastructure principle – all nodes (except the control plane with master components) can be treated as ephemeral entities and, just like pods, can be quickly replaced with fresh instances.

Unified way of managing nodes

Red Hat introduced a new API for node management. It’s called the “Machine API” and is mostly based on the Kubernetes Cluster API project. This is a game changer when it comes to provisioning nodes. With MachineSets you can easily distribute your nodes among different availability zones, and you can also manage multiple node pools with different settings (just like in GKE, which I reviewed some time ago) – e.g. a pool for testing, or a pool for machine learning with GPUs attached. Management of nodes has never been this easy!
For me, that’s a game changer, and I predict it’s going to be a game changer for Red Hat too. With this new flexible way of provisioning, along with RHCOS as the default system, OpenShift becomes very competitive with the Kubernetes services available on the major cloud providers (GKE, EKS, AKS).
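For illustration, a trimmed-down MachineSet defining an extra worker pool on AWS might look roughly like this – the cluster name, namespace label values, and instance type are placeholders, and a real object carries a few more provider fields:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: ocp4demo-gpu-us-east-1a
  namespace: openshift-machine-api
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: ocp4demo-gpu-us-east-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: ocp4demo-gpu-us-east-1a
    spec:
      providerSpec:
        value:
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          kind: AWSMachineProviderConfig
          # A GPU instance type for a dedicated machine-learning pool
          instanceType: p2.xlarge
          placement:
            availabilityZone: us-east-1a
            region: us-east-1
```

Scaling the pool is then just a matter of changing spec.replicas (or letting the autoscaler do it, as described below).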

Rapid cluster autoscaling

Thanks to the previous feature we can finally scale our clusters in an easy and automated fashion. OpenShift delivers a cluster autoscaling operator that can adjust the size of your cluster by provisioning or destroying nodes. With RHCOS this is done very quickly, which is a huge improvement over the manual, error-prone process used in the previous version of OpenShift with RHEL nodes.
It works not only on AWS but also on on-premise installations based on VMware vSphere. Hopefully, it will soon be possible on most major cloud providers, and maybe on non-cloud environments as well (spoiler alert – it will, see below for more details).
We have missed this elasticity, and it finally narrows the gap between those who are lucky enough (or simply prefer) to use the cloud and those who, for whatever reasons, choose to build on their own hardware.
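A sketch of the two objects involved: a cluster-wide ClusterAutoscaler plus a MachineAutoscaler pointing at a specific MachineSet. The MachineSet name and the limits are placeholders:

```yaml
# Cluster-wide autoscaling settings (the object must be named "default")
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    maxNodesTotal: 20
---
# Per-pool bounds: which MachineSet may grow, and within what range
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 6
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: ocp4demo-worker-us-east-1a
```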

Good parts you’ll appreciate

New nice-looking web console that is very practical

This is the most visible change for end users, and it looks like a completely rewritten, better designed, good-looking piece of software. We saw a part of it in version 3, responsible for cluster maintenance, but now it’s a single interface for both operations and developers.
Cluster administrators will appreciate the friendly dashboards where you can check cluster health and leverage tighter integration with Prometheus monitoring to observe the workloads running on the cluster.
Although many buttons open a simple editor with a yaml template in it, it is still the best web interface available for managing your containers, their configuration and external access, or for deploying a new app without any yaml.

Red Hat also prepared a centralized dashboard for managing all your OpenShift clusters. It’s quite simple at the moment, but I think it’s an early version.

Oh, and they also got rid of one annoying thing – now you finally log in once and leverage Single Sign-On to access external services, i.e. Prometheus, Grafana and Kibana dashboards, Jenkins and others that you can configure with OAuth.

Operators for cluster maintenance and as first-class citizens for your services

The operator pattern leverages the Kubernetes API and promises “to put operational knowledge into software”, which for the end user means an easy way of deploying and maintaining complex services. It’s no big surprise that in OpenShift almost everything is configured and maintained by operators. After all, this concept was born at CoreOS and has brought us a level of automation we could only dream of. In fact, Red Hat deprecated its previous attempt to automate everything with Ansible Service Broker and Service Catalog. Now operators handle most of the tasks, such as cluster installation, upgrades, ingress and registry provisioning, and many, many more. No more Ansible – just feed these operators with proper yaml files and wait for the results.
At the same time, Red Hat created a website with operators (OperatorHub) and embedded it inside OpenShift. They say it will grow and you’ll be able to find many services there that are very easy to use. In fact, during the writing of this article the number of operators available on OperatorHub doubled, and it should keep growing and maturing (some of them didn’t work for me or required additional manual steps).
For anyone interested in providing their software as an operator, there is the operator-framework project that helps to build it (operator-sdk), run it and maintain it (with the Operator Lifecycle Manager). In fact, you can start even without knowing how to write Go, as it provides a way to create an operator using Ansible (and it converts Helm Charts too). Despite some small shortcomings, it’s the fastest way to try this new way of writing kubernetes-native applications.

In short – operators can be treated as a way of providing services in your own environment, similar to the ones available from public cloud providers (e.g. a managed database, a Kafka cluster, a Redis cluster, etc.), with one major difference – you have control over the software that provides those services and you can build them on your own (become a producer), while on the cloud you are just a consumer.

I think that aligns perfectly with the open source spirit that started an earlier revolution – the Linux operating system, the building block for most of the systems running today.

Cluster configuration kept as API objects that ease its maintenance

Forget about configuration files kept somewhere on the servers. They cause too many maintenance problems and are just too old-school for modern systems. It’s time for the “everything-as-code” approach. In OpenShift 4 every component is configured with Custom Resources (CRs) that are processed by the ubiquitous operators. No more painful upgrades and synchronization across multiple nodes, and no more configuration drift. You’re going to appreciate how easy maintenance has become.
Here is a short list of operators that configure cluster components previously maintained in a rather cumbersome way (i.e. different files provisioned by Ansible or manually):

  • API server (feature gates and options)
  • Nodes via Machine API (see above for more details)
  • Ingress
  • Internal DNS
  • Logging (EFK) and Monitoring (Prometheus)
  • Sample applications
  • Networking
  • Internal Registry
  • OAuth (and authentication in general)
  • And many more…
Global configuration handled by operators and managed with yaml files kept inside a control plane
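As an example of this style of configuration, authentication is driven by a single cluster-scoped OAuth resource. The identity provider name and the referenced secret below are placeholders:

```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: local-users
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        # Secret in openshift-config containing the htpasswd file
        name: htpass-secret
```

Apply the change and the authentication operator reconfigures the cluster – no node-by-node file edits, no configuration drift.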

Now all these things are maintained from code that is (or rather should be) versioned, audited and reviewed for changes. Some people call it GitOps; I call it “Everything as Code” – or simply the way it should have been managed from the beginning.

Bad parts (or not good enough yet)

Nothing is perfect, not even OpenShift. I’ve found a few things that I consider less enjoyable than the previous features. I suspect and hope they will improve in future releases, but at the time of writing (OpenShift 4.1) they spoil the overall picture.

Limited support for fully automatic installation

The biggest disappointment is the list of platforms that support fully automatic installation. Here it is:

  • AWS
  • VMware vSphere

Quite short, isn’t it? It means that when you want to install it on your own machines, you need to have vSphere. If you don’t, be prepared for a less flexible install process that involves many manual steps and is much, much slower.
It also implies another flaw – without a supported platform, you won’t be able to use cluster autoscaling or even manual scaling of machines. It will all be left for you to manage manually.

This makes OpenShift 4 truly usable only on AWS and vSphere. Although it can work anywhere else, it is a less flexible option with a limited set of features. Red Hat promises to extend the list of supported platforms in future releases (Azure, GCP and OpenStack are coming in version 4.2) – there are also already existing implementations for bare-metal installations, so hopefully this will be covered as well.

You cannot perform disconnected installations

Some organizations have very tight security rules that cut off most external traffic. In the previous version, you could use a disconnected installation that could be performed offline, without any access to the internet. Now OpenShift requires access to Red Hat resources during installation – they collect anonymized data (Telemetry) and provide a simple dashboard from which you can control your clusters.
They promise to fix it in the upcoming version 4.2, so please be patient.

Istio is still in Tech Preview and you can’t use it in your prod yet

I’m not sure about you, but many organizations (and individuals like me) have been waiting for this particular feature. We’ve had enough of watching demos and listening to how Istio is the best service mesh and how many problems it will address. Give us a stable (and, in the case of Red Hat, also supported) version of Istio! According to the published roadmap it was supposed to be available already in version 4.0, but that was never released, so we obviously expected it to be GA in 4.1. For many, this is one of the main reasons to consider OpenShift as an enterprise container platform for their systems. I sympathize with all of you and hope that this year we’re all going to move Istio from test systems to production. Fingers crossed!

CDK/Minishift options missing makes testing harder

I know it’s going to be fixed soon, but at the moment the only way of testing OpenShift 4 is either to use it as a service (OpenShift Online, Azure Red Hat OpenShift) or to install it, which takes roughly 30 minutes. For version 3 we have the Container Development Kit (or its open source equivalent for OKD – minishift), which launches a single-node VM with OpenShift in a few minutes. It’s also perfect for testing as a part of a CI/CD pipeline.
Certainly, it’s not the most coveted feature, but since many crucial parts have changed since version 3, it would be good to have a more convenient way of getting to know it.

UPDATED on 30.8.2019 – there is a working solution for single node OpenShift cluster. It is provided by a new project called CodeReady Containers and it works pretty well.

Very bad and disappointing

This is a short “list”, but I just have to mention it, since it’s been a very frustrating part of OpenShift 3 that just happened to become a part of version 4 too.

Single SDN option without support for egress policy

I still can’t believe how neglected the networking part is. Let me start with a simple choice – or rather the lack of it. In version 3 we could choose Calico as an SDN provider alongside OpenShift’s “native” SDN based on Open vSwitch (an overlay network spanned over software VXLAN). Now we have only the single native implementation. I guess we could live with that if it had been improved – however, it hasn’t. When deploying your awesome apps on your freshly installed cluster, you may want to secure the traffic with NetworkPolicy objects acting as a Kubernetes network firewall. There’s even a nice guide for creating ingress rules, and sure, they work as they should. But if you want to limit egress traffic, you can’t leverage the egress part of NetworkPolicy – for some reason OpenShift still uses its dedicated “EgressNetworkPolicy” API, which has the following drawbacks:

  • You should create a single object for an entire namespace with all the rules – although many can be created, only one is actually used (in a non-deterministic way, you’ve been warned) – no internal merging is done as it is with standard Kubernetes NetworkPolicy objects
  • You can limit traffic only based on IP CIDR ranges or DNS names, without specifying ports (sic!) – that’s right, it’s like an ’80s firewall appliance operating on L3 only…
The OpenShift web interface for managing NetworkPolicy is currently a simple web yaml editor with some built-in tips on how to write them
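To show the limitation, here is a sketch of an EgressNetworkPolicy – note that the rules match only CIDRs or DNS names, with no port field anywhere (namespace and addresses are made up):

```yaml
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
  namespace: team-a-prod
spec:
  egress:
  # Allow traffic to an external database subnet – the whole subnet, all ports
  - type: Allow
    to:
      cidrSelector: 10.10.0.0/24
  # DNS names are supported, but again without ports
  - type: Allow
    to:
      dnsName: api.example.com
  # Deny everything else leaving the namespace
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
```

Rules are evaluated in order, so the final Deny acts as the default for anything not matched above it.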

I said it – for me, it’s the worst part of OpenShift, and it makes the management of network traffic harder. I hope it will be fixed pretty soon; for now, Istio could potentially fix it at an upper layer. Oh wait, it’s not supported yet…


Was it worth waiting for OpenShift 4? Yes, I think it was. It has some flaws that are going to be fixed soon, and it’s still the best platform for Kubernetes workloads that comes with support. I consider this version an important milestone for Red Hat and its customers looking for a solution to build a highly automated platform – especially when they want to do it on their own hardware, with full control and freedom of choice. Now, with the operator pattern so closely integrated and promoted, it starts to look like a really good alternative to the public cloud – something that was promised by OpenStack, and it looks like it’s going to be delivered by Kubernetes with OpenShift.

Czas introwertyków

Jestem introwertykiem. Jak wielu ludzi w IT. Często słyszę dowcip – “nie chcesz rozmawiać z ludźmi to idź na studia informatyczne” i chyba panuje takie ogólne przeświadczenie, że w branży IT jest nas najwięcej. Jest to jednak dalekie od prawdy, gdyż według wielu badań rozkład wychodzi mniej więcej po równo t.j. cechy introwertyczne i ekstrawertyczne rozkładają się równomiernie po całej populacji. I od razu obalmy pierwszy mit – nie jest prawdą, że się jest albo introwertykiem albo ekstrawertykiem. Ta cecha to spektrum – można być super-introwertykiem skrywającym się w samym sobie i można też być gdzieś po środku bliżej ekstrawertyka. Wówczas często określa się taką osobę jako ambiwertyka.

Czym jest ten introwertyzm?

Nie chcę tutaj przytaczać naukowych teorii – te łatwo znaleźć w sieci (np. wikipedia). Napiszę to z mojej własnej perspektywy i własnymi słowami.
Otóż jest to cecha, która określa w jakich sytuacjach gromadzisz energię a w jakich ją zużywasz. Dla nas introwertyków magazynujemy ją przebywając samotnie lub w bardzo małej grupie osób. I samotnie wcale nie oznacza na odosobnieniu. Chodzi bardziej o mentalnym przebywaniu z samym sobą, z własnymi myślami, odczuciami i refleksjami. Kierujemy uwagę do siebie – stąd określenie introwertyk.
Jak już ją zgromadzimy to możemy ją spożytkować na – uwaga – przebywanie z innymi, wdawać się w konwersacje i inne aktywności w grupie. Jestem żywym przykładem, że introwertyk może również występować publicznie czy prowadzić kilkudniowe szkolenia dla grupy osób, której wcześniej nie znał.

Ekstrawertycy mają lepiej?

Nawet introwertycy lubią ekstrawertyków. W końcu są to osoby, które często brylują w tłumie, często przewodzą zabawie, są duszą towarzystwa. Jako stworzenia socjalne potrzebujemy przynależności, powiązania z innymi i stąd wydaje nam się, że im większa grupa tym lepiej dla nas. Jak się okazuje nie dla wszystkich. Introwertycy również lubią grupy, ale mniejsze. I do tego nie do końca odpowiada im tzw. “small talk” i wolą poważniejsze rozmowy niż zaledwie płytką wymianę słów.
Niestety nasza kultura preferuje ekstrawertyczne zachowania. W końcu lepiej sprzedaje się kolejny show z celebrytami niż coś mniej wystrzałowego o głębszym przekazie. Wydaje się też nam, że tylko będąc osobą wystrzałową da się osiągnąć sukces. To jednak wielkie kłamstwo – w końcu introwertyczni ludzie sukcesu (i nie tylko) unikają rozgłosu i płytkich rozmów skupiając się na rzeczach ważniejszych.

Rewolucja informacyjna i nowy porządek

Wprowadzenie nowych technologii wywróciło dotychczasowy porządek. Odkąd otworzyły się nowe środki komunikacji i dzielenia się informacjami, obecna przewaga ekstrawertyków w udziale i kształtowaniu naszego świata. Introwertycy stoją w dużej mierze za tymi technologiami oraz jest ona dla nich sposobem na wyrażanie siebie. W końcu komunikacja mailowa pozwala przemyśleć co chcemy przekazać i sformułować lepiej nasze myśli przed faktycznym jej przekazaniem. Faktem jest, że takie sieci społecznościowe jak Instagram czy Facebook są opanowane w głównej mierze przez ekstrawertyczne osoby (część o dość narcystycznym usposobieniu i wykorzystujący je do budowania swojej własnej wartości na “lajkach”), ale czy wiesz kto zbudował te wszystkie wspaniałe portale? Tak, to w dużej mierze robota introwertyków. Ludzie, którzy często spędzają dużo czasu na rozmyślaniu, ale też na projektowaniu i tworzeniu nowej, wirtualnej rzeczywistości. Twórca Facebooka Mark Zuckeberk, twórca Microsoft Bill Gates czy też Linus Torvalds jako twórca Linuksa będącego fundamentem tworu znanego dzisiaj jako chmura publiczna. Oni wszyscy są introwertykami i stoją za jedynymi z największych rewolucji w świecie IT i nie tylko.

Obalamy mity o introwertykach

Poniżej kilka mitów krążących w społeczeństwie i będącego w sumie przykładem jak zachowania ekstrawertyczne są preferowane, a te introwertyczne kojarzone z cechami negatywnymi (podobnie jak inne, “odbiegające” od średniej jak niski wzrost, waga itp.).

Introverts always sit in front of computers and are afraid of people

Books once, and computers now, have become an escape from people, and not only for introverts. And it is not true that introverts prefer computers. Granted, computers are more predictable, they have no emotions, and if they stop working they can be repaired or replaced fairly easily.
Nevertheless, introverts also crave contact with people; above all, they feel better in a smaller circle of trusted friends. Big banquets are not their natural environment, which does not mean they cannot find their way around them. As a rule, they will try to find a smaller group where they can have deeper conversations than ones merely about the weather. Especially if they find familiar faces, since making new acquaintances simply takes us longer.

Introverts can't sell

Nonsense. If by selling you mean a kind of door-to-door canvassing, "cold calling", or other forms of seeking quick contact and immediately "closing" the deal, then probably yes: some extroverts can charm people and pull such things off.
Daniel Pink describes this perfectly in his book "To Sell Is Human" (the Polish title, "Jak być dobrym sprzedawcą", is in my opinion a completely misguided translation that distorts the author's intent), where he shows that people somewhere in the middle of the spectrum do best: these are the ambiverts mentioned at the beginning.
Personally, I believe that today anyone can do sales. Afraid of contact with others? You can always hide behind automation, such as an online auction or a shop with a good description. Not super-outgoing, but able to talk in a small group? Then you can absolutely sell services and products. We are saturated with nonsense and tricks taken straight from sales handbooks from decades ago. Authentic and empathetic people can achieve far more in the long run than imitators of those canvassing patterns.

Introverts are loners by choice

That's like saying extroverts are people who live only from party to party. The truth is that extroverts recharge their "batteries" among people and spend that energy in small groups or alone. Introverts, in turn, draw energy from being with themselves and use it to go out into larger groups, where they also make friends and lead a normal social life. It's probably not on the same scale or intensity as with extroverts, and you certainly won't see many photos on Instagram, but it is by no means loneliness.

Public speaking is the domain of extroverts

Since we recharge those batteries of ours, public speaking can be exactly the place where that energy gets spent. And that's what happens in my case: the desire to share knowledge and experience and to inspire others is strong and motivating enough that stepping on stage wins over the need to spend time with myself. Even if the beginnings are hard, this is something that can be learned regardless of personality type. Extroverts may well be better predisposed to it (unfortunately I have no way to verify that empirically), but not all of them necessarily want to develop in that direction.

Introversion can be changed: you just need to party more

Changing your traits and behavior is indeed possible, but only to a degree. What we come into the world with is part of us, and if we don't fully accept it, we become clients of all sorts of pseudo-coaches and development programs (I write this as someone who has read plenty of books of this kind and attended a few workshops). You can learn techniques, overcome natural fear, and pursue your goals, but your core traits will stay close to what you were given, not to who you would like to be. The ideal image created by the media presents extroversion as a desirable trait, much like being fit, where shaping your body is likewise limited by your predispositions.
No, you won't turn into an extrovert by partying more, but you will gain the most by making use of everything that comes with what you've got "in the package".

How do I check whether I'm an introvert?

The simplest way is to take one of the tests available for free online. It can be the 16 personalities test (also known as the Myers-Briggs test), for example this one.

Make the most of your time

It has never been easier for introverts to stay themselves while achieving their personal goals, thanks to today's technology and freedom. Even though the image of the ideal extroverted person is alive and well in society, it's worth going your own way. Even if it is a road traveled in a smaller group, it will be with people worth traveling it with.

How to increase container security with proper images

Security is a major factor when deciding whether to invest your precious time and resources in a new technology. It's no different for containers and Kubernetes. I've heard a lot of concerns around it and decided to write about the factors that have the biggest impact on the security of systems based on containers running on Kubernetes.
This is particularly important, as security is often the only impediment blocking a potential implementation of a container-based environment, taking away chances for speeding up innovation. That's why I decided to help all of you who want to strengthen the security of your container images.

Image size (and content) matters

The first factor is the size of a container image. It's also a reason why containers gained so much popularity: whole solutions, often quite complex, now consist of multiple containers running from images available publicly on the web (mostly from Docker Hub) that you can run in a few minutes.
If your application running inside a container gets compromised and the attacker gains shell access to the environment, he needs his own tools, and downloading them onto the host is often the first thing he does. What if he can't access any tools that would enable him to do so? No scp, curl, wget, python, or whatever else he could use. That would make it harder, or even impossible, to harm your systems.
That is exactly why you need to keep your container images small and provide only the libraries and binaries that are really essential and used by the process running in the container.

Recommended practices

Use slim versions for base images

Since you want to keep your images small, choose "fit" versions of base images. They are often called slim images and have fewer utilities included. For example, the most popular base image is Debian: its standard version debian:jessie weighs 50MB, while debian:jessie-slim weighs only 30MB. Besides less exciting doc files (list available here), the slim variant is missing a handful of binaries.
They are mostly useful for network troubleshooting, and I doubt that ping is a deadly weapon. Still, the fewer binaries are present, the fewer potential files are available to an attacker who might use some kind of zero-day exploit in them.
Unfortunately, neither official Ubuntu nor CentOS has a slim version. Debian looks like a much better choice; it may still contain too many files, but that's very easy to fix (see below).
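As a sketch, picking the slim variant is a one-line decision at the top of your Dockerfile (the application name below is a placeholder):

```dockerfile
# Hypothetical app image: same app, smaller attack surface,
# simply by starting from the slim variant of the base image
FROM debian:jessie-slim
COPY myapp /usr/local/bin/myapp   # myapp is a placeholder for your binary
CMD ["/usr/local/bin/myapp"]
```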

Delete unnecessary files

Do you need to install any packages from a running container? If you answered yes, then you're probably doing it wrong. Container images follow a simple Unix philosophy: they should do one thing (and do it well), which is run a single app or service. Container images are immutable, and thus any software should be installed during the container image build. Afterward, you don't need the package manager anymore, and you can disable it by deleting the binaries it uses.

For Debian, just put the following somewhere at the end of your Dockerfile:

RUN rm /usr/bin/apt-* /usr/bin/dpkg* 

Did you know that even without binaries you may still perform actions by using system calls? You can use any language interpreter, like python, ruby, or perl. It turns out that even the slim version of Debian contains perl! Unless your app is developed in perl, you can and should delete it:

RUN rm /usr/bin/perl*

For other base images you may want to delete the following file categories:

  • Package management tools: yum, rpm, dpkg, apt
  • Any language interpreter that is unused by your app: ruby, perl, python, etc.
  • Utilities that can be used to download remote content: wget, curl, http, scp, rsync
  • Network connectivity tools: ssh, telnet, rsh
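Putting those categories together, a single cleanup layer at the end of a Debian-based Dockerfile could look like this sketch; the paths are typical Debian locations and `rm -f` keeps the build going when a file is absent, so adjust the list to what your image actually contains:

```dockerfile
# Remove package managers, unused interpreters, download utilities
# and network tools in one final layer
RUN rm -f /usr/bin/apt-* /usr/bin/dpkg* \
          /usr/bin/perl* \
          /usr/bin/wget /usr/bin/curl /usr/bin/scp /usr/bin/rsync \
          /usr/bin/ssh /usr/bin/telnet
```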

Separate building from embedding

For languages that need compiling (e.g. Java, Go) it's better to decouple building the application artifact from building the container image. If you use the same image for building and running your container, it will be very large: development tools have many dependencies, so not only will they enlarge your final image, but they will also bring in binaries that could be used by an attacker.

If you're migrating your app to a container environment, you probably already have some kind of CI/CD process, or at least you build your artifacts on Jenkins or another server. You can leverage this existing process and add a step that copies the artifact into the app image.

For a full container-only build you can use the multi-stage build feature. Notice, however, that it works only with newer versions of Docker (17.05 and above) and it doesn't work with OpenShift. On OpenShift there is a similar feature called chained builds, but it's more complicated to use.
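A minimal multi-stage Dockerfile sketch for a Go app (image tags and paths here are assumptions): the first stage carries the full toolchain, while the final image receives only the compiled binary.

```dockerfile
# Stage 1: build with the full Go toolchain (requires Docker 17.05+)
FROM golang:1.12 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: the runtime image gets the binary and none of the toolchain
FROM debian:stretch-slim
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```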

Consider the development of statically compiled apps

Want to have the most secure container image? Put a single file in it. That's it. Currently, the easiest way to accomplish that is by developing the app in Go. Easier said than done, but consider it for your future projects, and also prefer vendors that are able to deliver software written in Go. As long as it doesn't compromise quality and other factors (e.g. delivery time, cost), it definitely increases security.
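Taken to the extreme, a statically compiled binary can live in an otherwise empty image; a sketch, with the binary name as a placeholder:

```dockerfile
# "scratch" is Docker's reserved empty image: no shell, no package
# manager, nothing for an attacker to reuse
FROM scratch
COPY myapp /myapp   # statically linked, e.g. built with CGO_ENABLED=0
ENTRYPOINT ["/myapp"]
```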

Maintaining secure images

There are three types of container images you use:

  1. Base images
  2. Services (a.k.a. COTS, commercial off-the-shelf software)
  3. Your applications

Base images aren't used to run containers; they provide the first layer with all the necessary files and utilities.
For services, you don't need to build anything; you just run them and provide your configuration and a place to store data.
Application images are built by you and thus require a base image on top of which you build your final image.

Container image types

Of course, both services and applications are built on top of a base image, and thus their security depends largely on its quality.
We can distinguish the following factors that have the biggest impact on container image security:

  • The time it takes to publish a new image with fixes for vulnerabilities found in its packages (binaries, libraries)
  • The existence of a dedicated security team that tracks vulnerabilities and publishes advisories
  • Support for automatic scanning tools, which often need access to a security database with published fixes
  • A proven track record of handling major vulnerabilities in the past

In organizations where many applications are built, an intermediary image is often built and maintained and then used as the base image for all the applications. It primarily provides common settings, additional files (e.g. the company's certificate authority), or tightened security. In other words, it is used to standardize and enforce security in all applications used throughout the organization.

Use of intermediary images

Now we know how important it is to choose a secure base image. However, for service images we rely on a choice made by the vendor or the image creator. Sometimes they publish multiple editions of an image: in most cases one based on some Debian/Ubuntu, plus an additional one on Alpine (like redis, RabbitMQ, Postgres).

Doubts around Alpine

And what about Alpine? Personally, I'm not convinced it's a good idea to use it in production. There are many unexpected and strange behaviors, and I just don't find it good enough for a reliable, production environment. The lack of security advisories and a single repo maintained by a single person are not the best recommendation. I know that many have trusted this tiny Linux distribution that is supposed to be "Small. Simple. Secure.". I'm using it for many demos, trainings, and tests, but still, it hasn't convinced me that it's as good as good old Debian, Ubuntu, or CentOS.
And just recently a major vulnerability was discovered: an empty password was set for the root user in most of their images. Check whether your images use an affected version and fix it ASAP.

Recommended practices

Trust but verify

As I mentioned before, sometimes even official images don't offer the best quality, and that's why it's better to verify them as part of your CI/CD pipeline.
Here are some tools you should consider:

  • Anchore (open source and commercial)
  • Clair (open source)
  • OpenSCAP (open source)
  • Twistlock (commercial)
  • JFrog Xray (commercial)
  • Aquasec (commercial)

Use them wisely, and also use them to check your currently running containers: critical vulnerabilities are found often, and waiting for the next deployment can pose a serious security risk.
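As an illustration of wiring a scanner into a pipeline, here is a hypothetical GitLab CI stage using Anchore's CLI; the stage name and variables are assumptions, and the other tools from the list plug in similarly:

```yaml
# Fail the pipeline when the freshly built image has known OS-level vulnerabilities
scan-image:
  stage: test
  script:
    - anchore-cli image add "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
    - anchore-cli image wait "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
    - anchore-cli image vuln "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" os
```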

Use vendor provided images

For service images, it is recommended to use images provided by a vendor. They are available as so-called "official images". The main advantage is that they are updated not only when a vulnerability is found in the software itself but also when one is found in the underlying OS layer. Think twice before choosing a non-official image or building one yourself: it's usually just a waste of your time. If you really need to customize beyond providing configuration, you have at least two ways to achieve it.
For smaller changes, you can override the entrypoint with your own script that modifies some small parts; the script itself can be mounted from a ConfigMap.

Here's a Kubernetes snippet of a spec for a sample nginx service with a script mounted from the nginx-entrypoint ConfigMap:

  containers:
    - name: mynginx
      image: mynginx:1.0
      command: ["/ep/entrypoint.sh"]   # entrypoint.sh: assumed name of the script key in the ConfigMap
      volumeMounts:
        - name: entrypoint-volume
          mountPath: /ep/
  volumes:
    - name: entrypoint-volume
      configMap:
        name: nginx-entrypoint

For bigger changes, you may want to create your own image based on the official one and rebuild it whenever a new version is released.
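For example, a thin layer on top of the official nginx image might look like this sketch (the certificate file name is a placeholder):

```dockerfile
# Rebuild this image whenever a new upstream tag is released
FROM nginx:1.17
COPY company-ca.crt /usr/local/share/ca-certificates/company-ca.crt
RUN update-ca-certificates   # available in the Debian-based official nginx images
```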

Use good quality base images

Most official images available on Docker Hub are based on Debian. I even did some small research on this a couple of years ago (it is available here), and for some reason, Ubuntu is not the most popular distribution. So if you like Debian, you can feel safe and join the others who also chose it. It's a distribution with a mature ecosystem around it.
If you prefer rpm-based systems (or your software requires them for some reason), then I would avoid CentOS and consider Red Hat Enterprise Linux images. With RHEL 8, Red Hat released the Universal Base Image (UBI), which can be used freely without any fees. I simply trust those guys more, as they have invested a lot of resources in containers recently, and these new UBI images should be updated frequently by their security team.

Avoid building custom base images

Just don't. Unless your paranoia level is close to extreme, you can reuse images like Debian, Ubuntu, or the UBI described earlier. On the other hand, if you're really that paranoid, I don't think you trust anyone, including the guys who brought us containers and Kubernetes in the first place.

Auto-rebuild application and intermediary images when vulnerabilities are found in a base image

In the old, legacy VM-based world, the process of fixing vulnerabilities found in a system is called patching. A similar practice takes place in the container-based world, but in its own specific form. Instead of fixing things on the fly, you need to replace the whole image together with your app. The best way to achieve this is to create an automated process that rebuilds it and even triggers a deployment. I found that OpenShift's ImageStreams feature addresses this problem best (details can be found in my article), but it's not available on vanilla Kubernetes.
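On OpenShift, the rebuild-on-base-image-update behavior boils down to an ImageChange trigger on a BuildConfig; a sketch, with placeholder names:

```yaml
# BuildConfig fragment: rebuild the app image whenever the tracked
# base ImageStreamTag gets a new image
spec:
  strategy:
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: ubi8:latest
  triggers:
    - type: ImageChange   # fires on updates to the strategy's "from" image
```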


We switched from fairly static virtual machines to dynamic, ephemeral containers, yet with the same challenge: how to keep them secure. In this part, we have covered the steps required to address it at the container image level. I always say that container technology enforces a new way of handling things we used to accomplish through rather manual (e.g. golden images, VM templates), error-prone processes. When migrating to containers, you need to embrace an automatic and declarative style of managing your environments, including security-related activities. Leverage that to tighten the security of your systems, starting with the basics: choosing and maintaining your container images.

Brakes on innovation in Poland

It has never been better

Apparently we live in the safest of times: no wars for several decades, no serious epidemics, relative peace and prosperity.
There were, are, and always will be people who constantly complain; what matters is not getting sucked into that whirl of doom-mongering (the Polish saying "you become who you surround yourself with", or the scientific take that you are the average of the five people around you).

At the same time, these benefits have brought access to knowledge closer to everyone (okay, everyone with access to any computer and the internet, so ~80-90% of people in Poland?). Now, like never before, all it takes is to want it. To want it and start acting. It has never been easier, really!

And here we are in 2018, and I have the impression that in a large share of organizations, whether in the public or the private sector, little has changed. Okay, there are nicer computers than 10 or 15 years ago, nicer tools, faster internet (photos from Pudelek load faster, probably in 4K!). And more: many people have acquired knowledge about introducing new methods to improve the organization of work, hence the ubiquitous Scrum, standups, Kanban, and Lean.


So what is currently blocking innovation in IT in Poland? What makes some people choose careers in the West, seeing no prospects for growth at home? Here is what I believe are the main brakes.

The old guard, old habits

By "old" I don't mean age. There are people younger than me who have achieved a great deal, keep growing, see the need for improvement, have energy, and work hard. There are also people close to retirement who, even though their youth fell in times when a color TV was something "fancy", can still notice change and, thanks to their experience, tell what matters from what doesn't. On the other hand, I know people who, regardless of age, are stuck in place, in their convictions and immutability, and are almost proud of it.

And no, it's not the case that cars used to be better and more reliable, food healthier, and people kinder. Psychology explains this (e.g. here), but for me it's simply an escape and yet another attempt at justification: looking for excuses, laziness, and so on.
By contrast, some of the most valuable books were in fact written long ago (take Marcus Aurelius's "Meditations"), and consequently were available to many people, much like access to technology is today.

Fear and perfectionism

How many times, when introducing a change, have I heard that we should take this and that into account so we can do it once and do it right. Sure, if we were building a bridge or a house, fine, but not in IT! Here, where "agile" gets added to the name of every new product (gosh, maybe some people never checked what "Agile" means?), that attitude is absurd. Introducing Scrum, "implementing DevOps", or migrating applications to microservices won't change it. It's camouflaged perfectionism rooted in us, instilled from preschool, reinforced in schools, and enforced in organizations. Because if you make a mistake, you are apparently not good enough ("after all, isn't that why we pay you so much in this IT?"). And what will your boss's boss say? What if it reaches management in a faraway country? It has to be right. Immediately. Worst of all, we reinforce it in each other, competing at finding flaws in solutions: whoever finds more wins. I admit that, trained by the years, I often catch myself taking part in this competition too. Sad but true. So while I am a great fan of agile methods and the method of small steps, I don't believe it's achievable without changing the way we think; after all, even when delivering small pieces of a bigger project, we can be constrained by the urge to get everything right at once out of concern for potential criticism.

Lack of trust

The foundation of building good teams (at least according to Patrick Lencioni and his "The Five Dysfunctions of a Team") is trust. How can you fully entrust a task to someone while watching them every step of the way, checking up and micromanaging? You can't; you only pretend, and largely just exploit your position in the organizational hierarchy so beloved by Poles. Perhaps these are habits still lingering from the era of our grandparents and parents, when caution was a virtue due to the oppressive nature of the regime they had to live under. It's not my place to trace the causes, but the fact is that I still too often experience limited trust or a complete lack of it. This is something we still have to learn, because we cannot change what has been, but we certainly have influence over what will be.


Fortunately, I also meet groups of people who have learned (on their own, from good surroundings, or through experience and reflection) a different way of thinking and acting. It is with them that it's worth "moving mountains" and pushing forward, while jointly struggling with those to whom such values are distant and alien. May there be more of the former than the latter on our professional (and not only professional) path!