Myths around containers. Part 1: Security


We have seen many revolutions in the IT infrastructure world over the past 20 years or so. Virtualization promised hardware abstraction, private cloud promised lower costs and flexibility, and containers keep adding to that pile, creating a vision of a perfect world. It's a natural part of evolution and progress, and that's okay. I don't, however, believe in perfect solutions or the empty promises we've been told. The same goes for the latest revolution - containerization. Don't get me wrong - I'm a big fan of putting software in self-contained boxes that you can run anywhere, especially with the many abstraction layers brought by Kubernetes, but some might believe that alone is sufficient to make everything perfect from now on.

Let's debunk those myths and take a look at the promises that have arisen around containers. We'll start with one of the most important parts - security.

Myth: You need to focus mostly on securing containers, regardless of where they run

One of my favourite security-related questions to ask when I visit a customer is:

Do your servers run in SELinux enforcing mode?

Most of the time I hear awkward silence followed by a negative answer. SELinux is one of the best technologies for keeping your systems secure, even when you "forgot" to update them regularly. It's even better when you tailor it to your software by writing your own policies. Containers can be made secure the same way virtual machines can, with one major exception - they all run on a single kernel, so that single host is a crucial part, as its security affects all containers running on it. Another concern is vulnerabilities not yet discovered or published. That's why it's so important to keep those "old-school" security mechanisms in place.
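As a concrete baseline, on RHEL-family systems the persistent SELinux mode is controlled by a single file; the sketch below shows what it should contain (you can check the live mode at any time with `getenforce`):

```
# /etc/selinux/config - persistent SELinux mode.
# "enforcing" blocks policy violations; "permissive" only logs them.
SELINUX=enforcing
SELINUXTYPE=targeted
```

A reboot (or `setenforce 1` for the running system) applies the change.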

It's even more important after the disclosure of the Spectre and Meltdown vulnerabilities. No matter how smart you are about securing your containers, you'll fail miserably if you skip the host part.

Truth:

Container security depends largely on the security of the host it's running on

Myth: Containers are strongly isolated from the host OS and other containers

Bare containers are isolated and restricted by default. They are not allowed to call internal kernel functions (syscalls) that could change the host system's state or do other harm. The list can be extended with the seccomp feature to limit containers to only the syscalls actually used and required by your app. This is, however, similar to writing the SELinux policy I mentioned before and boils down to the same conclusion - many people will stick to the default settings. Hopefully most will avoid putting containers in privileged mode, but I'm sure there will be some brave ones (I'd rather call them lazy and reckless). And if you think that containers are fully isolated, then please read this great article from Sysdig. Not all parts of the Linux kernel are "container-aware", and many are still shared among all containers running on a particular host.
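To sketch the seccomp approach: Docker accepts a JSON profile via `--security-opt seccomp=profile.json`. The profile below is a deliberately tiny, illustrative allowlist - a real application needs far more syscalls, so treat the names here as placeholders, not a working policy:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "close", "exit", "exit_group", "futex"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Any syscall not on the list fails with an error instead of ever reaching the kernel.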

Truth:

Containers bring powerful isolation for those who want to use it, but they still share parts of the host operating system with the host and other containers

Myth: Using orchestrators makes containers even more secure

And what about orchestrators - do they bring even more security? They might. For example, OpenShift in its default settings forbids you to launch containers running as root - you need to change the OpenShift configuration or provide a container that runs as a non-root user. There are even more settings available for storage access, UID mapping and filesystem sharing - all of those, and many more, coming from Kubernetes. There's one area, however, where orchestrators are falling behind traditional infrastructure - network traffic isolation. Traditionally we've used IP addresses and ports, but in a highly dynamic environment it's no longer that easy. Only this year did Kubernetes release a mechanism (NetworkPolicy) for ingress and egress filtering. Without it, controlling traffic in a constantly changing environment was almost impossible. In mixed environments it's even worse, as it's hard to control the outgoing addresses containers use to access the legacy world, where access is controlled via IP-based firewalls.
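That ingress and egress filtering is expressed as a NetworkPolicy object. A minimal sketch, with hypothetical labels and port: pods labeled `app=api` accept traffic only from `app=frontend` pods and may only open outbound TCP connections to port 5432:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow            # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: api               # the pods this policy applies to
  policyTypes: ["Ingress", "Egress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
  egress:
    - ports:
        - protocol: TCP
          port: 5432
```

Note that enforcement requires a network plugin that actually supports NetworkPolicy; otherwise the object is silently ignored.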

Truth:

Orchestrators can enhance security, and some even do it by default, but they still lack features available in traditional, VM-based environments.
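In plain Kubernetes, the non-root restriction that OpenShift applies by default has to be requested explicitly. A sketch of a pod spec (image name hypothetical) that refuses to start a container running as root:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      securityContext:
        runAsNonRoot: true                  # kubelet rejects containers running as UID 0
        allowPrivilegeEscalation: false
```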

Myth: Container images cannot be easily tampered with

When delivering packages (rpms or debs) you rely on GPG signatures, which are enabled by default. You can congratulate yourself if you've been using the equivalent approach with containers - image signing - but many people haven't even heard of it, let alone used it. The only widely used method to make sure you download untampered images is… HTTPS. You trust hub.docker.com by trusting their SSL certificate and the built-in verification in the Docker client. I have, however, seen many private Docker registries using self-signed certs, or with security disabled for whole IP ranges (the --insecure-registry Docker option). It's a serious subject, but not that easy to implement, which may be why it's not so popular. That's why OpenShift has ImagePolicy to prevent using images from untrusted sources. I trust that Docker's procedures for official images prevent any tampering and remove vulnerabilities as soon as they are disclosed, but relying solely on the SSL certificate chain when downloading them is just ridiculous.
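For completeness, signature verification in the Docker client (Content Trust) is a one-line opt-in; the registry and image name below are hypothetical:

```shell
# Enable Docker Content Trust for this shell session; the client will
# then refuse to pull or run image tags without a valid signature.
export DOCKER_CONTENT_TRUST=1
echo "content trust enabled: $DOCKER_CONTENT_TRUST"

# With trust enabled, pulling an unsigned tag fails instead of
# silently succeeding:
#   docker pull registry.example.com/myteam/app:1.0   # hypothetical image
```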

Truth:

Container image signing is not enabled by default, it's not that easy to use, and you need to be careful about where you download images from

Conclusion

Containers can be much safer in some ways, while in others they are still falling behind (firewalling, tooling); they bring you more security by default (unless you explicitly disable it), and it's always YOUR job to leverage it.

And do they overpromise when it comes to security? I don’t think so. In many ways you could achieve the same with existing tools. In my opinion people don’t start using containers to increase security - it’s not a major factor when considering investment of precious resources (i.e. people’s time) in adopting them.

Next time I'm going to take a look at container portability to see how it differs from other, well-known tools and methods born in the pre-container world. Stay tuned!

Update 2018-1-29 - Part 2 published
