I like the concept of Docker and containerization in general, but I have some pretty fundamental concerns:
- How many deployed Docker images were torn down and redeployed upon the revelation of Heartbleed? Of Shellshock? In practice, not in theory.
- How many Docker images are regularly destroyed and redeployed for the purpose of updating their userlands? Again, in reality, even with the most agile orchestration.
- How many Docker images are actually deployed with a minimal attack surface, containing only the executables and libraries they need, rather than entire userlands?
- How many Docker images are handed to IT/Ops as single filesystem images rather than as multi-gigabyte stacks of change layers, which contribute heavily to wasted storage space?
- How can Docker images composed of random people’s aging Linux userlands ever be taken seriously in an environment that needs to be kept certified, stable and secure?
- What is the benefit of Docker given the above, when LXC and Libvirt-LXC perform the same containerization, give Ops far greater flexibility in orchestration and change management, and have done so for years?
- Dan Walsh of Red Hat has much to say about the security of Docker and LXC containers. His most important point is that "containers don't contain": containers provide no security boundary; they are only useful for deploying applications in a manageable way. Given this, is it responsible to ship containers based on full Linux filesystems? If you do, you had better be ready to tear down your ENTIRE stack each and every time a major vulnerability comes to light.
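For contrast, the minimal-attack-surface approach is possible today: an image can be built from an empty base and ship only the one binary it needs, with no shell, package manager, or distro userland to patch. A minimal sketch, assuming a statically linked binary named `myapp` (a hypothetical name for illustration):

```dockerfile
# Start from an empty filesystem rather than a full Linux userland
FROM scratch

# Copy in only the statically linked application binary
# ("myapp" is a placeholder; it must not depend on shared libraries,
# since there is no libc or loader in the image)
COPY myapp /myapp

ENTRYPOINT ["/myapp"]
```

An image like this sidesteps several of the concerns above: there is no aging userland to certify or keep updated, and redeploying after the next Heartbleed means rebuilding one binary, not tearing down and rebuilding an entire distribution.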
Points worth pondering: they affect the future direction of container technology and shed light on the implications of adopting it today.