Virtualization has revolutionized the world of information technology. The method of distributing a physical computer’s resources across several virtual machines (VMs) first appeared in the form of hardware virtualization. This approach is based on emulating hardware components so that different virtual servers, each with its own operating system (OS), can run on one shared host system. A structure like this is often used in software development when different test environments need to run on a single computer. Virtualization also forms the basis of various cloud-based web hosting products.

One alternative to hardware virtualization is operating-system-level virtualization, where various server applications run in isolated virtual environments, or containers, on the same operating system. This is also called container-based virtualization. Like virtual machines, which have their own operating systems, containers can run different applications with varying requirements on the same physical system. Since containers don’t have their own OS, this virtualization technology is characterized by a considerably more streamlined installation process and lower overhead.

Server containers are nothing new, but today the technology has come to prominence through open-source projects such as Docker and CoreOS’s rkt.

What are server containers?

Hardware virtualization relies on a so-called hypervisor, which runs on the host system’s hardware and distributes its resources proportionately between the guest operating systems. With container-based virtualization, on the other hand, no additional operating systems are started; instead, the common OS creates isolated instances of itself. Within these containers, applications have a complete runtime environment at their disposal.

Software containers can fundamentally be regarded as server apps. To install an application, it is packaged together with all the required files into a portable format (an image), which can then be loaded onto a computer and started in an isolated environment. Application containers can be implemented on practically any operating system: Windows systems use Virtuozzo (software developed by Parallels), FreeBSD offers the virtualization environment Jails, and Linux systems support OpenVZ and LXC containers. However, operating-system-level virtualization has only become attractive for the mass market through container platforms such as Docker and rkt, which add basic features that make handling server containers a simpler task.

Side note: Docker and the comeback of container technology

Users dealing with container-based virtualization will invariably encounter Docker at some point. Thanks to its outstanding marketing, the open-source project has quickly become synonymous with container technology. Docker’s command-line tool is used for starting, stopping, and managing containers. It relies on Linux kernel features such as cgroups and namespaces to separate the resources of individual containers. Initially, the LXC interface of the Linux kernel was used; these days, however, Docker containers use a self-developed interface called libcontainer.
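As a rough sketch, this day-to-day handling comes down to a handful of docker subcommands. The container name demo and the nginx image are arbitrary example choices, and the --memory/--cpus flags illustrate the cgroups-based resource limits mentioned above; the commands are skipped when no Docker daemon is reachable:

```shell
# Start, inspect, and remove a container; "demo" and "nginx" are
# illustrative examples. --memory/--cpus apply cgroups resource limits.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker run -d --name demo --memory=256m --cpus=1 nginx
  docker ps          # list running containers
  docker stop demo   # stop the container
  docker rm demo     # delete it
fi
echo "lifecycle sketch complete"
```

Since every container is addressed through the same few subcommands, the same workflow applies regardless of which application the container holds.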

One central feature of the Docker platform is Docker Hub, an online service that hosts a repository for Docker images, so that self-created images can easily be shared with other users. For Linux users, installing a pre-built server container is as simple as using an app store: applications can be downloaded from the central Docker Hub via simple command-line instructions and run on your own system.
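Concretely, fetching and running such a pre-built container takes only a couple of instructions. A hedged sketch using the official nginx image as an example (any image name on Docker Hub works the same way; the commands are skipped when no Docker daemon is reachable):

```shell
# Download an image from Docker Hub and run it; "nginx" and the
# container name "web" are example choices.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker pull nginx                          # fetch the image from Docker Hub
  docker run -d -p 8080:80 --name web nginx  # map host port 8080 to port 80
  docker stop web && docker rm web           # clean up
fi
echo "pull/run sketch complete"
```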

Docker’s biggest competitor on the container solution market is rkt, which supports Docker images as well as its own format, the App Container Image (ACI).

Characteristics of container-based virtualization

With application containers, all the files required for operating server applications are provided in one handy package, allowing for a more streamlined installation and simpler operation of complex server programs. However, their main selling points are the management and automation of container-based applications.

  • Easier installation process: software containers are started from images, portable copies of a container that bundle a single server program with all the required components, such as libraries, supporting programs, and configuration files. Differences between operating system distributions are thus compensated for, allowing installation with just one command-line instruction.
  • Platform independence: images can easily be transferred from one system to another and are characterized by a high level of platform independence. To start a software container from an image, all you need is an operating system with a corresponding container platform.
  • Minimal virtualization overhead: a minimal Linux system with Docker takes up around 100 megabytes and can be set up in a matter of minutes. But it’s not only the compact size that appeals to system administrators; container solutions also keep virtualization overhead to a minimum. This contrasts with the significantly reduced performance of hardware virtualization, caused by the hypervisor and the additional operating systems. Furthermore, booting virtual machines can take several minutes, whereas containerized server apps are available almost immediately.
  • Isolated applications: every program in a server container runs independently of other software containers on the OS. This allows even applications with contradictory requirements to run in parallel on the same system with ease.
  • Standardized administration and automation: as all server containers are managed on one container platform (e.g. Docker) with the same tools, the applications in the data center can largely be automated. Container solutions are therefore especially suited to server structures in which individual components are distributed across multiple servers, so that the load is carried by several machines. For such areas of application, Docker provides automation tools that allow new instances to be started automatically at peak loads. Google also offers a software solution for orchestrating large container clusters that is tailored especially for Docker: Kubernetes.
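The image format described in the first bullet above is typically defined in a Dockerfile, which lists everything the container needs. A minimal, hypothetical example that packages a static site on top of the official nginx base image (the paths, tags, and image name are illustrative):

```dockerfile
# Hypothetical image definition: base layer, application files, exposed
# port, and start command, i.e. the whole server in one portable package.
FROM nginx:alpine
COPY ./site /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Building and starting it would then come down to a `docker build` followed by a `docker run` on any host with a container platform, which is what gives images their platform independence.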

How secure are container solutions?

Forgoing separate operating systems gives container-based virtualization a performance advantage, but this is accompanied by a reduced level of security. With hardware virtualization, a security issue in a guest operating system normally affects only that virtual machine, whereas with operating-system-level virtualization a vulnerability in the shared OS affects all software containers on the host. Containers are therefore not encapsulated to the same extent as virtual machines with their own OS. Admittedly, an attack on the hypervisor could cause significant damage in hardware virtualization systems; thanks to its low complexity, however, a hypervisor offers attackers far fewer points of entry than, for instance, a Linux kernel. Server containers are therefore a credible alternative to hardware virtualization, although for the time being they can’t be considered a complete replacement.
