Operating System Containers vs. Application Containers

This article is a guest post from Akshay Karle, who is a developer at ThoughtWorks, currently working on Snap CI.

Thanks to Docker, containers have gained significant popularity lately among developer and ops communities alike. Many people simply want to use Docker because of its rising popularity, without understanding whether a Docker container is what they need. There are many container technologies out there to choose from, but there is a general lack of knowledge about the subtle differences between them, and about when to use what.

The need for containers

Hypervisor-based virtualization technologies have existed for a long time now. Since a hypervisor or full-virtualization mechanism emulates the hardware, you can run any operating system on top of any other: Windows on Linux, or the other way around. Both the guest and the host operating system run with their own kernel, and the guest communicates with the actual hardware through an abstraction layer provided by the hypervisor.

[Figure: hypervisor-based virtualization]

This approach usually provides a high level of isolation and security, as all communication between the guest and the host goes through the hypervisor. It is also usually slower and incurs significant performance overhead due to the hardware emulation. To reduce this overhead, another level of virtualization called "operating system virtualization" or "container virtualization" was introduced, which allows multiple isolated user-space instances to run on the same kernel.

What are containers?

Containers are the product of operating system virtualization. They provide a lightweight virtual environment that groups and isolates a set of processes and resources such as memory, CPU and disk from the host and from any other containers. The isolation guarantees that processes inside a container cannot see processes or resources outside it.

[Figure: operating system virtualization]

The difference between a container and a full-fledged VM is that all containers share the kernel of the host system. This makes them very fast, with almost zero performance overhead compared to VMs, and lets them utilize the computing resources better. However, sharing the kernel also has its shortcomings:

  • Kernel compatibility -- the containers installed on a host must work with the kernel of that host. Hence, you cannot run a Windows container on a Linux host or vice versa.
  • Isolation and security -- the isolation between the host and the container is not as strong as with hypervisor-based virtualization, since all containers share the host's kernel. There have been cases in the past where a process inside a container managed to escape into the kernel space of the host.

Common cases where containers can be used

So far, I have noticed containers being used for two major purposes: as a usual operating system, or as an application packaging mechanism. There are also other cases, like using containers as routers, but I don't want to get into those in this post.

I like to classify container technologies based on how they can be used, although it is not a must to use a given technology for just that case -- you may very well use it for others. I've classified them this way because I find certain technologies easier to use for certain cases. Based on the two uses mentioned above, I've classified containers as OS containers and application containers.

OS containers

OS containers are virtual environments that share the kernel of the host operating system but provide user-space isolation. For all practical purposes, you can think of OS containers as VMs. You can install, configure and run different applications, libraries, etc., just as you would on any OS. And just as in a VM, anything running inside a container can only see the resources that have been assigned to that container.

OS containers are useful when you want to run a fleet of identical or different flavors of distros. Most of the time, containers are created from templates or images that determine their structure and contents. This allows you to create containers that have identical environments, with the same package versions and configurations, across the fleet.

[Figure: OS containers]

Container technologies like LXC, OpenVZ, Linux-VServer, BSD Jails and Solaris Zones are all suitable for creating OS containers.
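
For instance, creating and entering an Ubuntu OS container with classic LXC could look roughly like this (a sketch; template names and flags vary by distribution and LXC version):

    # create a container named "web" from the ubuntu template
    $ sudo lxc-create -t ubuntu -n web

    # start it in the background, then attach a shell to it
    $ sudo lxc-start -n web -d
    $ sudo lxc-attach -n web

Inside, the container behaves like a full OS: you can install packages, run multiple services, and manage it much like a lightweight VM.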

Application containers

While OS containers are designed to run multiple processes and services, application containers are designed to package and run a single service. Container technologies like Docker and Rocket are examples of application containers. So even though they share the kernel of the host, there are subtle differences that set them apart, which I would like to discuss using the example of a Docker container:

Run a single service as a container

When a Docker container is launched, it runs a single process. This process is usually the one that runs your application when you create one container per application. This is very different from the traditional OS containers, where you have multiple services running on the same OS.
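
For example, running a single Node.js service as a container might look like this (the image name my-node-app is hypothetical):

    # the container runs exactly one process: the node server
    $ docker run -d --name app my-node-app node server.js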

Layers of containers

[Figure: layers of a Docker container]

Each RUN command you specify in the Dockerfile creates a new layer for the container. In the end, when you run your container, Docker combines these layers and runs your container. Layering helps Docker reduce duplication and increase reuse. This is very helpful when you want to create different containers for your components: you can start with a base image that is common to all the components and then just add layers that are specific to your component. Layering also helps when you want to roll back your changes, as you can simply switch to the old layers, and there is almost no overhead involved in doing so.
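
A minimal sketch of such a Dockerfile (the base image and file names are illustrative); each instruction below produces a layer that Docker can cache and share between images:

    # base layer, shared by all of your Node.js components
    FROM node:0.12

    # each instruction adds a new, cacheable layer on top
    COPY package.json /app/
    RUN cd /app && npm install

    # your component's code sits in its own layer
    COPY . /app

    # the single process this container runs
    CMD ["node", "/app/server.js"]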

Built on top of other container technologies

Until some time ago, Docker was built on top of LXC. If you look at the Docker FAQ, it mentions a number of points that set Docker apart from plain LXC.

The idea behind application containers is that you create different containers for each of the components in your application. This approach works especially well when you want to deploy a distributed, multi-component system using the microservices architecture. The development team gets the freedom to package their own applications as a single deployable container. The operations teams get the freedom to deploy the container on the operating system of their choice, as well as the ability to scale the different applications both horizontally and vertically. The end state is a system of different applications and services, each running as a container, that talk to each other using the APIs and protocols that each of them supports.

To explain what it means to run an app container using Docker, let's take the simple example of a three-tier web architecture with a PostgreSQL data tier, a Node.js application tier and Nginx as the load-balancer tier.

In the simplest cases, using the traditional approach, one would put the database, the Node.js app and Nginx on the same machine.

[Figure: the simplest 3-tier architecture]

Deploying this architecture as Docker containers would involve building a container image for each of the tiers. You then deploy these images independently, creating containers of varying sizes and capacity according to your needs.

[Figure: a typical 3-tier architecture with Docker containers]
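
A sketch of that deployment with the Docker CLI (image names such as my-node-app are illustrative; --link was the usual way to connect containers at the time):

    # data tier
    $ docker run -d --name db postgres

    # application tier, linked to the database
    $ docker run -d --name app --link db:db my-node-app

    # load-balancer tier, linked to the app and exposed on port 80
    $ docker run -d --name lb --link app:app -p 80:80 nginx

In practice, the Nginx image would carry its own proxy configuration as an extra layer.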


So, in general, when you want to package and distribute your application as components, application containers serve as a good option. Whereas, if you just want an operating system in which you can install different libraries, languages, databases, etc., OS containers are better suited.

[Figure: OS containers vs. application containers]

Shipping Node.js Applications with Docker and Codeship

Setting up continuous deployment of Node.js applications is now easier than ever. We have tools like Jenkins, Strider, Travis and Codeship. In this article, we are going to use Codeship with Docker and Ansible to deploy our Node.js application.

Before diving deeper, I want to emphasize a key principle: immutable infrastructures -- what they are and how they can make your life easier.

Immutable Infrastructures

Immutable infrastructures usually consist of two parts: data, and everything else. The "everything else" part is replaced on each deploy; not even security patches or configuration changes happen on running production systems. To achieve this, we can choose between two approaches: the machine-based and the container-based approach.


Machine-based immutability can work like this: on each deploy, you set up entirely new EC2 machines and deploy your applications on them. If everything is okay, you simply modify your load-balancer configuration to point to the new machines. Later on, you can delete the old ones.


You can think of the container-based approach as an improvement over the machine-based one: on a single machine you can have multiple containers running. Docker makes this relatively easy -- it is an open platform for developers and sysadmins to build, ship, and run distributed applications.

Sure, you could use VMware or VirtualBox in place of containers, but while a Docker container starts in seconds, virtual machines take minutes.
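
In its simplest form, a container-based immutable deploy could look like this (image and container names are illustrative):

    # pull the freshly built image
    $ docker pull my-node-app:latest

    # replace the old container with a brand-new one
    $ docker stop app && docker rm app
    $ docker run -d --name app -p 3000:3000 my-node-app:latest

Nothing on the old container is patched or reconfigured; it is simply thrown away.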

Advantages of Immutable Infrastructures

To take full advantage of this approach, you should have a Continuous Delivery pipeline set up, with tests and orchestration as well.

The main advantages:

  • Going back to older versions is easy
  • Testing the new infrastructure in isolation is possible
  • Change management gets simpler, as servers never rot

Get started

It is time to get our hands dirty! We are going to create and deploy a Hello Docker & Codeship application.

For this, we are going to use a simple application that returns the "We <3 Docker & Codeship" string via HTTP.
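
A minimal sketch of what such an application might look like (the actual sample application may differ):

    // server.js - a tiny HTTP server that returns a greeting
    var http = require('http');

    var server = http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('We <3 Docker & Codeship');
    });

    // listen on the port given by the environment, or 3000 by default
    server.listen(process.env.PORT || 3000);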

Here is what we are going to do:

  • When someone pushes to the master branch, GitHub will trigger a build on Codeship
  • If everything is OK, Codeship triggers a build on Docker Hub
  • After the new Docker image is ready (pushed), Docker Hub triggers a webhook
  • Ansible pulls the latest image to the application servers (Docker Deployer)

[Figure: the deployment pipeline with Docker, Ansible and Codeship]

Create a Docker Hub account

What is Docker Hub?

Docker Hub manages the lifecycle of distributed apps with cloud services for building and sharing containers and automating workflows.

Go to Docker Hub and sign up.

Setting up a Docker repository

After signing up, and adding your GitHub account, go under My Profile > My Repositories > Add repositories and click Automated build.

After setting up your repository, enable Build triggers. This will give you a command similar to the following, with a trigger URL specific to your repository:

$ curl --data "build=true" -X POST <your-trigger-url>

Also make sure that you deactivate the GitHub commit hook under Automated build - remember, Codeship will listen for commits to the Git repository.

That's it: your Docker Hub repository is ready to be used by Codeship.

Get a Codeship account

Go to Codeship, and get one.

Set up your repository on Codeship

You can connect your GitHub/Bitbucket account to Codeship. After you have given access to Codeship, you will see your repositories listed. Here I chose the repository mentioned before. Then choose Node.js and click "Save and go to my dashboard".

Modify your Deploy Commands

Under the deploy settings, choose custom script and insert the previously generated curl command from Docker Hub. That's it :).

The Docker Deployer

This part does not come out of the box. You have to implement a little API server that listens for the Docker Hub webhook. When the endpoint is called, it runs Ansible, which pulls the latest Docker image to the application servers.

Note: of course, you are not limited to Ansible - any other deploy/orchestration tool will do the job.
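
A minimal sketch of such a deployer in Node.js (the endpoint path, playbook and inventory names are assumptions; a real implementation should also verify that the request really comes from Docker Hub):

    // deployer.js - listens for the Docker Hub webhook and runs Ansible
    var http = require('http');
    var exec = require('child_process').exec;

    http.createServer(function (req, res) {
      // hypothetical endpoint called by the Docker Hub webhook
      if (req.method === 'POST' && req.url === '/deploy') {
        // run a playbook that pulls the new image and restarts the containers
        exec('ansible-playbook -i hosts deploy.yml', function (err) {
          if (err) {
            console.error('deploy failed:', err);
          }
        });
        res.writeHead(202);
        return res.end('deploy triggered');
      }
      res.writeHead(404);
      res.end();
    }).listen(process.env.PORT || 8080);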

Always keep shipping

As you can see, setting up a Continuous Delivery pipeline with an immutable infrastructure can be achieved easily - it is useful not only in your production environment, but in staging and development environments as well.

Note: This post was picked up and republished by Codeship. You can read more about how to ship applications with Docker and Codeship on their blog.