Why Containers

In this blog post, we’re going to talk about distributed architectures, and the progression they’ve made over time. We’re going to do this because a long time ago, when we were watching the Getting Started with Docker training on Pluralsight, Josh started ranting about this very topic and Laine told him it should be a blog post, because the information that just falls out of his head sometimes is really cool. You’re welcome, internet!

The Progression

One Server, Two Server, Red Server, 5000 Servers…ahh, panic! (1990s-2000s)

You need to run server applications, e.g. a mail server, so you spin one up. You pay the hardware and licensing costs, install and configure some software, and keep up with updates, patches, and user requests. By hand.

One server, not so bad. We definitely got this, you guys.

…but eventually you need to run more server applications, because “hey, these things are cool!” This leads to more servers. Which of course leads to more patching, installs, configuring, space, networking, costs – and ultimately, more time.

We maybe do not got this…

It’s a bird! It’s a plane! It’s…VMware! (Late 2000s)

Hypervisor technology (VMware, KVM, Microsoft Hyper-V) let server admins virtualize servers. This meant they could:

  1. Create massive pools of compute, networking, and storage resources from relatively inexpensive commodity blade servers
  2. Manage all of those from a central location (e.g. VMware)
  3. Allocate those pools to specific virtual servers (aka virtual machines aka VMs)
  4. Configure and patch the virtual servers automatically using profiles
  5. Monitor the virtual servers

This was a revolution in server management. Rather than spending days buying/setting up/configuring physical machines, server admins could buy hardware once, and then allocate it as needed without ever moving hardware around.

However, there were still some annoyances:

  1. Each server still had to have its own operating system and applications installed and configured
  2. That configuration then had to be done multiple times for testing environments, high availability/failover, etc.
  3. Even though the compute, networking, and storage were paid for once, the operating system, utilities, and applications had to be paid for per server

Containers! (Mid 2010s)

Containers (e.g. Docker) changed the game again by rethinking how applications are installed and deployed, and how they integrate with their operating systems.

  1. Containers run in an operating system on their host machine, but they pretend to run each application in its own operating system. TL;DR: there’s one actual operating system, shared by many, many containers. One install, one license, and one setup (to run the containers).
  2. Containers define their setup and configuration inside the container itself. That setup and configuration can then be reused anywhere the container (which contains the application) is deployed. No need to install the application on each server – see the sketch just below.
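
To make that second point concrete, here’s a minimal sketch of a dockerfile for a hypothetical Python web app (the app, file names, and port are made up for illustration – the pattern is what matters):

    # Start from a shared base image; the container reuses the host's
    # kernel, so this is just the filesystem and runtime the app needs
    FROM python:3.11-slim

    # Bake the app and its dependencies into the image
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY app.py .

    # Declare the port the app listens on and how it starts
    EXPOSE 8000
    CMD ["python", "app.py"]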

So now, with virtualized compute hardware (hypervisor technology) and virtualized operating systems (Docker), plus a standardized configuration spec (dockerfiles), the annoyances we talked about for maintaining all those virtualized servers have pretty cool solutions.
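
In practice, that combination looks something like this (the image name here is hypothetical): build the image once from the dockerfile, then run the exact same image on any Docker host – dev, test, or prod – with no per-server install step:

    docker build -t mail-app:1.0 .            # build once, from the dockerfile
    docker run -d -p 8000:8000 mail-app:1.0   # same command on every host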

Solving all of these problems means it’s very, very fast and easy to spin up a lot of containers – and then you get into the problem of ohhh sooo mannyyy connntaiinnerrssss, which is where you need a container management and orchestration tool like DC/OS or our definite nerdy favorite, OpenShift.
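
To see how the container count gets away from you, note that scaling out is now just a loop – a toy sketch here, reusing the hypothetical image from above; real orchestration tools add scheduling, health checks, and networking on top:

    # Five copies of the app in a few seconds. Now picture hundreds of
    # these across dozens of hosts, and the need for orchestration is clear.
    for i in 1 2 3 4 5; do
      docker run -d --name "mail-app-$i" mail-app:1.0
    done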

Note: at no point did we say anything here about applications written in Java, etc. Actually, this was more about setting up and maintaining external applications that run on servers than about applications built internally. One of the places Docker shines is in setting up these types of server applications – but it’s genuinely pretty great at running applications written in-house too.

Bring it on home…

Distributed architecture is great for a lot of reasons, but handling the servers has always been a pain point for infrastructure/operations people – which means it’s a pain point for developers, and for end users too. Making server setup, configuration, and care easier, faster, and cheaper has been a goal since, uh…since servers existed. Container technology like Docker is another leap forward in the evolution of how the industry tries to meet that goal.
