Unless you’ve been living under a rock (in which case I envy you), you’ve heard a fair bit about The Twelve-Factor App: a wonderful stateless application that is completely disposable and can run anywhere, from your own physical servers to Deis, Cloud Foundry, or Heroku.
Chances are you’re stuck writing and running an application that is decidedly not 12Factor, nor will it ever be. In a perfect world you’d scrap it and rewrite it as a dozen microservices that are loosely coupled but run and work independently of each other. The reality, however, is you could never get the okay to do that.
For some values of ‘right’
Almost since Docker was first introduced to the world there has been a fairly strong push to keep containers to a single process. This makes a lot of sense and definitely plays into the 12 Factor way of thinking, where all application output should be pushed to stdout, and Docker itself, with tools like logspout, now has fairly strong tooling to deal with those logs.
Sometimes, however, it just makes sense to run more than one process in a container. A perfect example would be running confd alongside your application in order to modify the application’s config file based on changes in service discovery systems like etcd. The ambassador container way of working can achieve similar things, but I’m not sure that running two containers, each with a single process, is any better than running one container with two processes.
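To make the confd scenario concrete, a container entrypoint for it could be sketched as below. This is purely illustrative and not from the original post: the binary paths, the etcd address, and the application name `myapp` are all placeholder assumptions.

```shell
#!/bin/sh
# Illustrative entrypoint: confd runs in the background, watching etcd
# and rewriting the app's config file whenever a watched key changes,
# while the application itself stays in the foreground as PID 1.
confd -watch -backend etcd -node http://127.0.0.1:4001 &

# exec replaces the shell so the app receives signals (e.g. docker stop)
exec /usr/bin/myapp --config /etc/myapp/config.conf
```

The important detail is that the application, not the shell, is the foreground process, so the container’s lifetime is still tied to the app.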
I have experimented with this approach in Docker in the past, but wanted to tackle the problem from a slightly different angle. I’ve recently been working on some PaaS stuff, both Deis and Solum; these both utilize tooling from Flynn, which builds Heroku-style…
I recently gave a presentation at the Cloud Austin meetup titled Docking with Unicorns, about the new PaaS on the block, Deis. Building out Deis is quite easy, made even easier by the tight integration it has with Rackspace Cloud. If you’re interested in what Deis is, go through my slides linked above and the documentation on their website. If you want to build out an environment to kick the tires a bit, then click ‘Read on’ below and follow me down the rabbit hole.
I have been having a lot of internal debate about the idea of running more than one service in a Docker container. A Docker container is built to run a single process in the foreground and to live for only as long as that process is running. This is great in a utopian world where servers are immutable and sysadmins drink tiki drinks on the beach; however, it doesn’t always translate well to the real world.
Examples where you might want to run multiple services range from the simple use case of running sshd alongside your application, to running a web app such as wordpress where you might want both wordpress and mysql running in the same container.
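One common way to run several processes like this under Docker is a lightweight process supervisor such as supervisord. The sketch below is illustrative only; the program commands and paths are assumptions, not taken from the original post.

```ini
; supervisord.conf -- illustrative multi-process container config
[supervisord]
nodaemon=true            ; keep supervisord in the foreground as PID 1

[program:sshd]
command=/usr/sbin/sshd -D

[program:mysqld]
command=/usr/bin/mysqld_safe

[program:wordpress]
command=/usr/sbin/apache2ctl -D FOREGROUND
```

The container’s Dockerfile would then end with something like `CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]`, so supervisord becomes the one foreground process Docker watches, and it in turn manages the rest.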
Building applications in a docker.io Dockerfile is relatively simple, but sometimes you just want to install the application exactly as you normally would, via already-built chef cookbooks. It turns out this is actually pretty simple.
The first thing you’ll need to do is build a container with chef-client and berkshelf installed. You can grab the one I’ve built by running docker pull paulczar/chef-solo, or build one yourself from a Dockerfile that looks a little something like the following…
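The Dockerfile itself isn’t included in this excerpt. A minimal sketch of such an image might look like the following; the base image, the Chef omnibus install script, and the berkshelf install path are assumptions on my part, not the author’s original file.

```dockerfile
# Illustrative only: a base image with chef-client and berkshelf installed
FROM ubuntu:14.04

RUN apt-get update && apt-get install -y curl build-essential

# Install Chef via the omnibus installer, which bundles its own Ruby
RUN curl -L https://www.getchef.com/chef/install.sh | bash

# Install berkshelf into Chef's embedded Ruby for cookbook dependencies
RUN /opt/chef/embedded/bin/gem install berkshelf

ENTRYPOINT ["chef-solo"]
```

With an image like this, cookbooks can be mounted or copied in and converged with chef-solo at build or run time.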
At DevOps Days Austin, @mattray did an Openspace session on Omnibus, which is a toolset based around the concept of installing an app and all of its prerequisites from source into a directory, and then building a package (either .deb or .rpm) of that using fpm.
Having battled many times with OS packages, trying to get newer versions of Ruby or Redis or other software installed, and having to hunt down some random package repo or manually build from source, this seems like an excellent idea.
To learn the basics I decided to build an omnibus package for fpm itself, which helped me work out the kinks.
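For context, fpm’s role in this workflow is to turn a directory tree built under a prefix like /opt into a system package. A typical invocation might look like the sketch below; the package name, version, and paths are illustrative placeholders, not values from the post.

```shell
# Illustrative: package everything built under /tmp/build/opt/fpm
# into a .deb that installs back to /opt/fpm.
fpm -s dir -t deb \
    -n fpm -v 1.0.0 \
    --prefix /opt/fpm \
    -C /tmp/build/opt/fpm \
    .
```

Here `-s dir` takes a directory as the source, `-t deb` selects the output format, and `-C` changes into the build directory before packaging, which is essentially what an omnibus project automates end to end.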