Paul Czarkowski

Random musings mostly about tech

Flexible Private Docker Registry Infrastructure

Previously I showed how to run a basic secure Docker Registry. I am now going to expand on this to show you something that you might use in production as part of your CI/CD infrastructure. The beauty of running Docker is that you can push an image from a developer’s laptop all the way into production, which helps ensure that what you see in development and your various test/QA/stage environments is exactly the same as what you run in production.

Deploying a Simple and Secure Docker Registry

There comes a time in everybody’s life where they realize they have to run their own Docker Registry. Unfortunately there’s not a lot of good information on how to run one. Docker’s documentation is pretty good, but it’s verbose and spread across a lot of different pages, which means keeping half a dozen tabs open and hunting for the right information. While it’s pretty common to run the Docker Registry itself with little to no security configured and front it with NGINX or Apache to provide that security, I wanted to show how it can be done with just the Docker Registry.
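To give a sense of where that ends up, here is a minimal sketch of the idea, assuming the registry:2 image with certificates and an htpasswd file already created; the paths, port and filenames are illustrative, not necessarily what the post uses:

    # Illustrative only: the registry itself terminates TLS and does basic
    # auth, with no NGINX or Apache in front. Paths are assumptions.
    docker run -d --name registry -p 5000:5000 \
      -v /opt/registry/certs:/certs \
      -v /opt/registry/auth:/auth \
      -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
      -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
      -e REGISTRY_AUTH=htpasswd \
      -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
      -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
      registry:2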

Securing Docker with TLS certificates

By default Docker (and by extension Docker Swarm) has no authentication or authorization on its API, relying instead on the filesystem permissions of its unix socket /var/run/docker.sock, which by default is only accessible by the root user. This is fine for the default use case of only accessing the Docker API on the local machine via the socket as the root user. However if you wish to use the Docker API over TCP then you’ll want to secure it so that you don’t hand out root access to anyone who happens to poke at the TCP port.
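The mechanics come down to Docker’s standard TLS flags. Roughly sketched (the certificate paths and hostname are placeholders, and depending on your Docker version the daemon may be started as docker daemon rather than dockerd):

    # Daemon side: listen on TCP but require client certs signed by our CA.
    dockerd --tlsverify \
      --tlscacert=/etc/docker/ca.pem \
      --tlscert=/etc/docker/server-cert.pem \
      --tlskey=/etc/docker/server-key.pem \
      -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock

    # Client side: present a client certificate signed by the same CA.
    docker --tlsverify \
      --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
      -H tcp://docker-host.example.com:2376 version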

Deploying a HA Docker Swarm Cluster

Given Docker’s propensity for creating easy-to-use tools it shouldn’t come as a surprise that Docker Swarm is one of the easier “Docker clustering” options out there to understand and run. I recently built some Terraform configs for deploying a Highly Available Docker Swarm cluster on OpenStack and learned a fair bit about Swarm in the process. This guide is meant to be a platform-agnostic how-to on installing and running a Highly Available Docker Swarm, to show you the ideas and concepts that may not be as easy to pick up from just reading some config management code.
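For a flavour of the end state, a classic (standalone, pre swarm-mode) Swarm manager with leader election looks roughly like the following, assuming a Consul cluster for discovery; the IPs and ports are placeholders rather than values from the Terraform configs:

    # On each of three or more manager nodes; --replication enables leader
    # election so losing a single manager doesn't take out the cluster.
    docker run -d -p 4000:4000 swarm manage \
      -H :4000 --replication \
      --advertise <manager-ip>:4000 \
      consul://<consul-ip>:8500

    # On each worker node, register the local Docker engine with discovery.
    docker run -d swarm join \
      --advertise=<node-ip>:2375 \
      consul://<consul-ip>:8500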

Openstacks and Ecosystems

I have recently had a number of lengthy discussions on the Twitter about Interop, Users, and Ecosystems, specifically about our need to focus on the OpenStack ecosystem to extend the OpenStack IaaS user experience into something a bit more platform[ish]. I wrote a post for SysAdvent this year on developing applications on top of OpenStack using a collection of open source tools to create a PaaS and CI/CD pipelines. I think it turned out quite well and really helped reinforce my beliefs on the subject.

Optimizing your Dockerfiles

Docker images are “supposed” to be small and fast. However unless you’re precompiling Go binaries and dropping them into the busybox image they can get quite large and complicated. Without a well-constructed Dockerfile to improve build cache hits, your Docker builds can become unnecessarily slow. Dockerfiles are regularly [and incorrectly] treated like bash scripts, and are therefore often written out as the same series of commands you would curl | sudo bash from a website to install the software.
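The gist, sketched below with a made-up Python app (the base image and package names are just for illustration, not from the post), is to order instructions from least to most frequently changing and to collapse related shell steps into a single RUN so the build cache actually gets used:

    # Illustrative Dockerfile.
    FROM ubuntu:14.04
    # One cacheable layer for OS packages, cleaned up in the same RUN.
    RUN apt-get update \
     && apt-get install -y python python-pip \
     && apt-get clean \
     && rm -rf /var/lib/apt/lists/*
    # Dependencies change less often than code, so install them first.
    COPY requirements.txt /app/requirements.txt
    RUN pip install -r /app/requirements.txt
    # Application code changes most often, so it goes last.
    COPY . /app
    CMD ["python", "/app/app.py"]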

Factorish and The Twelve-Fakter App

Unless you’ve been living under a rock (in which case I envy you) you’ve heard a fair bit about The Twelve-Factor App. A wonderful stateless application that is completely disposable and can run anywhere from your own physical servers to Deis, Cloud Foundry or Heroku.

Chances are you’re stuck writing and running an application that is decidedly not 12 Factor, nor will it ever be. In a perfect world you’d scrap it and rewrite it as a dozen microservices that are loosely coupled but run and work independently of each other. The reality however is you could never get the okay to do that.

Multi Process Docker Images Done Right

For some values of ‘right’

Almost since Docker was first introduced to the world there has been a fairly strong push to keep containers to a single process. This makes a lot of sense and definitely plays into the 12 Factor way of thinking, where all application output should be pushed to stdout, and with tools like logspout Docker now has fairly strong tooling to deal with those logs.
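For reference, the logspout pattern is just one extra container that taps the Docker socket and ships every other container’s stdout somewhere central; something along these lines (the image name and syslog endpoint are assumptions, not from the post):

    # Illustrative: forward all containers' stdout/stderr to remote syslog.
    docker run -d --name logspout \
      -v /var/run/docker.sock:/var/run/docker.sock \
      gliderlabs/logspout \
      syslog://logs.example.com:514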

Sometimes however it just makes sense to run more than one process in a container; a perfect example would be running confd alongside your application in order to modify the application’s config file based on changes in service discovery systems like etcd. The ambassador container way of working can achieve similar things, but I’m not sure that running two containers with a process each is any better than running one container with two processes.
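One common way to do this (a sketch only, and not necessarily the approach this post settles on) is a small entrypoint script that runs confd in the background and keeps the application in the foreground as the container’s main process; the etcd address, app binary and config path below are hypothetical:

    #!/bin/bash
    # Illustrative entrypoint for a two-process container.
    set -e
    # confd watches etcd and rewrites the app's config file as values change.
    confd -backend etcd -node http://etcd.example.com:2379 -watch &
    # exec keeps the application as the foreground process so Docker can
    # see its logs and its exit status.
    exec /usr/local/bin/myapp --config /etc/myapp/config.conf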

BreadOps - Continuous Delivery of Fresh Baked Bread

“See how this sparkly devop princess bakes bread every day with almost no effort at all with this one weird trick.”

Store-bought bread is shit. Even the “artisanal” bread at most supermarkets is little better than cake baked in a bread-shaped mold (seriously, check next time you’re at a supermarket). You might be lucky and have a really good bread baker near you, but like butchers and other important crafts, bakers have all but disappeared.