Docker Swarm is a basic and very predictable container orchestration tool. Puppet is a feature-packed and complex configuration management tool. Both tools continuously try to enforce a specified state on the system, which is why it can sometimes be difficult to get them to work together.
This solution, developed by my colleague Markus Opolka (@martialblog) and me, is quite elegant though: both tools are used in their intended way, so the workflow feels very natural to me.
First of all, here is a little figure showing the entire logic flow. It goes through quite a few stages, so it’s good to have a visualization in your head.
The following Puppet define is the heart of the solution: it generates the Docker Compose YAML files from Puppet manifests. It is declared as a define instead of a class so we can call it multiple times with different parameters. The define can then be used inside the individual application modules to write their Docker configuration. Note that the application modules need not know how to do that or where to write the file; they just pass along the parameters.
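A minimal sketch of such a define could look like this (the define name, parameter names, and target directory are assumptions for illustration; it relies on the to_yaml function from puppetlabs-stdlib):

```puppet
define swarm::compose_file (
  Hash   $configuration,
  String $compose_dir = '/data/swarm',
) {
  # Render the passed-in service definition as a Docker Compose v3 file.
  # to_yaml() is provided by puppetlabs-stdlib.
  file { "${compose_dir}/${title}.yaml":
    ensure  => file,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    content => to_yaml({
      'version'  => '3.7',
      'services' => { $title => $configuration },
    }),
  }
}
```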
Here is an example for configuring a Limesurvey service:
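A hypothetical declaration of such a define might look like this (the define name and the port value are illustrative; the image is the one deployed later in this post):

```puppet
swarm::compose_file { 'limesurvey':
  configuration => {
    'image' => 'martialblog/limesurvey:latest',
    'ports' => ['80'],
  },
}
```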
This will then generate the following YAML file:
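Roughly what the generated compose file could look like for the Limesurvey example (values are illustrative, matching the hypothetical declaration above):

```yaml
version: '3.7'
services:
  limesurvey:
    image: martialblog/limesurvey:latest
    ports:
      - '80'
```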
Okay, so we can generate compose files. How do we deploy our services? Here the puppetlabs-docker module comes into play. We can use it to deploy a Docker Swarm stack from our compose file (from the documentation):
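The module provides a docker::stack defined type for this. The example from the puppetlabs-docker documentation looks roughly like this (the stack name and file path are the documentation's placeholders):

```puppet
docker::stack { 'yourapp':
  ensure        => present,
  stack_name    => 'yourapp',
  compose_files => ['/tmp/docker-compose.yaml'],
  require       => [Class['docker'], File['/tmp/docker-compose.yaml']],
}
```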
We can also specify multiple compose files - so each application module can generate its own compose file completely independently and all of them then get picked up in a central role, such as the following one:
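As a sketch, such a central role could look like this (the class name is illustrative; the file paths match the deploy command shown later in this post):

```puppet
class role::swarm_manager {
  docker::stack { 'my_stack':
    ensure        => present,
    stack_name    => 'my_stack',
    compose_files => [
      '/data/swarm/traefik.yaml',
      '/data/swarm/minio.yaml',
      '/data/swarm/limesurvey.yaml',
    ],
  }
}
```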
Instead of manually specifying the files as an array (as shown above), one could also opt for exported resources: first exporting the file paths as strings in the individual modules and then collecting them inside the role. However, since we want to reduce our reliance on PuppetDB and our list of applications is still quite manageable, we decided to go with the manual declaration. Admittedly, using exported resources would be more elegant, though.
Behind the scenes, the docker Puppet module then generates the following command for the docker service, which takes care of merging the individual YAML files into one (the command is visible when running Puppet in DEBUG mode):
/usr/bin/docker stack deploy \
  -c /data/swarm/traefik.yaml \
  -c /data/swarm/minio.yaml \
  -c /data/swarm/limesurvey.yaml \
  my_stack
The Puppet run output might look as follows:
Info: Checking for stack my_stack
Info: Checking for compose service traefik traefik:1.7.14
Info: Checking for compose service minio minio/minio:RELEASE.2019-07-10T00-34-56Z
Info: Checking for compose service limesurvey martialblog/limesurvey:latest
Notice: Applied catalog in 10.55 seconds
Verify that the Stack is running as intended:
docker stack services my_stack

ID            NAME                 MODE        REPLICAS  IMAGE
5al8v1rv9mok  my_stack_minio       replicated  1/1       minio/minio:RELEASE.2019-07-10T00-34-56Z
mu8zoiehzaph  my_stack_traefik     replicated  1/1       traefik:1.7.17
udzzdejojcwd  my_stack_limesurvey  replicated  1/1       martialblog/limesurvey:latest
We successfully deployed our Docker Swarm Stack with Puppet!
If any of the configuration parameters are changed, Puppet updates them in the YAML files and then triggers a docker stack deploy.
Docker then makes sure that the state of the stack is the same as the one described in the compose files.
One aspect I really like about this deployment workflow is that the individual Puppet modules don’t have to deal with any Docker deployment - they just generate YAML files. This also makes testing a lot easier, because plaintext files can easily be checked in Serverspec acceptance tests.
If you want some real-world code examples of how exactly we use this, check out our Puppet modules:
But what should you do if you have more than one manager node in your Swarm (in our case: three)? When different managers try to deploy different versions of the same stack, chaos is going to ensue!
By putting the Swarm configuration files /data/swarm/*.yaml on a shared network drive, such as a GlusterFS volume, all managers have access to the same Docker stack configuration.
Also, since all servers try to maintain the same compose YAML files, Puppet always ensures these are in a consistent state across all nodes.