Deploying Docker Swarm Stacks with Puppet

Docker Swarm is a simple and very predictable container orchestration tool. Puppet is a feature-packed and complex configuration management tool. Both continuously try to enforce a specified state on the system, which is why it can sometimes be difficult to get the two tools to work together.

This solution, developed by my colleague Markus Opolka (@martialblog) and me, is quite elegant though: both tools are used the way they were intended, so the whole workflow feels very natural to me.

First of all, here is a little figure showing the entire logic flow. It goes through quite a few stages, so it’s good to have a visualization in your head.



The following Puppet define is the heart of the setup: it generates the Docker Compose YAML files from Puppet manifests. It is declared as a define instead of a class so we can call it multiple times with different parameters.

define vision_docker::to_compose (
  Hash $compose,
  String $path = '/data/swarm',
  String $owner = 'root',
  String $group = 'root',
  String $mode = '0600',
  ) {

  # ensure the target directory exists
  if !defined(File[$path]) {
    file { $path:
      ensure => directory,
      mode   => '1750',
    }
  }

  # generate YAML file via inline template
  file { "${path}/${title}.yaml":
    ensure  => present,
    owner   => $owner,
    group   => $group,
    mode    => $mode,
    content => inline_template("# This file is managed by Puppet\n<%= @compose.to_yaml %>"),
  }
}

This define can then be used inside the individual application modules to write their Docker configuration. Note that the application modules don't need to know how the YAML is generated or where the file is written; they just pass along the $compose hash.

Here is an example for configuring a Limesurvey service:

  $docker_environment = concat([
    'DB_DATABASE=limesurvey',
    "DB_USERNAME=${mysql_user}",
    "DB_PASSWORD=${mysql_password}",
  ], $environment)

  $compose = {
    'version' => '3.7',
    'services' => {
      'limesurvey' => {
        'image'       => "martialblog/limesurvey:${limesurvey_tag}",
        'volumes'     => [
          '/data/limesurvey/config.php:/var/www/html/application/config/config.php:ro',
          '/data/limesurvey/upload:/var/www/html/upload',
        ],
        'environment' => $docker_environment,
        'deploy'      => {
          'labels' => [
            'traefik.port=80',
            "traefik.frontend.rule=${traefik_rule}",
          ],
        },
      }
    }
  }

  vision_docker::to_compose { 'limesurvey':
    compose => $compose,
  }

This will then generate the following YAML in /data/swarm/limesurvey.yaml (here with $limesurvey_tag set to 'latest'):

# This file is managed by Puppet
---
version: '3.7'
services:
    limesurvey:
        image: martialblog/limesurvey:latest
        volumes:
        - "/data/limesurvey/config.php:/var/www/html/application/config/config.php:ro"
        - "/data/limesurvey/upload:/var/www/html/upload"
        environment:
        - DB_DATABASE=limesurvey
        - DB_USERNAME=limesurvey
        - DB_PASSWORD=foobar123
        deploy:
            labels:
                - traefik.port=80
                - traefik.frontend.rule=PathPrefix:/surveys

Okay, so we can generate compose files. How do we deploy our services? This is where the puppetlabs-docker module comes into play. We can use it to deploy a Docker Swarm stack from a compose file (example from the module's documentation):

docker::stack { 'yourapp':
  ensure        => present,
  stack_name    => 'yourapp',
  compose_files => ['/tmp/docker-compose.yaml'],
  require       => [Class['docker'], File['/tmp/docker-compose.yaml']],
}

We can also specify multiple compose files: each application module generates its own compose file completely independently, and all of them are then picked up in a central role, such as the following one:

class vision_roles::cluster {
  # base dependencies
  contain ::vision_docker
  contain ::vision_docker::swarm
  # docker apps
  contain ::vision_traefik
  contain ::vision_minio
  contain ::vision_limesurvey

  $compose = [
    '/data/swarm/traefik.yaml',
    '/data/swarm/minio.yaml',
    '/data/swarm/limesurvey.yaml',
  ]

  # deploy
  docker_stack { 'my_stack':
    ensure        => present,
    compose_files => $compose,
    require       => File[$compose],
  }
}

Instead of manually specifying the files as an array (as shown above), one could also use exported resources: first exporting the file paths in the individual modules and then collecting them inside the role. However, since we want to reduce our reliance on PuppetDB and our list of applications is still quite manageable, we decided to go with the manual declaration. Admittedly, exported resources would be the more elegant solution.
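For reference, a rough sketch of what that exported-resources variant could look like. The marker define vision_docker::compose_file is hypothetical, and the puppetdb_query function requires a configured PuppetDB:

```puppet
# In each application module: export the path of its compose file
# (vision_docker::compose_file is a hypothetical marker define)
@@vision_docker::compose_file { "${trusted['certname']}-limesurvey":
  path => '/data/swarm/limesurvey.yaml',
}

# In the central role: collect the exported paths into an array
$compose = puppetdb_query(
  'resources { type = "Vision_docker::Compose_file" }'
).map |$r| { $r['parameters']['path'] }
```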

Behind the scenes, the puppetlabs-docker module then generates the following command, which takes care of merging the individual YAML files into one stack definition (the command is visible when running Puppet in debug mode):

/usr/bin/docker stack deploy \
    -c /data/swarm/traefik.yaml \
    -c /data/swarm/minio.yaml \
    -c /data/swarm/limesurvey.yaml \
    my_stack

The Puppet run output might look as follows:

Info: Checking for stack my_stack
Info: Checking for compose service traefik traefik:1.7.14
Info: Checking for compose service minio minio/minio:RELEASE.2019-07-10T00-34-56Z
Info: Checking for compose service limesurvey martialblog/limesurvey:latest
Notice: Applied catalog in 10.55 seconds

Verify that the stack is running as intended:

docker stack services my_stack
ID              NAME                  MODE         REPLICAS  IMAGE
5al8v1rv9mok    my_stack_minio        replicated   1/1       minio/minio:RELEASE.2019-07-10T00-34-56Z
mu8zoiehzaph    my_stack_traefik      replicated   1/1       traefik:1.7.17
udzzdejojcwd    my_stack_limesurvey   replicated   1/1       martialblog/limesurvey:latest

We successfully deployed our Docker Swarm Stack with Puppet!

If any of the configuration parameters are changed, Puppet updates them in the YAML files and then triggers a docker stack deploy. Docker then makes sure that the state of the stack is the same as the one described in the compose files.

One aspect I really like about this deployment workflow is that the individual Puppet modules don't have to deal with any Docker deployment at all; they just generate YAML files. This also makes testing a lot easier, because plaintext files can easily be checked in Serverspec acceptance tests.
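Since the generated compose files are plain YAML on disk, an acceptance test only needs to parse them and assert their structure. A minimal sketch in Ruby; the inline YAML here stands in for the generated file, while a real Serverspec test would read /data/swarm/limesurvey.yaml from the node:

```ruby
require 'yaml'

# Inline stand-in for /data/swarm/limesurvey.yaml (excerpt of the
# example output above)
content = <<~YAML
  # This file is managed by Puppet
  ---
  version: '3.7'
  services:
    limesurvey:
      image: martialblog/limesurvey:latest
      environment:
      - DB_DATABASE=limesurvey
YAML

compose = YAML.safe_load(content)

# Assert the structure we expect Puppet to have written
raise 'unexpected version' unless compose['version'] == '3.7'
raise 'limesurvey service missing' unless compose['services'].key?('limesurvey')
raise 'wrong image' unless compose['services']['limesurvey']['image'].start_with?('martialblog/limesurvey')

puts 'limesurvey.yaml looks good'
```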

If you want some real-world code examples of how exactly we use this, check out our Puppet modules.

Multi-manager Swarm

But what should you do if you have more than one manager node in your Swarm (in our case: three)? When different managers try to deploy different versions of the same stack, chaos is going to ensue!

By putting the Swarm configuration files /data/swarm/*.yaml on a shared network drive, such as a GlusterFS volume, all managers have access to the same Docker stack configuration. And since all servers maintain the same compose YAML files, Puppet keeps them in a consistent state across all nodes.
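One way to get that shared directory in place with Puppet itself is a mount resource. A minimal sketch, assuming an already provisioned GlusterFS volume; the server and volume names are placeholders:

```puppet
# Mount an existing GlusterFS volume at /data/swarm on every manager.
# 'gluster01:/swarm-config' is a placeholder for your actual volume.
file { '/data/swarm':
  ensure => directory,
}

mount { '/data/swarm':
  ensure  => mounted,
  device  => 'gluster01:/swarm-config',
  fstype  => 'glusterfs',
  options => 'defaults,_netdev',
  require => File['/data/swarm'],
}
```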


Happy deploying!