Docker Swarm supports a variety of methods for discovering swarm members. Each has arguments in its favor. In this post I shall discuss the various methods and thoughts regarding each.
Background
I originally started with the idea of having a portable cluster, a “cloud in a box” if you will, so that I could go and give talks without having to worry about network dependencies and so forth. I was also intrigued by the idea of low-power, inexpensive devices which could be used for building a cloud. Two days after my initial purchase of 5 Pi B+ boards, the Pi 2 was released. Despite my initial grump, I realized that this presented possibilities for distributing workloads across a heterogeneous environment, which is an interesting problem space: determining how best to distribute work across an infrastructure.
I still have the goal, for the present, of having a portable cloud. I’ve been challenged, however, to build a larger one than Britain’s GCHQ Pi cloud. It is tempting. Since they’re using all single-core Pis, it wouldn’t be terribly difficult to build a cloud with more oomph and far fewer nodes. Of course, if the workload is IO intensive then more members are needed.
At present, my cloud consists of the following:
- 5x Pi B+ Worker Nodes, 16GB micro SD
- 5x Pi 2B Worker Nodes, 16GB micro SD
- 1x Pi 2B Master Node (temporarily being used as a porting/compiling node), 16GB Micro SD
- 2x Pi 2B Data Nodes (one of these will become a docker registry, among other things)
a. One has 2x 240GB SSD
b. One has a 240GB SSD and a 160GB spinning disk (for “slow” data)
- 16 Port 100Mbit Switch. This may shortly be swapped out for gigabit switch(es).
Criteria for Evaluation
I strongly believe that metrics, monitoring, and alerting are necessary in building any infrastructure.
I am seeking maximum portability; my Cloud in a Box™ should be able to do Real Work™ without depending on anything outside the cluster. Additionally, the less I need to know ahead of time the better. Names trump numbers — if I can use a name or use a name to look up a number that is better than having to remember a number.
Given the limited resources of the Pi, lightweight solutions are preferred over heavyweight ones, save where a heavier component can serve dual purposes.
The Contenders
The list of Discovery Services can be found in the Docker Documentation.
Hosted Discovery Service
The hosted discovery service presents an easy way to test and get started with Swarm. Swarm communicates with the Docker Hub in order to maintain a list of swarm members.
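For reference, the token-based flow looks roughly like the sketch below; the token is a made-up example, the addresses and ports are the ones my nodes happen to use, and on the Pis an ARM build of the swarm image is needed in place of the stock one.

    # Create a cluster token via the hosted discovery service (requires Internet access)
    $ docker run --rm swarm create
    6856663cdefdec325839a4b7e1de38e8

    # On each node, register it with the hosted service using the token
    $ docker run -d swarm join --advertise=192.168.1.101:2375 token://6856663cdefdec325839a4b7e1de38e8

    # On the manager, point swarm manage at the same token
    $ docker run -d -p 3456:2375 swarm manage token://6856663cdefdec325839a4b7e1de38e8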
The Good
It’s easy, presented in the tutorial, and is supported by Docker.
The Bad
Unfortunately the requirement of connecting to the Docker Hub means that it’s not self-contained; in order for it to work, an Internet connection is needed.
As of today, there are a couple of issues with it:
- There is no way to remove a host from the swarm.
- docker -H $SWARM_MANAGER info returns what I believe is an incorrect count:
    $ sudo docker -H 127.0.0.1:3456 info
    Containers: 68
    Nodes: 10
     apis-rpi-03: 192.168.1.103:2375
      └ Containers: 5
      └ Reserved CPUs: 0 / 4
      └ Reserved Memory: 0 B / 925.3 MiB
     apis-rpi-02: 192.168.1.102:2375
      └ Containers: 11
      └ Reserved CPUs: 0 / 4
      └ Reserved Memory: 0 B / 925.3 MiB
     apis-rpi-10: 192.168.1.110:2375
      └ Containers: 11
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 434.4 MiB
     apis-rpi-08: 192.168.1.108:2375
      └ Containers: 6
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 434.4 MiB
     apis-rpi-06: 192.168.1.106:2375
      └ Containers: 6
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 434.4 MiB
     apis-rpi-07: 192.168.1.107:2375
      └ Containers: 7
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 434.4 MiB
     apis-rpi-09: 192.168.1.109:2375
      └ Containers: 7
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 434.4 MiB
     apis-rpi-04: 192.168.1.104:2375
      └ Containers: 6
      └ Reserved CPUs: 0 / 4
      └ Reserved Memory: 0 B / 925.3 MiB
     apis-rpi-05: 192.168.1.105:2375
      └ Containers: 5
      └ Reserved CPUs: 0 / 4
      └ Reserved Memory: 0 B / 925.3 MiB
     apis-rpi-01: 192.168.1.101:2375
      └ Containers: 4
      └ Reserved CPUs: 0 / 4
      └ Reserved Memory: 0 B / 925.3 MiB
As an example, in the case of apis-rpi-04, it’s claiming that there are six (6) containers. However, there are not six (6) containers running:
    HypriotOS: pi@apis-rpi-04 in ~
    $ docker ps | grep -v CONTAINER|wc -l
    3
There are, however, six in the ps -a results:
    HypriotOS: pi@apis-rpi-04 in ~
    $ docker ps -a| grep -v CONTAINER|wc -l
    6
On a whim, I removed the containers which were not up:
    HypriotOS: pi@apis-rpi-04 in ~
    $ docker ps -a|grep -v CONT|grep -v Up|awk '{print $1}'|xargs docker rm
    239194d97248
    a8e8a05f4062
    4096cf03487c
At this point, info returns the three (3) which I’d expect. However, on further investigation, it turns out that docker info outside of swarm returns the total number of containers, not the number of running containers. I’ve opened an issue about this. I think that having an entry for the number of running containers would be useful, but barring that, documenting the behavior would be good.
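As a quick sanity check on any one node, the two counts can be compared directly; a minimal sketch:

    # Running containers only
    $ docker ps -q | wc -l

    # All containers, including exited ones (the count that docker info reports)
    $ docker ps -aq | wc -l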
Static File Describing the Cluster
In this case, a file listing all of the hosts’ IP addresses and ports is used by all the members.
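A minimal sketch of what that looks like; the path is hypothetical and the addresses are simply my nodes’:

    # /etc/swarm/cluster (hypothetical path): one host:port per line
    192.168.1.101:2375
    192.168.1.102:2375
    192.168.1.103:2375

    # The manager reads the list via the file:// scheme
    $ docker run -d -p 3456:2375 -v /etc/swarm/cluster:/cluster swarm manage file:///cluster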
The Good
It’s pretty simple.
The Bad
Since my cluster is portable, I don’t necessarily know what the IP addresses are; I may happen to be on a network where the addresses I’ve chosen are already in use. For simplicity’s sake I don’t want to have to worry at this point about NAT.
The file needs to be copied to all of the servers. Additionally, it violates principles espoused by The Twelve-Factor App — primarily there is a configuration artifact which needs to be maintained.
A Static List of IPs
The Good
Same as the file list, but this also has the added goodness of being more Twelve-Factor compliant.
The Bad
Same as the file list.
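The equivalent sketch for a static list passes the same information on the command line via the nodes:// scheme, the addresses again being placeholders for my nodes:

    $ docker run -d -p 3456:2375 swarm manage \
        nodes://192.168.1.101:2375,192.168.1.102:2375,192.168.1.103:2375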
Range pattern for IP addresses
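This is the same nodes:// scheme, but with a bracketed range so that a contiguous block of addresses can be written in one go. A minimal sketch, using the range my cluster happens to occupy:

    # Covers 192.168.1.101 through 192.168.1.110
    $ docker run -d -p 3456:2375 swarm manage nodes://192.168.1.[101:110]:2375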
The Good
Same as the static list, but more compact; a single pattern covers the whole cluster.
The Bad
Same as the static list; the addresses still need to be known (and contiguous) ahead of time.
etcd
When I investigated, coreos/etcd wouldn’t compile on the Raspberry Pi without patching the code; to wit, a structure needs to be edited. I don’t view it as very portable, and I have concerns that the structure change may keep it from working properly with other frameworks. At least for the moment I don’t consider it to be a good choice.
zookeeper
The Good
Established codebase with lots of users.
The Bad
It’s really heavyweight compared to some of the other options. However, if planning to use Hadoop or anything in the Hadoop ecosystem, it might be a good choice.
Consul
The Good
Consul can serve multiple purposes: service discovery, monitoring, and DNS. Additionally, there’s a fairly useful web UI which provides a dashboard showing the status of the members.
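For example, since the agent’s DNS port is published on the Docker bridge in my setup (172.17.42.1:53, visible in the docker ps output further down), anything registered in Consul can be resolved by name; the consul service itself is always registered, which makes for a handy smoke test:

    # Resolve the consul servers by name via the agent's DNS interface
    $ dig @172.17.42.1 consul.service.consul +short

    # The web UI is served by the same agent, in my case at http://192.168.1.125:8500/ui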
It’s fairly lightweight — the client takes approximately 3MB:
    CONTAINER      CPU %     MEM USAGE/LIMIT        MEM %     NET I/O
    swarm-agent    0.00%     3.305 MiB/925.3 MiB    0.36%     204.1 MiB/68.01 MiB
The Server, on the other hand, takes about 12MB for managing 13 hosts:
      PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM     TIME+  COMMAND
     9642 root      20   0 1048m  12m 7012 S   5.9  1.6 821:14.11  consul
That 821 minutes is over 11 days:
    CONTAINER ID   IMAGE                             COMMAND               CREATED       STATUS       PORTS                                                                                                                                                                                                                                           NAMES
    27a80051fcce   nimblestratus/rpi-consul:latest   "/bin/start -server   11 days ago   Up 11 days   172.17.42.1:53->53/udp, 192.168.1.125:8300->8300/tcp, 192.168.1.125:8301->8301/tcp, 192.168.1.125:8301->8301/udp, 192.168.1.125:8302->8302/udp, 192.168.1.125:8302->8302/tcp, 192.168.1.125:8400->8400/tcp, 192.168.1.125:8500->8500/tcp   consul
That’s about 1.25 hours/day of CPU time (821 minutes over 11 days is roughly 75 minutes/day). By comparison, the client is a bit lighter:
      PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM     TIME+  COMMAND
     6796 root      20   0 1049m  18m  14m S   7.3  2.0 267:32.96  consul
The Bad
It’s rather chattier than the other methods. Every 10 seconds or so the client wakes up and does some work.
Verdict
Based upon my goals and an analysis of the good and the bad of the various methods available today, I think that Consul is the best choice at this point.
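For completeness, a rough sketch of how the pieces fit together with the consul:// discovery backend; 192.168.1.125:8500 is my consul server from the docker ps output above, the /swarm path prefix is arbitrary, and an ARM build of the swarm image is assumed on the Pis:

    # On each node: register it under the consul key prefix
    $ docker run -d swarm join --advertise=192.168.1.101:2375 consul://192.168.1.125:8500/swarm

    # On the manager: read the member list from the same consul path
    $ docker run -d -p 3456:2375 swarm manage consul://192.168.1.125:8500/swarm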