I’ve refreshed my stack and am rebuilding the cluster. I’m using Hypriot for the Linux distribution, but I’m a smart sysadmin (read: lazy) and don’t want to add each node manually every time. Also, I want a script so that I can rebuild quickly.
This is inspired by Let Docker Swarm all over your Raspberry Pi Cluster. The Hypriot folk are most excellent people.
I’m using Hypriot’s docker-machine; it certainly makes life easier. I’ve also updated Docker to version 1.8.1.
This version also uses the token:// discovery method; it may not work properly if your cluster is not connected to the internet. Swarming Raspberry Pi: Docker Swarm Discovery Options provides an analysis of the different discovery methods.
Add SSH Keys
```shell
for j in $(for i in $(avahi-browse -Dat | grep apis | grep -v dev | awk '{print $4}' | sort | uniq); do echo "root@${i}.local"; done); do
  ssh-copy-id "$j"
done
```
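Unpacked, the nested loops just turn avahi-browse output into root@host.local targets for ssh-copy-id. Here is the text-munging half of that, run against made-up avahi-browse -Dat-style lines (the hostnames and columns below are illustrative, not from a real scan):

```shell
# Illustrative avahi-browse -Dat-style lines; field 4 is the hostname.
# Duplicate entries are normal, since each host advertises several services.
sample='+ eth0 IPv4 apis-rpi-01 _workstation._tcp local
+ eth0 IPv4 apis-rpi-02 _workstation._tcp local
+ eth0 IPv4 apis-rpi-01 _ssh._tcp local'

# Extract the hostname column, de-duplicate, and build ssh-copy-id targets.
targets=$(printf '%s\n' "$sample" | awk '{print $4}' | sort -u \
  | sed -e 's/^/root@/' -e 's/$/.local/')
printf '%s\n' "$targets"
```

Each resulting root@host.local string is then handed to ssh-copy-id by the outer loop.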
Determine the Discovery Token
If this is a new cluster, then you’ll want to generate a token; Let Docker Swarm all over your Raspberry Pi Cluster provides a method for doing so. If it’s an existing cluster, then use its token. If you’ve forgotten it, you can use the method described in Getting the Docker Swarm Discovery Token to find it.
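For the forgotten-token case, the trick in that post is that the swarm agent container is started with the discovery URL on its command line, so the token can be scraped out of docker inspect output. A minimal sketch of just the scraping step; the command-line string below is an invented example (a real one would come from docker inspect on the swarm agent container), and the token is assumed to be hex:

```shell
# Invented example of a swarm agent command line; on a real host you would
# recover this with docker inspect on the swarm agent container.
cmd='/swarm join --addr=192.168.1.125:2376 token://0123456789abcdef'

# Pull the cluster token out of the token:// discovery URL.
token=$(printf '%s\n' "$cmd" | grep -o 'token://[0-9a-f]*' | sed 's|token://||')
echo "$token"
```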
Add the Master
I’ve chosen a host to be my master. In this case, since I created the swarm previously, I chose apis-rpi-dev
as the master. You’ll want to set it properly for your cluster.
```shell
export MASTER_NAME=apis-rpi-dev
```
Add to the Cluster
Finally, add the members; this will take a little while. ($NAME_PATTERN is the pattern your host names share — apis in this cluster — and $TOKEN is the discovery token from the previous step.)
```shell
$ for i in $(avahi-browse -Dat | grep $NAME_PATTERN | grep -v $MASTER_NAME | awk '{print $4}' | sort | uniq); do
    echo -n "docker-machine create -d hypriot --swarm --swarm-discovery token://$TOKEN --hypriot-ip-address "
    avahi-resolve-host-name -4 "${i}.local" | sed -e 's/.local//' | awk '{print $2 " " $1}'
  done > /tmp/runme
$ bash -x /tmp/runme
```
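The fiddly part of that loop is the sed/awk pair, which strips the .local suffix from avahi-resolve-host-name’s "hostname TAB address" output and swaps the fields, so the IP follows --hypriot-ip-address and the machine name lands last on each generated line. A sketch against made-up resolver output:

```shell
# Made-up avahi-resolve-host-name -4 output: "hostname<TAB>address".
resolved="$(printf 'apis-rpi-01.local\t192.168.1.109')"

# Drop the .local suffix and swap the fields to "address hostname".
args=$(printf '%s\n' "$resolved" | sed -e 's/\.local//' | awk '{print $2 " " $1}')
echo "docker-machine create -d hypriot --swarm --swarm-discovery token://\$TOKEN --hypriot-ip-address $args"
```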
And here we can see the swarm:
```shell
$ docker-machine ls
NAME              ACTIVE   DRIVER    STATE     URL                        SWARM
apis-rpi-01                hypriot   Running   tcp://192.168.1.109:2376   apis-rpi-dev
apis-rpi-02                hypriot   Running   tcp://192.168.1.101:2376   apis-rpi-dev
apis-rpi-03                hypriot   Running   tcp://192.168.1.104:2376   apis-rpi-dev
apis-rpi-04                hypriot   Running   tcp://192.168.1.110:2376   apis-rpi-dev
apis-rpi-05                hypriot   Running   tcp://192.168.1.102:2376   apis-rpi-dev
apis-rpi-06                hypriot   Running   tcp://192.168.1.105:2376   apis-rpi-dev
apis-rpi-07                hypriot   Running   tcp://192.168.1.108:2376   apis-rpi-dev
apis-rpi-08                hypriot   Running   tcp://192.168.1.107:2376   apis-rpi-dev
apis-rpi-09                hypriot   Running   tcp://192.168.1.106:2376   apis-rpi-dev
apis-rpi-10                hypriot   Running   tcp://192.168.1.103:2376   apis-rpi-dev
apis-rpi-dev               hypriot   Running   tcp://192.168.1.125:2376   apis-rpi-dev (master)
apis-rpi-master            hypriot   Running   tcp://192.168.1.213:2376   apis-rpi-dev
apis-rpi-s01               hypriot   Running   tcp://192.168.1.166:2376   apis-rpi-dev
apis-rpi-s02               hypriot   Running   tcp://192.168.1.203:2376   apis-rpi-dev
apis-rpi-s04               hypriot   Running   tcp://192.168.1.185:2376   apis-rpi-dev
apis-rpi-util01            hypriot   Running   tcp://192.168.1.123:2376   apis-rpi-dev
apis-rpi-util02            hypriot   Running   tcp://192.168.1.121:2376   apis-rpi-dev
apis-rpi-util03            hypriot   Running   tcp://192.168.1.122:2376   apis-rpi-dev
rpi-disk-tester            hypriot   Running   tcp://192.168.1.165:2376   apis-rpi-dev

$ docker $(docker-machine config --swarm apis-rpi-dev) info
Containers: 22
Images: 0
Strategy: spread
Filters: affinity, health, constraint, port, dependency
Nodes: 19
 apis-rpi-01: 192.168.1.109:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 456 MiB
 apis-rpi-02: 192.168.1.101:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 971.3 MiB
 apis-rpi-03: 192.168.1.104:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 971.3 MiB
 apis-rpi-04: 192.168.1.110:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 456 MiB
 apis-rpi-05: 192.168.1.102:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 971.3 MiB
 apis-rpi-06: 192.168.1.105:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 971.3 MiB
 apis-rpi-07: 192.168.1.108:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 456 MiB
 apis-rpi-08: 192.168.1.107:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 456 MiB
 apis-rpi-09: 192.168.1.106:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 456 MiB
 apis-rpi-10: 192.168.1.103:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 971.3 MiB
 apis-rpi-dev: 192.168.1.125:2376
  └ Containers: 2
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 971.3 MiB
 apis-rpi-master: 192.168.1.213:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 971.3 MiB
 apis-rpi-s01: 192.168.1.166:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 971.3 MiB
 apis-rpi-s02: 192.168.1.203:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 971.3 MiB
 apis-rpi-s04: 192.168.1.185:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 971.3 MiB
 apis-rpi-util01: 192.168.1.123:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 971.3 MiB
 apis-rpi-util02: 192.168.1.121:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 971.3 MiB
 apis-rpi-util03: 192.168.1.122:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 971.3 MiB
 rpi-disk-tester: 192.168.1.165:2376
  └ Containers: 3
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 971.3 MiB
CPUs: 0
Total Memory: 0 B
```
And… if you shut down a host, it shows up in the cluster again upon reboot. That’s cool. rpi-disk-tester had been added to the cluster yesterday, then rebooted after the other hosts to test whether max_usb_current=1 would allow a spinning drive to work without a powered hub.
3 comments
tongfamily
September 13, 2015 at 2:50 pm (UTC -5) Link to this comment
Wow, a cool way to use avahi. Going to implement something similar. I was using arp-scan to find things rather than avahi.
tongfamily
September 13, 2015 at 3:11 pm (UTC -5) Link to this comment
At least for me, avahi-browse -Datp (the p, by the way, is nice because you get easy-to-parse output, at least on Ubuntu) yields nothing, while arp-scan pings all the devices and wakes them up. So
sudo arp-scan --localnet 10.0.1.0/24 | grep b8:27
seems reliable. You do have to figure out your network IP address, which is kind of terrible, but a little ifconfig mashing gets you that.
Matt Williams
September 13, 2015 at 8:19 pm (UTC -5) Link to this comment
avahi-browse only works if all the hosts are running the avahi daemon. You’re right; arp is far more portable as a result. I’d not encountered arp-scan before, so I’ll have to check it out. Using arp previously, I think I would have pinged the network and then run ‘arp -a’. I did find that I could do a sudo arp-scan -l without the network address on Ubuntu 14.04. I’m a little partial to awk for grabbing fields and parsing; the ‘p’ mucks with the MAC address ;-).