Apologies to W. Somerset Maugham for the title.
For my Cloud in a Box, I want the Data hosts, at the least, to have more network I/O capability, whether it be for a Docker Registry or other data. (As an aside, I am playing with doing substructure matching of chemical compounds and/or other cheminformatics with the Pi. Compressed, the base data from the NIH is 50 GB. News to follow shortly.)
One way to do this is via Link Aggregation, which this post explores.
My test is rather artificial; I am reading 100 MB from /dev/zero and sending it across the wire.
In the first case, I used ssh for the transport. Afterwards, I used netcat
with similar results.
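For reference, a netcat version of the test looks something like the sketch below; the port number (5000) and the workstation address are placeholders, not the exact invocation used.

# On the workstation: listen, discard the data, and time the transfer
time (nc -l -p 5000 | dd of=/dev/null)

# On the Pi: push ~100 MB of zeroes across the wire to the workstation
dd if=/dev/zero count=200K | nc -q 1 <workstation-ip> 5000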
Additionally, there are different modes of balancing and/or HA possible; UbuntuBonding provides a good description of the types of bonding as well as their configuration.
The following types of bonding are tested:
- No bonding; this is the baseline for comparison.
- balance-rr (balanced round robin); this uses all the NICs to send packets round robin.
- balance-alb (adaptive load balancing)
- balance-tlb (transmit load balancing)
The hosts used in testing were both plugged into a gigabit switch which was otherwise empty. Only one Pi at a time was plugged into the switch.
The second NIC is a USB 2.0 10/100 Mb/s device. I didn’t grab a picture, but the Pi with both NICs in use was drawing about 2 watts (or, if you will, less than 500 mA). Without the 2nd NIC, both the B+ and 2B were drawing ~1.25 watts. So my compute section of the cloud will be drawing < 20 watts. Likely < 15.
Setup
My current switch does not support 802.3ad Link Bundling, so I am doing without in this test.
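(For reference, if the switch did support LACP, I believe the mode lines in the /etc/network/interfaces stanza shown below would look roughly like this instead; this is untested here.)

# Hypothetical 802.3ad (LACP) settings -- requires switch-side support
bond-mode 802.3ad
bond-lacp-rate fast
bond-xmit-hash-policy layer3+4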
There is a driver which needs to be loaded:
apt-get install ifenslave
Then add the following line to /etc/modules:
bonding
Once you’ve done so, you can run lsmod to verify the module is loaded. If it’s not, then do a modprobe bonding. You should not need to do so again; the module should be loaded on the next boot.
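Spelled out, the check and the one-off manual load look something like this:

# Check whether the bonding module is already loaded
lsmod | grep bonding

# If it is not listed, load it by hand (only needed this once)
sudo modprobe bonding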
/etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
    bond-master bond0
    bond-primary eth0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address 192.168.1.126
    netmask 255.255.255.0
    network 192.168.1.0
    gateway 192.168.1.1
    bond-mode balance-rr
    # bond-mode balance-alb
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200
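The simplest way to pick up a configuration change (including switching bond-mode between runs) is a reboot; afterwards the kernel reports the active mode and the enslaved NICs, roughly as follows:

sudo reboot

# After the reboot, confirm the active mode and the slave interfaces
cat /proc/net/bonding/bond0

# The bond shows up as a single interface with the static address
ip addr show bond0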
Pi B+
Unbonded
$ time (ssh pi@192.168.1.126 dd if=/dev/zero count=200K|dd of=/dev/null)
204800+0 records in
204800+0 records out
104857600 bytes (105 MB) copied, 39.6221 s, 2.6 MB/s
204800+0 records in
204800+0 records out
104857600 bytes (105 MB) copied, 50.0889 s, 2.1 MB/s

real    0m50.092s
user    0m1.587s
sys     0m2.660s
balance-alb
$ time (ssh pi@192.168.1.126 dd if=/dev/zero count=200K|dd of=/dev/null)
204800+0 records in
204800+0 records out
104857600 bytes (105 MB) copied, 41.6745 s, 2.5 MB/s
204800+0 records in
204800+0 records out
104857600 bytes (105 MB) copied, 52.1433 s, 2.0 MB/s

real    0m52.146s
user    0m1.713s
sys     0m2.667s
balance-rr
$ time (ssh pi@192.168.1.126 dd if=/dev/zero count=200K|dd of=/dev/null)
204800+0 records in
204800+0 records out
104857600 bytes (105 MB) copied, 41.3007 s, 2.5 MB/s
204800+0 records in
204800+0 records out
104857600 bytes (105 MB) copied, 51.7677 s, 2.0 MB/s

real    0m51.773s
user    0m1.656s
sys     0m2.465s
balance-tlb
$ time (ssh pi@192.168.1.126 dd if=/dev/zero count=200K|dd of=/dev/null)
204800+0 records in
204800+0 records out
104857600 bytes (105 MB) copied, 41.5891 s, 2.5 MB/s
204800+0 records in
204800+0 records out
104857600 bytes (105 MB) copied, 52.0192 s, 2.0 MB/s

real    0m52.024s
user    0m1.695s
sys     0m2.670s
Pi 2B
Unbonded
~$ time (ssh pi@192.168.1.213 dd if=/dev/zero count=200K|dd of=/dev/null)
204800+0 records in
204800+0 records out
104857600 bytes (105 MB) copied, 16.9039 s, 6.2 MB/s
204800+0 records in
204800+0 records out
104857600 bytes (105 MB) copied, 27.3254 s, 3.8 MB/s

real    0m27.346s
user    0m1.196s
sys     0m1.959s
balance-rr
$ time (ssh pi@192.168.1.126 dd if=/dev/zero count=200K | dd of=/dev/null)
204800+0 records in
204800+0 records out
104857600 bytes (105 MB) copied, 15.6955 s, 6.7 MB/s
204800+0 records in
204800+0 records out
104857600 bytes (105 MB) copied, 15.8961 s, 6.6 MB/s

real    0m15.901s
user    0m1.271s
sys     0m1.646s
balance-alb
$ time (ssh pi@192.168.1.126 dd if=/dev/zero count=200K | dd of=/dev/null)
204800+0 records in
204800+0 records out
104857600 bytes (105 MB) copied, 15.1264 s, 6.9 MB/s
204800+0 records in
204800+0 records out
104857600 bytes (105 MB) copied, 15.3294 s, 6.8 MB/s

real    0m15.336s
user    0m1.131s
sys     0m1.783s
balance-tlb
$ time (ssh pi@192.168.1.126 dd if=/dev/zero count=200K | dd of=/dev/null)
204800+0 records in
204800+0 records out
104857600 bytes (105 MB) copied, 15.2317 s, 6.9 MB/s
204800+0 records in
204800+0 records out
104857600 bytes (105 MB) copied, 15.4328 s, 6.8 MB/s

real    0m15.437s
user    0m1.279s
sys     0m1.656s
Results and Observations
I believe that /dev/zero is implemented “oddly”, or at least differently, on the B+ than on the Pi 2. It seems to require more work by the system.
I also noticed that there appears to be an issue with cron on the Pi B+. It is eating the CPU approximately 1/2 the time. This will be investigated in another blog post.
- I didn’t notice any difference between the bonded and unbonded tests on the Pi B+.
- The bonding is working on the Pi 2; this is evidenced by the time spent transferring the files, approximately 1/2 the time of the un-bonded host.
All in all, it was a useful little experiment to see how well bonding works on the Pi.