Setting up a Raspberry Pi cluster

Let’s build a Raspberry Pi cluster with Kubernetes (or rather a lightweight Kubernetes distribution) step by step.

Hardware requirements

We need multiple Raspberry Pis. I have chosen the Raspberry Pi 3B+. This is not the most recent model, but it will do. Each Pi needs

  • power, so I decided to connect them to a six-port USB power charger that fits on the cluster,
  • an operating system on a micro SD card,
  • a switch that connects them and also fits on the cluster,
  • and a cluster case, which should look like the one from the James Bond movie “Skyfall”: a transparent case supporting the selected Pi model.

My cluster contains four Raspberry Pis; here is the hardware shopping list.

  • 4 x Raspberry Pi 3B+
  • 1 x power supply: Anker 60W 6-Port PowerPort USB Charger (white)
  • 4 x Anker 1ft / 0.3m PowerLine USB 2.0 to micro USB cable (white)
  • 4 x UTP Cat5 cable 1ft / 0.3m (white)
  • 2 x ModMyPi stackable Raspberry Pi case (transparent)
  • 4 x 16 or 32 GB micro SD card, class 10
  • 1 x D-Link Switch GO-SW-8E 8-Port

Now that we have the parts, prepare for assembly. We need a screwdriver, double-sided tape for the power bank, and hardware to flash an OS onto the micro SD cards.

Hardware alternatives

Instead of a USB charger with multiple ports, you can also choose a power HAT for the Raspberry Pi.

The Raspberry Pi Power over Ethernet HAT is a small accessory for the Raspberry Pi which allows you to power the Pi from Power over Ethernet–enabled networks.

There are also newer cluster cases which include fans.

We configure the cluster Pi by Pi, so each Pi will have its own hostname. First download the OS and flash it to the micro SD card. We will configure each Pi from the command line as much as possible, just for fun.

Optional steps

We will use wget for this. On macOS it is not available by default, so let’s install wget first (assuming Homebrew is already installed).

No Homebrew? Install it first:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

Now install wget on macOS:

brew install wget

Download OS

Now download Raspbian Lite; we will use this OS for the Pi. You can download it with the browser from https://raspberrypi.org/downloads

or use the command line.

wget https://downloads.raspberrypi.org/raspbian_lite_latest
unzip ./raspbian_lite_latest

You will see something like

Archive:  ./raspbian_lite_latest.zip
inflating: 2020-02-13-raspbian-buster-lite.img

We will flash the file 2020-02-13-raspbian-buster-lite.img to the micro SD card.

Flash OS to disk

Determine the SD card device. On macOS we use the tool diskutil.

diskutil list

This will show all disks on your machine. The SD card will most likely NOT be /dev/disk0 or /dev/disk1. Shown below is possible output for /dev/disk2:

/dev/disk2 (external, physical):
   #:                      TYPE NAME           SIZE       IDENTIFIER
   0:     FDisk_partition_scheme              *32.0 GB    disk2
   1:             Windows_FAT_32 NO NAME       32.0 GB    disk2s1

Unmount the disk and flash the OS using the CLI. First unmount:

diskutil unmountDisk /dev/disk2

Now flash the OS to the memory card. We use the raw device /dev/rdisk2 as output, as this is faster. Note that dd prints no progress by default; on macOS you can press Ctrl+T to see how far it has got.

sudo dd bs=1m if=./2020-02-13-raspbian-buster-lite.img of=/dev/rdisk2 conv=sync

Enable SSH

SSH is disabled by default on the Pi. Since I want to use the command line, we need to enable SSH so we can access the Pi from the Mac over the network, without a screen attached directly to the Pi. Let’s enable SSH now.

Locate the SD card boot volume first:

ls /Volumes

This should show the boot partition as a volume on macOS.

Now enable SSH on the Pi by adding an empty file named “ssh” to the boot partition of the SD card. When the Pi boots from this SD card, SSH will be enabled.

touch /Volumes/boot/ssh

The SD card is now ready. Eject it:

diskutil eject /dev/disk2

Remove the SD card from your Mac.

Put the SD card in the Pi. Attach the Pi’s network cable to the switch, the switch to the router, the USB cable to the power bank, et cetera.

Running Pi cluster

We will configure each Pi separately so we know which Pi has which host name (and IP address).

Configure the Pi

We should be able to access the Pi from the Mac when it is attached to the cluster and the cluster is connected to the same network as the Mac.

IP address

But what is the IP address of the Pi in the cluster and in the network? A handy tool for this is nmap. We are working on macOS, so install nmap using Homebrew:

brew install nmap

Now scan the network. First check the IP address of your router (in the examples that follow I will use 192.168.40.1):

netstat -nr | grep default

This might give the following output:

default         192.168.40.1       UGSc          231        0    en0       
default fe80::%utun0 UGcI utun0

So the IP address of the router is 192.168.40.1
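You can also extract the router IP non-interactively. The printf below replays the sample output from the text so the sketch runs anywhere; on the Mac, pipe the real command into the same awk instead: netstat -nr | awk '/^default/ && $2 ~ /^[0-9]/ {print $2; exit}'

```shell
# Pick the first "default" route whose gateway looks like an IPv4 address,
# skipping IPv6 link-local entries like fe80::%utun0:
printf 'default         192.168.40.1       UGSc          231        0    en0\ndefault            fe80::%%utun0      UGcI            utun0\n' |
  awk '/^default/ && $2 ~ /^[0-9]/ {print $2; exit}'
# → 192.168.40.1
```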

Now scan the network using the IP address of the router.

nmap -sP 192.168.40.1/24

In the output, find the IP address and name of the Raspberry Pi.

Host is up (0.0021s latency).
Nmap scan report for raspberrypi (192.168.40.153)

As you can see, the hostname of the Raspberry Pi is “raspberrypi”. Every freshly flashed Pi in the network will report this hostname until we change it, which we will do later on the command line. First, access the Pi.
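In a busy network you can filter the scan down to the Pi entries. The printf replays sample report lines so the sketch runs anywhere; on the Mac you would pipe a real scan instead: nmap -sP 192.168.40.1/24 | grep raspberrypi

```shell
# nmap prints one "Nmap scan report for <name> (<ip>)" line per host,
# so a simple grep picks out the Raspberry Pis (the printer line is skipped):
printf 'Nmap scan report for raspberrypi (192.168.40.153)\nHost is up (0.0021s latency).\nNmap scan report for someprinter (192.168.40.20)\n' |
  grep raspberrypi
# → Nmap scan report for raspberrypi (192.168.40.153)
```

Since unconfigured Pis all report the hostname raspberrypi, you may see several matches until you rename them.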

As we have enabled SSH, let’s access the Pi over the network using the command line. We can use either the name of the Pi or the IP address.

Using the name “raspberrypi” it would be:

ssh pi@raspberrypi.local

or using the IP address

ssh pi@192.168.40.153

The default password is raspberry.

pi@raspberrypi:~ $

We are in, via SSH.

Login using ssh key (Optional)

If you have an SSH key, you can log into the Pi with your key after you copy the public key to the Pi. From the macOS command line you can use the command ssh-copy-id.

Once the Pis are renamed (see below), the name of the Pi will be rpimaster or rpinode<x>:

ssh-copy-id pi@rpimaster.local

Next time you log into the Pi use command

ssh pi@rpimaster.local

BUT the password asked for is now the passphrase of the SSH key (assuming it is passphrase protected).
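No key yet? A minimal sketch for generating one. The file name is just an example, and the empty passphrase (-N "") is for the sketch only; protect a real key with a passphrase:

```shell
# Generate a fresh ed25519 key pair for the cluster (illustrative path):
rm -f /tmp/id_ed25519_pi /tmp/id_ed25519_pi.pub
ssh-keygen -t ed25519 -N "" -C "pi-cluster" -f /tmp/id_ed25519_pi -q
ls /tmp/id_ed25519_pi /tmp/id_ed25519_pi.pub
```

Then copy it to a Pi with ssh-copy-id -i /tmp/id_ed25519_pi.pub pi@rpimaster.local.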

Configure Pi hostname

Now verify and change the name of our Pi, so each Pi will have its own unique hostname within the network.

pi@raspberrypi:~ $ hostname

will show the output

raspberrypi

Now change the hostname:

pi@raspberrypi:~ $ sudo vi /etc/hostname

Change the name to

  • rpimaster for the first Pi. We will use this as the master for Kubernetes.
  • rpinode1, rpinode2, et cetera for the next Pis.

We will also change the file /etc/hosts:

sudo vi /etc/hosts

find

127.0.1.1       raspberrypi

change to

127.0.1.1       rpimaster
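Both edits can also be done in one sed pass. Shown here on scratch copies so the sketch is safe to run anywhere; on the Pi you would point the same sed (with sudo) at /etc/hostname and /etc/hosts:

```shell
# Hypothetical target name; use rpinode1, rpinode2, … for the workers.
NEW_NAME=rpimaster
# Scratch copies of the two files, with the default contents:
printf 'raspberrypi\n' > /tmp/hostname.demo
printf '127.0.1.1\traspberrypi\n' > /tmp/hosts.demo
# Replace the default hostname in both files (keeps .bak backups):
sed -i.bak "s/raspberrypi/${NEW_NAME}/" /tmp/hostname.demo /tmp/hosts.demo
cat /tmp/hostname.demo /tmp/hosts.demo
```

Reboot the Pi afterwards so the new hostname is picked up everywhere.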

Kubernetes for the Pi cluster

Besides the enterprise Kubernetes, also known as K8s, there are other distributions. We will use k3s, “the certified Kubernetes distribution built for IoT & Edge computing”, for the Pi cluster. We will install k3s on each Pi separately, with one Pi being the master node.

Installing k3s on master

Switch to root for now. On the (master) Pi:

pi@rpimaster:~ $ sudo su -

Enable Linux control groups

We need to set up Linux control groups (cgroups), which provide the resource monitoring and isolation needed by Kubernetes or a Kubernetes implementation like k3s.

Edit /boot/cmdline.txt and append the following parameters to the end of the existing line (the file is a single line and must stay that way), then reboot the Pi:

cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
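Appending can also be scripted with sed. The sample kernel command line below is illustrative; on the Pi you would run the sed (as root) against /boot/cmdline.txt itself:

```shell
# Illustrative stand-in for /boot/cmdline.txt (real contents differ):
printf 'console=tty1 root=PARTUUID=6c586e13-02 rootfstype=ext4 fsck.repair=yes rootwait\n' > /tmp/cmdline.demo
# Append the cgroup parameters to the end of the single line (keeps a .bak backup):
sed -i.bak 's/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /tmp/cmdline.demo
cat /tmp/cmdline.demo
wc -l < /tmp/cmdline.demo   # must still be 1 line
```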

Starting installation on the master node

root@rpimaster:~# curl -sfL https://get.k3s.io | sh -
[INFO]  Finding release for channel stable
[INFO] Using v1.17.4+k3s1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.17.4+k3s1/sha256sum-arm.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.17.4+k3s1/k3s-armhf
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
root@rpimaster:~#

Wow, that was easy. Verify k3s, for example by listing the nodes:

root@rpimaster:~# k3s kubectl get node -o wide

Now determine the token of the master; we need it for joining the worker nodes.

sudo cat /var/lib/rancher/k3s/server/node-token

This will output something like the following (not showing the real key):

K10315549bc9c9b8e29aa90c3f42f17777ec81d5fbea8fccf1023d1826e1::server:0f59df3f34552e4ac62ac22ff

Install k3s on worker node

Now we will prepare and configure each worker node Pi almost the same way as the master node; only the hostname is rpinode<x> instead of rpimaster.

SSH into the worker node (the first node is named rpinode1):

ssh pi@rpinode1.local

You will see at least the following message:

Wi-Fi is currently blocked by rfkill.

The Pi is connected through the switch, so we leave Wi-Fi alone for now. Prepare to install k3s on the worker node.

Technical Workout!!

Because resolving the master’s hostname can fail here (see the certificate errors further below), we will use the IP address of the master.

We need the IP address of the master node:

ping rpimaster.local
PING rpimaster.local (192.168.40.153): 56 data bytes
64 bytes from 192.168.40.153: icmp_seq=0 ttl=64 time=1.644 ms

Now export some variables which are needed by k3s. The variable K3S_URL points to the master; we also need the token of the master, determined earlier.

export K3S_URL="https://192.168.40.153:6443"
export K3S_TOKEN=<TOKEN>

Install k3s on worker node, like we did on the master node.

curl -sfL https://get.k3s.io | sh -

This will show that the agent is installed and starting

...
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO] systemd: Starting k3s-agent

You can verify the status

pi@rpinode1:~ $ sudo systemctl status k3s-agent

If you see that the k3s-agent gives errors concerning the CA certificates,

Oct 21 13:15:29 rpinode1 k3s[921]: time="2020-10-21T13:15:29.366724836+01:00" level=error msg="failed to get CA certs at...

then you can alter the config.

sudo vi /etc/systemd/system/k3s-agent.service.env

Edit K3S_URL to use the IP address of the master instead of the hostname:

K3S_URL=https://192.168.40.153:6443

Restart the agent and check again.

sudo systemctl restart k3s-agent

Now the errors related to the certificates should be gone, and this worker node can join. Repeat this for rpinode2 and rpinode3 as well.

Verify cluster

We have made sure that at least one Pi, rpinode1, could join. Let’s verify on the master node:

kubectl get node -o wide

Giving the following output:

NAME        STATUS   ROLES    AGE    VERSION        INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
rpimaster   Ready    master   179d   v1.17.4+k3s1   192.168.40.154   <none>        Raspbian GNU/Linux 10 (buster)   4.19.97-v7+      containerd://1.3.3-k3s2
rpinode1    Ready    <none>   177d   v1.17.4+k3s1   192.168.40.161   <none>        Raspbian GNU/Linux 10 (buster)   4.19.97-v7+      containerd://1.3.3-k3s2
rpinode2    Ready    <none>   78s    v1.17.4+k3s1   192.168.40.192   <none>        Raspbian GNU/Linux 10 (buster)   4.19.97-v7+      containerd://1.3.3-k3s2
rpinode3    Ready    <none>   16s    v1.17.4+k3s1   192.168.40.190   <none>        Raspbian GNU/Linux 10 (buster)   4.19.97-v7+      containerd://1.3.3-k3s2

You can see that k3s uses containerd as container runtime instead of Docker. We leave it at that for now and deploy a hello world app.

Choose correct container image

For the deployment we can use a hello world application in a container. Choose one for the correct architecture.

We need to make sure that the container image matches the architecture: an image built for x86_64 will not run on ARM.

The first time, I deployed a hello-world application built for the AMD64 architecture. The container crashed with the following message:

stderr F standard_init_linux.go:211: exec user process caused "exec format error"
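A quick way to see which architecture you are picking images for:

```shell
# Print the CPU architecture of the current machine. On a Raspberry Pi 3B+
# running 32-bit Raspbian this prints armv7l, so you need an armhf (or
# multi-arch) image; on a typical Mac or PC it prints x86_64 or arm64.
uname -m
```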

Deploy pod

As one of my references is the blog alexellis.io, we will take the hello world app from that blog. See

https://blog.alexellis.io/test-drive-k3s-on-raspberry-pi/

The service YAML, with the name openfaas-figlet-svc.yaml, should look like this:

apiVersion: v1
kind: Service
metadata:
  name: openfaas-figlet
  labels:
    app: openfaas-figlet
spec:
  type: NodePort
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 31111
  selector:
    app: openfaas-figlet

and the deployment for the service, with the name openfaas-figlet-dep.yaml, like the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: openfaas-figlet
  labels:
    app: openfaas-figlet
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openfaas-figlet
  template:
    metadata:
      labels:
        app: openfaas-figlet
    spec:
      containers:
        - name: openfaas-figlet
          image: functions/figlet:latest-armhf
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              protocol: TCP
Notice that the image name is functions/figlet:latest-armhf. It is built for ARM, so the container created from this image can run on a Raspberry Pi.

Deploy Pod

kubectl apply -f openfaas-figlet-dep.yaml,openfaas-figlet-svc.yaml

Verify the rollout of the deployment:

root@rpimaster:~# kubectl rollout status deploy/openfaas-figlet
deployment "openfaas-figlet" successfully rolled out

Now we can list the pods:

root@rpimaster:~# kubectl get pods
NAME                                      READY   STATUS             RESTARTS   AGE
svclb-hello-kubernetes-mlwjg              0/1     Pending            0          21h
svclb-hello-kubernetes-v5z8v              0/1     Pending            0          21h
svclb-hello-kubernetes-gsnlk              0/1     Pending            0          21h
svclb-hello-kubernetes-pb4pl              0/1     Pending            0          21h
openfaas-figlet-5c5887f45-4s92c           1/1     Running            0          5h11m
hello-world-deployment-5686685fcc-pk2ms   0/1     CrashLoopBackOff   248        20h
hello-kubernetes-fb869d65c-tx6k7          0/1     CrashLoopBackOff   256        21h
hello-world-deployment-5686685fcc-4s8jn   0/1     CrashLoopBackOff   253        20h

The openfaas-figlet pod, with name openfaas-figlet-5c5887f45-4s92c, has status Running. The pods with the wrong container image architecture show status CrashLoopBackOff. When we inspect the pod with the command

kubectl describe pod openfaas-figlet-5c5887f45-4s92c

we can see that the status is Running and the Raspberry Pi is rpinode1:

Name:         openfaas-figlet-5c5887f45-4s92c
Namespace:    default
Priority:     0
Node:         rpinode1/192.168.40.161
Start Time:   Thu, 22 Oct 2020 07:04:23 +0100
Labels:       app=openfaas-figlet
              pod-template-hash=5c5887f45
Annotations:  <none>
Status:       Running
...
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
...

Done

We have set up our cluster and verified that deploying a pod works. Be aware that if you want to shut down the cluster, you need to gracefully shut down each Raspberry Pi individually.

SSH into the Pi. The example shows only node1:

ssh pi@rpinode1

Now shut down the Pi with the halt (-h) and power-off (-P) options:

sudo shutdown -h -P now

We are done for now.

My references

I used the following references for setting up the cluster and could not have done it without them.

https://www.raspberrypi.org/documentation/installation/installing-images/mac.md

https://blog.alexellis.io/test-drive-k3s-on-raspberry-pi/

Written by

Software Developer — Loves Scala and Golang.
