Deploy a Highly Available K3s Cluster with K3sup

As you may have seen in our previous episodes, we have a bunch of Raspberry Pis set up to net-boot over NFS/iSCSI from our Synology NAS. One of the benefits of block storage over NFS, besides increased throughput, is the ability to run containerized applications, so why not take full advantage of it?
I've settled on deploying a K3s cluster, a stripped-down Kubernetes distribution. Thankfully, deploying a highly available cluster has become much easier over the years, so why make it harder? K3sup is a binary that lets us deploy a fully working, highly available K3s cluster in minutes, straight from our workstation.

The first step is to download the k3sup binary to our machine:

curl -sLS https://get.k3sup.dev | sh
Create a bootstrap script:

touch bootstrap.sh

Then add the following content, editing it to suit your needs:

#!/bin/bash
set -e
export NODE_1="192.168.4.2"
export NODE_2="192.168.4.3"
export NODE_3="192.168.4.4"
export USER=root
# The first server starts the cluster
k3sup install \
  --cluster \
  --user $USER \
  --ip $NODE_1
# The second node joins
k3sup join \
  --server \
  --ip $NODE_2 \
  --user $USER \
  --server-user $USER \
  --server-ip $NODE_1
# The third node joins
k3sup join \
  --server \
  --ip $NODE_3 \
  --user $USER \
  --server-user $USER \
  --server-ip $NODE_1


Since k3sup uses SSH to log in to our hosts, our SSH public key must be present in the authorized keys on every Raspberry Pi. In case it isn't already, we will use Ansible, which also keeps track of our configuration and makes it easy to add new hosts and automate configuration tasks.

- name: Copy ssh
  hosts: {YOUR_HOSTS_GROUP}
  become: true
  gather_facts: false
  tasks:
    - name: Set authorized key
      ansible.posix.authorized_key:
        user: root
        state: present
        key: "{YOUR_PUBLIC_KEY}"
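The playbook needs an inventory listing our hosts. A minimal sketch, assuming a group named pis and the node IPs from the bootstrap script above:

```shell
# Generate a minimal Ansible inventory (the group name "pis" is an assumption;
# match it to the hosts value in your playbook)
cat > inventory.ini <<'EOF'
[pis]
192.168.4.2
192.168.4.3
192.168.4.4
EOF
```

With the inventory in place, the playbook can be run with ansible-playbook -i inventory.ini against your playbook file.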


Our public key is now on all our nodes. Time to execute our K3s bootstrap script!
If everything went as planned, we should now have 3 master nodes up and running.
Let's join our workers !

k3sup join --user root --server-ip $MASTERIP --host $NODEHOSTNAME
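With several workers, repeating this command by hand gets tedious. A sketch of one way to script it, in the same spirit as the bootstrap script above (the master IP and worker hostnames are placeholders, adjust them to your setup):

```shell
# Build a join script for a list of workers; hostnames and IP are placeholders
MASTERIP="192.168.4.2"
cat > join-workers.sh <<'EOF'
#!/bin/bash
set -e
EOF
for host in pi-worker-1 pi-worker-2 pi-worker-3; do
  echo "k3sup join --user root --server-ip $MASTERIP --host $host" >> join-workers.sh
done
chmod +x join-workers.sh
```

Running ./join-workers.sh then joins each worker in turn.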
Last but not least, we can control the cluster from our current machine. After making sure kubectl is installed, we copy the config file found at /etc/rancher/k3s/k3s.yaml on the server nodes to our local path; k3sup also saves a local copy named kubeconfig in the directory it was run from. If the directory and file do not already exist, create them:
mkdir -p ~/.kube
cat kubeconfig > ~/.kube/config
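If you'd rather not overwrite an existing ~/.kube/config, kubectl also honors the standard KUBECONFIG environment variable, so you can point it at the file k3sup wrote without copying anything:

```shell
# Alternative: point kubectl at the generated kubeconfig without
# touching ~/.kube/config (assumes the file sits in the current directory)
export KUBECONFIG="$PWD/kubeconfig"
```

Any kubectl command run in that shell will now target the new cluster.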
Take a backup of your cluster now! Run the following on one of the server nodes:
k3s etcd-snapshot
