Setting Up a K3s Cluster on Raspberry Pi with Ingress and Longhorn
Welcome to this comprehensive guide on setting up a K3s cluster on your Raspberry Pi! K3s is a lightweight Kubernetes distribution, perfect for edge computing and IoT applications. In this guide, I'll walk you through setting up a K3s cluster, configuring essential components like MetalLB, Ingress, ArgoCD, and Longhorn. By the end, you'll have a powerful and scalable Kubernetes environment running on your Raspberry Pi.
Prerequisites
Before diving in, ensure you have multiple Raspberry Pi devices with internet access and SSH enabled. If you plan to use Longhorn for persistent storage, each node also needs the open-iscsi and NFS client packages, which we install in Step 1 (Longhorn attaches volumes over iSCSI and uses NFS for ReadWriteMany volumes and backups).
Step 1: System Preparation
Let's start by disabling unused hardware and configuring cgroups. This helps optimize your Raspberry Pi for running a Kubernetes cluster by freeing up resources.
Disable Unused Hardware
You can disable the built-in Wi-Fi and Bluetooth to free up system resources:
sudo nano /boot/firmware/config.txt
Add the following lines at the bottom of the file:
dtoverlay=disable-wifi
dtoverlay=disable-bt
Configure cgroups
Kubernetes heavily relies on cgroups (control groups) for resource management. Enabling memory cgroups ensures your Pi can efficiently manage containers' memory usage:
sudo nano /boot/firmware/cmdline.txt
Append the following options to the end of the existing single line in this file (the file must remain one line; do not add a new line):
cgroup_memory=1 cgroup_enable=memory
These settings ensure that the Linux kernel tracks memory usage per container, allowing Kubernetes to enforce memory limits and prevent container resource exhaustion.
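For reference, here is roughly what the finished cmdline.txt should look like as a single line (the root= and console= values below are placeholders; keep whatever your file already contains and only append the two cgroup options):

```
console=serial0,115200 console=tty1 root=PARTUUID=... rootfstype=ext4 fsck.repair=yes rootwait cgroup_memory=1 cgroup_enable=memory
```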
Update and Install Required Packages
Next, update your system and install necessary packages:
sudo apt update
sudo apt full-upgrade
sudo apt install nfs-common open-iscsi
sudo systemctl enable open-iscsi --now
sudo reboot
Step 2: Setting Up the K3s Cluster
K3s simplifies the deployment of Kubernetes on lightweight infrastructure. We'll set up a master node and additional worker nodes.
Installing K3s on the Master Node
To install K3s on your master node, use the following command. We're disabling the default service load balancer and Traefik (K3s' built-in Ingress controller) to customize our setup later:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable servicelb --disable traefik" sh -s -
After installation, retrieve the node token, which you'll need to join other nodes to the cluster:
sudo cat /var/lib/rancher/k3s/server/node-token
Installing K3s on Worker Nodes
On each worker node, run the following command. Replace <master-ip> with your master node's IP address, <node-token> with the token you retrieved earlier, and <node-name> with a unique name for each worker:
curl -sfL https://get.k3s.io | K3S_URL="https://<master-ip>:6443" K3S_TOKEN="<node-token>" K3S_NODE_NAME="<node-name>" sh -
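Once the workers have joined, you can verify the cluster from the master node (node names will match whatever you set in K3S_NODE_NAME):

```shell
# On the master node, confirm all nodes have joined and are Ready
sudo k3s kubectl get nodes
```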
Step 3: Configuring Essential Tools
With the cluster up and running, let's configure some essential Kubernetes tools to enhance your cluster's functionality.
MetalLB - Load Balancer
MetalLB provides load balancing for bare-metal Kubernetes clusters. This allows you to expose services with an external IP:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.5/config/manifests/metallb-native.yaml
kubectl apply -f "<path_to_your_AddressPool.yaml>" -n metallb-system
MetalLB is essential for enabling external access to your services, making it a great choice for a home lab.
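For reference, an AddressPool manifest for MetalLB's current CRD-based API (v0.13+) might look like the sketch below. The address range is a placeholder; pick a free range on your LAN that your DHCP server does not hand out:

```yaml
# IPAddressPool defines the IPs MetalLB may assign to LoadBalancer services
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
# L2Advertisement announces the pool's IPs on the local network via ARP
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```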
Ingress-NGINX - Ingress Controller
Ingress-NGINX handles routing external traffic to your internal services, which is crucial for exposing web applications. Install it with Helm (make sure Helm is installed first):
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
Ingress allows you to define custom routing rules, making your services easily accessible from the outside.
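As an illustration, a minimal Ingress resource might look like this. The hostname myapp.example.com and the Service name myapp are placeholders for your own application:

```yaml
# Routes HTTP traffic for myapp.example.com to the Service "myapp" on port 80
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: default
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```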
ArgoCD - GitOps Continuous Deployment
ArgoCD is a declarative continuous deployment tool for Kubernetes. It automates the deployment of applications to your cluster based on Git repositories:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
You can also patch the configuration to enable exec functionality:
kubectl patch configmap argocd-cm -n argocd --type=merge -p='{"data":{"exec.enabled":"true"}}'
kubectl patch clusterrole/argocd-server --type=json -p='[{"op":"add","path":"/rules/-","value":{"apiGroups":[""],"resources":["pods/exec"],"verbs":["create"]}}]'
ArgoCD simplifies application management by ensuring your cluster's state matches your Git repositories. It's a powerful tool for automated CI/CD.
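A typical ArgoCD Application manifest might look like the sketch below. The repository URL, branch, and path are placeholders for your own Git repository:

```yaml
# Tells ArgoCD to keep the "default" namespace in sync with manifests/ in Git
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your_username/your_repo.git
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true     # delete resources removed from Git
      selfHeal: true  # revert manual changes to match Git
```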
Step 4: Persistent Storage with Longhorn
Longhorn provides highly available persistent storage for Kubernetes. It's a great option for clusters running on bare-metal hardware:
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.2/deploy/longhorn.yaml
Pin the manifest to a released tag (v1.6.2 here; check the Longhorn releases page for the latest stable version) rather than the master branch, which may contain untested changes.
Longhorn ensures that your data is replicated across nodes and remains available even in case of node failure. It's an excellent choice for persistent storage in Kubernetes environments.
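Once Longhorn is running, workloads request storage through a PersistentVolumeClaim that names the longhorn StorageClass, for example:

```yaml
# Requests a 2Gi Longhorn-backed volume, replicated across nodes
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
```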
Step 5: Managing Secrets
Kubernetes secrets allow you to store and manage sensitive information like passwords and tokens securely.
TLS Secret
To store your SSL certificates as Kubernetes secrets, use the following command:
kubectl create secret tls tls-secret --namespace default \
--cert=/path_to/cert.pem \
--key=/path_to/cert.key
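An Ingress can then terminate TLS with this secret by adding a tls section to its spec (the hostname below is a placeholder and must match a host in your Ingress rules):

```yaml
# Fragment of an Ingress spec: serve myapp.example.com over HTTPS
# using the certificate stored in tls-secret
spec:
  tls:
    - hosts:
        - myapp.example.com
      secretName: tls-secret
```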
Docker Registry Secret
If you use a private Docker registry, create a Docker registry secret to store your credentials:
kubectl create secret docker-registry regcred \
--docker-server=https://index.docker.io/v1/ \
--docker-username=your_username \
--docker-password=your_token \
[email protected]
Note that --docker-server must be the registry endpoint (https://index.docker.io/v1/ for Docker Hub), not a repository path.
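Pods then reference the secret via imagePullSecrets. A minimal example, with the image name as a placeholder:

```yaml
# Pulls a private image from Docker Hub using the "regcred" secret
apiVersion: v1
kind: Pod
metadata:
  name: private-app
  namespace: default
spec:
  imagePullSecrets:
    - name: regcred
  containers:
    - name: app
      image: docker.io/your_username/your_repo:latest
```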
Security Best Practices
- Regularly update your system and K3s installation to ensure security patches are applied.
- Use secrets for sensitive data management.
- Limit network access to your Kubernetes API server and use a firewall for added protection.
Conclusion
Congratulations! You've successfully set up a K3s cluster on your Raspberry Pi with key components like MetalLB, Ingress-NGINX, ArgoCD, and Longhorn. With this setup, you're well on your way to running production-grade workloads on your Raspberry Pi cluster.
Make sure to regularly monitor and maintain your cluster to keep it running smoothly. Enjoy your powerful, lightweight Kubernetes environment!