Enable sudo for the user
~$ su -
~# apt install sudo
~# usermod -aG sudo <user>
~# exit
~$ exit
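The effect of `usermod -aG sudo` can be seen in /etc/group: the user is appended to the sudo group's member list, the fourth colon-separated field. A quick sketch with a made-up entry ("alice" is not from this setup):

```shell
# /etc/group entries are name:password:gid:members;
# usermod -aG sudo <user> appends the user to the last field
printf 'sudo:x:27:alice\n' | cut -d: -f4
# → alice
```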
Enable ssh
On the server
sudo apt install openssh-server
On the client
ssh-copy-id <user>@<ip>
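ssh-copy-id assumes the client already has a key pair. If it does not, generate one first; a sketch, assuming the default ed25519 key path:

```shell
# Create ~/.ssh if needed, then generate an ed25519 key pair
# only when one does not already exist
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_ed25519" ] || ssh-keygen -t ed25519 -f "$HOME/.ssh/id_ed25519" -N "" -q
```

ssh-copy-id will then pick up the public key automatically.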
Harden ssh server
echo "PermitRootLogin no" | sudo tee /etc/ssh/sshd_config.d/01-disable-root-login.conf
echo "PasswordAuthentication no" | sudo tee /etc/ssh/sshd_config.d/02-disable-password-auth.conf
echo "ChallengeResponseAuthentication no" | sudo tee /etc/ssh/sshd_config.d/03-disable-challenge-response-auth.conf
echo "UsePAM no" | sudo tee /etc/ssh/sshd_config.d/04-disable-pam.conf
sudo systemctl reload ssh
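Before closing your current session, it is worth confirming the drop-ins took effect: `sudo sshd -T` prints the fully resolved configuration after everything under sshd_config.d is merged, and a grep picks out the hardened settings. The filter is shown below against sample `sshd -T` output, since it needs a running server:

```shell
# sudo sshd -T | grep -Ei '^(permitrootlogin|passwordauthentication|usepam)'
# demonstrated here on sample resolved-config output:
printf 'port 22\npermitrootlogin no\npasswordauthentication no\nusepam no\n' \
  | grep -Ei '^(permitrootlogin|passwordauthentication|usepam)'
```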
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
Install packages needed to use the Kubernetes apt repository
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gpg
Add the signing key and the Kubernetes apt repo
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Install kubelet, kubeadm and kubectl
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
Install a container runtime. Kubelet ≥ 1.26 requires containerd ≥ 1.6.0.
sudo apt install -y runc containerd
Disable swap for kubelet to work properly
sudo swapoff -a
Comment out swap in /etc/fstab to disable swap on boot
sudo sed -e '/swap/ s/^#*/#/' -i /etc/fstab
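The sed expression comments out any line mentioning swap, collapsing an existing run of leading `#` so already-commented lines are not double-commented. Its effect on a sample fstab entry:

```shell
# An uncommented swap line gains a single leading "#"
printf '/dev/sda2 none swap sw 0 0\n' | sed -e '/swap/ s/^#*/#/'
# → #/dev/sda2 none swap sw 0 0
```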
Load the required kernel modules on boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
Persist sysctl params across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
Apply sysctl params without reboot
sudo sysctl --system
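The same values are exposed under /proc/sys, so they can be read back to confirm `sysctl --system` applied them (the two bridge entries only exist once br_netfilter is loaded):

```shell
# Should print 1 after the config above has been applied
cat /proc/sys/net/ipv4/ip_forward
```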
Generate the default containerd config
containerd config default | sudo tee /etc/containerd/config.toml
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd
Configure the systemd cgroup driver for containerd
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
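The substitution flips the single SystemdCgroup key that the generated config sets to false by default; its effect on the relevant line:

```shell
# The default runc options disable the systemd cgroup driver;
# the sed call above rewrites that line in place
printf '            SystemdCgroup = false\n' \
  | sed 's/SystemdCgroup = false/SystemdCgroup = true/'
```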
Restart containerd
sudo systemctl restart containerd
We are going to use Cilium in place of kube-proxy: https://docs.cilium.io/en/v1.12/gettingstarted/kubeproxy-free/
sudo kubeadm init --skip-phases=addon/kube-proxy
https://kubernetes.io/docs/tasks/tools/
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
For remote kubectl, copy the config file to your local machine
scp <user>@<ip>:/home/<user>/.kube/config ~/.kube/config
Get taints on nodes
kubectl get nodes -o json | jq '.items[].spec.taints'
Remove the taint on the control-plane node to allow it to schedule workloads
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
To bootstrap the cluster we can install Cilium using its namesake CLI.
For Linux this can be done by running
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
See the Cilium official docs for more options.
Next we install Cilium in kube-proxy replacement mode and enable L2 announcements so it can reply to ARP requests. To avoid running into rate limiting while doing L2 announcements, we also increase the Kubernetes client rate limits.
cilium install \
--set kubeProxyReplacement=true \
--set l2announcements.enabled=true \
--set externalIPs.enabled=true \
--set k8sClientRateLimit.qps=50 \
--set k8sClientRateLimit.burst=100
See this blog post for more details.
Validate install
cilium status
For Cilium to act as a load balancer and start assigning IPs to LoadBalancer Service resources, we need to create a CiliumLoadBalancerIPPool with a valid pool.
Edit the cidr range to fit your network before applying it
kubectl apply -f infra/cilium/ip-pool.yaml
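The pool manifest looks roughly like the following sketch; the name and CIDR are placeholders for your own network, and note that recent Cilium releases use spec.blocks where older ones used spec.cidrs:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: default-pool        # placeholder name
spec:
  blocks:
    - cidr: 192.168.1.240/28   # placeholder range; edit for your network
```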
Next, create a CiliumL2AnnouncementPolicy to announce the assigned IPs. Leaving the interfaces field empty announces on all interfaces.
kubectl apply -f infra/cilium/announce.yaml
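A minimal sketch of such a policy (the name is a placeholder; omitting spec.interfaces announces on all interfaces, as noted above):

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: default-l2-announcement-policy   # placeholder name
spec:
  externalIPs: true
  loadBalancerIPs: true
```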
Install sealed-secrets, used to create encrypted secrets
kubectl apply -k infra/sealed-secrets
Be sure to store the generated sealed-secrets key in a safe place! It can be found with
kubectl -n kube-system get secrets
NB!: There will be errors if you use my sealed secrets, as you (hopefully) don't have the decryption key
Install the Gateway API CRDs
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/experimental-install.yaml
Install cert-manager
kubectl kustomize --enable-helm infra/cert-manager | kubectl apply -f -
Change the io.cilium/lb-ipam-ips annotation in infra/traefik/values.yaml to a valid IP address for your network.
Install Traefik
kubectl kustomize --enable-helm infra/traefik | kubectl apply -f -
In your router, forward external port 80 to Traefik's port 8000 for http and 443 to 4443 for https.
The Traefik IP can be found with kubectl get svc (it should be the same as the one you set in the annotation).
Deploy a test application by editing the manifests in apps/test/whoami and applying them
kubectl apply -k apps/test/whoami
An unsecured test application, whoami, should be available at https://test.${DOMAIN}. If you configured apps/test/whoami/traefik-forward-auth correctly, a secured version should be available at https://whoami.${DOMAIN}.
ArgoCD is used to bootstrap the rest of the cluster. The cluster uses a combination of Helm and Kustomize to configure infrastructure and applications. For more details, read this blog post.
kubectl kustomize --enable-helm infra/argocd | kubectl apply -f -
Get the ArgoCD initial admin password by running
kubectl -n argocd get secrets argocd-initial-admin-secret -o json | jq -r .data.password | base64 -d
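The pipeline pulls the password field out of the secret with jq and then decodes it, since Secret data is stored base64 encoded. The final stage in isolation, with a made-up sample value rather than a real secret:

```shell
# "cGFzc3dvcmQ=" is base64 for the sample string "password"
printf 'cGFzc3dvcmQ=' | base64 -d
# → password
```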
An OIDC (traefik-forward-auth) protected Kubernetes Dashboard can be deployed using
kubectl apply -k infra/dashboard
Create a token
kubectl -n kubernetes-dashboard create token admin-user
NB!: This will not work before you've changed all the domain names and IP addresses.
Once you've tested everything get the ball rolling with
kubectl apply -k sets
Teardown
kubectl drain <node> --delete-emptydir-data --force --ignore-daemonsets
sudo kubeadm reset
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X