On some Kubernetes clusters upgrading from 1.30 -> 1.31 we see the following error on the cilium, coredns, kube-proxy, ... pods on the Control Planes:
Warning Failed 15s (x3 over 12s) kubelet Error: services have not yet been read at least once, cannot construct envvars
The pods will not start on the upgraded Control Plane, so a small workaround is needed to ensure a seamless upgrade.
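To confirm that you are hitting this issue, you can look at the warning events of one of the stuck pods on the upgraded node; a minimal check (the pod name below is just an example):
# List warning events in kube-system on the upgraded Control Plane
kubectl -n kube-system get events --field-selector type=Warning
# Or inspect a stuck pod directly (pod name is an example)
kubectl -n kube-system describe pod kube-proxy-abcde | grep "cannot construct envvars"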
Upgrade the Cluster without errors
First of all, if you face this problem, no worries: you can easily roll back kubeadm/kubelet and patch them afterwards. You can also exchange kubectl, but it's not needed.
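Before touching anything, it helps to confirm which versions are currently installed on the node, for example with:
# Show the currently installed versions
kubeadm version -o short
kubelet --version
kubectl version --client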
Rollback of kubeadm (Debian-based OS)
Re-download the old kubeadm and kubelet binaries and restart the kubelet with systemctl:
# Rollback to 1.30 (replace X with your patch release)
systemctl stop kubelet   # stop the kubelet so the running binary can be overwritten
wget -O /usr/local/bin/kubeadm https://dl.k8s.io/release/v1.30.X/bin/linux/amd64/kubeadm
wget -O /usr/local/sbin/kubelet https://dl.k8s.io/release/v1.30.X/bin/linux/amd64/kubelet
chmod +x /usr/local/bin/kubeadm /usr/local/sbin/kubelet
systemctl restart kubelet
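After the restart you can check that the node is back on the old kubelet and that the stuck pods recover, assuming you have kubectl access from the node:
# The node should report the 1.30 kubelet again and the pods should start
kubelet --version
kubectl -n kube-system get pods -o wide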
Upgrade the Cluster before updating the binaries
Download the new kubeadm to your home directory and rename it to avoid confusion:
wget https://dl.k8s.io/release/v1.31.X/bin/linux/amd64/kubeadm
mv kubeadm kubeadm-v1.31.X
chmod +x kubeadm-v1.31.X
# Apply the Control Plane upgrade with the renamed binary (-y skips the confirmation prompt)
./kubeadm-v1.31.X upgrade apply -y v1.31.X
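After the apply has finished, the Control Plane components run the new version while the kubelet on the node still reports 1.30 until the binaries are exchanged; a quick check (the node name cp1 is just an example):
# NODE VERSION still shows v1.30.X because the kubelet binary has not been exchanged yet
kubectl get nodes
# The static Control Plane pods should already use the v1.31.X images (node name is an example)
kubectl -n kube-system get pod kube-apiserver-cp1 -o jsonpath='{.spec.containers[0].image}'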
If your cluster has more than one Control Plane, upgrade all Control Plane nodes first before exchanging the binaries and rebooting the nodes.
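On the additional Control Planes the upgrade is applied with kubeadm upgrade node instead of upgrade apply. Once every Control Plane is done, the binaries can be exchanged the same way as in the rollback above; a sketch, assuming the same /usr/local paths:
# On each additional Control Plane, using the same renamed binary
./kubeadm-v1.31.X upgrade node
# Afterwards, exchange the binaries on every Control Plane and reboot
systemctl stop kubelet
wget -O /usr/local/bin/kubeadm https://dl.k8s.io/release/v1.31.X/bin/linux/amd64/kubeadm
wget -O /usr/local/sbin/kubelet https://dl.k8s.io/release/v1.31.X/bin/linux/amd64/kubelet
chmod +x /usr/local/bin/kubeadm /usr/local/sbin/kubelet
reboot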
And that's it!
If you still have problems upgrading your Kubernetes clusters, let us know. We can help you!