I have an EKS cluster on v1.23 with Fargate nodes; both the control plane and the nodes are on v1.23.x.
$ kubectl version --short
Server Version: v1.23.14-eks-ffeb93d
Fargate nodes are also in v1.23.14
$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE     VERSION
fargate-ip-x-x-x-x.region.compute.internal    Ready    <none>   7m30s   v1.23.14-eks-a1bebd3
fargate-ip-x-x-x-xx.region.compute.internal   Ready    <none>   7m11s   v1.23.14-eks-a1bebd3
When I try to upgrade the cluster to 1.24 from the AWS console, it gives this error:
Kubelet version of Fargate pods must be updated to match cluster version 1.23 before updating cluster version; Please recycle all offending pod replicas
What are the other things I have to check?
From your question you only have two Fargate nodes, so you are likely only running CoreDNS there. Try scaling it down with kubectl scale deployment coredns --namespace kube-system --replicas 0, then run the upgrade. You can scale it back to 2 once the control-plane upgrade has completed. Also double-check that you have selected the correct cluster in the console.
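The full sequence might look like the sketch below. The cluster name (my-cluster) is a placeholder, and it assumes the AWS CLI is configured for the right account and region; if you prefer, the upgrade step itself can still be done from the console instead of the CLI.

```shell
# Scale CoreDNS to zero so the Fargate nodes holding old kubelets drain away
kubectl scale deployment coredns --namespace kube-system --replicas 0

# Start the control-plane upgrade (cluster name is a placeholder)
aws eks update-cluster-version --name my-cluster --kubernetes-version 1.24

# Wait until the control plane reports ACTIVE again
aws eks wait cluster-active --name my-cluster

# Scale CoreDNS back up; Fargate schedules fresh pods on the new kubelet version
kubectl scale deployment coredns --namespace kube-system --replicas 2
```

Note that scaling CoreDNS to zero briefly disrupts in-cluster DNS, so plan for a short maintenance window.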