amazon-ec2, kubernetes, high-availability, kubeadm, etcd

Unable to set up external etcd cluster in Kubernetes v1.15 using kubeadm


I'm trying to set up a Kubernetes cluster with multiple masters and an external etcd cluster, following the steps described on kubernetes.io. I was able to create the static pod manifest files on all 3 hosts in the /etc/kubernetes/manifests folder after executing Step 7.
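For reference, the step that generates those manifests boils down to a kubeadm phase command along these lines (the kubeadmcfg.yaml files are the per-host configs from the guide; paths and variable names follow that guide):

    kubeadm init phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml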

After that, when I executed 'sudo kubeadm init', the initialization failed because of kubelet errors. I also checked the journalctl logs; the error points to a misconfiguration of the cgroup driver, similar to this SO link.
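In case it helps, the two cgroup drivers can be compared with something like this (assuming the default Ubuntu/Docker packages):

    # Cgroup driver reported by Docker (systemd vs cgroupfs)
    sudo docker info | grep -i "cgroup driver"

    # Most recent kubelet errors, newest first
    sudo journalctl -u kubelet -r --no-pager | head -n 30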

I tried what is suggested in the SO link above, but was not able to resolve the issue.

Please help me resolve this issue.

For the installation of Docker, kubeadm, kubectl, and kubelet, I followed the kubernetes.io site only.

Environment:

Cloud: AWS

EC2 instance OS: Ubuntu 18.04

Docker version: 18.09.7

Thanks


Solution

  • After searching a few links and doing a few trials, I was able to resolve this issue.

    As given in the Container runtime setup, the Docker cgroup driver is systemd, but the default cgroup driver of the kubelet is cgroupfs. Since the kubelet cannot detect the cgroup driver automatically (as noted in the kubernetes.io docs), we have to pass the cgroup driver explicitly when running the kubelet, like below:

    # Override the kubelet service so it uses the systemd cgroup driver and
    # serves static Pod manifests from /etc/kubernetes/manifests
    cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/kubelet --cgroup-driver=systemd --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests
    Restart=always
    EOF

    systemctl daemon-reload
    systemctl restart kubelet
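    For completeness: the Container runtime setup referenced above is what puts Docker on the systemd cgroup driver in the first place, via /etc/docker/daemon.json. A minimal sketch of that part (assuming Docker reads its config from the default location) is:

    cat << EOF > /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF

    systemctl restart docker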

    Moreover, there is no need to run sudo kubeadm init on the etcd hosts: since we pass --pod-manifest-path to the kubelet, it runs etcd as a static Pod on its own.
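    To confirm that the kubelet actually started etcd as a static Pod, something like the following can be used (the certificate paths and the HOST0 variable assume the layout from the kubeadm external-etcd guide):

    # The kubelet should have started an etcd container from the static manifest
    sudo docker ps | grep etcd

    # Optional health check, if etcdctl (v3 API) is available on the host
    ETCDCTL_API=3 etcdctl \
      --cacert /etc/kubernetes/pki/etcd/ca.crt \
      --cert /etc/kubernetes/pki/etcd/peer.crt \
      --key /etc/kubernetes/pki/etcd/peer.key \
      --endpoints https://${HOST0}:2379 endpoint health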

    For debugging, the kubelet logs can be checked with the command below (newest entries first):

    journalctl -u kubelet -r

    Hope it helps. Thanks.