Tags: kubernetes, rackspace-cloud

Kubernetes - unable to start kubelet with cloud provider openstack (error fetching current node name from cloud provider)


I'm trying to set up a Kubernetes cluster on Rackspace, and I understand that to get persistent volume support I need to use Cinder (the OpenStack block storage service, supported by Rackspace).

Following the Cloud Provider Integrations setup guide, I have set up /etc/kubernetes/cloud-config as follows:

[Global]
username=cinder
password=********
auth-url=https://identity.api.rackspacecloud.com/v2.0
tenant-name=1234567
region=LON
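One way to sanity-check these credentials independently of the kubelet is to authenticate against the Keystone v2.0 identity endpoint directly. This is a hedged sketch: the request body uses the standard Keystone v2.0 `passwordCredentials` form with the username and tenant from the cloud-config above, and the password stays masked as in the file.

```shell
# Hedged sketch: verify the cloud-config credentials against Keystone v2.0
# before involving the kubelet. Substitute the real password for "********".
AUTH_URL=https://identity.api.rackspacecloud.com/v2.0
BODY='{"auth":{"passwordCredentials":{"username":"cinder","password":"********"},"tenantName":"1234567"}}'
curl -sS -X POST "$AUTH_URL/tokens" \
     -H 'Content-Type: application/json' \
     -d "$BODY"
```

A 200 response with a token confirms the identity settings are at least usable; an auth failure here would rule out the cloud-config contents before digging into the kubelet itself.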

I've added the following to the kubelet startup command in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:

--cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud-config
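For reference, in the kubeadm drop-in these flags normally land on an `Environment` line rather than directly on the `ExecStart`; a hedged sketch, noting that the exact variable name (`KUBELET_EXTRA_ARGS` here) differs between kubeadm versions:

```ini
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt)
# Hedged: the variable name KUBELET_EXTRA_ARGS varies across kubeadm versions.
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud-config"
```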

And I'm then running kubeadm init --config=kubeadm.conf where kubeadm.conf is:

kind: MasterConfiguration
apiVersion: kubeadm.k8s.io/v1alpha1
cloudProvider: openstack
pod-network-cidr: 10.244.0.0/16
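As a side note, the v1alpha1 `MasterConfiguration` schema normally expresses the pod CIDR under `networking.podSubnet` rather than as a top-level `pod-network-cidr` key (which is the name of the kubeadm command-line flag, not a config field); a hedged sketch of the equivalent file:

```yaml
# Hedged sketch: pod CIDR expressed in the v1alpha1 schema's networking block.
kind: MasterConfiguration
apiVersion: kubeadm.k8s.io/v1alpha1
cloudProvider: openstack
networking:
  podSubnet: 10.244.0.0/16
```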

kubeadm init then fails waiting for the kubelet to start. I tracked the failure down to the following kubelet log output:

07:24:51.407692   21412 feature_gate.go:156] feature gates: map[]
07:24:51.407790   21412 controller.go:114] kubelet config controller: starting controller
07:24:51.407849   21412 controller.go:118] kubelet config controller: validating combination of defaults and flags
07:24:51.413973   21412 mount_linux.go:168] Detected OS with systemd
07:24:51.414065   21412 client.go:75] Connecting to docker on unix:///var/run/docker.sock
07:24:51.414137   21412 client.go:95] Start docker client with request timeout=2m0s
07:24:51.415471   21412 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
07:24:51.437924   21412 iptables.go:564] couldn't get iptables-restore version; assuming it doesn't support --wait
07:24:51.440245   21412 feature_gate.go:156] feature gates: map[]
07:24:52.066765   21412 server.go:301] Successfully initialized cloud provider: "openstack" from the config file: "/etc/kubernetes/cloud-config"
07:24:52.066984   21412 openstack_instances.go:39] openstack.Instances() called
07:24:52.067048   21412 openstack_instances.go:46] Claiming to support Instances
07:24:52.070870   21412 metadata.go:84] Unable to run blkid: exit status 2
07:24:52.070993   21412 metadata.go:124] Attempting to fetch metadata from http://169.254.169.254/openstack/2012-08-10/meta_data.json
07:25:22.071444   21412 metadata.go:127] Cannot read http://169.254.169.254/openstack/2012-08-10/meta_data.json: Get http://169.254.169.254/openstack/2012-08-10/meta_data.json: dial tcp 169.254.169.254:80: i/o timeout
error: failed to run Kubelet: error fetching current node name from cloud provider: Get http://169.254.169.254/openstack/2012-08-10/meta_data.json: dial tcp 169.254.169.254:80: i/o timeout

How can I debug this further? I don't really understand what the IP address 169.254.169.254 is or how this request is supposed to be answered.
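The failing request can be reproduced outside the kubelet: 169.254.169.254 is a link-local address that only answers if the platform actually runs a metadata service on the local network segment. A short-timeout probe separates a kubelet problem from a network one:

```shell
# Reproduce the kubelet's metadata request with a 5-second timeout.
# On a platform without a metadata service this times out, matching the log.
curl -sS -m 5 http://169.254.169.254/openstack/2012-08-10/meta_data.json \
  || echo "metadata service unreachable"
```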

Right now I can't tell if I have a Kubernetes issue or a Rackspace issue.


Solution

  • Rackspace Cloud does not run the OpenStack metadata service, which is what the kubelet is trying to reach: 169.254.169.254 is a link-local address that is only answered when the platform provides a metadata service on the local network segment. Instead, Rackspace uses cloud-init with a config drive, a read-only block device (virtual CD-ROM) attached to the VM at boot.

    The config drive contains the cloud-init data. Example: https://developer.rackspace.com/blog/using-cloud-init-with-rackspace-cloud/

    Anecdotally, most Rackspace customers running Kubernetes seem to use CoreOS VMs, which support cloud-config and the OpenStack config drive. When Kubernetes runs on a machine with the config drive mounted, it attempts to obtain the metadata from there instead of over HTTP.
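To confirm a config drive is present and inspect the metadata it carries, the drive can be mounted by label. This is a hedged sketch: the `config-2` label and the `openstack/latest/meta_data.json` path are the usual OpenStack config-drive conventions, not something specific to this setup.

```shell
# Hedged sketch: OpenStack config drives are conventionally labelled
# "config-2", with metadata at openstack/latest/meta_data.json.
dev=$(blkid -L config-2 2>/dev/null || true)
if [ -n "$dev" ]; then
  sudo mount -o ro "$dev" /mnt
  cat /mnt/openstack/latest/meta_data.json
else
  echo "no config-2 device found on this host"
fi
```

If the metadata JSON is readable there, the VM has the config drive and the question becomes whether the chosen OS image (e.g. CoreOS) knows to consume it.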