
How to attach an OpenStack volume to a Kubernetes static pod?


Suppose I bootstrap a single master node with kubelet v1.10.3 in an OpenStack cloud and I would like to have a "self-hosted" single-node etcd for k8s necessities, running as a pod.

Before starting the kube-apiserver component you need a working etcd instance, but of course you can't just run kubectl apply -f or drop a manifest into the addon-manager folder, because the cluster is not ready at all. There is a way to start pods by kubelet without a running apiserver: static pods (YAML Pod definitions, usually located in /etc/kubernetes/manifests/). That is how I start the "system" pods: apiserver, scheduler, controller-manager and etcd itself. Previously I just mounted a directory from the node to persist etcd data, but now I would like to use an OpenStack block storage resource. And here is the question: how can I attach, mount and use an OpenStack Cinder volume to persist etcd data from a static pod?
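
For reference, a minimal sketch of such a static pod manifest with the old "mount a directory from the node" approach (the image tag, etcd flags and paths here are just placeholders, not my actual setup):

# /etc/kubernetes/manifests/etcd.yaml -- picked up by kubelet directly, no apiserver needed
apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: etcd
    image: quay.io/coreos/etcd:v3.2.18   # placeholder tag
    command:
    - /usr/local/bin/etcd
    - --data-dir=/var/lib/etcd
    - --listen-client-urls=http://127.0.0.1:2379
    - --advertise-client-urls=http://127.0.0.1:2379
    volumeMounts:
    - name: etcd-data
      mountPath: /var/lib/etcd
  volumes:
  - name: etcd-data
    hostPath:        # previously: just a directory on the node
      path: /var/lib/etcd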

As I learned today, there are at least three ways to attach OpenStack volumes:

  • The CSI OpenStack Cinder driver, which is a fairly new way of managing volumes. It won't fit my requirements, because in static pod manifests I can only declare Pods and not other resources like PVC/PV, while the CSI docs say:

    The csi volume type does not support direct reference from Pod and may only be referenced in a Pod via a PersistentVolumeClaim object.

  • The pre-CSI way to attach volumes: FlexVolume.

    FlexVolume driver binaries must be installed in a pre-defined volume plugin path on each node (and in some cases master).

OK, I added those binaries to my node (using this DaemonSet as a reference) and added a volume to the pod manifest like this:

volumes:
- name: test
  flexVolume:
    driver: "cinder.io/cinder-flex-volume-driver"
    fsType: "ext4"
    options:
      volumeID: "$VOLUME_ID"
      cinderConfig: "/etc/kubernetes/cloud-config"

and got the following error from kubelet logs:

driver-call.go:258] mount command failed, status: Failure, reason: Volume 2c21311b-7329-4cf4-8230-f3ce2f23cf1a is not available

which is weird because I am sure this Cinder volume is already attached to my CoreOS compute instance.

  • And the last way to mount volumes I know of is the in-tree Cinder support, which should work since at least k8s 1.5 and does not have any special requirements besides the --cloud-provider=openstack and --cloud-config kubelet options.
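
For context, the file passed via --cloud-config is an INI-style file, something along these lines (all values here are placeholders):

[Global]
auth-url=https://keystone.example.com:5000/v3
username=kubernetes
password=<secret>
tenant-id=<project-id>
domain-name=Default
region=RegionOne

[BlockStorage]
bs-version=v2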

The YAML manifest part declaring the volume for the static pod looks like this:

volumes:
  - name: html-volume
    cinder:
      # Enter the volume ID below
      volumeID: "$VOLUME_ID"
      fsType: ext4

Unfortunately when I try this method I get the following error from kubelet:

Volume has not been added to the list of VolumesInUse in the node's volume status for volume.

I do not know what it means, but it sounds like the node status could not be updated (of course, there is no etcd or apiserver yet). Sadly, this was the most promising option for me.

Are there any other ways to attach an OpenStack Cinder volume to a static pod relying on kubelet only (when the cluster is not actually ready yet)? Any ideas on what I might be missing, or why I got the above errors?


Solution

  • The message Volume has not been added to the list of VolumesInUse in the node's volume status for volume. says that attach/detach operations for that node are delegated to the controller-manager only. Kubelet waits for the attachment to be made by the controller, but the volume never reaches the appropriate state because the controller isn't up yet. The solution is to set the kubelet flag --enable-controller-attach-detach=false (sketched below) so that kubelet attaches and mounts volumes itself. This flag is set to true by default for the following reasons:

    • If a node is lost, volumes that were attached to it can be detached by the controller and reattached elsewhere.

    • Credentials for attaching and detaching do not need to be made present on every node, improving security.

    In your case, setting this flag to false is reasonable, as this is the only way to achieve what you want.
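
    A minimal sketch of that change, assuming kubelet reads a KubeletConfiguration file passed via --config (available since v1.10); passing --enable-controller-attach-detach=false directly on the kubelet command line has the same effect:

    # kubelet config file, e.g. /etc/kubernetes/kubelet-config.yaml (path is an example)
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    # let kubelet handle attach/mount itself instead of waiting for the controller-manager
    enableControllerAttachDetach: false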