kubernetes vsphere

Mounting a vSphere volume in a Kubernetes pod not working


I'm trying to mount a vSphere volume in a pod but I keep getting:

vsphere_volume_util.go:123] Cloud provider not initialized properly

/etc/kubernetes/environment/vsphere.conf

[Global]
    user="xxxxxx"
    password="xxxxxx"
    server="xxxxxx"
    port="443"
    insecure-flag="1"
    datacenter="Frankfurt"
    datastore="dfrclupoc01-001"
    #working-dir="dockvols"
[Disk]
    scsicontrollertype=pvscsi
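For reference, this file is wired up through the cloud-provider flags. A typical flag pair (shown here as it would appear in the component arguments; the actual unit files are not included in the question) looks like:

```
--cloud-provider=vsphere
--cloud-config=/etc/kubernetes/environment/vsphere.conf
```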

In the "VMware vSphere Web Client" I see:

<mltdfrd01.xx.com>
  <Frankfurt>
    <dfrclupoc01-001>

And under that store I have a folder "dockvols" with a subdirectory "11111111-1111-1111-1111-111111111111".

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[Frankfurt/dfrclupoc01-001] dockvols/11111111-1111-1111-1111-111111111111/MyVolume.vmdk"
    fsType: ext4
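Note that the volumePath above includes the datacenter ("[Frankfurt/dfrclupoc01-001] ..."), while the kubelet log further down reduces it to "[dfrclupoc01-001] ...". The examples in the Kubernetes documentation use the form "[DatastoreName] path/to/disk.vmdk" without a datacenter prefix, so the same spec written that way would be:

```yaml
  vsphereVolume:
    volumePath: "[dfrclupoc01-001] dockvols/11111111-1111-1111-1111-111111111111/MyVolume.vmdk"
    fsType: ext4
```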

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcmilo1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
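In Kubernetes 1.5 this claim binds to any available PersistentVolume with a matching access mode and sufficient capacity. To pin it to pv0001 specifically, the claim's spec can name the volume explicitly (spec.volumeName is a standard PVC field):

```yaml
spec:
  volumeName: pv0001
```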

apiVersion: v1
kind: Pod
metadata:
  name: pod0001
spec:
  containers:
  - image: busybox
    name: pod0001
    volumeMounts:
    - mountPath: /data
      name: pod-volume
  volumes:
  - name: pod-volume
    persistentVolumeClaim:
      claimName: pvcmilo1

I tried different volume paths but I think the problem is earlier in the process.

Log of the node starting at the moment I create the pod:

I0602 05:43:20.781563   84854 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/224c6b51-24fc-11e7-adcd-005056890fa6-default-token-j6vgj" (spec.Name: "default-token-j6vgj") pod "224c6b51-24fc-11e7-adcd-005056890fa6" (UID: "224c6b51-24fc-11e7-adcd-005056890fa6").
I0602 05:43:24.279729   84854 kubelet.go:1781] SyncLoop (ADD, "api"): "pod0001_default(ebe97189-4777-11e7-8979-005056890fa6)"
E0602 05:43:24.378657   84854 vsphere_volume_util.go:123] Cloud provider not initialized properly
I0602 05:43:24.382952   84854 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/vsphere-volume/ebe97189-4777-11e7-8979-005056890fa6-pv0001" (spec.Name: "pv0001") pod "ebe97189-4777-11e7-8979-005056890fa6" (UID: "ebe97189-4777-11e7-8979-005056890fa6")
I0602 05:43:24.382985   84854 reconciler.go:230] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/ebe97189-4777-11e7-8979-005056890fa6-default-token-zsrfn" (spec.Name: "default-token-zsrfn") pod "ebe97189-4777-11e7-8979-005056890fa6" (UID: "ebe97189-4777-11e7-8979-005056890fa6")
I0602 05:43:24.483237   84854 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/secret/ebe97189-4777-11e7-8979-005056890fa6-default-token-zsrfn" (spec.Name: "default-token-zsrfn") to pod "ebe97189-4777-11e7-8979-005056890fa6" (UID: "ebe97189-4777-11e7-8979-005056890fa6").
E0602 05:43:24.483265   84854 vsphere_volume_util.go:123] Cloud provider not initialized properly
I0602 05:43:24.483296   84854 reconciler.go:306] MountVolume operation started for volume "kubernetes.io/vsphere-volume/ebe97189-4777-11e7-8979-005056890fa6-pv0001" (spec.Name: "pv0001") to pod "ebe97189-4777-11e7-8979-005056890fa6" (UID: "ebe97189-4777-11e7-8979-005056890fa6").
E0602 05:43:24.492507   84854 mount_linux.go:119] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[dfrclupoc01-001] dockvols/11111111-1111-1111-1111-111111111111/MyVolume.vmdk /var/lib/kubelet/pods/ebe97189-4777-11e7-8979-005056890fa6/volumes/kubernetes.io~vsphere-volume/pv0001  [bind]
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[dfrclupoc01-001] dockvols/11111111-1111-1111-1111-111111111111/MyVolume.vmdk does not exist

Kubernetes version: 1.5.2

Thanks for any help, Milo


Solution

  • Seems I missed a lot of details:

    • not only kubelet needs the cloud config, but also the api-server and the controller-manager
    • the WWN /dev/disk/by-id entries were missing; I had to enable them in the vSphere environment by setting disk.EnableUUID to TRUE on the VM
    • remove the working-dir entry; it seems to crash kubelet...
    • some other details I forgot

    See https://vanderzee.org/linux/article-170620-144221 for details.
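The disk.EnableUUID change mentioned above is made per VM. With the VM powered off it corresponds to this .vmx entry (also reachable through Edit Settings → VM Options → Advanced → Configuration Parameters in the Web Client):

```
disk.EnableUUID = "TRUE"
```

Without it, the guest does not get stable /dev/disk/by-id entries for its virtual disks, which the vSphere volume plugin relies on to locate the attached VMDK.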