
Installing Istio on WSL2 fails with "FailedMount" for pods


I have set up a local Kubernetes cluster using kind on WSL2 (Ubuntu distro). The cluster was created successfully. I then tried to install Istio using Helm, following the documentation.

Everything looks fine until I check the status of the Istio pods with kubectl get pods -n istio-system, which returns

istio-egressgateway-645df98b64-tml4k    0/1     ContainerCreating   0          39m
istio-ingressgateway-6c7f679666-lxj8r   0/1     ContainerCreating   0          39m
istiod-657558ff59-fhpgl                 0/1     ContainerCreating   0          39m

The pods remain stuck in ContainerCreating status, so I inspected one with kubectl describe pod -n istio-system istio-egressgateway-645df98b64-tml4k and see the following warning events:

Events:
  Type     Reason       Age                   From               Message
  ----     ------       ----                  ----               -------
  Normal   Scheduled    27m                   default-scheduler  Successfully assigned istio-system/istio-egressgateway-645df98b64-tml4k to msg-local-worker2
  Warning  FailedMount  25m                   kubelet            Unable to attach or mount volumes: unmounted volumes=[istio-token istiod-ca-cert], unattached volumes=[istio-token podinfo istio-envoy istio-data egressgateway-ca-certs config-volume egressgateway-certs istio-egressgateway-service-account-token-2k6nv istiod-ca-cert]: timed out waiting for the condition
  Warning  FailedMount  22m                   kubelet            Unable to attach or mount volumes: unmounted volumes=[istio-token istiod-ca-cert], unattached volumes=[istio-token istio-envoy istiod-ca-cert config-volume istio-data podinfo egressgateway-certs egressgateway-ca-certs istio-egressgateway-service-account-token-2k6nv]: timed out waiting for the condition
  Warning  FailedMount  20m (x11 over 27m)    kubelet            MountVolume.SetUp failed for volume "istio-token" : failed to fetch token: the API server does not have TokenRequest endpoints enabled
  Warning  FailedMount  20m                   kubelet            Unable to attach or mount volumes: unmounted volumes=[istiod-ca-cert istio-token], unattached volumes=[istio-envoy podinfo istiod-ca-cert istio-egressgateway-service-account-token-2k6nv istio-token config-volume egressgateway-certs istio-data egressgateway-ca-certs]: timed out waiting for the condition
  Warning  FailedMount  11m                   kubelet            Unable to attach or mount volumes: unmounted volumes=[istiod-ca-cert istio-token], unattached volumes=[podinfo istio-egressgateway-service-account-token-2k6nv istio-envoy config-volume istio-data istiod-ca-cert egressgateway-certs egressgateway-ca-certs istio-token]: timed out waiting for the condition
  Warning  FailedMount  7m2s (x3 over 9m16s)  kubelet            (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[istio-token istiod-ca-cert], unattached volumes=[egressgateway-certs istio-envoy istio-token egressgateway-ca-certs istio-egressgateway-service-account-token-2k6nv istio-data istiod-ca-cert config-volume podinfo]: timed out waiting for the condition
  Warning  FailedMount  38s (x18 over 27m)    kubelet            MountVolume.SetUp failed for volume "istiod-ca-cert" : configmap "istio-ca-root-cert" not found
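
The first warning ("the API server does not have TokenRequest endpoints enabled") can be checked directly against the API server's startup flags. A sketch, assuming Docker as the kind node provider and the default control-plane container name for an unnamed cluster (`kind-control-plane`; use `docker ps` to find yours):

```shell
# Print the service-account-related flags of the kube-apiserver running
# inside the kind control-plane container. On clusters hitting this
# problem, --service-account-issuer / --service-account-signing-key-file
# are absent, so projected service account tokens cannot be issued.
docker exec kind-control-plane sh -c \
  'ps -o args= -C kube-apiserver | tr " " "\n" | grep -- "--service-account" \
   || echo "no service-account flags set"'
```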

Solution

  • I managed to figure out the problem thanks to this GitHub issue: I needed to enable service account token volume projection.

    The exact solution is found here. I changed my cluster configuration (kind-config.yaml) to

    kind: Cluster
    apiVersion: kind.sigs.k8s.io/v1alpha4
    kubeadmConfigPatches:
      - |
        apiVersion: kubeadm.k8s.io/v1beta2
        kind: ClusterConfiguration
        metadata:
          name: config
        apiServer:
          extraArgs:
            "service-account-issuer": "kubernetes.default.svc"
            "service-account-signing-key-file": "/etc/kubernetes/pki/sa.key"
    nodes:
      - role: control-plane
      - role: worker
      - role: worker
    

    Then I created the cluster with kind create cluster --name my-cluster --config ./kind-config.yaml, installed Istio as before, and the pods are now running.
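
To confirm the fix took effect, the API server flags and the token projection can be re-checked. A sketch, assuming the control-plane container is named my-cluster-control-plane (kind derives it from the --name flag above) and a Kubernetes/kubectl new enough (v1.24+) for kubectl create token:

```shell
# The flags from the kubeadm patch should now appear in the
# kube-apiserver arguments inside the control-plane node:
docker exec my-cluster-control-plane sh -c \
  'ps -o args= -C kube-apiserver | tr " " "\n" | grep -- "--service-account"'

# With TokenRequest enabled, requesting a projected token succeeds
# instead of failing with the "TokenRequest endpoints" error:
kubectl create token default --duration=10m

# The Istio pods should move past ContainerCreating once the
# istio-token and istiod-ca-cert volumes mount:
kubectl get pods -n istio-system
```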