Tags: mongodb, kubernetes, microk8s

How to deploy a MongoDB replica set on a microk8s cluster


I'm trying to deploy a MongoDB ReplicaSet on a microk8s cluster running in a VM with Ubuntu 20.04. After the deployment, the mongo pods do not run but crash. I've enabled the microk8s storage, dns and rbac add-ons, but the problem persists. Can anyone help me find the reason behind it? Below is my manifest file:

apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      role: mongo
      environment: test
  serviceName: mongodb-service
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
        replicaset: MainRepSet
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: replicaset
                  operator: In
                  values:
                  - MainRepSet
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      volumes:
        - name: secrets-volume
          secret:
            secretName: shared-bootstrap-data
            defaultMode: 256
      containers:
        - name: mongod-container
          #image: pkdone/mongo-ent:3.4
          image: mongo
          command:
            - "numactl"
            - "--interleave=all"
            - "mongod"
            - "--wiredTigerCacheSizeGB"
            - "0.1"
            - "--bind_ip"
            - "0.0.0.0"
            - "--replSet"
            - "MainRepSet"
            - "--auth"
            - "--clusterAuthMode"
            - "keyFile"
            - "--keyFile"
            - "/etc/secrets-volume/internal-auth-mongodb-keyfile"
            - "--setParameter"
            - "authenticationMechanisms=SCRAM-SHA-1"
          resources:
            requests:
              cpu: 0.2
              memory: 200Mi
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: secrets-volume
              readOnly: true
              mountPath: /etc/secrets-volume
            - name: mongodb-persistent-storage-claim
              mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongodb-persistent-storage-claim     
    spec:
      storageClassName: microk8s-hostpath
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi

Also, here are the PV, PVC and StorageClass outputs:

yyy@xxx:$ kubectl get pvc
NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
mongodb-persistent-storage-claim-mongo-0   Bound    pvc-1b3de8f7-e416-4a1a-9c44-44a0422e0413   5Gi        RWO            microk8s-hostpath   13m
yyy@xxx:$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                              STORAGECLASS        REASON   AGE
pvc-5b75ddf6-abbd-4ff3-a135-0312df1e6703   20Gi       RWX            Delete           Bound    container-registry/registry-claim                  microk8s-hostpath            38m
pvc-1b3de8f7-e416-4a1a-9c44-44a0422e0413   5Gi        RWO            Delete           Bound    default/mongodb-persistent-storage-claim-mongo-0   microk8s-hostpath            13m
yyy@xxx:$ kubectl get sc
NAME                          PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
microk8s-hostpath (default)   microk8s.io/hostpath   Delete          Immediate           false                  108m

yyy@xxx:$ kubectl get pods -n kube-system 
NAME                                         READY   STATUS    RESTARTS   AGE
metrics-server-8bbfb4bdb-xvwcw               1/1     Running   1          148m
dashboard-metrics-scraper-78d7698477-4qdhj   1/1     Running   0          146m
kubernetes-dashboard-85fd7f45cb-6t7xr        1/1     Running   0          146m
hostpath-provisioner-5c65fbdb4f-ff7cl        1/1     Running   0          113m
coredns-7f9c69c78c-dr5kt                     1/1     Running   0          65m
calico-kube-controllers-f7868dd95-wtf8j      1/1     Running   0          150m
calico-node-knzc2                            1/1     Running   0          150m

I have installed the cluster using this command:

sudo snap install microk8s --classic --channel=1.21

Output of the MongoDB deployment:

yyy@xxx:$ kubectl get all
NAME          READY   STATUS             RESTARTS   AGE
pod/mongo-0   0/1     CrashLoopBackOff   5          4m18s

NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
service/kubernetes        ClusterIP   10.152.183.1   <none>        443/TCP     109m
service/mongodb-service   ClusterIP   None           <none>        27017/TCP   4m19s

NAME                     READY   AGE
statefulset.apps/mongo   0/3     4m19s

Pod logs:

yyy@xxx:$ kubectl logs pod/mongo-0
{"t":{"$date":"2021-09-07T16:21:13.191Z"},"s":"F",  "c":"CONTROL",  "id":20574,   "ctx":"-","msg":"Error during global initialization","attr":{"error":{"code":2,"codeName":"BadValue","errmsg":"storage.wiredTiger.engineConfig.cacheSizeGB must be greater than or equal to 0.25"}}}
yyy@xxx:$ kubectl describe pod/mongo-0
Name:         mongo-0
Namespace:    default
Priority:     0
Node:         citest1/192.168.9.105
Start Time:   Tue, 07 Sep 2021 16:17:38 +0000
Labels:       controller-revision-hash=mongo-66bd776569
              environment=test
              replicaset=MainRepSet
              role=mongo
              statefulset.kubernetes.io/pod-name=mongo-0
Annotations:  cni.projectcalico.org/podIP: 10.1.150.136/32
              cni.projectcalico.org/podIPs: 10.1.150.136/32
Status:       Running
IP:           10.1.150.136
IPs:
  IP:           10.1.150.136
Controlled By:  StatefulSet/mongo
Containers:
  mongod-container:
    Container ID:  containerd://458e21fac3e87dcf304a9701da0eb827b2646efe94cabce7f283cd49f740c15d
    Image:         mongo
    Image ID:      docker.io/library/mongo@sha256:58ea1bc09f269a9b85b7e1fae83b7505952aaa521afaaca4131f558955743842
    Port:          27017/TCP
    Host Port:     0/TCP
    Command:
      numactl
      --interleave=all
      mongod
      --wiredTigerCacheSizeGB
      0.1
      --bind_ip
      0.0.0.0
      --replSet
      MainRepSet
      --auth
      --clusterAuthMode
      keyFile
      --keyFile
      /etc/secrets-volume/internal-auth-mongodb-keyfile
      --setParameter
      authenticationMechanisms=SCRAM-SHA-1
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 07 Sep 2021 16:24:03 +0000
      Finished:     Tue, 07 Sep 2021 16:24:03 +0000
    Ready:          False
    Restart Count:  6
    Requests:
      cpu:        200m
      memory:     200Mi
    Environment:  <none>
    Mounts:
      /data/db from mongodb-persistent-storage-claim (rw)
      /etc/secrets-volume from secrets-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b7nf8 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  mongodb-persistent-storage-claim:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongodb-persistent-storage-claim-mongo-0
    ReadOnly:   false
  secrets-volume:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  shared-bootstrap-data
    Optional:    false
  kube-api-access-b7nf8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                     From               Message
  ----     ------            ----                    ----               -------
  Warning  FailedScheduling  7m53s                   default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  7m52s                   default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         7m50s                   default-scheduler  Successfully assigned default/mongo-0 to citest1
  Normal   Pulled            7m25s                   kubelet            Successfully pulled image "mongo" in 25.215669443s
  Normal   Pulled            7m21s                   kubelet            Successfully pulled image "mongo" in 1.192994197s
  Normal   Pulled            7m6s                    kubelet            Successfully pulled image "mongo" in 1.203239709s
  Normal   Pulled            6m38s                   kubelet            Successfully pulled image "mongo" in 1.213451175s
  Normal   Created           6m38s (x4 over 7m23s)   kubelet            Created container mongod-container
  Normal   Started           6m37s (x4 over 7m23s)   kubelet            Started container mongod-container
  Normal   Pulling           5m47s (x5 over 7m50s)   kubelet            Pulling image "mongo"
  Warning  BackOff           2m49s (x23 over 7m20s)  kubelet            Back-off restarting failed container


Solution

  • The logs you provided show that the parameter wiredTigerCacheSizeGB is set incorrectly. In your case it is 0.1, and according to the message

    "code":2,"codeName":"BadValue","errmsg":"storage.wiredTiger.engineConfig.cacheSizeGB must be greater than or equal to 0.25"
    

    it must be at least 0.25.

    In the section containers:

    containers:
            - name: mongod-container
              #image: pkdone/mongo-ent:3.4
              image: mongo
              command:
                - "numactl"
                - "--interleave=all"
                - "mongod"
                - "--wiredTigerCacheSizeGB"
                - "0.1"
                - "--bind_ip"
                - "0.0.0.0"
                - "--replSet"
                - "MainRepSet"
                - "--auth"
                - "--clusterAuthMode"
                - "keyFile"
                - "--keyFile"
                - "/etc/secrets-volume/internal-auth-mongodb-keyfile"
                - "--setParameter"
                - "authenticationMechanisms=SCRAM-SHA-1"
    

    you should change

    -  "--wiredTigerCacheSizeGB"  
    -  "0.1"
    

    so that the value "0.1" becomes "0.25" or greater.
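
    A minimal sketch of the corrected arguments (only the cache-size value changes; 0.25 is the minimum mongod accepts):

    command:
      - "numactl"
      - "--interleave=all"
      - "mongod"
      - "--wiredTigerCacheSizeGB"
      - "0.25"   # was "0.1"; must be >= 0.25
      # ... remaining arguments unchanged ...
    

    Note that a 0.25 GB WiredTiger cache is larger than your pod's 200Mi memory request, so you may also want to raise the request (e.g. to 512Mi) so the scheduler does not place the pod on a node that cannot actually accommodate it.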


    Additionally, I have seen another error:

    1 pod has unbound immediate PersistentVolumeClaims
    

    It should be related to what I wrote earlier. However, you may find alternative ways to solve it here, here and here.
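
    The FailedScheduling warnings in your kubectl describe output appeared only during the first few seconds, before the hostpath provisioner bound the claim, so they were most likely transient. A quick way to confirm this (assuming you have kubectl access to the cluster):

    # Claims created by the volumeClaimTemplates should all show STATUS "Bound"
    kubectl get pvc

    # If FailedScheduling events are still being emitted, they will show up here
    kubectl get events --field-selector reason=FailedScheduling
    

    If the claims are Bound and no new FailedScheduling events appear, the only remaining fix is the cache-size change above.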