Tags: kubernetes, persistent-volumes, kubernetes-statefulset, persistent-volume-claims

How to mount an existing file-based DB for a StatefulSet to each kubernetes pod independently?


Right from the start: I'm a complete beginner in Kubernetes.

I think I know how to create a volume for my StatefulSet for persistence. For that, I have

          volumeMounts:
            - name: db
              mountPath: /var/lib/mydb

and

    - metadata:
        name: db
        labels:
          app: myapp
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 50Gi

I assume this would give each of my pods (?) its own 50Gi of space to persist its data.
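For context, here is a minimal sketch of how the two fragments above fit together in a StatefulSet. The `myapp` name, image, and replica count are assumptions; the point is the volume wiring. With `volumeClaimTemplates`, each pod does get its own PVC, named `<template>-<statefulset>-<ordinal>` (here `db-myapp-0`, `db-myapp-1`, ...):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp          # headless Service name (assumed)
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest  # placeholder image
          volumeMounts:
            - name: db
              mountPath: /var/lib/mydb
  # One PVC is created per pod from this template.
  volumeClaimTemplates:
    - metadata:
        name: db
        labels:
          app: myapp
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 50Gi
```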

This helps me test my app starting from scratch with an empty DB. As I understand this persistence, it would also let pods reuse their DB after being shut down and restarted.

However, I would also like to test with a DB that has history, potentially already several GB in size. That means every pod would mount an existing DB and go from there. So every pod needs access to the same initial data, but each must mount its copy independently and exclusively. I can't just mount one existing volume and share it between the pods.

The DB should be an on-disk key-value store like leveldb.

This would allow testing the app from different existing states of the DB.

Is this possible?


Solution

  • Before you start playing with StatefulSets, I advise you to read the docs - statefulset-official-doc. Take a look also at dynamics-provisioning. Similarly to what @SYN said:

    1. Create the PVCs yourself - specifying the storage class and size - before creating the StatefulSet.

    2. Create a Job (or Pod) that mounts each volume and seeds it with your existing data, extracting it into the directory your DB uses to store its data. When the volumes are ready, shut down the init Pod.

    3. Create your StatefulSet. Instead of letting it create new PVCs, have it reuse the ones you've prepared.
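The steps above can be sketched as manifests. This is a minimal sketch for ordinal 0 only, assuming the StatefulSet is named `myapp` and its claim template is named `db`, so the pre-created PVC must be named `db-myapp-0` for the StatefulSet to adopt it instead of provisioning a new one. The `standard` storage class and the snapshot URL are placeholders:

```yaml
# Step 1: pre-create one PVC per replica, named to match what the
# StatefulSet would generate: <template>-<statefulset>-<ordinal>.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-myapp-0
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: standard   # assumed storage class
  resources:
    requests:
      storage: 50Gi
---
# Step 2: a one-off Job that mounts the PVC and seeds it with an
# existing DB snapshot before the StatefulSet is created.
apiVersion: batch/v1
kind: Job
metadata:
  name: seed-db-0
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: seed
          image: busybox:1.36
          # Placeholder URL; extract the snapshot into the DB's data dir.
          command: ["sh", "-c",
            "wget -O /tmp/db.tar.gz http://example.com/db-snapshot.tar.gz && tar -xzf /tmp/db.tar.gz -C /data"]
          volumeMounts:
            - name: db
              mountPath: /data
      volumes:
        - name: db
          persistentVolumeClaim:
            claimName: db-myapp-0
```

Repeat both objects per ordinal (`db-myapp-1`, `seed-db-1`, ...). When the StatefulSet from step 3 starts, pod `myapp-0` binds the already-seeded `db-myapp-0` claim, so each pod gets its own independent copy of the pre-populated DB.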