Tags: docker, kubernetes, docker-volume, kubernetes-pvc, kubernetes-statefulset

Adding a Persistent Volume Claim to an existing directory in a container


In my Docker image I have a directory /opt/myapp/etc that contains some files and directories. I want to create a StatefulSet for my app, and in that StatefulSet I create a persistent volume claim and attach it to /opt/myapp/etc. The StatefulSet YAML is attached below. Can anyone tell me how to attach the volume to the container in this case?

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset
  labels:
    app: myapp
spec:
  serviceName: myapp
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: 10.1.23.5:5000/redis
        name: redis
        ports:
        - containerPort: 6379
          name: redis-port
      - image: 10.1.23.5:5000/myapp:18.1
        name: myapp
        ports:
        - containerPort: 8181
          name: port
        volumeMounts:
        - name: data
          mountPath: /opt/myapp/etc
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 5Gi

Here is the output of kubectl describe pod:

  Events:
  Type     Reason                  Age              From                     Message
  ----     ------                  ----             ----                     -------
  Warning  FailedScheduling        3m (x4 over 3m)  default-scheduler        pod has unbound PersistentVolumeClaims
  Normal   Scheduled               3m               default-scheduler        Successfully assigned controller-statefulset-0 to dev-k8s-2
  Normal   SuccessfulMountVolume   3m               kubelet, dev-k8s-2       MountVolume.SetUp succeeded for volume "default-token-xpskd"
  Normal   SuccessfulAttachVolume  3m               attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-77d2cef8-a674-11e8-9358-fa163e3294c1"
  Normal   SuccessfulMountVolume   3m               kubelet, dev-k8s-2       MountVolume.SetUp succeeded for volume "pvc-77d2cef8-a674-11e8-9358-fa163e3294c1"
  Normal   Pulling                 2m               kubelet, dev-k8s-2       pulling image "10.1.23.5:5000/redis"
  Normal   Pulled                  2m               kubelet, dev-k8s-2       Successfully pulled image "10.1.23.5:5000/redis"
  Normal   Created                 2m               kubelet, dev-k8s-2       Created container
  Normal   Started                 2m               kubelet, dev-k8s-2       Started container
  Normal   Pulled                  1m (x4 over 2m)  kubelet, dev-k8s-2       Container image "10.1.23.5:5000/myapp:18.1" already present on machine
  Normal   Created                 1m (x4 over 2m)  kubelet, dev-k8s-2       Created container
  Normal   Started                 1m (x4 over 2m)  kubelet, dev-k8s-2       Started container
  Warning  BackOff                 1m (x7 over 2m)  kubelet, dev-k8s-2       Back-off restarting failed container

StorageClass definition:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
  namespace: controller
provisioner: kubernetes.io/cinder
reclaimPolicy: Retain
parameters:
  availability: nova

Solution

  • Mounting a volume onto a directory that already exists in the image hides whatever the image put there: the (initially empty) volume shadows the original contents, just like a regular Linux mount. In my case the Docker image shipped some files in the etc directory, and they were no longer visible after the volume was mounted, which is why the container kept crashing. Solved the problem using subPath (a sketch follows below).
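A minimal sketch of the subPath approach, assuming the app only needs persistent storage for a subdirectory; the name data/ is illustrative, not taken from the original manifest. The claim is mounted on a subdirectory of /opt/myapp/etc rather than on the directory itself, so the files baked into the image stay visible. In the myapp container spec:

        volumeMounts:
        - name: data
          mountPath: /opt/myapp/etc/data   # only this subdirectory is backed by the PVC
          subPath: data                     # directory inside the "data" volume to mount there

With this, writes under /opt/myapp/etc/data land on the persistent volume, while the configuration files the image ships in /opt/myapp/etc are left untouched.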