Tags: postgresql, amazon-web-services, kubernetes, google-cloud-platform, kubernetes-helm

K8s: use a volume to keep DB data


I've created a volume and mounted it (for the first time) into my application. I have a Postgres DB, and I want its data to be kept on a volume even if the container is restarted, stopped, killed, etc.

When I deploy my app using Helm, I see the following (output of kubectl describe pvc):

Name:          feature
Namespace:     un
StorageClass:  default
Status:        Bound
Volume:        pvc-7f0-25d2-4-90c1-541f5d262
Labels:        app=un
               chart=un-0.0.1
               heritage=Tiller
               release=elder-fox
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      11Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Events:
  Type       Reason                 Age   From                         Message
  ----       ------                 ----  ----                         -------
  Normal     ProvisioningSucceeded  102m  persistentvolume-controller  Successfully provisioned volume pvc-7f0-25d2-4-90c1-541f5d262 using kubernetes.io/aws-ebs
Mounted By:  fe-postgres-9f8c7-49w26


My question is: how can I verify that the data I enter into the DB is actually written to the volume and kept there? I can see that the volume is bound, but I'm not sure it really holds the data from the Postgres DB.

This is the object I've created:

PersistentVolumeClaim

{{- if (and .Values.persistence.enabled (eq .Values.persistence.existingClaim "")) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ template "un.fullname" . }}
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ template "un.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  accessModes:
    - {{ .Values.persistence.accessMode }}
  resources:
    requests:
      storage: {{ .Values.persistence.size }}
{{- if .Values.persistence.storageClass }}
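{{- /* the special value "-" renders an empty storageClassName, which disables dynamic provisioning */}}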
{{- if (eq "-" .Values.persistence.storageClass) }}
  storageClassName: ''
{{- else }}
  storageClassName: {{ .Values.persistence.storageClass }}
{{- end }}
{{- end }}
{{- end }}

Postgres service

{{- if .Values.config.postgres.internal }}
apiVersion: v1
kind: Service
metadata:
  name: {{ template "un.fullname" . }}-postgres
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ template "un.name" . }}-postgres
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  type: ClusterIP
  ports:
    - name: postgres
      port: 5432
      targetPort: container
  selector:
    app: {{ template "un.name" . }}-postgres
    release: {{ .Release.Name }}
{{- end }}

This is the deployment:

{{- if .Values.config.postgres.internal }}
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: {{ template "un.fullname" . }}-postgres
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ template "un.name" . }}-postgres
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  template:
    metadata:
      labels:
        app: {{ template "un.name" . }}-postgres
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ template "un.fullname" . }}-postgres
          image: {{ .Values.images.postgres.repository }}:{{ .Values.images.postgres.tag }}
          imagePullPolicy: {{ .Values.images.postgres.pullPolicy }}
          ports:
            - name: container
              containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
              subPath: postgres
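              # subPath mounts a subdirectory of the volume, so Postgres gets an empty data dir
              # (the root of a fresh ext4 volume contains lost+found, which initdb rejects)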
          env:
            - name: POSTGRES_DB
              valueFrom:
                configMapKeyRef:
                  name: {{ template "un.fullname" . }}
                  key: postgres_database
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ template "un.fullname" . }}
                  key: postgres_password
          livenessProbe:
            tcpSocket:
              port: container
{{ toYaml .Values.probes.liveness | indent 12 }}
          readinessProbe:
            tcpSocket:
              port: container
{{ toYaml .Values.probes.readiness | indent 12 }}
      volumes:
        - name: data
          {{- if .Values.persistence.enabled }}
          persistentVolumeClaim:
            claimName: {{ .Values.persistence.existingClaim | default (include "un.fullname" . ) }}
          {{- else }}
          emptyDir: {}
          {{- end }}
{{- end }}

This is the values YAML:

images:
  postgres:
    repository: postgres
    tag: 10
    pullPolicy: IfNotPresent

config:
  postgres:
    database: un
    host: ''
    internal: true
    password: postgres
    port: 5432
    url: ''
    username: postgres
…


Solution

I don't see persistence.enabled set in your values file, so I assume you are using an emptyDir as the volume (kubectl get deployment <your deployment name> -o yaml will show you the running state of your deployment). An emptyDir has the same lifecycle as the Pod: if the Pod is removed from a node for any reason, the data in the emptyDir is deleted forever. (Note that a container crash does NOT remove a Pod from a node, so data in an emptyDir volume is safe across container crashes.)
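For example, assuming the release rendered the Postgres Deployment as fe-postgres in namespace un (adjust the names to your release):

kubectl -n un get deployment fe-postgres -o yaml | grep -A 4 'volumes:'

# a PVC-backed volume shows up roughly as:
#   volumes:
#   - name: data
#     persistentVolumeClaim:
#       claimName: feature
# an emptyDir-backed one as:
#   volumes:
#   - emptyDir: {}
#     name: data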

If you want the data to persist even after the Pod is removed, you need to set persistence.enabled to true in your values file and specify a storage class (or have a default storage class defined; run kubectl get storageclasses to find out).
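For reference, a persistence block matching the keys consumed by the PVC template above might look like this in your values file (the values are illustrative; 11Gi and the "default" storage class mirror the describe output at the top):

persistence:
  enabled: true
  existingClaim: ''         # empty, so the chart creates its own PVC
  accessMode: ReadWriteOnce
  size: 11Gi
  storageClass: default     # leave empty to fall back to the cluster default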

You can verify whether the data is persisted by deleting the Postgres Pod (the Deployment will recreate one after the Pod is removed) and checking that your data is still there.
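A minimal check, assuming the Pod carries the label app=un-postgres and the database/user names from the values above (adjust to your release):

# grab the current Postgres pod
POD=$(kubectl -n un get pods -l app=un-postgres -o jsonpath='{.items[0].metadata.name}')

# write a marker row
kubectl -n un exec "$POD" -- psql -U postgres -d un \
  -c 'CREATE TABLE persistence_test (id int); INSERT INTO persistence_test VALUES (1);'

# delete the pod; the Deployment recreates it
kubectl -n un delete pod "$POD"

# once the replacement pod is Running, the row should still be there
POD=$(kubectl -n un get pods -l app=un-postgres -o jsonpath='{.items[0].metadata.name}')
kubectl -n un exec "$POD" -- psql -U postgres -d un -c 'SELECT * FROM persistence_test;'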