
sh into a Job container after it is stopped


I am backing up my PostgreSQL database using this CronJob:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: postgres-backup
            image: postgres:10.4
            command: ["/bin/sh"]
            args: ["-c", 'echo "$PGPASS" > /root/.pgpass && chmod 600 /root/.pgpass && pg_dump -Fc -h <host> -U <user> <db> > /var/backups/backup.dump']
            env:
            - name: PGPASS
              valueFrom:
                secretKeyRef:
                  name: pgpass
                  key: pgpass
            volumeMounts:
            - mountPath: /var/backups
              name: postgres-backup-storage
          restartPolicy: Never
          volumes:
          - name: postgres-backup-storage
            hostPath:
              path: /var/volumes/postgres-backups
              type: DirectoryOrCreate
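For reference, the `pgpass` Secret referenced by `secretKeyRef` above can be created like this (a sketch; the connection line follows the standard `.pgpass` format `hostname:port:database:username:password`, and the bracketed values are placeholders to replace with your own):

```shell
# Create the Secret holding the .pgpass line; all values are placeholders.
kubectl create secret generic pgpass \
  --from-literal=pgpass='<host>:5432:<db>:<user>:<password>'
```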

The CronJob executes successfully: the backup is made and saved inside the Job's container, but that container stops after the script finishes. Of course I want to access the backup files, but I can't because the container is stopped/terminated.

Is there a way to execute shell commands in a container after it has terminated, so I can access the backup files saved in it?

I know that I could do that on the node, but I don't have permission to access it.


Solution

  • @confused genius gave me the great idea of adding a second container with the same image to access the dump files, so this is the solution that works:

    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: postgres-backup
    spec:
      schedule: "0 2 * * *"
      jobTemplate:
        spec:
          template:
            spec:
              containers:
              - name: postgres-backup
                image: postgres:10.4
                command: ["/bin/sh"]
                args: ["-c", 'echo "$PGPASS" > /root/.pgpass && chmod 600 /root/.pgpass && pg_dump -Fc -h <host> -U <user> <db> > /var/backups/backup.dump']
                env:
                - name: PGPASS
                  valueFrom:
                    secretKeyRef:
                      name: dev-pgpass
                      key: pgpass
                volumeMounts:
                - mountPath: /var/backups
                  name: postgres-backup-storage
              - name: postgres-restore
                image: postgres:10.4
                # Without a command, the postgres image tries to start a
                # database server and exits with an error (no password set).
                # Sleep instead so the container stays up and can be shelled
                # into to reach the dump files on the shared volume.
                command: ["/bin/sh", "-c", "sleep 3600"]
                volumeMounts:
                - mountPath: /var/backups
                  name: postgres-backup-storage
              restartPolicy: Never
              volumes:
              - name: postgres-backup-storage
                hostPath:
                  # Ensure the backup directory is created on the node.
                  path: /var/volumes/postgres-backups
                  type: DirectoryOrCreate
    

    After that one just needs to sh into the "postgres-restore" container to access the dump files.
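    That last step could look like this (a sketch; the pod name is illustrative, since Job pods get generated names, so list the pods first):

    ```shell
    # Find the pod created by the latest Job run of the CronJob.
    kubectl get pods

    # Open a shell in the second container of that pod.
    kubectl exec -it <backup-pod-name> -c postgres-restore -- /bin/sh
    # Inside, the dump sits on the shared volume:
    #   ls -l /var/backups/backup.dump
    #   pg_restore -h <host> -U <user> -d <db> /var/backups/backup.dump

    # Alternatively, copy the dump to the local machine without shelling in:
    kubectl cp <backup-pod-name>:/var/backups/backup.dump ./backup.dump \
      -c postgres-restore
    ```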

    Thanks!