Tags: kubernetes, scheduled-tasks, google-kubernetes-engine, sidecar

Kubernetes CronJob with a sidecar container


I am having an issue with a Kubernetes CronJob that runs two containers inside a GKE cluster.

One of the two containers executes the actual job the CronJob is supposed to perform.

That part works perfectly fine: the container starts when it is scheduled to, does its job, and then terminates.

The trouble is the second container, a sidecar used to access a database instance. It never terminates, which means the Job itself never completes, and I see running Job instances accumulating over time.

Is there a way to configure a Kubernetes batch CronJob to terminate once one of its containers has finished successfully? This is the manifest:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: chron-job-with-a-sidecar
  namespace: my-namespace
spec:
#            ┌───────────── minute (0 - 59)
#            │ ┌───────────── hour (0 - 23)
#            │ │  ┌───────────── day of the month (1 - 31)
#            │ │  │ ┌───────────── month (1 - 12)
#            │ │  │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
#            │ │  │ │ │                                   7 is also Sunday on some systems)
#            │ │  │ │ │                                   OR sun, mon, tue, wed, thu, fri, sat
#            │ │  │ │ │
  schedule: "0 8 * * *" # -> every day at 8 AM
  jobTemplate:
    metadata:
      labels:
        app: my-label
    spec:
      template:
        spec:
          restartPolicy: Never # required for Jobs; the default (Always) is not allowed
          containers:
          # --- JOB CONTAINER -----------------------------------------------
          - image: my-job-image:latest
            imagePullPolicy: Always
            name: my-job
            command:
            - /bin/sh
            - -c
            - /some-script.sh; exit 0;
          # --- SIDECAR CONTAINER ----------------------------------------------
          - command:
            - "/cloud_sql_proxy"
            - "-instances=my-instance:antarctica-south-3:user=tcp:1234"
            # ... some other settings ...
            image: gcr.io/cloudsql-docker/gce-proxy:1.30.0
            imagePullPolicy: Always
            name: cloudsql-proxy
            # ... some other values ...

Solution

  • No, strictly speaking there is no way to make Kubernetes stop a sidecar container automatically once a "main" container is done.

    The closest "Kubernetes-native" solution I can think of is setting the CronJob's concurrencyPolicy to Replace (see CronJobSpec). It won't stop the sidecar once the job is done, but at least each new run will cancel and replace the still-running previous one, so Jobs won't accumulate. Unfortunately, with this solution you will lose the job history. The sketch right below shows where the field goes.
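    A minimal sketch, reusing the manifest from the question (only the concurrencyPolicy line is new):

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: chron-job-with-a-sidecar
      namespace: my-namespace
    spec:
      schedule: "0 8 * * *"
      concurrencyPolicy: Replace # cancel and replace the previous run if it is still going
      jobTemplate:
        # ... job template unchanged ...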

    If this solution does not fit your needs, you will need to implement some form of communication between the containers, but nothing like that is built into Kubernetes itself. There are external tools that can help, e.g. kubexit, and a common do-it-yourself pattern is sketched below.
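    The do-it-yourself variant shares an emptyDir volume between the two containers and lets the job container drop a sentinel file that a small wrapper around the proxy polls for. A rough sketch of the pod template under those assumptions (the volume name, sentinel path, and the -alpine image tag are illustrative, not taken from the original manifest):

    spec:
      template:
        spec:
          restartPolicy: Never
          volumes:
          - name: lifecycle            # hypothetical shared scratch volume
            emptyDir: {}
          containers:
          - name: my-job
            image: my-job-image:latest
            volumeMounts:
            - name: lifecycle
              mountPath: /lifecycle
            command:
            - /bin/sh
            - -c
            # drop a sentinel file once the actual work has finished
            - /some-script.sh; touch /lifecycle/job-done
          - name: cloudsql-proxy
            # assumes an image variant that ships a shell (e.g. the
            # 1.30.0-alpine tag); the default image may not include /bin/sh
            image: gcr.io/cloudsql-docker/gce-proxy:1.30.0-alpine
            volumeMounts:
            - name: lifecycle
              mountPath: /lifecycle
            command:
            - /bin/sh
            - -c
            # run the proxy in the background, wait for the sentinel file,
            # then stop the proxy so the Pod (and the Job) can complete
            - |
              /cloud_sql_proxy -instances=my-instance:antarctica-south-3:user=tcp:1234 &
              PID=$!
              while [ ! -f /lifecycle/job-done ]; do sleep 1; done
              kill "$PID"

    Note that this sketch, like the exit 0 in the original manifest, reports success even when the script fails (the container's exit code is that of touch); only create the sentinel on success if the Job's exit status matters to you.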