I have a YAML file that looks like this:
```yaml
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cron-job
  namespace: hello-world
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              imagePullPolicy: IfNotPresent
              env:
                - name: services
                  value: $(kubectl get service -A)
              command: ["echo", "$services"]
              volumeMounts:
                - name: scripts
                  mountPath: /tmp/python
          restartPolicy: OnFailure
          volumes:
            - name: scripts
              configMap:
                name: test-scripts
```
I'd like to get all currently running services and set the `services` env variable to those running services. Is it possible to achieve this? I've tried using

```yaml
- name: services
  value: $(kubectl get service -A)
```

but it just treats the value as a string.
There are two cases:

If you want to list the services at every cron execution, you'll need to run the `kubectl` command from inside the pod. You'll also need to create a dedicated `ServiceAccount` for your `CronJob`. This `ServiceAccount` must have the right permissions to list all services in the cluster. For that, you can define a `ClusterRole` (to list all services in the cluster) and link it to your `ServiceAccount` using a `ClusterRoleBinding`. This is similar to what David Maze explained in his comment.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hello-cron-job
  namespace: hello-world
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: list-all-services
rules:
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hello-cron-job
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: list-all-services
subjects:
  - kind: ServiceAccount
    name: hello-cron-job
    namespace: hello-world
```
From there, you can define your `CronJob` using that service account:
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cron-job
  namespace: hello-world
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: alpine/k8s:1.28.0
              command:
                - sh
                - -c
                - |
                  services=$(kubectl get services -A -o name)
                  echo $services
          restartPolicy: OnFailure
          serviceAccountName: hello-cron-job
```
Notice how `spec.jobTemplate.spec.template.spec.serviceAccountName` is set to the previously created `ServiceAccount`. I'm using the `alpine/k8s` image here, as it already has `kubectl` installed.
In case you only want to pass the list of services when applying the manifest, you can define your `CronJob` in a file (let's say `cronjob.yaml`):
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cron-job
  namespace: hello-world
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              env:
                - name: services
                  value: "$SERVICES"
              command: ["echo", "$(services)"]
          restartPolicy: OnFailure
```
Then, substitute the `$SERVICES` value (with `envsubst`) before running `kubectl apply`:
```shell
SERVICES=$(kubectl get svc -A -o name) \
  envsubst '$SERVICES' < cronjob.yaml | \
  kubectl apply -f -
```
This will create the `CronJob` object with the right `services` env var (the names of the services at the time the manifest is applied). I've used `-o name` to get only the names of the services, but you can do whatever you want here.
Also note that the command must be `["echo", "$(services)"]` and not `["echo", "$services"]` (see "Use environment variables to define arguments" in the Kubernetes docs).
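To make the distinction concrete, here is a minimal container fragment contrasting the two forms (the `services` value shown is a made-up example):

```yaml
env:
  - name: services
    value: "service/foo service/bar"
command: ["echo", "$(services)"]   # Kubernetes expands $(services) at container start
# command: ["echo", "$services"]  # no shell runs here, so echo would print
#                                 # the literal string "$services"
```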