I currently have this bash script that cleans up obsolete resources: deployments, services, and configmaps. When I add kubectl delete pod to the script, it deletes all of the remaining pods. The problem is this scenario: I only want to clean up the pods that have no owning object such as a deployment, but kubectl delete pod also hits the pods that are managed by a deployment, and those pods get recreated, which shows up as a restart.
I do not want the other pods or resources to restart. I only want to delete the pods that exist as a bare Pod object, while the pods that belong to a deployment, service, or configmap should not be restarted.
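For example, whether a pod has an owning controller can be checked with something like the jsonpath query below (the two pod names are taken from the listings further down); the first command prints ReplicaSet, while the second prints nothing because that pod has no ownerReferences:

kubectl get pod hello-world-8i64337229-4yd37 -o jsonpath='{.metadata.ownerReferences[*].kind}'
kubectl get pod http-client-test -o jsonpath='{.metadata.ownerReferences[*].kind}'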
Here's the script:
active_pods=$(kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}')
active_deployments=$(kubectl get deployments -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}')
active_services=$(kubectl get services -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}')
active_configmaps=$(kubectl get configmap -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}')

readarray -t active_pods_array <<<"$active_pods"
readarray -t active_deployments_array <<<"$active_deployments"
readarray -t active_services_array <<<"$active_services"
readarray -t active_configmaps_array <<<"$active_configmaps"

# Delete pods that are not listed in nonprod.txt
echo "active pods: ${active_pods_array[@]}"
for pod in "${active_pods_array[@]}"; do
  if ! grep -q "pod/${pod} " nonprod.txt && [[ -n $pod ]]; then
    kubectl delete pod "$pod"
  fi
done

# Delete deployments that are not listed in nonprod.txt
echo "active deployments: ${active_deployments_array[@]}"
for deployment in "${active_deployments_array[@]}"; do
  if ! grep -q "deployment.apps/${deployment} " nonprod.txt && [[ -n $deployment ]]; then
    kubectl delete deployment "$deployment"
  fi
done

# Delete services that are not listed in nonprod.txt (keep the built-in kubernetes service)
echo "active services: ${active_services_array[@]}"
for service in "${active_services_array[@]}"; do
  if ! grep -q "service/${service} " nonprod.txt && [[ -n $service ]] && [[ $service != 'kubernetes' ]]; then
    kubectl delete service "$service"
  fi
done

# Delete configmaps that are not listed in nonprod.txt (keep kube-root-ca.crt)
echo "active configmap: ${active_configmaps_array[@]}"
for configmap in "${active_configmaps_array[@]}"; do
  if ! grep -q "configmap/${configmap} " nonprod.txt && [[ -n $configmap ]] && [[ ${configmap} != "kube-root-ca.crt" ]]; then
    kubectl delete configmap "${configmap}"
  fi
done
~ $ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
logging-event-7a8gl478e4-7pms9    1/1     Running   0          22m
my-workspace-92757a9w8p-ghedk     1/1     Running   0          25m
my-web-page-347817pt37-7wu1a      1/1     Running   0          21m
hello-world-8i64337229-4yd37      1/1     Running   0          21m
http-client-test                  1/1     Running   0          21m
mysql8demo-7425681afg-krdlh       1/1     Running   0          22m
postgres16demo-4p25jr3g9k-2piex   1/1     Running   0          21m
mytestcurl                        1/1     Running   0          21m
mytestdemo-9449873a4c-at5kg       1/1     Running   0          22m
~ $ kubectl get deployment
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
logging-event    1/1     1            1           22m
my-workspace     1/1     1            1           25m
my-web-page      1/1     1            1           21m
hello-world      1/1     1            1           21m
mysql8demo       1/1     1            1           22m
postgres16demo   1/1     1            1           21m
mytestdemo       1/1     1            1           22m
~ $ kubectl get service
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
logging-event    ClusterIP   ##########   <none>        8080/TCP   22m
my-web-page      ClusterIP   ##########   <none>        80/TCP     21m
hello-world      ClusterIP   ##########   <none>        80/TCP     21m
kubernetes       ClusterIP   ##########   <none>        443/TCP    31m
mysql8demo       ClusterIP   ##########   <none>        1433/TCP   22m
postgres16demo   ClusterIP   ##########   <none>        5432/TCP   21m
mytestdemo       ClusterIP   ##########   <none>        8080/TCP   22m
~ $ kubectl get configmap
NAME                                                          DATA   AGE
logging-event                                                 1      22m
my-workspace-scripts                                          1      26m
hello-world-html                                              1      21m
kube-root-ca.crt                                              1      31m
mysql8demo-hook                                               1      22m
mysql8demo-part-66772g4g2ba1fdd37d7273f2gs64b4a8-aa-cm        1      22m
postgres16demo-hook                                           1      22m
postgres16demo-part-9r54446g81dbr16836g4214g85189139-aa-cm    1      22m
mytestdemo-mappings                                           3      22m
I found a way to keep only the pods whose owner kind is null, i.e. the bare pods that are the only ones that need to be deleted, using the commands below.
Fix in my script:
active_pods=$(kubectl get pods -o json | jq '.items[] | select( .metadata.ownerReferences[0].kind == null )' | jq '.metadata.name' --raw-output)
Test from my terminal:
~ $ kubectl get pods -o json | jq '.items[] | select( .metadata.ownerReferences[0].kind == null )' | jq '.metadata.name' --raw-output
http-client-test
mytestcurl
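As a side note, the two jq calls can be merged into a single invocation (a minor variation of the fix above; --raw-output strips the JSON quotes from the names):

kubectl get pods -o json | jq --raw-output '.items[] | select( .metadata.ownerReferences[0].kind == null ) | .metadata.name'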
List that shows the owner kind ("ReplicaSet", "StatefulSet", or null) for each pod:
~ $ kubectl get pods -o json | jq '.items[] | {name: .metadata.name, owner_kind: .metadata.ownerReferences[0].kind}'
{
"name": "logging-event-7a8gl478e4-7pms9",
"owner_kind": "ReplicaSet"
}
{
"name": "my-workspace-92757a9w8p-ghedk",
"owner_kind": "ReplicaSet"
}
{
"name": "my-web-page-347817pt37-7wu1a",
"owner_kind": "ReplicaSet"
}
{
"name": "hello-world-8i64337229-4yd37",
"owner_kind": "ReplicaSet"
}
{
"name": "http-client-test",
"owner_kind": null
}
{
"name": "mysql8demo-7425681afg-krdlh",
"owner_kind": "StatefulSet"
}
{
"name": "postgres16demo-4p25jr3g9k-2piex",
"owner_kind": "StatefulSet"
}
{
"name": "mytestcurl",
"owner_kind": null
}
{
"name": "mytestdemo-9449873a4c-at5kg",
"owner_kind": "ReplicaSet"
}
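Putting it together, the pod section of the script can be rewritten roughly as below. This is a sketch that keeps the existing nonprod.txt check and only changes which pods become deletion candidates, so pods owned by a ReplicaSet or StatefulSet are never touched and therefore never restart:

# Only bare pods (no ownerReferences) are considered for deletion.
active_pods=$(kubectl get pods -o json | jq --raw-output '.items[] | select( .metadata.ownerReferences[0].kind == null ) | .metadata.name')
readarray -t active_pods_array <<<"$active_pods"

echo "active pods: ${active_pods_array[@]}"
for pod in "${active_pods_array[@]}"; do
  if ! grep -q "pod/${pod} " nonprod.txt && [[ -n $pod ]]; then
    kubectl delete pod "$pod"
  fi
done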