I have spun up a Kubernetes cluster in AWS using the official "kube-up" mechanism. By default, an addon that monitors the cluster and logs to InfluxDB is created. It has been noted in this post that InfluxDB quickly fills up disk space on the nodes, and I am seeing the same issue.
The problem is, when I try to kill the InfluxDB replication controller and service, it "magically" comes back after a time. I do this:
kubectl delete rc --namespace=kube-system monitoring-influx-grafana-v1
kubectl delete service --namespace=kube-system monitoring-influxdb
kubectl delete service --namespace=kube-system monitoring-grafana
Then if I say:
kubectl get pods --namespace=kube-system
I do not see the pods running anymore. However, after some amount of time (minutes to hours), the replication controllers, services, and pods are back. I don't know what is restarting them, and I would like to kill them permanently.
You probably need to remove the manifest files for InfluxDB from the /etc/kubernetes/addons/ directory on your "master" host. Many of the kube-up.sh implementations use a service (usually at /etc/kubernetes/kube-master-addons.sh) that runs periodically and makes sure that all the manifests in /etc/kubernetes/addons/ are active.
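As a rough sketch of what that looks like on the master host (the cluster-monitoring subdirectory name below is an assumption and may differ between releases, so list the directory first to see what your cluster actually ships):

# Check what is actually present before removing anything
ls /etc/kubernetes/addons/

# Move the monitoring manifests out of the watched directory so the
# addon service stops re-creating them (path is an assumption; adjust
# to whatever the ls above shows)
sudo mkdir -p /root/disabled-addons
sudo mv /etc/kubernetes/addons/cluster-monitoring /root/disabled-addons/

# Then delete the running resources one last time; nothing should
# recreate them now
kubectl delete rc --namespace=kube-system monitoring-influx-grafana-v1
kubectl delete service --namespace=kube-system monitoring-influxdb
kubectl delete service --namespace=kube-system monitoring-grafana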
You can also restart your cluster, but run export ENABLE_CLUSTER_MONITORING=none before running kube-up.sh. You can see the other environment settings that impact the cluster kube-up.sh builds in cluster/aws/config-default.sh.
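For example, rebuilding without the monitoring addon would look roughly like this (a sketch; it assumes you are willing to tear the existing cluster down with kube-down.sh first, and that the variables are exported in the same shell that runs kube-up.sh):

export KUBERNETES_PROVIDER=aws
export ENABLE_CLUSTER_MONITORING=none
cluster/kube-down.sh   # tear down the existing cluster
cluster/kube-up.sh     # bring it back up without InfluxDB/Grafana monitoring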