azure, kubernetes, kubernetes-helm, fluentd

Why does the pod terminate itself?


I am trying to install Fluentd with Elasticsearch and Kibana using the Bitnami Helm charts.

I am following the article below:

Integrate Logging Kubernetes Kibana ElasticSearch Fluentd
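This is roughly what I ran (the release and chart names are from the Bitnami repo and may not match the article exactly):

  helm repo add bitnami https://charts.bitnami.com/bitnami
  helm install elasticsearch bitnami/elasticsearch
  helm install kibana bitnami/kibana
  helm install fluentd bitnami/fluentd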

But when I deploy Elasticsearch, its pod ends up in a Terminating or Back-off state.

I have been stuck on this for 3 days; any help is appreciated.

Events:

  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  41m (x2 over 41m)  default-scheduler  error while running "VolumeBinding" filter plugin for pod "elasticsearch-master-0": pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled         41m                default-scheduler  Successfully assigned default/elasticsearch-master-0 to minikube
  Normal   Pulling           41m                kubelet, minikube  Pulling image "busybox:latest"
  Normal   Pulled            41m                kubelet, minikube  Successfully pulled image "busybox:latest"
  Normal   Created           41m                kubelet, minikube  Created container sysctl
  Normal   Started           41m                kubelet, minikube  Started container sysctl
  Normal   Pulling           41m                kubelet, minikube  Pulling image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6"
  Normal   Pulled            39m                kubelet, minikube  Successfully pulled image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6"
  Normal   Created           39m                kubelet, minikube  Created container chown
  Normal   Started           39m                kubelet, minikube  Started container chown
  Normal   Created           38m                kubelet, minikube  Created container elasticsearch
  Normal   Started           38m                kubelet, minikube  Started container elasticsearch
  Warning  Unhealthy         38m                kubelet, minikube  Readiness probe failed: Get http://172.17.0.7:9200/_cluster/health?local=true: dial tcp 172.17.0.7:9200: connect: connection refused
  Normal   Pulled            38m (x2 over 38m)  kubelet, minikube  Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6" already present on machine
  Warning  FailedMount       32m                kubelet, minikube  MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition
  Normal   SandboxChanged    32m                kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling           32m                kubelet, minikube  Pulling image "busybox:latest"
  Normal   Pulled            32m                kubelet, minikube  Successfully pulled image "busybox:latest"
  Normal   Created           32m                kubelet, minikube  Created container sysctl
  Normal   Started           32m                kubelet, minikube  Started container sysctl
  Normal   Pulled            32m                kubelet, minikube  Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6" already present on machine
  Normal   Created           32m                kubelet, minikube  Created container chown
  Normal   Started           32m                kubelet, minikube  Started container chown
  Normal   Pulled            32m (x2 over 32m)  kubelet, minikube  Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6" already present on machine
  Normal   Created           32m (x2 over 32m)  kubelet, minikube  Created container elasticsearch
  Normal   Started           32m (x2 over 32m)  kubelet, minikube  Started container elasticsearch
  Warning  Unhealthy         32m                kubelet, minikube  Readiness probe failed: Get http://172.17.0.6:9200/_cluster/health?local=true: dial tcp 172.17.0.6:9200: connect: connection refused
  Warning  BackOff           32m (x2 over 32m)  kubelet, minikube  Back-off restarting failed container


Solution

  • Short answer: it crashed. You can check the Pod status object for details such as the exit code and whether the container was OOM-killed, and then look at the container logs to see if they show anything. For example:
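    A rough sketch of the commands (adjust the pod name, container name, and namespace to match your release):

      # full status, including events and container states
      kubectl describe pod elasticsearch-master-0
      # exit code and reason (e.g. OOMKilled) of the last terminated container
      kubectl get pod elasticsearch-master-0 -o jsonpath='{.status.containerStatuses[*].lastState.terminated}'
      # logs from the previous (crashed) instance of the elasticsearch container
      kubectl logs elasticsearch-master-0 -c elasticsearch --previous

    lastState.terminated shows why the container last exited, and --previous gives you the logs from the crashed run rather than the current restart.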