
Does K8s automatically ensure pod availability via health checks when it re-runs pods behind a service?


Let's say that I have a service with 2 pods (replicas). Each pod contains just one container: a REST API WAR that runs on Tomcat. Moreover, each pod has imagePullPolicy: Always, so when there is a new version of the image, it will pull it.

When the container starts, Tomcat obviously takes a few seconds to start. This happens in both containers.

Is it possible that at some point my REST API is not available? I mean, is it possible that neither Tomcat has started yet and a request fails?

Does K8s health-check a pod before attempting to re-run another one? If so, I could perform an HTTP health check against my REST API endpoint. Is that the right way?

Thanks in advance. Any advice will be appreciated.


Solution

  • Is it possible that at some point my REST API is not available? I mean, is it possible that neither Tomcat has started yet and a request fails?

    Yes. You can prevent this from happening by making sure that at least one of your pods is ready to serve requests before creating the service (and then using rolling updates to avoid downtime as you upgrade your application).
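    As a sketch of the rolling-update side, a Deployment can be told never to take all replicas down at once. The names and image below are hypothetical placeholders for your own:

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: rest-api            # hypothetical name
    spec:
      replicas: 2
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0     # never drop below the desired replica count
          maxSurge: 1           # start one new pod before stopping an old one
      selector:
        matchLabels:
          app: rest-api
      template:
        metadata:
          labels:
            app: rest-api
        spec:
          containers:
          - name: tomcat
            image: registry.example.com/rest-api:latest   # hypothetical image
            imagePullPolicy: Always
    ```

    Combined with a readiness probe on the container, a new pod is only added to the service's endpoints once it reports ready, so during an upgrade the old pod keeps serving until its replacement is.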

    Does K8s health-check a pod before attempting to re-run another one? If so, I could perform an HTTP health check against my REST API endpoint. Is that the right way?

    You should take a look at liveness and readiness probes. They are designed to capture the difference between a container running and the application inside the container being ready to serve requests.
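    A minimal sketch of both probes for your Tomcat container, assuming the REST API exposes a health endpoint at /api/health on port 8080 (that path and port are assumptions; point the probes at whatever your application actually serves):

    ```yaml
    containers:
    - name: tomcat
      image: registry.example.com/rest-api:latest   # hypothetical image
      ports:
      - containerPort: 8080
      readinessProbe:             # gates traffic: the pod is kept out of the Service until this passes
        httpGet:
          path: /api/health       # hypothetical health endpoint
          port: 8080
        initialDelaySeconds: 15   # give Tomcat time to deploy the WAR
        periodSeconds: 5
      livenessProbe:              # restarts the container if the application hangs
        httpGet:
          path: /api/health
          port: 8080
        initialDelaySeconds: 30
        periodSeconds: 10
    ```

    While the readiness probe fails, the pod's IP is removed from the service's endpoints, so no requests are routed to it; a failing liveness probe instead causes the kubelet to restart the container.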