kubernetes, kubernetes-helm

Helm upgrade custom microservice causes temporary downtime


During deployment of a new version of the application, the 4 pods are terminated sequentially and replaced by newer ones, but for those ~10 minutes another microservice calling this app is hitting the old endpoints and getting 502/404 errors. Does anyone know a way to deploy 4 new pods first, drain traffic from the old pods to the new ones, and only terminate the old pods once all connections to the previous version have closed?


Solution

  • This probably means you don't have a readiness probe set up. The default rollout strategy already replaces only 25% of the pods at a time. With a readiness probe, the rollout waits until the new pods are actually available and Ready before continuing; without one, it only waits until the containers have started, so traffic can reach pods that aren't serving yet.
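
A minimal sketch of what the answer describes, assuming a hypothetical Deployment named `my-service` with an HTTP health endpoint at `/healthz` on port 8080 (both names are assumptions, not from the question):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service            # hypothetical service name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%           # default: new pods brought up alongside old ones
      maxUnavailable: 25%     # default: at most 25% of pods down at a time
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: app
          image: registry.example.com/my-service:v2   # hypothetical image
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz  # assumed health-check path
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
```

With the probe in place, a Service only routes traffic to pods that report Ready, so the rollout pauses until each new pod passes its probe before old pods are terminated.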