Tags: kubernetes, openshift, traefik, kubernetes-ingress, f5

Load balancing in front of Traefik edge router


Looking at OpenShift's HAProxy router or the Traefik project (https://docs.traefik.io/), I can see that the Traefik ingress controller is deployed as a DaemonSet. It routes traffic to the correct services/endpoints based on the virtual host.
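
For context, the kind of manifest I am looking at is roughly the sketch below. The namespace, image tag, service account, and entrypoint ports are placeholders rather than values from any specific setup:

    # One Traefik pod per node, bound to host ports 80/443 so every node can accept traffic
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: traefik
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: traefik
      template:
        metadata:
          labels:
            app: traefik
        spec:
          serviceAccountName: traefik        # assumes RBAC is set up separately
          containers:
            - name: traefik
              image: traefik:v2.10           # placeholder tag
              args:
                - --providers.kubernetesingress      # watch Ingress resources
                - --entrypoints.web.address=:80
                - --entrypoints.websecure.address=:443
              ports:
                - name: web
                  containerPort: 80
                  hostPort: 80
                - name: websecure
                  containerPort: 443
                  hostPort: 443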

Assuming I have a Kubernetes cluster with several nodes, how can I avoid having a single point of failure?

Should I have a load balancer (or DNS load balancing) in front of my nodes?

If yes, does it mean that:

  1. The load balancer will send traffic to one node of the k8s cluster, and
  2. Traefik will then send the request to one of the endpoints/pods, which could be located on a different k8s node?

Does it mean there would be a level of indirection?
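
To make the setup concrete, what I have in mind for step 1 is a Service of type LoadBalancer in front of the Traefik DaemonSet, something like the sketch below (all names and ports are placeholders):

    # External load balancer in front of the nodes, forwarding to the Traefik pods
    apiVersion: v1
    kind: Service
    metadata:
      name: traefik
      namespace: kube-system
    spec:
      type: LoadBalancer        # provisions/uses an external LB that targets the nodes
      selector:
        app: traefik            # matches the DaemonSet pods above
      ports:
        - name: web
          port: 80
          targetPort: 80
        - name: websecure
          port: 443
          targetPort: 443

With this, the external load balancer picks a node, and Traefik then forwards to a workload pod that may live on yet another node, which is the indirection I am asking about.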

I am also wondering whether the F5 cluster mode feature could avoid such indirection.

EDIT: when used with the F5 Ingress resource.


Solution

  • You can have a load balancer (an F5 BIG-IP or a software load balancer) in front of the Traefik pods. When a client request comes in, the load balancer sends it to one of the Traefik pods. Once the request reaches a Traefik pod, Traefik forwards it to the IPs of the Kubernetes workload pods according to the ingress rules, resolving those pod IPs via the Kubernetes Endpoints API. You can configure L7 load balancing in Traefik for your workload pods.
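
    As an illustration of such an ingress rule, a minimal host-based Ingress that Traefik would pick up could look like the sketch below (the host name, backend service name, and port are assumptions):

        # Host-based rule: Traefik resolves app-service's endpoints and
        # forwards requests for app.example.com straight to those pod IPs
        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: app-ingress
          annotations:
            kubernetes.io/ingress.class: "traefik"
        spec:
          rules:
            - host: app.example.com
              http:
                paths:
                  - path: /
                    pathType: Prefix
                    backend:
                      service:
                        name: app-service
                        port:
                          number: 80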

    Using a software reverse proxy such as nginx and exposing it via a load balancer introduces an extra network hop from the load balancer to the nginx ingress pod.

    Looking at the F5 docs, the BIG-IP controller can also be used as an ingress controller, and I think that by using it that way you can avoid the extra hop.
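
    A rough sketch of what that could look like is below. The ingress class and the virtual-server.f5.com/ip annotation follow my reading of the F5 k8s-bigip-ctlr documentation, and the VIP address is a placeholder, so verify the details against the current F5 docs:

        # Same host rule, but handled by the BIG-IP controller instead of an
        # in-cluster proxy, so the BIG-IP forwards without the extra in-cluster hop
        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: app-ingress-f5
          annotations:
            kubernetes.io/ingress.class: "f5"
            virtual-server.f5.com/ip: "10.0.0.10"   # placeholder VIP on the BIG-IP
        spec:
          rules:
            - host: app.example.com
              http:
                paths:
                  - path: /
                    pathType: Prefix
                    backend:
                      service:
                        name: app-service
                        port:
                          number: 80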