I have HAProxy running as a load balancer in k8s, with a route to a service backed by two running pods. I want the server naming inside HAProxy to correspond to the pod names behind my service. If I'm not mistaken, the following configmap / annotation value should do exactly this: https://haproxy-ingress.github.io/docs/configuration/keys/#backend-server-naming
But for me it doesn't, and for the life of me I can't figure out why. The relevant parts of my configuration look like this:
controller deployment:
kind: Deployment
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: haproxy-controller
spec:
  replicas: 2
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      serviceAccountName: haproxy-ingress-service-account
      containers:
      - name: haproxy-ingress
        image: haproxytech/kubernetes-ingress
        args:
        - --configmap=haproxy-controller/haproxy-ingress
        - --configmap-errorfiles=haproxy-controller/errorfile-conf
        - --default-ssl-certificate=haproxy-controller/haproxy-tls
        - --ingress.class=haproxy
controller service:
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: haproxy-controller
spec:
  selector:
    run: haproxy-ingress
  type: ClusterIP
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
controller configmap:
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: haproxy-controller
data:
  server-ssl: "true"
  scale-server-slots: "2"
  cookie-persistence: "LFR_SRV"
  backend-server-naming: "pod"
  backend-config-snippet: |
    cookie LFR_SRV indirect nocache insert maxidle 10m httponly secure
backend server ingress:
kind: Ingress
metadata:
  name: liferay-dxp
  namespace: backend
  annotations:
    kubernetes.io/ingress.class: "haproxy"
spec:
  tls:
  - secretName: backend-tls
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 443
The generated backend part of the haproxy.conf looks like this:
mode http
balance roundrobin
option forwardfor
cookie LFR_SRV indirect nocache insert
###_config-snippet_### BEGIN
cookie LFR_SRV indirect nocache insert maxidle 10m httponly secure
###_config-snippet_### END
server SRV_1 10.xx.xx.xx:443 check ssl alpn h2,http/1.1 weight 128 cookie SRV_1 verify none
server SRV_2 10.xx.xx.xx:443 check ssl alpn h2,http/1.1 weight 128 cookie SRV_2 verify none
Everything works fine except backend-server-naming: "pod". I also can't get any of the session-cookie-* properties from here to work. Because of that, I used the backend-config-snippet to overwrite the cookie line in the generated haproxy.conf with my custom one (I added maxidle 10m httponly secure). What am I doing wrong?
Here are a few hints to help you solve your issue.
Looking at the manifest files you shared, it's hard to tell which exact version of the haproxy-ingress-controller container you are running in your cluster (by the way, leaving an image without a tag is against best practice in production environments; read more on it here).
For the backend-server-naming configuration key to work, at least v0.8.1 is required (the feature was backported to that release).
Before you move on with troubleshooting, please first double-check your ingress deployment for compatibility.
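To check which version is actually running, you can inspect the image of the deployed container and the controller's startup logs (deployment name and namespace taken from your manifests; adjust to your setup):

```shell
# Print the image (and thus the tag, if any) of the running controller
kubectl -n haproxy-controller get deployment haproxy-ingress \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'

# The controller usually logs its version on startup
kubectl -n haproxy-controller logs deployment/haproxy-ingress | head -n 20
```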
If I understand the official documentation on this configuration key correctly, setting backend server naming to pod names (backend-server-naming=pod) instead of sequences does support a dynamic reload of the HAProxy configuration, but does NOT currently support dynamic updates of server names in the backend section via the HAProxy runtime configuration (this was explained by the haproxy-ingress author here, and here).
It means you need to restart your haproxy-ingress controller instance first to see changes in the backend's server names reflected in the HAProxy configuration, e.g. when new Pod replicas appear or a POD_IP changes due to a Pod crash (until then, expect additions/updates of server entries based on sequence naming).
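Assuming your controller runs as the Deployment from your manifests, such a restart can be done with a standard rollout (a sketch; names are taken from your manifests and may differ in your cluster):

```shell
# Restart the controller pods; the fresh pods re-read the endpoints
# and regenerate the backend section with pod-based server names
kubectl -n haproxy-controller rollout restart deployment haproxy-ingress
kubectl -n haproxy-controller rollout status deployment haproxy-ingress
```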
I have successfully tested the backend-server-naming=pod setting (see the test below) on v0.13.4 with a classified Ingress, based on the ingressClassName field rather than the deprecated kubernetes.io/ingress.class annotation used in your case.
I'm not claiming your configuration won't work (it should too), but it's important to know that dynamic updates to the configuration (including changes to backend configs) won't happen on an unclassified or wrongly classified Ingress resource unless you're really running v0.12 or a newer version.
# Ingress class
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: my-class
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: haproxy-ingress.github.io/controller
# Demo Ingress resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    haproxy-ingress.github.io/backend-server-naming: "pod"
  name: echoserver
spec:
  ingressClassName: my-class
  rules:
  - http:
      paths:
      - backend:
          service:
            name: echoserver
            port:
              number: 8080
        path: /
        pathType: Prefix
HAProxy configuration with comments:
backend default_echoserver_8080
  mode http
  balance roundrobin
  acl https-request ssl_fc
  http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
  http-request del-header x-forwarded-for
  option forwardfor
  http-response set-header Strict-Transport-Security "max-age=15768000" if https-request
  # pod name start
  server echoserver-75d6f584bb-jlwb8 172.17.0.2:8080 weight 1 check inter 2s
  # pod name end
  server srv002 127.0.0.1:1023 disabled weight 1 check inter 2s
  server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
  server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
...
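To verify the result on your side, you can dump the rendered backend section from one of the controller pods (the config path below is the haproxy-ingress default; it may differ depending on your image):

```shell
# Show each backend and the server lines that follow it
kubectl -n haproxy-controller exec deploy/haproxy-ingress -- \
  sh -c 'grep -A 12 "^backend" /etc/haproxy/haproxy.cfg'
```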