Tags: kubernetes, nginx-ingress

How to access k8s Ingress from inside the cluster


I have a stateful serviceA and I need to access it using sticky sessions. To implement the sticky session I'm using the nginx ingress with these annotations:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"

When I call serviceA from outside the cluster, everything works fine.
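For reference, a complete Ingress with those annotations could look like the sketch below. The host, ingress class, and backend port here are assumptions for illustration, not part of my actual config:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
spec:
  ingressClassName: nginx      # assumption: default ingress-nginx class
  rules:
    - host: myapp.com          # assumption: the Ingress host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: serviceA
                port:
                  number: 80   # assumption: serviceA's port
```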

Now the problem is that serviceB, which runs inside k8s, also needs to access serviceA and still benefit from the sticky session, so the traffic needs to flow serviceB -> Ingress -> serviceA.

I could implement that just by using the public hostname of the Ingress, but I'd like to avoid having the traffic go out of the cluster and then back in again. Using the public host, the traffic would be serviceB -> NAT -> Public LoadBalancer Ingress -> Logical Ingress -> serviceA.

So I was wondering whether there is a way for serviceB to access the Ingress directly, so that the traffic would be serviceB -> Logical Ingress -> serviceA.


Solution

  • A possible solution would be to create a new Service inside your cluster and configure it to select the ingress controller pod.

    Let's call this service ingress-internal-service; you can easily create it with the following command:

    $ kubectl expose deployment -n kube-system ingress-nginx-controller --name ingress-internal-service --port 80
    service/ingress-internal-service exposed
    
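    Equivalently, you can write the Service as a manifest. The selector labels below are an assumption based on a standard ingress-nginx install; copy the actual labels from your controller pod (`kubectl get pods -n kube-system --show-labels`) before applying:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-internal-service
  namespace: kube-system
spec:
  type: ClusterIP
  selector:
    # assumption: default ingress-nginx labels; verify against your pod
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: 80
```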

    As you can see, my service now has one endpoint that matches my ingress controller pod:

    #POD 
    kube-system   ingress-nginx-controller-558664778f-dn2cl   1/1     Running     24h     172.17.0.7
    
    #SERVICE
    Name:              ingress-internal-service
    -----
    Type:              ClusterIP
    IP:                10.111.197.14
    Port:              <unset>  80/TCP
    TargetPort:        80/TCP
    Endpoints:         172.17.0.7:80
    

    And here is a test I made with curl (myapp.com is my Ingress host):

    [root@cent /]#  curl -H "Host: myapp.com" http://10.111.197.14  
    {
      "path": "/",
      "headers": {
        "host": "myapp.com",
        "x-request-id": "ec03f71b9772921c4b07112297ee2e43",
        "x-real-ip": "172.17.0.1",
        "x-forwarded-for": "172.17.0.1",
        "x-forwarded-host": "myapp.com",
        "x-forwarded-port": "80",
        "x-forwarded-proto": "http",
        "x-scheme": "http",
        "user-agent": "curl/7.29.0",
        "accept": "*/*"
      },
      "method": "GET",
      "body": "",
      "fresh": false,
      "hostname": "myapp.com",
      "ip": "172.17.0.1",
      "ips": [
        "172.17.0.1"
    

    This is pretty similar to the NodePort / bare-metal way of exposing the nginx controller, with the difference that we are just using a ClusterIP Service instead.
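    With such a Service in place, serviceB can reach the Ingress through cluster DNS instead of the public hostname. A sketch with curl (the Host header must match your Ingress rule; the cookie jar replays the "route" affinity cookie, so repeated calls stick to the same serviceA pod — host and cookie-jar path are illustrative):

```shell
# Run from a pod inside the cluster (e.g. serviceB's container).
# First request: nginx sets the "route" affinity cookie, saved to the jar.
curl -c /tmp/cookies.txt -b /tmp/cookies.txt \
     -H "Host: myapp.com" \
     http://ingress-internal-service.kube-system.svc.cluster.local/

# Later requests send the saved cookie back, preserving stickiness.
curl -c /tmp/cookies.txt -b /tmp/cookies.txt \
     -H "Host: myapp.com" \
     http://ingress-internal-service.kube-system.svc.cluster.local/
```

    In application code, any HTTP client with a cookie jar (e.g. a session object) gives the same behavior.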

    PS. If you are using a cloud environment, you may want to check/consider using an internal TCP/UDP load balancer.
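    As a sketch, on GKE that means exposing the controller through a Service annotated for an internal load balancer; the annotation key differs per cloud (e.g. AWS uses `service.beta.kubernetes.io/aws-load-balancer-internal`), and the selector labels are the same assumption as above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-internal
  namespace: kube-system
  annotations:
    # GKE-specific; use your provider's internal-LB annotation instead
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - port: 80
      targetPort: 80
```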