Tags: nginx, kubernetes, rabbitmq, azure-aks, nginx-ingress

Configure TCP Port on Nginx Ingress on Azure Kubernetes Service (AKS) Cluster


I need to configure a TCP port on my AKS cluster so that RabbitMQ can accept connections from outside the cluster.

I have installed nginx-ingress with Helm as follows:

kubectl create namespace ingress-basic

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

helm install nginx-ingress ingress-nginx/ingress-nginx \
    --namespace ingress-basic \
    --set controller.replicaCount=2 \
    --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
    --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
    --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux

I have set up an A record with our DNS provider pointing to the public IP of the ingress controller.
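
If it helps, the controller's public IP can be read straight from its LoadBalancer service (the service name below is what the Helm release above generates):

kubectl get service nginx-ingress-ingress-nginx-controller \
    --namespace ingress-basic \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'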

I have created a TLS secret (to enable HTTPS).
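
For reference, the secret was created along these lines (the certificate and key file names are placeholders):

kubectl create secret tls tls-secret \
    --namespace default \
    --cert my-domain.crt \
    --key my-domain.key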

I have created an ingress route with:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: rabbit-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  tls:
  - hosts:
    - my.domain.com
    secretName: tls-secret
  rules:
    - http:
        paths:
          - backend:
              serviceName: rabbitmq-cluster
              servicePort: 15672
            path: /(.*)
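
This was applied in the usual way (the file name is just what I called it locally):

kubectl apply -f rabbit-ingress.yaml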

I can navigate to my cluster via the domain name from outside and see the management UI (internally on 15672) with valid HTTPS. So the ingress is up and running, and I can create queues etc., so RabbitMQ itself is working correctly.
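
A quick check from a shell confirms the HTTPS side (same placeholder domain as in the manifest):

curl -I https://my.domain.com/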

However, I can't get the TCP part working so that I can publish to the queues from outside the cluster.

I have edited the YAML of what I believe is the ConfigMap for the controller (Azure portal → cluster → Configuration → nginx-ingress-ingress-nginx-controller) and added this at the end:

data:
  '5672': 'default/rabbitmq-cluster:5672'
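
For context, ingress-nginx reads TCP mappings like this from a ConfigMap that the controller is pointed at with its --tcp-services-configmap flag, so a standalone version would look something like the following (the metadata values here are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services        # assumed name; must match the controller's --tcp-services-configmap argument
  namespace: ingress-basic
data:
  '5672': 'default/rabbitmq-cluster:5672'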

I have then edited the YAML for the service itself via the Azure portal and added this at the end, under spec.ports:

  - name: amqp
    protocol: TCP
    port: 5672
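
The same port can also be appended from the command line rather than through the portal (service name as it appears in my cluster; targetPort is stated explicitly here):

kubectl patch service nginx-ingress-ingress-nginx-controller \
    --namespace ingress-basic \
    --type json \
    -p '[{"op": "add", "path": "/spec/ports/-", "value": {"name": "amqp", "protocol": "TCP", "port": 5672, "targetPort": 5672}}]'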

However, when I try to hit my domain using a test client, the request just times out. (The client worked when I used a LoadBalancer and hit the external IP of the cluster directly, so I know the client code works.)

Is there another step that I should be doing?


Solution

  • I believe the issue here was that Helm was managing so much of the configuration itself that my manual edits weren't being picked up, presumably because the chart wires TCP services through its own ConfigMap and the controller's --tcp-services-configmap flag.

    I uninstalled the ingress with Helm and changed the installation script to this:

    helm install nginx-ingress ingress-nginx/ingress-nginx \
        --namespace ingress-basic \
        --set controller.replicaCount=2 \
        --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
        --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
        --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux \
        --set tcp.5672="default/rabbitmq-cluster:5672"
    

    This pre-configures the TCP port forwarding, so I don't have to do anything else. I don't know whether it was related, but this seemed to 'break' my SSL implementation, so I upgraded the ingress manifest from v1beta1 to v1, and HTTPS was working again perfectly.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: rabbit-ingress
      namespace: default
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/use-regex: "true"
        nginx.ingress.kubernetes.io/rewrite-target: /$1
    spec:
      tls:
      - hosts:
          - my.domain.com
        secretName: tls-secret
      rules:
      - host: my.domain.com
        http:
          paths:
          - path: /(.*)
            pathType: Prefix
            backend:
              service:
                name: rabbitmq-cluster
                port:
                  number: 15672
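
    To double-check the result, the controller service should now list 5672, and a plain TCP probe from outside should connect (nc is just one way to test; any AMQP client pointed at my.domain.com:5672 should work too):

    kubectl get service nginx-ingress-ingress-nginx-controller --namespace ingress-basic
    nc -vz my.domain.com 5672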