Tags: nginx, kubernetes, azure-aks, nginx-ingress

How to whitelist an nginx ingress custom port


I have an nginx ingress in Kubernetes with both a whitelist (handled by an nginx.ingress.kubernetes.io/whitelist-source-range annotation) and a custom port mapping (which exposes an SFTP server's port 22 via a --tcp-services-configmap ConfigMap). The whitelist works great for ports 80 and 443, but it does not work for 22. How do I whitelist my custom port?

Configuration looks roughly like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ...
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
            - name: sftp
              containerPort: 22
        ...

kind: Ingress
metadata:
  name: {{ .controllerName }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: {{ .ipAllowList }}

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  22: "default/sftp:22"

UPDATE

Thanks to @jordanm, I discovered that I can restrict IP addresses for all ports via loadBalancerSourceRanges on the LoadBalancer Service rather than in nginx:

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  loadBalancerIP: {{ .loadBalancerIp }}
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
    - name: sftp
      port: 22
      targetPort: sftp
  loadBalancerSourceRanges:
    {{ .ipAllowList }}
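
When rendered, loadBalancerSourceRanges must be a plain YAML list of CIDR blocks, which the cloud provider uses to program its L4 firewall. A purely illustrative rendering, assuming .ipAllowList produces such a list:

  loadBalancerSourceRanges:
    - 203.0.113.0/24
    - 198.51.100.17/32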

Solution

  • First, take a look at this issue: ip-whitelist-support.

    IPs are not whitelisted for TCP services; an alternative is to create a separate firewall for the TCP services and whitelist the IPs at the firewall level (see the sketch at the end of this answer).

    In the controller's nginx template, this logic lives inside the HTTP location blocks (for a specific location {{ $path }} the template checks {{ if isLocationAllowed $location }}), so it never applies to the TCP services.

    Check the official Ingress documentation: ingress-kubernetes.

    Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.

    An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.

    You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
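
    As the documentation above says, a non-HTTP service like SFTP is typically exposed directly through a NodePort or LoadBalancer Service rather than through the Ingress resource. A minimal NodePort sketch for the sftp service from the question (the app: sftp selector label and the nodePort value are assumptions):

    apiVersion: v1
    kind: Service
    metadata:
      name: sftp-nodeport
      namespace: default
    spec:
      type: NodePort
      selector:
        app: sftp          # assumed label on the SFTP pods
      ports:
        - name: sftp
          port: 22
          targetPort: 22
          nodePort: 30022  # must be within the cluster's NodePort range (default 30000-32767)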

    In this case, the Ingress resource instructs the ingress controller how to handle HTTP/HTTPS requests; in this approach the nginx-ingress controller acts as software that introduces layer-7 functionality/load balancing.

    If you are interested in nginx ingress TCP support:

    Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap

    See: exposing-tcp-udp-services
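
    Per that document, the value format in the tcp-services ConfigMap is <namespace>/<service name>:<service port>, with two optional trailing fields that may be set to PROXY to enable Proxy Protocol decoding (on the listener) and/or encoding (towards the backend). A sketch reusing the question's entry; the PROXY variant is only useful if the backend understands the Proxy Protocol:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: tcp-services
      namespace: ingress-nginx
    data:
      "22": "default/sftp:22"
      # with Proxy Protocol encoding towards the backend it would instead read:
      # "22": "default/sftp:22::PROXY"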

    If you need more granular control over access to your TCP service, consider using the L4 load-balancing/firewall settings provided by your cloud provider, as sketched below.
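
    For example, the same effect as a firewall-level whitelist can be achieved by giving the TCP service its own LoadBalancer Service restricted with loadBalancerSourceRanges, completely outside of nginx and without touching the HTTP/HTTPS whitelist. A sketch based on the question's sftp service (the app: sftp selector label and the CIDR are assumptions):

    apiVersion: v1
    kind: Service
    metadata:
      name: sftp-lb
      namespace: default
    spec:
      type: LoadBalancer
      selector:
        app: sftp              # assumed label on the SFTP pods
      ports:
        - name: sftp
          port: 22
          targetPort: 22
      loadBalancerSourceRanges:
        - 203.0.113.0/24       # example allow-list CIDR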