
Ufw firewall blocks kubernetes (with calico)


I'm trying to install a Kubernetes cluster on my server (Debian 10), which uses ufw as its firewall. Before creating the cluster I allowed these ports in ufw:

179/tcp, 4789/udp, 5473/tcp, 443/tcp, 6443/tcp, 2379/tcp, 4149/tcp, 10250/tcp, 10255/tcp, 10256/tcp, 9099/tcp

As the Calico docs suggest (https://docs.projectcalico.org/getting-started/kubernetes/requirements), along with this Git repo on Kubernetes security best practices (https://github.com/freach/kubernetes-security-best-practice).

But when I try to create the cluster, the calico/node pod can't start because Felix is not live (even though I allowed 9099/tcp in ufw):

Liveness probe failed: calico/node is not ready: Felix is not live: Get http://localhost:9099/liveness: dial tcp [::1]:9099: connect: connection refused

If I disable ufw, the cluster is created and there is no error.

So I would like to know how to configure ufw so that Kubernetes works. Any help would be greatly appreciated, thanks!
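For reference, a frequently reported cause of exactly this symptom (an assumption here, not confirmed by the post itself) is ufw's default policy for routed/forwarded traffic, which pod-to-pod traffic must traverse even when the listed host ports are open. A hedged sketch of how to check and relax it:

```shell
# Sketch (assumption: ufw's deny policy on routed/forwarded traffic is
# blocking pod networking). Inspect the current default policies first:
sudo ufw status verbose        # look at "Default: ... (routed)"

# If routed traffic is denied, allowing it often unblocks CNI plugins:
sudo ufw default allow routed
sudo ufw reload
```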

Edit: My ufw status

To                         Action      From
6443/tcp                   ALLOW       Anywhere
9099                       ALLOW       Anywhere
179/tcp                    ALLOW       Anywhere
4789/udp                   ALLOW       Anywhere
5473/tcp                   ALLOW       Anywhere
2379/tcp                   ALLOW       Anywhere
8181                       ALLOW       Anywhere
8080                       ALLOW       Anywhere
###### (v6)                LIMIT       Anywhere (v6)              # allow ssh connections in
Postfix (v6)               ALLOW       Anywhere (v6)
KUBE (v6)                  ALLOW       Anywhere (v6)
6443 (v6)                  ALLOW       Anywhere (v6)
6783/udp (v6)              ALLOW       Anywhere (v6)
6784/udp (v6)              ALLOW       Anywhere (v6)
6783/tcp (v6)              ALLOW       Anywhere (v6)
443/tcp (v6)               ALLOW       Anywhere (v6)
80/tcp (v6)                ALLOW       Anywhere (v6)
4149/tcp (v6)              ALLOW       Anywhere (v6)
10250/tcp (v6)             ALLOW       Anywhere (v6)
10255/tcp (v6)             ALLOW       Anywhere (v6)
10256/tcp (v6)             ALLOW       Anywhere (v6)
9099/tcp (v6)              ALLOW       Anywhere (v6)
6443/tcp (v6)              ALLOW       Anywhere (v6)
9099 (v6)                  ALLOW       Anywhere (v6)
179/tcp (v6)               ALLOW       Anywhere (v6)
4789/udp (v6)              ALLOW       Anywhere (v6)
5473/tcp (v6)              ALLOW       Anywhere (v6)
2379/tcp (v6)              ALLOW       Anywhere (v6)
8181 (v6)                  ALLOW       Anywhere (v6)
8080 (v6)                  ALLOW       Anywhere (v6)

53                         ALLOW OUT   Anywhere                   # allow DNS calls out
123                        ALLOW OUT   Anywhere                   # allow NTP out
80/tcp                     ALLOW OUT   Anywhere                   # allow HTTP traffic out
443/tcp                    ALLOW OUT   Anywhere                   # allow HTTPS traffic out
21/tcp                     ALLOW OUT   Anywhere                   # allow FTP traffic out
43/tcp                     ALLOW OUT   Anywhere                   # allow whois
SMTPTLS                    ALLOW OUT   Anywhere                   # open TLS port 465 for use with SMTP to send e-mails
10.32.0.0/12               ALLOW OUT   Anywhere on weave
53 (v6)                    ALLOW OUT   Anywhere (v6)              # allow DNS calls out
123 (v6)                   ALLOW OUT   Anywhere (v6)              # allow NTP out
80/tcp (v6)                ALLOW OUT   Anywhere (v6)              # allow HTTP traffic out
443/tcp (v6)               ALLOW OUT   Anywhere (v6)              # allow HTTPS traffic out
21/tcp (v6)                ALLOW OUT   Anywhere (v6)              # allow FTP traffic out
43/tcp (v6)                ALLOW OUT   Anywhere (v6)              # allow whois
SMTPTLS (v6)               ALLOW OUT   Anywhere (v6)              # open TLS port 465 for use with SMTP to send e-mails

Sorry my ufw rules are a bit messy, I tried too many things to get kubernetes working.


Solution

  • I'm trying to install a kubernetes cluster on my server (Debian 10). On my server I used ufw as firewall. Before creating the cluster I allowed these ports on ufw: 179/tcp, 4789/udp, 5473/tcp, 443/tcp, 6443/tcp, 2379/tcp, 4149/tcp, 10250/tcp, 10255/tcp, 10256/tcp, 9099/tcp

    NOTE: all executable commands begin with $

    • Following this initial description, I installed ufw on a Debian 10 server and allowed the same ports you mention:
    $ sudo apt update && sudo apt upgrade -y
    $ sudo apt install ufw -y
    $ sudo ufw allow ssh
    Rule added
    Rule added (v6)
    
    $ sudo ufw enable
    Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
    Firewall is active and enabled on system startup
    
    $ sudo ufw allow 179/tcp
    $ sudo ufw allow 4789/udp
    $ sudo ufw allow 5473/tcp
    $ sudo ufw allow 443/tcp
    $ sudo ufw allow 6443/tcp
    $ sudo ufw allow 2379/tcp
    $ sudo ufw allow 4149/tcp
    $ sudo ufw allow 10250/tcp
    $ sudo ufw allow 10255/tcp
    $ sudo ufw allow 10256/tcp
    $ sudo ufw allow 9099/tcp
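The rules above can also be added in one pass (a sketch, not from the original answer). Note that 4789 is VXLAN, which per the Calico requirements uses UDP, not TCP:

```shell
# Open the same set of ports in a single loop. 4789 is VXLAN (UDP).
for port in 179/tcp 4789/udp 5473/tcp 443/tcp 6443/tcp 2379/tcp \
            4149/tcp 10250/tcp 10255/tcp 10256/tcp 9099/tcp; do
  sudo ufw allow "$port"
done
```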
    
    $ sudo ufw status
    Status: active
    To                         Action      From
    --                         ------      ----
    22/tcp                     ALLOW       Anywhere                  
    179/tcp                    ALLOW       Anywhere                  
    4789/udp                   ALLOW       Anywhere                  
    5473/tcp                   ALLOW       Anywhere                  
    443/tcp                    ALLOW       Anywhere                  
    6443/tcp                   ALLOW       Anywhere                  
    2379/tcp                   ALLOW       Anywhere                  
    4149/tcp                   ALLOW       Anywhere                  
    10250/tcp                  ALLOW       Anywhere                  
    10255/tcp                  ALLOW       Anywhere                  
    10256/tcp                  ALLOW       Anywhere                  
    22/tcp (v6)                ALLOW       Anywhere (v6)             
    179/tcp (v6)               ALLOW       Anywhere (v6)             
    4789/udp (v6)              ALLOW       Anywhere (v6)             
    5473/tcp (v6)              ALLOW       Anywhere (v6)             
    443/tcp (v6)               ALLOW       Anywhere (v6)             
    6443/tcp (v6)              ALLOW       Anywhere (v6)             
    2379/tcp (v6)              ALLOW       Anywhere (v6)             
    4149/tcp (v6)              ALLOW       Anywhere (v6)             
    10250/tcp (v6)             ALLOW       Anywhere (v6)             
    10255/tcp (v6)             ALLOW       Anywhere (v6)             
    10256/tcp (v6)             ALLOW       Anywhere (v6)       
    

    $ sudo apt-get update
    $ sudo apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common
    
    • Adding Docker repository:
    $ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
    $ sudo apt-key fingerprint 0EBFCD88
    $ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian buster stable"
    
    • Update source list and install Docker-ce:
    $ sudo apt-get update
    $ sudo apt-get -y install docker-ce
    

    NOTE: On production systems it's recommended to install a pinned version of Docker:

    $ apt-cache madison docker-ce
    $ sudo apt-get install docker-ce=<VERSION>
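To avoid copying the version string by hand, the first line of `apt-cache madison` output (the newest candidate) can be parsed automatically. The helper below is hypothetical, not from the original answer:

```shell
# Hypothetical helper: extract the version column from the first line of
# `apt-cache madison` output (fields are separated by '|'; the newest
# candidate is listed first).
latest_madison_version() {
  awk -F'|' 'NR == 1 { gsub(/ /, "", $2); print $2 }'
}

# On a real host (requires root):
#   VERSION=$(apt-cache madison docker-ce | latest_madison_version)
#   sudo apt-get install -y docker-ce="$VERSION"
```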
    

    • Installing Kube Tools - kubeadm, kubectl, kubelet:
    $ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    
    • Configure the Kubernetes repository (copy all three lines and paste them at once):
    $ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
    deb https://apt.kubernetes.io/ kubernetes-xenial main
    EOF
    
    • Installing packages:
    $ sudo apt-get update
    $ sudo apt-get install -y kubelet kubeadm kubectl
    
    • After installing, hold these packages so they are not updated automatically:
    $ sudo apt-mark hold kubelet kubeadm kubectl
    

    • Initialize the cluster with the pod network CIDR expected by Calico:
    $ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
    
    • Enable kubectl for your non-root user:
    $ mkdir -p $HOME/.kube
    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    • Apply the Calico manifest:
    $ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
    configmap/calico-config created
    customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
    clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrole.rbac.authorization.k8s.io/calico-node created
    clusterrolebinding.rbac.authorization.k8s.io/calico-node created
    daemonset.apps/calico-node created
    serviceaccount/calico-node created
    deployment.apps/calico-kube-controllers created
    serviceaccount/calico-kube-controllers created
    
    • Check the status:
    $ kubectl get pods -n kube-system
    NAME                                           READY   STATUS    RESTARTS   AGE
    calico-kube-controllers-555fc8cc5c-wnnvq       1/1     Running   0          26m
    calico-node-sngt8                              1/1     Running   0          26m
    coredns-66bff467f8-2qqlv                       1/1     Running   0          55m
    coredns-66bff467f8-vptpr                       1/1     Running   0          55m
    etcd-kubeadm-ufw-debian10                      1/1     Running   0          55m
    kube-apiserver-kubeadm-ufw-debian10            1/1     Running   0          55m
    kube-controller-manager-kubeadm-ufw-debian10   1/1     Running   0          55m
    kube-proxy-nx8cz                               1/1     Running   0          55m
    kube-scheduler-kubeadm-ufw-debian10            1/1     Running   0          55m
    

    Considerations:

    Sorry my ufw rules are a bit messy, I tried too many things to get kubernetes working.

    • It's normal to try many things to make something work, but sometimes those attempts end up becoming the issue itself.
    • I'm posting the step by step I followed to deploy on the same environment as yours, so you can follow it once again and achieve the same results.
    • My Felix probe didn't get any errors; the only time it failed was when I deliberately deployed Kubernetes without creating the rules in ufw.

    If it does not solve, next steps:

    • Now, if you still hit a similar problem after following this tutorial, please update the question with the output of:
      • kubectl describe pod <pod_name> -n kube-system
      • kubectl get pod <pod_name> -n kube-system
      • kubectl logs <pod_name> -n kube-system
    • It's always recommended to start with a clean installation of Linux; if you are running a VM, delete the VM and create a new one.
    • If you are running on bare metal, consider what else is running on the server; another piece of software may be interfering with network communication.
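If the Felix liveness error reappears, a few manual checks can narrow it down. The label selector and container name below are the defaults used by the Calico manifest; adjust them if yours differ:

```shell
# Probe Felix's health endpoint directly on the node (9099 is the default
# Felix health port, as seen in the error message):
curl -fsS http://localhost:9099/liveness && echo "Felix is live"

# Show ufw's effective default policies (incoming/outgoing/routed):
sudo ufw status verbose

# Tail the calico-node container logs for the underlying failure reason:
kubectl -n kube-system logs -l k8s-app=calico-node -c calico-node --tail=50
```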

    Let me know in the comments if you run into any problems while following these troubleshooting steps.