Tags: kubernetes, google-cloud-platform, tcp, load-balancing

Why use a TCP firewall rule with an external HTTP load balancer?


Question

I'm deploying an external HTTP load balancer for a Kubernetes cluster. Why does this require a VPC firewall rule that allows TCP traffic on port 80?

Context

In preparation for the Google Cloud Platform Associate Cloud Engineer exam, I'm studying on CloudSkillsBoost. There is a challenge lab (https://www.cloudskillsboost.google/focuses/10258?parent=catalog) whose Task 3 requires me to configure an external HTTP load balancer for a Kubernetes cluster containing two Nginx web server containers (that's a mouthful, I know).

I don't fully understand why the solution requires a TCP firewall rule. What is the thought process behind this architectural choice?

Architecture

I'm also trying to practice my architecture design skills. The diagram below is my interpretation of this cloud solution; I would love any constructive feedback on it.


Solution

    1. The load balancer itself neither needs nor supports VPC firewall rules. The port it listens on is defined by its frontend configuration (the forwarding rule), not by a firewall rule (see the first sketch after this list). If you want firewall-style filtering at the load balancer, Cloud Armor can optionally provide that layer.

    2. A Kubernetes cluster is, underneath, a collection of Compute Engine instances (the nodes). Traffic forwarded by the load balancer still has to enter your VPC to reach those instances, and that ingress is controlled by VPC firewall rules. In your drawing, the load balancer listens on port 80 and forwards to port 80 on the nodes, so you need an ingress firewall rule that allows TCP traffic on port 80 to reach them (see the second sketch after this list).
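
To make both points concrete, here are rough gcloud sketches, not the lab's exact solution. Every resource name in them (http-fe, web-proxy, allow-lb-tcp-80, the web target tag) is a made-up placeholder, and the default network is assumed.

For point 1, the listening port lives on the load balancer's frontend, i.e. the global forwarding rule that points at a target HTTP proxy you would have created earlier:

    # The port the LB listens on is set here, on the frontend,
    # not in any firewall rule. "web-proxy" is a hypothetical
    # target HTTP proxy created in an earlier step.
    gcloud compute forwarding-rules create http-fe \
        --global \
        --target-http-proxy=web-proxy \
        --ports=80

For point 2, an ingress firewall rule admits the load balancer's forwarded traffic and health checks into the VPC on port 80. The two source ranges are Google's documented proxy/health-check ranges for external HTTP(S) load balancers:

    # Allow TCP:80 from Google's front-end and health-check ranges
    # to instances tagged "web" (a hypothetical tag on the cluster nodes).
    gcloud compute firewall-rules create allow-lb-tcp-80 \
        --network=default \
        --direction=INGRESS \
        --allow=tcp:80 \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --target-tags=web

Without the second rule, the forwarding rule still exists and the load balancer still listens on port 80, but health checks fail and the backends never receive traffic, which is why the lab's solution includes it.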