Tags: amazon-web-services, kubernetes, k3s, aws-load-balancer-controller

No instances attached to Target Group using AWS Load Balancer Controller


I'm deploying a K3s cluster in AWS and I want to use the AWS Load Balancer Controller. When I create an Ingress or a Service of type LoadBalancer, all the resources are created on AWS (ALB, target group, security group), but the target group is empty. I can manually add my cluster's instances to the target group and everything works perfectly, but doing that every time would be problematic, and the AWS controller is supposed to do it on its own.
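For illustration, a minimal Ingress along the lines of what I'm describing might look like the sketch below (the names are placeholders, not my actual manifests; the backing Service is assumed to be of type NodePort, which the instance target-type requires):

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress                                  # placeholder name
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance   # register cluster nodes in the target group
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service                    # placeholder NodePort Service
                port:
                  number: 80
EOF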

My cluster currently consists of a single master node, created with this script:

curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - --cluster-init --tls-san <NLB> --kubelet-arg cloud-provider=external --kubelet-arg provider-id=aws:///us-east-1b/<instance-id> --disable traefik

The NLB was created beforehand and exposes port 6443; the idea is to use it for more instances in the future. The K3s version is v1.27.4+k3s1.

I installed the AWS Load Balancer Controller using Helm with

helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=your-cluster-name

and it is working.
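Outside EKS the chart usually also needs the AWS region and VPC ID set explicitly, since the controller cannot always auto-discover them from instance metadata. A sketch with placeholder values:

# region and vpcId below are placeholder values for this sketch
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=your-cluster-name \
  --set region=us-east-1 \
  --set vpcId=vpc-0123456789abcdef0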

I have tagged my subnets with:
kubernetes.io/cluster/your-cluster-name: shared
kubernetes.io/role/internal-elb: 1

The instance has this tag:
kubernetes.io/cluster/your-cluster-name: owned

And the security group the instance has had since the beginning has this tag:
kubernetes.io/cluster/your-cluster-name: owned
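For reference, the tagging above can be applied with the AWS CLI roughly like this (the resource IDs are placeholders):

aws ec2 create-tags --resources subnet-0aaaaaaaaaaaaaaa0 subnet-0bbbbbbbbbbbbbbb0 \
  --tags Key=kubernetes.io/cluster/your-cluster-name,Value=shared Key=kubernetes.io/role/internal-elb,Value=1
aws ec2 create-tags --resources i-0123456789abcdef0 sg-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/your-cluster-name,Value=owned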

The controller's logs show that everything is created successfully and there are no errors.

I have recreated the cluster and reinstalled the Load Balancer Controller several times, changing the cluster name. All the information I can find about this issue focuses on the tags, but I already have all the tags that are mentioned.
I added admin permissions to the instance in case it was a permissions problem, but that didn't solve it either.

I tried creating the cluster with provider=aws, but cluster creation fails that way.

I tried disabling ServiceLB and the cloud controller and installing the AWS cloud provider as shown in this repo https://github.com/kmcgrath/k3s-terraform-modules/blob/master/modules/k3s_master/master_instance.tf, but the nodes are still not attached to the target group.
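For reference, disabling those built-in components is done with K3s install flags roughly like this (a sketch based on my server command above, with placeholders):

curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - \
  --cluster-init \
  --tls-san <NLB> \
  --disable traefik \
  --disable servicelb \
  --disable-cloud-controller \
  --kubelet-arg cloud-provider=external \
  --kubelet-arg provider-id=aws:///us-east-1b/<instance-id>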


Solution

  • I solved this issue by creating non-master nodes; apparently the AWS Load Balancer Controller ignores master nodes by default. I haven't found a way to disable this behavior in order to have only master nodes (a sketch of joining an agent node is below).
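Joining a non-master (agent) node through the existing NLB looks roughly like this (a sketch following the server command above; the token and instance ID are placeholders):

# With K3S_URL set, the install script sets the node up as an agent rather than a server
curl -sfL https://get.k3s.io | K3S_URL=https://<NLB>:6443 K3S_TOKEN=<token> sh -s - \
  --kubelet-arg cloud-provider=external \
  --kubelet-arg provider-id=aws:///us-east-1b/<agent-instance-id>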