I have a k3s (lightweight k8s) cluster running on my Raspberry Pi. So, I am not using any cloud-hosted cluster but a bare metal one on my Raspberry Pi.
I have deployed an application with this manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: bashofmann/rancher-demo:1.0.0
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 200m
          ports:
            - containerPort: 8080
              name: web
              protocol: TCP
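To confirm that the Deployment rolled out (assuming kubectl on the workstation is pointed at the k3s cluster), something like this can be used:
kubectl rollout status deployment/hello-world -n myapp   # wait until all 3 replicas are ready
kubectl get pods -n myapp -l app=hello-world -o wide     # list the pods and the nodes they run on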
I also created a Service to forward traffic to the application pods. Its manifest is:
apiVersion: v1
kind: Service
metadata:
  name: demo-app-svc
  namespace: myapp
spec:
  selector:
    app: hello-world
  ports:
    - name: web
      protocol: TCP
      port: 31113
      targetPort: 8080
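A quick way to check that this Service actually selects the Deployment's pods (just a verification sketch, nothing k3s-specific) is to list its endpoints; an empty list would mean the selector and the pod labels don't match:
kubectl get endpoints demo-app-svc -n myapp   # should list three <pod-ip>:8080 entries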
Then, I created an Ingress for the routing rules:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ing
  namespace: myapp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.com
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: demo-app-svc
                port:
                  number: 31113
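To see how the Ingress was admitted (host, path and backend), the usual kubectl commands apply:
kubectl get ingress -n myapp
kubectl describe ingress myapp-ing -n myapp   # shows the rule for myapp.com and the demo-app-svc:31113 backend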
I successfully deployed the above Deployment, Service & Ingress to my k3s cluster. As the manifests indicate, they are all in the namespace myapp.
The next thing I would like to do is to deploy the Kubernetes Nginx Ingress Controller so that clients outside the cluster can access the deployed application.
So, I deployed it with:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.2/deploy/static/provider/cloud/deploy.yaml
The above command successfully deployed the Ingress Controller in the namespace ingress-nginx, along with other objects, as shown below with the command k get all -n ingress-nginx:
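Only the relevant Service line from that output is sketched here; every value except the pending external IP is a placeholder:
service/ingress-nginx-controller   LoadBalancer   10.43.x.x   <pending>   80:3xxxx/TCP,443:3xxxx/TCP   1m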
As you can see above, the LoadBalancer type service's external IP has the value <pending>. So, clients outside the cluster still cannot access the application pods.
Why is that, & what am I missing when deploying the Nginx Ingress Controller on a bare metal machine? The goal is to have an external IP that can be used to reach the application from outside the cluster; how can I achieve that?
===== Update =====
Based on the answer below from @Dawid Kruk, I decided to use the k3s default Traefik Ingress Controller.
So, I deleted all the deployed Nginx Ingress Controller resources with k delete all --all -n ingress-nginx.
Then, I checked the Traefik Ingress related LoadBalancer type service:
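In a default k3s install that Service lives in the kube-system namespace, so the check is roughly:
kubectl get svc traefik -n kube-system   # TYPE LoadBalancer; EXTERNAL-IP is filled in by k3s's ServiceLB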
The external IP of that Traefik service is exactly my Raspberry Pi's IP address!
So, I added this IP to /etc/hosts to map it to the hostname defined in my Ingress object:
192.168.10.203 myapp.com
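The same mapping can be exercised without a browser, e.g. with curl (the second form sends the Host header directly and skips /etc/hosts):
curl -v http://myapp.com/
curl -v -H "Host: myapp.com" http://192.168.10.203/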
I opened a browser & used the address http://myapp.com. With the routing rules defined in my Ingress object (see the manifest above), I hoped I would see my deployed web application, but I get 404 Page Not Found. What am I missing now to access my deployed application?
Another side question: I noticed that when I check the deployed Ingress object, its IP address is empty. Am I supposed to see an IP address for this object once the Traefik Ingress Controller takes effect?
Another issue: now, when I re-deploy my Ingress manifest with k apply -f ingress.yaml, I get this error:
Resource: "networking.k8s.io/v1, Resource=ingresses", GroupVersionKind: "networking.k8s.io/v1, Kind=Ingress"
...
for: "ingress.yaml": error when patching "ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found
It looks like even though I decided to use the Traefik Ingress Controller, I still need to install the Nginx Ingress Controller. I am confused now; can anyone explain this?
I'm not a K3S expert, but I think I found a piece of documentation that addresses your issue.
Take a look:
Service Load Balancer
Any service load balancer (LB) can be used in your K3s cluster. By default, K3s provides a load balancer known as ServiceLB (formerly Klipper Load Balancer) that uses available host ports.
Upstream Kubernetes allows Services of type LoadBalancer to be created, but doesn't include a default load balancer implementation, so these services will remain Pending until one is installed. Many hosted services require a cloud provider such as Amazon EC2 or Microsoft Azure to offer an external load balancer implementation. By contrast, the K3s ServiceLB makes it possible to use LoadBalancer Services without a cloud provider or any additional configuration.
How the Service LB Works
The ServiceLB controller watches Kubernetes Services with the spec.type field set to LoadBalancer.
For each LoadBalancer Service, a DaemonSet is created in the kube-system namespace. This DaemonSet in turn creates Pods with a svc- prefix, on each node. These Pods use iptables to forward traffic from the Pod's NodePort, to the Service's ClusterIP address and port.
If the ServiceLB Pod runs on a node that has an external IP configured, the node's external IP is populated into the Service's status.loadBalancer.ingress address list. Otherwise, the node's internal IP is used.
If multiple LoadBalancer Services are created, a separate DaemonSet is created for each Service. It is possible to expose multiple Services on the same node, as long as they use different ports.
If you try to create a LoadBalancer Service that listens on port 80, the ServiceLB will try to find a free host in the cluster for port 80. If no host with that port is available, the LB will remain Pending.
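The ServiceLB machinery described above can be seen on a live K3s node with standard kubectl commands; the svclb-* names below are what a default install typically creates:
kubectl get daemonset -n kube-system                  # a svclb-traefik DaemonSet created by ServiceLB
kubectl get pods -n kube-system -o wide | grep svclb  # its per-node pods holding host ports 80/443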
As a possible solution, I'd recommend using Traefik as it's the default Ingress controller within K3S.
The Pending status on your LoadBalancer is most likely caused by another service already using that port (Traefik).
If you still wish to use NGINX, the same documentation page explains how you can disable Traefik.
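For completeness, Traefik is usually disabled with the --disable traefik server flag, e.g. at install time (check the K3s docs for the variant that matches your setup):
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -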
I'd be more careful about deleting resources the way you did. The following command:
k delete all --all -n ingress-nginx
will not delete all of the resources that were created. The better way, in my opinion, would be to reuse the command that you used to create the resources and, instead of:
kubectl create -f ...
Use:
kubectl delete -f ...
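In your case that would be the same URL you applied. If only the admission webhook is left behind (it is a cluster-scoped object, so a namespaced kubectl delete all never touches it), it can also be removed on its own:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.2/deploy/static/provider/cloud/deploy.yaml
kubectl delete validatingwebhookconfiguration ingress-nginx-admission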
I assume that you did not modify your Ingress definition, hence you receive the error and kubectl get ingress is showing incorrect results.
What you will need to do:
spec:
  ingressClassName: nginx # <-- DELETE IT OR CHANGE TO "traefik"
Either deleting it or changing it should work, as traefik is set as the default IngressClass for this setup.
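Put together, the relevant part of the Ingress would then look roughly like this (the nginx.ingress.kubernetes.io/rewrite-target annotation is NGINX-specific and is simply ignored by Traefik, so it can be dropped too):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ing
  namespace: myapp
spec:
  ingressClassName: traefik   # or omit the field and rely on the default IngressClass
  rules:
    - host: myapp.com
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: demo-app-svc
                port:
                  number: 31113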