I've been following this guide for starting Kubernetes with kubeadm. I've completed that and now have a single-machine cluster running on a CentOS 7 VM. I installed a pod network (Weave Net) and the Kubernetes Dashboard. Next, I ran kubectl proxy, which responded with Starting to serve on 127.0.0.1:8001.
However, whenever I try to access the dashboard at localhost:8001/ui or 127.0.0.1:8001/ui, I am redirected to an error page from my corporate proxy reporting a gateway timeout when trying to reach http://10.32.0.4/.
I figured 10.32.0.4 was missing from some proxy exception list, so I added it to the no_proxy and NO_PROXY environment variables, specified it in the proxy settings in the GUI, and made sure Docker is set up with the same exception. I have even (to the best of my knowledge) completely removed any trace of proxy settings, in the hope that nothing would try to go through the corporate proxy to reach what should be an internal address. Additional info:
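To make the question concrete, here is a hypothetical sketch of the kind of exceptions described above; the exact lists and values are assumptions, not the settings from my actual machine. Note that many clients do not honor CIDR notation in no_proxy, so the concrete endpoint IP is listed explicitly as well:

```shell
# Hypothetical proxy exceptions (values assumed from the output below):
# pod endpoint 10.32.0.4, service CIDR 10.96.0.0/12, node IP 192.168.181.130.
export no_proxy="localhost,127.0.0.1,10.32.0.4,10.96.0.0/12,192.168.181.130"
export NO_PROXY="$no_proxy"
echo "$no_proxy"
```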
[root@localhost ~]# kubectl get nodes
NAME                    STATUS    AGE       VERSION
localhost.localdomain   Ready     22h       v1.6.4
[root@localhost ~]# kubectl get pods --namespace=kube-system
NAME                                            READY     STATUS    RESTARTS   AGE
etcd-localhost.localdomain                      1/1       Running   0          22h
kube-apiserver-localhost.localdomain            1/1       Running   0          22h
kube-controller-manager-localhost.localdomain   1/1       Running   0          22h
kube-dns-3913472980-8zm51                       3/3       Running   0          22h
kube-proxy-3wslb                                1/1       Running   0          22h
kube-scheduler-localhost.localdomain            1/1       Running   0          22h
kubernetes-dashboard-2039414953-79zbr           1/1       Running   0          22h
weave-net-z6kml                                 2/2       Running   0          22h
[root@localhost ~]# kubectl describe svc kubernetes-dashboard --namespace=kube-system
Name:              kubernetes-dashboard
Namespace:         kube-system
Labels:            k8s-app=kubernetes-dashboard
Annotations:       <none>
Selector:          k8s-app=kubernetes-dashboard
Type:              ClusterIP
IP:                10.96.33.253
Port:              <unset>  80/TCP
Endpoints:         10.32.0.4:9090
Session Affinity:  None
Events:            <none>
[root@localhost ~]# kubectl get deployment kubernetes-dashboard --namespace=kube-system
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1         1         1            1           22h
[root@localhost ~]# kubectl --namespace=kube-system get ep kubernetes-dashboard
NAME                   ENDPOINTS        AGE
kubernetes-dashboard   10.32.0.4:9090   22h
[root@localhost ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.181.130:6443
KubeDNS is running at https://192.168.181.130:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@localhost ~]# kubectl get ns
NAME          STATUS    AGE
default       Active    22h
kube-public   Active    22h
kube-system   Active    22h
[root@localhost ~]# kubectl get ep
NAME         ENDPOINTS              AGE
kubernetes   192.168.181.130:6443   22h
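One way I could imagine narrowing this down (a debugging sketch, not something from my setup): request the dashboard through the apiserver proxy path with curl, with all proxy environment variables stripped, and watch where the redirect goes. The URL below follows the v1.6-era /api/v1/proxy scheme and is an assumption:

```shell
# Hypothetical debugging step: build the v1.6-style apiserver proxy URL
# for the dashboard service, to be fetched with proxy env vars cleared.
DASHBOARD_URL="http://127.0.0.1:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/"
echo "$DASHBOARD_URL"
# env -i curl -v "$DASHBOARD_URL"   # env -i drops http_proxy/no_proxy; -v shows any redirect
```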
I'm really not sure where to go from here. There are a lot of moving parts, and I can't find a way to see what goes wrong when the redirect happens.
Proxy settings are copied by kubeadm at node creation and don't reflect changes made afterwards. You need to update the proxy settings in /etc/kubernetes/manifests/kube-apiserver.yaml
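A minimal sketch of what that edit could look like: adding no_proxy to the kube-apiserver container's env section in the static pod manifest. The CIDR and IP values below are assumptions based on the service and pod IPs shown in the question; kubelet restarts the static pod once the manifest file changes:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment, hypothetical values)
spec:
  containers:
  - name: kube-apiserver
    env:
    - name: no_proxy
      value: "127.0.0.1,localhost,10.32.0.4,10.96.0.0/12,192.168.181.130"
    - name: NO_PROXY
      value: "127.0.0.1,localhost,10.32.0.4,10.96.0.0/12,192.168.181.130"
```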