Tags: kubernetes, kubernetes-health-check, kubelet

Cannot start API server after modifications


I was able to successfully configure a K8s cluster, but later I wanted to allow anonymous access to the kube-apiserver, so I added the following parameters to /kube-apiserver.yaml:

- --insecure-bind-address=0.0.0.0
- --insecure-port=8080
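
For context, these flags go into the command list of the kube-apiserver static pod manifest. A minimal sketch of the relevant fragment (the image tag and surrounding entries are illustrative, based on a typical CoreOS-style manifest, not my exact file):

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: quay.io/coreos/hyperkube:v1.6.2_coreos.0  # illustrative tag
        command:
        - /hyperkube
        - apiserver
        # ...existing flags unchanged...
        - --insecure-bind-address=0.0.0.0  # serve plain HTTP on all interfaces
        - --insecure-port=8080             # unauthenticated, unencrypted port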

But when I restarted the service, the apiserver failed to come up. I reverted to the original configs, but I still get the following errors when I start the service. There are all sorts of errors; I think the main reason is that the kubelet is unable to start the API server.

● kubelet.service
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2017-05-14 01:57:40 UTC; 1min 16s ago
  Process: 4055 ExecStartPre=/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid (code=exited, status=0/SUCCESS)
  Process: 4050 ExecStartPre=/usr/bin/mkdir -p /var/log/containers (code=exited, status=0/SUCCESS)
  Process: 4045 ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests (code=exited, status=0/SUCCESS)
 Main PID: 4082 (kubelet)
    Tasks: 15 (limit: 32768)
   Memory: 55.0M
      CPU: 7.876s
   CGroup: /system.slice/kubelet.service
           ├─4082 /kubelet --api-servers=http://127.0.0.1:8080 --register-schedulable=false --cni-conf-dir=/etc/kubernetes/cni/net.d --network-plugin= --container-runtime=docker --allow-privileged=true --pod-manifest-path=/etc/kubernetes/manifests --hostname-override=192.168.57.12 --cluster_dns=10.3.0.10 --cluster_domain=cluster.local
           └─4126 journalctl -k -f

May 14 01:58:42 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: E0514 01:58:42.403056    4082 kubelet_node_status.go:101] Unable to register node "192.168.57.12" with API server: Post http://127.0.0.1:8080/api/v1/nodes: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 01:58:46 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: E0514 01:58:46.565119    4082 eviction_manager.go:214] eviction manager: unexpected err: failed GetNode: node '192.168.57.12' not found
May 14 01:58:49 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: I0514 01:58:49.403315    4082 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
May 14 01:58:49 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: I0514 01:58:49.406572    4082 kubelet_node_status.go:77] Attempting to register node 192.168.57.12
May 14 01:58:49 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: E0514 01:58:49.467143    4082 kubelet_node_status.go:101] Unable to register node "192.168.57.12" with API server: rpc error: code = 13 desc = transport is closing
May 14 01:58:53 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: I0514 01:58:53.717328    4082 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
May 14 01:58:56 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: I0514 01:58:56.467325    4082 kubelet_node_status.go:230] Setting node annotation to enable volume controller attach/detach
May 14 01:58:56 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: I0514 01:58:56.469607    4082 kubelet_node_status.go:77] Attempting to register node 192.168.57.12
May 14 01:58:56 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: E0514 01:58:56.540698    4082 kubelet_node_status.go:101] Unable to register node "192.168.57.12" with API server: rpc error: code = 13 desc = transport is closing
May 14 01:58:56 a-test-1772868e-a036-4392-bbfc-d7c811967e88.novalocal kubelet-wrapper[4082]: E0514 01:58:56.624800    4082 eviction_manager.go:214] eviction manager: unexpected err: failed GetNode: node '192.168.57.12' not found
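
The repeated "connection refused" on 127.0.0.1:8080 only says the kubelet cannot reach the apiserver it is supposed to be running as a static pod, so the apiserver's own logs are the more useful signal. One way to get at them (assuming the Docker runtime shown in the kubelet flags above):

    # Find the apiserver container, even if it has already exited:
    docker ps -a | grep apiserver

    # Dump its output, or read the same stream from disk:
    docker logs <container-id>
    sudo ls /var/log/pods/

The logs in the edit below came from that /var/log/pods directory.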

How can I overcome this issue? Is there a way to clean everything and start fresh? I assume some stale metadata is still lingering.

Edit

Full logs from /var/log/pods:

{"log":"[restful] 2017/05/14 02:13:39 log.go:30: [restful/swagger] listing is available at https://192.168.57.12:443/swaggerapi/\n","stream":"stderr","time":"2017-05-14T02:13:39.793102449Z"}
{"log":"[restful] 2017/05/14 02:13:39 log.go:30: [restful/swagger] https://192.168.57.12:443/swaggerui/ is mapped to folder /swagger-ui/\n","stream":"stderr","time":"2017-05-14T02:13:39.79318582Z"}
{"log":"E0514 02:13:39.808436       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.LimitRange: Get https://localhost:443/api/v1/limitranges?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T02:13:39.808684379Z"}
{"log":"E0514 02:13:39.827225       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.ServiceAccount: Get https://localhost:443/api/v1/serviceaccounts?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T02:13:39.827488516Z"}
{"log":"E0514 02:13:39.827352       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *storage.StorageClass: Get https://localhost:443/apis/storage.k8s.io/v1beta1/storageclasses?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T02:13:39.827527463Z"}
{"log":"E0514 02:13:39.836498       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.ResourceQuota: Get https://localhost:443/api/v1/resourcequotas?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T02:13:39.85392487Z"}
{"log":"E0514 02:13:39.836599       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.Secret: Get https://localhost:443/api/v1/secrets?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T02:13:39.853986447Z"}
{"log":"E0514 02:13:39.836878       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.Namespace: Get https://localhost:443/api/v1/namespaces?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T02:13:39.853997731Z"}
{"log":"I0514 02:13:40.063564       1 serve.go:79] Serving securely on 0.0.0.0:443\n","stream":"stderr","time":"2017-05-14T02:13:40.063882848Z"}
{"log":"I0514 02:13:40.063699       1 serve.go:94] Serving insecurely on 127.0.0.1:8080\n","stream":"stderr","time":"2017-05-14T02:13:40.063934866Z"}
{"log":"E0514 02:13:40.290119       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport: write tcp 192.168.57.12:34040-\u003e192.168.57.13:2379: write: broken pipe\n","stream":"stderr","time":"2017-05-14T02:13:40.290393332Z"}
{"log":"E0514 02:13:40.425110       1 client_ca_hook.go:58] rpc error: code = 13 desc = transport: write tcp 192.168.57.12:34040-\u003e192.168.57.13:2379: write: broken pipe\n","stream":"stderr","time":"2017-05-14T02:13:40.425345333Z"}
{"log":"E0514 02:13:41.169712       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport: write tcp 192.168.57.12:36072-\u003e192.168.57.13:2379: write: connection reset by peer\n","stream":"stderr","time":"2017-05-14T02:13:41.169945414Z"}
{"log":"E0514 02:13:42.597820       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport is closing\n","stream":"stderr","time":"2017-05-14T02:13:42.598129559Z"}
{"log":"E0514 02:13:44.957615       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport: write tcp 192.168.57.12:43412-\u003e192.168.57.13:2379: write: broken pipe\n","stream":"stderr","time":"2017-05-14T02:13:44.957912009Z"}
{"log":"E0514 02:13:48.209202       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport: write tcp 192.168.57.12:49898-\u003e192.168.57.13:2379: write: broken pipe\n","stream":"stderr","time":"2017-05-14T02:13:48.209484622Z"}
{"log":"E0514 02:13:49.791540       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport is closing\n","stream":"stderr","time":"2017-05-14T02:13:49.79181274Z"}
{"log":"I0514 02:13:50.925762       1 trace.go:61] Trace \"Create /api/v1/namespaces/kube-system/pods\" (started 2017-05-14 02:13:40.914013106 +0000 UTC):\n","stream":"stderr","time":"2017-05-14T02:13:50.926040257Z"}
{"log":"[33.749µs] [33.749µs]
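
The repeated "broken pipe" / "connection reset" errors on 192.168.57.12 → 192.168.57.13:2379 are the apiserver's etcd connection dying, so the etcd side is worth checking directly. A quick probe might be (assuming the etcd v2 CLI, which matches the storage backend in the solution below):

    # Confirm which etcd version is actually running:
    etcdctl --version

    # etcd2-style health check against the member the apiserver talks to:
    etcdctl --endpoints http://192.168.57.13:2379 cluster-health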

Solution

  • I was able to get this solved by setting the following two parameters in kube-apiserver.yaml. The issue was that, by default, the API server is configured to communicate with an etcd3 server, while my cluster was running etcd2, so I had to set the etcd version explicitly:

    --storage-backend=etcd2
    --storage-media-type=application/json
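
    For completeness, a sketch of how the two flags slot into the manifest's command list (the surrounding entries are illustrative):

        command:
        - /hyperkube
        - apiserver
        # ...existing flags unchanged...
        - --storage-backend=etcd2                # match the etcd version actually running
        - --storage-media-type=application/json  # etcd2 stores JSON, not protobuf

    After saving the manifest, the kubelet should notice the change and re-create the static pod on its own; running "sudo systemctl restart kubelet" forces it if needed.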