So I found myself in a pretty sticky situation. I'm trying to create a simple ReplicaSet, but I've run into a problem with Calico.
I have two VMs running in Oracle VM VirtualBox, both configured to use the enp0s8 interface. The master node's IP is 192.168.56.2 and the worker node's IP is 192.168.56.3.
Here is what I'm doing. First I initialize the Kubernetes master node:
kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.56.2
After the init succeeds I run:
export KUBECONFIG=/etc/kubernetes/admin.conf
Now I create the pod network by running:
kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
After that I join from the worker node successfully. Then I create the ReplicaSet with:
*** edit: I don't even have to create the ReplicaSet to get the same result; the calico-node pod gets stuck either way.
kubectl create -f replicaset-definition.yml
where the YAML looks like this:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
        - name: nginx-container
          image: nginx
  replicas: 2
  selector:
    matchLabels:
      app: myapp
A new calico-node pod is created, which eventually gets stuck:
calico-node-mcb5g 0/1 Running 6 8m58s
calico-node-t9p5n 1/1 Running 0 12m
If I run
kubectl logs -n kube-system calico-node-mcb5g -f
I get the following logs:
2020-03-18 14:45:40.585 [INFO][8] startup.go 275: Using NODENAME environment for node name
2020-03-18 14:45:40.585 [INFO][8] startup.go 287: Determined node name: kubenode1
2020-03-18 14:45:40.587 [INFO][8] k8s.go 228: Using Calico IPAM
2020-03-18 14:45:40.588 [INFO][8] startup.go 319: Checking datastore connection
2020-03-18 14:46:10.589 [INFO][8] startup.go 334: Hit error connecting to datastore - retry error=Get https://10.96.0.1:443/api/v1/nodes/foo: dial tcp 10.96.0.1:443: i/o timeout
2020-03-18 14:46:41.591 [INFO][8] startup.go 334: Hit error connecting to datastore - retry error=Get https://10.96.0.1:443/api/v1/nodes/foo: dial tcp 10.96.0.1:443: i/o timeout
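For context, the 10.96.0.1 address the pod is failing to reach is the ClusterIP of the kubernetes API service, which is the first usable address of kubeadm's default service CIDR (10.96.0.0/12). A quick sketch with Python's standard ipaddress module confirms that:

```python
import ipaddress

# kubeadm's default service CIDR
service_cidr = ipaddress.ip_network("10.96.0.0/12")

# The kubernetes API service is assigned the first usable address in the range
api_service_ip = next(service_cidr.hosts())
print(api_service_ip)  # 10.96.0.1
```

So the timeout means calico-node cannot reach the in-cluster API server endpoint at all, not that the endpoint is wrong.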
I've tried editing calico.yaml and adding the following to the calico-node container's env:
- name: IP_AUTODETECTION_METHOD
value: "interface=enp0s8"
but the result is still the same.
Thank you so much for reading this and if you have any advice I will be sooo grateful!!!
OK, so here it goes. The calico-node pod was crashing because the pod network CIDR (192.168.0.0/16) overlapped with the host network CIDR (192.168.56.0/24).
If I initialize the master node with the pod network CIDR changed:
kubeadm init --pod-network-cidr=20.96.0.0/12 --apiserver-advertise-address=192.168.56.2
it works like a charm.
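The overlap is easy to verify with Python's ipaddress module (the variable names here are just for illustration):

```python
import ipaddress

host_net = ipaddress.ip_network("192.168.56.0/24")     # enp0s8 host network
old_pod_cidr = ipaddress.ip_network("192.168.0.0/16")  # original --pod-network-cidr
new_pod_cidr = ipaddress.ip_network("20.96.0.0/12")    # CIDR used in the fix

print(old_pod_cidr.overlaps(host_net))  # True  -> clash: pod routes shadow the host network
print(new_pod_cidr.overlaps(host_net))  # False -> no clash
```

When the pod CIDR shadows the host network, Calico programs routes for 192.168.0.0/16 that swallow traffic meant for the nodes themselves, which is why the datastore connection times out.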
This helped a lot: Cluster Creation Successful but calico-node-xx pod is in CrashLoopBackOff Status