
Kubernetes service communication issue - kube-dns


I have two pods, each exposed through a service, up and running on VirtualBox VMs on my laptop, and kube-dns is working. One pod runs a web service and the other runs MongoDB.

The spec of the webapp pod is below:

spec:
  containers:
    - resources:
        limits:
          cpu: 0.5
          .
          .
      name: wsemp
      ports:
        - containerPort: 8080
        #   name: wsemp
      # command: ["java","-Dspring.data.mongodb.uri=mongodb://192.168.6.103:30061/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
      command: ["java","-Dspring.data.mongodb.uri=mongodb://mongoservice/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
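
The mongoservice host in that URI is resolved by kube-dns; since both pods live in the default namespace the short name works, and with MongoDB's default port it is equivalent to the fully qualified form below (shown only for reference):

command: ["java","-Dspring.data.mongodb.uri=mongodb://mongoservice.default.svc.cluster.local:27017/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]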

The spec of the corresponding service:

apiVersion: v1
kind: Service
metadata:
  labels:
    name: webappservice
  name: webappservice
spec:
  ports:
   - port: 8080
     nodePort: 30062
     targetPort: 8080
     protocol: TCP
  type: NodePort
  selector:
    name: webapp
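
The three port fields in a NodePort service are easy to mix up; annotated, the block above means:

ports:
 - port: 8080        # port on the service's cluster IP, i.e. webappservice:8080 inside the cluster
   nodePort: 30062   # port opened on every node's own IP for access from outside the cluster
   targetPort: 8080  # containerPort on the pods selected by this service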

MongoDB pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: mongodb
  labels:
    name: mongodb
spec:
  containers:
    - name: mongodb
      .
      .
      ports:
        - containerPort: 27017

MongoDB service spec:

apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongodb
  name: mongoservice
spec:
  ports:
   - port: 27017
     nodePort: 30061
     targetPort: 27017
     protocol: TCP
  type: NodePort
  selector:
    name: mongodb

Update: the targetPort values in both services were corrected after a comment.

Issue

When the webapp starts, it is not able to connect to mongoservice on port 27017 and logs this error:

Exception in monitor thread while connecting to server mongoservice:27017
com.mongodb.MongoSocketOpenException: Exception opening socket
at com.mongodb.connection.SocketStream.open(SocketStream.java:63) ~[mongodb-driver-core-3.2.2.jar!/:na]
at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:114) ~[mongodb-driver-core-3.2.2.jar!/:na]
at com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:128) ~[mongodb-driver-core-3.2.2.jar!/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_111]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_111]

describe svc

kubectl describe svc mongoservice
Name:           mongoservice
Namespace:      default
Labels:         name=mongodb
Selector:       name=mongodb
Type:           NodePort
IP:         10.254.146.189
Port:           <unset> 27017/TCP
NodePort:       <unset> 30061/TCP
Endpoints:      172.17.99.2:27017
Session Affinity:   None
No events.

kubectl describe svc webappservice 
Name:           webappservice
Namespace:      default
Labels:         name=webappservice
Selector:       name=webapp
Type:           NodePort
IP:         10.254.112.121
Port:           <unset> 8080/TCP
NodePort:       <unset> 30062/TCP
Endpoints:      172.17.99.3:8080
Session Affinity:   None
No events.
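
The Endpoints lines above show that each service has matched its pod. The same can be checked directly with standard kubectl (the output shape varies slightly by version):

kubectl get endpoints mongoservice webappservice
NAME            ENDPOINTS           AGE
mongoservice    172.17.99.2:27017   ...
webappservice   172.17.99.3:8080    ...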

Debugging

root@webapp:/# nslookup mongoservice
Server:     10.254.0.2
Address:    10.254.0.2#53

Non-authoritative answer:
Name:   mongoservice.default.svc.cluster.local
Address: 10.254.146.189

root@webapp:/# curl 10.254.146.189:27017
curl: (7) Failed to connect to 10.254.146.189 port 27017: Connection refused
root@webapp:/# curl mongoservice:27017
curl: (7) Failed to connect to mongoservice port 27017: Connection refused
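
A useful extra check at this point is to bypass the service VIP and curl the mongodb pod IP (172.17.99.2, from the endpoints above) directly from inside the webapp pod; if this also fails, the fault is in pod-to-pod networking rather than in kube-proxy or DNS:

root@webapp:/# curl --connect-timeout 5 172.17.99.2:27017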


sudo iptables-save | grep webapp

-A KUBE-NODEPORTS -p tcp -m comment --comment "default/webappservice:" -m tcp --dport 30062 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/webappservice:" -m tcp --dport 30062 -j KUBE-SVC-NQBDRRKQULANV7O3
-A KUBE-SEP-IE7EBTQCN7T6HXC4 -s 172.17.99.3/32 -m comment --comment "default/webappservice:" -j KUBE-MARK-MASQ
-A KUBE-SEP-IE7EBTQCN7T6HXC4 -p tcp -m comment --comment "default/webappservice:" -m tcp -j DNAT --to-destination 172.17.99.3:8080
-A KUBE-SERVICES -d 10.254.217.24/32 -p tcp -m comment --comment "default/webappservice: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-NQBDRRKQULANV7O3
-A KUBE-SVC-NQBDRRKQULANV7O3 -m comment --comment "default/webappservice:" -j KUBE-SEP-IE7EBTQCN7T6HXC4
$ curl 10.254.217.24:8080
{"timestamp":1486678423757,"status":404,"error":"Not Found","message":"No message available","path":"/"}[osboxes@kube-node1 ~]$ 


sudo iptables-save | grep mongo
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mongoservice:" -m tcp --dport 30061 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/mongoservice:" -m tcp --dport 30061 -j KUBE-SVC-2HQWGC3WSIBZF7CN
-A KUBE-SEP-FVWOWAWXXVAVIQ5O -s 172.17.99.2/32 -m comment --comment "default/mongoservice:" -j KUBE-MARK-MASQ
-A KUBE-SEP-FVWOWAWXXVAVIQ5O -p tcp -m comment --comment "default/mongoservice:" -m tcp -j DNAT --to-destination 172.17.99.2:27017
-A KUBE-SERVICES -d 10.254.146.189/32 -p tcp -m comment --comment "default/mongoservice: cluster IP" -m tcp --dport 27017 -j KUBE-SVC-2HQWGC3WSIBZF7CN
-A KUBE-SVC-2HQWGC3WSIBZF7CN -m comment --comment "default/mongoservice:" -j KUBE-SEP-FVWOWAWXXVAVIQ5O
[osboxes@osboxes ~]$ sudo curl  10.254.146.189:8080
^C[osboxes@osboxes ~]$ sudo curl  10.254.146.189:27017

It looks like you are trying to access MongoDB over HTTP on the native driver port.


root@mongodb:/# netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN     
tcp        0      0 172.17.99.2:60724       151.101.128.204:80      TIME_WAIT  
tcp        0      0 172.17.99.2:60728       151.101.128.204:80      TIME_WAIT  

The mongodb container shows no errors on startup, and netstat above shows mongod listening on 0.0.0.0:27017, so it is not a bind-address problem. Also note that the curl to 10.254.146.189:27017 above was run from the node itself and did reach mongod (hence the "trying to access MongoDB over HTTP" response), while the same curl from inside the webapp pod was refused; that suggests the problem is in pod-to-pod networking rather than in the service or mongod.

I'm trying to follow the steps in https://kubernetes.io/docs/user-guide/debugging-services/#iptables, but I'm stuck at the part that says "try restarting kube-proxy with the -V flag set to 4", since I don't know how to do that.
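
On a systemd-managed install, that would look roughly like the sketch below (/etc/kubernetes/proxy is the typical path on RPM-based setups such as this one, but it is an assumption and depends on how the cluster was installed):

# /etc/kubernetes/proxy  (assumed path)
KUBE_PROXY_ARGS="--v=4"

sudo systemctl restart kube-proxy
sudo journalctl -u kube-proxy -f    # follow the verbose logs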

I'm not a networking person, so I don't know what needs to be analyzed here or how. Any debugging tips would be a great help.

Thanks.


Solution

  • Thanks. I got a clue on this: since I was using the flannel network, the issue was with communication between the pods across the flannel network.

    In particular this part, FLANNEL_OPTIONS="--iface=eth1", as mentioned in http://jayunit100.blogspot.com/2015/06/flannel-and-vagrant-heads-up.html; on VirtualBox/Vagrant setups flannel otherwise picks the default-route interface (the NAT eth0), over which the VMs cannot reach each other. See the sketch below.

    Thanks.
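
    A minimal sketch of the fix, assuming an RPM-based install where flanneld reads /etc/sysconfig/flanneld and eth1 is the VirtualBox host-only interface the VMs share:

    # /etc/sysconfig/flanneld
    FLANNEL_OPTIONS="--iface=eth1"    # bind flannel to the inter-VM interface instead of the NAT eth0

    sudo systemctl restart flanneld
    sudo systemctl restart docker     # so containers come up on the corrected overlay network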