Tags: spring-boot, kubernetes, namespaces, synchronization, hazelcast

kubernetes hazelcast error synchronizing only in one namespace


I am trying to use an embedded Hazelcast service in my microservices app deployed on Kubernetes. In one namespace I am able to connect the pods of these instances by using a ServiceAccount, a ClusterRoleBinding and a Service, but in another namespace the pods try to connect and nothing happens.

IMPORTANT: I do NOT want to connect pods across the dev and release namespaces. In each namespace I have two instance pods that should be connected to each other.

The configuration is as follows:

ClusterRoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"ClusterRoleBindingCache"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"view"},"subjects":[{"kind":"ServiceAccount","name":"service-account-caches","namespace":"dev"},{"kind":"ServiceAccount","name":"service-account-caches","namespace":"release"}]}
  creationTimestamp: "2020-05-07T09:29:19Z"
  name: ClusterRoleBindingCache
  resourceVersion: "31022325"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/ClusterRoleBindingCache
  uid: XXXXXX
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: service-account-caches
  namespace: dev
- kind: ServiceAccount
  name: service-account-caches
  namespace: release
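
Since the join runs in KUBERNETES_API discovery mode, it may be worth sanity-checking that this binding actually grants the service account read access to endpoints in each namespace. A possible check (standard kubectl impersonation syntax; the names come from the manifests above):

```shell
# Verify the service account can read the endpoints that Hazelcast's
# KUBERNETES_API discovery queries; repeat with -n dev for the dev namespace.
kubectl auth can-i get endpoints \
  --as=system:serviceaccount:release:service-account-caches -n release
```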

ServiceAccount for the dev namespace:

apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"service-account-caches","namespace":"dev"}}
  creationTimestamp: "2020-03-02T14:23:55Z"
  name: service-account-caches
  namespace: dev
  resourceVersion: "19447813"
  selfLink: /api/v1/namespaces/dev/serviceaccounts/service-account-caches
  uid: XXXX
secrets:
- name: service-account-caches-token-nz7jh

ServiceAccount for the release namespace:

apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"service-account-caches","namespace":"release"}}
  creationTimestamp: "2020-04-06T08:28:45Z"
  name: service-account-caches
  namespace: release
  resourceVersion: "25692953"
  selfLink: /api/v1/namespaces/release/serviceaccounts/service-account-caches
  uid: XXXX
secrets:
- name: service-account-caches-token-x7dmc

SVC for dev:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"operation-cache-hazelcast","release":"operation-cache"},"name":"operation-cache-hazelcast","namespace":"dev"},"spec":{"ports":[{"name":"hazelcast","port":5701,"protocol":"TCP","targetPort":5701}],"selector":{"app":"back","release":"operation-cache"}}}
  creationTimestamp: "2020-03-03T09:42:38Z"
  labels:
    name: operation-cache-hazelcast
    release: operation-cache
  name: operation-cache-hazelcast
  namespace: dev
  resourceVersion: "19600693"
  selfLink: /api/v1/namespaces/dev/services/operation-cache-hazelcast
  uid: XXXXX
spec:
  clusterIP: 10.0.X1.XX1
  ports:
  - name: hazelcast
    port: 5701
    protocol: TCP
    targetPort: 5701
  selector:
    app: back
    release: operation-cache
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

SVC for release:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"operation-cache-hazelcast","release":"operation-cache"},"name":"operation-cache-hazelcast","namespace":"release"},"spec":{"ports":[{"name":"hazelcast","port":5701,"protocol":"TCP","targetPort":5701}],"selector":{"app":"back","release":"operation-cache"}}}
  creationTimestamp: "2020-05-07T10:26:38Z"
  labels:
    name: operation-cache-hazelcast
    release: operation-cache
  name: operation-cache-hazelcast
  namespace: release
  resourceVersion: "31029600"
  selfLink: /api/v1/namespaces/release/services/operation-cache-hazelcast
  uid: XXXXX
spec:
  clusterIP: 10.0.X2.XX2
  ports:
  - name: hazelcast
    port: 5701
    protocol: TCP
    targetPort: 5701
  selector:
    app: back
    release: operation-cache
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

The Deployment is exactly the same for both. Now the logs are the following, dev endpoints first.

In dev we can see how the two pods connect perfectly:

kubectl config use-context dev
kubectl get endpoints

NAME                           ENDPOINTS                            AGE
operation-cache-back           10.244.5.15:8080,10.244.1.72:8080   42d
operation-cache-hazelcast      10.244.5.15:5701,10.244.1.72:5701   12d
2020-05-19 11:28:41,680 INFO class=org.springframework.boot.StartupInfoLogger  Starting Application on operation-cache-back-9bccc5d99-jxfsz with PID 6 (/app/app.jar started by ? in /)
2020-05-19 11:28:41,760 INFO class=org.springframework.boot.SpringApplication  The following profiles are active: DDBBSecurized,des-indra-env
2020-05-19 11:29:07,040 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [LOCAL] [dev] [3.11.4] Prefer IPv4 stack is true, prefer IPv6 addresses is false
2020-05-19 11:29:07,135 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [LOCAL] [dev] [3.11.4] Picked [10.244.1.72]:5701, using socket ServerSocket[addr=/0.0.0.0,localport=5701], bind any local is true
2020-05-19 11:29:07,277 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.1.72]:5701 [dev] [3.11.4] Hazelcast 3.11.4 (20190509 - d5ad9d4) starting at [10.244.1.72]:5701
2020-05-19 11:29:07,282 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.1.72]:5701 [dev] [3.11.4] Copyright (c) 2008-2019, Hazelcast, Inc. All Rights Reserved.
2020-05-19 11:29:07,287 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.1.72]:5701 [dev] [3.11.4] A non-empty group password is configured for the Hazelcast member. Starting with Hazelcast version 3.8.2, members with the same group name, but with different group passwords (that do not use authentication) form a cluster. The group password configuration will be removed completely in a future release.
2020-05-19 11:29:09,295 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.1.72]:5701 [dev] [3.11.4] Backpressure is disabled
2020-05-19 11:29:12,983 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.1.72]:5701 [dev] [3.11.4] Kubernetes Discovery properties: { service-dns: null, service-dns-timeout: 5, service-name: operation-cache-hazelcast, service-port: 0, service-label: null, service-label-value: true, namespace: evosago-app-dev, pod-label: null, pod-label-value: null, resolve-not-ready-addresses: false, use-node-name-as-external-address: false, kubernetes-api-retries: 3, kubernetes-master: https://kubernetes.default.svc}
2020-05-19 11:29:13,044 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.1.72]:5701 [dev] [3.11.4] Kubernetes Discovery activated with mode: KUBERNETES_API
2020-05-19 11:29:13,530 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.1.72]:5701 [dev] [3.11.4] Activating Discovery SPI Joiner
2020-05-19 11:29:14,856 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.1.72]:5701 [dev] [3.11.4] Starting 2 partition threads and 3 generic threads (1 dedicated for priority tasks)
2020-05-19 11:29:14,930 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.1.72]:5701 [dev] [3.11.4] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2020-05-19 11:29:14,970 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.1.72]:5701 [dev] [3.11.4] [10.244.1.72]:5701 is STARTING
2020-05-19 11:29:15,473 WARN class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.1.72]:5701 [dev] [3.11.4] Cannot fetch the current zone, ZONE_AWARE feature is disabled
2020-05-19 11:29:15,739 WARN class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  Cannot fetch public IPs of Hazelcast Member PODs, you won't be able to use Hazelcast Smart Client from outside of the Kubernetes network
2020-05-19 11:29:15,844 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.1.72]:5701 [dev] [3.11.4] Connecting to /10.244.5.15:5701, timeout: 0, bind-any: true
2020-05-19 11:29:15,951 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.1.72]:5701 [dev] [3.11.4] Initialized new cluster connection between /10.244.1.72:34029 and /10.244.5.15:5701
2020-05-19 11:29:21,907 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.1.72]:5701 [dev] [3.11.4]

Members {size:2, ver:2} [
        Member [10.244.5.15]:5701 - bc482303-9e5a-4271-8d43-feaeeb833f60
        Member [10.244.1.72]:5701 - d34f1ae7-739b-44fa-83ae-2fac9c2fba98 this
]

2020-05-19 11:29:23,033 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.1.72]:5701 [dev] [3.11.4] [10.244.1.72]:5701 is STARTED

But in the release namespace they try to connect and nothing happens. Release endpoints:

kubectl config use-context release
kubectl get endpoints

NAME                           ENDPOINTS                            AGE
operation-cache-back           10.244.5.26:8080,10.244.7.145:8080   42d
operation-cache-hazelcast      10.244.5.26:5701,10.244.7.145:5701   12d
2020-05-19 11:33:26,778 INFO class=org.springframework.boot.StartupInfoLogger  Starting Application on operation-cache-back-84f87ff564-wf57p with PID 6 (/app/app.jar started by ? in /)
2020-05-19 11:33:26,943 INFO class=org.springframework.boot.SpringApplication  The following profiles are active: DDBBSecurized,rel-indra-env
2020-05-19 11:33:43,765 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [LOCAL] [dev] [3.11.4] Prefer IPv4 stack is true, prefer IPv6 addresses is false
2020-05-19 11:33:43,875 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [LOCAL] [dev] [3.11.4] Picked [10.244.5.26]:5701, using socket ServerSocket[addr=/0.0.0.0,localport=5701], bind any local is true
2020-05-19 11:33:44,017 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.5.26]:5701 [dev] [3.11.4] Hazelcast 3.11.4 (20190509 - d5ad9d4) starting at [10.244.5.26]:5701
2020-05-19 11:33:44,022 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.5.26]:5701 [dev] [3.11.4] Copyright (c) 2008-2019, Hazelcast, Inc. All Rights Reserved.
2020-05-19 11:33:44,028 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.5.26]:5701 [dev] [3.11.4] A non-empty group password is configured for the Hazelcast member. Starting with Hazelcast version 3.8.2, members with the same group name, but with different group passwords (that do not use authentication) form a cluster. The group password configuration will be removed completely in a future release.
2020-05-19 11:33:45,559 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.5.26]:5701 [dev] [3.11.4] Backpressure is disabled
2020-05-19 11:33:47,383 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.5.26]:5701 [dev] [3.11.4] Kubernetes Discovery properties: { service-dns: null, service-dns-timeout: 5, service-name: operation-cache-hazelcast, service-port: 0, service-label: null, service-label-value: true, namespace: evosago-app-release, pod-label: null, pod-label-value: null, resolve-not-ready-addresses: false, use-node-name-as-external-address: false, kubernetes-api-retries: 3, kubernetes-master: https://kubernetes.default.svc}
2020-05-19 11:33:47,393 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.5.26]:5701 [dev] [3.11.4] Kubernetes Discovery activated with mode: KUBERNETES_API
2020-05-19 11:33:47,730 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.5.26]:5701 [dev] [3.11.4] Activating Discovery SPI Joiner
2020-05-19 11:33:48,242 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.5.26]:5701 [dev] [3.11.4] Starting 2 partition threads and 3 generic threads (1 dedicated for priority tasks)
2020-05-19 11:33:48,249 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.5.26]:5701 [dev] [3.11.4] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2020-05-19 11:33:48,267 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.5.26]:5701 [dev] [3.11.4] [10.244.5.26]:5701 is STARTING
2020-05-19 11:33:48,457 WARN class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.5.26]:5701 [dev] [3.11.4] Cannot fetch the current zone, ZONE_AWARE feature is disabled
2020-05-19 11:33:48,648 WARN class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  Cannot fetch public IPs of Hazelcast Member PODs, you won't be able to use Hazelcast Smart Client from outside of the Kubernetes network
2020-05-19 11:33:48,680 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.5.26]:5701 [dev] [3.11.4] Connecting to /10.244.7.145:5701, timeout: 0, bind-any: true
2020-05-19 11:33:53,682 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.5.26]:5701 [dev] [3.11.4]

Members {size:1, ver:1} [
        Member [10.244.5.26]:5701 - eb440db8-6471-4adc-9428-1c23744eb1c9 this
]

2020-05-19 11:33:53,769 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.5.26]:5701 [dev] [3.11.4] [10.244.5.26]:5701 is STARTED
2020-05-19 16:09:14,065 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger  [10.244.5.28]:5701 [dev] [3.11.4] Could not connect to: /10.244.7.168:5701. Reason: SocketException[Operation timed out to address /10.244.7.168:5701]

As you can see in the log, in the release namespace it discovers the other pod, but the connection attempt times out:

  • 2020-05-19 11:33:48,680 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger [10.244.5.26]:5701 [dev] [3.11.4] Connecting to /10.244.7.145:5701, timeout: 0, bind-any: true
  • 2020-05-19 16:09:14,065 INFO class=com.hazelcast.logging.StandardLoggerFactory$StandardLogger [10.244.5.28]:5701 [dev] [3.11.4] Could not connect to: /10.244.7.168:5701. Reason: SocketException[Operation timed out to address /10.244.7.168:5701]
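
Because discovery finds the peer's address but the TCP connection then times out, one way to narrow this down is to test raw connectivity from inside a member pod and to list anything that could be filtering the port. A rough sketch (the pod name is a placeholder, and `nc` may not be present in the container image):

```shell
# Probe the other member's Hazelcast port from inside a release pod.
kubectl -n release exec <release-pod-name> -- \
  sh -c 'nc -zv -w 5 10.244.7.145 5701'

# List NetworkPolicies that might be dropping traffic on 5701.
kubectl -n release get networkpolicy
```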

Deployments

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "5"
  creationTimestamp: "X"
  generation: 10
  labels:
    app: back
    chart: back-0.1.0
    heritage: Helm
    release: operation-cache
  name: operation-cache-back
  namespace: dev
  resourceVersion: "33070028"
  selfLink: /apis/extensions/v1beta1/namespaces/dev/deployments/operation-cache-back
  uid: XXXX
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 2
  revisionHistoryLimit: 2147483647
  selector:
    matchLabels:
      app: back
      release: operation-cache
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: back
        release: operation-cache
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: operation-cache-back
                  release: operation-cache
              topologyKey: kubernetes.io/hostname
            weight: 1
      containers:
      - env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: XXXX/back-operation-cache:latest
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 6
          httpGet:
            path: /actuator/health
            port: http
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: operation-cache-back
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 5701
          name: hazelcast
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/health
            port: http
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 3
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 128Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /appconfiguration/application.yaml
          name: application-yaml
          readOnly: true
          subPath: application.yaml
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: registry-docker
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        runAsUser: 1000
      serviceAccount: service-account-caches
      serviceAccountName: service-account-caches
      terminationGracePeriodSeconds: 30
      volumes:
      - name: application-yaml
        secret:
          defaultMode: 420
          secretName: operation-cache-back-application-yaml
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: "2020-05-07T09:37:40Z"
    lastUpdateTime: "2020-05-07T09:37:40Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 10
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2

The release deployment

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "3"
  creationTimestamp: "X"
  generation: 21
  labels:
    app: back
    chart: back-0.1.0
    heritage: Helm
    release: operation-cache
  name: operation-cache-back
  namespace: release
  resourceVersion: "33070611"
  selfLink: /apis/extensions/v1beta1/namespaces/release/deployments/operation-cache-back
  uid: XXXX
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 2
  revisionHistoryLimit: 2147483647
  selector:
    matchLabels:
      app: back
      release: operation-cache
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: back
        release: operation-cache
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: operation-cache-back
                release: operation-cache
            topologyKey: kubernetes.io/hostname
      containers:
      - env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: XXXX/back-operation-cache:latest
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 6
          httpGet:
            path: /actuator/health
            port: http
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: operation-cache-back
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 5701
          name: hazelcast
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/health
            port: http
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 3
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 128Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /appconfiguration/application.yaml
          name: application-yaml
          readOnly: true
          subPath: application.yaml
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: registry-docker
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        runAsUser: 1000
      serviceAccount: service-account-caches
      serviceAccountName: service-account-caches
      terminationGracePeriodSeconds: 30
      volumes:
      - name: application-yaml
        secret:
          defaultMode: 420
          secretName: operation-cache-back-application-yaml
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: "2020-05-07T08:00:07Z"
    lastUpdateTime: "2020-05-07T08:00:07Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 21
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2

I had realized that there was one difference, the pod anti-affinity (dev used preferredDuringSchedulingIgnoredDuringExecution, release used requiredDuringSchedulingIgnoredDuringExecution), but I changed it so that both are now the same, and it still does not connect:

spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: operation-cache-back
                  release: operation-cache
              topologyKey: kubernetes.io/hostname
            weight: 1
spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: operation-cache-back
                release: operation-cache
            topologyKey: kubernetes.io/hostname

application.yaml for dev pods:

  instance-name: hazelcastInstance
  map.default.time-to-live-seconds: -1
  network:
    join:
      multicast:
        enabled: false
      kubernetes:
        enabled: true
        namespace: dev
        service-name: operation-cache-hazelcast
        service-port: 5701

application.yaml for release pods:

  instance-name: hazelcastInstance
  map.default.time-to-live-seconds: -1
  network:
    join:
      multicast:
        enabled: false
      kubernetes:
        enabled: true
        namespace: release
        service-name: operation-cache-hazelcast
        service-port: 5701

I am deploying both with Helm, from the same template.

What can this be due to? Why is it timing out only in the release namespace? It doesn't make any sense.

Thanks in advance


Solution

  • I found the solution... it was impossible to reach it with only the information I provided. The error was in the Kubernetes NetworkPolicies: the dev namespace was configured to allow connections on port 5701, but the release namespace was not. Sorry for not giving you enough info, but at least it is solved.

    Thank you all.
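
For completeness, a NetworkPolicy along these lines should allow the two members in release to reach each other on 5701. This is a hypothetical sketch: the pod labels are taken from the Deployment above, but the actual policies in the cluster were never shown.

```yaml
# Hypothetical policy sketch; the labels come from the Deployment above.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-hazelcast-5701
  namespace: release
spec:
  podSelector:
    matchLabels:
      app: back
      release: operation-cache
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: back
          release: operation-cache
    ports:
    - protocol: TCP
      port: 5701
```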