
Failed to retrieve Ignite pods IP addresses


I am trying to run an Apache Ignite cluster on Google Kubernetes Engine.

After following the tutorial, here are my YAML files.

First, I create a service, ignite-service.yaml:

apiVersion: v1
kind: Service
metadata:
  # Name of Ignite Service used by Kubernetes IP finder. 
  # The name must be equal to TcpDiscoveryKubernetesIpFinder.serviceName.
  name: ignite
  namespace: default
spec:
  clusterIP: None # custom value.
  ports:
    - port: 9042 # custom value.
  selector:
    # Must be equal to one of the labels set in Ignite pods'
    # deployment configuration.
    app: ignite

kubectl create -f ignite-service.yaml
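
Since clusterIP: None makes this a headless service, the Kubernetes IP finder resolves node addresses through the service's endpoints rather than through a virtual IP. The service and its endpoints can be checked with the following commands (the endpoints list will stay empty until matching pods are running):

kubectl get service ignite -n default
kubectl get endpoints ignite -n default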

Second, I create a deployment for my Ignite nodes, ignite-deployment.yaml (an example Kubernetes configuration for an Ignite pod deployment):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Custom Ignite cluster's name.
  name: ignite-cluster
spec:
  # The number of Ignite pods to be started by Kubernetes initially.
  replicas: 2
  template:
    metadata:
      labels:
        app: ignite
    spec:
      containers:
        # Custom Ignite pod name.
      - name: ignite-node
        image: apacheignite/ignite:2.4.0
        env:
        - name: OPTION_LIBS
          value: ignite-kubernetes
        - name: CONFIG_URI
          value: https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube.xml
        ports:
        # Ports to open.
        # Might be optional depending on your Kubernetes environment.
        - containerPort: 11211 # REST port number.
        - containerPort: 47100 # communication SPI port number.
        - containerPort: 47500 # discovery SPI port number.
        - containerPort: 49112 # JMX port number.
        - containerPort: 10800 # SQL port number.       

kubectl create -f ignite-deployment.yaml
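
The pod status and logs mentioned below can be inspected as follows (ignite-cluster-xx-yy stands for a generated pod name; kubectl get pods shows the real ones):

kubectl get pods -l app=ignite -n default
kubectl logs ignite-cluster-xx-yy -n default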

After that, I check the status of my pods, and they are all running. However, when I check the logs of any of the pods, I get the following error:

java.io.IOException: Server returned HTTP response code: 403 for URL: https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/default/endpoints/ignite
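
The URL in the error shows what the IP finder is doing: a GET on the endpoints object of the ignite service, authenticated with the pod's service-account token, which the API server is refusing. The call can be reproduced from inside a pod to confirm this (a debugging sketch, assuming curl is available in the image; the pod name is a placeholder):

kubectl exec ignite-cluster-xx-yy -- sh -c \
  'curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/endpoints/ignite'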

Things I have tried:

  1. I followed this link to make my cluster work. But in step 4, when I run the DaemonSet YAML file, I get the following error (see the indentation sketch after the error message):

error: error validating "daemon.yaml": error validating data: ValidationError(DaemonSet.spec.template.spec): missing required field "containers" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
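
From what I understand, this validation error means the containers block ended up at the wrong indentation level, so spec.template.spec has no containers field. The shape the validator expects is roughly this (a sketch with placeholder names, not the actual file from the link):

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: example-daemon # placeholder
spec:
  template:
    metadata:
      labels:
        app: example-daemon
    spec:
      containers: # must sit directly under spec.template.spec
      - name: example
        image: busybox
        command: ["sleep", "3600"]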

Can anybody point out the mistake I am making here?

Thanks.


Solution

    Step 1: kubectl apply -f ignite-service.yaml (with the file in your question)

    Step 2: kubectl apply -f ignite-rbac.yaml

    ignite-rbac.yaml looks like this:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: ignite
      namespace: default
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: ignite-endpoint-access
      namespace: default
      labels:
        app: ignite
    rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        resourceNames: ["ignite"]
        verbs: ["get"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: ignite-role-binding
      namespace: default
      labels:
        app: ignite
    subjects:
      - kind: ServiceAccount
        name: ignite
        namespace: default
    roleRef:
      kind: Role
      name: ignite-endpoint-access
      apiGroup: rbac.authorization.k8s.io
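
    To confirm that the Role and RoleBinding grant exactly what the IP finder needs, you can impersonate the service account (an optional check on top of the fix itself):

    kubectl auth can-i get endpoints/ignite \
      --as=system:serviceaccount:default:ignite -n default

    This should print yes once the RBAC objects are applied.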
    

    Step 3: kubectl apply -f ignite-deployment.yaml (very similar to your file; I've only added one line, serviceAccount: ignite, for which serviceAccountName is the preferred field name on current Kubernetes versions):

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      # Custom Ignite cluster's name.
      name: ignite-cluster
      namespace: default
    spec:
      # The number of Ignite pods to be started by Kubernetes initially.
      replicas: 2
      template:
        metadata:
          labels:
            app: ignite
        spec:
          serviceAccount: ignite  ## Added line
          containers:
            # Custom Ignite pod name.
          - name: ignite-node
            image: apacheignite/ignite:2.4.0
            env:
            - name: OPTION_LIBS
              value: ignite-kubernetes
            - name: CONFIG_URI
              value: https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube.xml
            ports:
            # Ports to open.
            # Might be optional depending on your Kubernetes environment.
            - containerPort: 11211 # REST port number.
            - containerPort: 47100 # communication SPI port number.
            - containerPort: 47500 # discovery SPI port number.
            - containerPort: 49112 # JMX port number.
            - containerPort: 10800 # SQL port number.
    

    This should work fine. I got this in the logs of the pods (kubectl logs -f ignite-cluster-xx-yy), showing the two Pods successfully locating each other:

    [13:42:00] Ignite node started OK (id=f89698d6)
    [13:42:00] Topology snapshot [ver=1, servers=1, clients=0, CPUs=1, offheap=0.72GB, heap=1.0GB]
    [13:42:00] Data Regions Configured:
    [13:42:00]   ^-- default [initSize=256.0 MiB, maxSize=740.8 MiB, persistenceEnabled=false]
    [13:42:01] Topology snapshot [ver=2, servers=2, clients=0, CPUs=2, offheap=1.4GB, heap=2.0GB]
    [13:42:01] Data Regions Configured:
    [13:42:01]   ^-- default [initSize=256.0 MiB, maxSize=740.8 MiB, persistenceEnabled=false]
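
    As an optional follow-up (my suggestion, not part of the original answer), scaling the Deployment should produce a third topology snapshot with servers=3 in the logs of the already-running pods:

    kubectl scale deployment ignite-cluster --replicas=3
    kubectl logs -f ignite-cluster-xx-yy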