Tags: elasticsearch, kibana-4

Elasticsearch Tribe Node and Kibana - no known master node


I am using Elasticsearch 2.3.1 and Kibana 4.5. I have 2 elasticsearch clusters.

  • Cluster 1 - 1 master node, 1 data node and 1 client node.
  • Cluster 2 - 1 master node, 1 data node and 1 tribe node.

The tribe node is able to communicate with the nodes in both clusters. I also have two indices, cluster1index in cluster 1 and cluster2index in cluster 2. Through the tribe node, I am able to view both indices:

yellow open cluster2index 5 1  22400 0  24.6mb  24.6mb 
yellow open cluster1index 5 1 129114 0 109.9mb 109.9mb 
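For reference, a listing like the one above can be produced with the cat indices API queried against the tribe node's HTTP port (the hostname and port here are placeholders, not taken from the question):

```shell
# List all indices visible through the tribe node (ES 2.x _cat API).
# Replace tribe-host:9200 with your tribe node's HTTP address.
curl -s 'http://tribe-host:9200/_cat/indices?v'
```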

However, if I try to connect Kibana to the tribe node, I get an error:

[2016-05-05 11:49:03,162][DEBUG][action.admin.indices.create] [tribe-node-MS2] no known master node, scheduling a retry
[2016-05-05 11:49:33,163][DEBUG][action.admin.indices.create] [tribe-node-MS2] timed out while retrying [indices:admin/create] after failure (timeout [30s])
[2016-05-05 11:49:33,165][WARN ][rest.suppressed          ] /.kibana Params: {index=.kibana}
MasterNotDiscoveredException[null]
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$5.onTimeout(TransportMasterNodeAction.java:226)
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:236)
    at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:804)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

I tried to connect Kibana to the client node instead, and was able to view my indices. After this, when I connect Kibana to the tribe node, I am able to view the dashboard.

My Kibana config:

 server.port: 5601
 server.host: "hostname"
 elasticsearch.url: "http://hostname:port"
 kibana.index: ".kibana"

I am not sure why Kibana was not working with the tribe node initially, or whether I am missing anything in my configuration.

I read the following in one of the answers on the Elasticsearch forum:

"Regarding the issue you have with Kibana, you can't create a .kibana index directly with the tribe node because it's a tribe node :) sitting in a cluster that has no master node and no data node. Yes, this tribe node is connected to two clusters in this case, but it does not know where to put the .kibana index if you are under the assumption that it should write to one of the clusters."

Is this the reason that I was unable to create the Kibana index directly on the tribe node initially, but was later able to point Kibana at the tribe node once the index had already been created? If so, is there any configuration available to connect Kibana to the tribe node directly?


Solution

  • You can find good additional information, and confirmation of this behavior, in this GitHub issue and also in this one.

    As a summary...

    The Tribe Node documentation states that you cannot execute master-level write operations such as Create Index and Put Mapping, both of which are required when using Kibana 4 for the first time. Simply creating the index in advance is not sufficient, because Put Mapping is also needed, and it is likewise a master-level write operation.

    As a workaround, first bring up the Kibana 4 instance and configure it to point directly at one of the remote clusters, so that it initializes the .kibana index in that cluster. While Kibana 4 is connected to this single cluster, create and save the index settings/index pattern that you will be using through the tribe node, and create and save at least one visualization and one dashboard. Then update kibana.yml to point the Elasticsearch connection at the tribe node and restart Kibana 4.
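A minimal sketch of the two kibana.yml states described above (the hostnames are placeholders; `cluster1-client` stands for a client node of whichever remote cluster you choose to hold the .kibana index):

```yaml
# Step 1: point Kibana directly at one remote cluster so it can
# create the .kibana index and its mappings (master-level writes
# succeed here because this cluster has a real master node).
elasticsearch.url: "http://cluster1-client:9200"

# Step 2: after saving an index pattern, at least one visualization
# and one dashboard, repoint Kibana at the tribe node and restart it.
elasticsearch.url: "http://tribe-node:9200"
```

The two `elasticsearch.url` lines are the same key at two points in time, not one file; only one of them should be present in kibana.yml at any given moment.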

    From that point on, you should be able to continue managing Kibana dashboards and visualizations through the tribe node, provided that the .kibana index exists in only one of the remote clusters. If the index must exist in more than one cluster (e.g., you are using snapshot/restore for redundancy), then instruct the tribe node to prefer the master version with these settings (where clusterA holds the master .kibana index):

    tribe:
      on_conflict: prefer_clusterA
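For context, an Elasticsearch 2.x tribe node is configured in its own elasticsearch.yml with one sub-section per remote cluster; the `on_conflict` fragment above slots into that block. A sketch with hypothetical cluster names and hosts (clusterA/clusterB and the host names are placeholders, not values from the question):

```yaml
# elasticsearch.yml on the tribe node (Elasticsearch 2.x syntax).
tribe:
  clusterA:
    cluster.name: cluster1
    discovery.zen.ping.unicast.hosts: ["cluster1-master:9300"]
  clusterB:
    cluster.name: cluster2
    discovery.zen.ping.unicast.hosts: ["cluster2-master:9300"]
  # If .kibana exists in both clusters, prefer clusterA's copy:
  on_conflict: prefer_clusterA
```

The keys under `tribe.clusterA` and `tribe.clusterB` are ordinary client-node settings for joining each remote cluster; the labels clusterA/clusterB are arbitrary and only need to match the name used in `on_conflict`.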