I set up a Graylog stack (Graylog / Elasticsearch / MongoDB) and everything went smoothly (well, almost). Yesterday I tried to get some node info using the following command:
curl 'http://127.0.0.1:9200/_nodes/process?pretty'
{
  "cluster_name" : "log_server_graylog",
  "nodes" : {
    "Znz_72SZSyikw6DEC4Wgzg" : {
      "name" : "graylog-27274b66-3bbd-4975-99ee-1ee3d692c522",
      "transport_address" : "127.0.0.1:9350",
      "host" : "127.0.0.1",
      "ip" : "127.0.0.1",
      "version" : "2.4.4",
      "build" : "fcbb46d",
      "attributes" : {
        "client" : "true",
        "data" : "false",
        "master" : "false"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 788,
        "mlockall" : false
      }
    },
    "XO77zz8MRu-OOSymZbefLw" : {
      "name" : "test",
      "transport_address" : "127.0.0.1:9300",
      "host" : "127.0.0.1",
      "ip" : "127.0.0.1",
      "version" : "2.4.4",
      "build" : "fcbb46d",
      "http_address" : "127.0.0.1:9200",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 946,
        "mlockall" : false
      }
    }
  }
}
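The giveaway is the `attributes` block: a client-only node advertises `client: true`, `data: false` and `master: false`, while the actual Elasticsearch instance (the one exposing `http_address`) has no such attributes in this output. A small sketch of that check, parsing a trimmed-down copy of the response above (node IDs and names taken from the output shown):

```python
import json

# Trimmed-down version of the _nodes/process response shown above.
response = json.loads("""
{
  "cluster_name": "log_server_graylog",
  "nodes": {
    "Znz_72SZSyikw6DEC4Wgzg": {
      "name": "graylog-27274b66-3bbd-4975-99ee-1ee3d692c522",
      "attributes": {"client": "true", "data": "false", "master": "false"}
    },
    "XO77zz8MRu-OOSymZbefLw": {
      "name": "test",
      "http_address": "127.0.0.1:9200"
    }
  }
}
""")

for node_id, node in response["nodes"].items():
    attrs = node.get("attributes", {})
    # Client-only nodes advertise client=true and data=false; the real
    # ES instance carries no such attributes in this response.
    kind = "client-only" if attrs.get("client") == "true" else "regular"
    print(f"{node['name']}: {kind}")
```

Running this prints the Graylog-named node as client-only and the `test` node as regular, which is exactly the two entries in the output above.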
It does look like (to me at least) that there are two nodes running. Someone on the Elasticsearch IRC told me there might be a transport client running, which shows up as a second node.
I really don't understand where this transport client comes from. The person on IRC also told me that using a transport client used to be a common setup, but that this is discouraged now. How can I change the config to follow Elasticsearch best practices? (I couldn't find this in the docs.)
FYI, here is my config file:
cat /etc/elasticsearch/elasticsearch.yml
cluster.name: log_server_graylog
node.name: test
path.data: /tt/elasticsearch/data
path.logs: /tt/elasticsearch/log
network.host: 127.0.0.1
action.destructive_requires_name: true
# Following are useless as we are setting swappiness to 1; this should prevent ES memory from being swapped, except in an emergency
#bootstrap.mlockall: true
#bootstrap.memory_lock: true
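For reference, the swappiness setting mentioned in the comment above is typically applied via sysctl; a minimal sketch (the file path is the usual location, adjust for your distro):

```
# /etc/sysctl.conf (or a file under /etc/sysctl.d/)
# Keep the kernel from swapping except under memory pressure
vm.swappiness = 1
```

Apply it without rebooting via `sysctl -p` (or `sysctl --system` for sysctl.d files).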
Thanks
I found the answer on the Graylog IRC: the second client is the Elasticsearch client node created by... the Graylog server itself. :)
So everything is normal and as expected.
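For anyone else hitting this: in a Graylog 2.x setup the server joins the Elasticsearch cluster as a client node, and that behaviour is configured in Graylog's server.conf, not in elasticsearch.yml. A hedged sketch of the relevant settings (names as in Graylog 2.x; verify against your version's documentation):

```
# /etc/graylog/server/server.conf (Graylog 2.x)
# Must match cluster.name in elasticsearch.yml
elasticsearch_cluster_name = log_server_graylog
# Transport port the Graylog client node binds to (matches the
# 127.0.0.1:9350 transport_address seen in the _nodes output above)
elasticsearch_transport_tcp_port = 9350
```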