Tags: elasticsearch, logstash, logstash-grok, logstash-configuration, logstash-file

logstash unable to connect to elasticsearch


My Elasticsearch, Kibana, and Logstash are all running on the same machine. When I tried to set up Logstash to connect to Elasticsearch, I got the following errors in my CMD:


[2024-05-03T16:15:44,009][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elastic:xxxxxx@locahost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://locahost:9200/][Manticore::ResolutionFailure] locahost"}
[2024-05-03T16:15:51,704][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"No such host is known (locahost)", :exception=>Manticore::ResolutionFailure, :cause=>#<Java::JavaNet::UnknownHostException: No such host is known (locahost)>}
[2024-05-03T16:15:51,706][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elastic:xxxxxx@locahost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://locahost:9200/][Manticore::ResolutionFailure] No such host is known (locahost)"}
[2024-05-03T16:15:54,751][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"locahost", :exception=>Manticore::ResolutionFailure, :cause=>#<Java::JavaNet::UnknownHostException: locahost>}
[2024-05-03T16:15:54,754][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elastic:xxxxxx@locahost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://locahost:9200/][Manticore::ResolutionFailure] locahost"}
[2024-05-03T16:15:59,789][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"locahost", :exception=>Manticore::ResolutionFailure, :cause=>#<Java::JavaNet::UnknownHostException: locahost>}
[2024-05-03T16:15:59,791][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elastic:xxxxxx@locahost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://locahost:9200/][Manticore::ResolutionFailure] locahost"}



Here is my logstash.conf:

input
{
  stdin{}
}
output
{
  stdout
  {
    codec => rubydebug
  }
  elasticsearch
  {
    hosts => ["locahost:9200"]
    index => "this_log_index_name"
    user => "elastic"
    password => "P@ssw0rd"
  }
}

I did some research online, but it didn't help me solve my current issue. For the logstash.conf file, I have tried several options (with and without authentication to Elasticsearch), but none of them seem to work either. I don't have any issue connecting my Kibana to Elasticsearch.
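For reference, the `Manticore::ResolutionFailure` / `UnknownHostException` lines in the log mean the hostname in the URL could not be resolved via DNS at all, before any Elasticsearch-level problem comes into play. A quick way to check whether a given hostname resolves, sketched here in Python purely for illustration:

```python
import socket

def can_resolve(hostname):
    """Return True if `hostname` resolves to an IP address, False otherwise."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# "localhost" should resolve on any normal machine; a misspelled name
# (as reported in the "No such host is known" log lines) will typically
# fail with socket.gaierror.
print(can_resolve("localhost"))
```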
Here is my elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
discovery.type: single-node
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

FYI, I didn't configure anything in my logstash.yml, as my installation reference didn't mention anything that needed to be updated there. So logstash.yml is at its default settings, and HTTP is used in my environment.

I hope someone can give me some guidance on where my Logstash installation is going wrong.


Solution

  • I think your issue is related to the certificate.

    Elasticsearch comes with its own CA (certificate authority) certificate, and if it is converted you get .crt and .key certificates. If you have Kibana connected to Elasticsearch, you probably already have these .crt and .key certs.

    These are self-signed certificates and are responsible for securing the communication between nodes.

    I had the same problem. I solved it with:

        hosts => ["https://localhost:9200"]
        index => "my-index-name"
        user => "logstash_writer"
        password => "logstash_writer_password"
        ssl => true
        cacert => "/path/to/http_ca.crt"
    

    But you should first create a logstash_writer role in Kibana - Stack Management - Security - Roles.

    The role should have the privileges:

    Cluster: manage_index_templates, monitor

    Index: write, create, create_index

    Then you can go to Kibana - Stack Management - Security - Users, create a logstash_writer user, and assign the logstash_writer role to that user.
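    Alternatively, the same role and user can be created through the Elasticsearch security API instead of the Kibana UI. This is a sketch: the index pattern below is taken from the question's `index => "this_log_index_name"` setting and should be adjusted to your own index naming.

    ```
    PUT _security/role/logstash_writer
    {
      "cluster": ["manage_index_templates", "monitor"],
      "indices": [
        {
          "names": ["this_log_index_name*"],
          "privileges": ["write", "create", "create_index"]
        }
      ]
    }

    PUT _security/user/logstash_writer
    {
      "password": "logstash_writer_password",
      "roles": ["logstash_writer"]
    }
    ```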

    Here is some documentation: https://www.elastic.co/guide/en/logstash/current/ls-security.html