Tags: elasticsearch, logstash

Logstash giving Elasticsearch Unreachable error


I am trying to connect Logstash to Elasticsearch but cannot get it working. Elasticsearch is running fine on localhost:9200 and I can curl it.

My logstash.yml (the Logstash settings file) looks like this:

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.url: http://localhost:9200

My logstash-sample.conf:

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}

I use the following command to start the container:

docker run \
  --name logstash \
  -p 5044:5044 \
  -e "discovery.type=single-node" \
  docker.elastic.co/logstash/logstash:6.4.2

However, I get the following log with the error:

Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2018-11-02T15:30:44,622][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2018-11-02T15:30:44,814][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.2"}
[2018-11-02T15:30:48,350][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", hosts=>[http://localhost:9200], sniffing=>false, manage_template=>false, id=>"2196aa69258f6adaaf9506d8988cc76ab153e658434074dcf2e424e0aca0d381", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_1afa70a3-eaef-4cf5-9762-0d759d720d1c", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-11-02T15:30:48,710][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50}
[2018-11-02T15:30:50,318][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-11-02T15:30:51,394][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-11-02T15:30:51,462][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x688d6788 run>"}
[2018-11-02T15:30:51,698][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-11-02T15:30:51,972][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-11-02T15:30:51,982][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-11-02T15:30:52,194][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-11-02T15:30:52,218][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2018-11-02T15:30:52,322][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-11-02T15:30:52,322][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-11-02T15:30:52,332][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-11-02T15:30:52,378][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused) {:url=>http://localhost:9200/, :error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2018-11-02T15:30:52,509][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:293:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:278:in `block in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:373:in `with_connection'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:277:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:285:in `block in get'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:162:in `get'", "/usr/share/logstash/x-pack/lib/license_checker/license_reader.rb:28:in `fetch_xpack_info'", "/usr/share/logstash/x-pack/lib/license_checker/license_manager.rb:40:in `fetch_xpack_info'", "/usr/share/logstash/x-pack/lib/license_checker/license_manager.rb:27:in `initialize'", "/usr/share/logstash/x-pack/lib/license_checker/licensed.rb:37:in `setup_license_checker'", "/usr/share/logstash/x-pack/lib/monitoring/inputs/metrics.rb:56:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:242:in `register_plugin'", 
"/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:253:in `block in register_plugins'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:253:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:396:in `start_inputs'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:294:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:200:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:160:in `block in start'"]}
[2018-11-02T15:30:52,776][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x4bac2ad3 sleep>"}
[2018-11-02T15:30:52,842][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>[]}
[2018-11-02T15:30:52,865][ERROR][logstash.inputs.metrics  ] X-Pack is installed on Logstash but not on Elasticsearch. Please install X-Pack on Elasticsearch to use the monitoring feature. Other features may be available.
[2018-11-02T15:30:53,119][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-11-02T15:30:57,213][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-11-02T15:30:57,222][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-11-02T15:30:57,347][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}

And this keeps repeating. I would really appreciate any help. Thank you.


Solution

  • As @Val noted in a comment, localhost INSIDE the container refers to the container itself, not your HOST machine.

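Two quick ways to make the host's Elasticsearch reachable from the Logstash container, sketched below. `host.docker.internal` is a Docker Desktop (Mac/Windows) feature; on Linux, host networking is the simpler route. The official Logstash images map uppercase environment variables onto `logstash.yml` settings, which is what the `-e` flag relies on here; the `hosts` entry in your pipeline output would need the same change.

```shell
# Docker Desktop (Mac/Windows): use the special DNS name that
# resolves to the host machine instead of localhost.
docker run --name logstash -p 5044:5044 \
  -e "XPACK_MONITORING_ELASTICSEARCH_URL=http://host.docker.internal:9200" \
  docker.elastic.co/logstash/logstash:6.4.2

# Linux: share the host's network namespace, so localhost:9200
# inside the container really is the host's Elasticsearch.
docker run --name logstash --network host \
  docker.elastic.co/logstash/logstash:6.4.2
```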
    If Docker Compose works for you, you could try the 'sebp/elk' Docker image, which bundles Elasticsearch, Logstash, and Kibana, using this example Compose file:

    docker-compose.yml:

    version: "2.4"
    services:
      elk:
        image: sebp/elk
        ports:
          - "5601:5601"
          - "9200:9200"
          - "5044:5044"
    
    1. Install Docker Compose
    2. Save the sample 'docker-compose.yml' in a folder
    3. In the same folder run: $ docker-compose up
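Once the stack is up, you can verify it from the host using the ports published in the Compose file above:

```shell
# Elasticsearch should answer with its node info JSON.
curl -s http://localhost:9200

# Kibana should be reachable in a browser at http://localhost:5601
```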

    But if you want to change the configuration, you can either:

    1. Create your own Dockerfile with that image as the base, and overwrite the conf files you want
    2. Use Docker Compose volumes to overwrite individual files, by editing the docker-compose.yml instead of building a new image from a Dockerfile
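A minimal sketch of the volumes option, mounting your own pipeline file over one inside the sebp/elk image. The target path is taken from that image's conventions (`/etc/logstash/conf.d/`); double-check it against the image's documentation before relying on it:

```yaml
version: "2.4"
services:
  elk:
    image: sebp/elk
    ports:
      - "5601:5601"
      - "9200:9200"
      - "5044:5044"
    volumes:
      # Overwrite a single Logstash pipeline file with your own
      # (assumed path inside the sebp/elk image -- verify it).
      - ./logstash-sample.conf:/etc/logstash/conf.d/02-beats-input.conf
```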

    Alternatives:

    1. Use your private, local LAN IP in the container configuration instead of 'localhost', so the container can reach the host
    2. Run both systems in containers: on 'docker run', attach them to the same network, set up hostnames, and use those hostnames in the configuration
    3. Use Docker Compose with one service for each system (Elasticsearch and Logstash), and use the service names in the configuration (Docker Compose makes each service name resolvable as a hostname by default)
    4. Follow the original suggestion: Docker Compose with a single sebp/elk service, so everything runs in the same container and you can keep the localhost configuration
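Alternative 2 might look like this with a user-defined bridge network, where each container's name doubles as a resolvable hostname (image versions and names here are illustrative):

```shell
# Create a shared network and attach both containers to it.
docker network create elk-net

docker run -d --name elasticsearch --network elk-net \
  -p 9200:9200 -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:6.4.2

# In logstash.yml and the pipeline output, point at
# http://elasticsearch:9200 instead of localhost.
docker run -d --name logstash --network elk-net -p 5044:5044 \
  docker.elastic.co/logstash/logstash:6.4.2
```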

    Just remember that each alternative may need a different approach to customizing the configuration.
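For completeness, a sketch of alternative 3: one Compose service per system. Logstash can then reach Elasticsearch at http://elasticsearch:9200, because Compose service names resolve as hostnames on the default network:

```yaml
version: "2.4"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
  logstash:
    image: docker.elastic.co/logstash/logstash:6.4.2
    ports:
      - "5044:5044"
    depends_on:
      - elasticsearch
```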