Logstash Config file
input {
  elasticsearch {
    hosts => ["https://staing-example.com:443"]
    user => "userName"
    password => "password"
    index => "testingindex"
    size => 100
    scroll => "1m"
  }
}
filter {
}
output {
  amazon_es {
    hosts => ["https://example.us-east-1.es.amazonaws.com:443"]
    region => "us-east-1"
    aws_access_key_id => "access_key_id"
    aws_secret_access_key => "secret_access_key"
    index => "testingindex"
  }
}
I am using Logstash to transfer data from one Elasticsearch server to Amazon Elasticsearch Service.
With the above config, Logstash continuously throws:
[2019-10-10T16:00:51,232][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>400, :url=>"https://example.us-east-1.es.amazonaws.com:443/_bulk"}
[2019-10-10T16:00:52,127][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>400, :url=>"https://example.us-east-1.es.amazonaws.com:443/_bulk"}
[2019-10-10T16:00:52,317][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>400, :url=>"https://example.us-east-1.es.amazonaws.com:443/_bulk"}
I don't know why this is happening.
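Since Logstash only logs the status code, one way to find the real reason for the 400 is to send a minimal bulk request by hand and read the JSON error in the response body. The sketch below just builds the newline-delimited body the `_bulk` endpoint expects; the index name matches the config above, but the document contents are made up for illustration:

```python
import json

def build_bulk_body(index, docs):
    # The _bulk API takes NDJSON: an action line, then the document
    # source, for each document, with a trailing newline at the end.
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

body = build_bulk_body("testingindex", [{"field": "value"}])
print(body)
```

POSTing this body to the domain's `/_bulk` URL (for example with curl, using `Content-Type: application/x-ndjson`) returns a response whose body usually pinpoints whether the 400 is a malformed payload, a mapping problem, or the cluster rejecting the request outright.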
The solution below fixed the issue for me:
Click on the configured cluster in the Amazon Elasticsearch console.
Then click on Advanced options, where you can see "Allow APIs that can span multiple indices and bypass index-specific access policies". Check that option in the policy.
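For context, that checkbox corresponds (as far as I can tell from how the option is described) to the Elasticsearch setting below. When it is false, `_bulk` requests that name an index inside the request body, which is exactly what Logstash's output sends, are rejected with a 400:

```
rest.action.multi.allow_explicit_index: true
```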