I'm having trouble connecting Logstash (running on an EC2 instance in AWS) to an Elasticsearch domain in AWS.
ES Configuration:
Logstash Pipeline Config:
input {
  stdin { }
}
output {
  amazon_es {
    hosts => ["vpc-aaabbbccc111222333.es.amazonaws.com"]
    ssl => true
    region => "us-east-1"
  }
}
When I run the pipeline on the Logstash EC2 instance for testing, it fails with the error below. I have opened all traffic in the security groups, and the Logstash EC2 instance and the ES domain share the same security group. I have also allowed the IAM role attached to the EC2 instance in the ES domain's access policy.
Error Log:
[2021-04-24T13:50:55,461][WARN ][logstash.outputs.amazonelasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://vpc-aaabbbccc111222333.us-east-1.es.amazonaws.com:443/", :error_type=>LogStash::Outputs::AmazonElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '403' contacting Elasticsearch at URL 'https://vpc-aaabbbccc111222333.us-east-1.es.amazonaws.com:443/'"}
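To separate networking problems from authorization problems, you could send a SigV4-signed request to the domain from the same EC2 instance, for example with awscurl (a third-party tool, installable via pip). This is only a sketch of the check; the endpoint is the one from the error log above:

  # Confirm which IAM role the instance credentials actually resolve to
  aws sts get-caller-identity

  # Send a signed GET to the domain endpoint; a 403 here as well points to
  # the access policy / fine-grained access control rather than networking
  awscurl --service es --region us-east-1 \
    https://vpc-aaabbbccc111222333.us-east-1.es.amazonaws.com

If the signed request also returns 403 even though the access policy allows the role, the rejection is likely coming from fine-grained access control inside the domain.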
Did you map the EC2 IAM role to any Kibana role? When you integrate other AWS services (EC2 in this case) with a fine-grained access control (FGAC) ES domain, you need to map the IAM role of that resource to a Kibana (security) role; otherwise signed requests from it are rejected with a 403 even when the domain access policy allows them.
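One way to add that mapping (besides the Kibana Security UI) is the security plugin's role-mapping API, called as the master user. This is only a sketch assuming a master user in the internal user database; the credentials, account ID, and role ARN are placeholders, and mapping to all_access is just an example:

  # Map the EC2 instance's IAM role to the all_access security role.
  # Note: PUT replaces the existing mapping for that role, so include any
  # backend roles that are already mapped.
  curl -u 'master-user:master-password' -XPUT \
    'https://vpc-aaabbbccc111222333.us-east-1.es.amazonaws.com/_opendistro/_security/api/rolesmapping/all_access' \
    -H 'Content-Type: application/json' \
    -d '{"backend_roles": ["arn:aws:iam::123456789012:role/your-ec2-instance-role"]}'

In the Kibana UI the equivalent is Security > Roles > (choose a role) > Mapped users, adding the EC2 instance's IAM role ARN as a backend role.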