Originally I was trying to set up Flume to write to S3 in my AWS setup like this:
aws.sinks.channel1.type = hdfs
aws.sinks.channel1.hdfs.path = s3n://<id>:<secretkey>/<bucketname>
aws.sinks.channel1.hdfs.fileType = DataStream
aws.sinks.channel1.hdfs.writeFormat = Text
aws.sinks.channel1.hdfs.rollCount = 0
aws.sinks.channel1.hdfs.rollSize = 67108864
aws.sinks.channel1.hdfs.batchSize = 1000
aws.sinks.channel1.hdfs.rollInterval = 0
However, it has occurred to me that I do not have access to "bucketname".
Our ElasticSearch service on Amazon does not expose the file system layer.
Is there any way to use the Elasticsearch sink, or some kind of HTTP sink, to push Flume events through to something like Kibana on AWS?
For clarity, I want to push to Elasticsearch on Amazon. My sources are Avro and HTTP, and they do not come from Amazon.
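For reference, the Flume Elasticsearch sink I was considering would be configured roughly like this, based on the Flume user guide (the endpoint, index, and cluster names are placeholders, not my real values). Note that this sink connects over Elasticsearch's native transport port (9300), not HTTP:

```
aws.sinks.channel1.type = elasticsearch
aws.sinks.channel1.hostNames = <es-endpoint>:9300
aws.sinks.channel1.indexName = flume
aws.sinks.channel1.indexType = log
aws.sinks.channel1.clusterName = elasticsearch
aws.sinks.channel1.batchSize = 500
aws.sinks.channel1.channel = channel1
```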
https://forums.aws.amazon.com/thread.jspa?messageID=683536
For the AWS Elasticsearch Service, the native transport protocol is not supported; only the REST API over HTTP is currently supported.
Bummer!
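Since the REST API over HTTP is still available, one workaround is to bypass the Flume Elasticsearch sink and POST events directly to the `_bulk` endpoint yourself. A minimal Python sketch (the index name, document type, and helper names here are my own assumptions, not anything AWS-specific):

```python
import json
import urllib.request

def bulk_body(events, index="flume-events", doc_type="log"):
    """Build a newline-delimited JSON body for Elasticsearch's _bulk API.

    Each event becomes an action line followed by the document itself.
    """
    lines = []
    for event in events:
        # Action line tells Elasticsearch where to index the next document.
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type}}))
        # Source line is the document body.
        lines.append(json.dumps(event))
    # The _bulk API requires a trailing newline after the last line.
    return "\n".join(lines) + "\n"

def post_bulk(endpoint, events):
    """POST the bulk body to <endpoint>/_bulk over plain HTTP(S)."""
    req = urllib.request.Request(
        endpoint.rstrip("/") + "/_bulk",
        data=bulk_body(events).encode("utf-8"),
        headers={"Content-Type": "application/x-ndjson"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

The same idea could sit behind a custom Flume sink or a small forwarder process: anything that can speak HTTP to the AWS endpoint can index documents, even though the native transport is blocked.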