I've got a configuration file for Flume that looks like this:
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS
TwitterAgent.sources.Twitter.type =
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey =
TwitterAgent.sources.Twitter.consumerSecret =
TwitterAgent.sources.Twitter.accessToken =
TwitterAgent.sources.Twitter.accessTokenSecret =
TwitterAgent.sources.Twitter.keywords =
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path =
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 10000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 10000
I've omitted private fields. This setup streams tweets into HDFS on Apache Hadoop. However, each tweet file only reaches about 30-60 KB before another one is created. How can I create much larger files, so I don't end up with a plethora of small text files but instead have just a few large ones (with, say, 10,000 tweets each)?
I thought setting rollCount to 10000 would do it, but it doesn't seem to.
I solved this by changing rollCount to 0, changing transactionCapacity to 1000 (to keep it smaller than the capacity), and leaving batchSize at 10000. I think this is doing the trick, as it is now writing a large amount to each file (64 MB, to be precise).
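For reference, a sketch of the relevant lines after the change (in the HDFS sink, setting a roll parameter to 0 disables that rolling trigger; note that hdfs.rollInterval defaults to 30 seconds, so if files still roll too early you may also want to set it to 0 explicitly):

TwitterAgent.sinks.HDFS.hdfs.batchSize = 10000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 0
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 1000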