Tags: amazon-redshift, amazon-kinesis, amazon-kinesis-firehose

The Kinesis Firehose demo writes records with no delimiters, and Redshift can't parse them without one. Which side is misconfigured?


When I went through first learning steps with Kinesis, Firehose, and Redshift today, I was pleased to discover that Amazon had a "try our demo data producer" setup.

I was frustrated to learn that it does not seem to actually work.

So I went digging, and found that STL_LOAD_ERRORS contained errors suggesting that a delimiter was expected, alongside raw lines whose beginnings looked like {field:val,field:val}{field:val,field:val}.

...{"TICKER_SYMBOL": |     1214 | Delimiter not found 
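For anyone retracing this, the snippet above comes from Redshift's stl_load_errors system table; a query along these lines surfaces the offending file, the raw line, and the error reason (error code 1214 is "Delimiter not found"):

```sql
-- Show the most recent COPY failures with the raw input that triggered them
SELECT starttime, filename, line_number, colname,
       err_code, err_reason, raw_line
FROM stl_load_errors
ORDER BY starttime DESC
LIMIT 10;
```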

"Must be stripping newlines somewhere," I thought.

After digging further, I found that the records the producer wrote to the relevant S3 bucket are in a surprising format:

{field:val,field:val}{field:val,field:val}...

That is, there are no delimiters between the apparent records; each S3 object is a single line of several dozen kilobytes.

Other Stack Overflow posts suggest that this is actually the expected data format.

Why does Redshift expect data in a format the demo producer doesn't emit? Which side do I reconfigure?


Solution

  • Okay. There were three problems.

    1. The AWS example data producer emits data in a format that requires alterations to the default Redshift COPY command, but the documentation doesn't tell you that.
    2. The COPY command needs FORMAT AS JSON 's3://yourbucketname/aJsonPathFile.txt' added to it.
    3. You need to create that JSON path file yourself, because the default data producer emits upper-case field names, which Redshift cannot match to its lower-case column names.
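Putting the three fixes together, a sketch of what worked for me follows. The field names beyond TICKER_SYMBOL (SECTOR, CHANGE, PRICE) are an assumption based on the AWS stock-ticker demo data, and the bucket, prefix, table, and IAM role names are hypothetical placeholders; adjust all of them to your setup. First, the JSON path file, which maps the producer's upper-case keys onto your table's columns in order:

```json
{
  "jsonpaths": [
    "$['TICKER_SYMBOL']",
    "$['SECTOR']",
    "$['CHANGE']",
    "$['PRICE']"
  ]
}
```

Upload that file to S3, then point the COPY command at it:

```sql
COPY stock_ticks
FROM 's3://yourbucketname/firehose-prefix/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS JSON 's3://yourbucketname/aJsonPathFile.txt';
```

The key point is that Redshift's JSON parser reads one JSON object after another and does not require delimiters between them, which is why FORMAT AS JSON succeeds on the concatenated {...}{...} records where the default delimiter-based parser reports error 1214.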