I'm working on an application wherein I'll be loading data into Redshift.
I want to upload the files to S3 and use the COPY command to load the data into multiple tables.
For every such iteration, I need to load the data into around 20 tables.
I'm currently creating 20 CSV files per iteration, one for each table, and loading each file into its table with COPY. For the next iteration, 20 new CSV files are created and loaded into Redshift the same way.
With the current system, each CSV file contains at most 1000 rows, so each iteration loads at most 20,000 rows across the 20 tables.
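For reference, the per-iteration flow currently looks roughly like this (a minimal sketch using boto3 and psycopg2; the bucket, IAM role, table names and connection details below are placeholders, not my real values):

```python
import boto3
import psycopg2

# Placeholder values -- bucket, role and connection details are illustrative only.
S3_BUCKET = "my-load-bucket"
IAM_ROLE = "arn:aws:iam::123456789012:role/my-redshift-copy-role"
TABLES = [f"table_{i}" for i in range(1, 21)]   # the ~20 target tables

s3 = boto3.client("s3")
conn = psycopg2.connect(host="my-cluster.example.redshift.amazonaws.com",
                        port=5439, dbname="mydb", user="loader", password="***")

def load_iteration(iteration_id):
    """Upload one CSV per table to S3, then COPY each file into its table."""
    with conn.cursor() as cur:
        for table in TABLES:
            local_file = f"{table}_{iteration_id}.csv"        # <= ~1000 rows each
            key = f"iteration_{iteration_id}/{local_file}"
            s3.upload_file(local_file, S3_BUCKET, key)
            cur.execute(
                f"COPY {table} FROM 's3://{S3_BUCKET}/{key}' "
                f"IAM_ROLE '{IAM_ROLE}' FORMAT AS CSV;"
            )
    conn.commit()
```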
I want to improve the performance even further. I've gone through https://docs.aws.amazon.com/redshift/latest/dg/t_Loading-data-from-S3.html
At this point, I'm not sure how long it will take to load one file into one Redshift table. Is it really worth splitting every file into multiple files and loading them in parallel?
Is there any source or calculator that gives approximate performance metrics for loading data into Redshift tables, based on the number of columns and rows, so that I can decide whether to go ahead with splitting files even before moving to Redshift?
You should also read through the recommendations in the Load Data - Best Practices guide: https://docs.aws.amazon.com/redshift/latest/dg/c_loading-data-best-practices.html
Regarding the number of files and loading data in parallel, the recommendations are:

- Split your load data into multiple files of roughly equal size, ideally between 1 MB and 1 GB after compression.
- Load the files with a single COPY command (e.g. via a common key prefix) rather than issuing one COPY per file.
- Make the number of files a multiple of the number of slices in your cluster.
That last point is significant for achieving maximum throughput: if your cluster has 8 slices then you want a multiple of 8 files, e.g. 16, 32, 64 ... this is so all slices are doing maximum work in parallel.
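As a rough illustration of that point, here's a minimal sketch (Python with boto3; the bucket, IAM role, table name and slice count are assumed placeholders, not anything from your setup) that splits one table's rows into a number of CSV parts equal to the slice count, uploads them under a common key prefix, and loads them all with a single COPY:

```python
import csv
import io
import boto3

# Assumed placeholders -- substitute your own bucket, role and slice count.
S3_BUCKET = "my-load-bucket"
IAM_ROLE = "arn:aws:iam::123456789012:role/my-redshift-copy-role"
NUM_SLICES = 8          # total slices in the cluster (nodes x slices per node)

s3 = boto3.client("s3")

def split_and_load(table, rows, conn):
    """Write rows into NUM_SLICES roughly equal CSV parts under one prefix,
    then load them all with a single COPY against that prefix.
    `conn` is an open psycopg2 connection to the cluster."""
    prefix = f"split/{table}/part_"
    for part in range(NUM_SLICES):
        buf = io.StringIO()
        csv.writer(buf).writerows(rows[part::NUM_SLICES])   # round-robin split
        s3.put_object(Bucket=S3_BUCKET,
                      Key=f"{prefix}{part:04d}.csv",
                      Body=buf.getvalue().encode("utf-8"))

    # One COPY per table: Redshift loads every object under the prefix in parallel.
    with conn.cursor() as cur:
        cur.execute(
            f"COPY {table} FROM 's3://{S3_BUCKET}/{prefix}' "
            f"IAM_ROLE '{IAM_ROLE}' FORMAT AS CSV;"
        )
    conn.commit()
```

Because the COPY targets the key prefix rather than a single object, Redshift distributes the part files across slices, which is where the multiple-of-slices guidance comes from.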
That said, 20,000 rows is such a small amount of data in Redshift terms that I'm not sure any further optimisations would make a significant difference to the speed of your process as it stands.