I am developing an extension in Tableau which records user decisions as rows in DataTables (using JavaScript) and sends the modified rows to AWS Lambda in JSON format. In the Lambda we read the data from the rows, build the necessary UPDATE SQL queries, and run them on a Redshift database one by one in a for loop.
But this process is taking too much time. Can someone suggest a better approach?
Note: Each modified row may have different values.
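Roughly, the Lambda does something like the sketch below (a simplified, hypothetical version assuming a Python Lambda with psycopg2; the real table and column names are different):

```python
# Hypothetical sketch of the current approach: one UPDATE and one round trip
# per modified row. Table/column names (decisions, decision_id,
# decision_value) are placeholders.
import json
import psycopg2

def handler(event, context):
    rows = json.loads(event["body"])  # modified rows sent by the extension
    conn = psycopg2.connect(host="...", dbname="...", user="...", password="...")
    with conn.cursor() as cur:
        for row in rows:  # each iteration is a separate statement on Redshift
            cur.execute(
                "UPDATE decisions SET decision_value = %s WHERE decision_id = %s",
                (row["decision_value"], row["decision_id"]),
            )
    conn.commit()
```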
Do you know where all the time is being spent? I have a guess or two about what could be going on, but data would help.
Redshift is a columnar database and is designed for bulk uploads of data, not row-by-row updates. It is also designed for large amounts of data (100GB and up). This doesn't look like a matching use case, but you haven't provided enough information to be sure. Redshift is also not optimized to move data from SQL to the compute nodes, so if you are loading the data as SQL literals this won't be fast.
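If the volume per batch is modest and you want to stay on Redshift, one way to cut the per-row round trips is to stage all the modified rows in a temp table with a single multi-row insert and then apply them with one set-based UPDATE. A sketch, assuming psycopg2 and the same hypothetical decisions / decision_id / decision_value names as in the sketch in the question (the S3 bulk-load approach described below is still the better fit for real volume):

```python
# Sketch: collapse N single-row UPDATEs into one staged, set-based UPDATE.
# Table/column names are placeholders.
import psycopg2
from psycopg2.extras import execute_values

def apply_updates(conn, modified_rows):
    with conn.cursor() as cur:
        cur.execute("""
            CREATE TEMP TABLE staging_updates (
                decision_id BIGINT,
                decision_value VARCHAR(256)
            )
        """)
        # One multi-row INSERT instead of one statement per row.
        execute_values(
            cur,
            "INSERT INTO staging_updates (decision_id, decision_value) VALUES %s",
            [(r["decision_id"], r["decision_value"]) for r in modified_rows],
        )
        # One set-based UPDATE joined against the staged rows.
        cur.execute("""
            UPDATE decisions
            SET decision_value = s.decision_value
            FROM staging_updates s
            WHERE decisions.decision_id = s.decision_id
        """)
    conn.commit()
```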
If you do have a lot of data you will want to bundle up enough records (100MB+) before loading to Redshift, and do this through S3. This means the data will wait a bit before becoming available in Redshift; if that doesn't meet your needs, you may not have the right services picked out. In this model your Lambda combines records into large S3 files, which are COPY'ed into Redshift at intervals. This is basically what Kinesis Firehose does, so you may want to look into that AWS service.
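A minimal sketch of that buffer-then-COPY pattern, assuming Python with boto3 and psycopg2; the bucket name, key prefix, IAM role ARN, and staging_updates table are placeholders:

```python
# Sketch: accumulate a batch in S3 as newline-delimited JSON, then load it
# into Redshift with a single COPY. All names and ARNs are placeholders.
import json
import uuid
import boto3
import psycopg2

s3 = boto3.client("s3")
BUCKET = "my-staging-bucket"

def stage_batch_to_s3(modified_rows):
    """Write one batch of modified rows to S3 as newline-delimited JSON."""
    key = f"decision-updates/{uuid.uuid4()}.json"
    body = "\n".join(json.dumps(r) for r in modified_rows)
    s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode("utf-8"))
    return key

def copy_batch_into_redshift(conn, key):
    """Load the staged file into Redshift with one COPY statement."""
    with conn.cursor() as cur:
        cur.execute(f"""
            COPY staging_updates
            FROM 's3://{BUCKET}/{key}'
            IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-copy-role'
            FORMAT AS JSON 'auto'
        """)
    conn.commit()
```

Once the COPY has landed in the staging table, the same set-based UPDATE ... FROM shown earlier applies the staged rows to the target table.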
Redshift is optimized for many reads and few, large writes. If you need roughly as many (small) writes as reads, you likely need a transactional database. If you want to stay with a PostgreSQL-style interface, then PostgreSQL on RDS is an option.
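If you do move these writes to PostgreSQL on RDS, many small parameterized updates are a normal workload there; a short sketch with the same hypothetical schema:

```python
# Sketch: apply the batch against PostgreSQL on RDS, where frequent small
# writes are a normal workload. Table/column names are placeholders.
import psycopg2
from psycopg2.extras import execute_batch

def apply_updates_postgres(conn, modified_rows):
    with conn.cursor() as cur:
        # execute_batch groups many UPDATEs into far fewer round trips
        # than calling execute() once per row.
        execute_batch(
            cur,
            "UPDATE decisions SET decision_value = %s WHERE decision_id = %s",
            [(r["decision_value"], r["decision_id"]) for r in modified_rows],
        )
    conn.commit()
```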