I have enabled the Export to Data Lake feature in D365 F&O and created external tables in the serverless SQL pool database in Synapse to read the CSV files. It worked fine for about 6 months, but now I am facing performance issues due to the huge volume of data: we join roughly 10 tables, each with millions of rows, and getting a result takes around 30 seconds, sometimes 40; it varies.
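For context, the serverless setup looked roughly like the sketch below. The data source, file format, table, columns, and paths are placeholders, not the real F&O schema:

```sql
-- Illustrative only: placeholder names and paths, adjust to your storage account and table layout.
CREATE EXTERNAL DATA SOURCE DataLakeSource
WITH (LOCATION = 'https://<storageaccount>.dfs.core.windows.net/<container>');

CREATE EXTERNAL FILE FORMAT CsvFileFormat
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',', STRING_DELIMITER = '"')
);

-- One external table per exported F&O table; the queries then join ~10 of these.
CREATE EXTERNAL TABLE dbo.CustTransExt
(
    RecId      BIGINT,
    AccountNum NVARCHAR(20),
    TransDate  DATE,
    AmountCur  DECIMAL(32, 6)
)
WITH (
    LOCATION = '/Tables/CustTrans/',
    DATA_SOURCE = DataLakeSource,
    FILE_FORMAT = CsvFileFormat
);
```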
I am using a Logic App and a Function App to invoke the SQL queries.
My understanding was that a Synapse serverless SQL pool would automatically handle the load and I would get results within 3-4 seconds, but it is taking 40 seconds.
I went through all of the best practices below, but they didn't help:
https://learn.microsoft.com/en-us/azure/synapse-analytics/sql/best-practices-serverless-sql-pool
Finally, since we have complex joins and the approach suggested above did not work, I moved the ADLS CSV data into a dedicated SQL pool database using a Synapse pipeline and data flow. In the dedicated SQL pool we can scale the performance level up, and we can also implement materialized views, statistics, and indexes, which helped me meet the API performance expectation.
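This is roughly the kind of thing that helped in the dedicated SQL pool; again, the table, column, and view names are only illustrative:

```sql
-- Staged copy of the data, hash-distributed on the main join key with a clustered columnstore index.
CREATE TABLE dbo.CustTrans
(
    RecId      BIGINT        NOT NULL,
    AccountNum NVARCHAR(20)  NOT NULL,
    TransDate  DATE          NOT NULL,
    AmountCur  DECIMAL(32,6) NOT NULL
)
WITH (DISTRIBUTION = HASH(AccountNum), CLUSTERED COLUMNSTORE INDEX);

-- Statistics on join/filter columns so the optimizer gets accurate estimates.
CREATE STATISTICS st_CustTrans_AccountNum ON dbo.CustTrans (AccountNum);

-- Materialized view that pre-computes the expensive aggregation the API query needs.
CREATE MATERIALIZED VIEW dbo.mvCustTransSummary
WITH (DISTRIBUTION = HASH(AccountNum))
AS
SELECT AccountNum, COUNT_BIG(*) AS TransCount, SUM(AmountCur) AS TotalAmount
FROM dbo.CustTrans
GROUP BY AccountNum;
```

The point of the materialized view is that the heavy work is done once at load time, so the API query only reads a small pre-computed result instead of joining the raw tables on every call.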
In serverless, I was able to improve performance somewhat by using Parquet files instead of CSV, but in the end I decided it was better to move the data into the dedicated SQL pool.
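For anyone staying on serverless, one option is converting the CSVs to Parquet with CETAS. A sketch along these lines, reusing the placeholder names from the first example and assuming the data source's credential has write access to the target folder:

```sql
CREATE EXTERNAL FILE FORMAT ParquetFileFormat
WITH (FORMAT_TYPE = PARQUET);

-- CETAS: reads the CSV-backed external table and writes Parquet files to the target folder,
-- then exposes them as a new external table that is cheaper to scan and filter.
CREATE EXTERNAL TABLE dbo.CustTransParquet
WITH (
    LOCATION = '/Parquet/CustTrans/',
    DATA_SOURCE = DataLakeSource,
    FILE_FORMAT = ParquetFileFormat
)
AS
SELECT * FROM dbo.CustTransExt;
```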
Thanks, everyone, for the suggestions and your time.