I have a workflow that reads XML data from files using an XML Parser transformation and loads 12 target tables. It runs successfully, but the problem is the throughput (rows/sec) while reading the data from the files. With 10 or fewer files it gives a throughput of about 10 rows/sec. But if I provide more than 10 files, the throughput starts at 4 to 5 rows/sec at the beginning of the workflow and then suddenly drops to 1 row/sec, and it stays at 1 row/sec for all the remaining files. Sometimes I have 300 or 400 files, and just reading them takes far too long at a throughput of 1 row/sec.
I have tried to improve this by increasing the DTM buffer size and the default buffer block size. I have also tried setting the dynamic partitioning option to "Based on number of partitions".
But no success.
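For anyone tuning these same properties: the usual approach is to size the buffer block so it can hold a reasonable number of the widest rows produced by the XML Parser, and then size the DTM buffer so each source/target gets a few blocks. The sketch below is only an illustration of that arithmetic in Python, not an Informatica API; the "about 100 rows per block" heuristic, the 4096-byte row width, and the 13 sources/targets (1 source plus 12 targets) are assumptions you would replace with numbers from your own session log.

```python
# Rough buffer sizing sketch (illustrative only, not an Informatica API).
# Assumption: a buffer block should hold on the order of ~100 of the widest rows.

def suggest_block_size(max_row_precision_bytes: int, rows_per_block: int = 100) -> int:
    """Buffer block size (bytes) able to hold `rows_per_block` of the widest rows."""
    return max_row_precision_bytes * rows_per_block

def suggest_dtm_buffer(block_size: int, num_sources_and_targets: int) -> int:
    """DTM buffer sized so each source/target connection gets a couple of blocks."""
    return block_size * num_sources_and_targets * 2

if __name__ == "__main__":
    widest_row = 4096  # bytes; hypothetical widest row coming out of the XML Parser
    block = suggest_block_size(widest_row)
    print("suggested buffer block size:", block, "bytes")
    print("suggested DTM buffer size:  ", suggest_dtm_buffer(block, 13), "bytes")
```

In my case, though, the buffers were not the limiting factor, which is why these changes made no difference.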
As the session log statistics showed, the two Joiners were the bottleneck: their busy percentage was almost 100 percent. So I added Sorters before those Joiners and reduced the precision values to stay under the Sorter row size limit of 8 MB. Now the throughput has increased as well, and it is on the order of a file per second rather than one row per second.
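The reason sorted input helps is that a Joiner fed with key-sorted streams can do a single merge pass instead of repeatedly probing a large detail cache. The sketch below shows that merge-join idea in generic Python; it is only a conceptual illustration of why the Sorters removed the bottleneck, not how Informatica implements the Joiner internally.

```python
# Illustration of a merge join over two inputs already sorted on the join key.
# One forward pass over both inputs replaces repeated scans of the detail side.

def merge_join(master, detail, key=lambda row: row[0]):
    """Yield matching (master_row, detail_row) pairs from two key-sorted lists."""
    i = j = 0
    while i < len(master) and j < len(detail):
        km, kd = key(master[i]), key(detail[j])
        if km < kd:
            i += 1
        elif km > kd:
            j += 1
        else:
            # Emit every detail row that shares this key with the current master row.
            j2 = j
            while j2 < len(detail) and key(detail[j2]) == km:
                yield master[i], detail[j2]
                j2 += 1
            i += 1

master = sorted([(1, "a"), (2, "b"), (3, "c")])
detail = sorted([(1, "x"), (1, "y"), (3, "z")])
print(list(merge_join(master, detail)))
# [((1, 'a'), (1, 'x')), ((1, 'a'), (1, 'y')), ((3, 'c'), (3, 'z'))]
```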