In my mapping it takes a very long time to load rows into the target table, but it reads those rows from the .CSV file within a minute. The mapping has two sources: a flat file and a table that holds the data currently in the target. The logic compares the data from the flat file with the source table (the data from the target table) and decides whether to UPDATE, DELETE, or INSERT in the target. Looking at the session log, the flat file is read within a minute, but rows are written to the target at only about 9 rows/sec. The target table is created by a unix script, i.e. CREATE TABLE STG_LM_INSTITUTION as (SELECT * FROM LM_INSTITUTION);

The target in this mapping is STG_LM_INSTITUTION, which is a copy of the final target (LM_INSTITUTION). I suspect the problem is caused by the way the target table is created in the script, but I'm not sure. Can anyone please help me solve this issue? The source flat file has 2L (200,000) rows, and I ran the mapping with all of them; after 11 hours it had loaded only 1L (100,000) records into the target. But when I run it with only 500 rows, the records reach the target within a minute.
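One note on that suspicion: in Oracle, a CREATE TABLE ... AS SELECT copies the column definitions and data but not the primary key, indexes, or foreign keys of LM_INSTITUTION, so the staging table starts out without them. A quick way to check what actually carried over is to query the data dictionary as the owning schema (just a sketch, not part of the original post):

    -- indexes present on the staging copy (CTAS itself creates none)
    SELECT index_name, column_name, column_position
      FROM user_ind_columns
     WHERE table_name = 'STG_LM_INSTITUTION'
     ORDER BY index_name, column_position;

    -- constraints that carried over (typically only NOT NULL checks)
    SELECT constraint_name, constraint_type
      FROM user_constraints
     WHERE table_name = 'STG_LM_INSTITUTION';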
SQL_LM_INST_SEQ retrieves PK values from an Oracle sequence, and that's costly, because for every inserted row a roundtrip to the database is necessary to get a new ID.
Use a Sequence Generator transformation instead; the Integration Service will generate the IDs on its own.
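To make the cost concrete: a SQL transformation that fetches keys this way typically fires something like the statement below once for every row it inserts (the sequence name LM_INST_SEQ is an assumption based on the transformation name, not taken from the actual mapping):

    -- executed once per inserted row, i.e. one database roundtrip per row
    SELECT LM_INST_SEQ.NEXTVAL FROM DUAL;

At roughly 9 rows/sec that per-row query dominates the load time, whereas a Sequence Generator produces the next value in memory inside the Integration Service, so no per-row query is needed at all.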