I have a table like the following:
Create Table Txn_History (
  ID number,
  Comment varchar2(300),
  ... (Another 20 columns),
  Std_hash raw(1000)
) nologging;
This table is 8 GB with 19 million rows, growing by around 50,000 rows daily.
I need to delete 300,000 rows and update 100,000 rows. I know that delete and update statements normally cause the Oracle database to generate redo. The only way I know to avoid this is to create a new table with the updated result.
However, considering that the delete and update touch only about 2% of the entire table, it hardly seems worth creating a new table and then rebuilding all the corresponding indexes.
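For reference, the rebuild approach I mean would look roughly like this (a sketch only; the new table name and the driving list of IDs to delete are placeholders):

-- Copy only the surviving rows (applying the updates via CASE expressions
-- in the select list) into a new segment with minimal redo, then swap the
-- tables and recreate indexes, constraints and grants.
Create Table Txn_History_New nologging as
Select ID,
       Comment,
       ... (Another 20 columns),
       Std_hash
From   Txn_History
Where  ID not in (Select ID From Ids_To_Delete); -- Ids_To_Delete is a hypothetical driving table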
Do you have any other ideas?
To be honest, I don't think redo generation is a big problem here: it's just 300k rows to delete and 100k rows to update. For such batch operations Oracle uses the fast "array update" redo operation. You probably need to trace your operation to find the real bottlenecks and the load profile (IO/CPU, access paths, triggers, indexes, etc.).
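For example, one way to get that load profile is an extended SQL trace of the batch session, then format the trace file with tkprof. A minimal sketch using DBMS_MONITOR from SQL*Plus:

-- Enable tracing with waits and binds for the current session,
-- run the batch DML, then stop tracing and inspect the trace file.
exec dbms_monitor.session_trace_enable(waits => true, binds => true);

-- ... run the batch delete/update here ...

exec dbms_monitor.session_trace_disable;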
Basically, it's better to use the partitioning option properly so you can update/delete (or truncate) whole partitions at a time.
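For instance, if Txn_History were partitioned so that the rows to be removed sit in their own partition, the delete reduces to something like this (the partition name is an assumption):

-- Truncating a whole partition generates almost no redo/undo
-- compared with a conventional row-by-row delete.
alter table Txn_History truncate partition p_old update indexes;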
There is also the new alter table ... move including rows where ... feature starting from Oracle 12.2:
https://blogs.oracle.com/sql/how-to-delete-millions-of-rows-fast-with-sql
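A minimal sketch of that syntax (the filter predicate here is only an example):

-- Rebuilds the segment keeping only the rows that satisfy the predicate;
-- everything else is discarded without a conventional delete.
alter table Txn_History move including rows where Std_hash is not null;

Note that an offline move leaves the indexes unusable and they need rebuilding afterwards; 12.2 also supports an online move that maintains the indexes during the operation.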