Tags: oracle-database · file-io · tuxedo

What is the best way to extract big data to file?


I am using Oracle as the DBMS and Tuxedo as the application server. The customer needs data exported from Oracle to a SAMFILE for interfacing purposes. Unfortunately, the data set is huge (over 10 million records), so I was wondering what the best practice is for extracting large amounts of data to a file on the database server.

I am used to opening a cursor, fetching a record, and then writing it to the file. Is there a better, i.e. faster, way to handle this? It is a recurring task.


Solution

  • I suggest you read Adrian Billington's article on tuning UTL_FILE. It covers all the bases. Find it here.

    The important thing is buffering records to reduce the number of file I/O calls. You will need to benchmark the different implementations to see which one works best in your situation.

    Pay attention to his advice on query performance. Optimising file I/O is pointless if most of the time is spent on data acquisition.
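    The buffered approach can be sketched in PL/SQL roughly as follows. This is a minimal illustration, not code from the article: the directory object, file name, table, and columns are placeholders you would replace with your own. Rows are fetched in bulk with `BULK COLLECT ... LIMIT` and concatenated into a `VARCHAR2` buffer, so `UTL_FILE.PUT` is called once per batch instead of once per row.

    ```sql
    DECLARE
      TYPE t_lines IS TABLE OF VARCHAR2(200);
      l_lines  t_lines;
      l_buffer VARCHAR2(32767);
      l_file   UTL_FILE.FILE_TYPE;
      -- Placeholder query: replace with your actual extract.
      CURSOR c IS
        SELECT cust_id || ',' || cust_name FROM customers;
    BEGIN
      -- 'EXTRACT_DIR' is an assumed Oracle directory object on the DB server.
      l_file := UTL_FILE.FOPEN('EXTRACT_DIR', 'extract.dat', 'w', 32767);
      OPEN c;
      LOOP
        -- Fetch 1,000 rows per round trip; tune this via benchmarking.
        FETCH c BULK COLLECT INTO l_lines LIMIT 1000;
        EXIT WHEN l_lines.COUNT = 0;
        l_buffer := NULL;
        FOR i IN 1 .. l_lines.COUNT LOOP
          -- Flush before the buffer could exceed the 32,767-byte limit.
          IF NVL(LENGTH(l_buffer), 0) + LENGTH(l_lines(i)) + 1 > 32767 THEN
            UTL_FILE.PUT(l_file, l_buffer);
            l_buffer := NULL;
          END IF;
          l_buffer := l_buffer || l_lines(i) || CHR(10);
        END LOOP;
        -- One file I/O call for the whole batch.
        UTL_FILE.PUT(l_file, l_buffer);
      END LOOP;
      CLOSE c;
      UTL_FILE.FCLOSE(l_file);
    END;
    /
    ```

    The batch size and buffer size are the knobs to benchmark; the article compares several such variants in detail.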