Tags: java, sql, oracle-sqldeveloper, mybatis, ibatis

Running a query as a SQL Developer script from Java and saving the script's output


I have a table in my database. I want to fetch it as a .csv file using Java and iBATIS.

I know that a quick way to do this with the SQL Developer client would be to write the following query:

select /*csv*/* from employees

and execute it as a script in the SQL Developer client (as shown here).
I thought that com.ibatis.common.jdbc.ScriptRunner might suit my needs, but I can't find a way to capture the script's output into a .csv file.
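
For reference, here is roughly how ScriptRunner's output can be pointed at a file (a minimal sketch, assuming the stock com.ibatis.common.jdbc.ScriptRunner API; the connection details are placeholders). Note that its log output is tab-separated plain text, not CSV, and the /*csv*/ hint is interpreted by the SQL Developer client, not by the database:

    import java.io.FileWriter;
    import java.io.PrintWriter;
    import java.io.StringReader;
    import java.sql.Connection;
    import java.sql.DriverManager;

    import com.ibatis.common.jdbc.ScriptRunner;

    public class ScriptRunnerToFile {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details.
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:XE", "user", "pass");
            ScriptRunner runner = new ScriptRunner(conn, false, true);

            // ScriptRunner echoes statements and result rows to its log writer,
            // so redirecting the writer captures the script's output.
            PrintWriter out = new PrintWriter(new FileWriter("script-output.txt"));
            runner.setLogWriter(out);
            runner.runScript(new StringReader("select * from employees;"));
            out.close();
            conn.close();
        }
    }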

Is it even possible to execute the query above from Java?

Is there a faster way to do it? (Sending a plain select query and parsing its results myself is not an option for me.)

Thanks.


Solution

  • Well, the best solution I found so far was to send a query that does the CSV formatting in SQL itself (the third column name is missing from the original query; EMP_SALARY is assumed here to match the sample output below):
    SELECT EMP_NAME || ', ' || EMP_LAST || ', ' || EMP_SALARY FROM EMP_TABLE
    Map file:

    <select id="testing" resultClass="java.lang.String">
        <!-- EMP_SALARY assumed for the third column shown in the sample output -->
        SELECT EMP_NAME || ', ' || EMP_LAST || ', ' || EMP_SALARY FROM EMP_TABLE
    </select>
    

    The query returns the rows as already-formatted strings:

    "Joe, Smith , 1000"
    "Dave, Devon , 1300"
    ...
    "Zoee, Zinther , 900"
    

    I tried 2 approaches:
    1. rowHandler (using a SelectToFile handler; a sketch of it follows the timing code):

     long started1 = System.nanoTime();
     // Stream each row to the file as it is fetched.
     SelectToFile selectToFileRowHandler = new SelectToFile("test1.csv");
     sqlMap.queryWithRowHandler("map.testing", selectToFileRowHandler);
     long time1 = (System.nanoTime() - started1);
     System.out.println("with row handler: " + time1);
    

    2. queryForList:

     long started2 = System.nanoTime();
     @SuppressWarnings("unchecked")
     List<String> lst = (List<String>) sqlMap.queryForList("map.testing");
     // The whole result set is materialized in memory before writing.
     BufferedWriter bw = new BufferedWriter(new FileWriter("test2.csv"));
     for (String str : lst) {
         bw.write(str);
         bw.newLine();
     }
     bw.close();
     long time2 = (System.nanoTime() - started2);
     System.out.println("with list: " + time2);
    

    The results were:

    with row handler: 2242707000
    with list: 2455066000
    rowHandler was faster by 212359000 ns (approx. 0.21 sec, for 50K rows / a 7 MB csv file).
    

    If anyone knows a faster way, I would happily test it and post the results.