I am using Oracle 11g and have a table with an XMLType column. There are many records in this table, and each XML document contains a list of 0 to n answers, as follows:
<section>
  <answer id="100">
    <value>Something</value>
  </answer>
  ...
</section>
There are two ways I can use XPath to pull out the values.
Example 1:
extract(table.column, '//answer[@id=100]').getStringVal()
which returns
<answer id="100">
  <value>Something</value>
</answer>
Example 2:
extractvalue(table.column, '//answer[@id=100]/value/text()')
which returns
Something
There is a performance hit when extracting many text() values in a complex query, which has me asking the following question: would I gain better performance by using a library like XStream to parse those values out in Java (feeding it the fragments from Example 1), or is it better to let Oracle parse those values for me (Example 2)?
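For what the JVM-side option might look like: if the fragments from Example 1 are brought back as strings, they can be parsed with the JDK's built-in DOM and XPath APIs (no XStream dependency needed). This is only a minimal sketch; the class and method names are my own, not from the question:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class AnswerParser {
    // Parse an XML fragment (as returned by extract(...).getStringVal())
    // and pull out the <value> text for a given answer id.
    static String answerValue(String xml, String id) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        // XPath.evaluate(String, Object) returns the text content
        // of the first matching node.
        return XPathFactory.newInstance().newXPath()
                .evaluate("//answer[@id='" + id + "']/value", doc);
    }

    public static void main(String[] args) throws Exception {
        String fragment = "<answer id=\"100\"><value>Something</value></answer>";
        System.out.println(answerValue(fragment, "100")); // prints Something
    }
}
```

Note that this re-parses each fragment per row, so the comparison is really "Oracle's XPath engine once per value" versus "a JVM parse per fragment plus a local XPath".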
I am trying to learn from other people's experiences, or perhaps some whitepapers, before I commit the time to rewriting a lot of DAOs and queries, so anything is much appreciated. Thanks!
================== EDIT
I have started looking into Scala as an alternative for parsing XML on the JVM. Feel free to add Scala examples to the mix.
I finally got around to testing and here is what I found.
For a simple "reporting" JSP view backed by a table, I ran a query that populates 850 rows.
Given my codebase, I ran the query first as follows:
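The exact statement is not shown here; judging from the conclusion below, this first run was presumably the Example 1 form (extract the whole fragment, then parse it in the JVM). A hypothetical sketch, with placeholder table and column names:

```sql
SELECT extract(t.xml_col, '//answer[@id=100]').getStringVal()
FROM   my_table t;
```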
This took a total of:
8.27 seconds to run the query alone.
15 seconds total to go from start to a finished view.
I then altered the query to do the following:
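Again the statement itself is omitted; this second run was presumably the Example 2 form, letting Oracle extract each text() value. A hypothetical sketch with the same placeholder names:

```sql
SELECT extractvalue(t.xml_col, '//answer[@id=100]/value/text()')
FROM   my_table t;
```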
This took a total of:
24.84 seconds to run the query alone.
60 seconds total to go from start to a finished view.
This is, of course, specific to my codebase, but I think I can rule out XStream as the source of the inefficiency. It seems that XMLType and XPath extraction are not very efficient in the first place.