Tags: oracle, performance, plsql, query-optimization, stored-functions

Joining tables with table type slows down calculation?


I have a big calculation that joins about 10 tables and computes some values from the result. I want to write a function that lets me replace one of the joined tables (let's call it Table A) with a table (type) I pass in as an input parameter.

I have defined row and table types for Table A like

create or replace TYPE t_tableA_row AS OBJECT (/* all columns of Table A */);
create or replace TYPE t_tableA_table AS TABLE OF t_tableA_row;

And the same for the types of the calculation result I need as the function's output. My function looks like this:

create or replace FUNCTION calculation_VarInput (varTableA t_tableA_table)
  RETURN t_calculationResult_table
AS
  result_ t_calculationResult_table;
BEGIN
  SELECT t_calculationResult_row(/* all columns of the calculation result */)
  BULK COLLECT INTO result_
  FROM (/* the calculation */);

  RETURN result_;
END;

If I test this function with the normal calculation that just uses Table A (ignoring the input parameter), it works fine and takes about 3 seconds. However, if I replace Table A with varTableA (the input parameter, a table type matching Table A), the calculation takes so long I have never seen it finish.

When I use Table A, the calculation looks like this:

/* inside the calculation */
/* a bunch of tables being joined */
JOIN TableA A ON A.Value = B.SomeOtherValue
JOIN /* some other tables */

When I use varTableA, it's:

/* inside the calculation */
/* a bunch of tables being joined */
JOIN TABLE(varTableA) A ON A.Value = B.SomeOtherValue
JOIN /* some other tables */

Sorry for not posting the exact code, but the calculation is huge and would really bloat this post.

Any ideas why joining against the table type makes the calculation so much slower than joining the actual table?


Solution

  • Your function encapsulates the selection logic and so hides information from the optimizer. This may lead the optimizer to make bad or inefficient decisions.

    Oracle has gathered statistics for TableA, so the optimizer knows how many rows it has, which columns are indexed, and so on; consequently it can work out the best access path for the table. It has no statistics for TABLE(varTableA), so it assumes the collection will return 8192 (i.e. 8K) rows. That default can produce a very different execution plan if the real TableA returns, say, 8 rows, or 80,000. You can check this easily enough by running EXPLAIN PLAN for both versions of the query.
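    A sketch of how to compare the two plans (the simplified query is illustrative; for the collection version, which runs inside PL/SQL, it is easier to execute the function once and pull the actual plan from the cursor cache, then look at the Rows estimate on the COLLECTION ITERATOR PICKLER FETCH step):

    ```sql
    -- Plan for the plain-table version:
    EXPLAIN PLAN FOR
      SELECT *
      FROM   B
      JOIN   TableA A ON A.Value = B.SomeOtherValue;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    -- Plan for the collection version, after running the function once
    -- (find the statement's sql_id in V$SQL first):
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(sql_id => '&sql_id'));
    ```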

    If that is the problem, add a /*+ cardinality */ hint to the query that accurately reflects the number of rows in the function's result set. The hint (hint, not function) tells the optimizer the number of rows it should use in its calculation.
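    A sketch of the hint applied to your query (the figure 10 is invented for illustration; substitute a value close to the collection's real row count):

    ```sql
    SELECT /*+ CARDINALITY(A 10) */  -- tell the optimizer varTableA holds ~10 rows
           t_calculationResult_row(/* all columns of the calculation result */)
    BULK COLLECT INTO result_
    FROM   /* a bunch of tables being joined */
    JOIN   TABLE(varTableA) A ON A.Value = B.SomeOtherValue
    JOIN   /* some other tables */;
    ```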

    "I don't want to actually change the values in my tables permanently, I just want to know what the calculation result would be if some values were different."

    Why not use a view instead? A simple view that selects from TableA and applies the required modifications in its projection. Of course, I know nothing about your data and how you want to manipulate it, so this may be impractical for all sorts of reasons, but it's where I would start.
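    As a sketch of that idea (the column names and the price adjustment are invented for illustration):

    ```sql
    CREATE OR REPLACE VIEW tableA_whatif AS
    SELECT A.Id,
           A.SomeValue,
           A.Price * 1.10 AS Price   -- hypothetical "what if prices rose 10%" tweak
    FROM   TableA A;
    ```

    The calculation then joins tableA_whatif instead of TableA; because the view is just a stored query over the base table, the optimizer still sees TableA's statistics and indexes.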