Tags: oracle, oracle11g, query-optimization, oracle12c, database-administration

Job is failing because of temp space getting full


Our customer is loading the last 3 years of data into a new reporting product, which amounts to millions of records. Our intermediate query produces 964M records, and at this stage the job fails due to insufficient temp storage. They have already increased the temp space three times. Our query plan and joins are correct.

My questions are:

  1. Can we give them an estimate of the form: for a billion records you will need this many data files in your temp tablespace?
  2. Is it possible to calculate such an estimate at all? I talked with some experts and they said this kind of capacity planning is not possible.

I know the data volume is huge and they may need to upgrade hardware as well, but is there anything we can suggest they change at the memory level and, if possible, at the hardware level?

If it helps, I can attach the explain plan and trace here.


Solution

  • As pmdba noted, an EXPLAIN PLAN will help estimate the temp space required, but it is strictly an estimate. Notice that your execution plan has many nested hash joins, so temp space needs can easily compound, because multiple hash joins can be "active" at the same time (i.e., one has commenced but still requires the output of an inner one).
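
    If it helps, the optimizer's own per-step temp estimate appears in the TempSpc column of the plan output. A minimal sketch (the table names and query here are placeholders, not your actual load SQL):

        EXPLAIN PLAN FOR
        SELECT f.order_id, d.customer_name   -- placeholder query; substitute the real load statement
        FROM   fact_orders f
        JOIN   dim_customers d ON d.customer_id = f.customer_id;

        -- The default (TYPICAL) format shows a TempSpc column
        -- for operations the optimizer expects to spill to temp.
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    Keep in mind these are per-step optimizer estimates, and since several hash joins can be active at once, the real peak can exceed any single line in the plan.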

    But assuming this is a one-time load, I would propose a different approach for your DBAs. Simply create a new temp tablespace, either as a bigfile or with many datafiles, with a generous autoextend allowance. If necessary, put it on scratch/throwaway disk. Assign just this user account to it, then run your load and let the temp space grow to whatever size it needs.
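
    A minimal sketch of what that could look like (the tablespace name, file path, sizes, and the LOAD_USER account are hypothetical; adjust them to the environment):

        -- Scratch temp tablespace on throwaway disk; a bigfile temporary
        -- tablespace autoextends as far as the filesystem allows.
        CREATE BIGFILE TEMPORARY TABLESPACE temp_load
          TEMPFILE '/u99/scratch/temp_load01.dbf' SIZE 50G
          AUTOEXTEND ON NEXT 10G MAXSIZE UNLIMITED;

        -- Point only the loading account at it so nothing else competes for the space.
        ALTER USER load_user TEMPORARY TABLESPACE temp_load;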

    Once your load is done, drop the tablespace and resume normal operations. A massive temp tablespace makes no difference to your database: it doesn't get backed up, and its contents don't need logging.
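
    The cleanup afterwards is equally simple (again using the hypothetical names from above):

        -- Switch the account back to the regular temp tablespace,
        -- then drop the scratch one and remove its tempfiles from disk.
        ALTER USER load_user TEMPORARY TABLESPACE temp;
        DROP TABLESPACE temp_load INCLUDING CONTENTS AND DATAFILES;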