
Event read by other session causing blocks


I have noticed some blocking in the database; the wait event was read by other session, and I have identified the table and the index causing it. However, I also noticed that the table statistics are 3 months old even though there is DML (inserts/updates/deletes) every day, and the table's PCTFREE is 0.

My question is: what is the relation between PCTFREE 0 and this event? As per the site here, changing the PCTFREE should have benefits, and as per my understanding a value of 0 matters when there are several updates on the table.
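For reference, this is roughly how I checked the statistics age and PCTFREE using the standard dictionary views (the schema and table names below are placeholders for the real ones):

    SELECT t.table_name,
           t.pct_free,
           s.last_analyzed,
           s.stale_stats
    FROM   dba_tables         t
    JOIN   dba_tab_statistics s
           ON  s.owner      = t.owner
           AND s.table_name = t.table_name
    WHERE  s.object_type = 'TABLE'
    AND    t.owner       = 'APP_OWNER'      -- placeholder schema
    AND    t.table_name  = 'MY_HOT_TABLE';  -- placeholder table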


Solution

There is (almost) no relationship at all between pctfree and read by other session. pctfree 0 simply tells Oracle to pack the block as full as it can with rows without reserving any unused space for row expansion due to updates that may happen at a later time. This gives you maximum space usage efficiency as long as row expansion never happens. If it does, it will cause row migration and/or chaining that will leave you in a worse situation than having a little unused space. There's a reason why tables default to pctfree 10 and not 0. Use 0 only when you are certain rows will never be updated and you care about a 10% space saving. (Note that enabling compression will automatically set it to 0, because compressed data prioritizes space savings and doesn't play well with updates anyway.)
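To make that concrete, here is a minimal sketch (the table, column, and constraint names are made up for illustration): a table created with PCTFREE 0, then changed back to the default of 10. A new PCTFREE value only applies to blocks formatted after the change, so a MOVE is what reformats the existing blocks, and that in turn leaves the indexes unusable until they are rebuilt:

    -- Pack blocks completely full; no space is reserved for future row growth.
    CREATE TABLE order_history (                       -- hypothetical table
      order_id  NUMBER CONSTRAINT order_history_pk PRIMARY KEY,
      status    VARCHAR2(20),
      note      VARCHAR2(4000)
    ) PCTFREE 0;

    -- If rows turn out to get updated after all, reserve free space again.
    -- This only affects blocks formatted from now on...
    ALTER TABLE order_history PCTFREE 10;

    -- ...so rebuild the table to apply it to existing blocks, then rebuild
    -- the indexes that the MOVE marked unusable.
    ALTER TABLE order_history MOVE;
    ALTER INDEX order_history_pk REBUILD;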

read by other session means your session needs a block from disk but another session is already reading that same block at the same time. Oracle detects this and serializes the read: only one process reads the block into the cache, and the waiting process then gets the buffer from the cache instead of reading it from disk all over again. This is not necessarily a bad thing. If you need to remove this serialization, you are working with big data in bulk, and you have the storage hardware to support a heavier load, you can use direct path reads (typically by employing parallel query or engaging serial direct path) instead of buffered/cached reads. But again, just because you can doesn't mean you should.
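As an illustration, here is a minimal sketch of the two read paths (using the same made-up table as above, and assuming the segment is large enough that Oracle actually chooses direct path reads for the parallel scan):

    -- Serial, buffered read: blocks go through the buffer cache, so two
    -- sessions reading the same block can hit 'read by other session'.
    SELECT COUNT(*) FROM order_history;

    -- Parallel query typically uses direct path reads, bypassing the buffer
    -- cache and the serialization on it, at the cost of more physical I/O
    -- and no cache reuse for other sessions.
    SELECT /*+ PARALLEL(order_history 4) */ COUNT(*) FROM order_history;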

If you are not experiencing a performance issue (a job takes too long, a query takes too long, you're missing an SLA or getting a timeout, etc.), then don't go hunting through metrics. You can spend an unlimited amount of time trying to improve numbers that don't need improving. Get something else done instead.

But if you are having a performance problem, tackle it by starting with wait-sampling-based metrics (like Active Session History) that break down what your session was doing over time, focus on where most of the time was spent, and drill down from there. Focus more on which step in the execution plan is consuming your time than on exactly what kind of wait event is being reported (not that the event is unimportant; it just shouldn't be the first thing you chase). For example, read by other session means you are doing buffered reads, which are most likely single-block reads. It's probable that you have indexes involved and/or nested loop joins. If the SQL is performing slowly, it may be that a nested loops join is inappropriate for the number of rows being fed into it, or that the index is inappropriate for the predicate. Fixing those issues will likely resolve your problem without having to worry too much about the underlying mechanics of read by other session vs. something non-blocking.
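For example, an ASH breakdown along the lines of the sketch below (ASH requires the Diagnostics Pack license; the session id and time window are placeholders) shows which SQL statement, plan line, and wait event the time actually went to:

    -- Each ASH sample represents roughly one second of active session time,
    -- so COUNT(*) approximates seconds spent per SQL / plan line / event.
    SELECT sql_id,
           sql_plan_line_id,
           NVL(event, 'ON CPU') AS event,
           COUNT(*)             AS approx_seconds
    FROM   v$active_session_history
    WHERE  sample_time > SYSTIMESTAMP - INTERVAL '30' MINUTE
    AND    session_id  = 1234                        -- placeholder SID
    GROUP  BY sql_id, sql_plan_line_id, NVL(event, 'ON CPU')
    ORDER  BY approx_seconds DESC;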