Recently, I noticed that although I had gathered statistics on a table, the execution plan wasn't invalidated in Oracle 10g. I googled it and found a solution on Coskan's blog: you can invalidate an execution plan by executing a simple DDL statement on the table, like:
grant select on t1 to user1;
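To verify that the trick worked, you can look at the cursor in V$SQL before and after the grant; its INVALIDATIONS count should go up (or the cursor simply disappears from the cache). A quick check, assuming your test query is tagged with a /* test */ comment:

select sql_id, loads, invalidations, parse_calls
from v$sql
where sql_text like 'select /* test */%';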
Today I came across a note on Metalink [Doc ID: 557661.1] about this topic. It says that prior to 10g, by default the execution plan was invalidated as soon as statistics were gathered on the underlying objects:
Cursor Invalidations on Gathering Statistics prior to Oracle10g
In releases prior to Oracle10g gathering statistics using DBMS_STATS resulted in immediate invalidations of dependent cached cursors, unless NO_INVALIDATE was set to TRUE.
But starting with 10g, by default statistics gathering MAY OR MAY NOT invalidate the execution plan:
Starting with Oracle10g, the DBMS_STATS package offers the AUTO_INVALIDATE option for the NO_INVALIDATE parameter of its GATHER_xxx_STATS and DELETE_xxx_STATS procedures. This parameter allows the user to specify when to invalidate dependent cursors i.e. cursors cached in the library cache area of the shared pool which reference a table, index, column or fixed object whose statistics are modified by the procedure call.
According to the documentation, the values NO_INVALIDATE can take are:
TRUE: does not invalidate the dependent cursors
FALSE: invalidates the dependent cursors immediately
AUTO_INVALIDATE (default): have Oracle decide when to invalidate dependent cursors
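Incidentally, in 10g you can check which default your database uses, and change it, with DBMS_STATS.GET_PARAM and DBMS_STATS.SET_PARAM. A minimal sketch (the FALSE setting below is just an example, not a recommendation for every system):

-- show the current database-wide default for NO_INVALIDATE
select dbms_stats.get_param('NO_INVALIDATE') from dual;

-- make immediate invalidation the database-wide default
exec dbms_stats.set_param('NO_INVALIDATE', 'FALSE');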
So if you want to be sure that the cursor is invalidated right after statistics gathering, pass NO_INVALIDATE => FALSE to DBMS_STATS.GATHER_TABLE_STATS and the other gathering procedures:
exec dbms_stats.gather_table_stats('USER','EMP', no_invalidate => false);

As for the default AUTO_INVALIDATE option, where Oracle itself decides whether or not to invalidate an execution plan, the note describes how that decision is made. To be frank, I haven't read it all yet; maybe I will when I need this information next time. So here is the excerpt from [Doc ID: 557661.1]:
Cursor Invalidations with Oracle10g and AUTO_INVALIDATE
With the AUTO_INVALIDATE option the goal is to spread out the cursor invalidations over a time period long enough for hard-parses not to cause noticeable spikes.
In this way a cached cursor depending on an object whose statistics have been modified by DBMS_STATS will be invalidated as follows:
when DBMS_STATS modifies statistics for an object, all current cached cursors depending on this object are marked for rolling invalidation. Let's call this time T0.
the next time a session parses a cursor marked for rolling invalidation, it sets a timestamp. This timestamp can take a random value up to _optimizer_invalidation_period sec from the time of this parse. The default for this parameter is 18000 sec i.e. 5 hours. Let's call the time of this parse T1 and the timestamp value Tmax. On this (first) parse we reuse the existing cursor i.e. we do not hard-parse and do not use the modified statistics to generate a new plan (it is a soft parse.)
on every subsequent parse of this cursor (which is now marked for rolling invalidation and timestamped) we check whether the current time T2 exceeds the timestamp Tmax. If not, we reuse the existing cursor again, as happened on the first (soft) parse at time T1. If Tmax has been exceeded, we invalidate the cached cursor and create a new version of it (a new child cursor) which uses the new statistics of the object to generate its execution plan. The new child is marked ROLL_INVALID_MISMATCH in V$SQL_SHARED_CURSOR to explain why we could not share the previous child.
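By the way, both moving parts of this mechanism are visible: the hidden parameter _optimizer_invalidation_period (queried from the X$ tables, so SYS access is needed; hidden parameters are undocumented, so treat this as a sketch) and the ROLL_INVALID_MISMATCH flag in V$SQL_SHARED_CURSOR:

-- the rolling invalidation window, in seconds (run as SYS)
select a.ksppinm name, b.ksppstvl value
from x$ksppi a, x$ksppcv b
where a.indx = b.indx
and a.ksppinm = '_optimizer_invalidation_period';

-- child cursors created because a previous child went through rolling invalidation
select sql_id, child_number
from v$sql_shared_cursor
where roll_invalid_mismatch = 'Y';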
From the above descriptions, it follows that:
a cursor which is never parsed again after being marked for rolling invalidation will not be invalidated and may eventually be flushed out of the shared pool if memory becomes scarce
a cursor which is only parsed once after being marked for rolling invalidation will not be invalidated (it will only be timestamped) and again may be eventually flushed out if memory in the shared pool becomes scarce
cursors which are regularly reused will become invalidated on the next parse that happens after the timestamp Tmax has been exceeded
It should be clear that the above method is efficient in that it incurs the overhead of invalidations only for frequently reused cursors.
Exception: parallel SQL are immediately invalidated in order to ensure consistency between execution plans of slaves and Query Coordinator across multiple RAC instances. This is not a problem as parallel SQL are usually heavy and therefore hard-parse resources are insignificant to their total resource usage.
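If you want to watch rolling invalidation happen, a simple experiment (the table name and the /* roll_test */ tag are mine, purely illustrative) is to cache a query, gather statistics with the default AUTO_INVALIDATE, and then re-execute the query periodically while watching the child cursors:

-- 1. cache a cursor
select /* roll_test */ count(*) from emp;

-- 2. gather statistics with the default AUTO_INVALIDATE
exec dbms_stats.gather_table_stats(user, 'EMP');

-- 3. re-run step 1 from time to time; once Tmax is exceeded,
--    a new child cursor appears for the same SQL_ID
select sql_id, child_number, executions, first_load_time
from v$sql
where sql_text like 'select /* roll_test */%';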