Wednesday, April 09, 2014

_direct_read_decision_statistics_driven, _small_table_threshold and direct path reads on partitioned tables in 11.2.0.3 (Part 2)

This is a continuation of my last post regarding direct path reads on partitioned tables in Oracle 11.2.0.3.

To recap, the behavior I observed is that direct path reads will be performed if the total number of blocks for all partitions that will be accessed exceeds the _small_table_threshold value. That is, if a table consists of 10 partitions, each having 100 blocks, and a query accesses two of those partitions, direct path reads will be performed if _small_table_threshold is lower than 200.

Also, regardless of how much data has been cached (in the buffer cache) for each of the partitions, if direct path reads are to be performed, all partition segments will be scanned directly. So it is an all-or-nothing situation.

I also indicated that _direct_read_decision_statistics_driven parameter was set to TRUE (default) for the tests done in my earlier post.

What is _direct_read_decision_statistics_driven anyway? According to the parameter description, it enables the direct path read decision to be based on optimizer statistics. If the parameter is set to FALSE, Oracle will use the segment header to determine how many blocks the segment has (read Tanel Poder’s blogpost for more information).

Let’s see how queries that access table partitions (full scan) behave if the _direct_read_decision_statistics_driven parameter is set to FALSE in 11.2.0.3. My expectation was that the behavior would be the same as when it is set to TRUE. I thought that once Oracle got the information about the number of blocks in each of the partitions, it would use the same calculation as when the parameter is set to TRUE. Let’s see.

But before moving forward, a small disclaimer: do not perform these tests in production or any other important environment. Changing undocumented parameters should be done only under the guidance of Oracle Support. The information presented here is for demonstration purposes only.

I will use the same table, TEST_PART, that I used in my earlier post.

I started by flushing the buffer cache (to make sure none of the partitions had blocks in the cache).

I set the _direct_read_decision_statistics_driven parameter to FALSE and ran a query that selects data from the PART_1 partition only. Each of the partitions contains 4000 rows stored in 65 blocks, plus one segment header block.

_small_table_threshold in my sandbox environment was set to 117.


SQL>  alter session set "_direct_read_decision_statistics_driven"=FALSE;

Session altered.

SQL> SELECT count(1) FROM test_part WHERE col1 in (1);

  COUNT(1)
----------
      4000


As expected, no direct path reads were performed (I used my sese.sql script, which scans v$sesstat for statistics that match a given keyword).


SQL> @sese direct

no rows selected


Now let’s see what happens with a query that accesses the first two partitions. Remember, if the _direct_read_decision_statistics_driven parameter is set to TRUE, this query would perform direct path reads because the number of blocks in both partitions, 130 (2x65), exceeds the _small_table_threshold value (117).


SQL> select count(1) from test_part where col1 in (1,2);

  COUNT(1)
----------
      8000

SQL> @sese direct

no rows selected


No direct reads. Definitely different compared to when _direct_read_decision_statistics_driven was set to TRUE.

How about a query that accesses three partitions:


 SQL> select count(1) from test_part where col1 in (1,2,3);

  COUNT(1)
----------
     12000

SQL> @sese direct

no rows selected


Still no direct path reads.

How about if we access all 7 partitions:


SQL>  select count(1) from test_part where col1 in (1,2,3,4,5,6,7);

  COUNT(1)
----------
     28000

SQL> @sese direct

no rows selected


No direct path reads.

So what is going on? It seems that when _direct_read_decision_statistics_driven is set to FALSE, Oracle makes the decision on a partition-by-partition basis: if the number of blocks in a partition is less than or equal to _small_table_threshold, the buffer cache is used; otherwise, direct path reads are performed.
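The per-partition logic described above can be sketched as a tiny model. This is my own illustration of the observed test results, not Oracle's actual code; the function name and structure are invented for clarity.

```python
# Simplified model of the behavior observed when
# _direct_read_decision_statistics_driven = FALSE in 11.2.0.3:
# each partition is judged independently against _small_table_threshold.

def read_method_per_partition(partition_blocks, small_table_threshold):
    """Map each partition name to the scan method the tests suggest."""
    return {
        part: ('buffer cache' if blocks <= small_table_threshold
               else 'direct path')
        for part, blocks in partition_blocks.items()
    }

# Two 65-block partitions with _small_table_threshold = 117: every
# partition is individually below the threshold, so no direct path reads.
decision = read_method_per_partition({'PART_1': 65, 'PART_2': 65}, 117)
```

With this model, lowering the threshold below 65 would flip every partition to direct path reads, which matches the test shown later in this post.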

What if some of the partitions were already cached in the buffer cache?

In the next test I’ll:
  • Flush the buffer cache again
  • Set _direct_read_decision_statistics_driven to FALSE
  • Run a query that accesses the first two partitions
  • Decrease the value for _small_table_threshold to 60
  • Run a query that accesses the first three partitions.
  • Check if direct path reads were performed and how many
With this test I’d like to see whether Oracle will utilize the buffer cache when the segment data is cached but the number of blocks in the partition is greater than _small_table_threshold.


SQL> alter system flush buffer_cache;

System altered.

SQL> alter session set "_direct_read_decision_statistics_driven"=FALSE;

Session altered.

SQL> select count(1) from test_part where col1 in (1,2);

  COUNT(1)
----------
      8000

SQL> @sese direct

no rows selected


At this point, the PART_1 and PART_2 partitions should be entirely in the buffer cache. If you want, you can confirm this by querying X$KCBOQH (from a different session, logged in as SYS).


SQL> conn /as sysdba
Connected.
SQL> select  o.subobject_name, b.obj#, sum(b.num_buf)
2 from    X$KCBOQH b, dba_objects o
3 where   b.obj#=o.data_object_id
4     and o.object_name='TEST_PART'
5 group by  o.subobject_name, b.obj#
6 order by 1;  

SUBOBJECT_NAME                       OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1                             146024             66
PART_2                             146025             66


As expected, both partitions are in the buffer cache.

Now let’s decrease _small_table_threshold to 60 and run a query that scans the first three partitions:


SQL>  alter session set "_small_table_threshold"=60;

Session altered.

SQL> alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> select count(1) from test_part where col1 in (1,2,3);

  COUNT(1)
----------
     12000

SQL> alter session set events '10046 trace name context off';

SQL> @sese direct

       SID         ID NAME                                                    VALUE
---------- ---------- -------------------------------------------------- ----------
         9         76 STAT.consistent gets direct                                65
         9         81 STAT.physical reads direct                                 65
         9        380 STAT.table scans (direct read)                              1


Here they are: 65 direct path reads and one table scan (direct read), which means one of the partitions was scanned using direct path reads. Which one? Yes, you are right, the one that is not in the buffer cache (PART_3 in this example).

If you query X$KCBOQH again, you can see that only one block of PART_3 is in the cache: the segment header block.


SQL> conn /as sysdba
Connected.
SQL>  select  o.subobject_name, b.obj#, sum(b.num_buf)
2 from    X$KCBOQH b, dba_objects o
3 where   b.obj#=o.data_object_id
4     and o.object_name='TEST_PART'
5 group by  o.subobject_name, b.obj#
6 order by 1;  

SUBOBJECT_NAME                       OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1                             146024             66
PART_2                             146025             66
PART_3                             146026              1 <===


This means that when _direct_read_decision_statistics_driven is set to FALSE in 11.2.0.3, Oracle uses a totally different calculation compared to the one used when the parameter is set to TRUE (see my earlier post).

Moreover, it seems Oracle examines each of the partitions separately (which I initially expected to be the case even when _direct_read_decision_statistics_driven is set to TRUE) and applies the rules described in Alex Fatkulin’s blogpost. That is, if any of the following is true, Oracle will scan the data in the buffer cache; otherwise direct path reads will be performed: 
  •  the number of blocks in the segment is less than or equal to _small_table_threshold 
  •  at least 50% of the segment’s data blocks are in the buffer cache
  •  at least 25% of the data blocks are dirty 
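Those three rules can be expressed as a small per-segment predicate. This is my own sketch of the decision described in Alex Fatkulin's post; the names and structure are illustrative, not Oracle internals.

```python
# Per-segment sketch of the three rules above. True means the segment
# would be scanned via the buffer cache, False means direct path reads.

def use_buffer_cache(segment_blocks, cached_blocks, dirty_blocks,
                     small_table_threshold):
    if segment_blocks <= small_table_threshold:
        return True   # small segment
    if cached_blocks >= 0.5 * segment_blocks:
        return True   # at least 50% of the blocks already cached
    if dirty_blocks >= 0.25 * segment_blocks:
        return True   # at least 25% of the blocks are dirty
    return False

# PART_3 in the last test: 65 blocks, only the header block cached,
# threshold lowered to 60 -> direct path reads.
# PART_1 / PART_2: fully cached -> buffer cache.
```

This matches the test above: with _small_table_threshold at 60, only the uncached PART_3 was read via direct path (65 physical reads direct, one table scan).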
The conclusion so far is that in 11.2.0.3 you may observe different behavior for queries that access table partitions using full table scans if you decide to change the _direct_read_decision_statistics_driven parameter.

I will stop here. I ran the same tests against 11.2.0.4 and 12.1.0.1 and noticed some differences in behavior compared to what I just described for 11.2.0.3. I will post those results in the next few days.

Stay tuned...



Monday, April 07, 2014

_small_table_threshold and direct path reads on partitioned tables in 11.2.0.3


I was troubleshooting a performance problem a few days ago. The database where the problem occurred was recently migrated from Oracle 10.2.0.4 to Oracle 11.2.0.3.

Long story short, the problem was described as: the performance of a query that scans two or more partitions of a table is much worse than the combined performance of queries accessing each of the partitions separately.

After a short investigation I narrowed the problem down to direct path reads being the culprit.

As you know, due to the adaptive direct read feature introduced in 11g, full table scans may utilize the PGA (via direct path reads) instead of the buffer cache, which was always used in earlier versions.

There are a few good articles on this change in behavior, among which I personally favor Tanel’s blogpost and hacking session and the post by Alex Fatkulin. You can also check MOS Note 793845.1.

What I observed in 11.2.0.3.0 was quite surprising and a bit different from what I had read so far. I know that there are different parameters/variables that influence the decision whether or not direct path reads should be used. I tried to be careful and not fall into any of these traps.

Please note that all the tests were done in a sandbox environment. I advise against trying them in any production environment.

The database version was 11.2.0.3.0.

_serial_direct_read = auto

_direct_read_decision_statistics_driven = TRUE

_small_table_threshold = 117


I used an ASSM tablespace with a uniform extent size (64K).

As you may know, the _small_table_threshold parameter defaults to about 2% of the size of the buffer cache. On my test machine I have a pretty small buffer cache, 5892 buffers (117 is 1.98% of 5892).


SQL> SELECT name,block_size,buffers FROM v$buffer_pool;

NAME                                               BLOCK_SIZE    BUFFERS
-------------------------------------------------- ---------- ----------
DEFAULT                                                  8192       5892
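The 2% relationship can be verified with a quick calculation. The exact internal formula is undocumented; this only checks that 117 is consistent with roughly 2% of 5892 buffers.

```python
# Sanity check of the ~2% rule of thumb for _small_table_threshold.
buffers = 5892          # DEFAULT pool buffers from v$buffer_pool
threshold = 117         # _small_table_threshold in this environment

pct = threshold / buffers * 100   # about 1.99%
# 2% of the buffer count, truncated, gives exactly the observed threshold:
derived = int(buffers * 0.02)     # 117
```
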


I will try to simplify the problem by using a partitioned table, TEST_PART, containing 7 partitions.


CREATE TABLE test_part 
( 
    col1 NUMBER NOT NULL
  , col2 VARCHAR2(100)
 ) 
 PARTITION BY LIST (col1) 
 (  
    PARTITION PART_1  VALUES (1)
  , PARTITION PART_2  VALUES (2)
  , PARTITION PART_3  VALUES (3)
  , PARTITION PART_4  VALUES (4)
  , PARTITION PART_5  VALUES (5)
  , PARTITION PART_6  VALUES (6)
  , PARTITION PART_7  VALUES (7)
 ) ;


Each of the 7 partitions will be populated with 4000 rows (for a total of 65 blocks allocated per partition) using the following SQL statement:


INSERT INTO test_part (SELECT mod(rownum,7)+1, rpad('A',100,'A') FROM dual CONNECT BY rownum<=28000);
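A quick sanity check that the mod(rownum,7)+1 expression spreads the 28000 rows evenly across the 7 list-partition keys (a Python equivalent of the SQL expression):

```python
from collections import Counter

# Python equivalent of mod(rownum, 7) + 1 for rownum = 1 .. 28000
keys = Counter((n % 7) + 1 for n in range(1, 28001))
# keys 1..7 each receive exactly 28000 / 7 = 4000 rows
```
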


I will collect stats using the statement below:


exec dbms_stats.gather_table_stats(user,'TEST_PART');


As you can see from the output below, each of the partitions has 65 blocks below the HWM:


SQL> select table_name, partition_name, num_rows, blocks, avg_row_len
  2  from user_tab_partitions
  3  where table_name='TEST_PART';

TABLE_NAME                     PARTITION_NAME                   NUM_ROWS     BLOCKS AVG_ROW_LEN
------------------------------ ------------------------------ ---------- ---------- -----------
TEST_PART                      PART_1                               4000         65         104
TEST_PART                      PART_2                               4000         65         104
TEST_PART                      PART_3                               4000         65         104
TEST_PART                      PART_4                               4000         65         104
TEST_PART                      PART_5                               4000         65         104
TEST_PART                      PART_6                               4000         65         104
TEST_PART                      PART_7                               4000         65         104

7 rows selected.



Observation #1 -
 _small_table_threshold is applied on the total number of blocks expected to be returned by the query (considering all partition segments that will be accessed)

As you can see, the number of blocks in each of the partitions (65) is lower than the _small_table_threshold value (117). Therefore a query that accesses only one of the partitions uses the buffer cache to store the segment blocks.


SQL> select count(1) from test_part where col1 in (1);

  COUNT(1)
----------
      4000


I will use my sese.sql script to check the values of specific session statistics. It simply scans v$sesstat for the current session and a given keyword. If there are statistics that contain the specified keyword and their value is greater than 0, they will be reported. As you can see, no direct path reads were performed.


SQL> @sese direct

no rows selected


I expected the next query to utilize the buffer cache as well. It scans two partitions. As you know, each of the partitions has 65 blocks, which is less than the _small_table_threshold value (117), hence I thought I wouldn’t see any direct path reads.


SQL> select count(1) from test_part where col1 in (1,2);

  COUNT(1)
----------
      8000

However, direct path reads were performed. Moreover, even though one of the partitions I previously scanned was already in the buffer cache, both partitions were scanned using direct path reads. As shown in the output below, two segments were fully scanned using direct reads, for a total of 130 direct reads (2x65).


SQL>  @sese direct

       SID         ID NAME                                                    VALUE
---------- ---------- -------------------------------------------------- ----------
         7         76 STAT.consistent gets direct                               130
         7         81 STAT.physical reads direct                                130
         7        380 STAT.table scans (direct read)                              2

Let’s see what happens when I increase _small_table_threshold to 130 and rerun the last query.


SQL> alter session set "_small_table_threshold"=130;

Session altered.

SQL> select count(1) from test_part where col1 in (1,2);

  COUNT(1)
----------
      8000

SQL>  @sese direct

       SID         ID NAME                                                    VALUE
---------- ---------- -------------------------------------------------- ----------
         7         76 STAT.consistent gets direct                               130
         7         81 STAT.physical reads direct                                130
         7        380 STAT.table scans (direct read)                              2


The number of direct path reads stayed the same (the statistics are cumulative for the session), which means no new direct path reads were performed.

How about if we add one more partition to the equation now (_small_table_threshold=130):


SQL> select count(1) from test_part where col1 in (1,2,3);

  COUNT(1)
----------
     12000

SQL> @sese direct

       SID         ID NAME                                                    VALUE
---------- ---------- -------------------------------------------------- ----------
         7         76 STAT.consistent gets direct                               325
         7         81 STAT.physical reads direct                                325
         7        380 STAT.table scans (direct read)                              5


Now, since we scan 3 partitions, that is 195 blocks (3x65), Oracle went back to direct path reads: the statistics went up by 195 (130+195=325), i.e. three new table/segment scans.

Therefore, it seems the logic behind the decision whether or not to perform direct path reads is:

IF SUM(blocks of all partitions that are accessed)>_small_table_threshold THEN
     perform direct path reads for all partitions that are accessed
ELSE
     utilize buffer cache

Again, just to remind you this behavior is specific to 11.2.0.3.
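The pseudocode above can be sketched as a small model. This is my own reconstruction of the observed 11.2.0.3 behavior with _direct_read_decision_statistics_driven=TRUE, not Oracle's actual code; the function name is invented.

```python
# All-or-nothing model: the decision is based on the TOTAL number of
# blocks across every partition the query will access.

def read_method_stats_driven(partition_blocks, small_table_threshold):
    """partition_blocks: block counts of all accessed partitions."""
    total = sum(partition_blocks)
    return 'direct path' if total > small_table_threshold else 'buffer cache'

# One 65-block partition vs threshold 117 -> buffer cache;
# two partitions (130 blocks) -> direct path for BOTH segments;
# after raising the threshold to 130, two partitions fit again,
# but three (195 blocks) do not.
```

Note how this model reproduces each step of the test: 65 < 117 (cached), 130 > 117 (direct), 130 <= 130 (cached), 195 > 130 (direct).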


Observation #2 -
The percentage of cached blocks per partition is not relevant


This brings me to the second observation. If you query X$KCBOQH.NUM_BUF for the partition segments (read Tanel’s blogpost or watch his hacking session), you can see that even though partitions PART_1 and PART_2 were in the cache, Oracle still performed direct path reads for all three partitions:


SQL> conn /as sysdba
Connected.

SQL> select  o.subobject_name, b.obj#, sum(b.num_buf)
2 from    X$KCBOQH b, dba_objects o
3 where   b.obj#=o.data_object_id
4     and o.object_name='TEST_PART'
5 group by  o.subobject_name, b.obj#
6 order by 1;  

SUBOBJECT_NAME                       OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1                             146024             66
PART_2                             146025             66
PART_3                             146026              1 


The output above was captured after the last test. As you can see, the PART_1 and PART_2 segments are completely in the buffer cache, 66 blocks each (65 data blocks plus 1 segment header block). PART_3, however, has only one block in the cache, most likely the segment header block.

But even when all 3 partitions were fully loaded into the buffer cache, Oracle still performed direct path reads:


SQL> conn *****/*****
Connected.
SQL> select count(1) from test_part where col1 in (3);

  COUNT(1)
----------
      4000

SQL> conn /as sysdba
Connected.

SQL> select  o.subobject_name, b.obj#, sum(b.num_buf)
2 from    X$KCBOQH b, dba_objects o
3 where   b.obj#=o.data_object_id
4     and o.object_name='TEST_PART'
5 group by  o.subobject_name, b.obj#
6 order by 1;

SUBOBJECT_NAME                       OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1                             146024             66
PART_2                             146025             66
PART_3                             146026             66

SQL> conn *****/*****
Connected.
SQL> @sese direct

no rows selected

SQL> select count(1) from test_part where col1 in (1,2,3);

  COUNT(1)
----------
     12000

SQL> @sese direct

       SID         ID NAME                                                    VALUE
---------- ---------- -------------------------------------------------- ----------
         7         76 STAT.consistent gets direct                               195
         7         81 STAT.physical reads direct                                195
         7        380 STAT.table scans (direct read)                              3

SQL>

I will stop with this post here. Tomorrow I will publish another post showing what difference _direct_read_decision_statistics_driven can make for partitioned tables in 11.2.0.3.
(Update: Part 2 - what difference does _direct_read_decision_statistics_driven=FALSE make)

I will repeat the same tests in 11.2.0.4 and 12.1.0.1 and see if the behavior is any different.

Stay tuned.
 

Saturday, March 15, 2014

Find the enabled events using ORADEBUG EVENTDUMP


Just a quick note on how to use ORADEBUG in order to find events that are enabled on system or session level.

Starting from Oracle 10.2 you can use ORADEBUG EVENTDUMP to get all events enabled either on session or system level (I am not sure if this is available in 10.1).


The synopsis is:


oradebug eventdump <level>


Where level is either system, process or session:
 


SQL> oradebug doc event action eventdump
eventdump
        - list events that are set in the group
Usage
-------
eventdump( group           < system | process | session >)


For demonstration purposes I will set three events, two on session level and one on system level.
 


SQL> alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> alter session set events '942 trace name errorstack';

Session altered.

SQL> alter system set events 'trace[px_control][sql:f54y11t3njdpj]';

System altered.


Now, let's check which events are enabled on SESSION or SYSTEM level using ORADEBUG EVENTDUMP:

SQL> oradebug setmypid

SQL> oradebug eventdump session;
trace [RDBMS.PX_CONTROL] [sql:f54y11t3njdpj]
sql_trace level=8
942 trace name errorstack

SQL> oradebug eventdump system
trace [RDBMS.PX_CONTROL] [sql:f54y11t3njdpj]


As you may already know, ORADEBUG requires the SYSDBA privilege. In order to check events set for another session, attach to that session's process using oradebug setospid or oradebug setorapid.

This note was more for my own reference. I hope someone else finds it useful too.

Sunday, March 02, 2014

Fun with global temporary tables in Oracle 12c


A few months ago I wrote a post about 12c session-specific statistics for global temporary tables (link). A long-awaited feature, no matter what.

Recently I had some discussions on the same subject with members of my team.

One interesting observation was the behavior of transaction-specific GTTs with session-specific statistics enabled. What attracted our interest was the fact that the data in a transaction-specific global temporary table is not deleted after the DBMS_STATS package is invoked.

Prior to 12c, a call to DBMS_STATS would result in an implicit commit, which would wipe out the contents of a transaction-specific global temporary table.

I’ll digress here a bit. Yes, I know: who would call DBMS_STATS to collect statistics on a transaction-specific GTT, knowing the data in the table will be lost? Well, things change a bit in 12c.

In Oracle 12c, no implicit commit is performed when DBMS_STATS.GATHER_TABLE_STATS is invoked on a transaction-specific GTT with session-specific statistics enabled, thus letting users take advantage of session-specific statistics for this type of GTT.

This behavior is documented in Oracle documentation.

I’ll try to shed some more light on this behavior through a couple of examples.

For this purpose I’ll start with three tables. T1 and T2 are transaction-specific global temporary tables. T3 is a regular (heap) table. By default in 12c, session-specific statistics are used for GTTs.



CREATE GLOBAL TEMPORARY TABLE t1 (id NUMBER);

CREATE GLOBAL TEMPORARY TABLE t2 (id NUMBER);

CREATE TABLE t3 (id NUMBER);



Scenario #1 – Insert 5 rows into each of the three tables and observe the state of the data after DBMS_STATS is invoked on a transaction-specific GTT.



SQL> INSERT INTO t1 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.

SQL> INSERT INTO t2 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.

SQL> INSERT INTO t3 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.

SQL> exec DBMS_STATS.GATHER_TABLE_STATS(user,'T1');
PL/SQL procedure successfully completed.

SQL> SELECT count(1) FROM t1;
  COUNT(1)
----------
  5


As you can see, the data in T1 is still present. Furthermore, if you open another session, T3 appears empty there, which confirms that no commit was invoked when session-specific statistics were collected for T1.

Scenario #2 – Insert 5 rows into each of the three tables and collect statistics only on the regular table, T3.


SQL> INSERT INTO t1 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.

SQL> INSERT INTO t2 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.

SQL> INSERT INTO t3 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.

SQL> exec DBMS_STATS.GATHER_TABLE_STATS(user,'T3');
PL/SQL procedure successfully completed.

SQL> SELECT count(1) FROM t1;
  COUNT(1)
----------
  0


As you can see, in this scenario an implicit commit was invoked, which resulted in the data in T1 being purged.

Hope this helps … :-)

Cheers!