Thursday, September 17, 2009

Mailbox: “How to optimize this query when table is large?”

I received an email seeking advice on tuning a query against a table with 4 million records. According to the information provided, the query runs very frequently. Here's the query:

select max(to_char(dt, 'YYYYMMDDHH24MISS')) 
from big_table;

Let's create a table and insert about 4 million records to reproduce the problem.

SQL> create table big_table(dt date);

Table created.

SQL> set timing on
SQL> begin
  2    for i in 1..365 loop
  3      insert into big_table
  4          select to_date('01-01-1980', 'dd-mm-yyyy') + level - 1
  5            from dual
  6            connect by level <= 11000;
  7    end loop;
  8    commit;
  9  end;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:09.86
SQL>
SQL>
SQL> exec dbms_stats.gather_table_stats(user, 'BIG_TABLE');

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.51
SQL> select num_rows from user_tables where table_name = 'BIG_TABLE';

  NUM_ROWS
----------
   4020746

Elapsed: 00:00:00.10
SQL>

Enable SQL trace at the session level and execute the problem query:

SQL> alter session set tracefile_identifier='BIG_TABLE';

Session altered.

Elapsed: 00:00:00.01
SQL> alter session set sql_trace = true;

Session altered.

Elapsed: 00:00:00.00
SQL> select max(to_char(dt, 'YYYYMMDDHH24MISS')) from big_table;

MAX(TO_CHAR(DT
--------------
20100211000000

Elapsed: 00:00:03.37
SQL>

The TKPROF output shows the following information:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.02          0          0          0           0
Execute      1      0.00       0.04          0          0          0           0
Fetch        2      3.85       3.86          0       7265          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      3.85       3.93          0       7265          0           1

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=7265 pr=0 pw=0 time=3868185 us)
4015000   TABLE ACCESS FULL BIG_TABLE (cr=7265 pr=0 pw=0 time=79 us)

Information from the trace:

Query Elapsed Time: 3.93 Sec
Time spent on CPU: 3.85 Sec
Consistent Gets: 7265
Table Access:  Full Table Scan

So, most of the query's time was spent on the CPU, and 7,265 consistent gets were performed to arrive at the maximum value. We can do two things to speed up this query: 1) Apply the TO_CHAR function after fetching the maximum value. 2) Create an index on the DT column.

Let's trace the modified query (before creating the index) and hope things get better:

SQL> alter session set tracefile_identifier='BIG_TABLE_MAX';

Session altered.

Elapsed: 00:00:00.01
SQL> alter session set sql_trace = true;

Session altered.

Elapsed: 00:00:00.00
SQL> select to_char( max(dt), 'YYYYMMDDHH24MISS') from big_table;

TO_CHAR(MAX(DT
--------------
20100211000000

Elapsed: 00:00:00.84
SQL> alter session set sql_trace =false;

Session altered.

Elapsed: 00:00:00.00
SQL>

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.02          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.67       0.67          0       7265          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.68       0.69          0       7265          0           1

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=7265 pr=0 pw=0 time=671191 us)
4015000   TABLE ACCESS FULL BIG_TABLE (cr=7265 pr=0 pw=0 time=61 us)

Fantastic! CPU usage has come down drastically and the query is no longer a burden on the poor CPU. However, to arrive at the maximum value, Oracle still performs 7,265 consistent gets, which is still high.

How did this help?

In the original query, the TO_CHAR function was applied to all 4 million rows, and the maximum was then selected from the converted strings. After swapping the positions of TO_CHAR and MAX, the database has less work to do: MAX picks the maximum date from the 4-million-row table, and TO_CHAR is applied only to that single value. The rewrite is safe because the 'YYYYMMDDHH24MISS' mask is order-preserving (a later date always formats to a lexicographically larger string). So we got rid of roughly 4 million TO_CHAR conversions, and that saved the CPU time.
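The equivalence can be checked with a small Python sketch (strftime standing in for TO_CHAR): formatting the maximum gives the same answer as taking the maximum of the formatted values, but costs one conversion instead of one per row.

```python
# Sketch: the 'YYYYMMDDHH24MISS'-style format is monotonic, so
# fmt(max(rows)) == max(fmt(r) for r in rows), at a fraction of the cost.
from datetime import date, timedelta

# Same data shape as the demo table: midnight dates from 01-01-1980 onward
rows = [date(1980, 1, 1) + timedelta(days=d) for d in range(11000)]

# Mimics TO_CHAR(dt, 'YYYYMMDDHH24MISS') for dates with no time component
def fmt(d):
    return d.strftime("%Y%m%d") + "000000"

slow = max(fmt(d) for d in rows)   # original query: 11,000 conversions
fast = fmt(max(rows))              # rewritten query: 1 conversion

assert slow == fast
print(fast)   # 20100211000000 -- matches the query output above
```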

To speed up the query further, we will create an index on the DT column and trace it again:

SQL> create index t_idx on big_table(dt);

Index created.

Elapsed: 00:00:44.39
SQL> alter session set tracefile_identifier='BIG_TABLE_IDX';

Session altered.

Elapsed: 00:00:00.17
SQL> alter session set sql_trace = true;

Session altered.

Elapsed: 00:00:00.03
SQL> select to_char( max(dt), 'YYYYMMDDHH24MISS') from big_table;

TO_CHAR(MAX(DT
--------------
20100211000000

Elapsed: 00:00:00.04
SQL> alter session set sql_trace =false;

Session altered.

Elapsed: 00:00:00.01

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.01          2          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.01          2          3          0           1


Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=3 pr=2 pw=0 time=14415 us)
      1   INDEX FULL SCAN (MIN/MAX) T_IDX (cr=3 pr=2 pw=0 time=14394 us)(object id 93231)

The index makes a huge difference. Everything is now under control: consistent gets dropped from 7,265 to 3, and both CPU and elapsed time are near zero. In terms of logical I/O, this query does about 2,421 times (7,265/3) less work than the previous ones.

How did this help?

After the index was created, the optimizer chose the "INDEX FULL SCAN (MIN/MAX)" operation instead of a full table scan.

Oracle reaches the requested value via a shortcut straight down to the rightmost leaf block: it reads the root block, follows the rightmost branch block at each intermediate level, and finally reads the rightmost leaf block, which holds the maximum key. That is one block read per level of the index, which matches the cr=3 in the trace.
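The descent can be illustrated with a minimal sketch (not Oracle's actual implementation): a toy three-level index where finding the maximum means following the rightmost child pointer from the root, touching one block per level instead of scanning every block.

```python
# Toy model of an INDEX FULL SCAN (MIN/MAX): descend rightmost pointers.
class Block:
    def __init__(self, keys, children=None):
        self.keys = keys          # sorted keys in this index block
        self.children = children  # None for a leaf block

def max_key(root):
    """Return (maximum key, number of blocks visited)."""
    blocks_read = 0
    node = root
    while True:
        blocks_read += 1
        if node.children is None:      # leaf: rightmost key is the maximum
            return node.keys[-1], blocks_read
        node = node.children[-1]       # branch: follow rightmost pointer

# Build a tiny 3-level index over keys 0..99: root -> branch -> 10 leaves
leaves = [Block(list(range(i, i + 10))) for i in range(0, 100, 10)]
branch = Block([leaf.keys[-1] for leaf in leaves], leaves)
root = Block([branch.keys[-1]], [branch])

print(max_key(root))   # (99, 3): 3 block visits, like the cr=3 in the trace
```

With 4 million rows the full scan read 7,265 blocks, while the index descent reads only as many blocks as the index has levels.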

1 comment:

Fahd Mirza said...

Asif, extremely valuable post.

Would you please clarify little more about Index Full Scan in this case?

regards