Thursday, 1 December 2016

When 2 Queries Are Better Than 1

A general rule of thumb with queries in Oracle is to use just one query where you can and leave it to the Optimizer to work out the best possible execution plan, i.e. decomposing one query into multiple separate queries can end up making the database do more work than it needs to. I came across a case this past week where the opposite was true - making Oracle execute separate "smaller" queries was far quicker than using one overall query.

Background

Imagine that we have two tables - new data arrives in one table (INCOMING) and we process it in some way and insert the resultant data into another table (OUTGOING). Data records in both tables have a timestamp on them for when they were created, and they are inserted in timestamp order. We want to process new messages at regular intervals adding them to the OUTGOING table, but they are not deleted from the INCOMING table when processed. Instead a separate purge job runs each night to delete incoming data older than 48 hours. In this scenario we can use the timestamp on each record to work out the maximum timestamp of the OUTGOING data records we have processed so far, and only retrieve INCOMING data records that have been added since then.
select max (ts) from outgoing ;
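To give an idea of how this maximum is then used, a hypothetical sketch of the incremental extract and the nightly purge might look like the following (assuming INCOMING has the same TS column; the real processing logic is more involved than a plain SELECT):
-- Hypothetical sketch only: pull INCOMING rows newer than anything already processed into OUTGOING
select * from incoming
where ts > ( select max (ts) from outgoing ) ;
--
-- Hypothetical sketch of the separate nightly purge of INCOMING data older than 48 hours
delete from incoming
where ts < systimestamp - interval '48' hour ;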
An additional complexity in this is that the INCOMING data records fall into different categories or types, and these are processed separately to each other due to different processing rules and requirements. This means that when getting the maximum timestamp from the OUTGOING data set we also need to restrict on the category.
select max (ts) from outgoing 
where category = 'CATEGORY1' ;
But it also turns out that some categories are similar enough to each other that they can be processed together at the same time from INCOMING, which should be more efficient than executing multiple separate queries against what can be a large input table with historical data in it. This means we now need the maximum timestamp from OUTGOING across a number of categories. This is most obviously done with this query.
select max (ts) from outgoing 
where category in ( 'CATEGORY1' , 'CATEGORY2', 'CATEGORY3' ) ;
This is valid because these 3 categories are all extracted from INCOMING at the same time and processed together, so we do want the maximum timestamp across all of them together.

The Problem

When executed, this "SELECT MAX (TS) WHERE CATEGORY IN (...)" query took a long time to complete (tens of seconds). Previous tests had shown that Oracle can efficiently use an index to get a minimum or maximum value from a table (sub-second), so why was this particular query taking so much longer?

The conclusion was that Oracle executes a maximum or minimum over a single category differently from one over a group of categories, and this was the cause of the longer execution time. Furthermore, rewriting the query as a union of separate single-category queries was much faster (over 100 times faster in this case!) and almost as fast as the original single category query.

Test Environment

This happened for real on Oracle 12c, on a non-CDB database. I'm going to replicate it on a separate test environment, also on Oracle 12c on a non-CDB database, running on Oracle Linux 7.
SQL> select * from v$version ;
BANNER                                                                           CON_ID
---------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production          0
PL/SQL Release 12.1.0.2.0 - Production                                                0
CORE        12.1.0.2.0        Production                                              0
TNS for Linux: Version 12.1.0.2.0 - Production                                        0
NLSRTL Version 12.1.0.2.0 - Production                                                0
I have taken a copy of the data in DBA_OBJECTS, which in my test instance had over 90,000 rows occupying 12 MB of disk storage, and then doubled this 9 times to make it 512 times larger than the original data set, giving over 45 million rows using over 5 GB of disk storage.

Initially I thought about using the CREATED column for the timestamp and OBJECT_TYPE for the category, but this ran into problems. The first is that doubling the rows each time does not change any data values, which means we have very few distinct values for CREATED and each value repeats many times. This has an impact on the calculations the Optimizer makes and the costs it estimates. Also, the data type in the real system was TIMESTAMP and not DATE. So I decided to add an extra column TS to the test table, and to update this later on to a near-unique value. Giving the column an initial value means that space is allocated within each record in each block in Oracle, so the later update should not cause any more space to be allocated or rows to be moved between blocks.
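One way to sanity check that assumption after the update would be something like the following (not part of the original test; ANALYZE is used here only because DBMS_STATS does not populate CHAIN_CNT):
-- Optional check: CHAIN_CNT counts chained or migrated rows and should stay at
-- (or very near) zero if the UPDATE did not move any rows between blocks
analyze table outgoing compute statistics ;
--
select chain_cnt, avg_row_len
from user_tables
where table_name = 'OUTGOING' ;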

My test system has only 2 GB for UNDO, which means that I cannot do an UPDATE on a table larger than this. So what I have done is double the table 7 times to just over 1 GB in size, do the UPDATE on TS to a near-unique random value, and then double the table another 2 times. This means that each TS value occurs 4 times, which I hope will still be good enough for the tests I am doing.
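For reference, the allocated undo space can be checked with a query along these lines (UNDOTBS1 is an assumed tablespace name - the real one is given by the undo_tablespace initialisation parameter):
-- Assumed undo tablespace name; adjust to match your own system
select tablespace_name, round (sum (bytes) / (1024 * 1024 * 1024), 1) gb
from dba_data_files
where tablespace_name = 'UNDOTBS1'
group by tablespace_name ;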

In terms of the spread of values, in the real system the incoming data is purged every day, deleting data records older than 48 hours. The UPDATE therefore uses the current system timestamp, offset backwards by a random value of up to 48 hours expressed in seconds.
create table outgoing
tablespace testdata
as select * from dba_objects ;
--
alter table outgoing modify (owner not null, object_name not null,
object_id not null, object_type not null, created not null) ;
--
-- Try and make the data loading go faster by minimising redo logging
alter table outgoing nologging ;
--
alter table outgoing add (ts timestamp default systimestamp not null) ;
--
insert /*+ append */ into outgoing select * from outgoing ;
commit ;
insert /*+ append */ into outgoing select * from outgoing ;
commit ;
insert /*+ append */ into outgoing select * from outgoing ;
commit ;
insert /*+ append */ into outgoing select * from outgoing ;
commit ;
insert /*+ append */ into outgoing select * from outgoing ;
commit ;
insert /*+ append */ into outgoing select * from outgoing ;
commit ;
insert /*+ append */ into outgoing select * from outgoing ;
commit ;
--
update outgoing
set ts = systimestamp - numtodsinterval (dbms_random.value (0, 60*60*24*2), 'second') ;
commit ;
--
insert /*+ append */ into outgoing select * from outgoing ;
commit ;
insert /*+ append */ into outgoing select * from outgoing ;
commit ;
--
select count (*) cnt , count (distinct ts) ts_cnt from outgoing ;
--
select bytes / (1024 * 1024) mb
from user_segments
where segment_name = 'OUTGOING' ;
This produced 46,384,640 rows occupying 6.3 GB of disk storage. I also checked the minimum and maximum values of the timestamp column, and they did cover the past 48 hours (29-NOV-16 14.56.36.872969 to 01-DEC-16 14.56.36.863852).
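The check on the timestamp range was just a simple query along these lines:
select min (ts) min_ts, max (ts) max_ts from outgoing ;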

Test Queries & Execution Plans

In the real system there are fewer than 10 categories, and the processing involves picking some of the ones that occur most frequently. I'll use OBJECT_TYPE for the category, and pick the top 3 most frequently occurring values for the test queries:
select object_type, count (*)
from outgoing
group by object_type order by 2, 1 ;

OBJECT_TYPE               COUNT(*)
----------------------- ----------
...
TABLE                      1217536
TYPE                       1296384
INDEX                      2208256
VIEW                       3252224
JAVA CLASS                15777792
SYNONYM                   18917376
So our test categories will be the bottom 3 in that listing - VIEW, JAVA CLASS and SYNONYM, the 3 most frequently occurring values.

Let's create an index on the category and timestamp columns, again just like the real system:
create index ix_outgoing on outgoing (object_type, ts)
tablespace testdata ;
And let's make sure our statistics on this table are up to date:
exec dbms_stats.gather_table_stats ('JOHN', 'OUTGOING')
When I first ran these test queries and displayed their execution plans, the costs shown seemed wrong for the second, multi-category query - the reported cost was far too low given its much higher elapsed time and what I knew it was doing during execution. To show the real elapsed time I have done "set timing on" in SQL*Plus, run the queries with the "gather_plan_statistics" hint, and passed extra options to dbms_xplan.display_cursor to include the run time execution statistics, which show more about what really happened during each execution.

And to make the queries more evenly comparable I also flushed the buffer cache and shared pool between the execution of each query (done from another SQL*Plus session connected as SYSDBA).
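The flushes were done with the standard commands, along these lines:
-- Run as SYSDBA between each test query to clear cached data blocks and parsed cursors
alter system flush buffer_cache ;
alter system flush shared_pool ;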

The maximum for a single category has the following execution plan:
SQL> select /*+ gather_plan_statistics */ max (ts) from outgoing where object_type = 'SYNONYM' ;

MAX(TS)
---------------------------------------------------------------------------
01-DEC-16 14.56.36.841635

Elapsed: 00:00:00.14

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL -BYTES IOSTATS LAST'));

SQL_ID 5mxhp3mh3gzmj, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ max (ts) from outgoing 
where object_type = 'SYNONYM'

Plan hash value: 856418741

---------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name        | Starts | E-Rows | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |             |      1 |        |     4 (100)|          |      1 |00:00:00.01 |       4 |      4 |
|   1 |  SORT AGGREGATE              |             |      1 |      1 |            |          |      1 |00:00:00.01 |       4 |      4 |
|   2 |   FIRST ROW                  |             |      1 |      1 |     4   (0)| 00:00:01 |      1 |00:00:00.01 |       4 |      4 |
|*  3 |    INDEX RANGE SCAN (MIN/MAX)| IX_OUTGOING |      1 |      1 |     4   (0)| 00:00:01 |      1 |00:00:00.01 |       4 |      4 |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("OBJECT_TYPE"='SYNONYM')

The multi-category query has the following execution plan:
SQL> select /*+ gather_plan_statistics */ max (ts) from outgoing 
     where object_type in ( 'SYNONYM' , 'JAVA CLASS' , 'VIEW' ) ;

MAX(TS)
---------------------------------------------------------------------------
01-DEC-16 14.56.36.841635

Elapsed: 00:00:36.98

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL -BYTES IOSTATS LAST'));

SQL_ID anb3s44frw6xa, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ max (ts) from outgoing 
where object_type in ( 'SYNONYM' , 'JAVA CLASS' , 'VIEW' )

Plan hash value: 2754621723

-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name        | Starts | E-Rows | Cost (%CPU)| E-Time   | A-Rows | A-Time     | Buffers | Reads  |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |             |      1 |        |     4 (100)|          |      1 |00:00:36.87 |     169K|    169K|
|   1 |  SORT AGGREGATE    |             |      1 |      1 |            |          |      1 |00:00:36.87 |     169K|    169K|
|   2 |   INLIST ITERATOR  |             |      1 |        |            |          |     37M|00:02:23.89 |     169K|    169K|
|*  3 |    INDEX RANGE SCAN| IX_OUTGOING |      3 |      1 |     4   (0)| 00:00:01 |     37M|00:01:02.14 |     169K|    169K|
-----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access(("OBJECT_TYPE"='JAVA CLASS' OR "OBJECT_TYPE"='SYNONYM' OR "OBJECT_TYPE"='VIEW'))

The rephrased query using 3 separate per-category queries is:
SQL> select /*+ gather_plan_statistics */ max (maxts) from (
     select max (ts) maxts from outgoing where object_type = 'SYNONYM'
     union all
     select max (ts) maxts from outgoing where object_type = 'JAVA CLASS'
     union all
     select max (ts) maxts from outgoing where object_type = 'VIEW'
     ) ;

MAX(MAXTS)
---------------------------------------------------------------------------
01-DEC-16 14.56.36.841635

Elapsed: 00:00:00.15

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL -BYTES IOSTATS LAST'));

SQL_ID 1yv5abfvs46y7, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ max (maxts) from ( 
select max (ts) maxts from outgoing where object_type = 'SYNONYM' 
union all select max (ts) maxts from outgoing where object_type = 'JAVA CLASS' 
union all select max (ts) maxts from outgoing where object_type = 'VIEW' )

Plan hash value: 2647832233

------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name        | Starts | E-Rows | Cost(%CPU)| E-Time   | A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |             |      1 |        |    12(100)|          |      1 |00:00:00.04 |      12 |      9 |
|   1 |  SORT AGGREGATE                 |             |      1 |      1 |           |          |      1 |00:00:00.04 |      12 |      9 |
|   2 |   VIEW                          |             |      1 |      3 |    12  (0)| 00:00:01 |      3 |00:00:00.04 |      12 |      9 |
|   3 |    UNION-ALL                    |             |      1 |        |           |          |      3 |00:00:00.04 |      12 |      9 |
|   4 |     SORT AGGREGATE              |             |      1 |      1 |           |          |      1 |00:00:00.01 |       4 |      4 |
|   5 |      FIRST ROW                  |             |      1 |      1 |     4  (0)| 00:00:01 |      1 |00:00:00.01 |       4 |      4 |
|*  6 |       INDEX RANGE SCAN (MIN/MAX)| IX_OUTGOING |      1 |      1 |     4  (0)| 00:00:01 |      1 |00:00:00.01 |       4 |      4 |
|   7 |     SORT AGGREGATE              |             |      1 |      1 |           |          |      1 |00:00:00.01 |       4 |      3 |
|   8 |      FIRST ROW                  |             |      1 |      1 |     4  (0)| 00:00:01 |      1 |00:00:00.01 |       4 |      3 |
|*  9 |       INDEX RANGE SCAN (MIN/MAX)| IX_OUTGOING |      1 |      1 |     4  (0)| 00:00:01 |      1 |00:00:00.01 |       4 |      3 |
|  10 |     SORT AGGREGATE              |             |      1 |      1 |           |          |      1 |00:00:00.02 |       4 |      2 |
|  11 |      FIRST ROW                  |             |      1 |      1 |     4  (0)| 00:00:01 |      1 |00:00:00.02 |       4 |      2 |
|* 12 |       INDEX RANGE SCAN (MIN/MAX)| IX_OUTGOING |      1 |      1 |     4  (0)| 00:00:01 |      1 |00:00:00.02 |       4 |      2 |
------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access("OBJECT_TYPE"='SYNONYM')
   9 - access("OBJECT_TYPE"='JAVA CLASS')
  12 - access("OBJECT_TYPE"='VIEW')

Conclusions

The single category query executes in well under 1 second (0.14 seconds on my test system), while the multiple category query using "IN" takes over 30 seconds (36.98 seconds) - more than 100 times slower in elapsed time.

We can see that the single category query uses an optimized index access to get just the maximum timestamp value - an "INDEX RANGE SCAN (MIN/MAX)" combined with a "FIRST ROW" operation in the execution plan. However the multiple category query using "IN" does not do this, and instead does an "INDEX RANGE SCAN" combined with an "INLIST ITERATOR", which takes far longer to execute (as seen from the A-Time value in the reported execution plan and the 37M for A-Rows).

The solution is to rewrite the query as a "UNION ALL" of 3 separate single category queries. We can safely use "UNION ALL" because the categories are distinct from each other. This query executes in under a second, which is almost as fast as for just one category, and it uses the optimized "INDEX RANGE SCAN (MIN/MAX)" access method combined with the "FIRST ROW" operation.

Sometimes it can be quicker to run separate SELECT queries and merge their results together than to run one single query that does it all at once - in cases like this one, over 100 times faster.
