Thursday 21 November 2013

Basic Performance Analysis using AWR data

I've previously covered a number of queries that can be combined together to pull out key statistics from the Oracle AWR data for each snapshot it makes, which you could use to analyse the performance of a system. One example would be to use this to look at how the workload changes during the day, and to identify the period of peak activity. Another use would be to see if there were any periods experiencing significantly higher waits than at other times.

I'll now go through how you can use some of the AWR data to do an overall performance analysis of a system. But before we do that, we need to review some basics on performance tuning, just to clarify some things and avoid potential confusion.

Performance Tuning Basics

I'm of the same school of tuning as people like Cary Millsap and Jeff Holt - generally speaking you should be tuning the specific performance problems experienced by individual Oracle sessions, and not tuning the system as a whole. This is because any "improvements" you make to the system that do not benefit the slow sessions are completely wasted and pointless. For instance, the system may be experiencing slow disk I/O's, but it might be that the slow interactive response time reported by some users is due to very high CPU usage resulting from a poor execution plan for the particular SQL statement their sessions are executing. Making the disk I/O's happen quicker might "improve" things for the system as a whole, but it will have no impact at all on the CPU bound SQL statement that is the cause of slow response times on an end user's screen.

That said, there are times where system level tuning is appropriate or when you can only do system level tuning, and that is where AWR can help as it gathers system wide activity data. For session level tuning you need to look elsewhere - such as ASH (Active Session History) or using SQL Trace.

There are some big assumptions being made when we do system level tuning:
  • All sessions are executing the same or similar SQL statements
    • So a system wide "average" view across all sessions is valid
  • Any "poor performance" on the system impacts all sessions equally
    • So even one rogue SQL statement can slow down all other SQL statements being executed from other sessions
  • The system is under a high enough load that performance problems are due to interactions between the SQL statements
    • It is assumed that poorly performing individual SQL statements would have been identified earlier under initial testing
Provided these assumptions, or ones like them, hold true, then you can do system level tuning.

System Level Tuning Measurements

A computer system exists to do work, and its performance is really about how long it takes to do that work and which resources it uses while doing it. Performance is an issue when a task takes too long to complete.

We need to measure both sides of this – the input work requests, and the output usage of the system resources that results. We can only do performance tuning properly when we know both of these, as this lets us see the impact of any changes we make - does resource usage go down or not for the same input workload?

Measuring the input workload is best done in terms of business or application level transactions. This is best achieved by instrumenting the application software itself to record both the type of each transaction it performs and the elapsed time it takes to complete. However, very few applications, if any, are instrumented this way.

This leaves a database level measurement as the only viable and consistently available way of recording the input workload, and in my view the best available measurement is the number of SQL statements executed per second. While this is very imperfect as a measure of the input workload, it is the nearest we are going to get to it, and all input application transactions result in SQL statements being executed. So it does correlate with input workload.
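To make this more concrete, here is a minimal sketch (not part of the main AWR query later in this post) showing the two cumulative statistics this measurement can be based on - 'execute count' (SQL statements executed) and 'user calls' (client calls to the database, which is what the summary query at the end of this post reports). Sampling them twice and dividing the difference by the sampling interval in seconds gives the per second rate.
select name, value
  from v$sysstat
 where name in ('execute count', 'user calls')
/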

For a measurement of workload on the system - the output resulting from the execution of the SQL statements - the best measurement is probably Average Active Sessions. AAS is simply the Database Time ("DB Time" from System Time Model) divided by elapsed time for the period over which Database Time was measured. The Database Time value is a measurement of the amount of elapsed time sessions were active executing SQL statements, and includes both active execution time (running on a CPU) and waiting time (for disk I/O or any other wait event). AAS indicates how many sessions were active at the same time executing SQL statements i.e. the level of concurrency on your system.
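As a hypothetical worked example: if the DB Time accumulated over a one hour snapshot interval (3,600 seconds of elapsed time) is 14,400 seconds, then AAS = 14,400 / 3,600 = 4 - on average four sessions were actively executing SQL statements throughout that hour.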

AAS correlates with the number of CPU's on your system - if all SQL statements executed with no waiting at all they would only use up CPU resource and your transaction rate would be limited by the number of CPU's in the system. If AAS is less than the number of CPU's in your system then you are not reaching capacity. If AAS is close to or more than the number of CPU's then you have an overloaded system and waiting of some form or another is occurring.
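If you are not sure how many CPU's the database actually sees, one quick way to check (assuming you have access to V$OSSTAT) is:
select stat_name, value
  from v$osstat
 where stat_name in ('NUM_CPUS', 'NUM_CPU_CORES')
/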

To make real use of the AAS value we also want to know the percentage of time spent waiting during each period i.e. total wait time as a percentage of database active time (DB time again). The Percentage Wait (which I label Wait%) indicates the efficiency of the SQL statement execution - if this waiting were eliminated then that wait time would be removed from the elapsed time of each SQL statement executed. This is only relevant when AAS is high enough: when the level of concurrency is very low you can get various anomalies, and there will always be some waiting somewhere. Again remember the assumptions I stated earlier - the system must be under a high enough load to experience performance problems due to the combined workload itself.
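Continuing the hypothetical example from above: if 7,200 of those 14,400 seconds of DB Time were spent waiting rather than running on a CPU, then Wait% = 100 * 7,200 / 14,400 = 50%, i.e. half of all database time could in principle be removed by eliminating those waits.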

That's it - 3 simple measurements should be enough to tell you whether your system is overloaded or not, and whether it is performing efficiently. These can be easily extracted from AWR (see later in this post), and can be graphed in your favourite spreadsheet tool to spot trends and anomalies.

Using these measurements

If you have a system wide bottleneck or capacity limit then checking the total system resource usage over a period of time should identify it. As stated, if AAS is high enough (I would say more than half of the number of CPU's), and the Wait% is also high (say 50% or more), then you have poor performance and it can be improved.

At this point I would use the expanded version of the AWR query that I have built up over the previous posts to get all of the data out of AWR for each snapshot, and again load it into a spreadsheet or something similar. Within this AWR data is a breakdown of the Oracle Wait Time by Wait Class, which tells you which "type" of wait was consuming most of the wait time. This will typically be Disk (I/O), Network, Cluster, Commit (Redo), or Concurrency (locking).
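As a minimal sketch of that breakdown (the expanded query built up in the previous posts has this built in), the wait time per wait class can be summed directly from DBA_HIST_SYSTEM_EVENT using the same delta-between-snapshots technique as before. As written it sums over all snapshot pairs retained in AWR - join to the snaps sub-query used in the main query below to restrict it to a specific date range.
select sysevent.wait_class
     , sum (sysevent.time_waited_micro - psysevent.time_waited_micro) / 1000000 wait_secs
  from dba_hist_system_event sysevent, dba_hist_system_event psysevent
 where sysevent.snap_id = psysevent.snap_id + 1
   and sysevent.dbid = psysevent.dbid
   and sysevent.instance_number = psysevent.instance_number
   and sysevent.event_id = psysevent.event_id
   and sysevent.wait_class != 'Idle'  -- Ignore Idle wait events
 group by sysevent.wait_class
 order by wait_secs desc
/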

Knowing the wait class you can then query AWR for the per SQL statement execution statistics, sum the statistic corresponding to that wait class for each SQL statement, and sort by that summed value. I showed how to do this in my previous post, summing the number of disk reads per SQL statement executed in each snapshot. This way you easily get to see the top SQL for that wait type, and see how much time they spent waiting relative to the total wait time.

Equally you could just report all SQL statements by their total wait time, to see those that experienced the worst waits whatever the type of wait. Wait time for a SQL statement would be calculated as its Elapsed time minus its CPU time.
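A minimal sketch of that calculation, using the DELTA columns of DBA_HIST_SQLSTAT (covered in the previous post on SQL statements, below) - again, join to DBA_HIST_SNAPSHOT as in the later queries to restrict it to a particular date range:
select sqlstat.sql_id
     , sum (sqlstat.executions_delta) sql_execs
     , sum (sqlstat.elapsed_time_delta - sqlstat.cpu_time_delta) / 1000000 wait_secs
  from dba_hist_sqlstat sqlstat
 group by sqlstat.sql_id
 order by wait_secs desc
/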

AWR System Performance Query

Here is a query that gives you just the 3 key measurements mentioned before for a system from the AWR data. It reports all the measurements for yesterday - modify the date range constraint to report on different periods.
set feedback off
set verify off
set linesize 1000
set trimout on
set trimspool on
--
col snap_time           format a15      heading 'SNAP TIME'
col user_calls_sec      format 9,990    heading 'UCALL/S'
col aas                 format 90.0     heading 'AAS'
col wait_pct            format 990.0    heading 'WAIT%'
--
with
snaps as 
(select snap_id
      , dbid
      , end_snap_time
      , snap_interval
      , extract (second from snap_interval) 
       + (extract (minute from snap_interval) 
          + (extract (hour from snap_interval)
             + (extract (day from snap_interval) * 24)
             ) * 60
          ) * 60 snap_duration
  from (select csnaps.snap_id
             , csnaps.dbid
             , min (csnaps.end_interval_time) end_snap_time
             , min (csnaps.end_interval_time) - min (csnaps.begin_interval_time) snap_interval
          from dba_hist_snapshot csnaps
         group by csnaps.snap_id, csnaps.dbid
       )
) -- snaps
, systimes as 
-- One row per Database Time Model with change in value between snapshots
(select systime.snap_id
      , systime.dbid
      , systime.stat_name
      , sum (systime.value - psystime.value) value
         from dba_hist_sys_time_model systime, dba_hist_sys_time_model psystime
        where systime.snap_id = psystime.snap_id + 1
          and systime.dbid = psystime.dbid
          and systime.instance_number = psystime.instance_number
          and systime.stat_id = psystime.stat_id
-- Assume if stat_id the same so is the stat_name
        group by systime.snap_id, systime.dbid, systime.stat_name
) -- systimes
, sysstats as 
-- One row per System Statistic with change in value between snapshots
(select sysstat.snap_id
      , sysstat.dbid
      , sysstat.stat_name
      , sum (sysstat.value - psysstat.value) value
         from dba_hist_sysstat sysstat, dba_hist_sysstat psysstat
        where sysstat.snap_id = psysstat.snap_id + 1
          and sysstat.dbid = psysstat.dbid
          and sysstat.instance_number = psysstat.instance_number
          and sysstat.stat_id = psysstat.stat_id
-- Assume if stat_id the same so is the stat_name
        group by sysstat.snap_id, sysstat.dbid, sysstat.stat_name
) -- sysstats
, syswaits as 
-- One row for total wait time, plus break down into major wait classes, and events
(select sysevent.snap_id
      , sysevent.dbid
      , sum (sysevent.time_waited_micro - psysevent.time_waited_micro) time_waited_micro
      , sum (sysevent.total_waits - psysevent.total_waits) wait_count
   from dba_hist_system_event sysevent, dba_hist_system_event psysevent
  where sysevent.snap_id = psysevent.snap_id + 1
    and sysevent.dbid = psysevent.dbid
    and sysevent.instance_number = psysevent.instance_number
    and sysevent.event_id = psysevent.event_id
    and sysevent.wait_class != 'Idle'  -- Ignore Idle wait events
  group by sysevent.snap_id
      , sysevent.dbid
) -- syswaits
select to_char (snaps.end_snap_time, 'DD/MM/YY HH24:MI') snap_time
     , (user_calls_st.value / snaps.snap_duration)              user_calls_sec
     , (dbtime.value / 1000000)       / snaps.snap_duration     aas
     , (100 * syswaits.time_waited_micro / dbtime.value)        wait_pct
  from snaps
     join (select * from systimes where stat_name = 'DB time') dbtime
       on snaps.snap_id = dbtime.snap_id and snaps.dbid = dbtime.dbid
     join syswaits
       on snaps.snap_id = syswaits.snap_id and snaps.dbid = syswaits.dbid
     join (select * from sysstats where stat_name = 'user calls') user_calls_st
       on snaps.snap_id = user_calls_st.snap_id and snaps.dbid = user_calls_st.dbid
 where snaps.end_snap_time between 
       (trunc (sysdate) - 1) and (trunc (sysdate))
 order by snaps.end_snap_time
/
--
set feedback on
set lines 80

Wednesday 6 November 2013

AWR Reporting #6 - SQL Statements

This is the last of the main set of posts on this topic. As with the last post, I'll try to keep this brief and just post the SQL involved.

Another data set we can look at is SQL statements i.e. statistics collected by AWR on individual SQL statements, not overall totals. AWR snapshots per SQL statement statistics from V$SQLSTAT into DBA_HIST_SQLSTAT.

As before we could use the technique of subtracting the values from the previous snapshot from the values of the current snapshot to get the change in value between the snapshots. Unfortunately this runs into issues as not all SQL statements may be present in both snapshots, and SQL statements can be flushed out of the library cache in the SGA and then added back in again later so their statistics have been reset in-between.

Luckily Oracle has solved this problem for us and provided a set of DELTA columns for the most important statistics giving us just what we want. Also this avoids the need to join back to the previous snapshot to calculate the change between the snapshots - the DELTA columns are already the change in value from the previous snapshot.

The main query then to extract useful statistics per SQL statement per database per snapshot is:
select sqlstat.snap_id
     , sqlstat.dbid
     , sqlstat.sql_id
     , sum (sqlstat.elapsed_time_delta)  sql_time
     , sum (sqlstat.executions_delta) sql_execs
     , sum (sqlstat.disk_reads_delta) sql_reads
  from dba_hist_sqlstat sqlstat
 group by sqlstat.snap_id
     , sqlstat.dbid
     , sqlstat.sql_id
We cannot use this query directly in our current main AWR summary query as we would get multiple data rows per snapshot - one per SQL statement captured by that snapshot. The obvious thing to do is to sum the values over all of the SQL statements executed in that snapshot to produce one set of values per snapshot. Unfortunately this doesn't really give us anything that useful - we already have data values from AWR for these statistics across the whole system (SQL execution time, number of SQL statements executed, and number of disk reads), and when added up across all the SQL statements it doesn't help us see if there are any anomalies within the SQL statements themselves.

A better use for this query is to run it separately within another query that instead groups by SQL_ID and sums over multiple snapshots e.g. all snapshots in one day. This is a useful way of seeing which particular SQL statements put a greater load on the system than other SQL statements. "Load" could be number of executions, CPU used, disk reads, or some other measurement.

For example, one system I was looking at recently was doing a lot of disk reads and I could see that there were a number of table scans occurring. So I wanted to identify the SQL statements causing these table scans i.e. the SQL statements with the highest disk reads. By summarising over a whole day I could ensure that I was looking at the worst offenders that were executed multiple times during the day, and not a bad query only executed once.

The following query reports SQL statements captured by AWR yesterday ("sysdate - 1" truncated to midnight) sorted by total number of disk reads. The "sql_reads > 100000" is a filter so that not all SQL statements are listed, only those with a significant number of disk reads - you can increase or decrease this threshold based on how active your system is. You may want to start higher at a million and then reduce it by a factor of 10 until you get enough SQL statements listed.
with 
snaps as 
(select csnaps.snap_id
      , csnaps.dbid
      , min (csnaps.end_interval_time) end_snap_time
      , min (csnaps.end_interval_time) - min (csnaps.begin_interval_time) snap_interval
   from dba_hist_snapshot csnaps
  group by csnaps.snap_id, csnaps.dbid
) -- snaps
, sqlstats as
(
[insert previous query here]
)
select sqlstats.sql_id
     , sum (sqlstats.sql_reads)  sql_reads
     , sum (sqlstats.sql_execs)  sql_execs
     , sum (sqlstats.sql_time) / 1000000 sql_time
  from snaps
     join sqlstats
       on snaps.snap_id = sqlstats.snap_id and snaps.dbid = sqlstats.dbid
 where snaps.end_snap_time between 
       (trunc (sysdate) - 1) and (trunc (sysdate))
   and sql_reads > 100000
 group by sqlstats.sql_id
 order by sql_reads desc
/
Note that the time values are in microseconds and so must be divided by one million to output them as seconds.

A further refinement of this query is to restrict it to the main working hours, say 8am to 6pm:
   and extract (hour from snaps.end_snap_time) between 8 and 17 -- 8:00 to 17:59
This avoids issues with any overnight batch jobs you may be running or the Oracle default job to update stale statistics on database objects (which does do a lot of disk reads).

When I used this I was able to identify about 4 SQL statements responsible for about 25% of the disk reads on the system, each having a relatively low execution count - under 100 executions each for millions of disk reads in total. Each was clearly doing full table scans, which I validated by checking their execution plans - there is an AWR report supplied with Oracle that reports this information for a SQL statement across a number of snapshots (awrsqrpt.sql).
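For reference, that supplied report can be run from SQL*Plus as follows - it prompts for the report format, the snapshot range and the SQL_ID of interest:
@?/rdbms/admin/awrsqrpt.sql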

Now that I knew the top contributors to disk reads on the system I was able to investigate each SQL statement individually and work out changes to improve their execution. This was a mixture of extra indexes, SQL rewrites and other database changes.