Oracle: compare DB Time and CPU time to ASH

April 26th, 2019

[Screenshot: 2019-04-26 at 1.31.49 PM]

Below I’m comparing AAS from ASH versus SYSMETRIC, specifically:

  • ASH CPU in AAS vs SYSMETRIC CPU in AAS
  • ASH total AAS vs SYSMETRIC total AAS derived from DB Time

ASH CPU is consistently lower than SYSMETRIC CPU, which could be because waits burn CPU but show up in ASH as waits and not CPU.
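
That mutual exclusivity is visible in ASH itself: each sample carries a session_state of either ON CPU or WAITING, never both, so CPU burned inside a wait gets attributed to the wait event. A quick sketch to see the breakdown (the 5-minute window is arbitrary):

-- each ASH sample is classified as exactly one of ON CPU / WAITING,
-- so CPU burned while inside a wait is attributed to the wait event
select session_state, event, count(*) samples
from   v$active_session_history
where  sample_time > sysdate - 5/(24*60)
group  by session_state, event
order  by samples desc;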

On the other hand, I’m not sure why total AAS from SYSMETRIC is consistently smaller than total AAS from ASH.
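
For reference, the two SYSMETRIC values can be spot-checked directly from v$sysmetric before running the history comparison below (a sketch; group_id = 2 is assumed to be the 60-second interval group):

-- current-interval values of the two metrics, in centiseconds per second,
-- so value/100 is average active sessions
select metric_name, round(value/100,2) aas
from   v$sysmetric
where  metric_name in ('Database Time Per Sec','CPU Usage Per Sec')
and    group_id = 2;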

  Def v_secs=60 --  bucket size

select   DBTT.begin_time ,ASH_CPU, DBT_CPU , ASH_AAS,  DBT_AAS from
(
    select to_char(begin_time,'MON DD YYYY HH24:MI') begin_time,
          round(value/100,2) DBT_AAS
    from v$sysmetric_history 
    where metric_name='Database Time Per Sec'
    and INTSIZE_CSEC > 2000
    order by 1
)  DBTT,
(
    select to_char(begin_time,'MON DD YYYY HH24:MI') begin_time,
          round(value/100,2) DBT_CPU
    from v$sysmetric_history 
    where metric_name='CPU Usage Per Sec'
    and INTSIZE_CSEC > 2000  -- 60-second intervals only, to match DBTT
)  DBTC,
(
   select
        to_char(to_date(
                         trunc((id*&v_secs)/ (24*60*60)) || ' ' ||  -- Julian days
                           mod((id*&v_secs),  24*60*60)             -- seconds in the day
                , 'J SSSSS' ), 'MON DD YYYY HH24:MI')     start_time,
        round(CPU/&v_secs,2) ASH_CPU,
        round(total/&v_secs,2)  ASH_AAS
   from ( 
     select
        trunc((to_char(sample_time,'J')*(24*60*60)+to_char(sample_time,'SSSSS'))/&v_secs)  id,
        sum(decode(session_state,'ON CPU',1,0))     CPU,
        sum(decode(session_state,'ON CPU',0,1))     Wait,
       --  decode(session_state,'ON CPU','ON CPU','WAIT')     event,
        count(*) total
     from
        v$active_session_history ash
     where SAMPLE_TIME > sysdate - 60/(24*60)
     group by
         trunc((to_char(sample_time,'J')*(24*60*60)+to_char(sample_time,'SSSSS'))/&v_secs)
        )
) AAS
where
     DBTT.begin_time=DBTC.begin_time
 and DBTT.begin_time=AAS.start_time (+)
order by DBTT.begin_time ;
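
The ASH subquery buckets samples by converting each sample_time to a running second count (Julian day × 86,400 + seconds into the day) and integer-dividing by the bucket size; the outer select reverses the math to label each bucket. A standalone illustration with a literal date (hypothetical value, illustration only):

-- map one timestamp to its 60-second bucket id, then back to the bucket start
select id,
       to_char(to_date(trunc((id*60)/(24*60*60)) || ' ' || mod(id*60, 24*60*60),
                       'J SSSSS'), 'MON DD YYYY HH24:MI:SS') bucket_start
from (select trunc((to_char(d,'J')*(24*60*60)+to_char(d,'SSSSS'))/60) id
      from (select to_date('APR 26 2019 18:35:42','MON DD YYYY HH24:MI:SS') d
            from dual));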

Low load – data looks pretty comparable:

BEGIN_TIME		      ASH_CPU	 DBT_CPU    ASH_AAS    DBT_AAS
-------------------------- ---------- ---------- ---------- ----------
APR 26 2019 18:24			       0		     0
APR 26 2019 18:25		  .02	       0	.02	     0
APR 26 2019 18:26			       0		     0
APR 26 2019 18:27			       0		     0
APR 26 2019 18:28			       0		     0
APR 26 2019 18:29		  .03	       0	.03	     0
APR 26 2019 18:30			       0		     0
APR 26 2019 18:31			       0		     0
APR 26 2019 18:32		    0	       0	 .8	     0
APR 26 2019 18:33		    0	       0	  1	     0
APR 26 2019 18:34		  .02	       0	.05	     0
APR 26 2019 18:35		  .22	     .21	.22	   .21
APR 26 2019 18:36			       0		     0
APR 26 2019 18:37		  .22	     .21	.22	   .21
APR 26 2019 18:38		  .02	       0	.02	     0
APR 26 2019 18:39			       0		     0
APR 26 2019 18:40			       0		     0

Higher load – we start to see consistent differences:

BEGIN_TIME		      ASH_CPU	 DBT_CPU    ASH_AAS    DBT_AAS
-------------------------- ---------- ---------- ---------- ----------
APR 26 2019 18:56		  .52	     .85	3.7	  2.54
APR 26 2019 18:57		   .4	     .62       2.95	  1.96
APR 26 2019 18:58		  .23	     .39       1.73	   1.2
APR 26 2019 18:59		  .17	      .4       1.32	  1.27
APR 26 2019 19:00		  .45	     .85       3.17	  2.51
APR 26 2019 19:01		  .53	     .81	3.2	  2.57
APR 26 2019 19:02		  .33	      .4       2.18	  1.28
APR 26 2019 19:03		   .2	     .39	.87	  1.24
APR 26 2019 19:04		  .28	      .4       1.88	  1.17
APR 26 2019 19:05		  .43	     .81       3.12	  2.56
APR 26 2019 19:06		  .48	     .81       3.33	  2.36
APR 26 2019 19:07		   .3	     .41       1.43	  1.33
APR 26 2019 19:08		  .43	      .4	2.2	   1.2
APR 26 2019 19:09		  .13	      .4       1.43	  1.29
APR 26 2019 19:10		  .48	     .87       3.48	  2.49
APR 26 2019 19:11		  .48	     .72       3.03	  2.25
APR 26 2019 19:12		  .27	     .39       2.03	   1.2

 

The screen shot below reproduces much of the above, except that DB Time and CPU used by this Session come from v$sysstat and not SYSMETRICs. In this case DB Time shows a higher AAS than ASH, whereas it is the other way around for DB Time from SYSMETRICs.
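
For reference, a minimal sketch of where those v$sysstat numbers would come from: both statistics are cumulative centisecond counters, so AAS is the delta in value/100 between two samples divided by the elapsed seconds.

-- cumulative counters (centiseconds since instance startup); sample twice
-- and divide the delta of value/100 by the elapsed seconds to get AAS
select name, value/100 seconds
from   v$sysstat
where  name in ('DB time','CPU used by this session');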

[Screenshot: 2019-04-26 at 12.28.06 PM]

John Beresniewicz pointed out that DB Time doesn’t include BACKGROUND processes, so here is a slight update filtering ASH to FOREGROUND sessions only:

 

  Def v_secs=60 --  bucket size

select   DBTT.begin_time ,ASH_CPU, DBT_CPU , ASH_AAS,  DBT_AAS from
(
    select to_char(begin_time,'MON DD YYYY HH24:MI') begin_time,
          round(value/100,2) DBT_AAS
    from v$sysmetric_history
    where metric_name='Database Time Per Sec'
    and INTSIZE_CSEC > 2000
    order by 1
)  DBTT,
(
    select to_char(begin_time,'MON DD YYYY HH24:MI') begin_time,
          round(value/100,2) DBT_CPU
    from v$sysmetric_history
    where metric_name='CPU Usage Per Sec'
    and INTSIZE_CSEC > 2000  -- 60-second intervals only, to match DBTT
)  DBTC,
(
   select
        to_char(to_date(
                         trunc((id*&v_secs)/ (24*60*60)) || ' ' ||  -- Julian days
                           mod((id*&v_secs),  24*60*60)             -- seconds in the day
                , 'J SSSSS' ), 'MON DD YYYY HH24:MI')     start_time,
        round(CPU/&v_secs,2) ASH_CPU,
        round(total/&v_secs,2)  ASH_AAS
   from (
     select
        trunc((to_char(sample_time,'J')*(24*60*60)+to_char(sample_time,'SSSSS'))/&v_secs)  id,
        sum(decode(session_state,'ON CPU',1,0))     CPU,
        sum(decode(session_state,'ON CPU',0,1))     Wait,
       --  decode(session_state,'ON CPU','ON CPU','WAIT')     event,
        count(*) total
     from
        v$active_session_history ash
     where SAMPLE_TIME > sysdate - 60/(24*60)
        and session_type = 'FOREGROUND'
     group by
         trunc((to_char(sample_time,'J')*(24*60*60)+to_char(sample_time,'SSSSS'))/&v_secs)
        )
) AAS
where
     DBTT.begin_time=DBTC.begin_time
 and DBTT.begin_time=AAS.start_time (+)
order by DBTT.begin_time ;

and the output is much closer to being in line:

BEGIN_TIME		      ASH_CPU	 DBT_CPU    ASH_AAS    DBT_AAS
-------------------------- ---------- ---------- ---------- ----------
APR 27 2019 01:25			       0		     0
APR 27 2019 01:26			       0		     0
APR 27 2019 01:27			       0		     0
APR 27 2019 01:28			       0		     0
APR 27 2019 01:29		  .02	     .01	.02	   .02
APR 27 2019 01:30		    0	     .15	.05	   .42
APR 27 2019 01:31		  .35	     .83	2.8	  2.56
APR 27 2019 01:32		  .55	     .79       2.62	  2.38
APR 27 2019 01:33		  .17	     .41       1.67	  1.31
APR 27 2019 01:34		   .3	     .42       1.37	  1.28
APR 27 2019 01:35		  .48	     .78	2.8	  2.61

BEGIN_TIME		      ASH_CPU	 DBT_CPU    ASH_AAS    DBT_AAS
-------------------------- ---------- ---------- ---------- ----------
APR 27 2019 01:36		  .42	     .84	2.6	  2.56
APR 27 2019 01:37		   .3	     .58       1.88	  1.87
APR 27 2019 01:38		  .23	     .41       1.48	  1.25
APR 27 2019 01:39		  .23	     .41       1.57	  1.32
APR 27 2019 01:40		  .43	     .83       2.82	  2.53
APR 27 2019 01:41		  .55	      .8       2.67	  2.52
APR 27 2019 01:42		   .3	     .41       1.62	  1.28
APR 27 2019 01:43		  .32	      .4	1.1	  1.27
APR 27 2019 01:44		  .25	      .4       1.17	  1.19
APR 27 2019 01:45		  .52	     .79       2.38	  2.55
APR 27 2019 01:46		   .3	      .8       2.23	  2.39

BEGIN_TIME		      ASH_CPU	 DBT_CPU    ASH_AAS    DBT_AAS
-------------------------- ---------- ---------- ---------- ----------
APR 27 2019 01:47		  .25	      .4       1.67	  1.32
APR 27 2019 01:48		  .32	     .41       1.47	  1.18
APR 27 2019 01:49		  .27	     .42	1.4	  1.34
APR 27 2019 01:50		  .42	     .86       2.65	  2.51
APR 27 2019 01:51		   .4	     .76       2.57	   2.4
APR 27 2019 01:52		  .18	     .41	1.8	  1.32
APR 27 2019 01:53		  .25	     .41       1.58	  1.32
APR 27 2019 01:54		   .2	     .41       1.35	  1.21
APR 27 2019 01:55		  .42	      .8       2.28	  2.56
APR 27 2019 01:56		   .4	     .72       2.55	   2.2
APR 27 2019 01:57		  .23	     .41       1.32	  1.34

BEGIN_TIME		      ASH_CPU	 DBT_CPU    ASH_AAS    DBT_AAS
-------------------------- ---------- ---------- ---------- ----------
APR 27 2019 01:58		  .25	     .41       1.18	  1.19
APR 27 2019 01:59		   .2	     .51       1.47	  1.87
APR 27 2019 02:00		  1.3	     1.5       4.05	  4.19
APR 27 2019 02:01		 1.12	    1.34       3.15	  3.14
APR 27 2019 02:02		  .27	     .43       1.45	  1.26
APR 27 2019 02:03		  .25	     .42       1.02	   1.3
APR 27 2019 02:04		  .17	     .42       1.37	  1.21
APR 27 2019 02:05		  .47	     .82       2.17	  2.49
APR 27 2019 02:06		  .42	     .78	2.6	   2.3
APR 27 2019 02:07		  .17	     .41       1.15	  1.44
APR 27 2019 02:08		  .22	     .42       1.75	  1.22

BEGIN_TIME		      ASH_CPU	 DBT_CPU    ASH_AAS    DBT_AAS
-------------------------- ---------- ---------- ---------- ----------
APR 27 2019 02:09		   .2	     .42       1.33	  1.31
APR 27 2019 02:10		  .32	     .85       2.37	  2.53
APR 27 2019 02:11		  .52	     .74       2.38	  2.28
APR 27 2019 02:12		  .22	     .41       1.78	  1.25
APR 27 2019 02:13		  .28	     .41       1.27	  1.27
APR 27 2019 02:14		  .27	     .42       1.48	  1.19
APR 27 2019 02:15		  .35	     .82       2.37	  2.53
APR 27 2019 02:16		  .32	     .75       2.85	  2.17
APR 27 2019 02:17		  .17	      .4	.87	  1.35
APR 27 2019 02:18		  .25	     .43       1.68	  1.21
APR 27 2019 02:19		  .22	     .42       1.57	  1.32

BEGIN_TIME		      ASH_CPU	 DBT_CPU    ASH_AAS    DBT_AAS
-------------------------- ---------- ---------- ---------- ----------
APR 27 2019 02:20		  .47	     .88       2.42	   2.5
APR 27 2019 02:21		  .37	     .71       2.27	  2.18
APR 27 2019 02:22		   .2	     .42       1.47	  1.27
APR 27 2019 02:23		  .28	     .41	1.2	  1.28
APR 27 2019 02:24		  .17	     .43       1.27	  1.25
APR 27 2019 02:25		  .47	     .82       2.05	  2.54

Now we can see it's much closer.

 

[Screenshot: 2019-04-26 at 8.36.24 PM]
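
To gauge how much of the earlier gap was background activity, a quick breakdown of ASH samples by session type over the same window (a sketch):

-- share of ASH samples by session type over the last hour
select session_type, count(*) samples,
       round(ratio_to_report(count(*)) over () * 100, 1) pct
from   v$active_session_history
where  sample_time > sysdate - 60/(24*60)
group  by session_type;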

 

Here is the AWR report from the load period. As a cross-check, DB Time of 98.53 minutes over a 59.46-minute window works out to 98.53/59.46 ≈ 1.66 average active sessions, consistent with the 1.7 DB Time(s) per second in the Load Profile.

WORKLOAD REPOSITORY report for

DB Name         DB Id    Instance     Inst Num Startup Time    Release     RAC
------------ ----------- ------------ -------- --------------- ----------- ---
ORCL          1534324168 ORCL                1 25-Apr-19 22:07 11.2.0.4.0  NO

Host Name        Platform                         CPUs Cores Sockets Memory(GB)
---------------- -------------------------------- ---- ----- ------- ----------
ip-10-13-0-252   Linux x86 64-bit                    2     1       1      15.38

              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:        23 26-Apr-19 21:00:37        34       1.0
  End Snap:        24 26-Apr-19 22:00:04        33       1.0
   Elapsed:               59.46 (mins)
   DB Time:               98.53 (mins)

Load Profile                    Per Second   Per Transaction  Per Exec  Per Call
~~~~~~~~~~~~~~~            ---------------   --------------- --------- ---------
             DB Time(s):               1.7               0.0      0.00      0.00
              DB CPU(s):               0.6               0.0      0.00      0.00
      Redo size (bytes):       4,762,632.4           4,196.8
  Logical read (blocks):          57,271.0              50.5
          Block changes:          36,004.1              31.7
 Physical read (blocks):               0.2               0.0
Physical write (blocks):             239.9               0.2
       Read IO requests:               0.2               0.0
      Write IO requests:              70.5               0.1
           Read IO (MB):               0.0               0.0
          Write IO (MB):               1.9               0.0
             User calls:           3,746.0               3.3
           Parses (SQL):           1,872.4               1.7
      Hard parses (SQL):               0.0               0.0
     SQL Work Area (MB):               1.3               0.0
                 Logons:               0.0               0.0
         Executes (SQL):           1,895.9               1.7
              Rollbacks:               0.0               0.0
           Transactions:           1,134.8

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:   97.86       Redo NoWait %:  100.00
            Buffer  Hit   %:  100.00    In-memory Sort %:  100.00
            Library Hit   %:  101.55        Soft Parse %:  100.00
         Execute to Parse %:    1.24         Latch Hit %:   99.22
Parse CPU to Parse Elapsd %:    0.20     % Non-Parse CPU:  100.00

Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                            Tota    Wait   % DB
Event                                 Waits Time Avg(ms)   time Wait Class
------------------------------ ------------ ---- ------- ------ ----------
log file sync                     4,048,143 3688       1   62.4 Commit
DB CPU                                      1947           32.9
buffer busy waits                 2,395,899 341.       0    5.8 Concurrenc
enq: TX - row lock contention       412,330 71.6       0    1.2 Applicatio
latch: cache buffers chains           9,514  3.8       0     .1 Concurrenc
log file switch (checkpoint in           26  3.5     135     .1 Configurat
SQL*Net message to client         6,685,887  2.9       0     .0 Network
enq: SQ - contention                 11,266  2.4       0     .0 Configurat
log file switch completion              335  2.2       7     .0 Configurat
latch: In memory undo latch           2,863  1.8       1     .0 Concurrenc
Wait Classes by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                        Avg             Avg
                                        Total Wait     Wait   % DB   Active
Wait Class                  Waits       Time (sec)     (ms)   time Sessions
---------------- ---------------- ---------------- -------- ------ --------
Commit                  4,048,145            3,689        1   62.4      1.0
DB CPU                                       1,947            32.9      0.5
System I/O              2,649,191            1,614        1   27.3      0.5
Concurrency             2,408,339              347        0    5.9      0.1
Application               412,330               72        0    1.2      0.0
Other                     182,809               17        0     .3      0.0
Configuration              11,676                8        1     .1      0.0
Network                 6,686,094                3        0     .0      0.0
User I/O                    2,124                1        1     .0      0.0

Host CPU
~~~~~~~~                  Load Average
 CPUs Cores Sockets     Begin       End     %User   %System      %WIO     %Idle
----- ----- ------- --------- --------- --------- --------- --------- ---------
    2     1       1      1.38      1.19       3.3       6.4      12.5      65.9

Instance CPU
~~~~~~~~~~~~
              % of total CPU for Instance:      30.2
              % of busy  CPU for Instance:      88.6
  %DB time waiting for CPU - Resource Mgr:       0.0

IO Profile                  Read+Write/Second     Read/Second    Write/Second
~~~~~~~~~~                  ----------------- --------------- ---------------
            Total Requests:             897.6             9.6           888.0
         Database Requests:              70.7             0.2            70.5
        Optimized Requests:               0.0             0.0             0.0
             Redo Requests:             816.3             5.2           811.2
                Total (MB):              16.7             5.1            11.7
             Database (MB):               1.9             0.0             1.9
      Optimized Total (MB):               0.0             0.0             0.0
                 Redo (MB):               9.8             4.9             4.9
         Database (blocks):             240.1             0.2           239.9
 Via Buffer Cache (blocks):             239.7             0.0           239.7
           Direct (blocks):               0.5             0.2             0.2

Memory Statistics
~~~~~~~~~~~~~~~~~                       Begin          End
                                 ------------ ------------
                  Host Mem (MB):     15,753.8     15,753.8
                   SGA use (MB):     11,680.0     11,680.0
                   PGA use (MB):        265.3        260.0
    % Host Mem used for SGA+PGA:        75.82        75.79

Cache Sizes                       Begin        End
~~~~~~~~~~~                  ---------- ----------
               Buffer Cache:     9,952M     9,952M  Std Block Size:         8K
           Shared Pool Size:     1,422M     1,422M      Log Buffer:     9,920K

 Shared Pool Statistics        Begin    End
~~~~~~~~~~~~~~~~~~~~~~~~~~~~  ------  ------
             Memory Usage %:   29.71   29.73
    % SQL with executions>1:   95.33   95.28
  % Memory for SQL w/exec>1:   82.84   82.66

Time Model Statistics                        DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Total time in database user-calls (DB Time): 5912.1s
-> Statistics including the word "background" measure background process
   time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name

Statistic Name                                       Time (s) % of DB Time
------------------------------------------ ------------------ ------------
DB CPU                                                1,947.3         32.9
sql execute elapsed time                              1,835.4         31.0
parse time elapsed                                       51.9           .9
sequence load elapsed time                               11.7           .2
connection management call elapsed time                   1.0           .0
repeated bind elapsed time                                0.2           .0
PL/SQL execution elapsed time                             0.0           .0
DB time                                               5,912.1
background elapsed time                               1,738.5
background cpu time                                     169.1
                          ------------------------------------------------------

Operating System Statistics                   DB/Inst: ORCL/ORCL  Snaps: 23-24
-> *TIME statistic values are diffed.
   All others display actual values.  End Value is displayed if different
-> ordered by statistic type (CPU Use, Virtual Memory, Hardware Config), Name

Statistic                                  Value        End Value
------------------------- ---------------------- ----------------
BUSY_TIME                                238,871
IDLE_TIME                                461,716
IOWAIT_TIME                               87,280
NICE_TIME                                163,683
SYS_TIME                                  45,109
USER_TIME                                 22,772
LOAD                                           1                1
RSRC_MGR_CPU_WAIT_TIME                         0
VM_OUT_BYTES                           1,249,280
PHYSICAL_MEMORY_BYTES             16,519,077,888
NUM_CPUS                                       2
NUM_CPU_CORES                                  1
NUM_CPU_SOCKETS                                1
GLOBAL_RECEIVE_SIZE_MAX                4,194,304
GLOBAL_SEND_SIZE_MAX                   4,194,304
TCP_RECEIVE_SIZE_DEFAULT                  87,380
TCP_RECEIVE_SIZE_MAX                   6,291,456
TCP_RECEIVE_SIZE_MIN                       4,096
TCP_SEND_SIZE_DEFAULT                     16,384
TCP_SEND_SIZE_MAX                      4,194,304
TCP_SEND_SIZE_MIN                          4,096
                          ------------------------------------------------------

                          ------------------------------------------------------

Operating System Statistics - Detail          DB/Inst: ORCL/ORCL  Snaps: 23-24

Snap Time           Load    %busy    %user     %sys    %idle  %iowait
--------------- -------- -------- -------- -------- -------- --------
26-Apr 21:00:37      1.4      N/A      N/A      N/A      N/A      N/A
26-Apr 22:00:04      1.2     34.1      3.3      6.4     65.9     12.5
                          ------------------------------------------------------

Foreground Wait Class                         DB/Inst: ORCL/ORCL  Snaps: 23-24
-> s  - second, ms - millisecond -    1000th of a second
-> ordered by wait time desc, waits desc
-> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0
-> Captured Time accounts for        102.6%  of Total DB time       5,912.07 (s)
-> Total FG Wait Time:             4,119.97 (s)  DB CPU time:       1,947.29 (s)

                                                                  Avg
                                      %Time       Total Wait     wait
Wait Class                      Waits -outs         Time (s)     (ms)  %DB time
-------------------- ---------------- ----- ---------------- -------- ---------
Commit                      4,048,143     0            3,689        1      62.4
DB CPU                                                 1,947               32.9
Concurrency                 2,408,319     0              347        0       5.9
Application                   412,330     0               72        0       1.2
Configuration                  11,676     0                8        1       0.1
Network                     6,685,887     0                3        0       0.0
System I/O                        989     0                1        1       0.0
User I/O                          307     0                0        1       0.0
Other                             215     0                0        1       0.0
                          ------------------------------------------------------

Foreground Wait Events                       DB/Inst: ORCL/ORCL  Snaps: 23-24
-> s  - second, ms - millisecond -    1000th of a second
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by wait time desc, waits desc (idle events last)
-> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0

                                                             Avg
                                        %Time Total Wait    wait    Waits   % DB
Event                             Waits -outs   Time (s)    (ms)     /txn   time
-------------------------- ------------ ----- ---------- ------- -------- ------
log file sync                 4,048,143     0      3,689       1      1.0   62.4
buffer busy waits             2,395,899     0        341       0      0.6    5.8
enq: TX - row lock content      412,330     0         72       0      0.1    1.2
latch: cache buffers chain        9,514     0          4       0      0.0     .1
log file switch (checkpoin           26     0          4     135      0.0     .1
SQL*Net message to client     6,685,887     0          3       0      1.7     .0
enq: SQ - contention             11,266     0          2       0      0.0     .0
log file switch completion          335     0          2       7      0.0     .0
latch: In memory undo latc        2,863     0          2       1      0.0     .0
control file sequential re          989     0          1       1      0.0     .0
log file switch (private s           49     0          0       8      0.0     .0
db file sequential read             300     0          0       1      0.0     .0
cursor: pin S                        19     0          0       5      0.0     .0
library cache: mutex X               24     0          0       4      0.0     .0
latch: enqueue hash chains           66     0          0       1      0.0     .0
latch: undo global data              24     0          0       1      0.0     .0
latch free                           45     0          0       1      0.0     .0
latch: redo allocation               61     0          0       0      0.0     .0
latch: session allocation             1     0          0       9      0.0     .0
latch: cache buffers lru c            5     0          0       2      0.0     .0
latch: cache buffer handle            8     0          0       1      0.0     .0
Disk file operations I/O              7     0          0       0      0.0     .0
latch: checkpoint queue la            3     0          0       0      0.0     .0
SQL*Net message from clien    6,685,885     0     29,743       4      1.7
                          ------------------------------------------------------

Background Wait Events                       DB/Inst: ORCL/ORCL  Snaps: 23-24
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0




                                                             Avg
                                        %Time Total Wait    wait    Waits   % bg
Event                             Waits -outs   Time (s)    (ms)     /txn   time
-------------------------- ------------ ----- ---------- ------- -------- ------
log file parallel write       2,597,621     0      1,569       1      0.6   90.3
log file sequential read         15,328     0         25       2      0.0    1.5
LGWR wait for redo copy         181,500     0         16       0      0.0     .9
db file parallel write           13,637     0          8       1      0.0     .5
control file sequential re       13,797     0          6       0      0.0     .4
control file parallel writ        4,854     0          3       1      0.0     .2
Disk file operations I/O          1,288     0          1       1      0.0     .0
db file async I/O submit          2,497     0          1       0      0.0     .0
reliable message                    721     0          1       1      0.0     .0
Log archive I/O                     155     0          0       2      0.0     .0
os thread startup                    16     0          0      12      0.0     .0
log file single write               310     0          0       0      0.0     .0
direct path write                   167     0          0       1      0.0     .0
latch: redo allocation              102     0          0       1      0.0     .0
direct path read                    347     0          0       0      0.0     .0
ADR block file read                  15     0          0       0      0.0     .0
log file sync                         2     0          0       2      0.0     .0
db file sequential read              11     0          0       0      0.0     .0
ADR block file write                  5     0          0       0      0.0     .0
asynch descriptor resize             83   100          0       0      0.0     .0
db file scattered read                4     0          0       0      0.0     .0
rdbms ipc message             1,738,809     1     51,666      30      0.4
DIAG idle wait                    7,127   100      7,127    1000      0.0
Space Manager: slave idle           976    99      4,857    4976      0.0
Streams AQ: qmn slave idle          128     0      3,585   28008      0.0
Streams AQ: qmn coordinato          256    50      3,585   14004      0.0
pmon timer                        1,207    97      3,566    2954      0.0
smon timer                           61    13      3,398   55704      0.0
SQL*Net message from clien          274     0          0       1      0.0
class slave wait                     16     0          0       0      0.0
                          ------------------------------------------------------

Wait Event Histogram                         DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
-> % of Waits: value of .0 indicates value was <.05%; value of null is truly 0
-> % of Waits: column heading of <=1s is truly <1024ms, >1s is truly >=1024ms
-> Ordered by Event (idle events last)

                                                    % of Waits
                                 -----------------------------------------------
                           Total
Event                      Waits  <1ms  <2ms  <4ms  <8ms <16ms <32ms  <=1s   >1s
-------------------------- ----- ----- ----- ----- ----- ----- ----- ----- -----
ADR block file read           15  93.3   6.7
ADR block file write           5 100.0
ADR file lock                  6 100.0
ARCH wait for archivelog l   156 100.0
Disk file operations I/O    1295  85.8   1.8  12.2    .2          .1
LGWR wait for redo copy    181.5  99.5    .2    .1    .1    .0    .0    .0
Log archive I/O              155  13.5  74.2  11.0    .6                .6
SQL*Net message to client  6629. 100.0    .0    .0    .0    .0
asynch descriptor resize      83  98.8   1.2
buffer busy waits          2395.  99.3    .6    .1    .0    .0    .0    .0    .0
control file parallel writ  4855  94.8   3.9    .7    .5    .1    .0    .0
control file sequential re 14.8K  94.8   4.0    .7    .3    .1    .0    .0
cursor: pin S                 19  42.1  15.8              42.1
db file async I/O submit    2497  99.0    .7    .2    .0
db file parallel write     13.6K  89.6   9.4    .6    .2    .1    .0    .2
db file scattered read         4 100.0
db file sequential read      311  86.8   7.7   1.6   2.9   1.0
direct path read             347  98.8   1.2
direct path write            167  91.0   9.0
enq: SQ - contention       11.3K  99.1    .6    .1    .1    .1
enq: TX - row lock content 412.3  99.8    .1    .0    .0    .0    .0    .0
latch free                    47  85.1   8.5   4.3               2.1
latch: In memory undo latc  2863  88.9   4.0   3.5   2.4    .8    .3    .0
latch: cache buffer handle     8  87.5        12.5
latch: cache buffers chain  9514  95.5   1.4   1.3   1.0    .6    .3    .0
latch: cache buffers lru c     5  60.0  20.0        20.0
latch: checkpoint queue la     4 100.0
latch: enqueue hash chains    66  86.4   4.5   4.5   4.5
latch: messages                5 100.0
latch: object queue header     1 100.0
latch: redo allocation       164  87.2   6.7   3.7   2.4
latch: session allocation      1                         100.0
latch: undo global data       24  66.7  12.5   8.3  12.5
library cache: mutex X        24  66.7         4.2        25.0   4.2
log file parallel write    2597.  96.9   2.1    .7    .3    .1    .0    .0
log file sequential read   15.3K  55.2  34.4   9.5    .4    .1    .0    .5
log file single write        310  99.7    .3
log file switch (checkpoin    26         3.8   7.7  42.3  34.6              11.5
log file switch (private s    49         6.1        51.0  42.9
log file switch completion   335   5.4   6.0   5.7  57.3  25.1    .6
log file sync              4047.  82.2  15.6   1.2    .7    .3    .1    .0
os thread startup             16                    50.0  37.5        12.5
reliable message             721  84.6  15.0    .4
DIAG idle wait              7127                                     100.0
SQL*Net message from clien 6682.  83.6   5.5    .5   2.4   7.4    .5    .0    .0
Space Manager: slave idle    957                            .1          .2  99.7
Streams AQ: qmn coordinato   256  49.2          .4          .4              50.0
Streams AQ: qmn slave idle   128                                           100.0
class slave wait              16  93.8               6.3
pmon timer                  1208          .2    .2                .1   1.2  98.3
rdbms ipc message          1738.  80.8   9.6    .8   2.9   4.8    .2    .4    .5
smon timer                    62   1.6                                19.4  79.0
                          ------------------------------------------------------


Wait Event Histogram Detail (64 msec to 2 sec) DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
-> Units for % of Total Waits:
   ms is milliseconds
   s is 1024 milliseconds (approximately 1 second)
-> % of Total Waits: total waits for all wait classes, including Idle
-> % of Total Waits: value of .0 indicates value was <.05%;
   value of null is truly 0
-> Ordered by Event (only non-idle events are displayed)

                                                 % of Total Waits
                                 -----------------------------------------------
                           Waits
                           64ms
Event                      to 2s <32ms <64ms <1/8s <1/4s <1/2s   <1s   <2s  >=2s
-------------------------- ----- ----- ----- ----- ----- ----- ----- ----- -----
LGWR wait for redo copy        2 100.0    .0
Log archive I/O                1  99.4                .6
buffer busy waits             13 100.0    .0                            .0
control file parallel writ     1 100.0    .0
control file sequential re     4 100.0    .0    .0
db file parallel write        22  99.8    .0    .1    .1
enq: TX - row lock content     2 100.0    .0
latch: In memory undo latc     1 100.0    .0
latch: cache buffers chain     1 100.0    .0
log file parallel write       81 100.0    .0    .0    .0    .0    .0
log file sequential read      69  99.5    .0    .2    .2
log file switch (checkpoin     3  88.5                                11.5
log file sync                593 100.0    .0    .0    .0    .0    .0
os thread startup              2  87.5  12.5
                          ------------------------------------------------------

Wait Event Histogram Detail (4 sec to 2 min) DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Wait Event Histogram Detail (4 min to 1 hr)  DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Service Statistics                           DB/Inst: ORCL/ORCL  Snaps: 23-24
-> ordered by DB Time

                                                           Physical      Logical
Service Name                  DB Time (s)   DB CPU (s)    Reads (K)    Reads (K)
---------------------------- ------------ ------------ ------------ ------------
SYS$USERS                           5,912        1,947            0      204,286
ORCL_A                                  0            0            0            0
SYS$BACKGROUND                          0            0            1           21
                          ------------------------------------------------------


Service Wait Class Stats                      DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Wait Class info for services in the Service Statistics section.
-> Total Waits and Time Waited displayed for the following wait
   classes:  User I/O, Concurrency, Administrative, Network
-> Time Waited (Wt Time) in seconds

Service Name
----------------------------------------------------------------
 User I/O  User I/O  Concurcy  Concurcy     Admin     Admin   Network   Network
Total Wts   Wt Time Total Wts   Wt Time Total Wts   Wt Time Total Wts   Wt Time
--------- --------- --------- --------- --------- --------- --------- ---------
SYS$USERS
      307         0   2408319       347         0         0   6685891         3
SYS$BACKGROUND
     1817         1        20         0         0         0         0         0
                          ------------------------------------------------------

SQL ordered by Elapsed Time                  DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
   into the Total Database Time multiplied by 100
-> %Total - Elapsed Time  as a percentage of Total DB time
-> %CPU   - CPU Time      as a percentage of Elapsed Time
-> %IO    - User I/O Time as a percentage of Elapsed Time
-> Captured SQL account for   31.4% of Total DB Time (s):           5,912
-> Captured PL/SQL account for    0.0% of Total DB Time (s):           5,912

        Elapsed                  Elapsed Time
        Time (s)    Executions  per Exec (s)  %Total   %CPU    %IO    SQL Id
---------------- -------------- ------------- ------ ------ ------ -------------
           833.4      1,668,249          0.00   14.1   37.7     .0 21yp54r1kwdcw
Module: JDBC Thin Client
INSERT ALL INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','p@g
mail.com') INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','p@
gmail.com') INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','p
@gmail.com') INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','

           454.9      1,667,916          0.00    7.7   35.1     .0 djyntpq5hxwpk
Module: JDBC Thin Client
delete from authors where id < ( select * from (select max(id) - 30 from author
s) a ) and id > ( select * from (select max(id) - 500 from authors) b )

           294.0      1,668,262          0.00    5.0   28.2     .0 128ccsst17vwb
Module: JDBC Thin Client
update authors set email = 'toto' where id > ( select max(id) - 1 from authors)

           260.7      1,667,137          0.00    4.4   45.1     .0 2fpz2m7duxb64
Module: JDBC Thin Client
select count(*) from authors where id < ( select max(id) - 30 from authors) and
 id > ( select max(id) - 2500 from authors) union select count(*) from authors
where id < ( select max(id) - 30 from authors) and id > ( select max(id) - 1500
 from authors) union select count(*) from authors where id < ( select max(id) -

             5.1         83,419          0.00     .1   94.1     .0 4m7m0t6fjcs5x
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=
:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1

             4.1          1,843          0.00     .1   80.2     .0 3p9jxd3w1hdx9
Module: rdsoracleperfmon@ip-10-13-0-252 (TNS V1-V3)
select s.sid||':'||s.serial# session_id, nvl(s.username,decode(s.type,'BACKGRO
UND','SYS')) username, s.machine, q.force_matching_signature, s.sql_id,
s.sql_hash_value, substr(q.sql_text, 1, 1000) sql_text, nvl (c.command_name,
 decode(s.wait_class,'Commit',s.wait_class, decode(s.type,'BACKGROUND', b.na

             0.8             61          0.01     .0   25.0     .0 ca6tq9wk5wakf
Module: JDBC Thin Client
select * from (select name, to_char(next_time, 'YYYY/MM/DD HH24:MI:SS') as resto
rable_time, recid from sys.v_$archived_log al JOIN sys.v_$database_incarnation d
i ON di.RESETLOGS_ID = al.RESETLOGS_ID and di.STATUS = 'CURRENT' where al.name i
s NOT NULL and al.standby_dest = 'NO' AND al.archived = 'YES' AND al.thread# = 1

             0.6              1          0.64     .0   88.2     .0 bunssq950snhf
insert into wrh$_sga_target_advice (snap_id, dbid, instance_number, SGA_SIZ
E, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_READS) select :snap_id, :dbi
d, :instance_number, SGA_SIZE, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_R
EADS from v$sga_target_advice

             0.6             60          0.01     .0    2.0   38.2 3h0a0h5srz9t9
Module: JDBC Thin Client
select count(*) from sys.v_$datafile where status in ('RECOVER','ONLINE') and EN
ABLED != 'READ ONLY' and checkpoint_time < sysdate-(120/1440)

             0.4             30          0.01     .0   93.9     .0 75p1jt4wbk27n
Module: rdsoracleperfmon@ip-10-13-0-252 (TNS V1-V3)
select decode(class,1,'User',2,'Redo',4,'Enqueue',8,'Cache',16,'OS',64,'SQL','Ot
her') class, name, value from v$sysstat where class not in (32,128) and name not
 like 'session%' and name not like 'java session%' and not regexp_like (name,'(O
LAP|^IM|spare|cell|^flash|^gc|^HSC|^EHCC|^(W|w)orkload|^(C|c)luster|RAC)') order

                          ------------------------------------------------------

SQL ordered by CPU Time                      DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> %Total - CPU Time      as a percentage of Total DB CPU
-> %CPU   - CPU Time      as a percentage of Elapsed Time
-> %IO    - User I/O Time as a percentage of Elapsed Time
-> Captured SQL account for   35.1% of Total CPU Time (s):           1,947
-> Captured PL/SQL account for    0.0% of Total CPU Time (s):           1,947

    CPU                   CPU per           Elapsed
  Time (s)  Executions    Exec (s) %Total   Time (s)   %CPU    %IO    SQL Id
---------- ------------ ---------- ------ ---------- ------ ------ -------------
     314.1    1,668,249       0.00   16.1      833.4   37.7     .0 21yp54r1kwdcw
Module: JDBC Thin Client
INSERT ALL INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','p@g
mail.com') INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','p@
gmail.com') INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','p
@gmail.com') INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','

     159.8    1,667,916       0.00    8.2      454.9   35.1     .0 djyntpq5hxwpk
Module: JDBC Thin Client
delete from authors where id < ( select * from (select max(id) - 30 from author
s) a ) and id > ( select * from (select max(id) - 500 from authors) b )

     117.5    1,667,137       0.00    6.0      260.7   45.1     .0 2fpz2m7duxb64
Module: JDBC Thin Client
select count(*) from authors where id < ( select max(id) - 30 from authors) and
 id > ( select max(id) - 2500 from authors) union select count(*) from authors
where id < ( select max(id) - 30 from authors) and id > ( select max(id) - 1500
 from authors) union select count(*) from authors where id < ( select max(id) -

      83.0    1,668,262       0.00    4.3      294.0   28.2     .0 128ccsst17vwb
Module: JDBC Thin Client
update authors set email = 'toto' where id > ( select max(id) - 1 from authors)

       4.8       83,419       0.00    0.2        5.1   94.1     .0 4m7m0t6fjcs5x
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=
:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1

       3.3        1,843       0.00    0.2        4.1   80.2     .0 3p9jxd3w1hdx9
Module: rdsoracleperfmon@ip-10-13-0-252 (TNS V1-V3)
select s.sid||':'||s.serial# session_id, nvl(s.username,decode(s.type,'BACKGRO
UND','SYS')) username, s.machine, q.force_matching_signature, s.sql_id,
s.sql_hash_value, substr(q.sql_text, 1, 1000) sql_text, nvl (c.command_name,
 decode(s.wait_class,'Commit',s.wait_class, decode(s.type,'BACKGROUND', b.na

       0.6            1       0.57    0.0        0.6   88.2     .0 bunssq950snhf
insert into wrh$_sga_target_advice (snap_id, dbid, instance_number, SGA_SIZ
E, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_READS) select :snap_id, :dbi
d, :instance_number, SGA_SIZE, SGA_SIZE_FACTOR, ESTD_DB_TIME, ESTD_PHYSICAL_R
EADS from v$sga_target_advice

       0.3           30       0.01    0.0        0.4   93.9     .0 75p1jt4wbk27n
Module: rdsoracleperfmon@ip-10-13-0-252 (TNS V1-V3)
select decode(class,1,'User',2,'Redo',4,'Enqueue',8,'Cache',16,'OS',64,'SQL','Ot
her') class, name, value from v$sysstat where class not in (32,128) and name not
 like 'session%' and name not like 'java session%' and not regexp_like (name,'(O
LAP|^IM|spare|cell|^flash|^gc|^HSC|^EHCC|^(W|w)orkload|^(C|c)luster|RAC)') order

       0.2           61       0.00    0.0        0.8   25.0     .0 ca6tq9wk5wakf
Module: JDBC Thin Client
select * from (select name, to_char(next_time, 'YYYY/MM/DD HH24:MI:SS') as resto
rable_time, recid from sys.v_$archived_log al JOIN sys.v_$database_incarnation d
i ON di.RESETLOGS_ID = al.RESETLOGS_ID and di.STATUS = 'CURRENT' where al.name i
s NOT NULL and al.standby_dest = 'NO' AND al.archived = 'YES' AND al.thread# = 1

       0.2        1,242       0.00    0.0        0.3   63.9     .0 cm5vu20fhtnq1
select /*+ connect_by_filtering */ privilege#,level from sysauth$ connect by gra
ntee#=prior privilege# and privilege#>0 start with grantee#=:1 and privilege#>0

                          ------------------------------------------------------

SQL ordered by User I/O Wait Time            DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> %Total - User I/O Time as a percentage of Total User I/O Wait time
-> %CPU   - CPU Time      as a percentage of Elapsed Time
-> %IO    - User I/O Time as a percentage of Elapsed Time
-> Captured SQL account for   19.2% of Total User I/O Wait Time (s):
-> Captured PL/SQL account for    0.2% of Total User I/O Wait Time (s):
  User I/O                UIO per           Elapsed
  Time (s)  Executions    Exec (s) %Total   Time (s)   %CPU    %IO    SQL Id
---------- ------------ ---------- ------ ---------- ------ ------ -------------
       0.2           60       0.00   18.7        0.6    2.0   38.2 3h0a0h5srz9t9
Module: JDBC Thin Client
select count(*) from sys.v_$datafile where status in ('RECOVER','ONLINE') and EN
ABLED != 'READ ONLY' and checkpoint_time < sysdate-(120/1440)

       0.0            1       0.00    0.2        0.1   96.1    3.3 6ajkhukk78nsr
begin prvt_hdm.auto_execute( :dbid, :inst_num , :end_snap_id ); end;

       0.0            1       0.00    0.2        0.0   39.7   56.9 47mm81hm9sggy
 SELECT sum(case when a.session_type = 1 and a.wait_time = 0 then 1
else 0 end) as fgw, sum(case when a.session_type = 1 and a.wait_time <>
0 then 1 else 0 end) as fgc, sum(case when a.session_type <>
 1 and a.wait_time = 0 then 1 else 0 end) as bgw, sum(case w

       0.0            1       0.00    0.2        0.0   77.5   11.6 85px9dq62dc0q
INSERT /*+ APPEND LEADING(@"SEL$F5BB74E1" "H"@"SEL$2" "A"@"SEL$1") USE_NL(@"SE
L$F5BB74E1" "A"@"SEL$1") */ INTO WRH$_ACTIVE_SESSION_HISTORY ( snap_id,
dbid, instance_number, sample_id, sample_time , session_id, session_serial#, ses
sion_type , flags , user_id , sql_id, sql_child_number, sql_opcode, force_matchi

       0.0    1,668,262       0.00    0.1      294.0   28.2     .0 128ccsst17vwb
Module: JDBC Thin Client
update authors set email = 'toto' where id > ( select max(id) - 1 from authors)

       0.0           62       0.00    0.0        0.0   66.8     .0 0k8522rmdzg4k
select privilege# from sysauth$ where (grantee#=:1 or grantee#=1) and privilege#
>0

       0.0          240       0.00    0.0        0.1   80.1     .0 0kqxgptj0p6rt
Module: JDBC Thin Client
SELECT count(1) FROM dba_users WHERE username = 'RDSADMIN'

       0.0            1       0.00    0.0        0.0  100.2     .0 0pt4jfmq9f1q0
SELECT x.statistic# as stat_id, x.keh_id as keh_id, nvl(awr_time.value_diff, 0
) as value_diff FROM X$KEHTIMMAP x ,(SELECT startsn.stat_id as stat_id, sum(G
REATEST( 0, (endsn.value - startsn.value) )) as value_diff FROM WRH$_SYS_TIME_
MODEL startsn , WRH$_SYS_TIME_MODEL endsn WHERE endsn.dbid = :dbid AND ends

       0.0           67       0.00    0.0        0.0   87.1     .0 0ws7ahf1d78qa
select SYS_CONTEXT('USERENV', 'SERVER_HOST'), SYS_CONTEXT('USERENV', 'DB_UNIQUE_
NAME'), SYS_CONTEXT('USERENV', 'INSTANCE_NAME'), SYS_CONTEXT('USERENV', 'SERVICE
_NAME'), INSTANCE_NUMBER, STARTUP_TIME, SYS_CONTEXT('USERENV', 'DB_DOMAIN') from
 v$instance where INSTANCE_NAME=SYS_CONTEXT('USERENV', 'INSTANCE_NAME')

       0.0            1       0.00    0.0        0.0  100.0     .0 155cwuv2pfp1d
 SELECT distinct x.id, e.instance_number FROM WRM$_SNAP_ERROR e , X$KEHSQT
x WHERE e.table_name = x.name AND e.dbid = :dbid AND e.instance_number =
:inst AND e.snap_id IN (:bid, :eid) AND x.ver_type = :edge

                          ------------------------------------------------------
SQL ordered by Gets                          DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> %Total - Buffer Gets   as a percentage of Total Buffer Gets
-> %CPU   - CPU Time      as a percentage of Elapsed Time
-> %IO    - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets:     204,307,245
-> Captured SQL account for   98.1% of Total

     Buffer                 Gets              Elapsed
      Gets   Executions   per Exec   %Total   Time (s)  %CPU   %IO    SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
1.03668E+08   1,668,249         62.1   50.7      833.4  37.7     0 21yp54r1kwdcw
Module: JDBC Thin Client
INSERT ALL INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','p@g
mail.com') INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','p@
gmail.com') INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','p
@gmail.com') INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','

 51,657,287   1,667,137         31.0   25.3      260.7  45.1     0 2fpz2m7duxb64
Module: JDBC Thin Client
select count(*) from authors where id < ( select max(id) - 30 from authors) and
 id > ( select max(id) - 2500 from authors) union select count(*) from authors
where id < ( select max(id) - 30 from authors) and id > ( select max(id) - 1500
 from authors) union select count(*) from authors where id < ( select max(id) -

 28,710,182   1,667,916         17.2   14.1      454.9  35.1     0 djyntpq5hxwpk
Module: JDBC Thin Client
delete from authors where id < ( select * from (select max(id) - 30 from author
s) a ) and id > ( select * from (select max(id) - 500 from authors) b )

 16,135,337   1,668,262          9.7    7.9      294.0  28.2     0 128ccsst17vwb
Module: JDBC Thin Client
update authors set email = 'toto' where id > ( select max(id) - 1 from authors)

    254,263      83,419          3.0    0.1        5.1  94.1     0 4m7m0t6fjcs5x
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=
:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1

     10,226           1     10,226.0    0.0        0.1  96.1   3.3 6ajkhukk78nsr
begin prvt_hdm.auto_execute( :dbid, :inst_num , :end_snap_id ); end;

      7,444       1,242          6.0    0.0        0.3  63.9     0 cm5vu20fhtnq1
select /*+ connect_by_filtering */ privilege#,level from sysauth$ connect by gra
ntee#=prior privilege# and privilege#>0 start with grantee#=:1 and privilege#>0

      6,986           1      6,986.0    0.0        0.0 102.4     0 cw860p03hy5ff
 SELECT count(*) as cnt , a.SQL_ID, a.CURRENT_OBJ#, sum
(case when a.event_id in (:e0, :e1, :e2, :e3, :e4, :e5, :e6, :e7) then 1 else 0
end) as full_scan FROM WRH$_ACTIVE_SESSION_HISTORY a , WRH$_EVENT_NAME en
 WHERE a.dbid = :dbid AND a.instance_number = :inst AND a.snap_id > :bid AND

      3,840         240         16.0    0.0        0.1  80.1     0 0kqxgptj0p6rt
Module: JDBC Thin Client
SELECT count(1) FROM dba_users WHERE username = 'RDSADMIN'

      1,488          62         24.0    0.0        0.0  66.8     0 0k8522rmdzg4k
select privilege# from sysauth$ where (grantee#=:1 or grantee#=1) and privilege#
>0

                          ------------------------------------------------------

SQL ordered by Reads                         DB/Inst: ORCL/ORCL  Snaps: 23-24
-> %Total - Physical Reads as a percentage of Total Disk Reads
-> %CPU   - CPU Time      as a percentage of Elapsed Time
-> %IO    - User I/O Time as a percentage of Elapsed Time
-> Total Disk Reads:             810
-> Captured SQL account for    3.5% of Total

   Physical              Reads              Elapsed
      Reads  Executions per Exec   %Total   Time (s)   %CPU    %IO    SQL Id
----------- ----------- ---------- ------ ---------- ------ ------ -------------
         28           1       28.0    3.5        0.0   39.7   56.9 47mm81hm9sggy
 SELECT sum(case when a.session_type = 1 and a.wait_time = 0 then 1
else 0 end) as fgw, sum(case when a.session_type = 1 and a.wait_time <>
0 then 1 else 0 end) as fgc, sum(case when a.session_type <>
 1 and a.wait_time = 0 then 1 else 0 end) as bgw, sum(case w

         28           1       28.0    3.5        0.1   96.1    3.3 6ajkhukk78nsr
begin prvt_hdm.auto_execute( :dbid, :inst_num , :end_snap_id ); end;

          0          62        0.0    0.0        0.0   66.8     .0 0k8522rmdzg4k
select privilege# from sysauth$ where (grantee#=:1 or grantee#=1) and privilege#
>0

          0         240        0.0    0.0        0.1   80.1     .0 0kqxgptj0p6rt
Module: JDBC Thin Client
SELECT count(1) FROM dba_users WHERE username = 'RDSADMIN'

          0           1        0.0    0.0        0.0  100.2     .0 0pt4jfmq9f1q0
SELECT x.statistic# as stat_id, x.keh_id as keh_id, nvl(awr_time.value_diff, 0
) as value_diff FROM X$KEHTIMMAP x ,(SELECT startsn.stat_id as stat_id, sum(G
REATEST( 0, (endsn.value - startsn.value) )) as value_diff FROM WRH$_SYS_TIME_
MODEL startsn , WRH$_SYS_TIME_MODEL endsn WHERE endsn.dbid = :dbid AND ends

          0          67        0.0    0.0        0.0   87.1     .0 0ws7ahf1d78qa
select SYS_CONTEXT('USERENV', 'SERVER_HOST'), SYS_CONTEXT('USERENV', 'DB_UNIQUE_
NAME'), SYS_CONTEXT('USERENV', 'INSTANCE_NAME'), SYS_CONTEXT('USERENV', 'SERVICE
_NAME'), INSTANCE_NUMBER, STARTUP_TIME, SYS_CONTEXT('USERENV', 'DB_DOMAIN') from
 v$instance where INSTANCE_NAME=SYS_CONTEXT('USERENV', 'INSTANCE_NAME')

          0   1,668,262        0.0    0.0      294.0   28.2     .0 128ccsst17vwb
Module: JDBC Thin Client
update authors set email = 'toto' where id > ( select max(id) - 1 from authors)

          0           1        0.0    0.0        0.0  100.0     .0 155cwuv2pfp1d
 SELECT distinct x.id, e.instance_number FROM WRM$_SNAP_ERROR e , X$KEHSQT
x WHERE e.table_name = x.name AND e.dbid = :dbid AND e.instance_number =
:inst AND e.snap_id IN (:bid, :eid) AND x.ver_type = :edge

          0           1        0.0    0.0        0.0     .0     .0 181cvj277dvuq
Module: JDBC Thin Client
select count(*) from sys.dba_triggers where owner = 'RDSADMIN' and trigger_name
= 'RDS_GRANT_TRIGGER' and status = 'ENABLED'

          0           1        0.0    0.0        0.0  100.6     .0 18c2yb5aj919t
 SELECT nvl(e1,0) as e1, nvl(e2,0) as e2, nvl(e3,0) as e3,
nvl(e4,0) as e4, nvl(e5,0) as e5, nvl(e6,0) as e6 FROM (SELECT e.
event_id as event_id, e.event_name as event_name FROM WRH$_EVENT_NAME e
 WHERE e.dbid = :dbid AND e.event_name in ('log

                          ------------------------------------------------------


SQL ordered by Physical Reads (UnOptimized)  DB/Inst: ORCL/ORCL  Snaps: 23-24
-> UnOptimized Read Reqs = Physical Read Reqts - Optimized Read Reqs
-> %Opt   - Optimized Reads as percentage of SQL Read Requests
-> %Total - UnOptimized Read Reqs as a percentage of Total UnOptimized Read Reqs
-> Total Physical Read Requests:             790
-> Captured SQL account for  162.2% of Total
-> Total UnOptimized Read Requests:             790
-> Captured SQL account for  162.2% of Total
-> Total Optimized Read Requests:               1
-> Captured SQL account for    0.0% of Total

UnOptimized   Physical              UnOptimized
  Read Reqs   Read Reqs Executions Reqs per Exe   %Opt %Total    SQL Id
----------- ----------- ---------- ------------ ------ ------ -------------
        741         741         61         12.1    0.0   93.8 ca6tq9wk5wakf
Module: JDBC Thin Client
select * from (select name, to_char(next_time, 'YYYY/MM/DD HH24:MI:SS') as resto
rable_time, recid from sys.v_$archived_log al JOIN sys.v_$database_incarnation d
i ON di.RESETLOGS_ID = al.RESETLOGS_ID and di.STATUS = 'CURRENT' where al.name i
s NOT NULL and al.standby_dest = 'NO' AND al.archived = 'YES' AND al.thread# = 1

        540         540         60          9.0    0.0   68.4 3h0a0h5srz9t9
Module: JDBC Thin Client
select count(*) from sys.v_$datafile where status in ('RECOVER','ONLINE') and EN
ABLED != 'READ ONLY' and checkpoint_time < sysdate-(120/1440)

          8           8          1          8.0    0.0    1.0 6ajkhukk78nsr
begin prvt_hdm.auto_execute( :dbid, :inst_num , :end_snap_id ); end;

          0           0         62          0.0    N/A    0.0 0k8522rmdzg4k
select privilege# from sysauth$ where (grantee#=:1 or grantee#=1) and privilege#
>0

          0           0        240          0.0    N/A    0.0 0kqxgptj0p6rt
Module: JDBC Thin Client
SELECT count(1) FROM dba_users WHERE username = 'RDSADMIN'

          0           0          1          0.0    N/A    0.0 0pt4jfmq9f1q0
SELECT x.statistic# as stat_id, x.keh_id as keh_id, nvl(awr_time.value_diff, 0
) as value_diff FROM X$KEHTIMMAP x ,(SELECT startsn.stat_id as stat_id, sum(G
REATEST( 0, (endsn.value - startsn.value) )) as value_diff FROM WRH$_SYS_TIME_
MODEL startsn , WRH$_SYS_TIME_MODEL endsn WHERE endsn.dbid = :dbid AND ends

          0           0         67          0.0    N/A    0.0 0ws7ahf1d78qa
select SYS_CONTEXT('USERENV', 'SERVER_HOST'), SYS_CONTEXT('USERENV', 'DB_UNIQUE_
NAME'), SYS_CONTEXT('USERENV', 'INSTANCE_NAME'), SYS_CONTEXT('USERENV', 'SERVICE
_NAME'), INSTANCE_NUMBER, STARTUP_TIME, SYS_CONTEXT('USERENV', 'DB_DOMAIN') from
 v$instance where INSTANCE_NAME=SYS_CONTEXT('USERENV', 'INSTANCE_NAME')

          0           0  1,668,262          0.0    N/A    0.0 128ccsst17vwb
Module: JDBC Thin Client
update authors set email = 'toto' where id > ( select max(id) - 1 from authors)

          0           0          1          0.0    N/A    0.0 155cwuv2pfp1d
 SELECT distinct x.id, e.instance_number FROM WRM$_SNAP_ERROR e , X$KEHSQT
x WHERE e.table_name = x.name AND e.dbid = :dbid AND e.instance_number =
:inst AND e.snap_id IN (:bid, :eid) AND x.ver_type = :edge

          0           0          1          0.0    N/A    0.0 181cvj277dvuq
Module: JDBC Thin Client
select count(*) from sys.dba_triggers where owner = 'RDSADMIN' and trigger_name
= 'RDS_GRANT_TRIGGER' and status = 'ENABLED'

                          ------------------------------------------------------
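
These "SQL ordered by ..." sections all come from the delta columns in DBA_HIST_SQLSTAT. A rough equivalent of the unoptimized-reads ranking above, just a sketch assuming 11.2+ where the optimized read columns exist (deltas on snap 24 cover the 23-24 interval; add dbid/instance_number predicates on RAC):

select sql_id,
       physical_read_requests_delta                     read_reqs,
       physical_read_requests_delta
         - optimized_physical_reads_delta               unopt_reqs,
       executions_delta                                 execs
from   dba_hist_sqlstat
where  snap_id = 24
and    physical_read_requests_delta > 0
order  by unopt_reqs desc;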

SQL ordered by Executions                    DB/Inst: ORCL/ORCL  Snaps: 23-24
-> %CPU   - CPU Time      as a percentage of Elapsed Time
-> %IO    - User I/O Time as a percentage of Elapsed Time
-> Total Executions:       6,763,255
-> Captured SQL account for  100.0% of Total

                                              Elapsed
 Executions   Rows Processed  Rows per Exec   Time (s)  %CPU   %IO    SQL Id
------------ --------------- -------------- ---------- ----- ----- -------------
   1,668,262      16,682,570           10.0      294.0  28.2     0 128ccsst17vwb
Module: JDBC Thin Client
update authors set email = 'toto' where id > ( select max(id) - 1 from authors)

   1,668,249      16,682,340           10.0      833.4  37.7     0 21yp54r1kwdcw
Module: JDBC Thin Client
INSERT ALL INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','p@g
mail.com') INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','p@
gmail.com') INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','p
@gmail.com') INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','

   1,667,916      16,682,560           10.0      454.9  35.1     0 djyntpq5hxwpk
Module: JDBC Thin Client
delete from authors where id < ( select * from (select max(id) - 30 from author
s) a ) and id > ( select * from (select max(id) - 500 from authors) b )

   1,667,137       1,667,976            1.0      260.7  45.1     0 2fpz2m7duxb64
Module: JDBC Thin Client
select count(*) from authors where id < ( select max(id) - 30 from authors) and
 id > ( select max(id) - 2500 from authors) union select count(*) from authors
where id < ( select max(id) - 30 from authors) and id > ( select max(id) - 1500
 from authors) union select count(*) from authors where id < ( select max(id) -

      83,419          83,419            1.0        5.1  94.1     0 4m7m0t6fjcs5x
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=
:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1

       1,843           4,199            2.3        4.1  80.2     0 3p9jxd3w1hdx9
Module: rdsoracleperfmon@ip-10-13-0-252 (TNS V1-V3)
select s.sid||':'||s.serial# session_id, nvl(s.username,decode(s.type,'BACKGRO
UND','SYS')) username, s.machine, q.force_matching_signature, s.sql_id,
s.sql_hash_value, substr(q.sql_text, 1, 1000) sql_text, nvl (c.command_name,
 decode(s.wait_class,'Commit',s.wait_class, decode(s.type,'BACKGROUND', b.na

       1,242           3,782            3.0        0.3  63.9     0 cm5vu20fhtnq1
select /*+ connect_by_filtering */ privilege#,level from sysauth$ connect by gra
ntee#=prior privilege# and privilege#>0 start with grantee#=:1 and privilege#>0

         491             491            1.0        0.0  15.8     0 d3mr8mdgarrrf
Module: JDBC Thin Client
Select 1 from dual

         357             357            1.0        0.0  15.9     0 bunvx480ynf57
Module: rdsoracleperfmon@ip-10-13-0-252 (TNS V1-V3)
SELECT 1 FROM DUAL

         240             240            1.0        0.1  80.1     0 0kqxgptj0p6rt
Module: JDBC Thin Client
SELECT count(1) FROM dba_users WHERE username = 'RDSADMIN'

                          ------------------------------------------------------
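
Same idea for the executions ranking, pulling rows per exec and %CPU from the same view (a sketch, not exactly what the report runs; elapsed and CPU time deltas are in microseconds):

select sql_id,
       executions_delta                                 execs,
       rows_processed_delta                             rows_proc,
       round(rows_processed_delta
             / nullif(executions_delta,0), 1)           rows_per_exec,
       round(elapsed_time_delta/1e6, 1)                 elapsed_s,
       round(100 * cpu_time_delta
             / nullif(elapsed_time_delta,0), 1)         pct_cpu
from   dba_hist_sqlstat
where  snap_id = 24
order  by executions_delta desc;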

SQL ordered by Parse Calls                   DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Total Parse Calls:       6,679,380
-> Captured SQL account for   98.0% of Total

                            % Total
 Parse Calls  Executions     Parses    SQL Id
------------ ------------ --------- -------------
   1,646,465    1,668,249     24.65 21yp54r1kwdcw
Module: JDBC Thin Client
INSERT ALL INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','p@g
mail.com') INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','p@
gmail.com') INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','p
@gmail.com') INTO authors (id,name,email) VALUES ( serial.nextval ,'Priya','

   1,644,580    1,667,916     24.62 djyntpq5hxwpk
Module: JDBC Thin Client
delete from authors where id < ( select * from (select max(id) - 30 from author
s) a ) and id > ( select * from (select max(id) - 500 from authors) b )

   1,631,166    1,667,137     24.42 2fpz2m7duxb64
Module: JDBC Thin Client
select count(*) from authors where id < ( select max(id) - 30 from authors) and
 id > ( select max(id) - 2500 from authors) union select count(*) from authors
where id < ( select max(id) - 30 from authors) and id > ( select max(id) - 1500
 from authors) union select count(*) from authors where id < ( select max(id) -

   1,618,879    1,668,262     24.24 128ccsst17vwb
Module: JDBC Thin Client
update authors set email = 'toto' where id > ( select max(id) - 1 from authors)

       1,843        1,843      0.03 3p9jxd3w1hdx9
Module: rdsoracleperfmon@ip-10-13-0-252 (TNS V1-V3)
select s.sid||':'||s.serial# session_id, nvl(s.username,decode(s.type,'BACKGRO
UND','SYS')) username, s.machine, q.force_matching_signature, s.sql_id,
s.sql_hash_value, substr(q.sql_text, 1, 1000) sql_text, nvl (c.command_name,
 decode(s.wait_class,'Commit',s.wait_class, decode(s.type,'BACKGROUND', b.na

       1,242        1,242      0.02 cm5vu20fhtnq1
select /*+ connect_by_filtering */ privilege#,level from sysauth$ connect by gra
ntee#=prior privilege# and privilege#>0 start with grantee#=:1 and privilege#>0

         444          491      0.01 d3mr8mdgarrrf
Module: JDBC Thin Client
Select 1 from dual

         357          357      0.01 bunvx480ynf57
Module: rdsoracleperfmon@ip-10-13-0-252 (TNS V1-V3)
SELECT 1 FROM DUAL

         240          240      0.00 0kqxgptj0p6rt
Module: JDBC Thin Client
SELECT count(1) FROM dba_users WHERE username = 'RDSADMIN'

          86           86      0.00 g00cj285jmgsw
update sys.mon_mods$ set inserts = inserts + :ins, updates = updates + :upd, del
etes = deletes + :del, flags = (decode(bitand(flags, :flag), :flag, flags, flags
 + :flag)), drop_segments = drop_segments + :dropseg, timestamp = :time where ob
j# = :objn

                          ------------------------------------------------------
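
And the parse ranking. Note that ratio_to_report below gives the share of captured SQL, not the share of the 6.6M total parse calls that the report uses:

select sql_id,
       parse_calls_delta                                parses,
       executions_delta                                 execs,
       round(100 * ratio_to_report(parse_calls_delta)
                   over (), 2)                          pct_captured
from   dba_hist_sqlstat
where  snap_id = 24
and    parse_calls_delta > 0
order  by parse_calls_delta desc;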

SQL ordered by Sharable Memory               DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

SQL ordered by Version Count                 DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Key Instance Activity Stats                  DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Ordered by statistic name

Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
db block changes                        128,440,171       36,004.1          31.7
execute count                             6,763,255        1,895.9           1.7
logons cumulative                                83            0.0           0.0
opened cursors cumulative                 6,762,912        1,895.8           1.7
parse count (total)                       6,679,380        1,872.4           1.7
parse time elapsed                            4,427            1.2           0.0
physical reads                                  810            0.2           0.0
physical writes                             855,813          239.9           0.2
redo size                            16,990,119,384    4,762,632.4       4,196.8
session cursor cache hits                    77,548           21.7           0.0
session logical reads                   204,307,245       57,271.0          50.5
user calls                               13,363,557        3,746.0           3.3
user commits                              4,048,312        1,134.8           1.0
workarea executions - optimal             2,384,500          668.4           0.6
                          ------------------------------------------------------
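
These instance activity numbers are deltas of DBA_HIST_SYSSTAT between the two snaps divided by the interval length. A sketch for a few of the key stats:

select e.stat_name,
       e.value - b.value                                     total,
       round( (e.value - b.value) /
              ( (cast(sn.end_interval_time as date)
                 - cast(sn.begin_interval_time as date)) * 86400 ), 1) per_sec
from   dba_hist_sysstat  b,
       dba_hist_sysstat  e,
       dba_hist_snapshot sn
where  b.snap_id = 23
and    e.snap_id = 24
and    sn.snap_id = e.snap_id
and    b.dbid = e.dbid and sn.dbid = e.dbid
and    b.instance_number = e.instance_number
and    sn.instance_number = e.instance_number
and    b.stat_id = e.stat_id
and    e.stat_name in ('db block changes','execute count',
                       'redo size','user commits')
order  by e.stat_name;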



Other Instance Activity Stats                DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Ordered by statistic name

Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
Batched IO (bound) vector count                   0            0.0           0.0
Batched IO (full) vector count                    0            0.0           0.0
Batched IO block miss count                       0            0.0           0.0
Batched IO buffer defrag count                    0            0.0           0.0
Batched IO double miss count                      0            0.0           0.0
Batched IO same unit count                        0            0.0           0.0
Batched IO single block count                     0            0.0           0.0
Batched IO vector block count                     0            0.0           0.0
Batched IO vector read count                      0            0.0           0.0
Block Cleanout Optim referenced                   3            0.0           0.0
CCursor + sql area evicted                        0            0.0           0.0
CPU used by this session                    169,854           47.6           0.0
CPU used when call started                  169,437           47.5           0.0
CR blocks created                           445,415          124.9           0.1
Cached Commit SCN referenced                  7,260            2.0           0.0
Commit SCN cached                            26,256            7.4           0.0
DBWR checkpoint buffers written             855,036          239.7           0.2
DBWR checkpoints                                155            0.0           0.0
DBWR revisited being-written buf                  3            0.0           0.0
DBWR thread checkpoint buffers w            855,036          239.7           0.2
DBWR transaction table writes                   555            0.2           0.0
DBWR undo block writes                      853,316          239.2           0.2
Effective IO time                                 0            0.0           0.0
HSC Heap Segment Block Changes           41,562,557       11,650.7          10.3
Heap Segment Array Inserts               16,684,612        4,677.0           4.1
Heap Segment Array Updates                1,045,076          293.0           0.3
IMU CR rollbacks                                262            0.1           0.0
IMU Flushes                               2,736,609          767.1           0.7
IMU Redo allocation size              7,883,745,880    2,209,954.1       1,947.4
IMU commits                               1,447,676          405.8           0.4
IMU contention                              362,320          101.6           0.1
IMU ktichg flush                                  0            0.0           0.0
IMU pool not allocated                      127,058           35.6           0.0
IMU undo allocation size             32,646,758,648    9,151,466.5       8,064.3
IMU- failed to get a private str            127,058           35.6           0.0
Number of read IOs issued                         0            0.0           0.0
Requests to/from client                   6,685,887        1,874.2           1.7
SMON posted for undo segment shr                 50            0.0           0.0
SQL*Net roundtrips to/from clien          6,685,883        1,874.2           1.7
TBS Extension: bytes extended                     0            0.0           0.0
TBS Extension: files extended                     0            0.0           0.0
TBS Extension: tasks created                      0            0.0           0.0
TBS Extension: tasks executed                     0            0.0           0.0
active txn count during cleanout            954,481          267.6           0.2
auto extends on undo tablespace                   0            0.0           0.0
background checkpoints completed                156            0.0           0.0
background checkpoints started                  155            0.0           0.0
background timeouts                          16,771            4.7           0.0
buffer is not pinned count               59,277,768       16,616.6          14.6
buffer is pinned count                   17,737,723        4,972.2           4.4
bytes received via SQL*Net from       4,388,680,667    1,230,225.2       1,084.1
bytes sent via SQL*Net to client        624,356,803      175,018.3         154.2
calls to get snapshot scn: kcmgs         11,156,856        3,127.5           2.8
calls to kcmgas                           5,845,836        1,638.7           1.4
calls to kcmgcs                          33,990,250        9,528.1           8.4
cell physical IO interconnect by     62,515,707,392   17,524,263.6      15,442.4
change write time                            42,538           11.9           0.0
cleanout - number of ktugct call          1,139,165          319.3           0.3
cleanouts and rollbacks - consis            509,091          142.7           0.1
cleanouts only - consistent read             99,170           27.8           0.0
cluster key scan block gets                     510            0.1           0.0
cluster key scans                               510            0.1           0.0
commit batch/immediate performed                  0            0.0           0.0
commit batch/immediate requested                  0            0.0           0.0
commit cleanout failures: block                   0            0.0           0.0
commit cleanout failures: buffer                  4            0.0           0.0
commit cleanout failures: callba                135            0.0           0.0
commit cleanout failures: cannot            314,740           88.2           0.1
commit cleanouts                          7,588,079        2,127.1           1.9
commit cleanouts successfully co          7,273,200        2,038.8           1.8
commit immediate performed                        0            0.0           0.0
commit immediate requested                        0            0.0           0.0
commit txn count during cleanout            360,974          101.2           0.1
consistent changes                        3,391,626          950.7           0.8
consistent gets                          80,542,873       22,577.6          19.9
consistent gets - examination             4,656,779        1,305.4           1.2
consistent gets direct                            0            0.0           0.0
consistent gets from cache               80,542,873       22,577.6          19.9
consistent gets from cache (fast         17,153,679        4,808.5           4.2
cursor authentications                           21            0.0           0.0
data blocks consistent reads - u          3,256,005          912.7           0.8
db block gets                           123,764,360       34,693.4          30.6
db block gets direct                             28            0.0           0.0
db block gets from cache                123,764,332       34,693.3          30.6
db block gets from cache (fastpa         26,902,471        7,541.2           6.7
deferred (CURRENT) block cleanou          2,600,211          728.9           0.6
enqueue conversions                           3,455            1.0           0.0
enqueue releases                         13,832,614        3,877.5           3.4
enqueue requests                         13,832,621        3,877.5           3.4
enqueue timeouts                                  2            0.0           0.0
enqueue waits                               423,596          118.7           0.1
failed probes on index block rec                  0            0.0           0.0
free buffer inspected                        10,222            2.9           0.0
free buffer requested                     2,484,063          696.3           0.6
global undo segment hints helped                  0            0.0           0.0
global undo segment hints were s                  0            0.0           0.0
heap block compress                         107,507           30.1           0.0
immediate (CR) block cleanout ap            608,261          170.5           0.2
immediate (CURRENT) block cleano             28,028            7.9           0.0
index crx upgrade (positioned)                    0            0.0           0.0
index fast full scans (full)                    995            0.3           0.0
index fetch by key                           84,969           23.8           0.0
index scans kdiixs1                      59,960,451       16,808.0          14.8
leaf node 90-10 splits                           20            0.0           0.0
leaf node splits                                 29            0.0           0.0
lob reads                                         0            0.0           0.0
lob writes                                       48            0.0           0.0
lob writes unaligned                             48            0.0           0.0
logical read bytes from cache     1,673,684,623,360  469,163,538.3     413,427.8
max cf enq hold time                              0            0.0           0.0
messages received                         2,616,086          733.3           0.7
messages sent                             2,616,086          733.3           0.7
min active SCN optimization appl                 41            0.0           0.0
no buffer to keep pinned count                    0            0.0           0.0
no work - consistent read gets              789,829          221.4           0.2
non-idle wait count                      27,096,037        7,595.5           6.7
parse count (describe)                            0            0.0           0.0
parse count (failures)                            0            0.0           0.0
parse count (hard)                               10            0.0           0.0
parse time cpu                                    9            0.0           0.0
physical read IO requests                       790            0.2           0.0
physical read bytes                       6,635,520        1,860.1           1.6
physical read total IO requests              34,302            9.6           0.0
physical read total bytes            18,875,869,184    5,291,241.5       4,662.7
physical read total multi block              17,899            5.0           0.0
physical reads cache                             30            0.0           0.0
physical reads cache prefetch                    20            0.0           0.0
physical reads direct                           780            0.2           0.0
physical reads direct (lob)                       0            0.0           0.0
physical reads direct temporary                   0            0.0           0.0
physical reads prefetch warmup                   20            0.0           0.0
physical write IO requests                  251,370           70.5           0.1
physical write bytes                  7,010,820,096    1,965,257.4       1,731.8
physical write total IO requests          3,167,748          888.0           0.8
physical write total bytes           43,639,838,208   12,233,022.1      10,779.8
physical write total multi block             57,191           16.0           0.0
physical writes direct                          808            0.2           0.0
physical writes direct (lob)                      2            0.0           0.0
physical writes direct temporary                  0            0.0           0.0
physical writes from cache                  855,005          239.7           0.2
physical writes non checkpoint              708,767          198.7           0.2
pinned cursors current                            4            0.0           0.0
process last non-idle time                    4,560            1.3           0.0
recursive calls                           5,127,204        1,437.3           1.3
recursive cpu usage                           4,180            1.2           0.0
redo KB read                             17,814,209        4,993.6           4.4
redo blocks checksummed by FG (e          7,995,007        2,241.1           2.0
redo blocks written                      35,756,793       10,023.3           8.8
redo buffer allocation retries                  410            0.1           0.0
redo entries                             49,700,791       13,932.0          12.3
redo log space requests                         691            0.2           0.0
redo ordering marks                         112,321           31.5           0.0
redo size for direct writes                 230,428           64.6           0.1
redo subscn max counts                      152,956           42.9           0.0
redo synch long waits                         1,708            0.5           0.0
redo synch time                             369,850          103.7           0.1
redo synch time (usec)                3,698,507,708    1,036,757.4         913.6
redo synch time overhead (usec)         198,391,675       55,612.7          49.0
redo synch time overhead count (                 49            0.0           0.0
redo synch time overhead count (          4,042,627        1,133.2           1.0
redo synch time overhead count (              1,634            0.5           0.0
redo synch time overhead count (              3,886            1.1           0.0
redo synch time overhead count (                  0            0.0           0.0
redo synch writes                         4,048,305        1,134.8           1.0
redo wastage                            688,387,636      192,967.3         170.0
redo write info find                      4,048,197        1,134.8           1.0
redo write info find fail                         1            0.0           0.0
redo write time                             157,587           44.2           0.0
redo writes                               2,597,619          728.2           0.6
rollback changes - undo records                   0            0.0           0.0
rollbacks only - consistent read             22,116            6.2           0.0
rows fetched via callback                       678            0.2           0.0
session connect time                              0            0.0           0.0
shared hash latch upgrades - no          56,213,811       15,757.7          13.9
shared hash latch upgrades - wai          2,307,379          646.8           0.6
sorts (memory)                            3,861,606        1,082.5           1.0
sorts (rows)                             33,399,372        9,362.4           8.3
sql area evicted                                  0            0.0           0.0
sql area purged                                   0            0.0           0.0
switch current to new buffer              1,332,200          373.4           0.3
table fetch by rowid                         99,222           27.8           0.0
table fetch continued row                         0            0.0           0.0
table scan blocks gotten                      1,681            0.5           0.0
table scan rows gotten                       76,003           21.3           0.0
table scans (short tables)                    1,110            0.3           0.0
temp space allocated (bytes)                      0            0.0           0.0
total cf enq hold time                        4,600            1.3           0.0
total number of cf enq holders                  797            0.2           0.0
total number of times SMON poste                 54            0.0           0.0
transaction rollbacks                             0            0.0           0.0
undo change vector size               6,750,240,964    1,892,212.5       1,667.4
user logons cumulative                           67            0.0           0.0
user logouts cumulative                          68            0.0           0.0
write clones created in backgrou                  2            0.0           0.0
write clones created in foregrou                454            0.1           0.0
                          ------------------------------------------------------

Instance Activity Stats - Absolute Values    DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Statistics with absolute values (should not be diffed)

Statistic                            Begin Value       End Value
-------------------------------- --------------- ---------------
logons current                                34              33
opened cursors current                        34              34
session cursor cache count                13,647          14,165
session pga memory                   317,235,240     308,502,240
session pga memory max               395,829,960     387,817,856
session uga memory                   131,833,304     136,933,720
session uga memory max             2,274,160,264   2,395,967,584
                          ------------------------------------------------------
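
The absolute value stats are point-in-time gauges, which is why diffing them makes no sense. Current values come straight out of V$SYSSTAT:

select name, value
from   v$sysstat
where  name in ('logons current','opened cursors current');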

Instance Activity Stats - Thread Activity     DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Statistics identified by '(derived)' come from sources other than SYSSTAT

Statistic                                     Total  per Hour
-------------------------------- ------------------ ---------
log switches (derived)                          155    156.42
                          ------------------------------------------------------
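
155 log switches in the hour is one every 23 seconds or so, which lines up with the 155 background checkpoints started above and looks like undersized online logs for ~4.7 MB/s of redo. The switch rate is easy to watch directly, for example against v$log_history:

select trunc(first_time,'HH24')  hour,
       count(*)                  log_switches
from   v$log_history
where  first_time > sysdate - 1
group  by trunc(first_time,'HH24')
order  by 1;
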
IOStat by Function summary                    DB/Inst: ORCL/ORCL  Snaps: 23-24
-> 'Data' columns suffixed with M,G,T,P are in multiples of 1024
    other columns suffixed with K,M,G,T,P are in multiples of 1000
-> ordered by (Data Read + Write) desc

                Reads:   Reqs   Data    Writes:  Reqs   Data    Waits:    Avg
Function Name   Data    per sec per sec Data    per sec per sec Count    Tm(ms)
--------------- ------- ------- ------- ------- ------- ------- ------- -------
Others            17.5G     7.8   5.02M     17G     5.8  4.891M   13.5K     0.1
LGWR                86M     1.6   .024M   17.1G   811.6  4.901M   2605K     0.1
DBWR                 0M     0.0      0M    6.5G    70.2  1.873M       0     N/A
Direct Reads         6M     0.2   .002M      0M     0.0      0M       0     N/A
Direct Writes        0M     0.0      0M      6M     0.2   .002M       0     N/A
Buffer Cache Re      0M     0.0      0M      0M     0.0      0M      10     0.0
TOTAL:            17.6G     9.6  5.046M   40.6G   887.9 11.666M 2618.6K     0.1
                          ------------------------------------------------------
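
This section is cumulative V$IOSTAT_FUNCTION diffed between the snaps. The live totals since instance startup are one query away (column names as in the 11.2+ docs; wait_time is in milliseconds):

select function_name,
       small_read_megabytes  + large_read_megabytes   read_mb,
       small_write_megabytes + large_write_megabytes  write_mb,
       number_of_waits,
       wait_time                                      wait_ms
from   v$iostat_function
order  by 3 desc;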

IOStat by Filetype summary                    DB/Inst: ORCL/ORCL  Snaps: 23-24
-> 'Data' columns suffixed with M,G,T,P are in multiples of 1024
    other columns suffixed with K,M,G,T,P are in multiples of 1000
-> Small Read and Large Read are average service times, in milliseconds
-> Ordered by (Data Read + Write) desc

                Reads:   Reqs   Data    Writes:  Reqs   Data      Small   Large
Filetype Name   Data    per sec per sec Data    per sec per sec    Read    Read
--------------- ------- ------- ------- ------- ------- ------- ------- -------
Log File            17G     5.2  4.877M     17G   811.2  4.894M     0.0     3.5
Archive Log          0M     0.0      0M     17G     4.9  4.877M     N/A     N/A
Data File           10M     0.3   .003M    6.5G    70.5  1.874M     0.1     N/A
Control File       596M     4.1   .167M     76M     1.4   .021M     0.1     1.3
TOTAL:            17.6G     9.6  5.047M   40.6G   887.9 11.666M     0.1     3.5
                          ------------------------------------------------------
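
Same pattern per file type, rolled up from v$iostat_file:

select filetype_name,
       sum(small_read_megabytes  + large_read_megabytes)   read_mb,
       sum(small_write_megabytes + large_write_megabytes)  write_mb
from   v$iostat_file
group  by filetype_name
order  by 3 desc;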

IOStat by Function/Filetype summary           DB/Inst: ORCL/ORCL  Snaps: 23-24
-> 'Data' columns suffixed with M,G,T,P are in multiples of 1024
    other columns suffixed with K,M,G,T,P are in multiples of 1000
-> Ordered by (Data Read + Write) desc for each function

 Reads:   Reqs   Data    Writes:  Reqs   Data    Waits:    Avg
 Data    per sec per sec Data    per sec per sec Count    Tm(ms)
 ------- ------- ------- ------- ------- ------- ------- -------
Others
   17.5G     7.8   5.02M     17G     5.8  4.891M   10.2K     0.2
 Others (Log File)
     17G     5.1  4.877M      0M     0.0      0M     621     0.0
 Others (Archive Log)
      0M     0.0      0M     17G     4.9  4.877M       0     N/A
 Others (Control File)
    509M     2.6   .143M     51M     0.9   .014M    9276     0.2
 Others (Data File)
      2M     0.1   .001M      0M     0.0      0M     305     0.4
LGWR
     86M     1.6   .024M   17.1G   811.6  4.901M    6137     0.1
 LGWR (Log File)
      0M     0.1      0M     17G   811.2  4.894M     620     0.0
 LGWR (Control File)
     86M     1.5   .024M     24M     0.4   .007M    5517     0.1
DBWR
      0M     0.0      0M    6.5G    70.3  1.874M       0     N/A
 DBWR (Data File)
      0M     0.0      0M    6.5G    70.3  1.874M       0     N/A
Direct Reads
      6M     0.2   .002M      0M     0.0      0M       0     N/A
 Direct Reads (Data File)
      6M     0.2   .002M      0M     0.0      0M       0     N/A
Direct Writes
      0M     0.0      0M      6M     0.2   .002M       0     N/A
 Direct Writes (Data File)
      0M     0.0      0M      6M     0.2   .002M       0     N/A
Buffer Cache Reads
      0M     0.0      0M      0M     0.0      0M       8     0.0
 Buffer Cache Reads (Data File)
      0M     0.0      0M      0M     0.0      0M       8     0.0
TOTAL:
   17.6G     9.6  5.046M   40.6G   887.9 11.667M   16.3K     0.1
                          ------------------------------------------------------

Tablespace IO Stats                          DB/Inst: ORCL/ORCL  Snaps: 23-24
-> ordered by IOs (Reads + Writes) desc

Tablespace
------------------------------
          Av       Av     Av      1-bk  Av 1-bk          Writes        Av   Buffer
  Reads   Rds/s  Rd(ms) Blks/Rd   Rds/s  Rd(ms)  Writes   avg/s Writes(ms)    Waits
------- ------- ------- ------- ------- ------- ------- ------- ---------- --------
UNDO_T1
    156       0     0.0     1.0 2.5E+05     0.0       0      70        0.3  813,152
USERS
    156       0     0.0     1.0     653     0.0       0       0        2.9 3.55E+06
SYSAUX
    166       0     0.0     1.1     476     0.0       0       0        2.9        0
SYSTEM
    156       0     0.0     1.0     229     0.0       0       0        0.4        0
RDSADMIN
    156       0     0.0     1.0     156     0.0       0       0        0.0        0
                          ------------------------------------------------------
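
The tablespace IO lines come from v$filestat, which is cumulative since startup, so AWR diffs it. A quick live version, with read times converted from centiseconds to milliseconds:

select d.tablespace_name,
       sum(f.phyrds)                                          reads,
       sum(f.phywrts)                                         writes,
       round(sum(f.readtim)*10 / nullif(sum(f.phyrds),0), 1)  av_rd_ms
from   v$filestat     f,
       dba_data_files d
where  f.file# = d.file_id
group  by d.tablespace_name
order  by sum(f.phyrds) + sum(f.phywrts) desc;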

File IO Stats                                DB/Inst: ORCL/ORCL  Snaps: 23-24
-> ordered by Tablespace, File

Tablespace               Filename
------------------------ ----------------------------------------------------
          Av       Av     Av      1-bk  Av 1-bk          Writes   Buffer  Av Buf
  Reads   Rds/s  Rd(ms) Blks/Rd   Rds/s  Rd(ms)  Writes   avg/s    Waits  Wt(ms)
------- ------- ------- ------- ------- ------- ------- ------- -------- -------
RDSADMIN                 /rdsdbdata/db/ORCL_A/datafile/o1_mf_rdsadmin_g52c5fo
    156       0     0.0     1.0       0     0.0     156       0        0     0.0
SYSAUX                   /rdsdbdata/db/ORCL_A/datafile/o1_mf_sysaux_g52bly2r_
    166       0     0.0     1.1       0     0.0     476       0        0     0.0
SYSTEM                   /rdsdbdata/db/ORCL_A/datafile/o1_mf_system_g52blcs9_
    156       0     0.0     1.0       0     0.0     229       0        0     0.0
UNDO_T1                  /rdsdbdata/db/ORCL_A/datafile/o1_mf_undo_t1_g52bmb96
    156       0     0.0     1.0       0     0.0 2.5E+05      70  813,152     0.1
USERS                    /rdsdbdata/db/ORCL_A/datafile/o1_mf_users_g52bmd5h_.
    156       0     0.0     1.0       0     0.0     653       0 3.55E+06     0.1
                          ------------------------------------------------------
Buffer Pool Statistics                       DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Standard block size Pools  D: default,  K: keep,  R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k

                                                            Free   Writ   Buffer
     Number of Pool       Buffer     Physical    Physical   Buff   Comp     Busy
P      Buffers Hit%         Gets        Reads      Writes   Wait   Wait    Waits
--- ---------- ---- ------------ ------------ ----------- ------ ------ --------
D    1,225,340  100  203,886,410           30     855,005      0      0 4.36E+06
                          ------------------------------------------------------
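
The hit ratio and buffer busy counts sit in v$buffer_pool_statistics. A 100% hit ratio next to 4.36M buffer busy waits says the pain here is block contention, not disk:

select name,
       db_block_gets + consistent_gets                          gets,
       physical_reads,
       round(100 * (1 - physical_reads
             / nullif(db_block_gets + consistent_gets,0)), 1)   hit_pct,
       buffer_busy_wait
from   v$buffer_pool_statistics;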

Checkpoint Activity                           DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Total Physical Writes:                      855,813

                                          Other    Autotune      Thread
       MTTR    Log Size    Log Ckpt    Settings        Ckpt        Ckpt
     Writes      Writes      Writes      Writes      Writes      Writes
----------- ----------- ----------- ----------- ----------- -----------
          0     853,497           0           0           0       1,539
                          ------------------------------------------------------

Instance Recovery Stats                       DB/Inst: ORCL/ORCL  Snaps: 23-24
-> B: Begin Snapshot,  E: End Snapshot

                                                                            Estd
  Targt  Estd                                     Log Ckpt Log Ckpt    Opt   RAC
  MTTR   MTTR Recovery  Actual   Target   Log Sz   Timeout Interval    Log Avail
   (s)    (s) Estd IOs RedoBlks RedoBlks RedoBlks RedoBlks RedoBlks  Sz(M)  Time
- ----- ----- -------- -------- -------- -------- -------- -------- ------ -----
B     0    41    15950   663396   636984   636984  1419489      N/A    N/A   N/A
E     0    42    15615   672998   636984   636984  1472527      N/A    N/A   N/A
                          ------------------------------------------------------
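
These numbers mirror v$instance_recovery. The checkpoint target (636,984 redo blocks) is capped by log file size rather than the timeout (1.4M blocks), consistent with the log switch rate above:

select target_mttr,
       estimated_mttr,
       recovery_estimated_ios,
       actual_redo_blks,
       target_redo_blks,
       optimal_logfile_size        -- MB; shows N/A in the report above
from   v$instance_recovery;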

MTTR Advisory                                     DB/Inst: ORCL/ORCL  Snap: 24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Buffer Pool Advisory                              DB/Inst: ORCL/ORCL  Snap: 24
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate


                                    Est
                                   Phys      Estimated                  Est
    Size for   Size      Buffers   Read     Phys Reads     Est Phys %DBtime
P    Est (M) Factor  (thousands) Factor    (thousands)    Read Time for Rds
--- -------- ------ ------------ ------ -------------- ------------ -------
D        992     .1          122    1.0             19            1     5.0
D      1,984     .2          244    1.0             19            1     5.0
D      2,976     .3          366    1.0             19            1     5.0
D      3,968     .4          489    1.0             19            1     5.0
D      4,960     .5          611    1.0             19            1     5.0
D      5,952     .6          733    1.0             19            1     5.0
D      6,944     .7          855    1.0             19            1     5.0
D      7,936     .8          977    1.0             19            1     5.0
D      8,928     .9        1,099    1.0             19            1     5.0
D      9,920    1.0        1,221    1.0             19            1     5.0
D      9,952    1.0        1,225    1.0             19            1     5.0
D     10,912    1.1        1,344    1.0             19            1     5.0
D     11,904    1.2        1,466    1.0             19            1     5.0
D     12,896    1.3        1,588    1.0             19            1     5.0
D     13,888    1.4        1,710    1.0             19            1     5.0
D     14,880    1.5        1,832    1.0             19            1     5.0
D     15,872    1.6        1,954    1.0             19            1     5.0
D     16,864    1.7        2,076    1.0             19            1     5.0
D     17,856    1.8        2,199    1.0             19            1     5.0
D     18,848    1.9        2,321    1.0             19            1     5.0
D     19,840    2.0        2,443    1.0             19            1     5.0
                          ------------------------------------------------------
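
The advisory is a dump of v$db_cache_advice. A read factor of 1.0 at every size, like here, means a bigger cache buys nothing:

select size_for_estimate            cache_mb,
       size_factor,
       estd_physical_read_factor,
       estd_physical_reads
from   v$db_cache_advice
where  name = 'DEFAULT'
and    block_size = 8192
and    advice_status = 'ON'
order  by size_for_estimate;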

PGA Aggr Summary                             DB/Inst: ORCL/ORCL  Snaps: 23-24
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory

PGA Cache Hit %   W/A MB Processed  Extra W/A MB Read/Written
--------------- ------------------ --------------------------
          100.0              4,577                          0
                          ------------------------------------------------------

PGA Aggr Target Stats                         DB/Inst: ORCL/ORCL  Snaps: 23-24
-> B: Begin Snap   E: End Snap (rows identified with B or E contain data
   which is absolute i.e. not diffed over the interval)
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used    - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem    - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem   - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem    - percentage of workarea memory under manual control

                                                %PGA  %Auto   %Man
    PGA Aggr   Auto PGA   PGA Mem    W/A PGA     W/A    W/A    W/A Global Mem
   Target(M)  Target(M)  Alloc(M)    Used(M)     Mem    Mem    Mem   Bound(K)
- ---------- ---------- ---------- ---------- ------ ------ ------ ----------
B      1,944      1,551      265.3        0.0     .0     .0     .0    199,080
E      1,944      1,558      260.0        0.0     .0     .0     .0    199,080
                          ------------------------------------------------------

PGA Aggr Target Histogram                     DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Optimal Executions are purely in-memory operations

  Low     High
Optimal Optimal    Total Execs  Optimal Execs 1-Pass Execs M-Pass Execs
------- ------- -------------- -------------- ------------ ------------
     2K      4K      2,325,896      2,325,896            0            0
    64K    128K              8              8            0            0
   512K   1024K             19             19            0            0
     1M      2M             14             14            0            0
     4M      8M              2              2            0            0
                          ------------------------------------------------------
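
The workarea histogram comes from v$sql_workarea_histogram; everything ran optimal here, so PGA is a non-issue:

select low_optimal_size/1024          low_kb,
       (high_optimal_size+1)/1024     high_kb,
       optimal_executions,
       onepass_executions,
       multipasses_executions
from   v$sql_workarea_histogram
where  total_executions > 0
order  by low_optimal_size;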

PGA Memory Advisory                               DB/Inst: ORCL/ORCL  Snap: 24
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
   where Estd PGA Overalloc Count is 0

                                       Estd Extra    Estd P Estd PGA
PGA Target    Size           W/A MB   W/A MB Read/    Cache Overallo    Estd
  Est (MB)   Factr        Processed Written to Disk   Hit %    Count    Time
---------- ------- ---------------- ---------------- ------ -------- -------
       243     0.1         25,397.1              0.0  100.0        0 1.0E+07
       486     0.3         25,397.1              0.0  100.0        0 1.0E+07
       972     0.5         25,397.1              0.0  100.0        0 1.0E+07
     1,458     0.8         25,397.1              0.0  100.0        0 1.0E+07
     1,944     1.0         25,397.1              0.0  100.0        0 1.0E+07
     2,333     1.2         25,397.1              0.0  100.0        0 1.0E+07
     2,722     1.4         25,397.1              0.0  100.0        0 1.0E+07
     3,111     1.6         25,397.1              0.0  100.0        0 1.0E+07
     3,500     1.8         25,397.1              0.0  100.0        0 1.0E+07
     3,888     2.0         25,397.1              0.0  100.0        0 1.0E+07
     5,833     3.0         25,397.1              0.0  100.0        0 1.0E+07
     7,777     4.0         25,397.1              0.0  100.0        0 1.0E+07
    11,665     6.0         25,397.1              0.0  100.0        0 1.0E+07
    15,554     8.0         25,397.1              0.0  100.0        0 1.0E+07
                          ------------------------------------------------------
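
The PGA advisory mirrors v$pga_target_advice, where you want the smallest target whose estimated overallocation count is 0:

select round(pga_target_for_estimate/1024/1024)  target_mb,
       pga_target_factor,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
from   v$pga_target_advice
order  by pga_target_for_estimate;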

Shared Pool Advisory                             DB/Inst: ORCL/ORCL  Snap: 24
-> SP: Shared Pool     Est LC: Estimated Library Cache   Factr: Factor
-> Note there is often a 1:Many correlation between a single logical object
   in the Library Cache, and the physical number of memory objects associated
   with it.  Therefore comparing the number of Lib Cache objects (e.g. in
   v$librarycache), with the number of Lib Cache Memory Objects is invalid.


                                       Est LC Est LC  Est LC Est LC
  Shared    SP   Est LC                  Time   Time    Load   Load       Est LC
    Pool  Size     Size       Est LC    Saved  Saved    Time   Time      Mem Obj
 Size(M) Factr      (M)      Mem Obj      (s)  Factr     (s)  Factr     Hits (K)
-------- ----- -------- ------------ -------- ------ ------- ------ ------------
     480    .3       32        2,943   65,509    1.0     332    1.2       31,482
     640    .4      156        9,042   65,512    1.0     329    1.2       31,484
     800    .5      156        9,042   65,512    1.0     329    1.2       31,484
     960    .6      156        9,042   65,512    1.0     329    1.2       31,484
   1,120    .7      156        9,042   65,512    1.0     329    1.2       31,484
   1,280    .8      156        9,042   65,512    1.0     329    1.2       31,484
   1,312    .8      156        9,042   65,512    1.0     329    1.2       31,484
   1,344    .8      156        9,042   65,512    1.0     329    1.2       31,484
   1,376    .9      156        9,042   65,512    1.0     329    1.2       31,484
   1,408    .9      156        9,042   65,512    1.0     329    1.2       31,484
   1,440    .9      156        9,042   65,512    1.0     329    1.2       31,484
   1,472    .9      156        9,042   65,512    1.0     329    1.2       31,484
   1,504    .9      156        9,042   65,517    1.0     324    1.1       31,486
   1,536   1.0      156        9,042   65,529    1.0     312    1.1       31,489
   1,568   1.0      156        9,042   65,543    1.0     298    1.0       31,492
   1,600   1.0      156        9,042   65,557    1.0     284    1.0       31,512
   1,632   1.0      156        9,042   65,557    1.0     284    1.0       31,512
   1,664   1.0      156        9,042   65,557    1.0     284    1.0       31,512
   1,696   1.1      156        9,042   65,557    1.0     284    1.0       31,512
   1,728   1.1      156        9,042   65,557    1.0     284    1.0       31,512
   1,760   1.1      156        9,042   65,557    1.0     284    1.0       31,512
   1,792   1.1      156        9,042   65,557    1.0     284    1.0       31,512
   1,824   1.1      156        9,042   65,557    1.0     284    1.0       31,512
   1,856   1.2      156        9,042   65,557    1.0     284    1.0       31,512
   1,888   1.2      156        9,042   65,557    1.0     284    1.0       31,512
   1,920   1.2      156        9,042   65,557    1.0     284    1.0       31,512
   2,080   1.3      156        9,042   65,557    1.0     284    1.0       31,512
   2,240   1.4      156        9,042   65,557    1.0     284    1.0       31,512
   2,400   1.5      156        9,042   65,557    1.0     284    1.0       31,512
   2,560   1.6      156        9,042   65,557    1.0     284    1.0       31,512
   2,720   1.7      156        9,042   65,557    1.0     284    1.0       31,512
   2,880   1.8      156        9,042   65,557    1.0     284    1.0       31,512
   3,040   1.9      156        9,042   65,557    1.0     284    1.0       31,512
   3,200   2.0      156        9,042   65,557    1.0     284    1.0       31,512
                          ------------------------------------------------------
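
v$shared_pool_advice gives the same table. The estimated time saved barely moves between 0.3x and 2x, so the shared pool could likely shrink without hurting:

select shared_pool_size_for_estimate   sp_mb,
       shared_pool_size_factor         factr,
       estd_lc_size,
       estd_lc_time_saved,
       estd_lc_load_time
from   v$shared_pool_advice
order  by shared_pool_size_for_estimate;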

SGA Target Advisory                               DB/Inst: ORCL/ORCL  Snap: 24
SGA Target   SGA Size       Est DB     Est Physical
  Size (M)     Factor     Time (s)            Reads
---------- ---------- ------------ ----------------
     2,920        0.3       33,117           19,342
     4,380        0.4       33,117           19,280
     5,840        0.5       33,117           19,157
     7,300        0.6       33,118           18,874
     8,760        0.8       33,118           18,874
    10,220        0.9       33,118           18,874
    11,680        1.0       33,117           18,874
    13,140        1.1       33,118           18,874
    14,600        1.3       33,118           18,874
    16,060        1.4       33,118           18,874
    17,520        1.5       33,122           18,874
    18,980        1.6       33,122           18,874
    20,440        1.8       33,122           18,874
    21,900        1.9       33,122           18,874
    23,360        2.0       33,122           18,874
                          ------------------------------------------------------
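
And v$sga_target_advice for the SGA as a whole. Estimated DB time is flat from 0.3x to 2x, another sign this workload is contention bound rather than memory bound:

select sga_size,
       sga_size_factor,
       estd_db_time,
       estd_physical_reads
from   v$sga_target_advice
order  by sga_size;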

Streams Pool Advisory                             DB/Inst: ORCL/ORCL  Snap: 24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Java Pool Advisory                                DB/Inst: ORCL/ORCL  Snap: 24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Buffer Wait Statistics                        DB/Inst: ORCL/ORCL  Snaps: 23-24
-> ordered by wait time desc, waits desc

Class                    Waits Total Wait Time (s)  Avg Time (ms)
------------------ ----------- ------------------- --------------
data block           3,537,315                 287              0
undo block             779,085                  48              0
undo header             34,065                   2              0
1st level bmb           12,393                   1              0
                          ------------------------------------------------------
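
The buffer wait breakdown is v$waitstat (cumulative since startup). 3.5M data block waits plus 780K undo block waits fit a lot of sessions hammering the same authors blocks:

select class, count, time
from   v$waitstat
where  count > 0
order  by time desc, count desc;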

Enqueue Activity                             DB/Inst: ORCL/ORCL  Snaps: 23-24
-> only enqueues with waits are shown
-> Enqueue stats gathered prior to 10g should not be compared with 10g data
-> ordered by Wait Time desc, Waits desc

Enqueue Type (Request Reason)
------------------------------------------------------------------------------
    Requests    Succ Gets Failed Gets       Waits  Wt Time (s) Av Wt Time(ms)
------------ ------------ ----------- ----------- ------------ --------------
TX-Transaction (row lock contention)
     461,060      461,055           0     412,330           70            .17
SQ-Sequence Cache
      94,726       94,726           0      11,266            2            .21
                          ------------------------------------------------------
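
Enqueue stats live in v$enqueue_stat. The TX row lock contention above (412K waits) is what you'd expect when every session updates the rows around max(id):

select eq_type,
       total_req#,
       total_wait#,
       cum_wait_time               -- milliseconds
from   v$enqueue_stat
where  total_wait# > 0
order  by cum_wait_time desc;
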
Undo Segment Summary                         DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count,  OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen,   uR - unexpired Released,   uU - unexpired reUsed
-> eS - expired   Stolen,   eR - expired   Released,   eU - expired   reUsed

Undo   Num Undo       Number of  Max Qry   Max Tx Min/Max   STO/     uS/uR/uU/
 TS# Blocks (K)    Transactions  Len (s) Concurcy TR (mins) OOS      eS/eR/eU
---- ---------- --------------- -------- -------- --------- ----- --------------
   2      865.8       4,195,091    1,278        5 15.1/35.3 0/0   0/0/0/0/0/0
                          ------------------------------------------------------

Undo Segment Stats                            DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Most recent 35 Undostat rows, ordered by Time desc

                Num Undo    Number of Max Qry  Max Tx Tun Ret STO/    uS/uR/uU/
End Time          Blocks Transactions Len (s)   Concy  (mins) OOS     eS/eR/eU
------------ ----------- ------------ ------- ------- ------- ----- ------------
26-Apr 21:57     144,285      699,750      66       5      15 0/0   0/0/0/0/0/0
26-Apr 21:47     144,099      697,502     670       5      25 0/0   0/0/0/0/0/0
26-Apr 21:37     142,931      692,838      70       5      15 0/0   0/0/0/0/0/0
26-Apr 21:27     142,850      691,432     673       5      25 0/0   0/0/0/0/0/0
26-Apr 21:17     143,712      696,514   1,278       5      35 0/0   0/0/0/0/0/0
26-Apr 21:07     147,945      717,055     677       5      25 0/0   0/0/0/0/0/0
                          ------------------------------------------------------
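
The 10-minute rows are straight from v$undostat; the summary above is just their aggregate:

select begin_time,
       undoblks,
       txncount,
       maxquerylen,
       maxconcurrency,
       tuned_undoretention,
       ssolderrcnt,
       nospaceerrcnt
from   v$undostat
order  by begin_time desc;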

Latch Activity                               DB/Inst: ORCL/ORCL  Snaps: 23-24
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
   willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0

                                           Pct    Avg   Wait                 Pct
                                    Get    Get   Slps   Time       NoWait NoWait
Latch Name                     Requests   Miss  /Miss    (s)     Requests   Miss
------------------------ -------------- ------ ------ ------ ------------ ------
AQ deq hash table latch               1    0.0             0            0    N/A
ASM db client latch               3,251    0.0             0            0    N/A
ASM map operation hash t              1    0.0             0            0    N/A
ASM network state latch              80    0.0             0            0    N/A
AWR Alerted Metric Eleme         25,303    0.0             0            0    N/A
Change Notification Hash          1,190    0.0             0            0    N/A
Consistent RBA                2,598,555    0.0    0.1      0            0    N/A
DML lock allocation          10,178,082    0.0    0.0      0            0    N/A
Event Group Locks                   167    0.0             0            0    N/A
FAL Queue                           452    0.0             0            0    N/A
FIB s.o chain latch                 932    0.0             0            0    N/A
FOB s.o list latch                1,029    0.0             0            0    N/A
File State Object Pool P              1    0.0             0            0    N/A
I/O Staticstics latch                 1    0.0             0            0    N/A
IPC stats buffer allocat              1    0.0             0            0    N/A
In memory undo latch         30,946,670    1.6    0.0      2    5,005,394    0.0
JS Sh mem access                      1    0.0             0            0    N/A
JS queue access latch                 1    0.0             0            0    N/A
JS slv state obj latch                1    0.0             0            0    N/A
KFC FX Hash Latch                     1    0.0             0            0    N/A
KFC Hash Latch                        1    0.0             0            0    N/A
KFCL LE Freelist                      1    0.0             0            0    N/A
KGNFS-NFS:SHM structure               1    0.0             0            0    N/A
KGNFS-NFS:SVR LIST                    1    0.0             0            0    N/A
KJC message pool free li              1    0.0             0            0    N/A
KJCT flow control latch               1    0.0             0            0    N/A
KMG MMAN ready and start          1,189    0.0             0            0    N/A
KTF sga latch                        18    0.0             0        1,193    0.0
Locator state objects po              1    0.0             0            0    N/A
Lsod array latch                      1    0.0             0            0    N/A
MQL Tracking Latch                    0    N/A             0           72    0.0
Memory Management Latch               1    0.0             0        1,189    0.0
Memory Queue                          1    0.0             0            0    N/A
Memory Queue Message Sub              1    0.0             0            0    N/A
Memory Queue Message Sub              1    0.0             0            0    N/A
Memory Queue Message Sub              1    0.0             0            0    N/A
Memory Queue Message Sub              1    0.0             0            0    N/A
Memory Queue Subscriber               1    0.0             0            0    N/A
MinActiveScn Latch                   40    0.0             0            0    N/A
Mutex                                 1    0.0             0            0    N/A
Mutex Stats                           1    0.0             0            0    N/A
OS process                          416    0.0             0            0    N/A
OS process allocation             7,335    0.0             0            0    N/A
OS process: request allo            167    0.0             0            0    N/A
PL/SQL warning settings              85    0.0             0            0    N/A
PX hash array latch                   1    0.0             0            0    N/A
QMT                                   1    0.0             0            0    N/A
Real-time plan statistic            348    0.0             0            0    N/A
Result Cache: RC Latch        8,096,606    0.0    0.0      0            0    N/A
SGA IO buffer pool latch         10,670    0.0             0       13,450    0.0
SGA blob parent                       1    0.0             0            0    N/A
SGA bucket locks                      1    0.0             0            0    N/A
SGA heap locks                        1    0.0             0            0    N/A
SGA pool locks                        1    0.0             0            0    N/A
SQL memory manager latch              1    0.0             0        1,188    0.0
SQL memory manager worka         86,600    0.0             0            0    N/A
Shared B-Tree                       129    0.0             0            0    N/A
Streams Generic                       1    0.0             0            0    N/A
Testing                               1    0.0             0            0    N/A
Token Manager                         1    0.0             0            0    N/A
WCR: sync                             1    0.0             0            0    N/A
Write State Object Pool               1    0.0             0            0    N/A
X$KSFQP                               1    0.0             0            0    N/A
XDB NFS Security Latch                1    0.0             0            0    N/A
XDB unused session pool               1    0.0             0            0    N/A
XDB used session pool                 1    0.0             0            0    N/A
active checkpoint queue          15,651    0.0             0            0    N/A
active service list               2,018    0.0             0        1,639    0.0
alert log latch                     310    0.0             0            0    N/A
archive control                     648    0.0             0            0    N/A
archive process latch             1,511    0.0             0            0    N/A
begin backup scn array                4    0.0             0            0    N/A
buffer pool                           1    0.0             0            0    N/A
business card                         1    0.0             0            0    N/A
cache buffer handles         36,706,018    0.3    0.0      0            0    N/A
cache buffers chains        678,148,703    0.9    0.0      4    1,103,253    0.9
cache buffers lru chain       4,081,822    0.0    0.0      0    3,969,584    0.1
call allocation                     382    0.0             0            0    N/A
cas latch                             1    0.0             0            0    N/A
change notification clie              1    0.0             0            0    N/A
channel handle pool latc            171    0.0             0            0    N/A
channel operations paren         24,232    0.0    0.0      0            0    N/A
checkpoint queue latch        2,390,524    0.0    0.1      0      854,563    0.0
client/application info             481    0.0             0            0    N/A
compile environment latc             83    0.0             0            0    N/A
cp cmon/server latch                  1    0.0             0            0    N/A
cp pool latch                         1    0.0             0            0    N/A
cp server hash latch                  1    0.0             0            0    N/A
cp sga latch                         80    0.0             0            0    N/A
cvmap freelist lock                   1    0.0             0            0    N/A
deferred cleanup latch               80    0.0             0            0    N/A
dml lock allocation                  80    0.0             0            0    N/A
done queue latch                      1    0.0             0            0    N/A
dummy allocation                    168    0.0             0            0    N/A
eighth spare latch - X p              1    0.0             0            0    N/A
eleventh spare latch - c              1    0.0             0            0    N/A
enqueue freelist latch              177    6.2    0.0      0    9,031,974    0.0
enqueue hash chains          28,147,972    0.9    0.0      0            5    0.0
enqueues                            361    0.0             0            0    N/A
fifteenth spare latch -               1    0.0             0            0    N/A
file cache latch                  1,536    0.0             0            0    N/A
first Audit Vault latch              62    0.0             0            0    N/A
flashback copy                        1    0.0             0            0    N/A
fourteenth spare latch -              1    0.0             0            0    N/A
fourth Audit Vault latch              1    0.0             0            0    N/A
gc element                            1    0.0             0            0    N/A
gcs commit scn state                  1    0.0             0            0    N/A
gcs partitioned table ha              1    0.0             0            0    N/A
gcs pcm hashed value buc              1    0.0             0            0    N/A
gcs resource freelist                 1    0.0             0            0    N/A
gcs resource hash                     1    0.0             0            0    N/A
gcs resource scan list                1    0.0             0            0    N/A
gcs resource validate li              1    0.0             0            0    N/A
gcs shadows freelist                  1    0.0             0            0    N/A
ges domain table                      1    0.0             0            0    N/A
ges enqueue table freeli              1    0.0             0            0    N/A
ges group table                       1    0.0             0            0    N/A
ges process hash list                 1    0.0             0            0    N/A
ges process parent latch              1    0.0             0            0    N/A
ges resource hash list                1    0.0             0            0    N/A
^LLatch Activity                               DB/Inst: ORCL/ORCL  Snaps: 23-24
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
   willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0

                                           Pct    Avg   Wait                 Pct
                                    Get    Get   Slps   Time       NoWait NoWait
Latch Name                     Requests   Miss  /Miss    (s)     Requests   Miss
------------------------ -------------- ------ ------ ------ ------------ ------
ges resource scan list                1    0.0             0            0    N/A
ges resource table freel              1    0.0             0            0    N/A
ges value block free lis              1    0.0             0            0    N/A
global KZLD latch for me             62    0.0             0            0    N/A
global tx hash mapping                1    0.0             0            0    N/A
granule operation                     1    0.0             0            0    N/A
hash table column usage              24    0.0             0          187    0.0
hash table modification              74    0.0             0            0    N/A
heartbeat check                       1    0.0             0            0    N/A
internal temp table obje              1    0.0             0            0    N/A
intra txn parallel recov              1    0.0             0            0    N/A
io pool granule metadata              1    0.0             0            0    N/A
job workq parent latch                1    0.0             0            0    N/A
k2q lock allocation                   1    0.0             0            0    N/A
kcbtsemkid latch                    155    0.0             0            0    N/A
kdlx hb parent latch                  1    0.0             0            0    N/A
kgb parent                            1    0.0             0            0    N/A
kgnfs mount latch                     1    0.0             0            0    N/A
kokc descriptor allocati              4    0.0             0            0    N/A
ksfv messages                         1    0.0             0            0    N/A
ksim group membership ca              1    0.0             0            0    N/A
kss move lock                        33    0.0             0            0    N/A
ksuosstats global area              360    0.0             0            0    N/A
ksv allocation latch                144    0.0             0            0    N/A
ksv class latch                      65    0.0             0            0    N/A
ksv msg queue latch                   1    0.0             0            0    N/A
ksz_so allocation latch             167    0.0             0            0    N/A
ktm global data                     246    0.0             0            0    N/A
kwqbsn:qsga                         128    0.0             0            0    N/A
lgwr LWN SCN                  2,602,992    0.0    0.0      0            0    N/A
list of block allocation             28    0.0             0            0    N/A
loader state object free          1,882    0.0             0            0    N/A
lob segment dispenser la              1    0.0             0            0    N/A
lob segment hash table l             13    0.0             0            0    N/A
lob segment query latch               1    0.0             0            0    N/A
lock DBA buffer during m              1    0.0             0            0    N/A
logical standby cache                 1    0.0             0            0    N/A
logminer context allocat              1    0.0             0            0    N/A
logminer local                        1    0.0             0            0    N/A
logminer work area                    1    0.0             0            0    N/A
longop free list parent               1    0.0             0            0    N/A
managed standby latch               297    0.0             0            0    N/A
mapped buffers lru chain              1    0.0             0            0    N/A
message pool operations           2,164    0.0             0            0    N/A
messages                      7,779,582    0.0    0.0      0            0    N/A
mostly latch-free SCN         2,620,722    0.2    0.0      0            0    N/A
msg queue latch                       1    0.0             0            0    N/A
multiblock read objects              10    0.0             0            0    N/A
name-service namespace b              1    0.0             0            0    N/A
ncodef allocation latch              80    0.0             0            0    N/A
nineth spare latch - X p              1    0.0             0            0    N/A
object queue header heap         30,131    0.0             0            2    0.0
object queue header oper      5,367,223    0.0    0.0      0            0    N/A
object stats modificatio              4    0.0             0            0    N/A
parallel query alloc buf            461    0.0             0            0    N/A
parallel query stats                  1    0.0             0            0    N/A
parameter list                       15    0.0             0            0    N/A
parameter table manageme            291    0.0             0            0    N/A
peshm                                 1    0.0             0            0    N/A
pesom_free_list                       1    0.0             0            0    N/A
^LLatch Activity                               DB/Inst: ORCL/ORCL  Snaps: 23-24
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
   willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0

                                           Pct    Avg   Wait                 Pct
                                    Get    Get   Slps   Time       NoWait NoWait
Latch Name                     Requests   Miss  /Miss    (s)     Requests   Miss
------------------------ -------------- ------ ------ ------ ------------ ------
pesom_hash_node                       1    0.0             0            0    N/A
post/wait queue               6,170,847    0.0    0.0      0    5,567,423    0.1
process allocation                  183    0.0             0           83    0.0
process group creation              167    0.0             0            0    N/A
process queue                         1    0.0             0            0    N/A
process queue reference               1    0.0             0            0    N/A
qmn task queue latch                521    0.0             0            0    N/A
query server freelists                1    0.0             0            0    N/A
queued dump request                  12    0.0             0            0    N/A
queuing load statistics               1    0.0             0            0    N/A
recovery domain hash lis              1    0.0             0            0    N/A
redo allocation              18,172,705    0.2    0.0      0   49,615,134    0.6
redo copy                             1    0.0             0   49,702,067    0.8
redo writing                  8,104,776    0.0    0.0      0            0    N/A
resmgr group change latc             72    0.0             0            0    N/A
resmgr:active threads               168    0.0             0            0    N/A
resmgr:actses change gro             84    0.0             0            0    N/A
resmgr:actses change sta              1    0.0             0            0    N/A
resmgr:free threads list            167    0.0             0            0    N/A
resmgr:plan CPU method                1    0.0             0            0    N/A
resmgr:resource group CP              1    0.0             0            0    N/A
resmgr:schema config                 11    0.0             0            0    N/A
resmgr:session queuing                1    0.0             0            0    N/A
rm cas latch                          1    0.0             0            0    N/A
row cache objects               635,859    0.0             0            0    N/A
second Audit Vault latch              1    0.0             0            0    N/A
sequence cache                5,183,895    1.6    0.0      0            0    N/A
session allocation            5,005,761    0.1    0.0      0    5,005,623    0.3
session idle bit             31,732,642    0.3    0.0      0            0    N/A
session queue latch                   1    0.0             0            0    N/A
session state list latch            174    0.0             0            0    N/A
session switching                   165    0.0             0            0    N/A
session timer                     1,208    0.0             0            0    N/A
seventh spare latch - X               1    0.0             0            0    N/A
shared pool                      29,044    0.0             0            0    N/A
shared pool sim alloc                 2    0.0             0            0    N/A
shared pool simulator                14    0.0             0            0    N/A
sim partition latch                   1    0.0             0            0    N/A
simulator hash latch          1,264,761    0.0             0            0    N/A
simulator lru latch             855,006    0.0    0.0      0      409,747    0.0
sixth spare latch - X pa              1    0.0             0            0    N/A
sort extent pool                    165    0.0             0            0    N/A
space background state o             10    0.0             0            0    N/A
space background task la          4,363    0.2    0.0      0        2,381    0.0
state object free list                2    0.0             0            0    N/A
statistics aggregation              336    0.0             0            0    N/A
tablespace key chain            852,982    0.0    0.0      0            0    N/A
temp lob duration state               2    0.0             0            0    N/A
tenth spare latch - X pa              1    0.0             0            0    N/A
test excl. parent l0                  1    0.0             0            0    N/A
test excl. parent2 l0                 1    0.0             0            0    N/A
thirteenth spare latch -              1    0.0             0            0    N/A
threshold alerts latch              129    0.0             0            0    N/A
transaction allocation               43    0.0             0            0    N/A
twelfth spare latch - ch              1    0.0             0            0    N/A
twenty-fifth spare latch              1    0.0             0            0    N/A
twenty-first spare latch              1    0.0             0            0    N/A
twenty-fourth spare latc              1    0.0             0            0    N/A
twenty-second spare latc              1    0.0             0            0    N/A
twenty-third spare latch              1    0.0             0            0    N/A
^LLatch Activity                               DB/Inst: ORCL/ORCL  Snaps: 23-24
-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
   willing-to-wait latch get requests
-> "NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
-> "Pct Misses" for both should be very close to 0.0

                                           Pct    Avg   Wait                 Pct
                                    Get    Get   Slps   Time       NoWait NoWait
Latch Name                     Requests   Miss  /Miss    (s)     Requests   Miss
------------------------ -------------- ------ ------ ------ ------------ ------
undo global data             19,869,354    0.2    0.0      0            0    N/A
virtual circuit buffers               1    0.0             0            0    N/A
virtual circuit holder                1    0.0             0            0    N/A
virtual circuit queues                1    0.0             0            0    N/A
write info latch                      0    N/A             0    2,597,776    0.0
                          ------------------------------------------------------

^LLatch Sleep Breakdown                        DB/Inst: ORCL/ORCL  Snaps: 23-24
-> ordered by misses desc

                                       Get                                 Spin
Latch Name                        Requests       Misses      Sleeps        Gets
-------------------------- --------------- ------------ ----------- -----------
cache buffers chains           678,148,703    6,039,532       9,920   6,030,019
In memory undo latch            30,946,670      498,440       3,120     495,577
enqueue hash chains             28,147,972      254,691          77     254,625
cache buffer handles            36,706,018      102,104           8     102,096
sequence cache                   5,183,895       84,508          24      84,484
session idle bit                31,732,642       80,214          15      80,199
undo global data                19,869,354       48,471          40      48,447
redo allocation                 18,172,705       41,799         177      41,636
mostly latch-free SCN            2,620,722        5,427           5       5,422
session allocation               5,005,761        5,030           1       5,029
messages                         7,779,582        3,026           5       3,021
post/wait queue                  6,170,847        1,008           2       1,006
object queue header operat       5,367,223          258           1         257
cache buffers lru chain          4,081,822          221           5         216
checkpoint queue latch           2,390,524           27           4          23
Consistent RBA                   2,598,555           10           1           9
                          ------------------------------------------------------

^LLatch Miss Sources                           DB/Inst: ORCL/ORCL  Snaps: 23-24
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
Latch Name               Where                       Misses     Sleeps   Sleeps
------------------------ -------------------------- ------- ---------- --------
In memory undo latch     kticmt: child                    0      2,468        2
In memory undo latch     ktichg: child                    0        486        7
In memory undo latch     ktiFlushMe                       0        155        1
In memory undo latch     ktiFlush: child                  0         18    1,037
In memory undo latch     ktiTxnPoolFree                   0          4        0
cache buffer handles     kcbzfs                           0          8        3
cache buffers chains     kcbgcur_2                        0      4,881       49
cache buffers chains     kcbgtcr: fast path               0      3,079    7,957
cache buffers chains     kcbgtcr_2                        0        997      151
cache buffers chains     kcbrls_2                         0        750       16
cache buffers chains     kcbgcur: fast path (shr)         0        651    2,711
cache buffers chains     kcbchg1: clear MS bit            0        409        4
cache buffers chains     kcbchg1: mod cur pin             0        248       26
cache buffers chains     kcbzwb                           0        191      135
cache buffers chains     kcbchg1: mod cr pin              0        160       26
cache buffers chains     kcbnlc                           0        155        4
cache buffers chains     kcbchg1: aux pin                 0        112        0
cache buffers chains     kcbrls_1                         0         61       25
cache buffers chains     kcbso1: set no access            0         54      230
cache buffers chains     kcbrls: fast release             0         53       46
cache buffers chains     kcbgtcr: fast path (cr pin       0         40       51
cache buffers chains     kcbgcur_4                        0         25       25
cache buffers chains     kcbzgb: scan from tail. no       0         13        0
cache buffers chains     kcb_pre_apply: kcbhq61           0          7        1
cache buffers chains     kcb_commit                       0          6        0
cache buffers chains     kcbzpbuf                         0          6        7
cache buffers chains     kcb_post_apply: kcbhq62          0          4        0
cache buffers chains     kcbcge                           0          4       18
cache buffers chains     kcb_trim_hash_chain              0          3        0
cache buffers chains     kcbgtcr: kslbegin excl           0          3        0
cache buffers chains     kcb_is_private                   0          2      432
cache buffers lru chain  kcbzgws                          0          5        0
checkpoint queue latch   kcbbwthc: thread checkpoin       0          3        1
checkpoint queue latch   kcbswcu: Switch buffers          0          1        3
enqueue hash chains      ksqrcl                           0         40       21
enqueue hash chains      ksqgtl3                          0         36       56
enqueue hash chains      ksqcmi: get hash chain lat       0          1        0
lgwr LWN SCN             kcs023                           0          5        0
messages                 ksaamb: after wakeup             0          4        1
messages                 ksarcv                           0          1       14
mostly latch-free SCN    kcsnew_scn_rba                   0          1        0
object queue header oper kcbo_switch_q_bg                 0          1        0
post/wait queue          ksliwat:add:nowait               0          2        0
redo allocation          kcrfw_redo_gen: redo alloc       0        159        0
redo allocation          kcrfw_redo_write: before w       0         13       22
redo allocation          kcrfw_redo_gen: redo alloc       0          3       67
redo allocation          kcrfw_post: more space           0          2       88
sequence cache           kdnssd                           0         20        1
sequence cache           kdnnxt: cached seq               0          3       15
sequence cache           kdnss                            0          1        8
session allocation       ksucri_int : SSO                 0          1        0
session idle bit         ksupuc: set busy                 0          8       17
session idle bit         ksupuc: clear busy               0          6        0
session idle bit         ksuxds                           0          3        0
undo global data         ktudnx:child                     0         19        3
undo global data         ktufrbs_2                        0         15       21
undo global data         ktudba: KSLBEGIN                 0          5       16
undo global data         ktubnd_4                         0          1        0
                          ------------------------------------------------------

Mutex Sleep Summary                           DB/Inst: ORCL/ORCL  Snaps: 23-24
-> ordered by number of sleeps desc

                                                                         Wait
Mutex Type            Location                               Sleeps    Time (ms)
--------------------- -------------------------------- ------------ ------------
Library Cache         kglpin1   4                                33            0
Cursor Pin            kksfbc [KKSCHLPIN1]                        29            0
Library Cache         kglpndl1  95                                7            0
Library Cache         kglpnal1  90                                6            0
Cursor Pin            kksLockDelete [KKSCHLPIN6]                  5            0
                          ------------------------------------------------------

^LParent Latch Statistics                      DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Child Latch Statistics                        DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

^LSegments by Logical Reads                    DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Total Logical Reads:     204,307,245
-> Captured Segments account for   91.9% of Total

           Tablespace                      Subobject  Obj.       Logical
Owner         Name    Object Name            Name     Type         Reads  %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
KYLELF     USERS      AUTHORS_I                       INDEX   93,967,584   45.99
KYLELF     USERS      AUTHORS                         TABLE   93,574,368   45.80
SYS        SYSTEM     I_SEQ1                          INDEX       85,200     .04
SYS        SYSTEM     SEQ$                            TABLE       83,408     .04
SYS        SYSTEM     I_SYSAUTH1                      INDEX        8,960     .00
                          ------------------------------------------------------

Segments by Physical Reads                    DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Total Physical Reads:             810
-> Captured Segments account for    3.7% of Total

           Tablespace                      Subobject  Obj.      Physical
Owner         Name    Object Name            Name     Type         Reads  %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS        SYSAUX     WRH$_ACTIVE_SESSION_ 34324168_0 TABLE           28    3.46
SYS        SYSAUX     WRH$_IOSTAT_FUNCTION            INDEX            2     .25
                          ------------------------------------------------------

Segments by Physical Read Requests            DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Total Physical Read Requests:             790
-> Captured Segments account for    1.3% of Total

           Tablespace                      Subobject  Obj.     Phys Read
Owner         Name    Object Name            Name     Type      Requests  %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS        SYSAUX     WRH$_ACTIVE_SESSION_ 34324168_0 TABLE            8    1.01
SYS        SYSAUX     WRH$_IOSTAT_FUNCTION            INDEX            2     .25
                          ------------------------------------------------------

Segments by UnOptimized Reads                 DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Total UnOptimized Read Requests:             790
-> Captured Segments account for    1.3% of Total

           Tablespace                      Subobject  Obj.   UnOptimized
Owner         Name    Object Name            Name     Type         Reads  %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS        SYSAUX     WRH$_ACTIVE_SESSION_ 34324168_0 TABLE            8    1.01
SYS        SYSAUX     WRH$_IOSTAT_FUNCTION            INDEX            2     .25
                          ------------------------------------------------------

Segments by Optimized Reads                   DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Segments by Direct Physical Reads             DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Segments by Physical Writes                   DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Total Physical Writes:         855,813
-> Captured Segments account for    0.1% of Total

           Tablespace                      Subobject  Obj.      Physical
Owner         Name    Object Name            Name     Type        Writes  %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
KYLELF     USERS      AUTHORS                         TABLE          524     .06
SYS        SYSTEM     SEQ$                            TABLE           56     .01
KYLELF     USERS      AUTHORS_I                       INDEX           55     .01
SYS        SYSAUX     WRH$_ACTIVE_SESSION_ 34324168_0 TABLE           33     .00
SYS        SYSAUX     SMON_SCN_TIME                   TABLE           22     .00
                          ------------------------------------------------------

Segments by Physical Write Requests           DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Total Physical Write Requests:          251,370
-> Captured Segments account for    0.3% of Total

           Tablespace                      Subobject  Obj.    Phys Write
Owner         Name    Object Name            Name     Type      Requests  %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
KYLELF     USERS      AUTHORS                         TABLE          442     .18
SYS        SYSTEM     SEQ$                            TABLE           56     .02
KYLELF     USERS      AUTHORS_I                       INDEX           55     .02
SYS        SYSAUX     SMON_SCN_TIME                   TABLE           22     .01
SYS        SYSAUX     WRH$_SYSMETRIC_HISTO            TABLE           12     .00
                          ------------------------------------------------------

Segments by Direct Physical Writes            DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Total Direct Physical Writes:             808
-> Captured Segments account for    3.5% of Total

           Tablespace                      Subobject  Obj.        Direct
Owner         Name    Object Name            Name     Type        Writes  %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS        SYSAUX     WRH$_ACTIVE_SESSION_ 34324168_0 TABLE           26    3.22
SYS        SYSAUX     SYS_LOB0000006402C00            LOB              2     .25
                          ------------------------------------------------------
Segments by Table Scans                       DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Total Table Scans:             995
-> Captured Segments account for  100.0% of Total

           Tablespace                      Subobject  Obj.         Table
Owner         Name    Object Name            Name     Type         Scans  %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS        SYSTEM     I_SYSAUTH1                      INDEX          992   99.70
SYS        SYSTEM     I_OBJ2                          INDEX            3     .30
                          ------------------------------------------------------

Segments by DB Blocks Changes                 DB/Inst: ORCL/ORCL  Snaps: 23-24
-> % of Capture shows % of DB Block Changes for each top segment compared
-> with total DB Block Changes for all segments captured by the Snapshot

           Tablespace                      Subobject  Obj.      DB Block    % of
Owner         Name    Object Name            Name     Type       Changes Capture
---------- ---------- -------------------- ---------- ----- ------------ -------
KYLELF     USERS      AUTHORS                         TABLE   26,820,688   55.81
KYLELF     USERS      AUTHORS_I                       INDEX   21,156,096   44.02
SYS        SYSTEM     SEQ$                            TABLE       83,440     .17
SYS        SYSAUX     WRH$_PARAMETER_PK    34324168_0 INDEX          112     .00
SYS        SYSAUX     WRH$_SQL_PLAN_PK                INDEX           80     .00
                          ------------------------------------------------------

^LSegments by Row Lock Waits                   DB/Inst: ORCL/ORCL  Snaps: 23-24
-> % of Capture shows % of row lock waits for each top segment compared
-> with total row lock waits for all segments captured by the Snapshot

                                                                     Row
           Tablespace                      Subobject  Obj.          Lock    % of
Owner         Name    Object Name            Name     Type         Waits Capture
---------- ---------- -------------------- ---------- ----- ------------ -------
KYLELF     USERS      AUTHORS                         TABLE      436,695  100.00
                          ------------------------------------------------------

Segments by ITL Waits                         DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Segments by Buffer Busy Waits                 DB/Inst: ORCL/ORCL  Snaps: 23-24
-> % of Capture shows % of Buffer Busy Waits for each top segment compared
-> with total Buffer Busy Waits for all segments captured by the Snapshot

                                                                  Buffer
           Tablespace                      Subobject  Obj.          Busy    % of
Owner         Name    Object Name            Name     Type         Waits Capture
---------- ---------- -------------------- ---------- ----- ------------ -------
KYLELF     USERS      AUTHORS                         TABLE    1,973,082   55.58
KYLELF     USERS      AUTHORS_I                       INDEX    1,576,624   44.42
                          ------------------------------------------------------

^LDictionary Cache Stats                       DB/Inst: ORCL/ORCL  Snaps: 23-24
-> "Pct Misses"  should be very low (< 2% in most cases)
-> "Final Usage" is the number of cache entries being used


                                   Get    Pct    Scan   Pct      Mod      Final
Cache                         Requests   Miss    Reqs  Miss     Reqs      Usage
------------------------- ------------ ------ ------- ----- -------- ----------
dc_awr_control                      64    0.0       0   N/A        2          1
dc_global_oids                       7    0.0       0   N/A        0         51
dc_histogram_data                    3   33.3       0   N/A        0      2,301
dc_histogram_defs                   77    1.3       0   N/A        0      4,392
dc_objects                         276    0.4       0   N/A        0      2,628
dc_profiles                        124    0.0       0   N/A        0          2
dc_rollback_segments               892    0.0       0   N/A        0         19
dc_segments                        753    0.0       0   N/A        9        927
dc_sequences                    83,429    0.0       0   N/A   83,429         14
dc_tablespaces                   5,185    0.0       0   N/A        0          7
dc_users                         8,085    0.0       0   N/A        0         87
global database name             2,458    0.0       0   N/A        0          1
outstanding_alerts                  24    0.0       0   N/A        0          5
                          ------------------------------------------------------

Library Cache Activity                        DB/Inst: ORCL/ORCL  Snaps: 23-24
-> "Pct Misses"  should be very low

                         Get    Pct            Pin    Pct             Invali-
Namespace           Requests   Miss       Requests   Miss    Reloads  dations
--------------- ------------ ------ -------------- ------ ---------- --------
ACCOUNT_STATUS           186    0.0              0    N/A          0        0
BODY                      16    0.0             22    0.0          0        0
DBLINK                   198    0.0              0    N/A          0        0
EDITION                   78    0.0            140    0.0          0        0
INDEX                      2    0.0              2    0.0          0        0
SCHEMA                    62    0.0              0    N/A          0        0
SQL AREA                 637    0.2      6,603,229   -2.3          0        0
SQL AREA BUILD             8   12.5              0    N/A          0        0
SQL AREA STATS             8  100.0              8  100.0          0        0
TABLE/PROCEDURE           87    0.0      3,337,268    0.0          0        0
                          ------------------------------------------------------

^LMemory Dynamic Components                    DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Min/Max sizes since instance startup
-> Oper Types/Modes: INItializing,GROw,SHRink,STAtic/IMMediate,DEFerred
-> ordered by Component

                 Begin Snap     Current         Min         Max   Oper Last Op
Component         Size (Mb)   Size (Mb)   Size (Mb)   Size (Mb)  Count Typ/Mod
--------------- ----------- ----------- ----------- ----------- ------ -------
ASM Buffer Cach         .00         .00         .00         .00      0 STA/
DEFAULT 16K buf         .00         .00         .00         .00      0 STA/
DEFAULT 2K buff         .00         .00         .00         .00      0 STA/
DEFAULT 32K buf         .00         .00         .00         .00      0 STA/
DEFAULT 4K buff         .00         .00         .00         .00      0 STA/
DEFAULT 8K buff         .00         .00         .00         .00      0 STA/
DEFAULT buffer     9,952.00    9,952.00    9,952.00    9,952.00      0 INI/
KEEP buffer cac         .00         .00         .00         .00      0 STA/
PGA Target         1,952.00    1,952.00    1,952.00    1,952.00      0 STA/
RECYCLE buffer          .00         .00         .00         .00      0 STA/
SGA Target        11,680.00   11,680.00   11,680.00   11,680.00      0 STA/
Shared IO Pool          .00         .00         .00         .00      0 STA/
java pool             32.00       32.00       32.00       32.00      0 STA/
large pool            32.00       32.00       32.00       32.00      0 STA/
shared pool        1,600.00    1,600.00    1,600.00    1,600.00      0 STA/
streams pool            .00         .00         .00         .00      0 STA/
                          ------------------------------------------------------



Memory Resize Operations Summary              DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Memory Resize Ops                             DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

^LProcess Memory Summary                       DB/Inst: ORCL/ORCL  Snaps: 23-24
-> B: Begin Snap   E: End Snap
-> All rows below contain absolute values (i.e. not diffed over the interval)
-> Max Alloc is Maximum PGA Allocation size at snapshot time
-> Hist Max Alloc is the Historical Max Allocation for still-connected processes
-> ordered by Begin/End snapshot, Alloc (MB) desc

                                                            Hist
                                    Avg  Std Dev     Max     Max
               Alloc      Used    Alloc    Alloc   Alloc   Alloc    Num    Num
  Category      (MB)      (MB)     (MB)     (MB)    (MB)    (MB)   Proc  Alloc
- -------- --------- --------- -------- -------- ------- ------- ------ ------
B Other        249.7       N/A      7.3     12.6      47      47     34     34
  Freeable      14.9        .0      1.0      1.3       4     N/A     15     15
  PL/SQL          .6        .5       .0       .0       0       0     34     34
  SQL             .1        .0       .0       .0       0       3     20     15
E Other        241.3       N/A      7.3     12.7      47      47     33     33
  Freeable      18.0        .0      1.4      2.4       9     N/A     13     13
  PL/SQL          .6        .5       .0       .0       0       0     33     33
  SQL             .1        .0       .0       .0       0       3     19     14
                          ------------------------------------------------------

SGA Memory Summary                            DB/Inst: ORCL/ORCL  Snaps: 23-24

                                                      End Size (Bytes)
SGA regions                     Begin Size (Bytes)      (if different)
------------------------------ ------------------- -------------------
Database Buffers                    10,435,428,352
Fixed Size                               2,264,456
Redo Buffers                            10,158,080
Variable Size                        1,744,831,096
                               -------------------
sum                                 12,192,681,984
                          ------------------------------------------------------

SGA breakdown difference                      DB/Inst: ORCL/ORCL  Snaps: 23-24
-> ordered by Pool, Name
-> N/A value for Begin MB or End MB indicates the size of that Pool/Name was
   insignificant, or zero in that snapshot


Pool   Name                                 Begin MB         End MB  % Diff
------ ------------------------------ -------------- -------------- -------
java   free memory                              32.0           32.0    0.00
large  PX msg pool                                .5             .5    0.00
large  free memory                              31.5           31.5    0.00
shared FileOpenBlock                            19.5           19.5    0.00
shared KGLH0                                    63.2           63.3    0.15
shared KGLS                                     16.6           16.6    0.00
shared KTI-UNDO                                 18.5           18.5    0.00
shared SQLA                                     93.0           93.1    0.18
shared db_block_hash_buckets                    89.0           89.0    0.00
shared dbktb: trace buffer                      25.8           25.8    0.00
shared event statistics per sess                31.4           31.4    0.00
shared free memory                             999.3          999.1   -0.02
shared ksunfy : SSO free list                   29.8           29.8    0.00
shared private strands                          35.7           35.7    0.00
       buffer_cache                          9,952.0        9,952.0    0.00
       fixed_sga                                 2.2            2.2    0.00
       log_buffer                                9.7            9.7    0.00
                          ------------------------------------------------------

^LStreams CPU/IO Usage                         DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Streams Capture                               DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Streams Capture Rate                          DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Streams Apply                                 DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Streams Apply Rate                            DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Buffered Queues                               DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Buffered Queue Subscribers                    DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------
Rule Set                                      DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Persistent Queues                             DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Persistent Queues Rate                        DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Persistent Queue Subscribers                  DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

^LResource Limit Stats                             DB/Inst: ORCL/ORCL  Snap: 24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Shared Servers Activity                       DB/Inst: ORCL/ORCL  Snaps: 23-24
-> Values represent averages for all samples

   Avg Total   Avg Active    Avg Total   Avg Active    Avg Total   Avg Active
 Connections  Connections Shared Srvrs Shared Srvrs  Dispatchers  Dispatchers
------------ ------------ ------------ ------------ ------------ ------------
           0            0            0            0            0            0
                          ------------------------------------------------------

Shared Servers Rates                          DB/Inst: ORCL/ORCL  Snaps: 23-24

  Common     Disp                        Common       Disp     Server
   Queue    Queue   Server    Server      Queue      Queue      Total     Server
 Per Sec  Per Sec Msgs/Sec    KB/Sec      Total      Total       Msgs  Total(KB)
-------- -------- -------- --------- ---------- ---------- ---------- ----------
       0        0        0       0.0          0          0          0          0
                          ------------------------------------------------------

Shared Servers Utilization                    DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Shared Servers Common Queue                   DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Shared Servers Dispatchers                    DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

^Linit.ora Parameters                          DB/Inst: ORCL/ORCL  Snaps: 23-24

                                                                End value
Parameter Name                Begin value                       (if different)
----------------------------- --------------------------------- --------------
archive_lag_target            300
audit_file_dest               /rdsdbdata/admin/ORCL/adump
control_files                 /rdsdbdata/db/ORCL_A/controlfile/
db_block_checking             MEDIUM
db_create_file_dest           /rdsdbdata/db
db_name                       ORCL
db_recovery_file_dest_size    1073741824
db_unique_name                ORCL_A
dg_broker_config_file1        /rdsdbdata/config/dr1ORCL.dat
dg_broker_config_file2        /rdsdbdata/config/dr2ORCL.dat
diagnostic_dest               /rdsdbdata/log
filesystemio_options          setall
local_listener                (ADDRESS = (PROTOCOL=TCP)(HOST=lo
log_archive_dest_1            location="/rdsdbdata/db/ORCL_A/ar
log_archive_format            -%s-%t-%r.arc
memory_target                 0
open_cursors                  300
pga_aggregate_target          2038670336
plsql_warnings                DISABLE:ALL
processes                     1652
recyclebin                    OFF
sga_target                    12247367680
spfile                        /rdsdbbin/oracle/dbs/spfileORCL.o
standby_file_management       AUTO
undo_tablespace               UNDO_T1
use_large_pages               ONLY
                          ------------------------------------------------------

^Linit.ora Multi-Valued Parameters             DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

Dynamic Remastering Stats                     DB/Inst: ORCL/ORCL  Snaps: 23-24

                  No data exists for this section of the report.
                          ------------------------------------------------------

End of Report

Uncategorized

Want to change the future Amazon RDS performance monitoring?

April 10th, 2019

UPDATE: All slots for this study have been filled.

On the other hand, I would love your feedback. Please send any ideas about what you’d like to see in RDS performance monitoring to me at kylelf at amazon.com

Thanks

Kyle
APG_2

Have you ever used the Amazon RDS console to manage performance on an RDS database and had ideas on how to make it better? The Amazon UX Research team for AWS is collaborating with me to recruit for an upcoming user research study on Amazon Relational Database Service (RDS) performance monitoring. Sessions will take place between Monday, April 22, 2019 and Friday, April 26, 2019. In this study Amazon is looking to speak with experienced database administrators who currently use RDS. The sessions will be conducted remotely over WebEx and will be one hour long. As appreciation for your time and feedback, participants will receive a $100 Amazon.com gift card for completing a session (if your company guidelines permit).

If you are interested, sign up for this confidential study here (note that the study has limited spots, first come, first served):

https://app.acuityscheduling.com/schedule.php?owner=14014140&calendarID=2201515

After you sign up, and if you get a slot, the Amazon UX team will send you a WebEx invite for your session.

Uncategorized

Honeycomb.io for DB Load and Active Sessions

April 3rd, 2019

Honeycomb.io turns out to be a nice solution for collecting, retrieving and displaying multi-dimensional time series data, i.e. the kind of data you get from sampling.

For example, in the database world we have Active Session History (ASH), which at its core tracks

  1. when – timestamp
  2. who – user
  3. command – what SQL are they running
  4. state – are they runnable on CPU, or are they waiting, and if waiting, what are they waiting for: I/O, a lock, a latch, buffer space, etc.

Collecting this information is pretty easy: it can be stored in a relational database, as I did when creating S-ASH (Simulated ASH), which Marcin Przepiorowski has built upon over the years since, or even in flat files, as I did with W-ASH (web enabled ASH).

On the other hand, retrieving the data in a way that can be graphed is challenging: to retrieve and display the data, we need to transform it into numeric time series.

With honeycomb.io we can store, retrieve and display data by various dimensions as time series.

Just sign up at honeycomb.io, then start to create a dataset. Pick any application type (it doesn’t matter), and when you hit continue to create the dataset you will get a writekey. With that writekey you can start sending data to honeycomb.io.

I’m on a Mac using Python so I just installed with

pip install libhoney

see: https://docs.honeycomb.io/getting-data-in/python/sdk/
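
For orientation, here is a minimal sketch of sending a single event with libhoney, using the same Builder/new_event/add_field/send calls as the full script below; the writekey and dataset name are placeholders:

import libhoney

# placeholder writekey and dataset name
libhoney.init(writekey="YOUR_WRITEKEY", dataset="ash-demo", debug=True)

builder = libhoney.Builder()
ev = builder.new_event()            # one event per sampled active session
ev.add_field("user", "kylelf")
ev.add_field("event", "IO:DataFileRead")
ev.send()                           # queued and sent in the background

libhoney.close()                    # flush pending events before exiting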

I then connected to a PostgreSQL database in Amazon RDS and looped, running a query to collect the sampled data

select 
       datname,
       pid, 
       usename, 
       application_name, 
       client_addr, 
       COALESCE(client_hostname,'unknown'), 
       COALESCE(wait_event_type,'cpu'), 
       COALESCE(wait_event,'cpu'), 
       query 
from 
       pg_stat_activity 
where 
       state = 'active';

and sending each row to honeycomb.io:

import libhoney
import psycopg2
import time
from time import strftime

libhoney.init(writekey="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", dataset="honeycomb-python-example", debug=True)

PGHOST = "kylelf2.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com"
PGDATABASE = "postgres"
PGUSER = "kylelf"
PGPASSWORD = "xxxxxxxx"
conn_string = "host=" + PGHOST + " port=5432 dbname=" + PGDATABASE + " user=" + PGUSER + " password=" + PGPASSWORD
conn = psycopg2.connect(conn_string)
print("Connected!")

sql_command = "select datname, pid, usename, application_name, client_addr, COALESCE(client_hostname,'unknown'), COALESCE(wait_event_type,'cpu'), COALESCE(wait_event,'cpu'), query from pg_stat_activity where state = 'active';"
print(sql_command)

builder = libhoney.Builder()
try:
    while True:
        mytime = strftime("%Y-%m-%d %H:%M:%S", time.localtime())
        print("- " + mytime + " --------------------- ")
        cursor = conn.cursor()
        cursor.execute(sql_command)
        for row in cursor:
            db = row[0]
            pid = row[1]
            un = row[2]
            app = row[3]
            ip = str(row[4])            # client_addr can be NULL
            host = row[5]
            group = row[6]              # wait_event_type, or 'cpu'
            event = row[7]              # wait_event, or 'cpu'
            query = row[8].replace('\n', ' ')
            if group != "cpu":          # label waits as "type:event"
                event = group + ":" + event
            print('{0:10s} {1:15s} {2:15s} {3:15s} {4:40.40s}'.format(un, ip, group, event, query))
            ev = builder.new_event()    # one honeycomb event per sample
            ev.add_field("hostname", ip)
            ev.add_field("user", un)
            ev.add_field("event", event)
            ev.add_field("sql", query)
            ev.send()
        time.sleep(1)                   # sample once per second
        cursor.close()
        conn.close()                    # see the note below on reconnecting
        conn = psycopg2.connect(conn_string)
except KeyboardInterrupt:
    libhoney.close()                    # flush queued events on exit
    conn.close()

(You might notice the disconnect/reconnect at the end of the loop. That wastes resources, but for some reason querying pg_stat_activity kept returning the same rows if I didn’t disconnect; reconnecting each loop worked. For the sake of a simple demo I gave up trying to figure out what was going on. This weirdness doesn’t happen for user tables.)
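
My guess at the cause: PostgreSQL takes a snapshot of the statistics views the first time they are read inside a transaction and reuses that snapshot until the transaction ends, and psycopg2 implicitly opens a transaction that a plain SELECT never closes. If that’s right, any of the following should avoid the reconnect (untested sketches):

# option 1: poll outside of any transaction
conn.autocommit = True

# option 2: end the transaction after each poll
conn.commit()

# option 3: discard the cached stats snapshot within the transaction
cursor.execute("select pg_stat_clear_snapshot()")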

On the honeycomb.io dashboard page I can choose “events” under “BREAK DOWN” and “count()” under “CALCULATE PER GROUP” and I get a DB load chart by wait event. I can further choose to make the graph a stacked graph:

Screen Shot 2019-04-03 at 2.38.47 PM

Now there are some limitations that make this less than a full solution. For one, zooming out causes the granularity to change from graphing a point every second to graphing points every 15, 30 or 60 seconds, yet the count still counts all the points in those intervals, and there is no way to normalize it by the elapsed time. For a granularity of 60 seconds it sums up all the points in that minute and graphs that value, where what I want is to take that 60-second sum and divide it by 60 seconds, giving the *average* active sessions in that interval rather than the sum.
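
For reference, the normalization I’m after is just samples divided by elapsed seconds, since each sample is one active session caught at a 1-second poll. A tiny sketch with made-up numbers:

def average_active_sessions(samples_in_bucket, bucket_seconds):
    # count / elapsed time = average number of concurrently active sessions
    return samples_in_bucket / float(bucket_seconds)

print(average_active_sessions(180, 60))   # 180 samples in a 60s bucket -> 3.0 AAS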

But overall it’s a fun, easy demo to get started with.

I found honeycomb.io quick to respond to emails, and they have a Slack channel, where folks ask and answer questions, that was responsive as well.

Uncategorized

Amazon RDS cluster dashboard with Performance Insights

March 28th, 2019

Amazon RDS Performance Insights (PI) doesn’t have a single-pane-of-glass dashboard for clusters, yet. Currently PI has a dashboard that has to be looked at separately for each instance in a cluster.

On the other hand, one can create a simple cluster dashboard using CloudWatch.

PI, when enabled, automatically sends three metrics to CloudWatch every minute.

These metrics are

  1. DBLoad
  2. DBLoadCPU
  3. DBLoadNonCPU

DBLoad = DBLoadCPU + DBLoadNonCPU

These metrics are measured in units of Average Active Sessions (AAS). AAS is like the run queue in the UNIX top command, except at the database level: it is the average number of SQL queries running concurrently in the database. For the DB load metrics the average is over 1 minute, since the metrics are reported each minute. DBLoad represents the total average number of SQL queries running concurrently, and it can be broken down into the queries that are runnable on CPU, which is DBLoadCPU, and the queries that are not ready to run on the CPU because they are waiting for some resource: an I/O to complete, a lock, or something that can only be accessed in single-threaded mode, like a latch or a buffer.
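
As a made-up worked example of how the three metrics relate on a 2 vCPU instance:

db_load_cpu    = 1.5   # avg sessions runnable on CPU over the minute
db_load_noncpu = 2.0   # avg sessions waiting on I/O, locks, latches, ...
db_load        = db_load_cpu + db_load_noncpu     # 3.5 AAS total

vcpus = 2
pct_of_cpu_capacity = 100 * db_load_cpu / vcpus        # 75% of CPU capacity demanded
pct_waiting         = 100 * db_load_noncpu / db_load   # ~57% of the load is waiting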

These metrics can be used to look at the health of the database.

For example, we can only have as many SQL statements running on CPU as there are vCPUs. If DBLoadCPU goes above the number of vCPUs, then we know that some of those queries that are runnable on CPU are actually waiting for the CPU.

We also know that when DBLoadNonCPU is low or near 0 then the queries are not waiting for resources and can execute. When DBLoadNonCPU goes up significantly then that represents an opportunity to optimize. For example if queries are spending half their time waiting for IO then if we could buffer that IO we could remove the IO wait and theoretically the queries could go twice as fast, doubling throughput.

By looking at DBLoadCPU for each instance in a cluster we can see if the load is well balanced and we can see if the load goes above the maximum CPU resources of the instance which would indicate a CPU bottleneck.

By looking at the ratio or percentage of DBLoadNonCPU to total DBLoad we can see how much time is wasted waiting for resources instead of executing on CPU. By showing this percentage for each instance in the cluster in one graph we can see if any particular instance is running into a bottleneck. If so, we would want to look at the Performance Insights dashboard for that instance to get more detail about what is happening.
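
Those two health checks are simple arithmetic on the metrics; a sketch (the 2 vCPU figure matches the instances used below):

VCPUS = 2

def cpu_waiting(db_load_cpu, vcpus=VCPUS):
    # more AAS runnable on CPU than vCPUs => some sessions are waiting for CPU
    return db_load_cpu > vcpus

def pct_bottleneck(db_load_noncpu, db_load):
    # share of the load waiting on resources instead of running on CPU
    return 100.0 * db_load_noncpu / db_load if db_load else 0.0

print(cpu_waiting(1.5))          # False: 1.5 AAS on CPU fits within 2 vCPUs
print(pct_bottleneck(1.2, 3.7))  # ~32% of the load is waiting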

So let’s set up a cluster dashboard using PI data.

Create a RDS database instance with PI enabled : https://console.aws.amazon.com/rds

PI is supported on all RDS Oracle, Aurora PostgreSQL, RDS PostgreSQL 10, RDS SQL Server (except 2008), and the most recent versions of Aurora MySQL 5.6 and RDS MySQL 5.6 & 5.7. See the docs for details: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.html

In the case of this blog I created an Aurora MySQL 5.6 cluster with the original writer node and 3 reader nodes. You can create the initial instance with one reader node, then after creation use “modify instance” to add more reader nodes.

My nodes are shard1, shard2, shard3 and shard4.

After creating or modifying an instance to support PI, navigate to Cloudwatch and create a new dashboard (or add to an existing one)

Screen Shot 2019-03-28 at 12.17.34 PM

 

After creating a dashboard (or modifying an existing one), add a new widget by clicking the “Add widget” button

Screen Shot 2019-03-28 at 12.22.46 PM

and for this example choose the line graph, the first option on the left of the popup:

Screen Shot 2019-03-25 at 6.00.44 PM

at the bottom of the screen enter “DBLoadCPU” into the search field

Screen Shot 2019-03-28 at 12.24.02 PM

hit return and click on “per database metrics”

Screen Shot 2019-03-28 at 12.25.17 PM

 

My cluster instances are shard1, shard2, shard3 and shard4 so I click those

Screen Shot 2019-03-28 at 12.26.19 PM

and click “Create Widget” in the bottom left

Screen Shot 2019-03-28 at 12.28.46 PM

I got a surprise: each shard instance was supposed to have the same load, but I can see something is wrong on shard4. I’ll investigate that as we go.

For now there are some options on the widget that I want to change. I want the graph to start at 0 (zero) and have a maximum of 4, since my instances have 2 vCPUs and I want to be able to glance at the graph and know where I’m at without having to read the axis every time. My maximum available CPU load is 2 AAS since I have 2 vCPUs; I set the max at 4 so there is some headroom to show load above 2.

There is a pull-down menu in the top right of the widget. Click it and choose “Edit”

Screen Shot 2019-03-28 at 12.31.25 PM

Click the third tab “Graph options” and enter 0 for min and for max enter a value above the # of vCPUs you have on your instance. I have 2 vCPUs, so I enter 4.

Screen Shot 2019-03-28 at 12.32.14 PM

click update in the bottom right.

Screen Shot 2019-03-28 at 12.34.30 PM

I can see I’m running a pretty good load, as shards 1-3 are running around 1.5 AAS on CPU, i.e. our SQL is asking for about 75% of the CPU capacity of the box (100% * (1.5 AAS CPU / 2 vCPU)).

I also see shard4 has a lower DBLoadCPU. Turns out I had turned off the load to that shard this morning and forgot, so I restarted it.

Now let’s add a widget to see how efficient our load is, i.e. what % of the load is waiting instead of being runnable on CPU.

Create a new widget and search for DBLoad and choose the 2 metrics DBLoadNonCPU & DBLoad for all 4 instances. We will use them in a mathematical expression.

Screen Shot 2019-03-28 at 12.40.19 PM

create the widget, then edit it,

uncheck all the metrics

then we click “Add a math expression”

add the expression 100*(DBLoadNonCPU/DBLoad)  for each instance

Screen Shot 2019-03-28 at 12.45.32 PM

Looks like

Screen Shot 2019-03-28 at 12.47.12 PM

You can see I restarted the load on shard4 because the DBLoadCPU has gone up.

Now for the new graph, click on the title and type “% bottleneck”

then edit it and set min to 0 and max to 100 (i.e. a 0-100% range); now it looks like

Screen Shot 2019-03-28 at 12.49.36 PM

Be sure to click “Save dashboard” so you don’t lose your work.
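
As an aside, the whole widget can also be defined as code rather than console clicks. Below is a sketch using boto3’s put_dashboard; the dashboard name “pi-cluster” is made up, and the metric math mirrors the expression entered above (repeat the three metric lines for shard2-shard4):

import json

import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")

widget = {
    "type": "metric",
    "properties": {
        "title": "% bottleneck",
        "region": "us-east-1",
        "yAxis": {"left": {"min": 0, "max": 100}},  # the 0-100% range set above
        "metrics": [
            # raw metrics hidden; only the math expression is plotted
            ["AWS/RDS", "DBLoadNonCPU", "DBInstanceIdentifier", "shard1",
             {"id": "n1", "visible": False}],
            ["AWS/RDS", "DBLoad", "DBInstanceIdentifier", "shard1",
             {"id": "d1", "visible": False}],
            [{"expression": "100*(n1/d1)", "label": "shard1", "id": "e1"}],
        ],
    },
}

cw.put_dashboard(DashboardName="pi-cluster",
                 DashboardBody=json.dumps({"widgets": [widget]}))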

Now what do we see? Well, now that I’ve restarted the load on shard4, “DBLoadCPU” shows the load is pretty evenly balanced.

On “% bottleneck” we see it’s pretty low except for shard1, which looks like it is spending a lot of its load waiting on resources. To find out what is happening we have to navigate to the PI dashboard for that instance.

Let’s go to the PI dashboard for shard1.

 

Screen Shot 2019-03-28 at 12.58.21 PM

 

We can see that on the left most of the load was not CPU. CPU is green; all the other colors are waits on resources.

This is the writer node, so there is other activity going on besides the selects that run on the reader nodes.

On the right-hand side we can see CPU load went up, so the ratio of wait load to CPU load and total load went down. This is what we see in the “% bottleneck” widget we made in Cloudwatch.

Now what are those resources that we are waiting on and what changed to make CPU go up? We can see that by exploring the PI dashboard.

For a demo on how to use PI to  identify and solve performance issues see

https://www.youtube.com/watch?v=yOeWcPBT458

 

 

Uncategorized

“delayed commit ok initiated” – Aurora MySQL

January 11th, 2019

“delayed commit ok initiated” is a thread state in Aurora MySQL which indicates the thread has started the async commit process and is waiting for it to be ack’d. You will not find this thread state in MySQL, as MySQL does not use our async commit protocol; it is Aurora MySQL specific. This is usually the genuine commit time of a transaction.

This is a “state” and not a wait.

Uncategorized

CLI for Amazon RDS Performance Insights

December 11th, 2018

Installing CLI on LINUX

1. install PIP

https://docs.aws.amazon.com/cli/latest/userguide/awscli-install-linux.html#awscli-install-linux-pip

curl -O https://bootstrap.pypa.io/get-pip.py
python get-pip.py --user

2. install AWS CLI

https://docs.aws.amazon.com/cli/latest/userguide/installing.html

pip install awscli --upgrade --user

3. configure

aws configure

For “aws configure” you will need

  • AWS Access Key ID:
  • AWS Secret Access Key:

You can get these from the AWS console by going to IAM and creating an access key.

Running example

Once “aws” is configured you can run the CLI like

aws \
 pi get-resource-metrics \
 --region us-east-1 \
 --service-type RDS \
 --identifier db-xxxxxx \
 --metric-queries "{\"Metric\": \"db.load.avg\"}" \
 --start-time `expr \`date +%s\` - 60480000 ` \
 --end-time `date +%s` \
 --period-in-seconds 86400

That “--identifier” is for one of my databases, so you will have to change it.
You will also have to modify the region if you are accessing a database in a different region.

getting json output

export AWS_DEFAULT_OUTPUT="json"

documentation

API

CLI

examples

My databases

  • db-YTDU5J5V66X7CXSCVDFD2V3SZM ( Aurora PostgreSQL)
  • db-2XQCJDBHGIXKDYUVVOIUIJ34LU ( Aurora MySQL)
  • db-Z2PNRYPV4J7LJLGDOKMISTWRQU (RDS MySQL)

see these blogs for loads on these databases

CPU load last 5 minutes

aws \
 pi get-resource-metrics \
 --region us-east-1 \
 --service-type RDS \
 --identifier db-YTDU5J5V66X7CXSCVDFD2V3SZM \
 --start-time `expr \`date +%s\` - 300 ` \
 --metric-queries '{
      "Metric": "db.load.avg",
      "Filter":{"db.wait_event.type": "CPU"}
      } ' \
 --end-time `date +%s` \
 --period-in-seconds 300

Top SQL by load

aws pi describe-dimension-keys \
    --region us-east-1 \
    --service-type RDS \
    --identifier db-YTDU5J5V66X7CXSCVDFD2V3SZM \
    --start-time `expr \`date +%s\` - 300 ` \
    --end-time `date +%s` \
    --metric db.load.avg \
    --group-by '{"Group":"db.sql"}'
    

Top Waits by load

aws pi describe-dimension-keys \
    --region us-east-1 \
    --service-type RDS \
    --identifier db-YTDU5J5V66X7CXSCVDFD2V3SZM \
    --start-time `expr \`date +%s\` - 300 ` \
    --end-time `date +%s` \
    --metric db.load.avg \
    --group-by '{"Group":"db.wait_event"}'

Top User by load

aws pi describe-dimension-keys \
    --region us-east-1 \
    --service-type RDS \
    --identifier db-YTDU5J5V66X7CXSCVDFD2V3SZM \
    --start-time `expr \`date +%s\` - 300 ` \
    --end-time `date +%s` \
    --metric db.load.avg \
    --group-by '{"Group":"db.user"}'


"Total": 0.15100671140939598, 
            "Dimensions": {
                "db.sql.db_id": "pi-4101593903", 
                "db.sql.id": "209554B4D97DBF72871AE0854DAD97385D553BAA", 
                "db.sql.tokenized_id": "1F61DDE1D315BB8F4BF198DB219D4180BC1CFE05", 
                "db.sql.statement": "WITH cte AS (\n   SELECT id     \n   FROM   authors \n   LIMIT  1     \n   )\nUPDATE authors s\nSET    email = 'toto' \nFROM   cte\nWHERE  s.id = cte.id\n\n"
            }

Top SQL by waits grouped

aws pi describe-dimension-keys \
    --region us-east-1 \
    --service-type RDS \
    --identifier db-YTDU5J5V66X7CXSCVDFD2V3SZM \
    --start-time `expr \`date +%s\` - 300 ` \
    --end-time `date +%s` \
    --metric db.load.avg \
    --group-by '{"Group":"db.sql"}' \
    --partition-by '{"Group": "db.wait_event"}'


{
            "Total": 0.1644295302013423, 
            "Dimensions": {
                "db.sql.db_id": "pi-4101593903", 
                "db.sql.id": "209554B4D97DBF72871AE0854DAD97385D553BAA", 
                "db.sql.tokenized_id": "1F61DDE1D315BB8F4BF198DB219D4180BC1CFE05", 
                "db.sql.statement": "WITH cte AS (\n   SELECT id     \n   FROM   authors \n   LIMIT  1     \n   )\nUPDATE authors s\nSET    email = 'toto' \nFROM   cte\nWHERE  s.id = cte.id\n\n"
            }, 
            "Partitions": [
                0.003355704697986577, 
                0.14093959731543623, 
                0.020134228187919462
            ]
        }, 
"PartitionKeys": [
        {
            "Dimensions": {
                "db.wait_event.type": "CPU", 
                "db.wait_event.name": "CPU"
            }
        }, 
        {
            "Dimensions": {
                "db.wait_event.type": "IO", 
                "db.wait_event.name": "IO:XactSync"
            }
        }, 
        {
            "Dimensions": {
                "db.wait_event.type": "Lock", 
                "db.wait_event.name": "Lock:transactionid"
            }
        }

Top SQL over last 5 minutes based on CPU

 aws pi describe-dimension-keys     \
    --region us-east-1     \
    --service-type RDS     \
    --identifier db-YTDU5J5V66X7CXSCVDFD2V3SZM     \
    --start-time `expr \`date +%s\` - 300 `     \
    --end-time `date +%s`     \
    --metric db.load.avg     \
    --group-by '{"Group":"db.sql"}'     \
    --filter '{"db.wait_event.type": "CPU"}'
{
    "Total": 0.003355704697986577,
    "Dimensions": {
        "db.sql.db_id": "pi-4101593903",
        "db.sql.id": "209554B4D97DBF72871AE0854DAD97385D553BAA",
        "db.sql.tokenized_id": "1F61DDE1D315BB8F4BF198DB219D4180BC1CFE05",
        "db.sql.statement": "WITH cte AS (\n SELECT id \n FROM authors \n LIMIT 1 \n )\nUPDATE authors s\nSET email = 'toto' \nFROM cte\nWHERE s.id = cte.id\n\n"
    }
}


alternatively with a partition by waits

aws pi describe-dimension-keys \
 --region us-east-1 \
 --service-type RDS \
 --identifier db-YTDU5J5V66X7CXSCVDFD2V3SZM \
 --start-time `expr \`date +%s\` - 300 ` \
 --end-time `date +%s` \
 --metric db.load.avg \
 --group-by '{"Group":"db.sql"}' \
 --partition-by '{"Group": "db.wait_event"}' \
 --filter '{"db.wait_event.type": "CPU"}'

CLI with counter metrics

aws \
 pi get-resource-metrics \
 --region us-east-1 \
 --service-type RDS \
 --identifier db-VMM7GRZMTGWZNPWAJOLWTHQDDE \
 --metric-queries "{\"Metric\": \"db.Transactions.xact_commit.avg\"}" \
 --start-time `expr \`date +%s\` - 3600 ` \
 --end-time `date +%s` \
 --period-in-seconds 60 \
 --endpoint-url https://api.integ.pi.a2z.com

Uncategorized

Aurora MySQL synch/mutex/innodb/aurora_lock_thread_slot_futex wait

September 18th, 2018

Thanks to Jeremiah Wilton for the following info:

This wait event indicates that there is a thread which is waiting on an InnoDB record lock. Check your database for conflicting workloads. More information on InnoDB locking can be found here: https://dev.mysql.com/doc/refman/5.6/en/innodb-locking.html

 

In other words, record-level lock conflicts are happening. More than one connection is trying to update the last_login for a particular id in the_table at the same time. Those connections are conflicting and serializing on the record lock for that id. Here’s a query that can help you identify the blocker and waiter for InnoDB record locks in MySQL-family engines. Run this when you see the aurora_lock_thread_slot_futex wait event in Performance Insights. In a future release of Performance Insights, we will automatically generate and display a similar blockers-and-waiters report when Performance Insights detects this event.

select p1.id waiting_thread, p1.user waiting_user, p1.host waiting_host, it1.trx_query waiting_query,
       ilw.requesting_trx_id waiting_transaction, ilw.blocking_lock_id blocking_lock, il.lock_mode blocking_mode,
       il.lock_type blocking_type, ilw.blocking_trx_id blocking_transaction,
       case it.trx_state when 'LOCK WAIT' then it.trx_state else p.state end blocker_state, il.lock_table locked_table,
       it.trx_mysql_thread_id blocker_thread, p.user blocker_user, p.host blocker_host
from information_schema.innodb_lock_waits ilw
join information_schema.innodb_locks il on ilw.blocking_lock_id = il.lock_id and ilw.blocking_trx_id = il.lock_trx_id
join information_schema.innodb_trx it on ilw.blocking_trx_id = it.trx_id
join information_schema.processlist p on it.trx_mysql_thread_id = p.id
join information_schema.innodb_trx it1 on ilw.requesting_trx_id = it1.trx_id
join information_schema.processlist p1 on it1.trx_mysql_thread_id = p1.id;

+----------------+--------------+---------------------+---------------------------------------+---------------------+--------------------+---------------+---------------+----------------------+---------------+----------------------+----------------+--------------+---------------------+
| waiting_thread | waiting_user | waiting_host        | waiting_query                         | waiting_transaction | blocking_lock      | blocking_mode | blocking_type | blocking_transaction | blocker_state | locked_table         | blocker_thread | blocker_user | blocker_host        |
+----------------+--------------+---------------------+---------------------------------------+---------------------+--------------------+---------------+---------------+----------------------+---------------+----------------------+----------------+--------------+---------------------+
|           1117 | reinvent     | 172.31.51.118:34734 | UPDATE sbtest8 SET k=k+1 WHERE id=125 | 888017450           | 888017113:88:6:17  | X             | RECORD        | 888017113            | LOCK WAIT     | `sysbench`.`sbtest8` |           1196 | reinvent     | 172.31.51.118:34888 |
|           1117 | reinvent     | 172.31.51.118:34734 | UPDATE sbtest8 SET k=k+1 WHERE id=125 | 888017450           | 888017089:88:6:17  | X             | RECORD        | 888017089            | LOCK WAIT     | `sysbench`.`sbtest8` |           1431 | reinvent     | 172.31.51.118:35366 |
|           1117 | reinvent     | 172.31.51.118:34734 | UPDATE sbtest8 SET k=k+1 WHERE id=125 | 888017450           | 888015342:88:6:17  | X             | RECORD        | 888015342            | LOCK WAIT     | `sysbench`.`sbtest8` |           1680 | reinvent     | 172.31.51.118:35868 |
.
.
+----------------+--------------+---------------------+----------------------------------------+---------------------+-

Also the following:
https://dev.mysql.com/doc/refman/5.6/en/innodb-information-schema-examples.html

SELECT
  r.trx_id waiting_trx_id,
  r.trx_mysql_thread_id waiting_thread,
  r.trx_query waiting_query,
  b.trx_id blocking_trx_id,
  b.trx_mysql_thread_id blocking_thread,
  b.trx_query blocking_query
FROM       information_schema.innodb_lock_waits w
INNER JOIN information_schema.innodb_trx b
  ON b.trx_id = w.blocking_trx_id
INNER JOIN information_schema.innodb_trx r
  ON r.trx_id = w.requesting_trx_id;
+----------------+----------------+----------------------------------------+-----------------+-----------------+----------------------------------------+
| waiting_trx_id | waiting_thread | waiting_query                          | blocking_trx_id | blocking_thread | blocking_query                         |
+----------------+----------------+----------------------------------------+-----------------+-----------------+----------------------------------------+
| 917169041      |           2822 | UPDATE sbtest5 SET k=k+1 WHERE id=126  | 917169007       |            2296 | UPDATE sbtest5 SET k=k+1 WHERE id=126  |
| 917169041      |           2822 | UPDATE sbtest5 SET k=k+1 WHERE id=126  | 917168488       |            2214 | UPDATE sbtest5 SET k=k+1 WHERE id=126  |
| 917169025      |           3069 | UPDATE sbtest2 SET k=k+1 WHERE id=125  | 917168945       |            2700 | UPDATE sbtest2 SET k=k+1 WHERE id=125  |
.
.
+----------------+----------------+----------------------------------------+-----------------+-----------------+----------------------------------------+

see AWS forum post at https://forums.aws.amazon.com/thread.jspa?threadID=289484

Uncategorized

Is NFS on ZFS slowing you down?

January 25th, 2018

If you think so, check out shell script “ioh.sh” from github at  https://github.com/khailey/ioh

Introduction and Goals

The goal of ioh.sh is to measure both the throughput and latency of the different code layers when using NFS mounts on a ZFS appliance. The ZFS appliance code layers inspected with the script are I/O from the disks, ZFS layer and the NFS layer. For each of these layers the script measures the throughput, latency and average I/O size. Some of the layers are further broken down into other layers. For example NFS writes are broken down into data sync, file sync and non-sync operations and NFS reads are broken down into cached data reads and reads that have to go to disk.

The three primary questions ioh is used to answer are:

  • Is I/O latency from the I/O subsystem to ZFS appliance sufficiently fast?
  • Is NFS latency from ZFS appliance to the consumer sufficiently fast?
  • Is ZFS adding unusual latency?

One: If the latency from the I/O subsystem is not adequate, then look into supplying a better performing I/O subsystem for ZFS appliance. For example, if the goal is 3ms write times per 1K redo write but the underlying I/O subsystem is taking 6ms, then it will be impossible for ZFS appliance to meet those expectations.

Two: If the latency for NFS responses from ZFS appliance is adequate and yet the NFS client reports latencies that are much slower (more than 2ms slower), then one should look instead at problems in the NIC, network or NFS client host; see network tracing, for example http://datavirtualizer.com/tcp-trace-analysis-for-nfs/

Three: If the I/O latency is sufficiently fast but ZFS latency is slow, then this could indicate a problem in the ZFS layer.

The answer to the question “what is adequate I/O latency?” depends. In general a single random 8 KB block read on Oracle is expected to take 3-12 ms on average, so a typical latency is around 7.5 ms. NOTE: when measuring I/O latency on the source system it’s important to use a tool like “iostat” that shows the actual I/Os to the subsystem. The I/O measured by the Oracle database will include both I/Os satisfied from the host file system cache and those from the I/O subsystem, unless the database is running with direct I/O.

The ioh tool can also give insight into other useful information such as

  • Are IOPs getting near the supported IOPs of the underlying I/O subsystem?
  • Is NFS throughput getting near the maximum bandwidth of the NIC?

For example, if the NIC is 1GbE then the maximum bandwidth is about 115MB/s, and generally 100MB/s is a good rule of thumb for the max. If throughput is consistently near the NIC maximum, then demand is probably going above the maximum and thus increasing latency.
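
A quick back-of-the-envelope version of that rule of thumb (a sketch, not measured numbers):

def nic_limit_mb_s(gbits=1.0):
    # raw line rate: 1 Gb/s = 1e9 bits/s / 8 = 125 MB/s; protocol overhead
    # brings usable NFS throughput down to roughly 115 MB/s
    return gbits * 1e9 / 8 / 1e6

def near_saturation(mb_in, mb_out, gbits=1.0):
    # flag when combined throughput is within ~15% of the wire speed
    return (mb_in + mb_out) > 0.85 * nic_limit_mb_s(gbits)

print(nic_limit_mb_s())           # 125.0 MB/s raw for 1GbE
print(near_saturation(5.2, 8.1))  # False: ~13 MB/s is far below a 1GbE ceiling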

$ ioh.sh -h

usage: ./ioh.sh options

collects I/O related dtrace information into file "ioh.out"
and displays the

OPTIONS:
  -h              Show this message
  -t  seconds     runtime in seconds, defaults to forever
  -c  seconds     cycle time ie time between collections, defaults to 1 second
  -f  filename    change the output file name [defaults to ioh.out]
  -p              parse the data from output file only,  don't run collection
  -d  display     optional extra data to show: [hist|histsz|histszw|topfile|all]
                    hist    - latency histogram for NFS,ZFS and IO both reads and writes
                    histsz  - latency histogram by size for NFS reads
                    histszw - latency histogram by size for NFS writes
                    topfile - top files accessed by NFS
                    all     - all the above options
                  example
                    ioh.sh -d "histsz topfile"

Two optional environment variables:

  • CWIDTH – histogram column width
  • PAD – character between columns in the histograms, null by default

Running

$ sudo ./ioh.sh 

Outputs to the screen and puts the raw output into the default file name “ioh.out.[date]”. The default output file name can be changed with the “-f filename” option. The raw output can later be formatted with

$ ./ioh.sh -p  ioh.out.2012_10_30_10:49:27

By default it will look for “ioh.out”. If the raw data is in a different file name it can be specified with “-f filename”

The output looks like

date: 1335282287 , 24/3/2012 15:44:47
TCP out:  8.107 MB/s, in:  5.239 MB/s, retrans:        MB/s  ip discards:
----------------
            |       MB/s|    avg_ms| avg_sz_kb|     count
------------|-----------|----------|----------|--------------------
R |      io:|     0.005 |    24.01 |    4.899 |        1
R |     zfs:|     7.916 |     0.05 |    7.947 |     1020
C |   nfs_c:|           |          |          |        .
R |     nfs:|     7.916 |     0.09 |    8.017 |     1011
- 
W |      io:|     9.921 |    11.26 |   32.562 |      312
W | zfssync:|     5.246 |    19.81 |   11.405 |      471
W |     zfs:|     0.001 |     0.05 |    0.199 |        3
W |     nfs:|           |          |          |        .
W |nfssyncD:|     5.215 |    19.94 |   11.410 |      468
W |nfssyncF:|     0.031 |    11.48 |   16.000 |        2

The sections are broken down into

  • Header with date and TCP throughput
  • Reads
  • Writes

Reads and writes are further broken down into

  • io
  • zfs
  • nfs

For writes, the non-stable-storage writes are separated from the writes to stable storage, which are marked as “sync” writes. For NFS, the sync writes are further broken down into “data” and “file” sync writes.

examples:

The following will refresh the display every 10 seconds and display an extra four sections of data

$ sudo ./ioh.sh -c 10 -d "hist histsz histszw topfile"   

date: 1335282287 , 24/3/2012 15:44:47
TCP out:  8.107 MB/s, in:  5.239 MB/s, retrans:        MB/s  ip discards:
----------------
            |       MB/s|    avg_ms| avg_sz_kb|     count
------------|-----------|----------|----------|--------------------
R |      io:|     0.005 |    24.01 |    4.899 |        1
R |     zfs:|     7.916 |     0.05 |    7.947 |     1020
R |     nfs:|     7.916 |     0.09 |    8.017 |     1011
- 
W |      io:|     9.921 |    11.26 |   32.562 |      312
W | zfssync:|     5.246 |    19.81 |   11.405 |      471
W |     zfs:|     0.001 |     0.05 |    0.199 |        3
W |     nfs:|           |                     |        .
W |nfssyncD:|     5.215 |    19.94 |   11.410 |      468
W |nfssyncF:|     0.031 |    11.48 |   16.000 |        2
---- histograms  -------
    area r_w   32u   64u   .1m   .2m   .5m    1m    2m    4m    8m   16m   33m   65m    .1s   .3s   .5s    1s    2s    2s+
R        io      .     .     .     .     .     .     .     .     .     1     3     1
R       zfs   4743   287    44    16     4     3     .     .     .     1     2     2
R       nfs      .  2913  2028    89    17     3     .     .     .     1     2     2
-
W        io      .     .     .    58   249   236    50    63   161   381   261    84    20     1
W       zfs      3    12     2
W   zfssync      .     .     .     .    26   162   258   129   228   562   636   250    75    29
W       nfs      .     .     .     .    12   164   265   134   222   567   637   250    75    29
--- NFS latency by size ---------
    ms   size_kb
R   0.1    8     .  2909  2023    87    17     3     .     .     .     1     2     2
R   0.1   16     .     4     5     2
-
W   5.0    4     .     .     .     .     8    49    10     3     4    11     4     2     1
W  21.4    8     .     .     .     .     4    55   196    99   152   414   490   199    60    23
W  18.3   16     .     .     .     .     .    34    29    25    43    91    84    28     8     5
W  16.1   32     .     .     .     .     .    19    16     7    14    38    36    15     3
W  19.3   64     .     .     .     .     .     6    11     .     9    11    19     6     2     1
W  20.4  128     .     .     .     .     .     1     3     .     .     2     4     .     1
---- top files ----
   MB/s                  IP  PORT filename
W  0.01MB/s  172.16.103.196 52482 /domain0/group0/vdb17/datafile/home/oracle/oradata/swingb/control01.ora
W  0.02MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/control01.ora
W  0.57MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/undo.dbf
W  0.70MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/redo3.log
W  3.93MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/opt/app/10.2.0.4/db_1/dbs/soe.dbf
-
R  0.01MB/s  172.16.100.102 39938 /domain0/group0/vdb12/datafile/home/oracle/oradata/kyle/control01.ora
R  0.01MB/s  172.16.103.196 52482 /domain0/group0/vdb17/datafile/home/oracle/oradata/swingb/control01.ora
R  0.02MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/control01.ora
R  0.05MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/undo.dbf
R  7.84MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/opt/app/10.2.0.4/db_1/dbs/soe.dbf
IOPs         313

Sections

The first line is the date. The second line is TCP MB per second in, out and retransmitted. The last value is “ip discards”.

The three parts are all related and form a drill-down, starting with coarse-grained data at the top and moving to finer-grained data at the bottom.

  • averages – default
  • histograms [hist]
  • histograms by size reads [histsz] writes [histszw] for NFS

The first section is a quick overview.

The second section breaks out the latency into a histogram so one can get an indication of how much I/O is served from memory (i.e. the I/Os in the microsecond ranges) as well as how far out the outliers are. (Are the outliers on the VDBs matching up to the outliers seen on ZFS appliance?)

The third section differentiates between the latency of single-block random reads (typically the 8K size) and the latency of multi-block sequential reads (32K and higher). The differentiation is important when comparing to Oracle stats, which are grouped into single-block random reads (called “db file sequential read”) and sequential multi-block reads (called “db file scattered read”).

The final section

top files read and write [topfile]

is sort of a sanity check: there are periods when there is supposed to be little to no NFS I/O and yet there is, so the top files section tells which file, and which host, the NFS I/O is coming from.

The last line after all the sections is the total IOPs for reads plus writes. (Note these IOPs can get converted to higher values at the storage layer if using RAID5, which causes each write to become two reads plus two writes.)
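
To make that write amplification concrete, a small sketch (assuming the classic RAID5 small-write penalty):

def backend_iops(reads, writes, raid5=True):
    # RAID5 small-write penalty: each write = read old data + read old parity
    # + write new data + write new parity = 4 backend I/Os
    return reads + (4 * writes if raid5 else writes)

print(backend_iops(100, 200))  # 900 backend IOPs for 300 frontend IOPs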

The first section shows up by default; the other sections require command-line arguments.

To see just the first section, which is the default, run ioh.sh without any arguments:

sudo ./ioh.sh

To show non-default sections, add them to the command line

sudo ./ioh.sh -d "hist histsz histszw topfile"

A shortcut for all sections is “all”

sudo ./ioh.sh  -d all

Collecting in the background

nohup sudo ./ioh.sh -c 60 -t 86400 &

Runs the collection for 1 day (86400 seconds), collecting every 60 seconds, and puts the raw output into the default file name “ioh.out”. The default output file name can be changed with the “-f filename” option.

1. Averages:

This displays I/O, ZFS and NFS data for both reads and writes. The data is grouped to help correlate the different layers. The first line is the date in epoch format.

columns

  • MB/s – MB transferred a second
  • avg_ms – average operation time
  • avg_sz_kb – average operation size in kb
  • count – number of operations

example

             |      MB/s|     mx_ms| avg_sz_kb|     count
 ------------|----------|----------|----------|--------------------
 R |      io:|  through | average  |  average |      count of operations
 R |     zfs:|  put     |latency   | I/O size | 
 R |     nfs:|          |millisec  | in KB    |   
 - 
 W |      io:|          |          |          |
 W | zfssync:|          |          |          |                                         
 W |     zfs:|          |          |          |                                         
 W |     nfs:|          |          |          |                                         
 W |nfssyncD:|          |          |          |                                         
 W |nfssyncF:|          |          |          |                                         

For writes

  • zfssync – these are synchronous writes. These should mainly be Oracle redo writes.
  • nfs – unstable storage writes
  • nfssyncD – data sync writes
  • nfssyncF – file sync writes

DTrace probes used

  • io:::start/done check for read or write
  • nfs:::op-read-start/op-read-done , nfs:::op-write-start/op-write-done
  • zfs_read:entry/return, zfs_write:entry/return

2. Histograms

latency distribution for i/o, zfs and nfs, for both reads and writes. These distributions are not normalized by time, i.e. if ioh.d outputs once a second then these counts will be equal to the counts in the first section; if ioh.d outputs every 10 seconds, then these values will be 10x higher.

3. Histograms by size for reads and writes

The first column is the average latency for the I/O size on that line. The second column is the size; each size bucket includes this size and every size down to the previous bucket. The goal here is to show the sizes of I/Os and the different latencies for different sizes. For an Oracle database with an 8K block size, 8K reads will tend to be random, whereas larger read sizes will be multi-block requests and represent sequential reads. It’s common to see the 8K reads running slower than the larger reads.

4. Top files

shows the top 5 files for reads and writes. First column is MB/s, then R or W, then IP, then port then filename

Examples and Usage

Idle system

First thing to look at is the MB in and out which answers

  • “how busy is the system?”
  • “is NFS throughput approaching the limits of the NIC?”

In the following example there is less than 50KB/s of total NFS throughput (in plus out), thus the system isn’t doing much, and there must be no database activity other than the regular maintenance processes which are always running on a database. To confirm this, one can look at the top files at the bottom and see that the only activity is on the control files, which are read and written as part of database system maintenance. Otherwise there is no activity to speak of, so no reason to look at I/O latency in this case. Additionally, the majority of what little I/O there is comes in 16K sizes, which is typical of control file activity, whereas default database data block activity comes in 8K sizes. Most read I/O is coming from ZFS appliance cache, as it takes around 64 microseconds.

date: 1335282646 , 24/3/2012 15:50:46
TCP  out:  0.016 MB/s, in:  0.030 MB/s, retrans:        MB/s  ip discards:
----------------
            |       MB/s|    avg_ms| avg_sz_kb|     count
------------|-----------|----------|----------|--------------------
R |      io:|           |          |          |        .
R |     zfs:|     0.016 |     0.01 |    1.298 |       13
R |     nfs:|     0.016 |     0.10 |   16.000 |        1
- 
W |      io:|     0.365 |     4.59 |    9.590 |       39
W | zfssync:|     0.031 |    14.49 |   16.000 |        2
W |     zfs:|     0.001 |     0.07 |    0.199 |        3
W |     nfs:|           |          |          |        .
W |nfssyncD:|     0.003 |          |          |        .
W |nfssyncF:|     0.028 |    14.33 |   14.400 |        2
---- histograms  -------
    area r_w   32u   64u   .1m   .2m   .5m    1m    2m    4m    8m   16m   33m   65m    .1s   .3s   .5s    .5s+
R        io      .
R       zfs     60     5
R       nfs      .     .     5
-
W        io      .     .     .    20    43    60    11    11     8    28    17     1
W       zfs      2     8     5     2
W   zfssync      .     .     .     .     .     .     2     .     2     5     1     1
W       nfs      .     .     .     .     .     .     2     .     2     5     1     1
--- NFS latency by size ---------
    ms   size_kb
R   0.1   16     .     .     5
-
W          8     .     .     .     .     .     .     .     .     .     1     .     1
W  16.0   16     .     .     .     .     .     .     2     .     2     4     1
---- top files ----
   MB/s                  IP  PORT filename
W  0.00MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/control01.ora
W  0.01MB/s  172.16.100.102 39938 /domain0/group0/vdb12/datafile/home/oracle/oradata/kyle/control01.ora
W  0.01MB/s  172.16.103.133 59394 /domain0/group0/vdb13/datafile/home/oracle/oradata/kyle/control01.ora
W  0.01MB/s   172.16.100.69 39682 /domain0/group0/vdb14/datafile/home/oracle/oradata/kyle/control01.ora
W  0.01MB/s  172.16.103.196 52482 /domain0/group0/vdb17/datafile/home/oracle/oradata/swingb/control01.ora
-
R  0.00MB/s   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/control01.ora
R  0.01MB/s  172.16.100.102 39938 /domain0/group0/vdb12/datafile/home/oracle/oradata/kyle/control01.ora
R  0.01MB/s  172.16.103.196 52482 /domain0/group0/vdb17/datafile/home/oracle/oradata/swingb/control01.ora
IOPs          39

Active system

Below is an example of an active system. Looking at TCP bytes in and out, there is a fair bit: 3MB/s out and 2MB/s in. These rates are a long way from saturating 1GbE, but there is activity going on.

Reads: all reads are coming out of the cache. How do we know? For one, the average latency is 0.07 ms, i.e. 70 microseconds. Does this 70us include slower reads that might be off disk? Looking at the histogram, one can see that the slowest ZFS I/O is under 100us, and looking just above at the I/O histogram there are no I/Os at all, thus all the I/O is coming from cache.

Writes: writes are pretty slow. Oracle redo writes on good systems are typically 3ms or less for small redo. Unfortunately most of the I/O is coming from datafile writes, so it’s difficult to tell what the redo write times are (maybe worth enhancing ioh.d to show average latency by file). Typically the redo does “nfssyncD” writes, while datafile writes are simply unstable-storage “nfs” writes that get synced later. This particular database is using the Oracle parameter “filesystemio_options=setall” which implements direct I/O. Direct I/O can work without sync writes, but the implementation depends on the OS; this OS implementation, OpenSolaris, causes all direct I/O writes to be sync writes.

date: 1335284387 , 24/3/2012 16:19:47
TCP out:  3.469 MB/s, in:  2.185 MB/s, retrans:        MB/s  ip discards:
----------------
            |       MB/s|    avg_ms| avg_sz_kb|     count
------------|-----------|----------|----------|--------------------
R |      io:|           |          |          |        .
R |     zfs:|     3.387 |     0.03 |    7.793 |      445
R |     nfs:|     3.384 |     0.07 |    8.022 |      432
- 
W |      io:|     4.821 |    12.08 |   24.198 |      204
W | zfssync:|     1.935 |    38.50 |   11.385 |      174
W |     zfs:|     0.001 |     0.06 |    0.199 |        3
W |     nfs:|           |          |          |        .
W |nfssyncD:|     1.906 |    39.06 |   11.416 |      171
W |nfssyncF:|     0.028 |    14.59 |   14.400 |        2
---- histograms  -------
    area r_w   64u   .1m   .2m   .5m    1m    2m    4m    8m   16m   33m   65m    .1s   .3s   .3s+
R        io      .
R       zfs   2185    34     5     .     1
R       nfs    903  1201    47     8     1
-
W        io      .     .    19   142   143    46    42   108   240   212    57    12     1
W       zfs     13     3     1
W   zfssync      .     .     .     .    10     6     .    21    60   384   287    86    16
W       nfs      .     .     .     .    10     5     .    21    60   384   287    86    16
--- NFS latency by size ---------
    ms   size_kb
R   0.1    8   900  1199    47     7     1
R   0.1   16     3     2     .     1
-
W  17.7    4     .     .     .     .     3     1     .     2     5     3     3
W  41.1    8     .     .     .     .     3     .     .    13    35   292   231    76    13
W  34.0   16     .     .     .     .     3     3     .     4    13    61    30     8     2
W  39.0   32     .     .     .     .     .     1     .     .     2    16    14     2     1
W  28.3   64     .     .     .     .     1     .     .     .     2     9     8
W  26.2  128     .     .     .     .     .     .     .     2     3     2     1
W        256     .     .     .     .     .     .     .     .     .     1
---- top files ----
   MB/s             IP  PORT filename
R  0.01   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/control01.ora
R  0.01  172.16.103.196 52482 /domain0/group0/vdb17/datafile/home/oracle/oradata/swingb/control01.ora
R  0.02   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/system.dbf
R  0.02   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/undo.dbf
R  3.33   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/opt/app/product/dbs/soe.dbf 
-
W  0.01   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/control01.ora
W  0.01  172.16.103.196 52482 /domain0/group0/vdb17/datafile/home/oracle/oradata/swingb/control01.ora
W  0.15   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/undo.dbf
W  0.30   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/oradata/sol/redo1.log
W  1.46   172.16.100.81 21763 /domain0/group0/vdb16/datafile/export/home/opt/app/product/dbs/soe.dbf 
IOPs         204

ZFS read layer problem

          |      MB/s|    avg_ms|  avg_sz_kb
----------|----------|----------|-----------
R |   io :|    88.480|      4.60|     17.648 
R |  zfs :|    19.740|      8.51|     12.689 
R |  nfs :|    16.562|     22.67|     30.394 

In this case the ZFS I/O at 19MB/s is higher than NFS at 16MB/s. That could be because something is accessing the file system locally on ZFS appliance, or because ZFS is doing read-ahead, so there are possible explanations, but it’s interesting. Second, subsystem I/O at 88MB/s is much greater than ZFS I/O at 19MB/s. Again that is notable. It could be because there is a scrub going on (to check for a scrub, run “zpool status”; to turn off a scrub run “zpool scrub -s domain0”, though the scrub has to be run at some point). Both are interesting observations.

Now the more interesting parts. The NFS response time of 22ms is almost 3x the average ZFS response time of 8ms. On the other hand, the average size of an NFS I/O is 2.5x the average ZFS I/O size, so that might be understandable. The hard part to understand is that the ZFS latency of 8ms is twice the latency of subsystem I/O at 4ms, yet the average I/O subsystem read is bigger than the average ZFS read. This doesn’t make any sense.

In this case, to home in on the data a bit, it would be worth turning off the scrub if one was running and seeing what the stats look like, to eliminate a factor that could be muddying the waters.

But in this case, even without a scrub going, the ZFS latency was 2-3x slower than the I/O subsystem latency.

It turns out ZFS wasn’t caching and was spending a lot of time trying to keep the ARC clean.

ZFS write layer problem

           |       MB/s|    avg_ms| avg_sz_kb|     count
-----------|-----------|----------|----------|--------------------
W |     io:|     10.921|     23.26|    32.562|       380
W |    zfs:|    127.001|     37.95|     0.199|      8141
W |    nfs:|     0.000 |     0.00 |    0.000 |        0

NFS is 0 MB/s because this traffic was over HTTP. The current version of ioh would show the TCP MB/s. This version also mixed zfs sync and non-sync writes into one bucket, but much of the ZFS writes have to be non-sync, because the write rate is 127MB/s whereas the I/O subsystem writes are only 10MB/s; thus at least 117MB/s are not sync writes, and if they are not sync they are just memory writes and should be blindingly fast, but they aren’t. The average latency for the ZFS writes is 37ms. Even more shockingly, the average size is only 0.199K, whereas the I/O subsystem writes 32K in 23ms. The cause here was that, because of disk errors, the ZFS layer was self-throttling way too much. This was a bug.

Uncategorized

IP CIDR rules and address ranges

November 3rd, 2017

I always forget IP address range coverage rules and forget where to look.

It’s the wiki!

https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing

and for good reference here is the table:

/32 is a single address

/24 is a range in the last octet: x.x.x.0 through x.x.x.255

/16 is a range in the last two octets: x.x.0.0 through x.x.255.255

Screen Shot 2017-11-03 at 11.28.31 AM
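
Python’s ipaddress module is a quick way to sanity-check these rules:

import ipaddress

for cidr in ("10.1.2.3/32", "10.1.2.0/24", "10.1.0.0/16"):
    net = ipaddress.ip_network(cidr)
    print(cidr, "->", net.num_addresses, "addresses:", net[0], "-", net[-1])

# 10.1.2.3/32 -> 1 addresses: 10.1.2.3 - 10.1.2.3
# 10.1.2.0/24 -> 256 addresses: 10.1.2.0 - 10.1.2.255
# 10.1.0.0/16 -> 65536 addresses: 10.1.0.0 - 10.1.255.255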

Uncategorized

Best method for tuning sub-optimal execution plans

October 18th, 2017

How do you determine if the Oracle SQL optimizer has created a sub-optimal execution plan? Re-run statistics and see what happens? Wait for Oracle to find other execution plans? What if neither method helps? Do you read the execution plan? What do you look at? Differences between actual and estimated rows? How successful is that? Look for full table scans? Do you look at the 10053 trace? How much time and effort does that take? What do you look at in the 10053 trace? Do you have a systematic methodology that works in almost all cases?

Well there is a method that is reliable and systematic. It’s laid out in Dan Tow’s book SQL Tuning.

The method is tedious, as it requires a lot of manual work: drawing join trees, identifying constraints and relationships, and manually decomposing and executing every two-table join in the statement. It can add up to a lot of work, but it is systematic and dependable.

The cool thing is that it can all be done automatically with a tool called DB Optimizer, which now (as I look today) only costs about $400.

If you cost your company $50/hour, then after 8 hours of saved work it has paid for itself. In my experience as a DBA, there are several SQL statements a year that take me over a day to optimize manually but that I can get done in a few minutes with DB Optimizer. Thus with just one hard SQL statement the tool has paid for itself. The DB Optimizer analysis might run for a couple of hours, but afterwards, with the data it collects and presents, I can find a better tuning path in minutes if one exists.

 

Here is previous blog post that gives some an overview

Here is a video that explains the method. (Same presentation in a different video.)

Slides from the videos.

Here is Dan Tow’s book SQL Tuning that originally laid out the method.

Here is post by Jonathan Lewis demonstrating the method.

Pick up a copy. I think it’s super cool and I’m interested in feedback on your experiences.

 

sql9

Uncategorized