Oracle: compare DB Time and CPU time to ASH

April 26th, 2019
Below I'm comparing AAS from ASH versus SYSMETRIC: specifically, ASH CPU in AAS vs SYSMETRIC CPU in AAS, and ASH total AAS vs SYSMETRIC total AAS derived from DB Time. ASH CPU is consistently lower than SYSMETRIC CPU, which could be because waits burn CPU but show up as waits, not CPU, in ASH. On the other hand, I'm not sure why total AAS from SYSMETRIC is consistently smaller than total AAS from ASH. Def v_secs=60 -- bucket size select DBTT.begin_time ,ASH_CPU, DBT_CPU , ASH_AAS, DBT_AAS from ( select to_char(begin_time,'MON DD YYYY HH24:MI') begin_time, ...
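The post's full query is truncated above; as a rough sketch of the same comparison (not the post's query), ASH and V$SYSMETRIC_HISTORY can be bucketed per minute, with the metric-group filter below being an assumption about which group holds the 60-second system metrics:

    -- sketch only: compare ASH-derived AAS to SYSMETRIC-derived AAS per minute
    -- assumes 1-second ASH sampling, so count(*)/60 approximates AAS per minute
    select ash.minute, ash.ash_cpu_aas, ash.ash_total_aas,
           sm.cpu_aas sysmetric_cpu_aas, sm.total_aas sysmetric_total_aas
    from ( select trunc(cast(sample_time as date),'MI') minute,
                  sum(case when session_state = 'ON CPU' then 1 else 0 end)/60 ash_cpu_aas,
                  count(*)/60 ash_total_aas
           from   v$active_session_history
           group by trunc(cast(sample_time as date),'MI') ) ash
    join ( -- 'CPU Usage Per Sec' and 'Database Time Per Sec' are reported in centiseconds per second
           select trunc(begin_time,'MI') minute,
                  max(case when metric_name = 'CPU Usage Per Sec'     then value end)/100 cpu_aas,
                  max(case when metric_name = 'Database Time Per Sec' then value end)/100 total_aas
           from   v$sysmetric_history
           where  group_id = 2  -- assumption: group 2 is the 60-second system metric group
           and    metric_name in ('CPU Usage Per Sec','Database Time Per Sec')
           group by trunc(begin_time,'MI') ) sm
      on ash.minute = sm.minute
    order by ash.minute;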


Uncategorized

Want to change the future Amazon RDS performance monitoring?

April 10th, 2019
UPDATE: All slots for this study have been filled. On the other hand, I would still love your feedback. Please send any ideas about what you'd like to see in RDS performance monitoring to me at kylelf at amazon.com. Thanks, Kyle. Have you ever used the Amazon RDS console to manage performance on an RDS database and had ideas on how to make it better? The Amazon UX Research team for AWS is collaborating with me to recruit for an upcoming user research study on Amazon Relational Database Service (RDS) performance monitoring. Sessions will take place between Monday, April 22, 2019 and Friday, April 26, 2019. In this study Amazon is looking to ...


Uncategorized

Honeycomb.io for DB Load and Active Sessions

April 3rd, 2019
Honeycomb.io turns out to be a nice solution for collecting, retrieving and displaying multi-dimensional time series data, i.e. the kind of data you get from sampling. For example, in the database world we have Active Session History (ASH), which at its core tracks: when (timestamp), who (user), command (what SQL they are running), and state (are they runnable on CPU, or are they waiting, and if waiting, what are they waiting for: I/O, lock, latch, buffer space, etc.). This information is pretty easy to collect and store in a relational database, as I did when creating S-ASH (Simulated ASH), which Marcin Przepiorowski built upon over ...
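For reference, here's a minimal sketch of the kind of row one such sample produces when stored relationally (illustrative column names, not the actual S-ASH schema):

    -- sketch only: one row per active session per sample
    create table ash_samples (
      sample_time   timestamp,      -- when: the sample timestamp
      session_id    number,         -- who: the database session
      user_name     varchar2(128),  -- who: the user
      sql_id        varchar2(13),   -- command: the SQL being run
      session_state varchar2(10),   -- state: 'ON CPU' or 'WAITING'
      wait_event    varchar2(64)    -- if waiting: I/O, lock, latch, buffer space, ...
    );

    insert into ash_samples
    select systimestamp, sid, username, sql_id,
           decode(state, 'WAITING', 'WAITING', 'ON CPU'), event
    from   v$session
    where  status = 'ACTIVE' and type = 'USER';

Each of those columns maps naturally onto an event field, which is what makes this kind of sampled data a good fit for a tool like Honeycomb.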


Uncategorized

Amazon RDS cluster dashboard with Performance Insights

March 28th, 2019
Amazon RDS Performance Insights (PI) doesn't have a single-pane-of-glass dashboard for clusters, yet. Currently PI has a dashboard that has to be looked at for each instance in a cluster. On the other hand, one can create a simple cluster dashboard using CloudWatch. PI, when enabled, automatically sends three metrics to CloudWatch every minute: DBLoad, DBLoadCPU and DBLoadNonCPU, where DBLoad = DBLoadCPU + DBLoadNonCPU. These metrics are measured in units of Average Active Sessions (AAS). AAS is like the run queue in the UNIX top command, except at the database level: AAS is the average number of SQL queries running concurrently in the database. In the ...


Uncategorized

“delayed commit ok initiated” – Aurora MySQL

January 11th, 2019
“delayed commit ok initiated” is a thread state in Aurora MySQL which indicates the thread has started the async commit process but is waiting for it to be acknowledged. You will not find this thread state in MySQL, as MySQL does not use our async commit protocol; it is Aurora MySQL specific. This is usually the genuine commit time of a transaction. This is a "state" and not a wait.


Uncategorized

CLI for Amazon RDS Performance Insights

December 11th, 2018
Installing CLI on LINUX: 1. install pip: https://docs.aws.amazon.com/cli/latest/userguide/awscli-install-linux.html#awscli-install-linux-pip curl -O https://bootstrap.pypa.io/get-pip.py python get-pip.py --user 2. install AWS CLI: https://docs.aws.amazon.com/cli/latest/userguide/installing.html pip install awscli --upgrade --user 3. configure: aws configure For "aws configure" you will need an AWS Access Key ID and AWS Secret Access Key, which you can get by going to the AWS console, navigating to IAM, and creating an access key. Running example: once "aws" is configured you can run the CLI like aws \ pi get-resource-metrics \ --region us-east-1 \ --service-type RDS \ --identifier db-xxxxxx \ --metric-queries "{\"Metric\": \"db.load.avg\"}" \ --start-time `expr \`date +%s\` - 60480000 ` \ --end-time `date +%s` \ --period-in-seconds 86400 That "--identifier" is for one of my databases, so you will have to change ...


Uncategorized

Aurora MySQL synch/mutex/innodb/aurora_lock_thread_slot_futex wait

September 18th, 2018
Thanks to Jeremiah Wilton for the following info: This wait event indicates that there is a thread which is waiting on an InnoDB record lock. Check your database for conflicting workloads. More information on InnoDB locking can be found here: https://dev.mysql.com/doc/refman/5.6/en/innodb-locking.html   In other words, record-level lock conflicts are happening. More than one connection is trying to update the last_login for a particular id in the_table at the same time. Those connections are conflicting and serializing on the record lock for that id. Here's a query that can help you identify the blocker and waiter for InnoDB record locks in MySQL-family engines. Run ...
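The post's query is truncated here; as a rough sketch of the idea (not the post's query), the MySQL 5.6/5.7 information_schema lock tables can be joined to pair each waiter with its blocker:

    -- sketch only: blocker and waiter for InnoDB record lock waits (MySQL 5.6/5.7)
    select r.trx_mysql_thread_id waiting_thread,
           r.trx_query           waiting_query,
           b.trx_mysql_thread_id blocking_thread,
           b.trx_query           blocking_query
    from   information_schema.innodb_lock_waits w
    join   information_schema.innodb_trx r on r.trx_id = w.requesting_trx_id
    join   information_schema.innodb_trx b on b.trx_id = w.blocking_trx_id;

In MySQL 8.0 these information_schema tables were replaced; the equivalent view there is sys.innodb_lock_waits.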


Uncategorized

Is NFS on ZFS slowing you down?

January 25th, 2018
If you think so, check out the shell script "ioh.sh" on github at https://github.com/khailey/ioh Introduction and Goals: The goal of ioh.sh is to measure both the throughput and latency of the different code layers when using NFS mounts on a ZFS appliance. The ZFS appliance code layers inspected with the script are I/O from the disks, the ZFS layer and the NFS layer. For each of these layers the script measures the throughput, latency and average I/O size. Some of the layers are further broken down into other layers. For example, NFS writes are broken down into data sync, file sync and non-sync operations, and NFS ...


Uncategorized

IP CIDR rules and address ranges

November 3rd, 2017
I always forget the IP address range coverage rules and where to look them up. It's the wiki! https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing and for a quick reference: /32 is a single address; /24 covers the range of the last octet, x.x.x.0 (256 addresses); /16 covers the range of the last two octets, x.x.0.0 (65,536 addresses).


Uncategorized

Best method for tuning sub-optimal execution plans

October 18th, 2017
How do you determine if the Oracle SQL optimizer has created a sub-optimal execution plan? Re-run statistics and see what happens? Wait for Oracle to find other execution plans? What if neither method is helping? Do you read the execution plan? What do you look at? Differences in actual vs estimated? How successful is that? Look for full table scans? Do you look at the 10053 trace? How much time and effort does that take? What do you look at in the 10053 trace? Do you have a systematic methodology that works in almost all cases? Well, there is a method that is reliable ...


Uncategorized