Archive for January, 2016

SQL – is there a better way?

January 23rd, 2016

Next Tuesday at 11:20 am at Oracle headquarters there will be a panel discussion on SQL. You can participate whether or not you are there, as we are collecting questions ahead of time and will record the panel and post the video later. The panel’s participants are listed in the Yes SQL! announcement post below.

You can add your questions to the list of questions here on the Google spreadsheet or in the comments on this post.

Is it worth discussing webscale? Webscale seems like a joke word, mainly from that infamous “NoSQL is webscale” video, though ironically I’ve seen it in new industry announcements even this week.

General areas of interest: What is the future of SQL? Will there always be SQL, or will something replace it? How can we improve SQL? Where is the industry headed with SQL, NoSQL, and other ways of storing and retrieving data?

Currently the questions are:

  1. Why is SQL getting a bad rap?
    • Isn’t SQL slow? Why is it so slow?
    • Aren’t alternatives to SQL (XQuery, JSON find()) easier to use?
    • If SQL is so great, how do you explain the rise of NoSQL databases?
    • How much of the problem with Oracle SQL is its divergence from Codd’s relational algebra (allowing duplicate rows, NULLs, etc.)?
    • Chris Date, Fabian Pascal and others have expressed dismay at the differences between industry SQL and relational algebra as set down by Codd. How much of a problem are these differences really?
  2. Is SQL hard to learn?
    • Is SQL a *real* programming language? Is it a functional language?
    • I’ve heard that I need to know relational algebra in order to be effective with SQL, is that true?
    • I’m an object-oriented developer, what’s an easy way for me to learn and use SQL? ***
    • How do I write loops in SQL?
    • Joins look painful — isn’t there an easier way?
    • I don’t want to filter entire tables; why doesn’t SQL let me look at the first few rows?
    • I’ve heard of JDBC, but it sounds really low level, like C; is there an easier way for me to interact with SQL?
    • How do I represent sparse data in SQL?
    • Is there a good IDE I can use for SQL? Anything other than SQL Developer?
  3. Why does Oracle have a NoSQL product?
    • Oracle seems to have flip-flopped on NoSQL. Why? Is it because Oracle EE and SE have too much overhead for OLTP and ACID compliance?
      1. In May 2011 Oracle recommended against NoSQL in a white paper titled “Debunking the NoSQL Hype,” the final advice being “Go for the tried and true path. Don’t be risking your data on NoSQL databases.” *
      2. In Sept 2011 Oracle released the Oracle NoSQL database **
    • Is Oracle NoSQL webscale? What is webscale?
    • NoSQL can do 5.6 million queries per second. Can Oracle do the same?
    • Why is Hadoop so popular even though Oracle SQL can run circles around HBase, Hive, Pig, Impala, etc.?
  4. What’s the difference between PL/SQL and SQL?
    • What does PL/SQL bring to the performance table? (question for Bryn and Steven)
  5. 12c Release 2 performance and reliability features e.g. Sharding
  6. Oracle Developer Advocate program (question for Steven)
  7. There was a time when Oracle only had “declarative referential integrity,” which still lives on as DISABLE RELY NOVALIDATE. When will Oracle implement the SQL-92 CREATE ASSERTION for arbitrary constraints? There is a rumor that Oracle is thinking about it.




Oracle suggested that the NoSQL approach was well-suited for certain use-cases:
“The Oracle NoSQL Database, with its ‘No Single Point of Failure’ architecture,
is the right solution when data access is “simple” in nature and application demands exceed the volume or latency capability of traditional data management solutions.  For example, click-stream data from high volume web sites, high-throughput event processing and social networking communications all represent application domains that produce extraordinary volumes of simple keyed data. Monitoring online retail behavior, accessing customer profiles, pulling up appropriate customer ads and storing and forwarding real-time communication are examples of domains requiring the ultimate in low-latency access. Highly distributed applications such as real-time sensor aggregation and scalable authentication also represent domains well-suited to Oracle NoSQL Database.”

*** “Using tables to store objects is like driving your car home and then disassembling it to put it in the garage. It can be assembled again in the morning, but one eventually asks whether this is the most efficient way to park a car.” (incorrectly attributed to Esther Dyson, the editor of Release 1.0).


The big questions for me are

  • Will there ever be an alternative to SQL?
  • Are there ways to improve SQL significantly?
  • Have we learned anything from NoSQL? Has NoSQL shown up any major flaws in SQL and the relational model?

The questions wanting to know if there are alternatives to doing joins struck me as strange when I first saw them. On reflection, they bring up the old argument of whether to normalize or denormalize. In the denormalized model we can often access all the data we want without a join, but we lose the flexibility of the normalized model.
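To make the trade-off concrete, here is a small sketch using Python’s built-in sqlite3 (the tables and names are invented for illustration): in the normalized design, answering “who placed this order?” requires a join; the denormalized copy answers it without one, but duplicates the customer name into every order row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized: each customer attribute lives in exactly one place.
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")
cur.execute("INSERT INTO customers VALUES (1, 'Acme')")
cur.execute("INSERT INTO orders VALUES (100, 1, 9.99)")

# "Who placed order 100, and for how much?" needs a join.
joined = cur.execute("""
    SELECT c.name, o.amount
    FROM orders o JOIN customers c ON c.id = o.customer_id
    WHERE o.id = 100""").fetchone()

# Denormalized: the same question is a single-table lookup,
# but renaming the customer now means touching every order row.
cur.execute("CREATE TABLE orders_flat (id INTEGER PRIMARY KEY, customer_name TEXT, amount REAL)")
cur.execute("INSERT INTO orders_flat VALUES (100, 'Acme', 9.99)")
flat = cur.execute("SELECT customer_name, amount FROM orders_flat WHERE id = 100").fetchone()

print(joined, flat)  # ('Acme', 9.99) ('Acme', 9.99)
```

The denormalized read is simpler, which is exactly its appeal; the cost shows up on writes and on any query the flattened shape didn’t anticipate.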




Jenkins Plugin for Delphix

January 19th, 2016

In my last blog I talked about trying out Jenkins. In this blog post I want to talk about the new Jenkins plugin for Delphix.

Delphix plugin

The Delphix plugin is easy to add. Just navigate to “Manage Jenkins” in the top left.


Then click on “Manage Plugins”


Then type in “delphix” in the search bar and then click on the “Delphix Jenkins plugin” to install


Now we can access Delphix APIs through the Jenkins interface. If you access a job (pick a job and click “configure” in the top left, or create a new job) and then click “add build step” under Build, you will see the new Delphix command options.

Before we can use these build steps, we have to add the Delphix engine(s) to Jenkins.

To add the Delphix engine(s), click on “Manage Jenkins” in the top left.


Then you should see a long page. In the middle of this page you should see a “Delphix” section


Add the Delphix engine(s) and hit “save” at the bottom of the screen.

Now when I add a build step I can choose a Delphix action like “add environment”, and when I click “run now” it adds the environment.


Trying out Jenkins

January 15th, 2016

Jenkins seems to be the top code build, integration and QA automation tool to support continuous delivery.

Jenkins lets me kick off jobs on a schedule or after certain events, and it keeps track of the status and results of those jobs.

To get Jenkins installed, I followed the installation instructions and then a getting-started guide to try my first Jenkins example.

I installed Jenkins on one of the VMs from the “landshark” demo environment for Delphix Express. Landshark consists of two Linux CentOS VMs that provide source and target examples for Delphix Express. I used the “target” VM but could just as easily have used the “source” machine.

Installing Jenkins

To install Jenkins I did:

sudo wget -O /etc/yum.repos.d/jenkins.repo <jenkins repo URL>

sudo rpm --import <jenkins repo signing key URL>

sudo yum install jenkins

service jenkins start

This gives the message “OK” but Jenkins actually dies. The workaround was to edit /etc/sysconfig/jenkins and change one of its settings.




Starting Jenkins by hand also worked, even without the workaround:

cd /usr/lib/jenkins 
java -jar jenkins.war

I then accessed the Jenkins console at    http://my_linux_vm:8080

Running Jenkins

Click on “create new jobs”


Name the job

Select “Freestyle project”


Schedule the “date” command to be run every 5 minutes by doing the following:

  • Fill in “Build Triggers” by checking “Build periodically” and type in “H/5 * * * *”
  • Under build choose “Execute Shell” and type in “echo `date`”
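As an aside, “H/5 * * * *” is ordinary cron syntax (minute, hour, day of month, month, day of week) except for the H token: Jenkins hashes the job name into a stable offset so that many every-5-minutes jobs don’t all fire on the same minute. A rough Python sketch of the idea (the hash function here is illustrative, not the one Jenkins actually uses):

```python
import zlib

def h_slash_minutes(job_name, step):
    """Minutes at which a job with trigger 'H/step * * * *' would run:
    every `step` minutes, shifted by a per-job offset derived from a
    stable hash of the job name."""
    offset = zlib.crc32(job_name.encode()) % step
    return [m for m in range(60) if m % step == offset]

# Two jobs with the identical H/5 trigger usually land on different
# offsets, spreading load instead of piling up at :00, :05, :10, ...
print(h_slash_minutes("build-frontend", 5))
print(h_slash_minutes("nightly-report", 5))
```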


After hitting “Save” and letting 5 minutes pass, you’ll see a new entry in “Build History” in the bottom left. Click on “#1”.


Then click “Console Output” and you will see the output of the shell script we scheduled to run every 5 minutes.




Yes SQL ! the conference Jan 26 & 27 at Oracle

January 13th, 2016

Wow, this is big – a Yes SQL conference !

Finally ! It’s here ! Yes SQL ! the conference!

At Oracle headquarters !

Send me your questions!

Have questions about SQL? about No SQL? about Yes SQL? About how Oracle is faring in the industry with SQL? I’ll be moderating a SQL panel discussion with

  • Andy Mendelsohn (Executive Vice President for Database Server Technologies)
  • George Lumpkin (Vice President, Product Management)
  • Bryn Llewellyn (Distinguished Product Manager)
  • Steven Feuerstein (Developer Advocate)
  • Mohamed Zait (Architect), who will explain Oracle’s strategy

Send me your questions – ideally just put them in the comments below – and I’ll ask them in the panel discussion.

Is Oracle webscale? Come ask the panel.

The Conference

SQL is fascinating, and almost every business depends on it to some degree. SQL permeates modern IT and is the #1 method of data access, so it’s fitting that there is finally a conference dedicated to SQL.

Here are some of the rockstar SQL speakers at the conference:

  • Steven Feuerstein
  • Janis Griffin
  • Bryn Llewellyn
  • Andy Mendelsohn
  • Kerry Osborne
  • Tanel Poder
  • Mauro Pagano
  • Carlos Sierra


Check out the full 3 day conference agenda.

Coordinating with BIWA

Yes SQL! is coordinating with BIWA. The first 2 days are Yes SQL! (Jan 26 and 27) and the 3rd day (Jan 28) is BIWA (the Business Intelligence, Warehousing and Analytics conference).

Full day SQL class

On Jan 28 a full-day (8 am-5 pm) SQL tuning workshop is being offered, presented by Carlos Sierra and Mauro Pagano. Seats are limited, so sign up early here.

Talk Examples


Register here.




January 13th, 2016

I knew I had to try out Neto’s Graphora when I saw that it was a Docker container that collects Oracle performance data and sends it to Graphite. Graphite has been on my radar as I see and hear about more and more companies using it to graph and monitor performance data. Any graphical Oracle performance tool interests me, and recently I’ve been spending time investigating Docker, so having everything rolled up in Graphora sounded perfect.

To run Graphora you need Docker. It’s easy enough to do a yum install docker, but I decided to use a VM with Docker already installed. The VM image I used was from the Delphix Express Landshark environment, and you can use this image as well if you’d like. Of the three machine images that come with the Landshark environment, I used the source machine image.

The basic architecture: the Graphora Docker container connects to a running Oracle database, collects I/O statistics, and sends them to Graphite running in another Docker container.
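In outline, a collector like this is just a sampling loop: read the cumulative counters from V$SYSSTAT, diff them against the previous sample, and ship the per-interval deltas over Graphite’s plaintext protocol. Here is a minimal Python sketch of that loop (my own illustration, not Graphora’s actual code; `read_sysstat` stands in for a hypothetical function that queries the database):

```python
import socket
import time

def deltas(prev, curr):
    """V$SYSSTAT counters are cumulative, so each sample is diffed
    against the previous one to get the activity for the interval."""
    return {name: curr[name] - prev.get(name, 0) for name in curr}

def send_metrics(host, port, stats, prefix="oracle"):
    # Graphite's plaintext protocol: one "path value timestamp" line per metric.
    ts = int(time.time())
    with socket.create_connection((host, port), timeout=5) as sock:
        for name, value in stats.items():
            path = "{}.{}".format(prefix, name.replace(" ", "_"))
            sock.sendall("{} {} {}\n".format(path, value, ts).encode())

def collect_loop(read_sysstat, graphite_host, graphite_port=2003, interval=10):
    prev = read_sysstat()  # e.g. {'physical reads': 1234, ...}
    while True:
        time.sleep(interval)
        curr = read_sysstat()
        send_metrics(graphite_host, graphite_port, deltas(prev, curr))
        prev = curr
```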


Start Docker

# service docker start
Starting cgconfig service:                         [  OK  ]
Starting docker:                                   [  OK  ]

Pull Graphora docker container image

# docker pull netofrombrazil/graphora
latest: Pulling from netofrombrazil/graphora

Enable sys connections over SQL*Net

$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/orapw${ORACLE_SID} password=change_on_install

Otherwise you get the error

ORA-01031: insufficient privileges

on the following command when trying to create the graphora user.

Create graphora user

Then run

# docker run -ti netofrombrazil/graphora --host  --port 1521 --sid XEXE --create
Enter sys password: change_on_install
Creating user graphora
Grant access for user graphora to create sessions
Grant select privilege on V$SESSION_EVENT, V$SYSSTAT, V$STATNAME for user graphora
GraphOra is ready to collect your performance data!

I’m creating the graphora user on my “source” landshark VM, whose SID is XEXE. The flags “-ti” mean interactive (-i) and allocate a tty (-t).

Startup Graphite

The “-p” flags map ports in the container to ports on the host running the container.

Graphite uses these ports

  • 80: the graphite web interface
  • 2003: the carbon-cache line receiver (the standard graphite protocol)
  • 2004: the carbon-cache pickle receiver
  • 7002: the carbon-cache query port (used by the web interface)

So the command is

docker run -p 80:80 -p 2003:2003 -p 2004:2004 -p 7002:7002 -d nickstenning/graphite

Now you can access Graphite at the URL of the machine.

Startup Graphora

Now the big moment: start up Graphora, sending performance data from my XEXE database to Graphite.

# docker run -ti netofrombrazil/graphora \
              --host \
              --port 1521 \
              --sid XEXE \
              --interval 10 \
              --graphite graphs-db \
              --graph-port 2003
 phyReads: 0 phyWrites: 0 dbfsr: 0.00 lfpw: 0.00
 Could not connect to graphs-db on port 2003.

Whoops, what went wrong?

What went wrong

The first thing to do was to see whether Graphite was actually working correctly. I decided to send data to Graphite with netcat:

export PORT=2003
export SERVER=
echo "local.random.diceroll 4 `date +%s`" | nc  ${SERVER} ${PORT}

And yes, this works: I can see a new “local” category of data in the Graphite web interface. Graphite is working correctly, so the problem must be in how I call Graphora.
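The same sanity check works from Python if netcat isn’t handy; the line receiver just expects “path value timestamp” terminated by a newline. A small sketch (the server address is left for you to fill in, as above):

```python
import socket
import time

def graphite_line(path, value, ts=None):
    """Build one line of Graphite's plaintext protocol."""
    if ts is None:
        ts = int(time.time())
    return "{} {} {}\n".format(path, value, ts)

def send_line(server, port, line):
    # Equivalent of: echo "..." | nc $SERVER $PORT
    with socket.create_connection((server, port), timeout=5) as sock:
        sock.sendall(line.encode())

line = graphite_line("local.random.diceroll", 4)
print(line.strip())
# send_line("your_graphite_host", 2003, line)  # fill in your server
```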

Having never used Graphite nor Graphora before, I wasn’t sure what all the command line options were, and I got one of the arguments wrong. But how could I find out what the command line arguments were? They aren’t outlined on Neto’s blog, and they aren’t on the Docker download page.

So I dug around to find out what is in Neto’s package. I discovered the command

docker diff [name of container]

which shows the new and modified files in the container.

# docker diff graphora
C /opt
C /opt/graphOra
C /opt/graphOra/graphOra
C /opt/graphOra/lib
C /opt/graphOra/lib/database.pyc

There aren’t that many files in the container. The next question: how do I see the files that are inside the container? I have to exec a bash shell into the container (or know where the actual mapped files are):

docker exec -i -t graphora bash

This gives me a shell in the container where I can view the files with vi. I then open the graphOra file

# vi /opt/graphOra/graphOra

and find the command line arguments:

import argparse

def cli_args():
  parser = argparse.ArgumentParser()
  parser.add_argument('--host', required=True, help='Oracle Database server')
  parser.add_argument('--port', required=True, help='Database listener port')
  parser.add_argument('--sid', required=True, help='Database SID')
  parser.add_argument('--user', default='graphora', help='User to connect into the database')
  parser.add_argument('--password', default='WeLoveThisCompany', help='user db password')
  group = parser.add_mutually_exclusive_group()
  group.add_argument('--create', action='store_true', help='Setup graphOra to monitor the database instance')
  group.add_argument('--interval', default=10, type=int, help='Interval time in seconds between each data collection')
  parser.add_argument('--graphite', default=' ', help='Specify IP or Hostname of Graphite Server')
  parser.add_argument('--graph-port', type=int, help='Specify Graphite Server port')
  parser.add_argument('--print-stats', default='true', choices=['true','false'], help='Print stats')
  parser.add_argument('--project', default='oracle', help='Specify a string to identify your data in Grafana')
  return parser.parse_args()

Getting it right

My problem was that I thought “--graphite” might be the default name of the Graphite database. Since Graphite collects data, I figured maybe it has its own database with a default name like the “graphs-db” I saw in Neto’s command line example. As we can see above, “--graphite” is actually the IP or hostname of the host running Graphite. In my case everything (the Oracle XE database, Graphite and Graphora) is running on my one machine. With that change it now works:

# docker run -ti netofrombrazil/graphora \
              --host \
              --port 1521 \
              --sid XEXE \
              --interval 10 \
              --graphite \
              --graph-port 2003

I can then plot the values. In the top left of the Graphite interface I can click on “netapp” and continue expanding until I get to the I/O metrics.


I can then click on any of the metrics to see the graph.

If I click on the “dashboard” menu item in the top right, I can click on the metrics and get a graph for each metric, building a dashboard.


Choose metrics by clicking on the names in the top right. Each time you choose a name, the next level of names is shown, until you reach the actual metric.

The UI might be a little more intuitive in tree mode. To change to tree mode, select “Dashboard” then “Configure UI”.


Then select “Tree”


Now the interface shows the metric hierarchy as a tree.


Looking at Neto’s blog, his graph UI looked much different from and nicer than the Graphite graphs. I poked around, and it looks like his graph was a Grafana graph. Grafana works with Graphite and there is a Docker version of it, so I started it up:

docker run -i -p 3000:3000 grafana/grafana

Now I can access Grafana on my machine at port 3000. Go to the URL your_machine:3000 and you will see a login page.


Log in with admin/admin.


If you don’t see the menus on the left, click the Grafana icon in the top right and the menus should appear. The first step is to add the Graphite data source. Click on “Data Source” on the right.


Then click “Add new” in the top right.


The main thing is to add the URL for Graphite, which in my case is simply the IP of my machine.

Name the data source “graphora” in this example.

Now click on “home” in the top menu and then “+New” in the bottom buttons.


Click on the green line in the top middle and choose “Add Panel -> Graph”


Set the data source in the bottom left for the dashboard to be “graphora” data


Then click “select metric” in the middle of the screen just below the “metric” tab label.

When you click “select metric” you will see the metrics collected by Graphora, which all start with the prefix “netapp”.


You have to choose the full precision of the name, like

netapp perf oracle 172 16 100 11 XEXE dbfsr_latency

but for the other metrics you can choose “duplicate” and then just change the value of the last field, which is quicker.


With all 5 metrics chosen, my dashboard is complete.

Summary and Reflections

The whole combination of technologies is interesting – Docker, Oracle, Graphora,  Graphite, Grafana etc.

In this particular case I wonder if it would be easier to just distribute the two files necessary to run Graphora instead of a Docker container. I don’t know. Docker is cool. Docker makes a lot of sense for things that have many files, and for things I want to run in multiple concurrent versions without the versions interfering, but is it overkill for a couple of Python files? On the other hand, with Docker one doesn’t need any download or install instructions. It’s as simple as “docker run container”.

When it comes to data collection and graphing, it’s attractive that these packages take care of so much of the scripting and plumbing. For example, when collecting data I tend to either stuff it in flat files or stuff it in Oracle. Neither approach is great: flat files are inefficient and require a lot of coding, while Oracle is huge and makes deployment a bear if one depends on it. With Graphite it’s great that aggregation and purging routines come built in.

When it comes to graphing, I’ve been completely happy with Highcharts, though with Highcharts I have to create my own HTML files with JavaScript, etc., whereas in Graphite and Grafana it’s just point and click. On the other hand, a lot of the optimizations, aggregations, graph parent-child relationships, and custom displays seem difficult to impossible with these packages.

All in all, quite intriguing. If I want to get going fast, Graphite might be a good way to display a bunch of statistics in line graphs. On the other hand, the problem I see missed routinely in the industry is that creating performance dashboards is a lot more than pasting up a bunch of line graphs. Data visualization, in order to be powerful, needs a lot of intelligence in which metrics are chosen and how they are aggregated, correlated and displayed. Often the display requires customization and new approaches to displaying data visually. What I want to see more of are pre-set dashboards that intelligently lay out and correlate data for particular use cases. Use cases in the realm of performance monitoring of IT systems tend to cover the same types of situations: where time is being spent, how much CPU, I/O, concurrency contention, etc.

A cool discovery with Docker: to get a list of files created or modified by a container, run

docker diff container_name

Then I can get into the container with exec bash to look at or modify these files

docker exec -i -t graphora bash

If I modify the files, I can then restart the container

docker restart graphora

To see the docker containers that exist and/or are running, use the command

docker ps -a


Graphite uses the Whisper database by default.

Still to explore: Graphite vs Grafana, as well as storage database options such as InfluxDB and Cyanite (Cassandra).