automate or die – the revolution will be automated

December 27th, 2013


photo by David Blackwell.

The worst enemy of companies today is the belief that they already have the best processes, that their IT organizations are using the latest and greatest technology, and that nothing better exists in the field. This mentality will be the undoing of many companies.

The status quo is pre-ordained failure.

Innovation has always separated the victors in industry, and innovation constantly pushes the boundary of what can be automated. Most tech companies today are essentially American auto companies before Ford.



                   Other guys       Ford Model T
    Price          $2,000-$3,000    $850
    Assembly       made by hand     assembly line
    Time to build  12.5 hours       1.5 hours


By 1930, 250 car companies had died.

What we do manually is constantly being automated. Why? Because automation saves time, improves efficiency, and enables feedback control. Without automation, tasks have to be done manually, which increases errors.

People make mistakes, call in sick, forget to perform tasks, leave and get things wrong. A computer, when properly programmed, gets it right all the time. – Hilton Lipschitz

If a task involves multiple steps done by different people, the total time to completion rises exponentially. Why does time to completion rise when different people handle different steps of a task? Because of the queueing time between people.

The book, The Phoenix Project, lays out the impact of passing a task to another person or group in the following graphic:


[Graphic from The Phoenix Project: wait time for a task vs. how busy the responsible resource is]

What the graphic shows is that the busier the person responsible for the next step, the longer it takes them to complete the task, and the wait time rises exponentially with how busy they are. At one major customer of mine on Wall Street, a single request to provision a copy of a database required 34 approvals. No wonder that customer typically took 3-6 months to provision a copy of a database.
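The curve in that graphic can be sketched numerically: a common queueing rule of thumb is that the relative wait time for a resource scales as its busy/idle ratio. A minimal sketch, with illustrative numbers rather than measured ones:

```python
# Rule-of-thumb wait model from The Phoenix Project discussion:
# relative wait time = %busy / %idle. Units are relative, not hours.

def wait_factor(percent_busy):
    """Relative time a task waits in someone's queue."""
    if not 0 <= percent_busy < 100:
        raise ValueError("percent_busy must be in [0, 100)")
    return percent_busy / (100 - percent_busy)

for busy in (50, 80, 90, 95, 99):
    print(f"{busy}% busy -> relative wait {wait_factor(busy):.0f}x")
```

Going from 50% busy to 99% busy multiplies the wait by roughly 100x, which is why a chain of 34 approvals through busy people stretches into months.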

If you are the CIO of a company and you want to improve the productivity and agility of your development teams, what is the most important improvement you can make? That depends on what the bottleneck is in your development organization. The only way to improve the efficiency of an organization is to improve the efficiency of its main constraint, the bottleneck. Improving efficiency elsewhere is an illusion at best and detrimental at worst. Only by tuning the bottleneck will you increase productivity. Once you find the bottleneck, automate it if possible.
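The constraint logic above can be shown with a toy pipeline model (the stage names and rates are hypothetical, chosen only to illustrate the point):

```python
# Theory-of-constraints sketch: a pipeline's throughput equals that of
# its slowest stage, so tuning any other stage changes nothing.

def pipeline_throughput(stage_rates):
    """Tasks/day the whole pipeline can complete."""
    return min(stage_rates)

stages = {"dev": 20, "provisioning": 2, "qa": 10}  # tasks/day each stage can handle
print(pipeline_throughput(stages.values()))  # 2: provisioning is the bottleneck

stages["qa"] = 50  # speed up a non-bottleneck stage: no effect
print(pipeline_throughput(stages.values()))  # still 2

stages["provisioning"] = 8  # automate the bottleneck: 4x throughput
print(pipeline_throughput(stages.values()))  # 8
```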

Considering IT as a data factory, we can apply DFA (design-for-assembly) principles to data and enumerate them as:

  • Reduce the number of people, steps, and parts needed to get data clones provisioned.
  • Simplify and standardize the steps that must remain.
  • Automate provisioning wherever possible.
  • Use standard data management processes across data clones of all flavors.
  • Simplify the process to connect data clones to the hosts and applications that need them.
  • Encapsulate or eliminate the need for special knowledge to get a data clone to operate; make it Dead Simple.

The first task is to find out what the bottleneck is. The most common bottleneck is provisioning environments for development and test.

Operations can never deliver environments upon demand. You have to wait months or quarters to get a test environment. When that happens, terrible things happen. People actually hoard environments. They invite people to their teams because they know they have a reputation for having a cluster of test environments, so people end up testing on environments that are years old, which doesn’t actually achieve the goal. – Gene Kim

The most powerful thing that orgs can do is to enable dev and testing to get environments when they need them. – Gene Kim

Much environment provisioning has already been automated with Puppet, Chef, and virtual machines. What has not been automated until recently is the provisioning of database copies. What is the impact of not automating database provisioning? What is the cost to companies of being constrained by the enormous bureaucracy of provisioning QA, development, and reporting environments?

The impacts we have seen include:

  • 96% of QA cycle time spent building QA environments instead of testing
    • single-threaded QA work because of limited ability to provision concurrent environments
  • 95% of data storage spent on duplicate data, with storage limits constraining and impeding what data can be copied
  • 90% of developer time lost waiting for data in development environments
  • 50% of DBA time spent making database copies, constraining availability for other important work
  • 20% of production bugs slipping in because of using data subsets in development and QA

A clear indication of the impact is to compare efficiencies before and after automating database provisioning with virtual databases. After companies have implemented Delphix automated virtual database provisioning, we see:

  • QA going from 4% efficiency to 99% efficiency, meaning 99% of a QA cycle is spent actually running the QA suite instead of waiting for an environment build
    • accelerated QA work, with the ability to provision many environments concurrently
  • petabytes of storage freed, and little to no limit on the number of environments that can be provisioned
  • development team output doubled or more
  • DBAs going from 8,000 hours/year of database copying to 8 hours
  • elimination of bugs slipping into production due to using old or subset data for QA
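The storage savings above come from the way virtual databases share data: a clone materializes only the blocks it changes (copy-on-write), rather than duplicating the whole source. A toy sketch of the idea, not the actual Delphix implementation:

```python
# Toy copy-on-write clone: the clone reads shared parent blocks and
# stores only the blocks it overwrites. Illustrative sketch only.

class CowClone:
    def __init__(self, parent_blocks):
        self.parent = parent_blocks  # shared, read-only source blocks
        self.delta = {}              # blocks this clone has changed

    def read(self, i):
        # A changed block comes from the delta; anything else is shared.
        return self.delta.get(i, self.parent[i])

    def write(self, i, data):
        self.delta[i] = data

    def private_blocks(self):
        return len(self.delta)       # storage this clone actually consumes

source = [f"block{i}" for i in range(1000)]  # a 1000-block "database"
clone = CowClone(source)
clone.write(7, "patched")

print(clone.read(7))           # patched
print(clone.read(8))           # block8 (shared with the parent)
print(clone.private_blocks())  # 1 block of new storage, not 1000
```

Ten clones of a terabyte-scale source therefore cost only the blocks each clone changes, which is why duplicate-data storage collapses once provisioning is virtualized.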

Delphix accelerates application releases, driving revenue growth while driving costs down.

This is why Delphix is used by:

  • Fortune #1: Walmart
  • #1 pharmaceutical: Pfizer
  • #1 social network: Facebook
  • #1 US bank: Wells Fargo
  • #1 networking: Cisco
  • #1 cable provider: Comcast
  • #1 auction site: eBay
  • #1 insurance: New York Life
  • #1 chip manufacturer: Intel

The list goes on.

“What is so provocative about that notion is that any improvement not made at the constraint is an illusion. If you fix something before the constraint, you end up with more work piled up in front of the constraint. If you fix something after the constraint, you will always be starved for work. In most transformations, if you look at what’s really impeding flow, the fast flow of features from development to operations to the customer, it’s typically IT operations. … Operations can never deliver environments upon demand. You have to wait months or quarters to get a test environment. When that happens, terrible things happen. People actually hoard environments. They invite people to their teams because they know they have a reputation for having a cluster of test environments, so people end up testing on environments that are years old, which doesn’t actually achieve the goal. … One of the most powerful things that organizations can do is to enable development and testing to get the environments they need when they need them.” – Gene Kim





  Narendra:


    Good post, and while I agree with the sentiment, I am praying my colleagues don’t read this post. At my workplace, people are suffering from what I like to call “Compulsive Automation Disorder”. The main problem with automation (or, for that matter, anything) is that it is good when done in moderation and, most importantly, with KNOWLEDGE. What would you say to people who call themselves Oracle DBAs and then try to automate database recovery? Really? Should anybody even be thinking of automating database recovery? I don’t think so.
    Another problem with automation I have seen is people blindly using automated tasks. They treat those automated tasks as “black boxes” and don’t always understand what they do and how they work. I compare it with the habit of copy-pasting, especially code snippets from the internet.
    In my experience (limited to Oracle DBA activities), I am more than happy to automate work AFTER I have a complete understanding of what the work is and how it is done, which, in turn, should give me the ability to “open the bonnet” when things go wrong. Last, but not least, one should periodically review automated tasks and be prepared to upgrade them (or else you can end up with an “automated” database backup job that still uses the user-managed hot/cold backup method implemented in the 8i days but continued into 11gR2…)

  khailey:

    Yes, too much custom automation can cause problems: it reinvents the wheel and in the end is often much less powerful than buying a packaged solution.
    One colleague calls some of these in-house custom automations “self-service Frankensteins”:
    * If only one programmer – single point of failure
    * If multiple programmers – how expensive was it to build vs. buy?
    * Time to market (TTM) – how long did it take/is it taking to build?
    * How much is each delayed day costing in lost productivity?
    * Functionality – how rich and efficient is the interface?
    * Maintenance – lack of a formal support plan and established SLAs
    * Longevity – no product/feature roadmap, no long-term development plans
    * Agility – business demands outpacing feature releases
    * Adoption – viewed as a “pet project” vs. a critical business service
    * Value – lack of proper requirements gathering and needs analysis results in missing features and some nobody wants
    * Stability – lack of full, proper testing results in errors uncovered in production or failure of the product
    All of these things, plus more, will keep your Frankenstein from graduating from a science project to a first-rate enterprise business service, replete with all the trappings that entails (dedicated development, O&M budget, SLAs, etc.).
