Why Data Agility is More Valuable than Schema Consolidation

November 20th, 2013

If you’ve been an Oracle data professional for any length of time, you’ve undoubtedly had a conversation about reducing your total number of databases by consolidating the application schemas from each individual database into separate schemas in one monolithic database. Certainly, in the days before shared dictionaries, pluggable databases, and thin cloning, this argument was easy to make because of the high price of database licenses. Consolidation is a winning argument, but doing it at the cost of data agility turns it into a loser. Let’s explore why.

The Argument For Schema Consolidation

Essentially, the pro side of this argument is cost driven. Using 18 schemas in 1 database instead of 18 databases with 1 schema each means:
• Fewer database licenses – and this usually holds true even if you use CPU-based pricing.
• Fewer hosts – even if you need a slightly larger host to handle the traffic, it’s usually cheaper than N individual hosts, especially when you consider the management cost.
• Less storage – the binaries and dictionary objects are often identical, yet we end up storing them on disk and in memory N times, and then backing them up to disk even more times.
• Less CPU and I/O – 18 databases simply require more CPU and I/O than 18 schemas in one database, even if you push the same traffic to the 18 schemas.
• Less database management overhead – fewer databases to manage means less time spent managing.

The Argument Against Schema Consolidation

The con side of this argument is very sensitive to conditions. In some cases, I’ve seen these cons amount to very little cost and effort. In other cases, even without the newer technologies, the cost was so high that the approach was abandoned. Key things that usually cause consolidation efforts trouble include:
• Namespace Clobber – especially when you’re consolidating applications that weren’t designed to coexist, all sorts of namespace conflicts can arise. I’ve seen synonyms, database links, and packages wreak havoc on a consolidation effort, sometimes even requiring extensive re-coding because of the nature of the beast.
• No Isolability – traffic, resources, and limits are no longer fully isolated. Heavy traffic to one schema can affect the performance of another. A huge update may rob you of good response times. A crash in one application can cause a domino effect, whether the failure begins at the database tier or the app tier. One failure affects all.
• Conjoined Backup and Restore – often the database is backed up as a collective and restored as one, so great care must be exercised when only the single schema behind a single app needs a restore. You can work around this by creating a separate tablespace for each schema and using tablespace point-in-time recovery (TSPITR) – see the RMAN sketch after this list – but that itself takes time and resources.
• Risky Planned and Unplanned Outages – if you’ve got to patch the database, everybody goes down. If someone pulls the plug on your database host in the data center, everyone is hosed.
• Resource Pool Management Complexity – if you’ve only got one database, then you’ve probably got one storage pool. So unless you’re very carefully carving it up and monitoring it (which itself takes time and resources), you can cause all sorts of unintended side effects.
• One Attack Surface – if 18 apps share a database, then they share an attack surface. An exploit against one is an exploit against all.
• More Management Headaches – a lot more focus, concern, and worry about quotas, security, backing up and restoring schemas, isolation measures, and control. This is enough of a headache that a lot of work has gone into automating it.
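
To make the backup-and-restore point concrete, here is a minimal TSPITR sketch for restoring one application’s schema while the other 17 keep running. The tablespace name, timestamp, and auxiliary path are hypothetical, and a real run needs enough free space for RMAN’s auxiliary instance:

    # RMAN, connected to the consolidated database: restore only app 3's
    # tablespace to a point just before the bad change.
    RMAN> RECOVER TABLESPACE app03_data
            UNTIL TIME "TO_DATE('2013-11-19 14:00','YYYY-MM-DD HH24:MI')"
            AUXILIARY DESTINATION '/u01/aux';

    # TSPITR leaves the tablespace offline when it completes; bring it back
    # (and take a fresh backup, since earlier backups of it are no longer usable).
    SQL> ALTER TABLESPACE app03_data ONLINE;

It works, but it is exactly the kind of “time and resources” workaround described above: an auxiliary instance, scratch space, and careful coordination, all to restore one schema.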

The Tide Has Turned

First, the benefits aren’t as strong as they used to be. Oracle’s own 12c marketing makes the case: the same workload, consolidated from a collection of standalone databases into pluggable databases, is claimed to take roughly one-sixth the hardware. Pluggable databases – databases that share a common dictionary – make the cost side of the equation significantly weaker. Thin cloning technology neutralizes most of the rest of the equation: it provides copies of a database at almost no incremental cost, virtually eliminating the storage argument.
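
To see why shared dictionaries change the math, here’s a minimal 12c sketch, with made-up names and paths: each application gets its own pluggable database – its own namespace for users, synonyms, and packages – while sharing one set of binaries, background processes, and memory with the container database:

    SQL> -- Create a PDB for one application from the seed.
    SQL> CREATE PLUGGABLE DATABASE app1_pdb
           ADMIN USER app1_admin IDENTIFIED BY secret
           FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/',
                                '/u01/oradata/cdb1/app1_pdb/');
    SQL> ALTER PLUGGABLE DATABASE app1_pdb OPEN;

    SQL> -- Sessions scope to a single PDB, so app 1 can't collide with app 2.
    SQL> ALTER SESSION SET CONTAINER = app1_pdb;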

Then there are the cons. And this is where I contend that we have systematically ignored or underestimated the value of Data Agility.

The Value of Data Agility

Getting the right data to the right person at the right time is a key source of value for organizations precisely because there are so many obstacles to doing it. And instead of recognizing the value of agility, we’ve spent a lot of time, energy, and effort on workarounds that minimize the impact of not having it. Like what, for example?
• Letting developers code against smaller sets of data, or live with older data, or write their own custom “rollback” scripts.
• Encouraging testers to accept that a rewind has to take 8 hours, that they have to wait for a refresh because they might clobber someone else’s app, or that they can’t test in their schema today because another application is at a “critical” stage.
• Telling BI folks that they can’t get their data refreshed because the other apps can’t be taken offline, and it just takes too long to ship the whole schema over.
• Telling all of the apps that they have to go down, like it or not, because we can’t patch one schema at a time.

Schema consolidation saves money at the cost of data agility, and it shifts that burden into places where we’ve been trained not to miss it, or where we write it off as an IT problem.

Delphix thin cloning lets you keep your individual databases but pay the price of a consolidated one. Developers can code against a full, fresh set of data at a fraction of the cost and never write custom rollback scripts again. Testers can rewind in minutes without a huge resource drain, avoiding wait times that exist mostly to prevent side effects on apps outside their own. BI folks never have to go offline to get fresh data, and a refresh is a minutes-long operation every time. Patching and upgrading can not only happen on a rolling basis but, using intelligent refresh, can be performed once and distributed to all sorts of downstream systems.
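
Delphix’s engine is its own product, but the storage-tier trick it depends on can be illustrated with any copy-on-write filesystem. A hypothetical ZFS sketch (pool and dataset names invented): the clone shares every unchanged block with production, so it costs almost nothing until it diverges:

    # Snapshot production, then hand a developer a writable clone of it.
    zfs snapshot tank/proddb@refresh_20131120
    zfs clone tank/proddb@refresh_20131120 tank/proddb_dev

    # "Rewind" is just dropping the clone and re-cloning the snapshot --
    # a minutes-long operation, not an 8-hour restore.
    zfs destroy tank/proddb_dev
    zfs clone tank/proddb@refresh_20131120 tank/proddb_dev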

So what?

If you’re considering schema consolidation, look hard at the ROI. What made sense even 3 years ago is completely upside down now. Pluggable databases destroy the cost argument at the memory tier, and Delphix thin cloning does the same at the storage tier. Schema consolidation just doesn’t make the economic sense it used to.



Comments

Jeremy | #1

    Interesting article – but from the perspective of someone who is in the weeds managing several hundred consolidated apps, I disagree with your final assessment. At least in my case, schema consolidation is still a huge win.

    Responding to a few specific points:
    (1) Namespace is definitely an Achilles’ heel of Oracle, but this is the one thing that I think 12c multitenancy addresses nicely (imho it doesn’t address much else).
    (2) Isolation: a well-known tradeoff; either you share resources or you don’t… nothing specific to schema consolidation here – the same applies to VMs without proper resource management.
    (3) On backup/restore, I think that anyone really doing consolidation is keeping separate tablespaces. I sure hope so. We’ve got a few scripts here that automate TSPITR and they give us decent data agility – we even drive clones from them.
    (4) Patching without RAC or Data Guard will require downtime whether you’re consolidated or not. The coordination requirements go up with schema/multitenant consolidation, but personally I think this is actually a win for consolidation – it’s a lot more work to patch a lot more databases. On the topic of pulling power plugs, same rebuttal as for isolation… it applies to VMs just the same.
    (5) I think management cost is mostly proportional to the number of different environments you manage. Consolidation doesn’t matter – standardization does. It’ll be a pain if they’re all different, whether you’re sharing resources or not; it’s easy if everything matches.
    (6) The argument around security and attack surfaces has some substance; but then again, we’re usually talking about many of the consolidated apps being multiple copies of the same app and data for dev and test environments, so I think this risk is somewhat mitigated. And a well-designed consolidated database (with minimum grants and a proper patching schedule) should minimize the risk.

    I’m probably biased because I’m in the middle of writing a bunch of articles about how to do consolidation right. :) Kyle, you know I’m a huge fan of Delphix! I think there will always be a need for virtualization-based consolidation, and you guys probably have the best product on the market for rapid, efficient provisioning of database clones. But just as performance tuning has the biggest impact higher in the stack, you also get bigger wins by consolidating higher in the stack. The best consolidation happens in the app itself, and many cloud apps are built this way from the ground up. Google doesn’t spin up new servers or VMs every time a new business signs up for Google Apps. Sometimes specific requirements will drive consolidation lower in the stack: for example, if an app needs the ability to independently fail over to another datacenter, Data Guard requires the app to have its own database. A strong consolidation strategy has a plan for consolidation at all four levels: app, schema/multitenant, database, and VM.

Noons | #2

    I think the conclusion is a bit up in the air, although it is of course a desirable outcome.

    Sure, *IF* Delphix were free AND an upgrade to 12c were also free, that would all be true right now.

    Exactly WHICH releases of SAP, EBS, Psoft, JDE, and other Oracle apps are FULLY compatible with and tested on 12c RIGHT NOW? How much does it cost to do that upgrade?
    The answer also goes a long way toward explaining why many folks are still on much older releases of Oracle…

    For all those reasons, schema consolidation still makes a lot of sense. And in particular where data movement between schemas is a necessity – case in point, between a DW and Hyperion or other BI tools – schema consolidation beats 12c any time!

    Of course, in another 5 years the picture will be different. Heck, it wouldn’t be IT if it weren’t! :)
    But we don’t live in the future and we can’t skip ahead in time. One step at a time.

khailey | #3

    Hey Jeremy – thanks for stopping by and giving the thoughtful response, and especially for sharing your experiences from in the trenches. Yes – pros and cons on both sides. I think 12c pluggable databases can address some of the issues, i.e. having isolation yet benefiting from running out of one CDB and having the option of mass upgrades. Of course, until I actually sit down and do this stuff myself, it’s all theory.

khailey | #4

    Hi Noons – I think you hit the nail on the head: if 12c were free, and if 12c could address issues such as full compatibility, and if it were solid enough not to have any first-release bugs or performance issues due to aggregating all the redo into one stream, etc., etc.
    As far as Delphix goes, it’s not free, but the ROI is clear and proven. Most customers’ bottom lines have improved by an order of magnitude or two beyond the license cost: customers have both saved more money than the license price and increased project output and efficiency well beyond it. My experience is that Delphix is usually paid for by the storage savings alone, and the efficiency gains are pure gravy. As another example, numerous customers have been able to free up one or more DBAs from menial copying work for more innovative work, by taking cloning off the experienced DBAs and giving it to a junior DBA who can finish it in minutes instead of hours. In many cases the DBAs don’t have to do anything at all, since the developers, QA, and reporting groups can allocate and manage their own clones with the simple end-user UI. Then on top of that are the massive time savings: customers have reported doing projects in half the time and doubling, or in some cases more than doubling, project output.

| #5

    Kyle,
    Thank you. I’ve seen too many people / white papers recommending consolidation exercises without identifying the downside.
    I appreciate your list “The Argument Against Schema Consolidation”.

