
Database Thin Cloning: Allocate on Write (ZFS)

May 31st, 2014

Allocate on Write Thin Cloning

Three challenges stand out specifically when considering the Copy on Write filesystem snapshots described in the previous section:

  • The number of snapshots you can take of source database LUNs is limited
  • The size of the snapshots is limited
  • Difficulties arise when sharing the base image of source databases at multiple points in time. In some cases it is not possible; in others it is difficult or resource intensive.

These challenges highlight a specific need: to create thin-provisioned clones of a source database from multiple points in time simultaneously, without consuming additional space. This requirement is important, as it allows one base image to serve as the foundation for all subsequent clones while imposing no unplanned storage or refresh requirements on users of the target (cloned) systems.

With a filesystem storage technology called Allocate on Write, these challenges can be met. In allocate on write filesystems, data blocks are never modified in place. When a modification is requested, the block with the new changes is written to a new location, so after the request completes there are two versions of the block: the version that existed prior to modification and the modified block. The location of the blocks and the versioning information for each block is kept in a metadata area that is in turn managed by the same allocate on write mechanism. When a new version of a block is written to a new location, the metadata has to be modified; but instead of modifying the contents of the relevant metadata block, a new metadata block is written to a new location. These allocations of new metadata blocks with pointers to the new block ripple up the metadata structures all the way to the root block of the metadata. Ultimately, the root metadata block is allocated in a new place pointing to the new versions of all blocks, meaning that the previous root block points to the filesystem at a previous point in time while the current, recently modified root block points to the filesystem at the current point in time. Through this mechanism an allocate on write system is capable of holding the complete version history of not only a block, but all blocks involved in tracking that block.


Figure 10. When a datablock in the bottom left is modified, instead of modifying the current block a new block is allocated with the modified contents. The metadata pointing to this new location has to be modified as well, and again instead of modifying the current metadata block, a new metadata block is allocated. These changes ripple up the structure such that the current root block points to the filesystem at the current point in time while the previous root block points to the filesystem at the previous point in time.

ZFS

Allocate on write has many properties in common with EMC’s VNX copy on write and NetApp’s WAFL systems, but the way allocate on write has been implemented in ZFS eliminates the boundaries found in both. With ZFS there are no practical size limitations on snapshots, no practical limit to the number of snapshots, and snapshots are created almost instantaneously and consume practically zero space (on the order of a few kilobytes).

ZFS was developed by Sun Microsystems to address the limitations and complexity of filesystems and storage. Storage capacity is growing rapidly, yet filesystems have many limitations on how many files can be in a directory or how big a volume can be. Volume sizes are predetermined and have to be shrunk or expanded later depending on how far off the original calculation was, making capacity planning an incredibly important task. Any requirement to change filesystem sizes could cause hours of outages while filesystems are remounted and fsck is run. ZFS has no need for filesystem checks because it is designed to always be consistent on disk. The filesystems can be allocated without size constraints because they are allocated out of a storage pool that can easily be extended on the fly. The storage pool is a set of disks or LUNs. All disks are generally assigned to one pool on a system, and thus all ZFS filesystems using that pool have access to the entire space in the pool. More importantly, they have access to all the I/O operations for the spindles in that pool. In many ways, it completely eliminates the traditional idea of volumes.
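As a minimal sketch of how this looks in practice (the pool name, disk names, and quota are illustrative, not from the original text):

zpool create tank c0t1d0 c0t2d0 c0t3d0 c0t4d0    # build a pool named "tank" from four disks
zfs create tank/oradata                          # filesystems carved from the pool need no fixed size
zfs create tank/oraredo                          #   ...they share the pool's space and I/O bandwidth
zfs set quota=500G tank/oradata                  # an optional quota can be added later if limits are desired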

On a non-ZFS filesystem the interface is a block device. Writes are done per block and there are no transaction boundaries. In the case of a loss of power or other critical issue there is also a loss of consistency. While the inconsistency issues have been addressed by journaling, that solution impacts performance and can be complex.

In a ZFS filesystem all writes are executed via allocate on write, and thus no data is overwritten. Writes are written in transaction groups such that all related writes succeed or fail as a whole, alleviating the need for fsck operations or journaling. On-disk states are always valid and there are no on-disk “windows of vulnerability”. Everything is checksummed and there is no silent data corruption.

Screen Shot 2013-06-03 at 11.11.33 AM

Figure 11. Comparison of non-ZFS filesystems on top and ZFS filesystems on the bottom. The ZFS filesystems are created in a storage pool that has all the available spindles, giving filesystems access to all the storage and IOPS from the entire pool. On the other hand, the non-ZFS filesystems are created on volumes and those volumes are attached to a specific set of spindles, creating islands of storage and limiting the IOPS for each filesystem.

Excepting certain hardware or volume manager specific software packages, the general comparison between non-ZFS and ZFS filesystems is as follows:

Filesystem (non-ZFS)

  • One filesystem per volume
  • Filesystem has limited bandwidth
  • Storage is stranded on the volume

ZFS Filesystem

  • Many filesystems in a pool
  • Filesystems grow automatically
  • Filesystems have access to all bandwidth

Along with many filesystem improvements, ZFS has effectively moved the size barrier beyond any hardware that has yet been created and has no practical limit on the number of snapshots that can be created. The maximum number of snapshots is 2^64 (18 quintillion) and the maximum size of a filesystem is 2^64 bytes (18.45 exabytes).

A ZFS snapshot is a read-only copy of a filesystem. Snapshot creation is basically instantaneous and the number of snapshots is practically unlimited. Each snapshot takes up no additional space until original blocks become modified or deleted. As snapshots are used for clones and the clones are modified, the new modified blocks will take up additional space. A clone is a writeable copy of a snapshot. Creation of a clone is practically instantaneous and for all practical purposes the number of clones is unlimited.
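As a hedged sketch (dataset and snapshot names are hypothetical), taking a snapshot and provisioning a writable clone from it looks like this:

zfs snapshot tank/oradata@before_refresh           # read-only, point-in-time snapshot
zfs clone tank/oradata@before_refresh tank/clone1  # writable clone; only changed blocks consume new space
zfs list -t snapshot                               # show snapshots and the space they currently consume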

Snapshots can be sent to a remote ZFS array via a send and receive protocol. Either a full snapshot or incremental changes between snapshots can be sent. Incremental snaps generally send and receive quickly and can efficiently locate modified blocks.
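A hedged sketch of the send and receive mechanism (host and pool names are illustrative):

zfs send tank/oradata@monday | ssh remotehost zfs receive backuppool/oradata
zfs send -i tank/oradata@monday tank/oradata@tuesday | ssh remotehost zfs receive backuppool/oradata

The first command sends a full snapshot; the second, with -i, sends only the blocks that changed between the two snapshots.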

One concern with allocate on write technology is that a single block modification can set off a cascade of block allocations. First, the datablock to be modified is not overwritten; a new block is allocated and the modified contents are written into it (similar to copy on write). The metadata that points to the new datablock location has to be modified, but again, instead of overwriting the metadata block, a new block is allocated and the modified metadata is written into it. These changes cascade all the way up the metadata tree to the root block, or uber block (see Figure 10). Thus for one datablock change there can be 5 new blocks allocated. These allocations are quick as they take place in memory, but what happens when they are written out to disk? Blocks are written out to disk in batches every few seconds for non-synchronous writes. On an idle or low-activity filesystem a single block change could create 5 writes to disk, but on an active filesystem the total number of metadata blocks changed will be small compared to the number of datablocks. For every metadata block written there will typically be several datablocks that have been modified; on an active filesystem a single metadata block typically covers the modifications of 10 or 20 datablocks, so the extra metadata blocks written to disk usually amount to only about 10% of the datablocks written.


Figure 12. The flow of transaction data through in-memory buffers and disk.

But what happens for sync writes that can’t wait for block write batches that happen every few seconds? In those cases the sync writes must be written out immediately. Sync writes depend on another structure called the ZFS Intent Log (ZIL). The ZIL is like a database change log or redo log. It contains just the change vectors and is written sequentially and continuously such that a synchronous write request for a datablock change only has to wait for the write to the ZIL to complete. There is a ZIL per filesystem, and it is responsible for handling synchronous write semantics. The ZIL creates log records for events that change the filesystem (write, create, etc.). The log records will have enough information to replay any changes that might be lost in memory in case of a power outage where the block changes in memory are lost. Log records are stored in memory until either:

  • Transaction group commits
  • A synchronous write requirement is encountered (e.g. fsync() or O_DSYNC)

In the event of a power failure or panic, log records are replayed. Synchronous writes will not return until ZIL log records are committed to disk.

Another concern is that blocks that were initially written sequentially next to each other may end up spread across the disk after they are modified, because each update allocates a new block in a different location. This fragmentation has little effect on random read workloads, but multiblock reads can suffer from it because a simple request for a contiguous range of blocks may turn into several individual reads by ZFS.

ZFS also introduced the concept of hybrid storage pools where both traditional spinning disks and modern flash-based SSDs are used in conjunction. In general, disks are cheap and large in size but are limited in both latency and throughput by their mechanics. Flash devices, on the other hand, serve I/O requests with latency that is only a small fraction of that of disks; however, they are very expensive per gigabyte. So while it may be tempting to achieve the best possible performance by putting all data on SSDs, this is usually cost prohibitive. ZFS allows mixing these two storage technologies in a storage pool, after which the ZIL can be placed on a mirror of flash devices to speed up synchronous write requests where latency is crucial.

Another use for SSDs in ZFS is for cache devices. ZFS caches blocks in a memory area called the Adaptive Replacement Cache—also the name of the algorithm used to determine which blocks have a higher chance of being requested again. The ARC is limited in size by the available system memory; however, a stripe of SSD devices for a level 2 ARC can be configured to extend the size of the cache. Since many clones can be dependent on one snapshot, being able to cache that snapshot will speed up access to all the thin clones based off of that snapshot.
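A hedged sketch of adding flash devices to an existing pool for both purposes (device names are illustrative):

zpool add tank log mirror c1t0d0 c1t1d0   # mirrored SSD log devices to accelerate the ZIL
zpool add tank cache c1t2d0 c1t3d0        # SSD cache devices that extend the ARC with an L2ARC
zpool status tank                         # verify the resulting pool layout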


Figure 13. A storage pool with an SSD caching layer and an SSD-based ZFS Intent Log for synchronous writes.

With these capabilities in mind, there are several methods available to use this technology for database thin clones:

  • Open Source ZFS snapshots and clones
  • ZFS Storage Appliance from Oracle with RMAN
  • ZFS Storage Appliance from Oracle with Dataguard

(Open) Solaris ZFS

ZFS is available in a number of operating systems today. It was released in Solaris 10 and has gained even more features and importance in Solaris 11. After the acquisition of Sun by Oracle, the OpenSolaris project was abandoned, but the community forked a number of open source projects, the most notable of which are Illumos and OpenIndiana. These releases are still actively developed and maintained, and many commercial products are built on them.

Any one of these systems can be used to build your own ZFS based storage system to support thin cloning:

  • Database storage on local ZFS
  • ZFS storage as an NFS filer
  • ZFS storage as an iSCSI/block storage array

When a database is already running on Solaris with local disks, a ZFS filesystem can be used to hold all database files. Creating snapshots and clones on that filesystem is a simple matter of a few ZFS commands, and one does not have to bother with storage protocols like NFS. If Solaris is in use and datafiles are on ZFS anyway, it may also be a good idea to automate regular snapshots as an extra layer of security and to enable a “poor man’s flashback database”.
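For example, a hedged sketch of automating hourly snapshots from cron (dataset name and schedule are hypothetical):

0 * * * * /usr/sbin/zfs snapshot tank/oradata@`date +\%Y\%m\%d\%H\%M`   # crontab entry; % must be escaped in cron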

When a database is not running locally on a Solaris server, you can still benefit from ZFS features by building your own ZFS storage server. You can share ZFS volumes via iSCSI or Fibre Channel and use ASM on the database server for datafiles, but here we will focus on the simpler setup of ZFS filesystems shared over NFS (a command-level sketch follows the steps below).

On a Solaris Storage server

  • Create a zpool (ZFS pool)
  • Create a ZFS filesystem in the pool
  • Export that filesystem via NFS

On the source database server

  • Mount the NFS filesystem
  • Put datafiles on the NFS mount as one of:
    • “live” data (this may have performance implications)
    • backup image copies (or an RMAN clone)
    • a replication target

On the Solaris Storage server

  • Take snapshots whenever necessary
  • Create clones from the snapshots
  • Export the clones via NFS

On the target database server

  • Mount NFS clones
  • Use this thin clone
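A hedged, command-level sketch of the workflow above (pool, filesystem, and host names are hypothetical; mounts are shown in Solaris syntax, Linux would use mount -t nfs):

# On the Solaris storage server
zpool create dbpool c0t1d0 c0t2d0 c0t3d0
zfs create dbpool/source
zfs set sharenfs=rw=sourcehost dbpool/source        # export the filesystem to the source database server

# On the source database server
mount -F nfs zfshost:/dbpool/source /mnt/source     # place image copies or datafiles here

# Back on the storage server: snapshot, clone, and export the clone
zfs snapshot dbpool/source@refresh1
zfs clone dbpool/source@refresh1 dbpool/clone1
zfs set sharenfs=rw=targethost dbpool/clone1

# On the target database server
mount -F nfs zfshost:/dbpool/clone1 /mnt/clone1     # start the thin clone database from this mount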

ZFS Storage Appliance with RMAN

Oracle sells a ZFS storage appliance preconfigured with disks, memory, ZFS filesystem, and a powerful monitoring and analytics dashboard. One of these appliances can be used to create database thin clones; in fact, Oracle has published a 44-page white paper outlining the steps (found at http://www.oracle.com/technetwork/articles/systems-hardware-architecture/cloning-solution-353626.pdf). In brief, the steps involved are:

On the ZFS Appliance

  • Create a “db_master” project
  • Create a “db_clone” project
  • For both the “db_clone” and “db_master” project, create 4 filesystems:
    • datafile
    • redo
    • archive
    • alerts

On the Source Database

  • Mount a directory from the ZFS Appliance via NFS
  • Back up the source database with RMAN to the NFS-mounted directory (a hedged RMAN sketch follows these steps)

On the ZFS Appliance

  • Select the “db_master” project
  • Snapshot the “db_master” project
  • Clone each filesystem on “db_master” to the “db_clone” project

On the Target Host

  • Mount the 4 filesystems from the db_clone project via NFS
  • Start up the clone database on the target host using the directories from the db_clone project mounted via NFS from the ZFS storage appliance
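A hedged sketch of the RMAN step on the source database (mount points and formats are hypothetical; the Oracle white paper gives the authoritative procedure):

mount -t nfs zfssa:/export/db_master/datafile /mnt/db_master/datafile   # NFS mount from the appliance
rman target /
RMAN> BACKUP AS COPY DATABASE FORMAT '/mnt/db_master/datafile/%U';
RMAN> BACKUP CURRENT CONTROLFILE FORMAT '/mnt/db_master/datafile/cf_%U';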


Figure 14. A diagram of the procedure used to clone databases using the ZFS storage appliance and RMAN. First a directory from the ZFS storage appliance is mounted on the source machine via NFS. Then an RMAN backup of the source database is taken onto the NFS-mounted directory. A snapshot of the RMAN backup can then be taken on the ZFS storage appliance and used to create thin clones.

ZFS Storage Appliance with Data Guard

One way to efficiently get changes from a source database onto a ZFS storage appliance is to use Data Guard, as outlined in Oracle’s white paper on Maximum Availability Architecture (MAA) DB Cloning. You can find the document at the following link:

http://www.oracle.com/technetwork/database/features/availability/maa-db-clone-szfssa-172997.pdf

The concept revolves around using Data Guard to host the datafiles of a standby instance on the ZFS Storage Appliance. With the datafiles hosted on ZFS, all changes from the source database are propagated to the appliance via the standby instance. Snapshots of the datafiles can then easily be taken at the desired points in time, and clones can be made from those snapshots. The ZFS clones can be used to start up database thin clones on target hosts by mounting the cloned datafiles via NFS.
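One hedged way to picture the setup (the mount point, parameter value, and duplication command are illustrative, not the exact procedure from the MAA paper): the standby's datafiles simply live on an NFS share served by the appliance.

mount -t nfs zfssa:/export/standby/datafile /oradata/standby             # on the standby host
SQL> ALTER SYSTEM SET db_create_file_dest='/oradata/standby' SCOPE=SPFILE;
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE;        # one common way to build the standby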


Figure 15. Using Data Guard, datafiles can be hosted on a ZFS storage appliance and shared via NFS for thin cloning to a target database.


Database Thin Cloning: WAFL (Netapp)

May 30th, 2014

Write Anywhere File Layout (WAFL)

With EMC, thin cloning can only be achieved by using backup technology; in essence, the process has to be architected manually in order to support databases. How can the same goals be achieved but with database thin cloning specifically in mind?

A more seamless approach to database thin cloning is SnapManager for Oracle (SMO) and SnapManager for SQL Server offered by NetApp. NetApp employs a technology called Write Anywhere File Layout (WAFL) that sounds on the surface like EMC VNX copy on write but is different.  WAFL has been around far longer and has a track record of being used for database thin cloning. WAFL allows quick, easy, and efficient snapshots to be taken of a filesystem. New writes don’t overwrite previous blocks with WAFL; instead, the new writes go to a new location. With this architecture it is easy to snapshot files, filesystems or LUNs in minutes.

Up to 255 snapshots can be created from a single LUN (the 255 limit is actually per volume). An entire LUN can be the source of a snapshot, or snapshots can be made of specific sets of files. Along with the quick and easy snapshot technology, NetApp provides a feature called SnapMirror that will propagate snapshots to a secondary filer. The secondary filer in turn can use a feature called FlexClone to create clones.

Clones created in this manner share duplicate blocks and thus can be used to create database thin clones on a secondary filer. The snapshots on the source array can be managed specifically for databases with NetApp SnapManager for Oracle (SMO) or SnapManager for SQL Server. SMO connects to the database and, in the case of Oracle, will put all tablespaces in hot backup mode before taking snapshots, then take them out of hot backup mode when the snapshot is complete. Information about the snapshots is tracked and managed by SMO inside an Oracle database that serves as a repository.
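A hedged sketch of what that orchestration looks like at the database level (SMO automates this; the storage snapshot step is a placeholder):

SQL> ALTER DATABASE BEGIN BACKUP;            -- puts all datafiles into hot backup mode (10g and later)
-- take the storage snapshot of the datafile volume here
SQL> ALTER DATABASE END BACKUP;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;       -- ensure the archived logs needed for recovery are on disk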

The technology involved in snapshot cloning with WAFL is solid but very component heavy. On top of the components already listed there is a required installation on the target array called NetApp SnapDrive for UNIX. Snapshots are propagated to the secondary array with SnapMirror, while a feature called Protection Manager manages the process. A critical step in cloning operations is correctly synchronizing the snapshot schedule of SMO with the transfer schedule of Protection Manager so that the same retention class is maintained on the source and target arrays. On the destination array it is important to manage and track how many clones are made and which snapshot is the basis of each clone. If more than 255 clones are made of a single LUN, the next clone will no longer be a logical (virtual) clone sharing duplicate data blocks but a physical clone with a completely new copy of the datafiles.


Figure 8. Using NetApp filer technologies including WAFL, SnapMirror, SMO, and FlexClone to create thin provisioned database clones.

An important consideration for WAFL volumes on NetApp is the aggregate pool. The aggregate defines which LUNs can be included in a snapshot. The size limitation on this pool varies between 16TB and 100TB depending on the model of the NetApp array. The limits on the size of this pool and the limit of 255 snapshots should be considered when evaluating the capabilities of SMO and FlexClone on NetApp.

Reference

From http://media.netapp.com/documents/tr-3761.pdf

An interesting discussion of NetApp vs. EMC filesystem snapshots:


Database Thin Cloning: Copy on Write (EMC)

May 29th, 2014

Copy on Write

Copy on write is a storage or filesystem mechanism that allows storage or filesystems to create snapshots at specific points in time. Whereas Clonedb is a little known and rarely used option, storage snapshot technologies are widely known and used in the industry. These snapshots maintain an image of the storage at a specific point in time. When the active storage changes a block, the original block is first read from disk in its original form and written to a save location. Once the block save is complete, the snapshot is updated to point to the new block location. Only after the snapshot has been updated can the active storage datablock be written out, overwriting the original version.


Figure 4. This figure shows storage blocks in green. A snapshot will point to the datablocks at a point in time as seen on the top left.


Figure 5. When the active storage changes a block, the old version of the block has to be read and then written to a new location and the snapshot updated. The active storage can then write out the new modified block.

Using storage snapshots, an administrator can snapshot the storage containing datafiles for the database and use the snapshot to create a clone of a source database. With multiple snapshots, multiple clones with shared redundant blocks can be provisioned.

On the other hand, if the source database is an important production environment, then creating clone databases on the same storage as the production database is generally not a good practice. A strategy that keeps the cloned database files off of the production storage environment is better for both performance and stability.

EMC Snapshot with BCV

EMC has a number of technologies that can create database thin clones. In the simplest case the clone databases can share the same storage as the source database by using snapshots of that storage. A storage snapshot can be taken and used to make a thin clone. EMC supports up to 16 writeable storage snapshots, allowing up to 16 thin clones of the same source datafiles (while sharing the same storage as the source database). If the source database consists of several LUNs, then snapshots must be taken of all the LUNs at the same point in time, which requires the EMC TimeFinder product to manage taking consistent snapshots across multiple LUNs.

Taking load off of production databases and protecting production databases from possible performance degradation is an important goal of cloning. By taking snapshots of the production LUNs one incurs an extra read and extra write for every write issued by the production database. This overhead will impact both production and the clone. On top of the extra load generated by the snapshots, the clones themselves create load on the LUNs because of the I/O traffic they generate.

In order to protect the performance of the production database, clones are often provisioned on storage arrays that are separate from production. In the case where production LUNs are carved out of one set of isolated physical disk spindles and another set of LUNs is carved out of a separate set of physical spindles on the same array, it may be acceptable to run the clones within the same array. In this case, Business Continuance Volumes (BCV) can be used to mirror production LUNs onto the LUNs allocated for the clones. Then snapshots can be taken of the mirrors and used for thin clones; or, in order to protect the production LUNs from the overhead generated by snapshots, the BCV mirrors can be broken and the LUNs allocated for cloning can be used to start up thin clone databases. Filesystem snapshots can be used to clone up to 16 thin clone databases using the LUNs mirrored from production.

More often than not, however, snapshots are taken of BCVs, or the BCVs are broken and then copied to a second non-production storage array where snapshots can be taken and clones provisioned off of the snapshots. Even in this case the EMC environment is limited to only 16 clones, and if those clones are based on yesterday’s copy of production, a whole new copy of production has to be made in order to create clones of today’s data. This ends up taking more storage and more time, which goes against the goal of thin cloning.

EMC’s goal has been backup, recovery, and high availability as opposed to thin cloning; however, these same technologies can be harnessed for thin cloning.

The steps to set this configuration up on EMC’s systems are:

  1. Create BCVs and then break the BCVs
  2. Zone and mask a LUN to the target host
  3. Perform a full copy of the BCV source files to the target array
  4. Perform a snapshot operation on the target array
  5. Start up the database and recover it using the target array


Figure 6. TimeFinder is used to snapshot multiple LUNs from the production filer to the non-production filer to be used for thin provisioned clones.

EMC is limited to 16 writeable snapshots, and snapshots of snapshots (also known as branching) are generally not allowed. On some high-end arrays it may be possible to take a single snapshot of a snapshot, but not to branch any deeper.

EMC VNX

While copy on write storage snapshots are limited to 16 snapshots, there are other options available to increase that number and to enable branching of clones. EMC has a newer technology on its VNX arrays that improves upon the previous SnapView snapshots. The VNX technology:

  • requires less space
  • has no read+write overhead of copy on first write (COFW)
  • makes snapshot reads simpler
  • supports clones of clones (branching)

The older SnapView snapshots required extra storage space to be set aside at creation time; the newer VNX snapshots don’t require any extra storage space when they are created. The older copy on first write (COFW) behavior also caused more writes for the storage than before the snapshot was in place. With the newer VNX Snapshots the storage writes become Redirect on Write (ROW), where each new active storage modification is written to a different location with no extra read or write overhead.

Another benefit of VNX is how blocks are read from the source LUNs: with the older SnapView, reads from a snapshot had to merge data from the storage with the Reserve LUN Pool (RLP), where the original data blocks that had been modified were kept. With the newer VNX the snapshot data is read directly from the snapshot source LUN.

EMC’s TimeFinder capability is also no longer necessary with VNX. Up to 256 snapshots can be taken in a VNX environment, and snapshots can be made of multiple LUNs simultaneously without needing additional software to create a consistent copy.

Despite all these improvements, VNX is still considered a lower-end storage solution compared to the Symmetrix arrays, which retain all the shortcomings described above.

VNX relaxes some of the constraints of the older Snapview clones; however, in both cases the problem of efficiently bringing new changes from a source array to arrays used for development still exists. After a copy is brought over to a target array from source database LUNs, changes on the source (fresh data) cannot easily be brought over to the target array without a full new copy of the source database. Multiple point in time snapshots are also difficult, as having a target database on the development array share duplicate blocks with another version of the target database (different point in time) is impossible with this architecture. Instead, multiple copies will take up excess space on the target array, and none of the benefits of block sharing in cache or on disk will apply if multi-versioned clone databases are required.

EMC Snapshots with SRDF and RecoverPoint

A major challenge of both BCVs and VNX is keeping the remote storage array used for clones up to date with the source database. EMC has two solutions to this challenge; each provides a way of continuously pulling in changes from the source database into the second storage array in order to keep it up to date and usable for refreshed databases:

  • Symmetrix Remote Data Facility (SRDF)
  • RecoverPoint

SRDF streams changes from a source array to a destination array, but works between Symmetrix storage arrays only.

RecoverPoint is a combination of a RecoverPoint splitter and a RecoverPoint appliance. The splitter splits writes, sending one copy to the intended destination and the other to a RecoverPoint appliance. The splitter can live in the array, be fabric based, or be host based. Host-based splitting is implemented by installing a device driver on the host machine and allows RecoverPoint to work with non-EMC storage; however, because the drivers are implemented at the OS level, availability depends on which operating systems the driver has been ported to. The fabric-based splitters currently work with Brocade SAN switches and Cisco SANTap, and also open up the use of RecoverPoint with non-EMC storage. The RecoverPoint appliance can coalesce and compress the writes and send them back to a different location on the array, or send them off to a different array either locally or in another datacenter.

One advantage of RecoverPoint over SRDF concerns logical corruption: SRDF immediately propagates any change from the source array to the destination. As with all instant-propagation systems, if there is a logical corruption on the source (for instance, a table being dropped), it is immediately propagated to the destination system. With RecoverPoint, changes are recorded and the destination can be rolled back to a point in time before the logical corruption.

SRDF could be used in conjunction with Timefinder snapshots to provide a limited number of consistent point-in-time recovery points for groups of LUNs. RecoverPoint on the other hand can work with consistency groups to guarantee write order collection over a group of LUNs, and provides continuous change collection. RecoverPoint tracks block changes and journals them to allow rolling back target systems in the case of logical corruption or the need to rewind the development system.


Figure 7. EMC SRDF or RecoverPoint can propagate changes from source filer LUNs to the target filer dynamically, allowing better point in time snapshotting capabilities.

Using SRDF or RecoverPoint allows propagation of changes from a source array to a target array. On the target array, clones can be made from the source database at different points in time while still sharing duplicate blocks between the clones no matter which point in time they came from.

In all these cases, however, there are limits to the snapshots that can be taken as well as technical challenges trying to get the source changes to the target array in an easy and storage-efficient manner.

More information on EMC snapshot technologies can be found via the following website links:

Summary

With EMC, thin cloning can only be achieved by using backup technology; in essence, the process has to be architected manually in order to support databases. How can the same goals be achieved but with database thin cloning specifically in mind? See the following blogs on Netapp, ZFS and Delphix.

Addendum

I’ve been getting questions about how EMC compares with Delphix. Delphix offers technology that is completely missing from EMC arrays.


EMC historically only supports 16 snapshots and no branching. EMC has no tools to transfer the changes of a database from production storage to development storage. In theory one could use SRDF, which only works between compatible Symmetrix arrays, to send changes from one to the other, or one could use RecoverPoint. RecoverPoint requires two additional appliances to capture changes on the wire and then play them onto different storage. Neither is set up for databases specifically, taking into account things like coordinating filesystem snapshots with putting the database in hot backup mode. I haven’t met anyone at EMC who thinks that EMC could do much of what Delphix does once we explained what we do.
We have three parts:
  1. Source sync
    • initial full copy
    • forever incremental change collection
    • rolling window of saved changes, with older replaced data purged
  2. DxFS storage on Delphix
    • storage agnostic
    • compression
    • memory sharing of data blocks (only technology AFAIK to do this)
  3. VDB provisioning and management
    • self service interface
    • roles, security, quotas, access control
    • branching, refresh, rollback
Of these, EMC only has limited snapshots, which is part of bullet 2 above; but for bullet 2 we also have unlimited, instantaneous snapshots that work on any storage, be it EMC, NetApp, or JBODs. Also, if one is considering a new SSD solution like Pure Storage, Violin, or Fusion-io, only Delphix can support them for snapshots. We also typically compress data by 1/3 along data block lines. No one else AFAIK is data block aware and capable of this kind of compression and fast access. There is no detectable overhead for compression on Delphix.
No one in the industry does point 1 above, keeping the remote storage in sync with the changes. NetApp tries with a complex set of products and features, but even with all of that they can’t capture changes down to the second.
Finally, point 3, provisioning. No one has a full solution except us. Oracle tries to with EM 12c, but they are nothing without ZFS or NetApp storage, plus their provisioning is extremely complicated. Installation takes between one week and one month, and it’s brand new in 12c so there are bugs. And it doesn’t provide provisioning to any point in time down to the second, nor branching, etc.

Delphix goes way beyond just data

  • SAP endorsed business solution
  • EBS automated thin cloning of full stack – db, app, binaries
  • Application stack thin cloning

Delphix customers have seen an average application development throughput of 2x.

One SAP customer was able to expand their development environments from 2 to 6 and increased their project output from 2 projects every 6 months to over 10.

Points to consider

• Storage Flexibility: EMC cloning solutions only work with EMC storage – increasing lock-in at the storage tier. In contrast, Delphix is storage vendor agnostic and can be deployed on top of any storage solution. As companies move towards public clouds, influence over the storage tier vendor diminishes. Unlike EMC, Delphix remains relevant on-premise and in the cloud (private or public).

• Application Delivery: Database refresh and provisioning tasks can take days to weeks of coordinated effort across teams. The sheer effort becomes an inhibitor to application quality and a barrier to greater agility. Delphix is fundamentally designed for use by database and application teams, enabling far greater organizational independence. Delphix fully automates various functions like refreshing and promoting database environments, re-parameterizing init.ora files, changing SIDs, and provisioning from SCNs. As a result, with Delphix, database provisioning and refresh tasks can be executed in 3 simple clicks. The elimination of actual labor as well as process overhead (i.e. organizational inter-dependencies) has allowed Delphix customers to increase application project output by up to 500%. In contrast, EMC cloning products increase cross-organizational dependencies and are primarily designed for storage teams.

• Storage Efficiency: While EMC delivers storage efficiency simply through copy on write cloning, Delphix adds intelligent filtering and compression to deliver up to 2-4x greater efficiency (even on EMC storage!). Additionally, most customers realize more value from other Delphix benefits (application delivery acceleration; faster recovery from downtime etc.) that EMC does not offer or enable.

• Data Protection and Recovery: While EMC only allows for static images or snapshots of databases at discrete points in time, Delphix provides integrated log shipping and archiving. This enables provisioning, refresh, and rollback of virtual copies to any point in time (down to the second or SCN) with a couple of clicks. It also enables an extended logical, granular recovery window for edge-case failures and far better RPO and RTO compared to disk, tape, or EMC clones. Many Delphix customers have wiped out the cost of backup storage as well as 3rd party backup tools for databases with this Delphix “Timeflow” capability.

• 2nd Level Virtualization: Delphix can create VDBs (virtual databases) from existing VDBs, which is extremely valuable given the natural flow of data in application lifecycles from development to QA to staging etc. For example, a downstream QA team may request a copy of the database that contains recent changes made by a developer. EMC cloning tools can only create first generation snapshots of production databases and do not reflect the real need or data flow within application development lifecycles.

• Integrated Data Delivery: Many enterprise applications (ex: Oracle EBS, SAP ECC etc.) are comprised of multiple modules and databases that have to be refreshed to the same point in time for data warehousing, business intelligence, or master data management projects. Delphix uniquely supports integrated and synchronized data delivery to the exact same point in time or to the same transaction ID.

• Resource Management: Delphix offers resource management and scheduling functionality such as retention period management, refresh scheduling, and capacity management per VDB that is lacking in EMC’s cloning products. For example, some VDBs for a specific source database may be retained for a few weeks while specific quarter-ending copies can be retained for extended durations (for compliance). Delphix also supports prioritizing server resources allocated to process IO requests per VDB. This is important in environments where DBA teams must meet SLAs that vary by lines of business or criticality of applications.

• Security and Auditability: Physical database copies and EMC clones alike constantly proliferate and increase the risk of audit failures and data breaches when sensitive data is involved. Delphix delivers a full user model, centralized management, retention policies (for automated de-provisioning), and complete auditing for VDBs. Delphix also integrates with homegrown and 3rd party masking tools so virtual copies can be centrally obfuscated – avoiding tedious masking steps per copy.

• V2P (Virtual to Physical): In the event that customers experience downtime across primary and standby databases, Delphix can quickly convert a VDB (from any point in time) from virtual to physical form to minimize the cost and risk of downtime. This provides an extended recovery layer and also a quick path to creating physical copies for other purposes like performance testing.

 


Database Thin Cloning: clonedb (Oracle)

May 28th, 2014

A production database is full of data that makes sense for its purpose, whether the database is 10GB or 10TB.
Now if you take that database and clone it for QA or development, suddenly the data is cumbersome and unnecessary. Terabytes of disk are provisioned simply to hold a replica for the purpose of testing a small subset of it. An entire architecture with its myriad support structures, historical data, indexes, and large objects is cloned and made ready just to be partially used, trashed, and rebuilt. This is waste, both of time and storage resources. To the business, having a duplicate environment makes absolute sense; however, from an IT point of view the repeated duplication of storage space and the time drain it causes just makes no sense at all.

When database copies are made from the same master database (the source environment being used for cloning), typically 95% or more of the blocks are duplicated across all copies. In QA, development, and reporting environments there will almost always be some changes exclusive to the cloned system; however, the amount is usually extremely small compared to the size of the source database. The unchanged blocks are redundant and take up massive amounts of disk space that could be saved if the blocks could somehow be shared.

Yet sharing duplicate blocks is not an easy feat in most database environments. It requires a technology that can act as a foundational cornerstone that coordinates and orchestrates access to duplicate blocks. At the same time, it requires the database copies to be writable with their own private modifications that are hidden from the source or other clones made from the source.

There are several technologies available in the industry that can accomplish block sharing across database clones. The primary technology involves filesystem snapshots that can be deployed across multiple clones. This concept is known as thin cloning, and it allows filesystems to share original, unmodified blocks across multiple snapshots while keeping changes on the target clones private to the clone that made the changes.

But as with many new technologies, there are multiple methods available that accomplish the same task; likewise with thin cloning, there are multiple vendors and methods that provide this capability.

Thin Cloning Technologies

The main requirement of database cloning is that the database files and logs must be in a consistent state on the copied system. This can be achieved either by having datafiles in a consistent state (via a cold backup) or with change logs that can bring the datafiles to a consistent state. The cloning process must also perform prerequisite tasks like producing startup parameter files, database definition files, or other pre-creation tasks for the target database to function. For example, Oracle requires control files, password files, pfiles/spfiles, and other pre-created components before the target can be opened for use. Controlfiles, logfiles, and datafiles together constitute a database that can be opened and read by the appropriate DBMS version.

In order to share data between two distinct, non-clustered instances of a database, the two instances must believe they have sole access to the datafiles. Modifications to the datafiles in one instance (a production database, for example) cannot be seen by the other instance (a clone) as it would result in corruption. For thin clones, datafiles are virtual constructs that are comprised of the shared common data and a private area specific to the clone.

There are a number of technologies that can be leveraged to support thin cloning. These technologies can be broken down into:

  • Application software made to manage access to shared datafile copies
  • Copy-on-write filesystem snapshots
  • Allocate-on-write filesystem snapshots

The difference in these technologies is substantial and will determine how flexible and manageable the final thin cloning solution will be.

Software Managed Thin Cloning

Because it is part of the existing stack, one approach to sharing databases is to have the database software itself act as the thin cloning technology. In this scenario the DBMS software orchestrates access to a shared set of database files while modifications are written to an area that is private to each database clone. In this way the clones are managed by the DBMS software itself. This is the approach Oracle has taken with Clonedb, a software feature introduced in Oracle 11gR2 (11.2.0.2).

Clonedb

Clonedb manages the combination of shared and private data within an Oracle environment by maintaining a central set of read-only datafiles and a private area for each clone. The private area is visible only to its own clone, guaranteeing that cross-clone corruption cannot occur. The Oracle RDBMS software orchestrates access to the read-only datafiles and the private areas maintained by the clones.

Oracle’s Clonedb is available in Oracle version 11.2.0.2 and higher. The Clonedb option takes a set of read-only datafiles from an RMAN backup as the basis for the clone. Clonedb then maps a set of ‘sparse’ files to the actual datafiles. These sparse files represent the private area for each clone where all changes are stored. When the clone database needs a datablock, it first looks in the sparse file; if the block is not found there, the cloned database looks in the underlying read-only backup datafile. Through this approach, many clones can be created from a single set of RMAN datafile backups.

The greatest benefit of this technology is that by sharing the underlying set of read-only datafiles, the common data shared between the clones does not have to be replicated. Clones can be created easily and with minimal storage requirements. With time and space no longer a constraint, cloning operations become far more efficient and require minimal resources.

Many of the business and operational issues summarized in chapters 1 and 2 of this book can be alleviated with this technology. For instance, DBAs can use Clonedb to provision multiple developer copies of a database instead of forcing developers to share the same data set. By using multiple developer copies, many delays and potential data contamination issues can be avoided, which speeds development and improves efficiency during application development. Developers are able to perform their test work, validate it against their own private dataset, and then commit their code and merge it into a shared development database. The cloned environment can be trashed, refreshed, or handed over to another developer.

On the other hand, Clonedb requires that all clones be made from the source database at the same point in time. For example, if Clonedb is used to provision databases from an RMAN backup taken a day ago and developers want clones of the source database as it is today, then an entire new RMAN backup must be made. This dilutes the storage and time-savings advantages that Clonedb originally brought. While it is possible to use redo and archive logs to bring the previous day’s RMAN backups up to date (all changes from the last 24 hours would be applied to yesterday’s RMAN datafile copies), the strategy only works efficiently in some cases. The farther the clone is from the original RMAN datafile copies, the longer and more arduous the catching-up process becomes, resulting in wasted time and resources.

Clonedb functionality is effective and powerful in some situations, but it is limited in its ability to be a standard in an enterprise-wide thin cloning strategy.


Figure 1. Sparse files are mapped to actual datafiles behind the scenes. The datafile backup copy is kept in a read only state. The cloned instance (using Clonedb) first looks for datablocks in the sparse file. If the datablock is not found, it will then read from the RMAN backup. If the Clonedb instance modifies any data, it will write the changes to the sparse file.

In order to implement Clonedb, Oracle 11.2.0.2 or higher is required. Additionally, Direct NFS (dNFS) must be used for the sparse files: they are implemented on a central NFS-mounted directory whose files are accessed via Oracle’s Direct NFS client.

To create this configuration, the following high-level steps must be taken:

  • Recompile the Oracle binaries with the Oracle dNFS code enabled (see the sketch after this list)
  • Run the clonedb.pl script, available through Metalink Document 1210656.1
  • Start up the cloned database with the startup script created by clonedb.pl
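A hedged sketch of the first step, enabling Direct NFS by relinking the Oracle binaries (assumes 11.2 on Unix/Linux):

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on     # relinks the oracle binary with the Direct NFS client enabled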

The syntax for clonedb.pl is relatively simple:

clonedb.pl initSOURCE.ora create_clone.sql

Three environment variables must be set for the configuration:

MASTER_COPY_DIR="/rman_backup"
CLONE_FILE_CREATE_DEST="/nfs_mount"
CLONEDB_NAME="clone"

Once clonedb.pl has been run, running the output file generated by the script will create the database clone.

sqlplus / as sysdba @create_clone.sql

The create clone script does the work in four basic steps (a hedged sketch of such a script follows the list):

  1. The database is started up in nomount mode with a generated pfile (initclone.ora in this case).
  2. A custom create controlfile command is run that points to the datafiles in the RMAN backup location.
  3. The sparse files on a dNFS mount are mapped to the datafiles in the RMAN backup location. For instance: dbms_dnfs.clonedb_renamefile('/backup/file.dbf', '/clone/file.dbf');
  4. The database is brought online in resetlogs mode: alter database open resetlogs;
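A hedged sketch of what such a generated script roughly looks like (file names, paths, and the controlfile clause are illustrative; the real script is produced by clonedb.pl for your environment):

STARTUP NOMOUNT PFILE=initclone.ora
CREATE CONTROLFILE REUSE SET DATABASE clone RESETLOGS
  LOGFILE GROUP 1 ('/nfs_mount/clone_redo01.log') SIZE 100M
  DATAFILE '/rman_backup/system01.dbf', '/rman_backup/sysaux01.dbf', '/rman_backup/users01.dbf';
-- map each sparse file on the dNFS mount to its read-only backup copy
EXEC dbms_dnfs.clonedb_renamefile('/rman_backup/system01.dbf', '/nfs_mount/system01.dbf');
EXEC dbms_dnfs.clonedb_renamefile('/rman_backup/sysaux01.dbf', '/nfs_mount/sysaux01.dbf');
EXEC dbms_dnfs.clonedb_renamefile('/rman_backup/users01.dbf', '/nfs_mount/users01.dbf');
ALTER DATABASE OPEN RESETLOGS;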


Figure 2. This image shows that multiple Clonedb instances can share the same underlying RMAN backup. Each Clonedb instance writes its changes to its own private sparse files.

Figure 3. A graphical outline of the process. An RMAN backup is taken of the source database and placed in a location where the Clonedb instances can access them (in this case, an NFS mount). A Clonedb instance can be set up on any host that has access to the NFS filer via dNFS. The Clonedb instances will create sparse files on the NFS filer. The sparse files map to the datafiles in the RMAN backup.

NOTE: If Clonedb instances are going to be created from two different points in time, then a new RMAN backup has to be taken and copied to the NFS server before it can be used as the source for new clones, as shown in Figure 3.

Because Clonedb adds an extra layer of code that requires reads to both the sparse files over dNFS and the RMAN datafile backups, there is a performance hit for using Clonedb. The biggest drawback is the requirement for multiple copies of source databases in order to create clones from different points in time, which diminishes the storage savings. The Clonedb functionality is a powerful option that should be in the back pocket of any Oracle DBA but it has limited use for an automated strategy involving thin cloning.

Reference

The best write-up on clonedb is Tim Hall’s blog post at
http://www.oracle-base.com/articles/11g/clonedb-11gr2.php


The Thin Cloning Left Shift

November 13th, 2013

The DevOps approach to software delivery manages risk by applying change in small packages instead of big releases. By increasing release frequency, overall risk falls since more working capabilities are delivered more often. The consequence of this is that problems with your data can be amplified. And, as a result, you can squeeze the risk out of one aspect of your delivery just to introduce it in another. Thin cloning attacks that risk, enhancing and amplifying the value of DevOps by reducing the data risk inherent in your architecture.

Data Delivery

How is there risk in your architecture? Well, just because you’ve embraced Agile and DevOps doesn’t mean that your architecture can support it. For example, one customer with whom I spoke had a 3-week infrastructure plan to go along with every 2-week agile sprint because it took them that long to get their data backed up, transmitted, restored and ready for use. So, sure, the developers were a lot more efficient. But, the cost in infrastructure resources, and the corresponding Total Cost of Data was still very high for each sprint. And, if a failure occurred in data movement, the result would be catastrophic to the Agile cycle.

Data Currency and Fidelity

Another common tradeoff has to do with the hidden cost of using stale data in development. The reason this cost is hidden (at least from the developer’s viewpoint) is that the cost shows up as a late breakage event. For example, one customer described their data as evolving so fast that a query developed using stale data might work just fine in development but then be unable to respond to several cases that appear in more recent production data. Another customer had a piece of code tested against a subset of data that came to a crawl 2 months later during production-like testing. Had they not caught it, it would have resulted in a full outage.

I contend that the impact of these types of problems is chronically underestimated because we place too much emphasis on the number of errors, and not enough on their early detection. I contend that being able to remediate errors sooner is significantly more important than being able to reduce the overall error count. Why? First, because the cost of errors rises dramatically as you proceed through a project. Second, because remediating faster means avoiding secondary and tertiary effects that can result in time wasted chasing ghost errors and root causing things that simply would not be a problem if we fixed things faster and operated on fresher data.

Thought Experiment

To test this, I did a simple thought experiment where I compared two scenarios. In both scenarios, time is measured by 20 milestones and the cost of error rises exponentially from “10” at milestone 7 to “1000” at milestone 20. In Scenario A, I hold the number of errors constant and force remediation to occur in 10% less time. In Scenario B, I leave the time for all remediation constant and shrink the total number of errors down by 10%.

Scenario A: Defects Held Constant; Remediation Time Reduced by 10%

Scenario B: Remediation Time Held Constant; Defects Reduced by 10%

In each graph, the blue curve represents the before state and the green curve the after state. For both scenarios, in the before state, the total cost of errors was marked at $2.922M. The comparison of the two graphs shows that the savings from shrinking the total time to remediate by 10% was $939k vs. $415k in savings from shrinking the total number of errors by 10%. In other words, even though these graphs didn’t change much at all, the dollar value of the change was significant when time to remediate was the focus. And, the value of reducing the time to remediate by 10% was more than twice the value of just reducing the number of defects by 10%. In this thought experiment, TIME is the factor driving the cost companies pay for quality – the sooner and faster something gets fixed, the less it costs. In other words, shifting left saves money. And, it doesn’t have to be a major shift left to result in a big increase in savings.

The Promise of Thin Cloning

The power of thin cloning is that it addresses both of the key aspects of data freshness: currency and timeliness. Currency measures how stale it is compared to the source [see Segev ICDE 90] and timeliness how old it is since its creation or update at the source [See Wang JMIS 96]. These two concepts capture the real architectural issue with most organizations. There is a single point of truth somewhere that has the best data (high timeliness). But, it’s very difficult to make all of the copies of that data maintain fidelity with that source (currency) and the difficulty to do so rises in proportion to the size of the dataset, and the frequency with which the target copy needs currency. But, it’s clear that DevOps goes in this direction.

Today, most people accept the consequences of low fidelity and lack of currency because of the benefits of a DevOps approach. That is, they accept that some code will fail because it’s not tested on full-size data, that they will miss cases because data is evolving too quickly, or that they will chase down ghost errors because of old or poor data. And, they accept it because the benefit of DevOps is so large.

But, with thin cloning solutions like Delphix, this issue just goes away. Large – even very large databases can be fully refreshed in minutes. That means full size datasets with minutes old timeliness and minutes old currency.

So what?

Even in shops that are state of the art – with the finest minds and the best processes – the results of thin cloning can be dramatic. One very large customer trying to close their books each quarter was struggling with a close period of over 20 days, with more than 20 major errors requiring remediation. With Delphix, that close is now 2 days, and the errors have become undetectable. For a large swath of customers, we’re seeing an average reduction of 20-30% in the overall development cycle. With Delphix, you’re DevOps ready, prepared for short iterations, and capable of delivering a smooth data supply at a much lower risk.

Shifting your quality curve left saves money. Data quality through fresh data is key to shifting that curve left. Delphix is the engine that delivers high quality, fresh data to the right person in a fraction of the time it takes today.

Uncategorized

Docker and Delphix architectures

December 2nd, 2015

In my last post I showed using Docker and Delphix to support WordPress.

I use WordPress for this blog. It works fine on its own; it's just me making a few posts here and there. Occasionally there are problems, like an upgrade that goes bad or a hack that gets some redirection code into the site. In those cases I have to go to a backup of the MySQL database that WordPress uses on my site. The database is small, so it's quick to back up, but I don't normally back it up. I know I should, and occasionally it would be nice to have a backup in the event that the data is corrupted somehow (like a hack into the contents of the database).

WordPress uses MySQL as its data store, and all WordPress content changes are stored in MySQL. The MySQL data can be linked to Delphix, which automates data management for MySQL (or any data) by providing backups, versioning and fast thin cloning of the data for use in development and QA.

Using WordPress as an example, there are a number of architectures we could use. First, we don't need Delphix or Docker at all and could just set it up, as I have with this blog, as

Screen Shot 2015-12-01 at 1.26.20 PM

One weakness of this architecture is that any changes to the WordPress website are made directly on the source. Why is that a problem? It's a problem if something goes wrong when deploying changes. How do you roll back an incorrect change? What happens if multiple developers are working on the WordPress site? Is there any way to version changes and keep one developer's changes separate from another's?

I just use WordPress for my personal blog, but what if you used it for your business and multiple people were making changes? In that case, ideally I want to make changes on a staging site and, once they are validated, push them to the production site.

Ideally, development on the WordPress site is done on a staging or development server.

Screen Shot 2015-12-01 at 3.48.12 PM

The question is: how do you keep the data on the development host in sync with the source host, and how do you roll changes from development into the source? One answer for deploying changes would be to use something like RAMP. So we can use something like RAMP to push changes to production, but how do we push changes made in production back to the staging environment? What about data coming into production such as comments, feedback and forms? How do we get that data back to the development environment? That's where Delphix shines.

Screen Shot 2015-12-01 at 3.33.01 PM

Delphix connects to the MySQL database on production and syncs all the data changes onto Delphix, providing a timeline (down to the second) of changes. These versions of the database can be provisioned out to a target host via what is called "thin cloning". When a thin clone is made, no data is moved or changed. Instead, an image of the data at that point in time is made available to the database instance, with the data mounted over NFS or iSCSI. The only things stored for a thin clone are the changes made to that clone, and those changes are visible only to the clone that made them. This architecture provides two things:

  1. Backups of production down to the second, for multiple weeks, generally stored in less than the size of the original database thanks to compression and deduplication, and accessible in a matter of minutes.
  2. Thin clones of the data, providing as many copies to as many developers as we want, for almost free.
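To make point 2 concrete, here is a toy model (my own illustration, not Delphix internals) of why a thin clone is almost free: every clone reads shared blocks from the parent image and stores only the blocks it has modified itself.

```python
# Toy model of thin cloning: clones share the parent image and store only their own changes.
class ThinClone:
    def __init__(self, parent):
        self.parent = parent      # shared, read-only point-in-time image
        self.changes = {}         # blocks modified by this clone only

    def read(self, block_no):
        return self.changes.get(block_no, self.parent[block_no])

    def write(self, block_no, data):
        self.changes[block_no] = data   # copy-on-write: the parent is never touched

source_image = {i: f"block {i}" for i in range(100_000)}   # the synced copy of production
dev1 = ThinClone(source_image)
dev2 = ThinClone(source_image)

dev1.write(42, "dev1's edit")
print(dev1.read(42))                          # "dev1's edit", visible only to dev1
print(dev2.read(42))                          # "block 42", dev2 still sees the original
print(len(dev1.changes), len(dev2.changes))   # 1 0, the storage consumed per clone
```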

Point 1, backups, is a huge peace of mind. Once Delphix is connected to the production/source database, backup is automatic and recovery is a stress-free few clicks of the mouse.

Point 2 supports a more robust development environment like

Screen Shot 2015-12-01 at 3.51.19 PM

In this environment I can have multiple target hosts where developers each work on their own private copy of the production database, and thus of the website. We can even have extra copies to test merging changes from different developers. What happens, though, if we want all the developer copies on one machine, like:

Screen Shot 2015-12-01 at 3.52.53 PM

The problem with this is that I don't know how to run multiple instances of WordPress on one machine. An easy solution is to use Docker containers so that each instance of WordPress is separate from the others, as in

Screen Shot 2015-12-01 at 3.56.09 PM

Docker containers are self-contained and don't impact each other (except potentially at the resource consumption level, such as CPU).
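As a rough sketch of this setup, the Docker SDK for Python can spin up one WordPress container per developer, each pointed at its own thin-cloned MySQL. The host names, ports and credentials below are made-up placeholders, and the MySQL copies themselves would be the Delphix-provisioned clones, not something this script creates.

```python
# Sketch: one WordPress container per developer, each using its own MySQL thin clone.
# Requires the Docker SDK for Python ("pip install docker") and a running Docker daemon.
import docker

client = docker.from_env()

# Hypothetical mapping of developer -> host:port of that developer's MySQL thin clone.
clones = {
    "dev1": ("target-host", 3306),
    "dev2": ("target-host", 3307),
}

for i, (dev, (db_host, db_port)) in enumerate(clones.items(), start=1):
    client.containers.run(
        "wordpress:latest",
        name=f"wp_{dev}",
        detach=True,
        ports={"80/tcp": 8080 + i},                       # each site on its own host port
        environment={
            "WORDPRESS_DB_HOST": f"{db_host}:{db_port}",  # the developer's MySQL clone
            "WORDPRESS_DB_USER": "wordpress",
            "WORDPRESS_DB_PASSWORD": "change-me",
            "WORDPRESS_DB_NAME": "wordpress",
        },
    )
```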

Docker containers are also quick to spin up, allowing quick failover when used in conjunction with Delphix, like

Screen Shot 2015-12-01 at 4.01.41 PM

Finally, we could combine the architectures to support quick failover, recovery, versioning and multiple developer environments, like

Screen Shot 2015-12-01 at 3.58.59 PM

In this case, our production MySQL database uses data directly from Delphix. This allows us to quickly roll back any changes by simply using Delphix to roll back to an earlier version of the database. We could also promote a developer copy directly to production. And if the host went down, we could fail over to another machine quickly by starting up a Docker WordPress container there and provisioning it in minutes with a thin clone from Delphix.

Uncategorized

5 Years of Delphix

September 22nd, 2015

Wow, five years have flown by. Five years ago I joined Delphix, and five years ago Delphix began the virtual data industry. Like all successful technologies, it has drawn many other vendors into the domain. For me, Delphix is amazing compared to other virtual data solutions: it is easy, fast and powerful. Check out this video of linking to a RAC database and provisioning a thin clone RAC database. How much work would it take to do the same on other technologies? Can one even sync and provision from RAC on other technologies? It's the only Data as a Service thin cloning solution that you can download and try (put “express” as your title on the form and I'll send you the free version of Delphix instead of the 30-day demo).

Delphix is doing powerful things that others may never do. Delphix is the only one (AFAIK) to share data blocks not only on disk but also in memory, meaning that if I have 10 virtual databases of the same 300GB source and 300GB of memory, then all the data can be cached for all the virtual databases: it's like 3TB of cache in 300GB of RAM. On top of that, Delphix compresses the cached data as well, at roughly 3:1, meaning we can cache that 300GB database in around 100GB of memory, so you can get those 3TB of effective cache in 100GB of RAM! We have a lot of back-end innovation that no one else has, and it comes from the architects and creators of RMAN, ZFS, DTrace, Flashback Database and Active Data Guard, all of whom we have hired (not to mention half a dozen OakTable members and numerous Oracle ACEs).
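The arithmetic behind that claim, spelled out (the numbers come from the paragraph above; the 3:1 compression ratio is a rough figure):

```python
# Shared cache arithmetic for 10 virtual databases cloned from one 300 GB source.
vdbs = 10
db_size_gb = 300
compression = 3                                   # roughly 3:1 block compression

logical_cache_gb = vdbs * db_size_gb              # what the VDBs collectively see cached
shared_cache_gb = db_size_gb                      # shared blocks are stored once
compressed_cache_gb = db_size_gb / compression    # and compressed in memory

print(logical_cache_gb, shared_cache_gb, round(compressed_cache_gb))   # 3000 300 100
```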

Other technologies are limited to saving some storage and doing backups.

Delphix's power is in accelerating application development. What Delphix has that others don't:

  • Self Service for developers.
    • Rollback – recover a virtual database to any point in time
    • Refresh – refresh to most recent source database data
    • Bookmark – mark data versions. Bookmarks can be shared with other developers
    • Branch – branch an existing data version or bookmarked data version to give QA a specific version/schema environment in minutes
    • Synchronize – refresh, rollback and branch multiple data sources together; for example, the Oracle installation, the application stack and the database can all be refreshed, bookmarked or rolled back together, or, even more powerful, multiple databases can be tracked together for applications that depend on more than one database
  • Cloud enabled – Delphix runs in AWS and we have partnerships with VMware vCloud, IBM cloud and Dell cloud
  • Download and try – wow, who wants a technology that requires sales guys and consultants just to try it? Not me! With Delphix you can download and try it and even run it for free (hint: put “Express” as your title on the Download and Try form and I'll send you the free version of Delphix instead of the 30-day demo)
  • All major databases: Oracle, SQL Server, Postgres, Sybase, DB2, MySQL, etc.
  • Supports any data – application stacks, EBS stack, SAP stack, etc.
  • Source syncing – allowing virtual database versions from any point in time
  • Open stack – a software solution independent of specialized hardware
  • Continuous integration – Delphix's ability to provision from any point in time and to branch a virtual database from a virtual database is about the only way to do continuous integration when continuous integration involves a large database
  • Five-year track record – other solutions have few customers of any size using them; Delphix has 100 of the Fortune 500 heavily using Delphix, as well as many more in the Fortune 2000
  • Best support in the industry I've ever seen. Delphix has an amazing team that has received accolade after accolade, not to mention easy access to the engineers who built the system. I worked at Oracle support for years and was also a customer of Oracle at times, so I know how challenging support can be. I also know that it's almost impossible to get support on new technologies that are not central to a company. Data virtualization is our company, and our company is the architects and industry leaders who built all the underlying technologies used to implement data virtualization.

Price

With Oracle EM Snap Clone, you have to set up a test master yourself. By default the test master is a copy of the source database that a DBA makes manually, and in that case you can't get a copy of tomorrow's source database unless you take a whole new copy. To get around this problem you can set up a Data Guard standby, which runs another $47,500 per core for the Data Guard instance, not to mention the Cloud Management Pack at $5,000/core, the Data Lifecycle Management Pack at $12,000/core and the Masking Pack at $11,500/core, plus you have to get specialized hardware: either EMC Symmetrix, NetApp or a ZFS appliance.
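Adding up the per-core list prices quoted above (the figures are as stated in this post; check current Oracle price lists before relying on them, and note that the specialized storage hardware is extra):

```python
# Per-core cost of the Oracle option packs quoted above for the Snap Clone setup.
packs_per_core = {
    "Database EE license for the Data Guard instance": 47_500,
    "Cloud Management Pack": 5_000,
    "Data Lifecycle Management Pack": 12_000,
    "Masking Pack": 11_500,
}
total = sum(packs_per_core.values())
print(f"${total:,} per core, before the specialized storage hardware")   # $76,000 per core
```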

Delphix is cheaper than the required Oracle Snap Clone packs, and Delphix doesn't need Data Guard since we sync directly with the source databases (or a standby). Delphix runs on any storage. No limitations. No manual work. All automated, synchronized and optimized. The price of Delphix vs. Oracle Snap Clone doesn't even matter: a number of Delphix customers have Unlimited License Agreements (ULAs) that cover Snap Clone and they still buy Delphix. Why? Because Delphix works. It's automated, easy and addresses the top industry concerns, which are

  • providing functionality faster
  • providing better functionality
  • providing higher quality functionality with fewer bugs
  • doing it all securely

Self Service:

Competitors say you can give developers access to their tools, but those tools are DBA or storage admin tools, not tools for developers.

Delphix has a safe, secure and easy developer-specific interface that allows developers to bookmark, branch, refresh and roll back their virtual databases, while also restricting what they can do and access. The developer interface, called Jetstream, also has cool features like allowing a developer to make a data version and share it with another developer via bookmarking and branching.

Source Syncing

The biggest flaw with some competitors, and it's a huge flaw, is that they can't sync with the source database. Their approach only works once the DBA manually copies a clone of the source onto the EMC, NetApp or ZFS storage at a single point in time. There is no time flow, and no way to use the data for backup and recovery at various points in time. To improve on this, the DBA has to set up Data Guard for each Oracle source and have the Data Guard datafiles stored on the thin clone storage. From there the DBA has to manage taking snapshots, collecting archive logs, purging old data and keeping track of which snapshots are used by which thin clones.

The competitors also have no automated time flow on the virtual databases (VDBs): no refresh, rollback, tagging or branching of VDBs. Oracle can take snapshots of virtual databases, but they have to be run manually and don't provide point-in-time recovery. With Delphix, out of the box, virtual databases can be recovered down to the second, to any point in time over the last two weeks.

Branching

Branching is power. On Delphix one can make a virtual database from a virtual database. This is really cool. For example, if a QA person finds a bug and logs it, yet development can't reproduce it, then often the bug languishes, or the developer has to come over and use the QA person's system while the QA person waits. With branching, the QA person can simply bookmark the problem data set, and the developer can branch a thin clone off that data set in minutes and reproduce the problem without ever interrupting the QA person and without any waiting. Branching has many powerful uses. Branching brings the equivalent of source control for code to data. With Delphix, it's now data control!
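The QA-to-developer hand-off described above boils down to two operations: the QA person bookmarks the data set that shows the bug, and the developer branches a new virtual database from that bookmark. The sketch below is purely illustrative; `JetstreamClient`, `bookmark()` and `branch()` are hypothetical names standing in for whatever bookmark/branch interface your tooling exposes, not the actual Delphix API.

```python
# Hypothetical wrapper around a bookmark/branch style interface (illustrative only).
class JetstreamClient:
    def bookmark(self, container, name):
        """Mark the current data version of a container and return a bookmark handle."""
        ...

    def branch(self, bookmark, new_container):
        """Provision a new thin-cloned container from an existing bookmark."""
        ...

js = JetstreamClient()

# QA finds a bug and marks the exact data version that reproduces it.
bug_bookmark = js.bookmark(container="qa-db", name="bug-1234-repro")

# The developer branches a private thin clone from that bookmark and debugs there,
# while QA keeps working on the original, untouched branch.
dev_clone = js.branch(bookmark=bug_bookmark, new_container="dev-bug-1234")
```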

Future

The future of Delphix is exciting. I'm loving all the DevOps and continuous integration work being done with Delphix, from integration with Ansible, Chef and Puppet to providing data containers to Docker.

 

SANYO DIGITAL CAMERA

photo by Sherri Lynn Wood

Uncategorized

Delphix Data as a Service (DaaS)

May 19th, 2015

The capabilities of Delphix can be differentiated from snapshot technologies through the following hierarchy:


Screen Shot 2015-05-19 at 4.39.57 PM

  1. Data as a Service (DaaS) (the Delphix approach to data management)
  2. Virtual Data (end-to-end collection and provisioning of thin clones)
  3. Thin Cloning
  4. Storage Snapshots
On top we have the most powerful and advanced data management features that enable fast, easy, secure, auditable data flow through organizations.
Screen Shot 2014-05-21 at 8.08.47 AM
DaaS is built on top of other technologies. At the bottom we have the minimal building blocks, starting with storage snapshots. Storage snapshots can be used to make "thin clone" databases. Storage snapshots have been around for nearly two decades but have seen minimal usage for database thin cloning due to the technical and managerial hurdles. Part of the difficulty is that creating thin clones requires work by multiple people and/or teams, such as DBAs, system admins and storage admins.

Why does it take so long to clone databases with file system snapshots? There are two reasons:

  • bureaucracy
  • technical challenges

Bureaucracy

Depending on your company, there will be more or fewer bureaucratic steps to get a thin clone database allocated (one customer reported 300 steps). If you are the DBA, storage and systems guru all rolled into one at a small company, then bravo, you can probably do it pretty quickly. On the other hand, if you wear all those hats, you are probably the crucial person in IT, and most critical IT processes grind to a halt because they depend on you and you are super busy.

Screen Shot 2014-05-23 at 4.01.39 PM

Why does it take so long to pass tasks between people and groups? Because a task that might take an hour when someone is completely free and idle will take multiple days once that person becomes 95% busy or more. See the following chart from the book The Phoenix Project:

Screen Shot 2014-05-23 at 4.02.15 PM

Technical Challenges

Screen Shot 2013-11-11 at 8.51.06 PM

The easiest way to create a clone is to snapshot the production storage. To snapshot the production storage, either shut down the source database and take a snapshot, or, more likely, put all the tablespaces in hot backup mode, take a snapshot, and then take all of the tablespaces out of hot backup mode. If the database spans more than one LUN, it may take special storage array options to snapshot all the LUNs at the same point in time. Once all the database LUNs have been snapshotted, you can use the snapshots to create a "thin clone" of the production database on the same storage as production.
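A minimal sketch of that sequence for Oracle, assuming a 10g-or-later database where database-level `ALTER DATABASE BEGIN/END BACKUP` is available, and assuming a hypothetical `snapshot_all_luns()` helper that drives your storage array's consistency-group snapshot (every array has its own CLI or API for this):

```python
# Sketch: crash-consistent array snapshot while the database is in hot backup mode.
import subprocess

def run_sql(statement):
    # Run one statement as SYSDBA via sqlplus; error handling omitted for brevity.
    subprocess.run(["sqlplus", "-S", "/ as sysdba"],
                   input=f"{statement}\nexit;\n", text=True, check=True)

def snapshot_all_luns():
    # Hypothetical placeholder: issue a consistency-group snapshot of every LUN
    # holding the database, using your storage array's own CLI or API.
    raise NotImplementedError("array-specific snapshot call goes here")

run_sql("ALTER DATABASE BEGIN BACKUP;")       # put all datafiles into hot backup mode
try:
    snapshot_all_luns()                       # all LUNs must be captured at the same instant
finally:
    run_sql("ALTER DATABASE END BACKUP;")     # always take the datafiles back out
```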

The problem with this scenario, no matter what storage you use, is that the clone is doing I/O on the same LUNs as production. The whole point of cloning production is to protect production, but in this case the clone's I/O will be hurting production. Oops.

Screen Shot 2013-11-11 at 8.51.31 PM

 


What we want to do is somehow get a copy of production onto non-production storage where we can snapshot it. This means making a full physical copy of production onto a "development filer." Once the copy has been made, we can make clones by snapshotting the copy. These snapshots then require configuration to make them available to target machines, either over Fibre Channel or by mounting them over NFS, and then recovering the database on the target machines.

The problem with this scenario: what if tomorrow we want a clone of production as it is that day? We only have the copy from yesterday, so we have to copy the whole of production onto the "development filer" again. Continually copying the source each time we need a clone at a different point in time defeats the purpose of creating thin clones in the first place.
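Some rough storage math shows why the daily re-copy hurts. Assume (my numbers, purely for illustration) a 1 TB source, a fresh point-in-time copy needed each day for two weeks, and about 2% of the blocks changing per day:

```python
# Full re-copy every day vs. one copy plus daily changed blocks (illustrative numbers).
source_tb = 1.0
days = 14
daily_change_rate = 0.02

full_copy_each_day = source_tb * days                                      # 14.0 TB on the filer
one_copy_plus_changes = source_tb + days * daily_change_rate * source_tb   # 1.28 TB

print(f"{full_copy_each_day:.2f} TB vs {one_copy_plus_changes:.2f} TB")
```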

 

Delphix is the solution

In order to overcome the obstacles to creating thin clones, all the steps can be optimized and automated with a technology called "Virtual Data" (analogous to virtual machines).
Virtual data is just the first step in automation. The next step is adding all the processes, functionality and control needed to manage the virtual data, which is DaaS.
File system snapshots address only the very bottom of the hierarchy; that is, they only manage storage snapshots. They have no automated thin cloning of databases. Without automated thin cloning of databases there is no end-to-end processing of data from source to thin-cloned target, i.e. virtual data. Without virtual data there is no DaaS.
DaaS features, all of which are encompassed by Delphix, include
  • Security
    • Masking
    • Chain of custody
  • Self Service
    • Login and Roles
    • Restrictions
  • Developer
    • Data Versioning and Branching
    • Refresh, Rollback
  • Audit
    • Live Archive
  • Modernization
    • Unix to Linux conversion
    • Data Center migration
    • Federated data cloning
    • Consolidation

DaaS re-invents data management and provisioning by virtualizing, governing, and delivering data on demand.

Most businesses manage data delivery with manual, ad hoc processes: users file change requests, then wait for DBAs, systems administrators, and storage administrators to push data from system to system, bogging down production applications, networks, and target systems with long load times. Data delays cost businesses billions a year in lost productivity and low utilization of systems and software resources.

As a result, there is an enormous opportunity to optimize data management. Optimizing data management with DaaS yields significant business impact:

  • Drive revenue, competitive differentiation with faster application time to market
  • Enable faster growth via better release management of enterprise applications
  • Improve customer intimacy, upsell, cross-sell with faster, more flexible analytics
  • Free budget for innovation by reducing IT maintenance costs
  • Reduce compliance risk through better governance, data security.

Businesses need to manage data as a strategic asset across their operations, applying the same rigor as supply chain optimization for manufacturing companies.

DaaS Transformation Process with Delphix

Delphix applies a three-step process to transform the data supply chain:

  • Analyze: survey systems, processes, teams across data supply chains
  • Transform: virtualize, automate data delivery with centralized governance
  • Leverage: drive business value via new data products, process optimization

Businesses typically manage multiple data supply chains simultaneously, all of which are targets for data chain optimization:

  • Compliance retention, reporting
  • Modernization, migration projects
  • Application projects and development
  • BI, analytics
  • Data protection.

Delphix re-invents the data supply chain with its DaaS:

  • Install data engines in hours across all repositories, locations (including cloud)
  • Connect: non-disruptively sync data across sites, systems, architectures
  • Control: secure data, track release versions, preserve and prove data history
  • Deploy: automatically launch virtual data environments in 10x less space, time
  • Leverage data with self service refresh, reset, branching, bookmarks, integration.

Uncategorized

Put Delphix on your laptop at Oracle Jan 28 !

January 27th, 2015

 

#CloneAttack

Create an army of clone databases and applications in minutes

cloneattack

Tomorrow, Jan 28, we will be installing Delphix on people's laptops at the BIWA conference at the Oracle conference center at Oracle headquarters in Redwood Shores.

Prerequisites

  • Laptop, either
    • Mac: VMware Fusion or VirtualBox
    • Linux: VMware Workstation or VirtualBox
    • Windows: VMware Workstation or VirtualBox
  • at least 8 GB RAM
  • at least 50 GB free disk space, but preferably 100 GB free
  • at least a 2 GHz CPU, preferably dual-core or better

We'll provide a USB stick with 3 virtual machine OVA files. Just start up the VMs and in a few minutes you will be thin cloning Oracle databases, Postgres databases and web applications.

Example of the installation

Example of provisioning a database with web application using #CloneAttack

Uncategorized

Top 3 criteria to choose a virtual data solution 

January 6th, 2015

3105581280_58d4132191_z

photo by Thomas Hawk

Data virtualization solutions, also known as Copy Data Management (CDM), Virtual Copy Data (VCD) and Virtual Data Appliances (VDA), are rising rapidly: over 100 of the Fortune 500 adopted data virtualization solutions between 2010 and the end of 2015. Adoption is hardly surprising given that virtual data reduces the time to provision copies of large data sets from days down to minutes and eliminates most of the space required for copies of data. How many copies of large data sets do companies have? Database vendor Oracle claims that on average a customer has 12 copies of each production database in non-production environments such as development, QA, UAT, backup, business intelligence and sandboxes, and Oracle expects the number of copies to double by the time their latest version, Oracle 12c, is fully adopted. With Fortune 500 companies often having thousands of databases, and those databases reaching multiple terabytes in size, the downstream storage costs of these data copies can be staggering.

There are a number of virtual data solutions coming onto the market and several already in the marketplace, such as Oracle, Delphix and Actifio. Delphix and Actifio are listed in The 10 Coolest Virtualization Startups Of 2014, Delphix is listed in TechTarget's Top Ten Virtualization Companies in 2014, and Forbes Magazine named Delphix one of America's Top 25 Most Promising Companies of 2014. Oracle, too, is flooding its product line with data virtualization offerings such as Clone DB, Snap Clone, the Snapshot Management Utility for ZFSSA and ACFS thin cloning in Oracle 12c, and new vendors will be coming to market over the next year.
 
Questions to ask when looking at data virtualization solutions are:
  • What unique features does each vendor provide to help achieve my business goals?
  • Does the solution support my full IT environment, or is it niche/vendor specific?
  • How much automation, self-service and application integration is pre-built vs. requires customization?
  • Are customers similar to mine in size and nature using the solution?
  • Is the solution simple and powerful or just complicated?
Picking between the available solutions is further complicated by the common claims made by all the solutions in the market, so we've come up with a list of the top 3 criteria for choosing between them.
 
Top 3 criteria for choosing a virtual data solution
 
The top 3 questions to ask when looking at a virtual data solution are
  1. Does the solution address your business goals?
  2. Does the solution support your entire IT landscape?
  3. Is the solution automated, complete and simple?
1. Address business goals
The first step is to identify the business problems and clarify whether the solution meets your business goals. The top use cases for data virtualization in the industry are:
    • Storage savings
    • Application development acceleration
    • Data protection & production support
Deciding which of the above use cases apply will help in determining the best solution.

Storage savings

All data virtualization solutions offer storage savings by the simple fact that virtual data provides thin clones of data, meaning that each new copy of data initially takes up no new space. New space is only used after a data copy begins to modify data; modified data requires additional storage.

Comparing storage savings

To compare the storage savings of various solutions, find out how much storage is required to store new modifications and how much storage is required to initially link to a data source. Of the solutions we've looked at, the initially required storage ranges from 1/3 the size of the source data up to 3x the size of the source data. Some of the solutions we've looked at can store newly modified data in 1/3 the actual space thanks to compression; other solutions don't have compression, and some solutions have to store redundant copies of changed data blocks.
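To put those ranges in perspective, here is the same comparison for a hypothetical 10 TB source (the 1/3x and 3x factors are the ranges mentioned above, not measurements of any particular product):

```python
# Initial storage footprint needed just to link a 10 TB source, before any clones exist.
source_tb = 10
best_case_tb = source_tb / 3      # solution that compresses the initial linked copy
worst_case_tb = source_tb * 3     # solution that needs 3x the source just to start

print(f"best case ~{best_case_tb:.1f} TB, worst case {worst_case_tb:.0f} TB")
```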

Data agility more important than storage savings

 
Storage savings can be massive, but surprisingly, of the hundreds of virtual data adopters we've talked to, most say that data agility is far more important than the storage savings. Agility means that a virtual copy can be made in minutes, instead of the more traditional full physical copies, which can take hours, days or even weeks to make for large databases.

Application development acceleration

 
The agility that virtual data provides, such as provisioning a full read-writable copy of a multi-TB database in minutes, can improve the efficiency of many different parts of a company, but the area where we see the biggest improvement is application development. Companies report 20-80% improvement in application development timelines after moving to data virtualization solutions. Application development typically requires many copies of source data when developing and/or customizing an application. These copies of data are required not only by the developers but also by QA.

User friendly self service interface
When it comes to identifying the best data virtualization solution for application development, look for solutions that provide user-friendly, self-service, developer-specific interfaces. Some solutions only provide interfaces for DBAs or storage administrators. Administrator-specific interfaces will continue to impede developers, since developers will have to request copies from these administrators and incur wait time, especially when those administrators are already busy. The improvements to application development come when the solution gives users self-service interfaces through which they can directly make copies of data, eliminating the costly delays of waiting for data.
Developer Centric Interface
When looking at application development acceleration, make sure the solution has a developer-centric interface with per-developer logins that can enforce the correct security level: limiting what data developers have access to, how many copies they can make and how much extra storage they can use when modifying data. Data typically has sensitive content that should be masked before giving the data to developers; in the case of sensitive data, look for solutions that include data masking. It is also important to look for developer interfaces that give developers standard development functionality such as data versioning, refreshing, bookmarking and rollback. Can one developer bookmark a certain version of a database, and can another developer branch a copy from that bookmark to look at a certain use case or bug?
Branching of virtual data copies crucial for QA support
The most important feature for application development acceleration is the ability of the solution to branch data copies. Branching data copies means making a new thin clone copy from an existing thin clone copy. Some solutions have this feature and some do not. Why is branching important? Branching is important for a number of reasons, such as being able to branch a developer's copy of data from a time before they made an error in data changes, such as dropping a table. More importantly, branching is essential for being able to spin up copies of data for QA directly from development. One of the biggest bottlenecks in development is supplying QA with the correct version of the data or database to run the QA tests. If there is a development database with schema changes and/or data modifications, then instead of having to build a new copy for QA to use, with data virtualization and branching one can branch a new clone, or many clones for that matter, and give them to QA in minutes, all the while development continues to use the data branch they were working on.
Data protection for developer virtual copies
Finally, some data virtualization solutions offer data protection for development databases by default. Development databases are often not backed up as they are considered "just development," but we see an order of magnitude more incidents of developers inadvertently corrupting data on development databases than of production DBAs accidentally damaging data on production databases. Ask the data virtualization vendors whether they can provide branches of a damaged development database, down to the second, at a point in time before the developer accidentally damaged it. Some solutions offer no protection, others offer manual snapshots of points in time, and the best simply and automatically provide a time window of multiple days into the past from which a virtual database can be branched off if there were any mistakes or data corruption.
Data protection & production support

Data virtualization solutions can provide powerful data protection. For example, if someone corrupts data on production, such as dropping a table, or a batch job errors out having modified only part of the data, a virtual database can be spun up in minutes and the uncorrupted data exported from the virtual database and imported into the production database. We have heard numerous stories of the wrong table being dropped on production, or a batch job deleting and/or modifying the wrong data, with the changes propagated immediately to the standby and thus being unrecoverable from the standby.
Data virtualization can save the day, recovering the data in minutes. Data virtualization can offer impressively fine-grained and wide time windows for Recovery Point Objectives and fast Recovery Time Objectives.
Time window size and granularity
When looking at data virtualization solutions for data protection, make sure the solution provides a time flow, i.e. a time window of changes from the source data from which virtual copies can be made. Some solutions have no time window, other solutions have occasional snapshots of past states of data, and the best solutions offer recovery to any point in time, down to the second, within a time window.
Time window storage requirements
The larger the time window of changes collected from the past, the more storage will be required. Find out how much storage is required to maintain the time window. Some solutions require significant storage for this time window, while others can store an entire multi-week time window in roughly the size of the original data source thanks to compression.
Time and ease of provisioning

Finally, look into how easy or difficult it is to provision the data required. If the data required is a database, then provisioning the data can be a complicated task without automation. Does the solution offer point-and-click provisioning of a running database, down to the second, at a past point in time? How easy or difficult is it to choose the point in time from which the data is provisioned? Is choosing a point in time a simple UI widget, or does it require manual application of database logs or manual editing of scripts?

2. Support your entire IT landscape

Is the solution a point solution or does it expand to all the needs of the IT department?
Is the solution specific to a few use cases or does it scale to the full Enterprise requirements?
Is the solution a single data type solution such as only Oracle databases?

Is the solution software that runs on any hardware, or does it require specialized hardware? Does the solution use any storage system in your IT landscape, or is it restricted to specialized storage systems? Will the solution lock you into a specific storage type, or will it allow full flexibility to use new storage types as they become market leaders, such as new, better and more affordable flash storage systems? Does your IT landscape use the cloud, and does the solution support your IT department's cloud requirements?

Does the solution support all of your data types and operating systems? For example, does your IT landscape use any of the following databases, and does the solution automate support for them?
    • Oracle
    • Oracle RAC
    • SQL Server
    • MySQL
    • DB2
    • Sybase
    • PostGres
    • Hadoop
    • Mongo

Does your IT landscape require data virtualization for any of the following, and does the solution automate support for these data types?

    • web application tiers
    • Oracle EBS
    • SAP
    • regular files
    • other datatypes

Does your IT landscape use, and does the solution support, all of your operating system types?

    • Linux
    • HP/UX
    • Solaris
    • Windows
    • AIX

3. Fully Automated, Complete and Simple

Automated 

How automated is the solution? Can an end user provision data, or does it require a specialized technician such as a storage admin or DBA? When provisioning databases such as Oracle, SQL Server or MySQL, does the solution fully and automatically provision a running database, or are manual steps required? For example, some solutions only provision data from a single point in time from the data source. What if a user requires a different point in time? How much manual intervention is required? Some solutions only support provisioning data from specific snapshots in the past. What if a user requires a specific point in time that falls between snapshots? How much manual intervention is required? Does the solution collect changes automatically from the data source, or does it require other tools or manual work to collect changes from the source or get newer copies of source data?

Complete
How complete is the solution?
Is the solution a point solution for a specific database like Oracle or does it support multiple database vendors as well as application stacks and other file types?
Does the solution include masking of data?
Does the solution include replication or other backup and fail over support?
Does the solution sync with a data source and collect changes, or is it simply an interface for managing storage array snapshots?
Does the solution offer point in time recovery down to the second or is it limited to occasional snapshots?
Does the solution provide interfaces for your end user self-service?
Does the solution offer performance monitoring and analytics?
Does the solution provide data sharing on disk only, or does it share data at the caching layer as well?

Simple

How long does it take to install the solution? We’ve seen systems set up in 15 minutes and others take 5 days.
How easy or hard is it to manage the solution? Can the solution be managed by a junior DBA or junior IT person or does it require expert storage admins and DBAs?

Does the solution come with an alerting framework to make administration easier?

Does the interface come with a "single pane of glass" that can expand to thousands of virtual data copies across potentially hundreds of separate locations in your IT landscape?

Is it easy to add more storage to the solution? Is it easy to remove unneeded storage from the solution?

 

In Summary

Find out how powerful, flexible and complete the solution is.

Is the solution a point solution or a complete solution?
Some solutions are specific point solutions, for example only for Oracle databases. Some solutions are point solutions tied to specific hardware or storage systems, while others are complete software solutions. Complete, flexible solutions sync automatically with source data, collect all changes from the source (providing data provisioning down to the second from anywhere within that time window), support any data type or database on any hardware, and support the cloud.

Does the solution provide self service and user functionality?

  • Point-in-time provisioning
  • Reset, branch and rollback of environments
  • Refresh parent and children environments with the latest data
  • Provision multiple source environments to the same point in time
  • Automation / self-service / auditing capabilities

Some simple technical differentiators

  • support your data and database types on your systems and OS
  • support your data center resources or require specialized hardware or storage
  • sync automatically with source data or does it leave syncing as a manual exercise or require other solutions
  • provision data copies down to the second from an extended time window into the past
  • branch virtual data copies
  • cloud support included

But when it comes down to it, even after asking all these questions, don't believe the answers alone. Ask the vendor to prove it. Ask the vendor to provide in-house access to the solution and see how easy or hard it is to install, manage and execute the functionality required.

 

For more information also see

 

 

Uncategorized