
Author Archive

SQL*Plus on Mac

December 1st, 2016

I would have thought installing SQL*Plus on the Mac would be point, click, download, install, and bam, it works.

Nah

It did install mostly straightforwardly on my old Mac. Got a new Mac and no dice.

Tried installing it myself, guessing at the downloads. First of all, why isn't there just one download?

Downloaded the instantclient package and the instantclient package with SQL*Plus, which turns out to be the correct combination, but no dice. Still got errors.

Got errors. Gave up.

Came back again to look at it yesterday and followed this:

https://tomeuwork.wordpress.com/2014/05/12/how-to-install-oracle-sqlplus-and-oracle-client-in-mac-os/

It worked like a charm.

Then I ran a shell script, oramon.sh, that used SQL*Plus and got this error:

dyld: Library not loaded: /ade/dosulliv_sqlplus_mac/oracle/sqlplus/lib/libsqlplus.dylib
Referenced from: /Applications/oracle/product/instantclient_64/11.2.0.4.0/bin/sqlplus
Reason: image not found

so I set my environment variable

export DYLD_LIBRARY_PATH=$ORACLE_HOME/lib

Let’s check the environment variable:

$ env | grep LIB
$

Huh?! Where is my environment variable?

Apparently you can't set DYLD_LIBRARY_PATH on the Mac without overriding some security settings, which didn't sound attractive.
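If you're curious whether that is what's happening, here is a minimal check (my assumption being that the security feature in question is macOS System Integrity Protection, which strips DYLD_LIBRARY_PATH from the environment of protected programs):

# check whether System Integrity Protection is enabled
csrutil status
# System Integrity Protection status: enabled.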

I googled around and found

https://blog.caseylucas.com/2013/03/03/oracle-sqlplus-and-instant-client-on-mac-osx-without-dyld_library_path/

which didn’t work for me. Then found

https://blogs.oracle.com/taylor22/entry/sqlplus_and_dyld_library_path

which solves it by setting the library path inside an alias for SQL*Plus. Cool!

alias sqlplus="DYLD_LIBRARY_PATH=/Applications/oracle/product/instantclient_64/11.2.0.4.0/lib sqlplus"

and I put that into my ~/.bashrc and it worked!
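For reference, the relevant lines in my ~/.bashrc end up looking roughly like this (a minimal sketch; the instant client path is from my install and will differ on other machines):

# Oracle instant client setup (adjust the path to your install)
export ORACLE_HOME=/Applications/oracle/product/instantclient_64/11.2.0.4.0
export PATH=$ORACLE_HOME/bin:$PATH
# set DYLD_LIBRARY_PATH only for the sqlplus command itself, since the exported
# variable gets stripped by the Mac's security settings
alias sqlplus="DYLD_LIBRARY_PATH=$ORACLE_HOME/lib sqlplus"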


Started at Amazon! … want to join me?

August 29th, 2016

(Disclaimer: any opinions expressed here are fully my own and not representative of my employer)


photo by alvaroprieto  (cc 2.0)

Super excited to be working at Amazon on my passion, which is performance data visualization and database monitoring. Suffice it to say this is the most excited I've been about work in my career, and I've had ample opportunity to work on database performance in the past: at Oracle (where I helped design the performance pages and designed the Top Activity page), at Quest (now Dell) on Spotlight, on my own free tools (ASHMon, S-ASH, W-ASH, Oramon, etc.), and at Embarcadero, where our team produced DB Optimizer, which extended sampling and average active sessions to SQL Server, DB2, and Sybase (not to mention Visual SQL Tuning). The work here at Amazon looks set to largely surpass all previous work.

More news to come as I settle in.

In the meantime, Amazon is looking to hire! We are looking for product managers, developers, service people, etc., mainly senior people with a good track record. Please feel free to contact me if (and only if)

  • you are senior in your career  and/or
  • we have personally worked together  and/or
  • you have done something innovative already in your career (a free tool, a new design, etc).

Please refrain from contacting me about junior positions.  If you are interested in junior positions please look at the Amazon jobs listed on their website. Amazon is hiring aggressively!

These positions are almost all out of Seattle. There is some chance of working in Vancouver or Palo Alto, though it would be recommended to work out of Seattle.

One specific position on my group's team is a Data Engineer to work on reporting. Here is the job listing from the Amazon site:

 

External job description:

Amazon Relational Database Service (Amazon RDS) is an industry-leading web service that makes it easy to set up, operate, and scale a relational database in the cloud using any of the leading database engines – MySQL, MariaDB, PostgreSQL, SQL Server and Oracle – as well as Amazon's own MySQL-compatible database engine, Aurora. We are looking for a seasoned and talented data engineer to join the team in our Seattle headquarters. More information on Amazon RDS is available at http://aws.amazon.com/rds.

The data engineer must be passionate about data and the insights that large amounts of data can provide, and have the ability to contribute major novel innovations for our team. The role will focus on working with a team of product and program managers, engineering leaders, and business leaders to build pipelines and data analysis tools to help the organization run its business better. The role will focus on business insights, deep data and trend analysis, operational monitoring and metrics, as well as new ideas we haven't had yet (but you'll help us have!). The ideal candidate will possess both a data engineering background and strong business acumen that enables him/her to think strategically and add value to help us improve the RDS customer experience. He/she will experience a wide range of problem-solving situations, strategic to real-time, requiring extensive use of data collection and analysis techniques such as data mining and machine learning. In addition, the data engineering role will act as a foundation for the business intelligence team and be forward facing to all levels within the organization.

· Develop and improve the current data architecture for RDS
· Drive insights into how our customers use RDS, how successful they are, where our revenue trends are going up or down, how we are helping customers have a remarkable experience, etc.
· Improve upon the data ingestion models, ETLs, and alarming to maintain data integrity and data availability.
· Keep up to date with advances in big data technologies and run pilots to design the data architecture to scale with the increased data sets of RDS.
· Partner with BAs across teams such as product management, operations, sales, marketing and engineering to build and verify hypotheses.
· Manage and report via dashboards and papers the results of daily, weekly, and monthly reporting

 

Basic Qualifications
· Bachelor’s Degree in Computer Science or a related technical field.
· 6+ years of experience developing data management systems, tools and architectures using SQL, databases, Redshift and/or other distributed computing systems.
· Familiarity with new advances in the data engineering space such as EMR and NoSQL technologies like Dynamo DB.
· Experience designing and operating very large Data Warehouses.
· Demonstrated strong data modelling skills in areas such as data mining and machine learning.
· Proficient in Oracle, Linux, and programming languages such as R, Python, Ruby or Java.
· Skilled in presenting findings, metrics and business information to a broad audience consisting of multiple disciplines and all levels of the organization.
· Track record for quickly learning new technologies.
· Solid experience in at least one business intelligence reporting tool, e.g. Tableau.
· An ability to work in a fast-paced environment where continuous innovation is occurring and ambiguity is the norm.

 

Preferred Qualifications
· Master’s degree in Information Systems or a related field.
· Capable of investigating, familiarizing and mastering new datasets quickly.
· Knowledge of a programming or scripting language (R, Python, Ruby, or JavaScript).
· Experience with MPP databases such as Greenplum, Vertica, or Redshift
· Experience with Java and Map Reduce frameworks such as Hive/Hadoop.
· 1+ years of experience managing an Analytic or Data Engineering team.
· Strong organizational and multitasking skills with ability to balance competing priorities.

 


Oaktable World 2016 Sept 19 & 20 is on!!

August 26th, 2016

Having taken a new job at Amazon just two weeks ago and moving to Seattle (!), I didn't have the time, nor was it practical, to set up Oaktable World this year. Luckily Kellyn Pot'vin has taken over the mantle!

Get the info straight from the horse's mouth at her blog.

The following content has been graciously supplied from Kellyn’s blog:

Oak Table World is FREE to the PUBLIC!  We don’t require an Oracle Open World badge to attend, so bring a friend and they’ll owe you big time!

Here is the current schedule:

The Great Dane

Mogens Norgaard will be opening Oak Table World on

  • Monday, at 8:30am.

Be prepared to be awe-inspired by all he has to share with you, (which may go hand-in-hand with the amount of coffee we can provide to him…)

[Schedule images: Monday and Tuesday]

Location

Moscone Center


Denial of Service (DoS) attacks continue

August 24th, 2016

It's frustrating to have to spend time jumping off into web security and WordPress configuration when there are so many other things that are important to be doing. Today the DoS continued and the Jetpack solution didn't seem to work. The other two solutions from DigitalOcean didn't seem reasonable. One was to re-install WordPress with their install version; nice that they offer a better security-protected version, but I didn't feel like re-installing my WordPress. The other option basically eliminated all access to xmlrpc.php. Looking around I found a plugin that does firewall work and had just added functionality for the xmlrpc.php problem, called NinjaFirewall.

Problem is, after I installed it I was getting 500 "Internal Server Error" responses trying to access this blog.

Turns out the solution is to add a few lines to /etc/apache2/apache2.conf

such as

<Directory /var/www.kyle>

        Options FollowSymLinks

        AllowOverride All

</Directory>

where my WordPress files are hosted in  /var/www.kyle

This didn't work, and I went down many ratholes trying other things. The problem was that there was another line in my apache2.conf that said

<Directory /var/www>

        Options FollowSymLinks

        AllowOverride FileInfo

</Directory>

I had done some hacking, like changing all "AllowOverride None" entries to "AllowOverride All", but I hadn't looked for "AllowOverride FileInfo". The second part is that "/var/www" is a link to "/var/www.kyle", thus overriding my "AllowOverride All". Long story short, changing

<Directory /var/www>

        Options FollowSymLinks

        AllowOverride FileInfo

</Directory>

to

<Directory /var/www>

        Options FollowSymLinks

        AllowOverride All

</Directory>

fixed the problem.
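A quick sanity check after a change like that (a small sketch; the commands assume Ubuntu's Apache layout) is to confirm the symlink and test the config before reloading:

# confirm that /var/www really is a link to the WordPress directory
ls -ld /var/www
# check the Apache configuration for syntax errors
apachectl configtest
# reload Apache so the new AllowOverride setting takes effect
service apache2 reload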

Then I was able to install NinjaFirewall and configure it.

Going to the sidebar in the WordPress admin view, select "Firewall Policies",


then select "Block any access to the API" for "WordPress XML-RPC API".

That works. Now the Apache log shows 403 errors for access to xmlrpc.php:

root@datavirtualizer:/etc/apache2# tail -f /var/log/apache2/other_vhosts_access.log
104.131.152.183:80 154.16.199.74 - - [23/Aug/2016:20:31:47 -0400] "POST /xmlrpc.php HTTP/1.1" 403 376 "-" "Googlebot/2.1 (+http://www.google.com/bot.html)"
104.131.152.183:80 154.16.199.74 - - [23/Aug/2016:20:31:47 -0400] "POST /xmlrpc.php HTTP/1.1" 403 376 "-" "Googlebot/2.1 (+http://www.google.com/bot.html)"
104.131.152.183:80 154.16.199.74 - - [23/Aug/2016:20:31:48 -0400] "POST /xmlrpc.php HTTP/1.1" 403 376 "-" "Googlebot/2.1 (+http://www.google.com/bot.html)"
104.131.152.183:80 154.16.199.74 - - [23/Aug/2016:20:31:48 -0400] "POST /xmlrpc.php HTTP/1.1" 403 376 "-" "Googlebot/2.1 (+http://www.google.com/bot.html)"
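A quick way to verify the block from outside (a sketch; the URL is a placeholder, substitute your own site) is to POST to xmlrpc.php with curl and check the return code:

# expect 403 now that the XML-RPC API is blocked (example.com is a placeholder)
curl -s -o /dev/null -w "%{http_code}\n" -X POST http://www.example.com/xmlrpc.php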

Actually I think an alternative and better method (don't trust me, I don't fully understand the options) is to leave the xmlrpc blocking off and go to "Login Protect". Choose "only under attack".


See the NinjaFirewall documentation for more info.

Whoever is behind 154.16.199.74 seems to be the main culprit.
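For what it's worth, a one-liner like this (a sketch, assuming the vhost log format shown above with the client IP in field 2) ranks the offending IPs:

# count xmlrpc.php requests per client IP, worst offenders first
grep xmlrpc /var/log/apache2/other_vhosts_access.log | awk '{print $2}' | sort | uniq -c | sort -rn | head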

Is there any way to report stuff like this?

IP address: 154.16.199.74
Address type: IPv4
ISP: Host1plus-cloud-servers
Timezone: America/Los_Angeles (UTC-7)
Local time: 17:33:55
Country: United States
State / Region: Nevada
District / County: Clark County
City: Las Vegas
Zip / Postal code: 89136
Coordinates: 36.1146, -115.173

 

 

Update

Decided to see what effect the firewall had in the logs.

The logs look like

104.131.152.183:80 ::1 - - [22/Aug/2016:06:42:05 -0400] "OPTIONS * HTTP/1.0" 200 110 "-" 
104.131.152.183:80 ::1 - - [22/Aug/2016:06:42:05 -0400] "OPTIONS * HTTP/1.0" 200 110 "-" 
104.131.152.183:80 ::1 - - [22/Aug/2016:06:42:06 -0400] "OPTIONS * HTTP/1.0" 200 110 "-" 
104.131.152.183:80 ::1 - - [22/Aug/2016:06:42:06 -0400] "OPTIONS * HTTP/1.0" 200 110 "-" 

so we have the date in field 5 and the HTTP return code in field 10.
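A quick sanity check on those field positions (a minimal sketch against the same log) is to print just those two fields:

# print only the timestamp and return code fields to confirm the positions
awk '{print $5, $10}' /var/log/apache2/other_vhosts_access.log | head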

I wrote an awk script to get the date truncated to the hour and the count of return codes, by code type, per hour. (After the sed commands below split fields on ':' and '[', the date lands in awk's $6, the hour in $7, and the return code in $14.)

#!/bin/bash
# log.awk: count xmlrpc.php hits per hour, broken down by HTTP return code
# usage: ./log.awk other_vhosts_access.log
grep xmlrpc $1 |  \
sed -e 's/:/ /g' | \
sed -e 's/\[/ /g' | \
awk 'BEGIN {
            # pre-seed the return codes of interest so every column always prints
            codes[200]=0;
            codes[401]=0;
            codes[403]=0;
            codes[404]=0;
            codes[405]=0;
            codes[500]=0;
           }
{
      # after the sed substitutions: $6 = date, $7 = hour, $14 = HTTP return code
      dt=sprintf("%s%s",$6,$7);
      dates[dt]=dt
      cnt[dt,$14]+=1;
}
END {
     # header row: one column per return code
     printf "%14s , ", "date"
     for ( code in codes ) {
       printf "%7s , " , code
     }

     # one row per hour with the count for each return code
     for ( dt in dates ) {
       printf  "\n%14s ", dt
       for ( code in codes ) {
          printf  ", %7i ", cnt[dt,code]+0 ;
       }
     }
   print " "
}'

root@datavirtualizer:/var/log/apache2# ./log.awk other_vhosts_access.log
          date ,     401 ,     403 ,     200 ,     404 ,     500 ,     405 , 
 24/Aug/201600 ,    1757 ,       0 ,     217 ,       0 ,       0 ,       0 
 24/Aug/201601 ,    2833 ,       0 ,      96 ,       0 ,       0 ,       2 
 24/Aug/201602 ,     610 ,       0 ,     502 ,       0 ,       0 ,       1 
 24/Aug/201603 ,     666 ,       0 ,     401 ,       0 ,       0 ,       1 
 24/Aug/201604 ,    1555 ,       0 ,      98 ,       0 ,       0 ,       0 
 24/Aug/201605 ,    2927 ,       0 ,     104 ,       0 ,       0 ,       0 
 24/Aug/201606 ,    3914 ,       0 ,      98 ,       0 ,       0 ,       1 

then plotted it in Excel. In Excel I just cut and pasted from Unix, chose the import wizard, and chose comma delimited:

 

[Screenshot: Excel import wizard with the comma-delimited data]
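Alternatively (a small sketch; the output filename is just my choice), you could redirect the script output to a file and open that in Excel directly:

./log.awk other_vhosts_access.log > xmlrpc_by_hour.csv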

 

The plot looks like this:

[Chart: xmlrpc.php return codes per hour]

 

We can see that after applying the firewall at 5pm yesterday, i.e. 17:00 hours (which shows up as 23/Aug/201617 in the x-axis legend), there is a spike in 403s (Forbidden) when I first blocked all access to xmlrpc.php, and then 401s (Unauthorized) after I changed the option to ask for a username and password after too many accesses within a few seconds.


Denial of Service (DoS) attack on this site

August 22nd, 2016

This site had been running for years with no big issues.
I had performance and system saturation issues about 3 years ago, and then moved the site to DigitalOcean.com.

DigitalOcean.com is inexpensive and the performance is awesome.

Then last Monday, and every day since, the site had been going down.
The simplest "solution" for me was just to get on and bounce the machine.

That cleared it up.

After this went on for a few days I contacted DigitalOcean, saying I didn't see how it could be an issue with them, but I'd ask anyway.

Sure enough they had identified the issue, which had nothing to do with them, and gave me the solution.

Read their solution page for more information.

Basically the problem is a DoS attack using xmlrpc.php from WordPress.
To verify this I looked into the logs, and sure enough there is rapid access to xmlrpc.php:

cd /var/log/apache2

grep xmlrpc *

other_vhosts_access.log.1:104.131.152.183:80 37.1.214.203 - - [14/Aug/2016:10:09:25 -0400] "POST /xmlrpc.php HTTP/1.1" 500 569 "-" "-"
other_vhosts_access.log.1:104.131.152.183:80 37.1.214.203 - - [14/Aug/2016:10:09:25 -0400] "POST /xmlrpc.php HTTP/1.1" 500 0 "-" "-"
other_vhosts_access.log.1:104.131.152.183:80 37.1.214.203 - - [14/Aug/2016:10:09:26 -0400] "POST /xmlrpc.php HTTP/1.1" 500 569 "-" "-"

There are a number of solutions, but the easiest for me was to use the Jetpack plugin, which comes with a "Protect" option. After activating the Protect option, sure enough, the xmlrpc.php accesses stop. In the following grep we see rapid xmlrpc.php access just before the Jetpack option is turned on, and then it stops. Yay.

root@datavirtualizer:/var/log/apache2# date

Mon Aug 22 12:58:07 EDT 2016

root@datavirtualizer:/var/log/apache2# grep rpc other_vhosts_access.log

104.131.152.183:80 154.16.199.74 - - [22/Aug/2016:07:33:28 -0400] "GET /xmlrpc.php HTTP/1.1" 500 569 "-" "PycURL/7.19.7"
104.131.152.183:80 64.137.253.68 - - [22/Aug/2016:09:01:24 -0400] "POST /xmlrpc.php HTTP/1.1" 500 569 "https://google.com/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.79 Safari/537.36"
104.131.152.183:80 114.44.230.12 - - [22/Aug/2016:09:01:52 -0400] "POST /xmlrpc.php HTTP/1.1" 500 569 "https://google.com/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.79 Safari/537.36"
104.131.152.183:80 163.172.177.30 - - [22/Aug/2016:09:17:56 -0400] "GET /xmlrpc.php HTTP/1.1" 500 569 "-" "PycURL/7.19.5 libcurl/7.38.0 GnuTLS/3.3.8 zlib/1.2.8 libidn/1.29 libssh2/1.4.3 librtmp/2.3"
104.131.152.183:80 163.172.174.255 - - [22/Aug/2016:09:17:59 -0400] "GET /xmlrpc.php HTTP/1.1" 500 569 "-" "PycURL/7.24.0"
104.131.152.183:80 163.172.179.147 - - [22/Aug/2016:09:18:00 -0400] "GET /xmlrpc.php HTTP/1.1" 500 569 "-" "PycURL/7.19.5 libcurl/7.38.0 GnuTLS/3.3.8 zlib/1.2.8 libidn/1.29 libssh2/1.4.3 librtmp/2.3"
104.131.152.183:80 163.172.175.207 - - [22/Aug/2016:09:18:03 -0400] "GET /xmlrpc.php HTTP/1.1" 500 569 "-" "PycURL/7.24.0"
104.131.152.183:80 154.16.199.74 - - [22/Aug/2016:09:40:36 -0400] "GET /xmlrpc.php HTTP/1.1" 500 569 "-" "PycURL/7.19.7"
104.131.152.183:80 154.16.199.74 - - [22/Aug/2016:12:11:17 -0400] "GET /xmlrpc.php HTTP/1.1" 500 569 "-" "PycURL/7.19.7"
104.131.152.183:80 195.212.29.168 - - [22/Aug/2016:12:54:25 -0400] "GET /xmlrpc.php?rsd HTTP/1.1" 200 995 "-" "Mozilla/4.0 (compatible;)"
104.131.152.183:80 195.212.29.168 - - [22/Aug/2016:12:54:26 -0400] "GET /xmlrpc.php HTTP/1.1" 405 281 "-" "Mozilla/4.0 (compatible;)"
104.131.152.183:80 216.81.94.75 - - [22/Aug/2016:12:55:56 -0400] "GET /xmlrpc.php?rsd HTTP/1.1" 200 995 "-" "Mozilla/4.0 (compatible;)"
104.131.152.183:80 216.81.94.75 - - [22/Aug/2016:12:55:56 -0400] "GET /xmlrpc.php HTTP/1.1" 405 281 "-" "Mozilla/4.0 (compatible;)"
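To keep an eye on it going forward, a small sketch: follow the log and filter for xmlrpc hits in real time.

# watch new xmlrpc.php requests as they arrive
tail -f /var/log/apache2/other_vhosts_access.log | grep --line-buffered xmlrpc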

 

Let’s see if the site stays up now.


Speaking at OOW 2016 on Sunday on Active Session History

August 19th, 2016

Session ID: UGF3110

Advanced Active Session History Analytics
Moscone West—3020
09/18/16, 03:30:00 PM – 04:15:00 PM

 

 


Moving to Seattle! Our beautiful rental available in San Francisco

June 10th, 2016


More news coming, but for now, we (my family and I) are moving to Seattle! We will be giving up our gorgeous rental house in San Francisco, so it will be available to the next lucky family.

Beautiful large house a block from San Francisco's most prestigious neighborhood, St Francis Woods.

see: Photos of house

It is a wonderful, elegant, big family house for our family and two kids, within walking distance of Commodore Sloat and a block from St Francis Woods, on a safe family street. We love it, but we have to move because of my job change.

It's an amazing home. Perfect for our family, as we walk our kids to school on a walking path. The front yards have no driveways, so it's super safe for kids to play.

I commute fast to Palo Alto on 280, and I take Muni fast downtown. It's a secret that the fastest way downtown is on the K, L, or M via West Portal, because it's underground. Blows the N-Judah or J-Church away.

It's a big house, just under 3000 square feet. Massive Wolf stove in the kitchen, the biggest I've seen. Huge, beautiful den with a working fireplace and vaulted ceilings. Beautiful dining room. Hardwood throughout. A beauty.

It can be a 4 bedroom – we used 3 for ourselves and one for a nanny.
It has 2.5 baths, meaning one bathroom with a shower plus a separate bath and toilet, a separate toilet on the first floor, and a shower and toilet downstairs.

Full laundry downstairs. Massive garage: 3-car parking, a workshop, and lots of storage.


No pets or smoking. As far as I understand it, the neighborhood HOAs don't allow roommate situations, so in my mind it makes sense for families.

Hardwood floors throughout. 

Fenced yard, ocean view.

Excellent public schools (top rated in SF, with neighborhood preference). Top-rated private schools nearby: St. Brendans, St. Stevens, Stratford.

Beautiful walk to West Portal shops, restaurants, parks, library, movie theatre…

The best SF has to offer: a calm, family neighborhood with quick access to the hustle and bustle.

ABSOLUTELY NO PETS AND NO SMOKING

The rent is $5350

1 Year Lease

trish_hailey@hotmail.com  

kylelf@gmail.com

Photos of house

Craigslist ad: http://sfbay.craigslist.org/sfc/apa/5697091871.html

**We are the current tenants and have loved living in this amazing home and neighborhood.



Evolution of Masking Vendors

April 25th, 2016


Masking with Delphix (where duplicate blocks are shared, making a new copy almost free storage-wise and almost instantaneous) has 4 big advantages:

  1. Instant data, no copying
  2. Ease of Use
  3. Consistent across data centers and databases vendors
  4. Master/Slave

Virtual Data Masking

Delphix masking and virtualization is the most advanced solution in the marketplace, because Delphix doesn't provision data. Instead of provisioning data, Delphix sets up pointers back to existing data for a new clone. When that new clone tries to modify data, the existing data stays unchanged, and the changed data is stored elsewhere, visible only to the clone that made the change. This allows Delphix to mask once and provision many masked copies in minutes for almost no storage.

  • Some tools require you to subset data. Imagine writing code to subset data from a medium-size (5000 objects) custom database, and maintaining it.
  • Some tools require 1.5X disk in the target, because they create temp tables to copy and mask data.
  • Delphix, by contrast, masks in memory, with no need for extra disk, and virtualizes the data.


Ease of use saves money

The largest cost in data masking is the personnel to develop and maintain masking code.

Most tools require significant programming skills and dedicated administrators.

  • DELPHIX:
    • Users with no programming background can use the product in 4 hours.
    • Web based interface with profiling integrated to masking: You can profile and start masking in minutes without any programming knowledge.

Mask data consistently

Delphix masks data consistently across different types of data sources and across different data centers, automatically.

Some tools either mask different data sources differently, breaking referential integrity, or they require the user to manually maintain relationships across all attributes and across all data sources using a 'Translation Matrix'. Other tools, based on specific databases, require the user to import data into that proprietary database in order to mask it, and then the data needs to be copied back out of the proprietary database into the location where it is used.

  • DELPHIX:
    • The module which identifies sensitive data (Profiler), also assigns the masking algorithms, so no need to manually define relationships.
    • Delphix masking algorithms are deterministic, so based on the input we create a consistent output, regardless of the data source
    • Delphix architecture separates transformation from a data source

Master/Slave configuration

Delphix provides a central interface to configure/manage users, metadata and algorithms, and execute masking in a consistent and distributed manner for each department, entity, or data center. Without this, each entity would have masked data differently, and aggregation of data would be useless.


Next Steps

Pete Finnigan recently did a paper reviewing Delphix and data masking, where he points out some of the challenges to masking and their solutions.

Pete goes into ways of securing the source database such that the cloned copy benefits from the security in the source. Pete also shares some of the top reasons he has heard at customer sites for why people don’t mask even though they want to.

The top 5 reasons people don't mask when they should:

  1. Fear of not locating all data to mask
  2. Referential integrity
  3. Data distribution
  4. Testing may not be valid with masked data
  5. Time, cost and skills needed

Pete has done a second paper on specifically how to secure data in non production areas. We will be publishing this paper soon.

Pete’s first paper with Delphix on masking is available here.

 


Delphix replication and push button cloud migration

April 22nd, 2016

Someone just asked on the Delphix Forums whether they could test Delphix replication with the free version of Delphix called Delphix Express.

I'd never tried, so I sat down to try it and was amazed at how easy it was.

One of the coolest things about Delphix replication is that it makes it super easy to migrate to the cloud, and also to fall back in-house if need be. For cloud migration, I just set up a Delphix engine in-house and one in a cloud, for example Amazon EC2. Then I just give the in-house engine the credentials to replicate to the engine in the cloud. The replication can be compressed and encrypted. The replication is active/active, so I can use either or both engines. (Stay tuned for a Delphix Express .ami file that we plan to release. Currently Delphix enterprise is supplied as an AMI for AWS/EC2 but Delphix Express is not, though you could use the .ova to set up Delphix Express in AWS/EC2.)

Setup

I created two Delphix Express installations.

On one engine, the source engine (http://172.16.103.16/), I linked to an Oracle 11.1.0.7 database on Solaris SPARC called "yesky".

On that same engine I went to the "System" menu and chose "Replication".

[Screenshot: the Replication option under the System menu]

 

That brought me to the configuration page

[Screenshot: the replication configuration page]

where I filled out

  1. Replica Profile name – any name
  2. Description – any description
  3. Target Engine – in my case I used the IP address of the engine receiving the replication
  4. User Name – login name for the target engine
  5. Password – password for the login to the target engine
  6. Enabled – check this to make replication run automatically on a schedule
  7. Every – I set the schedule to every 15 minutes for my test

Then I clicked “Create Profile” in the bottom right.

And within a few minutes the replicated version was available on my replication target engine (172.16.100.92). On the target I chose "DelphixExpress" from the Databases pulldown menu, and there is my "yesky" source replicated from my source Delphix Express engine.

[Screenshot: the replicated "yesky" source visible on the target engine]

 

Now I have two Delphix engines, where engine 1 is replicating to engine 2. Both engines are active/active, so I can use the second engine for other work and/or to actually clone the Oracle data source replicated from engine 1 ("yesky").

Next Steps

Try it out yourself with our free version of Delphix, called Delphix Express.

 


Collaborate 2016 Oaktable World Sessions

April 12th, 2016

Oaktable World Las Vegas is happening at Collaborate 2016! Many thanks to Tim Gorman, Alex Gorbachev and Mark Farnham for organizing!
Free Oaktable World t-shirts are available at Delphix booth 1613 on Tuesday and at the Oaktable World talks on Wednesday. Also available at the Delphix booth are free copies of Mike Swing's "the little r12.2.5 upgrade essentials for managers and team members". Mike will be doing Q&A at the Delphix booth Tuesday 1:15-2:00 and a book signing on Wednesday 2:00-2:45.

Oaktable World all day Wednesday 9:15-6:15 Mandalay Bay Ballroom I


Time Session Type Presenter Name Proposed Topic
09:15 – 10:15 60 mins Alex Gorbachev Back of a Napkin Guide to Oracle Database in the Cloud
10:30 – 11:30 “Re-Energize” session no OTW session
11:45 – 11:55 10 mins Alex Gorbachev Internet of Things 101
12:00 – 12:10 10 mins Tim Gorman How Oracle Data Recovery Mechanisms Complicate Data Security, and What To Do About It
12:15 – 12:25 10 mins
12:30 – 12:40 10 mins Kyle Hailey Challenges and solutions masking data
12:45 – 12:55 10 mins Dan Norris Tools used for security and compliance on Linux
13:00 – 14:00 Oracle keynote no OTW session
14:00 – 15:00 60 mins Kellyn Pot’Vin-Gorman Performance Feature Enhancements in Enterprise Manager 13c with DB12c
15:00 – 16:00 60 mins Dan Norris IPv6: What You Need to Know (with Linux & Exadata references)
17:15 – 18:15 60 mins Kyle Hailey Data for DevOps

All talks in Mandalay Bay Ballroom I
