UKOUG RAC & HA Meeting

Today I attended the UKOUG RAC & HA SIG and here are the verbatim notes that I took at the event. I probably should not complain about the journey, but an hour on a packed train for 38 miles seems “extravagant”. Still, it probably took one of the speakers longer to arrive from Geneva!

Introduction

Next RAC SIGs: Thursday 15th May in London, and Thursday 2nd October at the Heritage Motor Centre.

Call for papers for the UKOUG conference opens on March 17th. This seems astonishingly early!

A really packed agenda today.

Survey

8 on 9.2
2 on 10.1
majority on 10.2
none on 11 – yet

a few on Itanium
a sprinkling of Solaris SPARC
probably the majority on Linux 64-bit
another sprinkling on Windows

Vast majority with 2 nodes; a handful with more than this, including 8 nodes from CERN and someone with 10.

Vast majority using SAN for storage; a few on NAS.

A lot using ASM, with a handful on OCFS, a sprinkling on Veritas, and 1 with PolyServe.

A lot of people with a physical standby, a few with a logical standby, no one using auto failover, and a handful with stretched clusters.

Hardly anyone using Standard Edition.

Phil Davies – Support update

Whispers of the first patchset for 11g – probably not for ages though. 10.2.0.4 is surely coming soon. Interesting problem with an ASM hang and a controlfile enqueue problem on 10.2.0.3, fixed in 10.2.0.4.
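
For anyone wanting to check whether they are seeing the same thing, a minimal sketch of the sort of diagnostic I would run against the standard 10g wait interface (nothing here is from the talk):

    -- Sessions stuck waiting on the controlfile enqueue (CF)
    SELECT inst_id, sid, serial#, event, seconds_in_wait
    FROM   gv$session
    WHERE  event = 'enq: CF - contention'
    ORDER  BY seconds_in_wait DESC;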

On the January 2008 CPU, one audience member claimed Oracle Support stated to them that the CPU was rolling upgradeable. My support analyst definitely stated it was not – an interesting contradiction. Nominet got a mention, as I have a documentation bug out for the CPU.

Dave Burnham – Highly available Oracle Databases

High-level overview of building highly available databases. Downtime = time to notice an issue + time to resolve the problem.

Complexity kills availability, and I certainly agree with this; Keep It Simple, Stupid really is the way to go – the fewer moving parts, the less there is to go wrong. He introduced the concept of an availability benchmark system, which is a single-server Oracle database server – does your infrastructure improve on this config? That is, the high availability solution is the modern commodity server, which can have many hot-swappable and redundant parts – not fancy clustering solutions.

However, several things are not protected against by the single-server solution: host failure, site failure, and of course the number one cause of reduced availability – human error.

One alternative to running RAC is to use a single-instance database with a clustering solution from Veritas (like VCS), Sun, or any of the other hardware vendors. Basically, on failure the clustering solution will restart Oracle on a different node. No expensive RAC license, and it’s fairly well-understood technology.

Dave has lots of experience of stretched RAC clusters but states they are quite complex and that Data Guard is far simpler, though he was perhaps still preferring stretched RAC for HA.

Miguel Anjo – Multiple RAC clusters

Running around 20 RAC clusters, 2-8 nodes.

The Oracle Home is the same everywhere; they deploy the ORACLE_HOME as an image.

3-stage environment:
development: 8/5
integration: 8/5
production: 24/7

Custom-built, browser-based GUI to allow developers to see what is happening to their sessions, including the SQL, DML & DDL, with the ability to kill their own session.
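
Nothing of the implementation was shown, but the underlying query for such a page can be plain SQL; a rough sketch against the gv$ views (my guess, not CERN’s actual code):

    -- What each user session is doing right now, with its current SQL
    SELECT s.inst_id, s.sid, s.serial#, s.username, s.status, q.sql_text
    FROM   gv$session s
           LEFT JOIN gv$sql q
                  ON q.inst_id = s.inst_id AND q.sql_id = s.sql_id
    WHERE  s.type = 'USER';

    -- Killing a session (on 10g, connect to the instance that owns it)
    -- ALTER SYSTEM KILL SESSION 'sid,serial#';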

They have a 2-node clustered server for monitoring (it runs single-instance Oracle). They have auditing turned on and generate weekly/monthly reports. Custom-written monitoring, based on Python, bash and XML.
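
Again, no detail was given on the reports, but the reporting side could be plain SQL; a minimal sketch, assuming audit_trail=DB and session auditing enabled:

    AUDIT SESSION;

    -- Weekly logon counts per user – the flavour of thing such reports cover
    SELECT username,
           TRUNC(timestamp, 'IW') AS week_start,
           COUNT(*)               AS logons
    FROM   dba_audit_session
    GROUP  BY username, TRUNC(timestamp, 'IW')
    ORDER  BY week_start, username;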

1 RAC cluster per physics experiment.

They use a wiki for a logbook and database procedures.

Martin Bach – Lessons Learned from Migrating 10.2.0.2 to 10.2.0.3

This talk was based on using Standard Edition. They not only upgraded the release but also migrated hardware: the old hardware was a single-core CPU with 3GB memory, with the run queue sometimes exceeding 12; the new hardware is 2 x dual-core Opterons and an upgraded SAN.

They have NO RAC test environment – scary stuff! Oh, and they have no device naming persistence – no ASMLib or udev. They encountered some wacky bugs with SUSE and OEM, and dbms_scheduler failing to run jobs on time, starting 5-45 minutes late.
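
The lateness is easy to quantify from the scheduler’s own run history; a minimal sketch using the standard 10g dictionary view (the 5-minute threshold just reflects their lower bound):

    -- Jobs that started more than 5 minutes after their requested time
    SELECT job_name, req_start_date, actual_start_date,
           actual_start_date - req_start_date AS start_delay
    FROM   dba_scheduler_job_run_details
    WHERE  actual_start_date > req_start_date + INTERVAL '5' MINUTE
    ORDER  BY actual_start_date;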

ASM 11g Experience in Extended Cluster – Bernhard de Cock Buning

They seem to be running the RDBMS at 9i with Clusterware and the ASM instance at 10.2.0.3, and are considering upgrading Clusterware and ASM to 11g, with the RDBMS moving to 10g. They can’t use ASM_PREFERRED_READ_FAILURE_GROUPS as the RDBMS is not at 11g. On the ASM SYSASM user: a separate user to own the ASM home is not required in 11gR1 but is required in 11gR2. An audience member stated they saw a 2x increase in rebalance performance in 11g compared to 10g. There is also the possibility of performing a rolling ASM upgrade with 11g.
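
For reference, the 11g parameter takes diskgroup.failgroup pairs and is set per ASM instance; a minimal sketch with made-up diskgroup and failure group names:

    -- On the site-A ASM instance: prefer reads from the local failure group
    ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEA'
      SID = '+ASM1';
    -- And the mirror image on the site-B ASM instance
    ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEB'
      SID = '+ASM2';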

Simulating one site failure, 10g continued uninterrupted but 11g generated an ORA-600 [kfdOffline01]. It seems ASM rebooted on the surviving site. They used Swingbench for testing load and had node crashes a couple of times, but once they were using HugePages they had NO node crashes. It’s an interesting idea to run 11g ASM with a 10g database instance.

Split Mirror Backups with RAC & ASM – Howard Jones

The general consensus is that it’s costly – requiring high-end storage – and complex. They are using Symantec SMB integrating with NetBackup.

Using Data Guard for hardware migration – Miguel Anjo

CERN are using Oracle Streams to send LHC data around the world. They use RMAN DUPLICATE TARGET DATABASE FOR STANDBY to create the standby, then switch over to the standby and upgrade it, only using the (now old) primary should they encounter a failure.
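
The RMAN side of that is the standard duplicate-for-standby; a minimal sketch with placeholder connections (not CERN’s actual scripts):

    # rman TARGET sys@primary AUXILIARY sys@standby
    # with the auxiliary instance started NOMOUNT on the new hardware:
    DUPLICATE TARGET DATABASE FOR STANDBY DORECOVER NOFILENAMECHECK;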

I don’t really get it; perhaps it was still too close to lunch for me to understand fully. Why not upgrade the primary, saving the switchover, while keeping a Data Guard standby for the protection it offers should something go wrong? The CERN mechanism still incurs downtime; it seems they do some of the upgrade before the switchover and so reduce the outage, but for a 10.2.0.2 to 10.2.0.3 upgrade, for example, you can install into a new ORACLE_HOME and you still need the outage for the catupgrd script. If you are out there, CERN guys, what am I missing?
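
For concreteness, the outage I have in mind is the standard patchset step, run as SYSDBA from the new ORACLE_HOME:

    SHUTDOWN IMMEDIATE
    STARTUP UPGRADE
    @?/rdbms/admin/catupgrd.sql
    SHUTDOWN IMMEDIATE
    STARTUP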

Logical Standby in the real time world – Graham Cameron

The old system was a single instance running on a Serviceguard cluster; queries were hurting performance, so they chose both a physical and a logical standby.

A small DB, only 22GB, with 2GB of redo logs per day, running Oracle 9.2.0.8, with the physical and logical standbys running on the same server. Creating the logical standby in 9.2.0.8 required the database to be quiesced. They still had major issues with their logical standby and found it failing on many occasions; interestingly, they are using Oracle Streams far more successfully on a different project.
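
The quiesce itself is the standard 9.2 mechanism (it requires the Database Resource Manager to be active); sketched:

    ALTER SYSTEM QUIESCE RESTRICTED;
    -- build the logical standby while only DBA activity is possible
    ALTER SYSTEM UNQUIESCE;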

Still, a cracking day and thoroughly enjoyable.


2 thoughts on “UKOUG RAC & HA Meeting”

  1. Hi Jason,

    Regarding the talk on dataguard for HW migration, the whole point is that we want to move the DB to new HW (and possibly apply a patchset and/or migrate from 32 to 64 bit, etc). We use RMAN and Oracle standby to duplicate the DB files (on a different SAN) and keep them in synch with production till the scheduled outage. You can’t really avoid some degree of outage with this method, but its duration does not depend on the DB size, so it is acceptable for VLDBs (at least in our case).
    More info: http://www.oracle.com/technology/deploy/availability/pdf/cernprofile.pdf

    Cheers,
    Luca

  2. Hello Luca!

    Thanks for responding. Right, your goal is to actually move to new hardware, now I understand! If you were not moving hardware then you have no benefit, but moving to a new hardware platform, as you say, the DB size does not impact the migration.

    Of course, if you were moving say from SPARC to Linux, or Linux to Windows, this would not work, but for some migrations it is obviously a good win!

    Your infrastructure there at CERN looks good, and I thought your weekly reports of database usage looked really useful.

    cheers,

    jason.
