Rolling Oracle Clusterware Upgrade to 10.2.0.4

I have previously written about upgrading Oracle Clusterware from version 10.2.0.3 to 11.1.0.6 with a rolling upgrade. I was recently involved in an upgrade from 10.2.0.3 to the 10.2.0.4 patchset (Linux x86-64) and found the rolling upgrade of the clusterware sufficiently different that it warrants a short blog posting.

With the 11g upgrade, you had the option of applying the patchset to an individual CRS_HOME at a time, so you could patch one node at a time, having taken the clusterware down on that node. That feels comfortable and seems like a proper rolling upgrade. However, with the 10.2.0.4 patchset, I see the following installer screen:

[Screenshot: OUI node-selection screen]

Every option in the above screenshot is greyed out: there is no choice to make and no option but to install on both nodes in the cluster simultaneously. At that point I started to worry a little. Did I really want to patch the clusterware CRS_HOME while the clusterware was running from this home, with RDBMS and ASM instances depending on it? With the 11g upgrade, everything was down on a node while that node’s CRS_HOME was being patched.

Unfortunately, the installation instructions in the README for the patchset are not altogether clear on how you should do a rolling upgrade; they just say you should bring everything down on one node at a time. At first I really thought this was an error, and that surely you must apply the patchset to one CRS_HOME at a time to have a safe rolling upgrade.

However, I took the plunge and patched the CRS_HOME on both RAC nodes at the same time, with the clusterware running as well as the RDBMS and ASM instances. The homes patched fine. At the end of the install process you have to take everything down on one node at a time and run the root102.sh script in the CRS_HOME/install directory. This all upgraded without a hitch:

$oracle> crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.4.0]
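
For anyone who wants the node-by-node detail, here is a rough sketch of the sequence we followed on each node in turn (assuming a two-node cluster and a database called ORCL with instance ORCL1 on node1; the names are illustrative, not from the actual run):

$oracle> srvctl stop instance -d ORCL -i ORCL1    # stop the RDBMS instance on this node
$oracle> srvctl stop asm -n node1                 # stop the ASM instance on this node
$oracle> srvctl stop nodeapps -n node1            # stop the VIP, listener, GSD and ONS
$root>   crsctl stop crs                          # as root, stop the clusterware stack
$root>   $CRS_HOME/install/root102.sh             # patch script; brings CRS back up when done

Once node1 was back up and its resources online, we repeated the same sequence on node2 before querying the active version shown above.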

It’s a little disconcerting, but applying the 10.2.0.4 patchset to Oracle Clusterware can be done in a rolling fashion, as long as you don’t mind patching the CRS_HOME with everything still running!


10 thoughts on “Rolling Oracle Clusterware Upgrade to 10.2.0.4”

  1. Hi Chris,

    Hope it goes well for you.

    We have patched four separate 2-node clusters now from 10.2.0.3 to 10.2.0.4, and it seems a stable procedure.

    Watch out for the new oprocd clusterware daemon.

    jason.

  2. Jason,

    We have implemented 10g RAC on Windows 2003 with ASM.

    I am upgrading my database from 10.2.0.3.0 to 10.2.0.4.0. Do I have to upgrade the clusterware and ASM separately before doing that? If so, how can I do that?

    kai

  3. Hi Kai,

    You *must* upgrade your clusterware first.

    The article tells you how to upgrade the clusterware to 10.2.0.4 in a rolling fashion; only one node at a time needs to be down.

    jason.

  4. Hi Jason,

    Just want to understand a couple of things re the CRS 10.2.0.4 upgrade.

    On the node you ran the installer from, was everything down, and did you use the -local option? I’m assuming not, and I’m also assuming CRS was down on the first node yet everything was up on the second node, and the installer copied over the binaries; then it was just a matter of shutting down and restarting that node, and not doing another install.

    I will be doing this in the next week or so, so I just wanted to get it straight.

    Thanks,

    Rob

  5. Hi Rob,

    Thanks for reading!

    No, nothing was down when we started to run the installer.

    No, we did not use the -local option.

    The binaries were copied with EVERYTHING UP on BOTH/ALL nodes.

    You then get prompted to run the root102.sh script on a node-by-node basis; at that point you must have everything down on the node you are running the script on. It is this script that updates the clusterware OCR/voting disks.
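
    For example (a quick sanity check rather than a definitive procedure; crsctl reports the daemons unreachable once the stack is down), before running root102.sh on a node we confirmed the stack was down:

    $root> crsctl stop crs     # as root, stop the clusterware stack on this node
    $root> crsctl check crs    # should now fail to contact the CSS/CRS/EVM daemons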

    Does that make sense?

    jason.

  6. Yes – thanks. I was certainly prepared to shut everything down on the node I ran the installer from… interesting!

    Rob

  7. Hello Rob,

    I believe the script itself does not have anything coded into it to start ASM/RDBMS; however, I think you can configure the clusterware so that, upon starting up, it starts ASM and the RDBMS.

    This, after all, is equivalent to a server boot scenario: clusterware starts and is then responsible for starting ASM/RDBMS.

    By the way, watch out for oprocd. If your cluster is suffering CPU-wise you may find oprocd reboots the node; this has definitely happened to me. You are not on the 2.4 kernel, are you? I’ve heard it has problems, though we were affected on 2.6.
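
    If it helps, the workaround we looked at (sketched from Oracle Support’s oprocd/diagwait guidance; verify it applies to your platform and patch level before relying on it) raises the CSS diagwait value with the stack down cluster-wide:

    $root> crsctl stop crs                      # run on every node first
    $root> crsctl set css diagwait 13 -force    # on one node only, with CRS down everywhere
    $root> crsctl get css diagwait              # verify the new value
    $root> crsctl start crs                     # then restart CRS node by node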

    jason.

  8. Hi Jason,

    Thanks for the feedback. Do you know of a patch that addresses the reboot, or perhaps another “feature”?

    Linux 2.6.5-7.286-smp

    -rob
