Fixing an Oracle 11gR2 Grid Infrastructure Upgrade Bug

Finally, after four weeks of getting nowhere with Oracle Support, I've managed to install the 11.2 Grid Infrastructure and upgrade the clusterware on my two-node RAC cluster from 10.2.0.4 to 11.2.0.1.

First off, I have to point out that this is a very old Linux installation. Almost certainly your Linux version will be much more recent than this one:

oracle@linuxrac1:DBA -> uname -r
2.6.9-22.ELsmp

I admit this is an ancient kernel; I'm banged to rights. What I tend to do is install the servers, install Oracle and then never upgrade the underlying OS. Once the hardware has been deemed end of life, we migrate the database off to new hardware, which will have been installed with the latest OS version. This means you will probably never have to attempt an 11.2 clusterware install on such an old kernel, unlike me. Which is fortunate for you, because out of the box it does not work.

If you find yourself in that unfortunate situation and you get the following error:

Cannot get node network interfaces (/var/opt/grid/11.2.0/bin/oifcfg iflist -p -n failed.) at /var/opt/grid/11.2.0/crs/install/crsconfig_lib.pm line 1786.

Then here is what I did to get the clusterware upgraded from 10.2.0.4 to the 11.2.0.1 version.
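Before editing anything, you can confirm the diagnosis by hand. On my cluster the oifcfg shipped with the new Grid home fell over on this kernel, while the copy in the old 10.2 CRS home happily listed the interfaces (paths are from my system; yours will differ):

/var/opt/grid/11.2.0/bin/oifcfg iflist -p -n     # fails on this kernel
/opt/oracle/product/crs/bin/oifcfg iflist -p -n  # works from the old 10.2 CRS home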

Edit $GRID_HOME/crs/install/crsconfig_lib.pm

Change line 1785 to:

($rc,@iflist_out) = get_oifcfg_iflist($CFG->OLD_CRS_HOME);

Change line 9192 to:

my $oifcfg = catfile($CFG->params('OLD_CRS_HOME'), 'bin', 'oifcfg');

Basically, in both of those lines you are changing the variable to OLD_CRS_HOME; essentially, for part of the upgrade the executables have to be run from the 10.2 CRS home.
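It is worth taking a copy of the file before touching it, and checking afterwards that the two edited lines really do reference OLD_CRS_HOME. A minimal sketch, assuming $GRID_HOME points at the new Grid Infrastructure home (/var/opt/grid/11.2.0 in my case):

cp $GRID_HOME/crs/install/crsconfig_lib.pm $GRID_HOME/crs/install/crsconfig_lib.pm.orig
# after editing, lines 1785 and 9192 should both mention OLD_CRS_HOME
sed -n '1785p;9192p' $GRID_HOME/crs/install/crsconfig_lib.pm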

I also needed to change the file $GRID_HOME/crs/install/crsconfig_params to include a definition of OLD_CRS_HOME:

Edit $GRID_HOME/crs/install/crsconfig_params

Insert at line 55:

OLD_CRS_HOME=/opt/oracle/product/crs

Of course, your 10.2 CRS installation may be in a different location.
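A quick check that the parameter is in place (again, just a sketch using my paths):

grep OLD_CRS_HOME $GRID_HOME/crs/install/crsconfig_params
OLD_CRS_HOME=/opt/oracle/product/crs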

Once these are in place, you can run the rootupgrade.sh script once more and this time it should complete successfully.
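For completeness, rootupgrade.sh sits at the top of the new Grid home and is run as root, one node at a time (path as on my cluster):

# as root, on each node in turn
/var/opt/grid/11.2.0/rootupgrade.sh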

It was such a relief to see the following:

oracle@linuxrac1:DBA ~> crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.1.0]

And have the clusterware running from the new Grid Infrastructure location:

oracle@linuxrac1:+ASM1 ~> ps -ef|grep grid
root     25845     1  0 Jan19 ?        00:00:08 /var/opt/grid/11.2.0/bin/ohasd.bin reboot
oracle   26312     1  0 Jan19 ?        00:00:15 /var/opt/grid/11.2.0/bin/oraagent.bin
oracle   26328     1  0 Jan19 ?        00:00:00 /var/opt/grid/11.2.0/bin/mdnsd.bin
oracle   26340     1  0 Jan19 ?        00:00:00 /var/opt/grid/11.2.0/bin/gipcd.bin
oracle   26351     1  0 Jan19 ?        00:00:03 /var/opt/grid/11.2.0/bin/gpnpd.bin
root     26374     1  0 Jan19 ?        00:00:03 /var/opt/grid/11.2.0/bin/cssdmonitor
root     26391     1  0 Jan19 ?        00:00:03 /var/opt/grid/11.2.0/bin/cssdagent
root     26393     1  0 Jan19 ?        00:00:16 /var/opt/grid/11.2.0/bin/orarootagent.bin
oracle   26411     1  0 Jan19 ?        00:00:03 /var/opt/grid/11.2.0/bin/diskmon.bin -d -f
oracle   26427     1  0 Jan19 ?        00:00:41 /var/opt/grid/11.2.0/bin/ocssd.bin
root     26495     1  0 Jan19 ?        00:00:03 /var/opt/grid/11.2.0/bin/octssd.bin
root     26512     1  0 Jan19 ?        00:00:22 /var/opt/grid/11.2.0/bin/crsd.bin reboot
oracle   26523     1  0 Jan19 ?        00:00:06 /var/opt/grid/11.2.0/bin/evmd.bin
root     26544     1  0 Jan19 ?        00:00:02 /var/opt/grid/11.2.0/bin/oclskd.bin
oracle   26601 26523  0 Jan19 ?        00:00:00 /var/opt/grid/11.2.0/bin/evmlogger.bin -o /var/opt/grid/11.2.0/evm/log/evmlogger.info -l /var/opt/grid/11.2.0/evm/log/evmlogger.log
oracle   27044     1  0 Jan19 ?        00:00:00 /var/opt/grid/11.2.0/opmn/bin/ons -d
oracle   27045 27044  0 Jan19 ?        00:00:00 /var/opt/grid/11.2.0/opmn/bin/ons -d
oracle   30867     1  0 Jan19 ?        00:00:25 /var/opt/grid/11.2.0/bin/oraagent.bin
root     30869     1  0 Jan19 ?        00:05:18 /var/opt/grid/11.2.0/bin/orarootagent.bin
oracle   31413     1  0 Jan19 ?        00:01:17 /var/opt/grid/11.2.0/jdk/jre//bin/java -Doracle.supercluster.cluster.server=eonsd -Djava.net.preferIPv4Stack=true -Djava.util.logging.config.file=/var/opt/grid/11.2.0/srvm/admin/logging.properties -classpath /var/opt/grid/11.2.0/jdk/jre//lib/rt.jar:/var/opt/grid/11.2.0/jlib/srvm.jar:/var/opt/grid/11.2.0/jlib/srvmhas.jar:/var/opt/grid/11.2.0/jlib/supercluster.jar:/var/opt/grid/11.2.0/jlib/supercluster-common.jar:/var/opt/grid/11.2.0/ons/lib/ons.jar oracle.supercluster.impl.cluster.EONSServerImpl
oracle   31511     1  0 Jan19 ?        00:00:01 /var/opt/grid/11.2.0/bin/tnslsnr LISTENER_SCAN2 -inherit
oracle   31521     1  0 Jan19 ?        00:00:01 /var/opt/grid/11.2.0/bin/tnslsnr LISTENER_SCAN3 -inherit

I should point out that I was able to upgrade a two-node RAC cluster running a 2.6.9-34 kernel without having to hack around with any files.


4 thoughts on “Fixing an Oracle 11gR2 Grid Infrastructure Upgrade Bug”

    • Hello,

      You know, I was worried about that too, so I raised an SR on that very subject. The engineer said that the 2.6.9 kernel was supported. I also had no issue obtaining support for the install problem on the 2.6.9-22 kernel and RHEL U2.

      jason.
