Finally, after 4 weeks of getting nowhere with Oracle Support, I’ve managed to install the 11.2 Grid Infrastructure and upgrade the clusterware on my 2 node RAC cluster from 10.2.0.4 to 11.2.0.1.
First off, I’ve got to point out that this is a very old Linux installation. Almost certainly your Linux version will be much more recent than this one:
oracle@linuxrac1:DBA -> uname -r
2.6.9-22.ELsmp
I admit this is an ancient kernel. I’m banged to rights. What I tend to do is install the servers, install Oracle and then never upgrade the underlying OS. Once the hardware has been deemed end of life, we migrate the database off to new hardware, which will have been installed with the latest O/S version. This means you will probably never have to attempt an 11.2 clusterware install on such an old kernel, unlike me. Which is fortunate for you, because out of the box it does not work.
If you find yourself in that unfortunate situation and you get the following error:
Cannot get node network interfaces (/var/opt/grid/11.2.0/bin/oifcfg iflist -p -n failed.) at /var/opt/grid/11.2.0/crs/install/crsconfig_lib.pm line 1786.
Then here is what I did to get the clusterware upgraded from 10.2.0.4 to 11.2.0.1.
In $GRID_HOME/crs/install/crsconfig_lib.pm, change line 1785 to:
($rc,@iflist_out) = get_oifcfg_iflist($CFG->OLD_CRS_HOME);
Change line 9192 to:
my $oifcfg = catfile($CFG->params('OLD_CRS_HOME'), 'bin', 'oifcfg');
Basically, in both those lines you are changing the variable to OLD_CRS_HOME: for some of the upgrade, the executables have to be run from the 10.2 CRS home.
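If you would rather script the edits than make them in an editor, a couple of sed whole-line replacements would do it. This is only a sketch: the two "before" lines below are stand-ins I made up, not the verbatim 10.2 source, so verify what lines 1785 and 9192 actually contain in your own copy of crsconfig_lib.pm before pointing anything at the real file.

```shell
# Practise on a scratch copy first; the two lines below are stand-ins for
# the real lines 1785 and 9192 of crsconfig_lib.pm (assumed, not verbatim).
lib=crsconfig_lib.scratch
cat > "$lib" <<'EOF'
($rc,@iflist_out) = get_oifcfg_iflist($crs_home);
my $oifcfg = catfile($CFG->params('ORA_CRS_HOME'), 'bin', 'oifcfg');
EOF
cp "$lib" "$lib.orig"   # always keep a backup of the real file too
# Replace each whole line with the OLD_CRS_HOME version
sed -i '1s/.*/($rc,@iflist_out) = get_oifcfg_iflist($CFG->OLD_CRS_HOME);/' "$lib"
sed -i "2s/.*/my \$oifcfg = catfile(\$CFG->params('OLD_CRS_HOME'), 'bin', 'oifcfg');/" "$lib"
grep -c OLD_CRS_HOME "$lib"   # prints 2
```

On the real file you would address lines 1785 and 9192 instead of 1 and 2; the .orig backup lets you roll back if rootupgrade.sh still complains.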
I also needed to change the file $GRID_HOME/crs/install/crsconfig_params to include the definition of the OLD_CRS_HOME:
Insert at line 55
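The entry itself is just a name=value line. Purely for illustration (the path here is made up; use wherever your 10.2 CRS home actually lives), it would look something like:

```
# hypothetical 10.2 CRS home path
OLD_CRS_HOME=/var/opt/crs/10.2.0
```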
Of course your 10.2 CRS install may be located in a different location.
Once these changes are in place, you can run the rootupgrade.sh script once more, and this time it should complete successfully.
It was such a relief to see the following:
oracle@linuxrac1:DBA ~> crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.1.0]
And have the clusterware running from the new Grid Infrastructure location:
oracle@linuxrac1:+ASM1 ~> ps -ef|grep grid
root 25845 1 0 Jan19 ? 00:00:08 /var/opt/grid/11.2.0/bin/ohasd.bin reboot
oracle 26312 1 0 Jan19 ? 00:00:15 /var/opt/grid/11.2.0/bin/oraagent.bin
oracle 26328 1 0 Jan19 ? 00:00:00 /var/opt/grid/11.2.0/bin/mdnsd.bin
oracle 26340 1 0 Jan19 ? 00:00:00 /var/opt/grid/11.2.0/bin/gipcd.bin
oracle 26351 1 0 Jan19 ? 00:00:03 /var/opt/grid/11.2.0/bin/gpnpd.bin
root 26374 1 0 Jan19 ? 00:00:03 /var/opt/grid/11.2.0/bin/cssdmonitor
root 26391 1 0 Jan19 ? 00:00:03 /var/opt/grid/11.2.0/bin/cssdagent
root 26393 1 0 Jan19 ? 00:00:16 /var/opt/grid/11.2.0/bin/orarootagent.bin
oracle 26411 1 0 Jan19 ? 00:00:03 /var/opt/grid/11.2.0/bin/diskmon.bin -d -f
oracle 26427 1 0 Jan19 ? 00:00:41 /var/opt/grid/11.2.0/bin/ocssd.bin
root 26495 1 0 Jan19 ? 00:00:03 /var/opt/grid/11.2.0/bin/octssd.bin
root 26512 1 0 Jan19 ? 00:00:22 /var/opt/grid/11.2.0/bin/crsd.bin reboot
oracle 26523 1 0 Jan19 ? 00:00:06 /var/opt/grid/11.2.0/bin/evmd.bin
root 26544 1 0 Jan19 ? 00:00:02 /var/opt/grid/11.2.0/bin/oclskd.bin
oracle 26601 26523 0 Jan19 ? 00:00:00 /var/opt/grid/11.2.0/bin/evmlogger.bin -o /var/opt/grid/11.2.0/evm/log/evmlogger.info -l /var/opt/grid/11.2.0/evm/log/evmlogger.log
oracle 27044 1 0 Jan19 ? 00:00:00 /var/opt/grid/11.2.0/opmn/bin/ons -d
oracle 27045 27044 0 Jan19 ? 00:00:00 /var/opt/grid/11.2.0/opmn/bin/ons -d
oracle 30867 1 0 Jan19 ? 00:00:25 /var/opt/grid/11.2.0/bin/oraagent.bin
root 30869 1 0 Jan19 ? 00:05:18 /var/opt/grid/11.2.0/bin/orarootagent.bin
oracle 31413 1 0 Jan19 ? 00:01:17 /var/opt/grid/11.2.0/jdk/jre//bin/java -Doracle.supercluster.cluster.server=eonsd -Djava.net.preferIPv4Stack=true -Djava.util.logging.config.file=/var/opt/grid/11.2.0/srvm/admin/logging.properties -classpath /var/opt/grid/11.2.0/jdk/jre//lib/rt.jar:/var/opt/grid/11.2.0/jlib/srvm.jar:/var/opt/grid/11.2.0/jlib/srvmhas.jar:/var/opt/grid/11.2.0/jlib/supercluster.jar:/var/opt/grid/11.2.0/jlib/supercluster-common.jar:/var/opt/grid/11.2.0/ons/lib/ons.jar oracle.supercluster.impl.cluster.EONSServerImpl
oracle 31511 1 0 Jan19 ? 00:00:01 /var/opt/grid/11.2.0/bin/tnslsnr LISTENER_SCAN2 -inherit
oracle 31521 1 0 Jan19 ? 00:00:01 /var/opt/grid/11.2.0/bin/tnslsnr LISTENER_SCAN3 -inherit
I should point out that I was able to upgrade a 2 node RAC cluster running a 2.6.9-34 kernel without having to hack around with any files.