I’ve been working away on migrating a two-node 10gR2 RAC cluster running on RHEL4 up to 11gR2. However, I seem to have hit an install issue right at the first hurdle.
The system involved is an x86-64 box running a 2.6.9-22 kernel. The Grid Infrastructure install seemed to be proceeding without incident: I had chosen the “Upgrade Grid Infrastructure” option at the Installation Type dialog box and ran through the install until the very end, when a box popped up asking for the rootupgrade.sh script to be run:
This was not a worry, so I went off and attempted to run it, but then the following occurred:
[root@linuxrac1 11.2.0]# ./rootupgrade.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /var/opt/grid/11.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2009-12-21 15:45:59: Parsing the host name
2009-12-21 15:45:59: Checking for super user privileges
2009-12-21 15:45:59: User has super user privileges
Using configuration parameter file: /var/opt/grid/11.2.0/crs/install/crsconfig_params
Creating trace directory
Cannot get node network interfaces (/var/opt/grid/11.2.0/bin/oifcfg iflist -p -n failed.) at /var/opt/grid/11.2.0/crs/install/crsconfig_lib.pm line 1786.
Running the oifcfg command by itself seems to produce some sensible output:
[root@linuxrac1 11.2.0]# /var/opt/grid/11.2.0/bin/oifcfg iflist -p -n
eth0  22.214.171.124  UNKNOWN  255.255.255.0
eth1  10.0.0.0  PRIVATE  255.0.0.0
The output is the same if you run oifcfg from the 10gR2 install:
[root@linuxrac1 11.2.0]# /var/opt/oracle/product/crs/bin/oifcfg iflist -p -n
eth0  126.96.36.199  UNKNOWN  255.255.255.0
eth1  10.0.0.0  PRIVATE  255.0.0.0
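One thing worth checking when re-running the command by hand: the installer’s Perl wrapper presumably treats a non-zero exit status as failure, so oifcfg can print plausible-looking interface data and still be considered to have failed. A minimal sketch of the idea (the stand-in command below is hypothetical; on the cluster node you would substitute the real oifcfg path):

```shell
#!/bin/sh
# A command can print sensible output yet exit non-zero; wrapper scripts
# like crsconfig_lib.pm care about the status, not the text.
# run_and_check runs whatever it is given and echoes the exit status.
run_and_check() {
    "$@"
    echo "exit status: $?"
}

# Stand-in command for illustration only; on a cluster node you would run:
#   run_and_check /var/opt/grid/11.2.0/bin/oifcfg iflist -p -n
run_and_check sh -c 'echo "eth1  10.0.0.0  PRIVATE  255.0.0.0"; exit 3'
```

If the manual run reports a non-zero status despite the sensible output, that is good supporting evidence to attach to a Service Request.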
However, it seems that some commands may be running, incorrectly, from the new 11gR2 home when they should still be running from the old 10gR2 home:
The 11.2 command produces the following:
[root@linuxrac1 log]# /var/opt/grid/11.2.0/bin/oifcfg getif -global
Failed to initialize GPnP
While the 10.2 command produces the following output:
[root@linuxrac1 log]# /var/opt/oracle/product/crs/bin/oifcfg getif -global
eth0  188.8.131.52  global  public
eth1  10.0.0.0  global  cluster_interconnect
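To see the discrepancy side by side, the same query can be run from both homes in one go. A rough sketch, using the home paths from this system (run as root on the cluster node; the loop itself is just illustration):

```shell
#!/bin/sh
# Run the same oifcfg query from the old (10.2) and new (11.2) homes and
# report each one's output and exit status, so a "Failed to initialize
# GPnP" from one home stands out against clean output from the other.
for home in /var/opt/oracle/product/crs /var/opt/grid/11.2.0; do
    echo "== $home =="
    "$home/bin/oifcfg" getif -global
    echo "exit status: $?"
done
```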
I raised a Service Request with Oracle, and after a couple of weeks they are conceding the possibility that this might be bug 9126737, which is as yet unpublished. They are muttering about needing a backport for the fix, which will of course take an as yet indeterminate amount of time.
This service request has been on the go for 3 weeks, and I don’t feel any closer to a solution.
I can’t believe that such a mainstream platform can have a fundamental bug that is preventing me from installing the 11.2 Grid Infrastructure.
I’d very much like to hear from anyone if you have managed to upgrade a 10gR2 RAC system to the new 11.2 Grid Infrastructure, particularly on Linux.