I have just done a couple of 10g to 11g Oracle Clusterware upgrades on a pair of 2 node RAC clusters. These are now happily running 11g Clusterware with 10g ASM and database instances.
First off, I found the documentation a little on the sparse side in terms of how to actually do a clusterware upgrade. It took me a little while to realise that it is very possible to perform a rolling upgrade when upgrading your clusterware; now, I knew this was possible when patching from 10.2.0.X to 10.2.0.Y, but it took a little longer for me to understand that it can also be done when going up to 11g.
The best place in the online documentation for information about this is Appendix B of the Oracle Clusterware Installation Guide. Another useful thing to look at is Metalink note 338706.1, which tells you about the prerequisites you need to fulfil before you can upgrade your clusterware to 11g. Of course, it is only with hindsight that I have seen the information there in the Clusterware Installation Guide. Here is what I did to upgrade. You are far better off, unlike myself, running the preupdate.sh script as recommended – but hey, this is what testing is all about 😉
From the unzipped clusterware directory, run the Cluster Verification Utility to check your system is ready to upgrade:
runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
Make sure you upgrade any RPMs that need changing.
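If cluvfy flags missing or outdated packages, rpm will tell you what you currently have before you go upgrading anything. A quick sketch – the package names here are only examples, use whatever cluvfy actually complains about on your system:

```shell
# Check the installed version of a package cluvfy flagged
# (glibc is just an example package name)
rpm -q glibc

# Upgrade in place from a downloaded package file
# (filename is illustrative - use the version your release needs)
rpm -Uvh glibc-2.5-18.x86_64.rpm
```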
Bring down the database and ASM instances on the first node you want to upgrade and then stop crs:
/opt/oracle/product/crs/bin/crsctl stop crs
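For the database and ASM part, srvctl does the stopping; something along these lines, run before the crsctl stop above. The database, instance and node names are just examples from my style of setup – substitute your own:

```shell
# Stop the database instance running on the node being upgraded
# (ORCL/ORCL2 are example names)
srvctl stop instance -d ORCL -i ORCL2

# Stop the ASM instance on that node
srvctl stop asm -n linuxrac2
```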
If you run the preupdate.sh script that is in the clusterware/upgrade directory, you don’t need to stop CRS yourself or indeed perform the next step of changing the permissions on the CRS directory, as it’s all taken care of for you.
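For the record, running it looks roughly like this – as root, from the staged 11g media, pointing it at your existing CRS home. The paths are from my environment and the options are worth confirming against the script’s own usage message on your version:

```shell
# Run as root from the unzipped 11g clusterware media
# (staging path and CRS home are examples from my setup)
cd /stage/clusterware/upgrade
./preupdate.sh -crshome /opt/oracle/product/crs -crsuser oracle
```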
The permissions on my crs directory were incorrect and the directory was owned by root. I changed them with:
chown -R oracle:oinstall crs/
Run the installer and it will detect your CRS_HOME and offer to upgrade it. Make sure that on the Specify Hardware Cluster Installation Mode screen you select just the node you want to upgrade, assuming you are doing a rolling upgrade:
Once the upgrade has done its thing, you are prompted to run the rootupgrade script:
[root@linuxrac2 install]# ./rootupgrade
Checking to see if Oracle CRS stack is already up...
copying ONS config file to 11.1 CRS home
/bin/cp: `/opt/oracle/product/crs/opmn/conf/ons.config' and `/opt/oracle/product/crs/opmn/conf/ons.config' are the same file
/opt/oracle/product/crs/opmn/conf/ons.config was copied successfully to /opt/oracle/product/crs/opmn/conf/ons.config
WARNING: directory '/opt/oracle/product' is not owned by root
Oracle Cluster Registry configuration upgraded successfully
Adding daemons to inittab
Attempting to start CRS stack
The CRS stack will be started shortly
Oracle CRS stack has failed to start. Check the file /var/adm/messages or the crsd, cssdd, and evmd logs in
/opt/oracle/product/crs/log/linuxrac2 directory for more details
You don’t need to worry when it says the CRS stack has failed to start, because after a few moments CRS is running happily! Your database and/or ASM instance will now be automatically restarted as well.
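If you’d rather reassure yourself than take my word for it, you can check the stack and its resources once it has had a moment (paths as per my CRS home):

```shell
# Confirm the CRS daemons really did come up
/opt/oracle/product/crs/bin/crsctl check crs

# Check that the database and ASM resources are back ONLINE
/opt/oracle/product/crs/bin/crs_stat -t
```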
It is also worth pointing out that the active CRS version only becomes the 11.1 version after all nodes are upgraded:
[root@linuxrac2 crsd]# crsctl query crs softwareversion
CRS software version on node linuxrac2 is 11.1.0.6.0
[root@linuxrac2 crsd]# crsctl query crs activeversion
CRS active version on the cluster is 10.2.0.3.0
You now basically proceed to perform the same steps on the other nodes in your cluster, and there you have it: a rolling clusterware upgrade from 10g to 11g. I was actually well impressed with how smooth and painless the upgrade was, and there really were no brown trouser moments.
It remains to be seen how stable the new 11g clusterware is but I’m sure it’s just a coincidence that about 12 hours after the upgrade one of the nodes on one of the clusters had a kernel panic and froze!