I’ve just completed a busy weekend of patching, upgrading a RAC cluster from 10.2.0.3 to 10.2.0.4. Because the downtime was booked anyway, I ran a quick test cycle of the July CPU and rolled that into the upgrade window.
So far, 10.2.0.4 seems good, with thankfully none of the fun and games that Jeff Hunter seems to be experiencing. The upgrade itself was exceptionally smooth and without incident.
Note that with 10.2.0.4 there is a new clusterware process, oprocd. This daemon used to be specific to the non-Linux UNIX platforms but has now made it across to Linux.
ps -efl|grep oprocd
4 S root 29978 29208 0 76 0 - 1643 wait Jul20 ? 00:00:00 /bin/sh /etc/init.d/init.cssd oprocd
4 S root 30326 29978 0 -40 - - 2112 - Jul20 ? 00:00:00 /opt/oracle/product/crs/bin/oprocd.bin run -t 1000 -m 500 -f
The oprocd daemon is there to provide a fencing capability: it appears to set a timer, sleep, and check on waking whether it overslept; if it fails to wake within a certain margin of the expected interval, it reboots the node. Judging by the arguments in the ps output above, -t looks like the timer interval and -m the margin, both in milliseconds.
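As a quick sanity check, you can pull the timer and margin straight out of the running process’s arguments. A minimal sketch, where the fallback values are simply the ones from the ps output above (for when oprocd isn’t running on the machine you try this on):

```shell
# Grab oprocd.bin's command line; the [o] trick stops grep matching itself
args=$(ps -eo args 2>/dev/null | grep '[o]procd.bin' | head -1)
# Fall back to the values from the ps output above if oprocd isn't running here
args=${args:-"/opt/oracle/product/crs/bin/oprocd.bin run -t 1000 -m 500 -f"}
timer=$(echo "$args" | sed -n 's/.*-t \([0-9]*\).*/\1/p')
margin=$(echo "$args" | sed -n 's/.*-m \([0-9]*\).*/\1/p')
echo "oprocd wakes every ${timer}ms and reboots the node if it oversleeps by more than ${margin}ms"
```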
During the whole upgrade process I had a physical standby open for read-only queries. I admit this completely disregards Metalink Note:278641.1 on how to apply a patchset with a physical standby in place, but on this occasion I think Metalink is just plain wrong. Having your standby blindly performing managed recovery while you are upgrading the primary is not a good solution – particularly when you have forked over all those licensing pounds for a high availability system!
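For the record, the standby side of that amounts to a handful of standard Data Guard commands. A rough sketch – the destination number (log_archive_dest_state_2) is an assumption about your configuration, so adjust to match your own log_archive_dest_n setup:

```sql
-- On the primary, before patching: defer redo shipping to the standby
-- (assumes the standby is configured as log_archive_dest_2)
ALTER SYSTEM SET log_archive_dest_state_2 = DEFER;

-- On the standby: stop managed recovery, then open for read-only queries
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;

-- Once the primary upgrade checks out: put the standby back into recovery
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- And on the primary, re-enable shipping so the standby catches up
ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;
```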
Speaking of high availability, is it any wonder that so few people apply the CPUs when you are required to shut down everything running out of the Oracle Home being patched? Sure, with RAC you may have a shot at performing a rolling upgrade (assuming you’ve done the view recompilation). However, there are many, many non-RAC systems out there whose owners really don’t want the downtime involved with these security patches.
I’m sure the days of hot-patching must be just around the corner, 11gR2 anyone?