Doing an 11gR2 Clusterware Downgrade to 10gR2

I have been working away on 11gR2 migration for a number of months now. During the testing cycle I have been upgrading to 11gR2 and then downgrading back to 10gR2 on a test two-node RAC cluster. I’ve done this on numerous occasions, and every time I downgrade I end up with a niggling issue with the ASM spfile. First, let’s look at the steps I’m taking to downgrade.
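The first step is to get everything shut down cleanly. A minimal sketch of that, assuming the database is registered with srvctl under the name DBA (as in the deinstall summary below):

# stop the database on both nodes (database name assumed to be DBA)
srvctl stop database -d DBA

# stop ASM on each node; if your clusterware files live inside ASM you
# would need to stop the whole stack with crsctl instead
srvctl stop asm -n linuxrac1
srvctl stop asm -n linuxrac2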

After shutting down both the database and ASM instances on both nodes, I remove the 11gR2 database ORACLE_HOME using the new 11gR2 deinstall utility:


oracle@linuxrac1:DBA1 ~/deinstall> ./deinstall -home /var/opt/oracle/product/11.2.0
.
<snip>
.
####################### CLEAN OPERATION SUMMARY #######################
Successfully de-configured the following database instances : DBA
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/var/opt/oracle/product/11.2.0' from the central inventory on the local node.
Successfully deleted directory '/var/opt/oracle/product/11.2.0' on the local node.
Successfully detached Oracle home '/var/opt/oracle/product/11.2.0' from the central inventory on the remote nodes 'linuxrac2'.
Successfully deleted directory '/var/opt/oracle/product/11.2.0' on the remote nodes 'linuxrac2'.
Oracle Universal Installer cleanup was successful.

Oracle install successfully cleaned up the temporary directories.
#######################################################################

Then I get rid of the 11gR2 grid infrastructure:

[jason@linuxrac1 ~]$ sudo  /var/opt/grid/11.2.0/crs/install/rootcrs.pl -verbose -deconfig -force
.
<snip>
.
GSD exists.
ONS daemon exists. Local port 6101, remote port 6200
eONS daemon exists. Multicast port 21702, multicast IP address 234.91.178.66, listening port 2016
ADVM/ACFS is not supported on Redhat 4
ACFS-9201: Not Supported
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'linuxrac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'linuxrac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'linuxrac1'
CRS-2679: Attempting to clean 'ora.dba.db' on 'linuxrac1'
ORA-12545: Connect failed because target host or object does not exist
ORA-12545: Connect failed because target host or object does not exist
ORA-12545: Connect failed because target host or object does not exist
CRS-2681: Clean of 'ora.dba.db' on 'linuxrac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'linuxrac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'linuxrac1' succeeded
CRS-2677: Stop of 'ora.cssdmonitor' on 'linuxrac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'linuxrac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'linuxrac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'linuxrac1' succeeded
CRS-2677: Stop of 'ora.cssd' on 'linuxrac1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'linuxrac1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'linuxrac1' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'linuxrac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'linuxrac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

Note that you run a slightly different command on the last node you remove the grid infrastructure from:

/var/opt/grid/11.2.0/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode

Then run the deinstall of the Grid Infrastructure home:

oracle@linuxrac1:DBA1 ~/deinstall> ./deinstall -home /var/opt/grid/11.2.0
.
<snip>
.
####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
Oracle Clusterware was already stopped and de-configured on node "linuxrac2"
Oracle Clusterware was already stopped and de-configured on node "linuxrac1"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/var/opt/grid/11.2.0' from the central inventory on the local node.
Successfully deleted directory '/var/opt/grid/11.2.0' on the local node.
Successfully detached Oracle home '/var/opt/grid/11.2.0' from the central inventory on the remote nodes 'linuxrac2'.
Successfully deleted directory '/var/opt/grid/11.2.0' on the remote nodes 'linuxrac2'.
Oracle Universal Installer cleanup was successful.

Oracle install successfully cleaned up the temporary directories.
#######################################################################

Now we reinstate the cluster under 10gR2. With the existing 10gR2 CRS home still in place, this essentially just means re-running the final CRS root.sh script on both nodes:

/opt/oracle/product/crs/root.sh
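
Once root.sh has completed on both nodes, it is worth a quick sanity check that the 10gR2 stack is back up, using the standard 10gR2 clusterware tools from the CRS home (just a sketch; your resource list will vary):

# verify CSS, CRS and EVM are healthy on this node
/opt/oracle/product/crs/bin/crsctl check crs

# list the registered resources and their state across the cluster
/opt/oracle/product/crs/bin/crs_stat -t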

This now leaves you with a 10gR2 cluster. However, when I attempt to start up ASM I get the following:


oracle@linuxrac1:+ASM1 ~> sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Mon Mar 15 09:53:41 2010

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Connected to an idle instance.

SQL> startup
ORA-01078: failure in processing system parameters
LRM-00111: no closing quote for value 'C"'

Now, I keep the ASM spfile for this cluster on a shared raw device, so let’s take a look at it:

oracle@linuxrac1:+ASM1 /tmp> dd if=/crs/spfileASM.ora of=/tmp/initASM.ora bs=1M
156+0 records in
156+0 records out

oracle@linuxrac1:+ASM1 /tmp> more /tmp/initASM.ora 
C"

C"
*.asm_diskgroups='DATA1','DATA2','DATA3','DATA4'
*.cluster_database_instances=2
*.diagnostic_dest='/opt/oracle'
+ASM2.instance_number=2
+ASM1.instance_number=1
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='exclusive'

I don’t understand what is happening to the ASM spfile, but those spurious C" entries guarantee that ASM fails to parse the file properly. Once you remove them by creating a new spfile, everything is happy again and the downgrade is fine.
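
The fix is simply to rebuild the spfile on the raw device from a cleaned-up copy of the parameter file. A minimal sketch, assuming the C" junk has been stripped out of the dumped copy and saved as /tmp/initASM_clean.ora (a file name made up for illustration), and that the local init+ASM1.ora still points at the raw device:

oracle@linuxrac1:+ASM1 ~> sqlplus / as sysdba

SQL> -- rebuild the spfile on the raw device from the cleaned-up pfile
SQL> create spfile='/crs/spfileASM.ora' from pfile='/tmp/initASM_clean.ora';
SQL> -- then start ASM as normal
SQL> startup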

4 thoughts on “Doing an 11gR2 Clusterware Downgrade to 10gR2”

  1. To get a better understanding of what is happening, look into the spfile with ‘od’: od -xa /crs/spfileASM.ora
    (‘a’ gives the ASCII representation, ‘x’ dumps the values in hexadecimal)

  2. Hi Frits,

    Thanks for reading, I’m getting a bit rusty at this blogging business!

    Good idea!

    Just seems like something in the downgrade process corrupts the file.

  3. I’ve tried the deinstall tool for removing clusterware and database.

    For downgrading, I think it’s a good tool, provided the bumps you encounter get resolved.

    For removal, it’s not that handy: everything must be neatly shut down and thus in a working state. If something has crashed or is no longer functioning, it will not work. There ought to be a ‘destruction’ switch that just kills any processes and removes the files. Because of this, I remove everything by hand, which saves me a lot of time.
