Online patching: The Good, the Bad, and the Ugly

I’ve worked on 24×7 systems for more than a decade, and I have a real dislike of downtime. For one thing, it can be a pain to agree any downtime with the business, and while RAC can and does help when you work in a rolling fashion, there is still risk.

Online patching has been promised for a long time, and it is only recently that I dipped my toe in the water. Unfortunately, online patches are not a panacea, and in this blog posting I’m going to share some of the downsides.

Of course, not all patches can be applied online. If a patch can be, the README associated with it will have a section on how to apply it online, and when you uncompress the patch there will be a directory called online.

The Good

So first for the good side: the actual application truly can be done online, and in that sense it does what it says on the tin. Here I’m running from the unzipped patch directory, and in this example I’m using patch 10219624:

bash-3.2$ /u01/app/oracle/product/11.2.0/db_1/OPatch/opatch apply online -connectString TESTRAC1 -ocmrf /tmp/ocm.rsp 
Oracle Interim Patch Installer version 11.2.0.3.3
Copyright (c) 2012, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/11.2.0/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/11.2.0/db_1/oraInst.loc
OPatch version    : 11.2.0.3.3
OUI version       : 11.2.0.3.0
Log file location : /u01/app/oracle/product/11.2.0/db_1/cfgtoollogs/opatch/10219624_Jan_24_2013_08_54_08/apply2013-01-24_08-54-08AM_1.log

Applying interim patch '10219624' to OH '/u01/app/oracle/product/11.2.0/db_1'
Verifying environment and performing prerequisite checks...
All checks passed.
Backing up files...

Patching component oracle.rdbms, 11.2.0.3.0...
Installing and enabling the online patch 'bug10219624.pch', on database 'TESTRAC1'.


Verifying the update...

Patching in all-node mode.

Updating nodes 'rac2' 
   Apply-related files are:
     FP = "/u01/app/oracle/product/11.2.0/db_1/.patch_storage/10219624_Dec_20_2012_02_13_54/rac/copy_files.txt"
     DP = "/u01/app/oracle/product/11.2.0/db_1/.patch_storage/10219624_Dec_20_2012_02_13_54/rac/copy_dirs.txt"
     MP = "/u01/app/oracle/product/11.2.0/db_1/.patch_storage/10219624_Dec_20_2012_02_13_54/rac/make_cmds.txt"
     RC = "/u01/app/oracle/product/11.2.0/db_1/.patch_storage/10219624_Dec_20_2012_02_13_54/rac/remote_cmds.txt"

Instantiating the file "/u01/app/oracle/product/11.2.0/db_1/.patch_storage/10219624_Dec_20_2012_02_13_54/rac/copy_files.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/product/11.2.0/db_1/.patch_storage/10219624_Dec_20_2012_02_13_54/rac/copy_files.txt" with actual path.
Propagating files to remote nodes...
Instantiating the file "/u01/app/oracle/product/11.2.0/db_1/.patch_storage/10219624_Dec_20_2012_02_13_54/rac/copy_dirs.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/product/11.2.0/db_1/.patch_storage/10219624_Dec_20_2012_02_13_54/rac/copy_dirs.txt" with actual path.
Propagating directories to remote nodes...
Patch 10219624 successfully applied
Log file location: /u01/app/oracle/product/11.2.0/db_1/cfgtoollogs/opatch/10219624_Jan_24_2013_08_54_08/apply2013-01-24_08-54-08AM_1.log

OPatch succeeded.

I’m applying this to a 2 node 11gR2 RAC cluster. You’ll notice that it is applied on ALL nodes: you can’t apply an online patch in RAC to just one node at a time, and you can’t roll back one node at a time either. Also be aware that while the patch is in the Oracle home on all nodes in the cluster, it has only been applied to the local instance.

Now, I know you are meant to give connection string details (username/password), which lets you apply to all instances in a cluster at the same time, but on some systems I work on I do not have this information and rely on OS authentication only. This can lead to some pain.

You can tell a patch is applied with the following:

SQL> oradebug patch list

Patch File Name                                   State
================                                =========
bug10219624.pch                                  ENABLED

However, on the remote node:

SQL> oradebug patch list

Patch File Name                                   State
================                                =========
No patches currently installed

I accept this need not arise if you are able to authenticate properly at installation time. To fix this up you can do the following:

-bash-3.2$ /u01/app/oracle/product/11.2.0/db_1/OPatch/opatch util enableonlinepatch -connectString TESTRAC2 -id 10219624
Oracle Interim Patch Installer version 11.2.0.3.3
Copyright (c) 2012, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/11.2.0/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/11.2.0/db_1/oraInst.loc
OPatch version    : 11.2.0.3.3
OUI version       : 11.2.0.3.0
Log file location : /u01/app/oracle/product/11.2.0/db_1/cfgtoollogs/opatch/opatch2013-01-24_09-47-08AM_1.log

Invoking utility "enableonlinepatch"
Installing and enabling the online patch 'bug10219624.pch', on database 'TESTRAC2'.


OPatch succeeded.

The Bad

I’ve found rolling back to be slightly more problematic on the remote node with OS authentication. The rollback always removed the patch from the home across all nodes, and always removed it from the instance on the local node. While there is a documented opatch method to then disable the patch in an instance, very similar to the enableonlinepatch above (it’s Disableonlinepatch), I found it did not work with some patches: though opatch reported success, the patch was still enabled.

Another point to note: restarting an instance does not remove an online-applied patch. There is a directory under $ORACLE_HOME called hpatch that holds the online-applied patch libraries.

To get round this I had to resort to the following oradebug commands:

SQL> oradebug patch list

Patch File Name                                   State
================                                =========
bug10219624.pch                                  ENABLED

SQL> oradebug patch disable bug10219624.pch
Statement processed.
SQL> oradebug patch list

Patch File Name                                   State
================                                =========
bug10219624.pch                                  DISABLED

SQL> oradebug patch remove bug10219624.pch
Statement processed.
SQL> oradebug patch list

Patch File Name                                   State
================                                =========
bug10219624.pch                                  REMOVED

The oradebug patch list state of REMOVED then reverts to “No patches currently installed” upon instance restart.

The Ugly

This really caught me out: patches applied online are completely incompatible with a subsequent run of opatch auto. I recently had the situation whereby I had applied a patch online and later wanted to run opatch auto to apply further patches. Before running opatch auto I always run the check for conflicts, and this did not give me a clue that opatch auto would not work with the online-applied patch.

However, when I ran opatch auto to apply Bundle Patch 11, the following occurred:

[Jan 16, 2013 9:19:16 AM]    OUI-67303:
                             Patches [   14632268   12880299   13734832 ] will be rolled back.
[Jan 16, 2013 9:19:16 AM]    Do you want to proceed? [y|n]
[Jan 16, 2013 9:19:19 AM]    Y (auto-answered by -silent)
[Jan 16, 2013 9:19:19 AM]    User Responded with: Y
[Jan 16, 2013 9:19:19 AM]    OPatch continues with these patches:   14474780
[Jan 16, 2013 9:19:19 AM]    OUI-67073:UtilSession failed:
                             OPatch cannot roll back an online patch while applying a regular patch.
                             Please rollback the online patch(es) " 14632268" manually, and then apply the regular patch(es) " 14474780".
[Jan 16, 2013 9:19:19 AM]    --------------------------------------------------------------------------------
[Jan 16, 2013 9:19:19 AM]    The following warnings have occurred during OPatch execution:
[Jan 16, 2013 9:19:19 AM]    1) OUI-67303:
                             Patches [   14632268   12880299   13734832 ] will be rolled back.
[Jan 16, 2013 9:19:19 AM]    --------------------------------------------------------------------------------
[Jan 16, 2013 9:19:19 AM]    Finishing UtilSession at Wed Jan 16 09:19:19 GMT 2013
[Jan 16, 2013 9:19:19 AM]    Log file location: /u01/app/ora/product/11.2.0.3/db_1/cfgtoollogs/opatch/opatch2013-01-16_09-19-08AM_1.log
[Jan 16, 2013 9:19:19 AM]    Stack Description: java.lang.RuntimeException:
                             OPatch cannot roll back an online patch while applying a regular patch.
                             Please rollback the online patch(es) " 14632268" manually, and then apply the regular patch(es) " 14474780"

Yes, it’s not that difficult to fix up; the frustrating thing here is that the prerequisite checks did not show any issues. It’s pretty clear that either the opatch auto developers have not given any thought to how to properly handle an online-applied patch, or the online patching developers have not considered the consequences of online patching for a future opatch auto run.

Online patching is almost like the holy grail: nobody wants downtime. But I just don’t think the current online patching technique is quite there yet, and it really doesn’t play well at all with opatch auto.

Observing Exadata HCC compression changes when adding columns

This blog posting is very much a follow-on from the previous entry on how data compressed with Exadata HCC compression behaves under changing table definitions. Many thanks to Greg Rahn for the comments on the previous entry suggesting a simple mechanism for determining whether the compression level has changed or not.

In this blog posting we add a column to an HCC compressed table and we observe whether the number of blocks in the table changes or not.

As Greg stated in the comments on the previous blog entry, we have 3 possibilities for adding a column:

  1. add column
  2. add column with a default value
  3. add column with a default value but also specify as not null

We start with the same table as in the previous entry:

SQL : db01> create table t_part (
username varchar2(30),
user_id number,
created date )
partition by range (created)
(partition p_2009 values less than (to_date('31-DEC-2009', 'dd-MON-YYYY')) tablespace users,
partition p_2010 values less than (to_date('31-DEC-2010', 'dd-MON-YYYY')) tablespace users,
partition p_2011 values less than (to_date('31-DEC-2011', 'dd-MON-YYYY')) tablespace users,
partition p_2012 values less than (to_date('31-DEC-2012', 'dd-MON-YYYY')) tablespace users )

/

Table created.

SQL : db01> alter table t_part compress for query high

/

Table altered.
SQL : db01> insert /*+ APPEND */ into t_part select * from all_users

488 rows created.

SQL : db01> commit;

Commit complete.

So now we gather stats on the table and see how many blocks the table is consuming:

SQL : db01> exec DBMS_STATS.gather_table_stats(ownname => 'SYS',tabname => 'T_PART', estimate_percent => 100);
PL/SQL procedure successfully completed.

SQL : db01> select table_name, blocks, empty_blocks, avg_row_len , last_analyzed from dba_tables where table_name='T_PART';

TABLE_NAME BLOCKS EMPTY_BLOCKS   AVG_ROW_LEN LAST_ANAL
---------- ------ -------------- ---------- ------------
T_PART        60      0          20         18-MAY-12

This will be our starting point for each of the 3 ways of adding a column: we always begin with the table consuming 60 blocks, add the column, and then determine how many blocks the table is consuming afterwards.

If the table has undergone decompression from HCC, the number of blocks will go up; conversely, if it has not, the number of blocks will remain static.
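This before/after comparison is simple enough to script. A minimal sketch in Python (the function name and the 10% tolerance are my own choices, not from any Oracle tooling; the 60 and 192 figures are the ones observed below):

```python
def compression_verdict(blocks_before, blocks_after, tolerance=0.1):
    """Compare DBA_TABLES block counts before and after a DDL change.

    If the table was silently decompressed from HCC, the block count
    jumps sharply (e.g. 60 -> 192); if the change was dictionary-only,
    it stays flat.
    """
    if blocks_after <= blocks_before * (1 + tolerance):
        return "block count static - still HCC compressed"
    return "block count grew - data has been decompressed"

print(compression_verdict(60, 60))    # add column, no default
print(compression_verdict(60, 192))   # add column with a default value
```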

First we try just adding a column, no default value:

SQL : db01> alter table t_part add city varchar2(30);

Table altered.

SQL : db01> exec DBMS_STATS.gather_table_stats(ownname => 'SYS', tabname => 'T_PART', estimate_percent => 100);

PL/SQL procedure successfully completed.
SQL : db01> select table_name, blocks, empty_blocks, avg_row_len , last_analyzed from dba_tables where table_name='T_PART';

TABLE_NAME BLOCKS EMPTY_BLOCKS AVG_ROW_LEN LAST_ANAL
---------- ------ ----------  ---------- ------------
T_PART        60      0          20         18-MAY-12

So this method has not changed the number of blocks; it’s just a dictionary change. We then drop the table with the purge option and recreate it back to the starting point of 60 blocks. Next we try adding the column with a default value:

SQL : db01> alter table t_part add city varchar2(30) default 'Oxford';
Table altered.
SQL : db01> exec DBMS_STATS.gather_table_stats(ownname => 'SYS', tabname => 'T_PART', estimate_percent => 100);

PL/SQL procedure successfully completed.
SQL : db01>select table_name, blocks, empty_blocks, avg_row_len , last_analyzed from dba_tables where table_name='T_PART';

TABLE_NAME   BLOCKS  EMPTY_BLOCKS  AVG_ROW_LEN  LAST_ANAL
------------ ------ ------------    ---------- -----------
T_PART        192       0             27       18-MAY-12

As we can see, the number of blocks has absolutely rocketed from 60 to 192, which indicates the data is no longer HCC compressed.

Finally we repeat adding a column with a default value, but this time including the not null condition:


SQL :  db01> alter table t_part add city varchar2(30) default 'Oxford' not null;

Table altered.

SQL :  db01>  exec DBMS_STATS.gather_table_stats(ownname => 'SYS', tabname => 'T_PART', estimate_percent => 100);

PL/SQL procedure successfully completed.
SQL : db01> select table_name, blocks, empty_blocks, avg_row_len , last_analyzed from dba_tables where table_name='T_PART';

TABLE_NAME BLOCKS EMPTY_BLOCKS   AVG_ROW_LEN LAST_ANAL
---------- ------ -------------- ---------- ------------
T_PART        60      0          20         18-MAY-12

We see that with the technique of including a not null clause on the add column with a default value, the number of blocks has not changed, and hence the data must still be HCC compressed, as confirmed with the DBMS_COMPRESSION.GET_COMPRESSION_TYPE function.

Essentially, if any column you add to an HCC compressed table can be defined as not null, then you can be sure that specifying a default value will not undo your HCC compression.

If you do need to allow nulls, then getting away without a default value would be best, and perhaps updating only recent data rather than all historical data would at least preserve some data as HCC compressed. Be aware that uncompressing HCC compressed data can obviously lead to a large increase in your storage requirements.
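To summarise the three experiments, the outcomes can be captured in a small lookup (this is purely a restatement of the results above):

```python
# Results of the three add-column experiments on the HCC table,
# starting from 60 blocks in every case.
outcomes = {
    "add column":                        ("60 blocks",  "HCC preserved"),
    "add column with default":           ("192 blocks", "data decompressed"),
    "add column with default, not null": ("60 blocks",  "HCC preserved"),
}

for method, (blocks, verdict) in outcomes.items():
    print(f"{method:36} {blocks:11} {verdict}")
```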

Adding Columns and Exadata HCC compression

While everyone is aware of the issues of mixing EHCC compression and OLTP-type activities, I had a customer who was interested in finding out what happens when you add a column to a table that has EHCC compression enabled on it.

As I could not see any definitive statements in the documentation on this particular scenario I ran up some tests to see the behaviour.

First of all they are using partitioning by date range, so we create a partitioned table:

SQL: db01> create table t_part  ( 
username varchar2(30), 
user_id  number, 
created date ) 
partition by range (created) 
( partition p_2009 values less than (to_date('31-DEC-2009', 'dd-MON-YYYY')) tablespace users, 
partition p_2010 values less than (to_date('31-DEC-2010', 'dd-MON-YYYY')) tablespace users, 
partition p_2011 values less than (to_date('31-DEC-2011', 'dd-MON-YYYY')) tablespace users, 
partition p_2012 values less than (to_date('31-DEC-2012', 'dd-MON-YYYY')) tablespace users )

/

Table created

The customer is particularly interested in using partitioning for ILM-type scenarios, in that they will compress historical partitions but not more up-to-date ones. Let’s enable HCC compression on the table and check that it’s on:


SQL: db01> alter table t_part compress for query high 
/

Table altered

SQL: db01> select table_name, partition_name, compression, compress_for 
from all_tab_partitions 
where table_name='T_PART' 
/

TABLE_NAME                     PARTITION_NAME                 COMPRESS COMPRESS_FOR 
------------------------------ ------------------------------ -------- ------------ 
T_PART                         P_2009                         ENABLED  QUERY HIGH 
T_PART                         P_2010                         ENABLED  QUERY HIGH 
T_PART                         P_2011                         ENABLED  QUERY HIGH 
T_PART                         P_2012                         ENABLED  QUERY HIGH

Let’s insert some data and check that the actual row is compressed (thanks to Kerry Osborne):


SQL: db01> insert /*+ APPEND */ into t_part select * from all_users 
/ 
3008 rows created
SQL: db01> commit
/
Commit complete

SQL: db01> select max(rowid) from t_part
/

MAX(ROWID) 
------------------ 
AAAexSAANAAHGoUAAN

SQL: db01> select decode( 
DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', '&rowid'), 
    1, 'No Compression', 
    2, 'Basic/OLTP Compression', 
    4, 'HCC Query High', 
    8, 'HCC Query Low', 
   16, 'HCC Archive High', 
   32, 'HCC Archive Low', 
   'Unknown Compression Level') compression_type 
from dual;

Enter value for rowid: AAAexSAANAAHGoUAAN 
old   2: DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', '&rowid'), 
new   2: DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', 'AAAexSAANAAHGoUAAN'),

COMPRESSION_TYPE 
------------------------- 
HCC Query High
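The decode above mirrors the compression type codes returned by DBMS_COMPRESSION.GET_COMPRESSION_TYPE. The same mapping as a quick Python helper, handy when post-processing query output (the helper itself is my own, not part of any Oracle tooling):

```python
# Compression type codes as decoded in the SQL above.
COMPRESSION_TYPES = {
    1: "No Compression",
    2: "Basic/OLTP Compression",
    4: "HCC Query High",
    8: "HCC Query Low",
    16: "HCC Archive High",
    32: "HCC Archive Low",
}

def describe(code):
    """Translate a GET_COMPRESSION_TYPE return code to a readable name."""
    return COMPRESSION_TYPES.get(code, "Unknown Compression Level")

print(describe(4))   # HCC Query High
print(describe(64))  # Unknown Compression Level
```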

So we are confident we have a row that is compressed. Now we add a new column to the table with a default value, and then check again what compression the row has:

SQL: db01> alter table t_part add city varchar2(30) default 'Oxford' 
/

Table altered.

SQL: db01> select decode( 
DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', '&rowid'), 
    1, 'No Compression', 
    2, 'Basic/OLTP Compression', 
    4, 'HCC Query High', 
    8, 'HCC Query Low', 
   16, 'HCC Archive High', 
   32, 'HCC Archive Low', 
    'Unknown Compression Level') compression_type 
from dual; 
Enter value for rowid: AAAexSAANAAHGoUAAN 
old   2: DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', '&rowid'), 
new   2: DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', 'AAAexSAANAAHGoUAAN'),

COMPRESSION_TYPE 
------------------------- 
Basic/OLTP Compression

Oh Dear! Our compression has changed.

This is maybe not that surprising. But what if you have a requirement to add a column with no default value, and you just want to update more recent records? Can we avoid downgrading all records from HCC compression?

So we are using the same table and data as before. We will focus on two rows, one in the 2011 partition and one in the 2012 partition.

SQL: db01> select max(rowid) from t_part where created  > TO_DATE('31-Dec-2010', 'DD-MM-YYYY') and created < TO_DATE('01-Jan-2012', 'DD-MM-YYYY');

MAX(ROWID) 
------------------ 
AAAezbAAHAAFwIKAE/

SQL: db01> select decode( 
DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', '&rowid'), 
    1, 'No Compression', 
    2, 'Basic/OLTP Compression', 
    4, 'HCC Query High', 
    8, 'HCC Query Low', 
    16, 'HCC Archive High', 
    32, 'HCC Archive Low', 
    'Unknown Compression Level') compression_type 
from dual;  
Enter value for rowid: AAAezbAAHAAFwIKAE/ 
old   2: DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', '&rowid'), 
new   2: DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', 'AAAezbAAHAAFwIKAE/'),

COMPRESSION_TYPE 
------------------------- 
HCC Query High

SQL: db01> select max(rowid) from t_part where created  > TO_DATE('31-Dec-2011', 'DD-MM-YYYY') and created < TO_DATE('31-Dec-2012', 'DD-MM-YYYY');

MAX(ROWID) 
------------------ 
AAAezcAAHAAHdoSADf

SQL: db01> select decode( 
    DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', '&rowid'), 
    1, 'No Compression', 
    2, 'Basic/OLTP Compression', 
    4, 'HCC Query High', 
    8, 'HCC Query Low', 
    16, 'HCC Archive High', 
    32, 'HCC Archive Low', 
    'Unknown Compression Level') compression_type 
from dual; 
Enter value for rowid: AAAezcAAHAAHdoSADf 
old   2: DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', '&rowid'), 
new   2: DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', 'AAAezcAAHAAHdoSADf'),

COMPRESSION_TYPE 
------------------------- 
HCC Query High

Now we add a column to the table and update the records in only the 2012 partition:

SQL: db01> alter table t_part add city varchar2(30);

Table altered.

SQL: db01> update t_part set city='Oxford' where created > to_date('31-Dec-2011', 'DD-MM-YYYY');

448 rows updated.

SQL: db01> commit;

Commit complete.

And now we again check the compression status of our two rows:

SQL: db01> select decode( 
DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', '&rowid'), 
    1, 'No Compression', 
    2, 'Basic/OLTP Compression', 
    4, 'HCC Query High', 
    8, 'HCC Query Low', 
   16, 'HCC Archive High', 
   32, 'HCC Archive Low', 
       'Unknown Compression Level') compression_type 
from dual;  
Enter value for rowid: AAAezbAAHAAFwIKAE/ 
old   2: DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', '&rowid'), 
new   2: DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', 'AAAezbAAHAAFwIKAE/'),

COMPRESSION_TYPE 
------------------------- 
HCC Query High

SQL: db01> select decode( 
DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', '&rowid'), 
    1, 'No Compression', 
    2, 'Basic/OLTP Compression', 
    4, 'HCC Query High', 
    8, 'HCC Query Low', 
    16, 'HCC Archive High', 
    32, 'HCC Archive Low', 
        'Unknown Compression Level') compression_type 
   from dual; 
Enter value for rowid: AAAezcAAHAAHdoSADf 
old   2: DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', '&rowid'), 
new   2: DBMS_COMPRESSION.GET_COMPRESSION_TYPE ( 'SYS', 'T_PART', 'AAAezcAAHAAHdoSADf'),

COMPRESSION_TYPE 
------------------------- 
Basic/OLTP Compression

So that is great: we have a way of evolving table definitions without forcing the whole set of historical data out of HCC compression.

Creating ASM diskgroups on Exadata with ASMCA

I recently had the chance to create some diskgroups on an Exadata box outside the standard installation procedure. While this is not necessarily Exadata specific, I thought the technique of using ASMCA silently on the command line to create the diskgroups was sufficiently novel for a short blog posting, if for nothing else than to remind myself how to do this in future.

This example uses the disks presented from a quarter rack Exadata system and creates a diskgroup called DATA01:

asmca -silent -createDiskGroup -diskGroupName 'DATA01' -diskList o/192.168.10.14/DATA01_CD_00_cel01,o/192.168.10.14/DATA01_CD_01_cel01,
o/192.168.10.14/DATA01_CD_02_cel01,o/192.168.10.14/DATA01_CD_03_cel01,
o/192.168.10.14/DATA01_CD_04_cel01,o/192.168.10.14/DATA01_CD_05_cel01,
o/192.168.10.14/DATA01_CD_06_cel01,o/192.168.10.14/DATA01_CD_07_cel01,
o/192.168.10.14/DATA01_CD_08_cel01,o/192.168.10.14/DATA01_CD_09_cel01,
o/192.168.10.14/DATA01_CD_10_cel01,o/192.168.10.14/DATA01_CD_11_cel01,
o/192.168.10.15/DATA01_CD_00_cel02,o/192.168.10.15/DATA01_CD_01_cel02,
o/192.168.10.15/DATA01_CD_02_cel02,o/192.168.10.15/DATA01_CD_03_cel02,
o/192.168.10.15/DATA01_CD_04_cel02,o/192.168.10.15/DATA01_CD_05_cel02,
o/192.168.10.15/DATA01_CD_06_cel02,o/192.168.10.15/DATA01_CD_07_cel02,
o/192.168.10.15/DATA01_CD_08_cel02,o/192.168.10.15/DATA01_CD_09_cel02,
o/192.168.10.15/DATA01_CD_10_cel02,o/192.168.10.15/DATA01_CD_11_cel02,
o/192.168.10.16/DATA01_CD_00_cel03,o/192.168.10.16/DATA01_CD_01_cel03,
o/192.168.10.16/DATA01_CD_02_cel03,o/192.168.10.16/DATA01_CD_03_cel03,
o/192.168.10.16/DATA01_CD_04_cel03,o/192.168.10.16/DATA01_CD_05_cel03,
o/192.168.10.16/DATA01_CD_06_cel03,o/192.168.10.16/DATA01_CD_07_cel03,
o/192.168.10.16/DATA01_CD_08_cel03,o/192.168.10.16/DATA01_CD_09_cel03,
o/192.168.10.16/DATA01_CD_10_cel03,o/192.168.10.16/DATA01_CD_11_cel03 -redundancy NORMAL -compatible.asm 11.2.0.0 -compatible.rdbms 11.2.0.0 -sysAsmPassword welcome1 -silent

Note I’ve edited the above to have line breaks after every couple of disks for readability. Of course you can specify your required redundancy level and differing compatible parameters. You can see the 3 cells here that make up an Exadata 1/4 rack, and the InfiniBand IP address of each of these cells in the path to each griddisk.

You can also see the nice mapping between the griddisk names and the storage cells on which they reside (e.g. DATA01_CD_00_cel01 is the DATA01 griddisk created on celldisk 00 of storage cell cel01). I really like this naming feature of Exadata; it makes life that little bit more straightforward for the administrator.
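Since the names follow such a regular pattern, the -diskList argument can be generated rather than typed. A hedged sketch (the cell names and InfiniBand IPs are the ones from this quarter rack; adjust for your own environment):

```python
# Build the asmca -diskList argument from the regular
# o/<ib-ip>/<DISKGROUP>_CD_<nn>_<cell> griddisk naming pattern.
cells = {
    "cel01": "192.168.10.14",
    "cel02": "192.168.10.15",
    "cel03": "192.168.10.16",
}

def grid_disks(diskgroup, cells, disks_per_cell=12):
    return [
        f"o/{ip}/{diskgroup}_CD_{n:02d}_{cell}"
        for cell, ip in sorted(cells.items())
        for n in range(disks_per_cell)
    ]

disks = grid_disks("DATA01", cells)
print(len(disks))   # 36 griddisks across the 3 cells
print(disks[0])     # o/192.168.10.14/DATA01_CD_00_cel01
disk_list = ",".join(disks)  # ready to paste into the asmca command
```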

One other thing to be aware of: when creating the DBFS_DG diskgroup, the CD_00 and CD_01 griddisks don’t exist, as that storage on each cell is used for the system disks.

UKOUG Exa Day

I had a great day at the UKOUG Exa Day yesterday. I was happy by and large with how my presentation went; it was a bit irritating having some technical issues with laptops and projectors, but hopefully the audience was entertained enough not to let that annoy them too much. I’ve included a link to the PowerPoint of the presentation; you really need to read the notes to gain an understanding of what I was trying to say!

Apologies that it is nearly a 7MB download, but I’d be happy to answer any questions you may have on it, and all feedback is welcome.

As for the day itself, there was a great atmosphere at the event, and many, many familiar faces from UKOUG events of the past, in particular from the RAC SIG. I particularly enjoyed Frits Hoogland’s presentation on Exadata and OLTP. He even alluded to the fact that Exadata smart flash logging may not be as beneficial to OLTP as you may be led to believe. The other take-away message from his presentation was to test everything; don’t take things for granted!

Tanel Poder was great to listen to and I think he could have gone on for many hours more talking about Exadata performance.

The day rounded off with great chat and beers for all the delegates courtesy of e-dba.

Exadata Smart Flash Logging

With the 11.2.2.4.0 release of the Exadata storage server software (and provided you are at least on 11.2.0.2 BP11), you have the opportunity to utilise Exadata Smart Flash Logging. I thought I’d take a look at how much (if any) improvement this feature provides in a busy production environment.

Have a look at this blog entry on Exadata Smart Flash Logging by Luis Moreno Campos for an introduction to how it works. Basically, each redo write is now issued twice, once to flash and once to your disk-based redo logs; the fastest write wins.
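The mechanism can be modelled as a simple race: the write completes as soon as either medium acknowledges. A toy model of my own (not Oracle code) to make the point:

```python
# Toy model of Exadata Smart Flash Logging: each redo write goes to
# both disk and flash; the first acknowledgement completes the write.
def log_write(disk_latency_ms, flash_latency_ms):
    winner = "flash" if flash_latency_ms < disk_latency_ms else "disk"
    return winner, min(disk_latency_ms, flash_latency_ms)

# The disk controller's write cache usually acknowledges very quickly...
print(log_write(0.4, 0.6))   # ('disk', 0.4)
# ...but flash caps the damage when disk has an outlier.
print(log_write(55.0, 0.6))  # ('flash', 0.6)
```

This is why the feature is aimed at outliers rather than average latency, which matches what the metrics below show.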

First, let’s check that we actually have an Exadata Smart Flash Log available to be used:

CellCLI> list flashlog detail

name                      cel07_FLASHLOG 
 cellDisk                  FD_00_cel07,FD_08_cel07,FD_09_cel07,FD_01_cel07,FD_15_cel07,FD_06_cel07,FD_03_cel07,FD_04_cel07,FD_07_cel07,FD_02_cel07,FD_12_cel07,FD_14_cel07,FD_13_cel07,FD_11_cel07,FD_05_cel07,FD_10_cel07
 creationTime              2012-03-17T15
 degradedCelldisks 
 effectiveSize             512M 
 efficiency                100.0 
 id                        a24a25e5-062e-4be1-bb6b-3168113a5fe8 
 size                      512M 
 status                    normal

We can see that on this cell there is a flashlog of size 512M, carved out of each of the 16 flash DOMs in the cell. Consequently this reduces the amount of flash you have for your flashcache, though it’s a very small reduction.
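Just how small a reduction is simple arithmetic (I’m taking as given that this generation of cell has four 96 GB F20 cards, i.e. 384 GB of flash per cell):

```python
flashlog_mb = 512          # flashlog size reported by CellCLI
flash_doms = 16            # flash DOMs per cell (4 cards x 4 FMODs)
total_flash_mb = 4 * 96 * 1024  # 4 x 96 GB F20 cards per cell

print(flashlog_mb / flash_doms)                      # 32.0 MB per flash DOM
print(round(100 * flashlog_mb / total_flash_mb, 2))  # 0.13 % of the cell's flash
```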

How much use are we getting out of the flashlog? Well, we can look at some metrics:

CellCLI> list metriccurrent where objectType='FLASHLOG'
 FL_ACTUAL_OUTLIERS                 FLASHLOG        0 IO requests 
 FL_BY_KEEP                         FLASHLOG        0 
 FL_DISK_FIRST                      FLASHLOG        6,168,815 IO requests 
 FL_DISK_IO_ERRS                    FLASHLOG        0 IO requests 
 FL_EFFICIENCY_PERCENTAGE           FLASHLOG        100 % 
 FL_EFFICIENCY_PERCENTAGE_HOUR      FLASHLOG        100 % 
 FL_FLASH_FIRST                     FLASHLOG        172,344 IO requests 
 FL_FLASH_IO_ERRS                   FLASHLOG        0 IO requests 
 FL_FLASH_ONLY_OUTLIERS             FLASHLOG        0 IO requests 
 FL_IO_DB_BY_W                      FLASHLOG        286,075 MB 
 FL_IO_DB_BY_W_SEC                  FLASHLOG        13.328 MB/sec 
 FL_IO_FL_BY_W                      FLASHLOG        303,793 MB 
 FL_IO_FL_BY_W_SEC                  FLASHLOG        13.761 MB/sec 
 FL_IO_W                            FLASHLOG        6,341,159 IO requests 
 FL_IO_W_SKIP_BUSY                  FLASHLOG        0 IO requests 
 FL_IO_W_SKIP_BUSY_MIN              FLASHLOG        0.0 IO/sec 
 FL_IO_W_SKIP_LARGE                 FLASHLOG        0 IO requests 
 FL_PREVENTED_OUTLIERS              FLASHLOG        415 IO requests

First off, this is taken on a very busy system:

FL_IO_FL_BY_W_SEC: 13.761 MB/sec

That is how much data is being written to flash by smart flash log. Well, that sounds great, but it’s not quite so simple. Remember, writes go to both flash and disk.

FL_DISK_FIRST: 6,168,815 IO requests

This metric is actually telling us that 6.1M I/O requests were serviced first by disk, while:

FL_FLASH_FIRST: 172,344 IO requests

is saying this number went to flash first. Oh, that’s not a great improvement! I make that 2.7% of writes going to flash first.
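The 2.7% falls straight out of the metrics above; note also that the two counters sum exactly to FL_IO_W:

```python
fl_disk_first = 6_168_815   # FL_DISK_FIRST
fl_flash_first = 172_344    # FL_FLASH_FIRST

total_writes = fl_disk_first + fl_flash_first
flash_pct = 100 * fl_flash_first / total_writes

print(total_writes)         # 6341159 - matches FL_IO_W
print(round(flash_pct, 1))  # 2.7
```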

Finally, a word on FL_PREVENTED_OUTLIERS: this is saying there were 415 writes on this cell that would have taken more than 0.5 seconds had there been no flash logging in place.

I have also checked AWR reports from before and after having flash logging in place. There is very little change: AWR shows an average wait of 3ms for log file parallel write. Have a look at the wait event histogram for this:

We see the vast majority of writes are under 1ms. This was also the case before flash logging was enabled; it has not improved this at all.

This is a busy, CPU-bound system; let’s look at the log file sync wait event histogram:

Eurgh! is the only way to describe this.

I think Kevin Closson has covered this a mere half-decade ago!

Exadata Flash Storage

Exadata flash storage is provided by the Sun Flash Accelerator F20 PCIe card. Four of these cards are installed in every Exadata storage cell, and there is a documentation set available to peruse.

First, we can see these devices using lsscsi:

[root@cel01 ~]# lsscsi |grep  MARVELL 
[8:0:0:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdn 
[8:0:1:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdo 
[8:0:2:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdp 
[8:0:3:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdq 
[9:0:0:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdr 
[9:0:1:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sds 
[9:0:2:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdt 
[9:0:3:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdu 
[10:0:0:0]   disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdv 
[10:0:1:0]   disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdw 
[10:0:2:0]   disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdx 
[10:0:3:0]   disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdy 
[11:0:0:0]   disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdz 
[11:0:1:0]   disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdaa 
[11:0:2:0]   disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdab 
[11:0:3:0]   disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdac

You can see they are bunched into four groups of four: 8:, 9:, 10:, and 11:. This is because each of the 4 cards has 4 FMODs, so on every Exadata cell the flash is presented as 16 separate devices.
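The grouping is visible directly in the SCSI addresses: one host adapter per F20 card. A quick parse of a few lines from the lsscsi output above makes the point (my own throwaway script, using sample lines copied from the listing):

```python
import re
from collections import defaultdict

# A few lines taken from the lsscsi output above.
lsscsi = """\
[8:0:0:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdn
[8:0:1:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdo
[9:0:0:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdr
[11:0:3:0]   disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdac
"""

# Group devices by SCSI host number (the first field of the address):
# each host corresponds to one F20 card.
by_host = defaultdict(list)
for line in lsscsi.splitlines():
    m = re.match(r"\[(\d+):\d+:\d+:\d+\]\s+\S+\s+.*\s(/dev/\S+)", line)
    if m:
        by_host[m.group(1)].append(m.group(2))

print(sorted(by_host))  # ['11', '8', '9']
print(by_host["8"])     # ['/dev/sdn', '/dev/sdo']
```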

We can also use the flash_dom command:


[root@cel01 ~]# flash_dom -l

Aura Firmware Update Utility, Version 1.2.7

Copyright (c) 2009 Sun Microsystems, Inc. All rights reserved..

U.S. Government Rights - Commercial Software. Government users are subject 
to the Sun Microsystems, Inc. standard license agreement and 
applicable provisions of the FAR and its supplements.

Use is subject to license terms.

This distribution may include materials developed by third parties.

Sun, Sun Microsystems, the Sun logo, Sun StorageTek and ZFS are trademarks 
or registered trademarks of Sun Microsystems, Inc. or its subsidiaries, 
in the U.S. and other countries.



 HBA# Port Name         Chip Vendor/Type/Rev    MPT Rev  Firmware Rev  IOC     WWID                 Serial Number

 1.  /proc/mpt/ioc0    LSI Logic SAS1068E C0     105      011b5c00     0       5080020000fe34c0     465769T+1130A405XA

        Current active firmware version is 011b5c00 (1.27.92) 
        Firmware image's version is MPTFW-01.27.92.00-IT 
        x86 BIOS image's version is MPTBIOS-6.26.00.00 (2008.10.14) 
        FCode image's version is MPT SAS FCode Version 1.00.49 (2007.09.21)


          D#  B___T  Type       Vendor   Product          Rev    Operating System Device Name 
          1.  0   0  Disk       ATA      MARVELL SD88SA02 D20Y   /dev/sdn    [8:0:0:0] 
          2.  0   1  Disk       ATA      MARVELL SD88SA02 D20Y   /dev/sdo    [8:0:1:0] 
          3.  0   2  Disk       ATA      MARVELL SD88SA02 D20Y   /dev/sdp    [8:0:2:0] 
          4.  0   3  Disk       ATA      MARVELL SD88SA02 D20Y   /dev/sdq    [8:0:3:0]

 2.  /proc/mpt/ioc1    LSI Logic SAS1068E C0     105      011b5c00     0       5080020000fe3440     465769T+1130A405X7

        Current active firmware version is 011b5c00 (1.27.92) 
        Firmware image's version is MPTFW-01.27.92.00-IT 
        x86 BIOS image's version is MPTBIOS-6.26.00.00 (2008.10.14) 
        FCode image's version is MPT SAS FCode Version 1.00.49 (2007.09.21)


          D#  B___T  Type       Vendor   Product          Rev    Operating System Device Name 
          1.  0   0  Disk       ATA      MARVELL SD88SA02 D20Y   /dev/sdr    [9:0:0:0] 
          2.  0   1  Disk       ATA      MARVELL SD88SA02 D20Y   /dev/sds    [9:0:1:0] 
          3.  0   2  Disk       ATA      MARVELL SD88SA02 D20Y   /dev/sdt    [9:0:2:0] 
          4.  0   3  Disk       ATA      MARVELL SD88SA02 D20Y   /dev/sdu    [9:0:3:0]
.
.

The output above has been edited for brevity. You can even have a look at the devices under /proc/mpt/ioc1 on the filesystem.

We can also of course look at these devices via cellcli:


CellCLI> list physicaldisk where diskType='FlashDisk' 
         FLASH_1_0       1113M086V3      normal 
         FLASH_1_1       1113M086V4      normal 
         FLASH_1_2       1113M086V0      normal 
         FLASH_1_3       1113M086UY      normal 
         FLASH_2_0       1113M0892K      normal 
         FLASH_2_1       1113M086TR      normal 
         FLASH_2_2       1113M0891P      normal 
         FLASH_2_3       1113M0892L      normal 
         FLASH_4_0       1113M086UP      normal 
         FLASH_4_1       1113M086UQ      normal 
         FLASH_4_2       1113M086UT      normal 
         FLASH_4_3       1113M086UN      normal 
         FLASH_5_0       1113M08AGJ      normal 
         FLASH_5_1       1112M07V6U      normal 
         FLASH_5_2       1113M08AKJ      normal 
         FLASH_5_3       1113M08AH5      normal

Again they are presented as 4 lots of 4, with a diskType of FlashDisk. Looking at the detail of one of the flash disks:


CellCLI>  list physicaldisk where diskType='FlashDisk' detail

  name:                   FLASH_5_3 
         diskType:               FlashDisk 
         errCmdTimeoutCount:     0 
         errHardReadCount:       0 
         errHardWriteCount:      0 
         errMediaCount:          0 
         errOtherCount:          0 
         errSeekCount:           0 
         luns:                   5_3 
         makeModel:              "MARVELL SD88SA02" 
         physicalFirmware:       D20Y 
         physicalInsertTime:     2011-12-07T19:00:02+00:00 
         physicalInterface:      sas 
         physicalSerial:         1113M08AH5 
         physicalSize:           22.8880615234375G 
         sectorRemapCount:       0 
         slotNumber:             "PCI Slot: 5; FDOM: 3" 
         status:                 normal

I’ve edited the above to show just the detail for the FLASH_5_3 device, basically the last FDOM slot on the highest-numbered PCI slot. You can see the size of each FDOM is 22.8880615234375G, which multiplied by 16 gives 366.21G.
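The 366G figure is easy to reproduce, using awk for the floating-point arithmetic:

```shell
# 16 FDOMs at 22.8880615234375G each gives the total flash per cell.
total=$(awk 'BEGIN { printf "%.2f", 22.8880615234375 * 16 }')
echo "${total}G"   # 366.21G
```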

We can also look at the lun level:

CellCLI> list lun where id='5_3' detail 
         name:                   5_3 
         cellDisk:               FD_15_cel01 
         deviceName:             /dev/sdy 
         diskType:               FlashDisk 
         id:                     5_3 
         isSystemLun:            FALSE 
         lunAutoCreate:          FALSE 
         lunSize:                22.8880615234375G 
         overProvisioning:       100.0 
         physicalDrives:         FLASH_5_3 
         status:                 normal

You can see each lun has a celldisk name associated with it, and a sensible naming convention. Finally drilling down into the celldisk detail:

CellCLI> list celldisk where name='FD_15_cel01' detail 
         name:                   FD_15_cel01 
         comment: 
         creationTime:           2012-01-10T10:13:06+00:00 
         deviceName:             /dev/sdy 
         devicePartition:        /dev/sdy 
         diskType:               FlashDisk 
         errorCount:             0 
         freeSpace:              0 
         id:                     8ddbd2c8-8446-4735-8948-d8aea5744b35 
         interleaving:           none 
         lun:                    5_3 
         size:                   22.875G 
         status:                 normal

The final point of interest on the flash cards is the white part, middle top on the card. That is the Energy Storage Module (ESM), and it has a set lifetime. According to the F20 docs, on a V2 its lifetime was expected to be 3 years. You can monitor the health and lifetime of your modules with the following ipmitool command:

[root@cel01 ~]# for RISER in RISER1/PCIE1 RISER1/PCIE4 RISER2/PCIE2 RISER2/PCIE5; do ipmitool sunoem cli "show /SYS/MB/$RISER/F20CARD/UPTIME"; done

Connected. Use ^D to exit. 
-> show /SYS/MB/RISER1/PCIE1/F20CARD/UPTIME

 /SYS/MB/RISER1/PCIE1/F20CARD/UPTIME 
    Targets:

    Properties: 
        type = Power Unit 
        ipmi_name = PCIE1/F20/UP 
        class = Threshold Sensor 
        value = 9844.000 Hours 
        upper_nonrecov_threshold = 26220.000 Hours 
        upper_critical_threshold = 25806.000 Hours 
        upper_noncritical_threshold = 25254.000 Hours 
        lower_noncritical_threshold = N/A 
        lower_critical_threshold = N/A 
        lower_nonrecov_threshold = N/A 
        alarm_status = cleared

    Commands: 
        cd 
        show

-> Session closed 
Disconnected

I’ve edited the output above down to just one riser card, to prevent boredom. You are looking to ensure the value (here value = 9844.000 Hours) is less than the upper_noncritical_threshold, which in this case it is. If the value is greater than the threshold, have the ESM replaced.
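This check lends itself to a simple scripted comparison. The sketch below uses the numbers pasted from the ipmitool output above; in a real check you would parse value and upper_noncritical_threshold out of the UPTIME output for each riser.

```shell
# Compare ESM uptime against its noncritical threshold (values from above).
value=9844
threshold=25254
if [ "$value" -lt "$threshold" ]; then
  echo "ESM OK: ${value} of ${threshold} hours used"
else
  echo "ESM past its lifetime: schedule a replacement"
fi
```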

So far I’ve found the flash cards on both V2 and X2-2 to be very reliable; I’d be interested in hearing other thoughts on their reliability.

Exadata Batteries

Andy Colvin has a good post highlighting the importance of making sure your batteries are operating with enough charge to ensure that the drive policy is in writeback as opposed to writethrough.

I just wanted to add a small addendum to that posting. I have seen severe issues when the MegaRaid controller goes into writethrough mode. It is particularly crucial on the compute nodes: under some circumstances it can lead to the drives on the compute node suffering disk corruption. I have felt the pain of this leading to a so-called Bare Metal Restore of the affected node.

I’ve also had the pleasure of being involved with the replacement of around 50 Exadata V2 batteries in the last couple of months. This is almost certainly due to the age of the batteries: they are due to be replaced in all Exadatas after 2 years, but these just failed to make the distance.

One of the MegaCLI commands Andy highlighted provides a wealth of information:


[root@db01 ~]# /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -a0

BBU status for Adapter: 0

BatteryType: iBBU08 
Voltage: 4040 mV 
Current: 0 mA 
Temperature: 50 C

BBU Firmware Status:

  Charging Status              : None 
  Voltage                      : OK 
  Temperature                  : OK 
  Learn Cycle Requested        : No 
  Learn Cycle Active           : No 
  Learn Cycle Status           : OK 
  Learn Cycle Timeout          : No 
  I2c Errors Detected          : No 
  Battery Pack Missing         : No 
  Battery Replacement required : No 
  Remaining Capacity Low       : No 
  Periodic Learn Required      : No 
  Transparent Learn            : No

Battery state:

GasGuageStatus: 
  Fully Discharged        : No 
  Fully Charged           : No 
  Discharging             : No 
  Initialized             : Yes 
  Remaining Time Alarm    : No 
  Remaining Capacity Alarm: Yes 
  Discharge Terminated    : No 
  Over Temperature        : No 
  Charging Terminated     : No 
  Over Charged            : No

Relative State of Charge: 100 % 
Charger System State: 1 
Charger System Ctrl: 0 
Charging current: 0 mA 
Absolute state of charge: 0 % 
Max Error: 0 %

BBU Capacity Info for Adapter: 0

Relative State of Charge: 100 % 
Absolute State of charge: 87 % 
Remaining Capacity: 1341 mAh 
Full Charge Capacity: 1353 mAh 
Run time to empty: Battery is not being discharged 
Average time to empty: 161 min 
Average Time to full: Battery is not being charged 
Cycle Count: 2 
Max Error: 0 % 
Remaining Capacity Alarm: 0 mAh 
Remaining Time Alarm: 0 Min


BBU Design Info for Adapter: 0

Date of Manufacture: 06/02, 2011 
Design Capacity: 1530 mAh 
Design Voltage: 4100 mV 
Specification Info: 0 
Serial Number: 2080 
Pack Stat Configuration: 0x0000 
Manufacture Name: LS36681 
Device Name: bq27541 
Device Chemistry: LPMR 
Battery FRU: N/A


BBU Properties for Adapter: 0

Auto Learn Period: 2592000 Sec 
Next Learn time: 384645185 Sec 
Learn Delay Interval:0 Hours 
Auto-Learn Mode: Enabled

Exit Code: 0x00

This is from a V2 that has had its battery replaced. The first thing to highlight is the battery type:

BatteryType: iBBU08

Earlier batteries were the 07 model and were perhaps less likely to last the full 2 years before preventative maintenance was due. I’d be extra vigilant if you have the 07 model. It will show as iBBU on a storage cell and unknown on a compute node.

Next up is the temperature:

Temperature: 50 C

You really want to ensure this is under 55C; otherwise there is something wrong either with the environment (use ipmitool to check the ambient temperature) or with the battery itself overheating.

You can tell if your battery is charging with either the:

Charging Status : None

or the

Average Time to full: Battery is not being charged

Either output would show charging and a time to full if it were charging. One possible reason for a low battery charge is a learn cycle.

Charge capacity determines whether the writeback or writethrough mode is in use:

Full Charge Capacity: 1353 mAh

This is a relatively new battery and has a good amount of charge.

As the Full Charge Capacity starts approaching 700 mAh, you may want to take proactive action and schedule a battery replacement.
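A hedged sketch of such a proactive check, pulling Full Charge Capacity out of the MegaCli output. A pasted sample line stands in here for the live `MegaCli64 -AdpBbuCmd -a0` call.

```shell
# Flag the battery once Full Charge Capacity drops towards 700 mAh.
sample='Full Charge Capacity: 1353 mAh'
cap=$(echo "$sample" | awk -F'[: ]+' '{print $4}')
if [ "$cap" -lt 700 ]; then
  echo "battery at ${cap} mAh: schedule a replacement"
else
  echo "battery at ${cap} mAh: OK"
fi
```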

You also want to ensure the Max Error figure is low:

Max Error: 0 %

The last thing I’m going to highlight is the battery chemistry; on this V2 it displays:

Device Chemistry: LPMR

While on an X2-2 it displays:

Device Chemistry: LION

Both display the bq27541 device used for determining the charge level. Apart from this line, there appears to be little difference between the output on a V2 and an X2.

Just to re-emphasise: keep an eye on your batteries and make sure your MegaRaid controller is in writeback!

Kfed and Exadata ASM disks

I’ve written in the past on the usefulness of kfed. As Martin Berger requested seeing some output from kfed with Exadata disks, I thought I would oblige.

So What is kfed?

kfed is the so-called Kernel Files Editor; Miladin Modrakovic has written quite nicely about this. It can be used to read and modify ASM disk headers. The ASM Support guy, Bane Radulović, also has a nice write-up.

Before you can run kfed you need a disk to point it to, and this is where it gets interesting on Exadata compared to traditional, say Fibre Channel SAN-attached, storage. There you typically have devices like /dev/sdX that are your LUNs from the SAN.

Exadata is Different

I think we all understand that things are a little different on Exadata, and the way disks are presented is certainly unusual. Exadata uses a network protocol called iDB to communicate with the Storage Servers.

This is one of the first things that really surprised me in working with Exadata. I was so used to running iostat on a database server to see how busy the storage was. Well you can forget doing that on Exadata, as it will only show you the local compute node disks – not really where the action is!

kfod Disk Discovery

So to run kfed, we need to find something to run it against, and here kfod (Kernel Files Oracle Storage Manager Discovery Tool) is your friend:

db01: oracle$ kfod di=all 
-------------------------------------------------------------------------------- 
 Disk          Size Path                                     User     Group 
================================================================================ 
   1:    1501184 Mb o/192.168.10.3/DATA01_CD_00_cel01 <unknown> <unknown> 
   2:    1501184 Mb o/192.168.10.3/DATA01_CD_01_cel01 <unknown> <unknown> 
   3:    1501184 Mb o/192.168.10.3/DATA01_CD_02_cel01 <unknown> <unknown> 
   4:    1501184 Mb o/192.168.10.3/DATA01_CD_03_cel01 <unknown> <unknown> 
. 
. 
. 
-------------------------------------------------------------------------------- 
ORACLE_SID ORACLE_HOME 
================================================================================ 
     +ASM1 /u01/app/ora/product/11.2.0.2/grid_1 
     +ASM2 /u01/app/ora/product/11.2.0.2/grid_1

The output above has been edited to prevent tedium. kfod is basically scanning for valid devices and printing the size and path of each one. So now you can feed something like o/192.168.10.3/DATA01_CD_03_cel01 into kfed:
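Since kfod gives you the discovery paths, a small loop can sweep every disk through kfed. The path extraction below runs against a pasted sample line; the commented loop shows how you might use it on a db node (the kfed call itself needs the Grid Home environment).

```shell
# Pull the o/<cell-ip>/<griddisk> paths out of kfod output.
kfod_sample='   1:    1501184 Mb o/192.168.10.3/DATA01_CD_00_cel01 <unknown> <unknown>'
paths=$(echo "$kfod_sample" | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^o\//) print $i }')
echo "$paths"
# On a db node, feed the real kfod output through the same awk, then:
# for d in $paths; do kfed read "$d" | head -5; done
```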

db01: oracle$ kfed read o/192.168.10.3/DATA01_CD_03_cel01 
kfbh.endian:                          1 ; 0x000: 0x01 
kfbh.hard:                          130 ; 0x001: 0x82 
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD 
kfbh.datfmt:                          1 ; 0x003: 0x01 
kfbh.block.blk:                       0 ; 0x004: T=0 NUMB=0x0 
kfbh.block.obj:              2147483651 ; 0x008: TYPE=0x8 NUMB=0x3 
kfbh.check:                   828339576 ; 0x00c: 0x315f7578 
kfbh.fcn.base:                        0 ; 0x010: 0x00000000 
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000 
kfbh.spare1:                          0 ; 0x018: 0x00000000 
kfbh.spare2:                          0 ; 0x01c: 0x00000000 
kfdhdb.driver.provstr:         ORCLDISK ; 0x000: length=8 
kfdhdb.driver.reserved[0]:            0 ; 0x008: 0x00000000 
kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000 
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000 
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000 
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000 
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000 
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000 
kfdhdb.dsknum:                        3 ; 0x024: 0x0003 
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL 
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER 
kfdhdb.dskname:      DATA01_CD_03_CEL01 ; 0x028: length=18 
kfdhdb.grpname:                  DATA01 ; 0x048: length=6 
kfdhdb.fgname:                    CEL01 ; 0x068: length=4 
kfdhdb.capname:                         ; 0x088: length=0 
kfdhdb.crestmp.hi:             32965963 ; 0x0a8: HOUR=0xb DAYS=0xa MNTH=0x1 YEAR=0x7dc 
kfdhdb.crestmp.lo:            180889600 ; 0x0ac: USEC=0x0 MSEC=0x20a SECS=0x2c MINS=0x2 
kfdhdb.mntstmp.hi:             32967149 ; 0x0b0: HOUR=0xd DAYS=0xf MNTH=0x2 YEAR=0x7dc 
kfdhdb.mntstmp.lo:           2947617792 ; 0x0b4: USEC=0x0 MSEC=0x45 SECS=0x3b MINS=0x2b 
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200 
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000 
kfdhdb.ausize:                  4194304 ; 0x0bc: 0x00400000 
kfdhdb.mfact:                    454272 ; 0x0c0: 0x0006ee80 
kfdhdb.dsksize:                  375296 ; 0x0c4: 0x0005ba00 
kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002 
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001 
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002 
kfdhdb.f1b1locn:                      0 ; 0x0d4: 0x00000000 
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000 
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000 
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000 
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000 
kfdhdb.dbcompat:              186646528 ; 0x0e0: 0x0b200000 
kfdhdb.grpstmp.hi:             32965963 ; 0x0e4: HOUR=0xb DAYS=0xa MNTH=0x1 YEAR=0x7dc 
kfdhdb.grpstmp.lo:            177854464 ; 0x0e8: USEC=0x0 MSEC=0x276 SECS=0x29 MINS=0x2 
kfdhdb.vfstart:                       0 ; 0x0ec: 0x00000000 
kfdhdb.vfend:                         0 ; 0x0f0: 0x00000000 
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000 
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000 
kfdhdb.ub4spare[0]:                   0 ; 0x0fc: 0x00000000 
kfdhdb.ub4spare[1]:                   0 ; 0x100: 0x00000000 
kfdhdb.ub4spare[2]:                   0 ; 0x104: 0x00000000 
kfdhdb.ub4spare[3]:                   0 ; 0x108: 0x00000000 
kfdhdb.ub4spare[4]:                   0 ; 0x10c: 0x00000000 
kfdhdb.ub4spare[5]:                   0 ; 0x110: 0x00000000 
kfdhdb.ub4spare[6]:                   0 ; 0x114: 0x00000000 
kfdhdb.ub4spare[7]:                   0 ; 0x118: 0x00000000 
kfdhdb.ub4spare[8]:                   0 ; 0x11c: 0x00000000 
kfdhdb.ub4spare[9]:                   0 ; 0x120: 0x00000000 
kfdhdb.ub4spare[10]:                  0 ; 0x124: 0x00000000 
kfdhdb.ub4spare[11]:                  0 ; 0x128: 0x00000000 
kfdhdb.ub4spare[12]:                  0 ; 0x12c: 0x00000000 
kfdhdb.ub4spare[13]:                  0 ; 0x130: 0x00000000 
kfdhdb.ub4spare[14]:                  0 ; 0x134: 0x00000000 
kfdhdb.ub4spare[15]:                  0 ; 0x138: 0x00000000 
kfdhdb.ub4spare[16]:                  0 ; 0x13c: 0x00000000 
kfdhdb.ub4spare[17]:                  0 ; 0x140: 0x00000000 
kfdhdb.ub4spare[18]:                  0 ; 0x144: 0x00000000 
kfdhdb.ub4spare[19]:                  0 ; 0x148: 0x00000000 
kfdhdb.ub4spare[20]:                  0 ; 0x14c: 0x00000000 
kfdhdb.ub4spare[21]:                  0 ; 0x150: 0x00000000 
kfdhdb.ub4spare[22]:                  0 ; 0x154: 0x00000000 
kfdhdb.ub4spare[23]:                  0 ; 0x158: 0x00000000 
kfdhdb.ub4spare[24]:                  0 ; 0x15c: 0x00000000 
kfdhdb.ub4spare[25]:                  0 ; 0x160: 0x00000000 
kfdhdb.ub4spare[26]:                  0 ; 0x164: 0x00000000 
kfdhdb.ub4spare[27]:                  0 ; 0x168: 0x00000000 
kfdhdb.ub4spare[28]:                  0 ; 0x16c: 0x00000000 
kfdhdb.ub4spare[29]:                  0 ; 0x170: 0x00000000 
kfdhdb.ub4spare[30]:                  0 ; 0x174: 0x00000000 
kfdhdb.ub4spare[31]:                  0 ; 0x178: 0x00000000 
kfdhdb.ub4spare[32]:                  0 ; 0x17c: 0x00000000 
kfdhdb.ub4spare[33]:                  0 ; 0x180: 0x00000000 
kfdhdb.ub4spare[34]:                  0 ; 0x184: 0x00000000 
kfdhdb.ub4spare[35]:                  0 ; 0x188: 0x00000000 
kfdhdb.ub4spare[36]:                  0 ; 0x18c: 0x00000000 
kfdhdb.ub4spare[37]:                  0 ; 0x190: 0x00000000 
kfdhdb.ub4spare[38]:                  0 ; 0x194: 0x00000000 
kfdhdb.ub4spare[39]:                  0 ; 0x198: 0x00000000 
kfdhdb.ub4spare[40]:                  0 ; 0x19c: 0x00000000 
kfdhdb.ub4spare[41]:                  0 ; 0x1a0: 0x00000000 
kfdhdb.ub4spare[42]:                  0 ; 0x1a4: 0x00000000 
kfdhdb.ub4spare[43]:                  0 ; 0x1a8: 0x00000000 
kfdhdb.ub4spare[44]:                  0 ; 0x1ac: 0x00000000 
kfdhdb.ub4spare[45]:                  0 ; 0x1b0: 0x00000000 
kfdhdb.ub4spare[46]:                  0 ; 0x1b4: 0x00000000 
kfdhdb.ub4spare[47]:                  0 ; 0x1b8: 0x00000000 
kfdhdb.ub4spare[48]:                  0 ; 0x1bc: 0x00000000 
kfdhdb.ub4spare[49]:                  0 ; 0x1c0: 0x00000000 
kfdhdb.ub4spare[50]:                  0 ; 0x1c4: 0x00000000 
kfdhdb.ub4spare[51]:                  0 ; 0x1c8: 0x00000000 
kfdhdb.ub4spare[52]:                  0 ; 0x1cc: 0x00000000 
kfdhdb.ub4spare[53]:                  0 ; 0x1d0: 0x00000000 
kfdhdb.acdb.aba.seq:                  0 ; 0x1d4: 0x00000000 
kfdhdb.acdb.aba.blk:                  0 ; 0x1d8: 0x00000000 
kfdhdb.acdb.ents:                     0 ; 0x1dc: 0x0000 
kfdhdb.acdb.ub2spare:                 0 ; 0x1de: 0x0000

Lots of spares! It’s worth noting that kfed can also be used for editing your header if it gets corrupted.
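One nice cross-check in the header: kfdhdb.dsksize is expressed in allocation units, so multiplying it by the 4MB ausize should give back the 1501184 Mb that kfod reported for this disk.

```shell
# dsksize (in AUs) times the AU size in MB reproduces the kfod-reported size.
dsksize=375296                       # kfdhdb.dsksize from the header above
au_mb=$(( 4194304 / 1024 / 1024 ))   # kfdhdb.ausize in bytes -> 4 MB
echo "$(( dsksize * au_mb )) Mb"     # 1501184 Mb
```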

Exadata ASM Disk Headers

This post is more for academic interest, but I have had a bit of a look at ASM disk headers on Exadata, under various conditions.

First up we see the disks of a newly racked Exadata that has not been configured, apart from some networking. None of the disks on the cell have been touched:

[root@cel01] od -c /dev/sdd --read-bytes 1056300 
0000000  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
4017040  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
4017054

So it’s pretty much a blank slate to begin with.

Next I looked at the header after the celldisk and griddisks had been created:

[root@cel01 ~]# od -c /dev/sdd |head -100 
0000000                   o   r   a   c   l   e       s   a   g   e   d 
0000020   i   s   k  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
0000040  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
0000100   .   !   o   R  \0  \0  \0 006  \0 002  \0  \0 004  \0  \0  \0 
0000120 001  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
0000140  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
0001000   *   c   d   I   n   f   o   * 030   N 334 230   9   ) 230 210 
0001020 271   3 246 347   ^ 367   m 241 354 177   T 334  \0  \0  \0  \0 
0001040 005  \0  \0  \0 006  \0  \0  \0  \0 200  \0  \0  \0 200  \0  \0 
0001060  \0 200 266 350  \0  \0  \0  \0 005  \0  \0  \0  \0  \0  \0  \0 
0001100 001  \0  \0  \0 005  \0  \0  \0 342 177  \0  \0  \0  \0  \0  \0 
0001120 233 001  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
0001140  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
0002000   C   D   _   0   3   _   c   e   l   0   1  \0  \0  \0  \0  \0
0002020  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
0002040  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
0005000   S   M   s   c   t   r  \a  \0 345 256   u   c 001  \0  \0  \0 
0005020  \0  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0 002  \0  \0  \0 
0005040  \0 005  \0  \0 001  \0  \0  \0  \0  \0  \0  \0 003  \0  \0  \0 
0005060 001  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0 005  \0  \0  \0 
0005100   & 312 001  \0   G  \a  \0  \0  \0  \0  \0  \0 006  \0  \0  \0 
0005120 002  \0  \0  \0 376 004  \0  \0  \0  \0  \0  \0 006  \0  \0  \0 
0005140 001 005  \0  \0 202   i 001  \0 376 004  \0  \0  \a  \0  \0  \0 
0005160 203   n 001  \0 243   [  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
0005200  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77742000   G   D   t   a   b   l   e   ! 355   I 211   !   ' 314 334   u 
77742020 211   h 305 310 330 221   q 224 243   [  \0  \0 001  \0  \0  \0 
77742040  \a  \0  \0  \0 314 034 355 022  \0  \0  \0  \0  \0  \0  \0  \0 
77742060  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
77742100  \0  \0  \0  \0  \0  \0  \0  \0 222 001  \0  \0  \0  \0  \0  \0 
77742120  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77743000   R   E   C   O   0   1   _   C   D   _   0   3   _   c   e   l 
77743020   0   1  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
77743040  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77747000   G   D   t   a   b   l   e   !   Z   A   D 332   r 312 370 003 
77747020 351 027 270   E   p 034 215 227 200   n 001  \0 002  \0  \0  \0 
77747040 006  \0  \0  \0 321 321 223 037  \0  \0  \0  \0  \0  \0  \0  \0 
77747060  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
77747100  \0  \0  \0  \0  \0  \0  \0  \0 222 001  \0  \0  \0  \0  \0  \0 
77747120  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77750000   D   A   T   A   0   1   _   C   D   _   0   3   _   c   e   l 
77750020   0   1  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
77750040  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77754000   G   D   t   a   b   l   e   ! 363   O   K 231 317   ,   ] 251 
77754020 370 375 310 273 277 263 322 224   G  \a  \0  \0 001  \0  \0  \0 
77754040 005  \0  \0  \0 201   "   z   R  \0  \0  \0  \0  \0  \0  \0  \0 
77754060  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
77754100  \0  \0  \0  \0  \0  \0  \0  \0 222 001  \0  \0  \0  \0  \0  \0 
77754120  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77755000   D   B   F   S   _   D   G   _   C   D   _   0   3   _   c   e 
77755020   l   0   1  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
77755040  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77761000   G   D   t   a   b   l   e   !  \0  \0  \0  \0  \0  \0  \0  \0 
77761020  \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0 001  \0  \0  \0 
77761040 001  \0  \0  \0   y   l 001   R  \0  \0  \0  \0  \0  \0  \0  \0 
77761060  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
77761100  \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0 
77761120  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77762000   P   R   I   M   A   R   Y   _   M   D  \0  \0  \0  \0  \0  \0 
77762020  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77766000   G   D   t   a   b   l   e   !  \0  \0  \0  \0  \0  \0  \0  \0 
77766020  \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0 001  \0  \0  \0 
77766040 002  \0  \0  \0   b   v   ^ 031  \0  \0  \0  \0  \0  \0  \0  \0 
77766060  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
77766100  \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0 
77766120  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77767000   S   E   C   O   N   D   A   R   Y   _   M   D  \0  \0  \0  \0 
77767020  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77773000   G   D   t   a   b   l   e   !  \0  \0  \0  \0  \0  \0  \0  \0 
77773020  \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0 001  \0  \0  \0 
77773040 003  \0  \0  \0   l   y   T   N  \0  \0  \0  \0  \0  \0  \0  \0 
77773060  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
77773100  \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0 
77773120  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77774000   U   N   D   O   A   R   E   A   _   M   D  \0  \0  \0  \0  \0 
77774020  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
100000000                   o   r   a   c   l   e       s   a   g   e   d 
100000020   i   s   k  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
100000040  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
100000100   .   !   o   R  \0  \0  \0 006  \0 002  \0  \0 004  \0  \0  \0 
100000120 001  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
100000140  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0

First up you can see that this is labelled as an “oracle sagedisk”. Note the code name SAGE: “Storage Appliance for Grid Environments”.

Next you can see the label for this particular celldisk: “C D _ 0 3 _ c e l 0 1”.

Curious, at least to me, is the next part, “S M s c t r”. Something about SMart SCan?

Finally you can see the griddisks created on this celldisk: DATA01, RECO01 and DBFS_DG.
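Since the griddisk names are stored as plain ASCII in the header, you can pull them out without wading through od, by splitting on the NUL padding. A fabricated stand-in file is used below; on a cell you would read from the device itself (e.g. /dev/sdd) instead.

```shell
# Extract griddisk-style labels from a header image by splitting on NULs.
# The printf fabricates a miniature header for demonstration purposes only.
printf 'oracle sagedisk\0DATA01_CD_03_cel01\0RECO01_CD_03_cel01\0DBFS_DG_CD_03_cel01\0' > /tmp/hdr_demo
names=$(tr '\0' '\n' < /tmp/hdr_demo | grep '_CD_')
echo "$names"
```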

The last header dump I have is of a dropped celldisk:

[root@cel01 ~]# od -c /dev/sdd |head -100 
0000000   d   r   o   p   p   e   d       c   e   l   l   d   i   s   k 
0000020  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
0000100 026 031 024   Q  \0  \0  \0 006  \0 002  \0  \0 004  \0  \0  \0 
0000120 001  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
0000140  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
0001000   *   c   d   I   n   f   o   * 201   K  \v 036 362 326 340   G 
0001020 024 245   l 022 031   . 322 212   N 312 216   K  \0  \0  \0  \0 
0001040  \t  \0  \0  \0 003  \0  \0  \0  \0 200  \0  \0  \0 200  \0  \0 
0001060  \0 200 266 350  \0  \0  \0  \0 005  \0  \0  \0  \0  \0  \0  \0 
0001100 001  \0  \0  \0 005  \0  \0  \0 361 177  \0  \0  \0  \0  \0  \0 
0001120 233 001  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
0001140  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
0002000   C   D   _   0   3   _   c   e   l   0   1  \0  \0  \0  \0  \0 
0002020  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
0002040  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
0005000   S   M   s   c   t   r 003  \0   '   :   p   c 001  \0  \0  \0 
0005020  \0  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0 002  \0  \0  \0 
0005040  \0 005  \0  \0 001  \0  \0  \0  \0  \0  \0  \0 003  \0  \0  \0 
0005060 001  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
0005100  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77761000   G   D   t   a   b   l   e   !  \0  \0  \0  \0  \0  \0  \0  \0 
77761020  \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0 001  \0  \0  \0 
77761040 001  \0  \0  \0   y   l 001   R  \0  \0  \0  \0  \0  \0  \0  \0 
77761060  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
77761100  \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0 
77761120  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77762000   P   R   I   M   A   R   Y   _   M   D  \0  \0  \0  \0  \0  \0 
77762020  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77766000   G   D   t   a   b   l   e   !  \0  \0  \0  \0  \0  \0  \0  \0 
77766020  \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0 001  \0  \0  \0 
77766040 002  \0  \0  \0   b   v   ^ 031  \0  \0  \0  \0  \0  \0  \0  \0 
77766060  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
77766100  \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0 
77766120  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77767000   S   E   C   O   N   D   A   R   Y   _   M   D  \0  \0  \0  \0 
77767020  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77773000   G   D   t   a   b   l   e   !  \0  \0  \0  \0  \0  \0  \0  \0 
77773020  \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0 001  \0  \0  \0 
77773040 003  \0  \0  \0   l   y   T   N  \0  \0  \0  \0  \0  \0  \0  \0 
77773060  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
77773100  \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0 
77773120  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
77774000   U   N   D   O   A   R   E   A   _   M   D  \0  \0  \0  \0  \0 
77774020  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
100000000                   o   r   a   c   l   e       s   a   g   e   d 
100000020   i   s   k  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
100000040  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
100000100   .   !   o   R  \0  \0  \0 006  \0 002  \0  \0 004  \0  \0  \0 
100000120 001  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
100000140  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
100001000   *   c   d   I   n   f   o   * 201   K  \v 036 362 326 340   G 
100001020 024 245   l 022 031   . 322 212   O 312 216   K  \0  \0  \0  \0 
100001040  \b  \0  \0  \0 003  \0  \0  \0  \0 200  \0  \0  \0 200  \0  \0 
100001060  \0 200 266 350  \0  \0  \0  \0 005  \0  \0  \0  \0  \0  \0  \0 
100001100 001  \0  \0  \0 005  \0  \0  \0 361 177  \0  \0  \0  \0  \0  \0 
100001120 233 001  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
100001140  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
100002000   C   D   _   0   3   _   c   e   l   0   1  \0  \0  \0  \0  \0 
100002020  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
100002040  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
100005000   S   M   s   c   t   r 003  \0   '   :   p   c 001  \0  \0  \0 
100005020  \0  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0 002  \0  \0  \0 
100005040  \0 005  \0  \0 001  \0  \0  \0  \0  \0  \0  \0 003  \0  \0  \0 
100005060 001  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
100005100  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
177742000   G   D   t   a   b   l   e   ! 274   A   F 274 206 211 030   % 
177742020 347 035 221 252   R   : 032 207 243   [  \0  \0 001  \0  \0  \0 
177742040  \a  \0  \0  \0 330 217 331 256  \0  \0  \0  \0  \0  \0  \0  \0 
177742060  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
177742100  \0  \0  \0  \0  \0  \0  \0  \0 222 001  \0  \0  \0  \0  \0  \0 
177742120  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
177743000   R   E   C   O   0   1   _   C   D   _   0   3   _   c   e   l 
177743020   0   1  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
177743040  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
177747000   G   D   t   a   b   l   e   ! 027   I 320 224 210   T 033 210 
177747020   5   \ 353   g 035   o 371 253 200   n 001  \0 002  \0  \0  \0 
177747040 006  \0  \0  \0 327 177 303 304  \0  \0  \0  \0  \0  \0  \0  \0 
177747060  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
177747100  \0  \0  \0  \0  \0  \0  \0  \0 222 001  \0  \0  \0  \0  \0  \0 
177747120  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0 
* 
177750000   D   A   T   A   0   1   _   C   D   _   0   3   _   c   e   l

You can see straight away that the disk gets marked with “dropped celldisk” (spaced out one character per byte in the od output), but after a simple drop the rest of the information on the drive is still in place. If you really want to clean your disks, you can try the following:

CellCLI> DROP CELLDISK ALL ERASE=1pass

This should clean up your disks quite nicely. It goes without saying: don’t try this in production!
