Exadata Installation: Creating Grid Disks

During a recent Exadata install for an e-dba customer, I had the chance to examine the griddisk creation process closely. I must say it threw up one surprise, which you’ll see later.

Installing an Exadata
Once a new Exadata has been racked by Oracle, it comes time to configure it. The first stage is applying the network configurations: nameservers, ntp servers and the various IP addresses and hostnames associated with each node.

Once the networking part is done it is time to run the OneCommand process, at the end of which you will have an Oracle RAC cluster with the RDBMS installed and various other configurations done. The OneCommand process utilises the following script from the /opt/oracle.SupportTools/onecommand directory:

[root@db01 onecommand]# ./deploy112.sh -l 
INFO: Logging all actions in /opt/oracle.SupportTools/onecommand/tmp/db01-20120131110424.log and traces in /opt/oracle.SupportTools/onecommand/tmp/db01-20120131110424.trc

INFO: Loading configuration file /opt/oracle.SupportTools/onecommand/onecommand.params... 
The steps in order are... 
Step  0 = ValidateEnv 
Step  1 = CreateWorkDir 
Step  2 = UnzipFiles 
Step  3 = setupSSHroot 
Step  4 = UpdateEtcHosts 
Step  5 = CreateCellipinitora 
Step  6 = ValidateIB 
Step  7 = ValidateCell 
Step  8 = PingRdsCheck 
Step  9 = RunCalibrate 
Step 10 = CreateUsers 
Step 11 = SetupSSHusers 
Step 12 = CreateGridDisks 
Step 13 = GridSwInstall 
Step 14 = PatchGridHome 
Step 15 = RelinkRDSGI 
Step 16 = GridRootScripts 
Step 17 = DbSwInstall 
Step 18 = PatchDBHomes 
Step 19 = CreateASMDiskgroups 
Step 20 = DbcaDB 
Step 21 = DoUnlock 
Step 22 = RelinkRDSDb 
Step 23 = LockUpGI 
Step 24 = SetupCellEmailAlerts 
Step 25 = ApplySecurityFixes 
Step 26 = ResecureMachine

This list does change between versions; this machine was an X2-2. You can then run each step one at a time, or run them all at once. I think this is a really nice way of installing your Exadata, and it really guides you through the process of getting Clusterware and the RDBMS installed.
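On the version I worked with, deploy112.sh could be driven a step at a time as well as end to end. The exact flags vary between OneCommand drops, so treat the `-s` (single step) option below as an assumption and check the script's own usage output; the sketch is a dry run that only prints the commands it would issue:

```shell
# Hypothetical driver: run OneCommand one step at a time, stopping on the
# first failure. The -s flag is an assumption based on the version I used;
# verify against your own deploy112.sh. RUN=echo makes this a dry run that
# prints each invocation instead of executing it.
RUN=echo
for step in $(seq 0 26); do
  $RUN ./deploy112.sh -s "$step" || break
done
```

Dropping `RUN=echo` (or setting `RUN=` to empty) would execute the steps for real, one at a time, which is handy when a step fails and you want to resume from it rather than start over.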

The step I’m particularly interested in here though, is Step 12: CreateGridDisks.

Creating Grid Disks

So I thought I would walk through the process of how OneCommand creates the Grid Disks for use with ASM.

The first step restarts all cell services using dcli:

/usr/local/bin/dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root "cellcli -e alter cell restart services all"
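The cell_group file that dcli's `-g` option consumes is nothing exotic: it is a plain text file with one cell hostname per line, and dcli fans the quoted command out to every host listed. A minimal sketch, using placeholder cell names for a quarter rack:

```shell
# Build a stand-in cell_group file (cel01..cel03 are hypothetical hostnames;
# on a real machine the file lives under /opt/oracle.SupportTools/onecommand).
cat > /tmp/cell_group <<'EOF'
cel01
cel02
cel03
EOF

# dcli would then run the quoted command on every host in the file, e.g.:
# /usr/local/bin/dcli -g /tmp/cell_group -l root "cellcli -e list cell"
wc -l < /tmp/cell_group
```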

This makes sure everything is in a good state. We then list the physical disks:

/usr/local/bin/dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root "cellcli -e list physicaldisk;"

Now we clear down the drives:

/usr/local/bin/dcli -c  -l root "date;cellcli -e drop flashcache ;"

/usr/local/bin/dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root "date; cellcli -e drop celldisk ALL HARDDISK force; cellcli -e drop celldisk ALL FLASHDISK force;"

This removes both the flashcache and the celldisks, and consequently any griddisks built on them.

Now the names get set for the cells:

/usr/bin/ssh cel01 "cellcli -e alter cell name=cel01" </dev/null 

This is run for each cell in turn. Next a list cell is run for each cell to ensure they are online:
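The per-cell naming pass above can be sketched as a simple loop, one ssh per cell, each cell named after its own hostname. The cell names are placeholders, and `RUN=echo` turns it into a dry run that just prints the commands:

```shell
# Sketch of the per-cell naming pass: one ssh per cell, each cell named
# after its own hostname (cel01..cel03 are hypothetical). With RUN=echo the
# loop prints the commands instead of executing them; drop it to run for real.
RUN=echo
for cell in cel01 cel02 cel03; do
  $RUN /usr/bin/ssh "$cell" "cellcli -e alter cell name=${cell}" </dev/null
done
```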

 /usr/local/bin/dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root "date;cellcli -e list cell "

Finally it is time to create some celldisks:

/usr/local/bin/dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root "cellcli -e create celldisk all "

This creates celldisks on both the hard disks and the flash disks. Now it is time to create the flashcache:

/usr/local/bin/dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root "cellcli -e create flashcache all "

Now create some griddisks:

/usr/local/bin/dcli -c cel01,cel02,cel03 -l root "cellcli -e create griddisk ALL HARDDISK  prefix=testgd,size=1832.546875G"

For me this is the strangest part of the whole process. A griddisk is created covering nearly the entire size of each hard disk, with the prefix testgd. Then a separate script is called to create the DBFS_DG griddisks. This does not utilise dcli, but uses ssh to each cell, with a separate ssh command for each griddisk:

/usr/bin/ssh -l root cel01 "cellcli -e create griddisk DBFS_DG_CD_02_cel01 celldisk=CD_02_cel01,size=29.109375G"
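That one-ssh-per-griddisk pass can be sketched as a dry-run loop. The range CD_02 to CD_11 reflects the fact that (as a commenter notes below) the DBFS_DG griddisks are only created on celldisks 02 through 11, since the first two disks also carry the OS partitions; the cell names are placeholders:

```shell
# Dry-run sketch of the DBFS_DG pass: one ssh per griddisk, over celldisks
# CD_02..CD_11 only (CD_00/CD_01 also hold OS partitions, so they are
# skipped). Commands are printed, not executed; cel01..cel03 are hypothetical.
for cell in cel01 cel02 cel03; do
  for n in $(seq -w 2 11); do     # seq -w zero-pads: 02 03 ... 11
    echo /usr/bin/ssh -l root "$cell" \
      "\"cellcli -e create griddisk DBFS_DG_CD_${n}_${cell} celldisk=CD_${n}_${cell},size=29.109375G\""
  done
done
```

That is 10 griddisks per cell, 30 ssh invocations on a quarter rack, which explains why this step takes noticeably longer than the dcli-driven ones.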

Now we get rid of the earlier created testgd griddisks:

/usr/local/bin/dcli -c cel01,cel02,cel03 -l root "cellcli -e drop griddisk ALL prefix=testgd force"

Now, I was puzzled by this for some time. Why would you create a whole load of griddisks and then just drop them again? One explanation is that this ensures the DBFS_DG griddisks are created on the slowest (innermost) portion of the hard disk.

Another list of griddisks is done:

/usr/local/bin/dcli -c cel01,cel02,cel03 -l root "cellcli -e list griddisk"

This returns only the DBFS_DG griddisks.

Now we finally create our DATA01 and RECO01 griddisks. Again a separate script is used, and again it uses ssh as opposed to dcli!

/usr/bin/ssh -l root cel01 'cellcli -e create griddisk ALL HARDDISK prefix=DATA01, size=1466G'
/usr/bin/ssh -l root cel02 'cellcli -e create griddisk ALL HARDDISK prefix=DATA01, size=1466G'
/usr/bin/ssh -l root cel03 'cellcli -e create griddisk ALL HARDDISK prefix=DATA01, size=1466G'
/usr/bin/ssh -l root cel01 'cellcli -e create griddisk ALL HARDDISK prefix=RECO01' 
/usr/bin/ssh -l root cel02 'cellcli -e create griddisk ALL HARDDISK prefix=RECO01' 
/usr/bin/ssh -l root cel03 'cellcli -e create griddisk ALL HARDDISK prefix=RECO01' 

Note that the RECO01 creation simply uses up the remaining space on the celldisks.
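Plugging in the sizes from the commands above gives a back-of-envelope check of where the space goes. Assuming the testgd size of 1832.546875G really was the full usable celldisk (which is what its creation command suggests), a celldisk carrying a DBFS_DG griddisk leaves this much for RECO01:

```shell
# Space left for RECO01 on a CD_02..CD_11 celldisk, using the sizes from the
# commands above (assumption: 1832.546875G, the testgd size, is the whole
# usable celldisk). awk is used because plain shell arithmetic is integer-only.
reco=$(awk 'BEGIN { printf "%.6f", 1832.546875 - 1466 - 29.109375 }')
echo "RECO01 gets: ${reco} G"   # prints: RECO01 gets: 337.437500 G
```

On CD_00 and CD_01, which carry no DBFS_DG griddisk, RECO01 would instead get 1832.546875 - 1466 = 366.546875G, which is why the RECO01 griddisks do not all come out the same size.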

A list of the physical disks is run:

/usr/local/bin/dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root "cellcli -e list physicaldisk"

This is followed by a lun list and a celldisk list, topped off by listing the griddisks:

/usr/local/bin/dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root "cellcli -e list lun"
/usr/local/bin/dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root "cellcli -e list celldisk"
/usr/local/bin/dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root "cellcli -e list griddisk"

And that is the creation of Grid Disks step completed.


5 thoughts on “Exadata Installation: Creating Grid Disks”

  1. The method used to create the dbfs_dg griddisks can seem a bit convoluted, but when you think about it from the perspective of the team that wrote onecommand, it makes sense. They could hardcode the value for the offset and create the griddisks using the “create griddisk offset=…” option, but they would have to account for both sizes of disks offered by Oracle. Also, it would require a change in the code when Oracle decides to ship the HC Exadata racks with 3TB drives instead of 2TB. This lets them write one piece of code that will run successfully no matter the size of the disks.

    The script chooses to look at the celldisks that have been created, find the one with the least free space, and store that number. After that, they take that number and create griddisks of that size across all of the disks to level them out. From there, they could just use dcli to create the dbfs_dg griddisks, but I assume that they use individual SSH commands to ensure that the griddisks are only created on celldisks 02–11.

    Of course, all of this could have been avoided if they just created 2 internal 2.5″ disks for the operating system (like they do on the database appliance). Chalk it up to lessons learned.


  2. Hi Andy,

    Thanks for reading!

    So, I thought they could have calculated the offset! They already have to code the size parameter for the testgd, but yeah, obviously the guy coding it decided this way.

    Understood on the ssh for DBFS_DG, but using it for the DATA01 and RECO01, I do not get! I guess it keeps the same creation method.

    I agree on 2 internal 2.5″ disks!


