The Griddisk is connected to the Celldisk…

One of the songs my children like to hear is called "Dem Bones"; you are probably familiar with how it tells you which bones are connected to which.

When considering the hierarchy of abstractions within Exadata storage I am often very much reminded of this song. When presenting storage from an Exadata cell there are 4 layers to deal with before we have something that ASM knows how to operate on, that is, something that ASM can actually create a diskgroup with.

It is also worth pointing out that ASM runs on the compute nodes, but as an administrator you will be operating on the storage cells to create the disks that ASM can actually use.
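To be explicit about where each of the commands below is run: the V$ASM queries are issued from SQL*Plus on a compute node, while anything at the CellCLI> prompt is run on the storage cell itself. A minimal sketch of getting onto a cell (the compute node hostname and the celladmin account here are purely illustrative; your environment may use different names):

[oracle@ex01db01 ~]$ ssh celladmin@ex01cel01
[celladmin@ex01cel01 ~]$ cellcli
CellCLI> list cell detail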

We could work our way up from the actual physical drives to what ASM sees, but let's go in reverse, drilling down from ASM back to a "brown spinny thing".

All the commands I'm going to show here were run on the e-dba Proof of Concept Exadata X2-2 quarter rack. Let's first take a look at the diskgroups available to the RDBMS:

SQL> select group_number, name 
from v$asm_diskgroup;

GROUP_NUMBER NAME
------------ ------------------------------
	   1 DATA_EX01
	   2 DBFS_DG
	   3 RECO_EX01

So we see the 3 diskgroups.

Let us look at the individual ASM disks that make up these diskgroups:

SQL> select group_number, disk_number, name, path 
from v$asm_disk 
where name like '%CEL01' 
order by 1,2 asc;
GROUP_NUMBER DISK_NUMBER NAME				   PATH
------------ ----------- --------------------- ------------------------------------------
	   1	       0 DATA_EX01_CD_00_EX01CEL01	o/192.168.10.3/DATA_EX01_CD_00_ex01cel01
	   1	       1 DATA_EX01_CD_01_EX01CEL01	o/192.168.10.3/DATA_EX01_CD_01_ex01cel01
	   1	       2 DATA_EX01_CD_02_EX01CEL01	o/192.168.10.3/DATA_EX01_CD_02_ex01cel01
	   1	       3 DATA_EX01_CD_03_EX01CEL01	o/192.168.10.3/DATA_EX01_CD_03_ex01cel01
	   1	       4 DATA_EX01_CD_04_EX01CEL01	o/192.168.10.3/DATA_EX01_CD_04_ex01cel01
	   1	       5 DATA_EX01_CD_05_EX01CEL01	o/192.168.10.3/DATA_EX01_CD_05_ex01cel01
	   1	       6 DATA_EX01_CD_06_EX01CEL01	o/192.168.10.3/DATA_EX01_CD_06_ex01cel01
	   1	       7 DATA_EX01_CD_07_EX01CEL01	o/192.168.10.3/DATA_EX01_CD_07_ex01cel01
	   1	       8 DATA_EX01_CD_08_EX01CEL01	o/192.168.10.3/DATA_EX01_CD_08_ex01cel01
	   1	       9 DATA_EX01_CD_09_EX01CEL01	o/192.168.10.3/DATA_EX01_CD_09_ex01cel01
	   1	      10 DATA_EX01_CD_10_EX01CEL01	o/192.168.10.3/DATA_EX01_CD_10_ex01cel01
	   1	      11 DATA_EX01_CD_11_EX01CEL01	o/192.168.10.3/DATA_EX01_CD_11_ex01cel01
	   2	       0 DBFS_DG_CD_02_EX01CEL01	o/192.168.10.3/DBFS_DG_CD_02_ex01cel01
	   2	       1 DBFS_DG_CD_03_EX01CEL01	o/192.168.10.3/DBFS_DG_CD_03_ex01cel01
	   2	       2 DBFS_DG_CD_04_EX01CEL01	o/192.168.10.3/DBFS_DG_CD_04_ex01cel01
	   2	       3 DBFS_DG_CD_05_EX01CEL01	o/192.168.10.3/DBFS_DG_CD_05_ex01cel01
	   2	       4 DBFS_DG_CD_06_EX01CEL01	o/192.168.10.3/DBFS_DG_CD_06_ex01cel01
	   2	       5 DBFS_DG_CD_07_EX01CEL01	o/192.168.10.3/DBFS_DG_CD_07_ex01cel01
	   2	       6 DBFS_DG_CD_08_EX01CEL01	o/192.168.10.3/DBFS_DG_CD_08_ex01cel01
	   2	       7 DBFS_DG_CD_09_EX01CEL01	o/192.168.10.3/DBFS_DG_CD_09_ex01cel01
	   2	       8 DBFS_DG_CD_10_EX01CEL01	o/192.168.10.3/DBFS_DG_CD_10_ex01cel01
	   2	       9 DBFS_DG_CD_11_EX01CEL01	o/192.168.10.3/DBFS_DG_CD_11_ex01cel01
	   3	       0 RECO_EX01_CD_00_EX01CEL01	o/192.168.10.3/RECO_EX01_CD_00_ex01cel01
	   3	       1 RECO_EX01_CD_01_EX01CEL01	o/192.168.10.3/RECO_EX01_CD_01_ex01cel01
	   3	       2 RECO_EX01_CD_02_EX01CEL01	o/192.168.10.3/RECO_EX01_CD_02_ex01cel01
	   3	       3 RECO_EX01_CD_03_EX01CEL01	o/192.168.10.3/RECO_EX01_CD_03_ex01cel01
	   3	       4 RECO_EX01_CD_04_EX01CEL01	o/192.168.10.3/RECO_EX01_CD_04_ex01cel01
	   3	       5 RECO_EX01_CD_05_EX01CEL01	o/192.168.10.3/RECO_EX01_CD_05_ex01cel01
	   3	       6 RECO_EX01_CD_06_EX01CEL01	o/192.168.10.3/RECO_EX01_CD_06_ex01cel01
	   3	       7 RECO_EX01_CD_07_EX01CEL01	o/192.168.10.3/RECO_EX01_CD_07_ex01cel01
	   3	       8 RECO_EX01_CD_08_EX01CEL01	o/192.168.10.3/RECO_EX01_CD_08_ex01cel01
	   3	       9 RECO_EX01_CD_09_EX01CEL01	o/192.168.10.3/RECO_EX01_CD_09_ex01cel01
	   3	      10 RECO_EX01_CD_10_EX01CEL01	o/192.168.10.3/RECO_EX01_CD_10_ex01cel01
	   3	      11 RECO_EX01_CD_11_EX01CEL01	o/192.168.10.3/RECO_EX01_CD_11_ex01cel01

So we have limited the output to disks from just the one cell. We can see that there are disks here being used in the 3 different diskgroups. In the PATH field we see the IP address of the cell that the ASM disk resides on. The names of the form DATA_EX01_CD_00_EX01CEL01 are griddisks, and these are created and managed on the storage cell.
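As a purely illustrative query (not output captured from the system above), you can pick that IP address out of the path and count how many ASM disks each cell contributes to each diskgroup:

SQL> select group_number,
            substr(path, 3, instr(path, '/', 1, 2) - 3) as cell_ip,
            count(*) as disks
     from v$asm_disk
     group by group_number, substr(path, 3, instr(path, '/', 1, 2) - 3)
     order by 1, 2;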

I really like the naming convention: you can tell a lot about where an ASM disk came from just by looking at the name.

So we see the name of the cell: ex01cel01

We see which celldisk this griddisk is created upon: CD_00 (within the ex01cel01 cell)

And we see the diskgroup that this belongs to: DATA_EX01

Let's take a look at what the storage cell can tell us about the griddisks:

CellCLI> list griddisk
	 DATA_EX01_CD_00_ex01cel01	 active
	 DATA_EX01_CD_01_ex01cel01	 active
	 DATA_EX01_CD_02_ex01cel01	 active
	 DATA_EX01_CD_03_ex01cel01	 active
	 DATA_EX01_CD_04_ex01cel01	 active
	 DATA_EX01_CD_05_ex01cel01	 active
	 DATA_EX01_CD_06_ex01cel01	 active
	 DATA_EX01_CD_07_ex01cel01	 active
	 DATA_EX01_CD_08_ex01cel01	 active
	 DATA_EX01_CD_09_ex01cel01	 active
	 DATA_EX01_CD_10_ex01cel01	 active
	 DATA_EX01_CD_11_ex01cel01	 active
	 DBFS_DG_CD_02_ex01cel01  	 active
	 DBFS_DG_CD_03_ex01cel01  	 active
	 DBFS_DG_CD_04_ex01cel01  	 active
	 DBFS_DG_CD_05_ex01cel01  	 active
	 DBFS_DG_CD_06_ex01cel01  	 active
	 DBFS_DG_CD_07_ex01cel01  	 active
	 DBFS_DG_CD_08_ex01cel01  	 active
	 DBFS_DG_CD_09_ex01cel01  	 active
	 DBFS_DG_CD_10_ex01cel01  	 active
	 DBFS_DG_CD_11_ex01cel01  	 active
	 RECO_EX01_CD_00_ex01cel01	 active
	 RECO_EX01_CD_01_ex01cel01	 active
	 RECO_EX01_CD_02_ex01cel01	 active
	 RECO_EX01_CD_03_ex01cel01	 active
	 RECO_EX01_CD_04_ex01cel01	 active
	 RECO_EX01_CD_05_ex01cel01	 active
	 RECO_EX01_CD_06_ex01cel01	 active
	 RECO_EX01_CD_07_ex01cel01	 active
	 RECO_EX01_CD_08_ex01cel01	 active
	 RECO_EX01_CD_09_ex01cel01	 active
	 RECO_EX01_CD_10_ex01cel01	 active
	 RECO_EX01_CD_11_ex01cel01	 active

These match the names found in V$ASM_DISK. We can look in detail at an individual griddisk:

CellCLI> list griddisk where name='DATA_EX01_CD_00_ex01cel01' detail

	 name:              	 DATA_EX01_CD_00_ex01cel01
	 availableTo:       	 
	 cellDisk:          	 CD_00_ex01cel01
	 comment:           	 
	 creationTime:      	 2011-06-08T13:33:48+01:00
	 diskType:          	 HardDisk
	 errorCount:        	 0
	 id:                	 550975c0-9d2e-47dd-85cd-d2550d394ec9
	 offset:            	 32M
	 size:              	 423G
	 status:            	 active
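These griddisks were of course laid down when the machine was built, but just for illustration, creating a griddisk like this one on the cell would look something along the lines of the following (the size is simply taken from the detail output above):

CellCLI> create griddisk DATA_EX01_CD_00_ex01cel01 celldisk=CD_00_ex01cel01, size=423G

or, to create one griddisk with that prefix on every hard disk celldisk in a single command:

CellCLI> create griddisk all harddisk prefix=DATA_EX01, size=423G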

So the griddisk DATA_EX01_CD_00_ex01cel01 is created on the celldisk CD_00_ex01cel01 and we can check whether there are other griddisks on this celldisk:

CellCLI> list griddisk where celldisk='CD_00_ex01cel01'

	 DATA_EX01_CD_00_ex01cel01	 active
	 RECO_EX01_CD_00_ex01cel01	 active

So there are two griddisks created on this celldisk. That is an important thing to remember: you can create multiple griddisks on top of a single celldisk, and it is the griddisks that are presented to ASM.
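Pulling just the offset and size attributes shows how the two griddisks carve up the celldisk between them; something along these lines (an illustrative command rather than output captured above):

CellCLI> list griddisk attributes name, offset, size where celldisk='CD_00_ex01cel01'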

Let's look at this celldisk in more detail:

CellCLI> list celldisk where name='CD_00_ex01cel01' detail

	 name:              	 CD_00_ex01cel01
	 comment:           	 
	 creationTime:      	 2011-06-08T13:32:08+01:00
	 deviceName:        	 /dev/sda
	 devicePartition:   	 /dev/sda3
	 diskType:          	 HardDisk
	 errorCount:        	 0
	 freeSpace:         	 0
	 id:                	 2dd77a53-53f1-49b5-98a0-d86a19140dc0
	 interleaving:      	 none
	 lun:               	 0_0
	 raidLevel:         	 0
	 size:              	 528.734375G
	 status:            	 normal

So we see this celldisk is associated with the device /dev/sda, and it also has LUN 0_0 associated with it. Because this is a system disk the actual celldisk has been created on the partition /dev/sda3; celldisks on non-system disks use the entire device rather than a partition.
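A quick way to see which celldisks sit on a partition and which get the whole device is to list just those attributes, along these lines:

CellCLI> list celldisk attributes name, deviceName, devicePartition where diskType='HardDisk'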

Let's have a look at the LUNs we have:

CellCLI> list lun where diskType='HARDDISK' 

	 0_0 	 0_0 	 normal
	 0_1 	 0_1 	 normal
	 0_2 	 0_2 	 normal
	 0_3 	 0_3 	 normal
	 0_4 	 0_4 	 normal
	 0_5 	 0_5 	 normal
	 0_6 	 0_6 	 normal
	 0_7 	 0_7 	 normal
	 0_8 	 0_8 	 normal
	 0_9 	 0_9 	 normal
	 0_10	 0_10	 normal
	 0_11	 0_11	 normal

So here we are limiting the output to just the hard drives and ignoring the flash disks. We see we have 12 LUNs, which is a one-to-one mapping to the number of physical drives we have within the cell. Let us look at a LUN in more detail:

CellCLI> list lun 0_0 detail

	 name:              	 0_0
	 cellDisk:          	 CD_00_ex01cel01
	 deviceName:        	 /dev/sda
	 diskType:          	 HardDisk
	 id:                	 0_0
	 isSystemLun:       	 TRUE
	 lunAutoCreate:     	 FALSE
	 lunSize:           	 557.861328125G
	 lunUID:            	 0_0
	 physicalDrives:    	 20:0
	 raidLevel:         	 0
	 lunWriteCacheMode: 	 "WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU"
	 status:            	 normal

So focusing in on LUN 0_0 we see that there is a celldisk created upon it, and matching what we saw earlier it is celldisk CD_00_ex01cel01, created on device /dev/sda. We also see that this LUN is associated with the physical drive 20:0.
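If you want the LUN to celldisk to physical drive mapping for the whole cell in one go, rather than one detail listing at a time, something like the following does the trick:

CellCLI> list lun attributes name, cellDisk, physicalDrives, isSystemLun where diskType='HardDisk'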

So now we can drill on down to the actual physical drive:

CellCLI> list physicaldisk 20:0 detail

	 name:              	 20:0
	 deviceId:          	 19
	 diskType:          	 HardDisk
	 enclosureDeviceId: 	 20
	 errMediaCount:     	 0
	 errOtherCount:     	 0
	 foreignState:      	 false
	 luns:              	 0_0
	 makeModel:         	 "SEAGATE ST360057SSUN600G"
	 physicalFirmware:  	 0805
	 physicalInsertTime:	 2010-12-31T14:24:44+00:00
	 physicalInterface: 	 sas
	 physicalSerial:    	 E1P6N9
	 physicalSize:      	 558.9109999993816G
	 slotNumber:        	 0
	 status:            	 normal

So here we finally see some details of the hard drive itself, including the fact that it is a Seagate drive. We can also link it back to LUN 0_0.
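The same detail listing also makes for a quick health check across all the drives in a cell if you pull just the error counters, for example:

CellCLI> list physicaldisk attributes name, errMediaCount, errOtherCount, status where diskType='HardDisk'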

Finally, just for fun, we can even use the MegaCli command to obtain information about the drive:

[root@cel01 ~]# /opt/MegaRAID/MegaCli/MegaCli64 PDList -a0
                                     
Adapter #0

Enclosure Device ID: 20
Slot Number: 0
Device Id: 19
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 558.911 GB [0x45dd2fb0 Sectors]
Non Coerced Size: 558.411 GB [0x45cd2fb0 Sectors]
Coerced Size: 557.861 GB [0x45bb9000 Sectors]
Firmware state: Online, Spun Up
SAS Address(0): 0x5000c50028c59721
SAS Address(1): 0x0
Connected Port Number: 0(path0) 
Inquiry Data: SEAGATE ST360057SSUN600G08051047E1P6N9          
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None 
Device Speed: 6.0Gb/s 
Link Speed: 6.0Gb/s 
Media Type: Hard Disk Device
Drive:  Not Certified
.
.

We can see that the Device Id of 19 matches up both here and in the CellCLI output.
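If you only want that one drive rather than paging through the full PDList output, MegaCli can (if memory serves) be pointed at a specific enclosure:slot pair:

[root@cel01 ~]# /opt/MegaRAID/MegaCli/MegaCli64 PDInfo -PhysDrv [20:0] -a0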

So we essentially have the following chain:

Griddisk → Celldisk → LUN → Physical drive

The key point is that multiple griddisks can be created on top of a single celldisk, which effectively maps to a single physical drive, and it is those griddisks that are presented to ASM.
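For reference, the whole drill-down from ASM disk back to spindle can be repeated for any disk with the same handful of commands (using the example griddisk from this post):

SQL> select path from v$asm_disk where name = 'DATA_EX01_CD_00_EX01CEL01';

CellCLI> list griddisk where name='DATA_EX01_CD_00_ex01cel01' detail
CellCLI> list celldisk where name='CD_00_ex01cel01' detail
CellCLI> list lun 0_0 detail
CellCLI> list physicaldisk 20:0 detail

Each step tells you the name you need for the next: the griddisk detail gives you the cellDisk, the celldisk detail gives you the lun and deviceName, the lun detail gives you the physicalDrives, and the physicaldisk detail finally describes the brown spinny thing itself.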
