Exadata Storage Cells and ASM Mirroring and Disk partnering

I think it is fair to say that the majority of people using ASM are using external redundancy, so there is a lot less experience out there with ASM mirroring. After all, why would you buy a big, expensive storage array and then not use all of its features? Hardware RAID is a prime example of one of those features you are paying for.

But then along comes Exadata. It too is big and expensive, but the one thing it does not give you is hardware RAID protection – data protection is done through ASM mirroring. The same is true of the newly minted Oracle Database Appliance.

You’ll be aware that Exadata storage comes in the form of so-called storage cells, each filled with 12 drives. Even the smallest quarter rack therefore gives you not only a minimum of 36 drives, but also a minimum of 3 storage cells.

ASM Mirroring

I have discussed ASM extent mirroring fairly thoroughly in the past, but to recap: in a normal redundancy disk group ASM writes primary and secondary extents, and these extents are written to different failure groups. Drives that share a common component should be placed in the same failure group, so that a primary extent and its secondary extent always reside in different failure groups.

The idea, of course, is that should the component shared by all disks in a failure group actually fail, all the data is still accessible via the mirrored extents, which are guaranteed to be in a different failure group that does not depend on the failed component.
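To make this concrete, here is a minimal sketch of creating a normal redundancy disk group with explicitly named failure groups. This is a generic (non-Exadata) example and the disk paths and failure group names are made up; the point is simply that ASM will then place a primary extent and its mirror copy in different failure groups:

-- Hypothetical example: two failure groups, one per tray of disks
-- that share a controller. Paths and names are illustrative only.
create diskgroup DATA normal redundancy
  failgroup TRAY1 disk '/dev/mapper/tray1_disk1', '/dev/mapper/tray1_disk2'
  failgroup TRAY2 disk '/dev/mapper/tray2_disk1', '/dev/mapper/tray2_disk2'
  attribute 'compatible.asm' = '11.2';

With only the two failure groups shown, a secondary extent for a disk in TRAY1 can only land on a disk in TRAY2, and vice versa.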

In Exadata the obvious boundary for a failure group is the storage cell. Your data still needs to be accessible in its entirety should you lose an entire cell, so primary and secondary extents must be stored on different cells. Drives in the same cell are therefore in the same failure group, and drives in different cells are in different failure groups.
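You can sanity check this mapping from V$ASM_DISK – on a quarter rack each of the three cells should show up as its own failure group containing 12 disks. The query below assumes the DATA disk group is group number 1, as in the examples later in this post:

-- One row per failure group (i.e. per cell), with its disk count
select FAILGROUP, count(*) DISKS
from V$ASM_DISK
where GROUP_NUMBER = 1
group by FAILGROUP
order by FAILGROUP
/

On the quarter rack described here you would expect three rows, each showing 12 disks.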

Disk Partnering

Each disk in a failure group maintains a set of so-called partner disks. Each ASM disk has up to 10 partner disks to which secondary extents for that disk can be written. However, on both 1/4 and 1/2 rack Exadata boxes running 11.2.0.2 I have only ever seen 8 partner disks being used. The partner disks for a disk must, of course, reside in a different failure group.
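If you want to verify that count for yourself, each row in X$KFDPARTNER represents one disk-to-partner relationship, so counting rows per disk should show the 8 partners per disk I have observed (run as SYS on the ASM instance; disk group 1 is assumed):

-- Number of partners recorded for each disk in disk group 1
select DISK, count(*) PARTNERS
from X$KFDPARTNER
where GRP = 1
group by DISK
order by DISK
/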

The partner relationships themselves can be seen in the listing below, taken from a 1/4 rack V2 Exadata box. Note I'm focusing on disk 0, which is on cell01:


select DISK, NUMBER_KFDPARTNER, NAME, FAILGROUP
from V$ASM_DISK A, X$KFDPARTNER B
where DISK = 0
and GRP=1
and B.NUMBER_KFDPARTNER = A.DISK_NUMBER
and name like 'DATA%'
order by 2 asc
/
  
      DISK NUMBER_KFDPARTNER NAME                           FAILGROUP
---------- ----------------- ------------------------------ ------------------------------
         0                16 DATA_EX01_CD_04_EX01CEL02      EX01CEL02
         0                18 DATA_EX01_CD_06_EX01CEL02      EX01CEL02
         0                20 DATA_EX01_CD_08_EX01CEL02      EX01CEL02
         0                21 DATA_EX01_CD_09_EX01CEL02      EX01CEL02
         0                28 DATA_EX01_CD_04_EX01CEL03      EX01CEL03
         0                29 DATA_EX01_CD_05_EX01CEL03      EX01CEL03
         0                30 DATA_EX01_CD_06_EX01CEL03      EX01CEL03
         0                34 DATA_EX01_CD_10_EX01CEL03      EX01CEL03

Here I have chosen to focus on the first disk, disk 0 in disk group 1, and find all of its partner disks. This disk is in cell 1. Cell 1 has disks 0 – 11, cell 2 has disks 12 – 23, and cell 3 has disks 24 – 35.
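That numbering is easy to confirm from V$ASM_DISK, since each failure group (cell) should cover its own contiguous range of disk numbers – again assuming the DATA disk group is group number 1:

-- Disk number range per failure group (i.e. per cell)
select FAILGROUP, min(DISK_NUMBER) LOW_DISK, max(DISK_NUMBER) HIGH_DISK, count(*) DISKS
from V$ASM_DISK
where GROUP_NUMBER = 1
group by FAILGROUP
order by 2
/

On this box that should come back as 0 – 11 for EX01CEL01, 12 – 23 for EX01CEL02 and 24 – 35 for EX01CEL03.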

Going back to the partner listing for disk 0, you can see that its partner disks are spread evenly across cell 2 (16, 18, 20, 21) and cell 3 (28, 29, 30, 34).

I was quite interested to see whether there would be any overlap between disk 0's partners and the set of partners chosen by one of those partners, so I looked at the partners of disk 0's first partner, disk 16:


      DISK NUMBER_KFDPARTNER NAME                           FAILGROUP
---------- ----------------- ------------------------------ ------------------------------
        16                 0 DATA_EX01_CD_00_EX01CEL01      EX01CEL01
        16                 2 DATA_EX01_CD_02_EX01CEL01      EX01CEL01
        16                 6 DATA_EX01_CD_06_EX01CEL01      EX01CEL01
        16                11 DATA_EX01_CD_11_EX01CEL01      EX01CEL01
        16                27 DATA_EX01_CD_03_EX01CEL03      EX01CEL03
        16                29 DATA_EX01_CD_05_EX01CEL03      EX01CEL03
        16                32 DATA_EX01_CD_08_EX01CEL03      EX01CEL03
        16                35 DATA_EX01_CD_11_EX01CEL03      EX01CEL03

As you can see, the two disks have only disk 29 in common as a partner. Of course disk 16, being in cell 2, could not have chosen any drives from that cell, but even in cell 3 it shares only one partner with disk 0 from cell 1.
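Rather than comparing the two listings by eye, you can get the overlap directly with something along these lines – it simply intersects the partner lists of the two disks (disks 0 and 16 and disk group 1 are hard-coded here for this example):

-- Partners that disk 0 and disk 16 have in common
select NUMBER_KFDPARTNER
from X$KFDPARTNER
where GRP = 1
and DISK = 0
intersect
select NUMBER_KFDPARTNER
from X$KFDPARTNER
where GRP = 1
and DISK = 16
/

Which, as above, comes back with just disk 29.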

At least you can see that the disk partnering algorithm on Exadata ensures the partner drives are chosen from different cells, guaranteeing your data will survive the unavailability of a cell.


7 thoughts on “Exadata Storage Cells and ASM Mirroring and Disk partnering”

  1. In 11gR2, _asm_partner_target_disk_part has a default value of 8; it controls the maximum number of partner disks. _asm_partner_target_fg_rel controls the maximum number of failure groups used for partner disks; its default is 4.
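For reference, hidden parameters such as these can be inspected (though not sensibly changed without Oracle Support's say-so) with the usual X$KSPPI query, run as SYS against the ASM instance. The parameter names below are taken from the comment above and the values are version-dependent, so treat this as a sketch:

-- Current values of the two underscore parameters mentioned above
select P.KSPPINM NAME, V.KSPPSTVL VALUE, P.KSPPDESC DESCRIPTION
from X$KSPPI P, X$KSPPSV V
where P.INDX = V.INDX
and P.KSPPINM in ('_asm_partner_target_disk_part', '_asm_partner_target_fg_rel')
/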
