ASM, Pillar Data, and IBM XIV

There I was at a VMware event, happily minding my own business, getting up to speed on the major IT trend that is virtualisation. This was not really Oracle related, and ASM was very, very far from my mind.

However it seems I’m never very far away from ASM: the event was run in conjunction with Pillar Data Systems, the fact that I’m a DBA inevitably came up in conversation, and it turns out Pillar have added a special feature to their Axiom range of arrays specifically to take advantage of the ASM 1MB stripe depth.

The Impact of Stripe Size

Let’s first consider what a typical 128K stripe depth looks like when writing a 1MB ASM coarse stripe:

[Figure: a 1MB ASM extent written as eight 128K stripe units across eight drives]

So it requires eight 128K writes to write a full 1MB ASM coarse stripe, and any read of a 1MB ASM extent requires the participation of every drive in the stripe set.

It’s not too difficult to imagine that this may not be the best bet for scaling the number of concurrent requests. With a 1MB stripe depth, by contrast, we have the following:

[Figure: the same 1MB extent written as a single 1MB stripe unit on one drive]

Basically, breaking an I/O request into multiple requests over multiple drives hurts throughput, and Oracle themselves advocate a 1MB stripe depth.

The reason is that it strikes the right balance: each disk spends more of its time transferring data relative to the time spent seeking, while multiple drives can still come into play for concurrent requests.
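To put rough numbers on that trade-off, here is a minimal sketch. The seek time and transfer rate figures below are illustrative assumptions for a generic spinning disk, not measurements from any particular array:

```python
# Rough model: servicing one stripe unit costs one seek plus one transfer.
# These figures are illustrative assumptions, not measured values.
SEEK_MS = 8.0             # average seek + rotational latency, in ms
TRANSFER_MB_PER_S = 80.0  # sustained sequential transfer rate

def chunk_time_ms(chunk_kb):
    """Time for one drive to seek to and transfer a single stripe unit."""
    return SEEK_MS + (chunk_kb / 1024.0) / TRANSFER_MB_PER_S * 1000.0

def read_1mb(stripe_depth_kb):
    """Drives touched, and per-drive time, to read one 1MB ASM extent."""
    drives = 1024 // stripe_depth_kb   # 8 drives at 128K, 1 drive at 1MB
    return drives, chunk_time_ms(stripe_depth_kb)

for depth in (128, 1024):
    drives, t = read_1mb(depth)
    print(f"{depth}K stripe depth: {drives} drive(s) busy, "
          f"~{t:.1f} ms each, seek is {SEEK_MS / t:.0%} of the work")
```

Under these assumed figures, the 128K depth ties up eight spindles with seek time dominating each one, while the 1MB depth ties up a single spindle that spends most of its service time actually transferring data, leaving the other seven drives free for concurrent requests.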

Another interesting feature of the Pillar Data arrays is their ability to have multiple stripe depths within a RAID set. This lets you store your redo logs and control files, which use the fine-grained 128K ASM stripe, on a matching stripe depth of 128K.

XIV – ASM in hardware?

[Figure: IBM XIV storage array]

IBM’s XIV Storage came on the market last year, and to me it looks like it contains a lot of the features of ASM, but done in hardware.

Each logical volume within an XIV array is divided into stripes of 1MB in size. XIV also uses a fairly familiar mirroring algorithm, in that copies of the 1MB chunks of data are kept on two independent physical devices. Does this sound familiar to ASM users? It should.

These stripes are spread over all of the disks within the system.

Like ASM, it does not use traditional mirroring, and there is no concept of devices being mirror pairs of each other.
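The idea of mirroring chunks rather than whole devices can be sketched in a few lines. This is a toy illustration of the concept, not IBM’s actual placement algorithm: each 1MB partition gets its primary and mirror copy on two distinct disks chosen pseudo-randomly, so no two disks ever form a fixed mirror pair:

```python
import random

def place_partitions(num_partitions, num_disks, seed=0):
    """Toy XIV-style layout: for each 1MB partition, pick two distinct
    disks pseudo-randomly, one for the primary copy and one for the
    mirror. Illustrative only -- not IBM's real distribution algorithm."""
    rng = random.Random(seed)
    layout = []
    for _ in range(num_partitions):
        primary, mirror = rng.sample(range(num_disks), 2)
        layout.append((primary, mirror))
    return layout

layout = place_partitions(num_partitions=10_000, num_disks=180)

# Every copy pair lands on two different spindles...
assert all(p != m for p, m in layout)
# ...and the partitions spread across the whole system, with no disk
# permanently paired with any other.
used = {d for pair in layout for d in pair}
print(f"{len(used)} of 180 disks hold data")
```

The practical consequence, as with ASM normal redundancy, is that after a disk failure the rebuild reads come from many surviving disks rather than from a single mirror partner.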

Not that ASM needs much vindication, but they do say that imitation is the sincerest form of flattery, and clearly other vendors have decided the ideas behind ASM are good ones too.

Of course these stonking great arrays don’t come cheap, and at least with ASM you get some of this redundancy for free with your Oracle database licence. Though it would be nice if some snapshot or clone features were included in a future ASM release.


6 thoughts on “ASM, Pillar Data, and IBM XIV”

  1. Simon

    It’s great to hear people talking about matching up I/O sizes between Oracle and the storage subsystem again. In fact I’m working with one customer at the moment who has an imbalance between stripe widths at the various layers, and you’re right – it hurts IOPS… a lot on a busy system!

    One curiosity though: although 1MB is the default AU size (in ASM 10g), that would be a pretty large element size within a RAID 5 stripe – for example, on EMC I think that would make a 17-disk RAID group by default. Have you done many tests with differing stripe sizes or AU sizes in 11g (assuming external redundancy, of course)? The other consideration is stripe alignment, which can be another tricky area.

    I’m also encouraged to hear of the recent resurgence in popularity of short stroking (putting data on the outer half of the disk). With the size of modern disks this is a real “no-brainer” for me, and I continue to adopt this approach wherever people will allow me to! This is vindicated by the Oracle-HP Exadata server, which short strokes all disks (hot for data, cold for the recovery area). Nice.

    Simon

  2. Hi Simon,

    Thanks for reading!

    I have not played with ASM on 11g very much I’m afraid and have not experimented with differing AU sizes.

    I think the minimum ASM stripe size is 1MB so I’m not sure this would help.

    Yep, I saw the idea of short-stroking with Exadata too. Of course, do you even need to worry about this with SSDs now?

    jason.

  3. Hi Jason,

    I always read your ASM blogs. I am just getting started with ASM and doing a POC (proof of concept) with one of my customers to replace LVM with ASM. I have a configuration question: how many databases per ASM disk group have you designed for? One to one, or many to one?

    • Hi,

      Thanks for reading!

      Well, if they are on the same physical hardware I would not have multiple ASM instances running on the same box.

      So if you can live with multiple DBs on the same box, then one ASM instance will manage them.

      jason.

  4. Thanks for reply…

    I mean to say: how many databases per disk group?
    I believe we can have multiple disk groups in a single ASM instance. I was asking whether multiple databases should share one disk group, or whether each database should have its own disk group within the single ASM instance running on the server (a server running multiple databases).

  5. Great thread. I’ve been doing a lot of testing of ASM and XIV. I am getting the best performance using AU_SIZE=8m on XIV – around 30% better than the default 1m.
