Exadata: What’s Coming

This is based on the presentation Juan Loaiza gave regarding What’s new with Exadata. While a large part of the presentation focussed on what was already available, there are quite a few interesting new features that are coming down the road.

First off was a brief mention of the hardware. I’m less excited about this. The X4 has plenty of the hardware that you could want: CPU, memory and flash. You’d expect some or all of them to be bumped in the next generation.

New Hardware

This was skated over fairly quickly, but I expect an Exadata X5 in a few months. The X4 was released back in December 2013; the first X4 I saw was January 2014. I wouldn’t be surprised if Oracle release the X5 on or around the anniversary of that release.

Very little was said about the new hardware that would be in the X5, except that the development cycle has followed what Intel has released, and that CPU core counts and flash capacity have gone up. No word was given on which CPU is going to be used in the X5.

The compute nodes on an X4-2 have Intel E5-2697 v2 chips; this is a 12-core chip running at 2.7GHz. I’d expect an increase in core count. The X3 to X4 transition increased core count by 50%. If that happens again, we get to 18 cores. There is an Intel E5-2699 v3 with 18 cores, but that’s clocked at 2.3GHz.

However, I think I’d be less surprised if they went with the E5-2697 v3, which is a 14-core chip clocked at 2.6GHz. That would be a far more modest increase in the number of cores. The memory speed available with this chip does go up though – it’s DDR4. Might help with the In-Memory option. I also wonder if they’ll bump the amount of memory supported – this chip (like its predecessor) can go to 768GB.

As I said, it was not mentioned which chip was going to be used, only that Intel had released new chips and that Oracle would be qualifying their use for Exadata over the coming months.

New Exadata Software

There was a bunch of interesting sounding new features coming down the road. Some of the ones that in particular caught my eye were:

The marketing-friendly term “Exafusion”. Exafusion seems to be about speeding up OLTP; labelled as “Hardware Optimized OLTP Messaging”, it’s a reimplementation of cache fusion. Messages bypass the network stack, leading to a performance improvement.

Columnar Flash Cache – This is Exadata automatically reformatting HCC data when written to flash as a pure column store for analytic workloads. Dual formats are stored.

Database snapshots on Exadata. This seems designed with pluggable databases in mind for producing fast clones for dev/test environments. Clearly something that was a gap with ASM as used on Exadata, whereas ACFS does do snapshots.

Currently the latest Linux release available on Exadata is 5.10. Upgrading across major releases is not supported – it would have required reimaging. Not a pretty prospect. Thankfully Oracle are going to allow and enable upgrading in place to 6.5.

Some talk about reducing I/O outliers both in reading from hdd and in writing to flash.

Currently with IORM you can only enable or disable access to flash for a particular database. Full IORM seems to be coming for flash.

The final new feature that caught my eye was the long-rumoured virtualisation coming to Exadata: OVM is coming. The ODA, for example, has had VM capability for some time, so it’s in some ways an obvious extension. With the increasing number of cores, I expect lots of smaller organisations may not actually need them all, and might think that even if they could turn unused ones off, it’s a waste buying hardware and not being able to use it.

I’m hoping to NOT see OVM on an Exadata in the wild anytime soon.

Software on Silicon

One final point, almost tucked out of sight, was that Juan had a little bullet point about “software on silicon”. Now this has me confused. My understanding is that when Larry was talking about this, it was specifically SPARC. That I can understand, as Oracle controls what goes on the chip.

Ignoring the SPARC SuperCluster, there is no SPARC on Exadata. So that leaves either a closer collaboration with Intel or moving to SPARC. Collaborating more closely with Intel is a possibility, and Oracle had first dibs on the E7-8895 v2 for the X4-8.

I can’t imagine changing the compute nodes to SPARC; that wouldn’t make sense. But “software on silicon” is a bit like offloading…

Exadata software definitely keeps moving forward, and the difference between running Oracle on Exadata and on non-Exadata grows ever wider with each “Exadata only” feature.


On Active Dataguard

One of the real advantages of Oracle OpenWorld is that there are swarms of Oracle employees in attendance who have intimate knowledge of how various bits of the Oracle software actually work internally.

In the exhibition hall there was a huge section with lots of different Oracle stands focusing on various features, and on one of the days I dropped by the Active Dataguard booth and had a chat with one of the guys there.

So in terms of how you switch on Active Dataguard, it’s really not that hard. You just need to have opened your standby read only (which you could do before 11g) and then issue the familiar alter database recover managed standby command – the difference is that with 11g you can issue this command to apply the redo while the database is open read only and servicing application queries; previously the standby had to be at the mount stage.
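As a hedged sketch of the commands involved (this is the standard Data Guard SQL run on the standby; the exact options you choose, such as real-time apply, may vary with your setup):

```sql
-- On the standby: open read only, then restart managed recovery
-- while the database stays open (the 11g Active Dataguard behaviour).
ALTER DATABASE OPEN READ ONLY;

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE        -- real-time apply from the standby redo logs
  DISCONNECT FROM SESSION;
```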

So that seems like a small change, and in fact such a little change on the surface tends to lead to some lousy presentations, as there is very little, at first glance, to say about Active Dataguard. However, underneath the covers a lot of work went on to enable this; it was not just a case of allowing the recover command to be run with the database open.

Database changes stored in the redo stream are not ordered, and when performing managed recovery those changes are applied to the database out of order. This is done for performance reasons; trying to reorder the changes into the correct sequence would not be performant.

This is no good for queries though, as they need to see a read-consistent view, and with Active Dataguard this is a constantly moving target. So one of the challenges with Active Dataguard was ensuring consistent reads were still implemented.

So there is the concept of a published Read SCN, up to which queries are consistent; this sits behind the point up to which the apply process has actually applied redo.

Work was also required for dropped packages – you can’t have a package becoming unavailable when a query executing at an earlier Read SCN still requires it. This required changes to the redo stream so that package changes are buffered until the Read SCN is bumped up.

I also asked a couple of Oracle guys if there was any chance of this being backported to 10g; Larry Carpenter just laughed and said that was one of the first questions he asked. It seems the changes to the redo stream are so significant that it will never see the light of day in 10g.

OOW: Summary

I thought I would have a posting to try and summarise my experiences at this year’s Oracle OpenWorld. First off, I had a brilliant time; I think it was better than I was expecting by a long way. I really, really liked San Francisco. I don’t think I’m a great traveller, but I thought the city had a great buzz & energy about it, and that was without the 40-odd thousand OpenWorld attendees.

It’s pretty hard I suspect to go to OpenWorld and not come back enthused about Oracle. Yeah, it probably means it’s good marketing, but I suspect it is also the superb community surrounding the products that gives you an extra ooomph of enthusiasm.

The conference is superbly organised; I really thought it was more like several conferences within a conference. I hardly left Moscone South (apart from going to the OTN lounge & the keynotes). This meant I kept bumping into the same faces, and I must have bumped into a bloke from BA about half a dozen times at least. I think this is a good thing, as it makes the conference feel a bit less intimidating, and it probably helps with managing the flow of people.

It’s the Networking Stupid!

I would be the first to admit I am the worst “networker” in the world. Simon Haslam has my personality nailed with the old definition of an “extrovert techie”: “someone who looks at *your* shoes when they’re talking to you”. You know, that picture of a bloke with an apple in front of his face is there for a reason 😉

However, the real value-add of OpenWorld is in all the fantastic individuals you can chat to. Sure, there are the Oracle employees, and I would say one of the best things is being able to buttonhole the people who are intimately acquainted with whichever Oracle feature interests you. I had a great chat with the Active Dataguard guys, and heard some really interesting stuff from Nitin Vengurlekar.

There is also the opportunity to chat to various well-known characters in the Oracle community, and yeah, the OTN lounge was definitely a great place to meet people who know far more about Oracle than you do.

I did enjoy the bloggers meet-up, but I’ll be honest – I hardly recognised anybody, and I’m certain no one had read any of these scribblings!

Favourite presentations

So the actual presentations may not have been all that exciting, and I did go to a few where I thought, hang on, I’ve seen this material before. There did seem to be a lot of generic sessions just going over the highlights of the new features in 11g. I would say that the UKOUG Birmingham Conference has far more in-depth presentations.

Another point I noted about most of the presentations I saw was that they were all presentation 1.0. What I mean by that is slide after slide filled with a load of bullet points. Presentation Zen should be required reading!

That being said, these were, I thought, the stand-out presentations that I saw:

Graham Wood & John Beresniewicz: Performance Fundamentals for Oracle Database 10g & 11g. I found this the most interesting of the Oracle presentations, a bit less generic than many others. As we know, it’s all about DB Time.

Alex Gorbachev: Under the Hood of Oracle Clusterware. This was a masterfully executed presentation, particularly the demos.

Larry Ellison: Exadata Announcement. I think Oracle had the marketing for this spot on, they really built the excitement up during the week. Yeah, I’m a techie, but I was quite pumped up for this announcement. You never know, I might even one day get my hands on a couple of exadata cells.

Tips for Next Time

I got heartily sick of taking so many cabs, so next time I’d really like to stay a little bit closer to the Moscone Center. That means I need to secure my OpenWorld ticket earlier than I did this time, it all felt a bit last minute this year, and the hotels fill up so quickly!

Failing that, I probably should have walked a bit more, but I think I got a misguided sense of the scale of San Francisco by doing too much touristy stuff the first 2 days.

Do More networking! Definitely need to drink more beer with more techies! Which would probably help with the jet lag – I certainly went to bed too early on the first night, I’m sure the key for avoiding jetlag is to try and stay up as late as you possibly can, and beer can surely only help with this!

I would heartily recommend Oracle OpenWorld to anybody thinking about going next year. Just make sure you hit those Oracle booths in the exhibition hall and pump the employees for all they’re worth. Oh, and quench your thirst of an evening with a good selection of techies.

What more could a DBA want?

OOW: Day 4

Automatic Storage Management – Frits Hoogland

Fantastic crowd for 09:00am the day after the big party. The room is jam-packed.

Frits comes across as a very confident speaker. This was one of the most technically in-depth presentations I have seen the whole week at OpenWorld. It feels quite strange after all the generic overviews.

Not all that many people in the hall are using ASM; of those that are, 10.2 is the vast majority.

Run through of ASM basics.

Describing redundancy, claiming the vast majority of people using ASM are using it in external redundancy.

Jumped right into disk headers and showed a diagram of what happens to the disk headers when you increase the number of disks. He really needed to explain Allocation Units first.

I don’t think he has explained extents.

Explained the concept of not being able to allocate space even when one disk in a diskgroup still has free space.

Explains how ASM tunes I/O – it’s just an allocation policy!


ASM sees each device as an individual entity and stripes over all devices it sees.

ASMLIB: support library for ASM. It is an API for storage & O/S vendors to add functionality

Device name labels, persistent device names. ASMLIB creates a meta device and sorts out the correct permissions


ASMLIB adds a kernel dependency.

ASM Advantages & Disadvantages

Using ASM pushes more responsibility for volume management & filesystem management to the DBA. RMAN backups are compulsory – this is a good thing. ASM is relatively young. There is no black magic in terms of allocating storage space.

ASM does SAME – Stripe And Mirror Everything

Online storage migration and configuration changes can be a real manageability win.

Frits did not take questions, which was a real, real shame as there was an absolutely enormous crowd around him at the end asking questions; it would have been interesting to hear some of the questions – and answers!

Real World Performance – Andrew Holdsworth

There is a guy sitting next to me eating his lunch & talking on his freaking bluetooth headset. What the hell is the matter with these people!?!

Optimizer Exposé

Issues he hears at times: never using the correct index, the optimizer scans the table when I want to use index access, why are nested loops so bad sometimes.

Problems occur sometimes upon upgrading.

DBMS_STATS auto gathering has impacted production systems throughout the world, unpredictable performance when execution plans unpredictably change.

auto gather when 10% of the rows have changed

new histograms may be created

bind peeking may become an issue because of new histograms

It’s all about the statistics that are generated.

tension between letting stats evolve (risking changing good plans) or keeping them static and predictable (potentially not getting the optimal plan)

To keep consistency of plans do not gather histograms; accurate high/low values are crucial:

use tools like SQL profiles, outlines
manually hint every statement – bad idea
this approach will not give the best plans but does give predictability
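A hedged sketch of the “no histograms” approach (the object names are invented; DBMS_STATS and its METHOD_OPT parameter are the standard API):

```sql
-- Gather base column stats with no histograms (SIZE 1 = a single bucket),
-- keeping high/low values accurate while plans stay predictable.
-- Schema and table names here are purely illustrative.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'APP',
    tabname    => 'ORDERS',
    method_opt => 'FOR ALL COLUMNS SIZE 1'
  );
END;
/
```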

Plan efficiency is important where I/O means real disk access, not just memory access

Six challenges: data skew – a non-uniform distribution of data, generally on a per-column basis

histograms can help with data skew, or determine if uniform plans are ok

Bind peeking: different plans are possible for the same cursor, even across different instances.

high/low cardinality: impossible for optimizer to get the correct # of rows when the high/low values are incorrect

correlations can throw the optimizer

cardinality approximation

the debugging process is all about making sure the optimizer has correct cardinality

running with gather_plan_statistics is how to get at the cardinality estimates the optimizer is making.
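As a minimal sketch of that debugging step (the table and predicate are invented; the hint and the DBMS_XPLAN call are the standard mechanism):

```sql
-- Run the statement with rowsource statistics collection enabled...
SELECT /*+ gather_plan_statistics */ COUNT(*)
FROM   orders
WHERE  status = 'SHIPPED';

-- ...then pull the plan with estimates and actuals side by side.
-- Compare E-Rows (optimizer estimate) with A-Rows (actual rows).
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```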

Managing statistics on partitioned tables

different possibilities for building stats with partitions

when to apply the knife to your data/workloads/databases

a little discussion on sharding and in-memory DB on the middleware; you need to make sure you can route transactions from the middleware to the correct shard.

detecting and avoiding hiccups in your  system

in many oracle systems log file sync is the dominant wait.

output from v$event_histogram for log file sync

graph showing count x elapsed time, showing a peak at 16ms but another peak at around the timeout of 1 sec.
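For reference, a sketch of the sort of query behind that graph (V$EVENT_HISTOGRAM and its columns are standard; each row is a latency bucket of up-to-that-many milliseconds):

```sql
-- Wait-time distribution for log file sync: look for a second hump
-- out near the timeout rather than trusting the average.
SELECT wait_time_milli, wait_count
FROM   v$event_histogram
WHERE  event = 'log file sync'
ORDER  BY wait_time_milli;
```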

statistical averages can be misleading

cursor invalidations can lead to massive re-parsing.


root cause analysis is vital

Well that is a wrap from me, my OpenWorld is over for this year, really hope to make it back next year.

OOW: Day 3

A successful 11g installation – Plamen Zyumbyulev & Phil Newlan

First part of the presentation is from Plamen
trying to get to a dynamically configurable infrastructure, moving to a service approach (Oracle services)

effective automatic workload management with NO single point of failure. need centralised monitoring and management.

running Linux x86-64 with blades, though he said privately that he was less than happy with them. Disadvantages include not being low cost, limited I/O and limited flexibility.

Consolidation project moving many business systems into a smaller number of databases.

For DR you can actually just force logging on certain tablespaces

datafiles can have different redundancy within the same diskgroup

using workload management where sessions are assigned to consumer groups and only allowed certain % of resources.

with different services on separate nodes, dynamic resource re-mastering is useful to ensure the blocks are owned by the instances that are hosting the services rather than being spread around the whole cluster.

Note Mtel went live with a beta version of 11g though not mission critical stuff!

List of improvements in 11g:

Parallel Query integration with services so the query stays local to the instances upon which the service has been defined as active.

improvements to ASM, preferred read for stretched clusters.

Enterprise manager screen showing which failure group the instance is reading from. Obviously you must be using ASM mirroring to take advantage of this. Must have 11gR1 ASM and 11gR1 RDBMS

ASM Fast Disk Resync

loss of a disk within a diskgroup will not cause an immediate rebalance. ASM keeps track of blocks that have changed; when the disk is back it syncs the changes. Again you need to have implemented ASM mirroring. Only useful for temporary disk loss, not for swapping a disk out.

ADDM is now RAC aware in 11gR1. will identify the most globally significant performance problems with the entire RAC cluster rather than on an instance basis.

runtime connection pooling is integrated with the RAC load balancing advisory, so a new connection is given out based on the load of the various instances, routing to the instance that will (hopefully) give the best response time.

Question regarding whether you can run with mismatched clusterware, ASM and RDBMS versions: yes you can, but clusterware must be at the highest version, i.e. 10g RDBMS with 11g clusterware.

REAL confusion over what happens when a disk in a failure group fails and how much space is required to rebalance.

Performance Fundamentals for Oracle Database 10g & 11g – Graham Wood & John Beresniewicz

This was such a popular session that this is the second running of it. The real title is DB Time Performance Tuning: Theory and Practice.

They are being a little sarcastic regarding enterprise manager.


History of tuning methods comparing methods. Graham Wood has been using Oracle since Version 2.

DB Time is the total time in database calls by foreground sessions, includes cpu time, I/O time and non-idle wait time

DB Time <> response time

Database time is total time spent by user processes either actively working or actively waiting in a database call.

Response time for the end user is not just the time spent doing work in the db.

Active session is a session currently spending time in a database call

DB Time = Sum of DB Time over all sessions

Avg Active sessions = Total DB Time / Wall Clock (Elapsed) Time

Increasing load is either more sessions active, or the same number of sessions performing operations that take longer

DB Time increases when performance decreases

If the host is CPU bound, foreground processes accumulate active run-queue time: when a process is in the run-queue it may still be recording active DB Time, as the session has not been able to signal that the wait has finished – it’s waiting to get back on the CPU to record that fact.

This can manifest itself as though there is increased I/O wait time – when the real problem is waiting to get on CPU.


where to find DB Time


STAT_NAME = ‘DB time’


“Database Time Per Second”, “CPU Usage Per Sec”


ELAPSED_TIME = DB Time within this view


All active sessions are captured every second; this in-memory view has 1-sec sampling, while DBA_HIST_ACTIVE_SESS_HISTORY persists a subset of the samples to disk.

11g enhancement shows which row source within a SQL statement is running in each sample
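The slides listed the views faster than I could note them down, but as a hedged sketch, the system-wide DB Time figure lives in the time model views (V$SYS_TIME_MODEL is standard; VALUE is in microseconds):

```sql
-- System-wide DB Time and DB CPU from the time model.
-- V$SESS_TIME_MODEL is the per-session equivalent.
SELECT stat_name, value / 1e6 AS seconds
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU');
```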


They completely overran and skipped this section and took no questions.

Changes, Changes, Changes – Tom Kyte

This is my first Tom Kyte session of OpenWorld. Quite a bit of the crowd claim to have been DBAs for 15 years.

Tom thinks that 11g is all about safely introducing change into the database.

increasing level of online changes as you go through Oracle versions

online parameter changes
online major memory changes
online schema evolution
online index creates
rolling upgrades – still to actually see that
online disk reconfiguration

Tom’s giving a review of standby databases.

Real Application Testing helps with testing changes.

Flashback technology helps recover from human error, various options flashback query, tables, database.

Tom previewing package versions – this was meant to be in 11gR1 but did not make the cut. This will give online application upgrades.

In truth this was a bit of an anti-climax after the hype of the Exadata announcement.

Larry Ellison Keynote Announcing Exadata: Live Blog

Well I might as well give it a go live blogging the Larry Ellison Keynote. It’s being broadcast on the web, so I’m not sure how many will read it live, but what the hell. If nothing else it should lead to some humorous spelling mistakes.

First off, I LOVE being a blogger. I skipped past an ENORMOUS queue to get into the Press, blogger and analyst pen. I’m sitting in the middle of the second row of tables smack in the middle. Lots of press in today natch, in fact I saw far more press badges than blogger badges. I can’t believe Oracle are treating bloggers with the same regard as journalists, but hey I’m loving it!

Oh, by the way, Oracle employees got in first; I saw Andrew Holdsworth march in at the head of a biggish group of employees. Then it was the turn of Club Gold members.

The hall is really, really filling up now. I see Eddie Awad two rows behind, but I don’t see/recognise any other bloggers. I was definitely really keen to get into this, much more than I thought I would be at the start of the week. The drip, drip marketing has obviously done the trick; I really want to know what the X is, and I’m chuffed to be in the hall when it’s announced.

14:15 Crowds are still streaming in, that’s been for about 15 minutes now. REM blaring over the sound system

14:29 Can’t be long now surely before Ellison takes the stage, the incoming stream of people is starting to thin. The journo next to me looks like he has fallen asleep, I guess his snoring won’t matter too much over the blaring music. I think it might be Seal now.

14:31 here we go the back drop is now playing a video & the blaring music has been cut.

14:32 big cheer for hp, obviously from the hp employees.

14:33 oh christ it’s hp up first. the room turns hp blue instead of Oracle red.

14:34 she is getting desperate if she is worried that there are not enough people in the hall putting their hand up to owning hp kit

14:35 hp are going to have over 300,000 employees! Plug for Larry’s announcement – obviously a joint hp product.

14:40 Just noticed that there is a big crowd to my left sitting on the floor and they have just been evicted from the hall – certainly that was more interesting than the hp blether.

14:45 video on the change that the hp datacenters have undergone. Christ all bloody mighty, this is an advert. I’m being advertised to. Funny how the video does not actually seem to be the one she intended; it certainly does not seem to fit with how she introduced it.

14:50 nod to virtualisation – it’s all about the management. One thing I would say: she is a fantastic presenter, this is not just spewing out the bullet points – she really, really knows her material. Although the current slide on HP + EDS (HP have swallowed up EDS) is laden with bullets.

14:55 Damn. This would have been great for “buzzword bingo”. Virtualisation, check. Cloud computing, check. Going green, check. Now talking about pod computing, a datacenter in a box – which is basically copying the Sun Blackbox, which is, ooh, about 2 years old.

15:00 She is wrapping up. Why HP & Oracle are such great buddies. Desperate plea to get you to go the HP booth. Well that was a lot of words in 30 minutes without saying all that much.

15:01 It’s a video of Larry’s yacht. I hope he does not sail it into the Moscone.

15:02 the crowd gives applause to him just for getting on the stage. He is desperate to win the Americas Cup. Extreme performance theme.

15:04 Graph on the size of data warehouses growing. Data bandwidth problem shoving the data from the disks to the servers. Claiming data warehouses start to slow down at 1TB. It’s all about storage so far.

15:06 possible solutions reducing data going through the pipes or get more/wider pipes.

15:07 Oracle’s first hardware product, in association with hp, called the Exadata programmable storage server. Only 12 disk drives – hey, this is just like a Thumper from Sun.

15:08 oooh, not quite like a Thumper after all; it’s returning query results, not blocks. Puts intelligence closer to the data; added 2 infiniband pipes.

15:13 Exadata will work with any Oracle database server, but Linux only to start with, and maybe just 32-bit at that – he just said Linux x86, no mention of 64-bit.

15:15 Second Product: the HP-Oracle database machine. Loads of people racing to the front to get a picture.

15:16 64 intel cores for database processing, and 112 cores for storage. 36GB memory 168 TB of disk data. Good joke about the iPod.

15:18 3-year development program, customer testing October 2007.

15:19 Wow, Plamen’s M-tel was testing an Exadata. Claims up to x70 performance improvement for certain operations on a 1/2 config.

15:20 replacing about 6 racks of storage with the database machine. Average speedup x30.

15:24 It’s all about bandwidth; if you have filled the pipe, extra storage won’t help, but Exadata adds pipe with each Exadata server.

15:25 Going after Teradata. And now it’s Netezza; their performance is nearly as good, but I’d be worried about the fault tolerance.

15:28 up to 168 TB in a database machine, 112 cores compared to 108 for Netezza.

15:29 Wow, the hardware price is cheaper with Oracle, but you get hammered on the $1.68M for Oracle server software. Hey, that’s $2M, but I suppose if you have a site license…

15:31 Sales from Oracle, delivery from HP. Why are they clapping Mark Hurd (HP) just for appearing?

15:36 Larry heads off-stage and a video from people who have already used it runs. Plamen makes another appearance!

15:37 show is over. No questions. I’m sure I’ve seen Larry take questions after the keynote in the past.

15:40 A massive traffic jam to get out the hall.

OOW: Afternoon of Day 2

Current Trends in Real World Database Performance – Andrew Holdsworth

This was a run through of issues seen by Andrew’s group in the past year (10 months to be exact)

Real World Performance Fundamentals

Things are harder with increasing dataset sizes, and transaction rates. Increasingly rigorous targets.

Ever increasing CPU power and lower memory costs. The bigger the database the more poor design will be exposed in terms of poor performance.

Two types of systems: those growing below Moore’s law, which have low performance requirements; and those growing above Moore’s law, which are the ones with the real performance requirements. The majority of DBs are NOT like this.

The Performance Hacker

This is an individual who claims to be a performance expert but lacks any real knowledge. Root cause analysis is not done. Don’t just fiddle with init.ora parameters.

The Usual performance issues

Poor execution plans

The best execution plan uses the least system resources. The first challenge is creating good schema statistics that yield good cardinality estimates. Things that screw this up include data skew, bind peeking, and high/low end values.

He has a graph showing how things slow down when having to access disk rather than memory.

Too many connections and these log on/off too often – has seen 15,000 connections

graph showing the impact of connections on scaling: increasing the number of connections lowers throughput. 50 processes per core loses 1/2 the performance.

Don’t have huge number of connections!

graph on parsing performance – don’t hard parse. Can’t believe he has been presenting that for over 10 years, but he still keeps seeing it with customers.

dealing with growth is a real challenge

capacity planning is impossible without reference data..

Today CPU is the cheapest component, now storage and associated networking are dominating the hardware budget. Software is obviously really expensive as well.

OLTP Performance

seen 15,000 SQL statements per second on some systems.

most common delay is log file sync wait. Sometimes this is caused by bugs in various versions of oracle – that’s quite refreshingly honest.

DW/BI Performance

Sees a lot of databases that have the name data warehouse but that are really OLTP systems.

Data loads are often the first performance problem – some loading programs are not just loading the data but transforming and selecting data as well, and thus not loading as performantly as they should.

Supporting too many users on a DW.

one off reports DW require huge amounts of hardware.

Hardware Review

getting faster due to additional cores, though not seeing them scale beyond 3/4.

Storage is still the dominant costs.
Solid state becoming slightly more common.

database storage is due for a paradigm shift – tomorrow in Larry’s presentation – this is in a hardware review rather than software?!?

infiniband beating 10GigE

Not seeing 10GigE actually being that performant


block size changes calculations of optimizer, also affects what data you are storing in the buffer cache and impacts contention.

I was gutted to find that I had missed lunch, so I decided to skip Paul Otellini’s keynote, though I did hear he was claiming Intel were “famous” for low power – talk about LMAO.

I then had yet another nightmare taxi journey where the cabbie asked me what the cross street for my destination was. How the hell should I know what the cross street is? I come from about a gazillion miles away from SF; you’re the bloody cabbie!

Top 10 Things You wanted to know about ASM – Rich Long & Nitin Vengurlekar

The room is absolutely jam packed for this one and we have started 8 minutes late and people are still streaming in.

ASM Architecture

ASM instance manages  metadata

diskgroup is logical grouping of disks

This is then presented as a series of questions:

what init.ora parameters does a user need to configure for ASM instances

only 3 parameters required

INSTANCE_TYPE which must be ASM
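A hedged sketch of what that minimal init.ora might look like. Only INSTANCE_TYPE is named in my notes; ASM_DISKSTRING and ASM_DISKGROUPS are my guess at the other two commonly-cited parameters, and the values are invented:

```
# Minimal ASM instance parameters (sketch, not from the session slides)
INSTANCE_TYPE  = ASM
ASM_DISKSTRING = '/dev/oracleasm/disks/*'   # where to discover disks
ASM_DISKGROUPS = DATA, FRA                  # diskgroups to mount at startup
```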

how does the database interact with ASM instance

ASM is not in the I/O path, so ASM does not impact performance. Shows a diagram of a database create operation – does not mention COD.

Do I need to define filesystemio_options – no it’s not necessary as the db writes to raw.

Can the ASM instance and the RDBMS instance run different versions. Yes, the ASM instance can be at lower, the same or higher version.

talking a bit about compatible.rdbms and compatible.asm


how do I back up my ASM instance – you don’t! There is no database opened; everything that ASM needs to mount a diskgroup is contained on the disk. RMAN is the recommended method for backup.


How do I migrate to a new storage array? Given that the new and old storage are both visible to the server, just add the new disks into the diskgroup and drop the old disks.
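A hedged sketch of that add/drop migration – the diskgroup, paths and disk names are invented for illustration:

```sql
-- Illustrative only: names and paths are made up.
ALTER DISKGROUP data
  ADD  DISK '/dev/mapper/new_lun1', '/dev/mapper/new_lun2'
  DROP DISK old_disk_01, old_disk_02
  REBALANCE POWER 8;
-- A single rebalance migrates the data onto the new array; the old
-- disks are only released once they have been emptied.
```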

Can I take my diskgroup from Solaris and plug it into Linux? No, as the VTOC is different; ASM metadata is also currently stored in the disk header.

How do you use ASM with multipathing software? Multipathing software sits at a lower level, so it should be transparent to ASM.

Is ASM constantly rebalancing to manage “hot spots”? No.

A graph showing IOPS to a bunch of disks managed by ASM demonstrates that I/O is balanced pretty much equally amongst all devices.

long Q & A session

Storing non-db files is coming in 11.2.

A larger AU size is recommended for diskgroups > 10TB.

A question regarding ASMLIB – the manageability wins are persistent naming and global open. They are really recommending ASMLIB.

Raw devices are not supported – there is a Metalink document on this.

On SAs accidentally trampling over ASM disks: you just need to expose more information to them, though this is mitigation, not a cure.

External redundancy with 1 big LUN or multiple LUNs – I thought they misunderstood this question.

At the end I had a brief chat with Bill Bridge and asked him whether, in the 7 long years from idea to final product, he ever thought it would never see the light of day – he said several times.

I was also badgering Rich Long regarding his presentation, as he definitely dumbed down the RDBMS & ASM interactions. I also felt he could have done with a slide on Allocation Units & extent sizing, as this did not really get explained until the Q & A. I fear I may have come across a bit like a fruitcake at that point, but my slides are better 😉

OOW: Day 2

I seem to be spending a small fortune on taxi journeys. Twice now the taxi driver has actually asked me for directions to the destination; I could not believe one of the guys did not know where the Moscone Center was!

I would not want to not know where I was going in SF.

Active-Active Datacenters – Ashish Ray & Lawrence To

This is a bit of a high-level overview of various Oracle techniques for distributed active datacenters.

definition: independent loosely coupled systems that are kept synchronised

how far apart can sites be:

need to be aware of the network, latency & bandwidth implications

how is data kept in sync:

host-based replication, either within the db or 3rd party

storage-array-based mirroring – simpler, but with drawbacks: propagation of data corruption; less network utilisation

can all db’s be read/write?

Discussion of techniques for avoiding conflicts when writing to multiple active db's; ideas include partitioning the data, e.g. EMEA partitions & APAC partitions.

how is high availability maintained

need fine grained monitoring. There is a need to measure the latency

how easily can the configuration be managed

3 choices:

RAC extended cluster – better for 25km or less. Cache fusion and disk I/O traffic have to traverse the inter-site network, so there is additional network latency; this needs to be carefully performance tested. The advantage is that both sites can be active and there are no conflicts – it is the same database.

You still need Dataguard and need to be aware of upgrades/patches; you also need a 3rd site for the voting disk.

11g ASM preferred reads facilitate stretch clusters by allowing a node to read only from the failure groups local to it. Fast disk resync also helps.
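If I have the parameter right, the preferred-read setup looks something like this – the diskgroup and failure group names are invented:

```sql
-- Assumed illustration of 11g preferred reads; run per ASM instance.
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEA'
  SID = '+ASM1';  -- the node at site A reads from its local failure group
```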

Does not provide full HA/DR.

Active Dataguard – distance is not an issue, particularly with ASYNC; it works for deploying read-only applications. Lawrence thinks Dataguard really shines in terms of manageability, and it obviously does provide full HA/DR.

Streams – the only real option for multiple highly distributed active read/write databases. It allows replication of the entire database or just a subset – it is extremely flexible. There is no real distance limitation, as TCP/IP is used for propagation of changes, and there are various options for conflict resolution.

11g package DBMS_COMPARISON to compare tables and merge differences.

managed and configured via Enterprise manager or PL/SQL API’s. performance tuning is key.

Global Scale Web 2.0 – Wei Hu

Sharding is an application-managed scaling technique using many, many databases, the reason being that a single db can't cope with the volume of transactions, so you subset the data into multiple db's. It's then up to the application to route queries to the appropriate db.
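The routing idea can be sketched in a few lines – a hypothetical illustration, not anything shown in the session; the DSN list and `route` function are made up:

```python
# Hypothetical sketch of application-side shard routing; the DSN list
# and function names are illustrative, not from the presentation.
import hashlib

# One connect string per physical database; the application owns this map.
SHARD_DSNS = [
    "db-shard-0.example.com/orcl",
    "db-shard-1.example.com/orcl",
    "db-shard-2.example.com/orcl",
]

def route(user_id: int) -> str:
    """Return the DSN of the shard holding this user's rows.

    A stable hash of the sharding key modulo the shard count keeps the
    mapping deterministic across all application servers.
    """
    digest = hashlib.sha1(str(user_id).encode()).hexdigest()
    return SHARD_DSNS[int(digest, 16) % len(SHARD_DSNS)]
```

The hard part is resharding: changing the length of the shard list remaps almost every key, which is why real systems tend to use consistent hashing or a directory lookup instead of a plain modulo.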

Shards are replicated; this is the dominant technique for large-scale websites. An unnamed social network site uses 1800 db's – he did not say whether they were Oracle or MySQL! That is horizontal sharding.

Another technique is to have one master and then fan out changes to a read farm that supports reads on a sharded basis.

Very common with MySQL – heck, this is an Oracle employee presenting and mentioning MySQL!

Apparently Oracle has lots of techniques that are useful for sharded db’s

Challenges include schema changes, failures and corruption.

Schema Changes

Claiming that a non-Oracle social networking site has a nightmare with schema changes – they have to be done offline.

Schema changes with MySQL give 2 choices: a total outage by making changes to all shards simultaneously, or a shard-at-a-time approach.

Claiming Oracle does online schema changes, though I'm pretty sure I've seen releases causing application failure with Oracle.


Shards need to be replicated. He is really going for MySQL, saying MySQL replication is terrible: the storage engine and replication state may become inconsistent.

Apparently Google have made significant changes to MySQL for replication and have effectively forked it, as MySQL did not accept the changes back into the codebase.

Now saying Oracle replication has been available since version 7 and that it is highly stable.

Praising Active Dataguard – that ain't gonna help with sharding for writes, but it is obviously useful for reader farms, though all writes would need to go to a master. Obviously it's good for failure handling as well.

Data Corruption

Increasing data volumes lead to a higher probability of data corruption. The ideal sharding solution should detect corrupt data and prevent it from being written. Pushing Oracle Dataguard as protection against corruption and lost writes; Flashback allows recovery of data; high-performance backup and recovery.


Now bashing MySQL over the number of cores it supports – except watch those license fees escalate with the number of CPUs you have. Of course, if you are sharding, would you not just add more nodes and shard at a finer grain!?!

Talking about scaling to large memory as well.

Best to integrate with mid-tier caching, in particular memcached. You must invalidate the mid-tier cache when the data changes so the cache is always in sync with the data. With MySQL you apparently need database triggers to identify updated rows, or you have to change MySQL to log additional data.

Oracle's idea is to use LogMiner! This directly returns the primary keys of all changed rows from the redo logs in order to refresh the front-end cache – I'm not sure how magic that sounds!
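As a toy sketch of that invalidation flow (my own illustration – the dict stands in for memcached and the key list stands in for what LogMiner would extract; none of these names are Oracle or memcached APIs):

```python
# Toy sketch of redo-driven cache invalidation: `cache` stands in for
# memcached and `changed_pks` for the primary keys LogMiner would
# surface from the redo logs. Nothing here is a real Oracle API.

def invalidate(cache: dict, table: str, changed_pks: list) -> int:
    """Evict cached entries for rows reported as changed.

    Cache keys follow a hypothetical "<table>:<pk>" convention; a real
    deployment would issue memcached delete() calls instead of pop().
    """
    evicted = 0
    for pk in changed_pks:
        if cache.pop(f"{table}:{pk}", None) is not None:
            evicted += 1
    return evicted

cache = {"users:1": "alice", "users:2": "bob", "orders:9": "widget"}
invalidate(cache, "users", [1, 3])  # users:1 evicted; users:3 was never cached
```

The appeal of mining the redo stream is that it needs no triggers and no application changes – the invalidation feed comes for free from data the database already writes.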

Application Changes

Need to be able to make application changes quickly and safely. Each topic has an example of a website that went tits up (I'm assuming these are not Oracle shops). An advert for Real Application Testing: Oracle has better performance diagnostics – AWR, ADDM, ASH – and is more instrumented than MySQL.

Growth Happens

The number of shards and the data volume will increase, and working with lots of anything is more difficult. The ideal architecture would allow you to further partition each shard. A slagging-off of MySQL regarding partitioning.

Now mentioning partitioning – but I'm not sure how that helps you shard a db into 2 db's – exchanging partitions is the answer to move data around. Partitioning is transparent to the application.

Complexity Happens

Monitoring many databases is tough – Grid Control is the answer. Nokia manage 500 dissimilar databases with 5 DBAs.

Some quite skeptical questions regarding license fees – if you shard to 50 db's you are paying for 50 Oracle licenses, which will cost a bit more than MySQL! Funnily, one of the questions on licensing costs was from an Oracle employee.

OOW Keynote: Andy Mendelsohn

6 Questions being answered from lots sent in via email

top features of Oracle 11g


Secure Files – a similar performance graph to Juan Loaiza's.


A demonstration of advanced compression: an uncompressed table with 5.5M rows uses 200,000 blocks, while the same table using advanced compression is 55,000 blocks – almost 4x less space. Also shown is graphical explain plan monitoring via Grid Control & a future release of DB Control; this is being trumpeted quite a bit this week. The full table scan query against the compressed table only took half as long.

The downside with compression is that updates/inserts have a slight overhead.
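For reference, the 11g syntax as I understand it – the table names are made up, and in 11.1 the OLTP variant of advanced compression was spelled COMPRESS FOR ALL OPERATIONS:

```sql
-- Illustrative sketch; SALES is a made-up table.
CREATE TABLE sales_compressed
  COMPRESS FOR ALL OPERATIONS   -- 11.1 advanced (OLTP) compression
  AS SELECT * FROM sales;
```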

Active dataguard for query offloading

making it easier to manage the db by automating more & more, potential to have queries tuned automatically.

real application testing – capture a production workload and then replay it against another instance for testing.

A video from customers, featuring among others Plamen Zyumbyulev. The patchset is the database release.

Upgrade Challenges

Mitigating the risk of upgrades by capturing SQL plan baselines: this captures the 10gR2 optimizer plans before you upgrade to 11g. You can then run with your 10g plans after changing the optimizer_features parameter to 11. If the optimizer finds a better plan when running with the 11g optimizer features, it is stored, and you can decide to switch to this plan. This gives you good plan stability after an upgrade.
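My hedged reading of that workflow in 11g SQL Plan Management terms – the version strings are examples, not anything quoted in the keynote:

```sql
-- Sketch of the capture-then-upgrade flow; not taken from the keynote.
-- 1. Capture the existing plans as baselines:
ALTER SESSION SET optimizer_features_enable = '10.2.0.4';
ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;
-- ... run the critical workload here so its plans are captured ...
-- 2. Switch the optimizer to 11g features; accepted baselines keep
--    the old plans in force while better 11g plans are merely stored:
ALTER SESSION SET optimizer_features_enable = '11.1.0.6';
ALTER SESSION SET optimizer_capture_sql_plan_baselines = FALSE;
```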

The real ideal is to create a test 11g environment and use RAT.

What is new from development

first patchset for 11g now shipping

applications being certified on 11g & database vault

in memory database cache new enterprise edition option

RAT also backported to 9i & 10g

Cloud computing for development and backups

apex 3.1, 3.2

How to migrate from Oracle forms

APEX is the solution – you can import Oracle Forms apps into APEX.

What is MAA

mentions ASM & FRA and dataguard.

Backup to the Amazon cloud uses Secure Backup 2.0 and RMAN; this is demoed mostly using Grid Control.

Oracle have partnered with Amazon to create virtual images running on Amazon EC2; this will allow you to deploy Oracle in the cloud really quickly.

Now there is a guy talking on his mobile telephone right next to me!?! WTF!

Andy claiming 11g is the easiest oracle version to migrate to.

Plugging for Larry’s keynote and Xtreme performance.

The demos really did not work all that well – they were typed into a terminal using a really, really small font. I'm halfway down the hall and I can't see a thing.

OOW: Day 1

Next Generation Performance & Scalability – Juan Loaiza

This presentation was really a run through of 11g features.

I was expecting a little bit more about what was coming rather than a review of current 11g features; this talk could have been delivered last year, I think, or renamed “this generation” rather than next! Oracle are being really, really tight-lipped about any 11gR2 new features.

Leader in industry performance benchmarks; price/performance leader in TPC-C – with SE-1 of course, probably not Enterprise Edition.

A review of 25 years of improvements in terms of scalability, execution, availability, storage & management.

Juan thinks by 2010 there will be the first 1000TB db, the first 1000-processor-core db & the first terabyte buffer cache!

11g Innovations

Result cache – uses memory on the db server to cache SQL statement results; a write to the table invalidates the cache.

OCI client result cache – useful for small read-intensive tables; can save trips to the db. The client cache is always consistent.
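A quick sketch of what using the result cache looks like – the table and columns are invented for illustration:

```sql
-- Illustrative query; SALES is a made-up table.
SELECT /*+ RESULT_CACHE */ region, SUM(amount)
FROM   sales
GROUP  BY region;
-- Any DML against SALES invalidates the cached result, which is why
-- the cache (and the OCI client cache) stays consistent.
```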

TimesTen in-memory db sits on the app tier – talking about microsecond response times.

Database resident connection pooling is like a 3rd way of doing db connections, similar to the web server connection pooling model; useful for apps that connect and disconnect a lot.

adaptive cache fusion protocols highly optimized for common operations.

active dataguard – more efficient than logical replication.

native pl/sql & java compilation

Compression – claiming little overhead for select statements, as they read the compressed data directly; more overhead for inserts.

An interesting graph comparing Secure Files versus storing the data on ext3: the old LOB method loses quite badly to ext3, while Secure Files is comparable in terms of performance. Some jiggery-pokery with filesystem journaling.

A DirectNFS graph shows much better scalability as you increase the number of NICs compared to the normal NFS client; the other advantage is that it works on various O/S platforms.

Oracle Grid 2.0: A Preview – Bob Thorne

Grid 2.0 is defined as a policy-based infrastructure where you set policies that define your performance requirements for various applications, and the infrastructure can dynamically change to meet these requirements; this is particularly related to peaks and troughs in workload.

Zero unplanned and planned downtime. They hope to extend online patching to upgrades and application changes – maybe in

Root cause analysis at the OS level for diagnosis of cluster problems; this is available on OTN.

Persistent storage for all data – ASM to support all filesystems – ASM to be used like a normal volume manager with a cluster filesystem on top. Are they separating the filesystem from the volume manager?

There was not too much meat to this one.