Next Generation Performance & Scalability – Juan Loaiza
This presentation was really a run-through of 11g features.
I was expecting a little more about what was coming rather than a review of current 11g features; this talk could have been delivered last year, or renamed "this generation" rather than "next"! Oracle are being really, really tight-lipped about any 11gR2 new features.
Leader in industry performance benchmarks; price/performance leader in TPC-C – on SE1 of course, probably not Enterprise Edition.
A review of 25 years of improvements in terms of scalability, execution, availability, storage & management.
Juan thinks that by 2010 there will be the first 1,000TB database, the first 1,000-processor-core database and the first terabyte buffer cache!
Result cache – uses memory on the db server to cache SQL statement results; a write to an underlying table invalidates the cached result.
OCI client result cache – useful for small, read-intensive tables; can save round trips to the db.
The client cache is always consistent.
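This is not how Oracle implements it, but the write-invalidates-cache idea from the notes above can be sketched in a few lines of toy Python (hypothetical names, purely illustrative):

```python
# Toy sketch of a server-side result cache: query results are kept in
# memory, tagged with the tables they depend on; a write to a table
# throws out every cached result that depends on it.

class ResultCache:
    def __init__(self):
        self._cache = {}  # query text -> (dependent tables, cached rows)

    def get(self, query):
        entry = self._cache.get(query)
        return entry[1] if entry else None   # None means cache miss

    def put(self, query, tables, rows):
        self._cache[query] = (frozenset(tables), rows)

    def invalidate(self, table):
        # called when a write to `table` is committed
        self._cache = {q: e for q, e in self._cache.items()
                       if table not in e[0]}

cache = ResultCache()
cache.put("SELECT COUNT(*) FROM emp", ["emp"], [(14,)])
print(cache.get("SELECT COUNT(*) FROM emp"))   # [(14,)] – cache hit
cache.invalidate("emp")                        # an UPDATE on emp arrives
print(cache.get("SELECT COUNT(*) FROM emp"))   # None – must re-execute
```

The real feature tracks dependencies at a much finer grain, but the trade-off is the same: great for read-mostly tables, wasted effort for write-heavy ones.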
TimesTen in-memory db sits on the app tier – they're talking about microsecond response times.
Database resident connection pooling – like a third way of doing db connections, similar to the web server connection pooling model; useful for apps that connect and disconnect a lot.
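The win for connect/disconnect-heavy apps is easy to see with a generic pooling sketch (toy Python, not DRCP itself – the real thing pools server processes inside the database):

```python
# Toy connection pool: pay the expensive connect cost once, up front;
# afterwards "connecting" is just handing out an existing session.
import queue

class ConnectionPool:
    def __init__(self, size, connect):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())   # real connects happen only here

    def acquire(self):
        return self._pool.get()         # cheap: reuse a pooled session

    def release(self, conn):
        self._pool.put(conn)            # cheap: return it, don't tear down

# an app that "connects" per request keeps reusing the same sessions
pool = ConnectionPool(2, connect=lambda: object())
conn = pool.acquire()
# ... do work ...
pool.release(conn)
```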
Adaptive Cache Fusion protocols, highly optimized for common operations.
Active Data Guard – more efficient than logical replication.
Native PL/SQL & Java compilation.
Compression – they're claiming little overhead for select statements, as the compressed data is read directly; more overhead for inserts.
Interesting graph comparing SecureFiles with storing the data on ext3: the old LOB method loses quite badly to ext3, while SecureFiles is comparable in terms of performance. Some jiggery-pokery with filesystem journaling.
A Direct NFS graph showed much better scalability as you increase the number of NICs, compared to a normal NFS client. The other advantage is that it works on various O/S platforms.
Oracle Grid 2.0: A Preview – Bob Thorne
Grid 2.0 is defined as a policy-based infrastructure: you set policies that define your performance requirements for various applications, and the infrastructure can dynamically change to meet those requirements. This is particularly relevant to peaks and troughs in workload.
Zero unplanned and planned downtime. They hope to extend online patching to upgrades and application changes – maybe in 11gR2.
Root cause analysis at the OS level for diagnosing cluster problems; this is available on OTN.
Persistent storage for all data – ASM to support all filesystems, i.e. ASM to be used like a normal volume manager with a cluster filesystem on top. Are they separating the filesystem from the volume manager?
There was not too much meat to this one.