It always seems cold when it’s time for the UKOUG annual conference in Birmingham, and duly it turned out to be the coldest day of the winter so far. It seems eerily quiet at the conference today, but maybe that is because the UKOUG now holds separate conferences throughout the year for the various apps products, unlike last year.
Tuesday was my first day this year, and with the conference down to three days, it’s going to feel especially brief.
Evaluating and Testing Storage Performance – Luca Canali
This was about how to do performance testing and ensuring the stability of new storage hardware. Luca seemed pretty excited that the LHC is finally coming back on stream, I guess it means floods of data to store.
When testing new hardware, test for both performance and stability; often the critical metric is small-read random IOPS.
CERN have around 2000 disk drives, which typically gives them one drive failure per week. You have to design your systems to cope with failure.
Luca uses Oracle’s ORION tool for testing storage. It uses asynchronous I/O direct to the storage (as ASM would do),
with 8 KB I/O sizes for random I/O.
ORION does not use as much CPU as testing with Oracle itself.
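As a sketch of an ORION run (the flags are from ORION’s documented options; `mytest.lun` is an assumed file name, and the disk count is made up for illustration):

```
# mytest.lun lists the candidate devices, one per line, e.g. /dev/sdb
# "simple" runs the small random and large sequential I/O tests
./orion -run simple -testname mytest -num_disks 4
```

ORION writes its results (IOPS, MB/s, latency) to CSV files named after the test, which is where graphs like the ones Luca showed come from.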
A graph compared FC and iSCSI random IOPS: FC scales linearly while iSCSI does not.
Another graph showed sequential large I/O, with FC again scaling well while iSCSI does not.
This was over 1 Gb Ethernet.
Also tested 10 Gb Ethernet (though that storage was “CERN made” rather than vendor purchased).
Tested up to 42 drives, finding that iSCSI performance varies between Linux versions.
Again, you have to test using a real-world Oracle workload.
For capacity planning, the view v$sysmetric_summary gives read/write I/O request rates.
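A minimal sketch of pulling those figures (the metric names are the documented v$sysmetric ones, but check them on your version):

```sql
-- Average and peak I/O request rates over the summary interval
SELECT metric_name, average, maxval
FROM   v$sysmetric_summary
WHERE  metric_name IN ('Physical Read Total IO Requests Per Sec',
                       'Physical Write Total IO Requests Per Sec');
```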
Oracle Advanced Compression in 11gR2 – Dan Morgan
He sees customers with databases in the 400–800 TB range.
Often, as data volumes expand, performance declines.
Compression can give a way out of that; it is a trade-off between CPU and disk I/O.
Advanced Compression in 11g gives you:
Data Guard network compression
Data Pump compression
OLTP table compression
SecureFiles deduplication in 11gR2 (Dan is using this for storing email in DBFS)
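The SecureFiles deduplication Dan mentioned is enabled in the LOB storage clause; a minimal sketch (table and column names are hypothetical):

```sql
-- Identical LOB contents (e.g. the same email attachment stored
-- many times) are kept once; COMPRESS is an optional extra
CREATE TABLE email_store (
  id      NUMBER PRIMARY KEY,
  message BLOB
)
LOB (message) STORE AS SECUREFILE (
  DEDUPLICATE
  COMPRESS
);
```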
There was an interesting discussion about hybrid columnar compression: it offers far greater compression ratios but is restricted to running on Exadata storage. It appears this was pulled from the normal Enterprise Edition release at the very last minute. Dan suggested it may be because the Exadata boxes have enough CPU to handle the compression, whereas having to do this on your own server would be painful.
Still, it seemed like it may have been a bit of a marketing decision.
11g for developers – Connor McDonald
This was an outstanding presentation. Connor managed to rattle through some 300+ slides many of which were extremely funny.
using a Gapminder-like animation to show database code v application code
querying from a blob/clob via sql
set errorlogging on
gives an audit trail of what has failed
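A minimal sketch of the feature in SQL*Plus (the failing statement is just an example; SPERRORLOG is the default error log table):

```sql
SET ERRORLOGGING ON

-- This fails, and the error is captured rather than lost
SELECT * FROM no_such_table;

-- The audit trail of failures
SELECT username, timestamp, message, statement
FROM   sperrorlog;
```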
Vital Statistics – Julian Dyke
In 11.1 there are 107 routines within the DBMS_STATS package
DBMS_STATS can collect in parallel, but it cannot do a validate structure.
Incremental stats gathering in 11g allows you to collect statistics on just the new partitions, while reusing the “synopsis” from older partitions that have not changed.
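This is switched on per table via DBMS_STATS preferences; a minimal sketch (owner and table names are hypothetical):

```sql
-- Enable incremental statistics, then gather; unchanged
-- partitions are skipped and their synopses reused
BEGIN
  DBMS_STATS.SET_TABLE_PREFS('SCOTT', 'SALES', 'INCREMENTAL', 'TRUE');
  DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'SALES');
END;
/
```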
Compressing very large data sets – Luca Canali
This was Luca’s second presentation of the day, and this was a superb one.
Compressing data can mean less physical I/O and less logical I/O, but will consume more CPU.
Can be useful for data that has an active part with older data made read-only
CERN tested an Exadata machine for a couple of weeks, being particularly interested in hybrid columnar compression (archive). Interesting graphs showed the advantages of the various levels of compression on various CERN datasets.
For basic and OLTP compression, the format of the data block is similar to a “normal” block but uses a symbol table. Luca showed some dumps of rows with compression enabled on them.
OLTP compression is not limited to direct-load operations; it allows normal inserts into the table. The block is compressed when it reaches the PCTFREE value.
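Enabling this is a table attribute; a minimal sketch using the 11.2 syntax (11.1 spelled it COMPRESS FOR ALL OPERATIONS; the table is hypothetical):

```sql
-- Rows from conventional inserts are compressed in bulk
-- once the block fills to its PCTFREE threshold
CREATE TABLE orders (
  order_id NUMBER,
  payload  VARCHAR2(200)
) COMPRESS FOR OLTP;
```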
Hybrid columnar compression uses a completely new block layout. It utilises a compression unit (CU), and data from the same column within the compression unit is stored together. This basically increases how much compression is possible.
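The levels Luca compared are chosen in the compression clause; a minimal sketch (Exadata storage required, table names hypothetical):

```sql
-- ARCHIVE HIGH is the most aggressive level; QUERY LOW/HIGH
-- and ARCHIVE LOW trade ratio against CPU cost
CREATE TABLE sales_archive
  COMPRESS FOR ARCHIVE HIGH
  AS SELECT * FROM sales;
```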
Doing DML on hybrid columnar compressed data means a lock affects the whole CU.
Rows are not identifiable in a block dump when compressed with hybrid columnar compression.
Index lookups use more consistent gets than with no compression.
Compression factors vary greatly with the data. He seems to like both OLTP and hybrid columnar compression.
Luca even mentioned the possibility of turning hybrid columnar compression on with non-Exadata storage, but emphasised that this was just for playing with, not production usage.
Active Dataguard Best Practices – Larry Carpenter
This was another outstanding presentation.
Apple iTunes is replacing their logical standby infrastructure with an Active Data Guard install.
Control how much lag is acceptable to an application
Have Active Data Guard fix corrupt blocks; this works for corruptions on either the primary or the standby (though an application has to access the corrupt block).
need to use services to determine how your application connects to the standby/primary
A note on running statspack on your standby (“standby statspack”): note 454848.1.
The broker in 11.2 allows one command to turn on Active Data Guard; also, when you switch over to an Active Data Guard instance, it will start Active Data Guard on the now old primary!
Query lag in 11.2
Allows you to define an SLA for an application using the session setting STANDBY_MAX_DATA_DELAY.
use a logon trigger to set this for an application.
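A minimal sketch of such a trigger (user and trigger names are hypothetical; the five-second lag limit is just an example):

```sql
-- On the active standby, cap the acceptable apply lag for this
-- application's sessions; queries error out if the lag exceeds it
CREATE OR REPLACE TRIGGER set_sla_on_logon
AFTER LOGON ON reporting_user.SCHEMA
BEGIN
  IF SYS_CONTEXT('USERENV', 'DATABASE_ROLE') = 'PHYSICAL STANDBY' THEN
    EXECUTE IMMEDIATE 'ALTER SESSION SET STANDBY_MAX_DATA_DELAY=5';
  END IF;
END;
/
```

Checking DATABASE_ROLE means the same trigger is harmless when the user logs on to the primary.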
A good day of presentations.
After the last talk of the day, when I turned my mobile phone on, I got a message saying my hotel room at the Copthorne had been cancelled, and another booked at the Jurys Inn. Now, I’d actually checked into the Copthorne earlier, so I’m either going to end up with two rooms or none. If I do have to share my room at the Copthorne, I do hope they don’t snore.