OCI Bare Metal Pricing

In the same vein as the VM shapes, the available OCI Bare Metal shapes that you can use are hardly a secret and are easily obtainable from the Oracle cloud website.

However, I still find the way Oracle displays them as frustrating as with the VM shapes discussed previously.

As before, there are a bunch of (essentially) deprecated BM shapes shown in there. For every shape with an asterisk next to it, you are advised to use it only if you are already using it, and not to deploy new instances on those shapes.

Starting with the Standard BM instance type, it’s again almost a 50/50 split between shapes with and without an asterisk – 2 out of 5 essentially should NOT be used. With the Dense IO type it is exactly 50/50 – 1 of the 2 available shapes should NOT be used. I don’t encounter the HPC shapes, so will pass over those.

That doesn’t give folks an easy overall understanding of what is available out there and what they should be using.

Then the pricing table only shows the price per OCPU. Now, I get that this makes sense for the VM pricing, as the same type can be provisioned with differing numbers of OCPUs, but for the Bare Metal machines it is an all-or-nothing deal; you can’t provision half the OCPUs on a bare metal machine.

Third, I still find it mightily irritating having to have 2 browser tabs open to see what I can get and how much it is going to cost, and having to get my calculator out to multiply the per-OCPU cost by the number of OCPUs in the bare metal instance.

And finally, it doesn’t aid my understanding having the price on a per-hour basis. Now, I know some folks might like that and intuitively understand what a good price per hour is, but in the Oracle world it’s rare for production instances to be spun up on an hourly basis. They tend to be more at the 24×7 end of things. I accept test/dev/QA etc. can and should be different, but I still want to see pricing easily on a yearly basis.

I built another of my own little tables with the info that I want:

So essentially the available Bare Metal shapes boil down to the above. I’ve included the dense I/O BM shape, which has 51.2 TB of NVMe devices. You always pay for the entirety of the OCPUs in the machine, so I’m not convinced of the utility of displaying a price per OCPU.
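To get that per-machine yearly number in one place, here’s a minimal Python sketch. The shape names match the BM line-up discussed above, but the per-OCPU hourly rates are illustrative placeholders rather than Oracle’s published prices:

HOURS_PER_YEAR = 24 * 365  # Oracle bills per OCPU per hour

# shape -> (OCPU count, illustrative per-OCPU hourly rate)
shapes = {
    "BM.Standard2.52 (Intel)": (52, 0.0638),
    "BM.Standard.E2.64 (AMD)": (64, 0.0300),
    "BM.DenseIO2.52 (Intel)":  (52, 0.1275),
}

for name, (ocpus, rate) in shapes.items():
    # whole-machine price: you always pay for every OCPU in the box
    yearly = ocpus * rate * HOURS_PER_YEAR
    print(f"{name}: {ocpus} OCPU x {rate}/OCPU/hr = {yearly:,.0f}/year")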

Graphically these look like:

Clearly you pay a lot for the DenseIO2 shape, and you need a minimum of 8 OCPUs as well, so it is going to cost a lot in comparison to the other types. Also, just like in the VM case, it is clear the BM shapes with Intel CPUs are twice as expensive as the AMD types.

How favourably this compares to pricing on-premises is a discussion for another day!

OCI 1st Year Ramp-Up Savings

In this simple example of the benefit of an Annual Universal Credit, imagine a customer who is looking to build infrastructure that has a total cost of 40,000 a month. The currency doesn’t matter, as it’s illustrative only.

Once they have built their infrastructure and have reached the steady state of 40,000 a month, over an entire year they will consume 480,000 worth of credits.

However, you are never going to start a project on day 1 and immediately spin up the entire infrastructure – while that would be impressive from a speed perspective, it’s even more unlikely you’ll be ready to migrate your entire estate on that day. You are never going to go from 0 -> 100s of instances/databases etc. in very short order.

It is way more likely you are going to have a gradual ramp up as you migrate infrastructure and applications to the cloud. Your VM and PaaS usage will slowly increase, and your storage consumption will slowly go up, etc.

Therefore, in this 1st year, while you are doing your migrations to OCI, you will not need to consume the entirety of your commitment straight away.

I’ve modelled 3 scenarios of ramp-ups over an entire year. Once we reach steady state, we consume the entirety of the 40,000 of credits per month.

Ramp Up Scenario: 1

In this scenario we consume 10K a month of services for 3 months, then 20K for another 3 months, then 30K for 3 more months, and finally we land at the full 40K for the rest of the year and going forward:

M1 -> M3: 10K
M4 -> M6: 20K
M7 -> M9: 30K
M10 -> M12: 40K

Essentially we are ramping up by 1/4 of the overall end-state consumption each quarter.

Ramp Up Scenario: 2

In this scenario we ramp up 10% in month 1, then 20% in month 2, and so on, reaching our end state of 100% consumption in month 10:

10%, 20%, 30%, 40%, ….. 100%

M1: 10%
M2: 20%
M3: 30%
.
.
M10 -> M12: 100%

Ramp Up Scenario: 3

In our final scenario, we start with exactly 1/12 of the consumption and add 1/12 each and every month for the 12 months:

1/12, 2/12, 3/12, 4/12, ….. 12/12

M1: 1/12
M2: 2/12
M3: 3/12
.
.
.
M12: 12/12

This is how these play out over a year:

I find it amusing that the first two scenarios lead to exactly the same place – both consume 300,000 over the first year – and I did NOT deliberately set out for that to be the result.

Remember, the end-state consumption is 40,000 a month, or 480,000 over a year. By having a ramp-up you can make huge savings – up to 45% in my example.
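If you want to reproduce those figures, here’s a minimal Python sketch of the three ramp-up profiles against the 40,000 a month steady state:

STEADY = 40_000  # end-state monthly consumption
MONTHS = range(1, 13)

# Scenario 1: step up by a quarter of end state each quarter
s1 = [10_000] * 3 + [20_000] * 3 + [30_000] * 3 + [40_000] * 3

# Scenario 2: +10% a month, flat at 100% from month 10 onwards
s2 = [STEADY * min(m, 10) / 10 for m in MONTHS]

# Scenario 3: +1/12 of end state every month
s3 = [STEADY * m / 12 for m in MONTHS]

for name, profile in (("Scenario 1", s1), ("Scenario 2", s2), ("Scenario 3", s3)):
    total = sum(profile)
    saving = 1 - total / (12 * STEADY)
    print(f"{name}: year-1 consumption {total:,.0f}, saving {saving:.1%}")

Scenarios 1 and 2 both come to 300,000 (a 37.5% saving), while scenario 3 comes to 260,000, which is where the roughly 45% saving comes from.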

One note of caution: on ExaCS you need to be aware that the base price cannot be ramped. As soon as you switch on (say) a quarter rack, you are paying that base price whether you consume 1 OCPU or the maximum.

OCI VM Shape Pricing

While the available OCI VM shapes that you can use are hardly a secret and are easily obtainable from the Oracle cloud website, I find it frustrating on several counts.

First, there are a bunch of (essentially) deprecated VM shapes in there. For every shape with an asterisk next to it, you are advised to use it only if you are already using it, and not to deploy new instances on those shapes.

It’s actually almost a 50/50 split between shapes with and without an asterisk. That doesn’t give folks an easy overall understanding of what is available out there and what they should be using.

Next, because the different OCPU-sized VMs of the same type are all listed, the description table is unnecessarily long. The pricing table doesn’t do this: the price scales linearly with OCPU count, so it doesn’t need to display a bunch of redundant information – you know a 2 OCPU VM is going to cost you double what a 1 OCPU VM of the same type does.

Third, I find it mightily irritating having to have 2 browser tabs open to see what I can get and how much it is going to cost. The pricing table doesn’t quite display enough info, while the description table doesn’t have the pricing. 😦

And finally, it doesn’t aid my understanding having the price on a per-hour basis. Now, I know some folks might like that and intuitively understand what a good price per hour is, but in the Oracle world it’s rare for production instances to be spun up on an hourly basis. They tend to be more at the 24×7 end of things. I accept test/dev/QA etc. can and should be different, but I still want to see pricing easily on a yearly basis.

To that end, I decided to build my own little table with the info that I want.

So essentially the available VM shapes boil down to the above. I’ve included the dense I/O VM shapes, which have NVMe devices. Pricing is always on a per-OCPU basis, so clearly the more OCPUs you need, the higher your bill is going to be.

Graphically these look like:

Clearly you pay a lot for the DenseIO2 shapes, and you need a minimum of 8 OCPUs as well, so they are going to cost a lot in comparison to the other types. It is also clear that the VM shapes with Intel CPUs are twice as expensive as the AMD types.

You might want to consider whether you are getting 2x the performance from the Intel silicon.
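If you want to put numbers on that question, here’s a rough Python sketch. The 2x price gap reflects the pricing noted above, but the relative performance figure is purely hypothetical – substitute the ratio from your own benchmarks:

amd_rate = 0.03    # illustrative per-OCPU hourly rate for an AMD shape
intel_rate = 0.06  # twice the AMD rate, per the 2x gap noted above
intel_perf = 1.3   # HYPOTHETICAL: an Intel OCPU doing 1.3x the work of an AMD OCPU

# cost per unit of work done: lower is better
print(f"AMD  : {amd_rate / 1.0:.4f} per unit of work")
print(f"Intel: {intel_rate / intel_perf:.4f} per unit of work")
# Unless the Intel shapes deliver ~2x the throughput, AMD wins on price/performance.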

Exadata Cloud Service Prices

The OCI Exadata Cloud Service is well described by this document; in particular, toward the end of the document there is a useful table that outlines what you get with each size of Exadata rack in terms of CPU/memory/storage etc. Note it does differ from the on-premises options.

With ExaCS the pricing is slightly more complicated than with other products, as you have the cost of a rack component, which scales linearly going from 1/4 to 1/2 and then to full, AND then you have to add in the number of OCPUs that you require. Clearly the number of OCPUs you need is going to have an impact on the size of ExaCS rack you need to provision.

While March feels a very, very long time ago – there are some weeks of 2020 in which months’ worth of events seem to happen – back then I was pricing up some Exadata Cloud Service components for a customer.

Recently I had to revisit the calculation and noticed the price had come down in the space of a few months. From the March quote we see:

So a pair of X8 quarter-racks were coming in at £21,389 (note British Pounds), or £10,695 each.

Whereas with my new quote in July 2020, we have:

So it has now gone down to £17,111 a pair or £8,555 each.

This is a 20% reduction in cost!

It is also worth noting that the cost per OCPU has NOT changed; it has remained constant at £0.2556 per hour.

Another point: according to the price list, the X8 ExaCS is actually cheaper than provisioning the X7 variety. And it’s not even close – the X8 base price comes in at £11.4997 per hour, while the X7 is a whopping £17.0366, which is nearly 50% higher!

Across an entire year you’d be some £50K worse off provisioning an X7. Clearly Oracle really want you on the X8. It’s a good deal for the customer to be paying less to be given the newer hardware.
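As a back-of-the-envelope check on that £50K figure (in Python), assuming a 24×7 instance billed for all 8,760 hours in the year:

HOURS_PER_YEAR = 24 * 365  # 8,760

x8_base = 11.4997  # GBP per hour, quarter-rack base, July 2020 price list
x7_base = 17.0366  # GBP per hour

premium = x7_base / x8_base - 1
extra_per_year = (x7_base - x8_base) * HOURS_PER_YEAR

print(f"X7 base premium over X8: {premium:.1%}")          # ~48.1%
print(f"Extra cost per year: GBP {extra_per_year:,.0f}")  # ~GBP 48,500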

There is no price advantage with the OCPU component of the bill; you pay the same price whether that OCPU is in an X7 or an X8.

OCI Annual Universal Credit

This is potentially delving deeper into the contracting weeds than most folks of a technical nature would wish. But as I indicated in the previous post, when it comes to cloud, cost is all-important, so it is worthwhile knowing some of the complexities of OCI billing and consumption.

An Oracle document that goes into the details is the Oracle PaaS and IaaS Universal Credits Service Descriptions. It’s not exactly for the faint-hearted, but it describes what all the terms related to the OCI services mean. From what you might think would be straightforward, OCPU, but which actually has an exceedingly long definition, to the more succinct Port Hour, all the billing metrics for the various OCI services are there.

Also in there are the definitions of how the billing works, including the new Annual Flex (or Annual Universal Credit), which it seems replaces Monthly Flex (unless you get “special” Oracle approval):

Your Cloud Services Account will be charged based on one of the following payment/billing models:

1: Annual Universal Credit,

2: Pay as You Go.

The Annual Universal Credit is defined as follows (slightly paraphrased from the doc to remove redundant wording, though keeping the letter case; the highlighting is mine):

Annual Universal Credit

Oracle allows You the flexibility to commit an amount to Oracle to be applied towards the future usage of eligible Oracle IaaS and PaaS Services.

An Annual Universal Credit amount must be used within its applicable yearly Credit Period during the Services Period and will expire at the end of that yearly Credit Period; any pre-paid unused amounts are non-refundable and are forfeited at that time.

The pre-paid balance of the Total Credit Value will be decremented on a monthly basis reflecting Your actual usage for the prior month at the rates for each activated Service

So, as I see it, you commit to spend a certain amount a year. What you use each month is deducted from that total, and if you have anything left at the end of the year, it’s gone.

Next there is the definition of the Monthly option, which used to be the standard Monthly Flex option (and is now available only at the discretion of Oracle):

Monthly Universal Credit (subject to Oracle approval)

Oracle allows You the flexibility to commit an amount to Oracle to be applied towards the future monthly usage of eligible Oracle IaaS and PaaS Services

You agree that You will consume each month during the Services Period a combined total equal to at least the Credit Quantity amount

The Monthly Universal Credit amount must be used within each month and will expire at the end of that month; any unused amounts are non-refundable and are forfeited at that time.

The Monthly Universal Credit balance shall be decremented on a monthly basis reflecting Your actual usage… If, by the end of any month during the Services Period, You have not consumed Services in an amount equal to the Monthly Universal Credit, Oracle will decrement Your account for the credit shortfall for that month

I think it is quite clear the Annual Universal Credit is far more beneficial and flexible to the customer. You buy a pot of credits for a year, and you can consume that pot at any time during the year. It could be all in the final month, or spread evenly over the entire year.

With the Monthly Universal Credit option, essentially your pot is sliced into 12 equal-sized pots, each to be used in its month on a use-it-or-lose-it basis.

The big savings come if you are performing a migration and are ramping up, say over an entire year. If your average usage over that year is only 70% of the end state, you can commit to that lower level of spend, and what you save at the beginning of the ramp-up you can consume towards the end of the year.
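To make the contrast concrete, here’s a minimal Python sketch comparing the two models for an illustrative ramp-up profile (reaching 100% in month 10), against a 40,000 a month end state:

END_STATE = 40_000  # monthly consumption once fully migrated
usage = [END_STATE * min(m, 10) / 10 for m in range(1, 13)]  # illustrative ramp

# Annual Universal Credit: one yearly pot, drawn down as you actually consume
annual_commit = sum(usage)

# Monthly model: each 1/12 slice is use-it-or-lose-it, so the slice has to
# cover the end-state month, and early-month shortfalls are forfeited
monthly_commit = 12 * END_STATE
forfeited = sum(END_STATE - u for u in usage)

print(f"Annual model commit : {annual_commit:,.0f}")        # 300,000
print(f"Monthly model commit: {monthly_commit:,.0f}")       # 480,000
print(f"Forfeited monthly credits: {forfeited:,.0f}")       # 180,000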

I think the customer is the clear winner with this change.

OCI Discount Levels

One of the biggest drivers in any cloud migration is the economics. Customers are continually pushing to save on IT spending, and any cloud migration that I’ve come across has to show a reduction in spend compared to what the customer is currently spending.

When consuming OCI services, customers have a choice of options on how to pay. What you actually buy are so-called Universal Cloud Credits. You do not pay for individual services, but for a set amount of cloud credits that can be used across the entirety of the OCI offering. This means you are not stuck with a service you no longer want while having to pay extra for something different that you do want but hadn’t thought of to start with. You have the flexibility to change your mind.

The two options for obtaining cloud credits are:

Pay As You Go (PAYG)

No upfront costs, billed in arrears; depending on how much you consume, you will only ever pay for what you use.

Monthly Flex

Billed annually in advance for a committed amount, regardless of whether you even spin up an OCPU or consume a GB of storage. The minimum term is 12 months; essentially you pay for a year in advance, but can only consume 1/12 of the total per month.

If you don’t consume your entire 1/12, it currently doesn’t carry forward – use the 1/12 or lose it.

Note: this monthly burn-down may be about to change to an annual burn-down.

So, looking at the above, you might ask why on earth anyone would be interested in Monthly Flex when it seems a whole lot less flexible than PAYG. And of course the answer comes down to cost.

PAYG is billed at the standard metered rate, while Monthly Flex can carry a discount – and this discount can be rather significant. So it is much more appropriate/sensible for enterprise customers to go with the Monthly Flex rate.

Clearly this is not just an act of charity on Oracle’s part, as they benefit from having a customer tied into a fixed level of spend for a fixed period of time, so you could say everyone is a winner with Monthly Flex.

The Monthly Flex discount level is dependent on both the level of spend AND the duration of the commitment.

**** Update 22/07/2020 Discount levels may have changed with Oracle moving to Annual Universal Credits ****

As you’d expect, the higher the spend and the longer the commitment the greater the discount level you are going to scoop up. We see a range from a somewhat miserly 5% discount all the way up to a whopping 45%.

**** Update 22/07/2020 spend levels to obtain a discount is post discount application ****

The above monthly spend is what you need to be spending *post* discount. To secure these discount percentages, your pre-discount service consumption needs to follow the below:

It is worth noting that sometimes it is worth spending a bit more to obtain the higher level of discount, and the cost estimator even does that for you by adding a line item (additional cloud credit amount) of the following type:

Upgraded cloud credits for higher discount and lower overall price

So, for example, if you have a requirement for services amounting to a monthly spend of $12,000 and want to go for a 3-year contract, you would be entitled to a 15% discount:

$12,000 x 0.85 = $10,200

So your total spend would be $10,200 per month, as above.

Whereas what you should do is obtain $12,500 of cloud credits and secure the 20% discount:

$12,500 x 0.80 = $10,000

This means you are paying less for more.
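Here’s a rough Python sketch of that “buy up to the next tier” logic. The tier table is illustrative, patterned on the $12,000/15% and $12,500/20% example above rather than taken from Oracle’s current terms:

# (minimum monthly credit amount, discount) -- illustrative 3-year tiers
TIERS = [
    (0,      0.10),
    (10_000, 0.15),
    (12_500, 0.20),
]

def best_commitment(required):
    """Find the commitment with the lowest post-discount monthly cost
    that still covers the required credits."""
    options = []
    for floor, discount in TIERS:
        commit = max(required, floor)  # buy extra credits to reach the tier
        options.append((commit * (1 - discount), commit, discount))
    return min(options)  # tuples compare on cost first

cost, commit, discount = best_commitment(12_000)
print(f"Commit to {commit:,} credits at {discount:.0%} off -> pay {cost:,.0f}/month")
# -> Commit to 12,500 credits at 20% off -> pay 10,000/month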

Clearly, as the table shows, the differential increases the higher the spend. The discount increase between the $5K, $10K, $25K, and $50K tiers is a constant 5%, but when you get to $100K the increase jumps to 10%.

This is a direct link to the OCI estimator tool so you can see for yourself how much your requirements will cost, and the discount you’ll achieve.

**** Update 22/07/20 The estimator tool seems to have now removed flex discount levels entirely ****

UKOUG Systems Event and Exadata Content

I’ve been involved in organising a couple of upcoming UKOUG events.

I will be involved with the engineered systems stream for the annual UKOUG conference, which, after an absence of a couple of years, is returning to being held in Birmingham.

While the planning for this is at a very early stage, Martin Widlake will be giving you the inside scoop on this.

The event I really want to talk about, though, is much more immediate:

The UKOUG Systems Event: a one-day, multi-stream event being held in London on May 20th.

This event will feature at least 1 and possibly 2 Exadata streams. I am sure we will have a really good range of speakers with a wealth of Exadata experience.

In addition to Exadata there will be a focus on other engineered systems platforms, as well as Linux/Solaris and virtualisation – a wide range of topics covered across a number of different streams. If you feel you have a presentation that might be of interest, either submit a paper or feel free to get in touch with me to discuss further.

Note the submission deadline is 18th March.

But the really big news is that the event is likely to feature some serious deep-dive material from Roger Macnicol. Roger is one of the people within Oracle actually responsible for writing the smart scan code.

If you want to understand Exadata smart scans, you will not be able to get this information anywhere else in the whole of Europe.

I had the privilege of seeing Roger present at E4 last year, and the information he provides is so good that even super-smart people like Tanel Poder were scribbling down a lot of what Roger was saying.

So to repeat: if you are interested in knowing how smart scan works, we are hoping to provide a talk with a level of detail that is only possible when one of the people responsible for smart scan inside Oracle comes to give it. In addition to this, he will be presenting on BDA.

If all that was not enough, there should be a nice relaxed social event at the end of the conference where you will be able to chat over any questions you may still have!

In Enkitec We Trust

The 2nd of March 2015 was my first day as part of the Accenture Enkitec Group.

When I first started using Exadata back in 2011, the one thing I relied on more than anything else to get me up to speed was the original Expert Oracle Exadata book by Kerry Osborne, Randy Johnson, and Tanel Poder. I am equally sure the 2nd Edition will prove just as valuable.

e-dba years

I have thoroughly enjoyed my past 3 1/2 years with e-dba. Both I and e-dba as a company have grown enormously in this time, and it has been a really positive experience being part of a growing company.

At e-dba I had all the exposure to Exadata I could have wanted; they have many Exadata customers and a large number of Exadata racks under their care.

It was a wrench to leave.

Feeling gravity’s pull

Over the past couple of years I have come to know and appreciate the talents of several members of the Accenture Enkitec Group. Kerry expressed this well in his Oak Table World talk at OpenWorld 2014, describing the effect when recruiting people as being like a “gravitational pull”.

It is certainly something I felt when weighing up my options. The prospect of working with such an outstanding collection of Oracle talent was too hard to ignore.

I would always have regretted not having taken the chance to work in this team.

I can’t wait to get started.

Oracle 12.1.0.2 Bundle Patching

I’ve spent a few days playing with patching 12.1.0.2 with the so-called “Database Patch for Engineered Systems and Database In-Memory”. Let’s skip over why these not necessarily related feature sets should be bundled together into what is effectively a Bundle Patch.

First I was testing going from 12.1.0.2.1 to BP2, i.e. 12.1.0.2.2. Then, as soon as I’d done that, of course BP3 was released.

So this is our starting position with BP1:

GI HOME:

[oracle@rac2 ~]$ /u01/app/12.1.0/grid_1/OPatch/opatch lspatches
19392604;OCW PATCH SET UPDATE : 12.1.0.2.1 (19392604)
19392590;ACFS Patch Set Update : 12.1.0.2.1 (19392590)
19189240;DATABASE BUNDLE PATCH : 12.1.0.2.1 (19189240)

DB Home:

[oracle@rac2 ~]$ /u01/app/oracle/product/12.1.0.2/db_1/OPatch/opatch lspatches
19392604;OCW PATCH SET UPDATE : 12.1.0.2.1 (19392604)
19189240;DATABASE BUNDLE PATCH : 12.1.0.2.1 (19189240)

Simple enough, right? BP1 and the individual patch components within BP1 give you 12.1.0.2.1. Even I can follow this.

Let’s try to apply BP2 to the above. We will use opatchauto for this, and to begin with we will run an analyze:

[root@rac2 ~]# /u01/app/12.1.0/grid_1/OPatch/opatchauto apply -analyze /tmp/BP2/19774304 -ocmrf /tmp/ocm.rsp 
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation.  All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version        : 12.1.0.2.0
Running from       : /u01/app/12.1.0/grid_1

opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/19774304/opatch_gi_2014-12-18_13-35-17_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

Parameter Validation: Successful

Grid Infrastructure home:
/u01/app/12.1.0/grid_1
RAC home(s):
/u01/app/oracle/product/12.1.0.2/db_1

Configuration Validation: Successful

Patch Location: /tmp/BP2/19774304
Grid Infrastructure Patch(es): 19392590 19392604 19649591 
RAC Patch(es): 19392604 19649591 

Patch Validation: Successful

Analyzing patch(es) on "/u01/app/oracle/product/12.1.0.2/db_1" ...
Patch "/tmp/BP2/19774304/19392604" analyzed on "/u01/app/oracle/product/12.1.0.2/db_1" with warning for apply.
Patch "/tmp/BP2/19774304/19649591" analyzed on "/u01/app/oracle/product/12.1.0.2/db_1" with warning for apply.

Analyzing patch(es) on "/u01/app/12.1.0/grid_1" ...
Patch "/tmp/BP2/19774304/19392590" analyzed on "/u01/app/12.1.0/grid_1" with warning for apply.
Patch "/tmp/BP2/19774304/19392604" analyzed on "/u01/app/12.1.0/grid_1" with warning for apply.
Patch "/tmp/BP2/19774304/19649591" analyzed on "/u01/app/12.1.0/grid_1" with warning for apply.

SQL changes, if any, are analyzed successfully on the following database(s): TESTRAC

Apply Summary:

opatchauto ran into some warnings during analyze (Please see log file for details):
GI Home: /u01/app/12.1.0/grid_1: 19392590, 19392604, 19649591
RAC Home: /u01/app/oracle/product/12.1.0.2/db_1: 19392604, 19649591

opatchauto completed with warnings.

Well, that does not look promising. I have no “one-off” patches in this home to cause a conflict; it should be a simple BP1 -> BP2 patching exercise without any issues.

Digging into the logs we find the following:

.
.
.
[18-Dec-2014 13:37:08]       Verifying environment and performing prerequisite checks...
[18-Dec-2014 13:37:09]       Patches to apply -> [ 19392590 19392604 19649591  ]
[18-Dec-2014 13:37:09]       Identical patches to filter -> [ 19392590 19392604  ]
[18-Dec-2014 13:37:09]       The following patches are identical and are skipped:
[18-Dec-2014 13:37:09]       [ 19392590 19392604  ]
.
.

Essentially, out of the 3 patches in the home at BP1, only the Database Bundle Patch 19189240 is superseded by BP2. Maybe this annoys me more than it should, but I like my patches applied by BP2 to end in 2. I also don’t like the fact that the analyze throws a warning about this.

Let’s patch:

[root@rac2 ~]# /u01/app/12.1.0/grid_1/OPatch/opatchauto apply /tmp/BP2/19774304 -ocmrf /tmp/ocm.rsp 
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation.  All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version        : 12.1.0.2.0
Running from       : /u01/app/12.1.0/grid_1

opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/19774304/opatch_gi_2014-12-18_13-54-03_deploy.log

Parameter Validation: Successful

Grid Infrastructure home:
/u01/app/12.1.0/grid_1
RAC home(s):
/u01/app/oracle/product/12.1.0.2/db_1

Configuration Validation: Successful

Patch Location: /tmp/BP2/19774304
Grid Infrastructure Patch(es): 19392590 19392604 19649591 
RAC Patch(es): 19392604 19649591 

Patch Validation: Successful

Stopping RAC (/u01/app/oracle/product/12.1.0.2/db_1) ... Successful
Following database(s) and/or service(s)  were stopped and will be restarted later during the session: testrac

Applying patch(es) to "/u01/app/oracle/product/12.1.0.2/db_1" ...
Patch "/tmp/BP2/19774304/19392604" applied to "/u01/app/oracle/product/12.1.0.2/db_1" with warning.
Patch "/tmp/BP2/19774304/19649591" applied to "/u01/app/oracle/product/12.1.0.2/db_1" with warning.

Stopping CRS ... Successful

Applying patch(es) to "/u01/app/12.1.0/grid_1" ...
Patch "/tmp/BP2/19774304/19392590" applied to "/u01/app/12.1.0/grid_1" with warning.
Patch "/tmp/BP2/19774304/19392604" applied to "/u01/app/12.1.0/grid_1" with warning.
Patch "/tmp/BP2/19774304/19649591" applied to "/u01/app/12.1.0/grid_1" with warning.

Starting CRS ... Successful

Starting RAC (/u01/app/oracle/product/12.1.0.2/db_1) ... Successful

SQL changes, if any, are applied successfully on the following database(s): TESTRAC

Apply Summary:

opatchauto ran into some warnings during patch installation (Please see log file for details):
GI Home: /u01/app/12.1.0/grid_1: 19392590, 19392604, 19649591
RAC Home: /u01/app/oracle/product/12.1.0.2/db_1: 19392604, 19649591

opatchauto completed with warnings.

I do not like to see warnings when I’m patching. The log file for the apply is similar to the analyze: identical patches are skipped.

Checking where we are with GI and DB patches now:

[oracle@rac2 ~]$ /u01/app/12.1.0/grid_1/OPatch/opatch lspatches
19649591;DATABASE BUNDLE PATCH : 12.1.0.2.2 (19649591)
19392604;OCW PATCH SET UPDATE : 12.1.0.2.1 (19392604)
19392590;ACFS Patch Set Update : 12.1.0.2.1 (19392590)

[oracle@rac2 ~]$ /u01/app/oracle/product/12.1.0.2/db_1/OPatch/opatch lspatches
19649591;DATABASE BUNDLE PATCH : 12.1.0.2.2 (19649591)
19392604;OCW PATCH SET UPDATE : 12.1.0.2.1 (19392604)

The only one that has changed is the DATABASE BUNDLE PATCH.

The one MOS document I effectively have on “speed dial” is 888828.1, and that showed BP3 as being available from 17th December. It also had the following warning:

Before install on top of 12.1.0.2.1DBBP or 12.1.0.2.2DBBP, first rollback patch 19392604 OCW PATCH SET UPDATE : 12.1.0.2.1

Let’s see what the analyze makes of BP3 on top of our BP2 homes, before doing any rollback:

[root@rac2 ~]# /u01/app/12.1.0/grid_1/OPatch/opatchauto apply -analyze /tmp/BP3/20026159 -ocmrf /tmp/ocm.rsp 
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation.  All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version        : 12.1.0.2.0
Running from       : /u01/app/12.1.0/grid_1

opatchauto log file: /u01/app/12.1.0/grid_1/cfgtoollogs/opatchauto/20026159/opatch_gi_2014-12-18_14-13-58_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

Parameter Validation: Successful

Grid Infrastructure home:
/u01/app/12.1.0/grid_1
RAC home(s):
/u01/app/oracle/product/12.1.0.2/db_1

Configuration Validation: Successful

Patch Location: /tmp/BP3/20026159
Grid Infrastructure Patch(es): 19392590 19878106 20157569 
RAC Patch(es): 19878106 20157569 

Patch Validation: Successful
Command "/u01/app/12.1.0/grid_1/OPatch/opatch prereq CheckConflictAgainstOH -ph /tmp/BP3/20026159/19878106 -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1" execution failed

Log file Location for the failed command: /u01/app/12.1.0/grid_1/cfgtoollogs/opatch/opatch2014-12-18_14-14-50PM_1.log

Analyzing patch(es) on "/u01/app/oracle/product/12.1.0.2/db_1" ...
Patch "/tmp/BP3/20026159/19878106" analyzed on "/u01/app/oracle/product/12.1.0.2/db_1" with warning for apply.
Patch "/tmp/BP3/20026159/20157569" analyzed on "/u01/app/oracle/product/12.1.0.2/db_1" with warning for apply.

Analyzing patch(es) on "/u01/app/12.1.0/grid_1" ...
Command "/u01/app/12.1.0/grid_1/OPatch/opatch napply -phBaseFile /tmp/OraGI12Home2_patchList -local  -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1 -silent -report -ocmrf /tmp/ocm.rsp" execution failed: 
UtilSession failed: After skipping conflicting patches, there is no patch to apply.

Log file Location for the failed command: /u01/app/12.1.0/grid_1/cfgtoollogs/opatch/opatch2014-12-18_14-15-30PM_1.log

Following step(s) failed during analysis:
/u01/app/12.1.0/grid_1/OPatch/opatch prereq CheckConflictAgainstOH -ph /tmp/BP3/20026159/19878106 -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1
/u01/app/12.1.0/grid_1/OPatch/opatch napply -phBaseFile /tmp/OraGI12Home2_patchList -local  -invPtrLoc /u01/app/12.1.0/grid_1/oraInst.loc -oh /u01/app/12.1.0/grid_1 -silent -report -ocmrf /tmp/ocm.rsp


SQL changes, if any, are analyzed successfully on the following database(s): TESTRAC

Apply Summary:

opatchauto ran into some warnings during analyze (Please see log file for details):
RAC Home: /u01/app/oracle/product/12.1.0.2/db_1: 19878106, 20157569

Following patch(es) failed to be analyzed:
GI Home: /u01/app/12.1.0/grid_1: 19392590, 19878106, 20157569

opatchauto analysis reports error(s).

Looking at the log file, we see that patch 19392604, already in the home, conflicts with patch 19878106 from BP3. 19392604 is the OCW Patch Set Update in BP1 (and BP2), while 19878106 is the Database Bundle Patch in BP3. The log file shows the following:

Patch 19878106 has Generic Conflict with 19392604. Conflicting files are :
                             /u01/app/12.1.0/grid_1/bin/diskmon

That seems messy. It definitely annoys me that to apply BP3 I have to take the additional step of rolling back a previous BP. I don’t recall having to do this with previous Bundle Patches, and I’ve applied a fair few of them.

I rolled the lot back with opatchauto rollback, then applied BP3 on top of the unpatched homes I was left with. Let’s look at what BP3 on top of base 12.1.0.2 gives you:

GI Home:

[oracle@rac1 ~]$ /u01/app/12.1.0/grid_1/OPatch/opatch lspatches
20157569;OCW Patch Set Update : 12.1.0.2.1 (20157569)
19878106;DATABASE BUNDLE PATCH: 12.1.0.2.3 (19878106)
19392590;ACFS Patch Set Update : 12.1.0.2.1 (19392590)

DB Home:

[oracle@rac1 ~]$ /u01/app/oracle/product/12.1.0.2/db_1/OPatch/opatch lspatches
20157569;OCW Patch Set Update : 12.1.0.2.1 (20157569)
19878106;DATABASE BUNDLE PATCH: 12.1.0.2.3 (19878106)

So with BP2 we had patch 19392604, OCW PATCH SET UPDATE : 12.1.0.2.1. With BP3 we still have a 12.1.0.2.1 OCW Patch Set Update, but it has a new patch number (20157569)!

That really irritates.

12c Upgrade and Concurrent Stats Gathering

I was upgrading an Exadata test database from 11.2.0.4 to 12.1.0.2 and I came across a failure scenario I had not encountered before. I’ve upgraded a few databases to both 12.1.0.1 and 12.1.0.2 for test purposes, but this was the first one I’d done on Exadata. And the first time I’d encountered such a failure.

I started the upgrade after checking with the pre-upgrade script that everything was ready to upgrade, and I ran with the maximum amount of parallelism:

$ORACLE_HOME/perl/bin/perl catctl.pl -n 8 catupgrd.sql
.
.
.
Serial Phase #:81 Files: 1 A process terminated prior to completion.

Died at catcon.pm line 5084.

That was both annoying and surprising. The line in catcon.pm is of no assistance:

   5080   sub catcon_HandleSigchld () {
   5081     print CATCONOUT "A process terminated prior to completion.\n";
   5082     print CATCONOUT "Review the ${catcon_LogFilePathBase}*.log files to identify the failure.\n";
   5083     $SIG{CHLD} = 'IGNORE';  # now ignore any child processes
   5084     die;
   5085   }

But what of more use was the bottom of a catupgrd.log file:

11:12:35 269  /
catrequtlmg: b_StatEvt     = TRUE
catrequtlmg: b_SelProps    = FALSE
catrequtlmg: b_UpgradeMode = TRUE
catrequtlmg: b_InUtlMig    = TRUE
catrequtlmg: Deleting table stats
catrequtlmg: Gathering Table Stats OBJ$MIG
declare
*
ERROR at line 1:
ORA-20000: Unable to gather statistics concurrently: Resource Manager is not
enabled.
ORA-06512: at "SYS.DBMS_STATS", line 34567
ORA-06512: at line 152

This error is coming from catrequtlmg.sql. My first thought was to check whether the parameter resource_manager_plan was set, and it turned out it wasn’t. However, setting it to the default plan and running this piece of SQL by itself produced the same error:

SQL> @catrequtlmg.sql

PL/SQL procedure successfully completed.

catrequtlmg: b_StatEvt	   = TRUE
catrequtlmg: b_SelProps    = FALSE
catrequtlmg: b_UpgradeMode = TRUE
catrequtlmg: b_InUtlMig    = TRUE
catrequtlmg: Deleting table stats
catrequtlmg: Gathering Table Stats OBJ$MIG
declare
*
ERROR at line 1:
ORA-20000: Unable to gather statistics concurrently: Resource Manager is not
enabled.
ORA-06512: at "SYS.DBMS_STATS", line 34567
ORA-06512: at line 152



PL/SQL procedure successfully completed.

I then started thinking about what it meant by gathering statistics concurrently, and I noticed that I had indeed set this database to gather stats concurrently (it’s off by default):

SQL> select dbms_stats.get_prefs('concurrent') from dual;

DBMS_STATS.GET_PREFS('CONCURRENT')
--------------------------------------------------------------------------------
TRUE

I then proceeded to turn off this concurrent gathering and rerun the failing SQL:


SQL> exec dbms_stats.set_global_prefs('CONCURRENT','FALSE');

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_prefs('concurrent') from dual;

DBMS_STATS.GET_PREFS('CONCURRENT')
--------------------------------------------------------------------------------
FALSE


SQL> @catrequtlmg.sql

PL/SQL procedure successfully completed.

catrequtlmg: b_StatEvt	   = TRUE
catrequtlmg: b_SelProps    = FALSE
catrequtlmg: b_UpgradeMode = TRUE
catrequtlmg: b_InUtlMig    = TRUE
catrequtlmg: Deleting table stats
catrequtlmg: Gathering Table Stats OBJ$MIG
catrequtlmg: Gathering Table Stats USER$MIG
catrequtlmg: Gathering Table Stats COL$MIG
catrequtlmg: Gathering Table Stats CLU$MIG
catrequtlmg: Gathering Table Stats CON$MIG
catrequtlmg: Gathering Table Stats TAB$MIG
catrequtlmg: Gathering Table Stats IND$MIG
catrequtlmg: Gathering Table Stats ICOL$MIG
catrequtlmg: Gathering Table Stats LOB$MIG
catrequtlmg: Gathering Table Stats COLTYPE$MIG
catrequtlmg: Gathering Table Stats SUBCOLTYPE$MIG
catrequtlmg: Gathering Table Stats NTAB$MIG
catrequtlmg: Gathering Table Stats REFCON$MIG
catrequtlmg: Gathering Table Stats OPQTYPE$MIG
catrequtlmg: Gathering Table Stats ICOLDEP$MIG
catrequtlmg: Gathering Table Stats TSQ$MIG
catrequtlmg: Gathering Table Stats VIEWTRCOL$MIG
catrequtlmg: Gathering Table Stats ATTRCOL$MIG
catrequtlmg: Gathering Table Stats TYPE_MISC$MIG
catrequtlmg: Gathering Table Stats LIBRARY$MIG
catrequtlmg: Gathering Table Stats ASSEMBLY$MIG
catrequtlmg: delete_props_data: No Props Data

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

It worked! I was able to upgrade my database in the end.

I wish the preupgrade.sql script would check for this, or indeed that catrequtlmg.sql would disable the concurrent gathering during the upgrade.

I would advise checking for this before any upgrade to 12c, and turning it off if you find it enabled in a database you are about to upgrade.