
Waiting for row lock. But which row is locked?


When you are waiting on a locked row, the resource you are actually waiting on is the transaction that has locked the row, so the wait itself gives no information about which row is locked. Here is how to get it from V$SESSION.

In a first session I lock a row in the SCOTT schema:

SQL> select * from SCOTT.SALGRADE where grade=1 for update ;
 
GRADE LOSAL HISAL
---------- ---------- ----------
1 700 1200

And in another session:

SQL> delete from SCOTT.SALGRADE;

Of course it’s waiting.

Let’s see the wait event:

SQL> select sid,state,event,p1raw,p1text from v$session where event like 'enq: %'
 
SID STATE EVENT P1RAW P1TEXT
------ ------- ------------------------------ ---------------- ---------
54 WAITING enq: TX - row lock contention 0000000054580006 name|mode

So I’ve a row lock contention.

Here is more information from V$SESSION about the row that is locked:

SQL> select row_wait_obj#,row_wait_file#,row_wait_block#,row_wait_row# from v$session where event like 'enq: %';
 
ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW#
------------- -------------- --------------- -------------
116760 6 211 0

From there you can find which row is locked. It's straightforward, but I have two warnings about it.

The first warning is that the ROW_WAIT_OBJ# is the OBJECT_ID, not the DATA_OBJECT_ID.
You have to join with DBA_OBJECTS and get the DATA_OBJECT_ID if you want to build the ROWID.

SQL> select owner,object_name,data_object_id from dba_objects where object_id=116760;
 
OWNER OBJECT_NAM DATA_OBJECT_ID
---------- ---------- --------------
SCOTT SALGRADE 116760

The ROW_WAIT_OBJ# is the OBJECT_ID but I need the DATA_OBJECT_ID to get the ROWID. Here it is the same, but just try to truncate the table (and insert back the data) and you will see that it is different.

The second warning is about ROW_WAIT_FILE#, which is the absolute file number.
But you need the relative file number if you want to build the ROWID.

We have to go through DBA_DATA_FILES to get it.

SQL> select relative_fno from dba_data_files where file_id=6;
 
RELATIVE_FNO
------------
6

It’s the same number here, but if you have transported datafiles (or have more than 1024 datafiles), then you will see different numbers.

Remember, the ROWID must change when a table is truncated (in order to be sure that we don’t read old data) and the ROWID must not change when we transport tablespaces (so that we don’t have to update all the blocks in the tablespace).

So now, I can get the ROWID with dbms_rowid and get the row that is locked:


SQL> select * from SCOTT.SALGRADE where rowid=dbms_rowid.rowid_create(1,116760,6,211,0);
 
GRADE LOSAL HISAL
---------- ---------- ----------
1 700 1200

This is the ideal case. You find the row that is locked.
For most mode 6 TX locks (exclusive) you will have that information, because the session visited the row, saw that it was locked, and registered that information in V$SESSION.
If you have a mode 4 TX lock (share) that can be different. You find those when the lock is at a higher level than the row: for example an ITL wait, a unique index entry, or referential integrity (I'm talking about TX mode 4 here, not TM locks). Then the information obtained from V$SESSION is incomplete: there is no row number, it can be an index block, etc.

More information about locks at DOAG 2015.

As a summary, here is the query to get the ROWID for which sessions are waiting on TX locks:
(fixed thanks to comment below.)

select s.p1raw,o.owner,o.object_name,
dbms_rowid.rowid_create(1,o.data_object_id,f.relative_fno,s.row_wait_block#,s.row_wait_row#) row_id
from v$session s
join dba_objects o on s.row_wait_obj#=o.object_id
join dba_segments m on o.owner=m.owner and o.object_name=m.segment_name
join dba_data_files f on s.row_wait_file#=f.file_id and m.tablespace_name=f.tablespace_name
where s.event like 'enq: TX%'

From P1RAW, you have the lock type (5458 is 'TX' in ASCII) and the lock mode: 54580006 is mode 6 (exclusive) and 54580004 is mode 4 (share).
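
If you want to decode the name and mode directly from the numeric P1, here is a quick sketch of a helper query (my own addition, adapted from the classic decoding of the 'name|mode' parameter):

select sid,
       chr(bitand(p1,-16777216)/16777215)||chr(bitand(p1,16711680)/65535) lock_type, -- two ASCII characters, e.g. 'TX'
       bitand(p1,65535) lock_mode                                                    -- e.g. 6 (exclusive) or 4 (share)
from v$session
where event like 'enq: %';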

 



You are in Standard Edition One? Don’t worry.


You want the amazing features of Oracle SQL and PL/SQL at minimal cost. Your database is too large for Oracle XE, but not big enough to justify the cost of Enterprise Edition. You accept doing things manually and having maintenance windows. You are ok with an RTO of 5 minutes (time to switchover with Dbvisit Standby) and an RPO of 10 minutes (archive_lag_target=600). Your servers have no more than 2 sockets.

So you opted for Standard Edition One.
But now you have heard that Standard Edition One is discontinued. What are your alternatives?

This information comes from what we can find in oracle.com documents about Standard Edition Two.
Big thanks to Dominic Giles, Mike Dietrich, Tammy Bednar that helped a lot to understand the rules and how to apply them on real customer situations.

Standard Edition Two

If you want to upgrade to 12.1.0.2 (the first 12c patchset) then you have to go to Standard Edition Two.
From the price list, SE2 costs three times the price of SE1:

Upgrade from Standard Edition One

The price list shows the public price when you buy Standard Edition Two.
However, from Oracle Brief DB SE2 you don’t have to pay the difference when going from SE1 to SE2:


It’s clear: if you are in SE1, it’s free to go to SE2 and the maintenance annual cost is only 20% higher.

So, if you are in SE1 the annual cost increases by 20% and:

  • you can upgrade to latest versions with full support
  • you can put your database in HA with RAC if you want to
  • each database load can have up to 16 sessions running concurrently in CPU
  • you can still have older versions if you need to, without this new limitation

It's always difficult to accept changes that are imposed, but when you consider that you bought SE1 at a price set at a time when sockets had 4 cores, then it is easier to accept the new limitations. They make the Standard Edition model sustainable after all.

VMWare?

A lot of customers I know bought SE1 because they were consolidating their IT on VMware. Enterprise Edition was a no-go because of the cost of licensing all the cores of all servers. And having a dedicated ESX for Oracle databases is the opposite of consolidation if you have only a few databases.

But the price of SE1 was still acceptable when you license all sockets. And, as long as each server has no more than 2 sockets, SE1 was possible there. Don't worry, it's the same rule for SE2: the 2 socket limit is per server.

RAC

Yes. Now you can have RAC in SE2 which was not possible in SE1. This brings service high availability to your database: two instances are running on two nodes. One can fail and the service is still there on the other node with access to all data.

But if you have two sockets in your server, you can't run another instance on another node because of the two-socket limitation. In RAC, SE2 is limited to 2 mono-socket nodes. Here is what you can do:

  • You can change hardware to mono-socket servers
  • You can remove one processor (physically – not BIOS disabled). The 2 sockets limitation is only for occupied sockets.
  • You can virtualize with OVM to use only one socket for oracle. The other one can be used for your application server.

Don't forget, the goal is to have at least 8 cores to run the 8 user sessions. More cores (or hyper-threading) will keep up with background processes, OEM agents, etc. Having two 1s8c16t (1 socket, 8 cores, 16 threads) servers is a good minimal configuration to use the full power of SE2 RAC.

Note that if you are on VMWare with 2-socket servers, then you can’t use RAC in SE2. RAC in SE2 is limited to one socket per server.

Note that RAC is only for service protection. Data protection is achieved with a standby database. There is no Data Guard in Standard Edition, but there are smart alternatives such as Dbvisit Standby.

NUP

Less good news for customers that bought the minimum NUP, which was 5 in SE1 independently of the number of servers. When you go to SE2 the minimum is 10 NUP, and it's per server. It can make a difference if you are on VMware, as you have to count all servers.

11.2.0.4

You don’t need to upgrade to SE2 if you don’t want to pay the additional 20% for support.
You can stay in 11.2.0.4 which is in free extended support until end of January 2016:

But then in February 2016, do you want to pay the extended support?

The cost of extended support is an additional 20%, starting 7 years after the initial release. 11gR2 was released in Sept. 2009, which means that the extended support fee for 11.2.0.4 becomes 20% in Sept. 2016 – the same increase as the SE2 support.

Conclusion: better to go to SE2 before Sept. 2016 than keep SE1. You have same support for 11.2.0.4 databases and in addition to that you will be able to install and upgrade to latest release. We expect 12.2 to be available in Sept. 2016.

Added Oct. 16th, 2015:
Today Oracle announced that the waived support has been extended to May 2017.
See more info in a new blog post

12.1.0.1

Did you already upgrade to 12c? Then you are in 12.1.0.1 which is the latest available for SE1.

It's a different case from 11.2.0.4 because there will be no extended support for it. In September 2016, you will have no PSU anymore. You can stay with sustaining support at no additional cost. But if you want more support you can downgrade to 11.2.0.4 and pay extended support, or go to SE2 and upgrade to 12.1.0.2, or to 12.2 if it is available at that time. Of course, as we have seen previously, there is no point in staying in SE1 and paying the additional 20%, because it's the same cost to go to SE2.

Conclusion: If you are in 12c SE1, stay in sustaining support (no PSU) or pay additional 20% to get full support and features of new versions through SE2.

So what?

If you look at it, unless you don't want support at all, you should upgrade to SE2 in 2016.
Think positive: you can run Oracle with all SE features on 16 cores dedicated to your users at a reasonable cost, even on VMware. And you can also think about getting HA with RAC if you are on physical servers (1 socket) or virtualized with OVM. That was probably unexpected if you bought SE1 several years ago.
SE features are all the Oracle features that can be seen from an application development point of view. EE has a lot of features for administration, protection, performance, etc. But the lower cost of SE leaves time and money to find workarounds if you want to do it yourself. SE is a very good product, affordable for small companies.

An additional remark. If you plan to buy Standard Edition before the end of the year, then it seems that you have the following choices (prices from the public price list):

  • Buy SE2 at 17500$ per socket and support for 3500$ per year
  • Buy SE1 at 5800$ per socket and upgrade to SE2 – support will be 1392$ per year then

Of course, the second solution seems to be the more attractive one, but don’t wait: SE1 will not be available anymore in December 2015.


SQL> select date'2015-12-01' - sysdate "SE1 COUNTDOWN (days)" from dual;
 
SE1 COUNTDOWN (days)
---------------------
72.4965278

 


Do you use SQL Plan Baselines?


I can hear a lot of complaints about the instability coming from the adaptive features introduced into the optimizer logic at each release.
Giving more intelligence to the optimizer is very good to improve the response time for generated queries, BI, ad-hoc reporting.
But when you have an OLTP that works for years with its static set of queries, then you don’t appreciate the plan instability coming from (in reverse chronological order and not exhaustive): SPD, ADS, cardinality feedback, ‘size auto’ histograms, bind variable peeking, CBO, etc.

We can discuss that, and I may agree, but my first question will be:
– do you use SQL Plan Baselines?


It's totally correct to care about plan stability. Many times I've been doing some tuning for a customer who tells me: "I don't care if the query is long. We can accept it or tune it. But when the response time goes from 1 second to 1 hour at random, I can't even test whether I can improve it or not."

Yes, plan instability is a problem. But the solution is not:

  • /*+ RULE */
  • optimizer_index_cost_adj=10
  • _optim_peek_user_binds=false
  • _optimizer_adaptive_cursor_sharing=false
  • _optimizer_use_feedback=false
  • optimizer_adaptive_features=false
  • optimizer_features_enable=8.0.0

(and no, this is not a copy/paste from a SAP note…)

The solution is:

  • SQL Plan Baselines if you are in Enterprise Edition (no option needed when you evolve them manually)
  • Stored outlines (deprecated, but the only way in Standard Edition)

Static SQL

It’s not new. When I started to work on databases, I was using DB2 (the v1 on Unix) and Oracle 7.

With DB2, when the application code was compiled, the queries were optimized at that time (with statistics from database – it’s cost based optimization) and the execution plan was stored into the database (similar to a stored procedure) at deploy time. It’s called the ‘bind process’.
Yes, bind variables come at execution time but the application can be bound to the database at deployment time. This is plan stability for Static SQL.

In Oracle, we were on the RULE optimizer at that time, so plan stability was there: the rules did not depend on data. The same query on the same structure always gives the same plan. Probably for this reason, Oracle has never implemented Static SQL. Even embedded SQL pre-processed at compilation is not optimized at compile time: only syntax and semantics are checked. Oracle always processes Static SQL as Dynamic SQL: it is optimized at runtime. Not at each parse call, because it's cached, but it's cached only in memory (library cache). And at any point, because of invalidation, space pressure on the shared pool, instance restart, etc., the SQL can be hard parsed again.

No problem with RULE. But when Oracle introduced the CBO – the Cost Based Optimizer – things changed. All SQL is considered Dynamic SQL, it can be optimized at any time, and because the cost depends on data (statistics) and a lot of parameters, the plans can change.

And people thought that Oracle didn't care about plan instability because they introduced more and more parameters that can make the plan change: bind variable peeking, rolling invalidation, cardinality feedback, auto mechanisms and adaptive features. But no, the solution to store execution plans has been there since Oracle 8: the outlines. Outlines associate a set of hints that limit the choice of the optimizer in order to always get the same plan. Outlines fix only one possible plan. Now SQL Plan Baselines go further: all plans are stored and you can choose which ones are allowed to be used.
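
For example, once plans are stored you can list them and run the manual evolution to verify and accept new ones. This is just a minimal sketch of that workflow (the sql_handle is a placeholder to replace with yours):

select sql_handle, plan_name, enabled, accepted, fixed, origin
from dba_sql_plan_baselines
order by created;
 
-- verify the non-accepted plans of one statement and accept them if they perform better
variable report clob
exec :report := dbms_spm.evolve_sql_plan_baseline(sql_handle=>'&sql_handle');
print report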

Capture

There are several ways to capture SQL Plan Baselines (capture automatically all statements that have more than 2 executions, capture from library cache with different criteria, or from SQL Tuning Set, or manually by sql_id). But be careful, because – depending on your application design – you may have a lot of statements captured.

And don't capture a lot of statements at a time, because cursors will be invalidated and you risk a peak of hard parsing following the bulk capture.
Here is an example to see the 'reason' of the invalidation:
I have a statement that is executed very often:

SQL> select sql_id,child_number,executions,invalidations from v$sql where sql_id='2zgczymdyvgmq'
SQL_ID CHILD_NUMBER EXECUTIONS INVALIDATIONS
------------- ------------ ---------- -------------
2zgczymdyvgmq 0 98794435 0

I capture the SQL Plan Baseline for it:

SQL> variable loaded number
SQL> exec :loaded:=dbms_spm.load_plans_from_cursor_cache('2zgczymdyvgmq')
anonymous block completed
SQL> print loaded
LOADED
-
1

and after a few seconds, a new child cursor appears:

SQL> select sql_id,child_number,executions,invalidations,sql_plan_baseline from v$sql where sql_id='2zgczymdyvgmq'
SQL_ID CHILD_NUMBER EXECUTIONS INVALIDATIONS SQL_PLAN_BASELINE
------------- ------------ ---------- ------------- ------------------------------
2zgczymdyvgmq 0 98794462 0
2zgczymdyvgmq 1 501 1 SQL_PLAN_56gpu0marthcwcc4c47e7

the reason from V$SQL_SHARED_CURSOR is:

<ChildNode><ChildNumber>0</ChildNumber><ID>4</ID><reason>SQL Tune Base Object Different(3)</reason><size>2x4</size><flags_kestbcci>7</flags_kestbcci><ehash_kestbcci>3581723036</ehash_kestbcci></ChildNode>

control the capture

So my recommendation here is to control the capture. You will have to manage the baselines (see why a plan changed, evolve them, etc). They are stored in SYSAUX. They can be purged after the retention, but if you captured a lot at the same time, then the purge will take a lot of time (and undo records).

So you should control the capture. Here is an example to capture the Top-100 statements by execution count. You can review them, capture 100 more if you think it makes sense, wait a while to see how many have been executed again in the following days, how many have new possible plans, etc.


set serveroutput on
variable loaded number
exec for i in (select * from (select * from (select sql_id,exact_matching_signature from v$sql where plan_hash_value>0 and sql_plan_baseline is null and last_active_time>sysdate-1/24 group by sql_id,exact_matching_signature order by count(executions) desc) where rownum<=100) where exact_matching_signature not in (select signature from dba_sql_plan_baselines)) loop :loaded:=nvl(:loaded,0)+dbms_spm.load_plans_from_cursor_cache(i.sql_id); end loop;
print loaded

It’s just an example. I choose to capture the statements that have the most executions since startup. You can add other criteria.

But you should not capture only those that have a large elapsed time, or from an STS coming from AWR. The goal is different here. AWR is focused on high resource consumption because the goal is to tune the 'bad' statements. But here the goal is to keep the 'good' statements so that they will never become 'bad'.

So it makes sense to capture only the good statements. Don't run the capture at a time when users complain about response time.

So what?

The core message here is:

  • Stop complaining about Oracle plan instability if you haven't given SQL Plan Baselines (or outlines in SE) a try
  • When the system is going well, don’t wait for the next performance issue. This is the time to fix the plans that are going well.
  • Don’t fear migrations. Stabilize the most critical use-cases with SQL Plan Baselines and go on.
  • If you're an ERP vendor, stop faking the optimizer with old parameters. Deploy the execution plans with your application

I know that SQL Plan Baselines are not widely used. Mostly because we don't find time for this pro-active activity, and because it requires good communication between dev and ops. But remember that Oracle has provided plan stability features for a long time, and they assume we use them when they introduce all the adaptive features.

 


SEVERE:OUI-10020:The target area /u01/app/oracle/oraInventory/ is being used as a source by another session


What should you do if you get the above error when you try to install Oracle SE2 in silent mode (I did not test whether the same issue exists with EE, but it probably does)?

./runInstaller oracle.install.option=INSTALL_DB_SWONLY \
    ORACLE_BASE=/u01/app/oracle/ \
    ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_2_4/ \
    UNIX_GROUP_NAME=oinstall  \
    oracle.install.db.DBA_GROUP=dba \
    oracle.install.db.OPER_GROUP=dba \
    oracle.install.db.BACKUPDBA_GROUP=dba  \
    oracle.install.db.DGDBA_GROUP=dba  \
    oracle.install.db.KMDBA_GROUP=dba  \
    FROM_LOCATION=../stage/products.xml \
    INVENTORY_LOCATION=/u01/app/oracle/oraInventory/ \
    SELECTED_LANGUAGES=en \
    oracle.install.db.InstallEdition=SE2 \
    DECLINE_SECURITY_UPDATES=true  -silent -ignoreSysPrereqs -ignorePrereq -waitForCompletion

You already checked My Oracle Support and verified that there are no lock files in the oraInventory/locks directory:

ls -la /u01/app/oracle/oraInventory/locks

What to do? Follow the error message…:

[FATAL] [INS-10008] Session initialization failed
   CAUSE: An unexpected error occured while initializing the session.
   ACTION: Contact Oracle Support Services or refer logs
   SUMMARY:

… and create a service request in My Oracle Support? You don't need to. The error message is a bit misleading. The real issue is only the trailing slash ("/") at the end of the directory name for the oraInventory. Once you remove it, everything works as expected:

./runInstaller oracle.install.option=INSTALL_DB_SWONLY \
    ORACLE_BASE=/u01/app/oracle \
    ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_2_4 \
    UNIX_GROUP_NAME=oinstall  \
    oracle.install.db.DBA_GROUP=dba \
    oracle.install.db.OPER_GROUP=dba \
    oracle.install.db.BACKUPDBA_GROUP=dba  \
    oracle.install.db.DGDBA_GROUP=dba  \
    oracle.install.db.KMDBA_GROUP=dba  \
    FROM_LOCATION=../stage/products.xml \
    INVENTORY_LOCATION=/u01/app/oracle/oraInventory \
    SELECTED_LANGUAGES=en \
    oracle.install.db.InstallEdition=SE2 \
    DECLINE_SECURITY_UPDATES=true  -silent -ignoreSysPrereqs -ignorePrereq -waitForCompletion

And all this because of a slash. I am pretty sure this could be handled better very easily.

 


Query V$UNDOSTAT for relevant time window


When you have a query failing in ‘ORA-01555: snapshot too old: rollback segment number … with name … too small’ you have two things to do:

  1. Convince the developer that the rollback segment is not actually too small: the message text just comes from old versions
  2. Find information about query duration, undo retention and stolen blocks statistics. This is the goal of this post

The first information comes from the alert.log where every ORA-1555 is logged with the query and the duration:

Tue Sep 29 19:32:09 2015
ORA-01555 caused by SQL statement below (SQL ID: 374686u5v0qsh, Query Duration=4626 sec, SCN: 0x0022.c823dc12):

SCN

This means that at 19:32:09 the query 374686u5v0qsh running since 18:15:03 (4626 seconds ago) wasn’t able to find the undo blocks necessary to build the consistent image as of 18:15:02. How do I know that ‘as of’ point-in-time? Usually it’s the beginning of the query, but there are cases where it can be earlier (in serializable isolation mode, flashback queries) or later (query restart for example).
It’s better to check it: we have the SCN in hexadecimal given as ‘base’ and ‘wrap’ and we can convert it to a timestamp with the following formula:


SQL> select scn_to_timestamp(to_number('0022','xxxxxxx')*power(2,32)+to_number('c823dc12','xxxxxxxxxxxxxxxxxxxxxx') ) from dual;
 
SCN_TO_TIMESTAMP(TO_NUMBER('0022','XXXXXXX')*POWER(2,32)+TO_NUMBER('C823D
-------------------------------------------------------------------------
29-SEP-15 06.15.02.000000000 PM

Note that there can be a 3 second difference from the precision of SCN_TO_TIMESTAMP.

Undo statistics

Then I want to know the undo retention:


SQL> show parameter undo_retention
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_retention integer 900

900 seconds means that undo blocks can expire after 15 minutes, so getting an ORA-1555 on a query that runs for more than one hour is not a surprise.

We can check how the expired undo blocks were reused from V$UNDOSTAT, but I use the following query to get only the lines that are relevant to my query (those that cover the query duration up to the ORA-1555) and only the non-zero columns:


select maxqueryid||' '||to_char(end_time,'hh24:mi')||' '||
rtrim(lower(''
--||decode(MAXCONCURRENCY,0,'','MAXCONCURRENCY='||MAXCONCURRENCY||' ')
||decode(UNDOBLKS,0,'','UNDOBLKS='||UNDOBLKS||' ')
||decode(ACTIVEBLKS,0,'','ACTIVEBLKS='||ACTIVEBLKS||' ')
||decode(UNEXPIREDBLKS,0,'','UNEXPIREDBLKS='||UNEXPIREDBLKS||' ')
||decode(EXPIREDBLKS,0,'','EXPIREDBLKS='||EXPIREDBLKS||' ')
||decode(TUNED_UNDORETENTION,0,'','TUNED_UNDORETENTION(hour)='||to_char(TUNED_UNDORETENTION/60/60,'FM999.9')||' ')
||decode(UNXPSTEALCNT,0,'','UNXPSTEALCNT='||UNXPSTEALCNT||' ')
||decode(UNXPBLKRELCNT,0,'','UNXPBLKRELCNT='||UNXPBLKRELCNT||' ')
||decode(UNXPBLKREUCNT,0,'','UNXPBLKREUCNT='||UNXPBLKREUCNT||' ')
||decode(EXPSTEALCNT,0,'','EXPSTEALCNT='||EXPSTEALCNT||' ')
||decode(EXPBLKRELCNT,0,'','EXPBLKRELCNT='||EXPBLKRELCNT||' ')
||decode(EXPBLKREUCNT,0,'','EXPBLKREUCNT='||EXPBLKREUCNT||' ')
||decode(SSOLDERRCNT,0,'','SSOLDERRCNT='||SSOLDERRCNT||' ')
||decode(NOSPACEERRCNT,0,'','NOSPACEERRCNT='||NOSPACEERRCNT||' ')
)) "undostats covering ORA-1555"
from (
select BEGIN_TIME-MAXQUERYLEN/24/60/60 SSOLD_BEGIN_TIME,END_TIME SSOLD_END_TIME from V$UNDOSTAT where SSOLDERRCNT>0
) , lateral(select * from v$undostat
where end_time>=ssold_begin_time and begin_time<=ssold_end_time)
order by end_time;

Lateral join is possible in 12c, but there are other ways to do the same in 11g.
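
For example, in 11g the same result can be obtained with a plain join on the time-window condition (a sketch with a shortened column list):

select u.maxqueryid, to_char(u.end_time,'hh24:mi') end_time,
       u.undoblks, u.activeblks, u.unexpiredblks, u.expiredblks,
       u.unxpstealcnt, u.expstealcnt, u.ssolderrcnt
from v$undostat u,
     (select begin_time-maxquerylen/24/60/60 ssold_begin_time, end_time ssold_end_time
      from v$undostat where ssolderrcnt>0) w
where u.end_time>=w.ssold_begin_time and u.begin_time<=w.ssold_end_time
order by u.end_time;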

Here is a sample output:


undostats covering ORA-1555
-------------------------------------------------------------------------------------------------------------
f3yfg50ga0r8n 18:14 activeblks=224 unexpiredblks=90472 expiredblks=34048 tuned_undoretention(hour)=92.2
f3yfg50ga0r8n 18:24 activeblks=736 unexpiredblks=90472 expiredblks=34560 tuned_undoretention(hour)=43.9 expstealcnt=1 expblkrelcnt=1280
f3yfg50ga0r8n 18:34 activeblks=1504 unexpiredblks=64320 expiredblks=61024 tuned_undoretention(hour)=11.4 expstealcnt=2 expblkrelcnt=14208
374686u5v0qsh 18:44 activeblks=1120 unexpiredblks=57792 expiredblks=54752 tuned_undoretention(hour)=11.4
374686u5v0qsh 18:54 activeblks=1888 unexpiredblks=74112 expiredblks=42784 tuned_undoretention(hour)=11.4
374686u5v0qsh 19:04 activeblks=864 unexpiredblks=90400 expiredblks=34816 tuned_undoretention(hour)=2. expstealcnt=1 expblkrelcnt=9216
374686u5v0qsh 19:14 activeblks=2784 unexpiredblks=91680 expiredblks=16896 tuned_undoretention(hour)=.9
374686u5v0qsh 19:24 activeblks=1248 unexpiredblks=94232 expiredblks=3584 tuned_undoretention(hour)=.9
374686u5v0qsh 19:34 activeblks=1504 unexpiredblks=94816 expiredblks=4352 tuned_undoretention(hour)=.9 ssolderrcnt=1
f3yfg50ga0r8n 19:44 activeblks=2656 unexpiredblks=93024 expiredblks=2944 tuned_undoretention(hour)=1.

The ORA-1555 occurred where the ssolderrcnt is > 0 and we see the number of blocks stolen before – all expired in this case.
All the details about the statistics are in the V$UNDOSTAT documentation.

There is nothing more than V$UNDOSTAT here, but that query is easier to use when you are on the command line.

 


11.2.0.4 support? Don’t worry until 2017


Customers reluctant to go to 12c before 12.2, in addition to the Standard Edition contract changes when going to 12c, have led to a lot of upgrades to 11.2.0.4. But what about support? Don't worry: it's supported at no additional cost until May 2017.

Support levels

Do you know why you pay support (about 20% of the licence price per year)?
When you have a problem, you search on My Oracle Support. This is a really good service.
When you don’t find an answer you open a Service Request. This is not always a very nice experience, but it’s the only way to get info that is not public.
When you have a bug, you can get workarounds, existing patches, you can request a backport of a patch, and even get a new patch when development has made the fix.
And of course, you need to have support in order to get new releases.

But a software vendor can't support all the old versions. Here are the 3 levels:

  • Premier support during general availability: you get all the support, usually for 5 years after the availability of a new version (the .1 release)
  • Extended support for a few additional years: existing patches, major fixes with PSU, …
  • Sustaining support: only existing patches here

Price

You pay for support each year, about 20% of the licence cost.
That's for premier support. Extended support has an additional cost that increases with the years.

12c

12c is the latest version. For your new projects, you should go to the latest patchset: 12.1.0.2, which is in premier support until mid 2018.
If you upgrade, I recommend going to 12.1.0.2 as well. Don't think that 11.2.0.4 is more stable, because that's wrong. You should go to the latest version. Don't use all the new features if you fear regressions, but go to the latest version.
If you are in Standard Edition you have a problem, because until 1 month ago only 12.1.0.1 was available, and if you are in Standard Edition One you can't go to 12.1.0.2 without additional maintenance cost (+20% when going to SE2).

And the problem is that there will be no extended support for 12.1.0.1, which means the latest PSU will be JUL16. If your security policy requires getting the latest critical fixes, then you can't stay on sustaining support.

11g

If you are on the latest 11g patchset – 11.2.0.4 – premier support ended at the beginning of the year. It's extended support now, so you still get PSUs, but you should pay for it.

The good news is that the extended support is 'waived' – understand: you don't pay the additional fee. It had been waived until Feb. 2016, but today (thanks to a Dominic Giles tweet and a Martin Klier blog post) we got the news: it's waived until May 2017.


Note that PSUs are only for 11.2.0.4, the latest patchset.

So what?

The recommendation is still to go to the latest patchset, 12.1.0.2.
But if you choose to stay in 11g, then you still have support for 11.2.0.4 without additional cost. So, don’t worry…

Why did I say that 12.1 is not less stable than 11.2?
First, the 'dot-one' fear is a myth. Probably the oldest myth in Oracle, because there never was a version one. At that time, they thought that nobody would buy a first version, so the first commercial version was called Oracle 2.
But today, each patchset brings new features. Only the PSU brings stability: only critical fixes and no new features. Remember that 11.2.0.4 was released after 12.1.0.1 and had some new features backported to it.
From our experience, we encountered more issues when going to 11.2.0.4 than to 12.1.0.2. If you want stability, don't use multitenant, disable some adaptive features [not sure it's good to say that], or better, use SQL Plan Baselines, but go to the latest release. It's there that bugs are fixed first.

 


You are in Standard Edition, when to worry?


A previous blog post explaining what happens for those in 'Standard Edition One' had a "don't worry" in the title, because you can upgrade to SE2 with minimal additional cost – except when you have to buy new NUPs – and only a small additional limitation.
I didn't make such a post for people in Standard Edition, but there is a case where it can be worrying, because of the 2 socket rule.

Ok, let's try to say it in a positive way…
Standard Edition One is not totally dead. Only the affordable price of Standard Edition One is dead.
But its limitation survived in Standard Edition 2: it can be licensed only on servers that have a maximum capacity of two sockets.

Ok, that’s not actually very positive. Let’s forget the politically correct language:

  • from December 2015 the price of the minimal edition you can buy is multiplied by 3.
  • from December 2015 the maximum server capacity for standard edition is divided by 2.

2 sockets limit

Which sockets do you count? Can you use virtualization to fit within the limit? Can you physically remove a processor from a socket?
Oracle has very short sentences to define limitations that are not so easy to understand. How to count sockets depends on the context.

  • When you count the number of processor licences for SE you count the occupied sockets (and no core factor here)
  • When you count for the 2 socket limitation in RAC, you count the occupied socket or the OVM partitioned socket
  • When you count for the 2 socket limitation in single instance, then you must count all sockets

Because of those different cases, and the lack of clear documentation at the beginning of the SE2 announcement, the third point was not clear for me until Bjoern Rost made me read the rules again. I thought that it was possible to remove 2 processors from a 4-socket server and then install SE2 on it, but that was wrong. It's the server capacity that counts, and the sockets – not the processors – within it. If the server spec shows a 4-socket motherboard and you have Oracle installed, then you must pay for Enterprise Edition and there is no way to work around that.
If you think about putting glue on the socket, or get rid of it with your hammer drill, then I suppose you should send a picture of it to your LMS contact to validate it (please put me in cc if you do that!)

Ok, you think it was obvious because we are talking about sockets and not processors? Don’t rely on that. With multi-chip processors, each chip is considered as a socket, but that’s another story.

References

Here are the references about the 2 socket limit rules:

https://www.oracle.com/database/standard-edition-two/index.html


http://www.oracle.com/technetwork/database/oracle-database-editions-wp-12c-1896124.pdf


http://www.oracle.com/us/corporate/pricing/databaselicensing-070584.pdf


And the reading of those rules has been made clear by the Database Product Manager, Dominic Giles:

and the Master Product Manager Database Upgrade & Migrations Mike Dietrich:


 


Oracle Cloud Service – my first outage


We experienced the first planned outage this weekend, so let's see how it is notified and what happens.

EMEA Commercial 2 – Amsterdam

Outage notification before:

  • subject: Announcement: Upcoming Mandatory Maintenance for Oracle Cloud
  • e-mail date: 16 October 2015 1:49
  • message: Start Time / End Time:
    Saturday, October 17, 2015 9:00:00 PM CEST – Sunday, October 18, 2015 12:00:00 PM CEST
    Instances will be brought down before the start and restarted to match original state post completion.

So the notification was sent a bit less than 2 days before.

Actually I had a session logged at that time.

$ date
Sun Oct 18 21:03:27 CEST 2015
$ last -x -t 20151018170000 | head
oracle pts/0 178.197.234.212 Sun Oct 18 13:17 - 13:35 (00:17)
oracle pts/0 178.197.234.212 Sun Oct 18 13:12 - 13:14 (00:02)
oracle pts/0 178.197.234.212 Sun Oct 18 13:09 - 13:10 (00:01)
runlevel (to lvl 3) 2.6.39-400.109.1 Sun Oct 18 10:05 - 21:03 (10:57)
reboot system boot 2.6.39-400.109.1 Sun Oct 18 10:05 - 21:03 (10:57)
shutdown system down 2.6.39-400.109.1 Sat Oct 17 21:19 - 10:05 (12:46)
runlevel (to lvl 0) 2.6.39-400.109.1 Sat Oct 17 21:19 - 21:19 (00:00)
oracle pts/1 xdsl-188-154-161 Sat Oct 17 18:02 - down (03:16)
oracle pts/1 xdsl-188-154-161 Sat Oct 17 18:02 - 18:02 (00:00)
oracle pts/1 56.227.197.178.d Wed Oct 14 06:13 - 07:35 (01:21)

Remark: a reboot was not what I expected from the message ‘restarted to match original state’. I expected something like a ‘save state’ that includes the RAM.

Outage notification after:

  • subject: Announcement: Maintenance Was Completed for Oracle Cloud Outage Details
  • e-mail date: 18 October 2015 18:16
  • message: Start Time / End Time:
    Saturday, October 17, 2015 9:00:00 PM CEST – Sunday, October 18, 2015 10:30:00 AM CEST

Remark: the system was up at 10:05 but the notification that it is up came 8 hours later. Then if I have something to restart manually, when am I expected to do it? at 10:05 when I see the system reboot? at 12:00 that was the planned end? Or at 18:16 when I receive the notification? When I’m responsible for an outage, I count the duration from start of maintenance up to availability notification.

Here is the summary from the ‘CLOUD My Service':


US Commercial 2 – North America

There was an outage later on the US cloud. It was planned from 6:00:00 AM to 9:00:00 PM CEST and here is the summary:


The problem is that the maintenance windows overlap, so we can't rely on a Data Guard setup between both to ensure high availability.


$ date
Sun Oct 18 21:05:46 CEST 2015
$ last -x -t 20151018170000 | head
runlevel (to lvl 3) 2.6.39-400.109.1 Sun Oct 18 16:12 - 21:05 (04:53)
reboot system boot 2.6.39-400.109.1 Sun Oct 18 16:12 - 21:05 (04:53)
shutdown system down 2.6.39-400.109.1 Sun Oct 18 06:10 - 16:12 (10:02)
runlevel (to lvl 0) 2.6.39-400.109.1 Sun Oct 18 06:10 - 06:10 (00:00)
oracle pts/1 xdsl-188-154-161 Sat Oct 17 21:25 - 00:01 (02:35)
oracle pts/1 xdsl-188-154-161 Mon Oct 5 07:29 - 11:41 (04:12)
oracle pts/1 xdsl-188-154-161 Mon Oct 5 07:22 - 07:22 (00:00)
oracle pts/1 109.132.241.235 Sun Sep 20 10:57 - 10:57 (00:00)
oracle pts/1 109.132.241.235 Sun Sep 20 10:28 - 10:28 (00:00)
oracle pts/2 109.132.241.235 Sat Sep 19 18:03 - 19:07 (01:03)

During the 06:00 – 16:08 outage (end was planned for 21:00), the system was stopped from 06:10 to 16:12 and the notification came 2 hours later.

So what?

On any server you should be confident that all your services restart on server reboot. Check your init.d scripts. Test them. Take care of dependencies if one must start after another one. In the cloud, because you’re not there when the system is brought up (and not notified immediately) then you must be 100% sure that everything restarts.
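
As an illustration only – the paths here are assumptions to adapt to your environment – a minimal init.d wrapper around dbstart/dbshut could look like this, to be registered with chkconfig --add and, above all, tested with a real reboot:

#!/bin/sh
# chkconfig: 345 99 10
# description: start/stop the Oracle databases flagged Y in /etc/oratab
ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1   # assumption: adapt to your installation
ORACLE_OWNER=oracle
case "$1" in
  start)
    # dbstart reads /etc/oratab and also starts the listener from the given home
    su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbstart $ORACLE_HOME"
    ;;
  stop)
    su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbshut $ORACLE_HOME"
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    ;;
esac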

 



OOW15 – Day 1 – No fog on Golden Gate Bridge but Cloud everywhere at Moscone center


Here are the two sides of Oracle Open World: marketing and community. Of course Oracle organizes this big event for marketing: show the future products and explain how they are better than all the competitors' ones. But it's not only that. There is a community around Oracle: users, partners, technologists, speakers, authors, etc. And Oracle Open World is also the place to meet.

The Sunday started for me with the Open World Bridge Run, organized every year by the SQL Developer product manager, Jeff Smith. More information on facebook.
So here are the two sides of that run: Golden Gate Bridge South and North


This is community and networking.
No matter your technology skills. No matter if you run fast or walk, everybody can enjoy.

The Sunday conferences are chosen by user groups, so besides all marketing of new product, it’s a good occasion to see what matters currently in real life from user experience.

Then it’s the keynote, and once again I’ll show you the two sides.
Here is the big conference room where it started with a live video of all Oracle ACEs in the cloud Room:
and here is the other side, where we were all playing in that bubble pool. Don't hesitate to play or relax in the cloud room that is now open to everybody: "See Yourself in the Cloud", in the Yerba Buena Gardens where the lunch is.
You want to know what has been said in the keynote? Videos are there.
All the details will come this week, but those are the main points:

  • Cloud: new services, RAC, Big Data, accessible from everywhere
  • Competitors: For Larry Ellison, IBM and SAP are not big competitors anymore
  • Partnership: Intel and the 3D XPoint memory to compete with SSD
  • Columnar: In-Memory now in Active Data Guard, columnar persisted in Flash
  • Multitenant: yes, the future is with pluggable databases. Now up to 4096 PDB in a CDB (did you reach the 252 limit?), Online cloning of PDB (no need to put source in read-only), and online multitenant relocation (a feature needed for the cloud)
  • 12.2 is still planned for 2016

I finished the day at the Oracle ACE dinner, networking with the people that give life to the community – again meeting in real life great people I know through blogs, twitter, OTN forums, event presentations, etc. Here again, two sides:


Today, I’ll have to choose between two sides: Oak table world (agenda) – the free conference, very technical and useful for real life troubleshooting. And regular OOW sessions – especially sessions from product managers to know more about the 12.2 features coming.

 


OOW15 – Day 3 – thoughts from DemoGrounds about SIZE AUTO


If you are at Oracle Open World, don't miss the DemoGrounds where you can talk to product managers and developers. It's a good way to know how and why something is implemented. They also listen to your user experience on their products, for possible evolution.
But remember they are not there to receive complaints or answer your SR…
Of course, I had a lot of very interesting discussions about current and future versions.
I'm posting here just one idea that came to my mind after that, about plan stability, histograms and FOR ALL COLUMNS SIZE 1


Think about what you need

Do you like histograms or not? They are good for ad-hoc queries, reporting, BI because they help to find the optimal execution plan.
But in OLTP, where you want plan stability and you want to share plans – having an execution plan that fits all execution values, and not only those from the first execution (bind variable peeking) – you don't want histograms.
Read the first 'philosophy' post on Jonathan Lewis' blog about that.

All that means that sometimes, in a specific context, I have recommended using 'FOR ALL COLUMNS SIZE 1' instead of the default 'FOR ALL COLUMNS SIZE AUTO'. No histograms by default. If you need them for a specific column, then set a table preference for it.

But today, I realize that there is something wrong in that recommendation because the solution does not address the requirement.
The requirement was not: have no histograms
The requirement was: don’t use histograms

My recommendation (SIZE 1) is the solution for the first one. But the solution for the second one is:

SQL> alter session set "_optimizer_use_histograms"=false;

If what I want is not to use histograms, then this is the solution: disable the use of histograms.
Of course, I can say this now that I have validated that this is the goal of this parameter. It’s undocumented so you can’t rely only on its name or description. Ask support and they check internal documentation or ask the developers.

So what?

I don't recommend anything here as a silver bullet. The only recommendation is: think about what you need and find the solution that fits it.
If I want to use histograms only for a few columns that I use with literals, then the right solution is to keep "_optimizer_use_histograms"=true, gather SIZE 1 by default and set table preferences for the specific columns.
But if you just don't want to use histograms and don't want to manage which columns have histograms or not, then disable the feature for the sessions that don't need it. Other sessions may benefit from it.
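
As an example, keeping the feature enabled but managing histograms per table could look like this (SCOTT.EMP and the JOB column are just placeholders for the illustration):

-- no histograms by default on this table, except on JOB which is queried with literals
exec dbms_stats.set_table_prefs('SCOTT','EMP','METHOD_OPT','FOR ALL COLUMNS SIZE 1 FOR COLUMNS SIZE 254 JOB');
exec dbms_stats.gather_table_stats('SCOTT','EMP');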

 


RMAN channels in RAC


When you want to minimize the backup or restore duration, you can parallelize RMAN operations by allocating multiple channels. If you are in a cluster, you can open channels from several nodes. Let's see why, how, and a strange issue: the need to set cell_offload_processing=false even when not on Exadata.

Reason

Why do I want to run a backup or restore from several nodes instead of opening all channels from one node?
Usually the bottleneck is on the storage, not the host. But storage has evolved a lot and the FC HBA throughput may be lower than the storage capability.

Here I have an EMC XtremIO Brick which can deliver 3GB/s, and I have 4 nodes with two 10 Gbit/s HBAs each, which means that one node cannot transfer more than 2GB per second. I need to use more than one node if I want the full rate of the XtremIO brick.

Here is the node HBA configuration that shows the two 10 Gbit/s:

# systool -c fc_host -v
Class = "fc_host"
 
Class Device = "host1"
Class Device path = "/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/0000:06:00.0/0000:07:00.0/0000:08:03.0/0000:0c:00.0/host1/fc_host/host1"
active_fc4s = "0x00 0x00 0x01 0x00 0x00 0x00 0x00 0x01 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 "
dev_loss_tmo = "10"
fabric_name = "0x200a002a6a2505c1"
issue_lip =
maxframe_size = "2048 bytes"
node_name = "0x20f20025b5000012"
port_id = "0x510002"
port_name = "0x200a0025b5200002"
port_state = "Online"
port_type = "NPort (fabric via point-to-point)"
speed = "10 Gbit"
supported_classes = "Class 3"
supported_fc4s = "0x00 0x00 0x01 0x00 0x00 0x00 0x00 0x01 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 "
supported_speeds = "10 Gbit"
symbolic_name = "fnic v1.5.0.1 over fnic1"
tgtid_bind_type = "wwpn (World Wide Port Name)"
uevent =
 
Device = "host1"
Device path = "/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/0000:06:00.0/0000:07:00.0/0000:08:03.0/0000:0c:00.0/host1"
uevent = "DEVTYPE=scsi_host"
 
 
Class Device = "host2"
Class Device path = "/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/0000:06:00.0/0000:07:00.0/0000:08:04.0/0000:0d:00.0/host2/fc_host/host2"
active_fc4s = "0x00 0x00 0x01 0x00 0x00 0x00 0x00 0x01 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 "
dev_loss_tmo = "10"
fabric_name = "0x2014002a6a2508c1"
issue_lip =
maxframe_size = "2048 bytes"
node_name = "0x20f20025b5000012"
port_id = "0xdd0003"
port_name = "0x200b0025b5200002"
port_state = "Online"
port_type = "NPort (fabric via point-to-point)"
speed = "10 Gbit"
supported_classes = "Class 3"
supported_fc4s = "0x00 0x00 0x01 0x00 0x00 0x00 0x00 0x01 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 "
supported_speeds = "10 Gbit"
symbolic_name = "fnic v1.5.0.1 over fnic2"
tgtid_bind_type = "wwpn (World Wide Port Name)"
uevent =
 
Device = "host2"
Device path = "/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/0000:06:00.0/0000:07:00.0/0000:08:04.0/0000:0d:00.0/host2"
uevent = "DEVTYPE=scsi_host"

RMAN

In that test, I didn't rely on the speed specification and opened a lot of channels: 2 channels from each node. Here is how to allocate channels from several nodes: you need to connect to all the nodes.

run {
allocate channel C11 device type disk format '+FRA' connect 'sys/"..."@(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=XXXX))(ADDRESS=(PROTOCOL=TCP)(HOST=10.230.160.201)(PORT=1521)))';
allocate channel C12 device type disk format '+FRA' connect 'sys/"..."@(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=XXXX))(ADDRESS=(PROTOCOL=TCP)(HOST=10.230.160.201)(PORT=1521)))';
allocate channel C21 device type disk format '+FRA' connect 'sys/"..."@(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=XXXX))(ADDRESS=(PROTOCOL=TCP)(HOST=10.230.160.202)(PORT=1521)))';
allocate channel C22 device type disk format '+FRA' connect 'sys/"..."@(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=XXXX))(ADDRESS=(PROTOCOL=TCP)(HOST=10.230.160.202)(PORT=1521)))';
allocate channel C31 device type disk format '+FRA' connect 'sys/"..."@(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=XXXX))(ADDRESS=(PROTOCOL=TCP)(HOST=10.230.160.203)(PORT=1521)))';
allocate channel C32 device type disk format '+FRA' connect 'sys/"..."@(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=XXXX))(ADDRESS=(PROTOCOL=TCP)(HOST=10.230.160.203)(PORT=1521)))';
allocate channel C41 device type disk format '+FRA' connect 'sys/"..."@(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=XXXX))(ADDRESS=(PROTOCOL=TCP)(HOST=10.230.160.204)(PORT=1521)))';
allocate channel C42 device type disk format '+FRA' connect 'sys/"..."@(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=XXXX))(ADDRESS=(PROTOCOL=TCP)(HOST=10.230.160.204)(PORT=1521)))';
backup as backupset tablespace CPY format '+FRA' tag 'TEST-BACKUP';
}

Then I can expect to reach the limit of the XtremIO Brick, which is 3GB/s:

As you can see here, I had a surprise at the first test: limited to 2GB/s, as if everything came from only one node. I'll detail it later.
If you check the second test, I've reached the 3GB/s, which is great. Of course I don't need 8 channels for that. 3 channels are ok, spread over 2 nodes so that I can use three of the 10 Gbit HBAs.
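
If you want to check the throughput from the database rather than from the storage console, a quick sketch is to query the RMAN job views (this is not from the test above, just a helper I find handy):

select start_time, status, input_type,
       round(input_bytes/1024/1024/1024,1) input_gb,
       round(input_bytes_per_sec/1024/1024) input_mb_per_sec,
       time_taken_display
from v$rman_backup_job_details
order by start_time desc;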

Now the fun part…

‘ASM file metadata operation’ msgop=41

You have seen that I had contention in the first test. Here it is:

I was waiting a very long time (500ms on average) on 'ASM file metadata operation', the parameter p1 being 'msgop'=41.
There is a bug which is supposed to be fixed in 11.2.0.3 (see the @cgswong blog post about it). I'm in 11.2.0.3 so it's supposed to be fixed, but the workaround solved my issue.

And the workaround is:

SQL> alter system set cell_offload_processing = false;

Funny, isn't it? I'm not on Exadata, so I don't have offload processing, but it seems that some offloading code is still running and brings contention here. Just disable it and the wait event is still there, but with a lower wait time:

Even if the wait is still there, and some time is still spent on it, it seems that the issue is fixed because I can reach the maximum throughput (3GB/s).

If you encounter that wait event and you are not on Exadata, then the recommendation is to disable offloading.

 


AWR when ASH CPU >> DB CPU


I'll present How to read an AWR report next month at UKOUG TECH15, and the first recommendation after checking Average Active Sessions is to verify that we are not in a CPU starvation situation. Here is a little demo to show how it looks in an AWR report.

In other words, I'll demo the case where the CPU time seen by ASH is much higher than the DB CPU.

One session 100% in CPU

I'm doing the demo on a VM with 4 cores and I'm running a query that reads 15 million blocks from the buffer cache, which means it runs fully on CPU for nearly 5 minutes:


From the load profile above we are actually using 1 second of CPU per second, which confirms that my session was able to run in CPU 100% of the time.

It’s the only activity in that database, so I see nearly 100% of the DB time spent in CPU:


From the host, it’s the only instance running there, using 1 of the 4 vCPU – which means 25% utilization:


I’ve set timed_os_statistics to gather OS statistics every 10 seconds. We can see that most context switches are voluntary ones (when doing a system call):


Finally I check ASH:


29 samples (taken every 10 seconds) are in CPU or Wait for CPU which covers nearly 100% of the 292.1 seconds of DB CPU.

This is the ‘normal’ case.

One session 100% + four processes in CPU

Now, in addition to my Oracle session I run 4 processes that also want to run on CPU all the time:

dd if=/dev/random of=/dev/null&
dd if=/dev/random of=/dev/null&
dd if=/dev/random of=/dev/null&
dd if=/dev/random of=/dev/null&

I introduced contention on the server, thus the time to run the same workload in the database has increased:


You see here that DB time has increased because it includes the time spent waiting for CPU in the runqueue, but the DB CPU(s) per second is now only 0.8 because my session spent about 20% of its time waiting for CPU (5 processes sharing only 4 vCPUs).

The main symptom of that is on the Top events where the total is far less than 100% because of the unaccounted time waiting in runqueue:


Look at the load average that is higher than the number of vCPUs:


A lot of time is spent in context switching (system CPU) and my instance is using only 1/5 of the total CPU used.

The OS has to share the CPU among the processes. This is why we see more 'involuntary context switches' than voluntary ones:


Now comparing with ASH:


460 seconds (46 samples taken every 10 seconds) in CPU is far more than the DB CPU (362.8) because 20% of the time was spent waiting for CPU, because of involuntary context switches, because of CPU starvation.
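
If you want to do that check without an AWR report, here is a rough sketch of the comparison. Keep in mind that 'DB CPU' in the time model is cumulative since instance startup, so you have to compare the deltas of both numbers over the same interval:

-- ASH samples on CPU during the last 10 minutes (1 sample is roughly 1 second of CPU demand)
select count(*) ash_cpu_seconds
from v$active_session_history
where session_state='ON CPU'
  and sample_time > systimestamp - interval '10' minute;
 
-- DB CPU from the time model, in seconds (cumulative since startup)
select round(value/1e6) db_cpu_seconds
from v$sys_time_model
where stat_name='DB CPU';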

Note that the time waiting in the runqueue after a wait event (system call – voluntary context switch) is not accounted as 'Wait for CPU' but is accounted in the wait event waiting time, for the simple reason that the accounting (which is done by a few CPU instructions) can't be done before the process is back on CPU. A consequence of that is inflated wait times. I don't have a lot of waits here, but you can see that 'db file sequential read' is on average less than one millisecond in the first case, but has been inflated to 6 milliseconds when the CPU had to be shared by a lot of processes.

I usually don't look at the ADDM report, but there is a little clue about that in it – unfortunately not the Top-1 finding:

Finding 2: CPU Usage
Impact is .06 active sessions, 5.7% of total activity.
------------------------------------------------------
Host CPU was a bottleneck and the instance was consuming 26% of the host CPU.
All wait times will be inflated by wait for CPU.
Host CPU consumption was 100%.
 
Recommendation 1: Application Analysis
Estimated benefit is 1 active sessions, 100% of total activity.
...
Recommendation 2: Host Configuration
Estimated benefit is .06 active sessions, 5.7% of total activity.
...

The recommendations are not good either. My problem is on the host, outside of this database instance. It's the rare case where we have to check at the system level before looking at the application…

Conclusion: if you are CPU-bound, the most important numbers in the AWR report (the wait events) are meaningless. I wanted to show that kind of report here. In the TECH15 presentation I'll just warn about it and then show how I read a report that is relevant. Hope to see you there.

 


Interested in a deep dive of logical replication?


It’s next week at #DOAG2015

Interested in a deep dive of logical replication? Let’s try it. The best way to learn is to try it and this is the idea of #RepAttack where you can quickly install a replication configuration on your laptop, using VirtualBox, Oracle XE, and Dbvisit replicate trial version.

Last year Jan Karremans (www.jk-consult.nl) organized a #RepAttack which was cool but we were in a room at the end of the 3rd floor and many people hesitated to come inside. So for this year we had the idea to do it in a different way: fix a rendez-vous at the 2nd floor in front of the stairs:

Wednesday, November 18 – 2:00pm

There we meet up, give a brief intro of #RepAttack, distribute USB sticks with all the required software, and we will find a place to put the laptops on a table (or not – it's a lap-top after all) and start installing/configuring.

Why on the 2nd floor? Because all the Dbvisit partners are there: Opitz, Herrmann & Lenz Services GmbH and dbi services are right at the top of the stairs, and Robotron a few steps from there.

And at dbi services, we have the 'live demo' screen that we use for any on-demand demo for people coming to the booth. There I can show you what the #RepAttack will look like before you do it on your laptop.

Hope to see a lot of people next week.

The prereqs for the laptops are:

• At least 2.3 Gb RAM per VM. 5 Gb in total as 2 VMs will be built. A laptop with 8 Gb is recommended.
• At least 17Gb of free space. 25 Gb is recommended.
• 2 GHz Processor (a lesser processor will be acceptable but slower)
• Admin privileges on your host machine are required to install VirtualBox.
• Either Windows, Linux, or Mac OS X operating system on your host machine.

If you want to look at the cookbook, it’s online: https://dbvisit.atlassian.net/wiki/display/REPA11XE/RepAttack+11g+XE+Home

If you have an HP laptop, please check before that you can start a 64-bit VM on it because experience at #rackattack shows that we can spend a lot of time finding the right BIOS settings. But don’t worry, we are there to help.

Not at DOAG? Then see you next month in Birmingham, there’s another #RepAttack there.

 

Cet article Interested in a deep dive of logical replication? est apparu en premier sur Blog dbi services.

Oracle Data Integrator – The EL-T tool from Oracle

Oracle Data Integrator is the ETL tool that Oracle has developed to enter the world of Business Intelligence. Taking advantage of its considerable experience in database management, Oracle has created an ETL tool that is flexible, powerful, and allows the integration of data coming from other sources.

ODI_P1

 Agility of use

Oracle Data Integrator uses various types of graphical interfaces during all stages of data flow creation. Here is an example of data transformation flows. The different interfaces are very intuitive and easy to use.
ODI_P11

On this picture, you can see that the user interface is split into 3 main windows:
– The project elements
– The workflow design
– The predefined components
The project elements window centralizes all the components, database connections and workflows you have created for the project. All information is centralized in one location.
The workflow design uses drag and drop. At the same time, you can use predefined components that are SQL instructions or scripts. Using these components avoids writing many complex lines of code that can lead to syntax errors, while each module or task can still be controlled and customized.

The Loading Knowledge Module and the Integration Knowledge Module

One of the most powerful functions of ODI is that you can choose the data loading and the data integration methodology without writing code.
For example, you can use an Oracle-to-Oracle push or pull method for loading the data, or you can choose a multi-connect method or a specific SQL script. These options allow a better data transfer between the source and the target table.

ODI_P12
The Integration Knowledge Module allows you to choose the data update or integration strategy. Several actions are available: Oracle incremental update, Oracle update, etc. All these functions are modeled in the knowledge modules.

ODI_P13

The Check Knowledge Module

This module is used to check the constraints of the datastore and to create an error table. This module can use the Oracle Check module or a specific SQL script.

ODI_P14

Powerful tool

Each operation on data, such as select, create, lookup, etc., has been integrated into the modules, thus avoiding writing many complex lines of code that can lead to syntax errors. Each module or task can still be controlled and customized.

ODI_P3

In addition, ODI uses a very powerful debugger tool that allows checking each task from the data transform process.

ODI_P4

Many external data sources access

A good ETL tool must now connect to most of the databases or data types that exist on the market. That’s why ODI provides access to many databases, including big data sources, making it one of the most connected tools on the market.

ODI_P5ODI_P5Bis

E-LT, the Business Intelligence 2.0

ODI is based on a new technological concept called E-LT (Extract – Load – Transform). Most data integration tools use the ETL (Extract – Transform – Load) approach. In other terms, the data first has to be extracted from its source and copied into a temporary database. Then, in a second step, the data is transformed and stored in the real data warehouse. These processes cost time, RAM and CPU.

The E-LT technology changes the order of the processes. It takes advantage of the hardware and software of the source and target databases. The data is extracted and copied directly into the data warehouse, and the transformation processes run directly on the data located in the data warehouse. This avoids another data transfer that costs time. Finally, this technology allows refreshing the data frequently, because we can choose which dataflow we want to update.
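
To make the E-LT principle concrete, here is a small, purely illustrative SQL sketch (the table and column names are invented for this example) of what a generated mapping typically boils down to on the target side: the data has already been loaded into a staging table inside the warehouse, and the transformation runs as one set-based statement executed by the target database engine itself:

-- Illustrative E-LT transform step: STG_SALES was loaded as-is into the target;
-- the transformation is a single set-based statement run by the warehouse engine,
-- with no intermediate ETL server holding the data.
insert into dwh_sales (sale_id, sale_day, customer_id, amount_chf)
select s.sale_id,
       trunc(s.sale_ts),
       c.customer_id,
       s.amount * x.rate_to_chf
from   stg_sales s
       join dim_customer c on c.source_key = s.customer_ref
       join stg_fx_rates x on x.currency = s.currency
where  s.load_batch_id = :batch_id;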

ODI_P6

Classical ETL schema                                                     E-LT schema

ODI_P7            ODI_P8

Oracle Data Integrator and Oracle Golden Gate

More and more customer solutions use Oracle Golden Gate and Oracle Data Integrator at the same time. What advantages does using these two technologies together bring?

  • First advantage: Oracle Golden Gate allows data replication with a granularity down to the table level. The data transfer to ODI can be focused on the relevant data.
  • Second advantage: Oracle Golden Gate transfer jobs can be triggered. That allows launching an “on demand” data update process and, in certain cases, a “real time” data update process.

ODI_P9

Oracle Data Integrator and Enterprise Data Quality

Data quality is becoming a very big problem for companies, due to the explosion of the quantity of data to analyze. For a company, analyzing good data is essential. A business analysis made on bad data can lead the company to take bad decisions. That’s why ODI can use EDQ flows to make sure it only works on data that has been validated.

ODI_P10

Conclusion

In conclusion, we can say that Oracle Data Integrator is one of the most powerful data integration tools on the market. In addition, the possibility of using an ODI instance in the cloud makes it very versatile. And finally, the combination of ODI and Oracle Golden Gate with the E-LT technology can be very powerful for having operational data in your data warehouse in a minimum of time. In a next blog post, I will present Oracle Enterprise Data Quality, a module for managing your data quality.

 

Cet article Oracle Data Integrator – The EL-T tool from Oracle est apparu en premier sur Blog dbi services.

Cloud Control 12c on your laptop

Today every DBA should have a lab on his laptop. This is where I reproduce most cases before opening an SR, investigate something, demo, learn new features, or prepare an OCM exam. I have a few basic VirtualBox VMs with single-instance databases (11g and 12c, SE and EE, CDB and non-CDB, etc). I have the RacAttack environment with a 3-node RAC in 12c on OEL7. But in order to prepare for the OCM 12c upgrade, I also need a Cloud Control 12c. Here is how I got it without having to install it.

Oracle provides some VirtualBox images, and there is one with Oracle Enterprise Manager Cloud Control 12c Release 5 (12.1.0.5) and a database (11.2.0.4) for the repository.

Download

You can get the image from: https://edelivery.oracle.com

Filter products by ‘Linux/OVM/VMs’ and search for ‘Enterprise Manager’:

CaptureVBCC001

It’s 17GB in total to download, so be patient:

CaptureVBCC002

You can also download a wget script to get all files:

CaptureVBCC003

So you have 6 files and you can unzip them. When you see the compression ratio, you may wonder why it is zipped at all…

Then concatenate all .ova files:

C:\Users\frp>cd /d F:\Downloads
F:\Downloads>type VBox*.ova > EM12cR5.ova

and you can import it with VirtualBox as any OVA.

Network

I have all my VMs on the host-only network (192.168.78.1 in my case).
On the VM configuration, I set the first network card on the ‘host only’ network and the second one as NAT to be able to access the internet.
If you imported the OVA without changing the MAC addresses, here they are: 0800274DA371 and 08002748F74B
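
If you prefer to script the import and the network setup instead of clicking through the VirtualBox GUI, here is a minimal sketch with VBoxManage (the VM name and the host-only adapter name are assumptions, adapt them to what you see on your host):

REM Hypothetical example: names depend on your environment.
REM On Linux/Mac the host-only adapter is typically vboxnet0,
REM on Windows it is "VirtualBox Host-Only Ethernet Adapter".
VBoxManage import EM12cR5.ova
VBoxManage modifyvm "EM12cR5" --nic1 hostonly --hostonlyadapter1 vboxnet0 --nic2 nat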

Now I can start the VM and login as root – password welcome1

My keyboard is Swiss French layout, so I change it in System/Administration/keyboard

Then I want to be able to ssh to the machine so I set the network (System/Administration/Network)

Here is the configuration in my case:
CaptureVBCC011

Then I activate the interface and can connect with ssh:
$ ssh root@192.168.78.42

Stop iptables as root

I want to communicate with the other VMs to discover databases, and to access it via the web browser, so I disable the firewall:


[root@emcc ~]# /etc/init.d/iptables stop
Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]
[root@emcc ~]# chkconfig iptables off

Then I can connect as oracle:


$ su - oracle@192.168.78.42
 
Installation details of EM Plugin Update 1 12.1.0.5.0 .....
 
EM url: https://emcc.example.com:7799/em
Ports used by this deployment at /u01/OracleHomes/Middleware/oms/install/portlist.ini
Database 11.2.0.4.0 location: /u01/OracleHomes/db
Database name: emrepus
EM Middleware Home location: /u01/OracleHomes/Middleware
EM Agent Home location: /u01/OracleHomes/agent/core/12.1.0.5.0
 
This information is also available in the file /home/oracle/README.FIRST
 
 
To start all processes, click on the start_all.sh icon on your desktop or run the script /home/oracle/Desktop/start_all.sh
 
To stop all processes, click on the stop_all.sh icon on your desktop or run the script /home/oracle/Desktop/stop_all.sh

You just have to follow what is displayed

start_all.sh

Here is how to start everything in the right order.


$ /home/oracle/Desktop/start_all.sh
Starting EM12c: Oracle Database, Oracle Management Server and Oracle Management Agent .....
 
Starting the Oracle Database and network listener .....
 
 
LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 12-NOV-2015 12:46:01
 
Copyright (c) 1991, 2013, Oracle. All rights reserved.
 
Starting /u01/OracleHomes/db/product/dbhome_1/bin/tnslsnr: please wait...
 
TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Log messages written to /u01/OracleHomes/db/product/dbhome_1/log/diag/tnslsnr/emcc/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost.localdomain)(PORT=1521)))
 
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date 12-NOV-2015 12:46:01
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/OracleHomes/db/product/dbhome_1/log/diag/tnslsnr/emcc/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost.localdomain)(PORT=1521)))
The listener supports no services
The command completed successfully
 
SQL*Plus: Release 11.2.0.4.0 Production on Thu Nov 12 12:46:02 2015
 
Copyright (c) 1982, 2013, Oracle. All rights reserved.
 
Connected to an idle instance.
 
SQL> ORACLE instance started.
 
Total System Global Area 1469792256 bytes
Fixed Size 2253344 bytes
Variable Size 855641568 bytes
Database Buffers 603979776 bytes
Redo Buffers 7917568 bytes
Database mounted.
Database opened.
SQL>
System altered.
 
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting the Oracle Management Server .....
 
nohup: appending output to `nohup.out'
 
 
opmnctl startall: starting opmn and all managed processes...
Starting the Oracle Management Agent .....
 
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
Starting agent ........................... started.

And it’s ready. You can access it from the VM console, or from the host browser at https://192.168.78.42:7799/em

All passwords are welcome1.

You can now discover the other hosts. I’ve put them in the /etc/hosts of the EM VM, and I’ve put the following line in the /etc/hosts of all hosts where I want to install agents:

192.168.78.42 emcc emcc.example.com

Hope it helps. There are things that are quicker to do from Cloud Control when you are in the OCM exam, so it’s better to know it. However, don’t rely only on that, as it may not be available for all exercises.

 

Cet article Cloud Control 12c on your laptop est apparu en premier sur Blog dbi services.


The scripts of my DOAG 2015 session: Automated GI/RAC staging with Cobbler and VirtualBox

I promised to upload the scripts of my session to our blog. So, here they are:

Please note that I had to upload the files with the “doc” extension because of some limitations of our blog software. Just save the files as they are written in the hyperlink and you should be fine. If you have any questions just contact me here or per email and I’ll be happy to answer.

Thanks again to all who attended my session and for waking up early :)

Hope to see you all again,
Daniel

 

Cet article The scripts of my DOAG 2015 session: Automated GI/RAC staging with Cobbler and VirtualBox est apparu en premier sur Blog dbi services.

Oracle Compliance Standard

In Enterprise Manager 12c, using the compliance standard results might be a good way for DBAs to detect security inconsistencies (for example an ordinary user who has the sysdba privilege …) on their various targets.

Starting with version 12.1.0.3, a new column named ‘Required Data Available’ appeared in the Compliance Standard Result screen. This column indicates whether the data for the compliance evaluation rules of each target is in the repository or not.

If the value is ‘YES’, it means that the data necessary for the compliance rule has been collected. If the value is ‘NO’, it means that nothing has been collected or evaluated. Thus we can consider that for this target the compliance rule is not OK.

Let’s have a look at my EM12c configuration. I added the OMSREP database in order to apply the High Security Configuration for Oracle Database standard to the target. Apparently everything is fine, except the required data available:

co1

The requested configuration data is not available, and we do not have any violations available for this target’s security compliance.

EM 12c provides many compliance standards for various targets, in our case High Security Configuration for Oracle Database. But by default the configuration is not collected. We can notice that when we associate a target with a compliance standard, we receive the following message:

co2

We have to enable those collections by applying an Oracle Certified Template to the target, in our case it will be Oracle Certified – Enable Database Security Configuration Metrics, because those configuration metrics are not enabled by default in order not to overload the OMR (Oracle Management Repository):

co3

Choose the Oracle Certified Database Security Configuration Metrics template, select Apply, select your target database, and then click OK:

co5

Now the target has its collections enabled.

At the beginning of our test we did not have any scheduled collection for oracle security. Using emctl status agent scheduler combined with a grep on the instance name and another grep on the metric collection gave no result:

oracle@em12c:> emctl status agent scheduler | grep OMSREP | grep oracle_security
oracle@em12c:>

Now the collection for the certified template has been applied, but we have to start the schedule:

oracle@em12c:> emctl startschedule agent -type oracle_database
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
Start Schedule TargetType succeeded

And the emctl status agent scheduler command now shows the oracle_security collections and their scheduled times:

oracle@em12c:>emctl status agent scheduler | grep OMSREP | grep oracle_security

2015-11-23 10:33:59.854 : oracle_database:OMSREP:oracle_security_inst2
2015-11-23 10:34:50.068 : oracle_database:OMSREP:oracle_security

We can run a collection from the agent12c with emctl:

oracle@em12c:>emctl control agent runCollection OMSREP:oracle_database oracle_security
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
EMD runCollection completed successfully

 

Finally now we can visualize the violations and target evaluations:

co6

 

Conclusion

If you use compliance standards, be careful with the column ‘Required Data Available’: if it shows ‘NO’, you cannot be sure you have correct compliance results. Don’t forget that some configuration metrics are not enabled by default.

 

 

Cet article Oracle Compliance Standard est apparu en premier sur Blog dbi services.

Upgrading the Grid Infrastructure from 12.1.0.1 to 12.1.0.2 on the command line

A lot of people use the graphical way to upgrade Oracle software from one version to another. While there is nothing to say against that, the same can be done without any graphical tools. This post outlines the steps to do so.

Currently my cluster is running Grid Infrastructure 12.1.0.1 without any PSU applied. The node names are racp1vm1 and racp1vm2:

[grid@racp1vm2 ~]$ /u01/app/12.1.0/grid_1_0/bin/crsctl query crs softwareversion
Oracle Clusterware version on node [racp1vm2] is [12.1.0.1.0]
[grid@racp1vm2 ~]$ /u01/app/12.1.0/grid_1_0/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.1.0]
[grid@racp1vm2 ~]$ /u01/app/12.1.0/grid_1_0/bin/crsctl query crs softwareversion racp1vm2
Oracle Clusterware version on node [racp1vm2] is [12.1.0.1.0]

Obviously the first step is to copy the source files to all the nodes and to extract them:

[grid@racp1vm1 ~]$ cd /u01/app/oracle/software/
[grid@racp1vm1 software]$ ls -la
total 2337928
drwxr-xr-x. 2 grid oinstall       4096 Nov  5 15:14 .
drwxr-xr-x. 3 grid oinstall       4096 Nov  5 14:24 ..
-rw-r--r--. 1 grid oinstall 1747043545 Nov  5 15:14 linuxamd64_12102_grid_1of2.zip
-rw-r--r--. 1 grid oinstall  646972897 Nov  5 15:14 linuxamd64_12102_grid_2of2.zip
[grid@racp1vm1 software]$ unzip linuxamd64_12102_grid_1of2.zip
[grid@racp1vm1 software]$ unzip linuxamd64_12102_grid_2of2.zip
[grid@racp1vm1 software]$ rm linuxamd64_12102_grid_1of2.zip linuxamd64_12102_grid_2of2.zip

It is necessary to create the path for the new ORACLE_HOME before the installation, as /u01/app/12.1.0 is locked (that is, owned by root and not writable by oinstall, which is the default Grid Infrastructure behavior):

[root@racp1vm1 app] mkdir /u01/app/12.1.0/grid_2_0
[root@racp1vm1 app] chown grid:oinstall /u01/app/12.1.0/grid_2_0
[root@racp1vm1 app] ssh racp1vm2
root@racp1vm2's password: 
Last login: Thu Nov  5 14:52:57 2015 from racp1vm1
[root@racp1vm2 ~] mkdir /u01/app/12.1.0/grid_2_0
[root@racp1vm2 ~] chown grid:oinstall /u01/app/12.1.0/grid_2_0

I’ll use my favorite method for installing the binaries only without doing any configuration:

[grid@racp1vm1 software]$ cd grid
./runInstaller \
      ORACLE_HOSTNAME=racp1vm1.dbi.lab \
      INVENTORY_LOCATION=/u01/app/oraInventory \
      SELECTED_LANGUAGES=en \
      oracle.install.option=UPGRADE \
      ORACLE_BASE=/u01/app/grid \
      ORACLE_HOME=/u01/app/12.1.0/grid_2_0 \
      oracle.install.asm.OSDBA=asmdba \
      oracle.install.asm.OSOPER=asmoper \
      oracle.install.asm.OSASM=asmadmin \
      oracle.install.crs.config.clusterName=racp1vm-cluster \
      oracle.install.crs.config.gpnp.configureGNS=false \
      oracle.install.crs.config.autoConfigureClusterNodeVIP=true \
      oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS \
      oracle.install.crs.config.clusterNodes=racp1vm1:,racp1vm2: \
      oracle.install.crs.config.storageOption=LOCAL_ASM_STORAGE \
      oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=NORMAL \
      oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=NORMAL \
      oracle.install.asm.diskGroup.name=CRS \
      oracle.install.asm.diskGroup.AUSize=1 \
      oracle.install.crs.config.ignoreDownNodes=false \
      oracle.install.config.managementOption=NONE \
      -ignoreSysPrereqs \
      -ignorePrereq \
      -waitforcompletion \
      -silent

If the above runs fine the output should look similar to this:

As a root user, execute the following script(s):
	1. /u01/app/12.1.0/grid_2_0/rootupgrade.sh

Execute /u01/app/12.1.0/grid_2_0/rootupgrade.sh on the following nodes: 
[racp1vm1, racp1vm2]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.

Successfully Setup Software.
As install user, execute the following script to complete the configuration.
	1. /u01/app/12.1.0/grid_2_0/cfgtoollogs/configToolAllCommands RESPONSE_FILE=

 	Note:
	1. This script must be run on the same host from where installer was run. 
	2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).

Time for the upgrade on the first node:

[root@racp1vm1 ~] /u01/app/12.1.0/grid_2_0/rootupgrade.sh
Check /u01/app/12.1.0/grid_2_0/install/root_racp1vm1.dbi.lab_2015-11-23_12-32-03.log for the output of root script

The contents of the logfile should look similar to this:

Check /u01/app/12.1.0/grid_2_0/install/root_racp1vm2.dbi.lab_2015-11-23_12-31-02.log for the output of root script
[root@racp1vm1 ~]# tail -f /u01/app/12.1.0/grid_2_0/install/root_racp1vm1.dbi.lab_2015-11-23_12-32-03.log
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid_2_0/crs/install/crsconfig_params
2015/11/23 12:32:03 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 12:32:25 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 12:32:28 CLSRSC-464: Starting retrieval of the cluster configuration data

2015/11/23 12:32:32 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2015/11/23 12:32:33 CLSRSC-363: User ignored prerequisites during installation

2015/11/23 12:32:41 CLSRSC-515: Starting OCR manual backup.

2015/11/23 12:32:42 CLSRSC-516: OCR manual backup successful.

2015/11/23 12:32:45 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode

2015/11/23 12:32:45 CLSRSC-482: Running command: '/u01/app/12.1.0/grid_1_0/bin/crsctl start rollingupgrade 12.1.0.2.0'

CRS-1131: The cluster was successfully set to rolling upgrade mode.
2015/11/23 12:32:50 CLSRSC-482: Running command: '/u01/app/12.1.0/grid_2_0/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/12.1.0/grid_1_0 -oldCRSVersion 12.1.0.1.0 -nodeNumber 1 -firstNode true -startRolling false'

ASM configuration upgraded in local node successfully.

2015/11/23 12:32:53 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode

2015/11/23 12:32:53 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2015/11/23 12:34:14 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2015/11/23 12:36:53 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2015/11/23 12:41:00 CLSRSC-472: Attempting to export the OCR

2015/11/23 12:41:00 CLSRSC-482: Running command: 'ocrconfig -upgrade grid oinstall'

2015/11/23 12:41:03 CLSRSC-473: Successfully exported the OCR

2015/11/23 12:41:08 CLSRSC-486: 
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.

2015/11/23 12:41:08 CLSRSC-541: 
 To downgrade the cluster: 
 1. All nodes that have been upgraded must be downgraded.

2015/11/23 12:41:08 CLSRSC-542: 
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.

2015/11/23 12:41:08 CLSRSC-543: 
 3. The downgrade command must be run on the node racp1vm2 with the '-lastnode' option to restore global configuration data.

2015/11/23 12:41:39 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR. 
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2015/11/23 12:41:44 CLSRSC-474: Initiating upgrade of resource types

2015/11/23 12:41:50 CLSRSC-482: Running command: 'upgrade model  -s 12.1.0.1.0 -d 12.1.0.2.0 -p first'

2015/11/23 12:41:50 CLSRSC-475: Upgrade of resource types successfully initiated.

2015/11/23 12:41:52 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Now we can do the same thing on the second node:

[root@racp1vm2 ~] /u01/app/12.1.0/grid_2_0/rootupgrade.sh
Check /u01/app/12.1.0/grid_2_0/install/root_racp1vm2.dbi.lab_2015-11-23_13-43-30.log for the output of root script

Again, have a look at the logfile to confirm that everything went fine:

Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid_2_0/crs/install/crsconfig_params
2015/11/23 13:43:30 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 13:43:30 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 13:43:39 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 13:43:50 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 13:43:51 CLSRSC-464: Starting retrieval of the cluster configuration data

2015/11/23 13:43:55 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2015/11/23 13:43:55 CLSRSC-363: User ignored prerequisites during installation

ASM configuration upgraded in local node successfully.

2015/11/23 13:44:03 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2015/11/23 13:45:26 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2015/11/23 13:46:36 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2015/11/23 13:50:05 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR. 
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2015/11/23 13:50:10 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded

2015/11/23 13:50:10 CLSRSC-482: Running command: '/u01/app/12.1.0/grid_2_0/bin/crsctl set crs activeversion'

Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2015/11/23 13:51:19 CLSRSC-479: Successfully set Oracle Clusterware active version

2015/11/23 13:51:25 CLSRSC-476: Finishing upgrade of resource types

2015/11/23 13:51:31 CLSRSC-482: Running command: 'upgrade model  -s 12.1.0.1.0 -d 12.1.0.2.0 -p last'

2015/11/23 13:51:32 CLSRSC-477: Successfully completed upgrade of resource types

2015/11/23 13:51:55 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

That’s it. We can confirm the version by issuing:

[root@racp1vm2 ~] /u01/app/12.1.0/grid_2_0/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[root@racp1vm2 ~] /u01/app/12.1.0/grid_2_0/bin/crsctl query crs softwareversion
Oracle Clusterware version on node [racp1vm2] is [12.1.0.2.0]
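
As an extra sanity check (my own addition, not part of the original output), it is worth verifying that the cluster stack and all resources came back online on both nodes:

[grid@racp1vm1 ~]$ /u01/app/12.1.0/grid_2_0/bin/crsctl check cluster -all
[grid@racp1vm1 ~]$ /u01/app/12.1.0/grid_2_0/bin/crsctl stat res -t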

Hope this helps.

 

Cet article Upgrading the Grid Infrastructure from 12.1.0.1 to 12.1.0.2 on the command line est apparu en premier sur Blog dbi services.

OCM 12c preparation: Create CDB in command line

This post starts a series about things I wrote while preparing the OCM 12c upgrade exam. Everything in those posts was written before taking the exam – so don’t expect any clue about the exam here. It’s based only on the exam topics, and only on those points I wanted to brush up, so don’t expect it to be a comprehensive list of points to know for the exam.
Let’s start by creating a CDB manually, as it is something I never do in real life (dbca is the recommended way), but as it is still documented, it may be something to know.

I usually put code and output in my blog posts. But here the goal is to practice, so there are only the commands to run. If you have the same environment as mine, a simple copy/paste would do it. But you will probably have to adapt.

Documentation

Information about the exam says: Be prepared to use the non-searchable documentation during the exam, to help you with correct syntax.
Documentation about the ‘Create and manage pluggable databases’ topic is mostly in the Oracle® Database Administrator’s Guide. Search for ‘multitenant’, expand ‘Creating and Configuring a CDB’ and then you have the create CDB statement in ‘Creating a CDB with the CREATE DATABASE Statement’

Environment

You will need to have ORACLE_HOME set and $ORACLE_HOME/bin in the PATH.
If you have a doubt, find the inventory location and get the Oracle Home from inventory.xml:

cat /etc/oraInst.loc
cat /u01/app/oraInventory/ContentsXML/inventory.xml

Then I set the ORACLE SID:

export ORACLE_SID=CDB

Instance password file

I’ll put ‘oracle’ for all passwords:

cd $ORACLE_HOME/dbs
orapwd file=orapw$ORACLE_SID <<< oracle

Instance init.ora

In the dbs subdirectory there is a sample init.ora
I copy it and change what I need to change, here with ‘sed’ but of course you can do it manually

cp init.ora init$ORACLE_SID.ora
sed -i -e"s?<ORACLE_BASE>?$ORACLE_BASE?g" init$ORACLE_SID.ora
sed -i -e"s?ORCL?$ORACLE_SID?i" init$ORACLE_SID.ora
sed -i -e"s?^compatible?#&?" init$ORACLE_SID.ora
# using ASMM instead of AMM (because I don't like it)
sed -i -e"s?^memory_target=?sga_target=?" init$ORACLE_SID.ora
sed -i -e"s?ora_control.?$ORACLE_BASE/oradata/CDB/&.dbf?g" init$ORACLE_SID.ora
sed -i -e"$" init$ORACLE_SID.ora
echo enable_pluggable_database=true >> init$ORACLE_SID.ora
cat init$ORACLE_SID.ora
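
For reference, and assuming you started from the stock sample init.ora, the resulting file should look roughly like this (the values below are illustrative, not a verbatim copy of my file):

db_name='CDB'
sga_target=1G
processes=150
audit_file_dest='/u01/app/oracle/admin/CDB/adump'
db_recovery_file_dest='/u01/app/oracle/fast_recovery_area'
db_recovery_file_dest_size=2G
diagnostic_dest='/u01/app/oracle'
control_files = (/u01/app/oracle/oradata/CDB/ora_control1.dbf, /u01/app/oracle/oradata/CDB/ora_control2.dbf)
#compatible ='11.2.0'
enable_pluggable_database=true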

In case I can choose the OMF example, I set the destinations

echo db_create_file_dest=$ORACLE_BASE/oradata/CDB >> init$ORACLE_SID.ora
echo db_create_online_log_dest_1=$ORACLE_BASE/oradata/CDB >> init$ORACLE_SID.ora
echo db_create_online_log_dest_2=$ORACLE_BASE/oradata/CDB >> init$ORACLE_SID.ora

From the documentation you can choose the CREATE DATABASE statement for non-OMF or for OMF. I choose the first one, and once again, here it is with ‘sed’ replacements that fit my environment:

sed -e "s/newcdb/CDB/g" \
-e "s?/u0./logs/my?$ORACLE_BASE/oradata/CDB?g" \
-e "s?/u01/app/oracle/oradata?$ORACLE_BASE/oradata?g" \
-e "s/[^ ]*password/oracle/g" > /tmp/createCDB.sql <<END
CREATE DATABASE newcdb
USER SYS IDENTIFIED BY sys_password
USER SYSTEM IDENTIFIED BY system_password
LOGFILE GROUP 1 ('/u01/logs/my/redo01a.log','/u02/logs/my/redo01b.log')
SIZE 100M BLOCKSIZE 512,
GROUP 2 ('/u01/logs/my/redo02a.log','/u02/logs/my/redo02b.log')
SIZE 100M BLOCKSIZE 512,
GROUP 3 ('/u01/logs/my/redo03a.log','/u02/logs/my/redo03b.log')
SIZE 100M BLOCKSIZE 512
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 1024
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET AL16UTF16
EXTENT MANAGEMENT LOCAL
DATAFILE '/u01/app/oracle/oradata/newcdb/system01.dbf'
SIZE 700M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
SYSAUX DATAFILE '/u01/app/oracle/oradata/newcdb/sysaux01.dbf'
SIZE 550M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
DEFAULT TABLESPACE deftbs
DATAFILE '/u01/app/oracle/oradata/newcdb/deftbs01.dbf'
SIZE 500M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE tempts1
TEMPFILE '/u01/app/oracle/oradata/newcdb/temp01.dbf'
SIZE 20M REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
UNDO TABLESPACE undotbs1
DATAFILE '/u01/app/oracle/oradata/newcdb/undotbs01.dbf'
SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
ENABLE PLUGGABLE DATABASE
SEED
FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/newcdb/',
'/u01/app/oracle/oradata/pdbseed/')
SYSTEM DATAFILES SIZE 125M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED
SYSAUX DATAFILES SIZE 100M
USER_DATA TABLESPACE usertbs
DATAFILE '/u01/app/oracle/oradata/pdbseed/usertbs01.dbf'
SIZE 200M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
END

I’ve written it in /tmp/createCDB.sql that I’ll run later.

Create database

For whatever reason, in case you have to clean up a previous attempt that left shared memory segments behind:

ipcs -m | awk '/oracle/{print "ipcrm -m "$2}' | sh -x

Now create the required directories, run the create database script I’ve created before, and follow the steps in the documentation:

mkdir -p $ORACLE_BASE/oradata/CDB $ORACLE_BASE/admin/$ORACLE_SID/adump
mkdir -p $ORACLE_BASE/oradata/CDB $ORACLE_BASE/oradata/pdbseed
mkdir -p $ORACLE_BASE/fast_recovery_area
PATH=$ORACLE_HOME/perl/bin/:$PATH sqlplus / as sysdba
startup pfile=initCDB.ora nomount
create spfile from pfile;
start /tmp/createCDB.sql
@?/rdbms/admin/catcdb.sql
oracle
oracle
temp
quit

Note that I’ve added $ORACLE_HOME/perl/bin to the PATH because this is required for catcdb.sql.

The catcdb.sql is the long part here (it runs catalog and catproc on all containers – CDB$ROOT and PDB$SEED for the moment). Which means that if there is an exam exercise where I have to create a database, it’s better to start that directly and read / prepare the other questions during that time.
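
Once catcdb.sql has completed, a quick sanity check (my own addition, not required) confirms that the database is a CDB and that the root and the seed are there:

SQL> select name, cdb from v$database;
SQL> select con_id, name, open_mode from v$containers;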

Once done, you want to protect your database and run a backup. We will see that later.

Listener

I probably want a listener, and to see my service registered immediately:

lsnrctl start
sqlplus / as sysdba
alter system register;

EM Express

I’m not sure EM Express helps a lot, but let’s start it:

exec DBMS_XDB_CONFIG.SETHTTPPORT(5500);

And I can access it at http://localhost:5500/em

oratab


echo CDB:$ORACLE_HOME:Y >> /etc/oratab

SQL Developer

If I have SQL Developer I’ll use it, at least to generate SQL statements for which I don’t know the exact syntax. It’s easier than going to the documentation, copy/paste, change, etc.
I really hope that SQL Developer is there for the exam, as EM Express does not have all the features we had in the 11g dbconsole.

You can create local connections to your CDB with a simple click:
Capture12COCMU-CreatePDB-004

Backup

Everything that takes time needs a backup, because you don’t want to do it again in case of failure.
Let’s put the database in archivelog mode and run a backup:

rman target /
report schema;
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
backup database;

It’s an online backup, so no problem to continue with operations that don’t need an instance restart.
The next part will be about creating pluggable databases.

 

Cet article OCM 12c preparation: Create CDB in command line est apparu en premier sur Blog dbi services.

OCM 12c preparation: Manage PDB

Let’s see the different ways to create a PDB, with different tools.
Same disclaimer here as in the first post of the series: don’t expect to get those posts close to what you will have at the exam, but they cover important points that match the exam topics.

Documentation

Information about the exam says: Be prepared to use the non-searchable documentation during the exam, to help you with correct syntax.
Documentation about the ‘Create and manage pluggable databases’ topic is mostly in the Oracle® Database Administrator’s Guide. Search for ‘multitenant’, expand ‘Creating and Removing PDBs with SQL*Plus’

You find all examples there. Remember that creating a PDB is always done from another one:

  • from PDB$SEED
  • from another PDB in your CDB
  • from another PDB in a remote CDB (need to create a db link)
  • from an unplugged PDB
  • from a non-CDB

and then you will name your datafiles with a conversion from the original ones.

Don’t forget to create the directories if you are not in OMF.
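
As a minimal example (paths and names are purely illustrative, assuming a non-OMF setup like the CDB created in the previous post), creating a PDB from the seed can look like this:

$ mkdir -p /u01/app/oracle/oradata/PDB1
SQL> create pluggable database PDB1
  admin user pdbadmin identified by oracle
  file_name_convert=('/u01/app/oracle/oradata/pdbseed/','/u01/app/oracle/oradata/PDB1/');
SQL> alter pluggable database PDB1 open;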

SQL Developer

SQL Developer is your friend. It is designed to help you. I use it in the following way:

  • SQL Worksheet is a nice notepad. Even if you finally paste the statements into sqlplus, the SQL Worksheet is graphical, has colors, and can also run statements from there ;)
  • The SQL Reference documentation is classified by statements. SQL Developer is classified by objects. The right-click context menu shows you what you can do on a table, on a materialized view, etc.
  • It shows what your options are and can show you the generated SQL statement if you finally want it

I’ll show you an example. You have several ways to name the files when you create a pluggable database, using the convert pairs. But if you have more than one pattern to replace, it’s not easy. Let’s use SQL Developer for that.

In the DBA tab, right click on the Container Database and you have all possible actions on it:

Capture12COCMU-CreatePDB-000

Here are all the options for the CREATE PLUGGABLE DATABASE statement. Easier than going to the documentation:

Capture12COCMU-CreatePDB-001

Above I’ve chosen ‘Custom Names’ to list all files. Then let’s get the SQL:

Capture12COCMU-CreatePDB-002

Now, I prefer to continue in the SQL Worksheet and I can paste it there. I have a file_name_convert pair for each file, so that I can change what I want:

Capture12COCMU-CreatePDB-003

SQL Developer is really a good tool.
When you unplug a PDB, it is still referenced by the original database. Then if you plug it elsewhere without renaming the files, the risk is that you drop its datafiles from the original container database.
The best recommendation is to remove it immediately from the original CDB, and this is exactly what SQL Developer does:
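
For the unplug case, here is a minimal hand-written sketch of that sequence (names are illustrative): close the PDB, unplug it, then drop it from the original CDB while keeping the datafiles so that it can be plugged in elsewhere:

SQL> alter pluggable database PDB1 close immediate;
SQL> alter pluggable database PDB1 unplug into '/u01/app/oracle/oradata/PDB1.xml';
SQL> drop pluggable database PDB1 keep datafiles;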

dbca

DBCA is not my preferred tool to create a PDB, but let’s try it.

Let’s start with some troubleshooting (which is not what you want to do at an exam):
Capture12COCMU-CreatePDB-005

Well, it is open. Let’s troubleshoot. The dbca log is in $ORACLE_BASE/cfgtoollogs/dbca and I found the following:

[pool-1-thread-1] [ 2015-11-29 19:22:42.910 CET ] [PluggableDatabaseUtils.isDatabaseOpen:303] Query to check if DB is open= select count(*) from v$database where upper(db_unique_name)=upper('CDB') and upper(open_mode)='READ WRITE'
...
[pool-1-thread-1] [ 2015-11-29 19:22:43.034 CET ] [PluggableDatabaseUtils.isDatabaseOpen:334] DB is not open

Actually, I’ve no DB_UNIQUE_NAME in v$database:

SQL> select db_unique_name from v$database;

DB_UNIQUE_NAME
------------------------------


I have the db_unique_name set at the instance level:

SQL> show parameter uniq
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string CDB

but it’s the default (equals the db_name) as I didn’t set it in the init.ora when I created the CDB manually.
Let’s try to set it:

SQL> alter system set db_unique_name='CDB' scope=spfile;
alter system set db_unique_name='CDB' scope=spfile
*
ERROR at line 1:
ORA-32001: write to SPFILE requested but no SPFILE is in use

OK, now I understand. I created the spfile but haven’t restarted the instance since then.

SQL> startup force
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size 2932632 bytes
Variable Size 335544424 bytes
Database Buffers 729808896 bytes
Redo Buffers 5455872 bytes
Database mounted.
Database opened.
SQL> show spparameter unique
SQL> select db_unique_name from v$database;

DB_UNIQUE_NAME
------------------------------
CDB
 

Here it is. It’s not set in the spfile, but it takes the default. When we start with a pfile where it’s not set, it does not show up in V$DATABASE.

My conclusion for the moment is: if you didn’t create the database with DBCA there is no reason to try to use it later.

And the most important thing when you create a PDB is written in the doc:

 

Cet article OCM 12c preparation: Manage PDB est apparu en premier sur Blog dbi services.
