
How to read XML database alert log?


Since Oracle 11g, Oracle maintains two copies of the database's alert log in the ADR: a flat text file in the trace sub-directory and an XML version in the alert folder. I recently had a case at a customer where the log.xml had been moved to another location and compressed for archiving reasons. As the regular text file no longer contained the old data, the goal was to read the archived XML file.

When the file is still located in its normal location, it’s very easy to read it using the command “show alert” in ADRCI.

oracle@vmtestol6:/home/oracle/ [DB121] adrci
ADRCI: Release 12.1.0.2.0 - Production on Thu Dec 3 16:20:31 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u00/app/oracle"
adrci> show homes
ADR Homes: 
diag/rdbms/db121_site1/DB121
diag/tnslsnr/vmtestol6/listener
adrci> set home diag/rdbms/db121_site1/DB121
adrci> show alert

ADR Home = /u00/app/oracle/diag/rdbms/db121_site1/DB121:
*************************************************************
Output the results to file: /tmp/alert_3268_13985_DB121_1.ado

So ADRCI is able to parse all the <msg> tags and convert them into something readable; there is no need to write a parser.

To avoid losing information by overwriting the current file, we cannot simply put the archived file back into its original location.
The trick is to create a temporary diagnostic directory and use ADRCI from there to view the alert log.
There is no need to use the same DB name, but it's important to re-create the diagnostic folder hierarchy, otherwise you'll get an error when trying to set the ADR base.

oracle@vmtestol6:/tmp/ [DB121] adrci
ADRCI: Release 12.1.0.2.0 - Production on Thu Dec 3 22:21:52 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u00/app/oracle"
adrci> set base /tmp
DIA-48447: The input path [/tmp] does not contain any ADR homes

Let’s create the hierarchy expected by ADRCI:

oracle@vmtestol6:/tmp/ [DB121] ls -l log_20151203.zip
-rw-r--r--. 1 oracle oinstall 162357  3 déc.  16:28 log_20151203.zip
oracle@vmtestol6:/tmp/ [DB121] mkdir -p diag/rdbms/db1/db1/alert
oracle@vmtestol6:/tmp/ [DB121] unzip log_20151203.zip -d diag/rdbms/db1/db1/alert
Archive:  log_20151203.zip
  inflating: diag/rdbms/db1/db1/alert/log.xml  
oracle@vmtestol6:/tmp/ [DB121] adrci
ADRCI: Release 12.1.0.2.0 - Production on Thu Dec 3 17:04:27 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/tmp"
adrci> show alert
...
2015-12-02 16:00:26.469000 +01:00
Instance shutdown complete
:w alert_DB121.log
"alert_DB121.log" [New] 7676L, 321717C written

Then it's easy to save the file back to flat text format! ADRCI also allows running some commands to look for errors and so on…
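
As a side note (an addition of mine, not part of the original workflow): when the alert log is still in its normal ADR location, SYS can also query the XML file directly from SQL through the fixed table X$DBGALERTEXT. A minimal sketch:

-- run as SYS; reads the log.xml of the current instance ADR home
select originating_timestamp, message_text
from x$dbgalertext
where originating_timestamp > sysdate - 1   -- last 24 hours
order by originating_timestamp;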

 



OCM 12c preparation: Explain fast refresh


There are some rules to follow to be able to fast refresh a materialized view (which means refreshing it so that it is not stale, without re-running the whole query). The documentation is in the Data Warehousing Guide, but we can use Enterprise Manager to get to our goal quickly.

Let’s see which tables we have:

CaptureMV001

and create materialized views:

CaptureMV002

I want to materialize the following join and group by:

select deptno,dname,count(*),sum(sal)
from scott.dept join scott.emp using (deptno)
group by deptno,dname

When entering the query, you can run 'Explain':

CaptureMV003

Here is what the explain displays:

CaptureMV004

For fast refresh we need materialized view logs, so let's create them:

CaptureMV005

I keep the default (primary key) and choose the other columns I will use in my materialized view:

CaptureMV006

I do it for both tables, here is what ‘show sql’ displays:

CREATE MATERIALIZED VIEW LOG ON SCOTT.DEPT NOCACHE WITH PRIMARY KEY ("DNAME") EXCLUDING NEW VALUES
CREATE MATERIALIZED VIEW LOG ON SCOTT.EMP NOCACHE WITH PRIMARY KEY ("DEPTNO", "SAL") EXCLUDING NEW VALUES

Note that there is no comma between the with clause and the column list. If you put one, you can have strange behaviour.

so here they are:

CaptureMV007

and let’s explain our mview again:

CaptureMV008

In order to support fast refresh for all kinds of DML, I need to add the following to the materialized view logs:

ALTER MATERIALIZED VIEW LOG ON DEPT ADD SEQUENCE, ROWID INCLUDING NEW VALUES;
ALTER MATERIALIZED VIEW LOG ON EMP ADD SEQUENCE, ROWID INCLUDING NEW VALUES;

You can do it from the GUI, but I don't want to navigate through the screens again.

So the result is that my materialized view supports fast refresh. But there was something else:

CaptureMV012

In order to maintain the SUM() and because the SAL column may be null, we need to keep a count of non null values.

select deptno,dname,count(*),sum(sal),count(sal)
from scott.dept join scott.emp using (deptno)
group by deptno,dname

Now everything is ok:

CaptureMV009

I can use the ‘get recommendation’ to see that there’s nothing else to do:

CaptureMV010

Here is the SQL generated:

CREATE MATERIALIZED VIEW "SCOTT"."MV_EMP_DEPT" USING INDEX REFRESH FORCE ON DEMAND ENABLE QUERY REWRITE AS
select deptno,dname,count(*),sum(sal),count(sal)
from scott.dept join scott.emp using (deptno)
group by deptno,dname
BEGIN DBMS_STATS.GATHER_TABLE_STATS(ownname =>'SCOTT', tabname => 'MV_EMP_DEPT'); END;
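
As a quick check (not shown in the EM screenshots, so consider it a sketch), you can trigger a fast refresh explicitly and verify how it was performed:

exec dbms_mview.refresh('SCOTT.MV_EMP_DEPT', method => 'F')

select mview_name, last_refresh_type, staleness
from dba_mviews
where mview_name = 'MV_EMP_DEPT';

If everything is in place, LAST_REFRESH_TYPE shows FAST.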

dbms_mview

If you don’t have Enterprise Manager, you can do the same manually.

First create the table to store the result:

$ ( cd $ORACLE_HOME/rdbms/admin ; ls *xmv*sql; )
utlxmv.sql

(yes I don’t have to remember the name, I just remember ‘xmv’ for explain mview and ‘xrw’ for explain rewrite)

$ sqlplus scott/tiger @ ?/rdbms/admin/utlxmv.sql
 
SQL*Plus: Release 11.2.0.3.0 Production on Thu Dec 17 15:09:36 2015
Copyright (c) 1982, 2011, Oracle. All rights reserved.
 
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
 
Table created.
 
SQL> l
1 CREATE TABLE MV_CAPABILITIES_TABLE
...
 

And here is the result after running

exec dbms_mview.explain_mview('SCOTT.MV_EMP_DEPT');
select * from scott.MV_CAPABILITIES_TABLE;

on SQL Developer
CaptureMV013

That was about refresh. The rewrite capabilities can be explained in a similar way, but that was covered in a previous blog post.

 


OCM 12c preparation: Data Guard with OEM


I had never created a Data Guard configuration from Enterprise Manager. It's not that I don't like GUIs, but it is a lot easier to document when done from the command line: copy and paste the commands (actually I write the commands in the documentation and then copy them to execute them, so that I'm sure about the documentation). But for OCM 12c preparation, I want to be sure I can do it from OEM, as it can be faster and prevents missing a step.

However, sometimes it fails… Let’s see the Data Guard creation state after a failure on the final steps.

Ok, the job failed but after the standby creation:

Capture006
For whatever reason (I don't understand why you would want to copy external files to the standby server; better to put them on a shared filesystem), it failed here.

However, the job is nearly done and I don’t want to restart it from scratch.

Capture007

'Create Standby Database' includes the duplicate, which is the longest step.

But OEM does not see the standby:

Capture001

Let’s click on ‘Add Standby Database’ but then cancel:

Capture002

and here is the Data Guard administration page:

Capture003

The standby is there, which means that the broker configuration is done.
But if I want to do something from there:

Capture004

I can't until both databases are registered:

Capture005

At that point, I will not waste time in Cloud Control: the broker is set up and most operations can be done with simple commands.

Snapshot standby

Let’s convert the physical standby to snapshot standby.

I check the syntax:

DGMGRL> help convert
 
Converts a database from one type to another
 
Syntax:
 
CONVERT DATABASE TO
{ SNAPSHOT STANDBY | PHYSICAL STANDBY };
 

then convert:

DGMGRL> convert database "CDB112" to snapshot standby;
Converting database "CDB112" to a Snapshot Standby database, please wait...
Database "CDB112" converted successfully

And now convert back to physical standby

DGMGRL> convert database "CDB112" to physical standby;
Converting database "CDB112" to a Physical Standby database, please wait...
Operation requires shut down of instance "CDB112" on database "CDB112"
Shutting down instance "CDB112"...
ORA-01017: invalid username/password; logon denied
 
Warning: You are no longer connected to ORACLE.
 
Please complete the following steps and reissue the CONVERT command:
shut down instance "CDB112" of database "CDB112"
start up and mount instance "CDB112" of database "CDB112"

Argh… I connected / as sysdba…
Let’s do it again:

DGMGRL> connect sys/oracle
Connected as SYSDG.
DGMGRL> convert database "CDB112" to physical standby;
Converting database "CDB112" to a Physical Standby database, please wait...
Operation requires shut down of instance "CDB112" on database "CDB112"
Shutting down instance "CDB112"...
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires start up of instance "CDB112" on database "CDB112"
Starting instance "CDB112"...
ORACLE instance started.
Database mounted.
Continuing to convert database "CDB112" ...
Database "CDB112" converted successfully

Here it is.

Now enabling FSFO

The configuration created by OEM is in MaxPerformance with ASYNC log shipping, which is not ok for FSFO
(‘show database verbose’ if you don’t remember the properties)


DGMGRL> edit database "CDB112" set property LogXptMode='SYNC';
DGMGRL> edit database "CDB111" set property LogXptMode='SYNC';
DGMGRL> edit configuration set protection mode as maxavailability;

The second requirement is to be able to flash back the databases in order to reinstate them:


CDB111 SQL> alter database flashback on;
DGMGRL> edit database "CDB111" set property FastStartFailoverTarget='CDB112';
DGMGRL> edit database "CDB112" set property FastStartFailoverTarget='CDB111';
DGMGRL> edit database "CDB112" set state='apply-off';
CDB112 SQL> alter database flashback on;
DGMGRL> edit database "CDB112" set state='apply-on';

Then I can enable FSFO (‘help enable’ if you don’t remember the command)


DGMGRL> ENABLE FAST_START FAILOVER
Enabled.
DGMGRL> start observer

Let’s crash the primary:

DGMGRL> show configuration

Configuration - CDB111_vm111

Protection Mode: MaxAvailability
Members:
CDB111 - Primary database
CDB112 - (*) Physical standby database

DGMGRL> shutdown abort
ORACLE instance shut down.

and here is what I can see at the observer:

23:28:03.00 Friday, December 18, 2015
Initiating Fast-Start Failover to database "CDB112"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "CDB112"
23:28:34.34 Friday, December 18, 2015

failover is done.

And then when restarting the failed server:

23:54:51.34 Friday, December 18, 2015
Initiating reinstatement for database "CDB111"...
Reinstating database "CDB111", please wait...
Reinstatement of database "CDB111" succeeded
23:55:18.93 Friday, December 18, 2015

This is FSFO: no manual intervention, automatic failover and automatic reinstate.
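
As a complement (a sketch of mine, not from the original post), the FSFO state can also be checked from SQL*Plus on either database:

select db_unique_name, database_role, fs_failover_status, fs_failover_current_target
from v$database;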

Conclusion

This is the way I approach Enterprise Manager: I use it as long as it works well (it saves time when I don't remember the syntax, and it saves typing).
But on any issue, I go back to the basics, or I would waste time troubleshooting the GUI in addition to the actual issue.

 


OCM 12c preparation: RAT in multitenant


I have several customers with cases where Real Application Testing could be interesting, but they don't use it because it's an expensive option. That is probably why it's the topic listed for the OCM 12c exam where I have the least experience. And I didn't even know at which level (CDB or PDB) it has to be run in multitenant. So I tested it and came to a surprise.

In Enterprise Manager, when you select Database Replay from a PDB:

CaptureReplay002

It seems that you go back to CDB level:

CaptureReplay003

Capture

From there I ran a small capture, using the simplest and default settings from the OEM wizard, and got a capture that has no reference to the PDB:

CaptureReplay004

Don't hesitate to comment here, because it looks strange to me that I cannot capture at PDB level.

Replay

So while the capture was running, I’ve created the SCOTT schema with utlsampl.sql and I’ve raised all salaries in EMP.
Now, to replay in the same state, I’ve re-created the SCOTT schema.

And once again, I used all the defaults in the OEM wizard. But the replay had 2 errors:

CaptureReplayError

There’s probably a way to see the statements, but first I check the error messages:

[oracle@VM111 ~]$ oerr ora 01918
01918, 00000, "user '%s' does not exist"
// *Cause: User does not exist in the system.
// *Action: Verify the user name is correct.
[oracle@VM111 ~]$ oerr ora 65049
65049, 00000, "creation of local user or role is not allowed in CDB$ROOT"
// *Cause: An attempt was made to create a local user or role in CDB$ROOT.
// *Action: If trying to create a common user or role, specify CONTAINER=ALL.
//

As I know that the capture, running utlsampl.sql, did DROP USER SCOTT and then CREATE USER SCOTT, I can imagine that the replay was running on the CDB$ROOT.

I did the replay again, and the reason is clear. Because I'm at CDB level, the default connection string for the replay clients connects to the CDB$ROOT:

CaptureReplayErrorCDB

Let’s change it to PDB service name:

CaptureReplayErrorPDB

At that point I thought that the workload replay client had to connect to the PDB, but:


[oracle@VM111 ~]$ wrc system/oracle@//vm111/PDB replaydir=/tmp/replay
 
Workload Replay Client: Release 12.1.0.2.0 - Production on Sun Dec 20 14:55:02 2015
 
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
 
(wrc_main_6062.trc) ORA-15554: cannot start workload replay client because the database server is not in PREPARE mode

then I connect to the CDB$ROOT and everything is ok:

[oracle@VM111 ~]$ wrc system/oracle replaydir=/tmp/replay
 
Workload Replay Client: Release 12.1.0.2.0 - Production on Sun Dec 20 14:55:17 2015
 
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
 
Wait for the replay to start (14:55:17)
Replay client 1 started (14:55:39)

With this configuration, the replay had no errors: SCOTT recreated and salaries updated.

Conclusion

My conclusion here is that everything about RAT is done at CDB level (but you can filter to capture only what happens on one PDB).
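
A quick way to double-check this (a sketch I add here) is to query the capture and replay dictionary views from the CDB root; they show the history at CDB level:

select id, name, status, start_time from dba_workload_captures;
select id, name, status, start_time from dba_workload_replays;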

 


OCM 12c preparation: restore Voting disks, OCR and ASM spfile


As in the previous posts, here are a few commands I used to check that I know how to restore the mandatory cluster files in 12c. It's what I'm doing while preparing for the OCM 12c exam, but without any clue about what will actually be at the exam.

For more details, see MOS note 1062983.1.

OCR

Here are the OCR backups:

[root@racp1vm1 ~]# ocrconfig -showbackuploc
The Oracle Cluster Registry backup location is [/u01/app/12.1.0/grid_1/cdata/]
[root@racp1vm1 ~]# ocrconfig -showbackup
 
racp1vm1 2015/11/24 17:23:27 /u01/app/12.1.0/grid_1/cdata/ws-dbi/backup00.ocr 3467666221
racp1vm1 2015/11/24 13:23:26 /u01/app/12.1.0/grid_1/cdata/ws-dbi/backup01.ocr 3467666221
racp1vm1 2015/11/24 09:23:26 /u01/app/12.1.0/grid_1/cdata/ws-dbi/backup02.ocr 3467666221
racp1vm1 2015/11/23 07:14:45 /u01/app/12.1.0/grid_1/cdata/ws-dbi/day.ocr 0
racp1vm1 2015/11/23 07:14:45 /u01/app/12.1.0/grid_1/cdata/ws-dbi/week.ocr 0
racp1vm1 2015/12/18 20:28:50 /u01/app/12.1.0/grid_1/cdata/ws-dbi/backup_20151218_202850.ocr 3467666221
racp1vm1 2015/12/18 20:28:42 /u01/app/12.1.0/grid_1/cdata/ws-dbi/backup_20151218_202842.ocr 3467666221
racp1vm1 2015/12/18 20:28:40 /u01/app/12.1.0/grid_1/cdata/ws-dbi/backup_20151218_202840.ocr 3467666221
racp1vm1 2015/11/23 09:53:43 /u01/app/12.1.0/grid_1/cdata/ws-dbi/backup_20151123_095343.ocr 3467666221
racp1vm1 2015/11/23 01:33:18 /u01/app/12.1.0/grid_1/cdata/ws-dbi/backup_20151123_013318.ocr 0

We need to stop Grid Infrastructure on all nodes

[root@racp1vm1 ~]# crsctl stop crs -f
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Stop failed, or completed with errors.
 
[root@racp1vm1 ~]# crsctl stop crs -f
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Stop failed, or completed with errors.

and start it in exclusive mode without starting CRSD

[root@racp1vm1 ~]# crsctl start crs -h
Usage:
crsctl start crs [-excl [-nocrs] [-cssonly]] | [-wait | -waithas | -nowait] |
 
[-noautostart] Start OHAS on this server
where
-excl Start Oracle Clusterware in exclusive mode
-nocrs Start Oracle Clusterware in exclusive mode without starting CRS
-nowait Do not wait for OHAS to start
-wait Wait until startup is complete and display all progress and
 
status messages
-waithas Wait until startup is complete and display OHASD progress and
 
status messages
-cssonly Start only CSS
-noautostart Start only OHAS


[root@racp1vm1 ~]# crsctl start crs -excl -nocrs

Then we are able to restore the OCR:

[root@racp1vm1 grid]# ocrconfig -restore /u01/app/12.1.0/grid_1/cdata/ws-dbi/backup_20151123_095343.ocr

Voting disks

Here are my voting disks:

[root@racp1vm1 grid]# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 22bf1a8f5f634fafbf521c83d8bf5982 (/dev/sdb1) [DATA]
Located 1 voting disk(s).

I can delete it and re-create it:

[root@racp1vm1 grid]# crsctl delete css votedisk +DATA
CRS-4611: Successful deletion of voting disk +DATA.


[root@racp1vm1 grid]# crsctl replace votedisk +DATA
Successful addition of voting disk 67ad2282347d4f77bf2151e0a7c10105.
Successfully replaced voting disk group with +DATA.
CRS-4266: Voting file(s) successfully replaced

ASM spfile

Here I'm removing the ASM spfile without having a backup:

ASMCMD> spget
+DATA/ws-dbi/ASMPARAMETERFILE/registry.253.896481653
ASMCMD> rm +DATA/ws-dbi/ASMPARAMETERFILE/registry.253.896481653

But I can get the non-default parameters from the alert.log:

[root@racp1vm1 ~]# adrci exec='set home +ASM1 ; show alert'
G
?non-default

and paste them into a pfile (the 'G' and '?non-default' above are the commands typed in the vi-like viewer to jump to the end and search backwards for the non-default parameters). Note that I have to add instance_type, which is not listed as non-default:

cat > /tmp/spfile.txt
large_pool_size = 12M
remote_login_passwordfile= "EXCLUSIVE"
asm_diskgroups = "ACFSDG"
asm_diskgroups = "FRA"
asm_power_limit = 1
instance_type='asm'

Then re-create spfile:

[grid@racp1vm1 ~]$ sqlplus / as sysasm
 
SQL*Plus: Release 12.1.0.2.0 Production on Fri Dec 18 21:35:25 2015
 
Copyright (c) 1982, 2014, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
 
SQL> create spfile='+DATA' from pfile='/tmp/spfile.txt';
File created.

Now ready to restart the cluster:


[root@racp1vm1 grid]# crsctl stop crs -f
[root@racp1vm1 grid]# crsctl start crs
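
Once the cluster is back up, a quick sanity check (a sketch of mine) from the +ASM instance confirms that it started with the restored spfile:

-- connect / as sysasm on the +ASM instance
select value from v$parameter where name = 'spfile';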

 


Flashback table after multiple drop


FLASHBACK TABLE restores the latest version that is available in the recycle bin. If you did multiple drop / create cycles, you may want to restore older versions. Of course it's documented – everything is in the doc. But an example may be useful to understand it before you need it.

Let's create and drop the DEMO table several times. I change the column name each time so that I can easily check which version has been restored:

20:03:10 SQL> create table DEMO(id number constraint DEMOPK primary key , C1 char );
Table created.
20:03:12 SQL> drop table DEMO;
Table dropped.
20:03:12 SQL> create table DEMO(id number constraint DEMOPK primary key , C2 char );
Table created.
20:03:14 SQL> drop table DEMO;
Table dropped.
20:03:14 SQL> create table DEMO(id number constraint DEMOPK primary key , C3 char );
Table created.
20:03:16 SQL> drop table DEMO;
Table dropped.
20:03:16 SQL> create table DEMO(id number constraint DEMOPK primary key , C4 char );
Table created.
20:03:18 SQL> drop table DEMO;
Table dropped.
20:03:18 SQL> create table DEMO(id number constraint DEMOPK primary key , C5 char );
Table created.
20:03:20 SQL> drop table DEMO;
Table dropped.
20:03:20 SQL> create table DEMO(id number constraint DEMOPK primary key , C6 char );
Table created.
20:03:22 SQL> drop table DEMO;
Table dropped.

Here is what I have in the recycle bin:

20:03:22 SQL> select object_name,original_name,type,dropscn,createtime,droptime from user_recyclebin order by dropscn;
 
OBJECT_NAME ORIGINAL_N TYPE DROPSCN CREATETIME DROPTIME
------------------------------ ---------- ---------- ---------- ------------------- -------------------
BIN$KF+J+xYlFjngU3VOqMB9Kw==$0 DEMOPK INDEX 4350801 2016-01-02:20:03:10 2016-01-02:20:03:12
BIN$KF+J+xYmFjngU3VOqMB9Kw==$0 DEMO TABLE 4350804 2016-01-02:20:03:10 2016-01-02:20:03:12
BIN$KF+J+xYoFjngU3VOqMB9Kw==$0 DEMOPK INDEX 4350830 2016-01-02:20:03:12 2016-01-02:20:03:14
BIN$KF+J+xYpFjngU3VOqMB9Kw==$0 DEMO TABLE 4350833 2016-01-02:20:03:12 2016-01-02:20:03:14
BIN$KF+J+xYrFjngU3VOqMB9Kw==$0 DEMOPK INDEX 4350857 2016-01-02:20:03:14 2016-01-02:20:03:16
BIN$KF+J+xYsFjngU3VOqMB9Kw==$0 DEMO TABLE 4350861 2016-01-02:20:03:14 2016-01-02:20:03:16
BIN$KF+J+xYuFjngU3VOqMB9Kw==$0 DEMOPK INDEX 4350885 2016-01-02:20:03:16 2016-01-02:20:03:18
BIN$KF+J+xYvFjngU3VOqMB9Kw==$0 DEMO TABLE 4350889 2016-01-02:20:03:16 2016-01-02:20:03:18
BIN$KF+J+xYxFjngU3VOqMB9Kw==$0 DEMOPK INDEX 4350912 2016-01-02:20:03:18 2016-01-02:20:03:20
BIN$KF+J+xYyFjngU3VOqMB9Kw==$0 DEMO TABLE 4350915 2016-01-02:20:03:18 2016-01-02:20:03:20
BIN$KF+J+xY0FjngU3VOqMB9Kw==$0 DEMOPK INDEX 4350939 2016-01-02:20:03:20 2016-01-02:20:03:22
BIN$KF+J+xY1FjngU3VOqMB9Kw==$0 DEMO TABLE 4350943 2016-01-02:20:03:20 2016-01-02:20:03:22
 
12 rows selected.

and my goal now is to restore a previous version.

issue flashback multiple times

So the documentation says to issue multiple flashback commands:

20:03:22 SQL> flashback table DEMO to before drop;
Flashback complete.
 
20:03:22 SQL> desc DEMO;
Name Null? Type
-------------- -------- --------
ID NOT NULL NUMBER
C6 CHAR(1)

that’s the latest version. Let’s issue the command again:

20:03:22 SQL> flashback table DEMO to before drop;
flashback table DEMO to before drop
*
ERROR at line 1:
ORA-38312: original name is used by an existing object

Yes of course, I have to drop it first.

20:03:22 SQL> drop table DEMO;
Table dropped.
 
20:03:22 SQL> flashback table DEMO to before drop;
Flashback complete.
 
20:03:22 SQL> desc DEMO;
Name Null? Type
-------------- -------- --------
ID NOT NULL NUMBER
C6 CHAR(1)

Ok, as I dropped it just before, it's that latest version that is restored…

purge

If I want to issue multiple flashback table commands, I have to drop with PURGE so that the intermediate restored tables don't go back to the recycle bin:

20:03:22 SQL> drop table DEMO purge;
Table dropped.
 
20:03:23 SQL> flashback table DEMO to before drop;
Flashback complete.
 
20:03:23 SQL> desc DEMO;
Name Null? Type
-------------- -------- --------
ID NOT NULL NUMBER
C5 CHAR(1)

that’s fine: I restored the N-1 version.

rename

The other solution is to restore it to another table, either dropping that other table each time or changing the target name each time:

20:03:23 SQL> flashback table DEMO to before drop rename to DEMO1;
Flashback complete.
 
20:03:23 SQL> desc DEMO1;
Name Null? Type
-------------- -------- --------
ID NOT NULL NUMBER
C4 CHAR(1)
 
20:03:23 SQL> flashback table DEMO to before drop rename to DEMO2;
Flashback complete.
 
20:03:23 SQL> desc DEMO2;
Name Null? Type
-------------- -------- --------
ID NOT NULL NUMBER
C3 CHAR(1)

Here I rewound to two older versions.

name the recycle bin object

But there is a direct possibility if you know the version you want from the DBA_RECYCLEBIN view.
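
To locate the right BIN$ name, a small helper query (a sketch based on the recycle bin columns shown earlier) is enough:

select object_name, original_name, droptime
from user_recyclebin
where original_name = 'DEMO' and type = 'TABLE'
order by droptime;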

20:03:23 SQL> desc "BIN$KF+J+xYpFjngU3VOqMB9Kw==$0"
Name Null? Type
-------------- -------- --------
ID NOT NULL NUMBER
C2 CHAR(1)

And I restore that directly to a new table:

20:03:23 SQL> flashback table "BIN$KF+J+xYpFjngU3VOqMB9Kw==$0" to before drop rename to DEMO3;
Flashback complete.

So that’s probably the fastest way to restore an old version.

All that is possible because each time we flash back to before drop, the restored version is removed from the recycle bin.
From my example, only one version remains here:

20:03:24 SQL> select object_name,original_name,type,dropscn,createtime,droptime from user_recyclebin order by dropscn;
 
OBJECT_NAME ORIGINAL_N TYPE DROPSCN CREATETIME DROPTIME
------------------------------ ---------- ---------- ---------- ------------------- -------------------
BIN$KF+J+xYlFjngU3VOqMB9Kw==$0 DEMOPK INDEX 4350801 2016-01-02:20:03:10 2016-01-02:20:03:12
BIN$KF+J+xYmFjngU3VOqMB9Kw==$0 DEMO TABLE 4350804 2016-01-02:20:03:10 2016-01-02:20:03:12

So the safest way is probably to flash back to a different table name each time, and clean those tables up only when you're sure you don't need them anymore.

 


log file sync / user commits


When presenting 'Interpreting AWR Reports – Straight to the Goal' at UKOUG TECH15, I got a very good question about the Statspack report I was reading, which had 'log file sync' much smaller than 'user commits'. I realized that this needs a longer explanation, and that my slide was very misleading because I divided the log file sync wait time by user commits, which probably makes no sense here.

CapturePreziAWR-Commit

log file sync

'log file sync' occurs at commit time, when your session waits until all the redo protecting the transaction is written to disk. The idea is that when the end-user receives a 'commit successful' response, he expects the changes to be durable – the D in ACID – even in case of an instance crash. That means the redo must be on persistent storage.

User commits

In my presentation about reading an AWR report, I show how we must always match the event time with the end-user response time. That was probably my idea when dividing 'log file sync' by 'user commits'. But it was a bad idea here, and I'll change this slide for the next presentation (soon: http://oraclemidlands.com/) because it makes no sense.

SQL commit

I'll take simple examples to explain. In the first example I run 2000 insert + commit statements and check the session statistics:

STAT/EVENT VALUE
-------------------------------------------------- ----------
STAT user commits 2000
STAT user calls 8017
STAT redo size 1121052
WAIT log file sync 2001

As you can see here, each commit ('user commits') increases the 'log file sync' event. It may be very quick if the redo is already on disk, but the wait event is always incremented.
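
For reference, the STAT/WAIT listings in this post can be reproduced with a query like the following (a sketch, not the exact script used for the post):

select 'STAT' stat_type, name, value
from v$mystat join v$statname using (statistic#)
where name in ('user commits','user calls','redo size') and value > 0
union all
select 'WAIT', event, total_waits
from v$session_event
where sid = sys_context('userenv','sid') and event = 'log file sync';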

SQL commit write

Same with ‘commit write’ which uses the commit_logging and commit_wait parameters (the default here):

STAT/EVENT VALUE
-------------------------------------------------- ----------
STAT user commits 2000
STAT redo size 1171124
STAT commit batch/immediate requested 2000
STAT commit immediate requested 2000
STAT commit batch/immediate performed 2000
STAT commit immediate performed 2000
STAT commit wait/nowait requested 2000
STAT commit wait requested 2000
STAT commit wait/nowait performed 2000
STAT commit wait performed 2000
STAT execute count 4128
WAIT log file sync 2001

Same values here, but with more detail. From the statistics we see that a commit write IMMEDIATE WAIT was performed.

SQL commit write batch nowait

I’ll not show all combinations here. Here is BATCH (to optimize redo size to write) and NOWAIT:

STAT/EVENT VALUE
-------------------------------------------------- ----------
STAT user commits 2000
STAT redo size 1034768
STAT commit batch/immediate requested 2000
STAT commit batch requested 2000
STAT commit batch/immediate performed 2000
STAT commit batch performed 2000
STAT commit wait/nowait requested 2000
STAT commit nowait requested 2000
STAT commit wait/nowait performed 2000
STAT commit nowait performed 2000
WAIT log file sync 1

With NOWAIT, we don't wait for the log writer and we don't have any 'log file sync'. This means that the response time of the commit is nearly immediate (only the time to update the transaction table). But of course, we may lose a committed transaction if the log writer didn't have time to write it before an instance crash.

This is the case in my example, and this is the reason why 'log file sync' is lower than 'user commits'.
Actually, the example was done with Swingbench, where transactions are done in a PL/SQL procedure.

PL/SQL loop with commit

Here I call a PL/SQL procedure 100 times, and the procedure does 20 commits inside.

STAT/EVENT VALUE
-------------------------------------------------- ----------
STAT user commits 2000
STAT redo size 1044088
WAIT log file sync 101

The PL/SQL default is different: it implicitly does something like NOWAIT BATCH. The idea is that you don't care whether the redo is persisted while you are in the middle of a user call, because if the instance crashes at that point, nobody has been notified that the transaction was committed. Of course, that may not be the right behaviour if other users have been notified. We can go back to the SQL behavior by issuing a 'COMMIT WRITE'.

When the PL/SQL call exits and some commits have been done inside, an additional commit is done, in WAIT mode this time, to be sure that all redo is persisted before returning to the end-user.
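
For illustration, here is a minimal sketch of forcing the SQL behavior inside PL/SQL (the DEMO table is an assumption, not from the original test):

begin
  for i in 1 .. 20 loop
    insert into demo values (i);
    commit write wait;  -- waits for the log writer: one 'log file sync' per iteration
  end loop;
end;
/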

Conclusion

The 'log file sync' wait event is actually the one that measures the number of times the end-user has waited on a commit. And I was wrong to divide it by 'user commits'. I'm changing the slide to the following:
CapturePreziAWR-Commit2

Hope to see you in Birmingham, Tuesday 26 January, 18:00 – 21:00

 


About SSH keys in Oracle Public Cloud


When working with resources in the Oracle Public Cloud, whether IaaS or PaaS, the principal access method is based on SSH key exchange. Once our instance is created, this is going to be the first and only way of accessing it. Of course, additional ports/services can be opened afterwards. However, the question is how to manage these keys and how important they are.

In the process of creating an instance (talking about IaaS here), one prerequisite is to configure at least one SSH key pair. The principle is to generate an SSH key pair in RSA format on the gateway / jumphost which will be used to access the instance hosted in the public cloud.

ssh-keygen -b 2048 -t rsa

This generates a private and a public key. The public one then needs to be uploaded in the Oracle Public Cloud interface.

create-ssh-key

Once there, the SSH key can be assigned to an instance during the creation process.

associate-ssh-key

You can assign as many SSH keys as you want / need to a single instance. They correspond to all the machines that are allowed to access the instance. However, here is the key point!

If you miss this step, you cannot ADD or MODIFY the SSH key(s) attached to an instance afterwards.

instance-ssh-keys

As shown above, unlike for Storage or Security Lists, there is no option to configure the SSH keys once the instance is created.

Unfortunately, this currently has an uglier consequence: if you lose your key pair or make any mistake with it, you can't access your instance anymore. The only solution so far is to re-create the whole instance!

The conclusion is that while working with Oracle Public Cloud, your SSH keys for the machines accessing the instances need to be carefully integrated in a backup strategy!

Talking with Oracle people shows that this limitation is being taken into consideration and solutions should be available shortly (maybe console-like access to the instance).

Cheers

 



DataPump ‘Processing object type’ misleading messages


You've started a long DataPump export and see it stuck on TABLE/STATISTICS/TABLE_STATISTICS, but you don't expect that step to take a long time. Let's see if we can rely on that message.

I have a 500MB table and I export it. I'll use the 12c LOGTIME option to print a timestamp in front of each output line:

18-JAN-16 21:35:44.038: Starting "SOE"."SYS_EXPORT_TABLE_01": soe/********@//localhost/SWINGBENCH tables=soe.DEMO directory=tmp status=1 logtime=all reuse_dumpfiles=y
18-JAN-16 21:35:48.294: Processing object type TABLE_EXPORT/TABLE/TABLE
18-JAN-16 21:35:48.687: Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
18-JAN-16 21:35:49.532: Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
18-JAN-16 21:35:49.849: Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
18-JAN-16 21:35:50.242: Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
18-JAN-16 21:35:54.779: . . exported "SOE"."DEMO" 542.6 MB 5051504 rows
18-JAN-16 21:35:55.257: Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
18-JAN-16 21:35:55.644: Master table "SOE"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
18-JAN-16 21:35:55.693: ******************************************************************************
18-JAN-16 21:35:55.694: Dump file set for SOE.SYS_EXPORT_TABLE_01 is:
18-JAN-16 21:35:55.697: /tmp/expdat.dmp
18-JAN-16 21:35:55.738: Job "SOE"."SYS_EXPORT_TABLE_01" successfully completed at Mon Jan 18 21:35:55 2016 elapsed 0 00:00:13

If you rely on the ‘Processing’ messages, the export of table data takes 49.532 – 48.687 = 0.845 seconds. And exporting table statistics takes 55.257 – 50.242 = 5 seconds. Obviously, this is wrong.

When you look at the 'exported … rows' line, it seems that the export of table data was still running at that time.

As you can see above, I ran expdp with STATUS=1 in order to display the job status every second.

After the TABLE_EXPORT/TABLE/TABLE_DATA message, the worker was not yet executing:

Worker 1 Status:
Instance ID: 1
Instance name: CDB
Host name: VM117
Object start time: Monday, 18 January, 2016 21:35:48
Object status at: Monday, 18 January, 2016 21:35:48
Process Name: DW00
State: WORK WAITING

Then it changed to the EXECUTING state for several seconds.

Only at 21:35:51 do we see a status showing that it's working on the table export, which is more than 1 second after the related 'Processing' message:

Worker 1 Status:
Instance ID: 1
Instance name: CDB
Host name: VM117
Object start time: Monday, 18 January, 2016 21:35:51
Object status at: Monday, 18 January, 2016 21:35:51
Process Name: DW00
State: EXECUTING
Object Schema: SOE
Object Name: DEMO
Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 1
Completed Rows: 1,398,529
Worker Parallelism: 1

Then, the status displayed every second shows the same, with an increasing number of rows, until 21:35:54:

Worker 1 Status:
Instance ID: 1
Instance name: CDB
Host name: VM117
Access method: direct_path
Object start time: Monday, 18 January, 2016 21:35:51
Object status at: Monday, 18 January, 2016 21:35:54
Process Name: DW00
State: EXECUTING
Object Schema: SOE
Object Name: DEMO
Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 1
Completed Rows: 5,051,504
Completed Bytes: 569,032,224
Percent Done: 100
Worker Parallelism: 1

and this is where we get the message:

18-JAN-16 21:35:54.779: . . exported "SOE"."DEMO" 542.6 MB 5051504 rows

Then we can see the following status which suggests that TABLE_EXPORT/TABLE/STATISTICS/MARKER has started:

Worker 1 Status:
Instance ID: 1
Instance name: CDB
Host name: VM117
Object start time: Monday, 18 January, 2016 21:35:51
Object status at: Monday, 18 January, 2016 21:35:55
Process Name: DW00
State: EXECUTING
Object Schema: SYS
Object Type: TABLE_EXPORT/TABLE/STATISTICS/MARKER
Worker Parallelism: 1

but we see the related ‘Processing’ message only after it:

18-JAN-16 21:35:55.257: Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER

It's strange to take 4 seconds on that step, and anyway the 'Processing…' message comes at the end here.

My first conclusion here is that you should not rely on the 'Processing…' messages to know what is currently running.
If you've run expdp from your terminal, you can interrupt it with control-C and you get a CLI to control the worker processes. They are still running, and you can see their status with STATUS and then go back to the previous mode with CONTINUE_CLIENT.
If expdp is running in the background, you can attach to the job and do the same.
Remember that DataPump runs through background jobs, which explains why the 'Processing' message may not be in sync with what is currently being processed.
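
If you need the job name to attach to (expdp ... attach=<job_name>), a quick lookup from SQL (a sketch) is:

select owner_name, job_name, operation, state from dba_datapump_jobs;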

So the LOGTIME option is a bit useless when it puts a timestamp in front of the 'Processing' messages. However, it's useful for the 'exported' message, as that one marks the end of the table export.

 


Execution Plan with ASH


Here is a query I use when I'm on a system that has the Diagnostic Pack (ASH) but no Tuning Pack (SQL Monitor).
It displays the execution plan with dbms_xplan.display_cursor and adds the % of ASH samples in front of each plan operation.
Here is a small output example: the usual dbms_xplan output, but highlighting the most active operations:
CaptureXplanASH

As you see, you can quickly focus on the important part of a 3-page execution plan: the part that is responsible for most of the response time.

The query is here. Customize the first line to filter the statements you want:

with
"sql" as (select SQL_ID,CHILD_NUMBER,PLAN_HASH_VALUE,'' FORMAT from v$sql where sql_id='&1'),
"ash" as (
select sql_id,sql_plan_line_id,child_number,sql_plan_hash_value
,round(count(*)/"samples",2) load
,nvl(round(sum(case when session_state='ON CPU' then 1 end)/"samples",2),0) load_cpu
,nvl(round(sum(case when session_state='WAITING' and wait_class='User I/O' then 1 end)/"samples",2),0) load_io
from "sql" join
(
select sql_id,sql_plan_line_id,sql_child_number child_number,sql_plan_hash_value,session_state,wait_class,count(*) over (partition by sql_id,sql_plan_hash_value) "samples"
FROM V$ACTIVE_SESSION_HISTORY
) using(sql_id,child_number) group by sql_id,sql_plan_line_id,child_number,sql_plan_hash_value,"samples"
),
"plan" as (
-- get dbms_xplan result
select
sql_id,child_number,n,plan_table_output
-- get plan line id from plan_table output
,case when regexp_like (plan_table_output,'^[|][*]? *([0-9]+) *[|].*[|]$') then
regexp_replace(plan_table_output,'^[|][*]? *([0-9]+) *[|].*[|]$','\1')
END SQL_PLAN_LINE_ID
from (select rownum n,plan_table_output,SQL_ID,CHILD_NUMBER from "sql", table(dbms_xplan.display_cursor("sql".SQL_ID,"sql".CHILD_NUMBER,"sql".FORMAT)))
)
select PLAN_TABLE_OUTPUT||CASE
-- ASH load to be displayed
WHEN LOAD >0 THEN TO_CHAR(100*LOAD,'999')||'% (' || TO_CHAR(100*LOAD_CPU,'999')||'% CPU'|| TO_CHAR(100*LOAD_IO,'999')||'% I/O)'
-- header
WHEN REGEXP_LIKE (PLAN_TABLE_OUTPUT,'^[|] *Id *[|]') THEN ' %ASH SAMPLES'
end plan_table_output
from "plan" left outer join "ash" using(sql_id,child_number,sql_plan_line_id) order by sql_id,child_number,n

The idea is simply to parse the PLAN_TABLE_OUTPUT to get the line id and match it with the ASH SQL_PLAN_LINE_ID, which by itself is worth the price of the Diagnostic Pack. Don't hesitate to comment with improvements.
I originally shared it on dba-village as a view to create, so it seems I have been using it for about 5 years.
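
If you don't know which SQL_ID to pass as &1, here is a quick way (a sketch) to pick the top consumers from ASH first:

select * from (
  select sql_id, count(*) ash_samples
  from v$active_session_history
  where sql_id is not null
  group by sql_id
  order by count(*) desc
) where rownum <= 10;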

 


OFE – Optimizer Features Enable


Do you know the optimizer_features_enable parameter? What do you think about it? Is it good or bad to use it?
If I tell you to set optimizer_features_enable=11.2.0.4 when you upgrade to 12c, do you think it's a very safe decision, or totally insane to upgrade and then set the behavior back to the previous version? It's not an underscore parameter; you are allowed to use it.

Optimizer Features Enable

If you're a developer, you know what versioning is. Every change (new feature or bug fix) you make to your software creates a new version of some part of the code. Those changes can be deployed individually as patches, grouped into patchsets, or combined to build a new release. Which means that you can compile an executable with exactly the set of features you want.
But you can do more. Instead of keeping different versions of the source code, you can put everything into the same code, add a parameter to enable the new code path or not, and have an 'if' that checks the parameter in order to run either the new code or the old one. You can do even more: when you deploy a new release, you set a runtime version parameter that enables all the features you want to ship in that release.
I know only one piece of software that does that: the Oracle optimizer. It is probably hard to maintain for the developers, but being able to choose at runtime the features you want to use is great and flexible.
Well, in Oracle this idea is not limited to the optimizer: you can also choose the compatible version for the database and keep it compatible with a previous version. But I'm talking about the optimizer here.

Instance, Session, Statement

Being able to run the optimizer as of a previous version can be great, but it goes beyond that. You can change the optimizer_features_enable parameter only for your session if you want, and even only for one statement. For example:
SELECT /*+ optimizer_features_enable('11.2.0.4') */ ...

will optimize the query as it was optimized in the latest 11g patchset even if you are in 12c. And if you can’t change your queries, you can use SQL Patch to do that. Available in every edition.
Want to know which values you can put there?
There's the quick way:
SQL> alter session set optimizer_features_enable=RBO;
ERROR:
ORA-00096: invalid value RBO for parameter optimizer_features_enable, must be from among 12.1.0.2, 12.1.0.1, 11.2.0.4, 11.2.0.3, 11.2.0.2, 11.2.0.1, 11.1.0.7, 11.1.0.6, 10.2.0.5, 10.2.0.4, 10.2.0.3, 10.2.0.2, 10.2.0.1, 10.1.0.5, 10.1.0.4, 10.1.0.3, 10.1.0, 9.2.0.8, 9.2.0, 9.0.1, 9.0.0, 8.1.7, 8.1.6, 8.1.5, 8.1.4, 8.1.3, 8.1.0, 8.0.7, 8.0.6, 8.0.5, 8.0.4, 8.0.3, 8.0.0

(Sorry for the joke about RBO – put whatever you want)
And the nice way:
SQL> select listagg(value,', ')within group(order by ordinal) from V$PARAMETER_VALID_VALUES where name='optimizer_features_enable';
 
LISTAGG(VALUE,',')WITHINGROUP(ORDERBYORDINAL)
----------------------------------------------------------------------------------------------------------------------------------------------
8.0.0, 8.0.3, 8.0.4, 8.0.5, 8.0.6, 8.0.7, 8.1.0, 8.1.3, 8.1.4, 8.1.5, 8.1.6, 8.1.7, 9.0.0, 9.0.1, 9.2.0, 9.2.0.8, 10.1.0, 10.1.0.3, 10.1.0.4, 10.1.0.5, 10.2.0.1, 10.2.0.2, 10.2.0.3, 10.2.0.4, 10.2.0.5, 11.1.0.6, 11.1.0.7, 11.2.0.1, 11.2.0.2, 11.2.0.3, 11.2.0.4, 12.1.0.1, 12.1.0.2

Is it bad or good?

Well, the truth is that I find it very good to be able to do that, but I'm always reluctant to recommend it, because I like to learn and use new features, and setting OFE feels negative and old-fashioned. Actually, I have that mixed feeling because there are always two different contexts, and this is the reason for this blog post: to clear out that 'negative' feeling about OFE.

New project

You are starting a new project, building a new application? Then install the latest database version and use the latest features. There are a lot of new features that will bring more performance, more flexibility and more stability to your application, so use them. No OFE setting here, except if you find a bug and need a workaround before getting the patch. But even there, you will not set OFE to a previous version: you will disable only the feature or the fix controls that cause the problem.

Migration

On the opposite side, you are only migrating an existing application that has been tuned on a previous version of the database, and you don't have the budget to involve development in new testing and tuning? Then take the safe way: set the Optimizer Features to the previous version and everything will be fine. You build new reports on that application? Then use the new features for them; you can set OFE at session level. Why not set it in a logon trigger that checks the service name, as sketched below?
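
A minimal sketch of such a trigger (the trigger name, service name and OFE value are assumptions to adapt to your environment):

create or replace trigger trg_set_ofe_legacy
after logon on database
begin
  if sys_context('userenv','service_name') = 'LEGACY_APP' then
    execute immediate q'[alter session set optimizer_features_enable='11.2.0.4']';
  end if;
end;
/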

Basically, OFE helps to:

  • Sort out any other migration issues before having to address the execution plan changes.
  • Test new plans in a managed way, when you are available to run non-regression tests. You can even use SQL Performance Analyzer if you have the Tuning Pack.
  • You can also capture SQL Plan Baselines while OFE is set to the old version, and then bring OFE back to the current version; plans will then evolve only when there is no regression in response time (see the sketch after this list). SPM is available in Enterprise Edition without any option.
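
A sketch of that SPM approach (versions and evolve options are assumptions):

alter session set optimizer_features_enable = '11.2.0.4';
alter session set optimizer_capture_sql_plan_baselines = true;
-- run the representative workload so its current plans become accepted baselines
alter session set optimizer_capture_sql_plan_baselines = false;
alter session set optimizer_features_enable = '12.1.0.2';
-- later, accept new 12c plans only when they are verified to perform better:
declare
  report clob;
begin
  report := dbms_spm.evolve_sql_plan_baseline(verify => 'YES', commit => 'YES');
  dbms_output.put_line(report);
end;
/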

How long?

Of course, just because you can set the optimizer to version 8.0 doesn't mean it's a good idea to do so in 12c. OFE helps to postpone the upgrade of the optimizer so that you don't have to test and resolve everything at the same time. But you should keep OFE at the previous version only for a few months or a year. Don't accumulate the gap over multiple releases.
Take the occasion of an application release, when a lot of non-regression testing will be done anyway, to bring OFE to the latest version. Then you will see a lot of statements improved (as that's the goal of most new features) and a few issues that will have to be addressed. You may have to gather statistics differently, write some statements differently, get rid of a lot of old profiles and maybe implement a few new ones, etc. And then you will have an optimal application, with its latest version running with the latest optimizer improvements.

Maybe you will choose to disable some features, but you probably don't need to disable all the new ones. Let's take the example of SQL Plan Directives, which brought a lot of instability in 12c migrations. You have several ways to disable some of their behaviour (see this blog post). Maybe you will do that until 12.2, which will fix a lot of issues for sure. But don't disable all 12c features, and don't disable all the adaptive features. Adaptive Plans are great and bring stability by preventing bad plans from running for hours.

Features per version

When I see that an issue is fixed by setting OFE to a previous version, I try to find which feature is responsible for the problem. I have a small script that parses a query with OFE set to previous versions and checks (from the event 10132 trace) the changes in documented and undocumented parameters. Here are those for the latest patchsets.


_bloom_serial_filter = on new in 11.2.0.4 was = off enable serial bloom filter on exadata (QKSFM_EXECUTION - SQL EXECUTION)
_fix_control_key = 1167487983 new in 11.2.0.4 was = -726982239
optimizer_features_enable = 11.2.0.4 new in 11.2.0.4 was = 11.2.0.3 optimizer plan compatibility parameter (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_undo_cost_change = 11.2.0.4 new in 11.2.0.4 was = 11.2.0.3 optimizer undo cost change (QKSFM_CBO - SQL Cost Based Optimization)
_px_scalable_invdist = false new in 12.1.0.1 was = true
_optimizer_adaptive_plans = true new in 12.1.0.1 was = false enable adaptive plans (QKSFM_ADAPTIVE_PLAN - Adaptive plans)
_optimizer_ansi_join_lateral_enhance = true new in 12.1.0.1 was = false optimization of left/full ansi-joins and lateral views (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_ansi_rearchitecture = true new in 12.1.0.1 was = false re-architecture of ANSI left, right, and full outer joins (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_batch_table_access_by_rowid = true new in 12.1.0.1 was = false enable table access by ROWID IO batching (QKSFM_ALL - A Universal Feature)
_optimizer_cluster_by_rowid = true new in 12.1.0.1 was = false enable/disable the cluster by rowid feature (QKSFM_CLUSTER_BY_ROWID - Cluster By Rowid Transformation)
_optimizer_cube_join_enabled = true new in 12.1.0.1 was = false enable cube join (QKSFM_JOIN_METHOD - Join methods)
_optimizer_dsdir_usage_control = 126 new in 12.1.0.1 was = 0 controls optimizer usage of dynamic sampling directives (QKSFM_CBO - SQL Cost Based Optimization)
optimizer_features_enable = 12.1.0.1 new in 12.1.0.1 was = 11.2.0.4 optimizer plan compatibility parameter (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_gather_stats_on_load = true new in 12.1.0.1 was = false enable/disable online statistics gathering (QKSFM_STATS - Optimizer statistics)
_optimizer_hybrid_fpwj_enabled = true new in 12.1.0.1 was = false enable hybrid full partition-wise join when TRUE (QKSFM_PQ - Parallel Query)
_optimizer_multi_table_outerjoin = true new in 12.1.0.1 was = false allows multiple tables on the left of outerjoin (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_nlj_hj_adaptive_join = true new in 12.1.0.1 was = false allow adaptive NL Hash joins (QKSFM_ADAPTIVE_PLAN - Adaptive plans)
_optimizer_null_accepting_semijoin = true new in 12.1.0.1 was = false enables null-accepting semijoin (QKSFM_TRANSFORMATION - Query Transformation)
_optimizer_partial_join_eval = true new in 12.1.0.1 was = false partial join evaluation parameter (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_proc_rate_level = basic new in 12.1.0.1 was = off control the level of processing rates (QKSFM_STATS - Optimizer statistics)
_optimizer_strans_adaptive_pruning = true new in 12.1.0.1 was = false allow adaptive pruning of star transformation bitmap trees (QKSFM_STAR_TRANS - Star Transformation)
_optimizer_undo_cost_change = 12.1.0.1 new in 12.1.0.1 was = 11.2.0.4 optimizer undo cost change (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_unnest_scalar_sq = true new in 12.1.0.1 was = false enables unnesting of of scalar subquery (QKSFM_TRANSFORMATION - Query Transformation)
_optimizer_use_gtt_session_stats = true new in 12.1.0.1 was = false use GTT session private statistics (QKSFM_STATS - Optimizer statistics)
_px_adaptive_dist_method = choose new in 12.1.0.1 was = off determines the behavior of adaptive distribution methods (QKSFM_PQ - Parallel Query)
_px_concurrent = true new in 12.1.0.1 was = false enables pq with concurrent execution of serial inputs (QKSFM_PQ - Parallel Query)
_px_cpu_autodop_enabled = true new in 12.1.0.1 was = false enables or disables auto dop cpu computation (QKSFM_PQ - Parallel Query)
_px_filter_parallelized = true new in 12.1.0.1 was = false enables or disables correlated filter parallelization (QKSFM_PQ - Parallel Query)
_px_filter_skew_handling = true new in 12.1.0.1 was = false enable correlated filter parallelization to handle skew (QKSFM_PQ - Parallel Query)
_px_groupby_pushdown = force new in 12.1.0.1 was = choose perform group-by pushdown for parallel query (QKSFM_PQ - Parallel Query)
_px_join_skew_handling = true new in 12.1.0.1 was = false enables skew handling for parallel joins (QKSFM_PQ - Parallel Query)
_px_object_sampling = 1 new in 12.1.0.1 was = 0 parallel query sampling for base objects (100000 = 100%) (QKSFM_PQ - Parallel Query)
_px_object_sampling_enabled = true new in 12.1.0.1 was = false use base object sampling when possible for range distribution (QKSFM_PQ - Parallel Query)
_px_parallelize_expression = true new in 12.1.0.1 was = false enables or disables expression evaluation parallelization (QKSFM_PQ - Parallel Query)
_px_partial_rollup_pushdown = adaptive new in 12.1.0.1 was = off perform partial rollup pushdown for parallel execution (QKSFM_PQ - Parallel Query)
_px_replication_enabled = true new in 12.1.0.1 was = false enables or disables replication of small table scans (QKSFM_PQ - Parallel Query)
_px_single_server_enabled = true new in 12.1.0.1 was = false allow single-slave dfo in parallel query (QKSFM_PQ - Parallel Query)
_px_wif_dfo_declumping = choose new in 12.1.0.1 was = off NDV-aware DFO clumping of multiple window sorts (QKSFM_PQ - Parallel Query)
_px_wif_extend_distribution_keys = true new in 12.1.0.1 was = false extend TQ data redistribution keys for window functions (QKSFM_PQ - Parallel Query)
_distinct_agg_optimization_gsets = choose new in 12.1.0.2 was = off Use Distinct Aggregate Optimization for Grouping Sets (QKSFM_ALL - A Universal Feature)
_fix_control_key = -1261475868 new in 12.1.0.2 was = 890546215
_gby_vector_aggregation_enabled = true new in 12.1.0.2 was = false enable group-by and aggregation using vector scheme (QKSFM_TRANSFORMATION - Query Transformation)
_optimizer_aggr_groupby_elim = true new in 12.1.0.2 was = false group-by and aggregation elimination (QKSFM_TRANSFORMATION - Query Transformation)
_optimizer_cluster_by_rowid_batched = true new in 12.1.0.2 was = false enable/disable the cluster by rowid batching feature (QKSFM_CLUSTER_BY_ROWID - Cluster By Rowid Transformation)
_optimizer_cluster_by_rowid_control = 129 new in 12.1.0.2 was = 3 internal control for cluster by rowid feature mode (QKSFM_CLUSTER_BY_ROWID - Cluster By Rowid Transformation)
optimizer_features_enable = 12.1.0.2 new in 12.1.0.2 was = 12.1.0.1 optimizer plan compatibility parameter (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_inmemory_access_path = true new in 12.1.0.2 was = false optimizer access path costing for in-memory (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_inmemory_autodop = true new in 12.1.0.2 was = false optimizer autoDOP costing for in-memory (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_inmemory_bloom_filter = true new in 12.1.0.2 was = false controls serial bloom filter for in-memory tables (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_inmemory_cluster_aware_dop = true new in 12.1.0.2 was = false Affinitize DOP for inmemory objects (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_inmemory_gen_pushable_preds = true new in 12.1.0.2 was = false optimizer generate pushable predicates for in-memory (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_inmemory_minmax_pruning = true new in 12.1.0.2 was = false controls use of min/max pruning for costing in-memory tables (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_inmemory_table_expansion = true new in 12.1.0.2 was = false optimizer in-memory awareness for table expansion (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_reduce_groupby_key = true new in 12.1.0.2 was = false group-by key reduction (QKSFM_TRANSFORMATION - Query Transformation)
_optimizer_undo_cost_change = 12.1.0.2 new in 12.1.0.2 was = 12.1.0.1 optimizer undo cost change (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_vector_transformation = true new in 12.1.0.2 was = false perform vector transform (QKSFM_VECTOR_AGG - Vector Transformation)
_px_external_table_default_stats = true new in 12.1.0.2 was = false the external table default stats collection enable/disable (QKSFM_PQ - Parallel Query)
_ds_enable_view_sampling = true new in 12.1.0.2.1 was = false Use sampling for views in Dynamic Sampling (QKSFM_DYNAMIC_SAMPLING - Dynamic sampling)
_ds_sampling_method = PROGRESSIVE new in 12.1.0.2.1 was = NO_QUALITY_METRIC Dynamic sampling method used (QKSFM_DYNAMIC_SAMPLING - Dynamic sampling)
_ds_xt_split_count = 1 new in 12.1.0.2.1 was = 0 Dynamic Sampling Service: split count for external tables (QKSFM_DYNAMIC_SAMPLING - Dynamic sampling)
_fix_control_key = 0 new in 12.1.0.2.1 was = -1261475868
_key_vector_create_pushdown_threshold = 20000 new in 12.1.0.2.1 was = 0 minimum grouping keys for key vector create pushdown (QKSFM_VECTOR_AGG - Vector Transformation)
_optimizer_ads_use_partial_results = true new in 12.1.0.2.1 was = false Use partial results of ADS queries (QKSFM_DYNAMIC_SAMPLING - Dynamic sampling)
_optimizer_ads_use_spd_cache = true new in 12.1.0.2.1 was = false use Sql Plan Directives for caching ADS queries (QKSFM_DYNAMIC_SAMPLING - Dynamic sampling)
_optimizer_band_join_aware = true new in 12.1.0.2.1 was = false enable the detection of band join by the optimizer (QKSFM_ALL - A Universal Feature)
_optimizer_bushy_join = on new in 12.1.0.2.1 was = off enables bushy join (QKSFM_BUSHY_JOIN - bushy join)
_optimizer_cbqt_or_expansion = on new in 12.1.0.2.1 was = off enables cost based OR expansion (QKSFM_CBQT_OR_EXPANSION - Cost Based OR Expansion)
_optimizer_eliminate_subquery = true new in 12.1.0.2.1 was = false consider elimination of subquery optimization (QKSFM_ELIMINATE_SQ - eliminate subqueries)
_optimizer_enable_plsql_stats = true new in 12.1.0.2.1 was = false Use statistics of plsql functions (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_enhanced_join_elimination = true new in 12.1.0.2.1 was = false Enhanced(12.2) join elimination (QKSFM_TABLE_ELIM - Table Elimination)
optimizer_features_enable = 12.2.0.1 new in 12.1.0.2.1 was = 12.1.0.2 optimizer plan compatibility parameter (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_inmemory_use_stored_stats = AUTO new in 12.1.0.2.1 was = NEVER optimizer use stored statistics for in-memory tables (QKSFM_ALL - A Universal Feature)
_optimizer_key_vector_pruning_enabled = true new in 12.1.0.2.1 was = false enables or disables key vector partition pruning (QKSFM_VECTOR_AGG - Vector Transformation)
_optimizer_multicol_join_elimination = true new in 12.1.0.2.1 was = false eliminate multi-column key based joins (QKSFM_TABLE_ELIM - Table Elimination)
_optimizer_undo_cost_change = 12.2.0.1 new in 12.1.0.2.1 was = 12.1.0.2 optimizer undo cost change (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_union_all_gsets = true new in 12.1.0.2.1 was = false Use Union All Optimization for Grouping Sets (QKSFM_GROUPING_SET_XFORM - Grouping Set Transformation)
_optimizer_use_table_scanrate = HADOOP_ONLY new in 12.1.0.2.1 was = OFF Use Table Specific Scan Rate (QKSFM_CBO - SQL Cost Based Optimization)
_optimizer_use_xt_rowid = true new in 12.1.0.2.1 was = false Use external table rowid (QKSFM_TRANSFORMATION - Query Transformation)
_optimizer_vector_base_dim_fact_factor = 200 new in 12.1.0.2.1 was = 0 cost based vector transform base dimension to base fact ratio (QKSFM_VECTOR_AGG - Vector Transformation)
_pwise_distinct_enabled = true new in 12.1.0.2.1 was = false enable partition wise distinct (QKSFM_PARTITION - Partition)
_px_dist_agg_partial_rollup_pushdown = adaptive new in 12.1.0.2.1 was = off perform distinct agg partial rollup pushdown for px execution (QKSFM_PQ - Parallel Query)
_px_scalable_invdist_mcol = true new in 12.1.0.2.1 was = false enable/disable px plan for percentile functions on multiple columns (QKSFM_PQ - Parallel Query)
_query_rewrite_use_on_query_computation = true new in 12.1.0.2.1 was = false query rewrite use on query computation (QKSFM_TRANSFORMATION - Query Transformation)
_recursive_with_branch_iterations = 7 new in 12.1.0.2.1 was = 1 Expected number of iterations of the recurive branch of RW/CBY (QKSFM_EXECUTION - SQL EXECUTION)
_recursive_with_parallel = true new in 12.1.0.2.1 was = false Enable/disable parallelization of Recursive With (QKSFM_EXECUTION - SQL EXECUTION)
_sqlexec_hash_based_distagg_ssf_enabled = true new in 12.1.0.2.1 was = false enable hash based distinct aggregation for single set gby queries (QKSFM_EXECUTION - SQL EXECUTION)
_vector_encoding_mode = manual new in 12.1.0.2.1 was = off enable vector encoding(OFF/MANUAL/AUTO) (QKSFM_EXECUTION - SQL EXECUTION)
_xt_sampling_scan_granules = on new in 12.1.0.2.1 was = off Granule Sampling for Block Sampling of External Tables (QKSFM_EXECUTION - SQL EXECUTION)

Conclusion

  • It’s not foolish to use the new optimizer version for new projects
  • It’s not bad to keep the previous optimizer version if your application has been tuned for it
  • It’s good to try the new optimizer version when testing a new application release (see the sketch below)
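
As an example of the last point, the comparison can be done per session before changing anything at the instance level. Here is a minimal sketch (the table and predicate are just placeholders, not from the original test):

-- compare the plan of a query under the previous and the current optimizer version
alter session set optimizer_features_enable='12.1.0.1';
explain plan for select * from scott.emp where deptno=10;
select * from table(dbms_xplan.display);
alter session set optimizer_features_enable='12.1.0.2';
explain plan for select * from scott.emp where deptno=10;
select * from table(dbms_xplan.display);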
 

Cet article OFE – Optimizer Features Enable est apparu en premier sur Blog dbi services.

Enable 10046 Tracing for a specific SQL


Available methods to enable 10046 trace are described in My Oracle Support Note 376442.1. You can enable 10046-tracing

– on session level (alter session)
– for other sessions (e.g. with oradebug, the package DBMS_MONITOR or DBMS_SYSTEM)

What is not covered by the methods above is the possibility to trace a specific SQL statement that runs at some point in the future on the database, e.g. a SQL that runs during a next batch job. With the introduction of UTS (Unified Tracing Service) in 11.2, you can actually do exactly that:

I.e. suppose I need a 10046 trace, level 12, of the SQL with SQL_ID cjrha4bzuupzf, which runs at some point in the next 24 hours. All I have to do is set the event “sql_trace” for that SQL_ID:


SQL> alter system set events 'sql_trace[sql: cjrha4bzuupzf] level=12';

REMARK: With the introduction of the parameter “_evt_system_event_propagation” in 11g (default is TRUE), the event settings of “alter system set events” commands are also propagated to existing sessions.

Let’s see if only the statement in question is being traced. From another session I’m doing the following:
REMARK: I actually want to trace the statement with the GATHER_PLAN_STATISTICS-hint.


SQL> select /* BEFORE TRACE */ count(*) from t1 where object_type='INDEX';
 
COUNT(*)
----------
1432
 
SQL> select /*+ GATHER_PLAN_STATISTICS */ count(*) from t1 where object_type='INDEX';
 
COUNT(*)
----------
1432
 
SQL> select /*+ AFTER TRACE */ count(*) from t1 where object_type='INDEX';
 
COUNT(*)
----------
1432
 
SQL> select value from v$diag_info where name = 'Default Trace File';
 
VALUE
--------------------------------------------------------------------
D:\APP\CBL\diag\rdbms\gen11204\gen11204\trace\gen11204_ora_11552.trc

Below is the content of the produced trace file:


=====================
PARSING IN CURSOR #305977200 len=79 dep=0 uid=42 oct=3 lid=42 tim=628480230944 hv=4289550318 ad='7ff95ed9ed80' sqlid='cjrha4bzuupzf'
select /*+ GATHER_PLAN_STATISTICS */ count(*) from t1 where object_type='INDEX'
END OF STMT
EXEC #305977200:c=0,e=22,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3724264953,tim=628480230943
WAIT #305977200: nam='SQL*Net message to client' ela= 1 driver id=1111838976 #bytes=1 p3=0 obj#=15671 tim=628480231942
FETCH #305977200:c=0,e=652,p=0,cr=189,cu=0,mis=0,r=1,dep=0,og=1,plh=3724264953,tim=628480232614
STAT #305977200 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=189 pr=0 pw=0 time=652 us)'
STAT #305977200 id=2 cnt=1432 pid=1 pos=1 obj=15671 op='TABLE ACCESS FULL T1 (cr=189 pr=0 pw=0 time=622 us cost=56 size=10024 card=1432)'
FETCH #305977200:c=0,e=1,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=3724264953,tim=628480232838
WAIT #305977200: nam='SQL*Net message to client' ela= 1 driver id=1111838976 #bytes=1 p3=0 obj#=15671 tim=628480232858
*** 2016-02-05 15:11:21.578
WAIT #305977200: nam='SQL*Net message from client' ela= 1336677 driver id=1111838976 #bytes=1 p3=0 obj#=15671 tim=628481569546
CLOSE #305977200:c=0,e=8,dep=0,type=0,tim=628481569645

So, as expected, only the SQL with SQL_ID cjrha4bzuupzf has been traced.

The event settings done with “alter system set events …” are not persistent. I.e. after the next restart of the instance the event is no longer active. To set the event persistently you would have to set it in the spfile as well:


SQL> alter system set event='sql_trace[sql: cjrha4bzuupzf] level=12' scope=spfile;

If you want to see if an event is currently active on the running instance do the following:


SQL> oradebug setmypid
Statement processed.
SQL> oradebug eventdump session
sql_trace[sql: cjrha4bzuupzf] level=12
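
Once the trace has been collected, the same event syntax can be used to switch it off again. A minimal sketch, assuming the same SQL_ID as above:

SQL> alter system set events 'sql_trace[sql: cjrha4bzuupzf] off';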

If you have several events set in your spfile then it’s difficult to remove a single one of them, because the events are stored concatenated with a “:” as a single event. You can just overwrite the current setting or remove them all:


SQL> alter system reset event scope=spfile;

REMARK: Do not set events (except event 10046) without the instruction from Oracle Support to do so. I also recommend NOT setting event 10046 in the spfile.

 

Cet article Enable 10046 Tracing for a specific SQL est apparu en premier sur Blog dbi services.

The taboo of ‘underscore parameters’


Oracle provides lots of parameters that can control the behavior of the software. The default values are probably the best ones most of the time. Hundreds of parameters are documented and we can set them to customize the Oracle software for our context, because default values can’t fit all the different database sizes, usages, workloads, infrastructures, etc. And in addition to them there are those ‘underscore parameters’ or ‘hidden parameters’ or ‘undocumented parameters’. You should not set them without validation from Oracle Support.
However, several software vendors recommend some underscore parameter settings. Not only ISVs, but also software provided by Oracle does the same. And Oracle appliances (ODA, Exadata) also set a bunch of underscore parameters. Some people think that it’s bad, and I’ll explain here why I think it is not.

underscore parameters

How many parameters can I set in 12.1.0.2 ?

SQL> select count(*) from v$parameter;
COUNT(*)
----------
381

381 of them. Let’s look at the query that is behind the V$PARAMETER view:

SQL> variable c clob
SQL> exec dbms_utility.expand_sql_text(input_sql_text=>'select count(*) from v$parameter',output_sql_text=>:c);
PL/SQL procedure successfully completed.
SQL> print
C
--------------------------------------------------------------------------------
SELECT COUNT(*) "COUNT(*)" FROM (SELECT "A2"."CON_ID" "CON_ID" FROM (SELECT "A
4"."INST_ID" "INST_ID","A4"."CON_ID" "CON_ID" FROM SYS."X$KSPPI" "A4",SYS."X$KSP
PCV" "A3" WHERE "A4"."INDX"="A3"."INDX" AND BITAND("A4"."KSPPIFLG",268435456)=0
AND TRANSLATE("A4"."KSPPINM",'_','#') NOT LIKE '##%' AND (TRANSLATE("A4"."KSPPIN
M",'_','#') NOT LIKE '#%' OR "A3"."KSPPSTDF"='FALSE' OR BITAND("A3"."KSPPSTVF",5
)>0)) "A2" WHERE "A2"."INST_ID"=USERENV('INSTANCE')) "A1"

There is a where clause here about the name of the parameter (KSPPINM) which is:

TRANSLATE("A4"."KSPPINM",'_','#') NOT LIKE '#%'

It means that the name does not start with an underscore. The underscore is replaced by ‘#’ for the LIKE predicate because ‘_’ is a wildcard, and probably the ESCAPE option of the LIKE clause was not available when the view was defined.
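
As a side note, the same filter could be written today with the ESCAPE clause. A minimal sketch (querying X$KSPPI directly requires a SYS connection):

SQL> select count(*) from x$ksppi where ksppinm not like '\_%' escape '\';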
So this is what returns 381 parameters:

SQL> SELECT COUNT(*) "COUNT(*)" FROM (SELECT "A2"."CON_ID" "CON_ID" FROM (SELECT "A4"."INST_ID" "INST_ID","A4"."CON_ID" "CON_ID" FROM SYS."X$KSPPI" "A4",SYS."X$KSPPCV" "A3" WHERE "A4"."INDX"="A3"."INDX" AND BITAND("A4"."KSPPIFLG",268435456)=0 AND TRANSLATE("A4"."KSPPINM",'_','#') NOT LIKE '##%' AND (TRANSLATE("A4"."KSPPINM",'_','#') NOT LIKE '#%' OR "A3"."KSPPSTDF"='FALSE' OR BITAND("A3"."KSPPSTVF",5)>0)) "A2" WHERE "A2"."INST_ID"=USERENV('INSTANCE')) "A1"
2 /
COUNT(*)
----------
381

What if I allow the ones starting with an underscore?

SQL> c/NOT LIKE '#%'/LIKE '#%'
1* SELECT COUNT(*) "COUNT(*)" FROM (SELECT "A2"."CON_ID" "CON_ID" FROM (SELECT "A4"."INST_ID" "INST_ID","A4"."CON_ID" "CON_ID" FROM SYS."X$KSPPI" "A4",SYS."X$KSPPCV" "A3" WHERE "A4"."INDX"="A3"."INDX" AND BITAND("A4"."KSPPIFLG",268435456)=0 AND TRANSLATE("A4"."KSPPINM",'_','#') NOT LIKE '##%' AND (TRANSLATE("A4"."KSPPINM",'_','#') LIKE '#%' OR "A3"."KSPPSTDF"='FALSE' OR BITAND("A3"."KSPPSTVF",5)>0)) "A2" WHERE "A2"."INST_ID"=USERENV('INSTANCE')) "A1"
SQL> /
COUNT(*)
----------
3604

… a lot more.

They are called ‘underscore parameters’ because they start with an underscore.
From there, there is nothing bad with them. It is just a naming convention defined by Oracle. And because of that, when you set one you have to enclose it in double quotes, as with any identifier that does not start with an alphabetic character. No taboo there.

hidden parameters

The name ‘hidden parameter’ comes from the fact that those underscore parameters are not displayed by V$PARAMETER. But that’s not totally true:

SQL> show parameter histograms

I have no parameter in V$PARAMETER with ‘histograms’ in its name.
But I can set it:

SQL> alter session set "_optimizer_use_histograms"=false;
Session altered.

and then it is displayed:

SQL> show parameter histograms
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
_optimizer_use_histograms boolean FALSE

The V$PARAMETER view is defined to hide them only when they are not set. If that’s not the case, it’s a bug (3327961).

So if you think that those parameters are bad and forbidden because they are not displayed, then just set them and then they are allowed ;)
I’m just joking here. My goal is to show that the choice to display them or not is not a reason to make them taboo.

undocumented

Here is the point. They are not found in the documentation. The Oracle® Database Reference book describes 291 parameters in the ‘Initialization Parameter Descriptions’ and none of them start with an underscore. Note that 291 is far from 381, which means that some parameters that do not start with an underscore and are not hidden from V$PARAMETER are still undocumented parameters.

But look at My Oracle Support notes. They are official documentation, aren’t they? And a lot of those underscore parameters are documented there: what they do, in which context they can be set, etc.

What I want to say here is that ‘undocumented’ means that, at release time, Oracle decided not to put them into the documentation because they thought we should not need to set them. But then real life starts. We upgrade databases that come from very different environments. We encounter issues that nobody thought about. We encounter bugs. We upgrade applications that are badly designed from the get-go (not using bind variables, parsing as much as executing, defining tables with thousands of columns, etc.) and new features may not be suited for those bad applications. We apply patches, PSUs… Things change, and what was decided at release time about documentation may be different one year later.

This is where undocumented parameters become documented. They keep their name (starting with underscore) but are now totally legal for some specific situation. There is no taboo with that. One way to stabilize a new release is to apply the latest PSUs. Another way is to disable the few features that happen to cause an issue in your environment. And when you upgrade to a new release or new patchset, then check them as you probably don’t need them anymore.

documented parameters that set undocumented ones

12c came with a lot of new adaptive features in the CBO, and some of them have brought parsing issues (SPD and ADS to name them by their acronyms). A good OLTP application should not spend its time parsing statements, so it is not impacted. But if you have a bad application that is already parsing a lot, you may encounter issues. Then, what do you prefer?
One possibility is to set optimizer_features_enable=11.2.0.4 so that you disable most of the 12c CBO new features. And you are happy with it because it’s not hidden, not underscore and not undocumented. However, if you look at what it does behind the scenes, you will see that it sets nearly 30 underscore parameters. One of them is setting “_optimizer_dsdir_usage_control”=0, and maybe this is the only one that you need.
So in that case, do you prefer to look good and disable all new features? That means you also disable adaptive plans, for example, which is a very nice feature that stabilizes your response time.
Or do you accept setting an underscore parameter and then address exactly the problem, and only the problem?

I choose the second one. There is no problem with setting a few parameters, whether they start with an underscore or not, as long as:

  • They address an issue you encountered. Check with support about that.
  • You document them. ALTER SYSTEM SET has a COMMENT clause that you can use to record the reason, the SR, etc. (see the sketch after this list).
  • You review them before each upgrade. Do you still need them?
  • You plan a long-term solution if the setting is just a workaround for bad application design.
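
Here is a minimal sketch of the second point. The parameter value and the comment text are examples only; set such a parameter only for an issue validated by Oracle Support:

alter system set "_optimizer_dsdir_usage_control"=0
  comment='workaround for SPD parsing issue - SR number and planned removal date here'
  scope=both;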

Disclaimer: This is not an encouragement to set a lot of parameters! Default values are good for most of them. But this advice, in my opinion, is totally independent of whether they are underscore parameters or not. Hidden or not, you probably need only a few of them.

conclusion

Staying on old versions is not a way to achieve stability.
If you want a stable database, you should:

  1. Apply PSUs, patch bundles, and upgrade to the latest patchset. Because, believe it or not, new releases tend to fix more bugs than they bring new ones.
  2. Test the upgraded database, and fix the few issues that you may encounter: parameters (hidden or not), fix_control, patches,…

One additional note

I said that ‘documentation’ should not be only the Oracle books but also MOS notes, because problems and solutions evolve with time. There is also an excellent source of information about bugs encountered at Oracle customers, with reasons, workarounds and fixes. And it’s free: Mike Dietrich’s blog: https://blogs.oracle.com/UPGRADE

 

Cet article The taboo of ‘underscore parameters’ est apparu en premier sur Blog dbi services.

EM 13c target is fully broken !!


At a client’s site I decided to install Enterprise Manager Cloud Control 13c. I did not encounter any special problem :=) Then I installed a 13c agent on the development platform and added targets as usual, but at this point I discovered that half of my 11.2.0.3 database targets were in the following state:

2016-02-05 15:31:46,863 [2527:A154C6AD] INFO - Target: [oracle_database.TESTDBA] is fully broken: 
Dynamic Category property error (code=0x400)

The gcagent.log showed me the errors:

2016-01-28 13:41:11,423 [261843:CE28B2D8] INFO - >>> Reporting exception: 
oracle.sysman.emSDK.agent.client.exception.NoSuchTargetException: 
the oracle_database target "TESTDBA" does not exist (request id 1) <<< 
oracle.sysman.emSDK.agent.client.exception.NoSuchTargetException: 
the oracle_database target "TESTDBA" does not exist 
at oracle.sysman.gcagent.dispatch.cxl.GetMetricDataAction.satisfyRequest(GetMetricDataAction.java:175) 
at oracle.sysman.gcagent.dispatch.ProcessRequestAction._call(ProcessRequestAction.java:135) 
at oracle.sysman.gcagent.dispatch.ProcessRequestAction.call(ProcessRequestAction.java:96) 
at oracle.sysman.gcagent.dispatch.InlineDispatchCoordinator.dispatchRequest(InlineDispatchCoordinator.java:235) 
at oracle.sysman.gcagent.dispatch.DispatchRequestsAction.call(DispatchRequestsAction.java:111) 
at oracle.sysman.gcagent.dispatch.DispatchRequestsAction.call(DispatchRequestsAction.java:51) 
at oracle.sysman.gcagent.task.DiagWrappedAction.call(DiagWrappedAction.java:52) 
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
at oracle.sysman.gcagent.task.SingleActionTask.run(SingleActionTask.java:76)

Looking at https://agent:3872/emd/browser/main we can display the following error:

2016-01-22 11:49:34,661 ERROR - 
The target is fully broken (code=0x404) for the following reason: Invalid Input 
 2016-01-22 11:49:35,184 ERROR - The target is fully broken (code=0x406) for the following reason: 
No valid queryDescriptor or executionDescriptor found for target [oracle_database.TESTDBA$11] 
 2016-01-22 11:49:35,193 INFO - Dynamic property execution error for [DeduceAlertLogFile], 
error: Can't resolve a non-optional query descriptor property [background_dump_dest

I opened an SR and Oracle gave me an important hint by asking me to run the following query:

SQL> select v.version "DBVersion", p.value "DBDomain" from 
(select nvl((select version from (select version || '.'|| id version
from dba_registry_history where NAMESPACE = 'SERVER'
and BUNDLE_SERIES='PSU'
order by ACTION_TIME desc) where rownum = 1),
(select version from v$instance)) version from dual) v, v$parameter p
where p.name='db_domain';
and BUNDLE_SERIES='PSU'
     *
ERROR at line 3:
ORA-00904: "BUNDLE_SERIES": invalid identifier

We can notice that the BUNDLE_SERIES column does not exist in the dba_registry_history view of my database targets that are in a fully broken state.

By the way, there is a bug, number 9656976:

Execution of catbundle.sql is not always required for new and upgraded databases. 
 The Readme documentation indicates when it is required. 
 However, you can execute catbundle.sql when it is not required so that the new or 
upgraded database has an updated dba_registry_history table

I tried a first solution by creating a table named dba_registry_history under the dbsnmp monitoring user in my broken database target, and the target was discovered successfully and was no longer in a fully broken state.
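
For reference, here is a hedged sketch of that first workaround. The column list below is derived from the query above and from the usual DBA_REGISTRY_HISTORY definition; check your own data dictionary before creating it:

-- shadow table in the monitoring schema so that the agent query no longer fails
create table dbsnmp.dba_registry_history (
  action_time   timestamp(6),
  action        varchar2(30),
  namespace     varchar2(30),
  version       varchar2(30),
  id            number,
  bundle_series varchar2(30),
  comments      varchar2(255)
);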

Finally I chose to run the catbundle.sql script on every database that was in a broken state, and the database target discovery was successful for every database.

 

 

 

 

Cet article EM 13c target is fully broken !! est apparu en premier sur Blog dbi services.

Standard Edition 2 testing the 16 thread limitation


From 12.1.0.2 the Standard Edition – now called Standard Edition 2 – has a few limitations that were not there in SE and SE1. One of them is the limitation to 16 threads. Let’s see how it behaves when running 32 sessions working in CPU.

Installing 12.1 SE2 on a 32 CPU host.

How to quickly provision a host with more than 16 CPUs? Easy with DBaaS. Here is a database on the Oracle Cloud Services, with 16 OCPUs, which means 32 threads:
CaptureSE2-CS
Here is the definition from the OS seeing 32 cores (which are actually virtual, equivalent to 16 hyper-threaded cores)

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Stepping: 4
CPU MHz: 2992.876
BogoMIPS: 5985.75
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 25600K
NUMA node0 CPU(s): 0-31

dedicated, shared, pooled, jobs

I’m running something like that:

set echo on timing on
connect scott/tiger @ //140.86.5.120/pdb.trial.oraclecloud.internal:dedicated
exec declare t date:=sysdate; begin loop exit when sysdate>t+&1./24/60/60; end loop; end
exit

and the same with shared and pooled connections.
Finally, I run the same from a job:

variable j number
exec dbms_job.submit(:j,'declare t date:=sysdate; begin loop exit when sysdate>t+&1./24/60/60; end loop; end;');

I’ve run those from 32 parallel sessions and got the following:
CaptureSE2-4

You can see my 32 sessions active, but only 16 at a time being in CPU. The others are waiting on the light green ‘resmgr: cpu quantum’, which is the Resource Manager event used to enforce the 16-thread limit. There is no way to bypass it: whatever the connection type, we are limited to 16 sessions active on CPU.

From ‘top’ we can check that each session is allowed the same amount of CPU time:

SQL> Disconnected from Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production^M
top - 20:59:09 up 2 days, 9:34, 1 user, load average: 9.88, 4.23, 1.68
Tasks: 644 total, 13 running, 631 sleeping, 0 stopped, 0 zombie
Cpu(s): 1.1%us, 0.4%sy, 0.0%ni, 98.4%id, 0.1%wa, 0.0%hi, 0.0%si, 0.1%st
Mem: 247354096k total, 135891524k used, 111462572k free, 785304k buffers
Swap: 4194300k total, 0k used, 4194300k free, 131373540k cached
 
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5586 root 20 0 2785m 41m 9212 S 72.3 0.0 0:00.37 /u01/app/oracle/pr
3796 oracle 20 0 56.6g 28m 26m S 43.0 0.0 1:11.87 oracleSE2 (LOCAL=N
3768 oracle 20 0 56.6g 44m 40m S 41.0 0.0 1:13.15 oracleSE2 (LOCAL=N
3774 oracle 20 0 56.6g 28m 25m S 41.0 0.0 1:12.64 oracleSE2 (LOCAL=N
3792 oracle 20 0 56.6g 28m 25m S 41.0 0.0 1:12.08 oracleSE2 (LOCAL=N
3800 oracle 20 0 56.6g 28m 26m S 41.0 0.0 1:11.67 oracleSE2 (LOCAL=N
3802 oracle 20 0 56.6g 28m 26m S 41.0 0.0 1:11.78 oracleSE2 (LOCAL=N
3804 oracle 20 0 56.6g 28m 26m S 41.0 0.0 1:11.55 oracleSE2 (LOCAL=N
3824 oracle 20 0 56.6g 28m 26m S 41.0 0.0 1:11.21 oracleSE2 (LOCAL=N
3826 oracle 20 0 56.6g 28m 26m S 41.0 0.0 1:11.15 oracleSE2 (LOCAL=N
3832 oracle 20 0 56.6g 28m 26m S 41.0 0.0 1:11.10 oracleSE2 (LOCAL=N
3776 oracle 20 0 56.6g 28m 25m S 39.1 0.0 1:12.55 oracleSE2 (LOCAL=N
...

That was with dedicated sessions (SERVER=dedicated)

Here are the processes with shared servers (SERVER=shared):


PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
35128 oracle 20 0 56.6g 24m 22m R 58.1 0.0 1:01.94 ora_j011_SE2
35140 oracle 20 0 56.6g 24m 21m R 58.1 0.0 1:01.18 ora_j017_SE2
35154 oracle 20 0 56.6g 24m 21m R 58.1 0.0 1:00.29 ora_j024_SE2
2849 oracle 20 0 56.6g 283m 279m R 56.2 0.1 1:08.65 ora_j000_SE2
35116 oracle 20 0 56.6g 24m 21m S 56.2 0.0 1:02.26 ora_j005_SE2
35124 oracle 20 0 56.6g 24m 21m S 56.2 0.0 1:02.06 ora_j009_SE2
35130 oracle 20 0 56.6g 24m 21m S 56.2 0.0 1:01.87 ora_j012_SE2
34367 oracle 20 0 56.6g 35m 32m S 54.3 0.0 1:06.10 ora_j002_SE2
...

or with resident connection pooling (SERVER=pooled):

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
17227 oracle 20 0 56.6g 26m 23m R 59.7 0.0 1:08.01 ora_l035_SE2
17170 oracle 20 0 56.6g 26m 23m R 57.8 0.0 1:15.21 ora_l013_SE2
17176 oracle 20 0 56.6g 26m 23m R 57.8 0.0 1:14.87 ora_l016_SE2
17205 oracle 20 0 56.6g 26m 23m S 57.8 0.0 1:08.02 ora_l024_SE2
17207 oracle 20 0 56.6g 26m 23m R 57.8 0.0 1:07.90 ora_l025_SE2
17162 oracle 20 0 56.6g 26m 23m R 55.9 0.0 1:15.47 ora_l009_SE2
17174 oracle 20 0 56.6g 26m 23m S 55.9 0.0 1:14.80 ora_l015_SE2
17225 oracle 20 0 56.6g 26m 23m S 55.9 0.0 1:08.30 ora_l034_SE2
17201 oracle 20 0 56.6g 26m 23m R 54.0 0.0 1:08.11 ora_l022_SE2
17203 oracle 20 0 56.6g 26m 23m S 54.0 0.0 1:08.15 ora_l023_SE2
17166 oracle 20 0 56.6g 26m 23m R 52.0 0.0 1:15.33 ora_l011_SE2
17180 oracle 20 0 56.6g 26m 23m R 52.0 0.0 1:14.60 ora_l018_SE2
17209 oracle 20 0 56.6g 26m 23m R 52.0 0.0 1:08.08 ora_l026_SE2
17223 oracle 20 0 56.6g 26m 23m S 52.0 0.0 1:08.18 ora_l033_SE2
17182 oracle 20 0 56.6g 26m 23m R 50.1 0.0 1:14.48 ora_l019_SE2
...

Same with jobs:


PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
35128 oracle 20 0 56.6g 24m 22m R 58.1 0.0 1:01.94 ora_j011_SE2
35140 oracle 20 0 56.6g 24m 21m R 58.1 0.0 1:01.18 ora_j017_SE2
35154 oracle 20 0 56.6g 24m 21m R 58.1 0.0 1:00.29 ora_j024_SE2
2849 oracle 20 0 56.6g 283m 279m R 56.2 0.1 1:08.65 ora_j000_SE2
35116 oracle 20 0 56.6g 24m 21m S 56.2 0.0 1:02.26 ora_j005_SE2
35124 oracle 20 0 56.6g 24m 21m S 56.2 0.0 1:02.06 ora_j009_SE2
35130 oracle 20 0 56.6g 24m 21m S 56.2 0.0 1:01.87 ora_j012_SE2
34367 oracle 20 0 56.6g 35m 32m S 54.3 0.0 1:06.10 ora_j002_SE2
...

and I also tried with the new 12c threaded processes (DEDICATED_THROUGH_BROKER_listener=true):


top - 21:13:33 up 1 day, 9:49, 0 users, load average: 7.54, 4.81, 3.58
Tasks: 590 total, 17 running, 573 sleeping, 0 stopped, 0 zombie
Cpu(s): 1.3%us, 0.3%sy, 0.0%ni, 98.2%id, 0.1%wa, 0.0%hi, 0.0%si, 0.1%st
Mem: 247354096k total, 135538012k used, 111816084k free, 700228k buffers
Swap: 4194300k total, 0k used, 4194300k free, 131260340k cached
 
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
58046 oracle 20 0 63.8g 886m 689m S 54.8 0.4 5:24.47 ora_u005_SE2
58049 oracle 20 0 63.8g 886m 689m R 54.8 0.4 5:23.70 ora_u005_SE2
58065 oracle 20 0 63.8g 886m 689m R 52.8 0.4 1:38.75 ora_u005_SE2
58043 oracle 20 0 63.8g 886m 689m S 50.9 0.4 5:24.19 ora_u005_SE2
58045 oracle 20 0 63.8g 886m 689m S 50.9 0.4 5:24.21 ora_u005_SE2
58053 oracle 20 0 63.8g 886m 689m R 50.9 0.4 1:36.28 ora_u005_SE2
58061 oracle 20 0 63.8g 886m 689m R 50.9 0.4 1:36.10 ora_u005_SE2
58067 oracle 20 0 63.8g 886m 689m R 50.9 0.4 1:12.89 ora_u005_SE2
...

Different processes but same behavior: running 32 sessions on SE2 gives 50% of CPU resources to each session because of the limitation to 16 threads.

User processes

After a lot of tests, some of them with DML so that LGWR and DBWR have something to do, I checked which sessions have waited on that Resource Manager event:

SQL> select distinct program,username from v$session_event join v$session using(sid) where v$session_event.event like 'resmgr:cpu quantum'
 
PROGRAM USERNAME
------------------------------ ------------------------------
sqlplus.exe SYS
sqlplus.exe SCOTT
JDBC Thin Client SYS

This proves that only user sessions are limited by SE2, and you can see it’s the case for SYS as well as other users.

SYS

Talking about SYS, I’ve run 16 sessions as SYS and 16 sessions as SCOTT:


73056 oracle 20 0 56.6g 25m 23m S 80.3 0.0 0:51.58 oracleSE2 (LOCAL=NO)
73058 oracle 20 0 56.6g 25m 23m S 80.3 0.0 0:51.36 oracleSE2 (LOCAL=NO)
73064 oracle 20 0 56.6g 25m 23m S 80.3 0.0 0:51.34 oracleSE2 (LOCAL=NO)
73052 oracle 20 0 56.6g 25m 23m R 80.0 0.0 0:51.71 oracleSE2 (LOCAL=NO)
73097 oracle 20 0 56.6g 25m 23m R 80.0 0.0 0:51.21 oracleSE2 (LOCAL=NO)
73103 oracle 20 0 56.6g 25m 23m R 80.0 0.0 0:51.02 oracleSE2 (LOCAL=NO)
73111 oracle 20 0 56.6g 25m 23m R 80.0 0.0 0:50.91 oracleSE2 (LOCAL=NO)
73117 oracle 20 0 56.6g 25m 23m R 79.6 0.0 0:50.89 oracleSE2 (LOCAL=NO)
73101 oracle 20 0 56.6g 25m 23m R 79.3 0.0 0:50.87 oracleSE2 (LOCAL=NO)
73050 oracle 20 0 56.6g 25m 23m S 79.0 0.0 0:51.72 oracleSE2 (LOCAL=NO)
73099 oracle 20 0 56.6g 25m 23m S 78.3 0.0 0:51.10 oracleSE2 (LOCAL=NO)
73060 oracle 20 0 56.6g 25m 23m S 78.0 0.0 0:51.23 oracleSE2 (LOCAL=NO)
73108 oracle 20 0 56.6g 25m 23m R 78.0 0.0 0:50.98 oracleSE2 (LOCAL=NO)
73113 oracle 20 0 56.6g 25m 23m S 78.0 0.0 0:50.90 oracleSE2 (LOCAL=NO)
73115 oracle 20 0 56.6g 25m 23m R 78.0 0.0 0:50.84 oracleSE2 (LOCAL=NO)
73106 oracle 20 0 56.6g 25m 23m R 77.3 0.0 0:50.90 oracleSE2 (LOCAL=NO)
72455 oracle 20 0 56.6g 46m 42m R 7.0 0.0 0:58.55 oracleSE2 (LOCAL=NO)
72459 oracle 20 0 56.6g 28m 25m S 7.0 0.0 0:58.38 oracleSE2 (LOCAL=NO)
72461 oracle 20 0 56.6g 28m 25m S 7.0 0.0 0:58.12 oracleSE2 (LOCAL=NO)
72463 oracle 20 0 56.6g 28m 26m S 7.0 0.0 0:58.17 oracleSE2 (LOCAL=NO)
72465 oracle 20 0 56.6g 28m 25m S 7.0 0.0 0:58.08 oracleSE2 (LOCAL=NO)
72467 oracle 20 0 56.6g 28m 26m S 7.0 0.0 0:58.01 oracleSE2 (LOCAL=NO)
72471 oracle 20 0 56.6g 28m 26m S 7.0 0.0 0:57.89 oracleSE2 (LOCAL=NO)
72469 oracle 20 0 56.6g 28m 25m S 6.6 0.0 0:57.87 oracleSE2 (LOCAL=NO)
72473 oracle 20 0 56.6g 28m 26m S 6.6 0.0 0:57.81 oracleSE2 (LOCAL=NO)
72477 oracle 20 0 56.6g 28m 26m S 6.6 0.0 0:57.73 oracleSE2 (LOCAL=NO)
72489 oracle 20 0 56.6g 28m 26m S 6.6 0.0 0:57.64 oracleSE2 (LOCAL=NO)
72493 oracle 20 0 56.6g 28m 26m S 6.6 0.0 0:57.45 oracleSE2 (LOCAL=NO)
72457 oracle 20 0 56.6g 28m 26m S 6.3 0.0 0:58.59 oracleSE2 (LOCAL=NO)
72491 oracle 20 0 56.6g 28m 26m R 6.0 0.0 0:57.65 oracleSE2 (LOCAL=NO)
72481 oracle 20 0 56.6g 28m 25m R 4.6 0.0 0:57.73 oracleSE2 (LOCAL=NO)
72475 oracle 20 0 56.6g 28m 25m S 3.3 0.0 0:57.67 oracleSE2 (LOCAL=NO)

Here you see that not all sessions are equal. Some are able to run 80% of their time in CPU and the others less than 10%.

Let’s see more detail from Orachrome Lighty:

CaptureSE2-SYS

Here it’s clear: the SYS sessions had a higher priority. They were able to run 80% of their time in CPU, and only 20% waiting. The SCOTT sessions here had only 10% of their time in CPU.

Conclusion

The first observation is that only 16 CPU threads are available for user sessions in an SE2 instance. Yes, it is a limitation that was not there in SE, but remember that SE comes from a time when only a few cores were available on servers. My experience is that most of the Standard Edition databases I’ve seen can run with optimal performance with only 4 or 5 active sessions in CPU on average. And I’m talking about 10000 queries per second OLTP applications here. If you reach 16 average active sessions in CPU, then you should look at the queries that do millions of logical reads: you may have some tuning to do on them.
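
A quick way to get a feeling for that, without any management pack, is to sample the number of user sessions currently on CPU. A rough sketch:

-- point-in-time count of user sessions on CPU; sample it regularly to
-- approximate your average active sessions in CPU
select count(*) as sessions_on_cpu
from v$session
where type='USER' and status='ACTIVE' and state <> 'WAITING';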

The second observation is that you should be very careful when running jobs as SYS (maintenance, monitoring). They have a higher priority but still count within the 16-thread limitation, so the user sessions become very limited.

 

Cet article Standard Edition 2 testing the 16 thread limitation est apparu en premier sur Blog dbi services.


GTT in Exadata are eligible to SmartScan


I wanted to check if Exadata predicate offloading can occur on Global Temporary Tables. Actually, I thought it did not, and I was wrong. I was ready to post that as a hypothesis for https://community.oracle.com/thread/3903836 but, before any post to forums, I try to test what I say because I may be wrong, or things may have changed from version to version. Here I will show how easy it is to quickly test a hypothesis. And yes, you can even test SmartScan behavior on your laptop.

Let’s create a Global Temporary Table with some rows:

SQL> create global temporary table DEMOGTT on commit preserve rows as select * from dba_objects;
Table created.
SQL> commit;
Commit complete.

The point here is to use the Filter Predicate LIBrary that is shipped in every Oracle installation, even non-Exadata ones, for simulation:

SQL> alter session set "_rdbms_internal_fplib_enabled"=true cell_offload_plan_display=always "_serial_direct_read"=always;
Session altered.

I’ve also forced Serial Direct Read to be sure to do direct path reads.
Then I select from it with a highly selective predicate:

SQL> set autotrace trace
SQL> select object_id from DEMOGTT where object_name like 'X%';
498 rows selected.
 
Execution Plan
----------------------------------------------------------
Plan hash value: 962761541
 
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1381 | 40049 | 459 (1)| 00:00:01 |
|* 1 | TABLE ACCESS STORAGE FULL| DEMOGTT | 1381 | 40049 | 459 (1)| 00:00:01 |
-------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - storage("OBJECT_NAME" LIKE 'X%')
filter("OBJECT_NAME" LIKE 'X%')
 
Note
-----
- Global temporary table session private statistics used
 
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
1720 consistent gets
1684 physical reads
128 redo size
9983 bytes sent via SQL*Net to client
915 bytes received via SQL*Net from client
35 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
498 rows processed
 
SQL> set autotrace off

Thanks to cell_offload_plan_display=always I can see that the optimizer built a plan that can use predicate offloading (the ‘STORAGE’ full table scan).
Autotrace tells me that I’ve read 1684 blocks from storage. Let me check my session’s cell statistics:

SQL> select name,value from v$mystat join v$statname using(statistic#) where name like 'cell%' and value>0 order by 1;
 
NAME VALUE
---------------------------------------------------------------- ----------
cell IO uncompressed bytes 13795328
cell blocks processed by cache layer 1684
cell blocks processed by data layer 1684
cell blocks processed by txn layer 1684
cell physical IO interconnect bytes 27770880
cell scans 2
cell simulated physical IO bytes eligible for predicate offload 13795328
cell simulated physical IO bytes returned by predicate offload 10552

All the 1684 physical reads were processed by the storage cell layers which means that offloading occurred.

Conclusion

When you are used to it, it’s often easy to build a very small test case to validate any assumption. With this example you know that ‘direct path read temp’ are eligible to SmartScan.

 

Cet article GTT in Exadata are eligible to SmartScan est apparu en premier sur Blog dbi services.

Easy transport of SQL Tuning Sets with OEM


When you want to transport a SQL Tuning Set between production and test for example, you have to pack it into a table, then export the table, import it into the target database, and unpack the STS. This is a case where Enterprise Manager can help to do it quickly.
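
For reference, here is a minimal sketch of the command-line equivalent with DBMS_SQLTUNE (the staging table, schema and STS owner names are examples only):

-- on the source: create a staging table and pack the STS into it
exec dbms_sqltune.create_stgtab_sqlset(table_name=>'STS_STAGING',schema_name=>'SCOTT');
exec dbms_sqltune.pack_stgtab_sqlset(sqlset_name=>'TEST',sqlset_owner=>'SYS',staging_table_name=>'STS_STAGING',staging_schema_owner=>'SCOTT');
-- export the staging table with expdp, import it on the target with impdp, then:
exec dbms_sqltune.unpack_stgtab_sqlset(sqlset_name=>'TEST',sqlset_owner=>'SYS',replace=>true,staging_table_name=>'STS_STAGING',staging_schema_owner=>'SCOTT');

Enterprise Manager does all of this behind the scenes, as shown below.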

First I create a SQL Tuning Set

Capture001

Not showing all details here. Just loading the Top-5 queries from the library cache:

Capture002

Here is my STS, which I named ‘TEST’, and I can export it to a DataPump dump file:

Capture003

I can choose or create a directory from there and there’s a default name for the dump file with the name of my STS within:

Capture004

For the example I use the same database. I have to drop the old one because I cannot rename the STS while importing.
In Enterprise Manager the buttons on the left are related to the object that is selected, but the import button is on the right, like the ‘create’ one, as it creates a new STS. You cannot import into an existing STS.

Capture005

Now enter the impdp parameters:

Capture006

and run the job:

Capture007

Then here is the STS imported:

Capture008

Note that there is also a ‘copy to database’ button that runs all of that. However, because it includes a file transfer, you have to provide host credentials.

For this example, I’ve used the EM13c VirtualBox and have added the emrepus target. I didn’t find the SQL Tuning Set Menu at the place I expected it. Thanks to Twitter friend Hatem Mahmoud I know why:

Don’t forget you need Tuning Pack for SQL Tuning Sets.

 

Cet article Easy transport of SQL Tuning Sets with OEM est apparu en premier sur Blog dbi services.

Resource Manager plan from OEM vs. command line


Are you rather a GUI or a command-line person? Let’s compare what you can do with each when you want to create a Resource Manager plan, and what is missing in the GUI.

I’m using EM13c here on a 12c database. The documentation for the command-line API is here.

I’ll explain what you can set in the GUI and the matching arguments generated by OEM:

Screenshot 2016-03-10 13.36.19

Here we have the name and description (comment):
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE (
plan IN VARCHAR2,
comment IN VARCHAR2 DEFAULT NULL,

When the ‘Activate this plan’ is checked, it calls the dbms_resource_manager.switch_plan, with the allow_scheduler_plan_switches=>false if you uncheck ‘Automatic Plan Switching Enabled’

The CREATE_PLAN_DIRECTIVE is called for each group or sub plan:
group_or_subplan IN VARCHAR2,

The ‘Utilization limit %’ defines the max_utilization_limit, which is now utilization_limit:
max_utilization_limit IN NUMBER DEFAULT NULL, -- deprecated
utilization_limit IN NUMBER DEFAULT NULL,

The number of shares that we set is converted to a percentage of the total shares in order to define the cpu_p1, which is now mgmt_p1.

cpu_p1 IN NUMBER DEFAULT NULL, -- deprecated
mgmt_p1 IN NUMBER DEFAULT NULL,

Actually, OEM puts the share number and not the percentage when generating the SQL, but that’s ok.

Parallel Query DOP and queuing

Screenshot 2016-03-10 13.36.56

Here are the parallel settings. ‘bypass queue’ sets parallel_stmt_critical to ‘bypass_queue’ to avoid statement queuing for this consumer group.
parallel_stmt_critical IN VARCHAR2 DEFAULT NULL);

and the settings (using the deprecated parallel_target_percentage instead of parallel_server_limit):
parallel_degree_limit_p1 IN NUMBER DEFAULT NULL,
parallel_target_percentage IN NUMBER DEFAULT NULL, -- deprecated
parallel_queue_timeout IN NUMBER DEFAULT NULL,

The timeout is the number of seconds the statement can remain queued.

Per session or per-call limits

Screenshot 2016-03-10 13.37.03

The limits set the following arguments (in respective order):
switch_elapsed_time IN NUMBER DEFAULT NULL,
switch_time IN NUMBER DEFAULT NULL,
switch_io_megabytes IN NUMBER DEFAULT NULL,
switch_io_logical IN NUMBER DEFAULT NULL,
switch_io_reqs IN NUMBER DEFAULT NULL,

The action sets a consumer group to switch to, or KILL_SESSION or CANCEL_SQL:
switch_group IN VARCHAR2 DEFAULT NULL,

‘Track by statement’ sets switch_for_call to true (the session switches group only until the end of the call) and ‘use estimate’ sets switch_estimate to true:
switch_for_call IN BOOLEAN DEFAULT NULL,
switch_estimate IN BOOLEAN DEFAULT FALSE,

Idle time limits

Screenshot 2016-03-10 13.37.11

This sets the following times, in seconds:
max_idle_time IN NUMBER DEFAULT NULL,
max_idle_blocker_time IN NUMBER DEFAULT NULL,

What is missing?

It seems that we cannot set the limit based on the CBO estimation here:
max_est_exec_time IN NUMBER DEFAULT NULL,
Same for the maximum number of active sessions limit:
active_sess_pool_p1 IN NUMBER DEFAULT NULL,
And the transaction undo size limit:
undo_pool IN NUMBER DEFAULT NULL,
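
These attributes can still be set with the command-line API. A hedged sketch (the plan and consumer group names are placeholders and the values are arbitrary):

begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.update_plan_directive(
    plan                    => 'MY_PLAN',
    group_or_subplan        => 'MY_GROUP',
    new_max_est_exec_time   => 3600,   -- limit based on the CBO estimation (seconds)
    new_active_sess_pool_p1 => 10,     -- maximum number of active sessions
    new_undo_pool           => 102400  -- undo limit for the consumer group (KB)
  );
  dbms_resource_manager.validate_pending_area;
  dbms_resource_manager.submit_pending_area;
end;
/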

 

Cet article Resource Manager plan from OEM vs. command line est apparu en premier sur Blog dbi services.

Conversion to Flex ASM with asmca takes 5 minutes


In 12c Oracle recommends Flex ASM. You can opt for it at Grid Infrastructure installation, but it’s very easy to convert to it later from asmca. Let’s first check the current cluster mode:


[grid@racp1vm1 ~]$ asmcmd showclustermode
ASM cluster: Flex mode disabled

In ASMCA if you are not in Flex ASM then you have the button to convert to it from the first tab. A listener will run on each node, so you define the port and the interface:
CaptureASMCAFLEXASM0

On my laptop, with the lab environment from the dbi services Grid Infrastructure / RAC training workshop, running converttoFlexASM.sh as root took 5 minutes.

When it’s finished, you restart asmca and see that the convert button is not there anymore:

CaptureASMCAFLEXASM1

you can see it from asmcmd as well:

[grid@racp1vm1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled

I can shut down the ASM instance on node 2 (not on node 1, as I’ve run asmca from it):

Screenshot 2016-03-11 21.15.40

Both nodes have the flex ASM listener:


[grid@racp1vm1 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE racp1vm1 STABLE
ONLINE ONLINE racp1vm2 STABLE
ora.CRS_DG.dg
ONLINE ONLINE racp1vm1 STABLE
OFFLINE OFFLINE racp1vm2 STABLE
ora.DATA.dg
ONLINE ONLINE racp1vm1 STABLE
OFFLINE OFFLINE racp1vm2 STABLE
ora.DATA2.MYACFSVOL.advm
ONLINE ONLINE racp1vm1 Volume device /dev/a
sm/myacfsvol-160 is
online,STABLE
ONLINE ONLINE racp1vm2 Volume device /dev/a
sm/myacfsvol-160 is
online,STABLE
ora.DATA2.dg
ONLINE ONLINE racp1vm1 STABLE
OFFLINE OFFLINE racp1vm2 STABLE

Note that the diskgroups are OFFLINE on node 2 because I stopped the ASM instance, but the ACFS filesystem is still up, thanks to Flex ASM.

 

Cet article Conversion to Flex ASM with asmca takes 5 minutes est apparu en premier sur Blog dbi services.

12c multitenant: Cursor sharing in CDB


In multitenant, there are two goals: consolidation within the same container database and isolation of pluggable databases. I see multitenant consolidation as an extension of schema consolidation. What is not possible in schema consolidation, such as public object name collisions, is now possible with pluggable databases. 10 years ago I administered a database with a high level of consolidation: 3000 schemas with the same structure and different data. The big scalability issue there was library cache waits (it was 10g) because the same SQL statements running on thousands of schemas mean parent cursors with lots of children. When 12c came out and I saw that Multitenant shares the parent cursor, I wanted to compare library cache contention between schema consolidation and pluggable database consolidation.

I’ve created 50 pluggable databases PDB001 to PDB050 with 50 users in each USER001 to USER050.
I prepared the following script:

begin
for i in 1..1000 loop
execute immediate 'select * from dual';
execute immediate 'select '||i||' from dual';
end loop;
end;
/

The idea is to run the same statement in a loop, with other different statements to avoid cursor caching. Actually my goal is to simulate what we find sometimes with connection pools that run a ‘select from dual’ before grabbing a session, just to check it is still there.
I’ve run 50 concurrent sessions with that script, with the following connection combinations:

  1. Same schema name on multiple pluggable databases
  2. Different schema names on the same pluggable database
  3. Same schema name on same pluggable database
  4. Different user names on different pluggable databases

And here are the 4 runs displayed by Orachrome Lighty, my favorite tool to display database performance statistics.
CaptureCDBCURSORS001

Good references for mutexes are Tanel Poder and Andrey Nikolaev: http://blog.tanelpoder.com/files/Oracle_Latch_And_Mutex_Contention_Troubleshooting.pdf

Basically here, “library cache: mutex X” is the most important wait event: it’s contention on the library cache because of the hard parses.

Then I changed the script to add more contention by artificially multiplying the number of child cursors. In addition to the 50 existing ones (not shared because of different users and/or containers), I change an optimizer parameter to get 50 different versions:

begin
for i in 1..50
loop
execute immediate 'alter session set optimizer_index_cost_adj='||i;
for j in 1..20
loop
execute immediate 'select * from dual';
execute immediate 'select '||i||'+'||j||' from dual';
end loop;
end loop;
end;
/

The time it takes is longer than when we had only one version, but it’s still the same time in container consolidation vs. schema consolidation:

CaptureCDBCURSORS002

“cursor: mutex X” appears here. It’s the contention on the parent cursor because of the multiple versions to search.

So, in the current version (I tested on 12.1.0.2), multitenant consolidation is the same as schema consolidation: not worse and not better. This was designed on purpose: sharing the parent cursor saves memory by avoiding storing the same information multiple times, which is the goal of consolidation. The non-sharing is done at the child cursor level.

This means that bad application design that leads to library cache contention in schema consolidation will not behave better when separated into multiple pluggable databases.
When you explicitly want to avoid sharing, then you have either to set a different optimizer parameter (I dream of a dummy one, just to avoid sharing without changing anything else) or to issue different statements. In the following example, I add the connection info as a comment in the statement:


begin
for i in 1..1000 loop
execute immediate 'select /*+ &_USER.@&_CONNECT_IDENTIFIER */ * from dual';
execute immediate 'select '||i||' from dual';
end loop;
end;
/

And here is the result. No contention, except when I connect with the same user and the same service:

CaptureCDBCURSORS003

Please don’t hesitate to comment. Even if library cache contention has improved with each release, a high number of versions is always a problem, especially with bad application design that parses too often. Reading a long chain of child cursors can take a long time and requires an exclusive latch on the parent cursor. PSU 11.2.0.2.2 introduced cursor obsolescence to limit the number of child cursors, but some bugs came with that. In 12.1 it’s limited to 1024 child cursors. With hundreds of pluggable databases we can reach that very quickly because of the many reasons for non-sharing (bind length, NLS settings, adaptive cursor sharing, etc).

On an additional note, if you look at V$SQL_SHARED_CURSOR you don’t see any reason for the non-sharing when it’s because of a different container. An enhancement request has been opened for that.
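
To see the sharing at work, a quick check from the CDB root shows one parent cursor (same SQL_ID) with its child cursors spread over the containers. A minimal sketch:

-- one row per parent cursor: number of children and number of containers involved
select sql_id, count(*) as child_cursors, count(distinct con_id) as containers
from v$sql
where sql_text = 'select * from dual'
group by sql_id;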

 

Cet article 12c multitenant: Cursor sharing in CDB est apparu en premier sur Blog dbi services.
