
RAC to RON then RON to RAC and singleton service


You will probably never do it, but let's imagine you have a RAC database, policy managed, with a singleton service. Then you convert it to RAC One Node, change your mind, and convert it back to RAC. Be careful: the singleton services are converted to uniform ones when converting back to RAC.

My RACDB database is running on two nodes:
[oracle@racp1vm1 ~]$ srvctl status database -db racdb
Instance RACDB_1 is running on node racp1vm1
Instance RACDB_2 is running on node racp1vm2

It is policy managed and in a server pool of two servers:

[oracle@racp1vm1 ~]$ srvctl config database -db racdb | grep pool
Server pools: pool1
[oracle@racp1vm1 ~]$ srvctl status srvpool -serverpool pool1
Server pool name: pool1
Active servers count: 2

I'm in RAC and have the singleton service 'S' running on the first node:

[oracle@racp1vm1 ~]$ srvctl config service -db racdb -service S
Service name: s
Server pool: pool1
Cardinality: SINGLETON
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
...

I want to go to RAC One Node so I need to have only one instance running

[oracle@racp1vm1 ~]$ srvctl stop instance -db racdb -instance RACDB_2 -f

and then convert

[oracle@racp1vm1 ~]$ srvctl convert database -db racdb -dbtype RACONENODE

Then I check the service:

[oracle@racp1vm1 ~]$ srvctl status service -db racdb -service s
Service s is running on nodes: racp1vm1

It is still running on one node of course, and still defined as SINGLETON:

[oracle@racp1vm1 ~]$ srvctl config service -db racdb -service s
Service name: s
Server pool: pool1
Cardinality: SINGLETON
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC

Let’s now convert back to RAC

[oracle@racp1vm1 ~]$ srvctl convert database -db racdb -dbtype RAC

and check the service:

[oracle@racp1vm1 ~]$ srvctl status service -db racdb -service s
Service s is running on nodes: racp1vm1,racp1vm2

Ouch. My service that was a singleton is now running on all nodes.

[oracle@racp1vm1 ~]$ srvctl config service -db racdb -service s
Service name: s
Server pool: pool1
Cardinality: UNIFORM
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC

It seems that the conversion from RAC One Node to RAC has modified the cardinality of all services to UNIFORM.

You have to set it back to SINGLETON:

[oracle@racp1vm1 ~]$ srvctl modify service -db racdb -cardinality singleton -service s -f

Be careful with that.
A service for which the application has not been designed to be load balanced across several nodes may show horrible performance. It's always a good idea to check the service configuration and where the services are running.
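Something like the following can be used as a quick check after any conversion. It's only a minimal sketch, assuming a policy-managed database named racdb and the 12c srvctl output format shown above:

# list every service with its cardinality and the nodes it currently runs on
for svc in $(srvctl status service -db racdb | awk '{print $2}')
do
  srvctl config service -db racdb -service $svc | grep -E 'Service name|Cardinality'
  srvctl status service -db racdb -service $svc
done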

 



Cloning a PDB to the Oracle Cloud Service


When you create a DBaaS on the Oracle Cloud Service you have only one way to access your database: ssh with the public rsa key you provide. Then you can open some ports to access the VM. However, from the VM you can't access your own network, and you need that to move data from on-premises or a private cloud to the public cloud, or to set up a Data Guard between them. Let's see how to quickly tunnel your local database service through the ssh connection.

In this example, I have my local database on 192.168.78.115 where the service CDB is registered to the listener on port 1521. I want to clone a local PDB to my public cloud CDB. The easiest way is remote cloning: create a db link on the destination that connects to my local CDB. I cannot create the db link using '//192.168.78.115:1521' because that address is not visible from the public cloud VM.

Here is where ssh remote tunneling comes into play. I connect to my cloud VM (140.86.3.67) and forward port 1521 of 192.168.78.115 to port 9115 on the cloud VM:

ssh -R 9115:192.168.78.115:1521 140.86.3.67

And I can run sqlplus from there.
Of course, you can use an ssh config file entry for that:

Host cloud
HostName 140.86.3.67
RemoteForward 9115 192.168.78.115:1521
Port 22
User oracle
IdentityFile ~/.ssh/id_rsa

That's all. I can tnsping the local port 9115, which is forwarded to the listener on my site:

SQL> host tnsping //localhost:9115/CDB
 
TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 30-MAR-2016 21:59:55
 
Copyright (c) 1997, 2014, Oracle. All rights reserved.
 
Used parameter files:
/u01/app/oracle/product/12.1.0/dbhome_1/network/admin/sqlnet.ora
 
Used EZCONNECT adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=CDB))(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=9115)))
OK (140 msec)

Then create a database link to it:

SQL> create database link LOCALHOST9115CDB connect to system identified by oracle using '//localhost:9115/CDB';
Database link created.

and run my PDB clone through it:

SQL> create pluggable database PDB from PDB@LOCALHOST9115CDB;
Pluggable database created.

You can use that for duplicate from active database as well. In 12c you will probably use pull-based duplicates, especially when transferring to the cloud, because backupsets are smaller and may be compressed, so you will need a connection from the auxiliary to the target. Then you will need the remote port forwarding. If you prefer Data Pump and don't want to bother with scp, it's the same: you can export or import through a database link. For a standby configuration (Data Guard or Dbvisit Standby) you can do the same as long as the ssh connection is permanent, but it's better to set up a VPN for that.
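For example, a Data Pump import over the same tunnel could look like this. It's a minimal sketch re-using the db link created above; the connection string, schema and directory names are only examples:

# run on the cloud VM, connected to the cloud database
impdp system/oracle@//localhost:1521/CDB \
  schemas=DEMO \
  network_link=LOCALHOST9115CDB \
  directory=DATA_PUMP_DIR \
  logfile=imp_demo.log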

I really encourage you to test the Cloud and Multitenant. There is a free trial that is easy to set up: https://cloud.oracle.com/home#platform
Move one of your databases to it and see what it looks like: Agility, Performance, Availability.
If you're around, the SITB takes place in Geneva in a few weeks: http://www.salon-sitb.ch/exposants/fiche/686/dbi-services.html
I'll be there at the Oracle / dbi services booth. Let's talk about Cloud, Multitenant, or anything else.

 


Lost in the Cloud? My Cloud vs. My Services


The Oracle Cloud has two interfaces that look very similar, and it's easy to get lost and not find what you are looking for in the menus.

[Screenshots: the My Account dashboard on the left and the My Services dashboard on the right]

You see the difference: My Account on the left and My Services on the right.
They look similar, but they manage different levels of the Cloud Services.

The My Account dashboard shows the subscribed services. You can subscribe to new services from there (the Orders button). This is the interface for the Oracle Cloud Services customer, which is your company. You connect to it with your Oracle Single Sign-On account.

My Services is where you actually manage the services in all their details. This is the interface for the administrator: provisioning, monitoring of utilization and uptime. Each service is associated with an Identity Domain for authentication and is physically located in a data center.

Actually, when you subscribe to Oracle Cloud Services you receive your credentials for the Identity Domain, the url to connect to the My Account interface, and the url to go directly to My Services.


From My Account you can navigate to My Services.

So, if you are clicking around and don't find what you want, maybe you are on the My Account interface.
Go to My Services: that is where you can open the Service Console.

 


Create an Oracle database in the Cloud in a few clicks


It has never been easier to create an Oracle database, thanks to Database as a Service (DBaaS). It's the opportunity to test the cloud: Oracle Cloud Services offers a one-month trial. If you are near Geneva, come and see us at the SITB on April 26-27: http://www.dbi-services.com/fr/events-fr/swiss-it-business-sitb-du-26-au-27-avril-2016/
I have also made a video showing how to create a database in the Cloud from A to Z.

The only prerequisite is an e-mail address and a web browser.
You will see how to:

  • Create an account on oracle.com if you don't already have one
  • Subscribe to the 30-day trial of Oracle Cloud Services
  • Create a DBaaS service
  • Connect to the server and to the CDB created that way

The video is 12 minutes long. Allow about a one-hour break for the subscription to the service (the request is validated and you receive an e-mail when it's done). The creation of the DBaaS service, where everything is done automatically (creation and startup of the VM, installation of Oracle, creation of the CDB and the PDB), takes about one hour.

The video is on Screencast-O-matic: http://screencast-o-matic.com/u/nTb6/dbaas

It's the opportunity to test the Cloud. You will become a fan: a database with a decent configuration, accessible from everywhere.
It's also the opportunity to test multitenant. You have all options on this trial version, so have a look at the facilities brought by pluggable databases ('bases enfichables' for those who don't like me using English words for technical terms).

It was also the opportunity for me to test Screencast-O-matic Pro Recorder, which I find excellent. It's my first video, so I'm waiting for your comments.

 


Fixed table automatic statistic gathering in 12c


Before 12c, fixed object statistics were not gathered automatically. In 12c this has changed: even if you have never gathered fixed object statistics, you can see that you have statistics for X$ tables. That may be better than having no statistics at all, but it doesn't replace manual gathering at the right time.

Documentation

First you may have a doubt because the Best Practices for Gathering Optimizer Statistics with Oracle Database 12c white paper, which is by the way excellent, states the following: The automatic statistics gathering job does not gather fixed object statistics.

Actually, this was probably written before the new behavior was implemented. The 12c documentation is clear about it: Oracle Database automatically gathers fixed object statistics as part of automated statistics gathering if they have not been previously collected.

When a table has no statistics, the optimizer usually does dynamic sampling. That's not the case with fixed tables: without statistics, it uses pre-defined values. Note that the SQL Plan Directives generated for predicates on fixed tables seem to be only of reason 'JOIN CARDINALITY MISESTIMATE' or 'GROUP BY CARDINALITY MISESTIMATE'; I've not seen any 'SINGLE TABLE CARDINALITY MISESTIMATE' for them yet.
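You can check that yourself. Here is a minimal sketch, assuming the standard 12c dictionary views, that lists the directives existing on X$ tables and their reason:

select o.object_name, d.type, d.reason, d.state
from   dba_sql_plan_directives d
       join dba_sql_plan_dir_objects o on o.directive_id = d.directive_id
where  o.owner = 'SYS' and o.object_name like 'X$%'
order  by o.object_name;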

I've checked statistics on fixed objects on multiple databases in 11g and 12c with the following query:

column database_creation format a18
column last_analyzed format a18
select dbid,
       to_char(created,'dd.mm.yyyy hh24:mi') database_creation,
       version,
       (select to_char(max(last_analyzed),'dd.mm.yyyy hh24:mi') last_analyzed
        from dba_tab_statistics
        where object_type='FIXED TABLE') last_analyzed
from v$database, v$instance;

I know that dbms_stats.gather_fixed_objects_stats has never been run manually on those databases.


The 11g databases have no statistics for fixed objects. On the 12c databases, the statistics seem to have been collected during the first maintenance window that came after the database creation.

Here is a full sample on many databases:


DBID DATABASE_CREATION VERSION LAST_ANALYZED
3512849711 04.03.2015 14:27 12.1.0.2.0 04.03.2015 22:02
2742730342 27.02.2015 16:25 12.1.0.2.0 27.02.2015 22:01
947204019 04.03.2015 10:38 11.2.0.4.0
3119262236 23.04.2015 08:45 11.2.0.4.0
3086459761 07.09.2015 10:29 11.2.0.4.0
3834994345 05.05.2015 16:58 11.2.0.4.0
2416611527 23.11.2015 15:09 11.2.0.4.0
1308353219 02.06.2015 08:48 12.1.0.2.0 02.06.2015 22:02
2748602325 02.03.2016 10:56 12.1.0.2.0
2100385935 29.03.2016 10:08 12.1.0.2.0
2693495113 29.07.2015 16:41 12.1.0.2.0 29.07.2015 22:03
1838239625 02.05.2013 10:40 12.1.0.2.0 11.08.2015 11:01
2459965412 06.02.2015 12:36 12.1.0.2.0 06.02.2015 22:01
1973550543 25.09.2015 10:05 12.1.0.2.0 25.09.2015 22:03
2777782141 15.09.2015 10:05 12.1.0.2.0 15.09.2015 22:03
1972863322 20.03.2015 09:33 12.1.0.2.0 07.05.2015 12:14
2598026599 30.04.2015 13:38 12.1.0.2.0 30.04.2015 22:02
392835176 02.12.2014 09:12 12.1.0.2.0 21.10.2015 22:05
3648145067 26.11.2014 12:38 12.1.0.2.0 18.12.2014 22:00
1427432880 08.01.2015 16:52 12.1.0.2.0 10.03.2015 11:16
3916227032 10.12.2014 10:47 12.1.0.2.0 10.12.2014 22:01
3410982685 13.05.2015 15:00 12.1.0.2.0 13.05.2015 22:03
3818933859 02.12.2015 07:50 11.2.0.4.0
4043114408 20.04.2015 12:06 12.1.0.2.0 20.04.2015 22:01
1021147402 04.01.2016 15:04 12.1.0.2.0
3248561100 05.05.2015 16:31 12.1.0.2.0 12.06.2015 22:02

There are only a few exceptions, because some databases have the automatic job disabled.

Conclusion

We have known it for a long time: the CBO needs statistics. And it's a good idea to gather them when they are not present. However, this is not a reason to do nothing manually. When the job runs at 22:00 (the default start of the maintenance window), it's probable that many X$ tables do not have the same number of rows as during high activity, especially when it's just after the database has been created and nothing has run yet. Session and process structures probably have few rows. Memory structures are probably small. Our recommendation is to run dbms_stats.gather_fixed_objects_stats at a time where you have significant activity. Then there is no need to run it again frequently. If you scaled up some resource configuration (more memory, more CPU, a larger connection pool) then you may run it again. And don't worry, there are some critical tables that are skipped by the gathering process and still use defaults.
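For example, here is a minimal sketch to gather them manually, or to schedule a one-off run at a busy time of day (the job name and start time are only examples):

exec dbms_stats.gather_fixed_objects_stats;

begin
  dbms_scheduler.create_job(
    job_name   => 'ONE_OFF_FIXED_STATS',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'begin dbms_stats.gather_fixed_objects_stats; end;',
    start_date => trunc(sysdate)+1+10/24,  -- tomorrow at 10:00
    enabled    => true);
end;
/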

 


Oracle kernel relink on Linux: Are my options stored persistently?


I recently attended an Oracle presentation about the Direct NFS (dNFS) driver built into the Oracle kernel. To use dNFS in the database you have to enable it:


cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on

REMARK: In versions before 11gR2/12c the following manual steps were necessary:


cd $ORACLE_HOME/lib
mv libodm11.so libodm11.so_stub
ln -s libnfsodm11.so libodm11.so

In the context of these make commands, somebody in the audience asked how Oracle persistently stores the current link options, so that a future relink won't undo previous settings. E.g. when relinking without the partitioning option, will another relink include the objects again? As nobody had a good answer, I thought I'd check that on Linux.

REMARK: Below tests were done on 12.1.0.2.

Whether options (e.g. OLAP or Partitioning) are linked into the Oracle kernel is actually stored in the archive $ORACLE_HOME/rdbms/lib/libknlopt.a.

I.e. if I relink the Oracle kernel without OLAP with the commands


$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk olap_off ioracle

I can see the following first steps:


/usr/bin/ar d $ORACLE_HOME/rdbms/lib/libknlopt.a xsyeolap.o
/usr/bin/ar cr $ORACLE_HOME/rdbms/lib/libknlopt.a $ORACLE_HOME/rdbms/lib/xsnoolap.o

The object xsyeolap.o is removed from $ORACLE_HOME/rdbms/lib/libknlopt.a and the object $ORACLE_HOME/rdbms/lib/xsnoolap.o is added to it. The link command $ORACLE_HOME/bin/orald then contains the following argument:


-lknlopt `if /usr/bin/ar tv $ORACLE_HOME/rdbms/lib/libknlopt.a | grep xsyeolap.o > /dev/null 2>&1 ; then echo "-loraolap12" ; fi`

I.e. if object xsyeolap.o is in $ORACLE_HOME/rdbms/lib/libknlopt.a then the argument -loraolap12 is added to the link command. As xsyeolap.o is no longer in $ORACLE_HOME/rdbms/lib/libknlopt.a (it has been replaced with xsnoolap.o), the current and future link-commands won’t link OLAP into the kernel.
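A quick way to check this yourself is to run the same ar test as the link command above. This is just a sketch for the OLAP object; the other option objects listed further below can be tested the same way:

if /usr/bin/ar t $ORACLE_HOME/rdbms/lib/libknlopt.a | grep -q '^xsyeolap.o' ; then
  echo "OLAP is linked into the kernel"
else
  echo "OLAP is not linked into the kernel"
fi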

Obviously, adding OLAP again will add xsyeolap.o back to the library $ORACLE_HOME/rdbms/lib/libknlopt.a:


$ make -f ins_rdbms.mk olap_on ioracle
/usr/bin/ar d $ORACLE_HOME/rdbms/lib/libknlopt.a xsnoolap.o
/usr/bin/ar cr $ORACLE_HOME/lib/libknlopt.a $ORACLE_HOME/rdbms/lib/xsyeolap.o
...
$ORACLE_HOME/bin/orald -o $ORACLE_HOME/rdbms/lib/oracle ... -lknlopt `if /usr/bin/ar tv $ORACLE_HOME/rdbms/lib/libknlopt.a | grep xsyeolap.o > /dev/null 2>&1 ; then echo "-loraolap12" ; fi` ...

What happens if the script $ORACLE_HOME/bin/relink is used with the argument "all"?

Actually the relink script uses a perl script, $ORACLE_HOME/install/modmakedeps.pl, which generates a file $ORACLE_HOME/install/current_makeorder.xml. At the beginning of modmakedeps.pl I can see the following declaration:


# initial hash populated with options from libknlopt
my %opts_hash = ( 'kcsm.o' => 'rac_off/rac_on',
'kzlilbac.o' => 'lbac_off/lbac_on',
'kzvidv.o' => 'dv_off/dv_on',
'kxmwsd.o' => 'sdo_off/sdo_on',
'xsyeolap.o' => 'olap_off/olap_on',
'kkpoban.o' => 'part_off/part_on',
'dmwdm.o' => 'dm_off/dm_on',
'kecwr.o' => 'rat_off/rat_on',
'ksnkcs.o' => 'rac_on/rac_off',
'kzlnlbac.o' => 'lbac_on/lbac_off',
'kzvndv.o' => 'dv_on/dv_off',
'kxmnsd.o' => 'sdo_on/sdo_off',
'xsnoolap.o' => 'olap_on/olap_off',
'ksnkkpo.o' => 'part_on/part_off',
'dmndm.o' => 'dm_on/dm_off',
'kecnr.o' => 'rat_on/rat_off',
'jox.o' => 'jox_on/jox_off' );

I.e. we have the following 2 lines in there:


'xsyeolap.o' => 'olap_off/olap_on',
'xsnoolap.o' => 'olap_on/olap_off',

What that means is that the script uses the list in %opts_hash to check which objects are currently in $ORACLE_HOME/rdbms/lib/libknlopt.a. Depending on the objects in the archive, it checks the original installation settings in $ORACLE_HOME/inventory/make/makeorder.xml and generates a new file with the necessary changes in $ORACLE_HOME/install/current_makeorder.xml. E.g. if xsyeolap.o is in $ORACLE_HOME/rdbms/lib/libknlopt.a, but $ORACLE_HOME/inventory/make/makeorder.xml has olap_off, then a copy of $ORACLE_HOME/inventory/make/makeorder.xml is created in $ORACLE_HOME/install/current_makeorder.xml with olap_off changed to olap_on. If xsnoolap.o is in the archive but olap_on is in $ORACLE_HOME/inventory/make/makeorder.xml, then $ORACLE_HOME/install/current_makeorder.xml is generated with olap_off. Finally the relink happens using the runInstaller:


$ORACLE_HOME/oui/bin/runInstaller -relink -waitForCompletion -maketargetsxml $ORACLE_HOME/install/current_makeorder.xml -logDir $ORACLE_HOME/install ORACLE_HOME=$ORACLE_HOME

If you want to see what options are currently set in your libknlopt.a you may do the following:


$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/install/modmakedeps.pl $ORACLE_HOME $ORACLE_HOME/inventory/make/makeorder.xml > /tmp/currentmakeorder.xml
$ grep TARGETNAME /tmp/currentmakeorder.xml | grep -E "(_on|_off)" | cut -d"\"" -f4

On my ORACLE_HOME the output of the last command above looks as follows:


rat_on
part_on
dm_on
olap_on
sdo_on
rac_off
dnfs_off
ctx_on

In case you want to relink the Oracle kernel with the original installation settings then use


$ relink as_installed

The runInstaller will then use the original $ORACLE_HOME/inventory/make/makeorder.xml as the argument for -maketargetsxml.

So no worries if you have to relink the Oracle kernel several times with different options: previous settings are stored persistently in the archive $ORACLE_HOME/rdbms/lib/libknlopt.a. If you don't remember what changes you have made and want to go back to the installation settings, then use


$ relink as_installed

 


Single-Tenant vs. non-CDB: no reason to refuse it


When the non-CDB architecture was declared deprecated, I was a bit upset, because multitenant with a lone PDB just looks like the overhead of 3 containers instead of one. But with experience I changed my mind. First, because the multitenant architecture brings some features that are available even without the option. And second, because this overhead is not a big problem. Let's put numbers on that last point.

A CDB has 3 containers: CDB$ROOT, PDB$SEED, and your pluggable database. Each one has SYSTEM and SYSAUX tablespaces. Even if the pluggable database system tablespaces are smaller thanks to metadata links, it’s still more datafiles and more space. There are also more processes.
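If you want to see those containers and their footprint yourself, here is a minimal sketch, run in the CDB as a DBA and assuming only the standard 12c views:

select con_id, name, open_mode from v$containers;

select con_id, round(sum(bytes)/1024/1024) size_mb
from cdb_data_files
group by con_id
order by con_id;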

I've created a non-CDB and a CDB with the same configuration on identical virtual machines. Oracle Cloud Services is very nice for that.

Here is the storage after the database creation.
First on the Multitenant one:

[oracle@vicdb ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvdb3 25G 17G 7.7G 68% /
tmpfs 7.3G 0 7.3G 0% /dev/shm
/dev/xvdb1 477M 148M 300M 34% /boot
[oracle@vicdb ~]$
 
[oracle@vicdb ~]$ du -xh /u01/app/oracle/oradata | sort -h
18M /u01/app/oracle/oradata/CDB/controlfile
151M /u01/app/oracle/oradata/CDB/onlinelog
741M /u01/app/oracle/oradata/CDB/2F81FDFC05495272E053CE46C40ABDCF
741M /u01/app/oracle/oradata/CDB/2F81FDFC05495272E053CE46C40ABDCF/datafile
2.4G /u01/app/oracle/oradata/CDB/datafile
3.3G /u01/app/oracle/oradata
3.3G /u01/app/oracle/oradata/CDB

And in the non-CDB one:

[oracle@vinon ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvdb3 25G 15G 9.1G 62% /
tmpfs 7.3G 0 7.3G 0% /dev/shm
/dev/xvdb1 477M 148M 300M 34% /boot
 
[oracle@vinon ~]$ du -xh /u01/app/oracle/oradata | sort -h
9.6M /u01/app/oracle/oradata/NON/controlfile
151M /u01/app/oracle/oradata/NON/onlinelog
1.6G /u01/app/oracle/oradata/NON/datafile
1.8G /u01/app/oracle/oradata
1.8G /u01/app/oracle/oradata/NON

The CDB system needs an additional 1.4GB for the additional tablespaces.
If you think about it, the overhead is minimal when you compare it with the size of your database.

That’s for storage. Let’s have a look at memory.

Here is the multitenant system:

top - 10:20:00 up 2:26, 2 users, load average: 0.00, 0.04, 0.13
Tasks: 164 total, 1 running, 163 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.0%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 15138468k total, 14783080k used, 355388k free, 125580k buffers
Swap: 4194300k total, 0k used, 4194300k free, 13845468k cached
 
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20854 oracle -2 0 4790m 19m 17m S 2.4 0.1 0:19.75 ora_vktm_cdb
20890 oracle 20 0 4810m 129m 112m S 0.1 0.9 0:01.70 ora_mmon_cdb
20870 oracle 20 0 4793m 25m 21m S 0.1 0.2 0:00.71 ora_dia0_cdb
20892 oracle 20 0 4791m 40m 38m S 0.1 0.3 0:00.69 ora_mmnl_cdb
21306 oracle 20 0 4791m 26m 24m S 0.0 0.2 0:00.03 ora_j000_cdb
20852 oracle 20 0 4790m 19m 17m S 0.0 0.1 0:00.20 ora_psp0_cdb
20876 oracle 20 0 4791m 51m 48m S 0.0 0.3 0:00.23 ora_ckpt_cdb
20957 oracle 20 0 4801m 69m 57m S 0.0 0.5 0:00.49 ora_cjq0_cdb
12 root RT 0 0 0 0 S 0.0 0.0 0:00.05 watchdog/1
20866 oracle 20 0 4793m 44m 40m S 0.0 0.3 0:00.27 ora_dbrm_cdb
20872 oracle 20 0 4800m 76m 67m S 0.0 0.5 0:00.21 ora_dbw0_cdb
20911 oracle 20 0 4790m 19m 17m S 0.0 0.1 0:00.04 ora_tt00_cdb

and the non-CDB one:

top - 10:21:04 up 2:10, 2 users, load average: 0.00, 0.01, 0.10
Tasks: 154 total, 1 running, 153 sleeping, 0 stopped, 0 zombie
Cpu(s): 1.8%us, 0.6%sy, 0.2%ni, 96.5%id, 0.6%wa, 0.0%hi, 0.0%si, 0.3%st
Mem: 15138468k total, 12102812k used, 3035656k free, 167644k buffers
Swap: 4194300k total, 0k used, 4194300k free, 11260644k cached
 
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
21539 oracle -2 0 4790m 19m 17m S 2.0 0.1 0:32.69 ora_vktm_non
22095 oracle 20 0 15088 1164 856 R 2.0 0.0 0:00.01 top
1 root 20 0 19408 1540 1232 S 0.0 0.0 0:00.69 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.05 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
6 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kworker/u:0
7 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/u:0H
8 root RT 0 0 0 0 S 0.0 0.0 0:00.25 migration/0
9 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh
10 root 20 0 0 0 0 S 0.0 0.0 0:01.02 rcu_sched

With the same SGA sizing, the memory footprint is similar. And you don't need to set up a larger SGA for single-tenant: the buffer cache is the same (it depends on your data), the library cache is the same (it depends on your code), the dictionary cache may be a bit larger but it's still small. Basically it runs exactly the same, except that objects have a container id (which is only one byte in 12.1).

I think this is enough to dispel the myth that single-tenant has a big overhead compared to non-CDB.

For sure, it's a bit strange to have to store a PDB$SEED, which is used only to create new pluggable databases, when we cannot create additional pluggable databases. In single-tenant, you will probably have one CDB with your own seed in read-only, and you can remote clone from it. And it's true that the multitenant architecture has been implemented for the multitenant option. But there is no reason to refuse it. With minimal overhead, you can benefit from a lot of features that are made possible by the dictionary separation. Let's take a single example: in Standard Edition you can move a whole database physically by unplug/plug or remote clone. When you realize that transportable tablespaces have never been available in Standard Edition, you can see unplug/plug as a great enhancement for Standard Edition. Easier than duplicate, and cross-version, cross-platform. Perfect for migrations.
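As an illustration of that last point, a physical move by unplug/plug can be as simple as the following sketch (the PDB name and paths are only examples):

-- on the source CDB
alter pluggable database PDB1 close immediate;
alter pluggable database PDB1 unplug into '/tmp/pdb1.xml';
-- copy the datafiles and the XML manifest to the target server (same paths assumed here),
-- then on the target CDB:
create pluggable database PDB1 using '/tmp/pdb1.xml' nocopy;
alter pluggable database PDB1 open;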

 


Transparently externalize BLOB to BFILE


Storing documents within the database is the easiest, especially because you get them consistent with the metadata stored in the database. If you store them externally, then you need to manage their backup, their synchronization to the standby site, their consistency in case of flashback or PITR, etc. However, documents grow (in number, and in size thanks to the better resolution of scans) and you don't want a database where half of the size is read-only documents. If you have no other option (partitioning, compression, etc.) then you may choose to store the documents externally. This usually means a complete re-design of the application.
In this blog post, here is a quick test I've done to transform some BLOBs into external LOBs (aka BFILE) and make it transparent to the application.

It's just a proof of concept. Any comments are welcome if you think something is wrong here.

Some display settings

SQL> set linesize 220 pagesize 1000 echo on
SQL> column filename format a20
SQL> column doc format a80 trunc
SQL> column external_doc format a40 trunc
SQL> whenever sqlerror exit failure;
SQL> connect demo/demo@//localhost/pdb
Connected.

First, I create a table with BLOB

SQL> create table DEMOTAB ( id number, filename varchar2(255),doc blob );
Table created.

And I will fill it with the content of 3 binary files. Let’s take them in $ORACLE_HOME/bin just for the fun of it:

SQL> host ls $ORACLE_HOME/bin | nl | head -3 > /tmp/files.txt
SQL> host cat /tmp/files.txt
1 acfsroot
2 adapters
3 adrci

I’m using SQL*Loader to load them to the BLOB:

SQL> host echo "load data infile '/tmp/files.txt' into table DEMOTAB fields terminated by ' ' ( id char(10),filename char(255),doc lobfile(filename) terminated by EOF)" > /tmp/sqlldr.ctl
SQL> host cat /tmp/sqlldr.ctl
load data infile '/tmp/files.txt' into table DEMOTAB fields terminated by ' ' ( id char(10),filename char(255),doc lobfile(filename) terminated by EOF)
 
SQL> host cd $ORACLE_HOME/bin ; sqlldr demo/demo@//localhost/pdb control=/tmp/sqlldr.ctl
 
SQL*Loader: Release 12.1.0.2.0 on Tue Apr 12 21:03:22 2016
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
 
Path used: Conventional
Commit point reached - logical record count 3
 
Table DEMOTAB:
3 Rows successfully loaded.
 
Check the log file:
sqlldr.log
for more information about the load.
 

They are loaded, I can query my table:

SQL> select DEMOTAB.*,dbms_lob.getlength(doc) from DEMOTAB;
 
ID FILENAME DOC DBMS_LOB.GETLENGTH(DOC)
---------- -------------------- -------------------------------------------------------------------------------- -----------------------
1 acfsroot 23212F62696E2F7368200A230A230A232061636673726F6F740A23200A2320436F70797269676874 945
2 adapters 3A0A230A2320244865616465723A206E6574776F726B5F7372632F75746C2F61646170746572732E 13360
3 adrci 7F454C4602010100000000000000000002003E000100000000124000000000004000000000000000 46156

I’m creating a folder to store the files externally, and create a DIRECTORY for it:

SQL> host rm -rf /tmp/files ; mkdir /tmp/files
SQL> create directory DEMODIR as '/tmp/files';
Directory created.

Now I add a BFILE column to my table:

SQL> alter table DEMOTAB add ( external_doc bfile );
Table altered.

My idea is not to move all BLOBs to external LOBs, but only part of them. For example, old documents can be externalized whereas current ones stay in the database. That helps to control the database size without taking any risk with consistency in case of PITR.

Here is an inline procedure 'lob_to_file' that reads a LOB and writes it to a file. In the body of the PL/SQL block I call the procedure for the first 2 rows of my table, and once the files are externalized, I empty the DOC column (the BLOB) and set the EXTERNAL_DOC one (the BFILE):

SQL> set serveroutput on
SQL> declare
tmp_blob blob default empty_blob();
procedure lob_to_file(input_blob in BLOB, file_path in varchar2, file_name in varchar2) as
buffer raw(32767);
buffer_size number:=32767;
amount number;
offset number;
filehandle utl_file.file_type;
blob_size number;
begin
filehandle := utl_file.fopen(file_path, file_name,'wb', 1024);
blob_size:=dbms_lob.getlength(input_blob);
offset:=1;
amount:=32767;
while offset < blob_size loop
dbms_lob.read(input_blob, amount, offset, buffer);
utl_file.put_raw(filehandle, buffer,true);
offset := offset + buffer_size;
buffer := null;
end loop;
exception when others then
utl_file.fclose(filehandle);
raise;
end;
begin
for c in ( select * from DEMOTAB where id <=2 ) loop
lob_to_file (c.doc, 'DEMODIR',c.filename);
update DEMOTAB set doc=null,external_doc=bfilename('DEMODIR',c.filename) where id=c.id;
end loop;
end;
/
PL/SQL procedure successfully completed.

Note: don't take my code as an example, I did it quickly. You should know that the best place for code examples is Tim Hall's www.oracle-base.com

I can check that I have the two files in my directory

SQL> host ls -l /tmp/files
total 128
-rw-r--r--. 1 oracle oinstall 945 Apr 12 21:03 acfsroot
-rw-r--r--. 1 oracle oinstall 13360 Apr 12 21:03 adapters

and compare them to the size of the original files:

SQL> host ls -l $ORACLE_HOME/bin | head -4
total 644308
-rwxr-xr-x. 1 oracle oinstall 945 May 24 2014 acfsroot
-rwxr-xr-x. 1 oracle oinstall 13360 Mar 23 2015 adapters
-rwxr-x--x. 1 oracle oinstall 46156 Mar 25 17:20 adrci

And here is my table:

SQL> select id,filename,dbms_lob.getlength(doc),external_doc from DEMOTAB;
 
ID FILENAME DBMS_LOB.GETLENGTH(DOC) EXTERNAL_DOC
---------- -------------------- ----------------------- ----------------------------------------
1 acfsroot bfilename('DEMODIR', 'acfsroot')
2 adapters bfilename('DEMODIR', 'adapters')
3 adrci 46156 bfilename(NULL)

You see that the first two rows have an empty BLOB but a BFILE addressing the files in DEMODIR.
The third row is untouched.

Now, my idea is to make it transparent for the application, so I create a view on it which transparently returns the external LOB when the BLOB is null:

SQL> create view DEMOVIEW as select id,filename,nvl(doc,external_doc) doc from DEMOTAB;
View created.

And now it's time to query. The application does a select into a BLOB, so let's do the same:

SQL> variable doc blob;
SQL> exec select doc into :doc from DEMOVIEW where id=1;
PL/SQL procedure successfully completed.
SQL> print doc
 
DOC
--------------------------------------------------------------------------------
23212F62696E2F7368200A230A230A232061636673726F6F740A23200A2320436F70797269676874

This is the LOB coming from the external file. I get it as a BLOB when I query the view.

And now querying the one that is still stored in the database:

SQL> exec select doc into :doc from DEMOVIEW where id=3;
PL/SQL procedure successfully completed.
SQL> print doc
 
DOC
--------------------------------------------------------------------------------
7F454C4602010100000000000000000002003E000100000000124000000000004000000000000000

By querying the view instead of the table (and you can play with synonyms for that), the application gets the document without knowing whether it comes from the database or from the external directory. It seems that externalizing binary documents does not require a re-design of the application.
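To make it fully transparent without touching the application code, one possibility is to rename the table and let a synonym point to the view. This is only a sketch using the demo names above, and it only covers reads through the view:

alter table DEMOTAB rename to DEMOTAB_BASE;
create or replace view DEMOVIEW as
  select id, filename, nvl(doc,external_doc) doc from DEMOTAB_BASE;
create synonym DEMOTAB for DEMOVIEW;
-- the application keeps selecting from DEMOTAB and now reads through the view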

 



The (almost) same sample schema for all major relational databases (2) – Oracle


In the last post we looked at how to install the “Dell DVD Store Database Test Suite” into a PostgreSQL 9.5.2 database. In this post we’ll do the same with an Oracle database.

The starting point is exactly the same. Download the generic "ds21.tar.gz" and the vendor-specific "ds21_oracle.tar.gz" files and transfer both to the node that hosts the Oracle database:

oracle@oel12102:/var/tmp/ [PROD] ls
ds21_oracle.tar.gz  ds21.tar.gz

Once extracted we have almost the same structure as in the last post for PostgreSQL:

oracle@oel12102:/var/tmp/ds2/ [PROD] ls -l
total 132
-rw-r--r--. 1 oracle oinstall  5308 Aug 12  2010 CreateConfigFile.pl
drwxr-xr-x. 5 oracle oinstall    73 May 31  2011 data_files
drwxr-xr-x. 2 oracle oinstall  4096 Dec  2  2011 drivers
-rw-r--r--. 1 oracle oinstall 30343 May 13  2011 ds2.1_Documentation.txt
-rw-r--r--. 1 oracle oinstall 10103 Nov  9  2011 ds2_change_log.txt
-rw-r--r--. 1 oracle oinstall  1608 Jul  1  2005 ds2_faq.txt
-rw-r--r--. 1 oracle oinstall  2363 May  5  2011 ds2_readme.txt
-rw-r--r--. 1 oracle oinstall  5857 Apr 21  2011 ds2_schema.txt
-rw-r--r--. 1 oracle oinstall 18013 May 12  2005 gpl.txt
-rw-r--r--. 1 oracle oinstall 32827 Nov  9  2011 Install_DVDStore.pl
drwxr-xr-x. 5 oracle oinstall  4096 May 31  2011 oracleds2

The only difference is the "oracleds2" directory. In contrast to the PostgreSQL version, you do not need to create a user, as the scripts connect as sysdba:

oracle@oel12102:/var/tmp/ds2/oracleds2/ [PROD] pwd
/var/tmp/ds2/oracleds2
oracle@oel12102:/var/tmp/ds2/oracleds2/ [PROD] head oracleds2_create_all.sh
# oracleds2_create_all.sh
# start in ./ds2/oracleds2
cd ./build
sqlplus "/ as sysdba" @oracleds2_create_tablespaces_small.sql
sqlplus "/ as sysdba" @oracleds2_create_db_small.sql
cd ../load/cust
sh oracleds2_cust_sqlldr.sh
cd ../orders
sh oracleds2_orders_sqlldr.sh
sh oracleds2_orderlines_sqlldr.sh

Lets go:

oracle@oel12102:/var/tmp/ds2/ [PROD] pwd
/var/tmp/ds2
oracle@oel12102:/var/tmp/ds2/ [PROD] chmod +x Install_DVDStore.pl
oracle@oel12102:/var/tmp/ds2/ [PROD] ./Install_DVDStore.pl 
Please enter following parameters: 
***********************************
Please enter database size (integer expected) : 100
Please enter whether above database size is in (MB / GB) : MB
Please enter database type (MSSQL / MYSQL / PGSQL / ORACLE) : ORACLE
Please enter system type on which DB Server is installed (WIN / LINUX) : LINUX
***********************************

For Oracle database scripts, total 4 paths needed to specify where cust, index, ds_misc and order dbfiles are stored. 

If only one path is specified, it will be assumed same for all dbfiles. 

For specifying multiple paths use ; character as seperator to specify multiple paths 

Please enter path(s) (; seperated if more than one path) where Database Files will be stored (ensure that path exists) : /u02/oradata/PROD/
***********************************
Initializing parameters...
***********************************
Database Size: 100 
Database size is in MB 
Database Type is ORACLE 
System Type for DB Server is LINUX 
File Paths : /u02/oradata/PROD/ 
***********************************

Calculating Rows in tables!! 
Small size database (less than 1 GB) 
Ratio calculated : 10 
Customer Rows: 200000 
Order Rows / month: 10000 
Product Rows: 100000 

Creating CSV files....
Starting to create CSV data files.... 
For larger database sizes, it will take time.
Do not kill the script till execution is complete. 

Creating Customer CSV files!!! 
1 100000 US S 0 
100001 200000 ROW S 0 

Customer CSV Files created!! 

Creating Orders, Orderlines and Cust_Hist csv files!!! 

Creating Order CSV file for Month jan !!! 
1 10000 jan S 1 0 100000 200000 

Creating Order CSV file for Month feb !!! 
10001 20000 feb S 2 0 100000 200000 

Creating Order CSV file for Month mar !!! 
20001 30000 mar S 3 0 100000 200000 

Creating Order CSV file for Month apr !!! 
30001 40000 apr S 4 0 100000 200000 

Creating Order CSV file for Month may !!! 
40001 50000 may S 5 0 100000 200000 

Creating Order CSV file for Month jun !!! 
50001 60000 jun S 6 0 100000 200000 

Creating Order CSV file for Month jul !!! 
60001 70000 jul S 7 0 100000 200000 

Creating Order CSV file for Month aug !!! 
70001 80000 aug S 8 0 100000 200000 

Creating Order CSV file for Month sep !!! 
80001 90000 sep S 9 0 100000 200000 

Creating Order CSV file for Month oct !!! 
90001 100000 oct S 10 0 100000 200000 

Creating Order CSV file for Month nov !!! 
100001 110000 nov S 11 0 100000 200000 

Creating Order CSV file for Month dec !!! 
110001 120000 dec S 12 0 100000 200000 

All Order, Orderlines, Cust_Hist CSV files created !!! 

Creating Inventory CSV file!!!! 

Inventory CSV file created!!!! 

Creating product CSV file!!!! 

Product CSV file created!!!! 

Started creating and writing build scripts for Oracle database... 

Completed creating and writing build scripts for Oracle database!!

All database build scripts(shell and sql) are dumped into their respective folders. 

These scripts are created from template files in same folders with '_generic_template' in their name. 

Scripts that are created from template files have '_' 100 MB in their name. 

User can edit the sql script generated for customizing sql script for more DBFiles per table and change the paths of DBFiles.

Now Run CreateConfigFile.pl perl script in ds2 folder which will generate configuration file used as input to the driver program.

Looks fine, so let's try to load (for Oracle a separate load script was generated):

oracle@oel12102:/var/tmp/ds2/oracleds2/ [PROD] pwd
/var/tmp/ds2/oracleds2
oracle@oel12102:/var/tmp/ds2/oracleds2/ [PROD] chmod +x oracleds2_create_all_100MB.sh
oracle@oel12102:/var/tmp/ds2/oracleds2/ [PROD] ./oracleds2_create_all_100MB.sh
...
  INSERT INTO "DS2"."CATEGORIES" (CATEGORY, CATEGORYNAME) VALUES (1,'Action')
                    *
ERROR at line 1:
ORA-01950: no privileges on tablespace 'DS_MISC'


  INSERT INTO "DS2"."CATEGORIES" (CATEGORY, CATEGORYNAME) VALUES (2,'Animation')
                    *
ERROR at line 1:
ORA-01950: no privileges on tablespace 'DS_MISC'
...

Hm. Quotas seem to be missing. The script that creates the user is this one:

/var/tmp/ds2/oracleds2build/oracleds2_create_db_100MB.sql

Let's add the quotas for the tablespaces there:

oracle@oel12102:/var/tmp/ds2/oracleds2/ [PROD] head -21 build/oracleds2_create_db_100MB.sql

-- DS2 Database Build Scripts
-- Dave Jaffe  Todd Muirhead 8/31/05
-- Copyright Dell Inc. 2005

-- User

SET TERMOUT OFF
DROP USER DS2 CASCADE;
SET TERMOUT ON

CREATE USER DS2
  IDENTIFIED BY ds2
  TEMPORARY TABLESPACE "TEMP"
  DEFAULT TABLESPACE "DS_MISC"
  ;
ALTER USER DS2 QUOTA UNLIMITED ON CUSTTBS;
ALTER USER DS2 QUOTA UNLIMITED ON INDXTBS;
ALTER USER DS2 QUOTA UNLIMITED ON DS_MISC;
ALTER USER DS2 QUOTA UNLIMITED ON ORDERTBS;

…and try again (sorry for the long output, I just want to be complete here):

oracle@oel12102:/var/tmp/ds2/oracleds2/ [PROD] pwd
/var/tmp/ds2/oracleds2
oracle@oel12102:/var/tmp/ds2/oracleds2/ [PROD] ./oracleds2_create_all_100MB.sh

SQL*Plus: Release 12.1.0.2.0 Production on Tue Apr 12 13:29:06 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Connected.
SQL> spool CreateDS2_Tablespaces.log
SQL> 
SQL> --Currently this template assumes need for only single datafile per table
SQL> --This might impact performance for larger database sizes, so either user needs to edit the generated script from this template or change logic in perl script to generate required build table space script
SQL> --Paramters that need to be changed acc to database size are - number of datafiles per table, initial size of data file and size of increments for data file in case of overflow
SQL> 
SQL> --Paths for windows should be like this : c:\oracledbfiles\
SQL> --paths for linux should be like this : /oracledbfiles/
SQL> 
SQL> CREATE TABLESPACE "CUSTTBS" LOGGING DATAFILE '/u02/oradata/PROD/cust_1.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED	EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ;
CREATE TABLESPACE "CUSTTBS" LOGGING DATAFILE '/u02/oradata/PROD/cust_1.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED	EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO
*
ERROR at line 1:
ORA-01543: tablespace 'CUSTTBS' already exists


SQL> ALTER TABLESPACE "CUSTTBS" ADD DATAFILE '/u02/oradata/PROD/cust_2.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED ;
ALTER TABLESPACE "CUSTTBS" ADD DATAFILE '/u02/oradata/PROD/cust_2.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED
*
ERROR at line 1:
ORA-01537: cannot add file '/u02/oradata/PROD/cust_2.dbf' - file already partof database


SQL> CREATE TABLESPACE "INDXTBS" LOGGING DATAFILE '/u02/oradata/PROD/indx_1.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED	EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
CREATE TABLESPACE "INDXTBS" LOGGING DATAFILE '/u02/oradata/PROD/indx_1.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED	EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO
*
ERROR at line 1:
ORA-01543: tablespace 'INDXTBS' already exists


SQL> ALTER TABLESPACE "INDXTBS" ADD DATAFILE '/u02/oradata/PROD/indx_2.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED ;
ALTER TABLESPACE "INDXTBS" ADD DATAFILE '/u02/oradata/PROD/indx_2.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED
*
ERROR at line 1:
ORA-01537: cannot add file '/u02/oradata/PROD/indx_2.dbf' - file already partof database


SQL> CREATE TABLESPACE "DS_MISC" LOGGING DATAFILE '/u02/oradata/PROD/ds_misc.dbf' SIZE 500M REUSE AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
CREATE TABLESPACE "DS_MISC" LOGGING DATAFILE '/u02/oradata/PROD/ds_misc.dbf' SIZE 500M REUSE AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO
*
ERROR at line 1:
ORA-01543: tablespace 'DS_MISC' already exists


SQL> CREATE TABLESPACE "ORDERTBS" LOGGING DATAFILE '/u02/oradata/PROD/order_1.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
CREATE TABLESPACE "ORDERTBS" LOGGING DATAFILE '/u02/oradata/PROD/order_1.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO
*
ERROR at line 1:
ORA-01543: tablespace 'ORDERTBS' already exists

SQL> ALTER TABLESPACE "ORDERTBS" ADD DATAFILE '/u02/oradata/PROD/order_2.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED ;
ALTER TABLESPACE "ORDERTBS" ADD DATAFILE '/u02/oradata/PROD/order_2.dbf' SIZE 1000M REUSE AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED
*
ERROR at line 1:
ORA-01537: cannot add file '/u02/oradata/PROD/order_2.dbf' - file already partof database


SQL> spool off
SQL> exit;
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL*Plus: Release 12.1.0.2.0 Production on Tue Apr 12 13:29:06 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options


User created.


User altered.


User altered.


User altered.


User altered.


Grant succeeded.


Table created.


Table created.


Table created.


Table created.


Table created.


Table created.


Table created.


1 row created.


1 row created.


1 row created.


1 row created.


1 row created.


1 row created.


1 row created.


1 row created.


1 row created.


1 row created.


1 row created.


1 row created.


1 row created.


1 row created.


1 row created.


1 row created.


Table created.


Sequence created.


Sequence created.


Package created.


Commit complete.

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:10 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:10 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct
Path used:      Direct

Load completed - logical record count 100000.

Table DS2.CUSTOMERS, partition US_PART:
  100000 Rows successfully loaded.

Check the log file:
  us.log
for more information about the load.

Load completed - logical record count 100000.

Table DS2.CUSTOMERS, partition ROW_PART:
  100000 Rows successfully loaded.

Check the log file:
  row.log
for more information about the load.

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:12 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:12 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:12 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:12 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:12 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:12 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:12 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:12 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:12 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:12 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:12 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:12 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct

Load completed - logical record count 10000.

Table DS2.ORDERS, partition FEB2009:
  10000 Rows successfully loaded.

Check the log file:
  feb_orders.log
for more information about the load.

Load completed - logical record count 10000.

Table DS2.ORDERS, partition JUL2009:
  10000 Rows successfully loaded.

Check the log file:
  jul_orders.log
for more information about the load.

Load completed - logical record count 10000.

Table DS2.ORDERS, partition SEP2009:
  10000 Rows successfully loaded.

Check the log file:
  sep_orders.log
for more information about the load.

Load completed - logical record count 10000.

Load completed - logical record count 10000.

Load completed - logical record count 10000.

Load completed - logical record count 10000.

Table DS2.ORDERS, partition APR2009:
  10000 Rows successfully loaded.

Check the log file:
  apr_orders.log
for more information about the load.

Load completed - logical record count 10000.

Table DS2.ORDERS, partition JAN2009:
  10000 Rows successfully loaded.

Check the log file:
  jan_orders.log
for more information about the load.

Load completed - logical record count 10000.

Table DS2.ORDERS, partition DEC2009:
  10000 Rows successfully loaded.

Check the log file:
  dec_orders.log
for more information about the load.

Table DS2.ORDERS, partition OCT2009:
  10000 Rows successfully loaded.

Check the log file:
  oct_orders.log
for more information about the load.

Table DS2.ORDERS, partition MAY2009:
  10000 Rows successfully loaded.

Check the log file:
  may_orders.log
for more information about the load.

Table DS2.ORDERS, partition JUN2009:
  10000 Rows successfully loaded.

Check the log file:
  jun_orders.log
for more information about the load.

Load completed - logical record count 10000.

Load completed - logical record count 10000.

Table DS2.ORDERS, partition NOV2009:
  10000 Rows successfully loaded.

Check the log file:
  nov_orders.log
for more information about the load.

Table DS2.ORDERS, partition MAR2009:
  10000 Rows successfully loaded.

Check the log file:
  mar_orders.log
for more information about the load.

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:14 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:14 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


Load completed - logical record count 10000.

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:14 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:14 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


Table DS2.ORDERS, partition AUG2009:
  10000 Rows successfully loaded.

Check the log file:
  aug_orders.log
for more information about the load.

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:14 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:14 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:14 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:14 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:14 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:14 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:14 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:14 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:14 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct
Path used:      Direct

Load completed - logical record count 49487.

Table DS2.ORDERLINES, partition NOV2009:
  49487 Rows successfully loaded.

Check the log file:
  nov_orderlines.log
for more information about the load.

Load completed - logical record count 50081.

Table DS2.ORDERLINES, partition MAY2009:
  50081 Rows successfully loaded.

Check the log file:
  may_orderlines.log
for more information about the load.

Load completed - logical record count 49791.

Load completed - logical record count 49763.

Table DS2.ORDERLINES, partition FEB2009:
  49791 Rows successfully loaded.

Check the log file:
  feb_orderlines.log
for more information about the load.

Table DS2.ORDERLINES, partition MAR2009:
  49763 Rows successfully loaded.

Check the log file:
  mar_orderlines.log
for more information about the load.

Load completed - logical record count 49784.

Table DS2.ORDERLINES, partition OCT2009:
  49784 Rows successfully loaded.

Check the log file:
  oct_orderlines.log
for more information about the load.

Load completed - logical record count 50251.

Table DS2.ORDERLINES, partition DEC2009:
  50251 Rows successfully loaded.

Check the log file:
  dec_orderlines.log
for more information about the load.

Load completed - logical record count 50234.

Table DS2.ORDERLINES, partition SEP2009:
  50234 Rows successfully loaded.

Check the log file:
  sep_orderlines.log
for more information about the load.

Load completed - logical record count 49918.

Table DS2.ORDERLINES, partition JUN2009:
  49918 Rows successfully loaded.

Check the log file:
  jun_orderlines.log
for more information about the load.

Load completed - logical record count 50159.

Load completed - logical record count 50206.

Load completed - logical record count 50718.

Table DS2.ORDERLINES, partition AUG2009:
  50159 Rows successfully loaded.

Check the log file:
  aug_orderlines.log
for more information about the load.

Table DS2.ORDERLINES, partition APR2009:
  50206 Rows successfully loaded.

Check the log file:
  apr_orderlines.log
for more information about the load.

Table DS2.ORDERLINES, partition JUL2009:
  50718 Rows successfully loaded.

Check the log file:
  jul_orderlines.log
for more information about the load.

Load completed - logical record count 49687.

Table DS2.CUST_HIST:
  49687 Rows successfully loaded.

Check the log file:
  jan_cust_hist.log
for more information about the load.

Load completed - logical record count 49687.

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:15 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.


Table DS2.ORDERLINES, partition JAN2009:
  49687 Rows successfully loaded.

Check the log file:
  jan_orderlines.log
for more information about the load.
Path used:      Direct

Load completed - logical record count 49791.

Table DS2.CUST_HIST:
  49791 Rows successfully loaded.

Check the log file:
  feb_cust_hist.log
for more information about the load.

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:15 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct

Load completed - logical record count 49763.

Table DS2.CUST_HIST:
  49763 Rows successfully loaded.

Check the log file:
  mar_cust_hist.log
for more information about the load.

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:15 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct

Load completed - logical record count 50206.

Table DS2.CUST_HIST:
  50206 Rows successfully loaded.

Check the log file:
  apr_cust_hist.log
for more information about the load.

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:15 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct

Load completed - logical record count 50081.

Table DS2.CUST_HIST:
  50081 Rows successfully loaded.

Check the log file:
  may_cust_hist.log
for more information about the load.

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:15 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct

Load completed - logical record count 49918.

Table DS2.CUST_HIST:
  49918 Rows successfully loaded.

Check the log file:
  jun_cust_hist.log
for more information about the load.

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:15 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct

Load completed - logical record count 50718.

Table DS2.CUST_HIST:
  50718 Rows successfully loaded.

Check the log file:
  jul_cust_hist.log
for more information about the load.

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:15 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct

Load completed - logical record count 50159.

Table DS2.CUST_HIST:
  50159 Rows successfully loaded.

Check the log file:
  aug_cust_hist.log
for more information about the load.

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:15 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct

Load completed - logical record count 50234.

Table DS2.CUST_HIST:
  50234 Rows successfully loaded.

Check the log file:
  sep_cust_hist.log
for more information about the load.

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:16 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct

Load completed - logical record count 49784.

Table DS2.CUST_HIST:
  49784 Rows successfully loaded.

Check the log file:
  oct_cust_hist.log
for more information about the load.

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:16 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct

Load completed - logical record count 49487.

Table DS2.CUST_HIST:
  49487 Rows successfully loaded.

Check the log file:
  nov_cust_hist.log
for more information about the load.

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:16 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct

Load completed - logical record count 50251.

Table DS2.CUST_HIST:
  50251 Rows successfully loaded.

Check the log file:
  dec_cust_hist.log
for more information about the load.

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:16 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct

Load completed - logical record count 100000.

Table DS2.PRODUCTS:
  100000 Rows successfully loaded.

Check the log file:
  prod.log
for more information about the load.

SQL*Loader: Release 12.1.0.2.0 - Production on Tue Apr 12 13:29:16 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct

Load completed - logical record count 100000.

Table DS2.INVENTORY:
  100000 Rows successfully loaded.

Check the log file:
  inv.log
for more information about the load.

SQL*Plus: Release 12.1.0.2.0 Production on Tue Apr 12 13:29:16 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Tue Apr 12 2016 13:29:16 +02:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options


Index created.


Table altered.


Index created.


Index created.


Table altered.


Index created.


Table altered.


Table altered.


Index created.


Table altered.


Table altered.


Index created.


Table altered.


Index created.


Index created.


Index created.

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL*Plus: Release 12.1.0.2.0 Production on Tue Apr 12 13:29:19 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Tue Apr 12 2016 13:29:16 +02:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Connected.
"DS2"."PRODUCTS"(actor) INDEXTYPE IS CTXSYS.CONTEXT
                                            *
ERROR at line 2:
ORA-29833: indextype does not exist


"DS2"."PRODUCTS"(title) INDEXTYPE IS CTXSYS.CONTEXT
                                            *
ERROR at line 2:
ORA-29833: indextype does not exist


Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL*Plus: Release 12.1.0.2.0 Production on Tue Apr 12 13:29:19 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Tue Apr 12 2016 13:29:19 +02:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options


Table created.


Procedure created.


Procedure created.


Procedure created.


Warning: Procedure created with compilation errors.


Warning: Procedure created with compilation errors.


Procedure created.


Trigger created.

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
oracle@oel12102:/var/tmp/ds2/oracleds2/ [PROD] 

Ok, it failed to create the tablespaces, but this is fine. As I do not have Oracle Text installed, the creation of the CTXSYS.CONTEXT indexes failed as well. In general it looks fine. Did we get the same as what we got in the PostgreSQL instance?:

SQL> select table_name from dba_tables where owner = 'DS2' order by 1;

TABLE_NAME
--------------------------------------------------------------------------------
CATEGORIES
CUSTOMERS
CUST_HIST
DERIVEDTABLE1
INVENTORY
ORDERLINES
ORDERS
PRODUCTS
REORDER

9 rows selected.

SQL> select sequence_name from dba_sequences where sequence_owner = 'DS2' order by 1;

SEQUENCE_NAME
--------------------------------------------------------------------------------
CUSTOMERID_SEQ
ORDERID_SEQ

SQL> select index_name from dba_indexes where owner = 'DS2' order by 1;

INDEX_NAME
--------------------------------------------------------------------------------
IX_CUST_USERNAME
IX_INV_PROD_ID
IX_PROD_CATEGORY
IX_PROD_SPECIAL
PK_CATEGORIES
PK_CUSTOMERS
PK_CUST_HIST
PK_ORDERLINES
PK_ORDERS
PK_PROD_ID

10 rows selected.

SQL> select distinct name from dba_source where owner = 'DS2' order by 1;

NAME
--------------------------------------------------------------------------------
BROWSE_BY_ACTOR
BROWSE_BY_CATEGORY
BROWSE_BY_TITLE
DS2_TYPES
LOGIN
NEW_CUSTOMER
PURCHASE
RESTOCK

SQL> select constraint_name from dba_constraints where owner = 'DS2' and constraint_name not like 'SYS%' order by 1;

CONSTRAINT_NAME
--------------------------------------------------------------------------------
FK_CUSTOMERID
FK_CUST_HIST_CUSTOMERID
FK_ORDERID
PK_CATEGORIES
PK_CUSTOMERS
PK_ORDERLINES
PK_ORDERS
PK_PROD_ID

8 rows selected.

It is not exactly the same, but this might be because parts of the sample application are implemented in different ways. We at least have the same tables, almost, except for the “DERIVEDTABLE1″ :) I am only interested in the data anyway.

 

Cet article The (almost) same sample schema for all major relational databases (2) – Oracle est apparu en premier sur Blog dbi services.

A short glance at Attunity replicate


If you follow my blog, you should know that I really like Dbvisit replicate because it’s simple, robust, has good features and excellent support. But that’s not a reason to ignore other alternatives (and this is the reason of http://www.dbi-services.com/news-en/replication-event-2/). When you have heterogeneous sources (not only Oracle) there is Oracle GoldenGate with very powerful possibilities, but maybe not an easy learning curve because of the lack of a simple GUI and setup wizard. You may need something simpler but still able to connect to heterogeneous sources. In this post, I am testing another logical replication software, Attunity Replicate, which has a very easy GUI to start with and has connectors (called ‘endpoints’) for a lot of databases.

I’m trying the free trial version, installed on my laptop (Windows 10) and accessed through a web browser. Here is what the Attunity Replicate architecture looks like:
CaptureAttunity0000
The Bulk reader/loader are for initialization, CDC is the redo mining process and the Stream Loader is the one that applies changes to the destination. Those are connectors to RDBMS (and other sources/destinations). A common engine does the filtering and transformation.

Replication profile

I define a new replication task where my goal is to replicate one simple table, EMP2, a copy of SCOTT.EMP:
CaptureAttunity0101
This task will do the first initialization and run real-time replication.

Source database

I need to define the databases. Interestingly, there is nothing to install on source and destination. The replication server connects only through SQL*Net:
CaptureAttunity0102
I use SYSTEM here for simplicity. You need a user that can access some system tables and views, be able to create a directory, read files, etc.

That’s probably defined in the documentation, but I like to do my first trial just by exploring. In case you missed a privilege, don’t worry, the ‘test’ button will check for everything. Here is an example when you try to use SCOTT:
CaptureAttunity0115
V$LOGMNR_LOGS… That’s interesting. LogMiner may not be the most efficient, and may not support all datatypes, but it’s the only Oracle-supported way to read redo logs.

The Advanced tab is very interesting, as it shows that there are two possibilities to mine the Oracle redo stream: use LogMiner or read the binary files (archived + online from the source, or only archived logs shipped to another location). It supports ASM (and RAC) and it supports encryption (TDE).
CaptureAttunity0103

I’ve unchecked ‘automatically add supplemental logging’ here because it’s something I want to do manually, as it requires an exclusive lock on the source tables. You can let it do it automatically if you have no application running on the source when you start, but that’s probably not the case. Doing it manually lets me do this preparation outside of business hours.
The problem is that you then have to run it for all the tables:

  • ADD SUPPLEMENTAL LOG DATA at database level
  • ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS for all tables that have primary keys
  • ADD SUPPLEMENTAL LOG DATA (…) COLUMNS naming unique columns as well as columns that are used to filter redo records
  • ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS for tables that have no unique columns

Something is missing here for me: I would like to run the DDL manually, but have the script generated automatically.
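To illustrate, here is a minimal sketch of that DDL for the table replicated in this test (assuming SCOTT.EMP2 has a primary key; the last statement, on a hypothetical table without unique columns, is only there to show the (ALL) COLUMNS variant):

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER TABLE scott.emp2 ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
-- hypothetical table with no primary key or unique columns:
ALTER TABLE scott.some_log_table ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;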

Target database

CaptureAttunity0104

Direct-path insert is something that we want for the bulk load, unless we have tables or indexes that do not support OCI direct-path inserts, in which case it will use conventional array inserts. But it seems that this setting is per-task and not per-table.
CaptureAttunity0105

Task design

I’ve selected my table with the ‘Table Selection’ wizard:
CaptureAttunity0106

With ‘Table Settings’ you can customize columns, filter rows:
CaptureAttunity0107
With ‘Global Transformation’ you can add rules to transform tables or columns that follow a pattern.

Bulk Load

There is a ‘Run’ button that will bulk load the tables and start real-time replication, but let’s look at the ‘advanced’ options:
CaptureAttunity0108
You can manage the initial load yourself, which is good (for example to initialize with an RMAN duplicate or an activated standby), but I would like to set an SCN there instead of a timestamp.
The ability to reload is good, but that’s something we may want to do not for all tables but, for example, only for one table that we reorganized at the source.

Those are the cases where simple GUI wizards have their limits. They are good to start with, but you may quickly have to do things that are a little more complex.

If I click on the simple run button, everything is smooth: it will do the initial load and then run replication.

CaptureAttunity0109

I started a transaction on my table that I’ve not committed yet:

03:15:50 SQL> update emp2 set sal=sal+100;
14 rows updated.

And the behaviour looks ok: it waits for the end of the current transactions:
CaptureAttunity0110

My good surprise is that there are no lock waits, which would have blocked DML activity.

CaptureAttunity0111

From the trace files (yes, this is my first trial and I’ve already set trace for all sessions coming from repctl.exe – and anyway, I cannot post a blog with only screenshots…) it reads V$TRANSACTION every few seconds:

=====================
PARSING IN CURSOR #140631075113120 len=49 dep=0 uid=5 oct=3 lid=5 tim=1460820357503567 hv=3003898522 ad='86236a30' sqlid='6ffgmn2thrqnu'
SELECT XIDUSN, XIDSLOT, XIDSQN FROM V$TRANSACTION
END OF STMT
PARSE #140631075113120:c=3999,e=4414,p=0,cr=2,cu=0,mis=1,r=0,dep=0,og=1,plh=3305425530,tim=1460820357503547
EXEC #140631075113120:c=0,e=62,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3305425530,tim=1460820357503922
WAIT #140631075113120: nam='SQL*Net message to client' ela= 22 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1460820357504052
FETCH #140631075113120:c=0,e=392,p=0,cr=0,cu=2,mis=0,r=2,dep=0,og=1,plh=3305425530,tim=1460820357504540
WAIT #140631075113120: nam='SQL*Net message from client' ela= 1033 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1460820357505766
...
*** 2016-04-17 03:26:52.265
WAIT #140631074940472: nam='SQL*Net message from client' ela= 5216008 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1460820412265018
EXEC #140631074940472:c=0,e=25,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=735420252,tim=1460820412265099
WAIT #140631074940472: nam='control file sequential read' ela= 8 file#=0 block#=1 blocks=1 obj#=-1 tim=1460820412265137
WAIT #140631074940472: nam='control file sequential read' ela= 3 file#=0 block#=16 blocks=1 obj#=-1 tim=1460820412265152
WAIT #140631074940472: nam='control file sequential read' ela= 3 file#=0 block#=18 blocks=1 obj#=-1 tim=1460820412265163
WAIT #140631074940472: nam='control file sequential read' ela= 3 file#=0 block#=1 blocks=1 obj#=-1 tim=1460820412265205
WAIT #140631074940472: nam='control file sequential read' ela= 2 file#=0 block#=16 blocks=1 obj#=-1 tim=1460820412265215
WAIT #140631074940472: nam='control file sequential read' ela= 3 file#=0 block#=18 blocks=1 obj#=-1 tim=1460820412265225
WAIT #140631074940472: nam='control file sequential read' ela= 3 file#=0 block#=281 blocks=1 obj#=-1 tim=1460820412265235
WAIT #140631074940472: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1460820412265257
FETCH #140631074940472:c=0,e=168,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,plh=735420252,tim=1460820412265278
 
*** 2016-04-17 03:26:57.498
WAIT #140631074940472: nam='SQL*Net message from client' ela= 5233510 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1460820417498816
EXEC #140631074940472:c=0,e=125,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=735420252,tim=1460820417499357
WAIT #140631074940472: nam='control file sequential read' ela= 42 file#=0 block#=1 blocks=1 obj#=-1 tim=1460820417499592
WAIT #140631074940472: nam='control file sequential read' ela= 26 file#=0 block#=16 blocks=1 obj#=-1 tim=1460820417499819
WAIT #140631074940472: nam='control file sequential read' ela= 16 file#=0 block#=18 blocks=1 obj#=-1 tim=1460820417499890
WAIT #140631074940472: nam='control file sequential read' ela= 14 file#=0 block#=1 blocks=1 obj#=-1 tim=1460820417500081
WAIT #140631074940472: nam='control file sequential read' ela= 17 file#=0 block#=16 blocks=1 obj#=-1 tim=1460820417500177
WAIT #140631074940472: nam='control file sequential read' ela= 11 file#=0 block#=18 blocks=1 obj#=-1 tim=1460820417500237
WAIT #140631074940472: nam='control file sequential read' ela= 47 file#=0 block#=281 blocks=1 obj#=-1 tim=1460820417500324
WAIT #140631074940472: nam='SQL*Net message to client' ela= 7 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1460820417500470
FETCH #140631074940472:c=2000,e=1109,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,plh=735420252,tim=1460820417500550

So it seems that it waits for a point where there is no current transaction, which is the right thing to do because it cannot replicate transactions that started before redo mining.
However, be careful: there is a ‘transaction consistency timeout’ that defaults to 10 minutes and it seems that the load just starts after this timeout. The risk is that if those transactions finally change the tables you replicate, you will get a lot of replication conflicts.

So I commit my transaction and the bulk load starts.

03:15:45 SQL> commit;

Here is what we can see from the trace:

PARSING IN CURSOR #139886395075952 len=206 dep=0 uid=5 oct=3 lid=5 tim=1460819745436746 hv=3641549327 ad='8624b838' sqlid='g2vhbszchv8hg'
select directory_name from all_directories where directory_path = '/u01/app/oracle/fast_recovery_area/XE/onlinelog' and (directory_name = 'ATTUREP_27C9EEFCEMP2' or 'ATTUREP_' != substr(directory_name,1,8) )
END OF STMT

This is a clue that Attunity Replicate creates a directory object (it’s actually created in SYS and I would prefer to be informed of that kind of thing…)
And here is how the redo is read – through SQL*Net:

WAIT #0: nam='BFILE get length' ela= 18 =0 =0 =0 obj#=-1 tim=1460819745437732
LOBGETLEN: c=0,e=84,p=0,cr=0,cu=0,tim=1460819745437768
WAIT #0: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1460819745437787
WAIT #0: nam='SQL*Net message from client' ela= 225 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1460819745438025
WAIT #0: nam='BFILE open' ela= 53 =0 =0 =0 obj#=-1 tim=1460819745438113
LOBFILOPN: c=0,e=78,p=0,cr=0,cu=0,tim=1460819745438125
WAIT #0: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1460819745438137
WAIT #0: nam='SQL*Net message from client' ela= 196 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1460819745438341
WAIT #0: nam='BFILE internal seek' ela= 15 =0 =0 =0 obj#=-1 tim=1460819745438379
WAIT #0: nam='BFILE read' ela= 13 =0 =0 =0 obj#=-1 tim=1460819745438401
WAIT #0: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1460819745438410
LOBREAD: c=0,e=56,p=0,cr=0,cu=0,tim=1460819745438416
WAIT #0: nam='SQL*Net message from client' ela= 208 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1460819745438636
WAIT #0: nam='BFILE internal seek' ela= 4 =0 =0 =0 obj#=-1 tim=1460819745438672
WAIT #0: nam='BFILE read' ela= 14 =0 =0 =0 obj#=-1 tim=1460819745438695
WAIT #0: nam='SQL*Net message to client' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1460819745438706
LOBREAD: c=0,e=48,p=0,cr=0,cu=0,tim=1460819745438713
WAIT #0: nam='SQL*Net message from client' ela= 361 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1460819745439086
WAIT #0: nam='BFILE internal seek' ela= 4 =0 =0 =0 obj#=-1 tim=1460819745439124
WAIT #0: nam='BFILE read' ela= 4 =0 =0 =0 obj#=-1 tim=1460819745439135
WAIT #0: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1460819745439144
WAIT #0: nam='BFILE internal seek' ela= 2 =0 =0 =0 obj#=-1 tim=1460819745439156
WAIT #0: nam='BFILE read' ela= 2 =0 =0 =0 obj#=-1 tim=1460819745439167
WAIT #0: nam='BFILE internal seek' ela= 2 =0 =0 =0 obj#=-1 tim=1460819745439178
WAIT #0: nam='BFILE read' ela= 2 =0 =0 =0 obj#=-1 tim=1460819745439188

So it probably reads the redo through utl_file and transfers it as a BLOB. This is good for simplicity when we want to install nothing on the source, but on the other hand, this means that filtering cannot be done upstream.

Change Data Capture

The GUI shows the overall monitoring. Here are my transactions that are captured and buffered until they are committed:
CaptureAttunity0112

and once they are committed they are applied:
CaptureAttunity0113

Conflict resolution

I did some DML to create a replication conflict, which is something we have to deal with in logical replication (because of constraints, triggers, 2-way replication, etc.) and the default management is a bit loose in my opinion: log it and continue:
CaptureAttunity0114
Ignoring a problem and continuing makes the target unreliable until the issue is manually solved.

This behavior is customizable for the whole replication task. This is the default:
CaptureAttunity0116
but we can customize it: either evict a table from the replication or stop the whole replication.
I prefer the latter because only the latter keeps full consistency on the target. But then we have to be sure that no conflicts exist, or that they are resolved automatically.

I’ve not seen a table-level way to define automatic conflict resolution. In real life, it’s better to stop whenever any unexpected situation occurs (i.e. when one redo record does not change exactly one row) but we may have to accept specific settings for a few tables where the situation is expected (because of triggers, cascade constraints, etc.).

Conclusion

Attunity Replicate buffers the transactions and applies them when the commit is encountered, which is better in case of many rollbacks, but may lead to a replication gap in case of long transactions.
From the traces I’ve seen, the dictionary information is read from the source database each time it is needed. This is probably a higher overhead on the source when compared with solutions that get dictionary changes from the redo stream. And this probably raises more limitations on DDL (for example, changes occurring after ADD, DROP, EXCHANGE and TRUNCATE PARTITION operations are not replicated).

The Attunity GUI is very nice for simple setups and for graphical monitoring, and I understand why it is well known in the SQL Server area for this reason.

The number of supported databases (or rather ‘endpoints’ because it goes beyond databases) is huge:

  • Source can be: Oracle Database, Microsoft SQL Server, Sybase ASE, MySQL, Hadoop, ODBC, Teradata, Salesforce Database, HP NonStop SQL/MX
  • Destination can be: Oracle Database, Microsoft SQL Server, Sybase, MySQL, Hadoop, ODBC, Teradata, Pivotal Greenplum, Actian Vector Database, Amazon Redshift, HP Vertica, Sybase IQ, Netezza, and text files

There are several logical replication solutions, with different prices, different designs, different philosophies. I always recommend a Proof of Concept in order to see which one fits your context. Test everything: the setup and configuration alternatives, the mining of a production workload, with DDL, special datatypes, high redo rate, etc. Don’t give up on the first issue; it’s also a good way to assess documentation quality and support efficiency. And think about all the cases where you will need it: real-time BI, auditing, load balancing, offloading, migrations, data movement to and from the Cloud.

 

Cet article A short glance at Attunity replicate est apparu en premier sur Blog dbi services.

I/O Performance predictability in the Cloud


You don’t need good performance only for your system. You need reliable performance. You need performance predictability.

In a previous post I’ve measured the physical I/O performance of the Oracle Cloud Service with SLOB. And performance is very good. I continue to run the following SLOB configuration:

UPDATE_PCT=25
RUN_TIME=3600
WORK_LOOP=100000
SCALE=90000M
WORK_UNIT=64
REDO_STRESS=LITE
LOAD_PARALLEL_DEGREE=4

This is physical I/O (I have a small buffer cache here) with a fixed workload (the WORK_LOOP). When you do that, you should follow Kevin’s recommendation to set RUN_TIME to an arbitrarily high value so that the run stops only at the end of WORK_LOOP. I thought one hour would always be sufficient, but you will see that I was wrong.

For days, the performance was constant: about 15 minutes to do roughly 6 million physical reads (around 6,000 reads per second):

CaptureCLOUDIOPB1002

Except that we see something different last Sunday. Let’s zoom in on it (I’m using Orachrome Lighty):

CaptureCLOUDIOPB1001

As you see, the runs have been longer, starting from April 23rd around 5pm, with the run time increasing. On the 24th, from 3am to 10am, you don’t see it increasing because it went over the 3600 seconds I’ve set in RUN_TIME. Then it came back to normal after 2pm.

In the lower part, you can see the plots from the Lighty ASH that show an increase in I/O latency from 5am until noon.

As we don’t see the whole picture because the long runs timed out, I extracted the physical reads per second from the AWR snapshots.

CaptureCLOUDIOPB1003

Here are the raw values:


SQL> set pagesize 1000 linesize 1000
column begin_interval_time format a17 trunc
column end_interval_time format a17 trunc
alter session set nls_timestamp_format='dd-mon-yyyy hh24:mi';
select * from (
select begin_interval_time,end_interval_time
,value-lag(value)over(partition by dbid,instance_number,startup_time order by snap_id) physical_reads
,round(((value-lag(value)over(partition by dbid,instance_number,startup_time order by snap_id))
/ (cast(end_interval_time as date)-cast(begin_interval_time as date)))/24/60/60) physical_reads_per_sec
, 24*60*60*((cast(end_interval_time as date)-cast(begin_interval_time as date))) seconds
from dba_hist_sysstat join dba_hist_snapshot using(dbid,instance_number,snap_id)
where stat_name='physical reads'
) where physical_reads>0
/
 
BEGIN_INTERVAL_TI END_INTERVAL_TIME PHYSICAL_READS PHYSICAL_READS_PER_SEC SECONDS
----------------- ----------------- -------------- ---------------------- ----------
23-apr-2016 10:17 23-apr-2016 10:34 6385274 6272 1018
23-apr-2016 10:44 23-apr-2016 11:01 6388349 6282 1017
23-apr-2016 11:12 23-apr-2016 11:29 6385646 6316 1011
23-apr-2016 11:39 23-apr-2016 11:56 6386464 6523 979
23-apr-2016 12:07 23-apr-2016 12:22 6387231 6780 942
23-apr-2016 12:33 23-apr-2016 12:48 6386537 7120 897
23-apr-2016 12:59 23-apr-2016 13:14 6389581 7147 894
23-apr-2016 13:24 23-apr-2016 13:40 6388724 6669 958
23-apr-2016 13:51 23-apr-2016 14:08 6387493 6478 986
23-apr-2016 14:18 23-apr-2016 14:35 6386402 6280 1017
23-apr-2016 14:46 23-apr-2016 15:03 6387153 6293 1015
23-apr-2016 15:14 23-apr-2016 15:30 6386722 6317 1011
23-apr-2016 15:41 23-apr-2016 15:58 6386488 6374 1002
23-apr-2016 16:09 23-apr-2016 16:25 6387662 6505 982
23-apr-2016 16:36 23-apr-2016 16:51 6387735 6745 947
23-apr-2016 17:02 23-apr-2016 17:17 6387303 7066 904
23-apr-2016 17:28 23-apr-2016 17:43 6387304 7042 907
23-apr-2016 17:54 23-apr-2016 18:10 6388075 6620 965
23-apr-2016 18:21 23-apr-2016 18:38 6386803 6219 1027
23-apr-2016 18:48 23-apr-2016 19:06 6387318 5969 1070
23-apr-2016 19:17 23-apr-2016 19:35 6386298 5785 1104
23-apr-2016 19:46 23-apr-2016 20:05 6388856 5517 1158
23-apr-2016 20:16 23-apr-2016 20:36 6387658 5297 1206
23-apr-2016 20:47 23-apr-2016 21:09 6386522 4838 1320
23-apr-2016 21:20 23-apr-2016 21:43 6386627 4555 1402
23-apr-2016 21:54 23-apr-2016 22:17 6387922 4521 1413
23-apr-2016 22:28 23-apr-2016 22:54 6388141 4135 1545
23-apr-2016 23:04 23-apr-2016 23:32 6388043 3905 1636
23-apr-2016 23:42 24-apr-2016 00:11 6392048 3771 1695
24-apr-2016 00:21 24-apr-2016 00:54 6387294 3237 1973
24-apr-2016 01:05 24-apr-2016 01:47 6391392 2506 2550
24-apr-2016 01:58 24-apr-2016 02:49 6389860 2102 3040
24-apr-2016 03:00 24-apr-2016 04:00 5723619 1589 3602
24-apr-2016 04:10 24-apr-2016 05:00 3196078 1073 2980
24-apr-2016 05:00 24-apr-2016 05:10 258522 416 622
24-apr-2016 05:21 24-apr-2016 06:00 1308239 564 2321
24-apr-2016 06:00 24-apr-2016 06:21 1246742 973 1281
24-apr-2016 06:32 24-apr-2016 07:32 2743521 762 3602
24-apr-2016 07:43 24-apr-2016 08:43 3498613 971 3602
24-apr-2016 08:53 24-apr-2016 09:53 4207757 1168 3603
24-apr-2016 10:04 24-apr-2016 11:00 5884053 1764 3335
24-apr-2016 11:00 24-apr-2016 11:04 507668 1960 259
24-apr-2016 11:15 24-apr-2016 12:00 6392338 2371 2696
24-apr-2016 12:10 24-apr-2016 12:38 6387428 3841 1663
24-apr-2016 12:49 24-apr-2016 13:11 6392759 4742 1348
24-apr-2016 13:22 24-apr-2016 13:42 6387570 5244 1218
24-apr-2016 13:53 24-apr-2016 14:12 6397352 5707 1121
24-apr-2016 14:23 24-apr-2016 14:41 6389321 5916 1080
24-apr-2016 14:51 24-apr-2016 15:09 6391483 6070 1053
24-apr-2016 15:20 24-apr-2016 15:37 6385094 6205 1029
24-apr-2016 15:47 24-apr-2016 16:04 6386833 6342 1007

However, don’t think that performance was bad then. You have disks, and the average single block read is in a few milliseconds:


Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Total Wait Wait % DB Wait
Event Waits Time (sec) Avg(ms) time Class
------------------------------ ----------- ---------- ---------- ------ --------
db file sequential read 74,045 446.5 6.03 71.7 User I/O
db file parallel read 3,010 170.2 56.55 27.3 User I/O
DB CPU 10.6 1.7
log file sync 3 0 0.79 .0 Commit
Disk file operations I/O 5 0 0.14 .0 User I/O
utl_file I/O 4 0 0.02 .0 User I/O
SQL*Net message to client 6 0 0.00 .0 Network
db file async I/O submit 0 0 .0 System I
reliable message 0 0 .0 Other
db file single write 0 0 .0 User I/O

And the wait event histograms show that only a few I/O calls were above 32 milliseconds at the time of the worst IOPS:


                                                % of Total Waits
                                 -----------------------------------------------
                            Waits
                             64ms
Event                       to 2s <32ms <64ms <1/8s <1/4s <1/2s   <1s  >=2s
------------------------- ------ ----- ----- ----- ----- ----- ----- ----- -----
Data file init write 12 20.0 20.0 40.0 20.0
Disk file operations I/O 5 93.6 5.1 1.3
control file sequential r 1 100.0 .0
db file async I/O submit 27 96.7 .9 1.1 .7 .4 .1 .1
db file parallel read 1487 50.6 24.0 13.8 8.3 3.4
db file parallel write 853 74.8 10.0 8.4 5.9 .9 .0
db file sequential read 2661 96.5 1.6 1.1 .6 .2 .0

Here is the wait event histogram at the microsecond level, at the time when the storage head cache hit ratio was probably at its minimum.
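The exact statement is not shown here; as an assumption, this kind of output can be obtained from the 12c view V$EVENT_HISTOGRAM_MICRO with a query along these lines:

select event, wait_time_micro, wait_count, wait_time_format
from v$event_histogram_micro
where event = 'db file sequential read'
order by wait_time_micro;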


06:21:48 SQL>
EVENT WAIT_TIME_MICRO WAIT_COUNT WAIT_TIME_FORMAT
------------------------------ --------------- ---------- ------------------------------
db file sequential read 1 0 1 microsecond
db file sequential read 2 0 2 microseconds
db file sequential read 4 0 4 microseconds
db file sequential read 8 0 8 microseconds
db file sequential read 16 0 16 microseconds
db file sequential read 32 0 32 microseconds
db file sequential read 64 0 64 microseconds
db file sequential read 128 0 128 microseconds
db file sequential read 256 16587 256 microseconds
db file sequential read 512 340140 512 microseconds
db file sequential read 1024 56516 1 millisecond
db file sequential read 2048 5140 2 milliseconds
db file sequential read 4096 12525 4 milliseconds
db file sequential read 8192 45465 8 milliseconds
db file sequential read 16384 53552 16 milliseconds
db file sequential read 32768 14962 32 milliseconds
db file sequential read 65536 9603 65 milliseconds
db file sequential read 131072 4430 131 milliseconds
db file sequential read 262144 1198 262 milliseconds
db file sequential read 524288 253 524 milliseconds
db file sequential read 1048576 3 1 second

That is the performance you can expect from a busy load when you don’t hit the storage head cache. It’s not bad on average. This is just what you can expect from disk.
You just need to keep in mind that the amazing performance that you usually see is not guaranteed. It’s very nice to get that performance for development or test environments, but do not rely on it.

 

Cet article I/O Performance predictability in the Cloud est apparu en premier sur Blog dbi services.

APEX Connect 2016 – Day 1 – SQL and PL/SQL


This year the APEX Connect conference spans three days, with the first day dedicated to SQL and PL/SQL, which are the foundation of APEX and its close link to the database.
After the keynote about “Six months of Ask Tom” by Chris Saxon, who is filling in for Tom Kyte on the famous “Ask Tom” website, I decided
to attend presentations on the following topics:
– A Primer on Service Workers
– Managing the changes in database structures in agile project with Oracle SQL Developer Data Modeler
– Oracle 12c for Developer
– Oracle SQL more functionality for more Performance
– SQL and PL/SQL hidden features
– Code Quality in PL/SQL

Ask Tom:
What was highlighted by Chris Saxon is that the key to getting proper support is to ask well-defined questions.
Most of the time they are lacking details and have a misleading subject.
The keys to asking a good question and increasing the chances of a pertinent answer are the following:
1. Provide a subject which summarizes the question (not something generic like ‘Oracle’ or ‘DB error’)
2. Provide a test case (table, data, code, error details)
3. Provide test data as insert statement
Ideally you can use Oracle Live SQL (https://livesql.oracle.com) to test your code and provide the link to the “Ask Tom” team.
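As an illustration (a hypothetical example, not one from the session), a minimal self-contained test case can be as small as:

create table t ( id number, label varchar2(10) );
insert into t values (1, 'a');
insert into t values (2, 'b');
-- the statement that behaves unexpectedly, with the expected and the actual result described:
select label, count(*) from t group by label;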
Don’t forget: making their work easier gives them more time to answer more questions. So if you spend a little bit more time to submit a proper question, it will benefit the whole community.
Also, easy questions can most of the time find an answer with our friend: Google :-)

Service Workers:
What are service workers? Some would say, they are the future of the internet.
Basically they are JavaScript scripts running between your web browser and the network.
So it’s a programmable network proxy, allowing you to control how network requests are handled.
Service workers have the following steps in their lifecycle:
– Install
– Activate
– Wait (Idle)
– Fetch / transmit
They only work with HTTPS.
To learn more about service workers, I suggest you visit the following blog:
http://www.html5rocks.com/en/tutorials/service-worker/introduction/?redirect_from_locale=de
Like every new component, they are not supported by all web browser versions. You can check compatibility on the following web page:
http://caniuse.com/#feat=serviceworkers

Agile changes in database structures:
Oracle provides a very interesting tool for data modeling, for free: SQL Developer Data Modeler
(http://www.oracle.com/technetwork/developer-tools/datamodeler/overview/index.html)
So don’t hesitate to download it!
It’s a very powerful tool which allows you to manage versions of your data model by integrating Subversion.
That’s why it’s highly recommended to flag your data model with the same release number as the application code in order to keep track.
You can do the following and more in order to manage your data model and stick to the changes required by the developers:
– Compare data models with different versions
– Compare data model with DB data dictionary
– Compare data model with DDL scripts
– Reverse engineer data model from DB data dictionary
– Reverse engineer data model from DDL scripts
– As a result of those comparisons, generate reports and align as required one or the other compared items
– Generate DDL scripts (which can be tuned by setting the preferences)
– Create design rules and transformation scripts
There is no excuse to live with a wrong data model.

Oracle 12c for Developers:
There are quite a few new features in Oracle 12c which make the life of developers easier:
– Extended VARCHAR sizing
– Approximate count distinct
– Pattern matching
– White Lists
– Temporal validity
– JSON
– Data redaction
– Identity columns
I can only recommend that you visit the Oracle documentation to get the details on how to use those nice features.
https://docs.oracle.com/database/121/NEWFT/toc.htm
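As a small illustration of one of the features listed above, here is a sketch of an identity column on a hypothetical DEMO_IDENTITY table (not an example from the conference):

create table demo_identity (
  id   number generated always as identity,
  name varchar2(100)
);
insert into demo_identity (name) values ('first row');
-- the ID is generated automatically, with no sequence or trigger to maintain:
select id, name from demo_identity;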

More SQL functionality for more performance:
There are some SQL features which allow you to lower your development effort and improve performance, like:
– Invisible columns
– Archiving
– Hints/directives
– Use of result cache
– External table
– Use clause
– Row limiting
You are advised to visit the following blogs (8 parts in total):
http://db-oriented.com/2015/08/22/write-less-with-more-part-1/
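To give an idea of the spirit of these features, here is a sketch of invisible columns and row limiting on a hypothetical DEMO_ORDERS table (not taken from the presentation):

alter table demo_orders add (internal_note varchar2(200) invisible);
-- invisible columns are not returned by SELECT * but can be selected explicitly:
select order_id, internal_note
from demo_orders
order by order_id
fetch first 10 rows only;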

SQL and PL/SQL hidden features:
I know, you might be disappointed, but there are no hidden SQL or PL/SQL features in the Oracle database.
All features are documented.
But some, like the following, deserve to be better known and used:
– DEFAULT on NULL
– MERGE
– DBMS_LOGGING package
– MODEL
– LINGUISTIC settings
– DBMS_ASSERT package (Prevent SQL injection)
– Application context for logging
The best advice is to have a look at the new features of every new release / patch of the Oracle database to know what it brings and when it became available.
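For example, DEFAULT ON NULL (the first item in the list) replaces the classic BEFORE INSERT trigger pattern; a minimal sketch on a hypothetical table:

create table demo_default (
  id         number primary key,
  created_by varchar2(30) default on null user
);
-- an explicit NULL is silently replaced by the default (here the current user):
insert into demo_default (id, created_by) values (1, null);
select * from demo_default;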

 

Cet article APEX Connect 2016 – Day 1 – SQL and PL/SQL est apparu en premier sur Blog dbi services.

APEX Connect 2016 – Day 2 – APEX


After the great keynote about “APEX Vision, Past, Present, Future” by Mike Hichwa, the father of APEX, I decided to attend presentations on the following topics:
– Dynamic Actions 5.1
– APEX and Oracle JET
– Enterprise UX patterns
– APEX Scripting
– Automation on APEX instance

APEX Vision, Past, Present, Future:
Mike Hichwa provided a very interesting overview of how WEB DB was born, turned into Project Marvel which gave HTML DB and finally became
APEX, with numerous stories around it.
The vision of the future for APEX with upcoming version 5.1 and further challenges and expectations is very promising.
APEX is looking at a bright future.
In addition we got to know about ABCS (Application Business Connector Services), which can be seen as “APEX light” for business users, as they can build their own apps from there.

Dynamic Actions 5.1:
The web browser can be seen as the new desktop. Nowadays you can do almost everything in your browser:
– Create documents
– Edit spreadsheets
– Create presentations
– Mail
– Edit pictures
– …
All of that is enabled for a big part of it, by JavaScript.
APEX tries to abstract JavaScript away from the developer with Dynamic Actions, which hide some of the complexity.
“If you don’t use Dynamic Actions, you are a forms developer, not a web application developer” according to Juergen Schuster, the host of the presentation.
Thanks to Dynamic Actions, APEX moved more easily into a new dimension with Ajax, so that submitting and reloading the same page can be avoided. Applications look much more user friendly. You just need to be aware that APEX handles Ajax calls like GET and not POST, so they are impacted by Session State Protection.
To learn more about Dynamic Actions and try them, please visit:
http://dynamic-actions.com

APEX and Oracle JET:
The Oracle JavaScript Extension Toolkit (JET) is part of the future of APEX, as new charts based on that technology should appear in APEX 5.1.
Oracle JET is a framework developed for interaction with Oracle Cloud Services, based on different JavaScript libraries:
– JQuery
– JQuery UI
– Hammer
– RequireJS
– Knockout
The given demo was about charting and tables with and without live data.
If you want to know more about Oracle JET and APEX, please visit the following blog by Sven Weller:
https://svenweller.wordpress.com/2016/04/07/integrate-oracle-jet-into-apex-5/
More details can be found on the Oracle site:
http://www.oracle.com/webfolder/technetwork/jet/index.html

Enterprise UX patterns:
What is the gain of using patterns? The answer is instant familiarity: people don’t have to think before using them.
There are many patterns in our daily life, and so there are on web sites.
For web applications, the most common patterns are about the following items:
– Login page
– Filtering
– Marquee pages
– Modal Dialogs
– Complex Forms
In this area APEX also shows its power and flexibility to stick to the common patterns with little effort for the developers.
I would suggest you visit the following website if you are interested in UI patterns:
http://ui-patterns.com/

APEX Scripting:
APEX is based on PL/SQL and provides numerous packages, so you can make use of them for scripting purposes, as actions from the UI can also be done by calling them. There are also tools provided, like the Java-based export tool.
Therefore scripting can be used for the following purposes around APEX:
– Export
– Backup
– Deployment
Packages which are useful for those purposes are:
– APEX_INSTANCE_ADMIN
– APEX_UTIL
– APEX_LANG
As those packages are in the database, the upcoming SQLcl tool (the replacement for SQL*Plus) should be considered.
Please visit the following webpage for more details about SQLcl:
http://www.oracle.com/technetwork/issue-archive/2015/15-sep/o55sql-dev-2692807.html
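As an illustration of the scripting idea, here is a hedged sketch of the kind of query an export or backup script could iterate over, using the APEX dictionary (view and column names assumed from APEX 5.x):

select workspace, application_id, application_name, alias
from apex_applications
order by workspace, application_id;

Each application_id returned can then be passed to the Java-based export tool mentioned above.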

Automation on APEX instance:
As a continuation of the previous topic, and based on the same premises, some tasks related to APEX administration can be automated, like workspace provisioning, exports for versioning and regular “system” reporting.
APEX administration sits somewhere between the DBA and the developer. Therefore the following aspects must be looked at when going for administration automation:
– Responsibility
– Approvals on user requests
– Rights definition
– Resource management
– Administration tasks distribution
APEX provides some functionality for automated workspace delivery, but it can be enhanced by developing a custom solution where, for example, SSO would be used to identify enterprise users. Also, specific roles could be provided and, for example, a default template application installed as well as specific packaged/sample applications.

 

Cet article APEX Connect 2016 – Day 2 – APEX est apparu en premier sur Blog dbi services.

Can you become a PaaS provider for an Oracle Database service?


This post is not about technical limitations: Oracle Database is supported on most hypervisors (certified or not), so anybody can provide DBaaS through a virtual machine as long as it runs a supported OS, has enough CPU, memory and storage, and opens the required ports to access the database (ssh and listener). But you cannot do whatever you want when you install software, and you must agree to its licence terms. Who pays the Oracle Database licence then? And how do you avoid licensing all the servers in your Cloud datacenter?

Currently here are the possible solutions to host an Oracle Database in a Cloud:

  • OPC = Oracle Public Cloud
  • OCM = Oracle Cloud Machine
  • ACE = Authorized Cloud Environment

I’ll not use those acronyms anymore. I don’t like them. They have a different meaning for me.

The easiest solution for a customer that needs a database service and can put their data into one of the Oracle data centers is the Oracle Public Cloud. It’s provided by Oracle, so there are no doubts about the licensing terms. They don’t audit themselves, I guess. And anyway, consolidation is done with Oracle VM, which is a virtualization technology that is accepted by Oracle LMS to licence on vCPU metrics.

However, for legal reasons for example, customers may need to keep their data in their country. Oracle will not have a datacenter in each country. And by one datacenter, I mean two of them, as a disaster recovery site is a must when you care about your data. For this reason, Oracle comes with the possibility to move a part of their public cloud to the customer premises: the Oracle Cloud Machine. But that’s still for big customers, as they have to host the infrastructure (rack, network, electricity) and its high availability.

There are a lot of hosting companies in each country which can provide Cloud services, as in a virtual data center. Let’s say you are one of them and one of your customers wants to run an Oracle database on the VM you provide. Or even better: you want to provide an Oracle Database as a Service. In licencing terms, it’s a bit more complex.

Let’s explain the two ways to licence your database usage here.

SaaS, PaaS, IaaS

An application requires the application software (for example, the PeopleSoft ERP), which stores its data in a database (for example, Oracle Database 12c), which runs on a system with compute resources (CPU, memory), storage and network.

If you sell a compute service on the Cloud and your customer installs and manages the database and application software himself, you provide an IaaS service (infrastructure).
If you sell a database service on the Cloud where you manage the database software and the host, you provide a PaaS service (platform) and your customer installs and maintains the application himself.
If they don’t want to care about the platform, you provide an application service and this is SaaS (software).

Back to licencing now.
In IaaS the customer probably wants a Bring Your Own Licenses (BYOL) approach. They come with the licences that they have purchased themselves and want to use them for the database that they have installed on the IaaS service.
In PaaS they may prefer Pay As You Go (PAYG). They create a DBaaS for a short term and pay for the hours of usage (usage means that the service is up – not necessarily that they are actively using it).
In SaaS they probably want the database usage to be embedded with the software usage and to use the same metric.

CaptureLicencingDBaaS

In a Private Cloud – where hosting and usage are both within the same company – Oracle licences follow the standard agreements. The Edition and Options are chosen. The metric (processor or user) is chosen. And the company IT department can charge the service internally as they want: from usage (you monitor it) or fixed.

This is possible within a company: the users of the database service are the company users (employees, customers, etc.)

But if you want to become a Cloud provider for other companies, things are different. You are then doing hosting. If you provide a Pay As You Go DBaaS, you – the hosting company – own the licences but you don’t use them. Your customer’s users are using the database.

Hosting

There are different rules for companies that are hosting a database for another company. The rules are documented by Oracle in: Oracle Technology Hosting

Here is where it says that the standard licence agreements – those that cover the licences you buy for the database that you use – do not apply in the case of hosting:
The standard Oracle license agreement allows the licensed customer to use the Oracle programs solely for its internal business operations. The customer is not allowed to use the licensed programs for the internal business operations of any other entity.

When you open the service to users that are not your users, you can’t use the ‘standard’ model and must get a ‘hosting’ model. The rules may be identical or different. You have to negotiate them with Oracle sales. An example of a different rule came with Standard Edition 2. If you have Standard Edition licences, you can convert them to Standard Edition 2 for free (and you should do it to go to 12.1.0.2 or higher). However, this rule is for standard agreements only. For hosting licences, you may not be able to convert them 1:1.

Back to the Cloud. If you provide your application as a cloud service, one solution is to provide an ESL (Embedded Software Licence) with your software. In this case, the customer owns the licence, which covers their internal business operations. But your customer can interact with the database only through your application. You must provide everything: administration, monitoring, backups, etc. This is perfect for SaaS.

When you want to provide Database as a Service (DBaaS), through PaaS, you need to negotiate a special agreement and price with Oracle.

Cloud

Besides the Oracle Cloud Services, some other big Cloud providers have negotiated an agreement (Authorized Cloud Environment) with Oracle in order to be able to provide DBaaS for Oracle Databases. Those are listed in the following document: Licensing Oracle Software in the Cloud Computing Environment

  • Amazon Web Services: Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3)
  • Microsoft Windows Azure Platform

Another Authorized Cloud Environment agreement has been signed with Verizon. It is currently not listed in the above document, but it is explained on the Oracle website: http://www.oracle.com/us/corporate/features/oracle-verizon/index.html

If you are not one of Amazon, Azure, Verizon or Oracle Cloud Services, you have to negotiate a hosting agreement with Oracle.

There are two ways to licence the Oracle DBaaS in those Clouds.

Bring Your Own License (BYOL)

If you provide IaaS, your customers have their own licences (ESL, ASFU, FU, ULA, …) and they can use them to install and run Oracle Database on the compute service that you have provided to them. However, you probably didn’t dedicate network, storage and servers to them just to match their licences. And they don’t want to licence all the processors you have in your data center. Only a few hypervisors are accepted for counting only the virtual CPUs allocated to a machine that is running Oracle Database. If your virtualization technology is not listed in the (in)famous Oracle Partitioning Policy, you will need a special agreement there.

It’s easier with user licences (NUP) than with processor licences, but you still have to take care of the minimums: 25 NUP per processor in Enterprise Edition, 10 NUP per server for Standard Edition 2. The customer pays the licences and may be impacted by the fact that you add servers to your data center. Besides the number of servers, be careful not to have servers with more than 2 sockets or you will not be able to host any Standard Edition 2.

Only a proper agreement with Oracle will allow you to lower the risk for your customers in case they have an LMS audit. If you don’t, the changes you make to your data center may impact the conformity of your customers’ licences.

Pay As You Go (PAYG)

Cloud Services are for higher agility and lower upfront costs. The BYOL model is probably not what is expected by your DBaaS customers.
When your customers want to subscribe to a database service, as with a pre-paid card, you need to provide Pay As You Go operational-only costs. The service can be subscribed to for a short term and charged per vCPU. This is a good choice for short projects.

In this case, you, the cloud provider, own the licences and bill your customers according to metered usage. If you want to go there, you will probably dedicate servers to Oracle Database. Be sure to isolate them as much as possible so that Oracle LMS will not be able to consider that the software is installed everywhere. Even better: use a different hypervisor, and why not Oracle VM. An Oracle appliance may be a good idea for that, with Capacity on Demand: you activate more CPUs when you have new customers.

However, as Microsoft and Amazon did, you can find an agreement with Oracle so that only vCPUs are counted and you can transparently charge your customers on the same basis.

As an example, here is how vCPUs map to the processor metrics.

For Enterprise Edition in an Authorized Cloud Environment, each vCPU is counted as a core, and the core factor applies depending on the processor model.

For Standard Edition 2 where the processor metric is a socket, here is the equivalence:

  • 1 socket for 1-4 vCPU
  • 2 sockets for 5-8 vCPU
  • 3 sockets for 9-12 vCPU
  • 4 sockets for 13-16 vCPU

If you have several SE2 instances that need fewer than 4 vCPUs, then you should try to consolidate them into 4 vCPU instances.
If you have instances that require more than 16 cores, then Standard Edition is not the right edition. Anyway, SE2 instances cage CPU usage to 16 threads.

Conclusion

Once you’ve got the agreement with Oracle, you can provide DBaaS. You host the database and you administer it. Obviously, as a long-term IaaS provider, you are a specialist in the administration of the infrastructure, the virtualization and the operating systems. But with PaaS you need many more skills. With DBaaS you will administer the database 24/7. So don’t forget that we can help with our ISO 20000 FlexService SLA.

 

Cet article Can you become a PaaS provider for an Oracle Database service? est apparu en premier sur Blog dbi services.

Testing Oracle on exoscale.ch


EXO_IMG_2802
My last post came from a discussion at SITB with exoscale. They are doing Cloud hosting with datacenters in Switzerland. In Switzerland a lot of companies cannot host their data outside of the country, which is a no-go for the big Cloud providers.

After the discussion they gave me a coupon for a trial IaaS instance.
And if you follow my blog you should know that when I have a trial access, there are good chances that I try it…

The provisioning interface is really simple: you choose the datacenter (I chose the one near Zurich, in a disused military bunker) and a VM with 2 vCPUs and 2GB RAM.
I want to install Oracle Database, so I choose Linux CentOS. I’ll probably try CoreOS later if I have some credits remaining.

EXOSCALE003

I’ll have to connect to it, so I open the ssh port, which is done with a simple click, and I add the listener port as my goal is to run SLOB on an Oracle Database and monitor performance with Orachrome Lighty – all components of my favorite ecosystem…

EXOSCALE005

I don’t like passwords so I import my ssh public key:

EXOSCALE002

and the system is ready in a few seconds:

EXOSCALE004

Now I can connect with ssh to the IP address provided, as root, and set up the system for Oracle Database:


[root@franck ~]# groupadd oinstall
[root@franck ~]# groupadd dba
[root@franck ~]# useradd -g oinstall -G dba oracle
[root@franck ~]# passwd oracle

I put my public ssh key into oracle’s authorized_keys, set the kernel parameters, upload the 12c binaries, and I am ready to install.

I install SLOB (https://kevinclosson.net/slob/) using the create database kit, and run the default PIO workload:

UPDATE_PCT: 25
RUN_TIME: 3600000
WORK_LOOP: 10000000
SCALE: 100M (12800 blocks)
WORK_UNIT: 64
REDO_STRESS: LITE
HOT_SCHEMA_FREQUENCY: 0
DO_HOTSPOT: FALSE
HOTSPOT_MB: 8
HOTSPOT_OFFSET_MB: 16
HOTSPOT_FREQUENCY: 3
THINK_TM_FREQUENCY: 0
THINK_TM_MIN: .1
THINK_TM_MAX: .5

EXOSCALE006

Result: 1500 IOPS with a latency of a few milliseconds for 8k reads.


Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 0.2 0.05 3.89
DB CPU(s): 0.1 0.0 0.00 0.39
Background CPU(s): 0.0 0.0 0.00 0.00
Redo size (bytes): 255,885.4 51,476.9
Logical read (blocks): 1,375.2 276.7
Block changes: 652.0 131.2
Physical read (blocks): 1,261.9 253.9
Physical write (blocks): 324.1 65.2
Read IO requests: 1,261.7 253.8
Write IO requests: 310.8 62.5
Read IO (MB): 9.9 2.0
Write IO (MB): 2.5 0.5

Wait events (I don’t know why the sum is above 100%…)

Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Total Wait Wait % DB Wait
Event Waits Time (sec) Avg(ms) time Class
------------------------------ ----------- ---------- ---------- ------ --------
db file sequential read 1,826,324 2442.5 1.34 48.2 User I/O
db file parallel read 74,983 2285.2 30.48 45.1 User I/O
DB CPU 509.2 10.0

And wait event histogram in microseconds for single block reads:


EVENT WAIT_TIME_MICRO WAIT_COUNT WAIT_TIME_FORMAT
------------------------------ --------------- ---------- ------------------------------
db file sequential read 1 0 1 microsecond
db file sequential read 2 0 2 microseconds
db file sequential read 4 0 4 microseconds
db file sequential read 8 0 8 microseconds
db file sequential read 16 0 16 microseconds
db file sequential read 32 0 32 microseconds
db file sequential read 64 0 64 microseconds
db file sequential read 128 191 128 microseconds
db file sequential read 256 3639 256 microseconds
db file sequential read 512 71489 512 microseconds
db file sequential read 1024 838371 1 millisecond
db file sequential read 2048 887138 2 milliseconds
db file sequential read 4096 21358 4 milliseconds
db file sequential read 8192 3625 8 milliseconds
db file sequential read 16384 1659 16 milliseconds
db file sequential read 32768 1863 32 milliseconds
db file sequential read 65536 4817 65 milliseconds
db file sequential read 131072 3721 131 milliseconds
db file sequential read 262144 5 262 milliseconds

Now time to look at CPU. The processors are not the latest Intel generation:

Model name: Intel Xeon E312xx (Sandy Bridge)
CPU MHz: 2593.748

I’ve run the same slob.conf except that I reduced the SCALE to fit in the buffer cache and do no updates.


Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 0.8 2.1 0.00 0.66
DB CPU(s): 0.8 2.0 0.00 0.65
Background CPU(s): 0.0 0.0 0.00 0.00
Redo size (bytes): 8,257.2 20,944.9
Logical read (blocks): 411,781.5 1,044,502.5
Block changes: 41.8 106.1
Physical read (blocks): 10.9 27.7
Physical write (blocks): 3.5 8.9
Read IO requests: 9.9 25.2
Write IO requests: 2.7 6.7

That’s 412,000 logical reads per second per CPU. However, I have only 0.8 CPU here.

I nearly forgot the great feature of the 12c AWR report: the active-html version with its ‘tetris’ view. Here it is:

EXOSCALE008

Ok, it seems I had a little delay at the beginning of the run before it was 100% on CPU. Knowing that, I ran a report on a shorter period:


Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 2.1 0.00 2.81
DB CPU(s): 1.0 2.1 0.00 2.79
Background CPU(s): 0.0 0.0 0.00 0.00
Redo size (bytes): 8,518.5 18,255.4
Logical read (blocks): 518,770.0 1,111,743.4
Block changes: 41.0 87.9
Physical read (blocks): 11.8 25.3
Physical write (blocks): 1.2 2.6
Read IO requests: 11.5 24.7

Here are the numbers then: up to 520,000 logical reads per second per CPU.

That’s not extreme performance, but it’s an acceptable alternative to an on-premises physical server. Provisioning is really easy.

 

Cet article Testing Oracle on exoscale.ch est apparu en premier sur Blog dbi services.


APEX Connect 2016 – Day 3 – APEX


No keynote for the last day, so I decided to attend presentations on following topics:
– APEX fine art printing
– APEX repository
– Images in APEX
– Interactive Grid in APEX 5.1
– Enhance quality through CI
– Universal Theme 5.1

APEX fine art printing:
Markdown is a simplified markup language where tags are kept as simple as possible in order to keep the impact on the text size minimal.
As opposed to HTML, which is a publication format, markdown is a writing format.
You can find a lot of details about markdown, which was created by John Gruber, on Wikipedia (https://en.wikipedia.org/wiki/Markdown).
There are various editors that support markdown, like Atom (https://atom.io/).
Markdown formatted text can easily be converted into other formats like PDF, Word, PowerPoint, HTML with tools like Pandoc.
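For example, a plain Pandoc conversion from the command line looks like this (file names are illustrative):

pandoc report.md -o report.pdf   # PDF, rendered through LaTeX (a LaTeX engine must be installed)
pandoc report.md -o report.docx  # Word
pandoc report.md -o report.html  # HTML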
There is an APEX plug-in which is based on following tools:
– Pandoc for conversion
– Node.js for the web rendering
The power of a tool like Pandoc is that PDF rendering is based on LaTeX, which produces very precise documents with PDF charts.
The drawback is that generating the high-quality PDF file is slow. It also allows generating the charts directly from queries.
The plug-in can be downloaded from GitHub:
https://github.com/ogobrecht/markdown-apex-plugin

APEX repository:
As you may know, APEX is metadata driven. Everything is stored in tables of the Oracle database: the APEX dictionary.
This allows following kind of operations and more:
– Access application information
– Quality Checks and Audit
– Components creation without the UI
– Auto-generate components of the application on the fly (only for experts)
The kind of information you might be interested in is the information that helps you monitor, like response time, which can be found in
APEX_WORKSPACE_ACTIVITY_LOG and can then be mapped to v$sql. That way you can create your own dashboard.
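A minimal sketch of such a monitoring query (assuming the usual APPLICATION_ID, PAGE_ID, ELAPSED_TIME and VIEW_DATE columns of that view):

select application_id, page_id,
       round(avg(elapsed_time),3) avg_elapsed, count(*) page_views
 from apex_workspace_activity_log
 where view_date > sysdate - 1
 group by application_id, page_id
 order by avg_elapsed desc;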
For Quality checks, the very first tool is the APEX advisor which uses the repository information.
To go even further with quality checks you can use the plug-in available on Git-Hub:
https://github.com/MTAGRatingen/Quality-Assurance-Apex-Plug-In
You may also want to check, for example, the type of authentication used by the applications hosted in your workspace, or which plug-ins are used within those applications, and so on…
The dictionary can also be used to retrieve information about reports and, for example, generate a dynamic display of columns based on a selection. It’s up to you to be creative with all the information available.
The burning question for the “hackers”: Can one write data in the dictionary?
The answer is: Yes, but you shouldn’t, or only with great care.
This is crossing a line, as you will touch undocumented APIs, which can break everything.

Images in APEX:
Images can be stored in different locations when it comes to APEX applications.
– Shared Components
– Tables
– Web Server
Prior to APEX 5, all shared component files (CSS, images, static files) were separated by type, could only be loaded one at a time and were not exported with the application.
Since APEX 5 there is only a separation between Workspace files and Application files; they can be loaded within zip files (keeping the directory structure defined within the zip) and are part of the export. They can also be downloaded at once in a zip file.
Management of static files was improved with APEX 5.
It’s advised to use the following substitution strings when you access the image files: #APP_IMAGES# and #WORKSPACE_IMAGES#.
It is also better to use #IMAGE_PREFIX# rather than the virtual /i/ directory reference.
When it comes to storing images in database tables, they go into BLOBs.
Please visit the following blog to learn more about image management within database BLOBs:
http://joelkallman.blogspot.ch/2014/03/yet-another-post-how-to-link-to.html
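
A minimal sketch of such a table (the names are illustrative; a primary key, the file name and the MIME type are typically stored next to the BLOB):

create table demo_images (
  image_id   number primary key,
  filename   varchar2(400) not null,
  mime_type  varchar2(255) not null,
  image_blob blob
);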

Interactive Grid in APEX 5.1:
What is an Interactive Grid?
It’s an editable interactive report with much more in it.
That new feature will be available with the next APEX 5.1 version.
In the future it will completely replace the interactive reports.
It will contain the following nice features:
– Sort from the column header
– Sort over multiple columns
– Change column order with Drag & Drop
– Column freezing (like in Excel)
– Column Grouping
– Move of column groups
– Revisited Action menu
– Duplicate Row
– Delete Row
– Refresh Row data
– Revert row changes
There will be no limit on the number of grids on the same page.
It will also support master-detail with several levels.

Enhance quality through CI:
Continuous integration allows you to automate the deployment flow of any application, so this can also be implemented with APEX.
You can use the following tools, for example:
– Jenkins as a CI server
– Maven for the build
– RSpec for testing
– Selenium for Web integration
And last but not least, you must use some VCS (Version Control System).
With an automated deployment that also includes automated testing, you will improve quality, as you take cumbersome and repetitive tasks away from your development team and get an instant status of your application; a minimal deployment step is sketched below.
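
For illustration, a scripted import of the APEX application itself, called for example from a Jenkins job through SQL*Plus, could look roughly like this (a sketch using the standard APEX_APPLICATION_INSTALL API; the IDs, schema and export file name are placeholders):

begin
  apex_application_install.set_workspace_id( 1234567890 );  -- target workspace id (placeholder)
  apex_application_install.set_application_id( 100 );       -- target application id (placeholder)
  apex_application_install.generate_offset;
  apex_application_install.set_schema( 'MY_SCHEMA' );       -- parsing schema (placeholder)
end;
/
@f100.sql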

Universal Theme 5.1:
The Universal theme was a big change in APEX 5.0.
There will be some more nice improvements in APEX 5.1.
A short list of the main ones follows:
– Right to left support (reverse your screen in one click)
– Live template options
– Font APEX Icon Library
– Universal Theme Application updated
– 100+ improvements (Inline item help, Dialog auto sizing, User preference for theme style, Auto sized cards…)

All of the above new features make us even more impatient to get our hands on APEX 5.1.
I would like to thank the DOAG for organizing the APEX Connect event and hope to be there in Munich next year.

 

Cet article APEX Connect 2016 – Day 3 – APEX est apparu en premier sur Blog dbi services.

When a query has read 350TB after 73K nested loops


Provided that you have the Tuning Pack, SQL Monitor is the right way to see what a query is currently doing. A query had been running for days and the first figure I saw is that it had read 350TB. This is not the kind of thing you do in only one operation, so I immediately checked the ‘executions’ column: 73K table scans. So each scan is finally only about 5GB. The problem is not the full scan, but the nested loop that iterates over it.

Here’s the tweet that calls for some more explanation:

Finally, it could have been worse. The nested loop iterated 21M times and, thanks to the filter, we did only 73K full table scans.

The problem is not the full table scan: 21M accesses by index would not have been better. The problem is the nested loop. You would expect a hash join for that. I tried to force a hash join, in vain, and finally checked the query. I’ve reproduced the same idea on a small table.

Here are the table creation statements:


create table demo as select rownum id from xmltable('1 to 10');
create table repart (rep_id number, id1 number, id2 number , x char);
insert into repart select rownum*4+0 , id id1, null id2 , 'A' from demo where id between 1 and 3;
insert into repart select rownum*4+1 , null id1, id id2 , 'B' from demo where id between 3 and 5;
insert into repart select rownum*4+2 , null id1, null id2 , 'C' from demo where id between 5 and 7;
insert into repart select rownum*4+3 , id id1, id id2 , 'D' from demo where id between 7 and 9;

Table DEMO has rows with id from 1 to 10, and table REPART has rows that may match this 1-to-10 number either in ID1 or in ID2:

SQL> select * from repart;
 
REP_ID ID1 ID2 X
---------- ---------- ---------- -
4 1 A
8 2 A
12 3 A
5 3 B
9 4 B
13 5 B
6 C
10 C
14 C
7 7 7 D
11 8 8 D
15 9 9 D
 
12 rows selected.

And the user wants to get the rows that match one of them: all rows from DEMO, with the value of “X” from the REPART row that matches on ID1, and if no row matches on ID1 but one matches on ID2, he wants the “X” from that row. Not too hard to write: left outer joins and a coalesce to get the first non-null value:

SQL> select id,coalesce(repart1.x,repart2.x) from demo
left outer join repart repart1 on demo.id=repart1.id1
left outer join repart repart2 on demo.id=repart2.id2
where repart1.rep_id is not null or repart2.rep_id is not null
/
 
ID C
---------- -
3 A
4 B
5 B
7 D
8 D
9 D
1 A
2 A

And the plan is ok with hash join:

Plan hash value: 3945081217
 
--------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 8 |00:00:00.01 | 18 | | | |
|* 1 | FILTER | | 1 | | 8 |00:00:00.01 | 18 | | | |
|* 2 | HASH JOIN OUTER | | 1 | 10 | 10 |00:00:00.01 | 18 | 1888K| 1888K| 1131K (0)|
|* 3 | HASH JOIN OUTER | | 1 | 10 | 10 |00:00:00.01 | 10 | 2440K| 2440K| 1468K (0)|
| 4 | TABLE ACCESS FULL| DEMO | 1 | 10 | 10 |00:00:00.01 | 3 | | | |
| 5 | TABLE ACCESS FULL| REPART | 1 | 12 | 12 |00:00:00.01 | 7 | | | |
| 6 | TABLE ACCESS FULL | REPART | 1 | 12 | 12 |00:00:00.01 | 8 | | | |
--------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - filter(("REPART1"."REP_ID" IS NOT NULL OR "REPART2"."REP_ID" IS NOT NULL))
2 - access("DEMO"."ID"="REPART2"."ID2")
3 - access("DEMO"."ID"="REPART1"."ID1")

But this is not the query I’ve seen. Actually, the user tried to optimize it. He wants to read REPART for ID2 only when there is no match for ID1. So his idea was to add a predicate to the join so that we join to REPART.ID2 only when no REPART row was found on ID1 (REPART1.REP_ID is null):

SQL> select id,coalesce(repart1.x,repart2.x) from demo
left outer join repart repart1 on demo.id=repart1.id1
left outer join repart repart2 on demo.id=repart2.id2
/* added */ and repart1.rep_id is null
where repart1.rep_id is not null or repart2.rep_id is not null
/
 
ID C
---------- -
1 A
2 A
3 A
7 D
8 D
9 D
5 B
4 B

This attempt to optimize shows up in the FILTER operation, and this is why in the original query we accessed the table only 73K times instead of 21M. But in order to do that, the optimizer has implemented the outer join as a lateral view, joined through a nested loop:

Plan hash value: 1922575045
 
------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 8 |00:00:00.01 | 39 | | | |
|* 1 | FILTER | | 1 | | 8 |00:00:00.01 | 39 | | | |
| 2 | NESTED LOOPS OUTER | | 1 | 10 | 10 |00:00:00.01 | 39 | | | |
|* 3 | HASH JOIN OUTER | | 1 | 10 | 10 |00:00:00.01 | 11 | 2440K| 2440K| 1414K (0)|
| 4 | TABLE ACCESS FULL | DEMO | 1 | 10 | 10 |00:00:00.01 | 3 | | | |
| 5 | TABLE ACCESS FULL | REPART | 1 | 12 | 12 |00:00:00.01 | 8 | | | |
| 6 | VIEW | VW_LAT_3A0EC601 | 10 | 1 | 2 |00:00:00.01 | 28 | | | |
|* 7 | FILTER | | 10 | | 2 |00:00:00.01 | 28 | | | |
|* 8 | TABLE ACCESS FULL| REPART | 4 | 1 | 2 |00:00:00.01 | 28 | | | |
------------------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - filter(("REPART1"."REP_ID" IS NOT NULL OR "ITEM_1" IS NOT NULL))
3 - access("DEMO"."ID"="REPART1"."ID1")
7 - filter("REPART1"."REP_ID" IS NULL)
8 - filter("DEMO"."ID"="REPART2"."ID2")

If I check how the query is transformed:

SQL> exec dbms_sqldiag.dump_trace(p_sql_id=>'4s003zk0ggftd',p_child_number=>0,p_component=>'Compiler',p_file_id=>'TB350');
PL/SQL procedure successfully completed.

I can see that it has been transformed to a lateral view in order to include the predicate on the left table column:

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "DEMO"."ID" "ID",COALESCE("REPART1"."X","VW_LAT_3A0EC601"."ITEM_4_3") "COALESCE(REPART1.X,REPART2.X)" FROM "DEMO"."DEMO" "DEMO","DEMO"."REPART" "REPART1", LATERAL( (SELECT "REPART2"."REP_ID" "ITEM_1_0","REPART2"."X" "ITEM_4_3" FROM "DEMO"."REPART" "REPART2" WHERE "DEMO"."ID"="REPART2"."ID2" AND "REPART1"."REP_ID" IS NULL))(+) "VW_LAT_3A0EC601" WHERE ("REPART1"."REP_ID" IS NOT NULL OR "VW_LAT_3A0EC601"."ITEM_1_0" IS NOT NULL) AND "DEMO"."ID"="REPART1"."ID1"(+)

and such a lateral view can be joined only with a nested loop…
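
For illustration, the kind of hint attempt mentioned above would look like this (a sketch only; it remains ineffective because the lateral view produced by the transformation can only be joined with nested loops):

select /*+ use_hash(repart2) */ id,coalesce(repart1.x,repart2.x) from demo
left outer join repart repart1 on demo.id=repart1.id1
left outer join repart repart2 on demo.id=repart2.id2
and repart1.rep_id is null
where repart1.rep_id is not null or repart2.rep_id is not null
/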

Sometimes it’s better to let the optimizer optimize rather than trying to do it ourselves, because we may reduce the possible join methods.
Of course, the hash join had other problems, such as the size of the workarea. That’s what happens when we try to do some reporting on a table that has not been designed for it at all.

 

Cet article When a query has read 350TB after 73K nested loops est apparu en premier sur Blog dbi services.

Attunity Replicate: Replicate numeric data types from Oracle to SQL Server is easy


After the good article “A short glance at Attunity replicate” from Franck Pachot, I will continue to explain our tests of Attunity Replicate with data types and how this tool does the mapping between Oracle and SQL Server for numeric data types.

Introduction

The first step is to find the mapping between Oracle and SQL Server.
I summarize this mapping in this table:
attunity_replicate02

As you may know (or not), in SQL Server you can replicate directly from Oracle, but this functionality is deprecated:
attunity_replicate01

In the MSDN documentation, you have a default data type mapping here.
For numeric data types, the mapping is approximately the same as above.
To complete this part, you can also specify alternative data type mappings here.

Since this functionality is deprecated, I decided to find an alternative, and I found that Attunity Replicate is a good way to replicate from Oracle to SQL Server in near real time.
Attunity Replicate has its own set of data types used for the mapping.
In the Attunity Replicate user guide, you have the mapping between Oracle and the Attunity data types.
If I summarize my previous mapping, I get this:
attunity_replicate03
The next step is to map the Attunity data types to the SQL Server data types, as explained in the user guide.
This gives us the following mapping:
attunity_replicate04
Oops, I have some “no match” entries like BIT, TINYINT, BIGINT, DECIMAL, MONEY or SMALLMONEY due to the intermediate Attunity data type mapping:
attunity_replicate05

This is the theory, and now let’s go to the practice…

Create the source & destination connections

To create the task, follow the blog by Franck here.
In my case, I just change the destination database to a SQL Server connection.
[screenshot: SQL Server destination connection]
One thing not to forget is to test the connection with the button on the bottom-left of the connection window:
attunity_replicate07.png

Create the source table and destination table

With SQL Developer, I create in Oracle a table NUMERIC_DT in the schema OE containing all the numeric data types from my example:

CREATE TABLE OE.NUMERIC_DT
(
  NUMBER_BIT         NUMBER(1),
  NUMBER_TINYINT     NUMBER(3),
  NUMBER_SMALLINT    NUMBER(5),
  NUMBER_INTEGER     NUMBER(10),
  NUMBER_BIGINT      NUMBER(19),
  BINARYFLOAT_REAL   BINARY_FLOAT,
  BINARYDOUBLE_FLOAT BINARY_DOUBLE,
  FLOAT_FLOAT        FLOAT,
  REAL_FLOAT         REAL,
  NUMBER_NUMERIC     NUMBER(30,20),
  NUMBER_DECIMAL     NUMBER(20,10),
  NUMBER_MONEY       NUMBER(19,4),
  NUMBER_SMALLMONEY  NUMBER(10,4)
)

attunity_01

With Attunity Replicate, I use the console in the web browser to replicate my table.
Then I select the schema OE in Oracle and the table NUMERIC_DT.
attunity_replicate10
Now, I start the replication with the Run button
attunity_replicate11
The first action when you start the replication is a full load.
In my case I just created the table without data, so the “Transferred count” value is set to 0 in the monitoring:
attunity_replicate12
I start SSMS to see the table that Attunity created in the SQL Server destination
attunity_replicate13
The result matches my theoretical table above.
For comparison, my colleague Vincent Matthey did the same test with the tool SQL Server Migration Assistant (SSMA) for Oracle.
With SSMS, we can see the result:
STH_SQL_Server

The result is a little strange… As you can see, all the expected SQL Server integer data types like TINYINT, SMALLINT, … end up as NUMERIC with a scale of 0 to hold integers! For example, NUMBER(1) becomes numeric(1,0). It is not wrong, but it is not really right either!

Insert Data

I create a script to put some data in this table, to see if the replication works:

BEGIN
  for i in 1..15000
  loop
    insert into OE.NUMERIC_DT
      (NUMBER_BIT, NUMBER_TINYINT, NUMBER_SMALLINT, NUMBER_INTEGER, NUMBER_BIGINT,
       BINARYFLOAT_REAL, BINARYDOUBLE_FLOAT, FLOAT_FLOAT, REAL_FLOAT, NUMBER_NUMERIC,
       NUMBER_DECIMAL, NUMBER_MONEY, NUMBER_SMALLMONEY)
    VALUES (TRUNC(dbms_random.value(0,1)),
            TRUNC(dbms_random.value(1,255)),
            TRUNC(dbms_random.value(1,32767)),
            TRUNC(dbms_random.value(1,2147483647)),
            TRUNC(dbms_random.value(1,9223372036854775807)),
            dbms_random.value(1,9999999999),
            dbms_random.value(1,9999999999),
            dbms_random.value(1,9223372036854775807),
            dbms_random.value(1,999999999999999999999999999999999999999999999999999999999999),
            dbms_random.value(1,9999999999),
            dbms_random.value(1,9999999999),
            dbms_random.value(1,999999999999999),
            dbms_random.value(1,999999));
  end loop;
  commit;
END;

attunity_replicate14
In the Attunity monitoring tool, you have powerful graphical monitoring to see all operations during the replication:
attunity_replicate15
To be sure that the 15000 rows were correctly replicated, I run a select in the SQL Server destination database and I see that I have all the rows.
attunity_replicate16
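
For reference, the check is just a row count on both sides; the same statement works in Oracle (source) and in SQL Server (destination):

select count(*) from OE.NUMERIC_DT;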

Update data

Just to see the power of this interface, I run an update on the Oracle side of all rows where NUMBER_BIT equals 0:

UPDATE OE.NUMERIC_DT
SET NUMBER_BIT=TRUNC(dbms_random.value(0,1)),
NUMBER_TINYINT=TRUNC(dbms_random.value(0,255)),
NUMBER_SMALLINT=TRUNC(dbms_random.value(1,32767)),
NUMBER_INTEGER=TRUNC(dbms_random.value(1,2147483647)),
NUMBER_BIGINT=TRUNC(dbms_random.value(1,9223372036854775807)),
BINARYFLOAT_REAL=TRUNC(dbms_random.value(1,9223372036854775807)),
BINARYDOUBLE_FLOAT=dbms_random.value(1,9999999999),
FLOAT_FLOAT=dbms_random.value(1,9999999999),
REAL_FLOAT=dbms_random.value(1,999999999999999999999999999999999999999999999999999999999999),
NUMBER_NUMERIC=dbms_random.value(1,9999999999),
NUMBER_DECIMAL=dbms_random.value(1,9999999999),
NUMBER_MONEY=dbms_random.value(1,9999999999),
NUMBER_SMALLMONEY=dbms_random.value(1,999999)
WHERE NUMBER_BIT=0;
COMMIT;

You can see that I have 9986 such rows in the Oracle source:
attunity_replicate17
And the same on the SQL Server destination:
attunity_replicate18
I apply the update…
attunity_replicate19
In the Attunity monitoring, you can see in the Change Processing view that I have 1 transaction in the incoming changes, and I can see the latency. In my case, my lab is not set up for good performance…
attunity_replicate20
When the transaction is applied on the destination, you can see that the incoming changes show 0 transactions and that the Applied Changes pie now includes the update statistics.
attunity_replicate21

As you can see, the monitoring interface is very powerful!
And in live, it is even better ;-)

Delete data

The last command to test is the delete command

DELETE FROM OE.NUMERIC_DT WHERE NUMBER_BIT=0;
COMMIT; 

After this command, you can see the three commands in the monitoring interface:
attunity_replicate23
And in the destination database, you can see that the delete was applied.

Conclusion

This tool is very good for replication between heterogeneous database systems.
Creating a staging database on the SQL Server destination from an Oracle source is very simple to deploy and use, as you can see.
Finally, if you are interested in this blog and want to see more, I invite you to come to our free event “Replication road show”, where Vincent Matthey, David Barbarin and I will present the replication between Oracle and SQL Server with Attunity Replicate and complete this demo with DDL changes such as changing a data type on a column or adding a calculated column, etc.

 

Cet article Attunity Replicate: Replicate numeric data types from Oracle to SQL Server is easy est apparu en premier sur Blog dbi services.

Adaptive Plan: How much can STATISTICS COLLECTOR buffer?


The 12c adaptive plan prepares two join methods (Hash Join and Nested Loop), activates the one that has the better cost for the estimated cardinality, and computes the point of inflection in cardinality where the best cost switches to the other join method. At execution time, rows are buffered by a STATISTICS COLLECTOR operation in order to see whether the point of inflection is reached. If it isn’t, the plan continues as planned. If it is, the alternative join method is activated. But buffering has a limit…

Let’s try to find this limit empirically.

I create a table with enough rows:

SQL> create table demo1 (n constraint demo1pk primary key,x1) as select 0 , cast('x' as varchar2(4000)) from dual;
Table created.
SQL> insert --+ append
into demo1 select 1e7+rownum ,'x' from xmltable('1 to 200000');
200000 rows created.

and a second table to join:

SQL> create table demo2 (n constraint demo2pk primary key,x2) as select 0 , 'x' from dual;
Table created.

I filled the DEMO1 table in two steps. First, a CTAS with one row so that the statistics (online statistics gathering) favor nested loops. Then I inserted a lot of rows because I want to fill the Adaptive Plan buffer. DEMO2 is a small table, but I want the FULL TABLE SCAN on it to be a bit more expensive, or the hash join would always be chosen. I do that by faking the number of blocks:

SQL> exec dbms_stats.set_table_stats(user,'DEMO2',numblks=>3000,no_invalidate=>false);
PL/SQL procedure successfully completed.

If I check the execution plan, I see that NESTED LOOP is chosen because the estimated number of rows is small (artificially set to 1 row):

SQL> explain plan for
2 select max(x1),max(x2) from demo1 join demo2 using(n);
 
Explained.
 
SQL> select * from table(dbms_xplan.display(format=>'+adaptive'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 604214593
 
--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 8 | 4 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
|- * 2 | HASH JOIN | | 1 | 8 | 4 (0)| 00:00:01 |
| 3 | NESTED LOOPS | | 1 | 8 | 4 (0)| 00:00:01 |
| 4 | NESTED LOOPS | | 1 | 8 | 4 (0)| 00:00:01 |
|- 5 | STATISTICS COLLECTOR | | | | | |
| 6 | TABLE ACCESS FULL | DEMO1 | 1 | 4 | 3 (0)| 00:00:01 |
| * 7 | INDEX UNIQUE SCAN | DEMO2PK | 1 | | 0 (0)| 00:00:01 |
| 8 | TABLE ACCESS BY INDEX ROWID| DEMO2 | 1 | 4 | 1 (0)| 00:00:01 |
|- 9 | TABLE ACCESS FULL | DEMO2 | 1 | 4 | 1 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("DEMO1"."N"="DEMO2"."N")
7 - access("DEMO1"."N"="DEMO2"."N")
 
Note
-----
- this is an adaptive plan (rows marked '-' are inactive)
 

But the plan is adaptive and can switch to HASH JOIN if more rows than expected are encountered by the STATISTICS COLLECTOR.

I run it and gather run time statistics

SQL> alter session set statistics_level=all;
Session altered.
SQL> select max(x1),max(x2) from demo1 join demo2 using(n);
MAX(X1)
----------------------------------------------------------------------------------------------------------------------------------------------------------------
M
-
x
x

And here is the adaptive plan: Hash Join is activated because we actually have a lot of rows (200000):


SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID d2y436sr1cx3r, child number 0
-------------------------------------
select max(x1),max(x2) from demo1 join demo2 using(n)
 
Plan hash value: 740165205
 
----------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | Writes | OMem | 1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:02.04 | 372 | 423 | 483 | | | | |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:02.04 | 372 | 423 | 483 | | | | |
|* 2 | HASH JOIN | | 1 | 1 | 1 |00:00:02.04 | 372 | 423 | 483 | 11M| 4521K| 1262K (1)| 4096 |
| 3 | TABLE ACCESS FULL| DEMO1 | 1 | 1 | 200K|00:00:00.20 | 369 | 360 | 0 | | | | |
| 4 | TABLE ACCESS FULL| DEMO2 | 1 | 1 | 1 |00:00:00.01 | 3 | 0 | 0 | | | | |
----------------------------------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - access("DEMO1"."N"="DEMO2"."N")
7 - access("DEMO1"."N"="DEMO2"."N")
 
Note
-----
- this is an adaptive plan

The point of inflection is 814:

SQL> column tracefile new_value tracefile
SQL> alter session set tracefile_identifier='cbo_trace';
Session altered.
 
SQL> select tracefile from v$process where addr=(select paddr from v$session where sid=sys_context('userenv','sid'));
 
TRACEFILE
----------------------------------------------------------------------------------------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ora_3979_cbo_trace.trc
 
SQL> host > &tracefile.
 
SQL> exec dbms_sqldiag.dump_trace(p_sql_id=>'d2y436sr1cx3r',p_child_number=>0,p_component=>'Compiler',p_file_id=>'');
PL/SQL procedure successfully completed.
 
SQL> host grep -E "^DP" &tracefile. | tail
DP - distinct placement
DP: Found point of inflection for NLJ vs. HJ: card = 814.00
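
As a side note, V$SQL also shows that the adaptive plan of this cursor has been resolved (a quick check using the 12c IS_RESOLVED_ADAPTIVE_PLAN column):

select sql_id,child_number,is_resolved_adaptive_plan
 from v$sql where sql_id='d2y436sr1cx3r';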

Filling the buffer

So here, once 814 rows were buffered, the plan switched to HASH JOIN. I want to know how many rows can be buffered, so I want to increase the point of inflection. That's easy: if the cost of the DEMO2 full table scan is higher, then the NESTED LOOP stays cheaper than the HASH JOIN even with more rows. Let’s fake the DEMO2 statistics to show a larger table:

SQL> exec dbms_stats.set_table_stats(user,'DEMO2',numblks=>4000,no_invalidate=>false);
PL/SQL procedure successfully completed.

And let’s run that again:


SQL> select max(x1),max(x2) from demo1 join demo2 using(n);
 
MAX(X1)
----------------------------------------------------------------------------------------------------------------------------------------------------------------
M
-
x
x
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID d2y436sr1cx3r, child number 0
-------------------------------------
select max(x1),max(x2) from demo1 join demo2 using(n)
 
Plan hash value: 604214593
 
------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:01.57 | 374 | 360 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:01.57 | 374 | 360 |
| 2 | NESTED LOOPS | | 1 | 1 | 1 |00:00:01.57 | 374 | 360 |
| 3 | NESTED LOOPS | | 1 | 1 | 1 |00:00:01.57 | 373 | 360 |
| 4 | TABLE ACCESS FULL | DEMO1 | 1 | 1 | 200K|00:00:00.20 | 369 | 360 |
|* 5 | INDEX UNIQUE SCAN | DEMO2PK | 200K| 1 | 1 |00:00:00.42 | 4 | 0 |
| 6 | TABLE ACCESS BY INDEX ROWID| DEMO2 | 1 | 1 | 1 |00:00:00.01 | 1 | 0 |
------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
5 - access("DEMO1"."N"="DEMO2"."N")
 
Note
-----
- this is an adaptive plan
 
27 rows selected.
 
SQL> column tracefile new_value tracefile
SQL> alter session set tracefile_identifier='cbo_trace';
Session altered.
 
SQL> select tracefile from v$process where addr=(select paddr from v$session where sid=sys_context('userenv','sid'));
 
TRACEFILE
----------------------------------------------------------------------------------------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ora_4083_cbo_trace.trc
 
SQL> host > &tracefile.
SQL> exec dbms_sqldiag.dump_trace(p_sql_id=>'d2y436sr1cx3r',p_child_number=>0,p_component=>'Compiler',p_file_id=>'');
PL/SQL procedure successfully completed.
 
SQL> host grep -E "^DP" &tracefile. | tail
DP - distinct placement
DP: Found point of inflection for NLJ vs. HJ: card = 1086.00

Read it from the end:

  1. The inflection point is higher: 1086, which was my goal
  2. The number of rows from DEMO1 is still 200000 rows, so it’s higher than the inflection point.
  3. We expect a HASH JOIN because the number of rows is higher than the inflection point
  4. But the plan stayed with NESTED LOOP because the buffering in STATISTICS COLLECTOR never reached the inflection point

Dichotomy

By dichotomy, I’ve scripted similar tests to find the point where reaching the point of inflection does not trigger a plan switch anymore.
‘JOIN’ is the method chosen (from dbms_xplan.display_cursor after execution), ‘INFLECTION POINT’ is the one gathered from the 10053 trace, and ‘STATBLKS’ is the numblks I set for DEMO2 in order to vary the point of inflection.

JOIN INFLECTION POINT HASH_AREA_SIZE BUFFER STATBLKS LPAD
NESTED 271889 1000000 2175117 1000000 1
NESTED 135823 1000000 1086590 500000 1
NESTED 67789 1000000 542319 250000 1
NESTED 33885 1000000 271087 125000 1
NESTED 16943 1000000 135551 62500 1
NESTED 8471 1000000 67775 31250 1
NESTED 4238 1000000 33904 15625 1
NESTED 2120 1000000 16960 7813 1
NESTED 1060 1000000 8480 3907 1
HASH 532 1000000 4256 1954 1
HASH 796 1000000 6368 2930 1
HASH 928 1000000 7424 3418 1
HASH 994 1000000 7952 3662 1
HASH 1026 1000000 8208 3784 1
NESTED 1044 1000000 8352 3845 1
HASH 1036 1000000 8288 3814 1
HASH 1040 1000000 8320 3829 1
NESTED 1042 1000000 8336 3837 1
HASH 1040 1000000 8320 3833 1
HASH 1040 1000000 8320 3835 1
NESTED 1042 1000000 8336 3836 1

I’ve added some variations on hash_area_size (my bad guess was that it makes sense to buffer up to that amount, because this is at least what will go to the hash area if the hash join is finally chosen) and on the DEMO1 row size (by varying an lpad on column X).
For the moment, when the point of inflection is lower than 1041 a plan switch occurs, and when it is higher than 1042 no plan switch occurs.

But there are probably other parameters influencing this, because:

Any ideas welcome…

Update 30 mins. later

Thanks to Chris Antognini. It appears that what influences the number of rows buffered is not the actual size of the row: the number of rows to buffer is calculated from the theoretical size of the columns. Which is very bad in my opinion, given the number of applications that declare column sizes at their maximum. And I see no reason why it has to work like that: rows are of variable size, and allocating buffers based on the column definition is not a good idea. That reminds me of the JDBC fetch size, very well described by Sigrid Keydana.

Here is the limit for different sizes of the varchar2 column:

JOIN INFLECTION POINT HASH_AREA_SIZE BUFFER SIZE STATBLKS VARCHAR SIZE
NESTED 155345 65536 3572954 572641 1
NESTED 74899 65536 3894796 276092 30
NESTED 63551 65536 3940220 234258 40
NESTED 33289 65536 4061376 122802 100
NESTED 12865 65536 4142848 47453 300
NESTED 7973 65536 4162422 29409 500
NESTED 4090 65536 4179980 15079 1000
NESTED 2072 65536 4189584 7635 2000
NESTED 1388 65536 4194536 5110 3000

So it looks like Oracle allocates a buffer of a few MB, calculates how many rows can fit there given their column definition, and limits the buffering to that number of rows.
The nonsense, in my opinion, is that the size derived from the column definition is already known at parse time, when the point of inflection is determined. It makes no sense to set a point of inflection higher than the number of rows that can be buffered.
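
A quick sanity check on the last line of the table above (1388 rows buffered with a VARCHAR2(3000) column) is consistent with that:

select round(4194536/1388)        bytes_per_row, -- about 3022, close to the declared 3000 bytes plus a small overhead
       round(4194536/1024/1024,1) buffer_mb      -- about 4 MB
 from dual;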

 

Cet article Adaptive Plan: How much can STATISTICS COLLECTOR buffer? est apparu en premier sur Blog dbi services.

Enterprise Manager 13c and BI Publisher


In Enterprise Manager 12c, if you need to display the database storage, you have the following possibilities from the report menu:

bi1

With Enterprise Manager 13c, this possibility no longer exists in the report menu; you have to use the BI Publisher feature. If, during the EM13c installation phase, you chose not to configure BI Publisher, you will see the following message when trying to access the BIP page:

bi2

If you have a look at the troubleshooting information, you will understand that, unlike in the previous 12c version, it is not necessary to set up BI Publisher: in EM13c, BI Publisher is installed and automatically configured, and you cannot de-install or de-configure it as it is a base framework component of Enterprise Manager.

We can check the status of BI publisher:

oracle@vmtestoraCC13c:/home/oracle/ [oms13c] emctl status oms -bip_only
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
BI Publisher Server is Down
BI Publisher is disabled, to enable BI Publisher on this host, 
use the 'emctl config oms -enable_bip' command

So we enable BI Publisher:

oracle@vmtestoraCC13c:/home/oracle/ [oms13c] emctl config oms -enable_bip
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Enter Enterprise Manager Root (SYSMAN) Password :
BI Publisher is enabled for startup on this host with the 'emctl start oms' 
 and 'emctl start oms -bip_only' commands.
Overall result of operations: SUCCESS

Then we start only the BI Publisher:

oracle@vmtestoraCC13c:/home/oracle/ [oms13c] emctl start oms -bip_only
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Starting BI Publisher Server only.
Starting BI Publisher Server ...
WebTier Successfully Started
BI Publisher Server Successfully Started
BI Publisher Server is Up

And finally we have access to the BI publisher reports:

bi3

Watch out: do not forget that in Enterprise Manager, the included BI Publisher license only covers reporting against the Oracle Management Repository. If you use BI Publisher against other targets, you will need to be licensed for each of them.

 

Cet article Enterprise Manager 13c and BI Publisher est apparu en premier sur Blog dbi services.
