
Oracle – Suspending/Resuming an Oracle PID with “oradebug suspend/resume” or the OS kill command


Sometimes, when you kick off a huge load or transformation job, your archive destination might fill up faster than your RMAN backup job can clean it up. To avoid an archiver-stuck situation in such cases, “oradebug suspend/resume” or the UNIX kill command can be helpful, and usually both work quite well.

Before kicking off your SQL script, get all the information you need about your session.

select s.username as Username,
       s.machine as Machine,
       s.client_info as Client_Info,
       s.module as Module,
       s.action as Action,
       s.sid as SessionID,
       p.pid as ProcessID,
       p.spid as "UNIX ProcessID"
from v$session s, v$process p
where s.sid = sys_context('userenv','sid')
and s.PADDR = p.ADDR;

USERNAME     MACHINE      CLIENT_INFO  MODULE                           ACTION        SESSIONID  PROCESSID UNIX ProcessID
------------ ------------ ------------ -------------------------------- ------------ ---------- ---------- ------------------------
SYS          oel001                    sqlplus@oel001 (TNS V1-V3)                           148         69 7186

In another SQL session you can now set the Oracle PID with setorapid, or the server process ID with setospid. HINT: when the Oracle multiprocess/multithread feature is enabled, RDBMS processes are mapped to threads running in operating system processes, and the SPID identifier is not unique for RDBMS processes. When the feature is not enabled on UNIX systems, the SPID identifier is unique for RDBMS processes.

SQL> oradebug setorapid 69
Oracle pid: 69, Unix process pid: 7186, image: oracle@oel001 (TNS V1-V3)
SQL> -- oradebug setospid 7186
SQL> oradebug suspend
Statement processed.

-- Now your session is suspended and any command executed by the suspended session hangs, even SELECTs
-- SQL> select * from dual;

-- Now you can take your time and clean up the archive destination, e.g. by moving all archivelogs
-- to tape and deleting them from the archive destination afterwards: "RMAN> backup archivelog all delete all input;"
-- After the job is done, resume your operation.

SQL> oradebug resume
Statement processed.

-- Now the "select * from dual" comes back.
SQL> select * from dual;

D
-
X

In case you are running 11.2.0.2, you might see an ORA-600 after running oradebug suspend. No problem: in those cases we can achieve the same thing with the UNIX kill command.

On Linux you would run:

$ kill -sigstop $SPID
$ kill -sigcont $SPID

$ kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX

On AIX you would run:

$ kill -17 $SPID
$ kill -19 $SPID

$ kill -l
1) HUP                  14) ALRM                 27) MSG                  40) bad trap             53) bad trap
2) INT                  15) TERM                 28) WINCH                41) bad trap             54) bad trap
3) QUIT                 16) URG                  29) PWR                  42) bad trap             55) bad trap
4) ILL                  17) STOP                 30) USR1                 43) bad trap             56) bad trap
5) TRAP                 18) TSTP                 31) USR2                 44) bad trap             57) bad trap
6) ABRT                 19) CONT                 32) PROF                 45) bad trap             58) RECONFIG
7) EMT                  20) CHLD                 33) DANGER               46) bad trap             59) CPUFAIL
8) FPE                  21) TTIN                 34) VTALRM               47) bad trap             60) GRANT
9) KILL                 22) TTOU                 35) MIGRATE              48) bad trap             61) RETRACT
10) BUS                 23) IO                   36) PRE                  49) bad trap             62) SOUND
11) SEGV                24) XCPU                 37) VIRT                 50) bad trap             63) SAK
12) SYS                 25) XFSZ                 38) ALRM1                51) bad trap
13) PIPE                26) bad trap             39) WAITING              52) bad trap

Be very careful: different UNIX systems have different mappings between signal numbers and the signals themselves. Make sure you look them up first with “kill -l” to get the correct one. From my point of view, the suspend/resume feature, whether done with Oracle oradebug or the UNIX kill command, is very useful.
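One way to avoid the numbering pitfall altogether is to use symbolic signal names, which most shells and kill implementations accept on any UNIX flavour. A minimal sketch, assuming $SPID holds the UNIX process ID found earlier:

$ kill -STOP $SPID   # suspend the server process
$ kill -CONT $SPID   # resume it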

Cheers,
William

 



OTN Appreciation Day : Transportable tablespaces


Tim Hall had the idea that as many people as possible would write a small blog post about their favorite Oracle feature and we all post them on the same day. Here is my favorite feature, which I described at the “EOUC Database ACES Share Their Favorite Database Things” session: Transportable Tablespaces, which appeared in Oracle 8.1.5.

ROWID

I’ll start with a change that came between Oracle 7 and Oracle 8. The ROWID, which identifies the physical location of a row within a database (with file ID, block offset, and row directory number), changed to be the location within a tablespace only. The format did not change, but the file ID became a relative file number instead of an absolute file number.
Here is the idea:
[Figure: CaptureTTS-ROWID]

Actually, to be able to migrate without visiting each block (the ROWID is present in all blocks, all redo vectors, etc.), they kept the same number, but that number is unique only within a tablespace. The first goal was to hold more datafiles per tablespace (Oracle 8 introduced the concepts of VLDB, Very Large Databases). The limit of 255 datafiles per database became a limit of 255 datafiles per tablespace. So the numbers start the same as before but can go further.

This change was simple because anytime you want to fetch a row by its ROWID you know which table you query, so you know the tablespace. The exception is when the ROWID comes from a global index on a partitioned table; for this case Oracle 8 introduced an extended ROWID that contains additional bytes to identify the segment by its DATA_OBJECT_ID.
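If you want to look at those components yourself, the DBMS_ROWID package can decompose a ROWID. A quick sketch (the SCOTT.DEPT table is just an example):

select rowid,
       dbms_rowid.rowid_object(rowid)       as data_object_id,
       dbms_rowid.rowid_relative_fno(rowid) as relative_fno,
       dbms_rowid.rowid_block_number(rowid) as block_no,
       dbms_rowid.rowid_row_number(rowid)   as row_no
from scott.dept where rownum = 1;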

By the way, this makes tablespaces more independent of the database that contains them, because all row addressing is relative.

Locally Managed Tablespaces

Another change in 8i was Locally Managed Tablespaces. Before, the space management of tablespaces was centralized in the database dictionary. Now, it is delocalized into each tablespace: what was stored in the UET$ system table is now managed as a bitmap in the first datafile’s header.

Pluggable tablespaces

The original name of transportable tablespaces was “pluggable tablespaces”. Because they are now more self-contained, you can detach them from a database and attach them to another database, without changing the content. This means that data is moved physically, which is faster than the selects/inserts that are behind a logical export/import. There are only two things that do not come with the datafiles.

Open transactions store their undo in the database UNDO tablespace. This means that if you detach a user tablespace, you don’t have the information to roll back the ongoing transactions when you re-attach it elsewhere. For this reason, this ‘detach’ is possible only when there are no ongoing transactions: you have to put the tablespace READ ONLY.

The user object metadata is stored in the database dictionary. Without it, the datafiles are just a bunch of bytes. You need the metadata to know what is a table or an index, and which one. So, with transportable tablespaces, a logical export/import remains for the metadata only. This was done with exp/imp when TTS was introduced and is now done with Data Pump. Small metadata is moved logically. Large data is moved physically.

[Figure: CaptureTTS-EXP]
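In short, the sequence looks like this (a minimal sketch where credentials, directory and paths are placeholders):

SQL> alter tablespace users read only;

$ expdp system/manager directory=TMP_DIR transport_tablespaces=USERS
$ cp /u01/oradata/SRC/users01.dbf /u01/oradata/TGT/users01.dbf
$ impdp system/manager directory=TMP_DIR transport_datafiles=/u01/oradata/TGT/users01.dbf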

Transportable tablespaces

TTS is faster than simple Data Pump because data is moved physically by moving the datafiles. TTS is more flexible than an RMAN duplicate because you can move a subset of a database easily. Because the metadata is still transported logically, and datafiles are compatible with newer versions, TTS can be done cross-version, which makes it a nice way to migrate and upgrade. It is also used for tablespace point-in-time recovery, where you have to recover to an auxiliary instance and then transport the tablespace to the target.
TTS is also used to move data quickly from an operational database to a datawarehouse ODS.
It is also a good way to publish and share a database in read-only, on a DVD for example.

And beyond

Except for the move to Data Pump for the metadata transfer, TTS did not change a lot until 12c. In 12.1 you have full transportable export/import, which automates the operations when you want to move a whole database. This can be used to migrate from non-CDB to the multitenant architecture.

With multitenant, pluggable databases are an extension of TTS. Because user metadata comes with the PDB SYSTEM tablespace, you don’t need to export it logically anymore: you transport the whole PDB. That’s the first restriction relieved. The second restriction, the need for read only, will be relieved as well when UNDO becomes local to the PDB, and I don’t think I disclose any secret when telling that local undo has been announced for 12.2.

OTN Appreciation Day

This was my contribution to the “EOUC Database ACES Share Their Favorite Database Things” session at Oracle Open World 2016, organized by Debra Lilley. Tim Hall’s idea of an “OTN Appreciation Day” comes from that. You still have time to contribute to this day. No need for long posts – I always write a bit more than what I plan to. The “rules” for this day are described on oracle-base.com.

 


OTN Appreciation Day : ADVM


Tim Hall had the idea that as many people as possible would write a small blog post about their favorite Oracle feature and we all post them on the same day. Here is my favorite feature: ADVM – The Oracle ASM Dynamic Volume Manager.

So, what is it? The docs tell you this: “Oracle ASM Dynamic Volume Manager (Oracle ADVM) provides volume management services and a standard disk device driver interface to clients. File systems and other disk-based applications send I/O requests to Oracle ADVM volume devices as they would to other storage devices on a vendor operating system.”

The easy-to-understand version is this: it enables us to use regular file systems on top of ASM.

Does it make sense to use it? When you have ASM running on the host, or on all the hosts of a Grid Infrastructure cluster anyway, then it definitely makes sense. ASM will do all the mirroring and striping for you, so there is no need to use another technology to achieve that when you can create ADVM volumes and create file systems on top of them. Although the most common scenario is to create an ACFS file system on top of the volumes, you are actually not limited to that. Let’s do a short demo.

Let’s say we have these devices available for use by the grid user:

[root@rac1 ~] ls -la /dev/sd[b-f]*
brw-rw----. 1 root disk     8, 16 Oct 10 17:54 /dev/sdb
brw-rw----. 1 grid asmadmin 8, 17 Oct 10 18:10 /dev/sdb1
brw-rw----. 1 root disk     8, 32 Oct 10 17:54 /dev/sdc
brw-rw----. 1 grid asmadmin 8, 33 Oct 10 18:10 /dev/sdc1
brw-rw----. 1 root disk     8, 48 Oct 10 17:54 /dev/sdd
brw-rw----. 1 grid asmadmin 8, 49 Oct 10 18:10 /dev/sdd1
brw-rw----. 1 root disk     8, 64 Oct 10 17:54 /dev/sde
brw-rw----. 1 grid asmadmin 8, 65 Oct 10 18:10 /dev/sde1
brw-rw----. 1 root disk     8, 80 Oct 10 17:54 /dev/sdf
brw-rw----. 1 grid asmadmin 8, 81 Oct 10 18:10 /dev/sdf1

We want to use “/dev/sde1” for our new ADVM volume. The first step is to create an ASM diskgroup, because an ADVM volume must be placed on one:

grid@rac1:/home/grid/ [+ASM1] sqlplus / as sysasm
SQL> create diskgroup ADVM external redundancy disk '/dev/sde1';

Diskgroup created.

SQL> 

Ok, fine. How can we proceed with creating a volume? Quite easy:

grid@rac1:/home/grid/ [+ASM1] asmcmd volcreate -G ADVM -s 2g VOLADVM
ORA-15032: not all alterations performed
ORA-15221: ASM operation requires compatible.asm of 11.2.0.0.0 or higher (DBD ERROR: OCIStmtExecute)

Hm, this is quite clear when you search the documentation: ADVM is available since 11gR2, so the diskgroup’s compatible.asm must be at least 11.2:
[Image: advm_search]

Easy to fix:

grid@rac1:/home/grid/ [+ASM1] sqlplus / as sysasm

SQL> alter diskgroup ADVM set attribute 'compatible.asm'='12.1';

Diskgroup altered.

SQL> 

Let’s try again:

grid@rac1:/home/grid/ [+ASM1] asmcmd volcreate -G ADVM -s 2g VOLADVM
grid@rac1:/home/grid/ [+ASM1] 

Perfect. Now I have a volume visible to the operating system:

grid@rac1:/home/grid/ [+ASM1] ls -la /dev/asm/*advm*
brwxrwx---. 1 root asmadmin 252, 115201 Oct 10 18:20 /dev/asm/voladvm-225

On top of this volume we can now create file systems. The natural one would be ACFS:

[root@rac1 ~] mkfs.acfs /dev/asm/voladvm-225
mkfs.acfs: version                   = 12.1.0.2.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/voladvm-225
mkfs.acfs: volume size               = 2147483648  (   2.00 GB )
mkfs.acfs: Format complete.
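The volume can then be mounted like any other device. A minimal sketch, assuming the ACFS drivers delivered with Grid Infrastructure are loaded and the mount point exists:

[root@rac1 ~] mkdir -p /u01/app/grid/acfsmounts/voladvm
[root@rac1 ~] mount -t acfs /dev/asm/voladvm-225 /u01/app/grid/acfsmounts/voladvm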

But in fact every other file system the operating system supports is possible, too:

[root@rac1 ~] mkfs.xfs /dev/asm/voladvm-225
meta-data=/dev/asm/voladvm-225   isize=256    agcount=4, agsize=131072 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Quite cool, isn’t it? Whatever file system your operating system supports can be put on ASM disk groups …

 


OTN Appreciation Day – tnsping


Tim Hall had the idea that as many people as possible would write a small blog post about their favorite Oracle feature and we all post them on the same day. I do have a lot of favorite Oracle tools, and the one I choose today is: tnsping

tnsping tells you whether your connect string can be resolved and whether the listener the connect string points to is available. In the end, it displays an estimate of the round trip time (in milliseconds) it takes to reach the Oracle Net service.

All in all, tnsping is very easy to use, and that’s why I love it: it is not as overloaded as e.g. crsctl. In fact, tnsping knows only two parameters, <address> and optionally <count>, as shown in the following example.

Usage: tnsping <address> [<count>]

For the option list of tnsping, a few lines are enough. I don’t need to scroll down several pages like e.g. for emctl. emctl is another one, besides crsctl, where you can spend a lifetime only reading the manual. No, I picked tnsping this time because I like the short option list.

Here we go … now I run one tnsping without and one with count.

oracle@oel001:/home/oracle/ [OCM121] tnsping RMAN
TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 11-OCT-2016 14:28:14
Copyright (c) 1997, 2014, Oracle.  All rights reserved.
Used parameter files:
/u00/app/oracle/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = oel001)(PORT = 1521))) 
(CONNECT_DATA = (SERVICE_NAME = OCM121)))
OK (0 msec)

oracle@oel001:/home/oracle/ [OCM121] tnsping RMAN 5
TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 11-OCT-2016 14:28:20
Copyright (c) 1997, 2014, Oracle.  All rights reserved.
Used parameter files:
/u00/app/oracle/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = oel001)(PORT = 1521))) 
(CONNECT_DATA = (SERVICE_NAME = OCM121)))
OK (0 msec)
OK (10 msec)
OK (0 msec)
OK (0 msec)
OK (0 msec)

But … wait a second … my RMAN connect string points to host oel001, but it should point to oel002. Let’s take a look in $ORACLE_HOME/network/admin/tnsnames.ora

oracle@oel001:/u00/app/oracle/ [OCM121] cat /u00/app/oracle/product/12.1.0.2/network/admin/tnsnames.ora
RMAN =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oel002)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = OCM121)
    )
  )

It looks correct. So what is going on here? There are several possible explanations for this issue.

1.) You might have set the TNS_ADMIN environment variable, which points to a totally different directory
2.) Or your sqlnet.ora might point to an LDAP server first, which resolves the name
3.) Or a totally different tnsnames.ora file is taken into account – but which one?
4.) Or something totally different, e.g. a corrupt nscd, symlinks …

For quite a long time now, Oracle has not been searching first in $ORACLE_HOME/network/admin/tnsnames.ora to resolve the name. The tnsnames.ora search order is the following:

1.) $HOME/.tnsnames.ora    # yes, it looks for a hidden file in your home directory first
2.) /etc/tnsnames.ora    # then, a global tnsnames.ora in the /etc directory
3.) $ORACLE_HOME/network/admin/tnsnames.ora    # and last but not least, it looks in $ORACLE_HOME/network/admin
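A quick way to see which of these candidates actually exist on a host is a small shell loop (a sketch, assuming a POSIX shell; TNS_ADMIN is included in case it is set):

$ for f in ${TNS_ADMIN:+$TNS_ADMIN/tnsnames.ora} $HOME/.tnsnames.ora /etc/tnsnames.ora $ORACLE_HOME/network/admin/tnsnames.ora
> do
>   [ -f "$f" ] && echo "found: $f"
> done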

To prove it, simply run a strace on your tnsping command and take a look at the trace file.

$ strace -o /tmp/tnsping.trc -f tnsping RMAN
$ cat /tmp/tnsping.trc | grep tnsnames

21919 access("/home/oracle/.tnsnames.ora", F_OK) = 0
21919 access("/etc/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
21919 access("/u00/app/oracle/product/12.1.0.2/network/admin/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
21919 stat("/home/oracle/.tnsnames.ora", {st_mode=S_IFREG|0644, st_size=173, ...}) = 0
21919 open("/home/oracle/.tnsnames.ora", O_RDONLY) = 3

Here we go … in my case, the “/home/oracle/.tnsnames.ora” was taken into account. Let’s take a look.
Indeed, I have found an entry there.

oracle@oel001:/home/oracle/ [OCM121] cat /home/oracle/.tnsnames.ora
RMAN =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oel001)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = OCM121)
    )
  )

Have fun with tnsping.

Cheers,
William

 

 

 


How to destroy your performance: PL/SQL vs SQL


Disclaimer: This is in no way a recommendation to avoid PL/SQL. This post just describes a case I faced at a customer, with a specific implementation in PL/SQL that the customer (and I) believed was the most efficient way of doing it in PL/SQL. It was a very good reminder for me to check the documentation and to verify that what I believe a feature does is really what it actually does. When I was doing PL/SQL full time in one of my previous jobs, I used the feature heavily without really thinking about what happened in the background. Always keep learning …

Let’s start by building the test case. The issue was on 12.1.0.2 on Linux, but I think this will be reproducible on any release (although, never be sure :) ).

SQL> create table t1 as select * from dba_objects;
SQL> insert into t1 select * from t1;
SQL> /
SQL> /
SQL> /
SQL> /
SQL> /
SQL> commit;
SQL> select count(*) from t1;

  COUNT(*)
----------
   5565632

SQL> create table t2 as select object_id from t1 where mod(object_id,33)=0;
SQL> select count(*) from t2;

  COUNT(*)
----------
    168896

These are my two tables used for the test: t1 contains around 5.5 million rows and t2 contains 168,896 rows. Coming to the issue: there is a procedure which does this:

create or replace procedure test_update
is
  cursor c1 is select object_id from t2;
  type tab is table of t2.object_id%type index by pls_integer;
  ltab tab;
begin
  open c1;
  fetch c1 bulk collect into ltab;
  close c1;
  forall indx in 1..ltab.count
    update t1 set owner = 'AAA' where object_id = ltab(indx);
end test_update;
/

The procedure uses “bulk collect” and “forall” to fetch the keys from t2 in a first step and then uses these keys to update t1 in a second step. That seemed pretty well done: no row-by-row loop that compares with the list and updates on a match. I really couldn’t see an issue here. But when you execute this procedure you’ll wait for ages (at least if you are in a VM running on a notebook and not on super fast hardware).

The situation at the customer was that I was told that the update, when executed as plain SQL in sqlplus, takes less than a second. And really, when you execute this on the test case from above:

SQL> update t1 set owner = 'AAA' where object_id in ( select object_id from t2 );

168896 rows updated.

Elapsed: 00:00:05.30
SQL> rollback;

Rollback complete.

Elapsed: 00:00:02.44
SQL> update t1 set owner = 'AAA' where object_id in ( select object_id from t2 );

168896 rows updated.

Elapsed: 00:00:06.34
SQL> rollback;

Rollback complete.

Elapsed: 00:00:02.70
SQL>

It is quite fast (between 5 and 6 seconds in my environment). So why is the PL/SQL version so much slower? Aren’t “bulk collect” and “forall” the right methods to boost performance? Let’s take a look at the execution plan for the plain SQL version:

----------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name     | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | A-Rows |    A-Time     | Buffers | Reads  |  OMem |  1Mem |  O/1/M|
----------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT      |          |      1 |       |       | 24303 (100)|          |       0 |00:00:04.52    |     259K|   9325 |       |       |       |
|   1 |  UPDATE               | T1       |      1 |       |       |            |          |       0 |00:00:04.52    |     259K|   9325 |       |       |       |
|*  2 |   HASH JOIN           |          |      1 |    48 |  4416 | 24303   (1)| 00:00:01 |     168K|00:00:01.76    |   86719 |   9325 |  2293K|  2293K|  1/0/0|
|   3 |    VIEW               | VW_NSO_1 |      1 |   161K|  2044K|    72   (2)| 00:00:01 |    2639 |00:00:00.05    |     261 |     78 |       |       |       |
|   4 |     SORT UNIQUE       |          |      1 |     1 |  2044K|            |          |    2639 |00:00:00.04    |     261 |     78 |   142K|   142K|  1/0/0|
|   5 |      TABLE ACCESS FULL| T2       |      1 |   161K|  2044K|    72   (2)| 00:00:01 |     168K|00:00:00.01    |     261 |     78 |       |       |       |
|   6 |    TABLE ACCESS FULL  | T1       |      1 |  5700K|   429M| 23453   (1)| 00:00:01 |    5566K|00:00:05.88    |   86458 |   9247 |       |       |       |
----------------------------------------------------------------------------------------------------------------------------------------------------------------

It is doing a hash join as expected. What about the PL/SQL version? It is doing this:

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------
SQL_ID  4hh65t1u4basp, child number 0
-------------------------------------
UPDATE T1 SET OWNER = 'AAA' WHERE OBJECT_ID = :B1

Plan hash value: 2927627013

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | UPDATE STATEMENT   |      |       |       | 23459 (100)|          |
|   1 |  UPDATE            | T1   |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |   951 | 75129 | 23459   (1)| 00:00:01 |
---------------------------------------------------------------------------

Uh! Why that? This is what I wasn’t aware of: I always thought that when you use “forall” to send PL/SQL’s SQL to the SQL engine, Oracle would rewrite the statement to expand the list in the where clause, or do other optimizations. But this does not happen. The only optimization that takes place when you use “forall” is that the statements are sent in batches to the SQL engine rather than one after another. What happens here is that you execute 168,896 full table scans, because the same statement (with another bind variable value) is executed 168,896 times. That can’t really be fast compared to the SQL version.

Of course you could rewrite the procedure to do the same as the SQL (see the sketch below), but this is not the point here. The point is: when you think that what you have implemented in PL/SQL is the same as the SQL you compare it to, better think twice, and even better read the f* manuals, even when you are sure what a feature really does :)
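For completeness, here is a minimal sketch of such a rewrite (the procedure name is made up for this example): one set-based statement lets the SQL engine do a single hash join instead of one full scan per key.

create or replace procedure test_update_set_based
is
begin
  -- one statement, one execution: the optimizer can hash join t1 and t2
  update t1 set owner = 'AAA'
  where object_id in (select object_id from t2);
end test_update_set_based;
/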

 


Datawarehouse ODS load is fast and easy in Enterprise Edition


In a previous post, a tribute to transportable tablespaces (TTS), I said that TTS is also used to move data quickly from an operational database to a datawarehouse ODS. For sure, you don’t transport directly from the production database, because TTS requires that the tablespace be read only. But you can transport from a snapshot standby. Both features (transportable tablespaces and Data Guard snapshot standby) are free in Enterprise Edition without any option. Here is an example to show that it’s not difficult to automate.

I have a configuration with the primary database “db1a”

DGMGRL> show configuration
 
Configuration - db1
 
Protection Mode: MaxPerformance
Members:
db1a - Primary database
db1b - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 56 seconds ago)
 
DGMGRL> show database db1b
 
Database - db1b
 
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 0 Byte/s
Real Time Query: ON
Instance(s):
db1
 
Database Status:
SUCCESS

I have a few tables in the tablespace USERS, and this is what I want to transport to the ODS database:

SQL> select segment_name,segment_type,tablespace_name from user_segments;
 
SEGMENT_NAME SEGMENT_TY TABLESPACE
------------ ---------- ----------
DEPT TABLE USERS
EMP TABLE USERS
PK_DEPT INDEX USERS
PK_EMP INDEX USERS
SALGRADE TABLE USERS

Snapshot standby

With Data Guard it is easy to open the standby database temporarily. Just convert it to a snapshot standby with a simple command:


DGMGRL> connect system/oracle@//db1b
DGMGRL> convert database db1b to snapshot standby;
Converting database "db1b" to a Snapshot Standby database, please wait...
Database "db1b" converted successfully

Export

Here you could start to do some extraction/load, but it’s better to reduce the window where the standby is not in sync. The only thing we will do is export the tablespace in the fastest way: TTS.

First, we put the USERS tablespace in read only:

SQL> connect system/oracle@//db1b
Connected.
 
SQL> alter tablespace users read only;
Tablespace altered.

and create a directory to export metadata:

SQL> create directory TMP_DIR as '/tmp';
Directory created.

Then the export is easy:

SQL> host expdp system/oracle@db1b transport_tablespaces=USERS directory=TMP_DIR
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/********@db1b transport_tablespaces=USERS directory=TMP_DIR
 
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
/tmp/expdat.dmp
******************************************************************************
Datafiles required for transportable tablespace USERS:
/u02/oradata/db1/users01.dbf
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Wed Oct 19 21:03:36 2016 elapsed 0 00:00:52

I have the metadata in /tmp/expdat.dmp and the data in /u02/oradata/db1/users01.dbf. I copy this datafile directly to its destination for the ODS database:

[oracle@VM118 ~]$ cp /u02/oradata/db1/users01.dbf /u02/oradata/ODS/users01.dbf

This is a physical copy, which is the fastest data movement we can do.

I’m ready to import it into my ODS database, but I can already re-sync the standby database because I extracted everything I wanted.

Re-sync the physical standby

DGMGRL> convert database db1b to physical standby;
Converting database "db1b" to a Physical Standby database, please wait...
Operation requires shut down of instance "db1" on database "db1b"
Shutting down instance "db1"...
Connected to "db1B"
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires start up of instance "db1" on database "db1b"
Starting instance "db1"...
ORACLE instance started.
Database mounted.
Connected to "db1B"
Continuing to convert database "db1b" ...
Database "db1b" converted successfully
DGMGRL>

The duration depends on the time to flash back the changes (and we made no changes here, as we only exported) and the time to apply the redo stream generated since the convert to snapshot standby (whose duration has been kept to a minimum).

This whole process can be automated. We did that at several customers and it works well. No need to change anything unless you have new tablespaces.

Import

Here is the import into the ODS database, where I rename the USERS tablespace to ODS_USERS:

SQL> host impdp system/oracle transport_datafiles=/u02/oradata/ODS/users01.dbf directory=TMP_DIR remap_tablespace=USERS:ODS_USERS
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** transport_datafiles=/u02/oradata/ODS/users01.dbf directory=TMP_DIR remap_tablespace=USERS:ODS_USERS
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" completed with 3 error(s) at Wed Oct 19 21:06:18 2016 elapsed 0 00:00:10

Everything is there: you have all your data in ODS_USERS. You can have other data/code in this database; only the ODS_USERS tablespace has to be dropped to be re-imported. You can have your staging tables here and even permanent tables.
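As said above, the whole cycle can be scripted. Here is a minimal sketch chaining the commands shown in this post; credentials, paths and names are placeholders to adapt, and error handling is omitted:

#!/bin/bash
# 1. open the standby as a snapshot standby
dgmgrl sys/oracle@db1a "convert database db1b to snapshot standby"
# 2. freeze and export the tablespace
echo "alter tablespace users read only;" | sqlplus -s system/oracle@//db1b
expdp system/oracle@db1b transport_tablespaces=USERS directory=TMP_DIR
# 3. physical copy of the datafile to the ODS destination
cp /u02/oradata/db1/users01.dbf /u02/oradata/ODS/users01.dbf
# 4. re-sync the standby as early as possible
dgmgrl sys/oracle@db1a "convert database db1b to physical standby"
# 5. drop the previous ODS_USERS and plug the new one in
echo "drop tablespace ODS_USERS including contents;" | sqlplus -s system/oracle
impdp system/oracle transport_datafiles=/u02/oradata/ODS/users01.dbf directory=TMP_DIR remap_tablespace=USERS:ODS_USERS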

12c pluggable databases

In 12.1 it is even easier, because the multitenant architecture gives the possibility to transport pluggable databases in one command, through file copy or database links. It is even faster because the metadata is transported physically with the PDB SYSTEM tablespace. I said multitenant architecture here, and didn’t mention any option: the multitenant option is needed only if you want multiple PDBs managed by the same instance. But if you want the ODS database to be an exact copy of the operational database, then you don’t need any option to unplug/plug.

In 12.1 you need to put the source in read only, so you still need a snapshot standby. And from my test, there’s no problem to convert it back to a physical standby after a PDB has been unplugged. In the next release, we may not need a standby anymore, because it has been announced that PDBs can be cloned online.
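A minimal sketch of the 12.1 unplug/plug variant (PDB and path names are placeholders):

-- on the source CDB (the snapshot standby):
alter pluggable database PDB1 close immediate;
alter pluggable database PDB1 unplug into '/tmp/pdb1.xml';
-- on the ODS CDB:
create pluggable database ODS_PDB1 using '/tmp/pdb1.xml'
  copy file_name_convert=('/u02/oradata/db1','/u02/oradata/ODS');
alter pluggable database ODS_PDB1 open;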

I’ll explain the multitenant features available without any option (in the current and next 12c release) at the Oracle Geneva office on the 23rd of November:

[Image: CaptureBreakfastNov23]
Do not hesitate to register by e-mail.

 


2 x ODA X6-2S + Dbvisit Standby: Easy DR in SE


What’s common between Standard Edition, simplicity, reliability, high performance, and an affordable price?
Dbvisit standby can be an answer, because it brings Disaster Recovery to Standard Edition without adding complexity.
ODA Lite (the new X6-2S and 2M) is another answer, because you can run Standard Edition on these new appliances.
So it makes sense to bring them together, which is what I did recently at a customer.

I’ll not cover the reasons and the results here, as this will be done later. I’m just sharing a few tips to set up the following configuration: two ODA X6-2S running 12c Standard Edition databases, protected by Dbvisit standby across two datacenters.

ODA repository

ODA X6 comes with a new interface to provision databases from the command line (odacli) or the GUI (https://oda:7093/mgmt/index.html). It’s a layer over the tools we usually use: it calls dbca behind the scenes. What it does in addition is log what has been done in a Java DB repository.

What is done is logged in /opt/oracle/dcs/log/dcs-agent.log:
2016-10-13 15:33:59,816 DEBUG [Database Creation] [] c.o.d.a.u.CommonUtils: run: cmd= '[su, -, oracle, -c, export PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin:/u01/app/oracle/product/12.1.0.2/dbhome_2/bin; export ORACLE_SID=MYNEWDB; export ORACLE_BASE=/u01/app/oracle; export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2; export PWD=******** /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/dbca -createDatabase -silent -gdbName MYNEWDB.das.ch -sid MYNEWDB -sysPassword ******* -systemPassword ******* -dbsnmpPassword ******* -asmSysPassword ******* -storageType ASM -datafileJarLocation /u01/app/oracle/product/12.1.0.2/dbhome_2/assistants/dbca/templates -emConfiguration DBEXPRESS -databaseConfType SINGLE -createAsContainerDatabase false -characterSet WE8MSWIN1252 -nationalCharacterSet AL16UTF16 -databaseType MULTIPURPOSE -responseFile NO_VALUE -templateName seed_noncdb_se2016-10-13_15-33-59.0709.dbc -initParams "db_recovery_file_dest_size=174080,db_unique_name=MYNEWDB" -recoveryAreaDestination /u03/app/oracle/fast_recovery_area/]'

Do I like it? Actually I don’t, for two reasons. The first reason is that I don’t want to learn a new syntax every year. I have known CREATE DATABASE for decades and DBCA for years. I just prefer to use those.
The second reason is that if you want to add a layer on something, you need to provide at least the same functionality and the same quality as the tool you call behind the scenes. If you provide a command to create a database, then you must provide a command to delete it, even if the creation has failed. I created a database whose creation failed. The reason was that I had changed the listener port, but the template explicitly sets local_listener to port 1521. Fortunately it calls DBCA and I know where the logs are. So my ODA repository has a database in failed status. The problem is that you can’t drop it (it doesn’t exist for DBCA) and you cannot re-create it (it exists for ODA). I’m not a developer, but when I write code I try to handle exceptions. At least they should implement a ‘force’ mode where errors are ignored when deleting something that does not exist.

So if you have the same problem, here is what I did:

  • Open a SR in the hope that they understand there’s something to fix in their code without asking me all log files to upload
  • create a database with same name, directly with DBCA, then drop it with ODACLI

Finally, my workaround worked, and Oracle Support came up with two solutions: create the database with another name, or re-image the ODA!

But, when it doesn’t fail, the creation is very fast: from templates with datafiles, and datafiles on those very fast NVMe SSDs.

Create the standby

I don’t like this additional layer, but I have the feeling that it’s better when the ODA repository knows about my databases. The standby database is created with the Dbvisit interface (I’m talking about a real user-friendly interface there, where errors are handled and you even have the possibility to resume a creation that failed). How to make it go into the ODA repository?

I see 3 possibilities.

The odacli has a “–register-database” option to register an already created database. But that probably does too much, because it was designed to register databases created on previous ODAs with oakcli.

The odacli has an “–instanceonly” option, which is there to register a standby database that will be created later, with an RMAN duplicate for example. Again this does too much, as it creates an instance. I tried it and didn’t have the patience to make it work. When ODACLI encounters a problem, it doesn’t explain what’s wrong, it just shows the command line help.

Finally, what I did was create a database with ODACLI and then drop it (outside of ODACLI). This is ugly, but it’s the only way I got something where I understand exactly what is done. This is where I encountered the issue above, so my workflow was actually: create from ODACLI -> fails -> drop from DBCA -> re-create from ODACLI -> success -> drop

I didn’t drop it from DBCA because I wanted to keep the entry in ORATAB. I did it from RMAN:

RMAN> startup force dba mount
RMAN> drop database including backups noprompt;

Then, there was no problem creating the standby from the Dbvisit GUI.

Create a filesystem

I created the database directly in ASM. I don’t see any reason to create an ACFS volume for it, especially for Standard Edition where you cannot use ACFS snapshots. It’s just a performance overhead (and with those no-latency disks, any CPU overhead counts) and a risk of removing a datafile, as they would be exposed in a filesystem for no reason.

However, Dbvisit needs a folder to store the archived logs that are shipped to the standby. I could create a folder in the local filesystem, but I preferred to create an ACFS filesystem for it.
I did it from ODACLI:


odacli create-dbstorage --dataSize 200 -n DBVISIT -r ACFS

This creates a 200GB filesystem mounted as /u02/app/oracle/oradata/DBVISIT/

Who starts the database?

Dbvisit comes with a scheduler that can start the databases in the required mode. But on ODA the resources are managed by Grid Infrastructure. So after creating the standby database you must modify its mount mode:

srvctl modify database -d MYNEWDB -startoption mount

Don’t forget to change the mount modes after a switchover or failover.

This can be scripted with something like: srvctl modify database -db $db -startoption $(/opt/dbvisit/standby/dbv_oraStartStop status $db| awk '/^Regular Database/{print "OPEN"}/^Standby Database/{print "MOUNT"}')
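Wrapped into a small script that loops over all instances in oratab, it could look like this (a sketch only; it relies on the dbv_oraStartStop output format shown above):

#!/bin/bash
for db in $(grep -Ev '^#|^$' /etc/oratab | cut -d: -f1); do
  opt=$(/opt/dbvisit/standby/dbv_oraStartStop status $db | awk '/^Regular Database/{print "OPEN"}/^Standby Database/{print "MOUNT"}')
  # only touch databases that Dbvisit knows about
  [ -n "$opt" ] && srvctl modify database -db $db -startoption $opt
done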

Keep it simple and test it

ODA is simple if you do what it has been designed for: run the database versions that are certified (currently 11.2.0.4 and 12.1.0.2) and don’t try to customize the configuration. Always test the switchover, so that you can rely on the protection. It’s easy with Dbvisit standby, either from the GUI or the command line. And be sure that your network can keep up with the redo rate. Again, this is easy to check from the GUI. Here is an example when testing the migration with Data Pump import:
[Image: DbvisitTransferLogSize]

From public prices, and before any discount, you can get two ODA X6-2S plus perpetual licenses for Oracle Database Standard Edition and Dbvisit standby for less than 90 KUSD.
If you need more storage you can double the capacity for about an additional 10 KUSD per ODA.
And if you think that an ODA may need a DBA sometimes, have a look at our SLAs: you get a reliable and affordable system on your premises to store and process your data.

 


Dbvisit replicate when you flashback the target


I had a quick question today about Dbvisit replicate as a solution to feed an Operational Data Store. The need is to run some end-of-month jobs on data in a specific state. The best way is to stop the APPLY at that fixed state for the duration of the job: then there is no need for flashback query and no risk of ORA-1555. And what if we know this state only afterwards? Easy if the target is in Enterprise Edition, running with FLASHBACK ON. Then how do we continue the APPLY? It’s easy with Dbvisit replicate. Here is an example.

One solution would be to create a restore point before doing the flashback, in order to be able to go forward again to the state before the flashback and re-start the APPLY where it was stopped. But if we do that, we need to open the database READ ONLY to run the job on it.
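That first alternative would look roughly like this (a sketch; the timestamp comes from the demo below and FLASHBACK must be ON):

SQL> create restore point BEFORE_FB guarantee flashback database;
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to timestamp timestamp'2016-11-04 09:00:00';
SQL> alter database open read only;
-- run the end-of-month job here, then roll forward to the present:
SQL> startup force mount
SQL> recover database;
SQL> alter database open;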
The other solution is to be able to re-start the apply at the point where the database has been flashed back. Do you think we have to get the right SCN, reset some configuration with it, etc? No. There’s something very simple and practical with Dbvisit replicate:

  • The MINE configuration is stored in the source database
  • The APPLY configuration is stored in the destination database

When you install Dbvisit for the first time, you may wonder why there is not only one repository. But this architecture makes it easy, as the metadata is stored with the related data.

In my case, if I flashback the target database, the APPLY repository is flashed back as well, so the APPLY continues exactly at the right point.

But just in case, let’s test it.
I use my old #repattack VMs, which explains why the dbvrep version is old and why the time is in the New Zealand time zone. Here is a running replication:

/ Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 30 days
MINE IS running. Currently at plog 392 and SCN 6648759 (11/04/2016 09:14:08).
APPLY IS running. Currently at plog 392 and SCN 6648742 (11/04/2016 09:14:06).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS: 100% Mine:210/210 Unrecov:0/0 Applied:210/210 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.ADDRESSES: 100% Mine:210/210 Unrecov:0/0 Applied:210/210 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.CARD_DETAILS: 100% Mine:195/195 Unrecov:0/0 Applied:195/195 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.ORDER_ITEMS: 99% Mine:1553/1553 Unrecov:0/0 Applied:1550/1550 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.ORDERS: 99% Mine:1245/1245 Unrecov:0/0 Applied:1243/1243 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.INVENTORIES: 99% Mine:1523/1523 Unrecov:0/0 Applied:1521/1521 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.LOGON: 99% Mine:1493/1493 Unrecov:0/0 Applied:1491/1491 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
--------------------------------------------------------------------------------------------------------------------------------------------

I want to flash back to a quarter of an hour before, at 09:00, and because I’m on Oracle XE I’ll restore rather than flashback:

[oracle@target ~]$ rman target /
 
Recovery Manager: Release 11.2.0.2.0 - Production on Fri Nov 4 09:10:40 2016
 
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
 
connected to target database: XE (DBID=2736105154)
 
RMAN> startup force mount;
 
Oracle instance started
database mounted
...

RMAN> restore database until time "timestamp'2016-11-04 09:00:00'";
 
Starting restore at 04-NOV-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=10 device type=DISK

RMAN> recover database until time "timestamp'2016-11-04 09:00:00'";
 
Starting recover at 04-NOV-16
using channel ORA_DISK_1
 
starting media recovery
...
media recovery complete, elapsed time: 00:02:17
Finished recover at 04-NOV-16
 
RMAN> sql "alter database open resetlogs";
sql statement: alter database open resetlogs

So the APPLY stopped because the destination was down, but the MINE continues:

- Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 30 days
MINE IS running. Currently at plog 392 and SCN 6654476 (11/04/2016 09:27:12).
Could not connect to APPLY process. Process not started or connection refused.
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS: ---% Mine:300/300 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.ADDRESSES: ---% Mine:300/300 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.CARD_DETAILS: ---% Mine:277/277 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.ORDER_ITEMS: ---% Mine:2178/2178 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.ORDERS: ---% Mine:1735/1735 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.INVENTORIES: ---% Mine:2129/2129 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.LOGON: ---% Mine:2109/2109 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
--------------------------------------------------------------------------------------------------------------------------------------------

My database has been flashed back to its state at 09:00 and I can do what I want. Then I just re-start the APPLY:

[oracle@target dbvrep_XE]$ sh dbvrep_XE-run-target.sh
Initializing......done
DDC loaded from database (352 variables).
Dbvisit Replicate version 2.7.06.4485
Copyright (C) Dbvisit Software Limited. All rights reserved.
DDC file /u01/app/oracle/dbvrep_XE/dbvrep_XE-APPLY.ddc loaded.
Starting process APPLY...started

and there is nothing to do manually: it restarts from where it needs to, applying the changes from 09:00

/ Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 30 days
MINE IS running. Currently at plog 392 and SCN 6655130 (11/04/2016 09:28:33).
APPLY IS running. Currently at plog 392 and SCN 6645037 (11/04/2016 09:05:53).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS: 46% Mine:312/312 Unrecov:0/0 Applied:145/12 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.ADDRESSES: 46% Mine:312/312 Unrecov:0/0 Applied:145/12 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.CARD_DETAILS: 46% Mine:289/289 Unrecov:0/0 Applied:135/11 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.ORDER_ITEMS: 49% Mine:2247/2247 Unrecov:0/0 Applied:1105/80 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.ORDERS: 49% Mine:1793/1793 Unrecov:0/0 Applied:890/60 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.INVENTORIES: 49% Mine:2199/2199 Unrecov:0/0 Applied:1083/77 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.LOGON: 48% Mine:2183/2183 Unrecov:0/0 Applied:1055/75 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
--------------------------------------------------------------------------------------------------------------------------------------------

This is quick as it has already been mined. The APPLY quickly resolved the gap:

- Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 30 days
MINE IS running. Currently at plog 392 and SCN 6655589 (11/04/2016 09:29:36).
APPLY IS running. Currently at plog 392 and SCN 6655567 (11/04/2016 09:29:34).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS: 99% Mine:319/319 Unrecov:0/0 Applied:317/184 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.ADDRESSES: 99% Mine:319/319 Unrecov:0/0 Applied:317/184 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.CARD_DETAILS: 99% Mine:296/296 Unrecov:0/0 Applied:294/170 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.ORDER_ITEMS: 99% Mine:2291/2291 Unrecov:0/0 Applied:2289/1264 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.ORDERS: 100% Mine:1828/1828 Unrecov:0/0 Applied:1828/998 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.INVENTORIES: 100% Mine:2238/2238 Unrecov:0/0 Applied:2238/1232 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.LOGON: 99% Mine:2235/2235 Unrecov:0/0 Applied:2232/1252 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
--------------------------------------------------------------------------------------------------------------------------------------------

Things are simple when they are well designed. The APPLY has to synchronize with the destination, so they put the APPLY repository into the destination. No problem if the replication went further than the point in time you need to freeze. Any solution is possible as long as the whole database is consistent: flashback database, PITR, snapshots, etc.

 



Oracle Public Cloud: create a database from command line


You love the Oracle Public Cloud with its simple web interface? Great. But what if you want to automate a database creation from the command line?
Easy with curl and the REST API.

JSON

First, you need to create a JSON file with all the information for your service.
Everything is documented: https://apicatalog.oraclecloud.com/ui/views/apicollection/oracle-public/database/1.1/serviceinstances

$ cat createrequestbody.json
 
{
"description": "Mon Nov 7 21:03:39 CET 2016",
"edition": "EE_HP",
"level": "PAAS",
"serviceName": "CDB122",
"shape": "oc3",
"subscriptionType": "MONTHLY",
"version": "12.2.0.1",
"vmPublicKeyText": "ssh-dss AAAAB3NzaC1kc3MAAACBAMrw5Au0hHP1BT/W3gcSg+Fwq36LdfzroubjS6g8RSvcaeltk1O/uQwJV73MCsBDgs4PaAuDekZTW5w6kN8ESd6r6BGLm/sETHNiRzOWWap3ds18iiaLJWcMbKRlZUWLdfhGemryWZaQIFrSNkfE5YkFz4V4m5d4EwKpLzIthKh3AAAAFQDtjTsaF7nydePPJPDqYERu8QlcMQAAAIBjl8NxEmE7nud7b4xuLkuJKnwlf2urHrOZGcQapNUZAjuehe6/8VhPB4GebZb52GlyYOuELDP6e9PXyFRxTfTPff22JE5tPM8vTjCmFEKhBspl43YurJxwvDtvgTNKk5Zp5MBXMDjQ8KNHXlpnRrfh45acHI8gs0KlH51+e7j+6AAAAIA/Q8rVC4g+MBepJGKed2ar0JzralZo7Q8vsZfQ889Y3wkaBJl2/SRaaW1JNmkB20eZIEbRkh9e/ex07ryKg65dgUzU4/2dE2CSxplG0vSf/xp7hYr/bJzR1SZXMKbAdZ2wg+SGaTlKWAAS9xhvKGw1jVWdVgacYJOPl343bMKkuw==",
"parameters": [
{
"type": "db",
"usableStorage": "15",
"adminPassword": "P4ss#ord",
"sid": "CDB122",
"pdbName": "PDB1",
"failoverDatabase": "no",
"backupDestination": "NONE"
}
] }

You can see that you have exactly the same information as from the GUI.

Create Instance

Then, you run the following curl command (with the cacert.pem certificate in the current directory):

$ curl --include --request POST --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" --header "Content-Type:application/json" --data @createrequestbody.json https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct
 
HTTP/2 202
server: Oracle-Application-Server-11g
location: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/status/create/job/2738110
content-language: en
access-control-allow-origin: *
access-control-allow-headers: Content-Type, api_key, Authorization
retry-after: 60
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBi63mCP3n315RvWByd0003Ri0004Zg
x-oracle-dms-ecid: 005GBi63mCP3n315RvWByd0003Ri0004Zg
service-uri: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:03:59 GMT

Here “opcoct” is my identity domain id. You find it in the X-ID-TENANT-NAME header and in the URL.
The myuser@oracle.com:P4ss#ord part is the user and password in this domain.

From the GUI you can see that the creation has started:

[Image: CaptureDBAASREST01]

DBaaS instance information

Here is the information for the database service


$ curl --include --request GET --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
 
HTTP/2 200
server: Oracle-Application-Server-11g
content-language: en
service-uri: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
access-control-allow-headers: Content-Type, api_key, Authorization
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBiK7U4I3z015Rvl3id00071a0000yo
x-oracle-dms-ecid: 005GBiK7U4I3z015Rvl3id00071a0000yo
access-control-allow-origin: *
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:07:52 GMT
content-length: 1244
 
{
"service_name": "CDB122",
"version": "12.2.0.1",
"status": "In Progress",
"description": "Mon Nov 7 21:03:39 CET 2016",
"identity_domain": "opcoct",
"creation_time": "2016-11-07T20:03:59.524+0000",
"last_modified_time": "2016-11-07T20:03:59.505+0000",
"created_by": "myuser@oracle.com",
"sm_plugin_version": "16.4.3-541",
"service_uri": "https:\/\/dbcs.emea.oraclecloud.com:443\/paas\/service\/dbcs\/api\/v1.1\/instances\/opcoct\/CDB122",
"num_nodes": 1,
"level": "PAAS",
"edition": "EE_HP",
"shape": "oc3",
"subscriptionType": "MONTHLY",
"creation_job_id": "2738110",
"num_ip_reservations": 1,
"backup_destination": "NONE",
"failover_database": false,
"rac_database": false,
"sid": "CDB122",
"pdbName": "PDB1",
"demoPdb": "",
"listenerPort": 1521,
"timezone": "UTC",
"is_clone": false,
"clone_supported_version": "16.3.1",
"active_jobs": [
{
"active_job_operation": "start-db-compute-resources",
"active_job_id": 2738113,
"active_job_messages": [] },
{
"active_job_operation": "create-dbaas-service",
"active_job_id": 2738110,
"active_job_messages": [] }
],
"compute_site_name": "EM003_Z19",
"jaas_instances_using_service": ""
}

The status is ‘in progress’. Let’s look at the compute service.

Compute instance information

From the compute service, you can see whether an IP address has already been assigned:


$ curl --include --request GET --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122/servers
 
HTTP/2 200
server: Oracle-Application-Server-11g
content-language: en
access-control-allow-headers: Content-Type, api_key, Authorization
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBiMizXo3z015Rvl3id00071a0004p_
x-oracle-dms-ecid: 005GBiMizXo3z015Rvl3id00071a0004p_
access-control-allow-origin: *
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:08:35 GMT
content-length: 430
 
[{
"status": "Running",
"creation_job_id": "2738110",
"creation_time": "2016-11-07T20:03:59.524+0000",
"created_by": "myuser@oracle.com",
"shape": "oc3",
"sid": "CDB122",
"pdbName": "PDB1",
"listenerPort": 1521,
"connect_descriptor": "CDB122:1521\/PDB1",
"connect_descriptor_with_public_ip": "null:1521\/PDB1",
"initialPrimary": true,
"storageAllocated": 142336,
"reservedIP": "",
"hostname": "CDB122"
}]

No IP address yet. I have the job id (2738110), so I can check it later.

Job information


$ curl --include --request GET --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct/status/create/job/2738110
 
HTTP/2 202
server: Oracle-Application-Server-11g
location: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/status/create/job/2738110
content-language: en
access-control-allow-origin: *
access-control-allow-headers: Content-Type, api_key, Authorization
retry-after: 60
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBiOeMbz3n315RvWByd0003Ri00048d
x-oracle-dms-ecid: 005GBiOeMbz3n315RvWByd0003Ri00048d
service-uri: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:09:08 GMT
 
{
"service_name": "CDB122",
"version": "12.2.0.1",
"status": "In Progress",
"description": "Mon Nov 7 21:03:39 CET 2016",
"identity_domain": "opcoct",
"creation_time": "2016-11-07T20:03:59.524+0000",
"last_modified_time": "2016-11-07T20:03:59.505+0000",
"created_by": "myuser@oracle.com",
"sm_plugin_version": "16.4.3-541",
"service_uri": "https:\/\/dbcs.emea.oraclecloud.com:443\/paas\/service\/dbcs\/api\/v1.1\/instances\/opcoct\/CDB122",
"message": ["Starting Compute resources..."],
"job_start_date": "Mon Nov 07 20:04:01 GMT 2016",
"job_status": "InProgress",
"job_operation": "create-dbaas-service",
"job_request_params": {
"edition": "EE_HP",
"vmPublicKeyText": "ssh-dss AAAAB3NzaC1kc3MAAACBAMrw5Au0hHP1BT/W3gcSg+Fwq36LdfzroubjS6g8RSvcaeltk1O/uQwJV73MCsBDgs4PaAuDekZTW5w6kN8ESd6r6BGLm/sETHNiRzOWWap3ds18iiaLJWcMbKRlZUWLdfhGemryWZaQIFrSNkfE5YkFz4V4m5d4EwKpLzIthKh3AAAAFQDtjTsaF7nydePPJPDqYERu8QlcMQAAAIBjl8NxEmE7nud7b4xuLkuJKnwlf2urHrOZGcQapNUZAjuehe6/8VhPB4GebZb52GlyYOuELDP6e9PXyFRxTfTPff22JE5tPM8vTjCmFEKhBspl43YurJxwvDtvgTNKk5Zp5MBXMDjQ8KNHXlpnRrfh45acHI8gs0KlH51+e7j+6AAAAIA/Q8rVC4g+MBepJGKed2ar0JzralZo7Q8vsZfQ889Y3wkaBJl2/SRaaW1JNmkB20eZIEbRkh9e/ex07ryKg65dgUzU4/2dE2CSxplG0vSf/xp7hYr/bJzR1SZXMKbAdZ2wg+SGaTlKWAAS9xhvKGw1jVWdVgacYJOPl343bMKkuw==",
"count": "2",
"provisioningTimeout": "180",
"subscriptionType": "MONTHLY",
"createStorageContainerIfMissing": "false",
"dbConsolePort": "1158",
"listenerPort": "1521",
"serviceName": "CDB122",
"namespace": "dbaas",
"version": "12.2.0.1",
"timezone": "UTC",
"pdbName": "PDB1",
"level": "PAAS",
"tenant": "opcoct",
"serviceInstance": "CDB122",
"description": "Mon Nov 7 21:03:39 CET 2016",
"failoverDatabase": "false",
"emExpressPort": "5500",
"ncharset": "AL16UTF16",
"trial": "false",
"sid": "CDB122",
"noRollback": "false",
"operationName": "create-dbaas-service",
"goldenGate": "false",
"backupDestination": "NONE",
"ibkup": "false",
"charset": "AL32UTF8",
"serviceVersion": "12.2.0.1",
"shape": "oc3",
"identity_domain_id": "opcoct",
"serviceType": "dbaas",
"usableStorage": "15",
"disasterRecovery": "false",
"server_base_uri": "https:\/\/dbcs.emea.oraclecloud.com:443\/paas\/service\/dbcs\/",
"computeSiteName": "EM003_Z19",
"isRac": "false"
}
}

REST Endpoint

Here, my test is on the EMEA datacenter, which is why the URL starts with https://dbcs.emea.oraclecloud.com
If you don't know your endpoint, you can check on My Cloud Services, where you have all the information:
 

The post Oracle Public Cloud: create a database from command line first appeared on the dbi services blog.

Observing DML restarts caused by invalidations


Usually, cursor invalidations do not interrupt the current execution of the cursor. You can even drop a table while a query on it is running, and the query may end successfully. However, some invalidations throw internal errors that are caught, and the cursor execution restarts transparently. I've reproduced one here to show the symptoms: each restart gets a different SQL_EXEC_ID, so the restarts are not accounted within the same SQL Monitoring report.

I had that question because I encountered a SQL Monitoring report where the 'executions' count of the INSERT operation was more than one. I wondered if it could be a case of DML restart, or if a DML restart would be accounted under a different SQL_EXEC_ID. I got a quick answer from a very reliable source.

However, for a better understanding, I'm now reproducing a case of DML restart to show the symptoms in V$SQL and SQL Monitoring.

I need a table with some rows:

SQL> create table DEMO as select rownum n from xmltable('1 to 3');
Table created.

Initially my problem was with an insert into a GTT but now I’m reproducing the case with a partitioned table:

SQL> create table DEMOGTT (n number) partition by hash(n);
Table created.

For internal reasons, when a cursor is invalidated after it has acquired a lock on a partition, a DML restart occurs. If you have a doubt, "oerr ora 14403" explains it.
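
For reference, this is what it returns. The message text below is quoted from memory, so check the output on your own installation:

$ oerr ora 14403
14403, 00000, "cursor invalidation detected after getting DML partition lock"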

The easiest way to invalidate the cursor while it is running is to call a function that does the invalidation for each row. And as you will see, it can restart indefinitely, so I set a timeout timestamp 30 seconds ahead.


SQL> column limit new_value limit
SQL> select to_char(sysdate+30/24/60/60,'yyyymmddhh24miss') limit from dual;
 
LIMIT
--------------
20161124212154
 
SQL> create function DEMOSLOW(n number) return number as
2 pragma autonomous_transaction;
3 begin
4 if sysdate < to_date('&limit','yyyymmddhh24miss')
5 then
6 dbms_stats.gather_table_stats(user,'DEMOGTT',no_invalidate=>false);
7 end if;
8 return n;
9 end;
10 /
old 4: if sysdate < to_date('&limit','yyyymmddhh24miss')
new 4: if sysdate < to_date('20161124212154','yyyymmddhh24miss')
 
Function created.

Time to run the insert, calling the function for each row:

SQL> alter session set statistics_level=all;
Session altered.
 
SQL> set timing on time on
21:21:24 SQL> insert /*+ monitor */ into DEMOGTT select n from DEMO where n=DEMOSLOW(n);
3 rows created.
 
Elapsed: 00:00:29.30
21:21:54 SQL>

This took 30 seconds. Without the timeout, it would never stop.

The execution plan for the last run shows only one 'Starts':

21:21:54 SQL> select * from table(dbms_xplan.display_cursor('','','allstats last'));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 3fk1hyq9vnuzx, child number 0
-------------------------------------
insert /*+ monitor */ into DEMOGTT select n from DEMO where
n=DEMOSLOW(n)
 
Plan hash value: 4000794843
 
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
----------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:00.01 | 138 |
| 1 | LOAD TABLE CONVENTIONAL | DEMOGTT | 1 | | 0 |00:00:00.01 | 138 |
|* 2 | TABLE ACCESS FULL | DEMO | 1 | 1 | 3 |00:00:00.01 | 9 |
----------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter("N"="DEMOSLOW"("N"))

This is proof that DML restarts are not accounted within the same execution: the 'last' statistics cover only the last restart.

V$SQL accounts only one execution but thousands of invalidations:

21:21:54 SQL> select executions,invalidations from v$sql where sql_id='3fk1hyq9vnuzx';
 
EXECUTIONS INVALIDATIONS
---------- -------------
1 1571

ASH shows the different SQL_EXEC_ID:

21:21:54 SQL> select sample_time,sql_id,sql_exec_id from v$active_session_history where sql_id='3fk1hyq9vnuzx' and sample_time>sysdate-10/24/60/60 order by sample_time desc;
 
SAMPLE_TIME SQL_ID SQL_EXEC_ID
--------------------------------------------------------------------------- ------------- -----------
24-NOV-16 09.21.53.773 PM 3fk1hyq9vnuzx 16778774
24-NOV-16 09.21.51.773 PM 3fk1hyq9vnuzx
24-NOV-16 09.21.48.773 PM 3fk1hyq9vnuzx 16778501
24-NOV-16 09.21.46.773 PM 3fk1hyq9vnuzx 16778396
24-NOV-16 09.21.45.773 PM 3fk1hyq9vnuzx 16778341

and SQL Monitoring sees different executions:

21:21:54 SQL> select sql_id,sql_exec_id,status,last_refresh_time,plan_operation,starts from v$sql_plan_monitor where sql_id='3fk1hyq9vnuzx' and plan_line_id=0 order by last_refresh_time desc fetch first 10 rows only;
 
SQL_ID SQL_EXEC_ID STATUS LAST_REFRESH_TI PLAN_OPERATION STARTS
------------- ----------- ------------------- --------------- ------------------------------ ----------
3fk1hyq9vnuzx 16778762 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778765 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778767 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778766 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778771 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778769 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778764 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778770 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778763 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778768 DONE 24-nov 21:21:53 INSERT STATEMENT 1
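
To get an idea of how many restarts happened, one can count the distinct SQL_EXEC_ID values. A minimal sketch (V$SQL_PLAN_MONITOR keeps only a limited history, so this is a lower bound; the 1571 invalidations above give the real order of magnitude):

SQL> select count(distinct sql_exec_id) from v$sql_plan_monitor
     where sql_id='3fk1hyq9vnuzx' and plan_line_id=0;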

So those are the symptoms of DML restart. And my initial problem is still there:

  • This test case does not reproduce the issue on a GTT
  • My issue on the GTT had only one SQL_EXEC_ID with multiple ‘starts’ of the plan operations.

Any comments greatly appreciated :)

 

The post Observing DML restarts caused by invalidations first appeared on the dbi services blog.

#DOAG2016


That's a very late 'wrap-up' post about the DOAG Conference.
Well, not so late because it was only last week, but time flies and other wrap-up posts are already out.

12.2

It started very well when Oracle released the full 12.2 on the DBaaS cloud a few days before the conference. Like a lot of speakers, my slides and demos were full of 12.2 features. Of course, as beta testers we could ask for permission to talk about it, but it's not easy to replace every "12.2" with "the next generation of Oracle database", especially in titles that are limited to 60 characters ;)
So 12cR2 came to the Oracle Public Cloud, easily available as a free trial, the full documentation became public, and my demos were still working… this was perfect.
The 12.2 beta program started one year ago. At the time of abstract submission, all speakers believed that 12.2 would be out by DOAG conference time. This was just in time.

Book

Exactly one year ago, at DOAG 2015, Vit Spinka came to me with this crazy idea to write a book on Multitenant. It was not a perfect fit in my planning, as I was starting to prepare for the OCM 12c, but it was a really good idea. The book was mostly finished in June, but we had to wait for the public availability of 12.2. Now Oracle Press has the authorization to publish it, so it will probably be available soon. I had a lot of questions about the date at DOAG. We have no date yet, but I hope to see it before Christmas.

Sharing

Of course those international events are very good for networking. And yes, the DOAG conference is international: lots of speakers come from other continents. DOAG organized a nice dinner with speakers coming from abroad and/or being Oakies and/or ACEDs. Very nice conversations, and unconferences on different subjects: #cloud, #DBADev, PL/SQL… and those conversations will continue in two weeks in Birmingham.

Among the conversations, I was very happy to talk with Tobias Deml, whom I asked to explain the #DOAGNextGen community to me. Oracle DBA is a job where you don't find a lot of young people. We always have that image of senior bearded guys in front of a text screen. It's good to have experienced people, but this must be mixed with some fresh blood. I really like what the DOAG Next Generation Community is doing. They have a lot of energy and ideas.

Tokyo

Speaking in the Tokyo room is great, especially when doing live demos: you don't have to worry about your mintty fonts being too small.

I was talking about the Multitenant features that can help in avoiding too many roundtrips between Dev and Ops.

The future is about automation, fast provisioning, fast refresh and clear separation of roles. Call it #cloud if you like.

dbi services

DOAG is a big event for dbi services: a lot of us participate, most as speakers, and we have a booth to meet our customers and anyone with questions about IT infrastructure. Of course, a bit about Cloud too.

2017

I'm looking forward to DOAGDB in Düsseldorf (30-31 May 2017) – the call for papers is currently open. And the next conference is on 21-24 November 2017.

In the meantime, if you want to learn more about 12.2, plan your upgrades and master the multitenant architecture, our 12c New Features workshop will be on 12cR2 in 2017 – first date in February: http://www.dbi-services.com/trainings/oracle-12c-new-features-workshop/

 

The post #DOAG2016 first appeared on the dbi services blog.

Histograms on character strings between 11.2.0.3 and 11.2.0.4


In short, when you have statistics gathered in 11.2.0.3 but the query is now running with the 11.2.0.4 optimizer, you may get wrong cardinality estimations from histograms, leading to sub-optimal plans.

I had a table with a flag that has two values, 'Y' and 'N', with an even distribution between them. It's a good case for frequency histograms. I had frequency histograms and expected an exact cardinality estimation for a WHERE FLAG='Y' predicate. But that was not the case: a very low estimation, leading to a very bad execution plan. Because the cardinality estimation was far from what we have in the histograms, and far from what we would have without histograms, I checked a 10053 trace and this is what I found:
Using prorated density: 0.004557 of col #97 as selectivity of out-of-range/non-existent value pred

This is the linear decay applied to out-of-range predicates (I'll show that next Wednesday at UKOUG TECH16), but I don't expect an out-of-range condition when I provide one of the two values that are in the frequency histogram.

Here is my testcase:


SQL> create table DEMO ( flag char);
Table created.
 
SQL> select count(*) from DEMO where flag='Y';
 
COUNT(*)
----------
0
 
SQL> insert into DEMO select 'Y' from xmltable('1 to 100000');
100000 rows created.
 
SQL> insert into DEMO select 'N' from xmltable('1 to 1000');
1000 rows created.
 
SQL> select flag,to_char(ascii(flag),'XX'),count(*) from DEMO group by flag;
 
FLAG TO_CHAR(ASCII(FLAG),'XX') COUNT(*)
---- ------------------------- --------
Y 59 100000
N 4E 1000

100000 rows with ‘Y’ and 1000 rows with ‘N’.

11.2.0.3

I gather statistics in 11.2.0.3


SQL> alter session set optimizer_features_enable='11.2.0.3';
Session altered.
 
SQL> exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>false);
PL/SQL procedure successfully completed.

And run a query looking for rows where flag is ‘Y’


SQL> explain plan for select count(*) from DEMO where flag='Y';
Explained.
 
SQL> select * from table(dbms_xplan.display(format=>'basic +rows'));
PLAN_TABLE_OUTPUT
Plan hash value: 2180342005
 
-------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | TABLE ACCESS FULL| DEMO | 100K|
-------------------------------------------

The estimate is perfect thanks to the histograms.

11.2.0.4

Now the CBO is running in 11.2.0.4, but statistics have not been gathered since 11.2.0.3:


SQL> alter session set optimizer_features_enable='11.2.0.4';
Session altered.
 
SQL> explain plan for select count(*) from DEMO where flag='Y';
Explained.
 
SQL> select * from table(dbms_xplan.display(format=>'basic +rows'));
PLAN_TABLE_OUTPUT
Plan hash value: 2180342005
 
-------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | TABLE ACCESS FULL| DEMO | 500 |
-------------------------------------------

Bad estimation here. Where do those 500 come from?
It's not from the histogram, which knows that 100K rows have the value 'Y'.
Without histograms, the estimation would be based on 2 distinct values among 101000 rows, so that would be 50500.
CBO trace says:

Access path analysis for DEMO
***************************************
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for DEMO[DEMO] SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
 
kkecdn: Single Table Predicate:"DEMO"."FLAG"='Y'
Column (#1):
NewDensity:0.004950, OldDensity:0.000005 BktCnt:101000.000000, PopBktCnt:101000.000000, PopValCnt:2, NDV:2
Column (#1): FLAG(CHARACTER)
AvgLen: 2 NDV: 2 Nulls: 0 Density: 0.004950
Histogram: Freq #Bkts: 2 UncompBkts: 101000 EndPtVals: 2 ActualVal: no
Using density: 0.004950 of col #1 as selectivity of pred having unreasonably low value
Table: DEMO Alias: DEMO
Card: Original: 101000.000000 Rounded: 500 Computed: 500.000000 Non Adjusted: 500.000000

Predicate having unreasonably low value…
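
Where do the 500 and the NewDensity of 0.004950 come from? My interpretation (an assumption, not an official formula) is that NewDensity is half the selectivity of the least frequent value, and the 500 is simply that density multiplied by the number of rows. A quick check:

SQL> select 0.5*1000/101000 newdensity, round(0.004950*101000) estimated_rows from dual;

NEWDENSITY ESTIMATED_ROWS
---------- --------------
.004950495            500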

Time to look at the histograms.

ENDPOINT_VALUE

You can get the histogram entries. For the CHAR datatype (my case here), the endpoint value is computed from the ASCII representation of the string, converted to a decimal number, or at least from its first characters.

SQL> select endpoint_number,endpoint_value
2 ,to_char(endpoint_value,rpad('FM',65,'X'))
3 ,utl_raw.cast_to_varchar2(hextoraw(to_char(endpoint_value,rpad('FM',65,'X'))))
4 from user_histograms h where table_name='DEMO';
ENDPOINT_NUMBER ENDPOINT_VALUE TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X')) UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW(TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X'))))
--------------- ------------------------------------ ----------------------------------------- ------------------------------------------------------------------------------
1000 405650737316592000000000000000000000 4E20202020203A7BB119D5F6000000 N :{��
101000 462766002760475000000000000000000000 59202020202034D998FF0B5AE00000 Y 4٘�
Z�

Looks good. I can see the 'N' and 'Y' values here. But obviously the CBO sees them as different from 'Y'.

Let’s gather statistics again (I’m now with 11.2.0.4 CBO):

SQL> exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>false);
PL/SQL procedure successfully completed.

And look at the difference:

SQL> select endpoint_number,endpoint_value
2 ,to_char(endpoint_value,rpad('FM',65,'X'))
3 ,utl_raw.cast_to_varchar2(hextoraw(to_char(endpoint_value,rpad('FM',65,'X'))))
4 from user_histograms h where table_name='DEMO';
ENDPOINT_NUMBER ENDPOINT_VALUE TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X')) UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW(TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X'))))
--------------- ------------------------------------ ----------------------------------------- ------------------------------------------------------------------------------
1000 404999154965717000000000000000000000 4E0000000000181F436C7BBB200000 NCl{�
101000 462114420409600000000000000000000000 590000000000127D2B51B120000000 Y}+Q�

In 11.2.0.3 the characters ('Y' is ASCII 0x59) were padded with spaces (ASCII 0x20). But in 11.2.0.4 they are padded with nulls (ASCII 0x00).
This is the reason why it was considered different: the ENDPOINT_VALUE for 'Y' calculated by the 11.2.0.4 version of the CBO is different from the one calculated by the 11.2.0.3 dbms_stats.

Now, the estimation is good again:


SQL> explain plan for select count(*) from DEMO where flag='Y';
Explained.
 
SQL> select * from table(dbms_xplan.display(format=>'basic +rows'));
PLAN_TABLE_OUTPUT
Plan hash value: 2180342005
 
-------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | TABLE ACCESS FULL| DEMO | 100K|
-------------------------------------------

char_value

The ENDPOINT_VALUE calculation has been described by Jonathan Lewis in Cost-Based Oracle Fundamentals, and the script to encode a character string is downloadable from http://www.jlcomp.demon.co.uk/cbo_book/book_cbo_scripts.zip. I use it (as an inline function, because I'm running this in 12c) to show the values calculated from 'Y' and 'N', as well as the values calculated from the same strings right-padded with spaces:


SQL> with function char_value(i_vc varchar2) return number
-- function coming from Jonathan Lewis char_value.sql http://www.jlcomp.demon.co.uk/cbo_book/book_cbo_scripts.zip
is
m_vc varchar2(15) := substr(rpad(i_vc,15,chr(0)),1,15);
m_n number := 0;
begin
for i in 1..15 loop
m_n := m_n + power(256,15-i) * ascii(substr(m_vc,i,1));
end loop;
m_n := round(m_n, -21);
return m_n;
end;
select flag,char_value(flag),cast(flag as char(32)),char_value(cast(flag as char(32))),count(*) from DEMO.DEMO group by flag
/
 
FLAG CHAR_VALUE(FLAG) CAST(FLAGASCHAR(32)) CHAR_VALUE(CAST(FLAGASCHAR(32))) COUNT(*)
---- ------------------------------------ -------------------------------- ------------------------------------ --------
Y 462114420409600000000000000000000000 Y 462766002760475000000000000000000000 100000
N 404999154965717000000000000000000000 N 405650737316592000000000000000000000 1000

Comparing the calculated values with the ENDPOINT_VALUE I had above, we can see that the function has not changed, but the input string is no longer padded with spaces.

So what?

Be careful when upgrading from 11.2.0.3 or earlier to a higher version: you should re-gather the statistics (but then read http://blog.dbi-services.com/a-migration-pitfall-with-all-column-size-auto/ as there's another pitfall). Don't set optimizer_features_enable to a lower version, as its scope is very wide and many things may behave differently. It is always better to take the time to focus on the features you want to disable. And never give up: the weirdest situations can be troubleshooted. Oracle Database offers lots of ways to understand what happens, from the debugging tools provided with the software to the excellent literature about it.

 

The post Histograms on character strings between 11.2.0.3 and 11.2.0.4 first appeared on the dbi services blog.

UKOUG 2016 DAY 3


Today at UKOUG 2016, the Cloud has won against the sun :=)

The first session I attended this morning was given by Kamil Stawiarski from the ORA-600 company: Securing the database against unauthorized attacks, but the real title was Oracle Hacking Session.

The session was amazing, as usual with Kamil: no slides, only technical demos :=))

He first showed us that after creating a standard user in an Oracle database with the classical privileges (connect, resource and create any index), and using a simple function he created, the standard user could obtain the DBA privilege.

The second demonstration was about DirtyCow (a Linux kernel vulnerability that allows local non-privileged code to gain root access). He showed us how easy it is to get connected as root under Linux.

In the last demo he showed us how it is possible to read the data of a particular table directly from the data file, using only one of his C programs and the data_object_id of the table.

He finished his session by asking why so much money is wasted on protecting data, and whether it would not be more intelligent to spend less money and write correct applications with correct privileges.

The second session was more corporate: Oracle Database 12cR2, the overview, by Dominic Giles from Oracle. He told us about Oracle 12cR2 on the cloud: what is available now (Exadata Express Cloud Service and Database Cloud Service) and what is coming soon (Exadata Cloud Machine).

Then he talked about the new features of Oracle database 12cR2:

Performance: the main idea for 12cR2 is to go faster. He gave us some examples: a high compression rate for indexes (subject to a licensing option, of course), which might result in I/O improvements and significant space savings.

Security: Oracle 12cR2 introduces online encryption of existing data files. There is also the possibility of full encryption of internal database structures such as SYSTEM, SYSAUX or UNDO. Also a Database Vault simulation mode, which defines and tests security protection profiles through the application lifecycle.

Developers: AL32UTF8 is now the default character set for databases. Object names for tables or columns can now be 128 bytes long.

Manageability: the PDB limit per container increases from 252 to 4096. PDBs are optimized for RAC. And, interestingly, it will be possible to perform PDB hot clone, PDB refresh and PDB relocate without downtime.

Availability: a lot of improvements for RAC: RAC reader nodes, ASM flex disk groups, and the Autonomous Health Framework (which identifies issues and notifies with corrective actions). For Active Data Guard: diagnostics and SQL tuning advisors will be available on the standby side, there is no user disconnection on failover, and there is high-speed block comparison between primary and standby databases. And finally, there will be the possibility to use SSL redo transport to be more secure.

Finally, I attended the last session of the day, one of the most animated, essentially because of the speaker's talent and of course the subject: Upgrade to the next generation of Oracle Database; live and uncensored!

He told us about the different ways to upgrade to 12.1.0.2 or 12.2.0.1, covering subjects like extended support, direct upgrade and DBUA.

A new upgrade script is available: preupgrade.jar executes checks in the source environment, generates detailed recommendations, generates fixup scripts and, last but not least, is rerunnable :=))

He showed us that the upgrade process is faster and has less downtime, and that we can run the database upgrade in parallel (by using catctl.pl with the -n 8 option, for example). It deals with both non-CDBs and CDBs. During his upgrade from 11.2.0.4 to 12.1.0.2 he interrupted the upgrade process by typing CTRL-C… and he proved that the upgrade process is rerunnable by running catctl.pl with the -R option :=)

He is not a great fan of DBUA, for multiple reasons: for him it is hard to debug, the parallel option defaults to cpu_count, the progress bar is unpredictable and sometimes you wait a long time without knowing what's happening in the source database, and you have to be careful with datapatch in the 12.1 version. For me, the only advantage is the automatic timezone upgrade when using DBUA.

Well, this was another exciting day at UKOUG 2016; tomorrow is the last day, with other interesting sessions and an OEM round table :=)

 

The post UKOUG 2016 DAY 3 first appeared on the dbi services blog.

UKOUG 2016 DAY 4


Today is the last day at UKOUG in Birmingham. The first session I attended this morning was presented by Julian Dyke, about installing and upgrading Oracle 12c Release 2 Grid Infrastructure and RAC.

He had the possibility to test the installation and upgrade phases during 5 days at Oracle last spring. The following tests were done:

Single instance: install 12.2.0.1, create a database with DBCA, upgrade 12.1.0.2 to 12.2.0.1 with DBUA.

RAC: install 12.2.0.1 Grid Infrastructure, install 12.2.0.1 RDBMS software, create ASM disk groups (ASMCA), create a 12.2.0.1 RAC database (DBCA), upgrade 12.1.0.2 Grid Infrastructure to 12.2.0.1 (gridSetup.sh), upgrade the 12.1.0.2 RAC database to 12.2.0.1.

He showed us the main screenshots describing the installation phases and told us that they did not encounter many problems during their installation or upgrade phases. To upgrade the Grid Infrastructure, it is important to run the CVU connected as the grid user, for example:

runcluvfy.sh -src_crshome=/u00/app/12.1.0.2 -dest_crshome=/u00/app/12.2.0.1
 -dest_version=12.2.0.1 -fixupnoexec

Afterwards, you have the possibility to resolve any detected issues by using the generated fixup script.

In his opinion, DBUA is sufficiently robust to use for most upgrades, especially when the upgrade concerns non-critical databases, databases with fast recovery times, or databases on virtual machines. By the way, he also mentioned that Oracle still recommends using scripts for upgrades of large or business-critical databases.

He encountered some issues concerning the upgrade phase for the Grid Infrastructure, in particular with the memory_target parameter setting: because the ASM and GIMR instances use more memory than in 12.1.0.2, he received the classical ORA-00845 error message. He also encountered problems with invalid objects and had to extend the root file system of his virtual machine.

Then I attended Franck Pachot's session about statistics gathering, best practices and the Statistics Advisor:

His session described his findings and recommendations about how to gather statistics, with a lot of technical demonstrations done on the Cloud. A lot of cases were shown: for example volatile tables, preferences for partitioned tables, and index statistics gathering.

He showed us the Oracle 12c Release 2 Statistics Advisor, which might be a useful tool; I will check if it is available in Enterprise Manager 13.2.

He finished by giving us his own recommendations: use the automatic job for most of the tables, customize the statistics gathering for volatile tables, gather statistics for tables that you load and, importantly, customize the maintenance window for the statistics gathering job.

Finally, I wanted to attend the OEM round table, but unfortunately the session was canceled :=((

Well, this was a very interesting week, with a lot of exchanges and shared experiences with other Oracle DBAs. I hope to come back to UKOUG next year!

 

The post UKOUG 2016 DAY 4 first appeared on the dbi services blog.

Upgrade to Enterprise Manager 13.2


I will describe how to upgrade Enterprise Manager 13.1.0.0 to the new 13.2.0.0 version.

At the beginning we have to ensure that we applied the latest PSU on the repository database.

It is mandatory to apply the following patch before upgrading to Enterprise Manager Cloud Control 13.2.0.0:

DATABASE PATCH SET UPDATE 12.1.0.2.160419

To install the patch, you have to use Oracle Interim Patch Installer version 12.2.0.1.8.

Once you have downloaded the patch and unzipped it, you can check for potential conflicts:

oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] opatch prereq 
CheckConflictAgainstOHWithDetail 
-phBaseDir /oracle/u01/app/oracle/software/22899531/22806133
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed. 
OPatch succeeded.

 

oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] opatch prereq 
CheckConflictAgainstOHWithDetail -phBaseDir 
/oracle/u01/app/oracle/software/22899531/23006522
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

And finally, you check for the system space available:

oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] $ORACLE_HOME/OPatch/opatch prereq 
CheckSystemSpace -phBaseFile /oracle/u01/app/oracle/software/patch_list_dbhome.txt
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
Invoking prereq "checksystemspace"
Prereq "checkSystemSpace" passed.
OPatch succeeded.

Then, once the prerequisites are ok, you stop the repository database and run the classical opatch apply command from the directory where you unzipped the PSU.

You finally check the Oracle inventory:

oracle@vmtestoraCC13c:/home/oracle/ [EMREP13C] opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
--------------------------------------------------------------------------------
Local Machine Information:
ARU platform id: 226
ARU platform description:: Linux x86-64
Installed Top-level Products (1):
Oracle Database 12c                                      12.1.0.2.0
There are 1 products installed in this Oracle Home. 
Interim patches (1) : 
Patch  22806133     : applied on Tue Nov 22 11:19:55 CET 2016
Unique Patch ID:  19983161
Patch description:  "DATABASE BUNDLE PATCH: 12.1.0.2.160419 (22806133)"

Secondly, you have to disable the optimizer_adaptive_features parameter in the repository database:

SQL> alter system set optimizer_adaptive_features=false scope=both;

System altered.

Then we have to ensure that the tables in the Management Repository do not have any snapshots created:

SQL> select master , log_table from all_mview_logs where log_owner='SYSMAN';
no rows selected

We verify if any login triggers are set:

SQL> SELECT COUNT (trigger_name) FROM sys.dba_triggers 
WHERE TRIGGERING_EVENT LIKE 'LOGON%' AND status='ENABLED';
SQL> SELECT trigger_name FROM sys.dba_triggers 
WHERE TRIGGERING_EVENT LIKE 'LOGON%' AND status='ENABLED';

We verify if any logoff triggers are set:

SQL> SELECT COUNT (trigger_name) FROM sys.dba_triggers
WHERE TRIGGERING_EVENT LIKE 'LOGOFF%' AND status='ENABLED';
SQL> SELECT trigger_name FROM sys.dba_triggers
WHERE TRIGGERING_EVENT LIKE 'LOGOFF%' AND status='ENABLED';

If we find a trigger enabled, we disable it:

SQL> SELECT trigger_name, owner from sys.dba_triggers 
WHERE TRIGGERING_EVENT LIKE 'LOGOFF%' AND status='ENABLED';
 
TRIGGER_NAME         OWNER
GSMLOGOFF            GSMADMIN_INTERNAL
 
SQL> alter trigger gsmadmin_internal.gsmlogoff disable;
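
Keep a note of it: once the upgrade is finished, you will presumably want to re-enable it. A sketch:

SQL> alter trigger gsmadmin_internal.gsmlogoff enable;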

Then you have to copy the EMKEY to the repository database:

oracle@vmCC13c:/u03/app/oracle/oms13c/bin/ [oms13c] emctl config emkey 
-copy_to_repos -sysman_pwd dbi05manager
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
The EMKey has been copied to the Management Repository. 
This operation will cause the EMKey to become unsecure.
After the required operation has been completed, 
secure the EMKey by running "emctl config emkey -remove_from_repos".
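
As the output reminds us, once the upgrade is complete the EMKey should be secured again. A sketch, using the same sysman password as above and the new OMS home:

oracle@vmCC13c:/u03/app/oracle/oms13cr2/bin/ [oms13c] emctl config emkey -remove_from_repos -sysman_pwd dbi05manager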

Concerning the OMS shutdown, we follow this procedure:

We stop the JVMD and ADP engines explicitly:

oracle@vmCC13c:/u03/app/oracle/oms13c/bin/ [oms13c] emctl extended oms jvmd stop -all
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
No verb found to be registered with emctl extensibles framework
 
oracle@vmCC13c:/u03/app/oracle/oms13c/bin/ [oms13c] emctl extended oms adp stop -all
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
No verb found to be registered with emctl extensibles framework

Then we shut down the OMS completely:

oracle@v333:/home/oracle/ [oms13c] emctl stop oms -all
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Stopping Oracle Management Server...
WebTier Successfully Stopped
Oracle Management Server Successfully Stopped
Oracle Management Server is Down
JVMD Engine is Down
Stopping BI Publisher Server...
BI Publisher Server Successfully Stopped
AdminServer Successfully Stopped
BI Publisher Server is Down

It is mandatory to stop the management agent in order to avoid errors during the migration phase:

oracle@vmtestoraCC13c:/home/oracle/ [agent13c] emctl stop agent
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Stopping agent ... stopped.

WATCH OUT: before running the installer, unset the PERLLIB and PERL5LIB variables (if, for example, you have those environment variables defined for your oracle user); otherwise you will get the following error:

/opt/oracle/Middleware13c/perl/lib/5.10.0/x86_64-linux-thread-multi/auto/Cwd/Cwd.so: 
undefined symbol: Perl_Gthr_key_ptr
ERROR: Unable to continue with the installation 
because some one-off patches could not be applied successfully.
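
Clearing the variables in the shell before launching the installer is enough. A minimal sketch:

$ unset PERLLIB
$ unset PERL5LIB
$ env | grep -Ei 'perllib|perl5lib'    # should now return nothing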

Then, once the OMS and the management agent are stopped, you can run:

oracle@v333:/opt/software/ [oms13c] ./em13200_linux64.bin
Checking monitor: must be configured to display at least 256 colors.   
Actual 16777216    Passed
Checking swap space: must be greater than 512 MB.   Actual 7999 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    
Passed (64-bit not required)
Preparing to launch the Oracle Universal Installer 
from /tmp/OraInstall2016-11-18_10-07-45AM
====Prereq Config Location main===
/tmp/OraInstall2016-11-18_10-07-45AM/stage/prereq
EMGCInstaller args -scratchPath
EMGCInstaller args /tmp/OraInstall2016-11-18_10-07-45AM
EMGCInstaller args -sourceType
EMGCInstaller args network
EMGCInstaller args -timestamp
EMGCInstaller args 2016-11-18_10-07-45AM
EMGCInstaller args -paramFile
EMGCInstaller args /tmp/sfx_f8wrWz/Disk1/install/linux64/oraparam.ini
EMGCInstaller args -nocleanUpOnExit
DiskLoc inside SourceLoc/opt/software
EMFileLoc:/tmp/OraInstall2016-11-18_10-07-45AM/oui/em/
ScratchPathValue :/tmp/OraInstall2016-11-18_10-07-45AM

The first installer screen appears:

I chose not to receive security alerts, then I select Next:

I select Next and skip the software updates

We check that the prerequisites are ok

We choose to upgrade an existing Enterprise Manager system, we enter the old Middleware home, and we select Next

We enter the new Middleware home, and we select Next

We enter the repository connection details, the sys and sysman passwords, we confirm we have a correct repository backup, we disable the DDMP jobs and we select Next

We select Yes to fix the issues

We review the plugins and we select Next

We have the possibility to add plugins we want to deploy while upgrading to EM 13.2

We enter the WebLogic username and password and we select Next

I chose not to configure a shared location for Oracle BI Publisher

We choose the default ports and we select Next

We select Upgrade:

The upgrade is running fine :=)

At the end of the upgrade, you have to run the root.sh shell script from the new OMS home, connected as root:

root@vmCC13c oms13cr2]# . root.sh
/etc exist
/u03/app/oracle/oms13cr2

Finally you have successfully upgraded to EM 13.2:

Once the OMS is upgraded, you have to upgrade the management agents with the classical procedure. From the EM console you select Upgrade Agent as follows:

You add the management agent, and you select Submit

You can follow the upgrade phase; finally, you have to run the root.sh script on the management agent and clean up the old agent environment with the EM console as follows:

You select the Post Agent Upgrade Tasks and select the previously migrated agent:

And you select Submit

The old agent home will be removed; you will just have to adapt your /etc/oratab file to point to the new ORACLE_HOME of your 13.2 agent.

The last phase consists of deleting the old OMS home. As we were on version 13.1, we only have to check that nothing is running with the old environment and then delete the old home.

oracle@vmCC13c:/u03/app/oracle/oms13cr2/ [oms13c] ps -ef | grep oms | grep -v 13cr2
oracle   28463 27736  0 15:52 pts/5    00:00:00 grep --color=auto oms
 
oracle@vmCC13c:/u03/app/oracle/ [oms13c] ls
agent13c  CC13c_setupinfo.txt  gc_inst  gc_inst1  oms13c  oms13cr2  swlib
oracle@vmCC13c:/u03/app/oracle/ [oms13c] rm -rf oms13c

The new features are listed at the following URL and mainly concern virtualization, Middleware and Cloud management, and incident management (for example, always-on monitoring installed on a different host).

http://docs.oracle.com/cd/E73210_01/EMCON/GUID-503991BC-D1CD-46EC-8373-8423B2D43437.htm#EMCON-GUID-503991BC-D1CD-46EC-8373-8423B2D43437

Finally, the upgrade to EM 13.2 finished in a couple of hours and did not present any errors.

 

The post Upgrade to Enterprise Manager 13.2 first appeared on the dbi services blog.


IT-Tage 2016 Informatik aktuell: feedback


Today, to finish the year, I post a brief personal impression of the IT-Tage 2016 in Frankfurt at the Hotel Maritim, where I was also a speaker.

I presented 2 sessions on SQL Server: “SQL Server Errorlog Entmystifizierung” & “SQL Server 2016: Neue Sicherheitsfunktionen”.
I wasn’t the only one from dbi services who spoke at that conference:

  • David Barbarin with also 2 sessions: “SQL Server – Locks, latches and spinlocks” & “SQL Server 2016 Availability Group Enhancements”
  • Clemens Bleile with 1 session: “SQL Plan Directives: Neuigkeiten in 12.2. Produktions-Ausführungspläne in Testumgebungen reproduzieren”
  • Philippe Schweitzer with 1 session: “Feasibility study for building a software factory based on GIT repository”
  • Daniel Westermann with 1 session: “Breaking the deadlock: Migrating from proprietary databases to PostgreSQL”

You can already download all the presentations from this link.

After my presentation day, I had the opportunity to attend a very interesting session by Oliver Hock: "Ein Prozess lernt laufen: LEGO-Mindstorms-Steuerung mit BPMN" ("A process learns to walk: controlling LEGO Mindstorms with BPMN"). With a LEGO Mindstorms kit, he showed how to solve a magic cube.

This session is also on YouTube; watch the demo at the end (the last 60 seconds). It was very nice! ;-)

I would like to thank the entire team of Informatik Aktuell, who put together a smooth and interesting conference.

I hope that I can go again next year, with new sessions, and follow other interesting talks…

In the evening, you could also enjoy the Christmas market, which is two metro stops from the hotel.

I wish you a merry Christmas and, as we say in Alsace: "A guetta rutsch ins neja Johr!" (a good slide into the new year!)

 

The post IT-Tage 2016 Informatik aktuell: feedback first appeared on the dbi services blog.

NLS defaults for LANGUAGE and TERRITORY


When you set NLS_LANGUAGE, the values of NLS_DATE_LANGUAGE and NLS_SORT are set to their defaults for the LANGUAGE you chose.
When you set NLS_TERRITORY, the values of NLS_DATE_FORMAT, NLS_NUMERIC_CHARACTERS, NLS_CURRENCY and NLS_ISO_CURRENCY are set to the defaults for the TERRITORY you chose. For reference, I've listed the defaults in this post.

Here, for each value found in V$NLS_VALID_VALUES, I set the session parameter and display the derived values, as well as an example date.

TERRITORY

The TERRITORY sets the decimal and thousands separators, the currency symbols and the date format. I display the short date example for the last day of 2016.
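
Before the full listing, a quick illustration with a single territory (the values shown match the SWITZERLAND line of the output below):

SQL> alter session set nls_territory='SWITZERLAND';
Session altered.

SQL> select parameter,value from nls_session_parameters
  2  where parameter in ('NLS_DATE_FORMAT','NLS_NUMERIC_CHARACTERS','NLS_CURRENCY');

PARAMETER                 VALUE
------------------------- --------
NLS_DATE_FORMAT           DD.MM.RR
NLS_NUMERIC_CHARACTERS    .'
NLS_CURRENCY              SFr.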

SQL> with
2 function nls_territory_defaults(t varchar2) return varchar2 as
3 s varchar2(2017):='';
4 begin
5 execute immediate 'alter session set nls_territory='''||t||'''';
6 for i in (select * from nls_session_parameters where parameter in ('NLS_DATE_FORMAT','NLS_NUMERIC_CHARACTERS','NLS_CURRENCY','NLS_ISO_CURRENCY') order by parameter desc)
7 loop
8 s:=s||lpad(i.parameter,20)||'='||rpad(i.value,20);
9 end loop;
10 return s||' example: '||to_char(trunc(sysdate,'yy')-1,'ds');
11 end;
12 select rpad(value,20)||nls_territory_defaults(value) "TERRITORY default NLS settings" from v$nls_valid_values where parameter='TERRITORY' order by 1
13 /
 
TERRITORY default NLS settings
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
AFGHANISTAN NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=AFGHANISTAN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=؋ example: 31/12/16
ALBANIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ALBANIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=Lek example: 31.12.2016
ALGERIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ALGERIA NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=د.ج. example: 31/12/2016
AMERICA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=AMERICA NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=$ example: 12/31/2016
ARGENTINA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ARGENTINA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=$ example: 31/12/2016
ARMENIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ARMENIA NLS_DATE_FORMAT=DD.fmMM.RRRR NLS_CURRENCY=դր. example: 31.12.2016
AUSTRALIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=AUSTRALIA NLS_DATE_FORMAT=DD/MON/RR NLS_CURRENCY=$ example: 31/12/2016
AUSTRIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=AUSTRIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=€ example: 31.12.2016
AZERBAIJAN NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=AZERBAIJAN NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=₼ example: 31.12.2016
BAHAMAS NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BAHAMAS NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=B$ example: 31/12/2016
BAHRAIN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BAHRAIN NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=د.ب. example: 31/12/2016
BANGLADESH NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BANGLADESH NLS_DATE_FORMAT=DD-MM-RRRR NLS_CURRENCY=৳ example: 31-12-2016
BELARUS NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=BELARUS NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=р. example: 31.12.2016
BELGIUM NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=BELGIUM NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31/12/2016
BELIZE NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BELIZE NLS_DATE_FORMAT=fmDD/MM/RRRR NLS_CURRENCY=BZ$ example: 12/31/2016
BERMUDA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BERMUDA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=BD$ example: 31/12/2016
BOLIVIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=BOLIVIA NLS_DATE_FORMAT=fmDD-MM-RR NLS_CURRENCY=B$ example: 12-31-2016
BOSNIA AND HERZEGOVINLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=BOSNIA AND HERZEGOVI NLS_DATE_FORMAT=fmMM.DD.RRRR NLS_CURRENCY=КМ example: 12.31.2016
BRAZIL NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=BRAZIL NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=R$ example: 31/12/2016
BULGARIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BULGARIA NLS_DATE_FORMAT=RRRR-MM-DD NLS_CURRENCY=лв example: 31.12.2016
CAMBODIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=CAMBODIA NLS_DATE_FORMAT=RRRR-fmMM-DD NLS_CURRENCY=៛ example: 2016-12-31
CAMEROON NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CAMEROON NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=CFA example: 31/12/16
CANADA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CANADA NLS_DATE_FORMAT=RR-MM-DD NLS_CURRENCY=$ example: 2016-12-31
CATALONIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CATALONIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31-12-16
CHILE NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CHILE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31-12-2016
CHINA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=CHINA NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=¥ example: 2016-12-31
CIS NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CIS NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=р. example: 31.12.16
COLOMBIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=COLOMBIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31/12/2016
CONGO BRAZZAVILLE NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CONGO BRAZZAVILLE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=CFA example: 31/12/16
CONGO KINSHASA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CONGO KINSHASA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=FrCD example: 31/12/16
COSTA RICA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=COSTA RICA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=C example: 31/12/2016
CROATIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CROATIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=kn example: 31.12.16
CYPRUS NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CYPRUS NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=£ example: 31/12/2016
CZECH REPUBLIC NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CZECH REPUBLIC NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=Kč example: 31.12.2016
CZECHOSLOVAKIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CZECHOSLOVAKIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=Kčs example: 31.12.16
DENMARK NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=DENMARK NLS_DATE_FORMAT=RR-MM-DD NLS_CURRENCY=kr example: 31-12-2016
DJIBOUTI NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=DJIBOUTI NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ف.ج. example: 31/12/2016
ECUADOR NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ECUADOR NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=$ example: 31/12/2016
EGYPT NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=EGYPT NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ج.م. example: 31/12/2016
EL SALVADOR NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=EL SALVADOR NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=C example: 31/12/2016
ESTONIA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=ESTONIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=kr example: 31.12.2016
ETHIOPIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ETHIOPIA NLS_DATE_FORMAT=fmDD/MM/RRRR NLS_CURRENCY=ብር example: 12/31/2016
FINLAND NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=FINLAND NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=€ example: 31.12.2016
FRANCE NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=FRANCE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31/12/2016
FYR MACEDONIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=FYR MACEDONIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=ден. example: 31.12.2016
GABON NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=GABON NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=CFA example: 31/12/16
GERMANY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=GERMANY NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=€ example: 31.12.2016
GREECE NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=GREECE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31/12/2016
GUATEMALA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=GUATEMALA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=Q example: 31/12/2016
HONDURAS NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=HONDURAS NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=L example: 31.12.2016
HONG KONG NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=HONG KONG NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=$ example: 31/12/2016
HUNGARY NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=HUNGARY NLS_DATE_FORMAT=RR-MON-DD NLS_CURRENCY=Ft example: 2016.12.31.
ICELAND NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ICELAND NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=kr. example: 31.12.2016
INDIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=INDIA NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=Rs example: 31/12/2016
INDONESIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=INDONESIA NLS_DATE_FORMAT=DD-MM-RRRR NLS_CURRENCY=Rp example: 31-12-2016
IRAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=IRAN NLS_DATE_FORMAT=RRRR/fmMM/fmDD NLS_CURRENCY=ريال example: 2016/12/31
IRAQ NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=IRAQ NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.ع. example: 31/12/2016
IRELAND NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=IRELAND NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=€ example: 31/12/2016
ISRAEL NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ISRAEL NLS_DATE_FORMAT=DD-MON-RRRR NLS_CURRENCY=ש"ח example: 31/12/2016
ITALY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ITALY NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=€ example: 31/12/2016
IVORY COAST NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=IVORY COAST NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=CFA example: 31/12/16
JAPAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=JAPAN NLS_DATE_FORMAT=RR-MM-DD NLS_CURRENCY=¥ example: 2016/12/31
JORDAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=JORDAN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.ا.‏ example: 31/12/2016
KAZAKHSTAN NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=KAZAKHSTAN NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=KZT example: 31.12.2016
KENYA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=KENYA NLS_DATE_FORMAT=fmMM/DD/RRRR NLS_CURRENCY=S example: 12/31/2016
KOREA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=KOREA NLS_DATE_FORMAT=RR/MM/DD NLS_CURRENCY=₩ example: 2016/12/31
KUWAIT NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=KUWAIT NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.ك. example: 31/12/2016
LAOS NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=LAOS NLS_DATE_FORMAT=DD/fmMM/RRRR NLS_CURRENCY=₭ example: 31/12/2016
LATVIA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=LATVIA NLS_DATE_FORMAT=RRRR.MM.DD NLS_CURRENCY=Ls example: 2016.12.31
LEBANON NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=LEBANON NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ل.ل. example: 31/12/2016
LIBYA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=LIBYA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.ل. example: 31/12/2016
LITHUANIA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=LITHUANIA NLS_DATE_FORMAT=RRRR.MM.DD NLS_CURRENCY=€ example: 2016-12-31
LUXEMBOURG NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=LUXEMBOURG NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31.12.2016
MACEDONIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=MACEDONIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=den. example: 31.12.2016
MALAYSIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MALAYSIA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=RM example: 31/12/2016
MALDIVES NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MALDIVES NLS_DATE_FORMAT=fmDD/fmMM/RR NLS_CURRENCY=ރ. example: 12/31/16
MALTA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MALTA NLS_DATE_FORMAT=fmDD/MM/RRRR NLS_CURRENCY=€ example: 12/31/2016
MAURITANIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MAURITANIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ا.م. example: 31/12/2016
MEXICO NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MEXICO NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31/12/2016
MONTENEGRO NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=MONTENEGRO NLS_DATE_FORMAT=DD.MM.RRRR. NLS_CURRENCY=€ example: 31.12.2016.
MOROCCO NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MOROCCO NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=د.م. example: 31/12/2016
NEPAL NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=NEPAL NLS_DATE_FORMAT=fmDD/MM/RRRR NLS_CURRENCY=र example: 12/31/2016
NEW ZEALAND NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=NEW ZEALAND NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31/12/2016
NICARAGUA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=NICARAGUA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=C$ example: 31/12/2016
NIGERIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=NIGERIA NLS_DATE_FORMAT=DD/fmMM/RRRR NLS_CURRENCY=₦ example: 31/12/2016
NORWAY NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=NORWAY NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=kr example: 31.12.2016
OMAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=OMAN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ر.ع. example: 31/12/2016
PAKISTAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=PAKISTAN NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=PKR example: 31/12/2016
PANAMA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=PANAMA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=B/ example: 12/31/2016
PARAGUAY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=PARAGUAY NLS_DATE_FORMAT=fmDD/MM/RR NLS_CURRENCY=G example: 12/31/16
PERU NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=PERU NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=S/ example: 31/12/2016
PHILIPPINES NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=PHILIPPINES NLS_DATE_FORMAT=MM/DD/RRRR NLS_CURRENCY=Php example: 12/31/2016
POLAND NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=POLAND NLS_DATE_FORMAT=RR/MM/DD NLS_CURRENCY=zł example: 2016-12-31
PORTUGAL NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=PORTUGAL NLS_DATE_FORMAT=RR.MM.DD NLS_CURRENCY=€ example: 31-12-2016
PUERTO RICO NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=PUERTO RICO NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31/12/2016
QATAR NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=QATAR NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ر.ق. example: 31/12/2016
ROMANIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ROMANIA NLS_DATE_FORMAT=DD-MM-RRRR NLS_CURRENCY=LEI example: 31.12.2016
RUSSIA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=RUSSIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=₽ example: 31.12.2016
SAUDI ARABIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SAUDI ARABIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ر.س. example: 31/12/2016
SENEGAL NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=SENEGAL NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=CFA example: 31/12/2016
SERBIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SERBIA NLS_DATE_FORMAT=DD.MM.RRRR. NLS_CURRENCY=RSD example: 31.12.2016.
SERBIA AND MONTENEGRNLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SERBIA AND MONTENEGR NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=din. example: 31.12.2016
SINGAPORE NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SINGAPORE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=S$ example: 31/12/2016
SLOVAKIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SLOVAKIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=Sk example: 31.12.16
SLOVENIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SLOVENIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=SIT example: 31.12.16
SOMALIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SOMALIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ش.ص. example: 31/12/2016
SOUTH AFRICA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SOUTH AFRICA NLS_DATE_FORMAT=DD/MON/RR NLS_CURRENCY=R example: 2016/12/31
SPAIN NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SPAIN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31/12/2016
SRI LANKA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SRI LANKA NLS_DATE_FORMAT=RRRR/fmMM/fmDD NLS_CURRENCY=ريال example: 2016/12/31
SUDAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SUDAN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ج.س. example: 31/12/2016
SWEDEN NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SWEDEN NLS_DATE_FORMAT=RRRR-MM-DD NLS_CURRENCY=Kr example: 2016-12-31
SWITZERLAND NLS_NUMERIC_CHARACTE=.' NLS_ISO_CURRENCY=SWITZERLAND NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=SFr. example: 31.12.2016
SYRIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SYRIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ل.س. example: 31/12/2016
TAIWAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=TAIWAN NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=NT$ example: 2016/12/31
TANZANIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=TANZANIA NLS_DATE_FORMAT=fmMM/DD/RRRR NLS_CURRENCY=TSh example: 12/31/2016
THAILAND NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=THAILAND NLS_DATE_FORMAT=DD MON RRRR NLS_CURRENCY=฿ example: 31/12/2016
THE NETHERLANDS NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=THE NETHERLANDS NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=€ example: 31-12-2016
TUNISIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=TUNISIA NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=د.ت. example: 31/12/2016
TURKEY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=TURKEY NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=₺ example: 31.12.2016
UGANDA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=UGANDA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=CFA example: 31/12/2016
UKRAINE NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=UKRAINE NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=грн. example: 31.12.2016
UNITED ARAB EMIRATESNLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=UNITED ARAB EMIRATES NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.إ. example: 31/12/2016
UNITED KINGDOM NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=UNITED KINGDOM NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=£ example: 31/12/2016
URUGUAY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=URUGUAY NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=NU$ example: 31/12/2016
UZBEKISTAN NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=UZBEKISTAN NLS_DATE_FORMAT=DD/MON/RR NLS_CURRENCY=UZS example: 31/DEC/16
VENEZUELA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=VENEZUELA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=Bs example: 31/12/2016
VIETNAM NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=VIETNAM NLS_DATE_FORMAT=DD-MM-RRRR NLS_CURRENCY=₫ example: 31-12-2016
YEMEN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=YEMEN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ر.ي. example: 31/12/2016
YUGOSLAVIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=YUGOSLAVIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=din. example: 31.12.2016
ZAMBIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ZAMBIA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=ZK example: 31/12/2016

LANGUAGE

The LANGUAGE part sets the linguistic sorting and the language used for words in dates. I display the long date example for the last day of 2016.

SQL> with
2 function nls_language_defaults(t varchar2) return varchar2 as
3 s varchar2(2017):='';
4 begin
5 execute immediate 'alter session set nls_language='''||t||'''';
6 for i in (select * from nls_session_parameters where parameter in ('NLS_DATE_LANGUAGE','NLS_SORT') order by parameter desc)
7 loop
8 s:=s||lpad(i.parameter,20)||'='||rpad(i.value,20);
9 end loop;
10 return s||' example: '||to_char(trunc(sysdate,'yy')-1,'dl');
11 end;
12 select rpad(value,20)||nls_language_defaults(value) "LANGUAGE default NLS settings" from v$nls_valid_values where parameter='LANGUAGE' order by 1
13 /
 
LANGUAGE default NLS settings
-----------------------------------------------------------------------------------------------------------------------------------------------
ALBANIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=ALBANIAN example: E Shtunë, Dhjetor 31, 2016
AMERICAN NLS_SORT=BINARY NLS_DATE_LANGUAGE=AMERICAN example: Saturday, December 31, 2016
AMHARIC NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=AMHARIC example: ቅዳሜ, ዲሴምበር 31, 2016
ARABIC NLS_SORT=ARABIC NLS_DATE_LANGUAGE=ARABIC example: السبت, ديسمبر 31, 2016
ARMENIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=ARMENIAN example: Շաբա, Դեկտեմբեր 31, 2016
ASSAMESE NLS_SORT=BINARY NLS_DATE_LANGUAGE=ASSAMESE example: শনিবাৰ, ডিচেম্বৰ 31, 2016
AZERBAIJANI NLS_SORT=AZERBAIJANI NLS_DATE_LANGUAGE=AZERBAIJANI example: Şənbə, Dekabr 31, 2016
BANGLA NLS_SORT=BINARY NLS_DATE_LANGUAGE=BANGLA example: শনিবার, ডিসেম্বর 31, 2016
BELARUSIAN NLS_SORT=RUSSIAN NLS_DATE_LANGUAGE=BELARUSIAN example: Субота, Сьнежань 31, 2016
BENGALI NLS_SORT=BENGALI NLS_DATE_LANGUAGE=BENGALI example: শনিবার, ডিসেমর 31, 2016
BRAZILIAN PORTUGUESE NLS_SORT=WEST_EUROPEAN NLS_DATE_LANGUAGE=BRAZILIAN PORTUGUESE example: Sábado, Dezembro 31, 2016
BULGARIAN NLS_SORT=BULGARIAN NLS_DATE_LANGUAGE=BULGARIAN example: Събота, Декември 31, 2016
CANADIAN FRENCH NLS_SORT=CANADIAN_M NLS_DATE_LANGUAGE=CANADIAN FRENCH example: Samedi, Décembre 31, 2016
CATALAN NLS_SORT=CATALAN NLS_DATE_LANGUAGE=CATALAN example: Dissabte, Desembre 31, 2016
CROATIAN NLS_SORT=CROATIAN NLS_DATE_LANGUAGE=CROATIAN example: Subota, Prosinac 31, 2016
CYRILLIC KAZAKH NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=CYRILLIC KAZAKH example: Сенбі, Желтоқсан 31, 2016
CYRILLIC SERBIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=CYRILLIC SERBIAN example: Субота, Децембар 31, 2016
CYRILLIC UZBEK NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=CYRILLIC UZBEK example: Шанба, Декабр 31, 2016
CZECH NLS_SORT=CZECH NLS_DATE_LANGUAGE=CZECH example: Sobota, Prosinec 31, 2016
DANISH NLS_SORT=DANISH NLS_DATE_LANGUAGE=DANISH example: Lørdag, December 31, 2016
DARI NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=DARI example: شنبه, دسمبر 31, 2016
DIVEHI NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=DIVEHI example: ހޮނިހިރު, ޑިސެމްބަރ 31, 2016
DUTCH NLS_SORT=DUTCH NLS_DATE_LANGUAGE=DUTCH example: Zaterdag, December 31, 2016
EGYPTIAN NLS_SORT=ARABIC NLS_DATE_LANGUAGE=EGYPTIAN example: السبت, ديسمبر 31, 2016
ENGLISH NLS_SORT=BINARY NLS_DATE_LANGUAGE=ENGLISH example: Saturday, December 31, 2016
ESTONIAN NLS_SORT=ESTONIAN NLS_DATE_LANGUAGE=ESTONIAN example: Laupäev, Detsember 31, 2016
FINNISH NLS_SORT=FINNISH NLS_DATE_LANGUAGE=FINNISH example: Lauantai, Joulukuu 31, 2016
FRENCH NLS_SORT=FRENCH NLS_DATE_LANGUAGE=FRENCH example: Samedi, Décembre 31, 2016
GERMAN NLS_SORT=GERMAN NLS_DATE_LANGUAGE=GERMAN example: Samstag, Dezember 31, 2016
GERMAN DIN NLS_SORT=GERMAN NLS_DATE_LANGUAGE=GERMAN DIN example: Samstag, Dezember 31, 2016
GREEK NLS_SORT=GREEK NLS_DATE_LANGUAGE=GREEK example: Σάββατο, Δεκέμβριος 31, 2016
GUJARATI NLS_SORT=BINARY NLS_DATE_LANGUAGE=GUJARATI example: શનિવાર, ડિસેમ્બર 31, 2016
HEBREW NLS_SORT=HEBREW NLS_DATE_LANGUAGE=HEBREW example: שבת, דצמבר 31, 2016
HINDI NLS_SORT=BINARY NLS_DATE_LANGUAGE=HINDI example: शनिवार, दिसम्बर 31, 2016
HUNGARIAN NLS_SORT=HUNGARIAN NLS_DATE_LANGUAGE=HUNGARIAN example: Szombat, December 31, 2016
ICELANDIC NLS_SORT=ICELANDIC NLS_DATE_LANGUAGE=ICELANDIC example: Laugardagur, Desember 31, 2016
INDONESIAN NLS_SORT=INDONESIAN NLS_DATE_LANGUAGE=INDONESIAN example: Sabtu, Desember 31, 2016
IRISH NLS_SORT=BINARY NLS_DATE_LANGUAGE=IRISH example: Dé Sathairn, Nollaig 31, 2016
ITALIAN NLS_SORT=WEST_EUROPEAN NLS_DATE_LANGUAGE=ITALIAN example: Sabato, Dicembre 31, 2016
JAPANESE NLS_SORT=BINARY NLS_DATE_LANGUAGE=JAPANESE example: 土曜日, 12月 31, 2016
KANNADA NLS_SORT=BINARY NLS_DATE_LANGUAGE=KANNADA example: ಶನಿವಾರ, ಡಿಸೆಂಬರ್ 31, 2016
KHMER NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=KHMER example: ថ្ងៃសៅរ៍, ធ្នូ 31, 2016
KOREAN NLS_SORT=BINARY NLS_DATE_LANGUAGE=KOREAN example: 토요일, 12월 31, 2016
LAO NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=LAO example: ວັນເສົາ, ທັນວາ 31, 2016
LATIN AMERICAN SPANI NLS_SORT=SPANISH NLS_DATE_LANGUAGE=LATIN AMERICAN SPANI example: Sábado, Diciembre 31, 2016
LATIN BOSNIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=LATIN BOSNIAN example: Subota, Decembar 31, 2016
LATIN SERBIAN NLS_SORT=BINARY NLS_DATE_LANGUAGE=LATIN SERBIAN example: Subota, Decembar 31, 2016
LATIN UZBEK NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=LATIN UZBEK example: Shanba, Dekabr 31, 2016
LATVIAN NLS_SORT=LATVIAN NLS_DATE_LANGUAGE=LATVIAN example: Sestdiena, Decembris 31, 2016
LITHUANIAN NLS_SORT=LITHUANIAN NLS_DATE_LANGUAGE=LITHUANIAN example: Šeštadienis, Gruodžio 31, 2016
MACEDONIAN NLS_SORT=BINARY NLS_DATE_LANGUAGE=MACEDONIAN example: Сабота, Декември 31, 2016
MALAY NLS_SORT=MALAY NLS_DATE_LANGUAGE=MALAY example: Sabtu, Disember 31, 2016
MALAYALAM NLS_SORT=BINARY NLS_DATE_LANGUAGE=MALAYALAM example: ശനിആഴ്ച, ഡിസംബര് 31, 2016
MALTESE NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=MALTESE example: Is-Sibt, Diċembru 31, 2016
MARATHI NLS_SORT=BINARY NLS_DATE_LANGUAGE=MARATHI example: शनिवार, डिसेंबर 31, 2016
MEXICAN SPANISH NLS_SORT=WEST_EUROPEAN NLS_DATE_LANGUAGE=MEXICAN SPANISH example: Sábado, Diciembre 31, 2016
NEPALI NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=NEPALI example: शनिबार, डिसेम्बर 31, 2016
NORWEGIAN NLS_SORT=NORWEGIAN NLS_DATE_LANGUAGE=NORWEGIAN example: Lørdag, Desember 31, 2016
ORIYA NLS_SORT=BINARY NLS_DATE_LANGUAGE=ORIYA example: ଶନିବାର, ଡିସେମ୍ବର 31, 2016
PERSIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=PERSIAN example: شنبه, دسامبر 31, 2016
POLISH NLS_SORT=POLISH NLS_DATE_LANGUAGE=POLISH example: Sobota, Grudzień 31, 2016
PORTUGUESE NLS_SORT=WEST_EUROPEAN NLS_DATE_LANGUAGE=PORTUGUESE example: Sábado, Dezembro 31, 2016
PUNJABI NLS_SORT=BINARY NLS_DATE_LANGUAGE=PUNJABI example: ਸ਼ਨਿਚੱਰਵਾਰ, ਦਸੰਬਰ 31, 2016
ROMANIAN NLS_SORT=ROMANIAN NLS_DATE_LANGUAGE=ROMANIAN example: Sâmbătă, Decembrie 31, 2016
RUSSIAN NLS_SORT=RUSSIAN NLS_DATE_LANGUAGE=RUSSIAN example: Суббота, Декабрь 31, 2016
SIMPLIFIED CHINESE NLS_SORT=BINARY NLS_DATE_LANGUAGE=SIMPLIFIED CHINESE example: 星期六, 12月 31, 2016
SINHALA NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=SINHALA example: සෙනසුරාදා, දෙසැම්බර් 31, 2016
SLOVAK NLS_SORT=SLOVAK NLS_DATE_LANGUAGE=SLOVAK example: Sobota, December 31, 2016
SLOVENIAN NLS_SORT=SLOVENIAN NLS_DATE_LANGUAGE=SLOVENIAN example: Sobota, December 31, 2016
SPANISH NLS_SORT=SPANISH NLS_DATE_LANGUAGE=SPANISH example: Sábado, Diciembre 31, 2016
SWAHILI NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=SWAHILI example: Jumamosi, Desemba 31, 2016
SWEDISH NLS_SORT=SWEDISH NLS_DATE_LANGUAGE=SWEDISH example: Lördag, December 31, 2016
TAMIL NLS_SORT=BINARY NLS_DATE_LANGUAGE=TAMIL example: சனிக்கிழமை, டிசம்பர் 31, 2016
TELUGU NLS_SORT=BINARY NLS_DATE_LANGUAGE=TELUGU example: శనివారం, డిసెంబర్ 31, 2016
THAI NLS_SORT=THAI_DICTIONARY NLS_DATE_LANGUAGE=THAI example: เสาร์, ธันวาคม 31, 2016
TRADITIONAL CHINESE NLS_SORT=BINARY NLS_DATE_LANGUAGE=TRADITIONAL CHINESE example: 星期六, 12月 31, 2016
TURKISH NLS_SORT=TURKISH NLS_DATE_LANGUAGE=TURKISH example: Cumartesi, Aralık 31, 2016
UKRAINIAN NLS_SORT=UKRAINIAN NLS_DATE_LANGUAGE=UKRAINIAN example: Субота, Грудень 31, 2016
VIETNAMESE NLS_SORT=VIETNAMESE NLS_DATE_LANGUAGE=VIETNAMESE example: Thứ Bảy, Tháng MườI Hai 31, 2016
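
To check a single language without rerunning the whole script, you can simply set NLS_LANGUAGE at session level. A quick sketch (the DL long date format also depends on the territory):

SQL> alter session set nls_language='FRENCH';

Session altered.

SQL> select to_char(date '2016-12-31','DL') "Long date" from dual;

Long date
-----------------------------
Samedi, Décembre 31, 2016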

NLS_LANG

Note that you can also set the NLS settings with environment variables, but you can do that only if NLS_LANG is set. This means that you must set NLS_LANG first, which sets the LANGUAGE and TERRITORY, and then set for example NLS_DATE_FORMAT.

Here is an example.
I set NLS_DATE_FORMAT from the environment:

[oracle@VM102 data]$ export NLS_DATE_FORMAT=DL

but this date format is not used in my session:
[oracle@VM102 data]$ sqlplus -s / as sysdba <<< "select sysdate from dual;"
 
SYSDATE
---------
04-JAN-17

Now I set NLS_LANG (and you don’t need to specify a LANGUAGE, TERRITORY or CHARACTERSET, just put the separators ‘_’ and ‘.’)

[oracle@VM102 data]$ export NLS_LANG="_."

And now my NLS_DATE_FORMAT is taken into account

[oracle@VM102 data]$ sqlplus -s / as sysdba <<< "select sysdate from dual;"
 
SYSDATE
-----------------------------
Wednesday, January 04, 2017

In general, you should set NLS_LANG on any client, to the LANGUAGE and TERRITORY you want to display on the client (or store in a file, or manage in the application server).
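
For example, here is a hedged sketch (the values are up to you) for a client that wants German language, Swiss territory, and an explicit date format; the output is illustrative:

[oracle@VM102 data]$ export NLS_LANG="GERMAN_SWITZERLAND.AL32UTF8"
[oracle@VM102 data]$ export NLS_DATE_FORMAT="DD.MM.YYYY"
[oracle@VM102 data]$ sqlplus -s / as sysdba <<< "select sysdate from dual;"

SYSDATE
----------
04.01.2017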

I wish you a Happy New Year 2017.

 

Cet article NLS defaults for LANGUAGE and TERRITORY est apparu en premier sur Blog dbi services.

Exadata Express Cloud Service: PDB_ADMIN privileges


I usually explain that the main point about Multitenant Architecture is the separation of roles between a system DBA, who manages the container and is responsible for its availability, and an application DBA, who owns all the application objects and is responsible for data integrity and performance. The Exadata Express Cloud Service is the implementation of that: a CDB managed by Oracle and a PDB that you administer. But the fact that the system is shared, over the internet, brings the necessity for new limitations. Let’s see what you can do or not as a PDB administrator.

When you create a pluggable database you need to provide a local user and its password. When you create an Exadata Express Cloud Service, the PDB_ADMIN user is created but the account is locked. You need to provide your password in order to unlock it. Then, this PDB_ADMIN is the only user for which you know the password.

But what does ‘PDB administrator’ really mean? Which privileges do you have?

System privileges

PDB_ADMIN looks like a DBA. It is not granted ‘DBA’ but ‘PDB_DBA’ and many other roles.
Let’s see which system privileges are not granted, either directly or through a chain of roles:

with my_user_roles(grantee,granted_role) as (
 select grantee,granted_role from dba_role_privs where grantee='PDB_ADMIN'
 union all
 select r.grantee,r.granted_role from dba_role_privs r join my_user_roles u on r.grantee=u.granted_role
) select listagg(privilege,',')within group(order by privilege) from (
 select distinct privilege from dba_sys_privs
 minus
 select distinct privilege from dba_sys_privs where grantee in (select granted_role from my_user_roles)
);
 
LISTAGG(PRIVILEGE,',')WITHINGROUP(ORDERBYPRIVILEGE)
---------------------------------------------------
ADMINISTER KEY MANAGEMENT,BACKUP ANY TABLE,BECOME USER,CREATE ANY CREDENTIAL,CREATE ANY DIRECTORY,CREATE ANY JOB,CREATE ANY LIBRARY,CREATE CREDENTIAL,CREATE DATABASE LINK,CREATE EXTERNAL JOB,CREATE LIBRARY,CREATE PUBLIC DATABASE LINK,DROP ANY DIRECTORY,DROP PUBLIC DATABASE LINK,EM EXPRESS CONNECT,EXECUTE ANY CLASS,EXECUTE ANY PROGRAM,EXEMPT ACCESS POLICY,EXEMPT DDL REDACTION POLICY,EXEMPT DML REDACTION POLICY,EXEMPT REDACTION POLICY,EXPORT FULL DATABASE,FLASHBACK ANY TABLE,FLASHBACK ARCHIVE ADMINISTER,FORCE TRANSACTION,GRANT ANY PRIVILEGE,GRANT ANY ROLE,IMPORT FULL DATABASE,INHERIT ANY PRIVILEGES,LOGMINING,MANAGE ANY FILE GROUP,MANAGE FILE GROUP,MANAGE SCHEDULER,SELECT ANY TRANSACTION,UNLIMITED TABLESPACE,USE ANY JOB RESOURCE,USE ANY SQL TRANSLATION PROFILE

So your PDB_ADMIN has some system privileges. For example you can gather system statistics:

SQL> exec dbms_stats.gather_system_stats;
PL/SQL procedure successfully completed.
 
SNAME PNAME PVAL1 PVAL2
----- ----- ----- -----
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 01-24-2017 19:48
SYSSTATS_INFO DSTOP 01-24-2017 19:48
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 3097
SYSSTATS_MAIN IOSEEKTIM 7
SYSSTATS_MAIN IOTFRSPEED 4096

but you cannot create database links or directories, which may be considered as application objects:

create directory mydir as '/tmp'
ORA-01031: insufficient privileges
 
SQL> create database link mydblink connect to pdb_admin identified by "Ach1z0#d" using '//localhost';
ORA-01031: insufficient privileges

What you should understand is that you have a PDB in a shared CDB; it is yours and you can do whatever you want as long as it has no side effects on your neighbours. You will see that communication with the external world (network, filesystem, host server) is very limited.

Lockdown profiles

Now we will see that the restrictions go beyond the privileges you can see in DBA_SYS_PRIVS and DBA_ROLE_PRIVS.

From my query above, I have the ALTER SYSTEM privilege, so I can change some parameters:

SQL> alter system set temp_undo_enabled=true;
System SET altered.
SQL> alter system set cursor_sharing=exact;
System SET altered.

However, some parameters cannot be set:

SQL> alter session set max_idle_time=60;
ORA-01031: insufficient privileges
SQL> alter system set sga_target=3G;
ORA-01031: insufficient privileges
SQL> alter system set sql_trace=true;
ORA-01031: insufficient privileges

and most of the other ALTER SYSTEM statements are forbidden:

SQL> alter system flush shared_pool;
ORA-01031: insufficient privileges

This is forbidden by a multitenant lockdown profile, which gives a finer grain than privileges: it disables some statements, statement clauses, or options. The bad thing about it is that you don’t know what you are allowed to do or not: always the same ‘insufficient privileges’, and the detail is stored only in CDB$ROOT. From the PDB:

SQL> select * from dba_lockdown_profiles;
no rows selected

the only thing that can be known from the PDB is the name of the lockdown profile:

SQL> show parameter lockdown
NAME TYPE VALUE
------------ ------ -----
pdb_lockdown string S20

That’s not a coincidence. I’m on an ‘S20’ service (30 GB storage, 3 GB SGA) and the lockdown profile is associated with the service level.
Of course, you cannot change it:

SQL> alter system set pdb_lockdown='S50';
ORA-32017: failure in updating SPFILE
ORA-01031: insufficient privileges
 
SQL> alter system set pdb_lockdown='S50' scope=memory;
ORA-01031: insufficient privileges

Always the same message… Before 12cR2, you could just query the system privileges to understand why you got this message. Now, you need to ask your CDB administrator. Of course, for Exadata Express Cloud Service, this is documented in the limitations.

Lockdown profiles are not only for ALTER SYSTEM.
There are a few things that you cannot do with ALTER SESSION either: setting SQL Trace, any event, and any underscore parameter are all forbidden. ALTER DATABASE and ALTER PLUGGABLE DATABASE allow only OPEN/CLOSE, or changing the default edition, default tablespace and temporary tablespace. On datafiles, you can only resize and set autoextend on/off. You can also set the time zone.

Well, I’m not completely sure about CLOSE:

SQL> alter pluggable database close immediate;
ORA-01031: insufficient privileges
alter pluggable database open
ORA-65019: pluggable database EPTDOJVM1KG already open

Lockdown profiles go beyond enabling or disabling statements. They can disable features that you see mentioned in the ‘limitations’: you can’t create a manual AWR snapshot, can’t access OS files in any way (UTL_FILE is disabled), and can’t use any network protocol (UTL_TCP, UTL_MAIL, …).
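
The actual profile is not visible from the PDB, but as a sketch of the mechanism, a CDB administrator builds such a profile from CDB$ROOT with statements like the following (the rules here are illustrative, not the real S20 definition):

CREATE LOCKDOWN PROFILE S20;
-- disable one option of one clause of one statement:
ALTER LOCKDOWN PROFILE S20 DISABLE STATEMENT=('ALTER SYSTEM') CLAUSE=('SET') OPTION=('SGA_TARGET');
-- disable whole feature bundles (UTL_TCP, UTL_MAIL, ... and UTL_FILE, ...):
ALTER LOCKDOWN PROFILE S20 DISABLE FEATURE=('NETWORK_ACCESS','OS_ACCESS');
-- and assign the profile to the PDB:
ALTER SYSTEM SET PDB_LOCKDOWN=S20;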

So what?

I’m a DBA and connecting to a database where I’m not SYSDBA is a bit frustrating. PDB_ADMIN is granted SYSOPER but is locked in the password file. I’ve unlocked PDB_ADMIN, but it is still EXPIRED & LOCKED in the password file:

SQL> select * from dba_users where username='PDB_ADMIN';
 
USERNAME USER_ID PASSWORD ACCOUNT_STATUS LOCK_DATE EXPIRY_DATE DEFAULT_TABLESPACE TEMPORARY_TABLESPACE LOCAL_TEMP_TABLESPACE CREATED PROFILE INITIAL_RSRC_CONSUMER_GROUP EXTERNAL_NAME PASSWORD_VERSIONS EDITIONS_ENABLED AUTHENTICATION_TYPE PROXY_ONLY_CONNECT COMMON LAST_LOGIN ORACLE_MAINTAINED INHERITED DEFAULT_COLLATION IMPLICIT
-------- ------- -------- -------------- --------- ----------- ------------------ -------------------- --------------------- ------- ------- --------------------------- ------------- ----------------- ---------------- ------------------- ------------------ ------ ---------- ----------------- --------- ----------------- --------
PDB_ADMIN 94 OPEN 22-JUL-2017 19:17:41 SYSEXT TEMP TEMP 27-JUL-2016 17:05:02 DEFAULT DEFAULT_CONSUMER_GROUP 11G 12C N PASSWORD N NO 24-JAN-17 08.46.02.000000000 PM EUROPE/BERLIN N NO USING_NLS_COMP NO
 
SQL> select * from v$pwfile_users where username='PDB_ADMIN';
 
USERNAME SYSDBA SYSOPER SYSASM SYSBACKUP SYSDG SYSKM ACCOUNT_STATUS PASSWORD_PROFILE LAST_LOGIN LOCK_DATE EXPIRY_DATE EXTERNAL_NAME AUTHENTICATION_TYPE COMMON CON_ID
-------- ------ ------- ------ --------- ----- ----- -------------- ---------------- ---------- --------- ----------- ------------- ------------------- ------ ------
PDB_ADMIN FALSE TRUE FALSE FALSE FALSE FALSE EXPIRED & LOCKED DEFAULT 07-JAN-2017 04:47:03 07-JAN-2017 04:47:03 PASSWORD NO 47

Keep in mind that this service is not for system DBAs; you have the DBaaS for that. This is an Oracle-managed PDBaaS for developers, and developers get a lot of freedom here: the ability to test any feature, any optimizer setting, etc. This is a lot more than what Amazon RDS allows. You don’t have file or network access, but this is for small databases, and it is totally integrated with SQL Developer to move data. No SQL Trace, no tkprof, but you have SQL Monitor. This is a very interesting platform for small development projects, 12cR2 feature testing, or prototypes. Very easy and fast to start. You can connect and code with APEX, .Net, Java, OCI…

 

Cet article Exadata Express Cloud Service: PDB_ADMIN privileges est apparu en premier sur Blog dbi services.

Oracle Core factor and Oracle or non-Oracle Cloud


Oracle Database licensing can be based on two metrics: Processor or Named User Plus (NUP). NUP also necessitates counting the processors, because there is a minimum number of NUP per processor. It is therefore very important to know how to count processors. The time when a machine had 1 socket with 1 processor with 1 chip with 1 core running 1 thread is long gone. Today, we have physical machines with multi-core processors running multiple threads per core. And virtualization allocates part of it to virtual machines, often called vCPUs.

Oracle still uses the name ‘processor’, but the way of counting them varies by processor vendor (Core Factor), virtualization solution, and cloud provider.

References

The main documents referenced here are:

The Oracle Processor Core Factor Table: http://www.oracle.com/us/corporate/contracts/processor-core-factor-table-070634.pdf which describes how to adjust core count to processor count. It also describes how to count Oracle Cloud CPU (aka OCPU).

The Licensing Oracle Software in the Cloud Computing Environment: http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf which describes how to count Amazon AWS vCPU and Microsoft Azure Cores.

There is no mention of other cloud providers, but they all run virtualization, so the Oracle Partitioning Policy tells how to count physical vs. virtual CPUs: http://www.oracle.com/us/corporate/pricing/partitioning-070609.pdf. However, their hosting contracts may say different things.

At the end of those documents, you see a mention that they are ‘for educational purposes only’, so basically there’s no legal truth in them, and my blog post, being my interpretation of them, has no value at all… Your contracts hold the truth when it comes to legal interpretation.

So you might wonder what the point of reading those documents is if they have no value. Well, your goal, as an IT manager, is not to go to court and fight with attorneys. If you want to avoid any problems, it is better to read those rules and adhere to them, whether you find them fair or not. Of course, you can also choose to go to other vendors with other models, such as open source. But that’s not the goal of this post. Here I suppose that you like the Oracle Database technology, as much as I do, and that you are looking for a way to run it at an affordable cost without worrying about LMS audits…

Something else about those documents: they change, so you may want to look at history. For example, about Amazon and Azure, the definitions were a bit different a few days ago and the web time machine can help us: http://web.archive.org/web/20161226082104/http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf.

Vendors

Of course, there may be some technical reasons behind core factors, virtualization and cloud factors. But let’s face it: besides the database, Oracle is also a hardware vendor, a virtualization software vendor, and a cloud provider. They are at the top of the database market and want to achieve the same in the cloud market. Making the database cheaper on their own products may be a way to achieve that. Customers wanting to stay with Oracle Database will be tempted to go for the full Oracle stack.

The combination of core factor and virtualisation rules clearly disadvantages all competitors:

  • IBM LPAR virtualisation is accepted, but IBM POWER core factor makes the database 2x more expensive than on processors you find in Oracle hardware (Intel, SPARC)
  • Most datacenters run on VMware ESX, where Intel has a 0.5 core factor, but counting vCPUs is not accepted and the whole datacenter may have to be licensed
  • We are allowed to count vCPUs on Amazon and Azure virtualization, which runs Intel Xeon, but the core factor applies only on the Oracle Cloud. You need 2x the processor licenses on AWS or Azure for the same processor cores
  • Disabling cores from the BIOS is how Capacity on Demand is done on bare metal ODA, but we are not allowed to do the same on similar hardware

So basically, running Oracle Database is cheaper on Oracle Hardware, Oracle virtualization and Oracle Cloud. When you do it on other vendors, you can’t do capacity on demand, and you need more processor licenses to run on same number of CPU.
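
To put numbers on the core factor point, here is a hedged back-of-the-envelope calculation, with the factors from the table (0.5 for Intel Xeon, 1.0 for IBM POWER):

-- Processor licenses for the same 16-core server, depending on the vendor:
select '16-core Intel Xeon (factor 0.5)' server, ceil(16*0.5) proc_licenses from dual
union all
select '16-core IBM POWER (factor 1.0)', ceil(16*1.0) from dual;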

Enterprise Edition

On the Oracle Cloud, you allocate CPU resources with the OCPU shape. One OCPU is the equivalent of one Intel Xeon core with 2 threads. And one processor license covers 2 OCPUs. This looks like the core factor for Intel Xeon, even if it is not directly mentioned (but this equivalence is written in the core factor document). So one processor license covers 2 cores, i.e. 4 vCPUs with hyper-threading.

On Amazon EC2 and RDS one processor licence covers 1 core. This can be 1 vCPU (T2 and m3.medium have one thread per core) or 2 vCPU (hyper-threaded instances). The AWS document https://aws.amazon.com/ec2/instance-types/ mentions that each vCPU is a hyperthread of an Intel Xeon core except for T2 and m3.medium.

On Microsoft Azure one processor license covers 1 core. This is 1 vCPU as there is no hyper-threading enabled.
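
As a hedged summary of the three rules above, here is the Enterprise Edition license count for the same 16 hyper-threads (a sketch; check the current policy documents before relying on it):

-- Oracle Cloud: 16 threads = 8 OCPUs, 1 license per 2 OCPUs
-- AWS: 16 vCPUs = 8 hyper-threaded cores, 1 license per core
-- Azure: 16 cores (no hyper-threading), 1 license per core
select 'Oracle Cloud' platform, 8/2 ee_licenses from dual
union all
select 'AWS EC2/RDS', 8/1 from dual
union all
select 'Microsoft Azure', 16/1 from dual;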

Update 23-Jan-2017

I mentioned looking at the archive.org version of the documents. Tim Hall has described what has changed on his blog.

The previous document was not very clear, mentioning ‘virtual cores’. Actually, ‘virtual cores’ are the number of cores behind the vCPUs: one core per vCPU for T2 instances (and m3.medium), one core per two vCPUs for hyper-threaded instances. And this is listed precisely on https://aws.amazon.com/ec2/virtualcores/ for Amazon. It is also clear for Azure, as there is no hyper-threading.

But what has changed is that there were previously two mentions of applying the core factor:

  • An example on 8 virtual cores requiring 8 * 0.5 = 4 processor licenses
  • The mention that AWS implements Oracle VM and then follow the rules of hard partitioning and ‘standard processor metric’ which, IMO, includes the core factor

Those two have disappeared in the new document, with an explicit mention that "When counting Oracle Processor license requirements in Authorized Cloud Environments, the Oracle Processor Core Factor Table is not applicable." This makes the Oracle Cloud competitors two times more expensive than before for running Oracle Databases. The fun part is that IaaS providers that are not ‘Authorized’, or private clouds, can apply the core factor and count vCPUs as long as they virtualize with Oracle VM on Intel Xeon. But hosting companies may have different rules in their contracts.

NUP

When licensing in NUP, the metric is not the number of processors. But you still have to count them, because you must buy at least 25 NUP per processor. This means that if you have only 25 NUP (for a development environment with few developers, for example), you can run it on 2 cores in the Oracle Public Cloud, but on only one core on the other ‘Authorized’ ones.
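
The same 25-NUP-per-processor minimum, as a quick sketch for a 2-core shape:

-- Oracle Cloud: 2 cores = 1 processor license; AWS/Azure: 2 cores = 2 processor licenses
select 'Oracle Cloud (2 cores = 1 proc)' platform, 1*25 min_nup from dual
union all
select 'AWS or Azure (2 cores = 2 procs)', 2*25 from dual;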

Standard Edition

Note that the core factor has nothing to do with Standard Edition. The processor metric for Standard Edition is not a core but a socket. On current Intel processors, a socket can run 12, 18, or 24 cores. But the maths are different in the Cloud from an Oracle licensing perspective.

In the Oracle Cloud, one ‘socket’ is 4 OCPUs, which is 4 cores (8 threads). An instance with 1 to 8 vCPUs requires 1 SE2 processor license. 9 to 16 vCPUs require 2 SE2 processor licenses, and I think that is the maximum you can have on the Oracle Cloud. So the idea is that one socket is equivalent to 4 cores. This is low, but it was worse before October, when one ‘socket’ was only 2 cores.

It is worse for the competitors’ clouds, as this has not changed: it is still one ‘socket’ for 2 cores. This means that in Azure an SE2 processor license covers 2 cores, and in Amazon it covers 4 vCPUs (which are 4 threads on 2 cores).

Standard Edition 2

Now, if you remember that SE2 has a limit of 16 threads on-premises, the limit in the Cloud is 8 vCPUs on Amazon. You cannot run SE2 on an AWS instance with more than 8 vCPUs. This is a big limitation. On-premises, the limit is soft only: the resource manager schedules at most 16 user sessions on CPU, but more resources can be used by background processes. On AWS, you are limited to 8 CPU threads in total, which means that you have only 4 cores for all processes – user and background included. And CPU starvation is not very good when you wait for LGWR.

This is even worse on Azure where the limit is 4 cores without hyper-threading.

This is clearly stated in the ‘Authorized Cloud’ document:
"Oracle Standard Edition One and Standard Edition 2 may only be licensed on Authorized Cloud Environment instances up to eight Amazon vCPUs or four Azure CPU Cores." Here again, SE2 comes at the price of SE but with the limitations of SE1.

Once again, the rules are more friendly for the Oracle Cloud, where you can create a Standard Edition instance on 16 OCPUs (which is 32 threads). The SE2 resource manager will still limit to 16 user sessions, and each will have one core. And the background processes will be able to use the additional resources coming from hyper-threading.

ULA

There are two important points regarding ULA and ‘authorized clouds’: ULA licenses can be used during the ULA, but they don’t count in the certification to exit the ULA.

So what?

I don’t think I’ll risk a conclusion here. Having different prices for the same software running on similar hardware doesn’t look fair. I’m more annoyed by the difficulty to compare, especially when the rules change from time to time. But I can understand that vendors want to push their own products.

Let’s get technical. My recommendation is to use the minimum of CPU resources, so that you don’t have to run your application on dozens of vCPUs. This is why we like Oracle: it is very efficient both for OLTP and BI. Most of the code used by your critical use-cases was optimized decades ago for mono-core processors. Look at an AWR report, identify the queries that are responsible for most of the CPU, and tune them. Stop hard parsing over and over. Stop reading the same blocks again and again. Stop doing row-by-row round-trips to your database. Don’t waste your CPU cycles doing all this unnecessary work and you will save money, whatever the ‘core factor’ is.

 

Cet article Oracle Core factor and Oracle or non-Oracle Cloud est apparu en premier sur Blog dbi services.

Running SLOB on Exadata Express Cloud Service


The Exadata Express Cloud Service is a managed PDBaaS: Oracle is the system admin and the CDB database administrator, you are the PDB administrator. You connect with a local user, PDB_ADMIN, which has no SYSDBA privilege but has the PDB_DBA role, which grants nearly all DBA rights, with some features disabled by lockdown profile.
I have no worry about the performance of this service: it is an Exadata X5 half rack, bare metal (my guess), with the CDB running as RAC One Node on 2 compute nodes, accessing the 7 storage cells. Smart Scan is disabled and, given the small size of the PDB and the low usage of the CDB, I/O is fast (most single block latency is between 128 and 256us, coming from cell flash cache).
But I like to run SLOB on new platforms, and I wanted to see if I could run it here, without the SYSDBA role, and connecting to a CDB.

TNS_ADMIN

I’ll connect remotely because SLOB will be running on my computer (I have no access to the host for the PDBaaS managed service). In order to connect remotely, through SQL*Net over SSL, you have to download the client credentials kit.

It is a zip file that contains the encryption wallet and the sqlnet.ora and tnsnames.ora to use. I unzip it in a directory (/var/tmp/client_credentials in this example) and I’ll use it by setting the TNS_ADMIN environment variable to this directory.
Thus I change the wallet location in the sqlnet.ora:
$ cat sqlnet.ora
WALLET_LOCATION = (SOURCE = (METHOD = file)
(METHOD_DATA = (DIRECTORY="$TNS_ADMIN")))

and I use the provided tnsnames.ora which defines the ‘dbaccess’ service:
$ cat tnsnames.ora
dbaccess = (description=
(address=(protocol=tcps)(port=1522)(host=dbaccess.us2.oraclecloudapps.com))
(connect_data=(service_name=eptdojjm1ag.usdc2.oraclecloud.com))
(security=(ssl_server_cert_dn="CN=dbaccess.us2.oraclecloudapps.com,O=Oracle Corporation,L=Redwood Shores,ST=California,C=US"))
)
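
So the remote connection is simply (a minimal sketch, using the directory from the example above):

$ export TNS_ADMIN=/var/tmp/client_credentials
$ sqlplus pdb_admin@dbaccess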

Changes in slob.conf

The default slob.conf connects with a bequeath connection. Here I need to connect through the network service described above. I uncommented:

#ADMIN_SQLNET_SERVICE=slob
#SQLNET_SERVICE_BASE=slob
#SYSDBA_PASSWD=change_on_install

and replaced it with:

ADMIN_SQLNET_SERVICE=dbaccess
SQLNET_SERVICE_BASE=dbaccess
SYSDBA_PASSWD=xxxxxxxxx

Where xxxxxxxxx is the PDB_ADMIN password that I set up in the service console.

Of course, we need an Oracle client. Download the Instant Client if you don’t have one.

Changes in setup.sh

SLOB creates the users and grants them RESOURCE and DBA.
In Exadata Express Cloud Service, I can’t grant the DBA role, but CREATE SESSION and the PDB_DBA role are sufficient. In addition to that, I need to give a quota on the IOPS tablespace because PDB_ADMIN does not have UNLIMITED TABLESPACE. Finally, here is the create user part after my modifications:


msg NOTIFY "Creating user: $user "
 
sqlplus "$constring" <<EOF
WHENEVER SQLERROR EXIT 2;
SET TERMOUT ON
SET ECHO ON
PROMPT User grants for $user
WHENEVER SQLERROR EXIT 2;
GRANT CREATE SESSION TO $user IDENTIFIED BY $user;
GRANT PDB_DBA TO $user;
ALTER USER $user DEFAULT TABLESPACE $tablespace ;
ALTER USER $user QUOTA UNLIMITED ON $tablespace ;
EXIT;
EOF

The setup.sh starts by dropping the SLOB schemas, up to MAX_SLOB_SCHEMAS, just in case they are there. The default is 4096, but that’s too much for me because of the latency to the server in the US (it is planned to have Exadata Express Cloud Service in Europe in the future), so I reduced it to 8 (as I have only 1 CPU in this service, there’s no need for more users):

export MAX_SLOB_SCHEMAS=8

And the last modification in setup.sh was to disable the multitenant check, because PDBaaS is by definition on a multitenant database:


if ( ! check_mto "$ADMIN_CONNECT_STRING")
then
msg FATAL ""
msg FATAL "This version of SLOB does not support Oracle Multitenant Option"
msg FATAL
#exit 1
fi

I’ve not seen any problem. Oracle designed multitenant so that everything you used to do on a database is possible to do on a PDB.

Changes in runit.sh

First, in order to connect as PDB_ADMIN instead of SYS as SYSDBA, you have to change the admin_connect_string:

export admin_connect_string="pdb_admin/${SYSDBA_PASSWD}@${ADMIN_SQLNET_SERVICE}"

Second, SLOB calls UTL_FILE, which is not allowed in Exadata Express Cloud Service. Because I’ve seen in the code that this is bypassed when the ‘nt’ argument is 1, I’ve set it to this value in runit.sh:

arg9=1 #was: $nt

Finally, because we can’t create manual AWR snapshots in Exadata Express Cloud Service, I replaced the EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT with the creation of a small view to get the basic statistics:

create or replace view FPAV as
select 'STAT' class,name,value from V$SYSSTAT
union all
select 'WAIT' class,event,total_waits from V$SYSTEM_EVENT
union all
select 'TIME' class,stat_name,value from V$SYS_TIME_MODEL
union all
select 'V$OS' class,stat_name,value from V$OSSTAT
union all
select 'MICRO' class,event||' - '||wait_time_format,wait_count from V$EVENT_HISTOGRAM_MICRO
;
create table FPAT as select sysdate time,FPAV.* from FPAV where 1=0;
insert into FPAT select sysdate time,FPAV.* from FPAV;
commit;

and I replaced the part that runs the AWR report with a simple query:


$admin_conn <<EOF > awr.txt
column per_sec format 999G999G999G999G999
spool awr.txt append
select to_char(btim,'hh24:mi:ss') btime,to_char(time,'hh24:mi:ss')etime,seconds,class,name,round(delta/seconds) per_second , decode(class||' '||name
,'STAT session logical reads','LIO/s'
,'STAT physical read total IO requests','PIO/s'
,'TIME DB CPU','CPU us'
,'TIME DB time','DB time us'
,'STAT physical read total bytes','Read B/s'
,'STAT physical write total bytes','Write B/s'
,'V$OS BUSY_TIME','OS busy cs'
,'WAIT resmgr:cpu quantum','resmgr'
) main from (
select
FPAT.*
,max(time)over(order by time rows between unbounded preceding and unbounded following) last
,value-lag(value)over(partition by class,name order by time) delta
,(time-lag(time)over(partition by class,name order by time))*24*60*60 seconds
,lag(time)over(partition by class,name order by time) btim
from FPAT
) where time=last and delta>0 order by main,per_second desc;
spool off
exit
EOF

Statspack is not an alternative here because it requires some views and grants from SYS, which you cannot do on Exadata Express Cloud Service. I really don’t understand why we cannot use AWR locally, because in 12.2 you can have AWR snapshots at PDB level. But any use of dbms_workload_repository is disabled by lockdown profile.
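
For reference, on a 12.2 PDB that is not locked down, local AWR would simply be (a sketch of what is blocked here):

SQL> alter system set awr_pdb_autoflush_enabled=true;
SQL> exec dbms_workload_repository.create_snapshot;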

Result

So this is the kind of output I get when running 2 SLOB sessions for 10 minutes:


BTIME ETIME SECONDS CLAS NAME PER_SECOND MAIN
-------- -------- ---------- ---- ---------------------------------------------------------------- ---------- ----------
11:16:09 11:26:15 606 TIME DB CPU 1968151 CPU us
11:16:09 11:26:15 606 TIME DB time 1981479 DB time us
11:16:09 11:26:15 606 STAT session logical reads 1228557 LIO/s
11:16:09 11:26:15 606 STAT physical read total IO requests 0 PIO/s
11:16:09 11:26:15 606 STAT physical read total bytes 189 Read B/s
11:16:09 11:26:15 606 STAT physical write total bytes 189 Write B/s
11:16:09 11:26:15 606 WAIT resmgr:cpu quantum 0 resmgr
11:16:09 11:26:15 606 STAT logical read bytes from cache 1.0064E+10
11:16:09 11:26:15 606 TIME sql execute elapsed time 1980376
11:16:09 11:26:15 606 V$OS IDLE_TIME 6787
11:16:09 11:26:15 606 V$OS BUSY_TIME 391
11:16:09 11:26:15 606 V$OS USER_TIME 311

1.96 CPU seconds per second and 1.98 of DB time: those are my 2 cached SLOB sessions.
About 1.2 million LIOs per second: this is what I expect from those tests on Exadata X5 (PCT_UPDATE=0 and WORK_UNIT=64).
The resource manager is ready to kick in, as CPU_COUNT is set to 2 in my PDB (set by lockdown profile; you cannot change it).
The overall system hosting the CDB has been mostly idle: 3.91 / (67.87 + 3.91) is 5.44% busy. I’m responsible for 2 sessions over 72 threads (36 cores): 2/72 = 2.78% CPU usage.

The Exadata Express Cloud Service allocates 1 core (2 threads), but because the system is nearly idle, I didn’t use 2 threads of the same core. Let’s compare with a run of only 1 session:


BTIME ETIME SECONDS CLAS NAME PER_SECOND MAIN
-------- -------- ---------- ---- ---------------------------------------------------------------- ---------- ----------
12:15:49 12:25:55 606 TIME DB CPU 989008 CPU us
12:15:49 12:25:55 606 TIME DB time 991198 DB time us
12:15:49 12:25:55 606 STAT session logical reads 668368 LIO/s
12:15:49 12:25:55 606 STAT physical read total IO requests 0 PIO/s
12:15:49 12:25:55 606 STAT physical read total bytes 203 Read B/s
12:15:49 12:25:55 606 STAT physical write total bytes 189 Write B/s
12:15:49 12:25:55 606 STAT logical read bytes from cache 5475272359
12:15:49 12:25:55 606 TIME sql execute elapsed time 990290
12:15:49 12:25:55 606 STAT consistent gets 668368

Comparing logical reads, 1228557/668368=1.8, so the two sessions were probably not running on the same core.

So what?

SLOB (https://kevinclosson.net/slob/) doesn’t need to be installed on the database server because it is mostly PL/SQL, so there are no round-trips. And this is a way to run it when you only have access to a PDB, without SYSDBA privileges.

 

Cet article Running SLOB on Exadata Express Cloud Service est apparu en premier sur Blog dbi services.
