
Oracle disables your multitenant option when you run on EC2


I have installed Oracle 19.6 on an EC2 instance for our Multitenant Workshop training. And of course, during the workshop we create a lot of PDBs. If you haven't paid for the Enterprise Edition plus the Multitenant Option, you can create at most 3 pluggable databases. With this option you can create up to 252 pluggable databases. Is it worth the price, which according to the public price list is USD 47,500 + 17,500 per processor, meaning per core because Oracle doesn't count the core factor when your Intel processors run in the AWS Cloud (according to the Authorized Cloud Environments paper)? Probably not, because Oracle detects where you run and bridles some features depending on whether you are on the Dark or the Light Side of the public cloud (according to their criteria, of course).

At one point I have 3 pluggable databases in my CDB:


SQL> show pdbs
   CON_ID     CON_NAME    OPEN MODE    RESTRICTED
_________ ____________ ____________ _____________
        2 PDB$SEED     READ ONLY    NO
        3 CDB1PDB01    MOUNTED
        4 CDB1PDB03    MOUNTED
        5 CDB1PDB02    MOUNTED

I want to create a 4th one:


SQL> create pluggable database CDB1PDB04 from CDB1PDB03;

create pluggable database CDB1PDB04 from CDB1PDB03
                          *
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

It fails. The maximum number of pluggable databases is defined by MAX_PDBS, but I defined nothing in my SPFILE:


SQL> show spparameter max_pdbs
SID NAME     TYPE    VALUE
--- -------- ------- -----
*   max_pdbs integer

I thought that the default was 4098 (which is incorrect anyway as you cannot create more than 4096) but it is actually 5 here:


SQL> show parameter max_pdbs
NAME     TYPE    VALUE
-------- ------- -----
max_pdbs integer 5

Ok… this parameter is supposed to count the number of user pluggable databases (the ones with CON_ID>2) and I have 3 of them here. The limit is 5 and I get an error saying that I've reached it. That's not the first time I've seen wrong maths with this parameter. But there's worse: I cannot change it:


SQL> alter system set max_pdbs=6;

alter system set max_pdbs=6
 *
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-65334: invalid number of PDBs specified

I can change it in the SPFILE but it doesn’t help me to create more pluggable databases:


SQL> alter system set max_pdbs=200 scope=spfile;

System altered.

SQL> startup force;

Total System Global Area   2147482744 bytes
Fixed Size                    9137272 bytes
Variable Size               587202560 bytes
Database Buffers           1543503872 bytes
Redo Buffers                  7639040 bytes
Database mounted.
Database opened.

SQL> show parameter max_pdbs
NAME     TYPE    VALUE
-------- ------- -----
max_pdbs integer 200

SQL> create pluggable database CDB1PDB04 from CDB1PDB03;

create pluggable database CDB1PDB04 from CDB1PDB03
                          *
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

Something is bridling me. There's a MOS note, "ORA-65010 When Oracle Database Hosted on AWS Cloud (Doc ID 2328600.1)", about the same problem, but that was for 12.1.0.2 (before MAX_PDBS was introduced) and was supposedly fixed in the AUG 2017 PSU. Yet here I am, 3 years later, in 19.6 (the January 2020 Release Update of the latest version available on-premises).

So, Oracle limits the number of pluggable databases when you are on a public cloud provider which is not the Oracle Public Cloud. This limitation is not documented in the licensing documentation, which mentions 252 as the Enterprise Edition limit, and I see nothing about "Authorized Cloud Environments" limitations for this item. This, and the fact that it can come and go with Release Updates, puts customers at risk when running on AWS EC2: financial risk and availability risk. I think there are only two choices, in the long term, when you want to run your database on a cloud: go to Oracle Cloud or leave for another database.

How does the Oracle instance know which public cloud you run on? All cloud platforms provide some metadata through an HTTP API. I straced all sendto() and recvfrom() system calls while starting the instance:


strace -k -e trace=recvfrom,sendto -yy -s 1000 -f -o trace.trc sqlplus / as sysdba <<<'startup force'

Then I searched the output for Amazon and AWS strings.
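A simple grep over the resulting trace file is enough for that (a minimal sketch; the search patterns are an assumption based on what the trace revealed):

[oracle@ora-cdb-1 ~]$ # look for references to the AWS metadata service in the strace output
[oracle@ora-cdb-1 ~]$ grep -inE 'amazon|aws|169\.254\.169\.254' trace.trc | head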

This is clear: the instance has a function to detect the cloud provider (kgcs_clouddb_provider_detect) when initializing the SGA in a multitenant architecture (kpdbInitSga), with the purpose of detecting non-Oracle clouds (kscs_is_non_oracle_cloud). It queries the AWS metadata (documented under Retrieving Instance Metadata):


[oracle@ora-cdb-1 ~]$ curl http://169.254.169.254/latest/meta-data/services/domain
amazonaws.com/

When the Oracle software sees the name of the enemy in the domain name amazonaws.com, it sets an internal limit on the number of pluggable databases that bypasses the MAX_PDBS setting. OK, I don't need this metadata, and I'm root on my EC2 instance, so my simple workaround is to block the metadata API:


[root@ora-cdb-1 ~]# iptables -A OUTPUT -d 169.254.169.254  -j REJECT
[root@ora-cdb-1 ~]# iptables -L
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
REJECT     udp  --  anywhere             10.0.0.2             udp dpt:domain reject-with icmp-port-unreachable
REJECT     all  --  anywhere             10.0.0.2             reject-with icmp-port-unreachable

Then restart the instance and it works: I can set or reset MAX_PDBS and create more pluggable databases.
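For example, something like this (a sketch of the commands involved, output omitted):

SQL> -- with the metadata API blocked, MAX_PDBS behaves as documented again
SQL> alter system reset max_pdbs scope=spfile;
SQL> startup force
SQL> create pluggable database CDB1PDB04 from CDB1PDB03;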

I can remove the rule


[root@ora-cdb-1 ~]# iptables -D OUTPUT -d 169.254.169.254  -j REJECT

if, for whatever reason, I want to revert back.

Finally, because they had many bugs with the MAX_PDBS soft limit, there's a parameter to disable it, and this also disables the hard limit:


SQL> alter system set "_cdb_disable_pdb_limit"=true scope=spfile;
System altered.

Thanks to Mauricio Melnik for the heads-up on that.

However, with this parameter you can no longer control the maximum number of PDBs, so don't forget to monitor the AUX_COUNT in DBA_FEATURE_USAGE_STATISTICS.
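A query like the following can serve as that monitoring (a minimal sketch; the feature name filter is an assumption):

SQL> -- AUX_COUNT of the multitenant feature reflects the number of user PDBs
SQL> select name, detected_usages, aux_count
     from dba_feature_usage_statistics
     where name like 'Oracle Multitenant%';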

That was my discovery while preparing the multitenant workshop lab environment. Note that, given the current situation where everybody works from home when possible, we are ready to give this training, full of hands-on exercises, through Microsoft Teams and AWS EC2 virtual machines. Two days to get comfortable with moving to the CDB architecture, which is what should be done this year if you plan to stay with Oracle Database for future versions.

The article Oracle disables your multitenant option when you run on EC2 first appeared on the dbi services blog.


A change in full table scan costs in 19c?


During tests in Oracle 19c I recently experienced this:

cbleile@orcl@orcl> select * from demo4 where m=103;
cbleile@orcl@orcl> select * from table(dbms_xplan.display_cursor);
...
---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       | 26439 (100)|          |
|*  1 |  TABLE ACCESS FULL| DEMO4 |     1 |    10 | 26439  (14)| 00:00:02 |
---------------------------------------------------------------------------

-> The cost of the full table scan is 26439.

Setting optimizer_features_enable back to 18.1.0 showed a different full table scan cost:

cbleile@orcl@orcl> alter session set optimizer_features_enable='18.1.0';
cbleile@orcl@orcl> select * from demo4 where m=103;
cbleile@orcl@orcl> select * from table(dbms_xplan.display_cursor);
...
---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       |   109K(100)|          |
|*  1 |  TABLE ACCESS FULL| DEMO4 |     1 |    10 |   109K  (4)| 00:00:05 |
---------------------------------------------------------------------------

-> The cost is 109K, versus around 26K in 19c.

Why do we have such a difference in the cost of a full table scan between 18c and 19c?
With the CPU cost model, the cost of a full table scan is computed as follows:

FTS Cost = ((BLOCKS/MBRC) x MREADTIM)/ SREADTIM + “CPU-costs”
REMARK: This is not 100% correct, but the difference is not important here.
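The I/O part of this formula can be cross-checked directly against the dictionary. Here is a minimal sketch (it uses the simplified formula above and ignores the CPU part):

SQL> -- simplified I/O cost: ((BLOCKS / MBRC) * MREADTIM) / SREADTIM
SQL> select round((t.blocks / b.pval1) * m.pval1 / s.pval1) io_cost
     from dba_tables t, sys.aux_stats$ b, sys.aux_stats$ m, sys.aux_stats$ s
     where t.table_name = 'DEMO4'
       and b.pname = 'MBRC' and m.pname = 'MREADTIM' and s.pname = 'SREADTIM';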

In my case:

cbleile@orcl@orcl> select blocks from tabs where table_name='DEMO4';
 
    BLOCKS
----------
     84888
 
cbleile@orcl@orcl> select * from sys.aux_stats$;
 
SNAME                          PNAME                          PVAL1      PVAL2
------------------------------ ------------------------------ ---------- ----------
...
SYSSTATS_MAIN                  SREADTIM                       1
SYSSTATS_MAIN                  MREADTIM                       10
SYSSTATS_MAIN                  CPUSPEED                       2852
SYSSTATS_MAIN                  MBRC                           8
...

I.e.
FTS Cost = ((BLOCKS/MBRC) x MREADTIM) / SREADTIM + CPU = ((84888/8) x 10) / 1 + CPU = 106110 + CPU
Considering the additional CPU cost, we are at the cost we see in 18c: 109K.
So why do we see a cost of only 26439 in 19c (around 25% of the 18c cost)?
The reason is that the optimizer considers the system statistics to be "wrong" here. Let's check the system statistics again:

SNAME                          PNAME                          PVAL1      PVAL2
------------------------------ ------------------------------ ---------- ---------
SYSSTATS_MAIN                  SREADTIM                       1
SYSSTATS_MAIN                  MREADTIM                       10
SYSSTATS_MAIN                  MBRC                           8

In theory it should not be possible that MREADTIM > SREADTIM * MBRC: reading e.g. 8 contiguous blocks from disk cannot be slower than reading 8 random blocks from disk. Oracle has taken this into account, treats such system statistics as wrong and uses different values internally. The change was implemented with bug fix 27643128. See My Oracle Support note "Optimizer Chooses Expensive Index Full Scan over Index Fast Full Scan or Full Table Scan from 12.1 (Doc ID 2382922.1)" for details.

Switching the bug fix off results in the same full table scan cost as in 18c:

cbleile@orcl@orcl> alter session set optimizer_features_enable='19.1.0';
cbleile@orcl@orcl> alter session set "_fix_control"='27643128:OFF';
cbleile@orcl@orcl> select * from demo4 where m=103;
cbleile@orcl@orcl> select * from table(dbms_xplan.display_cursor);
...
---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       |   109K(100)|          |
|*  1 |  TABLE ACCESS FULL| DEMO4 |     1 |    10 |   109K  (4)| 00:00:05 |
---------------------------------------------------------------------------

To get the intended behavior in 19c, make sure that
MREADTIM <= SREADTIM * MBRC
E.g. in my case:

cbleile@orcl@orcl> alter system set db_file_multiblock_read_count=12;
cbleile@orcl@orcl> exec dbms_stats.set_system_stats('MBRC',12);
cbleile@orcl@orcl> alter session set optimizer_features_enable='19.1.0';
cbleile@orcl@orcl> select * from demo4 where m=103;
cbleile@orcl@orcl> select * from table(dbms_xplan.display_cursor);
...
---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       | 74188 (100)|          |
|*  1 |  TABLE ACCESS FULL| DEMO4 |     1 |    10 | 74188   (5)| 00:00:03 |
---------------------------------------------------------------------------
...
 
cbleile@orcl@orcl> alter session set optimizer_features_enable='18.1.0';
cbleile@orcl@orcl> select * from demo4 where m=103;
cbleile@orcl@orcl> select * from table(dbms_xplan.display_cursor);
...
---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       | 74188 (100)|          |
|*  1 |  TABLE ACCESS FULL| DEMO4 |     1 |    10 | 74188   (5)| 00:00:03 |
---------------------------------------------------------------------------
...

I.e. the costs in 19c and 18c are the same again.

Please consider the following:
– If you've gathered or set system statistics, always check that they are reasonable.
– If you work with a very low SREADTIM and a high MREADTIM to favor index access (instead of using low values for OPTIMIZER_INDEX_COST_ADJ), make sure that MREADTIM <= SREADTIM * MBRC; otherwise you may see plan changes when migrating to 19c. A quick check is sketched below.
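A quick check for that condition could look like this (a sketch, reading sys.aux_stats$ directly):

SQL> -- returns a row only when the system statistics violate MREADTIM <= SREADTIM * MBRC
SQL> select s.pval1 sreadtim, m.pval1 mreadtim, b.pval1 mbrc
     from sys.aux_stats$ s, sys.aux_stats$ m, sys.aux_stats$ b
     where s.pname = 'SREADTIM' and m.pname = 'MREADTIM' and b.pname = 'MBRC'
       and m.pval1 > s.pval1 * b.pval1;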

The article A change in full table scan costs in 19c? first appeared on the dbi services blog.

Oracle recovery concepts


A while ago I published a Twitter thread on some Oracle recovery concepts. For those who do not follow Twitter, here is the whole thread:
 

🔴⏬ Here I start a thread about some Oracle Database concepts. We will see how far it goes - all questions/comments welcome.

🔴⏬ A database (or DBMS - database management system) stores (for short and long term) and manipulates (from many concurrent users/devices) your #data.

🔴⏬ #data is logically structured (tablespaces, schemas, tables, columns, datatypes, constraints,…). The structure is described by #metadata.

🔴⏬ Stored #data (tables and indexes) is physically structured (blocks, extents, segments, datafiles), which is also maintained by #metadata

🔴⏬ The logical #metadata and some physical #metadata are stored in the #dictionary, also known as the #catalog

🔴⏬ The #dictionary is also data itself: it is the system data describing the user data. Its own metadata is fixed, hardcoded in the software for #bootstrap

🔴⏬ The set of persistent files that stores system data (metadata for user data) and user data is what we call (with Oracle) the #database

🔴⏬ The database files are internally referenced by an identifier (file_id, file#, Absolute File Number,…). The file names are not defined in the #dictionary but (only) in the main metadata file for the database, the #controlfile

🔴⏬ So, we have user #data and its #metadata (system data) in #datafiles. Where is metadata for system data? The code that contains this metadata (structures), and the functions to manipulate data, is the database #software

🔴⏬ Oracle database software installed on the server is often referred by its install location environment variable, the $ORACLE_HOME

🔴⏬ As with any software, the Oracle Database binary code is loaded by the OS to be run by multiple #process, which work in #memory

🔴⏬ The processes and memory running the Oracle software for one database system is what is called an Oracle #instance

🔴⏬ I think the #instance was simply called the Oracle ‘system’ at some time as we identify with Oracle System ID (ORACLE_SID) and modify it with ALTER SYSTEM

🔴⏬ The #instance processes open the database files when needed, to read or write data (user #data or #metadata), when started in state #OPEN

🔴⏬ Before being in #OPEN state, where all database files are opened, the instance must read the list of database files from the #controlfile, when the state is #MOUNT

🔴⏬ Multiple instances can open the same database from different nodes, in the case of Real Application Clusters (RAC), and synchronize themselves through the #shared storage and the private network #interconnect

🔴⏬ For the instance to know which #controlfile to read, we provide its name as an instance parameter, which is read when the instance is started in its first state: #NOMOUNT

🔴⏬Those parameters (memory sizes, database to open, flags…) are stored on the server in the instance configuration file, the server-side parameter file: the #spfile

🔴⏬ The #spfile is in binary format, stored on the server, updated by the instance. When we create a new database we create the spfile from a #pfile

🔴⏬ The #pfile is a simple text file that lists the instance parameter names and value (and comments). It is also referred to as #init.ora

🔴⏬ So, the #spfile or #pfile is the instance metadata used to open the #controlfile, which is the database metadata, which is used to open the dictionary, which is the user data metadata used to… but at the root is the $ORACLE_SID

🔴⏬ $ORACLE_SID identifies which #spfile or #pfile to read when starting the instance in #nomount. It is an #environment #variable

🔴⏬ The instance will read, by default in $ORACLE_HOME/dbs, spfile$ORACLE_SID.ora or init$ORACLE_SID.ora or init.ora when not provided with a specific #pfile

🔴⏬ That’s for the first to start the #instance. But when the instance is running we can connect to it by attaching our process to the shared memory structure, called System Global Area: #SGA

🔴⏬ The #SGA shared memory is identified with a key that is derived from ORACLE_SID and ORACLE_HOME. If no instance was started with the same ORACLE_SID and ORACLE_HOME (literally the same) you get a “connect to idle instance”

🔴⏬ Of course, connecting by attaching to the SGA is possible only from the same host. This protocol is called #bequeath connection

🔴⏬ In order to connect remotely, there’s a process running on the server which knows the $ORACLE_SID and $ORACLE_HOME. It is the local #listener

🔴⏬ The local listener listens on a TCP/IP port for incoming connection and handles the creation of process and attach to the SGA, just by being provided with the desired #service_name

🔴⏬ So, how does the local listener know which #service_name goes to which instance (and then which database)? It can be listed in the listener’s configuration, that’s #static registration

🔴⏬ But in High Availability, where multiple instances can run one service, it is the instance which tells its local listener which service it runs. That's #dynamic registration

🔴⏬ Of course, the connection can start a session only when authorized (CREATE SESSION privilege) so the user/password hash is verified and also the privileges. All this is stored in the dictionary. V$SESSION_CONNECT_INFO shows that as #database #authentication

🔴⏬ Database authentication can be done only when the database is opened (access to the dictionary). Not possible in #nomount or #mount. For these, the system passwords are cached in a #password file

🔴⏬ The password file is found in $ORACLE_HOME/dbs and its name is orapw$ORACLE_SID and, once created, changing of password must be done from the database to be sure it is in sync between dictionary and password file

🔴⏬ When connecting locally with bequeath protocol, belonging to a privileged system group may be sufficient. The password provided is not even verified in that case. That uses #OS authentication

🔴⏬ #OS authentication is one case of passwordless authentication. By default, the Linux users in group ‘dba’ have passwordless bequeath (i.e local) access with the highest privilege #sysdba

🔴⏬ I said that data modifications are written to database files, but that would not be efficient when having multiple users because the database is #shared

🔴⏬ Reading from #shared resources requires only a ‘share’ lock but the only way to write without corruption to a shared resource is with an #exclusive lock

🔴⏬ Locking a full portion of data to write directly to disk is done only for specific non-concurrent bulk loads (direct-path inserts like with APPEND hint). All conventional modifications are done in memory into shared #buffers

🔴⏬ Writing the modifications to disk is done asynchronously by a background process, the #dbwriter

🔴⏬ Reads are also going through the #shared buffers to be sure to see the current version. This is a #logical read

🔴⏬ If the buffer is not already in memory, then before the logical read, it must be read from disk with a #physical read

🔴⏬ Keeping many buffers in memory for a while also saves a lot of disk access which is usually latency expensive. This memory is stored in the #SGA as the #buffer cache

🔴⏬ As the changes are written in memory (buffer cache), they can be lost in case of instance or server crash. To protect for this, all changes are logged to a #log buffer

🔴⏬ The log buffer is in memory and asynchronously flushed to persistent storage (disk), to the #online redo logs (and, maybe, to some #standby redo logs in remote locations).

🔴⏬ When a user commits a transaction, the server must be sure that the redo which protects their changes is flushed to disk. If not, before saying 'commit successful' it waits on #log file sync

🔴⏬ When changes are written in memory (#buffer cache) the #redo that protects it from an instance crash must be written to persistent storage before the change itself. This is #WAL (Write Ahead Logging)

🔴⏬ This redo is written by #logwriter. It must be fast because this is where the user may have to wait for physical writes. The advantage is that the writes are sequential, with higher throughput than the #dbwriter which has to do #random writes scattered in all datafiles

🔴⏬ The #redo log stream can be long as it contains all changes. But for server/instance recovery we need only the redo for the changes that were not yet flushed to disk by #dbwriter, in the #dirty blocks

🔴⏬ The instance ensures that regularly all #dirty buffers are flushed, so that the previous #redo can be discarded. It is known as a #checkpoint

🔴⏬ That’s sufficient for instance recovery (redo the changes that were made only in memory and lost by the instance crash) but what if we lose or corrupt a #datafile, like #media failure?

🔴⏬ As with any #persistent data, we must take backups (copy of files) so that, in case of some loss or corruption, we can #restore in a predictable time.

🔴⏬ After restoring the backup we need to apply the redo to roll forward the modifications that happened between the beginning of backup until the point of failure. That’s media #recovery

🔴⏬ The recovery may need more than the online redo logs for the changes between the restored backup and the last checkpoint. This is why before being overwritten, the online redo logs are #archived

🔴⏬ We always want to protect for instance failure (or all the database is inconsistent) but we can choose not to protect for media failure (and accept outage at backup and transaction loss at restore) when the database is in #noarchivelog mode

🔴⏬ If the redo cannot be written to disk, the database cannot accept more changes as it cannot ensure the D in ACID: transaction #durability

🔴⏬ As the online redo logs are allocated and formatted at instance startup, they can always be written, even if the filesystem is full (except if the filesystem size is virtual). But they can be overwritten only when a checkpoint has made them inactive, or we wait on "checkpoint not complete"

🔴⏬ In archive log mode, there’s another requirement to overwrite an online redo log: it must have been archived to ensure media recovery. If not yet archived, we wait on “file switch (archiving needed)”.

🔴⏬ Archived logs are never overwritten. If the destination is full, the instance hangs. You need to move them or back them up elsewhere. The most important thing to monitor is V$RECOVERY_AREA_USAGE, so that PERCENT_SPACE_USED - PERCENT_SPACE_RECLAIMABLE never reaches 100% (stuck archiver)
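A monitoring query for that could look like this (a minimal sketch; the alert threshold is up to you):

SQL> -- net usage of the recovery area: raise an alert well before this reaches 100
SQL> select sum(percent_space_used - percent_space_reclaimable) pct_really_used
     from v$recovery_area_usage;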

🔴⏬ How long to keep the archived logs or their backups? You need the redo from the latest backup you may want to restore: the backup retention window. When a database backup is obsolete, the previous archived logs become obsolete. RMAN knows that and you just “delete obsolete”
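In RMAN terms this could look like the following (a sketch; the 7-day recovery window is just an example value):

RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
RMAN> REPORT OBSOLETE;   # lists backups and archived logs no longer needed for the retention policy
RMAN> DELETE OBSOLETE;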

🔴⏬ In the recovery area, files are managed by Oracle and you don’t need to “delete obsolete”. Obsolete files are automatically deleted when space is needed (“space pressure”). That’s the PERCENT_SPACE_RECLAIMABLE of V$RECOVERY_AREA_USAGE

🔴⏬ at the end of recovery, the database state is recovered at the same state as it was at the point-in-time when the last applied redo was generated. Only some uncommitted changes are lost (those that were in log buffer, in memory, and now lost).

🔴⏬ If the recovery reaches the point of failure, all new changes can continue from there. If not, because we can’t or just because we do a point-in-time recovery, the chain of redo is broken and we #resetlogs to a new #incarnation.

🔴⏬ At the end of recovery, the transactions that are not committed cannot continue (we lost the state of the session, probably some redo and block changes, and the connection to the user is lost) and must be un-done with #rollback

🔴⏬ Oracle does not store directly the #rollback information in the redo stream like other databases, because #rollback is also used for another reason: rollback a transaction or re-build a past version of a block.

🔴⏬ When data is changed (in a memory buffer) the #rollback information that can be used to build the previous version of the buffer is stored as special system data: the #undo

🔴⏬ Actually, the #rollback segment is involved for all letters in ACID, mainly: Atomicity (if we cannot commit we must rollback) and Isolation (undo the uncommitted changes made by others)

🔴⏬ Whereas the #redo is primarily optimized for sequential access over time (replay all changes in the same order), the #undo is optimized to be accessed by #transaction (@jloracle Why Undo? https://jonathanlewis.wordpress.com/2010/02/09/why-undo/)

🔴⏬In summary, changes made to data and undo blocks generate the #redo for it, which is applied in memory to the buffer. This redo goes to disk asynchronously. When your changes are committed, the database guarantees that your redo reached the disk so that recovery is possible.

🔴⏬ The rollforward + rollback is common to many databases, but some are faster than others there. PostgreSQL stores old versions in-place and this rollback phase is immediate. Oracle stores it in UNDO, checkpointed with data, but has to rollback all incomplete transactions.

🔴⏬ MySQL InnoDB is similar to Oracle. SQL Server stores the undo with the redo, then may have to read the transaction log from before the last checkpoint if transactions stay open for long, so the rollforward time can be unpredictable. This changed recently with Accelerated Database Recovery.

🔴⏬ For an in-depth look at Oracle recovery internals, there is this old document still around on internet archives. It is from Oracle7, 25 years ago, but the concepts are still valid.
 https://pastebin.com/n8emqu08 
🔴⏫
Any questions?

The article Oracle recovery concepts first appeared on the dbi services blog.

Setup Oracle XE on Linux Mint – a funny exercise


On my old laptop (an Acer TravelMate with an Intel Celeron N3160 CPU) I wanted to install Oracle XE. Currently, the available XE version is 18.4. My laptop runs Linux Mint 19.3 (Tricia). This blog describes the steps I had to follow (the steps for Ubuntu would be similar).

REMARK: The following steps were done just for fun and are not supported and not licensable from Oracle. If you follow them then you do it at your own risk 😉

Good instructions on how to install Oracle XE are already available here.

But the first issue not mentioned in the instructions above is that Oracle can no longer be installed on the latest Mint version due to a change in glibc. This has also been described in various blogs about e.g. installing Oracle on Fedora 26 or Fedora 27. The workaround for the problem is to do the following:


cd $ORACLE_HOME/lib/stubs
mkdir BAK
mv libc* BAK/
$ORACLE_HOME/bin/relink all

This brings us to the second issue. Oracle does not provide a mechanism to relink Oracle XE. You can of course relink an Enterprise Edition or a Standard Edition 2 installation, but relinking XE is not possible because lots of archives and objects are not shipped with an XE release. So how can we manage to install Oracle XE on Linux Mint? It needs a bit of an unsupported hack, copying archive and object files from an Enterprise Edition installation to XE, but I'll get to that later.

So here are the steps to install Oracle XE on Linux Mint (unless mentioned otherwise, the steps are done as root, i.e. you may prefix your commands with "sudo" if you do not log in as root directly):

1. Install libaio and alien


root@clemens-TravelMate:~# apt-get update && apt-get upgrade
root@clemens-TravelMate:~# apt-get install libaio*
root@clemens-TravelMate:~# apt-get install alien

2. Download the Oracle rpm from here and convert it to a deb-file


root@clemens-TravelMate:~# cd /opt/distr
root@clemens-TravelMate:/opt/distr# alien --script oracle-database-xe-18c_1.0-2_amd64.rpm

3. Delete the original rpm to save some space


root@clemens-TravelMate:/opt/distr# ls -l oracle-database-xe-18c_1.0-2_amd64.deb
...
root@clemens-TravelMate:/opt/distr# rm oracle-database-xe-18c_1.0-2_amd64.rpm

4. Install the package


root@clemens-TravelMate:/opt/distr# dpkg -i oracle-database-xe-18c_1.0-2_amd64.deb

REMARK: In case the installation fails or the database cannot be created then you can find instructions on how to clean everything up again here.

5. Make sure your host has an IPv4 address in your hosts file


root@clemens-TravelMate:/opt/distr# more /etc/hosts
127.0.0.1 localhost localhost.localdomain
192.168.10.49 clemens-TravelMate.fritz.box clemens-TravelMate

6. Disable the system check in the configuration script


cd /etc/init.d/
cp -p oracle-xe-18c oracle-xe-18c-cfg
vi oracle-xe-18c-cfg

Add the parameter


-J-Doracle.assistants.dbca.validate.ConfigurationParams=false 

in line 288 of the script, so that it finally looks as follows:


    $SU -s /bin/bash  $ORACLE_OWNER -c "(echo '$ORACLE_PASSWORD'; echo '$ORACLE_PASSWORD'; echo '$ORACLE_PASSWORD') | $DBCA -silent -createDatabase -gdbName $ORACLE_SID -templateName $TEMPLATE_NAME -characterSet $CHARSET -createAsContainerDatabase $CREATE_AS_CDB -numberOfPDBs $NUMBER_OF_PDBS -pdbName $PDB_NAME -sid $ORACLE_SID -emConfiguration DBEXPRESS -emExpressPort $EM_EXPRESS_PORT -J-Doracle.assistants.dbca.validate.DBCredentials=false -J-Doracle.assistants.dbca.validate.ConfigurationParams=false -sampleSchema true $SQLSCRIPT_CONSTRUCT $DBFILE_CONSTRUCT $MEMORY_CONSTRUCT"

7. Adjust user oracle, so that it has bash as its default shell


mkdir -p /home/oracle
chown oracle:oinstall /home/oracle
vi /etc/passwd
grep oracle /etc/passwd
oracle:x:54321:54321::/home/oracle:/bin/bash

You may of course add a .bashrc or .bash_profile in /home/oracle.
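For example, a minimal profile could look like this (a sketch; the paths and SID match the XE defaults used in this post):

oracle@clemens-TravelMate:~$ cat /home/oracle/.bash_profile
# minimal Oracle environment for the XE installation
export ORACLE_BASE=/opt/oracle
export ORACLE_HOME=/opt/oracle/product/18c/dbhomeXE
export ORACLE_SID=XE
export PATH=$ORACLE_HOME/bin:$PATH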

8. Adjust the Oracle make-scripts for Mint/Ubuntu (I took the script from here):


oracle@clemens-TravelMate:~/scripts$ cat omkfix_XE.sh 
#!/bin/sh
# Change the path below to point to your installation
export ORACLE_HOME=/opt/oracle/product/18c/dbhomeXE
# make changes in orld script
sed -i 's/exec gcc "\$@"/exec gcc -no-pie "\$@"/' $ORACLE_HOME/bin/orald
# Take backup before committing changes
cp $ORACLE_HOME/rdbms/lib/ins_rdbms.mk $ORACLE_HOME/rdbms/lib/ins_rdbms.mk.back
cp $ORACLE_HOME/rdbms/lib/env_rdbms.mk $ORACLE_HOME/rdbms/lib/env_rdbms.mk.back
cp $ORACLE_HOME/network/lib/env_network.mk $ORACLE_HOME/network/lib/env_network.mk.back
cp $ORACLE_HOME/srvm/lib/env_srvm.mk $ORACLE_HOME/srvm/lib/env_srvm.mk.back
cp $ORACLE_HOME/crs/lib/env_has.mk $ORACLE_HOME/crs/lib/env_has.mk.back
cp $ORACLE_HOME/odbc/lib/env_odbc.mk $ORACLE_HOME/odbc/lib/env_odbc.mk.back
cp $ORACLE_HOME/precomp/lib/env_precomp.mk $ORACLE_HOME/precomp/lib/env_precomp.mk.back
cp $ORACLE_HOME/ldap/lib/env_ldap.mk $ORACLE_HOME/ldap/lib/env_ldap.mk.back
cp $ORACLE_HOME/ord/im/lib/env_ordim.mk $ORACLE_HOME/ord/im/lib/env_ordim.mk.back
cp $ORACLE_HOME/ctx/lib/env_ctx.mk $ORACLE_HOME/ctx/lib/env_ctx.mk.back
cp $ORACLE_HOME/plsql/lib/env_plsql.mk $ORACLE_HOME/plsql/lib/env_plsql.mk.back
cp $ORACLE_HOME/sqlplus/lib/env_sqlplus.mk $ORACLE_HOME/sqlplus/lib/env_sqlplus.mk.back
cp $ORACLE_HOME/bin/genorasdksh $ORACLE_HOME/bin/genorasdksh.back
#
# make changes in .mk files
#
sed -i 's/\$(ORAPWD_LINKLINE)/\$(ORAPWD_LINKLINE) -lnnz18/' $ORACLE_HOME/rdbms/lib/ins_rdbms.mk
sed -i 's/\$(HSOTS_LINKLINE)/\$(HSOTS_LINKLINE) -lagtsh/' $ORACLE_HOME/rdbms/lib/ins_rdbms.mk
sed -i 's/\$(EXTPROC_LINKLINE)/\$(EXTPROC_LINKLINE) -lagtsh/' $ORACLE_HOME/rdbms/lib/ins_rdbms.mk
sed -i 's/\$(OPT) \$(HSOTSMAI)/\$(OPT) -Wl,--no-as-needed \$(HSOTSMAI)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(OPT) \$(HSDEPMAI)/\$(OPT) -Wl,--no-as-needed \$(HSDEPMAI)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(OPT) \$(EXTPMAI)/\$(OPT) -Wl,--no-as-needed \$(EXTPMAI)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(SPOBJS) \$(LLIBDMEXT)/\$(SPOBJS) -Wl,--no-as-needed \$(LLIBDMEXT)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
 
sed -i 's/\$(S0MAIN) \$(SSKRMED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKRMED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSBBDED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSBBDED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKRSED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKRSED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SKRNPT)/\$(S0MAIN) -Wl,--no-as-needed \$(SKRNPT)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSTRCED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSTRCED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSTNTED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSTNTED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKFEDED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKFEDED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
 
sed -i 's/\$(S0MAIN) \$(SSKFODED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKFODED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKFNDGED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKFNDGED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKFMUED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKFMUED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKFSAGED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKFSAGED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(DBGVCI)/\$(S0MAIN) -Wl,--no-as-needed \$(DBGVCI)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(DBGUCI)/\$(S0MAIN) -Wl,--no-as-needed \$(DBGUCI)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKECED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKECED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
 
sed -i 's/^\(ORACLE_LINKLINE.*\$(ORACLE_LINKER)\) \($(PL_FLAGS)\)/\1 -Wl,--no-as-needed \2/g' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/^\(TNSLSNR_LINKLINE.*\$(TNSLSNR_OFILES)\) \(\$(LINKTTLIBS)\)/\1 -Wl,--no-as-needed \2/g' $ORACLE_HOME/network/lib/env_network.mk
sed -i 's/\$LD \$1G/$LD -Wl,--no-as-needed \$LD_RUNTIME/' $ORACLE_HOME/bin/genorasdksh
sed -i 's/\$(GETCRSHOME_OBJ1) \$(OCRLIBS_DEFAULT)/\$(GETCRSHOME_OBJ1) -Wl,--no-as-needed \$(OCRLIBS_DEFAULT)/' $ORACLE_HOME/srvm/lib/env_srvm.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/crs/lib/env_has.mk;
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/odbc/lib/env_odbc.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/precomp/lib/env_precomp.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/srvm/lib/env_srvm.mk;
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/network/lib/env_network.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/ldap/lib/env_ldap.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/ord/im/lib/env_ordim.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/ctx/lib/env_ctx.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/plsql/lib/env_plsql.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/sqlplus/lib/env_sqlplus.mk
oracle@clemens-TravelMate:~/scripts$ 
oracle@clemens-TravelMate:~/scripts$ chmod +x omkfix_XE.sh
oracle@clemens-TravelMate:~/scripts$ . ./omkfix_XE.sh
oracle@clemens-TravelMate:~/scripts$ 

9. Install an Oracle Enterprise Edition 18.4 in a separate ORACLE_HOME, /u01/app/oracle/product/18.0.0/dbhome_1. You may follow the steps to install it here.

REMARK: At this step I also updated the /etc/sysctl.conf with the usual Oracle requirements and activated the parameters with sysctl -p.


vm.swappiness=1
fs.file-max = 6815744
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 8589934592
kernel.sem = 250 32000 100 128
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
vm.nr_hugepages = 600

10. Copy object and archive files from the Enterprise Edition Oracle Home to the XE Oracle Home:


oracle@clemens-TravelMate:~/scripts$ cat cpXE.bash 
OH1=/u01/app/oracle/product/18.0.0/dbhome_1
OH=/opt/oracle/product/18c/dbhomeXE
cp -p $OH1/rdbms/lib/libknlopt.a $OH/rdbms/lib
cp -p $OH1/rdbms/lib/opimai.o $OH/rdbms/lib
cp -p $OH1/rdbms/lib/ssoraed.o $OH/rdbms/lib
cp -p $OH1/rdbms/lib/ttcsoi.o $OH/rdbms/lib
cp -p $OH1/lib/nautab.o $OH/lib
cp -p $OH1/lib/naeet.o $OH/lib
cp -p $OH1/lib/naect.o $OH/lib
cp -p $OH1/lib/naedhs.o $OH/lib
 
cp -p $OH1/lib/*.a $OH/lib
cp -p $OH1/rdbms/lib/*.a $OH/rdbms/lib
oracle@clemens-TravelMate:~/scripts$ bash ./cpXE.bash

11. Relink Oracle


cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk config.o ioracle

REMARK: This is of course not supported and you’ve effectively changed your Oracle XE to an Enterprise Edition version now!!!

12. Configure XE as root
REMARK: Without the relink above the script below would hang at the output “Copying database files”. Actually it would hang during the “startup nomount” of the DB.


root@clemens-TravelMate:/etc/init.d# ./oracle-xe-18c-cfg configure
/bin/df: unrecognized option '--direct'
Try '/bin/df --help' for more information.
Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle Database XE.
Enter SYS user password: 
************
Enter SYSTEM user password: 
**********
Enter PDBADMIN User Password: 
***********
Prepare for db operation
7% complete
Copying database files
29% complete
Creating and starting Oracle instance
30% complete
31% complete
34% complete
38% complete
41% complete
43% complete
Completing Database Creation
47% complete
50% complete
Creating Pluggable Databases
54% complete
71% complete
Executing Post Configuration Actions
93% complete
Running Custom Scripts
100% complete
Database creation complete. For details check the logfiles at:
 /opt/oracle/cfgtoollogs/dbca/XE.
Database Information:
Global Database Name:XE
System Identifier(SID):XE
Look at the log file "/opt/oracle/cfgtoollogs/dbca/XE/XE.log" for further details.
 
Connect to Oracle Database using one of the connect strings:
     Pluggable database: clemens-TravelMate.fritz.box/XEPDB1
     Multitenant container database: clemens-TravelMate.fritz.box
Use https://localhost:5500/em to access Oracle Enterprise Manager for Oracle Database XE
root@clemens-TravelMate:/etc/init.d# 

Done. Now you can use your XE-DB:


oracle@clemens-TravelMate:~$ . oraenv
ORACLE_SID = [oracle] ? XE
The Oracle base has been set to /opt/oracle
oracle@clemens-TravelMate:~$ sqlplus / as sysdba
 
SQL*Plus: Release 18.0.0.0.0 - Production on Mon Apr 6 21:22:46 2020
Version 18.4.0.0.0
 
Copyright (c) 1982, 2018, Oracle.  All rights reserved.
 
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
 
SQL> show pdbs
 
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 XEPDB1                         READ WRITE NO
SQL> 

REMARK: As you can see, the logon banner shows "Enterprise Edition", i.e. the software installed is no longer Oracle XE and is absolutely not supported and not licensable as XE. This installation may just serve as a simple test and fun exercise to get Oracle working on Linux Mint.

Finally, I installed the Swingbench Simple Order Entry schema and ran a test with 100 concurrent OLTP users. It worked without issues.

The article Setup Oracle XE on Linux Mint – a funny exercise first appeared on the dbi services blog.

Starting an Oracle Database when a first connection comes in


To save resources, I thought about the idea of starting an Oracle database automatically when a first connection comes in. I.e. if there are many smaller databases on a server that are not required during specific times, we may shut them down and automatically start them when a connection comes in. The objective was that even the first connection should be successful. Is that possible? Yes, it is. Here's what I did:

First of all, I needed a failed-connection event to trigger the startup of the database. In my case I took the message the listener produces on a connection to a service that is not registered. E.g.:


sqlplus cbleile/@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Apr 9 14:06:55 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

With the default listener logging, the connection above produces a message like the following in the listener.log file:


oracle@oracle-19c6-vagrant:/opt/oracle/diag/tnslsnr/oracle-19c6-vagrant/listener/trace/ [orclcdb (CDB$ROOT)] tail -2 listener.log 
09-APR-2020 14:06:55 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=ORCLPDB1)(CID=(PROGRAM=sqlplus)(HOST=oracle-19c6-vagrant)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=11348)) * establish * ORCLPDB1 * 12514
TNS-12514: TNS:listener does not currently know of service requested in connect descriptor

Alternatively, you may check the listener XML alert log, log.xml, in the listener/alert directory.
To keep it easy, I used the listener.log file to check whether such a failed connection came in.

So now we just need a mechanism that checks for such a message in the listener.log and then triggers the database startup.

First, I create an application service in my PDB:


alter session set container=pdb1;
exec dbms_service.create_service('APP_PDB1','APP_PDB1');
exec dbms_service.start_service('APP_PDB1');
alter system register;
alter pluggable database save state;

REMARK1: Do not use the default service of a PDB when connecting with the application. ALWAYS create a service for the application.
REMARK2: By using the "save state" I ensure that the service is started automatically on DB startup.

Secondly, I created a tnsnames alias which retries the connection several times in case it fails initially:


ORCLPDB1_S =
  (DESCRIPTION =
  (CONNECT_TIMEOUT=10)(RETRY_COUNT=30)(RETRY_DELAY=2)
   (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 0.0.0.0)(PORT = 1521))
   )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = APP_PDB1)
    )
  )

The important parameters are:


(RETRY_COUNT=30)(RETRY_DELAY=2)

I.e. in case of an error, we wait for 2 seconds and try again, up to 30 times. That means we have 60 seconds to start the database when the first connection comes in.

The simple bash script below polls for the failed-connection message and starts the database when such a message appears in the listener.log:


#!/bin/bash
 
# Set the env for orclcdb
export ORAENV_ASK=NO
export ORACLE_SID=orclcdb
. oraenv
 
# Define where the listener log-file is
LISTENER_LOG=/opt/oracle/diag/tnslsnr/oracle-19c6-vagrant/listener/trace/listener.log
 
# create a fifo-file
fifo=/tmp/tmpfifo.$$
mkfifo "${fifo}" || exit 1
 
# tail the listener.log and write to fifo in a background process
tail -F -n0 $LISTENER_LOG >${fifo} &
tailpid=$! # optional
 
# check if a connection to service APP_PDB1 arrived and the listener returns a TNS-12514
# TNS-12514 TNS:listener does not currently know of service requested in connect descriptor
# i.e. go ahead if we detect a line containing "establish * APP_PDB1 * 12514" in the listener.log
grep -i -m 1 "establish \* app_pdb1 \* 12514" "${fifo}"
 
# if we get here a request to connect to service APP_PDB1 came in and the service is not 
# registered at the listener. We conclude then that the DB is down.
 
# Do some cleanup by killing the tail-process and removing the fifo-file
kill "${tailpid}" # optional
rm "${fifo}"
 
# Startup the DB
sqlplus -S / as sysdba <<EOF
startup
exit
EOF

REMARK1: You may check the discussion here on how to poll for a string in a file on Linux.
REMARK2: In production, the above script would probably need a trap, e.g. to handle Ctrl-C, so that the background tail process is killed and the tmpfifo file removed; see the sketch below.
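Such a trap could look like this (a sketch, to be placed right after the background tail is started):

# clean up on exit or Ctrl-C: stop the background tail and remove the fifo
cleanup() {
  kill "${tailpid}" 2>/dev/null
  rm -f "${fifo}"
}
trap cleanup EXIT INT TERM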

Test:

The database is down.
In session 1 I do start my simple bash-script:


oracle@oracle-19c6-vagrant:/home/oracle/tools/test_db_start_whenconnecting/ [orclcdb (CDB$ROOT)] bash ./poll_listener.bash

In session 2 I try to connect:


oracle@oracle-19c6-vagrant:/home/oracle/ [orclcdb (CDB$ROOT)] sqlplus cbleile@orclpdb1_s
 
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Apr 9 14:33:11 2020
Version 19.6.0.0.0
 
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
 
Enter password: 

After entering my password my connection-attempt “hangs”.
In session 1 I can see the following messages:


09-APR-2020 14:33:15 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=APP_PDB1)(CID=(PROGRAM=sqlplus)(HOST=oracle-19c6-vagrant)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=11592)) * establish * APP_PDB1 * 12514
ORACLE instance started.
 
Total System Global Area 3724537976 bytes
Fixed Size		    9142392 bytes
Variable Size		 1224736768 bytes
Database Buffers	 2483027968 bytes
Redo Buffers		    7630848 bytes
Database mounted.
Database opened.
./poll_listener.bash: line 38: 19096 Terminated              tail -F -n0 $LISTENER_LOG > ${fifo}
oracle@oracle-19c6-vagrant:/home/oracle/tools/test_db_start_whenconnecting/ [orclcdb (CDB$ROOT)] 

And session 2 automatically connects as the DB is open now:


oracle@oracle-19c6-vagrant:/home/oracle/ [orclcdb (CDB$ROOT)] sqlplus cbleile@orclpdb1_s
 
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Apr 9 14:33:11 2020
Version 19.6.0.0.0
 
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
 
Enter password: 
Last Successful login time: Thu Apr 09 2020 14:31:44 +01:00
 
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
 
cbleile@orclcdb@PDB1> 

All subsequent connects to the DB are fast, of course.

With such a mechanism I could even think of starting a virtual machine from a common listener once such a connection arrives. This would allow us, for example, to start DB servers in the cloud when a first connection from the application comes in: we could stop DB VMs in the cloud to save money and start them up with a Terraform script (or whatever CLI tool you use to manage your cloud) when a first DB connection arrives.

REMARK: With (Transparent) Application Continuity there are even more possibilities, but that feature requires RAC/RAC One Node or Active Data Guard and is out of scope here.

The article Starting an Oracle Database when a first connection comes in first appeared on the dbi services blog.

Find the SQL Plan Baseline for a plan operation


By Franck Pachot

If you decide to capture SQL Plan Baselines, you achieve plan stability by being conservative: if the optimizer comes up with a new execution plan, it is loaded into the SQL Plan Management base but not accepted. One day, you may add an index to improve some queries. Then you should check whether there is any SQL Plan Baseline for queries with the same access predicate, because the optimizer will probably find this index attractive and add the new plan to the SPM base, but it will not be used unless you evolve it to be accepted. Or you may remove the SQL Plan Baselines for these queries now that you know you provided a very efficient access path.

But how do you find all SQL Plan Baselines that are concerned? Here is an example.

I start with the SCOTT schema where I capture the SQL Plan Baselines for the following queries:


set time on sqlprompt 'SQL> '
host TWO_TASK=//localhost/CDB1A_PDB1.subnet.vcn.oraclevcn.com sqlplus sys/"demo##OracleDB20c" as sysdba @ ?/rdbms/admin/utlsampl.sql
connect scott/tiger@//localhost/CDB1A_PDB1.subnet.vcn.oraclevcn.com
alter session set optimizer_mode=first_rows optimizer_capture_sql_plan_baselines=true;
select * from emp where ename='SCOTT';
select * from emp where ename='SCOTT';

This is a full table scan because I have no index here.
Now I create an index that helps for this kind of query:


alter session set optimizer_mode=first_rows optimizer_capture_sql_plan_baselines=false;
host sleep 1
create index emp_ename on emp(ename);
host sleep 1
select * from emp where ename='SCOTT';

I have now, in addition to the accepted FULL TABLE SCAN baseline, a loaded, but not accepted, plan with INDEX access.
Here is the list of plans in detail:


SQL> select sql_handle,plan_name,created,enabled ENA,accepted ACC,fixed FIX,origin from dba_sql_plan_baselines;

             SQL_HANDLE                         PLAN_NAME            CREATED    ENA    ACC    FIX          ORIGIN
_______________________ _________________________________ __________________ ______ ______ ______ _______________
SQL_62193752b864a1e8    SQL_PLAN_6469raaw698g854d6b671    17-apr-20 19:37    YES    NO     NO     AUTO-CAPTURE
SQL_62193752b864a1e8    SQL_PLAN_6469raaw698g8d8a279cc    17-apr-20 19:37    YES    YES    NO     AUTO-CAPTURE

Full table scan:

SQL> select * from dbms_xplan.display_sql_plan_baseline('SQL_62193752b864a1e8','SQL_PLAN_6469raaw698g8d8a279cc'
);

                                                                  PLAN_TABLE_OUTPUT
___________________________________________________________________________________

--------------------------------------------------------------------------------
SQL handle: SQL_62193752b864a1e8
SQL text: select * from emp where ename='SCOTT'
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_6469raaw698g8d8a279cc         Plan id: 3634526668
Enabled: YES     Fixed: NO      Accepted: YES     Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------

Plan hash value: 3956160932

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    87 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| EMP  |     1 |    87 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ENAME"='SCOTT')

Index access - not accepted

SQL> select * from dbms_xplan.display_sql_plan_baseline('SQL_62193752b864a1e8','SQL_PLAN_6469raaw698g854d6b671'
);

                                                                                   PLAN_TABLE_OUTPUT
____________________________________________________________________________________________________

--------------------------------------------------------------------------------
SQL handle: SQL_62193752b864a1e8
SQL text: select * from emp where ename='SCOTT'
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_6469raaw698g854d6b671         Plan id: 1423357553
Enabled: YES     Fixed: NO      Accepted: NO      Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------

Plan hash value: 2855689319

-------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |           |     1 |    87 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| EMP       |     1 |    87 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | EMP_ENAME |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ENAME"='SCOTT')

SQL Plan Baseline lookup by plan operation

Now I want to find all queries in this situation, where a SQL Plan Baseline references this index, because I'll probably want to delete all plans for those queries, or maybe evolve the index-access plan to be accepted.
Here is my query on sys.sqlobj$plan:


select sql_handle,plan_name,created,enabled ENA,accepted ACC,fixed FIX,origin
 ,operation,options,object_name
from (
 -- SPM execution plans
 select signature,category,obj_type,plan_id
 ,operation, options, object_name
 from sys.sqlobj$plan
) natural join (
 -- SQL Plan Baselines
 select signature,category,obj_type,plan_id
 ,name plan_name
 from sys.sqlobj$
 where obj_type=2
) natural join (
 select plan_name
 ,sql_handle,created,enabled,accepted,fixed,origin
 from dba_sql_plan_baselines
)
where operation='INDEX' and object_name like 'EMP_ENAME'
/

This gets the signature and plan identification from sys.sqlobj$plan, then joins to sys.sqlobj$ to get the plan name, and finally to dba_sql_plan_baselines for additional information:


             SQL_HANDLE                         PLAN_NAME            CREATED    ENA    ACC    FIX          ORIGIN    OPERATION       OPTIONS    OBJECT_NAME
_______________________ _________________________________ __________________ ______ ______ ______ _______________ ____________ _____________ ______________
SQL_62193752b864a1e8    SQL_PLAN_6469raaw698g854d6b671    17-apr-20 19:37    YES    NO     NO     AUTO-CAPTURE    INDEX        RANGE SCAN    EMP_ENAME

You can see that I like natural joins, but be aware that I do that only when I fully control the columns by defining the column projections in subqueries before the join.
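Once the concerned SQL Plan Baselines are identified, they can be evolved or dropped with DBMS_SPM. Here is a minimal sketch using the handle and plan name from above:

-- accept the index-access plan after verification, or drop the whole baseline instead
declare
  report clob;
  ret    pls_integer;
begin
  report := dbms_spm.evolve_sql_plan_baseline(
              sql_handle => 'SQL_62193752b864a1e8',
              plan_name  => 'SQL_PLAN_6469raaw698g854d6b671',
              verify => 'YES', commit => 'YES');
  -- or: ret := dbms_spm.drop_sql_plan_baseline(sql_handle => 'SQL_62193752b864a1e8');
end;
/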

I have the following variant if I want to look up by the outline hints:


select sql_handle,plan_name,created,enabled ENA,accepted ACC,fixed FIX,origin
 ,operation,options,object_name
 ,outline_data
from (
 -- SPM execution plans
 select signature,category,obj_type,plan_id
 ,operation, options, object_name
 ,case when other_xml like '%outline_data%' then extract(xmltype(other_xml),'/*/outline_data').getStringVal() end outline_data
 from sys.sqlobj$plan
) natural join (
 -- SQL Plan Baselines
 select signature,category,obj_type,plan_id
 ,name plan_name
 from sys.sqlobj$
 where obj_type=2
) natural join (
 select plan_name
 ,sql_handle,created,enabled,accepted,fixed,origin
 from dba_sql_plan_baselines
)
where outline_data like '%INDEX%'
/

This is what we find in the OTHER_XML, and it is faster to filter here rather than calling dbms_xplan for each plan:


             SQL_HANDLE                         PLAN_NAME            CREATED    ENA    ACC    FIX          ORIGIN       OPERATION                   OPTIONS    OBJECT_NAME                                                                                                                                                                                                                                                                                                                                                                                                                             OUTLINE_DATA
_______________________ _________________________________ __________________ ______ ______ ______ _______________ _______________ _________________________ ______________ ________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
SQL_62193752b864a1e8    SQL_PLAN_6469raaw698g854d6b671    17-apr-20 19:37    YES    NO     NO     AUTO-CAPTURE    TABLE ACCESS    BY INDEX ROWID BATCHED    EMP            <outline_data><hint><![CDATA[BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$1" "EMP"@"SEL$1")]]></hint><hint><![CDATA[INDEX_RS_ASC(@"SEL$1" "EMP"@"SEL$1" ("EMP"."ENAME"))]]></hint><hint><![CDATA[OUTLINE_LEAF(@"SEL$1")]]></hint><hint><![CDATA[FIRST_ROWS]]></hint><hint><![CDATA[DB_VERSION('20.1.0')]]></hint><hint><![CDATA[OPTIMIZER_FEATURES_ENABLE('20.1.0')]]></hint><hint><![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]></hint></outline_data>

Those SYS.SQLOBJ$ tables are the tables where Oracle stores the queries for the SQL Management Base (SQL Profiles, SQL Plan Baselines, SQL Patches, SQL Quarantine).

If you want to find the SQL_ID from a SQL Plan Baseline, I have a query in a previous post:
https://medium.com/@FranckPachot/oracle-dba-sql-plan-baseline-sql-id-and-plan-hash-value-8ffa811a7c68
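
Without reproducing that post here, a minimal sketch of the idea, assuming the baseline SIGNATURE matches V$SQL.EXACT_MATCHING_SIGNATURE for statements still in the cursor cache:

select s.sql_id, s.child_number, b.plan_name
from dba_sql_plan_baselines b
join v$sql s on s.exact_matching_signature=b.signature
where b.plan_name='SQL_PLAN_6469raaw698g854d6b671';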


“Segment Maintenance Online Compress” feature usage


By Franck Pachot

On Twitter, Ludovico Caldara mentioned the #licensing #pitfall when using the Online Partition Move with Basic Compression. Those two features are available in Enterprise Edition without any additional option, but when used together (moving online a compressed partition) they enable the usage of the Advanced Compression Option:


And there was a question about the detection of this feature. I’ll show how this is detected. Basically, the ALTER TABLE MOVE PARTITION sets the “fragment was compressed online” flag in TABPART$ or TABSUBPART$ when the segment was compressed during the online move.

I create a partitioned table:


SQL> create table SCOTT.DEMO(id,x) partition by hash(id) partitions 2 as select rownum,lpad('x',100,'x') from xmltable('1 to 1000');

Table created.

I set basic compression, which does not compress anything yet but only marks the segment for compression on future direct-path loads:


SQL> alter table SCOTT.DEMO modify partition for (42) compress;

Table altered.

I move without the ‘online’ keyword:


SQL> alter table SCOTT.DEMO move partition for (42);

Table altered.

This does not enable the online compression flag (which is 0x2000000):


SQL> select obj#,dataobj#,part#,flags,to_char(flags,'FMXXXXXXXXXXXXX') from SYS.TABPART$ where obj# in ( select object_id from dba_objects where owner='SCOTT' and object_name='DEMO');

      OBJ#   DATAOBJ#      PART#      FLAGS TO_CHAR(FLAGS,
---------- ---------- ---------- ---------- --------------
     75608      75608          1          0 0
     75609      75610          2         18 12

The 0x12 is about the presence of statistics (the MOVE does online statistics gathering in 12c).


SQL> exec sys.dbms_feature_usage_internal.exec_db_usage_sampling(sysdate)

PL/SQL procedure successfully completed.

SQL> select name,detected_usages,currently_used,feature_info from dba_feature_usage_statistics where name='Segment Maintenance Online Compress';

NAME                                     DETECTED_USAGES CURRE FEATURE_INFO
---------------------------------------- --------------- ----- --------------------------------------------------------------------------------
Segment Maintenance Online Compress                    0 FALSE

Online Move of compressed partition

Now moving online this compressed segment:


SQL> alter table SCOTT.DEMO move partition for (42) online;

Table altered.

This has enabled the 0x2000000 flag:


SQL> select obj#,dataobj#,part#,flags,to_char(flags,'FMXXXXXXXXXXXXX') from SYS.TABPART$ where obj# in ( select object_id from dba_objects where owner='SCOTT' and object_name='DEMO');

      OBJ#   DATAOBJ#      PART#      FLAGS TO_CHAR(FLAGS,
---------- ---------- ---------- ---------- --------------
     75608      75608          1          0 0
     75611      75611          2   33554450 2000012

And, of course, this is logged by the feature usage detection:


SQL> exec sys.dbms_feature_usage_internal.exec_db_usage_sampling(sysdate)

PL/SQL procedure successfully completed.

SQL> select name,detected_usages,currently_used,feature_info from dba_feature_usage_statistics where name='Segment Maintenance Online Compress';

NAME                                     DETECTED_USAGES CURRE FEATURE_INFO
---------------------------------------- --------------- ----- --------------------------------------------------------------------------------
Segment Maintenance Online Compress                    1 FALSE Partition Obj# list: 75611:

The FEATURE_INFO mentions the object_id for the concerned partitions (for the last detection only).
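
If you want to list every partition and subpartition that carries this flag, here is a minimal sketch (my own query, assuming the same 0x2000000 bit, i.e. 33554432 in decimal, in TABPART$ and TABSUBPART$):

select o.owner, o.object_name, o.subobject_name, p.obj#, p.flags
from sys.tabpart$ p join dba_objects o on o.object_id=p.obj#
where bitand(p.flags,33554432)=33554432
union all
select o.owner, o.object_name, o.subobject_name, sp.obj#, sp.flags
from sys.tabsubpart$ sp join dba_objects o on o.object_id=sp.obj#
where bitand(sp.flags,33554432)=33554432;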

No Compress

The only way I know to disable this flag is to uncompress the partition, and this can be done online:


SQL> alter table SCOTT.DEMO move partition for (42) nocompress online;

Table altered.

SQL> select obj#,dataobj#,part#,flags,to_char(flags,'FMXXXXXXXXXXXXX') from SYS.TABPART$ where obj# in ( select object_id from dba_objects where owner='SCOTT' and object_name='DEMO');

      OBJ#   DATAOBJ#      PART#      FLAGS TO_CHAR(FLAGS,
---------- ---------- ---------- ---------- --------------
     75608      75608          1          0 0
     75618      75618          2         18 12

DBMS_REDEFINITION

As a workaround, DBMS_REDEFINITION does not use the Advanced Compression Option. For example, this does not enable any flag:


SYS@CDB$ROOT>
SYS@CDB$ROOT> alter table SCOTT.DEMO rename partition for (24) to PART1;

Table altered.

SYS@CDB$ROOT> create table SCOTT.DEMO_X for exchange with table SCOTT.DEMO;

Table created.

SYS@CDB$ROOT> alter table SCOTT.DEMO_X compress;

Table altered.

SYS@CDB$ROOT> exec dbms_redefinition.start_redef_table(uname=>'SCOTT',orig_table=>'DEMO',int_table=>'DEMO_X',part_name=>'PART1',options_flag=>dbms_redefinition.cons_use_rowid);

PL/SQL procedure successfully completed.

SYS@CDB$ROOT> exec dbms_redefinition.finish_redef_table(uname=>'SCOTT',orig_table=>'DEMO',int_table=>'DEMO_X',part_name=>'PART1');

PL/SQL procedure successfully completed.

SYS@CDB$ROOT> drop table SCOTT.DEMO_X;                                                                                                                        
Table dropped.

But of course, the difference is that only the blocks direct-path inserted into the interim table are compressed, not the rows applied from the online modifications.

Only for partitions?

As far as I know, this is detected only for partitions and subpartitions, i.e. for the online partition move operation that came in 12cR1. Since 12cR2 we can also move a non-partitioned table online and this, as far as I know, is not detected by dba_feature_usage_statistics. But don’t count on it, as this may be considered a bug and fixed one day.


Oracle Support: Easy export of SQL Testcase


By Franck Pachot

Many people complain about the quality of support, and there are some reasons behind that. But before complaining, be sure that you provide all the information, because one reason for inefficient Service Request handling is the many incomplete tickets the support engineers have to manage. Oracle provides the tools to make this easy for you and for them. Here I’ll show how easy it is to provide a full testcase with DBMS_SQLDIAG. I’m not talking about hours spent identifying the tables involved, the statistics, the parameters,… All that can be done autonomously with a single command as soon as you have the SQL text or SQL_ID.

In my case, I’ve reproduced my problem (very long parse time) with the following:


set linesize 120 pagesize 1000
variable sql clob
exec select sql_text into :sql from dba_hist_sqltext where sql_id='5jyqgq4mmc2jv';
alter session set optimizer_features_enable='18.1.0';
alter session set tracefile_identifier='5jyqgq4mmc2jv';
select value from v$diag_info where name='Default Trace File';
alter session set events 'trace [SQL_Compiler.*]';
exec execute immediate 'explain plan for '||:sql;
alter session set events 'trace [SQL_Compiler.*] off';

I was too lazy to copy the big SQL statement, so I get it directly from AWR. Because it is a parsing problem, I just run an EXPLAIN PLAN. I set OPTIMIZER_FEATURES_ENABLE to my current version because the first workaround in production was to keep the previous version. I ran a “SQL Compiler” trace, aka event 10053, in order to get the timing information (which I described in a previous blog post). But that’s not the topic: rather than providing those huge traces to Oracle Support, it is better to give an easy-to-reproduce test case.

So this is the only thing I added to get it:


variable c clob
exec DBMS_SQLDIAG.EXPORT_SQL_TESTCASE(directory=>'DATA_PUMP_DIR',sql_text=>:sql,testcase=>:c);

Yes, that’s all. This generates the following files in my DATA_PUMP_DIR directory:

There’s a README, there’s a dump of the objects (I used the default which exports only metadata and statistics), there’s the statement, the system statistics,… you can play with this or simply import the whole with DBMS_SQLDIAG.

I just tar’ed this, copied it to another environment (I provisioned a 20c database in the Oracle Cloud for that) and ran the following:


grant DBA to DEMO identified by demo container=current;
connect demo/demo@&_connect_identifier
create or replace directory VARTMPDPDUMP as '/var/tmp/dpdump';
variable c clob
exec DBMS_SQLDIAG.IMPORT_SQL_TESTCASE(directory=>'VARTMPDPDUMP',filename=>'oratcb_0_5jyqgq4mmc2jv_1_018BBEEE0001main.xml');
@ oratcb_0_5jyqgq4mmc2jv_1_01A20CE80001xpls.sql

And that’s all. This imported all the objects and statistics to exactly reproduce my issue. Now that it reproduces everywhere, I can open a SR, with a short description and the SQL Testcase files (5 MB here). It is not always easy to reproduce a problem, but if you can reproduce it in your environment, there’s a good chance that you can quickly export what is required to reproduce it in another environment.

SQL Testcase Builder is available in any edition. You can use it yourself to reproduce in pre-production a production issue or to provide a testcase to the Oracle Support. Or to send to your preferred troubleshooting consultant: we are doing more and more remote expertise, and reproducing an issue in-house is the most efficient way to analyze a problem.
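
Note that DBMS_SQLDIAG.EXPORT_SQL_TESTCASE also has an overload that takes a SQL_ID, so if the statement is still in the cursor cache you don’t even need the SQL text. A minimal sketch, assuming the same directory and my SQL_ID (parameter names may vary slightly between versions):

variable c clob
exec DBMS_SQLDIAG.EXPORT_SQL_TESTCASE(directory=>'DATA_PUMP_DIR',sql_id=>'5jyqgq4mmc2jv',testcase=>:c);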



티베로 – The most compatible alternative to Oracle Database


By Franck Pachot

Do you remember that time when we were able to buy IBM PC clones, cheaper than the IBM PC but fully compatible? I got the same impression when testing Tibero, the TmaxSoft relational database compatible with the Oracle Database. Many Oracle customers are looking for alternatives to the Oracle Database because of unfriendly commercial and licensing practices, like forcing the usage of expensive options or not counting vCPU for licensing. Up to now, I was not really impressed by the databases that claim Oracle compatibility. You simply cannot migrate an application from Oracle to another RDBMS without having to change a lot of code. This makes it nearly impossible to move a legacy application where the business logic has been implemented over the years in the data model and stored procedures. Who will take the risk of guaranteeing the same behavior, even after a very expensive UAT? Finally, with less effort, you may optimize your Oracle licenses and stay with the same database software.

Tibero

However, in Asia, some companies have another reason to move out of Oracle. Not because of Oracle itself, but because it is an American company. This is true especially for public government organizations, for which storing data and running critical applications should not depend on a US company. And once they have built their alternative, they may sell it worldwide. In this post I’m looking at Tibero, a database created by a South Korean company – TmaxSoft – with an incredible level of compatibility with Oracle.

I’ll install and run a Tibero database to get an idea about what compatibility means.

Demo trial

After creating a login account on the TmaxSoft TechNet, I’ve requested a demo license on: https://technet.tmaxsoft.com/en/front/common/demoPopup.do

You need to know the host where you will run this, as you have to provide the result of `uname -n` to get the license key. That’s a 30-day trial (longer if you don’t restart the instance) that can run everything on this host. I’ve used an Oracle Compute instance running OEL7 for this test. I’ve downloaded the Tibero 6 software installation: tibero6-bin-FS07_CS_1902-linux64–166256-opt.tar.gz from TmaxSoft TechNet > Downloads > Database > Tibero > Tibero 6

For the installation, I followed the instructions from https://store.dimensigon.com/deploy-tibero-database/ that I do not reproduce here. Basically, you need some packages, some sysctl.conf settings for shared memory, some limits.conf settings, a user in the ‘dba’ group,… Very similar to the Oracle prerequisites. Then untar the software – this installs a $TB_HOME of about 1GB.

Database creation

The first difference with Oracle is that you cannot start an instance without a valid license file:


$ $TB_HOME/bin/tb_create_db.sh
  ********************************************************************
* ERROR: Can't open the license file!!
* (1) Check the license file - /home/tibero/tibero6/license/license.xml
  ********************************************************************

I have my trial license file and move it to $TB_HOME/license/license.xml

The database creation is straightforward: there’s a simple tb_create_db.sh script for it. The first stage starts the instance (NOMOUNT mode):


$ $TB_HOME/bin/tb_create_db.sh
Listener port = 8629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NOMOUNT mode).

Some information about the settings is displayed:


+----------------------------- size -------------------------------+
 system size = 100M (next 10M)
 syssub size = 100M (next 10M)
   undo size = 200M (next 10M)
   temp size = 100M (next 10M)
    usr size = 100M (next 10M)
    log size = 50M
+--------------------------- directory ----------------------------+
 system directory = /home/tibero/tibero6/database/t6a
 syssub directory = /home/tibero/tibero6/database/t6a
   undo directory = /home/tibero/tibero6/database/t6a
   temp directory = /home/tibero/tibero6/database/t6a
    log directory = /home/tibero/tibero6/database/t6a
    usr directory = /home/tibero/tibero6/database/t6a

And the creation runs - this really looks like an Oracle Database:


+========================== newmount sql ==========================+
 create database "t6a"
  user sys identified by tibero
  maxinstances 8
  maxdatafiles 100
  character set MSWIN949
  national character set UTF16
  logfile
  group 1 ('/home/tibero/tibero6/database/t6a/log001.log') size 50M,
  group 2 ('/home/tibero/tibero6/database/t6a/log002.log') size 50M,
  group 3 ('/home/tibero/tibero6/database/t6a/log003.log') size 50M
    maxloggroups 255
    maxlogmembers 8
    noarchivelog
  datafile '/home/tibero/tibero6/database/t6a/system001.dtf' 
    size 100M autoextend on next 10M maxsize unlimited
  SYSSUB 
  datafile '/home/tibero/tibero6/database/t6a/syssub001.dtf' 
    size 10M autoextend on next 10M maxsize unlimited
  default temporary tablespace TEMP
    tempfile '/home/tibero/tibero6/database/t6a/temp001.dtf'
    size 100M autoextend on next 10M maxsize unlimited
    extent management local autoallocate
  undo tablespace UNDO
    datafile '/home/tibero/tibero6/database/t6a/undo001.dtf'
    size 200M
    autoextend on next 10M maxsize unlimited
    extent management local autoallocate
  default tablespace USR
    datafile  '/home/tibero/tibero6/database/t6a/usr001.dtf'
    size 100M autoextend on next 10M maxsize unlimited
    extent management local autoallocate;
+==================================================================+

Database created.
Listener port = 8629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).

Then the dictionary is loaded (equivalent to catalog/catproc):


/home/tibero/tibero6/bin/tbsvr
Dropping agent table...
Creating text packages table ...
Creating the role DBA...
Creating system users & roles...
Creating example users...
Creating virtual tables(1)...
Creating virtual tables(2)...
Granting public access to _VT_DUAL...
Creating the system generated sequences...
Creating internal dynamic performance views...
Creating outline table...
Creating system tables related to dbms_job...
Creating system tables related to dbms_lock...
Creating system tables related to scheduler...
Creating system tables related to server_alert...
Creating system tables related to tpm...
Creating system tables related to tsn and timestamp...
Creating system tables related to rsrc...
Creating system tables related to workspacemanager...
Creating system tables related to statistics...
Creating system tables related to mview...
Creating system package specifications:
    Running /home/tibero/tibero6/scripts/pkg/pkg_standard.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_standard_extension.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_clobxmlinterface.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_udt_meta.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_seaf.sql...
    Running /home/tibero/tibero6/scripts/pkg/anydata.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_standard.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_db2_standard.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_application_info.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_aq.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_aq_utl.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_aqadm.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_assert.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_crypto.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_db2_translator.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_db_version.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_ddl.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_debug.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_debug_jdwp.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_errlog.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_expression.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_fga.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_flashback.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_geom.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_java.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_job.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_lob.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_lock.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_metadata.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_mssql_translator.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_mview.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_mview_refresh_util.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_mview_util.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_obfuscation.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_output.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_pipe.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_random.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_redefinition.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_redefinition_stats.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_repair.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_result_cache.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_rls.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_rowid.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_rsrc.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_scheduler.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_session.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_space.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_space_admin.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_sph.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_sql.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_sql_analyze.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_sql_translator.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_sqltune.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_stats.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_stats_util.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_system.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_transaction.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_types.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_utility.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_utl_tb.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_verify.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xmldom.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xmlgen.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xmlquery.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xplan.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dg_cipher.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_htf.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_htp.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_psm_sql_result_cache.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_sys_util.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_tb_utility.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_text.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_tudiconst.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_encode.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_file.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_tcp.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_http.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_url.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_i18n.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_match.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_raw.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_smtp.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_str.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_compress.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_text_japanese_lexer.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_tpm.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_recomp.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_monitor.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_server_alert.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_ctx_ddl.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_odci.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_ref.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_owa_util.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_alert.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_client_internal.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xslprocessor.sql...
    Running /home/tibero/tibero6/scripts/pkg/uda_wm_concat.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_diutil.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xmlsave.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xmlparser.sql...
Creating auxiliary tables used in static views...
Creating system tables related to profile...
Creating internal system tables...
Check TPR status..
Stop TPR
Dropping tables used in TPR...
Creating auxiliary tables used in TPR...
Creating static views...
Creating static view descriptions...
Creating objects for sph:
    Running /home/tibero/tibero6/scripts/iparam_desc_gen.sql...
Creating dynamic performance views...
Creating dynamic performance view descriptions...
Creating package bodies:
    Running /home/tibero/tibero6/scripts/pkg/_pkg_db2_standard.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_aq.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_aq_utl.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_aqadm.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_assert.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_db2_translator.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_errlog.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_metadata.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_mssql_translator.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_mview.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_mview_refresh_util.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_mview_util.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_redefinition_stats.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_rsrc.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_scheduler.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_session.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_sph.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_sql_analyze.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_sql_translator.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_sqltune.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_stats.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_stats_util.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_utility.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_utl_tb.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_verify.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_workspacemanager.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_xmlgen.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_xplan.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dg_cipher.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_htf.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_htp.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_text.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_utl_http.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_utl_url.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_utl_i18n.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_utl_smtp.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_text_japanese_lexer.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_tpm.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_utl_recomp.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_server_alert.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_xslprocessor.tbw...
Running /home/tibero/tibero6/scripts/pkg/_uda_wm_concat.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_xmlparser.tbw...
Creating public synonyms for system packages...
Creating remaining public synonyms for system packages...
Registering dbms_stats job to Job Scheduler...
Creating audit event pacakge...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_audit_event.tbw...
Creating packages for TPR...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_tpr.sql...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_tpr.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_apm.tbw...
Start TPR
Create tudi interface
    Running /home/tibero/tibero6/scripts/odci.sql...
Creating spatial meta tables and views ...
Creating internal system jobs...
Creating Japanese Lexer epa source ...
Creating internal system notice queue ...
Creating sql translator profiles ...
Creating agent table...
Done.
For details, check /home/tibero/tibero6/instance/t6a/log/system_init.log.

From this log, you can already get an idea of the compatibility of the PL/SQL DBMS_% packages with Oracle Database: they are all there.
All seems good; I have a TB_HOME and TB_SID to identify the instance:


**************************************************
* Tibero Database 't6a' is created successfully on Fri Dec  6 17:23:57 GMT 2019.
*     Tibero home directory ($TB_HOME) =
*         /home/tibero/tibero6
*     Tibero service ID ($TB_SID) = t6a
*     Tibero binary path =
*         /home/tibero/tibero6/bin:/home/tibero/tibero6/client/bin
*     Initialization parameter file =
*         /home/tibero/tibero6/config/t6a.tip
*
* Make sure that you always set up environment variables $TB_HOME and
* $TB_SID properly before you run Tibero.
**************************************************

This looks very similar to Oracle Database and here is my ‘init.ora’ equivalent:

I should add _USE_HUGE_PAGE=Y there as I don’t like to see 3GB allocated with 4k pages.
Looking at the instance processes shows many background Worker Processes that have several threads:

Not going into the details there, but DBWR does more than the Oracle Database Writer, as it runs threads for writing to datafiles as well as writing to redo logs. RCWP is the recovery process (also used by standby databases). PEWP runs the parallel query threads. FGWP runs the foreground (session) threads.

Tibero is similar to Oracle but not equal. Tibero was developed in 2003 with the goal of maximum compatibility with Oracle: SQL, PL/SQL and MVCC compatibility for easy application migration, as well as architecture compatibility for easier adoption by DBAs. But it was also built from scratch for modern operating systems and runs processes and threads. I installed it on Linux x86-64, but Tibero is also available for AIX, HP-UX, Solaris and Windows.

Connect

I can connect with the SYS user by attaching to the SHM when TB_HOME and TB_SID are set to my local instance:


SQL> Disconnected.
[SID=t6a u@h:w]$ TB_HOME=~/tibero6 TB_ID=t6a tbsql sys/tibero

tbSQL 6

TmaxData Corporation Copyright (c) 2008-. All rights reserved.

Connected to Tibero.

I can also connect through the listener (the port was mentioned at database creation):


[SID=t6a u@h:w]$ TB_HOME= TB_ID= tbsql sys/tibero@localhost:8629/t6a

tbSQL 6

TmaxData Corporation Copyright (c) 2008-. All rights reserved.

Connected to Tibero.

Again it is similar to Oracle (like ezconnect or full connection string) but not exactly the same:

Similar but not a clone

The first time I looked at Tibero, I was really surprised how far it goes with the compatibility with Oracle Database. I’ll probably write more blog posts about it, but even complex PL/SQL packages can run without any change. Then comes the question: is it only an API compatibility, or is this software a clone of Oracle? I’ve even heard rumours that some source code must have leaked in order to reach such compatibility. I want to make it clear here: I’m 100% convinced that this database engine was written from scratch, inspired by the Oracle architecture and features, and implementing the same language, dictionary packages and views, but with completely different code and internal design. When we troubleshoot Oracle we are used to seeing the C function stacks in trace dumps. Let’s have a look at the C functions here.

I’ll strace the pread64 call while running a query in order to see the stack behind. I get the PID to trace:


select client_pid,pid,wthr_id,os_thr_id from v$session where sid in (select sid from v$mystat);

The process for my session is: tbsvr_FGWP000 -t NORMAL -SVR_SID t6a and the PID is the Linux PID (OS_THR_ID is the thread).
I run strace (compiled with libunwind to show the call stack):


strace -k -e trace=pread64 -y -p 7075


Here is the call stack for the first pread64() call:


pread64(49, "\4\0\0\0\2\0\200\0\261]\2\0\0\0\1\0\7\0\0\0\263\0\0\0l\2\0\0\377\377\377\377"..., 8192, 16384) = 8192
 > /usr/lib64/libpthread-2.17.so(__pread_nocancel+0x2a) [0xefc3]
 > /home/tibero/tibero6/bin/tbsvr(read_dev_ne+0x2b2) [0x14d8cd2]
 > /home/tibero/tibero6/bin/tbsvr(read_dev+0x94) [0x14d96e4]
 > /home/tibero/tibero6/bin/tbsvr(buf_read1_internal+0x2f8) [0x14da158]
 > /home/tibero/tibero6/bin/tbsvr(tcbh_read_blks_internal+0x5d8) [0x14ccf98]
 > /home/tibero/tibero6/bin/tbsvr(tcbh_read_blk_internal+0x1d) [0x14cd2dd]
 > /home/tibero/tibero6/bin/tbsvr(tcbuf_pin_read_locked+0x39c) [0x14ec99c]
 > /home/tibero/tibero6/bin/tbsvr(tcbuf_get+0x198a) [0x14f3c9a]
 > /home/tibero/tibero6/bin/tbsvr(ts_alloc_units_internal+0x256) [0x17b0a56]
 > /home/tibero/tibero6/bin/tbsvr(ts_alloc_units_from_df+0x406) [0x17b2396]
 > /home/tibero/tibero6/bin/tbsvr(ts_alloc_ext_internal+0x2ff) [0x17b5b1f]
 > /home/tibero/tibero6/bin/tbsvr(tx_sgmt_create+0x1cb) [0x1752ffb]
 > /home/tibero/tibero6/bin/tbsvr(ddl_create_dsgmt+0xc0) [0x769260]
 > /home/tibero/tibero6/bin/tbsvr(_ddl_ctbl_internal+0x155d) [0x7f86dd]
 > /home/tibero/tibero6/bin/tbsvr(ddl_create_table+0xf9) [0x7f9dd9]
 > /home/tibero/tibero6/bin/tbsvr(ddl_execute+0xf04) [0x44aa54]
 > /home/tibero/tibero6/bin/tbsvr(ddl_process_internal+0xf6a) [0x44ed0a]
 > /home/tibero/tibero6/bin/tbsvr(tbsvr_sql_process+0x4082) [0x1def92]
 > /home/tibero/tibero6/bin/tbsvr(tbsvr_msg_sql_common+0x1518) [0x1ca718]
 > /home/tibero/tibero6/bin/tbsvr(tbsvr_handle_msg_internal+0x2225) [0x1847c5]
 > /home/tibero/tibero6/bin/tbsvr(tbsvr_wthr_request_from_cl_conn+0x70a) [0x187eea]
 > /home/tibero/tibero6/bin/tbsvr(wthr_get_new_cli_con+0xc94) [0x18fea4]
 > /home/tibero/tibero6/bin/tbsvr(thread_main_chk_bitmask+0x18d) [0x1966ed]
 > /home/tibero/tibero6/bin/tbsvr(svr_wthr_main_internal+0x1393) [0x1ab2b3]
 > /home/tibero/tibero6/bin/tbsvr(wthr_init+0x80) [0xa2a5b0]
 > /usr/lib64/libpthread-2.17.so(start_thread+0xc5) [0x7ea5]
 > /usr/lib64/libc-2.17.so(clone+0x6d) [0xfe8cd]

I don’t think there is anything in common with the Oracle software code or layer architecture here, except some well known terms (segment, extent, buffer get, buffer pin,…).

I also show a data block dump here just to get an idea:


SQL> select dbms_rowid.rowid_absolute_fno(rowid),dbms_rowid.rowid_block_number(rowid) from demo where rownum=1;

DBMS_ROWID.ROWID_ABSOLUTE_FNO(ROWID) DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
------------------------------------ ------------------------------------
                                   2                                 2908

SQL> alter system dump datafile 2 block 2908;

The dump is in /home/tibero/tibero6/instance/t6a/dump/tracedump/tb_dump_7029_73_31900660.trc:


**Dump start at 2020-04-19 14:54:46
DUMP of BLOCK file #2 block #2908

**Dump start at 2020-04-19 14:54:46
data block Dump[dba=02_00002908(8391516),tsn=0000.0067cf33,type=13,seqno =1]
--------------------------------------------------------------
 sgmt_id=3220  cleanout_tsn=0000.00000000  btxcnt=2
 l1dba=02_00002903(8391511), offset_in_l1=5
 btx      xid                undo           fl  tsn/credit
 00  0000.00.0000  00_00000000.00000.00000  I  0000.00000000
 01  0000.00.0000  00_00000000.00000.00000  I  0000.00000000
--------------------------------------------------------------
Data block dump:
  dlhdr_size=16  freespace=7792  freepos=7892  symtab_offset=0  rowcnt=4
Row piece dump:
 rp 0 8114:  [74] flag=--H-FL--  itlidx=255    colcnt=11
  col 0: [6]
   0000: 05 55 53 45 52 32                               .USER2
  col 1: [6]
   0000: 05 49 5F 43 46 31                               .I_CF1
  col 2: [1]
   0000: 00                                              .
  col 3: [4]
   0000: 03 C2 9F CA                                     ....
  col 4: [6]
   0000: 05 49 4E 44 45 58                               .INDEX
  col 5: [2]
   0000: 01 80                                           ..
  col 6: [9]
   0000: 08 78 78 01 05 13 29 15 00                      .xx...)..
  col 7: [9]
   0000: 08 78 78 01 05 13 29 15 00                      .xx...)..
  col 8: [20]
   0000: 13 32 30 32 30 2D 30 31 2D 30 35 3A 31 39 3A 34 .2020-01-05:19:4
   0010: 31 3A 32 31                                     1:21
  col 9: [6]
   0000: 05 56 41 4C 49 44                               .VALID
  col 10: [2]
   0000: 01 4E                                           .N

Even if there are obvious differences in the implementation, this really looks similar to an Oracle block format, with an ITL list in the block header and row pieces with flags.

If you are looking for a compatible alternative to Oracle Database, you have probably found some databases which try to accept the same SQL and PL/SQL syntax. But this is not sufficient to run an application with minimal changes. Here, with Tibero, I was really surprised to see how closely it copies the Oracle syntax, behavior and features. The dictionary views are similar, with some differences because the implementation is different. Tibero also has an equivalent of ASM and RAC. You can expect other blog posts about it, so do not hesitate to follow the RSS or Twitter feed.


Terraform and Oracle Cloud Infrastructure


Introduction

When you learn a cloud technology like OCI, the one from Oracle, you start building your demo infrastructure with the web interface and numerous clicks. It’s convenient and easy to handle, even more so if you’re used to infrastructure basics: network, routing, firewalling, servers, etc. But when it comes to building complex infrastructures with multiple servers, subnets, rules and databases, it’s more than a few clicks to do. And rebuilding a clone infrastructure (for example for testing purposes) can be a nightmare.

Is it possible to script an infrastructure?

Yes, for sure: most cloud providers have a command line interface and, actually, all the clouds are based on a command line interface with a web console on top of it. But scripting all the commands is not very digestible.

Infrastructure as Code

Why couldn’t we manage an infrastructure as if it were a piece of software? That’s the purpose of “Infrastructure as Code”. The benefits seem obvious: faster deployments, reusable code, automation of infrastructure deployment, scalability, reduced cost with an embedded “drop infrastructure” feature, …

There are multiple tools to do IaC, but Oracle recommends Terraform. And it looks like the best solution for now.

What is Terraform?

The goal of Terraform is to help infrastructure administrators model and provision large and complex infrastructures. It’s not dedicated to OCI, as it supports multiple providers, so if you think about an infrastructure based on OCI and Microsoft Azure (as they are becoming friends), it makes even more sense.

Terraform uses a specific language called HCL, the HashiCorp Configuration Language. Obviously, it’s compatible with code repositories like Git. Templates are available to ease your job when you’re a beginner.

The main steps for terraforming an infrastructure are the following (a minimal command sketch is shown after the list):
1) write your HCL code (describe)
2) preview the execution by reading the configuration (plan)
3) build the infrastructure (apply)
4) eventually delete the infrastructure (destroy)
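
A minimal sketch of the corresponding command sequence, run from the directory that holds the .tf files:

terraform init      # initialize the working directory and download the provider plugins (e.g. the OCI provider)
terraform plan      # preview what would be created, changed or destroyed
terraform apply     # build the infrastructure
terraform destroy   # delete everything created by this configuration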

3 ways of using Terraform with OCI

You can use Terraform by copying the binary onto your computer (Windows/Mac/Linux); it’s quite easy to set up and use (no installation, only one binary). Terraform can also run from a VM already in the cloud.

Terraform is also available in SaaS mode: just sign up on the terraform.io website and you will be able to work with Terraform without installing anything.

You can also use Terraform through Oracle Resource Manager (ORM) inside OCI. ORM is a free service provided by Oracle and based on the Terraform language. ORM manages stacks, each stack being a set of Terraform files you bring to OCI as a zip file. From these stacks, ORM lets you perform the actions you would have done in Terraform: plan, apply and destroy.

Typical use cases

Terraform is quite nice for making cross-platform deployments, building demos, giving people the ability to build an infrastructure as a self-service, making a Proof of Concept, …

Terraform can also target DevOps engineers, giving them the ability to deploy a staging environment, fix the issues and then deploy the production environment reusing the same Terraform configuration.

How does it work?

A Terraform configuration is actually a directory with one or multiple .tf files (depending on your preferences). As HCL is not a scripting language, the blocks in the file(s) do not describe any execution order.

During the various steps previously described, special files and subfolders appear in the working directory: the terraform.tfstate file for the current state and the .terraform directory, a kind of cache.

If you need to script your infrastructure deployment, you can use Python, Bash, Powershell or other tools to call the binary.

To authorize your Terraform binary to create resources in the cloud, you’ll have to provide the API key of an OCI user with enough privileges.

As cloud providers push updates quite often, Terraform keeps your cloud provider’s plugin updated regularly.

Terraform also manages dependencies (for example a VM depending on another one), while independent tasks are run in parallel to speed up the infrastructure deployment.

Some variables can be provided as input (most often through environment variables), for example for naming the compartment. Imagine you want to deploy several test infrastructures isolated from each other.
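
For example, Terraform picks up any environment variable prefixed with TF_VAR_ and maps it to the input variable of the same name (the variable name below is just a hypothetical example):

export TF_VAR_compartment_name=test-infra-01
terraform apply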

Conclusion

Terraform is a great tool to leverage cloud benefits, even for a simple infrastructure. Don’t miss that point!


APEX Connect 2020 – Day 1


This year the APEX Connect conference goes virtual, like all other major IT events, due to the pandemic. Unfortunately it spans only two days, with mixed topics around APEX like JavaScript, PL/SQL and much more. After the welcome speech and the very interesting keynote about “APEX 20.1 and beyond: News from APEX Development” by Carsten Czarski, I decided to attend presentations on the following topics:
– The Basics of Deep Learning
– “Make it faster”: Myths about SQL performance
– Using RESTful Services and Remote SQL
– The Ultimate Guide to APEX Plug-ins
– Game of Fraud Detection with SQL and Machine Learning

APEX 20.1 and beyond: News from APEX Development

Carsten Czarski from the APEX development team shared the evolution of the latest APEX releases up to 20.1, released on April 23rd.
Since APEX 18.1 there are two releases per year. There are no major or minor releases; all are managed at the same level.
Besides those releases, bundle PSEs to fix critical issues are provided.
From the recent features, a couple have retained my attention:
– Faceted search
– Wider integration of Oracle TEXT
– Application backups
– Session timeout warnings
– New URL
And more to come with next releases like:
– Native PDF export
– New download formats
– External data sources
A lot to test and enjoy!

The Basics of Deep Learning

Artificial Intelligence (AI) is now part of our life, mostly without us noticing it. Machine Learning (ML) is part of AI, and Deep Learning (DL) is a specific sub-part of ML.
ML is used in different sectors, for example in:

  • SPAM filters
  • Data Analytics
  • Medical Diagnosis
  • Image recognition

and much more…
DL integrates automated feature extraction, which makes it suited for:

  • Natural Language processing
  • Speech recognition
  • Text to Speech
  • Machine translation
  • Referencing to Text

You can find an example of a DL-based text generator with Talk to Transformer.
It is also heavily used in visual recognition (feature-based recognition). ML depends on the datasets and preset models used, so it’s key to have a large set of data to cover a wide range of possibilities. DL has made a big step forward with Convolutional Neural Networks (by Yann LeCun).
DL is based on complex mathematical models in neural networks at different levels, which use activation functions, model design, hyperparameters, backpropagation, loss functions and optimizers.
You can learn how it is implemented in image recognition at pyimagesearch.com
Another nice example of DL with reinforcement learning is AI learns to park

“Make it faster”: Myths about SQL performance

Performance of the database is a hot topic when it comes to data-centric application development, like with APEX.
The pillars of DB performance are following:
– Performance planning
– Instance tuning
– SQL tuning
To be efficient, performance must be considered at every stage of a project.
A recurring statement is: “Index is GOOD, full table scan is BAD”.
But when is an index better than a full table scan? As a rule of thumb, you can consider it when the selectivity is less than 5%.
To improve the performance there are also options like:
- KIWI (Kill It With Iron), where more hardware should solve the performance issue
- Hints, where you cut branches of the optimizer decision tree to force its choice (which is always the plan with the lowest estimated cost)
Unfortunately there is no golden hint able to improve performance whenever it’s used.

Using RESTful Services and Remote SQL

REST web services are based on URIs returning different types of data like HTML, XML, CSV or JSON.
Those web services are based on request methods:

  • POST to insert data
  • PUT to update/replace data
  • GET to read data
  • DELETE to DELETE data
  • PATCH to update/modify data

The integration of web services in APEX allows making use of data outside of the Oracle database and connecting to services like:
– Jira
– GitHub
– Online Accounting Services
– Google services
– …
The web services module of an ORDS instance provides extensions on top of plain REST which support APEX out of the box but also enable Remote SQL.
Thanks to that, a SQL statement can be sent over REST to the Oracle Database and executed remotely, returning the data formatted as per REST standards.
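
As an illustration, here is a minimal sketch of such a REST-Enabled SQL call against ORDS with curl (hostname, port, schema and credentials are placeholders):

curl -s -X POST "http://localhost:8080/ords/hr/_/sql" \
  -u hr:hr_password \
  -H "Content-Type: application/sql" \
  -d 'select sysdate from dual;'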

The Ultimate Guide to APEX Plug-ins

Even though APEX plug-ins are not trivial to build, they have benefits like:
– Introduction of new functionality
– Modularity
– Reusability
which makes them very interesting.
There are already a lot of plug-ins available, which can be found on apex.world or from professional providers like FOEX.
What is important to look at with plug-ins is support, quality, security and updates.
The main elements of a plug-in are:
– Name
– Type
– Callbacks
– Standard attributes
– Custom attributes
– Files (CSS, JS, …)
– Events
– Information
– Help Text
Plug-ins are also a way to provide tools that improve the APEX developer experience, like APEX Nitro or the APEX Builder extension by FOS.

Game of Fraud Detection with SQL and Machine Learning

With the example of some banking fraud, the investigation method based on deterministic SQL was compared to the method based on probabilistic ML.
Even though the results were statistically close, Supervised Machine Learning (looking for patterns to identify the solutions) gave more accurate ones. In fact, the combination of both methods gave even better results.
The challenge is to gain acceptance from the business for results produced with the help of ML, as they are not based on fully explainable rules.
The Oracle database has embedded ML for free, with specific packages like DBMS_DATA_MINING, for several years now.

The day ended with the most awaited session: Virtual beer!


Handle DB-Links after Cloning an Oracle Database


By Clemens Bleile

After cloning e.g. a production database into a database for development or testing purposes, the DBA has to make sure that no activity in the cloned database has an impact on data in other production databases, because after cloning, jobs may still try to modify production data through e.g. db-links. I.e. scheduled database jobs must not start in the cloned DB and applications connecting to the cloned database must not modify remote production data. Most people are aware of this issue and a first measure is to start the cloned database with the DB parameter

job_queue_processes=0

That ensures that no database job will start in the cloned database. However, before enabling scheduler jobs again, you have to make sure that no remote production data is modified. Remote data is usually accessed through db-links. So the second step is to handle the db-links in the cloned DB.

In a recent project we decided to be strict and drop all database links in the cloned database.
REMARK: Testers and/or developers should create the needed db-links later again pointing to non-production data.
But how do we do that, given that private DB-Links can only be dropped by the owner of the db-link? I.e. even a connection with SYSDBA rights cannot drop private database links:

sys@orcl@orcl> connect / as sysdba
Connected.
sys@orcl@orcl> select db_link from dba_db_links where owner='CBLEILE';
 
DB_LINK
--------------------------------
CBLEILE_DB1
PDB1
 
sys@orcl@orcl> drop database link cbleile.cbleile_db1;
drop database link cbleile.cbleile_db1
                   *
ERROR at line 1:
ORA-02024: database link not found
 
sys@orcl@orcl> alter session set current_schema=cbleile;
 
Session altered.
 
sys@orcl@orcl> drop database link cbleile_db1;
drop database link cbleile_db1
*
ERROR at line 1:
ORA-01031: insufficient privileges

We’ll see later on how to drop the db-links. Before doing that we make a backup of the db-links. That can be achieved with expdp:

Backup of db-links with expdp:

1.) create a directory to store the dump-file:

create directory prod_db_links as '<directory-path>';

2.) create the param-file expdp_db_links.param with the following content:

full=y
INCLUDE=DB_LINK:"IN(SELECT db_link FROM dba_db_links)"

3.) expdp all DB-Links

expdp dumpfile=prod_db_links.dmp logfile=prod_db_links.log directory=prod_db_links parfile=expdp_db_links.param
Username: <user with DATAPUMP_EXP_FULL_DATABASE right>

REMARK: Private db-links owned by SYS are not exported by the command above. But SYS must not own user-objects anyway.

In case the DB-Links have to be restored you can do the following:

impdp dumpfile=prod_db_links.dmp logfile=prod_db_links_imp.log directory=prod_db_links
Username: <user with DATAPUMP_IMP_FULL_DATABASE right>

You may also create a script prod_db_links.sql with all ddl (passwords are not visible in the created script):

impdp dumpfile=prod_db_links.dmp directory=prod_db_links sqlfile=prod_db_links.sql
Username: <user with DATAPUMP_IMP_FULL_DATABASE right>

Finally drop the directory again:

drop directory prod_db_links;

Now that we have a backup, we can drop all db-links. As mentioned earlier, private db-links cannot be dropped by another user, but you can use the following method to drop them:

As procedures run with definer rights by default, we can create a procedure under the owner of the db-link and drop the db-link inside that procedure. SYS has the privileges to execute the procedure. The following example will drop the db-link CBLEILE.CBLEILE_DB1:

select db_link from dba_db_links where owner='CBLEILE';
 
DB_LINK
--------------------------------
CBLEILE_DB1
PDB1

create or replace procedure CBLEILE.drop_DB_LINK as begin
execute immediate 'drop database link CBLEILE_DB1';
end;
/
 
exec CBLEILE.drop_DB_LINK;
 
select db_link from dba_db_links where owner='CBLEILE';
 
DB_LINK
--------------------------------
PDB1

I.e. the db-link CBLEILE_DB1 has been dropped.
REMARK: Using a proxy-user would also be a possibility to connect as the owner of the db-link, but that cannot be automated in a script that easily.
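
For completeness, here is a minimal sketch of that proxy-user alternative (user names and password are just examples):

SQL> alter user cbleile grant connect through system;
SQL> connect system[cbleile]/manager
SQL> drop database link CBLEILE_DB1;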

As we now have a method to drop private db-links, we can go ahead and automate the creation of the drop commands with the following sql-script drop_all_db_links.sql:

set lines 200 pages 999 trimspool on heading off feed off verify off
set serveroutput on size unlimited
column dt new_val X
select to_char(sysdate,'yyyymmdd_hh24miss') dt from dual;
spool drop_db_links_&&X..sql
select 'set echo on feed on verify on heading on' from dual;
select 'spool drop_db_links_&&X..log' from dual;
select 'select count(*) from dba_objects where status='''||'INVALID'||''''||';' from dual;
REM Generate all commands to drop public db-links
select 'drop public database link '||db_link||';' from dba_db_links where owner='PUBLIC';
REM Generate all commands to drop db-links owned by SYS (except SYS_HUB, which is oracle maintained)
select 'drop database link '||db_link||';' from dba_db_links where owner='SYS' and db_link not like 'SYS_HUB%';
PROMPT
REM Generate create procedure commands to drop private db-link, generate the execute and the drop of it.
declare
   current_owner varchar2(32);
begin
   for o in (select distinct owner from dba_db_links where owner not in ('PUBLIC','SYS')) loop
      dbms_output.put_line('create or replace procedure '||o.owner||'.drop_DB_LINK as begin');
      for i in (select db_link from dba_db_links where owner=o.owner) loop
         dbms_output.put_line('execute immediate '''||'drop database link '||i.db_link||''''||';');
      end loop;
      dbms_output.put_line('end;');
      dbms_output.put_line('/');
      dbms_output.put_line('exec '||o.owner||'.drop_DB_LINK;');
      dbms_output.put_line('drop procedure '||o.owner||'.drop_DB_LINK;');
      dbms_output.put_line('-- Seperator -- ');
   end loop;
end;
/
select 'select count(*) from dba_objects where status='''||'INVALID'||''''||';' from dual;
select 'set echo off' from dual;
select 'spool off' from dual;
spool off
 
PROMPT
PROMPT A script drop_db_links_&&X..sql has been created. Check it and then run it to drop all DB-Links.
PROMPT

Running above script generates a sql-script drop_db_links_<yyyymmdd_hh24miss>.sql, which contains all drop db-link commands.

sys@orcl@orcl> @drop_all_db_links
...
A script drop_db_links_20200509_234906.sql has been created. Check it and then run it to drop all DB-Links.
 
sys@orcl@orcl> !cat drop_db_links_20200509_234906.sql
 
set echo on feed on verify on heading on
 
spool drop_db_links_20200509_234906.log
 
select count(*) from dba_objects where status='INVALID';
 
drop public database link DB1;
drop public database link PDB2;
 
create or replace procedure CBLEILE.drop_DB_LINK as begin
execute immediate 'drop database link CBLEILE_DB1';
execute immediate 'drop database link PDB1';
end;
/
exec CBLEILE.drop_DB_LINK;
drop procedure CBLEILE.drop_DB_LINK;
-- Seperator --
create or replace procedure CBLEILE1.drop_DB_LINK as begin
execute immediate 'drop database link PDB3';
end;
/
exec CBLEILE1.drop_DB_LINK;
drop procedure CBLEILE1.drop_DB_LINK;
-- Seperator --
 
select count(*) from dba_objects where status='INVALID';
 
set echo off
 
spool off
 
sys@orcl@orcl>

After checking the file drop_db_links_20200509_234906.sql I can run it:

sys@orcl@orcl> @drop_db_links_20200509_234906.sql
sys@orcl@orcl> 
sys@orcl@orcl> spool drop_db_links_20200509_234906.log
sys@orcl@orcl> 
sys@orcl@orcl> select count(*) from dba_objects where status='INVALID';
 
  COUNT(*)
----------
   1
 
1 row selected.
 
sys@orcl@orcl> 
sys@orcl@orcl> drop public database link DB1;
 
Database link dropped.
 
sys@orcl@orcl> drop public database link PDB2;
 
Database link dropped.
 
sys@orcl@orcl> 
sys@orcl@orcl> create or replace procedure CBLEILE.drop_DB_LINK as begin
  2  execute immediate 'drop database link CBLEILE_DB1';
  3  execute immediate 'drop database link PDB1';
  4  end;
  5  /
 
Procedure created.
 
sys@orcl@orcl> exec CBLEILE.drop_DB_LINK;
 
PL/SQL procedure successfully completed.
 
sys@orcl@orcl> drop procedure CBLEILE.drop_DB_LINK;
 
Procedure dropped.
 
sys@orcl@orcl> -- Seperator --
sys@orcl@orcl> create or replace procedure CBLEILE1.drop_DB_LINK as begin
  2  execute immediate 'drop database link PDB3';
  3  end;
  4  /
 
Procedure created.
 
sys@orcl@orcl> exec CBLEILE1.drop_DB_LINK;
 
PL/SQL procedure successfully completed.
 
sys@orcl@orcl> drop procedure CBLEILE1.drop_DB_LINK;
 
Procedure dropped.
 
sys@orcl@orcl> -- Separator --
sys@orcl@orcl> 
sys@orcl@orcl> select count(*) from dba_objects where status='INVALID';
 
  COUNT(*)
----------
   1
 
1 row selected.
 
sys@orcl@orcl> 
sys@orcl@orcl> set echo off
sys@orcl@orcl> 
sys@orcl@orcl> select owner, db_link from dba_db_links;

OWNER				 DB_LINK
-------------------------------- --------------------------------
SYS				 SYS_HUB

1 row selected.

A log-file drop_db_links_20200509_234906.log has been produced as well.

After dropping all db-links you may do the following checks as well before releasing the cloned database to the testers or the developers:

  • disable all jobs owned by non-Oracle-maintained users. You can use the following SQL to generate the commands in SQL*Plus:

select 'exec dbms_scheduler.disable('||''''||owner||'.'||job_name||''''||');' from dba_scheduler_jobs where enabled='TRUE' and owner not in (select username from dba_users where oracle_maintained='Y');
  • check all directories in the DB and make sure the directory-paths do not point to shared production folders

column owner format a32
column directory_name format a32
column directory_path format a64
select owner, directory_name, directory_path from dba_directories order by 1;
  • mask sensitive data, which should not be visible to testers and/or developers.

At that point you can be quite sure that your cloned database will not affect production data, and you can set
job_queue_processes>0
again and provide access to the cloned database to the testers and/or developers.
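A minimal sketch of that last step (the value 100 is an assumption; restore whatever the source database was configured with):

show spparameter job_queue_processes
alter system set job_queue_processes=100 scope=both;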

Cet article Handle DB-Links after Cloning an Oracle Database est apparu en premier sur Blog dbi services.

APEX Connect 2020 – Day 2


For the second and last virtual conference day, I decided to attend presentations on the following topics:
– Universal Theme new features
– Oracle APEX Source Code Management and Release Lifecycle
– Why Google Hates My APEX App
– We ain’t got no time! Pragmatic testing with utPLSQL
– Why APEX developers should know FLASHBACK
– ORDS – Behind the scenes … and more!
and the day ended with a keynote from Kellyn Pot’Vin-Gorman about “Becoming – A Technical Leadership Story”

Universal Theme new features

What is the Universal Theme (UT)?
The user interface of APEX integrated since APEX version 5.0 also known as Theme 42 (“Answer to the Ultimate Question of Life, the Universe, and Everything” – The Hitchhiker’s Guide to the Galaxy by Douglas Adams)
New features introduced with UT:
– Template options
– Font APEX
– Full modal dialog page
– Responsive design
– Mobile support
APEX 20.1, released in April, ships a new version 1.5 of the UT. With that new version, several related components such as the jQuery libraries, Oracle JET and Font APEX have changed as well, so check the release notes.
One of the most relevant new features is the Mega Menu, which introduces a new navigation style that is useful if you need to maximize the display area of your application pages. You can check the UT sample app embedded in APEX to test it.
Some other changes are:
– Theme Roller enhancement
– Application Builder Redwood UI
– Interactive Grid with Control Break editing
– Friendly URL
Note also that Font Awesome is no longer natively supported (since APEX 19.2) so consider moving to Font APEX.
You can find more about UT online with the dedicated page.

Oracle APEX Source Code Management and Release Lifecycle

Source code management with APEX is always a challenging question for developers used to working with other programming languages and source code version control systems like GitHub.
There are different aspects to be considered like:
– 1 central instance for all developers or 1 local instance for each developer?
– Export full application or export pages individually?
– How to best automate application exports?
There is no universal answer to these questions. This must be considered based on the size of the development team and the size of the project.
There are different tools provided by APEX to manage export of the applications:
– ApexExport java classes
– Page UI
– APEX_EXPORT package
– SQLcl
But you need to be careful about workspace and application IDs when you run multiple instances.
Don’t forget that merge changes are not supported in APEX!
You should have a look at the Oracle APEX Life Cycle Management white paper for further insight.
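As a minimal sketch of the APEX_EXPORT option mentioned above (the application ID 100 is an assumption and the package requires APEX 5.1.4 or later):

declare
  l_files apex_t_export_files;
begin
  -- export application 100 as a single SQL script
  l_files := apex_export.get_application(p_application_id => 100);
  -- l_files(1).name holds the file name, l_files(1).contents the CLOB to push to your VCS
  dbms_output.put_line(l_files(1).name);
end;
/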

Why Google Hates My APEX App

When publishing a public web site, Google provides different tools to help you get more out of it, based on:
– Statistics
– Promotion
– Search
– Ads (to get money back)
When checking Google Analytics for the statistics of an APEX application, you realize that the outcome doesn't really reflect the content of the APEX application, especially in terms of pages. This is mainly due to the way APEX manages page parameters in the f?p= procedure call. That call is quite different from standard URLs, where parameters are given by "&" (which the Google tools are looking for) and not ":".
Let's hope this is going to improve with the new Friendly URL feature introduced by APEX 20.1.

We ain’t got no time! Pragmatic testing with utPLSQL

Unit testing should be considered right from the beginning while developing new PL/SQL packages.
utPLSQL is an open source PL/SQL package that can help to unit test your code. Tests created as part of the development process deliver value during implementation.
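As a minimal illustration of the utPLSQL v3 annotation style (the package name and the tested expression are assumptions):

create or replace package test_demo as
  --%suite(Demo suite)

  --%test(ascii of * is 42)
  procedure ascii_of_star;
end test_demo;
/
create or replace package body test_demo as
  procedure ascii_of_star is
  begin
    ut.expect(ascii('*')).to_equal(42);
  end ascii_of_star;
end test_demo;
/
-- run the suite:
-- exec ut.run('test_demo');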
What are the criteria of choice for test automation?
– Risk
– Value
– Cost efficiency
– Change probability
Unit testing can be integrated into test automation which is of great value in the validation of your application.
If you want to know more about test automation you can visit the page of Angie Jones.

Why APEX developers should know FLASHBACK

For most people Flashback is an emergency procedure, but it is in fact much more than that!
APEX developers know about flashback thanks to the "restore as of" functionality on pages in the App Builder.
Flashback is provided at different levels:

  1. Flashback query: allows restoring the data associated with a specific query based on the SCN. This can be useful for unit testing.
  2. Flashback session: allows flashing back all queries of the session. By default up to 900 seconds in the past (undo retention parameter).
  3. Flashback transaction: allows rolling back a committed transaction thanks to its transaction ID (XID) with dbms_transaction.undo_transaction.
  4. Flashback drop: allows recovering dropped objects thanks to the user recycle bin. Dropped objects are kept as long as space is available (advice: keep 20% of free space). BEWARE! This does not work for truncated objects.
  5. Flashback table: allows recovering a table to a given point in time. It is only applicable to data and cannot help in case of DDL or a drop.
  6. Flashback database: allows restoring the database to a given point in time based on restore points. This is only for DBAs. It can be useful to roll back an APEX application deployment, as a lot of objects are changed. As it works with pluggable databases, it can be used to produce copies to be distributed to individual XE instances for multiple developers.
  7. Data archive: allows recovering based on audit history. It's secure and efficient and can be imported from existing application audits. It's now FREE (unless you use the compression option).

The different flashback options can be used to roll back mistakes, but not only that. They can also be used for unit testing or for reproducing issues. Nevertheless, you should always be careful when using commands like DROP and even more so TRUNCATE.
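As a minimal sketch of options 1 (flashback query) and 5 (flashback table) above, where the table name and the 10-minute window are assumptions, and flashback table requires row movement:

-- flashback query: read the data as it was 10 minutes ago
select * from my_table as of timestamp (systimestamp - interval '10' minute);

-- flashback table: bring the table itself back to that point in time
alter table my_table enable row movement;
flashback table my_table to timestamp (systimestamp - interval '10' minute);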

ORDS – Behind the scenes … and more!

ORDS provides multiple functionalities:
– RESTful services for the DB
– Web Listener for APEX
– Web Client for the DB
– DB Management REST API
– Mongo style API for the DB
Regarding APEX Web Listener, EPG and mod_plsql are deprecated so ORDS is the only option for the future.
ORDS integrates into different architectures allowing to provide isolation like:
– APEX application isolation
– REST isolation
– LB whitelists
With APEX there are 2 options to use RESTful services:
– Auto REST
– ORDS RESTful services
Developers can choose the best-suited one according to their needs.
The most powerful feature is REST enabled SQL.
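As a minimal sketch of the Auto REST option (the schema name and URL mapping are assumptions):

begin
  -- REST-enable a schema so that its objects can then be auto-REST enabled
  ords.enable_schema(
    p_enabled             => TRUE,
    p_schema              => 'HR',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'hr',
    p_auto_rest_auth      => FALSE);
  commit;
end;
/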

Becoming – A Technical Leadership Story

Being a leader is defined by different streams:
-Influencing others
-Leadership satisfaction
-Technical leadership
and more…
A couple of thoughts to be kept:
– Leaders are not always managers.
– Mentors are really important because they talk to you, not about you like sponsors do.
– Communication is more than speaking
But what is most important, from my point of view, is caring about others. How about you?

Thanks to the virtualization of the conference, all the presentations have been recorded, so stay tuned on DOAG and you will be able to see those and much more! Take some time and watch as much as possible because everything is precious learning. Thanks a lot to the community.
Keep sharing and enjoy APEX!

Cet article APEX Connect 2020 – Day 2 est apparu en premier sur Blog dbi services.

20c: AWR now stores explain plan predicates


By Franck Pachot

.
In a previous post https://blog.dbi-services.com/awr-dont-store-explain-plan-predicates/ I explained this limitation in gathering filter and access predicates by Statspack and then AWR, because of old bugs about reverse parsing of predicates. Oracle listens to its customers through support (enhancement requests), through the community (votes on database ideas), and through the product managers who participate in User Groups and the ACE program. And here it is: in 20c the predicates are collected by AWR and visible with DBMS_XPLAN and AWRSQRPT reports.

I’ll test with a very simple query:


set feedback on sql_id echo on pagesize 1000

SQL> select * from dual where ascii(dummy)=42;

no rows selected

SQL_ID: g4gx2zqbkjwh1

I used the "FEEDBACK ON SQL_ID" feature to get the SQL_ID.

Because this query is fast, it will not be gathered by AWR except if I ‘color’ it:


SQL> exec dbms_workload_repository.add_colored_sql('g4gx2zqbkjwh1');

PL/SQL procedure successfully completed.

Coloring a statement is the AWR feature to use when you want to get a statement always gathered, for example when you have optimized it and want to compare the statistics.

Now running the statement between two snapshots:


SQL> exec dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

SQL> select * from dual where ascii(dummy)=42;

no rows selected

SQL> exec dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

Here, I’m sure it has been gathered.

Now checking the execution plan:


SQL> select * from dbms_xplan.display_awr('g4gx2zqbkjwh1');

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID g4gx2zqbkjwh1
--------------------
select * from dual where ascii(dummy)=42

Plan hash value: 272002086

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ASCII("DUMMY")=42)


18 rows selected.

Here I have the predicate. This is a silly example but the predicate information is very important when looking at a large execution plan trying to understand the cardinality estimation or the reason why an index is not used.

Of course, this is also visible from the ?/rdbms/admin/awrsqrpt report.

What if you upgrade?

AWR gathers the SQL plan only when it is not already there. Then, when we upgrade to 20c, only the new plans will get the predicates. Here is an example where I simulate the pre-20c behaviour with "_cursor_plan_unparse_enabled"=false:


SQL> alter session set "_cursor_plan_unparse_enabled"=false;

Session altered.

SQL> exec dbms_workload_repository.add_colored_sql('g4gx2zqbkjwh1');

PL/SQL procedure successfully completed.

SQL> exec dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

SQL> select * from dual where ascii(dummy)=42;

no rows selected

SQL> exec dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

SQL> select * from dbms_xplan.display_awr('g4gx2zqbkjwh1');

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID g4gx2zqbkjwh1
--------------------
select * from dual where ascii(dummy)=42

Plan hash value: 272002086

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

13 rows selected.

No predicate here. Even If I re-connect to reset the “_cursor_plan_unparse_enabled”:


SQL> connect / as sysdba
Connected.
SQL> exec dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

SQL> select * from dual where ascii(dummy)=42;

no rows selected

SQL> exec dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

SQL> select * from dbms_xplan.display_awr('g4gx2zqbkjwh1');

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID g4gx2zqbkjwh1
--------------------
select * from dual where ascii(dummy)=42

Plan hash value: 272002086

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

13 rows selected.

This will be the situation after upgrade.

If you want to re-gather all sql_plans, you need to purge the AWR repository:


SQL> execute dbms_workload_repository.drop_snapshot_range(1,1e22);

PL/SQL procedure successfully completed.

SQL> execute dbms_workload_repository.purge_sql_details();

PL/SQL procedure successfully completed.

SQL> commit;

This clears everything, so I do not recommend doing that at the same time as the upgrade, as you may want to compare some performance figures with the past. Anyway, we have time and maybe this fix will be backported to 19c.

There are very small chances that this fix is ported to Statspack, but you can do it yourself as I mentioned in http://viewer.zmags.com/publication/dd9ed62b#/dd9ed62b/36 ("on Improving Statspack Experience") with something like:


sed -i -e 's/ 0 -- should be//' -e 's/[(]2254299[)]/--&/' $ORACLE_HOME/rdbms/admin/spcpkg.sql

Cet article 20c: AWR now stores explain plan predicates est apparu en premier sur Blog dbi services.

Oracle Database Appliance: which storage capacity to choose?


Introduction

If you're considering ODA for your next platform, you surely already appreciate the simplicity of the offer: 3 models with few options, this is definitely easy to choose from.

One of the other benefits is the 5-year hardware support, and combined with software updates generally available for ODAs up to 7 years old, you can keep your ODA running even longer for non-critical databases and/or if you have a strong Disaster Recovery solution (including Data Guard or Dbvisit standby). Some of my customers are still using X4-2s and are confident in their ODAs because they have been quite reliable across the years.

Models and storage limits

One of the main drawbacks of the ODA: it doesn't have unlimited storage. Disks are local NVMe SSDs (or in a dedicated enclosure), and it's not possible (technically possible but not recommended) to add storage through a NAS connection.

3 ODA models are available: X8-2S and X8-2M are one-node ODAs, while X8-2HA is a two-node ODA with DAS storage including SSDs and/or HDDs (High Performance or High Capacity version).

Please refer to my previous blog post for more information about the current generation.

Storage on ODA is always dedicated to database-related files: datafiles, redo logs, controlfiles, archivelogs, flashback logs, backups (if you do them locally on the ODA), etc. The Linux system, Oracle products (Grid Infrastructure and Oracle database engines), home folders and so on reside on internal M.2 SSD disks, large enough for normal use.

X8-2S/X8-2M storage limit

ODA X8-2S is the entry-level ODA. It only has one CPU, but with 16 powerful cores and 192GB of memory it's anything but a low-end server. 10 empty storage slots are available in the front panel, but don't expect to extend the storage. This ODA is delivered with 2 disks and doesn't support adding more disks. That's it. With the two 6.4TB disks, you'll have a RAW capacity of 12.8TB.

ODA X8-2M is much more capable than its little brother. Physically identical to the X8-2S, it has two CPUs and twice the amount of RAM. This 32-core server fitted with 384GB of RAM is a serious player. It's still delivered with two 6.4TB disks but, unlike the S version, all the 10 empty storage slots can be populated to reach a stunning 76.8TB of RAW storage. This is still not unlimited, but the limit is actually quite high. Disks can be added in pairs, so you can have 2-4-6-8-10-12 disks for various configurations and for a maximum of 76.8TB RAW capacity. Only disks dedicated to the ODA are suitable, and don't expect to put in bigger disks, as it only supports the same 6.4TB disks as those embedded with the base server.

RAW capacity means without redundancy, and you will lose half of the capacity with ASM redundancy. It's not possible to run an ODA without redundancy, if you think about it. ASM redundancy is the only way to secure data, as there is no RAID controller inside the server. You already know that disk capacity and real capacity always differ, so Oracle included several years ago in the documentation the usable capacity depending on your configuration. The usable capacity includes reserved space for a single disk failure (15% starting from 4 disks).

On base ODAs (X8-2S and X8-2M with 2 disks only), the usable storage capacity is actually 5.8TB and no space is reserved for disk failure: if a disk fails, there is no way to rebuild redundancy as only one disk survives.

Usable storage is not database storage, don’t miss that point. You’ll need to split this usable storage between DATA area and RECO area (actually ASM diskgroups). Most often, RECO is sized between 10% and 30% of usable storage.

Here is a table with various configurations. Note that I didn’t include ASM high redundancy configurations here, I’ll explain that later.

Nb disks  Disk size TB  RAW cap. TB  Official cap. TB  DATA ratio  DATA TB  RECO TB
       2           6.4         12.8               5.8         90%     5.22     0.58
       2           6.4         12.8               5.8         80%     4.64     1.16
       2           6.4         12.8               5.8         70%     4.06     1.74
       4           6.4         25.6               9.9         90%     8.91     0.99
       4           6.4         25.6               9.9         80%     7.92     1.98
       4           6.4         25.6               9.9         70%     6.93     2.97
       6           6.4         38.4              14.8         90%    13.32     1.48
       6           6.4         38.4              14.8         80%    11.84     2.96
       6           6.4         38.4              14.8         70%    10.36     4.44
       8           6.4         51.2              19.8         90%    17.82     1.98
       8           6.4         51.2              19.8         80%    15.84     3.96
       8           6.4         51.2              19.8         70%    13.86     5.94
      10           6.4         64                24.7         90%    22.23     2.47
      10           6.4         64                24.7         80%    19.76     4.94
      10           6.4         64                24.7         70%    17.29     7.41
      12           6.4         76.8              29.7         90%    26.73     2.97
      12           6.4         76.8              29.7         80%    23.76     5.94
      12           6.4         76.8              29.7         70%    20.79     8.91

X8-2HA storage limit

Storage is more complex on X8-2HA. If you’re looking for complete information about its storage, review the ODA documentation for all the possibilities.

Briefly, X8-2HA is available in two flavors: High Performance (HP), the one I highly recommend, or High Capacity (HC), which is nice if you have really big databases you want to store on a single ODA. But this High Capacity version makes use of spinning disks to achieve such an amount of TB, which is definitely not the best solution for performance. The 2 nodes of this ODA are empty: no disk in the front panel, just empty space. All data disks are in a separate enclosure connected to both nodes with SAS cables. Depending on your configuration, you'll have 6 to 24 SSDs (HP) or a mix of 6 SSDs and 18 HDDs (HC). When your first enclosure is filled with disks, you can also add another storage enclosure of the same kind to eventually double the total capacity. Usable storage ranges from 17.8TB to 142.5TB for HP, and from 114.8TB to 230.6TB for HC.

Best practice for storage usage

First you should consider that ODA storage is high-performance storage for high database throughput. Thus, storing backups on the ODA is nonsense. Backups are files written once and mostly destined to be erased without ever being used. Don't lose precious TB for that. Moreover, if backups are done in the FRA, they are actually located on the same disks as DATA. That's why most configurations will be done with 10% to 20% for RECO, not more, because we definitely won't put backups on the same disks as DATA. 10% for RECO is a minimum; I wouldn't recommend setting less than that, the Fast Recovery Area always being a problem if too small.

During deployment you'll have to choose between NORMAL or HIGH redundancy. NORMAL is quite similar to RAID 1, but at the block level and without requiring disk parity (you need 2 or more disks). HIGH is available starting from 3 disks and makes each block exist 3 times on 3 different disks. HIGH seems better, but you lose even more precious space, and it doesn't protect you from other failures like a disaster in your datacenter or user errors. Most of the failure protection systems embedded in the servers actually consist of doubling the components: power supplies, network interfaces, system disks, and so on. So increasing the security of block redundancy without increasing the security of the other components is not necessary in my opinion. The real solution for increased failure protection is Data Guard or Dbvisit: 2 ODAs, in 2 different geographical regions, with databases replicated from one site to the other.

Estimate your storage needs for the next 5 years, and even more

Are you able to do that? It's not that simple. Most of the time you can estimate for the next 2-3 years, but more than that is highly uncertain. Maybe a new project will start and will require much more storage? Maybe you will have to provision more databases for testing purposes? Maybe your main software will leave Oracle to go to MS SQL or PostgreSQL in 2 years? Maybe a new CTO will arrive, decide that Oracle is too expensive and build a plan to get rid of it? We never know what's going to happen over such a long time. But at least you can provide an estimation with all the information you have now and your own margin.

Which margin should I choose?

You probably plan to monitor the free space on your ODA. Based on the classic thresholds, more than 85% of disk usage is something you should not reach, because you may not have a solution for expanding the storage. In my opinion, 75% is a good limit you shouldn't exceed on an ODA. So consider 25% less usable space than available when you do your calculations.

Get bigger to last longer

I don't like wasting money or resources on things that don't need it, but in this particular case, I mean on ODA, after years working on X3-2, X4-2 and newer versions, I strongly advise choosing the maximum number of extensions you can. Maybe not 76TB on an ODA X8-2M if you only need 10TB, but 50TB is definitely more secure for 5 years and more. Buying new extensions could be challenging after 3 or 4 years, because you have no guarantee that these extensions will still be available. You can live with memory or CPU contention, but without enough disk space, it's much more difficult. Order your ODA fully loaded to make sure no extension will be needed.

The more disks you get, the faster and more secure you are

Last but not least, having more disks on your ODA maximizes the throughput, because ASM mirrors and stripes blocks across all the disks. For sure, on NVMe disks you probably won't use all that bandwidth. More disks also add more security for your data. Losing one disk in a 4-disk ODA requires the rebalancing of 25% of your data to the 3 safe disks, and rebalancing is not immediate. Losing one disk in an 8-disk ODA requires the rebalancing of much less data, actually 12.5%, assuming you have the same amount of data on the 2 configurations.

A simple example

You need a single-node ODA with expandable storage. So ODA X8-2M seems fine.

You have an overview of your databases' growth trend and plan to double the size in 5 years. Starting from 6TB, you plan to reach 12TB at a maximum. As you are aware of the threshold you shouldn't exceed, you know that you'll need 16TB of usable space for DATA (maximum of 75% of disk space used). You want to make sure to have enough FRA, so you plan to set the DATA/RECO ratio to 80%/20%. Your RECO should then be set to 4TB, and your ODA disk configuration should have at least 20TB of usable disk space. An 8-disk ODA offers 19.8TB of usable space: not enough. A 10-disk ODA offers 24.7TB of usable space for 19.76TB of DATA and 4.94TB of RECO, 23% more than needed, a comfortable additional margin. And don't hesitate to take a 12-disk ODA (1 more extension) if you want to secure your choice and be ready for unplanned changes.
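The same arithmetic as a small SQL*Plus sketch, using the figures assumed in this example:

-- inputs: 6TB today, x2 growth over 5 years, 75% maximum disk usage, 80%/20% DATA/RECO split
define data_now = 6
define growth = 2
define max_usage = 0.75
define data_ratio = 0.80

select &data_now * &growth / &max_usage               as needed_for_data_tb,
       &data_now * &growth / &max_usage / &data_ratio as needed_usable_tb
  from dual;

-- => 16TB for DATA and 20TB of total usable space: a 10-disk ODA (24.7TB usable) fits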

Conclusion

Storage on ODA is quite expensive, but don’t forget that you may not find a solution for an ODA with insufficient storage. Take the time to make your calculation, keep a strong margin, and think long-term. Being long-term is definitely the purpose of an ODA.

Cet article Oracle Database Appliance: which storage capacity to choose? est apparu en premier sur Blog dbi services.


Install & configure a Nagios Server


What is Nagios ?

“Nagios is a powerful monitoring system that enables organizations to identify and resolve IT infrastructure problems before they affect critical business processes.” https://www.nagios.org/

In simple words, you can monitor your servers (Linux, MS SQL, etc.) and databases (Oracle, SQL Server, PostgreSQL, MySQL, MariaDB, etc.) with Nagios.

Nagios architecture

 

We use the free version !!! 😀

 

VM configuration :

OS     : CentOS Linux 7 
CPU    : Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz 
Memory : 3GB 
Disk   : 15GB

What we need :

  1. We need Nagios Core. This is the brain of our Nagios server. All the configuration will be done in this part (set the contacts, set the notification messages, etc.)
  2. We must install Nagios plugins. Plugins are standalone extensions to Nagios Core that make it possible to monitor anything and everything with Core. Plugins process command-line arguments, perform a specific check, and then return the results to Nagios Core
  3. The NRPE addon is designed to allow you to execute Nagios plugins on remote Linux/Unix machines. The main reason for doing this is to allow Nagios to monitor "local" resources (like CPU load, memory usage, etc.) on remote machines. Since these resources are not usually exposed to external machines, an agent like NRPE must be installed on the remote Linux/Unix machines.

Preconditions

All below installation are as root user.

Installation steps

  1. Install Nagios Core (I will not explain it here because their documentation is complete)
    support.nagios.com/kb/article/nagios-core-installing-nagios-core-from-source-96.html#CentOS
  2. Install Nagios Plugins

    support.nagios.com/kb/article/nagios-core-installing-nagios-core-from-source-96.html#CentOS

  3. Install NRPE
    https://support.nagios.com/kb/article.php?id=515
  4. Now you must decide which type of database you want to monitor. Then install the check_health plugin for it (you can install all of them if you want)
    Here we install the Oracle and SQL Server check_health
Install and configure Oracle Check_health

You need an Oracle client to communicate with an Oracle instance

  • Download and install check_oracle_health (https://labs.consol.de/nagios/check_oracle_health/index.html)
    wget https://labs.consol.de/assets/downloads/nagios/check_oracle_health-3.2.1.2.tar.gz 
    tar xvfz check_oracle_health-3.2.1.2.tar.gz 
    cd check_oracle_health-3.2.1.2 
    ./configure 
    make 
    make install
  • Download and install Oracle Client (here we installed 12c version – https://www.oracle.com/database/technologies/oracle12c-linux-12201-downloads.html)
  • Create check_oracle_health_wrapped file in /usr/local/nagios/libexec
  • Set parameters+variables needed to start the plugin check_oracle_health
    #!/bin/sh
    ### -------------------------------------------------------------------------------- ###
    ### We set some environment variable before to start the check_oracle_health script.
    ### -------------------------------------------------------------------------------- ###
    ### Set parameters+variables needed to start the plugin check_oracle_health:
    
    export ORACLE_HOME=/u01/app/oracle/product/12.2.0/client_1
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib
    export TNS_ADMIN=/usr/local/nagios/tns
    export PATH=$PATH:$ORACLE_HOME/bin
    
    export ARGS="$*"
    
    ### start the plugin check_oracle_health with the arguments of the Nagios Service:
    
    /usr/local/nagios/libexec/check_oracle_health $ARGS
  • Create the tns folder in /usr/local/nagios, then create a tnsnames.ora file in it and add a TNS entry
    DBTEST =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 172.22.10.2)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SID = DBTEST)
        )
      )
  • Test the connection
    [nagios@vmnagios objects]$ check_oracle_health_wrapped --connect DBTEST --mode tnsping

 

Install and configure MSSQL Check_health
  • Download and install the check_mssql_health
    wget https://labs.consol.de/assets/downloads/nagios/check_mssql_health-2.6.4.16.tar.gz
    tar xvfz check_mssql_health-2.6.4.16.tar.gz
    cd check_mssql_health-2.6.4.16
    ./configure 
    make 
    make install
  • Download and install freetds  (www.freetds.org/software.html)
    wget ftp://ftp.freetds.org/pub/freetds/stable/freetds-1.1.20.tar.gz
    tar xvfz freetds-1.1.20
    ./configure --prefix=/usr/local/freetds
    make
    make install
    yum install freetds freetds-devel gcc make perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker
  • Download and install DBD-Sybase
    wget http://search.cpan.org/CPAN/authors/id/M/ME/MEWP/DBD-Sybase-1.15.tar.gz
    tar xvfz DBD-Sybase-1.15
    cd DBD-Sybase-1.15
    export SYBASE=/usr/local/freetds
    perl Makefile.PL
    make
    make install
  • Add your instance information in /usr/local/freetds/etc/freetds.conf, for example:
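    The section name, host and port below are assumptions; adapt them to your SQL Server instance:
    [MSSQLINST1]
            host = 172.22.10.3
            port = 1433
            tds version = 7.4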

Configuration steps

  1. Set your domain name
    [root@vmnagios ~]# cat /etc/resolv.conf
    # Generated by NetworkManager
    search exemple.ads
    nameserver 10.175.222.10
  2. Set up SMTP and Postfix
    [root@vmnagios ~]# /etc/postfix/main.cf
    ### ----------------- added by dbi-services ---------------- ###
    relayhost = smtp.exemple.net
    smtp_generic_maps = hash:/etc/postfix/generic
    sender_canonical_maps = hash:/etc/postfix/canonical
    -------------------------------------------------------- ###
  3. Configure Postfix
    [root@vmnagios ~]# cat /etc/postfix/generic
    @localdomain.local dba@exemple.com
    @.exemple.ads dba@exemple.com
    
    [root@vmnagios ~]# cat /etc/postfix/canonical
    root dba@exemple.com
    nagios dba@exemple.com

    Afterwards we need to generate generic.db and canonical.db:
    postmap /etc/postfix/generic
    postmap /etc/postfix/canonical

 

Done ! 🙂

Now you have a new Nagios server. All you need to do is configure your clients and create the corresponding config files on your brand new Nagios server.
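As an illustration, a minimal client definition on the Nagios server could look like the following sketch; the host name, address, templates and command wiring are assumptions to adapt to your environment:

define command {
    command_name    check_oracle_health_wrapped
    command_line    /usr/local/nagios/libexec/check_oracle_health_wrapped --connect $ARG1$ --mode $ARG2$
}

define host {
    use         linux-server
    host_name   dbserver01
    address     172.22.10.2
}

define service {
    use                  generic-service
    host_name            dbserver01
    service_description  Oracle DBTEST tnsping
    check_command        check_oracle_health_wrapped!DBTEST!tnsping
}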

Cet article Install & configure a Nagios Server est apparu en premier sur Blog dbi services.

Oracle Standard Edition on AWS ☁ socket arithmetic


By Franck Pachot

.
Note that I've written about Oracle Standard Edition 2 licensing before, but a few rules have changed. This is written in May 2020.
TL;DR: 4 vCPUs count for 1 socket and 2 sockets count for 1 server, whether hyper-threading is enabled or not.

The SE2 rules

I think the Standard Edition rules are quite clear now: maximum server capacity, cluster limit, minimum NUP, and processor metric. Oracle has them in the Database Licensing guideline.

2 socket capacity per server

Oracle Database Standard Edition 2 may only be licensed on servers that have a maximum capacity of 2 sockets.

We are talking about capacity, which means that even when you remove a processor from a 4-socket server, it is still a 4-socket server. You cannot run Standard Edition if the server has the capacity for more than 2 sockets, whether there is a processor in the socket or not.

2 socket used per cluster

When used with Oracle Real Application Clusters, Oracle Database Standard Edition 2 may only be licensed on a maximum of 2 one-socket servers

This one is not about capacity. You can remove a processor from a bi-socket server to make it a one-socket server, and then build a cluster running RAC in Standard Edition with 2 of those nodes. The good thing is that you can even use an Oracle hypervisor (OVM or KVM), LPARs or Zones to pin one socket only for the usage of Oracle, and use the other for something else. The bad thing is that, as of 19c, RAC with Standard Edition is not possible anymore. You can run the new SE HA, which allows more on one node (up to the 2-socket rule) because the other node is stopped (the 10-day rule).

At least 10 NUP per server

The minimum when licensing by Named User Plus (NUP) metric is 10 NUP licenses per server.

Even when you didn’t choose the processor metric you need to count the servers. For example, if your vSphere cluster runs on 4 bi-socket servers, you need to buy 40 NUP licenses even if you can count a smaller population of users.

Processor metric

When licensing Oracle programs with … Standard Edition in the product name, a processor is counted equivalent to a socket; however, in the case of multi-chip modules, each chip in the multi-chip module is counted as one occupied socket.

A socket is a physical slot where you can put a processor. This is what counts for the "2 socket capacity per server" rule. An occupied socket is one with a processor, physically present or pinned with an accepted hard partitioning hypervisor method (Solaris Zones, IBM LPAR, Oracle OVM or KVM, …). This is what counts for the "2 sockets occupied per cluster" rule. Intel is not concerned by the multi-chip module exception.

What about the cloud?

So, the rules mention servers, sockets and processors. How does this apply to modern computing where you provision a number of vCPU without knowing anything about the underlying hardware? In the AWS shared responsibility model you are responsible for the Oracle Licences (BYOL – Bring Your Own Licences) but they are responsible for the physical servers.

Oracle established the rules (which may or may not be referenced by your contract) in the Licensing Oracle Software in the Cloud Computing Environment (for educational purposes only – refer to your contract if you want the legal interpretation).

This document is only for AWS and Azure. There's no agreement with Google Cloud, so you cannot run Oracle software under license there. Same with your local cloud provider: you are reduced to hosting on physical servers. The Oracle Public Cloud has its own rules and you can license Standard Edition on a compute instance with up to 16 OCPU, and one processor license covers 4 OCPU (which is 2 hyper-threaded Intel cores).

Oracle authorizes running on those 2 competitor public clouds. But they generally double the licenses required on competitor platforms in order to be cheaper on their own. They did that on-premises a long time ago for IBM processors and they do that now for Amazon AWS and Microsoft Azure.

So, the arithmetic is based on the following idea: 4 vCPUs count for 1 socket and 2 sockets count for 1 server.

Note that there was a time when it was 1 socket = 2 cores, which meant 4 vCPUs when hyper-threading is enabled but 2 vCPUs when not. They have changed the document and we now count vCPUs without looking at cores or threads. Needless to say, for optimal performance/price in SE you should disable hyper-threading in AWS in order to have your processes running on full cores, and use instance caging to limit the user sessions in order to leave a core available for the background processes.
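As a side note, a minimal instance caging sketch (the values are assumptions for a 4 vCPU shape with hyper-threading disabled, keeping one CPU free for the background processes):

-- instance caging only kicks in when a resource manager plan is active
alter system set resource_manager_plan='DEFAULT_PLAN' scope=both;
-- cap the instance below the number of CPUs exposed by the shape (assumed value)
alter system set cpu_count=3 scope=both;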

Here are the rules:

  • 2 socket capacity per server: maximum 8 vCPU
  • 2 socket occupied per cluster: forget about RAC in Standard Edition and RAC in AWS
  • Minimum NUP: 10 NUP are ok to cover the maximum allowed 8 vCPU
  • Processor metric: 1 license covers 4 vCPU

Example

The maximum you can use for one database:
2 SE2 processor licences = 1 server = 2 sockets = 8 AWS vCPU
2 SE2 processor licences = 8 cores = 16 OCPU in Oracle Cloud

The cheaper option means smaller capacity:
1 SE2 processor licences = 1 sockets = 4 AWS vCPU
1 SE2 processor licences = 4 cores = 8 OCPU in Oracle Cloud

As you can see, the difference between Standard and Enterprise Edition in the clouds is much smaller than on-premises, where a socket can run more and more cores. The per-socket licensing was made at a time when processors had only a few cores. With the evolution, Oracle realized that SE was too cheap. They caged SE2 usage to 16 threads per database and limit it further on their competitors' clouds. Those limits are not technical but governed by revenue management: they provide a lot of features in SE but also need to ensure that large companies still require EE.

But…

… there’s always an exception. It seems that Amazon has a special deal to allow Oracle Standard Edition on AWS RDS with EC2 instances up to 16 vCPU:

You know that I always try to test what I'm writing in a blog post. So, at least as of the publishing date and with the tested versions, it contains some verified facts.
I started an AWS RDS Oracle database on db.m4.4xlarge, which is 16 vCPUs. I've installed the Instant Client on my bastion host to access it:


sudo yum localinstall http://yum.oracle.com/repo/OracleLinux/OL7/oracle/instantclient/x86_64/getPackage/oracle-instantclient19.5-basic-19.5.0.0.0-1.x86_64.rpm
sudo yum localinstall http://yum.oracle.com/repo/OracleLinux/OL7/oracle/instantclient/x86_64/getPackage/oracle-instantclient19.5-sqlplus-19.5.0.0.0-1.x86_64.rpm

This is Standard Edition 2:


[ec2-user@ip-10-0-2-28 ~]$ sqlplus admin/FranckPachot@//database-1.ce45l0qjpoax.us-east-1.rds.amazonaws.com/ORCL

SQL*Plus: Release 19.0.0.0.0 - Production on Tue May 19 21:32:47 2020
Version 19.5.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Last Successful login time: Tue May 19 2020 21:32:38 +00:00

Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.7.0.0.0

On 16 vCPU:


SQL> show parameter cpu_count

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cpu_count                            integer     16

On AWS:

SQL> host curl http://169.254.169.254/latest/meta-data/services/domain
amazonaws.com

With more than 16 threads in CPU:

SQL> @ snapper.sql ash 10 1 all
Sampling SID all with interval 10 seconds, taking 1 snapshots...

-- Session Snapper v4.31 - by Tanel Poder ( http://blog.tanelpoder.com/snapper ) - Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet! :)


---------------------------------------------------------------------------------------------------------------
  ActSes   %Thread | INST | SQL_ID          | SQL_CHILD | EVENT                               | WAIT_CLASS
---------------------------------------------------------------------------------------------------------------
   19.29   (1929%) |    1 | 3zkr1jbq4ufuk   | 0         | ON CPU                              | ON CPU
    2.71    (271%) |    1 | 3zkr1jbq4ufuk   | 0         | resmgr:cpu quantum                  | Scheduler
     .06      (6%) |    1 |                 | 0         | ON CPU                              | ON CPU

--  End of ASH snap 1, end=2020-05-19 21:34:00, seconds=10, samples_taken=49, AAS=22.1

PL/SQL procedure successfully completed.

I also checked on CloudWatch (the AWS monitoring from the hypervisor) that I am running 100% on CPU.

I tested this on a very limited-time free lab environment (this configuration is expensive) and didn't check whether hyper-threading was enabled or not (my guess: disabled), and I didn't test if setting CPU_COUNT would enable instance caging (SE2 is supposed to be internally caged at 16 CPUs but I see more sessions on CPU there).

Of course, I shared my surprise (follow me on Twitter if you like this kind of short info about databases – I don’t really look at the numbers but it seems I may reach 5000 followers soon so I’ll continue at the same rate):


and I’ll update this post when I have more info about this.

Cet article Oracle Standard Edition on AWS ☁ socket arithmetic est apparu en premier sur Blog dbi services.

How to use DBMS_SCHEDULER to improve performance ?


From an application point of view, the Oracle scheduler DBMS_SCHEDULER allows you to improve performance by parallelizing your processing.

Let's start with the following PL/SQL code, inserting in serial several rows from a metadata table into a target table. In my example, the metadata table does not contain the data directly but a set of SQL statements to be executed, whose returned rows must be inserted into the target table My_Target_Table_Serial:

Let’s verify the contents of the source table called My_Metadata_Table:

SQL> SELECT priority,dwh_id, amq_name, sql_statement,scope from dwh_amq_v2;
ROWNUM  DWH_ID  AMQ_NAME SQL_STATEMENT          SCOPE
1	7	AAA1	 SELECT SUM(P.age pt.p	TYPE1
2	28	BBB2  	 SELECT CASE WHEN pt.p	TYPE1
3	37	CCC3	 "select cm.case_id fr"	TYPE2
4	48	DDD4	 "select cm.case_id fr"	TYPE2
5	73	EEE5	 SELECT DISTINCT pt.p	TYPE1
6	90	FFF6 	 SELECT LAG(ORW pt.p	TYPE1
7	114	GGG7	 SELECT distinct pt.	TYPE1
8	125	HHH8	 SELECT DISTINCT pt.p	TYPE1
...
148    115     ZZZ48    SELECT ROUND(TO_NUMBER TYPE2

Now let’s check the PL/SQL program :

DECLARE
  l_errm VARCHAR2(200);
  l_sql  VARCHAR2(32767) := NULL;
  sql_statement_1  VARCHAR2(32767) := NULL;
  sql_statement_2  VARCHAR2(32767) := NULL;
  l_amq_name VARCHAR2(200);
  l_date NUMBER;
BEGIN
  SELECT TO_NUMBER(TO_CHAR(SYSDATE,'YYYYMMDDHH24MISS')) INTO l_date FROM dual;
  FOR rec IN (SELECT dwh_id, amq_name, sql_statement,scope 
                FROM My_Metadata_Table,
                     (SELECT dwh_pit_date FROM dwh_code_mv) pt
               WHERE dwh_status = 1
                 AND (pt.dwh_pit_date >= dwh_valid_from AND pt.dwh_pit_date < dwh_valid_to) 
               ORDER BY priority, dwh_id) LOOP
    ...
    sql_statement_1 := substr(rec.sql_statement, 1, 32000);
    sql_statement_2 := substr(rec.sql_statement, 32001);
    IF rec.SCOPE = 'TYPE1' THEN 
      -- TYPE1 LEVEL SELECT
      l_sql := 'INSERT /*+ APPEND */ INTO My_Target_Table_Serial (dwh_pit_date, AMQ_ID, AMQ_TEXT, CASE_ID, ENTERPRISE_ID)'||CHR(13)|| 'SELECT DISTINCT TO_DATE(code.dwh_pit_date, ''YYYYMMDDHH24MISS''),'||rec.dwh_id|| ',''' ||rec.amq_name ||''', case_id, 1'||CHR(13)
      || ' FROM (SELECT dwh_pit_date FROM dwh_code) code, ('||sql_statement_1;
      EXECUTE IMMEDIATE l_sql || sql_statement_2 || ')';
      COMMIT;    
    ELSE 
      -- TYPE2 LEVEL SELECT
      l_sql :=  'INSERT /*+ APPEND */ INTO My_Target_Table_Serial (dwh_pit_date, AMQ_ID, AMQ_TEXT, CASE_ID, ENTERPRISE_ID)
      SELECT DISTINCT TO_DATE(code.dwh_pit_date, ''YYYYMMDDHH24MISS''), '||rec.dwh_id|| ',''' ||rec.amq_name || ''', cm.case_id, cm.enterprise_id'||CHR(13)
      || '  FROM (SELECT dwh_pit_date FROM dwh_code) code, v_sc_case_master cm, v_sc_case_event ce, ('||sql_statement_1;
              
      EXECUTE IMMEDIATE l_sql || sql_statement_2 || ') pt'||CHR(13)
      || ' WHERE cm.case_id = ce.case_id'||CHR(13) 
      || '   AND cm.deleted IS NULL AND cm.state_id <> 1'||CHR(13)
      || '   AND ce.deleted IS NULL AND ce.pref_term = pt.pt_name';
      COMMIT;         
    END IF;
    ...
   END LOOP:
END;
Number of Rows Read : 148 (Means 148 Sql Statement to execute)
START : 16:17:46
END : 16:57:42
Total :  40 mins

 

As we can see, each SQL statement is executed in serial. Let's check the audit table recording the loading time (insert time) and the scheduling:

CREATE_DATE		NAME	START_DATE		END_DATE            LOADING_TIME
22.05.2020 16:46:34	AAA1	22.05.2020 16:46:34	22.05.2020 16:57:42    11.08mins
22.05.2020 16:42:05	BBB2	22.05.2020 16:42:05	22.05.2020 16:46:34    04.29mins
22.05.2020 16:41:15	CCC3	22.05.2020 16:41:15	22.05.2020 16:42:05    50sec
22.05.2020 16:40:42	DDD4	22.05.2020 16:40:42	22.05.2020 16:41:15    32sec
22.05.2020 16:40:20	EEE5	22.05.2020 16:40:20	22.05.2020 16:40:42    22sec
22.05.2020 16:37:23	FFF6	22.05.2020 16:37:23	22.05.2020 16:40:20    02.57mins
22.05.2020 16:37:12	GGG7	22.05.2020 16:37:12	22.05.2020 16:37:23    11sec
...
22.05.2020 16:36:03	ZZZ148	22.05.2020 16:17:35	22.05.2020 16:17:46    11sec

To sum up:

  • The 148 rows (148 SQL statements) coming from the source table are loaded in serial in 40 mins.
  • The majority of rows have taken less than 1 min to load (e.g. Name = CCC3, DDD4, EEE5, GGG7 and ZZZ148).
  • A few rows have taken more than a couple of minutes to load.
  • The maximum loading time is 11.08 mins for the name "AAA1".
  • Each row must wait for the previous row to complete its loading before starting its own (compare the previous END_DATE vs the current START_DATE).

To optimize the process, let's try to load all the rows coming from the source table in parallel by using the Oracle scheduler DBMS_SCHEDULER.

Instead of executing the insert command directly in the loop, let's create a job through DBMS_SCHEDULER:

FOR rec IN (SELECT priority,dwh_id, amq_name, sql_statement,scope 
                FROM My_Metadata_Table,
                     (SELECT dwh_pit_date FROM dwh_code_mv) pt
               WHERE dwh_status = 1
                 AND (pt.dwh_pit_date >= dwh_valid_from AND pt.dwh_pit_date < dwh_valid_to) 
               ORDER BY priority, dwh_id) LOOP

     l_amq_name := rec.amq_name;
       IF rec.SCOPE = 'TYPE1' THEN 
        -- TYPE1 LEVEL SELECT
         ...
  
            --Execute Job to insert the AMQ : Background process
            DBMS_SCHEDULER.CREATE_JOB (
            job_name             => 'AMQ_P'||rec.priority||'j'||i||'_'||l_date,
            job_type             => 'PLSQL_BLOCK',
            job_action           => 'BEGIN
                                      LOAD_DATA(''CASE'','||''''||l_amq_name||''''||','||rec.priority||','||l_date||','||v_SESSION_ID||','||i||');
                                     END;',
            start_date    =>  sysdate,  
            enabled       =>  TRUE,  
            auto_drop     =>  TRUE,  
            comments      =>  'job for amq '||l_amq_name);
          END IF;
        ELSE 
            ...
            END IF;
        END IF; 
      i := i +1;
  END LOOP;
Number of Rows Read : 148 (Means 148 Sql Statement to execute)
START : 08:14:03
END : 08:42:32
Total :  27.57 mins

To sum up:

  • The 148 rows (148 SQL statements) coming from the source table are now loaded in parallel in 27.57 mins instead of 40 mins in serial.
  • The notable DBMS_SCHEDULER options are:
    • As we are limited in the number of characters for the "job_action" parameter, we insert the data through a PL/SQL procedure LOAD_DATA.
    • The job is executed immediately (start_date=sysdate) and purged immediately after its execution (auto_drop=TRUE).

Let's now check how the jobs are scheduled. Since we loop 148 times, I expect to have 148 jobs:

First, let's check if the rows (remember: one row = one insert into the target table from the source table) are loaded in parallel:

CREATE_DATE 	    NAME START_DATE 	        END_DATE 				       
22.05.2020 16:46:34 AAA1 23.05.2020 08:14:04	23.05.2020 08:21:19
22.05.2020 16:42:05 BBB2 23.05.2020 08:14:04	23.05.2020 08:20:43
22.05.2020 16:41:15 CCC3 23.05.2020 08:14:04	23.05.2020 08:21:59
22.05.2020 16:40:42 DDD4 23.05.2020 08:14:03	23.05.2020 08:15:29
22.05.2020 16:40:20 EEE5 23.05.2020 08:14:03	23.05.2020 08:15:05
22.05.2020 16:37:23 FFF6 23.05.2020 08:14:03	23.05.2020 08:14:47
22.05.2020 16:37:12 GGG7 23.05.2020 08:14:03	23.05.2020 08:15:59
...                     
22.05.2020 16:36:03 ZZZ148 22.05.2020 16:17:35 22.05.2020 16:17:46

This is the case: all rows have the same START_DATE, meaning all rows start in parallel. Let's check in "all_scheduler_job_run_details" that we have our 148 jobs:

SQL> select count(*) from all_scheduler_job_run_details where job_name like '%20200523081403';

  COUNT(*)
----------
       148
SQL> select log_date,job_name,status,req_start_date from all_scheduler_job_run_details where job_name like '%20200523081403';
LOG_DATE		JOB_NAME		        STATUS		REQ_START_DATE
23-MAY-20 08.42.41	AMQ_P3J147_20200523081403	SUCCEEDED	23-MAY-20 02.42.32
23-MAY-20 08.42.32	AMQ_P2J146_20200523081403	SUCCEEDED	23-MAY-20 02.23.13
23-MAY-20 08.37.56	AMQ_P2J145_20200523081403	SUCCEEDED	23-MAY-20 02.23.13
23-MAY-20 08.37.33	AMQ_P2J144_20200523081403	SUCCEEDED	23-MAY-20 02.23.13
23-MAY-20 08.37.22	AMQ_P2J143_20200523081403	SUCCEEDED	23-MAY-20 02.23.13
23-MAY-20 08.37.03	AMQ_P2J141_20200523081403	SUCCEEDED	23-MAY-20 02.23.13
23-MAY-20 08.36.50	AMQ_P2J142_20200523081403	SUCCEEDED	23-MAY-20 02.23.13
23-MAY-20 08.33.57	AMQ_P2J140_20200523081403	SUCCEEDED	23-MAY-20 02.23.13
--Only the first 8 rows are displayed

To sum up:

  • We have 148 jobs, all started most of the time in parallel (jobs with the same REQ_START_DATE; Oracle parallelizes jobs in blocks randomly).
  • My PL/SQL process now took 27.57 mins instead of 40 mins.

But if we have a look at the details, we have a lot of small jobs, i.e. jobs where run_duration is less than 1 min:

SQL> select run_duration from all_scheduler_job_run_details where job_name like '%20200523081403' order by run_duration;

RUN_DURATION
+00 00:00:04.000000
+00 00:00:07.000000
+00 00:00:09.000000
+00 00:00:10.000000
+00 00:00:13.000000
+00 00:00:15.000000
+00 00:00:20.000000
+00 00:00:27.000000
+00 00:00:33.000000
+00 00:00:35.000000
+00 00:00:36.000000
+00 00:00:38.000000
+00 00:00:43.000000
+00 00:00:46.000000
+00 00:00:51.000000
+00 00:00:52.000000

As we have a lot of small jobs (short-lived jobs), it will be more interesting to use lightweight jobs instead of regular jobs.

In contrast to regular jobs, lightweight jobs:

  • Require less metadata, so they have quicker create and drop times.
  • Are suited for short-lived jobs (small jobs, jobs where run_duration is low).

Let’s rewrite our PL/SQL process using lightweight jobs :

To use lightweight jobs, first create a program suitable for a lightweight job :

begin
dbms_scheduler.create_program
(
    program_name=>'LIGHTWEIGHT_PROGRAM',
    program_action=>'LOAD_AMQ',
    program_type=>'STORED_PROCEDURE',
    number_of_arguments=>6, 
    enabled=>FALSE);
END;

Add the arguments (parameters) and enable the program :

BEGIN
dbms_scheduler.DEFINE_PROGRAM_ARGUMENT(
program_name=>'lightweight_program',
argument_position=>1,
argument_type=>'VARCHAR2',
DEFAULT_VALUE=>NULL);

dbms_scheduler.DEFINE_PROGRAM_ARGUMENT(
program_name=>'lightweight_program',
argument_position=>2,
argument_type=>'VARCHAR2');

dbms_scheduler.DEFINE_PROGRAM_ARGUMENT(
program_name=>'lightweight_program',
argument_position=>3,
argument_type=>'NUMBER');

dbms_scheduler.DEFINE_PROGRAM_ARGUMENT(
program_name=>'lightweight_program',
argument_position=>4,
argument_type=>'NUMBER');

dbms_scheduler.DEFINE_PROGRAM_ARGUMENT(
program_name=>'lightweight_program',
argument_position=>5,
argument_type=>'VARCHAR');

dbms_scheduler.DEFINE_PROGRAM_ARGUMENT(
program_name=>'lightweight_program',
argument_position=>6,
argument_type=>'NUMBER');

dbms_scheduler.enable('lightweight_program');  
end;

In the PL/SQL code, let's create the lightweight job, without forgetting to set the argument values before running the job:

DECLARE
...
BEGIN
....
LOOP
DBMS_SCHEDULER.create_job (
job_name        => 'AMQ_P'||rec.priority||'j'||i||'_'||l_date,
program_name    => 'LIGHTWEIGHT_PROGRAM',
job_style       => 'LIGHTWEIGHT',
enabled         => FALSE);
                  
 DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE (
   job_name                => 'AMQ_P'||rec.priority||'j'||i||'_'||l_date,
   argument_position       => 1,
   argument_value          => rec.scope);
   
DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE (
   job_name                => 'AMQ_P'||rec.priority||'j'||i||'_'||l_date,
   argument_position       => 2,
   argument_value          => l_amq_name);
   
DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE (
   job_name                => 'AMQ_P'||rec.priority||'j'||i||'_'||l_date,
   argument_position       => 3,
   argument_value          => rec.priority);

DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE (
   job_name                => 'AMQ_P'||rec.priority||'j'||i||'_'||l_date,
   argument_position       => 4,
   argument_value          => l_date);   

DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE (
   job_name                => 'AMQ_P'||rec.priority||'j'||i||'_'||l_date,
   argument_position       => 5,
   argument_value          => v_SESSION_ID);  

DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE (
   job_name                => 'AMQ_P'||rec.priority||'j'||i||'_'||l_date,
   argument_position       => 6,
   argument_value          => i); 

dbms_scheduler.run_job('AMQ_P'||rec.priority||'j'||i||'_'||l_date,TRUE);
...
END LOOP;
Number of Rows Read : 148 (Means 148 Sql Statement to execute) 
START : 18:08:56
END : 18:27:40
Total : 18.84 mins

 

Let's check that we still have 148 jobs in parallel:

SQL> select count(*) from all_scheduler_job_run_details where job_name like '%20200524175036';

  COUNT(*)
----------
       148
SQL> select log_date,job_name,status,req_start_date from all_scheduler_job_run_details where job_name like '%20200524175036';

LOG_DATE           JOB_NAME     STATUS	        REQ_START_DATE
24-MAY-20 05.50.51 AB1C		SUCCEEDED	24-MAY-20 05.50.36
24-MAY-20 05.50.56 AB1D		SUCCEEDED	24-MAY-20 05.50.51
24-MAY-20 05.51.14 AB1E		SUCCEEDED	24-MAY-20 05.50.56
24-MAY-20 05.51.49 AB1I		SUCCEEDED	24-MAY-20 05.51.14
24-MAY-20 05.52.14 AB1P		SUCCEEDED	24-MAY-20 05.51.49
24-MAY-20 05.52.34 AB1L		SUCCEEDED	24-MAY-20 05.52.14
24-MAY-20 05.52.55 AB1N		SUCCEEDED	24-MAY-20 05.52.34
24-MAY-20 05.53.17 AB1M		SUCCEEDED	24-MAY-20 05.52.55
24-MAY-20 05.53.29 AB1K		SUCCEEDED	24-MAY-20 05.53.17
24-MAY-20 05.53.39 AB1O		SUCCEEDED	24-MAY-20 05.53.29
24-MAY-20 05.53.57 AB1U		SUCCEEDED	24-MAY-20 05.53.39
24-MAY-20 05.54.07 AB1V		SUCCEEDED	24-MAY-20 05.53.57

To sum up:

  • We have 148 jobs, all started most of the time in parallel.
  • My PL/SQL process now took 18.54 mins (lightweight jobs) instead of 27.57 mins (regular jobs).
  • If we compare regular jobs vs lightweight jobs, the former seems to schedule the jobs randomly (starting jobs in blocks of 4, 5, 6…8) while the latter schedules jobs in blocks of 3 or 4 (as we can see above).

Conclusion :

  • DBMS_SCHEDULER (regular jobs or lightweight jobs) can significantly improve your PL/SQL performance by transforming your serial process into a parallel process.
  • If you have small jobs (short-lived jobs), use lightweight jobs instead of regular jobs.
  • Don't underestimate the development time (development, testing, bug fixing) needed to transform your serial process into a parallel one. Creating 1 job is different from creating more than 100 or 1000 jobs through a PL/SQL loop (concurrency problems, CPU used to create/drop the jobs).
  • As a developer, you are responsible for managing your jobs (create, drop, purge) in order not to saturate the Oracle parameter job_queue_processes (used by a lot of critical Oracle processes); see the sketch below.
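A minimal monitoring and cleanup sketch for the jobs generated in this example (the job name is an assumption following the naming pattern used above):

-- check what is still running or scheduled
select job_name, state from dba_scheduler_jobs where job_name like 'AMQ_P%';

-- drop a job that is stuck (force => TRUE also stops a running job)
exec dbms_scheduler.drop_job(job_name => 'AMQ_P1j1_20200524175036', force => TRUE);

-- purge the scheduler log entries of that job
exec dbms_scheduler.purge_log(log_history => 0, which_log => 'JOB_LOG', job_name => 'AMQ_P1j1_20200524175036');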

Cet article How to use DBMS_SCHEDULER to improve performance ? est apparu en premier sur Blog dbi services.

How to configure additional network card on an ODA X8 family


During a past project, we were using an ODA X8-2M with one additional network card. To my knowledge, on an appliance, additional cards are used to extend connectivity to additional networks. The customer was really insisting on having network redundancy between the 2 cards. I then took the opportunity to run some tests. In this post, I would like to share my experience from these tests and how to properly configure a network card extension on an ODA.

Introduction

On an appliance, we can use RJ45 or optical fiber connectivity. On the ODA X8 family, optical fiber cards have 2 ports and RJ45 cards have 4 ports. The first card is installed in PCIe slot 7.
On an ODA X8-2S, 2 additional cards can be installed, in slot 8 and slot 10.
On an ODA X8-2M, 2 additional cards can be installed, in slot 2 and slot 10.
The only requirement is that all cards in the same server must be of the same type: you cannot mix RJ45 and optical fiber in the same ODA.

In my case, as I have 2 optical fiber network cards, my first card has ports p7p1 and p7p2 configured (btbond1) and my second card has ports p2p1 and p2p2 configured (btbond3).

ODA bonding is configured to use active-backup mode with no LACP.
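
As a quick sanity check (a hedged sketch; the interface name comes from the configuration above and the active slave shown is just an example), the kernel bonding status can be queried directly:

[root@ODA01 ~]# grep -i -e "bonding mode" -e "currently active slave" /proc/net/bonding/btbond1
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: p7p1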

Is bonding redundancy possible on an ODA?

Configuring bonding across network card

My first experiment was to edit the ifcfg-p7p2 and ifcfg-p2p1 Linux network script configuration files and assign p7p2 to btbond3 and p2p1 to btbond1. Of course, this was just an experiment: keeping such a solution permanently would mean rolling back this unsupported configuration before any ODA patching, and I really do not encourage doing so. 😉
Anyway, the experiment was not successful: after a reboot, both files were restored to their original configuration.

Configuring a different IP address for each network on each card

I then tried to configure each card with 2 IP addresses, one from each of the 2 networks, as below :
Bond 1 IP 10.3.1.20 (no VLAN)
Bond 1 IP 192.20.30.2 (VLAN 723)
Bond 3 IP 10.3.1.21 (no VLAN)
Bond 3 IP 192.20.30.3 (VLAN 723)

The configuration could be successfully applied and everything was configured as expected :

[root@ODA01 network-scripts]# ip addr show btbond1
8: btbond1: mtu 1500 qdisc noqueue state UP
link/ether b0:26:28:72:0e:d0 brd ff:ff:ff:ff:ff:ff
inet 10.3.1.20/24 brd 10.3.1.255 scope global btbond1
valid_lft forever preferred_lft forever
 
[root@ODA01 network-scripts]# ip addr show btbond1.723
17: btbond1.723@btbond1: mtu 1500 qdisc noqueue state UP
link/ether b0:26:28:72:0e:d0 brd ff:ff:ff:ff:ff:ff
inet 192.20.30.2/29 brd 192.20.30.7 scope global btbond1.723
valid_lft forever preferred_lft forever
 
[root@ODA01 network-scripts]# ip addr show btbond3
9: btbond3: mtu 1500 qdisc noqueue state UP
link/ether b0:26:28:7c:8a:50 brd ff:ff:ff:ff:ff:ff
inet 10.3.1.21/24 brd 10.3.1.255 scope global btbond3
valid_lft forever preferred_lft forever
 
[root@ODA01 network-scripts]# ip addr show btbond3.723
18: btbond3.723@btbond3: mtu 1500 qdisc noqueue state UP
link/ether b0:26:28:7c:8a:50 brd ff:ff:ff:ff:ff:ff
inet 192.20.30.3/29 brd 192.20.30.7 scope global btbond3.723
valid_lft forever preferred_lft forever

But, unfortunately, such a configuration does not work reliably, as some packets get dropped by the kernel when routing between both bondings (see the diagnostic sketch below).
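
This behaviour is typical of a Linux host carrying two interfaces in the same subnet (ARP flux combined with strict reverse-path filtering). As a purely diagnostic, hedged sketch (standard kernel sysctl names, not an ODA-specific or supported fix), the relevant settings can be inspected like this:

[root@ODA01 ~]# sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.btbond1.rp_filter net.ipv4.conf.btbond3.rp_filter
[root@ODA01 ~]# sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce

Even if these settings were relaxed, the setup would remain unsupported on the appliance, as the SR answer below confirms.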

So, what’s next?

By opening an SR with Oracle, I got confirmation that network card redundancy, i.e. physical failover between cards in case of the total loss of a network card, is neither supported nor possible: the appliance simply does not support it.

Configure an ODA with 2 network cards

In this part I would like to share how to configure additional network cards on an ODA. Below are the IP addresses to use :
Bond 1 IP 10.3.1.20 (no VLAN) for the application and backup network
Bond 3 IP 192.20.30.2 (VLAN 723) for the redundancy network

btbond1 configuration

btbond1 was configured through configure-firstnet after reimaging the ODA; a hedged sketch of the invocation is shown below.
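
For reference, a hedged sketch of the call: the tool lives in the usual DCS location (path assumed) and interactively prompts for the interface, DHCP or static addressing, IP, netmask, gateway and an optional VLAN. The values used here are the ones visible later in the odacli list-networks output:

[root@ODA01 ~]# /opt/oracle/dcs/bin/configure-firstnet
# answers given: interface btbond1, static, IP 10.3.1.20, netmask 255.255.255.0, gateway 10.3.1.1, no VLAN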

btbond3 configuration

To configure an additional card on the ODA we will use the odacli create-network command.

[root@ODA01 network-scripts]# odacli create-network -n btbond3 -t BOND -g 192.20.30.1 -p 192.20.30.2 -v 723 -m Replication -s 255.255.255.248 -w Dataguard
{
"jobId" : "d4695212-82f9-4ad1-9570-a0b7cd2faca5",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "March 19, 2020 11:04:17 AM CET",
"resourceList" : [ ],
"description" : "Network service creation with names btbond3:Replication ",
"updatedTime" : "March 19, 2020 11:04:17 AM CET"
}
 
[root@ODA01 network-scripts]# odacli describe-job -i "d4695212-82f9-4ad1-9570-a0b7cd2faca5"
 
Job details
----------------------------------------------------------------
ID: d4695212-82f9-4ad1-9570-a0b7cd2faca5
Description: Network service creation with names btbond3:Replication
Status: Success
Created: March 19, 2020 11:04:17 AM CET
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting network March 19, 2020 11:04:17 AM CET March 19, 2020 11:04:27 AM CET Success
Setting up Network March 19, 2020 11:04:17 AM CET March 19, 2020 11:04:17 AM CET Success

Check network

[root@ODA01 network-scripts]# odacli list-networks
 
ID Name NIC InterfaceType IP Address Subnet Mask Gateway VlanId
---------------------------------------- -------------------- ---------- ---------- ------------------ ------------------ ------------------ ----------
e920fc5c-62b4-4877-9008-ef3df96722ff Private-network priv0 INTERNAL 192.168.16.24 255.255.255.240
28446275-ad88-4a62-ae5d-9835bbc73d8a Public-network btbond1 BOND 10.3.1.20 255.255.255.0 10.3.1.1
9faf494b-d319-4190-8ba9-088f24e58918 Replication btbond3 BOND 192.20.30.2 255.255.255.248 192.20.30.1 723

Check IP configuration from OS

[root@ODA01 ~]# ip addr sh
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: em1: mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 00:10:e0:ef:52:f6 brd ff:ff:ff:ff:ff:ff
3: p7p1: mtu 1500 qdisc mq master btbond1 state UP qlen 1000
link/ether b0:26:28:72:0e:d0 brd ff:ff:ff:ff:ff:ff
4: p7p2: mtu 1500 qdisc mq master btbond1 state UP qlen 1000
link/ether b0:26:28:72:0e:d0 brd ff:ff:ff:ff:ff:ff
5: p2p1: mtu 1500 qdisc mq master btbond3 state DOWN qlen 1000
link/ether b0:26:28:7c:8a:50 brd ff:ff:ff:ff:ff:ff
6: p2p2: mtu 1500 qdisc mq master btbond3 state UP qlen 1000
link/ether b0:26:28:7c:8a:50 brd ff:ff:ff:ff:ff:ff
7: bond0: mtu 1500 qdisc noop state DOWN
link/ether f2:30:97:19:bc:f4 brd ff:ff:ff:ff:ff:ff
8: btbond1: mtu 1500 qdisc noqueue state UP
link/ether b0:26:28:72:0e:d0 brd ff:ff:ff:ff:ff:ff
inet 10.3.1.20/24 brd 10.3.1.255 scope global btbond1
valid_lft forever preferred_lft forever
9: btbond3: mtu 1500 qdisc noqueue state UP
link/ether b0:26:28:7c:8a:50 brd ff:ff:ff:ff:ff:ff
inet 192.20.30.2/29 brd 192.20.30.7 scope global btbond3
valid_lft forever preferred_lft forever
12: priv0: mtu 1500 qdisc noqueue state UNKNOWN
link/ether 8e:7e:04:71:1b:96 brd ff:ff:ff:ff:ff:ff
inet 192.168.16.24/28 brd 192.168.16.31 scope global priv0
valid_lft forever preferred_lft forever
13: virbr0: mtu 1500 qdisc noqueue state DOWN
link/ether 52:54:00:c3:87:7d brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
14: virbr0-nic: mtu 1500 qdisc noop master virbr0 state DOWN qlen 500
link/ether 52:54:00:c3:87:7d brd ff:ff:ff:ff:ff:ff

Ping the IP address

From the other ODA we can check that ODA01 is answering on both IPs.

[root@ODA02 network-scripts]# ping 192.20.30.2
PING 192.20.30.2 (192.20.30.2) 56(84) bytes of data.
64 bytes from 192.20.30.2: icmp_seq=1 ttl=64 time=0.144 ms
64 bytes from 192.20.30.2: icmp_seq=2 ttl=64 time=0.078 ms
64 bytes from 192.20.30.2: icmp_seq=3 ttl=64 time=0.082 ms
^C
--- 192.20.30.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2575ms
rtt min/avg/max/mdev = 0.078/0.101/0.144/0.031 ms
 
[root@ODA02 network-scripts]# ping 10.3.1.20
PING 10.3.1.20 (10.3.1.20) 56(84) bytes of data.
64 bytes from 10.3.1.20: icmp_seq=1 ttl=64 time=0.153 ms
64 bytes from 10.3.1.20: icmp_seq=2 ttl=64 time=0.077 ms
64 bytes from 10.3.1.20: icmp_seq=3 ttl=64 time=0.075 ms
^C
--- 10.3.1.20 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2182ms
rtt min/avg/max/mdev = 0.075/0.101/0.153/0.038 ms

Conclusion

On the Oracle Database Appliance, network redundancy is only possible at the port level, i.e. in case of a faulty cable. Today, there is no solution to cover a faulty network card or the loss of a card: a bond spanning several network cards is not possible, and neither can we configure multiple IP addresses of the same network across several network cards.
Of course, adding network IPs from different VLANs on a card is still possible using odaadmcli, as long as all bondings are on different subnets.

This article, How to configure additional network card on an ODA X8 family, first appeared on Blog dbi services.

Issue deleting a database on ODA?


I recently faced an issue deleting databases on an ODA: whatever database I wanted to delete, I was getting the following error: DCS-10001:Internal error encountered: null.

Through this blog post, I would like to share my experience on this case, hoping it will help you if you are facing the same problem. On this project I was using ODA releases 18.5 and 18.8 and hit the same problem on both versions; on 18.3 and earlier releases this was not the case.

Deleting the database

With odacli I tried to delete my TEST database by running the following command :

[root@ODA01 bin]# odacli delete-database -in TEST -fd
{
"jobId" : "bcdcbf59-0fe6-44b7-af7f-91f68c7697ed",
"status" : "Running",
"message" : null,
"reports" : [ {
"taskId" : "TaskZJsonRpcExt_858",
"taskName" : "Validate db d6542252-dfa4-47f9-9cfc-22b4f0575c51 for deletion",
"taskResult" : "",
"startTime" : "May 06, 2020 11:36:38 AM CEST",
"endTime" : "May 06, 2020 11:36:38 AM CEST",
"status" : "Success",
"taskDescription" : null,
"parentTaskId" : "TaskSequential_856",
"jobId" : "bcdcbf59-0fe6-44b7-af7f-91f68c7697ed",
"tags" : [ ],
"reportLevel" : "Info",
"updatedTime" : "May 06, 2020 11:36:38 AM CEST"
} ],
"createTimestamp" : "May 06, 2020 11:36:38 AM CEST",
"resourceList" : [ ],
"description" : "Database service deletion with db name: TEST with id : d6542252-dfa4-47f9-9cfc-22b4f0575c51",
"updatedTime" : "May 06, 2020 11:36:38 AM CEST"
}

The job failed with a DCS-10001 error :

[root@ODA01 bin]# odacli describe-job -i "bcdcbf59-0fe6-44b7-af7f-91f68c7697ed"
 
Job details
----------------------------------------------------------------
ID: bcdcbf59-0fe6-44b7-af7f-91f68c7697ed
Description: Database service deletion with db name: TEST with id : d6542252-dfa4-47f9-9cfc-22b4f0575c51
Status: Failure
Created: May 6, 2020 11:36:38 AM CEST
Message: DCS-10001:Internal error encountered: null.
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
database Service deletion for d6542252-dfa4-47f9-9cfc-22b4f0575c51 May 6, 2020 11:36:38 AM CEST May 6, 2020 11:36:50 AM CEST Failure
database Service deletion for d6542252-dfa4-47f9-9cfc-22b4f0575c51 May 6, 2020 11:36:38 AM CEST May 6, 2020 11:36:50 AM CEST Failure
Validate db d6542252-dfa4-47f9-9cfc-22b4f0575c51 for deletion May 6, 2020 11:36:38 AM CEST May 6, 2020 11:36:38 AM CEST Success
Database Deletion May 6, 2020 11:36:39 AM CEST May 6, 2020 11:36:39 AM CEST Success
Unregister Db From Cluster May 6, 2020 11:36:39 AM CEST May 6, 2020 11:36:39 AM CEST Success
Kill Pmon Process May 6, 2020 11:36:39 AM CEST May 6, 2020 11:36:39 AM CEST Success
Database Files Deletion May 6, 2020 11:36:39 AM CEST May 6, 2020 11:36:40 AM CEST Success
Deleting Volume May 6, 2020 11:36:47 AM CEST May 6, 2020 11:36:50 AM CEST Success
database Service deletion for d6542252-dfa4-47f9-9cfc-22b4f0575c51 May 6, 2020 11:36:50 AM CEST May 6, 2020 11:36:50 AM CEST Failure

Troubleshooting

In dcs-agent.log, located in the /opt/oracle/dcs/log folder, you might see the following errors :

2019-11-27 13:54:30,106 ERROR [database Service deletion for 89e11f5d-9789-44a3-a09d-2444f0fda99e : JobId=05a2d017-9b64-4e92-a7df-3ded603d0644] [] c.o.d.c.j.JsonRequestProcessor: RPC request invocation failed on request: {"classz":"com.oracle.dcs.agent.rpc.service.dataguard.DataguardActions","method":"deleteListenerEntry","params":[{"type":"com.oracle.dcs.agent.model.DB","value":{"updatedTime":1573023492194,"id":"89e11f5d-9789-44a3-a09d-2444f0fda99e","name":"TEST","createTime":1573023439244,"state":{"status":"CONFIGURED"},"dbName":"TEST","databaseUniqueName":"TEST_RZB","dbVersion":"11.2.0.4.190115","dbHomeId":"c58cdcfd-e5b2-4041-b993-8df5a5d5ada4","dbId":null,"isCdb":false,"pdBName":null,"pdbAdminUserName":null,"enableTDE":false,"isBcfgInSync":null,"dbType":"SI","dbTargetNodeNumber":"0","dbClass":"OLTP","dbShape":"odb1","dbStorage":"ACFS","dbOnFlashStorage":false,"level0BackupDay":"sunday","instanceOnly":true,"registerOnly":false,"rmanBkupPassword":null,"dbEdition":"SE","dbDomainName":"ksbl.local","dbRedundancy":null,"dbCharacterSet":{"characterSet":"AL32UTF8","nlsCharacterset":"AL16UTF16","dbTerritory":"AMERICA","dbLanguage":"AMERICAN"},"dbConsoleEnable":false,"backupDestination":"NONE","cloudStorageContainer":null,"backupConfigId":null,"isAutoBackupDisabled":false}}],"revertable":false,"threadId":111}
! java.lang.NullPointerException: null
! at com.oracle.dcs.agent.rpc.service.dataguard.DataguardOperations.deleteListenerEntry(DataguardOperations.java:2258)
! at com.oracle.dcs.agent.rpc.service.dataguard.DataguardActions.deleteListenerEntry(DataguardActions.java:24)
! at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
! at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
! at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
! at java.lang.reflect.Method.invoke(Method.java:498)
! at com.oracle.dcs.commons.jrpc.JsonRequestProcessor.invokeRequest(JsonRequestProcessor.java:33)
! ... 23 common frames omitted
! Causing: com.oracle.dcs.commons.exception.DcsException: DCS-10001:Internal error encountered: null.
! at com.oracle.dcs.commons.exception.DcsException$Builder.build(DcsException.java:68)
! at com.oracle.dcs.commons.jrpc.JsonRequestProcessor.invokeRequest(JsonRequestProcessor.java:45)
! at com.oracle.dcs.commons.jrpc.JsonRequestProcessor.process(JsonRequestProcessor.java:74)
! at com.oracle.dcs.agent.task.TaskZJsonRpcExt.callInternal(TaskZJsonRpcExt.java:65)
! at com.oracle.dcs.agent.task.TaskZJsonRpc.call(TaskZJsonRpc.java:182)
! at com.oracle.dcs.agent.task.TaskZJsonRpc.call(TaskZJsonRpc.java:26)
! at com.oracle.dcs.commons.task.TaskWrapper.call(TaskWrapper.java:82)
! at com.oracle.dcs.commons.task.TaskApi.call(TaskApi.java:37)
! at com.oracle.dcs.commons.task.TaskSequential.call(TaskSequential.java:39)
! at com.oracle.dcs.commons.task.TaskSequential.call(TaskSequential.java:10)
! at com.oracle.dcs.commons.task.TaskWrapper.call(TaskWrapper.java:82)
! at com.oracle.dcs.commons.task.TaskApi.call(TaskApi.java:37)
! at com.oracle.dcs.commons.task.TaskSequential.call(TaskSequential.java:39)
! at com.oracle.dcs.agent.task.TaskZLockWrapper.call(TaskZLockWrapper.java:64)
! at com.oracle.dcs.agent.task.TaskZLockWrapper.call(TaskZLockWrapper.java:21)
! at com.oracle.dcs.commons.task.TaskWrapper.call(TaskWrapper.java:82)
! at com.oracle.dcs.commons.task.TaskApi.call(TaskApi.java:37)
! at com.oracle.dcs.commons.task.TaskSequential.call(TaskSequential.java:39)
! at com.oracle.dcs.commons.task.TaskSequential.call(TaskSequential.java:10)
! at com.oracle.dcs.commons.task.TaskWrapper.call(TaskWrapper.java:82)
! at com.oracle.dcs.commons.task.TaskWrapper.call(TaskWrapper.java:17)
! at java.util.concurrent.FutureTask.run(FutureTask.java:266)
! at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
! at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
! at java.lang.Thread.run(Thread.java:748)
2019-11-27 13:54:30,106 INFO [database Service deletion for 89e11f5d-9789-44a3-a09d-2444f0fda99e : JobId=05a2d017-9b64-4e92-a7df-3ded603d0644] [] c.o.d.a.z.DCSZooKeeper: DCS node id is - node_0
2019-11-27 13:54:30,106 DEBUG [database Service deletion for 89e11f5d-9789-44a3-a09d-2444f0fda99e : JobId=05a2d017-9b64-4e92-a7df-3ded603d0644] [] c.o.d.a.t.TaskZJsonRpc: Task[TaskZJsonRpcExt_124] RPC request 'Local:node_0@deleteListenerEntry()' completed: Failure

The key error to note is: RPC request 'Local:node_0@deleteListenerEntry()' completed: Failure
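
To spot this quickly on your own system, a simple grep against the agent log mentioned above is usually enough:

[root@ODA01 ~]# grep -i "deleteListenerEntry" /opt/oracle/dcs/log/dcs-agent.log | tail -5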

Explanation

This problem comes from the fact that the listener.ora file has been customized. As per Oracle Support, on an ODA the listener.ora should never be customized and the default listener.ora file should be used. I still have an SR open with Oracle Support to clarify the situation, as I'm fully convinced that this is a regression :

  1. It was always possible in previous ODA versions to delete a database with a customized listener file.
  2. We need to customize the listener when setting up Data Guard on Oracle version 11.2.0.4 (still supported on ODA).
  3. We need to customize the listener when doing a duplication, because dynamic registration is not possible while the database is in NOMOUNT state and the database is restarted during the duplication (see the hedged example after this list).
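
To illustrate what such a customization typically looks like, here is a hedged sketch of a static registration entry as it could be added for a duplication or an 11.2.0.4 Data Guard setup. The database name and domain are taken from the log above; the ORACLE_HOME path is an assumption, not the customer's real file:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = TEST_RZB.ksbl.local)
      (SID_NAME = TEST)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0.4/dbhome_1)
    )
  )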

Moreover, other pieces of ODA documentation still refer to customizing the listener.ora file when using an ODA :
White paper: STEPS TO MIGRATE NON-CDB DATABASES TO ACFS ON ORACLE DATABASE APPLIANCE 12.1.2
Deploying Oracle Data Guard with Oracle Database Appliance – A WhitePaper (2016-7) (Doc ID 2392307.1)

I will update the post as soon as I have some feedback from Oracle support on this.

The workaround is to put the default listener.ora file back in place at the time of the deletion, which would require a maintenance window for some customers.

Solution/Workaround

Backup of the current listener configuration

OK, so let’s back up our current listener configuration first :

grid@ODA01:/home/grid/ [+ASM1] cd $TNS_ADMIN
 
grid@ODA01:/u01/app/18.0.0.0/grid/network/admin/ [+ASM1] cp -p listener.ora ./history/listener.ora.20200506

Default ODA listener configuration

The backup of the default listener configuration looks like this :

grid@ODA01:/u01/app/18.0.0.0/grid/network/admin/ [+ASM1] cat listener19071611AM2747.bak
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))) # line added by Agent
ASMNET1LSNR_ASM=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=ASMNET1LSNR_ASM)))) # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_ASMNET1LSNR_ASM=ON # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_ASMNET1LSNR_ASM=SUBNET # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET # line added by Agent

Stopping the listener

Let’s stop the listener :

grid@ODA01:/u01/app/18.0.0.0/grid/network/admin/ [+ASM1] srvctl stop listener -listener listener
 
grid@ODA01:/u01/app/18.0.0.0/grid/network/admin/ [+ASM1] srvctl status listener -listener listener
Listener LISTENER is enabled
Listener LISTENER is not running

Put default listener configuration

grid@ODA01:/u01/app/18.0.0.0/grid/network/admin/ [+ASM1] mv listener.ora listener.ora.before_db_del_20200506
 
grid@ODA01:/u01/app/18.0.0.0/grid/network/admin/ [+ASM1] cp -p listener19071611AM2747.bak listener.ora

Start the listener

grid@ODA01:/u01/app/18.0.0.0/grid/network/admin/ [+ASM1] srvctl start listener -listener listener
 
grid@ODA01:/u01/app/18.0.0.0/grid/network/admin/ [+ASM1] srvctl status listener -listener listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): oda01

Delete database

We will try to delete the database again by running the same odacli command :

[root@ODA01 bin]# odacli delete-database -in TEST -fd
{
"jobId" : "5655be19-e0fe-4452-b8a9-35382c67bf96",
"status" : "Running",
"message" : null,
"reports" : [ {
"taskId" : "TaskZJsonRpcExt_1167",
"taskName" : "Validate db d6542252-dfa4-47f9-9cfc-22b4f0575c51 for deletion",
"taskResult" : "",
"startTime" : "May 06, 2020 11:45:01 AM CEST",
"endTime" : "May 06, 2020 11:45:01 AM CEST",
"status" : "Success",
"taskDescription" : null,
"parentTaskId" : "TaskSequential_1165",
"jobId" : "5655be19-e0fe-4452-b8a9-35382c67bf96",
"tags" : [ ],
"reportLevel" : "Info",
"updatedTime" : "May 06, 2020 11:45:01 AM CEST"
} ],
"createTimestamp" : "May 06, 2020 11:45:01 AM CEST",
"resourceList" : [ ],
"description" : "Database service deletion with db name: TEST with id : d6542252-dfa4-47f9-9cfc-22b4f0575c51",
"updatedTime" : "May 06, 2020 11:45:01 AM CEST"
}

Unfortunately the deletion will fail with another error : DCS-10011:Input parameter ‘ACFS Device for delete’ cannot be NULL.

This is due to the fact that the previous deletion attempt had already removed the corresponding ACFS volumes for the database (DATA and REDO). We will have to create them manually again; I have already described this solution in a previous post: Database deletion stuck in deleting-status. A hedged sketch of the procedure is shown below.
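
As a hedged reminder of that procedure (disk group, size, volume and mount point names follow the usual ODA conventions for a database called TEST and are assumptions here):

# as grid: recreate the volume and note the device name reported by volinfo
ASMCMD> volcreate -G DATA -s 10G datTEST
ASMCMD> volinfo -G DATA datTEST

# as root: format the volume with ACFS and mount it on the expected path
[root@ODA01 ~]# mkfs.acfs /dev/asm/dattest-<nnn>
[root@ODA01 ~]# mkdir -p /u02/app/oracle/oradata/TEST
[root@ODA01 ~]# mount -t acfs /dev/asm/dattest-<nnn> /u02/app/oracle/oradata/TEST
# repeat for the REDO volume (e.g. rdoTEST) if it was removed as well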

After restoring the corresponding ACFS Volume, we can retry our database deletion again :

[root@ODA01 bin]# odacli delete-database -in TEST -fd
{
"jobId" : "5e227755-478b-46c5-a5cd-36687cb21ed8",
"status" : "Running",
"message" : null,
"reports" : [ {
"taskId" : "TaskZJsonRpcExt_1443",
"taskName" : "Validate db d6542252-dfa4-47f9-9cfc-22b4f0575c51 for deletion",
"taskResult" : "",
"startTime" : "May 06, 2020 11:47:53 AM CEST",
"endTime" : "May 06, 2020 11:47:53 AM CEST",
"status" : "Success",
"taskDescription" : null,
"parentTaskId" : "TaskSequential_1441",
"jobId" : "5e227755-478b-46c5-a5cd-36687cb21ed8",
"tags" : [ ],
"reportLevel" : "Info",
"updatedTime" : "May 06, 2020 11:47:53 AM CEST"
} ],
"createTimestamp" : "May 06, 2020 11:47:53 AM CEST",
"resourceList" : [ ],
"description" : "Database service deletion with db name: TEST with id : d6542252-dfa4-47f9-9cfc-22b4f0575c51",
"updatedTime" : "May 06, 2020 11:47:53 AM CEST"
}

This time it is successful :

[root@ODA01 bin]# odacli describe-job -i "5e227755-478b-46c5-a5cd-36687cb21ed8"
 
Job details
----------------------------------------------------------------
ID: 5e227755-478b-46c5-a5cd-36687cb21ed8
Description: Database service deletion with db name: TEST with id : d6542252-dfa4-47f9-9cfc-22b4f0575c51
Status: Success
Created: May 6, 2020 11:47:53 AM CEST
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate db d6542252-dfa4-47f9-9cfc-22b4f0575c51 for deletion May 6, 2020 11:47:53 AM CEST May 6, 2020 11:47:53 AM CEST Success
Database Deletion May 6, 2020 11:47:53 AM CEST May 6, 2020 11:47:54 AM CEST Success
Unregister Db From Cluster May 6, 2020 11:47:54 AM CEST May 6, 2020 11:47:54 AM CEST Success
Kill Pmon Process May 6, 2020 11:47:54 AM CEST May 6, 2020 11:47:54 AM CEST Success
Database Files Deletion May 6, 2020 11:47:54 AM CEST May 6, 2020 11:47:54 AM CEST Success
Deleting Volume May 6, 2020 11:48:01 AM CEST May 6, 2020 11:48:05 AM CEST Success
Delete File Groups of Database TEST May 6, 2020 11:48:05 AM CEST May 6, 2020 11:48:05 AM CEST Success

Restore our customized listener configuration

We can now restore our customized configuration as follows :

grid@ODA01:/u01/app/18.0.0.0/grid/network/admin/ [+ASM1] srvctl stop listener -listener listener
 
grid@ODA01:/u01/app/18.0.0.0/grid/network/admin/ [+ASM1] srvctl status listener -listener listener
Listener LISTENER is enabled
Listener LISTENER is not running
 
grid@ODA01:/u01/app/18.0.0.0/grid/network/admin/ [+ASM1] mv listener.ora.before_db_del_20200506 listener.ora
 
grid@ODA01:/u01/app/18.0.0.0/grid/network/admin/ [+ASM1] srvctl start listener -listener listener
 
grid@ODA01:/u01/app/18.0.0.0/grid/network/admin/ [+ASM1] srvctl status listener -listener listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): oda01

We can also confirm that the listener started successfully by displaying the running tnslsnr processes :

grid@ODA01:/u01/app/18.0.0.0/grid/network/admin/ [+ASM1] ps -ef | grep tnslsnr | grep -v grep
grid 14922 1 0 10:52 ? 00:00:00 /u01/app/18.0.0.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
grid 97812 1 0 12:07 ? 00:00:00 /u01/app/18.0.0.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit

Conclusion

Starting with ODA release 18.5, database deletion will fail if the listener has been customized. The workaround is to restore the default listener configuration before executing the deletion, which might imply a maintenance window for some customers.

This article, Issue deleting a database on ODA?, first appeared on Blog dbi services.
