
Oracle 19c


Oracle 19c was released quite a while ago and some customers already run it in production. However, as it is the long-term support release, I thought I would blog about some interesting information and features around 19c to encourage people to migrate to it.

Download Oracle 19c:

https://www.oracle.com/technetwork/database/enterprise-edition/downloads
or
https://edelivery.oracle.com (search e.g. for “Database Enterprise Edition”)

Docker-Images:
https://github.com/oracle/docker-images/tree/master/OracleDatabase

Oracle provides different offerings for 19c:

On-premises:
– Oracle Database Standard Edition 2 (SE2)
– Oracle Database Enterprise Edition (EE)
– Oracle Database Enterprise Edition on Engineered Systems (EE-ES)
– Oracle Database Personal Edition (PE)

Cloud:
– Oracle Database Cloud Service Standard Edition (DBCS SE)
– Oracle Database Cloud Service Enterprise Edition (DBCS EE)
– Oracle Database Cloud Service Enterprise Edition – High Performance (DBCS EE-HP)
– Oracle Database Cloud Service Enterprise Edition – Extreme Performance (DBCS EE-EP)
– Oracle Database Exadata Cloud Service (ExaCS)

REMARK: When this blog was written, the Autonomous Database offerings provided by Oracle did not run on 19c yet (they actually ran on 18c).

Unfortunately, some promising 19c new features are only available on Exadata. If that is the case (as for Automatic Indexing), you can still test the feature on EE after setting:


SQL> alter system set "_exadata_feature_on"=TRUE scope=spfile;

and restarting the database.

REMARK: DO THIS ON YOUR OWN TEST SYSTEMS ONLY, AND USE INTERNAL ORACLE PARAMETERS ONLY WHEN ORACLE SUPPORT RECOMMENDS DOING SO.

Anyway, there are lots of new features and I want to share some of the more interesting ones with you, along with some examples.

REMARK: You may check https://www.oracle.com/a/tech/docs/database19c-wp.pdf as well

1. Automatic Indexing (only available on EE-ES and ExaCS)

Oracle continually evaluates the executing SQL and the underlying tables to determine which indexes to automatically create and which ones to potentially remove.

Documentation:

You can use the AUTO_INDEX_MODE configuration setting to enable or disable automatic indexing in a database.

The following statement enables automatic indexing in a database and creates any new auto indexes as visible indexes, so that they can be used in SQL statements:


EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');

The following statement enables automatic indexing in a database, but creates any new auto indexes as invisible indexes, so that they cannot be used in SQL statements:


EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','REPORT ONLY');

The following statement disables automatic indexing in a database, so that no new auto indexes are created, and the existing auto indexes are disabled:


EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','OFF');

Show a report of automatic indexing activity:


set serveroutput on size unlimited lines 200 pages 200
declare
report clob := null;
begin
report := DBMS_AUTO_INDEX.REPORT_LAST_ACTIVITY();
dbms_output.put_line(report);
end;
/
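
The configuration can be refined with further DBMS_AUTO_INDEX.CONFIGURE parameters, e.g. to restrict automatic indexing to certain schemas, to place auto indexes in a dedicated tablespace or to change the retention of unused auto indexes. A minimal sketch (the schema and tablespace names are just examples):

-- consider only the schema CBLEILE for automatic indexing
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA','CBLEILE', allow => TRUE);
-- create auto indexes in a dedicated tablespace
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_DEFAULT_TABLESPACE','AUTO_IDX_TBS');
-- drop auto indexes that have not been used for 90 days
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_AUTO','90');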

In a test I repeatedly ran the following statements on a table T1 (which contains 32 copies of the data in all_objects). The table has no indexes:


SQL> select * from t1 where object_id=:b1;
SQL> select * from t1 where data_object_id=:b2;

After some time indexes were created automatically:


SQL> select table_name, index_name, auto from ind;
 
TABLE_NAME INDEX_NAME AUT
-------------------------------- -------------------------------- ---
T1 SYS_AI_5mzwj826444wv YES
T1 SYS_AI_gs3pbvztmyaqx YES
 
2 rows selected.
 
SQL> select dbms_metadata.get_ddl('INDEX','SYS_AI_5mzwj826444wv') from dual;
 
DBMS_METADATA.GET_DDL('INDEX','SYS_AI_5MZWJ826444WV')
------------------------------------------------------------------------------------
CREATE INDEX "CBLEILE"."SYS_AI_5mzwj826444wv" ON "CBLEILE"."T1" ("OBJECT_ID") AUTO
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS"

2. Real-Time Statistics (only available on EE-ES and ExaCS)

The database automatically gathers real-time statistics during conventional DML operations. You can see in the Note section of dbms_xplan.display_cursor whether the statistics used to optimize a query were gathered during DML:


SQL> select * from table(dbms_xplan.display_cursor);
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------
SQL_ID 7cd3thpuf7jxm, child number 0
-------------------------------------
 
select * from t2 where object_id=:y
 
Plan hash value: 1513984157
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 24048 (100)| |
|* 1 | TABLE ACCESS FULL| T2 | 254 | 31242 | 24048 (1)| 00:00:01 |
--------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - filter("OBJECT_ID"=:Y)
 
Note
-----
- dynamic statistics used: statistics for conventional DML
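
Whether a table currently has such statistics can also be checked in the dictionary: the NOTES column of USER_TAB_STATISTICS (and USER_TAB_COL_STATISTICS) shows STATS_ON_CONVENTIONAL_DML for statistics gathered during conventional DML. A small sketch for the table T2 used above:

select table_name, num_rows, notes
from user_tab_statistics
where table_name='T2';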

3. Quarantine problematic SQL (only available on EE-ES and ExaCS)

Runaway SQL statements terminated by Resource Manager due to excessive consumption of processor and I/O resources can now be automatically quarantined. That is, instead of letting the SQL run until it reaches a resource plan limit again, the SQL is not executed at all.

E.g. create a resource plan which limits the SQL execution time for user CBLEILE to 16 seconds:


begin
-- Create a pending area
dbms_resource_manager.create_pending_area();
...
dbms_resource_manager.create_plan_directive(
plan => 'LIMIT_RESOURCE',
group_or_subplan => 'TEST_RUNAWAY_GROUP',
comment => 'Terminate SQL statements when they exceed the ' || 'execution time of 16 seconds',
switch_group => 'CANCEL_SQL',
switch_time => 16,
switch_estimate => false);
...
-- Set the initial consumer group of the 'CBLEILE' user to 'TEST_RUNAWAY_GROUP'
dbms_resource_manager.set_initial_consumer_group('CBLEILE','TEST_RUNAWAY_GROUP');
end;
/

A SQL statement with SQL_ID 12jc0zpmb85tm executed by CBLEILE runs into the 16-second limit:


SQL> select count(*) X
2 from kill_cpu
3 connect by n > prior n
4 start with n = 1
5 ;
from kill_cpu
*
ERROR at line 2:
ORA-00040: active time limit exceeded - call aborted
 
Elapsed: 00:00:19.85

So I quarantine the SQL now:


set serveroutput on size unlimited
DECLARE
quarantine_config VARCHAR2(80);
BEGIN
quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(
SQL_ID => '12jc0zpmb85tm');
dbms_output.put_line(quarantine_config);
END;
/
 
SQL_QUARANTINE_1d93x3d6vumvs
 
PL/SQL procedure successfully completed.
 
SQL> select NAME,ELAPSED_TIME,ENABLED from dba_sql_quarantine;
 
NAME ELAPSED_TIME ENA
---------------------------------------- -------------------------------- ---
SQL_QUARANTINE_1d93x3d6vumvs ALWAYS YES

Other CBLEILE-session:


SQL> select count(*) X
2 from kill_cpu
3 connect by n > prior n
4 start with n = 1
5 ;
from kill_cpu
*
 
ERROR at line 2:
ORA-56955: quarantined plan used
Elapsed: 00:00:00.00
 
SQL> !oerr ora 56955
56955, 00000, "quarantined plan used"
// *Cause: A quarantined plan was used for this statement.
// *Action: Increase the Oracle Database Resource Manager limits or use a new plan.

-> The SQL does not run for 16 seconds, but is stopped immediately (it is under quarantine). You can define the plan hash value for which a SQL should be quarantined and define quarantine thresholds, e.g. 20 seconds of elapsed time. As long as the resource plan limit is below those 20 seconds, the SQL remains under quarantine. If the resource plan allows an execution time limit above 20 seconds, the SQL is executed.
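
Such quarantine thresholds can be set on the quarantine configuration with DBMS_SQLQ.ALTER_QUARANTINE. A minimal sketch (using the quarantine name created above and a threshold of 20 seconds elapsed time):

BEGIN
  DBMS_SQLQ.ALTER_QUARANTINE(
    QUARANTINE_NAME => 'SQL_QUARANTINE_1d93x3d6vumvs',
    PARAMETER_NAME  => 'ELAPSED_TIME',
    PARAMETER_VALUE => '20');
END;
/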

4. Active Standby DML Redirect (only available with Active Data Guard)

On Active Data Guard you may allow moderate write activity. These writes are then transparently redirected to the primary database and written there first (to ensure consistency), and the changes are then shipped back to the standby. This approach allows applications to use the standby for moderate write workloads.
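
DML redirection is not active by default; it has to be enabled on the Active Data Guard standby, either for all sessions or per session. A minimal sketch:

-- enable DML redirection for all sessions on the standby
alter system set adg_redirect_dml=true scope=both;
-- or enable it only for the current session
alter session enable adg_redirect_dml;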

5. Hybrid Partitioned Tables

You can create partitioned tables where some partitions are inside and some partitions are outside the database (on a filesystem, on a cloud filesystem service or on a Hadoop Distributed File System (HDFS)). This allows e.g. “cold” partitions to remain accessible, but on cheap storage.

Here is an example with 3 external partitions (data of 2016-2018) and 1 partition in the DB (data of 2019):


!mkdir -p /u01/my_data/sales_data1
!mkdir -p /u01/my_data/sales_data2
!mkdir -p /u01/my_data/sales_data3
!echo "1,1,01-01-2016,1,1,1000,2000" > /u01/my_data/sales_data1/sales2016_data.txt
!echo "2,2,01-01-2017,2,2,2000,4000" > /u01/my_data/sales_data2/sales2017_data.txt
!echo "3,3,01-01-2018,3,3,3000,6000" > /u01/my_data/sales_data3/sales2018_data.txt
 
connect / as sysdba
alter session set container=pdb1;
 
CREATE DIRECTORY sales_data1 AS '/u01/my_data/sales_data1';
GRANT READ,WRITE ON DIRECTORY sales_data1 TO cbleile;
 
CREATE DIRECTORY sales_data2 AS '/u01/my_data/sales_data2';
GRANT READ,WRITE ON DIRECTORY sales_data2 TO cbleile;
 
CREATE DIRECTORY sales_data3 AS '/u01/my_data/sales_data3';
GRANT READ,WRITE ON DIRECTORY sales_data3 TO cbleile;
 
connect cbleile/difficult_password@pdb1
 
CREATE TABLE hybrid_partition_table
( prod_id NUMBER NOT NULL,
cust_id NUMBER NOT NULL,
time_id DATE NOT NULL,
channel_id NUMBER NOT NULL,
promo_id NUMBER NOT NULL,
quantity_sold NUMBER(10,2) NOT NULL,
amount_sold NUMBER(10,2) NOT NULL
)
EXTERNAL PARTITION ATTRIBUTES (
TYPE ORACLE_LOADER
DEFAULT DIRECTORY sales_data1
ACCESS PARAMETERS(
FIELDS TERMINATED BY ','
(prod_id,cust_id,time_id DATE 'dd-mm-yyyy',channel_id,promo_id,quantity_sold,amount_sold)
)
REJECT LIMIT UNLIMITED
)
PARTITION BY RANGE (time_id)
(
PARTITION sales_2016 VALUES LESS THAN (TO_DATE('01-01-2017','dd-mm-yyyy')) EXTERNAL
LOCATION ('sales2016_data.txt'),
PARTITION sales_2017 VALUES LESS THAN (TO_DATE('01-01-2018','dd-mm-yyyy')) EXTERNAL
DEFAULT DIRECTORY sales_data2 LOCATION ('sales2017_data.txt'),
PARTITION sales_2018 VALUES LESS THAN (TO_DATE('01-01-2019','dd-mm-yyyy')) EXTERNAL
DEFAULT DIRECTORY sales_data3 LOCATION ('sales2018_data.txt'),
PARTITION sales_2019 VALUES LESS THAN (TO_DATE('01-01-2020','dd-mm-yyyy'))
);
 
insert into hybrid_partition_table values (4,4,to_date('01-01-2019','dd-mm-yyyy'),4,4,4000,8000);
 
commit;
 
SQL> select * from hybrid_partition_table where time_id in (to_date('01-01-2017','dd-mm-yyyy'),to_date('01-01-2019','dd-mm-yyyy'));
 
PROD_ID CUST_ID TIME_ID CHANNEL_ID PROMO_ID QUANTITY_SOLD AMOUNT_SOLD
---------- ---------- --------- ---------- ---------- ------------- -----------
2 2 01-JAN-17 2 2 2000 4000
4 4 01-JAN-19 4 4 4000 8000
 
2 rows selected.
 
SQL> select * from table(dbms_xplan.display_cursor);
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
SQL_ID c5s33u5kanzb5, child number 0
-------------------------------------
select * from hybrid_partition_table where time_id in
(to_date('01-01-2017','dd-mm-yyyy'),to_date('01-01-2019','dd-mm-yyyy'))
 
Plan hash value: 2612538111
 
-------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 83 (100)| | | |
| 1 | PARTITION RANGE INLIST | | 246 | 21402 | 83 (0)| 00:00:01 |KEY(I) |KEY(I) |
|* 2 | TABLE ACCESS HYBRID PART FULL| HYBRID_PARTITION_TABLE | 246 | 21402 | 83 (0)| 00:00:01 |KEY(I) |KEY(I) |
|* 3 | TABLE ACCESS FULL | HYBRID_PARTITION_TABLE | | | | |KEY(I) |KEY(I) |
-------------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter((SYS_OP_XTNN("HYBRID_PARTITION_TABLE"."AMOUNT_SOLD","HYBRID_PARTITION_TABLE"."QUANTITY_SOLD","HYBRID_PARTITION_TABLE"."PROMO_ID","HYBRID_PARTITION_TABLE"."CHANNEL_ID","HYBRID_PARTITION_TABLE"."TIME_ID","HYBRID_PARTITION_TABLE"."CUST_ID","HYBRID_PARTITION_TABLE"."PROD_ID") AND INTERNAL_FUNCTION("TIME_ID")))
 
3 - filter((SYS_OP_XTNN("HYBRID_PARTITION_TABLE"."AMOUNT_SOLD","HYBRID_PARTITION_TABLE"."QUANTITY_SOLD","HYBRID_PARTITION_TABLE"."PROMO_ID","HYBRID_PARTITION_TABLE"."CHANNEL_ID","HYBRID_PARTITION_TABLE"."TIME_ID","HYBRID_PARTITION_TABLE"."CUST_ID","HYBRID_PARTITION_TABLE"."PROD_ID") AND INTERNAL_FUNCTION("TIME_ID")))

6. Memoptimized Rowstore

The Memoptimized Rowstore enables fast data inserts into Oracle Database 19c from applications, such as Internet of Things (IoT) applications, which ingest small, high-volume transactions with a minimal amount of transactional overhead.
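
For the fast-ingest part of the Memoptimized Rowstore, a table is flagged with MEMOPTIMIZE FOR WRITE and inserts use the MEMOPTIMIZE_WRITE hint; the rows are buffered in memory and written to the segment asynchronously, so they are not immediately visible to queries. A minimal sketch (table and column names are just examples):

CREATE TABLE iot_measurements (
  sensor_id  NUMBER,
  meas_time  TIMESTAMP,
  meas_value NUMBER
) SEGMENT CREATION IMMEDIATE
  MEMOPTIMIZE FOR WRITE;

INSERT /*+ MEMOPTIMIZE_WRITE */ INTO iot_measurements
VALUES (1, SYSTIMESTAMP, 42);
COMMIT;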

7. 3 PDBs per Multitenant-DB without having to pay for the Multitenant option

Beginning with 19c you are allowed to create 3 PDBs in a container database without requiring the Multitenant option license from Oracle. As the container database architecture (single- or multi-tenant) becomes a must in Oracle 20, it is a good idea to start using it with 19c already.
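
Creating and opening an additional PDB is straightforward. A minimal sketch (PDB name, admin user, password and file destination are just examples):

create pluggable database PDB1
  admin user pdbadmin identified by "Manager_19c#"
  create_file_dest='/u02/oradata/CDB1';
alter pluggable database PDB1 open;
alter pluggable database PDB1 save state;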

Please let me know your experience with Oracle 19c.



Creating archived redolog-files in group dba instead of oinstall


Since Oracle 11g, files created by the database belong by default to the Linux group oinstall. Changing the default group after creating the central inventory is difficult. In this blog I want to show how locally created archived redo logs can end up in group dba instead of oinstall.

One of my customers had the requirement to provide read access on archived redo logs to an application for log mining. To ensure the application can access the archived redo logs, we created an additional local archive log destination:


LOG_ARCHIVE_DEST_9 = 'LOCATION=/logmining/ARCHDEST/NCEE19C valid_for=(online_logfile,primary_role)'

and provided NFS access to that directory for the application. To ensure that the application could access the archived redo logs, the remote user was part of a remote dba group, which had the same group ID (GID) as the dba group on the DB server. Everything worked fine until we migrated to a new server and changed the setup to use oinstall as the default group for Oracle. The application could no longer read the files, because they were created with group oinstall:


oracle@19c:/logmining/ARCHDEST/NCEE19C/ [NCEE19C] ls -ltr
-rw-r-----. 1 oracle oinstall 24403456 Oct 9 21:21 1_32_1017039068.dbf
-rw-r-----. 1 oracle oinstall 64000 Oct 9 21:25 1_33_1017039068.dbf
-rw-r-----. 1 oracle oinstall 29625856 Oct 9 21:27 1_34_1017039068.dbf
oracle@19c:/logmining/ARCHDEST/NCEE19C/ [NCEE19C]

One possibility to work around this would have been to use the ID mapper on Linux, but there is something better:

With the setgid bit (group sticky bit) on a directory, Linux ensures that all files created in that directory belong to the group of the directory.

I.e.


oracle@19c:/logmining/ARCHDEST/ [NCEE19C] ls -l
total 0
drwxr-xr-x. 1 oracle dba 114 Oct 9 21:27 NCEE19C
oracle@19c:/logmining/ARCHDEST/ [NCEE19C] chmod g+s NCEE19C
oracle@19c:/logmining/ARCHDEST/ [NCEE19C] ls -l
drwxr-sr-x. 1 oracle dba 114 Oct 9 21:27 NCEE19C

Whenever an archived redo is created in that directory it will be in the dba-group:


SQL> alter system switch logfile;
 
System altered.
 
SQL> exit
 
oracle@19c:/logmining/ARCHDEST/ [NCEE19C] cd NCEE19C/
oracle@19c:/logmining/ARCHDEST/NCEE19C/ [NCEE19C] ls -ltr
-rw-r-----. 1 oracle oinstall 24403456 Oct 9 21:21 1_32_1017039068.dbf
-rw-r-----. 1 oracle oinstall 64000 Oct 9 21:25 1_33_1017039068.dbf
-rw-r-----. 1 oracle oinstall 29625856 Oct 9 21:27 1_34_1017039068.dbf
-rw-r-----. 1 oracle dba 193024 Oct 9 21:50 1_35_1017039068.dbf
oracle@19c:/logmining/ARCHDEST/NCEE19C/ [NCEE19C]

To make all existing files part of the dba group, use chgrp with the newest archived log as a reference:


oracle@19c:/logmining/ARCHDEST/NCEE19C/ [NCEE19C] chgrp --reference 1_35_1017039068.dbf 1_3[2-4]*.dbf
oracle@19c:/logmining/ARCHDEST/NCEE19C/ [NCEE19C] ls -ltr
-rw-r-----. 1 oracle dba 24403456 Oct 9 21:21 1_32_1017039068.dbf
-rw-r-----. 1 oracle dba 64000 Oct 9 21:25 1_33_1017039068.dbf
-rw-r-----. 1 oracle dba 29625856 Oct 9 21:27 1_34_1017039068.dbf
-rw-r-----. 1 oracle dba 193024 Oct 9 21:50 1_35_1017039068.dbf
oracle@19c:/logmining/ARCHDEST/NCEE19C/ [NCEE19C]

Hope this helps somebody.


Connecting to ODA derby database


ODA lite (ODA X7-2S, X7-2M, X8-2S, X8-2M) comes with an internal derby database to manage ODA metadata. From time to time there is a need to check or update some information within it, for example when facing a database deletion issue. I would like to strongly advise that manually updating the ODA repository should only be done after getting Oracle Support guidance and agreement to do so. Neither the author (that’s me 🙂 ) nor dbi services 😉 would be responsible for any issue or consequence following the commands described in this blog. This is your own responsibility. 😉
This blog is mainly intended to show how to connect to this internal derby database.

Performing a backup of the derby database

Breaking the derby database would damage the ODA, with the consequence of having to reimage the ODA.

To back up the derby database, we need to stop the dcs agent.

[root@ODA03 derbyjar]# initctl stop initdcsagent
initdcsagent stop/waiting

[root@ODA03 derbyjar]# ps -ef | grep dcs-agent | grep -v grep
[root@ODA03 derbyjar]#

Then we can back up the repository:

[root@ODA03 derbyjar]# cd /opt/oracle/dcs/repo/

[root@ODA03 repo]# ls -l
total 4
drwxr-xr-x 4 root root 4096 Jun 12 14:05 node_0

[root@ODA03 repo]# cp -rp node_0 node_0_BKP_12.06.2019

Connecting to the derby database

To connect to the previously taken backup, use the following connect string:

[root@ODA03 repo]# /usr/java/jdk1.8.0_161/db/bin/ij
ij version 10.11
ij> connect 'jdbc:derby:node_0_BKP_12.06.2019';

To connect to the running derby database, use the following connect string (the dcs agent needs to be stopped):

[root@ODA03 repo]# /usr/java/jdk1.8.0_161/db/bin/ij
ij version 10.11
ij> connect 'jdbc:derby:node_0';

Running commands

The language used to interact with the derby database is standard SQL:

ij> select id,name,dbname,dbid,status,DBSTORAGE from db where dbname='test';
ID                                  |NAME|DBNAME|DBID        |STATUS  |DBSTORAGE
------------------------------------|----|------|------------|--------|---------
c7db612e-f237-4491-9a30-ff2c2b75831f|test|test  |032306496044|Deleting|Acfs

1 row selected
ij>

To list all the tables, you can run:

ij> show tables;
TABLE_SCHEM         |TABLE_NAME                    |REMARKS
------------------------------------------------------------------------
SYS                 |SYSALIASES                    |
SYS                 |SYSCHECKS                     |
...
...
...
APP                 |CPUCORES                      |
APP                 |DATABASE                      |
APP                 |DATABASEHOME                  |
APP                 |DB                            |
APP                 |DBHOME                        |
APP                 |DBNODE                        |
APP                 |DBSTORAGEDETAILS              |
APP                 |DBSTORAGEDETAILS_VOLS         |
APP                 |DBSTORAGELOCATIONS            |
APP                 |DCSPARAMETERS                 |
APP                 |DCS_USER                      |
APP                 |DG_CONFIGURATION              |
APP                 |DG_CONFIGURATION_REPLICATION_&|
APP                 |DISK                          |
APP                 |DISKGROUP                     |
APP                 |DISKINFO                      |
APP                 |FILEMULTIUPLOADPARTSRECORD    |
APP                 |FILEMULTIUPLOADRECORD         |
APP                 |FS                            |
APP                 |GI                            |
APP                 |GIHOME                        |
APP                 |GROUPENTITY                   |
APP                 |IDEMPOTENCYMAP                |
APP                 |JOBEXECUTION                  |
APP                 |JOBSCHEDULE                   |
APP                 |JOB_REPORT                    |
APP                 |JOB_RESOURCE_INFO             |
APP                 |LOGCLEANPOLICY                |
APP                 |LOGCLEANUPSUMMARY             |
APP                 |NETSECURITYRULES              |
APP                 |NETSECURITY_ENCRYPTIONALGORIT&|
APP                 |NETSECURITY_INTEGRITYALGORITH&|
APP                 |NETWORK                       |
APP                 |NETWORKINTERFACE              |
APP                 |NETWORKINTERFACE_INTERFACEMEM&|
APP                 |NETWORK_NETWORKTYPE           |
APP                 |OBJECTSTORESWIFT              |
APP                 |OBSERVER                      |
...
...
...

93 rows selected
ij>

Exiting derby database connection

To exit the tool, use the following command:

ij> exit;

Getting the latest version of the derby jar

Once, the derby jar file provided by the Oracle support team was not the correct version. As per their guidance, I had to run:

[root@ODA03 repo]# java -cp /root/derbyjar/derby.jar:/root/derbyjar/derbytools.jar org.apache.derby.tools.ij
ij version 10.11
ij> connect 'jdbc:derby:node_0';

I got the following errors:

ERROR XJ040: Failed to start database 'node_0' with class loader sun.misc.Launcher$AppClassLoader@42a57993, see the next exception for details.
ERROR XSLAN: Database at /opt/oracle/dcs/repo/node_0 has an incompatible format with the current version of the software.  The database was created by or upgraded by version 10.14.

I downloaded the latest appropriate derby jar from the Apache website. I could then successfully connect using the correct version of the derby jar matching the current ODA database.

[root@ODA03 repo]# java -cp /root/derbyjar/derby.jar:/root/derbyjar/derbytools.jar org.apache.derby.tools.ij
ij version 10.11
ij> connect 'jdbc:derby:node_0';
ij>

Conclusion

It might be interesting to know how to connect to the ODA metadata database in order to check, troubleshoot and understand some internal ODA behavior. BUT, remember that no update should be done in this database without Oracle Support agreement and without taking a backup of the repository first.


Having multiple standby databases and cascading with dbvisit


Dbvisit Standby is a disaster recovery solution that you can use with Oracle Standard Edition. I have been working on a customer project where I had to set up a system with one primary and two standby databases. One of the standby databases had to run with a gap of 24 hours. Knowing that flashback possibilities are very limited on Standard Edition, this gives the customer the ability to extract and restore data that was wrongly lost following human errors.

The initial configuration is the following:

Database instance, db_name: MyDB
MyDB_02 (db_unique_name): primary database running on the srv02 server.
MyDB_01 (db_unique_name): expected standby database running on the srv01 server.
MyDB_03 (db_unique_name): expected standby database running on the srv03 server.

The following DDC configuration files will be used:
MyDBSTD1: configuration file for the first standby, synchronized every 10 minutes.
MyDBSTD2: configuration file for the second standby, synchronized every 24 hours.

Let me walk you through the steps to set up such a configuration. This article is not intended to show the whole process of implementing a Dbvisit solution, but only the steps required to work with multiple standbys. We will also talk about how we can implement a cascaded standby and an apply delay lag within Dbvisit.

Recommendations

In order to limit the manual configuration changes in the DDC files after a switchover, it is recommended to use the same ORACLE_HOME, archive destination and Dbvisit home directory as much as possible.

Creating MyDBSTD1 DDC configuration file

The first standby configuration file will be created and used between MyDB_03 (srv03) and MyDB_02 (srv02).

oracle@srv02:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -o setup


=========================================================

     Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd)
           http://www.dbvisit.com

=========================================================

=>dbvctl only needs to be run on the primary server.

Is this the primary server?  [Yes]:
The following Dbvisit Database configuration (DDC) file(s) found on this
server:

     DDC
     ===
1)   Create New DDC
2)   Cancel

Please enter choice [] : 1

Is this correct?  [Yes]:

...
...
...

Below are the list of configuration variables provided during the setup process:

Configuration Variable             Value Provided
======================             ==============
ORACLE_SID                         MyDB
ORACLE_HOME                        /opt/oracle/product/12.2.0

SOURCE                             srv02
ARCHSOURCE                         /u03/app/oracle/dbvisit_arch/MyDB
RAC_DR                             N
USE_SSH                            N
DESTINATION                        srv03
NETPORT                            7890
DBVISIT_BASE_DR                    /u01/app/dbvisit
ORACLE_HOME_DR                     /u01/app/oracle/product/12.2.0.1/dbhome_1
DB_UNIQUE_NAME_DR                  MyDB_03
ARCHDEST                           /u03/app/oracle/dbvisit_arch/MyDB
ORACLE_SID_DR                      MyDB
ENV_FILE                           MyDBSTD1

Are these variables correct?  [Yes]:

>>> Dbvisit Database configuration (DDC) file MyDBSTD1 created.

>>> Dbvisit Database repository (DDR) MyDB created.
   Repository Version          8.4
   Software Version            8.4
   Repository Status           VALID


Do you want to enter license key for the newly created Dbvisit Database configuration (DDC) file?  [Yes]:

Enter license key and press Enter: []: XXXXXXXXXXXXXXXXXXXXXXXXXXX
>>> Dbvisit Standby License
License Key     : XXXXXXXXXXXXXXXXXXXXXXXXXXX
customer_number : XXXXXX
dbname          : MyDB
expiry_date     : 2099-05-06
product_id      : 8
sequence        : 1
status          : VALID
updated         : YES

PID:423545
TRACE:dbvisit_install.log

Synchronizing both MyDB_02 and MyDB_03

Shipping logs from primary to standby

oracle@srv02:/u01/app/dbvisit/standby/ [rdbms12201] ./dbvctl -d MyDBSTD1
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 326409)
dbvctl started on srv02: Mon May 20 16:29:14 2019
=============================================================

>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
    Thread: 1 Archive log gap: 30. Transfer log gap: 58080
>>> Sending heartbeat message... skipped
>>> First time Dbvisit Standby runs, Dbvisit Standby configuration will be copied to
    srv03...
>>> Transferring Log file(s) from MyDB on srv02 to srv03 for thread 1:

    thread 1 sequence 58051 (1_58051_987102791.dbf)
    thread 1 sequence 58052 (1_58052_987102791.dbf)
...
...
...
    thread 1 sequence 58079 (1_58079_987102791.dbf)
    thread 1 sequence 58080 (1_58080_987102791.dbf)

=============================================================
dbvctl ended on srv02: Mon May 20 16:30:50 2019
=============================================================

Applying log on standby database

oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 21504)
dbvctl started on srv03: Mon May 20 16:33:42 2019
=============================================================

>>> Sending heartbeat message... skipped

>>> Applying Log file(s) from srv02 to MyDB on srv03:

    thread 1 sequence 58051 (1_58051_987102791.arc)
    thread 1 sequence 58052 (1_58052_987102791.arc)
...
...
...
    thread 1 sequence 58079 (1_58079_987102791.arc)
    thread 1 sequence 58080 (1_58080_987102791.arc)
    Last applied log(s):
    thread 1 sequence 58080

    Next SCN required for recovery 49719323442 generated at 2019-05-20:16:27:09 +02:00.
    Next required log thread 1 sequence 58081

=============================================================
dbvctl ended on srv03: Mon May 20 16:36:52 2019
=============================================================

Running a gap report

oracle@srv02:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 335068)
dbvctl started on srv02: Mon May 20 16:37:53 2019
=============================================================


Dbvisit Standby log gap report for MyDB thread 1 at 201905201637:
-------------------------------------------------------------
Destination database on srv03 is at sequence: 58081.
Source database on srv02 is at log sequence: 58082.
Source database on srv02 is at archived log sequence: 58081.
Dbvisit Standby last transfer log sequence: 58081.
Dbvisit Standby last transfer at: 2019-05-20 16:37:36.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:00:01.


=============================================================
dbvctl ended on srv02: Mon May 20 16:37:57 2019
=============================================================

Switchover to srv03

At that point in the project we did a switchover to the newly created srv03 standby in order to test its stability. The switchover was performed as described below. This step is not mandatory when implementing several standby databases, but as a best practice we always test the first configuration by running a switchover before moving forward.

oracle@srv02:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1 -o switchover
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 12196)
dbvctl started on srv02: Tue May 28 00:07:34 2019
=============================================================

>>> Starting Switchover between srv02 and srv03

Running pre-checks       ... done
Pre processing           ... done
Processing primary       ... done
Processing standby       ... done
Converting standby       ... done
Converting primary       ... done
Completing               ... done
Synchronizing            ... done
Post processing          ... done

>>> Graceful switchover completed.
    Primary Database Server: srv03
    Standby Database Server: srv02

>>> Dbvisit Standby can be run as per normal:
    dbvctl -d MyDBSTD1


PID:12196
TRACE:12196_dbvctl_switchover_MyDBSTD1_201905280007.trc

=============================================================
dbvctl ended on srv02: Tue May 28 00:13:31 2019
=============================================================

srv03 is now the new primary and srv02 a new standby database.

Creating MyDBSTD2 DDC configuration file

Once the MyDB_01 standby database is up and running, we can create its related DDC configuration file. To do so, we simply copy the previous DDC configuration file, MyDBSTD1, and update it as needed.

I first transferred the file from the current primary srv03 to the new standby server srv01:

oracle@srv03:/u01/app/dbvisit/standby/conf/ [MyDB] scp dbv_MyDBSTD1.env oracle@srv01:$PWD
dbv_MyDBSTD1.env		100% 	23KB 	22.7KB/s 		00:00

I copied it to the new DDC configuration file name:

oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] cp dbv_MyDBSTD1.env dbv_MyDBSTD2.env

I updated the new DDC configuration accordingly to have:

  • DESTINATION as srv01 instead of srv02
  • DB_UNIQUE_NAME_DR as MyDB_01 instead of MyDB_02
  • MAILCFG to see the alerts coming from STD2 configuration.

The primary will remain the same: srv03.

oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] vi dbv_MyDBSTD2.env

oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] diff dbv_MyDBSTD1.env dbv_MyDBSTD2.env
86c86
< DESTINATION = srv02
---
> DESTINATION = srv01
93c93
< DB_UNIQUE_NAME_DR = MyDB
---
> DB_UNIQUE_NAME_DR = MyDB_01
135,136c135,136
< MAILCFG_FROM = dbvisit_conf_1@domain.name
< MAILCFG_FROM_DR = dbvisit_conf_1@domain.name
---
> MAILCFG_FROM = dbvisit_conf_2@domain.name
> MAILCFG_FROM_DR = dbvisit_conf_2@domain.name

In case the ORACLE_HOME and ARCHIVE destination are not the same, these parameters will have to be updated as well.

Synchronizing both MyDB_03 and MyDB_01

Shipping logs from primary to standby

oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 25914)
dbvctl started on srv03: Wed Jun  5 20:32:09 2019
=============================================================

>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
    Thread: 1 Archive log gap: 383. Transfer log gap: 67385
>>> Sending heartbeat message... done
>>> First time Dbvisit Standby runs, Dbvisit Standby configuration will be copied to
    srv01...
>>> Transferring Log file(s) from MyDB on srv03 to srv01 for thread 1:

    thread 1 sequence 67003 (o1_mf_1_67003_ghgwj0z2_.arc)
    thread 1 sequence 67004 (o1_mf_1_67004_ghgwmj1w_.arc)
...
...
...
    thread 1 sequence 67384 (o1_mf_1_67384_ghj2fbgj_.arc)
    thread 1 sequence 67385 (o1_mf_1_67385_ghj2g883_.arc)

=============================================================
dbvctl ended on srv03: Wed Jun  5 20:42:05 2019
=============================================================

Applying log on standby database

oracle@srv01:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 69764)
dbvctl started on srv01: Wed Jun  5 20:42:45 2019
=============================================================

>>> Sending heartbeat message... done

>>> Applying Log file(s) from srv03 to MyDB on srv01:

    thread 1 sequence 67003 (1_67003_987102791.arc)
    thread 1 sequence 67004 (1_67004_987102791.arc)
...
...
...
    thread 1 sequence 67384 (1_67384_987102791.arc)
    thread 1 sequence 67385 (1_67385_987102791.arc)
    Last applied log(s):
    thread 1 sequence 67385

    Next SCN required for recovery 50112484332 generated at 2019-06-05:20:28:24 +02:00.
    Next required log thread 1 sequence 67386

>>> Dbvisit Archive Management Module (AMM)

    Config: number of archives to keep      = 0
    Config: number of days to keep archives = 3
    Config: diskspace full threshold        = 80%
==========

Processing /u03/app/oracle/dbvisit_arch/MyDB...
    Archive log dir: /u03/app/oracle/dbvisit_arch/MyDB
    Total number of archive files   : 383
    Number of archive logs deleted = 0
    Current Disk percent full       : 8%

=============================================================
dbvctl ended on srv01: Wed Jun  5 21:16:30 2019
=============================================================

Running a gap report

oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 44143)
dbvctl started on srv03: Wed Jun  5 21:17:03 2019
=============================================================


Dbvisit Standby log gap report for MyDB_03 thread 1 at 201906052117:
-------------------------------------------------------------
Destination database on srv01 is at sequence: 67385.
Source database on srv03 is at log sequence: 67387.
Source database on srv03 is at archived log sequence: 67386.
Dbvisit Standby last transfer log sequence: 67385.
Dbvisit Standby last transfer at: 2019-06-05 20:42:05.

Archive log gap for thread 1:  1.
Transfer log gap for thread 1: 1.
Standby database time lag (DAYS-HH:MI:SS): +00:48:41.

Switchover to srv01

We now have both the srv01 and srv02 standby databases up and running and connected to the current srv03 primary database. Let's switch over to srv01 and see what the required steps are. After each switchover, the other standby DDC configuration files have to be updated manually.

Checking srv03 and srv02 are synchronized

Both the srv03 and srv02 databases should be in sync; otherwise, ship and apply the archive logs.

oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 93307)
dbvctl started on srv03: Wed Jun  5 21:27:02 2019
=============================================================


Dbvisit Standby log gap report for MyDB_03 thread 1 at 201906052127:
-------------------------------------------------------------
Destination database on srv02 is at sequence: 67386.
Source database on srv03 is at log sequence: 67387.
Source database on srv03 is at archived log sequence: 67386.
Dbvisit Standby last transfer log sequence: 67386.
Dbvisit Standby last transfer at: 2019-06-05 21:24:47.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:27:02.


=============================================================
dbvctl ended on srvxdb03: Wed Jun  5 21:27:08 2019
=============================================================

Checking srv03 and srv01 are synchronized

Both the srv03 and srv01 databases should be in sync; otherwise, ship and apply the archive logs.

oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 90871)
dbvctl started on srv03: Wed Jun  5 21:26:31 2019
=============================================================


Dbvisit Standby log gap report for MyDB_03 thread 1 at 201906052126:
-------------------------------------------------------------
Destination database on srv01 is at sequence: 67386.
Source database on srv03 is at log sequence: 67387.
Source database on srv03 is at archived log sequence: 67386.
Dbvisit Standby last transfer log sequence: 67386.
Dbvisit Standby last transfer at: 2019-06-05 21:26:02.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:26:02.

Switchover to srv01

oracle@srv03:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2 -o switchover
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 20334)
dbvctl started on srv03: Wed Jun  5 21:31:56 2019
=============================================================

>>> Starting Switchover between srv03 and srv01

Running pre-checks       ... done
Pre processing           ... done
Processing primary       ... done
Processing standby       ... done
Converting standby       ... done
Converting primary       ... done
Completing               ... done
Synchronizing            ... done
Post processing          ... done

>>> Graceful switchover completed.
    Primary Database Server: srv01
    Standby Database Server: srv03

>>> Dbvisit Standby can be run as per normal:
    dbvctl -d MyDBSTD2


PID:20334
TRACE:20334_dbvctl_switchover_MyDBSTD2_201906052131.trc

=============================================================
dbvctl ended on srv03: Wed Jun  5 21:37:40 2019
=============================================================

Attach srv02 to srv01 (new primary)

Prior to the switchover:

  • srv03 and srv01 were using the MyDBSTD2 DDC configuration file
  • srv03 and srv02 were using the MyDBSTD1 DDC configuration file

The srv02 standby database now needs to be attached to the new primary srv01. For this we will copy the MyDBSTD1 DDC configuration file from srv02 to srv01, as it is the first time srv01 is primary. Otherwise, we would only need to update the already existing file accordingly.

I transferred the DDC file:

oracle@srv02:/u01/app/dbvisit/standby/conf/ [MyDB] scp dbv_MyDBSTD1.env oracle@srv01:$PWD
dbv_MyDBSTD1.env    100%   23KB  14.8MB/s   00:00

The MyDBSTD1 configuration file has been updated accordingly to reflect the new configuration:

  • SOURCE needs to be changed from srv03 to srv01
  • DESTINATION will remain srv02
  • DB_UNIQUE_NAME needs to be changed from MyDB_03 to MyDB_01
  • DB_UNIQUE_NAME_DR will remain MyDB_02
oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] vi dbv_MyDBSTD1.env

oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] grep ^SOURCE dbv_MyDBSTD1.env
SOURCE = srv01

oracle@srv01:/u01/app/dbvisit/standby/conf/ [MyDB] grep DB_UNIQUE_NAME dbv_MyDBSTD1.env
# DB_UNIQUE_NAME      - Primary database db_unique_name
DB_UNIQUE_NAME = MyDB_01
# DB_UNIQUE_NAME_DR   - Standby database db_unique_name
DB_UNIQUE_NAME_DR = MyDB_02

Checking that databases are all synchronized

After performing several log switches on the primary in order to generate archive logs, I transferred and applied the needed archive log files on both the srv02 and srv03 standby databases. I made sure both were synchronized.

srv01 and srv03 databases:

oracle@srv01:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD2 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 98156)
dbvctl started on srv01: Wed Jun  5 21:52:08 2019
=============================================================


Dbvisit Standby log gap report for MyDB_01 thread 1 at 201906052152:
-------------------------------------------------------------
Destination database on srv03 is at sequence: 67413.
Source database on srv01 is at log sequence: 67414.
Source database on srv01 is at archived log sequence: 67413.
Dbvisit Standby last transfer log sequence: 67413.
Dbvisit Standby last transfer at: 2019-06-05 21:51:13.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:00:00.


=============================================================
dbvctl ended on srv01: Wed Jun  5 21:52:18 2019
=============================================================

srv01 and srv02 databases:

oracle@srv01:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 100393)
dbvctl started on srv01: Wed Jun  5 21:56:06 2019
=============================================================


Dbvisit Standby log gap report for MyDB_01 thread 1 at 201906052156:
-------------------------------------------------------------
Destination database on srv02 is at sequence: 67413.
Source database on srv01 is at log sequence: 67414.
Source database on srv01 is at archived log sequence: 67413.
Dbvisit Standby last transfer log sequence: 67413.
Dbvisit Standby last transfer at: 2019-06-05 21:55:22.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:05:13.


=============================================================
dbvctl ended on srv01: Wed Jun  5 21:56:07 2019
=============================================================

Apply delay lag

The MyDBSTD2 configuration should eventually have an apply lag of 24 hours. This can be achieved using APPLY_DELAY_LAG_MINUTES in the configuration. In order to test it, I agreed with the customer to use a 60-minute delay.

Update MyDBSTD2 DDC configuration file

The following parameters have been updated in the configuration:
APPLY_DELAY_LAG_MINUTES = 60
DMN_MONITOR_INTERVAL_DR = 0
TRANSFER_LOG_GAP_THRESHOLD = 0
ARCHIVE_LOG_GAP_THRESHOLD = 60

APPLY_DELAY_LAG_MINUTES is the delay in minutes to take into account before applying the change vectors.
DMN_MONITOR_INTERVAL_DR is the interval in seconds for the log monitor schedule on the destination. 0 means deactivated.
TRANSFER_LOG_GAP_THRESHOLD is the difference allowed between the last archived sequence on the primary and the last sequence transferred to the standby server.
ARCHIVE_LOG_GAP_THRESHOLD is the difference allowed between the last archived sequence on the primary and the last applied sequence on the standby database before an alert is sent.

oracle@srv03:/u01/app/dbvisit/standby/conf/ [MyDB] cp dbv_MyDBSTD2.env dbv_MyDBSTD2.env.201906131343

oracle@srv03:/u01/app/dbvisit/standby/conf/ [MyDB] vi dbv_MyDBSTD2.env

oracle@srv03:/u01/app/dbvisit/standby/conf/ [MyDB] diff dbv_MyDBSTD2.env dbv_MyDBSTD2.env.201906131343
281c281
< DMN_MONITOR_INTERVAL_DR = 0
---
> DMN_MONITOR_INTERVAL_DR = 5
331c331
< APPLY_DELAY_LAG_MINUTES = 60
---
> APPLY_DELAY_LAG_MINUTES = 0
374c374
< ARCHIVE_LOG_GAP_THRESHOLD = 60
---
> ARCHIVE_LOG_GAP_THRESHOLD = 0

oracle@srv03:/u01/app/dbvisit/standby/conf/ [MyDB] grep ^TRANSFER_LOG_GAP_THRESHOLD dbv_MyDBSTD2.env
TRANSFER_LOG_GAP_THRESHOLD = 0

Report displayed with an apply delay lag configured

When generating a report, we can see that there is no gap in the log transfer, as the archive logs are transferred through the crontab every 10 minutes. On the other hand, we can see the expected delay of 60 minutes in applying the logs.

oracle@srv03:/u01/app/dbvisit/standby/ [MyDBTEST] ./dbvctl -d MyDBSTD2 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 66003)
dbvctl started on srv03: Thu Jun 13 15:21:29 2019
=============================================================


Dbvisit Standby log gap report for MyDB_03 thread 1 at 201906131521:
-------------------------------------------------------------
Destination database on srv01 is at sequence: 73856.
Source database on srv03 is at log sequence: 73890.
Source database on srv03 is at archived log sequence: 73889.
Dbvisit Standby last transfer log sequence: 73889.
Dbvisit Standby last transfer at: 2019-06-13 15:20:15.

Archive log gap for thread 1:  33 (apply_delay_lag_minutes=60).
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +01:00:00.


=============================================================
dbvctl ended on srv03: Thu Jun 13 15:21:35 2019
=============================================================

Cascading standby database

What about a cascading standby database? Cascading standby databases are possible with Dbvisit. We would use a cascaded standby for a reporting server that needs to be updated less frequently, or to offload the primary database from sending archive logs to multiple standby databases. The cascaded standby database is kept updated through the first standby. Cascading has been possible since Dbvisit version 8.

The following needs to be known:

  • Switchover will not be possible between the primary and the cascaded standby database.
  • The DDC configuration file between the first standby and the cascaded standby needs to have:
    • As SOURCE the first standby database
    • The CASCADE parameter set to Y. This is done automatically when creating the DDC configuration with dbvctl -o setup. In the traces you will see: >>> Source database is a standby database. CASCADE flag will be turned on.
    • The ARCHDEST and ARCHSOURCE locations on the first standby need to have the same value.

    The principle is then exactly the same: running dbvctl -d from the first standby will ship the archive logs to the second (cascaded) standby.

I ran some tests in my lab.

Environment

DBVP is the primary server.
DBVS is the first standby server.
DBVS2 is the second cascaded server.

oracle@DBVP:/u01/app/dbvisit/standby/ [DBVPDB] DBVPDB
********* dbi services Ltd. *********
STATUS                 : OPEN
DB_UNIQUE_NAME         : DBVPDB_SITE1
OPEN_MODE              : READ WRITE
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PRIMARY
FLASHBACK_ON           : NO
FORCE_LOGGING          : YES
VERSION                : 12.2.0.1.0
CDB Enabled            : NO
*************************************

oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] DBVPDB
********* dbi services Ltd. *********
STATUS                 : MOUNTED
DB_UNIQUE_NAME         : DBVPDB_SITE2
OPEN_MODE              : MOUNTED
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PHYSICAL STANDBY
FLASHBACK_ON           : NO
FORCE_LOGGING          : YES
CDB Enabled            : NO
*************************************


oracle@DBVS2:/u01/app/dbvisit/standby/ [DBVPDB] DBVPDB
********* dbi services Ltd. *********
STATUS                 : MOUNTED
DB_UNIQUE_NAME         : DBVPDB_SITE3
OPEN_MODE              : MOUNTED
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PHYSICAL STANDBY
FLASHBACK_ON           : NO
FORCE_LOGGING          : YES
CDB Enabled            : NO
*************************************

Create cascaded DDC configuration file

The DDC configuration file will be created from the first standby node.
DBVS (first standby server) will be the SOURCE.
DBVS2 (cascaded standby server) will be the DESTINATION.

oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -o setup


=========================================================

     Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b)
           http://www.dbvisit.com

=========================================================

=>dbvctl only needs to be run on the primary server.

Is this the primary server?  [Yes]:
The following Dbvisit Database configuration (DDC) file(s) found on this
server:

     DDC
     ===
1)   Create New DDC
2)   DBVPDB
3)   DBVPDB_SITE1
4)   DBVPOMF_SITE1
5)   Cancel

Please enter choice [] : 1

Is this correct?  [Yes]:

...


Continue ?  [No]: yes

=========================================================
Dbvisit Standby setup begins.
=========================================================
The following Oracle instance(s) have been found on this server:

     SID            ORACLE_HOME
     ===            ===========
1)   rdbms12201     /u01/app/oracle/product/12.2.0/dbhome_1
2)   DBVPDB         /u01/app/oracle/product/12.2.0/dbhome_1
3)   DBVPOMF        /u01/app/oracle/product/12.2.0/dbhome_1
4)   DUP            /u01/app/oracle/product/12.2.0/dbhome_1
5)   Enter own ORACLE_SID and ORACLE_HOME
Please enter choice [] : 2

Is this correct?  [Yes]:
=>ORACLE_SID will be: DBVPDB
=>ORACLE_HOME will be: /u01/app/oracle/product/12.2.0/dbhome_1

>>> Source database is a standby database. CASCADE flag will be turned on.

Yes to continue or No to cancel setup?  [Yes]:

...
...
...

Below are the list of configuration variables provided during the setup process:

Configuration Variable             Value Provided
======================             ==============
ORACLE_SID                         DBVPDB
ORACLE_HOME                        /u01/app/oracle/product/12.2.0/dbhome_1

SOURCE                             DBVS
ARCHSOURCE                         /u90/dbvisit_arch/DBVPDB_SITE2
RAC_DR                             N
USE_SSH                            Y
DESTINATION                        DBVS2
NETPORT                            22
DBVISIT_BASE_DR                    /oracle/u01/app/dbvisit
ORACLE_HOME_DR                     /u01/app/oracle/product/12.2.0/dbhome_1
DB_UNIQUE_NAME_DR                  DBVPDB_SITE3
ARCHDEST                           /u90/dbvisit_arch/DBVPDB_SITE3
ORACLE_SID_DR                      DBVPDB
ENV_FILE                           DBVPDB_CASCADED

Are these variables correct?  [Yes]:

>>> Dbvisit Database configuration (DDC) file DBVPDB_CASCADED created.

>>> Dbvisit Database repository (DDR) already installed.
   Repository Version          8.3
   Software Version            8.3
   Repository Status           VALID


Do you want to enter license key for the newly created Dbvisit Database configuration (DDC) file?  [Yes]:

Enter license key and press Enter: []: 4jo6z-8aaai-u09b6-ijjxe-cxks5-1114a-ozfvp
oracle@dbvs2's password:
>>> Dbvisit Standby License
License Key     : 4jo6z-8aaai-u09b6-ijjxe-cxks5-1114a-ozfvp
customer_number : 1
dbname          :
expiry_date     : 2019-05-29
product_id      : 8
sequence        : 1
status          : VALID
updated         : YES

PID:25571
TRACE:dbvisit_install.log

The Dbvisit software detected that the SOURCE is already a standby database and automatically configured the CASCADE flag to Y.

>>> Source database is a standby database. CASCADE flag will be turned on.
oracle@DBVS:/u01/app/dbvisit/standby/conf/ [DBVPDB] grep CASCADE dbv_DBVPDB_CASCADED.env
# Variable: CASCADE
#      CASCADE = Y
CASCADE = Y

Synchronize first standby with primary

Ship archive log from primary to first standby
oracle@DBVP:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 23506)
dbvctl started on DBVP: Wed May 15 01:24:55 2019
=============================================================

>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
    Thread: 1 Archive log gap: 3. Transfer log gap: 3
>>> Transferring Log file(s) from DBVPDB on DBVP to DBVS for thread 1:

    thread 1 sequence 50 (o1_mf_1_50_gfpmk7sg_.arc)
    thread 1 sequence 51 (o1_mf_1_51_gfpmkc7p_.arc)
    thread 1 sequence 52 (o1_mf_1_52_gfpmkf7w_.arc)

=============================================================
dbvctl ended on DBVP: Wed May 15 01:25:06 2019
=============================================================
Apply archive log on first standby
oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 27769)
dbvctl started on DBVS: Wed May 15 01:25:25 2019
=============================================================


>>> Applying Log file(s) from DBVP to DBVPDB on DBVS:

>>> No new logs to apply.
    Last applied log(s):
    thread 1 sequence 52

    Next SCN required for recovery 885547 generated at 2019-05-15:01:24:29 +02:00.
    Next required log thread 1 sequence 53

=============================================================
dbvctl ended on DBVS: Wed May 15 01:25:27 2019
=============================================================
Run a gap report
oracle@DBVP:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB -i
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 23625)
dbvctl started on DBVP: Wed May 15 01:25:55 2019
=============================================================


Dbvisit Standby log gap report for DBVPDB_SITE1 thread 1 at 201905150125:
-------------------------------------------------------------
Destination database on DBVS is at sequence: 52.
Source database on DBVP is at log sequence: 53.
Source database on DBVP is at archived log sequence: 52.
Dbvisit Standby last transfer log sequence: 52.
Dbvisit Standby last transfer at: 2019-05-15 01:25:06.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:00:33.


=============================================================
dbvctl ended on DBVP: Wed May 15 01:25:58 2019
=============================================================

Synchronize cascaded standby with first standby

Ship archive log from first standby to cascaded standby
oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB_CASCADED
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 27965)
dbvctl started on DBVS: Wed May 15 01:26:41 2019
=============================================================

>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
    Thread: 1 Archive log gap: 3. Transfer log gap: 3
>>> Transferring Log file(s) from DBVPDB on DBVS to DBVS2 for thread 1:

    thread 1 sequence 50 (1_50_979494498.arc)
    thread 1 sequence 51 (1_51_979494498.arc)
    thread 1 sequence 52 (1_52_979494498.arc)

=============================================================
dbvctl ended on DBVS: Wed May 15 01:26:49 2019
=============================================================
Apply archive log on cascaded standby
oracle@DBVS2:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB_CASCADED
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 21118)
dbvctl started on DBVS2: Wed May 15 01:27:21 2019
=============================================================


>>> Applying Log file(s) from DBVS to DBVPDB on DBVS2:

    thread 1 sequence 50 (1_50_979494498.arc)
    thread 1 sequence 51 (1_51_979494498.arc)
    thread 1 sequence 52 (1_52_979494498.arc)
    Last applied log(s):
    thread 1 sequence 52

    Next SCN required for recovery 885547 generated at 2019-05-15:01:24:29 +02:00.
    Next required log thread 1 sequence 53

=============================================================
dbvctl ended on DBVS2: Wed May 15 01:27:33 2019
=============================================================
Run a gap report
oracle@DBVS:/u01/app/dbvisit/standby/ [DBVPDB] ./dbvctl -d DBVPDB_CASCADED -i
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 28084)
dbvctl started on DBVS: Wed May 15 01:28:07 2019
=============================================================


Dbvisit Standby log gap report for DBVPDB_SITE2 thread 1 at 201905150128:
-------------------------------------------------------------
Destination database on DBVS2 is at sequence: 52.
Source database on DBVS is at applied log sequence: 52.
Dbvisit Standby last transfer log sequence: 52.
Dbvisit Standby last transfer at: 2019-05-15 01:26:49.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:00:00.


=============================================================
dbvctl ended on DBVS: Wed May 15 01:28:11 2019
=============================================================

Conclusion

With dbvisit we are able to configure several standby databases, choose an apply lag delay and also configure a cascaded standby. The downside is that the DDC configuration file needs to be manually adapted after each switchover.
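
For example, the relevant DDC variables can be quickly reviewed before and after a switchover (a hedged sketch based on the environment file created in this setup):

oracle@DBVS:/u01/app/dbvisit/standby/conf/ [DBVPDB] grep -E 'SOURCE|DESTINATION|CASCADE' dbv_DBVPDB_CASCADED.env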

Cet article Having multiple standby databases and cascading with dbvisit est apparu en premier sur Blog dbi services.

Adding a dbvisit standby database on the ODA in a non-OMF environment


I have recently been working on a customer project where I was challenged with adding a dbvisit standby database on an ODA X7-2M, named ODA03. The existing customer environment was composed of Oracle Standard Edition 12.2 databases. The primary database, myDB, is running on a server named srv02 and uses a non-OMF configuration. On the ODA side we are working with an OMF configuration. The dbvisit version available at that time was version 8. Note that version 9 is currently the latest one and brings some new cool features. Through this blog I would like to share with you my experience, the problem I faced and the solution I put in place.

Preparing the instance on the ODA

First of all I created an instance-only database on the ODA.

[root@ODA03 ~]# odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status   
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
ec33e32a-37d1-4d0d-8c40-b358dcf5660c     OraDB12201_home1     12.2.0.1.180717                          /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured

[root@ODA03 ~]# odacli create-database -m -u myDB_03 -dn domain.name -n myDB -r ACFS -io -dh ec33e32a-37d1-4d0d-8c40-b358dcf5660c
Password for SYS,SYSTEM and PDB Admin:

Job details
----------------------------------------------------------------
                     ID:  96fd4d07-4604-4158-9c25-702c01f4493e
            Description:  Database service creation with db name: myDB
                 Status:  Created
                Created:  May 15, 2019 4:29:15 PM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@ODA03 ~]# odacli describe-job -i 96fd4d07-4604-4158-9c25-702c01f4493e

Job details
----------------------------------------------------------------
                     ID:  96fd4d07-4604-4158-9c25-702c01f4493e
            Description:  Database service creation with db name: myDB
                 Status:  Success
                Created:  May 15, 2019 4:29:15 PM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance               May 15, 2019 4:29:16 PM CEST        May 15, 2019 4:29:16 PM CEST        Success
Creating volume datmyDB                    May 15, 2019 4:29:16 PM CEST        May 15, 2019 4:29:38 PM CEST        Success
Creating volume reco                     May 15, 2019 4:29:38 PM CEST        May 15, 2019 4:30:00 PM CEST        Success
Creating ACFS filesystem for DATA        May 15, 2019 4:30:00 PM CEST        May 15, 2019 4:30:17 PM CEST        Success
Creating ACFS filesystem for RECO        May 15, 2019 4:30:17 PM CEST        May 15, 2019 4:30:35 PM CEST        Success
Database Service creation                May 15, 2019 4:30:35 PM CEST        May 15, 2019 4:30:51 PM CEST        Success
Auxiliary Instance Creation              May 15, 2019 4:30:35 PM CEST        May 15, 2019 4:30:47 PM CEST        Success
password file creation                   May 15, 2019 4:30:47 PM CEST        May 15, 2019 4:30:49 PM CEST        Success
archive and redo log location creation   May 15, 2019 4:30:49 PM CEST        May 15, 2019 4:30:49 PM CEST        Success
updating the Database version            May 15, 2019 4:30:49 PM CEST        May 15, 2019 4:30:51 PM CEST        Success

Next steps are really common DBA operations :

  • Create a pfile from the current primary database
  • Transfer the pfile to the ODA
  • Update the pfile as needed (path, db_unique_name, …)
  • Create a spfile from the pfile on the new ODA database
  • Apply ODA specific instance parameters
  • Copy or create the password file with same password

The parameters that are mandatory to be set on the ODA instance are the following :
*.db_create_file_dest='/u02/app/oracle/oradata/myDB_03'
*.db_create_online_log_dest_1='/u03/app/oracle/redo'
*.db_recovery_file_dest='/u03/app/oracle/fast_recovery_area'

Also, all the *_convert parameters should be removed; using convert parameters is incompatible with OMF.
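
As an illustration, a minimal sketch of these preparation steps could look like this (hostnames, SIDs and paths are taken from this setup and are assumptions; adapt them to your environment):

-- On the primary (srv02): create a pfile from the running instance
SQL> create pfile='/tmp/initmyDB.ora' from spfile;

# Transfer it to the ODA together with the password file
oracle@srv02:/tmp/ [myDB] scp /tmp/initmyDB.ora oracle@ODA03:/tmp/
oracle@srv02:/tmp/ [myDB] scp $ORACLE_HOME/dbs/orapwmyDB oracle@ODA03:/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/

# On the ODA: adapt the pfile (db_unique_name, OMF destinations, remove the *_convert parameters)
# and create the spfile from it
oracle@ODA03:/tmp/ [myDB] vi /tmp/initmyDB.ora
SQL> create spfile from pfile='/tmp/initmyDB.ora';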

Creating the standby database

Using dbvisit

I first tried to use dbvisit to create the standby database.

As usual and common dbvisit operation, I first created the DDC configuration file from the primary server :

oracle@srv02:/u01/app/dbvisit/standby/ [myDB] ./dbvctl -o setup
...
...
...
Below are the list of configuration variables provided during the setup process:

Configuration Variable             Value Provided
======================             ==============
ORACLE_SID                         myDB
ORACLE_HOME                        /opt/oracle/product/12.2.0

SOURCE                             srv02
ARCHSOURCE                         /u03/app/oracle/dbvisit_arch/myDB
RAC_DR                             N
USE_SSH                            N
DESTINATION                        ODA03
NETPORT                            7890
DBVISIT_BASE_DR                    /u01/app/dbvisit
ORACLE_HOME_DR                     /u01/app/oracle/product/12.2.0.1/dbhome_1
DB_UNIQUE_NAME_DR                  myDB_03
ARCHDEST                           /u03/app/oracle/dbvisit_arch/myDB
ORACLE_SID_DR                      myDB
ENV_FILE                           myDBSTD1

Are these variables correct?  [Yes]:
...
...
...

I then used this DDC configuration file to create the standby database :

oracle@srv02:/u01/app/dbvisit/standby/ [myDB] ./dbvctl -d myDBSTD1 --csd


-------------------------------------------------------------------------------

INIT ORA PARAMETERS
-------------------------------------------------------------------------------
*              audit_file_dest                         /u01/app/oracle/admin/myDB/adump
*              compatible                              12.2.0
*              control_management_pack_access          NONE
*              db_block_size                           8192
*              db_create_file_dest                     /u02/app/oracle/oradata/myDB_03
*              db_create_online_log_dest_1             /u03/app/oracle/redo
*              db_domain
*              db_name                                 myDB
*              db_recovery_file_dest                   /u03/app/oracle/fast_recovery_area
*              db_recovery_file_dest_size              240G
*              db_unique_name                          myDB_03
*              diagnostic_dest                         /u01/app/oracle
*              dispatchers                             (PROTOCOL=TCP) (SERVICE=myDBXDB)
*              instance_mode                           READ-WRITE
*              java_pool_size                          268435456
*              log_archive_dest_1                      LOCATION=USE_DB_RECOVERY_FILE_DEST
*              open_cursors                            3000
*              optimizer_features_enable               12.2.0.1
*              pga_aggregate_target                    4194304000
*              processes                               8000
*              remote_login_passwordfile               EXCLUSIVE
*              resource_limit                          TRUE
*              sessions                                7552
*              sga_max_size                            53687091200
*              sga_target                              26843545600
*              shared_pool_reserved_size               117440512
*              spfile                                  OS default
*              statistics_level                        TYPICAL
*              undo_retention                          300
*              undo_tablespace                         UNDOTBS1

-------------------------------------------------------------------------------

Status: VALID

What would you like to do:
   1 - Create standby database using existing saved template
   2 - View content of existing saved template
   3 - Return to the previous menu
   Please enter your choice [1]:

This operation failed with the following errors :

Cannot create standby data or temp file /usr/oracle/oradata/myDB/myDB_bi_temp01.dbf for
primary file /usr/oracle/oradata/myDB/myDB_bi_temp01.dbf as location /usr/oracle/oradata/myDB
does not exist on the standby.

As per the dbvisit documentation, dbvisit standby is certified on the ODA and fully compatible with non-OMF and OMF databases. This is correct; the only restriction is that the whole environment needs to use the same configuration. That is to say, if the primary is OMF, the standby is expected to be OMF. If the primary is running a non-OMF configuration, the standby should use non-OMF as well.

Using RMAN

I decided to duplicate the database using RMAN and a backup that I transferred locally to the ODA. The backup was the previous nightly incremental level 0 backup. Before running the RMAN duplication I executed a last archive log backup to make sure the most recent archive logs would be used in the duplication.

I’m taking this opportunity to highlight that, thanks to the ODA NVMe technology, the duplication of the 3 TB database without multiple channels (Standard Edition) took only a bit more than 2 hours. On the existing servers this took about 10 hours.

I added the following tns entry in the tnsnames.ora.

myDBSRV3 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ODA03.domain.name)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = myDB)
      (UR = A)
    )
  )

Of course I could have used a local connection.

I made sure the database was in NOMOUNT status and ran the RMAN duplication :

oracle@ODA03:/opt/oracle/backup/ [myDB] rmanh

Recovery Manager: Release 12.2.0.1.0 - Production on Mon May 20 13:24:29 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect auxiliary sys@myDBSRV3

auxiliary database Password:
connected to auxiliary database: myDB (not mounted)

RMAN> run {
2> duplicate target database for standby dorecover backup location '/opt/oracle/backup/myDB';
3> }

Starting Duplicate Db at 20-MAY-2019 13:25:51

contents of Memory Script:
{
   sql clone "alter system set  control_files =
  ''/u03/app/oracle/redo/myDB_03/controlfile/o1_mf_gg4qvpnn_.ctl'' comment=
 ''Set by RMAN'' scope=spfile";
   restore clone standby controlfile from  '/opt/oracle/backup/myDB/ctl_myDB_myDB_s108013_p1_newbak.ctl';
}
executing Memory Script

sql statement: alter system set  control_files =   ''/u03/app/oracle/redo/myDB_03/controlfile/o1_mf_gg4qvpnn_.ctl'' comment= ''Set by RMAN'' scope=spfile

Starting restore at 20-MAY-2019 13:25:51
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=9186 device type=DISK

channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u03/app/oracle/redo/myDB_03/controlfile/o1_mf_gg4qvpnn_.ctl
Finished restore at 20-MAY-2019 13:25:52

contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database
released channel: ORA_AUX_DISK_1
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=9186 device type=DISK

contents of Memory Script:
{
   set until scn  49713361973;
   set newname for clone tempfile  1 to new;
   set newname for clone tempfile  2 to new;
   switch clone tempfile all;
   set newname for clone datafile  1 to new;
   set newname for clone datafile  2 to new;
   set newname for clone datafile  3 to new;
   set newname for clone datafile  4 to new;
   set newname for clone datafile  5 to new;
   set newname for clone datafile  6 to new;
   set newname for clone datafile  7 to new;
   set newname for clone datafile  8 to new;
   set newname for clone datafile  10 to new;
   set newname for clone datafile  11 to new;
   set newname for clone datafile  12 to new;
   set newname for clone datafile  13 to new;
   set newname for clone datafile  14 to new;
   set newname for clone datafile  15 to new;
   set newname for clone datafile  16 to new;
   set newname for clone datafile  17 to new;
   set newname for clone datafile  18 to new;
   restore
   clone database
   ;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 2 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_lx_bi_te_%u_.tmp in control file

executing command: SET NEWNAME

...
...
...

executing command: SET NEWNAME

Starting restore at 20-MAY-2019 13:25:57
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00005 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_lxdataid_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00006 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_renderz2_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00007 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_lx_ods_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00008 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_users_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00013 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_renderzs_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00015 to /u02/app/oracle/oradata/myDB_03/myDB_03/datafile/o1_mf_lx_stagi_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /opt/oracle/backup/myDB/inc0_myDB_s107963_p1
...
...
...
archived log file name=/opt/oracle/backup/myDB/1_58043_987102791.dbf thread=1 sequence=58043
archived log file name=/opt/oracle/backup/myDB/1_58044_987102791.dbf thread=1 sequence=58044
archived log file name=/opt/oracle/backup/myDB/1_58045_987102791.dbf thread=1 sequence=58045
archived log file name=/opt/oracle/backup/myDB/1_58046_987102791.dbf thread=1 sequence=58046
archived log file name=/opt/oracle/backup/myDB/1_58047_987102791.dbf thread=1 sequence=58047
archived log file name=/opt/oracle/backup/myDB/1_58048_987102791.dbf thread=1 sequence=58048
archived log file name=/opt/oracle/backup/myDB/1_58049_987102791.dbf thread=1 sequence=58049
archived log file name=/opt/oracle/backup/myDB/1_58050_987102791.dbf thread=1 sequence=58050
media recovery complete, elapsed time: 00:12:40
Finished recover at 20-MAY-2019 16:06:22
Finished Duplicate Db at 20-MAY-2019 16:06:39

I could check and see that my standby database has been successfully created on the ODA :

oracle@ODA03:/u01/app/oracle/local/dmk/etc/ [myDB] myDB
********* dbi services Ltd. *********
STATUS                 : MOUNTED
DB_UNIQUE_NAME         : myDB_03
OPEN_MODE              : MOUNTED
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PHYSICAL STANDBY
FLASHBACK_ON           : NO
FORCE_LOGGING          : YES
CDB Enabled            : NO
*************************************

As a personal note, I really found using Oracle RMAN more convenient for duplicating the database. Although the dbvisit scripts and tooling are really stable, I think RMAN gives you more flexibility.

Registering the database in the grid cluster

As a next step I registered the database in the grid.

oracle@ODA03:/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/ [LX] srvctl add database -db MyDB_03 -oraclehome /u01/app/oracle/product/12.2.0.1/dbhome_1 -dbtype SINGLE -instance MyDB -domain team-w.local -spfile /u02/app/oracle/oradata/MyDB_03/dbs/spfileMyDB.ora -pwfile /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/orapwMyDB -role PHYSICAL_STANDBY -startoption MOUNT -stopoption IMMEDIATE -dbname MyDB -node ODA03 -acfspath "/u02/app/oracle/oradata/MyDB_03,/u03/app/oracle"
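
The resulting resource definition can be reviewed afterwards (a hedged check; output omitted here):

oracle@ODA03:/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/ [MyDB] srvctl config database -d MyDB_03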

I stopped the database :

SQL> shutdown immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.

And started it again with the grid infrastructure :

oracle@ODA03:/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/ [MyDB] MyDB
********* dbi services Ltd. *********
STATUS          : STOPPED
*************************************

oracle@ODA03:/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/ [MyDB] srvctl status database -d MyDB_03
Instance MyDB is not running on node ODA03

oracle@ODA03:/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/ [MyDB] srvctl start database -d MyDB_03

oracle@ODA03:/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/ [MyDB] srvctl status database -d MyDB_03
Instance MyDB is running on node ODA03

dbvisit synchronization

We now have our standby database created on the ODA. We just need to synchronize it with the primary.

Run a gap report

Executing a gap report, we can see that the newly created database is running almost 4 hours behind.

oracle@srv02:/u01/app/dbvisit/standby/ [rdbms12201] ./dbvctl -d myDBSTD1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 321953)
dbvctl started on srv02: Mon May 20 16:24:35 2019
=============================================================


Dbvisit Standby log gap report for myDB thread 1 at 201905201624:
-------------------------------------------------------------
Destination database on ODA03 is at sequence: 58050.
Source database on srv02 is at log sequence: 58080.
Source database on srv02 is at archived log sequence: 58079.
Dbvisit Standby last transfer log sequence: .
Dbvisit Standby last transfer at: .

Archive log gap for thread 1:  29.
Transfer log gap for thread 1: 58079.
Standby database time lag (DAYS-HH:MI:SS): +03:39:01.


=============================================================
dbvctl ended on srv02: Mon May 20 16:24:40 2019
=============================================================

Send the archive logs from primary to the standby database

I then shipped the last archive logs from the primary database to the newly created standby.

oracle@srv02:/u01/app/dbvisit/standby/ [rdbms12201] ./dbvctl -d myDBSTD1
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 326409)
dbvctl started on srv02: Mon May 20 16:29:14 2019
=============================================================

>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
    Thread: 1 Archive log gap: 30. Transfer log gap: 58080
>>> Sending heartbeat message... skipped
>>> First time Dbvisit Standby runs, Dbvisit Standby configuration will be copied to
    ODA03...
>>> Transferring Log file(s) from myDB on srv02 to ODA03 for thread 1:

    thread 1 sequence 58051 (1_58051_987102791.dbf)
    thread 1 sequence 58052 (1_58052_987102791.dbf)
...
...
...
    thread 1 sequence 58079 (1_58079_987102791.dbf)
    thread 1 sequence 58080 (1_58080_987102791.dbf)

=============================================================
dbvctl ended on srv02: Mon May 20 16:30:50 2019
=============================================================

Apply archive logs on the standby database

Then I could finally apply the archive logs on the standby database.

oracle@ODA03:/u01/app/dbvisit/standby/ [myDB] ./dbvctl -d myDBSTD1
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 21504)
dbvctl started on ODA03: Mon May 20 16:33:42 2019
=============================================================

>>> Sending heartbeat message... skipped

>>> Applying Log file(s) from srv02 to myDB on ODA03:

    thread 1 sequence 58051 (1_58051_987102791.arc)
    thread 1 sequence 58052 (1_58052_987102791.arc)
...
...
...
    thread 1 sequence 58079 (1_58079_987102791.arc)
    thread 1 sequence 58080 (1_58080_987102791.arc)
    Last applied log(s):
    thread 1 sequence 58080

    Next SCN required for recovery 49719323442 generated at 2019-05-20:16:27:09 +02:00.
    Next required log thread 1 sequence 58081

=============================================================
dbvctl ended on ODA03: Mon May 20 16:36:52 2019
=============================================================

Run a gap report

Running a new gap report, we can see that there is no delta between the primary and the standby database.

oracle@srv02:/u01/app/dbvisit/standby/ [rdbms12201] ./dbvctl -d myDBSTD1 -i
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 335068)
dbvctl started on srv02: Mon May 20 16:37:53 2019
=============================================================


Dbvisit Standby log gap report for myDB thread 1 at 201905201637:
-------------------------------------------------------------
Destination database on ODA03 is at sequence: 58081.
Source database on srv02 is at log sequence: 58082.
Source database on srv02 is at archived log sequence: 58081.
Dbvisit Standby last transfer log sequence: 58081.
Dbvisit Standby last transfer at: 2019-05-20 16:37:36.

Archive log gap for thread 1:  0.
Transfer log gap for thread 1: 0.
Standby database time lag (DAYS-HH:MI:SS): +00:00:01.


=============================================================
dbvctl ended on srv02: Mon May 20 16:37:57 2019
=============================================================

Preparing the database for switchover

Are we done? Absolutely not. In order to be able to successfully perform a switchover, 3 main modifications are mandatory on the non-ODA server (running the non-OMF database) :

  • The future database files should be OMF
  • The online redo log should be newly created
  • The temporary file should be newly created

Otherwise you might end up with an unsuccessful switchover showing the errors below :

>>> Starting Switchover between srv02 and ODA03

Running pre-checks       ... failed
No rollback action required

>>> Database on server srv02 is still a Primary Database
>>> Database on server ODA03 is still a Standby Database


<<<>>>
PID:40386
TRACEFILE:40386_dbvctl_switchover_myDBSTD1_201905272153.trc
SERVER:srv02
ERROR_CODE:1
Remote execution error on ODA03.

====================Remote Output start: ODA03=====================
<<<>>>
PID:92292
TRACEFILE:92292_dbvctl_f_gs_get_info_standby_myDBSTD1_201905272153.trc
SERVER:ODA03
ERROR_CODE:2146
Dbvisit Standby cannot proceed:
Cannot create standby data or temp file /usr/oracle/oradata/myDB/temp01.dbf for primary
file /usr/oracle/oradata/myDB/temp01.dbf as location /usr/oracle/oradata/myDB does not
exist on the standby.
Cannot create standby data or temp file /usr/oracle/oradata/myDB/lx_bi_temp01.dbf for
primary file /usr/oracle/oradata/myDB/lx_bi_temp01.dbf as location /usr/oracle/oradata/myDB
does not exist on the standby.
Review the following standby database parameters:
        db_create_file_dest = /u02/app/oracle/oradata/myDB_03
        db_file_name_convert =
>>>> Dbvisit Standby terminated <<<>>> Dbvisit Standby terminated <<<<

Having new OMF configuration

There is no need to convert the full database to OMF. A database can run with both file naming configurations, non-OMF and OMF. We just need the database to work with an OMF configuration from now on. For this we simply set the appropriate init parameters. In my case the existing primary database was storing all data and redo files in the /opt/oracle/oradata directory.

SQL> alter system set DB_CREATE_FILE_DEST='/opt/oracle/oradata' scope=both;

System wurde geändert.

SQL> alter system set DB_CREATE_ONLINE_LOG_DEST_1='/opt/oracle/oradata' scope=both;

System wurde geändert.

Refresh the online log

We will create new OMF redo log files as described below.

The current redo log configuration :

SQL> select v$log.group#, member, v$log.status from v$logfile, v$log where v$logfile.group#=v$log.group#;

    GROUP# MEMBER                                             STATUS
---------- -------------------------------------------------- ----------
        12 /opt/oracle/oradata/myDB/redo12.log                  ACTIVE
        13 /opt/oracle/oradata/myDB/redo13.log                  CURRENT
        15 /opt/oracle/oradata/myDB/redo15.log                  INACTIVE
        16 /opt/oracle/oradata/myDB/redo16.log                  INACTIVE
         1 /opt/oracle/oradata/myDB/redo1.log                   INACTIVE
         2 /opt/oracle/oradata/myDB/redo2.log                   INACTIVE
        17 /opt/oracle/oradata/myDB/redo17.log                  INACTIVE
        18 /opt/oracle/oradata/myDB/redo18.log                  INACTIVE
        19 /opt/oracle/oradata/myDB/redo19.log                  INACTIVE
        20 /opt/oracle/oradata/myDB/redo20.log                  INACTIVE
         3 /opt/oracle/oradata/myDB/redo3.log                   INACTIVE
         4 /opt/oracle/oradata/myDB/redo4.log                   INACTIVE
         5 /opt/oracle/oradata/myDB/redo5.log                   INACTIVE
         6 /opt/oracle/oradata/myDB/redo6.log                   INACTIVE
         7 /opt/oracle/oradata/myDB/redo7.log                   INACTIVE
         8 /opt/oracle/oradata/myDB/redo8.log                   ACTIVE
         9 /opt/oracle/oradata/myDB/redo9.log                   ACTIVE
        10 /opt/oracle/oradata/myDB/redo10.log                  ACTIVE
        11 /opt/oracle/oradata/myDB/redo11.log                  ACTIVE
        14 /opt/oracle/oradata/myDB/redo14.log                  INACTIVE

For all INACTIVE redo log groups, we will be able to drop the group and create it again with the OMF naming convention :

SQL> alter database drop logfile group 1;

Datenbank wurde geändert.

SQL> alter database add logfile group 1;

Datenbank wurde geändert.
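
If there are many redo log groups, the same drop/add sequence can be scripted. Below is a hedged sketch that only touches groups currently in INACTIVE status; combined with the log switches and checkpoints described next, it may have to be run several times until all groups are recreated:

begin
  for r in (select group# from v$log where status = 'INACTIVE') loop
    execute immediate 'alter database drop logfile group ' || r.group#;
    execute immediate 'alter database add logfile group ' || r.group#;
  end loop;
end;
/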

In order to move to the next redo group and release the current one, we will run a switch log file :

SQL> alter system switch logfile;

System wurde geändert.

To move the ACTIVE redo log to INACTIVE we will run a checkpoint :

SQL> alter system checkpoint;

System wurde geändert.

And then drop and recreate the last INACTIVE redo groups :

SQL> alter database drop logfile group 10;

Datenbank wurde geändert.

SQL> alter database add logfile group 10;

Datenbank wurde geändert.

To finally have all our online logs in OMF format :

SQL> select v$log.group#, member, v$log.status from v$logfile, v$log where v$logfile.group#=v$log.group# order by group#;

    GROUP# MEMBER                                                       STATUS
---------- ------------------------------------------------------------ ----------
         1 /opt/oracle/oradata/myDB/onlinelog/o1_mf_1_ggqx5zon_.log       INACTIVE
         2 /opt/oracle/oradata/myDB/onlinelog/o1_mf_2_ggqxjky2_.log       INACTIVE
         3 /opt/oracle/oradata/myDB/onlinelog/o1_mf_3_ggqxjodl_.log       INACTIVE
         4 /opt/oracle/oradata/myDB/onlinelog/o1_mf_4_ggqxkddc_.log       INACTIVE
         5 /opt/oracle/oradata/myDB/onlinelog/o1_mf_5_ggqxkj1t_.log       INACTIVE
         6 /opt/oracle/oradata/myDB/onlinelog/o1_mf_6_ggqxkmnm_.log       CURRENT
         7 /opt/oracle/oradata/myDB/onlinelog/o1_mf_7_ggqxn373_.log       UNUSED
         8 /opt/oracle/oradata/myDB/onlinelog/o1_mf_8_ggqxn7b3_.log       UNUSED
         9 /opt/oracle/oradata/myDB/onlinelog/o1_mf_9_ggqxnbxd_.log       UNUSED
        10 /opt/oracle/oradata/myDB/onlinelog/o1_mf_10_ggqxvlbf_.log      UNUSED
        11 /opt/oracle/oradata/myDB/onlinelog/o1_mf_11_ggqxvnyg_.log      UNUSED
        12 /opt/oracle/oradata/myDB/onlinelog/o1_mf_12_ggqxvqyp_.log      UNUSED
        13 /opt/oracle/oradata/myDB/onlinelog/o1_mf_13_ggqxvv2o_.log      UNUSED
        14 /opt/oracle/oradata/myDB/onlinelog/o1_mf_14_ggqxxcq7_.log      UNUSED
        15 /opt/oracle/oradata/myDB/onlinelog/o1_mf_15_ggqxxgfg_.log      UNUSED
        16 /opt/oracle/oradata/myDB/onlinelog/o1_mf_16_ggqxxk67_.log      UNUSED
        17 /opt/oracle/oradata/myDB/onlinelog/o1_mf_17_ggqxypwg_.log      UNUSED
        18 /opt/oracle/oradata/myDB/onlinelog/o1_mf_18_ggqy1z78_.log      UNUSED
        19 /opt/oracle/oradata/myDB/onlinelog/o1_mf_19_ggqy2270_.log      UNUSED
        20 /opt/oracle/oradata/myDB/onlinelog/o1_mf_20_ggqy26bj_.log      UNUSED

20 Zeilen ausgewählt.

Refresh temporary file

The database was using 2 temp tablespaces : TEMP and MyDB_BI_TEMP.

We first need to add new temp files in OMF format for both tablespaces.

SQL> alter tablespace TEMP add tempfile size 20G;

Tablespace wurde geändert.

SQL> alter tablespace myDB_BI_TEMP add tempfile size 20G;

Tablespace wurde geändert.

Both tablespaces will now include 2 files : the previous non-OMF one and a new OMF one :

SQL> @qdbstbsinf.sql
Enter a tablespace name filter (US%): TEMP

TABLESPACE_NAME      FILE_NAME                                                    STATUS             SIZE_MB AUTOEXTENSIB MAXSIZE_MB
-------------------- ------------------------------------------------------------ --------------- ---------- ------------ ----------
TEMP                 /opt/oracle/oradata/myDB/datafile/o1_mf_temp_ggrjzm9o_.tmp     ONLINE               20480 NO                    0
TEMP                 /usr/oracle/oradata/myDB/temp01.dbf                            ONLINE               20480 NO                    0

SQL> @qdbstbsinf.sql
Enter a tablespace name filter (US%): myDB_BI_TEMP

TABLESPACE_NAME      FILE_NAME                                                    STATUS             SIZE_MB AUTOEXTENSIB MAXSIZE_MB
-------------------- ------------------------------------------------------------ --------------- ---------- ------------ ----------
myDB_BI_TEMP           /opt/oracle/oradata/myDB/datafile/o1_mf_lx_bi_te_ggrk0wxz_.tmp ONLINE               20480 NO                    0
myDB_BI_TEMP           /usr/oracle/oradata/myDB/lx_bi_temp01.dbf                      ONLINE               20480 YES                5120

Dropping the temporary file will end in an error :

SQL> alter database tempfile '/usr/oracle/oradata/myDB/temp01.dbf' drop including datafiles;
alter database tempfile '/usr/oracle/oradata/myDB/temp01.dbf' drop including datafiles
*
FEHLER in Zeile 1:
ORA-25152: TEMPFILE kann momentan nicht gelöscht werden

We need to restart the database. This will only be possible during the maintenance window scheduled for the switchover.

SQL> shutdown immediate;
Datenbank geschlossen.
Datenbank dismounted.
ORACLE-Instanz heruntergefahren.

SQL> startup
ORACLE-Instanz hochgefahren.

Total System Global Area 5,3687E+10 bytes
Fixed Size                 26330584 bytes
Variable Size            3,3152E+10 bytes
Database Buffers         2,0401E+10 bytes
Redo Buffers              107884544 bytes
Datenbank mounted.
Datenbank geöffnet.

The previous non-OMF temporary files can now be deleted :

SQL>  alter database tempfile '/usr/oracle/oradata/myDB/temp01.dbf' drop including datafiles;

Datenbank wurde geändert.

SQL> alter database tempfile '/usr/oracle/oradata/myDB/lx_bi_temp01.dbf' drop including datafiles;

Datenbank wurde geändert.

And we only have OMF temporary files now :

SQL>  @qdbstbsinf.sql
Enter a tablespace name filter (US%): TEMP

TABLESPACE_NAME      FILE_NAME                                                    STATUS             SIZE_MB AUTOEXTENSIB MAXSIZE_MB
-------------------- ------------------------------------------------------------ --------------- ---------- ------------ ----------
TEMP                 /opt/oracle/oradata/myDB/datafile/o1_mf_temp_ggrjzm9o_.tmp     ONLINE               20480 NO                    0

SQL>  @qdbstbsinf.sql
Enter a tablespace name filter (US%): myDB_BI_TEMP

TABLESPACE_NAME      FILE_NAME                                                    STATUS             SIZE_MB AUTOEXTENSIB MAXSIZE_MB
-------------------- ------------------------------------------------------------ --------------- ---------- ------------ ----------
myDB_BI_TEMP           /opt/oracle/oradata/myDB/datafile/o1_mf_lx_bi_te_ggrk0wxz_.tmp ONLINE               20480 NO                    0

Testing switchover

We are now ready to test the switchover from the current primary on srv02 to the ODA03 server, after making sure both databases are synchronized.

oracle@srv02:/u01/app/dbvisit/standby/ [MyDB] ./dbvctl -d MyDBSTD1 -o switchover
=============================================================
Dbvisit Standby Database Technology (8.0.26_0_g3fdeaadd) (pid 12196)
dbvctl started on srv02: Tue May 28 00:07:34 2019
=============================================================

>>> Starting Switchover between srv02 and ODA03

Running pre-checks       ... done
Pre processing           ... done
Processing primary       ... done
Processing standby       ... done
Converting standby       ... done
Converting primary       ... done
Completing               ... done
Synchronizing            ... done
Post processing          ... done

>>> Graceful switchover completed.
    Primary Database Server: ODA03
    Standby Database Server: srv02

>>> Dbvisit Standby can be run as per normal:
    dbvctl -d MyDBSTD1


PID:12196
TRACE:12196_dbvctl_switchover_MyDBSTD1_201905280007.trc

=============================================================
dbvctl ended on srv02: Tue May 28 00:13:31 2019
=============================================================

Conclusion

With dbvisit standby it is possible to mix non-OMF and OMF databases after completing several manual steps. The final recommendation would be to run a single, consistent configuration. This is why, after having run a switchover to the new ODA03 database and making sure the new database was stable, we recreated the old primary database on srv02 from scratch with an OMF configuration. Converting a database to OMF using the online datafile move option is not possible with Standard Edition.

Cet article Adding a dbvisit standby database on the ODA in a non-OMF environment est apparu en premier sur Blog dbi services.

Some words about SOUG Day in Lausanne


Today I am participating in the SOUG Day, which takes place in Lausanne at the “Centre Pluriculturel et social d’Ouchy”.

After a coffee and a welcome speech by Yann Neushaus, Ludovico Caldara and Flora Barriele, the event starts with 2 global sessions:

A l’heure du serverless, le futur va-t-il aller aux bases de données distribuées?

Franck Pachot makes a comparison between Oracle products (Active Data Guard, RAC, Sharding) and new distributed databases with regard to scaling up and scaling out.
Briefly, his talk refers to:
– Differences among RDBMS, NoSQL and NewSQL according to the CAP Theorem
– Definition of and needs for NoSQL and NewSQL
– Definition of services such as Google’s Cloud Spanner, TiDB, CockroachDB, YugabyteDB.

From DBA to Data Engineer – How to survive a career transition?

Kamran Agayev from Azerbaijan speaks about what Big Data is in general and about the transition from DBA to Data Engineer.
He addresses several interesting topics:
– Definition of Big Data
– The skills required for a Data Engineer or a Data Architect (more complex competences than “just” being a Database Administrator)
– Definition of products like Hadoop, Kafka, NoSQL

After the coffee break, the choice is between 2 different streams. Here are some words about the sessions I attended.

Amazing SQL

Laetitia Avrot from EnterpriseDB talks about SQL, which is much more than what we know. SQL is different from other programming languages, but it must be treated as one of them. At school we still learn SQL as it was before 1992, but in 1999 the standard changed to add relational algebra and data atomicity. PostgreSQL is very close to this standard. Laetitia shows lots of concrete examples of subqueries, common table expressions (CTE), lateral joins (not implemented in MySQL for the moment), anti joins, rollup, window functions, recursive CTE, and also some explanations about keywords such as in, values, not in, not exists.

Graph Database

After the lunch, Gianni Ceresa presents property graph databases as a combination of vertices (node, properties, ID) and edges (node, ID, label, properties). To start working with Oracle graphs, we can use PGX (Oracle Labs Parallel Graph AnalytiX). The OTN version is better for documentation. Through a demo, Gianni shows how to build a graph using Apache Zeppelin as an interpreter and Python and Jupyter to visualize it. We can then also use it for some data analysis.

5 mistakes that you’re making while presenting your data

Luiza Nowak, a non-IT girl working with IT people (she is a board member of POUG), talks about IT presentations. There are 4 important parts defining them: the content, the story, the speaker performance and visualization.
Here are the recurrent errors of IT presentations and how to handle them:
1. Lack of data selection – you need to filter your data, to consider who and where you are talking to
2. Too much at once – you need to divide your content, create some suspense and put less information into slides to let the audience listen to you instead of reading them
3. Forget about contrast – you have to use contrast on purpose because it can be useful to underline something but it can also distract your audience
4. Wrong type of charts – you have to be clear about your data and explain results
5. You don’t have any story – you need to conceptualize your data.

How can Docker help a MariaDB cluster for disaster/recovery

Saïd Mendi from dbi services explains what a MariaDB Galera Cluster is, its benefits and how Docker can help in some critical situations. For example, you can create some delayed slaves, which can be useful to emulate flashback functionality.

Conclusion

The SOUG Day comes to an end. It was a nice opportunity to meet international speakers, discuss with some customers and colleagues, learn and share. As usual, this is part of the dbi services spirit and matches our values!
And now, I have to say goodbye: it’s aperitif and dinner time with the community 😉 Hope to see you at the next SOUG event.

Cet article Some words about SOUG Day in Lausanne est apparu en premier sur Blog dbi services.

Elapsed time of Oracle Parallel Executions are not shown correctly in AWR


As the elapsed time (the time a task takes from start to end, often called wall-clock time) per execution of parallel queries is not shown correctly in AWR reports, I set up a test case to find a way to get an elapsed time closer to reality.

REMARK: To use AWR (Automatic Workload Repository) and ASH (Active Session History) as described in this Blog you need to have the Oracle Diagnostics Pack licensed.

I created a table t5 with 213K blocks:

SQL> select blocks from tabs where table_name='T5';
 
    BLOCKS
----------
    213064
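
For reference, a table of that size could have been built like this (a hedged sketch; the exact content of the table does not matter for the test):

SQL> create table t5 as select * from all_objects;
SQL> insert into t5 select * from t5;   -- repeat until the desired size is reached
SQL> commit;
SQL> exec dbms_stats.gather_table_stats(user,'T5');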

In addition I enabled Linux IO throttling with 300 IOs/sec through a cgroup on my device sdb to ensure the parallel statements take a couple of seconds to run:

[root@19c ~]# CONFIG_BLK_CGROUP=y
[root@19c ~]# CONFIG_BLK_DEV_THROTTLING=y
[root@19c ~]# echo "8:16 300" > /sys/fs/cgroup/blkio/blkio.throttle.read_iops_device
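
The "8:16" above are the major:minor numbers of the throttled block device; they can be looked up as follows (a hedged check, assuming the device is /dev/sdb):

[root@19c ~]# lsblk -d -o NAME,MAJ:MIN /dev/sdb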

After that I ran my test:

SQL> select sysdate from dual;
 
SYSDATE
-------------------
14.11.2019 14:03:51
 
SQL> exec dbms_workload_repository.create_snapshot;
 
PL/SQL procedure successfully completed.
 
SQL> set timing on
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.63
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.62
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.84
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.73
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.63
SQL> select /*+ parallel(t5 2) full(t5) */ count(*) from t5;
 
  COUNT(*)
----------
  50403840
 
Elapsed: 00:00:05.74
SQL> exec dbms_workload_repository.create_snapshot;
 
PL/SQL procedure successfully completed.

Please consider the elapsed time of about 5.7 seconds per execution.

The AWR report shows the following in the “SQL ordered by Elapsed Time” section:

        Elapsed                  Elapsed Time
        Time (s)    Executions  per Exec (s)  %Total   %CPU    %IO    SQL Id
---------------- -------------- ------------- ------ ------ ------ -------------
            67.3              6         11.22   94.5   37.4   61.3 04r3647p2g7qu
Module: SQL*Plus
select /*+ parallel(t5 2) full(t5) */ count(*) from t5

I.e. 11.22 seconds on average per execution. However, as we can see above, the average execution time is around 5.7 seconds. The reason for the wrong elapsed time per execution is that the elapsed time of the parallel slaves is added to the total elapsed time, even though the processes worked in parallel. Thanks to the (very useful) column SQL_EXEC_ID we can get the summed-up time per execution from ASH:

SQL> break on report
SQL> compute avg of secs_db_time on report
SQL> select sql_exec_id, qc_session_id, qc_session_serial#, count(*) secs_db_time from v$active_session_history
  2  where sql_id='04r3647p2g7qu' and sample_time>to_date('14.11.2019 14:03:51','dd.mm.yyyy hh24:mi:ss')
  3  group by sql_exec_id, qc_session_id, qc_session_serial#
  4  order by 1;
 
SQL_EXEC_ID QC_SESSION_ID QC_SESSION_SERIAL# SECS_DB_TIME
----------- ------------- ------------------ ------------
   16777216	      237                  16626           12
   16777217	      237                  16626           12
   16777218	      237                  16626           10
   16777219	      237                  16626           12
   16777220	      237                  16626           10
   16777221	      237                  16626           10
                                             ------------
avg                                                    11
 
6 rows selected.

I.e. the 11 secs correspond to the 11.22 secs in the AWR-report.

How do we get the real elapsed time for the parallel queries? If the queries take a couple of seconds we can get the approximate time from ASH as well by subtracting the sample_time at the beginning from the sample_time at the end of each execution (SQL_EXEC_ID):

SQL> select sql_exec_id, extract (second from (max(sample_time)-min(sample_time))) secs_elapsed 
  2  from v$active_session_history
  3  where sql_id='04r3647p2g7qu'
  4  and sample_time>to_date('14.11.2019 14:03:51','dd.mm.yyyy hh24:mi:ss')
  5  group by sql_exec_id
  6  order by 1;
 
SQL_EXEC_ID SECS_ELAPSED
----------- ------------
   16777216         5.12
   16777217        5.104
   16777218         4.16
   16777219        5.118
   16777220        4.104
   16777221        4.171
 
6 rows selected.

I.e. those numbers reflect the real execution time much better.

REMARK: If the queries take minutes (or hours) to run then you have to extract the minutes (and hours) as well of course. See also the example I have at the end of the Blog.
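
Alternatively, the interval can be converted into a single number of seconds (a hedged variant of the query above):

select sql_exec_id,
       extract (day    from (max(sample_time)-min(sample_time)))*86400 +
       extract (hour   from (max(sample_time)-min(sample_time)))*3600 +
       extract (minute from (max(sample_time)-min(sample_time)))*60 +
       extract (second from (max(sample_time)-min(sample_time))) total_secs_elapsed
from v$active_session_history
where sql_id='04r3647p2g7qu'
and sample_time>to_date('14.11.2019 14:03:51','dd.mm.yyyy hh24:mi:ss')
group by sql_exec_id
order by 1;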

The info in V$SQL is also not very helpful:

SQL> set lines 200 pages 999
SQL> select child_number, plan_hash_value, elapsed_time/1000000 elapsed_secs, 
  2  executions, px_servers_executions, last_active_time 
  3  from v$sql where sql_id='04r3647p2g7qu';
 
CHILD_NUMBER PLAN_HASH_VALUE ELAPSED_SECS EXECUTIONS PX_SERVERS_EXECUTIONS LAST_ACTIVE_TIME
------------ --------------- ------------ ---------- --------------------- -------------------
           0      2747857355    67.346941          6                    12 14.11.2019 14:05:17

I.e. for the QC we have the column executions > 0 and for the parallel slaves we have px_servers_executions > 0. You may actually get different child cursors for the Query Coordinator and the slaves.

So in theory we should be able to do something like:

SQL> select child_number, (sum(elapsed_time)/sum(executions))/1000000 elapsed_time_per_exec_secs 
  2  from v$sql where sql_id='04r3647p2g7qu' group by child_number;
 
CHILD_NUMBER ELAPSED_TIME_PER_EXEC_SECS
------------ --------------------------
           0                 11.2244902

Here we do see the number from the AWR again.

So in the future be careful when checking the elapsed time per execution of statements that ran with parallel slaves. The number will be too high in AWR or V$SQL, and further analysis is necessary to get the real elapsed time per execution.

REMARK: As the numbers in AWR do come from e.g. dba_hist_sqlstat, the following query provides “wrong” output for parallel executions as well:

SQL> column begin_interval_time format a32
SQL> column end_interval_time format a32
SQL> select begin_interval_time, end_interval_time, ELAPSED_TIME_DELTA/1000000 elapsed_time_secs, 
  2  (ELAPSED_TIME_DELTA/EXECUTIONS_DELTA)/1000000 elapsed_per_exec_secs
  3  from dba_hist_snapshot snap, dba_hist_sqlstat sql 
  4  where snap.snap_id=sql.snap_id and sql_id='04r3647p2g7qu' 
  5  and snap.BEGIN_INTERVAL_TIME > to_date('14.11.2019 14:03:51','dd.mm.yyyy hh24:mi:ss');
 
BEGIN_INTERVAL_TIME              END_INTERVAL_TIME                ELAPSED_TIME_SECS ELAPSED_PER_EXEC_SECS
-------------------------------- -------------------------------- ----------------- ---------------------
14-NOV-19 02.04.00.176 PM        14-NOV-19 02.05.25.327 PM                67.346941            11.2244902

To take another example I ran a query from Jonathan Lewis from
https://jonathanlewis.wordpress.com/category/oracle/parallel-execution:

SQL> @jonathan
 
19348 rows selected.
 
Elapsed: 00:06:42.11

I.e. 402.11 seconds

AWR shows 500.79 seconds:

        Elapsed                  Elapsed Time
        Time (s)    Executions  per Exec (s)  %Total   %CPU    %IO    SQL Id
---------------- -------------- ------------- ------ ------ ------ -------------
           500.8              1        500.79   97.9   59.6   38.6 44v4ws3nzbnsd
Module: SQL*Plus
select /*+ parallel(t1 2) parallel(t2 2)
 leading(t1 t2) use_hash(t2) swa
p_join_inputs(t2) pq_distribute(t2 hash hash) ca
rdinality(t1,50000) */ t1.owner, t1.name, t1.typ

Let’s check ASH with the query I used above (this time including minutes):

select sql_exec_id, extract (minute from (max(sample_time)-min(sample_time))) minutes_elapsed,
extract (second from (max(sample_time)-min(sample_time))) secs_elapsed 
from v$active_session_history
where sql_id='44v4ws3nzbnsd'
group by sql_exec_id
order by 1;
 
SQL_EXEC_ID MINUTES_ELAPSED SECS_ELAPSED
----------- --------------- ------------
   16777216	              6       40.717

I.e. 06:40.72 which is close to the real elapsed time of 06:42.11

Cet article Elapsed time of Oracle Parallel Executions are not shown correctly in AWR est apparu en premier sur Blog dbi services.

#DOAG2019


#DOAG2019, a really interesting week and a great first experience, also as a DOAG speaker.

What a great organization

As a newbie I really found this event amazing. Great organization and great place.

Our booth

Some great sessions

I attended some very interesting sessions.

Oracle OCI versus AWS RDS for SE2 by David Hueber
A great presentation showing the difference between AWS and Oracle Cloud with amazing demos highlighting how scripting can be powerful.

Oracle Cloud Infrastructure – Scripting mit dem OCI CLI by Robert Marz
This was an interesting session showing that the CLI is not dead and that it offers various possibilities, even in the Cloud.

Persistent Memory: Revolutionizing the Modern Database by Gavin Parish
This session gave a real good overview how persistent memory offers many possibilities and advantages.

19 features you will miss if you leave Oracle Database by Franck Pachot
A very interesting session giving the overall message that the choice of a solution should not be made based on popularity but on technical expertise. Franck also demonstrated some features that we might really miss when leaving Oracle database technology : read consistency, fast load insert, in-place update, index only access, no need to reorg, cursor sharing, partitioning, optimizer_features_enable, hints, RMAN, crash/media recovery, flashback, Data Guard and many others…

19c SE2, was tun ohne RAC? by Marco Mischke
Interesting discussion and presentation about RAC and failover cluster going deep in the technical commands.

Die Top Fehler beim Datenbank-Monitoring vermeiden by Sebatian Röhrig
A nice shared experience on the importance of monitoring and the errors to avoid.

POC – Deploying WebLogic Servers as “Services” in Containers by Pascal Brand
A high-level technical session providing a real customer case on deploying WebLogic Servers into independent containers to limit downtime.

PostgreSQL Tuning for Oracle DBAs by Hervé Schweitzer
What a show! Hervé presented PostgreSQL Tuning based on his own experience from 22 years working with databases.

Hybrid Data Guard in OCI als DR- und Migrationslösung by Thomas Rein
Thomas presented an excellent overview of a proof of concept and of how to implement Data Guard in the Oracle Cloud.

New Features and Best Practices for Oracle ACFS by Jim Williams
A session giving an overview of the new ACFS features.

My first DOAG speaker experience

Above all, I had the opportunity to be a speaker and gave a session about the transport possibilities in Data Guard and how to guarantee zero data loss in a WAN environment using RedoRoutes, cascading and Far Sync.
The demos went great. I had about 30 attendees who showed a strong interest in the subject.

Let’s have a party

With dbi services, after working time, there is always a place for fun. Our customers and many participants joined us and enjoyed a Ricola-cocktail apero prepared by our CEO. 🙂

Conclusion

This first DOAG in Nürnberg was a really great experience. I hope to see you there again next year!

Cet article #DOAG2019 est apparu en premier sur Blog dbi services.


ODA hang/crash due to software raid-check


The Oracle Database Appliance (ODA) is by default configured with software RAID for the Operating System and the Oracle Database software file system (2 internal SSD disks). 2 RAID devices are configured : md0 and md1. ODAs are configured to run raid-check every Sunday at 1am.
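
The two md devices and the scheduled check can be verified as follows (a hedged sketch; the output differs per system and is omitted here):

[root@ODA02 ~]# cat /proc/mdstat
[root@ODA02 ~]# cat /etc/cron.d/raid-check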

Analysing the problem

In case the ODA is under some load during the raid-check, it can happen that the server freezes. Only the IP layer seems to still be alive : the server replies to the ping command, but the ssh layer is not available any more.
Nothing can be done with the ODA : no ssh connection, all logs and writes on the server are stuck, and an ILOM serial connection is impossible.

The only solution is to power cycle the ODA through ILOM.

The problem could be reproduced on the customer side by running 2 RMAN database backups and manually executing the raid-check.

In /var/log/messages we can see that the server hung while doing the raid-check on md1 :

Oct 27 01:00:01 ODA02 kernel: [6245829.462343] md: data-check of RAID array md0
Oct 27 01:00:01 ODA02 kernel: [6245829.462347] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Oct 27 01:00:01 ODA02 kernel: [6245829.462349] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for data-check.
Oct 27 01:00:01 ODA02 kernel: [6245829.462364] md: using 128k window, over a total of 511936k.
Oct 27 01:00:04 ODA02 kernel: [6245832.154108] md: md0: data-check done.
Oct 27 01:01:02 ODA02 kernel: [6245890.375430] md: data-check of RAID array md1
Oct 27 01:01:02 ODA02 kernel: [6245890.375433] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Oct 27 01:01:02 ODA02 kernel: [6245890.375435] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for data-check.
Oct 27 01:01:02 ODA02 kernel: [6245890.375452] md: using 128k window, over a total of 467694592k.
Oct 27 04:48:07 ODA02 kernel: imklog 5.8.10, log source = /proc/kmsg started. ==> Restart of ODA with ILOM, server freezed on data-check of RAID array md1
Oct 27 04:48:07 ODA02 rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="5788" x-info="http://www.rsyslog.com"] start
Oct 27 04:48:07 ODA02 kernel: [ 0.000000] Initializing cgroup subsys cpuset
Oct 27 04:48:07 ODA02 kernel: [ 0.000000] Initializing cgroup subsys cpu
Oct 27 04:48:07 ODA02 kernel: [ 0.000000] Initializing cgroup subsys cpuacct
Oct 27 04:48:07 ODA02 kernel: [ 0.000000] Linux version 4.1.12-124.20.3.el6uek.x86_64 (mockbuild@ca-build84.us.oracle.com) (gcc version 4.9.2 20150212 (Red Hat 4.9.2-6.2.0.3) (GCC) ) #2 SMP Thu Oct 11 17:47:32 PDT 2018
Oct 27 04:48:07 ODA02 kernel: [ 0.000000] Command line: ro root=/dev/mapper/VolGroupSys-LogVolRoot rd_NO_LUKS rd_MD_UUID=424664a7:c29524e9:c7e10fcf:d893414e rd_LVM_LV=VolGroupSys/LogVolRoot rd_LVM_LV=VolGroupSys/LogVolSwap SYSFONT=latarcyrheb-sun16 LANG=en_US.UTF-8 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM pci=noaer crashkernel=256M@64M loglevel=3 panic=60 transparent_hugepage=never biosdevname=1 ipv6.disable=1 intel_idle.max_cstate=1 nofloppy nomce numa=off console=ttyS0,115200n8 console

Solution

Reduce raid check CPU and IO priority

By default the raid-check is configured with low priority. Setting the priority to idle limits the resources used by the check.

Change NICE=low to NICE=idle in the /etc/sysconfig/raid-check configuration file.

[root@ODA02 log]# cat /etc/sysconfig/raid-check
#!/bin/bash
#
# Configuration file for /usr/sbin/raid-check
#
# options:
# ENABLED - must be yes in order for the raid check to proceed
# CHECK - can be either check or repair depending on the type of
# operation the user desires. A check operation will scan
# the drives looking for bad sectors and automatically
# repairing only bad sectors. If it finds good sectors that
# contain bad data (meaning that the data in a sector does
# not agree with what the data from another disk indicates
# the data should be, for example the parity block + the other
# data blocks would cause us to think that this data block
# is incorrect), then it does nothing but increments the
# counter in the file /sys/block/$dev/md/mismatch_count.
# This allows the sysadmin to inspect the data in the sector
# and the data that would be produced by rebuilding the
# sector from redundant information and pick the correct
# data to keep. The repair option does the same thing, but
# when it encounters a mismatch in the data, it automatically
# updates the data to be consistent. However, since we really
# don't know whether it's the parity or the data block that's
# correct (or which data block in the case of raid1), it's
# luck of the draw whether or not the user gets the right
# data instead of the bad data. This option is the default
# option for devices not listed in either CHECK_DEVS or
# REPAIR_DEVS.
# CHECK_DEVS - a space delimited list of devs that the user specifically
# wants to run a check operation on.
# REPAIR_DEVS - a space delimited list of devs that the user
# specifically wants to run a repair on.
# SKIP_DEVS - a space delimited list of devs that should be skipped
# NICE - Change the raid check CPU and IO priority in order to make
# the system more responsive during lengthy checks. Valid
# values are high, normal, low, idle.
# MAXCONCURENT - Limit the number of devices to be checked at a time.
# By default all devices will be checked at the same time.
#
# Note: the raid-check script is run by the /etc/cron.d/raid-check cron job.
# Users may modify the frequency and timing at which raid-check is run by
# editing that cron job and their changes will be preserved across updates
# to the mdadm package.
#
# Note2: you can not use symbolic names for the raid devices, such as you
# /dev/md/root. The names used in this file must match the names seen in
# /proc/mdstat and in /sys/block.
 
ENABLED=yes
CHECK=check
NICE=idle
# To check devs /dev/md0 and /dev/md3, use "md0 md3"
CHECK_DEVS=""
REPAIR_DEVS=""
SKIP_DEVS=""
MAXCONCURRENT=

Change raid-check scheduling

Configure the raid-check to run in a low-activity period. For example, avoid running the raid-check during database backup windows.

[root@ODA02 ~]# cd /etc/cron.d
 
[root@ODA02 cron.d]# cat raid-check
# Run system wide raid-check once a week on Sunday at 1am by default
0 1 * * Sun root /usr/sbin/raid-check
 
[root@ODA02 cron.d]# vi raid-check
 
[root@ODA02 cron.d]# cat raid-check
# Run system wide raid-check once a week on Sunday at 1am by default
0 19 * * Sat root /usr/sbin/raid-check

Conclusion

These configuration changes were successfully tested in the customer environment. No crash/hang was experienced with the NICE parameter set to idle.
As per the Oracle documentation, the ODA BIOS default configuration could be changed to use hardware RAID:
ODA – configuring RAID
The question is whether patching an ODA would still be possible afterwards. If you would like to change this configuration, I strongly recommend getting Oracle support approval first.

Cet article ODA hang/crash due to software raid-check est apparu en premier sur Blog dbi services.

dbvisit dbvctl process is terminating abnormally with Error Code: 2044


When applying archive logs on the standby, the dbvctl process can terminate abnormally with Error Code: 2044. This can happen when several huge archive logs need to be applied.

Problem description

With dbvisit there are 2 ways to recover the archive logs on the standby: either using sqlplus or rman. By default the configuration is set to sqlplus. It can happen that, following a maintenance window where synchronization had to be suspended, there is a huge gap between the primary and the standby databases and several archive logs need to be applied. The problem is even more visible if the archive log files are big. In my case there were about 34 archive logs to be applied following a maintenance activity, and the size of each file was 8 GB.

Applying the archive logs on the standby failed as seen in the following output.

oracle@server_name:/u01/app/dbvisit/standby/ [DB_name] ./dbvctl -d DDC_name
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 1939)
dbvctl started on server_name: Mon Oct 28 16:19:01 2019
=============================================================
 
 
>>> Applying Log file(s) from primary_server to DB_name on standby_server:
 
 
Dbvisit Standby terminated...
Error Code: 2044
File (/u01/app/dbvisit/standby/tmp/1939.dbvisit.201910281619.sqlplus.dbv) does not
exist or is empty. Please check space and file permissions.
 
Tracefile from server: server_name (PID:1939)
1939_dbvctl_DB_name_201910281619.trc
 
oracle@server_name:/u01/app/dbvisit/standby/ [DB_name] ls -l /u01/app/dbvisit/standby/tmp/1939.dbvisit.201910281619.sqlplus.dbv
ls: cannot access /u01/app/dbvisit/standby/tmp/1939.dbvisit.201910281619.sqlplus.dbv: No such file or directory

Solution

To solve this problem, you can change the DDC configuration on the primary to use RMAN to apply the archive logs, at least until the gap is caught up. You will have to synchronize the standby configuration as well.
To use RMAN, set the APPLY_ARCHIVE_RMAN parameter to Y in the DDC configuration file.

The procedure is described below:

Backup the DDC configuration file

oracle@server_name:/u01/app/dbvisit/standby/ [DB_name] cd conf
oracle@server_name:/u01/app/dbvisit/standby/conf/ [DB_name] cp -p dbv_DCC_name.env dbv_DCC_name.env.20191028

Change the parameter

oracle@server_name:/u01/app/dbvisit/standby/conf/ [DB_name] vi dbv_DCC_name.env
oracle@server_name:/u01/app/dbvisit/standby/conf/ [DB_name] diff dbv_DCC_name.env dbv_DCC_name.env.20191028
543c543
< APPLY_ARCHIVE_RMAN = Y
---
> APPLY_ARCHIVE_RMAN = N

Send the configuration changes to the standby

oracle@server_name:/u01/app/dbvisit/standby/conf/ [DB_name] cd ..
oracle@server_name:/u01/app/dbvisit/standby/ [DB_name] ./dbvctl -d DCC_name -C
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 9318)
dbvctl started on server_name: Mon Oct 28 16:51:45 2019
=============================================================
 
>>> Dbvisit Standby configurational differences found between primary_server and standby_server.
Synchronised.
 
=============================================================
dbvctl ended on server_name: Mon Oct 28 16:51:52 2019
=============================================================

Apply the archive logs on the standby again and the operation will complete successfully:

oracle@server_name:/u01/app/dbvisit/standby/ [DB_name] ./dbvctl -d DDC_name
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 50909)
dbvctl started on server_name: Mon Oct 28 16:53:05 2019
=============================================================
 
 
>>> Applying Log file(s) from primary_server to DB_name on standby_server:
 
 
Next SCN required for recovery 3328390017 generated at 2019-10-28:11:57:42 +01:00.
Next log(s) required for recovery:
thread 1 sequence 77553
>>> Searching for new archive logs under /u03/app/oracle/dbvisit_arch/DB_name_SITE2... done
thread 1 sequence 77553 (1_77553_973158276.arc)
thread 1 sequence 77554 (1_77554_973158276.arc)
thread 1 sequence 77555 (1_77555_973158276.arc)
thread 1 sequence 77556 (1_77556_973158276.arc)
thread 1 sequence 77557 (1_77557_973158276.arc)
thread 1 sequence 77558 (1_77558_973158276.arc)
thread 1 sequence 77559 (1_77559_973158276.arc)
thread 1 sequence 77560 (1_77560_973158276.arc)
thread 1 sequence 77561 (1_77561_973158276.arc)
thread 1 sequence 77562 (1_77562_973158276.arc)
thread 1 sequence 77563 (1_77563_973158276.arc)
thread 1 sequence 77564 (1_77564_973158276.arc)
thread 1 sequence 77565 (1_77565_973158276.arc)
thread 1 sequence 77566 (1_77566_973158276.arc)
thread 1 sequence 77567 (1_77567_973158276.arc)
thread 1 sequence 77568 (1_77568_973158276.arc)
thread 1 sequence 77569 (1_77569_973158276.arc)
thread 1 sequence 77570 (1_77570_973158276.arc)
thread 1 sequence 77571 (1_77571_973158276.arc)
thread 1 sequence 77572 (1_77572_973158276.arc)
thread 1 sequence 77573 (1_77573_973158276.arc)
thread 1 sequence 77574 (1_77574_973158276.arc)
>>> Catalog archives... done
>>> Recovering database... done
Last applied log(s):
thread 1 sequence 77574
 
Next SCN required for recovery 3331579974 generated at 2019-10-28:16:50:18 +01:00.
Next required log thread sequence
 
>>> Dbvisit Archive Management Module (AMM)
 
Config: number of archives to keep = 0
Config: number of days to keep archives = 7
Config: diskspace full threshold = 80%
==========
 
Processing /u03/app/oracle/dbvisit_arch/DB_name_SITE2...
Archive log dir: /u03/app/oracle/dbvisit_arch/DB_name_SITE2
Total number of archive files : 1025
Number of archive logs deleted = 8
Current Disk percent full : 51%
 
=============================================================
dbvctl ended on server_name: Mon Oct 28 17:10:36 2019
=============================================================

Cet article dbvisit dbvctl process is terminating abnormally with Error Code: 2044 est apparu en premier sur Blog dbi services.

Focus on 19c NOW!


Introduction

For years, Oracle used the same mechanism for database versioning: a major version, represented by the first number, followed by a release number, 1 for the very first edition and a mature and reliable release 2 for production databases. Both of them had patchsets (the last number) and regular patchset updates (the date optionally displayed at the end) to remove bugs and increase security. Jumping from release 1 to release 2 required a migration as if you were coming from an older version. Recently, Oracle broke this release pace and introduced a new versioning system based on the year of release, as Microsoft and a lot of others did. Patchsets have also been replaced by release updates, which is quite logical: patchsets have effectively been complete releases for a long time. Lots of Oracle DBAs are now in the fog and, as a result, could take the wrong decision regarding the version to choose.

A recent history of Oracle Database versioning

Let’s focus on the versions currently running on most of customer’s databases:

  • 11.2.0.4: The terminal version of 11gR2 (long-term). 4 is the latest patchset of 11gR2; an 11.2.0.5 will never exist. If you install the latest PSU (Patch Set Update) your database will run precisely on 11.2.0.4.191015 (as of the 29th of November 2019)
  • 12.1.0.2: The terminal version of 12cR1 (sort of long-term). A 12.1.0.1 existed but for a very short time
  • 12.2.0.1: first version of 12cR2 (short-term). This is the latest version with old versioning model
  • 18c: actually 12.2.0.2 – first patchset of the 12.2.0.1 (short-term). You cannot apply this patchset on top of the 12.2.0.1
  • 19c: actually 12.2.0.3 – terminal version of the 12cR2 (long-term). The next version will no more be based on 12.2 database kernel

18c and 19c also have a sort of patchset, but the name has changed: we’re now talking about RUs (Release Updates). The RU is actually the second number, 18.8 for example. Each release update can also be updated with PSUs, still reflected in the last number, for example 18.8.0.0.191015.
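
If you are unsure which release update a given database is actually running on, the full version string can be queried directly (a small sketch: BANNER_FULL exists in 18c and later, on older versions use the BANNER column instead):

SELECT banner_full FROM v$version;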

Is there a risk to use older versions?

Actually, there is no technical risk in using 11.2.0.4 and 12.1.0.2: these versions represent almost all the Oracle databases running in the world, and few people have already migrated to 12.2 or newer versions. The risk is more related to the support provided by Oracle. With premier support only (linked to the support fees almost every customer pays each year), you have limited access to My Oracle Support for these older versions: looking up something in the knowledge base is OK, downloading old patches is OK, but downloading the newest patches is no longer possible. And if you open an SR, the Oracle support team could ask you to buy extended support, or at least to apply the latest PSU you cannot download. If you want to keep your databases fully supported by Oracle, you’ll have to ask and pay for extended support, as long as your version is still eligible for it. For sure, 11gR1, 10gR2 and older versions are no longer eligible for extended support.

Check this My Oracle Support note for fresh information about support timeline: Doc ID 742060.1

Should I migrate to 12.2 or 18c?

If you plan to migrate to 12.2 or 18c in 2020, think twice. The problem with these versions is that premier support is ending soon: before the end of 2020 for 12.2 and in the middle of 2021 for 18c. That’s very short, and as you probably won’t have the possibility to buy extended support (these are not terminal releases), you’ll have to migrate again to 19c or a newer version in 2020 or 2021.

Why 19c is probably the only version you should migrate to?

19c is the long-term support release, meaning that premier support will last longer (until 2023) and also that extended support will be available (until 2026). If you plan to migrate to 19c in 2020, you will benefit from all the desired patches and full support for 3 years. And there is a chance that Oracle will also offer extended support for the first year or more, as they did for 11.2 and 12.1, even if that is pure assumption.

How about the costs?

You probably own perpetual licenses, meaning that the Oracle database product is yours (if you are compliant regarding the number of users or processors defined in your contract). Your licenses are not attached to a specific version: you can use 11gR2, 12c, 18c, 19c… Each year you pay support fees: these fees give you access to My Oracle Support, for downloading patches or opening a Service Request in case of problems. But you are supposed to run a recent version of the database with this premier support. For example, as of the 29th of November 2019, the versions supported with premier support are 12.2.0.1, 18c and 19c. If you’re using older versions, like 12.1.0.2 or 11.2.0.4, you should pay additional fees for extended support. Extended support is not something you have to subscribe to indefinitely, as its purpose is only to keep your database supported until you migrate to a newer version and return to premier support.

So, keeping older versions will cost you more, and in-time migration will keep your support fees as low as possible.

For sure, migrating to 19c also comes at a cost, but we’re now quite aware of the importance of migrating software and stay up to date for a lot of reasons.

Conclusion

Motivate your software vendor or your development team to validate and support 19c. The amount of work for supporting 19c compared to 18c or 12.2 is about the same, as all these versions are actually 12.2 under the hood, and the behaviour of the database will be the same for most of us. Avoid migrating to 12.2.0.1 or 18c, as you’ll have to migrate again within a year. If you’re not yet ready, keep your 11gR2 and/or 12cR1 and take extended support for one year while preparing the migration to 19c. 20c will be a kind of very first release 1: you probably won’t migrate to this version if you mostly value stability and reliability for your databases.

Cet article Focus on 19c NOW! est apparu en premier sur Blog dbi services.

Real time replication from Oracle to PostgreSQL using Data Replicator from DBPLUS


I’ve done quite a few real time logical replication projects in the past, either using Oracle GoldenGate or EDB Replication Server. Built-in logical replication in PostgreSQL (available since PostgreSQL 10) can be used as well when both the source and the target are PostgreSQL instances. While at the DOAG conference and exhibition 2019 I got in contact with people from DBPLUS, who provide a product called “Data Replicator”. The interesting use case for me is real time replication from Oracle to PostgreSQL, as the next project for such a setup is already in the pipeline, so I thought I’d give it a try.

The “Data Replicator” software needs to be installed on a Windows machine and all traffic will go through that machine. The following picture is stolen from the official “Data Replicator” documentation and it pretty well describes the architecture when the source system is Oracle:

As “Data Replicator” uses Oracle LogMiner, no triggers need to be installed on the source system. Installing something on a validated system can be tricky, so this already is a huge benefit compared to some other solutions, e.g. SymmetricDS. If you know GoldenGate the overall architecture is not so different: what GoldenGate calls the extract is the “Reader” in Data Replicator, and the replicat becomes the “Applier”.

The installation on the Windows machine is so simple, that I’ll just be providing the screenshots without any further comments:





In the background three new services have been created and started by the installation program:

There is the replication manager which is responsible for creating replication processes. And then there are two more services for reading from source and writing data to the target. In addition the graphical user interface was installed (which could also be running on another windows machine) which looks like this once you start it up:

Before connecting with the GUI you should do the basic configuration by using the “DBPLUS Replication Manager Configuration” utility:

Once that is done you can go back to the client and connect:

The initial screen does not have much content except for the possibility to create a new replication, and I really like that: no overloaded, hard-to-understand interface, but something easy and tidy. With only one choice it is easy to go forward, so let’s create a new replication:

Same concept here: a very clean interface, only 5 steps to follow. My source system is Oracle 19.3 EE and all I have to do is provide the connection parameters, the admin user and a new user/password combination I want to use for the logical replication:

Asking “Data Replicator” to create the replication user, and all is fine:

SQL> r
  1* select username,profile from dba_users where username = 'REPLUSR'

USERNAME                       PROFILE
------------------------------ ------------------------------
REPLUSR                        DEFAULT

Of course some system privileges have been granted to the user that got created:

SQL> select privilege from dba_sys_privs where grantee = 'REPLUSR';

PRIVILEGE
----------------------------------------
SELECT ANY TRANSACTION
LOGMINING
SELECT ANY DICTIONARY
SELECT ANY TABLE

Proceeding with the target database, which is PostgreSQL 12.1 in my case:

As you can see there is no option to create a user on the target. What I did is this:

postgres=# create user replusr with login password 'xxxxxxx';
CREATE ROLE
postgres=# create database offloadoracle with owner = 'replusr';
CREATE DATABASE
postgres=# 

Once done, the connection succeeds and can be saved:

That’s all for the first step and we can proceed to step two:

I have installed the Oracle sample schemas for this little demo and as I only want to replicate these I’ve changed the selection to “REPLICATE ONLY SELECTED SCHEMAS AND TABLES”.

Once more this is all that needs to be done and the next step would be to generate the report for getting an idea of possible issues:

The reported issues totally make sense and you even get the commands to fix them, except for the complaints about the unique keys, of course (if you go for logical replication you should make sure anyway that each table contains either a primary key or at least a unique key). Once the Oracle database is in archive mode and supplemental log data has been added, the screen looks fine (I will ignore the two warnings as they are not important for this demo):
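
For reference, these two prerequisites are standard Oracle commands; a minimal sketch, run as SYSDBA and assuming a short outage is acceptable to switch to archivelog mode, would be:

-- enable archivelog mode (needs a restart into mount state)
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- add minimal supplemental logging so LogMiner gets the required information
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;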

The next step is to define the “Start Options” and when you select “automatic” you’ll have to specify the options for the transfer server:

There is a small configuration utility for that as well:

When you are happy with it, provide the details in the previous screen and complete the replication setup by providing a name in the last step:

That’s all you need to do and the replication is ready to be started:

… and then it immediately fails because we do not have a valid license. For getting a trial license you need to provide the computer ID which can be found in the information section:

Provide that to DBPLUS and request a trial license. Usually they are responding very fast:

Starting the replication once more:

You’ll see new processes on the PostgreSQL side:

postgres@centos8pg:/home/postgres/ [121] ps -ef | grep postgres
root      1248   769  0 12:58 ?        00:00:00 sshd: postgres [priv]
postgres  1252     1  0 12:58 ?        00:00:00 /usr/lib/systemd/systemd --user
postgres  1256  1252  0 12:58 ?        00:00:00 (sd-pam)
postgres  1262  1248  0 12:58 ?        00:00:00 sshd: postgres@pts/0
postgres  1263  1262  0 12:58 pts/0    00:00:00 -bash
postgres  1667     1  0 12:58 ?        00:00:00 /u01/app/postgres/product/12/db_0/bin/postgres -D /u02/pgdata/12
postgres  1669  1667  0 12:58 ?        00:00:00 postgres: checkpointer   
postgres  1670  1667  0 12:58 ?        00:00:00 postgres: background writer   
postgres  1671  1667  0 12:58 ?        00:00:00 postgres: walwriter   
postgres  1672  1667  0 12:58 ?        00:00:00 postgres: autovacuum launcher   
postgres  1673  1667  0 12:58 ?        00:00:00 postgres: stats collector   
postgres  1674  1667  0 12:58 ?        00:00:00 postgres: logical replication launcher   
postgres  2560  1667  0 14:40 ?        00:00:00 postgres: replusr offloadoracle 192.168.22.1(40790) idle
postgres  2562  1667  0 14:40 ?        00:00:00 postgres: replusr offloadoracle 192.168.22.1(40800) idle
postgres  2588  1263  0 14:40 pts/0    00:00:00 ps -ef
postgres  2589  1263  0 14:40 pts/0    00:00:00 grep --color=auto postgres

… and you’ll see LogMiner proceses on the Oracle side:

LOGMINER: summary for session# = 2147710977
LOGMINER: StartScn: 2261972 (0x00000000002283d4)
LOGMINER: EndScn: 18446744073709551615 (0xffffffffffffffff)
LOGMINER: HighConsumedScn: 0
LOGMINER: PSR flags: 0x0
LOGMINER: Session Flags: 0x4000441
LOGMINER: Session Flags2: 0x0
LOGMINER: Read buffers: 4
LOGMINER: Region Queue size: 256
LOGMINER: Redo Queue size: 4096
LOGMINER: Memory LWM: limit 10M, LWM 12M, 80%
LOGMINER: Memory Release Limit: 0M
LOGMINER: Max Decomp Region Memory: 1M
LOGMINER: Transaction Queue Size: 1024
2019-11-22T14:05:54.735533+01:00
LOGMINER: Begin mining logfile for session -2147256319 thread 1 sequence 8, /u01/app/oracle/oradata/DB1/onlinelog/o1_mf_2_gxh8fbhr_.log
2019-11-22T14:05:54.759197+01:00
LOGMINER: End   mining logfile for session -2147256319 thread 1 sequence 8, /u01/app/oracle/oradata/DB1/onlinelog/o1_mf_2_gxh8fbhr_.log

In the details tab there is more information about what is currently going on:

Although it looked quite good at the beginning there is the first issue:

Oracle data type is unknown: OE.CUST_ADDRESS_TYP
Stack trace:
System.ArgumentException: Oracle data type is unknown: OE.CUST_ADDRESS_TYP
   at DbPlus.DataTypes.Oracle.OracleDataTypes.Get(String name)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass22_0.g__MapSourceColumnType|1(TableColumn sourceColumn, String targetColumnName)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass25_0.g__GetColumnMapping|4(TableColumn sourceColumn)
   at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()
   at System.Linq.Enumerable.WhereEnumerableIterator`1.MoveNext()
   at System.Linq.Buffer`1..ctor(IEnumerable`1 source)
   at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass25_0.b__5()
   at DbPlus.Replicator.Alerts.AsyncTransientErrorHandler.Execute[T](Func`1 operation)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.GetTableCopyParameters(ReplicatedTable sourceTable)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.b__61_2(ReplicatedTable table)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ExecuteOneWithTableLock(Func`1 source, Action`1 operation, Nullable`1 timeout)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ExecuteAllWithTableLock(Func`1 source, Action`1 operation, Nullable`1 timeout)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.b__61_0()
   at DbPlus.Replicator.Alerts.AsyncTransientErrorHandler.Block(Action action)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.StartDataTransfer()
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ProcessRemoteOperations()
   at DbPlus.Tasks.Patterns.TaskTemplates.c__DisplayClass0_0.<g__Run|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at DbPlus.Tasks.Patterns.TaskGroup.Run(CancellationToken cancellationToken)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.Run()
   at DbPlus.Replicator.ComponentModel.Component.RunInternal()

As with all logical replication solutions custom types are tricky and usually not supported. What I will be doing now is to replicate the “HR” and “SH” schemas only, which do not contain any custom type:
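
As a quick sanity check (not part of the Data Replicator tooling), the data dictionary can be used to verify that a schema does not contain columns based on user-defined types before adding it to the replication:

SELECT owner, table_name, column_name, data_type
  FROM dba_tab_columns
 WHERE owner IN ('HR','SH')
   AND data_type_owner IS NOT NULL;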

Once again, starting the replication, next issue:

Oracle data type is unknown: ROWID
Stack trace:
System.ArgumentException: Oracle data type is unknown: ROWID
   at DbPlus.DataTypes.Oracle.OracleDataTypes.Get(String name)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass22_0.g__MapSourceColumnType|1(TableColumn sourceColumn, String targetColumnName)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass25_0.g__GetColumnMapping|4(TableColumn sourceColumn)
   at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()
   at System.Linq.Enumerable.WhereEnumerableIterator`1.MoveNext()
   at System.Linq.Buffer`1..ctor(IEnumerable`1 source)
   at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass25_0.b__5()
   at DbPlus.Replicator.Alerts.AsyncTransientErrorHandler.Execute[T](Func`1 operation)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.GetTableCopyParameters(ReplicatedTable sourceTable)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.b__61_2(ReplicatedTable table)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ExecuteOneWithTableLock(Func`1 source, Action`1 operation, Nullable`1 timeout)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ExecuteAllWithTableLock(Func`1 source, Action`1 operation, Nullable`1 timeout)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.b__61_0()
   at DbPlus.Replicator.Alerts.AsyncTransientErrorHandler.Block(Action action)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.StartDataTransfer()
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ProcessRemoteOperations()
   at DbPlus.Tasks.Patterns.TaskTemplates.c__DisplayClass0_0.<g__Run|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at DbPlus.Tasks.Patterns.TaskGroup.Run(CancellationToken cancellationToken)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.Run()
   at DbPlus.Replicator.ComponentModel.Component.RunInternal()

Lets check which column(s) and table(s) that is/are:

SQL> SELECT owner, table_name, column_name from dba_tab_columns where data_type = 'ROWID' and owner in ('HR','SH');

OWNER                TABLE_NAME                     COLUMN_NAME
-------------------- ------------------------------ ------------------------------
SH                   DR$SUP_TEXT_IDX$U              RID
SH                   DR$SUP_TEXT_IDX$K              TEXTKEY

Such columns can be easily excluded:



Starting over again, next issue:

At least the schemas need to exist on the target, so:

postgres=# \c offloadoracle postgres
You are now connected to database "offloadoracle" as user "postgres".
offloadoracle=# create schema sh;
CREATE SCHEMA
offloadoracle=# create schema hr;
CREATE SCHEMA
offloadoracle=# 

Next try:

On the source side:

SQL> grant flashback any table to REPLUSR;

Grant succeeded.

SQL> 

On the target side:

offloadoracle=# grant all on schema hr to replusr;
GRANT
offloadoracle=# grant all on schema sh to replusr;
GRANT

Finally most of the tables are replicating fine now:

There are a few warnings about missing unique keys and some tables can not be replicated at all:

For now I am just going to exclude the failed tables as this is fine for the scope of this post:

… and my replication is fine. A quick check on the target:

offloadoracle=# select * from sh.products limit 3;
 prod_id |           prod_name           |           prod_desc           | prod_subcategory | prod_subcategory_id | prod_subcategory_desc |        prod_category        | prod_category_id |     prod_category_desc      | prod_weight_class | prod_unit_of_measure | prod_pac>
---------+-------------------------------+-------------------------------+------------------+---------------------+-----------------------+-----------------------------+------------------+-----------------------------+-------------------+----------------------+--------->
      13 | 5MP Telephoto Digital Camera  | 5MP Telephoto Digital Camera  | Cameras          |     2044.0000000000 | Cameras               | Photo                       |   204.0000000000 | Photo                       |                 1 | U                    | P       >
      14 | 17" LCD w/built-in HDTV Tuner | 17" LCD w/built-in HDTV Tuner | Monitors         |     2035.0000000000 | Monitors              | Peripherals and Accessories |   203.0000000000 | Peripherals and Accessories |                 1 | U                    | P       >
      15 | Envoy 256MB - 40GB            | Envoy 256MB - 40Gb            | Desktop PCs      |     2021.0000000000 | Desktop PCs           | Hardware                    |   202.0000000000 | Hardware                    |                 1 | U                    | P       >
(3 rows)

lines 1-7/7 (END)

… confirms the data is there. As this post is already long enough, here are some final thoughts: the installation of “Data Replicator” is a no-brainer. I really like the simple interface, and setting up a replication between Oracle and PostgreSQL is quite easy. Of course you need to know the issues you can run into with logical replication (missing unique or primary keys, unsupported data types, …), but this is the same topic for all solutions. What I can say for sure is that I was never as fast setting up a demo replication as with “Data Replicator”. More testing to come …

Cet article Real time replication from Oracle to PostgreSQL using Data Replicator from DBPLUS est apparu en premier sur Blog dbi services.

Oracle FAST=TRUE in sqlplus? Some thoughts about rowprefetch


During my time as a consultant working on tuning tasks, I had the feeling that many people think there is an Oracle parameter “FAST=TRUE” to speed up the performance and throughput of database calls. Unfortunately such a parameter is not available, but since version 12cR2 Oracle has provided the option “-F” or “-FAST” for sqlplus, which looks like a “FAST=TRUE” setting. Here is an excerpt from the documentation:


The FAST option improves general performance. This command line option changes the values of the following default settings:
 
- ARRAYSIZE = 100
- LOBPREFETCH = 16384
- PAGESIZE = 50000
- ROWPREFETCH = 2
- STATEMENTCACHE = 20

I was interested in where the rowprefetch-setting could result in an improvement.

The documentation about rowprefetch is as follows:


SET ROWPREFETCH {1 | n}
 
Sets the number of rows that SQL*Plus will prefetch from the database at one time. The default value is 1.
 
Example
 
To set the number of prefetched rows to 200, enter
 
SET ROWPREFETCH 200
 
If you do not specify a value for n, the default is 1 row. This means that rowprefetching is off.
 
Note: The amount of data contained in the prefetched rows should not exceed the maximum value of 2147483648 bytes (2 Gigabytes). The  setting in the oraaccess.xml file can override the SET ROWPREFETCH setting in SQL*Plus. For more information about oraaccess.xml, see the Oracle Call Interface Programmer's Guide. 

A simple test where rowprefetch can make a difference is the use of hash clusters (see the Buffers column in the execution plan below). E.g.


SQL> create cluster DEMO_CLUSTER(CUST_ID number) size 4096 single table hashkeys 1000 ;
 
Cluster created.
 
SQL> create table DEMO cluster DEMO_CLUSTER(CUST_ID) as select * from CUSTOMERS;
 
Table created.
 
SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
 
PL/SQL procedure successfully completed.
 
SQL> select num_rows,blocks from user_tables where table_name='DEMO';
 
  NUM_ROWS     BLOCKS
---------- ----------
     55500	 1035
 
SQL> show rowprefetch
rowprefetch 1
SQL> select /*+ gather_plan_statistics */ rowid,cust_id from DEMO where cust_id=101;
 
ROWID		      CUST_ID
------------------ ----------
AAAR4qAAMAAAAedAAA	  101
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
-----------------------------------------
SQL_ID	9g2nyr9h2ytk4, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ rowid,cust_id from DEMO where
cust_id=101
 
Plan hash value: 3286081706
 
------------------------------------------------------------------------------------
| Id  | Operation	  | Name | Starts | E-Rows | A-Rows |	A-Time	 | Buffers |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |	 |	1 |	   |	  1 |00:00:00.01 |	 2 |
|*  1 |  TABLE ACCESS HASH| DEMO |	1 |	 1 |	  1 |00:00:00.01 |	 2 |
------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("CUST_ID"=101)
 
SQL> set rowprefetch 2
SQL> select /*+ gather_plan_statistics */ rowid,cust_id from DEMO where cust_id=101;
 
ROWID		      CUST_ID
------------------ ----------
AAAR4qAAMAAAAedAAA	  101
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
-----------------------------------------
SQL_ID	9g2nyr9h2ytk4, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ rowid,cust_id from DEMO where
cust_id=101
 
Plan hash value: 3286081706
 
------------------------------------------------------------------------------------
| Id  | Operation	  | Name | Starts | E-Rows | A-Rows |	A-Time	 | Buffers |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |	 |	1 |	   |	  1 |00:00:00.01 |	 1 |
|*  1 |  TABLE ACCESS HASH| DEMO |	1 |	 1 |	  1 |00:00:00.01 |	 1 |
------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("CUST_ID"=101)

Due to the prefetch of 2 rows Oracle detects that there actually is only 1 row and avoids the second logical IO (a second fetch).
If cust_id is unique then I would have created a unique (or primary) key constraint here instead, which would avoid a second fetch as well (because Oracle knows from the constraint that there can be at most 1 row per cust_id), but in that case I would have to maintain the created index.
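
For completeness, such a constraint on the demo table could look like the following sketch (assuming CUST_ID really is unique in the data):

-- creates a unique index under the hood, which then has to be maintained on every DML
ALTER TABLE DEMO ADD CONSTRAINT DEMO_CUST_ID_PK PRIMARY KEY (CUST_ID);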

I made a couple of tests comparing the behaviour with different settings of rowprefetch and arraysize in sqlplus (what actually is the difference between the 2 settings?). That will be the subject of a future blog.

Cet article Oracle FAST=TRUE in sqlplus? Some thoughts about rowprefetch est apparu en premier sur Blog dbi services.

ARRAYSIZE or ROWPREFETCH in sqlplus?


ARRAYSIZE or ROWPREFETCH in sqlplus?

What is the difference between the well-known sqlplus setting arraysize and the new sqlplus 12.2 feature rowprefetch? In the blog
https://blog.dbi-services.com/oracle-fasttrue-in-sqlplus-some-thoughts-about-rowprefetch/ I showed a case where rowprefetch helps to reduce the number of logical IOs.

Here are the definitions of arraysize and rowprefetch according to the documentation:

arraysize:

SET System Variable Summary: Sets the number of rows, called a batch, that SQL*Plus will fetch from the database at one time. Valid values are 1 to 5000. A large value increases the efficiency of queries and subqueries that fetch many rows, but requires more memory. Values over approximately 100 provide little added performance. ARRAYSIZE has no effect on the results of SQL*Plus operations other than increasing efficiency

About SQL*Plus Script Tuning: The effectiveness of setting ARRAYSIZE depends on how well Oracle Database fills network packets and your network latency and throughput. In recent versions of SQL*Plus and Oracle Database, ARRAYSIZE may have little effect. Overlarge sizes can easily take more SQL*Plus memory which may decrease overall performance.

REMARK: The arraysize setting also has an impact on the COPY-command with the COPYCOMMIT-setting (commits every n arraysize batches of records).

rowprefetch:

SET System Variable Summary: Sets the number of rows that SQL*Plus will prefetch from the database at one time. The default value is 1 (max is 32767).
Note: The amount of data contained in the prefetched rows should not exceed the maximum value of 2147483648 bytes (2 Gigabytes). The setting in the oraaccess.xml file can override the SET ROWPREFETCH setting in SQL*Plus.

Differences between ARRAYSIZE and ROWPREFETCH

When doing my tests, one of the important differences between ARRAYSIZE and ROWPREFETCH turned out to be that ROWPREFETCH allows Oracle to transfer query results already on return from its internal OCI execute call. I.e. in a 10046 trace the first FETCH shows ROWPREFETCH rows fetched, regardless of the ARRAYSIZE setting. E.g. with the default settings of ROWPREFETCH 1 and ARRAYSIZE 15 I can see the following numbers of rows fetched (see the r= in the trace):

FETCH #139623638001936:c=448,e=1120,p=0,cr=8,cu=0,mis=0,r=1,dep=0,og=1,plh=3403427028,tim=110487525476
...
FETCH #139623638001936:c=66,e=66,p=0,cr=1,cu=0,mis=0,r=15,dep=0,og=1,plh=3403427028,tim=110487525830
...
FETCH #139623638001936:c=15,e=15,p=0,cr=1,cu=0,mis=0,r=15,dep=0,og=1,plh=3403427028,tim=110487526093
...

I.e. 1, 15, 15,…

With ROWPREFETCH 3, ARRAYSIZE 15 the rows fetched are 3, 15, 15, …
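
As a side note, the fetch sizes above come from 10046 traces of the sqlplus session. If you want to reproduce the r= values yourself, a minimal sketch using the standard event syntax looks like this:

ALTER SESSION SET tracefile_identifier = 'prefetch_test';
ALTER SESSION SET events '10046 trace name context forever, level 12';
-- run the query with the desired ROWPREFETCH/ARRAYSIZE settings here
ALTER SESSION SET events '10046 trace name context off';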

The following table shows the number of rows fetched with different settings of ROWPREFETCH and ARRAYSIZE from a query, which returns 70 rows:


ROWPREFETCH ARRAYSIZE ROWS_FETCH1 ROWS_FETCH2 ROWS_FETCH3 ROWS_FETCH4 ROWS_FETCH5 ROWS_FETCH6 ROWS_FETCH7 ROWS_FETCH8 
 1          15         1          15          15          15          15          9
 2          15         2          15          15          15          15          8
20          15        20          30          20
16          15        16          30          24
 6           5         6          10          10          10          10          10          10          4
 9           5         9          10          10          10          10          10          10          1
10          10        10          20          20          20          0
10           5        10          15          15          15          15          0
16           3        16          18          18          18          0

We can see 3 things here:
- The first FETCH (from the internal OCI execute) always contains the number of rows defined by the ROWPREFETCH setting
- The second FETCH (and all subsequent fetches) contains a multiple of the ARRAYSIZE setting as number of rows. The following code fragment should show the logic:

2nd_Fetch_Rows = if ROWPREFETCH < ARRAYSIZE 
                 then ARRAYSIZE 
                 else (TRUNC(ROWPREFETCH/ARRAYSIZE)+1)*ARRAYSIZE


- If a fetch does not detect the end of the data in the cursor then an additional fetch is necessary. In 3 cases above a last fetch fetched 0 rows.

Memory required by the client

With the Linux pmap command I checked how much memory the client requires for different ROWPREFETCH and ARRAYSIZE settings.

Testcase:


SQL> connect cbleile/cbleile@orclpdb1
Connected.
SQL> create table big_type (a varchar2(2000), b varchar2(2000), c varchar2(2000), d varchar2(2000), e varchar2(2000));
 
Table created.
 
SQL> insert into big_type select 
  2  rpad('X',2000,'Y'),
  3  rpad('X',2000,'Y'),
  4  rpad('X',2000,'Y'),
  5  rpad('X',2000,'Y'),
  6  rpad('X',2000,'Y') from xmltable('1 to 4100');
 
4100 rows created.
 
SQL> commit;
 
Commit complete.
 
SQL> exec dbms_stats.gather_table_stats(user,'BIG_TYPE');
SQL> select avg_row_len from tabs where table_name='BIG_TYPE';
 
AVG_ROW_LEN
-----------
      10005

Before the test:


oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] ps -ef | grep sqlplus
oracle    31537  31636  3 17:49 pts/2    00:01:20 sqlplus   as sysdba
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 31537 | grep anon
0000000000be6000   1580K rw---   [ anon ]
00007f7efcda6000    516K rw---   [ anon ]
...
 
SQL> show rowprefetch arraysize
rowprefetch 1
arraysize 15
SQL> set arraysize 1000 pages 2 pause on lines 20000
SQL> select * from big_type;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 31537 | grep anon
0000000000be6000   1580K rw---   [ anon ]
00007f7efc40f000  10336K rw---   [ anon ]
...
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 31537 | grep anon
0000000000be6000   1580K rw---   [ anon ]
00007f7efcda6000    516K rw---   [ anon ]
 
SQL> set arraysize 1
SQL> set rowprefetch 1000
SQL> select * from big_type;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 31537 | grep anon
0000000000be6000  12664K rw---   [ anon ]
00007f7efcda6000    516K rw---   [ anon ]
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 31537 | grep anon
0000000000be6000  12664K rw---   [ anon ]
00007f7efcda6000    516K rw---   [ anon ]
 
SQL> set rowprefetch 1
SQL> set arraysize 1000
SQL> select * from big_type;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 31537 | grep anon
0000000000be6000  22472K rw---   [ anon ]
00007f7efcda6000    516K rw---   [ anon ]
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 31537 | grep anon
0000000000be6000  12660K rw---   [ anon ]
00007f7efcda6000    516K rw---   [ anon ]
 
SQL> set rowprefetch 501 arraysize 500 pages 502
SQL> select * from big_type;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 31537 | grep anon
0000000000be6000  17568K rw---   [ anon ]
00007f7efcda6000    516K rw---   [ anon ]

New table with just 1 Byte per column:


SQL> create table big_type_small_data as select * from big_type where 1=2;
 
Table created.
 
SQL> insert into  big_type_small_data select 'X','X','X','X','X' from big_type;
 
4100 rows created.
 
SQL> commit;
 
Commit complete.
 
SQL> exec dbms_stats.gather_table_stats(user,'BIG_TYPE_SMALL_DATA');
 
PL/SQL procedure successfully completed.
 
SQL> select avg_row_len from tabs where table_name='BIG_TYPE_SMALL_DATA';
 
AVG_ROW_LEN
-----------
	 10
 
Client-Memory before the test:
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 14193 | grep anon
00000000014d1000   1580K rw---   [ anon ]
00007f3c3b8d2000    516K rw---   [ anon ]
 
SQL> show rowprefetch
rowprefetch 1
SQL> show array
arraysize 15
SQL> set arraysize 1000 rowprefetch 1 pages 2 pause on lines 20000
SQL> select * from big_type_small_data;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 14193 | grep anon
00000000014d1000   1580K rw---   [ anon ]
00007f3c3af3b000  10336K rw---   [ anon ]
 
--> 9.6MB allocated. 
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 14193 | grep anon
00000000014d1000   1580K rw---   [ anon ]
00007f3c3b8d2000    516K rw---   [ anon ]
 
--> All memory released.
 
SQL> set arraysize 1 rowprefetch 1000
SQL> select * from big_type_small_data;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 14193 | grep anon
00000000014d1000   1852K rw---   [ anon ]
00007f3c3b8d2000    516K rw---   [ anon ]
 
--> Only 272K allocated.
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 15227 | grep anon
0000000001fba000   1852K rw---   [ anon ]
00007f38c5ef9000    516K rw---   [ anon ]
 
--> Memory not released.
 
Back to previous setting:
SQL> set arraysize 1000 rowprefetch 1
SQL> select * from big_type_small_data;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 15227 | grep anon
0000000001fba000  11644K rw---   [ anon ]
00007f38c5ef9000    516K rw---   [ anon ]
 
--> 9.6MB addtl memory allocated.
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 15227 | grep anon
0000000001fba000   1832K rw---   [ anon ]
00007f38c5ef9000    516K rw---   [ anon ]
 
--> Memory released, but not to the initial value. I.e. it seems the memory for the rowprefetch is still allocated.
 
Back to the settings with rowprefetch:
 
SQL> set arraysize 1 rowprefetch 1000
SQL> select * from big_type_small_data;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 15227 | grep anon
0000000001fba000   1832K rw---   [ anon ]
00007f38c5ef9000    516K rw---   [ anon ]
 
--> It obviously reused the previous memory.
 
SQL> set arraysize 500 rowprefetch 501 pages 503
SQL> select * from big_type_small_data;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 15227 | grep anon
0000000001fba000   6752K rw---   [ anon ]
00007f38c5ef9000    516K rw---   [ anon ]
 
--> Relogin
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 16334 | grep anon
00000000010b0000   1580K rw---   [ anon ]
00007ff8cc272000    516K rw---   [ anon ]
 
SQL> set arraysize 500 rowprefetch 501 pages 503 pause on lines 20000
SQL> select * from big_type_small_data;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 16334 | grep anon
00000000010b0000   1720K rw---   [ anon ]
00007ff8cbda4000   5436K rw---   [ anon ]
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 16334 | grep anon
00000000010b0000   1720K rw---   [ anon ]
00007ff8cc272000    516K rw---   [ anon ]
 
SQL> select * from big_type_small_data;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 16334 | grep anon
00000000010b0000   6608K rw---   [ anon ]
00007ff8cc272000    516K rw---   [ anon ]
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 16334 | grep anon
00000000010b0000   6608K rw---   [ anon ]
00007ff8cc272000    516K rw---   [ anon ]
 
--> This time the memory for the arraysize has not been released.
 
--> Relogin
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 17005 | grep anon
0000000001c40000   1580K rw---   [ anon ]
00007f90ea747000    516K rw---   [ anon ]
 
SQL> set arraysize 1 rowprefetch 32767 pages 3 pause on lines 20000
SQL> select * from big_type_small_data;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 17005 | grep anon
0000000001c40000   8312K rw---   [ anon ]
00007f90ea6a2000   1176K rw---   [ anon ]
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 17005 | grep anon
0000000001c40000   8308K rw---   [ anon ]
00007f90ea6a2000   1176K rw---   [ anon ]
 
--> almost nothing released.

So the tests showed that ARRAYSIZE allocates more memory than ROWPREFETCH (i.e. it allocates according to the data type size and not according to the real data in the column), but in contrast to ROWPREFETCH the memory is (often) released with ARRAYSIZE once the SQL has finished fetching.

Summary

So when should ROWPREFETCH and ARRAYSIZE be used? As with all fetch size settings (e.g. for the JDBC driver), both can be used to reduce the number of network roundtrips and logical IOs on the DB when lots of data has to be transferred between the server and the client. According to my tests ROWPREFETCH requires less memory on the client, but does not release the memory after the query has finished. ARRAYSIZE requires more memory, but often releases the memory when the query has finished. ROWPREFETCH = 2 is very useful in case only 1 row is returned by a query, because it returns the row with the internal OCI execute call (the first fetch in the 10046 trace) and does not require a subsequent fetch to realize that all data has already been fetched. I.e. it saves 1 network roundtrip.

A good compromise is the use of

ROWPREFETCH = 2
ARRAYSIZE = 100

That setting is actually also used when starting sqlplus with -F(AST). If lots of data has to be transferred to the client then higher ROWPREFETCH or ARRAYSIZE settings can be used to reduce the number of logical IOs and network roundtrips. But the best setting also depends on the amount of data transferred per row, and client memory requirements may vary with higher ROWPREFETCH or ARRAYSIZE settings depending on whether sqlplus runs a batch job with many queries or only a few. As usual, the best setting when transferring lots of data through sqlplus has to be found by testing the queries and scripts of your environment with different settings.

Cet article ARRAYSIZE or ROWPREFETCH in sqlplus? est apparu en premier sur Blog dbi services.

Make Oracle database simple again!


Introduction

Let’s have a look at how to make Oracle database as simple as it was before.

Oracle database is a great piece of software. Yes, it’s quite expensive, but it’s still the reference and most companies can find a configuration that fits their needs and their budget. Another complaint about Oracle is the complexity: nothing is really simple, and you’ll need skillful DBA(s) to deploy, manage, upgrade and troubleshoot your databases. But complexity is sometimes caused by wrong decisions made without the necessary knowledge, mainly because some choices add significant complexity compared to others.

The goal

Why do things need to be simple?

Obviously, simplification is:

  • easier troubleshooting
  • a setup that is more understandable by others
  • the possibility to reinstall in case of big trouble
  • fewer bugs related to the mix of multiple components
  • less work, because you probably have enough work with migrations, patching, performance, …
  • more reliability, because fewer components means fewer problems

On the hardware side

Rules for simplifying on the hardware side are:

  • Choose the same hardware for all your environments (DEV/TEST/QUAL/PROD/…): same server models, same CPU family, same revision. Make only slight variations on memory amount, number of disks and processor cores configuration if needed. Order all the servers at the same time. If a problem is related to hardware, you will be able to test the fix on a less important environment before going on production
  • Don’t use SAN: SAN is very nice, but SAN is not the performance guarantee you expect. Adopt local SSD disks: NVMe type SSDs have amazing speed, they are a true game changer in today’s database performance. Getting rid of the SAN is also getting rid of multipathing, resource sharing, complex troubleshooting, external dependencies and so on
  • Provision very large volumes for data: dealing with space pressure is not the most interesting part of your job, and it’s time consuming. You need 4TB of disks? Order 12TB and you’ll be ready for every situation, even those not planned. For sure it’s more expensive, but adding disks afterwards is not always that simple. It reminds me of a customer case where trying to add a single disk led to a nightmare (production down for several days)
  • Consider ODA appliances (Small or Medium): even if it’s not simplifying everything, at least hardware is all that you need and is dedicated to Oracle database software
  • Think consolidation: Oracle database has strong database isolation, making consolidation easy. Consolidating to limit the number of servers you need to manage also simplifies your environment
  • Avoid virtualization: without talking about the license, virtualization is for sure underlying complexity

On the system side

Some rules are also good to know regarding the system:

  • Go for Redhat or Oracle Linux: mainly because it’s the most common OS for Oracle databases. Releases and patches are always available for Linux first. UNIX and Windows are decreasing in popularity for Oracle Databases these past 10 years
  • Same OS: please keep your operating systems strictly identical from development to production. If you decide to upgrade the OS, do that first on TEST/DEV and finish with production servers without waiting months. And never update through internet as packages can be different each time you update a system
  • Limit the number of filesystems for your databases: 1 big oradata and 1 big FRA are enough on SSD. You don’t need to slice everything as we did before, and slicing always wastes space

On the software side

A lot of things should be done, or not done regarding software:

  • Install the same Oracle version (release + patch) and use the same tree everywhere. Use OFA (/u01/…) or not but be consistent
  • Limit the Oracle versions in use: inform your software vendors that your platform cannot support too old versions, especially non-terminal releases like 11.2.0.3. 19c, 12.1.0.2 and eventually 11.2.0.4 are the only recommended versions to deploy
  • Don’t use ASM: because ext4 is fine and SSDs now bring you maximum performance even on a classic filesystem. ASM will always be linked to Grid Infrastructure making dependencies between the DB Homes and that Grid stack, making patching much more complex
  • Don’t use RAC: because most of your applications cannot correctly manage high availability. RAC is much more complex compared to single instance databases. Not choosing RAC is getting rid of interconnect, cluster repository, fusion cache for SGA sharing, SAN or NAS technologies, split brains, scan listeners and complex troubleshooting. Replacing RAC with Data Guard or Dbvisit standby is the new way of doing sort of high availability without high complexity

Simplify backup configuration

For sure you will use RMAN, but how to simplify backups with RMAN?

  • Use the same backup script for all the databases (a minimal sketch follows this list)
  • Use the same backup frequency for each database, because when you need to restore, you’d better have a fresh backup
  • Only the retention should differ from one database to another
  • Backup to disk (the most convenient being a big NFS share) and without any specific library (backup your /backup filesystem later with your enterprise backup tool if needed)
  • Provision a large enough filesystem so that you never need to delete backups that are still within the retention period
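Here is such a minimal generic script. It is only a sketch: the /backup location, the 14-day default retention and the argument handling are assumptions for the example, not values taken from this post.

#!/bin/bash
# generic backup script: the same for every database, only the retention may differ
# assumptions: ORACLE_SID is passed as $1, the retention in days as optional $2,
# the Oracle environment is already set and backups go to a big /backup NFS share
export ORACLE_SID=$1
rman target / <<EOF
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF ${2:-14} DAYS;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
BACKUP AS COMPRESSED BACKUPSET DATABASE FORMAT '/backup/${ORACLE_SID}/%U'
  PLUS ARCHIVELOG FORMAT '/backup/${ORACLE_SID}/%U';
DELETE NOPROMPT OBSOLETE;
EOF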

Using the same backup strategy means being able to use the same restore procedure on all databases because you always need a quick restore of a broken database.

Always backup the controlfile and spfile on the standby databases: the resulting backupset has a very small footprint and makes it easier to restore the standby from the primary’s database backupsets, without the need for a duplication.
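A rough sketch of that standby-side backup, assuming the same /backup share is mounted on the standby server:

rman target / <<EOF
BACKUP CURRENT CONTROLFILE FORMAT '/backup/${ORACLE_SID}/ctl_%U';
BACKUP SPFILE FORMAT '/backup/${ORACLE_SID}/spfile_%U';
EOF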

Consider RMAN catalog only if you have enough databases to manage.

Simplify database management and configuration

  • Create scripts for database configuration and tablespace creation (for example: configure_SID.sql and tablespaces_SID.sql) to be able to reconfigure the same database elsewhere
  • Don’t create separate grid and oracle users if you plan to use Grid Infrastructure/ASM: as a DBA you probably manage both ASM and the database instances. Instead of losing time switching between these 2 users, configure only one oracle user for both
  • Never use graphical tools to create a database, deploy an appliance or configure something: screenshots are far less convenient than pure text commands, which are easily repeatable and scriptable
  • Use OMF: configure only db_create_file_dest and db_recovery_file_dest and Oracle will multiplex the controlfile and the redologs in these areas. OMF also names the datafiles for you: there is no need for manual naming, and who really cares about the name of the datafiles?
  • Don’t use multitenant: multitenant is fine, but we have been living with non-CDB databases for years and it works like a charm. You can still use the non-CDB architecture in 19c, so multitenant is not mandatory even on this latest version. A later migration from non-CDB to a pluggable database is quite easy, so you will be able to adopt multitenant later
  • Keep your spfile clean: don’t specify unused parameters or parameters that are only set to their default value. Remove these parameters from the spfile using ALTER SYSTEM RESET parameter SCOPE=spfile SID='*'; (a small sketch follows this list)
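Here is a minimal sketch of that OMF configuration and spfile cleanup. The paths, the FRA size and the parameter chosen for the reset are illustrative assumptions, not recommendations for your system.

-- OMF: only two destinations to configure, Oracle names and multiplexes the files
ALTER SYSTEM SET db_create_file_dest='/u01/oradata' SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest_size=512G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest='/u01/fra' SCOPE=BOTH;

-- spfile cleanup: list what is explicitly set in the spfile...
SELECT name, value FROM v$spparameter WHERE isspecified='TRUE' ORDER BY name;

-- ...and reset the entries that only repeat a default (example parameter)
ALTER SYSTEM RESET open_cursors SCOPE=SPFILE SID='*';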

Simplify patching

Patching can also be simplified:

  • Patch once a year, because you need to patch, but you don’t need to spend all your time applying each PSU every 3 months
  • Start with test/dev databases and take the time to test from the application
  • Don’t wait too long to patch the other environments: production should be patched a few weeks after the first patched environment

Simplify Oracle*Net configuration

Simplifying also concerns Oracle*Net configuration:

  • Avoid configuring multiple listeners on a system because one is enough for all your databases
  • Put your Oracle*Net configuration files in /etc because you don’t want multiple files in multiple homes (see the sketch after this list)
  • Keep your Oracle*Net configuration files clean and organized for increased readability
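A minimal sketch of this centralized setup, assuming TNS_ADMIN is exported to point to /etc; the alias, host and service names are placeholders:

# same configuration directory for every Oracle home and every user (e.g. in the profile)
export TNS_ADMIN=/etc

# /etc/sqlnet.ora -- one file for all homes
NAMES.DIRECTORY_PATH=(TNSNAMES)

# /etc/tnsnames.ora -- one clean alias per database
DB1=(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=srv1)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DB1)))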

Make your database movable

Last but not least, one of the biggest mistakes is to create a strong dependency between a database and a system. How do you make your database easily movable? By configuring a standby database and using Data Guard or Dbvisit standby. Moving your database to another server is then done within a few minutes with a single SWITCHOVER operation.
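With the Data Guard broker, for example, the move boils down to a one-liner. A sketch, assuming a broker configuration where the two databases are named proddb_site1 and proddb_site2:

DGMGRL> CONNECT sys@proddb_site1
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SWITCHOVER TO proddb_site2;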

Using standby databases makes your life easier for all of these purposes:

  • you need to move your server to another datacenter
  • a power outage happened on one site
  • you need to update hardware or software
  • you suspect a hardware problem impacting the database

Don’t only create standbys for production databases: even development databases are some kind of production for developers. If a database cannot be down for 1 day, you need a standby database.

The finest configuration is not to dedicate one server to the primaries and another to the standbys, but to dispatch the primaries between 2 identical servers on 2 sites. Each database has a preferred server for its primary, the standby being on the opposite server.

Conclusion

It’s so easy to increase complexity without any good reason. Simplifying is the power of saying NO. No to interesting features and configurations that are not absolutely necessary. All you need for your databases is reliability, safety, availability, performance. Simplicity helps you in that way.

The article Make Oracle database simple again! first appeared on the dbi services blog.


ROLLBACK TO SAVEPOINT;


By Franck Pachot

I love databases and, rather than trying to compare and rank them, I like to understand their difference. Sometimes, you make a mistake and encounter an error. Let’s take the following example:
create table DEMO (n int);
begin transaction;
insert into DEMO values (0);
select n "after insert" from DEMO;
update DEMO set n=1/n;
select n "after error" from DEMO;
commit;
select n "after commit" from DEMO;

The “begin transaction” is not valid syntax in all databases because transactions may be started implicitly, but the other statements are valid syntax in all the common SQL databases. They all raise an error in the update execution because there’s one row with N=0 and then we cannot calculate 1/N as it is a math error. But, what about the result of the last select?

If I run this with Oracle, DB2, MS SQL Server or MySQL (links go to examples in db<>fiddle), the row added by the insert is always visible to my session: after the insert, of course, after the update error, and after the commit (then visible to everybody).

The same statements run with PostgreSQL give a different result. You cannot do anything after the error, only roll back the transaction. Even if you “commit”, it will roll back.

Yes, no rows are remaining there! Same code but different result.

You can get the same behavior as the other databases by defining a savepoint before the statement and rolling back to that savepoint after the error. Here is the db<>fiddle. With PostgreSQL you have to define an explicit savepoint if you want to continue your transaction after the error; the other databases take an implicit savepoint. By the way, I said “statement”, but here is Tanel Poder showing that in Oracle this is actually not related to the statement but to the user call: Oracle State Objects and Reading System State Dumps Hacking Session Video – Tanel Poder’s blog
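In PostgreSQL terms, the explicit savepoint workaround looks like the following minimal sketch, reusing the DEMO table from the example above:

begin transaction;
insert into DEMO values (0);
savepoint before_update;               -- explicit savepoint, PostgreSQL takes none for us
update DEMO set n=1/n;                 -- fails: division by zero
rollback to savepoint before_update;   -- the transaction is usable again
select n "after error" from DEMO;      -- the inserted row is still visible
commit;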

In Oracle, you can run multiple statements in a user call with a PL/SQL block. With PostgreSQL, you can group multiple statements in one command but you can also run a PL/pgSQL block. And with both, you can catch errors in the exception block. And then, it is PostgreSQL that takes now an implicit savepoint as I explained in a previous post: PostgreSQL subtransactions, savepoints, and exception blocks

This previous post was on Medium (you can read https://www.linkedin.com/pulse/technology-advocacy-why-i-am-still-nomad-blogger-franck-pachot/ where I explain my blog “nomadism”), but as you can see I’m back on the dbi-services blog for my 500th post there.

My last post here was called “COMMIT” (https://blog.dbi-services.com/commit/), where I explained that I was quitting consulting for CERN to start something new. But even if I decided to change, I was really happy at dbi-services (as I mentioned on a LinkedIn post about great places to work). And when people like to work together it creates an implicit SAVEPOINT where you can come back if you encounter some unexpected results. Yes… this far-fetched analogy is just to mention that I’m happy to come back to dbi services, and this is where I’ll blog again.

As with many analogies, it reaches the limits of the comparison very quickly. You do not ROLLBACK a COMMIT and it is not a real rollback because this year at CERN was a good experience. I’ve met great people there, learned interesting things about matter and anti-matter, and went out of my comfort zone like co-organizing a PostgreSQL meetup and inviting external people ( https://www.linkedin.com/pulse/working-consultants-only-externalization-franck-pachot/) for visits and conferences. 

This “rollback” is actually a step further, but back in the context I like: solving customer problems in a company that cares about its employees and customers. And I’m not exactly coming back to the same “savepoint”. I was mostly focused on Oracle and I’m now covering more technologies in the database ecosystem. Of course, consulting on Oracle Database will still be a major activity. But today, many other databases are rising: NoSQL, NewSQL… Open Source is more and more relevant. And in this jungle, replication and federation technologies are on the rise. I’ll continue to share on these areas and you can follow this blog, the RSS feed, and/or my Twitter account.

The article ROLLBACK TO SAVEPOINT; first appeared on the dbi services blog.

Running SQL Server on the Oracle Free tier


By Franck Pachot

The Oracle Cloud is not only for Oracle Database. You can create a VM running Oracle Linux with full root access to it, even in the free tier: a free VM that will always be up, never expires, has full ssh connectivity to a sudoer user, and lets you tunnel any port. Of course, there are some limits that I’ve detailed in a previous post. But that is sufficient to run a database, given that you configure a low memory usage. For Oracle Database XE, Kamil Stawiarski mentions that you can just hack the memory test in the RPM shell script.
But for Microsoft SQL Server, that’s a bit more complex because this test is hardcoded in the sqlservr binary, so the solution I propose here is to intercept the sysinfo() system call.

Creating a VM in the Oracle Cloud is very easy, here are the steps in one picture:

I’m connecting to the public IP Address with ssh (the public key is uploaded when creating the VM) and I will run everything as root:

ssh opc@129.213.138.34
sudo su -
cat /etc/oracle-release

I install docker engine (version 19.3 there)
yum install -y docker-engine

I start docker

systemctl start docker
docker info


I’ll use the latest SQL Server 2019 image built on RHEL
docker pull mcr.microsoft.com/mssql/rhel/server:2019-latest
docker images

5 minutes to download a 1.5GB image. Now trying to start it.
The nice thing (when I compare to Oracle) is that we don’t have to manually accept the license terms with a click-through process. I just mention that I have read and accepted them with: ACCEPT_EULA=Y 

I try to run it:
docker run \
-e "ACCEPT_EULA=Y" \
-e 'MSSQL_PID=Express' \
-p 1433:1433 \
-e 'SA_PASSWORD=**P455w0rd**' \
--name mssql \
mcr.microsoft.com/mssql/rhel/server:2019-latest

There’s a hardcoded prerequisite verification to check that the system has at least 2000 MB of RAM. And I have less than one GB here in this free tier:


awk '/^Mem/{print $0,$2/1024" MB"}' /proc/meminfo

Fortunately, there’s always a nice geek on the internet with an awesome solution: hack the sysinfo() system call with a LD_PRELOAD’ed wrapper : A Slightly Liberated Microsoft SQL Server Docker image

Let’s get it:
git clone https://github.com/justin2004/mssql_server_tiny.git
cd mssql_server_tiny

I changed the FROM to build from the 2019 RHEL image and I preferred to use /etc/ld.so.preload rather than overriding the CMD command with LD_PRELOAD:


FROM oraclelinux:7-slim AS build0
WORKDIR /root
RUN yum update -y && yum install -y binutils gcc
ADD wrapper.c /root/
RUN gcc -shared -ldl -fPIC -o wrapper.so wrapper.c
FROM mcr.microsoft.com/mssql/rhel/server:2019-latest
COPY --from=build0 /root/wrapper.so /root/
ADD wrapper.c /root/
USER root
RUN echo "/root/wrapper.so" > /etc/ld.so.preload
USER mssql

I didn’t change the wrapper for the sysinfo function:
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/sysinfo.h>
#include <dlfcn.h>
int sysinfo(struct sysinfo *info){
// clear it
//dlerror();
void *pt=NULL;
typedef int (*real_sysinfo)(struct sysinfo *info);
// we need the real sysinfo function address
pt = dlsym(RTLD_NEXT,"sysinfo");
//printf("pt: %x\n", *(char *)pt);
// call the real sysinfo system call
int real_return_val=((real_sysinfo)pt)(info);
// but then modify its returned totalram field if necessary
// because sqlserver needs to believe it has "2000 megabytes"
// physical memory
if( info->totalram < 1000l * 1000l * 1000l * 2l ){
   info->totalram = 1000l * 1000l * 1000l * 2l ;
}
return real_return_val;
}

I build the image from there:

docker build -t mssql .


I run it:

docker run -d \
-e "ACCEPT_EULA=Y" \
-e 'MSSQL_PID=Express' \
-p 1433:1433 \
-e 'SA_PASSWORD=**P455w0rd**' \
--name mssql \
mssql

I wait until it is ready:

until docker logs mssql | grep -C10 "Recovery is complete." ; do sleep 1 ; done

All is ok and I connect and check the version:

Well… as you can see my first attempt failed. I am running with very low memory here, and then many memory allocation problems can be expected. If you look at the logs after a while, many automatic system tasks fail. But that’s sufficient for a minimal lab and you can tweak some Linux and SQL Server parameters if you need it. Comments are welcome here for feedback and ideas…

The port 1433 is exposed here locally and it can be tunneled through ssh. This is a free lab environment always accessible from everywhere to do small tests in MS SQL, running on the Oracle free tier. Here is how I connect with DBeaver from my laptop, just mentioning the public IP address, private ssh key and connection information:

The article Running SQL Server on the Oracle Free tier first appeared on the dbi services blog.

SQLNET.EXPIRE_TIME and ENABLE=BROKEN


By Franck Pachot

Those parameters, SQLNET.EXPIRE_TIME in sqlnet.ora and ENABLE=BROKEN in a connection description, have existed for a long time but their behavior may have changed. They are both related to detecting dead TCP connections with keep-alive probes: the former from the server, and the latter from the client.

The change in 12c is described in the following MOS note: Oracle Net 12c: Changes to the Functionality of Dead Connection Detection (DCD) (Doc ID 1591874.1). Basically, instead of sending a TNS packet as keep-alive, the server Dead Connection Detection now relies on the TCP keep-alive feature when available. The note mentions that it may be required to set (ENABLE=BROKEN) in the connection string “in some circumstances”, which is not very precise. This “ENABLE=BROKEN” was used in the past for transparent failover when we had no VIP (virtual IP), in order to detect a lost connection to the server.

I don’t like those statements like “on some platform”, “in some circumstances”, “with some drivers”, “it may be necessary”… so there’s only one solution: test it in your context.

My listener is on (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521))) and I will connect to it and keep my connection idle (no user call to the server). I trace the server (through the forks of the listener, found by pgrep with the name of the listener associated with this TCP address) and color it in green (GREP_COLORS='ms=01;32'):


pkill strace ; strace -fyye trace=socket,setsockopt -p $(pgrep -f "tnslsnr $(lsnrctl status "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521)))" | awk '/^Alias/{print $2}') ") 2>&1 | GREP_COLORS='ms=01;32' grep --color=auto -E '^|.*sock.*|^=*' &

I trace the client and color it in yellow (GREP_COLORS='ms=01;33'):


strace -fyye trace=socket,setsockopt sqlplus demo/demo@"(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=PDB1))(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1521)))" <<<quit 2>&1 | GREP_COLORS='ms=01;33' grep --color=auto -E '^|.*sock.*|^=*'

I’m mainly interested in the setsockopt() here because this is how TCP keep-alive is enabled.

(ENABLE=BROKEN) on the client

My first test is without enabling DCD on the server: I have nothing defined in sqlnet.ora on the server side. I connect from the client without mentioning “ENABLE=BROKEN”:


The server (green) has set SO_KEEPALIVE but not the client.

Now I run the same scenario but adding (ENABLE=BROKEN) in the description:


strace -fyye trace=socket,setsockopt sqlplus demo/demo@"(DESCRIPTION=(ENABLE=BROKEN)(CONNECT_DATA=(SERVICE_NAME=PDB1))(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1521)))" <<<quit 2>&1 | GREP_COLORS='ms=01;33' grep --color=auto -E '^|.*sock.*|^=*'

The client (yellow) has now a call to set keep-alive:


setsockopt(9<TCP:[1810151]>, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0

However, as I’ll show later, this uses the TCP defaults:


[oracle@db195 tmp]$ tail /proc/sys/net/ipv4/tcp_keepalive*
==> /proc/sys/net/ipv4/tcp_keepalive_intvl <== 
75
==> /proc/sys/net/ipv4/tcp_keepalive_probes <== 
9
==> /proc/sys/net/ipv4/tcp_keepalive_time <== 
7200

After 2 hours (7200 seconds) of idle connection, the client will send a probe 9 times, every 75 seconds. If you want to reduce that, you must change it in the client system settings. If you don’t add “(ENABLE=BROKEN)”, the dead broken connection will not be detected before the next user call, after the default TCP timeout (15 minutes).
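If you want the client to react faster, those kernel defaults can be lowered on the client system. A sketch with illustrative values (not the ones used in this test):

# probe after 10 minutes of idle time, every 60 seconds, give up after 5 unanswered probes
sudo sysctl -w net.ipv4.tcp_keepalive_time=600
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=60
sudo sysctl -w net.ipv4.tcp_keepalive_probes=5
# add the same entries to a file under /etc/sysctl.d to make them persistent across reboots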

That’s only from the client when its connection to the server is lost.

SQLNET.EXPIRE_TIME on the server

On the server side, we have seen that SO_KEEPALIVE is set, using the TCP defaults. But there it may be important to detect dead connections faster, because a session may hold some locks. You can (and should) set a lower value in sqlnet.ora with SQLNET.EXPIRE_TIME. Before 12c this parameter was used to send TNS packets as keep-alive probes, but now that SO_KEEPALIVE is set, this parameter controls the keep-alive idle time (using TCP_KEEPIDLE instead of the default /proc/sys/net/ipv4/tcp_keepalive_time).
Here is the same as my first test (without the client ENABLE=BROKEN) but after having set SQLNET.EXPIRE_TIME=42 in $ORACLE_HOME/network/admin/sqlnet.ora

Side note: I get the “do we need to restart the listener?” question very often about changes in sqlnet.ora, but the answer is clearly “no”. This file is read for each new connection to the database. The listener forks the server (aka shadow) process and this one reads sqlnet.ora: we can see it here because I “strace -f” the listener, but it is the forked process that sets up the socket.

Here is the new setsockopt() from the server process:


[pid  5507] setsockopt(16<TCP:[127.0.0.1:1521->127.0.0.1:31374]>, SOL_TCP, TCP_KEEPIDLE, [2520], 4) = 0
[pid  5507] setsockopt(16<TCP:[127.0.0.1:1521->127.0.0.1:31374]>, SOL_TCP, TCP_KEEPINTVL, [6], 4) = 0
[pid  5507] setsockopt(16<TCP:[127.0.0.1:1521->127.0.0.1:31374]>, SOL_TCP, TCP_KEEPCNT, [10], 4) = 0

This means that the server waits for 42 minutes of inactivity (the EXPIRE_TIME that I’ve set, here TCP_KEEPIDLE=2520 seconds) and then sends a probe. Without an answer (ACK), it re-probes every 6 seconds for one minute (the 6-second interval is defined by TCP_KEEPINTVL, and TCP_KEEPCNT sets the retries to 10 times). We control the idle time with SQLNET.EXPIRE_TIME and can then expect that a dead connection is closed after one additional minute of retries.

Here is a combination of SQLNET.EXPIRE_TIME (the server detecting the dead connection in 42+1 minutes) and ENABLE=BROKEN (the client detecting the dead connection after the default of 2 hours):

tcpdump and iptable drop

The above, with strace, shows the translation of Oracle settings to Linux settings. Now I’ll translate to the actual behavior by tracing the TCP packets exchanged, with tcpdump:


sqlplus demo/demo@"(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=PDB1))(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521)))"
host cat $ORACLE_HOME/network/admin/sqlnet.ora
host sudo netstat -np  | grep sqlplus
host sudo netstat -np  | grep 36316
set time on escape on
host sudo tcpdump -vvnni lo port 36316 \&

“netstat -np | grep sqlplus” finds the client connection in order to get the port and “netstat -np | grep $port” shows both connections (“sqlplus” for the client and “oracleSID” for the server).

I have set SQLNET.EXPIRE_TIME=3 here and I can see that the server sends a 0-length packet every 3 minutes (connection at 14:43:39, then idle, 1st probe: 14:46:42, 2nd probe: 14:49:42…). Each time the client replies with an ACK, so the server knows that the connection is still alive:

Now I simulate a client that doesn’t answer, by blocking the input packets:


host sudo iptables -I INPUT 1 -p tcp --dport 36316 -j DROP
host sudo netstat -np  | grep 36316

Here I see the next probe 3 minutes after the last one (14:55:42) and then, as there is no reply, the 10 probes every 6 seconds:

At the end, I checked the TCP connections and the server one has disappeared. But the client side remains. That is exactly what DCD does: when a session is idle for a while it tests if the connection is dead and closes it to release all resources.
If I continue from there and try to run a query, the server cannot be reached and I’ll hang for the default TCP timeout of 15 minutes. If I try to cancel, I get “ORA-12152: TNS:unable to send break message” as it tries to send an out-of-band break. SQLNET.EXPIRE_TIME is only for the server side. The client detects nothing until it tries to send something.

For the next test, I remove my iptables rule to stop blocking the packets:


host sudo iptables -D INPUT 1

And I’m now running the same but with (ENABLE=BROKEN)


connect demo/demo@(DESCRIPTION=(ENABLE=BROKEN)(CONNECT_DATA=(SERVICE_NAME=PDB1))(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521)))
host sudo netstat -np  | grep sqlplus
host sudo netstat -np  | grep 37064
host sudo tcpdump -vvnni lo port 37064 \&
host sudo iptables -I INPUT 1 -p tcp --dport 37064 -j DROP
host sudo netstat -np  | grep 37064
host sudo iptables -D INPUT 1
host sudo netstat -np  | grep 37064

Here is the same as before: DCD after 3 minutes idle, and 10 probes that fail because I’ve blocked again with iptables:

As with the previous test, the server connection (the oracleSID) has been closed and only the client one remains. As I know that SO_KEEPALIVE has been enabled thanks to (ENABLE=BROKEN) the client will detect the closed connection:

17:52:48 is 2 hours after the last activity, and the client then probes 9 times, every 75 seconds, according to the system defaults:


[oracle@db195 tmp]$ tail /proc/sys/net/ipv4/tcp_keepalive*
==> /proc/sys/net/ipv4/tcp_keepalive_intvl <==    TCP_KEEPINTVL
75
==> /proc/sys/net/ipv4/tcp_keepalive_probes <==   TCP_KEEPCNT
9
==> /proc/sys/net/ipv4/tcp_keepalive_time <==     TCP_KEEPIDLE
7200

It takes a long time (but you can change those defaults on the client), but finally the client connection is cleaned up (sqlplus is no longer there in the last netstat).
Now, an attempt to run a user call fails immediately with the famous ORA-03113 because the client knows that the connection is closed:

Just a little additional test to show ORA-03135. We have seen that if the server has detected and closed the dead connection, but the client has not yet detected it, a user call waits for the 15-minute TCP timeout. But that’s only because the iptables rule was still there to drop the packets. If I remove the rule before attempting a user call, the server can be reached (so no wait and no timeout) and it detects immediately that there’s no endpoint anymore. This raises “connection lost contact”.

In summary:

  • On the server, the keep-alive is always enabled and SQLNET.EXPIRE_TIME is used to reduce the tcp_keepalive_time defined by the system, because it is probably too long.
  • On the client, the keep-alive is enabled only when (ENABLE=BROKEN) is in the connection description, and uses the tcp_keepalive_time from the system. Without it, the broken connection will be detected only when attempting a user call.

Setting SQLNET.EXPIRE_TIME to a few minutes (like 10) is a good idea because you don’t want to keep resources and locks on the server when a simple ping can detect that the connection is lost and we have to roll back. If we don’t, the dead connections may disappear only after 2 hours and 12 minutes (the idle time + the probes). On the client side, it is also a good idea to add (ENABLE=BROKEN) so that idle sessions that have lost contact have a chance to know it before trying to use the connection. This is a performance gain if it helps to avoid sending a “select 1 from dual” each time you grab a connection from the pool.
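Putting the two recommendations together, a minimal configuration could look like this (the alias, host and service names are placeholders, not the ones used in the tests above):

# server side: $ORACLE_HOME/network/admin/sqlnet.ora
SQLNET.EXPIRE_TIME=10

# client side: tnsnames.ora entry with keep-alive enabled
PDB1=(DESCRIPTION=(ENABLE=BROKEN)(ADDRESS=(PROTOCOL=tcp)(HOST=db-server)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=PDB1)))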

And, most important: the documentation is imprecise, which means that the behavior can change without notification. This is a test on a specific OS, a specific driver, a specific version… Do not just take the results from this post: now you know how to check them in your own environment.

The article SQLNET.EXPIRE_TIME and ENABLE=BROKEN first appeared on the dbi services blog.

Oracle 20c SQL Macros: a scalar example to join agility and performance


By Franck Pachot

Let’s say you have a PEOPLE table with FIRST_NAME and LAST_NAME and you want, in many places of your application, to display the full name. Usually my name will be displayed as ‘Franck Pachot’ and I can simply add a virtual column to my table, or a view, as: initcap(FIRST_NAME)||’ ‘||initcap(LAST_NAME). Those are simple SQL functions. No need for procedural code there, right? But, one day, the business will come up with new requirements. In some countries (I’ve heard about Hungary but there are others), my name may be displayed with the last name first, like: ‘Pachot Franck’. And in some contexts, it may have a comma, like: ‘Pachot, Franck’.

There comes a religious debate between Dev and Ops:

  • Developer: We need a function for that, so that the code can evolve without changing all SQL queries or views
  • DBA: That’s the worst you can do. Calling a function for each row is a context switch between SQL and PL/SQL engine. Not scalable.
  • Developer: Ok, let’s put all that business logic in the application so that we don’t have to argue with the DBA…
  • DBA: Oh, that’s even worse. The database cannot perform correctly with all those row-by-row calls!
  • Developer: No worry, we will put the database on Kubernetes, shard and distribute it, and scale as far as we need for acceptable throughput

And this is where we end up in an unsustainable situation. Because we didn’t find a tradeoff between code maintainability and application performance, we get the worst of both: crazy resource usage for mediocre performance.

However, in Oracle 20c, we have a solution for that. Did you ever code C programs where you replaced functions with pre-processor macros, so that your code stays readable and maintainable, as with modules and functions, but is compiled as if those functions had been merged into the calling code at compile time? What was common in those 3rd-generation languages is now possible in a 4th-generation declarative language: Oracle SQL.

Let’s take an example. I’m building a PEOPLE table using the Linux /usr/share/dict word list:


create or replace directory "/usr/share/dict" as '/usr/share/dict';
create table people as
with w as (
select *
 from external((word varchar2(60))
 type oracle_loader default directory "/usr/share/dict" access parameters (nologfile) location('linux.words'))
) select upper(w1.word) first_name , upper(w2.word) last_name
from w w1,w w2 where w1.word like 'ora%' and w2.word like 'aut%'
order by ora_hash(w1.word||w2.word)
/

I have a table of about 110,000 rows here with first and last names.
Here is a sample:


SQL> select count(*) from people;

  COUNT(*)
----------
    110320

SQL> select * from people where rownum<=10;

FIRST_NAME                     LAST_NAME
------------------------------ ------------------------------
ORACULUM                       AUTOMAN
ORANGITE                       AUTOCALL
ORANGUTANG                     AUTHIGENOUS
ORAL                           AUTOPHOBIA
ORANGUTANG                     AUTOGENEAL
ORATORIAN                      AUTOCORRELATION
ORANGS                         AUTOGRAPHICAL
ORATORIES                      AUTOCALL
ORACULOUSLY                    AUTOPHOBY
ORATRICES                      AUTOCRATICAL

PL/SQL function

Here is my function that displays the full name, with the Hungarian specificity as an example but, as it is a function, it can evolve further:


create or replace function f_full_name(p_first_name varchar2,p_last_name varchar2)
return varchar2
as
 territory varchar2(64);
begin
 select value into territory from nls_session_parameters
 where parameter='NLS_TERRITORY';
 case (territory)
 when 'HUNGARY'then return initcap(p_last_name)||' '||initcap(p_first_name);
 else               return initcap(p_first_name)||' '||initcap(p_last_name);
 end case;
end;
/
show errors

The functional result depends on my session settings:


SQL> select f_full_name(p_first_name=>first_name,p_last_name=>last_name) from people
     where rownum<=10;

F_FULL_NAME(P_FIRST_NAME=>FIRST_NAME,P_LAST_NAME=>LAST_NAME)
------------------------------------------------------------------------------------------------
Oraculum Automan
Orangite Autocall
Orangutang Authigenous
Oral Autophobia
Orangutang Autogeneal
Oratorian Autocorrelation
Orangs Autographical
Oratories Autocall
Oraculously Autophoby
Oratrices Autocratical

10 rows selected.

But let’s run it on many rows, like using this function in the where clause, with autotrace:


SQL> set timing on autotrace on
select f_full_name(first_name,last_name) from people
where f_full_name(p_first_name=>first_name,p_last_name=>last_name) like 'Oracle Autonomous';

F_FULL_NAME(FIRST_NAME,LAST_NAME)
------------------------------------------------------------------------------------------------------
Oracle Autonomous

Elapsed: 00:00:03.47

Execution Plan
----------------------------------------------------------
Plan hash value: 2528372185

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1103 | 25369 |   129   (8)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| PEOPLE |  1103 | 25369 |   129   (8)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("F_FULL_NAME"("P_FIRST_NAME"=>"FIRST_NAME","P_LAST_NAME"=>
              "LAST_NAME")='Oracle Autonomous')


Statistics
----------------------------------------------------------
     110361  recursive calls
          0  db block gets
        426  consistent gets
          0  physical reads
          0  redo size
        608  bytes sent via SQL*Net to client
        506  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

More than 110,000 recursive calls, one per row. That is bad and not scalable. The time spent in context switches from the SQL to the PL/SQL engine is a waste of CPU cycles.

Note that this is difficult to improve because we cannot create an index for that predicate:


SQL> create index people_full_name on people(f_full_name(first_name,last_name));
create index people_full_name on people(f_full_name(first_name,last_name))
                                        *
ERROR at line 1:
ORA-30553: The function is not deterministic

Yes, this function cannot be deterministic because it depends on many other parameters (like the territory in this example, in order to check if I am in Hungary)

Update 25-FEB-2020

If you read this post yesterday, please note that I’ve updated it. I initially didn’t add the territory parameter, thinking that changing NLS_TERRITORY would re-parse the query, but it seems that cursors are shared across different territories. Anyway, the documentation says:
Although the DETERMINISTIC property cannot be specified, a SQL macro is always implicitly deterministic.
So, better not to rely on child cursor sharing. Thanks to Stew Ashton for the heads-up on that:

SQL Macro

The solution in 20c, currently available in the Oracle Cloud, is very easy. I create a new function, M_FULL_NAME, where the only differences from F_FULL_NAME are:

  1. I add the SQL_MACRO(SCALAR) keyword and change the return type to varchar2 (if not already)
  2. I enclose the return expression value in quotes (using q'[ … ]' for better readability) to return it as a varchar2 containing the expression string, where variable names are just placeholders (no bind variables here!)
  3. I add all external values as parameters because the SQL_MACRO function must be deterministic

create or replace function m_full_name(p_first_name varchar2,p_last_name varchar2,territory varchar2)
return varchar2 SQL_MACRO(SCALAR)
as
begin
 case (territory)
 when 'HUNGARY'then return q'[initcap(p_last_name)||' '||initcap(p_first_name)]';
 else               return q'[initcap(p_first_name)||' '||initcap(p_last_name)]';
 end case;
end;
/

Here is the difference if I call both of them:


SQL> set serveroutput on
SQL> exec dbms_output.put_line(f_full_name('AAA','BBB'));
Aaa Bbb

PL/SQL procedure successfully completed.

SQL> exec dbms_output.put_line(m_full_name('AAA','BBB','SWITZERLAND'));
initcap(p_first_name)||' '||initcap(p_last_name)

PL/SQL procedure successfully completed.

SQL> select m_full_name('AAA','BBB','SWITZERLAND') from dual;

M_FULL_
-------
Aaa Bbb

One returns the function value, the other returns the expression that can be used to return the value. It is a SQL Macro that can be applied to a SQL text to replace part of it: a scalar expression in this case, as I specified SQL_MACRO(SCALAR).

The result is the same as with the previous function:


SQL> select m_full_name(p_first_name=>first_name,p_last_name=>last_name,territory=>'SWITZERLAND') from people
     where rownum<=10;

M_FULL_NAME(P_FIRST_NAME=>FIRST_NAME,P_LAST_NAME=>LAST_NAME,TERRITORY=>'SWITZERLAND')
------------------------------------------------------------------------------------------------------------------------
Oraculum Automan
Orangite Autocall
Orangutang Authigenous
Oral Autophobia
Orangutang Autogeneal
Oratorian Autocorrelation
Orangs Autographical
Oratories Autocall
Oraculously Autophoby
Oratrices Autocratical


10 rows selected.

And now let’s look at the query using this as a predicate:


SQL> set timing on autotrace on
SQL> select m_full_name(first_name,last_name,territory=>'SWITZERLAND') from people
     where m_full_name(p_first_name=>first_name,p_last_name=>last_name,territory=>'SWITZERLAND') like 'Oracle Autonomous';

M_FULL_NAME(FIRST_NAME,LAST_NAME,TERRITORY=>'SWITZERLAND')
------------------------------------------------------------------------------------------------------------------------
Oracle Autonomous

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 1341595178

------------------------------------------------------------------------------------------------
| Id  | Operation        | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                             |     1 |    46 |     4   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| PEOPLE_FULL_NAME_FIRST_LAST |     1 |       |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access(INITCAP("FIRST_NAME")||' '||INITCAP("LAST_NAME")='Oracle Autonomous')


Statistics
----------------------------------------------------------
         42  recursive calls
          0  db block gets
         92  consistent gets
          7  physical reads
          0  redo size
        633  bytes sent via SQL*Net to client
        565  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

I don’t have all those row-by-row recursive calls. And the difference is easy to see in the predicate section of the execution plan: there’s no call to my PL/SQL function there. It was called only at parse time to transform the SQL statement, which now uses only the string returned by the macro, with parameter substitution.

That was my goal: stay in the SQL engine for the execution, calling only standard SQL functions. But while we are in the execution plan, can we do something to avoid the full table scan? My function is not deterministic, but it has a small number of variations, two in my case. Then I can create an index for each one:


 
SQL>
SQL> create index people_full_name_first_last on people(initcap(first_name)||' '||initcap(last_name));
Index created.

SQL> create index people_full_name_first_first on people(initcap(last_name)||' '||initcap(first_name));
Index created.

And run my query again:


SQL> select m_full_name(first_name,last_name,'SWITZERLAND') from people
     where m_full_name(p_first_name=>first_name,p_last_name=>last_name,territory=>'SWITZERLAND') like 'Autonomous Oracle';

no rows selected

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 1341595178

------------------------------------------------------------------------------------------------
| Id  | Operation        | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                             |  1103 | 25369 |   118   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| PEOPLE_FULL_NAME_FIRST_LAST |   441 |       |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access(INITCAP("FIRST_NAME")||' '||INITCAP("LAST_NAME")='Autonomous Oracle')

Performance and agility

Now we are ready to bring the business logic back into the database, so that it is co-located with the data and runs within the same process. Thanks to SQL Macros, we can even run it within the same engine, SQL, calling the PL/SQL function only at compile time to resolve the macro. And we keep full code maintainability, as the logic is defined in a function that can evolve and be used in many places without duplicating the code.

The article Oracle 20c SQL Macros: a scalar example to join agility and performance first appeared on the dbi services blog.

Documentum – LSS registerOracleVersionView script with wrong content


As discussed in a previous blog, working with LSS might prove a little bit challenging from time to time. In this blog, I wanted to share an error I saw while installing LSS 16.6.1 on an Oracle database. Initially, I developed my silent installation for LSS (which encapsulates the LSS silent scripts provided by OpenText) using a PostgreSQL database, because it’s usually easier to set up an environment on Kubernetes with PG because of the licenses.

 

So, the silent installation scripts were created several months ago and had apparently been working since then. Recently, I had to manually execute my LSS silent install script on an environment using an Oracle database. The script completed properly and my automatic log file checking didn’t show any sign of errors, so for me it was fully installed. However, I still reviewed the logs printed on the screen to be sure, and I saw a new “error” I wasn’t familiar with. I’m not sure you can call it an error, because it’s just one line drowned in the flood of logs, printed without any “ERROR” or “_E_” messages, but it is clearly an error from a Linux point of view:

...
./registerOracleVersionView.sh: line 1: oracle: command not found
...

 

This message never appeared in the generated log file of the LSS installation, it is only displayed on the screen, which makes it… quite difficult to catch in automation. So, anyway, what’s the issue this time? Well, looking at the message, it’s clear that the shell script has wrong content, because it is trying to execute a command “oracle” which doesn’t exist. Where is this file? What’s its content?

[dmadmin@cs-0 ~]$ workspace="/tmp/lss/"
[dmadmin@cs-0 ~]$
[dmadmin@cs-0 ~]$ cd ${workspace}/*/
[dmadmin@cs-0 LSSuite]$
[dmadmin@cs-0 LSSuite]$ ls -l *.sh
-rwxr-x--- 1 dmadmin dmadmin 13479 Oct  4 09:15 LSConfigImport.sh
-rwxr-x--- 1 dmadmin dmadmin  4231 Oct  4 09:15 iHubConfigImport.sh
-rwxr-x--- 1 dmadmin dmadmin  8384 Oct  4 09:15 install.sh
-rwxr-x--- 1 dmadmin dmadmin  3096 Oct  4 09:15 myInsightPostInstall.sh
[dmadmin@cs-0 LSSuite]$
[dmadmin@cs-0 LSSuite]$ find . -name registerOracleVersionView.sh
./scripts/registerOracleVersionView.sh
[dmadmin@cs-0 LSSuite]$
[dmadmin@cs-0 LSSuite]$ cd ./scripts/
[dmadmin@cs-0 scripts]$
[dmadmin@cs-0 scripts]$ cat registerOracleVersionView.sh
if  "$4" == "oracle"
then
        idql "$1" -U"$2" -P"$3" -R"./scripts/$4/oracleVersion.dql"
fi
[dmadmin@cs-0 scripts]$

 

If you are familiar with bash/shell scripting, you probably already saw what’s wrong with the script: this simply isn’t the correct way to write IF statements. I won’t go into the details of the correct formatting (one bracket, two brackets, with the test command, and so on) because there is already plenty of documentation about that online, but that’s definitely not a correct way to write IF statements. So, to correct this script, I opened the OpenText SR#4450083 and provided them the commands to fix it in a future patch/release. I haven’t received a confirmation yet but it should be in the next LSS release. In the meantime, I put the workaround in my silent install script (if the correct format is already there it won’t change anything, but if it’s not, then it will correct the file):

[dmadmin@cs-0 scripts]$ cat registerOracleVersionView.sh
if  "$4" == "oracle"
then
        idql "$1" -U"$2" -P"$3" -R"./scripts/$4/oracleVersion.dql"
fi
[dmadmin@cs-0 scripts]$
[dmadmin@cs-0 scripts]$ ./registerOracleVersionView.sh Repo01 dmadmin xxx oracle
./registerOracleVersionView.sh: line 1: oracle: command not found
[dmadmin@cs-0 scripts]$
[dmadmin@cs-0 scripts]$ sed -i -e 's,^if[[:space:]]*",if \[\[ ",' -e 's,^if \[\[ .*"$,& \]\],' registerOracleVersionView.sh
[dmadmin@cs-0 scripts]$
[dmadmin@cs-0 scripts]$ cat registerOracleVersionView.sh
if [[ "$4" == "oracle" ]]
then
        idql "$1" -U"$2" -P"$3" -R"./scripts/$4/oracleVersion.dql"
fi
[dmadmin@cs-0 scripts]$
[dmadmin@cs-0 scripts]$ ./registerOracleVersionView.sh Repo01 dmadmin xxx oracle

        OpenText Documentum idql - Interactive document query interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0170.0080

Connecting to Server using docbase Repo01
[DM_SESSION_I_SESSION_START]info:  "Session 010f1234800113af started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0170.0234  Linux64.Oracle
1> 2> result
------------
T
(1 row affected)
1> 2> new_object_ID
----------------
190f1234800edc59
(1 row affected)
1>
[dmadmin@cs-0 scripts]$
[dmadmin@cs-0 scripts]$ cat ./scripts/oracle/oracleVersion.dql
execute exec_sql with query = 'create view oracle_version as select * from v$version'
go
REGISTER TABLE dm_dbo.oracle_version (banner String(80))
go[dmadmin@cs-0 scripts]$

 

As you can see above, the shell executes a DQL script “oracleVersion.dql”. This simply creates a new view “oracle_version”. I have no clue where this might be used in LSS but what I can tell you is that this script was already wrong in LSS 16.6.0 (released in Jul 2019 I believe) and nobody complained about it so far apparently, so maybe you can wait for the official fix from OpenText or you can fix it yourself like I did, up to you!

 

The article Documentum – LSS registerOracleVersionView script with wrong content first appeared on the dbi services blog.
