
RMAN catalog upgrade, why, when and how


One of our customers had initially created an RMAN catalog on an Oracle 12.1.0.2.0 database and now intended to register new Oracle 12.2.0.1.0 databases.

Registering the databases failed with the following errors:

PL/SQL package RCAT.DBMS_RCVCAT version 12.01.00.02 in RCVCAT database is too old
RMAN-06429: RCVCAT database is not compatible with this version of RMAN

The problem comes from the catalog version, which must be equal to or higher than the version of the database being registered. I then wondered whether this means the version of the catalog database itself or the version of the catalog schema.

Fortunately, in most cases a catalog database upgrade is not needed and a catalog (schema) upgrade is enough.

The MOS note RMAN Compatibility Matrix (Doc ID 73431.1) provides the compatibility matrix below.

Target/Auxiliary Database | RMAN Executable                   | Catalog Database     | Catalog Schema
8.1.7.4                   | 8.1.7.4                           | >=8.1.7 < 12C        | 8.1.7.4
8.1.7.4                   | 8.1.7.4                           | >=8.1.7 < 12C        | >=9.0.1.4
9.0.1                     | 9.0.1                             | >=8.1.7 < 12C        | >= RMAN executable
9.2.0                     | >=9.0.1.3 and <= target database  | >=8.1.7 < 12C        | >= RMAN executable
10.1.0.5                  | >=10.1.0.5 and <= target database | >=10.1.0.5           | >= RMAN executable
10.2.0                    | >=10.1.0.5 and <= target database | >=10.1.0.5           | >= RMAN executable
11.1.0                    | >=10.1.0.5 and <= target database | >=10.2.0.3 (note 1)  | >= RMAN executable
11.2.0                    | >=10.1.0.5 and <= target database | >=10.2.0.3 (note 1)  | >= RMAN executable
>=12.1.0.x                | = target database executable      | >=10.2.0.3           | >= RMAN executable
18.1                      | = target database executable      | >=10.2.0.3           | >= RMAN executable

So, in our case, we connect from the database to be registered (release 12.2.0.1.0) in order to:

  1. Check the release of the catalog
    dbiservices@<server_name>:C:\Windows\system32\ [DB_NAME] sqlplus <catalog_user>/<pwd>@<catalog_TNS>
    
    SQL*Plus: Release 12.2.0.1.0 Production on Thu Aug 16 14:50:08 2018
    
    Connected to:
    
    Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production
    
    SQL> select * from rcver;
    
    VERSION
    
    ------------
    
    12.01.00.02
  2. Upgrade the catalog version
    dbiservices@<server_name>:C:\Windows\system32\ [DB_NAME] rman target / catalog <catalog_user>/<pwd>@<catalog_TNS>
    
    Recovery Manager: Release 12.2.0.1.0 - Production on Thu Aug 16 14:52:15 2018
    
    connected to target database: ARGOSP (DBID=469810750)
    
    connected to recovery catalog database
    
    PL/SQL package <catalog_user>.DBMS_RCVCAT version 12.01.00.02. in RCVCAT database is too old
    
    RMAN> upgrade catalog;
    
    recovery catalog owner is <catalog_user>
    
    enter UPGRADE CATALOG command again to confirm catalog upgrade
    
    RMAN> upgrade catalog;
    
    recovery catalog upgraded to version 12.02.00.01
    
    DBMS_RCVMAN package upgraded to version 12.02.00.01
    
    DBMS_RCVCAT package upgraded to version 12.02.00.01.
  3. Check the release of the catalog
dbiservices@<server_name>:C:\Windows\system32\ [DB_NAME] sqlplus <catalog_user>/<pwd>@<catalog_TNS>

SQL*Plus: Release 12.2.0.1.0 Production on Thu Aug 16 14:50:08 2018

Connected to:

Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production

SQL> select * from rcver;

VERSION

------------

12.02.00.01

 

Database registration will then be successful:

RMAN> register database;

database registered in recovery catalog

starting full resync of recovery catalog

full resync complete
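As a quick additional check (a hedged example; RC_DATABASE is a standard recovery catalog view, your formatting may differ), the newly registered database should now show up when querying the catalog:

SQL> select name, dbid from rc_database;

The target database (ARGOSP, DBID 469810750 in this case) should be listed in the result.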
 

The article RMAN catalog upgrade, why, when and how appeared first on Blog dbi services.


SQL Plan stability in 11G using stored outlines


Plan stability preserves execution plans in stored outlines. An outline is implemented as a set of optimizer hints that are associated with the SQL statement. If the use of the outline is enabled for the statement, then Oracle Database automatically considers the stored hints and tries to generate an execution plan in accordance with those hints (Oracle documentation).

Oracle Database can create a public or private stored outline for one or all SQL statements. The optimizer then generates equivalent execution plans from the outlines when you enable the use of stored outlines. You can group outlines into categories and control which category of outlines Oracle Database uses to simplify outline administration and deployment (Oracle documentation).

The plans that Oracle Database maintains in stored outlines remain consistent despite changes to a system’s configuration or statistics. Using stored outlines also stabilizes the generated execution plan if the optimizer changes in subsequent Oracle Database releases (Oracle documentation).
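As a quick reminder of how outlines can be captured (a minimal sketch based on the standard syntax; the outline name, the category app_cat and the EMP query are illustrative assumptions, not part of the case below), you can either create an outline explicitly for one statement or let Oracle capture outlines for the whole session:

-- explicit outline for a single statement, stored in a chosen category
SQL> create or replace outline emp_dept10_outline for category app_cat
  2  on select * from emp where deptno = 10;

-- or capture outlines automatically for every statement run in the session
SQL> alter session set create_stored_outlines = app_cat;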

 

Many times we find ourselves in a situation where the performance of a query regresses, or the optimizer is not able to choose the best execution plan.

In the next lines I will describe a scenario that requires the use of a stored outline on a Standard Edition 2 database:

– We first identify the different plans that exist for our sql_id:

SQL> select hash_value,child_number,sql_id,executions from v$sql where sql_id='574gkxxxxxxxx';

HASH_VALUE CHILD_NUMBER SQL_ID        EXECUTIONS 
---------- ------------ ------------- ---------- 
 524000000            0 574gkxxxxxxxx          4 
 576000001            1 574gkxxxxxxxx          5

 

Of the two plans, we know that the best one is the one with cost 15 and hash_value 444444444444, but it is not always chosen by the optimizer, which causes performance peaks.

SQL> select * from table(dbms_xplan.display_cursor('574gkxxxxxxxx',0));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------

SQL_ID  574gkxxxxxxxx, child number 0
-------------------------------------
Select   <query>
........................................................

Plan hash value: 4444444444444

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                            |       |       |    15 (100)|       |
|   1 |  UNION-ALL                     |                            |       |       |            |       |
|*  2 |   FILTER                       |                            |       |       |            |       |
|   3 |    NESTED LOOPS                |                            |       |       |            |       |
|   4 |     NESTED LOOPS               |                            |     1 |    76 |     7  (15)| 00:00:01 |
|   5 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|   6 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|*  7 |        TABLE ACCESS FULL       |                            |     1 |    26 |     2   (0)| 00:00:01 |
|   8 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|   9 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 10 |         TABLE ACCESS FULL      |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 11 |      INDEX RANGE SCAN          |                            |     1 |       |     1   (0)| 00:00:01 |
|* 12 |     TABLE ACCESS BY INDEX ROWID|                            |     1 |    24 |     2   (0)| 00:00:01 |
|* 13 |   FILTER                       |                            |       |       |            |       |
|  14 |    NESTED LOOPS                |                            |       |       |            |       |
|  15 |     NESTED LOOPS               |                            |     1 |    76 |     8  (13)| 00:00:01 |
|  16 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|  17 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 18 |        TABLE ACCESS FULL       |                            |     1 |    26 |     2   (0)| 00:00:01 |
|  19 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|  20 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 21 |         TABLE ACCESS FULL      |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 22 |      INDEX RANGE SCAN          |                            |     1 |       |     2   (0)| 00:00:01 |
|  23 |     TABLE ACCESS BY INDEX ROWID|                            |     1 |    24 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------


   7 - filter("SERIAL#"=1xxxxxxxxxxxx)
  10 - filter("SERIAL#"=1xxxxxxxxxxxx)
----------------------------------------------

 

In order to fix this, we create and enable an outline that should help the optimizer to always choose the best plan:

SQL> BEGIN
  2    DBMS_OUTLN.create_outline(hash_value => 52400000, child_number => 0);
  3  END;
  4  /

PL/SQL procedure successfully completed.

SQL>
SQL> alter system set use_stored_outlines=TRUE;

System altered.

As the parameter “use_stored_outlines” is a ‘pseudo’ parameter, it is not persistent across a restart of the system; for that reason we had to create this startup trigger on the database.

SQL> create or replace trigger my_trigger after startup on database
  2  begin
  3  execute immediate 'alter system set use_stored_outlines=TRUE';
  4  end;
  5  /

Trigger created.

Now we can check if the outline is used:
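The listing below can be obtained with a simple query against the DBA_OUTLINES view (a hedged example, column formatting left aside):

SQL> select name, owner, category, used from dba_outlines;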

NAME                           OWNER                          CATEGORY                       USED
------------------------------ ------------------------------ ------------------------------ ------
SYS_OUTLINE_1xxxxxxxxxxxxxxxx  TEST                           DEFAULT                        USED

And also, we check that the outline is taken into account in the execution plan:

SQL> select * from table(dbms_xplan.display_cursor('574gkxxxxxxxx',0));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------

SQL_ID  574gkxxxxxxxx, child number 0
-------------------------------------
Select  
...................

Plan hash value: 444444444444

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                            |       |       |    15 (100)|       |
|   1 |  UNION-ALL                     |                            |       |       |            |       |
|*  2 |   FILTER                       |                            |       |       |            |       |
|   3 |    NESTED LOOPS                |                            |       |       |            |       |
|   4 |     NESTED LOOPS               |                            |     1 |    76 |     7  (15)| 00:00:01 |
|   5 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|   6 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|*  7 |        TABLE ACCESS FULL       |                            |     1 |    26 |     2   (0)| 00:00:01 |
|   8 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|   9 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 10 |         TABLE ACCESS FULL      |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 11 |      INDEX RANGE SCAN          |                            |     1 |       |     1   (0)| 00:00:01 |
|* 12 |     TABLE ACCESS BY INDEX ROWID|                            |     1 |    24 |     2   (0)| 00:00:01 |
|* 13 |   FILTER                       |                            |       |       |            |       |
|  14 |    NESTED LOOPS                |                            |       |       |            |       |
|  15 |     NESTED LOOPS               |                            |     1 |    76 |     8  (13)| 00:00:01 |
|  16 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|  17 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 18 |        TABLE ACCESS FULL       |                            |     1 |    26 |     2   (0)| 00:00:01 |
|  19 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|  20 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 21 |         TABLE ACCESS FULL      |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 22 |      INDEX RANGE SCAN          |                            |     1 |       |     2   (0)| 00:00:01 |
|  23 |     TABLE ACCESS BY INDEX ROWID|                            |     1 |    24 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------


   7 - filter("SERIAL#"=1xxxxxxxxxxx)
  10 - filter("SERIAL#"=1xxxxxxxxxxx)
  
Note
-----
   - outline "SYS_OUTLINE_18xxxxxxxxxxxx" used for this statement

To use stored outlines when Oracle compiles a SQL statement, we need to enable them by setting the system parameter USE_STORED_OUTLINES to TRUE or to a category name. This parameter can also be set at the session level.
By setting this parameter to TRUE, the category of outlines used by default is DEFAULT.
If you prefer to specify a category in the outline creation procedure, Oracle will use this outline category until you provide another category value or until you disable the usage of outlines by setting the parameter USE_STORED_OUTLINES to FALSE.
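As a hedged illustration (the category name APP_CAT is an assumption, not something used in this case), creating the outline directly in a named category and enabling that category would look like this:

SQL> BEGIN
  2    DBMS_OUTLN.create_outline(hash_value   => 524000000,
  3                              child_number => 0,
  4                              category     => 'APP_CAT');
  5  END;
  6  /

SQL> alter system set use_stored_outlines = APP_CAT;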

Additionally, I would like to mention that outlines are a deprecated feature in Oracle, but they still help us to fix performance issues on Standard Edition configurations.

The article SQL Plan stability in 11G using stored outlines appeared first on Blog dbi services.

From Oracle to Postgres with the EDB Postgres Migration Portal


EnterpriseDB is a valuable actor in the PostgreSQL world. In addition to providing support, they also deliver very useful tools to manage your Postgres environments easily. Among these we can mention EDB Enterprise Manager, EDB Backup & Recovery Tool, EDB Failover Manager, and so on…
With this post I will present one of the latest additions to the family, EDB Postgres Migration Portal, a helpful tool to migrate from Oracle to Postgres.

To access the Portal, use your EDB account or create one if you don't have one yet. By the way, with your account you can also connect to PostgresRocks, a very interesting community platform. Go take a look :) .

Once connected, click on “Create project” :
[screenshot]

Fill in the fields and click on “Create”. Currently it is only possible to migrate from Oracle 11 or 12 to EDB Postgres Advanced Server 10 :
[screenshot]

All your projects are displayed at the bottom of the page. Click on the “Assess” link to continue :
[screenshot]

The migration steps consist of the following :

  1. Extracting the DDL metadata from Oracle database using the EDB’s DDL Extractor script
  2. Running assessment
  3. Correcting conflicts
  4. Downloading and running the new DDL statements adapted to your EDB Postgres database
  5. Migrating data

1. Extracting the DDL metadata from Oracle database

The DDL Extractor script is easy to use. You just need to specify the schema name to extract the DDLs for and the path to store the DDL file. As you can guess, the script runs the Oracle dbms_metadata.get_ddl function to extract the object definitions :
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select object_type, count(*) from dba_objects where owner='HR' and status='VALID' group by object_type order by 1;

OBJECT_TYPE COUNT(*)
----------------------- ----------
INDEX 19
PROCEDURE 2
SEQUENCE 3
TABLE 7
TRIGGER 2

SQL>

SQL> @edb_ddl_extractor.sql
# -- EDB DDL Extractor Version 1.2 for Oracle Database -- #
# ------------------------------------------------------- #
Enter SCHEMA NAME to extract DDLs : HR
Enter PATH to store DDL file : /home/oracle/migration

Writing HR DDLs to /home/oracle/migration_gen_hr_ddls.sql
####################################################################################################################
## DDL EXTRACT FOR EDB POSTGRES MIGRATION PORTAL CREATED ON 03-10-2018 21:41:27 BY DDL EXTRACTION SCRIPT VERSION 1.2
##
## SOURCE DATABASE VERSION: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
####################################################################################################################
Extracting SYNONYMS...
Extracting DATABASE LINKS...
Extracting TYPE/TYPE BODY...
Extracting SEQUENCES...
Extracting TABLEs...
Extracting PARTITION Tables...
Extracting CACHE Tables...
Extracting CLUSTER Tables...
Extracting KEEP Tables...
Extracting INDEX ORGANIZED Tables...
Extracting COMPRESSED Tables...
Extracting NESTED Tables...
Extracting EXTERNAL Tables..
Extracting INDEXES...
Extracting CONSTRAINTS...
Extracting VIEWs..
Extracting MATERIALIZED VIEWs...
Extracting TRIGGERs..
Extracting FUNCTIONS...
Extracting PROCEDURE...
Extracting PACKAGE/PACKAGE BODY...

DDLs for Schema HR have been stored in /home/oracle/migration_gen_hr_ddls.sql
Upload this file to the EDB Migration Portal to assess this schema for EDB Advanced Server Compatibility.

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
oracle@vmrefdba01:/home/oracle/migration/ [DB1]
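For reference, the call the extractor script relies on can also be run manually for a single object; a minimal hedged sketch (schema and table names taken from the HR example above):

SQL> set long 100000 pagesize 0
SQL> select dbms_metadata.get_ddl('TABLE', 'EMPLOYEES', 'HR') from dual;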

2. Assessment

Go back to your browser. It's time to check whether the Oracle schema can be imported to Postgres or not. Upload the output file… and click on “Run assessment” to start the check.
The result is presented as follows :
[screenshot]

3. Correcting conflicts

We can notice an issue in the report above… the bfile type is not supported by EDB PPAS. You can click on the concerned table to get more details about the issue. Tip : when you want to manage bfile columns in Postgres, you can use the external_file extension.
Of course several other conversion issues can happen. A very good point with the Portal is that it provides a knowledge base to solve conflicts. You will find all the necessary information and workarounds by navigating to the “Repair handler” and “Knowledge base” tabs. Moreover, you can do the corrections directly from the Portal.

4. Creating the objects in Postgres database

Once you have corrected the conflicts and the assessment report shows a 100% success ratio, click on the top right “Export DDL” button to download the new creation script adapted for EDB Postgres :
[screenshot]
Then connect to your Postgres instance and run the script :
postgres=# \i Demo_HR.sql
CREATE SCHEMA
SET
CREATE SEQUENCE
CREATE SEQUENCE
CREATE SEQUENCE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
CREATE PROCEDURE
CREATE PROCEDURE
CREATE TRIGGER
CREATE TRIGGER
postgres=#

Quick check :
postgres=# select object_type, count(*) from dba_objects where schema_name='HR' and status='VALID' group by object_type order by 1;
object_type | count
-------------+-------
INDEX | 19
PROCEDURE | 2
SEQUENCE | 3
TABLE | 7
TRIGGER | 2
(5 rows)

Sounds good ! All objects have been created successfully.

5. Migrating data

The Migration Portal doesn’t provide an embedded solution to import the data. So to do that you can use the EDB Migration Tool Kit.
Let see how it works…
You will find MTK in the edbmtk directory of the {PPAS_HOME}. Inside etc the toolkit.properties file is used to store the connection parameters to the source & target database :
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/etc/ [PG10edb] cat toolkit.properties
SRC_DB_URL=jdbc:oracle:thin:@192.168.22.101:1521:DB1
SRC_DB_USER=system
SRC_DB_PASSWORD=manager

TARGET_DB_URL=jdbc:edb://localhost:5444/postgres
TARGET_DB_USER=postgres
TARGET_DB_PASSWORD=admin123
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/etc/ [PG10edb]

MTK uses JDBC to connect to the Oracle database. You need to download the Oracle JDBC driver (ojdbc7.jar) and store it in the following location :
postgres@ppas01:/home/postgres/ [PG10edb] ll /etc/alternatives/jre/lib/ext/
total 11424
-rw-r--r--. 1 root root 4003800 Oct 20 2017 cldrdata.jar
-rw-r--r--. 1 root root 9445 Oct 20 2017 dnsns.jar
-rw-r--r--. 1 root root 48733 Oct 20 2017 jaccess.jar
-rw-r--r--. 1 root root 1204766 Oct 20 2017 localedata.jar
-rw-r--r--. 1 root root 617 Oct 20 2017 meta-index
-rw-r--r--. 1 root root 2032243 Oct 20 2017 nashorn.jar
-rw-r--r--. 1 root root 3699265 Jun 17 2016 ojdbc7.jar
-rw-r--r--. 1 root root 30711 Oct 20 2017 sunec.jar
-rw-r--r--. 1 root root 293981 Oct 20 2017 sunjce_provider.jar
-rw-r--r--. 1 root root 267326 Oct 20 2017 sunpkcs11.jar
-rw-r--r--. 1 root root 77962 Oct 20 2017 zipfs.jar
postgres@ppas01:/home/postgres/ [PG10edb]

As HR’s objects already exist, let’s start the data migration with the -dataOnly option :
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/bin/ [PG10edb] ./runMTK.sh -dataOnly -truncLoad -logBadSQL HR
Running EnterpriseDB Migration Toolkit (Build 51.0.1) ...
Source database connectivity info...
conn =jdbc:oracle:thin:@192.168.22.101:1521:DB1
user =system
password=******
Target database connectivity info...
conn =jdbc:edb://localhost:5444/postgres
user =postgres
password=******
Connecting with source Oracle database server...
Connected to Oracle, version 'Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options'
Connecting with target EDB Postgres database server...
Connected to EnterpriseDB, version '10.5.12'
Importing redwood schema HR...
Loading Table Data in 8 MB batches...
Disabling FK constraints & triggers on hr.countries before truncate...
Truncating table COUNTRIES before data load...
Disabling indexes on hr.countries before data load...
Loading Table: COUNTRIES ...
[COUNTRIES] Migrated 25 rows.
[COUNTRIES] Table Data Load Summary: Total Time(s): 0.054 Total Rows: 25
Disabling FK constraints & triggers on hr.departments before truncate...
Truncating table DEPARTMENTS before data load...
Disabling indexes on hr.departments before data load...
Loading Table: DEPARTMENTS ...
[DEPARTMENTS] Migrated 27 rows.
[DEPARTMENTS] Table Data Load Summary: Total Time(s): 0.046 Total Rows: 27
Disabling FK constraints & triggers on hr.employees before truncate...
Truncating table EMPLOYEES before data load...
Disabling indexes on hr.employees before data load...
Loading Table: EMPLOYEES ...
[EMPLOYEES] Migrated 107 rows.
[EMPLOYEES] Table Data Load Summary: Total Time(s): 0.168 Total Rows: 107 Total Size(MB): 0.0087890625
Disabling FK constraints & triggers on hr.jobs before truncate...
Truncating table JOBS before data load...
Disabling indexes on hr.jobs before data load...
Loading Table: JOBS ...
[JOBS] Migrated 19 rows.
[JOBS] Table Data Load Summary: Total Time(s): 0.01 Total Rows: 19
Disabling FK constraints & triggers on hr.job_history before truncate...
Truncating table JOB_HISTORY before data load...
Disabling indexes on hr.job_history before data load...
Loading Table: JOB_HISTORY ...
[JOB_HISTORY] Migrated 10 rows.
[JOB_HISTORY] Table Data Load Summary: Total Time(s): 0.035 Total Rows: 10
Disabling FK constraints & triggers on hr.locations before truncate...
Truncating table LOCATIONS before data load...
Disabling indexes on hr.locations before data load...
Loading Table: LOCATIONS ...
[LOCATIONS] Migrated 23 rows.
[LOCATIONS] Table Data Load Summary: Total Time(s): 0.053 Total Rows: 23 Total Size(MB): 9.765625E-4
Disabling FK constraints & triggers on hr.regions before truncate...
Truncating table REGIONS before data load...
Disabling indexes on hr.regions before data load...
Loading Table: REGIONS ...
[REGIONS] Migrated 4 rows.
[REGIONS] Table Data Load Summary: Total Time(s): 0.025 Total Rows: 4
Enabling FK constraints & triggers on hr.countries...
Enabling indexes on hr.countries after data load...
Enabling FK constraints & triggers on hr.departments...
Enabling indexes on hr.departments after data load...
Enabling FK constraints & triggers on hr.employees...
Enabling indexes on hr.employees after data load...
Enabling FK constraints & triggers on hr.jobs...
Enabling indexes on hr.jobs after data load...
Enabling FK constraints & triggers on hr.job_history...
Enabling indexes on hr.job_history after data load...
Enabling FK constraints & triggers on hr.locations...
Enabling indexes on hr.locations after data load...
Enabling FK constraints & triggers on hr.regions...
Enabling indexes on hr.regions after data load...
Data Load Summary: Total Time (sec): 0.785 Total Rows: 215 Total Size(MB): 0.01

Schema HR imported successfully.
Migration process completed successfully.

Migration logs have been saved to /home/postgres/.enterprisedb/migration-toolkit/logs

******************** Migration Summary ********************
Tables: 7 out of 7

Total objects: 7
Successful count: 7
Failed count: 0
Invalid count: 0

*************************************************************
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/bin/ [PG10edb]

Quick check :
postgres=# select * from hr.regions;
region_id | region_name
-----------+------------------------
1 | Europe
2 | Americas
3 | Asia
4 | Middle East and Africa
(4 rows)

Conclusion

Easy, isn’t it ?
Once again, EnterpriseDB provides a very practical, user-friendly and easy to handle tool. In my demo the HR schema is pretty simple; the migration of more complex schemas can be more challenging. Currently only migrations from Oracle are available, but SQL Server and other legacy databases should be supported in future versions. In the meantime, you must use the EDB Migration Toolkit for those.

That’s it. Have fun !

The article From Oracle to Postgres with the EDB Postgres Migration Portal appeared first on Blog dbi services.

How to migrate Grid Infrastructure from release 12c to release 18c


Oracle Clusterware 18c builds on the technology of the previous releases by further enhancing support for larger multi-cluster environments and improving the overall ease of use. Oracle Clusterware is leveraged in the cloud in order to provide enterprise-class resiliency where required, and dynamic as well as online allocation of compute resources where and when needed.
Oracle Grid Infrastructure provides the necessary components to manage high availability (HA) for any business critical application.
HA in consolidated environments is no longer simple active/standby failover.

In this blog we will see how to upgrade our Grid Infrastructure stack from 12cR2 to 18c.
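Before starting, it is worth checking the Grid Infrastructure release and software versions currently active (a hedged example using standard crsctl commands against the existing 12.2 home):

[root@dbisrv04 ~]# /u91/app/grid/product/12.2.0/grid/bin/crsctl query has releaseversion
[root@dbisrv04 ~]# /u91/app/grid/product/12.2.0/grid/bin/crsctl query has softwareversion

Both should report [12.2.0.1.0] before the upgrade.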

Step 1: You are required to patch your GI with patch 27006180

[root@dbisrv04 ~]# /u91/app/grid/product/12.2.0/grid/OPatch/opatchauto apply /u90/Kit/27006180/ -oh /u91/app/grid/product/12.2.0/grid/

Performing prepatch operations on SIHA Home........

Start applying binary patches on SIHA Home........

Performing postpatch operations on SIHA Home........

[finalize:finalize] OracleHomeLSInventoryGrepAction action completed on home /u91/app/grid/product/12.2.0/grid successfully
OPatchAuto successful.

Step 2: Check the list of applied patches

grid@dbisrv04:/u90/Kit/ [+ASM] /u91/app/grid/product/12.2.0/grid/OPatch/opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2018, Oracle Corporation.  All rights reserved.

Lsinventory Output file location : /u91/app/grid/product/12.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2018-10-11_09-06-44AM.txt

--------------------------------------------------------------------------------
Oracle Grid Infrastructure 12c                                       12.2.0.1.0
There are 1 products installed in this Oracle Home.


Interim patches (1) :

Patch  27006180     : applied on Thu Oct 11 09:02:50 CEST 2018
Unique Patch ID:  21761216
Patch description:  "OCW Interim patch for 27006180"
   Created on 5 Dec 2017, 09:12:44 hrs PST8PDT
   Bugs fixed:
     13250991, 20559126, 22986384, 22999793, 23340259, 23722215, 23762756
........................
     26546632, 27006180

 

Step 3: Upgrade the binaries to release 18c

[screenshot: upgrade_grid]

[screenshot: directory_new_grid]

– It is recommended to run the rootupgrade.sh script manually:

[screenshot: run_root_script]

/u90/app/grid/product/18.3.0/grid/rootupgrade.sh
[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u90/app/grid/product/18.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u90/app/grid/product/18.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/dbisrv04/crsconfig/roothas_2018-10-11_09-21-27AM.log

2018/10/11 09:21:29 CLSRSC-595: Executing upgrade step 1 of 12: 'UpgPrechecks'.
2018/10/11 09:21:30 CLSRSC-363: User ignored prerequisites during installation
2018/10/11 09:21:31 CLSRSC-595: Executing upgrade step 2 of 12: 'GetOldConfig'.
2018/10/11 09:21:33 CLSRSC-595: Executing upgrade step 3 of 12: 'GenSiteGUIDs'.
2018/10/11 09:21:33 CLSRSC-595: Executing upgrade step 4 of 12: 'SetupOSD'.
2018/10/11 09:21:34 CLSRSC-595: Executing upgrade step 5 of 12: 'PreUpgrade'.

ASM has been upgraded and started successfully.

2018/10/11 09:22:25 CLSRSC-595: Executing upgrade step 6 of 12: 'UpgradeAFD'.
2018/10/11 09:23:52 CLSRSC-595: Executing upgrade step 7 of 12: 'UpgradeOLR'.
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
2018/10/11 09:23:57 CLSRSC-595: Executing upgrade step 8 of 12: 'UpgradeOCR'.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node dbisrv04 successfully pinned.
2018/10/11 09:24:00 CLSRSC-595: Executing upgrade step 9 of 12: 'CreateOHASD'.
2018/10/11 09:24:02 CLSRSC-595: Executing upgrade step 10 of 12: 'ConfigOHASD'.
2018/10/11 09:24:02 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2018/10/11 09:24:49 CLSRSC-595: Executing upgrade step 11 of 12: 'UpgradeSIHA'.
CRS-4123: Oracle High Availability Services has been started.


dbisrv04     2018/10/11 09:25:58     /u90/app/grid/product/18.3.0/grid/cdata/dbisrv04/backup_20181011_092558.olr     70732493   

dbisrv04     2018/07/31 15:24:14     /u91/app/grid/product/12.2.0/grid/cdata/dbisrv04/backup_20180731_152414.olr     0
2018/10/11 09:25:59 CLSRSC-595: Executing upgrade step 12 of 12: 'InstallACFS'.
CRS-4123: Oracle High Availability Services has been started.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dbisrv04'
CRS-2673: Attempting to stop 'ora.driver.afd' on 'dbisrv04'
CRS-2677: Stop of 'ora.driver.afd' on 'dbisrv04' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'dbisrv04' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/10/11 09:27:54 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

– You can ignore the warning related to the memory resources:

[screenshot: ignore_prereq]

[screenshot: completed_succesfully]

– Once the installation is finished, verify what has been done:

[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/bin/crsctl query has softwareversion
Oracle High Availability Services version on the local node is [18.0.0.0.0]

[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.DATA2.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.DATA3.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.RECO.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.asm
               ONLINE  ONLINE       dbisrv04                 Started,STABLE
ora.ons
               OFFLINE OFFLINE      dbisrv04                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.db18c.db
      1        ONLINE  ONLINE       dbisrv04                 Open,HOME=/u90/app/o
                                                             racle/product/18.3.0
                                                             /dbhome_1,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.evmd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.orcl.db
      1        ONLINE  ONLINE       dbisrv04                 Open,HOME=/u90/app/o
                                                             racle/product/18.3.0
                                                             /dbhome_1,STABLE
--------------------------------------------------------------------------------

The article How to migrate Grid Infrastructure from release 12c to release 18c appeared first on Blog dbi services.

Where do Oracle CMP$ tables come from and how to delete them?


According to the MOS note “Is Table SCHEMA.CMP4$222224 Or Similar Related To Compression Advisor? (Doc ID 1606356.1)”,
we know that since Oracle 11.2.0.4 BP1 or higher, when Compression Advisor fails, some tables whose names include “CMP” (e.g. CMP4$23590),
created temporarily (for the duration of the process) by the Compression Advisor process, are not removed from the database as they should be.
How are these tables created? How can we “cleanly” remove them?

1. Check that no CMP tables exist.

SQL> select count(*) from dba_tables where table_name like 'CMP%';

  COUNT(*)
----------
         0

2. Check that no compression is enabled for the table we will use to test the Compression Advisor.

SQL> select nvl(COMPRESSION,'NO') as COMPRESSION,nvl(COMPRESS_FOR,'NO') as COMPRESS_FOR from dba_tables where table_name = 'FOO';

COMPRESS COMPRESS_FOR
-------- ------------------------------
NO       NO

3. Execute the Compression Advisor procedure

The procedure DBMS_COMPRESSION.get_compression_ratio analyzes the compression ratio of a table and gives information about its compressibility.
For information, Oracle Database 12c includes a number of enhancements to the DBMS_COMPRESSION package, such as In-Memory Compression or Advanced Compression.

Let's execute the DBMS_COMPRESSION.get_compression_ratio procedure:

SQL> 
alter session set tracefile_identifier = 'CompTest1110201815h51';
alter session set events '10046 trace name context forever, level 12';
set serveroutput on

DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(32767);
BEGIN
  DBMS_COMPRESSION.get_compression_ratio (
    scratchtbsname  => 'USERS',
    ownname         => 'TEST_LAF',
    objname         => 'FOO',
    subobjname      => NULL,
    comptype        => DBMS_COMPRESSION.comp_advanced,
    blkcnt_cmp      => l_blkcnt_cmp,
    blkcnt_uncmp    => l_blkcnt_uncmp,
    row_cmp         => l_row_cmp,
    row_uncmp       => l_row_uncmp,
    cmp_ratio       => l_cmp_ratio,
    comptype_str    => l_comptype_str,
    subset_numrows  => DBMS_COMPRESSION.comp_ratio_allrows,
    objtype         => DBMS_COMPRESSION.objtype_table
  );

  DBMS_OUTPUT.put_line('Number of blocks used (compressed)       : ' ||  l_blkcnt_cmp);
  DBMS_OUTPUT.put_line('Number of blocks used (uncompressed)     : ' ||  l_blkcnt_uncmp);
  DBMS_OUTPUT.put_line('Number of rows in a block (compressed)   : ' ||  l_row_cmp);
  DBMS_OUTPUT.put_line('Number of rows in a block (uncompressed) : ' ||  l_row_uncmp);
  DBMS_OUTPUT.put_line('Compression ratio                        : ' ||  l_cmp_ratio);
  DBMS_OUTPUT.put_line('Compression type                         : ' ||  l_comptype_str);
END;
/

Number of blocks used (compressed)       : 1325
Number of blocks used (uncompressed)     : 1753
Number of rows in a block (compressed)   : 74
Number of rows in a block (uncompressed) : 55
Compression ratio                        : 1.3
Compression type                         : "Compress Advanced"

PL/SQL procedure successfully completed.

4. Which “internal CMP” tables are created by DBMS_COMPRESSION.get_compression_ratio?

To handle the compression advisor process, Oracle creates 4 CMP* tables : CMP1$23590, CMP2$23590, CMP3$23590, CMP4$23590.

Strangely, the Oracle 10046 trace file contains the DDL for the creation of the last two only (we can also use LogMiner to find the DDL): CMP3$23590 and CMP4$23590.
The table CMP3$23590 is a copy of the source table.
The table CMP4$23590 is a compressed copy of the CMP3$23590 table.

grep  "CMP*" DBI_ora_20529_CompTest1110201823h19.trc

drop table "TEST_LAF".CMP1$23590 purge
drop table "TEST_LAF".CMP2$23590 purge
drop table "TEST_LAF".CMP3$23590 purge
drop table "TEST_LAF".CMP4$23590 purge
create table "TEST_LAF".CMP3$23590 tablespace "USERS" nologging  as select /*+ DYNAMIC_SAMPLING(0) FULL("TEST_LAF"."FOO") */ *  from "TEST_LAF"."FOO"  sample block( 99) mytab
create table "TEST_LAF".CMP4$23590 organization heap  tablespace "USERS"  compress for all operations nologging as select /*+ DYNAMIC_SAMPLING(0) */ * from "TEST_LAF".CMP3$23590 mytab
drop table "TEST_LAF".CMP1$23590 purge
drop table "TEST_LAF".CMP2$23590 purge
drop table "TEST_LAF".CMP3$23590 purge
drop table "TEST_LAF".CMP4$23590 purge

As we can see above, the “internal” tables (even the compressed one, CMP4$23590) are removed at the end of the process.

To be sure, we check in the database :

SQL> select count(*) from dba_tables where table_name like 'CMP%';

  COUNT(*)
----------
         0

So, everything is fine, no ‘CMP’ tables exist and the source table is not compressed :

SQL> select nvl(COMPRESSION,'NO') as COMPRESSION,nvl(COMPRESS_FOR,'NO') as COMPRESS_FOR from dba_tables where table_name = 'FOO';

COMPRESS COMPRESS_FOR
-------- ------------------------------
NO       NO

5. But what happens if DBMS_COMPRESSION.get_compression_ratio fails?

Let's force the failure of the DBMS_COMPRESSION.get_compression_ratio procedure (here by cancelling the running operation)…

SQL> 
alter session set tracefile_identifier = 'CompTest1410201822h03';
alter session set events '10046 trace name context forever, level 12';
set serveroutput on

DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(32767);
BEGIN
  DBMS_COMPRESSION.get_compression_ratio (
    scratchtbsname  => 'USERS',
    ownname         => 'TEST_LAF',
    objname         => 'FOO',
    subobjname      => NULL,
    comptype        => DBMS_COMPRESSION.comp_advanced,
    blkcnt_cmp      => l_blkcnt_cmp,
    blkcnt_uncmp    => l_blkcnt_uncmp,
    row_cmp         => l_row_cmp,
    row_uncmp       => l_row_uncmp,
    cmp_ratio       => l_cmp_ratio,
    comptype_str    => l_comptype_str,
    subset_numrows  => DBMS_COMPRESSION.comp_ratio_allrows,
    objtype         => DBMS_COMPRESSION.objtype_table
  );
 24
  DBMS_OUTPUT.put_line('Number of blocks used (compressed)       : ' ||  l_blkcnt_cmp);
  DBMS_OUTPUT.put_line('Number of blocks used (uncompressed)     : ' ||  l_blkcnt_uncmp);
  DBMS_OUTPUT.put_line('Number of rows in a block (compressed)   : ' ||  l_row_cmp);
  DBMS_OUTPUT.put_line('Number of rows in a block (uncompressed) : ' ||  l_row_uncmp);
  DBMS_OUTPUT.put_line('Compression ratio                        : ' ||  l_cmp_ratio);
  DBMS_OUTPUT.put_line('Compression type                         : ' ||  l_comptype_str);
END;
 32  /
DECLARE
*
ERROR at line 1:
ORA-01013: user requested cancel of current operation

Which “CMP*” tables persist afterwards?

Two “CMP*” tables are always left behind:

SQL> select count(*) from dba_tables where table_name like 'CMP%';

  COUNT(*)
----------
         2

SQL> select owner,table_name from dba_tables where table_name like 'CMP%';

OWNER     TABLE_NAME
------- ----------
TEST_LAF  CMP3$23687
TEST_LAF  CMP4$23687


Since the “CMP3*” and “CMP4*” tables are copies of the source table (a compressed copy for the second one), disk space usage can increase dramatically if Compression Advisor fails frequently, especially with huge tables, so it is important to remove these tables.

The source table FOO and the internal tables CMP3$23687 and CMP4$23687 contain the same set of data (slightly less for the last two since the sample block option is used)…

SQL> select count(*) from test_laf.CMP3$23687;

  COUNT(*)
----------
     22147

SQL> select count(*) from test_laf.CMP4$23687;

  COUNT(*)
----------
     22147

SQL> select count(*) from test_laf.foo;

  COUNT(*)
----------
     22387

The worst part is that we now have a compressed table while we don't have the compression license option :

SQL> select nvl(COMPRESSION,'NO') as COMPRESSION,nvl(COMPRESS_FOR,'NO') as COMPRESS_FOR from dba_tables where table_name = 'CMP4$23687';

COMPRESS COMPRESS_FOR
-------- ------------------------------
ENABLED  ADVANCED

To remove the Oracle “CMP*” internal tables, let's analyze the 10046 trace file to check how Oracle removes these tables when the DBMS_COMPRESSION.get_compression_ratio procedure runs successfully.

Find below all the steps that Oracle performs to drop these tables:

drop table "TEST_LAF".CMP1$23687 purge

BEGIN
  BEGIN
    IF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_CONTENTS)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_truncate(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    ELSIF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_RESMETADATA)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_dropmetadata(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    END IF;
  EXCEPTION
    WHEN OTHERS THEN
     null;
  END;
END;

drop table "TEST_LAF".CMP2$23687 purge

PARSING IN CURSOR #140606951937256 len=515 dep=2 uid=0 oct=47 lid=0 tim=3421988631 hv=2219505151 ad='69fd11c8' sqlid='ct6c4h224pxgz'
BEGIN
  BEGIN
    IF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_CONTENTS)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_truncate(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    ELSIF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_RESMETADATA)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_dropmetadata(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    END IF;
  EXCEPTION
    WHEN OTHERS THEN
     null;
  END;
END;


drop table "TEST_LAF".CMP3$23687 purge

BEGIN
  BEGIN
    IF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_CONTENTS)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_truncate(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    ELSIF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_RESMETADATA)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_dropmetadata(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    END IF;
  EXCEPTION
    WHEN OTHERS THEN
     null;
  END;
END;


drop table "TEST_LAF".CMP4$23687 purge
BEGIN
  BEGIN
    IF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_CONTENTS)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_truncate(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    ELSIF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_RESMETADATA)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_dropmetadata(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    END IF;
  EXCEPTION
    WHEN OTHERS THEN
     null;
  END;
END;

To remove the “CMP*” tables, Oracle does the following:
– drop table *** purge
– call the internal procedure xdb.XDB_PITRIG_PKG.pitrig_truncate or xdb.XDB_PITRIG_PKG.pitrig_dropmetadata, depending on whether Oracle Virtual Private Database is used.

6. Last test: check that the source table is not compressed; we don't want compression enabled since we are not licensed for it…

SQL> select nvl(COMPRESSION,'NO') as COMPRESSION,nvl(COMPRESS_FOR,'NO') as COMPRESS_FOR from dba_tables where table_name = 'FOO';

COMPRESS COMPRESS_FOR
-------- ------------------------------
NO       NO

7. Conclusion

To drop the “CMP*” tables left over by the DBMS_COMPRESSION.get_compression_ratio procedure, just execute a drop table … purge on each of them.
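A hedged helper to generate the corresponding drop statements for any leftover tables (review the list first; Compression Advisor tables follow the CMPn$object_id naming pattern):

SQL> select 'drop table "'||owner||'"."'||table_name||'" purge;' as ddl
  2  from dba_tables
  3  where table_name like 'CMP_$%';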

I have not tested in more detail the case where compression is used together with Oracle VPD, so I don't know the impact of executing the system procedures xdb.XDB_PITRIG_PKG.pitrig_truncate or xdb.XDB_PITRIG_PKG.pitrig_dropmetadata when VPD is in use.

The article Where do Oracle CMP$ tables come from and how to delete them? appeared first on Blog dbi services.

Oracle Open World 2018 D1: Microservices Get Rid of Your DBA and Send the DB into Burnout


I had the pleasure to attend the session of Franck Pachot about microservices this morning (23.10.2018). Between 70 and 100 people were present in the room to listen to the ACE Director, OakTable member and OCM speaking about microservices.

Franck Pachot - microservices

Franck introduces microservices based on the fact that customers could want to get rid of their databases.

Getting rid of your database because it's shared, because it contains persistent data and because you query it with SQL could be good reasons. With smaller components you share less and you can dedicate each component to an owner. With that in mind comes the idea of microservices. Of course such reasoning has many limits, such as the fact that microservices shouldn't have data in common.

Usually you query databases with SQL or PL/SQL. However, SQL is a 4th generation language and SQL developers are rare. SQL is not only too complicated but also not portable. It's even worse with PL/SQL and T-SQL.

Solution: microservices with easier technology, development offshored. This is precisely what Franck spoke about in his session.

Indeed he did a demo with two tables (accounts and customers). He transferred a few dollars from one account to another: first using plain SQL, then PL/SQL, then JavaScript on the client, and finally JavaScript in the database using MLE (Multi Language Engine), and he checked the CPU time for each of these methods (a rough sketch of the pure-SQL version follows the results). The results are the following:

  • SQL – 5 seconds of CPU
  • PL/SQL – 30 seconds of CPU
  • JavaScript on client – 2 minutes of CPU (45s on the client and 75 into the database)
  • JavaScript in DB (MLE) – 1 minute of CPU
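For illustration only, a rough sketch of what the pure-SQL version of such a transfer could look like (table and column names are my assumptions, this is not the actual demo code):

SQL> update accounts set balance = balance - 100 where account_id = 1;
SQL> update accounts set balance = balance + 100 where account_id = 2;
SQL> commit;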

SQL Statement

What is particularly interesting here is that you might think you will offload the database by executing this logic with JavaScript on the client. Such a wish could be motivated by decreasing the required CPU power and therefore the Oracle licensing footprint. However, Franck proved exactly the opposite: you will at least double the CPU power required to execute the same operation. Running through different engines, processes and machines does not scale and burns more CPU cycles in each tier.

The difference between SQL and PL/SQL which are running in the same process is due to context switches.

The difference between SQL and JavaScript on the client is due to context switches on the server, context switches on the client, but also network latency.

Even if a context switch is really fast, Franck (who works at CERN) explained to us that during this time a proton can do a complete round of the CERN Large Hadron Collider (27 km).

Anyway, it was really interesting to see that it will be possible in the future to load JavaScript into an Oracle Database using MLE.

The article Oracle Open World 2018 D1: Microservices Get Rid of Your DBA and Send the DB into Burnout appeared first on Blog dbi services.

5 mistakes you should avoid with Oracle Database Appliance


Introduction

I have been working on ODA for 5 years now and I have a quite good overview of the pros and the cons of such a solution. On one hand, ODA can greatly streamline your Oracle Database environment if you understand the purpose of this appliance. On the other hand, ODA can be your nightmare if you're not using it as it is supposed to be used. Let's discover the common mistakes to avoid from the very beginning.

1st mistake – Consider the ODA as an appliance

Appliances are very popular these days: you need one feature, you buy a box that handles this feature and nothing else, you plug it in and it works straight away. Unfortunately you cannot do that with an ODA. First of all, the level of embedded software is not clever enough to bundle all the features needed to simplify the DBA's job. No doubt it will help you with faster deployment compared to home-made environments, but all the tasks the DBA did before still exist. The appliance part is limited to basic operations. With an ODA, you will have to connect in terminal mode, with the root, grid and oracle users, to do a lot of operations. And you will need Linux skills to manage your ODA: it can be a problem if you are a Windows-only user. ODA provides a graphical interface, but it's not something you will use very often. And there is still a lot of work for Oracle to do before the ODA truly earns the appliance moniker.

2nd mistake – Consider the ODA as a normal server

The second mistake is to consider the ODA as a normal server, because it really looks like a normal server.

On the software side, if you ask a DBA to connect to your Oracle environment on ODA, he probably won't see that it's actually an ODA, unless the server name contains the word oda :-) The only tool that differs from a normal server is the oakcli/odacli appliance manager, a command-line tool created to manage several features like database creation/deletion. What can be dangerous is that you have system-level access on the server, with all the advantages that come with it. But if you make some changes on the system, for example by installing a new package, manually changing the network configuration or tuning some Linux parameters, it can later prevent you from applying the next patch. The DBA should also refrain from patching a database with the classic patches available for Linux systems. Doing that will make your dbhome and related databases no longer manageable by the appliance manager. Wait for the ODA-dedicated quarterly patch bundle if you want to patch.

On the hardware side, yes an ODA looks like a normal server, with free disk slots in the front. But you always have to order extensions from the ODA catalog, and you cannot do whatever you want. You need to change disks for bigger ones? It's not possible. You want to add 2 new disks? Not possible either: disks are sold as a 3-disk pack and are supposed to be installed together. ODAs have limitations. The small one cannot be extended at all. And the other ones support limited extensions (in number and in time).

Please keep your ODA away from Linux gurus and hardware geeks. And stick with the recommended configurations.

3rd mistake – Compare the ODA hardware to other systems

When you consider ODA, it's quite common to compare the hardware to what other brands propose for the same amount of money. But it's clearly not a good comparison. You'll probably get more for your money from other brands. You should consider the ODA as a hardware and software bundle made to last longer than a normal system. As an example, I deployed my first ODA X4-2 in May 2014, and the current software bundle is still compatible with this ODA. Support for this ODA will end in February 2020: nearly 6 years of updates for all the stack, for a server that is able to run 11.2.0.4, 12.1.0.2 and 12.2.0.1 databases. Do the other brands propose that? I don't think so.

What you cannot immediately realize is how fast the adoption of ODA can be. Most of the ODA projects I did went to full production within 1 year, starting from the initial ODA consideration. On a classic project, choosing the server/storage/software takes longer, deployment lasts longer because multiple people are involved, you sometimes get stuck with hardware/software compatibility problems, and you have no guarantee about the performance even if you choose the best components. ODA reduces the duration and the cost of a project for sure.

4th mistake – Buy only one ODA

If you consider buying just one ODA, you probably need to think twice. Unless you do not want to patch regularly, a single ODA is probably not a good solution. Patching is not a zero-downtime operation, and it's not reversible. Even if ODA patch bundles simplify patching, it's still a complex operation, especially when the patch updates the operating system and the Grid Infrastructure components. Remember that one of the big advantages of an ODA is the availability of a new patch every 3 months to update all the stack: firmwares, BIOS, ILOM, operating system, Grid Infrastructure, Oracle homes, Oracle databases, … So if you want to secure the patching process, you'd better go for 2 ODAs, one for production databases and one for dev/test databases for example. And it makes no sense to move only a part of your databases to ODA, leaving the other databases on classic systems.

Another advantage of 2+ ODAs, if you're lucky enough to use Enterprise Edition, is the free use of Data Guard (without Active Data Guard – the standby database will stay mounted only). Most often, thinking about ODA is also thinking about disaster recovery solutions. And both are better together.

5th mistake – Manage your Enterprise licenses as you always did

One of the key features of the ODA is the ability to scale the Enterprise licenses, starting from 1 PROC license on a single ODA (or 25 named users). 1 license is only 2 cores on ODA. Does it make sense on this platform to have a limited number of licenses? The answer is yes and yes. Oracle recommends at least one core per database, but it's not a problem to deploy 5 or even 10 (small) databases with just 1 license; there is no limit for that. Apart from the CPU limitation (applying the license will limit the available cores), ODA has quite a big amount of RAM (please tune your SGA according to this) and fast I/O that makes disk reads not so expensive. CPU utilisation will be optimized.
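For example, on odacli-based ODAs, capping the appliance to 2 enabled cores (1 Enterprise PROC license) is a single command (a hedged sketch; check the exact syntax for your ODA software release, the prompt below is generic):

[root@oda ~]# odacli update-cpucore --cores 2
[root@oda ~]# odacli describe-cpucore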

What I mean is that you probably need fewer licenses on ODA than you need on a normal system. You can then spread these licenses over more ODAs and/or decrease the number of licenses you need. ODA hardware is sometimes self-financed by the savings on licenses. Keep in mind that 1 Oracle Enterprise PROC license costs more than the medium-size ODA. And you can always increase the number of licenses if needed (on-demand capacity).

Buying ODA hardware can be cheaper than you thought.

Conclusion

ODA is a great piece of hardware, and knowing what it is designed for and how it works will help you better manage your Oracle Database environment.

Cet article 5 mistakes you should avoid with Oracle Database Appliance est apparu en premier sur Blog dbi services.

DOAG 2018, more and more open source


We have been speaking about the increasing interest in open source technologies for several years, and now, in 2018, you can even feel it at the DOAG. There are sessions about PostgreSQL, MariaDB, Docker, Kubernetes and much more. As usual we had to do our preparation work before the conference opened to the public and set up our booth, and Michael almost came to his limits (and maybe he was not even sure what he was doing there :) ):


Jerome kicked off the DOAG 2018 for us with one of the very first sessions and talked about “Back to the roots: Oracle Datenbanken I/O Management”. That is still an interesting topic, and even if it was a session early in the morning the room was almost full.

Elisa continued in the afternoon with her session about MySQL and GDPR, deep diving into MySQL data and redo log files:
A very well prepared and interesting talk.

David followed with his favorite topic, which is ODA and the session was “ODA HA, what about VM backups?”:

During the sessions not much is happening at our booth, so there is time for internal discussions:

Finally, in the late afternoon, it was Hans’ (from Mobiliar) and my turn to talk about Oracle, PostgreSQL, Docker and Kubernetes, and we were quite surprised at how many people we had in the room. The room just filled up, see for yourself:

It seems this is a topic almost everywhere.

And now: day two at the DOAG, let’s see what happens today.

Cet article DOAG 2018, more and more open source est apparu en premier sur Blog dbi services.


ODA : Free up space on local filesystems


Introduction

When you work on ODA, you sometimes struggle with local filesystem free space. ODA has terabytes of space on the data disks, but the local disks are still limited to a RAID-1 array of 2x 480GB disks. And only a few GB are dedicated to the / and /u01 filesystems. You do not need hundreds of GB on these filesystems, but you probably prefer to keep at least 20-30% of free space. And if you plan to patch your ODA, you surely need more space to pass all the steps without reaching a dangerous filling level. Here is how to grab free space on these filesystems.

Use additional purgeLogs script

The purgeLogs script is provided as an additional tool from Oracle. It should have been available with oakcli/odacli, but it’s not. Download it from MOS note 2081655.1. As this tool is not part of the official ODA tooling, please test it before using it on a production environment. It’s quite easy to use: put the zip in a folder, unzip it, and run it as the root user. You can use this script with a single parameter that will clean up all the logfiles of all the Oracle products older than the given number of days:


df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 55G 29G 23G 56% /
df -h /u01/
Filesystem Size Used Avail Use% Mounted on
/dev/xvdb1 92G 43G 45G 50% /u01

cd /tmp/
unzip purgeLogs.zip
du -hs /opt/oracle/oak/log/*
11G /opt/oracle/oak/log/aprhodap02db0
4.0K /opt/oracle/oak/log/fishwrap
232K /opt/oracle/oak/log/test
./purgeLogs -days 1

--------------------------------------------------------
purgeLogs version: 1.43
Author: Ruggero Citton
RAC Pack, Cloud Innovation and Solution Engineering Team
Copyright Oracle, Inc.
--------------------------------------------------------

2018-12-20 09:20:06: I adrci GI purge started
2018-12-20 09:20:06: I adrci GI purging diagnostic destination diag/asm/+asm/+ASM1
2018-12-20 09:20:06: I ... purging ALERT older than 1 days

2018-12-20 09:20:47: S Purging completed succesfully!
du -hs /opt/oracle/oak/log/*
2.2G /opt/oracle/oak/log/aprhodap02db0
4.0K /opt/oracle/oak/log/fishwrap
28K /opt/oracle/oak/log/test


df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 55G 18G 34G 35% /
df -h /u01/
Filesystem Size Used Avail Use% Mounted on
/dev/xvdb1 92G 41G 47G 48% /u01

In this example, you just freed up about 13GB. If your ODA is composed of 2 nodes, don’t forget to run the same script on the other node.
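A minimal sketch for the second node, assuming root SSH access between the nodes and that purgeLogs has also been unzipped in /tmp there (the hostname is a placeholder):

ssh root@<second_node> "cd /tmp && ./purgeLogs -days 1"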

Truncate hardware log traces

Hardware related traces quietly fill up the filesystem if your ODA has been running for a long time. These traces are located under /opt/oracle/oak/log/`hostname`/adapters. I don’t know if every model shows this kind of behaviour, but here is an example on an old X4-2 that has been running for 3 years now.

cd /opt/oracle/oak/log/aprhodap02db0/adapters
ls -lrth
total 2.2G
-rw-r--r-- 1 root root 50M Dec 20 09:26 ServerAdapter.log
-rw-r--r-- 1 root root 102M Dec 20 09:27 ProcessorAdapter.log
-rw-r--r-- 1 root root 794M Dec 20 09:28 MemoryAdapter.log
-rw-r--r-- 1 root root 110M Dec 20 09:28 PowerSupplyAdapter.log
-rw-r--r-- 1 root root 318M Dec 20 09:30 NetworkAdapter.log
-rw-r--r-- 1 root root 794M Dec 20 09:30 CoolingAdapter.log
head -n 3 CoolingAdapter.log
[Mon Apr 27 18:02:28 CEST 2015] Action script '/opt/oracle/oak/adapters/CoolingAdapter.scr' for resource [CoolingType] called for action discovery
In CoolingAdapter.scr
[Mon Apr 27 18:07:28 CEST 2015] Action script '/opt/oracle/oak/adapters/CoolingAdapter.scr' for resource [CoolingType] called for action discovery
head -n 3 MemoryAdapter.log
[Mon Apr 27 18:02:26 CEST 2015] Action script '/opt/oracle/oak/adapters/MemoryAdapter.scr' for resource [MemoryType] called for action discovery
In MemoryAdapter.scr
[Mon Apr 27 18:07:25 CEST 2015] Action script '/opt/oracle/oak/adapters/MemoryAdapter.scr' for resource [MemoryType] called for action discovery

Let’s purge the oldest lines in these files:

# keep only the last 200 lines of each adapter logfile
for a in `ls *.log` ; do tail -n 200 $a > tmpfile ; cat tmpfile > $a ; rm -f tmpfile; done
ls -lrth
total 176K
-rw-r--r-- 1 root root 27K Dec 20 09:32 CoolingAdapter.log
-rw-r--r-- 1 root root 27K Dec 20 09:32 ProcessorAdapter.log
-rw-r--r-- 1 root root 30K Dec 20 09:32 PowerSupplyAdapter.log
-rw-r--r-- 1 root root 29K Dec 20 09:32 NetworkAdapter.log
-rw-r--r-- 1 root root 27K Dec 20 09:32 MemoryAdapter.log
-rw-r--r-- 1 root root 27K Dec 20 09:32 ServerAdapter.log

2GB of traces you’ll never use! Don’t forget the second node on an HA ODA.

Purge old patches in the repository: simply because they are useless

If you have successfully patched your ODA at least 2 times, you can remove the oldest patches from the ODA repository. As you may know, patches are quite big because they include a lot of things. So it’s good practice to remove the oldest patches once you have successfully patched your ODA. To identify whether old patches are still on your ODA, you can dig into the folder /opt/oracle/oak/pkgrepos/orapkgs/. Purging old patches is easy:

df -h / >> /tmp/dbi.txt
oakcli manage cleanrepo --ver 12.1.2.6.0
Deleting the following files...
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OAK/12.1.2.6.0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST95000N/SF04/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST95001N/SA03/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/WDC/WD500BLHXSUN/5G08/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H101860SFSUN600G/A770/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST360057SSUN600G/0B25/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H106060SDSUN600G/A4C0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H109060SESUN600G/A720/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/HUS1560SCSUN600G/A820/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/HSCAC2DA6SUN200G/A29A/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/HSCAC2DA4SUN400G/A29A/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/ZeusIOPs-es-G3/E12B/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/Z16IZF2EUSUN73G/9440/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24P/0018/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24C/0018/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE3-24C/0291/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4370-es-M2/3.0.16.22.f-es-r100119/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H109090SESUN900G/A720/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/Z16IZF4EUSUN200G/944A/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7240AS60SUN4.0T/A2D2/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7240B520SUN4.0T/M554/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7280A520SUN8.0T/P554/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/SUN/T4-es-Storage/0342/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4170-es-M3/3.2.4.26.b-es-r101722/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4-2/3.2.4.46.a-es-r101689/Base
Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X5-2/3.2.4.52-es-r101649/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/HMP/2.3.4.0.1/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/IPMI/1.8.12.4/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/ASR/5.3.1/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/12.1.0.2.160119/Patches/21948354
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.4.160119/Patches/21948347
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.3.15/Patches/20760997
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.2.12/Patches/17082367
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OEL/6.7/Patches/6.7.1
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OVS/12.1.2.6.0/Base
Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/GI/12.1.0.2.160119/Base
df -h / >> /tmp/dbi.txt
cat /tmp/dbi.txt
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 55G 28G 24G 54% /
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 55G 21G 31G 41% /

Increase /u01 filesystem with remaining space

This only concerns ODAs in bare metal. You may have noticed that not all the disk space is allocated to your ODA local filesystems. On modern ODAs, you have two 480GB M.2 SSDs in a RAID-1 configuration for the system, and only half of the space is allocated. As the appliance uses Logical Volumes, you can very easily extend the size of your /u01 filesystem.

This is an example on a X7-2M:


vgdisplay
--- Volume group ---
VG Name VolGroupSys
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 6
Open LV 4
Max PV 0
Cur PV 1
Act PV 1
VG Size 446.00 GiB
PE Size 32.00 MiB
Total PE 14272
Alloc PE / Size 7488 / 234.00 GiB
Free PE / Size 6784 / 212.00 GiB
VG UUID wQk7E2-7M6l-HpyM-c503-WEtn-BVez-zdv9kM


lvdisplay
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolRoot
LV Name LogVolRoot
VG Name VolGroupSys
LV UUID icIuHv-x9tt-v2fN-b8qK-Cfch-YfDA-xR7y3W
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:40:00 +0100
LV Status available
# open 1
LV Size 30.00 GiB
Current LE 960
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:0
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolU01
LV Name LogVolU01
VG Name VolGroupSys
LV UUID ggYNkK-GfJ4-ShHm-d5eG-6cmu-VCdQ-hoYzL4
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:40:07 +0100
LV Status available
# open 1
LV Size 100.00 GiB
Current LE 3200
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:2
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolOpt
LV Name LogVolOpt
VG Name VolGroupSys
LV UUID m8GvKZ-zgFF-2gXa-NSCG-Oy9l-vTYd-ALi6R1
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:40:30 +0100
LV Status available
# open 1
LV Size 60.00 GiB
Current LE 1920
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:3
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolSwap
LV Name LogVolSwap
VG Name VolGroupSys
LV UUID 9KWiYw-Wwot-xCmQ-uzCW-mILq-rsPz-t2X2pr
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:40:44 +0100
LV Status available
# open 2
LV Size 24.00 GiB
Current LE 768
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:1
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolDATA
LV Name LogVolDATA
VG Name VolGroupSys
LV UUID oTUQsd-wpYe-0tiA-WBFk-719z-9Cgd-ZjTmei
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:55:25 +0100
LV Status available
# open 0
LV Size 10.00 GiB
Current LE 320
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:4
--- Logical volume ---
LV Path /dev/VolGroupSys/LogVolRECO
LV Name LogVolRECO
VG Name VolGroupSys
LV UUID mJ3yEO-g0mw-f6IH-6r01-r7Ic-t1Kt-1rf36j
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2018-03-20 13:55:25 +0100
LV Status available
# open 0
LV Size 10.00 GiB
Current LE 320
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 249:5

212GB are available. Let’s take 100GB for extending /u01:


lvextend -L +100G /dev/mapper/VolGroupSys-LogVolU01
Size of logical volume VolGroupSys/LogVolU01 changed from 100.00 GiB (3200 extents) to 200.00 GiB.
Logical volume LogVolU01 successfully resized.

The filesystem then needs to be resized:

resize2fs /dev/mapper/VolGroupSys-LogVolU01
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/mapper/VolGroupSys-LogVolU01 is mounted on /u01; on-line resizing required
old_desc_blocks = 7, new_desc_blocks = 13
Performing an on-line resize of /dev/mapper/VolGroupSys-LogVolU01 to 52428800 (4k) blocks.
The filesystem on /dev/mapper/VolGroupSys-LogVolU01 is now 52428800 blocks long.

Now /u01 is bigger:

df -h /dev/mapper/VolGroupSys-LogVolU01
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolU01
197G 77G 111G 41% /u01

Conclusion

Don’t hesitate to clean up your ODA before having to deal with space pressure.

Cet article ODA : Free up space on local filesystems est apparu en premier sur Blog dbi services.

Documentum 7+ internal error during installation or upgrade DBTestResult7092863812136784595.tmp


This blog will go straight to the topic. When upgrading/installing your content server to 7+, you may experience an internal error with a popup telling you to look into a file called something like: DBTestResult7092863812136784595.tmp

In fact, the installation process failed to test the database connection, even though it managed to find your schema previously. In the file you’ll find something like:

 Last SQL statement executed by DB was:

#0  0x00000033b440f33e in waitpid () from /lib64/libpthread.so.0
#1  0x00000000004835db in dmExceptionManager::WalkStack(dmException*, int, siginfo*, void*) ()
#2  0x0000000000483998 in dmExceptionHandlerProc ()
#3  <signal handler called>
#4  0x00007f3d8c0e7d85 in ber_flush2 () from /dctm/product/7.3/bin/liblber-2.4.so.2
#5  0x00007f3d8bebb00b in ldap_int_flush_request () from /dctm/product/7.3/bin/libldap-2.4.so.2
#6  0x00007f3d8bebb808 in ldap_send_server_request () from /dctm/product/7.3/bin/libldap-2.4.so.2
#7  0x00007f3d8bebbb30 in ldap_send_initial_request () from /dctm/product/7.3/bin/libldap-2.4.so.2
#8  0x00007f3d8beab828 in ldap_search () from /dctm/product/7.3/bin/libldap-2.4.so.2
#9  0x00007f3d8beab952 in ldap_search_st () from /dctm/product/7.3/bin/libldap-2.4.so.2
#10 0x00007f3d898f93b2 in nnflqbf () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#11 0x00007f3d898ef124 in nnflrne1 () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#12 0x00007f3d898fe5b6 in nnfln2a () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#13 0x00007f3d886cffc0 in nnfgrne () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#14 0x00007f3d887f4274 in nlolgobj () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#15 0x00007f3d886ce43f in nnfun2a () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#16 0x00007f3d886ce213 in nnfsn2a () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#17 0x00007f3d8875f7f1 in niqname () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#18 0x00007f3d88612d06 in kpplcSetServerType () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#19 0x00007f3d8861387b in kpuatch () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#20 0x00007f3d893e9dc1 in kpulon2 () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#21 0x00007f3d892e15f2 in OCILogon2 () from /opt/oracle/product/client_12.1.0.2//lib/libclntsh.so
#22 0x0000000000555232 in DBConnection::Connect(DBString const*, DBString const*, DBString const*) ()
#23 0x00000000005555e4 in DBConnection::DBConnection(DBString const&, DBString const&, DBString const&, DBString const&, DBStats*, dmListHead*, int, int volatile*) ()
#24 0x000000000055f6ff in DBDataBaseImp::DBDataBaseImp(DBString const&, DBString const&, DBString const&, DBString const&, DBStats*, DBDataBase*, dmListHead*, int, int volatile*) ()
#25 0x0000000000545aaf in DBDataBase::DBDataBase(DBStats*, DBString const&, DBString const&, DBString const&, DBString const&, dmListHead*, int, int volatile*) ()
#26 0x0000000000466bd8 in dmServer_Dbtest(int, char**) ()
#27 0x00000033b3c1ed1d in __libc_start_main () from /lib64/libc.so.6
#28 0x0000000000455209 in _start ()
Tue Jan  8 16:18:15 2019 Documentum Internal Error: Assertion failure at line: 1459 in file: dmexcept.cxx

Not very precise, right?

In fact, it’s pretty simple. The installer failed to use your tnsnames.ora file because LDAP authentication is set with a higher priority. For those who don’t know, the tnsnames.ora holds your database connection information. Documentum won’t be able to connect without it, as Documentum will try to locate it.

Sometimes, depending on how the DBA installed the Oracle client on the machine, LDAP identification may be set with a higher priority than tnsnames identification. So you have two possibilities:

  • Edit sqlnet.ora to set TNSNAMES before LDAP (see the sketch after this list).
  • Rename ldap.ora to something else so that the Oracle Client doesn’t find it and falls back to TNSNAMES. I recommend this way because if the DBA patches the client, the sqlnet.ora may be set back to LDAP priority.
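For the first option, the relevant parameter is NAMES.DIRECTORY_PATH in sqlnet.ora. A minimal sketch of what the line could look like (the exact method list on your system may contain additional entries, e.g. EZCONNECT):

# $ORACLE_HOME/network/admin/sqlnet.ora
NAMES.DIRECTORY_PATH = (TNSNAMES, LDAP)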

For info, these files are located in $ORACLE_HOME/network/admin and are by default owned by the Oracle installation owner. So to edit the files you must be root or ask the DBAs to do it for you.

Cet article Documentum 7+ internal error during installation or upgrade DBTestResult7092863812136784595.tmp est apparu en premier sur Blog dbi services.

Recover a corrupted datafile in your DataGuard environment 11G/12C.


In a Data Guard environment, a datafile needs to be recovered on the STANDBY site in two situations: when it is deleted or when it is corrupted.
Below, I will explain how to recover a corrupted datafile, in order to repair the standby database without having to restore the entire database.

Initial situation :

DGMGRL> connect /
Connected to "PROD_SITE2"
Connected as SYSDG.
DGMGRL> show configuration;

Configuration - CONFIG1

  Protection Mode: MaxPerformance
  Members:
  PROD_SITE2 - Primary database
    PROD_SITE1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 15 seconds ago)

On this environment, we have a table called EMP with 100 rows, owned by the user TEST (default tablespace TEST).

SQL> set linesize 220;
SQL> select username,default_tablespace from dba_users where username='TEST';

USERNAME     DEFAULT_TABLESPACE
-------------------------------
TEST         TEST

SQL> select count(*) from test.emp;

  COUNT(*)
----------
       100

By mistake, the datafile on the standby site gets corrupted.

SQL> alter database open read only;
alter database open read only
*
ORA-01578: ORACLE data block corrupted (file # 5, block # 3)
ORA-01110: data file 5: '/u02/oradata/PROD/test.dbf'

As it is corrupted, the apply of the redo logs is stopped until it is repaired. So the new inserts into the EMP table will not be applied:

SQL> begin
  2  for i in 101..150 loop
  3  insert into test.emp values (i);
  4  end loop;
  5  END;
  6  /

PL/SQL procedure successfully completed.

SQL> COMMIT;

Commit complete.

SQL> select count(*) from test.emp;

  COUNT(*)
----------
       150

SQL> select name,db_unique_name,database_role from v$database;

NAME      DB_UNIQUE_NAME                 DATABASE_ROLE
--------- ------------------------------ ----------------
PROD      PROD_SITE2                     PRIMARY

To repair it, we will use the PRIMARY site to back up the controlfile and the related datafile.

oracle@dbisrv03:/home/oracle/ [PROD] rman target /

connected to target database: PROD (DBID=410572245)

RMAN> backup current controlfile for standby format '/u02/backupctrl.ctl';


RMAN> backup datafile 5 format '/u02/testbkp.dbf';

Starting backup at 29-JAN-2019 10:59:37
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=276 device type=DISK

We will transfer the backup pieces to the STANDBY server using scp:

 scp backupctrl.ctl oracle@dbisrv04:/u02/
 scp testbkp.dbf oracle@dbisrv04:/u02/

Now, we will start the restore/recover on the STANDBY server:

SQL> startup nomount
ORACLE instance started.

Total System Global Area 1895825408 bytes
Fixed Size                  8622048 bytes
Variable Size             570425376 bytes
Database Buffers         1308622848 bytes
Redo Buffers                8155136 bytes
SQL> exit
oracle@dbisrv04:/u02/oradata/PROD/ [PROD] rman target /


Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PROD (not mounted)

RMAN> restore controlfile from '/u02/backupctrl.ctl'; 
.........
RMAN> alter database mount;


RMAN> catalog start with '/u02/testbkp.dbf';

searching for all files that match the pattern /u02/testbkp.dbf

List of Files Unknown to the Database
=====================================
File Name: /u02/testbkp.dbf

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u02/testbkp.dbf




RMAN> restore datafile 5;

Starting restore at 29-JAN-2019 11:06:31
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /u02/oradata/PROD/test.dbf
channel ORA_DISK_1: reading from backup piece /u02/testbkp.dbf
channel ORA_DISK_1: piece handle=/u02/testbkp.dbf tag=TAG20190129T105938
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 29-JAN-2019 11:06:33

RMAN> exit

Now, we will start to apply the logs again and try to resync the STANDBY database.
!!! If you do not have an Active Data Guard license, you need to stop the recovery process before opening the database read only.

SQL> recover managed standby database using current logfile disconnect from session;
Media recovery complete.
SQL> recover managed standby database cancel;
SQL> alter database open read only;

Database altered.

SQL> select count(*) from test.emp;

  COUNT(*)
----------
       150

Now we can see that the last insert activity on the PRIMARY site is available on the STANDBY site.

On a 12c environment with an existing pluggable database PDB1, things are easier thanks to the RESTORE/RECOVER FROM SERVICE feature:

connect on the standby site
rman target /
restore tablespace PDB1:USERS from service PROD_PRIMARY;
recover tablespace PDB1:USERS;

Cet article Recover a corrupted datafile in your DataGuard environment 11G/12C. est apparu en premier sur Blog dbi services.

Italian Oracle User Group Tech Days 2019


The Italian Oracle User Group (ITOUG) is an independent group of Oracle enthusiasts and experts who work together as volunteers to promote technical knowledge sharing in Italy.

Here the ITOUG Board members:
ITOUG Board

This year ITOUG Tech Days take place in Milan on 30th January and in Rome on 1st February. Two different streams for each event:
– Database
– Analytics and Big Data
Today I participated in the event in Milan.
But before talking about that, ITOUG Tech Days started with the speakers’ dinner on Tuesday evening in Milan: aperitif, good Italian food and very nice people.
ITOUG Speakers Dinner

On Wednesday morning, we all met at Oracle Italia in Cinisello Balsamo (MI):
ITOUG Milan

After the welcome message by some ITOUG Board members:
ITOUG Welcome  Msg
sessions finally started. I attended the following ones of the Database stream:

- “Instrumentation 2.0: Performance is a feature” by Lasse Jenssen from Norway
Lasse
We have to understand what’s going on in a system; performance is a feature and we need instrumentation. Oracle End-to-End metrics, new tags in 12c, v$sql_monitor, dbms_monitor… And work in progress for instrumentation 3.0 with ElasticSearch, LogStash and Kibana.

- “Hash Join Memory Optimization” by one of the ITOUG Board member, Donatello Settembrino
Donatello
How Hash Join works and how to improve PGA consumption and performance. Examples of partitioning (to exclude useless data), (full) Partition-Wise Joins (to use fewer resources) and parallelism. Differences between Right-Deep Join Trees and Left-Deep Join Trees, and the concept of Bushy Join Trees in 12cR2.

- “Design your databases using Oracle SQL Developer Data Modeler” by Heli Helskyaho from Finland
Heli
Oracle SQL Developer Data Modeler, with SQL Developer or in standalone mode, to design your database. It uses Subversion, integrated in the tool, for version control and management. It also has support for other databases, MySQL for example. And it’s free.

- “Bringing your Oracle Database alive with APEX” by Dimitri Gielis from Belgium
Dimitri
Two things to learn from this session:
1) Use Oracle Application Express to design and develop a web application.
2) Use Quick SQL to create database objects and build a data model.
And all that in a very fast way.

- “Blockchain beyond the Hype” by one of the ITOUG Board member, Paolo Gaggia
Paolo
The evolution of blockchain from bitcoin to new enterprise-oriented implementations, and some interesting use cases.

Every session was very interesting: thanks to the great and amazing speakers (experts working on Oracle technologies, Oracle ACE, Oracle ACE Director…) for their sharing.

Follow the Italian Oracle User Group on Twitter (IT_OUG) and see you at the next ITOUG event!

Cet article Italian Oracle User Group Tech Days 2019 est apparu en premier sur Blog dbi services.

Create a primary database using the backup of a standby database on 12cR2


The scope of this blog will be to show how to create a primary role database based on a backup of a standby database on 12cR2.

Step1: We are assuming that an auxiliary instance has been created and started in nomount mode.

rman target /
restore primary controlfile from 'backup_location_directory/control_.bkp';
exit;

Specifying “restore primary” modifies the database role flag in the controlfile, so the instance will be mounted as a primary role instance instead of a standby one.

Step2: Once mounted the instance, we will restore the backup of the standby database.

run
{
catalog start with 'backup_location_directory';
restore database;
alter database flashback off;
recover database;
}

If, in the pfile used to start the instance, you specified the recovery destination and size parameters, Oracle will try to enable flashback.
Enabling flashback during the recovery is not allowed, so we deactivate it for the moment.

Step3: The restore/recover completed successfully; we will try to open the database, but we get some errors:

alter database open :

ORA-03113: end-of-file on communication channel
Process ID: 2588
Session ID: 1705 Serial number: 5

Step4: Fix the errors and try to open the database:

--normal redo log groups
alter database clear unarchived logfile group YYY;

--standby redo log groups
alter database clear unarchived logfile group ZZZ;
alter database drop logfile group ZZZ;

That is not enough. Looking at the database alert log file, we can see:

LGWR: Primary database is in MAXIMUM AVAILABILITY mode 
LGWR: Destination LOG_ARCHIVE_DEST_2 is not serviced by LGWR 
LGWR: Destination LOG_ARCHIVE_DEST_1 is not serviced by LGWR 

Errors in file /<TRACE_DESTINATION>_lgwr_1827.trc: 
ORA-16072: a minimum of one standby database destination is required 
LGWR: terminating instance due to error 16072 
Instance terminated by LGWR, pid = 1827

Step5: Complete the opening procedure:

alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';
alter database set standby database to maximize performance;

SQL> select name,open_mode,protection_mode from v$database;

NAME      OPEN_MODE            PROTECTION_MODE
--------- -------------------- --------------------
NAME      MOUNTED              MAXIMUM PERFORMANCE

SQL> alter database flashback on;

Database altered.

SQL> alter database open;

Database altered.

SQL> select name,db_unique_name,database_role from v$database;

NAME      DB_UNIQUE_NAME                 DATABASE_ROLE
--------- ------------------------------ ----------------
NAME      NAME_UNIQUE                    PRIMARY

Cet article Create a primary database using the backup of a standby database on 12cR2 est apparu en premier sur Blog dbi services.

When you change the UNDO_RETENTION parameter, the LOB segment’s retention value is not modified


Below, I will try to explain a particular case of the general error: ORA-01555 snapshot too old.

Normally, when we get this error, we try to adapt the retention parameters or to tune our queries.

SQL> show parameter undo;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled                    boolean     FALSE
undo_management                      string      AUTO
undo_retention                       integer     3600 --extended from 900,
undo_tablespace                      string      UNDOTBS1

But there are some scenarios where the above rule does not work.

From the alert log file of the DB, we got the sql_id which caused the issue: pmrbk5fdfd665

But when you search for it in V$SQL/V$SQLAREA, it is not there:

SQL> select sql_fulltext from v$sql where sql_id like '%pmrbk5fdfd665%';

no rows selected

Why?

It seems that the sql_id is present in V$OPEN_CURSOR, with an entry in the SQL_TEXT column.
The issue comes from the fact that the statement is accessing a LOB column, which causes Oracle to generate a new sql_id.
The execution part related to the LOBs will not appear in V$SQL/V$SQLAREA and is not captured in AWR reports.

SQL>  select distinct * from v$open_cursor
  2     where rownum < 25
  3     and sql_id like '%pmrbk5fdfd665%';

SADDR                   SID USER_NAME                      ADDRESS          HASH_VALUE SQL_ID        SQL_TEXT                                                     LAST_SQL SQL_EXEC_ID CURSOR_TYPE
---------------- ---------- ------------------------------ ---------------- ---------- ------------- ------------------------------------------------------------ -------- ----------- ---------------
0000000670A19780         74 my_user                   00000002EB91F1F0 3831220380 pmrbk5fdfd665 table_104_11_XYZT_0_0_0
00000006747F0478        131 my_user                   00000002EB91F1F0 3831220380 pmrbk5fdfd665 table_104_11_XYZT_0_0_0

Apparently, the string in the SQL_TEXT column is a HEX representation of the object_id that is being accessed.
In our case it is: XYZT

SQL>    select owner, object_name, object_type
  2    from dba_objects
  3    where object_id = (select to_number('&hex_value','XXXXXX') from dual);
Enter value for hex_value: XYZT
old   3:   where object_id = (select to_number('&hex_value','XXXXXX') from dual)
new   3:   where object_id = (select to_number('XYZT','XXXXXX') from dual)

                                                                                                                    
OWNER                  OBJECT_TYPE                                               OBJECT_NAME
---------------------- --------------------------------------------------------------------------
my_user                TABLE                                                     my_table


SQL> desc my_user.my_table;
 Name                  Type
 -------------------   ----------------
 EXPERIMENT_ID          VARCHAR2(20)
 DOCUMENT               BLOB
............….

If we look at the retention of the “DOCUMENT” column, we will see:

SQL> select table_name, pctversion, retention,segment_name from dba_lobs where table_name in ('my_table');

TABLE_NAME                                                                               
                                                  PCTVERSION  RETENTION                  SEGMENT_NAME
---------------------------------------------------------------------------------------- ------------------------------------
my_table                                                       900                       SYS_LOB0000027039C00002$$

In order to fix it, run the following command to adapt the retention of the LOB column to the new value of the UNDO_RETENTION parameter:

-- the LOB retention is re-read from the current UNDO_RETENTION (here 3600)
ALTER TABLE my_table MODIFY LOB (DOCUMENT) (RETENTION);
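To verify that the change took effect, simply re-run the dictionary query from above; the RETENTION value should now match the new UNDO_RETENTION value (3600):

select table_name, pctversion, retention, segment_name from dba_lobs where table_name in ('my_table');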

Cet article When you change the UNDO_RETENTION parameter, the LOB segment’s retention value is not modified est apparu en premier sur Blog dbi services.

How to migrate Grid Infrastructure from release 12c to release 18c


Oracle Clusterware 18c builds on this innovative technology by further enhancing support for larger multi-cluster environments and improving the overall ease of use. Oracle Clusterware is leveraged in the cloud in order to provide enterprise-class resiliency where required and dynamic as well as online allocation of compute resources where needed, when needed.
Oracle Grid Infrastructure provides the necessary components to manage high availability (HA) for any business critical application.
HA in consolidated environments is no longer simple active/standby failover.

In this blog we will see how to upgrade our Grid Infrastructure stack from 12cR2 to 18c.

Step1: You are required to patch your GI home with patch 27006180

[root@dbisrv04 ~]# /u91/app/grid/product/12.2.0/grid/OPatch/opatchauto apply /u90/Kit/27006180/ -oh /u91/app/grid/product/12.2.0/grid/

Performing prepatch operations on SIHA Home........

Start applying binary patches on SIHA Home........

Performing postpatch operations on SIHA Home........

[finalize:finalize] OracleHomeLSInventoryGrepAction action completed on home /u91/app/grid/product/12.2.0/grid successfully
OPatchAuto successful.

Step2: Check the list of patches applied

grid@dbisrv04:/u90/Kit/ [+ASM] /u91/app/grid/product/12.2.0/grid/OPatch/opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2018, Oracle Corporation.  All rights reserved.

Lsinventory Output file location : /u91/app/grid/product/12.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2018-10-11_09-06-44AM.txt

--------------------------------------------------------------------------------
Oracle Grid Infrastructure 12c                                       12.2.0.1.0
There are 1 products installed in this Oracle Home.


Interim patches (1) :

Patch  27006180     : applied on Thu Oct 11 09:02:50 CEST 2018
Unique Patch ID:  21761216
Patch description:  "OCW Interim patch for 27006180"
   Created on 5 Dec 2017, 09:12:44 hrs PST8PDT
   Bugs fixed:
     13250991, 20559126, 22986384, 22999793, 23340259, 23722215, 23762756
........................
     26546632, 27006180

 

Step3: Upgrade the binaries to release 18c
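The 18c binaries are installed into a new Grid home, from which the upgrade wizard is launched. A minimal sketch of these preliminary steps, assuming the 18c Grid software zip has been staged under /u90/Kit and the new home is /u90/app/grid/product/18.3.0/grid (paths and zip name are examples, adapt them to your environment):

# as the grid user: create the new 18c Grid home and unzip the software into it
mkdir -p /u90/app/grid/product/18.3.0/grid
cd /u90/app/grid/product/18.3.0/grid
unzip -q /u90/Kit/LINUX.X64_180000_grid_home.zip
# launch the installer and choose "Upgrade Oracle Grid Infrastructure"
./gridSetup.sh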



– it is recommended to run the rootupgrade.sh script manually when prompted


/u90/app/grid/product/18.3.0/grid/rootupgrade.sh
[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u90/app/grid/product/18.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u90/app/grid/product/18.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/dbisrv04/crsconfig/roothas_2018-10-11_09-21-27AM.log

2018/10/11 09:21:29 CLSRSC-595: Executing upgrade step 1 of 12: 'UpgPrechecks'.
2018/10/11 09:21:30 CLSRSC-363: User ignored prerequisites during installation
2018/10/11 09:21:31 CLSRSC-595: Executing upgrade step 2 of 12: 'GetOldConfig'.
2018/10/11 09:21:33 CLSRSC-595: Executing upgrade step 3 of 12: 'GenSiteGUIDs'.
2018/10/11 09:21:33 CLSRSC-595: Executing upgrade step 4 of 12: 'SetupOSD'.
2018/10/11 09:21:34 CLSRSC-595: Executing upgrade step 5 of 12: 'PreUpgrade'.

ASM has been upgraded and started successfully.

2018/10/11 09:22:25 CLSRSC-595: Executing upgrade step 6 of 12: 'UpgradeAFD'.
2018/10/11 09:23:52 CLSRSC-595: Executing upgrade step 7 of 12: 'UpgradeOLR'.
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
2018/10/11 09:23:57 CLSRSC-595: Executing upgrade step 8 of 12: 'UpgradeOCR'.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node dbisrv04 successfully pinned.
2018/10/11 09:24:00 CLSRSC-595: Executing upgrade step 9 of 12: 'CreateOHASD'.
2018/10/11 09:24:02 CLSRSC-595: Executing upgrade step 10 of 12: 'ConfigOHASD'.
2018/10/11 09:24:02 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2018/10/11 09:24:49 CLSRSC-595: Executing upgrade step 11 of 12: 'UpgradeSIHA'.
CRS-4123: Oracle High Availability Services has been started.


dbisrv04     2018/10/11 09:25:58     /u90/app/grid/product/18.3.0/grid/cdata/dbisrv04/backup_20181011_092558.olr     70732493   

dbisrv04     2018/07/31 15:24:14     /u91/app/grid/product/12.2.0/grid/cdata/dbisrv04/backup_20180731_152414.olr     0
2018/10/11 09:25:59 CLSRSC-595: Executing upgrade step 12 of 12: 'InstallACFS'.
CRS-4123: Oracle High Availability Services has been started.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dbisrv04'
CRS-2673: Attempting to stop 'ora.driver.afd' on 'dbisrv04'
CRS-2677: Stop of 'ora.driver.afd' on 'dbisrv04' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'dbisrv04' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/10/11 09:27:54 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

– you can ignore the warning related to the memory resources



– once the installation is finished, verify what has been done

[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/bin/crsctl query has softwareversion
Oracle High Availability Services version on the local node is [18.0.0.0.0]

[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.DATA2.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.DATA3.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.RECO.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.asm
               ONLINE  ONLINE       dbisrv04                 Started,STABLE
ora.ons
               OFFLINE OFFLINE      dbisrv04                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.db18c.db
      1        ONLINE  ONLINE       dbisrv04                 Open,HOME=/u90/app/o
                                                             racle/product/18.3.0
                                                             /dbhome_1,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.evmd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.orcl.db
      1        ONLINE  ONLINE       dbisrv04                 Open,HOME=/u90/app/o
                                                             racle/product/18.3.0
                                                             /dbhome_1,STABLE
--------------------------------------------------------------------------------

Cet article How to migrate Grid Infrastructure from release 12c to release 18c est apparu en premier sur Blog dbi services.


Discover target database 18c with EM12c


Working with Enterprise Manager 12.1.0.4 at a client’s site, we wanted to know whether an Oracle database target in version 18c could be discovered and monitored, even though Enterprise Manager 12.1.0.4 does not support Oracle 18c database targets.

Installing the 12c agent on the target host did not cause any problem and the Oracle database 18c discovery ran successfully, but the database was shown as down in the Enterprise Manager 12.1.0.4 console.

We tried several tricks without any positive result, but running the following command showed us that this was a connection problem:


oracle@em12c:/home/oracle/:> emctl getmetric agent DB18,oracle_database,Response
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.
All rights reserved.
Status,State,oraerr,Archiver,DatabaseStatus,ActiveState0,UNKNOWN,
Failed to connect: java.sql.SQLException: 
ORA-28040: No matching authentication protocol,UNKNOWN,UNKNOWN,UNKNOWN

With Oracle 18c, the default value of the SQLNET.ALLOWED_LOGON_VERSION parameter is 12, which means that database clients using pre-12c JDBC thin drivers cannot authenticate against 18c database servers.

The workaround is to add the following lines to the sqlnet.ora on the database server:

SQLNET.ALLOWED_LOGON_VERSION_SERVER=11
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=11

We restart the database and the agent, and the Oracle database 18c is now displayed as up and running in Enterprise Manager 12.1.0.4.
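As a quick sanity check from the agent side, the same metric query as before can be re-run; the ORA-28040 error should now be gone:

emctl getmetric agent DB18,oracle_database,Response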

Some more tests showed us that everything is working fine: incident management, performance, top activity and so on.

Nevertheless, do not forget that Oracle database targets in version 18c are not supported with Enterprise Manager 12.1.0.4. I would recommend using the latest Enterprise Manager 13.3 version in order to administer and monitor Oracle database 18c.

Cet article Discover target database 18c with EM12c est apparu en premier sur Blog dbi services.

Huawei Dorado 6000 V3 benchmark


I had the opportunity to test the new Dorado 6000 V3 All-Flash storage system.
See what the all-new Dorado 6000 V3 All-Flash storage system is capable of as storage for your database system.

Before you read

This is a series of different blog posts:
In the first blog post, I talk about “What you should measure on your database storage and why”.
The second blog post talks about “How to do a database storage performance benchmark with FIO”.
The third blog post shows “How good is the new HUAWEI Dorado 6000V3 All-Flash System for databases”, measured with the methods and tools from posts one and two (aka this one here).

The first two posts give you the theory to understand all the graphics and numbers I will show in the third blog post.

So in this post we see the results of testing a Huawei Dorado 6000V3 All-Flash storage system with these techniques.

I uploaded all the files to a github repository: Huawei-Dorado6000V3-Benchmark.

Foreword

The setup was provided by Huawei in Shenzhen, China. I got remote access with a timeout at a certain point. Every test run runs for 10h; because of the timeout, I was sometimes not able to capture all performance view pictures. That’s why some of the pictures are missing. The storage array and servers were provided free of charge, and Huawei did not influence the results or conclusion in any way.

Setup

4 servers were provided, each with 4x 16 GBit/s FC adapters directly connected to the storage system.
Each server has 256 GByte of memory installed and 2x 14-core 2.6 GHz E5-2690 Intel CPUs.
Hyperthreading is disabled.
The 10 GBit/s network interfaces are irrelevant for this test because all storage traffic runs over FC.

The Dorado 6000 V3 system has 1 TByte of cache and 50x 900 GByte SSDs from Huawei.
Deduplication was disabled.
Tests were made with and without compression.

Theoretical max speed

With 4x16GBit/s a maximal throughput of 64 GBit/s or 8 GByte/s is possible.
In IOPS this means we can transmit 8192 IOPS with 1 MByte block size or 1’048’576 IOPS with 8 KByte block size.
As mentioned in the title, this is theoretical or raw bandwidth; the usable bandwidth or payload is, of course, smaller: an FC frame is 2112 bytes with 36 bytes of protocol overhead.
So in a 64 GBit/s FC network we can transfer: 64 GBit/s / 8 ==> 8 GByte/s * 1024 ==> 8192 MByte/s (raw) * (100-(36/2.112))/100 ==> 6795 MByte/s (payload).

So we end up with a maximum of 6795 IOPS@1MByte or 869’841 IOPS@8KByte (payload); not included is the effect that we are using multipathing* with 4x 16GBit/s, which will also consume some power.

*If somebody out there has a method to calculate the overhead of multipathing in such a setup, please contact me!

Single-Server Results

General

All single server tests were made on devices with data compression enabled. Unfortunately, I no longer have the results of my tests on uncompressed devices for a single server, but you can see the difference in the multi-server section.

8 KByte block size

The 8 KByte block size tests on a single server were very performant.
What we can already tell: the higher the parallelity, the better the storage performs. This is not really a surprise; most storage systems work better the higher the parallel access is.
Especially for 1 thread, we see the difference between having one disk in a diskgroup and being able to use 3967 IOPS, or using e.g. 5 disks and 1 thread and being able to use 16700 IOPS.
The latency for all tests was great, with 0.25ms to 0.4ms for read operations and 0.1 to 0.4ms for write operations.
The 0.1ms for write is not that impressive, because it is mainly the performance of the write cache, but even when we exceeded the write cache we were not higher than 0.4ms.

1 MByte block size

On the 1 MByte tests, we see that we already hit the max speed with 6 devices (parallelity of 6) to 9 devices (parallelity of 2).

As an example to interpret the graphic, when you have a look at the green line (6 devices), we reach the peak performance at a parallelity of 6.
For the dark blue line (7 devices) we hit the max peak at parallelity 4 and so on.

If we increase the parallelity over this point, the latency will grow or even the throughput will decrease.
For the 1 MByte tests, we hit a limitation at around 6280 IOPS. This is around 90% of the calculated maximum speed.

So if we go with Oracle ASM, we should bundle at least 5 devices together into a diskgroup for best performance.
We also see that when we run a diskgroup rebalance, we should go for a small rebalance power. A value smaller than 4 should be chosen; every value over 8 is counterproductive and will consume all possible I/O on your system and slow down all databases on this server.
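For reference, the rebalance power is simply part of the rebalance statement; a minimal example, assuming a diskgroup named DATA (to be run on the ASM instance):

-- keep the rebalance power low on this storage
ALTER DISKGROUP DATA REBALANCE POWER 4;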

Monitoring / Verification

To verify the results, I am using Oracle’s I/O calibration (DBMS_RESOURCE_MANAGER.CALIBRATE_IO) on the very same devices the performance test was running on. The expectation is that we will see more or less the same results.

On large IO, the 6231 IOPS measured by IO calibration is almost the same as measured by FIO (+/- 1%).
IO calibration measured 604k IOPS for small IO, which is significantly more than the +/- 340k IOPS measured with FIO. This is explainable because IO calibration uses the number of disks for the parallelity, and I did this test with 20 disks instead of 10. Sadly, when I realized my mistake, I no longer had access to the system.

In the following pictures you see the performance view of the storage system with the data measured by FIO as an overlay. As we can see, the values for the IOPS match perfectly.
The value for latency was lower on the storage side, which is explainable by the different points where we measure (once on the storage side, once on the server side).
All print screens of the live performance view of the storage can be found in the git repository. The values for queue depth, throughput, and IOPS matched perfectly with the measured results.


Multi-Server Results with compression

General

The tests for compressed and uncompressed devices were made with 3 parallel servers.

8 KByte block size

For random read with 8 KByte blocks, the IOPS increased almost linearly from 1 to 3 nodes and we hit a peak of 655’000 IOPS with 10 devices / 10 threads. The answer time was between 0.3 and 0.45 ms.
For random write, we hit some kind of limitation at around 250k IOPS. We could not get a higher value than that, which was kind of surprising for me. I would have expected better results here.
From the point where we hit the maximum number of IOPS, we see the same behavior as with 1 MByte blocks: more threads only increase the answer time but do not get you better performance.
So for random write with 8 KByte blocks, the maximum is around 3 devices and 10 threads, or 10 devices and 3 threads, i.e. a parallelity of 30.
As long as we stay under this limit we see answer times between 0.15 and 0.5ms; over this limit the answer times can increase up to 10ms.

1 MByte block size

The multi-server tests show some interesting behavior with large reads on this storage system.
We hit a limitation at around 7500 to 7800 IOPS. For sequential write, we could achieve almost double this result with up to 14.5k IOPS.

Of course, I discussed all the results with Huawei to get their view on my tests.
The explanation for the much better performance on write than on read was: writes go straight to the 1 TByte cache, while for reads the system had to fetch everything from disk. This beta firmware version did not have any read cache, which is why the results were lower. All firmwares starting from the end of February also have a read cache.
I go with this answer and hope to retest it in the future with the newest firmware, still thinking that 7500 IOPS is a little bit low even without read cache.

Multi-Server Results without compression

Comparing the results for compressed devices to uncompressed devices, we see an increase in IOPS of up to 30% and a decrease in latency of the same order for the 8 KByte block size.
For 1 MByte sequential read, the difference was smaller, around 10%; for 1 MByte sequential write, we could gain an increase of around 15-20%.

Multi-Server Results with high parallelity

General

Because the tests with 3 servers did not max out the storage at the 8 KByte block size, I decided to do a max test with 4 parallel servers and with a parallelity from 1 to 100 instead of 1 to 10.
The steps were 1, 5, 10, 15, 20, 30, 40, 50, 75 and 100.
These tests were only performed on uncompressed devices.

8 KByte block size

It took 15 threads (per server) with 10 devices, 60 processes in total, to reach the peak performance of the Dorado 6000V3 system.
At this point, we reached 940k IOPS @0.637 ms for 8 KByte random read. Remembering the answer that this firmware version does not have any read cache, this performance is achieved completely from the SSDs and could theoretically be even better with read cache enabled.
If we increase the parallelity further, we see the same effect as with 1 MByte blocks: the answer time increases (dramatically) and the throughput decreases.

Depending on the number of parallel devices, we need between 60 parallel processes (with 10 devices) and 300 parallel processes (with 3 parallel devices).

1 MByte block size

For the large IOs, we see the same picture as with 1 or 3 servers: a combined parallelity of 20-30 can max out the storage system. So be very careful that your large IO tasks do not affect the other operations on the storage system.

Mixed Workload

After these tests, we know the upper limit of this storage for each single test case. In a normal workload, we will never see only one kind of IO: there will always be a mixture of 8 KByte read & write IOPS side by side with 1 MByte IO. To simulate this picture, we create two FIO job files. One creates approx. 40k-50k IOPS with random read and random write in a 50/50 split.
This will be our baseline; then we add approx. 1000 1 MByte IOPS every 60 seconds and see how the answer time reacts (a sketch of such a baseline job is shown below).
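As an illustration only, such a baseline could be expressed as a single fio command line; the device path and the per-job IOPS caps below are assumptions, not the exact job files used for this benchmark (those are in the github repository):

# ~50k mixed 8 KByte IOPS baseline: 10 jobs, each capped at 2500 read + 2500 write IOPS
fio --name=baseline-8k-randrw --filename=/dev/mapper/testlun01 \
    --ioengine=libaio --direct=1 --rw=randrw --rwmixread=50 --bs=8k \
    --iodepth=10 --numjobs=10 --rate_iops=2500,2500 \
    --time_based --runtime=600 --group_reporting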


As seen in this picture from the performance monitor of the storage system, the 1 MByte IOPS had two effects on the smaller IOPS:
– The throughput of the small IOPS decreases.
– The latency increases.
In the middle of the test, we stop the small IOPS to see the latency of just the 1 MByte IOPS.

Both effects are expected and within the expected parameters: test passed.

So with a base workload of 40k-50k IOPS, we can run e.g. backups in parallel with a bandwidth of up to 5.5 GByte/s without interfering with the database work, or we can do up to 5 active duplicates on the same storage without interfering with the other databases.

Summary

This storage system showed fantastic performance at 8 KByte block size with very low latency. Especially the high number of parallel processes we can run against it before hitting peak performance makes it a good choice for serving a large number of Oracle databases.

The large IO (1 MByte) performance for write operations was good, but not that good compared with the excellent 8 KByte performance. The sequential read part badly misses the read cache, compared to the performance which is possible for writes. But even that is not top of the line compared to other storage systems: I have seen other storage systems with a comparable configuration which were able to deliver up to 12k IOPS@1MByte.

Remember the questions from the first blog post:
-How many devices should I bundle into a diskgroup for best performance?
As many as possible.

-How many backups/duplicates can I run in parallel to my normal database workload without interfering with it?
You can run 5 parallel backups/duplicates with 1000 IOPS each without interfering with a baseline of 40-50k IOPS@8KByte.

-What is the best rebalance power I can use on my system?
A power of 2-4 is absolutely enough for this system. More will slow down the other operations on the server.

Cet article Huawei Dorado 6000 V3 benchmark est apparu en premier sur Blog dbi services.

Useful Linux commands for an Oracle DBA


Introduction

Oracle and Linux make a great duet: very powerful, very scriptable. Here are several commands that make my life easier. These tools seem to be available on most Linux distributions.

watch with diff

It has been my favorite tool for a long time. watch repeats a command indefinitely until you stop it with Ctrl+C. And it's even more useful with the --diff parameter: all the differences since the last run are highlighted. For example, if you want to monitor a running backup, try this:

watch -n 60 --diff 'sqlplus -s /nolog @check_backup; echo ; du -hs /backup'

The check_backup.sql being:


conn / as sysdba
set feedback off
set lines 150
set pages 100
col status for a30
alter session set NLS_DATE_FORMAT="DD/MM-HH24:MI:SS";
select start_time "Start", round (input_bytes/1024/1024,1) "Source MB", round(output_bytes/1024/1024,1) "Backup MB", input_type "Type", status "Status", round(elapsed_seconds/60,1) "Min", round(compression_ratio,1) "Ratio" from v$rman_backup_job_details where start_time >= SYSDATE-1 order by 1 desc;
exit;

Every minute (60 seconds), you will check, in the RMAN backup views, the amount of data already backed up, and the amount of data in your backup folder.

Very convenient to keep an eye on things without actually repeating the commands.

Truncate a logfile in one simple command

Oracle generates a lot of logfiles; some of them can reach several GB and fill up your filesystem. How do you quickly empty a big logfile without removing it? Simply use the true command:

true > listener.log
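If you first want to spot the files worth truncating, a quick find on the diagnostic destination helps (the path below is an assumption, adapt it to your ORACLE_BASE):

# list logfiles bigger than 1 GB under the diag destination
find /u01/app/oracle/diag -type f -name "*.log" -size +1G -exec ls -lh {} \;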

Run a SQL script on all the running databases

Do you need to check something on every database running on your system? Or perhaps make the same change to all of them? A single line will do the job:

for a in `ps -ef | grep pmon | grep -v grep | awk '{print $8}' | cut -c 10- | sort`; do . oraenv <<< $a; sqlplus -s / as sysdba @my_script.sql >> output.log; done

Don't forget to put an exit at the end of your SQL script my_script.sql. Pushing this loop out to several servers (through Ansible, or a simple SSH loop like the sketch below) widens the scope even further and saves hours of work.
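A minimal sketch of such a loop over several servers, assuming password-less SSH as oracle, bash on the remote hosts, and a hypothetical server_list.txt with one hostname per line:

#!/bin/bash
# run my_script.sql on every running database of every server listed in server_list.txt
while read -r host; do
  echo "### ${host}" >> all_servers_output.log
  scp -q my_script.sql oracle@"${host}":/tmp/my_script.sql
  ssh oracle@"${host}" 'for a in $(ps -ef | grep pmon | grep -v grep | awk "{print \$8}" | cut -c 10- | sort); do
      . oraenv <<< "$a" > /dev/null
      sqlplus -s / as sysdba @/tmp/my_script.sql
    done' >> all_servers_output.log
done < server_list.txt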

Copy a folder to another server

scp is fine for copying a single file or multiple files inside a folder. But copying a folder recursively to a remote server is more complicated: you would typically go through a tarfile. A clever solution is to use tar without creating any archive on the source server, piping it directly to the destination server. Very useful and efficient, with just one line:

tar cf - source_folder | ssh oracle@192.168.50.167 "cd destination_folder_for_source_folder; tar xf -"

Of course, the oracle user on 192.168.50.167 will need rwx permissions on destination_folder_for_source_folder.

Check the network speed – because you need to check

As an Oracle DBA you probably have to deal with performance: not a problem, it's part of your job. But are you sure your database server is running at full network speed? You probably never checked, yet a low network speed could be the root cause of some performance issues. This mainly concerns copper-based networks.

Today's servers handle 10Gb/s Ethernet but can also work at 1Gb/s depending on the network behind them. You should be aware that you can still find 100Mb/s network speeds, for example if the switch port attached to your server has been limited for some reason (needed for the server connected to this port before yours, for example). If 1Gb/s is probably enough for most databases, 100Mb/s is clearly inadequate, and most recent servers will not even handle 100Mb/s correctly. Your Oracle environment may work, but don't expect a high performance level, as your databases will have to wait for the network to send packets. Don't forget that 1Gb/s gives you about 100-120MBytes/s in real conditions, while 100Mb/s only allows 10-12MBytes/s, the "Fast Ethernet" of the 90's…
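A rough end-to-end check of the effective throughput between two servers can be done with dd piped through ssh (the IP is just an example, and the ssh encryption adds some overhead, so treat the reported MB/s as a lower bound):

# push 1 GB of zeroes over the network and let dd print the effective throughput
dd if=/dev/zero bs=1M count=1024 | ssh oracle@192.168.50.167 "cat > /dev/null"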

Checking the network speed is easy, with ethtool.

[root@oda-x6-2 ~]# ethtool btbond1
Settings for btbond1:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 1000Mb/s <= Network speed is OK
Duplex: Full
Port: Other
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Link detected: yes

In case of a network bonding interface, please also check the real interfaces associated with the bond; all the network interfaces belonging to the bond need to have the same network speed:

[root@oda-x6-2 ~]# ethtool em1
Settings for em1:
Supported ports: [ TP ]
Supported link modes: 100baseT/Full <= this network interface physically supports 100Mb/s
1000baseT/Full <= also 1Gb/s
10000baseT/Full <= and 10Gb/s
Supported pause frame use: Symmetric
Supports auto-negotiation: Yes
Advertised link modes: 100baseT/Full
1000baseT/Full
10000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: Yes
Speed: 1000Mb/s <= Network speed is 1Gb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: external
Auto-negotiation: on
MDI-X: Unknown
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes <= This interface is connected to a switch

Conclusion

Hope this helps!

The article Useful Linux commands for an Oracle DBA appeared first on Blog dbi services.

Moving oracle database to new home on ODA


Moving a database to a new ORACLE_HOME is a common DBA task. Performing this task on an ODA needs an additional step because ODA Lite uses an internal Derby database for the metadata. ODA HA is not affected, as it does not use a Derby database. Through this blog I would like to give you some guidance and a workaround to move a database to a new home (of the same major release). In this example we will move a database named mydb, with db_unique_name set to mydb_site1, from OraDB11204_home1 to OraDB11204_home2.

I would like to highlight that this blog shows the procedure to move a database between ORACLE_HOMEs of the same major release. The new ORACLE_HOME would, for example, run additional patches. An upgrade between Oracle major releases is not possible with this procedure; you would need to use the appropriate odacli commands (odacli upgrade-database) in that case.
Last but not least, I also would like to strongly advise that updating the ODA repository manually should only be performed after getting Oracle support guidance and agreement to do so. Neither the author (that's me 🙂 ) nor dbi services 😉 would be responsible for any issue or consequence following the commands described in this blog. This is your own responsibility. 😉

I'm running ODA release 12.2.1.3. The database used in this example is Oracle 11g, but the procedure works exactly the same for other versions like Oracle 12c databases.

Current database information

Let’s first get information on which dbhome our mydb database is running.

List dbhomes :
[root@oda tmp]# odacli list-dbhomes
ID Name DB Version Home Location Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
ed0a6667-0d70-4113-8a5e-3afaf1976fc2 OraDB12102_home1 12.1.0.2.171017 (26914423, 26717470) /u01/app/oracle/product/12.1.0.2/dbhome_1 Configured
89f6687e-f575-45fc-91ef-5521374c54c0 OraDB11204_home1 11.2.0.4.171017 (26609929, 26392168) /u01/app/oracle/product/11.2.0.4/dbhome_1 Configured
8c6bc663-b064-445b-8a14-b7c46df9d1da OraDB12102_home3 12.1.0.2.171017 (26914423, 26717470) /u01/app/oracle/product/12.1.0.2/dbhome_3 Configured
9783fd89-f035-4d1a-aaaf-f1cdb09c6ea8 OraDB11204_home2 11.2.0.4.171017 (26609929, 26392168) /u01/app/oracle/product/11.2.0.4/dbhome_2 Configured

List database information :
[root@oda tmp]# odacli list-databases
ID DB Name DB Type DB Version CDB Class Shape Storage Status DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
f38f3a6c-987c-4e11-8cfa-af5cb66ff4e3 mydb Si 11.2.0.4 false OLTP Odb1 ACFS Configured 89f6687e-f575-45fc-91ef-5521374c54c0

Our database is running on OraDB11204_home1 Oracle home.

Moving database to new home

Let’s move mydb database on OraDB11204_home2.

    The process to link the database to a new home is quite simple and can be done by:

  1. Moving the instance parameter file and password file to the new Oracle home (a minimal copy sketch follows this list)
  2. Updating the listener configuration by inserting the new Oracle home in case static registration is used
  3. Changing the grid cluster information using the srvctl command
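A minimal sketch for step 1, assuming the password file still sits in the old home's dbs directory (on this ODA the spfile is on ACFS, so it does not need to be moved):

cp /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/orapwmydb /u01/app/oracle/product/11.2.0.4/dbhome_2/dbs/orapwmydb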

First we need to stop the database :
oracle@oda:/u01/app/oracle/product/11.2.0.4/dbhome_2/dbs/ [mydb] srvctl stop database -d mydb_site1

We can list the current grid configuration :
oracle@oda:/u01/app/oracle/product/11.2.0.4/dbhome_2/dbs/ [mydb] srvctl config database -d mydb_site1
Database unique name: mydb_site1
Database name: mydb
Oracle home: /u01/app/oracle/product/11.2.0.4/dbhome_1
Oracle user: oracle
Spfile: /u02/app/oracle/oradata/mydb_site1/dbs/spfilemydb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: mydb_site1
Database instance: mydb
Disk Groups:
Mount point paths: /u02/app/oracle/oradata/mydb_site1,/u03/app/oracle/
Services:
Type: SINGLE
Database is administrator managed

As we can see, the grid cluster configuration still refers to the current dbhome. Let's update it to link the database to the new Oracle home:
oracle@oda:/u01/app/oracle/product/11.2.0.4/dbhome_2/dbs/ [mydb] srvctl modify database -d mydb_site1 -o /u01/app/oracle/product/11.2.0.4/dbhome_2

Note that if you are using Oracle 12c, the srvctl command option might differ. Use :
-db for database name
-oraclehome for database oracle home
-pwfile for password file

With an Oracle 12c database you will also have to specify the change for the password file in case it is stored in the $ORACLE_HOME/dbs folder.
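As an illustration only (the 12c home path and password file location below are hypothetical), the 12c equivalent of the command above would look like:

srvctl modify database -db mydb_site1 -oraclehome /u01/app/oracle/product/12.1.0.2/dbhome_3 -pwfile /u01/app/oracle/product/12.1.0.2/dbhome_3/dbs/orapwmydb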

We can check the new grid database configuration :
oracle@oda:/u01/app/oracle/product/11.2.0.4/dbhome_2/dbs/ [mydb] srvctl config database -d mydb_site1
Database unique name: mydb_site1
Database name: mydb
Oracle home: /u01/app/oracle/product/11.2.0.4/dbhome_2
Oracle user: oracle
Spfile: /u02/app/oracle/oradata/mydb_site1/dbs/spfilemydb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: mydb_site1
Database instance: mydb
Disk Groups:
Mount point paths: /u02/app/oracle/oradata/mydb_site1,/u03/app/oracle/
Services:
Type: SINGLE
Database is administrator managed

And we can start our database again :
oracle@oda:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [mydb] srvctl start database -d mydb_site1

Our database will now successfully be running on OraDB11204_home2.

We can check that the DCS agent has successfully updated the oratab file:
oracle@oda:/tmp/ [mydb] grep mydb /etc/oratab
mydb:/u01/app/oracle/product/11.2.0.4/dbhome_2:N # line added by Agent

Our ORACLE_HOME env variable will now be :
oracle@oda01-p:/tmp/ [mydb] echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0.4/dbhome_2

Are we done? No, let’s check how the ODA will display the new updated information.

Checking ODA metadata information

List dbhomes :
[root@oda tmp]# odacli list-dbhomes
ID Name DB Version Home Location Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
ed0a6667-0d70-4113-8a5e-3afaf1976fc2 OraDB12102_home1 12.1.0.2.171017 (26914423, 26717470) /u01/app/oracle/product/12.1.0.2/dbhome_1 Configured
89f6687e-f575-45fc-91ef-5521374c54c0 OraDB11204_home1 11.2.0.4.171017 (26609929, 26392168) /u01/app/oracle/product/11.2.0.4/dbhome_1 Configured
8c6bc663-b064-445b-8a14-b7c46df9d1da OraDB12102_home3 12.1.0.2.171017 (26914423, 26717470) /u01/app/oracle/product/12.1.0.2/dbhome_3 Configured
9783fd89-f035-4d1a-aaaf-f1cdb09c6ea8 OraDB11204_home2 11.2.0.4.171017 (26609929, 26392168) /u01/app/oracle/product/11.2.0.4/dbhome_2 Configured

List database information :
[root@oda tmp]# odacli list-databases
ID DB Name DB Type DB Version CDB Class Shape Storage Status DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
f38f3a6c-987c-4e11-8cfa-af5cb66ff4e3 mydb Si 11.2.0.4 false OLTP Odb1 ACFS Configured 89f6687e-f575-45fc-91ef-5521374c54c0

As we can see, the ODA metadata coming from the Derby database still shows the mydb database linked to OraDB11204_home1.

Updating ODA metadata

Let’s update derby database to reflect the changes.

You can get your current appliance version by running the command :
odacli describe-component

ODA version 18.3 or higher

If you are running ODA version 18.3 or higher, you can use the following command to move a database from one database home to another database home of the same base version:
odacli modify-database -i <database_id> -dh <destination_dbhome_id>

This command might not be successful if your database was initially created as an instance-only database:

[root@oda tmp]# odacli modify-database -i f38f3a6c-987c-4e11-8cfa-af5cb66ff4e3 -dh 9783fd89-f035-4d1a-aaaf-f1cdb09c6ea8
DCS-10045:Validation error encountered: Changing the database home is not allowed for an instance only database.

ODA version lower than 18.3

If you are running a lower version of ODA, you will need to update the derby DB manually. I would strongly recommend to act carefully on the derby database to make sure not to corrupt the ODA. I would also encourage you to get Oracle support guidance in case you need to act on your production ODA.
The next steps will describe how to update the derby DB manually.

1) Stop the DCS Agent

[root@oda ~]# initctl stop initdcsagent
initdcsagent stop/waiting

[root@oda ~]# ps -ef | grep dcs-agent | grep -v grep
[root@oda ~]#
2) Copy the derby Database

It is important to backup the repository and to apply the changes on the backup in order to keep the original version unchanged in case of trouble.

Go in the derby db repository folder :
[root@oda tmp]# cd /opt/oracle/dcs/repo

List current repository folder :
[root@oda repo]# ls -l
total 24
-rw-r--r-- 1 root root 1149 Aug 27 11:57 derby.log
drwxr-xr-x 4 root root 4096 Aug 27 15:32 node_0
drwxr-xr-x 4 root root 4096 Aug 12 16:18 node_0_orig_12082019_1619
drwxr-xr-x 4 root root 4096 Aug 26 11:31 node_0_orig_26082019_1132
drwxr-xr-x 4 root root 4096 Aug 26 15:27 node_0_orig_26082019_1528
drwxr-xr-x 4 root root 4096 Aug 27 11:57 node_0_orig_27082019_1158

Backup the repository (we will apply the changes on the backup repository to keep the original so far unchanged) :
[root@oda repo]# cp -rp node_0 node_0_backup_27082019_1533

List current repository folder :
[root@oda repo]# ls -l
total 28
-rw-r--r-- 1 root root 1149 Aug 27 11:57 derby.log
drwxr-xr-x 4 root root 4096 Aug 27 15:32 node_0
drwxr-xr-x 4 root root 4096 Aug 27 15:32 node_0_backup_27082019_1533
drwxr-xr-x 4 root root 4096 Aug 12 16:18 node_0_orig_12082019_1619
drwxr-xr-x 4 root root 4096 Aug 26 11:31 node_0_orig_26082019_1132
drwxr-xr-x 4 root root 4096 Aug 26 15:27 node_0_orig_26082019_1528
drwxr-xr-x 4 root root 4096 Aug 27 11:57 node_0_orig_27082019_1158

3) Start DCS Agent

[root@oda repo]# initctl start initdcsagent
initdcsagent start/running, process 45530

[root@oda repo]# ps -ef | grep dcs-agent | grep -v grep
root 45530 1 99 15:33 ? 00:00:10 java -Xms128m -Xmx512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=512m -XX:+DisableExplicitGC -XX:ParallelGCThreads=4 -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:/opt/oracle/dcs/log/gc-dcs-agent-%t-%p.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M -Doracle.security.jps.config=/opt/oracle/dcs/agent/jps-config.xml -jar /opt/oracle/dcs/bin/dcs-agent-2.4.12-oda-SNAPSHOT.jar server /opt/oracle/dcs/conf/dcs-agent.json
[root@oda repo]#
4) Update metadata information

We now need to connect to the Derby backup database and make home id changes for the specific database.

Let’s connect to the derby database :
[root@oda repo]# /usr/java/jdk1.8.0_161/db/bin/ij
ij version 10.11
ij> connect 'jdbc:derby:node_0_backup_27082019_1533';

Let’s check current metadata information :
ij> select DBHOMEID from DB where ID='f38f3a6c-987c-4e11-8cfa-af5cb66ff4e3';
DBHOMEID
--------------------------------------------------------------------------------------------------------------------------------
89f6687e-f575-45fc-91ef-5521374c54c0
1 row selected

Let's update the metadata according to the home change:
ij> update DB set DBHOMEID='9783fd89-f035-4d1a-aaaf-f1cdb09c6ea8' where ID='f38f3a6c-987c-4e11-8cfa-af5cb66ff4e3';
1 row inserted/updated/deleted

Let’s check the updated information :
ij> select DBHOMEID from DB where ID='f38f3a6c-987c-4e11-8cfa-af5cb66ff4e3';
DBHOMEID
--------------------------------------------------------------------------------------------------------------------------------
9783fd89-f035-4d1a-aaaf-f1cdb09c6ea8
1 row selected

Let’s commit the changes :
ij> commit;

And finally exit :
ij> exit;

5) Stop the DCS Agent

[root@oda repo]# initctl stop initdcsagent
initdcsagent stop/waiting

[root@oda repo]# ps -ef | grep dcs-agent | grep -v grep
[root@oda repo]#
6) Apply the changes in production

In this step, we will rename the original repository to keep a backup and put our changes in production.

List current repository folder :
[root@oda repo]# ls -ltrh
total 28K
drwxr-xr-x 4 root root 4.0K Aug 12 16:18 node_0_orig_12082019_1619
drwxr-xr-x 4 root root 4.0K Aug 26 11:31 node_0_orig_26082019_1132
drwxr-xr-x 4 root root 4.0K Aug 26 15:27 node_0_orig_26082019_1528
drwxr-xr-x 4 root root 4.0K Aug 27 11:57 node_0_orig_27082019_1158
drwxr-xr-x 4 root root 4.0K Aug 27 15:35 node_0_backup_27082019_1533
-rw-r--r-- 1 root root 1.2K Aug 27 15:35 derby.log
drwxr-xr-x 4 root root 4.0K Aug 27 15:36 node_0

Backup the original database :
[root@oda repo]# mv node_0 node_0_orig_27082019_1536

Put our changes in production :
[root@oda repo]# mv node_0_backup_27082019_1533 node_0

Check the repository folder :
[root@oda repo]# ls -ltrh
total 28K
drwxr-xr-x 4 root root 4.0K Aug 12 16:18 node_0_orig_12082019_1619
drwxr-xr-x 4 root root 4.0K Aug 26 11:31 node_0_orig_26082019_1132
drwxr-xr-x 4 root root 4.0K Aug 26 15:27 node_0_orig_26082019_1528
drwxr-xr-x 4 root root 4.0K Aug 27 11:57 node_0_orig_27082019_1158
drwxr-xr-x 4 root root 4.0K Aug 27 15:35 node_0
-rw-r--r-- 1 root root 1.2K Aug 27 15:35 derby.log
drwxr-xr-x 4 root root 4.0K Aug 27 15:36 node_0_orig_27082019_1536

7) Start the DCS Agent

[root@oda repo]# initctl start initdcsagent
initdcsagent start/running, process 59703

[root@oda repo]# ps -ef | grep dcs-agent | grep -v grep
root 59703 1 99 15:37 ? 00:00:11 java -Xms128m -Xmx512m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=512m -XX:+DisableExplicitGC -XX:ParallelGCThreads=4 -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:/opt/oracle/dcs/log/gc-dcs-agent-%t-%p.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M -Doracle.security.jps.config=/opt/oracle/dcs/agent/jps-config.xml -jar /opt/oracle/dcs/bin/dcs-agent-2.4.12-oda-SNAPSHOT.jar server /opt/oracle/dcs/conf/dcs-agent.json
[root@oda repo]#
8) Check ODA metadata

Now we can check and see that the Derby database shows the correct metadata information.

List dbhomes:
[root@oda repo]# odacli list-dbhomes
ID Name DB Version Home Location Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
ed0a6667-0d70-4113-8a5e-3afaf1976fc2 OraDB12102_home1 12.1.0.2.171017 (26914423, 26717470) /u01/app/oracle/product/12.1.0.2/dbhome_1 Configured
89f6687e-f575-45fc-91ef-5521374c54c0 OraDB11204_home1 11.2.0.4.171017 (26609929, 26392168) /u01/app/oracle/product/11.2.0.4/dbhome_1 Configured
8c6bc663-b064-445b-8a14-b7c46df9d1da OraDB12102_home3 12.1.0.2.171017 (26914423, 26717470) /u01/app/oracle/product/12.1.0.2/dbhome_3 Configured
9783fd89-f035-4d1a-aaaf-f1cdb09c6ea8 OraDB11204_home2 11.2.0.4.171017 (26609929, 26392168) /u01/app/oracle/product/11.2.0.4/dbhome_2 Configured

Check the new database metadata information:
[root@oda repo]# odacli list-databases
ID DB Name DB Type DB Version CDB Class Shape Storage Status DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
f38f3a6c-987c-4e11-8cfa-af5cb66ff4e3 mydb Si 11.2.0.4 false OLTP Odb1 ACFS Configured 9783fd89-f035-4d1a-aaaf-f1cdb09c6ea8

[root@oda repo]# odacli describe-database -i f38f3a6c-987c-4e11-8cfa-af5cb66ff4e3
Database details
----------------------------------------------------------------
ID: f38f3a6c-987c-4e11-8cfa-af5cb66ff4e3
Description: mydb
DB Name: mydb
DB Version: 11.2.0.4
DB Type: Si
DB Edition: EE
DBID:
Instance Only Database: true
CDB: false
PDB Name:
PDB Admin User Name:
Class: OLTP
Shape: Odb1
Storage: ACFS
CharacterSet: AL32UTF8
National CharacterSet: AL16UTF16
Language: AMERICAN
Territory: AMERICA
Home ID: 9783fd89-f035-4d1a-aaaf-f1cdb09c6ea8
Console Enabled: false
Level 0 Backup Day: Sunday
AutoBackup Disabled: false
Created: June 11, 2019 2:46:35 PM CEST
DB Domain Name: in-kon.ch

Conclusion

The database is now running on the new Oracle home and the ODA metadata information is up to date. Updating the metadata is important for any further database upgrade or deletion performed with odacli commands; otherwise those commands might fail.

The article Moving oracle database to new home on ODA appeared first on Blog dbi services.

Migrating Oracle database from windows to ODA


I have recently been working on an interesting customer project where I had to migrate Windows Oracle Standard Edition databases to ODA. The ODAs are X7-2M models running version 18.5. This version comes with Red Hat Enterprise Linux 6.10 (Santiago). Both the Windows databases and the target ODA databases run PSU 11.2.0.4.190115, but this would definitely also work for Oracle 12c and Oracle 18c databases. The databases are licensed with Standard Edition, so migrating through Data Guard was not possible. Through this blog I would like to share the experience I gained on this topic, as well as the method and steps I used to successfully migrate those databases.

Limitations

Windows and Linux platforms being of the same endianness, I initially thought that it would not be more complicated than simply duplicating the Windows database to an ODA instance using the last backup. ODA databases are OMF databases, so it could not be easier, as no convert parameter is needed.
After creating a single instance database on the ODA, exporting the current database pfile and adapting it for the ODA, and creating the needed TNS connections, I ran a single RMAN duplicate command:

RMAN> run {
2> set newname for database to new;
3> duplicate target database to 'ODA_DBNAME' backup location '/u99/app/oracle/backup';
4> }

Note: if the database is huge, for example more than a terabyte, and your SGA is small, you might want to increase it; a bigger SGA will lower the restore time, and a minimum of 50 GB would be a good compromise (a hedged sketch follows). Also, if your ODA is from the ODA X7 family, you will benefit from the NVMe technology: in my experience, the duplication of a 1.5 TB database, with the backup stored locally, did not take more than 40 minutes.
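As a hedged example, the auxiliary pfile used for the duplicate could simply carry a bigger SGA for the duration of the restore; the values below are illustrative and must fit the ODA's memory and hugepages configuration:

# temporary settings in the auxiliary instance pfile, to be reduced again after the migration
*.sga_target=50G
*.sga_max_size=50G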

I was more than happy to see the first duplication step complete successfully:

Finished restore at 17-JUL-2019 16:45:10

And I was expecting the same for the next recovery part.

Unfortunately, this didn't end as expected and I quickly got the following errors during the recovery phase:

Errors in memory script
RMAN-03015: error occurred in stored script Memory Script
RMAN-06136: ORACLE error from auxiliary database: ORA-01507: database not mounted
ORA-06512: at "SYS.X$DBMS_RCVMAN", line 13661
ORA-06512: at line 1
RMAN-03015: error occurred in stored script Memory Script
RMAN-20000: abnormal termination of job step
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_07_17/o1_mf_1_25514_glyf3yd3_.arc'
RMAN-11001: Oracle Error:
ORA-10562: Error occurred while applying redo to data block (file# 91, block# 189)
ORA-10564: tablespace DBVISIT
ORA-01110: data file 91: '/u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_dbvisit_glyczqcj_.dbf'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 501874
ORA-00600: internal error code, arguments: [4502], [0], [], [], [], [], [], [], [], [], [], [] RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 07/17/2019 16:45:32
RMAN-05501: aborting duplication of target database

Troubleshooting the problem, I understood that migrating a database from Windows to Linux might not be so simple. The following Oracle Doc IDs describe the problem:
Restore From Windows To Linux using RMAN Fails (Doc ID 2003327.1)
Cross-Platform Database Migration (across same endian) using RMAN Transportable Database (Doc ID 1401921.1)
RMAN DUPLICATE/RESTORE/RECOVER Mixed Platform Support (Doc ID 1079563.1)

The problem comes from the fact that applying redo between the Windows and Linux platforms is not supported if the database is not a standby one. For a non-standby database, the only possibility would be to go through a cold backup, which, in my case, was impossible given the database size, the time needed to execute a backup and the short maintenance window.

Looking for other options and doing further tests, I found a solution that I'm going to describe in the next steps.

Restoring the database from the last backup

In order to restore the database, I ran the following steps.

  1. Start the ODA instance in nomount:

    SQL> startup nomount

  2. Restore the last available control file from backup with RMAN:

    RMAN> connect target /
     
    RMAN> restore controlfile from '/mnt/backupNFS/oracle/ODA_DBNAME/20190813_233004_CTL_ODA_DBNAME_1179126808_S2864_P1.BCK';

  3. Mount the database:

    SQL> alter database mount;

  4. Catalog the backup path:

    RMAN> connect target /
     
    RMAN> catalog start with '/mnt/backupNFS/oracle/ODA_DBNAME';

  5. And finally, restore the database:

    RMAN> connect target /
     
    RMAN> run {
    2> set newname for database to new;
    3> restore database;
    4> switch datafile all;
    5> }

Convert the primary database to a physical standby database

In order to be able to recover the database we will convert the primary database to a physical standby one.

  1. We can check the current status and see that our database is a primary in mounted state:

    SQL> select status,instance_name,database_role,open_mode from v$database,v$Instance;
     
    STATUS INSTANCE_NAME DATABASE_ROLE OPEN_MODE
    ------------ ---------------- ---------------- --------------------
    MOUNTED ODA_DBNAME PRIMARY MOUNTED

  2. We will convert the database to a physical standby:

    SQL> alter database convert to physical standby;
     
    Database altered.

  3. We need to restart the database:

    SQL> shutdown immediate
     
    SQL> startup mount

  4. We can check the new database status:

    SQL> select status,instance_name,database_role,open_mode from v$database,v$Instance;
     
    STATUS INSTANCE_NAME DATABASE_ROLE OPEN_MODE
    ------------ ---------------- ---------------- --------------------
    MOUNTED ODA_DBNAME PHYSICAL STANDBY MOUNTED

Get the current SCN of the Windows database

We are now ready to recover the database and the application can be stopped. The next steps are executed during the maintenance window. The Windows database listener can be stopped to make sure there are no new connections.
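On the Windows source this can be as simple as the following, assuming the default listener name LISTENER:

C:\> lsnrctl stop LISTENER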

  1. We will make sure there are no remaining application sessions on the database:

    SQL> set linesize 300
    SQL> set pagesize 500
    SQL> col machine format a20
    SQL> col service_name format a20
     
    SQL> select SID, serial#, username, machine, process, program, status, service_name, logon_time from v$session where username not in ('SYS', 'PUBLIC') and username is not null order by status, username;

  2. We will create a restore point:

    SQL> create restore point for_migration_14082019;
     
    Restore point created.

  3. We will get the last online log transactions archived:

    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
     
    System altered.

  4. We will retrieve the SCN corresponding to the restore point:

    SQL> col scn format 999999999999999
     
    SQL> select scn from v$restore_point where lower(name)='for_migration_14082019';
     
    SCN
    ----------------
    13069540631

  5. We will back up the last archive logs. This is executed on the Windows database using our dbi services internal DMK tool (https://www.dbi-services.com/offering/products/dmk-management-kit/):

    servicedbi@win_srv:E:\app\oracle\local\dmk_custom\bin\ [ODA_DBNAME] ./rman_backup_ODA_DBNAME_arc.bat
     
    E:\app\oracle\local\dmk_custom\bin>powershell.exe -command "E:\app\oracle\local\dmk_ha\bin\check_primary.ps1 ODA_DBNAME 'dmk_rman.ps1 -s ODA_DBNAME -t bck_arc.rcv -c E:\app\oracle\admin\ODA_DBNAME\etc\rman.cfg
     
    [OK]::KSBL::RMAN::dmk_dbbackup::ODA_DBNAME::bck_arc.rcv
     
    Logfile is : E:\app\oracle\admin\ODA_DBNAME\log\ODA_DBNAME_bck_arc_20190814_141754.log
     
    RMAN return Code: 0
    2019-08-14_02:19:01::check_primary.ps1::MainProgram ::INFO ==> Program completed

Recover the database

The database can now be recovered up to our SCN 13069540631.

  1. We will first need to catalog the new archive log backups:

    RMAN> connect target /
     
    RMAN> catalog start with '/mnt/backupNFS/oracle/ODA_DBNAME';

  2. And recover the database until SCN 13069540632:

    RMAN> connect target /
     
    RMAN> run {
    2> set until scn 13069540632;
    3> recover database;
    4> }
     
    archived log file name=/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30098_go80084r_.arc RECID=30124 STAMP=1016289320
    archived log file name=/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30099_go80084x_.arc thread=1 sequence=30099
    channel default: deleting archived log(s)
    archived log file name=/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30099_go80084x_.arc RECID=30119 STAMP=1016289320
    archived log file name=/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30100_go8008bg_.arc thread=1 sequence=30100
    channel default: deleting archived log(s)
    archived log file name=/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30100_go8008bg_.arc RECID=30121 STAMP=1016289320
    media recovery complete, elapsed time: 00:00:02
    Finished recover at 14-AUG-2019 14:35:23

  3. We can check the alert log and see that recovery has been performed up to SCN 13069540632:

    oracle@ODA02:/u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/ [ODA_DBNAME] taa
    ORA-279 signalled during: alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30098_go80084r_.arc'...
    alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30099_go80084x_.arc'
    Media Recovery Log /u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30099_go80084x_.arc
    ORA-279 signalled during: alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30099_go80084x_.arc'...
    alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30100_go8008bg_.arc'
    Media Recovery Log /u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30100_go8008bg_.arc
    Wed Aug 14 14:35:23 2019
    Incomplete Recovery applied until change 13069540632 time 08/14/2019 14:13:46
    Media Recovery Complete (ODA_DBNAME)
    Completed: alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30100_go8008bg_.arc'

  4. We can check the new ODA database's current SCN:

    SQL> col current_scn format 999999999999999
     
    SQL> select current_scn from v$database;
     
    CURRENT_SCN
    ----------------
    13069540631

Convert database to primary again

The database can now be converted back to primary.

SQL> alter database activate standby database;
 
Database altered.


SQL> select status,instance_name,database_role,open_mode from v$database,v$Instance;
 
STATUS INSTANCE_NAME DATABASE_ROLE OPEN_MODE
------------ ---------------- ---------------- --------------------
MOUNTED ODA_DBNAME PRIMARY MOUNTED

At this step, if the Windows source database were running version 11.2.0.3, we could upgrade the new ODA database to 11.2.0.4 following the common Oracle database upgrade process.

And finally we can open our database, now migrated from Windows to Linux.


SQL> alter database open;
 
Database altered.


SQL> select status,instance_name,database_role,open_mode from v$database,v$Instance;
 
STATUS INSTANCE_NAME DATABASE_ROLE OPEN_MODE
------------ ---------------- ---------------- --------------------
OPEN ODA_DBNAME PRIMARY READ WRITE


oracle@ODA02:/u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/ [ODA_DBNAME] ODA_DBNAME
********* dbi services Ltd. *********
STATUS : OPEN
DB_UNIQUE_NAME : ODA_DBNAME_RZA
OPEN_MODE : READ WRITE
LOG_MODE : ARCHIVELOG
DATABASE_ROLE : PRIMARY
FLASHBACK_ON : NO
FORCE_LOGGING : YES
VERSION : 11.2.0.4.0
*************************************

Post migration steps

There will be a few post migration steps to be executed.

Create the redo logs again

The redo logs are still stamped with the Windows path and have therefore been created in the $ORACLE_HOME/dbs folder. In these steps we will create new OMF ones.

  1. Check the current online log members:

    SQL> set linesize 300
    SQL> set pagesize 500
    SQL> col member format a100
     
    SQL> select a.GROUP#, b.member, a.status, a.bytes/1024/1024 MB from v$log a, v$logfile b where a.GROUP#=b.GROUP#;
     
    GROUP# MEMBER STATUS MB
    ---------- ---------------------------------------------------------------------------------------------------- ---------------- ----------
    6 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_6_1.LOG UNUSED 500
    6 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_6_2.LOG UNUSED 500
    5 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_5_2.LOG UNUSED 500
    5 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_5_1.LOG UNUSED 500
    4 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_4_2.LOG UNUSED 500
    4 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_4_1.LOG UNUSED 500
    3 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_3_2.LOG UNUSED 500
    3 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_3_1.LOG UNUSED 500
    2 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_2_2.LOG UNUSED 500
    2 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_2_1.LOG UNUSED 500
    1 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_1_2.LOG CURRENT 500
    1 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_1_1.LOG CURRENT 500

  2. Drop the unused redo log groups, keeping only one:

    SQL> alter database drop logfile group 6;
     
    Database altered.
     
    SQL> alter database drop logfile group 5;
     
    Database altered.
     
    SQL> alter database drop logfile group 4;
     
    Database altered.
     
    SQL> alter database drop logfile group 3;
     
    Database altered.
     

  3. Create the recently dropped groups again:

    SQL> alter database add logfile group 3 size 500M;
     
    Database altered.
     
    SQL> alter database add logfile group 4 size 500M;
     
    Database altered.
     
    SQL> alter database add logfile group 5 size 500M;
     
    Database altered.
     
    SQL> alter database add logfile group 6 size 500M;
     
    Database altered.

  4. Drop the last unused redo log group and create it again:

    SQL> alter database drop logfile group 2;
     
    Database altered.
     
    SQL> alter database add logfile group 2 size 500M;
     
    Database altered.

  5. Execute a log file switch and a checkpoint so the current redo group becomes unused:

    SQL> alter system switch logfile;
     
    System altered.
     
    SQL> alter system checkpoint;
     
    System altered.

  6. Drop it and create it again:

    SQL> alter database drop logfile group 1;
     
    Database altered.
     
    SQL> alter database add logfile group 1 size 500M;
     
    Database altered.

  7. Check the redo log group members:

    SQL> select a.GROUP#, b.member, a.status, a.bytes/1024/1024 MB from v$log a, v$logfile b where a.GROUP#=b.GROUP#;
     
    GROUP# MEMBER STATUS MB
    ---------- ---------------------------------------------------------------------------------------------------- ---------------- ----------
    3 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_3_go81rj4t_.log INACTIVE 500
    3 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_3_go81rjqn_.log INACTIVE 500
    4 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_4_go81ron1_.log UNUSED 500
    4 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_4_go81rp6o_.log UNUSED 500
    5 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_5_go81rwhs_.log UNUSED 500
    5 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_5_go81rx1g_.log UNUSED 500
    6 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_6_go81s1rk_.log UNUSED 500
    6 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_6_go81s2bx_.log UNUSED 500
    2 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_2_go81sgdf_.log CURRENT 500
    2 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_2_go81sgxd_.log CURRENT 500
    1 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_1_go81vpls_.log UNUSED 500
    1 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_1_go81vq4v_.log UNUSED 500

  8. Delete the previous, wrongly named redo log member files:

    oracle@ODA02:/u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/ [ODA_DBNAME] cdh
     
    oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/ [ODA_DBNAME] cd dbs
     
    oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] ls -ltrh *REDO*.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_6_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_6_1.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_5_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_5_1.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_4_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_4_1.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_3_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_3_1.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_2_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_2_1.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 15:05 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_1_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 15:05 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_1_1.LOG
     
    oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] rm *REDO*.LOG

Create the temp file again

  1. Checking the current temp file, we can see that the path is still the Windows one:

    SQL> set linesize 300
    SQL> col name format a100
     
    SQL> select b.name, b.status, b.bytes/1024/1024 MB, a.name from v$tablespace a, v$tempfile b where a.TS#=b.TS#;
     
    NAME STATUS MB NAME
    ---------------------------------------------------------------------------------------------------- ------- ---------- -------------------------------------------
    F:\ORADATA\ODA_DBNAME\TEMPORARY_DATA_1.DBF ONLINE 8192 TEMPORARY_DATA

  2. We can check that the default temporary tablespace is TEMPORARY_DATA:

    SQL> col property_value format a50
     
    SQL> select property_name, property_value from database_properties where property_name like '%DEFAULT%TABLESPACE%';
     
    PROPERTY_NAME PROPERTY_VALUE
    ------------------------------ --------------------------------------------------
    DEFAULT_TEMP_TABLESPACE TEMPORARY_DATA
    DEFAULT_PERMANENT_TABLESPACE USER_DATA

  3. Let's create a new temp tablespace and make it the default one:

    SQL> create temporary tablespace TEMP tempfile size 8G;
     
    Tablespace created.
     
    SQL> alter database default temporary tablespace TEMP;
     
    Database altered.
     
    SQL> select property_name, property_value from database_properties where property_name like '%DEFAULT%TABLESPACE%';
     
    PROPERTY_NAME PROPERTY_VALUE
    ------------------------------ --------------------------------------------------
    DEFAULT_TEMP_TABLESPACE TEMP
    DEFAULT_PERMANENT_TABLESPACE USER_DATA

  4. Drop the previous TEMPORARY_DATA tablespace:

    SQL> drop tablespace TEMPORARY_DATA including contents and datafiles;
     
    Tablespace dropped.
     
    SQL> select b.file#, b.name, b.status, b.bytes/1024/1024 MB, a.name from v$tablespace a, v$tempfile b where a.TS#=b.TS#;
     
    FILE# NAME STATUS MB NAME
    ---------- ---------------------------------------------------------------------------------------------------- ------- ----------
    3 /u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_temp_go83m1tp_.tmp ONLINE 8192 TEMP

  5. Create the TEMPORARY_DATA tablespace again and make it the default one:

    SQL> create temporary tablespace TEMPORARY_DATA tempfile size 8G;
     
    Tablespace created.
     
    SQL> select b.file#, b.name, b.status, b.bytes/1024/1024 MB, a.name from v$tablespace a, v$tempfile b where a.TS#=b.TS#;
     
    FILE# NAME STATUS MB NAME
    ---------- ---------------------------------------------------------------------------------------------------- ------- ----------
    1 /u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_temporar_go83wfd7_.tmp ONLINE 8192 TEMPORARY_DATA
    3 /u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_temp_go83m1tp_.tmp ONLINE 8192 TEMP
     
    SQL> alter database default temporary tablespace TEMPORARY_DATA;
     
    Database altered.
     
    SQL> select property_name, property_value from database_properties where property_name like '%DEFAULT%TABLESPACE%';
     
    PROPERTY_NAME PROPERTY_VALUE
    ------------------------------ --------------------------------------------------
    DEFAULT_TEMP_TABLESPACE TEMPORARY_DATA
    DEFAULT_PERMANENT_TABLESPACE USER_DATA

  6. And finally drop the intermediate temp tablespace:

    SQL> drop tablespace TEMP including contents and datafiles;
     
    Tablespace dropped.
     
    SQL> select b.file#, b.name, b.status, b.bytes/1024/1024 MB, a.name from v$tablespace a, v$tempfile b where a.TS#=b.TS#;
     
    FILE# NAME STATUS MB NAME
    ---------- ---------------------------------------------------------------------------------------------------- ------- ----------
    1 /u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_temporar_go83wfd7_.tmp ONLINE 8192 TEMPORARY_DATA

  7. An appropriate max size can be given to the newly created temp tablespace:

    SQL> alter database tempfile '/u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_temporar_go83wfd7_.tmp' autoextend on maxsize 31G;
     
    Database altered.

  8. Remove the wrong temp file stored in $ORACLE_HOME/dbs:

    oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] ls -ltr
    -rw-r--r-- 1 oracle oinstall 2851 May 15 2009 init.ora
    -rw-r--r-- 1 oracle oinstall 64 Jul 25 08:10 initODA_DBNAME.ora.old
    -rw-r----- 1 oracle oinstall 2048 Jul 25 08:10 orapwODA_DBNAME
    -rw-r--r-- 1 oracle oinstall 67 Jul 25 08:31 initODA_DBNAME.ora
    -rw-r----- 1 oracle asmadmin 8589942784 Aug 14 08:14 F:ORADATAODA_DBNAMETEMPORARY_DATA_1.DBF
    -rw-rw---- 1 oracle asmadmin 1544 Aug 14 14:59 hc_ODA_DBNAME.dat
    -rw-r----- 1 oracle asmadmin 43466752 Aug 14 15:48 snapcf_ODA_DBNAME.f
     
    oracle@RZA-ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] rm F:ORADATAODA_DBNAMETEMPORARY_DATA_1.DBF

Apply specific ODA parameters

The following ODA-specific parameters can be applied to the newly created instance.


SQL> alter system set "_datafile_write_errors_crash_instance"=false scope=spfile;
 
System altered.
 
SQL> alter system set "_db_writer_coalesce_area_size"=16777216 scope=spfile;
 
System altered.
 
SQL> alter system set "_disable_interface_checking"=TRUE scope=spfile;
 
System altered.
 
SQL> alter system set "_ENABLE_NUMA_SUPPORT"=FALSE scope=spfile;
 
System altered.
 
SQL> alter system set "_FILE_SIZE_INCREASE_INCREMENT"=2143289344 scope=spfile;
 
System altered.
 
SQL> alter system set "_gc_policy_time"=0 scope=spfile;
 
System altered.
 
SQL> alter system set "_gc_undo_affinity"=FALSE scope=spfile;
 
System altered.
 
SQL> alter system set db_block_checking='FULL' scope=spfile;
 
System altered.
 
SQL> alter system set db_block_checksum='FULL' scope=spfile;
 
System altered.
 
SQL> alter system set db_lost_write_protect='TYPICAL' scope=spfile;
 
System altered.
 
SQL> alter system set sql92_security=TRUE scope=spfile;
 
System altered.
 
SQL> alter system set use_large_pages='only' scope=spfile;
 
System altered.

The "_fix_control" parameter is specific to Oracle 12c and not compatible with Oracle 11g. See Doc ID 2145105.1.

Register database in grid

After applying the ODA-specific instance parameters, we can register the database in the grid and start it through the grid.


oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl add database -d ODA_DBNAME_RZA -o /u01/app/oracle/product/11.2.0.4/dbhome_1 -c SINGLE -i ODA_DBNAME -x RZA-ODA02 -m ksbl.local -p /u02/app/oracle/oradata/ODA_DBNAME_RZA/dbs/spfileODA_DBNAME.ora -r PRIMARY -s OPEN -t IMMEDIATE -n ODA_DBNAME -j "/u02/app/oracle/oradata/ODA_DBNAME_RZA,/u03/app/oracle"
 
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl start database -d ODA_DBNAME_RZA
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl status database -d ODA_DBNAME_RZA
Instance ODA_DBNAME is running on node rza-oda02
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] ODA_DBNAME
********* dbi services Ltd. *********
STATUS : OPEN
DB_UNIQUE_NAME : ODA_DBNAME_RZA
OPEN_MODE : READ WRITE
LOG_MODE : ARCHIVELOG
DATABASE_ROLE : PRIMARY
FLASHBACK_ON : NO
FORCE_LOGGING : YES
VERSION : 11.2.0.4.0
*************************************

We can check that everything works as expected:

oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl stop database -d ODA_DBNAME_RZA
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl status database -d ODA_DBNAME_RZA
Instance ODA_DBNAME is not running on node rza-oda02
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] ODA_DBNAME
********* dbi services Ltd. *********
STATUS : STOPPED
*************************************
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl start database -d ODA_DBNAME_RZA
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl status database -d ODA_DBNAME_RZA
Instance ODA_DBNAME is running on node rza-oda02
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] ODA_DBNAME
********* dbi services Ltd. *********
STATUS : OPEN
DB_UNIQUE_NAME : ODA_DBNAME_RZA
OPEN_MODE : READ WRITE
LOG_MODE : ARCHIVELOG
DATABASE_ROLE : PRIMARY
FLASHBACK_ON : NO
FORCE_LOGGING : YES
VERSION : 11.2.0.4.0
*************************************

Conclusion

Going through a physical standby database, I was able to successfully migrate the Windows databases to the Linux ODA. I migrated source 11.2.0.4 databases, but also an 11.2.0.3 database by adding an upgrade step to the process.

The article Migrating Oracle database from windows to ODA appeared first on Blog dbi services.
