
Oracle DB on Azure with Multitenant Option


By Franck Pachot

If you want to run an Oracle Database in the Microsoft Azure cloud, you install it yourself on a VM. And then you can expect the same as when you install it on premises, in a virtual environment. Except that Oracle makes it twice as expensive, by accepting to license the processor metric on vCPUs only on the condition that the Intel core factor is not applied. And in addition to that, there is a weird limitation on multitenant. I created a VM with the 19c image provided by Azure. I’m not sure there’s a big advantage in using this image. You get an Oracle Linux OS with all prerequisites (which is like having installed the preinstall rpm) and a 19.3 Oracle Home (which is like downloading it, unzipping it and running runInstaller). But you will still have to apply the latest Release Update and create a database.

Here is what I’ve run after the creation of the VM (can be done through CustomData)


# download latest OPatch and RU (requires the URLs in OPatch and ReleaseUpdate tags)
(
mkdir -p /var/tmp/OPatch && cd /var/tmp/OPatch
wget -qc $( curl 'http://169.254.169.254/metadata/instance/compute/tags?api-version=2018-02-01&format=text' -H Metadata:true -s | awk -F";" '/OPatch:/{sub(/(^|;).*OPatch:/,"");print $1}')
mkdir -p /var/tmp/RU && cd /var/tmp/RU
wget -qc $(curl 'http://169.254.169.254/metadata/instance/compute/tags?api-version=2018-02-01&format=text' -H Metadata:true -s | awk -F";" '/ReleaseUpdate:/{sub(/(^|;).*ReleaseUpdate:/,"");print $1}')
unzip -qo p*_*_Linux-x86-64.zip
rm -f p*_*_Linux-x86-64.zip
chown -R oracle:oinstall /var/tmp/OPatch /var/tmp/RU
)
# fix the missing sar directory
mkdir -p /var/log/sa

I set the OPatch and ReleaseUpdate tags on the VM with the URLs of the binaries in the Object Store, and the script downloads them from there. It also fixes the missing /var/log/sa directory.
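
For reference, here is a minimal sketch of how such tags can be set with the Azure CLI (the resource group, VM name, storage account and patch file names are placeholders for this example, not the ones I used):

# tag the VM with the download URLs read by the script above (names and URLs are examples)
az vm update --resource-group my-rg --name ora19c-vm --set \
  tags.OPatch="https://mystorage.blob.core.windows.net/binaries/p6880880_190000_Linux-x86-64.zip" \
  tags.ReleaseUpdate="https://mystorage.blob.core.windows.net/binaries/p32218454_190000_Linux-x86-64.zip"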


# get Oracle Home from inventory
ORACLE_HOME=$(sudo xmllint --xpath 'string(/INVENTORY/HOME_LIST[1]/HOME/@LOC)' $(awk -F= '/^inventory_loc/{print $2}' /etc/oraInst.loc)/ContentsXML/inventory.xml)
# create autostart service
sudo tee /etc/systemd/system/oracledb.service <<CAT
[Unit]
Description=Oracle Database start/stop
Before=shutdown.target
After=network-online.target
[Service]
Type=idle
#LimitMEMLOCK=infinity
#LimitNOFILE=65535
#EnvironmentFile=/etc/oracle.env
User=oracle
Group=oinstall
ExecStart=$ORACLE_HOME/bin/dbstart $ORACLE_HOME
ExecStop=$ORACLE_HOME/bin/dbshut $ORACLE_HOME
RemainAfterExit=yes
Restart=no
[Install]
WantedBy=multi-user.target
CAT
sudo systemctl enable oracledb

This will automatically stop and start the databases flagged with ‘Y’ in /etc/oratab.
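
dbstart considers only the entries flagged ‘Y’, so once the database exists you flag it and start the service. A minimal sketch, assuming the SID created later in this post (CDB1):

# flag the CDB1 entry for autostart (the SID is the one created later in this post)
sudo sed -i 's;^\(CDB1:.*\):N$;\1:Y;' /etc/oratab
# start the service now; it is already enabled for the next boot
sudo systemctl start oracledb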


# Now connecting as Oracle user
sudo su - oracle <<'SU'
# set read only Oracle Home
roohctl -enable
# update to latest RU
unzip -qo -d $ORACLE_HOME /var/tmp/OPatch/p*_*_Linux-x86-64.zip
$ORACLE_HOME/OPatch/opatch lspatches
cd /var/tmp/RU/*
$ORACLE_HOME/OPatch/opatch apply --silent
cd ; rm -rf /var/tmp/RU/*
SU

I set the Oracle Home to “ROOH” mode, where the configuration files are separated from the binaries, and apply the downloaded Release Update.
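
If you want to see where the configuration files now live, here is a quick check (a sketch, assuming ORACLE_HOME is set in the oracle user environment as in the blocks above):

sudo su - oracle <<'SU'
# with ROOH enabled, the dbs and network/admin files are under the Oracle Base home, not under ORACLE_HOME
echo "ORACLE_HOME:   $ORACLE_HOME"
echo "orabasehome:   $($ORACLE_HOME/bin/orabasehome)"
echo "orabaseconfig: $($ORACLE_HOME/bin/orabaseconfig)"
SU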


sudo su - oracle <<'SU'
# put the passwords in the wallet
mkdir -p       /u01/app/oracle/dbc_wallet
 yes "W4Lle7p422w0RD" | mkstore -wrl  /u01/app/oracle/dbc_wallet -create
for user in sys system pdbAdmin ; do
 yes "W4Lle7p422w0RD" | mkstore -wrl /u01/app/oracle/dbc_wallet -createEntry oracle.dbsecurity.${user}Password "Oracle19c$RANDOM"
done
# delete existing databases (just in case because we will re-create)
for i in $(awk -F: '/^.*:/{print $1}' /etc/oratab | sort -r) ; do $ORACLE_HOME/bin/dbca -silent -deleteDatabase -sourceDB $i -forceArchiveLogDeletion -useWalletForDBCredentials true -dbCredentialsWalletLocation /u01/app/oracle/dbc_wallet ; done ; rm -rf /u01/ora*
# create the DB
cdb_name=CDB
pdb_name=PDB
unique_name=${cdb_name}_$( curl 'http://169.254.169.254/metadata/instance/compute/name?api-version=2018-02-01&format=text' -H Metadata:true -s )
grep ^CDB1: /etc/oratab || $ORACLE_HOME/bin/dbca -silent \
 -createDatabase -gdbName CDB1 -sid CDB1 -createAsContainerDatabase true -numberOfPdbs 1 -pdbName PDB1 \
 -useWalletForDBCredentials true -dbCredentialsWalletLocation /u01/app/oracle/dbc_wallet \
 -datafileDestination /u01/oradata -useOMF true -storageType FS \
 -recoveryAreaDestination /u01/orareco -recoveryAreaSize 4096 -enableArchive true \
 -memoryMgmtType AUTO_SGA -totalMemory 4096 \
 -createListener LISTENER:1521 -emConfiguration EMEXPRESS -emExpressPort 443 \
 -templateName General_Purpose.dbc -databaseType OLTP -sampleSchema true -redoLogFileSize 100 \
 -initParams db_unique_name=CDB1,use_large_pages=AUTO,shared_pool_size=600M
# old habit to run datapatch
$ORACLE_HOME/OPatch/datapatch
# enable database flashback and block change tracking.
rman target / <<<'set echo on; alter database flashback on; alter database enable block change tracking;'
# create a service for the application
sqlplus / as sysdba <<'SQL'
alter session set container=PDB1;
exec dbms_service.create_service(service_name=>'ORCL',network_name=>'ORCL');
exec dbms_service.start_service(service_name=>'ORCL');
alter pluggable database save state;
SQL
SU

This is the creation of the database, CDB1 here with a PDB1 pluggable database.


sudo su - oracle <<'SU'
# define the crontab to backup locally (recovery area)
t=0 ;for i in $(awk -F: '/^.*:/{print $1}' /etc/oratab | sort -r) ; do t=$(($t+1))
cat <<CRONTAB
30 $t * * 6   ( PATH="/usr/local/bin:$PATH" ; . oraenv -s <<<$i ; rman target / ) > /tmp/crontab_backup.log 2>&1 <<<'set echo on; backup as compressed backupset incremental level 0 database tag "ORACLE_CRONTAB_WEEKLY";'
05 $t * * 1-5 ( PATH="/usr/local/bin:$PATH" ; . oraenv -s <<<$i ; rman target / ) > /tmp/crontab_backup.log 2>&1 <<<'set echo on; backup incremental as compressed backupset level 1 database tag "ORACLE_CRONTAB_DAILY";'
${t}0 * * * * ( PATH="/usr/local/bin:$PATH" ; . oraenv -s <<<$i ; rman target / ) > /tmp/crontab_backup.log 2>&1 <<<'set echo on; backup as compressed backupset archivelog all tag "ORACLE_CRONTAB_HOURLY";'
CRONTAB
done | crontab
SU

This sets up a very basic backup strategy into the recovery area: full backup on the weekend, incremental at night, archivelog every hour. It is to be customized (maybe backup the recovery area to NFS – see Tim Gorman’s answer on AskTOM) and monitored (I like to run a “report need backup” independently and, of course, monitor the non-reclaimable space in v$recovery_area_usage).
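
As a minimal sketch of that monitoring (the SID is the one created above), something like the following can be run or scheduled:

sudo su - oracle <<'SU'
. oraenv -s <<<CDB1
# datafiles that need a backup according to the RMAN retention policy
rman target / <<<'report need backup;'
# non-reclaimable space in the fast recovery area
sqlplus -s / as sysdba <<'SQL'
select file_type, percent_space_used, percent_space_reclaimable from v$recovery_area_usage;
SQL
SU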

Multitenant

I blogged about a strange limitation of multitenant when running on AWS, and here I’m showing that the same exists on Azure, even with the latest Release Update. Let’s be clear: I’ve no idea whether this limitation is intentional or not (bug or feature ;)) and why it has not been removed yet if it is considered a bug. But if you Bring Your Own License and have paid for the multitenant option, you probably want to create several pluggable databases rather than several instances in a VM.


[oracle@o19c ~]$ sql / as sysdba

SQLcl: Release 19.1 Production on Mon Feb 22 07:51:23 2021

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.10.0.0.0


SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
SQL>

SQL> show parameter max_pdbs

NAME     TYPE    VALUE
-------- ------- -----
max_pdbs integer 5

I have one PDB, which I created at CDB creation time. And MAX_PDBS is set to five. That reminds me of what I have seen on AWS. Let’s see how many PDBs I can create.


SQL> create pluggable database PDB2 from PDB1;

Pluggable database created.

SQL> c/2/3
  1* create pluggable database PDB3 from PDB1;
SQL> /

Pluggable database created.

SQL> c/3/4
  1* create pluggable database PDB4 from PDB1;
SQL> /

create pluggable database PDB4 from PDB1
                          *
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           MOUNTED
         5 PDB3                           MOUNTED
SQL>

I cannot reach this MAX_PDBS (it is supposed to be the number of user-created PDBs, not the number of containers). This is definitely not the correct behaviour. It looks like MAX_PDBS=3, as when you don’t have the multitenant option.


SQL> oradebug setmypid
Statement processed.
SQL> oradebug call kscs_cloudprov_ut
kscs_cloudprov_ut: Detecting cloud provider...
This is a non oracle cloud database instance
This is a Azure instance
CloudDB(10), CloudDetect(10), CloudQuery(ff)
Function returned BF5A07ED
SQL>

The Oracle binaries have a function that detects which cloud provider they run on (“This is a Azure instance”), because some features are limited when you are not on the Oracle Cloud (“This is a non oracle cloud database instance”). But the binaries are the same everywhere. The detection is done through each cloud provider’s metadata API.


[oracle@cloud ~]$ strings $ORACLE_HOME/bin/oracle | grep "http://169.254.169.254"
http://169.254.169.254/computeMetadata/v1/instance/id
http://169.254.169.254/latest/meta-data/services/domain
http://169.254.169.254/opc/v1/instance/id
http://169.254.169.254/metadata/instance/compute/vmId?api-version=2017-04-02&format=text

The first one is valid in GCP, the second one in AWS, the third one in OCI, and the last one is Azure. The Oracle binaries have a directory of the main cloud providers’ metadata endpoints to detect where they run.


[azureuser@o19c ~]$ sudo iptables -A OUTPUT -d 169.254.169.254  -j REJECT
[azureuser@o19c ~]$ curl 'http://169.254.169.254/metadata/instance/compute/vmId?api-version=2018-02-01&format=text' -H Metadata:true -s
[azureuser@o19c ~]$

I’ve blocked this endpoint.


SQL> create pluggable database PDB4 from PDB1;

Pluggable database created.

And that works: I can create my additional PDB (note that I had to restart the instance, as the detection is done at startup time).


SQL> oradebug setmypid
Statement processed.
SQL> oradebug call kscs_cloudprov_ut
kscs_cloudprov_ut: Detecting cloud provider...
This is an on-premise database instance
CloudDB(0), CloudDetect(0), CloudQuery(ff)
Function returned E31011D7

You see, I’m now “on-premises” as no cloud metadata is available…


[azureuser@o19c ~]$ sudo iptables -D OUTPUT -d 169.254.169.254  -j REJECT
[azureuser@o19c ~]$ curl 'http://169.254.169.254/metadata/instance/compute/vmId?api-version=2018-02-01&format=text' -H Metadata:true -s
4967bff7-591f-4583-bd57-5eb0969b9ff0[azureuser@o19c ~]$

I re-open access to the metadata endpoint by deleting my iptables rule, because a cloud VM runs some cloud provider software that requires it.

Workaround


SQL> alter system set "_cdb_disable_pdb_limit"=true scope=spfile;
System altered.

SQL> startup force
ORACLE instance started.

Total System Global Area 3439327576 bytes
Fixed Size                  9140568 bytes
Variable Size             704643072 bytes
Database Buffers         2717908992 bytes
Redo Buffers                7634944 bytes
Database mounted.
Database opened.

SQL> oradebug call kscs_cloudprov_ut
kscs_cloudprov_ut: Detecting cloud provider...
This is a non oracle cloud database instance
This is a Azure instance
CloudDB(10), CloudDetect(10), CloudQuery(ff)
Function returned BF5A07ED
SQL>

I’ve disabled this PDB limit by setting “_cdb_disable_pdb_limit”=true and checked again that the instance still detects that I’m running on Azure.


SQL> create pluggable database PDB5 from PDB1;
Pluggable database created.

SQL> c/5/6
  1* create pluggable database PDB6 from PDB1
SQL> /
Pluggable database created.

Ok, now I was able to create an additional PDB. But do you remember that MAX_PDBS was 5? How was I able to create a 6th one?


SQL> show parameter max_pdbs

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_pdbs                             integer     4098
SQL>

When disabling the limit, MAX_PDBS was raised. So be careful, because you are still not allowed to create more than 252 PDBs (4096 on Oracle platforms, and never 4098 – I tried it).


SQL> alter system set max_pdbs=6;
System altered.

SQL> show parameter max_pdbs

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_pdbs                             integer     6
SQL> create pluggable database PDB7 from PDB1;
create pluggable database PDB7 from PDB1
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

Fortunately, I can set the MAX_PDBS limit now that “_cdb_disable_pdb_limit”=true.

Why does it make sense?

Because when you bring your own license, you are already penalized for not choosing the Oracle Cloud: one processor license that covered two cores on-premises now covers only one. Then, you want at least to optimize by consolidating: PDBs in a CDB rather than instances in a VM, and rather than multiple VMs. Note that this arithmetic on the number of cores behind the Oracle definition of a processor is subject to discussion. The documents mentioning it are “for educational purpose” and “does not constitute a contract or a commitment to any specific term”.

Anyway, with Bring Your Own License, you can run 3 PDBs per CDB when you don’t have the multitenant option (and then you want to set MAX_PDBS=3 to be safe), or up to 252 when you have the option (and then you don’t want to be limited to 3). So in all cases, you have something to change. And pluggable database operations can be useful to clone or relocate PDBs, so that you can change the size of your VM (which requires a restart) and balance the services across them.
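
For example, relocating a PDB to a CDB on a bigger VM can be done with a remote clone. Here is a minimal sketch with example names (run on the target CDB, assuming a common user on the source with the CREATE PLUGGABLE DATABASE privilege; the link, user, password and service are placeholders):

sudo su - oracle <<'SU'
sqlplus / as sysdba <<'SQL'
-- database link to the source CDB and one-statement relocation (all names are examples)
create database link cdb1_link connect to c##dba identified by "MyPassword" using '//vm1:1521/CDB1';
create pluggable database PDB1 from PDB1@cdb1_link relocate;
alter pluggable database PDB1 open;
SQL
SU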

Maybe not all images are available in all regions, so here is how to check:


[opc@a demo]$ az vm image list-publishers --location $(az account list-locations | jq -r '.[] | select ( .metadata.physicalLocation == "Geneva") .name ')
Command group 'vm' is experimental and under development. Reference and support levels: https://aka.ms/CLI_refstatus
(NoRegisteredProviderFound) No registered resource provider found for location 'switzerlandwest' and API version '2020-06-01' for type 'locations/publishers'. The supported api-versions are '2015-05-01-preview, 2015-06-15, 2016-03-30, 2016-04-30-preview, 2016-08-30, 2017-03-30, 2017-12-01, 2018-04-01, 2018-06-01, 2018-10-01, 2019-03-01, 2019-07-01, 2019-12-01, 2020-06-01, 2020-09-30, 2020-12-01'. The supported locations are 'eastus, eastus2, westus, centralus, northcentralus, southcentralus, northeurope, westeurope, eastasia, southeastasia, japaneast, japanwest, australiaeast, australiasoutheast, australiacentral, brazilsouth, southindia, centralindia, westindia, canadacentral, canadaeast, westus2, westcentralus, uksouth, ukwest, koreacentral, koreasouth, francecentral, southafricanorth, uaenorth, switzerlandnorth, germanywestcentral, norwayeast'.

[opc@a demo]$ az vm image list-publishers --location $(az account list-locations | jq -r '.[] | select ( .metadata.physicalLocation == "Zurich") .name ') | jq '.[] | select( .name == "Oracle" ) '
WARNING: Command group 'vm' is experimental and under development. Reference and support levels: https://aka.ms/CLI_refstatus
{
  "id": "/Subscriptions/bfaaad07-4a06-45e1-9e93-4eb58fa52f87/Providers/Microsoft.Compute/Locations/SwitzerlandNorth/Publishers/Oracle",
  "location": "SwitzerlandNorth",
  "name": "Oracle",
  "tags": null
}

[opc@a demo]$ az vm image list --all --location "SwitzerlandNorth" --publisher Oracle --output table --offer database
Command group 'vm' is experimental and under development. Reference and support levels: https://aka.ms/CLI_refstatus
Offer                 Publisher    Sku                      Urn                                                         Version
--------------------  -----------  -----------------------  ----------------------------------------------------------  ---------
oracle-database-19-3  Oracle       oracle-database-19-0904  Oracle:oracle-database-19-3:oracle-database-19-0904:19.3.1  19.3.1

So for me, nothing in Geneva, but I get the list of supported regions, and the image is available in Zurich.

A final note. The VM for this “oracle-database-19-3” image runs on Oracle Linux. Until Jan 6th, 2021 Oracle did not allow “Basic Limited” and “Premier Limited” support on VMs larger than 8 vCPUs. This has changed to a limit of 64 vCPUs (and one system license can even cover two VMs within this limit). So there is good news in licensing documents sometimes…

The post Oracle DB on Azure with Multitenant Option appeared first on the dbi services blog.


Oracle Rolling Invalidate Window Exceeded(3)


By Franck Pachot

This extends a previous post (Rolling Invalidate Window Exceeded) where, in summary, the ideas were:

  • When you gather statistics, you want the new executions to take into account the new statistics, which means that the old execution plans (child cursors) should be invalidated
  • You don’t want all child cursors to be invalidated immediately, to avoid a hard parse storm, and this is why this invalidation is rolling: a 5 hour window is defined, starting at the next execution (after the statistics gathering), and a random timestamp is set within it after which a newer execution will hard parse rather than share an existing cursor
  • The “invalidation” term is misleading as it has nothing to do with V$SQL.INVALIDATIONS, which is at parent cursor level. Here the existing plans are still valid. The “rolling invalidation” just makes them non-shareable

In this blog post I’ll share my query to show the timestamps involved:

  • INVALIDATION_WINDOW which is the start of the invalidation (or rather the end of sharing of this cursor) for a future parse call
  • KSUGCTM (Kernel Service User Get Current TiMe), which is the time when non-sharing occurred and a new child cursor was created (hard parse instead of soft parse)

As usual, here is a simple example


alter system flush shared_pool;
create table DEMO as select * from dual;
insert into DEMO select * from dual;
commit;
alter system set "_optimizer_invalidation_period"=15 scope=memory;

I have created a demo table and set the invalidation period to 15 seconds instead of the 5 hour default.


20:14:19 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');

PL/SQL procedure successfully completed.

20:14:19 SQL> select * from dba_tab_stats_history where table_name='DEMO' order by stats_update_time;

OWNER   TABLE_NAME   PARTITION_NAME   SUBPARTITION_NAME   STATS_UPDATE_TIME
-----   ----------   --------------   -----------------   -----------------------------------
DEMO    DEMO                                              23-FEB-21 08.14.19.698111000 PM GMT

1 row selected.

I’ve gathered the statistics at 20:14:19, but there is no cursor yet to invalidate.


20:14:20 SQL> host sleep 30

20:14:50 SQL> select * from DEMO;

DUMMY
-----
X

1 row selected.

20:14:50 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='0m8kbvzchkytt';

  CHILD_NUMBER REASON
-------------- --------------------------------------------------------------------------------
             0

1 row selected.

I have executed my statement, which created the parent and the child cursor and, of course, there is no invalidation yet.


20:14:50 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');

PL/SQL procedure successfully completed.

20:14:50 SQL> select * from dba_tab_stats_history where table_name='DEMO' order by stats_update_time;

OWNER   TABLE_NAME   PARTITION_NAME   SUBPARTITION_NAME   STATS_UPDATE_TIME
-----   ----------   --------------   -----------------   -----------------------------------
DEMO    DEMO                                              23-FEB-21 08.14.19.698111000 PM GMT
DEMO    DEMO                                              23-FEB-21 08.14.50.270984000 PM GMT

2 rows selected.

20:14:50 SQL> host sleep 30

20:15:20 SQL> select * from DEMO;

DUMMY
-----
X


1 row selected.

20:15:20 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='0m8kbvzchkytt';

  CHILD_NUMBER REASON
-------------- --------------------------------------------------------------------------------
             0

1 row selected.

I have gathered statistics and run my statement again. There’s no invalidation yet because the invalidation window starts only at the next parse or execution that occurs after the statistics gathering. This next execution occurred after 20:15:20 and sets the start of the invalidation window. But for the moment, the same child is still shared.


20:15:20 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');

PL/SQL procedure successfully completed.

20:15:20 SQL> select * from dba_tab_stats_history where table_name='DEMO' order by stats_update_time;

OWNER   TABLE_NAME   PARTITION_NAME   SUBPARTITION_NAME   STATS_UPDATE_TIME
-----   ----------   --------------   -----------------   -----------------------------------
DEMO    DEMO                                              23-FEB-21 08.14.19.698111000 PM GMT
DEMO    DEMO                                              23-FEB-21 08.14.50.270984000 PM GMT
DEMO    DEMO                                              23-FEB-21 08.15.20.476025000 PM GMT


3 rows selected.

20:15:20 SQL> host sleep 30

20:15:50 SQL> select * from DEMO;

DUMMY
-----
X

1 row selected.

20:15:50 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='0m8kbvzchkytt';

  CHILD_NUMBER REASON
-------------- --------------------------------------------------------------------------------
             0 <ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invalidate Window Exceeded(2)</reason><size>0x0</size><details>already_processed</details></ChildNode><ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invalidate Window Exceeded(3)</reason><size>2x4</size><invalidation_window>1614111334</invalidation_window><ksugctm>1614111350</ksugctm></ChildNode>
             1

2 rows selected.

I’ve gathered the statistics again, but what matters here is that I’ve run my statement now that the invalidation window has been set (by the previous execution at 20:15:20) and has been reached (I waited 30 seconds, which is more than the 15 second window I’ve defined). This new execution marked the cursor as non-shareable, with the “Rolling Invalidate Window Exceeded(3)” reason, and created a new child cursor.

20:15:50 SQL> select child_number,invalidations,parse_calls,executions,cast(last_active_time as timestamp) last_active_time
    ,timestamp '1970-01-01 00:00:00'+numtodsinterval(to_number(regexp_replace(reason,'.*.invalidation_window>([0-9]*)./invalidation_window>.ksugctm>([0-9]*).*','\1')),'second') invalidation_window
    ,timestamp '1970-01-01 00:00:00'+numtodsinterval(to_number(regexp_replace(reason,'.*([0-9]*)./invalidation_window>.ksugctm>([0-9]*)./ksugctm>.*','\2')),'second') ksugctm
    from v$sql_shared_cursor left outer join v$sql using(sql_id,child_number,address,child_address)
    where reason like '%Rolling Invalidate Window Exceeded(3)%' and sql_id='0m8kbvzchkytt'
    order by sql_id,child_number,invalidation_window desc
    ;

  CHILD_NUMBER   INVALIDATIONS   PARSE_CALLS   EXECUTIONS LAST_ACTIVE_TIME                  INVALIDATION_WINDOW               KSUGCTM
--------------   -------------   -----------   ---------- -------------------------------   -------------------------------   ----------------------------

             0               0             3            2 23-FEB-21 08.15.50.000000000 PM   23-FEB-21 08.15.34.000000000 PM   23-FEB-21 08.15.50.000000000 PM

1 row selected.

So at 20:15:20 the invalidation timestamp was set (but not exposed yet) at random within the next 15 seconds (because I changed the 5 hour default), and it is now visible as INVALIDATION_WINDOW: 20:15:34. The next execution after this timestamp created a new child at 20:15:50, which is visible in KSUGCTM but also in LAST_ACTIVE_TIME of the old child (even though this old child cursor has not been executed again, just updated).

The important thing is that those child cursors will not be used again but are still there, increasing the length of the list of child cursors that is scanned when parsing a new statement with the same SQL text. And this can go up to 8192 if you’ve left the default “_cursor_obsolete_threshold” (which it is recommended to lower – see Mike Dietrich’s blog post).
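
A quick way to spot the statements accumulating the most versions is to look at VERSION_COUNT in V$SQLAREA (a simple sketch):

sqlplus -s / as sysdba <<'SQL'
-- parent cursors with the largest number of child cursors
select version_count, sql_id, substr(sql_text,1,60) sql_text
from v$sqlarea order by version_count desc fetch first 10 rows only;
SQL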

And this also means that you should not gather statistics too often, and this is why GATHER AUTO is the default option. You may lower the STALE_PERCENT for some tables (a very large table with few changes may otherwise not be gathered often enough), but gathering stats on a table every day, even a small one, has a bad effect on cursor versions.
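
If one large table really needs more frequent gathering, it is usually better to lower its staleness threshold and keep the automatic job rather than gathering it manually every day. A sketch (the owner, table and 5% value are examples):

sqlplus -s / as sysdba <<'SQL'
-- let the AUTO job consider this table stale as soon as 5% of the rows have changed
exec dbms_stats.set_table_prefs('DEMO','DEMO','STALE_PERCENT','5')
select dbms_stats.get_prefs('STALE_PERCENT','DEMO','DEMO') stale_percent from dual;
SQL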


SQL> alter session set nls_timestamp_format='dd-mon hh24:mi:ss';
SQL>

select sql_id,child_number,ksugctm,invalidation_window
    ,(select cast(max(stats_update_time) as timestamp) from v$object_dependency 
      join dba_tab_stats_history on to_owner=owner and to_name=table_name and to_type=2
      where from_address=address and from_hash=hash_value and stats_update_time<ksugctm) last_analyze
    from (
    select sql_id,child_number,address,hash_value
    ,timestamp '1970-01-01 00:00:00'+numtodsinterval(to_number(regexp_replace(reason,'.*.invalidation_window>([0-9]*)./invalidation_window>.ksugctm>([0-9]*).*','\1')),'second') invalidation_window
    ,timestamp '1970-01-01 00:00:00'+numtodsinterval(to_number(regexp_replace(reason,'.*([0-9]*)./invalidation_window>.ksugctm>([0-9]*)./ksugctm>.*','\2')),'second') ksugctm
    from v$sql_shared_cursor left outer join v$sql using(sql_id,child_number,address,child_address)
    where reason like '%Rolling Invalidate Window Exceeded(3)%' --and sql_id='0m8kbvzchkytt'
    ) order by sql_id,child_number,invalidation_window desc;

SQL_ID            CHILD_NUMBER KSUGCTM           INVALIDATION_WINDOW   LAST_ANALYZE
-------------     ------------ ---------------   -------------------   ----------------------------------
04kug40zbu4dm                2 23-feb 06:01:23   23-feb 06:01:04
0m8kbvzchkytt                0 23-feb 21:34:47   23-feb 21:34:25       23-FEB-21 09.34.18.582833000 PM GMT
0m8kbvzchkytt                1 23-feb 21:35:48   23-feb 21:35:23       23-FEB-21 09.35.18.995779000 PM GMT
0m8kbvzchkytt                2 23-feb 21:36:48   23-feb 21:36:22       23-FEB-21 09.36.19.305025000 PM GMT
0m8kbvzchkytt                3 23-feb 21:37:49   23-feb 21:37:32       23-FEB-21 09.37.19.681986000 PM GMT
0m8kbvzchkytt                4 23-feb 21:38:50   23-feb 21:38:26       23-FEB-21 09.38.20.035265000 PM GMT
0m8kbvzchkytt                5 23-feb 21:39:50   23-feb 21:39:32       23-FEB-21 09.39.20.319662000 PM GMT
0m8kbvzchkytt                6 23-feb 21:40:50   23-feb 21:40:29       23-FEB-21 09.40.20.617857000 PM GMT
0m8kbvzchkytt                7 23-feb 21:41:50   23-feb 21:41:28       23-FEB-21 09.41.20.924223000 PM GMT
0m8kbvzchkytt                8 23-feb 21:42:51   23-feb 21:42:22       23-FEB-21 09.42.21.356828000 PM GMT
0m8kbvzchkytt                9 23-feb 21:43:51   23-feb 21:43:25       23-FEB-21 09.43.21.690408000 PM GMT
0sbbcuruzd66f                2 23-feb 06:00:46   23-feb 06:00:45
0yn07bvqs30qj                0 23-feb 01:01:09   23-feb 00:18:02
121ffmrc95v7g                3 23-feb 06:00:35   23-feb 06:00:34

This query joins with the statistics history in order to get an idea of the root cause of the invalidation: I look at the cursor dependencies and the table statistics. This may be customized with partitions, index names,…

The core message here is that gathering statistics on a table will make its cursors unshareable. If you have, say, 10 versions because of multiple NLS settings and bind variable lengths, and gather the statistics every day, the list of child cursors will grow until it reaches the obsolete threshold. And when the list is long, you will have more pressure on the library cache during attempts to soft parse. If you gather statistics outside the automatic job, and do it without ‘GATHER AUTO’, even on small tables where gathering is fast, you increase the number of cursor versions without a reason. The best practice for statistics gathering is keeping the AUTO settings. The query above may help to see the correlation between statistics gathering and rolling invalidation.

The post Oracle Rolling Invalidate Window Exceeded(3) appeared first on the dbi services blog.

Oracle Blockchain Tables: COMMIT-Time


Oracle Blockchain Tables are available now with Oracle 19.10 (see Connor’s blog on them). They are part of all editions and do not need any specific license. I.e. whenever we need to store data in a table that should never be updated anymore, and we have to ensure the data cannot be tampered with, then blockchain tables should be considered as an option. As Oracle writes in the documentation that blockchain tables could e.g. be used for “audit trails”, I thought to test them by archiving unified audit trail data. Let me share my experience:

First of all I set up a 19c database so that it supports blockchain tables:

– Installed 19.10.0.0.210119 (patch 32218454)
– Set COMPATIBLE=19.10.0 and restarted the DB
– Installed patch 32431413

REMARK: All tests I’ve done with 19c have also been done with Oracle 21c on the Oracle Cloud, to verify that the results are not caused by the backport of blockchain tables to 19c.

Creating the BLOCKCHAIN TABLE:

Blockchain tables do not support the CREATE TABLE AS SELECT syntax:


create blockchain table uat_copy_blockchain2
no drop until 0 days idle
no delete until 31 days after insert
hashing using "sha2_512" version v1
tablespace audit_data
as select * from unified_audit_trail;

ERROR at line 6:
ORA-05715: operation not allowed on the blockchain table

I.e. I have to pre-create the blockchain table and load it with “insert… select”:


CREATE blockchain TABLE uat_copy_blockchain 
   ("AUDIT_TYPE" VARCHAR2(64),
	"SESSIONID" NUMBER,
	"PROXY_SESSIONID" NUMBER,
	"OS_USERNAME" VARCHAR2(128),
...
	"DIRECT_PATH_NUM_COLUMNS_LOADED" NUMBER,
	"RLS_INFO" CLOB,
	"KSACL_USER_NAME" VARCHAR2(128),
	"KSACL_SERVICE_NAME" VARCHAR2(512),
	"KSACL_SOURCE_LOCATION" VARCHAR2(48),
	"PROTOCOL_SESSION_ID" NUMBER,
	"PROTOCOL_RETURN_CODE" NUMBER,
	"PROTOCOL_ACTION_NAME" VARCHAR2(32),
	"PROTOCOL_USERHOST" VARCHAR2(128),
	"PROTOCOL_MESSAGE" VARCHAR2(4000)
   )
no drop until 0 days idle
no delete until 31 days after insert
hashing using "sha2_512" version v1
tablespace audit_data;

Table created.

Now load the data into the blockchain table:


SQL> insert into uat_copy_blockchain
  2  select * from unified_audit_trail;

26526 rows created.

Elapsed: 00:00:07.24
SQL> commit;

Commit complete.

Elapsed: 00:00:43.26

Over 43 seconds for the COMMIT!!!

The reason for the long COMMIT time is that the blockchain (or rather the chain of row hashes for the 26526 rows) is actually built when committing. I.e. all blockchain-related columns in the table are empty after the insert, before the commit:


SQL> insert into uat_copy_blockchain
  2  select * from unified_audit_trail;

26526 rows created.

SQL> select count(*) from uat_copy_blockchain
  2  where ORABCTAB_INST_ID$ is NULL
  3  and ORABCTAB_CHAIN_ID$ is NULL
  4  and ORABCTAB_SEQ_NUM$ is NULL
  5  and ORABCTAB_CREATION_TIME$ is NULL
  6  and ORABCTAB_USER_NUMBER$ is NULL
  7  and ORABCTAB_HASH$ is NULL
  8  ;

  COUNT(*)
----------
     26526

During the commit those hidden columns are updated:


SQL> commit;

Commit complete.

SQL> select count(*) from uat_copy_blockchain
  2  where ORABCTAB_INST_ID$ is NULL
  3  or ORABCTAB_CHAIN_ID$ is NULL
  4  or ORABCTAB_SEQ_NUM$ is NULL
  5  or ORABCTAB_CREATION_TIME$ is NULL
  6  or ORABCTAB_USER_NUMBER$ is NULL
  7  or ORABCTAB_HASH$ is NULL
  8  ;

  COUNT(*)
----------
         0
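
By the way, once the hashes are there, the chain can be checked with the DBMS_BLOCKCHAIN_TABLE package. A minimal sketch, assuming the documented VERIFY_ROWS parameter names:

sqlplus -s / as sysdba <<'SQL'
set serveroutput on
declare
  l_rows number;
begin
  -- verify the hash chain of all rows of the table loaded above
  dbms_blockchain_table.verify_rows(schema_name=>'CBLEILE', table_name=>'UAT_COPY_BLOCKCHAIN',
                                    number_of_rows_verified=>l_rows);
  dbms_output.put_line('rows verified: '||l_rows);
end;
/
SQL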

When doing a SQL-Trace I can see the following recursive statements during the COMMIT:


SQL ID: 6r4qu6xnvb3nt Plan Hash: 960301545

update "CBLEILE"."UAT_COPY_BLOCKCHAIN" set orabctab_inst_id$ = :1,
  orabctab_chain_id$ = :2, orabctab_seq_num$ = :3, orabctab_user_number$ = :4,
   ORABCTAB_CREATION_TIME$ = :5
where
 rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    26526      0.56       0.55          0          0          0           0
Execute  26526     10.81      12.21       3824       3395      49546       26526
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53052     11.38      12.76       3824       3395      49546       26526

********************************************************************************

SQL ID: 4hc26wpgb5tqr Plan Hash: 2019081831

update sys.blockchain_table_chain$ set                    hashval_position =
  :1, max_seq_number =:2
where
 obj#=:3 and inst_id = :4 and chain_id = :5


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26526      9.29      10.12        512      26533      27822       26526
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    26527      9.29      10.12        512      26533      27822       26526

********************************************************************************

SQL ID: 2t5ypzqub0g35 Plan Hash: 960301545

update "CBLEILE"."UAT_COPY_BLOCKCHAIN" set orabctab_hash$ = :1
where
 rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    26526      0.58       0.57          0          0          0           0
Execute  26526      6.79       7.27       1832       2896      46857       26526
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53052      7.37       7.85       1832       2896      46857       26526

********************************************************************************

SQL ID: bvggpqdp5u4uf Plan Hash: 1612174689

select max_seq_number, hashval_position
from
 sys.blockchain_table_chain$ where obj#=:1 and                     inst_id =
  :2 and chain_id = :3


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26527      5.34       5.51          0          0          0           0
Fetch    26527      0.75       0.72          0      53053          0       26526
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53055      6.10       6.24          0      53053          0       26526

********************************************************************************

SQL ID: dktp4suj3mn0t Plan Hash: 4188997816

SELECT  "AUDIT_TYPE",  "SESSIONID",  "PROXY_SESSIONID",  "OS_USERNAME",
  "USERHOST",  "TERMINAL",  "INSTANCE_ID",  "DBID",  "AUTHENTICATION_TYPE",
  "DBUSERNAME",  "DBPROXY_USERNAME",  "EXTERNAL_USERID",  "GLOBAL_USERID",
  "CLIENT_PROGRAM_NAME",  "DBLINK_INFO",  "XS_USER_NAME",  "XS_SESSIONID",
  "ENTRY_ID",  "STATEMENT_ID",  "EVENT_TIMESTAMP",  "EVENT_TIMESTAMP_UTC",
  "ACTION_NAME",  "RETURN_CODE",  "OS_PROCESS",  "TRANSACTION_ID",  "SCN",
  "EXECUTION_ID",  "OBJECT_SCHEMA",  "OBJECT_NAME",  "SQL_TEXT",  "SQL_BINDS",
    "APPLICATION_CONTEXTS",  "CLIENT_IDENTIFIER",  "NEW_SCHEMA",  "NEW_NAME",
   "OBJECT_EDITION",  "SYSTEM_PRIVILEGE_USED",  "SYSTEM_PRIVILEGE",
  "AUDIT_OPTION",  "OBJECT_PRIVILEGES",  "ROLE",  "TARGET_USER",
  "EXCLUDED_USER",  "EXCLUDED_SCHEMA",  "EXCLUDED_OBJECT",  "CURRENT_USER",
  "ADDITIONAL_INFO",  "UNIFIED_AUDIT_POLICIES",  "FGA_POLICY_NAME",
  "XS_INACTIVITY_TIMEOUT",  "XS_ENTITY_TYPE",  "XS_TARGET_PRINCIPAL_NAME",
  "XS_PROXY_USER_NAME",  "XS_DATASEC_POLICY_NAME",  "XS_SCHEMA_NAME",
  "XS_CALLBACK_EVENT_TYPE",  "XS_PACKAGE_NAME",  "XS_PROCEDURE_NAME",
  "XS_ENABLED_ROLE",  "XS_COOKIE",  "XS_NS_NAME",  "XS_NS_ATTRIBUTE",
  "XS_NS_ATTRIBUTE_OLD_VAL",  "XS_NS_ATTRIBUTE_NEW_VAL",  "DV_ACTION_CODE",
  "DV_ACTION_NAME",  "DV_EXTENDED_ACTION_CODE",  "DV_GRANTEE",
  "DV_RETURN_CODE",  "DV_ACTION_OBJECT_NAME",  "DV_RULE_SET_NAME",
  "DV_COMMENT",  "DV_FACTOR_CONTEXT",  "DV_OBJECT_STATUS",  "OLS_POLICY_NAME",
    "OLS_GRANTEE",  "OLS_MAX_READ_LABEL",  "OLS_MAX_WRITE_LABEL",
  "OLS_MIN_WRITE_LABEL",  "OLS_PRIVILEGES_GRANTED",  "OLS_PROGRAM_UNIT_NAME",
   "OLS_PRIVILEGES_USED",  "OLS_STRING_LABEL",  "OLS_LABEL_COMPONENT_TYPE",
  "OLS_LABEL_COMPONENT_NAME",  "OLS_PARENT_GROUP_NAME",  "OLS_OLD_VALUE",
  "OLS_NEW_VALUE",  "RMAN_SESSION_RECID",  "RMAN_SESSION_STAMP",
  "RMAN_OPERATION",  "RMAN_OBJECT_TYPE",  "RMAN_DEVICE_TYPE",
  "DP_TEXT_PARAMETERS1",  "DP_BOOLEAN_PARAMETERS1",
  "DIRECT_PATH_NUM_COLUMNS_LOADED",  "RLS_INFO",  "KSACL_USER_NAME",
  "KSACL_SERVICE_NAME",  "KSACL_SOURCE_LOCATION",  "PROTOCOL_SESSION_ID",
  "PROTOCOL_RETURN_CODE",  "PROTOCOL_ACTION_NAME",  "PROTOCOL_USERHOST",
  "PROTOCOL_MESSAGE",  "ORABCTAB_INST_ID$",  "ORABCTAB_CHAIN_ID$",
  "ORABCTAB_SEQ_NUM$",  "ORABCTAB_CREATION_TIME$",  "ORABCTAB_USER_NUMBER$",
  "ORABCTAB_HASH$",  "ORABCTAB_SIGNATURE$",  "ORABCTAB_SIGNATURE_ALG$",
  "ORABCTAB_SIGNATURE_CERT$"
from
 "CBLEILE"."UAT_COPY_BLOCKCHAIN" where rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26526      3.85       3.84          0          0          0           0
Fetch    26526      1.31       1.31          0      28120          0       26526
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53053      5.17       5.15          0      28120          0       26526

********************************************************************************

SQL ID: fcq6kngm4b3m5 Plan Hash: 4188997816

SELECT  "ORABCTAB_INST_ID$",  "ORABCTAB_CHAIN_ID$",  "ORABCTAB_SEQ_NUM$",
  "ORABCTAB_CREATION_TIME$",  "ORABCTAB_USER_NUMBER$",  "ORABCTAB_HASH$",
  "ORABCTAB_SIGNATURE$",  "ORABCTAB_SIGNATURE_ALG$",
  "ORABCTAB_SIGNATURE_CERT$",  "ORABCTAB_SPARE$"
from
 "CBLEILE"."UAT_COPY_BLOCKCHAIN" where rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26526      3.04       3.05          0          0          0           0
Fetch    26526      0.41       0.39          0      26526          0       26526
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53053      3.45       3.45          0      26526          0       26526

********************************************************************************

SQL ID: fcq6kngm4b3m5 Plan Hash: 4188997816

SELECT  "ORABCTAB_INST_ID$",  "ORABCTAB_CHAIN_ID$",  "ORABCTAB_SEQ_NUM$",
  "ORABCTAB_CREATION_TIME$",  "ORABCTAB_USER_NUMBER$",  "ORABCTAB_HASH$",
  "ORABCTAB_SIGNATURE$",  "ORABCTAB_SIGNATURE_ALG$",
  "ORABCTAB_SIGNATURE_CERT$",  "ORABCTAB_SPARE$"
from
 "CBLEILE"."UAT_COPY_BLOCKCHAIN" where rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26526      3.04       3.05          0          0          0           0
Fetch    26526      0.41       0.39          0      26526          0       26526
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53053      3.45       3.45          0      26526          0       26526

I.e. for every row inserted in the transaction, several recursive statements have to be executed to compute the hash and update the inserted row, linking the rows together through the hash chain.

That raises the question of whether I should take care of PCTFREE when creating the blockchain table, to avoid row migration (often wrongly called row chaining).

As with normal tables, blockchain tables have a default of 10% for PCTFREE:


SQL> select pct_free from tabs where table_name='UAT_COPY_BLOCKCHAIN';

  PCT_FREE
----------
        10

Do we actually have migrated rows after the commit?


SQL> @?/rdbms/admin/utlchain

Table created.

SQL> analyze table uat_copy_blockchain list chained rows;

Table analyzed.

SQL> select count(*) from chained_rows;

  COUNT(*)
----------
      7298

SQL> select count(distinct dbms_rowid.rowid_relative_fno(rowid)||'_'||dbms_rowid.rowid_block_number(rowid)) blocks_with_rows
  2  from uat_copy_blockchain;

BLOCKS_WITH_ROWS
----------------
	    1084

So it makes sense to adjust the PCTFREE. In my case the best would be something like 25-30%, because the blockchain data makes up around 23% of the average row length:


SQL> select sum(avg_col_len) from user_tab_cols where table_name='UAT_COPY_BLOCKCHAIN';

SUM(AVG_COL_LEN)
----------------
	     401

SQL> select sum(avg_col_len) from user_tab_cols where table_name='UAT_COPY_BLOCKCHAIN'
  2  and column_name like 'ORABCTAB%';

SUM(AVG_COL_LEN)
----------------
	      92

SQL> select (92/401)*100 from dual;

(92/401)*100
------------
  22.9426434

I could reduce the commit time by 5 seconds by adjusting the PCTFREE to 30.
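
The PCTFREE clause is just a standard physical attribute of the CREATE statement, so a sketch of an adjusted table (with a reduced column list as an example) looks like this:

sqlplus -s / as sysdba <<'SQL'
-- same blockchain clauses as above, with more free space left in the blocks
-- for the hash/metadata columns that are updated at commit time
create blockchain table test_bct_p30 (a number, b varchar2(100), c varchar2(100))
no drop until 0 days idle
no delete until 31 days after insert
hashing using "sha2_512" version v1
pctfree 30;
SQL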

But coming back to the commit-time issue:

This can easily be tested by just checking how much the commit time increases when more data is loaded per transaction. Here is the test done with 21c on the Oracle Cloud:


SQL> create blockchain table test_block_chain (a number, b varchar2(100), c varchar2(100))
  2  no drop until 0 days idle
  3  no delete until 31 days after insert
  4  hashing using "sha2_512" version v1;

Table created.

SQL> set timing on
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 1000;

999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:00.82
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 2000;

1999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:01.56
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 4000;

3999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:03.03
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 8000;

7999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:06.38
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 16000;

15999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:11.71

I.e. the more data inserted, the longer the commit-times. The times go up almost linearly with the amount of data inserted per transaction.

Can we gain something here by doing things in parallel? A COMMIT statement cannot be parallelized, but you may of course split an insert of, e.g., 24000 rows into 2 inserts of 12000 rows, run them in parallel and commit them at the same time. I created 2 simple scripts for that:


oracle@cbl:/home/oracle/ [DB0111 (CDB$ROOT)] cat load_bct.bash 
#!/bin/bash

ROWS_TO_LOAD=$1

sqlplus -S cbleile/${MY_PASSWD}@pdb1 <<EOF
insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum <= $ROWS_TO_LOAD ;
-- alter session set events '10046 trace name context forever, level 12';
set timing on
commit;
-- alter session set events '10046 trace name context off';
exit
EOF

exit 0

oracle@cbl:/home/oracle/ [DB0111 (CDB$ROOT)] cat load_bct_parallel.bash 
#!/bin/bash

PARALLELISM=$1
LOAD_ROWS=$2

for i in $(seq ${PARALLELISM})
do
  ./load_bct.bash $LOAD_ROWS &
done
wait

exit 0

Loading 4000 Rows in a single job:


oracle@cbl:/home/oracle/ [DB0111 (CDB$ROOT)] ./load_bct_parallel.bash 1 4000

4000 rows created.


Commit complete.

Elapsed: 00:00:03.56

Loading 4000 Rows in 2 jobs, which run in parallel and each loading 2000 rows:


oracle@cbl:/home/oracle/ [DB0111 (CDB$ROOT)] ./load_bct_parallel.bash 2 2000

2000 rows created.


2000 rows created.


Commit complete.

Elapsed: 00:00:17.87

Commit complete.

Elapsed: 00:00:18.10

That doesn’t scale at all. Enabling SQL-Trace for the 2 jobs in parallel showed this:


SQL ID: catcycjs3ddry Plan Hash: 3098282860

update sys.blockchain_table_chain$ set                    hashval_position =
  :1, max_seq_number =:2
where
 obj#=:3 and inst_id = :4 and chain_id = :5                  and epoch# = :6

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   2000      8.41       8.58          0    1759772       2088        2000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2001      8.41       8.58          0    1759772       2088        2000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  UPDATE  BLOCKCHAIN_TABLE_CHAIN$ (cr=2 pr=0 pw=0 time=103 us starts=1)
         1          1          1   TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED BLOCKCHAIN_TABLE_CHAIN$ PARTITION: ROW LOCATION ROW LOCATION (cr=2 pr=0 pw=0 time=25 us starts=1 cost=1 size=1067 card=1)
         1          1          1    INDEX RANGE SCAN BLOCKCHAIN_TABLE_CHAIN$_IDX (cr=1 pr=0 pw=0 time=9 us starts=1 cost=1 size=0 card=1)(object id 11132)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  buffer busy waits                             108        0.00          0.00
  latch: cache buffers chains                     1        0.00          0.00
********************************************************************************

SQL ID: fh1yz4801af27 Plan Hash: 1612174689

select max_seq_number, hashval_position
from
 sys.blockchain_table_chain$ where obj#=:1 and                     inst_id =
  :2 and chain_id = :3 and epoch# = :4


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   2000      0.55       0.55          0          0          0           0
Fetch     2000      7.39       7.52          0    1758556          0        2000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     4001      7.95       8.08          0    1758556          0        2000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED BLOCKCHAIN_TABLE_CHAIN$ PARTITION: ROW LOCATION ROW LOCATION (cr=2 pr=0 pw=0 time=49 us starts=1 cost=1 size=1067 card=1)
         1          1          1   INDEX RANGE SCAN BLOCKCHAIN_TABLE_CHAIN$_IDX (cr=1 pr=0 pw=0 time=10 us starts=1 cost=1 size=0 card=1)(object id 11132)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  buffer busy waits                              80        0.00          0.00
  latch: cache buffers chains                     1        0.00          0.00

The trace of the single job for the same 2 statements contained the following:


SQL ID: catcycjs3ddry Plan Hash: 3098282860

update sys.blockchain_table_chain$ set                    hashval_position =
  :1, max_seq_number =:2
where
 obj#=:3 and inst_id = :4 and chain_id = :5                  and epoch# = :6

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   4000      1.76       1.85          0       8001       4140        4000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     4001      1.76       1.85          0       8001       4140        4000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  UPDATE  BLOCKCHAIN_TABLE_CHAIN$ (cr=2 pr=0 pw=0 time=102 us starts=1)
         1          1          1   TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED BLOCKCHAIN_TABLE_CHAIN$ PARTITION: ROW LOCATION ROW LOCATION (cr=2 pr=0 pw=0 time=26 us starts=1 cost=1 size=1067 card=1)
         1          1          1    INDEX RANGE SCAN BLOCKCHAIN_TABLE_CHAIN$_IDX (cr=1 pr=0 pw=0 time=12 us starts=1 cost=1 size=0 card=1)(object id 11132)

********************************************************************************

SQL ID: fh1yz4801af27 Plan Hash: 1612174689

select max_seq_number, hashval_position
from
 sys.blockchain_table_chain$ where obj#=:1 and                     inst_id =
  :2 and chain_id = :3 and epoch# = :4


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   4000      1.09       1.09          0          0          0           0
Fetch     4000      0.06       0.06          0       8000          0        4000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     8001      1.15       1.16          0       8000          0        4000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED BLOCKCHAIN_TABLE_CHAIN$ PARTITION: ROW LOCATION ROW LOCATION (cr=2 pr=0 pw=0 time=49 us starts=1 cost=1 size=1067 card=1)
         1          1          1   INDEX RANGE SCAN BLOCKCHAIN_TABLE_CHAIN$_IDX (cr=1 pr=0 pw=0 time=10 us starts=1 cost=1 size=0 card=1)(object id 11132)

I.e. there’s a massive difference in logical IOs and I could see in the trace that the SQLs became slower with each execution.

Summary: blockchain tables are a great technology, but as with any other technology you should know its limitations. There is an overhead when committing, and inserting into such tables from parallel sessions currently does not scale when committing. If you test blockchain tables, I recommend reviewing the PCTFREE setting of the blockchain table to avoid row migration.

The post Oracle Blockchain Tables: COMMIT-Time appeared first on the dbi services blog.

Delphix: a glossary to get started


By Franck Pachot

dbi services is a partner of Delphix – a data virtualization platform for easy cloning of databases. I’m sharing a little glossary to get started if you are not familiar with the terms you see in the documentation, the consoles or the logs.

Setup console

The setup console is the first interface you will access when installing the Delphix engine (“Dynamic Data Platform”). You import the .ova and start it. If you are on a network with DHCP you can connect to the GUI, like at http://192.168.56.111/ServerSetup.html#/dashboard. If not, you will use the console (also available through ssh) where you have a simple help. The basic commands are `ls` to show what is available – objects and operations, `commit` to validate your changes, and `up` to… go up.


network
setup
update
set hostname="fmtdelphix01"
set dnsServers="8.8.8.8, 1.1.1.1"
set defaultRoute="192.168.56.1"
set dnsDomain="tetral.com"
set primaryAddress="192.168.56.111/24"
commit

And anyway, if you started with DHCP, I think you need to disable it (network setup update ⏎ set dhcp=false ⏎ commit ⏎).

When in the console, the hardest thing for me is to find, on the QWERTY layout, the ” and = keys, as the others are on the numeric pad (yes… the numeric pad is still useful!).

Storage test

Once you have an IP address you can ssh to it for the command line console, with the correct keyboard and copy/paste. One thing that you can do only there, and only before the engine initialization, is storage tests:

delphix01> storage
delphix01 storage> test
delphix01 storage test> create
delphix01 storage test create *> ls
Properties
type: StorageTestParameters
devices: (unset)
duration: 120
initializeDevices: true
initializeEntireDevice: false
testRegion: 512GB
tests: ALL

Before that you can set a different duration and testRegion if you don’t want to wait. Then you type `commit` to start it (and check the ETA to know how many coffees you can drink) or `discard` to cancel the test.

Setup console

Then you will continue with the GUI, and the first initialization will run the wizard: choose “Virtualization engine”, set up the admin and sysadmin accounts (sysadmin is the one for this “Setup console” and admin the one for the “Management” console), NTP, Network, Storage, Proxy, Certificates, SMTP. Don’t worry, many things can be changed later: adding network interfaces, adding new disks (just click on rediscover and accept them as “Data” usage), adding certificates for HTTPS, getting the registration key, and adding users. The users here are for this Server Setup GUI or CLI console only.

GUI: Setup and Management consoles

The main reason for this blog post is to explain the names that can be misleading because the same things are named differently in different places. There are two graphical consoles for this engine once the setup is done:

  • The Engine Setup console with #serverSetup in the URL and SETUP subtitle in the DELPHIX login screen. You use SYSADMIN here (or another user that you will create in this console). You manage the engine here (network, storage,…)
  • The Management console with #delphixAdmin in the URL and the “DYNAMIC DATA PLATFORM” subtitle. You use the ADMIN user here (or another user that you will create in this console). You manage your databases here.

Once you get this, everything is simple. I’ll mention the few other concepts that may have a misleading name in the console or the API. Actually, there’s a third console, the Self Service console, with /jetstream/#mgmt in the URL, which you access from the Management console with a Management user. And of course there are the APIs. I’ll cover only the Management console in the rest of this post.

Management console

Its subtitle in the login screen is “Dynamic Data Platform” and it is actually the “Virtualization” engine. There, you use the “admin” user (not the “sysadmin” one), or any newly added one. Manage/Dashboard is the best place to start. The main goal of this post is to explain quickly the different concepts and their different names.

Environments

An Environment is the door to other systems. Think of “environments” as if they were called “hosts”. You will create an environment for source and target hosts. It needs only ssh access (the best is to add the Delphix ssh key to the target’s .ssh/authorized_keys). You can create a dedicated Linux user, or use the ‘oracle’ one for simplicity. It only needs a directory that it owns (I use “/u01/app/delphix”) where it will install the “Toolkit” (about 500MB used, but check the prerequisites). That’s sufficient for sources, but if you want to mount clones you need sudo privileges for that:

cat > /etc/sudoers.d/delphix_oracle <<'CAT'
Defaults:oracle !requiretty
oracle ALL=NOPASSWD: /bin/mount, /bin/umount, /bin/mkdir, /bin/rmdir, /bin/ps
CAT

And that’s all you need. There’s no agent running. All is run by the Delphix engine when needed, through ssh.
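
For the ssh key mentioned above, a minimal sketch of authorizing the engine on the target host could be (the key string is a placeholder, paste the public key displayed by your Delphix engine):

# run as the environment user (for example 'oracle') on the target host
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# placeholder key: replace with the engine's public key
echo "ssh-rsa AAAA...= delphix-engine" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys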

Well, I mention ssh only for operations, but the host must also be able to connect to the Delphix engine, to send backups of a dSource or mount an NFS share.

Additionally, you will need to ensure that you have enough memory to start clones, as I’m sure you will quickly get addicted to how easy it is to provision new databases. I use this to check available memory in small pages (MemAvailable) and large pages (HugePages_Free):

awk '/Hugepagesize:/{p=$2} / 0 /{next} / kB$/{v[sprintf("%9d GB %-s",int($2/1024/1024),$0)]=$2;next} {h[$0]=$2} /HugePages_Total/{hpt=$2} /HugePages_Free/{hpf=$2} {h["HugePages Used (Total-Free)"]=hpt-hpf} END{for(k in v) print sprintf("%-60s %10d",k,v[k]/p); for (k in h) print sprintf("%9d GB %-s",p*h[k]/1024/1024,k)}' /proc/meminfo|sort -nr|grep --color=auto -iE "^|( HugePage)[^:]*" #awk #meminfo

You find it there: https://franckpachot.medium.com/proc-meminfo-formatted-for-humans-350c6bebc380

As in many places, you name your environment (I put the host name and a little information behind, like “prod” or “clones”) and have a Notes textbox that can be useful for you or your colleagues. Data virtualization is about agility, and a self-documenting tool is the right place for this: you see the latest info next to the current status.

In each environment you can auto-discover the Databases. Promote one as a dSource. And if the database is an Oracle CDB you can discover the PDBs inside it.
You can also add filesystem directories. And this is where the naming confusion starts: they are displayed here, in environments, as “Unstructured Files” and you add them with “Add Database” and clone them to “vFiles”…

Datasets and Groups

And all those dSources, VDBs and vFiles are “Datasets”. If you click on “dSources”, “VDBs” or “vFiles” you always go to “Datasets”. And there, they are listed in “Groups”. And in each group you see the Dataset name with its type (like “VDB” or “dSource”) and status (like “Running” or “Stopped” for VDBs, or “Active” or “Detached” for dSources). The idea is that all Datasets have a Timeflow, Status and Configuration, because clones can also be sources for other clones. In the CLI console you see all Datasets as “source” objects, with a “virtual” flag that is true only for a VDB or an unlinked dSource.

Don’t forget the Notes in the Status panel. I put the purpose there (why the clone is created, who is the user,…) and state (if the application is configured to work on it for example).

About the groups, you arrange them as you want. They also have Notes to describe them. And you can attach default policies to them. I usually group by host, and by type of users (as they have different policies). And in the name of the group or the policy, I add a little detail to see, for example, which one is refreshed daily, or which one is a long-term used clone.

dSource

The first dataset you will have is the dSource. In a source environment, you have Dataset Homes (the ORACLE_HOME for Oracle) and from there a “data source” (a database) is discovered in an environment. And it will run a backup sent to Delphix (as a device of type TAPE, for Oracle, handled by the Delphix libobk.so). This is stored in the Delphix engine storage and the configuration is kept to be able to refresh later with incremental backups (called SnapSync, or DB_SYNC, or Snapshot with the camera icon). Delphix will then apply the incrementals on its copy-on-write filesystem. There’s no need for an Oracle instance to apply them: it seems that Delphix handles the proprietary format of Oracle backupsets. Of course, the archive logs generated during the backups must be kept, but applying them needs an Oracle instance, so they are just stored, to be applied on a thin provisioning clone or refresh. If there’s a large gap and the incremental takes long, then you may opt for a DoubleSync where only the second one, faster, needs to be covered by archived logs.

Timeflow

So you see the points of Sync as snapshots (camera icon) in the timeflow and you can provision a clone from them (the copy-paste Icon in the Timeflow). Automatic snapshots can be taken by the SnapSync policy and will be kept to cover the Retention policy (but you can mark one to keep longer as well). You take a snapshot manually with the camera icon.

In addition to the archivelog needed to cover the SnapSync, intermediate archive logs and even online logs can be retrieved with LogSync when you clone from an intermediate Point-In-Time. This, in the Timeflow, is seen with “Open LogSync” (an icon like a piece of paper) and from there you can select a specific time.

In a dSource, you select the snapshot, or point-in-time, to create a clone from it. It creates a child snapshot where all changes will be copy-on-write so that modifications on the parent are possible (the next SnapSync will write on the parent) and modifications on the child. And the first modification will be the apply of the redo log before opening the clone. The clone is simply an instance on an NFS mount to the Delphix engine.

VDB

Those clones become a virtual database (VDB) which is still a Dataset as it can be source for further clones.

They have additional options. They can be started and stopped as they are fully managed by Delphix (you don’t have to do anything on the server). And because they have a parent, you can refresh them (the round arrow icon). In the Timeflow, you see the snapshots as in all Datasets. But you also have the refreshes. And there is another operation related to this branch only: rewind.

Rewind

This is like a Flashback Database in Oracle: you mount the database from another point-in-time. This operation has many names. In the Timeflow the icon with two arrows pointing left is called “Rewind”. In the jobs you find “Rollback”. And none are really good names because you can move back and also forward in time (relative to the current state, of course).

vFiles

Another Dataset type is vFiles, where you can synchronize simple filesystems. In the environments, you find them in the Databases tab, under Unstructured Files instead of the Dataset Home (which is sometimes called Installation). And the directory paths are displayed as DATABASES. vFiles is really convenient when you store your file metadata in the database and the files themselves outside of it: you probably want to get them at the same point-in-time.

Detach or Unlink

When a dSource is imported in Delphix, it is a Dataset that can be source for a new clone, or to refresh an existing one. As it is linked to a source database, you can SnapSync and LogSync. But you can also unlink it from the source and keep it as a parent of clones. This is named Detach or Unlink operation.

Managed Source Data

Managed Source Data is the important metric for licensing reasons. Basically, Delphix ingests databases from dSources and stores them in a copy-on-write filesystem on the storage attached to the VM where the Delphix engine runs. The Managed Source Data is the sum of all root parents before compression. This means that if you ingested two databases DB1 and DB2 and have plenty of clones (virtual databases), you count only the size of DB1 and DB2 for licensing. This is really good because this is where you save the most: storage, thanks to compression and thin provisioning. If you drop the source database, for example DB2, but still keep clones of it, the parent snapshot must be kept in the engine and this still counts for licensing. However, be careful that as soon as a dSource is unlinked (when you don’t want to refresh from it anymore, and maybe even delete the source) the engine cannot query it to know the size. So this will not be displayed on the Managed Source Data dashboard but should still count for licensing purposes.

The article Delphix: a glossary to get started appeared first on Blog dbi services.

Oracle – testing resource manager plans?


By Franck Pachot

.
I never remember that in order to use instance caging you need to set a Resource Manager Plan but don’t need to set CPU_COUNT explicitly (was it the case in previous versions?). Here is how to test it quickly in a lab.


SQL> startup force
ORACLE instance started.

SQL> show spparameter resource_manager_plan

SID      NAME                          TYPE        VALUE
-------- ----------------------------- ----------- ----------------------------
*        resource_manager_plan         string

SQL> show spparameter cpu_count

SID      NAME                          TYPE        VALUE
-------- ----------------------------- ----------- ----------------------------
*        cpu_count                     integer

SQL> show parameter resource_manager_plan

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
resource_manager_plan                string

SQL> show parameter cpu_count

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cpu_count                            integer     16

I have a VM with 16 CPU threads; neither “cpu_count” nor “resource_manager_plan” is set in the SPFILE. I restarted the instance (it is a lab) to be sure that nothing is set with scope=memory.

sqlplus / as sysdba @ tmp.sql /dev/null
for i in {1..16} ; do echo "exec loop null; end loop;" | sqlplus -s "c##franck/c##franck" & done >/dev/null
sleep 10
( cd /tmp && git clone https://github.com/tanelpoder/tpt-oracle.git )
sqlplus / as sysdba @ /tmp/tpt-oracle/snapper ash=event+username 30 1 all < /dev/null
pkill -f oracleCDB1A

I run 32 sessions working in memory (simple PL/SQL loops) and look at the sessions with Tanel Poder’s snapper in order to show whether I am ON CPU or in a Resource Manager wait. And then I kill my sessions in a very ugly fashion (this is a lab).
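
A slightly less brutal alternative (a sketch, assuming the test sessions run as C##FRANCK) is to generate ALTER SYSTEM KILL SESSION commands from V$SESSION instead of killing the server processes:

SQL> select 'alter system kill session '''||sid||','||serial#||''' immediate;' cmd
  2  from v$session where username='C##FRANCK';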

Nothing set, all defaults: no instance caging

-- Session Snapper v4.31 - by Tanel Poder ( http://blog.tanelpoder.com/snapper ) - Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet! :)

-------------------------------------------------------------------------------
  ActSes   %Thread | EVENT                               | USERNAME
-------------------------------------------------------------------------------
   16.00   (1600%) | ON CPU                              | C##FRANCK
   16.00   (1600%) | ON CPU                              | SYS

--  End of ASH snap 1, end=2021-03-16 17:38:43, seconds=30, samples_taken=96, AAS=32

On my CPU_COUNT=16 (default but not set) instance, I have 32 sessions ON CPU -> no instance caging

Only CPU_COUNT set, no resource manager plan: no instance caging


SQL> alter system set cpu_count=16  scope=memory;
System altered.

I have set CPU_COUNT explicitly to 16 (just checking, because this is where I always have a doubt).

-- Session Snapper v4.31 - by Tanel Poder ( http://blog.tanelpoder.com/snapper ) - Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet! :)

-------------------------------------------------------------------------------
  ActSes   %Thread | EVENT                               | USERNAME
-------------------------------------------------------------------------------
   16.00   (1600%) | ON CPU                              | C##FRANCK
   16.00   (1600%) | ON CPU                              | SYS
     .03      (3%) | ON CPU                              |

--  End of ASH snap 1, end=2021-03-16 18:03:50, seconds=5, samples_taken=37, AAS=32

Setting CPU_COUNT manually doesn’t change anything here.


SQL> startup force
ORACLE instance started.

For the next test I reset it to the default, to show that CPU_COUNT doesn’t have to be set explicitly in order to enable instance caging.

Resource manager set to DEFAULT_CDB_PLAN with default CPU_COUNT: instance caging

SQL> alter system set resource_manager_plan=DEFAULT_CDB_PLAN scope=memory;

System altered.

I have set the default resource manager plan (I’m in multitenant and running from the CDB)


-- Session Snapper v4.31 - by Tanel Poder ( http://blog.tanelpoder.com/snapper ) - Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet! :)

-------------------------------------------------------------------------------
  ActSes   %Thread | EVENT                               | USERNAME
-------------------------------------------------------------------------------
   13.07   (1307%) | ON CPU                              | SYS
   12.20   (1220%) | resmgr:cpu quantum                  | C##FRANCK
    3.80    (380%) | ON CPU                              | C##FRANCK
    2.93    (293%) | resmgr:cpu quantum                  | SYS

--  End of ASH snap 1, end=2021-03-16 18:21:24, seconds=30, samples_taken=94, AAS=32

Here only 16 sessions on average are ON CPU and the others are scheduled out by Resource Manager. Note that there’s a higher priority for SYS than for my user.
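
If you don’t want to generate load just to check whether instance caging is in effect, a quick sketch is to look at V$RSRC_PLAN (my assumption here is that the CPU_MANAGED and INSTANCE_CAGING columns are available, as in recent versions):

SQL> select name, is_top_plan, cpu_managed, instance_caging from v$rsrc_plan;
SQL> show parameter cpu_count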

Resource manager set to DEFAULT_MAINTENANCE_PLAN with default CPU_COUNT: instance caging

SQL> alter system set resource_manager_plan=DEFAULT_MAINTENANCE_PLAN scope=memory;

System altered.

This time I have set the DEFAULT_MAINTENANCE_PLAN (again, I’m in multitenant and running from the CDB)


-- Session Snapper v4.31 - by Tanel Poder ( http://blog.tanelpoder.com/snapper ) - Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet! :)

-------------------------------------------------------------------------------
  ActSes   %Thread | EVENT                               | USERNAME
-------------------------------------------------------------------------------
   13.22   (1322%) | ON CPU                              | SYS
   12.31   (1231%) | resmgr:cpu quantum                  | C##FRANCK
    3.69    (369%) | ON CPU                              | C##FRANCK
    2.78    (278%) | resmgr:cpu quantum                  | SYS
     .07      (7%) | ON CPU                              |
     .04      (4%) | resmgr:cpu quantum                  |

--  End of ASH snap 1, end=2021-03-16 18:29:31, seconds=30, samples_taken=95, AAS=32.1

Here only 16 sessions on average are ON CPU and the others are scheduled out by Resource Manager. Again, there’s a higher priority for SYS than for my user.

Same in a PDB


for i in {1..16} ; do echo "exec loop null; end loop;" | ORACLE_PDB_SID=PDB1 sqlplus -s / as sysdba & done >/dev/null
for i in {1..16} ; do echo "exec loop null; end loop;" | sqlplus -s "c##franck/c##franck"@//localhost/PDB1 & done >/dev/null

I’ve changed my connections to connect to the PDB


-- Session Snapper v4.31 - by Tanel Poder ( http://blog.tanelpoder.com/snapper ) - Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet! :)


----------------------------------------------------------------------------------------
  ActSes   %Thread | EVENT                               | USERNAME             | CON_ID
----------------------------------------------------------------------------------------
   14.95   (1495%) | ON CPU                              | SYS                  |      3
   14.27   (1427%) | resmgr:cpu quantum                  | C##FRANCK            |      3
    1.73    (173%) | ON CPU                              | C##FRANCK            |      3
    1.05    (105%) | resmgr:cpu quantum                  | SYS                  |      3
     .01      (1%) | LGWR all worker groups              |                      |      0
     .01      (1%) | ON CPU                              |                      |      0

--  End of ASH snap 1, end=2021-03-16 19:14:52, seconds=30, samples_taken=93, AAS=32

I check the CON_ID to verify that I run in the PDB and here, with the CDB resource manager plan DEFAULT_MAINTENANCE_PLAN the SYS_GROUP (SYSDBA and SYSTEM) can take 90% of CPU. It is the same with DEFAULT_CDB_PLAN.

Adding a Simple Plan


SQL> alter session set container=PDB1;

Session altered.

SQL> BEGIN
  2  DBMS_RESOURCE_MANAGER.CREATE_SIMPLE_PLAN(SIMPLE_PLAN => 'SIMPLE_PLAN1',
  3     CONSUMER_GROUP1 => 'MYGROUP1', GROUP1_PERCENT => 80,
  4     CONSUMER_GROUP2 => 'MYGROUP2', GROUP2_PERCENT => 20);
  5  END;
  6  /

PL/SQL procedure successfully completed.

SQL> alter system set resource_manager_plan=SIMPLE_PLAN1 scope=memory;

System altered.

This is the simple plan example from the documentation.

-- Session Snapper v4.31 - by Tanel Poder ( http://blog.tanelpoder.com/snapper ) - Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet! :)

----------------------------------------------------------------------------------------
  ActSes   %Thread | EVENT                               | USERNAME             | CON_ID
----------------------------------------------------------------------------------------
    8.59    (859%) | ON CPU                              | SYS                  |      3
    8.51    (851%) | ON CPU                              | C##FRANCK            |      3
    7.49    (749%) | resmgr:cpu quantum                  | C##FRANCK            |      3
    7.41    (741%) | resmgr:cpu quantum                  | SYS                  |      3
     .04      (4%) | ON CPU                              |                      |      0

--  End of ASH snap 1, end=2021-03-16 19:29:54, seconds=30, samples_taken=92, AAS=32

Now, with this simple plan, everything changed. Level 1 gives 100% to SYS_GROUP, but it actually got 50%. Level 2 gives 80% and 20% to groups that are not used here. And level 3 gives 100% to OTHER_GROUPS. But those are the levels documented in the pre-multitenant documentation.

Mapping my user to one simple plan group


SQL> BEGIN
  2  DBMS_RESOURCE_MANAGER.create_pending_area;
  3  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
  4       (DBMS_RESOURCE_MANAGER.ORACLE_USER, 'C##FRANCK', 'MYGROUP2');
  5  DBMS_RESOURCE_MANAGER.validate_pending_area;
  6  DBMS_RESOURCE_MANAGER.submit_pending_area;
  7  END;
  8  /

PL/SQL procedure successfully completed.

I’ve assigned my C##FRANCK user to MYGROUP2, which gets 20% at level 2.
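
To verify which consumer group the sessions actually landed in, here is a simple sketch using the RESOURCE_CONSUMER_GROUP column of V$SESSION:

SQL> select username, resource_consumer_group, count(*)
  2  from v$session where username is not null
  3  group by username, resource_consumer_group;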


-- Session Snapper v4.31 - by Tanel Poder ( http://blog.tanelpoder.com/snapper ) - Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet! :)

----------------------------------------------------------------------------------------
  ActSes   %Thread | EVENT                               | USERNAME             | CON_ID
----------------------------------------------------------------------------------------
   13.20   (1320%) | ON CPU                              | C##FRANCK            |      3
   12.78   (1278%) | resmgr:cpu quantum                  | SYS                  |      3
    3.22    (322%) | ON CPU                              | SYS                  |      3
    2.80    (280%) | resmgr:cpu quantum                  | C##FRANCK            |      3
     .02      (2%) | resmgr:cpu quantum                  |                      |      0
     .01      (1%) | log file parallel write             |                      |      0
     .01      (1%) | ON CPU                              |                      |      0

--  End of ASH snap 1, end=2021-03-16 19:45:52, seconds=30, samples_taken=96, AAS=32

Now my user got 80% of the CPU resource and SYS only 20%

Surprised? In a CDB the “simple plan” is not the same as the one described in the pre-12c documentation – there’s only one level, and 80/20 shares.

The main message here is: test it because what you get may not be what you think… Test and keep it simple.

The article Oracle – testing resource manager plans? appeared first on Blog dbi services.

Some Artificial Intuition in Oracle SQL_ID?


By Franck Pachot

.
This post is something I discovered by chance when writing about tagging SQL statements with recognizable comments. We know that Oracle is introducing more and more artificial intelligence and machine learning in the database engine, but this is the first time I see something where random or hash values seem to bring some meaning.

There are two common ways to run a query and find it in V$SQL:

  • add some tag as a comment in the query
  • get the sql_id from the executed query

The idea of tags is not new at all, and not only for the Oracle database. For example, the Google SQL Insight handles some pre-formatted tags.


SQL> select current_timestamp from dual;

CURRENT_TIMESTAMP
---------------------------------------------------------------------------
01-APR-21 06.31.51.678270 AM +02:00

If I just want a unique tag to add to the query, a convenient way is to add the current timestamp. It can even be set as a substitution variable to put in a comment. With microseconds included, I know it will be unique.
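
Here is a minimal SQL*Plus sketch of that substitution-variable idea (the variable name and the format mask are my own choice):

SQL> column tag new_value tag
SQL> select to_char(systimestamp,'DD-MON-RR HH.MI.SSXFF AM TZH:TZM') tag from dual;
SQL> select * from EMP where JOB='PRESIDENT' /* &tag */;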


SQL> set sql feedback on sql_id

SQL> select * from EMP where JOB='PRESIDENT' /* 01-APR-21 06.31.51.678270 AM +02:00 */
  2  /

     EMPNO ENAME                            JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- -------------------------------- --------- ---------- --------- ---------- ---------- ----------
      7839 KING                             PRESIDENT            17-NOV-81       5000                    10

1 row selected.

SQL_ID: 5c0tty145sddp
SQL>

I have added the timestamp as a /* 01-APR-21 06.31.51.678270 AM +02:00 */ comment at the end of my statement. And, in order to show the second way to identify the query, I use the SQL*Plus SET FEEDBACK ON SQL_ID. There you can see 5c0tty145sddp as the SQL_ID for my query. This is fully reproducible across databases and versions: the SQL_ID depends only on the SQL text. If you have installed the SCOTT schema, you will get the same 5c0tty145sddp for this query. Funny coincidence: here 5c0tt is SCOTT in 1337 language… but, as you will see, I’m not sure it is a coincidence.



SQL> select sql_id,sql_fulltext from v$sql where sql_text like '%01-APR-21 06.31.51.678270 AM +02:00%' and users_executing=0;

SQL_ID        SQL_FULLTEXT
------------- --------------------------------------------------------------------------------
5c0tty145sddp select * from EMP where JOB='PRESIDENT' /* 01-APR-21 06.31.51.678270 AM +02:00 *

1 row selected.

If I can’t get the SQL_ID from the SET FEEDBACK ON, here is how I get it thanks to my tag, from V$SQL. The USERS_EXECUTING=0 predicate avoids returning my current query, which also matches the same LIKE predicate.

So in the SCOTT.EMP table the president is “KING”. Let’s change that to the real boss:


SQL> update EMP set ENAME='LARRY ELLISON' where JOB='PRESIDENT' /* 01-APR-21 06.32.11.963644 AM +02:00 */
  2  /

1 row updated.

SQL_ID: 0rac13hrmtrqm

Oh, that’s funny. I’ve put my new timestamp as a tag, but also displayed the SQL_ID from SQL*Plus. 0rac13hrmtrqm starts with ORACLE in 1337 language… and the new ENAME I’ve updated is the boss of ORACLE. Is this really a coincidence? Note that this is not exactly LEET (it would be “0r4c13” or “024c13”). The idea of SQL_ID, since it was introduced in 10g, was that the letters “eilo” are not there as they could be mistaken for 1 and 0 on some terminals.


SQL> select * from EMP where JOB='PRESIDENT' /* 01-APR-21 06.33.82.691464 AM +02:00 */
  2  /

     EMPNO ENAME                            JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- -------------------------------- --------- ---------- --------- ---------- ---------- ----------
      7839 LARRY ELLISON                    PRESIDENT            17-NOV-81       5000                    10

1 row selected.

SQL_ID: 0c12g0mwq09va
 

I’m probably interpreting this too far, but I read “0c12g” like OCI 2nd gen… Is this sql_id really a random hash? OCI 2nd gen is the Oracle Cloud Infrastructure, and I got this after changing the PRESIDENT to “LARRY ELLISON” in the EMP table.

Let’s try with another Cloud vendor.


SQL> update EMP set ENAME='LARRY PAGE' where JOB='PRESIDENT' /* 01-APR-21 06.35.66.5728590 AM +02:00 */
  2  /

1 row updated.

SQL_ID: g00g13fys2fqd

When changing the boss to LARRY PAGE, one of the Google founders, the SQL_ID for this update is starting by… g00g13 and again that’s how to write Google in the SQL_ID allowed characters.


SQL> select * from EMP where JOB='PRESIDENT' /* 01-APR-21 06.41.26.238008094 AM +02:00 */
  2  /

     EMPNO ENAME                            JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- -------------------------------- --------- ---------- --------- ---------- ---------- ----------
      7839 LARRY PAGE                       PRESIDENT            17-NOV-81       5000                    10

1 row selected.

SQL_ID: gcp12yf4n2r3c
SQL>

The abbreviation of the Google Cloud Platform: GCP, which was started in 2012… and the SQL_ID starts with gcp12…


SQL> update EMP set ENAME='BILL GATES' where JOB='PRESIDENT' /* 01-APR-21 06.30.35.4055430 AM +02:00 */
  2  /

1 row updated.

SQL_ID: bq8ru4dsjv67n

SQL> select * from EMP where JOB='PRESIDENT' /* 01-APR-21 06.31.18.2381810 AM +02:00 */
  2  /

     EMPNO ENAME                            JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- -------------------------------- --------- ---------- --------- ---------- ---------- ----------
      7839 BILL GATES                       PRESIDENT            17-NOV-81       5000                    10

1 row selected.

SQL_ID: azur3g4pr7mng

This is Azure, the Microsoft cloud, when I’ve set the President to BILL GATES…


SQL> update EMP set ENAME='JEFF BEZOS' where JOB='PRESIDENT' /* 01-APR-21 06.45.35.4055430 AM +02:00 */
  2  /

1 row updated.

SQL_ID: 4m4z0n7sadpm2

When I changed the president to JEFF BEZOS, the SQL_ID starts with 4m4z0n so let’s run the same select as before:


SQL> select * from EMP where JOB='PRESIDENT' /* 01-APR-21 06.51.00.0102180 AM +02:00 */
  2  /

     EMPNO ENAME                            JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- -------------------------------- --------- ---------- --------- ---------- ---------- ----------
      7839 JEFF BEZOS                       PRESIDENT            17-NOV-81       5000                    10

1 row selected.

SQL_ID: awss8rgcfkprm

And when querying it, that’s clearly starting with AWS.

The requirement for SQL_ID is that it identifies a SQL statement, but there’s no need for it to be really random, just that one statement always generates the same SQL_ID. You can copy-paste those queries on any supported Oracle database and see the same SQL_ID, whatever the version. Please tell me in the comments if you don’t see the same. The only thing I did on the SCOTT schema was increasing the ENAME column to allow longer names: ALTER TABLE EMP MODIFY ENAME VARCHAR2(32);

Here is how you can run all that on db<>fiddle:

https://dbfiddle.uk/?rdbms=oracle_18&fiddle=c0bb7dad6447ed4b2ff9bc7475496aa6

Do you think it is a coincidence? Or a random value influenced by some Artificial Intelligence on the query text and parameters, to add a little business meaning encoded in LEET? Did you also find some funny SQL_ID? Please share in comments.

The article Some Artificial Intuition in Oracle SQL_ID? appeared first on Blog dbi services.

Rename your DB clone completely


Introduction

Have you ever renamed your database?
When you cloned a DB, you probably started it with a new name and cloning (duplicating) with RMAN provided a new DBID, right?
So why should we need to rename a DB?

There may be several reasons:
  • Due to changed company rules, you need to rename all your DBs
  • After a restore, you need to run the DB with another name
  • After a snapshot clone (see my recent blog), you want to run it with a new name

First, I want to mention that all I found on the Internet was incomplete – even the Oracle® Database Utilities guide and MOS Doc Id 2258871.1. Incomplete regarding all the items that should be changed together with the DB-name, or simply not working. Therefore, I write this article with the word “completely” at the end.

Starting point

We did a snapshot clone with our fancy new PureStorage.
The Oracle release and edition must be the same as on the source server, of course.
The oratab, spfile, etc. are created and $ORACLE_SID is set to T01A (the SID will later be changed to T01B).
Just to remember the commands – we did a snapshot and copied the volumes to the target server:

   purepgroup snap --suffix $SUFFIX ${SrcPG} --apply-retention
   purevol copy --force ${SrcPG}.$SUFFIX.$SrcVol1 $TgtVol1
   purevol copy --force ${SrcPG}.$SUFFIX.$SrcVol1 $TgtVol2
   purevol copy --force ${SrcPG}.$SUFFIX.$SrcVol1 $TgtVol3

Then we mounted the volumes and were already able to start the DB – done 😊
But here the story starts…

1. Clean shutdown

First, before a rename, the DB needs a “clean” shutdown.
So simply start it up and, while the DB is open, create a pfile and a script to rename all DB-files for later use,
then shut down the DB with “immediate” (not “abort”).
Some blogs mention backing up the controlfile and/or switching the logfile, but this is not required.
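
As an illustration, a minimal generator for such a rename script could look like this (the spool name, the SET options and the &1/&2 parameters are assumptions; the script used later in this post may differ):

-- script_rename.sql: run as "@script_rename.sql SRC TGT" while the DB is open
set pagesize 0 linesize 200 feedback off trimspool on
spool rename_db_files.sql
select 'alter database rename file '''||name||''' to '''||replace(name,'&1','&2')||''';'
from (select name from v$datafile
      union all select name from v$tempfile
      union all select member as name from v$logfile);
spool off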

2. The configuration files

The clone commands above “copied”:
  • /u02/oradata containing the tablespace files
  • /u03/oradata containing the redo logs
  • /u90/fra containing the backups

In your environment you may have more volumes. Feel free to adapt …

For a fresh clone, we will also need the spfile and orapw from either the ADMIN or ORACLE_HOME directory, and the sqlnet configuration files.
When you are doing frequent clones, you probably have these files already on your target server.

Note: From here on I will use “SRC” for the original, the source DB-name and
“TGT” for the new, the target DB-name.

 
Here in our example, we use:

$ADMIN = /u01/app/oracle/admin/$ORACLE_SID/pfile

for spfile and orapwd and

$TNS_ADMIN = /u01/app/oracle/network/admin

So, we need links to the files for both old-SID and new-SID and set the variable:

ln -fs $ADMDIR/$ORACLE_SRC/pfile/orapw$ORACLE_SRC $ORACLE_HOME/dbs/orapw$ORACLE_SRC
ln -fs $ADMDIR/$ORACLE_TGT/pfile/orapw$ORACLE_TGT $ORACLE_HOME/dbs/orapw$ORACLE_TGT
ln -fs $ADMDIR/$ORACLE_SRC/pfile/spfile$ORACLE_SRC.ora $ORACLE_HOME/dbs/spfile$ORACLE_SRC.ora
ln -fs $ADMDIR/$ORACLE_TGT/pfile/spfile$ORACLE_TGT.ora $ORACLE_HOME/dbs/spfile$ORACLE_TGT.ora
export TNS_ADMIN=/u01/app/oracle/network/admin
/usr/bin/scp -pr oracle@$SRCHOST:$ADMDIR/$ORACLE_SRC* $ADMDIR    (***)
/usr/bin/scp -pr oracle@$SRCHOST:$TNS_ADMIN/*.ora $TNS_ADMIN

*** (if you do not want to copy a bulk load of *.aud files, please modify the scp command)

For completeness I would like to mention directories containing the $DB_UNIQUE_NAME like
/u01/app/oracle/admin/$DB_UNIQUE_NAME/xdb_wallet or 
/u01/app/oracle/diag/rdbms/$DB_UNIQUE_NAME/…
which are handled in my script.

3. DBNEWID

Now you can run the “main” command.

 nid target=/ dbname=$ORACLE_TGT

The output will tell you something about changed file names:

…
Control File /u02/oradata/T01A/control03T01A.dbf - modified
Datafile /u02/oradata/T01A/system01T01A.db - dbid changed, wrote new name
…
Instance shut down
Database name changed to T01B.
Modify parameter file and generate a new password file before restarting.
Database ID for database T01B changed to 2621333956.
All previous backups and archived redo logs for this database are unusable.
Database is not aware of previous backups and archived logs in Recovery Area.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database name and ID.
DBNEWID - Completed succesfully.

Be aware – the file names were NOT changed. They are just registered in the controlfile.
And it tells us that all the archives and backups are obsolete.

4. Rename and adapt all and everything

Most blogs tell you ALTER SYSTEM SET DB_NAME=T01B; and start your DB  ⇒  😥

  • adapt the new pfile:
    in step 1 we have already created the new parameter-file with the adequate name:
    create pfile='$ADMDIR/$ORACLE_SRC/pfile/init$ORACLE_TGT.ora' from spfile;

    on OS-level, we replace all SRC by TGT entries (not only the DB-name) and kick out all the heading lines:

    sed -i '/.archive_lag_target/,$!d' $ADMDIR/$ORACLE_SRC/pfile/init$ORACLE_TGT.ora
    sed -i "s/$ORACLE_SRC/$ORACLE_TGT/g" $ADMDIR/$ORACLE_SRC/pfile/init$ORACLE_TGT.ora
    sed -i "s/$SRCHOST/$TGTHOST/g" $ADMDIR/$ORACLE_SRC/pfile/init$ORACLE_TGT.ora
  • Create a new Password file:

    Since 12.2. the password must contain at least 8 characters and at least 1 special character.

    orapwd file=$ADMDIR/$ORACLE_SRC/pfile/orapw$ORACLE_TGT force=y password=Manager_19c entries=3
  • complete the oratab:
    echo "$ORACLE_TGT:$ORACLE_HOME:N" >> /etc/oratab
  • adapt the script to rename all DB-files:
    in step 1 we have already created the script to rename the DB-files. You probably know that, since 12c, we can move datafiles online, but this would be a resource-intensive operation. Moving on OS-level is a “cheap” command – not even the inode will be changed.
    And renaming while the DB is in mount state is a lightweight operation as well.
    @script_rename.sql $ORACLE_SRC $ORACLE_TGT  ⇒  created script:  rename_db_files.sql

    We just need to remove the lines containing “old   1: …” and “new   1: …”:

    sed '/old/d;/new/d' -i rename_db_files.sql
  • rename the path/filenames on OS-level – we can modify the path as well as the filenames in one command. I love “sed”.

    (here $filesys1 and $filesys2 are our volumes, /u0*/oradata)

    cd $filesys1
     find . -type f -name "*$ORACLE_SRC*" | while read FN; do
       DN=$(dirname "$FN")
       BFN=$(basename "$FN")
       NFN=$(echo ${BFN}|sed "s/$ORACLE_SRC/$ORACLE_TGT/g")
       mv "$DN/$BFN"  "$DN/$NFN"
     done
    #
     cd $filesys2
     find . -type f -name "*$ORACLE_SRC*" | while read FN; do
       DN=$(dirname "$FN")
       BFN=$(basename "$FN")
       NFN=$(echo ${BFN}|sed "s/$ORACLE_SRC/$ORACLE_TGT/g")
       mv "$DN/$BFN"  "$DN/$NFN"
     done
  • adapt the SQL-net files
    replace all ORACLE_SID and Hostnames in listener.ora:
    cd $TNS_ADMIN
      sed -i "s/$SRCHOST/$TGTHOST/g"           $TNS_ADMIN/listener.ora
      sed -i "s/$ORACLE_SRC/$ORACLE_TGT/g"     $TNS_ADMIN/listener.ora

    adapt tnsnames.ora and append a tns-entry for the new DB: (I tend to leave the old one unchanged)

    SER=$(grep service_names $ADMDIR/$ORACLE_TGT/pfile/init$ORACLE_TGT.ora |awk -F\' '{print $2}')
     echo "############"    >> $TNS_ADMIN/tnsnames.ora
     echo "$ORACLE_TGT="    >> $TNS_ADMIN/tnsnames.ora
     echo " (DESCRIPTION="  >> $TNS_ADMIN/tnsnames.ora
     echo "   (ADDRESS=(PROTOCOL=TCP)(HOST=$TGTHOST)(PORT=1521)) "  >> $TNS_ADMIN/tnsnames.ora
     echo "   (CONNECT_DATA=(SERVICE_NAME=$SER))) "                 >> $TNS_ADMIN/tnsnames.ora
     echo "############"    >> $TNS_ADMIN/tnsnames.ora

    Restart the listener:

      lsnrctl stop
      lsnrctl start

clean up!   clean up!   clean up!

  • Clean and copy the “old” ADMIN-directory

    (I tend to leave the old one and remove it later)

    rm -f $ADMDIR/$ORACLE_SRC/adump/*.aud
     cd $ADMDIR
     ls -d $ORACLE_SRC* | while read FN; do
      BFN=$(basename "$FN")
      NFN=$(echo ${BFN}|sed "s/$ORACLE_SRC/$ORACLE_TGT/g")
      cp -ar "$BFN"  "$NFN"
      echo  "copied $BFN to  $NFN"
     done
  • clean the backups:
    whether your backups are in $FRA/$ORACLE_SID or in $FRA/$DB_UNIQUE_NAME, you can remove them all by:
    cd $FRA
     ls -d $ORACLE_SRC* | while read FN; do
      BFN=$(basename "$FN")
      NFN=$(echo ${BFN}|sed "s/$ORACLE_SRC/$ORACLE_TGT/g")
      if [ -d $BFN ]; then
       echo "delete old $BFN ..."
       rm -rf  ${BFN}
      fi
      mkdir  ${NFN}
     done
  • clean the DIAG-directory:
    you could simply remove the $ORACLE_BASE/diag/rdbms completely. If you have other DBs running on this machine, you do not want to delete all of them.
    find $ORACLE_BASE/diag/rdbms -name "${ORACLE_SRC,,}*" -exec rm -rf {} \;
    find $ORACLE_BASE/diag/rdbms -name "${ORACLE_TGT,,}*" -exec rm -rf {} \;
  • at last, you set the ORACLE_SID:
    export ORACLE_SID=$ORACLE_TGT

Finalize

  • Finally, you should be able to start the cloned DB with its new name.
    sqlplus / as sysdba
    create spfile='$ADMDIR/$ORACLE_TGT/pfile/spfile$ORACLE_TGT.ora' from pfile='$ADMDIR/$ORACLE_TGT/pfile/init$ORACLE_TGT.ora';
    startup nomount;
    alter database mount;
    select name, dbid, created, open_mode  from V$DATABASE;

    rename all DB-files, including TEMP-files and redo-logs.

    @$PWD/rename_db_files.sql
    alter database open resetlogs;
    exit

Do not forget to save your result and check all the files and the FRA.

rman target /
BACKUP as compressed backupset DATABASE plus ARCHIVELOG delete all input;
exit

Remarks

  • These steps and code snippets cover the tasks for a standalone single instance DB.
  • In Data Guard environments you must be careful about renaming and moving files. The STANDBY_FILE_MANAGEMENT parameter determines how file changes on the primary server are applied to the standby server.
  • When using Oracle Managed Files (OMF), the command in script_rename.sql is even simpler. You do not specify “… rename ‘oldpath/filename’ TO ‘newpath/filename’” because Oracle knows where to put and how to name files. You simply specify: “… rename ‘oldpath/filename’”.
  • When working on a Container Database, run the NID command with the parameter
    PDB=[ALL | NONE] to change (or leave) all PDBs together with the container-DB (see the example after this list).
    Funnily, Oracle recommends that you use PDB=ALL, but PDB=NONE is the default. (See Database Utilities.)
  • When using Global Database Names, double check the init$ORACLE_TGT.ora before creating a spfile.
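
For a CDB, the DBNEWID call from step 3 could then look like this (just a sketch; check the Database Utilities guide for the exact syntax of your release):

 nid target=/ dbname=$ORACLE_TGT pdb=ALL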

The article Rename your DB clone completely appeared first on Blog dbi services.

An example of ORA-01152: file … was not restored from a sufficiently old backup


By Franck Pachot

.


Oracle Error: 
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below 
ORA-01152: file ... was not restored from a sufficiently old backup 
ORA-01110: data file ...

This error is one of the most misunderstood recovery errors. It can happen in different cases, but I have here a simple example to reproduce it, with some comments and queries to see the state. I run an example to avoid long theory, but let’s put the main media recovery concepts in a few words. [Note that even if my idea was to put only a few words… it is longer than expected, so skip to the example if that’s what you were expecting]

Except when you are doing direct-path inserts, the database files (datafiles) are written from the buffer cache, by the database writer, asynchronously, without caring about transactions. At any point in time, in a database that is opened, you may have committed changes for which the changed block is not yet written to the datafiles. You may also have changes written to the datafiles which are not committed yet or will be rolled back later. You may also have only part of an atomic change. There are points (checkpoints) where the database ensures that what has been changed in memory is written back to the datafiles. Those points are important as they mark the start of the redo log stream that is required to ensure that the changes done in memory can be applied (recovered) to the datafiles in case of database crash. The controlfile keeps track of this, but the checkpoint information (time, SCN, redo threads,…) is also written to the datafile headers so that recovery is possible even if the current controlfile is lost, or restored from a backup.

This is for instance failure. But the stream of redo may also be used in case of media recovery, where you had to restore a datafile from a previous backup, in order to bring the datafile to the current state. Then the datafile header shows a checkpoint from a previous point in time (before the last checkpoint of the current state) and recovery is possible because the redo from before the last checkpoint has also been backed up as archived logs. And finally, there’s also the possibility that all datafiles are from past checkpoints, because you want to do Point-In-Time Recovery (PITR), either by restoring all datafiles from a previous backup and recovering up to this PIT only, or by using flashback database to bring them back, which recovers to this PIT as well. In those cases, you will open a new incarnation of the database, like a branch in the timeline, and OPEN RESETLOGS to explicitly tell Oracle that you know that your datafile state is different from the current state as known in the controlfile and the online redo logs (which will then be discarded).

However, even if you have this possibility, with a datafile PIT state that does not match the controlfile one, there are some conditions that must be met in order to open the database without inconsistencies. Basically, the datafiles must be consistent among themselves. Because table data are in some datafiles, indexes may be in others, metadata in another one, undo vectors elsewhere, and the same for foreign key parent tables… So even if all recovery is correct (no changes lost, thanks to redo log recovery), the database may refuse an OPEN RESETLOGS. And that’s basically what ORA-01152 tells you: your recovery is ok for each file, but the point you recovered to is not globally the same consistent state.

So, there are two major pieces of information that are checked by Oracle when opening the database. One is about the consistency of each datafile and the other is about the consistency of the database. When the database is opened, there may be some changes that are from after the checkpoint recorded in the datafile header, because dbwriter continuously does its job. This is known as the datafile being “fuzzy”. Only when the database is closed is the fuzzy bit cleared, to say that all blocks of the datafile are consistent with the checkpoint time. That’s for each datafile’s consistency. And in order to leave a clean state, closing the database also does a last checkpoint so that all datafiles are consistent. This can be opened without the need to apply any redo log, given that you want to get the database at the same point in time as it was closed. But once closed, you can do things that Oracle doesn’t know about, like restoring the files from a previous backup, even from a hot backup where the files were fuzzy. So, in any case, when Oracle opens the database it checks the datafile headers, as if it were not cleanly closed.


SQL> host mkdir -p /u90/{close,open}/{before,after}

I create directories to put a backup of a datafile. I’ll back up the datafile in the open or mount state (to have fuzzy and non-fuzzy backups), and from two points in time (‘before’ and ‘after’ the state I want to open resetlogs).


SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

    OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
_____________ ______________ _____________________ __________________ ______________________
READ WRITE           1692588               1692038            1691878                1692586

SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
     natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ              CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ _______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL           NO     YES    30-apr-2021 10:38:02        8196 1692038             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     05-aug-2019 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     05-aug-2019 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     05-aug-2019 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL           NO     YES    30-apr-2021 10:38:02        8196 1692038             183 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0

11 rows selected.

I’ll run those queries each time. They show the checkpoint SCN of my database, from the controlfile, and from my datafile headers. The FUZZY=YES column tells me that the database is opened, which means that there are changes in the datafiles that were written after the checkpoint. This is also visible with the flag 4 in FHSTA (or 8196, because 8192 is another flag for the SYSTEM datafiles). There are files that are not fuzzy even if the database is opened, because their tablespaces are read-only, PDB$SEED in this example. You can see that their checkpoint time is from a long time ago because they haven’t been opened read-write since then. As they are not fuzzy, and checkpointed at the same SCN, they are consistent. And as they have been read-only since then, Oracle knows that they don’t need any recovery. I think we have a clue about this with the RECOVER column being null.


SQL> alter database begin backup;

Database altered.

SQL> host cp /u02/oradata/CDB19/users01CDB19.dbf /u90/open/before

SQL> alter database end backup;

Database altered.

I’ve taken a hot backup of this datafile. The backup mode ensures that recovery will be possible, but the file is still online, and fuzzy, with db writer writing to it. So the header still shows it fuzzy and with the last checkpoint SCN.


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.

Total System Global Area   2147482744 bytes
Fixed Size                    9137272 bytes
Variable Size               520093696 bytes
Database Buffers           1610612736 bytes
Redo Buffers                  7639040 bytes
Database mounted.

I’ve closed my database cleanly and started it in mount, which means not opened.


SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

   OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
____________ ______________ _____________________ __________________ ______________________
MOUNTED                   0               1692886            1692000                1692860

SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
          natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL           NO     NO     2021-04-30 10:56:41        8192 1692886             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL           NO     NO     2021-04-30 10:56:41           0 1692886             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL           NO     NO     2021-04-30 10:56:41           0 1692886             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL           NO     NO     2021-04-30 10:56:41           0 1692886             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39        8192 1692776             183 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0

11 rows selected.

After a clean shutdown, no files are fuzzy and all files were checkpointed at the same time: SCN 1692886, that we see in v$database and v$datafile_header. All consistent. You can see that the PDB datafiles have a SCN a little bit earlier, but this is because the PDBs are closed before the database is. Exactly the same as the read-only PDB$SEED: their checkpoint is consistent within the container but earlier than the database’s, and the RECOVER column is null.


SQL> host cp /u02/oradata/CDB19/users01CDB19.dbf /u90/close/before

I’ve taken another backup of my datafile here, now in a non fuzzy state (like a cold backup)


SQL> create restore point now guarantee flashback database;
Restore point NOW created.

I’m taking a snapshot of my database here as I’ll come back to this point. This PIT that I call ‘now’ is where I’ll try to restore the datafile from backups taken before (what I just did) or after (what I’m going to do before reverting back to this snapshot).
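
Just to double-check the guaranteed restore point before going further, a small sketch querying V$RESTORE_POINT:

SQL> select name, scn, time, guarantee_flashback_database from v$restore_point;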


SQL> alter database open;
Database altered.

SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

    OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
_____________ ______________ _____________________ __________________ ______________________
READ WRITE           1693832               1692889            1692000                1692999

SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
               natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL           NO     YES    2021-04-30 11:03:00        8196 1692889             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL           NO     YES    2021-04-30 11:03:00           4 1692889             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL           NO     YES    2021-04-30 11:03:00           4 1692889             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL           NO     YES    2021-04-30 11:03:00           4 1692889             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39        8192 1692776             183 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0

11 rows selected.

SQL> alter system checkpoint;

System CHECKPOINT altered.

SQL> alter database begin backup;

Database altered.

SQL> host cp /u02/oradata/CDB19/users01CDB19.dbf /u90/open/after

SQL> alter database end backup;

Same as I did before, a hot backup of my datafile, but from a later point in time.


Database altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.

Total System Global Area   2147482744 bytes
Fixed Size                    9137272 bytes
Variable Size               520093696 bytes
Database Buffers           1610612736 bytes
Redo Buffers                  7639040 bytes
Database mounted.

SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

   OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
____________ ______________ _____________________ __________________ ______________________
MOUNTED                   0               1694252            1692000                1693891


SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
     natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL           NO     NO     2021-04-30 11:05:43        8192 1694252             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL           NO     NO     2021-04-30 11:05:43           0 1694252             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL           NO     NO     2021-04-30 11:05:43           0 1694252             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL           NO     NO     2021-04-30 11:05:43           0 1694252             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39        8192 1692776             183 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0

SQL> host cp /u02/oradata/CDB19/users01CDB19.dbf /u90/close/after

And finally a cold backup from a later point in time.


SQL> host ls -l /u90/{close,open}/{before,after}
/u90/close/after:
total 5140
-rw-r-----. 1 oracle oinstall 5251072 Apr 30 11:07 users01CDB19.dbf

/u90/close/before:
total 5140
-rw-r-----. 1 oracle oinstall 5251072 Apr 30 11:00 users01CDB19.dbf

/u90/open/after:
total 5140
-rw-r-----. 1 oracle oinstall 5251072 Apr 30 11:05 users01CDB19.dbf

/u90/open/before:
total 5140
-rw-r-----. 1 oracle oinstall 5251072 Apr 30 10:55 users01CDB19.dbf

I have 4 backups, from before and after, and in a clean or fuzzy state.


SQL> flashback database to restore point now;

Flashback succeeded.

Now back to my snapshot so that my current state is after the ‘before’ backup and before the ‘after’ backup. Sorry for this bad description of it, time travel is never easy to explain.


SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

   OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
____________ ______________ _____________________ __________________ ______________________
MOUNTED                   0               1694252            1692000                1692886


SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
     natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41        8192 1692886             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39        8192 1692776             183 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
11 rows selected.

So, here we are, in closed (mount) state. No files are open and no files are fuzzy. The checkpoint time is consistent; we will check only CDB$ROOT now, as we know the other containers were checkpointed earlier when they were closed. So the CDB$ROOT checkpoint is at 10:56:41, SCN 1692886, which matches the controlfile SCN. I could OPEN RESETLOGS this database without any recovery, but that's not what I want to show.


SQL> host cp /u90/open/before/users01CDB19.dbf /u02/oradata/CDB19/users01CDB19.dbf

I restored the datafile from the previous hot backup (older than my current state, and fuzzy)


SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
  2* natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41        8192 1692886             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL                  YES    2021-04-30 10:53:23           1 1692602             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 11:11:00        8192 1692776               1 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 11:11:00           0 1692776               1 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 11:11:00           0 1692776               1 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 11:11:00           0 1692776               1 0

11 rows selected.

The file header shows the fuzzy state (FUZZY=YES), which means that Oracle needs to apply redo, starting from the checkpoint SCN 1692602, until it brings the file to a consistent state: at least past the END BACKUP marker, where the fuzziness ends.


SQL> alter database open resetlogs;

Error starting at line : 1 in command -
alter database open resetlogs
Error report -
ORA-01195: online backup of file 7 needs more recovery to be consistent
ORA-01110: data file 7: '/u02/oradata/CDB19/users01CDB19.dbf'
01195. 00000 -  "online backup of file %s needs more recovery to be consistent"
*Cause:    An incomplete recovery session was started, but an insufficient
           number of logs were applied to make the file consistent. The
           reported file is an online backup which must be recovered to the
           time the backup ended.
*Action:   Either apply more logs until the file is consistent or
           restore the database files from an older backup and repeat recovery.
SQL>

I'll not do this recovery; I'm just showing the error message. This is ORA-01195, which tells you that you need more recovery to clear the fuzziness.
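For reference, clearing ORA-01195 means applying redo to the restored file, from its checkpoint SCN until at least past the end of the hot backup, before retrying the OPEN RESETLOGS. A minimal, hedged sketch (the exact UNTIL clause depends on the point in time you target and on the redo that is still available):


SQL> recover database until change 1692887;   -- just past the SCN of the other datafiles
SQL> alter database open resetlogs;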


SQL> host cp /u90/close/before/users01CDB19.dbf /u02/oradata/CDB19/users01CDB19.dbf

I’ve restored from the cold backup here. So no fuzzy flag in the header.


SQL> alter database open resetlogs;

Database altered.

SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
  2* natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL           NO     YES    2021-04-30 11:48:42        8196 1692890               1 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL           NO     YES    2021-04-30 11:48:42           4 1692890               1 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL           NO     YES    2021-04-30 11:48:42           4 1692890               1 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL           NO     YES    2021-04-30 11:48:42           4 1692890               1 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 11:48:45        8192 1693089               1 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 11:48:45           0 1693089               1 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 11:48:45           0 1693089               1 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 11:48:45           0 1693089               1 0

11 rows selected.

SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

    OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
_____________ ______________ _____________________ __________________ ______________________
READ WRITE           1694039               1692890                  0                1693104

From this cold backup, I was able to OPEN RESETLOGS, because this cold backup was taken while the database was closed, so all datafiles are consistent.

This was to show the ORA-01195 meaning: a datafile needs to be recovered to be consistent by itself.

Now we will see the consistency with the other datafiles by restoring a backup from the future.


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area   2147482744 bytes
Fixed Size                    9137272 bytes
Variable Size               520093696 bytes
Database Buffers           1610612736 bytes
Redo Buffers                  7639040 bytes
Database mounted.
SQL> flashback database to restore point now;

Flashback succeeded.

SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

   OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
____________ ______________ _____________________ __________________ ______________________
MOUNTED                   0               1694381                  0                1692886

SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
     natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41        8192 1692886             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 11:48:35        8192 1692776               1 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0

11 rows selected.

As my OPEN RESETLOGS was successful, I flashback the database again to go back to the same point in time for my experiment.


SQL> host cp /u90/close/after/users01CDB19.dbf /u02/oradata/CDB19/users01CDB19.dbf

I’ve restored the cold backup (not fuzzy) but from a checkpoint that happened after my current state.


SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
     natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41        8192 1692886             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL                  NO     2021-04-30 11:05:43           0 1694252             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 11:48:35        8192 1692776               1 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0

11 rows selected.

SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

   OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
____________ ______________ _____________________ __________________ ______________________
MOUNTED                   0               1694381                  0                1692886

You can see the datafile is not fuzzy but has a checkpoint at 11:05:43, SCN 1694252, whereas all the other datafiles for this container, and the controlfile, are at 10:56:41, SCN 1692886. My file is from a state in the future of the other ones.


SQL> alter database open resetlogs;

Error starting at line : 1 in command -
alter database open resetlogs
Error report -
ORA-01152: file 7 was not restored from a sufficiently old backup
ORA-01110: data file 7: '/u02/oradata/CDB19/users01CDB19.dbf'
01152. 00000 -  "file %s was not restored from a sufficiently old backup "
*Cause:    An incomplete recovery session was started, but an insufficient
           number of logs were applied to make the database consistent. This
           file is still in the future of the last log applied. The most
           likely cause of this error is forgetting to restore the file
           from a backup before doing incomplete recovery.
*Action:   Either apply more logs until the database is consistent or
           restore the database file from an older backup and repeat recovery.

Here is ORA-01152, and the message may be misleading because there can be several reasons. Maybe the problem is the file mentioned, because you restored it from a backup that is too recent compared to the others and to the point in time you want to open resetlogs. Or maybe it was not restored at all, and it is the current datafile that remains there because you forgot to restore it. Or maybe you want to go to a further point in time by recovering the other datafiles up to the same PIT as this one.

I'll not go further here; this blog post is already too long. Of course, I'll get the same error if I restore the fuzzy backup from the future. When you encounter this error, you should think about the Point In Time you want to recover to. Either you have the right PIT, and then you need to restore a backup of this datafile from before this point in time; or you want to recover to a further point in time, to reach the state of this datafile. The error message supposes you have recovered to the right point in time but didn't restore the right file.
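For reference, if the current point in time is the one you want, the way out of ORA-01152 is to restore an older backup of the offending datafile and recover everything up to that PIT before the OPEN RESETLOGS. A hedged RMAN sketch (the SCN is taken from this example, and it assumes such an older backup is available in the repository):


RMAN> run {
        set until scn 1692886;
        restore datafile 7;
        recover database;
      }
RMAN> alter database open resetlogs;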

I tried to summarize this situation in a tweet:


The article An example of ORA-01152: file … was not restored from a sufficiently old backup first appeared on the dbi services Blog.


Delphix and upgrading the clones (Oracle)


By Franck Pachot

.
Delphix is a tool for easy cloning of databases. The idea is that everything is automated: the user can create a clone, rewind or refresh it with one click. However, I was surprised that the following common scenario is not managed by the Delphix engine:

  • You clone from production, say Oracle 12c
  • You upgrade the clone, say Oracle 19c
  • You test there
  • You refresh the clone from production, obviously being back to Oracle 12c

This is very common. You use clones to test the application, and testing on an upgraded database version is probably the most common clone usage.

So, there's an ‘Upgrade’ button, just next to the refresh one, but this is a false friend. It doesn't upgrade anything; it just sets the Oracle Home known by Delphix to the new one after you have upgraded the database yourself, because Delphix does not detect this change automatically but requires it in order to run the toolkit actions. No problem: the upgrade of Oracle is easy with Oracle AutoUpgrade, and that's just one additional manual action. So why is it called ‘Upgrade’ and not ‘Change Oracle Home’? That's the problem: you can change only to a newer-version Oracle Home. Then… how do you revert back to the previous version in order to refresh the clone? You can't.

The only solution provided by the support (without additional consulting charges) is a manual action on the console. Not the Web console, not the telnet one.

That's not exactly what I call a ‘CLI’: a Command Line Interface is supposed to accept actions and parameters other than through stdin and return codes. But let's try to automate that.

Here I’m showing a template for a Do-It-Yourself solution. Please don’t use it as-is. Test it. Because having access to the telnet console rather than a clean CLI or API doesn’t allow for professional error handling.

SSH key

This console is accessible through ssh with the admin user, or with an admin user that you have defined in the management console (this is not the setup console).
I’ve written a glossary about this: https://blog.dbi-services.com/delphix-a-glossary-to-get-started/

You don't want hardcoded passwords, so better use passwordless authentication. I'll do all of that from the database server as root, so I check my public key that I've generated with `ssh-keygen`:


# cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRIw5638wyewN716iARPTKpaeCP+HtNOEa5TSKfI8Eh3h3EUwb+H3qzrWtv/b0k147QC0ET93kf2Y4AgvoaFKvo3ms3U6pI5BtCBN3h49KCcj4k1sPKmytJap6G6C79BMZKoGbG6hOSQ7PbbHHPoSgSYiXrxaO3Rh8OqWl+EqSQ45TSLE5Nb6+YuASEeILSUv3fezE21/kZ4dxsYJeE+6pfaUHCm/sCTFKM7JZJsviQ/3usq+7m8w+AreedQXAYERq9tDdCcrCUkmrj3OhiLh3YoYre8XkZ0QiBT1bwhkPlxGO5aN5bkihqm2ETF3y9sbdf2d/xXpKTnx3tTWZo6tr root@dbserver

This can be put once in the Delphix console.
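If no key pair exists yet on the database server, one can be generated first; a minimal sketch (non-interactive, no passphrase, assuming an RSA key is acceptable):


# generate a 2048-bit RSA key pair without a passphrase
ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/id_rsa
# this is the public key to register in the Delphix console
cat ~/.ssh/id_rsa.pub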

This is the only time I'll connect with the password I defined when creating the user from the GUI:



:~$ ssh admin@10.0.1.10
Password: 

ip-10-0-1-10> ls

...
user
...

ip-10-0-1-10> user
ip-10-0-1-10 user> ls

Objects
NAME      EMAILADDRESS       
dev       dev@nomail.com
qa        qa@nomail.com
labadmin  labadmin@nomail.com
admin     admin@nomail.com

ip-10-0-1-10 user> current

ip-10-0-1-10 user 'admin'> ls
Properties
    type: User
    name: admin
...
  
ip-10-0-1-10 user 'admin'> update
ip-10-0-1-10 user 'admin' update *> ls
Properties
    type: User
    name: admin
 ...

ip-10-0-1-10 user 'admin' update *> set publicKey="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRIw5638wyewN716iARPTKpaeCP+HtNOEa5TSKfI8Eh3h3EUwb+H3qzrWtv/b0k147QC0ET93kf2Y4AgvoaFKvo3ms3U6pI5BtCBN3h49KCcj4k1sPKmytJap6G6C79BMZKoGbG6hOSQ7PbbHHPoSgSYiXrxaO3Rh8OqWl+EqSQ45TSLE5Nb6+YuASEeILSUv3fezE21/kZ4dxsYJeE+6pfaUHCm/sCTFKM7JZJsviQ/3usq+7m8w+AreedQXAYERq9tDdCcrCUkmrj3OhiLh3YoYre8XkZ0QiBT1bwhkPlxGO5aN5bkihqm2ETF3y9sbdf2d/xXpKTnx3tTWZo6tr delphix@labserver"
ip-10-0-1-10 user 'admin' update *> commit;
ip-10-0-1-10 user 'admin'> exit
Connection to 10.0.1.10 closed.

That's all; now I can ssh without providing the password.

(re)set the Oracle Home

Let's see how to automate the manual actions given by the support engineer. Here is my script; I explain it below:


management=admin@10.0.1.10 

for source in $({
ssh "$management" <<SSH
source
ls
SSH
} | awk '/ true /{ print $1}'
) ; do
echo "# looking if source=$source is on this server and finding oracle home"
{
ssh "$management" <<SSH
sourceconfig
select $source
ls
SSH
} | awk '/repository/{sub("^ *repository: ","");repo=$0; print source,sid,$0}/ instanceName/{sid=$NF}' source=$source |
while read source sid repository ; do
 # this should run as root to see the current directory from /proc
 dbs=$(readlink /proc/$(pgrep -f _pmon_$sid\$)/cwd)
 # process it only if the home is found (i.e. this source runs on this host)
 if [ -n "$dbs" ] ; then
  home=$(dirname "$dbs")
  new="${repository%%\'/*}'${home}'"
  if [ "$repository" != "$new" ] ; then
   echo "## setting new repository source=$source (sid=$sid home=$home) to repository=$new (previous was $repository)"
   ssh "$management" <<SSH
sourceconfig
select $source
update
set repository="$new"
commit
ls
SSH
  fi
 fi
done

done

First, I define the ssh connection as I’ll have to ssh many times to read the answer and continue.
The first block will list all sources (VDBs) known by Delphix, with ssh output processed by awk.
The second block, for each source, will look at the configuration to get the “instanceName” which is the ORACLE_SID.
With this, I’ll get the Oracle home with:
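This is the same pgrep/readlink combination used in the script above, shown standalone (a sketch; it assumes $sid holds the instance name and that it runs as root):


# pmon's current working directory is $ORACLE_HOME/dbs
dbs=$(readlink /proc/$(pgrep -f _pmon_$sid\$)/cwd)
# the Oracle Home is its parent directory
home=$(dirname "$dbs")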


But of course, you could also read /etc/oratab instead.

Now, this is set in the Delphix "repository" property, where a prefix identifies the host ("environment" in Delphix terms), so I just replace the last part.

Here is the output when I run it on the Delphix Labs sandbox:


## setting new repository source=devdb (sid=devdb home=/u01/app/oracle/product/11.2.0/xe) to repository=TargetA/'/u01/app/oracle/product/18.0.0/xe' (previous was TargetA/'/u01/app/oracle/product/11.2.0/xe')
Properties
    type: OracleSIConfig
    name: devdb
    cdbType: NON_CDB
    credentials:
        type: PasswordCredential
        password: ********
    databaseName: devdb
    discovered: true
    environmentUser: TargetA/delphix
    instance:
        type: OracleInstance
        instanceName: devdb
        instanceNumber: 1
    linkingEnabled: false
    nonSysCredentials: (unset)
    nonSysUser: (unset)
    reference: ORACLE_SINGLE_CONFIG-2
    repository: TargetA/'/u01/app/oracle/product/18.0.0/xe'
    services:
        0:
            type: OracleService
            discovered: true
            jdbcConnectionString: jdbc:oracle:thin:@(DESCRIPTION=(ENABLE=broken)(ADDRESS=(PROTOCOL=tcp)(HOST=10.0.1.30)(PORT=1521))(CONNECT_DATA=(UR=A)(SERVICE_NAME=devdb)))
    tdeKeystorePassword: (unset)
    uniqueName: devdb
    user: delphixdb

Operations
delete
update
validateCredentials
[root@linuxtarget ~]# 

But please, test and adapt it. The idea here is that it can be run to sync the Delphix information with the currently running databases, which is probably never a bad idea.

The article Delphix and upgrading the clones (Oracle) first appeared on the dbi services Blog.

El Carro: The Oracle Operator for Kubernetes


By Franck Pachot

.
Google Cloud, Open Source and Oracle Databases… what seems to be a paradox is possible, thanks to cloud providers who contribute to open infrastructure. The idea is to use Operators (custom resource controllers on Kubernetes) to automate the Oracle Database operations in a standard, open and portable way. If you ever attempted to run Oracle Database on containers, trying to keep up with the DevOps approach, you know that it involves some complexity and careful orchestration.

The public announcement was on the Google Open Source Blog: Modernizing Oracle operations with Kubernetes and El Carro. This is an Open Source project to which we can contribute: https://github.com/GoogleCloudPlatform/elcarro-oracle-operator. I've tried the simplest thing: installing Oracle XE, the free edition of Oracle, because it is the only one that you can deploy without cross-checking, with your lawyers, the license contracts and the "educational purpose only" documents about Oracle audit policies. But running Oracle on Kubernetes follows the same rules as virtualization: count the vCPUs or the physical processors (depending on the hypervisor isolation accepted by Oracle). Basically, the "installed or running" terms apply where the image is pulled.

Download El Carro software and install Oracle 18c XE

I’ll run all this from the Cloud Shell but of course you can do it from any configured gcloud CLI.


franck@cloudshell:~ (google-cloud.424242)$ gcloud alpha iam service-accounts list

DISPLAY NAME                            EMAIL                                               DISABLED
Compute Engine default service account  424242424242-compute@developer.gserviceaccount.com  False


I take note of my service account from there.

Installing Oracle is 3 lines only:

mkdir -p $HOME/elcarro-oracle-operator
gsutil -m cp -r gs://elcarro/latest $HOME/elcarro-oracle-operator
bash $HOME/elcarro-oracle-operator/latest/deploy/install-18c-xe.sh --service_account 424242424242-compute@developer.gserviceaccount.com

I’m following the instructions from https://github.com/GoogleCloudPlatform/elcarro-oracle-operator/blob/main/docs/content/quickstart-18c-xe.md here.

This takes a while the first time (45 minutes) because it has to create the image, built from the oracle-database-xe-18c-1.0-1.x86_64.rpm RPM. The image is nearly 6 GB and is stored in your Container Registry. Then it creates the cluster. The default cluster name is "gkecluster", the CDB name is GCLOUD and the default zone is us-central1-a, but you can pass the -c, -k and -z options to install-18c-xe.sh to change those defaults. It creates a 2-node cluster with a total of 4 vCPUs, 15 GB RAM and 200 GB of persistent storage. The namespace is "db".
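For example, overriding those defaults could look something like this (a sketch only; the cluster name, CDB name and zone values are placeholders, and the mapping of the short options to those defaults is assumed from the description above):


bash $HOME/elcarro-oracle-operator/latest/deploy/install-18c-xe.sh \
  --service_account 424242424242-compute@developer.gserviceaccount.com \
  -c mycluster -k MYCDB -z us-central1-b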


...
kubeconfig entry generated for gkecluster.
NAME        LOCATION       MASTER_VERSION   MASTER_IP     MACHINE_TYPE   NODE_VERSION     NUM_NODES  STATUS
gkecluster  us-central1-a  1.19.9-gke.1900  34.67.217.61  n1-standard-2  1.19.9-gke.1900  2          RUNNING
storageclass.storage.k8s.io/csi-gce-pd created
volumesnapshotclass.snapshot.storage.k8s.io/csi-gce-pd-snapshot-class created
namespace/operator-system created
...
Waiting for startup, statuses: InstanceReady=, InstanceDatabaseReady=, DatabaseReady=
Waiting for startup, statuses: InstanceReady=, InstanceDatabaseReady=, DatabaseReady=CreatePending
...
Waiting for startup, statuses: InstanceReady=CreateInProgress, InstanceDatabaseReady=, DatabaseReady=CreatePending
...
Waiting for startup, statuses: InstanceReady=CreateComplete, InstanceDatabaseReady=CreateInProgress, DatabaseReady=CreatePending
...
Waiting for startup, statuses: InstanceReady=CreateComplete, InstanceDatabaseReady=CreateComplete, DatabaseReady=CreatePending
Waiting for startup, statuses: InstanceReady=CreateComplete, InstanceDatabaseReady=CreateComplete, DatabaseReady=CreateComplete
Oracle Operator is installed. Database connection command:
> sqlplus scott/tiger@35.224.235.49:6021/pdb1.gke
franck@cloudshell:~ (google-cloud.424242)$


Be patient… it is Oracle, it has a pre-DevOps installation timing… And this is why it is really good to have a standardized way for automation. Building your own is a lot of effort because each iteration takes time to validate.

So, all is installed, with a service endpoint exposed to the public internet on port 6021:


[opc@a ~]$ ~/sqlcl/bin/sql scott/tiger@35.224.235.49:6021/pdb1.gke

SQLcl: Release 21.1 Production on Fri May 14 10:40:03 2021

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

SQL> select * from v$session_connect_info;

   SID    SERIAL#    AUTHENTICATION_TYPE    OSUSER                                                    NETWORK_SERVICE_BANNER    CLIENT_CHARSET    CLIENT_CONNECTION    CLIENT_OCI_LIBRARY    CLIENT_VERSION            CLIENT_DRIVER            CLIENT_LOBATTR    CLIENT_REGID    CON_ID
______ __________ ______________________ _________ _________________________________________________________________________ _________________ ____________________ _____________________ _________________ ________________________ _________________________ _______________ _________
   283      20073 DATABASE               opc       TCP/IP NT Protocol Adapter for Linux: Version 18.0.0.0.0 - Production     Unknown           Heterogeneous        Unknown               21.16.0.0.0       jdbcthin : 21.1.0.0.0    Client Temp Lob Rfc On                  0         3
   283      20073 DATABASE               opc       Encryption service for Linux: Version 18.0.0.0.0 - Production             Unknown           Heterogeneous        Unknown               21.16.0.0.0       jdbcthin : 21.1.0.0.0    Client Temp Lob Rfc On                  0         3
   283      20073 DATABASE               opc       Crypto-checksumming service for Linux: Version 18.0.0.0.0 - Production    Unknown           Heterogeneous        Unknown               21.16.0.0.0       jdbcthin : 21.1.0.0.0    Client Temp Lob Rfc On                  0         3

SQL> select initcap(regexp_replace(reverse('El Carro'),'(.)\1+| ','\1')) "K8s Operator for" from dual;

  K8s Operator for
__________________
Oracle

SQL> 


Now you see where the “El Carro” name comes from, right? 🤣

I can check the pods from the Web Console or from the CLI; remember the namespace is 'db':

franck@cloudshell:~ (google-cloud.424242)$ kubectl get pods -n db

NAME                                     READY   STATUS    RESTARTS   AGE
mydb-agent-deployment-6c8b7647fb-d4lkf   2/2     Running   0          77m
mydb-sts-0                               4/4     Running   0          77m

franck@cloudshell:~ (google-cloud.424242)$ kubectl get services -n db

NAME                TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                         AGE
mydb-agent-svc      ClusterIP      10.59.242.135   <none>          3202/TCP,9161/TCP               107m
mydb-dbdaemon-svc   ClusterIP      10.59.244.15    <none>          3203/TCP                        107m
mydb-svc            LoadBalancer   10.59.245.142   35.224.235.49   6021:30156/TCP,3307:32007/TCP   107m
mydb-svc-node       NodePort       10.59.250.249   <none>          6021:32512/TCP,3307:31243/TCP   107m
franck@cloudshell:~ (google-cloud.424242)$


The service is exposed externally by the LoadBalancer on the Secure Listener port

I can connect to the container to look at what is running there.


franck@cloudshell:~ (google-cloud.424242)$ kubectl  exec -it -n db mydb-sts-0 -c oracledb -- bash -i

bash-4.2$ grep ":[YN]" /etc/oratab

GCLOUD:/opt/oracle/product/18c/dbhomeXE:N
 
bash-4.2$ . oraenv <<<GCLOUD

ORACLE_SID = [] ? The Oracle base remains unchanged with value /opt/oracle
 
bash-4.2$ ps -fp $(pgrep tnslsnr)

UID          PID    PPID  C STIME TTY          TIME CMD
oracle       488       1  0 09:32 ?        00:00:00 /opt/oracle/product/18c/dbhomeXE/bin/tnslsnr SECURE -inherit

bash-4.2$ lsnrctl status SECURE          

LSNRCTL for Linux: Version 18.0.0.0.0 - Production on 14-MAY-2021 10:18:11

Copyright (c) 1991, 2018, Oracle.  All rights reserved.

TNS-01101: Could not find listener name or service name SECURE

bash-4.2$ lsnrctl status //localhost:6021

LSNRCTL for Linux: Version 18.0.0.0.0 - Production on 14-MAY-2021 10:17:42

Copyright (c) 1991, 2018, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=6021)))
STATUS of the LISTENER
------------------------
Alias                     SECURE
Version                   TNSLSNR for Linux: Version 18.0.0.0.0 - Production
Start Date                14-MAY-2021 09:32:52
Uptime                    0 days 0 hr. 44 min. 49 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u02/app/oracle/oraconfig/network/SECURE/listener.ora
Listener Log File         /u02/app/oracle/diag/tnslsnr/mydb-sts-0/secure/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=REGLSNR_6021)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=mydb-sts-0)(PORT=6021)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=127.0.0.1)(PORT=5500))(Security=(my_wallet_directory=/opt/oracle/admin/GCLOUD_uscentral1a/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "GCLOUD.gke" has 1 instance(s).
  Instance "GCLOUD", status UNKNOWN, has 1 handler(s) for this service...
Service "GCLOUDXDB.gke" has 1 instance(s).
  Instance "GCLOUD", status READY, has 1 handler(s) for this service...
Service "GCLOUD_uscentral1a.gke" has 1 instance(s).
  Instance "GCLOUD", status READY, has 1 handler(s) for this service...
Service "PDB1.gke" has 2 instance(s).
  Instance "GCLOUD", status UNKNOWN, has 1 handler(s) for this service...
  Instance "GCLOUD", status READY, has 1 handler(s) for this service...
Service "c246feca2ab003e8e0530901380a0e21.gke" has 1 instance(s).
  Instance "GCLOUD", status READY, has 1 handler(s) for this service...
The command completed successfully

bash-4.2$


Here is the Oracle XE listener, with TNS_ADMIN in /u02/app/oracle/oraconfig/network/SECURE

You can have a look at the available operations from the samples (which you can customize and run with `kubectl apply -n db -f`):


franck@cloudshell:~ (google-cloud.424242)$ ls ./elcarro-oracle-operator/latest/samples

v1alpha1_backup_rman1.yaml            v1alpha1_database_pdb1.yaml
v1alpha1_backup_rman2.yaml            v1alpha1_database_pdb2.yaml
v1alpha1_backup_rman3.yaml            v1alpha1_database_pdb3.yaml
v1alpha1_backup_rman4.yaml            v1alpha1_database_pdb4.yaml
v1alpha1_backupschedule.yaml          v1alpha1_export_dmp1.yaml
v1alpha1_backup_snap1.yaml            v1alpha1_export_dmp2.yaml
v1alpha1_backup_snap2.yaml            v1alpha1_import_pdb1.yaml
v1alpha1_backup_snap_minikube.yaml    v1alpha1_instance_18c_XE_express.yaml
v1alpha1_config_bm1.yaml              v1alpha1_instance_18c_XE.yaml
v1alpha1_config_bm2.yaml              v1alpha1_instance_custom_seeded.yaml
v1alpha1_config_gcp1.yaml             v1alpha1_instance_express.yaml
v1alpha1_config_gcp2.yaml             v1alpha1_instance_gcp_ilb.yaml
v1alpha1_config_gcp3.yaml             v1alpha1_instance_minikube.yaml
v1alpha1_config_minikube.yaml         v1alpha1_instance_standby.yaml
v1alpha1_cronanything.yaml            v1alpha1_instance_unseeded.yaml
v1alpha1_database_pdb1_express.yaml   v1alpha1_instance_with_backup_disk.yaml
v1alpha1_database_pdb1_gsm.yaml       v1alpha1_instance.yaml
v1alpha1_database_pdb1_unseeded.yaml
franck@cloudshell:~ (google-cloud.424242)$


For example: storage snapshots (v1alpha1_backup_snap2.yaml), backups (v1alpha1_backup_rman3.yaml), and exports (v1alpha1_export_dmp1.yaml).
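To try one of them, you apply the (possibly customized) manifest in the 'db' namespace and then query the resources it created; a minimal sketch using one of the listed sample files:


kubectl apply -n db -f ./elcarro-oracle-operator/latest/samples/v1alpha1_backup_snap1.yaml
kubectl get   -n db -f ./elcarro-oracle-operator/latest/samples/v1alpha1_backup_snap1.yaml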

Everything is documented at https://github.com/GoogleCloudPlatform/elcarro-oracle-operator and will probably evolve.

The article El Carro: The Oracle Operator for Kubernetes first appeared on the dbi services Blog.

Amazon RDS Oracle in Multitenant


By Franck Pachot

.
AWS has just added the possibility to create your Oracle Database as a CDB (Container Database), the "new" Oracle architecture where one instance can manage multiple databases, adding a new level between the heavy instance and the lightweight schema:

At the time I'm writing this, I see it only in the "old" console ("original interface"), not in the "new database creation flow". It is displayed as a different edition; however, it is exactly the same price, even when the license is included.

The CDB name is always RDSCDB but you can choose the PDB name as “Database name” – I left the default “ORCL” here:


ORACLE19C@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/ORCL>

 select con_id, cdb, dbid, con_dbid, name, created, log_mode, open_mode, database_role, force_logging, platform_name, flashback_on, db_unique_name from v$database;

   CON_ID    CDB             DBID       CON_DBID      NAME      CREATED        LOG_MODE     OPEN_MODE    DATABASE_ROLE    FORCE_LOGGING       PLATFORM_NAME    FLASHBACK_ON    DB_UNIQUE_NAME
_________ ______ ________________ ______________ _________ ____________ _______________ _____________ ________________ ________________ ___________________ _______________ _________________
        0 YES       3,360,638,310    490,545,968 RDSCDB    07-MAY-21    NOARCHIVELOG    READ WRITE    PRIMARY          NO               Linux x86 64-bit    NO              RDSCDB_A

ORACLE19C@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/ORCL>

SELECT pdb_id,pdb_name,dbid,con_uid,guid,status,con_id FROM dba_pdbs;

   PDB_ID    PDB_NAME           DBID        CON_UID                                GUID    STATUS    CON_ID
_________ ___________ ______________ ______________ ___________________________________ _________ _________
        3 ORCL           490,545,968    490,545,968 C3395C709E011676E0530100007F3932    NORMAL            3

ORACLE19C@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/ORCL> 

select service_id, name, network_name, creation_date, pdb, sql_translation_profile from dba_services;

   SERVICE_ID    NAME    NETWORK_NAME    CREATION_DATE     PDB    SQL_TRANSLATION_PROFILE
_____________ _______ _______________ ________________ _______ __________________________
            7 ORCL    ORCL            26-MAY-21        ORCL

ORACLE19C@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/ORCL>

This is not a best practice, but there are no services declared there, which means that I can connect only with the default service registered from the PDB name. The documentation even recommends connecting with (CONNECT_DATA=(SID=pdb_name)); I filed feedback about this, as it has been a bad practice for 20 years.

I use EZCONNECT and create my own service:

ORACLE19C@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/ORCL>

connect oracle19c/franck@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/ORCL
Connected.

ORACLE19C@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/ORCL>

 exec dbms_service.start_service(service_name=>'MY_APP')

PL/SQL procedure successfully completed.

ORACLE19C@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/ORCL> select name,network_name,creation_date,con_id from v$active_services
  2  /

     NAME    NETWORK_NAME    CREATION_DATE    CON_ID
_________ _______________ ________________ _________
orcl      orcl            26-MAY-21                3
MY_APP    MY_APP          26-MAY-21                3

I can now connect as oracle19c/franck@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/MY_APP
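Note that, for a service that should also survive instance restarts, the usual pattern is to create it in the data dictionary before starting it; a minimal sketch (hedged, reusing the same example service name):


SQL> exec dbms_service.create_service(service_name=>'MY_APP', network_name=>'MY_APP')
SQL> exec dbms_service.start_service(service_name=>'MY_APP')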

Even if it is multitenant and I have only one PDB there, the whole CDB is mine:


ORACLE19C@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/ORCL> 

select listagg(rownum ||': '||con_id_to_con_name(rownum),', ') con_name from xmltable('1 to 5000') where con_id_to_con_name(rownum) is not null;

                            CON_NAME
____________________________________
1: CDB$ROOT, 2: PDB$SEED, 3: ORCL

This lists all containers around me. Of course, I cannot go to CDB$ROOT as I have only a local user here.


ORACLE19C@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/ORCL> 

show parameter max_pdbs

NAME     TYPE    VALUE
-------- ------- -----
max_pdbs integer 5

The MAX_PDBS is set to 5 anyway, because of Oracle's detection of the AWS hypervisor (see Oracle disables your multitenant option when you run on EC2)


ORACLE19C@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/ORCL>

 select listagg(role,',') within group (order by role) from session_roles;

                                                                                                                            LISTAGG(ROLE,',')
______________________________________________________________________________________________________________________________________________________________
AQ_ADMINISTRATOR_ROLE,AQ_USER_ROLE,CAPTURE_ADMIN,CONNECT,CTXAPP,DATAPUMP_EXP_FULL_DATABASE,DATAPUMP_IMP_FULL_DATABASE,DBA,EM_EXPRESS_ALL,EM_EXPRESS_BASIC
,EXECUTE_CATALOG_ROLE,EXP_FULL_DATABASE,GATHER_SYSTEM_STATISTICS,HS_ADMIN_EXECUTE_ROLE,HS_ADMIN_SELECT_ROLE,IMP_FULL_DATABASE,OEM_ADVISOR,OEM_MONITOR
,OPTIMIZER_PROCESSING_RATE,PDB_DBA,RDS_MASTER_ROLE,RECOVERY_CATALOG_OWNER,RESOURCE,SCHEDULER_ADMIN,SELECT_CATALOG_ROLE,SODA_APP,XDBADMIN,XDB_SET_INVOKER

ORACLE19C@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/ORCL>

 select * from dba_sys_privs where grantee='PDB_DBA';

   GRANTEE                    PRIVILEGE    ADMIN_OPTION    COMMON    INHERITED
__________ ____________________________ _______________ _________ ____________
PDB_DBA    CREATE PLUGGABLE DATABASE    NO              NO        NO
PDB_DBA    CREATE SESSION               NO              NO        NO


ORACLE19C@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/ORCL>

 show parameter pdb_lockdown

NAME         TYPE   VALUE
------------ ------ ---------------------
pdb_lockdown string RDSADMIN_PDB_LOCKDOWN

ORACLE19C@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/ORCL>

 select * from v$lockdown_rules;

   RULE_TYPE                        RULE                        CLAUSE    CLAUSE_OPTION     STATUS    USERS    CON_ID
____________ ___________________________ _____________________________ ________________ __________ ________ _________
STATEMENT    ALTER PLUGGABLE DATABASE                                                   DISABLE    ALL              3
STATEMENT    ALTER PLUGGABLE DATABASE    ADD SUPPLEMENTAL LOG DATA                      ENABLE     ALL              3
STATEMENT    ALTER PLUGGABLE DATABASE    DROP SUPPLEMENTAL LOG DATA                     ENABLE     ALL              3
STATEMENT    ALTER PLUGGABLE DATABASE    ENABLE FORCE LOGGING                           ENABLE     ALL              3
STATEMENT    ALTER PLUGGABLE DATABASE    OPEN RESTRICTED FORCE                          ENABLE     ALL              3
STATEMENT    ALTER PLUGGABLE DATABASE    RENAME GLOBAL_NAME                             ENABLE     ALL              3

I have many roles, including RDS_MASTER_ROLE, DBA and PDB_DBA (CREATE PLUGGABLE DATABASE), and it seems that the only lockdown profile rules are about ALTER PLUGGABLE DATABASE.

The documentation says that the RDSADMIN user is a common user. How is it possible?


ORACLE19C@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/MY_APP> select username, account_status, lock_date, expiry_date, created, profile,  password_versions, common, oracle_maintained from dba_users;

                 USERNAME      ACCOUNT_STATUS    LOCK_DATE    EXPIRY_DATE      CREATED     PROFILE    PASSWORD_VERSIONS    COMMON    ORACLE_MAINTAINED
_________________________ ___________________ ____________ ______________ ____________ ___________ ____________________ _________ ____________________
XS$NULL                   EXPIRED & LOCKED    07-MAY-21                   07-MAY-21    DEFAULT     11G                  YES       Y
OUTLN                     LOCKED              07-MAY-21                   07-MAY-21    DEFAULT                          YES       Y
SYS                       OPEN                                            07-MAY-21    RDSADMIN    11G 12C              YES       Y
SYSTEM                    OPEN                                            07-MAY-21    RDSADMIN    11G 12C              YES       Y
APPQOSSYS                 LOCKED              07-MAY-21                   07-MAY-21    DEFAULT                          YES       Y
DBSFWUSER                 LOCKED              07-MAY-21                   07-MAY-21    DEFAULT                          YES       Y
GGSYS                     LOCKED              07-MAY-21                   07-MAY-21    DEFAULT                          YES       Y
ANONYMOUS                 LOCKED              07-MAY-21                   07-MAY-21    DEFAULT                          YES       Y
CTXSYS                    OPEN                                            07-MAY-21    DEFAULT                          YES       Y
GSMADMIN_INTERNAL         LOCKED              07-MAY-21                   07-MAY-21    DEFAULT                          YES       Y
XDB                       LOCKED              07-MAY-21                   07-MAY-21    DEFAULT                          YES       Y
DBSNMP                    LOCKED              07-MAY-21                   07-MAY-21    RDSADMIN                         YES       Y
GSMCATUSER                LOCKED              07-MAY-21                   07-MAY-21    DEFAULT                          YES       Y
REMOTE_SCHEDULER_AGENT    LOCKED              07-MAY-21                   07-MAY-21    DEFAULT                          YES       Y
SYSBACKUP                 LOCKED              07-MAY-21                   07-MAY-21    DEFAULT                          YES       Y
GSMUSER                   LOCKED              07-MAY-21                   07-MAY-21    DEFAULT                          YES       Y
SYSRAC                    LOCKED              07-MAY-21                   07-MAY-21    DEFAULT                          YES       Y
ORACLE19C                 OPEN                             22-NOV-21      26-MAY-21    DEFAULT     11G 12C              NO        N
AUDSYS                    LOCKED              07-MAY-21                   07-MAY-21    DEFAULT                          YES       Y
DIP                       LOCKED              07-MAY-21                   07-MAY-21    DEFAULT                          YES       Y
SYSKM                     LOCKED              07-MAY-21                   07-MAY-21    DEFAULT                          YES       Y
SYS$UMF                   LOCKED              07-MAY-21                   07-MAY-21    DEFAULT                          YES       Y
SYSDG                     LOCKED              07-MAY-21                   07-MAY-21    DEFAULT                          YES       Y
RDSADMIN                  OPEN                                            26-MAY-21    RDSADMIN    11G 12C              YES       N

24 rows selected.

ORACLE19C@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/MY_APP>

 show parameter common%prefix
NAME                      TYPE   VALUE
------------------------- ------ --------
common_user_prefix        string

Yes, RDSADMIN is a common user, probably created with COMMON_USER_PREFIX='' (empty), as we see no C## here. That's not really a problem if it is correctly managed, and anyway, for the moment there are no plug and clone operations on this PDB.
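For comparison, on a self-managed CDB with the default prefix, a common user has to carry the C## prefix and be created from CDB$ROOT; a hedged sketch (not something you can run on RDS, and the user name and password are just examples):


SQL> show parameter common_user_prefix          -- defaults to C## in a regular CDB
SQL> create user c##dba identified by "MyPassword_1" container=all;
SQL> grant create session to c##dba container=all;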

This is a start to support the Oracle Multitenant architecture. I hope we will be able to benefit from multitenant: multiple PDBs (you can have up to 3 without additional license, in any edition), data movement (imagine a cross-region refreshable PDB with ability to switchover…), thin clones…

On Performance Insights, we see the CDB-level statistics without a PDB dimension ("pdb" is the name of my RDS instance here)

Note that in order to connect to your Oracle database, the easiest way is to download SQLcl:


wget -qc https://download.oracle.com/otn_software/java/sqldeveloper/sqlcl-latest.zip && unzip -qo sqlcl-latest.zip

sqlcl/bin/sql oracle19c/franck@//pdb.cywlwrcont2f.us-east-1.rds.amazonaws.com/MY_APP

This is how I connected to run all this.

The article Amazon RDS Oracle in Multitenant first appeared on the dbi services Blog.

Nutanix Era with oracle databases : Part 1 – Introduction


I'm currently setting up a lab for Nutanix Era in order to be able to give some presentations on this subject and see how Nutanix Era interacts with Oracle databases. I therefore thought it would certainly be helpful, and a good opportunity, to share the feedback I'm getting from this experience. This first blog intends to provide only a brief introduction to Nutanix Era. I will then write some additional blogs in order to describe:
Part 2 – How to create a VM template for Oracle database implementation
Part 3 – How to provision an oracle database
Part 4 – How to clone an oracle database
Part 5 – How to refresh an oracle database clone
And more parts are coming along the way…


What is Nutanix?

Nutanix is a private cloud solution for on-premises infrastructure. You will see that the Nutanix GUI really has a cloud look and feel. It is also a hybrid cloud solution, as Nutanix can be integrated with AWS, Azure and Google Cloud.

Nutanix is one of the most popular hyperconverged infrastructure (HCI) solutions. Its distributed systems architecture provides virtually unlimited scalability, high performance and high data protection. A hyperconverged solution combines datacenter hardware into a shared resource pool for:

  • Compute
  • Storage
  • Storage network
  • Virtualization

A hyperconverged solution thus facilitates datacenter management and delivers services quickly on premises.

The Nutanix solution is hardware agnostic. It is compatible with Dell, Lenovo, Cisco and HPE ProLiant hardware. Of course, Nutanix has its own hardware as well.

The solution is composed of a cluster of a minimum of 3 servers/nodes. The Nutanix software, AOS, is deployed on the cluster.

The native Nutanix hypervisor is called AHV. The solution also supports various other hypervisors such as VMware ESXi, Microsoft Hyper-V and Citrix Hypervisor.

The next diagram (referenced from the Nutanix documentation) briefly describes the architecture.



  • The CVM is the controller VM. It hosts the core software that serves and manages all the I/O operations for the hypervisor and all the VMs running on the specific host. The CVM also hosts the Prism management GUI, which is used to manage the resources.
  • The Distributed Storage Fabric (DSF) controls the storage pool and distributes data and metadata across all the nodes in the cluster.
  • The hypervisor is the virtual machine monitor. It is in charge of creating and running all the VMs, and of virtually sharing the resources (memory and processors) of its physical server/node among all the created VMs.

The solution allows tunable redundancy. The Replication Factor (RF) defines the number of copies of data kept at all times across the nodes.

What is Nutanix Era?

Nutanix Era, on the other hand, is a platform, running on one VM of the Nutanix cluster, that helps us easily manage, create, update and keep track of databases. It simplifies database management through one central interface. With Nutanix Era I can easily:

  • Provision new database
  • Clone existing database
  • Delete database
  • Refresh database
  • Backup database
  • Patch database

It also supports HA with RAC possibilities, but it still does not incorporate Data Guard. You will still need to build your Data Guard solution between primary and standby databases running on separate VMs of the cluster.

Be careful: the database management done by Nutanix Era does not include your DBA tasks!

One of the most interesting aspects is also that, from the same central GUI, you will be able to run and manage various databases:

  • SQL Server
  • PostgreSQL
  • SAP HANA
  • MySQL
  • MariaDB
  • Oracle database



The current Nutanix Era version is 2.2.

Nutanix Era is a database as a service (DBaaS) solution. It solves the long, traditional provisioning process involving multiple teams and specialists:

  • DB request
  • Configure server
    • Create Server
    • Allocate storage
    • Setup the network
    • Create the cluster
    • Provision a DB
  • and so on…

How to easily test Nutanix Era?

You can easily test Nutanix Era on your own with MariaDB or PostgreSQL databases using the live lab test drive.

Creation of our Nutanix lab

For our purpose we started a 30-day cluster trial from the My Nutanix website.
With the help of the Nutanix team we could interface our AWS cloud with the 30-day Nutanix cluster in order to set up the cluster and deploy Nutanix Era using an image configuration in the Prism GUI.

Documentation

There is one link to know, the most important one: the Nutanix Bible

Nutanix Era documentation: Nutanix Era User Guide

The article Nutanix Era with oracle databases : Part 1 – Introduction first appeared on the dbi services Blog.

SELECT FROM DUAL : Oracle Performance And Tuning


The DUAL table is automatically created by Oracle and contains one column (DUMMY) and one row (with the value 'X').

This table is often used by SQL developers in PL/SQL code (packages, functions, triggers) to initialize variables storing technical information such as, for example, SYSDATE, USER or the hostname.

Querying the DUAL table is generally fast, as we can see below:

SQL> select sysdate from dual;

SYSDATE
---------
05-OCT-21

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 1388734953

-----------------------------------------------------------------
| Id  | Operation        | Name | Rows  | Cost (%CPU)| Time     |
-----------------------------------------------------------------
|   0 | SELECT STATEMENT |      |     1 |     2   (0)| 00:00:01 |
|   1 |  FAST DUAL       |      |     1 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        554  bytes sent via SQL*Net to client
        386  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

 

But what happens when this “SELECT FROM DUAL” is executed many times in the application?

Let's execute it 100, 1000, 10000, 100000 and 1000000 times:

SQL> declare
        v_date date;
begin
 for rec in 1..100 loop
        select sysdate into v_date from dual;
  end loop;
end;  2    3    4    5    6    7
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> declare
        v_date date;
begin
 for rec in 1..1000 loop
        select sysdate into v_date from dual;
  end loop;
end;  2    3    4    5    6    7
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> declare
        v_date date;
begin
 for rec in 1..10000 loop
        select sysdate into v_date from dual;
  end loop;
end;
  2    3    4    5    6    7    8
  9  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.08
SQL> declare
        v_date date;
begin
 for rec in 1..100000 loop
        select sysdate into v_date from dual;
  end loop;
end;  2    3    4    5    6    7
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.85
SQL> declare
        v_date date;
begin
 for rec in 1..1000000 loop
        select sysdate into v_date from dual;
  end loop;
end;
  2    3    4    5    6    7    8
  9  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:08.34
SQL>

 

Let's now execute a PL/SQL block that assigns SYSDATE directly to the variable v_date instead of using “SELECT FROM DUAL”:

SQL> declare
        v_date date;
begin
 for rec in 1..100 loop
        v_date := sysdate;
  end loop;
end;
  2    3    4    5    6    7    8
  9  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL> declare
        v_date date;
begin
 for rec in 1..1000 loop
        v_date := sysdate;
  end loop;
end;
  2    3    4    5    6    7    8
  9  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL> declare
        v_date date;
begin
 for rec in 1..10000 loop
        v_date := sysdate;
  end loop;
end;
  2    3    4    5    6    7    8
  9  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> declare
        v_date date;
begin
 for rec in 1..100000 loop
        v_date := sysdate;
  end loop;
end;
  2    3    4    5    6    7    8
  9  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.14
SQL> declare
        v_date date;
begin
 for rec in 1..1000000 loop
        v_date := sysdate;
  end loop;
end;
  2    3    4    5    6    7    8
  9  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.28
SQL>

 

Conclusion:

Nb of executions    SELECT FROM DUAL    Assigning variable
100                 0.01 s              0.00 s
1000                0.01 s              0.00 s
10000               0.08 s              0.01 s
100000              0.85 s              0.14 s
1000000             8.34 s              1.28 s

 

“SELECT FROM DUAL” is always slower than assigning the variable directly:

  • 8 times slower for 10000 executions
  • about 6 times slower for 100000 executions
  • about 6.5 times slower for 1000000 executions

From a performance point of view, “SELECT FROM DUAL” should be avoided to initialize a variable, because each time you query the DUAL table from a PL/SQL block, Oracle performs a context switch between the PL/SQL engine and the SQL engine. For a few executions it’s fast, but for many executions (e.g. 1000000) the “SELECT FROM DUAL” is inefficient because Oracle will do 1000000 round trips between the PL/SQL engine and the SQL engine.

I have seen plenty of applications where “SELECT FROM DUAL” is used everywhere (e.g. in a logon trigger!) when a simple variable assignment would do, and changing the code increased the performance of the application significantly.
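
As a quick illustration (a sketch with made-up variable names), most technical values typically fetched from DUAL can be assigned directly in PL/SQL:

DECLARE
  v_date date;
  v_user varchar2(128);
  v_host varchar2(64);
BEGIN
  -- instead of: SELECT sysdate, user, ... INTO ... FROM dual;
  v_date := SYSDATE;                           -- current date/time
  v_user := USER;                              -- current user
  v_host := SYS_CONTEXT('USERENV', 'HOST');    -- client host name
  DBMS_OUTPUT.PUT_LINE(v_user || '@' || v_host || ' - ' ||
                       TO_CHAR(v_date, 'DD.MM.YYYY HH24:MI:SS'));
END;
/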

 

The article SELECT FROM DUAL : Oracle Performance And Tuning first appeared on the dbi services Blog.

ODA update-server to 19.12 fails on Patch GI with RHP


Sometimes giving a workshop is a good opportunity to test new things… this is what happened to me today… 🙂

While giving an ODA workshop, I thought it was a good opportunity to test patching to the new 19.12 release.
However, during the update-server process, our ODA update to 19.12 failed on the GI patch using RHP.

Let’s see how we fixed it…

So this is our starting point:

[root@dbi-oda-x8 tmp]# odacli describe-job -i d3efbbf5-f1db-4c5e-8f03-e8876f85a341

Job details
----------------------------------------------------------------
                     ID:  d3efbbf5-f1db-4c5e-8f03-e8876f85a341
            Description:  Server Patching
                 Status:  Failure
                Created:  November 4, 2021 11:01:15 AM CET
                Message:  DCS-10001:Internal error encountered: Fail to patch GI with RHP : DCS-10001:Internal error encountered: DCS-10001:Internal error encountered: clonemetadata.xml file is not present in local repository.
Update repository with new clones and retry same command...

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Server patching                          November 4, 2021 11:01:23 AM CET    November 4, 2021 11:22:16 AM CET    Failure
Server patching                          November 4, 2021 11:01:24 AM CET    November 4, 2021 11:22:16 AM CET    Failure
Creating repositories using yum          November 4, 2021 11:02:36 AM CET    November 4, 2021 11:02:38 AM CET    Success
Updating YumPluginVersionLock rpm        November 4, 2021 11:02:38 AM CET    November 4, 2021 11:02:38 AM CET    Success
Applying OS Patches                      November 4, 2021 11:02:38 AM CET    November 4, 2021 11:10:49 AM CET    Success
Creating repositories using yum          November 4, 2021 11:10:50 AM CET    November 4, 2021 11:10:50 AM CET    Success
Applying HMP Patches                     November 4, 2021 11:10:50 AM CET    November 4, 2021 11:11:08 AM CET    Success
Patch location validation                November 4, 2021 11:11:08 AM CET    November 4, 2021 11:11:08 AM CET    Success
oda-hw-mgmt upgrade                      November 4, 2021 11:11:09 AM CET    November 4, 2021 11:11:41 AM CET    Success
OSS Patching                             November 4, 2021 11:11:42 AM CET    November 4, 2021 11:11:42 AM CET    Success
Applying Firmware Disk Patches           November 4, 2021 11:11:42 AM CET    November 4, 2021 11:11:46 AM CET    Success
Applying Firmware Controller Patches     November 4, 2021 11:11:46 AM CET    November 4, 2021 11:11:50 AM CET    Success
Checking Ilom patch Version              November 4, 2021 11:11:50 AM CET    November 4, 2021 11:11:50 AM CET    Success
Patch location validation                November 4, 2021 11:11:50 AM CET    November 4, 2021 11:11:50 AM CET    Success
Save password in Wallet                  November 4, 2021 11:11:50 AM CET    November 4, 2021 11:11:51 AM CET    Success
Apply Ilom patch                         November 4, 2021 11:11:51 AM CET    November 4, 2021 11:20:48 AM CET    Success
Copying Flash Bios to Temp location      November 4, 2021 11:20:48 AM CET    November 4, 2021 11:20:48 AM CET    Success
Server patching                          November 4, 2021 11:20:48 AM CET    November 4, 2021 11:22:16 AM CET    Failure
Starting the clusterware                 November 4, 2021 11:20:48 AM CET    November 4, 2021 11:22:16 AM CET    Success
registering image                        November 4, 2021 11:22:16 AM CET    November 4, 2021 11:22:16 AM CET    Success
registering working copy                 November 4, 2021 11:22:16 AM CET    November 4, 2021 11:22:16 AM CET    Success
registering image                        November 4, 2021 11:22:16 AM CET    November 4, 2021 11:22:16 AM CET    Success
Creating GI home directories             November 4, 2021 11:22:16 AM CET    November 4, 2021 11:22:16 AM CET    Success
Extract GI clone                         November 4, 2021 11:22:16 AM CET    November 4, 2021 11:22:16 AM CET    Success
Provisioning Software Only GI with RHP   November 4, 2021 11:22:16 AM CET    November 4, 2021 11:22:16 AM CET    Success
Patch GI with RHP                        November 4, 2021 11:22:16 AM CET    November 4, 2021 11:22:16 AM CET    Failure

 

So basically we had a failure because of

Patch GI with RHP November 4, 2021 11:22:16 AM CET November 4, 2021 11:22:16 AM CET Failure

 

with the following message as explanation:

DCS-10001:Internal error encountered: Fail to patch GI with RHP : DCS-10001:Internal error encountered:
DCS-10001:Internal error encountered: clonemetadata.xml file is not present in local repository.

 

This basically sounded like we were missing the GI clone for 19.12 in our local repository (/opt/oracle/oak/pkgrepos).

The first check was to verify one more time that our update-repository job, which imports the GI clone, had worked properly:

[root@dbi-oda-x8 clones]# odacli describe-job -i 3792c33b-3804-44f7-9806-eddf21bb4939

Job details
----------------------------------------------------------------
                     ID:  3792c33b-3804-44f7-9806-eddf21bb4939
            Description:  Repository Update
                 Status:  Success
                Created:  November 4, 2021 10:20:26 AM CET
                Message:  /tmp/ODA/odacli-dcs-19.12.0.0.0-210822.1-DB-19.12.0.0.zip,/tmp/ODA/odacli-dcs-19.12.0.0.0-210822.1-GI-19.12.0.0.zip

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Unzip bundle                             November 4, 2021 10:20:29 AM CET    November 4, 2021 10:21:56 AM CET    Success
registering image                        November 4, 2021 10:21:57 AM CET    November 4, 2021 10:21:57 AM CET    Success
registering image                        November 4, 2021 10:21:57 AM CET    November 4, 2021 10:21:57 AM CET    Success

 

OK, as this was fine, we decided to check whether the file was really there physically:

[root@dbi-oda-x8 ~]# cd /opt/oracle/oak/pkgrepos/orapkgs/clones/
[root@dbi-oda-x8 clones]# ls -l
total 22095968
-rw-r--r-- 1 root root      13813 Aug 23 19:06 clonemetadata.xml
-rw-r--r-- 1 root root 4500047202 Feb  9  2021 db19.210119.tar.gz
-r-xr-xr-x 1 root root 5033359682 Aug 23 19:10 db19.210720.tar.gz
-r-xr-xr-x 1 root root 6542844964 Aug 23 19:06 grid19.210720.tar.gz
-rw-r--r-- 1 root root 6542464864 Feb  9  2021 grid19.tar.gz
drwx------ 2 root root      65536 Nov  4 11:01 lost+found

Hmmm, the file is there, and a simple grep command confirms that the 19.12 clone is referenced in clonemetadata.xml too.
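
For example, a check similar to the following (the pattern is based on the grid clone file listed above) should return the corresponding entry:

[root@dbi-oda-x8 clones]# grep -i "grid19.210720" clonemetadata.xml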

We started searching the ODA 19.12 Release Notes for known issues, and at the end of the list we found the following one:

ODA 19.12 known issue on server update for missing version

OK, it is not exactly the same message (it does not mention clonemetadata.xml), but it is still linked to a missing version.
The proposed workaround is:

Server update workaround by changing file ownership

So let’s have a look at our setup in the source GI home:

[root@dbi-oda-x8 bin]# ls -l osdbagrp*
-rwxr-xr-x 1 root oinstall 33488 Nov  3 13:24 osdbagrp
-rw-r----- 1 root oinstall     0 Feb  9  2021 osdbagrp0

As in the known issue, the file does not belong to the grid user. We then tried to change the ownership:

[root@dbi-oda-x8 bin]# chown grid osdbagrp


[root@dbi-oda-x8 bin]# ls -l osdbagrp*
-rwxr-xr-x 1 grid oinstall 33488 Nov  3 13:24 osdbagrp
-rw-r----- 1 root oinstall     0 Feb  9  2021 osdbagrp0

Then we ran the commands recommended to update the registry… unfortunately, neither worked:

[root@dbi-oda-x8 bin]# odacli update-registry -n gihome
DCS-10112:Specified components are already discovered.


[root@dbi-oda-x8 bin]# odacli update-registry -n system
DCS-10112:Specified components are already discovered.

As it did not want to update the information, we decided to force the update:

[root@dbi-oda-x8 bin]# odacli update-registry -n gihome -f

Job details
----------------------------------------------------------------
                     ID:  4a9c149b-d59a-4a05-baea-2acfa446ec34
            Description:  Discover System Components : gihome
                 Status:  Created
                Created:  November 4, 2021 11:42:05 AM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

Checking the state of the job shows that it is successful:

[root@dbi-oda-x8 bin]# odacli describe-job -i 4a9c149b-d59a-4a05-baea-2acfa446ec34

Job details
----------------------------------------------------------------
                     ID:  4a9c149b-d59a-4a05-baea-2acfa446ec34
            Description:  Discover System Components : gihome
                 Status:  Success
                Created:  November 4, 2021 11:42:05 AM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Rediscover GiHome                        November 4, 2021 11:42:05 AM CET    November 4, 2021 11:42:07 AM CET    Success

 

The final step is then to re-run the update-server command:

[root@dbi-oda-x8 bin]# odacli update-server -v 19.12.0.0.0
{
  "jobId" : "946922b3-aefa-4c75-b2e1-019e9d404118",
  "status" : "Created",
  "message" : "Success of server update will trigger reboot of the node after 4-5 minutes. Please wait until the node reboots.",
  "reports" : [ ],
  "createTimestamp" : "November 04, 2021 11:42:57 AM CET",
  "resourceList" : [ ],
  "description" : "Server Patching",
  "updatedTime" : "November 04, 2021 11:42:57 AM CET"
}

After a few minutes, the server patching is finally successful 🙂 🙂

[root@dbi-oda-x8 ~]# odacli describe-job -i 946922b3-aefa-4c75-b2e1-019e9d404118

Job details
----------------------------------------------------------------
                     ID:  946922b3-aefa-4c75-b2e1-019e9d404118
            Description:  Server Patching
                 Status:  Success
                Created:  November 4, 2021 11:42:57 AM CET
                Message:  Successfully patched GI with RHP

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Creating repositories using yum          November 4, 2021 11:43:11 AM CET    November 4, 2021 11:43:13 AM CET    Success
Updating YumPluginVersionLock rpm        November 4, 2021 11:43:13 AM CET    November 4, 2021 11:43:13 AM CET    Success
Applying OS Patches                      November 4, 2021 11:43:13 AM CET    November 4, 2021 11:43:14 AM CET    Success
Creating repositories using yum          November 4, 2021 11:43:15 AM CET    November 4, 2021 11:43:15 AM CET    Success
Applying HMP Patches                     November 4, 2021 11:43:15 AM CET    November 4, 2021 11:43:16 AM CET    Success
Patch location validation                November 4, 2021 11:43:16 AM CET    November 4, 2021 11:43:16 AM CET    Success
oda-hw-mgmt upgrade                      November 4, 2021 11:43:16 AM CET    November 4, 2021 11:43:16 AM CET    Success
OSS Patching                             November 4, 2021 11:43:16 AM CET    November 4, 2021 11:43:16 AM CET    Success
Applying Firmware Disk Patches           November 4, 2021 11:43:16 AM CET    November 4, 2021 11:43:20 AM CET    Success
Applying Firmware Controller Patches     November 4, 2021 11:43:20 AM CET    November 4, 2021 11:43:24 AM CET    Success
Checking Ilom patch Version              November 4, 2021 11:43:24 AM CET    November 4, 2021 11:43:24 AM CET    Success
Patch location validation                November 4, 2021 11:43:24 AM CET    November 4, 2021 11:43:24 AM CET    Success
Save password in Wallet                  November 4, 2021 11:43:25 AM CET    November 4, 2021 11:43:25 AM CET    Success
Apply Ilom patch                         November 4, 2021 11:43:25 AM CET    November 4, 2021 11:43:25 AM CET    Success
Copying Flash Bios to Temp location      November 4, 2021 11:43:25 AM CET    November 4, 2021 11:43:25 AM CET    Success
Starting the clusterware                 November 4, 2021 11:43:25 AM CET    November 4, 2021 11:43:25 AM CET    Success
registering image                        November 4, 2021 11:43:25 AM CET    November 4, 2021 11:43:26 AM CET    Success
registering working copy                 November 4, 2021 11:43:26 AM CET    November 4, 2021 11:43:26 AM CET    Success
registering image                        November 4, 2021 11:43:26 AM CET    November 4, 2021 11:43:26 AM CET    Success
Creating GI home directories             November 4, 2021 11:43:26 AM CET    November 4, 2021 11:43:26 AM CET    Success
Extract GI clone                         November 4, 2021 11:43:26 AM CET    November 4, 2021 11:43:26 AM CET    Success
Provisioning Software Only GI with RHP   November 4, 2021 11:43:26 AM CET    November 4, 2021 11:43:26 AM CET    Success
Patch GI with RHP                        November 4, 2021 11:43:26 AM CET    November 4, 2021 11:50:05 AM CET    Success
Updating GIHome version                  November 4, 2021 11:50:06 AM CET    November 4, 2021 11:50:08 AM CET    Success
Update System version                    November 4, 2021 11:50:23 AM CET    November 4, 2021 11:50:23 AM CET    Success
Cleanup JRE Home                         November 4, 2021 11:50:23 AM CET    November 4, 2021 11:50:23 AM CET    Success
Add SYSNAME in Env                       November 4, 2021 11:50:23 AM CET    November 4, 2021 11:50:23 AM CET    Success
Setting ACL for disk groups              November 4, 2021 11:50:23 AM CET    November 4, 2021 11:50:27 AM CET    Success
preRebootNode Actions                    November 4, 2021 11:52:14 AM CET    November 4, 2021 11:55:06 AM CET    Success
Reboot Ilom                              November 4, 2021 11:55:06 AM CET    November 4, 2021 11:55:06 AM CET    Success

 

Here we go, our ODA is now up to date on version 19.12, and we can continue with:

  • cleaning the repo (see the example below)
  • manually cleaning the former GI Home
  • Creating a Database Home Store
  • 😀
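
To clean up the old clone files in the repository, something like the following can be used (a sketch – check the exact options available on your DCS version with odacli cleanup-patchrepo -h):

[root@dbi-oda-x8 ~]# odacli cleanup-patchrepo -comp GI,DB -v 19.11.0.0.0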

Enjoy! 😎

The article ODA update-server to 19.12 fails on Patch GI with RHP first appeared on the dbi services Blog.

DataPump and the Transform option – Not very well known, but it can be useful


I have been working with Oracle databases and Data Pump for more than 10 years, but I still find Data Pump options that I did not know and that can, in some cases, be very helpful.
For a customer we did a migration of a 2TB database from AIX to Linux, which means we had to change the endianness of the data.
We decided to perform the migration with Full Transportable Export/Import: convert the database files with RMAN and afterwards import the metadata with Data Pump. The migration procedure is not part of this blog, but it is the reason why I discovered the TRANSFORM option of Data Pump.

Everyone working with Oracle and Data Pump knows that there are multiple options available to transform the data during an import. Well known are:

  • REMAP_DATAFILE
  • REMAP_TABLESPACE
  • REMAP_SCHEMA

But there is an additional option available to transform the metadata, and that is the not very well known TRANSFORM option of Data Pump. The TRANSFORM option supports multiple transformations of objects during the import. I will show a little bit more about 3 of them, but the full list can be found here

SEGMENT_ATTRIBUTES:[Y | N]:[table | index ]

The default value for segment_attributes during import is transform=segment_attributes:y. This means that the objects will be imported with the same segment attributes as in the source database. If you set transform=segment_attributes:n, then Data Pump will ignore the attributes in the dump file and will use the tablespace/user default values. For example, if table T1 of user TEST_USER was stored in tablespace USER_DATA in the source database and the default tablespace of TEST_USER is USERS, then the segment will be created there.

LOB_STORAGE:[SECUREFILE | BASICFILE | DEFAULT | NO_CHANGE]

This option can be very interesting for migrations from older Oracle releases to newer ones. Until Oracle 12.1 the default storage option for LOBs was BASICFILE. Since 12.2 the new default is SECUREFILE. The BASICFILE option is still possible but deprecated, so it will be desupported in a future release, and in that case you should change it during an upgrade.

For example, if you upgrade a database with export/import from 12.1 to 19c and you have LOBs in your database, then by default the import will also create BASICFILE LOBs in the 19c database. After the migration, the default for LOB creation is SECUREFILE, and in this case you will get a mix of SECUREFILE LOBs and BASICFILE LOBs. To avoid that, you can use the LOB_STORAGE option of the TRANSFORM clause in Data Pump.

TRANSFORM=LOB_STORAGE:SECUREFILE

With this import parameter, Oracle will create all LOB segments as SECUREFILE, even if they were exported as BASICFILE.

OID:[Y | N]

The last option that can really help you is TRANSFORM=OID:n. Coming back to the full transportable migration I did for a customer: during the metadata import we got the following errors:

ORA-39082: Object type TYPE_BODY:"XXX"."XXX" created with compilation warnings

The application in this case uses types in the code. Every type in an Oracle database is identified by a unique object identifier (OID), and the problem was that the target database already contained OIDs with the same values. In this case we used the parameter TRANSFORM=OID:n so that Oracle creates new OIDs during the import instead of reusing the OIDs of the source database.
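
To illustrate, a hedged impdp example (directory, dump file and schema names are made up) combining these transformations could look like this:

impdp system directory=DATA_PUMP_DIR dumpfile=app_schema.dmp logfile=imp_app.log \
      remap_schema=APP_SRC:APP_TGT \
      transform=segment_attributes:n \
      transform=lob_storage:securefile \
      transform=oid:n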

The article DataPump and the Transform option – Not very well known, but it can be useful first appeared on the dbi services Blog.


Improve Oracle Insert Performance with BULKCOLLECT and FORALL


As explained by Steven Feuerstein on the Oracle Blog, the bulk processing features of PL/SQL are designed specifically to reduce the number of context switches required to communicate between the PL/SQL engine and the SQL engine.

Using BULK COLLECT plus FORALL instead of a standard INSERT statement to load data improves performance dramatically. Let me show you.

Here is a customer case where BULK COLLECT plus FORALL was used to improve insert operations on a very big table (more than 1 billion rows and more than 300 GB in size).
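
The pattern itself is simple; a minimal sketch (source and target table names are made up, the target having the same structure as the source) looks like this:

DECLARE
  CURSOR c_src IS SELECT * FROM source_table;
  TYPE t_rows IS TABLE OF source_table%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c_src;
  LOOP
    -- one context switch fetches up to 1000 rows at once
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 1000;
    -- one context switch inserts the whole batch
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO target_table VALUES l_rows(i);
    EXIT WHEN c_src%NOTFOUND;
  END LOOP;
  CLOSE c_src;
  COMMIT;
END;
/

The customer case below follows exactly this structure, just with the real column list of the table.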

First of all, create a bigfile tablespace which will contain the data:

SQL> CREATE bigfile TABLESPACE tbs1 DATAFILE '+data' SIZE 310G;

Tablespace created.

Elapsed: 00:11:30.87

The next step is to create the (empty) table and move it to the bigfile tablespace:

CREATE TABLE "xxxx"."DBI_FK_NOPART"
( "PKEY" NUMBER(12,0) NOT NULL ENABLE,
"BOID" VARCHAR2(40 CHAR) COLLATE "USING_NLS_COMP" NOT NULL ENABLE,
"METABO" NUMBER(12,0) NOT NULL ENABLE,
"LASTUPDATE" TIMESTAMP (9) NOT NULL ENABLE,
"PROCESSID" VARCHAR2(40 CHAR) COLLATE "USING_NLS_COMP" NOT NULL ENABLE,
"ROWCOMMENT" VARCHAR2(15 CHAR) COLLATE "USING_NLS_COMP",
"CREATED" TIMESTAMP (9) NOT NULL ENABLE,
"CREATEDUSER" VARCHAR2(40 CHAR) COLLATE "USING_NLS_COMP" NOT NULL ENABLE,
"REPLACED" TIMESTAMP (9) NOT NULL ENABLE,
"REPLACEDUSER" VARCHAR2(40 CHAR) COLLATE "USING_NLS_COMP",
"ARCHIVETAG" VARCHAR2(40 CHAR) COLLATE "USING_NLS_COMP",
"MDBID" VARCHAR2(255 CHAR) COLLATE "USING_NLS_COMP",
"ITSFORECAST" VARCHAR2(40 CHAR) COLLATE "USING_NLS_COMP" NOT NULL ENABLE,
"BETRAG" NUMBER(15,2) NOT NULL ENABLE,
"ITSOPDETHERKUNFT" VARCHAR2(40 CHAR) COLLATE "USING_NLS_COMP",
"ITSOPDETHKERSTPRM" VARCHAR2(40 CHAR) COLLATE "USING_NLS_COMP",
"ITSFCKOMPPREISSEQ" VARCHAR2(40 CHAR) COLLATE "USING_NLS_COMP",
"CLSFCKOMPPREISSEQ" NUMBER(12,0),
"ISSUMMANDENDPREIS" NUMBER(12,0) NOT NULL ENABLE,
"PARTITIONTAG" NUMBER(12,0) NOT NULL ENABLE,
"PARTITIONDOMAIN" VARCHAR2(4 CHAR) COLLATE "USING_NLS_COMP" NOT NULL ENABLE,
"FCVPRODKOMPPKEY" NUMBER(12,0),
"FCKVPRDANKOMPPKEY" NUMBER(12,0)
) ;

--MOVE TABLE TO BIGFILE TABLESPACE
ALTER TABLE "xxxx"."DBI_FK_NOPART" MOVE ONLINE TABLESPACE tbs1;

Load Data with BULK COLLECT and FORALL:

SQL> declare
type testarray is table of varchar2(3000) index by binary_integer;
v_PKEY testarray;
v_BOID testarray;
v_METABO testarray;
v_LASTUPDATE testarray;
v_PROCESSID testarray;
v_ROWCOMMENT testarray;
v_CREATED testarray;
v_CREATEDUSER testarray;
v_REPLACED testarray;
v_REPLACEDUSER testarray;
v_ARCHIVETAG testarray;
v_MDBID testarray;
v_ITSFORECAST testarray;
v_BETRAG testarray;
v_ITSOPDETHERKUNFT testarray;
v_ITSOPDETHKERSTPRM testarray;
v_ITSFCKOMPPREISSEQ testarray;
v_CLSFCKOMPPREISSEQ testarray;
v_ISSUMMANDENDPREIS testarray;
v_PARTITIONTAG testarray;
v_PARTITIONDOMAIN testarray;
v_FCVPRODKOMPPKEY testarray;
v_FCKVPRDANKOMPPKEY testarray;

cursor cu_cursor is select PKEY,BOID,METABO,LASTUPDATE,PROCESSID,ROWCOMMENT,CREATED,CREATEDUSER,REPLACED,REPLACEDUSER,ARCHIVETAG,MDBID,ITSFORECAST,BETRAG,
ITSOPDETHERKUNFT,ITSOPDETHKERSTPRM,ITSFCKOMPPREISSEQ,CLSFCKOMPPREISSEQ,ISSUMMANDENDPREIS,PARTITIONTAG,PARTITIONDOMAIN,FCVPRODKOMPPKEY,FCKVPRDANKOMPPKEY
FROM xxx.TableSource;

begin
dbms_output.put_line('start : '||to_char(sysdate,'dd.mm.rrrr hh24:mi:ss'));
open cu_cursor;

loop

fetch cu_cursor bulk collect into v_PKEY,v_BOID,v_METABO,v_LASTUPDATE,v_PROCESSID,v_ROWCOMMENT,v_CREATED,v_CREATEDUSER,v_REPLACED,v_REPLACEDUSER ,v_ARCHIVETAG,v_MDBID,v_ITSFORECAST,v_BETRAG,v_ITSOPDETHERKUNFT,
v_ITSOPDETHKERSTPRM ,v_ITSFCKOMPPREISSEQ ,v_CLSFCKOMPPREISSEQ,v_ISSUMMANDENDPREIS ,v_PARTITIONTAG,v_PARTITIONDOMAIN,v_FCVPRODKOMPPKEY,v_FCKVPRDANKOMPPKEY LIMIT 1000;

forall i in 1 .. v_PKEY.count

insert into xxx.DBI_FK_NOPART( PKEY,BOID,METABO,LASTUPDATE,PROCESSID,ROWCOMMENT,CREATED,CREATEDUSER,REPLACED,REPLACEDUSER ,ARCHIVETAG,MDBID,ITSFORECAST,BETRAG,ITSOPDETHERKUNFT,
ITSOPDETHKERSTPRM ,ITSFCKOMPPREISSEQ ,CLSFCKOMPPREISSEQ,ISSUMMANDENDPREIS ,PARTITIONTAG,PARTITIONDOMAIN,FCVPRODKOMPPKEY,FCKVPRDANKOMPPKEY )
values
( v_PKEY(i),v_BOID(i),v_METABO(i),v_LASTUPDATE(i),v_PROCESSID(i),v_ROWCOMMENT(i),v_CREATED(i),v_CREATEDUSER(i),v_REPLACED(i),v_REPLACEDUSER(i),v_ARCHIVETAG(i),v_MDBID(i),v_ITSFORECAST(i),v_BETRAG(i),v_ITSOPDETHERKUNFT(i),
v_ITSOPDETHKERSTPRM(i),v_ITSFCKOMPPREISSEQ(i),v_CLSFCKOMPPREISSEQ(i),v_ISSUMMANDENDPREIS(i),v_PARTITIONTAG(i),v_PARTITIONDOMAIN(i),v_FCVPRODKOMPPKEY(i),v_FCKVPRDANKOMPPKEY(i));

exit when cu_cursor%notfound;
end loop;

close cu_cursor;
dbms_output.put_line('end : '||to_char(sysdate,'dd.mm.rrrr hh24:mi:ss'));

end;
/
start : 15.11.2021 10:30:36
end : 15.11.2021 12:50:23

PL/SQL procedure successfully completed.

Elapsed: 02:19:46.80

Gather Statistics:

exec dbms_stats.gather_table_stats('xxx','DBI_FK_NOPART');

Add the primary key and indexes (stored in the bigfile tablespace):

ALTER TABLE xxx.DBI_FK_NOPART ADD CONSTRAINT PK6951_1 PRIMARY KEY (PKEY) using index tablespace tbs1;

-- plain SQL statements (DDL cannot be executed directly inside a PL/SQL BEGIN...END block)
CREATE INDEX "xxx"."CLSFCKOMPPREISSEQ695_1" ON "xxx"."DBI_FK_NOPART" ("CLSFCKOMPPREISSEQ") TABLESPACE "TBS1";
CREATE INDEX "xxx"."ITSFCKOMPPREISSEQ695_1" ON "xxx"."DBI_FK_NOPART" ("ITSFCKOMPPREISSEQ") TABLESPACE "TBS1";
CREATE INDEX "xxx"."ITSFORECAST695_1" ON "xxx"."DBI_FK_NOPART" ("ITSFORECAST") TABLESPACE "TBS1";
CREATE INDEX "xxx"."IX_MDBID_xxx_1" ON "xxx"."DBI_FK_NOPART" ("MDBID") TABLESPACE "TBS1";

Let’s check the statistics of the table:

select owner,table_name,num_rows,blocks, last_analyzed from dba_tables where table_name = 'DBI_FK_NOPART';
OWNER TABLE_NAME      NUM_ROWS  BLOCKS    LAST_ANALYZED
XXX    DBI_FK_NOPART  1188403800 39871915 15.11.21

Conclusion:

With BULK COLLECT plus FORALL, I inserted more than 1 billion rows in 2h19.

With a standard INSERT through a FOR LOOP statement, the insert never finished: I stopped it after 15 hours of execution, after having resized the undo tablespace multiple times due to “unable to extend tablespace…” errors.

The article Improve Oracle Insert Performance with BULKCOLLECT and FORALL first appeared on the dbi services Blog.

5 things that aren’t true about RMAN


Introduction

RMAN backups are not a hot topic today. This is because RMAN didn’t evolve that much in 12c/18c/19c, most databases have been protected by a good backup strategy for years, and no changes are needed. Furthermore, Disaster Recovery solutions are more and more deployed, and all the flashback technologies embedded in the database are well known. This has pushed RMAN restore to the position of last-resort option in case of a failure. But we still do backups because we never know what could happen. Here are some untrue statements I heard recently about RMAN.

A full database RMAN backup is consistent

NO. A full database backup with RMAN is not consistent, unless you shut down the database, start it in MOUNT state and take the backup while the database is not open.

During an online backup, all the datafiles included in your backupsets will have different SCNs, and you will never be able to restore a consistent database unless you have all the archivelogs generated between the beginning and the end of the backup.

A database backup will take a certain number of minutes or hours, and the longer your backup lasts, the less consistent your backupsets will be. But this is not a problem, because you normally have all the archivelogs to bring the database back to a consistent state, meaning all SCNs aligned. Doing only a database backup starting at 10PM and lasting 1 hour will never let you restore a consistent database at any point in time if you don’t have the archivelogs. And an inconsistent database will never go from MOUNT to OPEN state. Having all the needed archivelogs will let you restore and recover the database to a point in time after 11PM. This is why the archivelogs are so important to back up.

Full backups should be done during the night

NO. There is no need to do database backups during the night. During the night, maintenance windows are planned (the default ones) and batches are probably running. You may think that doing a backup during the night will bring you a kind of consistency at the end of the day, but it’s very unlikely that you would need to do a restore to a point in time at the end of the day.

Either you will need a complete restore after a failure, or you will restore to a specific point in time.

A complete restore is needed when you lose some or all of the datafiles. In this case, RMAN will pick up the files from the very latest full backup, apply all the changes from the archivelogs and finish with the latest changes from the redologs.

A point-in-time restore/recover is something related to the business, for example if someone corrupted data and committed, and if all the flashback mechanisms cannot solve the problem. RMAN will then take the best full database backup to restore the database, the closest one before the problem, and will apply all the changes from the archivelogs until the point in time you decided.

The start time of your full backup doesn’t really matter, you just need to plan it when the activity is rather low, when transactions and batches are rare.

Archivelog backups can be done regularly, every hour for example. The more often you back up the archivelogs, the fewer files you will have to back up in each run.
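
For example, an hourly archivelog backup job could run something like this (a simple sketch, format and destination to be adapted):

rman target /
BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES FORMAT '/backup/%d_arc_%T_%U.bck';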

RMAN backup is a good solution for Disaster Recovery

NO. A Disaster Recovery plan means that you are able to bring the database back to life in a defined time, most often a couple of minutes. Restoring a backup first needs a server, so if your current server is down, you will need another one. And the restore will take time: the bigger your database is, the more time you will need for the restore. Restoring a backup is much more a matter of time than anything else. And you may encounter trouble during the restore for various reasons, for example if something is missing in your backupsets.

Today, RMAN backups are still mandatory for critical databases, but 99.99% of the time you will use Disaster Recovery in case of failure; it is much more comfortable and probably more compliant with your business requirements. Yes, your RMAN backupsets are mostly dedicated to the trash bin. This is not 100% true: backups are still used for refreshing DEV or TEST databases, so you will probably use them from time to time!

Autobackup of the controlfile in the FRA is nice

NO. Autobackup of the controlfile is nice, but not in the Fast Recovery Area. It’s nice because it works beyond backups: each time a structural change happens on your database, the controlfile is updated and an automatic backup is triggered. This is very clever. But the default storage target for the autobackup is the FRA, which may not be your main backup destination. If you lose your database and also your FRA, you may not have the most recent controlfile to restore with minimal loss. The FRA is the default destination for the autobackup, and you can simply configure RMAN to put this automatic backup in the backup folder of your daily full or incremental backup. This is definitely where it should be. If it’s not yet done, it’s easy to change, for example:

rman target /
CONFIGURE CONTROLFILE AUTOBACKUP ON; 
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'BackupPath_%F'; 

Backup speed depends on the disk speed

NO. Backup speed mainly depends on the parallel degree you choose for the backup (the channels you allocate), and the default is only 1 channel. This is the limit if you’re using Standard Edition: you cannot go further, and it’s one of the reasons that make Standard Edition not suitable for big databases (1+TB). When using Enterprise Edition, you will be able to allocate as many channels as you want (there is probably a limit), but you probably won’t allocate all available cores for this task. Increasing the number of channels will decrease the time needed for the backup and increase the consistency across the datafiles in the backupsets. At some point, adding more channels becomes useless because disk speed will then matter, either the disks where your database resides or the disks where your backups reside (or their interface). Don’t forget that a compressed backup will be more demanding for each channel, decreasing the overall backup speed.
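
As an illustration (a sketch for Enterprise Edition; the degree and format are just examples), increasing the parallel degree is a one-line configuration:

rman target /
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
BACKUP DATABASE PLUS ARCHIVELOG FORMAT '/backup/%d_%T_%U.bck';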

Conclusion

One can debate the price of the Oracle database compared to other RDBMS. But when it comes to reliability and resiliency, RMAN is a very strong argument for Oracle, don’t forget it.

The article 5 things that aren’t true about RMAN first appeared on the dbi services Blog.

Move a PDB from a server to another one using NFS


Introduction

Multitenant brings new possibilities regarding Oracle databases, and one of them is to move a database from one container to another quite easily. When the containers are on the same server, it’s very easy, but when the containers are on different servers, you will need to use a database link or an RMAN restore. But there is also another solution if you don’t want to use the previous ones: using an NFS volume.

Test lab

All these tests were done between 2 ODAs, one X8-2M and one X7-2M, both running the same patch version (19.12). This is important: running the same DB home version matters, as moving a PDB between containers of different versions will require patching the PDB if the destination container is newer, and I’m not sure you can downgrade the PDB if its new container runs an older version.

My 2 containers both use ASM, and as you may know, an ODA is nothing else than an x86_64 server running Linux 7, so this should work the same on any other Linux box.

Requirements for the shared NFS volume

As you will temporarily move the PDB to this shared NFS volume, you will need specific mount options when mounting the volume. Here is an example:

echo "192.168.61.50:/nfsoracle /u01/nfsdir/ nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600" >> /etc/fstab
mount -a

1st step, move the PDB to the NFS volume

The goal being to move the PDB to another server, let’s do a clean shutdown of the PDB on the source server:

. oraenv <<< POCCDB
sqlplus / as sysdba
show pdbs
CON_ID     CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2          PDB$SEED                       READ ONLY  NO
3          POCPDB                         READ WRITE NO
4          GOTAFO                         READ WRITE NO

alter pluggable database GOTAFO close immediate;

show pdbs

CON_ID     CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2          PDB$SEED                       READ ONLY  NO
3          POCPDB                         READ WRITE NO
4          GOTAFO                         MOUNTED

exit

Now let’s move this PDB to the NFS share:

rman target /
run {
allocate channel c1 type disk;
allocate channel c2 type disk;
backup as copy pluggable database GOTAFO format '/u01/nfsdir/DUMPS/move_pdb/%U';
}
Switch pluggable database GOTAFO to copy;
exit;

Now my PDB datafiles are located on the NFS share, and my files on ASM are flagged as backup copies of my datafiles.

Let’s unplug this PDB, putting the XML file on the NFS share as well:

sqlplus / as sysdba
alter pluggable database GOTAFO unplug into '/u01/nfsdir/DUMPS/move_pdb/GOTAFO.xml';
drop pluggable database GOTAFO keep datafiles ;

show pdbs

CON_ID     CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2          PDB$SEED                       READ ONLY  NO
4          POCPDB                         READ WRITE NO

exit

The PDB doesn’t belong to this container anymore.

2nd step, plug the PDB to the new container

On the target server, which has the same NFS share mounted, let’s plug this PDB in:

. oraenv <<< POCCDB
sqlplus / as sysdba
create pluggable database GOTAFO using '/u01/nfsdir/DUMPS/move_pdb/GOTAFO.xml';
exit;
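
Before plugging the PDB (or right after, to review any remaining violations), you can run a compatibility check against the XML manifest; here is a sketch using the same XML path as above (this step is optional and not part of the original procedure):

sqlplus / as sysdba
set serveroutput on
declare
  l_compatible boolean;
begin
  l_compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                    pdb_descr_file => '/u01/nfsdir/DUMPS/move_pdb/GOTAFO.xml',
                    pdb_name       => 'GOTAFO');
  dbms_output.put_line(case when l_compatible then 'Compatible' else 'Check PDB_PLUG_IN_VIOLATIONS' end);
end;
/
select name, cause, message from pdb_plug_in_violations where status <> 'RESOLVED';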

The datafiles now need to move from NFS to ASM; let’s do that with RMAN:

rman target /
run {
allocate channel c1 type disk;
allocate channel c2 type disk;
backup as copy pluggable database GOTAFO format '+DATA';
}
 
Switch pluggable database GOTAFO to copy;
exit;

Now the database is located in ASM, and the files on the share are identified as backup copies of my datafiles.

3rd step, open the PDB and check if everything is OK

Let’s open the PDB and check that the datafiles are in ASM as expected:

sqlplus / as sysdba

alter pluggable database GOTAFO open;
alter pluggable database GOTAFO save state ;
alter session set container=GOTAFO ;
select file_name from dba_data_files;
FILE_NAME
----------------------------------------------------------------------------------
+DATA/POCCDB_DRP/D046C4A6F65A7268E0534401C80AC247/DATAFILE/system.908.1090162863
+DATA/POCCDB_DRP/D046C4A6F65A7268E0534401C80AC247/DATAFILE/sysaux.909.1090162863
+DATA/POCCDB_DRP/D046C4A6F65A7268E0534401C80AC247/DATAFILE/undotbs1.906.1090162863
+DATA/POCCDB_DRP/D046C4A6F65A7268E0534401C80AC247/DATAFILE/users.603.1090162863
+DATA/POCCDB_DRP/D046C4A6F65A7268E0534401C80AC247/DATAFILE/test.907.1090162863

Everything is OK, and the PDB is running fine.

Conclusion

If you don’t want to use database links between your databases, or if you don’t want to restore a container on your target server, this method works and it may help.

The article Move a PDB from a server to another one using NFS first appeared on the dbi services Blog.

When unreachable NFS share mess up your Dbvisit Standby configuration


If I had to rank my favorite Oracle-related tools and software, Dbvisit Standby would likely be at the top of the list.
You are reading this post, so you probably know that Dbvisit Standby is a Disaster Recovery solution for Oracle Database Standard Edition (aka Data Guard for poor people 😛 ).

The reasons why I like this product are mostly related to the following points (non-exhaustive list) :

  • Ease of installation and configuration
  • Ease of use
  • Lightness
  • Stability
  • Continuous evolution (new features)
  • Documentation quality
  • Technical support efficiency

Despite all these qualities, it can happen that Dbvisit doesn’t work as it should and some troubleshooting is required.
In this post, I will describe a problem I encountered on the Dbvisit Standby (version 10.1) environment of one of our customers.

Issue

The following error message appeared while running the dbvctl command to transfer and apply the archive logs:

Dbvisit Standby process for preposs still running on odaprep01 (pid=53835).
See trace file 53835_dbvctl_preposs_202112081932.trc for more details.
Exceeded RUNNING_MAX_TIMES_TRIED=1 attempts.
(if Dbvisit Standby process is no longer running, then delete lock file /u01/app/dbvisit/standby/pid/dbvisit_preposs.pid)

This seems to indicate that the archive log transfer process did not complete properly and got stuck on the server.

Let’s have a look at the existing processes:

[oracle@odaprep01 ~]$ ps -ef | grep dbvisit
oracle   33833 14053  0 Dec08 ?        00:00:00 sh -c /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_3_12003.3.dbvisit.202112081923.sqlplus.dbv 2>/dev/null
oracle   33834 33833  0 Dec08 ?        00:00:00 /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_3_12003.3.dbvisit.202112081923.sqlplus.dbv
oracle   34813 34812  0 Dec08 ?        00:00:00 /bin/sh -c /u01/app/dbvisit/standby/dbvctl -d preposs >/tmp/dbvisit_apply_logs_preposs.log 2>&1
oracle   34814 34813  0 Dec08 ?        00:00:00 /u01/app/dbvisit/standby/dbvctl                                             -d preposs
oracle   35239 34814  0 Dec08 ?        00:00:00 sh -c /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/34814.0.dbvisit.202112081924.sqlplus.dbv 2>/dev/null
oracle   35240 35239  0 Dec08 ?        00:00:00 /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/34814.0.dbvisit.202112081924.sqlplus.dbv
oracle   36929     1  0 Sep17 ?        02:54:14 /u01/app/dbvisit/standby/dbvctl                                             -d preposs -D start
oracle   38142 14053  0 Dec08 ?        00:00:00 sh -c /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_4_1.4.dbvisit.202112082033.sqlplus.dbv 2>/dev/null
oracle   38143 38142  0 Dec08 ?        00:00:00 /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_4_1.4.dbvisit.202112082033.sqlplus.dbv
oracle   41184     1  0 Sep03 ?        01:17:16 /u01/app/dbvisit/dbvnet/dbvnet -d start
oracle   44524 14053  0 Dec08 ?        00:00:00 sh -c /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_5_1.5.dbvisit.202112082143.sqlplus.dbv 2>/dev/null
oracle   44525 44524  0 Dec08 ?        00:00:00 /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_5_1.5.dbvisit.202112082143.sqlplus.dbv
oracle   45780 14053  0 Dec08 ?        00:00:00 sh -c /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_6_1.6.dbvisit.202112082253.sqlplus.dbv 2>/dev/null
oracle   45781 45780  0 Dec08 ?        00:00:00 /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_6_1.6.dbvisit.202112082253.sqlplus.dbv
oracle   51450 14053  0 00:03 ?        00:00:00 sh -c /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_7_1.7.dbvisit.202112090003.sqlplus.dbv 2>/dev/null
oracle   51451 51450  0 00:03 ?        00:00:00 /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_7_1.7.dbvisit.202112090003.sqlplus.dbv
oracle   53234 14053  0 01:13 ?        00:00:00 sh -c /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_8_1.8.dbvisit.202112090113.sqlplus.dbv 2>/dev/null
oracle   53235 53234  0 01:13 ?        00:00:00 /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_8_1.8.dbvisit.202112090113.sqlplus.dbv
oracle   54442 14053  0 02:23 ?        00:00:00 sh -c /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_9_1.9.dbvisit.202112090223.sqlplus.dbv 2>/dev/null
oracle   54443 54442  0 02:23 ?        00:00:00 /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_9_1.9.dbvisit.202112090223.sqlplus.dbv
oracle   59799 14053  0 03:33 ?        00:00:00 sh -c /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_10_1.10.dbvisit.202112090333.sqlplus.dbv 2>/dev/null
oracle   59800 59799  0 03:33 ?        00:00:00 /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_10_1.10.dbvisit.202112090333.sqlplus.dbv
oracle   60840 14053  0 04:43 ?        00:00:00 sh -c /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_11_1.11.dbvisit.202112090443.sqlplus.dbv 2>/dev/null
oracle   60841 60840  0 04:43 ?        00:00:00 /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_11_1.11.dbvisit.202112090443.sqlplus.dbv
oracle   61931 14053  0 05:53 ?        00:00:00 sh -c /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_12_1.12.dbvisit.202112090553.sqlplus.dbv 2>/dev/null
oracle   61932 61931  0 05:53 ?        00:00:00 /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_12_1.12.dbvisit.202112090553.sqlplus.dbv
oracle   64979 14053  0 07:03 ?        00:00:00 sh -c /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_13_1.13.dbvisit.202112090703.sqlplus.dbv 2>/dev/null
oracle   64980 64979  0 07:03 ?        00:00:00 /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_13_1.13.dbvisit.202112090703.sqlplus.dbv
oracle   67322 14053  0 08:13 ?        00:00:00 sh -c /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_14_1.14.dbvisit.202112090813.sqlplus.dbv 2>/dev/null
oracle   67323 67322  0 08:13 ?        00:00:00 /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_14_1.14.dbvisit.202112090813.sqlplus.dbv
oracle   68513 14053  0 09:23 ?        00:00:00 sh -c /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_15_1.15.dbvisit.202112090923.sqlplus.dbv 2>/dev/null
oracle   68514 68513  0 09:23 ?        00:00:00 /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_15_1.15.dbvisit.202112090923.sqlplus.dbv
oracle   71517 14053  0 10:33 ?        00:00:00 sh -c /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_16_1.16.dbvisit.202112091033.sqlplus.dbv 2>/dev/null
oracle   71518 71517  0 10:33 ?        00:00:00 /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_16_1.16.dbvisit.202112091033.sqlplus.dbv
oracle   72932 14053  0 11:43 ?        00:00:00 sh -c /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_17_1.17.dbvisit.202112091143.sqlplus.dbv 2>/dev/null
oracle   72933 72932  0 11:43 ?        00:00:00 /usr/sbin/fuser /u01/app/dbvisit/standby/tmp/14053_17_1.17.dbvisit.202112091143.sqlplus.dbv
oracle   78878     1  0 Sep03 ?        00:09:45 /u01/app/dbvisit/dbvagent/dbvagent -d start
[oracle@odaprep02 ~]$

Mmmh, quite a big mess here, isn’t it?

Troubleshooting

You certainly noticed that there are a lot of blocked processes running the fuser command. Actually, before transferring an archive log from the primary server to the standby one, fuser is executed to ensure that the concerned archive log is not being used by another process (e.g. an RMAN backup). This behavior can be confirmed by analyzing the trace file (<dbvisit_home>/traces directory) generated by the dbvctl command:

209 11:08:41 main::UTIL_UNIX_is_file_open: run command: /usr/sbin/fuser /u03/app/oracle/fast_recovery_area/PRODOSS_DC41/archivelog/2021_12_09/o1_mf_1_162040_jv3oc7xb_.arc
20211209 11:08:41 main::UTIL_run_command: ORACLE_HOME: /u01/app/oracle/product/19.0.0.0/dbhome_2
20211209 11:08:41 main::UTIL_run_command: PATH: /usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/usr/sbin:/sbin:/u01/app/oracle/product/19.0.0.0/dbhome_2/bin
20211209 11:08:41 main::UTIL_run_command: LD_LIBRARY_PATH: /u01/app/dbvisit/standby/lib:/u01/app/oracle/product/19.0.0.0/dbhome_2/lib

The first action I took was to disable the Dbvisit jobs in the crontab – on both sides – to avoid new processes being generated.
Then I killed all the blocked processes listed above to start from a clean situation, and I deleted the PID file stored in the <dbvisit_home>/pid directory.
Finally, I executed the fuser command manually against an archive log, and there I could see that the command remained stuck without returning any result; Ctrl-C was needed to stop it. This strange behavior was the same with some other commands like df or lsof.

Root cause

By analyzing the system logs (/var/log/messages), I could see that an NFS share mounted on the server was no longer reachable:

Dec  8 19:38:08 odaprep01 kernel: nfs: server 10.84.48.100 not responding, timed out
Dec  8 19:41:08 odaprep01 kernel: nfs: server 10.84.48.100 not responding, timed out
Dec  8 19:41:14 odaprep01 kernel: nfs: server 10.84.48.100 not responding, timed out

And that’s why the fuser command triggered by dbvctl never ended.

Now, you are probably wondering: “the archive logs are not stored on the NFS, so why did this have an impact?“
Let me explain…
fuser is designed to access all processes at once to determine whether any of their files are on the local file system. To do that, a stat() call is used to identify the attributes of each process’s executable. If the executable is stored on an NFS share, the stat() call depends on the NFS availability to succeed and can hang if the share is not reachable.
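
A quick way to check whether a mount point is responsive without hanging your shell is to wrap the command in a timeout (hypothetical mount point, to be adapted):

[root@odaprep01 ~]# timeout 5 df -h /mnt/nfs_share || echo "NFS share not responding"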

To solve the issue, I of course had to unmount the unreachable NFS mount point (umount -l <mount_point>).
As Dbvisit had not been working properly for several hours, the standby database was out of sync because of an archive log gap. Unfortunately, the missing archive logs were no longer present in the Fast Recovery Area, so I had to restore them from the RMAN backup:

RMAN> restore archivelog from logseq=26160 until logseq=26190;

Starting restore at 09-DEC-2021 12:26:39
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=365 device type=DISK

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=26169
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=26170
channel ORA_DISK_1: reading from backup piece /u03/app/oracle/fast_recovery_area/PREPOSS_DC41/backupset/2021_12_09/o1_mf_annnn_TAG20211209T002732_jv2hv54g_.bkp
channel ORA_DISK_1: piece handle=/u03/app/oracle/fast_recovery_area/PREPOSS_DC41/backupset/2021_12_09/o1_mf_annnn_TAG20211209T002732_jv2hv54g_.bkp tag=TAG20211209T002732
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=26160
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=26161
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=26162
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=26163
...
...
...
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=26188
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=26189
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=26190
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/admin/preposs/backup/20211209_111002_arc_PREPOSS_533362994_s2246_p1.bck
channel ORA_DISK_1: piece handle=/u01/app/oracle/admin/preposs/backup/20211209_111002_arc_PREPOSS_533362994_s2246_p1.bck tag=ARC_20211209_111002
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 09-DEC-2021 12:27:17

RMAN>

And finally, I was able to restart the dbvctl command to resolve the gap by transferring and applying the restored archive logs :

oracle@odaprep01:/u01/app/dbvisit/standby/trace/ [preposs] dbvctl -d preposs
=============================================================
Dbvisit Standby Database Technology (10.1.0_0_gba3a9e08) (pid 15459)
dbvctl started on odaprep01: Thu Dec  9 12:28:33 2021
=============================================================

>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
    Thread: 1 Archive log gap: 35. Transfer log gap: 35
>>> Transferring Log file(s) from preposs on odaprep01 to odaprep02:

    thread 1 sequence 26160 (o1_mf_1_26160_jv3t012k_.arc)... done
    thread 1 sequence 26161 (o1_mf_1_26161_jv3szzng_.arc)... done
    thread 1 sequence 26162 (o1_mf_1_26162_jv3t00lo_.arc)... done
    thread 1 sequence 26163 (o1_mf_1_26163_jv3szzpx_.arc)... done
    ...
    ...
    ...
    thread 1 sequence 26188 (o1_mf_1_26188_jv3t0jgx_.arc)... done
    thread 1 sequence 26189 (o1_mf_1_26189_jv3t0jpb_.arc)... done
    thread 1 sequence 26190 (o1_mf_1_26190_jv3t0gr6_.arc)... done

>>> Dbvisit Archive Management Module (AMM)

    Config: number of archives to keep      = 0
    Config: number of days to keep archives = 1
    Config: archive backup count            = 1
    Config: diskspace full threshold        = 80%
==========

    Total number of archive logs   : 35
    Current disk percent full (/u03/app/oracle/fast_recovery_area/) = 10%
==========

    Current disk percent full (FRA) = 0%
==========

    Number of archive logs deleted = 0

=============================================================
dbvctl ended on odaprep01: Thu Dec  9 12:30:19 2021
=============================================================

Conclusion

That was not a Dbvisit issue. As I said at the very beginning, Dbvisit is a great tool 😉 .
Hint: if you want the fuser, df and lsof commands to keep working even when a mounted NFS share is temporarily unreachable, a solution would be to mount it with the soft option:

mount -o rw,soft host.server.com:/share /mymountpoint

According to the documentation :
soft
      Generates a soft mount of the NFS file system. If an error occurs, the stat() function returns with an error.
      If the option hard is used, stat() does not return until the file system is available.”

Hope this helps.

The article When unreachable NFS share mess up your Dbvisit Standby configuration first appeared on the dbi services Blog.

Upgrade AHF and TFA on an ODA


TFA (Trace File Analyzer) is part of AHF (Autonomous Health Framework). These tools are preinstalled on and part of the ODA (Oracle Database Appliance). As you might know, patching and upgrading normally always go through the ODA global Bundle patches, but AHF can be upgraded independently without any problem. In this blog I want to share with you how I upgraded TFA to the latest version, 21.4. The upgrade is performed with the root user. This version addresses CVE-2021-45105/CVE-2021-44228/CVE-2021-45046. As a reminder, the Apache Log4j vulnerabilities are covered by CVE-2021-44228 and CVE-2021-45046.

Check current version of TFA

First we can check whether TFA is up and running and which version is currently used.

[root@ODA01 ~]# /opt/oracle/dcs/oracle.ahf/bin/tfactl status
WARNING - TFA Software is older than 180 days. Please consider upgrading TFA to the latest version.

.------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID  | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 5388 | 5000 | 20.1.3.0.0 | 20130020200429161658 | COMPLETE         |
'-----------+---------------+------+------+------------+----------------------+------------------'

As we can see, we are currently running TFA/AHF version 20.1.3.0.0.

Check running processes

We can also check the running TFA processes.

[root@ODA01 ~]# ps -ef | grep -i tfa | grep -v grep
root      4536     1  0 Oct18 ?        00:18:06 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
root      5388     1  0 Oct18 ?        02:55:07 /opt/oracle/dcs/oracle.ahf/jre/bin/java -server -Xms512m -Xmx1024m -Djava.awt.headless=true -Ddisable.checkForUpdate=true -XX:HeapDumpPath=/opt/oracle/dcs/oracle.ahf/data/ODA01/diag/tfa -XX:ParallelGCThreads=5 oracle.rat.tfa.TFAMain /opt/oracle/dcs/oracle.ahf/tfa
[root@ODA01 ~]#

Check the location of AHF

It is important to check in which directory AHF is currently installed in order to provide the appropriate directory in the upgrade command options. The setup script will then see that a version is already installed and will offer to upgrade it. Otherwise, a separate new AHF installation will be performed.

[root@ODA01 ~]# cat /etc/oracle.ahf.loc
/opt/oracle/dcs/oracle.ahf

AHF is installed on the ODA in the /opt/oracle/dcs/oracle.ahf directory.

Backup of the current AHF version

Before doing any modification, it is important to back up the current AHF version for a fallback if needed. I did a tar of the current installation directory.

[root@ODA01 ~]# cd /opt/oracle/dcs

[root@ODA01 dcs]# ls -ltrh
total 25M
drwxr-xr-x.  3 root   root     4.0K Jul  2  2019 rdbaas
drwxr-xr-x.  3 root   root     4.0K Jul  2  2019 scratch
drwx------   2 root   root     4.0K Jul  2  2019 dcsagent_wallet
drwxr-xr-x   2 root   root     4.0K Jul  4  2019 ft
drwxr-xr-x   2 root   root     4.0K Aug 11  2019 Inventory
drwxr-xr-x   4 root   root     4.0K May 17  2020 dcs-ui
-rwxr-xr-x   1 root   root     6.8K May 21  2020 configuredcs.pl
-rw-r--r--   1 root   root      25M May 21  2020 dcs-ui.zip
drwxr-xr-x   4 root   root     4.0K Sep  2  2020 repo
-rw-r--r--   1 root   root        0 Sep  2  2020 dcscontroller-stderr.log
-rw-r--r--   1 root   root     6.7K Sep  3  2020 dcscontroller-stdout.log
drwxr-xr-x   6 oracle oinstall  32K Sep  3  2020 commonstore
drwxr-xr-x  12 root   root     4.0K Sep  3  2020 oracle.ahf
drwxr-xr-x.  2 root   root     4.0K Sep  3  2020 agent
drwxr-xr-x.  2 root   root     4.0K Sep  3  2020 sample
drwxr-xr-x   4 root   root     4.0K Sep  3  2020 java
drwxr-xr-x.  3 root   root     4.0K Sep  3  2020 conf
drwxr-xr-x.  3 root   root     4.0K Sep  3  2020 dcscli
drwxr-xr-x.  2 root   root     4.0K Sep  3  2020 bin
drwx------.  5 root   root      20K Dec 21 00:00 log

[root@ODA01 dcs]# mkdir /root/backup_ahf_for_upgrade/

[root@ODA01 dcs]# tar -czf /root/backup_ahf_for_upgrade/oracle.ahf.20.1.3.0.0.tar ./oracle.ahf

[root@ODA01 dcs]# ls -ltrh /root/backup_ahf_for_upgrade
total 1.3G
-rw-r--r-- 1 root root 1.3G Dec 21 14:26 oracle.ahf.20.1.3.0.0.tar
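
If a fallback to 20.1.3 were ever needed, the idea would be to stop TFA, restore the saved directory and restart it. A sketch (not needed here, as the upgrade went fine):

[root@ODA01 ~]# /opt/oracle/dcs/oracle.ahf/bin/tfactl stop
[root@ODA01 ~]# cd /opt/oracle/dcs
[root@ODA01 dcs]# mv oracle.ahf oracle.ahf.failed_upgrade
[root@ODA01 dcs]# tar -xzf /root/backup_ahf_for_upgrade/oracle.ahf.20.1.3.0.0.tar
[root@ODA01 dcs]# /etc/init.d/init.tfa start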

Download new AHF version

You can download the latest AHF version from the My Oracle Support portal. Download patch 30166242:
Patch 30166242: PLACEHOLDER – DOWNLOAD LATEST AHF (TFA and ORACHK/EXACHK)

I created a directory on the ODA to upload the patch:

[root@ODA01 dcs]# mkdir /u01/app/patch/TFA

Upgrade AHF on the ODA

In this part we will see the procedure to upgrade AHF on the ODA. We first need to unzip the AHF-LINUX_v21.4.0.zip file and run ahf_setup. The installation script will recognize the existing 20.1.3 version and offer to upgrade it.

root@ODA01 dcs]# cd /u01/app/patch/TFA

[root@ODA01 TFA]# ls -ltrh
total 394M
-rw-r--r-- 1 root root 393M Dec 21 10:16 AHF-LINUX_v21.4.0.zip

[root@ODA01 TFA]# unzip -q AHF-LINUX_v21.4.0.zip

[root@ODA01 TFA]# ls -ltrh
total 792M
-r-xr-xr-x 1 root root 398M Dec 20 19:28 ahf_setup
-rw-r--r-- 1 root root  384 Dec 20 19:30 ahf_setup.dat
-rw-r--r-- 1 root root 1.5K Dec 20 19:31 README.txt
-rw-r--r-- 1 root root  625 Dec 20 19:31 oracle-tfa.pub
-rw-r--r-- 1 root root 393M Dec 21 10:16 AHF-LINUX_v21.4.0.zip

[root@ODA01 TFA]# ./ahf_setup -ahf_loc /opt/oracle/dcs -data_dir /opt/oracle/dcs

AHF Installer for Platform Linux Architecture x86_64

AHF Installation Log : /tmp/ahf_install_214000_58089_2021_12_21-14_30_06.log

Starting Autonomous Health Framework (AHF) Installation

AHF Version: 21.4.0 Build Date: 202112200745

AHF is already installed at /opt/oracle/dcs/oracle.ahf

Installed AHF Version: 20.1.3 Build Date: 202004291616

Do you want to upgrade AHF [Y]|N : Y

Upgrading /opt/oracle/dcs/oracle.ahf

Shutting down AHF Services
Stopped OSWatcher
Nothing to do !
Shutting down TFA
/etc/init.d/init.tfa: line 661: /sbin/stop: No such file or directory
. . . . .
Killing TFA running with pid 5388
. . .
Successfully shutdown TFA..

Starting AHF Services
Starting TFA..
Waiting up to 100 seconds for TFA to be started..
. . . . .
. . . . .
. . . . .
Successfully started TFA Process..
. . . . .
TFA Started and listening for commands


Do you want AHF to store your My Oracle Support Credentials for Automatic Upload ? Y|[N] : N

AHF is successfully upgraded to latest version

.-----------------------------------------------------------------.
| Host      | TFA Version | TFA Build ID         | Upgrade Status |
+-----------+-------------+----------------------+----------------+
| ODA01     |  21.4.0.0.0 | 21400020211220074549 | UPGRADED       |
'-----------+-------------+----------------------+----------------'

Moving /tmp/ahf_install_214000_58089_2021_12_21-14_30_06.log to /opt/oracle/dcs/oracle.ahf/data/ODA01/diag/ahf/

[root@ODA01 TFA]#

Check new AHF version

We can check that the new version of AHF is 21.4.

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl version

AHF version: 21.4.0

Check TFA running processes

We can check that TFA is up and running.

[root@ODA01 TFA]# ps -ef | grep -i tfa | grep -v grep
root      4536     1  0 Oct18 ?        00:18:06 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
root     61938     1 62 14:31 ?        00:01:36 /opt/oracle/dcs/oracle.ahf/jre/bin/java -server -Xms512m -Xmx1024m -Djava.awt.headless=true -Ddisable.checkForUpdate=true -XX:HeapDumpPath=/opt/oracle/dcs/oracle.ahf/data/ODA01/diag/tfa -XX:ParallelGCThreads=5 oracle.rat.tfa.TFAMain /opt/oracle/dcs/oracle.ahf/tfa
[root@ODA01 TFA]#

After the upgrade script has completed, there might still be some TFA processes running in order to rebuild the inventory:

root     15469 15077  0 14:58 ?        00:00:00 sh -c /opt/oracle/dcs/oracle.ahf/tfa/bin/tfactl rediscover -mode full > /dev/null 2>&1
root     15470 15469  0 14:58 ?        00:00:00 /bin/sh /opt/oracle/dcs/oracle.ahf/tfa/bin/tfactl rediscover -mode full
root     15505 15500  0 14:58 ?        00:00:00 /bin/sh /opt/oracle/dcs/oracle.ahf/tfa/bin/tfactl.tfa rediscover -mode full
root     15524 15505  1 14:58 ?        00:00:00 /u01/app/19.0.0.0/grid/perl/bin/perl /opt/oracle/dcs/oracle.ahf/tfa/bin/tfactl.pl rediscover -mode full

Make sure all those processes have completed successfully (i.e. they do not exist any more) before stopping AHF. Otherwise your inventory will end up with a STOPPED status.
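If this has to be scripted, a small wait loop can do the job (a simple sketch, assuming the rediscover run eventually finishes on its own):

# Wait until no tfactl rediscover process is left before stopping AHF
while ps -ef | grep -v grep | grep -q "tfactl.* rediscover" ; do
  echo "tfactl rediscover still running, waiting 30 seconds..."
  sleep 30
done
echo "No rediscover process left, AHF can now be stopped safely"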

Check status of AHF

We can check AHF status.

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl statusahf


.-------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+-------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 61938 | 5000 | 21.4.0.0.0 | 21400020211220074549 | COMPLETE         |
'-----------+---------------+-------+------+------------+----------------------+------------------'


No scheduler for any ID

orachk daemon is not running

[root@ODA01 TFA]#

TFA is running. No AHF scheduler. No orachk daemon.

Stop AHF and TFA

To check that everything is working as expected, let's stop AHF.

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl stopahf

Stopping TFA from the Command Line
Stopped OSWatcher
Nothing to do !
Please wait while TFA stops
Please wait while TFA stops
TFA-00002 Oracle Trace File Analyzer (TFA) is not running
TFA Stopped Successfully
Successfully stopped TFA..

orachk scheduler is not running

There is still one TFA process left, the one from init.d:

[root@ODA01 TFA]# ps -ef | grep -i tfa | grep -v grep
root      4536     1  0 Oct18 ?        00:18:06 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
[root@ODA01 TFA]#

We are going to stop it :

[root@ODA01 TFA]# /etc/init.d/init.tfa stop
Stopping TFA from init for shutdown/reboot
Nothing to do !
TFA Stopped Successfully
Successfully stopped TFA..

And there are no more TFA processes up and running:

[root@ODA01 TFA]# ps -ef | grep -i tfa | grep -v grep
[root@ODA01 TFA]#

Start AHF

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl startahf

Starting TFA..
Waiting up to 100 seconds for TFA to be started..
. . . . .
. . . . .
Successfully started TFA Process..
. . . . .
TFA Started and listening for commands

INFO: Starting orachk scheduler in background. Details for the process can be found at /opt/oracle/dcs/oracle.ahf/data/ODA01/diag/orachk/compliance_start_211221_143845.log

We can check TFA status :

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/tfactl status

.-------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+-------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 87371 | 5000 | 21.4.0.0.0 | 21400020211220074549 | COMPLETE         |
'-----------+---------------+-------+------+------------+----------------------+------------------'

We can check the AHF status as well and see that the scheduler and the orachk daemon are now up and running:

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl statusahf


.-------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+-------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 87371 | 5000 | 21.4.0.0.0 | 21400020211220074549 | COMPLETE         |
'-----------+---------------+-------+------+------------+----------------------+------------------'

------------------------------------------------------------

Master node = ODA01

orachk daemon version = 21.4.0

Install location = /opt/oracle/dcs/oracle.ahf/orachk

Started at = Tue Dec 21 14:38:57 CET 2021

Scheduler type = TFA Scheduler

Scheduler PID:  87371

------------------------------------------------------------
ID: orachk.autostart_client_oratier1
------------------------------------------------------------
AUTORUN_FLAGS  =  -usediscovery -profile oratier1 -dball -showpass -tag autostart_client_oratier1 -readenvconfig
COLLECTION_RETENTION  =  7
AUTORUN_SCHEDULE  =  3 2 * * 1,2,3,4,5,6
------------------------------------------------------------
------------------------------------------------------------
ID: orachk.autostart_client
------------------------------------------------------------
AUTORUN_FLAGS  =  -usediscovery -tag autostart_client -readenvconfig
COLLECTION_RETENTION  =  14
AUTORUN_SCHEDULE  =  3 3 * * 0
------------------------------------------------------------

Next auto run starts on Dec 22, 2021 02:03:00

ID:orachk.AUTOSTART_CLIENT_ORATIER1
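As a side note, AUTORUN_SCHEDULE uses a cron-like "minute hour day-of-month month day-of-week" layout, which is consistent with the next run reported above (Dec 22, 2021 02:03:00):

# AUTORUN_SCHEDULE interpretation (minute hour day month weekday)
#   3 2 * * 1,2,3,4,5,6   ->  02:03, Monday to Saturday  (orachk.autostart_client_oratier1)
#   3 3 * * 0             ->  03:03, Sunday              (orachk.autostart_client)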

We can also check TFA processes :

[root@ODA01 TFA]#  ps -ef | grep -i tfa | grep -v grep
root     86989     1  0 14:38 ?        00:00:00 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
root     87371     1 19 14:38 ?        00:00:13 /opt/oracle/dcs/oracle.ahf/jre/bin/java -server -Xms512m -Xmx1024m -Djava.awt.headless=true -Ddisable.checkForUpdate=true -XX:HeapDumpPath=/opt/oracle/dcs/oracle.ahf/data/ODA01/diag/tfa -XX:ParallelGCThreads=5 oracle.rat.tfa.TFAMain /opt/oracle/dcs/oracle.ahf/tfa
root     92789 87371 38 14:39 ?        00:00:00 /u01/app/19.0.0.0/grid/perl/bin/perl /opt/oracle/dcs/oracle.ahf/tfa/bin/tfactl.pl availability product Europe/Zurich
[root@ODA01 TFA]#

Stop AHF and TFA

We will stop AHF and TFA again.

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl stopahf

Stopping TFA from the Command Line
Nothing to do !
Please wait while TFA stops
Please wait while TFA stops
TFA-00002 Oracle Trace File Analyzer (TFA) is not running
TFA Stopped Successfully
Successfully stopped TFA..

Stopping orachk scheduler ...
Removing orachk cache discovery....
No orachk cache discovery found.



Unable to send message to TFA



Removed orachk from inittab


Stopped orachk

AHF status checks :

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl statusahf

TFA-00002 Oracle Trace File Analyzer (TFA) is not running


No scheduler for any ID

orachk daemon is not running

TFA status checks :

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/tfactl status
TFA-00002 Oracle Trace File Analyzer (TFA) is not running

Check processes and stop TFA init.d :

[root@ODA01 TFA]#  ps -ef | grep -i tfa | grep -v grep
root     86989     1  0 14:38 ?        00:00:00 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null

[root@ODA01 TFA]# /etc/init.d/init.tfa stop
Stopping TFA from init for shutdown/reboot
Nothing to do !
TFA Stopped Successfully
Successfully stopped TFA..

[root@ODA01 TFA]#  ps -ef | grep -i tfa | grep -v grep
[root@ODA01 TFA]#

Restart only TFA

Finally, we only want to keep TFA up and running, without the AHF scheduler or the orachk daemon. So we are only going to start TFA through init.d.

[root@ODA01 TFA]# /etc/init.d/init.tfa start
Starting TFA..
Waiting up to 100 seconds for TFA to be started..
. . . . .
Successfully started TFA Process..
. . . . .
TFA Started and listening for commands
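Once TFA is started this way, we can quickly verify that the orachk scheduler did not come back along with it (a simple additional check; the full status checks follow below):

# Make sure no orachk scheduler process was started together with TFA
ps -ef | grep -i orachk | grep -v grep || echo "no orachk process running"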

Final checks

TFA running processes :

[root@ODA01 TFA]#  ps -ef | grep -i tfa | grep -v grep
root      5344     1  0 14:43 ?        00:00:00 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
root      5732     1 77 14:43 ?        00:00:11 /opt/oracle/dcs/oracle.ahf/jre/bin/java -server -Xms512m -Xmx1024m -Djava.awt.headless=true -Ddisable.checkForUpdate=true -XX:HeapDumpPath=/opt/oracle/dcs/oracle.ahf/data/ODA01/diag/tfa -XX:ParallelGCThreads=5 oracle.rat.tfa.TFAMain /opt/oracle/dcs/oracle.ahf/tfa
[root@ODA01 TFA]#

TFA status :

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/tfactl status

.------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID  | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 5732 | 5000 | 21.4.0.0.0 | 21400020211220074549 | COMPLETE         |
'-----------+---------------+------+------+------------+----------------------+------------------'
[root@ODA01 TFA]#

AHF status :

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl statusahf


.------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID  | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 5732 | 5000 | 21.4.0.0.0 | 21400020211220074549 | COMPLETE         |
'-----------+---------------+------+------+------------+----------------------+------------------'


No scheduler for any ID

orachk daemon is not running

[root@ODA01 TFA]#

Cleanup

We can keep the backup of the previous AHF version for a few days, just in case, and remove it later.
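Should a fallback ever be needed, the idea would simply be to stop TFA, move the upgraded directory aside and restore the tar backup (a rough sketch only, not tested as part of this upgrade; validate it before relying on it):

# Fallback sketch: restore the 20.1.3 backup taken before the upgrade
/etc/init.d/init.tfa stop
cd /opt/oracle/dcs
mv oracle.ahf oracle.ahf.21.4.0.old
tar -xzf /root/backup_ahf_for_upgrade/oracle.ahf.20.1.3.0.0.tar
/etc/init.d/init.tfa start

# Once the new version has been validated for a few days, the backup can simply be removed:
# rm -rf /root/backup_ahf_for_upgrade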

The AHF installation files can be deleted :

[root@ODA01 ~]# cd /u01/app/patch/

[root@ODA01 patch]# ls -l TFA
total 810144
-rw-r--r-- 1 root root 411836201 Dec 21 10:16 AHF-LINUX_v21.4.0.zip
-r-xr-xr-x 1 root root 416913901 Dec 20 19:28 ahf_setup
-rw-r--r-- 1 root root       384 Dec 20 19:30 ahf_setup.dat
-rw-r--r-- 1 root root       625 Dec 20 19:31 oracle-tfa.pub
-rw-r--r-- 1 root root      1525 Dec 20 19:31 README.txt

[root@ODA01 patch]# rm -rf TFA

[root@ODA01 patch]# ls
[root@ODA01 patch]#

The article Upgrade AHF and TFA on an ODA appeared first on the dbi services Blog.
