
The myth of NoSQL (vs. RDBMS) “a simpler API to bound resources”


By Franck Pachot

NoSQL provides an API that is much simpler than SQL, and one advantage of it is that users cannot exceed a defined amount of resources in one call. You can read this in Alex DeBrie's article https://www.alexdebrie.com/posts/dynamodb-no-bad-queries/#relational-queries-are-unbounded, which I take as a base for some of my “Myth of NoSQL vs RDBMS” posts because he explains very well how SQL and NoSQL are perceived by users. But this idea of a simpler API to limit what users can do is quite common, precedes the NoSQL era, and is still valid with some SQL databases. Here I'm demonstrating that some RDBMS provide a powerful API and can still bound what users can do. Oracle Database has had a resource manager for a long time, able to define resource limits on a per-service basis, and those features are very simple to use in the Oracle Autonomous Database – the managed database in the Oracle Cloud.

I am using the example schema from the ATP database in the free tier, so that anyone can play with this. As usual, what I show on 1 million rows and one thread can scale to multiple vCPUs and nodes. Once you understand the algorithm (the execution plan), you know how it scales.


06:36:08 SQL> set echo on serveroutput on time on timing on

06:36:14 SQL> select count(*) ,sum(s.amount_sold),sum(p.prod_list_price) 
              from sh.sales s join sh.products p using(prod_id);


   COUNT(*)    SUM(S.AMOUNT_SOLD)    SUM(P.PROD_LIST_PRICE)
___________ _____________________ _________________________
     918843           98205831.21               86564235.57

Elapsed: 00:00:00.092

I have scanned nearly one million rows from the SALES table, joined them to the PRODUCTS table, and aggregated the data to show sums over columns from both tables. That takes 92 milliseconds here (including the network roundtrip). You are not surprised to get a fast response with a join because you have read The myth of NoSQL (vs. RDBMS) “joins dont scale” 😉

Ok, now let’s say that a developer never learned about SQL joins and wants to do the same with a simpler scan/query API:


06:36:14 SQL> declare
    l_count_sales    number:=0;
    l_amount_sold    number:=0;
    l_sum_list_price number:=0;
   begin
    -- scan SALES
    for s in (select * from sh.sales) loop
     -- query PRODUCTS
     for p in (select * from sh.products where prod_id=s.prod_id) loop
      -- aggregate SUM and COUNT
      l_count_sales:=l_count_sales+1;
      l_amount_sold:=l_amount_sold+s.amount_sold;
      l_sum_list_price:=l_sum_list_price+p.prod_list_price;
     end loop;
    end loop;
    dbms_output.put_line('count_sales='||l_count_sales||' amount_sold='||l_amount_sold||' sum_list_price='||l_sum_list_price);
   end;
   /

PL/SQL procedure successfully completed.

Elapsed: 00:02:00.374

I have run this within the database with PL/SQL because I don't want to add network roundtrips and process switches to this bad design. You see that it takes 2 minutes here. Why? Because the risk, when providing an API that doesn't support joins, is that the developer will do the join in procedural code. Without SQL, the developer has no efficient and agile way to do this GROUP BY and SUM that was a one-liner in SQL: they will either loop over this simple scan/get API, or add a lot of code to initialize and maintain an aggregate derived from this table.

So, what can I do to avoid a user running this kind of query, which takes a lot of CPU and I/O resources? A simpler API will not solve this problem, as the user will work around it with many small queries. In the Oracle Autonomous Database, the admin can set some limits per service:

This says: when connected to the ‘TP’ service (the one for transactional processing with high concurrency), a user query cannot use more than 5 seconds of elapsed time, or the query is canceled.
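
The same rules can also be listed from a session. A minimal sketch, assuming the CS_RESOURCE_MANAGER package used at the end of this post also exposes the documented LIST_CURRENT_RULES function in your Autonomous Database version:

-- list the per-consumer-group rules (CS_RESOURCE_MANAGER is the ADB package
-- used below to set the limit; LIST_CURRENT_RULES is assumed available here)
SELECT * FROM TABLE(cs_resource_manager.list_current_rules());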

Now if I run the statement again:


Error starting at line : 54 File @ /home/opc/demo/tmp/atp-resource-mgmt-rules.sql
In command -
declare
 l_count_sales    number:=0;
 l_amount_sold    number:=0;
 l_sum_list_price number:=0;
begin
 -- scan SALES
 for s in (select * from sh.sales) loop
  -- query PRODUCTS
  for p in (select * from sh.products where prod_id=s.prod_id) loop
   -- aggregate SUM and COUNT
   l_count_sales:=l_count_sales+1;
   l_amount_sold:=l_amount_sold+s.amount_sold;
   l_sum_list_price:=l_sum_list_price+p.prod_list_price;
  end loop;
 end loop;
 dbms_output.put_line('count_sales='||l_count_sales||' amount_sold='||l_amount_sold||' sum_list_price='||l_sum_list_price);
end;
Error report -
ORA-56735: elapsed time limit exceeded - call aborted
ORA-06512: at line 9
ORA-06512: at line 9
56735. 00000 -  "elapsed time limit exceeded - call aborted"
*Cause:    The Resource Manager SWITCH_ELAPSED_TIME limit was exceeded.
*Action:   Reduce the complexity of the update or query, or contact your
           database administrator for more information.

Elapsed: 00:00:05.930

I get a message that I exceeded the limit. I hope that, from the message “Action: Reduce the complexity”, the user will understand something like “Please use SQL to process data sets” and will write a query with the join.

Of course, if the developer is thick-headed, they will run the loop from application code with one million short queries that do not exceed the per-execution time limit. And it will be worse because of the roundtrips between the application and the database. The “Set Resource Management Rules” page has another tab besides “Run-away criteria”: “CPU/IO shares”, so that one service can be throttled when the overall resources are saturated. With this, we can give higher priority to critical services. But I prefer to address the root cause and show the developer that when you need to join data, the most efficient way is a SQL JOIN, and when you need to aggregate data, the most efficient way is SQL GROUP BY. Of course, we can also re-design the tables to pre-join when data is ingested (materialized views in SQL or single-table design in DynamoDB, for example), but that's another topic – see the sketch below.
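
Just to illustrate that pre-join idea, a minimal sketch on the same SH schema (a complete refresh is shown to keep it simple; fast refresh on commit would additionally require materialized view logs on both tables):

-- pre-join and pre-aggregate SALES and PRODUCTS so reads do not re-scan 1M rows
create materialized view sales_products_agg
  build immediate
  refresh complete on demand
as
select count(*) count_sales,
       sum(s.amount_sold) amount_sold,
       sum(p.prod_list_price) sum_list_price
from   sh.sales s, sh.products p
where  s.prod_id = p.prod_id;

-- the original aggregation then becomes a single-row lookup:
select count_sales, amount_sold, sum_list_price from sales_products_agg;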

In the autonomous database, the GUI makes it simple, but you can query V$ views to monitor it. For example:


06:38:20 SQL> select sid,current_consumer_group_id,state,active,yields,sql_canceled,last_action,last_action_reason,last_action_time,current_active_time,active_time,current_consumed_cpu_time,consumed_cpu_time 
              from v$rsrc_session_info where sid=sys_context('userenv','sid');


     SID    CURRENT_CONSUMER_GROUP_ID      STATE    ACTIVE    YIELDS    SQL_CANCELED    LAST_ACTION     LAST_ACTION_REASON       LAST_ACTION_TIME    CURRENT_ACTIVE_TIME    ACTIVE_TIME    CURRENT_CONSUMED_CPU_TIME    CONSUMED_CPU_TIME
________ ____________________________ __________ _________ _________ _______________ ______________ ______________________ ______________________ ______________________ ______________ ____________________________ ____________________
   41150                        30407 RUNNING    TRUE             21               1 CANCEL_SQL     SWITCH_ELAPSED_TIME    2020-07-21 06:38:21                       168           5731                         5731                 5731


06:39:02 SQL> select id,name,cpu_wait_time,cpu_waits,consumed_cpu_time,yields,sql_canceled 
              from v$rsrc_consumer_group;


      ID            NAME    CPU_WAIT_TIME    CPU_WAITS    CONSUMED_CPU_TIME    YIELDS    SQL_CANCELED
________ _______________ ________________ ____________ ____________________ _________ _______________
   30409 MEDIUM                         0            0                    0         0               0
   30406 TPURGENT                       0            0                    0         0               0
   30407 TP                           286           21                 5764        21               1
   30408 HIGH                           0            0                    0         0               0
   30410 LOW                            0            0                    0         0               0
   19515 OTHER_GROUPS                 324           33                18320        33               0

You can see one SQL canceled here in the TP consumer group, and my session was at 5731 ms (5.7 seconds) of consumed CPU time.

I could have set the same programmatically with:


exec cs_resource_manager.update_plan_directive(consumer_group => 'TP', elapsed_time_limit => 5);

So, rather than limiting the API, it is better to give full SQL possibilities and limit the resources used per service: it makes sense to accept only short queries from the transaction processing services (TP/TPURGENT) and allow more time, but fewer shares, for the reporting ones (LOW/MEDIUM/HIGH).
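
A sketch of what that could look like with the same procedure as above (elapsed_time_limit is shown in this post; the shares parameter is my assumption based on the “CPU/IO shares” tab, so check the CS_RESOURCE_MANAGER documentation for your version):

-- short queries only on the transactional services:
exec cs_resource_manager.update_plan_directive(consumer_group => 'TPURGENT', elapsed_time_limit => 5);
exec cs_resource_manager.update_plan_directive(consumer_group => 'TP', elapsed_time_limit => 5);
-- more time, but fewer shares, on a reporting service (shares value is illustrative):
exec cs_resource_manager.update_plan_directive(consumer_group => 'LOW', shares => 1);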



Merge-Statement crashes with ORA-7445 [kdu_close] caused by Real Time Statistics?


In a recent project we migrated an Oracle database previously running on 12.1.0.2 on an Oracle Database Appliance to an Exadata X8 with DB version 19.7. Shortly after the migration a merge-statement (upsert) failed with an

ORA-07445: exception encountered: core dump [kdu_close()+107] [SIGSEGV] [ADDR:0xE0] [PC:0x1276AE6B] [Address not mapped to object] [] 

The stack looked as follows:

kdu_close - updThreePhaseExe - upsexe - opiexe - kpoal8 - opiodr - ttcpip - opitsk - opiino - opiodr - opidrv - sou2o - opimai_real - ssthrdmain - main - __libc_start_main - _start

As experienced Oracle DBAs know, an ORA-7445 error is usually caused by an Oracle bug (defect). Searching in My Oracle Support didn't reveal much for module “kdu_close” and the associated error stack. Working on a Service Request (SR) with Oracle Support hasn't provided a solution or workaround to the issue so far either. Checking Orafun also didn't provide much insight about kdu_close, other than the fact that we are in the area of the code about kernel data update (kdu).

As the merge crashed at the end of its processing (from earlier successful executions we knew how long the statement usually takes), I formed the hypothesis that this issue might be related to the 19c new feature Real Time Statistics on Exadata. To verify the hypothesis, I first did some tests with Real Time Statistics and merge statements in my environment, to see if they work as expected and if we can disable them with a hint:

1.) Enable Exadata Features

alter system set "_exadata_feature_on"=TRUE scope=spfile;
shutdown immediate
startup

2.) Test if a merge-statement triggers real time statistics

I set up tables tab1 and tab2, similar to the setup on Oracle-Base, and ran a merge statement which actually updates 1000 rows. A minimal sketch of such a setup follows.
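
This is my assumption of what the setup could look like (modeled on the Oracle-Base example, with names matching the merge statement below):

-- two small tables with matching ids; tab2 holds the new descriptions
create table tab1 (id number primary key, description varchar2(100));
create table tab2 (id number primary key, description varchar2(100));
insert into tab1 select level, 'Description '||level from dual connect by level <= 1000;
insert into tab2 select level, 'New description '||level from dual connect by level <= 1000;
commit;
-- gather initial statistics on the merge target
exec dbms_stats.gather_table_stats(user, 'TAB1')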

Initially we just have statistics on tab1 from dbms_stats.gather_table_stats. Here, for example, the column statistics:

testuser1@orcl@orcl> select column_name, last_analyzed, notes from user_tab_col_statistics where table_name='TAB1';

COLUMN_NAME      LAST_ANALYZED       NOTES
---------------- ------------------- ----------------------------------------------------------------
ID               07.08.2020 17:29:37
DESCRIPTION      07.08.2020 17:29:37

Then I ran the merge:

testuser1@orcl@orcl> merge
  2  into	tab1
  3  using	tab2
  4  on	(tab1.id = tab2.id)
  5  when matched then
  6  	     update set tab1.description = tab2.description
  7  WHEN NOT MATCHED THEN
  8  	 INSERT (  id, description )
  9  	 VALUES ( tab2.id, tab2.description )
 10  ;

1000 rows merged.

testuser1@orcl@orcl> commit;

Commit complete.

testuser1@orcl@orcl> exec dbms_stats.flush_database_monitoring_info;

PL/SQL procedure successfully completed.

testuser1@orcl@orcl> select column_name, last_analyzed, notes from user_tab_col_statistics where table_name='TAB1';

COLUMN_NAME      LAST_ANALYZED       NOTES
---------------- ------------------- ----------------------------------------------------------------
ID               07.08.2020 17:29:37
DESCRIPTION      07.08.2020 17:29:37
ID               07.08.2020 17:37:34 STATS_ON_CONVENTIONAL_DML
DESCRIPTION      07.08.2020 17:37:34 STATS_ON_CONVENTIONAL_DML

So obviously Real Time Statistics gathering was triggered.
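
Besides the column statistics, the same marker can be checked at table level; a small sketch (assuming the NOTES column of USER_TAB_STATISTICS, which carries the same STATS_ON_CONVENTIONAL_DML flag in 19c):

-- real-time statistics rows show up with NOTES = 'STATS_ON_CONVENTIONAL_DML'
select table_name, num_rows, last_analyzed, notes
from   user_tab_statistics
where  table_name = 'TAB1';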

After verifying that merge statements trigger real-time statistics gathering, I disabled Real Time Statistics on that specific merge statement by adding the hint

/*+ NO_GATHER_OPTIMIZER_STATISTICS */

to it.

testuser1@orcl@orcl> select column_name, last_analyzed, notes from user_tab_col_statistics where table_name='TAB1';

COLUMN_NAME      LAST_ANALYZED       NOTES
---------------- ------------------- ----------------------------------------------------------------
ID               07.08.2020 17:46:38
DESCRIPTION      07.08.2020 17:46:38

testuser1@orcl@orcl> merge /*+ NO_GATHER_OPTIMIZER_STATISTICS */
  2  into	tab1
  3  using	tab2
  4  on	(tab1.id = tab2.id)
  5  when matched then
  6  	     update set tab1.description = tab2.description
  7  WHEN NOT MATCHED THEN
  8  	 INSERT (  id, description )
  9  	 VALUES ( tab2.id, tab2.description )
 10  ;

1000 rows merged.

testuser1@orcl@orcl> commit;

Commit complete.

testuser1@orcl@orcl> exec dbms_stats.flush_database_monitoring_info;

PL/SQL procedure successfully completed.

testuser1@orcl@orcl> select column_name, last_analyzed, notes from user_tab_col_statistics where table_name='TAB1';

COLUMN_NAME      LAST_ANALYZED       NOTES
---------------- ------------------- ----------------------------------------------------------------
ID               07.08.2020 17:46:38
DESCRIPTION      07.08.2020 17:46:38

So the hint works as expected.

The statement in the real application was generated and could not be modified, so I had to create a SQL patch to add the hint to it at parse time:

var rv varchar2(32);
begin
   :rv:=dbms_sqldiag.create_sql_patch(sql_id=>'13szq2g6xbsg5',
                                      hint_text=>'NO_GATHER_OPTIMIZER_STATISTICS',
                                      name=>'disable_real_time_stats_on_merge',
                                      description=>'disable real time stats');
end;
/
print rv

REMARK: If a statement is no longer in the shared pool but is still available in the AWR history, you may use the method below to create the SQL patch:

var rv varchar2(32);
declare
   v_sql CLOB;
begin
   select sql_text into v_sql from dba_hist_sqltext where sql_id='13szq2g6xbsg5';
   :rv:=dbms_sqldiag.create_sql_patch(
             sql_text  => v_sql,
             hint_text=>'NO_GATHER_OPTIMIZER_STATISTICS',
             name=>'disable_real_time_stats_on_merge',
             description=>'disable real time stats');
end;
/
print rv
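
In both cases the patch can then be verified; a quick check (DBA_SQL_PATCHES is the standard view, and after the next hard parse the execution plan's Note section should also mention the patch):

-- confirm the SQL patch exists and is enabled
select name, status, created
from   dba_sql_patches
where  name = 'disable_real_time_stats_on_merge';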

It turned out that disabling Real Time Statistics actually worked around the ORA-7445 issue. It might be a coincidence and just a positive side effect, but for the moment we can live with it and hope that this information helps to resolve the open SR, so that we get a permanent fix from Oracle for this defect.


Oracle Database Appliance and CPU speed


Introduction

A complaint I've heard from customers about the ODA is the low core speed of the Intel Xeon processor embedded in the X8-2 servers: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz. Only 2.30GHz? Because of its comfortable number of cores (16 per processor), the cruise speed of each core is limited. Is that a problem compared to a home-made server with fewer cores?

Why clock speed is important?

As you may know, the faster a core runs, the less time it takes to complete a task. Single-core clock speed is still an important parameter for Oracle databases. Oracle's software architecture is brilliant: automatic parallelism can dramatically reduce the time needed for some statements to complete, but the vast majority of them will be processed on a single thread. Regarding Standard Edition 2, parallelism does not exist in this edition, thus each statement is limited to a single thread.

Is the ODA X8-2 processor really limited to 2.3GHz?

Don't be afraid of this low CPU speed: it is actually the lowest speed the cores are guaranteed to operate at. The speed of the cores can be increased by the system, depending on various parameters, and the fastest speed is 3.9GHz for this kind of CPU, which is nearly twice the base frequency. This Xeon processor, like most of its predecessors, features Turbo Boost technology, a kind of intelligent automatic overclocking.

Turbo boost technology?

As far as I know, the whole Xeon family has Turbo Boost technology. If you need more MHz than normal from time to time, your CPU speed can increase greatly, to something like 180% of its nominal speed, which is quite amazing. But why isn't this speed the default speed of the cores? Simply because running all the cores at full speed has a thermal impact on the CPU itself, and on the complete system. As a consequence, heating can exceed the cooling capacity and damage the hardware. To manage speed and thermal efficiency, Intel's processors dynamically distribute Turbo bins, which are basically slices of frequency increase. For each CPU model, a defined number of Turbo bins is available and will be given to the cores. The rule is that each core receives the same number of Turbo bins at the same time. What's most interesting on the ODA is that this is related to enabled cores: the fewer cores are enabled on the CPU, the more Turbo bins are available for each single core.

Turbo bins and a limited number of cores

With a limited number of cores, the heat produced by your CPU will be quite low under normal conditions, and still low under heavy load, because the heatsink and the fans are sized for using all the cores. As a result, most of the time the Turbo bins will be allocated to your cores, and if you're lucky, you'll be running at full throttle: for example, instead of a 16-core CPU running at 2.3GHz, you'll have a 4-core CPU running at 3.9GHz. Quite nice, isn't it?

With Enterprise Edition

One of the main features of the ODA is the ability to configure the number of cores you need, and to pay the license only for these enabled cores. Most customers are only using a few cores, and that's nice for single-threaded performance. You can expect full speed at least for 2 and 4 enabled cores.

What about Standard Edition 2?

With Standard Edition 2, you don't need to decrease the cores on your server because your license is related to the socket, not the cores. But nothing prevents you from decreasing the number of cores. There should be a limit under which fewer but faster cores will benefit all of your databases. If you only have a few databases on your ODA (let's say less than 10 on an X8-2M), there is no question about decreasing the number of cores: it will most probably bring you more performance. If you have many more databases, the overall performance will probably be better with all the cores running at lower speed.

And when using old software/hardware?

Turbo Boost was also available on the X7-2, but old software releases (18.x) do not seem to let the cores go faster than their normal speed. Maybe it's due to the Linux version: the jump from Linux 6 to Linux 7, starting from the early versions of 19.x, probably has something to do with that. Patching to 19.x is highly recommended on X7 for a new reason: better performance.

Conclusion

If you're using Standard Edition 2, don't hesitate to decrease the number of enabled cores on your ODA: it will probably bring you a nice speed bump. If you're using Enterprise Edition and don't plan to use all the cores on your ODA, you will benefit from very fast cores and make the best use of your licenses. Take this with a grain of salt, as it will depend on the environment, both physical and logical, and as these conclusions came from a quite limited number of systems. Definitely, with its fast NVMe disks and these Xeon CPUs, the ODA is the perfect choice for most of us.


Troubleshooting performance on Autonomous Database


By Franck Pachot


On my Oracle Cloud Free Tier Autonomous Transaction Processing service, a database that can be used for free with no time limit, I have seen this strange activity. As I'm running nothing scheduled, I was surprised by this pattern and looked at it out of curiosity. And I got the idea to take some screenshots to show you how I look at those things. The easiest performance tool available in the Autonomous Database is the Performance Hub, which shows the activity through time with detail on multiple dimensions for drill-down analysis. This is based on ASH, of course.

In the upper pane, I focus on the part with homogeneous activity because I will view the detail without the timeline and then want to compare the activity metric (Average Active Sessions) with the peak I observed. Without this, I might start to look at something that is not significant and waste my time. Here, where the activity is about 1 active session, I want to drill down on dimensions that account for around 0.8 active sessions, to be sure I address 80% of the surprising activity. If the selected part included some idle time around it, I would not be able to do this easily.

The second pane lets me drill down either on 3 dimensions in a load map (we will see that later), or on one main dimension with the time axis (in this screenshot the dimension is “Consumer Group”) with two other dimensions below displayed without the time detail, here “Wait Class” and “Wait Event”. This is where I want to compare the activity (0.86 average active sessions on CPU) to the load I'm looking at, as I don't have the time axis to see peaks and idle periods.

  • I see “Internal” for all “Session Attributes” ASH dimensions, like “Consumer Group”, “Module”, “Action”, “Client”, “Client Host Port”
  • About “Session Identifiers” ASH dimensions, I still see “internal” for “User Session”, “User Name” and “Program”.
  • “Parallel Process” shows “Serial” and “Session Type” shows “Foreground”, which doesn't give me more information

I have more information from “Resource Consumption”:

  • ASH Dimension “Wait Class”: mostly “CPU” and some “User I/O”
  • ASH Dimension “Wait Event”: the “User I/O” is “direct path read temp”

I'll dig into those details later. There's no direct detail for the CPU consumption. I'll look at logical reads of course, and at the SQL Plan, but I cannot directly match the CPU time with that, especially from Average Active Sessions where I don't have the CPU time – I only have samples there. It may be easier with “User I/O” because those waits should show up in other dimensions.

There is no “Blocking Session”, but the ASH Dimension “Object” gives interesting information:

  • ASH Dimension “Object”: SYS.SYS_LOB0000009134C00039$$ and SYS.SYS_LOB0000011038C00004$$ (LOB)

I don’t know an easy way to copy/paste from the Performance Hub so I have generated an AWR report and found them in the Top DB Objects section:

Object ID  % Activity  Event             % Event  Object Name (Type)                   Tablespace  Container Name
9135       24.11       direct path read  24.11    SYS.SYS_LOB0000009134C00039$$ (LOB)  SYSAUX      SUULFLFCSYX91Z0_ATP1
11039      10.64       direct path read  10.64    SYS.SYS_LOB0000011038C00004$$ (LOB)  SYSAUX      SUULFLFCSYX91Z0_ATP1

That's the beauty of ASH: in addition to showing you the load per dimension, it links all the dimensions. Here, without guessing, I know that those objects are responsible for the “direct path read temp” I have seen above.

Let me insist on the numbers. I mentioned that I selected, in the upper chart, a homogeneous activity time window in order to compare the activity numbers with and without the time axis. My total activity during this time window is a little over 1 active session (on average: AAS, Average Active Sessions). I can see this on the y-axis of the time chart, and I confirm it when I sum up the aggregations on the other dimensions: above, CPU + User I/O was 0.86 + 0.37 = 1.23 when the selected part was around 1.25 active sessions. Here, looking at the “Object” dimension, I see around 0.5 sessions on SYS_LOB0000011038C00004$$ (green) during one minute, then around 0.3 sessions on SYS_LOB0000009134C00039$$ (blue) for 5 minutes, and no activity on objects during 1 minute. That matches approximately the 0.37 AAS on User I/O. In the AWR report this is displayed as “% Event”, and 24.11 + 10.64 = 34.75%, which is roughly the ratio of those 0.37 to 1.25 we had with Average Active Sessions. When looking at sampling activity details, it is important to keep in mind the weight of each component we look at.

Let’s get more detail about those objects, from SQL Developer Web, or any connection:


DEMO@atp1_tp> select owner,object_name,object_type,oracle_maintained from dba_objects 
where owner='SYS' and object_name in ('SYS_LOB0000009134C00039$$','SYS_LOB0000011038C00004$$');

   OWNER                  OBJECT_NAME    OBJECT_TYPE    ORACLE_MAINTAINED
________ ____________________________ ______________ ____________________
SYS      SYS_LOB0000009134C00039$$    LOB            Y
SYS      SYS_LOB0000011038C00004$$    LOB            Y

DEMO@atp1_tp> select owner,table_name,column_name,segment_name,tablespace_name from dba_lobs 
where owner='SYS' and segment_name in ('SYS_LOB0000009134C00039$$','SYS_LOB0000011038C00004$$');

   OWNER                TABLE_NAME    COLUMN_NAME                 SEGMENT_NAME    TABLESPACE_NAME
________ _________________________ ______________ ____________________________ __________________
SYS      WRI$_SQLSET_PLAN_LINES    OTHER_XML      SYS_LOB0000009134C00039$$    SYSAUX
SYS      WRH$_SQLTEXT              SQL_TEXT       SYS_LOB0000011038C00004$$    SYSAUX

Ok, that’s interesting information. It confirms why I see ‘internal’ everywhere: those are dictionary tables.

WRI$_SQLSET_PLAN_LINES is about SQL Tuning Sets, and in 19c, especially with the Auto Index feature, SQL statements are captured every 15 minutes and analyzed to find index candidates. A look at the SQL Tuning Sets confirms this:


DEMO@atp1_tp> select sqlset_name,parsing_schema_name,count(*),dbms_xplan.format_number(sum(length(sql_text))),min(plan_timestamp)
from dba_sqlset_statements group by parsing_schema_name,sqlset_name order by count(*);


    SQLSET_NAME    PARSING_SCHEMA_NAME    COUNT(*)    DBMS_XPLAN.FORMAT_NUMBER(SUM(LENGTH(SQL_TEXT)))    MIN(PLAN_TIMESTAMP)
_______________ ______________________ ___________ __________________________________________________ ______________________
SYS_AUTO_STS    C##OMLIDM                        1 53                                                 30-APR-20
SYS_AUTO_STS    FLOWS_FILES                      1 103                                                18-JUL-20
SYS_AUTO_STS    DBSNMP                           6 646                                                26-MAY-20
SYS_AUTO_STS    XDB                              7 560                                                20-MAY-20
SYS_AUTO_STS    ORDS_PUBLIC_USER                 9 1989                                               30-APR-20
SYS_AUTO_STS    GUEST0001                       10 3656                                               20-MAY-20
SYS_AUTO_STS    CTXSYS                          12 1193                                               20-MAY-20
SYS_AUTO_STS    LBACSYS                         28 3273                                               30-APR-20
SYS_AUTO_STS    AUDSYS                          29 3146                                               26-MAY-20
SYS_AUTO_STS    ORDS_METADATA                   29 4204                                               20-MAY-20
SYS_AUTO_STS    C##ADP$SERVICE                  33 8886                                               11-AUG-20
SYS_AUTO_STS    MDSYS                           39 4964                                               20-MAY-20
SYS_AUTO_STS    DVSYS                           65 8935                                               30-APR-20
SYS_AUTO_STS    APEX_190200                    130 55465                                              30-APR-20
SYS_AUTO_STS    C##CLOUD$SERVICE               217 507K                                               30-APR-20
SYS_AUTO_STS    ADMIN                          245 205K                                               30-APR-20
SYS_AUTO_STS    DEMO                           628 320K                                               30-APR-20
SYS_AUTO_STS    APEX_200100                  2,218 590K                                               18-JUL-20
SYS_AUTO_STS    SYS                        106,690 338M                                               30-APR-20

All gathered by this SYS_AUTO_STS job. And the captured statements were parsed by SYS – a system job works hard because of system statements, as I mentioned when I saw this for the first time.

With this drill-down from the “Object” dimension, I've already gone far enough to get an idea about the problem: an internal job is reading the huge SQL Tuning Sets that have been collected by the Auto STS job introduced in 19c (and used by Auto Index). But I'll continue to look at all the other ASH dimensions. They can give me more detail, or at least confirm my guesses. That's the idea: you look at all the dimensions and, once one gives you interesting information, you dig down into more details.

I look at the “PL/SQL” ASH dimension first because an application should call SQL from procedural code and not the opposite. And, as all this is internal, developed by Oracle, I expect they do it this way.

  • ASH Dimension “PL/SQL”: I see ‘7322,38’
  • ASH Dimension “Top PL/SQL”: I see ‘19038,5’

Again, I copy/paste to avoid typos and get them from the AWR report's “Top PL/SQL Procedures” section:

PL/SQL Entry Subprogram        % Activity  PL/SQL Current Subprogram    % Current  Container Name
UNKNOWN_PLSQL_ID <19038, 5>    78.72       SQL                          46.81      SUULFLFCSYX91Z0_ATP1
                                           UNKNOWN_PLSQL_ID <7322, 38>  31.21      SUULFLFCSYX91Z0_ATP1
UNKNOWN_PLSQL_ID <13644, 332>  2.13        SQL                          2.13       SUULFLFCSYX91Z0_ATP1
UNKNOWN_PLSQL_ID <30582, 1>    1.42        SQL                          1.42       SUULFLFCSYX91Z0_ATP1

A side note on the numbers: activity was 0.35 AAS on top-level PL/SQL and 0.33 on current PL/SQL. The 0.33 is included within the 0.35, as those sessions are active within a PL/SQL call. In AWR (where “Entry” means “top-level”) you see them nested and including the SQL activity; this is why you see 78.72% here: it is SQL + PL/SQL executed under the top-level call. But the procedure (7322,38) itself is 31.21% of the total AAS, which matches the 0.33 AAS.

By the way, I didn't mention it before, but this part of the AWR report is actually an ASH report that is included in the AWR HTML report.

Now let's try to identify those procedures. I think the “UNKNOWN” comes from not finding them in the package procedures:


DEMO@atp1_tp> select * from dba_procedures where (object_id,subprogram_id) in ( (7322,38) , (19038,5) );

no rows selected

but I do find them in DBA_OBJECTS:


DEMO@atp1_tp> select owner,object_name,object_id,object_type,oracle_maintained,last_ddl_time from dba_objects where object_id in (7322,19038);

   OWNER           OBJECT_NAME    OBJECT_ID    OBJECT_TYPE    ORACLE_MAINTAINED    LAST_DDL_TIME
________ _____________________ ____________ ______________ ____________________ ________________
SYS      XMLTYPE                      7,322 TYPE           Y                    18-JUL-20
SYS      DBMS_AUTOTASK_PRVT          19,038 PACKAGE        Y                    22-MAY-20

and DBA_PROCEDURES:


DEMO@atp1_tp> select owner,object_name,procedure_name,object_id,subprogram_id from dba_procedures where object_id in(7322,19038);


   OWNER                   OBJECT_NAME    PROCEDURE_NAME    OBJECT_ID    SUBPROGRAM_ID
________ _____________________________ _________________ ____________ ________________
SYS      DBMS_RESULT_CACHE_INTERNAL    RELIES_ON               19,038                1
SYS      DBMS_RESULT_CACHE_INTERNAL                            19,038                0

All this doesn’t match 🙁

My guess is that the top-level PL/SQL object is DBMS_AUTOTASK_PRVT, as I can see the container it is running in, which is the one I'm connected to (an autonomous database is a pluggable database in the Oracle Cloud container database). It has OBJECT_ID=19038 in my PDB. But DBA_PROCEDURES is an extended data link, and the OBJECT_ID of common objects differs between CDB$ROOT and the PDBs. OBJECT_ID=7322 is probably an identifier in CDB$ROOT, where the active session monitoring runs. I cannot verify this, as I only have a local user. Because of this inconsistency, my drill-down on the PL/SQL dimension stops there.
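
For illustration only, here is what a common user could run to resolve this (hypothetical in my case, since an Autonomous Database only gives local users): the CONTAINERS() clause queries each open container, so the OBJECT_ID can be compared between CDB$ROOT (CON_ID=1) and the PDB:

-- compare object ids across containers (requires a common user with privileges)
select con_id, owner, object_name, object_id
from   containers(dba_objects)
where  object_id in (7322, 19038)
order  by con_id;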

The package calls some SQL and, from browsing the AWR report, I've seen in the time model that “sql execute elapsed time” is the major component:

Statistic Name                 Time (s)  % of DB Time  % of Total CPU Time
sql execute elapsed time       1,756.19  99.97
DB CPU                         1,213.59  69.08         94.77
PL/SQL execution elapsed time    498.62  28.38

I'll follow the hierarchy of this dimension – the most detailed will be the SQL Plan operation. But let's start with “SQL Opcode”:

  • ASH Dimension “Top Level Opcode”: mostly “PL/SQL EXECUTE”, which confirms that the SQL I'll see is called by the PL/SQL.
  • ASH Dimension “Top Level SQL ID”: mostly dkb7ts34ajsjy here. I'll look at its details further down.

From the AWR report, I see all statements with no distinction between recursive and top-level ones, and there's nothing there to tell you what is running as a recursive call under which top-level one. It can often be guessed from the time and other statistics – here I have 3 queries taking almost the same database time:

Elapsed Time (s)  Executions  Elapsed Time per Exec (s)  %Total  %CPU   %IO    SQL Id         SQL Module      SQL Text
1,110.86          3           370.29                     63.24   61.36  50.16  dkb7ts34ajsjy  DBMS_SCHEDULER  DECLARE job BINARY_INTEGER := …
1,110.85          3           370.28                     63.24   61.36  50.16  f6j6vuum91fw8  DBMS_SCHEDULER  begin /*KAPI:task_proc*/ dbms_…
1,087.12          3           362.37                     61.88   61.65  49.93  0y288pk81u609  SYS_AI_MODULE   SELECT /*+dynamic_sampling(11)…

SYS_AI_MODULE is the module of the Auto Indexing feature:


DEMO@atp1_tp> select distinct sql_id,sql_text from v$sql where sql_id in ('dkb7ts34ajsjy','f6j6vuum91fw8','0y288pk81u609');
dkb7ts34ajsjy    DECLARE job BINARY_INTEGER := :job;  next_date TIMESTAMP WITH TIME ZONE := :mydate;  broken BOOLEAN := FALSE;  job_name VARCHAR2(128) := :job_name;  job_subname VARCHAR2(128) := :job_subname;  job_owner VARCHAR2(128) := :job_owner;  job_start TIMESTAMP WITH TIME ZONE := :job_start;  job_scheduled_start TIMESTAMP WITH TIME ZONE := :job_scheduled_start;  window_start TIMESTAMP WITH TIME ZONE := :window_start;  window_end TIMESTAMP WITH TIME ZONE := :window_end;  chain_id VARCHAR2(14) :=  :chainid;  credential_owner VARCHAR2(128) := :credown;  credential_name  VARCHAR2(128) := :crednam;  destination_owner VARCHAR2(128) := :destown;  destination_name VARCHAR2(128) := :destnam;  job_dest_id varchar2(14) := :jdestid;  log_id number := :log_id;  BEGIN  begin dbms_autotask_prvt.run_autotask(3, 0);  end;  :mydate := next_date; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
f6j6vuum91fw8    begin /*KAPI:task_proc*/ dbms_auto_index_internal.task_proc(FALSE); end;                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       
0y288pk81u609    SELECT /*+dynamic_sampling(11) NO_XML_QUERY_REWRITE */ SQL_ID, PLAN_HASH_VALUE, ELAPSED_TIME/EXECUTIONS ELAPSED_PER_EXEC, DBMS_AUTO_INDEX_INTERNAL.AUTO_INDEX_ALLOW(CE) SESSION_TYPE FROM (SELECT SQL_ID, PLAN_HASH_VALUE, MIN(ELAPSED_TIME) ELAPSED_TIME, MIN(EXECUTIONS) EXECUTIONS, MIN(OPTIMIZER_ENV) CE, MAX(EXISTSNODE(XMLTYPE(OTHER_XML), '/other_xml/info[@type = "has_user_tab"]')) USER_TAB FROM (SELECT F.NAME AS SQLSET_NAME, F.OWNER AS SQLSET_OWNER, SQLSET_ID, S.SQL_ID, T.SQL_TEXT, S.COMMAND_TYPE, P.PLAN_HASH_VALUE, SUBSTRB(S.MODULE, 1, (SELECT KSUMODLEN FROM X$MODACT_LENGTH)) MODULE, SUBSTRB(S.ACTION, 1, (SELECT KSUACTLEN FROM X$MODACT_LENGTH)) ACTION, C.ELAPSED_TIME, C.BUFFER_GETS, C.EXECUTIONS, C.END_OF_FETCH_COUNT, P.OPTIMIZER_ENV, L.OTHER_XML FROM WRI$_SQLSET_DEFINITIONS F, WRI$_SQLSET_STATEMENTS S, WRI$_SQLSET_PLANS P,WRI$_SQLSET_MASK M, WRH$_SQLTEXT T, WRI$_SQLSET_STATISTICS C, WRI$_SQLSET_PLAN_LINES L WHERE F.ID = S.SQLSET_ID AND S.ID = P.STMT_ID AND S.CON_DBID = P.CON_DBID AND P.

It looks like dbms_autotask_prvt.run_autotask calls dbms_auto_index_internal.task_proc, which queries the WRI$_SQLSET tables, and this is where all the database time goes.

  • ASH Dimension “SQL Opcode”: mostly SELECT statements here
  • ASH Dimension “SQL Force Matching Signature” is interesting to group all statements that differ only by literals.
  • ASH Dimension “SQL Plan Hash Value”, and the more detailed “SQL Full Plan Hash Value”, are interesting to group all statements having the same execution plan shape, or exactly the same execution plan

  • ASH Dimension “SQL ID” is the most interesting here to see which of these SELECT queries is seen most of the time below this top-level call, but unfortunately I see “internal” here. Fortunately, the AWR report above did not hide this.
  • ASH Dimension “SQL Plan Operation” shows me that within this query I'm spending time on a HASH GROUP BY operation (which, if the workarea is large, does some “direct path read temp” as we encountered on the “Wait Event” dimension)
  • ASH Dimension “SQL Plan Operation Line” helps me find this operation in the plan, as in addition to the SQL_ID (the one that was hidden in the “SQL ID” dimension) I have the plan identification (plan hash value) and the plan line number.

Again, I use the graphical Performance Hub to find where I need to drill down, and I find all the details in the AWR report's “Top SQL with Top Events” section:

SQL ID         Plan Hash   Executions  % Activity  Event               % Event  Top Row Source             % Row Source  SQL Text
0y288pk81u609  2011736693  3           70.21       CPU + Wait for CPU  35.46    HASH – GROUP BY            28.37         SELECT /*+dynamic_sampling(11)…
                                                   direct path read    34.75    HASH – GROUP BY            24.11
444n6jjym97zv  1982042220  18          12.77       CPU + Wait for CPU  12.77    FIXED TABLE – FULL         12.77         SELECT /*+ unnest */ * FROM GV…
1xx2k8pu4g5yf  2224464885  2           5.67        CPU + Wait for CPU  5.67     FIXED TABLE – FIXED INDEX  2.84          SELECT /*+ first_rows(1) */ s…
3kqrku32p6sfn  3786872576  3           2.13        CPU + Wait for CPU  2.13     FIXED TABLE – FULL         2.13          MERGE /*+ OPT_PARAM(‘_parallel…
64z4t33vsvfua  3336915854  2           1.42        CPU + Wait for CPU  1.42     FIXED TABLE – FIXED INDEX  0.71          WITH LAST_HOUR AS ( SELECT ROU…

I can see the full SQL text in the AWR report and get the per-statement AWR report with dbms_workload_repository.
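
A sketch of that per-statement report call (the function and its parameter names are documented; the DBID and snapshot IDs are placeholders to look up in V$DATABASE and DBA_HIST_SNAPSHOT):

-- text AWR report for one SQL_ID over a snapshot range
select output
from   table(dbms_workload_repository.awr_sql_report_text(
         l_dbid     => 1234567890,      -- V$DATABASE.DBID (placeholder)
         l_inst_num => 1,               -- instance number
         l_bid      => 100,             -- begin snapshot id (placeholder)
         l_eid      => 101,             -- end snapshot id (placeholder)
         l_sqlid    => '0y288pk81u609'));

I can also fetch the plan with DBMS_XPLAN.DISPLAY_AWR: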


DEMO@atp1_tp> select * from dbms_xplan.display_awr('0y288pk81u609',2011736693,null,'+peeked_binds');


                                                                                                              PLAN_TABLE_OUTPUT
_______________________________________________________________________________________________________________________________
SQL_ID 0y288pk81u609
--------------------
SELECT /*+dynamic_sampling(11) NO_XML_QUERY_REWRITE */ SQL_ID,
PLAN_HASH_VALUE, ELAPSED_TIME/EXECUTIONS ELAPSED_PER_EXEC,
DBMS_AUTO_INDEX_INTERNAL.AUTO_INDEX_ALLOW(CE) SESSION_TYPE FROM (SELECT
SQL_ID, PLAN_HASH_VALUE, MIN(ELAPSED_TIME) ELAPSED_TIME,
MIN(EXECUTIONS) EXECUTIONS, MIN(OPTIMIZER_ENV) CE,
MAX(EXISTSNODE(XMLTYPE(OTHER_XML), '/other_xml/info[@type =
"has_user_tab"]')) USER_TAB FROM (SELECT F.NAME AS SQLSET_NAME, F.OWNER
AS SQLSET_OWNER, SQLSET_ID, S.SQL_ID, T.SQL_TEXT, S.COMMAND_TYPE,
P.PLAN_HASH_VALUE, SUBSTRB(S.MODULE, 1, (SELECT KSUMODLEN FROM
X$MODACT_LENGTH)) MODULE, SUBSTRB(S.ACTION, 1, (SELECT KSUACTLEN FROM
X$MODACT_LENGTH)) ACTION, C.ELAPSED_TIME, C.BUFFER_GETS, C.EXECUTIONS,
C.END_OF_FETCH_COUNT, P.OPTIMIZER_ENV, L.OTHER_XML FROM
WRI$_SQLSET_DEFINITIONS F, WRI$_SQLSET_STATEMENTS S, WRI$_SQLSET_PLANS
P,WRI$_SQLSET_MASK M, WRH$_SQLTEXT T, WRI$_SQLSET_STATISTICS C,
WRI$_SQLSET_PLAN_LINES L WHERE F.ID = S.SQLSET_ID AND S.ID = P.STMT_ID
AND S.CON_DBID = P.CON_DBID AND P.STMT_ID = C.STMT_ID AND
P.PLAN_HASH_VALUE = C.PLAN_HASH_VALUE AND P.CON_DBID = C.CON_DBID AND
P.STMT_ID = M.STMT_ID AND P.PLAN_HASH_VALUE = M.PLAN_HASH_VALUE AND
P.CON_DBID = M.CON_DBID AND S.SQL_ID = T.SQL_ID AND S.CON_DBID =
T.CON_DBID AND T.DBID = F.CON_DBID AND P.STMT_ID=L.STMT_ID AND
P.PLAN_HASH_VALUE = L.PLAN_HASH_VALUE AND P.CON_DBID = L.CON_DBID) S,
WRI$_ADV_OBJECTS OS WHERE SQLSET_OWNER = :B8 AND SQLSET_NAME = :B7 AND
(MODULE IS NULL OR (MODULE != :B6 AND MODULE != :B5 )) AND SQL_TEXT NOT
LIKE 'SELECT /* DS_SVC */%' AND SQL_TEXT NOT LIKE 'SELECT /*
OPT_DYN_SAMP */%' AND SQL_TEXT NOT LIKE '/*AUTO_INDEX:ddl*/%' AND
SQL_TEXT NOT LIKE '%/*+%dbms_stats%' AND COMMAND_TYPE NOT IN (9, 10,
11) AND PLAN_HASH_VALUE > 0 AND BUFFER_GETS > 0 AND EXECUTIONS > 0 AND
OTHER_XML IS NOT NULL AND OS.SQL_ID_VC (+)= S.SQL_ID AND OS.TYPE (+)=
:B4 AND DECODE(OS.TYPE(+), :B4 , TO_NUMBER(OS.ATTR2(+)), -1) =
S.PLAN_HASH_VALUE AND OS.TASK_ID (+)= :B3 AND OS.EXEC_NAME (+) IS NULL
AND (OS.SQL_ID_VC IS NULL OR TO_DATE(OS.ATTR18, :B2 )  0 ORDER BY
DBMS_AUTO_INDEX_INTERNAL.AUTO_INDEX_ALLOW(CE) DESC, ELAPSED_TIME DESC

Plan hash value: 2011736693

----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |                                |       |       |   957 (100)|          |
|   1 |  SORT ORDER BY                            |                                |   180 |   152K|   957  (18)| 00:00:01 |
|   2 |   FILTER                                  |                                |       |       |            |          |
|   3 |    HASH GROUP BY                          |                                |   180 |   152K|   957  (18)| 00:00:01 |
|   4 |     NESTED LOOPS                          |                                |  3588 |  3030K|   955  (18)| 00:00:01 |
|   5 |      FILTER                               |                                |       |       |            |          |
|   6 |       HASH JOIN RIGHT OUTER               |                                |  3588 |  2964K|   955  (18)| 00:00:01 |
|   7 |        TABLE ACCESS BY INDEX ROWID BATCHED| WRI$_ADV_OBJECTS               |     1 |    61 |     4   (0)| 00:00:01 |
|   8 |         INDEX RANGE SCAN                  | WRI$_ADV_OBJECTS_IDX_02        |     1 |       |     3   (0)| 00:00:01 |
|   9 |        HASH JOIN                          |                                |  3588 |  2750K|   951  (18)| 00:00:01 |
|  10 |         TABLE ACCESS STORAGE FULL         | WRI$_SQLSET_PLAN_LINES         | 86623 |  2706K|   816  (19)| 00:00:01 |
|  11 |         HASH JOIN                         |                                |  3723 |  2737K|   134   (8)| 00:00:01 |
|  12 |          TABLE ACCESS STORAGE FULL        | WRI$_SQLSET_STATISTICS         | 89272 |  2789K|    21  (10)| 00:00:01 |
|  13 |          HASH JOIN                        |                                |  3744 |  2636K|   112   (7)| 00:00:01 |
|  14 |           JOIN FILTER CREATE              | :BF0000                        |  2395 |   736K|    39  (13)| 00:00:01 |
|  15 |            HASH JOIN                      |                                |  2395 |   736K|    39  (13)| 00:00:01 |
|  16 |             TABLE ACCESS STORAGE FULL     | WRI$_SQLSET_STATEMENTS         |  3002 |   137K|    13  (24)| 00:00:01 |
|  17 |              FIXED TABLE FULL             | X$MODACT_LENGTH                |     1 |     5 |     0   (0)|          |
|  18 |              FIXED TABLE FULL             | X$MODACT_LENGTH                |     1 |     5 |     0   (0)|          |
|  19 |              FIXED TABLE FULL             | X$MODACT_LENGTH                |     1 |     5 |     0   (0)|          |
|  20 |             NESTED LOOPS                  |                                |  1539 |   402K|    25   (4)| 00:00:01 |
|  21 |              TABLE ACCESS BY INDEX ROWID  | WRI$_SQLSET_DEFINITIONS        |     1 |    27 |     1   (0)| 00:00:01 |
|  22 |               INDEX UNIQUE SCAN           | WRI$_SQLSET_DEFINITIONS_IDX_01 |     1 |       |     0   (0)|          |
|  23 |              TABLE ACCESS STORAGE FULL    | WRH$_SQLTEXT                   |  1539 |   362K|    24   (5)| 00:00:01 |
|  24 |           JOIN FILTER USE                 | :BF0000                        | 89772 |    34M|    73   (3)| 00:00:01 |
|  25 |            TABLE ACCESS STORAGE FULL      | WRI$_SQLSET_PLANS              | 89772 |    34M|    73   (3)| 00:00:01 |
|  26 |      INDEX UNIQUE SCAN                    | WRI$_SQLSET_MASK_PK            |     1 |    19 |     0   (0)|          |
----------------------------------------------------------------------------------------------------------------------------

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 7 (U - Unused (7))
---------------------------------------------------------------------------

   0 -  SEL$5
         U -  MERGE(@"SEL$5" >"SEL$4") / duplicate hint
         U -  MERGE(@"SEL$5" >"SEL$4") / duplicate hint

   1 -  SEL$5C160134
         U -  dynamic_sampling(11) / rejected by IGNORE_OPTIM_EMBEDDED_HINTS

  17 -  SEL$7286615E
         U -  PUSH_SUBQ(@"SEL$7286615E") / duplicate hint
         U -  PUSH_SUBQ(@"SEL$7286615E") / duplicate hint

  17 -  SEL$7286615E / X$MODACT_LENGTH@SEL$5
         U -  FULL(@"SEL$7286615E" "X$MODACT_LENGTH"@"SEL$5") / duplicate hint
         U -  FULL(@"SEL$7286615E" "X$MODACT_LENGTH"@"SEL$5") / duplicate hint

Peeked Binds (identified by position):
--------------------------------------

   1 - :B8 (VARCHAR2(30), CSID=873): 'SYS'
   2 - :B7 (VARCHAR2(30), CSID=873): 'SYS_AUTO_STS'
   5 - :B4 (NUMBER): 7
   7 - :B3 (NUMBER): 15

Note
-----
   - SQL plan baseline SQL_PLAN_gf2c99a3zrzsge1b441a5 used for this statement

I can confirm what I've seen about the HASH GROUP BY on line ID=3.
I forgot to mention that SQL Monitor is not available for this query, probably because it is disabled for internal queries. Anyway, the most interesting thing here is that the plan comes from SQL Plan Management.

Here is more information about this SQL Plan Baseline:


DEMO@atp1_tp> select * from dbms_xplan.display_sql_plan_baseline('','SQL_PLAN_gf2c99a3zrzsge1b441a5');
                                                                                                                  ...
--------------------------------------------------------------------------------
SQL handle: SQL_f709894a87fbff0f
SQL text: SELECT /*+dynamic_sampling(11) NO_XML_QUERY_REWRITE */ SQL_ID,
          PLAN_HASH_VALUE, ELAPSED_TIME/EXECUTIONS ELAPSED_PER_EXEC,
...
--------------------------------------------------------------------------------
Plan name: SQL_PLAN_gf2c99a3zrzsge1b441a5         Plan id: 3786686885
Enabled: YES     Fixed: NO      Accepted: YES     Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------
...

This shows only one plan, but I want to see all plans for this statement.


DEMO@atp1_tp> select 
CREATOR,ORIGIN,CREATED,LAST_MODIFIED,LAST_EXECUTED,LAST_VERIFIED,ENABLED,ACCEPTED,FIXED,REPRODUCED
from dba_sql_plan_baselines where sql_handle='SQL_f709894a87fbff0f' order by created;


   CREATOR                           ORIGIN            CREATED      LAST_MODIFIED      LAST_EXECUTED      LAST_VERIFIED    ENABLED    ACCEPTED    FIXED    REPRODUCED
__________ ________________________________ __________________ __________________ __________________ __________________ __________ ___________ ________ _____________
SYS        EVOLVE-LOAD-FROM-AWR             30-MAY-20 11:50    30-JUL-20 23:34                       30-JUL-20 23:34    YES        NO          NO       YES
SYS        EVOLVE-LOAD-FROM-AWR             30-MAY-20 11:50    31-JUL-20 05:03                       31-JUL-20 05:03    YES        NO          NO       YES
SYS        EVOLVE-LOAD-FROM-CURSOR-CACHE    30-MAY-20 11:50    31-JUL-20 06:09                       31-JUL-20 06:09    YES        NO          NO       YES
SYS        EVOLVE-LOAD-FROM-AWR             30-MAY-20 11:50    31-JUL-20 06:09                       31-JUL-20 06:09    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     30-MAY-20 16:08    31-JUL-20 07:15                       31-JUL-20 07:15    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     30-MAY-20 19:10    30-MAY-20 19:30    30-MAY-20 19:30    30-MAY-20 19:29    YES        YES         NO       YES
SYS        AUTO-CAPTURE                     30-MAY-20 19:30    31-JUL-20 08:21                       31-JUL-20 08:21    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     30-MAY-20 23:32    31-JUL-20 08:21                       31-JUL-20 08:21    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     31-MAY-20 03:14    31-JUL-20 08:21                       31-JUL-20 08:21    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     31-MAY-20 04:14    31-JUL-20 08:21                       31-JUL-20 08:21    YES        NO          NO       YES
SYS        EVOLVE-LOAD-FROM-AWR             31-MAY-20 13:04    31-JUL-20 23:43                       31-JUL-20 23:43    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     31-MAY-20 13:19    31-JUL-20 23:43                       31-JUL-20 23:43    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     31-MAY-20 13:39    11-JUL-20 04:35    11-JUL-20 04:35    31-MAY-20 14:09    YES        YES         NO       YES
SYS        AUTO-CAPTURE                     31-MAY-20 18:01    10-AUG-20 22:05                       10-AUG-20 22:05    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     31-MAY-20 22:44    10-AUG-20 22:05                       10-AUG-20 22:05    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     01-JUN-20 06:48    10-AUG-20 22:05                       10-AUG-20 22:05    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     01-JUN-20 07:09    10-AUG-20 22:05                       10-AUG-20 22:05    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     02-JUN-20 05:22    02-JUN-20 05:49                       02-JUN-20 05:49    YES        YES         NO       YES
SYS        AUTO-CAPTURE                     02-JUN-20 21:52    10-AUG-20 22:06                       10-AUG-20 22:06    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     03-JUN-20 08:20    23-AUG-20 20:45    23-AUG-20 20:45    03-JUN-20 08:49    YES        YES         NO       YES
SYS        AUTO-CAPTURE                     04-JUN-20 01:34    10-AUG-20 22:06                       10-AUG-20 22:06    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     05-JUN-20 21:43    10-AUG-20 22:06                       10-AUG-20 22:06    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     14-JUN-20 06:01    18-AUG-20 23:22    18-AUG-20 23:22    14-JUN-20 10:52    YES        YES         NO       YES
SYS        AUTO-CAPTURE                     14-JUN-20 06:21    13-AUG-20 22:35                       13-AUG-20 22:35    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     27-JUN-20 16:43    27-AUG-20 22:11                       27-AUG-20 22:11    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     28-JUN-20 02:09    28-JUN-20 06:52    28-JUN-20 06:52    28-JUN-20 06:41    YES        YES         NO       YES
SYS        AUTO-CAPTURE                     28-JUN-20 08:13    29-JUL-20 23:24                       29-JUL-20 23:24    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     29-JUN-20 03:05    30-JUL-20 22:28                       30-JUL-20 22:28    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     29-JUN-20 10:50    30-JUL-20 23:33                       30-JUL-20 23:33    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     30-JUN-20 13:28    11-JUL-20 05:15    11-JUL-20 05:15    30-JUN-20 23:09    YES        YES         NO       YES
SYS        AUTO-CAPTURE                     01-JUL-20 14:04    31-JUL-20 22:37                       31-JUL-20 22:37    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     11-JUL-20 06:36    10-AUG-20 22:07                       10-AUG-20 22:07    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     11-JUL-20 14:00    11-AUG-20 22:06                       11-AUG-20 22:06    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     12-JUL-20 00:47    11-AUG-20 22:06                       11-AUG-20 22:06    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     12-JUL-20 01:47    11-AUG-20 22:06                       11-AUG-20 22:06    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     12-JUL-20 09:52    13-AUG-20 22:34                       13-AUG-20 22:34    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     13-JUL-20 04:03    13-AUG-20 22:34                       13-AUG-20 22:34    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     18-JUL-20 12:15    17-AUG-20 22:15                       17-AUG-20 22:15    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     18-JUL-20 23:43    18-AUG-20 22:44                       18-AUG-20 22:44    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     24-JUL-20 01:38    23-AUG-20 06:24                       23-AUG-20 06:24    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     24-JUL-20 06:42    24-AUG-20 22:09                       24-AUG-20 22:09    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     30-JUL-20 02:21    30-JUL-20 02:41                       30-JUL-20 02:41    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     07-AUG-20 18:33    07-AUG-20 19:16                       07-AUG-20 19:16    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     13-AUG-20 22:52    14-AUG-20 22:10                       14-AUG-20 22:10    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     14-AUG-20 05:16    14-AUG-20 22:10                       14-AUG-20 22:10    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     14-AUG-20 15:42    14-AUG-20 22:10                       14-AUG-20 22:10    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     18-AUG-20 23:22    19-AUG-20 22:11                       19-AUG-20 22:11    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     27-AUG-20 00:07    27-AUG-20 22:11                       27-AUG-20 22:11    YES        NO          NO       YES

Ok, there was a huge SQL Plan Management activity here. All starts on 30-MAY-20 and this is when my ATP database has been upgraded to 19c. 19c comes with two new features. First new feature is “Automatic SQL tuning set” which gathers a lot of statements in SYS_AUTO_STS as we have seen above. The other feature, “Automatic SQL Plan Management”, or “Automatic Resolution of Plan Regressions” look into AWR for resource intensive statements with several execution plans. Then it create SQL Plan BAselines for them, loading all alternative plans that are found in AWR, SQL Tuning Sets, and Cursor Cache. And this is why I have EVOLVE-LOAD-FROM-AWR and EVOLVE-LOAD-FROM-CURSOR-CACHE loaded on 30-MAY-20 11:50
This feature is explained in Nigel Bayliss’ blog post.

So, here are the settings in the Autonomous Database: ALTERNATE_PLAN_BASELINE=AUTO, which enables Auto SPM, and ALTERNATE_PLAN_SOURCE=AUTO, which means AUTOMATIC_WORKLOAD_REPOSITORY+CURSOR_CACHE+SQL_TUNING_SET.


DEMO@atp1_tp> select parameter_name, parameter_value from   dba_advisor_parameters
              where  task_name = 'SYS_AUTO_SPM_EVOLVE_TASK' and parameter_value <> 'UNUSED' order by 1;

             PARAMETER_NAME    PARAMETER_VALUE
___________________________ __________________
ACCEPT_PLANS                TRUE
ALTERNATE_PLAN_BASELINE     AUTO
ALTERNATE_PLAN_LIMIT        UNLIMITED
ALTERNATE_PLAN_SOURCE       AUTO
DAYS_TO_EXPIRE              UNLIMITED
DEFAULT_EXECUTION_TYPE      SPM EVOLVE
EXECUTION_DAYS_TO_EXPIRE    30
JOURNALING                  INFORMATION
MODE                        COMPREHENSIVE
TARGET_OBJECTS              1
TIME_LIMIT                  3600
_SPM_VERIFY                 TRUE

This query (and the explanations) comes from Mike Dietrich’s blog post, which you should read.
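
These parameters can be changed with DBMS_SPM.SET_EVOLVE_TASK_PARAMETER. A sketch, assuming you want to set ALTERNATE_PLAN_BASELINE back to a non-automatic value (the value 'EXISTING' is my assumption; check the documentation for the valid values):

SQL> -- change one parameter of the automatic SPM evolve task
SQL> exec DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(task_name => 'SYS_AUTO_SPM_EVOLVE_TASK', parameter => 'ALTERNATE_PLAN_BASELINE', value => 'EXISTING');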

So, I can see many plans for this query, some accepted and some not. The Auto Evolve advisor task should help to see which plan is ok or not, but it seems that it cannot for this statement:


SELECT DBMS_SPM.report_auto_evolve_task FROM   dual;
...

---------------------------------------------------------------------------------------------
 Object ID          : 848087
 Test Plan Name     : SQL_PLAN_gf2c99a3zrzsgd6c09b5e
 Base Plan Name     : Cost-based plan
 SQL Handle         : SQL_f709894a87fbff0f
 Parsing Schema     : SYS
 Test Plan Creator  : SYS
 SQL Text           : SELECT /*+dynamic_sampling(11) NO_XML_QUERY_REWRITE */
...

FINDINGS SECTION
---------------------------------------------------------------------------------------------

Findings (1):
-----------------------------
 1. This plan was skipped because either the database is not fully open or the
    SQL statement is ineligible for SQL Plan Management.

I dropped all those SQL Plan Baselines:


set serveroutput on
exec dbms_output.put_line ( DBMS_SPM.DROP_SQL_PLAN_BASELINE(sql_handle => 'SQL_f709894a87fbff0f') );

but the query still takes long. The problem is not the Auto SPM job, which just tries to find a solution.

It seems that the Auto Index query spends time on this HASH GROUP BY because of the following:


     SELECT
...
     FROM
     (SELECT SQL_ID, PLAN_HASH_VALUE,MIN(ELAPSED_TIME) ELAPSED_TIME,MIN(EXECUTIONS) EXECUTIONS,MIN(OPTIMIZER_ENV) CE,
             MAX(EXISTSNODE(XMLTYPE(OTHER_XML),
                            '/other_xml/info[@type = "has_user_tab"]')) USER_TAB
       FROM
...       
     GROUP BY SQL_ID, PLAN_HASH_VALUE
     )
     WHERE USER_TAB > 0

This is the Auto Indexing (AI) job looking at many statements with their OTHER_XML plan information and doing a GROUP BY on that. There is probably no optimal plan for this query.

Then why do I have so many statements in the auto-captured SQL Tuning Set? An application should have a limited set of statements. In OLTP, with many executions for different values, we should use bind variables to limit the set of statements. In a DWH, ad-hoc queries should not have so many executions.

When looking at the statements not using bind variables, FORCE_MATCHING_SIGNATURE is the right dimension on which to aggregate them, as there are too many SQL_IDs:



DEMO@atp1_tp> select force_matching_signature from dba_sqlset_statements group by force_matching_signature order by count(*) desc fetch first 2 rows only;

     FORCE_MATCHING_SIGNATURE
_____________________________
    7,756,258,419,218,828,704
   15,893,216,616,221,909,352

DEMO@atp1_tp> select sql_text from dba_sqlset_statements where force_matching_signature=15893216616221909352 fetch first 3 rows only;
                                                     SQL_TEXT
_____________________________________________________________
select FLAGS from SYS_FBA_TRACKEDTABLES where OBJ# = 50867
select FLAGS from SYS_FBA_TRACKEDTABLES where OBJ# = 51039
select FLAGS from SYS_FBA_TRACKEDTABLES where OBJ# = 51048

DEMO@atp1_tp> select sql_text from dba_sqlset_statements where force_matching_signature=7756258419218828704 fetch first 3 rows only;
                                                                                   SQL_TEXT
___________________________________________________________________________________________
select count(FA#) from SYS_FBA_TRACKEDTABLES where OBJ# = 51039 and bitand(FLAGS, 128)=0
select count(FA#) from SYS_FBA_TRACKEDTABLES where OBJ# = 51049 and bitand(FLAGS, 128)=0
select count(FA#) from SYS_FBA_TRACKEDTABLES where OBJ# = 51047 and bitand(FLAGS, 128)=0

I have two FORCE_MATCHING_SIGNATURE values that account for the most rows in DBA_SQLSET_STATEMENTS, and looking at a sample of them confirms that they don’t use bind variables. They are Oracle internal queries, and because I have the FORCE_MATCHING_SIGNATURE, I put it into a Google search to see if others have already seen the issue (Oracle Support notes are also indexed by Google).

The first result is a Connor McDonald blog post from 2016, taking this very example to show how to hunt for SQL that should use bind variables:
https://connor-mcdonald.com/2016/05/30/sql-statements-using-literals/
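
If you want to hunt for such statements in your own database, here is a sketch of this kind of query against V$SQL, counting how many different SQL_IDs share the same FORCE_MATCHING_SIGNATURE (a literal-only statement shows up with many SQL_IDs for one signature):

select force_matching_signature, count(distinct sql_id) versions, min(sql_text) sample_sql_text
  from v$sql
 where force_matching_signature > 0
 group by force_matching_signature
having count(distinct sql_id) > 10
 order by versions desc
 fetch first 10 rows only;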

There is also a hit on My Oracle Support for those queries:
5931756 QUERIES AGAINST SYS_FBA_TRACKEDTABLES DON’T USE BIND VARIABLES, which is supposed to be fixed in 19c, but obviously it is not. When I look at the patch, I see “where OBJ# = :1” in ktfa.o:


$ strings 15931756/files/lib/libserver18.a/ktfa.o | grep "SYS_FBA_TRACKEDTABLES where OBJ# = "
select count(FA#) from SYS_FBA_TRACKEDTABLES where OBJ# = :1 and bitand(FLAGS, :2)=0
select count(FA#) from SYS_FBA_TRACKEDTABLES where OBJ# = :1
select FLAGS from SYS_FBA_TRACKEDTABLES where OBJ# = :1

This uses bind variables.

But I checked in 19.6 and 20.3:


[oracle@cloud libserver]$ strings /u01/app/oracle/product/20.0.0/dbhome_1/bin/oracle | grep "SYS_FBA_TRACKEDTABLES where OBJ# = "
select count(FA#) from SYS_FBA_TRACKEDTABLES where OBJ# = %d and bitand(FLAGS, %d)=0
select count(FA#) from SYS_FBA_TRACKEDTABLES where OBJ# = %d
select FLAGS from SYS_FBA_TRACKEDTABLES where OBJ# = %d

This is string substitution, not bind variables.

Ok, as usual, I went too far from my initial goal, which was just to share some screenshots of looking at the Performance Hub. With the Autonomous Database we don’t have all the tools we are used to. On a self-managed database I would have tkprof’ed this job that runs every 15 minutes. Different tools, but still possible. In this example I drilled down into the problematic query’s execution plan, found that a system table was too large, got the number of the bug that was supposed to fix it, and verified that it didn’t.
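
For reference, a sketch of the tkprof approach on a self-managed database: enable SQL trace for the job's service and module with DBMS_MONITOR, then format the trace file with tkprof. The service and module names below are assumptions; take the real ones from ASH:

-- enable tracing for sessions matching this service/module (names are assumptions)
exec dbms_monitor.serv_mod_act_trace_enable(service_name => 'SYS$USERS', module_name => 'DBMS_SCHEDULER', waits => true, binds => true);
-- let the 15-minute job run, then stop tracing:
exec dbms_monitor.serv_mod_act_trace_disable(service_name => 'SYS$USERS', module_name => 'DBMS_SCHEDULER');
-- format the resulting trace file (hypothetical file name), sorted by elapsed time:
$ tkprof ORCL_ora_12345.trc job_trace.txt sort=exeela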

If you want to drill down by yourself, I’m sharing one AWR report easy to download from the Performance Hub:
https://www.dropbox.com/s/vp8ndas3pcqjfuw/troubleshooting-autonomous-database-AWRReport.html
and PerfHub report gathered with dbms_perf.report_perfhub: https://www.dropbox.com/s/yup5m7ihlduqgbn/troubleshooting-autonomous-database-perfhub.html

Comments and questions welcome. If you are interested in an Oracle performance tuning workshop, I can do it in our office, at customer premises, or remotely (Teams, TeamViewer, or any tool you want). Just request it on: https://www.dbi-services.com/trainings/oracle-performance-tuning-training/#onsite. We can deliver a 3-day workshop on optimizer concepts, with hands-on labs to learn the troubleshooting method and tools. Or we can do some coaching looking at your environment on a shared screen: your database, your tools.

Cet article Troubleshooting performance on Autonomous Database est apparu en premier sur Blog dbi services.

Oracle DML (DELETE) and the Index Clustering Factor

$
0
0

As a consultant working for customers, I’m often in the situation that I have an answer to a problem, but the recommended solution cannot be implemented due to some restrictions. E.g. the recommendation would be to adjust the code, but that is not feasible. In such cases you are forced to try to help without code changes.

Recently I was confronted with the following issue: A process takes too long. Digging deeper I could see that most of the time was spent on this SQL:

DELETE FROM COM_TAB WHERE 1=1 

The execution plan looked as follows:

--------------------------------------------------------------------------------------------
| Id  | Operation             | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT      |                    |       |       | 16126 (100)|          |
|   1 |  DELETE               | COM_TAB            |       |       |            |          |
|   2 |   INDEX FAST FULL SCAN| PK_COM_TAB         |    10M|   306M| 16126   (1)| 00:00:01 |
--------------------------------------------------------------------------------------------

My initial reaction was of course to say that deleting all data in a table with a delete statement is not a good idea. It is better to turn the DML into DDL and use e.g. “truncate table”. All options for quickly deleting lots of rows in a table are provided by Chris Saxon in his blog here.

In this case changing the SQL was not possible, so what are the alternatives?

As I got involved a long time after the issue happened, I checked the ASH data in the AWR history:

SQL> select SQL_EXEC_START, session_state, event, count(*)*10 secs_in_state FROM dba_hist_active_sess_history where sql_id='53gwjb0gjn1np'
  2  group by sql_exec_start, session_state, event order by 1,4 desc;

SQL_EXEC_START      SESSION EVENT                                                            SECS_IN_STATE
------------------- ------- ---------------------------------------------------------------- -------------
19.06.2020 10:13:02 WAITING free buffer waits                                                          560
19.06.2020 10:13:02 WAITING enq: CR - block range reuse ckpt                                           370
19.06.2020 10:13:02 ON CPU                                                                             130
19.06.2020 10:13:02 WAITING reliable message                                                            10
19.06.2020 10:56:01 WAITING enq: CR - block range reuse ckpt                                           550
19.06.2020 10:56:01 WAITING free buffer waits                                                          230
19.06.2020 10:56:01 ON CPU                                                                             140
19.06.2020 10:56:01 WAITING log file switch (checkpoint incomplete)                                     60
19.06.2020 11:39:38 WAITING enq: CR - block range reuse ckpt                                           610
19.06.2020 11:39:38 WAITING free buffer waits                                                          180
19.06.2020 11:39:38 ON CPU                                                                             170
19.06.2020 11:39:38 WAITING log file switch (checkpoint incomplete)                                     80
19.06.2020 11:39:38 WAITING write complete waits                                                        40
19.06.2020 12:23:47 WAITING enq: CR - block range reuse ckpt                                           450
19.06.2020 12:23:47 WAITING free buffer waits                                                          280
19.06.2020 12:23:47 ON CPU                                                                             150
19.06.2020 12:23:47 WAITING log file switch (checkpoint incomplete)                                     90
19.06.2020 12:23:47 WAITING write complete waits                                                        30
19.06.2020 12:23:47 WAITING log buffer space                                                            10

So obviously the DBWR had a problem writing dirty blocks to disk and freeing space in the cache. When the issue above happened, the following parameter was set:

filesystemio_options='ASYNCH'

Changing it to

filesystemio_options='SETALL'

improved the situation a lot, but caused waits on “db file sequential read”.

I.e. with filesystemio_options=’ASYNCH’ we do cache lots of repeatedly touched blocks in the filesystem cache, but suffer from slower (non-direct) writes by the DB-writer. With filesystemio_options=’SETALL’ we gain by doing direct IO by the DB-writer, but have to read repeatedly touched blocks from disk more often.
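
For completeness: filesystemio_options is a static parameter, so a change like the following sketch only takes effect after an instance restart:

alter system set filesystemio_options='SETALL' scope=spfile;
-- an instance restart is needed for the change to take effect
shutdown immediate
startup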

The table just had 1 index, the index for the primary key.

So what to do here?

Several recommendations came to mind:

– With filesystemio_options=’ASYNCH’: Increase the redologs to not do a checkpoint while the statement is running
– With filesystemio_options=’SETALL’: Increase the buffer cache to keep blocks in memory for longer and avoid single block IOs

The most interesting question was: why does the optimizer decide to go over the index here? With a bad clustering factor it would make more sense to do a full table scan than to use the index. And this has actually been validated with a hint:

DELETE /*+ FULL(COM_TAB) */ FROM COM_TAB WHERE 1=1

improved the situation.

An improvement should also be achievable by using an Index Organized Table here, as we only have a primary key index on the table: we would just wipe out the data in the index and would not have to visit the same table block repeatedly. The best, however, is to create a testcase and reproduce the issue. Here’s what I did:

I created 2 tables

TDEL_GOOD_CF
TDEL_BAD_CF

which both have more blocks than I have in the DB cache. As the names suggest, one table has an index with a good clustering factor and the other one an index with a bad clustering factor:

SQL> select table_name, blocks, num_rows from tabs where table_name like 'TDEL_%';

TABLE_NAME                           BLOCKS   NUM_ROWS
-------------------------------- ---------- ----------
TDEL_BAD_CF                          249280     544040
TDEL_GOOD_CF                         248063     544040

Remark: To use lots of blocks I stored only 2 rows per block by using a high PCTFREE.
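
For reference, a minimal sketch of how such tables can be built (my assumption of the setup, not the original script): insert the rows in key order for the good clustering factor and in random order for the bad one, with a high PCTFREE to store only a few rows per block:

-- good clustering factor: rows stored in primary key order
-- (PCTFREE 90 and a ~300 byte row leave about 2 rows per 8k block)
create table TDEL_GOOD_CF pctfree 90 as
  select rownum id, rpad('x',300,'x') pad
  from dual connect by level <= 544040;
alter table TDEL_GOOD_CF add constraint PK_TDEL_GOOD_CF primary key (id);

-- bad clustering factor: same rows stored in random key order
create table TDEL_BAD_CF pctfree 90 as
  select id, pad from TDEL_GOOD_CF order by dbms_random.value;
alter table TDEL_BAD_CF add constraint PK_TDEL_BAD_CF primary key (id);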

SQL> select index_name, leaf_blocks, clustering_factor from ind where table_name like 'TDEL_%';

INDEX_NAME                       LEAF_BLOCKS CLUSTERING_FACTOR
-------------------------------- ----------- -----------------
PK_TDEL_BAD_CF                          1135            532313
PK_TDEL_GOOD_CF                         1135            247906

The database buffer cache was much smaller than the number of blocks in each table:

SQL> select bytes/8192 BLOCKS_IN_BUFFER_CACHE from v$sgastat where name='buffer_cache';

BLOCKS_IN_BUFFER_CACHE
----------------------
                 77824

Test with filesystemio_options=’SETALL’:

SQL> set autotrace trace timing on
SQL> delete from TDEL_BAD_CF where 1=1;

544040 rows deleted.

Elapsed: 00:01:08.76

Execution Plan
----------------------------------------------------------
Plan hash value: 2076794500

----------------------------------------------------------------------------------------
| Id  | Operation             | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT      |                |   544K|  2656K|   315	 (2)| 00:00:01 |
|   1 |  DELETE               | TDEL_BAD_CF    |       |       |            |          |
|   2 |   INDEX FAST FULL SCAN| PK_TDEL_BAD_CF |   544K|  2656K|   315	 (2)| 00:00:01 |
----------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
         88  recursive calls
    2500712  db block gets
       1267  consistent gets
     477213  physical reads
  388185816  redo size
        195  bytes sent via SQL*Net to client
        384  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          7  sorts (memory)
          0  sorts (disk)
     544040  rows processed

Please consider the 477213 physical reads (blocks read), i.e. almost 2 times the number of blocks in the table.
The ASH-data looked as follows:

select sql_id, sql_plan_line_id, session_state, event, p1, count(*)
from v$active_session_history
where sql_id='ck5fw78yqh93g'
group by sql_id,sql_plan_line_id, session_state, event, p1
order by 6;


SQL_ID	      SQL_PLAN_LINE_ID SESSION EVENT                                    P1   COUNT(*)
------------- ---------------- ------- -------------------------------- ---------- ----------
ck5fw78yqh93g                1 WAITING db file scattered read                    7          1
ck5fw78yqh93g                1 ON CPU                                            7         11
ck5fw78yqh93g                1 WAITING db file sequential read                   7         56

P1 is the file ID when doing IO. File ID 7 is the USERS tablespace where my table and index are stored.

So obviously Oracle didn’t consider the clustering factor when building the plan with the index. The cost of 315 is just the cost for the INDEX FAST FULL SCAN:

Fast Full Index Scan Cost ~ ((LEAF_BLOCKS/MBRC) x MREADTIM)/ SREADTIM + CPU

REMARK: I do not have system statistics gathered.

LEAF_BLOCKS=1135
MBRC=8
MREADTIM=26ms
SREADTIM=12ms

Fast Full Index Scan Cost ~ ((1135/8) x 26) / 12 + CPU = 307 + CPU = 315

The costs for accessing the table are not considered at all. I.e. going through the index and from there to the table to delete the rows results in visiting the same table block several times.

Here the test with the table having a better clustering factor on the index:

SQL> delete from TDEL_GOOD_CF where 1=1;

544040 rows deleted.

Elapsed: 00:00:30.48

Execution Plan
----------------------------------------------------------
Plan hash value: 4284904063

-----------------------------------------------------------------------------------------
| Id  | Operation             | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT      |                 |   544K|  2656K|   315   (2)| 00:00:01 |
|   1 |  DELETE               | TDEL_GOOD_CF    |       |       |            |          |
|   2 |   INDEX FAST FULL SCAN| PK_TDEL_GOOD_CF |   544K|  2656K|   315   (2)| 00:00:01 |
-----------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
        115  recursive calls
    2505121  db block gets
       1311  consistent gets
     249812  physical reads
  411603188  redo size
        195  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          9  sorts (memory)
          0  sorts (disk)
     544040  rows processed


select sql_id, sql_plan_line_id, session_state, event, p1, count(*)
from v$active_session_history
where sql_id='0nqk3fmcwrrzm'
group by sql_id,sql_plan_line_id, session_state, event, p1
order by 6;

SQL_ID	      SQL_PLAN_LINE_ID SESSION EVENT                                    P1   COUNT(*)
------------- ---------------- ------- -------------------------------- ---------- ----------
0nqk3fmcwrrzm                1 ON CPU                                            7          3
0nqk3fmcwrrzm                1 WAITING db file sequential read                   7         26

I.e. it did run much faster with the better clustering factor and only had to do half the physical reads. 

Here the test with the full table scan on the table with the index having a bad clustering factor:

cbleile@orcl@orcl> delete /*+ FULL(T) */ from TDEL_BAD_CF T where 1=1;

544040 rows deleted.

Elapsed: 00:00:08.39

Execution Plan
----------------------------------------------------------
Plan hash value: 4058645893

----------------------------------------------------------------------------------
| Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | DELETE STATEMENT   |             |   544K|  2656K| 67670   (1)| 00:00:01 |
|   1 |  DELETE            | TDEL_BAD_CF |       |       |            |          |
|   2 |   TABLE ACCESS FULL| TDEL_BAD_CF |   544K|  2656K| 67670   (1)| 00:00:01 |
----------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
        161  recursive calls
    1940687  db block gets
     248764  consistent gets
     252879  physical reads
  269882276  redo size
        195  bytes sent via SQL*Net to client
        401  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          7  sorts (memory)
          0  sorts (disk)
     544040  rows processed

select sql_id, sql_plan_line_id, session_state, event, p1, count(*)
from v$active_session_history
where sql_id='4272c7xv86d0k'
group by sql_id,sql_plan_line_id, session_state, event, p1
order by 6;

SQL_ID	      SQL_PLAN_LINE_ID SESSION EVENT                                    P1   COUNT(*)
------------- ---------------- ------- -------------------------------- ---------- ----------
4272c7xv86d0k                2 ON CPU                                            7          2
4272c7xv86d0k                1 ON CPU                                            7          3
4272c7xv86d0k                2 WAITING db file scattered read                    7          3

I.e. if Oracle considered the clustering factor here and did the delete with a full table scan, it would obviously run much faster.

Last test with an IOT:

cbleile@orcl@orcl> delete from TDEL_IOT where 1=1;

544040 rows deleted.

Elapsed: 00:00:06.90

Execution Plan
----------------------------------------------------------
Plan hash value: 515699456

-------------------------------------------------------------------------------------------
| Id  | Operation             | Name      | Rows  | Bytes | Cost (%CPU)| Time             |
-------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT      |                   |   544K|  2656K| 66065   (1)| 00:00:01 |
|   1 |  DELETE               | TDEL_IOT          |       |       |            |          |
|   2 |   INDEX FAST FULL SCAN| SYS_IOT_TOP_77456 |   544K|  2656K| 66065   (1)| 00:00:01 |
-------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
        144  recursive calls
     521556  db block gets
     243200  consistent gets
     243732  physical reads
  241686612  redo size
        194  bytes sent via SQL*Net to client
        381  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          7  sorts (memory)
          0  sorts (disk)
     544040  rows processed

select sql_id, sql_plan_line_id, session_state, event, p1, count(*)
from v$active_session_history
where sql_id='cf6nj64yybkpq'
group by sql_id,sql_plan_line_id, session_state, event, p1
order by 6;

SQL_ID	      SQL_PLAN_LINE_ID SESSION EVENT                                    P1   COUNT(*)
------------- ---------------- ------- -------------------------------- ---------- ----------
cf6nj64yybkpq                2 ON CPU                                            7          1
cf6nj64yybkpq                1 ON CPU                                            7          1
cf6nj64yybkpq                2 WAITING db file scattered read                    7          3

As we were not allowed to adjust the code or replace the table with an IOT, the measures to improve this situation were to:
– set filesystemio_options=’SETALL’
REMARK: That change needs thorough testing as it may have negative effects on other SQL which gains from the filesystem cache.
– add a hint with a SQL patch to force a full table scan

REMARK: Creating a SQL-Patch to add the hint

FULL(TDEL_BAD_CF)

to the statement was not easily possible, because Oracle does not consider this hint in DML:

var rv varchar2(32);
declare
   v_sql CLOB;
begin
   select sql_text into v_sql from dba_hist_sqltext where sql_id='ck5fw78yqh93g';
   :rv:=dbms_sqldiag.create_sql_patch(
             sql_text  => v_sql,
             hint_text=>'FULL(TDEL_BAD_CF)',
             name=>'force_fts_when_del_all',
             description=>'force fts when del all rows');
end;
/
print rv

delete from TDEL_BAD_CF where 1=1;

544040 rows deleted.

Elapsed: 00:01:01.79

Execution Plan
----------------------------------------------------------
Plan hash value: 2076794500

----------------------------------------------------------------------------------------
| Id  | Operation             | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT      |                |   544K|  2656K|   315	 (2)| 00:00:01 |
|   1 |  DELETE               | TDEL_BAD_CF    |       |       |            |          |
|   2 |   INDEX FAST FULL SCAN| PK_TDEL_BAD_CF |   544K|  2656K|   315	 (2)| 00:00:01 |
----------------------------------------------------------------------------------------

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (N - Unresolved (1))
---------------------------------------------------------------------------

   1 -	DEL$1
	 N -  FULL(TDEL_BAD_CF)

Note
-----
   - SQL patch "force_fts_when_del_all" used for this statement

I.e. according to the Note, the SQL patch was used, but the Hint Report showed the hint to be “Unresolved”.

So I had to use the full hint specification:

FULL(@"DEL$1" "TDEL_BAD_CF"@"DEL$1")

To get this full specification you can do an explain plan of the hinted statement and look at the outline data:

SQL> explain plan for
  2  delete /*+ FULL(TDEL_BAD_CF) */ from TDEL_BAD_CF where 1=1;

Explained.

SQL> select * from table(dbms_xplan.display(format=>'+OUTLINE'));

...

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      FULL(@"DEL$1" "TDEL_BAD_CF"@"DEL$1")
      OUTLINE_LEAF(@"DEL$1")
      ALL_ROWS
      DB_VERSION('19.1.0')
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

So here’s the script to create the SQL Patch correctly:

var rv varchar2(32);
declare
   v_sql CLOB;
begin
   select sql_text into v_sql from dba_hist_sqltext where sql_id='ck5fw78yqh93g';
   :rv:=dbms_sqldiag.create_sql_patch(
             sql_text  => v_sql,
             hint_text=>'FULL(@"DEL$1" "TDEL_BAD_CF"@"DEL$1")',
             name=>'force_fts_when_del_all',
             description=>'force fts when del all rows');
end;
/
print rv

SQL> delete from TDEL_BAD_CF where 1=1;

544040 rows deleted.

Elapsed: 00:00:06.57

Execution Plan
----------------------------------------------------------
Plan hash value: 4058645893

----------------------------------------------------------------------------------
| Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time	 |
----------------------------------------------------------------------------------
|   0 | DELETE STATEMENT   |             |   544K|  2656K| 67670   (1)| 00:00:01 |
|   1 |  DELETE            | TDEL_BAD_CF |       |       |            |          |
|   2 |   TABLE ACCESS FULL| TDEL_BAD_CF |   544K|  2656K| 67670   (1)| 00:00:01 |
----------------------------------------------------------------------------------

Note
-----
   - SQL patch "force_fts_when_del_all" used for this statement

Statistics
----------------------------------------------------------
        207  recursive calls
    1940517  db block gets
     248759  consistent gets
     252817  physical reads
  272061432  redo size
        195  bytes sent via SQL*Net to client
        384  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
         16  sorts (memory)
          0  sorts (disk)
     544040  rows processed

Summary: This is a specific corner case where the Oracle optimizer should consider the clustering factor in DML when calculating plan costs, but it doesn’t. The workaround in this case was to hint the statement, or to add a SQL patch to hint the statement without modifying the code.
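
Note that such a SQL patch can be removed again with DBMS_SQLDIAG if the forced full scan is no longer wanted:

exec dbms_sqldiag.drop_sql_patch(name => 'force_fts_when_del_all');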

Cet article Oracle DML (DELETE) and the Index Clustering Factor est apparu en premier sur Blog dbi services.

How to synchronize the appliance registry metadata on an ODA?

$
0
0

Database administration on a Bare Metal ODA is done as the root user by running odacli commands:

  • odacli create-database to create a database
  • odacli upgrade-database to upgrade a database between major releases
  • odacli move-database to move databases from one Oracle home to another of the same database version
  • odacli update-dbhome to update a specific RDBMS home to the latest patch bundle version
  • etc…

The odacli commands perform the required operations and, at the end, update the Apache Derby DB (the ODA registry metadata). odacli commands like odacli list-dbhomes or odacli list-databases use the Derby DB information to display the requested information.

But what happens if the odacli command to upgrade or update your database fails? How do you synchronize the appliance registry metadata on an ODA?

I have been running several customer projects where the odacli commands to upgrade or update databases failed before completion. As a consequence, I had to complete the upgrade manually and, unfortunately, also update the Derby DB manually in order to have coherent metadata information.

I have already shared a few blog posts on that subject:
https://blog.dbi-services.com/connecting-to-oda-derby-database/
https://blog.dbi-services.com/moving-oracle-database-to-new-home-on-oda/

Manually updating the Derby DB is a sensitive operation, and you should only do it with Oracle Support guidance and instructions.

But, GOOD NEWS! If you are running an ODA version 18.7 or later, there is a new command available: odacli update-registry. I could use it successfully recently, and through this blog I would like to share it with you.

odacli update-registry command

This command updates the registry of the components after you manually apply patches or run a manual database upgrade. The -n option defines the component you would like to update in the Derby DB.
See the ODA 19.8 documentation for more details.

Real customer case example

I had to upgrade DB1 from 11.2.0.4 to 12.1.0.2. The odacli upgrade-database command failed, and I had to complete the upgrade manually. Afterwards, I had to synchronize the registry metadata DB.

List dbhomes

The DB homes on the ODA were the following:
[root@ODASRV log]# odacli list-dbhomes
 
ID Name DB Version Home Location Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
d6df9457-e4cd-4c39-b3cb-8d03be3c4598 OraDB11204_home1 11.2.0.4.190115 /u01/app/oracle/product/11.2.0.4/dbhome_1 Configured
9d2d92d0-3b98-42ac-9f39-9bd6deeb2e16 OraDB19000_home1 19.7.0.0.200414 /u01/app/oracle/product/19.0.0.0/dbhome_1 Configured
73847823-ae83-4bf0-a630-f8884cf4387a OraDB12102_home1 12.1.0.2.200414 /u01/app/oracle/product/12.1.0.2/dbhome_1 Configured

Registry metadata after manual upgrade

The registry metadata after the manual upgrade was the following:
[root@ODASRV log]# odacli list-databases
 
ID DB Name DB Type DB Version CDB Class Shape Storage Status DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
d897e7d6-9e2d-45e4-a0d7-a1e232d47f16 DB1 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
67abbd2e-f8e1-42da-bf8d-2f0a8eb403dd DB2 Si 19.7.0.0.200414 false Oltp Odb1 Acfs Configured 9d2d92d0-3b98-42ac-9f39-9bd6deeb2e16
c51f7361-ee99-42ed-9126-86b7fc281981 DB3 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
472c04fe-533d-46af-aeab-ab5271979d98 DB4 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
8dd9b1ea-37fd-408f-99ab-eb32e2c2ed91 DB5 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
2f5856df-e717-404a-b7b0-ca8c82b2f45e DB6 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
5797b6b2-e3fc-4182-8db3-671132dd43a7 DB7 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
3c67a04d-4e6b-4b43-8b56-94284994b25d DB8 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
a1b2500d-728e-4cbe-8425-f8a85826c422 DB9 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
7dfadc59-0c67-4b42-86e1-0140f39cf4d3 DB10 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598

And as we can see, DB1 still showed up as an 11.2.0.4 database linked to the 11.2.0.4 home, although it had been upgraded to 12.1.0.2:
oracle@ODASRV:/home/oracle/mwagner/upgrade_TZ/ [DB1] DB1
********* dbi services Ltd. *********
STATUS : OPEN
DB_UNIQUE_NAME : DB1_RZA
OPEN_MODE : READ WRITE
LOG_MODE : ARCHIVELOG
DATABASE_ROLE : PRIMARY
FLASHBACK_ON : NO
FORCE_LOGGING : YES
VERSION : 12.1.0.2.0
CDB Enabled : NO
*************************************

Updating registry metadata: odacli update-registry -n db

I tried executing the odacli update-registry command to get the registry metadata updated:

[root@ODASRV log]# odacli update-registry -n db
 
Job details
----------------------------------------------------------------
ID: a7270d8d-c8d2-48be-a41b-150441559791
Description: Discover Components : db
Status: Created
Created: August 10, 2020 1:55:31 PM CEST
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
 
[root@ODASRV log]# odacli describe-job -i a7270d8d-c8d2-48be-a41b-150441559791
 
Job details
----------------------------------------------------------------
ID: a7270d8d-c8d2-48be-a41b-150441559791
Description: Discover Components : db
Status: Success
Created: August 10, 2020 1:55:31 PM CEST
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Discover DBHome August 10, 2020 1:55:31 PM CEST August 10, 2020 1:55:31 PM CEST Success
 
[root@ODASRV log]# odacli list-databases
 
ID DB Name DB Type DB Version CDB Class Shape Storage Status DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
d897e7d6-9e2d-45e4-a0d7-a1e232d47f16 DB1 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
67abbd2e-f8e1-42da-bf8d-2f0a8eb403dd DB2 Si 19.7.0.0.200414 false Oltp Odb1 Acfs Configured 9d2d92d0-3b98-42ac-9f39-9bd6deeb2e16
c51f7361-ee99-42ed-9126-86b7fc281981 DB3 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
472c04fe-533d-46af-aeab-ab5271979d98 DB4 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
8dd9b1ea-37fd-408f-99ab-eb32e2c2ed91 DB5 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
2f5856df-e717-404a-b7b0-ca8c82b2f45e DB6 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
5797b6b2-e3fc-4182-8db3-671132dd43a7 DB7 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
3c67a04d-4e6b-4b43-8b56-94284994b25d DB8 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
a1b2500d-728e-4cbe-8425-f8a85826c422 DB9 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
7dfadc59-0c67-4b42-86e1-0140f39cf4d3 DB10 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
[root@ODASRV log]#

As we can see, nothing really happened…

Using the -f force option: odacli update-registry -n db -f

So I used the force option:

[root@ODASRV log]# odacli update-registry -n db -f
 
Job details
----------------------------------------------------------------
ID: 2dbada8a-f76d-44bb-bb6e-c507d52e5ae3
Description: Discover Components : db
Status: Created
Created: August 10, 2020 1:58:37 PM CEST
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
 
[root@ODASRV log]# odacli describe-job -i 2dbada8a-f76d-44bb-bb6e-c507d52e5ae3
 
Job details
----------------------------------------------------------------
ID: 2dbada8a-f76d-44bb-bb6e-c507d52e5ae3
Description: Discover Components : db
Status: Success
Created: August 10, 2020 1:58:37 PM CEST
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Rediscover DBHome August 10, 2020 1:58:37 PM CEST August 10, 2020 1:58:39 PM CEST Success
Rediscover DBHome August 10, 2020 1:58:39 PM CEST August 10, 2020 1:58:41 PM CEST Success
Rediscover DBHome August 10, 2020 1:58:41 PM CEST August 10, 2020 1:58:48 PM CEST Success
Discover DBHome August 10, 2020 1:58:48 PM CEST August 10, 2020 1:58:48 PM CEST Success
Rediscover DB: DB9_RZA August 10, 2020 1:58:48 PM CEST August 10, 2020 1:58:53 PM CEST Success
Rediscover DB: DB10_RZA August 10, 2020 1:58:53 PM CEST August 10, 2020 1:58:58 PM CEST Success
Rediscover DB: DB8_RZA August 10, 2020 1:58:58 PM CEST August 10, 2020 1:59:03 PM CEST Success
Rediscover DB: DB2_RZA August 10, 2020 1:59:03 PM CEST August 10, 2020 1:59:16 PM CEST Success
Rediscover DB: DB6_RZA August 10, 2020 1:59:16 PM CEST August 10, 2020 1:59:21 PM CEST Success
Rediscover DB: DB7_RZA August 10, 2020 1:59:21 PM CEST August 10, 2020 1:59:26 PM CEST Success
Rediscover DB: DB4_RZA August 10, 2020 1:59:26 PM CEST August 10, 2020 1:59:31 PM CEST Success
Rediscover DB: DB5_RZA August 10, 2020 1:59:31 PM CEST August 10, 2020 1:59:36 PM CEST Success
Rediscover DB: DB3_RZA August 10, 2020 1:59:36 PM CEST August 10, 2020 1:59:41 PM CEST Success
Rediscover DB: DB1_RZA August 10, 2020 1:59:41 PM CEST August 10, 2020 1:59:51 PM CEST Success
 
[root@ODASRV log]# odacli list-databases
 
ID DB Name DB Type DB Version CDB Class Shape Storage Status DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
d897e7d6-9e2d-45e4-a0d7-a1e232d47f16 DB1 Si 12.1.0.2.200414 false Oltp Odb1 Acfs Configured 73847823-ae83-4bf0-a630-f8884cf4387a
67abbd2e-f8e1-42da-bf8d-2f0a8eb403dd DB2 Si 19.7.0.0.200414 false Oltp Odb1 Acfs Configured 9d2d92d0-3b98-42ac-9f39-9bd6deeb2e16
c51f7361-ee99-42ed-9126-86b7fc281981 DB3 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
472c04fe-533d-46af-aeab-ab5271979d98 DB4 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
8dd9b1ea-37fd-408f-99ab-eb32e2c2ed91 DB5 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
2f5856df-e717-404a-b7b0-ca8c82b2f45e DB6 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
5797b6b2-e3fc-4182-8db3-671132dd43a7 DB7 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
3c67a04d-4e6b-4b43-8b56-94284994b25d DB8 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
a1b2500d-728e-4cbe-8425-f8a85826c422 DB9 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
7dfadc59-0c67-4b42-86e1-0140f39cf4d3 DB10 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598

As we can see, this time the metadata has been updated and DB1 reflects the 12.1.0.2 version and Oracle home. The force option updates the information of existing components as well, not only of new ones.

The force option (-f) is mandatory if the component (here the DB1 database) already exists in the metadata registry.

Conclusion

This odacli update-registry command is really an excellent new feature and very helpful. It makes it easy to keep the ODA metadata registry up to date after manual operations.

Cet article How to synchronize the appliance registry metadata on an ODA? est apparu en premier sur Blog dbi services.

Oracle ADB from a Jupyter Notebook

$
0
0

By Franck Pachot

.
My first attempt to connect to an Oracle database from a Jupyter Notebook on Google Colab was about one year ago:
https://medium.com/@FranckPachot/a-jupyter-notebook-on-google-collab-to-connect-to-the-oracle-cloud-atp-5e88b12282b0

I’m currently preparing a notebook as a handout for my upcoming SQL101 presentation, where I start with some NoSQL to discover the benefits of RDBMS and SQL. I’m running everything on the Oracle Database because it provides all APIs (NoSQL-like key-value with SODA, documents with OSON, and of course SQL on relational tables) within the same converged database. The notebook will connect to my Autonomous Database in the Oracle Free Tier, so that readers don’t have to create a database themselves to start with it. And the notebook runs on Google Colab, which is a free environment where people (with a Gmail account) can run it and change the queries as they want to try new things.

The notebook is there at sql101.pachot.net, but as I said, I’m currently working on it…

In this post, I’m sharing a few tips about how I install and run connections from SQLcl, sqlplus and cx_Oracle. There are probably many improvements possible and that’s one reason I share it in this blog… Feedback welcome!

The Google Colab backend runs Ubuntu 18.04, and in order to run the Oracle Client I need to install libaio:


dpkg -l | grep libaio1 > /dev/null || apt-get install -y libaio1

I test the existence before calling apt-get because I don’t want a “Run all” to take too much time.

Then I download the Instant Client, and SQLcl, and the cloud credential wallet to connect to my database which I’ve put on a public bucket in my free tier Object Store:


[ -f instantclient/network/admin/sqlnet.ora ] || wget --continue --quiet https://objectstorage.us-ashburn-1.oraclecloud.com/n/idhoprxq7wun/b/pub/o/sql101.zip && unzip -oq sql101.zip && sed -e "1a export TNS_ADMIN=$PWD/instantclient/network/admin" -e "/^bootStrap/s/$/| cat -s/" -i sqlcl/bin/sql 

I test the existence with the presence of one file (sqlnet.ora).
I hardcode the TNS_ADMIN in the SQLcl script.
The -e “/^bootStrap/s/$/| cat -s/” is a dirty workaround for the blank lines bug in SQLcl 20.2 (I’ll remove it when 20.3 is out).
All this is quick and dirty, I admit… I have my presentation to prepare 😉

I’ve built the wallet with the passwords, as I mentioned in a previous post

You can also check this notebook I published a few weeks ago if you want to see how to install the Instant Client yourself:

Then I call a CREATE_USER procedure I have created in my database. The idea is that a public user is accessible (its password is in the wallet) with minimal privileges, just enough to run this procedure that creates a unique user for the Colab session.

The most important part is where I define the Python magics to run SQLcl and sqlplus:


import socket
from IPython.core.magic import register_line_cell_magic
@register_line_cell_magic
def sqlcl(line,cell=None):
    if cell is None:
      get_ipython().run_cell_magic('script', 'sqlcl/bin/sql -s -L \'"SQL101#'+socket.gethostname().upper()+'"/"SQL101#'+socket.gethostname()[::-1]+'"\'@sql101_tp',line)    
    else:
      get_ipython().run_cell_magic('script', 'sqlcl/bin/sql -s -L \'"SQL101#'+socket.gethostname().upper()+'"/"SQL101#'+socket.gethostname()[::-1]+'"\'@sql101_tp',cell)
# register %sqlplus and %%sqlplus for easy run scripts
@register_line_cell_magic
def sqlplus(line,cell=None):
    if cell is None:
      get_ipython().run_cell_magic('script', '/content/instantclient/sqlplus -s -L \'"SQL101#'+socket.gethostname().upper()+'"/"SQL101#'+socket.gethostname()[::-1]+'"\'@sql101_tp',line)    
    else:
      get_ipython().run_cell_magic('script', '/content/instantclient/sqlplus -s -L \'"SQL101#'+socket.gethostname().upper()+'"/"SQL101#'+socket.gethostname()[::-1]+'"\'@sql101_tp',cell)

The %sqlcl magic calls SQLcl in silent mode with the line (for %sqlcl) or the cell (for %%sqlcl).
The %sqlplus magic is similar. Its only advantage over SQLcl is that it is faster to start.
Both have the connection string hardcoded, built in the same way as the generated user and password (the username from the host name, and the password derived from it as well).
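
With these magics defined, running a statement is straightforward. A small usage example (any SQL statement works here):

%sqlplus select sysdate from dual;

%%sqlcl
select banner from v$version;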

Then I install the Oracle driver for Python:


pip install cx_Oracle

With it I can run SQL queries from Python or even SQLAlchemy.

I also load the SQL magic from Catherine Devlin


%load_ext sql
%config SqlMagic.autocommit=False
%config SqlMagic.autopandas=True

More info on: https://github.com/catherinedevlin/ipython-sql

I define the connection string for both:


import socket,cx_Oracle,os
connection=cx_Oracle.connect('"SQL101#'+socket.gethostname().upper()+'"',"SQL101#"+socket.gethostname()[::-1], "sql101_tp")
os.environ['DATABASE_URL']='oracle://"SQL101#'+socket.gethostname().upper()+'":"SQL101#'+socket.gethostname()[::-1]+'"@sql101_tp'

Then I have 4 ways to run queries:

  • %%sqlplus for fast OCI access
  • %%sqlcl for additional SQLcl features (javascript, SODA)
  • %sql when I want the result as Pandas
  • and directly from Python with the connection defined

The examples are in the SQL101 notebook and you can play with them.

Just one more thing, which is probably perfectible:


import cx_Oracle, base64
from IPython.core.display import HTML
cursor=connection.cursor()
HTML("If you want to view performance of my database during the last hour: 1,is_omx =>1,report_level=>'basic',outer_start_time=>created-1/24,selected_start_time=>created) from user_users").fetchone()[0].read().encode('utf-8')).decode('utf-8')+"'>Download PerfHub")

This displays a download link to get the Performance Hub report covering the time since the beginning of my connection (actually the user creation).

The idea is:

  • call dbms_perf.report_perfhub
  • get the row with .fetchone()
  • get the first column with [0]
  • read the BLOB with .read()
  • turn it into bytes with .encode(‘utf-8’)
  • encode it in base64 with base64.b64encode()
  • turn the base64 output back into a string with .decode(‘utf-8’)
  • build a data URL with text/html MIME type and base64 encoding
  • display the link ready to click and download

I do that because I prefer to have the performance hub in a plain window, and also because it does not run in an IFRAME as-is.

This is a very powerful environment for demos. You can use it there on Google Colab, connected to my database. Or create your own Oracle Autonomous Database in the always free tier and even run Jupyter in this free tier (see how from Gianni Ceresa)

Cet article Oracle ADB from a Jupyter Notebook est apparu en premier sur Blog dbi services.

CLUSTER

$
0
0

By Franck Pachot

.
Statistically, my blog posts starting with a single SQL keyword (like COMMIT and ROLLBACK) in the title are not fully technical ones, but about moves. Same here. It is more about community engagement, people sharing. And about a friend. And clusters of course…


In 2020, because of the COVID virus, we try to avoid clusters of people. And everybody in the community suffers from that because we have no, or very few, physical conferences where we can meet. This picture is from 6 years ago, 2014, my first Oracle Open World. I was not a speaker, I was not an ACE. Maybe my first and last conference without being a speaker… I was already a blogger and I was already on Twitter. And on Wednesday evening, after the blogger’s meetup, I missed the Aerosmith concert. For much better: a dinner at Scoma’s with these well-known people I was just meeting for the first time. It is not about being with famous people. It is about talking with smart people who have long experience in this technical community and are a good inspiration on many topics – tech and soft skills. Look at who is taking the picture, visible in the mirror replica (as in a MAA Gold architecture😉). Conferences are great to cluster people from everywhere. There are small clusters (like this dinner) and big clusters (like the concert). All are good ways to meet people; it depends on your personality where you feel better. How did I get there? Just by following Ludovico… Because it was his 2nd OOW, because he likes to meet people, and he likes to share with others. So he was my guide there. Actually, this story is told in the blog post he wrote when I became an ACE Director. My turn to write something about his move to the Oracle MAA team. You know MAA, where RAC is the main pillar: you cluster some nodes that work together for better availability. Well… the team he is joining is also a nice cluster of smart people enhancing the products and helping their developers and their users.

I’ve worked with Ludovico Caldara as a colleague as well as a competitor, and have seen him at conferences and outside of professional places as well. That’s how I know how great it is that he moves to Oracle, into the team that manages the products which are the bricks for the future (the cloud-managed ‘autonomous’ database). Because he was always there to understand and help people. Anywhere. Let me take one small example: we are on the Tram 18 from CERN to Geneva (maybe going to Coin Mousse). Sitting and talking. A young kid nearby, with his mum, is crying. In less time than an interconnect ping can detect a network issue, Ludo gets it immediately and proposes to move by one seat, still talking. Because he understood immediately that the kid, tired in the late afternoon, wanted to be near the window. What better skills for a Product Manager than catching a problem that may not have been explained clearly and finding an easy solution that pleases everyone with minimal effort?

Talking about clusters, database performance is all about clustering data that you want to process together. Oracle Database has many features for that. It can be done by storing rows pre-joined (the old CLUSTER, the materialized views with amazing refresh and rewrite capabilities, or key-value JSON documents like through the SODA API). Or storing columns together for faster analytics (HCC, In-Memory Column Store). Or storing related rows together (Partitioning, Index Organized Tables, Attribute Clustering). Yes, attribute clustering is awesome: it tries to store related data nearby without forcing it when not possible. And it is the same with people: meet, talk and share, all in good mood, with common goals, to work better together. The syntax for attribute clustering, available in any Enterprise Edition since 12c, is:

ALTER TABLE people ADD CLUSTERING BY INTERLEAVED ORDER (top_skills, passion, personality, engagement, caring, listening, helping, humour, positivity, loyalty, ethics)

That’s a lot of attributes to cluster together and Ludovico has all of them, showing it with his lucky colleagues, managers, customers, friends,…
As an Oracle advocate, user, partner, customer,… I’m so happy that he joins Oracle, especially in that team!

Ludo wrote a blog post when I joined him as an ACE Director. Oracle employees cannot stay in this advocacy program, so I’m writing this post when he is leaving it. As Jennifer, from the Oracle ACE program, says: being an Oracle employee is The only acceptable way for an amazing Oracle ACE Director to leave the program. But, of course, Product Managers are always in contact with ACEs. If you want to contribute, please have a look at: https://developer.oracle.com/ace/. This advocacy program helps you to be in contact with Oracle product managers, other advocates, and users. For the benefit of all. And it is an awesome cluster of smart people, physically or virtually.

Cet article CLUSTER est apparu en premier sur Blog dbi services.


Upgrade to Oracle 19c – performance issue

$
0
0

In this blog I want to introduce you to a workaround for a performance issue which randomly appeared during the upgrades of several Oracle 12c databases to 19c that I performed for a financial services provider. The upgrades had worked just fine for more than 40 databases before we hit a severe performance issue. While most of the upgrades finished in less than one hour, we ran into one which would have taken days to complete.

Issue

After starting the database upgrade from Oracle 12.2.0.1.0 to Production Version 19.8.0.0.0, the upgrade locked up during the compile step:

@utlrp

 

Reason

One select statement on the unified_audit_trail was running for hours with no result, blocking the upgrade progress and consuming nearly all database resources. The audit trail itself was only about 35MB in size, so not a size you would expect such a bottleneck from:

SQL> SELECT count(*) from gv$unified_audit_trail;

 

Solution

After some research and testing (see notes below) I found the following workaround (after killing the upgrade process):

SQL> begin
DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
use_last_arch_timestamp => TRUE);
end;
/
SQL> set timing on;
SELECT count(*) from gv$unified_audit_trail;
exec DBMS_AUDIT_MGMT.FLUSH_UNIFIED_AUDIT_TRAIL;

 

Note

As a first attempt I used the procedure below, described in Note 2212196.1.

But FLUSH_UNIFIED_AUDIT_TRAIL lasted too long, so I killed the process after it had run for one hour. The flush procedure worked fine again after using CLEAN_AUDIT_TRAIL as described above:

SQL> begin
DBMS_AUDIT_MGMT.FLUSH_UNIFIED_AUDIT_TRAIL;
for i in 1..10 loop
DBMS_AUDIT_MGMT.TRANSFER_UNIFIED_AUDIT_RECORDS;
end loop;
end;
/

 

 

A few days later we encountered the same issue on an Oracle 12.1.0.2 database which requires Patch 25985768 for executing dbms_audit_mgmt.transfer_unified_audit_records.

This procedure is available out of the box in the Oracle 12.2 database and in the Oracle 12.1.0.2 databases which have been patched with Patch 25985768.

To avoid getting caught in this trap, my advice is to gather all relevant statistics before any upgrade from Oracle 12c to 19c and to query gv$unified_audit_trail in advance. This query usually finishes within a few seconds.
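
A sketch of that preparation (dictionary and fixed objects statistics are the relevant ones for queries on views like gv$unified_audit_trail):

SQL> exec DBMS_STATS.GATHER_DICTIONARY_STATS;
SQL> exec DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
SQL> SELECT count(*) from gv$unified_audit_trail;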

 

Related documents

Doc ID 2212196.1

https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=257639407234852&id=2212196.1&_afrWindowMode=0&_adf.ctrl-state=rd4zvw12p_4

Master Note For Database Unified Auditing (Doc ID 2351084.1)

Bug 18920838 : 12C POOR QUERY PERFORMANCE ON DICTIONARY TABLE SYS.X$UNIFIED_AUDIT_TRAIL

Bug 21119008 : POOR QUERY PERFORMANCE ON UNIFIED_AUDIT_TRAIL

Performance Issues While Monitoring the Unified Audit Trail of an Oracle12c Database (Doc ID 2063340.1)

Cet article Upgrade to Oracle 19c – performance issue est apparu en premier sur Blog dbi services.

Installing MySQL Database Service (MDS)

$
0
0

On a previous blog post, we saw how to create an account on Oracle OCI using the Oracle Cloud Free Tier offer and then how to install MySQL Server on a Compute instance.
Some weeks later, the new MySQL Database Service (MDS) came out, and now I can show you how to install and configure it.

We are talking about MySQL 8.0 Enterprise Edition on the Oracle Generation 2 Cloud Infrastructure. For the moment it’s only available in some of the data regions (Frankfurt and London for the EMEA zone), but others should normally be activated at the beginning of 2021 (Zurich for example). Most of these regions have 3 Availability Domains (physical buildings), each of them composed of three Fault Domains (groups of separated hardware and infrastructure).
When we connect to the OCI console, the first step is to create a Virtual Cloud Network (VCN) in order to have our own private cloud network on OCI. This will be created in our Compartment, the main container that will contain our resources (elisapriv in my case).
We click on Networking > Virtual Cloud Network:

We can start then the VCN Wizard:

and define our VCN name and subnet:

We click on Create to finalize the VCN creation:

When it’s done, we can create a compute instance, which will be our host.
We click on Compute > Instances:

We click then on Create Instance:

At this point we can adapt the compute instance configuration in terms of placement for the availability and fault domains, the resources and the OS images:

For example I decided to have 1 OCPU and 8GB of memory:

We need to upload our public key to connect then via ssh, and we can click on Create to create the compute instance:

We can get now the public IP address that we will use to connect then via ssh:

At this point, we can take care of the MySQL part.
If we need to set a variable to a value other than the default one, before creating our MySQL Server we have to create a new ad-hoc configuration, clicking on MySQL > Configurations:

and then clicking on Create MySQL Configuration:

We can now name our configuration and provide the value for the variable that we want to adapt. In my case, for example, I increased the maximum number of connections to 500:


It’s time to create our MySQL Server, clicking on MySQL > DB Systems:

and then clicking on Create MySQL DB System:

We can now name our MySQL DB System, and decide which kind of placement and hardware to use:

If we want to use the ad-hoc configuration that we defined just before, we have to click on Change Configuration:

and then select the right one:

We can choose the storage size for InnoDB and binary log data as well as the maintenance window (I suggest specifying a time convenient for you, because otherwise one will be chosen for you):

There is still some information to fill in, such as the administration user (which has to be different from root) and its password, and network details (VCN, subnet, port):

and the backup configuration (the backups will be executed through snapshots):

This creation operation will take some minutes, and then our MySQL DB System will be up and running.
One more thing to do before connecting to it: enable the network traffic to reach our MySQL Server.
On the VCN page, we have to click on Security Lists and to select the subnet:


We click on “Add Ingress Rules” and add 3306 as the destination port:

We can connect now to our compute instance and install the MySQL Shell:

ssh -i C:\MySQL\Cloud\ssh-key-2020-11-24.key opc@xxx.xx.xxx.xxx
[opc@instance-20201127-1738 ~]$ sudo yum install https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
[opc@instance-20201127-1738 ~]$ sudo yum install mysql-shell

and then connect to the MySQL DB System using MySQL Shell:

[opc@instance-20201127-1738 ~]$ mysqlsh --sql admin@10.0.1.3
Please provide the password for 'admin@10.0.1.3': *************
MySQL Shell 8.0.22

Copyright (c) 2016, 2020, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'admin@10.0.1.3'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 33
Server version: 8.0.22-u2-cloud MySQL Enterprise - Cloud
No default schema selected; type \use <schema> to set one.
 MySQL  10.0.1.3:3306 ssl  SQL >

Great, no?
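
From here we can also verify that our ad-hoc configuration was applied, for example the max_connections variable that we set to 500 (the query should return the configured value):

 MySQL  10.0.1.3:3306 ssl  SQL > SHOW VARIABLES LIKE 'max_connections';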
And this is not everything: I think news concerning analytics for MySQL data will come in the next few weeks.
So stay tuned! 😉

The article Installing MySQL Database Service (MDS) appeared first on Blog dbi services.

Oracle write consistency bug and multi-thread de-queuing


By Franck Pachot

.
This was initially posted on CERN Database blog where it seems to be lost. Here is a copy thanks to web.archive.org
Additional notes:
– I’ve tested and got the same behaviour in Oracle 21c
– you will probably enjoy reading Hatem Mahmoud going further on Write consistency and DML restart

Posted by Franck Pachot on Thursday, 27 September 2018

Here is a quick test I did after encountering an abnormal behavior in write consistency and before finding some references to a bug on StackOverflow (yes, write consistency questions on StackOverflow!) and AskTOM. And a bug opened by Tom Kyte in 2011, that is still there in 18c.

The original issue was with a task management system used to run jobs. Here is the simple table, where all rows have a ‘NEW’ status, and the goal is to have several threads processing them by updating them to the ‘HOLDING’ status and adding the process name.


set echo on
drop table DEMO;
create table DEMO (ID primary key,STATUS,NAME,CREATED)
 as select rownum,cast('NEW' as varchar2(10)),cast(null as varchar2(10)),sysdate+rownum/24/60 from xmltable('1 to 10')
/

Now here is the query that selects the 5 oldest rows in status ‘NEW’ and updates them to the ‘HOLDING’ status:


UPDATE DEMO SET NAME = 'NUMBER1', STATUS = 'HOLDING' 
WHERE ID IN (
 SELECT ID FROM (
  SELECT ID, rownum as counter 
  FROM DEMO 
  WHERE STATUS = 'NEW' 
  ORDER BY CREATED
 ) 
WHERE counter <= 5) 
;

Note that the update also sets the name of the session which has processed the rows, here ‘NUMBER1’.

Once the query had started, and before the commit, I ran the same query from another session, but with ‘NUMBER2’.


UPDATE DEMO SET NAME = 'NUMBER2', STATUS = 'HOLDING' 
WHERE ID IN (
 SELECT ID FROM (
  SELECT ID, rownum as counter 
  FROM DEMO 
  WHERE STATUS = 'NEW' 
  ORDER BY CREATED
 ) 
WHERE counter <= 5) 
;

Of course, this waits on a row lock held by the first session, as it has selected the same rows. Then I commit the first session and check, from the first session, what has been updated:


commit;
set pagesize 1000
select versions_operation,versions_xid,DEMO.* from DEMO versions between scn minvalue and maxvalue order by ID,2;

V VERSIONS_XID             ID STATUS     NAME       CREATED        
- ---------------- ---------- ---------- ---------- ---------------
U 0500110041040000          1 HOLDING    NUMBER1    27-SEP-18 16:48
                            1 NEW                   27-SEP-18 16:48
U 0500110041040000          2 HOLDING    NUMBER1    27-SEP-18 16:49
                            2 NEW                   27-SEP-18 16:49
U 0500110041040000          3 HOLDING    NUMBER1    27-SEP-18 16:50
                            3 NEW                   27-SEP-18 16:50
U 0500110041040000          4 HOLDING    NUMBER1    27-SEP-18 16:51
                            4 NEW                   27-SEP-18 16:51
U 0500110041040000          5 HOLDING    NUMBER1    27-SEP-18 16:52
                            5 NEW                   27-SEP-18 16:52
                            6 NEW                   27-SEP-18 16:53
                            7 NEW                   27-SEP-18 16:54
                            8 NEW                   27-SEP-18 16:55
                            9 NEW                   27-SEP-18 16:56
                           10 NEW                   27-SEP-18 16:57

I have used a flashback query to see all versions of the rows. All 10 have been created and the first 5 of them have been updated by NUMBER1.

Now, my second session continues, updating to NUMBER2. I commit and look at the row versions again:


commit;
set pagesize 1000
select versions_operation,versions_xid,DEMO.* from DEMO versions between scn minvalue and maxvalue order by ID,2;

V VERSIONS_XID             ID STATUS     NAME       CREATED        
- ---------------- ---------- ---------- ---------- ---------------
U 04001B0057030000          1 HOLDING    NUMBER2    27-SEP-18 16:48
U 0500110041040000          1 HOLDING    NUMBER1    27-SEP-18 16:48
                            1 NEW                   27-SEP-18 16:48
U 04001B0057030000          2 HOLDING    NUMBER2    27-SEP-18 16:49
U 0500110041040000          2 HOLDING    NUMBER1    27-SEP-18 16:49
                            2 NEW                   27-SEP-18 16:49
U 04001B0057030000          3 HOLDING    NUMBER2    27-SEP-18 16:50
U 0500110041040000          3 HOLDING    NUMBER1    27-SEP-18 16:50
                            3 NEW                   27-SEP-18 16:50
U 04001B0057030000          4 HOLDING    NUMBER2    27-SEP-18 16:51
U 0500110041040000          4 HOLDING    NUMBER1    27-SEP-18 16:51
                            4 NEW                   27-SEP-18 16:51
U 04001B0057030000          5 HOLDING    NUMBER2    27-SEP-18 16:52
U 0500110041040000          5 HOLDING    NUMBER1    27-SEP-18 16:52
                            5 NEW                   27-SEP-18 16:52
                            6 NEW                   27-SEP-18 16:53
                            7 NEW                   27-SEP-18 16:54
                            8 NEW                   27-SEP-18 16:55
                            9 NEW                   27-SEP-18 16:56
                           10 NEW                   27-SEP-18 16:57

This is not what I expected. I wanted my second session to process the other rows, but here it seems that it has processed the same rows as the first one. What was done by NUMBER1 has been lost and overwritten by NUMBER2. This is inconsistent, violates ACID properties, and should not happen. An SQL statement must ensure write consistency: either by locking all the rows as soon as they are read (for non-MVCC databases where reads block writes), or by restarting the update when a mutating row is encountered. Oracle's default behaviour is the second one: the NUMBER2 query reads rows 1 to 5 because the changes made by NUMBER1, not yet committed, are invisible to NUMBER2. But the execution should keep track of the columns referenced in the WHERE clause. When attempting to update a row, now that the concurrent change is visible, the update is possible only if the WHERE clause used to select the rows still selects this row. If not, the database should raise an error (this is what happens in the serializable isolation level) or restart the update when in the default statement-level consistency.

Here, probably because of the nested subquery, the write consistency is not guaranteed and this is a bug.

One workaround is not to use subqueries. However, as we need to ORDER BY the rows to process the oldest first, we cannot avoid the subquery here. The workaround is then to add STATUS = ‘NEW’ to the WHERE clause of the update itself, so that the update restart works correctly.
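
A sketch of the update with this workaround (the added predicate is the outer STATUS = 'NEW', so that a restarted update no longer selects rows already taken by another session):

UPDATE DEMO SET NAME = 'NUMBER1', STATUS = 'HOLDING'
WHERE STATUS = 'NEW' AND ID IN (
 SELECT ID FROM (
  SELECT ID, rownum as counter
  FROM DEMO
  WHERE STATUS = 'NEW'
  ORDER BY CREATED
 )
WHERE counter <= 5)
;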

However, the goal of multithreading those processes is to be scalable, and multiple update restarts may finally serialize all those updates.

The preferred solution for this is to ensure that the updates do not attempt to touch the same rows. This can be achieved by a SELECT … FOR UPDATE SKIP LOCKED. As this cannot be added directly to the update statement, we need a cursor. Something like this can do the job:


declare counter number:=5;
begin
 for c in (select /*+ first_rows(5) */ ID FROM DEMO 
           where STATUS = 'NEW' 
           order by CREATED
           for update skip locked)
 loop
  counter:=counter-1;
  update DEMO set NAME = 'NUMBER1', STATUS = 'HOLDING'  where ID = c.ID and STATUS = 'NEW';
  exit when counter=0;
 end loop;
end;
/
commit;

This can be optimized further but just gives an idea of what is needed for a scalable solution. Waiting for locks is not scalable.

The article Oracle write consistency bug and multi-thread de-queuing appeared first on Blog dbi services.

Efficiently query DBA_EXTENTS for FILE_ID / BLOCK_ID


By Franck Pachot

.
This was initially posted to CERN Database blog on Thursday, 27 September 2018 where it seems to be lost. Here is a copy thanks to web.archive.org

Did you ever try to query DBA_EXTENTS on a very large database with LMT tablespaces? I had to, in the past, in order to find which segment a corrupt block belonged to. The information about extent allocation is stored in the datafile headers, visible through X$KTFBUE, and queries on it can be very expensive. In addition to that, the optimizer tends to start with the segments and get to this X$KTFBUE for each of them. At that time, I had quickly created a view on the internal dictionary tables, forcing it to start with X$KTFBUE using a materialized CTE, to replace DBA_EXTENTS. I published this on dba-village in 2006.

I recently wanted to know the segment/extent for a hot block, identified by its file_id and block_id, on a 900TB database with 7000 datafiles and 90000 extents, so I went back to this old query and got my result in 1 second. The idea is to make sure that we start with the file (X$KCCFE) and then get to the extent allocation (X$KTFBUE) before going to the segments:

So here is the query:


column owner format a6
column segment_type format a20
column segment_name format a15
column partition_name format a15
set linesize 200
set timing on time on echo on autotrace on stat
WITH
 l AS ( /* LMT extents indexed on ktfbuesegtsn,ktfbuesegfno,ktfbuesegbno */
  SELECT ktfbuesegtsn segtsn,ktfbuesegfno segrfn,ktfbuesegbno segbid, ktfbuefno extrfn,
         ktfbuebno fstbid,ktfbuebno + ktfbueblks - 1 lstbid,ktfbueblks extblks,ktfbueextno extno
  FROM sys.x$ktfbue
 ),
 d AS ( /* DMT extents ts#, segfile#, segblock# */
  SELECT ts# segtsn,segfile# segrfn,segblock# segbid, file# extrfn,
         block# fstbid,block# + length - 1 lstbid,length extblks, ext# extno
  FROM sys.uet$
 ),
 s AS ( /* segment information for the tablespace that contains afn file */
  SELECT /*+ materialized */
  f1.fenum afn,f1.ferfn rfn,s.ts# segtsn,s.FILE# segrfn,s.BLOCK# segbid ,s.TYPE# segtype,f2.fenum segafn,t.name tsname,blocksize
  FROM sys.seg$ s, sys.ts$ t, sys.x$kccfe f1,sys.x$kccfe f2 
  WHERE s.ts#=t.ts# AND t.ts#=f1.fetsn AND s.FILE#=f2.ferfn AND s.ts#=f2.fetsn
 ),
 m AS ( /* extent mapping for the tablespace that contains afn file */
SELECT /*+ use_nl(e) ordered */
 s.afn,s.segtsn,s.segrfn,s.segbid,extrfn,fstbid,lstbid,extblks,extno, segtype,s.rfn, tsname,blocksize
 FROM s,l e
 WHERE e.segtsn=s.segtsn AND e.segrfn=s.segrfn AND e.segbid=s.segbid
 UNION ALL
 SELECT /*+ use_nl(e) ordered */ 
 s.afn,s.segtsn,s.segrfn,s.segbid,extrfn,fstbid,lstbid,extblks,extno, segtype,s.rfn, tsname,blocksize
 FROM s,d e
  WHERE e.segtsn=s.segtsn AND e.segrfn=s.segrfn AND e.segbid=s.segbid
 UNION ALL
 SELECT /*+ use_nl(e) use_nl(t) ordered */
 f.fenum afn,null segtsn,null segrfn,null segbid,f.ferfn extrfn,e.ktfbfebno fstbid,e.ktfbfebno+e.ktfbfeblks-1 lstbid,e.ktfbfeblks extblks,null extno, null segtype,f.ferfn rfn,name tsname,blocksize
 FROM sys.x$kccfe f,sys.x$ktfbfe e,sys.ts$ t
 WHERE t.ts#=f.fetsn and e.ktfbfetsn=f.fetsn and e.ktfbfefno=f.ferfn
 UNION ALL
 SELECT /*+ use_nl(e) use_nl(t) ordered */
 f.fenum afn,null segtsn,null segrfn,null segbid,f.ferfn extrfn,e.block# fstbid,e.block#+e.length-1 lstbid,e.length extblks,null extno, null segtype,f.ferfn rfn,name tsname,blocksize
 FROM sys.x$kccfe f,sys.fet$ e,sys.ts$ t
 WHERE t.ts#=f.fetsn and e.ts#=f.fetsn and e.file#=f.ferfn
 ),
 o AS (
  SELECT s.tablespace_id segtsn,s.relative_fno segrfn,s.header_block   segbid,s.segment_type,s.owner,s.segment_name,s.partition_name
  FROM SYS_DBA_SEGS s
 ),
datafile_map as (
SELECT
 afn file_id,fstbid block_id,extblks blocks,nvl(segment_type,decode(segtype,null,'free space','type='||segtype)) segment_type,
 owner,segment_name,partition_name,extno extent_id,extblks*blocksize bytes,
 tsname tablespace_name,rfn relative_fno,m.segtsn,m.segrfn,m.segbid
 FROM m,o WHERE extrfn=rfn and m.segtsn=o.segtsn(+) AND m.segrfn=o.segrfn(+) AND m.segbid=o.segbid(+)
UNION ALL
SELECT
 file_id+(select to_number(value) from v$parameter WHERE name='db_files') file_id,
 1 block_id,blocks,'tempfile' segment_type,
 '' owner,file_name segment_name,'' partition_name,0 extent_id,bytes,
  tablespace_name,relative_fno,0 segtsn,0 segrfn,0 segbid
 FROM dba_temp_files
)
select * from datafile_map where file_id=5495 and 11970455 between block_id and block_id+blocks

And here is the result, with execution statistics:



   FILE_ID   BLOCK_ID     BLOCKS SEGMENT_TYPE         OWNER  SEGMENT_NAME    PARTITION_NAME    EXTENT_ID      BYTES TABLESPACE_NAME      RELATIVE_FNO     SEGTSN     SEGRFN    SEGBID
---------- ---------- ---------- -------------------- ------ --------------- ---------------- ---------- ---------- -------------------- ------------ ---------- ---------- ----------
      5495   11964544            8192 INDEX PARTITION LHCLOG DN_PK           PART_DN_20161022 1342         67108864 LOG_DATA_20161022            1024       6364       1024        162

Elapsed: 00:00:01.25

Statistics
----------------------------------------------------------
        103  recursive calls
       1071  db block gets
      21685  consistent gets
        782  physical reads
        840  redo size
       1548  bytes sent via SQL*Net to client
        520  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Knowing the segment from the block address is important in performance tuning, when we get the file_id/block_id from wait event parameters. It is even more important when a block corruption is detected, and having a fast query may help.
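
As a side note, when the hot block comes from wait event parameters: for 'db file sequential read', P1 is the file# and P2 the block#, so a minimal sketch to get the address a session is currently waiting on could be:

select sid, p1 file_id, p2 block_id from v$session where event='db file sequential read';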

The article Efficiently query DBA_EXTENTS for FILE_ID / BLOCK_ID appeared first on Blog dbi services.

NTP is not working for ODA new deployment (reimage) in version 19.8?


Having recently reimaged and patched several ODAs to versions 19.8 and 19.9, I came across an issue with NTP. During my troubleshooting I was able to determine the root cause and find an appropriate solution. In this blog post I would like to share my experience with you.

Symptom/Analysis

ODA version 19.6 or higher comes with Oracle Linux 7. Since Oracle Linux 7, the default time synchronization service is no longer ntp but chrony. In Oracle Linux 7, ntp is still available and can still be used, but the ntp service will disappear in Oracle Linux 8.

What I observed from my recent deployments and patching is the following:

  • Patching your ODA from 19.6 to version 19.8 or 19.9: the system will still use ntpd and the chronyd service will be deactivated. All works fine.
  • Reimaging your ODA to version 19.8: chronyd will be activated and NTP will not work any more.
  • Reimaging your ODA to version 19.9: ntpd will be activated and NTP will work with no problem.

So the problem only occurs if you reimage your ODA to version 19.8.

Problem explanation

The problem is due to the fact that the odacli script deploying the appliance still updates the ntpd configuration (/etc/ntp.conf) with the IP addresses provided, and not the chronyd one. But chronyd will be activated and started by default, with no configuration.
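
This is easy to verify after a fresh 19.8 reimage. A quick check of which service is enabled and which configuration file received the NTP servers (assuming the Oracle Linux 7 default paths):

[root@ODA01 ~]# systemctl is-enabled ntpd chronyd
[root@ODA01 ~]# grep '^server' /etc/ntp.conf /etc/chrony.conf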

Solving the problem

There are two solutions.

A/ Configure and use chronyd

You configure /etc/chrony.conf with the NTP server addresses given during appliance creation and restart the chronyd service.

Configure chrony:

oracle@ODA01:/u01/app/oracle/local/dmk/etc/ [rdbms19.8.0.0] vi /etc/chrony.conf

oracle@ODA01:/u01/app/oracle/local/dmk/etc/ [rdbms19.8.0.0] cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.pool.ntp.org iburst
#server 1.pool.ntp.org iburst
#server 2.pool.ntp.org iburst
#server 3.pool.ntp.org iburst
server 212.X.X.X.103 prefer
server 212.X.X.X.100
server 212.X.X.X.101


# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking

And you restart the chrony service:

[root@ODA01 ~]# service chronyd restart
Redirecting to /bin/systemctl restart chronyd.service

B/ Start ntp

Starting ntp will automatically stop chrony service.

[root@ODA01 ~]# ntpq -p
ntpq: read: Connection refused

[root@ODA01 ~]# service ntpd restart
Redirecting to /bin/systemctl restart ntpd.service

Checking the synchronization:

[root@ODA01 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
lantime. domain_name .STEP.          16 u    - 1024    0    0.000    0.000   0.000
*ntp1. domain_name    131.188.3.223    2 u  929 1024  377    0.935   -0.053   0.914
+ntp2. domain_name    131.188.3.223    2 u  113 1024  377    0.766    0.184   2.779

Checking both the ntp and chrony services:

[root@ODA01 ~]# service ntpd status
Redirecting to /bin/systemctl status ntpd.service
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-11-27 09:40:08 CET; 31min ago
  Process: 68548 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 68549 (ntpd)
    Tasks: 1
   CGroup: /system.slice/ntpd.service
           └─68549 /usr/sbin/ntpd -u ntp:ntp -g

Nov 27 09:40:08 ODA01 ntpd[68549]: ntp_io: estimated max descriptors: 1024, initial socket boundary: 16
Nov 27 09:40:08 ODA01 ntpd[68549]: Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123
Nov 27 09:40:08 ODA01 ntpd[68549]: Listen normally on 1 lo 127.0.0.1 UDP 123
Nov 27 09:40:08 ODA01 ntpd[68549]: Listen normally on 2 btbond1 10.X.X.10 UDP 123
Nov 27 09:40:08 ODA01 ntpd[68549]: Listen normally on 3 priv0 192.X.X.24 UDP 123
Nov 27 09:40:08 ODA01 ntpd[68549]: Listen normally on 4 virbr0 192.X.X.1 UDP 123
Nov 27 09:40:08 ODA01 ntpd[68549]: Listening on routing socket on fd #21 for interface updates
Nov 27 09:40:08 ODA01 ntpd[68549]: 0.0.0.0 c016 06 restart
Nov 27 09:40:08 ODA01 ntpd[68549]: 0.0.0.0 c012 02 freq_set kernel 0.000 PPM
Nov 27 09:40:08 ODA01 ntpd[68549]: 0.0.0.0 c011 01 freq_not_set

[root@ODA01 ~]# service chronyd status
Redirecting to /bin/systemctl status chronyd.service
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Fri 2020-11-27 09:40:08 CET; 32min ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 46183 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
  Process: 46180 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 46182 (code=exited, status=0/SUCCESS)

Nov 27 09:18:25 ODA01 systemd[1]: Starting NTP client/server...
Nov 27 09:18:25 ODA01 chronyd[46182]: chronyd version 3.4 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +SECHASH +IPV6 +DEBUG)
Nov 27 09:18:25 ODA01 chronyd[46182]: Frequency 0.000 +/- 1000000.000 ppm read from /var/lib/chrony/drift
Nov 27 09:18:25 ODA01 systemd[1]: Started NTP client/server.
Nov 27 09:40:08 ODA01 systemd[1]: Stopping NTP client/server...
Nov 27 09:40:08 ODA01 systemd[1]: Stopped NTP client/server.

You might need to deactivate the chronyd service with systemctl to avoid chronyd starting automatically after a server reboot.
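
For example, to make sure ntpd remains the active service across reboots:

[root@ODA01 ~]# systemctl disable chronyd
[root@ODA01 ~]# systemctl enable ntpd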

Are you getting a socket error with chrony?

If you are getting the following error when starting chrony, you will need to pass the appropriate option so that chronyd starts with IPv4 only:

Nov 27 09:09:19 ODA01 chronyd[35107]: Could not open IPv6 command socket : Address family not supported by protocol.

Example of the error encountered:

[root@ODA01 ~]# service chronyd status
Redirecting to /bin/systemctl status chronyd.service
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-11-27 09:09:19 CET; 5min ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 35109 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
  Process: 35105 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 35107 (chronyd)
    Tasks: 1
   CGroup: /system.slice/chronyd.service
           └─35107 /usr/sbin/chronyd

Nov 27 09:09:19 ODA01 systemd[1]: Starting NTP client/server...
Nov 27 09:09:19 ODA01 chronyd[35107]: chronyd version 3.4 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +SECHASH +IPV6 +DEBUG)
Nov 27 09:09:19 ODA01 chronyd[35107]: Could not open IPv6 command socket : Address family not supported by protocol
Nov 27 09:09:19 ODA01 chronyd[35107]: Frequency 0.000 +/- 1000000.000 ppm read from /var/lib/chrony/drift
Nov 27 09:09:19 ODA01 systemd[1]: Started NTP client/server.

The chronyd system service uses a variable to set its options:

[root@ODA01 ~]# cat /usr/lib/systemd/system/chronyd.service
[Unit]
Description=NTP client/server
Documentation=man:chronyd(8) man:chrony.conf(5)
After=ntpdate.service sntp.service ntpd.service
Conflicts=ntpd.service systemd-timesyncd.service
ConditionCapability=CAP_SYS_TIME

[Service]
Type=forking
PIDFile=/var/run/chrony/chronyd.pid
EnvironmentFile=-/etc/sysconfig/chronyd
ExecStart=/usr/sbin/chronyd $OPTIONS
ExecStartPost=/usr/libexec/chrony-helper update-daemon
PrivateTmp=yes
ProtectHome=yes
ProtectSystem=full

[Install]
WantedBy=multi-user.target

We need to add the -4 option to the chronyd service configuration file:

[root@ODA01 ~]# cat /etc/sysconfig/chronyd
# Command-line options for chronyd
OPTIONS=""

[root@ODA01 ~]# vi /etc/sysconfig/chronyd

[root@ODA01 ~]# cat /etc/sysconfig/chronyd
# Command-line options for chronyd
OPTIONS="-4"

You will then just need to restart the chronyd service:

[root@ODA01 ~]# service chronyd restart
Redirecting to /bin/systemctl restart chronyd.service

[root@ODA01 ~]# service chronyd status
Redirecting to /bin/systemctl status chronyd.service
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-11-27 09:18:25 CET; 4s ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 46183 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
  Process: 46180 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 46182 (chronyd)
    Tasks: 1
   CGroup: /system.slice/chronyd.service
           └─46182 /usr/sbin/chronyd -4

Nov 27 09:18:25 ODA01 systemd[1]: Starting NTP client/server...
Nov 27 09:18:25 ODA01 chronyd[46182]: chronyd version 3.4 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +SECHASH +IPV6 +DEBUG)
Nov 27 09:18:25 ODA01 chronyd[46182]: Frequency 0.000 +/- 1000000.000 ppm read from /var/lib/chrony/drift
Nov 27 09:18:25 ODA01 systemd[1]: Started NTP client/server.

Finally, you can use the following command to check the NTP synchronization with chronyd:

[root@ODA01 ~]# chronyc tracking

The article NTP is not working for ODA new deployment (reimage) in version 19.8? appeared first on Blog dbi services.

Oracle SPD status on two learning paths


By Franck Pachot

.
I have written a lot about SQL Plan Directives, which appeared in 12c. They were used by default and, because of some side effects in 12cR1 with legacy applications that were parsing too much, they have been disabled by default in 12cR2. Today, they are probably not used enough because of their bad reputation from those times. But for data warehouses, they should be the default in my opinion.

There is a behaviour that surprised me initially and I thought it was a bug but, after 5 years, the verdict is: expected behaviour (Bug 20311655 : SQL PLAN DIRECTIVE INVALIDATED BY STATISTICS FEEDBACK). The name of the bug is my fault: I initially thought that the statistics feedback had been wrongly interpreted as HAS_STATS. But actually, this behaviour has nothing to do with it: it was visible here only because the re-optimization had triggered a new hard parse, which changed the state. Any other query on similar predicates would have done the same.

And this is what I’m showing here: when the misestimate cannot be solved by extended statistics, the learning path of SQL Plan Directives has to go through this HAS_STATS state, where the misestimate will occur again. Whether extended statistics can help or not matters, because this is anticipated by the optimizer. For this reason, I’ve run two sets of examples: one with a predicate where no column group can help, and one where extended statistics can be created.

SQL> show parameter optimizer_adaptive
NAME                              TYPE    VALUE 
--------------------------------- ------- ----- 
optimizer_adaptive_plans boolean TRUE 
optimizer_adaptive_reporting_only boolean FALSE 
optimizer_adaptive_statistics boolean TRUE 

Since 12.2, adaptive statistics are disabled by default: SQL Plan Directives are created but not used. This is fine for OLTP databases that are upgraded from previous versions. However, for data warehouses, analytics, ad-hoc queries and reporting, enabling adaptive statistics may help a lot when the static statistics are not sufficient to optimize complex queries.

SQL> alter session set optimizer_adaptive_statistics=true;

Session altered.

I’m enabling adaptive statistics for my session.

SQL> exec for r in (select directive_id from dba_sql_plan_dir_objects where owner=user) loop begin dbms_spd.drop_sql_plan_directive(r.directive_id); exception when others then raise; end; end loop;

I’m removing all SQL Plan Directives in my lab to build a reproducible test case.

SQL> create table DEMO pctfree 99 as select mod(rownum,2) a,mod(rownum,2) b,mod(rownum,2) c,mod(rownum,2) d from dual connect by level <=1000;

Table DEMO created.

This is my test table, built on purpose with a special distribution of data: all rows have 0 or 1 in all columns.

SQL> alter session set statistics_level=all;

Session altered.

I’m profiling down to the execution plan operations in order to see all execution statistics.

SPD learning path {E}:
USABLE(NEW)->SUPERSEDED(HAS_STATS)->USABLE(PERMANENT)

SQL> select count(*) c1 from demo where a+b+c+d=0;

    C1 
______ 
   500 

Here is a query where dynamic sampling can help to get better statistics on selectivity, but where no static statistics can help, not even a column group (extended statistics on expressions are not considered for SQL Plan Directives, even in 21c).

SQL> select * from dbms_xplan.display_cursor(format=>'allstats last');

                                                                                PLAN_TABLE_OUTPUT 
_________________________________________________________________________________________________ 
SQL_ID  fjcbm5x4014mg, child number 0                                                             
-------------------------------------                                                             
select count(*) c1 from demo where a+b+c+d=0                                                      
                                                                                                  
Plan hash value: 2180342005                                                                       
                                                                                                  
----------------------------------------------------------------------------------------------    
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |    
----------------------------------------------------------------------------------------------    
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.03 |     253 |    250 |    
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.03 |     253 |    250 |    
|*  2 |   TABLE ACCESS FULL| DEMO |      1 |     10 |    500 |00:00:00.03 |     253 |    250 |    
----------------------------------------------------------------------------------------------    
                                                                                                  
Predicate Information (identified by operation id):                                               
---------------------------------------------------                                               
                                                                                                  
   2 - filter("A"+"B"+"C"+"D"=0)      

As expected, the estimation (10 rows) is far from the actual number of rows (500). This statement is flagged for re-optimization with cardinality feedback, but I’m interested in different SQL statements here.

SQL> exec dbms_spd.flush_sql_plan_directive;

PL/SQL procedure successfully completed.

SQL> select state, extract(notes,'/spd_note/internal_state/text()') internal_state, extract(notes,'/spd_note/spd_text/text()') spd_text from dba_sql_plan_directives where directive_id in (select directive_id from dba_sql_plan_dir_objects where owner=user) and type='DYNAMIC_SAMPLING';


    STATE    INTERNAL_STATE                      SPD_TEXT 
_________ _________________ _____________________________ 
USABLE    NEW               {E(DEMO.DEMO)[A, B, C, D]}    

A SQL Plan Directive has been created to keep the information that equality predicates on columns A, B, C and D are misestimated. The directive is in internal state NEW. The visible state is USABLE which means that dynamic sampling will be used by queries with a similar predicate on those columns.

SQL> select count(*) c2 from demo where a+b+c+d=0;

    C2 
______ 
   500 

SQL> select * from dbms_xplan.display_cursor(format=>'allstats last');

                                                                       PLAN_TABLE_OUTPUT 
________________________________________________________________________________________ 
SQL_ID  5sg7b9jg6rj2k, child number 0                                                    
-------------------------------------                                                    
select count(*) c2 from demo where a+b+c+d=0                                             
                                                                                         
Plan hash value: 2180342005                                                              
                                                                                         
-------------------------------------------------------------------------------------    
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |    
-------------------------------------------------------------------------------------    
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.01 |     253 |    
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.01 |     253 |    
|*  2 |   TABLE ACCESS FULL| DEMO |      1 |    500 |    500 |00:00:00.01 |     253 |    
-------------------------------------------------------------------------------------    
                                                                                         
Predicate Information (identified by operation id):                                      
---------------------------------------------------                                      
                                                                                         
   2 - filter("A"+"B"+"C"+"D"=0)                                                         
                                                                                         
Note                                                                                     
-----                                                                                    
   - dynamic statistics used: dynamic sampling (level=AUTO)                              
   - 1 Sql Plan Directive used for this statement      

As expected, a different query (note that I changed the column alias C1 to C2 but anything can be different as long as there’s an equality predicate involving the same columns) has accurate estimations (E-Rows=A-Rows) because of dynamic sampling (dynamic statistics) thanks to the used SQL Plan Directive.

SQL> exec dbms_spd.flush_sql_plan_directive;

PL/SQL procedure successfully completed.

SQL> select state, extract(notes,'/spd_note/internal_state/text()') internal_state, extract(notes,'/spd_note/spd_text/text()') spd_text from dba_sql_plan_directives where directive_id in (select directive_id from dba_sql_plan_dir_objects where owner=user) and type='DYNAMIC_SAMPLING';

        STATE    INTERNAL_STATE                      SPD_TEXT 
_____________ _________________ _____________________________ 
SUPERSEDED    HAS_STATS         {E(DEMO.DEMO)[A, B, C, D]}    

This is the important part, and initially I thought it was a bug, because SUPERSEDED means that the next query on similar columns will not do dynamic sampling anymore, and will then have bad estimations. HAS_STATS does not mean that we have correct estimations here, but only that no additional static statistics can help: the optimizer has detected an expression (“A”+”B”+”C”+”D”=0) and automatic statistics extensions do not consider expressions.

SQL> select count(*) c3 from demo where a+b+c+d=0;

    C3 
______ 
   500 


SQL> select * from dbms_xplan.display_cursor(format=>'allstats last');

                                                                       PLAN_TABLE_OUTPUT 
________________________________________________________________________________________ 
SQL_ID  62cf5zwt4rwgj, child number 0                                                    
-------------------------------------                                                    
select count(*) c3 from demo where a+b+c+d=0                                             
                                                                                         
Plan hash value: 2180342005                                                              
                                                                                         
-------------------------------------------------------------------------------------    
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |    
-------------------------------------------------------------------------------------    
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.01 |     253 |    
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.01 |     253 |    
|*  2 |   TABLE ACCESS FULL| DEMO |      1 |     10 |    500 |00:00:00.01 |     253 |    
-------------------------------------------------------------------------------------    
                                                                                         
Predicate Information (identified by operation id):                                      
---------------------------------------------------                                      
                                                                                         
   2 - filter("A"+"B"+"C"+"D"=0)    

We are still in the learning phase and, as you can see, even though we know that there is a misestimate (the SPD has been created), adaptive statistics tries to avoid dynamic sampling: no SPD usage is mentioned in the notes, and we are back to the misestimate of E-Rows=10.

SQL> exec dbms_spd.flush_sql_plan_directive;

PL/SQL procedure successfully completed.

SQL> select state, extract(notes,'/spd_note/internal_state/text()') internal_state, extract(notes,'/spd_note/spd_text/text()') spd_text from dba_sql_plan_directives where directive_id in (select directive_id from dba_sql_plan_dir_objects where owner=user) and type='DYNAMIC_SAMPLING';

    STATE    INTERNAL_STATE                      SPD_TEXT 
_________ _________________ _____________________________ 
USABLE    PERMANENT         {E(DEMO.DEMO)[A, B, C, D]}    

The HAS_STATS state and its misestimate were only temporary. The optimizer has now validated that, with all possible static statistics available (HAS_STATS), we still have a misestimate, and has therefore moved the SPD status to PERMANENT: end of the learning phase, we will permanently do dynamic sampling for this kind of query.

SQL> select count(*) c4 from demo where a+b+c+d=0;

    C4 
______ 
   500 


SQL> select * from dbms_xplan.display_cursor(format=>'allstats last');

                                                                       PLAN_TABLE_OUTPUT 
________________________________________________________________________________________ 
SQL_ID  65ufgd70n61nh, child number 0                                                    
-------------------------------------                                                    
select count(*) c4 from demo where a+b+c+d=0                                             
                                                                                         
Plan hash value: 2180342005                                                              
                                                                                         
-------------------------------------------------------------------------------------    
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |    
-------------------------------------------------------------------------------------    
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.01 |     253 |    
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.01 |     253 |    
|*  2 |   TABLE ACCESS FULL| DEMO |      1 |    500 |    500 |00:00:00.01 |     253 |    
-------------------------------------------------------------------------------------    
                                                                                         
Predicate Information (identified by operation id):                                      
---------------------------------------------------                                      
                                                                                         
   2 - filter("A"+"B"+"C"+"D"=0)                                                         
                                                                                         
Note                                                                                     
-----                                                                                    
   - dynamic statistics used: dynamic sampling (level=AUTO)                              
   - 1 Sql Plan Directive used for this statement                                        
                                                            

Yes, this has an overhead at hard parse time, but it helps to get better estimations and therefore faster execution plans. The execution plan shows that dynamic sampling is done because of the SPD usage.

SPD learning path {EC}:
USABLE(NEW)->USABLE(MISSING_STATS)->SUPERSEDED(HAS_STATS)

I’m now running a query where the misestimate can be avoided with additional statistics: column group statistics extension.

SQL> select count(*) c1 from demo where a=0 and b=0 and c=0 and d=0;

    C1 
______ 
   500 

SQL> select * from dbms_xplan.display_cursor(format=>'allstats last');

                                                                       PLAN_TABLE_OUTPUT 
________________________________________________________________________________________ 
SQL_ID  2x5j71630ua0z, child number 0                                                    
-------------------------------------                                                    
select count(*) c1 from demo where a=0 and b=0 and c=0 and d=0                           
                                                                                         
Plan hash value: 2180342005                                                              
                                                                                         
-------------------------------------------------------------------------------------    
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |    
-------------------------------------------------------------------------------------    
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.01 |     253 |    
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.01 |     253 |    
|*  2 |   TABLE ACCESS FULL| DEMO |      1 |     63 |    500 |00:00:00.01 |     253 |    
-------------------------------------------------------------------------------------    
                                                                                         
Predicate Information (identified by operation id):                                      
---------------------------------------------------                                      
                                                                                         
   2 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))   

I have a misestimate here (E-Rows much lower than A-Rows) because the optimizer doesn’t know the correlation between A, B, C and D.

SQL> exec dbms_spd.flush_sql_plan_directive;

PL/SQL procedure successfully completed.

SQL> select state, extract(notes,'/spd_note/internal_state/text()') internal_state, extract(notes,'/spd_note/spd_text/text()') spd_text from dba_sql_plan_directives where directive_id in (select directive_id from dba_sql_plan_dir_objects where owner=user) and type='DYNAMIC_SAMPLING';


    STATE    INTERNAL_STATE                       SPD_TEXT 
_________ _________________ ______________________________ 
USABLE    PERMANENT         {E(DEMO.DEMO)[A, B, C, D]}     
USABLE    NEW               {EC(DEMO.DEMO)[A, B, C, D]}    

I now have a new SQL Plan Directive, and the difference from the previous one is that the predicate is not just an equality (E) but a simple column equality on each column (EC). From that, the optimizer knows that a statistics extension on the column group may help.

SQL> select count(*) c2 from demo where a=0 and b=0 and c=0 and d=0;

    C2 
______ 
   500 

SQL> select * from dbms_xplan.display_cursor(format=>'allstats last');


                                                                       PLAN_TABLE_OUTPUT 
________________________________________________________________________________________ 
SQL_ID  5sg8p03mmx7ca, child number 0                                                    
-------------------------------------                                                    
select count(*) c2 from demo where a=0 and b=0 and c=0 and d=0                           
                                                                                         
Plan hash value: 2180342005                                                              
                                                                                         
-------------------------------------------------------------------------------------    
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |    
-------------------------------------------------------------------------------------    
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.01 |     253 |    
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.01 |     253 |    
|*  2 |   TABLE ACCESS FULL| DEMO |      1 |    500 |    500 |00:00:00.01 |     253 |    
-------------------------------------------------------------------------------------    
                                                                                         
Predicate Information (identified by operation id):                                      
---------------------------------------------------                                      
                                                                                         
   2 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))                                     
                                                                                         
Note                                                                                     
-----                                                                                    
   - dynamic statistics used: dynamic sampling (level=AUTO)                              
   - 1 Sql Plan Directive used for this statement       

So, the NEW directive is in USABLE state: the SPD is used to do some dynamic sampling, as in the previous example.

SQL> exec dbms_spd.flush_sql_plan_directive;

PL/SQL procedure successfully completed.

SQL> select state, extract(notes,'/spd_note/internal_state/text()') internal_state, extract(notes,'/spd_note/spd_text/text()') spd_text from dba_sql_plan_directives where directive_id in (select directive_id from dba_sql_plan_dir_objects where owner=user) and type='DYNAMIC_SAMPLING';

    STATE    INTERNAL_STATE                       SPD_TEXT 
_________ _________________ ______________________________ 
USABLE    PERMANENT         {E(DEMO.DEMO)[A, B, C, D]}     
USABLE    MISSING_STATS     {EC(DEMO.DEMO)[A, B, C, D]}    

Here we have an additional state during the learning phase because there’s something else that can be done: we are not in HAS_STATS because more statistics can be gathered. We are in the MISSING_STATS internal state. This is a USABLE state, so dynamic sampling continues until we gather statistics.

SQL> select count(*) c3 from demo where a=0 and b=0 and c=0 and d=0;

    C3 
______ 
   500 

SQL> select * from dbms_xplan.display_cursor(format=>'allstats last');

                                                                       PLAN_TABLE_OUTPUT 
________________________________________________________________________________________ 
SQL_ID  d8zyzh140xk0d, child number 0                                                    
-------------------------------------                                                    
select count(*) c3 from demo where a=0 and b=0 and c=0 and d=0                           
                                                                                         
Plan hash value: 2180342005                                                              
                                                                                         
-------------------------------------------------------------------------------------    
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |    
-------------------------------------------------------------------------------------    
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.01 |     253 |    
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.01 |     253 |    
|*  2 |   TABLE ACCESS FULL| DEMO |      1 |    500 |    500 |00:00:00.01 |     253 |    
-------------------------------------------------------------------------------------    
                                                                                         
Predicate Information (identified by operation id):                                      
---------------------------------------------------                                      
                                                                                         
   2 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))                                     
                                                                                         
Note                                                                                     
-----                                                                                    
   - dynamic statistics used: dynamic sampling (level=AUTO)                              
   - 1 Sql Plan Directive used for this statement       

This can continue for a long time, with the SPD in USABLE state and dynamic sampling compensating for the missing statistics, but at the cost of additional work at hard parse time.

SQL> exec dbms_spd.flush_sql_plan_directive;

PL/SQL procedure successfully completed.

SQL> select created,state, extract(notes,'/spd_note/internal_state/text()') internal_state, extract(notes,'/spd_note/spd_text/text()') spd_text from dba_sql_plan_directives where directive_id in (select directive_id from dba_sql_plan_dir_objects where owner=user) and type='DYNAMIC_SAMPLING' order by last_used;

    CREATED     STATE    INTERNAL_STATE                       SPD_TEXT 
___________ _________ _________________ ______________________________ 
20:52:11    USABLE    PERMANENT         {E(DEMO.DEMO)[A, B, C, D]}     
20:52:16    USABLE    MISSING_STATS     {EC(DEMO.DEMO)[A, B, C, D]}    

The status will not change until statistics gathering occurs.

SQL> exec dbms_stats.set_table_prefs(user,'DEMO','AUTO_STAT_EXTENSIONS','on');

PL/SQL procedure successfully completed.

In the same idea as adaptive statistics not being enabled by default, the automatic creation of statistics extensions is not there by default either. I enable it for this table only here but, as with many dbms_stats operations, you can do that at schema, database or global level. Usually, you do it when creating the table, or simply at database level because it works in pair with adaptive statistics. In this demo I waited on purpose, to show that even if the decision of going to HAS_STATS or MISSING_STATS depends on the possibility of extended statistics creation, this decision is made without looking at the dbms_stats preference.

SQL> exec dbms_stats.gather_table_stats(user,'DEMO', options=>'gather auto');

PL/SQL procedure successfully completed.

Note that I’m gathering the statistics like the automatic job does: GATHER AUTO. As I did not change any rows, the table statistics are not stale but the new directive in MISSING_STATS tells DBMS_STATS that there’s a reason to re-gather the statistics.

And if you look at the statistics extensions now, there’s a new statistics extension on the (A,B,C,D) column group: just look at USER_STAT_EXTENSIONS.
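
A quick way to check it (the extension name is system-generated):

SQL> select extension_name, extension from user_stat_extensions where table_name='DEMO';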

SQL> select count(*) c4 from demo where a=0 and b=0 and c=0 and d=0;

    C4 
______ 
   500 

SQL> select * from dbms_xplan.display_cursor(format=>'allstats last');

                                                                       PLAN_TABLE_OUTPUT 
________________________________________________________________________________________ 
SQL_ID  g08m3qrmw7mgn, child number 0                                                    
-------------------------------------                                                    
select count(*) c4 from demo where a=0 and b=0 and c=0 and d=0                           
                                                                                         
Plan hash value: 2180342005                                                              
                                                                                         
-------------------------------------------------------------------------------------    
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |    
-------------------------------------------------------------------------------------    
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.01 |     253 |    
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.01 |     253 |    
|*  2 |   TABLE ACCESS FULL| DEMO |      1 |    500 |    500 |00:00:00.01 |     253 |    
-------------------------------------------------------------------------------------    
                                                                                         
Predicate Information (identified by operation id):                                      
---------------------------------------------------                                      
                                                                                         
   2 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))                                     
                                                                                         
Note                                                                                     
-----                                                                                    
   - dynamic statistics used: dynamic sampling (level=AUTO)                              
   - 1 Sql Plan Directive used for this statement     

You may think that no dynamic sampling is needed anymore, but the Adaptive Statistics mechanism is still in the learning phase: the SPD is still USABLE and the next parse will verify whether MISSING_STATS can be superseded by HAS_STATS. This is what happened here.

SQL> exec dbms_spd.flush_sql_plan_directive;

PL/SQL procedure successfully completed.

SQL> select created,state, extract(notes,'/spd_note/internal_state/text()') internal_state, extract(notes,'/spd_note/spd_text/text()') spd_text from dba_sql_plan_directives where directive_id in (select directive_id from dba_sql_plan_dir_objects where owner=user) and type='DYNAMIC_SAMPLING' order by last_used;

    CREATED         STATE    INTERNAL_STATE                       SPD_TEXT 
___________ _____________ _________________ ______________________________ 
20:52:11    USABLE        PERMANENT         {E(DEMO.DEMO)[A, B, C, D]}     
20:52:16    SUPERSEDED    HAS_STATS         {EC(DEMO.DEMO)[A, B, C, D]}    

Here, SUPERSEDED means no more dynamic sampling for predicates with simple column equality on A,B,C,D because it HAS_STATS.

In the past, I mean before 12c, I often recommended enabling dynamic sampling (with optimizer_dynamic_sampling >= 4) on data warehouses, or for sessions running complex ad-hoc reporting queries. And I recommended no dynamic sampling for OLTP, creating manual statistics extensions only when required, because there we can expect less complex queries and hard parse time may be a problem.

Now, in the same idea, I would rather recommend enabling adaptive statistics because the optimization is finer grained. As we see here, only one kind of predicate keeps doing dynamic sampling, and this dynamic sampling is the “adaptive” one, estimating not only single-table cardinality but joins and aggregations as well. This is the USABLE (PERMANENT) directive. The other one did it only temporarily, until statistics extensions were automatically created, and is now SUPERSEDED with HAS_STATS.

In summary, the MISSING_STATS state is seen only when, given the simple column equalities, there are statistics that could exist but are missing. And HAS_STATS means that all the statistics the optimizer can use for this predicate are available and no more can be gathered. Each directive goes through HAS_STATS during the learning phase. Then it either stays in HAS_STATS or switches definitively to the PERMANENT state when, in HAS_STATS, a misestimate is encountered again.

The article Oracle SPD status on two learning paths first appeared on Blog dbi services.

Password rolling change before Oracle 21c


By Franck Pachot

.
You may have read about Gradual Password Rollover usage from Mouhamadou Diaw and about some internals from Rodrigo Jorge. But it works only in 21c, which is, for the moment, cloud-only: Autonomous Database and DBaaS (where I’ve encountered some problems, apparently because of a bug, when using SQL*Net native encryption). And your production is probably not on 21c yet anyway. However, here is how you can achieve a similar goal in 12c, 18c or 19c: being able to connect with two passwords during the time window where you change the password, in a rolling fashion, in the application server configuration.
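For reference, in 21c the native feature is just a profile limit. A minimal sketch (profile and user names are examples; PASSWORD_ROLLOVER_TIME is in days):

SQL> -- 21c only: accept both the old and the new password for 1 day after the change
SQL> alter profile APP_USERS limit password_rollover_time 1;
SQL> alter user APP1 identified by "New Password 2021";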

Proxy User

If your application still connects with the application owner, you are doing it wrong. Even when it needs to be connected to the application schema by default, and even when you cannot do an "alter session set current_schema", you don't have to use this user for authentication. And this is really easy with proxy users. Consider the application owner as a schema, not as a user to connect with.

My application is in schema DEMO and I’ll not use DEMO credentials. You can set an impossible password or, better, in 18c, set no password at all. I’ll use a proxy user authentication to connect to this DEMO user:


19:28:49 DEMO@atp1_tp> grant create session to APP2020 identified by "2020 was a really Bad Year!";
Grant succeeded.

19:28:50 DEMO@atp1_tp> alter user DEMO grant connect through APP2020;
User DEMO altered.

The APP2020 user is the one I’ll use. I named it 2020 because I want to change the credentials every year and, as I don’t have the gradual password rollover feature, this means changing the user to connect with.


19:28:50 DEMO@atp1_tp> connect APP2020/"2020 was a really Bad Year!"@atp1_tp
Connected.
19:28:52 APP2020@atp1_tp> show user
USER is "APP2020"

This user can connect as usual, as it has the CREATE SESSION privilege. There is a way to prevent this and allow proxy-only connections (PROXY ONLY CONNECT), but it is unfortunately not documented (Miguel Anjo has written about this), so it is better not to use it.

However, here is the most important part:


19:28:52 APP2020@atp1_tp> connect APP2020[DEMO]/"2020 was a really Bad Year!"@atp1_tp
Connected.

19:28:53 DEMO@atp1_tp> show user
USER is "DEMO"

With a proxy connection, in addition to the proxy user credentials, I mention the final user I want to connect to through this proxy user. Now I’m in the exact same state as if I had connected with the DEMO user.

No authentication


19:28:54 ADMIN@atp1_tp> alter user DEMO no authentication;
User DEMO altered.

As we don’t authenticate with this user anymore (and once I’m sure no application uses its password), the best is to set it to NO AUTHENTICATION.

New proxy user

Now that the application has been using this APP2020 user for months, I want to change the password. I’ll add a new proxy user for that:


19:28:54 ADMIN@atp1_tp> show user
USER is "ADMIN"

19:28:53 ADMIN@atp1_tp> grant create session to APP2021 identified by "Best Hopes for 2021 :)";
Grant succeeded.

19:28:54 ADMIN@atp1_tp> alter user DEMO grant connect through APP2021;
User DEMO altered.

Here I have another proxy user that can be used to connect to DEMO, in addition to the existing one.


19:28:54 ADMIN@atp1_tp> connect APP2020[DEMO]/"2020 was a really Bad Year!"@atp1_tp
Connected.

19:28:55 DEMO@atp1_tp> show user
USER is "DEMO"

19:28:55 DEMO@atp1_tp> connect APP2021[DEMO]/"Best Hopes for 2021 :)"@atp1_tp
Connected.

19:28:56 DEMO@atp1_tp> show user
USER is "DEMO"

During this time, I can use both credentials. This gives me enough time to change all application server configurations one by one, without any downtime for the application.

Lock previous account


19:30:00 ADMIN@atp1_tp> 
 select username,account_status,last_login,password_change_date,proxy_only_connect 
 from dba_users where username like 'APP____';

   USERNAME    ACCOUNT_STATUS                                       LAST_LOGIN    PASSWORD_CHANGE_DATE    PROXY_ONLY_CONNECT
___________ _________________ ________________________________________________ _______________________ _____________________
APP2020     OPEN              27-DEC-20 07.28.55.000000000 PM EUROPE/ZURICH    27-DEC-20               N
APP2021     OPEN              27-DEC-20 07.28.56.000000000 PM EUROPE/ZURICH    27-DEC-20               N

After a while, I can validate that the old user is not used anymore. If you have a connection recycling duration in the connection pool (you should), you can rely on the last login.


19:30:00 ADMIN@atp1_tp> alter user APP2020 account lock;
User APP2020 altered.

Before dropping it, just lock the account: it is easier to keep track of it, and to unlock it quickly if anyone encounters a problem.


19:30:00 ADMIN@atp1_tp> connect APP2020[DEMO]/"2020 was a really Bad Year!"@atp1_tp
Error starting at line : 30 File @ /home/opc/demo/tmp/proxy_to_rollover.sql
In command -
  connect ...
Error report -
Connection Failed
  USER          = APP2020[DEMO]
  URL           = jdbc:oracle:thin:@atp1_tp
  Error Message = ORA-28000: The account is locked.
Commit

If someone tries to connect with the old password, they will know that the account is locked.


19:30:01 @> connect APP2021[DEMO]/"Best Hopes for 2021 :)"@atp1_tp
Connected.
19:30:02 DEMO@atp1_tp> show user
USER is "DEMO"

Once the old user is locked, only the new one is able to connect, with the new credentials. As this operation can be done with no application downtime, you can do it frequently. From a security point of view, you must change passwords frequently. For end-user passwords, you can set a lifetime and a grace period. But not for system users, as the warning may not be caught by the application. Better to change them proactively.
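For end-user accounts, the lifetime and grace period mentioned above are profile limits. A minimal sketch (profile name and values, in days, are examples):

SQL> create profile END_USERS limit password_life_time 90 password_grace_time 7;
SQL> alter user SCOTT profile END_USERS;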

The article Password rolling change before Oracle 21c first appeared on Blog dbi services.


Optimizer Statistics Gathering – pending and history


By Franck Pachot

.
This was initially posted on the CERN Database blog on Wednesday, 12 September 2018, where it seems to be lost. Here is a copy, thanks to web.archive.org.

Demo table

I create a table for the demo. The CTAS gathers statistics (12c online statistics gathering) with one row and then I insert more rows:



10:33:56 SQL> create table DEMO as select rownum n from dual;
Table DEMO created.
10:33:56 SQL> insert into DEMO select rownum n from xmltable('1 to 41');
41 rows inserted.
10:33:56 SQL> commit;
Commit complete.

The statistics are stale: the optimizer estimates 1 row (E-Rows) but the query returns 42 actual rows (A-Rows):



10:33:56 SQL> select /*+ gather_plan_statistics */ count(*) from DEMO;

  COUNT(*) 
  -------- 
        42 

10:33:57 SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +rows +rowstats last'));

PLAN_TABLE_OUTPUT                                                
-----------------                                                
EXPLAINED SQL STATEMENT:                                         
------------------------                                         
select /*+ gather_plan_statistics */ count(*) from DEMO          
                                                                 
Plan hash value: 2180342005                                      
                                                                 
--------------------------------------------------------------   
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   
--------------------------------------------------------------   
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |   
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |   
|   2 |   TABLE ACCESS FULL| DEMO |      1 |      1 |     42 |   
--------------------------------------------------------------   

Pending Statistics

Here we are: I want to gather statistics on this table, but I will lower the risk by not publishing them immediately. The current statistics preference is set to PUBLISH=TRUE:



10:33:58 SQL> select num_rows,cast(last_analyzed as timestamp),dbms_stats.get_prefs('PUBLISH',owner,table_name) from dba_tab_statistics where owner='DEMO' and table_name in ('DEMO');

  NUM_ROWS CAST(LAST_ANALYZEDASTIMESTAMP)    DBMS_STATS.GET_PREFS('PUBLISH',OWNER,TABLE_NAME)   
  -------- ------------------------------    ------------------------------------------------   
         1 12-SEP-18 10.33.56.000000000 AM   TRUE     
                                          

I set it to FALSE:



10:33:59 SQL> exec dbms_stats.set_table_prefs('DEMO','DEMO','publish','false');

PL/SQL procedure successfully completed.

10:34:00 SQL> select num_rows,cast(last_analyzed as timestamp),dbms_stats.get_prefs('PUBLISH',owner,table_name) from dba_tab_statistics where owner='DEMO' and table_name in ('DEMO');

  NUM_ROWS CAST(LAST_ANALYZEDASTIMESTAMP)    DBMS_STATS.GET_PREFS('PUBLISH',OWNER,TABLE_NAME)   
  -------- ------------------------------    ------------------------------------------------   
         1 12-SEP-18 10.33.56.000000000 AM   FALSE  
                                            

I’m now gathering stats as I want to:



10:34:01 SQL> exec dbms_stats.gather_table_stats('DEMO','DEMO');
PL/SQL procedure successfully completed.

Test Pending Statistics

They are not published. But to test my queries with those new stats, I can set my session to use pending statistics:



10:34:02 SQL> alter session set optimizer_use_pending_statistics=true;
Session altered.

Running my query again, I can see accurate estimations (E-Rows = A-Rows):



10:34:03 SQL> select /*+ gather_plan_statistics */ count(*) from DEMO;

  COUNT(*) 
  -------- 
        42 

10:34:04 SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +rows +rowstats last'));

PLAN_TABLE_OUTPUT                                                
-----------------                                                
EXPLAINED SQL STATEMENT:                                         
------------------------                                         
select /*+ gather_plan_statistics */ count(*) from DEMO          
                                                                 
Plan hash value: 2180342005                                      
                                                                 
--------------------------------------------------------------   
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   
--------------------------------------------------------------   
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |   
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |   
|   2 |   TABLE ACCESS FULL| DEMO |      1 |     42 |     42 |   
--------------------------------------------------------------   

The published statistics still show 1 row:



10:34:05 SQL> select num_rows,cast(last_analyzed as timestamp),dbms_stats.get_prefs('PUBLISH',owner,table_name) from dba_tab_statistics where owner='DEMO' and table_name in ('DEMO');

  NUM_ROWS CAST(LAST_ANALYZEDASTIMESTAMP)    DBMS_STATS.GET_PREFS('PUBLISH',OWNER,TABLE_NAME)   
  -------- ------------------------------    ------------------------------------------------   
         1 12-SEP-18 10.33.56.000000000 AM   FALSE            
                                  

But I can query the pending ones before publishing them:



10:34:05 SQL> c/dba_tab_statistics/dba_tab_pending_stats
  1* select num_rows,cast(last_analyzed as timestamp),dbms_stats.get_prefs('PUBLISH',owner,table_name) from dba_tab_pending_stats where owner='DEMO' and table_name in ('DEMO');
10:34:05 SQL> /

  NUM_ROWS CAST(LAST_ANALYZEDASTIMESTAMP)    DBMS_STATS.GET_PREFS('PUBLISH',OWNER,TABLE_NAME)   
  -------- ------------------------------    ------------------------------------------------   
        42 12-SEP-18 10.34.01.000000000 AM   FALSE          
                                    

I’ve finished my test with pending statistics:



10:34:05 SQL> alter session set optimizer_use_pending_statistics=false;
Session altered.

Note that if you have Real Application Testing, you can use SQL Performance Analyzer to test the pending statistics on a whole SQL Tuning Set representing the critical queries of your application. Of course, the more you test there, the better it is.
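As an illustration, here is a minimal sketch of such a SQL Performance Analyzer run, assuming a SQL Tuning Set named MY_STS was already captured (task and execution names are arbitrary):

SQL> variable t varchar2(64)
SQL> exec :t := dbms_sqlpa.create_analysis_task(sqlset_name=>'MY_STS');
SQL> exec dbms_sqlpa.execute_analysis_task(:t,execution_type=>'TEST EXECUTE',execution_name=>'published_stats');
SQL> alter session set optimizer_use_pending_statistics=true;
SQL> exec dbms_sqlpa.execute_analysis_task(:t,execution_type=>'TEST EXECUTE',execution_name=>'pending_stats');
SQL> alter session set optimizer_use_pending_statistics=false;
SQL> exec dbms_sqlpa.execute_analysis_task(:t,execution_type=>'COMPARE PERFORMANCE');
SQL> select dbms_sqlpa.report_analysis_task(:t,'text') from dual;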

Delete Pending Statistics

Now, let’s say my tests show that the new statistics are not good. I can simply delete the pending statistics:



10:34:06 SQL> exec dbms_stats.delete_pending_stats('DEMO','DEMO');
PL/SQL procedure successfully completed.

Then all queries are still using the previous statistics:



10:34:07 SQL> show parameter pending
NAME                             TYPE    VALUE
-------------------------------- ------- -----
optimizer_use_pending_statistics boolean FALSE

10:34:07 SQL> select /*+ gather_plan_statistics */ count(*) from DEMO;

  COUNT(*) 
  -------- 
        42 

10:34:08 SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +rows +rowstats last'));

PLAN_TABLE_OUTPUT                                                
-----------------                                                
EXPLAINED SQL STATEMENT:                                         
------------------------                                         
select /*+ gather_plan_statistics */ count(*) from DEMO          
                                                                 
Plan hash value: 2180342005                                      
                                                                 
--------------------------------------------------------------   
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   
--------------------------------------------------------------   
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |   
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |   
|   2 |   TABLE ACCESS FULL| DEMO |      1 |      1 |     42 |   
--------------------------------------------------------------   

Accept Pending Statistics

Now I’ll show the second case where my tests show that the new statistics gathering is ok. I gather statistics again:



10:34:09 SQL> exec dbms_stats.gather_table_stats('DEMO','DEMO');
PL/SQL procedure successfully completed.

10:34:09 SQL> alter session set optimizer_use_pending_statistics=true;
Session altered.

10:34:11 SQL> select /*+ gather_plan_statistics */ count(*) from DEMO;

  COUNT(*) 
  -------- 
        42 


10:34:12 SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +rows +rowstats last'));

PLAN_TABLE_OUTPUT                                                
-----------------                                                
EXPLAINED SQL STATEMENT:                                         
------------------------                                         
select /*+ gather_plan_statistics */ count(*) from DEMO          
                                                                 
Plan hash value: 2180342005                                      
                                                                 
--------------------------------------------------------------   
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   
--------------------------------------------------------------   
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |   
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |   
|   2 |   TABLE ACCESS FULL| DEMO |      1 |     42 |     42 |   
--------------------------------------------------------------   
                                                                 
10:34:12 SQL> alter session set optimizer_use_pending_statistics=false;
Session altered.

When I’m ok with the new statistics, I can publish them so that other sessions can see them. As doing this in production is probably the fix for a critical problem, I want it to take effect immediately, invalidating all cursors:



10:34:13 SQL> exec dbms_stats.publish_pending_stats('DEMO','DEMO',no_invalidate=>false);
PL/SQL procedure successfully completed.

The default NO_INVALIDATE value is probably best avoided in those cases, because you want to see the side effects, if any, as soon as possible, and not within a random window up to 5 hours later, when you have left the office.
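You can check the current default with GET_PREFS (a sketch; the default, DBMS_STATS.AUTO_INVALIDATE, lets Oracle spread cursor invalidations over that rolling window):

SQL> select dbms_stats.get_prefs('NO_INVALIDATE') from dual;

I set back the table preference to PUBLISH=TRUE and check that the new statistics are visible in DBA_TAB_STATISTICS (and no more in DBA_TAB_PENDING_STATS):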



10:34:14 SQL> exec dbms_stats.set_table_prefs('DEMO','DEMO','publish','true');
PL/SQL procedure successfully completed.

10:34:15 SQL> select num_rows,cast(last_analyzed as timestamp),dbms_stats.get_prefs('PUBLISH',owner,table_name) from dba_tab_statistics where owner='DEMO' and table_name in ('DEMO');

  NUM_ROWS CAST(LAST_ANALYZEDASTIMESTAMP)    DBMS_STATS.GET_PREFS('PUBLISH',OWNER,TABLE_NAME)   
  -------- ------------------------------    ------------------------------------------------   
        42 12-SEP-18 10.34.09.000000000 AM   TRUE                                               


10:34:15 SQL> c/dba_tab_statistics/dba_tab_pending_stats
  1* select num_rows,cast(last_analyzed as timestamp),dbms_stats.get_prefs('PUBLISH',owner,table_name) from dba_tab_pending_stats where owner='DEMO' and table_name in ('DEMO');
10:34:15 SQL> /

no rows selected

Report Differences

Then what if a critical regression is observed later? I still have the possibility to revert to the old statistics. First, I can check in detail what has changed:



10:34:16 SQL> select report from table(dbms_stats.diff_table_stats_in_history('DEMO','DEMO',sysdate-1,sysdate,0));

REPORT
------

###############################################################################

STATISTICS DIFFERENCE REPORT FOR:
.................................

TABLE         : DEMO
OWNER         : DEMO
SOURCE A      : Statistics as of 11-SEP-18 10.34.16.000000 AM EUROPE/ZURICH
SOURCE B      : Statistics as of 12-SEP-18 10.34.16.000000 AM EUROPE/ZURICH
PCTTHRESHOLD  : 0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


TABLE / (SUB)PARTITION STATISTICS DIFFERENCE:
.............................................

OBJECTNAME                  TYP SRC ROWS       BLOCKS     ROWLEN     SAMPSIZE
...............................................................................

DEMO                        T   A   1          4          3          1
                                B   42         8          3          42
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

COLUMN STATISTICS DIFFERENCE:
.............................

COLUMN_NAME     SRC NDV     DENSITY    HIST NULLS   LEN  MIN   MAX   SAMPSIZ
...............................................................................

N               A   1       1          NO   0       3    C102  C102  1
                B   41      .024390243 NO   0       3    C102  C12A  42
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


NO DIFFERENCE IN INDEX / (SUB)PARTITION STATISTICS
###############################################################################

Restore Previous Statistics

If nothing is obvious and the regression is more critical than the original problem, I still have the possibility to revert to the old statistics:



10:34:17 SQL> exec dbms_stats.restore_table_stats('DEMO','DEMO',sysdate-1,no_invalidate=>false);
PL/SQL procedure successfully completed.

Again, invalidating all cursors immediately is probably required, as I am solving a critical problem here. Immediately, the same query uses the old statistics:



10:34:17 SQL> select /*+ gather_plan_statistics */ count(*) from DEMO;

  COUNT(*) 
  -------- 
        42 


10:34:17 SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +rows +rowstats last'));

PLAN_TABLE_OUTPUT                                                
-----------------                                                
EXPLAINED SQL STATEMENT:                                         
------------------------                                         
select /*+ gather_plan_statistics */ count(*) from DEMO          
                                                                 
Plan hash value: 2180342005                                      
                                                                 
--------------------------------------------------------------   
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   
--------------------------------------------------------------   
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |   
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |   
|   2 |   TABLE ACCESS FULL| DEMO |      1 |      1 |     42 |
--------------------------------------------------------------   

If I want to see what happened recently on this table, I can query the history of operations (you can replace my ugly regexp_replace with XQuery):



10:34:18 SQL> select end_time,end_time-start_time,operation,target,regexp_replace(regexp_replace(notes,'" val="','=>'),'(||)',' '),status from DBA_OPTSTAT_OPERATIONS where regexp_like(target,'"?'||'DEMO'||'"?."?'||'DEMO'||'"?') order by end_time desc fetch first 10 rows only;

END_TIME                                 END_TIME-START_TIME   OPERATION             TARGET          REGEXP_REPLACE(REGEXP_REPLACE(NOTES,'"VAL="','=>'),'(||)','')                                                                                                                                                                                                                                         STATUS      
--------                                 -------------------   ---------             ------          ----------------------------------------------------------------------------------------------                                                                                                                                                                                                                                         ------      
12-SEP-18 10.34.17.718800000 AM +02:00   +00 00:00:00.017215   restore_table_stats   "DEMO"."DEMO"     as_of_timestamp=>09-11-2018 10:34:17  force=>FALSE  no_invalidate=>FALSE  ownname=>DEMO  restore_cluster_index=>FALSE  tabname=>DEMO                                                                                                                                                                                                 COMPLETED   
12-SEP-18 10.34.13.262234000 AM +02:00   +00 00:00:00.010021   restore_table_stats   "DEMO"."DEMO"     as_of_timestamp=>11-30-3000 01:00:00  force=>FALSE  no_invalidate=>FALSE  ownname=>DEMO  restore_cluster_index=>FALSE  tabname=>DEMO                                                                                                                                                                                                 COMPLETED   
12-SEP-18 10.34.09.974873000 AM +02:00   +00 00:00:00.032513   gather_table_stats    "DEMO"."DEMO"     block_sample=>FALSE  cascade=>NULL  concurrent=>FALSE  degree=>NULL  estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE  force=>FALSE  granularity=>AUTO  method_opt=>FOR ALL COLUMNS SIZE AUTO  no_invalidate=>NULL  ownname=>DEMO  partname=>  reporting_mode=>FALSE  statid=>  statown=>  stattab=>  stattype=>DATA  tabname=>DEMO     COMPLETED   
12-SEP-18 10.34.01.194735000 AM +02:00   +00 00:00:00.052087   gather_table_stats    "DEMO"."DEMO"     block_sample=>FALSE  cascade=>NULL  concurrent=>FALSE  degree=>NULL  estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE  force=>FALSE  granularity=>AUTO  method_opt=>FOR ALL COLUMNS SIZE AUTO  no_invalidate=>NULL  ownname=>DEMO  partname=>  reporting_mode=>FALSE  statid=>  statown=>  stattab=>  stattype=>DATA  tabname=>DEMO     COMPLETED   

We can see here that the publishing of pending stats was actually a restore of stats as of Nov 30th of year 3000. This is probably because the pending status is hardcoded as a date in the future. Does that mean that all pending stats will be autonomously published at that time? I don’t think we have to worry about Y3K bugs for the moment…

This is the full recipe I’ve given to an application owner who needs to gather statistics on his tables on a highly critical database. With it, he has all the information needed to limit the risks. My recommendation is to prepare this fallback scenario before doing any change, and to test it as I did, on a test environment, in order to be ready to react to any unexpected side effect. Be careful: pending statistics do not work correctly with system statistics and can have very nasty side effects (Bug 21326597), but restoring from history is possible.
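To summarize the whole cycle in one place (a sketch using the demo names from above; adapt owner and table names):

SQL> -- prepare: gather into pending stats only
SQL> exec dbms_stats.set_table_prefs('DEMO','DEMO','PUBLISH','FALSE');
SQL> exec dbms_stats.gather_table_stats('DEMO','DEMO');
SQL> -- test the pending statistics in your session
SQL> alter session set optimizer_use_pending_statistics=true;
SQL> -- ... run the critical queries and check the execution plans ...
SQL> alter session set optimizer_use_pending_statistics=false;
SQL> -- publish if ok (or delete with dbms_stats.delete_pending_stats)
SQL> exec dbms_stats.publish_pending_stats('DEMO','DEMO',no_invalidate=>false);
SQL> exec dbms_stats.set_table_prefs('DEMO','DEMO','PUBLISH','TRUE');
SQL> -- fallback if a regression shows up later
SQL> -- exec dbms_stats.restore_table_stats('DEMO','DEMO',sysdate-1,no_invalidate=>false);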

The article Optimizer Statistics Gathering – pending and history first appeared on Blog dbi services.

How to quickly download the new bunch of 21c Oracle Database documentation?


Last month, Oracle released the new 21c version of the database documentation.
At that time, I was looking for a quick way to get all the books of this so-called 21c Innovation Release.

I remembered that I had used a script to get them all in one run.
A quick Google search reminded me that it was the one from Christian Antognini, which hasn’t been refreshed for a while.

In this blog, I will provide you with a refreshed script to download all these recent Oracle database docs.
By default, the script will download 109 files and arrange them under the 9 folders below:
– Install and Upgrade
– Administration
– Development
– Security
– Performance
– Clusterware, RAC and Data Guard
– Data Warehousing, ML and OLAP
– Spatial and Graph
– Distributed Data

This will consume about 310 MB of space.

Both Mac and Windows versions are provided at the bottom of this page.

– On Mac, open a terminal window and run <get-21c-docs-on-mac.sh>
– On Windows, open a command prompt and call <get-21c-docs-on-win.cmd>

Both scripts need a working "wget" tool to retrieve the files from https://docs.oracle.com.
"wget" is a small utility from the GNU project.
If you haven’t installed this tool yet:

– on Mac, one way to get it is to use "brew", an open source package manager (more details on brew.sh):

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install wget
chmod +x get-21c-docs-on-mac.sh && ./get-21c-docs-on-mac.sh

– on Windows, you can also get a recent "wget" binary from here.
In the Windows script, replace "C:\Downloaded Products\wget\wget" with the full path to your wget binary and call the script:

get-21c-docs-on-win.cmd

I hope this will save you a bit of time. Feel free to let me know your comments.

 

get-21c-docs-on-mac.sh


#!/bin/sh
mkdir "Install and Upgrade"
mkdir "Administration"
mkdir "Development"
mkdir "Security"
mkdir "Performance"
mkdir "Clusterware, RAC and Data Guard"
mkdir "Data Warehousing, ML and OLAP"
mkdir "Spatial and Graph"
mkdir "Distributed Data"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/nfcon/pdf/learning-database-new-features.pdf -O "Database New Features Guide.pdf"
cd "Install and Upgrade"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dblic/database-licensing-information-user-manual.pdf -O "Database Licensing Information User Manual.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/rnrdm/database-release-notes.pdf -O "Database Release Notes.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/upgrd/database-upgrade-guide.pdf -O "Database Upgrade Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/odbcr/odbc-driver-release-notes.pdf -O "ODBC Driver Release Notes.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sqprn/sqlplus-release-notes.pdf -O "SQL Plus Release Notes.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/comsc/database-sample-schemas.pdf -O "Database Sample Schemas.pdf"
cd ../"Administration"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tdppt/2-day-performance-tuning-guide.pdf -O "2 Day + Performance Tuning Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/admqs/2-day-dba.pdf -O "2 Day DBA.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/admin/database-administrators-guide.pdf -O "Database Administrator's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/cncpt/database-concepts.pdf -O "Database Concepts.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/errmg/database-error-messages.pdf -O "Database Error Messages.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/refrn/database-reference.pdf -O "Database Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sutil/database-utilities.pdf -O "Database Utilities.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/multi/multitenant-administrators-guide.pdf -O "Multitenant Administrator's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/rcmrf/database-backup-and-recovery-reference.pdf -O "Database Backup and Recovery Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/bradv/database-backup-and-recovery-users-guide.pdf -O "Database Backup and Recovery User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/netag/database-net-services-administrators-guide.pdf -O "Net Services Administrator's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/netrf/database-net-services-reference.pdf -O "Net Services Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/ratug/testing-guide.pdf -O "Testing Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/ostmg/automatic-storage-management-administrators-guide.pdf -O "Automatic Storage Management Administrator's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/acfsg/automatic-storage-management-cluster-file-system-administrators-guide.pdf -O "Automatic Storage Management Cluster File System Administrator's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/unxar/administrators-reference-linux-and-unix-system-based-operating-systems.pdf -O "Administrator's Reference for Linux and UNIX-Based Operating Systems.pdf"
cd ../"Development"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tdpjd/2-day-java-developers-guide.pdf -O "2 Day + Java Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tdddg/2-day-developers-guide.pdf -O "2 Day Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adfns/database-development-guide.pdf -O "Database Development Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/addci/data-cartridge-developers-guide.pdf -O "Data Cartridge Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lnpls/database-pl-sql-language-reference.pdf -O "Database PLSQL Language Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/arpls/database-pl-sql-packages-and-types-reference.pdf -O "Database PLSQL Packages and Types Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/jjdev/java-developers-guide.pdf -O "Java Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/jjdbc/jdbc-developers-guide.pdf -O "JDBC Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adjsn/json-developers-guide.pdf -O "JSON Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adobj/object-relational-developers-guide.pdf -O "Object-Relational Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lncpp/oracle-c-call-interface-programmers-guide.pdf -O "Oracle C++ Call Interface Programmer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lnoci/oracle-call-interface-programmers-guide.pdf -O "Oracle Call Interface Programmer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lnpcc/c-c-programmers-guide.pdf -O "Pro C C++ Programmer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lnpcb/cobol-programmers-guide.pdf -O "Pro COBOL Programmer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adlob/securefiles-and-large-objects-developers-guide.pdf -O "SecureFiles and Large Objects Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sqlrf/sql-language-reference.pdf -O "SQL Language Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sqprn/sqlplus-release-notes.pdf -O "SQL Plus Release Notes.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sqpug/sqlplus-users-guide-and-reference.pdf -O "SQL Plus User's Guide and Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/ccapp/text-application-developers-guide.pdf -O "Text Application Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/ccref/text-reference.pdf -O "Text Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/jjucp/universal-connection-pool-developers-guide.pdf -O "Universal Connection Pool Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adwsm/workspace-manager-developers-guide.pdf -O "Workspace Manager Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/caxml/xml-c-api-reference.pdf -O "XML C API Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/cpxml/xml-c-api-reference.pdf -O "XML C++ API Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adxdb/xml-db-developers-guide.pdf -O "XML DB Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adxdk/xml-developers-kit-programmers-guide.pdf -O "XML Developer's Kit Programmer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/nlspg/database-globalization-support-guide.pdf -O "Database Globalization Support Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/pccrn/c-c-release-notes.pdf -O "Pro C C++ Release Notes.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/pcbrn/cobol-release-notes.pdf -O "Pro COBOL Release Notes.pdf"
cd ../"Security"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dbseg/database-security-guide.pdf -O "Database Security Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dvadm/database-vault-administrators-guide.pdf -O "Database Vault Administrator's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/olsag/label-security-administrators-guide.pdf -O "Label Security Administrator's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/rasad/real-application-security-administration-console-rasadm-users-guide.pdf -O "Real Application Security Administration Console (RASADM) User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dbfsg/real-application-security-administrators-and-developers-guide.pdf -O "Real Application Security Administrator's and Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dbimi/enterprise-user-security-administrators-guide.pdf -O "Enterprise User Security Administrator's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/asoag/advanced-security-guide.pdf -O "Advanced Security Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dmksb/oracle-data-masking-and-subsetting-users-guide.pdf -O "Data Masking and Subsetting User's Guide.pdf"
cd ../"Performance"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tdppt/2-day-performance-tuning-guide.pdf -O "Database 2 Day + Performance Tuning Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tgdba/database-performance-tuning-guide.pdf -O "Database Performance Tuning Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/inmem/database-memory-guide.pdf -O "Database In-Memory Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tgsql/sql-tuning-guide.pdf -O "SQL Tuning Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/vldbg/vldb-and-partitioning-guide.pdf -O "VLDB and Partitioning Guide.pdf"
cd ../"Clusterware, RAC and Data Guard"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/atnms/autonomous-health-framework-users-guide.pdf -O "Autonomous Health Framework User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/cwadd/clusterware-administration-and-deployment-guide.pdf -O "Clusterware Administration and Deployment Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/gsmug/global-data-services-concepts-and-administration-guide.pdf -O "Global Data Services Concepts and Administration Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/racad/real-application-clusters-administration-and-deployment-guide.pdf -O "Real Application Clusters Administration and Deployment Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dgbkr/data-guard-broker.pdf -O "Data Guard Broker.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sbydb/data-guard-concepts-and-administration.pdf -O "Data Guard Concepts and Administration.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/shard/using-oracle-sharding.pdf -O "Using Oracle Sharding.pdf"
cd ../"Data Warehousing, ML and OLAP"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dwhsg/database-data-warehousing-guide.pdf -O "Database Data Warehousing Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4r/1.5.1/oread/oracle-machine-learning-r-installation-and-administration-guide.pdf -O "Machine Learning for R Installation and Administration Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4r/1.5.1/omlrl/oracle-machine-learning-r-licensing-information-user-manual.pdf -O "Machine Learning for R Licensing Information User Manual.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4r/1.5.1/orern/oracle-machine-learning-r-release-notes.pdf -O "Machine Learning for R Release Notes.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4r/1.5.1/oreug/oracle-machine-learning-r-users-guide.pdf -O "Machine Learning for R User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4sql/21/dmapi/oracle-machine-learning-sql-api-guide.pdf -O "Machine Learning for SQL API Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4sql/21/dmcon/oracle-machine-learning-sql-concepts.pdf -O "Machine Learning for SQL Concepts.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4sql/21/dmprg/oracle-machine-learning-sql-users-guide.pdf -O "Machine Learning for SQL User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/oladm/olap-dml-reference.pdf -O "OLAP DML Reference.pdf"
# assumption: book code "olaxs" for the Expression Syntax Reference (the original line repeated the OLAP Java API guide URL)
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/olaxs/olap-expression-syntax-reference.pdf -O "OLAP Expression Syntax Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/olaap/olap-java-api-developers-guide.pdf -O "OLAP Java API Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/olaug/olap-users-guide.pdf -O "OLAP User's Guide.pdf"
cd ../"Spatial and Graph"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/spatl/spatial-developers-guide.pdf -O "Spatial Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/geors/spatial-georaster-developers-guide.pdf -O "Spatial GeoRaster Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/jimpv/spatial-map-visualization-developers-guide.pdf -O "Spatial Map Visualization Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/topol/spatial-topology-and-network-data-model-developers-guide.pdf -O "Spatial Topology and Network Data Model Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/property-graph/20.4/spgdg/oracle-graph-property-graph-developers-guide.pdf -O "Oracle Graph Property Graph Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/rdfrm/graph-developers-guide-rdf-graph.pdf -O "Graph Developer's Guide for RDF Graph.pdf"
cd ../"Distributed Data"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adque/database-transactional-event-queues-and-advanced-queuing-users-guide.pdf -O "Database Transactional Event Queues and Advanced Queuing User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/odbcr/odbc-driver-release-notes.pdf -O "ODBC Driver Release Notes.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/drdas/provider-drda-users-guide.pdf -O "Provider for DRDA User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/drdaa/sql-translation-and-migration-guide.pdf -O "SQL Translation and Migration Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/appci/database-gateway-appc-installation-and-configuration-guide-aix-5l-based-systems-64-bit-hp-ux-itanium-solaris-operating-system-sparc-64-bit-linux-x86-and-linux-x86-64.pdf -O "Database Gateway for APPC Installation and Configuration Guide for AIX 5L Based Systems (64-Bit), HP-UX Itanium, Solaris Operating System (SPARC 64-Bit), Linux x86, and Linux x86-64.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/appcw/database-gateway-appc-installation-and-configuration-guide-microsoft-windows.pdf -O "Database Gateway for APPC Installation and Configuration Guide for Microsoft Windows.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/appug/database-gateway-appc-users-guide.pdf -O "Database Gateway for APPC User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/drdag/database-gateway-drda-users-guide.pdf -O "Database Gateway for DRDA User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tginu/database-gateway-informix-users-guide.pdf -O "Database Gateway for Informix User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/odbcu/database-gateway-odbc-users-guide.pdf -O "Database Gateway for ODBC User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/gmswn/database-gateway-sql-server-users-guide.pdf -O "Database Gateway for SQL Server User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tgsyu/database-gateway-sybase-users-guide.pdf -O "Database Gateway for Sybase User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tgteu/database-gateway-teradata-users-guide.pdf -O "Database Gateway for Teradata User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/wsmqg/database-gateway-websphere-mq-installation-and-users-guide.pdf -O "Database Gateway for WebSphere MQ Installation and User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/otgis/database-gateway-installation-and-configuration-guide-aix-5l-based-systems-64-bit-hp-ux-itanium-solaris-operating-system-sparc-64-bit-linux-x86-and-linux-x86-64.pdf -O "Database Gateway Installation and Configuration Guide for AIX 5L Based Systems (64-Bit), HP-UX Itanium, Solaris Operating System (SPARC 64-Bit), Linux x86, and Linux x86-64.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/otgiw/database-gateway-installation-and-configuration-guide-microsoft-windows.pdf -O "Database Gateway Installation and Configuration Guide for Microsoft Windows.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/heter/heterogeneous-connectivity-users-guide.pdf -O "Heterogeneous Connectivity User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/xstrm/xstream-guide.pdf -O "XStream Guide.pdf"
cd ..

get-21c-docs-on-win.cmd

@echo off
mkdir "Install and Upgrade"
mkdir "Administration"
mkdir "Development"
mkdir "Security"
mkdir "Performance"
mkdir "Clusterware, RAC and Data Guard"
mkdir "Data Warehousing, ML and OLAP"
mkdir "Spatial and Graph"
mkdir "Distributed Data"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/nfcon/pdf/learning-database-new-features.pdf -O "Database New Features Guide.pdf"
cd "Install and Upgrade"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dblic/database-licensing-information-user-manual.pdf -O "Database Licensing Information User Manual.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/rnrdm/database-release-notes.pdf -O "Database Release Notes.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/upgrd/database-upgrade-guide.pdf -O "Database Upgrade Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/odbcr/odbc-driver-release-notes.pdf -O "ODBC Driver Release Notes.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sqprn/sqlplus-release-notes.pdf -O "SQL Plus Release Notes.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/comsc/database-sample-schemas.pdf -O "Database Sample Schemas.pdf"
cd ../"Administration"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tdppt/2-day-performance-tuning-guide.pdf -O "2 Day + Performance Tuning Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/admqs/2-day-dba.pdf -O "2 Day DBA.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/admin/database-administrators-guide.pdf -O "Database Administrator's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/cncpt/database-concepts.pdf -O "Database Concepts.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/errmg/database-error-messages.pdf -O "Database Error Messages.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/refrn/database-reference.pdf -O "Database Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sutil/database-utilities.pdf -O "Database Utilities.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/multi/multitenant-administrators-guide.pdf -O "Multitenant Administrator's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/rcmrf/database-backup-and-recovery-reference.pdf -O "Database Backup and Recovery Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/bradv/database-backup-and-recovery-users-guide.pdf -O "Database Backup and Recovery User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/netag/database-net-services-administrators-guide.pdf -O "Net Services Administrator's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/netrf/database-net-services-reference.pdf -O "Net Services Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/ratug/testing-guide.pdf -O "Testing Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/ostmg/automatic-storage-management-administrators-guide.pdf -O "Automatic Storage Management Administrator's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/acfsg/automatic-storage-management-cluster-file-system-administrators-guide.pdf -O "Automatic Storage Management Cluster File System Administrator's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/unxar/administrators-reference-linux-and-unix-system-based-operating-systems.pdf -O "Administrator's Reference for Linux and UNIX-Based Operating Systems.pdf"
cd ../"Development"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tdpjd/2-day-java-developers-guide.pdf -O "2 Day + Java Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tdddg/2-day-developers-guide.pdf -O "2 Day Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adfns/database-development-guide.pdf -O "Database Development Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/addci/data-cartridge-developers-guide.pdf -O "Data Cartridge Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lnpls/database-pl-sql-language-reference.pdf -O "Database PLSQL Language Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/arpls/database-pl-sql-packages-and-types-reference.pdf -O "Database PLSQL Packages and Types Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/jjdev/java-developers-guide.pdf -O "Java Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/jjdbc/jdbc-developers-guide.pdf -O "JDBC Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adjsn/json-developers-guide.pdf -O "JSON Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adobj/object-relational-developers-guide.pdf -O "Object-Relational Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lncpp/oracle-c-call-interface-programmers-guide.pdf -O "Oracle C++ Call Interface Programmer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lnoci/oracle-call-interface-programmers-guide.pdf -O "Oracle Call Interface Programmer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lnpcc/c-c-programmers-guide.pdf -O "Pro C C++ Programmer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lnpcb/cobol-programmers-guide.pdf -O "Pro COBOL Programmer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adlob/securefiles-and-large-objects-developers-guide.pdf -O "SecureFiles and Large Objects Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sqlrf/sql-language-reference.pdf -O "SQL Language Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sqprn/sqlplus-release-notes.pdf -O "SQL Plus Release Notes.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sqpug/sqlplus-users-guide-and-reference.pdf -O "SQL Plus User's Guide and Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/ccapp/text-application-developers-guide.pdf -O "Text Application Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/ccref/text-reference.pdf -O "Text Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/jjucp/universal-connection-pool-developers-guide.pdf -O "Universal Connection Pool Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adwsm/workspace-manager-developers-guide.pdf -O "Workspace Manager Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/caxml/xml-c-api-reference.pdf -O "XML C API Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/cpxml/xml-c-api-reference.pdf -O "XML C++ API Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adxdb/xml-db-developers-guide.pdf -O "XML DB Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adxdk/xml-developers-kit-programmers-guide.pdf -O "XML Developer's Kit Programmer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/nlspg/database-globalization-support-guide.pdf -O "Database Globalization Support Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/pccrn/c-c-release-notes.pdf -O "Pro C C++ Release Notes.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/pcbrn/cobol-release-notes.pdf -O "Pro COBOL Release Notes.pdf"
cd ../"Security"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dbseg/database-security-guide.pdf -O "Database Security Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dvadm/database-vault-administrators-guide.pdf -O "Database Vault Administrator's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/olsag/label-security-administrators-guide.pdf -O "Label Security Administrator's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/rasad/real-application-security-administration-console-rasadm-users-guide.pdf -O "Real Application Security Administration Console (RASADM) User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dbfsg/real-application-security-administrators-and-developers-guide.pdf -O "Real Application Security Administrator's and Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dbimi/enterprise-user-security-administrators-guide.pdf -O "Enterprise User Security Administrator's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/asoag/advanced-security-guide.pdf -O "Advanced Security Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dmksb/oracle-data-masking-and-subsetting-users-guide.pdf -O "Data Masking and Subsetting User's Guide.pdf"
cd ../"Performance"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tdppt/2-day-performance-tuning-guide.pdf -O "Database 2 Day + Performance Tuning Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tgdba/database-performance-tuning-guide.pdf -O "Database Performance Tuning Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/inmem/database-memory-guide.pdf -O "Database In-Memory Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tgsql/sql-tuning-guide.pdf -O "SQL Tuning Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/vldbg/vldb-and-partitioning-guide.pdf -O "VLDB and Partitioning Guide.pdf"
cd ../"Clusterware, RAC and Data Guard"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/atnms/autonomous-health-framework-users-guide.pdf -O "Autonomous Health Framework User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/cwadd/clusterware-administration-and-deployment-guide.pdf -O "Clusterware Administration and Deployment Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/gsmug/global-data-services-concepts-and-administration-guide.pdf -O "Global Data Services Concepts and Administration Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/racad/real-application-clusters-administration-and-deployment-guide.pdf -O "Real Application Clusters Administration and Deployment Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dgbkr/data-guard-broker.pdf -O "Data Guard Broker.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sbydb/data-guard-concepts-and-administration.pdf -O "Data Guard Concepts and Administration.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/shard/using-oracle-sharding.pdf -O "Using Oracle Sharding.pdf"
cd ../"Data Warehousing, ML and OLAP"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dwhsg/database-data-warehousing-guide.pdf -O "Database Data Warehousing Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4r/1.5.1/oread/oracle-machine-learning-r-installation-and-administration-guide.pdf -O "Machine Learning for R Installation and Administration Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4r/1.5.1/omlrl/oracle-machine-learning-r-licensing-information-user-manual.pdf -O "Machine Learning for R Licensing Information User Manual.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4r/1.5.1/orern/oracle-machine-learning-r-release-notes.pdf -O "Machine Learning for R Release Notes.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4r/1.5.1/oreug/oracle-machine-learning-r-users-guide.pdf -O "Machine Learning for R User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4sql/21/dmapi/oracle-machine-learning-sql-api-guide.pdf -O "Machine Learning for SQL API Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4sql/21/dmcon/oracle-machine-learning-sql-concepts.pdf -O "Machine Learning for SQL Concepts.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4sql/21/dmprg/oracle-machine-learning-sql-users-guide.pdf -O "Machine Learning for SQL User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/oladm/olap-dml-reference.pdf -O "OLAP DML Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/olaap/olap-java-api-developers-guide.pdf -O "OLAP Expression Syntax Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/olaap/olap-java-api-developers-guide.pdf -O "OLAP Java API Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/olaug/olap-users-guide.pdf -O "OLAP User's Guide.pdf"
cd ../"Spatial and Graph"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/spatl/spatial-developers-guide.pdf -O "Spatial Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/geors/spatial-georaster-developers-guide.pdf -O "Spatial GeoRaster Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/jimpv/spatial-map-visualization-developers-guide.pdf -O "Spatial Map Visualization Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/topol/spatial-topology-and-network-data-model-developers-guide.pdf -O "Spatial Topology and Network Data Model Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/property-graph/20.4/spgdg/oracle-graph-property-graph-developers-guide.pdf -O "Oracle Graph Property Graph Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/rdfrm/graph-developers-guide-rdf-graph.pdf -O "Graph Developer's Guide for RDF Graph.pdf"
cd ../"Distributed Data"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adque/database-transactional-event-queues-and-advanced-queuing-users-guide.pdf -O "Database Transactional Event Queues and Advanced Queuing User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/odbcr/odbc-driver-release-notes.pdf -O "ODBC Driver Release Notes.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/drdas/provider-drda-users-guide.pdf -O "Provider for DRDA User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/drdaa/sql-translation-and-migration-guide.pdf -O "SQL Translation and Migration Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/appci/database-gateway-appc-installation-and-configuration-guide-aix-5l-based-systems-64-bit-hp-ux-itanium-solaris-operating-system-sparc-64-bit-linux-x86-and-linux-x86-64.pdf -O "Database Gateway for APPC Installation and Configuration Guide for AIX 5L Based Systems (64-Bit), HP-UX Itanium, Solaris Operating System (SPARC 64-Bit), Linux x86, and Linux x86-64.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/appcw/database-gateway-appc-installation-and-configuration-guide-microsoft-windows.pdf -O "Database Gateway for APPC Installation and Configuration Guide for Microsoft Windows.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/appug/database-gateway-appc-users-guide.pdf -O "Database Gateway for APPC User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/drdag/database-gateway-drda-users-guide.pdf -O "Database Gateway for DRDA User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tginu/database-gateway-informix-users-guide.pdf -O "Database Gateway for Informix User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/odbcu/database-gateway-odbc-users-guide.pdf -O "Database Gateway for ODBC User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/gmswn/database-gateway-sql-server-users-guide.pdf -O "Database Gateway for SQL Server User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tgsyu/database-gateway-sybase-users-guide.pdf -O "Database Gateway for Sybase User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tgteu/database-gateway-teradata-users-guide.pdf -O "Database Gateway for Teradata User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/wsmqg/database-gateway-websphere-mq-installation-and-users-guide.pdf -O "Database Gateway for WebSphere MQ Installation and User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/otgis/database-gateway-installation-and-configuration-guide-aix-5l-based-systems-64-bit-hp-ux-itanium-solaris-operating-system-sparc-64-bit-linux-x86-and-linux-x86-64.pdf -O "Database Gateway Installation and Configuration Guide for AIX 5L Based Systems (64-Bit), HP-UX Itanium, Solaris Operating System (SPARC 64-Bit), Linux x86, and Linux x86-64.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/otgiw/database-gateway-installation-and-configuration-guide-microsoft-windows.pdf -O "Database Gateway Installation and Configuration Guide for Microsoft Windows.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/heter/heterogeneous-connectivity-users-guide.pdf -O "Heterogeneous Connectivity User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/xstrm/xstream-guide.pdf -O "XStream Guide.pdf"
cd ..

 

The article How to quickly download the new bunch of 21c Oracle Database documentation? first appeared on Blog dbi services.

Oracle 21c: Blockchain Tables


Oracle Blockchain Tables

Oracle Database 20c/21c introduced a new feature: Oracle Blockchain Tables.

Blockchain Tables let Oracle Database users manage data in a tamper-resistant way without distributing a ledger across multiple parties.

They can improve database security by preventing fraud from regular users as well as from administrators.

One of the main characteristics of Oracle Blockchain Tables is that you can only append data. Table rows are chained using a cryptographic hashing approach.

In addition, to prevent administrator or identity fraud, rows can optionally be signed with PKI (public key infrastructure) based on the user’s private key.

Typical use cases are centralized storage of compliance data, audit trails, or clinical trial data.

Let’s have a look at how it works.

Creating an Oracle Blockchain Table:
Quite easy; I’ve used Oracle Database 20.3:

select version_full from v$instance;
VERSION_FULL     
-----------------
20.3.0.0.0

CREATE BLOCKCHAIN TABLE bank_ledger (bank VARCHAR2(128), deposit_date DATE, deposit_amount NUMBER)
         NO DROP UNTIL 31 DAYS IDLE
         NO DELETE LOCKED
         HASHING USING "SHA2_512" VERSION "v1";
Error report -
ORA-05729: blockchain table cannot be created in root container

select name, pdb from v$services;

alter session set container = pdb1;

CREATE BLOCKCHAIN TABLE bank_ledger (bank VARCHAR2(128), deposit_date DATE, deposit_amount NUMBER)
         NO DROP UNTIL 31 DAYS IDLE
         NO DELETE LOCKED
         HASHING USING "SHA2_512" VERSION "v1";
Blockchain TABLE created.

Changing retention period on Blockchain Tables:
The table was created with a retention of “31 DAYS IDLE”; can we change that value?

ALTER TABLE bank_ledger NO DROP UNTIL 16 DAYS IDLE; 
Error report - 
ORA-05732: retention value cannot be lowered 

ALTER TABLE bank_ledger NO DROP UNTIL 42 days idle; 
Table BANK_LEDGER altered.

Appending Data in Oracle Blockchain Tables:
That’s working fine. The first query below just checks that no signing certificate has been loaded (DBA_CERTIFICATES is empty, so the inserted rows are not signed):

SELECT user_name, distinguished_name, 
          UTL_RAW.LENGTH(certificate_guid) CERT_GUID_LEN, 
          DBMS_LOB.GETLENGTH(certificate) CERT_LEN 
          FROM DBA_CERTIFICATES ORDER BY user_name; 
no rows selected
 
desc bank_ledger
Name           Null? Type
-------------- ----- -------------
BANK                 VARCHAR2(128)
DEPOSIT_DATE         DATE
DEPOSIT_AMOUNT       NUMBER

select * from bank_ledger; 
no rows selected
... 
1 row inserted. 
1 row inserted. 
1 row inserted.

BANK             DEPOSIT_ DEPOSIT_AMOUNT
---------------- -------- --------------
UBS              01.01.20      444000000
Credit Suisse    02.02.20       22000000
Vontobel         03.03.20        1000000
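
The insert statements were elided above; from the output they were presumably of this form (the TO_DATE format mask is an assumption, the original may have relied on NLS settings):

-- hypothetical reconstruction of the elided inserts, values taken from the output above
insert into bank_ledger values ('UBS',           to_date('01.01.20','DD.MM.RR'), 444000000);
insert into bank_ledger values ('Credit Suisse', to_date('02.02.20','DD.MM.RR'), 22000000);
insert into bank_ledger values ('Vontobel',      to_date('03.03.20','DD.MM.RR'), 1000000);
commit;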

DML and DDL on Oracle Blockchain Tables:
Let’s try to change some data.

update bank_ledger set deposit_amount=10000 where bank like 'UBS';
Error starting at line : 1 in command -
update bank_ledger set deposit_amount=10000 where bank like 'UBS'
Error at Command Line : 1 Column : 8
Error report -
SQL Error: ORA-05715: operation not allowed on the blockchain table

delete from bank_ledger where bank like 'UBS';
Error starting at line : 1 in command -
delete from bank_ledger where bank like 'UBS'
Error at Command Line : 1 Column : 13
Error report -
SQL Error: ORA-05715: operation not allowed on the blockchain table

drop table bank_ledger;
Error starting at line : 1 in command -
drop table bank_ledger
Error report -
ORA-05723: drop blockchain table BANK_LEDGER not allowed

Copying data from an Oracle Blockchain Table:
Ok, we can’t change data in the original table; let’s try to copy it.

create tablespace bank_data;
Tablespace BANK_DATA created.

CREATE BLOCKCHAIN TABLE bad_bank_ledger (bank VARCHAR2(128), deposit_date DATE, deposit_amount NUMBER) 
         NO DROP UNTIL 31 DAYS IDLE
         NO DELETE LOCKED
         HASHING USING "SHA2_512" VERSION "v1"
         tablespace bank_data;
Blockchain TABLE created.

insert into bad_bank_ledger select * from bank_ledger;
Error starting at line : 1 in command -
insert into bad_bank_ledger select * from bank_ledger
Error at Command Line : 1 Column : 13
Error report -
SQL Error: ORA-05715: operation not allowed on the blockchain table

Alternative actions on Oracle Blockchain Tables:
Can we move tablespaces or try to replace tables?

insert into bad_bank_ledger values ('Vader', '09-09-2099', '999999999');
insert into bad_bank_ledger values ('Blofeld', '07-07-1977', '7777777');
insert into bad_bank_ledger values ('Lecter', '08-08-1988', '888888');

1 row inserted.
1 row inserted.
1 row inserted.

select * from bad_bank_ledger;
BANK                                     DEPOSIT_ DEPOSIT_AMOUNT
---------------------------------------- -------- --------------
Vader                                    09.09.99      999999999
Blofeld                                  07.07.77        7777777
Lecter                                   08.08.88         888888

Copying to a regular table works: CREATE TABLE … AS SELECT reads the blockchain table and creates an ordinary table, which can then be modified:

create table new_bad_bank_ledger as select * from bad_bank_ledger;
Table NEW_BAD_BANK_LEDGER created.

update new_bad_bank_ledger set deposit_amount = 666666 where bank like 'Blofeld';
1 row updated.
commit;
commit complete.

select * from new_bad_bank_ledger;
BANK                                     DEPOSIT_ DEPOSIT_AMOUNT
---------------------------------------- -------- --------------
Vader                                    09.09.99      999999999
Blofeld                                  07.07.77         666666
Lecter                                   08.08.88         888888

drop table bad_bank_ledger;
Error starting at line : 1 in command -
drop table bad_bank_ledger
Error report -
ORA-05723: drop blockchain table BAD_BANK_LEDGER not allowed

drop tablespace bank_data INCLUDING CONTENTS and datafiles;
Error starting at line : 1 in command -
drop tablespace bank_data INCLUDING CONTENTS and datafiles
Error report -
ORA-00604: error occurred at recursive SQL level 1
ORA-05723: drop blockchain table BAD_BANK_LEDGER not allowed
00604. 00000 -  "error occurred at recursive SQL level %s"
*Cause:    An error occurred while processing a recursive SQL statement
           (a statement applying to internal dictionary tables).
*Action:   If the situation described in the next error on the stack
           can be corrected, do so; otherwise contact Oracle Support.

Move or Compress on Oracle Blockchain Tables:
Both operations are forbidden:

alter table bank_ledger move tablespace bank_data COMPRESS;
Error starting at line : 1 in command -
alter table bank_ledger move tablespace bank_data COMPRESS
Error report -
ORA-05715: operation not allowed on the blockchain table

alter table bank_ledger move tablespace bank_data;
Error starting at line : 1 in command -
alter table bank_ledger move tablespace bank_data
Error report -
ORA-05715: operation not allowed on the blockchain table

Hidden Columns in Oracle Blockchain Tables:
Every row is identified by hidden attributes.

col table_name for a40
set lin 999
set pages 100

SELECT * FROM user_blockchain_tables;
desc bank_ledger
SELECT column_name, hidden_column FROM user_tab_cols WHERE table_name='BANK_LEDGER';

TABLE_NAME                         ROW_RETENTION ROW TABLE_INACTIVITY_RETENTION HASH_ALG
------------------------------------------------ --- -------------------------- --------
BANK_LEDGER                                  YES                             42 SHA2_512
BAD_BANK_LEDGER                              YES                             31 SHA2_512

Name           Null? Type          
-------------- ----- ------------- 
BANK                 VARCHAR2(128) 
DEPOSIT_DATE         DATE          
DEPOSIT_AMOUNT       NUMBER        

COLUMN_NAME                            HID
-------------------------------------- ---
ORABCTAB_SIGNATURE$                    YES
ORABCTAB_SIGNATURE_ALG$                YES
ORABCTAB_SIGNATURE_CERT$               YES
ORABCTAB_SPARE$                        YES
BANK                                   NO 
DEPOSIT_DATE                           NO 
DEPOSIT_AMOUNT                         NO 
ORABCTAB_INST_ID$                      YES
ORABCTAB_CHAIN_ID$                     YES
ORABCTAB_SEQ_NUM$                      YES
ORABCTAB_CREATION_TIME$                YES
ORABCTAB_USER_NUMBER$                  YES
ORABCTAB_HASH$                         YES
13 rows selected. 

set colinvisible on
desc bank_ledger
Name                                 Null? Type                        
------------------------------------ ----- --------------------------- 
BANK                                       VARCHAR2(128)               
DEPOSIT_DATE                               DATE                        
DEPOSIT_AMOUNT                             NUMBER                      
ORABCTAB_SPARE$ (INVISIBLE)                RAW(2000 BYTE)              
ORABCTAB_SIGNATURE_ALG$ (INVISIBLE)        NUMBER                      
ORABCTAB_SIGNATURE$ (INVISIBLE)            RAW(2000 BYTE)              
ORABCTAB_HASH$ (INVISIBLE)                 RAW(2000 BYTE)              
ORABCTAB_SIGNATURE_CERT$ (INVISIBLE)       RAW(16 BYTE)                
ORABCTAB_CHAIN_ID$ (INVISIBLE)             NUMBER                      
ORABCTAB_SEQ_NUM$ (INVISIBLE)              NUMBER                      
ORABCTAB_CREATION_TIME$ (INVISIBLE)        TIMESTAMP(6) WITH TIME ZONE 
ORABCTAB_USER_NUMBER$ (INVISIBLE)          NUMBER                      
ORABCTAB_INST_ID$ (INVISIBLE)              NUMBER    

set lin 999
set pages 100
col bank for a40

select bank, deposit_date, orabctab_creation_time$ from bank_ledger;
BANK                                     DEPOSIT_ ORABCTAB_CREATION_TIME$        
---------------------------------------- -------- -------------------------------
UBS                                      01.01.20 25.09.20 13:17:03.946615000 GMT
Credit Suisse                            02.02.20 25.09.20 13:17:03.951545000 GMT
Vontobel                                 03.03.20 25.09.20 13:17:03.952064000 GMT
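
Out of curiosity, the chaining metadata can be queried the same way. A quick sketch (the hash is a RAW(2000), so only its length is displayed here):

-- hidden chain columns must be named explicitly, they are not part of SELECT *
select bank, orabctab_chain_id$, orabctab_seq_num$,
       utl_raw.length(orabctab_hash$) hash_length
  from bank_ledger;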

We see that it is not possible to modify an Oracle Blockchain Table at the database level. To protect against manipulation by users with root access, there are several additional possibilities, e.g. systematically transferring the cryptographic hashes and user signatures to external vaults, which would enable you to verify and recover the data even in the worst disaster scenarios.
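
The integrity of the chain itself can be verified with the DBMS_BLOCKCHAIN_TABLE package. A minimal sketch with named parameters (the full signature has more arguments, see the PL/SQL Packages and Types Reference; signatures are not verified here because no rows were signed, as shown by the empty DBA_CERTIFICATES above):

set serveroutput on
declare
  l_rows number;
begin
  -- walk the chains and recompute the hashes of all rows
  dbms_blockchain_table.verify_rows(
    schema_name             => user,
    table_name              => 'BANK_LEDGER',
    number_of_rows_verified => l_rows,
    verify_signature        => false);
  dbms_output.put_line('rows verified: '||l_rows);
end;
/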

Resources:

https://www.oracle.com/blockchain/#blockchain-platform-tab

https://docs.oracle.com/en/cloud/paas/blockchain-cloud/user/create-rich-history-database.html#GUID-266145A1-EF3A-4917-B174-C50D4DB1A0E3

https://docs.oracle.com/en/database/oracle/oracle-database/21/nfcon/details-oracle-blockchain-table-282449857.html

https://docs.oracle.com/en/database/oracle/oracle-database/21/admin/managing-tables.html#GUID-43470B0C-DE4A-4640-9278-B066901C3926

The article Oracle 21c: Blockchain Tables first appeared on Blog dbi services.

19c serverless logon trigger


By Franck Pachot

.
I thought I already blogged about this but can’t find it. So here it is, with a funny title. I like to rename Oracle features from the user point of view (they are usually named from the Oracle development point of view). This is about setting session parameters for Oracle connections directly from the connection string, especially when they cannot be set in the application (code) or on the DB server (logon trigger).

SESSION_SETTINGS

Here is a simple example. I connect with the full connection string (you can put it in a tnsnames.ora of course to use a small alias instead):


SQL*Plus: Release 21.0.0.0.0 - Production on Sun Jan 31 10:03:07 2021
Version 21.1.0.0.0

Copyright (c) 1982, 2020, Oracle.  All rights reserved.

SQL> connect demo/demo1@(DESCRIPTION=(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=db21_pdb1.subnet.vcn.oraclevcn.com))(ADDRESS=(PROTOCOL=TCP)(HOST=cloud)(PORT=1521)))
Connected.
SQL> show parameter optimizer_mode

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_mode                       string      ALL_ROWS
SQL>

the OPTIMIZER_MODE is at its default value – ALL_ROWS.

Let’s say that for this connection I want to use FIRST_ROWS_10 because I know that results will always be paginated to the screen. But I can’t change the application to issue an ALTER SESSION. I can do it from the client connection string by adding (SESSION_SETTINGS=(optimizer_mode=first_rows_10)) in CONNECT_DATA, at the same level as the SERVICE_NAME I connect to:


SQL> connect demo/demo1@(DESCRIPTION=(CONNECT_DATA=(SESSION_SETTINGS=(optimizer_mode=first_rows_10))(SERVER=DEDICATED)(SERVICE_NAME=db21_pdb1.subnet.vcn.oraclevcn.com))(ADDRESS=(PROTOCOL=TCP)(HOST=cloud)(PORT=1521)))
Connected.

SQL> show parameter optimizer_mode

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_mode                       string      FIRST_ROWS_10
SQL>

This has been automatically set at connection time.

logon trigger

I could have done this from the server with a logon trigger:


SQL> create or replace trigger demo.set_session_settings after logon on demo.schema
  2  begin
  3    execute immediate 'alter session set optimizer_mode=first_rows_100';
  4  end;
  5  /

Trigger created.

SQL> connect demo/demo1@(DESCRIPTION=(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=db21_pdb1.subnet.vcn.oraclevcn.com))(ADDRESS=(PROTOCOL=TCP)(HOST=cloud)(PORT=1521)))
Connected.
SQL> show parameter optimizer_mode

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_mode                       string      FIRST_ROWS_100

Here, with no SESSION_SETTINGS in the connection string, the session parameter is set. Of course the logon trigger may check additional context to set it for specific usage. You have the full power of PL/SQL here.
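
For example, here is a sketch of the same trigger checking the client context before setting the parameter (the module test is just an illustration; the value would have to match your client):

create or replace trigger demo.set_session_settings after logon on demo.schema
begin
  -- hypothetical condition: only for sqlplus sessions
  if sys_context('userenv','module') like 'sqlplus%' then
    execute immediate 'alter session set optimizer_mode=first_rows_100';
  end if;
end;
/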

You probably use the connection string setting when you can’t or don’t want to define it in a logon trigger. But what happens when I use SESSION_SETTINGS in CONNECT_DATA in addition to the logon trigger?


SQL> connect demo/demo1@(DESCRIPTION=(CONNECT_DATA=(SESSION_SETTINGS=(optimizer_mode=first_rows_10))(SERVER=DEDICATED)(SERVICE_NAME=db21_pdb1.subnet.vcn.oraclevcn.com))(ADDRESS=(PROTOCOL=TCP)(HOST=cloud)(PORT=1521)))
Connected.
SQL> show parameter optimizer_mode

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_mode                       string      FIRST_ROWS_100

The logon trigger takes priority. The DBA always wins 😉 And there’s no error or warning: your setting works, but it is simply changed later.

SQL_TRACE

Of course, this is very useful for setting SQL_TRACE and TRACEFILE_IDENTIFIER, which you may need to set temporarily:


SQL> connect demo/demo1@(DESCRIPTION=(CONNECT_DATA=(SESSION_SETTINGS=(sql_trace=true)(tracefile_identifier=franck))(SERVER=DEDICATED)(SERVICE_NAME=db21_pdb1.subnet.vcn.oraclevcn.com))(ADDRESS=(PROTOCOL=TCP)(HOST=cloud)(PORT=1521)))
Connected.

SQL> select value from v$diag_info where name='Default Trace File';

VALUE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/db21_iad36d/DB21/trace/DB21_ora_93108_FRANCK.trc

Here is what I see in the trace:


PARSING IN CURSOR #140571524262416 len=45 dep=1 uid=110 oct=42 lid=110 tim=4586453211272 hv=4113172360 ad='0' sqlid='1b8pu0mukn1w8'
ALTER SESSION SET tracefile_identifier=franck
END OF STMT
PARSE #140571524262416:c=312,e=312,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,plh=0,tim=4586453211272

*** TRACE CONTINUES IN FILE /u01/app/oracle/diag/rdbms/db21_iad36d/DB21/trace/DB21_ora_93108_FRANCK.trc ***

sql_trace was set and then tracefile_identifier.
The current trace file (with the tracefile_identifier) shows the code from my logon trigger:


*** 2021-01-31T10:44:38.949537+00:00 (DB21_PDB1(3))
*** SESSION ID:(30.47852) 2021-01-31T10:44:38.949596+00:00
*** CLIENT ID:() 2021-01-31T10:44:38.949609+00:00
*** SERVICE NAME:(db21_pdb1) 2021-01-31T10:44:38.949620+00:00
*** MODULE NAME:(sqlplus@cloud (TNS V1-V3)) 2021-01-31T10:44:38.949632+00:00
*** ACTION NAME:() 2021-01-31T10:44:38.949646+00:00
*** CLIENT DRIVER:(SQL*PLUS) 2021-01-31T10:44:38.949663+00:00
*** CONTAINER ID:(3) 2021-01-31T10:44:38.949684+00:00
*** CLIENT IP:(10.0.0.22) 2021-01-31T10:44:38.949700+00:00


*** TRACE CONTINUED FROM FILE /u01/app/oracle/diag/rdbms/db21_iad36d/DB21/trace/DB21_ora_93108.trc ***

EXEC #140571524262416:c=601,e=1332,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,plh=0,tim=4586453212715
CLOSE #140571524262416:c=4,e=4,dep=1,type=1,tim=4586453213206
=====================
PARSING IN CURSOR #140571524259888 len=81 dep=1 uid=110 oct=47 lid=110 tim=4586453215247 hv=303636932 ad='16d3143b0' sqlid='22pncan91k8f4'
begin
  execute immediate 'alter session set optimizer_mode=first_rows_100';
end;
END OF STMT
PARSE #140571524259888:c=1737,e=1966,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,plh=0,tim=4586453215246

this proves that the logon trigger has priority, or rather the last word, on these settings, as it runs after them, just before the session is handed over to the application.

Module, Action

Before it comes to tracing, we would like to identify our session for end-to-end profiling, and this is also possible from the connection string. Oracle does that by defining the “module” and “action” application info. There’s no session parameter to set module and action, but CONNECT_DATA accepts additional attributes besides SESSION_SETTINGS: MODULE_NAME and MODULE_ACTION:


SQL> connect demo/demo1@(DESCRIPTION=(CONNECT_DATA=(MODULE_NAME=my_application_tag)(MODULE_ACTION=my_action_tag)(SESSION_SETTINGS=(sql_trace=true))(SERVER=DEDICATED)(SERVICE_NAME=db21_pdb1.subnet.vcn.oraclevcn.com))(ADDRESS=(PROTOCOL=TCP)(HOST=cloud)(PORT=1521)))

This sets the module/action as soon as connected, which I can see in the trace:


*** 2021-01-31T10:57:54.404141+00:00 (DB21_PDB1(3))
*** SESSION ID:(484.11766) 2021-01-31T10:57:54.404177+00:00
*** CLIENT ID:() 2021-01-31T10:57:54.404193+00:00
*** SERVICE NAME:(db21_pdb1) 2021-01-31T10:57:54.404205+00:00
*** MODULE NAME:(sqlplus@cloud (TNS V1-V3)) 2021-01-31T10:57:54.404217+00:00
*** ACTION NAME:() 2021-01-31T10:57:54.404242+00:00
*** CLIENT DRIVER:(SQL*PLUS) 2021-01-31T10:57:54.404253+00:00
*** CONTAINER ID:(3) 2021-01-31T10:57:54.404265+00:00
*** CLIENT IP:(10.0.0.22) 2021-01-31T10:57:54.404277+00:00

CLOSE #139872849242800:c=1,e=2,dep=1,type=1,tim=4587248667205
*** MODULE NAME:(my_application_tag) 2021-01-31T10:57:54.404725+00:00
*** ACTION NAME:(my_action_tag) 2021-01-31T10:57:54.404756+00:00

However, because I run this from sqlplus, the module is overwritten later by sqlplus itself:


SQL> select sid,module,action from v$session where sid=sys_context('userenv','sid');

       SID MODULE                         ACTION
---------- ------------------------------ ------------------------------
        30 SQL*Plus

What you pass in the connection string is run immediately at connection time, before anything else (logon trigger or application statements). It is really useful when you cannot run an ALTER SESSION in any other way. But remember that it is just an initial setting and nothing is locked against future changes.
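
A quick check of that last point, to run once connected (the session can still override the initial value as usual):

SQL> alter session set optimizer_mode=all_rows;
SQL> show parameter optimizer_mode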

more… mostly undocumented

I mentioned that there are more things that can be set from there. Here is how I found out about MODULE_NAME and MODULE_ACTION:


strings $ORACLE_HOME/bin/oracle | grep ^DESCRIPTION/CONNECT_DATA/ | cut -d/ -f3- | sort | paste -s

CID/PROGRAM     CID/USER        COLOCATION_TAG  COMMAND CONNECTION_ID   CONNECTION_ID_PREFIX    
DESIG   DUPLICITY       FAILOVER_MODE   FAILOVER_MODE/BACKUP  GLOBAL_NAME     INSTANCE_NAME   
MODULE_ACTION   MODULE_NAME     NUMA_PG ORACLE_HOME     PRESENTATION    REGION  RPC     SEPARATE_PROCESS      
SERVER  SERVER_WAIT_TIMEOUT     SERVICE_NAME    SESSION_SETTINGS        SESSION_STATE   SID     USE_DBROUTER

Most of them are not documented and probably not working the way you think. But FAILOVER_MODE is well known to keep the session running when it fails over to a different node or replica in HA (because the High Availability of the database is at its maximum only if the application follows without interruption). SERVER is well known to choose the level of connection sharing and pooling (a must with microservices). The COLOCATION_TAG is a way to favor colocation of sessions processing the same data when you have scaled out to multiple nodes, to avoid inter-node cache synchronization. You just set a character string that may have a business meaning, and the load balancer will try to keep together the sessions that hash to the same value. INSTANCE_NAME (and SID) are documented to go to a specific instance (for the DBA; the application uses services for that). NUMA_PG looks interesting to colocate sessions in NUMA nodes (visible in x$ksmssinfo) but it is undocumented and therefore unsupported. And we are far from the “serverless” title when we mention those physical characteristics… I’ve put this title not only to be in the trend but also to mention that things we are used to setting on the server side may have to be set on the client side when we are in the cloud.
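
As an example of the documented ones, here is what a COLOCATION_TAG would look like in CONNECT_DATA (the tag value is arbitrary; the service and address are the ones used above):

(DESCRIPTION=(CONNECT_DATA=(COLOCATION_TAG=accounting)(SERVER=DEDICATED)(SERVICE_NAME=db21_pdb1.subnet.vcn.oraclevcn.com))(ADDRESS=(PROTOCOL=TCP)(HOST=cloud)(PORT=1521)))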

CONTAINER

Even in the SESSION_SETTINGS you can put some settings that are not, strictly speaking, session parameters. The CONTAINER one may be convenient:
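
A minimal sketch, assuming a common user with the SET CONTAINER privilege connecting through a CDB root service (the user and service names here are hypothetical):

SQL> connect c##demo/demo1@(DESCRIPTION=(CONNECT_DATA=(SESSION_SETTINGS=(container=PDB1))(SERVICE_NAME=cdb21.subnet.vcn.oraclevcn.com))(ADDRESS=(PROTOCOL=TCP)(HOST=cloud)(PORT=1521)))
SQL> show con_name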


As a connection string can also use the BEQ protocol, this can even be used for local connections (without going through a listener) and is a way to go directly to a PDB. Here is an example:


BEQ_PDB1=
 (DESCRIPTION=
  (ADDRESS_LIST=
   (ADDRESS=
    (PROTOCOL=BEQ)
    (PROGRAM=oracle)
    (ARGV0=oracleCDB1)
    (ARGS='(DESCRIPTION=(SILLY_EXAMPLE=TRUE)(LOCAL=YEP)(ADDRESS=(PROTOCOL=beq)))')
   )
  )
  (CONNECT_DATA=
   (SESSION_SETTINGS=(container=PDB1)(tracefile_identifier=HAVE_FUN))
   (SID=CDB1)
  )
 )

I’ve added a few funny things here, but don’t do that. A `ps` shows:


oracleCDB1 (DESCRIPTION=(SILLY_EXAMPLE=TRUE)(LOCAL=YEP)(ADDRESS=(PROTOCOL=beq)))

for this connection.

undocumented parameters

I mentioned that the SESSION_SETTINGS happen before the logon trigger, and that the application can change the parameters, as usual, afterwards. It seems that there are two hidden parameters for that:

_connect_string_settings_after_logon_triggers  0  set connect string session settings after logon triggers  integer
_connect_string_settings_unalterable           0  make connect string session settings unalterable          integer

However, I tested them and haven’t seen how they work (surprisingly, they are not booleans but integers).

The article 19c serverless logon trigger first appeared on Blog dbi services.

Learn ODA on Oracle Cloud


By Franck Pachot

.
You want to learn and practice your ODA command line and GUI without having an ODA at home? It should be possible to run the ODA image on VirtualBox, but that’s probably hard work as it is tied to the hardware. About the configuration, you can run the Oracle Appliance Manager Configurator on your laptop, but I think it is not compatible with the latest odacli. However, Oracle has provided an ODA simulator for a long time, and it is now available in the Oracle Cloud Marketplace for free.

Here is its page:
https://console.us-ashburn-1.oraclecloud.com/marketplace/application/84422479/overview

You can get there by: Oracle Cloud Console > Create Compute Instance > Edit > Change Image > Oracle Images > Oracle Database Appliance (ODA) Simulator

I mentioned that this is for free. The marketplace does not allow me to run it on an Always Free Eligible shape, but you may take the software and run it elsewhere (you will see the .tar.gz in the opc user home directory).

Cleanup and pull images

From the marketplace, the container is already running but I clean it and re-install. This does everything: it installs docker if not already there (you run all this as root).


# cleanup (incl. portainer)
sudo docker rm -f portainer ; sudo docker rmi -f portainer/portainer
yes | sudo ~/simulator*/cleanup_odasimulator_sw.sh
# setup (get images and start portainer)
sudo ~/simulator*/setup_odasimulator_sw.sh

With this you can connect to Portainer over HTTP on port 9000. Of course, you need to open this port in the Network Security Groups (I opened the range 9000-9100 as I’ll use those ports later). You can connect with user admin and password welcome1… yes, that’s the CHANGE_ON_INSTALL password for ODA 😉

Choose the Local repository and connect, and you will see the users and containers created.

Create ODA simulators


sudo ~/simulator*/createOdaSimulatorContainer.sh -d class01 -t ha -n 2 -p 9004 \
 -i $(oci-public-ip | awk '/Primary public IP:/{print $NF}')

The -d option is a “department name”. You can put whatever you like and use it to create multiple classes.
-n is the number of simulators (one per participant in your class, for example).
-t is ‘ha’ to create two docker containers simulating a 2-node ODA HA, or ‘single’ to simulate a one-node ODA-lite.
The default starting port is 7094, but I start at 9004 as I opened the 9000-9100 range.

This creates the containers and storage, and starts the ODA software: Zookeeper, the DCS agent and the DCS controller. You can see them from the Portainer console. It also creates users (the username is displayed; the password is welcome1) in Portainer, in the “odasimusers” team.

From the container list you have an icon ( >_ ) to open a command-line console. This link is also displayed in the createOdaSimulatorContainer.sh output (“ODA cli access”) so that you can give it to your students, one for each node when you chose HA of course. The output also displays the URL of the ODA Console (“Browser User Interface”) at https://<public ip>:<displayed port>/mgmt/index.html, for which the user is “oda-admin” and the password must be changed at the first connection.

Here is an example with mine:


***********************************************
ODA Simulator system info:
Executed on: 2021_02_03_09_39_AM
Executed by:

ADMIN:
ODA Simulator management GUI: http://150.136.58.254:9000
Username: admin Password: welcome1
num=          5
dept=       class01
hostpubip=    150.136.58.254

USERS:
Username: class01-1-node0  Password:welcome1
Container : class01-1-node0
ODA Console: https://150.136.58.254:9005/mgmt/index.html
ODA cli access: http://150.136.58.254:9000/#/containers/86a0846af46251c9389423ad440a807b83645b62a1ec893182e8d15b1d1179bd/exec

Those are my real IP addresses and those ports are opened so you can play with it if it is still up when you read it… it’s a lab.

The Portainer web shell is one possibility, but from the machine where you created all this you can also use the provided script:


[opc@instance-20210203-1009 simulator_19.9.0.0.0]$ ./connectContainer.sh -n class01-1-node0
[root@class01-1-node0 /]#

Of course you can also simply `sudo docker exec -i -t class01-1-node0 /bin/bash` – there’s nothing magic here. And then you can play with odacli:


[root@class01-1-node0 /]# odacli configure-firstnet

bonding interface is:
Using bonding public interface (yes/no) [yes]:
Select the Interface to configure the network on () [btbond1]:
Configure DHCP on btbond1 (yes/no) [no]:
INFO: You have chosen Static configuration
Use VLAN on btbond1 (yes/no) [no]:
Enter the IP address to configure : 192.168.0.100
Enter the Netmask address to configure : 255.255.255.0
Enter the Gateway address to configure[192.168.0.1] :
INFO: Restarting the network
Shutting down interface :           [  OK  ]
Shutting down interface em1:            [  OK  ]
Shutting down interface p1p1:           [  OK  ]
Shutting down interface p1p2:           [  OK  ]
Shutting down loopback interface:               [  OK  ]
Bringing up loopback interface:    [  OK  ]
Bringing up interface :     [  OK  ]
Bringing up interface em1:    [  OK  ]
Bringing up interface p1p1: Determining if ip address 192.168.16.24 is already in use for device p1p1...    [ OK  ]
Bringing up interface p1p2: Determining if ip address 192.168.17.24 is already in use for device p1p2...    [ OK  ]
Bringing up interface btbond1: Determining if ip address 192.168.0.100 is already in use for device btbond1...     [  OK  ]
INFO: Restarting the DCS agent
initdcsagent stop/waiting
initdcsagent start/running, process 20423

This is just a simulator, not a virtualized ODA: you need to use the address 192.168.0.100 on node 0 and 192.168.0.101 on node 1.

Some fake versions of the ODA software are there:


[root@class01-1-node0 /]# ls opt/oracle/dcs/patchfiles/

oda-sm-19.9.0.0.0-201020-server.zip
odacli-dcs-19.8.0.0.0-200714-DB-11.2.0.4.zip
odacli-dcs-19.8.0.0.0-200714-DB-12.1.0.2.zip
odacli-dcs-19.8.0.0.0-200714-DB-12.2.0.1.zip
odacli-dcs-19.8.0.0.0-200714-DB-18.11.0.0.zip
odacli-dcs-19.8.0.0.0-200714-DB-19.8.0.0.zip
odacli-dcs-19.8.0.0.0-200714-GI-19.8.0.0.zip
odacli-dcs-19.9.0.0.0-201020-DB-11.2.0.4.zip
odacli-dcs-19.9.0.0.0-201020-DB-12.1.0.2.zip
odacli-dcs-19.9.0.0.0-201020-DB-12.2.0.1.zip
odacli-dcs-19.9.0.0.0-201020-DB-18.12.0.0.zip
odacli-dcs-19.9.0.0.0-201020-DB-19.9.0.0.zip

On a real ODA you download these from My Oracle Support, but here the simulator will accept those files to update the ODA repository:


odacli update-repository -f /opt/oracle/dcs/patchfiles/odacli-dcs-19.8.0.0.0-200714-GI-19.8.0.0.zip
odacli update-repository -f /opt/oracle/dcs/patchfiles/odacli-dcs-19.8.0.0.0-200714-DB-19.8.0.0.zip
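
As on a real ODA, such commands return a job id that you can follow (a quick sketch; replace <job id> with the one printed by update-repository):

# list all jobs, then show the details of one of them
odacli list-jobs
odacli describe-job -i <job id>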

You can also go to the web console where, at the first connection, you change the password to connect as oda-admin.


[root@class01-1-node0 opt]# odacli-adm set-credential -u oda-admin
User password: #HappyNewYear#2021
Confirm user password: #HappyNewYear#2021
[root@class01-1-node0 opt]#

Voilà, I shared my password for https://150.136.58.254:9005/mgmt/index.html 😉
You can also try the following ports in steps of two (9007, 9009…9021) and change the password yourself – I’ll leave this machine up for a few days after publishing.

And then deploy the database. Here is my oda.json configuration that you can put in a file and load if you want a quick creation without typing:


{ "instance": { "instanceBaseName": "oda-c", "dbEdition": "EE", "objectStoreCredentials": null, "name": "oda", "systemPassword": null, "timeZone": "Europe/Zurich", "domainName": "pachot.net", "dnsServers": [ "8.8.8.8" ], "ntpServers": [], "isRoleSeparated": true, "osUserGroup": { "users": [ { "userName": "oracle", "userRole": "oracleUser", "userId": 1001 }, { "userName": "grid", "userRole": "gridUser", "userId": 1000 } ], "groups": [ { "groupName": "oinstall", "groupRole": "oinstall", "groupId": 1001 }, { "groupName": "dbaoper", "groupRole": "dbaoper", "groupId": 1002 }, { "groupName": "dba", "groupRole": "dba", "groupId": 1003 }, { "groupName": "asmadmin", "groupRole": "asmadmin", "groupId": 1004 }, { "groupName": "asmoper", "groupRole": "asmoper", "groupId": 1005 }, { "groupName": "asmdba", "groupRole": "asmdba", "groupId": 1006 } ] } }, "nodes": [ { "nodeNumber": "0", "nodeName": "GENEVA0", "network": [ { "ipAddress": "192.168.0.100", "subNetMask": "255.255.255.0", "gateway": "192.168.0.1", "nicName": "btbond1", "networkType": [ "Public" ], "isDefaultNetwork": true } ] }, { "nodeNumber": "1", "nodeName": "GENEVA1", "network": [ { "ipAddress": "192.168.0.101", "subNetMask": "255.255.255.0", "gateway": "192.168.0.1", "nicName": "btbond1", "networkType": [ "Public" ], "isDefaultNetwork": true } ] } ], "grid": { "vip": [ { "nodeNumber": "0", "vipName": "GENEVA0-vip", "ipAddress": "192.168.0.102" }, { "nodeNumber": "1", "vipName": "GENEVA1-vip", "ipAddress": "192.168.0.103" } ], "diskGroup": [ { "diskGroupName": "DATA", "diskPercentage": 80, "redundancy": "FLEX" }, { "diskGroupName": "RECO", "diskPercentage": 20, "redundancy": "FLEX" }, { "diskGroupName": "FLASH", "diskPercentage": 100, "redundancy": "FLEX" } ], "language": "en", "enableAFD": "TRUE", "scan": { "scanName": "oda-scan", "ipAddresses": [ "192.168.0.104", "192.168.0.105" ] } }, "database": { "dbName": "DB1", "dbCharacterSet": { "characterSet": "AL32UTF8", "nlsCharacterset": "AL16UTF16", "dbTerritory": "SWITZERLAND", "dbLanguage": "FRENCH" }, "dbRedundancy": "MIRROR", "adminPassword": null, "dbEdition": "EE", "databaseUniqueName": "DB1_GENEVA", "dbClass": "OLTP", "dbVersion": "19.8.0.0.200714", "dbHomeId": null, "instanceOnly": false, "isCdb": true, "pdBName": "PDB1", "dbShape": "odb1", "pdbAdminuserName": "pdbadmin", "enableTDE": false, "dbType": "RAC", "dbTargetNodeNumber": null, "dbStorage": "ASM", "dbConsoleEnable": false, "dbOnFlashStorage": false, "backupConfigId": null, "rmanBkupPassword": null } }

Of course, the same can be done from the command line, as sketched above. Here is the database created in this simulation:


[root@class01-1-node0 /]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
cc08cd94-0e95-4521-97c8-025dd03a5554     DB1        Rac      19.8.0.0.200714      true       Oltp     Odb1     Asm        Configured   782c749c-ff0e-4665-bb2a-d75e2caa5568

This is the database created by this simulated deployment.

This is really nice to learn or check something without accessing a real ODA. There’s more in a video from Sam K Tan, Business Development Director at Oracle: https://www.youtube.com/watch?v=mrLp8TkcJMI and in the hands-on-lab handbook. And about real-life ODA problems and solutions, I have awesome colleagues sharing on our blog: https://blog.dbi-services.com/?s=oda and they give the following training: https://www.dbi-services.com/trainings/oracle-database-appliance-oda/ in French, German, or English, on site or remote.

The article Learn ODA on Oracle Cloud first appeared on Blog dbi services.
