
Parallelize your Oracle INSERT with DBMS_PARALLEL_EXECUTE


One of the challenges for PL/SQL developers is to simulate production activity in a non-production environment, for example different INSERTs executed by several concurrent sessions.

Different tools exist, like Oracle RAT (Real Application Testing), but they require an extra license. Alternatively, you can build your own PL/SQL package using the DBMS_SCHEDULER or DBMS_PARALLEL_EXECUTE packages.

The aim of this blog is to show you how to use DBMS_PARALLEL_EXECUTE to parallelize several INSERT commands across different sessions.

My sources for this blog are oracle-base and the Oracle documentation.

My goal is to insert 3000 rows into the table DBI_FK_NOPART through different sessions in parallel.

First of all, let’s check the MAX primary key in the table:

select max(pkey) from XXXX.dbi_fk_nopart;
MAX(PKEY)
9900038489

For my test I have created and populated a new table, test_tab, as described on oracle-base; the distinct values of its numeric column will define the chunks used to spawn the parallel sessions. In my case, we will create 3 chunks (a minimal sketch of such a table follows the output below):

SELECT DISTINCT num_col, num_col FROM test_tab;
num_col num_col1
10      10
30      30
20      20
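
If you don’t have such a table yet, a minimal sketch is enough (my own illustration, not the exact oracle-base DDL): any table with a numeric column works, and the distinct values of that column become the chunk boundaries.

-- Minimal sketch: 3 distinct values of num_col => 3 chunks
-- (start_id = end_id for each chunk, matching the query output above)
CREATE TABLE test_tab (num_col NUMBER);

INSERT INTO test_tab SELECT 10 FROM dual CONNECT BY level <= 5;
INSERT INTO test_tab SELECT 20 FROM dual CONNECT BY level <= 5;
INSERT INTO test_tab SELECT 30 FROM dual CONNECT BY level <= 5;
COMMIT;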

The code below must be embedded in a PL/SQL block or a PL/SQL procedure; I only show the main commands here.

The first step is to create a new task:

DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'TASK_NAME');

We split the data into 3 chunks:

--We create 3 chunks
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(task_name => 'TASK_NAME',sql_stmt =>'SELECT DISTINCT num_col, num_col FROM test_tab', by_rowid => false); 

Now I want to insert 1000 rows for each chunk, each chunk being processed by a different session. So at the end I will have 3000 rows inserted through different sessions.

Add a dynamic PL/SQL block to execute the INSERT. Note the quadrupled quotes: the INSERT statement is a string literal inside another string literal (the task statement contains a dynamic block, which itself builds the INSERT), so each quote has to be escaped twice:

v_sql_stmt := 'declare
s varchar2(16000); vstart_id number := :start_id; vend_id number:= :end_id;
table_name varchar2(30);
v_pkey number;
begin
EXECUTE IMMEDIATE ''SELECT max(pkey) FROM xxxx.DBI_FK_NOPART'' INTO v_pkey;
for rec in 1..1000 loop
s:=''INSERT /*TEST_INSERT_DBI_FK_NOPART*/ INTO xxxx.DBI_FK_NOPART ( 
pkey,
boid,
metabo,
lastupdate,
processid,
rowcomment,
created,
createduser,
replaced,
replaceduser,
archivetag,
mdbid,
itsforecast,
betrag,
itsopdetherkunft,
itsopdethkerstprm,
itsfckomppreisseq,
clsfckomppreisseq,
issummandendpreis,
partitiontag,
partitiondomain,
fcvprodkomppkey,
fckvprdankomppkey,
session_id
) VALUES (
1 +'||v_pkey||' ,
''''8189b7c7-0c36-485b-8993-054dddd62708'''' ,
-695,
sysdate,
''''B.3142'''' ,
NULL,
SYSDATE,
''''svc_xxxx_Mig_DEV_DBITEST'''' ,
SYSDATE,
NULL,
NULL,
NULL,
''''8a9f1321-b3ec-46d5-b6c7-af1c7fb5167G'''' ,
0,
''''ae03b3fc-b31c-433b-be0f-c8b0bdaa82fK'''' ,
NULL,
''''5849f308-215b-486b-95bd-cbd7afe8440H'''',  
-251,
0,
201905,
''''E'''',  
:start_id,
:end_id,
SYS_CONTEXT(''''USERENV'''',''''SESSIONID''''))'';
execute immediate s using vstart_id, vend_id;
commit;
end loop;
end;';

The next step is to execute the task with parallel_level = 4, meaning I want to insert the rows through 4 different sessions.

DBMS_PARALLEL_EXECUTE.RUN_TASK (task_name => 'TASK_NAME',   sql_stmt =>v_sql_stmt,   language_flag => DBMS_SQL.NATIVE, parallel_level => 4 );

Let’s check the TASK execution status:

SELECT task_name,status FROM user_parallel_execute_tasks;
TASK_NAME STATUS
TASK_NAME FINISHED

And let’s check the chunks created; we should have 3 chunks:

SELECT chunk_id, status, start_id, end_id FROM   user_parallel_execute_chunks WHERE  task_name = 'TASK_NAME' ORDER BY chunk_id;
CHUNK_ID STATUS     START_ID END_ID
9926    PROCESSED   10  10
9927    PROCESSED   30  30
9928    PROCESSED   20  20

As we have used the parameter parallel_level=4, we should have 4 different jobs using 4 different sessions:

SELECT log_date, job_name, status, session_id FROM user_scheduler_job_run_details WHERE job_name LIKE 'TASK$%' ORDER BY log_date DESC;
LOG_DATE                            JOB_NAME        STATUS      SESSION_ID
29.12.21 14:38:41.882995000 +01:00  TASK$_22362_3   SUCCEEDED   3152,27076
29.12.21 14:38:41.766619000 +01:00  TASK$_22362_2   SUCCEEDED   14389,25264
29.12.21 14:38:41.657571000 +01:00  TASK$_22362_1   SUCCEEDED   3143,9335
29.12.21 14:38:41.588968000 +01:00  TASK$_22362_4   SUCCEEDED   6903,60912

Now let’s check the MAX primary key in the table:

select max(pkey) from xxxx.dbi_fk_nopart;
MAX(PKEY)
9900041489
select 9900041489 - 9900038489 from dual;
3000

3000 rows have been inserted, and the data has been split into chunks of 1000 rows per session (only 3 of the 4 jobs got a chunk to process, since we created 3 chunks):

select count(*),session_id from xxxx.dbi_fk_nopart where pkey > 9900038489 group by session_id;
count(*) session_id
1000    4174522508
1000    539738149
1000    4190321565
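
Once a task is finished and its results have been checked, its metadata should be cleaned up; otherwise a later CREATE_TASK with the same task name will fail. A minimal cleanup sketch:

BEGIN
  -- Dropping the task also removes its chunk metadata from USER_PARALLEL_EXECUTE_CHUNKS
  DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'TASK_NAME');
END;
/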

Conclusion:

DBMS_PARALLEL_EXECUTE is easy to use, performs well and offers several chunking options:

  • Data can be split by ROWID using DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID
  • Data can be split on a numeric column using DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_NUMBER_COL (a minimal sketch follows this list)
  • Data can be split by a user-defined query using DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL (used in this blog)
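
As an illustration of the NUMBER_COL variant, here is a minimal sketch (the owner, table, column and chunk size are my assumptions and must match your schema):

BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'TASK_BY_NUMCOL');
  -- Each chunk covers at most 10000 consecutive PKEY values
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_NUMBER_COL(
    task_name    => 'TASK_BY_NUMCOL',
    table_owner  => 'XXXX',
    table_name   => 'DBI_FK_NOPART',
    table_column => 'PKEY',
    chunk_size   => 10000);
END;
/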



Oracle Partition and Performance of massive/concurrent Inserts


For a customer, I had to check whether partitioning improves the performance of massive, concurrent inserts.

The goal is to execute several INSERTs in parallel via the dbms_parallel_execute package (my previous blog “Parallelize your Oracle INSERT with DBMS_PARALLEL_EXECUTE” explains how to use it).

The idea is to insert more than 20 million rows into 2 tables:

  • One table not partitioned –> DBI_FK_NOPART
  • One table partitioned in HASH –> DBI_FK_PART

Both tables have the same columns and the same indexes, but the index types differ:

  • All indexes on the partitioned table are global partitioned indexes:
    • CREATE INDEX …GLOBAL PARTITION BY HASH (….)….
  • All indexes on the non-partitioned table are normal indexes (an illustrative sketch follows this list):
    • CREATE INDEX …ON…
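
As an illustration, the two index types could look like this (index names and partition counts are hypothetical, not the actual customer DDL):

-- On the partitioned table: a global hash-partitioned index
CREATE INDEX dbi_fk_part_pkey_i ON dbi_fk_part (pkey)
  GLOBAL PARTITION BY HASH (pkey) PARTITIONS 32;

-- On the non-partitioned table: a normal (non-partitioned) index
CREATE INDEX dbi_fk_nopart_pkey_i ON dbi_fk_nopart (pkey);
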
--Table DBI_FK_PART --> PARTITIONED
SQL> select TABLE_NAME,PARTITION_NAME from dba_tab_partitions where table_name = 'DBI_FK_PART';

TABLE_NAME          PARTITION_NAME
------------------- --------------------------------------------------------------------------------------------------------------------------------
DBI_FK_PART         SYS_P9797
DBI_FK_PART         SYS_P9798
DBI_FK_PART         SYS_P9799
DBI_FK_PART         SYS_P9800
DBI_FK_PART         SYS_P9801
DBI_FK_PART         SYS_P9802
DBI_FK_PART         SYS_P9803
DBI_FK_PART         SYS_P9804
DBI_FK_PART         SYS_P9805
DBI_FK_PART         SYS_P9806
DBI_FK_PART         SYS_P9807

TABLE_NAME          PARTITION_NAME
------------------- --------------------------------------------------------------------------------------------------------------------------------
DBI_FK_PART         SYS_P9808
DBI_FK_PART         SYS_P9809
DBI_FK_PART         SYS_P9810
DBI_FK_PART         SYS_P9811
DBI_FK_PART         SYS_P9812
DBI_FK_PART         SYS_P9813
DBI_FK_PART         SYS_P9814
DBI_FK_PART         SYS_P9815
DBI_FK_PART         SYS_P9816
DBI_FK_PART         SYS_P9817
DBI_FK_PART         SYS_P9818

TABLE_NAME          PARTITION_NAME
------------------- --------------------------------------------------------------------------------------------------------------------------------
DBI_FK_PART         SYS_P9819
DBI_FK_PART         SYS_P9820
DBI_FK_PART         SYS_P9821
DBI_FK_PART         SYS_P9822
DBI_FK_PART         SYS_P9823
DBI_FK_PART         SYS_P9824
DBI_FK_PART         SYS_P9825
DBI_FK_PART         SYS_P9826
DBI_FK_PART         SYS_P9827
DBI_FK_PART         SYS_P9828

32 rows selected.


--TABLE DBI_FK_NOPART --> NOT PARTITIONED

SQL> select TABLE_NAME,PARTITION_NAME from dba_tab_partitions where table_name = 'DBI_FK_NOPART';

no rows selected

SQL>

Each table contains around 1.2 billion rows:

SQL> select count(*) from xxxx.dbi_fk_nopart;

  COUNT(*)
----------
1241226011

1 row selected.

SQL> select count(*) from xxxx.dbi_fk_part;

  COUNT(*)
----------
1196189234

1 row selected.

Let’s check the maximum primary key of both tables:

SQL> select max(pkey) from xxxx.dbi_fk_part;

 MAX(PKEY)
----------
9950649803

1 row selected.

SQL> select max(pkey) from xxxx.dbi_fk_nopart;

 MAX(PKEY)
----------
9960649804

1 row selected.

SQL>

Let’s create 2 procedures:

  • “test_insert_nopart”, which inserts into the non-partitioned table DBI_FK_NOPART
  • “test_insert_part”, which inserts into the partitioned table DBI_FK_PART
create or replace NONEDITIONABLE procedure test_insert_nopart is

v_sql_stmt varchar2(32767);
v_pkey number;
l_chunk_id NUMBER;
l_start_id NUMBER;
  l_end_id   NUMBER;
  l_any_rows BOOLEAN;
  l_try      NUMBER;
  l_status   NUMBER;
begin
    DBMS_OUTPUT.PUT_LINE('start : '||to_char(sysdate,'hh24:mi:ss'));

   begin
     DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'TASK_NAME');
   exception when others then null;
   end;

   DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'TASK_NAME');
   --Create the chunks (the chunk query returns 10000 rows, hence up to 10000 chunks)
   DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(task_name => 'TASK_NAME',sql_stmt =>'SELECT NUM_COL,NUM_COL+10 FROM TEST_TAB WHERE ROWNUM < 10001', by_rowid => false);   

   SELECT max(pkey) into v_pkey FROM XXXX.DBI_FK_NOPART;
   --Each chunk inserts 1000 rows; each chunk runs in its own session
   v_sql_stmt := 'declare
       s varchar2(16000); vstart_id number := :start_id; vend_id number:= :end_id;
       table_name varchar2(30);
       v_pkey number;
       begin
         EXECUTE IMMEDIATE ''SELECT max(pkey) FROM XXXX.DBI_FK_NOPART'' INTO v_pkey;
         for rec in 1..1000 loop
         s:=''INSERT /*TEST_INSERT_DBI_FK_NOPART*/ INTO XXXX.DBI_FK_NOPART ( 
        pkey,
        boid,
        metabo,
        lastupdate,
        processid,
        rowcomment,
        created,
        createduser,
        replaced,
        replaceduser,
        archivetag,
        mdbid,
        itsforecast,
        betrag,
        itsopdetherkunft,
        itsopdethkerstprm,
        itsfckomppreisseq,
        clsfckomppreisseq,
        issummandendpreis,
        partitiontag,
        partitiondomain,
        fcvprodkomppkey,
        fckvprdankomppkey,
        session_id
        ) VALUES (
        1 +'||v_pkey||' ,
         ''''8189b7c7-0c36-485b-8993-054dddd62708'''' ,
        -695,
        sysdate,
         ''''B.3142'''' ,
        NULL,
        SYSDATE,
         ''''XXXX_DEV_DBITEST'''' ,
        SYSDATE,
        NULL,
        NULL,
        NULL,
        ''''8a9f1321-b3ec-46d5-b6c7-af1c7fb5167G'''' ,
        0,
         ''''ae03b3fc-b31c-433b-be0f-c8b0bdaa82fK'''' ,
        NULL,
         ''''5849f308-215b-486b-95bd-cbd7afe8440H'''',  
        -251,
        0,
        201905,
         ''''E'''',  
        :start_id,
        :end_id,
        SYS_CONTEXT(''''USERENV'''',''''SESSIONID''''))'';
         execute immediate s using vstart_id, vend_id;
         commit;
         end loop;
     end;';
dbms_output.put_Line (v_sql_stmt);

   DBMS_PARALLEL_EXECUTE.RUN_TASK (task_name => 'TASK_NAME',
     sql_stmt =>v_sql_stmt,
     language_flag => DBMS_SQL.NATIVE, parallel_level => 4 );

    DBMS_OUTPUT.PUT_LINE('end : '||to_char(sysdate,'hh24:mi:ss'));

end;


create or replace NONEDITIONABLE procedure test_insert_part is

v_sql_stmt varchar2(32767);
v_pkey number;
l_chunk_id NUMBER;
l_start_id NUMBER;
  l_end_id   NUMBER;
  l_any_rows BOOLEAN;
  l_try      NUMBER;
  l_status   NUMBER;
begin
    DBMS_OUTPUT.PUT_LINE('start : '||to_char(sysdate,'hh24:mi:ss'));

   begin
     DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'TASK_NAME');
   exception when others then null;
   end;

   DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'TASK_NAME');
   --Create the chunks (the chunk query returns 10000 rows, hence up to 10000 chunks)
   DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(task_name => 'TASK_NAME',sql_stmt =>'SELECT NUM_COL,NUM_COL+10 FROM TEST_TAB WHERE ROWNUM < 10001', by_rowid => false);   

   SELECT max(pkey) into v_pkey FROM XXXX.DBI_FK_PART;
   --Each chunk inserts 1000 rows; each chunk runs in its own session
   v_sql_stmt := 'declare
       s varchar2(16000); vstart_id number := :start_id; vend_id number:= :end_id;
       table_name varchar2(30);
       v_pkey number;
       begin
         EXECUTE IMMEDIATE ''SELECT max(pkey) FROM xxxx.DBI_FK_PART'' INTO v_pkey;
         for rec in 1..1000 loop
         s:=''INSERT /*TEST_INSERT_DBI_FK_PART*/ INTO xxxx.DBI_FK_PART ( 
        pkey,
        boid,
        metabo,
        lastupdate,
        processid,
        rowcomment,
        created,
        createduser,
        replaced,
        replaceduser,
        archivetag,
        mdbid,
        itsforecast,
        betrag,
        itsopdetherkunft,
        itsopdethkerstprm,
        itsfckomppreisseq,
        clsfckomppreisseq,
        issummandendpreis,
        partitiontag,
        partitiondomain,
        fcvprodkomppkey,
        fckvprdankomppkey,
        session_id
        ) VALUES (
        1 +'||v_pkey||' ,
         ''''8189b7c7-0c36-485b-8993-054dddd62708'''' ,
        -695,
        sysdate,
         ''''B.3142'''' ,
        NULL,
        SYSDATE,
         ''''xxxx_DBITEST'''' ,
        SYSDATE,
        NULL,
        NULL,
        NULL,
        ''''8a9f1321-b3ec-46d5-b6c7-af1c7fb5167G'''' ,
        0,
         ''''ae03b3fc-b31c-433b-be0f-c8b0bdaa82fK'''' ,
        NULL,
         ''''5849f308-215b-486b-95bd-cbd7afe8440H'''',  
        -251,
        0,
        201905,
         ''''E'''',  
        :start_id,
        :end_id,
        SYS_CONTEXT(''''USERENV'''',''''SESSIONID''''))'';
         execute immediate s using vstart_id, vend_id;
         commit;
         end loop;
     end;';
dbms_output.put_Line (v_sql_stmt);

   DBMS_PARALLEL_EXECUTE.RUN_TASK (task_name => 'TASK_NAME',
     sql_stmt =>v_sql_stmt,
     language_flag => DBMS_SQL.NATIVE, parallel_level => 4 );

    DBMS_OUTPUT.PUT_LINE('end : '||to_char(sysdate,'hh24:mi:ss'));

end;

 

Now let’s insert about 20 million rows into each table via the procedures created above:

SQL> set timing on
SQL> set autotrace on
SQL> begin
  2  test_insert_nopart;
  3  end;
  4  /

PL/SQL procedure successfully completed.

Elapsed: 00:06:30.34
SQL> begin
  2  test_insert_part;
  3  end;
  4
  5  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:22.92


SQL> select max(pkey) from xxxx.dbi_fk_nopart;

 MAX(PKEY)
----------
9980650809

SQL> select 9980650809 - 9960649804 from dual;

9980650809-9960649804
---------------------
             20001005
             
SQL> select max(pkey) from xxxx.dbi_fk_part;

 MAX(PKEY)
----------
9980811483

SQL> select 9980811483 - 9950649803 from dual;

9980811483-9950649803
---------------------
             30161680


FIRST CONCLUSION:

  • About 20 million rows were inserted into the non-partitioned table “DBI_FK_NOPART” in 6 min 30 s
  • About 30 million rows were inserted into the partitioned table “DBI_FK_PART” in about 23 s

In this test, a massive concurrent INSERT into a huge table is dramatically faster on the partitioned table than on the non-partitioned one.

 

Now, let’s check the OEM graphs to understand why the INSERT is about 17 times faster into DBI_FK_PART than into DBI_FK_NOPART.

Between 03:40 PM and 03:46 PM, we can see the peak related to the INSERT into DBI_FK_NOPART.

At 03:49 PM, we can see a much smaller peak related to the INSERT into DBI_FK_PART.

 

 

If we focus only on the INSERT commands (1st and 4th lines), the one into DBI_FK_PART (the partitioned table) waits less on CPU (green) and CONCURRENCY (purple) compared to the INSERT into DBI_FK_NOPART (the non-partitioned table), where I/O is the dominant wait event.

Let’s look in more detail at which events the database waits on for each INSERT:

For the INSERT into DBI_FK_NOPART:

And if we click on the Concurrency event:

For the INSERT into DBI_FK_PART:

And if we click on the Concurrency event:

 

SECOND CONCLUSION

The event “db file sequential read” suggests that the difference in response time between the two tables is due to the type of index created on each table (global partitioned indexes on the partitioned table vs. normal indexes on the non-partitioned table).

Since it is possible to create global partitioned indexes on a non-partitioned table, another interesting test (not done in this blog) would be to replace the normal indexes with global partitioned indexes on the non-partitioned table and check whether the response time improves.

To conclude: if the Partitioning option is licensed, then in terms of performance we should strongly consider partitioning huge tables that are heavily accessed for reads (SELECT) or writes (INSERT).


Import tnsnames.ora in LDAP directory


This post gives a short introduction to directory naming and shows how to import entries from a tnsnames.ora file into an LDAP directory. Finally, as an alternative, you get an example of what a TNS connection string looks like in LDIF file format. LDIF can be used universally to export and import data between LDAP directories.

What is directory naming?

In order to connect to a database, you either pass a full DB connection string or you use an alias to look up that string. When using an alias, there are two ways to connect to an Oracle database:

  1. Local naming
    DB connection strings are looked up locally in a tnsnames.ora file.
    As a variant, the connection string can be embedded “directly” in the application configuration or in a JDBC URL (see the examples after this list).
  2. Directory naming
    DB connection strings are looked up remotely in an LDAP directory.
    LDAP lookup is also possible via JDBC (see the examples after this list).
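
To make the two variants concrete, here is what the corresponding JDBC thin URLs typically look like (host names, ports and service names are placeholders, not values from this setup):

# Local/direct naming: the connect descriptor (or an EZConnect string) is embedded in the URL
jdbc:oracle:thin:@//testdb.company.com:1521/testdb.company.com

# Directory naming: the driver resolves the alias TESTDB against the LDAP server
jdbc:oracle:thin:@ldap://loadbalancer.company.com:1389/TESTDB,cn=OracleContext,dc=company,dc=com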

This diagram summarises all the possibilities :

Directory naming can be compared to DNS: all your aliases live on one central LDAP server (like records on a DNS server).
With local naming, you use a local configuration file, which can be compared to /etc/hosts.
The data structure in both the tnsnames.ora file and the LDAP server is a very simple key-value store: <Alias>=<DB connection string>

Directory naming has some benefits over local naming:

  • There is a single source of “truth” in one central directory, which is much easier to manage: you don’t have to distribute tnsnames.ora files to the clients.
    It’s advisable to run the LDAP service highly available, for example by configuring replication between several LDAP servers and having load balancers distribute the traffic.
  • Every alias is available on every client. This is useful when accessing a DB remotely, for example for remote PDB cloning, DB links, Data Guard Observer, remote RMAN backup & restore/recovery or cloning, etc.

Mass import from tnsnames.ora to LDAP Server

In the following example we are using Oracle Unified Directory (OUD) as the LDAP directory. You are free to choose other LDAP directories for directory naming: OpenLDAP or Active Directory, to name just a few.
Once you have set up OUD for directory naming, you may want to do an initial load with the contents of a local tnsnames.ora file. In the directory your TNS_ADMIN environment variable points to, make sure these three files are present:

sqlnet.ora
contains the SQL*Net configuration; important parameters are:

NAMES.DEFAULT_DOMAIN = company.com # default domain if domains are used
NAMES.DIRECTORY_PATH = (LDAP) # Where to look up aliases; multiple values possible, for example (TNSNAMES, LDAP) to look locally first, then in LDAP

 

ldap.ora
contains information about your LDAP directory; important parameters are:

DIRECTORY_SERVERS     = (loadbalancer.company.com:1389) # Address of the LDAP server
DEFAULT_ADMIN_CONTEXT = "dc=company,dc=com" # Context where alias entries are stored
DIRECTORY_SERVER_TYPE = OID # Type of LDAP server

 

tnsnames.ora
contains aliases and connection strings, for example:

TESTDB = (DESCRIPTION=
           (ADDRESS_LIST=
              (ADDRESS=
                  (PROTOCOL=TCP)
                  (Host=testdb.company.com)
                  (Port=1521)
              )
           )
           (CONNECT_DATA=
              (SERVICE_NAME=testdb.company.com)
           )
        )

 

On the client from which you would like to do the mass import, make sure you have either the Oracle Client or the RDBMS software installed. Here, I’m using Oracle Client 12.2 64-bit on Windows. It’s quite an old version, but it fits our purpose. Both the Oracle Client and the RDBMS software ship a tool called “Net Manager”. By reading the configuration files mentioned above, it is able to connect both to remote LDAP servers and to the local tnsnames.ora file.

Start Net Manager. In the menu, choose “COMMAND – DIRECTORY – EXPORT NET SERVICE NAMES”:

After authenticating to the LDAP server, a wizard shows up in which you can choose the aliases to add to the LDAP directory:

Alternative: Use LDIF to export and import LDAP data

The LDIF file format is the easiest way to transfer data between LDAP directories. It is also possible to create LDIF files containing TNS data. See the following example LDIF record for the alias “TESTDB”:

dn: CN=TESTDB,cn=OracleContext,dc=company,dc=com
orclNetDescString: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(Host=testdb.company.com)(Port=1521)))(CONNECT_DATA=(SERVICE_NAME=testdb.company.com)))
objectClass: top
objectClass: orclNetService
CN: TESTDB

Aliases are stored in the LDAP context “cn=OracleContext,dc=company,dc=com”.
“orclNetDescString” contains the actual DB connection string.
“CN” contains the alias value and is used to look up the connection string.
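
If you go the LDIF route, a minimal import sketch with a standard LDAP command-line client could look like this (host, port, bind DN and file name are assumptions for your environment):

# Add the record(s) from the LDIF file to the directory (-a = add new entries)
ldapmodify -h loadbalancer.company.com -p 1389 -D "cn=Directory Manager" -w <password> -a -f testdb.ldif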


Creating KVM Database System on separate VLAN network on ODA


The Oracle Database Appliance (ODA) offers various possibilities for creating databases, either on the bare metal or using KVM DB Systems. Each DB System hosts a single database in a separate VM. What about running each DB System on a separate network? In this blog I would like to share my tests and findings on how to create additional networks on the ODA and how to create a DB System on a separate VLAN interface. The principle would of course be the same if we wanted to create KVM virtual machines (compute instances) on separate VLANs.

Configuration description

For my test I have an ODA X8-2M with a quad-port 10GBase-T network interface, running ODA version 19.13.
On my network card, the first 2 ports, p7p1 and p7p2, are assigned to btbond1, and the next 2 ports, p7p3 and p7p4, are assigned to my second bonding interface, btbond2. ODAs are configured by default with active-backup mode without LACP for all bonding interfaces. This is set up automatically by the ODA and cannot be changed. Moreover, keep in mind that on an appliance we never manually change the Linux network scripts: all network configuration changes need to be done with odacli.

btbond1 is used for my main network, and we will use btbond2 to add the additional networks.

The 2 additional networks are:
10.38.0.1/24 VLAN id 38
10.39.0.1/24 VLAN id 39

Checking the network interfaces

With ethtool I can check that ports p7p3 and p7p4 are twisted pair and connected to the network:

[root@dbi-oda-x8 ~]# ethtool p7p3 | grep -iE "(\bport\b|detected)"
	Port: Twisted Pair
	Link detected: yes

[root@dbi-oda-x8 ~]# ethtool p7p4 | grep -iE "(\bport\b|detected)"
	Port: Twisted Pair
	Link detected: yes

I can see that both interfaces are assigned to the btbond2 interface:

[root@dbi-oda-x8 ~]# grep MASTER /etc/sysconfig/network-scripts/ifcfg-p7p3
MASTER=btbond2

[root@dbi-oda-x8 ~]# grep MASTER /etc/sysconfig/network-scripts/ifcfg-p7p4
MASTER=btbond2

Checking the existing, default ODA configuration

After reimaging, an ODA has the default configuration below:

[root@dbi-oda-x8 ~]# odacli list-networks

ID                                     Name                 NIC          Interface Type  Subnet Mask        Gateway            VLAN ID  Node Networks
-------------------------------------- -------------------- ------------ --------------- ------------------ ------------------ -------- -----------------------
bb05b06c-52eb-41ce-8ed8-92ec712db61d   Public-network       pubnet       BRIDGE          255.255.255.0      10.36.0.1                   [IP Address on node0: 10.36.0.241]
95ec74b7-4f6f-4cda-9b0c-5bcac731666b   ASM-network          privasm      BRIDGE          255.255.255.128                                [IP Address on node0: 192.168.17.2]
e77a33d0-2b40-4562-b802-c02cadd93b25   Private-network      priv0        INTERNAL        255.255.255.240                                [IP Address on node0: 192.168.16.24]

[root@dbi-oda-x8 ~]# odacli list-vnetworks
Name                  Type             Interface        Bridge                Uniform   Created                  Updated
--------------------  ---------------  ---------------  --------------------  --------  -----------------------  -----------------------
pubnet                Bridged          btbond1          pubnet                NO        2022-01-28 08:54:55 CET  2022-01-28 08:54:55 CET

[root@dbi-oda-x8 ~]#

To use a network interface with the VMs, either a DB System or a compute instance, the network interface needs to be created as a vnetwork.

VLAN Tagged versus Untagged

If a port on a switch is configured as tagged (a trunk port in the Cisco world), the connected equipment (here our ODA) is VLAN-aware and needs to tag its traffic as well. The port is enabled for VLAN tagging, and its purpose is to carry traffic for multiple VLANs. Each Ethernet frame encloses an additional VLAN header; the connected equipment adds the VLAN information to the frame.

If a port is configured as untagged (an access port in the Cisco world), the connected equipment (here our ODA) does not know which VLAN it is in and does not need to care: the switch manages it on its own. The port does not tag frames and carries only a single VLAN. Ethernet frames arriving at the connected equipment carry no VLAN header.

This is specified in the 802.1Q standard.

Most of the time, trunk ports link switches together and access ports link to end devices, but here we want to use several VLAN networks on the same ODA network interface.
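
As a quick way to see the difference on the wire, you can capture frames together with their link-level headers; on a tagged port the 802.1Q header shows up in the output. A minimal sketch (the interface name is taken from this setup):

# -e prints link-level headers; tagged frames show e.g. "vlan 38, p 0, ethertype IPv4"
tcpdump -nn -e -i p7p3 vlan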

Creating an additional untagged network

Let’s create an additional untagged physical network on the bare metal itself, on the btbond2 interface. Untagged means that the ODA will not add VLAN information to the Ethernet frames.
The option -t bond is the default and would not strictly be required here.

[root@dbi-oda-x8 ~]# odacli create-network -n btbond2 -p 10.38.0.10 -m untagged1 -s 255.255.255.0 -g 10.38.0.1 -t bond
{
  "jobId" : "9a4476dd-b955-433b-9463-377f66ab737a",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "February 04, 2022 13:48:12 PM CET",
  "resourceList" : [ ],
  "description" : "Rac Network service creation with name untagged1",
  "updatedTime" : "February 04, 2022 13:48:12 PM CET"
}

[root@dbi-oda-x8 ~]# odacli describe-job -i 9a4476dd-b955-433b-9463-377f66ab737a

Job details
----------------------------------------------------------------
                     ID:  9a4476dd-b955-433b-9463-377f66ab737a
            Description:  Rac Network service creation with name untagged1
                 Status:  Success
                Created:  February 4, 2022 1:48:12 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting network                          February 4, 2022 1:48:12 PM CET     February 4, 2022 1:48:18 PM CET     Success
Setting up Network                       February 4, 2022 1:48:12 PM CET     February 4, 2022 1:48:12 PM CET     Success
restart network interface btbond2        February 4, 2022 1:48:12 PM CET     February 4, 2022 1:48:18 PM CET     Success

[root@dbi-oda-x8 ~]# odacli list-networks

ID                                     Name                 NIC          Interface Type  Subnet Mask        Gateway            VLAN ID  Node Networks
-------------------------------------- -------------------- ------------ --------------- ------------------ ------------------ -------- -----------------------
bb05b06c-52eb-41ce-8ed8-92ec712db61d   Public-network       pubnet       BRIDGE          255.255.255.0      10.36.0.1                   [IP Address on node0: 10.36.0.241]
95ec74b7-4f6f-4cda-9b0c-5bcac731666b   ASM-network          privasm      BRIDGE          255.255.255.128                                [IP Address on node0: 192.168.17.2]
e77a33d0-2b40-4562-b802-c02cadd93b25   Private-network      priv0        INTERNAL        255.255.255.240                                [IP Address on node0: 192.168.16.24]
6171aad7-e247-4b08-a56e-83bdedd74af1   untagged1            btbond2      BOND            255.255.255.0      10.38.0.1                   [IP Address on node0: 10.38.0.10]

[root@dbi-oda-x8 ~]# odacli list-vnetworks
Name                  Type             Interface        Bridge                Uniform   Created                  Updated
--------------------  ---------------  ---------------  --------------------  --------  -----------------------  -----------------------
pubnet                Bridged          btbond1          pubnet                NO        2022-01-28 08:54:55 CET  2022-01-28 08:54:55 CET

I can see that our new network exists in the network list but, of course, not in the vnetwork list.

I can also see that the btbond2 interface has been configured by odacli as an untagged bonding interface with the appropriate settings.

[root@dbi-oda-x8 ~]# ip addr sh btbond2
9: btbond2:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3c:fd:fe:92:80:1a brd ff:ff:ff:ff:ff:ff
    inet 10.38.0.10/24 brd 10.38.0.255 scope global btbond2
       valid_lft forever preferred_lft forever

[root@dbi-oda-x8 ~]# ls -ltrh /etc/sysconfig/network-scripts/ifcfg*btbond2*
-rw-r--r--. 1 root root 262 Feb  4 13:48 /etc/sysconfig/network-scripts/ifcfg-btbond2

[root@dbi-oda-x8 ~]# cat /etc/sysconfig/network-scripts/ifcfg-btbond2
#This file was created by ODA. Do not edit.
NETMASK=255.255.255.0
GATEWAY=10.38.0.1
BOOTPROTO=none
PEERDNS=no
DEVICE=btbond2
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=10.38.0.10
BONDING_OPTS="mode=active-backup miimon=100 primary=p7p3"
IPV6INIT=no
USERCTL=no
TYPE=BOND
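
As a read-only check (remember: we never edit the network scripts manually on an ODA, but looking is fine), the bonding mode and the currently active slave can be verified through procfs:

# Shows "Bonding Mode: fault-tolerance (active-backup)", the primary slave (p7p3) and the link state of each slave
cat /proc/net/bonding/btbond2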

Of course, it is not possible to create another untagged network on the btbond2 interface, since one already exists:

[root@dbi-oda-x8 ~]# odacli create-network -n btbond2 -p 10.39.0.10 -m untagged2 -s 255.255.255.0 -g 10.39.0.1 -t bond
DCS-10001:Internal error encountered: nicnamebtbond2 already exists in the networks list .. .

Creating an additional tagged network is not possible either. Note the -t and -v options used to create tagged networks on the ODA:

[root@dbi-oda-x8 ~]# odacli create-network -n btbond2 -p 10.39.0.10 -m tagged2 -s 255.255.255.0 -g 10.39.0.1 -t VLAN -v 39
DCS-10001:Internal error encountered: Creating vlan in the interface btbond2 is not allowed. Physical network untagged1 already exists in interface btbond2.

Let’s delete our additional untagged network:

[root@dbi-oda-x8 ~]# odacli delete-network -m untagged1
{
  "jobId" : "3d7a033f-fd9c-4632-a6f1-b80f7d3fcf57",
  "status" : "Running",
  "message" : null,
  "reports" : [ {
    "taskId" : "TaskSequential_10041",
    "taskName" : "deleting network",
    "taskResult" : "",
    "startTime" : "February 04, 2022 14:08:32 PM CET",
    "endTime" : "February 04, 2022 14:08:32 PM CET",
    "status" : "Running",
    "taskDescription" : null,
    "parentTaskId" : "TaskSequential_10039",
    "jobId" : "3d7a033f-fd9c-4632-a6f1-b80f7d3fcf57",
    "tags" : [ ],
    "reportLevel" : "Info",
    "updatedTime" : "February 04, 2022 14:08:32 PM CET"
  }, {
    "taskId" : "TaskZJsonRpcExt_10045",
    "taskName" : "Setting up Network",
    "taskResult" : "Network setup success",
    "startTime" : "February 04, 2022 14:08:32 PM CET",
    "endTime" : "February 04, 2022 14:08:32 PM CET",
    "status" : "Success",
    "taskDescription" : null,
    "parentTaskId" : "TaskSequential_10041",
    "jobId" : "3d7a033f-fd9c-4632-a6f1-b80f7d3fcf57",
    "tags" : [ ],
    "reportLevel" : "Info",
    "updatedTime" : "February 04, 2022 14:08:32 PM CET"
  }, {
    "taskId" : "TaskZJsonRpcExt_10048",
    "taskName" : "restart network interface btbond2",
    "taskResult" : "",
    "startTime" : "February 04, 2022 14:08:32 PM CET",
    "endTime" : "February 04, 2022 14:08:32 PM CET",
    "status" : "Running",
    "taskDescription" : null,
    "parentTaskId" : "TaskSequential_10047",
    "jobId" : "3d7a033f-fd9c-4632-a6f1-b80f7d3fcf57",
    "tags" : [ ],
    "reportLevel" : "Info",
    "updatedTime" : "February 04, 2022 14:08:32 PM CET"
  } ],
  "createTimestamp" : "February 04, 2022 14:08:32 PM CET",
  "resourceList" : [ {
    "resourceId" : "6171aad7-e247-4b08-a56e-83bdedd74af1",
    "resourceType" : null,
    "resourceNewType" : "Network",
    "jobId" : "3d7a033f-fd9c-4632-a6f1-b80f7d3fcf57",
    "updatedTime" : null
  } ],
  "description" : "Network service deleteRacNetwork with id 6171aad7-e247-4b08-a56e-83bdedd74af1",
  "updatedTime" : "February 04, 2022 14:08:32 PM CET"
}

[root@dbi-oda-x8 ~]# odacli describe-job -i 3d7a033f-fd9c-4632-a6f1-b80f7d3fcf57

Job details
----------------------------------------------------------------
                     ID:  3d7a033f-fd9c-4632-a6f1-b80f7d3fcf57
            Description:  Network service deleteRacNetwork with id 6171aad7-e247-4b08-a56e-83bdedd74af1
                 Status:  Success
                Created:  February 4, 2022 2:08:32 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
deleting network                         February 4, 2022 2:08:32 PM CET     February 4, 2022 2:08:34 PM CET     Success
Setting up Network                       February 4, 2022 2:08:32 PM CET     February 4, 2022 2:08:32 PM CET     Success
restart network interface btbond2        February 4, 2022 2:08:32 PM CET     February 4, 2022 2:08:34 PM CET     Success

The network has been deleted:

[root@dbi-oda-x8 ~]# odacli list-networks

ID                                     Name                 NIC          Interface Type  Subnet Mask        Gateway            VLAN ID  Node Networks
-------------------------------------- -------------------- ------------ --------------- ------------------ ------------------ -------- -----------------------
bb05b06c-52eb-41ce-8ed8-92ec712db61d   Public-network       pubnet       BRIDGE          255.255.255.0      10.36.0.1                   [IP Address on node0: 10.36.0.241]
95ec74b7-4f6f-4cda-9b0c-5bcac731666b   ASM-network          privasm      BRIDGE          255.255.255.128                                [IP Address on node0: 192.168.17.2]
e77a33d0-2b40-4562-b802-c02cadd93b25   Private-network      priv0        INTERNAL        255.255.255.240                                [IP Address on node0: 192.168.16.24]

Creating additional tagged networks

Let’s create the first tagged network on the btbond2 interface:

[root@dbi-oda-x8 ~]# odacli create-network -n btbond2 -p 10.38.0.10 -m tagged38 -s 255.255.255.0 -g 10.38.0.1 -t VLAN -v 38
{
  "jobId" : "b3dd6d7b-7ee7-4418-afc6-fd71af0a01bc",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "February 04, 2022 14:40:28 PM CET",
  "resourceList" : [ ],
  "description" : "Rac Network service creation with name tagged38",
  "updatedTime" : "February 04, 2022 14:40:28 PM CET"
}

[root@dbi-oda-x8 ~]# odacli describe-job -i b3dd6d7b-7ee7-4418-afc6-fd71af0a01bc

Job details
----------------------------------------------------------------
                     ID:  b3dd6d7b-7ee7-4418-afc6-fd71af0a01bc
            Description:  Rac Network service creation with name tagged38
                 Status:  Success
                Created:  February 4, 2022 2:40:28 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting network                          February 4, 2022 2:40:28 PM CET     February 4, 2022 2:40:33 PM CET     Success
Setting up Vlan                          February 4, 2022 2:40:28 PM CET     February 4, 2022 2:40:33 PM CET     Success

The tagged network has been created:

[root@dbi-oda-x8 ~]# odacli list-networks

ID                                     Name                 NIC          Interface Type  Subnet Mask        Gateway            VLAN ID  Node Networks
-------------------------------------- -------------------- ------------ --------------- ------------------ ------------------ -------- -----------------------
bb05b06c-52eb-41ce-8ed8-92ec712db61d   Public-network       pubnet       BRIDGE          255.255.255.0      10.36.0.1                   [IP Address on node0: 10.36.0.241]
95ec74b7-4f6f-4cda-9b0c-5bcac731666b   ASM-network          privasm      BRIDGE          255.255.255.128                                [IP Address on node0: 192.168.17.2]
e77a33d0-2b40-4562-b802-c02cadd93b25   Private-network      priv0        INTERNAL        255.255.255.240                                [IP Address on node0: 192.168.16.24]
fbe05bfa-636e-4f9c-a348-c59ba23e2296   tagged38             btbond2.38   VLAN            255.255.255.0      10.38.0.1          38       [IP Address on node0: 10.38.0.10]

And we can see that the corresponding tagged network interface has been created on the Linux operating system side (note the <bond>.<vlan_id> naming):

[root@dbi-oda-x8 ~]# ls -ltrh /etc/sysconfig/network-scripts/ifcfg*btbond2*
-rw-r--r--. 1 root root 239 Feb  4 14:38 /etc/sysconfig/network-scripts/ifcfg-btbond2
-rw-r--r--  1 root root 520 Feb  4 14:40 /etc/sysconfig/network-scripts/ifcfg-btbond2.38

[root@dbi-oda-x8 ~]# cat /etc/sysconfig/network-scripts/ifcfg-btbond2.38
#ODA_VLAN_CONFIG ===
#ODA_VLAN_CONFIG Name=tagged38
#ODA_VLAN_CONFIG VlanId=38
#ODA_VLAN_CONFIG VlanInterface=btbond2
#ODA_VLAN_CONFIG Type=VlanType
#ODA_VLAN_CONFIG VlanSetupType=Other
#ODA_VLAN_CONFIG VlanIpAddr=10.38.0.10
#ODA_VLAN_CONFIG VlanNetmask=255.255.255.0
#ODA_VLAN_CONFIG VlanGateway=10.38.0.1
#ODA_VLAN_CONFIG NodeNum=0
#=== DO NOT EDIT ANYTHING ABOVE THIS LINE ===
DEVICE=btbond2.38
BOOTPROTO=none
ONBOOT=yes
VLAN=yes
NM_CONTROLLED=no
DEFROUTE=no
IPADDR=10.38.0.10
NETMASK=255.255.255.0
GATEWAY=10.38.0.1
[root@dbi-oda-x8 ~]#

And I can reach the ODA on the new network:

C:\Users>ping 10.38.0.10

Pinging 10.38.0.10 with 32 bytes of data:
Reply from 10.38.0.10: bytes=32 time=1ms TTL=63
Reply from 10.38.0.10: bytes=32 time<1ms TTL=63
Reply from 10.38.0.10: bytes=32 time<1ms TTL=63
Reply from 10.38.0.10: bytes=32 time<1ms TTL=63

Ping statistics for 10.38.0.10:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 1ms, Average = 0ms

The ODA will not permit creating any additional untagged network on the btbond2 interface once a tagged network exists:

[root@dbi-oda-x8 ~]# odacli create-network -n btbond2 -p 10.39.0.10 -m untagged2 -s 255.255.255.0 -g 10.39.0.1 -t bond
DCS-10001:Internal error encountered: Creating non-VLAN typed network on the interface btbond2 is not allowed. VLAN tagged38 already exists in interface btbond2.

Let’s create the second tagged network on the same btbond2 interface:

[root@dbi-oda-x8 ~]# odacli create-network -n btbond2 -p 10.39.0.10 -m tagged39 -s 255.255.255.0 -g 10.39.0.1 -t VLAN -v 39
{
  "jobId" : "e0452663-544c-47c4-8b9b-5fbe6e0a5cd9",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "February 04, 2022 14:45:18 PM CET",
  "resourceList" : [ ],
  "description" : "Rac Network service creation with name tagged39",
  "updatedTime" : "February 04, 2022 14:45:18 PM CET"
}

[root@dbi-oda-x8 ~]# odacli describe-job -i e0452663-544c-47c4-8b9b-5fbe6e0a5cd9

Job details
----------------------------------------------------------------
                     ID:  e0452663-544c-47c4-8b9b-5fbe6e0a5cd9
            Description:  Rac Network service creation with name tagged39
                 Status:  Success
                Created:  February 4, 2022 2:45:18 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting network                          February 4, 2022 2:45:18 PM CET     February 4, 2022 2:45:23 PM CET     Success
Setting up Vlan                          February 4, 2022 2:45:18 PM CET     February 4, 2022 2:45:23 PM CET     Success

The new tagged network has been created, and we now have 2 tagged networks running on the same btbond2 interface:

[root@dbi-oda-x8 ~]# odacli list-networks

ID                                     Name                 NIC          Interface Type  Subnet Mask        Gateway            VLAN ID  Node Networks
-------------------------------------- -------------------- ------------ --------------- ------------------ ------------------ -------- -----------------------
bb05b06c-52eb-41ce-8ed8-92ec712db61d   Public-network       pubnet       BRIDGE          255.255.255.0      10.36.0.1                   [IP Address on node0: 10.36.0.241]
95ec74b7-4f6f-4cda-9b0c-5bcac731666b   ASM-network          privasm      BRIDGE          255.255.255.128                                [IP Address on node0: 192.168.17.2]
e77a33d0-2b40-4562-b802-c02cadd93b25   Private-network      priv0        INTERNAL        255.255.255.240                                [IP Address on node0: 192.168.16.24]
fbe05bfa-636e-4f9c-a348-c59ba23e2296   tagged38             btbond2.38   VLAN            255.255.255.0      10.38.0.1          38       [IP Address on node0: 10.38.0.10]
dc11ecbe-c4be-4e18-9c37-9f6360b37ee1   tagged39             btbond2.39   VLAN            255.255.255.0      10.39.0.1          39       [IP Address on node0: 10.39.0.10]

The corresponding tagged interface has been configured by odacli on the Linux operating system:

[root@dbi-oda-x8 ~]# ls -ltrh /etc/sysconfig/network-scripts/ifcfg*btbond2*
-rw-r--r--. 1 root root 239 Feb  4 14:38 /etc/sysconfig/network-scripts/ifcfg-btbond2
-rw-r--r--  1 root root 520 Feb  4 14:40 /etc/sysconfig/network-scripts/ifcfg-btbond2.38
-rw-r--r--  1 root root 520 Feb  4 14:45 /etc/sysconfig/network-scripts/ifcfg-btbond2.39

[root@dbi-oda-x8 ~]# cat /etc/sysconfig/network-scripts/ifcfg-btbond2.39
#ODA_VLAN_CONFIG ===
#ODA_VLAN_CONFIG Name=tagged39
#ODA_VLAN_CONFIG VlanId=39
#ODA_VLAN_CONFIG VlanInterface=btbond2
#ODA_VLAN_CONFIG Type=VlanType
#ODA_VLAN_CONFIG VlanSetupType=Other
#ODA_VLAN_CONFIG VlanIpAddr=10.39.0.10
#ODA_VLAN_CONFIG VlanNetmask=255.255.255.0
#ODA_VLAN_CONFIG VlanGateway=10.39.0.1
#ODA_VLAN_CONFIG NodeNum=0
#=== DO NOT EDIT ANYTHING ABOVE THIS LINE ===
DEVICE=btbond2.39
BOOTPROTO=none
ONBOOT=yes
VLAN=yes
NM_CONTROLLED=no
DEFROUTE=no
IPADDR=10.39.0.10
NETMASK=255.255.255.0
GATEWAY=10.39.0.1

And I can ping my new network as well:

C:\Users>ping 10.39.0.10

Pinging 10.39.0.10 with 32 bytes of data:
Reply from 10.39.0.10: bytes=32 time=2ms TTL=63
Reply from 10.39.0.10: bytes=32 time<1ms TTL=63
Reply from 10.39.0.10: bytes=32 time<1ms TTL=63
Reply from 10.39.0.10: bytes=32 time<1ms TTL=63

Ping statistics for 10.39.0.10:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 2ms, Average = 0ms

I can now also check my global btbond2 configuration on the Linux side:

[root@dbi-oda-x8 ~]# ip addr sh | grep -iE "(btbond2|btbond2.38|btbond2.39)"
4: p7p3:  mtu 1500 qdisc mq master btbond2 state UP group default qlen 1000
5: p7p4:  mtu 1500 qdisc mq master btbond2 state UP group default qlen 1000
9: btbond2:  mtu 1500 qdisc noqueue state UP group default qlen 1000
57: btbond2.38@btbond2:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.38.0.10/24 brd 10.38.0.255 scope global btbond2.38
58: btbond2.39@btbond2:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.39.0.10/24 brd 10.39.0.255 scope global btbond2.39
[root@dbi-oda-x8 ~]#

Creating virtual networks

Since I need to use these networks for the next DB Systems, I have deleted the physical networks created above, and I’m going to create the same 2 tagged networks as virtual networks on the btbond2 interface:

[root@dbi-oda-x8 ~]# odacli create-vnetwork -n tagged38 -if btbond2 -t bridged-vlan -ip 10.38.0.10 -nm 255.255.255.0 -vlan 38 -gw 10.38.0.1

Job details
----------------------------------------------------------------
                     ID:  eb57ff9b-c9b0-4e1d-bc51-162a42ea4fb1
            Description:  vNetwork tagged38 creation
                 Status:  Created
                Created:  February 4, 2022 3:19:02 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@dbi-oda-x8 ~]# odacli describe-job -i eb57ff9b-c9b0-4e1d-bc51-162a42ea4fb1

Job details
----------------------------------------------------------------
                     ID:  eb57ff9b-c9b0-4e1d-bc51-162a42ea4fb1
            Description:  vNetwork tagged38 creation
                 Status:  Success
                Created:  February 4, 2022 3:19:02 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate Virtual Network doesn't exist   February 4, 2022 3:19:02 PM CET     February 4, 2022 3:19:02 PM CET     Success
Validate interface to use exists         February 4, 2022 3:19:02 PM CET     February 4, 2022 3:19:02 PM CET     Success
Validate interfaces to create not exist  February 4, 2022 3:19:02 PM CET     February 4, 2022 3:19:02 PM CET     Success
Create bridge                            February 4, 2022 3:19:02 PM CET     February 4, 2022 3:19:02 PM CET     Success
Create VLAN                              February 4, 2022 3:19:02 PM CET     February 4, 2022 3:19:02 PM CET     Success
Bring up VLAN                            February 4, 2022 3:19:02 PM CET     February 4, 2022 3:19:07 PM CET     Success
Create metadata                          February 4, 2022 3:19:07 PM CET     February 4, 2022 3:19:07 PM CET     Success
Persist metadata                         February 4, 2022 3:19:07 PM CET     February 4, 2022 3:19:07 PM CET     Success

[root@dbi-oda-x8 ~]# odacli create-vnetwork -n tagged39 -if btbond2 -t bridged-vlan -ip 10.39.0.10 -nm 255.255.255.0 -vlan 39 -gw 10.39.0.1

Job details
----------------------------------------------------------------
                     ID:  15cf889f-9c6e-4e0c-a676-b2203e40cfd2
            Description:  vNetwork tagged39 creation
                 Status:  Created
                Created:  February 4, 2022 3:19:40 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@dbi-oda-x8 ~]# odacli describe-job -i 15cf889f-9c6e-4e0c-a676-b2203e40cfd2

Job details
----------------------------------------------------------------
                     ID:  15cf889f-9c6e-4e0c-a676-b2203e40cfd2
            Description:  vNetwork tagged39 creation
                 Status:  Success
                Created:  February 4, 2022 3:19:40 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate Virtual Network doesn't exist   February 4, 2022 3:19:40 PM CET     February 4, 2022 3:19:40 PM CET     Success
Validate interface to use exists         February 4, 2022 3:19:40 PM CET     February 4, 2022 3:19:40 PM CET     Success
Validate interfaces to create not exist  February 4, 2022 3:19:40 PM CET     February 4, 2022 3:19:40 PM CET     Success
Create bridge                            February 4, 2022 3:19:40 PM CET     February 4, 2022 3:19:40 PM CET     Success
Create VLAN                              February 4, 2022 3:19:40 PM CET     February 4, 2022 3:19:40 PM CET     Success
Bring up VLAN                            February 4, 2022 3:19:40 PM CET     February 4, 2022 3:19:45 PM CET     Success
Create metadata                          February 4, 2022 3:19:45 PM CET     February 4, 2022 3:19:45 PM CET     Success
Persist metadata                         February 4, 2022 3:19:45 PM CET     February 4, 2022 3:19:45 PM CET     Success

My 2 new tagged networks now exist as virtual networks, not as physical networks:

[root@dbi-oda-x8 ~]# odacli list-networks

ID                                     Name                 NIC          Interface Type  Subnet Mask        Gateway            VLAN ID  Node Networks
-------------------------------------- -------------------- ------------ --------------- ------------------ ------------------ -------- -----------------------
bb05b06c-52eb-41ce-8ed8-92ec712db61d   Public-network       pubnet       BRIDGE          255.255.255.0      10.36.0.1                   [IP Address on node0: 10.36.0.241]
95ec74b7-4f6f-4cda-9b0c-5bcac731666b   ASM-network          privasm      BRIDGE          255.255.255.128                                [IP Address on node0: 192.168.17.2]
e77a33d0-2b40-4562-b802-c02cadd93b25   Private-network      priv0        INTERNAL        255.255.255.240                                [IP Address on node0: 192.168.16.24]

[root@dbi-oda-x8 ~]# odacli list-vnetworks
Name                  Type             Interface        Bridge                Uniform   Created                  Updated
--------------------  ---------------  ---------------  --------------------  --------  -----------------------  -----------------------
tagged39              BridgedVlan      btbond2          brtagged39            NO        2022-02-04 15:19:45 CET  2022-02-04 15:19:45 CET
tagged38              BridgedVlan      btbond2          brtagged38            NO        2022-02-04 15:19:07 CET  2022-02-04 15:19:07 CET
pubnet                Bridged          btbond1          pubnet                NO        2022-01-28 08:54:55 CET  2022-01-28 08:54:55 CET

The tagged network interfaces have been created by the dcs-agent on the Linux operating system side:

[root@dbi-oda-x8 ~]# ls -ltrh /etc/sysconfig/network-scripts/ifcfg*btbond2*
-rw-r--r--. 1 root root 239 Feb  4 14:38 /etc/sysconfig/network-scripts/ifcfg-btbond2
-rw-r--r--  1 root root 145 Feb  4 15:19 /etc/sysconfig/network-scripts/ifcfg-btbond2.38
-rw-r--r--  1 root root 145 Feb  4 15:19 /etc/sysconfig/network-scripts/ifcfg-btbond2.39

[root@dbi-oda-x8 ~]# cat /etc/sysconfig/network-scripts/ifcfg-btbond2.38
#This file was created by ODA. Do not edit.
DEVICE=btbond2.38
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
VLAN=yes
ONPARENT=yes
BRIDGE=brtagged38

[root@dbi-oda-x8 ~]# cat /etc/sysconfig/network-scripts/ifcfg-btbond2.39
#This file was created by ODA. Do not edit.
DEVICE=btbond2.39
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
VLAN=yes
ONPARENT=yes
BRIDGE=brtagged39

On the Linux side, I can see that the respective IP addresses have not been assigned to the physical interfaces:

[root@dbi-oda-x8 ~]# ip addr sh | grep -iE "(btbond2|btbond2.38|btbond2.39)"
4: p7p3:  mtu 1500 qdisc mq master btbond2 state UP group default qlen 1000
5: p7p4:  mtu 1500 qdisc mq master btbond2 state UP group default qlen 1000
9: btbond2:  mtu 1500 qdisc noqueue state UP group default qlen 1000
60: btbond2.38@btbond2:  mtu 1500 qdisc noqueue master brtagged38 state UP group default qlen 1000
62: btbond2.39@btbond2:  mtu 1500 qdisc noqueue master brtagged39 state UP group default qlen 1000

But to the new bridge interfaces linked to the tagged btbond2 sub-interfaces:

[root@dbi-oda-x8 ~]# ip addr sh | grep -iE "(tagged38|tagged39)"
59: brtagged38:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.38.0.10/24 brd 10.38.0.255 scope global brtagged38
60: btbond2.38@btbond2:  mtu 1500 qdisc noqueue master brtagged38 state UP group default qlen 1000
61: brtagged39:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.39.0.10/24 brd 10.39.0.255 scope global brtagged39
62: btbond2.39@btbond2:  mtu 1500 qdisc noqueue master brtagged39 state UP group default qlen 1000

And I can ping them:

C:\Users>ping 10.38.0.10

Pinging 10.38.0.10 with 32 bytes of data:
Reply from 10.38.0.10: bytes=32 time=2ms TTL=63
Reply from 10.38.0.10: bytes=32 time<1ms TTL=63
Reply from 10.38.0.10: bytes=32 time<1ms TTL=63

C:\Users>ping 10.39.0.10

Pinging 10.39.0.10 with 32 bytes of data:
Reply from 10.39.0.10: bytes=32 time=2ms TTL=63
Reply from 10.39.0.10: bytes=32 time<1ms TTL=63
Reply from 10.39.0.10: bytes=32 time=1ms TTL=63
Reply from 10.39.0.10: bytes=32 time<1ms TTL=63

Ping statistics for 10.39.0.10:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 2ms, Average = 0ms

I can see the network bridge information from the operating system as well:

[root@dbi-oda-x8 ~]# brctl show
bridge name     bridge id           STP enabled     interfaces
brtagged38      8000.5254003bdc76   no              btbond2.38
                                                    vnet2
brtagged39      8000.52540060eb57   no              btbond2.39
                                                    vnet4
privasm         8000.5a8241cdb508   no              priv0.100
                                                    vnet1
                                                    vnet3
pubnet          8000.3cfdfe928018   no              btbond1
                                                    vnet0
virbr0          8000.525400dc8c09   yes             virbr0-nic

Creating 2 DB Systems using the tagged38 and tagged39 virtual networks respectively

My repository has already been updated with the KVM DB System image:

[root@dbi-oda-x8 ~]# odacli describe-dbsystem-image
DB System Image details
--------------------------------------------------------------------------------
Component Name        Supported Versions    Available Versions
--------------------  --------------------  --------------------

DBVM                  19.13.0.0.0           19.13.0.0.0

GI                    19.13.0.0.211019      19.13.0.0.211019
                      19.12.0.0.210720      not-available
                      19.11.0.0.210420      not-available
                      21.4.0.0.211019       not-available
                      21.3.0.0.210720       not-available

DB                    19.13.0.0.211019      19.13.0.0.211019
                      19.12.0.0.210720      not-available
                      19.11.0.0.210420      not-available
                      21.4.0.0.211019       not-available
                      21.3.0.0.210720       not-available

I have created the first DB System JSON file, assigning a new IP address from the VLAN 38 network and using the tagged38 virtual network. The DB System’s IP traffic will be bridged through the tagged38 bridge we created on the bare metal itself:

[root@dbi-oda-x8 ~]# cat /opt/dbi/create_dbsystem_srvdb38.json
...
...
...
"network": {
    "domainName": "dbi-lab.ch",
    "ntpServers": ["216.239.35.0"],
    "dnsServers": [
        "8.8.8.8","8.8.4.4"
    ],
    "nodes": [
        {
            "name": "srvdb38",
            "ipAddress": "10.38.0.20",
            "netmask": "255.255.255.0",
            "gateway": "10.38.0.1",
            "number": 0
        }
    ],
"publicVNetwork": "tagged38"
},
"grid": {
    "language": "en"
}
}

Creation of the first DB System on the VLAN 38 network:

[root@dbi-oda-x8 ~]# odacli create-dbsystem -p /opt/dbi/create_dbsystem_srvdb38.json
Enter password for system "srvdb38":
Retype password for system "srvdb38":
Enter administrator password for DB "DB38":
Retype administrator password for DB "DB38":

Job details
----------------------------------------------------------------
                     ID:  ed88ef81-5cb3-4214-ac5c-bc255b67577f
            Description:  DB System srvdb38 creation
                 Status:  Created
                Created:  February 4, 2022 4:12:33 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@dbi-oda-x8 ~]# odacli describe-job -i ed88ef81-5cb3-4214-ac5c-bc255b67577f

Job details
----------------------------------------------------------------
                     ID:  ed88ef81-5cb3-4214-ac5c-bc255b67577f
            Description:  DB System srvdb38 creation
                 Status:  Success
                Created:  February 4, 2022 4:12:33 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Create DB System metadata                February 4, 2022 4:12:33 PM CET     February 4, 2022 4:12:33 PM CET     Success
Persist new DB System                    February 4, 2022 4:12:33 PM CET     February 4, 2022 4:12:33 PM CET     Success
Validate DB System prerequisites         February 4, 2022 4:12:33 PM CET     February 4, 2022 4:12:37 PM CET     Success
Setup DB System environment              February 4, 2022 4:12:37 PM CET     February 4, 2022 4:12:39 PM CET     Success
Create DB System ASM volume              February 4, 2022 4:12:39 PM CET     February 4, 2022 4:12:45 PM CET     Success
Create DB System ACFS filesystem         February 4, 2022 4:12:45 PM CET     February 4, 2022 4:12:54 PM CET     Success
Create DB System VM ACFS snapshots       February 4, 2022 4:12:54 PM CET     February 4, 2022 4:13:27 PM CET     Success
Create temporary SSH key pair            February 4, 2022 4:13:27 PM CET     February 4, 2022 4:13:27 PM CET     Success
Create DB System cloud-init config       February 4, 2022 4:13:27 PM CET     February 4, 2022 4:13:27 PM CET     Success
Provision DB System VM(s)                February 4, 2022 4:13:27 PM CET     February 4, 2022 4:13:28 PM CET     Success
Attach disks to DB System                February 4, 2022 4:13:28 PM CET     February 4, 2022 4:13:29 PM CET     Success
Add DB System to Clusterware             February 4, 2022 4:13:29 PM CET     February 4, 2022 4:13:29 PM CET     Success
Start DB System                          February 4, 2022 4:13:29 PM CET     February 4, 2022 4:13:30 PM CET     Success
Wait DB System VM first boot             February 4, 2022 4:13:30 PM CET     February 4, 2022 4:17:06 PM CET     Success
Setup Mutual TLS (mTLS)                  February 4, 2022 4:17:06 PM CET     February 4, 2022 4:21:36 PM CET     Success
Export clones repository                 February 4, 2022 4:21:36 PM CET     February 4, 2022 4:21:37 PM CET     Success
Setup ASM client cluster config          February 4, 2022 4:21:37 PM CET     February 4, 2022 4:22:00 PM CET     Success
Install DB System                        February 4, 2022 4:22:00 PM CET     February 4, 2022 5:07:11 PM CET     Success
Cleanup temporary SSH key pair           February 4, 2022 5:07:11 PM CET     February 4, 2022 5:07:52 PM CET     Success
Set DB System as configured              February 4, 2022 5:07:52 PM CET     February 4, 2022 5:07:52 PM CET     Success

I have then created the second DB System json file assigning a new IP address from the VLAN39 network using tagged39 virtual network. This time the IP connection will use bridging through the virtual VLAN39 IP address we created on the Bare Metal itself :

[root@dbi-oda-x8 ~]# cat /opt/dbi/create_dbsystem_srvdb39.json
...
...
...
},
"network": {
    "domainName": "dbi-lab.ch",
    "ntpServers": ["216.239.35.0"],
    "dnsServers": [
        "8.8.8.8","8.8.4.4"
    ],
    "nodes": [
        {
            "name": "srvdb39",
            "ipAddress": "10.39.0.20",
            "netmask": "255.255.255.0",
            "gateway": "10.39.0.1",
            "number": 0
        }
    ],
"publicVNetwork": "tagged39"
},
"grid": {
    "language": "en"
}
}

Creation of the second DB System on network VLAN 39 :

[root@dbi-oda-x8 ~]# odacli create-dbsystem -p /opt/dbi/create_dbsystem_srvdb39.json
Enter password for system "srvdb39":
Retype password for system "srvdb39":
Enter administrator password for DB "DB39":
Retype administrator password for DB "DB39":

Job details
----------------------------------------------------------------
                     ID:  38303453-8524-4c65-b1d2-3717dfc79a1f
            Description:  DB System srvdb39 creation
                 Status:  Created
                Created:  February 4, 2022 5:14:01 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@dbi-oda-x8 ~]# odacli describe-job -i 38303453-8524-4c65-b1d2-3717dfc79a1f

Job details
----------------------------------------------------------------
                     ID:  38303453-8524-4c65-b1d2-3717dfc79a1f
            Description:  DB System srvdb39 creation
                 Status:  Success
                Created:  February 4, 2022 5:14:01 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Create DB System metadata                February 4, 2022 5:14:01 PM CET     February 4, 2022 5:14:01 PM CET     Success
Persist new DB System                    February 4, 2022 5:14:01 PM CET     February 4, 2022 5:14:01 PM CET     Success
Validate DB System prerequisites         February 4, 2022 5:14:01 PM CET     February 4, 2022 5:14:05 PM CET     Success
Setup DB System environment              February 4, 2022 5:14:05 PM CET     February 4, 2022 5:14:07 PM CET     Success
Create DB System ASM volume              February 4, 2022 5:14:07 PM CET     February 4, 2022 5:14:13 PM CET     Success
Create DB System ACFS filesystem         February 4, 2022 5:14:13 PM CET     February 4, 2022 5:14:22 PM CET     Success
Create DB System VM ACFS snapshots       February 4, 2022 5:14:22 PM CET     February 4, 2022 5:14:52 PM CET     Success
Create temporary SSH key pair            February 4, 2022 5:14:52 PM CET     February 4, 2022 5:14:52 PM CET     Success
Create DB System cloud-init config       February 4, 2022 5:14:52 PM CET     February 4, 2022 5:14:53 PM CET     Success
Provision DB System VM(s)                February 4, 2022 5:14:53 PM CET     February 4, 2022 5:14:54 PM CET     Success
Attach disks to DB System                February 4, 2022 5:14:54 PM CET     February 4, 2022 5:14:54 PM CET     Success
Add DB System to Clusterware             February 4, 2022 5:14:54 PM CET     February 4, 2022 5:14:54 PM CET     Success
Start DB System                          February 4, 2022 5:14:54 PM CET     February 4, 2022 5:14:56 PM CET     Success
Wait DB System VM first boot             February 4, 2022 5:14:56 PM CET     February 4, 2022 5:18:32 PM CET     Success
Setup Mutual TLS (mTLS)                  February 4, 2022 5:18:32 PM CET     February 4, 2022 5:23:07 PM CET     Success
Export clones repository                 February 4, 2022 5:23:07 PM CET     February 4, 2022 5:23:07 PM CET     Success
Setup ASM client cluster config          February 4, 2022 5:23:07 PM CET     February 4, 2022 5:23:30 PM CET     Success
Install DB System                        February 4, 2022 5:23:30 PM CET     February 4, 2022 6:09:43 PM CET     Success
Cleanup temporary SSH key pair           February 4, 2022 6:09:43 PM CET     February 4, 2022 6:10:23 PM CET     Success
Set DB System as configured              February 4, 2022 6:10:23 PM CET     February 4, 2022 6:10:23 PM CET     Success

Virtual network configuration checks

So I have got both my virtual networks configured on their respective IP addresses (10.38.0.10 for the VLAN38 network and 10.39.0.10 for the VLAN39 network) :

[root@dbi-oda-x8 ~]# odacli list-vnetworks
Name                  Type             Interface        Bridge                Uniform   Created                  Updated
--------------------  ---------------  ---------------  --------------------  --------  -----------------------  -----------------------
tagged39              BridgedVlan      btbond2          brtagged39            NO        2022-02-04 15:19:45 CET  2022-02-04 15:19:45 CET
tagged38              BridgedVlan      btbond2          brtagged38            NO        2022-02-04 15:19:07 CET  2022-02-04 15:19:07 CET
pubnet                Bridged          btbond1          pubnet                NO        2022-01-28 08:54:55 CET  2022-01-28 08:54:55 CET

[root@dbi-oda-x8 ~]# odacli describe-vnetwork -n tagged38
VNetwork details
--------------------------------------------------------------------------------
                       ID:  0d8099ba-ff42-4d73-82b4-f7497fb68ca5
                     Name:  tagged38
                  Created:  2022-02-04 15:19:07 CET
                  Updated:  2022-02-04 15:19:07 CET
                     Type:  BridgedVlan
           Interface name:  btbond2
              Bridge name:  brtagged38
                  VLAN ID:  38
                       IP:  10.38.0.10
                  Netmask:  255.255.255.0
                  Gateway:  10.38.0.1
 Attached in VMs (config):  xa287c764d
   Attached in VMs (live):  xa287c764d

[root@dbi-oda-x8 ~]# odacli describe-vnetwork -n tagged39
VNetwork details
--------------------------------------------------------------------------------
                       ID:  00df04d4-9a01-4547-9a3c-5a27dab10494
                     Name:  tagged39
                  Created:  2022-02-04 15:19:45 CET
                  Updated:  2022-02-04 15:19:45 CET
                     Type:  BridgedVlan
           Interface name:  btbond2
              Bridge name:  brtagged39
                  VLAN ID:  39
                       IP:  10.39.0.10
                  Netmask:  255.255.255.0
                  Gateway:  10.39.0.1
 Attached in VMs (config):  x793d6c5ce
   Attached in VMs (live):  x793d6c5ce

DB System information checks

I have my 2 DB Systems.
srvdb38 configured on VLAN38 network with 10.38.0.20 IP Address.
srvdb39 configured on VLAN39 network with 10.39.0.20 IP Address.

[root@dbi-oda-x8 ~]# odacli list-dbsystems
Name                  Shape       Cores  Memory      GI version          DB version          Status           Created                  Updated
--------------------  ----------  -----  ----------  ------------------  ------------------  ---------------  -----------------------  -----------------------
srvdb39               odb2        4      16.00 GB    19.13.0.0.211019    19.13.0.0.211019    CONFIGURED       2022-02-04 17:14:01 CET  2022-02-04 18:10:23 CET
srvdb38               odb2        4      16.00 GB    19.13.0.0.211019    19.13.0.0.211019    CONFIGURED       2022-02-04 16:12:33 CET  2022-02-04 17:07:52 CET

[root@dbi-oda-x8 ~]# odacli describe-dbsystem -n srvdb38
DB System details
--------------------------------------------------------------------------------
                       ID:  859466fe-a4a5-40ae-aa00-8e02cf74dedc
                     Name:  srvdb38
                    Image:  19.13.0.0.0
                    Shape:  odb2
             Cluster name:  dbsa287c764d
             Grid version:  19.13.0.0.211019
                   Memory:  16.00 GB
             NUMA enabled:  YES
                   Status:  CONFIGURED
                  Created:  2022-02-04 16:12:33 CET
                  Updated:  2022-02-04 17:07:52 CET

 CPU Pool
--------------------------
                     Name:  cpupool4dbsystems
          Number of cores:  4

                     Host:  dbi-oda-x8
        Effective CPU set:  5-6,21-22,37-38,53-54
              Online CPUs:  5, 6, 21, 22, 37, 38, 53, 54
             Offline CPUs:  NONE

 VM Storage
--------------------------
               Disk group:  DATA
              Volume name:  SA287C764D
            Volume device:  /dev/asm/sa287c764d-390
                     Size:  200.00 GB
              Mount Point:  /u05/app/sharedrepo/srvdb38

 VMs
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  xa287c764d
             VM Host Name:  srvdb38.dbi-lab.ch
            VM image path:  /u05/app/sharedrepo/srvdb38/.ACFS/snaps/vm_xa287c764d/xa287c764d
             Target State:  ONLINE
            Current State:  ONLINE

 VNetworks
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  xa287c764d
                   Public:  10.38.0.20      / 255.255.255.0   / ens3 / BRIDGE(brtagged38)
                      ASM:  192.168.17.4    / 255.255.255.128 / ens4 / BRIDGE(privasm) VLAN(priv0.100)

 Extra VNetworks
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  xa287c764d
                 tagged38:  10.38.0.20      / 255.255.255.0   / PUBLIC

 Databases
--------------------------
                     Name:  DB38
              Resource ID:  5ff22777-f24b-446a-8c58-3fa6dd2ad383
              Unique name:  DB38_SITE1
              Database ID:  715007309
              Domain name:  dbi-lab.ch
               DB Home ID:  8b66a7d6-9e5e-43c0-a69c-00b80d2dba81
                    Shape:  odb2
                  Version:  19.13.0.0.211019
                  Edition:  EE
                     Type:  SI
                     Role:  PRIMARY
                    Class:  OLTP
                  Storage:  ASM
               Redundancy:
         Target node name:
            Character set:  AL32UTF8
        NLS character set:
                 Language:  ENGLISH
                Territory:  AMERICA
          Console enabled:  false
             SEHA enabled:  false
      Associated networks:  Public-network
         Backup config ID:
       Level 0 Backup Day:  sunday
       Autobackup enabled:  true
              TDE enabled:  false
                 CDB type:  false
                 PDB name:
           PDB admin user:

[root@dbi-oda-x8 ~]# odacli describe-dbsystem -n srvdb39
DB System details
--------------------------------------------------------------------------------
                       ID:  5238f074-6126-4051-8b9e-e392b55a2328
                     Name:  srvdb39
                    Image:  19.13.0.0.0
                    Shape:  odb2
             Cluster name:  dbs793d6c5ce
             Grid version:  19.13.0.0.211019
                   Memory:  16.00 GB
             NUMA enabled:  YES
                   Status:  CONFIGURED
                  Created:  2022-02-04 17:14:01 CET
                  Updated:  2022-02-04 18:10:23 CET

 CPU Pool
--------------------------
                     Name:  cpupool4dbsystems
          Number of cores:  4

                     Host:  dbi-oda-x8
        Effective CPU set:  5-6,21-22,37-38,53-54
              Online CPUs:  5, 6, 21, 22, 37, 38, 53, 54
             Offline CPUs:  NONE

 VM Storage
--------------------------
               Disk group:  DATA
              Volume name:  S793D6C5CE
            Volume device:  /dev/asm/s793d6c5ce-390
                     Size:  200.00 GB
              Mount Point:  /u05/app/sharedrepo/srvdb39

 VMs
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  x793d6c5ce
             VM Host Name:  srvdb39.dbi-lab.ch
            VM image path:  /u05/app/sharedrepo/srvdb39/.ACFS/snaps/vm_x793d6c5ce/x793d6c5ce
             Target State:  ONLINE
            Current State:  ONLINE

 VNetworks
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  x793d6c5ce
                   Public:  10.39.0.20      / 255.255.255.0   / ens3 / BRIDGE(brtagged39)
                      ASM:  192.168.17.5    / 255.255.255.128 / ens4 / BRIDGE(privasm) VLAN(priv0.100)

 Extra VNetworks
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  x793d6c5ce
                 tagged39:  10.39.0.20      / 255.255.255.0   / PUBLIC

 Databases
--------------------------
                     Name:  DB39
              Resource ID:  e796e7e1-c8c8-4a63-809c-57976ce2163d
              Unique name:  DB39_SITE1
              Database ID:  2043690133
              Domain name:  dbi-lab.ch
               DB Home ID:  67f6df00-83d8-477b-8884-201664a3701b
                    Shape:  odb2
                  Version:  19.13.0.0.211019
                  Edition:  EE
                     Type:  SI
                     Role:  PRIMARY
                    Class:  OLTP
                  Storage:  ASM
               Redundancy:
         Target node name:
            Character set:  AL32UTF8
        NLS character set:
                 Language:  ENGLISH
                Territory:  AMERICA
          Console enabled:  false
             SEHA enabled:  false
      Associated networks:  Public-network
         Backup config ID:
       Level 0 Backup Day:  sunday
       Autobackup enabled:  true
              TDE enabled:  false
                 CDB type:  false
                 PDB name:
           PDB admin user:

[root@dbi-oda-x8 ~]#

Direct connection on the DB Systems

Finally I can access the DB Systems directly through SSH, and we could possibly, provided the internal firewall is configured accordingly, connect directly to the database through the listener.

srvdb38 DB System :

[root@srvdb38 ~]# ps -ef | grep pmon | grep -v grep
oracle   99936     1  0 17:05 ?        00:00:00 ora_pmon_DB38

[root@srvdb38 ~]# ip addr sh ens3
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:6e:3f:9f brd ff:ff:ff:ff:ff:ff
    inet 10.38.0.20/24 brd 10.38.0.255 scope global noprefixroute ens3
       valid_lft forever preferred_lft forever

srvdb39 DB System :

[root@srvdb39 ~]# ps -ef | grep pmon | grep -v grep
oracle    3243     1  0 18:07 ?        00:00:00 ora_pmon_DB39

[root@srvdb39 ~]# ip addr sh ens3
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:6c:ad:ab brd ff:ff:ff:ff:ff:ff
    inet 10.39.0.20/24 brd 10.39.0.255 scope global noprefixroute ens3
       valid_lft forever preferred_lft forever
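
As an illustration, a direct client connection through the listener could then look like this (a sketch only: the listener port 1521 and the service name are assumptions derived from the DB unique name and domain shown above) :

sqlplus system@//srvdb38.dbi-lab.ch:1521/DB38_SITE1.dbi-lab.ch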

Cet article Creating KVM Database System on separate VLAN network on ODA est apparu en premier sur Blog dbi services.

Managing Refreshable Clone Pluggable Databases with Oracle 21c


A refreshable clone PDB is a way to refresh a single PDB instead of refreshing all PDBs in a container, as in a Data Guard environment. It consists of making a clone of a source PDB; the clone PDB is then updated with the redo accumulated since the last redo log apply.
In this blog I did some tests of this refreshable pluggable database feature.
I am doing my tests with Oracle 21c, but this feature has existed since Oracle 12.2.

The configuration I use is the following

An Oracle 21c source CDB : DB21 with a source pluggable database PDB1
An Oracle 21c target CDB : TEST21 which will contain the refreshable clone of PDB1. The clone will be named PDB1FRES

Note that the refreshable clone can be created in the same container.
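
In that case the syntax is similar, for example (a sketch only: a database link is still required for a refreshable clone even within the same CDB, so a loopback link, here called looplink, is assumed):

SQL> create database link looplink connect to c##clone_user identified by rootroot2016 using 'DB21';
SQL> create pluggable database PDB1FRES from PDB1@looplink refresh mode manual;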

The first step is to create a user in the source CDB DB21 for database link purpose

SQL> create user c##clone_user identified by rootroot2016 temporary tablespace temp container=ALL;

User created.

SQL>

SQL> grant create session, create pluggable database, sysoper to c##clone_user container=ALL ;

Grant succeeded.

SQL>

In the target CDB TEST21 let’s create a database link to the source CDB. We will use the user c##clone

SQL> create database link clonesource connect to c##clone_user identified by rootroot2016 using 'DB21';

Database link created.

SQL>

SQL> select * from dual@clonesource;

D
-
X

SQL>

Now we can create a refreshable clone PDB1FRES of PDB1 in the database TEST21.

First we will create a manual refreshable clone

SQL> create pluggable database PDB1FRES from PDB1@clonesource refresh mode manual;

Pluggable database created.

SQL>

Once created, the new clone is mounted

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1FRES                       MOUNTED
SQL>

We can see the refresh mode

SQL> select PDB_NAME,REFRESH_MODE,REFRESH_INTERVAL,LAST_REFRESH_SCN from dba_pdbs where PDB_NAME='PDB1FRES';

PDB_NAME        REFRES REFRESH_INTERVAL LAST_REFRESH_SCN
--------------- ------ ---------------- ----------------
PDB1FRES        MANUAL                          39266271

SQL>

Ok now let’s do some change on PDB1 and let’s see how to propagate these changes on PDB1FRES

SQL> show con_name

CON_NAME
------------------------------
PDB1


SQL> create table test(id number);

Table created.

SQL> insert into test values (1);

1 row created.

SQL> commit;

Commit complete.

SQL>

PDB1FRES must be closed (mounted) to be refreshed with the changes made in PDB1. As the clause REFRESH MANUAL was used during its creation, we have to do the refresh manually

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> alter pluggable database PDB1FRES refresh;

Pluggable database altered.

SQL>

Let’s now open PDB1FRES in Read Only mode to verify the refresh

SQL> alter pluggable database PDB1FRES open read only;

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1FRES                       MOUNTED
SQL> alter pluggable database PDB1FRES open read only;

Pluggable database altered.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1FRES                       READ ONLY  NO
SQL> alter session set container=PDB1FRES;

Session altered.

SQL> select * from test;

        ID
----------
         1

SQL>

SQL> alter pluggable database PDB1FRES close immediate;

Pluggable database altered.

As seen, the manual refresh works fine.

Can we change the manual refresh mode to an automatic one?

Let’s try

SQL> alter pluggable database PDB1FRES  refresh mode every 4 minutes;

Pluggable database altered.

SQL> select PDB_NAME,REFRESH_MODE,REFRESH_INTERVAL,LAST_REFRESH_SCN from dba_pdbs where PDB_NAME='PDB1FRES';

PDB_NAME        REFRES REFRESH_INTERVAL LAST_REFRESH_SCN
--------------- ------ ---------------- ----------------
PDB1FRES        AUTO                  4         39272240

SQL>

Now let’s again do some changes in PDB1

SQL> insert into test values (10);

1 row created.

SQL> insert into test values (20);

1 row created.

SQL> commit;

Commit complete.

SQL>

4 minutes later we can see that the LAST_REFRESH_SCN has changed on PDB1FRES

SQL> select PDB_NAME,REFRESH_MODE,REFRESH_INTERVAL,LAST_REFRESH_SCN from dba_pdbs where PDB_NAME='PDB1FRES';

PDB_NAME        REFRES REFRESH_INTERVAL LAST_REFRESH_SCN
--------------- ------ ---------------- ----------------
PDB1FRES        AUTO                  4         39272403

SQL>

Let’s open PDB1FRES on read only mode and let’s verify that the latest changes are replicated

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1FRES                       READ ONLY  NO

SQL> alter session set container=PDB1FRES ;

Session altered.

SQL> select * from test;

        ID
----------
         1
        10
        20

SQL>

Note that the automatic refresh will succeed only if the PDB clone is mounted. Note also that a manual refresh can be done even if the auto refresh is configured.

Another question is whether we can open PDB1FRES in read write mode.
Let’s try

SQL> alter pluggable database PDB1FRES open read write;

Pluggable database altered.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1FRES                       READ ONLY  NO
SQL>

What? The open read write command returns success but the database is really opened in read only mode.

To open the database in read write mode, we have to set the refresh mode to none

SQL> alter pluggable database PDB1FRES  refresh mode none;
alter pluggable database PDB1FRES  refresh mode none
*
ERROR at line 1:
ORA-65025: Pluggable database PDB1FRES is not closed on all instances.


SQL> alter pluggable database PDB1FRES  close immediate;

Pluggable database altered.

SQL> alter pluggable database PDB1FRES  refresh mode none;

Pluggable database altered.

SQL> col pdb_name for a15
SQL> select PDB_NAME,REFRESH_MODE,REFRESH_INTERVAL,LAST_REFRESH_SCN from dba_pdbs where PDB_NAME='PDB1FRES';

PDB_NAME        REFRES REFRESH_INTERVAL LAST_REFRESH_SCN
--------------- ------ ---------------- ----------------
PDB1FRES        NONE                            39272683

SQL> alter pluggable database PDB1FRES open read write;

Pluggable database altered.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1FRES                       READ WRITE NO
SQL>

Now that PDB1FRES is opened in read write mode, let's close it and try to transform it back into a refreshable clone

SQL> alter pluggable database PDB1FRES close immediate;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1FRES                       MOUNTED
SQL> alter pluggable database PDB1FRES  refresh mode manual;
alter pluggable database PDB1FRES  refresh mode manual
*
ERROR at line 1:
ORA-65261: pluggable database PDB1FRES not enabled for refresh


SQL>

It’s not possible to convert back an opened R/W PDB to a refreshable PDB. It’s clearly specified in the documentation
You cannot change an ordinary PDB into a refreshable clone PDB. After a refreshable clone PDB is converted to an ordinary PDB, you cannot change it back into a refreshable clone PDB.

Conclusion

One usage of a refreshable PDB is that the clone can be used as a golden master for snapshots at PDB level. These snapshots can then be used to clone environments for developers.
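
For example, deriving a thin-provisioned development PDB from the refreshable clone could look like this (a sketch only: the snapshot copy clause requires snapshot-capable storage such as ACFS, or the CLONEDB initialization parameter set to true, and PDB1DEV is a name chosen for illustration):

SQL> alter pluggable database PDB1FRES open read only;
SQL> create pluggable database PDB1DEV from PDB1FRES snapshot copy;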

Cet article Managing Refreshable Clone Pluggable Databases with Oracle 21c est apparu en premier sur Blog dbi services.

Oracle DBs and ransomware attacks


By Clemens Bleile

I had a discussion with a customer recently about the risk of running into an issue with ransomware encrypting data in an Oracle database. Just to quickly recap on what ransomware is:

Wikipedia: Ransomware is a type of malware from cryptovirology that threatens to publish the victim’s personal data or perpetually block access to it unless a ransom is paid. While some simple ransomware may lock the system without damaging any files, more advanced malware uses a technique called cryptoviral extortion. It encrypts the victim’s files, making them inaccessible, and demands a ransom payment to decrypt them.

In the last years ransomware has become more perfidious by
– searching for backups to encrypt them as well, because restoring non-infected backups was in the past the only resolution to ransomware-encrypted data if you do not want to pay the ransom
– stealing sensitive data and then blackmailing the victim by threatening to publish the stolen data if no ransom is paid

So how can you protect your database proactively to prevent becoming a victim of a ransomware attack?

The following list is not complete, but should give an idea on what an Oracle DBA may proactively do:

1. Protecting the data from becoming encrypted

It is very unlikely that ransomware uses Oracle functionality to connect to a database. In almost all cases the ransomware tries to find data on filesystems or on block devices to encrypt it through normal reads and writes.
My customer actually uses Automatic Storage Management (ASM) and I proposed to use the ASM Filter Driver as a first protection against ransomware, because access to ASM disks is then only allowed through Oracle database calls. You may e.g. check blogs which show that even a dd or fdisk as root is not possible on the devices holding the data when the ASM Filter Driver is installed:

https://franckpachot.medium.com/asm-filter-driver-simple-test-on-filtering-2a506f048ee5
https://www.uxora.com/unix/admin/42-oracle-asm-filter-driver-i-o-filtering-test

Here an example trying to delete a partition, which is an ASM-device with fdisk:

[root@ol8-21-rac1 ~]# fdisk /dev/sdc
...
Command (m for help): p
Disk /dev/sdc: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb7b98795

Device     Boot Start     End Sectors Size Id Type
/dev/sdc1        2048 4194303 4192256   2G 83 Linux

Command (m for help): d
Selected partition 1
Partition 1 has been deleted.

Command (m for help): p
Disk /dev/sdc: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb7b98795

Command (m for help): w
The partition table has been altered.
Failed to remove partition 1 from system: Device or resource busy

The kernel still uses the old partitions. The new table will be used at the next reboot. 
/dev/sdc: close device failed: Input/output error

[root@ol8-21-rac1 ~]# 

In /var/log/messages I can see this:

Mar 22 22:18:51 ol8-21-rac1 kernel: F 4299627.272/220322221851 fdisk[98385] oracleafd:18:1012:Write IO to ASM managed device: [8] [32]
Mar 22 22:18:51 ol8-21-rac1 kernel: Buffer I/O error on dev sdc, logical block 0, lost async page write

2. Protecting backups from becoming encrypted

Besides storing backups on different servers, it's a good idea to use backup solutions which make the backup immutable (read only) after it has been written. So you should check that your database backups are immutable. An NFS location is usually not a good backup medium for that (there are measures to help for NFS as well though. Check here).
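
As a crude filesystem-level illustration (a sketch only: the backup path is hypothetical, and real immutability should come from the backup solution itself, since root can remove the immutable flag again):

[root@backupsrv ~]# chattr +i /backup/DB/*.bkp
[root@backupsrv ~]# lsattr /backup/DB/*.bkp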

3. Protecting data so that it cannot be stolen

Encrypting your data using e.g. Oracle Transparent Data Encryption is a good idea, because stealing that data is useless without the key.
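
Setting this up could, for instance, look like this (a sketch only, assuming a software keystore has already been configured and opened; the tablespace name and password are illustrative):

SQL> administer key management set key identified by "keystore_pwd" with backup;
SQL> create tablespace app_data datafile size 100M encryption using 'AES256' encrypt;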

Depending on your configuration several other methods to protect against ransomware attacks are available. Here a couple of links concerning the subject:

https://phoenixnap.com/kb/immutable-backups
https://blogs.oracle.com/maa/post/protect-and-recover-databases-from-ransomware-attacks-with-zero-data-loss-recovery-appliance
https://ronekins.com/2021/06/14/protecting-oracle-backups-from-ransomware-and-malicious-intent/#more-7955

Summary: Ransomware may also affect database servers. A DBA should protect the databases he's responsible for. Despite ASM Filter Driver (AFD) issues (dependency on the Linux kernel, bugs), the AFD could be a measure to protect your databases against ransomware attacks. Interestingly, I haven't seen any blog or information yet about using AFD as a protection against ransomware.

REMARK: You may check the following MOS Note concerning the dependency of the AFD to the Kernel:
ACFS Support On OS Platforms (Certification Matrix). (Doc ID 1369107.1)
Even if the MOS-Note-title talks about ACFS only, the AFD is covered as well.

Cet article Oracle DBs and ransomware attacks est apparu en premier sur Blog dbi services.

Extract all DDL from an Oracle database


Introduction

Extracting DDL is sometimes useful for creating similar objects in another database without data. Basically everything can be extracted from a running Oracle database.

The needs

My customer asked me to replicate a database without any data. The goal is to feed development environments running on Docker, so with a minimal footprint. The precise needs were:

  • All the metadata
  • Data from some tables may be needed (based on a provided list)
  • Imported users should be filtered (based on criteria and an exclude list)
  • Imported users will be created with basic password (password = username)
  • Source and target databases are not on the same network (thus no direct communication between both instances)
  • Logon triggers must be disabled on target database
  • Audit rules must also be disabled

Additional criteria may be added later.

How to proceed?

An Oracle package is dedicated to DDL extraction: dbms_metadata.get_ddl. While it's very convenient for a few objects, it does not do the job for a complete DDL extraction.
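
For a single object it is as simple as this (a sketch, with SCOTT.EMP as a placeholder table):

SQL> set long 100000 pagesize 0
SQL> select dbms_metadata.get_ddl('TABLE','EMP','SCOTT') from dual;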

For years now, Data Pump has also been able to do this extraction. Actually, it was already possible with the older exp/imp on 9i and earlier versions.

Export could be done as a normal expdp with metadata-only extraction. Then impdp will be used with the sqlfile directive. Datapump import with sqlfile directive won’t import anything but will parse the dumpfile and generate a SQL script from it.

SQL script will then be parsed and several actions will be done:

  • password change for all users (reset to username)
  • logon trigger disabling
  • audit rules disabling

Once done, SQL script will then be ready to send to target server.

Another expdp will be done with selected tables, this is for parameter tables for example, and for sure only for tables without any sensitive data. It is based on a text file (with the list of the tables to export) as input.

Before creating the metadata on target database, tablespaces must exist but with minimal sizes. This is why a script is also used to generate tablespace creation using a single datafile with minimum size and autoextend. Data size will be low, so users should not be annoyed by space exhaustion on tablespaces.

Prerequisites

These are the prerequisites to use these scripts:

  • 12c or later source database (should also work with older versions)
  • target database in same or higher version and configured for OMF (db_create_file_dest)
  • connection to oracle user on both systems (or a user in the dba system group)
  • 1+GB free space on source and target server
  • nfs share between the 2 servers is recommended
  • users list for exclusion is provided by the customer
  • tables list for inclusion is provided by the customer

Here is an example of both lists:

cat ddl_tmp/excl_users.txt | more
ABELHATR
ACALENTI
ACTIVITY_R
ADELANUT
ALOMMET
AMERAN
AOLESG
APEREAN
APP_CON_MGR
...

cat ddl_tmp/incl_tables.txt
DERAN.TRAD_GV_CR
DERAN.TRAD_GV_PS
APPCN.PARAM_BASE
APPCN.PARAM_EXTENDED
OAPPLE.MCUST_INVC
...

Output files

The script will generate 3 files prefixed with the step number, identifying the execution sequence on the target database:

  • 01_${ORACLE_SID}_create_tablespace.sql: tablespace creation script using OMF
  • 02_${ORACLE_SID}_create_ddl.sql: main SQL script to create the DDL
  • 03_impdp_${ORACLE_SID}_tables.sh: import shell script for importing tables with data

Complete script explained

The first part of the script is for defining variables, variables are basically source database SID, working folder and file names:

# Set source database
export ORACLE_SID=MARCP01

# Set environment variables, main folder and file names
export DDL_TARGET_DIR=/home/oracle/ddl_tmp
export DDL_TARGET_DUMPFILE=ddl_${ORACLE_SID}_`date +"%Y%m%d_%H%M"`.dmp
export DDL_TARGET_LOGFILE_EXP=ddl_${ORACLE_SID}_exp_`date +"%Y%m%d_%H%M"`.log
export DDL_TARGET_LOGFILE_IMP=ddl_${ORACLE_SID}_imp_`date +"%Y%m%d_%H%M"`.log
export DDL_TARGET_TABLES_DUMPFILE=tables_${ORACLE_SID}_`date +"%Y%m%d_%H%M"`_%U.dmp
export DDL_TARGET_TABLES_LOGFILE_EXP=tables_${ORACLE_SID}_exp_`date +"%Y%m%d_%H%M"`.log
export DDL_TARGET_SCRIPT=ddl_${ORACLE_SID}_extracted_`date +"%Y%m%d_%H%M"`.sql
export DDL_TBS_SCRIPT=01_${ORACLE_SID}_create_tablespace.sql
export DDL_CREATE_SCRIPT=02_${ORACLE_SID}_create_ddl.sql
export DDL_IMPORT_TABLES_CMD=03_impdp_${ORACLE_SID}_tables.sh
export DDL_EXCLUDE_USER_LIST=excl_users.txt
export DDL_INCLUDE_TABLE_LIST=incl_tables.txt

Second part is for creating target folder and deleting temporary files from the hypothetical last run:

# Create target directory and clean up the folder
# Directory should include a user list to exclude: $DDL_EXCLUDE_USER_LIST
#  => User list is basically 1 username per line
# Directory may include a table list to include: $DDL_INCLUDE_TABLE_LIST
#  => Table list is 1 table per line, prefixed with the username (owner)
mkdir $DDL_TARGET_DIR 2>/dev/null
rm $DDL_TARGET_DIR/ddl_*.par 2>/dev/null
rm $DDL_TARGET_DIR/tables_*.par 2>/dev/null
rm $DDL_TARGET_DIR/0*.sql 2>/dev/null
rm $DDL_TARGET_DIR/0*.sh 2>/dev/null
rm $DDL_TARGET_DIR/ddl_*.dmp 2>/dev/null
rm $DDL_TARGET_DIR/tables_*.dmp 2>/dev/null
rm $DDL_TARGET_DIR/ddl_*.log 2>/dev/null
rm $DDL_TARGET_DIR/tables_*.log 2>/dev/null
rm $DDL_TARGET_DIR/ddl_*.sql 2>/dev/null

A parameter file will be used for the first expdp, it must be created beforehand. All users will be included, but not the default ones. Excluding unneeded users will be done later:

# Create parameter file for metadata export
# No need to parallelize as DDL extraction runs on a single thread
. oraenv <<< $ORACLE_SID
sqlplus -s / as sysdba <<EOF
 create or replace directory DDL_TARGET_DIR as '$DDL_TARGET_DIR';
 set pages 0
 set lines 200
 set feedback off
 spool $DDL_TARGET_DIR/ddl_extract.par
 SELECT 'dumpfile=$DDL_TARGET_DUMPFILE' FROM DUAL; 
 SELECT 'logfile=$DDL_TARGET_LOGFILE_EXP' FROM DUAL;
 SELECT 'directory=DDL_TARGET_DIR' FROM DUAL; 
 SELECT 'content=metadata_only' FROM DUAL;
 SELECT 'cluster=N' FROM DUAL;
 SELECT 'exclude=fga_policy' FROM DUAL;
 SELECT 'exclude=AUDIT_OBJ' FROM DUAL;
 SELECT 'exclude=DB_LINK' FROM DUAL;
 SELECT 'schemas='||username FROM DBA_USERS WHERE oracle_maintained='N' ORDER BY username;
 spool off;
exit;
EOF

In this parameter file, let’s exclude the users from the txt file:

# Exclude users' list from parameter file
cp $DDL_TARGET_DIR/ddl_extract.par $DDL_TARGET_DIR/par1.tmp
for a in `cat $DDL_TARGET_DIR/$DDL_EXCLUDE_USER_LIST`; do cat $DDL_TARGET_DIR/par1.tmp | grep -v $a > $DDL_TARGET_DIR/par2.tmp; mv $DDL_TARGET_DIR/par2.tmp $DDL_TARGET_DIR/par1.tmp; done
mv $DDL_TARGET_DIR/par1.tmp $DDL_TARGET_DIR/ddl_extract.par

A parameter file is also needed for the tables expdp. The tables' export will go to a separate set of dump files:

# Create parameter file for tables to include
# Tables will be consistent at the same SCN
# Export is done with parallel degree 4
sqlplus -s / as sysdba <<EOF
 create or replace directory DDL_TARGET_DIR as '$DDL_TARGET_DIR';
 set pages 0
 set lines 200
 set feedback off
 spool $DDL_TARGET_DIR/tables_extract.par
 SELECT 'dumpfile=$DDL_TARGET_TABLES_DUMPFILE' FROM DUAL; 
 SELECT 'logfile=$DDL_TARGET_TABLES_LOGFILE_EXP' FROM DUAL;
 SELECT 'directory=DDL_TARGET_DIR' FROM DUAL; 
 SELECT 'parallel=4' FROM DUAL;
 SELECT 'cluster=N' FROM DUAL;
 SELECT 'flashback_scn='||current_scn FROM V\$DATABASE;
 spool off;
exit;
EOF

In this parameter file, let’s include all the tables as described in the related txt file:

# Include tables' list to parameter file
for a in `cat $DDL_TARGET_DIR/$DDL_INCLUDE_TABLE_LIST`; do echo "tables="$a >> $DDL_TARGET_DIR/tables_extract.par; done

Now metadata export could start:

# Export metadata to a dump file
expdp \"/ as sysdba\" parfile=$DDL_TARGET_DIR/ddl_extract.par

# Output example
# ...
# Dump file set for SYS.SYS_EXPORT_SCHEMA_02 is:
#   /home/oracle/ddl_tmp/ddl_MARCP01_20220318_1351.dmp
# Job "SYS"."SYS_EXPORT_SCHEMA_02" successfully completed at Fri Mar 18 14:21:47 2022 elapsed 0 00:24:13

And tables could also be exported now:

# Export included tables in another set of dump files
expdp \"/ as sysdba\" parfile=$DDL_TARGET_DIR/tables_extract.par

# Output example
# ...
# Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
#   /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_01.dmp
#   /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_02.dmp
#   /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_03.dmp
#   /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_04.dmp
# Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Fri Mar 18 14:22:08 2022 elapsed 0 00:00:14

A script is needed for tablespace creation, let’s create it:

# Create tablespace script for tablespace creation on target database (10MB with autoextend)
sqlplus -s / as sysdba <<EOF
 set pages 0
 set lines 200
 set feedback off
 spool $DDL_TARGET_DIR/$DDL_TBS_SCRIPT
  SELECT 'create tablespace '||tablespace_name||' datafile size 10M autoextend on;' FROM dba_data_files WHERE tablespace_name NOT IN ('SYSTEM','SYSAUX') and tablespace_name NOT LIKE 'UNDOTBS%' group by tablespace_name order by tablespace_name;
  spool off;
exit;
EOF

Another parameter file is needed for doing the datapump import that will create the SQL file:

# Create parameter file for metadata import as an SQL file
echo "dumpfile=$DDL_TARGET_DUMPFILE" > $DDL_TARGET_DIR/ddl_generate.par
echo "logfile=$DDL_TARGET_LOGFILE_IMP" >> $DDL_TARGET_DIR/ddl_generate.par
echo "directory=DDL_TARGET_DIR" >> $DDL_TARGET_DIR/ddl_generate.par
echo "sqlfile=$DDL_TARGET_SCRIPT" >> $DDL_TARGET_DIR/ddl_generate.par

Now let’s start the impdp task to extract DDL from the metadata dumpfile:

# Generate SQL script from previous dump (with impdp - it will not import anything)
impdp \"/ as sysdba\" parfile=$DDL_TARGET_DIR/ddl_generate.par

# Output example
# ...
# Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at Fri Mar 18 14:34:48 2022 elapsed 0 00:08:37

Once the SQL script with all DDL has been created, it's time to change users' passwords, lock some specific users, and disable logon triggers. You will probably need different changes:

# Define standard password for all internal users and generate DDL script (not definitive's one)
cat $DDL_TARGET_DIR/$DDL_TARGET_SCRIPT | awk -F ' ' '{if ($1 == "CREATE" && $2 == "USER" && $6 == "VALUES")  print $1" "$2" "$3" "$4" "$5" "$3; else print $0}' > $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT

# Lock *_MANAGER users (lock is added at the end of DDL script)
cp $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT $DDL_TARGET_DIR/ddl.tmp
cat $DDL_TARGET_DIR/ddl.tmp | grep "CREATE USER \"" | grep "_MANAGER\"" | awk -F ' ' '{print "ALTER USER "$3" ACCOUNT LOCK;"}' >> $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT
rm $DDL_TARGET_DIR/ddl.tmp

# Remove logon triggers (disabled at the end of DDL script)
cp $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT $DDL_TARGET_DIR/ddl.tmp
cat $DDL_TARGET_DIR/ddl.tmp | awk -F ' ' '{if ($1 == "CREATE" && $2 == "EDITIONABLE" && $3 == "TRIGGER")  {trig=1; trigname=$4;} else if (trig == 1 && $1  == "after" && $2 == "logon") {trig=0 ; print "ALTER TRIGGER "trigname" DISABLE;"}}' >> $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT
rm $DDL_TARGET_DIR/ddl.tmp

I’m still on the source server, but it does not prevent me to generate parameter and command file for impdp:

# Create parameter file for tables import (will be needed on target server)
echo "dumpfile=$DDL_TARGET_TABLES_DUMPFILE" > $DDL_TARGET_DIR/tables_import.par
echo "logfile=tables_import.log" >> $DDL_TARGET_DIR/tables_import.par
echo "directory=DDL_TARGET_DIR" >> $DDL_TARGET_DIR/tables_import.par

# Script for importing tables on the target database (on the target server)
echo 'impdp \"/ as sysdba\" parfile=$DDL_TARGET_DIR/tables_import.par' > $DDL_TARGET_DIR/$DDL_IMPORT_TABLES_CMD

Last operation done on this source server is displaying the files generated by the script:

# Display files to transport to target server
ls -lrth $DDL_TARGET_DIR/0*.* $DDL_TARGET_DIR/tables*.dmp $DDL_TARGET_DIR/tables_import.par | sort
# Output example
# -rw-r----- 1 oracle asmadmin  20K Mar 18 14:22 /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_02.dmp
# -rw-r----- 1 oracle asmadmin 472K Mar 18 14:22 /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_01.dmp
# -rw-r----- 1 oracle asmadmin 8.0K Mar 18 14:22 /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_03.dmp
# -rw-r----- 1 oracle asmadmin 8.0K Mar 18 14:22 /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_04.dmp
# -rw-rw-r-- 1 oracle oinstall  29K Mar 18 14:25 /home/oracle/ddl_tmp/01_MARCP01_create_tablespace.sql
# -rw-rw-r-- 1 oracle oinstall  63M Mar 18 14:35 /home/oracle/ddl_tmp/02_MARCP01_create_ddl.sql
# -rw-rw-r-- 1 oracle oinstall   64 Mar 18 14:35 /home/oracle/ddl_tmp/03_impdp_MARCP01_tables.sh
# -rw-rw-r-- 1 oracle oinstall   98 Mar 18 14:35 /home/oracle/ddl_tmp/tables_import.par

Conclusion

This is not high-level Oracle database stuff, but you can achieve nice automation simply using command shell and dynamic SQL scripting. It does not require any extra tool, and in this example, it brought to my customer exactly what he needed.

Cet article Extract all DDL from an Oracle database est apparu en premier sur Blog dbi services.

How to create an Oracle GoldenGate EXTRACT in Multitenant


Creating an EXTRACT process in a container database has some specificities:

From the CDB$ROOT, create a common user and configure the database to be ready to extract data via GoldenGate:

SQL> create user c##gg_admin identified by "*****" default tablespace goldengate temporary tablespace temp;

User created.

SQL>

SQL> alter user c##gg_admin quota unlimited on goldengate;

User altered.

SQL>


SQL> grant create session, connect,resource,alter system, select any dictionary, flashback any table to c##gg_admin container=all;

Grant succeeded.

SQL>

SQL> exec dbms_goldengate_auth.grant_admin_privilege(grantee => 'c##gg_admin',container=>'all');

PL/SQL procedure successfully completed.

SQL> alter user c##gg_admin set container_data=all container=current;

User altered.

SQL>

SQL> grant alter any table to c##gg_admin container=ALL;

Grant succeeded.

SQL>

alter system set enable_goldengate_replication=true scope=both;


SQL> alter database force logging;


SQL> alter pluggable database add supplemental log data;

Pluggable database altered.

SQL>

Add the schematrandata for the schema concerned:

GGSCI (vmld-01726 as c##gg_admin@MYCDB) 3> add schematrandata schema_source

2022-04-13 18:06:55  INFO    OGG-01788  SCHEMATRANDATA has been added on schema "schema_source".

2022-04-13 18:06:55  INFO    OGG-01976  SCHEMATRANDATA for scheduling columns has been added on schema "schema_source".

2022-04-13 18:06:55  INFO    OGG-10154  Schema level PREPARECSN set to mode NOWAIT on schema "schema_source".

2022-04-13 18:07:00  INFO    OGG-10471  ***** Oracle Goldengate support information on table schema_source.ZZ_DUMMY *****
Oracle Goldengate support native capture on table schema_source.ZZ_DUMMY.
Oracle Goldengate marked following column as key columns on table schema_source.ZZ_DUMMY: SCN, D, COMMENT_TXT
No unique key is defined for table schema_source.ZZ_DUMMY.

2022-04-13 18:07:00  INFO    OGG-10471  ***** Oracle Goldengate support information on table schema_source.ZZ_DUMMY2 *****
Oracle Goldengate support native capture on table schema_source.ZZ_DUMMY2.
Oracle Goldengate marked following column as key columns on table schema_source.ZZ_DUMMY2: SCN, D, COMMENT_TXT
No unique key is defined for table schema_source.ZZ_DUMMY2.

2022-04-13 18:07:00  INFO    OGG-10471  ***** Oracle Goldengate support information on table schema_source.ZZ_SURVEILLANCE *****
Oracle Goldengate support native capture on table schema_source.ZZ_SURVEILLANCE.
Oracle Goldengate marked following column as key columns on table schema_source.ZZ_SURVEILLANCE: I.

2022-04-13 18:07:00  INFO    OGG-10471  ***** Oracle Goldengate support information on table schema_source.ZZ_SURVEILLANCE_COPY *****
Oracle Goldengate support native capture on table schema_source.ZZ_SURVEILLANCE_COPY.
Oracle Goldengate marked following column as key columns on table schema_source.ZZ_SURVEILLANCE_COPY: I, SURV_DATE, ELLAPSED_1, ELLAPSED_2, CLIENT_HOST, CLIENT_TERMINAL, OS_USER, CLIENT_PROGRAM, INFO
No unique key is defined for table schema_source.ZZ_SURVEILLANCE_COPY.

GGSCI (vmld-01726 as c##gg_admin@MYCDB/CDB$ROOT) 3> dblogin userid c##gg_admin@MYPDB password xxxx
Successfully logged into database.

GGSCI (vmld-01726 as c##gg_admin@MYCDB) 4> info schematrandata schema_source

2022-04-13 18:32:43  INFO    OGG-06480  Schema level supplemental logging, excluding non-validated keys, is enabled on schema "schema_source".

2022-04-13 18:32:43  INFO    OGG-01980  Schema level supplemental logging is enabled on schema "schema_source" for all scheduling columns.

2022-04-13 18:32:43  INFO    OGG-10462  Schema "schema_source" have 4 prepared tables for instantiation.

GGSCI (vmld-01726 as c##gg_admin@MYCDB) 5>

Create a new alias connection to the container database and register the extract. The extract must be registered in the root container (CDB$ROOT) even though the data to capture comes from the PDB:

GGSCI (myserver) 10> alter credentialstore add user c##gg_admin@MYCDB_X1 alias ggadmin_exacc
Password:

Credential store altered.

GGSCI (myserver) 11> dblogin useridalias ggadmin
Successfully logged into database CDB$ROOT.

GGSCI (myserver as c##gg_admin@MYCDB/CDB$ROOT) 2>

GGSCI (myserver as c##gg_admin@MYCDB/CDB$ROOT) 2> register extract E3 database container (MYPDB)

2022-04-13 18:31:19  INFO    OGG-02003  Extract E3 successfully registered with database at SCN 3386436450080


GGSCI (myserver as c##gg_admin@MYCDB/CDB$ROOT) 3>

Save the SCN –> 3386436450080

Create the EXTRACT, connected on the CDB:

[oracle@myserver:/u01/app/oracle/product/19.1.0.0.4/gg_1]$ mkdir -p /u01/gs_x/ogg/

GGSCI (myserver) 7> add extract E3, integrated tranlog, begin now
EXTRACT (Integrated) added.


GGSCI (myserver) 8> INFO ALL

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     E3		    00:00:00      00:00:06


GGSCI (myserver) 9>


GGSCI (myserver) 9> add exttrail /u01/gs_x/ogg/gz, extract E3
EXTTRAIL added.


GGSCI (myserver) 2> edit param E3


GGSCI (myserver as c##gg_admin@MYCDB/CDB$ROOT) 13> edit param E3
Extract E3
useridalias ggadmin
Exttrail /u01/gs_x/ogg/gz
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
DDL  &
INCLUDE MAPPED OBJNAME MYPDB.SCHEMA.*
Sequence MYPDB.SCHEMA.*;
Table MYPDB.SCHEMA.* ;

The Table parameter must be prefixed with the PDB name.

Always start the Extract from the CDB$ROOT:

GGSCI (myserver as c##gg_admin@MY_CDB/CDB$ROOT) 12> START EXTRACT E3 atcsn 3386436450080

Sending START request to MANAGER ...
EXTRACT E3 starting


GGSCI (myserver as c##gg_admin@MYCDB/CDB$ROOT) 15> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     E3      00:00:05      00:00:03

Check that the extract is running.
Now you are ready to create the Pump process, do the Initial Load and create the Replicat process on the target.
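
For the Pump process, a minimal parameter file could look like this (a sketch only: the remote host name, manager port and remote trail path are assumptions, and the Table parameter must again be prefixed with the PDB name):

Extract P3
useridalias ggadmin
RmtHost target-host, MgrPort 7809
RmtTrail /u01/gs_x/ogg/rz
Table MYPDB.SCHEMA.*;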

Cet article How to create an Oracle GoldenGate EXTRACT in Multitenant est apparu en premier sur Blog dbi services.


Installing MySQL InnoDB Cluster in OKE using a MySQL Operator


Over the previous months, I've had some time to satisfy my curiosity about databases in containers and I started to test MySQL in Kubernetes a little bit.
This is how it all began…

In January I had the chance to be trained on Kubernetes attending the Docker and Kubernetes essentials Workshop of dbi services. So I decided to prepare a session on this topic at our internal dbi xChange event. And as if by magic, at the same time, a customer asked for our support to migrate a MySQL database to their Kubernetes cluster.

In general, I would like to raise two points before going into the technical details:
1. Is it a good idea to move databases into containers? Here I would use a typical IT answer: “it depends”. I suggest you think about your needs and constraints: whether you have small images to deploy, about storage and persistence, performances, …
2. There are various solutions for installing, orchestrating and administering MySQL in K8s: MySQL single instance vs MySQL InnoDB Cluster, using MySQL Operator for Kubernetes or Helm Charts, on-premises but also through Oracle Container Engine for Kubernetes on OCI, … I recommend you think about (again) your needs and skills, whether you are already working with Cloud technologies, whether you have already set up DevOps processes and which ones, …

Here I will show you how to install a MySQL InnoDB Cluster in OKE using a MySQL Operator.

First thing is to have an account on Oracle OCI and have deployed an Oracle Container Engine for Kubernetes in your compartment. You can do it in an easy way using the Quick Create option under “Developer Services > Containers & Artifacts > Kubernetes Clusters (OKE)”:

In this way all the resources you need (VCN, Internet and NAT gateways, a K8s cluster with worker nodes and node pool) are there in one click:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl cluster-info
Kubernetes control plane is running at https://xxx.xx.xxx.xxx:6443
CoreDNS is running at https://xxx.xx.xxx.xxx:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

elisa@cloudshell:~ (eu-zurich-1)$ kubectl get nodes -o wide
NAME         STATUS   ROLES   AGE    VERSION   INTERNAL-IP   EXTERNAL-IP       OS-IMAGE                  KERNEL-VERSION                      CONTAINER-RUNTIME
10.0.10.36   Ready    node    6m7s   v1.22.5   10.0.10.36    yyy.yyy.yyy.yyy   Oracle Linux Server 7.9   5.4.17-2136.304.4.1.el7uek.x86_64   cri-o://1.22.3-1.ci.el7
10.0.10.37   Ready    node    6m1s   v1.22.5   10.0.10.37    kkk.kkk.kkk.kk    Oracle Linux Server 7.9   5.4.17-2136.304.4.1.el7uek.x86_64   cri-o://1.22.3-1.ci.el7
10.0.10.42   Ready    node    6m     v1.22.5   10.0.10.42    jjj.jj.jjj.jj     Oracle Linux Server 7.9   5.4.17-2136.304.4.1.el7uek.x86_64   cri-o://1.22.3-1.ci.el7

As a second step, you can install the MySQL Operator for Kubernetes using kubectl:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-crds.yaml
customresourcedefinition.apiextensions.k8s.io/innodbclusters.mysql.oracle.com created
customresourcedefinition.apiextensions.k8s.io/mysqlbackups.mysql.oracle.com created
customresourcedefinition.apiextensions.k8s.io/clusterkopfpeerings.zalando.org created
customresourcedefinition.apiextensions.k8s.io/kopfpeerings.zalando.org created
elisa@cloudshell:~ (eu-zurich-1)$ kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-operator.yaml
serviceaccount/mysql-sidecar-sa created
clusterrole.rbac.authorization.k8s.io/mysql-operator created
clusterrole.rbac.authorization.k8s.io/mysql-sidecar created
clusterrolebinding.rbac.authorization.k8s.io/mysql-operator-rolebinding created
clusterkopfpeering.zalando.org/mysql-operator created
namespace/mysql-operator created
serviceaccount/mysql-operator-sa created
deployment.apps/mysql-operator created

You can check the health of the MySQL Operator:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl get deployment -n mysql-operator mysql-operator
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
mysql-operator   1/1     1            1           24s
elisa@cloudshell:~ (eu-zurich-1)$ kubectl get pods --show-labels -n mysql-operator
NAME                              READY   STATUS    RESTARTS   AGE    LABELS
mysql-operator-869d4b4b8d-slr4t   1/1     Running   0          113s   name=mysql-operator,pod-template-hash=869d4b4b8d

To isolate resources, you can create a dedicated namespace for the MySQL InnoDB Cluster:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl create namespace mysql-cluster
namespace/mysql-cluster created

You should also create a Secret using kubectl to store the MySQL user credentials that will be created and then required by pods to access the MySQL server:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl create secret generic elisapwd --from-literal=rootUser=root --from-literal=rootHost=% --from-literal=rootPassword="pwd" -n mysql-cluster
secret/elisapwd created

You can check that the Secret was correctly created:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl get secrets -n mysql-cluster
NAME                  TYPE                                  DATA   AGE
default-token-t2c47   kubernetes.io/service-account-token   3      2m
elisapwd              Opaque                                3      34s
elisa@cloudshell:~ (eu-zurich-1)$ kubectl describe secret/elisapwd -n mysql-cluster
Name:         elisapwd
Namespace:    mysql-cluster
Labels:       
Annotations:  

Type:  Opaque

Data
====
rootHost:      1 bytes
rootPassword:  7 bytes
rootUser:      4 bytes
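If you need to double-check the stored values, you can decode them with kubectl and base64 (a quick sketch):

elisa@cloudshell:~ (eu-zurich-1)$ kubectl get secret elisapwd -n mysql-cluster -o jsonpath='{.data.rootUser}' | base64 -d
root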

Now you have to write a .yaml configuration file to define how the MySQL InnoDB Cluster should be created. Here is a simple example:

elisa@cloudshell:~ (eu-zurich-1)$ vi InnoDBCluster_config.yaml
apiVersion: mysql.oracle.com/v2alpha1
kind: InnoDBCluster
metadata:
  name: elisacluster
  namespace: mysql-cluster 
spec:
  secretName: elisapwd
  instances: 3
  router:
    instances: 1

At this point you can run a MySQL InnoDB Cluster by applying the configuration you just created:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl apply -f InnoDBCluster_config.yaml
innodbcluster.mysql.oracle.com/elisacluster created

You can finally check if the MySQL InnoDB Cluster has been successfully created:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl get innodbcluster --watch --namespace mysql-cluster
NAME           STATUS    ONLINE   INSTANCES   ROUTERS   AGE
elisacluster   PENDING   0        3           1         12s
elisacluster   PENDING   0        3           1         103s
elisacluster   INITIALIZING   0        3           1         103s
elisacluster   INITIALIZING   0        3           1         103s
elisacluster   INITIALIZING   0        3           1         103s
elisacluster   INITIALIZING   0        3           1         104s
elisacluster   INITIALIZING   0        3           1         106s
elisacluster   ONLINE         1        3           1         107s
elisa@cloudshell:~ (eu-zurich-1)$ kubectl get all -n mysql-cluster
NAME                                       READY   STATUS    RESTARTS   AGE
pod/elisacluster-0                         2/2     Running   0          4h44m
pod/elisacluster-1                         2/2     Running   0          4h42m
pod/elisacluster-2                         2/2     Running   0          4h41m
pod/elisacluster-router-7686457f5f-hwfcv   1/1     Running   0          4h42m

NAME                             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                               AGE
service/elisacluster             ClusterIP   10.96.9.203           6446/TCP,6448/TCP,6447/TCP,6449/TCP   4h44m
service/elisacluster-instances   ClusterIP   None                  3306/TCP,33060/TCP,33061/TCP          4h44m

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/elisacluster-router   1/1     1            1           4h44m

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/elisacluster-router-7686457f5f   1         1         1       4h44m

NAME                            READY   AGE
statefulset.apps/elisacluster   3/3     4h44m

You can use port forwarding in the following way:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl port-forward service/elisacluster mysql --namespace=mysql-cluster
Forwarding from 127.0.0.1:6446 -> 6446

to access your MySQL InnoDB Cluster on a second terminal in order to check its health:

elisa@cloudshell:~ (eu-zurich-1)$ mysqlsh -h127.0.0.1 -P6446 -uroot -p
Please provide the password for 'root@127.0.0.1:6446': *******
Save password for 'root@127.0.0.1:6446'? [Y]es/[N]o/Ne[v]er (default No): N
MySQL Shell 8.0.28-commercial

Copyright (c) 2016, 2022, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'root@127.0.0.1:6446'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 36651
Server version: 8.0.28 MySQL Community Server - GPL
No default schema selected; type \use  to set one.
 MySQL  127.0.0.1:6446 ssl  JS > dba.getCluster().status();
{
    "clusterName": "elisacluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306": {
                "address": "elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306", 
                "memberRole": "PRIMARY", 
                "memberState": "(MISSING)", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "shellConnectError": "MySQL Error 2005: Could not open connection to 'elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306': Unknown MySQL server host 'elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local' (-2)", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }, 
            "elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local:3306": {
                "address": "elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local:3306", 
                "memberRole": "SECONDARY", 
                "memberState": "(MISSING)", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "shellConnectError": "MySQL Error 2005: Could not open connection to 'elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local:3306': Unknown MySQL server host 'elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local' (-2)", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }, 
            "elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local:3306": {
                "address": "elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local:3306", 
                "memberRole": "SECONDARY", 
                "memberState": "(MISSING)", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "shellConnectError": "MySQL Error 2005: Could not open connection to 'elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local:3306': Unknown MySQL server host 'elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local' (-2)", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306"
}

 MySQL  127.0.0.1:6446 ssl  JS > \sql
Switching to SQL mode... Commands end with ;
 MySQL  127.0.0.1:6446 ssl  SQL > select @@hostname;
+----------------+
| @@hostname     |
+----------------+
| elisacluster-0 |
+----------------+
1 row in set (0.0018 sec)
 MySQL  127.0.0.1:6446 ssl  SQL > SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-----------------------------------------------------------------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST                                                           | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+-----------------------------------------------------------------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 717dbe17-ba71-11ec-8a91-3665daa9c822 | elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local |        3306 | ONLINE       | PRIMARY     | 8.0.28         | XCom                       |
| group_replication_applier | b02c3c9a-ba71-11ec-8b65-5a93db09dda5 | elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
| group_replication_applier | eb06aadd-ba71-11ec-8aac-aa31e5d7e08b | elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
+---------------------------+--------------------------------------+-----------------------------------------------------------------------+-------------+--------------+-------------+----------------+----------------------------+
3 rows in set (0.0036 sec)
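Note that the memberState “(MISSING)” and shellConnectError entries in the dba.getCluster().status() output above only mean that MySQL Shell, running outside the cluster network, cannot resolve the internal pod hostnames through the port-forward tunnel; group replication itself reports all three members ONLINE. If you also want to test the read-only port of the router (6447 by convention), a hedged sketch:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl port-forward service/elisacluster 6447:6447 --namespace=mysql-cluster
elisa@cloudshell:~ (eu-zurich-1)$ mysqlsh -h127.0.0.1 -P6447 -uroot -p --sql -e "select @@hostname;"

With the default router configuration this should land on one of the secondaries.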

Easy, right?
Yes, but running databases in containers is still a tricky subject. As we said above, many topics need to be addressed: deployment type, performance, backups, storage and persistence, … So stay tuned, more blog posts about MySQL on K8s will come soon…

By Elisa Usai

The article Installing MySQL InnoDB Cluster in OKE using a MySQL Operator appeared first on the dbi services Blog.

Upgrade AHF and TFA on an ODA


TFA (Trace File Analyzer) is part of AHF (Autonomous Health Framework). Those tools are preinstalled on and part of the ODA (Oracle Database Appliance). As you might know, patching and upgrading normally go through the ODA global Bundle patches, but AHF can, without any problem, be upgraded independently. In this blog I wanted to share with you how I upgraded TFA to the latest version, 21.4. The upgrade is performed with the root user. This version addresses CVE-2021-45105/CVE-2021-44228/CVE-2021-45046. As a reminder, the Apache Log4j vulnerabilities are covered by CVE-2021-44228 and CVE-2021-45046.

Check current version of TFA

First we can check if TFA is up and running and which version is currently used.

[root@ODA01 ~]# /opt/oracle/dcs/oracle.ahf/bin/tfactl status
WARNING - TFA Software is older than 180 days. Please consider upgrading TFA to the latest version.

.------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID  | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 5388 | 5000 | 20.1.3.0.0 | 20130020200429161658 | COMPLETE         |
'-----------+---------------+------+------+------------+----------------------+------------------'

As we can see, we are currently running TFA/AHF version 20.1.3.0.0.

Check running processes

We can also check the running TFA processes.

[root@ODA01 ~]# ps -ef | grep -i tfa | grep -v grep
root      4536     1  0 Oct18 ?        00:18:06 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
root      5388     1  0 Oct18 ?        02:55:07 /opt/oracle/dcs/oracle.ahf/jre/bin/java -server -Xms512m -Xmx1024m -Djava.awt.headless=true -Ddisable.checkForUpdate=true -XX:HeapDumpPath=/opt/oracle/dcs/oracle.ahf/data/ODA01/diag/tfa -XX:ParallelGCThreads=5 oracle.rat.tfa.TFAMain /opt/oracle/dcs/oracle.ahf/tfa
[root@ODA01 ~]#

Check the location of AHF

It is important to check in which directory AHF is currently installed in order to provide the appropriate directory in the upgrade command options. That way the setup script will see that a version is already installed and will offer to upgrade it. Otherwise a second, separate AHF installation will be performed.

[root@ODA01 ~]# cat /etc/oracle.ahf.loc
/opt/oracle/dcs/oracle.ahf

AHF is installed on the ODA in the /opt/oracle/dcs/oracle.ahf directory.

Backup of the current AHF version

Before doing any modification, it is important to back up the current AHF version as a fallback. I created a tar archive of the current installation directory.

[root@ODA01 ~]# cd /opt/oracle/dcs

[root@ODA01 dcs]# ls -ltrh
total 25M
drwxr-xr-x.  3 root   root     4.0K Jul  2  2019 rdbaas
drwxr-xr-x.  3 root   root     4.0K Jul  2  2019 scratch
drwx------   2 root   root     4.0K Jul  2  2019 dcsagent_wallet
drwxr-xr-x   2 root   root     4.0K Jul  4  2019 ft
drwxr-xr-x   2 root   root     4.0K Aug 11  2019 Inventory
drwxr-xr-x   4 root   root     4.0K May 17  2020 dcs-ui
-rwxr-xr-x   1 root   root     6.8K May 21  2020 configuredcs.pl
-rw-r--r--   1 root   root      25M May 21  2020 dcs-ui.zip
drwxr-xr-x   4 root   root     4.0K Sep  2  2020 repo
-rw-r--r--   1 root   root        0 Sep  2  2020 dcscontroller-stderr.log
-rw-r--r--   1 root   root     6.7K Sep  3  2020 dcscontroller-stdout.log
drwxr-xr-x   6 oracle oinstall  32K Sep  3  2020 commonstore
drwxr-xr-x  12 root   root     4.0K Sep  3  2020 oracle.ahf
drwxr-xr-x.  2 root   root     4.0K Sep  3  2020 agent
drwxr-xr-x.  2 root   root     4.0K Sep  3  2020 sample
drwxr-xr-x   4 root   root     4.0K Sep  3  2020 java
drwxr-xr-x.  3 root   root     4.0K Sep  3  2020 conf
drwxr-xr-x.  3 root   root     4.0K Sep  3  2020 dcscli
drwxr-xr-x.  2 root   root     4.0K Sep  3  2020 bin
drwx------.  5 root   root      20K Dec 21 00:00 log

[root@ODA01 dcs]# mkdir /root/backup_ahf_for_upgrade/

[root@ODA01 dcs]# tar -czf /root/backup_ahf_for_upgrade/oracle.ahf.20.1.3.0.0.tar ./oracle.ahf

[root@ODA01 dcs]# ls -ltrh /root/backup_ahf_for_upgrade
total 1.3G
-rw-r--r-- 1 root root 1.3G Dec 21 14:26 oracle.ahf.20.1.3.0.0.tar
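Before moving on, it is worth verifying that the archive is readable (a quick sanity check, not mandatory):

[root@ODA01 dcs]# tar -tzf /root/backup_ahf_for_upgrade/oracle.ahf.20.1.3.0.0.tar | head -3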

Download new AHF version

You can download the latest AHF version from the My Oracle Support portal. Download patch 30166242:
Patch 30166242: PLACEHOLDER – DOWNLOAD LATEST AHF (TFA and ORACHK/EXACHK)

I have created a directory on the ODA to upload the patch:

[root@ODA01 dcs]# mkdir /u01/app/patch/TFA

Upgrade AHF on the ODA

In this part we will see the procedure to upgrade AHF on the ODA. We first need to unzip the AHF-LINUX_v21.4.0.zip file and run ahf_setup. The installation script will recognize the existing 20.1.3 version and offer to upgrade it.

[root@ODA01 dcs]# cd /u01/app/patch/TFA

[root@ODA01 TFA]# ls -ltrh
total 394M
-rw-r--r-- 1 root root 393M Dec 21 10:16 AHF-LINUX_v21.4.0.zip

[root@ODA01 TFA]# unzip -q AHF-LINUX_v21.4.0.zip

[root@ODA01 TFA]# ls -ltrh
total 792M
-r-xr-xr-x 1 root root 398M Dec 20 19:28 ahf_setup
-rw-r--r-- 1 root root  384 Dec 20 19:30 ahf_setup.dat
-rw-r--r-- 1 root root 1.5K Dec 20 19:31 README.txt
-rw-r--r-- 1 root root  625 Dec 20 19:31 oracle-tfa.pub
-rw-r--r-- 1 root root 393M Dec 21 10:16 AHF-LINUX_v21.4.0.zip

[root@ODA01 TFA]# ./ahf_setup -ahf_loc /opt/oracle/dcs -data_dir /opt/oracle/dcs

AHF Installer for Platform Linux Architecture x86_64

AHF Installation Log : /tmp/ahf_install_214000_58089_2021_12_21-14_30_06.log

Starting Autonomous Health Framework (AHF) Installation

AHF Version: 21.4.0 Build Date: 202112200745

AHF is already installed at /opt/oracle/dcs/oracle.ahf

Installed AHF Version: 20.1.3 Build Date: 202004291616

Do you want to upgrade AHF [Y]|N : Y

Upgrading /opt/oracle/dcs/oracle.ahf

Shutting down AHF Services
Stopped OSWatcher
Nothing to do !
Shutting down TFA
/etc/init.d/init.tfa: line 661: /sbin/stop: No such file or directory
. . . . .
Killing TFA running with pid 5388
. . .
Successfully shutdown TFA..

Starting AHF Services
Starting TFA..
Waiting up to 100 seconds for TFA to be started..
. . . . .
. . . . .
. . . . .
Successfully started TFA Process..
. . . . .
TFA Started and listening for commands


Do you want AHF to store your My Oracle Support Credentials for Automatic Upload ? Y|[N] : N

AHF is successfully upgraded to latest version

.-----------------------------------------------------------------.
| Host      | TFA Version | TFA Build ID         | Upgrade Status |
+-----------+-------------+----------------------+----------------+
| ODA01     |  21.4.0.0.0 | 21400020211220074549 | UPGRADED       |
'-----------+-------------+----------------------+----------------'

Moving /tmp/ahf_install_214000_58089_2021_12_21-14_30_06.log to /opt/oracle/dcs/oracle.ahf/data/ODA01/diag/ahf/

[root@ODA01 TFA]#

Check new AHF version

We can check that the new version of AHF is 21.4.

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl version

AHF version: 21.4.0

Check TFA running processes

We can check that TFA is up and running.

[root@ODA01 TFA]# ps -ef | grep -i tfa | grep -v grep
root      4536     1  0 Oct18 ?        00:18:06 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
root     61938     1 62 14:31 ?        00:01:36 /opt/oracle/dcs/oracle.ahf/jre/bin/java -server -Xms512m -Xmx1024m -Djava.awt.headless=true -Ddisable.checkForUpdate=true -XX:HeapDumpPath=/opt/oracle/dcs/oracle.ahf/data/ODA01/diag/tfa -XX:ParallelGCThreads=5 oracle.rat.tfa.TFAMain /opt/oracle/dcs/oracle.ahf/tfa
[root@ODA01 TFA]#

After the upgrade script completes, there might still be some TFA processes running to rebuild the inventory:

root     15469 15077  0 14:58 ?        00:00:00 sh -c /opt/oracle/dcs/oracle.ahf/tfa/bin/tfactl rediscover -mode full > /dev/null 2>&1
root     15470 15469  0 14:58 ?        00:00:00 /bin/sh /opt/oracle/dcs/oracle.ahf/tfa/bin/tfactl rediscover -mode full
root     15505 15500  0 14:58 ?        00:00:00 /bin/sh /opt/oracle/dcs/oracle.ahf/tfa/bin/tfactl.tfa rediscover -mode full
root     15524 15505  1 14:58 ?        00:00:00 /u01/app/19.0.0.0/grid/perl/bin/perl /opt/oracle/dcs/oracle.ahf/tfa/bin/tfactl.pl rediscover -mode full

Make sure all those processes have completed successfully (they no longer exist) before stopping AHF. Otherwise your inventory will end up with a STOPPED status.
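A small shell loop like the following can be used to wait until the rediscovery is done (a sketch; adjust the sleep interval to your taste):

[root@ODA01 TFA]# while ps -ef | grep -v grep | grep -q 'tfactl.*rediscover'; do echo "rediscover still running..."; sleep 30; done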

Check status of AHF

We can check AHF status.

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl statusahf


.-------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+-------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 61938 | 5000 | 21.4.0.0.0 | 21400020211220074549 | COMPLETE         |
'-----------+---------------+-------+------+------------+----------------------+------------------'


No scheduler for any ID

orachk daemon is not running

[root@ODA01 TFA]#

TFA is running. No AHF scheduler. No orachk daemon.

Stop AHF and TFA

To check that everything is working as expected, let’s stop AHF.

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl stopahf

Stopping TFA from the Command Line
Stopped OSWatcher
Nothing to do !
Please wait while TFA stops
Please wait while TFA stops
TFA-00002 Oracle Trace File Analyzer (TFA) is not running
TFA Stopped Successfully
Successfully stopped TFA..

orachk scheduler is not running

There is still one process left for TFA, the one from init.d:

[root@ODA01 TFA]# ps -ef | grep -i tfa | grep -v grep
root      4536     1  0 Oct18 ?        00:18:06 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
[root@ODA01 TFA]#

We are going to stop it:

[root@ODA01 TFA]# /etc/init.d/init.tfa stop
Stopping TFA from init for shutdown/reboot
Nothing to do !
TFA Stopped Successfully
Successfully stopped TFA..

And there are no more TFA processes up and running:

[root@ODA01 TFA]# ps -ef | grep -i tfa | grep -v grep
[root@ODA01 TFA]#

Start AHF

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl startahf

Starting TFA..
Waiting up to 100 seconds for TFA to be started..
. . . . .
. . . . .
Successfully started TFA Process..
. . . . .
TFA Started and listening for commands

INFO: Starting orachk scheduler in background. Details for the process can be found at /opt/oracle/dcs/oracle.ahf/data/ODA01/diag/orachk/compliance_start_211221_143845.log

We can check the TFA status:

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/tfactl status

.-------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+-------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 87371 | 5000 | 21.4.0.0.0 | 21400020211220074549 | COMPLETE         |
'-----------+---------------+-------+------+------------+----------------------+------------------'

We can check the AHF status as well and see that the scheduler and the orachk daemon are now up and running:

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl statusahf


.-------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+-------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 87371 | 5000 | 21.4.0.0.0 | 21400020211220074549 | COMPLETE         |
'-----------+---------------+-------+------+------------+----------------------+------------------'

------------------------------------------------------------

Master node = ODA01

orachk daemon version = 21.4.0

Install location = /opt/oracle/dcs/oracle.ahf/orachk

Started at = Tue Dec 21 14:38:57 CET 2021

Scheduler type = TFA Scheduler

Scheduler PID:  87371

------------------------------------------------------------
ID: orachk.autostart_client_oratier1
------------------------------------------------------------
AUTORUN_FLAGS  =  -usediscovery -profile oratier1 -dball -showpass -tag autostart_client_oratier1 -readenvconfig
COLLECTION_RETENTION  =  7
AUTORUN_SCHEDULE  =  3 2 * * 1,2,3,4,5,6
------------------------------------------------------------
------------------------------------------------------------
ID: orachk.autostart_client
------------------------------------------------------------
AUTORUN_FLAGS  =  -usediscovery -tag autostart_client -readenvconfig
COLLECTION_RETENTION  =  14
AUTORUN_SCHEDULE  =  3 3 * * 0
------------------------------------------------------------

Next auto run starts on Dec 22, 2021 02:03:00

ID:orachk.AUTOSTART_CLIENT_ORATIER1

We can also check the TFA processes:

[root@ODA01 TFA]#  ps -ef | grep -i tfa | grep -v grep
root     86989     1  0 14:38 ?        00:00:00 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
root     87371     1 19 14:38 ?        00:00:13 /opt/oracle/dcs/oracle.ahf/jre/bin/java -server -Xms512m -Xmx1024m -Djava.awt.headless=true -Ddisable.checkForUpdate=true -XX:HeapDumpPath=/opt/oracle/dcs/oracle.ahf/data/ODA01/diag/tfa -XX:ParallelGCThreads=5 oracle.rat.tfa.TFAMain /opt/oracle/dcs/oracle.ahf/tfa
root     92789 87371 38 14:39 ?        00:00:00 /u01/app/19.0.0.0/grid/perl/bin/perl /opt/oracle/dcs/oracle.ahf/tfa/bin/tfactl.pl availability product Europe/Zurich
[root@ODA01 TFA]#

Stop AHF and TFA

We will stop AHF and TFA again.

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl stopahf

Stopping TFA from the Command Line
Nothing to do !
Please wait while TFA stops
Please wait while TFA stops
TFA-00002 Oracle Trace File Analyzer (TFA) is not running
TFA Stopped Successfully
Successfully stopped TFA..

Stopping orachk scheduler ...
Removing orachk cache discovery....
No orachk cache discovery found.



Unable to send message to TFA



Removed orachk from inittab


Stopped orachk

AHF status checks:

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl statusahf

TFA-00002 Oracle Trace File Analyzer (TFA) is not running


No scheduler for any ID

orachk daemon is not running

TFA status checks:

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/tfactl status
TFA-00002 Oracle Trace File Analyzer (TFA) is not running

Check the processes and stop the TFA init.d service:

[root@ODA01 TFA]#  ps -ef | grep -i tfa | grep -v grep
root     86989     1  0 14:38 ?        00:00:00 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null

[root@ODA01 TFA]# /etc/init.d/init.tfa stop
Stopping TFA from init for shutdown/reboot
Nothing to do !
TFA Stopped Successfully
Successfully stopped TFA..

[root@ODA01 TFA]#  ps -ef | grep -i tfa | grep -v grep
[root@ODA01 TFA]#

Restart only TFA

Finally, we only want to keep TFA up and running, without the AHF scheduler or the orachk daemon. So we are only going to start TFA.

[root@ODA01 TFA]# /etc/init.d/init.tfa start
Starting TFA..
Waiting up to 100 seconds for TFA to be started..
. . . . .
Successfully started TFA Process..
. . . . .
TFA Started and listening for commands

Final checks

TFA running processes:

[root@ODA01 TFA]#  ps -ef | grep -i tfa | grep -v grep
root      5344     1  0 14:43 ?        00:00:00 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
root      5732     1 77 14:43 ?        00:00:11 /opt/oracle/dcs/oracle.ahf/jre/bin/java -server -Xms512m -Xmx1024m -Djava.awt.headless=true -Ddisable.checkForUpdate=true -XX:HeapDumpPath=/opt/oracle/dcs/oracle.ahf/data/ODA01/diag/tfa -XX:ParallelGCThreads=5 oracle.rat.tfa.TFAMain /opt/oracle/dcs/oracle.ahf/tfa
[root@ODA01 TFA]#

TFA status:

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/tfactl status

.------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID  | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 5732 | 5000 | 21.4.0.0.0 | 21400020211220074549 | COMPLETE         |
'-----------+---------------+------+------+------------+----------------------+------------------'
[root@ODA01 TFA]#

AHF status:

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl statusahf


.------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID  | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 5732 | 5000 | 21.4.0.0.0 | 21400020211220074549 | COMPLETE         |
'-----------+---------------+------+------+------------+----------------------+------------------'


No scheduler for any ID

orachk daemon is not running

[root@ODA01 TFA]#

Cleanup

We can keep the backup of the previous AHF version for a few days, just in case, and remove it later.

The AHF installation files can be deleted:

[root@ODA01 ~]# cd /u01/app/patch/

[root@ODA01 patch]# ls -l TFA
total 810144
-rw-r--r-- 1 root root 411836201 Dec 21 10:16 AHF-LINUX_v21.4.0.zip
-r-xr-xr-x 1 root root 416913901 Dec 20 19:28 ahf_setup
-rw-r--r-- 1 root root       384 Dec 20 19:30 ahf_setup.dat
-rw-r--r-- 1 root root       625 Dec 20 19:31 oracle-tfa.pub
-rw-r--r-- 1 root root      1525 Dec 20 19:31 README.txt

[root@ODA01 patch]# rm -rf TFA

[root@ODA01 patch]# ls
[root@ODA01 patch]#

The article Upgrade AHF and TFA on an ODA appeared first on dbi Blog.

Parallelize your Oracle INSERT with DBMS_PARALLEL_EXECUTE


One of the challenges for all PL/SQL developers is to simulate Production activity in a non-production environment, for example different INSERTs executed by several sessions.

Different tools exist, like Oracle RAT (Real Application Testing), but they require a license; alternatively, you can create your own PL/SQL package using the DBMS_SCHEDULER or DBMS_PARALLEL_EXECUTE packages.

The aim of this blog is to show you how to use DBMS_PARALLEL_EXECUTE to parallelize several INSERT commands through different sessions.

My sources for this blog are oracle-base and the Oracle documentation.

My goal is to insert 3000 rows into the table DBI_FK_NOPART through different sessions in parallel.

First of all, let’s check the MAX primary key in the table:

select max(pkey) from XXXX.dbi_fk_nopart;
MAX(PKEY)
9900038489

For my test I have created and populated a new table, test_tab, as specified on oracle-base; it will be used to create the chunks that drive the different parallel sessions. In my case, we will create 3 chunks:

SELECT DISTINCT num_col, num_col FROM test_tab;
num_col num_col1
10      10
30      30
20      20

The following code must be written in a PL/SQL block or a PL/SQL procedure; I only copy the main commands here:

The first step is to create a new task:

DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'TASK_NAME');

We split the data into 3 chunks:

--We create 3 chunks
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(task_name => 'TASK_NAME',sql_stmt =>'SELECT DISTINCT num_col, num_col FROM test_tab', by_rowid => false); 

Now I want to insert 1000 rows for each chunk, each chunk corresponding to a different session. So in the end I will have 3000 rows inserted through different sessions.

Add a dynamic PL/SQL block to execute the INSERT:

v_sql_stmt := 'declare
s varchar2(16000); vstart_id number := :start_id; vend_id number:= :end_id;
table_name varchar2(30);
v_pkey number;
begin
EXECUTE IMMEDIATE ''SELECT max(pkey) FROM xxxx.DBI_FK_NOPART'' INTO v_pkey;
for rec in 1..1000 loop
s:=''INSERT /*TEST_INSERT_DBI_FK_NOPART*/ INTO xxxx.DBI_FK_NOPART ( 
pkey,
boid,
metabo,
lastupdate,
processid,
rowcomment,
created,
createduser,
replaced,
replaceduser,
archivetag,
mdbid,
itsforecast,
betrag,
itsopdetherkunft,
itsopdethkerstprm,
itsfckomppreisseq,
clsfckomppreisseq,
issummandendpreis,
partitiontag,
partitiondomain,
fcvprodkomppkey,
fckvprdankomppkey,
session_id
) VALUES (
1 +'||v_pkey||' ,
''''8189b7c7-0c36-485b-8993-054dddd62708'''' ,
-695,
sysdate,
''''B.3142'''' ,
NULL,
SYSDATE,
''''svc_xxxx_Mig_DEV_DBITEST'''' ,
SYSDATE,
NULL,
NULL,
NULL,
''''8a9f1321-b3ec-46d5-b6c7-af1c7fb5167G'''' ,
0,
''''ae03b3fc-b31c-433b-be0f-c8b0bdaa82fK'''' ,
NULL,
''''5849f308-215b-486b-95bd-cbd7afe8440H'''',  
-251,
0,
201905,
''''E'''',  
:start_id,
:end_id,
SYS_CONTEXT(''''USERENV'''',''''SESSIONID''''))'';
execute immediate s using vstart_id, vend_id;
commit;
end loop;
end;';

The next step is to execute the TASK with parallel_level = 4, meaning I want to insert the rows through 4 different sessions.

DBMS_PARALLEL_EXECUTE.RUN_TASK (task_name => 'TASK_NAME',   sql_stmt =>v_sql_stmt,   language_flag => DBMS_SQL.NATIVE, parallel_level => 4 );

Let’s check the TASK execution status:

SELECT task_name,status FROM user_parallel_execute_tasks;
TASK_NAME STATUS
TASK_NAME FINISHED

And let’s check the chunks created; we should have 3 chunks:

SELECT chunk_id, status, start_id, end_id FROM   user_parallel_execute_chunks WHERE  task_name = 'TASK_NAME' ORDER BY chunk_id;
CHUNK_ID STATUS     START_ID END_ID
9926    PROCESSED   10  10
9927    PROCESSED   30  30
9928    PROCESSED   20  20

As we have used the parameter parallel_level=4, we should have 4 different jobs using 4 different sessions:

SELECT log_date,job_name, status FROM   user_scheduler_job_run_details WHERE  job_name LIKE 'TASK$%' order by log_date desc;
LOG_DATE                            JOB_NAME        STATUS      SESSION_ID
29.12.21 14:38:41.882995000 +01:00  TASK$_22362_3   SUCCEEDED   3152,27076
29.12.21 14:38:41.766619000 +01:00  TASK$_22362_2   SUCCEEDED   14389,25264
29.12.21 14:38:41.657571000 +01:00  TASK$_22362_1   SUCCEEDED   3143,9335
29.12.21 14:38:41.588968000 +01:00  TASK$_22362_4   SUCCEEDED   6903,60912

Now let’s check the MAX primary key in the table:

select max(pkey) from xxxx.dbi_fk_nopart;
MAX(PKEY)
9900041489
select 9900041489 - 9900038489 from dual;
3000

3000 rows have been inserted, and the data has been split into chunks of 1000 rows per session:

select count(*),session_id from xxxx.dbi_fk_nopart where pkey > 9900038489 group by session_id;
count(*) session_id
1000    4174522508
1000    539738149
1000    4190321565

Conclusion:

DBMS_PARALLEL_EXECUTE is easy to use, performs well, and has many options:

  • Data can be split by ROWID using DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID
  • Data can be split on a number column using DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_NUMBER_COL (see the sketch below)
  • Data can be split by a user-defined query using DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL (used in this blog)
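As an illustration of the second option, here is a minimal sketch chunking the table on its numeric primary key (the task name and chunk size are arbitrary choices, not taken from the test above):

BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'TASK_BY_NUMCOL');
  -- one chunk per range of 100000 pkey values
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_NUMBER_COL(
      task_name    => 'TASK_BY_NUMCOL',
      table_owner  => 'XXXX',
      table_name   => 'DBI_FK_NOPART',
      table_column => 'PKEY',
      chunk_size   => 100000);
END;
/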

The article Parallelize your Oracle INSERT with DBMS_PARALLEL_EXECUTE appeared first on dbi Blog.

Oracle Partition and Performance of massive/concurrent Inserts


For a customer, I had to check whether partitioning improves the performance of massive, concurrent inserts.

The goal is to execute several INSERTs in parallel via the dbms_parallel_execute package (my previous blog, “Parallelize your Oracle INSERT with DBMS_PARALLEL_EXECUTE”, explains how to use it).

The idea is to insert more than 20 million rows into 2 tables:

  • One table not partitioned –> DBI_FK_NOPART
  • One table partitioned in HASH –> DBI_FK_PART

Both tables have the same columns and the same indexes, but the indexes are of different types:

  • All indexes on the partitioned table are global (a sketch follows below):
    • CREATE INDEX …GLOBAL PARTITION BY HASH (….)….
  • All indexes on the non-partitioned table are regular:
    • CREATE INDEX …ON…
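The exact DDL is not shown in this blog; here is a hedged sketch of what the two index flavours could look like (index names and the partition count are assumptions):

-- global hash-partitioned index on the partitioned table
CREATE INDEX dbi_fk_part_pk_i ON xxxx.dbi_fk_part (pkey)
  GLOBAL PARTITION BY HASH (pkey) PARTITIONS 32;

-- regular index on the non-partitioned table
CREATE INDEX dbi_fk_nopart_pk_i ON xxxx.dbi_fk_nopart (pkey);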
--Table DBI_FK_PART --> PARTITIONED
SQL> select TABLE_NAME,PARTITION_NAME from dba_tab_partitions where table_name = 'DBI_FK_PART';

TABLE_NAME          PARTITION_NAME
------------------- --------------------------------------------------------------------------------------------------------------------------------
DBI_FK_PART         SYS_P9797
DBI_FK_PART         SYS_P9798
DBI_FK_PART         SYS_P9799
DBI_FK_PART         SYS_P9800
DBI_FK_PART         SYS_P9801
DBI_FK_PART         SYS_P9802
DBI_FK_PART         SYS_P9803
DBI_FK_PART         SYS_P9804
DBI_FK_PART         SYS_P9805
DBI_FK_PART         SYS_P9806
DBI_FK_PART         SYS_P9807

TABLE_NAME          PARTITION_NAME
------------------- --------------------------------------------------------------------------------------------------------------------------------
DBI_FK_PART         SYS_P9808
DBI_FK_PART         SYS_P9809
DBI_FK_PART         SYS_P9810
DBI_FK_PART         SYS_P9811
DBI_FK_PART         SYS_P9812
DBI_FK_PART         SYS_P9813
DBI_FK_PART         SYS_P9814
DBI_FK_PART         SYS_P9815
DBI_FK_PART         SYS_P9816
DBI_FK_PART         SYS_P9817
DBI_FK_PART         SYS_P9818

TABLE_NAME          PARTITION_NAME
------------------- --------------------------------------------------------------------------------------------------------------------------------
DBI_FK_PART         SYS_P9819
DBI_FK_PART         SYS_P9820
DBI_FK_PART         SYS_P9821
DBI_FK_PART         SYS_P9822
DBI_FK_PART         SYS_P9823
DBI_FK_PART         SYS_P9824
DBI_FK_PART         SYS_P9825
DBI_FK_PART         SYS_P9826
DBI_FK_PART         SYS_P9827
DBI_FK_PART         SYS_P9828

32 rows selected.


--TABLE DBI_FK_NOPART --> NOT PARTITIONED

SQL> select TABLE_NAME,PARTITION_NAME from dba_tab_partitions where table_name = 'DBI_FK_NOPART';

no rows selected

SQL>

Each table has more than 1.2 billion rows:

SQL> select count(*) from xxxx.dbi_fk_nopart;

  COUNT(*)
----------
1241226011

1 row selected.

SQL> select count(*) from xxxx.dbi_fk_part;

  COUNT(*)
----------
1196189234

1 row selected.

Let’s check the maximum primary key for both tables:

SQL> select max(pkey) from xxxx.dbi_fk_part;

 MAX(PKEY)
----------
9950649803

1 row selected.

SQL> select max(pkey) from xxxx.dbi_fk_nopart;

 MAX(PKEY)
----------
9960649804

1 row selected.

SQL>

Let’s create 2 procedures:

  • “test_insert_nopart”, which does the INSERT into the non-partitioned table DBI_FK_NOPART
  • “test_insert_part”, which does the INSERT into the partitioned table DBI_FK_PART
create or replace NONEDITIONABLE procedure test_insert_nopart is

v_sql_stmt varchar2(32767);
v_pkey number;
l_chunk_id NUMBER;
l_start_id NUMBER;
  l_end_id   NUMBER;
  l_any_rows BOOLEAN;
  l_try      NUMBER;
  l_status   NUMBER;
begin
    DBMS_OUTPUT.PUT_LINE('start : '||to_char(sysdate,'hh24:mi:ss'));

   begin
     DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'TASK_NAME');
   exception when others then null;
   end;

   DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'TASK_NAME');
   --We create the chunks: one chunk per row returned by the query (up to 10,000 here)
   DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(task_name => 'TASK_NAME',sql_stmt =>'SELECT NUM_COL,NUM_COL+10 FROM TEST_TAB WHERE ROWNUM < 10001', by_rowid => false);   

   SELECT max(pkey) into v_pkey FROM XXXX.DBI_FK_NOPART;
   --I will insert 1000 rows for each chunk; each chunk will work with a different session_id
   v_sql_stmt := 'declare
       s varchar2(16000); vstart_id number := :start_id; vend_id number:= :end_id;
       table_name varchar2(30);
       v_pkey number;
       begin
         EXECUTE IMMEDIATE ''SELECT max(pkey) FROM XXXX.DBI_FK_NOPART'' INTO v_pkey;
         for rec in 1..1000 loop
         s:=''INSERT /*TEST_INSERT_DBI_FK_NOPART*/ INTO XXXX.DBI_FK_NOPART ( 
        pkey,
        boid,
        metabo,
        lastupdate,
        processid,
        rowcomment,
        created,
        createduser,
        replaced,
        replaceduser,
        archivetag,
        mdbid,
        itsforecast,
        betrag,
        itsopdetherkunft,
        itsopdethkerstprm,
        itsfckomppreisseq,
        clsfckomppreisseq,
        issummandendpreis,
        partitiontag,
        partitiondomain,
        fcvprodkomppkey,
        fckvprdankomppkey,
        session_id
        ) VALUES (
        1 +'||v_pkey||' ,
         ''''8189b7c7-0c36-485b-8993-054dddd62708'''' ,
        -695,
        sysdate,
         ''''B.3142'''' ,
        NULL,
        SYSDATE,
         ''''XXXX_DEV_DBITEST'''' ,
        SYSDATE,
        NULL,
        NULL,
        NULL,
        ''''8a9f1321-b3ec-46d5-b6c7-af1c7fb5167G'''' ,
        0,
         ''''ae03b3fc-b31c-433b-be0f-c8b0bdaa82fK'''' ,
        NULL,
         ''''5849f308-215b-486b-95bd-cbd7afe8440H'''',  
        -251,
        0,
        201905,
         ''''E'''',  
        :start_id,
        :end_id,
        SYS_CONTEXT(''''USERENV'''',''''SESSIONID''''))'';
         execute immediate s using vstart_id, vend_id;
         commit;
         end loop;
     end;';
dbms_output.put_Line (v_sql_stmt);

   DBMS_PARALLEL_EXECUTE.RUN_TASK (task_name => 'TASK_NAME',
     sql_stmt =>v_sql_stmt,
     language_flag => DBMS_SQL.NATIVE, parallel_level => 4 );

    DBMS_OUTPUT.PUT_LINE('end : '||to_char(sysdate,'hh24:mi:ss'));

end;


create or replace NONEDITIONABLE procedure test_insert_part is

v_sql_stmt varchar2(32767);
v_pkey number;
l_chunk_id NUMBER;
l_start_id NUMBER;
  l_end_id   NUMBER;
  l_any_rows BOOLEAN;
  l_try      NUMBER;
  l_status   NUMBER;
begin
    DBMS_OUTPUT.PUT_LINE('start : '||to_char(sysdate,'hh24:mi:ss'));

   begin
     DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'TASK_NAME');
   exception when others then null;
   end;

   DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'TASK_NAME');
   --We create the chunks: one chunk per row returned by the query (up to 10,000 here)
   DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(task_name => 'TASK_NAME',sql_stmt =>'SELECT NUM_COL,NUM_COL+10 FROM TEST_TAB WHERE ROWNUM < 10001', by_rowid => false);   

   SELECT max(pkey) into v_pkey FROM XXXX.DBI_FK_PART;
   --I will insert 1000 rows for each chunk; each chunk will work with a different session_id
   v_sql_stmt := 'declare
       s varchar2(16000); vstart_id number := :start_id; vend_id number:= :end_id;
       table_name varchar2(30);
       v_pkey number;
       begin
         EXECUTE IMMEDIATE ''SELECT max(pkey) FROM xxxx.DBI_FK_PART'' INTO v_pkey;
         for rec in 1..1000 loop
         s:=''INSERT /*TEST_INSERT_DBI_FK_PART*/ INTO xxxx.DBI_FK_PART ( 
        pkey,
        boid,
        metabo,
        lastupdate,
        processid,
        rowcomment,
        created,
        createduser,
        replaced,
        replaceduser,
        archivetag,
        mdbid,
        itsforecast,
        betrag,
        itsopdetherkunft,
        itsopdethkerstprm,
        itsfckomppreisseq,
        clsfckomppreisseq,
        issummandendpreis,
        partitiontag,
        partitiondomain,
        fcvprodkomppkey,
        fckvprdankomppkey,
        session_id
        ) VALUES (
        1 +'||v_pkey||' ,
         ''''8189b7c7-0c36-485b-8993-054dddd62708'''' ,
        -695,
        sysdate,
         ''''B.3142'''' ,
        NULL,
        SYSDATE,
         ''''xxxx_DBITEST'''' ,
        SYSDATE,
        NULL,
        NULL,
        NULL,
        ''''8a9f1321-b3ec-46d5-b6c7-af1c7fb5167G'''' ,
        0,
         ''''ae03b3fc-b31c-433b-be0f-c8b0bdaa82fK'''' ,
        NULL,
         ''''5849f308-215b-486b-95bd-cbd7afe8440H'''',  
        -251,
        0,
        201905,
         ''''E'''',  
        :start_id,
        :end_id,
        SYS_CONTEXT(''''USERENV'''',''''SESSIONID''''))'';
         execute immediate s using vstart_id, vend_id;
         commit;
         end loop;
     end;';
dbms_output.put_Line (v_sql_stmt);

   DBMS_PARALLEL_EXECUTE.RUN_TASK (task_name => 'TASK_NAME',
     sql_stmt =>v_sql_stmt,
     language_flag => DBMS_SQL.NATIVE, parallel_level => 4 );

    DBMS_OUTPUT.PUT_LINE('end : '||to_char(sysdate,'hh24:mi:ss'));

end;

 

Now let’s insert about 20 million rows into each table via the procedures we created above:

SQL> set timing on
SQL> set autotrace on
SQL> begin
  2  test_insert_nopart;
  3  end;
  4  /

PL/SQL procedure successfully completed.

Elapsed: 00:06:30.34
SQL> begin
  2  test_insert_part;
  3  end;
  4
  5  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:22.92


SQL> select max(pkey) from xxxx.dbi_fk_nopart;

 MAX(PKEY)
----------
9980650809

SQL> select 9980650809 - 9960649804 from dual;

9980650809-9960649804
---------------------
             20001005
             
SQL> select max(pkey) from xxxx.dbi_fk_part;

 MAX(PKEY)
----------
9980811483

SQL> select 9980811483 - 9950649803 from dual;

9980811483-9950649803
---------------------
             30161680


FIRST CONCLUSION:

  • About 20 million rows were inserted into the non-partitioned table “DBI_FK_NOPART” in 6 min 30 s
  • About 30 million rows were inserted into the partitioned table “DBI_FK_PART” in 23 seconds

In this test, a massive concurrent INSERT on a huge table is dramatically faster on the partitioned table than on the non-partitioned one.

 

Now let’s check the OEM graphs to understand why the INSERT into DBI_FK_PART is roughly 17 times faster than into DBI_FK_NOPART.

Between 03:40 PM and 03:46 PM, we can see the peak related to the Insert on DBI_FK_NOPART

At 03:49 PM, we can see a very small peak related to the Insert related to DBI_FK_PART

 

 

If we focus only on the INSERT commands (1st line and 4th line), the one into DBI_FK_PART (partitioned table) waits less on CPU (green) and CONCURRENCY (purple) compared to the INSERT into DBI_FK_NOPART (non-partitioned table), where I/O is the most significant wait event.

Let’s look in more detail at which events the database waits on for both INSERTs:

For INSERT into DBI_FK_NOPART:

And if we click on the Concurrency event:

For INSERT into DBI_FK_PART:

If we click on the Concurrency event:

 

SECOND CONCLUSION

The event “db file sequential read” seems to indicate that the difference in response time between the two tables is due to the type of index we created on each table (global partitioned index on the partitioned table vs. regular index on the non-partitioned table).

As it’s possible to create global partitioned indexes on non-partitioned tables, another interesting test (not done in this blog) would be to replace the regular indexes with global partitioned indexes on the non-partitioned table and check whether the response time improves.

To conclude, if we have the Partitioning license, then in terms of performance we should consider partitioning huge tables that are heavily accessed for reads (SELECT) or writes (INSERT).

The article Oracle Partition and Performance of massive/concurrent Inserts appeared first on dbi Blog.

Import tnsnames.ora in LDAP directory


This post gives a short introduction to directory naming and shows how to import entries from tnsnames.ora into an LDAP directory. Finally, as an alternative, you get an example of how a TNS connection string looks in LDIF file format. LDIF can be used universally to export data from and import data into LDAP directories.

What is directory naming?

In order to connect to a database, you either need to pass a DB connection string or you use an alias to look up the string. When using an alias, there are two ways to connect to an Oracle database:

  1. Local naming
    Look up DB connection strings locally in a tnsnames.ora file
    There is a variant where the connection is established “directly” using a connection string configured locally in the application or in a JDBC URL:
  2. Directory naming
    Look up DB connection strings remotely in an LDAP directory
    LDAP lookup is also possible via JDBC:

This diagram summarises all the possibilities:

Directory naming can be compared to DNS: all your aliases are on one central LDAP server (like on a DNS server).
If you do local naming, you use a local configuration file that can be compared to /etc/hosts.
The data structure in both the tnsnames.ora file and the LDAP server is a very simple key-value store; it’s essentially <Alias>=<DB connection string>.
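Under the hood, a client configured for directory naming resolves an alias with a simple LDAP search, similar to this sketch (host, port and context are taken from the ldap.ora example later in this post):

ldapsearch -h loadbalancer.company.com -p 1389 \
           -b "cn=OracleContext,dc=company,dc=com" "(cn=TESTDB)" orclNetDescString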

Directory naming has some benefits over local naming:

  • There is a single source of “truth” in one central directory. Much easier to manage: you don’t have to distribute tnsnames.ora files to the clients.
    It’s advisable to run LDAP highly available, for example by configuring replication between several LDAP servers and having load balancers distribute the traffic.
  • On every client, all connections are available. This is useful when accessing a DB remotely, for example for remote PDB cloning, using DB links, Data Guard Observer, doing remote RMAN backup & restore/recovery or cloning, etc.

Mass import from tnsnames.ora to LDAP Server

In the following example we are using Oracle Unified Directory as the LDAP directory. You are free to choose other LDAP directories for directory naming: OpenLDAP or Active Directory, to name just a few.
Once you have set up OUD for directory naming, you may want to do an initial load with contents from a local tnsnames.ora file. In the directory your TNS_ADMIN environment variable points to, make sure these three files are present:

sqlnet.ora
contains information on SQL*net configuration, important parameters are:

NAMES.DEFAULT_DOMAIN = company.com # default domain if domains are used
NAMES.DIRECTORY_PATH = (LDAP) # where to look up aliases; multiple values possible, for example (TNSNAMES, LDAP) to first look locally, then in LDAP

 

ldap.ora
contains information about your LDAP directory, important parameters are:

DIRECTORY_SERVERS     = (loadbalancer.company.com:1389) # Address of the LDAP server
DEFAULT_ADMIN_CONTEXT = "dc=company,dc=com" # Context where alias entries are stored
DIRECTORY_SERVER_TYPE = OID # Type of LDAP server

 

tnsnames.ora
contains aliases and connection strings, for example:

TESTDB = (DESCRIPTION=
           (ADDRESS_LIST=
              (ADDRESS=
                  (PROTOCOL=TCP)
                  (Host=testdb.company.com)
                  (Port=1521)
              )
           )
           (CONNECT_DATA=
              (SERVICE_NAME=testdb.company.com)
           )
        )

 

On the client from which you would like to do the mass import, make sure to have either the Oracle Client or the RDBMS software installed. Here, I’m using Oracle Client 12.2 64-bit on Windows. It’s quite an old version, but it fits our purpose. Both the Oracle Client and the RDBMS software ship with a tool called “Net Manager”. By reading the configuration files mentioned above, it’s able to connect both to remote LDAP servers and to the local tnsnames.ora file.

Start Net Manager. In the menu, choose “COMMAND – DIRECTORY – EXPORT NET SERVICE NAMES”:

After authenticating to the LDAP server, a wizard shows up on which you are able to choose which aliases to add to LDAP directory:
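Once the aliases are exported, you can verify on any client that they now resolve through the directory (assuming NAMES.DIRECTORY_PATH contains LDAP as shown above; scott is a placeholder user):

tnsping TESTDB
sqlplus scott@TESTDB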

Alternative: Use LDIF to export and import LDAP data

The LDIF file format is the easiest if you want to transfer data between LDAP directories. It’s also possible to create LDIF files containing TNS data. See the following example LDIF record for the alias “TESTDB”:

dn: CN=TESTDB,cn=OracleContext,dc=company,dc=com
orclNetDescString: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(Host=testdb.company.com)(Port=1521)))(CONNECT_DATA=(SERVICE_NAME=testdb.company.com)))
objectClass: top
objectClass: orclNetService
CN: TESTDB

Aliases are stored in the LDAP context “cn=OracleContext,dc=company,dc=com”.
“orclNetDescString” contains the actual DB connection string.
“CN” contains the alias value and is used to look up a connection string.
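Such an LDIF file can then be loaded into the directory with standard LDAP client tools, for example (a sketch; “cn=Directory Manager” is the typical OUD root DN and may differ in your setup):

ldapadd -h loadbalancer.company.com -p 1389 \
        -D "cn=Directory Manager" -w <password> \
        -f testdb.ldif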

The article Import tnsnames.ora in LDAP directory appeared first on dbi Blog.

Creating KVM Database System on separate VLAN network on ODA


The Oracle Database Appliance offers various possibilities for creating databases, either on the Bare Metal or using KVM DB Systems. Each DB System hosts a single database in a separate VM. What about having each DB System run on a separate network? In this blog I would like to share my tests and findings on how to create an additional network on the ODA and how to create a DB System on a separate VLAN interface. The principle would of course be the same if we wanted to create KVM Virtual Machines (compute instances) on a separate VLAN.

Configuration description

For my test I have an ODA X8-2M with a quad-port 10GBase-T network interface, running ODA version 19.13.
On my network card, the first 2 ports, p7p1 and p7p2, are assigned to btbond1 and the next 2 ports, p7p3 and p7p4, are assigned to my second bonding interface, btbond2. ODAs are configured by default with active-backup mode without LACP for all bonding interfaces. This is set automatically by the ODA and cannot be changed. Moreover, we need to keep in mind that on an appliance we never manually change the Linux network scripts: all network configuration changes need to be done with odacli.

btbond1 is used for my main network and we will use btbond2 to add additional networks.

The 2 additional networks are:
10.38.0.1/24 VLAN id 38
10.39.0.1/24 VLAN id 39

Checking network interface

With ethtool I can check that the two ports p7p3 and p7p4 are twisted pair and connected to the network:

[root@dbi-oda-x8 ~]# ethtool p7p3 | grep -iE "(\bport\b|detected)"
	Port: Twisted Pair
	Link detected: yes

[root@dbi-oda-x8 ~]# ethtool p7p4 | grep -iE "(\bport\b|detected)"
	Port: Twisted Pair
	Link detected: yes

I can see that both interfaces are assigned to the btbond2 interface:

[root@dbi-oda-x8 ~]# grep MASTER /etc/sysconfig/network-scripts/ifcfg-p7p3
MASTER=btbond2

[root@dbi-oda-x8 ~]# grep MASTER /etc/sysconfig/network-scripts/ifcfg-p7p4
MASTER=btbond2

Checking existing and default ODA configuration

After reimaging an ODA, we would have the default configuration below.

root@dbi-oda-x8 ~]# odacli list-networks

ID                                     Name                 NIC          Interface Type  Subnet Mask        Gateway            VLAN ID  Node Networks
-------------------------------------- -------------------- ------------ --------------- ------------------ ------------------ -------- -----------------------
bb05b06c-52eb-41ce-8ed8-92ec712db61d   Public-network       pubnet       BRIDGE          255.255.255.0      10.36.0.1                   [IP Address on node0: 10.36.0.241]
95ec74b7-4f6f-4cda-9b0c-5bcac731666b   ASM-network          privasm      BRIDGE          255.255.255.128                                [IP Address on node0: 192.168.17.2]
e77a33d0-2b40-4562-b802-c02cadd93b25   Private-network      priv0        INTERNAL        255.255.255.240                                [IP Address on node0: 192.168.16.24]

[root@dbi-oda-x8 ~]# odacli list-vnetworks
Name                  Type             Interface        Bridge                Uniform   Created                  Updated
--------------------  ---------------  ---------------  --------------------  --------  -----------------------  -----------------------
pubnet                Bridged          btbond1          pubnet                NO        2022-01-28 08:54:55 CET  2022-01-28 08:54:55 CET

[root@dbi-oda-x8 ~]#

To use a network interface with the VMs, either a DB System or a compute instance, the network interface needs to be created as a vnetwork.

VLAN Tagged versus Untagged

If a port on a switch is configured as tagged (a trunk port in the Cisco world), the connected equipment (here our ODA) is VLAN-aware and needs to tag its traffic as well. The port is enabled for VLAN tagging; its purpose is to pass traffic for multiple VLANs. Each Ethernet frame encloses an additional VLAN header, which the connected equipment adds to the frame.

If a port is configured as untagged (an access port in the Cisco world), the connected equipment (here our ODA) neither knows nor cares which VLAN it is in; the switch manages this on its own. The port does not tag and only accepts a single VLAN. Ethernet frames arriving at the connected equipment do not carry any VLAN header.

This is explained in the 802.1Q standard.

Most of the time trunk ports link switches together while access ports link to end devices; here, however, we would like to use several VLAN networks on the same ODA network interface.

Creating an additional untagged network

Let’s create an additional untagged physical network on the Bare Metal itself. We will create it on the btbond2 interface. Untagged means that the ODA will not add VLAN information to the Ethernet frames.
The option -t bond is the default one and would not be required here.

[root@dbi-oda-x8 ~]# odacli create-network -n btbond2 -p 10.38.0.10 -m untagged1 -s 255.255.255.0 -g 10.38.0.1 -t bond
{
  "jobId" : "9a4476dd-b955-433b-9463-377f66ab737a",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "February 04, 2022 13:48:12 PM CET",
  "resourceList" : [ ],
  "description" : "Rac Network service creation with name untagged1",
  "updatedTime" : "February 04, 2022 13:48:12 PM CET"
}

[root@dbi-oda-x8 ~]# odacli describe-job -i 9a4476dd-b955-433b-9463-377f66ab737a

Job details
----------------------------------------------------------------
                     ID:  9a4476dd-b955-433b-9463-377f66ab737a
            Description:  Rac Network service creation with name untagged1
                 Status:  Success
                Created:  February 4, 2022 1:48:12 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting network                          February 4, 2022 1:48:12 PM CET     February 4, 2022 1:48:18 PM CET     Success
Setting up Network                       February 4, 2022 1:48:12 PM CET     February 4, 2022 1:48:12 PM CET     Success
restart network interface btbond2        February 4, 2022 1:48:12 PM CET     February 4, 2022 1:48:18 PM CET     Success

[root@dbi-oda-x8 ~]# odacli list-networks

ID                                     Name                 NIC          Interface Type  Subnet Mask        Gateway            VLAN ID  Node Networks
-------------------------------------- -------------------- ------------ --------------- ------------------ ------------------ -------- -----------------------
bb05b06c-52eb-41ce-8ed8-92ec712db61d   Public-network       pubnet       BRIDGE          255.255.255.0      10.36.0.1                   [IP Address on node0: 10.36.0.241]
95ec74b7-4f6f-4cda-9b0c-5bcac731666b   ASM-network          privasm      BRIDGE          255.255.255.128                                [IP Address on node0: 192.168.17.2]
e77a33d0-2b40-4562-b802-c02cadd93b25   Private-network      priv0        INTERNAL        255.255.255.240                                [IP Address on node0: 192.168.16.24]
6171aad7-e247-4b08-a56e-83bdedd74af1   untagged1            btbond2      BOND            255.255.255.0      10.38.0.1                   [IP Address on node0: 10.38.0.10]

[root@dbi-oda-x8 ~]# odacli list-vnetworks
Name                  Type             Interface        Bridge                Uniform   Created                  Updated
--------------------  ---------------  ---------------  --------------------  --------  -----------------------  -----------------------
pubnet                Bridged          btbond1          pubnet                NO        2022-01-28 08:54:55 CET  2022-01-28 08:54:55 CET

I can see that our new network exists in the network list but, of course, not yet in the virtual network list.

I can also check and see that our btbond2 interface has been configured by odacli as an untagged bonding interface with the appropriate information.

[root@dbi-oda-x8 ~]# ip addr sh btbond2
9: btbond2:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3c:fd:fe:92:80:1a brd ff:ff:ff:ff:ff:ff
    inet 10.38.0.10/24 brd 10.38.0.255 scope global btbond2
       valid_lft forever preferred_lft forever

[root@dbi-oda-x8 ~]# ls -ltrh /etc/sysconfig/network-scripts/ifcfg*btbond2*
-rw-r--r--. 1 root root 262 Feb  4 13:48 /etc/sysconfig/network-scripts/ifcfg-btbond2

[root@dbi-oda-x8 ~]# cat /etc/sysconfig/network-scripts/ifcfg-btbond2
#This file was created by ODA. Do not edit.
NETMASK=255.255.255.0
GATEWAY=10.38.0.1
BOOTPROTO=none
PEERDNS=no
DEVICE=btbond2
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=10.38.0.10
BONDING_OPTS="mode=active-backup miimon=100 primary=p7p3"
IPV6INIT=no
USERCTL=no
TYPE=BOND
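
As a quick sanity check, the live state of the bond, including the currently active slave, can be read from /proc. This is a generic Linux bonding driver facility, not an ODA-specific one; the output should match the BONDING_OPTS above (active-backup mode, p7p3 as primary) :

[root@dbi-oda-x8 ~]# cat /proc/net/bonding/btbond2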

Of course it is not possible to create another untagged network on the btbond2 interface, since one already exists :

[root@dbi-oda-x8 ~]# odacli create-network -n btbond2 -p 10.39.0.10 -m untagged2 -s 255.255.255.0 -g 10.39.0.1 -t bond
DCS-10001:Internal error encountered: nicnamebtbond2 already exists in the networks list .. .

Creating an additional tagged network would not be possible either. Note the -t and -v options used to create tagged networks on the ODA :

[root@dbi-oda-x8 ~]# odacli create-network -n btbond2 -p 10.39.0.10 -m tagged2 -s 255.255.255.0 -g 10.39.0.1 -t VLAN -v 39
DCS-10001:Internal error encountered: Creating vlan in the interface btbond2 is not allowed. Physical network untagged1 already exists in interface btbond2.

Let’s delete our additional untagged network :

[root@dbi-oda-x8 ~]# odacli delete-network -m untagged1
{
  "jobId" : "3d7a033f-fd9c-4632-a6f1-b80f7d3fcf57",
  "status" : "Running",
  "message" : null,
  "reports" : [ {
    "taskId" : "TaskSequential_10041",
    "taskName" : "deleting network",
    "taskResult" : "",
    "startTime" : "February 04, 2022 14:08:32 PM CET",
    "endTime" : "February 04, 2022 14:08:32 PM CET",
    "status" : "Running",
    "taskDescription" : null,
    "parentTaskId" : "TaskSequential_10039",
    "jobId" : "3d7a033f-fd9c-4632-a6f1-b80f7d3fcf57",
    "tags" : [ ],
    "reportLevel" : "Info",
    "updatedTime" : "February 04, 2022 14:08:32 PM CET"
  }, {
    "taskId" : "TaskZJsonRpcExt_10045",
    "taskName" : "Setting up Network",
    "taskResult" : "Network setup success",
    "startTime" : "February 04, 2022 14:08:32 PM CET",
    "endTime" : "February 04, 2022 14:08:32 PM CET",
    "status" : "Success",
    "taskDescription" : null,
    "parentTaskId" : "TaskSequential_10041",
    "jobId" : "3d7a033f-fd9c-4632-a6f1-b80f7d3fcf57",
    "tags" : [ ],
    "reportLevel" : "Info",
    "updatedTime" : "February 04, 2022 14:08:32 PM CET"
  }, {
    "taskId" : "TaskZJsonRpcExt_10048",
    "taskName" : "restart network interface btbond2",
    "taskResult" : "",
    "startTime" : "February 04, 2022 14:08:32 PM CET",
    "endTime" : "February 04, 2022 14:08:32 PM CET",
    "status" : "Running",
    "taskDescription" : null,
    "parentTaskId" : "TaskSequential_10047",
    "jobId" : "3d7a033f-fd9c-4632-a6f1-b80f7d3fcf57",
    "tags" : [ ],
    "reportLevel" : "Info",
    "updatedTime" : "February 04, 2022 14:08:32 PM CET"
  } ],
  "createTimestamp" : "February 04, 2022 14:08:32 PM CET",
  "resourceList" : [ {
    "resourceId" : "6171aad7-e247-4b08-a56e-83bdedd74af1",
    "resourceType" : null,
    "resourceNewType" : "Network",
    "jobId" : "3d7a033f-fd9c-4632-a6f1-b80f7d3fcf57",
    "updatedTime" : null
  } ],
  "description" : "Network service deleteRacNetwork with id 6171aad7-e247-4b08-a56e-83bdedd74af1",
  "updatedTime" : "February 04, 2022 14:08:32 PM CET"
}

[root@dbi-oda-x8 ~]# odacli describe-job -i 3d7a033f-fd9c-4632-a6f1-b80f7d3fcf57

Job details
----------------------------------------------------------------
                     ID:  3d7a033f-fd9c-4632-a6f1-b80f7d3fcf57
            Description:  Network service deleteRacNetwork with id 6171aad7-e247-4b08-a56e-83bdedd74af1
                 Status:  Success
                Created:  February 4, 2022 2:08:32 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
deleting network                         February 4, 2022 2:08:32 PM CET     February 4, 2022 2:08:34 PM CET     Success
Setting up Network                       February 4, 2022 2:08:32 PM CET     February 4, 2022 2:08:32 PM CET     Success
restart network interface btbond2        February 4, 2022 2:08:32 PM CET     February 4, 2022 2:08:34 PM CET     Success

The network has been deleted :

[root@dbi-oda-x8 ~]# odacli list-networks

ID                                     Name                 NIC          Interface Type  Subnet Mask        Gateway            VLAN ID  Node Networks
-------------------------------------- -------------------- ------------ --------------- ------------------ ------------------ -------- -----------------------
bb05b06c-52eb-41ce-8ed8-92ec712db61d   Public-network       pubnet       BRIDGE          255.255.255.0      10.36.0.1                   [IP Address on node0: 10.36.0.241]
95ec74b7-4f6f-4cda-9b0c-5bcac731666b   ASM-network          privasm      BRIDGE          255.255.255.128                                [IP Address on node0: 192.168.17.2]
e77a33d0-2b40-4562-b802-c02cadd93b25   Private-network      priv0        INTERNAL        255.255.255.240                                [IP Address on node0: 192.168.16.24]

Creating additional tagged networks

Let’s create the first tagged network on the btbond2 interface :

[root@dbi-oda-x8 ~]# odacli create-network -n btbond2 -p 10.38.0.10 -m tagged38 -s 255.255.255.0 -g 10.38.0.1 -t VLAN -v 38
{
  "jobId" : "b3dd6d7b-7ee7-4418-afc6-fd71af0a01bc",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "February 04, 2022 14:40:28 PM CET",
  "resourceList" : [ ],
  "description" : "Rac Network service creation with name tagged38",
  "updatedTime" : "February 04, 2022 14:40:28 PM CET"
}

[root@dbi-oda-x8 ~]# odacli describe-job -i b3dd6d7b-7ee7-4418-afc6-fd71af0a01bc

Job details
----------------------------------------------------------------
                     ID:  b3dd6d7b-7ee7-4418-afc6-fd71af0a01bc
            Description:  Rac Network service creation with name tagged38
                 Status:  Success
                Created:  February 4, 2022 2:40:28 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting network                          February 4, 2022 2:40:28 PM CET     February 4, 2022 2:40:33 PM CET     Success
Setting up Vlan                          February 4, 2022 2:40:28 PM CET     February 4, 2022 2:40:33 PM CET     Success

The tagged network has been created :

[root@dbi-oda-x8 ~]# odacli list-networks

ID                                     Name                 NIC          Interface Type  Subnet Mask        Gateway            VLAN ID  Node Networks
-------------------------------------- -------------------- ------------ --------------- ------------------ ------------------ -------- -----------------------
bb05b06c-52eb-41ce-8ed8-92ec712db61d   Public-network       pubnet       BRIDGE          255.255.255.0      10.36.0.1                   [IP Address on node0: 10.36.0.241]
95ec74b7-4f6f-4cda-9b0c-5bcac731666b   ASM-network          privasm      BRIDGE          255.255.255.128                                [IP Address on node0: 192.168.17.2]
e77a33d0-2b40-4562-b802-c02cadd93b25   Private-network      priv0        INTERNAL        255.255.255.240                                [IP Address on node0: 192.168.16.24]
fbe05bfa-636e-4f9c-a348-c59ba23e2296   tagged38             btbond2.38   VLAN            255.255.255.0      10.38.0.1          38       [IP Address on node0: 10.38.0.10]

And we can see that the corresponding tagged network interface has been created on the Linux operating system side (note the <btbond>.<vlan_id> naming) :

[root@dbi-oda-x8 ~]# ls -ltrh /etc/sysconfig/network-scripts/ifcfg*btbond2*
-rw-r--r--. 1 root root 239 Feb  4 14:38 /etc/sysconfig/network-scripts/ifcfg-btbond2
-rw-r--r--  1 root root 520 Feb  4 14:40 /etc/sysconfig/network-scripts/ifcfg-btbond2.38

[root@dbi-oda-x8 ~]# cat /etc/sysconfig/network-scripts/ifcfg-btbond2.38
#ODA_VLAN_CONFIG ===
#ODA_VLAN_CONFIG Name=tagged38
#ODA_VLAN_CONFIG VlanId=38
#ODA_VLAN_CONFIG VlanInterface=btbond2
#ODA_VLAN_CONFIG Type=VlanType
#ODA_VLAN_CONFIG VlanSetupType=Other
#ODA_VLAN_CONFIG VlanIpAddr=10.38.0.10
#ODA_VLAN_CONFIG VlanNetmask=255.255.255.0
#ODA_VLAN_CONFIG VlanGateway=10.38.0.1
#ODA_VLAN_CONFIG NodeNum=0
#=== DO NOT EDIT ANYTHING ABOVE THIS LINE ===
DEVICE=btbond2.38
BOOTPROTO=none
ONBOOT=yes
VLAN=yes
NM_CONTROLLED=no
DEFROUTE=no
IPADDR=10.38.0.10
NETMASK=255.255.255.0
GATEWAY=10.38.0.1
[root@dbi-oda-x8 ~]#
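
To confirm the 802.1Q tagging at the kernel level, the -d (details) flag of ip link shows the VLAN protocol and ID of the sub-interface. Again a generic iproute2 check, nothing ODA-specific :

[root@dbi-oda-x8 ~]# ip -d link show btbond2.38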

And I can reach the ODA on the new network :

C:\Users>ping 10.38.0.10

Pinging 10.38.0.10 with 32 bytes of data:
Reply from 10.38.0.10: bytes=32 time=1ms TTL=63
Reply from 10.38.0.10: bytes=32 time<1ms TTL=63
Reply from 10.38.0.10: bytes=32 time<1ms TTL=63
Reply from 10.38.0.10: bytes=32 time<1ms TTL=63

Ping statistics for 10.38.0.10:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 1ms, Average = 0ms

The ODA will not permit creating any additional untagged network on the btbond2 interface, since a tagged network already exists :

[root@dbi-oda-x8 ~]# odacli create-network -n btbond2 -p 10.39.0.10 -m untagged2 -s 255.255.255.0 -g 10.39.0.1 -t bond
DCS-10001:Internal error encountered: Creating non-VLAN typed network on the interface btbond2 is not allowed. VLAN tagged38 already exists in interface btbond2.

Let’s create the second tagged network on the same btbond2 interface :

[root@dbi-oda-x8 ~]# odacli create-network -n btbond2 -p 10.39.0.10 -m tagged39 -s 255.255.255.0 -g 10.39.0.1 -t VLAN -v 39
{
  "jobId" : "e0452663-544c-47c4-8b9b-5fbe6e0a5cd9",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "February 04, 2022 14:45:18 PM CET",
  "resourceList" : [ ],
  "description" : "Rac Network service creation with name tagged39",
  "updatedTime" : "February 04, 2022 14:45:18 PM CET"
}

[root@dbi-oda-x8 ~]# odacli describe-job -i e0452663-544c-47c4-8b9b-5fbe6e0a5cd9

Job details
----------------------------------------------------------------
                     ID:  e0452663-544c-47c4-8b9b-5fbe6e0a5cd9
            Description:  Rac Network service creation with name tagged39
                 Status:  Success
                Created:  February 4, 2022 2:45:18 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting network                          February 4, 2022 2:45:18 PM CET     February 4, 2022 2:45:23 PM CET     Success
Setting up Vlan                          February 4, 2022 2:45:18 PM CET     February 4, 2022 2:45:23 PM CET     Success

The new tagged network has been created and we now have 2 tagged networks running on the same btbond2 interface :

[root@dbi-oda-x8 ~]# odacli list-networks

ID                                     Name                 NIC          Interface Type  Subnet Mask        Gateway            VLAN ID  Node Networks
-------------------------------------- -------------------- ------------ --------------- ------------------ ------------------ -------- -----------------------
bb05b06c-52eb-41ce-8ed8-92ec712db61d   Public-network       pubnet       BRIDGE          255.255.255.0      10.36.0.1                   [IP Address on node0: 10.36.0.241]
95ec74b7-4f6f-4cda-9b0c-5bcac731666b   ASM-network          privasm      BRIDGE          255.255.255.128                                [IP Address on node0: 192.168.17.2]
e77a33d0-2b40-4562-b802-c02cadd93b25   Private-network      priv0        INTERNAL        255.255.255.240                                [IP Address on node0: 192.168.16.24]
fbe05bfa-636e-4f9c-a348-c59ba23e2296   tagged38             btbond2.38   VLAN            255.255.255.0      10.38.0.1          38       [IP Address on node0: 10.38.0.10]
dc11ecbe-c4be-4e18-9c37-9f6360b37ee1   tagged39             btbond2.39   VLAN            255.255.255.0      10.39.0.1          39       [IP Address on node0: 10.39.0.10]

The corresponding new tagged interface has been configured by odacli on the Linux operating system :

[root@dbi-oda-x8 ~]# ls -ltrh /etc/sysconfig/network-scripts/ifcfg*btbond2*
-rw-r--r--. 1 root root 239 Feb  4 14:38 /etc/sysconfig/network-scripts/ifcfg-btbond2
-rw-r--r--  1 root root 520 Feb  4 14:40 /etc/sysconfig/network-scripts/ifcfg-btbond2.38
-rw-r--r--  1 root root 520 Feb  4 14:45 /etc/sysconfig/network-scripts/ifcfg-btbond2.39

[root@dbi-oda-x8 ~]# cat /etc/sysconfig/network-scripts/ifcfg-btbond2.39
#ODA_VLAN_CONFIG ===
#ODA_VLAN_CONFIG Name=tagged39
#ODA_VLAN_CONFIG VlanId=39
#ODA_VLAN_CONFIG VlanInterface=btbond2
#ODA_VLAN_CONFIG Type=VlanType
#ODA_VLAN_CONFIG VlanSetupType=Other
#ODA_VLAN_CONFIG VlanIpAddr=10.39.0.10
#ODA_VLAN_CONFIG VlanNetmask=255.255.255.0
#ODA_VLAN_CONFIG VlanGateway=10.39.0.1
#ODA_VLAN_CONFIG NodeNum=0
#=== DO NOT EDIT ANYTHING ABOVE THIS LINE ===
DEVICE=btbond2.39
BOOTPROTO=none
ONBOOT=yes
VLAN=yes
NM_CONTROLLED=no
DEFROUTE=no
IPADDR=10.39.0.10
NETMASK=255.255.255.0
GATEWAY=10.39.0.1

And I can ping my new network as well :

C:\Users>ping 10.39.0.10

Pinging 10.39.0.10 with 32 bytes of data:
Reply from 10.39.0.10: bytes=32 time=2ms TTL=63
Reply from 10.39.0.10: bytes=32 time<1ms TTL=63
Reply from 10.39.0.10: bytes=32 time<1ms TTL=63
Reply from 10.39.0.10: bytes=32 time<1ms TTL=63

Ping statistics for 10.39.0.10:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 2ms, Average = 0ms

I can now also check my global btbond2 configuration on the Linux side :

[root@dbi-oda-x8 ~]# ip addr sh | grep -iE "(btbond2|btbond2.38|btbond2.39)"
4: p7p3:  mtu 1500 qdisc mq master btbond2 state UP group default qlen 1000
5: p7p4:  mtu 1500 qdisc mq master btbond2 state UP group default qlen 1000
9: btbond2:  mtu 1500 qdisc noqueue state UP group default qlen 1000
57: btbond2.38@btbond2:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.38.0.10/24 brd 10.38.0.255 scope global btbond2.38
58: btbond2.39@btbond2:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.39.0.10/24 brd 10.39.0.255 scope global btbond2.39
[root@dbi-oda-x8 ~]#

Creating virtual networks

Since I need these networks for the upcoming DB Systems, I have deleted the physical networks created previously, and I’m going to create the same 2 tagged networks as virtual networks on the btbond2 interface :

[root@dbi-oda-x8 ~]# odacli create-vnetwork -n tagged38 -if btbond2 -t bridged-vlan -ip 10.38.0.10 -nm 255.255.255.0 -vlan 38 -gw 10.38.0.1

Job details
----------------------------------------------------------------
                     ID:  eb57ff9b-c9b0-4e1d-bc51-162a42ea4fb1
            Description:  vNetwork tagged38 creation
                 Status:  Created
                Created:  February 4, 2022 3:19:02 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@dbi-oda-x8 ~]# odacli describe-job -i eb57ff9b-c9b0-4e1d-bc51-162a42ea4fb1

Job details
----------------------------------------------------------------
                     ID:  eb57ff9b-c9b0-4e1d-bc51-162a42ea4fb1
            Description:  vNetwork tagged38 creation
                 Status:  Success
                Created:  February 4, 2022 3:19:02 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate Virtual Network doesn't exist   February 4, 2022 3:19:02 PM CET     February 4, 2022 3:19:02 PM CET     Success
Validate interface to use exists         February 4, 2022 3:19:02 PM CET     February 4, 2022 3:19:02 PM CET     Success
Validate interfaces to create not exist  February 4, 2022 3:19:02 PM CET     February 4, 2022 3:19:02 PM CET     Success
Create bridge                            February 4, 2022 3:19:02 PM CET     February 4, 2022 3:19:02 PM CET     Success
Create VLAN                              February 4, 2022 3:19:02 PM CET     February 4, 2022 3:19:02 PM CET     Success
Bring up VLAN                            February 4, 2022 3:19:02 PM CET     February 4, 2022 3:19:07 PM CET     Success
Create metadata                          February 4, 2022 3:19:07 PM CET     February 4, 2022 3:19:07 PM CET     Success
Persist metadata                         February 4, 2022 3:19:07 PM CET     February 4, 2022 3:19:07 PM CET     Success

[root@dbi-oda-x8 ~]# odacli create-vnetwork -n tagged39 -if btbond2 -t bridged-vlan -ip 10.39.0.10 -nm 255.255.255.0 -vlan 39 -gw 10.39.0.1

Job details
----------------------------------------------------------------
                     ID:  15cf889f-9c6e-4e0c-a676-b2203e40cfd2
            Description:  vNetwork tagged39 creation
                 Status:  Created
                Created:  February 4, 2022 3:19:40 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@dbi-oda-x8 ~]# odacli describe-job -i 15cf889f-9c6e-4e0c-a676-b2203e40cfd2

Job details
----------------------------------------------------------------
                     ID:  15cf889f-9c6e-4e0c-a676-b2203e40cfd2
            Description:  vNetwork tagged39 creation
                 Status:  Success
                Created:  February 4, 2022 3:19:40 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate Virtual Network doesn't exist   February 4, 2022 3:19:40 PM CET     February 4, 2022 3:19:40 PM CET     Success
Validate interface to use exists         February 4, 2022 3:19:40 PM CET     February 4, 2022 3:19:40 PM CET     Success
Validate interfaces to create not exist  February 4, 2022 3:19:40 PM CET     February 4, 2022 3:19:40 PM CET     Success
Create bridge                            February 4, 2022 3:19:40 PM CET     February 4, 2022 3:19:40 PM CET     Success
Create VLAN                              February 4, 2022 3:19:40 PM CET     February 4, 2022 3:19:40 PM CET     Success
Bring up VLAN                            February 4, 2022 3:19:40 PM CET     February 4, 2022 3:19:45 PM CET     Success
Create metadata                          February 4, 2022 3:19:45 PM CET     February 4, 2022 3:19:45 PM CET     Success
Persist metadata                         February 4, 2022 3:19:45 PM CET     February 4, 2022 3:19:45 PM CET     Success

My 2 new tagged networks now exist as virtual networks, not as physical networks :

[root@dbi-oda-x8 ~]# odacli list-networks

ID                                     Name                 NIC          Interface Type  Subnet Mask        Gateway            VLAN ID  Node Networks
-------------------------------------- -------------------- ------------ --------------- ------------------ ------------------ -------- -----------------------
bb05b06c-52eb-41ce-8ed8-92ec712db61d   Public-network       pubnet       BRIDGE          255.255.255.0      10.36.0.1                   [IP Address on node0: 10.36.0.241]
95ec74b7-4f6f-4cda-9b0c-5bcac731666b   ASM-network          privasm      BRIDGE          255.255.255.128                                [IP Address on node0: 192.168.17.2]
e77a33d0-2b40-4562-b802-c02cadd93b25   Private-network      priv0        INTERNAL        255.255.255.240                                [IP Address on node0: 192.168.16.24]

[root@dbi-oda-x8 ~]# odacli list-vnetworks
Name                  Type             Interface        Bridge                Uniform   Created                  Updated
--------------------  ---------------  ---------------  --------------------  --------  -----------------------  -----------------------
tagged39              BridgedVlan      btbond2          brtagged39            NO        2022-02-04 15:19:45 CET  2022-02-04 15:19:45 CET
tagged38              BridgedVlan      btbond2          brtagged38            NO        2022-02-04 15:19:07 CET  2022-02-04 15:19:07 CET
pubnet                Bridged          btbond1          pubnet                NO        2022-01-28 08:54:55 CET  2022-01-28 08:54:55 CET

The tagged network interfaces have been created by the dcs-agent on the Linux operating system side :

[root@dbi-oda-x8 ~]# ls -ltrh /etc/sysconfig/network-scripts/ifcfg*btbond2*
-rw-r--r--. 1 root root 239 Feb  4 14:38 /etc/sysconfig/network-scripts/ifcfg-btbond2
-rw-r--r--  1 root root 145 Feb  4 15:19 /etc/sysconfig/network-scripts/ifcfg-btbond2.38
-rw-r--r--  1 root root 145 Feb  4 15:19 /etc/sysconfig/network-scripts/ifcfg-btbond2.39

[root@dbi-oda-x8 ~]# cat /etc/sysconfig/network-scripts/ifcfg-btbond2.38
#This file was created by ODA. Do not edit.
DEVICE=btbond2.38
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
VLAN=yes
ONPARENT=yes
BRIDGE=brtagged38

[root@dbi-oda-x8 ~]# cat /etc/sysconfig/network-scripts/ifcfg-btbond2.39
#This file was created by ODA. Do not edit.
DEVICE=btbond2.39
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
VLAN=yes
ONPARENT=yes
BRIDGE=brtagged39
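
To double-check which interfaces are enslaved to a given bridge, iproute2 can filter by master. This is a generic Linux check; the brctl output further below gives the same picture :

[root@dbi-oda-x8 ~]# ip link show master brtagged38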

On the Linux side, I can see that the respective IP addresses have not been assigned to the physical interfaces :

[root@dbi-oda-x8 ~]# ip addr sh | grep -iE "(btbond2|btbond2.38|btbond2.39)"
4: p7p3:  mtu 1500 qdisc mq master btbond2 state UP group default qlen 1000
5: p7p4:  mtu 1500 qdisc mq master btbond2 state UP group default qlen 1000
9: btbond2:  mtu 1500 qdisc noqueue state UP group default qlen 1000
60: btbond2.38@btbond2:  mtu 1500 qdisc noqueue master brtagged38 state UP group default qlen 1000
62: btbond2.39@btbond2:  mtu 1500 qdisc noqueue master brtagged39 state UP group default qlen 1000

But to the new bridge interfaces linked to the tagged btbond2 sub-interfaces :

[root@dbi-oda-x8 ~]# ip addr sh | grep -iE "(tagged38|tagged39)"
59: brtagged38:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.38.0.10/24 brd 10.38.0.255 scope global brtagged38
60: btbond2.38@btbond2:  mtu 1500 qdisc noqueue master brtagged38 state UP group default qlen 1000
61: brtagged39:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.39.0.10/24 brd 10.39.0.255 scope global brtagged39
62: btbond2.39@btbond2:  mtu 1500 qdisc noqueue master brtagged39 state UP group default qlen 1000

And I can ping them :

C:\Users>ping 10.38.0.10

Pinging 10.38.0.10 with 32 bytes of data:
Reply from 10.38.0.10: bytes=32 time=2ms TTL=63
Reply from 10.38.0.10: bytes=32 time<1ms TTL=63
Reply from 10.38.0.10: bytes=32 time<1ms TTL=63

C:\Users>ping 10.39.0.10

Pinging 10.39.0.10 with 32 bytes of data:
Reply from 10.39.0.10: bytes=32 time=2ms TTL=63
Reply from 10.39.0.10: bytes=32 time<1ms TTL=63
Reply from 10.39.0.10: bytes=32 time=1ms TTL=63
Reply from 10.39.0.10: bytes=32 time<1ms TTL=63

Ping statistics for 10.39.0.10:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 2ms, Average = 0ms

I can see the network bridge information from the operating system as well :

[root@dbi-oda-x8 ~]# brctl show
bridge name     bridge id           STP enabled     interfaces
brtagged38      8000.5254003bdc76   no              btbond2.38
                                                    vnet2
brtagged39      8000.52540060eb57   no              btbond2.39
                                                    vnet4
privasm         8000.5a8241cdb508   no              priv0.100
                                                    vnet1
                                                    vnet3
pubnet          8000.3cfdfe928018   no              btbond1
                                                    vnet0
virbr0          8000.525400dc8c09   yes             virbr0-nic
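
Note that brctl comes from the legacy bridge-utils package; on systems where it is not available, the iproute2 bridge utility offers an equivalent view :

[root@dbi-oda-x8 ~]# bridge link show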

Creating 2 DB Systems using the tagged38 and tagged39 virtual networks respectively

My repository has already been updated with the KVM DB System Image :

[root@dbi-oda-x8 ~]# odacli describe-dbsystem-image
DB System Image details
--------------------------------------------------------------------------------
Component Name        Supported Versions    Available Versions
--------------------  --------------------  --------------------

DBVM                  19.13.0.0.0           19.13.0.0.0

GI                    19.13.0.0.211019      19.13.0.0.211019
                      19.12.0.0.210720      not-available
                      19.11.0.0.210420      not-available
                      21.4.0.0.211019       not-available
                      21.3.0.0.210720       not-available

DB                    19.13.0.0.211019      19.13.0.0.211019
                      19.12.0.0.210720      not-available
                      19.11.0.0.210420      not-available
                      21.4.0.0.211019       not-available
                      21.3.0.0.210720       not-available
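
For reference, the repository is updated with odacli update-repository, pointing to the DB System KVM clone file downloaded from My Oracle Support. The file name below is a placeholder; the actual name depends on the downloaded version :

[root@dbi-oda-x8 ~]# odacli update-repository -f /tmp/<DB-System-KVM-clone-file>.zip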

I have created the first DB System JSON file, assigning a new IP address from the VLAN38 network and using the tagged38 virtual network. The IP connection will use bridging through the VLAN38 virtual network we created on the Bare Metal itself :

[root@dbi-oda-x8 ~]# cat /opt/dbi/create_dbsystem_srvdb38.json
...
...
...
"network": {
    "domainName": "dbi-lab.ch",
    "ntpServers": ["216.239.35.0"],
    "dnsServers": [
        "8.8.8.8","8.8.4.4"
    ],
    "nodes": [
        {
            "name": "srvdb38",
            "ipAddress": "10.38.0.20",
            "netmask": "255.255.255.0",
            "gateway": "10.38.0.1",
            "number": 0
        }
    ],
"publicVNetwork": "tagged38"
},
"grid": {
    "language": "en"
}
}
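
Before launching the creation, a quick syntax check of the JSON file can save a failed job. A minimal sketch, assuming Python is available on the Bare Metal (it ships with Oracle Linux) :

[root@dbi-oda-x8 ~]# python -m json.tool /opt/dbi/create_dbsystem_srvdb38.json > /dev/null && echo "JSON OK"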

Creation of the first DB System on the VLAN 38 network :

[root@dbi-oda-x8 ~]# odacli create-dbsystem -p /opt/dbi/create_dbsystem_srvdb38.json
Enter password for system "srvdb38":
Retype password for system "srvdb38":
Enter administrator password for DB "DB38":
Retype administrator password for DB "DB38":

Job details
----------------------------------------------------------------
                     ID:  ed88ef81-5cb3-4214-ac5c-bc255b67577f
            Description:  DB System srvdb38 creation
                 Status:  Created
                Created:  February 4, 2022 4:12:33 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@dbi-oda-x8 ~]# odacli describe-job -i ed88ef81-5cb3-4214-ac5c-bc255b67577f

Job details
----------------------------------------------------------------
                     ID:  ed88ef81-5cb3-4214-ac5c-bc255b67577f
            Description:  DB System srvdb38 creation
                 Status:  Success
                Created:  February 4, 2022 4:12:33 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Create DB System metadata                February 4, 2022 4:12:33 PM CET     February 4, 2022 4:12:33 PM CET     Success
Persist new DB System                    February 4, 2022 4:12:33 PM CET     February 4, 2022 4:12:33 PM CET     Success
Validate DB System prerequisites         February 4, 2022 4:12:33 PM CET     February 4, 2022 4:12:37 PM CET     Success
Setup DB System environment              February 4, 2022 4:12:37 PM CET     February 4, 2022 4:12:39 PM CET     Success
Create DB System ASM volume              February 4, 2022 4:12:39 PM CET     February 4, 2022 4:12:45 PM CET     Success
Create DB System ACFS filesystem         February 4, 2022 4:12:45 PM CET     February 4, 2022 4:12:54 PM CET     Success
Create DB System VM ACFS snapshots       February 4, 2022 4:12:54 PM CET     February 4, 2022 4:13:27 PM CET     Success
Create temporary SSH key pair            February 4, 2022 4:13:27 PM CET     February 4, 2022 4:13:27 PM CET     Success
Create DB System cloud-init config       February 4, 2022 4:13:27 PM CET     February 4, 2022 4:13:27 PM CET     Success
Provision DB System VM(s)                February 4, 2022 4:13:27 PM CET     February 4, 2022 4:13:28 PM CET     Success
Attach disks to DB System                February 4, 2022 4:13:28 PM CET     February 4, 2022 4:13:29 PM CET     Success
Add DB System to Clusterware             February 4, 2022 4:13:29 PM CET     February 4, 2022 4:13:29 PM CET     Success
Start DB System                          February 4, 2022 4:13:29 PM CET     February 4, 2022 4:13:30 PM CET     Success
Wait DB System VM first boot             February 4, 2022 4:13:30 PM CET     February 4, 2022 4:17:06 PM CET     Success
Setup Mutual TLS (mTLS)                  February 4, 2022 4:17:06 PM CET     February 4, 2022 4:21:36 PM CET     Success
Export clones repository                 February 4, 2022 4:21:36 PM CET     February 4, 2022 4:21:37 PM CET     Success
Setup ASM client cluster config          February 4, 2022 4:21:37 PM CET     February 4, 2022 4:22:00 PM CET     Success
Install DB System                        February 4, 2022 4:22:00 PM CET     February 4, 2022 5:07:11 PM CET     Success
Cleanup temporary SSH key pair           February 4, 2022 5:07:11 PM CET     February 4, 2022 5:07:52 PM CET     Success
Set DB System as configured              February 4, 2022 5:07:52 PM CET     February 4, 2022 5:07:52 PM CET     Success

I have then created the second DB System JSON file, assigning a new IP address from the VLAN39 network and using the tagged39 virtual network. This time the IP connection will use bridging through the VLAN39 virtual network we created on the Bare Metal itself :

[root@dbi-oda-x8 ~]# cat /opt/dbi/create_dbsystem_srvdb39.json
...
...
...
},
"network": {
    "domainName": "dbi-lab.ch",
    "ntpServers": ["216.239.35.0"],
    "dnsServers": [
        "8.8.8.8","8.8.4.4"
    ],
    "nodes": [
        {
            "name": "srvdb39",
            "ipAddress": "10.39.0.20",
            "netmask": "255.255.255.0",
            "gateway": "10.39.0.1",
            "number": 0
        }
    ],
"publicVNetwork": "tagged39"
},
"grid": {
    "language": "en"
}
}

Creation of the second DB System on the VLAN 39 network :

[root@dbi-oda-x8 ~]# odacli create-dbsystem -p /opt/dbi/create_dbsystem_srvdb39.json
Enter password for system "srvdb39":
Retype password for system "srvdb39":
Enter administrator password for DB "DB39":
Retype administrator password for DB "DB39":

Job details
----------------------------------------------------------------
                     ID:  38303453-8524-4c65-b1d2-3717dfc79a1f
            Description:  DB System srvdb39 creation
                 Status:  Created
                Created:  February 4, 2022 5:14:01 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@dbi-oda-x8 ~]# odacli describe-job -i 38303453-8524-4c65-b1d2-3717dfc79a1f

Job details
----------------------------------------------------------------
                     ID:  38303453-8524-4c65-b1d2-3717dfc79a1f
            Description:  DB System srvdb39 creation
                 Status:  Success
                Created:  February 4, 2022 5:14:01 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Create DB System metadata                February 4, 2022 5:14:01 PM CET     February 4, 2022 5:14:01 PM CET     Success
Persist new DB System                    February 4, 2022 5:14:01 PM CET     February 4, 2022 5:14:01 PM CET     Success
Validate DB System prerequisites         February 4, 2022 5:14:01 PM CET     February 4, 2022 5:14:05 PM CET     Success
Setup DB System environment              February 4, 2022 5:14:05 PM CET     February 4, 2022 5:14:07 PM CET     Success
Create DB System ASM volume              February 4, 2022 5:14:07 PM CET     February 4, 2022 5:14:13 PM CET     Success
Create DB System ACFS filesystem         February 4, 2022 5:14:13 PM CET     February 4, 2022 5:14:22 PM CET     Success
Create DB System VM ACFS snapshots       February 4, 2022 5:14:22 PM CET     February 4, 2022 5:14:52 PM CET     Success
Create temporary SSH key pair            February 4, 2022 5:14:52 PM CET     February 4, 2022 5:14:52 PM CET     Success
Create DB System cloud-init config       February 4, 2022 5:14:52 PM CET     February 4, 2022 5:14:53 PM CET     Success
Provision DB System VM(s)                February 4, 2022 5:14:53 PM CET     February 4, 2022 5:14:54 PM CET     Success
Attach disks to DB System                February 4, 2022 5:14:54 PM CET     February 4, 2022 5:14:54 PM CET     Success
Add DB System to Clusterware             February 4, 2022 5:14:54 PM CET     February 4, 2022 5:14:54 PM CET     Success
Start DB System                          February 4, 2022 5:14:54 PM CET     February 4, 2022 5:14:56 PM CET     Success
Wait DB System VM first boot             February 4, 2022 5:14:56 PM CET     February 4, 2022 5:18:32 PM CET     Success
Setup Mutual TLS (mTLS)                  February 4, 2022 5:18:32 PM CET     February 4, 2022 5:23:07 PM CET     Success
Export clones repository                 February 4, 2022 5:23:07 PM CET     February 4, 2022 5:23:07 PM CET     Success
Setup ASM client cluster config          February 4, 2022 5:23:07 PM CET     February 4, 2022 5:23:30 PM CET     Success
Install DB System                        February 4, 2022 5:23:30 PM CET     February 4, 2022 6:09:43 PM CET     Success
Cleanup temporary SSH key pair           February 4, 2022 6:09:43 PM CET     February 4, 2022 6:10:23 PM CET     Success
Set DB System as configured              February 4, 2022 6:10:23 PM CET     February 4, 2022 6:10:23 PM CET     Success

Virtual network configuration checks

So I have both virtual networks configured with their respective IP addresses (10.38.0.10 for the VLAN38 network and 10.39.0.10 for the VLAN39 network) :

[root@dbi-oda-x8 ~]# odacli list-vnetworks
Name                  Type             Interface        Bridge                Uniform   Created                  Updated
--------------------  ---------------  ---------------  --------------------  --------  -----------------------  -----------------------
tagged39              BridgedVlan      btbond2          brtagged39            NO        2022-02-04 15:19:45 CET  2022-02-04 15:19:45 CET
tagged38              BridgedVlan      btbond2          brtagged38            NO        2022-02-04 15:19:07 CET  2022-02-04 15:19:07 CET
pubnet                Bridged          btbond1          pubnet                NO        2022-01-28 08:54:55 CET  2022-01-28 08:54:55 CET

[root@dbi-oda-x8 ~]# odacli describe-vnetwork -n tagged38
VNetwork details
--------------------------------------------------------------------------------
                       ID:  0d8099ba-ff42-4d73-82b4-f7497fb68ca5
                     Name:  tagged38
                  Created:  2022-02-04 15:19:07 CET
                  Updated:  2022-02-04 15:19:07 CET
                     Type:  BridgedVlan
           Interface name:  btbond2
              Bridge name:  brtagged38
                  VLAN ID:  38
                       IP:  10.38.0.10
                  Netmask:  255.255.255.0
                  Gateway:  10.38.0.1
 Attached in VMs (config):  xa287c764d
   Attached in VMs (live):  xa287c764d

[root@dbi-oda-x8 ~]# odacli describe-vnetwork -n tagged39
VNetwork details
--------------------------------------------------------------------------------
                       ID:  00df04d4-9a01-4547-9a3c-5a27dab10494
                     Name:  tagged39
                  Created:  2022-02-04 15:19:45 CET
                  Updated:  2022-02-04 15:19:45 CET
                     Type:  BridgedVlan
           Interface name:  btbond2
              Bridge name:  brtagged39
                  VLAN ID:  39
                       IP:  10.39.0.10
                  Netmask:  255.255.255.0
                  Gateway:  10.39.0.1
 Attached in VMs (config):  x793d6c5ce
   Attached in VMs (live):  x793d6c5ce

DB System information checks

I have my 2 DB Systems :
srvdb38, configured on the VLAN38 network with IP address 10.38.0.20.
srvdb39, configured on the VLAN39 network with IP address 10.39.0.20.

[root@dbi-oda-x8 ~]# odacli list-dbsystems
Name                  Shape       Cores  Memory      GI version          DB version          Status           Created                  Updated
--------------------  ----------  -----  ----------  ------------------  ------------------  ---------------  -----------------------  -----------------------
srvdb39               odb2        4      16.00 GB    19.13.0.0.211019    19.13.0.0.211019    CONFIGURED       2022-02-04 17:14:01 CET  2022-02-04 18:10:23 CET
srvdb38               odb2        4      16.00 GB    19.13.0.0.211019    19.13.0.0.211019    CONFIGURED       2022-02-04 16:12:33 CET  2022-02-04 17:07:52 CET

[root@dbi-oda-x8 ~]# odacli describe-dbsystem -n srvdb38
DB System details
--------------------------------------------------------------------------------
                       ID:  859466fe-a4a5-40ae-aa00-8e02cf74dedc
                     Name:  srvdb38
                    Image:  19.13.0.0.0
                    Shape:  odb2
             Cluster name:  dbsa287c764d
             Grid version:  19.13.0.0.211019
                   Memory:  16.00 GB
             NUMA enabled:  YES
                   Status:  CONFIGURED
                  Created:  2022-02-04 16:12:33 CET
                  Updated:  2022-02-04 17:07:52 CET

 CPU Pool
--------------------------
                     Name:  cpupool4dbsystems
          Number of cores:  4

                     Host:  dbi-oda-x8
        Effective CPU set:  5-6,21-22,37-38,53-54
              Online CPUs:  5, 6, 21, 22, 37, 38, 53, 54
             Offline CPUs:  NONE

 VM Storage
--------------------------
               Disk group:  DATA
              Volume name:  SA287C764D
            Volume device:  /dev/asm/sa287c764d-390
                     Size:  200.00 GB
              Mount Point:  /u05/app/sharedrepo/srvdb38

 VMs
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  xa287c764d
             VM Host Name:  srvdb38.dbi-lab.ch
            VM image path:  /u05/app/sharedrepo/srvdb38/.ACFS/snaps/vm_xa287c764d/xa287c764d
             Target State:  ONLINE
            Current State:  ONLINE

 VNetworks
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  xa287c764d
                   Public:  10.38.0.20      / 255.255.255.0   / ens3 / BRIDGE(brtagged38)
                      ASM:  192.168.17.4    / 255.255.255.128 / ens4 / BRIDGE(privasm) VLAN(priv0.100)

 Extra VNetworks
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  xa287c764d
                 tagged38:  10.38.0.20      / 255.255.255.0   / PUBLIC

 Databases
--------------------------
                     Name:  DB38
              Resource ID:  5ff22777-f24b-446a-8c58-3fa6dd2ad383
              Unique name:  DB38_SITE1
              Database ID:  715007309
              Domain name:  dbi-lab.ch
               DB Home ID:  8b66a7d6-9e5e-43c0-a69c-00b80d2dba81
                    Shape:  odb2
                  Version:  19.13.0.0.211019
                  Edition:  EE
                     Type:  SI
                     Role:  PRIMARY
                    Class:  OLTP
                  Storage:  ASM
               Redundancy:
         Target node name:
            Character set:  AL32UTF8
        NLS character set:
                 Language:  ENGLISH
                Territory:  AMERICA
          Console enabled:  false
             SEHA enabled:  false
      Associated networks:  Public-network
         Backup config ID:
       Level 0 Backup Day:  sunday
       Autobackup enabled:  true
              TDE enabled:  false
                 CDB type:  false
                 PDB name:
           PDB admin user:

[root@dbi-oda-x8 ~]# odacli describe-dbsystem -n srvdb39
DB System details
--------------------------------------------------------------------------------
                       ID:  5238f074-6126-4051-8b9e-e392b55a2328
                     Name:  srvdb39
                    Image:  19.13.0.0.0
                    Shape:  odb2
             Cluster name:  dbs793d6c5ce
             Grid version:  19.13.0.0.211019
                   Memory:  16.00 GB
             NUMA enabled:  YES
                   Status:  CONFIGURED
                  Created:  2022-02-04 17:14:01 CET
                  Updated:  2022-02-04 18:10:23 CET

 CPU Pool
--------------------------
                     Name:  cpupool4dbsystems
          Number of cores:  4

                     Host:  dbi-oda-x8
        Effective CPU set:  5-6,21-22,37-38,53-54
              Online CPUs:  5, 6, 21, 22, 37, 38, 53, 54
             Offline CPUs:  NONE

 VM Storage
--------------------------
               Disk group:  DATA
              Volume name:  S793D6C5CE
            Volume device:  /dev/asm/s793d6c5ce-390
                     Size:  200.00 GB
              Mount Point:  /u05/app/sharedrepo/srvdb39

 VMs
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  x793d6c5ce
             VM Host Name:  srvdb39.dbi-lab.ch
            VM image path:  /u05/app/sharedrepo/srvdb39/.ACFS/snaps/vm_x793d6c5ce/x793d6c5ce
             Target State:  ONLINE
            Current State:  ONLINE

 VNetworks
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  x793d6c5ce
                   Public:  10.39.0.20      / 255.255.255.0   / ens3 / BRIDGE(brtagged39)
                      ASM:  192.168.17.5    / 255.255.255.128 / ens4 / BRIDGE(privasm) VLAN(priv0.100)

 Extra VNetworks
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  x793d6c5ce
                 tagged39:  10.39.0.20      / 255.255.255.0   / PUBLIC

 Databases
--------------------------
                     Name:  DB39
              Resource ID:  e796e7e1-c8c8-4a63-809c-57976ce2163d
              Unique name:  DB39_SITE1
              Database ID:  2043690133
              Domain name:  dbi-lab.ch
               DB Home ID:  67f6df00-83d8-477b-8884-201664a3701b
                    Shape:  odb2
                  Version:  19.13.0.0.211019
                  Edition:  EE
                     Type:  SI
                     Role:  PRIMARY
                    Class:  OLTP
                  Storage:  ASM
               Redundancy:
         Target node name:
            Character set:  AL32UTF8
        NLS character set:
                 Language:  ENGLISH
                Territory:  AMERICA
          Console enabled:  false
             SEHA enabled:  false
      Associated networks:  Public-network
         Backup config ID:
       Level 0 Backup Day:  sunday
       Autobackup enabled:  true
              TDE enabled:  false
                 CDB type:  false
                 PDB name:
           PDB admin user:

[root@dbi-oda-x8 ~]#

Direct connection to the DB Systems

Finally, I can access the DB Systems directly through SSH from my laptop. Here, as an example, is a connection to the srvdb38 DB System.

maw@DBI-LT-MAW2 ~ % ssh root@10.38.0.20
root@10.38.0.20's password:
Last login: Thu Feb 17 16:07:31 2022

[root@srvdb38 ~]# ip addr sh ens3
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:6e:3f:9f brd ff:ff:ff:ff:ff:ff
    inet 10.38.0.20/24 brd 10.38.0.255 scope global noprefixroute ens3
       valid_lft forever preferred_lft forever
       
[root@srvdb38 ~]# ps -ef | grep [p]mon
oracle   21506     1  0 14:05 ?        00:00:00 ora_pmon_DB38
[root@srvdb38 ~]#
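
Similarly, a generic process check confirms that the listener is running inside the DB System VM :

[root@srvdb38 ~]# ps -ef | grep [t]nslsnr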

And directly from my laptop I can connect to and use the Oracle database DB38.
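
For example, with an Oracle client installed on the laptop, an EZConnect connection would look like the sketch below. The service name DB38_SITE1.dbi-lab.ch is an assumption based on the DB unique name and domain shown above, and 1521 is assumed to be the listener port :

sqlplus system@//10.38.0.20:1521/DB38_SITE1.dbi-lab.ch
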
Creating a virtual network without assigning an IP address to the bridge interface

Of course it is possible to create a virtual bridge interface without assigning any IP address.

I deleted the tagged39 virtual network in order to create it again without an IP address :

[root@dbi-oda-x8 ~]# odacli list-vnetworks
Name                  Type             Interface        Bridge                Uniform   Created                  Updated
--------------------  ---------------  ---------------  --------------------  --------  -----------------------  -----------------------
tagged38              BridgedVlan      btbond2          brtagged38            NO        2022-02-04 15:19:07 CET  2022-02-04 15:19:07 CET
pubnet                Bridged          btbond1          pubnet                NO        2022-01-28 08:54:55 CET  2022-01-28 08:54:55 CET
tagged39              BridgedVlan      btbond2          brtagged39            NO        2022-02-04 22:13:18 CET  2022-02-17 16:38:02 CET

[root@dbi-oda-x8 ~]# odacli delete-vnetwork -n tagged39

Job details
----------------------------------------------------------------
                     ID:  2cae44c4-39e7-4102-8453-597f707ed10b
            Description:  vNetwork tagged39 deletion
                 Status:  Created
                Created:  February 17, 2022 4:46:45 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@dbi-oda-x8 ~]# odacli describe-job -i 2cae44c4-39e7-4102-8453-597f707ed10b

Job details
----------------------------------------------------------------
                     ID:  2cae44c4-39e7-4102-8453-597f707ed10b
            Description:  vNetwork tagged39 deletion
                 Status:  Success
                Created:  February 17, 2022 4:46:45 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate Virtual Network exists          February 17, 2022 4:46:45 PM CET    February 17, 2022 4:46:45 PM CET    Success
Validate VM does not have the VNetwork attached February 17, 2022 4:46:45 PM CET    February 17, 2022 4:46:45 PM CET    Success
Delete bridge                            February 17, 2022 4:46:45 PM CET    February 17, 2022 4:46:45 PM CET    Success
Delete VLAN                              February 17, 2022 4:46:45 PM CET    February 17, 2022 4:46:46 PM CET    Success
Delete metadata                          February 17, 2022 4:46:46 PM CET    February 17, 2022 4:46:46 PM CET    Success

[root@dbi-oda-x8 ~]# odacli list-vnetworks
Name                  Type             Interface        Bridge                Uniform   Created                  Updated
--------------------  ---------------  ---------------  --------------------  --------  -----------------------  -----------------------
tagged38              BridgedVlan      btbond2          brtagged38            NO        2022-02-04 15:19:07 CET  2022-02-04 15:19:07 CET
pubnet                Bridged          btbond1          pubnet                NO        2022-01-28 08:54:55 CET  2022-01-28 08:54:55 CET

[root@dbi-oda-x8 ~]# ip addr sh | grep btbond2
4: p7p3:  mtu 1500 qdisc mq master btbond2 state UP group default qlen 1000
5: p7p4:  mtu 1500 qdisc mq master btbond2 state UP group default qlen 1000
10: btbond2:  mtu 1500 qdisc noqueue state UP group default qlen 1000
11: btbond2.38@btbond2:  mtu 1500 qdisc noqueue master brtagged38 state UP group default qlen 1000
[root@dbi-oda-x8 ~]#

Of course, I could also have updated the vnetwork using odacli modify-vnetwork and assigned 0.0.0.0 as the IP address.
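
A sketch of what that would look like, assuming modify-vnetwork accepts the same -n and -ip flags as create-vnetwork (check odacli modify-vnetwork -h for the exact syntax) :

[root@dbi-oda-x8 ~]# odacli modify-vnetwork -n tagged39 -ip 0.0.0.0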

I then created the tagged39 virtual network again, without assigning an IP address to the bridge interface :

[root@dbi-oda-x8 ~]# odacli list-vnetworks
Name                  Type             Interface        Bridge                Uniform   Created                  Updated
--------------------  ---------------  ---------------  --------------------  --------  -----------------------  -----------------------
tagged38              BridgedVlan      btbond2          brtagged38            NO        2022-02-04 15:19:07 CET  2022-02-04 15:19:07 CET
pubnet                Bridged          btbond1          pubnet                NO        2022-01-28 08:54:55 CET  2022-01-28 08:54:55 CET

[root@dbi-oda-x8 ~]# odacli create-vnetwork -n tagged39 -if btbond2 -t bridged-vlan -vlan 39

Job details
----------------------------------------------------------------
                     ID:  a70a1313-f2cd-48ab-ac5f-c17b2f2fae08
            Description:  vNetwork tagged39 creation
                 Status:  Created
                Created:  February 17, 2022 4:49:44 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@dbi-oda-x8 ~]# odacli describe-job -i a70a1313-f2cd-48ab-ac5f-c17b2f2fae08

Job details
----------------------------------------------------------------
                     ID:  a70a1313-f2cd-48ab-ac5f-c17b2f2fae08
            Description:  vNetwork tagged39 creation
                 Status:  Success
                Created:  February 17, 2022 4:49:44 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate Virtual Network doesn't exist   February 17, 2022 4:49:44 PM CET    February 17, 2022 4:49:44 PM CET    Success
Validate interface to use exists         February 17, 2022 4:49:44 PM CET    February 17, 2022 4:49:44 PM CET    Success
Validate interfaces to create not exist  February 17, 2022 4:49:44 PM CET    February 17, 2022 4:49:44 PM CET    Success
Create bridge                            February 17, 2022 4:49:44 PM CET    February 17, 2022 4:49:44 PM CET    Success
Create VLAN                              February 17, 2022 4:49:44 PM CET    February 17, 2022 4:49:44 PM CET    Success
Bring up VLAN                            February 17, 2022 4:49:44 PM CET    February 17, 2022 4:49:45 PM CET    Success
Create metadata                          February 17, 2022 4:49:45 PM CET    February 17, 2022 4:49:45 PM CET    Success
Persist metadata                         February 17, 2022 4:49:45 PM CET    February 17, 2022 4:49:45 PM CET    Success

[root@dbi-oda-x8 ~]# odacli list-vnetworks
Name                  Type             Interface        Bridge                Uniform   Created                  Updated
--------------------  ---------------  ---------------  --------------------  --------  -----------------------  -----------------------
tagged38              BridgedVlan      btbond2          brtagged38            NO        2022-02-04 15:19:07 CET  2022-02-04 15:19:07 CET
pubnet                Bridged          btbond1          pubnet                NO        2022-01-28 08:54:55 CET  2022-01-28 08:54:55 CET
tagged39              BridgedVlan      btbond2          brtagged39            NO        2022-02-17 16:49:45 CET  2022-02-17 16:49:45 CET

[root@dbi-oda-x8 ~]# odacli describe-vnetwork -n tagged39
VNetwork details
--------------------------------------------------------------------------------
                       ID:  6e40ca21-1d4a-4bcc-8436-f29ffb2eb8bc
                     Name:  tagged39
                  Created:  2022-02-17 16:49:45 CET
                  Updated:  2022-02-17 16:49:45 CET
                     Type:  BridgedVlan
           Interface name:  btbond2
              Bridge name:  brtagged39
                  VLAN ID:  39
                       IP:
                  Netmask:
                  Gateway:
 Attached in VMs (config):  NONE
   Attached in VMs (live):  NONE

[root@dbi-oda-x8 ~]# ip addr sh | grep btbond2
4: p7p3:  mtu 1500 qdisc mq master btbond2 state UP group default qlen 1000
5: p7p4:  mtu 1500 qdisc mq master btbond2 state UP group default qlen 1000
10: btbond2:  mtu 1500 qdisc noqueue state UP group default qlen 1000
11: btbond2.38@btbond2:  mtu 1500 qdisc noqueue master brtagged38 state UP group default qlen 1000
31: btbond2.39@btbond2:  mtu 1500 qdisc noqueue master brtagged39 state UP group default qlen 1000

[root@dbi-oda-x8 ~]# ip addr sh | grep brtagged
11: btbond2.38@btbond2:  mtu 1500 qdisc noqueue master brtagged38 state UP group default qlen 1000
12: brtagged38:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.38.0.10/24 brd 10.38.0.255 scope global brtagged38
25: vnet6:  mtu 1500 qdisc pfifo_fast master brtagged38 state UNKNOWN group default qlen 1000
30: brtagged39:  mtu 1500 qdisc noqueue state UP group default qlen 1000
31: btbond2.39@btbond2:  mtu 1500 qdisc noqueue master brtagged39 state UP group default qlen 1000
[root@dbi-oda-x8 ~]#

And let’s deploy the srvdb39 DB System again, after having removed it with the odacli delete-dbsystem command :

[root@dbi-oda-x8 ~]# odacli list-dbsystems
Name                  Shape       Cores  Memory      GI version          DB version          Status           Created                  Updated
--------------------  ----------  -----  ----------  ------------------  ------------------  ---------------  -----------------------  -----------------------
srvdb38               odb2        4      16.00 GB    19.13.0.0.211019    19.13.0.0.211019    CONFIGURED       2022-02-04 16:12:33 CET  2022-02-04 17:07:52 CET

[root@dbi-oda-x8 ~]# odacli create-dbsystem -p /opt/dbi/create_dbsystem_srvdb39.json
Enter password for system "srvdb39":
Retype password for system "srvdb39":
Enter administrator password for DB "DB39":
Retype administrator password for DB "DB39":

Job details
----------------------------------------------------------------
                     ID:  934c752b-7eb6-4b5b-8856-7a90102a1e65
            Description:  DB System srvdb39 creation
                 Status:  Created
                Created:  February 17, 2022 5:06:37 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@dbi-oda-x8 ~]# odacli describe-job -i 934c752b-7eb6-4b5b-8856-7a90102a1e65

Job details
----------------------------------------------------------------
                     ID:  934c752b-7eb6-4b5b-8856-7a90102a1e65
            Description:  DB System srvdb39 creation
                 Status:  Success
                Created:  February 17, 2022 5:06:37 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Create DB System metadata                February 17, 2022 5:06:37 PM CET    February 17, 2022 5:06:37 PM CET    Success
Persist new DB System                    February 17, 2022 5:06:37 PM CET    February 17, 2022 5:06:37 PM CET    Success
Validate DB System prerequisites         February 17, 2022 5:06:37 PM CET    February 17, 2022 5:06:41 PM CET    Success
Setup DB System environment              February 17, 2022 5:06:41 PM CET    February 17, 2022 5:06:43 PM CET    Success
Create DB System ASM volume              February 17, 2022 5:06:43 PM CET    February 17, 2022 5:06:49 PM CET    Success
Create DB System ACFS filesystem         February 17, 2022 5:06:49 PM CET    February 17, 2022 5:06:58 PM CET    Success
Create DB System VM ACFS snapshots       February 17, 2022 5:06:58 PM CET    February 17, 2022 5:07:31 PM CET    Success
Create temporary SSH key pair            February 17, 2022 5:07:31 PM CET    February 17, 2022 5:07:32 PM CET    Success
Create DB System cloud-init config       February 17, 2022 5:07:32 PM CET    February 17, 2022 5:07:32 PM CET    Success
Provision DB System VM(s)                February 17, 2022 5:07:32 PM CET    February 17, 2022 5:07:33 PM CET    Success
Attach disks to DB System                February 17, 2022 5:07:33 PM CET    February 17, 2022 5:07:34 PM CET    Success
Add DB System to Clusterware             February 17, 2022 5:07:34 PM CET    February 17, 2022 5:07:34 PM CET    Success
Start DB System                          February 17, 2022 5:07:34 PM CET    February 17, 2022 5:07:36 PM CET    Success
Wait DB System VM first boot             February 17, 2022 5:07:36 PM CET    February 17, 2022 5:08:48 PM CET    Success
Setup Mutual TLS (mTLS)                  February 17, 2022 5:08:48 PM CET    February 17, 2022 5:09:08 PM CET    Success
Export clones repository                 February 17, 2022 5:09:08 PM CET    February 17, 2022 5:09:08 PM CET    Success
Setup ASM client cluster config          February 17, 2022 5:09:08 PM CET    February 17, 2022 5:09:11 PM CET    Success
Install DB System                        February 17, 2022 5:09:11 PM CET    February 17, 2022 5:36:57 PM CET    Success
Cleanup temporary SSH key pair           February 17, 2022 5:36:57 PM CET    February 17, 2022 5:36:57 PM CET    Success
Set DB System as configured              February 17, 2022 5:36:57 PM CET    February 17, 2022 5:36:57 PM CET    Success

[root@dbi-oda-x8 ~]# odacli list-dbsystems
Name                  Shape       Cores  Memory      GI version          DB version          Status           Created                  Updated
--------------------  ----------  -----  ----------  ------------------  ------------------  ---------------  -----------------------  -----------------------
srvdb38               odb2        4      16.00 GB    19.13.0.0.211019    19.13.0.0.211019    CONFIGURED       2022-02-04 16:12:33 CET  2022-02-04 17:07:52 CET
srvdb39               odb2        4      16.00 GB    19.13.0.0.211019    19.13.0.0.211019    CONFIGURED       2022-02-17 17:06:37 CET  2022-02-17 17:36:57 CET

[root@dbi-oda-x8 ~]# odacli describe-dbsystem -n srvdb39
DB System details
--------------------------------------------------------------------------------
                       ID:  ed13bacb-be28-4e31-a5d7-39fd713d72a0
                     Name:  srvdb39
                    Image:  19.13.0.0.0
                    Shape:  odb2
             Cluster name:  dbs793d6c5ce
             Grid version:  19.13.0.0.211019
                   Memory:  16.00 GB
             NUMA enabled:  YES
                   Status:  CONFIGURED
                  Created:  2022-02-17 17:06:37 CET
                  Updated:  2022-02-17 17:36:57 CET

 CPU Pool
--------------------------
                     Name:  cpupool4dbsystems
          Number of cores:  4

                     Host:  dbi-oda-x8
        Effective CPU set:  5-6,21-22,37-38,53-54
              Online CPUs:  5, 6, 21, 22, 37, 38, 53, 54
             Offline CPUs:  NONE

 VM Storage
--------------------------
               Disk group:  DATA
              Volume name:  S793D6C5CE
            Volume device:  /dev/asm/s793d6c5ce-390
                     Size:  200.00 GB
              Mount Point:  /u05/app/sharedrepo/srvdb39

 VMs
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  x793d6c5ce
             VM Host Name:  srvdb39.dbi-lab.ch
            VM image path:  /u05/app/sharedrepo/srvdb39/.ACFS/snaps/vm_x793d6c5ce/x793d6c5ce
             Target State:  ONLINE
            Current State:  ONLINE

 VNetworks
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  x793d6c5ce
                   Public:  10.39.0.20      / 255.255.255.0   / ens3 / BRIDGE(brtagged39)
                      ASM:  192.168.17.5    / 255.255.255.128 / ens4 / BRIDGE(privasm) VLAN(priv0.100)

 Extra VNetworks
--------------------------
                     Host:  dbi-oda-x8
                  VM Name:  x793d6c5ce
                 tagged39:  10.39.0.20      / 255.255.255.0   / PUBLIC

 Databases
--------------------------
                     Name:  DB39
              Resource ID:  6ecee323-cd18-4971-acb4-b476d4d76d2c
              Unique name:  DB39_SITE1
              Database ID:  2044811647
              Domain name:  dbi-lab.ch
               DB Home ID:  3a6f8134-407a-4272-8512-7c52bdca68ea
                    Shape:  odb2
                  Version:  19.13.0.0.211019
                  Edition:  EE
                     Type:  SI
                     Role:  PRIMARY
                    Class:  OLTP
                  Storage:  ASM
               Redundancy:
         Target node name:
            Character set:  AL32UTF8
        NLS character set:
                 Language:  ENGLISH
                Territory:  AMERICA
          Console enabled:  false
             SEHA enabled:  false
      Associated networks:  Public-network
         Backup config ID:
       Level 0 Backup Day:  sunday
       Autobackup enabled:  true
              TDE enabled:  false
                 CDB type:  false
                 PDB name:
           PDB admin user:

This DB System is now also accessible through its IP address 10.39.0.20 and is using the bridge interface, which itself has no IP address: a bridge interface does not need one.

I can ping the new DB System from my laptop:

maw@DBI-LT-MAW2 ~ % ping 10.39.0.20
PING 10.39.0.20 (10.39.0.20): 56 data bytes
64 bytes from 10.39.0.20: icmp_seq=0 ttl=62 time=56.761 ms
64 bytes from 10.39.0.20: icmp_seq=1 ttl=62 time=56.734 ms
64 bytes from 10.39.0.20: icmp_seq=2 ttl=62 time=64.759 ms
^C
--- 10.39.0.20 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 56.734/59.418/64.759/3.777 ms

I can ssh the new DB System from my laptop:

maw@DBI-LT-MAW2 ~ % ssh root@10.39.0.20
root@10.39.0.20's password:
Last login: Thu Feb 17 18:10:10 2022 from gateway

[root@srvdb39 ~]# ip addr sh ens3
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:86:87:88 brd ff:ff:ff:ff:ff:ff
    inet 10.39.0.20/24 brd 10.39.0.255 scope global noprefixroute ens3
       valid_lft forever preferred_lft forever
       
[root@srvdb39 ~]# ps -ef | grep [p]mon
oracle   93401     1  0 17:35 ?        00:00:00 ora_pmon_DB39
[root@srvdb39 ~]#

And I can connect to the DB39 database directly from my laptop:

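For example, a quick connectivity test with sqlplus could look like this (a sketch: it assumes an Oracle client on the laptop, the default listener port 1521, and an EZConnect service name built from the unique name and domain shown in the describe-dbsystem output above):

maw@DBI-LT-MAW2 ~ % sqlplus system@//10.39.0.20:1521/DB39_SITE1.dbi-lab.ch
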
Conclusion

The main purpose of a KVM DB System is hard partitioning for Oracle licensing: all of the ODA’s CPU cores can remain active, while a DB System with the appropriate number of pinned CPU cores, matching the number of licenses, is created for the Oracle database. The other CPU cores of the ODA can be used for further KVMs or application needs.
As we could see, KVM DB Systems also have an additional advantage: with DB Systems we can separate database traffic through separate VLANs.

On the other hand, KVM DB Systems also have some cons:

  • More complex installation.
  • More resources consumed: each DB System runs its own Oracle Restart (Grid Infrastructure) stack.
  • Each DB System has to be patched separately during ODA patching, making the patching process heavier.

Also note that the current ODA release 19.13 only supports creating databases in versions 19.11, 19.12, 19.13, 21.3 and 21.4 for DB Systems.

L’article Creating KVM Database System on separate VLAN network on ODA est apparu en premier sur dbi Blog.

Managing Refreshable Clone Pluggable Databases with Oracle 21c


A refreshable clone PDB is a way to refresh a single PDB instead of refreshing all PDBs in a container, as in a Data Guard environment. It consists of making a clone of a source PDB; the clone PDB is then updated with the redo accumulated since the last redo apply.
In this blog I did some tests of this feature, refreshable pluggable databases.
I am doing my tests with Oracle 21c, but this feature exists since Oracle 12.2.

The configuration I use is the following:

An Oracle 21c source CDB: DB21, with a source pluggable database PDB1
An Oracle 21c target CDB: TEST21, which will contain the refreshable clone of PDB1. The clone will be named PDB1FRES

Note that the refreshable clone can be created in the same container.
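In that case no database link is needed; a minimal sketch of a local refreshable clone (PDB1LOCAL is a hypothetical name, and it assumes the CDB runs in archivelog mode with local undo, as required for refreshable clones) could be:

SQL> create pluggable database PDB1LOCAL from PDB1 refresh mode manual;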

The first step is to create a user in the source CDB DB21 for database link purposes:

SQL> create user c##clone_user identified by rootroot2016 temporary tablespace temp container=ALL;

User created.

SQL>

SQL> grant create session, create pluggable database, sysoper to c##clone_user container=ALL ;

Grant succeeded.

SQL>

In the target CDB TEST21, let’s create a database link to the source CDB, using the user c##clone_user:

SQL> create database link clonesource connect to c##clone_user identified by rootroot2016 using 'DB21';

Database link created.

SQL>

SQL> select * from dual@clonesource;

D
-
X

SQL>

Now we can create a refreshable clone PDB1FRES of PDB1 in the database TEST21.

First we will create a manually refreshable clone:

SQL> create pluggable database PDB1FRES from PDB1@clonesource refresh mode manual;

Pluggable database created.

SQL>

Once created, the new clone is mounted:

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1FRES                       MOUNTED
SQL>

We can check the refresh mode:

SQL> select PDB_NAME,REFRESH_MODE,REFRESH_INTERVAL,LAST_REFRESH_SCN from dba_pdbs where PDB_NAME='PDB1FRES';

PDB_NAME        REFRES REFRESH_INTERVAL LAST_REFRESH_SCN
--------------- ------ ---------------- ----------------
PDB1FRES        MANUAL                          39266271

SQL>

OK, now let’s make some changes on PDB1 and see how to propagate them to PDB1FRES:

SQL> show con_name

CON_NAME
------------------------------
PDB1


SQL> create table test(id number);

Table created.

SQL> insert into test values (1);

1 row created.

SQL> commit;

Commit complete.

SQL>

PDB1FRES must be closed (mounted) to be refreshed with the changes made in PDB1. As the clause REFRESH MODE MANUAL was used during its creation, we have to do the refresh manually:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> alter pluggable database PDB1FRES refresh;

Pluggable database altered.

SQL>

Let’s now open PDB1FRES in read-only mode to verify the refresh:

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1FRES                       MOUNTED
SQL> alter pluggable database PDB1FRES open read only;

Pluggable database altered.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1FRES                       READ ONLY  NO
SQL> alter session set container=PDB1FRES;

Session altered.

SQL> select * from test;

        ID
----------
         1

SQL>

SQL> alter pluggable database PDB1FRES close immediate;

Pluggable database altered.

As seen, the manual refresh works fine.

Can we change the manual refresh mode to an automatic one?

Let’s try

SQL> alter pluggable database PDB1FRES  refresh mode every 4 minutes;

Pluggable database altered.

SQL> select PDB_NAME,REFRESH_MODE,REFRESH_INTERVAL,LAST_REFRESH_SCN from dba_pdbs where PDB_NAME='PDB1FRES';

PDB_NAME        REFRES REFRESH_INTERVAL LAST_REFRESH_SCN
--------------- ------ ---------------- ----------------
PDB1FRES        AUTO                  4         39272240

SQL>

Now let’s make some more changes in PDB1:

SQL> insert into test values (10);

1 row created.

SQL> insert into test values (20);

1 row created.

SQL> commit;

Commit complete.

SQL>

Four minutes later, we can see that the LAST_REFRESH_SCN has changed on PDB1FRES:

SQL> select PDB_NAME,REFRESH_MODE,REFRESH_INTERVAL,LAST_REFRESH_SCN from dba_pdbs where PDB_NAME='PDB1FRES';

PDB_NAME        REFRES REFRESH_INTERVAL LAST_REFRESH_SCN
--------------- ------ ---------------- ----------------
PDB1FRES        AUTO                  4         39272403

SQL>

Let’s open PDB1FRES in read-only mode and verify that the latest changes have been replicated:

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1FRES                       READ ONLY  NO

SQL> alter session set container=PDB1FRES ;

Session altered.

SQL> select * from test;

        ID
----------
         1
        10
        20

SQL>

Note that the automatic refresh will succeed only if the PDB clone is mounted. Note also that a manual refresh can be done even if automatic refresh is configured.

Another question is whether we can open PDB1FRES in read-write mode.
Let’s try:

SQL> alter pluggable database PDB1FRES open read write;

Pluggable database altered.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1FRES                       READ ONLY  NO
SQL>

What? The open read write command returns SUCCESS, but the database is actually opened in read-only mode.

To open the database in read-write mode, we have to set the refresh mode to none:

SQL> alter pluggable database PDB1FRES  refresh mode none;
alter pluggable database PDB1FRES  refresh mode none
*
ERROR at line 1:
ORA-65025: Pluggable database PDB1FRES is not closed on all instances.


SQL> alter pluggable database PDB1FRES  close immediate;

Pluggable database altered.

SQL> alter pluggable database PDB1FRES  refresh mode none;

Pluggable database altered.

SQL> col pdb_name for a15
SQL> select PDB_NAME,REFRESH_MODE,REFRESH_INTERVAL,LAST_REFRESH_SCN from dba_pdbs where PDB_NAME='PDB1FRES';

PDB_NAME        REFRES REFRESH_INTERVAL LAST_REFRESH_SCN
--------------- ------ ---------------- ----------------
PDB1FRES        NONE                            39272683

SQL> alter pluggable database PDB1FRES open read write;

Pluggable database altered.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1FRES                       READ WRITE NO
SQL>

Now that PDB1FRES has been opened in read-write mode, let’s close it and try to turn it back into a refreshable clone:

SQL> alter pluggable database PDB1FRES close immediate;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1FRES                       MOUNTED
SQL> alter pluggable database PDB1FRES  refresh mode manual;
alter pluggable database PDB1FRES  refresh mode manual
*
ERROR at line 1:
ORA-65261: pluggable database PDB1FRES not enabled for refresh


SQL>

It’s not possible to convert a PDB that has been opened read-write back into a refreshable PDB. It’s clearly specified in the documentation:
“You cannot change an ordinary PDB into a refreshable clone PDB. After a refreshable clone PDB is converted to an ordinary PDB, you cannot change it back into a refreshable clone PDB.”

Conclusion

One use of a refreshable PDB is as a golden master for snapshots at PDB level; these snapshots can then be used to clone environments for developers.
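
As an illustration, creating such a developer clone from the refreshable master could look like this (a sketch: PDB1DEV is a hypothetical name, and the snapshot copy clause assumes the underlying storage supports snapshots, e.g. ACFS):

SQL> alter pluggable database PDB1FRES open read only;
SQL> create pluggable database PDB1DEV from PDB1FRES snapshot copy;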

L’article Managing Refreshable Clone Pluggable Databases with Oracle 21c est apparu en premier sur dbi Blog.


Oracle DBs and ransomware attacks


By Clemens Bleile

I had a discussion with a customer recently about the risk of running into an issue with ransomware encrypting data of Oracle databases. Just to quickly recap on what ransomware is:

Wikipedia: Ransomware is a type of malware from cryptovirology that threatens to publish the victim’s personal data or perpetually block access to it unless a ransom is paid. While some simple ransomware may lock the system without damaging any files, more advanced malware uses a technique called cryptoviral extortion. It encrypts the victim’s files, making them inaccessible, and demands a ransom payment to decrypt them.

In recent years ransomware has become more perfidious by:

  • searching for backups to encrypt them as well, because restoring non-infected backups used to be the only way to recover ransomware-encrypted data without paying the ransom
  • stealing sensitive data and then blackmailing the victim, threatening to publish the stolen data if no ransom is paid

So how can you protect your database proactively to prevent becoming a victim of a ransomware attack?

The following list is not complete, but should give an idea of what an Oracle DBA may proactively do:

1. Protecting the data from becoming encrypted

It is very unlikely that ransomware uses Oracle functionality to connect to a database. In almost all cases the ransomware tries to find data on filesystems or on block devices and encrypts it through normal reads and writes.
My customer actually uses Automatic Storage Management (ASM), and I proposed the ASM Filter Driver as a first protection against ransomware, because access to ASM disks is then only allowed through Oracle database calls. You may e.g. check the following blogs, which show that even a dd or fdisk as root is not possible on the devices holding the data when the ASM Filter Driver is installed:

https://franckpachot.medium.com/asm-filter-driver-simple-test-on-filtering-2a506f048ee5
https://www.uxora.com/unix/admin/42-oracle-asm-filter-driver-i-o-filtering-test

Here is an example of trying to delete a partition that is an ASM device, using fdisk:

[root@ol8-21-rac1 ~]# fdisk /dev/sdc
...
Command (m for help): p
Disk /dev/sdc: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb7b98795

Device     Boot Start     End Sectors Size Id Type
/dev/sdc1        2048 4194303 4192256   2G 83 Linux

Command (m for help): d
Selected partition 1
Partition 1 has been deleted.

Command (m for help): p
Disk /dev/sdc: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb7b98795

Command (m for help): w
The partition table has been altered.
Failed to remove partition 1 from system: Device or resource busy

The kernel still uses the old partitions. The new table will be used at the next reboot. 
/dev/sdc: close device failed: Input/output error

[root@ol8-21-rac1 ~]# 

In /var/log/messages I can see this:

Mar 22 22:18:51 ol8-21-rac1 kernel: F 4299627.272/220322221851 fdisk[98385] oracleafd:18:1012:Write IO to ASM managed device: [8] [32]
Mar 22 22:18:51 ol8-21-rac1 kernel: Buffer I/O error on dev sdc, logical block 0, lost async page write

2. Protecting backups from becoming encrypted

Besides storing backups on different servers, it’s a good idea to use backup solutions which make the backup immutable (read-only) after it has been written. So you should check that your database backups are immutable. An NFS location is usually not a good backup medium for that (there are measures that help for NFS as well, though; check here).
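
As a very basic illustration of the idea on Linux, the immutable flag can be set on a backup piece once it has been written (a sketch with a hypothetical path; real immutability should be enforced by the backup solution or appliance itself, since root, and therefore root-level malware, can revert this flag):

# prevent any modification or deletion of the backup piece, even by root
chattr +i /backup/DB_level0_20220322.bkp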

3. Protecting data so that it cannot be stolen

Encrypting your data using e.g. Oracle Transparent Data Encryption (TDE) is a good idea, because the stolen data is useless without the key.
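
For example, with a TDE keystore already configured and open, an existing tablespace can be encrypted online (a sketch; USERS is just an example tablespace):

SQL> alter tablespace users encryption online using 'AES256' encrypt;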

Depending on your configuration several other methods to protect against ransomware attacks are available. Here a couple of links concerning the subject:

https://phoenixnap.com/kb/immutable-backups
https://blogs.oracle.com/maa/post/protect-and-recover-databases-from-ransomware-attacks-with-zero-data-loss-recovery-appliance
https://ronekins.com/2021/06/14/protecting-oracle-backups-from-ransomware-and-malicious-intent/#more-7955

Summary: ransomware may also affect database servers, and a DBA should protect the databases he’s responsible for. Despite the ASM Filter Driver’s (AFD) issues (dependency on the Linux kernel, bugs), the AFD can be a measure to protect your databases against ransomware attacks. Interestingly, I haven’t seen any blog or information yet about using the AFD as a protection against ransomware.

REMARK: You may check the following MOS Note concerning the dependency of the AFD to the Kernel:
ACFS Support On OS Platforms (Certification Matrix). (Doc ID 1369107.1)
Even though the MOS note title mentions only ACFS, the AFD is covered as well.

L’article Oracle DBs and ransomware attacks est apparu en premier sur dbi Blog.

Extract all DDL from an Oracle database


Introduction

Extracting DDL is sometimes useful for creating similar objects in another database, without the data. Basically everything can be extracted from a running Oracle database.

The needs

My customer asked me to replicate a database without any data. The goal is to feed development environments running on Docker, so with a minimal footprint. The precise needs were:

  • All the metadata
  • Data from some tables may be needed (based on a provided list)
  • Imported users should be filtered (based on criteria and an exclude list)
  • Imported users will be created with basic password (password = username)
  • Source and target databases are not on the same network (thus no direct communication between both instances)
  • Logon triggers must be disabled on target database
  • Audit rules must also be disabled

Additional criteria may be added later.

How to proceed?

An Oracle package is dedicated to DDL extraction: dbms_metadata.get_ddl. While it’s very convenient for a few objects, it does not do the job for a complete DDL extraction.
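
For a single object it is as simple as this (an example with a hypothetical table EMP owned by SCOTT):

SQL> set long 100000
SQL> set pagesize 0
SQL> select dbms_metadata.get_ddl('TABLE','EMP','SCOTT') from dual;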

For years now, Data Pump has also been able to do this extraction. Actually, it was already possible with the older exp/imp on 9i and earlier versions.

The export can be done as a normal expdp with metadata-only extraction. Then impdp is used with the sqlfile directive: a Data Pump import with the sqlfile directive won’t import anything, but will parse the dumpfile and generate a SQL script from it.
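
In its most basic form, the expdp/impdp pair looks like this (a stripped-down sketch assuming a directory object DMP_DIR already exists; the complete parameter files used for this customer are built below):

expdp "/ as sysdba" directory=DMP_DIR dumpfile=meta.dmp content=metadata_only full=y
impdp "/ as sysdba" directory=DMP_DIR dumpfile=meta.dmp sqlfile=meta.sql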

SQL script will then be parsed and several actions will be done:

  • password change for all users (reset to username)
  • logon trigger disabling
  • audit rules disabling

Once done, SQL script will then be ready to send to target server.

Another expdp will be done with selected tables; these are parameter tables for example, and of course only tables without any sensitive data. It takes a text file (with the list of the tables to export) as input.

Before creating the metadata on the target database, the tablespaces must exist, but with minimal sizes. This is why a script is also used to generate the tablespace creation statements, using a single datafile with a minimum size and autoextend. The data volume will be low, so users should not be bothered by space exhaustion on the tablespaces.
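
Each generated statement will simply look like this (DATA_TBS being a hypothetical tablespace name; OMF takes care of the datafile placement):

create tablespace DATA_TBS datafile size 10M autoextend on;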

Prerequisites

These are the prerequisites to use these scripts:

  • 12c or later source database (should also work with older versions)
  • target database in same or higher version and configured for OMF (db_create_file_dest)
  • connection to oracle user on both systems (or a user in the dba system group)
  • 1+GB free space on source and target server
  • nfs share between the 2 servers is recommended
  • users list for exclusion is provided by the customer
  • tables list for inclusion is provided by the customer

Here is an example of both lists:

cat ddl_tmp/excl_users.txt | more
ABELHATR
ACALENTI
ACTIVITY_R
ADELANUT
ALOMMET
AMERAN
AOLESG
APEREAN
APP_CON_MGR
...

cat ddl_tmp/incl_tables.txt
DERAN.TRAD_GV_CR
DERAN.TRAD_GV_PS
APPCN.PARAM_BASE
APPCN.PARAM_EXTENDED
OAPPLE.MCUST_INVC
...

Output files

The script will generate 3 files, prefixed with a step number identifying the execution sequence on the target database:

  • 01_${ORACLE_SID}_create_tablespace.sql: tablespace creation script using OMF
  • 02_${ORACLE_SID}_create_ddl.sql: main SQL script to create the DDL
  • 03_impdp_${ORACLE_SID}_tables.sh: import shell script for importing tables with data

Complete script explained

The first part of the script defines variables: basically the source database SID, the working folder and the file names:

# Set source database
export ORACLE_SID=MARCP01

# Set environment variables, main folder and file names
export DDL_TARGET_DIR=/home/oracle/ddl_tmp
export DDL_TARGET_DUMPFILE=ddl_${ORACLE_SID}_`date +"%Y%m%d_%H%M"`.dmp
export DDL_TARGET_LOGFILE_EXP=ddl_${ORACLE_SID}_exp_`date +"%Y%m%d_%H%M"`.log
export DDL_TARGET_LOGFILE_IMP=ddl_${ORACLE_SID}_imp_`date +"%Y%m%d_%H%M"`.log
export DDL_TARGET_TABLES_DUMPFILE=tables_${ORACLE_SID}_`date +"%Y%m%d_%H%M"`_%U.dmp
export DDL_TARGET_TABLES_LOGFILE_EXP=tables_${ORACLE_SID}_exp_`date +"%Y%m%d_%H%M"`.log
export DDL_TARGET_SCRIPT=ddl_${ORACLE_SID}_extracted_`date +"%Y%m%d_%H%M"`.sql
export DDL_TBS_SCRIPT=01_${ORACLE_SID}_create_tablespace.sql
export DDL_CREATE_SCRIPT=02_${ORACLE_SID}_create_ddl.sql
export DDL_IMPORT_TABLES_CMD=03_impdp_${ORACLE_SID}_tables.sh
export DDL_EXCLUDE_USER_LIST=excl_users.txt
export DDL_INCLUDE_TABLE_LIST=incl_tables.txt

The second part creates the target folder and deletes temporary files from a hypothetical previous run:

# Create target directory and clean up the folder
# Directory should include a user list to exclude: $DDL_EXCLUDE_USER_LIST
#  => User list is basically 1 username per line
# Directory may include a table list to include: $DDL_INCLUDE_TABLE_LIST
#  => Table list is 1 table per line, prefixed with the username (owner)
mkdir $DDL_TARGET_DIR 2>/dev/null
rm $DDL_TARGET_DIR/ddl_*.par 2>/dev/null
rm $DDL_TARGET_DIR/tables_*.par 2>/dev/null
rm $DDL_TARGET_DIR/0*.sql 2>/dev/null
rm $DDL_TARGET_DIR/0*.sh 2>/dev/null
rm $DDL_TARGET_DIR/ddl_*.dmp 2>/dev/null
rm $DDL_TARGET_DIR/tables_*.dmp 2>/dev/null
rm $DDL_TARGET_DIR/ddl_*.log 2>/dev/null
rm $DDL_TARGET_DIR/tables_*.log 2>/dev/null
rm $DDL_TARGET_DIR/ddl_*.sql 2>/dev/null

A parameter file will be used for the first expdp, so it must be created beforehand. All users will be included, except the default Oracle-maintained ones. Excluding unneeded users will be done later:

# Create parameter file for metadata export
# No need to parallelize as DDL extraction runs on a single thread
. oraenv <<< $ORACLE_SID
sqlplus -s / as sysdba <<EOF
 create or replace directory DDL_TARGET_DIR as '$DDL_TARGET_DIR';
 set pages 0
 set lines 200
 set feedback off
 spool $DDL_TARGET_DIR/ddl_extract.par
 SELECT 'dumpfile=$DDL_TARGET_DUMPFILE' FROM DUAL; 
 SELECT 'logfile=$DDL_TARGET_LOGFILE_EXP' FROM DUAL;
 SELECT 'directory=DDL_TARGET_DIR' FROM DUAL; 
 SELECT 'content=metadata_only' FROM DUAL;
 SELECT 'cluster=N' FROM DUAL;
 SELECT 'exclude=fga_policy' FROM DUAL;
 SELECT 'exclude=AUDIT_OBJ' FROM DUAL;
 SELECT 'exclude=DB_LINK' FROM DUAL;
 SELECT 'schemas='||username FROM DBA_USERS WHERE oracle_maintained='N' ORDER BY username;
 spool off;
exit;
EOF

In this parameter file, let’s exclude the users from the txt file:

# Exclude users' list from parameter file
cp $DDL_TARGET_DIR/ddl_extract.par $DDL_TARGET_DIR/par1.tmp
for a in `cat $DDL_TARGET_DIR/$DDL_EXCLUDE_USER_LIST`; do cat $DDL_TARGET_DIR/par1.tmp | grep -v $a > $DDL_TARGET_DIR/par2.tmp; mv $DDL_TARGET_DIR/par2.tmp $DDL_TARGET_DIR/par1.tmp; done
mv $DDL_TARGET_DIR/par1.tmp $DDL_TARGET_DIR/ddl_extract.par

A parameter file is also needed for the tables expdp. The tables’ export will go into a separate set of dump files:

# Create parameter file for tables to include
# Tables will be consistent at the same SCN
# Export is done with parallel degree 4
sqlplus -s / as sysdba <<EOF
 create or replace directory DDL_TARGET_DIR as '$DDL_TARGET_DIR';
 set pages 0
 set lines 200
 set feedback off
 spool $DDL_TARGET_DIR/tables_extract.par
 SELECT 'dumpfile=$DDL_TARGET_TABLES_DUMPFILE' FROM DUAL; 
 SELECT 'logfile=$DDL_TARGET_TABLES_LOGFILE_EXP' FROM DUAL;
 SELECT 'directory=DDL_TARGET_DIR' FROM DUAL; 
 SELECT 'parallel=4' FROM DUAL;
 SELECT 'cluster=N' FROM DUAL;
 SELECT 'flashback_scn='||current_scn FROM V$DATABASE;
 spool off;
exit;
EOF

In this parameter file, let’s include all the tables as described in the related txt file:

# Include tables' list to parameter file
for a in `cat $DDL_TARGET_DIR/$DDL_INCLUDE_TABLE_LIST`; do echo "tables="$a >> $DDL_TARGET_DIR/tables_extract.par; done

Now the metadata export can start:

# Export metadata to a dump file
expdp "/ as sysdba" parfile=$DDL_TARGET_DIR/ddl_extract.par

# Output example
# ...
# Dump file set for SYS.SYS_EXPORT_SCHEMA_02 is:
#   /home/oracle/ddl_tmp/ddl_MARCP01_20220318_1351.dmp
# Job "SYS"."SYS_EXPORT_SCHEMA_02" successfully completed at Fri Mar 18 14:21:47 2022 elapsed 0 00:24:13

And the tables can also be exported now:

# Export included tables in another set of dump files
expdp "/ as sysdba" parfile=$DDL_TARGET_DIR/tables_extract.par

# Output example
# ...
# Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
#   /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_01.dmp
#   /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_02.dmp
#   /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_03.dmp
#   /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_04.dmp
# Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Fri Mar 18 14:22:08 2022 elapsed 0 00:00:14

A script is needed for tablespace creation, let’s create it:

# Create tablespace script for tablespace creation on target database (10MB with autoextend)
sqlplus -s / as sysdba <<EOF
 set pages 0
 set lines 200
 set feedback off
 spool $DDL_TARGET_DIR/$DDL_TBS_SCRIPT
  SELECT 'create tablespace '||tablespace_name||' datafile size 10M autoextend on;' FROM dba_data_files WHERE tablespace_name NOT IN ('SYSTEM','SYSAUX') and tablespace_name NOT LIKE 'UNDOTBS%' group by tablespace_name order by tablespace_name;
  spool off;
exit;
EOF

Another parameter file is needed for doing the datapump import that will create the SQL file:

# Create parameter file for metadata import as an SQL file
echo "dumpfile=$DDL_TARGET_DUMPFILE" > $DDL_TARGET_DIR/ddl_generate.par
echo "logfile=$DDL_TARGET_LOGFILE_IMP" >> $DDL_TARGET_DIR/ddl_generate.par
echo "directory=DDL_TARGET_DIR" >> $DDL_TARGET_DIR/ddl_generate.par
echo "sqlfile=$DDL_TARGET_SCRIPT" >> $DDL_TARGET_DIR/ddl_generate.par

Now let’s start the impdp task to extract DDL from the metadata dumpfile:

# Generate SQL script from previous dump (with impdp - it will not import anything)
impdp "/ as sysdba" parfile=$DDL_TARGET_DIR/ddl_generate.par

# Output example
# ...
# Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at Fri Mar 18 14:34:48 2022 elapsed 0 00:08:37

Once the SQL script with all DDL has been created, it’s time to change users’ passwords, lock some specific users, and disable logon triggers. You will probably need different changes:

# Define standard password for all internal users and generate DDL script (not definitive's one)
cat $DDL_TARGET_DIR/$DDL_TARGET_SCRIPT | awk -F ' ' '{if ($1 == "CREATE" && $2 == "USER" && $6 == "VALUES")  print $1" "$2" "$3" "$4" "$5" "$3; else print $0}' > $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT

# Lock *_MANAGER users (lock is added at the end of DDL script)
cp $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT $DDL_TARGET_DIR/ddl.tmp
cat $DDL_TARGET_DIR/ddl.tmp | grep 'CREATE USER "' | grep '_MANAGER"' | awk -F ' ' '{print "ALTER USER "$3" ACCOUNT LOCK;"}' >> $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT
rm $DDL_TARGET_DIR/ddl.tmp

# Remove logon triggers (disabled at the end of DDL script)
cp $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT $DDL_TARGET_DIR/ddl.tmp
cat $DDL_TARGET_DIR/ddl.tmp | awk -F ' ' '{if ($1 == "CREATE" && $2 == "EDITIONABLE" && $3 == "TRIGGER")  {trig=1; trigname=$4;} else if (trig == 1 && $1  == "after" && $2 == "logon") {trig=0 ; print "ALTER TRIGGER "trigname" DISABLE;"}}' >> $DDL_TARGET_DIR/$DDL_CREATE_SCRIPT
rm $DDL_TARGET_DIR/ddl.tmp

I’m still on the source server, but nothing prevents me from generating the parameter and command files for impdp:

# Create parameter file for tables import (will be needed on target server)
echo "dumpfile=$DDL_TARGET_TABLES_DUMPFILE" > $DDL_TARGET_DIR/tables_import.par
echo "logfile=tables_import.log" >> $DDL_TARGET_DIR/tables_import.par
echo "directory=DDL_TARGET_DIR" >> $DDL_TARGET_DIR/tables_import.par

# Script for importing tables on the target database (on the target server)
echo 'impdp "/ as sysdba" parfile=$DDL_TARGET_DIR/tables_import.par' > $DDL_TARGET_DIR/$DDL_IMPORT_TABLES_CMD

The last operation on the source server is to display the files generated by the script:

# Display files to transport to target server
ls -lrth $DDL_TARGET_DIR/0*.* $DDL_TARGET_DIR/tables*.dmp $DDL_TARGET_DIR/tables_import.par | sort
# Output example
# -rw-r----- 1 oracle asmadmin  20K Mar 18 14:22 /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_02.dmp
# -rw-r----- 1 oracle asmadmin 472K Mar 18 14:22 /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_01.dmp
# -rw-r----- 1 oracle asmadmin 8.0K Mar 18 14:22 /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_03.dmp
# -rw-r----- 1 oracle asmadmin 8.0K Mar 18 14:22 /home/oracle/ddl_tmp/tables_MARCP01_20220318_1351_04.dmp
# -rw-rw-r-- 1 oracle oinstall  29K Mar 18 14:25 /home/oracle/ddl_tmp/01_MARCP01_create_tablespace.sql
# -rw-rw-r-- 1 oracle oinstall  63M Mar 18 14:35 /home/oracle/ddl_tmp/02_MARCP01_create_ddl.sql
# -rw-rw-r-- 1 oracle oinstall   64 Mar 18 14:35 /home/oracle/ddl_tmp/03_impdp_MARC01_tables.sh
# -rw-rw-r-- 1 oracle oinstall   98 Mar 18 14:35 /home/oracle/ddl_tmp/tables_import.par

Conclusion

This is not high-level Oracle database stuff, but you can achieve nice automation simply by using shell commands and dynamic SQL scripting. It does not require any extra tool, and in this example it brought my customer exactly what he needed.

L’article Extract all DDL from an Oracle database est apparu en premier sur dbi Blog.

How to create an Oracle GoldenGate EXTRACT in Multitenant


Creating an EXTRACT process in a container database has some specificities:

From the CDB$ROOT, create a common user and configure the database to be ready to extract data via GoldenGate:

SQL> create user c##gg_admin identified by "*****" default tablespace goldengate temporary tablespace temp;

User created.

SQL>

SQL> alter user c##gg_admin quota unlimited on goldengate;

User altered.

SQL>


SQL> grant create session, connect,resource,alter system, select any dictionary, flashback any table to c##gg_admin container=all;

Grant succeeded.

SQL>

SQL> exec dbms_goldengate_auth.grant_admin_privilege(grantee => 'c##gg_admin',container=>'all');

PL/SQL procedure successfully completed.

SQL> alter user c##gg_admin set container_data=all container=current;

User altered.

SQL>

SQL> grant alter any table to c##gg_admin container=ALL;

Grant succeeded.

SQL>

SQL> alter system set enable_goldengate_replication=true scope=both;


SQL> alter database force logging;


SQL> alter pluggable database add supplemental log data;

Pluggable database altered.

SQL>

Add the schematrandata for the schema concerned:

GGSCI (vmld-01726 as c##gg_admin@MYCDB) 3> add schematrandata schema_source

2022-04-13 18:06:55  INFO    OGG-01788  SCHEMATRANDATA has been added on schema "schema_source".

2022-04-13 18:06:55  INFO    OGG-01976  SCHEMATRANDATA for scheduling columns has been added on schema "schema_source".

2022-04-13 18:06:55  INFO    OGG-10154  Schema level PREPARECSN set to mode NOWAIT on schema "schema_source".

2022-04-13 18:07:00  INFO    OGG-10471  ***** Oracle Goldengate support information on table schema_source.ZZ_DUMMY *****
Oracle Goldengate support native capture on table schema_source.ZZ_DUMMY.
Oracle Goldengate marked following column as key columns on table schema_source.ZZ_DUMMY: SCN, D, COMMENT_TXT
No unique key is defined for table schema_source.ZZ_DUMMY.

2022-04-13 18:07:00  INFO    OGG-10471  ***** Oracle Goldengate support information on table schema_source.ZZ_DUMMY2 *****
Oracle Goldengate support native capture on table schema_source.ZZ_DUMMY2.
Oracle Goldengate marked following column as key columns on table schema_source.ZZ_DUMMY2: SCN, D, COMMENT_TXT
No unique key is defined for table schema_source.ZZ_DUMMY2.

2022-04-13 18:07:00  INFO    OGG-10471  ***** Oracle Goldengate support information on table schema_source.ZZ_SURVEILLANCE *****
Oracle Goldengate support native capture on table schema_source.ZZ_SURVEILLANCE.
Oracle Goldengate marked following column as key columns on table schema_source.ZZ_SURVEILLANCE: I.

2022-04-13 18:07:00  INFO    OGG-10471  ***** Oracle Goldengate support information on table schema_source.ZZ_SURVEILLANCE_COPY *****
Oracle Goldengate support native capture on table schema_source.ZZ_SURVEILLANCE_COPY.
Oracle Goldengate marked following column as key columns on table schema_source.ZZ_SURVEILLANCE_COPY: I, SURV_DATE, ELLAPSED_1, ELLAPSED_2, CLIENT_HOST, CLIENT_TERMINAL, OS_USER, CLIENT_PROGRAM, INFO
No unique key is defined for table schema_source.ZZ_SURVEILLANCE_COPY.

GGSCI (vmld-01726 as c##gg_admin@MYCDB/CDB$ROOT) 3> dblogin userid c##gg_admin@MYPDB password xxxx
Successfully logged into database.

GGSCI (vmld-01726 as c##gg_admin@MYCDB) 4> info schematrandata schema_source

2022-04-13 18:32:43  INFO    OGG-06480  Schema level supplemental logging, excluding non-validated keys, is enabled on schema "schema_source".

2022-04-13 18:32:43  INFO    OGG-01980  Schema level supplemental logging is enabled on schema "schema_source" for all scheduling columns.

2022-04-13 18:32:43  INFO    OGG-10462  Schema "schema_source" have 4 prepared tables for instantiation.

GGSCI (vmld-01726 as c##gg_admin@MYCDB) 5>

Create a new alias connection to the container database and register the extract. The extract must be registered in the root container (CDB$ROOT), even though the data to capture comes from the PDB:

GGSCI (myserver) 10> alter credentialstore add user c##gg_admin@MYCDB_X1 alias ggadmin_exacc
Password:

Credential store altered.

GGSCI (myserver) 11> dblogin useridalias ggadmin
Successfully logged into database CDB$ROOT.

GGSCI (myserver as c##gg_admin@MYCDB/CDB$ROOT) 2>

GGSCI (myserver as c##gg_admin@MYCDB/CDB$ROOT) 2> register extract E3 database container (MYPDB)

2022-04-13 18:31:19  INFO    OGG-02003  Extract E3 successfully registered with database at SCN 3386436450080


GGSCI (myserver as c##gg_admin@MYCDB/CDB$ROOT) 3>

Save the SCN -> 3386436450080

Create the EXTRACT, connected on the CDB:

[oracle@myserver:/u01/app/oracle/product/19.1.0.0.4/gg_1]$ mkdir -p /u01/gs_x/ogg/

GGSCI (myserver) 7> add extract E3, integrated tranlog, begin now
EXTRACT (Integrated) added.


GGSCI (myserver) 8> INFO ALL

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     E3		    00:00:00      00:00:06


GGSCI (myserver) 9>


GGSCI (myserver) 9> add exttrail /u01/gs_x/ogg/gz, extract E3
EXTTRAIL added.


GGSCI (myserver) 2> edit param E3


GGSCI (myserver as c##gg_admin@MYCDB/CDB$ROOT) 13> edit param E3
Extract E3
useridalias ggadmin
Exttrail /u01/gs_x/ogg/gz
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
DDL  &
INCLUDE MAPPED OBJNAME MYPDB.SCHEMA.*
Sequence MYPDB.SCHEMA.*;
Table MYPDB.SCHEMA.* ;

The Table parameter must be prefixed with the PDB name.

Always start the extract from the CDB$ROOT:

GGSCI (myserver as c##gg_admin@MY_CDB/CDB$ROOT) 12> START EXTRACT E3 atcsn 3386436450080

Sending START request to MANAGER ...
EXTRACT E3 starting


GGSCI (myserver as c##gg_admin@MYCDB/CDB$ROOT) 15> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     E3      00:00:05      00:00:03

Check that the extract is running.
Now you are ready to create the pump process, do the initial load and create the replicat process on the target.
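
For reference, the pump process reading this local trail could look like this (a sketch only: the remote trail path /u02/ogg/rz, the target host name and the manager port are assumptions, not taken from this setup):

GGSCI (myserver) 1> add extract P3, exttrailsource /u01/gs_x/ogg/gz
GGSCI (myserver) 2> add rmttrail /u02/ogg/rz, extract P3
GGSCI (myserver) 3> edit param P3
Extract P3
rmthost target-host, mgrport 7809
rmttrail /u02/ogg/rz
Table MYPDB.SCHEMA.*;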

L’article How to create an Oracle GoldenGate EXTRACT in Multitenant est apparu en premier sur dbi Blog.

Installing MySQL InnoDB Cluster in OKE using a MySQL Operator


During the previous months, I’ve had some time to satisfy my curiosity about databases in containers, and I started to test MySQL in Kubernetes a little bit.
This is how it all began…

In January I had the chance to be trained on Kubernetes by attending the Docker and Kubernetes Essentials workshop of dbi services. So I decided to prepare a session on this topic for our internal dbi xChange event. And as if by magic, at the same time, a customer asked for our support to migrate a MySQL database to their Kubernetes cluster.

In general, I would like to raise two points before going into the technical details:
1. Is it a good idea to move databases into containers? Here I would use a typical IT answer: “it depends”. I can only suggest that you think about your needs and constraints: the size of the images to deploy, storage and persistence, performance, …
2. There are various solutions for installing, orchestrating and administering MySQL in K8s: MySQL single instance vs MySQL InnoDB Cluster, using MySQL Operator for Kubernetes or Helm Charts, on-premises but also through Oracle Container Engine for Kubernetes on OCI, … I recommend that you think about (again) your needs and skills, whether you are already working with cloud technologies, and whether you have already set up DevOps processes and which ones, …

Here I will show you how to install a MySQL InnoDB Cluster in OKE using a MySQL Operator.

The first thing is to have an account on Oracle OCI and an Oracle Container Engine for Kubernetes deployed in your compartment. You can do it in an easy way using the Quick Create option under “Developer Services > Containers & Artifacts > Kubernetes Clusters (OKE)”:

In this way all the resources you need (VCN, Internet and NAT gateways, a K8s cluster with worker nodes and a node pool) are there in one click:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl cluster-info
Kubernetes control plane is running at https://xxx.xx.xxx.xxx:6443
CoreDNS is running at https://xxx.xx.xxx.xxx:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

elisa@cloudshell:~ (eu-zurich-1)$ kubectl get nodes -o wide
NAME         STATUS   ROLES   AGE    VERSION   INTERNAL-IP   EXTERNAL-IP       OS-IMAGE                  KERNEL-VERSION                      CONTAINER-RUNTIME
10.0.10.36   Ready    node    6m7s   v1.22.5   10.0.10.36    yyy.yyy.yyy.yyy   Oracle Linux Server 7.9   5.4.17-2136.304.4.1.el7uek.x86_64   cri-o://1.22.3-1.ci.el7
10.0.10.37   Ready    node    6m1s   v1.22.5   10.0.10.37    kkk.kkk.kkk.kk    Oracle Linux Server 7.9   5.4.17-2136.304.4.1.el7uek.x86_64   cri-o://1.22.3-1.ci.el7
10.0.10.42   Ready    node    6m     v1.22.5   10.0.10.42    jjj.jj.jjj.jj     Oracle Linux Server 7.9   5.4.17-2136.304.4.1.el7uek.x86_64   cri-o://1.22.3-1.ci.el7

As a second step, you can install the MySQL Operator for Kubernetes using kubectl:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-crds.yaml
customresourcedefinition.apiextensions.k8s.io/innodbclusters.mysql.oracle.com created
customresourcedefinition.apiextensions.k8s.io/mysqlbackups.mysql.oracle.com created
customresourcedefinition.apiextensions.k8s.io/clusterkopfpeerings.zalando.org created
customresourcedefinition.apiextensions.k8s.io/kopfpeerings.zalando.org created
elisa@cloudshell:~ (eu-zurich-1)$ kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-operator.yaml
serviceaccount/mysql-sidecar-sa created
clusterrole.rbac.authorization.k8s.io/mysql-operator created
clusterrole.rbac.authorization.k8s.io/mysql-sidecar created
clusterrolebinding.rbac.authorization.k8s.io/mysql-operator-rolebinding created
clusterkopfpeering.zalando.org/mysql-operator created
namespace/mysql-operator created
serviceaccount/mysql-operator-sa created
deployment.apps/mysql-operator created

You can check the health of the MySQL Operator:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl get deployment -n mysql-operator mysql-operator
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
mysql-operator   1/1     1            1           24s
elisa@cloudshell:~ (eu-zurich-1)$ kubectl get pods --show-labels -n mysql-operator
NAME                              READY   STATUS    RESTARTS   AGE    LABELS
mysql-operator-869d4b4b8d-slr4t   1/1     Running   0          113s   name=mysql-operator,pod-template-hash=869d4b4b8d

To isolate resources, you can create a dedicated namespace for the MySQL InnoDB Cluster:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl create namespace mysql-cluster
namespace/mysql-cluster created

You should also create a Secret using kubectl to store the MySQL user credentials that will be created and then required by the pods to access the MySQL server:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl create secret generic elisapwd --from-literal=rootUser=root --from-literal=rootHost=% --from-literal=rootPassword="pwd" -n mysql-cluster
secret/elisapwd created

You can check that the Secret was correctly created:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl get secrets -n mysql-cluster
NAME                  TYPE                                  DATA   AGE
default-token-t2c47   kubernetes.io/service-account-token   3      2m
elisapwd              Opaque                                3      34s
elisa@cloudshell:~ (eu-zurich-1)$ kubectl describe secret/elisapwd -n mysql-cluster
Name:         elisapwd
Namespace:    mysql-cluster
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
rootHost:      1 bytes
rootPassword:  7 bytes
rootUser:      4 bytes

Now you have to write a .yaml configuration file to define how the MySQL InnoDB Cluster should be created. Here is a simple example:

elisa@cloudshell:~ (eu-zurich-1)$ vi InnoDBCluster_config.yaml
apiVersion: mysql.oracle.com/v2alpha1
kind: InnoDBCluster
metadata:
  name: elisacluster
  namespace: mysql-cluster 
spec:
  secretName: elisapwd
  instances: 3
  router:
    instances: 1

At this point you can run a MySQL InnoDB Cluster applying the configuration that you just created:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl apply -f InnoDBCluster_config.yaml
innodbcluster.mysql.oracle.com/elisacluster created

You can finally check if the MySQL InnoDB Cluster has been successfully created:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl get innodbcluster --watch --namespace mysql-cluster
NAME           STATUS    ONLINE   INSTANCES   ROUTERS   AGE
elisacluster   PENDING   0        3           1         12s
elisacluster   PENDING   0        3           1         103s
elisacluster   INITIALIZING   0        3           1         103s
elisacluster   INITIALIZING   0        3           1         103s
elisacluster   INITIALIZING   0        3           1         103s
elisacluster   INITIALIZING   0        3           1         104s
elisacluster   INITIALIZING   0        3           1         106s
elisacluster   ONLINE         1        3           1         107s
elisa@cloudshell:~ (eu-zurich-1)$ kubectl get all -n mysql-cluster
NAME                                       READY   STATUS    RESTARTS   AGE
pod/elisacluster-0                         2/2     Running   0          4h44m
pod/elisacluster-1                         2/2     Running   0          4h42m
pod/elisacluster-2                         2/2     Running   0          4h41m
pod/elisacluster-router-7686457f5f-hwfcv   1/1     Running   0          4h42m

NAME                             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                               AGE
service/elisacluster             ClusterIP   10.96.9.203   <none>        6446/TCP,6448/TCP,6447/TCP,6449/TCP   4h44m
service/elisacluster-instances   ClusterIP   None          <none>        3306/TCP,33060/TCP,33061/TCP          4h44m

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/elisacluster-router   1/1     1            1           4h44m

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/elisacluster-router-7686457f5f   1         1         1       4h44m

NAME                            READY   AGE
statefulset.apps/elisacluster   3/3     4h44m

You can use port forwarding in the following way:

elisa@cloudshell:~ (eu-zurich-1)$ kubectl port-forward service/elisacluster mysql --namespace=mysql-cluster
Forwarding from 127.0.0.1:6446 -> 6446

to access your MySQL InnoDB Cluster on a second terminal in order to check its health:

elisa@cloudshell:~ (eu-zurich-1)$ mysqlsh -h127.0.0.1 -P6446 -uroot -p
Please provide the password for 'root@127.0.0.1:6446': *******
Save password for 'root@127.0.0.1:6446'? [Y]es/[N]o/Ne[v]er (default No): N
MySQL Shell 8.0.28-commercial

Copyright (c) 2016, 2022, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help' or '?' for help; 'quit' to exit.
Creating a session to 'root@127.0.0.1:6446'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 36651
Server version: 8.0.28 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
 MySQL  127.0.0.1:6446 ssl  JS > dba.getCluster().status();
{
    "clusterName": "elisacluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306": {
                "address": "elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306", 
                "memberRole": "PRIMARY", 
                "memberState": "(MISSING)", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "shellConnectError": "MySQL Error 2005: Could not open connection to 'elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306': Unknown MySQL server host 'elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local' (-2)", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }, 
            "elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local:3306": {
                "address": "elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local:3306", 
                "memberRole": "SECONDARY", 
                "memberState": "(MISSING)", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "shellConnectError": "MySQL Error 2005: Could not open connection to 'elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local:3306': Unknown MySQL server host 'elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local' (-2)", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }, 
            "elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local:3306": {
                "address": "elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local:3306", 
                "memberRole": "SECONDARY", 
                "memberState": "(MISSING)", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "shellConnectError": "MySQL Error 2005: Could not open connection to 'elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local:3306': Unknown MySQL server host 'elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local' (-2)", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306"
}

 MySQL  127.0.0.1:6446 ssl  JS > sql
Switching to SQL mode... Commands end with ;
 MySQL  127.0.0.1:6446 ssl  SQL > select @@hostname;
+----------------+
| @@hostname     |
+----------------+
| elisacluster-0 |
+----------------+
1 row in set (0.0018 sec)
 MySQL  127.0.0.1:6446 ssl  SQL > SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-----------------------------------------------------------------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST                                                           | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+-----------------------------------------------------------------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 717dbe17-ba71-11ec-8a91-3665daa9c822 | elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local |        3306 | ONLINE       | PRIMARY     | 8.0.28         | XCom                       |
| group_replication_applier | b02c3c9a-ba71-11ec-8b65-5a93db09dda5 | elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
| group_replication_applier | eb06aadd-ba71-11ec-8aac-aa31e5d7e08b | elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
+---------------------------+--------------------------------------+-----------------------------------------------------------------------+-------------+--------------+-------------+----------------+----------------------------+
3 rows in set (0.0036 sec)

Easy, right?
Yes, but databases in containers are still a tricky subject. As we said above, many topics need to be addressed: deployment type, performance, backups, storage and persistence, … So stay tuned, more blog posts about MySQL on K8s will come soon…

By Elisa Usai

L’article Installing MySQL InnoDB Cluster in OKE using a MySQL Operator est apparu en premier sur dbi Blog.

Warning: ODA HA disk enclosure is not smart!


Introduction

Apart from the number of servers (1 vs 2), the main difference between Oracle Database Appliance lite (S/L) and High-Availability (HA) models is the location of the data disks. They are inside the server on lite ODAs, and in a dedicated disk enclosure on HA ODAs. Obviously, this is because when 2 nodes want to use the same disks, these disks have to be shared. And this is why HA needs a SAS disk enclosure.

Disk technology on ODA

On lite ODAs, disks are SSDs inside the server, connected to the PCI Express bus without any interface: this is the NVMe technology, and it is very fast. There are faster technologies, like NVRAM, but the price/performance ratio made NVMe a game changer.

HA ODAs are not that fast regarding disk bandwidth. This is because NVMe only works for disks locally connected to the server’s motherboard. Both HA ODA nodes come with SAS controllers, which are connected to a SAS disk enclosure with SAS SSDs in it. As this enclosure is quite big (same height as the 2 nodes together), disk capacity is much higher than on lite ODAs. A fully loaded X9-2HA ODA with SSDs has 184TB, more than twice the 81TB capacity of a fully loaded ODA X9-2L. Furthermore, you can add another storage enclosure to the X9-2HA to double the disk capacity to 369TB. And if you need even more capacity, there is a high-capacity version of this enclosure with a mix of SSDs and HDDs, for a maximum raw capacity of 740TB. This is huge!

Hardware monitoring on ODA

Monitoring the ODA hardware is done from ILOM, the management console. ILOM can send SNMP traps and raise an alert if something is wrong. For an HA ODA, you have 2 ILOMs to monitor, as the 2 nodes are separate pieces of hardware. There’s a catch when it comes to monitoring the storage enclosure: this enclosure is not active, meaning it doesn’t have any intelligence, and therefore cannot raise any alert itself. And the ILOM of the nodes is not aware of hardware outside the nodes. You may think that it’s not really a problem because data disks are monitored by ASM. But this enclosure also has SAS interfaces connecting it to the nodes, and if one of these interfaces is down, you may not detect the problem.
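
Configuring the SNMP alerting is done per ILOM, so on an HA ODA remember to do it on both nodes. A minimal sketch from the ILOM CLI, where 10.1.2.3 and the community string are hypothetical values to adapt to your monitoring setup:

-> set /SP/alertmgmt/rules/1 type=snmptrap destination=10.1.2.3 destination_port=162 snmp_version=2c community_or_username=public level=minor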

The use case

My customer has multiple HA ODAs, and I was doing a sanity check of these ODAs. Everything was fine until I ran an orachk on an X6-2HA:

odaadmcli orachk
INFO: 2022-11-16 16:41:11: Running orachk under /usr/bin/orachk Searching for running databases . . . . .
........
List of running databases registered in OCR
1. XXX
3. YYY
4. ZZZ 
5. All of above
6. None of above
Select databases from list for checking best practices. For multiple databases, select 5 for All or comma separated number like 1,2 etc [1-6][5]. 6
RDBMS binaries found at /u01/app/oracle/product/19.0.0.0/dbhome_1 and ORACLE_HOME not set. Do you want to set ORACLE_HOME to "/u01/app/oracle/product/19.0.0.0/dbhome_1"?[y/n][y] y
...
FAIL => Several enclosure components controllers might be down
...

This is not something nice to see. My storage enclosure has a problem.

Let’s do another check with odaadmcli:

odaadmcli show enclosure

        NAME        SUBSYSTEM         STATUS      METRIC

        E0_FAN0     Cooling           OK          4910 rpm
        E0_FAN1     Cooling           OK          4530 rpm
        E0_FAN2     Cooling           OK          4920 rpm
        E0_FAN3     Cooling           OK          4570 rpm
        E0_IOM0     Encl_Electronics  OK          -
        E0_IOM1     Encl_Electronics  Not availab -
        E0_PSU0     Power_Supply      OK          -
        E0_PSU1     Power_Supply      OK          -
        E0_TEMP0    Amb_Temp          OK          23 C
        E0_TEMP1    Midplane_Temp     OK          23 C
        E0_TEMP2    PCM0_Inlet_Temp   OK          29 C
        E0_TEMP3    PCM0_Hotspot_Temp OK          26 C
        E0_TEMP4    PCM1_Inlet_Temp   OK          44 C
        E0_TEMP5    PCM1_Hotspot_Temp OK          28 C
        E0_TEMP6    IOM0_Temp         OK          22 C
        E0_TEMP7    IOM1_Temp         OK          28 C

The enclosure is not visible through one of the SAS controllers. Maybe there is a failure, but the node is not able to tell that there is one. It may be related to an unplugged SAS cable, as I found on MOS.

Let’s validate the storage topology:

odacli validate-storagetopology
INFO    : ODA Topology Verification
INFO    : Running on Node0
INFO    : Check hardware type
SUCCESS : Type of hardware found : X6-2
INFO    : Check for Environment(Bare Metal or Virtual Machine)
SUCCESS : Type of environment found : Bare Metal
INFO    : Check number of Controllers
SUCCESS : Number of Internal RAID bus controllers found : 1
SUCCESS : Number of External SCSI controllers found : 2
INFO    : Check for Controllers correct PCIe slot address
SUCCESS : Internal RAID controller   : 23:00.0
SUCCESS : External LSI SAS controller 0 : 03:00.0
SUCCESS : External LSI SAS controller 1 : 13:00.0
INFO    : Check if JBOD powered on
SUCCESS : 0JBOD : Powered-on
INFO    : Check for correct number of EBODS(2 or 4)
FAILURE : Check for correct number of EBODS(2 or 4) : 1
ERROR   : 1 EBOD found on the system, which is less than 2 EBODS with 1 JBOD
INFO    : Above details can also be found in the log file=/opt/oracle/oak/log/srvxxx/storagetopology/StorageTopology-2022-11-16-17:21:43_34790_17083.log

EBOD stands for Expanded Bunch Of Disks, which is not very self-explanatory. But as the disks are OK, this is probably related to cabling or to a controller in the enclosure.
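
The log file mentioned at the end of the output usually holds more detail about which path or expander is missing. A simple way to dig in (the grep pattern is just a starting point):

grep -iE 'ebod|expander|controller' /opt/oracle/oak/log/*/storagetopology/StorageTopology-*.log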

Solution

My customer went to the datacenter and first checked the cabling, but it was fine. Opening an SR on My Oracle Support quickly solved the problem: a new controller was sent, it was swapped in the enclosure with the defective one without any downtime, and everything has been fine since.
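
After such an intervention, it’s worth re-running the same checks as above to confirm that both EBODs are seen again:

odacli validate-storagetopology
odaadmcli show enclosure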

Conclusion

There is absolutely no problem with the HA storage enclosure not being smart. You don’t need smart storage for this kind of server, as the ODA is a “Simple. Reliable. Affordable” solution.

In this particular case, it’s hard to detect that the failure is a real one. But my customer was running a RAC setup with a failure in one of its redundant components, maybe for months. That’s definitely not satisfying. From time to time, manual and human checks are still needed!
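
If you want to automate at least part of these checks, a small cron job parsing the odaadmcli output can help. A naive sketch, where the awk parsing and the mail recipient are assumptions to adapt to your environment:

#!/bin/bash
# Hypothetical periodic check: report any enclosure component whose STATUS is not OK
# Assumes it runs as root on an ODA node; dba@example.com is a placeholder
BAD=$(odaadmcli show enclosure | awk '$1 ~ /^E[0-9]+_/ && $3 != "OK"')
if [ -n "$BAD" ]; then
  echo "$BAD" | mail -s "ODA enclosure component not OK on $(hostname)" dba@example.com
fi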

The article Warning: ODA HA disk enclosure is not smart! first appeared on dbi Blog.
