Moving a 12.1.0.2.0 Pluggable Database from one Cluster to another… the hard way

I recently had to move a pluggable database from one Oracle RAC cluster to another. However, the task was complicated by the fact that the two clusters were on separate public networks, so I could not use the remote clone syntax. Luckily, the two clusters did share a NAS network, so I could use NFS to move files from one server to the other.

Example of the Syntax I couldn’t use due to network access issues:

create pluggable database SOMEPDB from SOMEPDB@DB_LINK;
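
For reference, the remote clone route also requires a database link in the destination CDB pointing back at the source (and in 12.1 the source PDB has to be open read-only for the clone). A minimal sketch of that setup, had the networks been routable; the common user, password, and TNS alias below are hypothetical:

-- run in the destination CDB; the link user needs CREATE PLUGGABLE DATABASE privilege on the source
create database link DB_LINK connect to c##clone_user identified by SomePassword using 'SRCCDB_TNS';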

So, I had to do things the hard way.

Here is the Lay of the Land:

Source Cluster:       TSTLDB03
Source Nodes:         tstldb301, tstldb302
Destination Cluster:  TSTLDB04
Destination Nodes:    tstldb401, tstldb402
Source CDB:           SRCCDB
Destination CDB:      DSTCDB
PDB being moved:      SOMEPDB

Unplug the PDB SOMEPDB and place its manifest on an NFS share common to both RAC clusters

 
SQL> alter pluggable database somepdb close immediate instances=ALL;

Pluggable database altered.

SQL> alter pluggable database somepdb unplug into '/nfs/nfsdir/somepdb.xml';

Pluggable database altered.
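
Before copying anything, a quick sanity check worth running (not part of the original transcript): DBA_PDBS on the source should now report the PDB with a STATUS of UNPLUGGED.

SQL> select pdb_name, status from dba_pdbs where pdb_name = 'SOMEPDB';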
 

Identify the Datafiles for the PDB

 
SQL> select name from v$datafile where con_id = (select con_id from v$containers where name = 'SOMEPDB');

NAME
--------------------------------------------------------------------------------
+DATAC1/SRCCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/system.311.946749143
+DATAC1/SRCCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/sysaux.312.946749143
+DATAC1/SRCCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/users.314.946749227

Use ASMCMD to copy the datafiles to the shared NFS location


[oracle@tstldb301 ~]$ asmcmd cp +DATAC1/SRCCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/system.311.946749143 /nfs/nfsdir/system.311.946749143
copying +DATAC1/SRCCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/system.311.946749143 -> /nfs/nfsdir/system.311.946749143

[oracle@tstldb301 ~]$ asmcmd cp +DATAC1/SRCCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/sysaux.312.946749143 /nfs/nfsdir/sysaux.312.946749143
copying +DATAC1/SRCCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/sysaux.312.946749143 -> /nfs/nfsdir/sysaux.312.946749143

[oracle@tstldb301 ~]$ asmcmd cp +DATAC1/SRCCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/users.314.946749227 /nfs/nfsdir/users.314.946749227
copying +DATAC1/SRCCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/users.314.946749227 -> /nfs/nfsdir/users.314.946749227
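
With more than a handful of datafiles, typing these commands out by hand gets tedious. One way to generate them straight from the data dictionary; this is just a sketch, so adjust the target directory to suit:

set pages 0 lines 200
select 'asmcmd cp ' || name || ' /nfs/nfsdir/' ||
       substr(name, instr(name, '/', -1) + 1) as cmd
  from v$datafile
 where con_id = (select con_id from v$containers where name = 'SOMEPDB');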

Possible Error

You may hit the error below if your NFS file system is not mounted with the options Oracle requires for database file access.


[oracle@tstldb301 ~]$ asmcmd cp +DATAC1/SRCCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/system.311.946749143 /nfs/nfsdir/system.311.946749143
copying +DATAC1/SRCCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/system.311.946749143 -> /nfs/nfsdir/system.311.946749143
ASMCMD-8016: copy source '+DATAC1/SRCCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/system.311.946749143' and target '/nfs/nfsdir/system.311.946749143' failed
ORA-19505: failed to identify file "/nfs/nfsdir/system.311.946749143"
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
Additional information: 3
Additional information: 2
ORA-15120: ASM file name '/nfs/nfsdir/system.311.946749143' does not begin with the ASM prefix character
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 486
ORA-06512: at line 3 (DBD ERROR: OCIStmtExecute)


Resolution

To resolve the above error, you will need to mount the NFS directory with the correct options. The expected options can be found in the ASM alert log.


Fri Jun 16 12:30:31 2017
Note: Some NFS servers require 'insecure' to be specified as part of the export.
Fri Jun 16 12:30:31 2017
WARNING:NFS file system /nfs/nfsdir mounted with incorrect options(rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.10.65,local_lock=none,addr=192.168.10.59)
WARNING:Expected NFS mount options: rsize>=32768,wsize>=32768,hard,noac/actimeo=0


Here is what this would look like in your /etc/fstab settings; this is a very minimal example.


192.168.10.59:/data_pool/nfsdir         /nfs/nfsdir           nfs rsize=32768,wsize=32768,hard,actimeo=0        0 0 
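
After editing /etc/fstab, the file system has to be remounted for the new options to take effect. Assuming nothing else is actively using the mount, something along these lines:

[oracle@tstldb301 ~]$ umount /nfs/nfsdir
[oracle@tstldb301 ~]$ mount /nfs/nfsdir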

Modify the XML file to reflect the new locations

Among other things, the XML file generated by an ALTER PLUGGABLE DATABASE UNPLUG command records where your physical datafiles are stored. Any CDB creating a pluggable database from the manifest needs this information so that it can copy, move, or use the files in place, depending on the clauses you supply with your CREATE PLUGGABLE DATABASE command.

In my situation, however, the original datafiles reside in an ASM disk group that is inaccessible to the destination cluster; hence the ASMCMD cp commands to a common NFS location. So I needed to modify the XML file to reflect the new paths, so that the database DSTCDB on cluster TSTLDB04 knows where to look for the files.

In my situation, all of the datafiles were in the normal OMF location “+DATAC1/SRCCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/” so I used a one-line sed command to modify the locations in the XML file. Since there are slashes “/” in the file paths, I opted to use a non-standard delimiter of X in the sed command. In retrospect, that may not be the best delimiter to use since file paths may contain a capital X.


sed -i 'sX+DATAC1/SRCCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/X/nfs/nfsdir/Xg' somepdb.xml
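
If you would rather not gamble on the delimiter, any character absent from both paths will do; for example, the same substitution with a pipe:

sed -i 's|+DATAC1/SRCCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/|/nfs/nfsdir/|g' somepdb.xml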

Check the contents of the XML file

All of the datafile paths are enclosed in <path></path> tags, so they are easily grep-able. We can ignore the fact that the TEMPFILE path was not modified: tempfiles are recreated in their proper place during the CREATE PLUGGABLE DATABASE command if and only if you do not use the TEMPFILE REUSE clause.


[oracle@tstldb301 nfsdir]$ cat somepdb.xml |grep "<path>"
      <path>/nfs/nfsdir/system.311.946749143</path>
      <path>/nfs/nfsdir/sysaux.312.946749143</path>
      <path>+DATAC1/SRCCDB/5207AE5FB7DF1231E0534B00A8C0E86C/TEMPFILE/temp.313.946749191</path>
      <path>/nfs/nfsdir/users.314.946749227</path>
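
Before plugging in, you can also ask the destination CDB whether it considers the PDB compatible. This step is optional and was not part of my original run, but DBMS_PDB.CHECK_PLUG_COMPATIBILITY reads the manifest and records any problems in PDB_PLUG_IN_VIOLATIONS:

set serveroutput on
declare
  compatible boolean;
begin
  compatible := dbms_pdb.check_plug_compatibility(
    pdb_descr_file => '/nfs/nfsdir/somepdb.xml',
    pdb_name       => 'SOMEPDB');
  dbms_output.put_line(case when compatible then 'YES' else 'NO - check pdb_plug_in_violations' end);
end;
/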

Plug the PDB into the DSTCDB destination CDB


[oracle@tstldb401 ~]$ . oraenv
ORACLE_SID = [dstcdb1] ? dstcdb1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@tstldb401 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jun 16 12:54:07 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> create pluggable database somepdb using '/nfs/nfsdir/somepdb.xml' MOVE;

Pluggable database created.
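
A quick note on the MOVE keyword: CREATE PLUGGABLE DATABASE ... USING also accepts COPY (the default, which would leave the source copies behind on the NFS share) and NOCOPY (which would run the PDB off the files where they sit). MOVE relocates the files into the destination's OMF location, which is what I wanted here. For comparison, the alternatives would have looked like:

create pluggable database somepdb using '/nfs/nfsdir/somepdb.xml' COPY;
create pluggable database somepdb using '/nfs/nfsdir/somepdb.xml' NOCOPY TEMPFILE REUSE;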

Possible Error

Again, if your NFS mounts are not mounted just right, Oracle will throw an error.


SQL> create pluggable database somepdb using '/nfs/nfsdir/somepdb.xml' MOVE;
create pluggable database somepdb using '/nfs/nfsdir/somepdb.xml' MOVE
*
ERROR at line 1:
ORA-19505: failed to identify file "/nfs/nfsdir/system.311.946749143"
ORA-27054: NFS file system where the file is created or resides is not mounted
with correct options
Additional information: 3
Additional information: 2

Resolution

To resolve the above error, you will need to mount the NFS directory with the correct options. As before, the expected options can be found in the database alert log.


WARNING:NFS file system /nfs/nfsdir mounted with incorrect options(rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.82,local_lock=none,addr=192.168.0.59)
WARNING:Expected NFS mount options: rsize>=32768,wsize>=32768,hard,noac/actimeo=0

Here is how this would look in the /etc/fstab file.


192.168.10.59:/data_pool/nfsdir         /nfs/nfsdir           nfs     rsize=32768,wsize=32768,hard,actimeo=0        0 0 

Confirm that the datafiles have been moved


SQL> select name, con_id from v$containers;

NAME                               CON_ID
------------------------------ ----------
CDB$ROOT                                1
PDB$SEED                                2
TESTPDB                                 3
SOMEPDB                                 4

SQL> select name from v$datafile where con_id = 4;

NAME
--------------------------------------------------------------------------------
+DATAC1/DSTCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/system.285.946817661
+DATAC1/DSTCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/sysaux.286.946817661
+DATAC1/DSTCDB/5207AE5FB7DF1231E0534B00A8C0E86C/DATAFILE/users.297.946817659

SQL> select name from v$tempfile where con_id =4;

NAME
--------------------------------------------------------------------------------
+DATAC1/DSTCDB/5207AE5FB7DF1231E0534B00A8C0E86C/TEMPFILE/temp.290.946817817


Open the PDB in the new CDB


SQL> alter pluggable database SOMEPDB open read write instances=ALL;

Pluggable database altered.

SQL> select name, open_mode, inst_id from gv$containers where name = 'SOMEPDB';

NAME                           OPEN_MODE     INST_ID
------------------------------ ---------- ----------
SOMEPDB                        READ WRITE          1
SOMEPDB                        READ WRITE          2
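
One optional follow-up, not shown in the original run: as of 12.1.0.2 you can save the open state so the PDB comes back up automatically after a CDB restart.

SQL> alter pluggable database SOMEPDB save state instances=ALL;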



NLS_DATE_FORMAT and the LogMiner

Recently, I was attempting to replicate some DML that occurred on a table with DATE columns, and I found that the SQL_REDO column was truncating the dates! Obviously, this is not good if you are attempting to replicate the DML with the same date precision.

I will illustrate the problem here; for simplicity's sake, I have reproduced the issue with some very simple representative DML and table structure.

Table Structure:

Let’s assume that we have the following table structure.

SQL> desc kkerekovski.some_table
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER
 CREATE_DATE                                        DATE
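
If you want to follow along, here is a minimal reconstruction of the table, with the DDL inferred from the desc output above:

create table kkerekovski.some_table (
  id          number,
  create_date date
);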

 

Original DML:

For this test, we are simply going to insert some random data into the table. It is important to note the value for the DATE column, 2017-04-13 13:52:00. I have also checked the SCN values from before and after the insert; we will use these SCN values later on in this post to initialize our logmining sessions.

 

SQL> set num 38
SQL> select current_scn from v$database;

                           CURRENT_SCN
--------------------------------------
                              16583813

SQL> insert into kkerekovski.some_table values (1, to_date('2017-04-13 13:52:00','YYYY-MM-DD HH24:MI:SS'));

1 row created.

SQL> commit;

Commit complete.

SQL> select current_scn from v$database;

                           CURRENT_SCN
--------------------------------------
                              16583878

 

Initiate the Logmining Session:

Here is an example of the script that I will use to initiate the logmining session. It takes two arguments, the startscn and endscn which I have already captured.

SQL> !cat logmine.sql
set echo on;
BEGIN
sys.dbms_logmnr.start_logmnr (
startscn => &1,
endscn => &2,
options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
DBMS_LOGMNR.COMMITTED_DATA_ONLY +
DBMS_LOGMNR.CONTINUOUS_MINE +
DBMS_LOGMNR.NO_ROWID_IN_STMT
);
END;
/
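
One prerequisite worth flagging if you are reproducing this: meaningful SQL_REDO depends on supplemental logging being enabled at the database level. It was already in place on my system, so it does not appear in these steps, but at minimum:

SQL> alter database add supplemental log data;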

Now we can call this script and pass the appropriate SCN values to initiate the logmining session.

SQL> @logmine.sql 16583813 16583878
SQL> BEGIN
  2  sys.dbms_logmnr.start_logmnr (
  3  startscn => &1,
  4  endscn => &2,
  5  options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
  6  DBMS_LOGMNR.COMMITTED_DATA_ONLY +
  7  DBMS_LOGMNR.CONTINUOUS_MINE +
  8  DBMS_LOGMNR.NO_ROWID_IN_STMT
  9  );
 10  END;
 11  /
old   3: startscn => &1,
new   3: startscn => 16583813,
old   4: endscn => &2,
new   4: endscn => 16583878,

PL/SQL procedure successfully completed.

 

Reproduce the issue:

Here we can clearly see that the v$logmnr_contents.sql_redo column is giving us less than ideal data. The reconstructed SQL statement uses the date format DD-MON-RR, which leaves out the hour, minute, and second precision used in the original insert statement.

SQL> col sql_redo format a200
SQL> select sql_redo from v$logmnr_contents where seg_owner = 'KKEREKOVSKI' and seg_name='SOME_TABLE';

SQL_REDO
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
insert into "KKEREKOVSKI"."SOME_TABLE"("ID","CREATE_DATE") values ('1',TO_DATE('13-APR-17', 'DD-MON-RR'));

 

Why is this happening?

The reason this problem occurs is that v$logmnr_contents attempts to format the reconstructed SQL statement according to the session's NLS parameters. In this case, we are being hampered by the default NLS_DATE_FORMAT.
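
You can check what your session is currently using via the NLS_SESSION_PARAMETERS view:

SQL> select value from nls_session_parameters where parameter = 'NLS_DATE_FORMAT';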

 

The fix:

SQL> alter session set nls_date_format='YYYY-MM-DD HH24:MI:SS';

Session altered.

SQL> select sql_redo from v$logmnr_contents where seg_owner = 'KKEREKOVSKI' and seg_name='SOME_TABLE';

SQL_REDO
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
insert into "KKEREKOVSKI"."SOME_TABLE"("ID","CREATE_DATE") values ('1',TO_DATE('2017-04-13 13:52:00', 'YYYY-MM-DD HH24:MI:SS'));

 

The above SQL*Plus output clearly shows that by altering the NLS_DATE_FORMAT session parameter, we can influence the format of any TO_DATE calls returned in V$LOGMNR_CONTENTS.SQL_REDO.

I think the big lesson here is that if you are going to use v$logmnr_contents to reconstruct SQL statements involving TO_DATE or similar functions that require a date mask, it is a good idea to set your session's NLS defaults to an appropriately precise value.
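
Finally, a bit of housekeeping not shown above: when you are done querying v$logmnr_contents, end the LogMiner session to release its resources.

SQL> exec sys.dbms_logmnr.end_logmnr;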