By Brian Fitzgerald
Introduction
Oracle Data Guard 19c with far sync setup is described here. Far sync can improve commit response time in a Maximum Availability Data Guard network. The Data Guard configuration is EC2 across two AWS Cloud regions. Database storage is ASM. Far sync creation is done using RMAN. The physical standby and far sync are implemented in a single configuration step. Some observations on network latency and switchover timing are shown.
License
Data Guard is a feature of Oracle Database Enterprise Edition and does not require separate licensing. Far sync, however, requires an Active Data Guard license.
By using Amazon Elastic Compute Cloud (EC2), you can control your license costs by configuring only the CPUs that you need.
Environment overview
A system overview is described in this table:
| Description | Value |
| --- | --- |
| Cloud | AWS |
| Image ID | ami-2051294a |
| Red Hat version | 7.2 |
| EC2 InstanceType | m3.medium |
| Memory | 3.75 GB |
| CPU | 1 |
| Swap | 2 GB |
| Grid software owner | grid |
| Grid Infrastructure Version | 19.3.0 |
| Database Storage | ASM |
| Oracle software owner | oracle |
| Oracle Database Version | 19.3.0 |
| Oracle Instance Type | Restart |
The AWS instance type was initially m3.large (7.5 GB, 2 CPU), and was downsized after the grid and oracle home installations were complete. For additional information on the grid install, please refer to grid 19c install with ASM filter driver. For the network description, please refer to Data Guard network in AWS. The breakdown by region, availability zone, host, and role is:
| description | primary | far sync | standby |
| --- | --- | --- | --- |
| Region | N. Virginia | N. Virginia | N. California |
| availability zone | us-east-1c | us-east-1d | us-west-1b |
| ip address | 172.31.86.22 | 172.31.28.23 | 172.32.10.34 |
| hostname -s | ip-172-31-86-22 | ip-172-31-28-23 | ip-172-32-10-34 |
| db_unique_name | NY | FS | SF |
Network latency
We can measure the network latency from the primary to the far sync and to the standby. Start qperf server on the far sync:
[ec2-user@ip-172-31-28-23 ~]$ qperf
Start qperf server on the standby:
[ec2-user@ip-172-32-10-34 ~]$ qperf
Measure bandwidth and latency between two N. Virginia availability zones:
[ec2-user@ip-172-31-86-22 ~]$ qperf 172.31.28.23 tcp_bw tcp_lat
tcp_bw:
bw = 92.6 MB/sec
tcp_lat:
latency = 254 us
Measure bandwidth and latency between regions N. Virginia and N. California:
[ec2-user@ip-172-31-86-22 ~]$ qperf 172.32.10.34 tcp_bw tcp_lat
tcp_bw:
bw = 18.4 MB/sec
tcp_lat:
latency = 33.3 ms
Network bandwidth is approximately 5x higher, and latency more than 100x lower, between in-region availability zones than between regions. This difference motivates the far sync: earlier testing showed that far sync can improve performance in cases where the network latency to the standby is higher than the latency to the far sync.
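The arithmetic behind this motivation can be sketched quickly. This is a back-of-the-envelope model, not a measurement: it assumes each SYNC commit waits for one network round trip to the synchronized destination, and that qperf's tcp_lat figure is one-way latency.

```python
# Back-of-the-envelope commit overhead under SYNC redo transport.
# Assumption: a commit waits for at least one network round trip to the
# SYNC destination; qperf tcp_lat reports one-way latency.
FAR_SYNC_ONE_WAY_MS = 0.254   # primary -> far sync, cross-AZ (qperf above)
STANDBY_ONE_WAY_MS = 33.3     # primary -> standby, cross-region (qperf above)

def sync_commit_overhead_ms(one_way_ms: float) -> float:
    """Minimum commit-time overhead: one round trip to the SYNC destination."""
    return 2.0 * one_way_ms

far_sync_cost = sync_commit_overhead_ms(FAR_SYNC_ONE_WAY_MS)  # ~0.5 ms
standby_cost = sync_commit_overhead_ms(STANDBY_ONE_WAY_MS)    # ~66.6 ms
print(f"SYNC to far sync adds ~{far_sync_cost:.2f} ms per commit")
print(f"SYNC to standby adds ~{standby_cost:.1f} ms per commit")
```

On these numbers, synchronous transport to the in-region far sync costs well under a millisecond per commit, while synchronous transport across regions would cost on the order of 67 ms.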
Database configuration
The initial primary instance configuration was:
*.archive_lag_target=900
*.audit_file_dest='/u01/app/oracle/admin/NY/adump'
*.audit_trail='db'
*.compatible='19.3.0'
*.control_files='+RECO01/NY/CONTROLFILE/current.486.1018151799','+DATA01/NY/CONTROLFILE/current.273.1018151799'
*.db_block_size=8192
*.db_create_file_dest='+DATA01'
*.db_create_online_log_dest_1='+RECO01'
*.db_create_online_log_dest_2='+DATA01'
*.db_name='ORCL'
*.db_recovery_file_dest='+RECO01'
*.db_recovery_file_dest_size=4000m
*.db_unique_name='NY'
*.dg_broker_config_file1='+DATA01/NY/dr1orcl.dat'
*.dg_broker_config_file2='+RECO01/NY/dr2orcl.dat'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=ORCLXDB)'
*.local_listener='LISTENER_NY'
*.log_archive_format='%t_%s_%r.dbf'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=300
*.pga_aggregate_target=360m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1080m
*.standby_file_management='AUTO'
*.undo_tablespace='UNDOTBS1'
If not already done, make these changes:
SQL> alter system set standby_file_management=AUTO;
System altered.
SQL> alter system set dg_broker_config_file1='+DATA01/NY/dr1orcl.dat';
System altered.
SQL> alter system set dg_broker_config_file2='+RECO01/NY/dr2orcl.dat';
System altered.
Optional settings
- compatible. Must be set to the same value on the primary, on the far sync, and on the standby.
- db_create_online_log_dest_n. Points logfile members to specific disk groups. Simplifies alter database add standby logfile syntax.
- archive_lag_target. Time-boxes each archive log. Adjust to manage control file contention.
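If applied manually rather than in the spfile, these optional settings might look like the following sketch; the values mirror the primary configuration shown above, so adjust them for your environment.

```sql
-- Values mirror the primary spfile above; adjust for your environment.
alter system set archive_lag_target=900;
alter system set db_create_online_log_dest_1='+RECO01';
alter system set db_create_online_log_dest_2='+DATA01';
-- compatible is not dynamically modifiable; set it in the spfile and restart:
alter system set compatible='19.3.0' scope=spfile;
```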
Alter database
Make sure these database-level settings are in place:
SQL> alter database force logging;
Database altered.
Make these changes with the database mounted:
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area 1140849904 bytes
Fixed Size 8895728 bytes
Variable Size 301989888 bytes
Database Buffers 822083584 bytes
Redo Buffers 7880704 bytes
Database mounted.
SQL> alter database archivelog;
Database altered.
SQL> alter database flashback on;
Database altered.
SQL> alter database open;
Database altered.
Standby logs
Identify the online redo log size. Check that all online redo logs are the same size. Check that this query returns exactly one row:
SQL> select count(*)numlogs, bytes/1024/1024 mb from v$log group by bytes;
NUMLOGS MB
---------- ----------
3 200
If all online logs are not the same size, correct that condition before proceeding.
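One way to correct a size mismatch is to drop and recreate the odd-sized group. This is a sketch; the group number below is hypothetical, and a group must be INACTIVE and archived before it can be dropped.

```sql
-- Hypothetical example: suppose group 4 were undersized.
-- Cycle the logs (alter system switch logfile) until group 4 shows
-- INACTIVE in v$log, then recreate it at the common size.
alter database drop logfile group 4;
alter database add logfile group 4 size 200m;
```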
If standby redo logs exist, check that they are all the same size, and the same size as the online redo logs. Check that this query returns exactly one row:
SQL> select count(*)numlogs, bytes/1024/1024 mb from v$standby_log group by bytes;
NUMLOGS MB
---------- ----------
4 200
Correct discrepant conditions before proceeding.
Create standby logs
Create standby logs, if needed. You can set:
SQL> alter system set db_create_online_log_dest_1 = '+DATA01';
System altered.
SQL> alter system set db_create_online_log_dest_2 = '+RECO01';
System altered.
And then run, for example:
SQL> alter database add standby logfile size 200m;
as many times as needed to get the desired number of standby logs. A common recommendation is one more standby log group than the number of online log groups. If a high apply backlog is expected, increase this number further.
Static listeners
On all hosts, primary, far sync, and standby, in the grid account, in $ORACLE_HOME/network/admin, edit listener.ora.
Primary:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(ORACLE_HOME = /u01/app/oracle/product/19.3.0/dbhome_1)
(SID_NAME = NY)
)
)
Far sync:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(ORACLE_HOME = /u01/app/oracle/product/19.3.0/dbhome_1)
(SID_NAME = FS)
)
)
Standby:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(ORACLE_HOME = /u01/app/oracle/product/19.3.0/dbhome_1)
(SID_NAME = SF)
)
)
Reload the listener. For example:
[grid@ip-172-31-86-22 ~]$ lsnrctl reload
LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 02-SEP-2019 12:28:09
Copyright (c) 1991, 2019, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ip-172-31-86-22.ec2.internal)(PORT=1521)))
The command completed successfully
Startup the far sync
On the far sync, configure the instance and start it nomount.
Far sync audit directory
At the far sync:
[oracle@ip-172-31-28-23 ~]$ mkdir -p /u01/app/oracle/admin/FS/adump
Far sync oratab and environment
[oracle@ip-172-31-28-23 ~]$ echo FS:/u01/app/oracle/product/19.3.0/dbhome_1:N >> /etc/oratab
[oracle@ip-172-31-28-23 ~]$ . oraenv
ORACLE_SID = [FS] ? FS
The Oracle base remains unchanged with value /u01/app/oracle
Far sync orapwd
Create orapwFS on the far sync:
[oracle@ip-172-31-28-23 ~]$ alias oh
alias oh='cd $ORACLE_HOME'
[oracle@ip-172-31-28-23 ~]$ oh
[oracle@ip-172-31-28-23 dbhome_1]$ cd dbs
[oracle@ip-172-31-28-23 dbs]$ orapwd file=orapwFS entries=10 password=zystm.22
Far sync startup
Create a temporary, minimal pfile on the far sync. Set the compatible setting to match the primary.
[oracle@ip-172-31-28-23 ops]$ cat > /tmp/initFS.ora
db_name = ORCL
compatible = '19.3.0'
^D
[oracle@ip-172-31-28-23 dbs]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Sat Aug 31 22:01:58 2019
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile='/tmp/initFS.ora'
ORACLE instance started.
Total System Global Area 243268216 bytes
Fixed Size 8895096 bytes
Variable Size 180355072 bytes
Database Buffers 50331648 bytes
Redo Buffers 3686400 bytes
SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
Startup the standby
On the standby, configure the instance and start it nomount.
Standby audit directory
At the standby:
[oracle@ip-172-32-10-34 ~]$ mkdir -p /u01/app/oracle/admin/SF/adump
Standby oratab and environment
[oracle@ip-172-32-10-34 ~]$ echo SF:/u01/app/oracle/product/19.3.0/dbhome_1:N >> /etc/oratab
[oracle@ip-172-32-10-34 ~]$ . oraenv
ORACLE_SID = [SF] ? SF
The Oracle base remains unchanged with value /u01/app/oracle
Standby orapwd
Create orapwSF on the standby:
[oracle@ip-172-32-10-34 ~]$ oh
[oracle@ip-172-32-10-34 dbhome_1]$ cd dbs
[oracle@ip-172-32-10-34 dbs]$ orapwd file=orapwSF entries=10 password=zystm.22
Standby startup
Create a temporary, minimal pfile on the standby. Set the compatible setting to match the primary.
[oracle@ip-172-32-10-34 ops]$ cat > /tmp/initSF.ora
db_name = ORCL
compatible = '19.3.0'
^D
[oracle@ip-172-32-10-34 dbs]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Sat Aug 31 22:01:58 2019
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile='/tmp/initSF.ora'
ORACLE instance started.
Total System Global Area 243268216 bytes
Fixed Size 8895096 bytes
Variable Size 180355072 bytes
Database Buffers 50331648 bytes
Redo Buffers 3686400 bytes
SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
tnsnames.ora
On all hosts, in the oracle account, in $ORACLE_HOME/network/admin/tnsnames.ora, add these entries:
NY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.31.86.22)(PORT = 1521))
(CONNECT_DATA =
(SID = NY)
)
)
FS =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.31.28.23)(PORT = 1521))
(CONNECT_DATA =
(SID = FS)
)
)
SF =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.32.10.34)(PORT = 1521))
(CONNECT_DATA =
(SID = SF)
)
)
Connectivity checklist
Check that you have configured all of these:
- VPC enable DNS hostnames
- DNS resolution across the peering connection
- Route tables across the peering connection
- Security groups
- Static listener
- orapwd
- tnsnames.ora
- ORACLE_HOME environment variable
Test connectivity
From the primary, test connectivity to all three instances:
sqlplus sys/zystm.22@NY as sysdba
sqlplus sys/zystm.22@FS as sysdba
sqlplus sys/zystm.22@SF as sysdba
From the far sync and the standby, repeat these checks.
Duplicate primary to far sync
Now we are going to kick off an RMAN script. RMAN will transfer the spfile, the password file, and the control file.
[oracle@ip-172-31-86-22 ops]$ cat dup.db.farsync.rcv
DUPLICATE TARGET DATABASE
FOR FARSYNC
FROM ACTIVE DATABASE
SPFILE
SET db_unique_name='FS'
SET dg_broker_config_file1='+DATA01/FS/dr1orcl.dat'
SET dg_broker_config_file2='+RECO01/FS/dr2orcl.dat'
SET audit_file_dest='/u01/app/oracle/admin/FS/adump'
NOFILENAMECHECK;
[oracle@ip-172-31-86-22 ops]$ rman target sys/zystm.22@NY auxiliary sys/zystm.22@FS
Recovery Manager: Release 19.0.0.0.0 - Production on Thu Sep 5 04:53:02 2019
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORCL (DBID=1545932534)
connected to auxiliary database: ORCL (not mounted)
RMAN> @ dup.db.farsync.rcv
RMAN> DUPLICATE TARGET DATABASE
2> FOR FARSYNC
3> FROM ACTIVE DATABASE
4> SPFILE
5> SET db_unique_name='FS'
6> SET dg_broker_config_file1='+DATA01/FS/dr1orcl.dat'
7> SET dg_broker_config_file2='+RECO01/FS/dr2orcl.dat'
8> SET audit_file_dest='/u01/app/oracle/admin/FS/adump'
9> NOFILENAMECHECK;
Starting Duplicate Db at 05-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=39 device type=DISK
contents of Memory Script:
{
backup as copy reuse
passwordfile auxiliary format '/u01/app/oracle/product/19.3.0/dbhome_1/dbs/orapwORCL' ;
restore clone from service 'NY' spfile to
'/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora';
sql clone "alter system set spfile= ''/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora''";
}
executing Memory Script
Starting backup at 05-SEP-19
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=88 device type=DISK
Finished backup at 05-SEP-19
Starting restore at 05-SEP-19
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service NY
channel ORA_AUX_DISK_1: restoring SPFILE
output file name=/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 05-SEP-19
sql statement: alter system set spfile= ''/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora''
contents of Memory Script:
{
sql clone "alter system set db_unique_name =
''FS'' comment=
'''' scope=spfile";
sql clone "alter system set dg_broker_config_file1 =
''+DATA01/FS/dr1orcl.dat'' comment=
'''' scope=spfile";
sql clone "alter system set dg_broker_config_file2 =
''+RECO01/FS/dr2orcl.dat'' comment=
'''' scope=spfile";
sql clone "alter system set audit_file_dest =
''/u01/app/oracle/admin/FS/adump'' comment=
'''' scope=spfile";
shutdown clone immediate;
startup clone nomount;
}
executing Memory Script
sql statement: alter system set db_unique_name = ''FS'' comment= '''' scope=spfile
sql statement: alter system set dg_broker_config_file1 = ''+DATA01/FS/dr1orcl.dat'' comment= '''' scope=spfile
sql statement: alter system set dg_broker_config_file2 = ''+RECO01/FS/dr2orcl.dat'' comment= '''' scope=spfile
sql statement: alter system set audit_file_dest = ''/u01/app/oracle/admin/FS/adump'' comment= '''' scope=spfile
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area 1140849904 bytes
Fixed Size 8895728 bytes
Variable Size 301989888 bytes
Database Buffers 822083584 bytes
Redo Buffers 7880704 bytes
duplicating Online logs to Oracle Managed File (OMF) location
duplicating Datafiles to Oracle Managed File (OMF) location
contents of Memory Script:
{
sql clone "alter system set control_files =
''+RECO01/FS/CONTROLFILE/current.257.1018155321'', ''+DATA01/FS/CONTROLFILE/current.258.1018155321'' comment=
''Set by RMAN'' scope=spfile";
restore clone from service 'NY' farsync controlfile;
}
executing Memory Script
sql statement: alter system set control_files = ''+RECO01/FS/CONTROLFILE/current.257.1018155321'', ''+DATA01/FS/CONTROLFILE/current.258.1018155321'' comment= ''Set by RMAN'' scope=spfile
Starting restore at 05-SEP-19
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=46 device type=DISK
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service NY
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=+RECO01/FS/CONTROLFILE/current.383.1018155325
output file name=+DATA01/FS/CONTROLFILE/current.265.1018155325
Finished restore at 05-SEP-19
contents of Memory Script:
{
sql clone 'alter database mount';
}
executing Memory Script
sql statement: alter database mount
contents of Memory Script:
{
sql 'alter system archive log current';
}
executing Memory Script
sql statement: alter system archive log current
Finished Duplicate Db at 05-SEP-19
RMAN> **end-of-file**
Note that there are no online logs, and that the standby logs are the same count and size as the primary.
SQL> select count(*)numlogs, bytes/1024/1024 mb from v$log group by bytes;
no rows selected
SQL> select count(*)numlogs, bytes/1024/1024 mb from v$standby_log group by bytes;
NUMLOGS MB
---------- ----------
4 200
Notice these points about the operation:
- All file transfers were done with RMAN. No ssh connection is needed.
- There is no ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE step. RMAN handled the control file by backing it up at the primary and transferring it to the far sync.
- Although the primary spfile is in ASM, the far sync spfile ends up on the file system in directory $ORACLE_HOME/dbs.
- The orapwORCL that you created at the far sync got overwritten by a backup of the primary orapwORCL.
Duplicate primary to standby
Allocate an appropriate number of channels to reduce the time needed to duplicate the database.
[oracle@ip-172-31-86-22 ops]$ cat dup.db.standby.rcv
run {
allocate channel ch01 device type disk;
allocate channel ch02 device type disk;
allocate auxiliary channel aux01 device type disk;
allocate auxiliary channel aux02 device type disk;
DUPLICATE TARGET DATABASE
FOR STANDBY
FROM ACTIVE DATABASE
DORECOVER
SPFILE
SET db_unique_name='SF'
SET dg_broker_config_file1='+DATA01/SF/dr1orcl.dat'
SET dg_broker_config_file2='+RECO01/SF/dr2orcl.dat'
SET audit_file_dest='/u01/app/oracle/admin/SF/adump'
NOFILENAMECHECK;
}
[oracle@ip-172-31-86-22 ops]$ rman target sys/zystm.22@NY auxiliary sys/zystm.22@SF
Recovery Manager: Release 19.0.0.0.0 - Production on Thu Sep 5 05:04:52 2019
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORCL (DBID=1545932534)
connected to auxiliary database: ORCL (not mounted)
RMAN> @ dup.db.standby.rcv
RMAN> run {
2>
3> allocate channel ch01 device type disk;
4> allocate channel ch02 device type disk;
5> allocate auxiliary channel aux01 device type disk;
6> allocate auxiliary channel aux02 device type disk;
7>
8> DUPLICATE TARGET DATABASE
9> FOR STANDBY
10> FROM ACTIVE DATABASE
11> DORECOVER
12> SPFILE
13> SET db_unique_name='SF'
14> SET dg_broker_config_file1='+DATA01/SF/dr1orcl.dat'
15> SET dg_broker_config_file2='+RECO01/SF/dr2orcl.dat'
16> SET audit_file_dest='/u01/app/oracle/admin/SF/adump'
17> NOFILENAMECHECK;
18>
19> }
using target database control file instead of recovery catalog
allocated channel: ch01
channel ch01: SID=84 device type=DISK
allocated channel: ch02
channel ch02: SID=85 device type=DISK
allocated channel: aux01
channel aux01: SID=40 device type=DISK
allocated channel: aux02
channel aux02: SID=41 device type=DISK
Starting Duplicate Db at 05-SEP-19
current log archived
contents of Memory Script:
{
backup as copy reuse
passwordfile auxiliary format '/u01/app/oracle/product/19.3.0/dbhome_1/dbs/orapwORCL' ;
restore clone from service 'NY' spfile to
'/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora';
sql clone "alter system set spfile= ''/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora''";
}
executing Memory Script
Starting backup at 05-SEP-19
Finished backup at 05-SEP-19
Starting restore at 05-SEP-19
channel aux01: starting datafile backup set restore
channel aux01: using network backup set from service NY
channel aux01: restoring SPFILE
output file name=/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora
channel aux01: restore complete, elapsed time: 00:00:02
Finished restore at 05-SEP-19
sql statement: alter system set spfile= ''/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora''
contents of Memory Script:
{
sql clone "alter system set db_unique_name =
''SF'' comment=
'''' scope=spfile";
sql clone "alter system set dg_broker_config_file1 =
''+DATA01/SF/dr1orcl.dat'' comment=
'''' scope=spfile";
sql clone "alter system set dg_broker_config_file2 =
''+RECO01/SF/dr2orcl.dat'' comment=
'''' scope=spfile";
sql clone "alter system set audit_file_dest =
''/u01/app/oracle/admin/SF/adump'' comment=
'''' scope=spfile";
shutdown clone immediate;
startup clone nomount;
}
executing Memory Script
sql statement: alter system set db_unique_name = ''SF'' comment= '''' scope=spfile
sql statement: alter system set dg_broker_config_file1 = ''+DATA01/SF/dr1orcl.dat'' comment= '''' scope=spfile
sql statement: alter system set dg_broker_config_file2 = ''+RECO01/SF/dr2orcl.dat'' comment= '''' scope=spfile
sql statement: alter system set audit_file_dest = ''/u01/app/oracle/admin/SF/adump'' comment= '''' scope=spfile
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area 1140849904 bytes
Fixed Size 8895728 bytes
Variable Size 301989888 bytes
Database Buffers 822083584 bytes
Redo Buffers 7880704 bytes
allocated channel: aux01
channel aux01: SID=44 device type=DISK
allocated channel: aux02
channel aux02: SID=45 device type=DISK
duplicating Online logs to Oracle Managed File (OMF) location
duplicating Datafiles to Oracle Managed File (OMF) location
contents of Memory Script:
{
sql clone "alter system set control_files =
''+RECO01/SF/CONTROLFILE/current.257.1018156101'', ''+DATA01/SF/CONTROLFILE/current.258.1018156101'' comment=
''Set by RMAN'' scope=spfile";
restore clone from service 'NY' standby controlfile;
}
executing Memory Script
sql statement: alter system set control_files = ''+RECO01/SF/CONTROLFILE/current.257.1018156101'', ''+DATA01/SF/CONTROLFILE/current.258.1018156101'' comment= ''Set by RMAN'' scope=spfile
Starting restore at 05-SEP-19
channel aux01: starting datafile backup set restore
channel aux01: using network backup set from service NY
channel aux01: restoring control file
channel aux01: restore complete, elapsed time: 00:00:04
output file name=+RECO01/SF/CONTROLFILE/current.262.1018156111
output file name=+DATA01/SF/CONTROLFILE/current.267.1018156111
Finished restore at 05-SEP-19
contents of Memory Script:
{
sql clone 'alter database mount standby database';
}
executing Memory Script
sql statement: alter database mount standby database
contents of Memory Script:
{
set newname for clone tempfile 1 to new;
switch clone tempfile all;
set newname for clone datafile 1 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 4 to new;
set newname for clone datafile 7 to new;
restore
from nonsparse from service
'NY' clone database
;
sql 'alter system archive log current';
}
executing Memory Script
executing command: SET NEWNAME
renamed tempfile 1 to +DATA01 in control file
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting restore at 05-SEP-19
channel aux01: starting datafile backup set restore
channel aux01: using network backup set from service NY
channel aux01: specifying datafile(s) to restore from backup set
channel aux01: restoring datafile 00001 to +DATA01
channel aux02: starting datafile backup set restore
channel aux02: using network backup set from service NY
channel aux02: specifying datafile(s) to restore from backup set
channel aux02: restoring datafile 00003 to +DATA01
channel aux02: restore complete, elapsed time: 00:00:38
channel aux02: starting datafile backup set restore
channel aux02: using network backup set from service NY
channel aux02: specifying datafile(s) to restore from backup set
channel aux02: restoring datafile 00004 to +DATA01
channel aux01: restore complete, elapsed time: 00:00:44
channel aux01: starting datafile backup set restore
channel aux01: using network backup set from service NY
channel aux01: specifying datafile(s) to restore from backup set
channel aux01: restoring datafile 00007 to +DATA01
channel aux01: restore complete, elapsed time: 00:00:05
channel aux02: restore complete, elapsed time: 00:00:12
Finished restore at 05-SEP-19
sql statement: alter system archive log current
current log archived
contents of Memory Script:
{
restore clone force from service 'NY'
archivelog from scn 2246774;
switch clone datafile all;
}
executing Memory Script
Starting restore at 05-SEP-19
channel aux01: starting archived log restore to default destination
channel aux01: using network backup set from service NY
channel aux01: restoring archived log
archived log thread=1 sequence=9
channel aux02: starting archived log restore to default destination
channel aux02: using network backup set from service NY
channel aux02: restoring archived log
archived log thread=1 sequence=10
channel aux01: restore complete, elapsed time: 00:00:02
channel aux02: restore complete, elapsed time: 00:00:02
Finished restore at 05-SEP-19
datafile 1 switched to datafile copy
input datafile copy RECID=5 STAMP=1018156184 file name=+DATA01/SF/DATAFILE/system.266.1018156127
datafile 3 switched to datafile copy
input datafile copy RECID=6 STAMP=1018156184 file name=+DATA01/SF/DATAFILE/sysaux.265.1018156129
datafile 4 switched to datafile copy
input datafile copy RECID=7 STAMP=1018156184 file name=+DATA01/SF/DATAFILE/undotbs1.264.1018156167
datafile 7 switched to datafile copy
input datafile copy RECID=8 STAMP=1018156184 file name=+DATA01/SF/DATAFILE/users.269.1018156173
contents of Memory Script:
{
set until scn 2247713;
recover
standby
clone database
delete archivelog
;
}
executing Memory Script
executing command: SET until clause
Starting recover at 05-SEP-19
starting media recovery
archived log for thread 1 with sequence 9 is already on disk as file +RECO01/SF/ARCHIVELOG/2019_09_05/thread_1_seq_9.261.1018156181
archived log for thread 1 with sequence 10 is already on disk as file +RECO01/SF/ARCHIVELOG/2019_09_05/thread_1_seq_10.259.1018156183
archived log file name=+RECO01/SF/ARCHIVELOG/2019_09_05/thread_1_seq_9.261.1018156181 thread=1 sequence=9
archived log file name=+RECO01/SF/ARCHIVELOG/2019_09_05/thread_1_seq_10.259.1018156183 thread=1 sequence=10
media recovery complete, elapsed time: 00:00:01
Finished recover at 05-SEP-19
contents of Memory Script:
{
delete clone force archivelog all;
}
executing Memory Script
deleted archived log
archived log file name=+RECO01/SF/ARCHIVELOG/2019_09_05/thread_1_seq_9.261.1018156181 RECID=1 STAMP=1018156181
Deleted 1 objects
deleted archived log
archived log file name=+RECO01/SF/ARCHIVELOG/2019_09_05/thread_1_seq_10.259.1018156183 RECID=2 STAMP=1018156182
Deleted 1 objects
Finished Duplicate Db at 05-SEP-19
released channel: ch01
released channel: ch02
released channel: aux01
released channel: aux02
RMAN> **end-of-file**
RMAN>
Recovery Manager complete.
Notice that the online logs and the standby logs are the same as the primary:
SQL> select count(*)numlogs, bytes/1024/1024 mb from v$log group by bytes;
NUMLOGS MB
---------- ----------
3 200
SQL> select count(*)numlogs, bytes/1024/1024 mb from v$standby_log group by bytes;
NUMLOGS MB
---------- ----------
4 200
Configure restart
All new systems should be mounted at this stage. Register your Oracle Restart instances. (At the primary, this step might already be done.)
[oracle@ip-172-31-86-22 ~]$ srvctl add database -database NY -role PRIMARY -stopoption IMMEDIATE -instance ORCL -oraclehome /u01/app/oracle/product/19.3.0/dbhome_1 -spfile +DATA01/ORCL/PARAMETERFILE/spfile.266.1017440879 -diskgroup DATA01,RECO01
At the far sync:
[oracle@ip-172-31-28-23 ~]$ srvctl add database -database FS -role physical_standby -startoption MOUNT -stopoption ABORT -instance FS -oraclehome /u01/app/oracle/product/19.3.0/dbhome_1 -spfile /u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileFS.ora -diskgroup DATA01,RECO01
To activate the change, I found it necessary to issue srvctl start, which does not seem right; “srvctl enable database” should do it. With Oracle Restart, “srvctl enable instance” is not available. Unless you activate the instance, it will not start automatically upon host reboot.
[oracle@ip-172-31-28-23 ~]$ srvctl start database -database FS
At the standby:
[oracle@ip-172-32-10-34 ~]$ srvctl add database -database SF -role physical_standby -startoption MOUNT -stopoption ABORT -instance SF -oraclehome /u01/app/oracle/product/19.3.0/dbhome_1 -spfile /u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileSF.ora -diskgroup DATA01,RECO01
[oracle@ip-172-32-10-34 ~]$ srvctl start database -database SF
Reboot all hosts and check that the database instances start automatically, in the proper startup mode.
Validate static listeners
Validating the static listeners is critical. For each database or far sync (NY, FS, SF):
- Connect over the network with sqlplus
- Shutdown (shutdown abort for standbys and far syncs)
- Startup (startup mount for standbys and far syncs)
Example:
[oracle@ip-172-31-28-23 ops]$ sqlplus sys/zystm.22@FS as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Sep 13 11:36:37 2019
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> shutdown abort
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area 1140849904 bytes
Fixed Size 8895728 bytes
Variable Size 301989888 bytes
Database Buffers 822083584 bytes
Redo Buffers 7880704 bytes
Database mounted.
This is an example of a failed static listener check:
[oracle@ip-172-31-28-23 ops]$ sqlplus sys/zystm.22@FS as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Sep 13 11:36:37 2019
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> shutdown abort
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area 1140849904 bytes
Fixed Size 8895728 bytes
Variable Size 301989888 bytes
Database Buffers 822083584 bytes
Redo Buffers 7880704 bytes
Database mounted.
SQL> shutdown abort
ORACLE instance shut down.
ERROR:
ORA-12505: TNS:listener does not currently know of SID given in connect descriptor
SQL> startup mount
SP2-0640: Not connected
Resolve all failures before proceeding.
Broker start
At this stage, the primary database is open and the far sync and standby instances are mounted. At the primary, far sync, and standby, start the Data Guard broker:
SQL> alter system set dg_broker_start = true;
System altered.
Create the broker configuration
Connect to the Data Guard broker:
[oracle@ip-172-31-86-22 broker]$ dgmgrl sys/zystm.22@NY
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Mon Sep 2 13:59:25 2019
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected to "NY"
Connected as SYSDBA.
Create the Data Guard broker configuration. One way to do it is to create and test the standby first, and add the far sync later. However, in this example, we add the far sync and the standby in one fell swoop.
While creating new objects, if you want to preserve name case, you should quote your identifiers.
DGMGRL> create configuration 'ORCL_CONFIG' as primary database is 'NY' connect identifier is 'NY';
Configuration "ORCL_CONFIG" created with primary database "NY"
DGMGRL> add far_sync 'FS' as connect identifier is 'FS';
far sync instance "FS" added
DGMGRL> add database 'SF' as connect identifier is 'SF' maintained as physical;
Database "SF" added
Add the redo routes:
DGMGRL> edit database NY set property RedoRoutes = '(LOCAL : FS SYNC)';
Property "redoroutes" updated
DGMGRL> edit database SF set property RedoRoutes = '(LOCAL : FS SYNC)';
Property "redoroutes" updated
DGMGRL> edit far_sync FS set property RedoRoutes = '(NY : SF ASYNC)(SF : NY ASYNC)';
Property "redoroutes" updated
Enable the configuration
DGMGRL> enable configuration
Enabled.
A point of interest is the fal_server configuration symmetry.
| parameter | primary | far sync | standby |
| --- | --- | --- | --- |
| fal_server | ‘FS’,’SF’* | ‘NY’,’SF’ | ‘FS’,’NY’ |
* after switchover
The Data Guard broker sets fal_server at the far sync and standby when you enable the configuration. If you perform a switchover, the broker sets fal_server at the new standby and clears fal_server at the old standby.
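A quick way to confirm this from SQL*Plus on the far sync and the standby (expect values matching the table above):

```sql
show parameter fal_server
```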
Switchover test
Test switchover to SF. The output is:
DGMGRL> switchover to SF
Performing switchover NOW, please wait...
Operation requires a connection to database "SF"
Connecting ...
Connected to "SF"
Connected as SYSDBA.
New primary database "SF" is opening...
Oracle Clusterware is restarting database "NY" ...
Connected to an idle instance.
Connected to an idle instance.
Connected to an idle instance.
Connected to an idle instance.
Connected to "NY"
Connected to "NY"
Switchover succeeded, new primary is "sf"
Test switchover to NY. The output is now:
DGMGRL> switchover to NY
Performing switchover NOW, please wait...
New primary database "NY" is opening...
Oracle Clusterware is restarting database "SF" ...
Connected to an idle instance.
Connected to an idle instance.
Connected to "SF"
Connected to "SF"
Switchover succeeded, new primary is "ny"
Setting the protection mode
Maximum Performance
In Maximum Performance protection mode:
Transactions commit as soon as all redo data generated by those transactions has been written to the online redo log.
Maximum Availability
In Maximum Availability mode:
Under normal operations, transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and, depending on configuration, one of the following is true:
- redo has been received at the standby, I/O to the standby redo log has been initiated, and an acknowledgement has been sent back to the primary
- redo has been received and written to the standby redo log at the standby, and an acknowledgement has been sent back to the primary
If the primary does not receive acknowledgement from at least one synchronized standby, then it operates as if it were in maximum performance mode to preserve primary database availability until it is again able to write its redo stream to a synchronized standby database.
With far sync, replace "standby" in the preceding description with "far sync": transactions do not commit until redo has been written to the far sync's standby redo log. Because the far sync is in the same region as the primary, commit performance of a Maximum Availability system is expected to be better with a far sync.
Set protection mode to Maximum Availability.
DGMGRL> edit configuration set protection mode as MaxAvailability;
Succeeded.
Wait a few minutes and check:
DGMGRL> show configuration
Configuration - ORCL_CONFIG
Protection Mode: MaxAvailability
Members:
NY - Primary database
FS - Far sync instance
SF - Physical standby database
Fast-Start Failover: Disabled
Configuration Status:
SUCCESS (status updated 51 seconds ago)
Test switchover and switch back:
DGMGRL> switchover to SF
Performing switchover NOW, please wait...
Operation requires a connection to database "SF"
Connecting ...
Connected to "SF"
Connected as SYSDBA.
New primary database "SF" is opening...
Oracle Clusterware is restarting database "NY" ...
Connected to an idle instance.
Connected to an idle instance.
Connected to "NY"
Connected to "NY"
Switchover succeeded, new primary is "sf"
DGMGRL> switchover to NY
Performing switchover NOW, please wait...
Operation requires a connection to database "NY"
Connecting ...
Connected to "NY"
Connected as SYSDBA.
New primary database "NY" is opening...
Oracle Clusterware is restarting database "SF" ...
Connected to an idle instance.
Connected to an idle instance.
Connected to an idle instance.
Connected to "SF"
Connected to "SF"
Switchover succeeded, new primary is "ny"
Notice that at the far sync, one or more standby logs are assigned:
SQL> select group#, bytes/1024/1024 mb, thread#, sequence#, status from v$standby_log;
GROUP# MB THREAD# SEQUENCE# STATUS
---------- ---------- ---------- ---------- ----------
1 200 1 572 ACTIVE
2 200 1 0 UNASSIGNED
3 200 1 0 UNASSIGNED
4 200 0 0 UNASSIGNED
and at the physical standby, one or more standby logs are assigned:
SQL> select group#, bytes/1024/1024 mb, thread#, sequence#, status from v$standby_log;
GROUP# MB THREAD# SEQUENCE# STATUS
---------- ---------- ---------- ---------- ----------
4 200 1 0 UNASSIGNED
5 200 1 572 ACTIVE
6 200 0 0 UNASSIGNED
7 200 0 0 UNASSIGNED
Disabling far sync
You can disable far sync. Point the database redo routes directly at each other, clear the far sync redo routes, and disable the far sync:
DGMGRL> edit database NY set property RedoRoutes = '(LOCAL : SF SYNC)';
Property "redoroutes" updated
DGMGRL> edit database SF set property RedoRoutes = '(LOCAL : NY SYNC)';
Property "redoroutes" updated
DGMGRL> edit far_sync FS set property RedoRoutes = '';
Property "redoroutes" updated
DGMGRL> disable far_sync FS
Disabled.
The configuration display looks like this:
DGMGRL> show configuration
Configuration - ORCL_CONFIG
Protection Mode: MaxAvailability
Members:
NY - Primary database
SF - Physical standby database
Members Not Receiving Redo:
FS - Far sync instance (disabled)
ORA-16749: The member was disabled manually.
Fast-Start Failover: Disabled
Configuration Status:
SUCCESS (status updated 24 seconds ago)
Re-enabling far sync
You can re-enable far sync. Touch the objects in the reverse of the order used when disabling: the far sync, the far sync redo routes, then the database redo routes.
DGMGRL> enable far_sync FS
Enabled.
DGMGRL> edit far_sync FS set property RedoRoutes = '(NY : SF ASYNC)(SF : NY ASYNC)';
Property "redoroutes" updated
DGMGRL> edit database SF set property RedoRoutes = '(LOCAL : FS SYNC)';
Property "redoroutes" updated
DGMGRL> edit database NY set property RedoRoutes = '(LOCAL : FS SYNC)';
Property "redoroutes" updated
Check:
DGMGRL> show configuration
Configuration - ORCL_CONFIG
Protection Mode: MaxAvailability
Members:
NY - Primary database
FS - Far sync instance
SF - Physical standby database
Fast-Start Failover: Disabled
Configuration Status:
SUCCESS (status updated 39 seconds ago)
Removing the far sync
You can remove a disabled far sync from the configuration.
DGMGRL> remove far_sync FS
Removed far sync instance "fs" from the configuration
Switchover performance
We want to know how much time elapses from when the switchover command is issued until the new primary is open. The times appear in the alert log. For example:
2019-09-05T14:44:04.271609-04:00
SWITCHOVER VERIFY BEGIN
...
2019-09-05T14:44:59.392512-04:00
TMI: adbdrv open database END 2019-09-05 14:44:59.392313
Starting background process CJQ0
Completed: ALTER DATABASE OPEN
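The elapsed time can be computed directly from the two alert-log timestamps. A minimal sketch in Python, using the timestamps from the excerpt above (the first carries a UTC offset, the completion message does not, so we strip the offset and compare as local times):

```python
from datetime import datetime

# Alert log timestamps from the switchover above.
# Strip the -04:00 offset so both values are naive local times.
start = datetime.fromisoformat("2019-09-05T14:44:04.271609-04:00").replace(tzinfo=None)
end = datetime.fromisoformat("2019-09-05 14:44:59.392313")

elapsed = (end - start).total_seconds()
print(f"elapsed to open: {elapsed:.1f} s")  # elapsed to open: 55.1 s
```

This matches the last "switchover to NY, direct route" row in the timing tables below.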
We can compare the database opening time with far sync:
Protection Mode: MaxAvailability
Members:
NY - Primary database
FS - Far sync instance
SF - Physical standby database
to the time without far sync (direct route):
Protection Mode: MaxAvailability
Members:
NY - Primary database
SF - Physical standby database
With far sync, the switchover timing data are:
| sw to | date | start sw | new primary open | elapsed time to open (s) | redo route | prot level |
| SF | 9/5/2019 | 05:54:16.9 | 05:55:11.4 | 54.6 | far sync | Max Perf |
| NY | 9/5/2019 | 06:02:11.7 | 06:04:12.2 | 120.5 | far sync | Max Perf |
| SF | 9/5/2019 | 06:09:34.2 | 06:10:40.9 | 66.8 | far sync | Max Perf |
| NY | 9/5/2019 | 06:13:32.7 | 06:15:21.9 | 109.1 | far sync | Max Perf |
| SF | 9/5/2019 | 13:38:38.6 | 13:40:18.1 | 99.5 | far sync | Max Avail |
| NY | 9/5/2019 | 14:07:56.7 | 14:10:23.7 | 147.0 | far sync | Max Avail |
| SF | 9/5/2019 | 14:15:32.9 | 14:17:58.9 | 145.9 | far sync | Max Avail |
| NY | 9/5/2019 | 14:20:04.6 | 14:21:38.9 | 94.3 | far sync | Max Avail |
| Average | | | | 104.7 | | |
| Stdev | | | | 33.5 | | |
compared to without far sync:
| sw to | date | start sw | new primary open | elapsed time to open (s) | redo route | prot level |
| SF | 9/5/2019 | 06:19:48.0 | 06:20:47.4 | 59.4 | direct | Max Perf |
| NY | 9/5/2019 | 06:23:10.5 | 06:24:06.8 | 56.3 | direct | Max Perf |
| SF | 9/5/2019 | 06:27:20.5 | 06:28:19.5 | 59.0 | direct | Max Perf |
| NY | 9/5/2019 | 06:29:57.2 | 06:30:54.0 | 56.8 | direct | Max Perf |
| SF | 9/5/2019 | 14:34:49.6 | 14:35:50.1 | 60.5 | direct | Max Avail |
| NY | 9/5/2019 | 14:38:04.4 | 14:38:59.3 | 54.9 | direct | Max Avail |
| SF | 9/5/2019 | 14:40:59.6 | 14:41:58.1 | 58.5 | direct | Max Avail |
| NY | 9/5/2019 | 14:44:04.3 | 14:44:59.4 | 55.1 | direct | Max Avail |
| Average | | | | 57.6 | | |
| Stdev | | | | 2.1 | | |
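The summary rows in the two tables can be reproduced from the per-switchover elapsed times with Python's statistics module (stdev here is the sample standard deviation):

```python
import statistics

# Elapsed-to-open times in seconds, taken from the two tables above.
far_sync = [54.6, 120.5, 66.8, 109.1, 99.5, 147.0, 145.9, 94.3]
direct = [59.4, 56.3, 59.0, 56.8, 60.5, 54.9, 58.5, 55.1]

for name, times in [("far sync", far_sync), ("direct", direct)]:
    print(f"{name}: avg {statistics.mean(times):.1f} s, "
          f"stdev {statistics.stdev(times):.1f} s")
# far sync: avg 104.7 s, stdev 33.5 s
# direct: avg 57.6 s, stdev 2.1 s
```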
Without far sync, switchover time is lower (Average) and more consistent (Stdev). Far sync, therefore, imposes a switchover time penalty.
Conclusion
A far sync implementation is presented. There are several points of interest.
- The platform is Red Hat Linux 7.2 on Amazon AWS EC2.
- The network setup is described in a separate article.
- You can measure network latency and throughput with qperf.
- The grid and oracle home users are separate accounts.
- Database storage is in ASM, not operating system files.
- The number of standby log files depends on the expected workload.
- Parameter compatible must be set the same at the primary, far sync, and standby.
- Check standby log size before duplicating the database.
- File transfer is by RMAN only.
- SSH trust is not required to set up Data Guard.
- Multiple RMAN channels should be used to duplicate the database.
- Configure Restart for all instances.
- Validate the static listeners.
- Data Guard configuration and operations are done from the broker, not sqlplus.
- The standby and the far sync are set up at the same time, not separately.
- Reboot hosts and confirm automatic startup, startup mode, and Data Guard role.
- Test switchover after configuring restart.
- Data Guard broker manages fal_server at all nodes during role transitions.
- You can change the protection mode to Maximum Availability.
- Far sync can reduce or prevent data loss.
- With far sync, switchover takes longer and the elapsed time is less consistent.
We have presented an Oracle Data Guard 19c setup with far sync. Some techniques here are variants of contributions found elsewhere, and some findings on performance are presented.