ASMCMD-9520: AFD is not Loaded after Red Hat update

by Brian Fitzgerald

Introduction

Your Red Hat kernel got upgraded and now AFD does not load. Solution: run “grubby --set-default”. This article applies to Oracle Database 19c on Red Hat Enterprise Linux 9 on-premises.

Background

ASM filter driver (AFD) requires loading the kernel module oracleafd.ko. The kernel modules are distributed in release updates and are installed in $ORACLE_HOME/usm/install/Oracle. AFD kernel modules are tied to a specific sub-version of Red Hat EL 9, and Oracle’s distribution of new AFD kernel modules may lag the Red Hat release by six months. For example, rhel9_4 was released on April 30, 2024, but Oracle did not release the compatible AFD kernel module until RU 19.25 of October 15, 2024. If you issued “dnf update” or “yum update” after April 30 but before installing RU 19.25, then AFD stopped working and you must downgrade your kernel. Refer to ACFS and AFD Support On OS Platforms (Certification Matrix) (Doc ID 1369107.1) for up-to-date AFD kernel driver release information.

rhel9_5 was released on November 13, 2024. If you issued “dnf update,” then AFD has stopped working. As of this writing, Oracle has not released an rhel9_5 AFD kernel module, so you must downgrade your kernel.

Don’t do it!

Don’t run “dnf update”!
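
If the OS must be patched before Oracle ships a matching AFD module, one option is to update everything except the kernel. A sketch using stock dnf:

# One-off update that holds the kernel packages back
dnf update --exclude='kernel*'

# Or persist the exclusion until the matching RU is installed
echo 'exclude=kernel*' >> /etc/dnf/dnf.conf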

Oh no, you did it!

“dnf update” got run and now AFD does not load. Your Oracle database is down!

Fix it!

Fix this issue by identifying your previous kernel file and running “grubby --set-default”. Then reboot.
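
A minimal sketch of the fix, assuming the previous rhel9_4 kernel is still installed (the version strings are illustrative):

# List the installed kernels and the current default
grubby --info=ALL | grep '^kernel'
grubby --default-kernel

# Point the default boot entry back at the last rhel9_4 kernel
grubby --set-default /boot/vmlinuz-5.14.0-427.42.1.el9_4.x86_64

reboot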

Yay, it’s fixed!

Notice that filtering is not supported on Red Hat 9. Be careful not to overwrite your ASM device!

No filtering in RHEL9

ASM filter driver is designed to block I/O from all programs except Oracle binaries. Filtering works in rhel7: an attempt to overwrite an Oracle device with dd, for example, is blocked.

Refer to Oracle Automatic Storage Management Filter Driver (ASMFD) (Doc ID 2806979.1) for news about AFD filtering. Exercise care when handling Oracle devices. For example:
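
A sketch of the kind of accident described next (device name and sizes are illustrative; do not run this against a real ASM disk):

# With filtering inactive, this write goes straight to the ASM disk header
dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=10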

dd overwrote /dev/nvme3n1. Your data is wiped out. The ironically named “ASM Filter Driver” did not filter the non-Oracle I/O.

Common SA commands such as parted could corrupt your disk:
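
For instance (illustrative; do not run against an ASM disk):

# Writing a new label clobbers the ASM metadata at the start of the device
parted -s /dev/nvme3n1 mklabel gpt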

Be careful!

Red Hat release

Notice that the kernel is at rhel9_4, but the operating system is at rhel9_5.
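
A sketch of how to compare the two:

# The running kernel still carries the el9_4 suffix (e.g., 5.14.0-427.x.el9_4.x86_64)
uname -r

# while the installed OS release is already 9.5
cat /etc/redhat-release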

Conclusion

We covered these points:

  • Oracle AFD depends on a kernel module.
  • In Red Hat 9, the AFD kernel module is tied to a specific sub-version.
  • Oracle will release the needed AFD module after each Red Hat 9 sub-version release.
  • Depending on what Oracle RU you have installed, dnf update may install a kernel that is incompatible with your AFD module.
  • You can fix your problem by running “grubby --set-default”.
  • You can upgrade to a specific kernel version.
  • ASM filter driver no longer filters.
  • Administrative commands could wipe out your disks.
  • Exercise greater care without filtering present.

No disk groups mounted after Grid 12.1 upgrade to 18c

Introduction

After an apparently successful upgrade from Grid 12.1 to 18c, the ASM disk groups failed to mount.

Scenario

An upgrade from Grid 12.1 to 18c was performed using gridSetup.sh.

After the upgrade, no disk groups were mounted. There had been no warnings beforehand.

Other application groups were waiting, so there was pressure to bring up the system.

Symptoms

  1. No disk groups got mounted.
  2. This error appears in the ASM alert log:
ORA-15032: not all alterations performed
ORA-59303: The attribute compatible.asm (11.2.0.0.0) of the diskgroup being mounted should be 11.2.0.2.0 or higher.

Notice that tools that refer to disk groups will not work.

asmcmd lsdsk --discovery shows ASM disks, but no disk groups.

$ asmcmd lsdsk --discovery -g
Inst_ID  Path
      1  ORCL:ASM_DATA_VOL1
      1  ORCL:ASM_DATA_VOL2
      1  ORCL:ASM_DATA_VOL3
      1  ORCL:ASM_DATA_VOL4
      1  ORCL:ASM_DATA_VOL5
      1  ORCL:ASM_RECO_VOL1
      1  ORCL:ASM_REDO_VOL1

Mounting with the force option will not work. For example:

asmcmd mount -f DATA1

This will not work:

SQL> ALTER DISKGROUP DATA1 SET ATTRIBUTE 'compatible.asm' ='11.2.0.2.0';
ALTER DISKGROUP DATA1 SET ATTRIBUTE 'compatible.asm' ='11.2.0.2.0'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15001: diskgroup "DATA1" does not exist or is not mounted

This will not work:

SQL> alter diskgroup DATA1 mount;
alter diskgroup DATA1 mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-59303: The attribute compatible.asm (11.2.0.0.0) of the diskgroup being
mounted should be 11.2.0.2.0 or higher.

Notice that the ASM disks are OK.

asmlib:

$ oracleasm listdisks
ASM_DATA_VOL1
ASM_DATA_VOL2
ASM_DATA_VOL3
ASM_DATA_VOL4
ASM_DATA_VOL5
ASM_RECO_VOL1
ASM_REDO_VOL1

kfed:

$ kfed op=read dev=/dev/oracleasm/disks/ASM_DATA_VOL1
kfbh.endian:                          1 ; 0x000: 0x01
etc.
kfdhdb.dskname:           ASM_DATA_VOL1 ; 0x028: length=13
kfdhdb.grpname:                   DATA1 ; 0x048: length=5
etc.
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000

The ASM disk and ASM group name look good.

Hex b refers to Oracle version 11. The 2 and the zeros that follow apparently mean that the disk group compatibility is 11.2.0.0.0.

Cause

Oracle 12.2, 18c, and 19c binaries cannot mount a disk group whose compatible.asm is 11.2.0.0.0. The manual states:

Starting with Oracle ASM version 12.2.0.1, the minimum and default settings for Oracle ASM disk group attributes are:

  • COMPATIBLE.ASM = 11.2.0.2

If you are reading the Oracle grid upgrade manual, you might not notice any checks that could prevent this mistake. The Oracle 19c Grid Upgrade manual checklist hints at the problem, but our 18c gridSetup.sh as patched (18.3) does not check for it and lets you start the upgrade, which leads to the problem.

Approach (with a downside)

Downgrade grid to the previous version (12.1).

In the 18c grid home, I attempted this step from the 18c manual:

# cd crs/install
# ./rootcrs.sh -downgrade
Using configuration parameter file: /ora_local/apps/oracle_grid/product/18.0.0/grid_18c/crs/install/crsconfig_params
The log of current session can be found at:
  /ora_local/apps/oracle/crsdata/evp003-eldb/crsconfig/crsdowngrade_evp003-eldb_2021-01-20_10-48-42AM.log
2021/01/20 10:48:44 CLSRSC-457: Oracle Restart is currently configured and cannot be deconfigured using this Oracle Clusterware deconfiguration command.
Died at /ora_local/apps/oracle_grid/product/18.0.0/grid_18c/crs/install/crsdowngrade.pm line 260.
The command '/ora_local/apps/oracle_grid/product/18.0.0/grid_18c/perl/bin/perl -I/ora_local/apps/oracle_grid/product/18.0.0/grid_18c/perl/lib -I/ora_local/apps/oracle_grid/product/18.0.0/grid_18c/crs/install /ora_local/apps/oracle_grid/product/18.0.0/grid_18c/crs/install/rootcrs.pl -downgrade' execution failed

Instead, I ran:

# cd crs/install
# ./roothas.sh -deconfig

This step wipes out the OCR.

In the 12.1 grid home:

# cd crs/install
# ./roothas.sh

Now grid is running on 12.1 with just the basics, but not cssd.

Downside

roothas.sh -deconfig wipes out your CRS configuration. You will have to manually start cssd and re-add your ASM and database resources.

Workaround

Start cssd, etc.

# crsctl start resource -all

Start ASM

To get things rolling, you could create an ASM pfile. Remember that ASM needs to know which disk groups to mount, so be sure to configure asm_diskgroups.

init+ASM.20210120

large_pool_size          = 12M
instance_type            = "asm"
remote_login_passwordfile= "EXCLUSIVE"
asm_diskgroups =  DATA1,REDO1,RECOVERY1
asm_power_limit          = 1
diagnostic_dest          = "/ora_local/apps/oracle"

start

SQL> startup mount pfile='init+ASM.20210120'
ASM instance started

Total System Global Area 1136934472 bytes
Fixed Size                  8666696 bytes
Variable Size            1103101952 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted
ASM diskgroups volume enabled

You can successfully create an spfile, although this error message appears:

SQL> create spfile='asmspfile.ora'  from pfile =  'init+ASM.20210120';
create spfile='asmspfile.ora'  from pfile =  'init+ASM.20210120'
*
ERROR at line 1:
ORA-29786: Oracle Restart attribute GET failed with error [Attribute 'SPFILE'
sts[200] lsts[0]]

Add ASM to CRS

$ srvctl add asm -spfile asmspfile.ora

Start the databases. Details omitted for brevity: start each with sqlplus, then configure it in CRS with srvctl add database.

Final state

  • Grid 12.1 is up.
  • ASM is up with all disk groups mounted.
  • Databases are up.

Follow up steps

To reattempt the Grid 18c upgrade:

  • In all disk groups, raise compatible.asm to 11.2.0.2.0 or higher (a sketch follows this list)
  • Retry the Grid 18c upgrade
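
With the disk groups mounted under Grid 12.1, the attribute change that failed earlier should now succeed. A sketch, run against the ASM instance (disk group names taken from the pfile above):

sqlplus / as sysasm <<'EOF'
ALTER DISKGROUP DATA1     SET ATTRIBUTE 'compatible.asm' = '11.2.0.2.0';
ALTER DISKGROUP REDO1     SET ATTRIBUTE 'compatible.asm' = '11.2.0.2.0';
ALTER DISKGROUP RECOVERY1 SET ATTRIBUTE 'compatible.asm' = '11.2.0.2.0';
EOF

Note that compatible.asm can be raised but never lowered.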

Conclusion

During an upgrade from Grid 12.1 to 18c, the step to upgrade the disk groups to compatible.asm 11.2.0.2.0 got missed. As a result, the 18c software could not mount the disk groups. kfed revealed that the ASM disks were intact, though.

The downgrade step found in the manual, rootcrs.sh -downgrade, failed, so I ran roothas.sh -deconfig, followed by roothas.sh in the 12.1 grid home. That worked, but I had to start cssd manually and register ASM and the databases.

Launch AMI by name in Terraform

By Brian Fitzgerald

Introduction

Using Terraform, you can launch an AWS EC2 instance from any account and any region using a public AMI identified only by its image name. Because you do not need to look up the image ID each time, your code is simpler.

Approach

Red Hat account number

In the AWS console, notice that the Red Hat, Inc. account number is 309956199498.


AMI name

Find the Red Hat AMI image by name. In the console, navigate to EC2 > Images > AMIs. Filter on Owner: 309956199498, Architecture: 64-bit (x86), Virtualization type: HVM.

For example, “RHEL-7.9_HVM_GA-20200917-x86_64-0-Hourly2-GP2”. You could also search using the AWS command line:

$ aws ec2 describe-images --owners 309956199498 --filters "` cat filters.ami.json `" --region us-east-2

where filters.ami.json contains:

[
    {
        "Name": "architecture",
        "Values": [
             "x86_64"
        ]
    },
    {
        "Name": "virtualization-type",
        "Values": [
             "hvm"
        ]
    }
]
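
Because the image names embed a date stamp, the CLI can also pick the newest match directly. A sketch using a name wildcard (JMESPath sorts on CreationDate and takes the last element):

$ aws ec2 describe-images --owners 309956199498 \
      --filters 'Name=name,Values=RHEL-7.9_HVM_GA*' \
      --query 'sort_by(Images,&CreationDate)[-1].[ImageId,Name]' \
      --output text --region us-east-2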

Terraform data source

Define a Terraform aws_ami data source.

ami-rhel.tf

data "aws_ami" "rhel" {
    most_recent = true
    owners = [
        "309956199498"
    ]
    filter {
        name   = "name"
        values = [
            "RHEL-7.9_HVM_GA-20200917-x86_64-0-Hourly2-GP2"
        ]
    }
}

aws instance resource

Use the data source in your Terraform aws_instance resource:

provider "aws" {
    region      = var.region
}

resource "aws_instance" "your-ec2-rsrc" {
    ami           = data.aws_ami.rhel.image_id

and so on . . .

Launch the ec2 instance

Run “terraform apply”. Terraform determines the AMI ID. For example, ami-0d2bf41df19c4aac7.

Conclusion

By using the Terraform AWS AMI data source, you can launch a public AMI from any account and region. The AMI ID is different for each account and region.

EC2 maximum number of volumes

Introduction

The documented AWS EC2 EBS volume attachment limit is 27. The attachment limit affects the maximum number of Oracle ASM disks.

EC2 volume limit

Amazon AWS documents a 27-volume attachment limit. I have found that an attempt to create a t2 instance with 28 EBS volumes will abort.


Attempting to create an m5 (Nitro) instance with 28 or more volumes will hang.


The attachment limit is 28, including network interfaces, volumes, and instance store volumes. EC2 instances with one network interface can have up to 27 volumes attached.
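
A quick way to see how close an instance is to the limit, assuming the AWS CLI is configured (the instance ID is illustrative):

$ aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
      --query 'Reservations[].Instances[].BlockDeviceMappings[].DeviceName' \
      --output text | wc -w

This counts the attached EBS volumes, including the root volume; remember to leave headroom for the network interface.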

EC2 instances with more than 27 volumes

I am aware of t2 EC2 systems with as many as 44 volumes attached. I have no information about how this was done or what the customer’s support expectations are. I would be concerned about how such a system would respond to a change of instance type.

Oracle Database

In planning an Oracle database installation, you may need file system mounts as well, /u01 for example. Each underlying volume counts against the maximum attachment count.

In laying out an ASM setup, consider a simple design, with no more ASM disks and ASM disk groups than needed.

Conclusion

Based on the available public information, I recommend limiting the number of EBS volume attachments to 27. Oracle database administrators might want to simplify their ASM implementation.

DGMGRL not required in listener.ora in Restart

By Brian Fitzgerald

Introduction

In Oracle Restart, _DGMGRL services are no longer required in listener.ora. Switchover output has changed slightly and _DGMGRL connections no longer appear in the listener log.

Background

While testing Data Guard switchover in 19c, I noticed that after configuring Restart, connections to _DGMGRL no longer appeared in the listener log. I deleted the _DGMGRL services, reloaded the listeners, and retested the switchover without issue.

Static listeners (initial)

Initially, in the grid account, in $ORACLE_HOME/network/admin/listener.ora, these SID_LIST_LISTENER entries were in place. At the primary:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = NY_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/19.3.0/dbhome_1)
      (SID_NAME = ORCL)
    )
  )

At the far sync:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = FS_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/19.3.0/dbhome_1)
      (SID_NAME = ORCL)
    )
  )

At the standby:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = SF_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/19.3.0/dbhome_1)
      (SID_NAME = ORCL)
    )
  )

I reloaded the listeners at each host:

[grid@ip-172-31-86-22 ~]$ lsnrctl reload

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 02-SEP-2019 12:28:09

Copyright (c) 1991, 2019, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ip-172-31-86-22.ec2.internal)(PORT=1521)))
The command completed successfully

Switchover test before setting up Restart

I ran the following switchover test without Restart:

DGMGRL> switchover to SF
Performing switchover NOW, please wait...
Operation requires a connection to database "SF"
Connecting ...
Connected to "SF"
Connected as SYSDBA.
New primary database "SF" is opening...
Operation requires start up of instance "ORCL" on database "NY"
Starting instance "ORCL"...
Connected to an idle instance.
ORACLE instance started.
Connected to "NY"
Database mounted.
Connected to "NY"
Switchover succeeded, new primary is "sf"

Notice the message ‘Operation requires start up of instance “ORCL” on database “NY”‘. In the NY listener log, several connections to service NY_DGMGRL appear. For example:

02-SEP-2019 22:46:10 * (CONNECT_DATA=(SERVICE_NAME=NY_DGMGRL)(INSTANCE_NAME=ORCL)(SERVER=DEDICATED)(CID=(PROGRAM=dgmgrl)(HOST=ip-172-31-86-22.ec2.internal)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=172.31.86.22)(PORT=51298)) * establish * NY_DGMGRL * 0

Run “show configuration”:

DGMGRL> show configuration

Configuration - ORCL_CONFIG

  Protection Mode: MaxPerformance
  Members:
  SF - Primary database
    FS - Far sync instance
      NY - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 51 seconds ago)

Switch back to the original primary.

DGMGRL> switchover to NY
Performing switchover NOW, please wait...
New primary database "NY" is opening...
Operation requires start up of instance "ORCL" on database "SF"
Starting instance "ORCL"...
Connected to an idle instance.
ORACLE instance started.
Connected to "SF"
Database mounted.
Connected to "SF"
Switchover succeeded, new primary is "ny"

Notice the message ‘Operation requires start up of instance “ORCL” on database “SF”‘ . In the SF listener log, several connections to service SF_DGMGRL appear. For example:

02-SEP-2019 22:43:08 * (CONNECT_DATA=(SERVICE_NAME=SF_DGMGRL)(INSTANCE_NAME=ORCL)(SERVER=DEDICATED)(CID=(PROGRAM=dgmgrl)(HOST=ip-172-31-86-22.ec2.internal)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=172.31.86.22)(PORT=50340)) * establish * SF_DGMGRL * 0

Notice that no connections to the FS_DGMGRL service appeared at the far sync, even in this pre-Restart test. Service FS_DGMGRL plays no role at the far sync.

Configure Restart

Register your Restart instances.

At the primary:

[oracle@ip-172-31-86-22 ~]$ srvctl add database -database NY -role PRIMARY -stopoption IMMEDIATE -instance ORCL -oraclehome /u01/app/oracle/product/19.3.0/dbhome_1 -spfile +DATA01/ORCL/PARAMETERFILE/spfile.266.1017440879 -diskgroup DATA01,RECO01

At the far sync:

[oracle@ip-172-31-28-23 ~]$ srvctl add database -database FS -role physical_standby -startoption MOUNT -stopoption ABORT -instance ORCL -oraclehome /u01/app/oracle/product/19.3.0/dbhome_1 -spfile /u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora -diskgroup DATA01,RECO01

To activate the change, I found it necessary to issue srvctl start. That does not seem right; “srvctl enable database” should do it.

Also, “srvctl enable instance” is not available in Restart; it only works in RAC.

Unless you activate the instance, it will not start automatically upon host reboot.

[oracle@ip-172-31-28-23 ~]$ srvctl start database -database FS

At the standby:

[oracle@ip-172-32-10-34 ~]$ srvctl add database -database SF -role physical_standby -startoption MOUNT -stopoption ABORT -instance ORCL -oraclehome /u01/app/oracle/product/19.3.0/dbhome_1 -spfile /u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora -diskgroup DATA01,RECO01
[oracle@ip-172-32-10-34 ~]$ srvctl start database -database SF

Reboot all hosts and check that the database instances start automatically, in the proper startup mode and Data Guard role.
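
A sketch of the post-reboot check, using the same database names as above:

[oracle@ip-172-31-86-22 ~]$ srvctl status database -database NY
[oracle@ip-172-31-86-22 ~]$ srvctl config database -database NY | grep -Ei 'role|start'

srvctl config database prints the registered role and start options; confirm MOUNT for the standby and far sync.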

Switchover test

Test switchover to SF. The output changes slightly, and becomes:

DGMGRL> switchover to SF
Performing switchover NOW, please wait...
Operation requires a connection to database "SF"
Connecting ...
Connected to "SF"
Connected as SYSDBA.
New primary database "SF" is opening...
Oracle Clusterware is restarting database "NY" ...
Connected to an idle instance.
Connected to an idle instance.
Connected to an idle instance.
Connected to an idle instance.
Connected to "NY"
Connected to "NY"
Switchover succeeded, new primary is "sf"

After configuring Restart, the Data Guard switchover output has changed slightly. Message ‘Operation requires start up of instance “ORCL” on database “NY”‘ has been replaced with ‘Oracle Clusterware is restarting database “NY” …’. A review of the NY listener log shows no connection to service NY_DGMGRL.

Test switchover to NY. The output is now:

DGMGRL> switchover to NY
Performing switchover NOW, please wait...
New primary database "NY" is opening...
Oracle Clusterware is restarting database "SF" ...
Connected to an idle instance.
Connected to an idle instance.
Connected to "SF"
Connected to "SF"
Switchover succeeded, new primary is "ny"

A review of the SF listener log shows no connection to service SF_DGMGRL.

Static listeners (final)

In Restart, therefore, the “_DGMGRL” listener.ora entry is not needed. SID_LIST_LISTENER can be simplified on the primary, far sync, and standby as:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/19.3.0/dbhome_1)
      (SID_NAME = ORCL)
    )
  )

After configuring Oracle Restart, switchovers were retested with the revised listener.ora without issue.

Error in non-Restart and no DGMGRL

Suppose we disable Restart at NY.

[oracle@ip-172-31-86-22 ~]$ srvctl stop database -database NY
[oracle@ip-172-31-86-22 ~]$ srvctl remove database -database NY -y
[oracle@ip-172-31-86-22 dbs]$ cat > initORCL.ora
spfile='+DATA01/NY/PARAMETERFILE/spfile.263.1018152951'
[oracle@ip-172-31-86-22 dbs]$ sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Sep 12 11:07:20 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1140849904 bytes
Fixed Size                  8895728 bytes
Variable Size             318767104 bytes
Database Buffers          805306368 bytes
Redo Buffers                7880704 bytes
Database mounted.
Database opened.

DGMGRL> show configuration

Configuration - ORCL_CONFIG

  Protection Mode: MaxAvailability
  Members:
  NY - Primary database
    SF - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 11 seconds ago)

(Note: In this example, there is no far sync.) Now try a switchover:

DGMGRL> switchover to SF
Performing switchover NOW, please wait...
Operation requires a connection to database "SF"
Connecting ...
Connected to "SF"
Connected as SYSDBA.
New primary database "SF" is opening...
Operation requires start up of instance "ORCL" on database "NY"
Starting instance "ORCL"...
Unable to connect to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ip-172-31-86-22.ec2.internal)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=NY_DGMGRL)(INSTANCE_NAME=ORCL)(SERVER=DEDICATED)))
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Failed.

Please complete the following steps to finish switchover:
        start up and mount instance "ORCL" of database "NY"

The new primary, SF, opens just fine. The old primary, NY, got shut down, but now there is no way for the broker to restart it. The listener is not listening on behalf of the Oracle database.

[grid@ip-172-31-86-22 ~]$ lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 12-SEP-2019 11:26:00

Copyright (c) 1991, 2019, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ip-172-31-86-22.ec2.internal)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                12-SEP-2019 10:15:24
Uptime                    0 days 1 hr. 10 min. 36 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/19.3.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/ip-172-31-86-22/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ip-172-31-86-22.ec2.internal)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "+ASM_DATA01" has 1 instance(s).
  Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "+ASM_RECO01" has 1 instance(s).
  Instance "+ASM", status READY, has 1 handler(s) for this service...
The command completed successfully

This error, referring to the missing service NY_DGMGRL, appears in the listener log:

12-SEP-2019 11:22:09 * (CONNECT_DATA=(SERVICE_NAME=NY_DGMGRL)(INSTANCE_NAME=ORCL)(SERVER=DEDICATED)(CID=(PROGRAM=dgmgrl)(HOST=ip-172-31-86-22.ec2.internal)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=172.31.86.22)(PORT=53930)) * establish * NY_DGMGRL * 12514
TNS-12514: TNS:listener does not currently know of service requested in connect descriptor

The conclusion is that without Restart we need the static listener and the _DGMGRL service in SID_LIST_LISTENER.

Validate static listeners

Validating the static listeners is a critical Data Guard setup step. After duplicating the database and setting up Restart, check each static listener:

  • Connect over the network with sqlplus
  • Shutdown (shutdown abort for standbys and far syncs)
  • Startup (startup mount for standbys and far syncs)

Example:

[oracle@ip-172-31-28-23 ops]$ sqlplus sys/zystm.22@FS as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Sep 13 11:36:37 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> shutdown abort
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1140849904 bytes
Fixed Size                  8895728 bytes
Variable Size             301989888 bytes
Database Buffers          822083584 bytes
Redo Buffers                7880704 bytes
Database mounted.

This is an example of a failed static listener check.

[oracle@ip-172-31-28-23 ops]$ sqlplus sys/zystm.22@FS as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Sep 13 11:36:37 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> shutdown abort
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1140849904 bytes
Fixed Size                  8895728 bytes
Variable Size             301989888 bytes
Database Buffers          822083584 bytes
Redo Buffers                7880704 bytes
Database mounted.
SQL> shutdown abort
ORACLE instance shut down.
ERROR:
ORA-12505: TNS:listener does not currently know of SID given in connect
descriptor


SQL> startup mount
SP2-0640: Not connected

Resolve all failures before proceeding.

Conclusion

In Oracle Database 12.1, the Data Guard broker command-line interface (DGMGRL) was modified so that in Restart or RAC, “_DGMGRL” is no longer required. This fact is mentioned in Oracle Data Guard Broker and Static Service Registration (Doc ID 1387859.1).

On the other hand, if you are not using Restart, then you do need the _DGMGRL service in your SID_LIST_LISTENER.

Restart is only available with Grid Infrastructure. The main benefit of Grid Infrastructure is ASM. If you built Oracle Database on operating system files, then you have less need for Grid Infrastructure. If you did not install Grid Infrastructure, then Restart is not available. Without Restart, a Data Guard setup will need a _DGMGRL service in SID_LIST_LISTENER.

During Data Guard setup, validate all static listeners.

Symmetric far sync Data Guard

Introduction

Presented here is a reliable, symmetric Oracle Data Guard network. By reliable is meant that zero data loss and continuous availability are guaranteed in case of the loss of one far sync. Symmetric means that there is no preferred primary site.

Background

I used to drive a Navy nuclear submarine. On a weekly basis, we switched over our running machinery. For example, we switched over our turbine generator lubricating oil pumps. The engineroom upper level watch would start the standby pump, verify proper response, and then place the originally running pump in standby. We shifted our pumps for multiple reasons. Switching pumps equalized wear. It also verified readiness and operability. Finally, by operating the controls on a regular basis, the crew maintained a higher level of proficiency.

Notice a few things about this arrangement. No pump is a “preferred” primary. The pump functioning in the standby role is in no way inferior in design, specifications, or readiness. Because the machinery is rotated on a weekly basis, there is no question that the standby is ready to take over the primary role immediately when necessary.

So it is with databases. You need a disaster recovery strategy, and the DR site needs to be in a known state of readiness at all times. The best way of accomplishing that is to rotate DR sites on a weekly or quarterly basis. No site is a “preferred” primary. Regular exercise of the system leaves no question that a standby is ready. Finally, database administrators maintain proficiency by regularly switching over the Data Guard systems.

A reliable, symmetric Data Guard arrangement is convenient to set up in the cloud. You have complete flexibility over which region and availability zone for placing your databases and far syncs.

Environment

The environment is Amazon AWS EC2 (Elastic Compute Cloud) with Red Hat Linux 7.2. Here is an overview of the environment.

db_unique_name  region     availability zone  type      net I/O  latency from  latency (μs)
NY              us-east-1  us-east-1c         database  SYNC     SF            32000
NY_FSA          us-east-1  us-east-1d         far sync  ASYNC    NY            250
NY_FSB          us-east-1  us-east-1a         far sync  ASYNC    NY            500
SF              us-west-1  us-west-1b         database  SYNC     NY            31500
SF_FSA          us-west-1  us-west-1b         far sync  ASYNC    SF            115
SF_FSB          us-west-1  us-west-1c         far sync  ASYNC    SF            600

Notice that NY, NY_FSA, and NY_FSB are in the same region but in separate availability zones. Also, SF, SF_FSA, and SF_FSB are in the same region but SF_FSB is in an availability zone separate from SF and SF_FSA. SF and SF_FSA are in the same availability zone. The highest latency is across regions. The lowest latency is within one availability zone. Medium latency is in the same region but across availability zones.

Network diagram: two regions, each holding one database and two far syncs.

In Maximum Availability mode, the primary ships redo to one far sync, leaving the second far sync as an alternate. In case of a failure on the far sync, the alternate comes online, resyncs with the primary, and takes over the role of far sync. The primary always ships redo to a far sync in the same region. For example, NY ships to NY_FSA or NY_FSB. Because of this arrangement, there is a total of four far syncs, two per region. Although four far sync hosts are required, the hosts could be configured with less CPU, memory, and disk space than the primary database. In a cloud, CPU, memory, and disk can be reconfigured quickly. Note that in the present symmetric arrangement, far sync direction is one-way, from the local database to the database in the remote region.

Setup

Assume an initial, enabled Maximum Availability configuration with only a primary and a standby database, and no far syncs.

Create and enable the far syncs

Create far syncs as explained in Data Guard 19c in AWS with far sync. Add the far syncs and enable them.

DGMGRL> add far_sync "NY_FSA" as connect identifier is 'NY_FSA';
far sync instance "NY_FSA" added
DGMGRL> enable far_sync "NY_FSA";
Enabled.
DGMGRL> add far_sync 'NY_FSB' as connect identifier is 'NY_FSB';
far sync instance "NY_FSB" added
DGMGRL> enable far_sync 'NY_FSB'
Enabled.
DGMGRL> add far_sync 'SF_FSA' as connect identifier is 'SF_FSA';
far sync instance "SF_FSA" added
DGMGRL> enable far_sync 'SF_FSA';
Enabled.
DGMGRL> add far_sync 'SF_FSB' as connect identifier is 'SF_FSB';
far sync instance "SF_FSB" added
DGMGRL> enable far_sync 'SF_FSB'
Enabled.

The configuration so far looks like this:

DGMGRL> show configuration

Configuration - ORCL_CONFIG

  Protection Mode: MaxAvailability
  Members:
  NY     - Primary database
    SF     - Physical standby database

  Members Not Receiving Redo:
  NY_FSA - Far sync instance
  SF_FSA - Far sync instance
  SF_FSB - Far sync instance
  NY_FSB - Far sync instance

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 57 seconds ago)

Edit the far sync routes

DGMGRL> edit far_sync NY_FSA set property RedoRoutes = '(NY : SF ASYNC)';
Property "redoroutes" updated
DGMGRL> edit far_sync NY_FSB set property RedoRoutes = '(NY : SF ASYNC)';
Property "redoroutes" updated
DGMGRL> edit far_sync SF_FSA set property RedoRoutes = '(SF : NY ASYNC)';
Property "redoroutes" updated
DGMGRL> edit far_sync SF_FSB set property RedoRoutes = '(SF : NY ASYNC)';
Property "redoroutes" updated

Edit the database routes

DGMGRL> edit database NY set property RedoRoutes = '(LOCAL : (NY_FSA SYNC, NY_FSB SYNC))';
Property "redoroutes" updated
DGMGRL> edit database SF set property RedoRoutes = '(LOCAL : (SF_FSA SYNC, SF_FSB SYNC))';
Property "redoroutes" updated
DGMGRL> show configuration

Configuration - ORCL_CONFIG

  Protection Mode: MaxAvailability
  Members:
  NY     - Primary database
    NY_FSA - Far sync instance
      SF     - Physical standby database

  Members Not Receiving Redo:
  SF_FSA - Far sync instance
  SF_FSB - Far sync instance
  NY_FSB - Far sync instance (alternate of NY_FSA)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 40 seconds ago)

Test far sync failover

In case of a failure of the far sync, Data Guard should quickly switch over to the alternate far sync. In this example, if there is a disruption of NY_FSA, Data Guard should fail over to NY_FSB. We want to observe the system response to a simulated failure at NY_FSA. There are two failure scenarios.

end-of-file on communication channel

In a TCP client, such as the primary database connection to the far sync, ORA-03113: end-of-file on communication channel will appear when the far sync is disrupted but the far sync host is still on the network. Possible causes could include far sync instance crash, restart, or host reboot. As long as the host is on the network at the link level, it is possible for the primary host operating system to quickly identify the error and return a failure to the primary database. In such cases, an error such as this one appears immediately in the alert log.

2019-09-10T13:44:33.987037-04:00
ORA-03113: end-of-file on communication channel

In such a case, recovery will be very quick. The primary will immediately connect to the alternate far sync. Synchronization can be complete in less than 10 seconds.

timeout error

A timeout will occur if there is a power loss or network disruption. A network disruption could occur as a result of a network configuration change. I can demonstrate a timeout by cutting off the TCP connectivity at the cloud level. To do that, I switch the NY_FSA host to a security group that does not allow traffic on the Oracle port (1521).


In this case, Oracle error detection and recovery is slow. After 30 seconds (the NetTimeout default), the primary connection to NY_FSA (LAD:2, log archive destination 2) times out:

2019-09-10T18:22:28.214255-04:00
LGWR (PID:4969): ORA-16198: Received timed out error from KSR
LGWR (PID:4969): Attempting LAD:2 network reconnect (16198)
LGWR (PID:4969): LAD:2 network reconnect abandoned
2019-09-10T18:22:28.214836-04:00
Errors in file /u01/app/oracle/diag/rdbms/ny/ORCL/trace/ORCL_lgwr_4969.trc:
ORA-16198: Timeout incurred on internal channel during remote archival
LGWR (PID:4969): Error 16198 for LNO:1 to 'NY_FSA'
2019-09-10T18:22:28.223786-04:00
LGWR (PID:4969): LAD:2 is UNSYNCHRONIZED
LGWR (PID:4969): Failed to archive LNO:1 T-1.S-333, error=16198

After another 30 seconds, the primary gives up on NY_FSA and switches to NY_FSB (LAD:3).

2019-09-10T18:22:58.232029-04:00
LGWR (PID:4969): ORA-16198: Received timed out error from KSR
LGWR (PID:4969): Error 16198 disconnecting from LAD:2 standby host 'NY_FSA'
2019-09-10T18:22:58.232531-04:00
LGWR (PID:4969): LAD:3 is UNSYNCHRONIZED
2019-09-10T18:22:58.232638-04:00
LGWR (PID:4969): LAD:2 no longer supports SYNCHRONIZATION
LGWR (PID:4969): SRL selected to archive T-1.S-334
LGWR (PID:4969): SRL selected for T-1.S-334 for LAD:3
2019-09-10T18:22:58.449325-04:00
Thread 1 advanced to log sequence 334 (LGWR switch)
  Current log# 2 seq# 334 mem# 0: +RECO01/NY/ONLINELOG/group_2.484.1018151803
  Current log# 2 seq# 334 mem# 1: +DATA01/NY/ONLINELOG/group_2.270.1018151815
2019-09-10T18:22:58.519853-04:00
ARC0 (PID:5073): Archived Log entry 675 added for T-1.S-333 ID 0x5c2b52e5 LAD:1
2019-09-10T18:22:58.707980-04:00
ARC1 (PID:5079): SRL selected for T-1.S-333 for LAD:3

Synchronization begins, but four to five minutes elapse until the primary is resynced to the new far sync, NY_FSB.

2019-09-10T18:27:01.888256-04:00
LGWR (PID:4969): LAD:3 is SYNCHRONIZED

While NY_FSB is synchronizing, there are two things to notice. The primary status is:

  NY     - Primary database
    Warning: ORA-16629: database reports a different protection level from the protection mode

and the protection level:

SQL> select protection_level from v$database;

PROTECTION_LEVEL
--------------------
RESYNCHRONIZATION

The final state is:

DGMGRL> show configuration

Configuration - ORCL_CONFIG

  Protection Mode: MaxAvailability
  Members:
  NY     - Primary database
    NY_FSB - Far sync instance (alternate of NY_FSA)
      SF     - Physical standby database

  Members Not Receiving Redo:
  NY_FSA - Far sync instance
    Warning: ORA-16857: member disconnected from redo source for longer than specified threshold

  SF_FSA - Far sync instance
  SF_FSB - Far sync instance

Fast-Start Failover:  Disabled

Configuration Status:
WARNING   (status updated 59 seconds ago)

Adjusting NetTimeout

On a low-latency and usually reliable network, you should reduce the timeout. Example:

DGMGRL> edit database NY set property NetTimeout = 5;
Property "nettimeout" updated

This makes for a cleaner far sync switchover.
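
NetTimeout is a per-member broker property, so consider lowering it on the other members that ship redo as well. A sketch, from the same DGMGRL session:

DGMGRL> edit database SF set property NetTimeout = 5;
DGMGRL> edit far_sync NY_FSA set property NetTimeout = 5;
DGMGRL> edit far_sync NY_FSB set property NetTimeout = 5;
DGMGRL> edit far_sync SF_FSA set property NetTimeout = 5;
DGMGRL> edit far_sync SF_FSB set property NetTimeout = 5;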

Maximum Protection

You might think that you could implement protection mode Maximum Protection with multiple far syncs:

DGMGRL> edit configuration set protection mode as MaxProtection;

Oracle will not let you do that.

Error: ORA-16627: operation disallowed since no member would remain to support protection mode

If you want to implement Maximum Protection, you will need to implement a direct route to one or more physical standby databases.
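
A sketch of one such direct route, replacing the far sync path with SYNC shipping straight to the standby (illustrative; this forfeits the far sync latency benefit across regions):

DGMGRL> edit database NY set property RedoRoutes = '(LOCAL : SF SYNC)';
DGMGRL> edit database SF set property RedoRoutes = '(LOCAL : NY SYNC)';
DGMGRL> edit configuration set protection mode as MaxProtection;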

Conclusion

Several points were covered in this article.

  • Data Guard latency can be reduced by implementing far sync.
  • Data Guard can be made more reliable by placing multiple far syncs near the primary.
  • Benefits to setting up a symmetric arrangement include:
    • Standby readiness has been recently verified.
    • The standby capability is known to be identical to the primary.
    • Heightened DBA staff proficiency.
  • Setup implementation steps were covered.
  • Data Guard does not keep alternate far syncs in sync with the primary.
  • Data Guard is quick to recover from some error conditions.
  • Data Guard detection and recovery from network timeout can be slow.
  • Maximum Protection through a far sync is not supported.

Data Guard 19c in AWS with far sync

By Brian Fitzgerald

Introduction

Oracle Data Guard 19c with far sync setup is described here. Far sync can improve commit response time in a Maximum Availability Data Guard network. The Data Guard configuration is EC2 across two AWS Cloud regions. Database storage is ASM. Far sync creation is done using RMAN. The physical standby and far sync are implemented in a single configuration step. Some observations on network latency and switchover timing are shown.

License

Data Guard is a feature of Oracle Database Enterprise Edition itself and does not require separate licensing. An Active Data Guard license is required for far sync.

By using Amazon Elastic Compute Cloud (EC2), you can control your license costs by configuring only the CPUs that you need.

Environment overview

A system overview is described in this table:

Description                  Value
Cloud                        AWS
Image ID                     ami-2051294a
Red Hat version              7.2
EC2 instance type            m3.medium
Memory                       3.75 GB
CPU                          1
Swap                         2 GB
Grid software owner          grid
Grid Infrastructure version  19.3.0
Database storage             ASM
Oracle software owner        oracle
Oracle Database version      19.3.0
Oracle instance type         Restart

The AWS instance type was initially m3.large (8 GB, 2 CPU), and then downsized after the grid and Oracle home installations were complete. For additional information on the grid install, please refer to grid 19c install with ASM filter driver. For the network description, please refer to Data Guard network in AWS. The breakdown by region, availability zone, host, and role is:

description        primary          far sync         standby
region             N. Virginia      N. Virginia      N. California
availability zone  us-east-1c       us-east-1d       us-west-1b
ip address         172.31.86.22     172.31.28.23     172.32.10.34
hostname -s        ip-172-31-86-22  ip-172-31-28-23  ip-172-32-10-34
db_unique_name     NY               FS               SF

Network latency

We can measure the network latency from the primary to the far sync and to the standby. Start the qperf server on the far sync:

[ec2-user@ip-172-31-28-23 ~]$ qperf

Start the qperf server on the standby:

[ec2-user@ip-172-32-10-34 ~]$ qperf

Measure bandwidth and latency between two N. Virginia availability zones:

[ec2-user@ip-172-31-86-22 ~]$ qperf 172.31.28.23 tcp_bw tcp_lat
tcp_bw:
    bw  =  92.6 MB/sec
tcp_lat:
    latency  =  254 us

Measure bandwidth and latency between regions N. Virginia and N. California:

[ec2-user@ip-172-31-86-22 ~]$ qperf 172.32.10.34 tcp_bw tcp_lat
tcp_bw:
    bw  =  18.4 MB/sec
tcp_lat:
    latency  =  33.3 ms

Network bandwidth is approximately 5x higher, and latency is more than 100x lower, across in-region availability zones compared to cross-region. This fact motivates the far sync: far sync can improve performance in cases where the network latency to the standby is higher than the latency to the far sync.

Database configuration

The initial primary instance configuration was:

*.archive_lag_target=900
*.audit_file_dest='/u01/app/oracle/admin/NY/adump'
*.audit_trail='db'
*.compatible='19.3.0'
*.control_files='+RECO01/NY/CONTROLFILE/current.486.1018151799','+DATA01/NY/CONTROLFILE/current.273.1018151799'
*.db_block_size=8192
*.db_create_file_dest='+DATA01'
*.db_create_online_log_dest_1='+RECO01'
*.db_create_online_log_dest_2='+DATA01'
*.db_name='ORCL'
*.db_recovery_file_dest='+RECO01'
*.db_recovery_file_dest_size=4000m
*.db_unique_name='NY'
*.dg_broker_config_file1='+DATA01/NY/dr1orcl.dat'
*.dg_broker_config_file2='+RECO01/NY/dr2orcl.dat'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=ORCLXDB)'
*.local_listener='LISTENER_NY'
*.log_archive_format='%t_%s_%r.dbf'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=300
*.pga_aggregate_target=360m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1080m
*.standby_file_management='AUTO'
*.undo_tablespace='UNDOTBS1'

If not already done, make these changes:

SQL> alter system set standby_file_management=AUTO;

System altered.

SQL> alter system set dg_broker_config_file1='+DATA01/NY/dr1orcl.dat';

System altered.

SQL> alter system set dg_broker_config_file2='+RECO01/NY/dr2orcl.dat';

System altered.

Optional settings

  • compatible. Must be set to the same value on the primary, on the far sync, and on the standby.
  • db_create_online_log_dest_n. Points logfile members to specific disk groups. Simplifies alter database add standby logfile syntax.
  • archive_lag_target. Time-boxes each archive log. Adjust to manage control file contention.

Alter database

Make sure these alter database changes are done:

SQL> alter database force logging;

Database altered.

Make these changes with the database mounted:

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1140849904 bytes
Fixed Size                  8895728 bytes
Variable Size             301989888 bytes
Database Buffers          822083584 bytes
Redo Buffers                7880704 bytes
Database mounted.
SQL> alter database archivelog;

Database altered.

SQL> alter database flashback on;

Database altered.

SQL> alter database open;

Database altered.

Standby logs

Identify the online redo log size. Check that all online redo logs are the same size. Check that this query returns exactly one row:

SQL> select count(*)numlogs, bytes/1024/1024 mb from v$log group by bytes;

   NUMLOGS         MB
---------- ----------
         3        200

If all online logs are not the same size, correct that condition before proceeding.

If standby redo logs exist, check that they are all the same size, and the same size as the online redo logs. Check that this query returns exactly one row:

SQL> select count(*)numlogs, bytes/1024/1024 mb from v$standby_log group by bytes;

   NUMLOGS         MB
---------- ----------
         4        200

Correct discrepant conditions before proceeding.

Create standby logs

Create standby logs, if needed. You can set:

SQL> alter system set db_create_online_log_dest_1 = '+DATA01';

System altered.

SQL> alter system set db_create_online_log_dest_2 = '+RECO01';

System altered.

And then run, for example:

SQL> alter database add standby logfile size 200m;

as many times as needed to get the desired number of standby logs. The optimal number of standby logs is usually one more than the number of online logs. If a high apply backlog is expected, then increase this number further.

Static listeners

On all hosts, primary, far sync, and standby, in the grid account, in $ORACLE_HOME/network/admin, edit listener.ora.

Primary:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/19.3.0/dbhome_1)
      (SID_NAME = NY)
    )
  )

Far sync:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/19.3.0/dbhome_1)
      (SID_NAME = FS)
    )
  )

Standby:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/19.3.0/dbhome_1)
      (SID_NAME = SF)
    )
  )

Reload the listener. For example:

[grid@ip-172-31-86-22 ~]$ lsnrctl reload

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 02-SEP-2019 12:28:09

Copyright (c) 1991, 2019, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ip-172-31-86-22.ec2.internal)(PORT=1521)))
The command completed successfully

startup far sync

On the far sync, configure the instance and start it nomount.

Far sync audit directory

At the far sync:

[oracle@ip-172-32-10-34 ~]$ mkdir -p /u01/app/oracle/admin/FS/adump

Far sync oratab and environment

[oracle@ip-172-31-28-23 ~]$ echo FS:/u01/app/oracle/product/19.3.0/dbhome_1:N >> /etc/oratab
[oracle@ip-172-31-28-23 ~]$ . oraenv
ORACLE_SID = [FS] ? FS
The Oracle base remains unchanged with value /u01/app/oracle

Far sync orapwd

Create orapwFS on the far sync:

[oracle@ip-172-31-28-23 ~]$ alias oh
alias oh='cd $ORACLE_HOME'
[oracle@ip-172-31-28-23 ~]$ oh
[oracle@ip-172-31-28-23 dbhome_1]$ cd dbs
[oracle@ip-172-31-28-23 dbs]$ orapwd file=orapwFS entries=10 password=zystm.22

Far sync startup

Create a temporary, minimal pfile on the far sync. Set the compatible setting to match the primary.

[oracle@ip-172-31-28-23 ops]$ cat > /tmp/initFS.ora
db_name = ORCL
compatible = '19.3.0'
^D
[oracle@ip-172-31-28-23 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Aug 31 22:01:58 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile='/tmp/initFS.ora'
ORACLE instance started.

Total System Global Area  243268216 bytes
Fixed Size                  8895096 bytes
Variable Size             180355072 bytes
Database Buffers           50331648 bytes
Redo Buffers                3686400 bytes
SQL> Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

Startup the standby

On the standby, configure the instance and start it nomount.

Standby audit directory

At the standby:

[oracle@ip-172-32-10-34 ~]$ mkdir -p /u01/app/oracle/admin/SF/adump

Standby oratab and environment

[oracle@ip-172-32-10-34 ~]$ echo SF:/u01/app/oracle/product/19.3.0/dbhome_1:N >> /etc/oratab
[oracle@ip-172-32-10-34 ~]$ . oraenv
ORACLE_SID = [SF] ? SF
The Oracle base remains unchanged with value /u01/app/oracle

Standby orapwd

Create orapwSF on the standby:

[oracle@ip-172-32-10-34 ~]$ oh
[oracle@ip-172-32-10-34 dbhome_1]$ cd dbs
[oracle@ip-172-32-10-34 dbs]$ orapwd file=orapwSF entries=10 password=zystm.22

Standby startup

Create a temporary, minimal pfile on the standby. Set the compatible setting to match the primary.

[oracle@ip-172-32-10-34 ops]$ cat > /tmp/initSF.ora
db_name = ORCL
compatible = '19.3.0'
^D
[oracle@ip-172-32-10-34 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Aug 31 22:01:58 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile='/tmp/initSF.ora'
ORACLE instance started.

Total System Global Area  243268216 bytes
Fixed Size                  8895096 bytes
Variable Size             180355072 bytes
Database Buffers           50331648 bytes
Redo Buffers                3686400 bytes
SQL> Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

tnsnames.ora

On all hosts, in the oracle account, in $ORACLE_HOME/network/admin/tnsnames.ora, add these entries:

NY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 172.31.86.22)(PORT = 1521))
    (CONNECT_DATA =
      (SID = NY)
    )
  )

FS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 172.31.28.23)(PORT = 1521))
    (CONNECT_DATA =
      (SID = FS)
    )
  )

SF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 172.32.10.34)(PORT = 1521))
    (CONNECT_DATA =
      (SID = SF)
    )
  )

Connectivity checklist

Check that you have configured all of these:

  • VPC enable DNS hostnames
  • DNS resolution across the peering connection
  • Route tables across the peering connection
  • Security groups
  • Static listener
  • orapwd
  • tnsnames.ora
  • ORACLE_HOME environment variable

Test connectivity

From the primary, test connectivity to all three instances:

sqlplus sys/zystm.22@NY as sysdba
sqlplus sys/zystm.22@FS as sysdba
sqlplus sys/zystm.22@SF as sysdba

From the far sync and the standby, repeat these checks.

Duplicate primary to far sync

Now we are going to kick off an RMAN script. RMAN will transfer the spfile, the password file, and the control file.

[oracle@ip-172-31-86-22 ops]$ cat dup.db.farsync.rcv
DUPLICATE TARGET DATABASE
  FOR FARSYNC
  FROM ACTIVE DATABASE
  SPFILE
    SET db_unique_name='FS'
    SET dg_broker_config_file1='+DATA01/FS/dr1orcl.dat'
    SET dg_broker_config_file2='+RECO01/FS/dr2orcl.dat'
    SET audit_file_dest='/u01/app/oracle/admin/FS/adump'
  NOFILENAMECHECK;
[oracle@ip-172-31-86-22 ops]$ rman target sys/zystm.22@NY auxiliary sys/zystm.22@FS

Recovery Manager: Release 19.0.0.0.0 - Production on Thu Sep 5 04:53:02 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1545932534)
connected to auxiliary database: ORCL (not mounted)

RMAN> @ dup.db.farsync.rcv

RMAN> DUPLICATE TARGET DATABASE
2>   FOR FARSYNC
3>   FROM ACTIVE DATABASE
4>   SPFILE
5>     SET db_unique_name='FS'
6>     SET dg_broker_config_file1='+DATA01/FS/dr1orcl.dat'
7>     SET dg_broker_config_file2='+RECO01/FS/dr2orcl.dat'
8>     SET audit_file_dest='/u01/app/oracle/admin/FS/adump'
9>   NOFILENAMECHECK;
Starting Duplicate Db at 05-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=39 device type=DISK

contents of Memory Script:
{
   backup as copy reuse
   passwordfile auxiliary format  '/u01/app/oracle/product/19.3.0/dbhome_1/dbs/orapwORCL'   ;
   restore clone from service  'NY' spfile to
 '/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora';
   sql clone "alter system set spfile= ''/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora''";
}
executing Memory Script

Starting backup at 05-SEP-19
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=88 device type=DISK
Finished backup at 05-SEP-19

Starting restore at 05-SEP-19
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service NY
channel ORA_AUX_DISK_1: restoring SPFILE
output file name=/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 05-SEP-19

sql statement: alter system set spfile= ''/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora''

contents of Memory Script:
{
   sql clone "alter system set  db_unique_name =
 ''FS'' comment=
 '''' scope=spfile";
   sql clone "alter system set  dg_broker_config_file1 =
 ''+DATA01/FS/dr1orcl.dat'' comment=
 '''' scope=spfile";
   sql clone "alter system set  dg_broker_config_file2 =
 ''+RECO01/FS/dr2orcl.dat'' comment=
 '''' scope=spfile";
   sql clone "alter system set  audit_file_dest =
 ''/u01/app/oracle/admin/FS/adump'' comment=
 '''' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

sql statement: alter system set  db_unique_name =  ''FS'' comment= '''' scope=spfile

sql statement: alter system set  dg_broker_config_file1 =  ''+DATA01/FS/dr1orcl.dat'' comment= '''' scope=spfile

sql statement: alter system set  dg_broker_config_file2 =  ''+RECO01/FS/dr2orcl.dat'' comment= '''' scope=spfile

sql statement: alter system set  audit_file_dest =  ''/u01/app/oracle/admin/FS/adump'' comment= '''' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    1140849904 bytes

Fixed Size                     8895728 bytes
Variable Size                301989888 bytes
Database Buffers             822083584 bytes
Redo Buffers                   7880704 bytes
duplicating Online logs to Oracle Managed File (OMF) location
duplicating Datafiles to Oracle Managed File (OMF) location

contents of Memory Script:
{
   sql clone "alter system set  control_files =
  ''+RECO01/FS/CONTROLFILE/current.257.1018155321'', ''+DATA01/FS/CONTROLFILE/current.258.1018155321'' comment=
 ''Set by RMAN'' scope=spfile";
   restore clone from service  'NY' farsync controlfile;
}
executing Memory Script

sql statement: alter system set  control_files =   ''+RECO01/FS/CONTROLFILE/current.257.1018155321'', ''+DATA01/FS/CONTROLFILE/current.258.1018155321'' comment= ''Set by RMAN'' scope=spfile

Starting restore at 05-SEP-19
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=46 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service NY
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=+RECO01/FS/CONTROLFILE/current.383.1018155325
output file name=+DATA01/FS/CONTROLFILE/current.265.1018155325
Finished restore at 05-SEP-19

contents of Memory Script:
{
   sql clone 'alter database mount';
}
executing Memory Script

sql statement: alter database mount

contents of Memory Script:
{
   sql 'alter system archive log current';
}
executing Memory Script

sql statement: alter system archive log current
Finished Duplicate Db at 05-SEP-19

RMAN> **end-of-file**

Note that there are no online logs, and that the standby logs are the same count and size as the primary’s.

SQL> select count(*)numlogs, bytes/1024/1024 mb from v$log group by bytes;

no rows selected

SQL> select count(*)numlogs, bytes/1024/1024 mb from v$standby_log group by bytes;

   NUMLOGS         MB
---------- ----------
         4        200

Notice these points about the operation:

  • All file transfers were done with RMAN. No ssh connection is needed.
  • There is no ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE step. RMAN handled the control file by backing it up at the primary and transferring it to the far sync.
  • Although the primary spfile is in ASM, the far sync spfile ends up on the file system in directory $ORACLE_HOME/dbs.
  • The orapwORCL that you created at the far sync got overwritten by a backup of the primary orapwORCL.

Duplicate primary to standby

Allocate an appropriate number of channels to reduce the time needed to duplicate the database.

[oracle@ip-172-31-86-22 ops]$ cat dup.db.standby.rcv
run {

allocate channel ch01 device type disk;
allocate channel ch02 device type disk;
allocate auxiliary channel aux01 device type disk;
allocate auxiliary channel aux02 device type disk;

DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER
  SPFILE
    SET db_unique_name='SF'
    SET dg_broker_config_file1='+DATA01/SF/dr1orcl.dat'
    SET dg_broker_config_file2='+RECO01/SF/dr2orcl.dat'
    SET audit_file_dest='/u01/app/oracle/admin/SF/adump'
  NOFILENAMECHECK;

}
[oracle@ip-172-31-86-22 ops]$ vi dup.db.standby.rcv
[oracle@ip-172-31-86-22 ops]$ rman target sys/zystm.22@NY auxiliary sys/zystm.22@SF

Recovery Manager: Release 19.0.0.0.0 - Production on Thu Sep 5 05:04:52 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1545932534)
connected to auxiliary database: ORCL (not mounted)

RMAN> @ dup.db.standby.rcv

RMAN> run {
2>
3> allocate channel ch01 device type disk;
4> allocate channel ch02 device type disk;
5> allocate auxiliary channel aux01 device type disk;
6> allocate auxiliary channel aux02 device type disk;
7>
8> DUPLICATE TARGET DATABASE
9>   FOR STANDBY
10>   FROM ACTIVE DATABASE
11>   DORECOVER
12>   SPFILE
13>     SET db_unique_name='SF'
14>     SET dg_broker_config_file1='+DATA01/SF/dr1orcl.dat'
15>     SET dg_broker_config_file2='+RECO01/SF/dr2orcl.dat'
16>     SET audit_file_dest='/u01/app/oracle/admin/SF/adump'
17>   NOFILENAMECHECK;
18>
19> }
using target database control file instead of recovery catalog
allocated channel: ch01
channel ch01: SID=84 device type=DISK

allocated channel: ch02
channel ch02: SID=85 device type=DISK

allocated channel: aux01
channel aux01: SID=40 device type=DISK

allocated channel: aux02
channel aux02: SID=41 device type=DISK

Starting Duplicate Db at 05-SEP-19
current log archived

contents of Memory Script:
{
   backup as copy reuse
   passwordfile auxiliary format  '/u01/app/oracle/product/19.3.0/dbhome_1/dbs/orapwORCL'   ;
   restore clone from service  'NY' spfile to
 '/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora';
   sql clone "alter system set spfile= ''/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora''";
}
executing Memory Script

Starting backup at 05-SEP-19
Finished backup at 05-SEP-19

Starting restore at 05-SEP-19

channel aux01: starting datafile backup set restore
channel aux01: using network backup set from service NY
channel aux01: restoring SPFILE
output file name=/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora
channel aux01: restore complete, elapsed time: 00:00:02
Finished restore at 05-SEP-19

sql statement: alter system set spfile= ''/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileORCL.ora''

contents of Memory Script:
{
   sql clone "alter system set  db_unique_name =
 ''SF'' comment=
 '''' scope=spfile";
   sql clone "alter system set  dg_broker_config_file1 =
 ''+DATA01/SF/dr1orcl.dat'' comment=
 '''' scope=spfile";
   sql clone "alter system set  dg_broker_config_file2 =
 ''+RECO01/SF/dr2orcl.dat'' comment=
 '''' scope=spfile";
   sql clone "alter system set  audit_file_dest =
 ''/u01/app/oracle/admin/SF/adump'' comment=
 '''' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

sql statement: alter system set  db_unique_name =  ''SF'' comment= '''' scope=spfile

sql statement: alter system set  dg_broker_config_file1 =  ''+DATA01/SF/dr1orcl.dat'' comment= '''' scope=spfile

sql statement: alter system set  dg_broker_config_file2 =  ''+RECO01/SF/dr2orcl.dat'' comment= '''' scope=spfile

sql statement: alter system set  audit_file_dest =  ''/u01/app/oracle/admin/SF/adump'' comment= '''' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    1140849904 bytes

Fixed Size                     8895728 bytes
Variable Size                301989888 bytes
Database Buffers             822083584 bytes
Redo Buffers                   7880704 bytes
allocated channel: aux01
channel aux01: SID=44 device type=DISK
allocated channel: aux02
channel aux02: SID=45 device type=DISK
duplicating Online logs to Oracle Managed File (OMF) location
duplicating Datafiles to Oracle Managed File (OMF) location

contents of Memory Script:
{
   sql clone "alter system set  control_files =
  ''+RECO01/SF/CONTROLFILE/current.257.1018156101'', ''+DATA01/SF/CONTROLFILE/current.258.1018156101'' comment=
 ''Set by RMAN'' scope=spfile";
   restore clone from service  'NY' standby controlfile;
}
executing Memory Script

sql statement: alter system set  control_files =   ''+RECO01/SF/CONTROLFILE/current.257.1018156101'', ''+DATA01/SF/CONTROLFILE/current.258.1018156101'' comment= ''Set by RMAN'' scope=spfile

Starting restore at 05-SEP-19

channel aux01: starting datafile backup set restore
channel aux01: using network backup set from service NY
channel aux01: restoring control file
channel aux01: restore complete, elapsed time: 00:00:04
output file name=+RECO01/SF/CONTROLFILE/current.262.1018156111
output file name=+DATA01/SF/CONTROLFILE/current.267.1018156111
Finished restore at 05-SEP-19

contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database

contents of Memory Script:
{
   set newname for clone tempfile  1 to new;
   switch clone tempfile all;
   set newname for clone datafile  1 to new;
   set newname for clone datafile  3 to new;
   set newname for clone datafile  4 to new;
   set newname for clone datafile  7 to new;
   restore
   from  nonsparse   from service
 'NY'   clone database
   ;
   sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to +DATA01 in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 05-SEP-19

channel aux01: starting datafile backup set restore
channel aux01: using network backup set from service NY
channel aux01: specifying datafile(s) to restore from backup set
channel aux01: restoring datafile 00001 to +DATA01
channel aux02: starting datafile backup set restore
channel aux02: using network backup set from service NY
channel aux02: specifying datafile(s) to restore from backup set
channel aux02: restoring datafile 00003 to +DATA01
channel aux02: restore complete, elapsed time: 00:00:38
channel aux02: starting datafile backup set restore
channel aux02: using network backup set from service NY
channel aux02: specifying datafile(s) to restore from backup set
channel aux02: restoring datafile 00004 to +DATA01
channel aux01: restore complete, elapsed time: 00:00:44
channel aux01: starting datafile backup set restore
channel aux01: using network backup set from service NY
channel aux01: specifying datafile(s) to restore from backup set
channel aux01: restoring datafile 00007 to +DATA01
channel aux01: restore complete, elapsed time: 00:00:05
channel aux02: restore complete, elapsed time: 00:00:12
Finished restore at 05-SEP-19

sql statement: alter system archive log current
current log archived

contents of Memory Script:
{
   restore clone force from service  'NY'
           archivelog from scn  2246774;
   switch clone datafile all;
}
executing Memory Script

Starting restore at 05-SEP-19

channel aux01: starting archived log restore to default destination
channel aux01: using network backup set from service NY
channel aux01: restoring archived log
archived log thread=1 sequence=9
channel aux02: starting archived log restore to default destination
channel aux02: using network backup set from service NY
channel aux02: restoring archived log
archived log thread=1 sequence=10
channel aux01: restore complete, elapsed time: 00:00:02
channel aux02: restore complete, elapsed time: 00:00:02
Finished restore at 05-SEP-19

datafile 1 switched to datafile copy
input datafile copy RECID=5 STAMP=1018156184 file name=+DATA01/SF/DATAFILE/system.266.1018156127
datafile 3 switched to datafile copy
input datafile copy RECID=6 STAMP=1018156184 file name=+DATA01/SF/DATAFILE/sysaux.265.1018156129
datafile 4 switched to datafile copy
input datafile copy RECID=7 STAMP=1018156184 file name=+DATA01/SF/DATAFILE/undotbs1.264.1018156167
datafile 7 switched to datafile copy
input datafile copy RECID=8 STAMP=1018156184 file name=+DATA01/SF/DATAFILE/users.269.1018156173

contents of Memory Script:
{
   set until scn  2247713;
   recover
   standby
   clone database
    delete archivelog
   ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 05-SEP-19

starting media recovery

archived log for thread 1 with sequence 9 is already on disk as file +RECO01/SF/ARCHIVELOG/2019_09_05/thread_1_seq_9.261.1018156181
archived log for thread 1 with sequence 10 is already on disk as file +RECO01/SF/ARCHIVELOG/2019_09_05/thread_1_seq_10.259.1018156183
archived log file name=+RECO01/SF/ARCHIVELOG/2019_09_05/thread_1_seq_9.261.1018156181 thread=1 sequence=9
archived log file name=+RECO01/SF/ARCHIVELOG/2019_09_05/thread_1_seq_10.259.1018156183 thread=1 sequence=10
media recovery complete, elapsed time: 00:00:01
Finished recover at 05-SEP-19

contents of Memory Script:
{
   delete clone force archivelog all;
}
executing Memory Script

deleted archived log
archived log file name=+RECO01/SF/ARCHIVELOG/2019_09_05/thread_1_seq_9.261.1018156181 RECID=1 STAMP=1018156181
Deleted 1 objects

deleted archived log
archived log file name=+RECO01/SF/ARCHIVELOG/2019_09_05/thread_1_seq_10.259.1018156183 RECID=2 STAMP=1018156182
Deleted 1 objects

Finished Duplicate Db at 05-SEP-19
released channel: ch01
released channel: ch02
released channel: aux01
released channel: aux02

RMAN> **end-of-file**

RMAN>

Recovery Manager complete.

Notice that the online logs and the standby logs match the primary in count and size:

SQL> select count(*)numlogs, bytes/1024/1024 mb from v$log group by bytes;

   NUMLOGS         MB
---------- ----------
         3        200

SQL> select count(*)numlogs, bytes/1024/1024 mb from v$standby_log group by bytes;

   NUMLOGS         MB
---------- ----------
         4        200

Configure restart

All new systems should be mounted at this stage. Register your restart instances. (At the primary, this step might already be done.)

[oracle@ip-172-31-86-22 ~]$ srvctl add database -database NY -role PRIMARY -stopoption IMMEDIATE -instance ORCL -oraclehome /u01/app/oracle/product/19.3.0/dbhome_1 -spfile +DATA01/ORCL/PARAMETERFILE/spfile.266.1017440879 -diskgroup DATA01,RECO01

At the far sync:

[oracle@ip-172-31-28-23 ~]$ srvctl add database -database FS -role physical_standby -startoption MOUNT -stopoption ABORT -instance FS -oraclehome /u01/app/oracle/product/19.3.0/dbhome_1 -spfile /u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileFS.ora -diskgroup DATA01,RECO01

To activate the change, I found it necessary to issue srvctl start, which does not seem right: “srvctl enable database” should be enough. (In Oracle Restart, “srvctl enable instance” is not available.) Unless you activate the instance this way, it will not start automatically when the host reboots.

[oracle@ip-172-31-28-23 ~]$ srvctl start database -database FS

At the standby:

[oracle@ip-172-32-10-34 ~]$ srvctl add database -database SF -role physical_standby -startoption MOUNT -stopoption ABORT -instance SF -oraclehome /u01/app/oracle/product/19.3.0/dbhome_1 -spfile /u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileSF.ora -diskgroup DATA01,RECO01
[oracle@ip-172-32-10-34 ~]$ srvctl start database -database SF

Reboot all hosts and check that the database instances start automatically, in the proper startup mode.
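
A quick scripted sweep after the reboot can confirm both status and startup mode. This is a minimal sketch, assuming the database names and Oracle home used in this article; run each check on the host that owns the database:

#!/bin/bash
# Sketch: confirm a restart-managed database is registered with the
# expected startup mode and is running. Names are from this article.
export ORACLE_HOME=/u01/app/oracle/product/19.3.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH

# Standby example: expect "Start options: mount" in the configuration.
srvctl config database -database SF | grep -i 'start options'

# Expect: Database is running.
srvctl status database -database SF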

Validate static listeners

Validating the static listeners is critical. For each database or far sync (NY, FS, SF):

  • Connect over the network with sqlplus
  • Shutdown (shutdown abort for standbys and far syncs)
  • Startup (startup mount for standbys and far syncs)

Example:

[oracle@ip-172-31-28-23 ops]$ sqlplus sys/zystm.22@FS as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Sep 13 11:36:37 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> shutdown abort
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1140849904 bytes
Fixed Size                  8895728 bytes
Variable Size             301989888 bytes
Database Buffers          822083584 bytes
Redo Buffers                7880704 bytes
Database mounted.

This is an example of a failed static listener check.

[oracle@ip-172-31-28-23 ops]$ sqlplus sys/zystm.22@FS as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Sep 13 11:36:37 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> shutdown abort
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1140849904 bytes
Fixed Size                  8895728 bytes
Variable Size             301989888 bytes
Database Buffers          822083584 bytes
Redo Buffers                7880704 bytes
Database mounted.
SQL> shutdown abort
ORACLE instance shut down.
ERROR:
ORA-12505: TNS:listener does not currently know of SID given in connect
descriptor


SQL> startup mount
SP2-0640: Not connected

Resolve all failures before proceeding.
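
Before moving on, a scripted sweep over all three aliases can save a round of manual checks. This is a minimal sketch using the connect identifiers from this article; note that it only proves connectivity while the instances are up, so the shutdown/startup cycle above remains the definitive static-registration test:

#!/bin/bash
# Sketch: probe each alias as SYSDBA over the network.
for db in NY FS SF; do
    echo "== $db =="
    sqlplus -L -S sys/zystm.22@$db as sysdba <<'EOF'
select instance_name, status from v$instance;
EOF
done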

Broker start

At this stage, the primary database is open and the far sync and standby instances are mounted. At the primary, far sync, and standby, start the Data Guard broker:

SQL> alter system set dg_broker_start = true;

System altered.

Create the broker configuration

Connect to the Data Guard broker:

[oracle@ip-172-31-86-22 broker]$ dgmgrl sys/zystm.22@NY
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Mon Sep 2 13:59:25 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "NY"
Connected as SYSDBA.

Create the Data Guard broker configuration. One way to do it is to create and test the standby first, and add the far sync later. However, in this example, we add the far sync and the standby in one fell swoop.

When creating new objects, quote your identifiers if you want to preserve name case.

DGMGRL> create configuration 'ORCL_CONFIG' as primary database is 'NY' connect identifier is 'NY';
Configuration "ORCL_CONFIG" created with primary database "NY"
DGMGRL> add far_sync 'FS' as connect identifier is 'FS';
far sync instance "FS" added
DGMGRL> add database 'SF' as connect identifier is 'SF' maintained as physical;
Database "SF" added

Add the redo routes:

DGMGRL> edit database NY set property RedoRoutes = '(LOCAL : FS SYNC)';
Property "redoroutes" updated
DGMGRL> edit database SF set property RedoRoutes = '(LOCAL : FS SYNC)';
Property "redoroutes" updated
DGMGRL> edit far_sync FS set property RedoRoutes = '(NY : SF ASYNC)(SF : NY ASYNC)';
Property "redoroutes" updated

Enable the configuration

DGMGRL> enable configuration
Enabled.

A point of interest is the fal_server configuration symmetry:

parameter     primary       far sync     standby
----------    -----------   ----------   ----------
fal_server    'FS','SF'*    'NY','SF'    'FS','NY'

* after switchover

The Data Guard broker sets fal_server at the far sync and standby when you enable the configuration. If you perform a switchover, the broker sets fal_server at the new standby and clears fal_server at the old standby.
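
You can confirm this from sqlplus; for example, at the standby, a minimal check using the connect identifier from this article:

sqlplus -L -S sys/zystm.22@SF as sysdba <<'EOF'
show parameter fal_server
EOF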

Switchover test

Test switchover to SF. The output becomes:

DGMGRL> switchover to SF
Performing switchover NOW, please wait...
Operation requires a connection to database "SF"
Connecting ...
Connected to "SF"
Connected as SYSDBA.
New primary database "SF" is opening...
Oracle Clusterware is restarting database "NY" ...
Connected to an idle instance.
Connected to an idle instance.
Connected to an idle instance.
Connected to an idle instance.
Connected to "NY"
Connected to "NY"
Switchover succeeded, new primary is "sf"

Test switchover to NY. The output is now:

DGMGRL> switchover to NY
Performing switchover NOW, please wait...
New primary database "NY" is opening...
Oracle Clusterware is restarting database "SF" ...
Connected to an idle instance.
Connected to an idle instance.
Connected to "SF"
Connected to "SF"
Switchover succeeded, new primary is "ny"

Setting the protection mode

Maximum Performance

In Maximum Performance protection mode:

Transactions commit as soon as all redo data generated by those transactions has been written to the online log.

Maximum Availability

In Maximum Availability mode:

Under normal operations, transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log AND based on user configuration, one of the following is true:

    • redo has been received at the standby, I/O to the standby redo log has been initiated, and acknowledgement sent back to primary
    • redo has been received and written to standby redo log at the standby and acknowledgement sent back to primary

If the primary does not receive acknowledgement from at least one synchronized standby, then it operates as if it were in maximum performance mode to preserve primary database availability until it is again able to write its redo stream to a synchronized standby database.

With far sync, replace “standby” in the preceding description with “far sync”. Transactions do not commit until redo has been written to the far sync standby log. Because the far sync is in the same region as the primary, commit performance of a Maximum Availability system is expected to be better with a far sync.

Set protection mode to Maximum Availability.

DGMGRL> edit configuration set protection mode as MaxAvailability;
Succeeded.

Wait a few minutes and check:

DGMGRL> show configuration

Configuration - ORCL_CONFIG

  Protection Mode: MaxAvailability
  Members:
  NY - Primary database
    FS - Far sync instance
      SF - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 51 seconds ago)
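
The mode can also be confirmed from SQL at each database; a minimal check, using the connect identifier from this article:

sqlplus -L -S sys/zystm.22@NY as sysdba <<'EOF'
select protection_mode, protection_level from v$database;
EOF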

Test switchover and switch back:

DGMGRL> switchover to SF
Performing switchover NOW, please wait...
Operation requires a connection to database "SF"
Connecting ...
Connected to "SF"
Connected as SYSDBA.
New primary database "SF" is opening...
Oracle Clusterware is restarting database "NY" ...
Connected to an idle instance.
Connected to an idle instance.
Connected to "NY"
Connected to "NY"
Switchover succeeded, new primary is "sf"
DGMGRL> switchover to NY
Performing switchover NOW, please wait...
Operation requires a connection to database "NY"
Connecting ...
Connected to "NY"
Connected as SYSDBA.
New primary database "NY" is opening...
Oracle Clusterware is restarting database "SF" ...
Connected to an idle instance.
Connected to an idle instance.
Connected to an idle instance.
Connected to "SF"
Connected to "SF"
Switchover succeeded, new primary is "ny"

Notice at the far sync that one or more standby logs are assigned:

SQL> select group#, bytes/1024/1024 mb, thread#, sequence#, status from v$standby_log;

    GROUP#         MB    THREAD#  SEQUENCE# STATUS
---------- ---------- ---------- ---------- ----------
         1        200          1        572 ACTIVE
         2        200          1          0 UNASSIGNED
         3        200          1          0 UNASSIGNED
         4        200          0          0 UNASSIGNED

and at the physical standby that one or more standby logs are assigned:

SQL> select group#, bytes/1024/1024 mb, thread#, sequence#, status from v$standby_log;

    GROUP#         MB    THREAD#  SEQUENCE# STATUS
---------- ---------- ---------- ---------- ----------
         4        200          1          0 UNASSIGNED
         5        200          1        572 ACTIVE
         6        200          0          0 UNASSIGNED
         7        200          0          0 UNASSIGNED

Disabling far sync

You can disable far sync:

DGMGRL> edit database NY set property RedoRoutes = '(LOCAL : SF SYNC)';
Property "redoroutes" updated
DGMGRL> edit database SF set property RedoRoutes = '(LOCAL : NY SYNC)';
Property "redoroutes" updated
DGMGRL> edit far_sync FS set property RedoRoutes = '';
Property "redoroutes" updated
DGMGRL> disable far_sync FS
Disabled.

The configuration display looks like this:

DGMGRL> show configuration

Configuration - ORCL_CONFIG

  Protection Mode: MaxAvailability
  Members:
  NY - Primary database
    SF - Physical standby database

  Members Not Receiving Redo:
  FS - Far sync instance (disabled)
    ORA-16749: The member was disabled manually.

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 24 seconds ago)

Re-enabling far sync

You can re-enable far sync. Touch the objects in the reverse of the order used to disable far sync: far sync, far sync redo routes, database redo routes.

DGMGRL> enable far_sync FS
Enabled.
DGMGRL> edit far_sync FS set property RedoRoutes = '(NY : SF ASYNC)(SF : NY ASYNC)';
Property "redoroutes" updated
DGMGRL> edit database SF set property RedoRoutes = '(LOCAL : FS SYNC)';
Property "redoroutes" updated
DGMGRL> edit database NY set property RedoRoutes = '(LOCAL : FS SYNC)';
Property "redoroutes" updated

Check:

DGMGRL> show configuration

Configuration - ORCL_CONFIG

  Protection Mode: MaxAvailability
  Members:
  NY - Primary database
    FS - Far sync instance
      SF - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 39 seconds ago)

Removing the far sync

You can remove a disabled far sync from the configuration.

DGMGRL> remove far_sync FS
Removed far sync instance "fs" from the configuration

Switchover performance

We want to know how long the new primary takes to become available: the elapsed time from issuing the switchover command until the new primary is open. The times appear in the alert log. For example:

2019-09-05T14:44:04.271609-04:00
SWITCHOVER VERIFY BEGIN
...
2019-09-05T14:44:59.392512-04:00
TMI: adbdrv open database END 2019-09-05 14:44:59.392313
Starting background process CJQ0
Completed: ALTER DATABASE OPEN
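
Extracting the two markers by hand gets tedious across many trials, so a small script can do it. This is a sketch assuming GNU date, the TMI open marker shown above, and a hypothetical alert log path; note that the verify marker is written by the old primary and the open marker by the new primary, so point the script at the appropriate log(s):

#!/bin/bash
# Sketch: seconds from switchover start to new-primary open, scraped
# from alert log markers. The path is hypothetical; adjust to suit.
ALERT=/u01/app/oracle/diag/rdbms/ny/ORCL/trace/alert_ORCL.log

# The ISO timestamp sits on the line above 'SWITCHOVER VERIFY BEGIN'.
start=$(grep -B1 'SWITCHOVER VERIFY BEGIN' "$ALERT" | tail -2 | head -1)

# The TMI line embeds the open-complete time in fields 6 and 7.
open=$(grep 'TMI: adbdrv open database END' "$ALERT" | tail -1 | awk '{print $6, $7}')

echo "elapsed: $(echo "$(date -d "$open" +%s.%N) - $(date -d "$start" +%s.%N)" | bc) s"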

We can compare database opening time with far sync

  Protection Mode: MaxAvailability
  Members:
  NY - Primary database
    FS - Far sync instance
      SF - Physical standby database

to time without far sync (direct route)

  Protection Mode: MaxAvailability
  Members:
  NY - Primary database
    SF - Physical standby database

With far sync, the switchover timing data are:

sw to   date       start sw     new primary open   elapsed time to open (s)   redo route   prot level
SF      9/5/2019   05:54:16.9   05:55:11.4         54.6                       far sync     Max Perf
NY      9/5/2019   06:02:11.7   06:04:12.2         120.5                      far sync     Max Perf
SF      9/5/2019   06:09:34.2   06:10:40.9         66.8                       far sync     Max Perf
NY      9/5/2019   06:13:32.7   06:15:21.9         109.1                      far sync     Max Perf
SF      9/5/2019   13:38:38.6   13:40:18.1         99.5                       far sync     Max Avail
NY      9/5/2019   14:07:56.7   14:10:23.7         147.0                      far sync     Max Avail
SF      9/5/2019   14:15:32.9   14:17:58.9         145.9                      far sync     Max Avail
NY      9/5/2019   14:20:04.6   14:21:38.9         94.3                       far sync     Max Avail

Average 104.7
Stdev    33.5

compared to without far sync:

sw to   date       start sw     new primary open   elapsed time to open (s)   redo route   prot level
SF      9/5/2019   06:19:48.0   06:20:47.4         59.4                       direct       Max Perf
NY      9/5/2019   06:23:10.5   06:24:06.8         56.3                       direct       Max Perf
SF      9/5/2019   06:27:20.5   06:28:19.5         59.0                       direct       Max Perf
NY      9/5/2019   06:29:57.2   06:30:54.0         56.8                       direct       Max Perf
SF      9/5/2019   14:34:49.6   14:35:50.1         60.5                       direct       Max Avail
NY      9/5/2019   14:38:04.4   14:38:59.3         54.9                       direct       Max Avail
SF      9/5/2019   14:40:59.6   14:41:58.1         58.5                       direct       Max Avail
NY      9/5/2019   14:44:04.3   14:44:59.4         55.1                       direct       Max Avail

Average 57.6
Stdev    2.1

Without far sync, switchover time is lower on average and more consistent (smaller standard deviation). Far sync, therefore, imposes a switchover time penalty.

Conclusion

A far sync implementation is presented. There are several points of interest.

  • The platform is Red Hat Linux 7.2 on Amazon AWS EC2.
  • The network setup is described in a separate article.
  • You can measure network latency and throughput with qperf.
  • The grid and oracle home users are separate accounts.
  • Database storage is in ASM, not operating system files.
  • The number of standby log files depends on the expected workload.
  • Parameter compatible must be set the same at the primary, far sync, and standby.
  • Check standby log size before duplicating the database.
  • File transfer is by RMAN only.
  • SSH trust is not required to set up Data Guard.
  • Multiple RMAN channels should be used to duplicate the database.
  • Configure Restart for all instances.
  • Validate the static listeners.
  • Data Guard configuration and operations are done from the broker, not sqlplus.
  • The standby and the far sync are set up at the same time, not separately.
  • Reboot hosts and confirm automatic startup, startup mode, and Data Guard role.
  • Test switchover after configuring restart.
  • Data Guard broker manages fal_server at all nodes during role transitions.
  • You can change the protection mode to Maximum Availability.
  • Far sync can reduce or prevent data loss.
  • With far sync, switchover takes longer and the elapsed time is less consistent.

We have presented an Oracle Data Guard 19c setup with far sync, along with some findings on performance. Some techniques here are variants on contributions found elsewhere.

grid setup error: could not retrieve local node name

By Brian Fitzgerald

Error

In AWS EC2 Red Hat Linux, gridSetup.sh reports this error:

PRVF-0002 : could not retrieve local node name

Solution

Set the EnableDnsHostnames flag on the VPC.

Error details

In Linux, you run Oracle 19c grid setup:

[grid@ip-10-0-2-197 ~]$ cd /u01/app/19.3.0/grid
[grid@ip-10-0-2-197 grid]$ ./gridSetup.sh

After the splash screen:

[screenshot: installer splash screen]

this error appears:

[screenshot: PRVF-0002 error dialog]

The error appears in several log files. For example: /tmp/GridSetupActions2019-09-01_09-40-29AM/gridSetupActions2019-09-01_09-40-29AM.log

SEVERE: [Sep 1, 2019 9:40:36 AM] [FATAL] PRVF-0002 : could not retrieve local node name
ip-10-0-2-197.ec2.internal: ip-10-0-2-197.ec2.internal: Name or service not known.
Refer associated stacktrace #oracle.install.commons.util.exception.AbstractErrorAdvisor:165

/tmp/GridSetupActions2019-09-01_09-40-29AM/time2019-09-01_09-40-29AM.log

oracle.cluster.verification.VerificationException: PRVF-0002 : could not retrieve local node name
ip-10-0-2-197.ec2.internal: ip-10-0-2-197.ec2.internal: Name or service not known

Note that the local hostname does not resolve to an IP address:

[grid@ip-10-0-2-197 grid]$ hostname
ip-10-0-2-197.ec2.internal
[grid@ip-10-0-2-197 grid]$ ping ip-10-0-2-197.ec2.internal
ping: ip-10-0-2-197.ec2.internal: Name or service not known

Cause

The Virtual Private Cloud (VPC) has the EnableDnsHostnames flag set to Disabled (False). This could happen in a non-default VPC, meaning a VPC that you create. You can see this in the AWS console:

[screenshot: VPC attributes in the AWS console]

or from python:

Python 2.7.5 (default, Jun 11 2019, 14:33:56)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from boto3 import client
>>> rsp = client('ec2').describe_vpc_attribute(
...     Attribute='enableDnsHostnames',
...     VpcId = 'vpc-0786f6ef4dc16cefc'
... )
>>> rsp['EnableDnsHostnames']['Value']
False

or from the command line:

[grid@ip-10-0-2-197 grid]$ aws ec2 describe-vpc-attribute --vpc-id vpc-0786f6ef4dc16cefc --attribute enableDnsHostnames
{
    "VpcId": "vpc-0786f6ef4dc16cefc",
    "EnableDnsHostnames": {
        "Value": false
    }
}

Fix

Set the EnableDnsHostnames flag, which the AWS manual describes:

Indicates whether the instances launched in the VPC get DNS hostnames. If enabled, instances in the VPC get DNS hostnames; otherwise, they do not.

From the VPC Dashboard, select your VPC. Click Actions, and select “Edit DNS hostnames”. At the Edit DNS hostnames screen, check “enable” and press “Save”.

[screenshot: Edit DNS hostnames screen]

or, in python:

>>> from boto3 import client
>>> rsp = client('ec2').modify_vpc_attribute(
...     VpcId = 'vpc-0786f6ef4dc16cefc',
...     EnableDnsHostnames = { 'Value': True }
... )

or, using the aws command line, issue modify-vpc-attribute:

[grid@ip-10-0-2-197 grid]$ aws ec2 modify-vpc-attribute --vpc-id vpc-0786f6ef4dc16cefc --enable-dns-hostnames
[grid@ip-10-0-2-197 grid]$ aws ec2 describe-vpc-attribute --vpc-id vpc-0786f6ef4dc16cefc --attribute enableDnsHostnames
{
    "VpcId": "vpc-0786f6ef4dc16cefc",
    "EnableDnsHostnames": {
        "Value": true
    }
}

Retest

The change does not affect the EC2 instance immediately.

[grid@ip-10-0-2-197 grid]$ ping ip-10-0-2-197.ec2.internal
ping: ip-10-0-2-197.ec2.internal: Name or service not known

Reboot the host:

[ec2-user@ip-10-0-2-197 scr]$ sudo reboot

Now test:

[ec2-user@ip-10-0-2-197 ~]$ ping ip-10-0-2-197.ec2.internal
PING ip-10-0-2-197.ec2.internal (10.0.2.197) 56(84) bytes of data.

Now gridSetup.sh starts normally, and the installer menu appears:

[grid@ip-10-0-2-197 grid]$ ./gridSetup.sh

[screenshot: installer menu]

Discussion

Note that error PRVF-0002 has been widely reported in the blogs. In this case, the environment is AWS EC2 Red Hat Linux 7.2 and Oracle 19c Grid Infrastructure. The error also appears in Oracle Linux, and in Oracle versions as early as 11g. PRVF-0002 can also appear in a silent install.

The fix is sometimes given as a local /etc/hosts edit. In this article, however, I propose a fix that you implement once per VPC, so you do not have to edit /etc/hosts on every host, and you get more consistent deployments across all VPCs, not just the default VPC.
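
For reference, the per-host workaround is a single hosts entry such as the following (hypothetical line using this host's private IP):

# /etc/hosts workaround, needed on every host; the VPC fix avoids this.
10.0.2.197   ip-10-0-2-197.ec2.internal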

Conclusion

gridSetup error “PRVF-0002 : could not retrieve local node name” can be resolved at the VPC level by setting EnableDnsHostnames.

grid 19c install with ASM filter driver

Introduction

Starting in Oracle 12.2, the grid installation experience changed in two major ways:

  1. In the past, you unzipped the binaries into a download folder and executed runInstaller. Oracle Universal Installer (OUI) copied the binaries to your grid oracle home. In the new procedure, you unzip the grid binaries into the grid home and then run gridSetup.sh to configure the software in-place.
  2. Instead of configuring asmlib, you are going to initialize the disks using ASM Filter Driver (AFD).

These changes raise new organizational issues and introduce a new, critical step.

Scope

The scope of this article is:

  • Installation to cloud virtual machine (Azure or AWS EC2)
  • Oracle Restart (Not RAC)
  • x86_64 hardware
  • Linux 7
  • udev is not considered
  • New install, not an upgrade
  • No preexisting asmlib

Grid infrastructure in the cloud

RAC

The main purpose of Oracle Grid Infrastructure was to support Real Application Clusters (RAC). RAC requires sharing storage volumes across hosts, which most cloud providers disallow. An iSCSI server can be used to share drives, but at the cost of added complexity. RAC deployments are, therefore, less common in the cloud.

ASM

Oracle Grid Infrastructure is extremely useful, even without RAC. Automatic Storage Management is a main component of grid. Some benefits of ASM are:

  • Convenient, consolidated, managed storage
  • Efficient RMAN and Data Guard administration
  • Monitoring and managing space in ASM using Oracle Enterprise Manager

restart

An Oracle standalone (non-RAC) database that is running on Grid Infrastructure is known as a “restart”. You use grid to manage Oracle startup and shutdown, and to monitor the database instance. Oracle Restart works very well in the cloud.

Organizational issues

Previously, during the operating system setup, asmlib could be configured along with the other root steps, prior to grid installation. AFD labeling, however, must wait until grid installation has begun. In an organization with segregation of SA and DBA duties, the DBA will need to schedule three separate tasks for the SA:

  1. Setup the operating system
  2. Label ASM disks using AFD
  3. Run root.sh during the grid setup

Instead of personal handoffs, you can do one of these:

  1. Have one person with root and grid access perform all steps.
  2. Use an automated deployment tool.
  3. Run a script as root with some steps su’d to grid.

This blog article covers manual command line steps as root or grid, and use of the gridSetup.sh X Window System GUI. A sketch of option 3 follows.
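
Here is a sketch of option 3, run as root, using the paths, device, and label that appear later in this article; the OS setup details are elided:

#!/bin/bash
# Sketch: root-driven wrapper with the grid-owned step su'd to grid.
export ORACLE_HOME=/u01/app/19.3.0/grid

# 1. Operating system setup steps (packages, sysctl, users) go here.

# 2. Root step: label the ASM disk (see the AFD label section below).
export ORACLE_BASE=/tmp
$ORACLE_HOME/bin/asmcmd afd_label DATA01_00001 /dev/sdc --init

# 3. Grid step: launch the setup as the grid user.
su - grid -c "$ORACLE_HOME/gridSetup.sh"

#    Root step: run root.sh when the installer prompts for it.
#    $ORACLE_HOME/root.sh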

Steps leading up to AFD

The steps leading up to ASM disk labeling are, briefly:

  • Select a compatible machine image (Linux on x86_64)
  • Create the OS (ex: 2 CPU, 8G RAM, 32G operating system disk)
  • Update and install packages
  • Configure the operating system (swap, sysctl.conf)
  • Create the grid user and directories
  • Download and unzip the software

Now you are ready to configure your ASM disks.

Disk allocation

In the cloud, you can allocate new disks from the web console. You can also use a command line tool, or write a script to do the job, in Python for example. You can select from among the available types (hard drive or SSD), specify the size, and in some cases, specify the IOPS.

You can attach a disk to a live system, and you can detach a disk from a live system. Before allocating new disks, list the existing disks:

[root@grid19c ~]# df /
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sda2       33025276 12824288  20200988  39% /
[root@grid19c ~]# swapon
NAME                   TYPE SIZE USED PRIO
/mnt/resource/swapfile file   2G   0B   -2
[root@grid19c ~]# ls -l /dev/sd*
brw-rw----. 1 root disk 8, 0 Aug 23 12:40 /dev/sda
brw-rw----. 1 root disk 8, 1 Aug 23 12:40 /dev/sda1
brw-rw----. 1 root disk 8, 2 Aug 23 12:40 /dev/sda2
brw-rw----. 1 root disk 8, 16 Aug 23 12:40 /dev/sdb
brw-rw----. 1 root disk 8, 17 Aug 23 12:40 /dev/sdb1

AWS

We are mainly interested in Linux systems that are available in the AWS Marketplace. The virtualization type in the AWS Marketplace right now is predominantly Hardware Virtual Machine (HVM). In the systems that I have checked, the possible device names are /dev/xvd[a-z]: only 26 devices, which is not very flexible. The allowable device size range is 1 GiB to 16384 GiB. The largest SSD devices are burstable up to 64000 IOPS. In AWS, an attached device can be resized.

[screenshot: AWS volume creation]

AWS lets you choose the Linux device node name. You can retrieve the node name from the console, the command line, or the API. For example:

>>> from boto3 import resource
>>> resource('ec2').Volume('vol-0679cfe2a209db2ed').attachments[0]['Device']
'/dev/sdg'
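
From the command line, allocation and attachment are two calls. A hedged sketch; the zone, size, volume ID, and instance ID below are placeholders:

# Sketch: allocate a 100 GiB gp2 volume, then attach it as /dev/sdg.
aws ec2 create-volume --availability-zone us-east-1a --size 100 --volume-type gp2
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sdg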

Azure

In Azure, disks range in size from 32 GiB to 32767 GiB. A disk can be resized, and the type (HDD or SSD) may be modified only when the disk is unattached. The largest premium SSD offers IOPS up to 20000.

[screenshot: Azure disk creation]

Azure does not identify the new disk names for you. I have not found a way to retrieve the operating system node name from the Azure cloud. The closest I can get is to display the attachment status.

>>> from azure.common.credentials import get_azure_cli_credentials
>>> from azure.mgmt.compute import ComputeManagementClient
>>> cli = ComputeManagementClient(
... *get_azure_cli_credentials()
... )
>>> cli.disks.get(
... resource_group_name='ora',
... disk_name='asm-0'
... ).disk_state
'Attached'

This seems like a gap to me. You want a reliable method to distinguish new disks from existing disks, and a positive method to identify each disk.

Identify the new disks

Example:

brw-rw----. 1 root disk 8, 32 Aug 23 12:40 /dev/sdc
brw-rw----. 1 root disk 8, 48 Aug 23 12:40 /dev/sdd
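
Lacking a positive identification method, a before-and-after snapshot of the device list is a serviceable workaround; a minimal sketch:

# Sketch: diff the device list across the attachment.
ls -l /dev/sd* > /tmp/disks.before
# ... attach the new disks from the cloud console or CLI ...
ls -l /dev/sd* > /tmp/disks.after
diff /tmp/disks.before /tmp/disks.after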

AFD label

Prior to starting the grid install, you need to label the disk or disks that you intend to use for the first ASM disk group. We will use asmcmd afd_label. This is a critical step.

afd_label

To label your disks, sudo to root and execute:

asmcmd afd_label label devicenode --init

For example:

[root@grid19c ~]# export ORACLE_HOME=/u01/app/19.3.0/grid
[root@grid19c ~]# export ORACLE_BASE=/tmp
[root@grid19c ~]# cd /u01/app/19.3.0/grid/bin
[root@grid19c bin]# ./asmcmd afd_label DATA01_00001 /dev/sdc --init

  • ORACLE_BASE=/tmp avoids creating root-owned files under the grid oracle base.
  • Exercise care to follow this instruction exactly, because an incorrect invocation may not print an error message at all.

separation of duties

If root steps are not done by the DBA, then the DBA should prepare a script for the AFD labeling. For example:

#!/bin/bash

export ORACLE_HOME=/u01/app/19.3.0/grid
export ORACLE_BASE=/tmp
cd /u01/app/19.3.0/grid/bin
./asmcmd afd_label DATA01_00001 /dev/sdc --init

afd_lslbl

If your disk discovery string is the default value ‘/dev/sd*’, as in Azure, you will be able to list your ASM disks with this simple command:

[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
DATA01_00001 /dev/sdc

On AWS, your disk string might be ‘/dev/xvd*’, so this might not work:

[root@ip-172-31-86-22 bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'No devices to be scanned.

In that case, you need to specify the disk string on the command line:

[root@ip-172-31-86-22 bin]# ./asmcmd afd_lslbl '/dev/xvd*'
--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
ASM01_00001 /dev/xvdc

After you run root.sh, you do not need to specify the disk discovery string.

[root@ip-172-31-25-179 bin]# ./asmcmd afd_lslbl
--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
ASM01_00001 /dev/xvdc

After root.sh, afd_lslbl gets its disk string from file /etc/oracleafd.conf:

# cat /etc/oracleafd.conf
afd_diskstring='/dev/xvd*'

afd_unlabel

You can erase the AFD label with:

[root@grid19c bin]# ./asmcmd afd_unlabel /dev/sdc --init
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'No devices to be scanned.

Bugs and limitations

the afd_label --init option

If you leave out the --init option, afd_label might not write a label to your disk, yet it prints no error message and returns a success code.

[root@grid19c bin]# ./asmcmd afd_label DATA01_00001 /dev/sdc
[root@grid19c bin]# echo $?
0
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'No devices to be scanned.

You can use dd to verify that the disk is empty:

[root@grid19c bin]# dd if=/dev/sdc ibs=8192 obs=8192 count=1 | od -X
0000000 00000000 00000000 00000000 00000000
*
0020000
1+0 records in
1+0 records out
8192 bytes (8,2 kB) copied, 0.032763 s, 25.0 MB/s
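
Because the failure is silent, it is worth scripting a label-then-verify step. A sketch, using the paths from this article:

#!/bin/bash
# Sketch: label the disk, then confirm the label landed, since
# afd_label can fail silently and still return 0 (see above).
export ORACLE_HOME=/u01/app/19.3.0/grid
export ORACLE_BASE=/tmp
cd $ORACLE_HOME/bin

./asmcmd afd_label DATA01_00001 /dev/sdc --init
if ./asmcmd afd_lslbl '/dev/sd*' | grep -q '^DATA01_00001'; then
    echo "label DATA01_00001 verified"
else
    echo "label missing: check the afd_label invocation" >&2
    exit 1
fi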

the afd_unlabel --init option

Likewise, if you run afd_unlabel without the --init option, asmcmd may silently fail to unlabel your disk.

[root@grid19c bin]# ./asmcmd afd_label DATA01_00001 /dev/sdc --init
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
DATA01_00001 /dev/sdc
[root@grid19c bin]# ./asmcmd afd_unlabel /dev/sdc
[root@grid19c bin]# ./asmcmd afd_lslbl
Could not open pfile '/etc/oracleafd.conf'--------------------------------------------------------------------------------
Label Duplicate Path
================================================================================
DATA01_00001 /dev/sdc

You can also use dd to view the label:

[root@grid19c bin]# dd if=/dev/sdc ibs=8192 obs=8192 count=1 | od -X
0000000 00000000 00000000 00000000 1f75bc96
0000020 00000000 00000000 00000000 00000000
0000040 4c43524f 4b534944 41544144 305f3130
0000060 31303030 00000000 00000000 00000000
0000100 00000000 00000000 00000000 00000000
*
0000440 00000000 00000000 0500000a 5d5ee7d3
0000460 00000000 00000000 00000000 00000000
*
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied*
0020000

other utilities

In this context, other afd commands might run, produce no output, do nothing, and exit with a success code.

[root@grid19c bin]# ./asmcmd afd_state
[root@grid19c bin]# echo $?
0
[root@grid19c bin]# ./asmcmd afd_configure
[root@grid19c bin]# echo $?
0

At this stage of the grid installation, only the afd_label command is needed. You can also run afd_unlabel and afd_lslbl. If you experiment with other commands or options, you may end up in a state that is difficult to diagnose.

grid installation directory

Oracle grid can be installed in virtually any directory. As shipped, however, the path “/u01/app/19.0.0/grid” is hard-coded into scripts such as kfod, so you cannot run kfod and other such scripts at this early stage. Later in the installation procedure, root.sh replaces these files with scripts that can run out of any installation directory.

[root@grid19c bin]# ./asmcmd
/u01/app/19.3.0/grid/bin/kfod: line 22: /u01/app/19.0.0/grid/bin/kfod.bin: No such file or directory
Use of uninitialized value $result[0] in scalar chomp at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5982.
Use of uninitialized value $result[0] in split at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5985.
Use of uninitialized value $clus_mode in scalar chomp at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5987.
Use of uninitialized value $clus_mode in concatenation (.) or string at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5988.
Use of uninitialized value $clus_mode in string eq at /u01/app/19.3.0/grid/lib/asmcmdbase.pm line 5993.
...
ASMCMD> exit

conclusion

Most utilities are designed to run with full functionality after gridSetup.sh and root.sh have completed. One notable exception is a subset of asmcmd afd subcommands with very specific options.

Because the asmcmd AFD options are buggy, or at least limited, it is critical to know those limitations and to get the AFD disk labeling correct the first time.

Grid setup

This section describes the grid setup, with emphasis on the Create ASM Disk Group screen.

  • Start an X server (Xming) on Windows
  • On Linux, sudo to grid and cd to the grid home
  • Start gridSetup.sh
  • At the Configuration Options screen, select Standalone (Oracle Restart).
  • Click Next to go to the Create ASM Disk Group screen.

Here is where you are going to create your first ASM disk group. gridSetup.sh scans the operating system for labeled disks. Labeled disks matching “/dev/sd*” will appear in your display.

[screenshot: Create ASM Disk Group screen (Azure)]

On some AWS systems, you need to search for “/dev/xvd*”: click “Change Disk Discovery Path”. For example:

[screenshot: Change Disk Discovery Path dialog]

[screenshot: Create ASM Disk Group screen (AWS)]

You can change the disk group name, allocation unit size, and redundancy. Select the disk or disks that you want in your first disk group. Verify that “Configure Oracle ASM Filter Driver” is checked. Press Next.

Continuing the grid install

After the Create ASM Disk Group screen, installation can proceed smoothly:

  • Complete the remaining screens
  • When prompted, run root.sh

After root.sh is finished, all utilities in grid $ORACLE_HOME/bin should operate with full functionality.

Summary

Oracle Grid Infrastructure substantially improves manageability. Grid is less complex to install than Oracle RAC, but it includes a disk labeling step that demands attention to detail. The Oracle 19c grid installation requires configuration of the ASM Filter Driver prior to running gridSetup.sh. You can use cloud and Linux administrative tools to allocate storage, and then use asmcmd to label the disks.