How wallet locality is determined

By Brian Fitzgerald

Introduction

With orapki, you can create a local, auto-login wallet. The Oracle documentation states:

You cannot move local auto-login wallets to another computer. They must be used on the host on which they are created.

Questions arise:

  • How secure is the “local” feature?
  • How is “local” determined?
  • How could I open a wallet if it gets restored to a different host under unplanned circumstances?

This article sets out to answer these questions.

How to uniquely identify a host

There are multiple host attributes that one might use to uniquely identify a host, including:

  • hostname
  • IP address
  • hostid
  • MAC address

I tried two of these.

Setup

As root, create an auto-login local wallet.

Check that the wallet does not require a password.

Test

As root, change the hostname and retest the wallet. orapki prompts for a password, so the wallet is no longer auto-login. That demonstrates that Oracle checks the system hostname to determine whether the wallet is on the original host. We are done!

Before that, I had tried changing the hostid, which had no effect on wallet locality.

Security implications

Oracle states that “Local auto-login wallets are used for scenarios where additional security is required.” However, one can defeat the measure simply by issuing the hostname command. Clearly, local auto-login wallets offer little in the way of real security.
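
To illustrate the kind of check involved, here is a toy sketch, not Oracle's actual implementation (the real wallet format and any hashing of the hostname are unknown here): a local wallet can be modeled as secret data tagged with the hostname recorded at creation time.

```python
import socket

def create_local_wallet(secret, hostname=None):
    """Toy model: tag the wallet with the creating host's name."""
    return {"secret": secret, "host": hostname or socket.gethostname()}

def open_local_wallet(wallet, hostname=None):
    """Auto-open only when the current hostname matches the recorded one."""
    current = hostname or socket.gethostname()
    if wallet["host"] == current:
        return wallet["secret"]  # auto-login succeeds
    raise PermissionError("password required: hostname mismatch")

wallet = create_local_wallet("s3cret", hostname="hostA")
print(open_local_wallet(wallet, hostname="hostA"))  # auto-login works on "hostA"
# On any other host the check fails -- unless that host is renamed to "hostA",
# which is exactly the spoof described above.
```

Because the check compares only a freely settable string, anyone with root on the restore target can satisfy it.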

Conclusion

Oracle determines whether a local auto-login wallet is on the host where it was created by checking the system hostname. This check is easy to spoof and does not substantially enhance security. After an unplanned restore to a different host, you can open the wallet by setting the new host’s hostname to the original value with the hostname command as root.

10 things you didn’t know about tablespace quotas

By Brian Fitzgerald

Introduction

DBAs use tablespace quotas to limit where segments can be placed. Sometimes grant and revoke interact with quotas in surprising ways.

Setup

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Aug 31 22:59:29 2022
Version 19.16.0.0.0

Copyright (c) 1982, 2022, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.16.0.0.0

SQL> set linesize 32767
SQL> set trimspool on
SQL> col segment_name format a30
SQL> col granted_role format a30
SQL> create user U identified by U default tablespace USERS;

User created.

1. When you grant role DBA to a user, unlimited tablespace gets granted.

The UNLIMITED TABLESPACE privilege is granted as a separate, special-case side effect of granting DBA; the behavior is internal to Oracle, and the privilege is not part of the role itself.

SQL> grant dba to U;

Grant succeeded.

SQL> select privilege from dba_sys_privs where grantee = 'U';

PRIVILEGE
----------------------------------------
UNLIMITED TABLESPACE

2. When you revoke DBA from a user, unlimited tablespace gets revoked.

Exercise care when tightening security. Before you revoke DBA from a user, check whether that user owns segments, and compensate by granting tablespace quotas. Failure to grant a quota will lead to a loss of the ability to insert into tables owned by that user.

SQL> revoke dba from U;

Revoke succeeded.

SQL> select privilege from dba_sys_privs where grantee = 'U';

no rows selected

3. Unlimited tablespace gets revoked even if it had been granted separately.

SQL> grant dba to U;

Grant succeeded.

SQL> grant unlimited tablespace to U;

Grant succeeded.

SQL> revoke dba from U;

Revoke succeeded.

SQL> select privilege from dba_sys_privs where grantee = 'U';

no rows selected

4. If you revoke unlimited tablespace from a user with DBA role, that user keeps DBA role.

SQL> grant dba to U;

Grant succeeded.

SQL> revoke unlimited tablespace from U;

Revoke succeeded.

SQL> select granted_role from dba_role_privs where grantee = 'U';

GRANTED_ROLE
------------------------------
DBA

5. You cannot grant unlimited tablespace privilege to a role.

SQL> create role R;

Role created.

SQL> grant unlimited tablespace to R;
grant unlimited tablespace to R
*
ERROR at line 1:
ORA-01931: cannot grant UNLIMITED TABLESPACE to a role

No role has unlimited tablespace privilege, not even DBA.

6. You can revoke a quota that a segment needs.

This is seldom done intentionally. If you do, rows can still go into space already allocated to the segment, but the segment cannot extend.

SQL> alter user U quota unlimited on USERS;

User altered.

SQL> create table U.T ( N number) segment creation immediate;

Table created.

SQL>
SQL> select grantee, privilege from dba_sys_privs
2 where grantee = 'U' and privilege = 'UNLIMITED TABLESPACE';

no rows selected

SQL>
SQL> select segment_name, bytes, extents from dba_segments 
2 where owner = 'U'
3 and segment_name = 'T'
4 and segment_type = 'TABLE';

SEGMENT_NAME                        BYTES    EXTENTS
------------------------------ ---------- ----------
T                                   65536          1

SQL> alter user U quota 0 on USERS;

User altered.

SQL> insert into U.T select level from dual connect by level <= 1;

1 row created.

SQL> insert into U.T select level from dual connect by level <= 1000000;
insert into U.T select level from dual connect by level <= 1000000
*
ERROR at line 1:
ORA-01536: space quota exceeded for tablespace 'USERS'

7. Without a quota, you cannot move a table.

Moving a table can save space, but with an insufficient quota, you can’t do it.

SQL> alter table U.T move;
alter table U.T move
*
ERROR at line 1:
ORA-01536: space quota exceeded for tablespace 'USERS'

8. When you revoke unlimited tablespace, you also revoke all limited quotas.

SQL> grant unlimited tablespace to U;

Grant succeeded.

SQL> alter user U quota 10g on USERS;

User altered.

SQL> create tablespace U2;

Tablespace created.

SQL> alter user U quota unlimited on U2;

User altered.

SQL>
SQL> select username, tablespace_name, max_blocks
2 from dba_ts_quotas where username = 'U';

U TABLESPACE_NAME                MAX_BLOCKS
- ------------------------------ ----------
U USERS                             1310720
U U2                                     -1

SQL>
SQL> revoke unlimited tablespace from U;

Revoke succeeded.

SQL>
SQL> select username, tablespace_name, max_blocks
2 from dba_ts_quotas where username = 'U';

U TABLESPACE_NAME                MAX_BLOCKS
- ------------------------------ ----------
U U2                                     -1

Notice that the limited quota on USERS got revoked, but the unlimited quota on U2 remains. That’s not exactly what the manual says: “If you later revoke the privilege, then you must explicitly grant quotas to individual tablespaces.”

Because revoking DBA revokes unlimited tablespace, it follows that revoking DBA also revokes limited (finite) quotas.
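
The MAX_BLOCKS figures in the query output above follow directly from the granted quota divided by the tablespace block size. A quick sketch of the arithmetic (assuming the default 8 KB block size):

```python
def quota_to_max_blocks(quota_bytes, block_size=8192):
    """Convert a tablespace quota in bytes to the DBA_TS_QUOTAS.MAX_BLOCKS
    value. An unlimited quota is reported as -1."""
    if quota_bytes is None:  # unlimited quota
        return -1
    return quota_bytes // block_size

GB = 1024 ** 3
print(quota_to_max_blocks(10 * GB))  # 1310720, matching the USERS row above
print(quota_to_max_blocks(None))     # -1, matching the U2 row above
```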

9. You can grant a quota greater than 2 TB.

SQL> alter user U quota 10T on U2;

User altered.

The manual says “The maximum amount of space that you can assign for a tablespace is 2 TB.”

10. A quota can exceed the tablespace’s maximum size

SQL> select username, tablespace_name, max_blocks
2 from dba_ts_quotas where username = 'U';

U TABLESPACE_NAME                MAX_BLOCKS
- ------------------------------ ----------
U U2                             1342177280

SQL> select sum(maxblocks) from dba_data_files
2 where tablespace_name = 'U2';

SUM(MAXBLOCKS)
--------------
4194302
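
Comparing the two queries shows how far the quota exceeds what the tablespace can ever hold. A sketch of the arithmetic, assuming an 8 KB block size:

```python
BLOCK_SIZE = 8192          # assuming the default 8 KB block size
quota_blocks = 1342177280  # MAX_BLOCKS for the 10T quota on U2
file_max_blocks = 4194302  # sum(maxblocks) over U2's data files

quota_tb = quota_blocks * BLOCK_SIZE / 1024 ** 4
cap_gb = file_max_blocks * BLOCK_SIZE / 1024 ** 3
print(quota_tb)      # 10.0 -- the quota is 10 TB
print(round(cap_gb)) # 32 -- the files can only ever grow to about 32 GB
```

The quota is roughly 320 times larger than the space the tablespace can actually provide, and Oracle accepts it without complaint.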

So here are 10 things you did not know about tablespace quotas. Here is one more:

Bonus: A user running import does not require a quota.

The owner of the segment needs the quota, not the user running the import. If you pre-create the user, you must grant the quota ahead of time. If the segment owner has no quota, you will get ORA-01950: no privileges on tablespace.

SQL> create user U identified by U default tablespace USERS;

User created.

SQL> alter user U quota unlimited on USERS;

User altered.

SQL> create table U.T ( N number) segment creation immediate;

Table created.

$ cat exp.u.par
directory=d
dumpfile=exp.u.t.dmp
logfile=exp.u.t.log
reuse_dumpfiles=true
tables=u.t

$ expdp "'/ as sysdba'" parfile=exp.u.par

Export: Release 19.0.0.0.0 - Production on Thu Sep 1 00:16:02 2022
Version 19.16.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Starting "SYS"."SYS_EXPORT_TABLE_01": "/******** AS SYSDBA" parfile=exp.u.par
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "U"."T" 0 KB 0 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
/u99/exp/d/exp.u.t.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Thu Sep 1 00:19:07 2022 elapsed 0 00:03:05

SQL> drop user U cascade;

User dropped.

SQL> create user U identified by U default tablespace USERS;

User created.

$ cat imp.u.par
directory=d
dumpfile=exp.u.t.dmp
logfile=imp.u.t.log

$ impdp "'/ as sysdba'" parfile=imp.u.par

Import: Release 19.0.0.0.0 - Production on Thu Sep 1 00:24:48 2022
Version 19.16.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_FULL_01": "/******** AS SYSDBA" parfile=imp.u.par
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39083: Object type TABLE:"U"."T" failed to create with error:
ORA-01950: no privileges on tablespace 'USERS'

Failing sql is:
CREATE TABLE "U"."T" ("N" NUMBER) SEGMENT CREATION IMMEDIATE PCTFREE 10 
PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE(INITIAL 65536 
NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 
FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT 
CELL_FLASH_CACHE DEFAULT)
 TABLESPACE "USERS"

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYS"."SYS_IMPORT_FULL_01" completed with 1 error(s) at Thu Sep 1 
00:24:57 2022 elapsed 0 00:00:08

Cleanup

SQL> drop user U cascade;

User dropped.

SQL> drop role R;

Role dropped.

SQL> drop tablespace U2 including contents and datafiles;

Tablespace dropped.

Conclusion

Exercise care when revoking DBA or unlimited tablespace. Be sure to compensate by issuing needed quotas. Otherwise, users or applications will get ORA-01536: space quota exceeded.

Create spfile in ASM

By Brian Fitzgerald

Problem

Question: How do you create an spfile in ASM? If you issue “create spfile” too early, you will not know the path, so you would have to search ASM for it. The situation can arise while you are setting up a standby database.

Solution

Register the database with CRS first. Then, as the manual mentions, the create spfile statement “automatically updates the SPFILE in the database resource.”

Register the db with CRS

$ srvctl add database -database orcl -role physical_standby -startoption MOUNT -stopoption ABORT -instance orcl -oraclehome /u01/app/oracle/product/1212/db_1 -diskgroup DATA1,RECO1,REDO1,REDO2
$ srvctl start database -database orcl
$ srvctl config database -database orcl
Database unique name: orcl
Database name:
Oracle home: /u01/app/oracle/product/1212/db_1
Oracle user: oracle
Spfile:
Password file:
Start options: mount
Stop options: abort
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Disk Groups: DATA1,RECO1,REDO1,REDO2
Services:
OSDBA group: dba
OSOPER group: dba
Database instance: orcl

Create a text file

You could create a pfile from memory, for example.

SQL> create pfile='initorcl.memory' from memory;
File created.

Edit the file

$ vi initorcl.memory

Delete extraneous underscore parameters. For example, here is a file with the basic parameters that you need for restoring and recovering a duplicate database:

audit_trail='DB'
cluster_database=FALSE
compatible='12.1.0.2.0'
control_files='+REDO1/ORCL/CONTROLFILE/current.258.1067960587'
control_files='+REDO2/ORCL/CONTROLFILE/current.258.1067960589' # Restore Controlfile
core_dump_dest='/u01/app/oracle/diag/rdbms/orcl/orcl/cdump'
db_cache_size=16G
db_create_file_dest='+DATA1'
db_create_online_log_dest_1='+REDO1'
db_create_online_log_dest_2='+REDO2'
db_files=1000
db_name='ORCL'
db_recovery_file_dest='+RECO1'
db_recovery_file_dest_size=1000G
db_unique_name='orcl'
local_listener='ORCL'
remote_login_passwordfile='EXCLUSIVE'
sga_max_size=31G # internally adjusted
sga_target=31G
use_large_pages='only'

Create the spfile

SQL> create spfile = '+DATA1' from pfile='initorcl.memory';

Check:

$ srvctl config database -database orcl
Database unique name: orcl
Database name:
Oracle home: /u01/app/oracle/product/1212/db_1
Oracle user: oracle
Spfile: +DATA1/ORCL/PARAMETERFILE/spfile.585.1068023283
Password file:
Start options: mount
Stop options: abort
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Disk Groups: DATA1,RECO1,REDO1,REDO2
Services:
OSDBA group: dba
OSOPER group: dba
Database instance: orcl

Notice that there was no need to create an ASM alias or log in to ASM to search for the file.

Conclusion

Register your database in CRS first, then create your spfile.

No disk groups mounted after Grid 12.1 upgrade to 18c

Introduction

After an apparently successful upgrade from Grid 12.1 to 18c, the ASM disk groups failed to mount.

Scenario

The DBA had upgraded from Grid 12.1 to 18c using gridSetup.sh.

After the upgrade, no disk groups were mounted. The upgrade itself had produced no warnings.

Other application groups were waiting, so there was pressure to bring up the system.

Symptoms

  1. No disk groups got mounted.
  2. This error appears in the ASM alert log:
ORA-15032: not all alterations performed
ORA-59303: The attribute compatible.asm (11.2.0.0.0) of the diskgroup being mounted should be 11.2.0.2.0 or higher.

Notice that tools that refer to disk groups will not work

asmcmd lsdsk --discovery shows ASM disks, but no disk groups.

$ asmcmd lsdsk --discovery -g
Inst_ID  Path
      1  ORCL:ASM_DATA_VOL1
      1  ORCL:ASM_DATA_VOL2
      1  ORCL:ASM_DATA_VOL3
      1  ORCL:ASM_DATA_VOL4
      1  ORCL:ASM_DATA_VOL5
      1  ORCL:ASM_RECO_VOL1
      1  ORCL:ASM_REDO_VOL1

Mounting with the force option does not work either. For example:

asmcmd mount -f DATA1

This will not work:

SQL> ALTER DISKGROUP DATA1 SET ATTRIBUTE 'compatible.asm' ='11.2.0.2.0';
ALTER DISKGROUP DATA1 SET ATTRIBUTE 'compatible.asm' ='11.2.0.2.0'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15001: diskgroup "DATA1" does not exist or is not mounted

This will not work:

SQL> alter diskgroup DATA1 mount;
alter diskgroup DATA1 mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-59303: The attribute compatible.asm (11.2.0.0.0) of the diskgroup being
mounted should be 11.2.0.2.0 or higher.

Notice that the ASM disks are OK

asmlib:

$ oracleasm listdisks
ASM_DATA_VOL1
ASM_DATA_VOL2
ASM_DATA_VOL3
ASM_DATA_VOL4
ASM_DATA_VOL5
ASM_RECO_VOL1
ASM_REDO_VOL1

kfed:

$ kfed op=read dev=/dev/oracleasm/disks/ASM_DATA_VOL1
kfbh.endian:                          1 ; 0x000: 0x01
etc..
kfdhdb.dskname:           ASM_DATA_VOL1 ; 0x028: length=13
kfdhdb.grpname:                   DATA1 ; 0x048: length=5
etc.
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000

The ASM disk and ASM group name look good.

Hex 0x0b refers to Oracle version 11. The zeros following the 2 apparently mean that the disk group version is 11.2.0.0.0.
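
That reading can be sketched as a small decoder. This is a toy illustration: the layout (one high byte for the major version, followed by 4-bit fields for the remaining components) is inferred from this one observed value, not from any documentation.

```python
def decode_kfdhdb_compat(value):
    """Decode kfdhdb.compat into a dotted version string, assuming the
    high byte is the major version and the next four nibbles are the
    remaining version components (an inference, not documented layout)."""
    major = (value >> 24) & 0xFF
    rest = [(value >> shift) & 0xF for shift in (20, 16, 12, 8)]
    return ".".join(str(part) for part in [major] + rest)

print(decode_kfdhdb_compat(186646528))  # 0x0B200000 -> "11.2.0.0.0"
```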

Cause

Oracle 12.2, 18c, and 19c binaries cannot mount a disk group whose compatible.asm is 11.2.0.0.0. The manual states:

Starting with Oracle ASM version 12.2.0.1, the minimum and default settings for Oracle ASM disk group attributes are:

  • COMPATIBLE.ASM = 11.2.0.2

If you read the Oracle grid upgrade manual, you might not notice any check that would prevent this mistake. The Oracle 19c Grid Upgrade manual checklist hints at the problem, but our 18c gridSetup.sh as patched (18.3) does not check for this, and lets you start the upgrade, which leads to the problem.

Approach (with a downside)

Downgrade grid to the previous version (12.1).

In the 18c grid home, I attempted this step from the 18c manual:

# cd crs/install
# ./rootcrs.sh -downgrade
Using configuration parameter file: /ora_local/apps/oracle_grid/product/18.0.0/grid_18c/crs/install/crsconfig_params
The log of current session can be found at:
  /ora_local/apps/oracle/crsdata/evp003-eldb/crsconfig/crsdowngrade_evp003-eldb_2021-01-20_10-48-42AM.log
2021/01/20 10:48:44 CLSRSC-457: Oracle Restart is currently configured and cannot be deconfigured using this Oracle Clusterware deconfiguration command.
Died at /ora_local/apps/oracle_grid/product/18.0.0/grid_18c/crs/install/crsdowngrade.pm line 260.
The command '/ora_local/apps/oracle_grid/product/18.0.0/grid_18c/perl/bin/perl -I/ora_local/apps/oracle_grid/product/18.0.0/grid_18c/perl/lib -I/ora_local/apps/oracle_grid/product/18.0.0/grid_18c/crs/install /ora_local/apps/oracle_grid/product/18.0.0/grid_18c/crs/install/rootcrs.pl -downgrade' execution failed

Instead, I did this:

# cd crs/install
# ./roothas.sh -deconfig

This step wipes out the OCR.

In the 12.1 grid home:

# cd crs/install
# ./roothas.sh

Now grid is running on 12.1 with just the basics, but cssd is not running.

Downside

roothas.sh -deconfig wipes out your CRS configuration. You will have to manually start cssd and re-add your ASM and database resources.

Workaround

Start cssd, etc.

# crsctl start resource -all

Start ASM

To get things rolling, you could create an ASM pfile. Remember that ASM needs to know which disk groups to mount, so be sure to configure asm_diskgroups.

init+ASM.20210120

large_pool_size          = 12M
instance_type            = "asm"
remote_login_passwordfile= "EXCLUSIVE"
asm_diskgroups =  DATA1,REDO1,RECOVERY1
asm_power_limit          = 1
diagnostic_dest          = "/ora_local/apps/oracle"

start

SQL> startup mount pfile='init+ASM.20210120'
ASM instance started

Total System Global Area 1136934472 bytes
Fixed Size                  8666696 bytes
Variable Size            1103101952 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted
ASM diskgroups volume enabled

You can successfully create an spfile, although the command raises this error message:

SQL> create spfile='asmspfile.ora'  from pfile =  'init+ASM.20210120';
create spfile='asmspfile.ora'  from pfile =  'init+ASM.20210120'
*
ERROR at line 1:
ORA-29786: Oracle Restart attribute GET failed with error [Attribute 'SPFILE'
sts[200] lsts[0]]

Add asm to crs

$ srvctl add asm -spfile asmspfile.ora

Start the databases (details omitted for brevity): start each with SQL*Plus, then configure it in CRS with srvctl add database.

Final state

  • Grid 12.1 is up
  • ASM is up with all disk groups mounted
  • Databases are up

Follow up steps

To reattempt the Grid 18c upgrade:

  • In all disk groups, change compatible.asm to 11.2.0.2.0 or higher
  • Retry the Grid 18c upgrade

Conclusion

During an upgrade from Grid 12.1 to 18c, the step to upgrade the disk groups to compatible.asm 11.2.0.2.0 was missed. As a result, the 18c software could not mount the disk groups. kfed revealed that the ASM disks were intact, though.

The downgrade step found in the manual, rootcrs.sh -downgrade, failed, so I ran roothas.sh -deconfig, followed by roothas.sh in the 12c grid home. That worked, but I had to start cssd manually, and register ASM and the databases.

EC2 maximum number of volumes

Introduction

The documented AWS EC2 EBS volume attachment limit is 27. The attachment limit affects the maximum number of Oracle ASM disks.

EC2 volume limit

Amazon AWS documents a 27-volume attachment limit. I have found that an attempt to create a t2 instance with 28 EBS volumes aborts.


Attempting to create an m5 (Nitro) instance with 28 or more volumes hangs.


The attachment limit is 28, including network interfaces, volumes, and instance store volumes. EC2 instances with one network interface can have up to 27 volumes attached.

EC2 instances with more than 27 volumes

I am aware of t2 EC2 systems with as many as 44 volumes attached. I have no information about how this was done or what the customer’s support expectations are. I would be concerned about how such a system would respond to a change of instance type.

Oracle Database

In planning an Oracle database installation, you may need file system mounts as well. /u01, for example. The underlying volume counts against the maximum attachment count.

In laying out an ASM setup, consider a simple design, with no more ASM disks and disk groups than needed.
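
The budgeting is simple arithmetic. A sketch, taking the 28-attachment total and the single network interface from the discussion above; the file-system volume count is a hypothetical example:

```python
TOTAL_ATTACHMENTS = 28  # network interfaces + volumes, per the limit above

def max_asm_disks(network_interfaces=1, fs_volumes=2):
    """Volumes left over for ASM disks after NICs and file-system
    volumes (e.g. root and /u01) are accounted for."""
    return TOTAL_ATTACHMENTS - network_interfaces - fs_volumes

print(max_asm_disks())  # 25 ASM disks at most with 1 NIC plus root and /u01
```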

Conclusion

Based on the available public information, I recommend limiting the number of EBS volume attachments to 27. Oracle database administrators might want to simplify their ASM implementation.

AWS RDS Oracle parameter values

Introduction

Amazon AWS presets certain Oracle Database parameters. Refer to this blog post if you want to know what the values will be before you create an instance.

Parameters that have values

The Oracle Database parameters that AWS RDS presets are listed here. The values come from parameter group default.oracle-ee-19:

ParameterName ParameterValue
archive_lag_target 300
audit_file_dest /rdsdbdata/admin/{dbName}/adump
compatible 19.0.0
control_files /rdsdbdata/db/{dbName}_{DBUniqueNameSuffix}/controlfile/control-01.ctl
db_block_checking MEDIUM
db_create_file_dest /rdsdbdata/db
dbfips_140 FALSE
db_name {dbName}
db_recovery_file_dest_size 1073741824
db_unique_name {dbName}_{DBUniqueNameSuffix}
dg_broker_config_file1 /rdsdbdata/config/dr1{dbName}.dat
dg_broker_config_file2 /rdsdbdata/config/dr2{dbName}.dat
diagnostic_dest /rdsdbdata/log
enable_pluggable_database FALSE
filesystemio_options setall
heat_map OFF
job_queue_processes 50
local_listener (address=(protocol=tcp)(host=)(port={EndPointPort}))
log_archive_dest_1 location="/rdsdbdata/db/{dbName}_{DBUniqueNameSuffix}/arch/redolog", valid_for=(ALL_LOGFILES,ALL_ROLES)
log_archive_format -%s-%t-%r.arc
max_string_size STANDARD
memory_max_target IF({DBInstanceClassHugePagesDefault}, 0, {DBInstanceClassMemory*3/4})
memory_target IF({DBInstanceClassHugePagesDefault}, 0, {DBInstanceClassMemory*3/4})
open_cursors 300
pga_aggregate_target IF({DBInstanceClassHugePagesDefault}, {DBInstanceClassMemory*1/8}, 0)
processes LEAST({DBInstanceClassMemory/9868951}, 20000)
recyclebin OFF
sga_target IF({DBInstanceClassHugePagesDefault}, {DBInstanceClassMemory*3/4}, 0)
spfile /rdsdbbin/oracle/dbs/spfile{dbName}.ora
standby_file_management AUTO
undo_tablespace UNDO_T1
use_large_pages {DBInstanceClassHugePagesDefault}

Variables

Notice that some parameter values depend on variables.

The list of variables is:

DBInstanceClassHugePagesDefault

Depends on RDS size. RDS uses hugepages for RAM 16 GB and up.

DBInstanceClassMemory

The amount of RAM, in bytes, in the EC2 instance type that you select.

DBUniqueNameSuffix

Usually “_A”

EndPointPort

1521, for example

dbName

The value that you specify, or ORCL.

Actual memory-related parameter values

Here are some actual, observed memory-related parameter values:

db.t3 instance size    small    medium   large    xlarge   2xlarge
RAM                    2 GB     4 GB     8 GB     16 GB    32 GB
memory_max_target      1328 MB  2768 MB  5670 MB  0        0
memory_target          1328 MB  2768 MB  5670 MB  0        0
pga_aggregate_target   0        0        0        2 GB     4 GB
processes              187      392      815      1663     3359
sga_target             0        0        0        12 GB    23 GB
use_large_pages        FALSE    FALSE    FALSE    ONLY     ONLY

Generally, you should not override these parameters with your own values or create parameter groups that set them. Doing so defeats the simplifying design of RDS.

RDS disables hugepages and uses AMM for RAM sizes 8 GB and smaller; however, RDS enforces hugepages and uses ASMM for RAM sizes 16 GB and up. If you modify the instance size, the memory-related parameters change accordingly.
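
The IF() expressions in the parameter table can be read as ordinary conditionals. Here is my own Python rendering of those formulas, for illustration only: DBInstanceClassMemory is simplified to the instance's full RAM, whereas RDS subtracts OS overhead, so real observed values come out somewhat lower.

```python
def rds_memory_params(mem_bytes, hugepages_default):
    """Evaluate the RDS formulas: AMM (memory_target) without hugepages,
    ASMM (sga/pga targets) with hugepages."""
    processes = min(mem_bytes // 9868951, 20000)
    if hugepages_default:
        return {
            "memory_target": 0,
            "sga_target": mem_bytes * 3 // 4,
            "pga_aggregate_target": mem_bytes // 8,
            "processes": processes,
        }
    return {
        "memory_target": mem_bytes * 3 // 4,
        "sga_target": 0,
        "pga_aggregate_target": 0,
        "processes": processes,
    }

GB = 1024 ** 3
print(rds_memory_params(16 * GB, True)["sga_target"] // GB)            # 12, as observed
print(rds_memory_params(32 * GB, True)["pga_aggregate_target"] // GB)  # 4, as observed
```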

Files

Control files

Control files are not multiplexed. Example:

/rdsdbdata/db/ORCL_A/controlfile/control-01.ctl

Data files

Tablespaces are bigfile. Example datafile:

/rdsdbdata/db/ORCL_A/datafile/o1_mf_users_hbl3j3yn_.dbf

Log files

Logs are not multiplexed. Example logfile:

/rdsdbdata/db/ORCL_A/onlinelog/o1_mf_1_hbl3kbbx_.log

Example tempfile:

/rdsdbdata/db/ORCL_A/datafile/o1_mf_temp_hfdtzrr6_.tmp

Operations

Because parameters db_create_file_dest and db_unique_name are set, tablespace creation is simplified. This command:

create tablespace TSQ;

results in creating a datafile such as:

/rdsdbdata/db/ORCL_A/datafile/o1_mf_tsq_hffymmyr_.dbf

Services

Out of the box, RDS boots with two services, namely db_name and db_unique_name. For example, ORCL and ORCL_A.

There are at least two ways to create additional services. The grid infrastructure tool srvctl is not available to manage services in RDS.

SID

Your db_name is also your SID, and it is registered with the listener as a service. Check your SID:

select sys_context('userenv','instance_name') from dual;

or

select instance_name from v$instance;
ORCL

db_unique_name

Check your db_unique_name:

show parameter db_unique_name;
NAME TYPE VALUE 
-------------- ------ ------ 
db_unique_name string ORCL_A

You could override the default db_unique_name by creating a parameter group with your new value.

Managing service_names

You could create a new AWS RDS parameter group and set service_names there. If you do that, you may find over time that you have created an unmanageable number of parameter groups.

Running dbms_service

You can create and start a service using dbms_service.

begin
dbms_service.create_service(
        service_name => 'ESB',
        network_name => 'ESB'       
    );
end;
/
begin
dbms_service.start_service(
        service_name => 'ESB'      
    );
end;
/

However, the service state would not survive a restart. To start the service automatically, you could try creating a database trigger.

create or replace trigger
system.start_svc
after startup on database
begin
dbms_service.start_service(
        service_name => 'ESB'      
    );    
exception when others then
    if sqlcode != -44305
    then
        raise;
    end if;
end;
/

However, you will get this error:

ORA-20900: RDS restricted DDL found: CREATE TRIGGER SYS.START_SVC

In case you set db_domain

Suppose, because of a tradition, you decide to set db_domain. You could create a new parameter group and set db_domain there.

pargrpdb

Then you could apply the parameter group to your RDS instance. Now you have two connection options:

host:port/orcl

or

host:port/orcl_a.its.gvu.edu

If you try:

host:port/orcl.its.gvu.edu

you get:

ORA-12514: TNS: Listener does not currently know of service requested in connect descriptor

This means that all your connections need suffix “_a.its.gvu.edu”. If you head down this road, your environment is going to be cluttered with “_a.its.gvu.edu”. “_A” is for “_Appendix”, a vestigial organ that serves no current purpose, and can cause pain at unexpected times. If you don’t want so much clutter, do not set db_domain in RDS.

Services conclusion

Services were useful in on-prem deployments for managing or monitoring workload. Configuring services in AWS RDS is impractical.

I have these recommendations regarding services:

  • Do not set db_domain
  • Do not attempt to manage extra database services
  • Connect using host:port/sid

AWS RDS is not very service-friendly.

Conclusion

Amazon AWS RDS sets certain Oracle Database initialization parameters. Memory- and file-related parameter values depend on RDS variables, which in turn depend on the instance class and database name. Memory-related values scale with the amount of EC2 RAM. RDS sets the parameters that control file creation locations. RDS does not support the service functionality that you may be accustomed to on-prem.

listener status UNKNOWN or BLOCKED

By Brian Fitzgerald

Introduction

By trial and error, I identify which Oracle Database instance states correspond to listener statuses READY, UNKNOWN, BLOCKED, and RESTRICTED.

Details

I have recorded the listener status for various database states. In some cases, the listener was reloaded or restarted.

Database open READ WRITE:

Service "EQTRD" has 2 instance(s).
  Instance "EQTRD", status UNKNOWN, has 1 handler(s) for this service...
  Instance "EQTRD", status READY, has 1 handler(s) for this service...

Shut down the database

Service "EQTRD" has 1 instance(s).
  Instance "EQTRD", status UNKNOWN, has 1 handler(s) for this service...

Delete the static listener entry and reload the listener.

[none]

Startup nomount

Service "EQTRD" has 1 instance(s).
  Instance "EQTRD", status BLOCKED, has 1 handler(s) for this service...

Reload

Service "EQTRD" has 1 instance(s).
  Instance "EQTRD", status BLOCKED, has 1 handler(s) for this service...

Stop and start the listener.

Service "EQTRD" has 1 instance(s).
  Instance "EQTRD", status BLOCKED, has 1 handler(s) for this service...

Note that reloading or restarting the listener does not clear a BLOCKED state.

Mount

Service "EQTRD" has 1 instance(s).
  Instance "EQTRD", status READY, has 1 handler(s) for this service...

Startup force nomount

Service "EQTRD" has 1 instance(s).
  Instance "EQTRD", status BLOCKED, has 1 handler(s) for this service...

Re-enable the static listener

Service "EQTRD" has 2 instance(s).
  Instance "EQTRD", status UNKNOWN, has 1 handler(s) for this service...
  Instance "EQTRD", status BLOCKED, has 1 handler(s) for this service...

Mount

Service "EQTRD" has 2 instance(s).
  Instance "EQTRD", status UNKNOWN, has 1 handler(s) for this service...
  Instance "EQTRD", status READY, has 1 handler(s) for this service...

Restricted session

Service "XIENT" has 1 instance(s).
  Instance "XIENT", status RESTRICTED, has 1 handler(s) for this service...

Summary

Here is a summary of listener status and interpretation.

status     interpretation
BLOCKED    nomount
UNKNOWN    statically registered
READY      mounted or open; dynamically registered
RESTRICTED restricted session


AutoUpgrade 12c->19c with Transient Logical Standby

By Brian Fitzgerald

Introduction

Oracle Database 12.1 was upgraded to version 19.3 using the AutoUpgrade utility and a transient logical standby. Actual downtime was less than 10 seconds.

News about AutoUpgrade

For news about AutoUpgrade, please refer to Oracle product manager Mike Dietrich’s blog.

System Description

The system is Linux 7.7 on Amazon AWS EC2. Details appear in this table:

Attribute Value
cloud AWS
location US East (N. Virginia)
instanceType m5.large
tenancy shared
instanceFamily General purpose
physicalProcessor Intel Xeon Platinum 8175
clockSpeed 2.5 GHz
Number of CPUs 2
memGB 7.8 GiB
Storage EBS gp2
operatingSystem Linux
AMI RHEL-7.7_HVM_GA-20190723-x86_64-1-Hourly2-GP2
OS Vendor Red Hat, Inc.
Red Hat version 7.7
price per hour $0.096

System Configuration

Initially, the system is a Data Guard network with a physical standby.

Description Value
Original version 12.1.0.2
Target version 19.3.0
Primary EQTRD
Physical Standby XIENT
Instance type Restart
Schema trades

Non-default Database system configurations

Selected non-default instance parameters are:

name value
archive_lag_target 900
audit_file_dest /u01/app/oracle/admin/EQTRD/adump
audit_trail DB
compatible 12.1.0.2.0
control_files +DATA01/EQTRD/CONTROLFILE/current.259.1021685185, +RECO01/EQTRD/CONTROLFILE/current.313.1021685185
db_block_size 8192
db_create_file_dest +DATA01
db_create_online_log_dest_1 +DATA01
db_create_online_log_dest_2 +RECO01
db_name EQTRD
db_recovery_file_dest +RECO01
db_recovery_file_dest_size 30g
dg_broker_config_file1 +DATA01/EQTRD/dr1orcl.dat
dg_broker_config_file2 +RECO01/EQTRD/dr2orcl.dat
dg_broker_start TRUE
diagnostic_dest /u01/app/oracle
local_listener 127.0.0.1:1521
log_archive_config dg_config=(EQTRD,XIENT)
log_archive_dest_1 location=USE_DB_RECOVERY_FILE_DEST, valid_for=(ALL_LOGFILES, ALL_ROLES)
log_archive_dest_2 service="XIENT", ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="XIENT" net_timeout=30, valid_for=(online_logfile,all_roles)
log_archive_dest_state_2 ENABLE
sga_max_size 1500m
sga_target 1500m
standby_file_management AUTO
use_large_pages ONLY

Existing Data Guard 12c physical standby

We start this article with a working Data Guard physical standby, with these items already in place on the primary and the standby:

  • network route
  • static listeners
  • archivelog
  • flashback on
  • forced logging
  • 3×500 MB online redo log groups
  • 4×500 MB standby redo log groups
  • audit directory
  • data guard broker
  • tested switchover
  • spfile moved to ASM
  • tnsnames.ora in 12c oracle home
  • Oracle Restart is set up
  • oratab is configured and pointing to the 12c home
  • srvctl tested

The standby spfile has been moved to ASM so that it does not need to be moved again after the upgrade or after a possible downgrade.

SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +DATA01/XIENT/PARAMETERFILE/spfile.275.1021229119

Existing Data

Example table ordermessage has one row.

SQL> @ insert1.sql

1 row created.

SQL> commit;

Commit complete.
SQL> select ClOrdID from ordermessage order by TransactTime;

CLORDID
------------------------------
XL46W2NCVM22N02RQ2YK

Preparations

Careful preparation should be completed days ahead of the planned upgrade. AutoUpgrade Analyze and Fixups can be run and the results reviewed. The 19c Oracle homes, including tnsnames.ora and orapw, should be prepared for the upgrade.

Download AutoUpgrade

Download the latest autoupgrade.jar from My Oracle Support.

Install to the 19c oracle home on primary and standby

[oracle@ip-172-31-91-148 dbhome_1]$ alias oh
alias oh='cd $ORACLE_HOME'
[oracle@ip-172-31-91-148 dbhome_1]$ oh
[oracle@ip-172-31-91-148 dbhome_1]$ find . -name autoupgrade.jar -ls
76988898 3616 -rw-r--r-- 1 oracle oinstall 3702360 Oct 7 20:35 ./rdbms/admin/autoupgrade.jar

Set this alias for convenience:

[oracle@ip-172-31-91-148 dbhome_1]$ AH=/u01/app/oracle/product/19.3.0/dbhome_1
[oracle@ip-172-31-91-148 dbhome_1]$ alias au='rlwrap $AH/jdk/bin/java -jar $AH/rdbms/admin/autoupgrade.jar'

Use rlwrap for CLI readline editing and history recall.

Check the AutoUpgrade version:

[oracle@ip-172-31-91-148 dbhome_1]$ au -version
build.hash 67fee5b
build.version 20190823
build.date 2019/08/23 18:08:47

Source AutoUpgrade configuration file

The source configuration file is eqtrd.12c.19c.conf. Notice that target_home need not appear in the source configuration file.

global.autoupg_log_dir=/u01/app/oracle/autoupgrade

upg1.dbname=EQTRD
upg1.start_time=now
upg1.source_home=/u01/app/oracle/product/12.1.0/dbhome_1
upg1.sid=EQTRD
upg1.log_dir=/u01/app/oracle/autoupgrade
upg1.upgrade_node=ip-172-31-88-93.ec2.internal
upg1.target_version=19.3
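
Before running Analyze, it can help to sanity-check the configuration file. The shell sketch below is an illustrative assumption of this article, not part of AutoUpgrade: it simply verifies that the upg1.* keys used above are present.

```shell
# Minimal sanity check for an AutoUpgrade config file: verify that the
# upg1.* keys used in this article are present. Key list is illustrative.
check_conf() {
  missing=""
  for key in dbname source_home sid log_dir upgrade_node target_version; do
    grep -q "^upg1\.${key}=" "$1" || missing="$missing upg1.$key"
  done
  [ -z "$missing" ] && echo OK || echo "missing:$missing"
}

# Demonstrate on a throwaway copy of the config shown above.
cat > /tmp/eqtrd.12c.19c.conf <<'EOF'
global.autoupg_log_dir=/u01/app/oracle/autoupgrade
upg1.dbname=EQTRD
upg1.start_time=now
upg1.source_home=/u01/app/oracle/product/12.1.0/dbhome_1
upg1.sid=EQTRD
upg1.log_dir=/u01/app/oracle/autoupgrade
upg1.upgrade_node=ip-172-31-88-93.ec2.internal
upg1.target_version=19.3
EOF
check_conf /tmp/eqtrd.12c.19c.conf   # prints: OK
```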

Analyze

Use AutoUpgrade to analyze for issues and needed fixups.

[oracle@ip-172-31-88-93 ~]$ au -config eqtrd.12c.19c.conf -mode analyze
AutoUpgrade tool launched with default options
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
1 databases will be analyzed
Type 'help' to list console commands
upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+---------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|        MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+---------------+
| 110|  EQTRD|PRECHECKS|PREPARING|RUNNING|19/10/11 14:38|     N/A|14:38:09|Loading DB info|
+----+-------+---------+---------+-------+--------------+--------+--------+---------------+
Total jobs 1
upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+--------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|       MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+--------------+
| 110|  EQTRD|PRECHECKS|PREPARING|RUNNING|19/10/11 14:38|     N/A|14:38:16|Remaining 6/72|
+----+-------+---------+---------+-------+--------------+--------+--------+--------------+
Total jobs 1
upg> Job 110 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]

Jobs finished successfully     [1]
Jobs failed                    [0]
Jobs pending                   [0]
------------- JOBS FINISHED SUCCESSFULLY -------------
Job 110 FOR EQTRD

Fixups

Use AutoUpgrade to run Fixups:

[oracle@ip-172-31-88-93 ~]$ au -config eqtrd.12c.19c.conf -mode fixups
AutoUpgrade tool launched with default options
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
1 databases will be processed
Type 'help' to list console commands
upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+---------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|        MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+---------------+
| 111|  EQTRD|PRECHECKS|PREPARING|RUNNING|19/10/11 14:43|     N/A|14:43:28|Loading DB info|
+----+-------+---------+---------+-------+--------------+--------+--------+---------------+
Total jobs 1
upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+-------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|      MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+-------------+
| 111|  EQTRD|PREFIXUPS|EXECUTING|RUNNING|19/10/11 14:43|     N/A|14:43:42|Remaining 3/3|
+----+-------+---------+---------+-------+--------------+--------+--------+-------------+
Total jobs 1
upg> Job 111 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]

Jobs finished successfully     [1]
Jobs failed                    [0]
Jobs pending                   [0]
------------- JOBS FINISHED SUCCESSFULLY -------------
Job 111 FOR EQTRD

Target AutoUpgrade configuration file

Prepare the target AutoUpgrade configuration file. Notice that dbname, sid, and upgrade_node are changed and that target_home now appears. upgrade_node must match the target hostname. The configuration file is xient.12c.19c.conf.

global.autoupg_log_dir=/u01/app/oracle/autoupgrade

upg1.dbname=XIENT
upg1.start_time=now
upg1.source_home=/u01/app/oracle/product/12.1.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.3.0/dbhome_1
upg1.sid=XIENT
upg1.log_dir=/u01/app/oracle/autoupgrade
upg1.upgrade_node=ip-172-31-91-148.ec2.internal
upg1.target_version=19.3

Prepare 19c oracle homes

Prepare the 19c oracle homes with the password file and tnsnames.ora. Primary:

[oracle@ip-172-31-88-93 ~]$ alias oh
alias oh='cd $ORACLE_HOME'
[oracle@ip-172-31-88-93 ~]$ oh
[oracle@ip-172-31-88-93 dbhome_1]$ pwd
/u01/app/oracle/product/12.1.0/dbhome_1
[oracle@ip-172-31-88-93 dbhome_1]$ cp -p dbs/orapwEQTRD /u01/app/oracle/product/19.3.0/dbhome_1/dbs/
[oracle@ip-172-31-88-93 dbhome_1]$ cp -p network/admin/tnsnames.ora /u01/app/oracle/product/19.3.0/dbhome_1/network/admin/

standby:

[oracle@ip-172-31-91-148 ~]$ oh
[oracle@ip-172-31-91-148 dbhome_1]$ pwd
/u01/app/oracle/product/12.1.0/dbhome_1
[oracle@ip-172-31-91-148 dbhome_1]$ cp -p dbs/orapwXIENT /u01/app/oracle/product/19.3.0/dbhome_1/dbs/ 
[oracle@ip-172-31-91-148 dbhome_1]$ cp -p network/admin/tnsnames.ora /u01/app/oracle/product/19.3.0/dbhome_1/network/admin/
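
The two copy steps above are the same on each host, so they can be wrapped in a small helper. This is a hypothetical convenience function, not an Oracle tool; the demonstration below runs against throwaway directories rather than real Oracle homes.

```shell
# Hypothetical helper: stage a 19c home from a 12c home by copying the
# password file and tnsnames.ora, as done manually above.
stage_home() {  # usage: stage_home <src_home> <dst_home> <sid>
  cp -p "$1/dbs/orapw$3" "$2/dbs/" &&
  cp -p "$1/network/admin/tnsnames.ora" "$2/network/admin/"
}

# Demonstration with dummy homes:
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/dbs" "$src/network/admin" "$dst/dbs" "$dst/network/admin"
touch "$src/dbs/orapwEQTRD" "$src/network/admin/tnsnames.ora"
stage_home "$src" "$dst" EQTRD
ls "$dst/dbs"   # orapwEQTRD
```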

Create the Transient Logical Standby

For this exercise, I find it helpful to have four terminal windows open: two on the primary and two on the standby. One window per host is for tailing the alert log; the other is for commands.

Data Guard configuration

Note the Data Guard configuration. At the standby:

SQL> select * from v$dataguard_config;

DB_UNIQUE_NAME  PARENT_DBUN     DEST_ROLE         CURRENT_SCN     CON_ID
--------------- --------------- ----------------- ----------- ----------
EQTRD           NONE            PRIMARY DATABASE      1868165          0
XIENT           EQTRD           PHYSICAL STANDBY      1868075          0

Disable the Data Guard broker configuration

We are not going to stop the Data Guard broker. Disabling the configuration freezes the current Data Guard system state.

DGMGRL> disable configuration
Disabled.

Standby is mounted

Note that the standby database is mounted:

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
MOUNTED

Primary restore point

The primary restore point is required for the upgrade procedure later. Create it now:

SQL> create restore point upgrd1 guarantee flashback database;

Restore point created.

Cancel managed recovery

On the standby:

SQL> recover managed standby database cancel
Media recovery complete.

Create standby restore point

On the standby:

SQL> create restore point upgrd2 guarantee flashback database;

Restore point created.

Some DBAs create additional restore points at later steps.

Create the logical standby

For this step, it is helpful to watch two windows.

On the standby:

SQL> alter database recover to logical standby keep identity;

The session appears to hang; it is waiting for dbms_logstdby.build to run on the primary …

On the primary:

SQL> exec dbms_logstdby.build;

PL/SQL procedure successfully completed.

SQL>

dbms_logstdby.build completes in approximately three seconds. About three seconds after that, the alter database statement on the standby finishes and the SQL*Plus prompt reappears:

Database altered.

SQL>

Run standby apply

On the standby:

SQL> alter database open;

Database altered.

SQL> alter database start logical standby apply immediate;

Database altered.

SQL> select state from v$logstdby_state;

STATE
----------------------------------------------------------------
INITIALIZING

SQL> select state from v$logstdby_state;

STATE
----------------------------------------------------------------
INITIALIZING

SQL> select state from v$logstdby_state;

STATE
----------------------------------------------------------------
LOADING DICTIONARY

SQL> select state from v$logstdby_state;

STATE
----------------------------------------------------------------
LOADING DICTIONARY

SQL> select state from v$logstdby_state;

STATE
----------------------------------------------------------------
LOADING DICTIONARY

SQL> select state from v$logstdby_state;

STATE
----------------------------------------------------------------
APPLYING

SQL> select state from v$logstdby_state;

STATE
----------------------------------------------------------------
IDLE

SQL>

The standby LogMiner dictionary build may sit at INITIALIZING for more than a minute; reaching IDLE can take up to five minutes.
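
Rather than re-issuing the query by hand, you could poll from the shell. The wait_for helper below is a generic sketch (an assumption of this article, not an AutoUpgrade or Data Guard feature); in real use, the stub echo command would be replaced with a sqlplus query against v$logstdby_state.

```shell
# Generic polling helper: rerun a command until its output matches a
# pattern or a timeout expires. In practice the command would be
# something like:
#   echo "select state from v\$logstdby_state;" | sqlplus -s / as sysdba
wait_for() {  # usage: wait_for <pattern> <timeout_secs> <command...>
  pattern=$1; timeout=$2; shift 2
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    "$@" | grep -q "$pattern" && return 0
    sleep 1; elapsed=$((elapsed + 1))
  done
  return 1
}

# Demonstration with a stub command:
wait_for IDLE 5 echo "IDLE" && echo "state reached"
```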

Foreign archivelogs

Archivelogs identified as "foreign" may appear in the standby alert log:

Wed Oct 16 02:21:23 2019
LOGMINER: End mining logfile for session 1 thread 1 sequence 33, +RECO01/XIENT/foreign_archivelog/EQTRD/2019_10_16/thread_1_seq_33.316.1021774867

Stop standby apply

When the logical standby state reaches IDLE, stop standby apply.

On the primary:

SQL> alter system set log_archive_dest_state_2=defer scope=memory;

System altered.

From this point until the catchup, the primary continues to operate normally with users connected. Archived logs at the primary contain transactions that will be replayed at the standby later.

On the standby:

SQL> alter database stop logical standby apply;

Database altered.

Shutdown 12c

On the standby:

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

Upgrade

Upgrade the standby to 19c.

Setup the shell environment

Set the 19c environment.

On the standby:

[oracle@ip-172-31-91-148 ~]$ unset ORACLE_SID
[oracle@ip-172-31-91-148 ~]$ . oraenv
ORACLE_SID = [oracle] ?
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/19.3.0/dbhome_1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ip-172-31-91-148 ~]$ export ORACLE_SID=XIENT

Upgrade the Restart

[oracle@ip-172-31-91-148 ~]$ srvctl upgrade database -database XIENT -oraclehome /u01/app/oracle/product/19.3.0/dbhome_1

startup upgrade

SQL> startup upgrade
ORACLE instance started.

Total System Global Area 1577054672 bytes
Fixed Size                  8896976 bytes
Variable Size             385875968 bytes
Database Buffers         1174405120 bytes
Redo Buffers                7876608 bytes
Database mounted.
Database opened.

Check the status. The following output is expected:

SQL> select logins, status from v$instance;

LOGINS     STATUS
---------- ------------
RESTRICTED OPEN MIGRATE

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

oratab

Note that the Oracle Agent has updated the Oracle home in oratab.

[oracle@ip-172-31-91-148 ~]$ grep XIENT /etc/oratab
XIENT:/u01/app/oracle/product/19.3.0/dbhome_1:N         # line added by Agent
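
A small awk helper can pull the Oracle home for a SID out of an oratab-format file. The helper is an illustrative sketch, demonstrated here against a sample file that mimics the entry above rather than the real /etc/oratab.

```shell
# Illustrative helper: read the Oracle home for a SID from an
# oratab-format file (fields are colon-separated: SID:HOME:STARTUP).
home_for_sid() {  # usage: home_for_sid <sid> <oratab_file>
  awk -F: -v sid="$1" '$1 == sid { print $2 }' "$2"
}

# Sample file mimicking the /etc/oratab entry shown above:
cat > /tmp/oratab.sample <<'EOF'
XIENT:/u01/app/oracle/product/19.3.0/dbhome_1:N         # line added by Agent
EOF
home_for_sid XIENT /tmp/oratab.sample   # prints the 19c home
```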

Start screen

If you lose your session during the upgrade, you lose direct control of the AutoUpgrade console. To recover, you would have to search your global.autoupg_log_dir for the job state and work out how to resume, if necessary.

Protect your session with screen.

[oracle@ip-172-31-91-148 ~]$ screen

If you get disconnected, reconnect to screen:

[oracle@ip-172-31-91-148 ~]$ screen -r

You can detach from screen with Ctrl-a d.

Run AutoUpgrade

Use AutoUpgrade to upgrade the standby database.

On the standby, in screen:

[oracle@ip-172-31-91-148 ~]$ AH=/u01/app/oracle/product/19.3.0/dbhome_1
[oracle@ip-172-31-91-148 ~]$ alias au='rlwrap $AH/jdk/bin/java -jar $AH/rdbms/admin/autoupgrade.jar'
[oracle@ip-172-31-91-148 ~]$ au -config xient.12c.19c.conf -mode upgrade
AutoUpgrade tool launched with default options
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
1 databases will be processed
Type 'help' to list console commands
upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+-----------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|    MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+-----------+
| 105|  XIENT|DBUPGRADE|EXECUTING|RUNNING|19/10/12 02:47|     N/A|02:48:05|0%Upgraded |
+----+-------+---------+---------+-------+--------------+--------+--------+-----------+
Total jobs 1
upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|     MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
| 105|  XIENT|DBUPGRADE|EXECUTING|RUNNING|19/10/12 02:47|     N/A|02:51:08|12%Upgraded |
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
Total jobs 1
upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|     MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
| 105|  XIENT|DBUPGRADE|EXECUTING|RUNNING|19/10/12 02:47|     N/A|03:00:14|49%Upgraded |
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
Total jobs 1
upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|     MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
| 105|  XIENT|DBUPGRADE|EXECUTING|RUNNING|19/10/12 02:47|     N/A|03:09:21|92%Upgraded |
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
Total jobs 1
upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+-----------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|    MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+-----------+
| 105|  XIENT|DBUPGRADE|EXECUTING|RUNNING|19/10/12 02:47|     N/A|03:12:52|0%Compiled |
+----+-------+---------+---------+-------+--------------+--------+--------+-----------+
Total jobs 1
upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|     MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
| 105|  XIENT|DBUPGRADE|EXECUTING|RUNNING|19/10/12 02:47|     N/A|03:15:56|80%Compiled |
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
Total jobs 1
upg> lsj
+----+-------+---------+---------+--------+--------------+--------+--------+-------------------+
|Job#|DB_NAME|    STAGE|OPERATION|  STATUS|    START_TIME|END_TIME| UPDATED|            MESSAGE|
+----+-------+---------+---------+--------+--------------+--------+--------+-------------------+
| 105|  XIENT|DBUPGRADE|EXECUTING|FINISHED|19/10/12 02:47|     N/A|03:18:21|Restarting Database|
+----+-------+---------+---------+--------+--------------+--------+--------+-------------------+
Total jobs 1
upg> lsj
+----+-------+----------+---------+-------+--------------+--------+--------+-------------+
|Job#|DB_NAME|     STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|      MESSAGE|
+----+-------+----------+---------+-------+--------------+--------+--------+-------------+
| 105|  XIENT|POSTFIXUPS|EXECUTING|RUNNING|19/10/12 02:47|     N/A|03:22:21|Remaining 1/3|
+----+-------+----------+---------+-------+--------------+--------+--------+-------------+
Total jobs 1
upg> Job 105 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]

Jobs finished successfully     [1]
Jobs failed                    [0]
Jobs pending                   [0]
------------- JOBS FINISHED SUCCESSFULLY -------------
Job 105 FOR XIENT

Exit screen

You may exit screen now. To exit screen, exit your shell.

Update the static listener

In the grid account, edit listener.ora. You can do this while you are waiting for the upgrade to finish.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/19.3.0/dbhome_1)
      (SID_NAME = XIENT)
    )
  )

Reload the listener

[grid@ip-172-31-91-148 admin]$ lsnrctl reload

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 12-OCT-2019 02:26:43

Copyright (c) 1991, 2019, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ip-172-31-91-148.ec2.internal)(PORT=1521)))
The command completed successfully

Check the listener status

[grid@ip-172-31-91-148 admin]$ lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 12-OCT-2019 02:26:48

Copyright (c) 1991, 2019, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ip-172-31-91-148.ec2.internal)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                11-OCT-2019 13:17:23
Uptime                    0 days 13 hr. 9 min. 25 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/19.3.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/ip-172-31-91-148/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ip-172-31-91-148.ec2.internal)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "+ASM_DATA01" has 1 instance(s).
  Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "+ASM_RECO01" has 1 instance(s).
  Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "XIENT" has 1 instance(s).
  Instance "XIENT", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

Continued operations on the 12c primary

Meanwhile, on the primary, which is still on 12c, users may continue to modify data:

SQL> select banner from v$version where banner like 'Oracle Database%';

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL> @ insert2.sql

1 row created.

SQL> commit;

Commit complete.

SQL> select ClOrdID from ordermessage order by TransactTime;

CLORDID
------------------------------
XL46W2NCVM22N02RQ2YK
DMYD9FS3FD3CYO77CU4R

Catch up the upgraded standby

Initial state

The standby is open and upgraded, but still lacks application data updates that were made to the primary during the upgrade.

SQL> select banner from v$version where banner like 'Oracle Database%';

BANNER
--------------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

SQL> select ClOrdID from ordermessage order by TransactTime;

CLORDID
------------------------------
XL46W2NCVM22N02RQ2YK

With only one row, table ordermessage is out of date.

Resume transport

Resume transport from the primary to the standby:

Primary:

SQL> alter system set log_archive_dest_state_2=enable;

System altered.

Standby:

SQL> alter database start logical standby apply immediate;

Database altered.

Check for catchup on the standby:

Primary:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1964941

Standby:

SQL> select applied_scn, latest_scn, mining_scn from v$logstdby_progress;

APPLIED_SCN LATEST_SCN MINING_SCN
----------- ---------- ----------
    1964938    1964946    1964944

Check switchover status.

On the primary:

SQL> select database_role, switchover_status from v$database;

DATABASE_ROLE    SWITCHOVER_STATUS
---------------- --------------------
PRIMARY          TO STANDBY

Until now, the application was operating normally on the original primary. Primary application data is now caught up on the original standby.

Check the application data on the standby:

SQL> select ClOrdID from ordermessage order by TransactTime;

CLORDID
------------------------------
XL46W2NCVM22N02RQ2YK
DMYD9FS3FD3CYO77CU4R

The standby data is up to date!

Now it is time to quiesce the application. Otherwise, changes could be lost.

Switchover

We’re going to switch the application over to the Transient Logical Standby. This is the actual outage. Get your stopwatch ready!

Original primary:

SQL> alter database commit to switchover to logical standby;

Database altered.

Start your stopwatch!

alter database commit to switchover to logical standby can run for about 4 seconds.

Original standby:

SQL> select database_role, switchover_status from v$database;

DATABASE_ROLE    SWITCHOVER_STATUS
---------------- --------------------
LOGICAL STANDBY  TO PRIMARY
SQL> alter database commit to switchover to logical primary;

Database altered.

alter database commit to switchover to logical primary can run for 15 seconds or more.

Stop!

The first outage is over. Users may connect to the new logical primary.
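
Instead of a literal stopwatch, you can time each switchover statement with a small shell wrapper. The helper below is a sketch; the sqlplus invocation in the comment is illustrative, and the demonstration uses sleep as a stand-in.

```shell
# Wall-clock timer for a single step of the switchover.
time_step() {  # usage: time_step <label> <command...>
  label=$1; shift
  start=$(date +%s)
  "$@"
  echo "$label took $(( $(date +%s) - start ))s"
}

# e.g. time_step "switchover to logical primary" \
#        sqlplus -s / as sysdba @commit_switchover.sql
time_step demo sleep 1
```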

Flashback the original primary

The original primary is going to become the physical standby.

SQL> startup force mount
ORACLE instance started.

Total System Global Area 1577058304 bytes
Fixed Size                  2924832 bytes
Variable Size             503320288 bytes
Database Buffers         1056964608 bytes
Redo Buffers               13848576 bytes
Database mounted.
SQL> flashback database to restore point upgrd1;

Flashback complete.

Convert the original primary to physical standby

SQL> alter database convert to physical standby;

Database altered.

Shut down original primary

SQL> shutdown abort
ORACLE instance shut down.

Environment

Set the 19c environment:

[oracle@ip-172-31-88-93 ~]$ unset ORACLE_SID
[oracle@ip-172-31-88-93 ~]$ . oraenv
ORACLE_SID = [oracle] ?
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/19.3.0/dbhome_1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ip-172-31-88-93 ~]$ ORACLE_SID=EQTRD

Upgrade original primary Restart

[oracle@ip-172-31-88-93 ~]$ srvctl upgrade database -database EQTRD -oraclehome /u01/app/oracle/product/19.3.0/dbhome_1

startup mount

Mount the original primary.

[oracle@ip-172-31-88-93 ~]$ srvctl start database -database EQTRD -startoption MOUNT

Check oratab

Notice that the Oracle Agent updated the Oracle Home in oratab

[oracle@ip-172-31-88-93 ~]$ grep EQTRD /etc/oratab
EQTRD:/u01/app/oracle/product/19.3.0/dbhome_1:N         # line added by Agent

Static listener

Edit the static listener. In the grid account on the original primary:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/19.3.0/dbhome_1)
      (SID_NAME = EQTRD)
    )
  )

Reload the listener

[grid@ip-172-31-91-148 admin]$ lsnrctl reload

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 15-OCT-2019 00:32:52

Copyright (c) 1991, 2019, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ip-172-31-91-148.ec2.internal)(PORT=1521)))
The command completed successfully

Check the listener status

[grid@ip-172-31-88-93 admin]$ lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 16-OCT-2019 03:56:08

Copyright (c) 1991, 2019, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ip-172-31-88-93.ec2.internal)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                16-OCT-2019 00:25:28
Uptime                    0 days 3 hr. 30 min. 40 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/19.3.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/ip-172-31-88-93/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ip-172-31-88-93.ec2.internal)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "+ASM_DATA01" has 1 instance(s).
  Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "+ASM_RECO01" has 1 instance(s).
  Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "EQTRD" has 2 instance(s).
  Instance "EQTRD", status UNKNOWN, has 1 handler(s) for this service...
  Instance "EQTRD", status READY, has 1 handler(s) for this service...
The command completed successfully

Enable the Data Guard configuration

Connect to the Transient Logical Standby:

[oracle@ip-172-31-91-148 ~]$ dgmgrl sys/Zystm.22@XIENT
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Sat Oct 12 05:17:06 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "XIENT"
Connected as SYSDBA.

DGMGRL> enable configuration
Enabled.

The Data Guard broker will detect that XIENT is the new primary, start log transport to EQTRD, and start managed recovery on EQTRD. Wait 5 to 10 minutes for recovery on the original primary to catch up.

A message such as the following should appear in the original primary alert log:

2019-10-17T02:24:01.118981+00:00
PR00 (PID:9485): Media Recovery Waiting for T-1.S-31 (in transit)

Check the Data Guard configuration

DGMGRL> show configuration

Configuration - ORCL_CONFIG

  Protection Mode: MaxPerformance
  Members:
  XIENT - Primary database
    EQTRD - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 20 seconds ago)

Errors

The following errors should not appear if no steps were missed. If they do appear, resolve them as shown below.

ORA-1033: ORACLE initialization or shutdown in progress

DGMGRL> add database 'EQTRD' as connect identifier is 'EQTRD' maintained as physical;
Error: ORA-1033: ORACLE initialization or shutdown in progress

Failed.

This indicates a password file issue. Copy the 12c orapw file to the 19c home.

On the original primary:

[oracle@ip-172-31-88-93 ~]$ cp -p /u01/app/oracle/product/12.1.0/dbhome_1/dbs/orapwEQTRD /u01/app/oracle/product/19.3.0/dbhome_1/dbs/orapwEQTRD

Error: ORA-16698: member has a LOG_ARCHIVE_DEST_n parameter with SERVICE attribute set

DGMGRL> add database 'EQTRD' as connect identifier is 'EQTRD' maintained as physical;
Error: ORA-16698: member has a LOG_ARCHIVE_DEST_n parameter with SERVICE attribute set

Solution: clear log_archive_dest_2.

On the original primary:

SQL> alter system set log_archive_dest_2='';

System altered.

Switchover to the original primary

The next step is going to result in another application outage.

DGMGRL> switchover to EQTRD
Performing switchover NOW, please wait...
Operation requires a connection to database "EQTRD"
Connecting ...
Connected to "EQTRD"
Connected as SYSDBA.
New primary database "EQTRD" is opening...
Oracle Clusterware is restarting database "XIENT" ...
Connected to an idle instance.
Connected to an idle instance.
Connected to "XIENT"
Connected to "XIENT"
Switchover succeeded, new primary is "eqtrd"

The switchover completed in less than 40s.

Flashback to restore point

This section contains backout procedures that you can use on the primary or the standby.

Remove or disable Data Guard

You can try to flashback the primary and the standby and resume Data Guard. If you do, you may notice an error such as this one:

In the broker:

ERROR ORA-16700: the standby database has diverged from the primary database

In the standby alert log:

Thu Oct 17 03:53:40 2019
Errors in file /u01/app/oracle/diag/rdbms/xient/XIENT/trace/XIENT_mrp0_3527.trc:
ORA-19909: datafile 1 belongs to an orphan incarnation
ORA-01110: data file 1: '+DATA01/XIENT/DATAFILE/system.271.1021840189'

For the sake of demonstration, we will remove Data Guard and then flashback and open either the primary or the standby, but not both.

Remove the Data Guard broker configuration.

DGMGRL> disable configuration
Disabled.
DGMGRL> remove configuration
Removed configuration
DGMGRL>

Stop the brokers and disable transport.

On the primary:

SQL> alter system set log_archive_dest_2='';

System altered.

SQL> alter system set dg_broker_start=false;

System altered.

On the standby:

SQL> alter system set dg_broker_start=false;

System altered.

Flashback the primary

SQL> startup force mount
ORACLE instance started.

Total System Global Area 1577058304 bytes
Fixed Size                  2924832 bytes
Variable Size             503320288 bytes
Database Buffers         1056964608 bytes
Redo Buffers               13848576 bytes
Database mounted.
SQL> flashback database to restore point upgrd1;

Flashback complete.

Clear parameters that are invalid in 12.1. Shut down abort.

SQL> alter system reset "__unified_pga_pool_size" sid = 'EQTRD' scope=spfile;

System altered.

SQL> shutdown abort
ORACLE instance shut down.

Downgrade Restart.

[oracle@ip-172-31-88-93 ~]$ srvctl downgrade database -database EQTRD -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1 -targetversion 12.1.0.2

Set the 12c environment.

[oracle@ip-172-31-88-93 ~]$ unset ORACLE_SID
[oracle@ip-172-31-88-93 ~]$ . oraenv
ORACLE_SID = [oracle] ?
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/12.1.0/dbhome_1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ip-172-31-88-93 ~]$ ORACLE_SID=EQTRD

Startup mount and open resetlogs.

SQL> startup mount
ORACLE instance started.

Total System Global Area 1577058304 bytes
Fixed Size                  2924832 bytes
Variable Size             503320288 bytes
Database Buffers         1056964608 bytes
Redo Buffers               13848576 bytes
Database mounted.
SQL> alter database open resetlogs;

Database altered.

Notice that the Agent has updated oratab to point to the 12c home.

[oracle@ip-172-31-88-93 ~]$ grep EQTRD /etc/oratab
EQTRD:/u01/app/oracle/product/12.1.0/dbhome_1:N         # line added by Agent
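
The oratab entry can also be checked programmatically. Here is a minimal sketch that parses the colon-delimited oratab format to find the Oracle home for a SID; the sample entry is hardcoded for illustration (in practice you would read /etc/oratab):

```shell
# Sketch: confirm which Oracle home an oratab entry maps a SID to.
# The sample entry mirrors the line the Agent wrote; paths are illustrative.
entry='EQTRD:/u01/app/oracle/product/12.1.0/dbhome_1:N         # line added by Agent'

# oratab fields are colon-delimited: SID:ORACLE_HOME:autostart_flag
home=$(printf '%s\n' "$entry" | awk -F: -v sid=EQTRD '$1 == sid {print $2}')
echo "Oracle home for EQTRD: $home"
```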

Check the application data. Changes that were made during the upgrade have been lost.

SQL> select ClOrdID from ordermessage order by TransactTime;

CLORDID
------------------------------
XL46W2NCVM22N02RQ2YK

Rollback of primary to 12c is done.

Flashback the standby

As an alternative, flashback the standby.

SQL> recover managed standby database cancel
Media recovery complete.
SQL> flashback database to restore point upgrd2;

Flashback complete.

SQL> alter database activate physical standby database;

Database altered.

Reset parameters that are invalid in 12.1. Shut down abort.

SQL> alter system reset "__unified_pga_pool_size" sid = 'XIENT' scope=spfile;

System altered.

SQL> shutdown abort
ORACLE instance shut down.

Downgrade the database configuration in Oracle Restart.

[oracle@ip-172-31-91-148 ~]$ srvctl downgrade database -database XIENT -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1 -targetversion 12.1.0.2

Set the 12c environment. Startup.

[oracle@ip-172-31-91-148 ~]$ unset ORACLE_SID
[oracle@ip-172-31-91-148 ~]$ . oraenv
ORACLE_SID = [oracle] ?
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/12.1.0/dbhome_1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ip-172-31-91-148 ~]$ export ORACLE_SID=XIENT

SQL> startup
ORACLE instance started.

Total System Global Area 1577058304 bytes
Fixed Size                  2924832 bytes
Variable Size             503320288 bytes
Database Buffers         1056964608 bytes
Redo Buffers               13848576 bytes
Database mounted.
Database opened.

Notice that the Agent has updated oratab.

[oracle@ip-172-31-91-148 ~]$ grep XIENT /etc/oratab
XIENT:/u01/app/oracle/product/12.1.0/dbhome_1:N         # line added by Agent

Check the application data.

SQL> select ClOrdID from ordermessage order by TransactTime;

CLORDID
------------------------------
XL46W2NCVM22N02RQ2YK

Conclusion to flashbacks

This section outlined two possible, independent rollback procedures. Flashback results in data loss; further refinement is required.

Prior to beginning, we had moved spfile to ASM.

This concludes the section on flashback.

Final steps

If there are no problems with the upgrade, continue.

Drop the restore point

Primary:

SQL> drop restore point UPGRD1;

Restore point dropped.

Standby:

SQL> drop restore point UPGRD2;

Restore point dropped.

Update compatible

Primary:

SQL> alter system set compatible = '19.3.0.0.0' scope=spfile;

System altered.

SQL> startup force
ORACLE instance started.

Total System Global Area 1577058304 bytes
Fixed Size                  2924832 bytes
Variable Size             503320288 bytes
Database Buffers         1056964608 bytes
Redo Buffers               13848576 bytes
Database mounted.
Database opened.

Standby:

SQL> alter system set compatible = '19.3.0.0.0' scope=spfile;

System altered.

SQL> startup force mount
ORACLE instance started.

Total System Global Area 1577058304 bytes
Fixed Size                  2924832 bytes
Variable Size             503320288 bytes
Database Buffers         1056964608 bytes
Redo Buffers               13848576 bytes
Database mounted.
SQL>

compatible error

You cannot advance compatible while a guaranteed restore point exists.

SQL> alter system set compatible = '19.3.0.0.0' scope=spfile;

System altered.

SQL> startup force mount
ORACLE instance started.

Total System Global Area 1577054672 bytes
Fixed Size                  8896976 bytes
Variable Size             536870912 bytes
Database Buffers         1023410176 bytes
Redo Buffers                7876608 bytes
ORA-38880: Cannot advance compatibility from 12.1.0.2.0 to 19.3.0.0.0 due to
guaranteed restore points

Solutions:

  1. Set compatible to its original value.
    SQL> alter system set compatible = '12.1.0.2.0' scope=spfile;
    
    System altered.
    
    SQL> startup force mount
    ORACLE instance started.
    
    Total System Global Area 1577054672 bytes
    Fixed Size                  8896976 bytes
    Variable Size             536870912 bytes
    Database Buffers         1023410176 bytes
    Redo Buffers                7876608 bytes
    Database mounted.
    SQL>
    

    or:

  2. Drop the restore point.
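
Either way, remember that compatible only moves forward: once advanced, it can never be lowered. Before editing the spfile, a quick sanity check can confirm the new value really is an advance. This is a pure shell sketch, not an Oracle tool:

```shell
# Sketch: verify that a proposed compatible setting is an advance, not a
# downgrade, before touching the spfile. Pure shell; not an Oracle tool.
current='12.1.0.2.0'
target='19.3.0.0.0'

# sort -V orders version strings numerically; the lower value sorts first.
lowest=$(printf '%s\n%s\n' "$current" "$target" | sort -V | head -1)
if [ "$lowest" = "$current" ] && [ "$current" != "$target" ]; then
    echo "OK: advancing compatible from $current to $target"
else
    echo "Refusing: compatible can never be lowered"
fi
```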

Delete foreign archive logs

The transient logical standby phase leaves foreign archived logs registered; list and remove them.

RMAN> list foreign archivelog all;

RMAN> delete noprompt foreign archivelog all;

Conclusion

Oracle Database was upgraded from version 12.1 to version 19.3 using AutoUpgrade and Transient Logical Standby. The actual application outage was 10 seconds.

Notes:

  • The initial state was a working Data Guard system with a physical standby.
  • Oracle Support offers the rolling upgrade script physru.
  • In case of a problem with the upgrade, it is possible to flashback to a point in time before the upgrade.
  • This is a zero data loss upgrade.
  • There are two brief outages.
  • The rollback is not fully developed in this article.
  • As described here, downgrade to 12c results in data loss.

 

Move spfile to ASM in one step

By Brian Fitzgerald

Introduction

Here is a simple technique for relocating the Oracle Database spfile from the file system to ASM in a single RMAN step in Oracle Restart. No direct ASM access is required. The technique applies to Oracle Database 12c and up.

tl;dr

RMAN> restore from service 'EQTRD' spfile to '+DATA01';
SQL> startup force

Initial condition

Note the Restart configuration

Note the db_unique_name and the spfile location.

[oracle@ip-172-31-88-93 ~]$ srvctl config database -database EQTRD
Database unique name: EQTRD
Database name: EQTRD
Oracle home: /u01/app/oracle/product/19.3.0/dbhome_1
Oracle user: oracle
Spfile: /u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileEQTRD.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: RECO01,DATA01
Services:
OSDBA group:
OSOPER group:
Database instance: EQTRD

ASM

Check that no spfile exists in ASM for your db_unique_name:

[grid@ip-172-31-88-93 ~]$ asmcmd ls +DATA01/EQTRD/PARAMETERFILE
ASMCMD-8002: entry 'PARAMETERFILE' does not exist in directory '+DATA01/EQTRD/'

No spfile was found in ASM.

Check Restart

The initial condition should be a working Oracle Restart.

[oracle@ip-172-31-88-93 ~]$ srvctl stop database -database EQTRD
[oracle@ip-172-31-88-93 ~]$ srvctl start database -database EQTRD
[oracle@ip-172-31-88-93 ~]$

Restore

Restore the spfile using a single RMAN command. You must connect to Oracle through a network service; "rman target /" will not work in this case. You need to specify only the target disk group, not the full ASM directory path.

[oracle@ip-172-31-88-93 ~]$ rman target sys/Zystm.22@EQTRD

Recovery Manager: Release 19.0.0.0.0 - Production on Sun Oct 13 18:12:25 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: EQTRD (DBID=625173207)

RMAN> restore from service 'EQTRD' spfile to '+DATA01';

Starting restore at 13-OCT-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=282 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service EQTRD
channel ORA_DISK_1: restoring SPFILE
output file name=+DATA01
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 13-OCT-19

RMAN>

Recovery Manager complete.

The RMAN technique uses the “restore from service” syntax, available in Oracle Database 12c and up.

Check Restart

Notice that the Restart configuration got updated:

[oracle@ip-172-31-88-93 ~]$ srvctl config database -database EQTRD
Database unique name: EQTRD
Database name: EQTRD
Oracle home: /u01/app/oracle/product/19.3.0/dbhome_1
Oracle user: oracle
Spfile: +DATA01/EQTRD/PARAMETERFILE/spfile.280.1021572757
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: RECO01,DATA01
Services:
OSDBA group:
OSOPER group:
Database instance: EQTRD

Notice that we moved spfile and learned its new location without direct access to ASM. In an organization with segregation of duties, the RDBMS DBA can move spfile to ASM without coordination with the ASM DBA.
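
If you want to capture the new location programmatically, the Spfile line can be parsed out of the srvctl output. A minimal sketch; the sample text is hardcoded for illustration, where in practice you would pipe the output of srvctl config database into awk:

```shell
# Sketch: extract the new spfile location from captured srvctl output.
# The sample text below stands in for the output of
# 'srvctl config database -database EQTRD'.
config='Database unique name: EQTRD
Spfile: +DATA01/EQTRD/PARAMETERFILE/spfile.280.1021572757'

# Split on ': ' and print the value of the Spfile line.
spfile=$(printf '%s\n' "$config" | awk -F': ' '/^Spfile:/ {print $2}')
echo "$spfile"
```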

Check ASM

For confirmation, we can check ASM.

[grid@ip-172-31-88-93 ~]$ asmcmd ls +DATA01/EQTRD/PARAMETERFILE/spfile.280.1021572757
spfile.280.1021572757

Remove old spfile

Remove the old spfile from the file system (if you dare!).

[oracle@ip-172-31-88-93 ~]$ rm /u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileEQTRD.ora
[oracle@ip-172-31-88-93 ~]$ 

Restart the database

You must restart the database instance now. You may use sqlplus or srvctl.

SQL> startup force
ORACLE instance started.

Total System Global Area 2365586088 bytes
Fixed Size                  8899240 bytes
Variable Size             520093696 bytes
Database Buffers         1828716544 bytes
Redo Buffers                7876608 bytes
Database mounted.
Database opened.
SQL>

Done!

Only one RMAN command was needed to move the spfile to ASM.

Reversal

You can move spfile in the reverse direction, from ASM to the file system.

Check the file system

Note that the spfile does not exist on the file system:

[oracle@ip-172-31-88-93 ~]$ ls /u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileEQTRD.ora
ls: cannot access /u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileEQTRD.ora: No such file or directory

Restore spfile to file system

[oracle@ip-172-31-88-93 ~]$ rman target sys/Zystm.22@EQTRD

Recovery Manager: Release 19.0.0.0.0 - Production on Sun Oct 13 18:40:05 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: EQTRD (DBID=625173207)

RMAN> restore from service 'EQTRD' spfile to '/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileEQTRD.ora';

Starting restore at 13-OCT-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=55 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service EQTRD
channel ORA_DISK_1: restoring SPFILE
output file name=/u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileEQTRD.ora
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
Finished restore at 13-OCT-19

RMAN>

Moving the spfile to the file system required no access to ASM and no knowledge of the ASM file name.

Check that spfile exists on the file system

[oracle@ip-172-31-88-93 ~]$ ls -l /u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileEQTRD.ora
-rw-r-----. 1 oracle dba 3584 Oct 13 18:40 /u01/app/oracle/product/19.3.0/dbhome_1/dbs/spfileEQTRD.ora

Check that Restart is updated

Run srvctl config database -database EQTRD again and verify that the Spfile line now shows the file system path.

Remove old spfile from ASM

[ec2-user@ip-172-31-88-93 ~]$ sudo -i -u grid
The Oracle base has been set to /u01/app/grid
[grid@ip-172-31-88-93 ~]$ asmcmd rm +DATA01/EQTRD/PARAMETERFILE/spfile.280.1021572757
[grid@ip-172-31-88-93 ~]$

Restart the database

Instead of using sqlplus, you may use srvctl.

[oracle@ip-172-31-88-93 ~]$ srvctl stop database -database EQTRD -stopoption ABORT
[oracle@ip-172-31-88-93 ~]$ srvctl start database -database EQTRD
[oracle@ip-172-31-88-93 ~]$

Conclusion

Previously, moving spfile TO ASM required five steps:

  1. Create pfile from spfile.
  2. Create spfile from pfile.
  3. Connect to ASM and identify the file name.
  4. Update CRS by running srvctl modify database.
  5. Restart the instance.

By the way, some DBAs pre-create an ASM directory. I don’t know why. Some DBAs specify an ASM alias name for the spfile. I just let ASM generate a random, unique name.

spfile was moved from the file system to ASM in a single step. There was no need to create an intermediate pfile or to know the path to the source spfile; only the ASM disk group must be specified. The technique did not require the DBA to access ASM to learn the new location of the spfile, and there is no separate srvctl modify database step: RMAN takes care of that.

In RAC, spfile centralization is an administrative win. Settings for multiple instances can be stored in a single file. Initialization parameter changes that apply to all instances, or to only one instance, can be made even when a RAC node is down. Parameters follow an instance even if the instance is relocated. The same goes for RAC One Node.

Remember that a Restart instance can be relocated, too. With spfile in ASM, there is no need to move spfile during relocation. If you upgrade the Oracle version, you don’t have to be concerned about moving the spfile or reconfiguring Restart.

RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process

By Brian Fitzgerald

Introduction

After dropping a logical standby database, RMAN-08137 appeared when deleting archived logs on the primary. Clearing log_archive_dest_2 resolved the issue.

Problem

During testing, I dropped a logical standby database. When I then attempted to delete archived logs on the primary, RMAN-08137 appeared.

RMAN> delete noprompt archivelog all;

using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=39 device type=DISK
RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process
archived log file name=+RECO01/EQTRD/ARCHIVELOG/2019_10_12/thread_1_seq_24.263.1021422689 thread=1 sequence=24

Solution

Because the standby database has been dropped, the appropriate solution is to clear log_archive_dest_2.

SQL> alter system set log_archive_dest_2 = '';

System altered.

Now the delete succeeds:

RMAN> delete noprompt archivelog all;

using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=39 device type=DISK
List of Archived Log Copies for database with db_unique_name EQTRD
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
33      1    24      A 11-OCT-19
        Name: +RECO01/EQTRD/ARCHIVELOG/2019_10_12/thread_1_seq_24.263.1021422689

deleted archived log
archived log file name=+RECO01/EQTRD/ARCHIVELOG/2019_10_12/thread_1_seq_24.263.1021422689 RECID=33 STAMP=1021422689
Deleted 1 objects

The force option is not needed.

Conclusion

There are numerous causes of RMAN-08137 relating to GoldenGate and Streams; those situations are described in blogs and forums, and other logical standby situations call for different solutions. Our case here is simple: we dropped the logical standby, so the solution is simply to clear log_archive_dest_2. Thanks to Dean Capps for describing this solution. Refer also to Note 2169282.1.