Trace only one logon on AWS RDS Oracle

By Brian Fitzgerald

Introduction

Given an Oracle RDS database, multiple application servers and multiple connections, the task at hand is to

  • Create a logon trigger that will …
  • Run when the first connection arrives and not again after that, and …
  • Run trace 10046 …
  • Up to 10 MB only.
  • Identify the trace file
  • Download the trace file
  • Run tkprof

Background

  • A logon trigger fires for every login that matches the firing condition. If we trace every session, we could fill the file system. We only want one example trace file.
  • In RDS, you do not have direct access to the operating system. You must somehow copy out the trace file.

Let’s go!

Grant

The first thing you need to get right is the grant. Tracing requires the “alter session” privilege, which must be granted directly, not through a role. Suppose the user is “inuser”:

grant alter session to inuser;
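Because a grant inherited through a role does not count here, it can help to verify that the privilege is a direct grant. A minimal check, run as a DBA (dba_sys_privs lists only direct system privilege grants):

select grantee, privilege, admin_option
from dba_sys_privs
where grantee = 'INUSER'
and privilege = 'ALTER SESSION';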

Create a sequence

create sequence inuser.user_trace_trg_seq;

Create the trigger

The trigger must be owned by the user that needs to be traced.

create or replace trigger inuser.user_trace_trg
after logon on database
when ( user = 'INUSER' )
declare
  l_val number;
begin
  l_val := inuser.user_trace_trg_seq.nextval;
  if l_val = 1
  then
    execute immediate
    q'{alter session set max_dump_file_size = '10m'}';
    execute immediate
    q'{alter session set tracefile_identifier = '}'||user||q'{_user_trace_trg'}';
    execute immediate
    q'{alter session set events '10046 trace name context forever, level 12'}';
  end if;
end;
/

Now wait for the user to log in. The first time the trigger fires, nextval equals 1, so the trigger body runs. After that, the trigger body does not run.
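To confirm that the trigger has fired, or to re-arm it later for one more traced session, here is a hedged sketch (dba_sequences.last_number moves in cache-sized jumps, so treat it as an indicator rather than an exact count):

-- any value above the starting value indicates the trigger has fired at least once
select last_number
from dba_sequences
where sequence_owner = 'INUSER'
and sequence_name = 'USER_TRACE_TRG_SEQ';

-- re-arm by resetting the sequence so that nextval returns 1 again
drop sequence inuser.user_trace_trg_seq;
create sequence inuser.user_trace_trg_seq;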

Identify the trace file

select * from table(
  rdsadmin.rds_file_util.listdir('BDUMP')
)
where filename like '%INUSER_user_trace_trg.trc'
order by mtime;

FILENAME                                 TYPE  FILESIZE  MTIME
ORCL_ora_9022_INUSER_user_trace_trg.trc  file  10485856  2022-04-17 00:17:04

“Download” the trace file

You could transfer the file from RDS to S3, and then from S3 to a local system. The other way is to read the file directly through rdsadmin.rds_file_util:

select text from table(
  rdsadmin.rds_file_util.read_text_file(
   :dir,
   :filename
  )
)
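If you would rather take the S3 route mentioned above, and the S3_INTEGRATION option and an appropriate IAM role are configured on the instance, a hedged sketch (the bucket name and prefixes are hypothetical):

-- queue an asynchronous task that copies matching BDUMP files to S3
select rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
         p_bucket_name    => 'my-trace-bucket',
         p_prefix         => 'ORCL_ora_9022',
         p_s3_prefix      => 'traces/',
         p_directory_name => 'BDUMP') as task_id
from dual;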

Using sqlalchemy and boto3, I have integrated the read_text_file query with AWS Secrets Manager in a convenient Python script:

$ download-rds-ora-dir-files.py --secret-id $sec --dir BDUMP --pat ORCL_ora_9022_INUSER_user_trace_trg.trc
number of files: 1

You could also run sqlplus and spool the output to a file.

set feedback off
set linesize 32767
set trimspool on
set pagesize 0

set termout off
spool ORCL_ora_9022_INUSER_user_trace_trg.trc

select text from table(
    rdsadmin.rds_file_util.read_text_file(
        'BDUMP',
        'ORCL_ora_9022_INUSER_user_trace_trg.trc'
    )
)
;
spool off

Run tkprof

tkprof ORCL_ora_9022_INUSER_user_trace_trg.trc ORCL_ora_9022_INUSER_user_trace_trg.tkp
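For a large trace, tkprof's sort and sys options can make the report easier to read; for example (verify the option names against your tkprof version):

tkprof ORCL_ora_9022_INUSER_user_trace_trg.trc ORCL_ora_9022_INUSER_user_trace_trg.tkp sort=exeela,fchela sys=no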

Done!

Conclusion

The following concepts were covered:

  • How to make a logon trigger body that runs only once.
  • Permissions and object ownership for a logon trigger.
  • How to download a trace file from RDS.

Symmetric far sync Data Guard

Introduction

Presented here is a reliable, symmetric Oracle Data Guard network. By reliable is meant that zero data loss and continuous availability are guaranteed in case of the loss of one far sync. Symmetric means that there is no preferred primary site.

Background

I used to drive a Navy nuclear submarine. On a weekly basis, we switched over our running machinery. For example, we switched over our turbine generator lubricating oil pumps. The engineroom upper level watch would start the standby pump, verify proper response, and then place the originally running pump in standby. We shifted our pumps for multiple reasons. Switching pumps equalized wear. It also verified readiness and operability. Finally, by operating the controls on a regular basis, the crew maintained a higher level of proficiency.

Notice a few things about this arrangement. No pump is a “preferred” primary. The pump functioning in the standby role is in no way inferior in design, specifications, or readiness. Because the machinery is rotated on a weekly basis, there is no question that the standby is ready to take over the primary role immediately when necessary.

So it is with databases. You need to have a disaster recovery strategy. The DR site needs to be in a known state of readiness at all times. The best way of accomplishing that is to rotate DR sites on a weekly or quarterly basis. No site is a “preferred” primary. By regularly exercising the system, there is no question that a standby is in a state of readiness. Finally, database administrators maintain proficiency by regularly switching over the Data Guard systems.

A reliable, symmetric Data Guard arrangement is convenient to set up in the cloud. You have complete flexibility over which region and availability zone for placing your databases and far syncs.

Environment

The environment is Amazon AWS EC2 (Elastic Compute Cloud) with Red Hat Linux 7.2. Here is an overview of the environment.

db unique name  region     availability zone  type      net I/O sync  latency from  latency μs
NY              us-east-1  us-east-1c         database  SYNC          SF            32000
NY_FSA          us-east-1  us-east-1d         far sync  ASYNC         NY            250
NY_FSB          us-east-1  us-east-1a         far sync  ASYNC         NY            500
SF              us-west-1  us-west-1b         database  SYNC          NY            31500
SF_FSA          us-west-1  us-west-1b         far sync  ASYNC         SF            115
SF_FSB          us-west-1  us-west-1c         far sync  ASYNC         SF            600

Notice that NY, NY_FSA, and NY_FSB are in the same region but in separate availability zones. Also, SF, SF_FSA, and SF_FSB are in the same region but SF_FSB is in an availability zone separate from SF and SF_FSA. SF and SF_FSA are in the same availability zone. The highest latency is across regions. The lowest latency is within one availability zone. Medium latency is in the same region but across availability zones.

Here is a network diagram.

[Diagram: reliable, symmetric far sync network]

In Maximum Availability mode, the primary ships redo to one far sync, leaving the second far sync as an alternate. In case of a failure on the far sync, the alternate comes on line, resyncs with the primary, and takes over the role of far sync. The primary always ships redo to a far sync in the same region. For example, NY ships to NY_FSA or NY_FSB. Because of this arrangement, there is a total of four far syncs, two per region. Although four far sync hosts are required, the hosts could be configured with less CPU, memory, and disk space than the primary database. In a cloud, CPU, memory, and disk can be reconfigured quickly. Note that in the present symmetric arrangement, far sync direction is one-way, from the local database to the database in the remote region.

Setup

You can assume an initial, enabled Maximum Availability configuration with only a primary and standby database, and no far syncs.

Create and enable the far syncs

Create far syncs as explained in Data Guard 19c in AWS with far sync. Add the far syncs and enable them.

DGMGRL> add far_sync "NY_FSA" as connect identifier is 'NY_FSA';
far sync instance "NY_FSA" added
DGMGRL> enable far_sync "NY_FSA";
Enabled.
DGMGRL> add far_sync 'NY_FSB' as connect identifier is 'NY_FSB';
far sync instance "NY_FSB" added
DGMGRL> enable far_sync 'NY_FSB'
Enabled.
DGMGRL> add far_sync 'SF_FSA' as connect identifier is 'SF_FSA';
far sync instance "SF_FSA" added
DGMGRL> enable far_sync 'SF_FSA';
Enabled.
DGMGRL> add far_sync 'SF_FSB' as connect identifier is 'SF_FSB';
far sync instance "SF_FSB" added
DGMGRL> enable far_sync 'SF_FSB'
Enabled.

The configuration so far looks like this:

DGMGRL> show configuration

Configuration - ORCL_CONFIG

  Protection Mode: MaxAvailability
  Members:
  NY     - Primary database
    SF     - Physical standby database

  Members Not Receiving Redo:
  NY_FSA - Far sync instance
  SF_FSA - Far sync instance
  SF_FSB - Far sync instance
  NY_FSB - Far sync instance

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 57 seconds ago)

Edit the far sync routes

DGMGRL> edit far_sync NY_FSA set property RedoRoutes = '(NY : SF ASYNC)';
Property "redoroutes" updated
DGMGRL> edit far_sync NY_FSB set property RedoRoutes = '(NY : SF ASYNC)';
Property "redoroutes" updated
DGMGRL> edit far_sync SF_FSA set property RedoRoutes = '(SF : NY ASYNC)';
Property "redoroutes" updated
DGMGRL> edit far_sync SF_FSB set property RedoRoutes = '(SF : NY ASYNC)';
Property "redoroutes" updated

Edit the database routes

DGMGRL> edit database NY set property RedoRoutes = '(LOCAL : (NY_FSA SYNC, NY_FSB SYNC))';
Property "redoroutes" updated
DGMGRL> edit database SF set property RedoRoutes = '(LOCAL : (SF_FSA SYNC, SF_FSB SYNC))';
Property "redoroutes" updated
DGMGRL> show configuration

Configuration - ORCL_CONFIG

  Protection Mode: MaxAvailability
  Members:
  NY     - Primary database
    NY_FSA - Far sync instance
      SF     - Physical standby database

  Members Not Receiving Redo:
  SF_FSA - Far sync instance
  SF_FSB - Far sync instance
  NY_FSB - Far sync instance (alternate of NY_FSA)

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 40 seconds ago)

Test far sync failover

In case of a failure of the far sync, Data Guard should quickly switch over to the alternate far sync. In this example, if there is a disruption of NY_FSA, Data Guard should fail over to NY_FSB. We want to observe the system response to a simulated failure at NY_FSA. There are two failure scenarios.

end-of-file on communication channel

In a TCP client, such as the primary database connection to the far sync, ORA-03113: end-of-file on communication channel will appear when the far sync is disrupted but the far sync host is still on the network. Possible causes could include far sync instance crash, restart, or host reboot. As long as the host is on the network at the link level, it is possible for the primary host operating system to quickly identify the error and return a failure to the primary database. In such cases, an error such as this one appears immediately in the alert log.

2019-09-10T13:44:33.987037-04:00
ORA-03113: end-of-file on communication channel

In such a case, recovery will be very quick. The primary will immediately connect to the alternate far sync. Synchronization can be complete in less than 10 seconds.

timeout error

A timeout will occur if there is a power loss or network disruption. A network disruption could occur as a result of a network configuration change. I can demonstrate a timeout by cutting off the TCP connectivity at the cloud level. To do that, I switch the NY_FSA host to a security group that does not allow traffic on the Oracle port (1521).
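From the command line, the same change could be made with the AWS CLI; a hedged sketch (the instance ID and security group ID are placeholders):

# replace the instance's security groups with one that blocks port 1521
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --groups sg-0123456789abcdef0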


In this case, Oracle error detection and recovery is slow. After 30 seconds, the NetTimeout default, the primary connection to NY_FSA on LAD:2 (log archive destination 2) times out:

2019-09-10T18:22:28.214255-04:00
LGWR (PID:4969): ORA-16198: Received timed out error from KSR
LGWR (PID:4969): Attempting LAD:2 network reconnect (16198)
LGWR (PID:4969): LAD:2 network reconnect abandoned
2019-09-10T18:22:28.214836-04:00
Errors in file /u01/app/oracle/diag/rdbms/ny/ORCL/trace/ORCL_lgwr_4969.trc:
ORA-16198: Timeout incurred on internal channel during remote archival
LGWR (PID:4969): Error 16198 for LNO:1 to 'NY_FSA'
2019-09-10T18:22:28.223786-04:00
LGWR (PID:4969): LAD:2 is UNSYNCHRONIZED
LGWR (PID:4969): Failed to archive LNO:1 T-1.S-333, error=16198

After another 30 seconds, the primary gives up on NY_FSA and switches to NY_FSB (LAD:3).

2019-09-10T18:22:58.232029-04:00
LGWR (PID:4969): ORA-16198: Received timed out error from KSR
LGWR (PID:4969): Error 16198 disconnecting from LAD:2 standby host 'NY_FSA'
2019-09-10T18:22:58.232531-04:00
LGWR (PID:4969): LAD:3 is UNSYNCHRONIZED
2019-09-10T18:22:58.232638-04:00
LGWR (PID:4969): LAD:2 no longer supports SYNCHRONIZATION
LGWR (PID:4969): SRL selected to archive T-1.S-334
LGWR (PID:4969): SRL selected for T-1.S-334 for LAD:3
2019-09-10T18:22:58.449325-04:00
Thread 1 advanced to log sequence 334 (LGWR switch)
  Current log# 2 seq# 334 mem# 0: +RECO01/NY/ONLINELOG/group_2.484.1018151803
  Current log# 2 seq# 334 mem# 1: +DATA01/NY/ONLINELOG/group_2.270.1018151815
2019-09-10T18:22:58.519853-04:00
ARC0 (PID:5073): Archived Log entry 675 added for T-1.S-333 ID 0x5c2b52e5 LAD:1
2019-09-10T18:22:58.707980-04:00
ARC1 (PID:5079): SRL selected for T-1.S-333 for LAD:3

Synchronization begins, but four to five minutes elapse until the primary is resynced to the new far sync, NY_FSB.

2019-09-10T18:27:01.888256-04:00
LGWR (PID:4969): LAD:3 is SYNCHRONIZED

While NY_FSB is synchronizing, there are two things to notice. The primary status is:

  NY     - Primary database
    Warning: ORA-16629: database reports a different protection level from the protection mode

and the protection level:

SQL> select protection_level from v$database;

PROTECTION_LEVEL
--------------------
RESYNCHRONIZATION

The final state is:

DGMGRL> show configuration

Configuration - ORCL_CONFIG

  Protection Mode: MaxAvailability
  Members:
  NY     - Primary database
    NY_FSB - Far sync instance (alternate of NY_FSA)
      SF     - Physical standby database

  Members Not Receiving Redo:
  NY_FSA - Far sync instance
    Warning: ORA-16857: member disconnected from redo source for longer than specified threshold

  SF_FSA - Far sync instance
  SF_FSB - Far sync instance

Fast-Start Failover:  Disabled

Configuration Status:
WARNING   (status updated 59 seconds ago)

Adjusting NetTimeout

On a low latency and usually reliable network, you should reduce the timeout. Example:

DGMGRL> edit database NY set property NetTimeout = 5;
Property "nettimeout" updated

This makes for a cleaner far sync switchover.

Maximum Protection

You might think that you could implement protection mode Maximum Protection with multiple far syncs:

DGMGRL> edit configuration set protection mode as MaxProtection;

Oracle will not let you do that.

Error: ORA-16627: operation disallowed since no member would remain to support protection mode

If you want to implement Maximum Protection, you will need to implement a direct route to one or more physical standby databases.
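For instance, here is a minimal sketch of a direct SYNC route between the two databases, which gives up the far sync arrangement described above (adapt and test before use):

DGMGRL> edit database NY set property RedoRoutes = '(LOCAL : SF SYNC)';
DGMGRL> edit database SF set property RedoRoutes = '(LOCAL : NY SYNC)';
DGMGRL> edit configuration set protection mode as MaxProtection;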

Conclusion

Several points were covered in this article.

  • Data Guard latency can be reduced by implementing far sync.
  • Data Guard can be made more reliable by placing multiple far syncs near the primary.
  • Benefits to setting up a symmetric arrangement include:
    • Standby readiness has been recently verified.
    • The standby capability is known to be identical to the primary.
    • Heightened DBA staff proficiency.
  • Setup implementation steps were covered.
  • Data Guard does not keep alternate far syncs in sync with the primary.
  • Data Guard is quick to recover from some error conditions.
  • Data Guard detection and recovery from network timeout can be slow.
  • Maximum Protection through a far sync is not supported.

Data Guard error in Maximum Availability

Introduction

Errors were noted while changing Data Guard protection mode to Maximum Availability. The root cause was mismatched standby redo log size.

Symptoms

We are in protection mode Maximum Performance and we want to set Maximum Availability. Before we begin, we notice something is amiss, but for the sake of discussion, we are not sure what, and we proceed.

DGMGRL> show configuration

Configuration - ORCL_CONFIG

  Protection Mode: MaxPerformance
  Members:
  NY - Primary database
    FS - Far sync instance
      Warning: ORA-16809: multiple warnings detected for the member

      SF - Physical standby database
        Warning: ORA-16809: multiple warnings detected for the member

Fast-Start Failover:  Disabled

Configuration Status:
WARNING   (status updated 57 seconds ago)

We disregard the warnings and set the protection mode to Maximum Availability.

DGMGRL> edit configuration set protection mode as MaxAvailability;
Succeeded.
DGMGRL> show configuration

Configuration - ORCL_CONFIG

  Protection Mode: MaxAvailability
  Members:
  NY - Primary database
    FS - Far sync instance
      Warning: ORA-16855: transport lag has exceeded specified threshold

      SF - Physical standby database
        Warning: ORA-16809: multiple warnings detected for the member

Fast-Start Failover:  Disabled

Configuration Status:
WARNING   (status updated 59 seconds ago)

The warnings persist. We force a logfile switch at the primary:

[oracle@ip-172-31-86-22 ~]$ sqlplus sys/zystm.22@NY as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Sep 5 21:16:54 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> alter system switch logfile;

System altered.

Now errors appear in the show configuration display.

DGMGRL> show configuration

Configuration - ORCL_CONFIG

  Protection Mode: MaxAvailability
  Members:
  NY - Primary database
    Error: ORA-16810: multiple errors or warnings detected for the member

    FS - Far sync instance
      Warning: ORA-16809: multiple warnings detected for the member

      SF - Physical standby database
        Warning: ORA-16809: multiple warnings detected for the member

Fast-Start Failover:  Disabled

Configuration Status:
ERROR   (status updated 51 seconds ago)

The system has errors. Logs are not being processed. We are in hot water. We investigate.

Let’s review the standby first, then work backward through the far sync, and finally look at the primary.

DGMGRL> show database verbose SF StatusReport
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT
                   *    WARNING ORA-16855: transport lag has exceeded specified threshold
                   *    WARNING ORA-16857: member disconnected from redo source for longer than specified threshold

DGMGRL> show far_sync verbose FS StatusReport
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT
                   *    WARNING ORA-16855: transport lag has exceeded specified threshold
                   *    WARNING ORA-16857: member disconnected from redo source for longer than specified threshold

DGMGRL> show database NY StatusReport
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT
                   *    WARNING ORA-16629: database reports a different protection level from the protection mode
                ORCL      ERROR ORA-16737: the redo transport service for member "FS" has an error

Message ‘the redo transport service for member “FS” has an error’ requires further drilldown:

DGMGRL> show database verbose NY LogXptStatus;
LOG TRANSPORT STATUS
PRIMARY_INSTANCE_NAME STANDBY_DATABASE_NAME     STATUS                ERROR
                ORCL                   FS      ERROR ORA-16086: Redo data cannot be written to the standby redo log

The underlying reason is found in the far sync RFS trace file:

Trace file /u01/app/oracle/diag/rdbms/fs/ORCL/trace/ORCL_rfs_29164.trc
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
Build label:    RDBMS_19.3.0.0.0DBRU_LINUX.X64_190417
ORACLE_HOME:    /u01/app/oracle/product/19.3.0/dbhome_1
System name:    Linux
Node name:      ip-172-31-28-23.ec2.internal
Release:        3.10.0-1062.el7.x86_64
Version:        #1 SMP Thu Jul 18 20:25:13 UTC 2019
Machine:        x86_64
VM name:        Xen Version: 4.2 (HVM)
Instance name: ORCL
Redo thread mounted by this instance: 1
Oracle process number: 56
Unix process pid: 29164, image: oracle@ip-172-31-28-23.ec2.internal


*** 2019-09-04T15:56:40.334571-04:00
*** SESSION ID:(66.28887) 2019-09-04T15:56:40.334605-04:00
*** CLIENT ID:() 2019-09-04T15:56:40.334616-04:00
*** SERVICE NAME:() 2019-09-04T15:56:40.334624-04:00
*** MODULE NAME:(oracle@ip-172-31-86-22.ec2.internal (TNS V1-V3)) 2019-09-04T15:56:40.334631-04:00
*** ACTION NAME:() 2019-09-04T15:56:40.334639-04:00
*** CLIENT DRIVER:() 2019-09-04T15:56:40.334646-04:00

krsv_proc_add: Request to add process to V$MANAGED_STANDBY [krsr.c:4229]
krsr_abrt: The primary database is operating in MAXIMUM PROTECTION
  or MAXIMUM AVAILABILITY mode, and the standby database
  does not contain any viable SRLs
Encountered error status 16086
krsv_proc_rem: Request to remove process from V$MANAGED_STANDBY [krsr.c:12657]

Analysis

Consider the message “the standby database does not contain any viable SRLs” (standby redo logs). By “viable” is meant standby redo logs of size greater than or equal to the primary online redo log. Check the primary online redo log:

SQL> select group#, bytes/1024/1024 mb from v$log;

    GROUP#         MB
---------- ----------
         1        200
         2        200
         3        200

All primary online redo logs are 200 MB. Now check the standby redo logs at each site.

Primary:

SQL> select group#, bytes/1024/1024 mb from v$standby_log;

    GROUP#         MB
---------- ----------
         4         50
         5         50
         6         50
         7         50
         8         50
         9        200

6 rows selected.

Already there is a problem. Not all the standby logs are the same size. Check the far sync:

SQL> select group#, bytes/1024/1024 mb from v$standby_log;

    GROUP#         MB
---------- ----------
         4         50
         5         50
         6         50
         7         50
         8         50

All the standby logs are 50 MB. They are too small. There are no viable standby redo logs. Check the standby:

SQL> select group#, bytes/1024/1024 mb from v$standby_log;

    GROUP#         MB
---------- ----------
         4         50
         5         50
         6         50
         7         50
         8         50

Again, there are no viable SRLs.

Possible root cause

The online redo log size is 200 MB, which happens to be the Database Configuration Assistant (DBCA) default size.

Due to some administrative error, the standby redo logs were 50 MB when the database was duplicated. They should have been 200 MB.

Corrective action

  1. Drop all standby redo logs that are not 200 MB.
  2. Create a set of 200 MB SRLs per database.

Standby database

DGMGRL> edit database SF set state=apply-off;
Succeeded.
SQL> select group#, bytes/1024/1024 mb, status from v$standby_log;

    GROUP#         MB STATUS
---------- ---------- ----------
         4         50 UNASSIGNED
         5         50 UNASSIGNED
         6         50 UNASSIGNED
         7         50 UNASSIGNED
         8         50 UNASSIGNED

SQL> alter system set standby_file_management = manual;

System altered.

SQL> alter database drop logfile group 4;

Database altered.

SQL> alter database drop logfile group 5;

Database altered.

SQL> alter database drop logfile group 6;

Database altered.

SQL> alter database drop logfile group 7;

Database altered.

SQL> alter database drop logfile group 8;

Database altered.

SQL> alter database add standby logfile size 200m;

Database altered.

SQL> alter database add standby logfile size 200m;

Database altered.

SQL> alter database add standby logfile size 200m;

Database altered.

SQL> alter database add standby logfile size 200m;

Database altered.

SQL> alter system set standby_file_management = auto;

System altered.

SQL> select group#, bytes/1024/1024 mb, status from v$standby_log;

    GROUP#         MB STATUS
---------- ---------- ----------
         4        200 UNASSIGNED
         5        200 UNASSIGNED
         6        200 UNASSIGNED
         7        200 UNASSIGNED
DGMGRL> edit database SF set state=apply-on;
Succeeded.

Far sync

SQL> select group#, bytes/1024/1024 mb, status from v$standby_log;

    GROUP#         MB STATUS
---------- ---------- ----------
         4         50 UNASSIGNED
         5         50 UNASSIGNED
         6         50 UNASSIGNED
         7         50 UNASSIGNED
         8         50 UNASSIGNED

SQL> alter database drop logfile group 4;

Database altered.

SQL> alter database drop logfile group 5;

Database altered.

SQL> alter database drop logfile group 6;

Database altered.

SQL> alter database drop logfile group 7;

Database altered.

SQL> alter database drop logfile group 8;

Database altered.

SQL> alter database add standby logfile size 200m;

Database altered.

SQL> alter database add standby logfile size 200m;

Database altered.

SQL> alter database add standby logfile size 200m;

Database altered.

SQL> alter database add standby logfile size 200m;

Database altered.

SQL> select group#, bytes/1024/1024 mb, status from v$standby_log;

    GROUP#         MB STATUS
---------- ---------- ----------
         1        200 UNASSIGNED
         2        200 UNASSIGNED
         3        200 UNASSIGNED
         4        200 UNASSIGNED

Primary

SQL> select group#, bytes/1024/1024 mb, status from v$standby_log;

    GROUP#         MB STATUS
---------- ---------- ----------
         4         50 UNASSIGNED
         5         50 UNASSIGNED
         6         50 UNASSIGNED
         7         50 UNASSIGNED
         8         50 UNASSIGNED
         9        200 UNASSIGNED

6 rows selected.

SQL> alter database drop logfile group 4;

Database altered.

SQL> alter database drop logfile group 5;

Database altered.

SQL> alter database drop logfile group 6;

Database altered.

SQL> alter database drop logfile group 7;

Database altered.

SQL> alter database drop logfile group 8;

Database altered.

SQL> alter database add standby logfile size 200m;

Database altered.

SQL> alter database add standby logfile size 200m;

Database altered.

SQL> alter database add standby logfile size 200m;

Database altered.

SQL> select group#, bytes/1024/1024 mb, status from v$standby_log;

    GROUP#         MB STATUS
---------- ---------- ----------
         4        200 UNASSIGNED
         5        200 UNASSIGNED
         6        200 UNASSIGNED
         9        200 UNASSIGNED

Double check all systems

Check NY, FS, and SF.

SQL> select count(*)numlog, bytes/1024/1024 mb from v$standby_log group by bytes;

    NUMLOG         MB
---------- ----------
         4        200

Data Guard

After a few seconds, note that the system state is normal. The protection mode is now Maximum Availability.

DGMGRL> show configuration

Configuration - ORCL_CONFIG

  Protection Mode: MaxAvailability
  Members:
  NY - Primary database
    FS - Far sync instance
      SF - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 51 seconds ago)

Conclusion

Be aware of these lessons or practices:

  • Before duplicating the primary database, note the online redo log (ORL) size. Check that all ORLs are the same size, that the SRLs are all the same size, and that the SRLs match the ORL size (a check query follows this list).
  • Recognize that a normal Data Guard in steady state should be free of warnings.
  • Before attempting to upgrade the protection level, resolve all persistent warnings.
  • Drill in to errors as follows:
    • show configuration
    • show objecttype objectname StatusReport
    • show objecttype objectname property
      where property is a monitor property, LogXptStatus in this case
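
Per the first bullet, a quick way to compare the online and standby redo log sizes on each database is a union of v$log and v$standby_log; a minimal sketch:

select 'ORL' log_type, group#, bytes/1024/1024 mb from v$log
union all
select 'SRL', group#, bytes/1024/1024 from v$standby_log
order by 1, 2;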


Scripts for trace 10053

By Brian Fitzgerald

Trace 10053 tells you “how” the optimizer selected its plan — the reasons for the choices it has made. The main fact to remember when you are collecting trace 10053 is that you must force a hard parse each time you run your statement. Although forcing a hard parse can be done by manually varying the statement’s SQL text, this blog post offers a way to automate some of the steps, in case you need to quickly make a high number of trace 10053 runs.

For the sake of this blog article, we’ll use trace 10053 to examine the effect of statistics on the execution plan.

Preliminary: To run trace 10053, the user must be granted alter session.

SQL> grant alter session to scott;

Grant succeeded.

Trace 10053 with manual steps

In order to analyze the effect of gathering statistics on the execution plan, we’ll run a statement twice, once without statistics and once with statistics. We’ll cycle the pluggable database to guarantee at least one hard parse to start with. Here is file demo.softparse.10053.sql:

set pagesize 50000
conn / as sysdba
alter pluggable database pdba close;
alter pluggable database pdba open;
@ conn.pdba.scott.sql

alter session set tracefile_identifier = a;
exec dbms_stats.delete_schema_stats(user);
alter session set events='10053 trace name context forever, level 1';
select e.ename, e.deptno
from emp e
;
alter session set events '10053 trace name context off';
select value tracefilename
from v$diag_info
where name = 'Default Trace File';

alter session set tracefile_identifier = b;
exec dbms_stats.gather_schema_stats(user);
alter session set events='10053 trace name context forever, level 1';
select e.ename, e.deptno
from emp e
;
alter session set events '10053 trace name context off';
select value tracefilename
from v$diag_info
where name = 'Default Trace File';

The output:

[oracle@stormking cdb12201 sql]$ sqlplus /nolog @ demo.softparse.10053.sql
SQL*Plus: Release 12.2.0.1.0 Production on Mon Aug 7 21:25:35 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected.
Pluggable database altered.
Pluggable database altered.
Connected.
Session altered.
PL/SQL procedure successfully completed.
Session altered.
ENAME DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_16771_A.trc
Session altered.
PL/SQL procedure successfully completed.
Session altered.
ENAME DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_16771_B.trc
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
[oracle@stormking cdb12201 sql]$ ls -l /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_16771_{A,B}.trc
ls: cannot access /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_16771_B.trc: No such file or directory
-rw-r----- 1 oracle asmadmin 99753 Aug 7 21:25 /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_16771_A.trc

Whoops, where is my second trace file? The second trace file is not there because I did not force a hard parse on the second statement execution. The SQL text was exactly the same in both runs:

select e.ename, e.deptno
from emp e
;

We failed to collect a trace 10053 for the second execution.

I could work around this by manually varying the whitespace or capitalization, or by adding a comment. There are other ways to force a hard parse that sometimes work, such as issuing a grant or flushing the statement from the shared pool, if you have the privilege.
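As an example of the shared pool approach, here is a hedged sketch using dbms_shared_pool.purge (requires execute on the package; the address and hash value shown are placeholders you would look up first):

-- find the cursor's address and hash value
select address, hash_value from v$sqlarea where sql_id = '6z7nrbdts160v';

-- purge that single cursor ('C' means cursor), forcing a hard parse on the next execution
exec dbms_shared_pool.purge('000000008A1B2C3D, 1234567890', 'C');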

In the next example, the statements are the same, except that SELECT is in caps in the first, and FROM is in caps in the second. These SQL text variations were made manually with an editor. Here is how demo.hardparse.10053.sql differs from the original:

[oracle@stormking cdb12201 sql]$ diff demo.softparse.10053.sql demo.hardparse.10053.sql
10c10
< select e.ename, e.deptno
---
> SELECT e.ename, e.deptno
22c22
< from emp e
---
> FROM emp e

The output:

[oracle@stormking cdb12201 sql]$ sqlplus /nolog @ demo.hardparse.10053.sql
SQL*Plus: Release 12.2.0.1.0 Production on Mon Aug 7 21:34:36 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected.
Pluggable database altered.
Pluggable database altered.
Connected.
Session altered.
PL/SQL procedure successfully completed.
Session altered.
ENAME DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_A.trc
Session altered.
PL/SQL procedure successfully completed.
Session altered.
ENAME DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_B.trc
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
[oracle@stormking cdb12201 sql]$ ls -l /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_{A,B}.trc
-rw-r----- 1 oracle asmadmin 99909 Aug 7 21:35 /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_A.trc
-rw-r----- 1 oracle asmadmin 98628 Aug 7 21:35 /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_B.trc

Notice that there are two 10053 trace files this time, one for each statement. The statements are unique, so each one was hard parsed.

Notice also the trace file continuation message from file A to file B:

[oracle@stormking cdb12201 sql]$ tail -2 /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_A.trc
*** TRACE CONTINUES IN FILE /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_B.trc ***

Issuing alter session set tracefile_identifier = b forced a switch to a new trace file.

That’s the basic idea: force a hard parse and switch trace files on each execution. I want to automate the process because I expect to make a large number of trace 10053 runs in the future.

Scripts for trace 10053

My demo queries are in directory /home/oracle/brian/sql/sql, but I plan to reuse the trace scripts in the future, so I set SQLPATH and put the reusable scripts there.

[oracle@stormking cdb12201 sqlplus]$ echo $SQLPATH
/home/oracle/brian/sql/sqlplus

Here are the scripts:

conn.pdba.scott.sql

A one-liner for connecting to the schema. I will reuse this script many times.

conn scott/tiger@stormking:1521/pdba

trace.10053.on.sql

Here is a one-line script for starting the trace 10053. Using this script is a lot easier than remembering the exact syntax, and it makes for an uncluttered main script.

alter session set events='10053 trace name context forever, level 1';

trace.10053.off.sql

Here is the script for stopping trace 10053. If you do not explicitly turn off trace, you will not get your trace file.

alter session set events '10053 trace name context off';

tracefilename.sql

This script displays the trace file name. It’s a better approach than searching for the file manually.

set echo off
column tracefilename format a100
set trimspool on
select value tracefilename
from v$diag_info
where name = 'Default Trace File';

randident.sql

I want to use a random identifier in two places. I want a random trace file identifier, and I want a random identifier somewhere in my statement.

-- set sqlplus substitution variable randident
-- to a random character string
set termout off
column randident new_value randident format a6
select dbms_random.string('u',6) randident from dual;
set termout on

randtfi.sql

This script sets the tracefile identifier to a random string. Notice that each time you call randtfi.sql, Oracle switches to a new trace file. That’s handy if you want to break up the trace for a single sqlplus session into multiple parts.

-- Change to a new, randomly named trace file on each call
@ randident.sql
alter session set tracefile_identifier = &&randident;

This is my collection of reusable scripts. They go in /home/oracle/brian/sql/sqlplus.

Demonstrating the scripts

The demonstration is a throwaway example. The script does not belong in SQLPATH. I have put the script in /home/oracle/brian/sql/sql. Here is demo.trace.10053.sql:

set pagesize 50000
@ conn.pdba.scott.sql

exec dbms_stats.delete_schema_stats(user);
@ randtfi.sql
@ trace.10053.on.sql
@ randident.sql
set echo on
set verify on
select e.ename &&randident, e.deptno
from emp e
;
set echo off
@ trace.10053.off.sql
@ tracefilename.sql

exec dbms_stats.gather_schema_stats(user);
@ randtfi.sql
@ trace.10053.on.sql
@ randident.sql
set echo on
set verify on
select e.ename &&randident, e.deptno
from emp e
;
set echo off
@ trace.10053.off.sql
@ tracefilename.sql

Running demo.trace.10053.sql:

[oracle@stormking cdb12201 sql]$ sqlplus /nolog @ demo.trace.10053.sql
SQL*Plus: Release 12.2.0.1.0 Production on Mon Aug 7 21:50:53 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected.
PL/SQL procedure successfully completed.
Session altered.
Session altered.
SQL> set verify on
SQL> select e.ename &&randident, e.deptno
 2 from emp e
 3 ;
old 1: select e.ename &&randident, e.deptno
new 1: select e.ename AWQVDZ, e.deptno
AWQVDZ DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
SQL> set echo off
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_RYNDLO.trc
PL/SQL procedure successfully completed.
Session altered.
Session altered.
SQL> set verify on
SQL> select e.ename &&randident, e.deptno
 2 from emp e
 3 ;
old 1: select e.ename &&randident, e.deptno
new 1: select e.ename ZRQGGE, e.deptno
ZRQGGE DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
SQL> set echo off
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_XGLYME.trc
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
[oracle@stormking cdb12201 sql]$ grep '^sql_id' /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_RYNDLO.trc /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_XGLYME.trc
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_RYNDLO.trc:sql_id=6z7nrbdts160v plan_hash_value=-338806364 problem_type=3
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_XGLYME.trc:sql_id=5ub8qv1cugvr4 plan_hash_value=-338806364 problem_type=3

Again, there are two trace files. The first trace file has a continuation message on to the second:

[oracle@stormking cdb12201 sql]$ tail -2 /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_RYNDLO.trc
*** TRACE CONTINUES IN FILE /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_XGLYME.trc ***

I have varied the output column name:

new 1: select e.ename AWQVDZ, e.deptno

vs.

new 1: select e.ename ZRQGGE, e.deptno

Because the SQL is different each time, a hard parse is forced each execution. I could have also forced a hard parse by placing the random string in a comment:

/* &&randident */

Now, on to the analysis and comparison of the two trace files.

In the first execution, statistics had been deleted. The trace 10053 was collected in file cdb12201_ora_22121_RYNDLO.trc. The plan is:

============
Plan Table
============
-------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 3 | |
| 1 | TABLE ACCESS FULL | EMP | 14 | 280 | 3 | 00:00:01 |
-------------------------------------+-----------------------------------+

Trace cdb12201_ora_22121_XGLYME.trc, with statistics, show this plan:

============
Plan Table
============
-------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 3 | |
| 1 | TABLE ACCESS FULL | EMP | 14 | 126 | 3 | 00:00:01 |
-------------------------------------+-----------------------------------+

The plan hash value in both cases is -338806364. The rows and cost estimates are the same, and the bytes estimate is slightly different.

Trace cdb12201_ora_22121_RYNDLO.trc contains a dynamic sampling analysis:

*** 2017-08-07T21:50:53.660117-04:00 (PDBA(3))
** Performing dynamic sampling initial checks. **
** Dynamic sampling initial checks returning TRUE (level = 2).
** Dynamic sampling updated index stats.: PK_EMP, blocks=1
** Dynamic sampling updated table stats.: blocks=5

*** 2017-08-07T21:50:53.660117-04:00 (PDBA(3))
** Generated dynamic sampling query:
 query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE RESULT_CACHE(SNAPSHOT=3600) opt_param('parallel_execution_enabled', 'false') NO_PARALLEL(SAMPLESUB) NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM (SELECT /*+ NO_PARALLEL("E") FULL("E") NO_PARALLEL_INDEX("E") */ 1 AS C1, 1 AS C2 FROM "SCOTT"."EMP" "E") SAMPLESUB

*** 2017-08-07T21:50:53.660117-04:00 (PDBA(3))
** Executed dynamic sampling query:
 level : 2
 sample pct. : 100.000000
 actual sample size : 14
 filtered sample card. : 14
 orig. card. : 409
 block cnt. table stat. : 5
 block cnt. for sampling: 5
 max. sample block cnt. : 64
 sample block cnt. : 5
 min. sel. est. : -1.00000000

In the second execution, statistics had been collected. The trace is cdb12201_ora_22121_XGLYME.trc. No dynamic sampling appears.

The analysis of the bytes difference follows. For the case of gathered stats, the row length is:

SQL> select sum(avg_col_len) from user_tab_columns 
where table_name = 'EMP' 
and column_name in ('ENAME','DEPTNO');

SUM(AVG_COL_LEN)
----------------
 9

9 bytes per row x 14 rows = 126 bytes, which is the bytes value in the plan at Id 1, TABLE ACCESS FULL. For deleted stats, avg_col_len is null. The dynamic sampling query only finds the row count. The simplified dynamic sampling query is:

SQL> SELECT NVL(SUM(C1),0) FROM (SELECT /*+ FULL("E") */ 1 AS C1 FROM "SCOTT"."EMP" "E");
NVL(SUM(C1),0)
--------------
 14

Dynamic sampling tells us nothing about the number of bytes per row. The row length would have to come from the catalog.

SQL> select column_name, data_length, data_type from user_tab_columns where table_name = 'EMP' and column_name in ('ENAME','DEPTNO');

COLUMN_NAME DATA_LENGTH DATA_TYPE
------------------------------ ----------- ------------------------------
ENAME 10 VARCHAR2
DEPTNO 22 NUMBER

Allowing two digits per byte for the NUMBER column (11 bytes) plus 10 bytes for ENAME, each row would come to roughly 21 bytes; that is how I figure it. The optimizer estimated 20 bytes per row, or 280 bytes in total.

In conclusion, whether dynamic stats or gathered stats were used, the resulting plan was the same. The estimates of cardinality, CPU and time were the same, but the estimates of bytes returned differed slightly. Trace 10053 helps to explain the similarities and differences.

This was a simple example to demonstrate the use of the trace 10053 scripts to find out the reason for the optimizer’s predicted costs.

Conclusion

This blog post made the following points about the trace 10053 technique:

  • Trace 10053 can be used to find out why the optimizer made its decisions.
  • A user must have been granted alter session before running trace 10053.
  • To get trace 10053 output, it is necessary to force a hard parse.
  • To get a trace file split into pieces, change the trace file identifier.
  • To get a random identifier in sqlplus, use dbms_random and column new_value.
  • One way to force a hard parse is to vary the SQL with a random identifier.
  • Force a trace file switch by using a random tracefile identifier.

The scripts in this blog post can let you efficiently, systematically, and quickly collect trace 10053 files on a SQL statement. The collected files can be reviewed, analyzed, and compared.

Scripts for trace 10046 and tkprof

By Brian Fitzgerald

Trace 10046 is used to collect session diagnostic information, such as cursors, waits, and binds. Trace 10046 can also be used to investigate the internal workings of Oracle software. Trace 10046 can be used to localize errors. The tkprof utility formats 10046 trace files into a readable format.

The syntax for invoking trace 10046 and tkprof can be arcane, verbose, and error prone. Putting the commands in several files in a separate directory can lead to cleaner scripts and less cluttered directories. Using scripts can eliminate the manual steps of identifying the trace file and processing with tkprof. These scripts are for tracing “own session”.

Preliminary: To run trace 10046, the user must be granted alter session.

SQL> grant alter session to scott;
Grant succeeded.

Scripts for trace 10046 and tkprof

The scripts are general purpose and can be used for more than one investigation. You should set environment SQLPATH to a common directory and put the scripts there. For example:

[oracle@stormking cdb12201 sqlplus]$ echo $SQLPATH
/home/oracle/brian/sql/sqlplus
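
Setting the variable, typically in the shell profile, might look like this (using the path above):

export SQLPATH=/home/oracle/brian/sql/sqlplus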

Here are the scripts:

conn.pdba.scott.sql

A script for connecting to the schema.

conn scott/tiger@stormking:1521/pdba

trace.10046.on.sql

Start tracing

alter session set max_dump_file_size = '100M';
alter session set events '10046 trace name context forever, level 12';

trace.10046.off.sql

Stop tracing. Close the trace file.

alter session set events '10046 trace name context off';

tracefilename.sql

Display the trace file name. Set sqlplus substitution variable tracefilename.

set verify off
set trimspool on
column tracefilename new_value tracefilename format a100
select value tracefilename
from v$diag_info
where name = 'Default Trace File';

tkprof.sql

Identify the trace file and process it with tkprof. Store the output in the current working directory.

@ tracefilename.sql
column base new_value base
select regexp_replace( '&&tracefilename', '.*/(.*).trc','\1' ) base
from dual;
define tkfile=&&base..tkp
host tkprof &&tracefilename &&tkfile
prompt created file:
prompt &&tkfile

Demonstration

Here’s a simple example. User scott tries to run “show parameter”.

SQL> show parameter cursor
ORA-00942: table or view does not exist

What table or view does not exist? The message does not say. Let’s trace and find out. Here is file show.parameter.demo.sql for tracing “show parameter”:

@ conn.pdba.scott.sql
@ trace.10046.on.sql
show parameter optimizer
@ trace.10046.off.sql
@ tkprof.sql

See how clean and simple it looks. Now run it:

[oracle@stormking cdb12201 parameter]$ sqlplus /nolog @ show.parameter.demo.sql
SQL*Plus: Release 12.2.0.1.0 Production on Thu Jul 27 21:48:50 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected.
Session altered.
Session altered.
ORA-00942: table or view does not exist
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_29212.trc
BASE
------------------
cdb12201_ora_29212
TKPROF: Release 12.2.0.1.0 - Development on Thu Jul 27 21:48:50 2017
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
created file:
cdb12201_ora_29212.tkp

In cdb12201_ora_29212.tkp:

The following statement encountered a error during parse:
SELECT NAME NAME_COL_PLUS_SHOW_PARAM,
DECODE(TYPE,1,'boolean',2,'string',3,'integer',4,'file',5,'number', 6,'big integer', 'unknown') TYPE,
DISPLAY_VALUE VALUE_COL_PLUS_SHOW_PARAM 
FROM V$PARAMETER WHERE UPPER(NAME) LIKE UPPER(:NMBIND_SHOW_OBJ) 
ORDER BY NAME_COL_PLUS_SHOW_PARAM,ROWNUM
Error encountered: ORA-00942

tkprof shows that “show parameter” failed because the user has no access to view v$parameter.

The trace file itself also shows the error:

PARSING IN CURSOR #140627652972696 len=289 dep=0 uid=112 oct=3 lid=112 tim=2237530714211 hv=2462394820 ad='b4be2118' sqlid='7cfz5wy9caaf4'
SELECT NAME NAME_COL_PLUS_SHOW_PARAM,DECODE(TYPE,1,'boolean',2,'string',3,'integer',4,'file',5,'number', 6,'big integer', 'unknown') TYPE,DISPLAY_VALUE VALUE_COL_PLUS_SHOW_PARAM FROM V$PARAMETER WHERE UPPER(NAME) LIKE UPPER(:NMBIND_SHOW_OBJ) ORDER BY NAME_COL_PLUS_SHOW_PARAM,ROWNUM
END OF STMT
PARSE #140627652972696:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=2237530714211

User scott can confirm no access:

SQL> select * from V$PARAMETER where 1 = 2;
select * from V$PARAMETER where 1 = 2
 *
ERROR at line 1:
ORA-00942: table or view does not exist

To solve the user’s problem, select_catalog_role or some other role can be granted to user scott.

SQL> select grantee from dba_tab_privs 
where table_name = 'V_$PARAMETER' 
and grantee in ( select role from dba_roles ) order by grantee;
GRANTEE
----------------------------------------------------------------------------
DV_SECANALYST
SELECT_CATALOG_ROLE
SQL> select grantee from dba_role_privs 
where granted_role = 'SELECT_CATALOG_ROLE' order by grantee;
GRANTEE
----------------------------------------------------------------------------
DBA
EM_EXPRESS_BASIC
EXP_FULL_DATABASE
IMP_FULL_DATABASE
OEM_MONITOR
SYS
SYSBACKUP
SYSUMF_ROLE
8 rows selected.
SQL> select grantee from dba_sys_privs 
where privilege = 'SELECT ANY DICTIONARY' order by grantee;
GRANTEE
----------------------------------------------------------------------------
DBA
DBSNMP
GGSYS
GSMADMIN_INTERNAL
OEM_MONITOR
SYSBACKUP
SYSDG
WMSYS
8 rows selected.

Alternatively, we can create a new role:

SQL> conn / as sysdba
Connected.
SQL> alter session set container = pdba;
Session altered.
SQL> create role sqlplus_role;
Role created.
SQL> grant select on v_$parameter to sqlplus_role;
Grant succeeded.
SQL> grant sqlplus_role to scott;
Grant succeeded.
SQL> @ conn.pdba.scott.sql
Connected.
SQL> show parameter cursor
NAME TYPE VALUE
------------------------------------ ----------- --------------------------
cursor_bind_capture_destination string memory+disk
cursor_invalidation string IMMEDIATE
cursor_sharing string EXACT
cursor_space_for_time boolean FALSE
open_cursors integer 300
session_cached_cursors integer 50

The user’s problem is solved. sqlplus command “show parameter” no longer throws ORA-00942.

Summary

  • event 10046 can be used for performance diagnosis
  • trace 10046 can also be used to investigate the inner workings of software
  • trace 10046 can be used to identify the source of an error
  • reusable sqlplus scripts belong in SQLPATH
  • trace syntax can be saved in convenient scripts
  • the result is less cluttered administrative scripts
  • the task of identifying the trace file and processing it with tkprof can be handled in a single script
  • sqlplus command “show parameter” requires access to v$parameter
  • users can be granted limited privileges with custom roles