Notes on histograms

By Brian Fitzgerald

This blog post is a worked example of Oracle histograms. In this case, a frequency histogram is demonstrated in the context of a single-table query.

Setup

cr.histodemo.sql

Table ords. Intended column data distribution:

10000 rows
10000 unique ordid
1000 categories, evenly distributed
2 distinct status, skewed
99.99% COMPLETE
0.01% PENDING

One index on each column, ORDID, CATEGORYID, and STATUS.

Not null constraints are there to simplify the demonstration.

whenever oserror exit 1
@ sqlerc.sql
drop table ords purge;
@ sqlerx.sql

create table ords
(
ordid number not null,
categoryid number not null,
status varchar2(10) not null
);

insert into ords ( ordid, categoryid, status )
select
level,
mod( level, 1000 ),
case mod( level, 10000) when 0 then 'PENDING' else 'COMPLETE' end
from dual
connect by level <= 10000;

alter table ords add constraint ords primary key ( ordid );
create index category_idx on ords ( categoryid );
create index status_idx on ords ( status );

quit

output:

[oracle@stormking db12201 stats]$ sqlplus u/u @ cr.histodemo.sql
SQL*Plus: Release 12.2.0.1.0 Production on Sun Feb 11 11:37:31 2018
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Last Successful login time: Sun Feb 11 2018 00:10:42 -05:00
Connected to:
Oracle Database 12c Enterprise Edition 
Release 12.2.0.1.0 - 64bit Production
Table dropped.
Table created.
10000 rows created.
Table altered.
Index created.
Index created.
Disconnected from Oracle Database 12c Enterprise Edition 
Release 12.2.0.1.0 - 64bit Production
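As a cross-check on the intended distribution, here is a small Python sketch (not part of the demo scripts) that mimics the mod() rules in the INSERT and counts the resulting values:

```python
# Mimic the connect-by INSERT: 10000 rows, categoryid = mod(level, 1000),
# status = 'PENDING' only when mod(level, 10000) = 0.
rows = [
    (level, level % 1000, 'PENDING' if level % 10000 == 0 else 'COMPLETE')
    for level in range(1, 10001)
]
print(len(rows))                                     # 10000 rows
print(len({c for _, c, _ in rows}))                  # 1000 distinct categoryid
print(sum(1 for _, _, s in rows if s == 'PENDING'))  # 1 PENDING row (0.01%)
```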

Execution scripts

query.ords.sql

Two queries on table ords:

  1. equality predicate on STATUS only
  2. equality predicate on STATUS and CATEGORYID

The 10053 trace shows the optimizer’s rationale.

set linesize 100
@ sqlerx.sql
column plan_table_output format a80

@ set.tracefile.identifier.sql status
@ trace.10053.on.sql

select ordid from ords
where status = 'PENDING'
;
select * from table(dbms_xplan.display_cursor( format=>'basic' ));
@ trace.10053.off.sql

@ set.tracefile.identifier.sql cat
@ trace.10053.on.sql

select ordid from ords
where categoryid = 0
and status = 'COMPLETE'
;
select * from table(dbms_xplan.display_cursor( format=>'basic' ));
@ trace.10053.off.sql

rpt.col.usage.sql

Script to show what types of workload have run on the columns. Database monitoring is flushed for the sake of displaying the col_usage report before gathering stats in this demonstration. Gathering stats automatically flushes column usage, so calling flush_database_monitoring_info is generally not required.

define ownname=&&1
define tabname=&&2
whenever oserror exit 1
@ sqlerx.sql
@ l32k.p50k.sql

set verify off

set long 2000000000
exec dbms_stats.flush_database_monitoring_info;
column rpt format a160
select dbms_stats.report_col_usage('&&ownname','&&tabname') rpt from dual;

gather.table.stats.sql

Invalidation, in this case, is used to force a 10053 trace for the sake of this demonstration. The 10046 trace and tkprof are for checking which queries run against ords during gather stats. With the exception of no_invalidate, dbms_stats is run with the default options, to show how histograms are gathered automatically during routine database maintenance. Estimate_percent defaults to auto_sample_size. Gathering or deleting a histogram can be forced with option method_opt.

define ownname=&&1
define tabname=&&2
whenever oserror exit 1
@ sqlerx.sql
set verify off

@ set.tracefile.identifier.sql stats
@ trace.10046.on.sql

begin
 dbms_stats.gather_table_stats (
 ownname => '&&ownname',
 tabname => '&&tabname',
 no_invalidate => false
 );
end;
/
prompt done gather stats on &&ownname..&&tabname

@ trace.10046.off.sql
@ tkprof.sql

user.tab.statistics.sql

Script to show the existence or nonexistence of a histogram

define tabname=&&1
whenever oserror exit 1
@ sqlerx.sql
@ l32k.p50k.sql
@ columnformat.sql

select
table_name,
blocks,
num_rows
from user_tab_statistics
where
table_name = '&&tabname';

select
index_name,
leaf_blocks,
num_rows,
distinct_keys
from user_ind_statistics
where table_name = '&&tabname';

select
column_name,
num_distinct,
density,
histogram,
num_buckets
from user_tab_col_statistics
where table_name = '&&tabname';

run.histodemo.sql

The demonstration script

whenever oserror exit 1
@ spoolfile.uniq.sql ords

@ rpt.col.usage.sql U ORDS
@ gather.table.stats.sql U ORDS
@ user.tab.statistics.sql ORDS
@ query.ords.sql

@ spooloff.sql
quit

Execution run 1

Files

ords.db12201.20180211.142321.84.txt

run.histodemo.sql output

db12201_ora_20225_STATS.trc

10046 trace of gather table stats

db12201_ora_20225_STATS.tkp

tkprof of gather table stats

db12201_ora_20225_STATUS.trc

10053 trace of “where status = ‘PENDING’”

db12201_ora_20225_CAT.trc

10053 trace of “where categoryid = 0 and status = ‘COMPLETE’”

Highlights

  • Because table ords is newly created, the column usage report shows no workload
  • Gather stats ran four queries on table ords, namely a full table scan and one per index.
$ grep -i '"U"."ORDS" t' db12201_ora_20225_STATS.tkp
 "U"."ORDS" t /* ACL,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL*/
 "U"."ORDS" t where "ORDID" is not null
 "U"."ORDS" t where "CATEGORYID" is not null
 "U"."ORDS" t where "STATUS" is not null
  • No histogram was gathered
  • ORDID is the most selective column, having the highest num_distinct and the lowest density
  • CATEGORYID is medium-selective
  • STATUS is not, in general, selective, having only 2 distinct values
COLUMN_NAME NUM_DISTINCT DENSITY HISTOGRAM NUM_BUCKETS
------------------------------ ------------ ---------- -------------------
ORDID 10000 .0001 NONE 1
CATEGORYID 1000 .001 NONE 1
STATUS 2 .5 NONE 1
  • No histogram is present on status. Density is computed:

density = 1 / num_distinct = 1/2 = .5
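A quick Python sanity check of the no-histogram densities reported above, using the num_distinct values from the listing:

```python
# density = 1 / num_distinct when there is no histogram.
num_distinct = {'ORDID': 10000, 'CATEGORYID': 1000, 'STATUS': 2}
density = {col: 1 / ndv for col, ndv in num_distinct.items()}
print(density)  # {'ORDID': 0.0001, 'CATEGORYID': 0.001, 'STATUS': 0.5}
```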

  • “where status = ‘PENDING’” gets a full table scan
----------------------------------
| Id | Operation | Name |
----------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS FULL| ORDS |
----------------------------------
  • “where categoryid = 0 and status = ‘COMPLETE’” gets an index range scan
------------------------------------------------------------
| Id | Operation | Name |
------------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| ORDS |
| 2 | INDEX RANGE SCAN | CATEGORY_IDX |
-----------------------------------------------------------

Execution run 2

Files

ords.db12201.20180211.142525.84.txt

run.histodemo.sql output for run 2.

db12201_ora_20607_STATS.trc

db12201_ora_20607_STATS.tkp

Gather stats 10046 trace and tkprof.

db12201_ora_20607_STATUS.trc

db12201_ora_20607_CAT.trc

10053 trace again for the queries, “where status” and “where categoryid …
and status”.

Highlights

  • The column usage report shows that equality predicates have run on columns CATEGORYID and STATUS.
COLUMN USAGE REPORT FOR U.ORDS
..............................

1. CATEGORYID : EQ
2. STATUS : EQ
  • This time, gather stats ran six queries on table ords.
$ grep -i '"U"."ORDS" t' db12201_ora_20607_STATS.tkp | wc -l
6
  • gather table stats ran this query on column STATUS:
select 
/*+ no_parallel(t) no_parallel_index(t) dbms_stats 
cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) 
no_monitoring xmlindex_sel_idx_tbl 
opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad 
*/
 substrb(dump("STATUS",16,0,64),1,240) val,
 rowidtochar(rowid) rwid 
from "U"."ORDS" t 
where rowid in (
chartorowid('AAAWt1AAHAAAAFjAAA'),
chartorowid('AAAWt1AAHAAAAGCAAv')) 
order by "STATUS"

Clearly, the only purpose is to obtain the two unique STATUS values.

  • Gather stats ran this query on column CATEGORYID:
select 
substrb(dump(val,16,0,64),1,240) ep, 
freq, 
cdn, 
ndv,
(sum(pop) over()) popcnt,
(sum(pop*freq) over()) popfreq,
substrb(dump(max(val) over(),16,0,64),1,240) maxval,
substrb(dump(min(val) over(),16,0,64),1,240) minval
from
 (
select val, 
freq, 
(sum(freq) over()) cdn, 
(count(*) over()) ndv,
(case when freq > ((sum(freq) over())/254) then 1 else 0 end) pop
from (select
  /*+ no_parallel(t) no_parallel_index(t) dbms_stats
    cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) 
    no_monitoring xmlindex_sel_idx_tbl 
    opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad
 */
  "CATEGORYID" val, 
  count("CATEGORYID") freq 
from "U"."ORDS" t 
where "CATEGORYID" is not null 
group by "CATEGORYID"
)) order by val

Inference: dbms_stats is collecting information needed to construct a histogram on STATUS and CATEGORYID.

Notice the column name “ep”, which could refer to a histogram endpoint value.

“pop” = 1 if popular, 0 if non-popular.
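The inner aggregation in the dbms_stats query can be sketched in Python (my own sketch, not Oracle code): cdn is the total non-null row count, ndv the distinct value count, and a value counts as popular when its frequency exceeds cdn/254, as in the case expression above.

```python
# Sketch of the dbms_stats aggregation over per-value frequencies.
def hist_stats(freqs):
    cdn = sum(freqs.values())            # total non-null rows
    ndv = len(freqs)                     # number of distinct values
    pop = {v: int(f > cdn / 254) for v, f in freqs.items()}
    return cdn, ndv, pop

# STATUS frequencies from this demo: heavy skew, one popular value.
cdn, ndv, pop = hist_stats({'COMPLETE': 9999, 'PENDING': 1})
print(cdn, ndv, sum(pop.values()))   # 10000 2 1

# CATEGORYID frequencies: perfectly uniform, no popular values.
cdn, ndv, pop = hist_stats({c: 10 for c in range(1000)})
print(cdn, ndv, sum(pop.values()))   # 10000 1000 0
```

This matches the PopValCnt:1 reported for STATUS in the run-2 trace.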

  • dbms_stats stored a histogram for column status but not for categoryid
COLUMN_NAME NUM_DISTINCT DENSITY HISTOGRAM NUM_BUCKETS
------------------------------ ------------ ---------- ------- -----------
ORDID 10000 .0001 NONE 1
CATEGORYID 1000 .001 NONE 1
STATUS 2 .00005 FREQUENCY 2

Inference: dbms_stats found skew in status but not in categoryid

  • The reported density on column status has changed:
82c84
< STATUS 2 .5 NONE 1
---
> STATUS 2 .00005 FREQUENCY 2

According to the 2007 note New Density calculation in 11g by Alberto Dell’Era, density = 0.5 / num_rows for frequency histograms. Dell’Era’s note refers only to not-null values. In other words, for frequency histograms, the density stored in the catalog should be

density = .5 / num_values = .5 / ( num_rows - num_nulls )

I have found this to be accurate to within 2.5e-15 for 99% of the 30,000 frequency histograms in one example database, where the histograms were mostly gathered with Oracle 11.1.0.7 binaries. After gathering statistics with Oracle 12.1.0.2 binaries, the dba_tab_col_statistics density agrees with the formula .5 / ( num_rows - num_nulls ) to within 2.5e-15 for 99.95% of all frequency histograms. The formula applies only where num_distinct > 1. For the remaining 0.05%, the density is within 2% of the formula. All the discrepant densities belong to IOTs; however, not all IOTs are discrepant, so I must be missing some detail.

In this example, num_values = 10000 - 0 = 10000. Density = .5 / 10000 = .00005.
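The arithmetic can be checked in Python:

```python
# Dell'Era's frequency-histogram density formula applied to this example.
num_rows, num_nulls = 10000, 0
density = 0.5 / (num_rows - num_nulls)
print(density)  # 5e-05, the .00005 reported in user_tab_col_statistics
```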

The endpoint values, decoded roughly with function hist_numtochar from Martin Widlake’s cre_hist_funcs.sql, are:

ENDPOINT_NUMBER VAL
--------------- ----------
 9999 COMPLEP
 10000 PENDIN<
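Frequency-histogram endpoint numbers are cumulative frequencies. A minimal Python sketch (assuming the value counts from this demo) reproduces the two ENDPOINT_NUMBER values:

```python
# ENDPOINT_NUMBER is a running total of the per-value frequencies,
# taken with the values in ascending order.
freqs = {'COMPLETE': 9999, 'PENDING': 1}
running, endpoints = 0, []
for val in sorted(freqs):
    running += freqs[val]
    endpoints.append((running, val))
print(endpoints)  # [(9999, 'COMPLETE'), (10000, 'PENDING')]
```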

Comment: density is used to cost predicates on non-popular values.

“where status = ‘COMPLETE’” remains non-selective.

“where status = ‘PENDING’” is selective.

Compare run without histogram vs run with histogram

where status = ‘PENDING’

diff run1/db12201_ora_20225_STATUS.trc run2/db12201_ora_20607_STATUS.trc

Output here: 10053 diff: where status = ‘PENDING’

There are several points of interest in the 10053 trace comparison:

  • With the histogram, the selectivity of status = ‘PENDING’ is 1.0000e-04, which is exactly correct.
875,876c877,879
< AvgLen: 9 NDV: 2 Nulls: 0 Density: 0.500000
< Estimated selectivity: 0.500000 , col: #3
---
> AvgLen: 9 NDV: 2 Nulls: 0 Density: 0.000050
> Histogram: Freq #Bkts: 2 UncompBkts: 10000 EndPtVals: 2 ActualVal: no
> Estimated selectivity: 1.0000e-04 , endpoint value predicate, col: #3
  • The plan changes thus:
1016,1021c1028,1034
< -------------------------------------+-----------------------------------+
< | Id | Operation | Name | Rows | Bytes | Cost | Time |
< -------------------------------------+-----------------------------------+
< | 0 | SELECT STATEMENT | | | | 11 | |
< | 1 | TABLE ACCESS FULL | ORDS | 5000 | 63K | 11 | 00:00:01 |
< -------------------------------------+-----------------------------------+
---
> ---------------------------------------------------------+-----------------------------------+
> | Id | Operation | Name | Rows | Bytes | Cost | Time |
> ---------------------------------------------------------+-----------------------------------+
> | 0 | SELECT STATEMENT | | | | 2 | |
> | 1 | TABLE ACCESS BY INDEX ROWID BATCHED | ORDS | 1 | 13 | 2 | 00:00:01 |
> | 2 | INDEX RANGE SCAN | STATUS_IDX| 1 | | 1 | 00:00:01 |
> ---------------------------------------------------------+-----------------------------------+
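The Rows estimates behind the two plans follow directly from cardinality = selectivity * num_rows; a quick Python check:

```python
# Cardinality = selectivity * num_rows, using the two selectivities from
# the 10053 traces (0.5 without the histogram, 1.0e-4 with it).
num_rows = 10000
without_hist = 0.5 * num_rows    # -> 5000 rows, so the full table scan wins
with_hist = 1.0e-4 * num_rows    # -> 1 row, so the index range scan wins
print(int(without_hist), int(with_hist))  # 5000 1
```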

where categoryid = 0 and status = ‘COMPLETE’

diff run1/db12201_ora_20225_CAT.trc run2/db12201_ora_20607_CAT.trc

output here: 10053 diff: where categoryid and status

  • With a histogram, the estimated cost of using an index to access a popular value is higher: all the more reason not to use STATUS_IDX.
 917 Access Path: index (AllEqRange)
 918 Index: STATUS_IDX
919,921c922,924
< resc_io: 30.000000 resc_cpu: 2414493
< ix_sel: 0.500000 ix_sel_with_filters: 0.500000
< Cost: 30.064126 Resp: 30.064126 Degree: 1
---
> resc_io: 60.000000 resc_cpu: 4827696
> ix_sel: 0.999900 ix_sel_with_filters: 0.999900
> Cost: 60.128217 Resp: 60.128217 Degree: 1
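Where the new ix_sel comes from can be checked in Python: with the frequency histogram, the selectivity of the popular value COMPLETE is its bucket frequency over the total rows, rather than 1/NDV.

```python
# ix_sel for status = 'COMPLETE', before and after the histogram.
# Frequencies are the assumed demo counts: 9999 COMPLETE, 1 PENDING.
total = 10000
freq_complete = 9999
ix_sel_no_hist = 1 / 2                     # 0.500000 in the run-1 trace
ix_sel_with_hist = freq_complete / total   # 0.999900 in the run-2 trace
print(ix_sel_no_hist, ix_sel_with_hist)    # 0.5 0.9999
```

The roughly doubled selectivity explains the roughly doubled resc_io (30 to 60) for STATUS_IDX.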
  • The cost of CATEGORY_IDX is unchanged.
 904 ****** Costing Index CATEGORY_IDX
 905 SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
 906 SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
 907 Estimated selectivity: 0.001000 , col: #2
 908 Access Path: index (AllEqRange)
 909 Index: CATEGORY_IDX
 910 resc_io: 11.000000 resc_cpu: 83586
 911 ix_sel: 0.001000 ix_sel_with_filters: 0.001000
 912 Cost: 11.002220 Resp: 11.002220 Degree: 1
  • CATEGORY_IDX is more selective and less costly than STATUS_IDX. The optimizer chooses CATEGORY_IDX.

Need for histogram in an application

The need for a histogram depends on the application. In this example, no histogram is needed on columns ORDID and CATEGORYID because the values are evenly distributed. There is no skew.

If the application code base consists of only this query, then no histogram is needed on STATUS:

select ordid from ords
where categoryid = 0
and status = 'COMPLETE'

If the application additionally has this query, then a histogram is needed on STATUS:

select ordid from ords
where status = 'PENDING'

What about “where status = ‘COMPLETE'”?

The question sometimes arises: what about a query with just “where status = ‘COMPLETE’”? Such a query would access a high number of blocks and return a high number of rows, and would quickly be identified as a poor performer. It is not clear what business value such a query would serve, and such queries appear less commonly in practical applications.

What about two plans for the same statement?

Histograms are sometimes touted for their usefulness in producing more than one plan for the same SQL statement. This blog post does not identify any such examples.

The hypothetical situation we are describing is:

  • The SQL is the same.
  • The plans are different.
  • In one case, the predicate is on a popular value and in the other case, a rare value.
  • In the popular case, the query may return a high number of rows, or a plan step may return a high number of rows to its antecedent.
  • In the non-popular case, the query accesses few blocks and returns quickly.
  • The popular and non-popular cases have business value.
  • The slower response time of the popular case is acceptable.

Findings

  • Column usage in predicates, as recorded in col_usage$, leads to analysis for, but not necessarily storage of, histograms.
  • 10053 trace files show the optimizer’s rationale for choosing its final execution plan.
  • In some cases, a histogram helps the optimizer choose a less costly plan.
  • In some cases, a histogram improves the accuracy of a cost computation, but does not affect the final choice of plan.

10053 diff: group by status

[oracle@stormking db12201 12.2]$ diff run1/db12201_ora_20225_GBY.trc run2/db12201_ora_20607_GBY.trc
1c1
< Trace file /u01/app/oracle/diag/rdbms/db12201/db12201/trace/db12201_ora_20225_GBY.trc
---
> Trace file /u01/app/oracle/diag/rdbms/db12201/db12201/trace/db12201_ora_20607_GBY.trc
12,13c12,13
< Oracle process number: 41
< Unix process pid: 20225, image: oracle@stormking (TNS V1-V3)
---
> Oracle process number: 40
> Unix process pid: 20607, image: oracle@stormking (TNS V1-V3)
16,22c16,22
< *** 2018-02-11T14:23:23.367313-05:00
< *** SESSION ID:(84.19304) 2018-02-11T14:23:23.367313-05:00
< *** CLIENT ID:() 2018-02-11T14:23:23.367313-05:00
< *** SERVICE NAME:(SYS$USERS) 2018-02-11T14:23:23.367313-05:00
< *** MODULE NAME:(SQL*Plus) 2018-02-11T14:23:23.367313-05:00
< *** ACTION NAME:() 2018-02-11T14:23:23.367313-05:00
< *** CLIENT DRIVER:(SQL*PLUS) 2018-02-11T14:23:23.367313-05:00
---
> *** 2018-02-11T14:25:27.002021-05:00
> *** SESSION ID:(84.15782) 2018-02-11T14:25:27.002021-05:00
> *** CLIENT ID:() 2018-02-11T14:25:27.002021-05:00
> *** SERVICE NAME:(SYS$USERS) 2018-02-11T14:25:27.002021-05:00
> *** MODULE NAME:(SQL*Plus) 2018-02-11T14:25:27.002021-05:00
> *** ACTION NAME:() 2018-02-11T14:25:27.002021-05:00
> *** CLIENT DRIVER:(SQL*PLUS) 2018-02-11T14:25:27.002021-05:00
25c25
< *** TRACE CONTINUED FROM FILE /u01/app/oracle/diag/rdbms/db12201/db12201/trace/db12201_ora_20225_CAT.trc ***
---
> *** TRACE CONTINUED FROM FILE /u01/app/oracle/diag/rdbms/db12201/db12201/trace/db12201_ora_20607_CAT.trc ***
27c27
< Registered qb: SEL$1 0xf972ef68 (PARSER)
---
> Registered qb: SEL$1 0x6bfd6f68 (PARSER)
862c862
< Registered qb: SEL$1 0xf94dbf10 (COPY SEL$1)
---
> Registered qb: SEL$1 0x6c4d9b28 (COPY SEL$1)
876c876
< call(in-use=1736, alloc=16344), compile(in-use=130400, alloc=139528), execution(in-use=2936, alloc=4032)
---
> call(in-use=1736, alloc=16344), compile(in-use=130400, alloc=136232), execution(in-use=2936, alloc=4032)
879c879
< call(in-use=1736, alloc=16344), compile(in-use=126272, alloc=139528), execution(in-use=2936, alloc=4032)
---
> call(in-use=1736, alloc=16344), compile(in-use=126272, alloc=136232), execution(in-use=2936, alloc=4032)
883c883
< call(in-use=1736, alloc=16344), compile(in-use=126272, alloc=139528), execution(in-use=2936, alloc=4032)
---
> call(in-use=1736, alloc=16344), compile(in-use=126272, alloc=136232), execution(in-use=2936, alloc=4032)
888c888
< call(in-use=1736, alloc=16344), compile(in-use=126680, alloc=139528), execution(in-use=2936, alloc=4032)
---
> call(in-use=1736, alloc=16344), compile(in-use=126704, alloc=136232), execution(in-use=2936, alloc=4032)
891c891
< call(in-use=1736, alloc=16344), compile(in-use=126800, alloc=139528), execution(in-use=2936, alloc=4032)
---
> call(in-use=1736, alloc=16344), compile(in-use=126824, alloc=136232), execution(in-use=2936, alloc=4032)
897c897
< call(in-use=1736, alloc=16344), compile(in-use=126800, alloc=139528), execution(in-use=2936, alloc=4032)
---
> call(in-use=1736, alloc=16344), compile(in-use=126824, alloc=136232), execution(in-use=2936, alloc=4032)
901c901
< call(in-use=1736, alloc=16344), compile(in-use=127208, alloc=139528), execution(in-use=2936, alloc=4032)
---
> call(in-use=1736, alloc=16344), compile(in-use=127272, alloc=136232), execution(in-use=2936, alloc=4032)
904c904
< call(in-use=1736, alloc=16344), compile(in-use=127328, alloc=139528), execution(in-use=2936, alloc=4032)
---
> call(in-use=1736, alloc=16344), compile(in-use=127392, alloc=136232), execution(in-use=2936, alloc=4032)
911c911
< call(in-use=1736, alloc=16344), compile(in-use=127328, alloc=139528), execution(in-use=2936, alloc=4032)
---
> call(in-use=1736, alloc=16344), compile(in-use=127392, alloc=136232), execution(in-use=2936, alloc=4032)
916c916
< call(in-use=1736, alloc=16344), compile(in-use=127736, alloc=139528), execution(in-use=2936, alloc=4032)
---
> call(in-use=1736, alloc=16344), compile(in-use=127800, alloc=136232), execution(in-use=2936, alloc=4032)
919c919
< call(in-use=1736, alloc=16344), compile(in-use=127856, alloc=139528), execution(in-use=2936, alloc=4032)
---
> call(in-use=1736, alloc=16344), compile(in-use=127920, alloc=136232), execution(in-use=2936, alloc=4032)
927c927
< call(in-use=1736, alloc=16344), compile(in-use=127856, alloc=139528), execution(in-use=2936, alloc=4032)
---
> call(in-use=1736, alloc=16344), compile(in-use=127920, alloc=136232), execution(in-use=2936, alloc=4032)
930c930
< call(in-use=1736, alloc=16344), compile(in-use=128264, alloc=139528), execution(in-use=2936, alloc=4032)
---
> call(in-use=1736, alloc=16344), compile(in-use=128328, alloc=136232), execution(in-use=2936, alloc=4032)
933c933
< call(in-use=1736, alloc=16344), compile(in-use=128424, alloc=139528), execution(in-use=2936, alloc=4032)
---
> call(in-use=1736, alloc=16344), compile(in-use=128448, alloc=136232), execution(in-use=2936, alloc=4032)
949c949
< call(in-use=1736, alloc=16344), compile(in-use=129680, alloc=139528), execution(in-use=2936, alloc=4032)
---
> call(in-use=1736, alloc=16344), compile(in-use=129728, alloc=136232), execution(in-use=2936, alloc=4032)
951c951
< kkoqbc-subheap (create addr=0x7fddf9729810)
---
> kkoqbc-subheap (create addr=0x7f286bfd1810)
991a992,993
> Column (#3):
> NewDensity:0.000050, OldDensity:0.000050 BktCnt:10000.000000, PopBktCnt:9999.000000, PopValCnt:1, NDV:2
993c995,996
< AvgLen: 9 NDV: 2 Nulls: 0 Density: 0.500000
---
> AvgLen: 9 NDV: 2 Nulls: 0 Density: 0.000050
> Histogram: Freq #Bkts: 2 UncompBkts: 10000 EndPtVals: 2 ActualVal: no
1145c1148
< kkoqbc-subheap (delete addr=0x7fddf9729810, in-use=27312, alloc=32840)
---
> kkoqbc-subheap (delete addr=0x7f286bfd1810, in-use=27312, alloc=32840)
1148c1151
< call(in-use=9832, alloc=65656), compile(in-use=133208, alloc=139528), execution(in-use=2936, alloc=4032)
---
> call(in-use=9624, alloc=65656), compile(in-use=135032, alloc=136232), execution(in-use=2936, alloc=4032)
1154c1157
< call(in-use=9832, alloc=65656), compile(in-use=136304, alloc=139528), execution(in-use=2936, alloc=4032)
---
> call(in-use=9624, alloc=65656), compile(in-use=138208, alloc=140376), execution(in-use=2936, alloc=4032)
3080c3083
< SEL$1 0xf972ef68 (PARSER) [FINAL]
---
> SEL$1 0x6bfd6f68 (PARSER) [FINAL]
3083c3086
< call(in-use=12280, alloc=65656), compile(in-use=159680, alloc=218968), execution(in-use=7392, alloc=8088)
---
> call(in-use=12072, alloc=65656), compile(in-use=161568, alloc=223424), execution(in-use=7392, alloc=8088)

PL/SQL “constant”

By Brian Fitzgerald

The procedure-scoped PL/SQL “constant” can have a different value on each call. The documentation says as much: “Constants are initialized every time a block or subprogram is entered.”

[oracle@stormking db12201 plsql]$ cat runtime.constant.sql
create or replace procedure pr
(
 p in number,
 o out number
)
is
 c constant number := p;
begin
 o := c;
end pr;
/

variable o number

exec pr ( 1, :o )
print :o

exec pr ( 2, :o )
print :o
quit
[oracle@stormking db12201 plsql]$ sqlplus u/u @ runtime.constant.sql
SQL*Plus: Release 12.2.0.1.0 Production on Tue Feb 6 23:28:32 2018
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Last Successful login time: Tue Feb 06 2018 23:27:25 -05:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Procedure created.
PL/SQL procedure successfully completed.
O
----------
 1
PL/SQL procedure successfully completed.
O
----------
 2
Disconnected from Oracle Database 12c Enterprise Edition 
Release 12.2.0.1.0 - 64bit Production

 

Permanently deleting histograms

By Brian Fitzgerald

“A histogram is a special type of column statistic that provides more detailed information about the data distribution in a table column.” Without histograms, the optimizer could underestimate the cost of a table access for a nonselective predicate, and as a result, choose an inefficient index access path.

There are times when the DBA will want to delete histograms, including:

  • Issues with bind peeking.
  • Issues with adaptive cursor sharing.
  • Limit SYSAUX growth.
  • High number of child cursors resulting from adaptive cursor sharing while using a database resident connection pool (DRCP).
  • Plan instability caused by histograms.
  • Troubleshooting any of these.
  • Temporary workaround for any of these.
  • Permanent workaround for any of these.

You can delete a histogram, but that has obvious problems. In the first place, if you delete a histogram, you have not prevented it from coming back. You can lock statistics, but locking statistics has its drawbacks: you can only lock statistics at the table level, not the column level. If you lock a table’s stats, other critical statistics, such as num_rows, num_nulls, and num_distinct, will not get gathered, and the optimizer will not keep up with changes in the table population. Finally, to prevent the reappearance of histograms, some DBAs decide to disable automatic statistics gathering. I usually do not agree with disabling automatic statistics gathering.

A more flexible way to delete a histogram is to set the table preference method_opt to “for columns size 1 <column>” for the desired column or columns. This blog post demonstrates how the technique can be scripted.

Scripts for optimizer statistics

gather.table.stats.sql

Gather stats with the default options (no method_opt):

define ownname=&&1
define tabname=&&2
whenever oserror exit 1
@ sqlerx.sql
set verify off
set serveroutput on

declare
 l_fmt clob := q'{Gathered stats on %s.%s with default options.}';
begin
 dbms_stats.gather_table_stats (
 ownname => '&&ownname',
 tabname => '&&tabname'
 );
 out_pkg.printf(l_fmt, '&&ownname', '&&tabname' );
end;
/

Note, at this juncture, that every sql file that reads other sql files (with “@”) needs “whenever oserror exit 1”.

dba.tab.col.statistics.sql

Display column statistics and show whether there is a histogram.

define owner=&&1
define tablename=&&2
whenever oserror exit 1
@ l32k.p50k.sql
@ columnformat.sql
set verify off

select column_name, num_distinct, histogram, num_buckets
from dba_tab_col_statistics
where owner = '&&owner'
and table_name = '&&tablename'
order by column_name;

rpt.col.usage.sql

Oracle decides whether to gather a histogram based on the workload recorded in col_usage$ and on skew in the column data. You can run report_col_usage to check what workload Oracle has recorded. report_col_usage will not work unless you set long.

define ownname=&&1
define tabname=&&2
whenever oserror exit 1
@ sqlerx.sql
@ l32k.p50k.sql

set verify off

set long 2000000000
exec dbms_stats.flush_database_monitoring_info;
column rpt format a160
select dbms_stats.report_col_usage('&&ownname','&&tabname') rpt from dual;

prefer.delete.histo.sql

The delete-histogram script that is the subject of this blog post. The script does not actually delete the histogram; it simply arranges for it to be deleted on the next gather stats. To delete it right away, you can either call dbms_stats.delete_column_stats, or gather stats with the default options (no method_opt).

define ownname=&&1
define tabname=&&2
define colname=&&3
whenever oserror exit 1
@ sqlerx.sql
set verify off
set serveroutput on

declare
 l_fmt clob := q'{Preference for no histogram on %s.%s is set.}';
begin
 dbms_stats.set_table_prefs (
 ownname => '&&ownname',
 tabname => '&&tabname',
 pname => 'method_opt',
 pvalue => 'for columns size 1 &&colname'
 );
 out_pkg.printf(l_fmt, '&&ownname', '&&tabname' );
end;
/

dba.tab.stat.prefs.sql

Use this script for reporting table stat prefs

define owner=&&1
whenever oserror exit 1
@ sqlerx.sql
@ l32k.p50k.sql
@ columnformat.sql
set verify off

column preference_name format a30
column preference_value format a50

select p.owner,
 p.table_name,
 p.preference_name,
 p.preference_value
from dba_tab_stat_prefs p
where owner = '&&owner'
order by
p.owner,
p.table_name,
p.preference_name;

Script in SQLPATH

Scripts mentioned in this blog post that I keep in a common directory for reuse.

sqlerx.sql

whenever sqlerror exit 1

l32k.p50k.sql

set linesize 32767
set pagesize 50000
set trimspool on

columnformat.sql

A very long, and ever growing, file of catalog column names.

column histogram format a20
column account_status format a20
column column_name format a30
column constraint_name format a30
column consumer_group format a30
etc.

The demo script

demo.delete.histo.sql

whenever oserror exit 1
@ l32k.p50k.sql
@ columnformat.sql
set verify off

drop table t purge;
drop sequence s;
@ sqlerx.sql
create table t
(
k number,
g varchar2(1)
);
create sequence s;

prompt Generate skewed data in column G
insert into T (k, g ) values ( s.nextval, 'M' );
insert into T (k, g ) values ( s.nextval, 'F' );
insert into T (k, g ) values ( s.nextval, 'M' );

commit;

@ gather.table.stats.sql U T
@ dba.tab.col.statistics.sql

prompt generating workload on column G
select count(*) from t where g = 'Z';

prompt checking col_usage$
@ rpt.col.usage.sql U T

@ gather.table.stats.sql U T
@ dba.tab.col.statistics.sql
prompt Yes, there is a histogram.

@ prefer.delete.histo.sql U T G

@ gather.table.stats.sql U T

prompt check existence of histogram on column G
@ dba.tab.col.statistics.sql
prompt The histogram is gone!

Demo script output

[oracle@stormking db12201 stats]$ sqlplus u/u @ demo.delete.histo.sql
SQL*Plus: Release 12.2.0.1.0 Production on Sun Feb 4 20:20:33 2018
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Last Successful login time: Sun Feb 04 2018 20:16:14 -05:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Table dropped.
Sequence dropped.
Table created.
Sequence created.
Generate skewed data in column G
1 row created.
1 row created.
1 row created.
Commit complete.
Gathered stats on U.T with default options.
PL/SQL procedure successfully completed.
COLUMN_NAME NUM_DISTINCT HISTOGRAM NUM_BUCKETS
------------------------------ ------------ -------------------- -----------
G 2 NONE 1
K 3 NONE 1
generating workload on column G
COUNT(*)
----------
 0
checking col_usage$
PL/SQL procedure successfully completed.
RPT
---------------------------------------------------------------------------
LEGEND:
.......
EQ : Used in single table EQuality predicate
RANGE : Used in single table RANGE predicate
LIKE : Used in single table LIKE predicate
NULL : Used in single table is (not) NULL predicate
EQ_JOIN : Used in EQuality JOIN predicate
NONEQ_JOIN : Used in NON EQuality JOIN predicate
FILTER : Used in single table FILTER predicate
JOIN : Used in JOIN predicate
GROUP_BY : Used in GROUP BY expression
...........................................................................
###########################################################################
COLUMN USAGE REPORT FOR U.T
...........................
1. G : EQ
###########################################################################
Gathered stats on U.T with default options.
PL/SQL procedure successfully completed.
COLUMN_NAME NUM_DISTINCT HISTOGRAM NUM_BUCKETS
------------------------------ ------------ -------------------- -----------
G 2 FREQUENCY 2
K 3 NONE 1
Yes, there is a histogram.
Preference for no histogram on U.T is set.
PL/SQL procedure successfully completed.
Gathered stats on U.T with default options.
PL/SQL procedure successfully completed.
check existence of histogram on column G
COLUMN_NAME NUM_DISTINCT HISTOGRAM NUM_BUCKETS
------------------------------ ------------ -------------------- ----------
G 2 NONE 1
K 3 NONE 1
The histogram is gone!
Disconnected from Oracle Database 12c Enterprise Edition 
Release 12.2.0.1.0 - 64bit Production

I have tested this script on 11.2, 12.1, and 12.2.

To summarize, I have deleted the histogram on table U.T, column G. I have done this not directly, by issuing delete_column_stats, but indirectly, by issuing set_table_prefs.

Notes

  1. Gather stats option “force” overrides locked statistics but does not override table preference.
SQL> exec dbms_stats.gather_table_stats(ownname=>'U',tabname=>'T',
force=>true)
PL/SQL procedure successfully completed.
SQL> @ dba.tab.col.statistics.sql U T
COLUMN_NAME NUM_DISTINCT HISTOGRAM NUM_BUCKETS
------------------------------ ------------ -------------------- -----------
G 2 NONE 1
K 3 NONE 1

The histogram remains deleted even though option “force” was used.

2. Gathering stats with non-default options can re-create the histogram.

SQL> exec dbms_stats.gather_table_stats(ownname=>'U',tabname=>'T',
method_opt=>'for all columns size 254')
PL/SQL procedure successfully completed.
SQL> @ dba.tab.col.statistics.sql U T
COLUMN_NAME NUM_DISTINCT HISTOGRAM NUM_BUCKETS
------------------------------ ------------ -------------------- -----------
G 2 FREQUENCY 2
K 3 FREQUENCY 3

I want to point out a bad practice here. Routinely gathering histograms on all columns is inadvisable. It is better to start by gathering stats with the default options, and then let Oracle decide, based on col_usage$, whether to gather a histogram.
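Oracle records column usage in sys.col_usage$. A hedged example of inspecting it directly (this requires access to the SYS-owned dictionary tables, and the column names shown are assumptions based on 11g/12c):

-- flush monitoring info so recent usage is persisted to col_usage$
exec dbms_stats.flush_database_monitoring_info

select o.name table_name,
 c.name column_name,
 u.equality_preds,
 u.range_preds
from sys.col_usage$ u, sys.obj$ o, sys.col$ c
where u.obj# = o.obj#
and u.obj# = c.obj#
and u.intcol# = c.intcol#
and o.name = 'T';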

3. The earliest reference I found on this technique of using set_table_prefs to make sure that histograms stay deleted is a 2011 article in Oracle’s Optimizer Blog.

4. Whether table T needs a histogram on column G is a matter unto itself. The answer might be different depending on whether G, M, and F represent “gender”, “male”, and “female” or “granularity”, “medium” and “fine”, whether the distribution of M and F is skewed, whether G appears as a predicate in the application query, whether there is an index on G, and whether a histogram is causing query plan stability issues, or other issues.

5. Others have demonstrated data skew using a high number of rows. I demonstrate skew with only 3 rows.

6. Likewise, others have demonstrated workload by running a high number of statements. I run only one query. Note that a query for a nonexistent value, g = ‘Z’, serves the purpose.

Multiple columns per table

You cannot use script prefer.delete.histo.sql to delete the histogram on more than one column per table. If you try, you get this result.

SQL> @ prefer.delete.histo.sql U T K
Preference for no histogram on U.T is set.
PL/SQL procedure successfully completed.
SQL> @ dba.tab.stat.prefs.sql U
OWNER TABLE_NAME PREFERENCE_NAME PREFERENCE_VALUE
------------------------------ ------------------------------ ------------
U T METHOD_OPT FOR COLUMNS SIZE 1 K

You must state the prefs in a single command, such as:

prefer.delete.histo.sq2

begin
 dbms_stats.set_table_prefs (
 ownname => 'U',
 tabname => 'T',
 pname => 'method_opt',
 pvalue => 'for columns size 1 G size 1 K'
 );
end;
/

Result:

SQL> @ prefer.delete.histo.sq2
PL/SQL procedure successfully completed.
SQL> @ dba.tab.stat.prefs.sql U
OWNER TABLE_NAME PREFERENCE_NAME PREFERENCE_VALUE
------------------------------ ------------------------------ -------------
U T METHOD_OPT FOR COLUMNS SIZE 1 G SIZE 1 K

Use of table prefs in a DBA organization

In a DBA group, members have different opinions based on different experiences. Some DBAs want to lock table stats. Others want to disable the stats gathering job. Sometimes non-DBAs, like developers and data center managers, get into the act with their own opinions.

Using set_table_prefs is offered here as an alternative. dba.tab.stat.prefs.sql can be used to report on tables that have the preference for no histogram set.
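The body of dba.tab.stat.prefs.sql is not shown here; a query along these lines, against the documented dba_tab_stat_prefs view, would produce the report:

select owner, table_name, preference_name, preference_value
from dba_tab_stat_prefs
where preference_name = 'METHOD_OPT'
order by owner, table_name;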

The best thing to do is come up with a consensus that is based on principles, current recommendations from Oracle, and rigorous testing.

Move LOB results in unusable indexes

By Brian Fitzgerald

It is well known that moving a table segment causes indexes to become unusable. It is less well known that moving only an LOB also causes indexes to become unusable.

Moving a table segment causes indexes to become unusable because indexes point to the table’s physical location. After a table move, the index still points to the old table location, so the index is invalid. Oracle marks the index as unusable.

alter table t
move
tablespace ts0002

If a table has an LOB column, you can move the table and the LOB in a single command. Again, the move results in unusable indexes.

alter table t
move
tablespace ts0002
lob ( l ) store as ( tablespace ts0002 )

If only the LOB is moved, not the table, indexes are also left unusable. This has surprised some users, but it is the expected behavior. The behavior is documented.

alter table t
move
lob ( l ) store as ( tablespace ts0002 )

Further investigation shows that when an LOB is moved, the table segment is also “moved”. The table data_object_id and the table segment header block change, and every rowid in the table segment changes. Indexes on the table become unusable.

Unusable indexes are unintuitive because the command syntax is to move an LOB column, and the column is not indexed. Why should an index become unusable if the column that is moved is not indexed?

A Google search on “move lob unusable index” turns up questions and answers on this subject, and leads to the documentation and to Note 1228324.1, which state that moving an LOB and leaving indexes unusable is the expected behavior.

Point taken.

But why?

Specifically, why does moving an LOB result in “moving” the table segment?

The reason is clear when you consider that whether the move succeeds or fails, the operation must leave the schema in a consistent state.

Moving an LOB requires updating the LOB locator in every row. It’s similar to an update of every row. Generally, updates can be done conventionally or via direct path. The choice is an implementation decision. Direct path can be more efficient and less dependent on instance resources such as memory and undo. Alter table is DDL, and direct path is commonly used in DDL.

A DDL operation must succeed or fail atomically. We don’t want to end up half way where some LOBs are migrated and some not. The LOBs are in a single segment, which is the smallest unit of administration. You can’t move part of a segment. The entire move must succeed or fail. This is implemented by making a copy of the LOB segment, and a copy of the table segment. When the copying is complete, the LOB and the table are switched to point to the new copies. After a successful LOB move, the LOB has been copied, the table has been copied, and the new LOB locators point to the copied LOB. If the alter table is interrupted, the copies are discarded and the state is the same as it was to start with. After a failed LOB move, the original LOB locators simply point to the original LOB.

Note that moving an LOB will require not only free space for the new LOB, but also space for a copy of the table.

What follows is a demo of moving an LOB only. There are a few points to notice:

  • Table T LOB column L moves from tablespace TS0001 to TS0002
  • Table T tablespace remains TS0001.
  • Table T data object id changes from 74914 to 74918. The segment in fact “moved”.
  • Table T segment header block changes from 130 to 178. The segment “moved”.
  • Because the segment moved, index I is left unusable
  • tkprof shows direct path read and write, supporting the earlier comment.

Conclusions:

  • Whether you move the table, the LOB, or both, indexes will become unusable.
  • Unusable indexes are a result of a direct path LOB move implementation.
  • Space for the table and the LOB moves have to be planned for.
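One practical consequence of the conclusions above: after any of these moves, the indexes must be rebuilt before the optimizer can use them again. A sketch, assuming the table and index names from the demo that follows:

select index_name, status
from user_indexes
where table_name = 'T';

alter index i rebuild;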
conn u/u
@ columnformat.sql
set linesize 32767
set trimspool on
column l format a1

drop table t purge;
create table t
(
c varchar2(1),
l clob
)
lob ( l )
store as ( tablespace ts0001 disable storage in row )
tablespace ts0001
;

insert into t ( c, l )
select 'A', 'Z'
from dual
connect by level <= 1000
;
commit;
create index i on t ( c );

select df.file_id,
df.tablespace_name
from dba_data_files df
where df.tablespace_name in ( 'TS0001', 'TS0002' );

@ trace.10046.on.sql

@ sel.t.info.sql

prompt moving the LOB only
alter table t
move
lob ( l ) store as ( tablespace ts0002 )
;

@ sel.t.info.sql
@ trace.10046.off.sql
@ tkprof.sql

SQL> @ movelob.sql
Connected.

Table dropped.

Table created.

1000 rows created.

Commit complete.

Index created.

FILE_ID TABLESPACE_NAME
---------- ------------------------------
 8 TS0001
 9 TS0002

Session altered.

Session altered.

OWNER SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME HEADER_BLOCK EXTENTS
------------------------------ ------------------------------ ------------------ ------------------------------ ------------ ----------
U T TABLE TS0001 130 2

OWNER OBJECT_NAME OBJECT_TYPE DATA_OBJECT_ID
------------------------------ ------------------------------ ------------------------------ --------------
U T TABLE 74914

OWNER TABLE_NAME COLUMN_NAME TABLESPACE_NAME
------------------------------ ------------------------------ ------------------------------ ------------------------------
U T L TS0001

TABLE_NAME INDEX_NAME STATUS
------------------------------ ------------------------------ --------
T I VALID

moving the LOB only

Table altered.

OWNER SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME HEADER_BLOCK EXTENTS
------------------------------ ------------------------------ ------------------ ------------------------------ ------------ ----------
U T TABLE TS0001 178 2

OWNER OBJECT_NAME OBJECT_TYPE DATA_OBJECT_ID
------------------------------ ------------------------------ ------------------------------ --------------
U T TABLE 74918

OWNER TABLE_NAME COLUMN_NAME TABLESPACE_NAME
------------------------------ ------------------------------ ------------------------------ ------------------------------
U T L TS0002

TABLE_NAME INDEX_NAME STATUS
------------------------------ ------------------------------ --------
T I UNUSABLE

Session altered.

Session altered.

TRACEFILENAME
----------------------------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/db12201/db12201/trace/db12201_ora_15552.trc

BASE
-----------------
db12201_ora_15552

TKPROF: Release 12.2.0.1.0 - Development on Sun Nov 19 18:20:59 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.

created file:
db12201_ora_15552.tkp

********************************************************************************

alter table t
move
lob ( l ) store as ( tablespace ts0002 )

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 2 1 0
Execute 1 0.58 13.03 1004 87184 61243 1000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.59 13.03 1004 87186 61244 1000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 110
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
 0 0 0 LOAD AS SELECT T (cr=87182 pr=1000 pw=1007 time=12586318 us starts=1)
 1000 1000 1000 TABLE ACCESS FULL T (cr=16 pr=0 pw=0 time=0 us starts=1 cost=5 size=237000 card=1000)

Elapsed times include waiting on following events:
 Event waited on Times Max. Wait Total Waited
 ---------------------------------------- Waited ---------- ------------
 PGA memory operation 3 0.00 0.00
 direct path read 999 0.88 11.40
 local write wait 8 0.00 0.00
 acknowledge over PGA limit 1 0.00 0.00
 direct path write 1 0.01 0.01
 enq: RO - fast object reuse 4 0.06 0.12
 db file sequential read 4 0.15 0.15
 enq: CR - block range reuse ckpt 4 0.14 0.14
 log file sync 1 0.00 0.00
 SQL*Net message to client 1 0.00 0.00
 SQL*Net message from client 1 0.00 0.00
********************************************************************************

SQL*Net break/reset to client

By Brian Fitzgerald

Introduction

This is a demonstration of event SQL*Net break/reset to client. This demo will assist in understanding the origin of SQL*Net break/reset to client events on your database. The application details leading to SQL*Net break/reset to client may not be directly accessible to the DBA.

A session waits on SQL*Net break/reset to client when an error happens and the Oracle instance is trying to notify the client. Until the client acknowledges, the session waits on event SQL*Net break/reset to client. SQL*Net break/reset to client waits appear in the OEM display under the Application wait class.

To simulate the scenario, I run a sqlplus script that will sleep and eventually get an error. Before the error is reached, I suspend the client, so that it cannot receive any message from the instance. As a result, the session waits on SQL*Net break/reset to client.

Event SQL*Net break/reset to client is similar to SQL*Net message to client and SQL*Net more data to client in the sense that Oracle has some information to communicate to the client. The waiting will persist if the client is preoccupied. Whereas “break/reset” is in the Application wait class, “message” and “more data” waits are in the Network wait class.
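The wait class of each of these events can be confirmed in the v$event_name view:

select name, wait_class
from v$event_name
where name in ( 'SQL*Net break/reset to client',
 'SQL*Net message to client',
 'SQL*Net more data to client' );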

In summary, SQL*Net break/reset to client means that an error occurred and Oracle is trying to notify the client.

Demo

For this demonstration, I will open three windows:

  1. Run a PL/SQL script
  2. Suspend the batch, sleep, and resume the batch
  3. Check the event in v$session

Script

Here is the script used for the demo.

$ cat sqlnet.break.reset.sql
@ conn.pdba.u.sql
set verify off

declare
 l_rslt number;
begin
 dbms_lock.sleep(&&1);
 l_rslt := 0/0;
end;
/
quit

Oracle will sleep, reach division by zero, and try to notify the client about the error.

Window 1:

$ sqlplus /nolog @ sqlnet.break.reset.sql 20 & fg
[1] 30878
sqlplus /nolog @ sqlnet.break.reset.sql 20

SQL*Plus: Release 12.2.0.1.0 Production on Sat Nov 4 16:23:32 2017

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected.

Note that the process id is 30878.

Window 2:

Here we want to simulate a client process that should be waiting for the Oracle server, but is preoccupied with something else. We do that by suspending sqlplus for 120 seconds.

$ kill -STOP 30878 ; sleep 120 ; kill -CONT 30878

Window 1:

The shell reports that sqlplus is stopped:

[1]+ Stopped sqlplus /nolog @ sqlnet.break.reset.sql 20

Window 3:

In fact, Oracle has tried to notify the application (sqlplus), and is waiting on a response that will not come until the client wakes up. That is, the session is waiting on SQL*Net break/reset to client.

SQL> select sid, sql_id, event, p2, p2text 
from v$session 
where service_name = 'u' 
and sid != sys_context('userenv','sid');

 SID SQL_ID        EVENT                           P2 P2TEXT
---- ------------- ------------------------------ --- ------
 63                SQL*Net break/reset to client    0 break?

The text for p2, in this case, is “break?”. Nonzero means break and 0 means reset, so this case is a reset, which is the more common case.

sql_id is null. The session is not running a statement now; it ran one in the past. An error occurred, and Oracle is trying to notify the client. In most cases of SQL*Net break/reset to client, sql_id will be null.

At this point, I have staged a scenario with SQL*Net break/reset to client.
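To quantify how long the session has been stuck, v$session_event accumulates time per event. For example, for the session above (sid 63):

select event, total_waits, time_waited_micro / 1e6 seconds
from v$session_event
where sid = 63
and event = 'SQL*Net break/reset to client';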

Window 1:

After 120 seconds, window 2 issues kill -CONT 30878. sqlplus wakes up. The output appears:

$ declare
*
ERROR at line 1:
ORA-01476: divisor is equal to zero
ORA-06512: at line 5

Disconnected from Oracle Database 12c Enterprise Edition 
Release 12.2.0.1.0 - 64bit Production

Monitoring

Event SQL*Net break/reset to client appears in OEM in the Application class. Refer to the bright red peak, at right.

[image: active]

The bright red region is a hyperlink. You can drill in to the Application class by clicking on it. The display shows the wait breakdown, which shows one session waiting on SQL*Net break/reset to client.

[image: appl]

Notice that the Top SQL section is empty. No statement is active. The session appears as a hyperlink, so you can drill down by clicking on Session ID 49.

[image: ses]

Impact

One session waiting on SQL*Net break/reset to client will probably have no adverse impact on the instance, or on other sessions. If the session is in a critical batch, then resolving this wait issue will be a high priority. In some cases, SQL*Net break/reset to client appears as a result of manual operation of various tools.

The Application class

We often associate the Application class with blocking, such as the row-level locking that leads to enq: TX – row lock contention, or the enq: TM – contention waits associated with select for update or missing foreign key indexes. Those waits are associated with statements, and with tables or indexes. SQL*Net break/reset to client, although it is in the Application class, is not associated with a SQL statement or with locking. However, the name “Application” class is a clue that resolving this wait will require coordination between the DBA and others who develop, manage, or use the application.

Root Causes in Production

In a serious production application, waits on SQL*Net break/reset to client could be a symptom of an application design issue. It could be that the application launches a query and goes off down some other code path while the query is running. When an error appears in the Oracle session, the application is not attending to it right away. In other words, there is a lack of coordination between the client and the Oracle instance. The reason for this could reside in the application design and the database driver, including version and patch status.

Note also that the name SQL*Net break/reset to client does not necessarily mean that there is a network problem. As you can see, resolution of this issue did not require assistance from the network support team, or the operating system administrator.

When investigating issues such as this one, it is helpful to talk to other people who know about other aspects of the application, and engage in a sort of “brainstorming session”.

If you understand the SQL*Net break/reset to client event, you might not be able to resolve it on your own, but you will have an opportunity to lead an investigation that will eventually yield a solution.

Scripts for trace 10053

By Brian Fitzgerald

Trace 10053 tells you “how” the optimizer selected its plan — the reasons for the choices it has made. The main fact to remember when you are collecting trace 10053 is that you must force a hard parse each time you run your statement. Although forcing a hard parse can be done by manually varying the statement’s SQL text, this blog post offers a way to automate some of the steps, in case you need to quickly make a high number of trace 10053 runs.

For the sake of this blog article, we’ll use trace 10053 to examine the effect of statistics on the execution plan.

Preliminary: To run trace 10053, the user must be granted alter session.

SQL> grant alter session to scott;

Grant succeeded.

Trace 10053 with manual steps

In order to analyze the effect of gathering statistics on the execution plan, we’ll run a statement twice, once without statistics and once with statistics. We’ll cycle the pluggable database to guarantee at least one hard parse to start with. Here is file demo.softparse.10053.sql:

set pagesize 50000
conn / as sysdba
alter pluggable database pdba close;
alter pluggable database pdba open;
@ conn.pdba.scott.sql

alter session set tracefile_identifier = a;
exec dbms_stats.delete_schema_stats(user);
alter session set events='10053 trace name context forever, level 1';
select e.ename, e.deptno
from emp e
;
alter session set events '10053 trace name context off';
select value tracefilename
from v$diag_info
where name = 'Default Trace File';

alter session set tracefile_identifier = b;
exec dbms_stats.gather_schema_stats(user);
alter session set events='10053 trace name context forever, level 1';
select e.ename, e.deptno
from emp e
;
alter session set events '10053 trace name context off';
select value tracefilename
from v$diag_info
where name = 'Default Trace File';

The output:

[oracle@stormking cdb12201 sql]$ sqlplus /nolog @ demo.softparse.10053.sql
SQL*Plus: Release 12.2.0.1.0 Production on Mon Aug 7 21:25:35 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected.
Pluggable database altered.
Pluggable database altered.
Connected.
Session altered.
PL/SQL procedure successfully completed.
Session altered.
ENAME DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_16771_A.trc
Session altered.
PL/SQL procedure successfully completed.
Session altered.
ENAME DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_16771_B.trc
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
[oracle@stormking cdb12201 sql]$ ls -l /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_16771_{A,B}.trc
ls: cannot access /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_16771_B.trc: No such file or directory
-rw-r----- 1 oracle asmadmin 99753 Aug 7 21:25 /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_16771_A.trc

Whoops, where is my second trace file? The second trace file is not there because I did not force a hard parse on the second statement execution. The SQL text was exactly the same in both runs:

select e.ename, e.deptno
from emp e
;

We failed to collect a trace 10053 for the second execution.

I could work around this by manually varying the white space or the case, or by adding a comment. There are other ways to force a hard parse that sometimes work, such as issuing a grant or flushing the statement from the shared pool, if you have the privilege.
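For example, a single cursor can be flushed with dbms_shared_pool.purge, given execute privilege on the package. This is a sketch, not part of the scripts below; the address and hash value come from v$sqlarea:

select address, hash_value
from v$sqlarea
where sql_text like 'select e.ename%';

exec dbms_shared_pool.purge ( '&address,&hash_value', 'C' )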

In the next example, the statements are the same, except that SELECT is in caps in the first, and FROM is in caps in the second. These SQL text variations were made manually with an editor. Here is how demo.hardparse.10053.sql differs from the original:

[oracle@stormking cdb12201 sql]$ diff demo.softparse.10053.sql demo.hardparse.10053.sql
10c10
< select e.ename, e.deptno
---
> SELECT e.ename, e.deptno
22c22
< from emp e
---
> FROM emp e

The output :

[oracle@stormking cdb12201 sql]$ sqlplus /nolog @ demo.hardparse.10053.sql
SQL*Plus: Release 12.2.0.1.0 Production on Mon Aug 7 21:34:36 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected.
Pluggable database altered.
Pluggable database altered.
Connected.
Session altered.
PL/SQL procedure successfully completed.
Session altered.
ENAME DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_A.trc
Session altered.
PL/SQL procedure successfully completed.
Session altered.
ENAME DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_B.trc
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
[oracle@stormking cdb12201 sql]$ ls -l /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_{A,B}.trc
-rw-r----- 1 oracle asmadmin 99909 Aug 7 21:35 /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_A.trc
-rw-r----- 1 oracle asmadmin 98628 Aug 7 21:35 /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_B.trc

Notice that there are two 10053 trace files this time, one for each statement. The statements are unique, so each one was hard parsed.

Notice also the trace file continuation message from file A to file B:

[oracle@stormking cdb12201 sql]$ tail -2 /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_A.trc
*** TRACE CONTINUES IN FILE /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_18928_B.trc ***

Issuing alter session set tracefile_identifier = b forced a switch to a new trace file.

That’s the basic idea, force a hard parse and switch trace file each execution, but I want to automate the process because in the future, I want to make a high number of trace 10053 runs.

Scripts for trace 10053

My demo queries are in directory /home/oracle/brian/sql/sql, but I plan to reuse my trace scripts in the future. I set SQLPATH, and put the reusable scripts there.

[oracle@stormking cdb12201 sqlplus]$ echo $SQLPATH
/home/oracle/brian/sql/sqlplus

Here are the scripts:

conn.pdba.scott.sql

A one-liner for connecting to the schema. I will reuse this script many times.

conn scott/tiger@stormking:1521/pdba

trace.10053.on.sql

Here is a one-line script for starting the trace 10053. Using this script is a lot easier than remembering the exact syntax, and it makes for an uncluttered main script.

alter session set events='10053 trace name context forever, level 1';

trace.10053.off.sql

Here is the script for stopping trace 10053. If you do not explicitly turn off trace, you will not get your trace file.

alter session set events '10053 trace name context off';

tracefilename.sql

This script displays the trace file name. It’s a better approach than searching for the file manually.

set echo off
column tracefilename format a100
set trimspool on
select value tracefilename
from v$diag_info
where name = 'Default Trace File';

randident.sql

I want to use a random identifier in two places. I want a random trace file identifier, and I want a random identifier somewhere in my statement.

-- set sqlplus substitution variable randident
-- to a random character string
set termout off
column randident new_value randident format a6
select dbms_random.string('u',6) randident from dual;
set termout on

randtfi.sql

This script sets the tracefile identifier to a random string. Notice that each time you call randtfi.sql, Oracle switches to a new trace file. That’s handy if you want to break up the trace for a single sqlplus session into multiple parts.

-- Change to a new, randomly named trace file on each call
@ randident.sql
alter session set tracefile_identifier = &&randident;

This is my collection of reusable scripts. They go in /home/oracle/brian/sql/sqlplus

Demonstrating the scripts

The demonstration is a throwaway example. The script does not belong in SQLPATH. I have put the script in /home/oracle/brian/sql/sql. Here is demo.trace.10053.sql:

set pagesize 50000
@ conn.pdba.scott.sql

exec dbms_stats.delete_schema_stats(user);
@ randtfi.sql
@ trace.10053.on.sql
@ randident.sql
set echo on
set verify on
select e.ename &&randident, e.deptno
from emp e
;
set echo off
@ trace.10053.off.sql
@ tracefilename.sql

exec dbms_stats.gather_schema_stats(user);
@ randtfi.sql
@ trace.10053.on.sql
@ randident.sql
set echo on
set verify on
select e.ename &&randident, e.deptno
from emp e
;
set echo off
@ trace.10053.off.sql
@ tracefilename.sql

Running demo.trace.10053.sql:

[oracle@stormking cdb12201 sql]$ sqlplus /nolog @ demo.trace.10053.sql
SQL*Plus: Release 12.2.0.1.0 Production on Mon Aug 7 21:50:53 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected.
PL/SQL procedure successfully completed.
Session altered.
Session altered.
SQL> set verify on
SQL> select e.ename &&randident, e.deptno
 2 from emp e
 3 ;
old 1: select e.ename &&randident, e.deptno
new 1: select e.ename AWQVDZ, e.deptno
AWQVDZ DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
SQL> set echo off
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_RYNDLO.trc
PL/SQL procedure successfully completed.
Session altered.
Session altered.
SQL> set verify on
SQL> select e.ename &&randident, e.deptno
 2 from emp e
 3 ;
old 1: select e.ename &&randident, e.deptno
new 1: select e.ename ZRQGGE, e.deptno
ZRQGGE DEPTNO
---------- ----------
SMITH 20
ALLEN 30
WARD 30
JONES 20
MARTIN 30
BLAKE 30
CLARK 10
SCOTT 20
KING 10
TURNER 30
ADAMS 20
JAMES 30
FORD 20
MILLER 10
14 rows selected.
SQL> set echo off
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_XGLYME.trc
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
[oracle@stormking cdb12201 sql]$ grep '^sql_id' /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_RYNDLO.trc /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_XGLYME.trc
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_RYNDLO.trc:sql_id=6z7nrbdts160v plan_hash_value=-338806364 problem_type=3
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_XGLYME.trc:sql_id=5ub8qv1cugvr4 plan_hash_value=-338806364 problem_type=3

Again, there are two trace files. The first trace file has a continuation message on to the second:

[oracle@stormking cdb12201 sql]$ tail -2 /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_RYNDLO.trc
*** TRACE CONTINUES IN FILE /u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_22121_XGLYME.trc ***

I have varied the output column name:

new 1: select e.ename AWQVDZ, e.deptno

vs.

new 1: select e.ename ZRQGGE, e.deptno

Because the SQL is different each time, a hard parse is forced each execution. I could have also forced a hard parse by placing the random string in a comment:

/* &&randident */

Now, on to the analysis and comparison of the two trace files.

In the first execution, statistics had been deleted. The trace 10053 was collected in file cdb12201_ora_22121_RYNDLO.trc. The plan is:

============
Plan Table
============
-------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 3 | |
| 1 | TABLE ACCESS FULL | EMP | 14 | 280 | 3 | 00:00:01 |
-------------------------------------+-----------------------------------+

Trace cdb12201_ora_22121_XGLYME.trc, with statistics, shows this plan:

============
Plan Table
============
-------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 3 | |
| 1 | TABLE ACCESS FULL | EMP | 14 | 126 | 3 | 00:00:01 |
-------------------------------------+-----------------------------------+

The plan hash value in both cases is -338806364. The rows and cost estimates are the same, and the bytes estimate is slightly different.

Trace cdb12201_ora_22121_RYNDLO.trc contains a dynamic sampling analysis:

*** 2017-08-07T21:50:53.660117-04:00 (PDBA(3))
** Performing dynamic sampling initial checks. **
** Dynamic sampling initial checks returning TRUE (level = 2).
** Dynamic sampling updated index stats.: PK_EMP, blocks=1
** Dynamic sampling updated table stats.: blocks=5

*** 2017-08-07T21:50:53.660117-04:00 (PDBA(3))
** Generated dynamic sampling query:
 query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE RESULT_CACHE(SNAPSHOT=3600) opt_param('parallel_execution_enabled', 'false') NO_PARALLEL(SAMPLESUB) NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM (SELECT /*+ NO_PARALLEL("E") FULL("E") NO_PARALLEL_INDEX("E") */ 1 AS C1, 1 AS C2 FROM "SCOTT"."EMP" "E") SAMPLESUB

*** 2017-08-07T21:50:53.660117-04:00 (PDBA(3))
** Executed dynamic sampling query:
 level : 2
 sample pct. : 100.000000
 actual sample size : 14
 filtered sample card. : 14
 orig. card. : 409
 block cnt. table stat. : 5
 block cnt. for sampling: 5
 max. sample block cnt. : 64
 sample block cnt. : 5
 min. sel. est. : -1.00000000

In the second execution, statistics had been collected. The trace is cdb12201_ora_22121_XGLYME.trc. No dynamic sampling appears.

The analysis of the bytes difference follows. For the case of gathered stats, the row length is:

SQL> select sum(avg_col_len) from user_tab_columns 
where table_name = 'EMP' 
and column_name in ('ENAME','DEPTNO');

SUM(AVG_COL_LEN)
----------------
 9

9 bytes per row x 14 rows = 126 bytes, which is the bytes value at the TABLE ACCESS FULL step of the plan. For deleted stats, avg_col_len is null, so the dynamic sampling query finds only the row count. The simplified dynamic sampling query is:

SQL> SELECT NVL(SUM(C1),0) FROM (SELECT /*+ FULL("E") */ 1 AS C1 FROM "SCOTT"."EMP" "E");
NVL(SUM(C1),0)
--------------
 14

Dynamic sampling tells us nothing about the number of bytes per row. The row length would have to come from the catalog.

SQL> select column_name, data_length, data_type from user_tab_columns where table_name = 'EMP' and column_name in ('ENAME','DEPTNO');

COLUMN_NAME DATA_LENGTH DATA_TYPE
------------------------------ ----------- ------------------------------
ENAME 10 VARCHAR2
DEPTNO 22 NUMBER

Allowing two number digits per byte, a 22-digit NUMBER takes about 11 bytes; with 10 bytes for ENAME, each row would return about 21 bytes. (That’s how I figure it.) The optimizer estimated 20 bytes per row, or 280 bytes.
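The arithmetic behind both bytes figures can be checked directly. This is a sketch of my reading of the estimates; the two-digits-per-byte packing for NUMBER, and the optimizer rounding down to 20 bytes per row, are assumptions:

```shell
# Check the two bytes figures from the plans (rows = 14 in both)
rows=14

# With gathered stats: avg_col_len(ENAME) + avg_col_len(DEPTNO) = 9 bytes/row
echo $(( 9 * rows ))             # 126, the bytes value in the second plan

# Without stats, fall back to declared lengths: ENAME VARCHAR2(10) is 10
# bytes; a NUMBER packs two digits per byte, so 22 digits round up to 11 bytes
echo $(( 10 + (22 + 1) / 2 ))    # 21 bytes per row, by my figuring
echo $(( 20 * rows ))            # 280, what the optimizer actually reported
```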

In conclusion, whether dynamic stats or gathered stats were used, the resulting plan was the same. The estimates of cardinality, CPU, and time were the same, but the estimates of bytes returned differed. Trace 10053 helps to explain the similarities and the differences.

This simple example demonstrated using the trace 10053 scripts to find the reasons behind the optimizer’s predicted costs.

Conclusion

This blog post made the following points about the trace 10053 technique:

  • Trace 10053 can be used to find out why the optimizer made its decisions.
  • A user must have been granted alter session before running trace 10053.
  • To get trace 10053 output, it is necessary to force a hard parse.
  • To get a trace file split into pieces, change the trace file identifier.
  • To get a random identifier in sqlplus, use dbms_random and column new_value.
  • One way to force a hard parse is to vary the SQL with a random identifier.
  • Force a trace file switch by using a random tracefile identifier.

The scripts in this blog post let you collect trace 10053 files on a SQL statement efficiently and systematically. The collected files can then be reviewed, analyzed, and compared.
Scripts for trace 10046 and tkprof

By Brian Fitzgerald

Trace 10046 collects session diagnostic information such as cursors, waits, and binds. It can also be used to investigate the internal workings of Oracle software and to localize errors. The tkprof utility formats 10046 trace files into a readable report.

The syntax for invoking trace 10046 and tkprof can be arcane, verbose, and error prone. Putting the commands in a few small scripts in a shared directory leads to cleaner work and less cluttered directories, and using the scripts eliminates the manual steps of identifying the trace file and processing it with tkprof. These scripts trace the user’s own session.

Preliminary: To run trace 10046, the user must be granted alter session.

SQL> grant alter session to scott;
Grant succeeded.

Scripts for trace 10046 and tkprof

The scripts are general purpose and can be used for more than one investigation. Set the SQLPATH environment variable to a common directory and put the scripts there. For example:

[oracle@stormking cdb12201 sqlplus]$ echo $SQLPATH
/home/oracle/brian/sql/sqlplus
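Setting SQLPATH for a session is a one-liner; the directory is the example path above, so adjust it to your own layout:

```shell
# Make sqlplus find shared scripts regardless of the current working directory
export SQLPATH=/home/oracle/brian/sql/sqlplus
echo "$SQLPATH"
```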

Here are the scripts:

conn.pdba.scott.sql

A script for connecting to the schema.

conn scott/tiger@stormking:1521/pdba

trace.10046.on.sql

Start tracing

alter session set max_dump_file_size = '100M';
alter session set events '10046 trace name context forever, level 12';

trace.10046.off.sql

Stop tracing. Close the trace file.

alter session set events '10046 trace name context off';

tracefilename.sql

Display the trace file name and set the sqlplus substitution variable tracefilename.

set verify off
set trimspool on
column tracefilename new_value tracefilename format a100
select value tracefilename
from v$diag_info
where name = 'Default Trace File';

tkprof.sql

Identify the trace file and process it with tkprof. Store the output in the current working directory.

@ tracefilename.sql
column base new_value base
select regexp_replace( '&&tracefilename', '.*/(.*).trc','\1' ) base
from dual;
define tkfile=&&base..tkp
host tkprof &&tracefilename &&tkfile
prompt created file:
prompt &&tkfile
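The regexp_replace in tkprof.sql does the same job as the shell basename utility: strip the directory and the .trc extension. A quick sketch outside sqlplus, using a hypothetical trace path of the same shape:

```shell
# Equivalent of regexp_replace( '&&tracefilename', '.*/(.*).trc','\1' )
tracefile=/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_29212.trc
base=$(basename "$tracefile" .trc)
echo "$base"        # cdb12201_ora_29212
echo "${base}.tkp"  # the tkprof output file name
```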

Demonstration

Here’s a simple example. User scott tries to run “show parameter”.

SQL> show parameter cursor
ORA-00942: table or view does not exist

What table or view does not exist? The message does not say. Let’s trace and find out. Here is file show.parameter.demo.sql for tracing “show parameter”:

@ conn.pdba.scott.sql
@ trace.10046.on.sql
show parameter cursor
@ trace.10046.off.sql
@ tkprof.sql

See how clean and simple it looks. Now run it:

[oracle@stormking cdb12201 parameter]$ sqlplus /nolog @ show.parameter.demo.sql
SQL*Plus: Release 12.2.0.1.0 Production on Thu Jul 27 21:48:50 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected.
Session altered.
Session altered.
ORA-00942: table or view does not exist
Session altered.
TRACEFILENAME
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb12201/cdb12201/trace/cdb12201_ora_29212.trc
BASE
------------------
cdb12201_ora_29212
TKPROF: Release 12.2.0.1.0 - Development on Thu Jul 27 21:48:50 2017
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
created file:
cdb12201_ora_29212.tkp

In cdb12201_ora_29212.tkp:

The following statement encountered a error during parse:
SELECT NAME NAME_COL_PLUS_SHOW_PARAM,
DECODE(TYPE,1,'boolean',2,'string',3,'integer',4,'file',5,'number', 6,'big integer', 'unknown') TYPE,
DISPLAY_VALUE VALUE_COL_PLUS_SHOW_PARAM 
FROM V$PARAMETER WHERE UPPER(NAME) LIKE UPPER(:NMBIND_SHOW_OBJ) 
ORDER BY NAME_COL_PLUS_SHOW_PARAM,ROWNUM
Error encountered: ORA-00942

tkprof shows that “show parameter” failed because the user has no access to view v$parameter.

The trace file itself also shows the error:

PARSING IN CURSOR #140627652972696 len=289 dep=0 uid=112 oct=3 lid=112 tim=2237530714211 hv=2462394820 ad='b4be2118' sqlid='7cfz5wy9caaf4'
SELECT NAME NAME_COL_PLUS_SHOW_PARAM,DECODE(TYPE,1,'boolean',2,'string',3,'integer',4,'file',5,'number', 6,'big integer', 'unknown') TYPE,DISPLAY_VALUE VALUE_COL_PLUS_SHOW_PARAM FROM V$PARAMETER WHERE UPPER(NAME) LIKE UPPER(:NMBIND_SHOW_OBJ) ORDER BY NAME_COL_PLUS_SHOW_PARAM,ROWNUM
END OF STMT
PARSE #140627652972696:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=2237530714211

User scott can confirm no access:

SQL> select * from V$PARAMETER where 1 = 2;
select * from V$PARAMETER where 1 = 2
 *
ERROR at line 1:
ORA-00942: table or view does not exist

To solve the user’s problem, select_catalog_role or some other role can be granted to user scott.

SQL> select grantee from dba_tab_privs 
where table_name = 'V_$PARAMETER' 
and grantee in ( select role from dba_roles ) order by grantee;
GRANTEE
----------------------------------------------------------------------------
DV_SECANALYST
SELECT_CATALOG_ROLE
SQL> select grantee from dba_role_privs 
where granted_role = 'SELECT_CATALOG_ROLE' order by grantee;
GRANTEE
----------------------------------------------------------------------------
DBA
EM_EXPRESS_BASIC
EXP_FULL_DATABASE
IMP_FULL_DATABASE
OEM_MONITOR
SYS
SYSBACKUP
SYSUMF_ROLE
8 rows selected.
SQL> select grantee from dba_sys_privs 
where privilege = 'SELECT ANY DICTIONARY' order by grantee;
GRANTEE
----------------------------------------------------------------------------
DBA
DBSNMP
GGSYS
GSMADMIN_INTERNAL
OEM_MONITOR
SYSBACKUP
SYSDG
WMSYS
8 rows selected.
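Given those listings, the quickest fix is to grant the existing role (run as a DBA in the PDB). Note that select_catalog_role exposes the whole catalog, not just v$parameter, which may be broader than intended:

```sql
grant select_catalog_role to scott;
```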

Alternatively, we can create a new role:

SQL> conn / as sysdba
Connected.
SQL> alter session set container = pdba;
Session altered.
SQL> create role sqlplus_role;
Role created.
SQL> grant select on v_$parameter to sqlplus_role;
Grant succeeded.
SQL> grant sqlplus_role to scott;
Grant succeeded.
SQL> @ conn.pdba.scott.sql
Connected.
SQL> show parameter cursor
NAME TYPE VALUE
------------------------------------ ----------- --------------------------
cursor_bind_capture_destination string memory+disk
cursor_invalidation string IMMEDIATE
cursor_sharing string EXACT
cursor_space_for_time boolean FALSE
open_cursors integer 300
session_cached_cursors integer 50

The user’s problem is solved. sqlplus command “show parameter” no longer throws ORA-00942.

Summary

  • event 10046 can be used for performance diagnosis
  • trace 10046 can also be used to investigate the inner workings of software
  • trace 10046 can be used to identify the source of an error
  • reusable sqlplus scripts belong in SQLPATH
  • trace syntax can be saved in convenient scripts
  • the result is less cluttered administrative scripts
  • the task of identifying the trace file and processing it with tkprof can be handled in a single script
  • sqlplus command “show parameter” requires access to v$parameter
  • users can be granted limited privileges with custom roles

Flashback database with deleted archive log, or no archive log.

By: Brian Fitzgerald

Scenarios:

  1. You want to run flashback database, but a needed archivelog has been deleted; you do, however, have a backup of it.
  2. You want to be able to flashback a database without needing any archivelog.

Solutions:

  1. Identify and restore the needed archive log (usually just one).
  2. Create the restore point while the database is mounted and consistent.

Demo #1. Flashback if a needed archivelog has been deleted:

[oracle@stormking TESTFB blog]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Oct 24 20:09:43 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

connected to target database: TESTFB (DBID=2908670758)

RMAN> backup archivelog all not backed up 1 times delete input;

Starting backup at 20161024 20:10
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=65 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=21 RECID=6 STAMP=926107742
input archived log thread=1 sequence=22 RECID=7 STAMP=926107771
input archived log thread=1 sequence=23 RECID=8 STAMP=926107773
input archived log thread=1 sequence=24 RECID=9 STAMP=926107777
input archived log thread=1 sequence=25 RECID=10 STAMP=926107803
channel ORA_DISK_1: starting piece 1 at 20161024 20:10
channel ORA_DISK_1: finished piece 1 at 20161024 20:10
piece handle=+RECO/TESTFB/BACKUPSET/2016_10_24/annnf0_tag20161024t201006_0.532.926107807 tag=TAG20161024T201006 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: deleting archived log(s)
RMAN-08139: WARNING: archived redo log not deleted, needed for guaranteed restore point
archived log file name=+RECO/TESTFB/ARCHIVELOG/2016_10_24/thread_1_seq_21.456.926107741 thread=1 sequence=21
RMAN-08139: WARNING: archived redo log not deleted, needed for guaranteed restore point
archived log file name=+RECO/TESTFB/ARCHIVELOG/2016_10_24/thread_1_seq_22.465.926107771 thread=1 sequence=22
archived log file name=+RECO/TESTFB/ARCHIVELOG/2016_10_24/thread_1_seq_23.469.926107773 RECID=8 STAMP=926107773
archived log file name=+RECO/TESTFB/ARCHIVELOG/2016_10_24/thread_1_seq_24.536.926107777 RECID=9 STAMP=926107777
archived log file name=+RECO/TESTFB/ARCHIVELOG/2016_10_24/thread_1_seq_25.527.926107803 RECID=10 STAMP=926107803
Finished backup at 20161024 20:10

RMAN> delete noprompt force archivelog all;

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=65 device type=DISK
List of Archived Log Copies for database with db_unique_name TESTFB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - --------------
6 1 21 A 20161024 20:07
 Name: +RECO/TESTFB/ARCHIVELOG/2016_10_24/thread_1_seq_21.456.926107741

7 1 22 A 20161024 20:09
 Name: +RECO/TESTFB/ARCHIVELOG/2016_10_24/thread_1_seq_22.465.926107771

deleted archived log
archived log file name=+RECO/TESTFB/ARCHIVELOG/2016_10_24/thread_1_seq_21.456.926107741 RECID=6 STAMP=926107742
deleted archived log
archived log file name=+RECO/TESTFB/ARCHIVELOG/2016_10_24/thread_1_seq_22.465.926107771 RECID=7 STAMP=926107771
Deleted 2 objects

[oracle@stormking TESTFB blog]$ sysdba @ flashback.db.to.restorepoint.sql

SQL*Plus: Release 12.1.0.2.0 Production on Mon Oct 24 20:32:02 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options


no rows selected

a guaranteed restore point exists

NAME
--------------------------------------------------------------------------------
FBDEMO_RP_20161024_2009

FBDEMO_RP_20161024_2009
ORACLE instance started.

Total System Global Area 2533359616 bytes
Fixed Size 2927576 bytes
Variable Size 671089704 bytes
Database Buffers 1845493760 bytes
Redo Buffers 13848576 bytes
Database mounted.
old 1: flashback database to restore point &&name
new 1: flashback database to restore point FBDEMO_RP_20161024_2009
flashback database to restore point FBDEMO_RP_20161024_2009
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 1709223 to SCN 1709224
ORA-38761: redo log sequence 22 in thread 1, incarnation 2 could not be
accessed

OK, we have a problem. The solution is to find the needed SCN in v$restore_point and then restore that archivelog.

[oracle@stormking TESTFB blog]$ sysdba @ vrestorepoint.sql

SQL*Plus: Release 12.1.0.2.0 Production on Mon Oct 24 20:54:02 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options


Session altered.


 SCN DATABASE_INCARNATION# GUA TIME NAME
---------------- --------------------- --- ---------------- ----------------------------------------
 1709224 2 YES 2016-10-24 20:09 FBDEMO_RP_20161024_2009

RMAN> restore archivelog from scn 1709224 until scn 1709224;

Starting restore at 20161024 20:38
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=24 device type=DISK

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=22
channel ORA_DISK_1: reading from backup piece +RECO/TESTFB/BACKUPSET/2016_10_24/annnf0_tag20161024t201006_0.532.926107807
channel ORA_DISK_1: piece handle=+RECO/TESTFB/BACKUPSET/2016_10_24/annnf0_tag20161024t201006_0.532.926107807 tag=TAG20161024T201006
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04
Finished restore at 20161024 20:39

RMAN>

Recovery Manager complete.
[oracle@stormking TESTFB blog]$ sysdba @ flashback.db.to.restorepoint.sql

SQL*Plus: Release 12.1.0.2.0 Production on Mon Oct 24 20:39:16 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options


no rows selected

a guaranteed restore point exists

NAME
--------------------------------------------------------------------------------
FBDEMO_RP_20161024_2009

FBDEMO_RP_20161024_2009
ORACLE instance started.

Total System Global Area 2533359616 bytes
Fixed Size 2927576 bytes
Variable Size 671089704 bytes
Database Buffers 1845493760 bytes
Redo Buffers 13848576 bytes
Database mounted.
old 1: flashback database to restore point &&name
new 1: flashback database to restore point FBDEMO_RP_20161024_2009

Flashback complete.

SQL> alter database open resetlogs;

Database altered.

Demo #2. Flash back without needing any archivelogs. First shutdown immediate, then startup mount, and create the restore point while the database is mounted and consistent. No archivelogs will be required to flash back.

[oracle@stormking TESTFB blog]$ sysdba @ cr.consistent.restorepoint.sql

SQL*Plus: Release 12.1.0.2.0 Production on Mon Oct 24 21:02:59 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options


no rows selected

flashback database is enabled

Session altered.


no rows selected


no rows selected

no restore point exists

RP INST
-------------------------------------------------------------------------------- --------
consistent_rp_20161024_2102 TESTFB

consistent_rp_20161024_2102
Database closed.
Database dismounted.
ORACLE instance shut down.
ORACLE instance started.

Total System Global Area 2533359616 bytes
Fixed Size 2927576 bytes
Variable Size 671089704 bytes
Database Buffers 1845493760 bytes
Redo Buffers 13848576 bytes
Database mounted.

no rows selected

the database is consistent
old 1: create restore point &&rp guarantee flashback database
new 1: create restore point consistent_rp_20161024_2102 guarantee flashback database

Restore point created.


Session altered.


 SCN DATABASE_INCARNATION# GUA TIME NAME
---------------- --------------------- --- ---------------- ----------------------------------------
 1712257 3 YES 2016-10-24 21:03 CONSISTENT_RP_20161024_2102

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@stormking TESTFB blog]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Oct 24 21:03:40 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

connected to target database: TESTFB (DBID=2908670758, not open)

RMAN> backup archivelog all not backed up 1 times delete input;

Starting backup at 20161024 21:03
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=30 device type=DISK
skipping archived log of thread 1 with sequence 22; already backed up
skipping archived logs of thread 1 from sequence 24 to 25; already backed up
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=26 RECID=13 STAMP=926109581
channel ORA_DISK_1: starting piece 1 at 20161024 21:03
channel ORA_DISK_1: finished piece 1 at 20161024 21:03
piece handle=+RECO/TESTFB/BACKUPSET/2016_10_24/annnf0_tag20161024t210351_0.469.926111033 tag=TAG20161024T210351 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=+RECO/TESTFB/ARCHIVELOG/2016_10_24/thread_1_seq_26.527.926109581 RECID=13 STAMP=926109581
channel ORA_DISK_1: deleting archived log(s)
archived log file name=+RECO/TESTFB/ARCHIVELOG/2016_10_24/thread_1_seq_22.465.926109543 RECID=11 STAMP=926109542
archived log file name=+RECO/TESTFB/ARCHIVELOG/2016_10_24/thread_1_seq_24.536.926109583 RECID=14 STAMP=926109582
archived log file name=+RECO/TESTFB/ARCHIVELOG/2016_10_24/thread_1_seq_25.456.926109579 RECID=12 STAMP=926109579
Finished backup at 20161024 21:03

RMAN> delete noprompt archivelog all;

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=30 device type=DISK
specification does not match any archived log in the repository

RMAN>

Recovery Manager complete.
[oracle@stormking TESTFB blog]$ sysdba @ flashback.db.to.restorepoint.sql

SQL*Plus: Release 12.1.0.2.0 Production on Mon Oct 24 21:04:21 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options


no rows selected

a guaranteed restore point exists

NAME
--------------------------------------------------------------------------------
CONSISTENT_RP_20161024_2102

CONSISTENT_RP_20161024_2102
ORACLE instance started.

Total System Global Area 2533359616 bytes
Fixed Size 2927576 bytes
Variable Size 671089704 bytes
Database Buffers 1845493760 bytes
Redo Buffers 13848576 bytes
Database mounted.
old 1: flashback database to restore point &&name
new 1: flashback database to restore point CONSISTENT_RP_20161024_2102

Flashback complete.

SQL> alter database open resetlogs;

Database altered.

SQL>
SQL> quit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

Scripts that were used in this blog post, in alphabetical order. In the assert scripts, the deliberate 0 / 0 raises ORA-01476 (divisor is equal to zero) when a row matches the failure condition, and whenever sqlerror then aborts the run; when the check passes, no rows are returned and the expression is never evaluated.

assert.database.flashback.is.on.sql:

whenever sqlerror exit 1
select 0 / 0 "chk that flashback is on"
from v$database
where FLASHBACK_ON != 'YES';

prompt flashback database is enabled

assert.database.is.consistent.sql:

whenever sqlerror exit 1

select 0 / 0 "chk datafiles are consistent"
from v$datafile_header
where status = 'ONLINE'
and fuzzy = 'YES';

prompt the database is consistent

assert.guaranteed.restorepoint.exists.sql:

whenever sqlerror exit 1

select 0 / 0 "chk for guar restore point"
from dual
where not exists
(
 select *
 from v$restore_point
 where guarantee_flashback_database = 'YES'
);

prompt a guaranteed restore point exists

assert.no.restore.point.exists.sql:

whenever sqlerror exit 1

select 0 / 0 "chk no restore point exists"
from v$restore_point
where rownum = 1;
prompt no restore point exists

cr.consistent.restorepoint.sql:

whenever oserror exit 1
whenever sqlerror exit 1
column inst format a8
set linesize 200
set trimspool on

@ assert.database.flashback.is.on.sql
@ vrestorepoint.sql
@ assert.no.restore.point.exists.sql

column rp new_value rp format a80
select 'consistent_rp_' || to_char ( sysdate , 'yyyymmdd_hh24mi' ) rp,
 sys_context( 'userenv', 'instance_name') inst
from dual;

prompt &&rp

shutdown immediate
startup mount

@ assert.database.is.consistent.sql

create restore point &&rp guarantee flashback database;

@ vrestorepoint.sql
quit

cr.fbdemo.restorepoint.sql:

whenever oserror exit 1
whenever sqlerror exit 1
column inst format a8
set linesize 200
set trimspool on

@ assert.database.flashback.is.on.sql
@ vrestorepoint.sql
@ assert.no.restore.point.exists.sql

column rp new_value rp format a80
select 'fbdemo_rp_' || to_char ( sysdate , 'yyyymmdd_hh24mi' ) rp,
 sys_context( 'userenv', 'instance_name') inst
from dual;

prompt &&rp

alter system checkpoint;
alter system switch logfile;

create restore point &&rp guarantee flashback database;

@ vrestorepoint.sql
quit

vrestorepoint.sql:

set linesize 200
set trimspool on
set pagesize 900
column scn format 999999999999999
column time format a16
column name format a40

alter session set nls_timestamp_format = 'yyyy-mm-dd hh24:mi';

select rp.scn,
 rp.database_incarnation#,
 rp.guarantee_flashback_database,
 rp.time,
 rp.name
from v$restore_point rp;