Optimizing Materialized Views Part III: Manual Refresh Mechanisms

So, back on the subject of fast-refreshing multiple materialized views.

Reviewing my previous comments on this, I think some clarification is in order. I wrote before that I prefer to think of fast refresh as being based on the materialized view log rather than on the master table itself, but as I noted in a followup this is not actually the case when you load a master table using direct path inserts. In that circumstance the materialized view log is not touched at all; the change logging takes place instead through the ALL_SUM_DELTA system view. So from the viewpoint of the refresh process itself you can consider the materialized view log to be redundant. This doesn't mean that you can do away with it, however: Oracle provides no mechanism for declaring that a materialized view will be fast refreshed only on direct path inserts and that the MV log will never be used.
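If you want to confirm that direct path loads are being recorded outside the MV log, the delta records can be inspected directly. This is a sketch only: SUMDELTA$ is an undocumented SYS structure (the table underlying ALL_SUM_DELTA), so the layout may vary between versions, though the column names below do appear in the refresh statement's predicates.

```sql
-- Inspect the rowid ranges logged for direct path loads against a master
-- table. The object id here (256646) is the one from the trace file in
-- this article; substitute your own master table's object id.
select tableobj#,      -- object id of the master table
       timestamp,      -- when the direct path load was recorded
       lowrowid,       -- start of the loaded rowid range
       highrowid       -- end of the loaded rowid range
from   sys.sumdelta$
where  tableobj# = 256646;
```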

In fact you might want to go so far as to ensure that it really is redundant by preventing inserts into it, since any such insert would be an indication that a conventional path operation was being used. A couple of mechanisms for this spring to mind, such as placing the log in a read-only tablespace, or placing an error-raising insert trigger on it. Either would effectively prevent anything other than direct path inserts against the master table (deletes, updates and conventional path inserts would all fail), and considering the performance disparity between conventional and direct path change logging and the subsequent fast refresh, I would be inclined to go ahead with such a plan.
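As a sketch of the trigger option: the log table name MLOG$_MASTER follows Oracle's usual naming convention for a materialized view log on a table called MASTER, but check your own data dictionary before relying on it, and the trigger name and error text here are my own invention.

```sql
-- Any conventional path DML on the master table writes rows to its MV log,
-- so raising an error from a trigger on the log blocks everything except
-- direct path inserts, which bypass the log (and hence this trigger).
create or replace trigger mlog_master_guard
before insert on mlog$_master
for each row
begin
   raise_application_error(
      -20001,
      'Conventional path DML detected against MASTER: only direct '
      || 'path inserts are permitted');
end;
/
```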

Refresh Mechanism Following Direct-Path Insert

So given that you have inserted data to the master table in direct path mode, what is the refresh mechanism for the materialized views? Looking at a SQL trace file tells us that a MERGE statement is used. Here is a prettified example derived from the script used here.

Merge Into MASTER_MV2 SNA$
Using (
Select  /*+  Opt_Estimate(Query_Block Max=1000)   */
DLT$0.LOCATION_CD                          GB0,
Trunc(DLT$0.TX_TIMESTAMP,'MM')             GB1,
Sum(1)                                     D0,
Nvl(Sum(1*(DLT$0.TX_CST)), 0)              D1,
Sum(1* Decode((DLT$0.TX_CST), Null, 0, 1)) H1,
Nvl(Sum(1*(DLT$0.TX_QTY)), 0)              D2,
Sum(1* Decode((DLT$0.TX_QTY), Null, 0, 1)) H2
From     (
Select  /*+ Cardinality(16800) No_Merge No_Merge(LL$) Rowid(MAS$)
          Ordered Use_Nl(MAS$) No_Index(MAS$) PQ_Distribute(MAS$,Random,None) */
     MAS$.ROWID RID$   ,
     MAS$.TX_CST       ,
     MAS$.TX_QTY       ,
     MAS$.LOCATION_CD  ,
     MAS$.TX_TIMESTAMP
From   SYS.SUMDELTA$ LL$,
       MASTER        MAS$
Where  LL$.TABLEOBJ# = 256646 And
     LL$.TIMESTAMP > :1     And
     MAS$.ROWID    Between LL$.LOWROWID
                       And LL$.HIGHROWID
) As Of Snapshot (:2)  DLT$0
Group By DLT$0.LOCATION_CD,
         Trunc(DLT$0.TX_TIMESTAMP,'MM')
) AV$
On (
Sys_Op_Map_NonNull(SNA$.LOCATION_CD) = Sys_Op_Map_NonNull(AV$.GB0) And
Sys_Op_Map_NonNull(SNA$.TX_MONTH)    = Sys_Op_Map_NonNull(AV$.GB1)
)
When Matched Then
Update  Set SNA$.C_STAR   = SNA$.C_STAR+AV$.D0,
 SNA$.S_TX_CST = Decode(SNA$.S_TX_CST,
                        Null, Decode(AV$.H1, 0, Null, AV$.D1),
                              (SNA$.S_TX_CST + AV$.D1)),
 SNA$.S_TX_QTY = Decode(SNA$.S_TX_QTY,
                        Null, Decode(AV$.H2, 0, Null, AV$.D2),
                              (SNA$.S_TX_QTY + AV$.D2))
When Not Matched Then
Insert (SNA$.LOCATION_CD, SNA$.TX_MONTH, SNA$.C_STAR,
        SNA$.S_TX_CST,    SNA$.S_TX_QTY)
Values (AV$.GB0, AV$.GB1, AV$.D0,
        Decode(AV$.H1, 0, Null, AV$.D1),
        Decode(AV$.H2, 0, Null, AV$.D2))

One characteristic to note is the absence of an APPEND hint in the merge statement, which to me implies the following:

  1. New rows will be inserted through the conventional path. No direct path for you!
  2. Data segment compression on the MV base table will not be leveraged.
  3. If a fast refreshable MV references MV1 as its master table, then the change logging will be performed in the MV1 materialized view log, not through the SUM_DELTA mechanism.
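The H1 and H2 columns in the merge are also worth a note: they count the non-null TX_CST and TX_QTY values in the delta, which lets the UPDATE branch distinguish "the delta contained only NULLs" from "the delta summed to zero". A minimal illustration of the pattern, runnable against DUAL (my own example, not taken from the trace):

```sql
-- If the existing MV sum is NULL and no non-null values arrived (H1 = 0),
-- the sum must stay NULL; a naive NVL-style addition would wrongly turn it
-- into zero. The refresh MERGE handles this with nested DECODEs:
select decode(s_tx_cst,                       -- existing MV value
              null, decode(h1, 0, null, d1),  -- was NULL: stays NULL unless
                                              -- the delta had non-null rows
              s_tx_cst + d1) new_s_tx_cst     -- otherwise simple addition
from   (select cast(null as number) s_tx_cst,
               0                    h1,
               0                    d1
        from   dual);
-- Returns NULL here, where NVL(s_tx_cst,0) + d1 would have returned 0.
```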

Optimizing MV Refresh By Avoiding MERGE

Another issue to note is simply that a single merge statement is used instead of an insert plus an update. It is not inconceivable that you have a situation where the new data added to a fact table does not require modification of existing rows in one or more materialized views. If the materialized view groups by "Column A", and each load cycle on the fact table contains values of "Column A" that never appear in any other load cycle, then this would be the case, and this might be either implicit or explicit in your data loading application.

The distinction here between implicit and explicit refers to whether the uniqueness of "Column A" to a particular load cycle is a deliberate design feature or whether it is more by happenstance or business requirements. An example of an explicit situation would be where you use a unique "load cycle code" as part of your ETL procedure (not a bad idea in itself), assigning a new value to each load and pushing this into one or more MV's. An example of an implicit situation might be where each load cycle operates on a granularity of fact table data that is repeated in the materialized view, such as where you load a complete day of transactions represented by a contiguous range of "TRANSACTION_TIME" (midnight-to-midnight) and the materialized view contains an expression such as:
Trunc(TRANSACTION_TIME)


In such a circumstance it is conceivable that it would be more efficient to manage the materialized view aggregation ourselves than to leave it to the RDBMS, because using an insert allows us the following potential advantages:

  1. We avoid a potentially expensive join to the MV.
  2. A direct path write allows us to minimize logging.
  3. A direct path write allows us to use data segment compression.
  4. We can enforce a sort order in the materialized view data, potentially improving report query performance.

I predict that this last issue will become more important in 10gR2 as the new hash-based aggregation algorithm replaces the current sort-based method, and the likelihood that GROUP BY returns a value-clustered result set is reduced.

However the tricky part of such an operation is in modifying the materialized view's data, which is not permitted through DML:

SQL> insert into master_mv1
2  select
3     location_cd,
4     trunc(tx_timestamp,'MM') tx_month,
5     sum(tx_qty) s_tx_qty,
6     sum(tx_cst) s_tx_cst,
7     count(*)    c_star
8  from
9     master
10  group by
11     location_cd,
12     trunc(tx_timestamp,'MM')
13  /
insert into master_mv1
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view

For the experienced data warehouse developer an obvious method for bypassing such a limitation is to use a partition exchange operation, so let us construct a test case.

In this script we load a fact table upon which are based two materialized views, one of which is refresh on demand and the other of which is never refresh. The latter will be manually refreshed using a direct path insert into a non-partitioned table followed by a partition exchange without validation. To avoid either of the refresh processes benefiting from buffer caching by the other, the buffer cache will be flushed.

drop table master;
drop table master_test_data;
drop table master_mv1_exch;
drop materialized view master_mv1;
drop materialized view master_mv2;

create table master
   (
   location_cd  number not null,
   tx_timestamp date   not null,
   product_cd   number not null,
   tx_qty       number not null,
   tx_cst       number not null
   )
pctfree 0
/

create materialized view log on master
with rowid
including new values
/

create materialized view master_mv1
compress nologging
Partition By List (TX_MONTH)
   (Partition Y2005M01 Values (to_date('01-jan-2005')))
Using No Index
Never Refresh
Enable Query Rewrite
as
select location_cd,
       trunc(tx_timestamp,'MM') tx_month,
       sum(tx_qty) s_tx_qty,
       sum(tx_cst) s_tx_cst,
       count(*)    c_star
from   master
group by
       location_cd,
       trunc(tx_timestamp,'MM')
/

create table master_mv1_exch
compress nologging
as
select *
from   master_mv1
where  1=0
/

create materialized view master_mv2
compress nologging
Partition By List (TX_MONTH)
   (Partition Y2005M01 Values (to_date('01-jan-2005')))
Using No Index
Refresh Fast On Demand
Enable Query Rewrite
as
select location_cd,
       trunc(tx_timestamp,'MM') tx_month,
       sum(tx_qty) s_tx_qty,
       sum(tx_cst) s_tx_cst,
       count(*)    c_star
from   master
group by
       location_cd,
       trunc(tx_timestamp,'MM')
/

create table master_test_data
   (
   location_cd  not null,
   tx_timestamp not null,
   product_cd   not null,
   tx_qty       not null,
   tx_cst       not null
   )
pctfree 0
as
select trunc(dbms_random.value(1,10000)),
       to_date('01-jan-2005','dd-mon-yyyy')
          + dbms_random.value(0,30),
       trunc(dbms_random.value(1,100)),
       trunc(dbms_random.value(1,10)),
       round(dbms_random.value(1,100),2)
from   dual
connect by
   1=1 and
   level <= 1000000
/

alter system flush buffer_cache;

insert /*+ append */ into master
select * from master_test_data
/

commit;

alter system flush buffer_cache;

begin
   dbms_mview.refresh(
      list      => 'MASTER_MV2',
      method    => 'F');
end;
/
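The manual refresh of MASTER_MV1 then runs as the three stages timed below. A sketch of the commands, using the exchange-without-validation approach described earlier (the partition and table names follow the script above; CONSIDER FRESH is assumed to be available in your version):

```sql
-- Stage 1: aggregate directly into the exchange table with a direct path,
-- nologging, compressed insert.
insert /*+ append */ into master_mv1_exch
select location_cd,
       trunc(tx_timestamp,'MM') tx_month,
       sum(tx_qty) s_tx_qty,
       sum(tx_cst) s_tx_cst,
       count(*)    c_star
from   master
group by
       location_cd,
       trunc(tx_timestamp,'MM')
/
commit;

-- Stage 2: swap the loaded segment into the MV's partition.
alter table master_mv1
   exchange partition Y2005M01
   with table master_mv1_exch
   without validation
/

-- Stage 3: tell Oracle the MV can be trusted for query rewrite again.
alter materialized view master_mv1 consider fresh
/
```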

From the wall-clock timing of the queries, the manual refresh is superior: its three stages of insert, partition exchange and "consider fresh" complete in 5.08 seconds (4.75 + 0.15 + 0.18), representing 66% of the merge-based MV refresh time of 7.65 seconds.

Tkprof extracted the following details from the trace file:


Insert /*+ append */ Into master_mv1_exch
Select location_cd,
       trunc(tx_timestamp,'MM') tx_month,
       sum(tx_qty) s_tx_qty,
       sum(tx_cst) s_tx_cst,
       count(*)    c_star
From   master
Group By
       location_cd,
       trunc(tx_timestamp,'MM')

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.05          2        344          3           0
Execute      1      2.89       3.78       1927       3404        110        9999
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      2.92       3.83       1929       3748        113        9999

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 40

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  LOAD AS SELECT  (cr=3404 pr=1927 pw=27 time=3785583 us)
   9999   SORT GROUP BY (cr=3365 pr=1914 pw=0 time=3652049 us)
1000000    TABLE ACCESS FULL MASTER (cr=3365 pr=1914 pw=0 time=4008780 us)

Elapsed times include waiting on following events:
Event waited on                             Times   Max. Wait  Total Waited
----------------------------------------   Waited  ----------  ------------
db file sequential read                        11        0.01          0.05
control file sequential read                    4        0.00          0.00
db file scattered read                         70        0.18          0.73
direct path write                               5        0.00          0.00
SQL*Net message to client                       1        0.00          0.00
SQL*Net message from client                     1        0.00          0.00


(the fast refresh MERGE statement against MASTER_MV2, as shown in full above)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          4           0
Execute      1      6.04       6.66       1922       3760      10640        9999
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      6.06       6.68       1922       3760      10644        9999

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 40     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      2  MERGE  (cr=3811 pr=1925 pw=0 time=6662264 us)
   9999   VIEW  (cr=3682 pr=1914 pw=0 time=6556192 us)
   9999    HASH JOIN RIGHT OUTER (cr=3682 pr=1914 pw=0 time=6516189 us)
      0     PARTITION LIST SINGLE PARTITION: 1 1 (cr=3 pr=0 pw=0 time=92 us)
      0      MAT_VIEW ACCESS FULL MASTER_MV2 PARTITION: 1 1 (cr=3 pr=0 pw=0 time=73 us)
   9999     VIEW  (cr=3679 pr=1914 pw=0 time=6415840 us)
   9999      SORT GROUP BY (cr=3679 pr=1914 pw=0 time=6365838 us)
1000000       VIEW  (cr=3679 pr=1914 pw=0 time=14006210 us)
1000000        NESTED LOOPS  (cr=3679 pr=1914 pw=0 time=8006206 us)
     35         VIEW  (cr=76 pr=0 pw=0 time=4235 us)
     35          FILTER  (cr=76 pr=0 pw=0 time=4055 us)
     35           NESTED LOOPS  (cr=76 pr=0 pw=0 time=3802 us)
     35            NESTED LOOPS  (cr=4 pr=0 pw=0 time=755 us)
      1             TABLE ACCESS BY INDEX ROWID OBJ$ (cr=3 pr=0 pw=0 time=75 us)
      1              INDEX UNIQUE SCAN I_OBJ1 (cr=2 pr=0 pw=0 time=32 us)(object id 36)
     35             INDEX RANGE SCAN I_SUMDELTA$ (cr=1 pr=0 pw=0 time=472 us)(object id 161)
     35            TABLE ACCESS CLUSTER USER$ (cr=72 pr=0 pw=0 time=2556 us)
     35             INDEX UNIQUE SCAN I_USER# (cr=2 pr=0 pw=0 time=798 us)(object id 11)
      0           NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us)
      0            INDEX RANGE SCAN I_OBJAUTH1 (cr=0 pr=0 pw=0 time=0 us)(object id 107)
      0            FIXED TABLE FULL X$KZSRO (cr=0 pr=0 pw=0 time=0 us)
      0           FIXED TABLE FULL X$KZSPR (cr=0 pr=0 pw=0 time=0 us)
1000000         TABLE ACCESS FULL MASTER (cr=3603 pr=1914 pw=0 time=3222833 us)

Elapsed times include waiting on following events:
Event waited on                             Times   Max. Wait  Total Waited
----------------------------------------   Waited  ----------  ------------
db file sequential read                        10        0.00          0.03
db file scattered read                         70        0.02          0.48
log buffer space                               13        0.00          0.03

The difference in elapsed time between the two methods is almost entirely accounted for in CPU time, probably due to the joins required by the merge statement. You might also notice that I took the opportunity to make the tables NOLOGGING and COMPRESS.

The methodology presented here for the manual refresh of the materialized view through a partition exchange is not the only one available, of course. Another choice might be to define the materialized view upon a prebuilt table, which allows the MV to be temporarily dropped so as to allow DML on the underlying table. The recreation of the MV definition is then a trivial issue in terms of database work required, and the only effect would be the temporary loss of the ability to rewrite queries against that MV. It would have the further advantage that indexes (or partitions thereof) on the materialized view table could be temporarily rendered unusable during the insert process so as to rebuild them in a nologging mode, without users being inconvenienced by "unusable index" errors.
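A sketch of the prebuilt table variant described above (my own illustration, not taken from the test case, with the names MASTER_MV1_PB chosen for the example):

```sql
-- Build and maintain the aggregate as an ordinary table, which remains
-- open to direct path inserts and index maintenance at any time...
create table master_mv1_pb
compress nologging
as
select location_cd,
       trunc(tx_timestamp,'MM') tx_month,
       sum(tx_qty) s_tx_qty,
       sum(tx_cst) s_tx_cst,
       count(*)    c_star
from   master
group by
       location_cd,
       trunc(tx_timestamp,'MM')
/

-- ...then layer the MV definition over it. Dropping the MV later leaves
-- the table and its data intact so that DML becomes possible; recreating
-- the definition restores query rewrite at trivial cost.
create materialized view master_mv1_pb
on prebuilt table
never refresh
enable query rewrite
as
select location_cd,
       trunc(tx_timestamp,'MM') tx_month,
       sum(tx_qty) s_tx_qty,
       sum(tx_cst) s_tx_cst,
       count(*)    c_star
from   master
group by
       location_cd,
       trunc(tx_timestamp,'MM')
/
```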

It would also be possible to split the manual refresh into more stages to reduce the period for which query rewrite is unavailable:

  1. Aggregate master table to temporary table
  2. Drop materialized view
  3. Insert into materialized view table from temp table (no aggregation required)
  4. Recreate materialized view

However this would sacrifice many of the performance benefits of the single stage manual insert.


If there is an alternative mechanism for avoiding the MERGE then I'd be very glad to hear it, but this method may stand on its own if the alternatives do not allow direct path operations on the MV table or the sorting of the table data.

There are also some clear benefits to this mechanism when you consider the refresh of multiple materialized views, but they will wait until after thanksgiving (if I escape both the long term and immediate threats to my health of deep-frying a turkey tomorrow).

Last Minute Addendum

It occurred to me just after posting this that another benefit of using a manual refresh is that you can avoid the overhead of calculating the extra columns required by the fast refresh mechanism, in this case the COUNT(*). I'll do a test on the benefits of that also.


9 thoughts on “Optimizing Materialized Views Part III: Manual Refresh Mechanisms”

  1. I’m not too familiar with MVs, but can you use a partitioned table with one partition as underlying table for an mv? Then the steps become:

    aggregate data into a temp table
    remove mv definition from target table
    exchange temp and target
    add mv definition to target table

    Just a thought

  2. Yes, that’s one of the mechanisms I describe, except as William notes there is no need to drop/recreate the MV definition.

    I’ve also used single-partition tables quite effectively for dimension tables with complex changes, where I wanted to complete the modifications before exposing them all to the reporting environment in one shot.

    To be more precise, the dimension table was unpartitioned and the changes were made in the partitioned table. Of course the effect is just the same, with the exchange mechanism switching the data objects (table segments) associated with the two objects (tables).

  3. Most interesting that the refresh mechanism is handled with merge, given the limitations of that statement when handling more than one update to the same row in a given input data set.

    I wonder if that’s why it has that convoluted “group by”? Trying to reduce all changes of a given row to one, perhaps? I still can’t see how it works in detail, I think it’s only picking the last update of each column and using only that as if it was a single row update?

    Interesting stuff, David: thanks for posting this.

  4. Noons,

    The query does indeed guarantee that the merge will not attempt multiple updates of the same row. You see that the query block beginning:

    Select /*+ Opt_Estimate …

    … takes care of this by aggregating the new rows to the same level as the MV, in this case location_cd and trunc(tx_timestamp,’MM’), so it wouldn’t be a problem.

  5. If you partition exchange directly into an MV to optimise the refresh when the master table is bulk updated, but still receive small sets of changes to the master table, when you subsequently attempt to perform a fast refresh for this last set, don’t you then encounter ORA-32320?

  6. I thought I might be able to use the CONSIDER FRESH command, but when I try this, I get told this is an unimplemented feature (ORA-3001). Have you (or anyone else) used this command successfully?

  7. Hmmm, I don’t recall receiving that error, but there is a funny intersection between freshness and partition operations. For example, if you have (successfully) used “consider fresh” then I believe that PCT fast refresh is no longer available. In short, nothing would surprise me there.
