11gR2: materialized view logs changes

In this post we are going to discuss some 11gR2 changes to materialized view logs that are aimed at increasing the performance of the fast-refresh engine of materialized views (MVs), especially the on-commit variant.

In 11gR2, MV logs come in two flavours: the traditional (and still the default) timestamp-based one and the brand new commit SCN-based one; you choose the latter by specifying the "WITH COMMIT SCN" clause at MV log creation time. Interestingly, the "old" timestamp-based implementation has been changed as well. Let's examine both with the help, as usual, of a test case.

Timestamp-based MV logs (the "old" type)

The test case configures an MV log as "log everything", that is, it activates all the logging options:

create materialized view log on test_t1
with sequence, rowid, primary key (x1)
including new values;

In pre-11gR2 versions, the MV log columns were:

pk1             number(22)
x1              number(22)
m_row$$         varchar2(255)
sequence$$      number(22)
snaptime$$      date(7)
dmltype$$       varchar2(1)
old_new$$       varchar2(1)
change_vector$$ raw(255)

now, in 11gR2, they are:

pk1             number(22)
x1              number(22)
m_row$$         varchar2(255)
sequence$$      number(22)
snaptime$$      date(7)
dmltype$$       varchar2(1)
old_new$$       varchar2(1)
change_vector$$ raw(255)
xid$$           number(22)

the only difference is the new column xid$$ (transaction id) that uniquely identifies the transaction that made the changes to the row. For the curious, the number is a combination of the elements of the triplet (undo segment number, undo slot, undo sequence); it is simply the binary concatenation of the three numbers shifted by (48, 32, 0) bits respectively (as checked in the script).
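The packing just described can be sketched with a small (hypothetical) helper; the function name and the 16/32-bit field widths implied by the (48, 32, 0) shifts are illustrative assumptions, not Oracle internals:

```python
def decode_xid(xid):
    """Split a numeric xid$$ into (undo segment number, undo slot,
    undo sequence), assuming the three components are packed by
    shifting them 48, 32 and 0 bits respectively."""
    usn = xid >> 48                 # bits 48 and above
    slot = (xid >> 32) & 0xFFFF     # bits 32..47
    seq = xid & 0xFFFFFFFF          # bits 0..31
    return usn, slot, seq
```

For example, decoding the xid observed later in this post, 1126024460895690, yields the triplet (4, 29, 1482).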

The xid$$ column is used by the 11gR2 on-commit fast refresh engine, which can now easily retrieve the changes made by the just-committed transaction by its xid; by contrast, the on-demand fast refresh engine keeps using snaptime$$ as it did in previous versions. I will speak about this in more detail in an upcoming post.

Commit SCN-based MV logs (the "new" type in 11gR2)

Let's recreate the same MV log, this time adding the commit SCN clause (new in 11gR2):

create materialized view log on test_t1
with sequence, rowid, primary key (x1), COMMIT SCN
including new values;

The columns of the MV log are:

pk1             number(22)
x1              number(22)
m_row$$         varchar2(255)
sequence$$      number(22)
dmltype$$       varchar2(1)
old_new$$       varchar2(1)
change_vector$$ raw(255)
xid$$           number(22)

so, the only difference from the 11gR2 timestamp-based case is that snaptime$$ is no longer a column of the MV log; the only difference from the pre-11gR2 case is that snaptime$$ has been replaced with xid$$.

For this log flavour only, the mapping between the xid that modified the table and its commit-time SCN is now tracked in a new view, all_summap (probably named after "SUMmary MAP", "summary" being yet another synonym for "MV"), which is simply a straight "select *" of the dictionary table sys.snap_xcmt$. To illustrate, the script makes one insert, one update and one delete on the base table, which translate into 4 rows inside the MV log, all with the same xid:

SQL> select distinct xid$$ from mlog$_test_t1;

                XID$$
---------------------
     1126024460895690
after the commit, we get

SQL> select * from all_summap where xid in (select xid$$ from mlog$_test_t1);

                  XID COMMIT_SCN
--------------------- ----------
     1126024460895690    2885433

hence, it is now possible to know the infinite-precision time (the SCN) when every modification became visible to an external observer (the commit SCN) by simply joining the MV log and all_summap (or sys.snap_xcmt$). Note that the commit SCN is not propagated to the MV log at all.

Commit SCN-based MV logs for on-demand fast refresh

This new xid$$ column and the commit-SCN mapping table are leveraged by the fast refresh of on-demand MVs as follows (on-commit ones do not need the SCN since they know exactly the xid of the committed transaction; again, we will see that in an upcoming post).

With "old style" timestamp-based MV logs, the refresh is performed using a "mark-and-propagate" algorithm, which is essentially (check this post for some additional details):
1) new log rows are inserted with snaptime$$ = 4000 A.D.;
2) at refresh time, a snapshot of the new rows is taken, that is, all new rows are marked with snaptime$$ = sysdate;
3) all modifications whose snaptime$$ is between the date of the last refresh (excluded) and sysdate (included) are propagated to the MV;
4) all obsolete log rows are deleted, that is, all rows whose snaptime$$ is less than or equal to the lowest of all refresh times are removed from the log.
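The four steps above can be modelled with a toy in-memory sketch (illustrative only, not Oracle's actual implementation); note how step 2 is an UPDATE of every new log row, which is exactly the undo/redo cost discussed below:

```python
from datetime import datetime

FUTURE = datetime(4000, 1, 1)  # stand-in for the 4000 A.D. placeholder

class TimestampLog:
    """Toy model of a timestamp-based MV log (mark-and-propagate)."""

    def __init__(self):
        self.rows = []  # each row: {"pk": ..., "snaptime": ...}

    def log_dml(self, pk):
        # step 1: new log rows get the 4000 A.D. placeholder
        self.rows.append({"pk": pk, "snaptime": FUTURE})

    def refresh(self, last_refresh, now, all_refresh_times):
        # step 2: snapshot - mark every new row (an UPDATE of the log,
        # generating undo and redo)
        for r in self.rows:
            if r["snaptime"] == FUTURE:
                r["snaptime"] = now
        # step 3: propagate rows whose snaptime$$ is in (last_refresh, now]
        changes = [r["pk"] for r in self.rows
                   if last_refresh < r["snaptime"] <= now]
        # step 4: purge rows already seen by every dependent MV
        oldest = min(all_refresh_times)
        self.rows = [r for r in self.rows if r["snaptime"] > oldest]
        return changes
```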

With "new style" commit SCN-based MV logs, the algorithm is, instead:
1) new log rows are inserted with xid$$ = the transaction id of the modifying transaction;
2) at refresh time, the current SCN is retrieved (no snapshot is performed);
3) all modifications whose xid maps to a row in all_summap whose commit_scn is between the SCN of the last refresh (excluded) and the retrieved current SCN (included) are propagated to the MV;
4) obsolete rows are removed from the log as before, this time using the SCN instead of snaptime$$.
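The same toy modelling applied to the SCN-based variant (again a sketch under the assumptions above; the dict standing in for sys.snap_xcmt$ / all_summap is purely illustrative) makes the difference visible: there is no marking UPDATE, only a join against the xid-to-commit-SCN map:

```python
class ScnLog:
    """Toy model of a commit SCN-based MV log."""

    def __init__(self):
        self.rows = []    # log rows: (pk, xid$$) - no snaptime$$ column
        self.summap = {}  # xid -> commit SCN (sys.snap_xcmt$ analogue)

    def log_dml(self, pk, xid):
        # step 1: each log row carries only the transaction id
        self.rows.append((pk, xid))

    def commit(self, xid, scn):
        # the commit SCN is recorded once per transaction; it is never
        # propagated to the individual log rows
        self.summap[xid] = scn

    def refresh(self, last_scn, current_scn):
        def commit_scn(xid):
            # uncommitted transactions sort after any SCN
            return self.summap.get(xid, float("inf"))
        # steps 2-3: no marking UPDATE; join the log to the map and
        # pick commits in (last_scn, current_scn]
        changes = [pk for pk, xid in self.rows
                   if last_scn < commit_scn(xid) <= current_scn]
        # step 4: purge rows whose commit SCN has been propagated
        self.rows = [(pk, xid) for pk, xid in self.rows
                     if commit_scn(xid) > current_scn]
        return changes
```

Note how rows of a transaction that has not yet committed are neither propagated nor purged, since they have no entry in the map.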

The main advantage is that no snapshot is performed, thus removing the redo and undo generated by the marking update, and obviously the log visit (usually a full table scan) as well - at the cost of an additional join with all_summap (or sys.snap_xcmt$) later; if the join is computed efficiently, that is very likely advantageous "in general" (but as always, it depends on your scenario).

It might be (rarely) beneficial to index xid$$, as it is (rarely) beneficial to index snaptime$$. In that case, having no snapshot performed reduces both the undo and redo generated for the index maintenance.

As a side and "philosophical" note, it is also worth noting that the new logging mechanism records more information - now we know which transactions modified the table and the infinite-precision time (the SCN) of modifications, and this is much more informative about the history of the logged table than the mostly meaningless refresh time contained in snaptime$$. This is definitely a better utilization of storage.

I plan to blog about how the new MV logs impact fast refreshes in 11gR2 in the near future, focusing on join-only MVs; so stay tuned if you're interested.

  1. Rob van Wijk

    Tuesday, November 3, 2009

    Great post.
    Looking forward to the next one!

  2. Blogroll Report 30/10/2009-06/11/2009 « Coskan’s Approach to Oracle

    Tuesday, November 10, 2009

    [...] Alberto Dell’era-11gR2: materialized view logs changes [...]

  3. Alberto Dell’Era’s Oracle blog » 11gR2: new algorithm for fast refresh of on-commit materialized views

    Sunday, November 22, 2009

    [...] noted in the post of mine “11gR2: materialized view logs changes”, in 11gR2 a new column, xid$$, is now part of materialized view logs; this column records the id [...]

  4. suman

    Monday, September 17, 2012

    very nice top.

  5. sam

    Thursday, October 4, 2012

    I am on
    I have requirement to snap two tables, both the table are dealing with almost 40 million DML ( mostly inserts and deletes ).
    my snaps is not able to catch up with this many changes and always lag, once its start lagging, its not able to catch up.

    what you think how to deal with this ? you advice will be greatly appreciated .


  6. Alberto Dell'Era

    Thursday, October 4, 2012

    How many rows are in your master tables, and how many in their mv logs?
    Are your master tables local, or across a db-link?

  7. sam

    Thursday, October 4, 2012

    thanks for reply,

    master table is weekly partitioned, retention is 6 weeks, per partition is having almost 70 million rows, its on remote DB.

    as an avg, in 10 min I can see. around 200K insert/update and 30K delete in mview log.

  8. Alberto Dell'Era

    Thursday, October 4, 2012

    So when you refresh, you get 40M rows in the mv log, and 6×70M rows in the master ?
    Have you tried refreshing more often, or even switching to complete refresh (that can be much quicker then “fast” when you have to propagate many rows) and maybe (if you can live with it) trying with atomic => false ?

  9. sam

    Friday, October 5, 2012

    let me elaborate.
    I am trying to build the snap for migration of very big table with heavy DML activity. my refresh interval is 5 min.

    1. master side table size is 200G.
    2. 5 min DML activities are around 100K insert, 100K update and 30K delete on master.
    3. complete refresh is not an option here, it will take forever to finish it.
    do you have any other faster technique to do complete refresh ?
    because, suppose if first complete refresh takes 5 hours to finish then next fast refresh will have millions of rows and it will never finish.

    so what I am doing here.
    1.I am creating one empty table structure
    2. creating mlog on master
    3. then using dbms_mview.set_i_am_a_refresh(true); to insert into mview.
    4. loading 200G of data taking at least 4 hours.
    5. then using fast refresh to catch changes happened in 4 hours.
    5. but in those 4 hours mlog grows too big that its not catching up.

    my question is
    how to make fast refresh more faster on heavy DML activity table which generates almost 300K DML per 5 min.

  10. Alberto Dell'Era

    Friday, October 5, 2012

    If you are using dbms_mview.set_i_am_a_refresh(true) because this is an updatable MV, I must confess I have no experience with that kind of MV, sorry.

    Anyway, to speed up the refresh, I would try to investigate (by tracing) the statements submitted to refresh – for example, I have done that (for read only snapshots) here:
    for example, one could find that the plan is wrong due to missing statistics on the master or the master MV, or maybe find that some indexes might help, etc

    I would check whether LOCKING the stats on the MV logs makes any difference, as it does for join MVs:

    Maybe using rowid-based MV logs instead of primary key might save index visits during the refresh.

    If supported over a db-link, I would also investigate using SCN-based MV logs – it would save the “mark the rows in the log” phase:

    I would stress that these are only *speculations*; as said above, the best way is to trace, and better yet, profile the refresh and try to investigate and optimize the slower components. At the end, it’s not much different than standard performance tuning, with the complication that you can’t control the SQL statements submitted.

    HTH ;)

  11. sachin

    Friday, October 5, 2012

    thanks Alberto,
    above replay shows that you really care about your blog, I have seen many oracle blogger posts something and if someone ask anything about it, NO REPLY.

    I have created the index mlog but still i wasn’t able to catch up. session on master side hovering around seq. read and after few hours it terminated with snapshot too old, I will get the trace data also.

    I will also check on commit based mlog and update my findings.

    thanks again.

  12. antony

    Tuesday, September 3, 2013

    Hi Alberto,

    COMMIT-SCN based logging can only be used with locally managed MVs.As per Oracle11gR2 SQL reference doc,You cannot create remote materialized views on base tables with commit SCN-based materialized view logs.


  13. Alberto Dell'Era

    Saturday, September 7, 2013

    Hi Anthony,

    thanks for your observation – I had indeed tested only “locally managed” mviews, not “remote” ones (that used to be named “snapshot” in the good old days) …

  14. oraclenewbie

    Wednesday, July 30, 2014

    The commit SCN can be obtained from the MV log’s ORA_ROWSCN pseudocolumn, it should be the same as those obtained from the sys.snap_xcmt$, right?

    Does anyone know how the commit timestamp is inserted into the sys.snap_xcmt$? Through a internal trigger fired after commit? At least there is no commit trigger that the users can define.

  15. Trevor

    Thursday, January 22, 2015

    “now in 11gR2 (”, you mean “now in 11gR2 (”

  16. Alberto Dell'Era

    Thursday, January 22, 2015


    many thanks – fixed :)
