CBO: NewDensity for Frequency Histograms,11g-10.2.0.4 (densities part IV)

As we have seen in the previous posts of this series, 11g introduces a new figure named "NewDensity" as a replacement for the "density" column statistic for columns whose histogram has been collected; this change has also been backported to 10.2.0.4.

In the previous post we discussed how NewDensity influences the CBO cardinality estimate for Height-Balanced histograms; in this one we are going to investigate the same for Frequency Histograms. We will see that the most important change is the introduction of the "half the least popular" rule (see the "Histogram change" post by Jonathan Lewis, which distills the findings of Randolf Geist and Riyaj Shamsudeen): a surprising rule that might easily cause trouble (in fact, as Jonathan reports in the comments, bug 6082745 was opened against this rule).

The test case (script density_post_freq.sql) considers the same test statement we focused on in the previous post (a single equality filter predicate which asks for a value inside the min-max range of the column):

select ...
  from t
 where value = 64.5;

Of course we compute a Frequency instead of a Height-Balanced histogram, and use a slightly different value distribution in order to highlight the new rule:

SQL> select value, count(*)
  2    from t
  3   group by value
  4   order by value;

     VALUE   COUNT(*)
---------- ----------
         8          8
        16         16
        64         64
       128        128

The histogram generated by the test case is (from DBA_HISTOGRAMS):

     VALUE         EP        BKT
---------- ---------- ----------
         8          8          8
        16         24         16
        64         88         64
       128        216        128

VALUE is an abbreviation for ENDPOINT_VALUE, EP for ENDPOINT_NUMBER.
BKT is the number of buckets covered by the value (i.e.: EP minus the previous EP), that is, the number of rows whose column value was equal to VALUE at statistics collection time.
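The BKT arithmetic is easy to check outside the database; here is a minimal sketch (Python, not part of the original script) that rebuilds BKT from the endpoint numbers by taking successive differences:

```python
# Rebuild BKT from the (ENDPOINT_VALUE, ENDPOINT_NUMBER) pairs shown above:
# each value's bucket count is its EP minus the previous EP (0 initially).
hist = [(8, 8), (16, 24), (64, 88), (128, 216)]

prev_ep = 0
bkt = {}
for value, ep in hist:
    bkt[value] = ep - prev_ep
    prev_ep = ep

print(bkt)  # {8: 8, 16: 16, 64: 64, 128: 128}
```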

When the filter predicate selects a value contained in the histogram, the new releases behave the same as the old ones (but check the note about singleton values at the bottom for a minor but interesting detail): neither density nor NewDensity is used, and the cardinality estimate is the usual intuitive one. In the complementary case of a value not contained in the histogram (but still inside the min-max interval), the cardinality used to be calculated as density*num_rows and is now NewDensity*num_rows. Note the symmetry with the Height-Balanced case: the formula is the same, with NewDensity simply replacing density.

NewDensity with the "half the least popular" rule active

By default the rule is active, and in this case, NewDensity is set to

NewDensity = 0.5 * bkt(least_popular_value) / num_rows

and hence, for non-existent values:

E[card] = (0.5 * bkt(least_popular_value) / num_rows) * num_rows
        =  0.5 * bkt(least_popular_value)

For our test case, the least_popular_value is 8 and bkt(8) = 8, hence E[card] = 0.5 * 8 = 4, thanks to NewDensity being equal to 0.5 * 8 / 216 = 0.018518519. In fact, we can verify in the 10053 traces (in 10.2.0.4, 11.1.0.7, 11.2.0.1) for our statement, which asks for a non-existent value (64.5), that E[card] and NewDensity are set as above:

    NewDensity:0.018519, OldDensity:0.002315 BktCnt:216, PopBktCnt:216, PopValCnt:4, NDV:4
  Using density: 0.018519 of col #1 as selectivity of unpopular value pred
  Table: T  Alias: NOT_EXISTENT
    Card: Original: 216.000000  Rounded: 4  Computed: 4.00  Non Adjusted: 4.00
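The arithmetic behind these trace figures can be replayed outside Oracle; a small sketch (mine, not Oracle code) of the rule with the fix active:

```python
# With fix 5483301 active: NewDensity = 0.5 * bkt(least popular) / num_rows,
# and for a non-existent in-range value E[card] = NewDensity * num_rows,
# i.e. the num_rows factors cancel and E[card] = 0.5 * bkt(least popular).
bkt = {8: 8, 16: 16, 64: 64, 128: 128}  # rows per value, from the histogram
num_rows = sum(bkt.values())            # 216 (no sampling: BktCnt == num_rows)

least_popular_bkt = min(bkt.values())   # 8
new_density = 0.5 * least_popular_bkt / num_rows
e_card = 0.5 * least_popular_bkt        # = NewDensity * num_rows

print(round(new_density, 6), e_card)    # 0.018519 4.0
```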

As another check, let's see what happens if bkt(least_popular_value) = 1, that is, if there is (at least) one value that occurred exactly once (a singleton value) at statistics collection time. Adding such a row to our test case is trivial (just uncomment the first insert in the script); in this scenario, our formula above predicts E[card] = 0.5 with NewDensity = 0.5 / 217 = .002304147, and in fact (check the *_least_is_one.trc traces):

    NewDensity:0.002304, OldDensity:0.002304 BktCnt:217, PopBktCnt:216, PopValCnt:4, NDV:5
  Using density: 0.002304 of col #1 as selectivity of unpopular value pred
  Table: T  Alias: NOT_EXISTENT
    Card: Original: 217.000000  Rounded: 1  Computed: 0.50  Non Adjusted: 0.50

Note that E[card] gets rounded up from 0.5 to 1 (as usual).
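The singleton scenario can be sketched the same way (the made-up value 200 below stands in for whatever row the uncommented insert adds; the formula does not depend on the actual value):

```python
import math

# Singleton variant: one extra row with a brand-new value makes
# bkt(least popular) = 1, so the rule predicts E[card] = 0.5,
# which the CBO rounds up to 1 (as in the trace above).
bkt = {8: 8, 16: 16, 64: 64, 128: 128, 200: 1}  # 200 is a made-up singleton
num_rows = sum(bkt.values())                     # 217

new_density = 0.5 * min(bkt.values()) / num_rows
computed = 0.5 * min(bkt.values())               # = NewDensity * num_rows
rounded = math.ceil(computed)                    # rounded up, minimum 1

print(round(new_density, 6), computed, rounded)  # 0.002304 0.5 1
```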

What is the rationale behind this rule? Thanks to Randolf Geist (see his comment in Jonathan's blog entry above), we know that it was introduced as a patch to solve one particular scenario (see bug 5483301) and was then, for some reason, included in the main release. Luckily, the rule can be disabled and the old, sane behaviour restored.

NewDensity with the "half the least popular" rule disabled

To disable the new rule, just switch off fix 5483301:

alter session set "_fix_control"='5483301:off';

(or alter system if you want to make the change permanent)

with this setting, NewDensity becomes simply

NewDensity = 0.5 / num_rows

and hence, again for non-existent values:

E[card] = 0.5

which is exactly what we got in pre-10.2.0.4, where density was used (and density was, and is still, set to 0.5 / num_rows by dbms_stats). So the cardinality estimate is 0.5 (rounded up to 1).

For our test case, we predict NewDensity = 0.5 / 216 = 0.002314815. In fact our 10053 traces tell us:

  NewDensity:0.002315, OldDensity:0.002315 BktCnt:216, PopBktCnt:216, PopValCnt:4, NDV:4
  Table: T  Alias: NOT_EXISTENT_OFF
    Card: Original: 216.000000  Rounded: 1  Computed: 0.50  Non Adjusted: 0.50
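The disabled-rule arithmetic can be checked with the same toy numbers (again a sketch of the formula, not Oracle internals):

```python
# With fix 5483301 off: NewDensity falls back to 0.5 / num_rows, so
# E[card] = NewDensity * num_rows = 0.5 for any non-existent in-range
# value, exactly as with the pre-10.2.0.4 density.
num_rows = 216
new_density = 0.5 / num_rows
e_card = 0.5                          # rounded up to 1 by the CBO

print(round(new_density, 6), e_card)  # 0.002315 0.5
```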

The rationale for this behaviour is sound: the CBO knows that no row with the requested value existed at statistics collection time, hence it returns the minimal cardinality estimate compatible with the non-empty result set assumption (check this post for the importance of this assumption). If the statistics are reasonably fresh, this is the only sane estimate that can be made.

Playing with density - a warning

If you set your own column stats using dbms_stats.set_column_stats, the behaviour is different; I haven't made any extensive investigations, but as far as I can tell, the value you provide for density is used instead of NewDensity. User-provided column statistics are flagged with dba_tab_cols.user_stats = 'YES'. You can disguise your user statistics as non-user by setting the flags parameter of dbms_stats.set_column_stats to 2 - but since that parameter is labeled "for Oracle internal use only", I would do it for investigation purposes only - that is, never in production.

---
Bottom note about singleton values: actually, in pre-10.2.0.4 versions, if the value was present in the Frequency histogram but covered a single bucket (hence it was present in the table exactly once at statistics collection time), it used to be classified as "unpopular" and hence got the same treatment as a value not in the histogram - the end result being that the cardinality was estimated as 0.5, rounded up to 1; now it is 1 before rounding, as one would intuitively expect. I hope to be able to investigate whether this change fixes the issues about join cardinality estimation I investigated - see "the mystery of halving" in this investigation of mine if interested.

Other posts belonging to this series:
densities part I
densities part II
densities part III


  2. Randolf Geist

    Thursday, December 3, 2009

    Alberto,

    regarding your comment "Playing with density": in a recent discussion on Oracle-l I suggested referring to your excellent blog series (and trying to set the density manually to get the "old" density behaviour back). Yesterday, when we met at the UKOUG, Wolfgang Breitling mentioned that manually setting the density actually removes the histogram, and that generating a histogram manually will still use the NewDensity, not the density provided as part of the user-generated statistics.

    I haven’t had time yet to test this personally, but now that Wolfgang has mentioned it I’m quite sure I remember that he’s right regarding setting the density manually – it removes the complete histogram immediately.

    Can you comment on what you meant by above: “If you set your own column stats using dbms_stats.set_column_stats…”?

    Have you tried to modify the density only or generating your own histogram?

    Randolf

  3. Alberto Dell'Era

    Thursday, December 3, 2009

    Hi Randolf,

    thank you very much for your positive feedback, I really appreciate it – as you "may" know, it takes a lot of time to write about these kinds of things while trying to be clear and exhaustive …

    I use the following tiny SP to set the density while preserving the histogram:

    -- a utility procedure to change density, preserving the histogram
    create or replace procedure set_density (
      p_table_name  varchar2,
      p_column_name varchar2,
      p_new_density number,
      p_flags       number default null
    )
    is
      l_distcnt     number;
      l_old_density number;
      l_nullcnt     number;
      l_srec        dbms_stats.statrec;
      l_avgclen     number;
      l_new_density number;
    begin
      -- get the current column statistics
      dbms_stats.get_column_stats (
        ownname => user,
        tabname => p_table_name,
        colname => p_column_name,
        distcnt => l_distcnt,
        density => l_old_density,
        nullcnt => l_nullcnt,
        srec    => l_srec,
        avgclen => l_avgclen
      );
    
      -- reset them, overwriting "density"
      if p_new_density is not null then
        l_new_density := p_new_density;
      else
        l_new_density := l_old_density;
      end if;
    
      dbms_stats.set_column_stats (
        ownname => user,
        tabname => p_table_name,
        colname => p_column_name,
        distcnt => l_distcnt,
        density => l_new_density,
        nullcnt => l_nullcnt,
        srec    => l_srec,
        avgclen => l_avgclen,
        no_invalidate => false,
        flags         => p_flags
      );
    
      dbms_output.put_line ('density of '||p_table_name||'.'||p_column_name||' changed from '||l_old_density||' to '|| l_new_density);
    
    end set_density;
    /
    show errors;
    

    I’ve tested it right now again, albeit on 11.1.0.7 only, and it actually preserves the histogram. It is a script I’ve used for some years now.

    Notice how it works: it fetches the histogram from the data dictionary, and writes it back verbatim with a new density, possibly changing the "flags" as well …

  4. Randolf Geist

    Thursday, December 3, 2009

    Alberto,

    thanks for the clarification.

    So you mean to say: Using your SP you’ve manually modified the density while preserving the histogram – and the CBO uses then the manually modified density from the dictionary for calculation but not the NewDensity any longer?

    I thought that Wolfgang meant to say that he has generated a histogram manually along with a density, but the CBO sill applied the NewDensity algorithm to his made-up histogram.

    At least two things come into my mind that might matter:
    - I don’t know the version he has used to test
    - He seems to have setup a made-up histogram rather than preserving an existing one, however I don’t see where this exactly should make any difference

    Anyway, thanks for the feedback. If I find some time I'll test it myself, or ask Wolfgang for clarification about what he did and what version he used to test.

    Randolf

  5. Alberto Dell'Era

    Thursday, December 3, 2009

    Randolf,

    >So you mean to say: Using your SP you’ve manually modified the density
    >while preserving the histogram – and the CBO uses then the manually
    >modified density from the dictionary for calculation but not the NewDensity any longer?

    Yes if you use the default and only supported value (null) for the “flags” parameter of dbms_stats.set_column_stats – as far as I understand, in this case the statistics get labeled as “user generated stats” (dba_tab_cols.user_stats = ‘YES’). If you set “flags” to 2, you are pretending to be dbms_stats or a system routine (dba_tab_cols.user_stats = ‘NO’) and hence NewDensity gets used.

    It makes sense – if the user has provided her own value for “density”, use it instead of NewDensity.

    NB: I have exchanged many, many emails with Wolfgang in the past and I've noticed that he commonly uses flags=2 in his scripts … that should explain everything nicely :)

  6. Xiang Rao

    Wednesday, August 22, 2012

    Hi Alberto,
    I am wondering if the num_rows inside the "half the least popular" formula NewDensity = 0.5 * bkt(least_popular_value)/num_rows should be the last endpoint_number? In your case, both the row count and the buckets are the same: 216.

    I have a table with 99,826,738 rows; the column concerned has 60 distinct values ranging between 3 and 451, with a frequency histogram of 17 buckets, and density 5.17e-9. The last endpoint_number is 5,327 and the least popular value, 303, has only 1 bucket. For value 4 (not any of the endpoint_values, but between the first 2 values), Oracle gives a cardinality estimate of 9,370 and NewDensity of 0.000094, which is 0.5*bkt(303)/last_endpoint_number = 0.5*1/5,327. I was looking for the theory behind the NewDensity value 0.000094, and Google brought me to your article again.

    Thanks,

    Xiang

  7. Alberto Dell'Era

    Saturday, August 25, 2012

    Hi Xiang,

    yes, you’re right – in all my examples I have always used no sampling when gathering statistics (to keep the discussion simple and the examples deterministic), and with no sampling the last endpoint_number is always exactly the same as num_rows. But it is the last endpoint_number that is to be used in the formula, as you kindly point out – which makes a difference for you, since you are certainly sampling.

    Just to be complete, here’s a demo that shows the effect of sampling on endpoint_number for a table with num_rows=100000:

    SQL> create table tt as select 0 as x  from dual connect by level <= 100000;
    SQL> exec dbms_stats.gather_table_stats(user,'tt',method_opt=>'for all columns size 254', estimate_percent=>0.000001);
    SQL> select max(endpoint_number) from user_histograms where table_name='TT' and column_name='X';
    
    MAX(ENDPOINT_NUMBER)
    --------------------
                    2710
    SQL> exec dbms_stats.gather_table_stats(user,'tt',method_opt=>'for all columns size 254', estimate_percent=>null);
    SQL> select max(endpoint_number) from user_histograms where table_name='TT' and column_name='X';
    
    MAX(ENDPOINT_NUMBER)
    --------------------
                  100000
    

    Thanks for commenting!
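[Editor's sketch: Xiang's figures can be replayed with the corrected formula - last endpoint_number in the denominator of NewDensity, num_rows as the cardinality multiplier - using the numbers reported in his comment:]

```python
# Replaying the sampled-statistics case from the comment above: the
# denominator of NewDensity is the last ENDPOINT_NUMBER (not num_rows),
# while the cardinality multiplier remains num_rows.
num_rows = 99_826_738
last_ep = 5_327            # max(ENDPOINT_NUMBER) under sampling
least_popular_bkt = 1      # value 303 covers a single bucket

new_density = 0.5 * least_popular_bkt / last_ep
e_card = round(new_density * num_rows)

print(round(new_density, 6), e_card)  # 0.000094 9370
```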

  9. Sayan Malakshinov

    Tuesday, December 3, 2013

    Alberto,

    could you tell me please: is my view dba_newdensity written correctly?

    create or replace view dba_newdensity as
    with
     histgrm1 as (
       select--+ inline merge
          owner                 as owner
         ,table_name            as table_name
         ,column_name           as column_name
         ,endpoint_number       as ep_num
         ,endpoint_value        as ep_val
         ,endpoint_actual_value as ep_act_val
         ,lag(endpoint_number) over(partition by owner,table_name,column_name order by endpoint_number) as ep_num_prev
       from
          dba_histograms h1
    )
    ,histgrm2 as (
       select--+ inline
          owner                       as owner
         ,table_name                  as table_name
         ,column_name                 as column_name
         ,ep_num                      as ep_num
         ,ep_val                      as ep_val
         ,ep_act_val                  as ep_act_val
         ,ep_num - nvl(ep_num_prev,0) as bkt
         ,decode (ep_num - nvl(ep_num_prev,0)
                   , 0, 0
                   , 1, 0
                   , 1
                 ) as popularity
       from
          histgrm1 h
    )
    ,hist_agg as (
       select--+ inline
           owner
          ,table_name
          ,column_name
          ,max(ep_num) as BktCnt -- should be equal to sum(bkt)
          ,sum(decode(popularity, 1, bkt,0))  as PopBktCnt
          ,sum(decode(popularity, 1, 1  ,0))  as PopValCnt
    --      ,min(bkt) keep(dense_rank first order by decode(popularity,1,ep_num)) as bkt_least_popular_value
    --      ,min(decode(popularity,1,bkt)) keep(dense_rank first order by decode(popularity,1,bkt) nulls last) as bkt_least_popular_value
          ,min(decode(popularity,1,bkt)) as bkt_least_popular_value
       from histgrm2
       group by owner,table_name,column_name
    )
    select
        st.owner
       ,st.table_name
       ,st.column_name
       ,st.histogram
       ,h.BktCnt
       ,h.PopBktCnt
       ,h.PopValCnt
       ,st.num_distinct as NDV
       ,h.bkt_least_popular_value
       ,st.density      as old_Density
       ,case st.histogram
          when 'FREQUENCY'
               then  -- 0.5 * bkt_least_popular_value / t.num_rows
                     0.5 * bkt_least_popular_value / BktCnt
          when 'HEIGHT BALANCED'
               then   ( 1 - PopBktCnt / BktCnt ) / (st.num_distinct - PopValCnt)
        end as newdensity
    from
         dba_tab_col_statistics st
        ,hist_agg   h
        ,dba_tables t
    where
          st.owner       = h.owner
      and st.table_name  = h.table_name
      and st.column_name = h.column_name
      and st.owner       = t.owner
      and st.table_name  = t.table_name;
  10. Sayan Malakshinov

    Tuesday, December 3, 2013

    the same without the unnecessary parts:

    create or replace view dba_newdensity as
    with
     histgrm1 as (
       select--+ inline merge
          owner                 as owner
         ,table_name            as table_name
         ,column_name           as column_name
         ,endpoint_number       as ep_num
         ,endpoint_value        as ep_val
         ,endpoint_actual_value as ep_act_val
         ,lag(endpoint_number) over(partition by owner,table_name,column_name order by endpoint_number) as ep_num_prev
       from
          dba_histograms h1
    )
    ,histgrm2 as (
       select--+ inline
          owner                       as owner
         ,table_name                  as table_name
         ,column_name                 as column_name
         ,ep_num                      as ep_num
         ,ep_val                      as ep_val
         ,ep_act_val                  as ep_act_val
         ,ep_num - nvl(ep_num_prev,0) as bkt
         ,decode (ep_num - nvl(ep_num_prev,0)
                   , 0, 0
                   , 1, 0
                   , 1
                 ) as popularity
       from
          histgrm1 h
    )
    ,hist_agg as (
       select--+ inline
           owner
          ,table_name
          ,column_name
          ,max(ep_num) as BktCnt -- should be equal to sum(bkt)
          ,sum(decode(popularity, 1, bkt,0))  as PopBktCnt
          ,sum(decode(popularity, 1, 1  ,0))  as PopValCnt
          ,min(decode(popularity,1,bkt)) as bkt_least_popular_value
       from histgrm2
       group by owner,table_name,column_name
    )
    select
        st.owner
       ,st.table_name
       ,st.column_name
       ,st.histogram
       ,h.BktCnt
       ,h.PopBktCnt
       ,h.PopValCnt
       ,st.num_distinct as NDV
       ,h.bkt_least_popular_value
       ,st.density      as old_Density
       ,case st.histogram
          when 'FREQUENCY'
               then  0.5 * bkt_least_popular_value / BktCnt
          when 'HEIGHT BALANCED'
               then   ( 1 - PopBktCnt / BktCnt ) / (st.num_distinct - PopValCnt)
        end as newdensity
    from
         dba_tab_col_statistics st
        ,hist_agg   h
    where
          st.owner       = h.owner
      and st.table_name  = h.table_name
      and st.column_name = h.column_name;
    create public synonym dba_newdensity for dba_newdensity;
    grant select on dba_newdensity to public;
  11. Alberto Dell'Era

    Saturday, January 25, 2014

    Sayan,

    sorry for the long delay, my notification module has gone berserk again ;(

    There is a NewDensity calculation function inside the test case - have you checked that? As far as I know my function is correct; if that's not the case, I would like to know the reason
