Designs That Balance ETL and Report Performance

It seems to me that there is sometimes a very difficult balance to strike between optimizing the structure of a database for fast and convenient ETL loading and optimizing it for fast and convenient report querying.

Normally the choice of how to partition a fact table is pretty straightforward, because the data being loaded arrives in an order compatible with querying — ie. all of Tuesday’s transactions are available to the data warehouse ETL process early on Wednesday morning, and during the day the users will be running reports with such filters as “Give me Tuesday’s data”. Thus you can partition on a daily basis and accommodate both a daily ETL process of loading via partition exchange, and a daily reporting process for which the optimizer can invoke partition pruning.

So far so good — if the unit in which the data becomes available is also a unit by which the end users want to report, then the decision on partitioning is simple.
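
To make that concrete, here is a minimal sketch of the simple case: a daily-partitioned fact table loaded by partition exchange and queried with partition pruning. The table, staging table and column names are all invented for illustration.

create table sales_fact
( transaction_date date not null,
  amount           number )
partition by range (transaction_date)
( partition P20050307 values less than (to_date('20050308','YYYYMMDD')),
  partition P20050308 values less than (to_date('20050309','YYYYMMDD')) )
/

-- Tuesday's feed arrives in a non-partitioned staging table of the same
-- shape (sales_stage), and is swapped into the empty daily partition:
-- a data dictionary operation rather than a physical move of the rows.
alter table sales_fact
  exchange partition P20050308
  with table sales_stage
  including indexes
  without validation
/

-- A report filtered on the partition key can be pruned to one partition.
select sum(amount)
from   sales_fact
where  transaction_date = to_date('20050308','YYYYMMDD')
/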

Now consider a case where all the data relating to a particular time period is not available immediately. An example here might help. Suppose you run an auto parts store, and as part of your business you sell replacements for broken-but-repairable items, such as a starter motor. When Joe Lunchpail comes into the store on Saturday afternoon to buy a new starter motor you tell him, “That part costs $100, but if you can bring in the old broken part then we’ll give you a $20 credit for it” — ie. there is a core charge of $20 on starter motors. Being a trusting organization you give him that $20 credit on the spot, as long as he uses a credit card for the purchase — if he doesn’t bring in the part within a week then you’ll charge him that extra $20.

By the following Saturday evening Joe hasn’t come back, so you charge him that $20. On Sunday morning he comes in, hands over the part, and the system refunds him that $20. Three transactions in all, each one on a different date, but all of them relating to a customer interaction that occurred on the first Saturday, on which the starter motor was bought, and all of them linked back to it. In fact on the majority of the financial reports users are interested in “Original Transaction Date” (the first Saturday), not “Actual Transaction Date” (the three different dates on which the transactions really took place).

So imagine this being loaded into the company’s data warehouse. For query reporting purposes it would be desirable to partition by “Original Transaction Date”, but for ETL loading purposes it would be desirable to partition by “Actual Transaction Date”. Which is the more appropriate method to choose?

Well, here are some pros and cons.

Partition by Original Transaction Date

  • User queries run fast due to partition pruning, and partition-level statistics give the optimizer better histogram information*.
  • ETL process cannot use a simple exchange of a populated table with an empty partition; a more complex method is required.

Partition by Actual Transaction Date

  • ETL process uses a simple partition exchange of a populated table with an empty partition.
  • User query performance is compromised by the lack of partition pruning and the use of global statistics.

* Just a quick note on this: For a partitioned table Oracle can store multiple sets of statistics. One set at the global (ie. all data in table) level, then one set for each partition, and one set for each subpartition. If a query accesses multiple partitions then global statistics are used by the optimizer in lieu of partition-level statistics, and this can be a real problem where histograms vary between different partitions. For example the partitions holding retail data for the period immediately prior to Father’s Day will probably have very different histograms from those immediately prior to Mother’s Day, and will also have a different variation in the number of transactions per day. (I’m basing this on my own ability to leave shopping to the last minute in comparison to my wife’s).
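
For what it’s worth, a minimal sketch of gathering both global and partition-level statistics (with histograms) through DBMS_STATS might look like this; the table name is invented:

begin
  dbms_stats.gather_table_stats(
    ownname     => user,
    tabname     => 'SALES_FACT',
    granularity => 'ALL',                        -- global, partition and subpartition statistics
    method_opt  => 'FOR ALL COLUMNS SIZE AUTO',  -- allow histograms where they help
    cascade     => true);                        -- gather index statistics too
end;
/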

Moving along …

There is a very clear implication in all this — if your partitioning scheme is not compatible with the report queries then you are probably wasting your time collecting partition-level statistics (unless you then use those to build global statistics, such as “Number of Rows”). And that probably means that you are seriously compromising the ability of the optimizer to define efficient execution plans.

You see where I’m leading with this … the most important element of the system is report performance, and the partitioning scheme ought to be complementary to that rather than to the ETL process. Hey, nobody said that ETL is easy — conventionally it is supposed to represent 70-80% of the effort of a data warehousing project, I believe, which sounds about right.

So, what are the alternatives to partition exchanging as a load mechanism for fact tables? Bearing in mind that the fact table may be using compression, and is almost certainly using bitmap indexes, the most feasible approach seems to be a direct path insert of data into the fact table itself. Now if your business model guaranteed that you would never receive records that are more than a few days old, then it might be feasible to either perform multiple partition exchanges to refresh old partitions, or to use a hybrid solution in which a partition exchange is performed for the newest day of data (presumably the bulk of the new feed) and direct path inserts for the remaining few older records.
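
A minimal sketch of that direct path alternative, with invented table and column names, might be no more than this:

-- The append hint requests a direct path insert; bitmap index maintenance
-- is then deferred to the end of the load rather than done row by row.
insert /*+ append */ into sales_fact
select original_transaction_date,
       actual_transaction_date,
       amount
from   sales_stage
/

commit;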

However I have a feeling that being liberated from partition exchanges may sometimes be beneficial. What it implies to me is that you are then much more able to define a more complex partitioning scheme to further complement the report queries, because the ETL process is not made any more complex by increased complexity of partitioning. For example, where previously you might have used a partitioning scheme based on Range(“Actual Transaction Date”) for ETL compatibility, you can now potentially improve report performance by using a composite range-list partitioning scheme such as Range(“Actual Transaction Date”,”Retail or Wholesale”)-List(“Region Code”). As long as the internal workings of Oracle don’t hiccup on some rarely used partitioning scheme (ie. ORA-00600, admittedly a big factor) the details are immaterial to the ETL process.
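
By way of illustration only, and simplified to a single range column, such a composite range-list definition might look something like the following; the names and region codes are invented:

create table sales_fact
( original_transaction_date date        not null,
  region_code               varchar2(3) not null,
  amount                    number )
partition by range (original_transaction_date)
subpartition by list (region_code)
subpartition template
( subpartition SP_NORTH values ('N'),
  subpartition SP_SOUTH values ('S'),
  subpartition SP_OTHER values (default) )
( partition P200502 values less than (to_date('20050301','YYYYMMDD')),
  partition P200503 values less than (to_date('20050401','YYYYMMDD')) )
/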

Now another side-effect may be in the application of Materialized Views (MV’s) – in particular, refreshing them. I’ll be honest here — I’ve thrown my heart and soul into MV’s, but I’ve nearly always received a cruel rejection for my efforts. Getting the correct astrological alignment that will permit me to use fast refresh (on commit or on demand), partition change tracking, query rewrite, and partition exchanges in a production environment has so far eluded me, but maybe by eliminating partition exchanges (which were the source of my last MV-related debacle) I will finally find happiness. Who knows?

30 thoughts on “Designs That Balance ETL and Report Performance”

  1. It’s not too bad if ‘late’ arriving data is only applied as an insert – we have that on one of our systems. We do a multi-table insert to split on-time, late and rejected data, and apply the on-time data via PEL and the late data as an append insert. It’s only a problem if your business rule says ‘update the existing record’. We’ve done this too and it’s messy: basically select the whole partition out to a table and exchange it back in.
    Refresh of partition-based MVs can be a problem; we found that we needed to fast refresh the view after loading new dimensional data, consider a refresh after partition maintenance (add, drop, truncate), and fast refresh again after the fact load. Otherwise it becomes a mess!

  2. On the matter of the MV’s, I’ve had such *ahem* issues with implementing both aggregation and joins in MV’s that I’m now trying to stick to either one or the other only. That primarily means providing MV’s that are just straight aggregations of fact tables.

    What I’m about to experiment with is pushing some of the more commonly used higher dimensional values into a fact table, where the higher-level value is deterministically derived from the lowest level. For example we have a DATE_OF_DAY column in the fact table, but most users query by FISCAL_MONTH_CD, which can just be defined as a function of DATE_OF_DAY. So instead of partitioning on DATE_OF_DAY I’m pushing FISCAL_MONTH_CD into the fact table as well and partitioning on that. This approach gives us straightforward partition elimination in queries and drops a join out of nearly every report, as in the sketch below.
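
    Something like the following is what I have in mind (names invented, fiscal month held as a character code):

    create table sales_fact
    ( date_of_day     date        not null,
      fiscal_month_cd varchar2(6) not null,   -- derived from date_of_day at load time
      amount          number )
    partition by range (fiscal_month_cd)
    ( partition P200501 values less than ('200502'),
      partition P200502 values less than ('200503'),
      partition P200503 values less than ('200504') )
    /

    -- A typical report predicate then prunes on fiscal_month_cd without
    -- joining to the day-level calendar dimension.
    select sum(amount)
    from   sales_fact
    where  fiscal_month_cd = '200502'
    /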

  3. The downside to partitioning by fiscal month is that you get around 30X more data per partition than partitioning by day. Do you then need to sub-partition to keep the objects ‘manageable’?

    If you stick with MVs, watch out for failures with query rewrite, as you may lose that ‘one to one’ ‘dimension to fact’ relationship.

  4. Yes, we’ll be using a composite range-list setup … regardless of that though, it’d be a tough sell to get users to specify day-based ranges instead of fiscal months, and we also get more appropriate histograms with the monthly partitioning.

    I think we’ll be OK on the query rewrite because the higher level that we push into the fact table will have its own table (ie. it’ll be a partial snowflake, with a “shortcut join” defined in Business Objects to join directly from the higher level dim table to the fact table). That higher level table will therefore join directly to both the fact table and the MV. Is that the sort of thing you meant, Pete?

  5. Yep!
    When we put in nested MVs (for build speed) under Oracle 9.2 we had to do a partial snowflake (starflake?) as the higher-level MVs were not built off the base level. E.g. we needed to join to Month and there were 30 rows in the day table…
    Under 10g I could cheat – and use query equivalence to make the whole structure look flat.

  6. On transaction dates.

    Once upon a time, in a land far away (well, OK, it was Brighton in East Sussex), I was an accountant. Not a good one, and not even a fully qualified one, but nevertheless: what you call the original transaction date is in fact the true transaction date; the other dates are payment dates and things of that sort.

  7. Hey, for some of us East Sussex is a land far away!

    Even in the environment that I’m basing the example on, some people are interested in the true transaction date instead of the original transaction date — I’m guessing that the former are more interested in cash and the latter are more interested in trial balances, but I only know what accountancy I pick up as part of my client interaction so I may be off the mark there.

    The partitioning is just based on the frequency with which the various reports are executed, and the overwhelming majority are based on original transaction date.

    I should add that the client in this case is not an auto parts store, but is one of those organizations to which the normal business standards do not generally apply – ie. they can pretty much make up their own rules!

  8. I wrongly believed that partition exchange cannot be used with partitions that have data until I attended a presentation by Vincent Chazhoor at the last ODTUG conference. I thought I could use partition exchange only if I partitioned or sub-partitioned the fact table by day. But Vincent’s presentation changed my view. Partition exchange can be done with a partition that has data – the partition need not be empty! I have a table that is partitioned monthly and our feeds are daily. Now I use partition exchange to load the data – I modified the ETL programs for one table after coming back from the ODTUG conference. This is how we do it. Before loading we copy the data from the current month’s partition to the staging area using create table … select * from … where …
    In step II, we load the data to the staging table, and in step III we create indexes, statistics etc. As the last step we do the partition exchange between the table in the staging area and the ‘current partition’ in the warehouse table. The warehouse table gets the data that was in the staging table and the staging table gets the data that was in the ‘current partition’ of the warehouse table. It works fine for us; a rough outline of the cycle, with the table names changed, is sketched below.
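
    Something along these lines:

    -- Step I: copy the current month's rows out of the warehouse partition.
    create table stage_month as
    select * from warehouse_fact partition (P200503)
    /

    -- Step II: add the new day's feed to the staging copy.
    insert /*+ append */ into stage_month
    select * from daily_feed
    /
    commit;

    -- Step III: create the indexes and gather statistics on stage_month here.

    -- Step IV: swap the refreshed copy back in for the current month's partition.
    alter table warehouse_fact
      exchange partition P200503
      with table stage_month
      including indexes
      without validation
    /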

  9. Don,

    That’s one of the options we considered, and it’s similar to our method with weekly loading into monthly partitions.

    What primarily turned us against it was that we would be moving around up to 30x the amount of data that we really needed to, and that this 30x would occur at the end of the month, when users were most keen to get the data as quickly as possible. I felt that the direct load method would give us the most even load performance throughout the month, although the materialized views will tend to be updated more by update than by insert at the end of the month, which I’d rate as a disadvantage.

    But there are no rights or wrongs about this — everyone just has to be aware of the ups and downs of each method and choose what suits their environment best.

  10. Thanks David. I agree that each situation is different and there is no magic solution that works everywhere. I agree that copying 30 days of data to the temporary table could take a while. I haven’t tried it, but what about re-using the previous day’s temp table? At the end of the partition exchange the temp table will be one day behind the actual data warehouse table. We need to load just one day’s data to bring it up to date with the data warehouse table. This can be done a lot earlier than the current day’s data file arrives, without impacting the users. In that case I think it will be just loading 2X data every day. What do you think?

  11. Yes, that’s true — the table left over from the most recent exchange would indeed be only two load cycles behind. That would be an improvement upon a copy of the complete month. The efficiency would be affected by your ability to identify the prior load as well as the new data, and something might be done to cache that data outside of the fact table so as to avoid either an index-based scan of the fact table partition or a complete partition scan to identify it.

    However if your partitioning scheme is compatible with your load cycle, and it sounds like yours probably is since you don’t mention having to deal with multiple monthly partitions per daily load cycle, have you considered a non-regular partitioning scheme? If you loaded your data into daily partitions and then, as the month progressed, used a PARTITION MERGE operation to merge them into a single month-to-date partition, you would have the advantage of easy daily loads without the disadvantage of having so many partitions that non-pruning queries would suffer from having to touch too many local indexes.

    So your partitions might be as follows …

    Y2004_M01
    Y2004_M02
    Y2004_M03
    Y2004_M04

  12. In my current situation I don’t need to deal with multiple monthly partitions during the daily load. I need to deal with the last monthly partition only. But we have a few other fact tables where we have to deal with multi-partition inserts. We don’t use partition exchange there. The only way I can think of using partition exchange in such cases would be copying the whole partition, inserting the new records and doing a partition exchange. Building all this logic into the ETL could be very complex. I have the same comment about the partition merging approach you explained. There may be some special cases where this approach is useful. Generally, I would probably stay away from that, considering the complexity of creating partitions, loading data and merging partitions in ETL code.

  13. David,

    I am very interested in the PARTITION MERGE approach you explained. If we are going by this approach, what will be the partition key for the table? Is it possible to have the current month partitioned by day and all other partitions partitioned by month in the same table? Could you please publish some sample scripts for this?

  14. Don, I’d venture to state that the difficulty of managing partitions like that is related to the tool that you’re using. Certainly with Informatica it can get tricky, as you have to start calling Oracle procedures from Informatica, and once you go down that route you need to start expending dramatically on the Oracle competence of the Informatica developers — not that that’s a bad thing of course, but it’s “a thing” nevertheless.

    With a pure PL/SQL ETL process I wouldn’t think it to be an intractable problem to manage the partitions, though.
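
    For example, a load procedure can issue the partition maintenance as dynamic SQL; a minimal sketch with invented names:

    begin
      execute immediate
        'alter table sales_fact exchange partition P20050305 ' ||
        'with table sales_stage including indexes without validation';
    end;
    /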

  15. Janu,

    Assuming that you are loading data on a daily basis and partitioning the fact table by month, and that you are about to load data for 05-Mar-2005, you’d have a table defined with partitions:


    partition P200501 values less than (to_date('20050201','YYYYMMDD')),
    partition P200502 values less than (to_date('20050301','YYYYMMDD')),
    partition P200503 values less than (to_date('20050305','YYYYMMDD')),
    partition P20050305 values less than (to_date('20050306','YYYYMMDD')),
    partition P20050306 values less than (to_date('20050307','YYYYMMDD')),
    partition P20050307 values less than (to_date('20050308','YYYYMMDD')),

    … in which partition P200503 only holds part of a month’s data.

    Then you exchange your new data with partition P20050305, and merge P200503 with P20050305. Effectively this makes partition P20050305 “disappear” and makes the less-than value for P200503 increment by one day. This merge needn’t be carried out immediately, by the way, nor need you create daily partitions much in advance.

    Now I have a nagging thought that just occurred to me — there may be something about this scheme that gives poor optimization, in particular with respect to partition pruning … I’ll have a think about that.

  16. David,

    How can somebody do a range partition on one table based on two values – basically that is what I am trying to understand. What will be the definition of the month-to-date partition in that case? Don’t you think you will need to update the definition of the month-to-date partition every day? From your example you have a month-to-date partition and daily partitions for day 24, day 25 etc. If you are loading day 24’s data, which partition will it end up in? It will end up in the daily partition if you have defined your monthly partition as less than the 24th. Now, after loading to the daily partition, how do you do the partition merging without modifying the definition of the month-to-date partition?

  17. If the month-to-date has values less than 24th and you have a daily partition following that with values less than 25th, then data for the 24th gets exchanged with the daily partition.

    When you merge the month-to-date and the following daily partition it does indeed change the values-less-than clause for that month-to-date partition from the 24th to the 25th — the point is that through the mechanism of the merge operation the month-to-date partition subsumes the values of the daily partition and also inherits its values-less-than clause.

  18. David,

    I know we can merge two partitions into a third partition like
    ALTER TABLE q1_sales_by_region
    MERGE PARTITIONS q1_northcentral, q1_southcentral
    INTO PARTITION q1_central
    But it seems like what you are proposing is merging one partition into another – merging the daily partition into the month-to-date partition every day. I just tried an example and it didn’t work for me. Maybe something is wrong with my code. But when I checked the syntax of partition merge it is as below.

    Merge Partition Syntax

    ALTER TABLE [schema.]table
    MERGE PARTITIONS partition1, partition2
    [INTO PARTITION [new_partition] [partition_description]]

    (Please see http://www.lc.leidenuniv.nl/awcourse/oracle/text.920/a96518/csql.htm)

  19. Oh, I did make an error on one of the partition naming rules for merges — that the new partition cannot inherit the name of the lower-bound partition when merging the ranges. This changes the naming convention a little.

    Try the following script … note that Y20050309 starts off as the month-to-date partition, then gets merged with Y20050310 which becomes the new month-to-date partition:

    create table fct
    (date_of_day date not null,
    col1 number)
    partition by range (date_of_day)
    (partition Y200501 values less than (to_date('20050201','YYYYMMDD')),
    partition Y200502 values less than (to_date('20050301','YYYYMMDD')),
    partition Y20050309 values less than (to_date('20050310','YYYYMMDD')),
    partition Y20050310 values less than (to_date('20050311','YYYYMMDD')),
    partition Y20050311 values less than (to_date('20050312','YYYYMMDD')),
    partition Y20050312 values less than (to_date('20050313','YYYYMMDD')),
    partition Y20050313 values less than (to_date('20050314','YYYYMMDD')))
    /

    insert into fct values (to_date('20050101','YYYYMMDD'),1);
    insert into fct values (to_date('20050102','YYYYMMDD'),1);
    insert into fct values (to_date('20050201','YYYYMMDD'),1);
    insert into fct values (to_date('20050202','YYYYMMDD'),1);
    insert into fct values (to_date('20050301','YYYYMMDD'),1);
    insert into fct values (to_date('20050302','YYYYMMDD'),1);

    create table new
    (date_of_day date not null,
    col1 number)
    /

    insert into new values (to_date('20050310','YYYYMMDD'),1);

    commit;

    alter table fct
    exchange partition Y20050310
    with table new
    with validation;

    select * from fct partition (Y20050309);
    select * from fct partition (Y20050310);

    alter table fct
    merge partitions Y20050309, Y20050310
    into partition Y20050310;

    select * from fct partition (Y20050309);
    select * from fct partition (Y20050310);

    Note the ORA-02149 error indicating that partition Y20050309 no longer exists.

  20. David,
    There are a few things that are fundamentally not correct in your example. You are saying Y20050309 is your month-to-date partition, but in the insert statements there are no values that go to this partition. It seems like you are merging one partition with an empty partition. I will play around with your example, modify it a little, and see how it goes.

  21. Don,
    As part of that script you ought to be seeing …

    SQL> select * from fct partition (Y20050309);

    DATE_OF_D       COL1
    --------- ----------
    01-MAR-05          1
    02-MAR-05          1

    Do you not get this result?

  22. Don,

    David’s example works fine for me.

    David,
    I think the error was not in your partition names – it was in the concept. You stated that during partition merging the definition gets inherited. That is not correct. Partition definitions do not get inherited during partition merging. We can do partition merging if the existing partition definition supports the data from the other partition.

  23. Hmmm, that’s another way of looking at it, and there’s nothing wrong with that.

    If you do a test on partition merging you find two different scenarios:

    If the new partition name is not specified or is a new name, then the new partition gets a new object_id.

    If the new partition name is the same as the upper-range partition being merged then the old partition’s object id is inherited by the new partition.

    In either case however a new data object id is assigned to the new partition because a new segment is created for it. Try the following script:

    drop table fct
    /

    create table fct
    (col1 number)
    partition by range (col1)
    (partition P1 values less than (2),
    partition P2 values less than (3),
    partition P3 values less than (4))
    /
    insert into fct values (1);
    insert into fct values (2);
    commit;

    select subobject_name,object_id, DATA_OBJECT_ID
    from user_objects
    where object_name = 'FCT'
    and subobject_name is not null
    /

    select partition_name,file_id,block_id,bytes
    from dba_extents
    where segment_name = 'FCT'
    and owner = user
    /

    alter table fct merge partitions p1,p2 into partition p2
    /

    select subobject_name,object_id, DATA_OBJECT_ID
    from user_objects
    where object_name = 'FCT'
    and subobject_name is not null
    /

    select partition_name,file_id,block_id,bytes
    from dba_extents
    where segment_name = 'FCT'
    and owner = user
    /

    I have to say that this is not the way that I understood merges to work. I thought that they were a lot “lighter” than this. Maybe I’ll have to go back to the drawing board, ‘cos having the data rewritten like this seems to me to be “a bad thing” … I’m sure it could be made more efficient (in the example above) by just assigning the extents of the first partition to the second partition, ie. an internal data dictionary modification only.

    Well, some more thinking to do there … and maybe an enhancement request is called for.

  24. David, You said
    “Hmmm, that’s another way of looking at it, and there’s nothing wrong with that.”
    Actually it is the right way to look at it, because that is the way partition merging works in Oracle! It would be good if they could change the way they have implemented it! But that is a different topic. I think you were a little confused between partition merging and partition exchange: partition exchange works by internal data dictionary modification – no moving of data.

  25. Janu,

    Yes, from the purely logical side your is the more simple explanation, that the low partition gets subsumed into the high one, rather than that the high one gets subsumed into the low one which then inherits the high one’s upper bound. It’s worth noting, I think, that unless the new partition is named the same as the old higher partition then it is actually a new logical object in the database.

    Physically what I was showing is that the new partition is a new segment anyway, and that just ain’t efficient (requiring index rebuilds etc).

    Anyhoo, there seem to be some inefficiencies here for sure.

  26. David,
    Yes, Janu’s explanation is logically very correct and physically not that good; agreed. But what was your explanation of partition merging?
    From your posting – start quote
    “the point is that through the mechanism of the merge operation the month-to-date partition subsumes the values of the daily partition and also inherits its values-less-than clause.”
    end quote.
    Do you think this is logically or physically correct?

  27. No, like I say, “your [sic] is the more simple explanation” … simple = good!

    Physically though both explanations are inadequate as there is actually the creation of a new partition segment.

  28. simple = good (!)
    Then inadequate = incorrect?
    I know it is difficult to say ‘I was wrong’ in public! But I have to say you are much better than Don Bu…..n!

  29. From the physical side — how the two initial partition segments themselves are modified — yes, they are both incorrect. The partition segment that results from a merge appears to be completely new, which is not what I expected. I would imagine that if the partitions were in the same tablespace then the procedure could be optimized to merge the extents of the old partitions into a single segment.

    Better than DKB? Well he did offer me a free guide horse if my eyes got fried today, so right now he’s OK by me!

    http://www.blogger.com/comment.g?blogID=11995638&postID=112174259752288324#c112177340618169060
