Pushing the Technical Frontier

I have a dilemma.

In the BI system that I’m working on I have the perfect application for the use of Expression Filters. It is almost as if someone specifically devised this problem in order to tempt me into using them, and I really am very tempted indeed.

Let me explain.

In the application which feeds data to the BI system, users can define “display filters” that determine whether network alerts are displayed in particular windows or not. For example, they might create a filter called “Failed Core Devices” for which the definition reads: “IS_CORE=’Y’ and FAILED>=1”. Network events which meet those criteria are displayed in that window. They might also have a filter called “Failed Cisco Devices” with a definition: “DEV_TYPE LIKE ‘CISCO’ AND FAILED >= 1”. So, a failed Cisco core device will actually appear in both windows.

The requirement is to tag which windows an event was displayed in at a particular time, and because there is no data element on each event to tell you the display window(s), the only way to work that out is to evaluate each filter’s expression against each event.

This is exactly what an Oracle Expression Filter does, and although I would have to convert the definitions from Netcool ObjectSQL to Oracle SQL (LIKE in Netcool ObjectSQL is a regular expression, so “DEV_TYPE LIKE ‘CISCO'” has to be converted to “REGEXP_LIKE(DEV_TYPE,’CISCO’)”), they are an obvious fit for the problem.

My query to determine the filter names for the events would be reduced to something like:

select
  e.event_id,
  f.filter_name
from
  events e,
  filter_definitions f
where
  Evaluate(
    f.filter_condition,
    get_filters.getVarChar(
      e.dev_type,
      e.is_core,
      e.failed)) = 1

Which would obviously be pretty neat and tidy, considering the alternatives (think of how you might implement an expression such as “(COL1 NOT IN (1,3,4,5,78,900) AND COL2 IS NOT NULL) OR NOT REGEXP_LIKE(COL3,’^WHATEVER’)” by other means as simply as you would with an expression filter). It could be implemented as part of the ELT or as a filter or join in Business Objects (or your current query tool of choice).
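For the record, the plumbing behind that EVALUATE call would be roughly as follows — just a sketch with illustrative names, and assuming I’m remembering the DBMS_EXPFIL calls correctly: an attribute set describing the elements that the filter expressions may reference, bound to the column that stores the expressions.

begin
  -- attribute set listing the event elements that filter expressions may reference
  dbms_expfil.create_attribute_set (attr_set => 'event_attrs');

  dbms_expfil.add_elementary_attribute (attr_set  => 'event_attrs',
                                        attr_name => 'dev_type',
                                        attr_type => 'VARCHAR2(64)');
  dbms_expfil.add_elementary_attribute (attr_set  => 'event_attrs',
                                        attr_name => 'is_core',
                                        attr_type => 'VARCHAR2(1)');
  dbms_expfil.add_elementary_attribute (attr_set  => 'event_attrs',
                                        attr_name => 'failed',
                                        attr_type => 'NUMBER');

  -- bind the attribute set to the column holding the stored filter expressions
  dbms_expfil.assign_attribute_set (attr_set => 'event_attrs',
                                    expr_tab => 'filter_definitions',
                                    expr_col => 'filter_condition');
end;
/

Once that is in place the stored filter_condition strings are validated against the attribute set, and EVALUATE can apply each of them to the event values passed in.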

However, what happens if I move on to another client and this method fails because of a new filter definition that references a new data element not currently included in the Expression Filter attribute set? Or an upgrade goes wrong? Or a patch breaks it? Or we discover a bug? Can I sleep at night if I’m leaving code behind that will have most reasonably competent Oracle practitioners reaching for Google to find out what EVALUATE is, and then has them reading documents for the next couple of hours? Even if I sit the support staff down and show them all how it works, that knowledge will have evaporated in a couple of months.

Documentation? Well I can certainly document how expression filters work, and how to detect problems and perform apparently simple tasks like modifying the attribute set (I already did something similar for DBMS_Scheduler chains), but I’m not going to think of everything, and even then do I want people to be following written instructions without understanding the architecture of expression filters? It’s pretty easy to get into a mess over this object oriented stuff.

The alternative? Currently unknown. The few options identified are prone to inaccuracy or imprecision (e.g. they’ll only identify a single filter instead of the full set), and they tend to require continual maintenance of the ELT or the query tool or both whenever a filter is created, deleted or modified.

So to summarise: do I implement a solution that is a technical exact-match with potentially high future maintenance problems, or a poor technical solution with definite low-level ongoing maintenance problems?

At the moment, I have no idea. Input welcome.

Sequences — Not The Only Choice

A quick comment of mine on the Oracle Forums prompts me to add a further note of explanation here.

Many of us are pretty much wedded to the use of sequences to generate primary key values, either by using a trigger to populate the column or by referencing the sequence’s next value in the insert statement itself. We do this because they provide unique values to concurrent inserts, and we don’t care about missing the occasional value here and there due to caching, rollback etc.

Well sequences are fine things for solving concurrency issues on primary key generation, but I’m struggling to recall a time when I have had multiple processes inserting new values into a data warehouse table concurrently. I can see that it might happen on a fact table, though it would rule out the presence of enabled bitmap indexes, but for a dimension we are generally inserting values from a data source that we have found do not already exist as dimension values.

Without the need to support concurrent inserts the case for a sequence largely disappears, and we are left instead with an object whose definition on creation often has to be inconveniently derived from the current maximum value in some column or other.

So, I’ve largely stopped using sequences to generate primary key values. Instead I have code of the following form:

 CREATE OR REPLACE VIEW NEW_DIM_NODE_VW AS
  with current_max as
         (select   Coalesce(max(node#),0) max_key_value
          from     dim_node),
       new_values as
         (select   node_txt
          from     source_data
          minus
          select   node_txt
          from     dim_node
          order by 1)
select node_txt,
       rownum + max_key_value node#
from   current_max c,
       new_values  n;

The “current_max” subquery factoring clause (SQFC) gives an extremely fast read of the index on the primary key column (node#) of the dimension table to find the current maximum value, using Coalesce() to ensure that an empty dimension table in a newly deployed schema returns zero instead of NULL.

The “new_values” SQFC provides a list of new values, ordered so as to reduce (very slightly) the overhead of index maintenance on the unique constraint that protects the dim_node.node_txt column.

The actual load is performed by a packaged procedure of this form:

  procedure publish_new_nodes
  is
  begin
     ETL_Support.Start_Action('publish_new_nodes');
     ETL_Support.set_start_time;

     insert into dim_node(node#,node_txt)
     select      node#,
                 node_txt
     from        new_dim_node_vw;

     ETL_Support.set_end_time;

     ETL_Support.Write_Log_DML(
         plsql_unit   => $$PLSQL_UNIT,
         plsql_line   => $$PLSQL_LINE,
         action       => 'Insert (Conventional Singletable)',
         message      => 'Loaded '||to_char(SQL%RowCount,'fm999,999,990')
                         ||' new nodes to DIM_NODE',
         dml_row_count=> SQL%RowCount);

     ETL_Support.End_Action;  
  End publish_new_nodes;

Witness also the lovely use of $$PLSQL_UNIT and $$PLSQL_LINE to pass to the DML logging procedure the approximate code location for the activity.

Sweet.

One side effect of this is that the primary key values are gap-free, unless one were to delete rows other than those with the highest primary key values, of course. I wonder if the optimiser has any special way of treating columns in tables of X rows for which the maximum value is equal to the minimum value plus (X-1) and the number of distinct values is X? Probably not, but I like to think that it would put the cat among the pigeons if it did.

Anyway, the only exception I have to the use of this method in the current system I’m working on is for tables populated by an Oracle Gateway process. Primary key values for those are still populated by trigger and sequence — what I have been describing as “Old School” to the imaginary critic in my head.
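For contrast, the Old School pattern is something like this — illustrative names, and using the SELECT FROM dual form so that it works on older releases as well:

create sequence node_seq;

create or replace trigger dim_node_pk_trg
  before insert on dim_node
  for each row
begin
  -- only assign a key when the insert does not supply one
  if :new.node# is null then
    select node_seq.nextval into :new.node# from dual;
  end if;
end;
/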

Aside from those, I’m done with sequences for now.

Applying Predicates and Partition Pruning to MERGE Target Tables

Just a quick pointer to an answer on the Oracle Forum:  http://forums.oracle.com/forums/thread.jspa?messageID=3912341#3912341

If you are merging into a data warehouse fact table (not so likely) or a large aggregation of a fact table (more likely) then you can improve efficiency by adding a transitive predicate into the USING clause to promote partition pruning on the target table.
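The shape of the thing, with invented names and assuming the target is range partitioned on a date key, is something like this — the date range in the USING subquery plus the equality on the partition key in the ON clause is what gives the optimizer a transitive predicate against the target:

merge into fact_sales_agg t
using (select sale_dt, cust_id, amount
       from   stage_sales
       -- the predicate that can be applied transitively to t.sale_dt
       where  sale_dt >= date '2009-06-01'
       and    sale_dt <  date '2009-07-01') s
on    (t.sale_dt = s.sale_dt and t.cust_id = s.cust_id)
when matched then
  update set t.amount = s.amount
when not matched then
  insert (sale_dt, cust_id, amount)
  values (s.sale_dt, s.cust_id, s.amount);

The thing to look for in the execution plan is a partition range operation against the target rather than a scan of every partition.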

Unless you are using Partition Change Tracking I think it is very unlikely that the MERGE inherent to a materialized view refresh is going to partition prune on the target table.

ETL Powered by Rice and Beans

I’m told that it’s very important to keep one’s brain in shape as one grows older. Or “drifts gently towards the twilight” as I prefer to think of it. Trying to look on the optimistic side, obviously. Fortunately I end each day with my brain wrung out like a well-used sponge courtesy of some ETL development work on a largeish data warehouse development.

Although the regular ETL development is outsourced, we are responsible for the coding that processes historical data, so on deployment we will run our historical load code and then the “other team’s” regular load will take over. Yes, it is a strange arrangement. And by “we” I really mean my colleagues somewhere in the pestilent jungles of Brazil, surrounded by poisonous frogs and who-knows-what other kind of fearsome thing, and living on a diet of rice and beans in their houses on stilts.

My own role is not so bad. I don’t have to get too involved in the detailed business logic that the code has to implement, and thank goodness for that because frankly I’ve not seen anything so complex for … well, forever. So I sit here on a metaphorically lofty perch, making myself popular by criticising the hard work of others and encouraging them to greater efforts by calling them lazy if they stop work before midnight (the traditional management role). Yes, it’s a good life alright. I constantly remind them that the real aim of the project is to produce a process for loading five years of historical data that is faster than the “B-Team’s” load of a single day.

I think our methodology works quite well though. When you are down in the weeds of the logic (did I mention that it is particularly complex logic?) it is always difficult to see the slightly larger picture of where you’re taking the data next, and of how to optimise both the specific ETL stages and the overall process to make everything more efficient. It’s a little like pair programming, I guess.

And we do get to do the fun bit — 10 million records joining 0-M:0-N through eight OR’d join conditions to 10 million other records, with a couple of billion joins resulting that we then artfully winnow down to a few million. The optimization process is a little mind bending, with all sorts of issues that we wouldn’t normally worry about too much becoming vitally important in an effort to avoid blowing out the host server’s memory with a degree of parallelism of 40. I have to say that there was initially some FUD over the historical process, with estimates of the load duration being bandied about that were in the order of months — we never thought it would take more than five days, and it’s now looking like it’ll run in under one day, which means that we’d be able to start the daily load process directly afterwards without altering it. And I think we still have a few more tricks up our sleeves as well in case we need to buy an extra hour somehow.

In fact it’s turning into something of a stroll through the SQL Reference book. The list of features we’re touching on includes analytic functions (which have become a second language for everyone now, along with the “PCTFREE 0 NOLOGGING COMPRESS” which I nearly appended to my sandwich order the other day), connect by clauses, and even some rather exotic stuff with finding the lowest set bit in a something-or-other that I don’t understand yet but which Heitor thought of while lying in bed one night and which he used to help reduce an eight hour process to fifteen minutes. They say that they don’t have siestas in Brazil, but maybe they should start.

Oh well, another morning … Brazil is online … back to work!

Indexing Options for Change Data Capture

I just posted this question to the Oracle list, but thought I might post it here in case it reaches a wider audience. It also may be a situation that others in the DW arena have faced before.

 

I have a large and busy OLTP table, 100GB or so, against which there is a need to capture changes. Until an asynchronous CDC solution is in place we have to rely on two columns: create_date and update_date (null until the first update), both being of DATE type of course.

These are currently unindexed, but there is a desire to index them to improve change capture performance for queries such as:

 
select …
from   …
where     (create_dt >= trunc(sysdate)-1 and create_dt < trunc(sysdate))
       or (update_dt >= trunc(sysdate)-1 and update_dt < trunc(sysdate))
 
The desire is obviously to provide the maximum performance benefit while reducing the impact on the OLTP system.
 
I thought of four different indexing options:
 
i) Indexing the columns separately, leading in the best case to an unusual execution plan where the indexes are range scanned and the results merged before accessing the table.
ii) A single composite index (create_dt, update_dt), leading to a fast full index scan.
iii) A single composite index (update_dt, create_dt), rewriting the query predicate as …
   (create_dt >= trunc(sysdate)-1 and create_dt < trunc(sysdate) and update_dt is null)
or (update_dt >= trunc(sysdate)-1 and update_dt < trunc(sysdate))
… and leading to two index range scans. (not sure about this)
iv) A single-column function-based index on (Nvl(update_dt,create_dt)) and rewriting the predicate appropriately (sketched below).
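For what it’s worth, option iv would look something like this — illustrative names, and note that the rewritten predicate captures rows whose most recent change falls in the window rather than being a strict logical equivalent of the OR version above:

create index big_oltp_tab_chg_fbi
  on big_oltp_tab (nvl(update_dt, create_dt));

select *
from   big_oltp_tab
where  nvl(update_dt, create_dt) >= trunc(sysdate) - 1
and    nvl(update_dt, create_dt) <  trunc(sysdate);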
Whichever of these is chosen, the growth pattern for the data naturally tends towards index contention, as all the new values are equal to sysdate.
 
So the nub of my problem comes down to these questions:
 
Is it possible to mitigate the index contention problem with a suitably high initrans value? Does it seem likely that implementing a reverse key index would be worthwhile in reducing contention? (I could probably take the pain of the fast full scan if it was, although it would lead to a larger index due to the 50/50 block splits.)
 
Would you regard implementing a function-based index as risky in any significant way on major tables of a busy OLTP system? How about a reverse function based index?
 
Basically, “what would you do”?
 
Maybe I’m overthinking it, but I’d like to go to the system developers with a fully thought out set of options (even if they do tell me to take a running jump anyway).

250,000 Tests for Uniqueness Per Second — Ain’t No Biggie

When you’re designing ETL processes the mantra of “do it in SQL” really gets ingrained in your way of thinking. PL/SQL-based processing as a method of handling business logic is anathema, as the principle of “row-by-row equals slow-by-slow” is very well highlighted by the processing of millions of rows.

Very recently though I found a case where this principle had to be violated, and with performance results that were not at all unpleasant. I was stumped by a SQL problem that I thought would be amenable to analytic functions, but I couldn’t think of a way of doing it. Here’s the situation.

I have two sets of data representing, let us say, sources and targets. Sources and targets only have one attribute each, which is a date in each case, and sources and targets can be joined in a many-to-many relationship. However only one join is valid per source and only one is valid per target, and we have to form connections between sources and targets on a one-to-one basis. We work out which one of the possibly-many joins is the valid connection according to the following logic.

We start with the most recent source. We work out the earliest target to which it is joined, and discard all other targets to which it is joined (if any). The target to which it connected is now ineligible for any other source to connect to it. Then we take the next most recent source and we work through the targets in ascending target-date order and we connect to the first (ie. earliest) target available, discarding all others. The only reason why a target might not be available is that we already connected a source to it.

One way of looking at this is to imagine that the set of joined sources and targets are ordered by source-date descending and then target-date ascending. Examining each pair in turn we ask two questions:

  1. “Has this source already connected to a target?”
  2. “Has this target already connected to a source?”

If the answer to both of those is “No” then we take that join as the valid connection and we move on to the next pair.

It’s very easy to construct a sample data set for this problem as well, by simply joining a list of integers to itself to form pairs such as (1,1), (1,2), (1,3), … , (1000,1000) and ordering the output with a query such as this:

select s.num s_num,t.num t_num from 
(select rownum num from dual connect by level <= 1000) s, 
(select rownum num from dual connect by level <= 1000) t 
order by 1 desc, 2 asc 
/ 
 

Given that data set the first pair is (1000,1), which becomes the first connection. The next connection will be (999,2), the next will be (998,3) etc.. It’s important to note that the real data set is not like that though, where every source connects to every target, so special conditions in this data set do not apply to the real one.

By the way, there are some special conditions that are easy even in the real data — if a row represents the first occurrence for both the source and the target then that is always going to be the valid connection. Um … that’s actually the only special condition I could think of. There may be some very clever mathematical way of working out others, but I don’t know what it is.

So, the first solution I came up with was to use PL/SQL to read each row in order and attempt to insert into a table having unique constraints on the source_id and target_id columns, handling any uniqueness error. The handling of the error is the only virtue that PL/SQL brings to this case. It was functional, but it was slow because of course it requires a lot of logical reads and writes in the index maintenance. I considered an insert with an error-logging clause by the way, but that doesn’t handle unique or primary key constraint errors and if it did it would still have the same problem of high logical reads and writes.

The next solution was to maintain a pair of PL/SQL associative arrays to handle the detection of uniqueness, and that worked well except that I was using conventional path inserts for the result set, and I didn’t fancy coding up bulk inserts. I know enough PL/SQL to be dangerous but it’s not my native language.

My (currently) final solution was to embed the PL/SQL into a package so that a function to detect uniqueness could be embedded in a SQL statement. I was pretty concerned about the context switch to PL/SQL, and the logic means that this process can’t be parallelised, but the result surprised me. When I streamed 100,000,000 rows through the process to detect 10,000 valid connections the process completed in 770 seconds, so that was at least 250,000 uniqueness tests per second. I can’t imagine that being achieved based on detecting violations of a unique constraint.

Here’s the package I used:

CREATE OR REPLACE PACKAGE ss
is
  TYPE NumTyp IS TABLE OF NUMBER INDEX BY pls_integer;

  src_tab   NumTyp;
  tgt_tab   NumTyp;
  empty_tab NumTyp;

  procedure reset;

  function is_new (s_num IN NUMBER, t_num IN NUMBER) return NUMBER;
end;
/
CREATE OR REPLACE PACKAGE body ss
is
  -- forget all previously seen sources and targets
  PROCEDURE reset
  is
  BEGIN
    src_tab := empty_tab;
    tgt_tab := empty_tab;
  end;

  -- return 1 and mark both values as seen if neither the source nor the
  -- target has been connected before; otherwise return 0
  function is_new (s_num IN NUMBER, t_num IN NUMBER)
  return NUMBER
  is
  BEGIN
    IF src_tab.exists(s_num) OR tgt_tab.exists(t_num)
    then
      RETURN 0;
    else
      src_tab(s_num) := 1;
      tgt_tab(t_num) := 1;
      RETURN 1;
    end if;
  end;
end;
/
 

The complete implementation can be tested as follows:

 exec ss.reset; 
        
select * from        
(select s.num s_num,t.num t_num 
from   (select rownum num from dual connect by level <= 1000) s, 
       (select rownum num from dual connect by level <= 1000) t 
where rownum > 0 
order by 1 desc, 2 asc) 
where ss.is_new(s_num,t_num) = 1 
/

The call to ss.reset purges the arrays, and must be executed between runs. The rownum > 0 predicate is just the poor man’s way of ensuring that the call to ss.is_new is not pushed into the inline view — Oracle obviously doesn’t know that the row ordering is a critical part of the logic.

Obviously an INSERT (with append hint to invoke direct path) can be prepended to that to perform the actual ETL task.
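So the final form would be something along these lines (the connections table name is illustrative):

exec ss.reset;

insert /*+ append */ into connections (s_num, t_num)
select s_num, t_num
from  (select s.num s_num, t.num t_num
       from   (select rownum num from dual connect by level <= 1000) s,
              (select rownum num from dual connect by level <= 1000) t
       where  rownum > 0
       order by 1 desc, 2 asc)
where  ss.is_new(s_num, t_num) = 1;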

Multitable Insert and Distinct Values

I puzzled a little on the way to work about efficiently populating a hierarchy table based on a flat dimension. (“Why?” … seriously, you just don’t want to know that).

So let us suppose you have a table like this:

create table dim 
  (city varchar2(30), 
  state varchar2(30), 
  region varchar2(30), 
  country varchar2(30)) 
/

… and data like this … 

insert into dim values ('Fairfax'         ,'Virginia','US-East'   ,'USA'  ); 
insert into dim values ('Merrifield'      ,'Virginia','US-East'   ,'USA'  ); 
insert into dim values ('Alexandria'      ,'Virginia','US-East'   ,'USA'  ); 
insert into dim values ('Portland'        ,'Maine'   ,'US-East'   ,'USA'  ); 
insert into dim values ('Colorado Springs','Colorado','US-Central','USA'  ); 
insert into dim values ('Denver'          ,'Colorado','US-Central','USA'  ); 
insert into dim values ('Longmont'        ,'Colorado','US-Central','USA'  ); 
insert into dim values ('Salt Lake City'  ,'Utah'    ,'US-Central','USA'  ); 
insert into dim values ('Rome'            ,'Lazio'   ,'IT-Central','Italy');

… and you want to turn it into a hierarchy in the following table …

create table hier 
(attribute varchar2(30), 
parent varchar2(30), 
lvl varchar2(30)) 
/

You could do it as follows:

insert into hier 
select          city   , state  , 'City'    from dim 
union all 
select distinct state  , region , 'State'   from dim 
union all 
select distinct region , country, 'Region'  from dim 
union all 
select distinct country, null   , 'Country' from dim 
/

No distinct on the first select clause because we assume that City is unique (it would be if it were the PK on a real dimension, of course). In the absence of indexes to support fast full index scans we have four full table scans and three operations to support the distinct operator …

----------------------------------------------------------------------------                                                                                                                                                                                                                                 
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |                                                                                                                                                                                                                                 
----------------------------------------------------------------------------                                                                                                                                                                                                                                 
|   0 | INSERT STATEMENT    |      |    25 |   418 |    11  (82)| 00:00:01 |                                                                                                                                                                                                                                 
|   1 |  UNION-ALL          |      |       |       |            |          |                                                                                                                                                                                                                                 
|   2 |   TABLE ACCESS FULL | DIM  |     9 |   171 |     2   (0)| 00:00:01 |                                                                                                                                                                                                                                 
|   3 |   HASH UNIQUE       |      |     9 |   162 |     3  (34)| 00:00:01 |                                                                                                                                                                                                                                 
|   4 |    TABLE ACCESS FULL| DIM  |     9 |   162 |     2   (0)| 00:00:01 |                                                                                                                                                                                                                                 
|   5 |   HASH UNIQUE       |      |     5 |    75 |     3  (34)| 00:00:01 |                                                                                                                                                                                                                                 
|   6 |    TABLE ACCESS FULL| DIM  |     9 |   135 |     2   (0)| 00:00:01 |                                                                                                                                                                                                                                 
|   7 |   HASH UNIQUE       |      |     2 |    10 |     3  (34)| 00:00:01 |                                                                                                                                                                                                                                 
|   8 |    TABLE ACCESS FULL| DIM  |     9 |    45 |     2   (0)| 00:00:01 |                                                                                                                                                                                                                                 
----------------------------------------------------------------------------                                                                                                                                                                                                                                

A pretty clean looking piece of SQL, but multiple scans of tables make my hackles rise. If this were a very large table and indexes couldn’t be leveraged then it could be a performance problem.

An alternative occurred to me:

insert all 
when 1               = 1 then into hier values (city   ,state  ,'City'   ) 
when state_row_ind   = 1 then into hier values (state  ,region ,'State'  ) 
when region_row_ind  = 1 then into hier values (region ,country,'Region' ) 
when country_row_ind = 1 then into hier values (country,null   ,'Country') 
select city, 
       state, 
       region, 
       country, 
       row_number() over (partition by region  order by 1) region_row_ind, 
       row_number() over (partition by state   order by 1) state_row_ind, 
       row_number() over (partition by country order by 1) country_row_ind 
from dim 
/

A single scan of the dim table is performed, with three window sorts:

--------------------------------------------------------------------------------                                                                                                                                                                                                                             
| Id  | Operation               | Name | Rows  | Bytes | Cost (%CPU)| Time     |                                                                                                                                                                                                                             
--------------------------------------------------------------------------------                                                                                                                                                                                                                             
|   0 | INSERT STATEMENT        |      |     9 |   963 |     5  (60)| 00:00:01 |                                                                                                                                                                                                                             
|   1 |  MULTI-TABLE INSERT     |      |       |       |            |          |                                                                                                                                                                                                                             
|   2 |   INTO                  | HIER |       |       |            |          |                                                                                                                                                                                                                             
|   3 |   INTO                  | HIER |       |       |            |          |                                                                                                                                                                                                                             
|   4 |   INTO                  | HIER |       |       |            |          |                                                                                                                                                                                                                             
|   5 |   INTO                  | HIER |       |       |            |          |                                                                                                                                                                                                                             
|   6 |    VIEW                 |      |     9 |   963 |     5  (60)| 00:00:01 |                                                                                                                                                                                                                             
|   7 |     WINDOW SORT         |      |     9 |   288 |     5  (60)| 00:00:01 |                                                                                                                                                                                                                             
|   8 |      WINDOW SORT        |      |     9 |   288 |     5  (60)| 00:00:01 |                                                                                                                                                                                                                             
|   9 |       WINDOW SORT       |      |     9 |   288 |     5  (60)| 00:00:01 |                                                                                                                                                                                                                             
|  10 |        TABLE ACCESS FULL| DIM  |     9 |   288 |     2   (0)| 00:00:01 |                                                                                                                                                                                                                             
--------------------------------------------------------------------------------

The cost of the two methods is exactly the same, but the cardinalities are both out — the correct answer is “19”. Of the two methods I’d expect the first to be more accurate and it did do pretty well. With such a small amount of data it’s difficult to make an accurate comparison, and the proper approach would be to benchmark each method on the production data.

One interesting difference is that with the single table insert method you can control the order of insert into the hierarchy table, which might be important if you wanted to physically cluster particular rows together. I don’t think that there is a deterministic way of controlling the insert order in the multitable insert.

Anyway, that was it. No rocket science, just an idle thought.

How to Create Multiple Indexes in a Single DDL Statement

First, raise an enhancement request with your database vendor …

Yes, don’t get excited — I don’t know of a way to do it. I wish there was though, and the ability to rebuild multiple indexes (or partitions of different indexes) at the same time would be useful because …

Well the problem is in the domain of data warehousing — frankly, all the interesting problems are there ;) — and it arises in ETL operations when we want to load to an unindexed table and then create or rebuild multiple indexes or index partitions on that data. With each of those index operations we need a full table or partition scan (you know, let’s just assume the “or partition” bit from now on) and for a large data set that may mean very high physical read rates are required.  Although we can avoid or mitigate the physical reads by building multiple indexes at the same time in different sessions using dbms_job or dbms_scheduler and relying on caching in the SGA or at a hardware level, we still incur the logical reads.
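That workaround is something like the following sketch — invented names, one job per CREATE INDEX so that the segment scans at least run at the same time and have a chance of sharing the cache:

begin
  dbms_scheduler.create_job (
    job_name   => 'build_fact_ix1',
    job_type   => 'PLSQL_BLOCK',
    job_action => q'[begin execute immediate 'create index fact_ix1 on big_fact (col1) nologging'; end;]',
    enabled    => true);

  dbms_scheduler.create_job (
    job_name   => 'build_fact_ix2',
    job_type   => 'PLSQL_BLOCK',
    job_action => q'[begin execute immediate 'create index fact_ix2 on big_fact (col2) nologging'; end;]',
    enabled    => true);
end;
/

Each job should run once and then drop itself, so it is a reasonably lightweight way of getting the builds to overlap — but the logical reads remain.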

So I’d like to be able to create multiple indexes based on a single scan of the data segment. A “Create Indexes” statement in which some slave process spits out the required columns and rowids to a set of processes which handle the sorts etc. Even if it meant writing a temporary (compressed?) data set of the required columns that then gets scanned multiple times, as long as we’re not indexing the majority of the columns on the table we could still see a net benefit.

I really can’t think of a downside to this. Of course, you could try to build fifty indexes on a fact table in a single operation and end up cooking your CPUs and/or blowing out your temporary disk space and/or memory and incurring horrendous levels of the dreaded multipass sort, but so much of what we do is finding that sweet spot between two extremes anyway. Maybe building those fifty indexes in ten groups of five would reduce the number of reads of the data segment by a factor of five without spilling the sorts to disk, and would also put some decent load on those pricey CPUs.

Enhancement request time again, then.

Actually, I suppose you can create multiple indexes in a single DDL statement …

Create table my_table
  (col1 constraint xpk_my_table primary key,
   col2 constraint unq_my_table unique)
as
select blah1, blah2
from   whatdjamacallit;

Cheating really though, and I don’t suppose that it avoids rescanning the data segment. If I had a little time I’d test that with a bit of sql tracing.

“How do you create multiple indexes with a single statement” … now that’s quite the interesting interview question.

Interviewing: The difference between Delete and Truncate in Oracle

I interviewed a couple of senior ETL developer candidates last week, so I shook the dust off of some of my tried-and-true probes for technical knowledge.

I’ve always had a liking for asking for the differences between “truncate” and “delete” ever since I was asked it myself some years ago. I think it allows candidates to demonstrate a nice range of knowledge.

Here are some answers in no particular order.

  1. “Truncate is fast, delete is slow”. Hmmm, well that’s mostly true I guess. The most basic of all answers.
  2. “Truncate is DDL, delete is DML”. Is this just so obvious that people don’t mention it?
  3. “You can’t rollback a truncate”. Indeed. See 2 above, of course. And 1.
  4. “Truncate is implicitly a commit”. A better answer than 3 I think. I seem to recall that someone once mentioned that there are actually two commits in a truncate, one before and one after … but maybe I dreamed that. I should test it really.
  5. “You can’t grant permission to truncate a table”. Ah, practical experience shines through. If you don’t like your ETL process to connect as the owner of the schema then this is a challenge that has to be overcome with stored procedures or something sophisticated like that (see the sketch after this list). You really don’t want to grant “DROP ANY TABLE” to your ETL user.
  6. “You can delete any subset of rows, but you can only truncate the complete table, or a partition or subpartition of it”. Is this also so obvious that nobody mentions it?
  7. “Truncate makes unusable indexes usable again”. A real gotcha for the unwary. If you attempt to optimise a data load by rendering indexes unusable and truncating a table (possibly followed by an index rebuild and a partition exchange) then be careful of the order.
  8. “Truncate can’t maintain foreign keys”. It’s “cascading delete”, not “cascading truncate”. That would be an interesting feature though — point 4 above would make it a little trickier. Truncating an index cluster is pretty close to a “cascading truncate” to a limited extent though. In any case no truncate is permitted on a table referenced by an enabled foreign key.
  9. “You can’t flashback a truncate”. This is an oddity to me. We can flashback a “drop table”, rollback uncommitted deletes, or use flashback to recover pre-commit deleted data, but a truncate is a barrier across which we cannot flashback.
  10. “Truncate deallocates space, delete doesn’t”. Unless you want it not to, using the “reuse storage” clause. However the high water mark is reset in either case so maybe that’s a better answer …
  11. “Truncate resets the high water mark, delete doesn’t”. And on the indexes, also. See 7 above also.
  12. “Truncate allows special treatment of materialized view logs”. More of that practical experience and/or documentation-reading skill shows through.
  13. “DML triggers do not fire on a truncate”. Because … um … it’s DDL not DML.
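Regarding point 5, the usual workaround is a definer-rights procedure owned by the schema owner, with execute granted to the ETL account — a sketch with illustrative names:

create or replace procedure truncate_stage_table (p_table_name in varchar2)
  authid definer
is
begin
  -- dbms_assert guards against anything other than a simple table name
  execute immediate 'truncate table ' || sys.dbms_assert.simple_sql_name(p_table_name);
end truncate_stage_table;
/

grant execute on truncate_stage_table to etl_user;

Not elegant, but it keeps “DROP ANY TABLE” well away from the ETL user.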

I suppose there are other issues as well, but this shows that it’s a complex business.  I really wish we could grant “Truncate table” to a user or role … enhancement request time I think.

Isn’t it interesting that in some circumstances only one of the two methods can be used to remove all rows from a table in a single step, while sometimes both can; and in some circumstances only one method can remove a subset of rows from a table, and sometimes both can.

Oh well, back to work.

TIMESTAMPs, Old Fogeys, and Data Warehouses

I scanned the structure of a data warehouse staging area yesterday, and developed a mild nervous tic. The cause: finding TIMESTAMP(6) data types scattered around willy-nilly.

Firstly, my natural caution and fear of the unknown was aroused by seeing that microsecond timings were being stored, even if it was only to log times of inserts and updates of rows and not related to actual business data. My preference is usually to log all changes associated with a single load process with a single timestamp — in Informatica I derive that from the session timestamp so that it is a little easier to look up the mapping execution that performed the load. If I want to know about the performance or the order in which rows are inserted or updated then I’d probably go for turning on verbose logging in the workflow — I’ve rarely found the need to do so.

Secondly, and this is a more long-lasting and disturbing effect, have I officially turned into an old fogey now? “I’ll have no truck with these new-fangled data types … DATE was good enough for me, and it’ll be good enough for these young whippersnappers … if the good lord had intended us to measure in microseconds …” etc etc. Maybe I’m still having trouble letting go of the SQL*Plus COPY command, which I used just yesterday to move data from one test database to another, and which probably saved me about a week of messing around otherwise. TIMESTAMP is not supported by the sadly-deprecated COPY command.