A List: Ways to Scan a Table Faster

So you need to read an entire table? Here are some features and techniques that may help.

Employ Parallelism: Ah, the sweetest technique of all. Brute force. If you have the resources, then let Oracle divide the table into chunks and use all your system resources to scan it. Unfriendly in an OLTP environment, mind you.
Compress Data Segments: Make the table smaller! Block level compression can shrink that data like a plunge into cold water. Very OLTP unfriendly.
Reduce Free Space Percent: PCTFREE = very small => more rows per block => smaller table. And potentially a higher chance of row migration, of course.
Increase Percent Used: PCTUSED = large => less likely that blocks retain free space following deletes => more rows per block => smaller table.
Use a Larger Block Size: For significantly long rows a larger block size can reduce the empty space left at the end of each block, hence a smaller table to scan.
Reorder The Columns: If you are commonly interested in just a subset of columns (for example metrics in a fact table) then consider making them the first columns of the table definition – for tables with lots of columns there is measurable overhead in finding the end columns (I’m not talking about row chaining here). Hmmm, maybe I’ll post something else about this.
Index Columns of Interest: An index can be treated as a skinny table, and your query might be satisfied by a fast full or full index scan. The usual comments about NULL values in indexes apply here. Don’t neglect consideration of index size either – index key compression and use of bitmap indexes provide smaller structures to scan.
Materialized Views: Is there anything they’re not good for? This could be a genuine skinny table, or an aggregation of selected columns.
Ensure Table Is Spread Over Available Devices: With consequent reduced likelihood of encountering an i/o choke point.
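As a sketch of the parallelism point, a parallel full scan can be requested per-query with hints – the SALES table and the degree of 8 here are purely illustrative, and you need the resources (and a forgiving workload) to back it up:

```sql
-- Request a parallel full table scan of a hypothetical SALES table.
-- Degree 8 is illustrative; size it to your available CPU and i/o bandwidth.
SELECT /*+ FULL(s) PARALLEL(s, 8) */
       SUM(s.amount)
FROM   sales s;
```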
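The segment compression and PCTFREE points can be combined in a single rebuild – again a sketch against a hypothetical SALES table, and remember both choices are OLTP-unfriendly:

```sql
-- Rebuild the table smaller: block-level compression plus PCTFREE 0
-- packs more rows per block, so fewer blocks to scan.
ALTER TABLE sales MOVE COMPRESS PCTFREE 0;

-- A MOVE leaves dependent indexes unusable, so rebuild them afterwards.
ALTER INDEX sales_pk REBUILD;
```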
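The skinny-table-via-index idea might look like this – column and index names are assumptions, and note the NOT NULL requirement that lets the optimizer trust the index to contain every row:

```sql
-- A compressed index on just the columns of interest gives a smaller
-- structure to scan. A fast full scan is only valid if at least one
-- indexed column is NOT NULL, so every row must appear in the index.
CREATE INDEX sales_scan_ix ON sales (sale_date, amount) COMPRESS 1;

SELECT /*+ INDEX_FFS(s sales_scan_ix) */
       sale_date, SUM(amount)
FROM   sales s
GROUP  BY sale_date;
```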
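And the materialized view option, as a minimal sketch (names illustrative; query rewrite also needs the appropriate privileges and initialization settings):

```sql
-- A genuine skinny table: a materialized view holding just the aggregate,
-- which the optimizer can substitute for the base table via query rewrite.
CREATE MATERIALIZED VIEW sales_by_day
ENABLE QUERY REWRITE
AS SELECT sale_date, SUM(amount) AS total_amount
   FROM   sales
   GROUP  BY sale_date;
```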

There is not a single feature listed here that carries with it no disadvantages, and depending on your circumstances the severity of the disadvantages may range from the insignificant to the devastating.

Do not implement any of these without understanding how the feature works, the mechanism by which it provides the advantage, how it brings disadvantages, and how these all interact in your particular situation.


12 thoughts on “A List: Ways to Scan a Table Faster”

  1. bitmap indexes, mate. no null disadvantages here.

    don’t be going and using the underlying table for anything else admittedly, but hey.

  2. you forgot one very important way to improve reading an entire table:

    don’t.

    don’t ask me why I say this. It involves tuning SQL…

  3. “bitmap indexes, mate. no null disadvantages here.” … or composite indexes where at least one column is non-null, or some funky FB index …

    It’d be a nice enhancement if there was an INCLUDING NULLS clause on btree indexes. I don’t see any downside to that.

    “you forgot one very important way to improve reading an entire table: don’t.”

    Oh, that’s a whole can of worms there noons … we could equally well say that to speed access to part of a table (in that other list) just read the whole thing, in some circumstances. Well, maybe I’ll add them both as it does fit in with the holistic approach.

  4. How about archiving old data in the table? Or at least moving unused data to some other table.

    Fewer rows = faster scans.

    … stop laughing, that’s a viable solution sometimes.

  5. “… stop laughing, that’s a viable solution sometimes.”

    Laughing? Definitely not me: recently archived 100 million rows off one of our tables and the performance has definitely picked up.

    One of the most underrated ways of improving performance, IMHO.

  6. Philosophically, moving old data to an archive table is very similar to partitioning. You get the ability to perform a multiblock read on a subset of the entire data, or on the complete data set through a UNION ALL, and you get statistics at the “partition” level. The indexes of course are always local.

  7. well what about multiblock_read_count? To the max, baby – but again there are disadvantages for OLTP systems.

    and that an index may slow things down for access purposes if it’s not the right one?

  8. multiblock read counts: I’d agree that if the multiblock read size is too small then it will hurt multiblock read performance by requiring more reads that are smaller than they should be, but I believe that there is only one right size for MBRC and that is the correct size.

    You can manipulate it to promote or demote full scans over index-based access but that works by either making full scans more attractive to the optimizer than they should be, or by crippling them to be less effective than they ought to be.

    So I’d be inclined to place this under the heading of “Reasons why full scans aren’t as fast as they ought to be”, which I think is a rather subtly different issue to the one that the list addresses.

  9. Oracle Parallel works very well if you have a good RAID with a large stripe size. On a 14 disk subsystem, I’ve easily reached the 320MB/s SCSI limit doing FTSs with Degree of 4 (or was it 8?). However you’ll find that you’ll usually want to do some aggregation on the table and that (Sort Aggregate or in 10.2 Hash Aggregate) can add X times to the plain table scan. Back again to MViews…
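The archive-table-plus-UNION ALL idea from comment 6 above could be sketched as a view over the two segments – the table and view names here are purely hypothetical:

```sql
-- Poor man's partitioning: current and archived rows in separate tables,
-- reunited by a UNION ALL view when the complete data set is needed.
-- Scans against sales_current alone touch only the smaller segment.
CREATE VIEW sales_all AS
SELECT * FROM sales_current
UNION ALL
SELECT * FROM sales_archive;
```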
