Why TEMP Tablespace != Large Block Size (the second in an occasional series)

Because Oracle uses direct I/O for temporary tablespaces, block size is not relevant.


7 thoughts on “Why TEMP Tablespace != Large Block Size (the second in an occasional series)”

  1. Not all platforms allow direct I/O, so perhaps your statement is not true in all cases?

    Windows ports, for example, always use direct I/O (they can’t do otherwise), whilst various Unix ports may or may not be able to, and it can be configured using various initialisation parameters.

    Steve Adams talks in depth about a lot of this stuff…

    http://www.ixora.com.au/notes/filesystemio_options.htm

  2. Thanks Jeff, but perhaps I’m being ambiguous here … by “direct i/o” I mean that the reads and writes Oracle performs bypass the Oracle buffer cache (in which the logical units of Oracle block size are defined).

    Therefore the minimum block size defined in the buffer cache is not relevant to performance, and the size of i/o requested by Oracle is not governed by block_size * db_file_multiblock_read_count but instead by the _direct_io_count (off the top of my head) parameter.

  3. I think the parameter was DB_FILE_DIRECT_IO_COUNT in 8.0.x, measured in blocks and was then deprecated in 9.0.x and replaced by the underscored equivalent _db_file_direct_io_count, measured in bytes.

    I’m still not convinced of your position on direct I/O … although I admit I’m no expert! The above hidden parameter determines the number of blocks requested, of the block size specified for the TEMP tablespace, subject to the max_io_size limit. Thus, if the block size for the TEMP tablespace is 32K and the value of _db_file_direct_io_count is 64, then 2MB of data is requested in each IO operation. But generally max_io_size maxes out at 1MB (at least on my HP-UX boxes at work it does), so it would actually request only 1MB per IO operation: for the 2MB in question, two separate requests would be made to the IO subsystem.

    So … I believe that the block size is relevant: if it were only 8K, with a _db_file_direct_io_count of 64 (the default, apparently), then the IO request size would be only 512K.

    It’s late and I’m tired…so forgive me if I’ve completely missed the point.

  4. The important number here, I think, is neither the block size nor the io_count individually, but the product of the two. Given that the count can be adjusted to make this product a particular value whatever the value of the block size, that doesn’t really make the block size the key influence on the io size — the two are really independent.

  5. I probably didn’t say it directly, but it was kind of implied in my last post. I agree that you can use the multiplier parameter to minimise the number of requests to the IO subsystem, whatever your block size is.

    My gut feel is still to go with the largest block size (32K) in TEMP, since there will be less block-overhead wastage: fewer blocks are needed for a fixed volume of data.

    I think we’re almost in violent agreement!

  6. Yeah, it could be that you get more data in that way, I suppose, but I don’t know anything about the structure it’s stored in. I wouldn’t think you’d squeeze much more performance out, though; certainly not the 30% that Mike Ault quoted as the benefit of a large block size for TEMP.

    The other issue for me would be that it requires a 32kb system TS, and unless you want to stick with that across the board then you’re into multiple buffer caches, which is a bit of a headache.

    But violent agreement is the best sort, I think.

  7. I have a query…which may be slightly related…I’ve added the note to my blog at http://oramossoracle.blogspot.com/2005/09/temp-file-io-why-so-many-few-blocks.html

    …but I’ll say it again here in the hope that someone (you?) may give some insight … feel free to get rid of this post or move it elsewhere or just answer on my blog … wasn’t sure if there was any protocol about asking questions of you/others…

    I run this:

    SELECT phyrds
         , phyblkrd
         , ROUND(phyblkrd / phyrds, 2) blocks_per_read
    FROM   (SELECT SUM(f.phyrds)   phyrds
                 , SUM(f.phyblkrd) phyblkrd
            FROM   v$tempstat f
           )

    and this comes back with a blocks_per_read value of 2.23 on the warehouse I’m working on.

    Why so small? Another query I run, showing this kind of information against different data areas within the warehouse, shows the other areas getting much higher figures for the blocks-per-read metric (like 20 to 50).

    I’d have thought this figure would be higher for temp ?

    Any ideas?
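For what it’s worth, the request-size arithmetic discussed in comments 3 to 5 can be sketched in a few lines. This is a minimal illustration of the commenters’ model (request size = block size × _db_file_direct_io_count, capped at max_io_size); the 1MB cap and the parameter behaviour are assumptions taken from the discussion, not values verified against any particular Oracle port:

```python
# Sketch of the direct-path I/O request arithmetic from the comments above.
# The 1 MB max_io_size cap is an assumption from the discussion, not a
# verified limit for any given platform.

def direct_io_requests(block_size, direct_io_count, max_io_size=1024 * 1024):
    """Return (request_size, requests_needed) for one multiblock read."""
    desired = block_size * direct_io_count      # e.g. 32K * 64 = 2 MB
    request_size = min(desired, max_io_size)    # capped by the OS/port limit
    requests = -(-desired // request_size)      # ceiling division
    return request_size, requests

# 32K blocks, count of 64: wants 2 MB, capped at 1 MB, so two requests
print(direct_io_requests(32 * 1024, 64))   # (1048576, 2)

# 8K blocks, count of 64: only 512K per request, under the cap
print(direct_io_requests(8 * 1024, 64))    # (524288, 1)
```

On this model the block size and the count matter only through their product, which is the point made in comment 4.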
