The myth is still alive …
"To answer your questions, in general, DSS and OLAP databases (those characterized with lot's of full scans) might see a reduction in consistent gets with larger blocksizes. I have some client's use a db_cache_size=32k so that TEMP gets a large blocksize, and then define smaller buffers to hold tables that experience random small-row fetches."
If the clients are using a default block size of 32KB to do that, Don, I hope it wasn't based on your advice, because the block size of a temporary tablespace is irrelevant to the size of the I/O it performs: that is governed by a hidden parameter. There is no logical I/O involved; it is 100% physical.
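For anyone who wants to see this for themselves: direct-path I/O against TEMP is sized in its own units, not in tablespace blocks, and the sizing is controlled by an underscore parameter. The query below is a common sketch for inspecting hidden parameters as SYS; I'm assuming `_db_file_direct_io_count` is the relevant parameter on your version (it has changed meaning between releases, so check your own instance rather than taking the name on faith).

```sql
-- Sketch only: must be run as SYS; hidden parameter names vary by Oracle version.
SELECT i.ksppinm  AS parameter_name,
       v.ksppstvl AS current_value,
       i.ksppdesc AS description
FROM   x$ksppi  i,
       x$ksppcv v
WHERE  i.indx = v.indx
AND    i.ksppinm = '_db_file_direct_io_count';
```

Whatever value comes back, note that it is expressed independently of the TEMP tablespace's block size, which is the whole point: changing the block size of TEMP does not change the size of the direct I/O the kernel issues against it.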
Better get your excuses lined up for when they find out that this is a Big Oracle Myth, Don. Not only are you giving bad advice to people for free on your forum, you're actually charging your clients for advice that will make their databases more complex to create, more difficult to manage, probably more prone to bugs, and with no performance advantage whatsoever.
Mike Ault has excuses ready, by the way … he's just standing by a friend. http://www.blogger.com/comment.g?blogID=11462313&postID=111369118788103463
Comments welcome. As always.