Microsoft actually thought otherwise: they found that if your fragments are sufficiently large (on the order of 64 MB or more), you can buffer and seek between consecutive reads in a way that makes the fragmentation largely irrelevant. As a result, the NTFS defragger no longer bothers consolidating fragments that are already larger than 64 MB.
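The back-of-envelope math makes it plausible. Here's a rough sketch with assumed figures (roughly a 12 ms seek and 100 MB/s sequential throughput for a typical 7200 RPM drive; these numbers are mine, not Microsoft's):

```python
# Rough overhead estimate for reading a file split into 64 MB fragments.
# Assumed figures (not from Microsoft): ~12 ms seek + rotational latency
# per fragment boundary, ~100 MB/s sustained sequential throughput.
SEEK_S = 0.012          # cost of jumping to the next fragment
THROUGHPUT_MBPS = 100   # sequential read speed
FRAGMENT_MB = 64

transfer_s = FRAGMENT_MB / THROUGHPUT_MBPS   # ~0.64 s to stream one fragment
overhead = SEEK_S / (transfer_s + SEEK_S)    # fraction of time spent seeking

print(f"seek overhead per 64 MB fragment: {overhead:.1%}")  # roughly 2%
```

At those numbers, each extra seek costs you around 2% of the time spent streaming the fragment itself, which is pretty much noise.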
But you know, I think I might actually test that theory myself if I get around to it. I'd really like to know whether their claim still holds up on today's hard drives, given how long seek times can be. Considering Microsoft uses NTFS for everything from the desktop OS to SQL Server and other "enterprisey" products, I'd say they have some experience in the matter. And even in the case of large database objects, if you're incrementally adding entries to a table, how likely is it that the table and all its indexes are going to stay consolidated on disk, even if the filesystem does its best to keep the database file intact?
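If I ever do get around to testing it, a minimal sketch might look something like this. The filename is hypothetical, and it only approximates fragmentation by reading 64 MB chunks of one big file in random order rather than actually fragmenting it on disk; you'd also want to drop the OS page cache between runs to get honest numbers:

```python
import os, random, time

PATH = "bigfile.bin"        # hypothetical multi-GB test file
CHUNK = 64 * 1024 * 1024    # 64 MB "fragments"

def read_chunks(path, offsets):
    """Read the file one 64 MB chunk at a time, in the given order."""
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        for off in offsets:
            f.seek(off)
            f.read(CHUNK)
    return time.perf_counter() - start

size = os.path.getsize(PATH)
offsets = list(range(0, size, CHUNK))

sequential = read_chunks(PATH, offsets)                              # contiguous order
shuffled = read_chunks(PATH, random.sample(offsets, len(offsets)))   # simulated fragmentation

print(f"sequential: {sequential:.1f}s  shuffled: {shuffled:.1f}s")
```

If the shuffled run comes out only a few percent slower than the sequential one, that would back up the 64 MB cutoff.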