The standard recommendation I've seen is to overwrite at least 3 times, and perhaps 5, 7, or even 9[0], often with a final all-zero pass[1] (an all-zero nominal image might discourage someone from looking harder, whereas a disk full of random-looking data can only result from a random overwrite or from a full-disk encryption system).
The "kill it with fire" technique is more a question of speed and when you can afford to destroy disks. I've heard the NSA burns their disks, and Google physically mangles disks, but consider that those organizations are going to get rid of disks either when the device using them is past its useful lifetime, or when the disk starts failing. At that point the future value of keeping the disk around is low. It's more cost effective to use a quick method that prevents data recovery (of the desired level depending on threat model), rather than tying up computers and personnel in lengthy overwrite procedures when the disk is probably going to be thrown out anyway.
The reason for multiple overwrites is that the absolute magnetic reading at each bit storage position isn't digital. Instead of a clean "1" or "0", you might see 0.998 or 0.005.
The one in-depth article I read a while back said that an overwrite moves the stored magnetization roughly 90% of the way toward the newly written value. If a bit was "1" and is overwritten with "0", the new value would be about 0.1. Subsequent overwrites attenuate past data the same way. Given disk error rates today, I think 90% is optimistically high.
For the sake of simplicity, assume each overwrite pass moves the value exactly 90% of the way from its current level to the target value: every bit on the disk will then read either between 0.0 and 0.1 or between 0.9 and 1.0. More specifically, there are four possibilities for each bit. A reading in the range 0.00 to 0.01 means both the current and the previous image stored a zero; 0.09 to 0.10 means the current image stores a zero and the previous image stored a one; 0.90 to 0.91 means the current image stores a one and the previous image stored a zero; and 0.99 to 1.00 means both stored a one.
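To make the arithmetic concrete, here's a toy sketch in Python of that idealized model (exact 90% moves, no noise anywhere), just enumerating the four bands:

```python
# Idealized model: each overwrite moves the analog level 90% of the way
# from where it is toward the newly written bit (no noise of any kind).
def overwrite(level, bit):
    return 0.1 * level + 0.9 * bit

# After one pass, a previous bit of 0 leaves the level somewhere in [0.0, 0.1]
# and a previous bit of 1 leaves it in [0.9, 1.0]. Apply one more pass and see
# which band each (previous bit, current bit) combination lands in.
bands = {0: (0.0, 0.1), 1: (0.9, 1.0)}
for prev_bit, (lo, hi) in bands.items():
    for curr_bit in (0, 1):
        print(f"prev={prev_bit} curr={curr_bit}: "
              f"{overwrite(lo, curr_bit):.2f} .. {overwrite(hi, curr_bit):.2f}")
# prev=0 curr=0: 0.00 .. 0.01
# prev=0 curr=1: 0.90 .. 0.91
# prev=1 curr=0: 0.09 .. 0.10
# prev=1 curr=1: 0.99 .. 1.00
```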
With a perfectly accurate magnetic detector, a perfectly consistent HDD write mechanism, and a perfectly linear and resilient magnetic layer on the platter, you could recover past images one by one: once you determine the logical value of the most recent image, you apply a function (in this simple model, a linear map) to strip out the computer-visible layer and derive the exact magnetic reading as it stood before the last overwrite. Wash, rinse, repeat...
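Under those perfect-world assumptions, "strip out the computer-visible layer" is just inverting that linear map. Here's a minimal sketch of the peel-back loop, still the noiseless toy model (the 0.5 starting level and the example bit history are arbitrary):

```python
# Still the idealized, noiseless 90% model. Given one perfect analog reading,
# peel back the passes one at a time: round to get the bit most recently
# written, then invert the linear map to recover the level as it stood
# before that pass was written.
def peel_layers(level, passes):
    bits = []
    for _ in range(passes):
        bit = round(level)                   # most recent logical value
        bits.append(bit)
        level = (level - 0.9 * bit) / 0.1    # undo: new = 0.1*old + 0.9*bit
    return bits                              # newest image first

# Example: bit history 1, 0, 1, 1 (oldest first), starting from a 0.5 level.
level = 0.5
for b in (1, 0, 1, 1):
    level = 0.1 * level + 0.9 * b
print(peel_layers(level, 4))   # [1, 1, 0, 1] -- newest first
```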
The objective of overwriting several times is to push the magnetic residue of the last "real" stored data down into the noise: noise of the magnetic imager used to take the raw readings, or, much more likely, noise of the HDD write mechanism (it isn't writing a perfect "1" each time) and imperfections of the magnetic substrate that make the storage itself imperfect.
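A rough way to see this in the same toy model is to add a little write noise per pass and watch how quickly the peel-back trick falls apart. Everything here is an assumption for illustration; in particular the 2% noise sigma is a made-up figure, not a measured drive characteristic:

```python
import random

# Toy model of one bit cell: each overwrite moves the analog level 90% of
# the way toward the new bit, plus Gaussian write noise. SIGMA is a made-up
# illustrative figure, not a real drive spec.
SIGMA = 0.02
PASSES = 4          # the "real" image plus three overwrite passes
CELLS = 100_000     # independent bit positions

def peel(level, passes):
    """Recover the most recent `passes` bits from one noise-free analog
    reading, assuming the attacker knows the 90% model but not the noise."""
    bits = []
    for _ in range(passes):
        bit = 0 if level < 0.5 else 1
        bits.append(bit)
        level = (level - 0.9 * bit) / 0.1   # inverting the map amplifies unknown noise 10x
    return bits                              # newest first

random.seed(0)
correct = [0] * PASSES
for _ in range(CELLS):
    history = [random.getrandbits(1) for _ in range(PASSES)]   # oldest first
    level = 0.5
    for b in history:
        level = 0.1 * level + 0.9 * b + random.gauss(0, SIGMA)
    recovered = peel(level, PASSES)
    for depth in range(PASSES):
        correct[depth] += recovered[depth] == history[-1 - depth]

for depth, n in enumerate(correct):
    print(f"{depth} passes back: {100 * n / CELLS:.1f}% of bits recovered")
# Roughly: ~100% for the current image, somewhere around 98% one pass back,
# and rapidly approaching a coin flip after that.
```

The point is that inverting the 90% map multiplies any noise the attacker can't account for by 10 per layer, so every pass you go back costs roughly an order of magnitude of signal-to-noise.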
I think recommendations for 35 overwrites, or even 9, may be overestimating the adversary's capabilities. Not because of anything the adversary does, but because of modern hard drives: data is crammed into such small magnetic wells that the absolute magnetic readings are less consistent than ever before. Given the error rates of modern TB-sized disks, I would expect many blocks with unrecoverable read errors (2+ bit errors per block) when reconstructing even the second-to-last magnetic image, and I would expect errors to increase non-linearly with each additional step back. My WAG is that before 9 overwrites you're already at the point where even a perfect magnetic detector is reading nothing but low-level noise from the drive. (I'm talking about noise from the imperfect magnetic layer on the disk surface and from the fluctuating write-field strength of the drive head.)
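As a back-of-envelope check on that guess: under the 90% model, data k passes back survives only as a 10^-k fraction of the full signal swing, so even a generous noise floor buries it after a handful of passes (the noise-floor percentages below are guesses for illustration, not measurements):

```python
# Residue of data k passes back is 0.1**k of full swing under the 90% model.
# Compare against a few assumed noise floors (illustrative guesses only).
for noise in (0.03, 0.005, 0.0005):        # 3%, 0.5%, 0.05% of full swing
    k = 0
    while 0.1 ** k > noise:
        k += 1
    print(f"noise floor {noise:.2%}: residue below noise after {k} passes")
# noise floor 3.00%: residue below noise after 2 passes
# noise floor 0.50%: residue below noise after 3 passes
# noise floor 0.05%: residue below noise after 4 passes
```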
[0] see, for instance, http://www.securityfocus.com/archive/1/310128
[1] An all-zero overwrite simply provides a surface layer of plausible deniability against someone who checks the drive contents with commodity hardware rather than a magnetic imager. A disk area filled with statistically random data, AFAIK, has only two causes: (1) a full-disk encryption program in a mode that doesn't use a header (e.g. Truecrypt's hidden containers), or (2) a secure overwrite pass. Both might draw unwanted attention in certain situations, whereas an all-zero disk might be mistaken for an unused drive.