You guys are somewhat right. Bits ARE written to disk as 1's and 0's logically, but as +1 and -1 in a magnetic sense, since the direction switches as you go over each magnetic pole. Note that those pulses interfere with and cancel each other if they get too close, a phenomenon known in the business as high frequency zeros: write a 101010 pattern to the disk, pack the bits too tightly, and you get 000000 back out when you read it.
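To make the cancellation concrete, here's a toy Python sketch. The Lorentzian is the classic textbook model of an isolated readback pulse; the PW50 and spacing numbers are made up for illustration, not any real drive's parameters:

```python
import numpy as np

PW50 = 1.0  # width of an isolated pulse at 50% of peak amplitude (arbitrary units)

def lorentzian(t):
    """Textbook model of an isolated readback pulse."""
    return 1.0 / (1.0 + (2.0 * t / PW50) ** 2)

def peak_amplitude(bit_spacing, n=40):
    """Superpose alternating-polarity pulses (a 101010... pattern) and
    measure the surviving peak amplitude away from the pattern's edges."""
    t = np.linspace(10 * bit_spacing, (n - 10) * bit_spacing, 4000)
    signal = sum((-1) ** k * lorentzian(t - k * bit_spacing) for k in range(n))
    return np.abs(signal).max()

for spacing in (2.0, 1.0, 0.5, 0.25):
    print(f"spacing = {spacing:4.2f} x PW50 -> peak amplitude {peak_amplitude(spacing):.3f}")
```

Run it and the surviving amplitude collapses as the spacing shrinks below PW50: that's your 101010 turning into 000000.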
The UBD (user bit density) is the number of bits stored within the width of an isolated pulse at 50% of its peak amplitude (the PW50).
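In other words (my notation, both quantities in the same units):

```python
def ubd(pw50, bit_spacing):
    """User bit density: how many bits fit inside the 50% pulse width."""
    return pw50 / bit_spacing

print(ubd(1.0, 2.0))   # 0.5 -- comfortable peak-detection territory
print(ubd(1.0, 0.25))  # 4.0 -- far beyond what a peak detector can handle
```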
Old drives (pre-'96 or so) used peak detectors to find the pulses. There we used high-frequency "boost" or pulse slimming to sharpen the pulses, read the 1s and 0s, and get around the high frequency zeros problem, but the UBD was limited to less than 1.
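For flavor, a crude sketch of what a peak detector does: rectify the waveform and call every local maximum above a threshold a bit. The toy waveform and threshold here are invented for illustration, not lifted from any real channel:

```python
import numpy as np

def peak_detect(signal, threshold=0.5):
    """Flag a bit wherever the rectified signal has a local maximum above
    the threshold -- roughly the job the old analog channels did."""
    s = np.abs(signal)
    is_peak = (s[1:-1] > s[:-2]) & (s[1:-1] >= s[2:]) & (s[1:-1] > threshold)
    return np.flatnonzero(is_peak) + 1  # sample indices of detected pulses

t = np.linspace(0, 10, 1000)
# Two well-separated pulses of opposite polarity: easy pickings.
signal = 1.0 / (1 + (2 * (t - 3)) ** 2) - 1.0 / (1 + (2 * (t - 7)) ** 2)
print(peak_detect(signal))  # two hits, near samples 300 and 700
```

Push the pulses together, though, and the peaks shrink and merge, which is exactly the high frequency zeros problem above.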
About '95 or so IBM introduced a much more complicated technology called PRML, for Partial Response, Maximum Likelihood detection. These detectors exploit intersymbol interference in a controlled way, using things like Viterbi detectors and even more complicated backends. This more sophisticated technology allows drives to reach UBDs above 3. It's not totally unlike QAM, but most of the details are pretty different. Besides, when I was doing QAM systems the data rate (not the carrier frequency) was much, much lower than it is in a drive, so it was a heck of a lot easier to do.
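A bare-bones sketch of the idea, assuming the simplest partial-response target, PR4 (each ideal sample is y[k] = x[k] - x[k-2], so the ISI is deliberate and known), with a textbook Viterbi detector on top. Real channels add equalization, timing recovery, and fancier targets; none of that is here:

```python
import itertools
import numpy as np

def pr4_output(bits, memory=(0, 0)):
    """Ideal PR4 samples y[k] = x[k] - x[k-2] for bits mapped to +/-1.
    The channel starts from two known 'memory' bits."""
    x = [2 * b - 1 for b in list(memory) + list(bits)]
    return [x[k] - x[k - 2] for k in range(2, len(x))]

def viterbi_pr4(samples):
    """Maximum-likelihood sequence detection over the PR4 trellis.
    State = the last two input bits; branch metric = squared error
    between the received sample and the ideal PR4 level."""
    states = list(itertools.product((0, 1), repeat=2))  # (x[k-2], x[k-1])
    cost = {s: (0.0 if s == (0, 0) else float("inf")) for s in states}
    path = {s: [] for s in states}
    for r in samples:
        new_cost, new_path = {}, {}
        for (b2, b1) in states:
            for b0 in (0, 1):  # hypothesize the current bit
                ideal = (2 * b0 - 1) - (2 * b2 - 1)  # one of -2, 0, +2
                c = cost[(b2, b1)] + (r - ideal) ** 2
                nxt = (b1, b0)
                if nxt not in new_cost or c < new_cost[nxt]:
                    new_cost[nxt] = c
                    new_path[nxt] = path[(b2, b1)] + [b0]
        cost, path = new_cost, new_path
    return path[min(cost, key=cost.get)]

rng = np.random.default_rng(0)
bits = list(rng.integers(0, 2, 30))
noisy = [y + rng.normal(0, 0.4) for y in pr4_output(bits)]
print("bit errors:", sum(b != d for b, d in zip(bits, viterbi_pr4(noisy))))
```

The point is that the detector never decides one sample at a time; it picks the whole bit sequence whose ideal PR4 output is closest to what it actually read, which is how it tolerates ISI that would flatten a peak detector.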
Oh, and writing exactly what you want to a drive is almost impossible anyway. You remember that PRML stuff? It requires coding, meaning that only certain patterns are allowed (this has to be a DC-neutral system). Further, in the more modern systems parity bits are written to the disk, special randomizers are added to improve coding efficiency and spread the spectrum, etc. You could no more write an arbitrary pattern to the disk than you could use a soldering iron to patch an i7's microcode.
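For the randomizer piece alone, the usual trick is to XOR the user data with a fixed pseudo-random sequence so repetitive patterns rarely reach the heads, then XOR with the same sequence again on readback. A toy sketch; the register length, taps, and seed are arbitrary choices of mine, not anything a real drive uses:

```python
def lfsr_stream(seed, nbits):
    """A 7-bit Fibonacci LFSR (tap positions chosen arbitrarily here)."""
    state = seed & 0x7F
    for _ in range(nbits):
        yield state & 1
        fb = ((state >> 6) ^ (state >> 5)) & 1  # feedback from the top two taps
        state = (state >> 1) | (fb << 6)

def scramble(bits, seed=0x5A):
    """XOR data with the pseudo-random stream; applying it twice undoes it."""
    return [b ^ p for b, p in zip(bits, lfsr_stream(seed, len(bits)))]

data = [1, 0, 1, 0] * 8           # the repetitive pattern you *think* you're writing
on_disk = scramble(data)          # the whitened pattern that actually gets recorded
recovered = scramble(on_disk)     # same XOR on readback restores the data
print(on_disk[:16])
print(recovered == data)          # True
```

And that's just one layer; the DC-neutral coding and the parity bits sit on top of it, so the physical pattern on the platter bears little resemblance to your data.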