...I can see at least one bogosity and a couple of omissions.
The author claims that the "phase doesn't matter" with the Nyquist criterion, when it can easily be shown that, for instance, sampling a 20 kHz sine wave at exactly 40 kHz can result in a zero signal if the input and the sampling are synchronized such that every sample lands where the input waveform crosses zero. If they're slightly out of sync, something will get through, but it will be greatly attenuated.
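Here's a quick numeric sketch of that degenerate case (assuming NumPy is available; the 10-degree offset is just an illustrative amount of drift, not anything special):

import numpy as np

fs = 40_000.0          # sample rate: exactly twice the input frequency
f  = 20_000.0          # 20 kHz input sine
t  = np.arange(16) / fs

# Sampling instants synchronized to the zero crossings: every sample is ~0
in_phase = np.sin(2 * np.pi * f * t)
print(np.max(np.abs(in_phase)))    # on the order of 1e-15 -- nothing gets through

# Slightly out of sync (10 degrees of phase offset): something survives, but attenuated
offset = np.sin(2 * np.pi * f * t + np.deg2rad(10))
print(np.max(np.abs(offset)))      # ~0.17, versus a true peak amplitude of 1.0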
More important is the issue of "aliasing": if the input contains any component above half the sampling rate, the digitized result will contain a folded "difference" component somewhere in the audible spectrum (see the short sketch at the end of this post). For an idea of what this might sound like, listen to Don Ellis playing his trumpet through a ring modulator at the beginning of "Hey Jude" from the "Live At Fillmore" album.

In practice, the sampling rate is placed somewhat higher than twice the maximum input frequency, to compensate for the analog input filter's cut-off being less than perfect. The 44.1 kHz rate for CD audio was about the lowest rate at the time that allowed the recording industry to claim "high fidelity", i.e. reproduction of a 20 Hz to 20 kHz bandwidth. 48 kHz is probably safer. Admittedly, 192 kHz is overkill, but perhaps not for mastering, given the amount of post-processing that's likely to happen between the original recording and the listener. Typical "webcasting" software, for example, contains multiple layers of digital filters, compression and whatnot, so it helps to start with something that's not already compromised.
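To make the folding concrete, here's a minimal sketch (again assuming NumPy; the 30 kHz tone is just an illustrative ultrasonic component): a 30 kHz tone sampled at 44.1 kHz produces exactly the same samples as a 14.1 kHz tone, so once digitized the two are indistinguishable.

import numpy as np

fs      = 44_100.0     # CD sample rate
f_in    = 30_000.0     # ultrasonic component, above the 22.05 kHz Nyquist limit
f_alias = fs - f_in    # the folded "difference" frequency: 14.1 kHz, squarely audible

t = np.arange(64) / fs
ultrasonic = np.cos(2 * np.pi * f_in * t)
audible    = np.cos(2 * np.pi * f_alias * t)

# The two sample sequences match to within rounding error, so no digital
# processing after the converter can tell them apart or remove the alias.
print(np.max(np.abs(ultrasonic - audible)))   # ~1e-13, i.e. identical

That's also why the anti-aliasing filter has to do its job in the analog domain, before the converter ever sees the signal.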