Most of that article is spent claiming that 192kHz is overkill because everything above 20kHz is inaudible. He shows how a square-looking waveform has all the right spectral components in the 0-20kHz range, and therefore isn't missing anything. But this is a Fourier/Nyquist-type argument, and it assumes linearity.
As you put it, F(a+b) = F(a) + F(b). When that holds, it's as he said. But if F(a+b) != F(a) + F(b), then you need more than 20kHz to describe the spectrum.
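Just to make the linearity point concrete: the Fourier transform itself really does satisfy F(a+b) = F(a) + F(b), which you can check numerically in a couple of lines (this is only an illustration of the identity, not anything from the article):

```python
import numpy as np

# Two arbitrary signals; any pair will do since linearity holds for all inputs.
rng = np.random.default_rng(0)
a = rng.standard_normal(1024)
b = rng.standard_normal(1024)

# Transform of the sum vs. sum of the transforms.
lhs = np.fft.fft(a + b)
rhs = np.fft.fft(a) + np.fft.fft(b)

print(np.allclose(lhs, rhs))  # True: the transform is linear
```

The catch is that this says nothing about whether the *ear* is linear; it only says the analysis tool is.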
I'm not saying 192kHz is the right thing. I'm just saying the entire argument in the article assumes linearity to draw the conclusion that the 0-20kHz spectrum contains all the information you can hear.
In fact, we already know that ears are not linear. This is how some lossy compression algorithms work: they know that as the signal gets loud, you can't hear quieter frequencies as well, so those are removed. That example actually cuts in the opposite direction (less information is needed, not more), but it still supports the notion that describing everything by linear spectral analysis is wrong when the system is non-linear.
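Here's a toy sketch of the kind of thing a non-linearity can do (my own example, not from the article): feed two tones that are both above 20kHz through a mild quadratic non-linearity, and a difference tone appears down at 1kHz, squarely in the audible band. The specific tone frequencies and the 0.1 coefficient are arbitrary choices for illustration.

```python
import numpy as np

fs = 192_000              # sample rate high enough to represent the ultrasonics
t = np.arange(fs) / fs    # exactly one second, so FFT bins land on 1 Hz spacing

# Two tones above the nominal 20 kHz hearing limit.
x = np.sin(2 * np.pi * 25_000 * t) + np.sin(2 * np.pi * 26_000 * t)

# A mild quadratic non-linearity (hypothetical stand-in for a non-linear ear).
y = x + 0.1 * x**2

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), d=1 / fs)

# The x**2 term contains cos(2*pi*(26k - 25k)*t), i.e. a 1 kHz difference tone.
idx_1k = np.argmin(np.abs(freqs - 1_000))
print(spectrum[idx_1k])   # ~0.05, a real in-band component that wasn't in x
```

A purely linear system fed only ultrasonic tones can never produce that 1kHz component; the non-linearity creates it. Which is exactly why "the 0-20kHz spectrum of the recording" and "what you hear" can differ once linearity fails.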
You said, well, it's just a change of basis. Sort of. How tightly you want to sample has to be determined first, and that is what actually sets the bases the analysis is changing between. A given point spacing in time, over a given length of time, fixes the interval over which the Fourier transform exists. Conversely, if you insist that the highest frequency is 20kHz (so a 40kHz sample rate, by Nyquist), then you have fixed the time spacing of the samples. You are then blind to anything between those sample points, which is where the non-linear effects could, conceivably, hide.
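The tradeoff is easy to see numerically (again just an illustration, with an arbitrary choice of rate): once you pick the sample spacing and the window length, the highest representable frequency and the frequency resolution are both pinned down, and nothing between the grid points is represented.

```python
import numpy as np

fs = 44_100   # chosen sample rate (illustrative; any rate works the same way)
N = 44_100    # number of samples, i.e. a one-second window

# Frequency bins of a real FFT over that window.
freqs = np.fft.rfftfreq(N, d=1 / fs)

print(freqs[-1])  # 22050.0 Hz: Nyquist, fixed at fs/2 by the sample spacing
print(freqs[1])   # 1.0 Hz: bin spacing, fixed at fs/N by the window length
```

So the "basis" you change into isn't free: picking the time grid has already decided which frequencies exist in it.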