Why is it that when I look at Wikipedia, they show all the various counters more or less in agreement, except Net Applications, which vastly overcounts IE and undercounts Chrome, Android, and Safari?
Maybe because Net Applications is the only counter that tries to correct for known sampling skew. Net Applications uses CIA internet usage data (how much of the population in each country has access to the Internet) to estimate absolute numbers for each country, based on the measured distribution and that country's Internet population. Net Applications is perfectly honest and upfront about this.
The other counters just report whatever stats have been collected. They, too, are perfectly honest and upfront about this.
Both correcting and not correcting may leave errors. Be your own judge.
But there's a perfectly good explanation as to *why* the numbers seem not to agree: they do not even claim to measure the same thing. Net Applications tries to produce a number for the "true" global distribution (and risks errors in doing so); the others do not even claim to compute such a number. In theory you could take the per-country numbers from, say, StatCounter, extrapolate the absolute numbers per country, sum them by browser, and calculate a figure comparable to Net Applications'. It could be interesting to see.
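The extrapolation described above could be sketched roughly like this. All the country names, browser shares, and internet-user counts below are invented for illustration; a real attempt would plug in StatCounter's per-country shares and CIA-style population estimates:

```python
# Hypothetical per-country browser shares (as a StatCounter-style counter
# might report them) and internet-user counts (CIA-style estimates).
# All numbers are made up for illustration.
country_shares = {
    "A": {"IE": 0.60, "Chrome": 0.40},
    "B": {"IE": 0.20, "Chrome": 0.80},
}
internet_users = {"A": 10_000_000, "B": 200_000_000}

# Extrapolate absolute users per browser in each country, then sum globally.
totals = {}
for country, shares in country_shares.items():
    for browser, share in shares.items():
        totals[browser] = totals.get(browser, 0) + share * internet_users[country]

# Convert the global absolute numbers back into shares.
grand_total = sum(totals.values())
global_shares = {b: n / grand_total for b, n in totals.items()}
```

Note how the weighting changes the picture: country B dominates the global number simply because it has far more internet users, which is exactly the kind of correction raw hit counts never make.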
Also, be aware that there is great potential for skewed demographics between the counters, not to mention the fact that Net Applications tries to measure unique visitors (discarding repeat visitors within a month) while most of the others just report page impressions. If, for instance, users of Chrome are more active on the 'net than users of IE, Chrome would have a bigger share of page impressions than of unique visitors. There is no "right" answer here: it all depends on the question you ask. If the question is "which browser is the most popular?", you would look at unique visitors. If the question is "which browser is used the most?", you would look at page impressions.
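A tiny worked example of that divergence, with invented numbers: equal numbers of IE and Chrome users, but the Chrome users view more pages per month.

```python
# Two user groups of equal size; Chrome users browse more pages per month.
# All numbers are invented for illustration.
users = {"IE": 1000, "Chrome": 1000}         # unique visitors per month
pages_per_user = {"IE": 20, "Chrome": 60}    # average page views per visitor

impressions = {b: users[b] * pages_per_user[b] for b in users}

uv_total = sum(users.values())
pi_total = sum(impressions.values())

uv_share = {b: users[b] / uv_total for b in users}              # by unique visitors
pi_share = {b: impressions[b] / pi_total for b in impressions}  # by page impressions
```

By unique visitors the split is 50/50, but by page impressions Chrome takes 75%: the same underlying population, two very different "market shares" depending on what the counter counts.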
Why is it that, of all the various counters, Net Applications is the one most often quoted, even though they appear to be using a bad methodology?
Maybe because they use the *least bad* methodology. The others do not even *pretend* to estimate global usage. They may report what *they* see of usage globally, but none of them claim to know how many users there are in each country.