If you just read the abstract of TFA you can see that the claim here is less novel than the press release makes it sound (the press overplays things - SHOCKER!).
Now, I am personally a bit dubious this is the first time the alternate derivation has been done, but I haven't read their particular approach. One would hope any reviewers assigned to the paper did reasonable due diligence/homework on the particulars (though sometimes that hope is in vain).
The press release doesn't cover that, nor does the abstract, and the rest of TFA is behind a paywall.
In case the one-liner in the subject isn't verbose enough, the issue is "what is being measured". One needs some kind of gold standard. "Intelligence" is a slippery enough concept that in practice it tends to be "defined by" some kind of measurement scheme. Any new measurement scheme has to be calibrated against some existing one -- i.e. the new measurements explain intelligence as independently assessed by some other extant measurement scheme.
Unless they get a lot better at correlating than 20%-ish, then either they represent a refutation of those existing schemes (which requires some other compelling argument) or they are dramatically inferior but a new enough approach to be "publishable". The latter is probably all the research article is about. So, don't get your hopes up on "pinning down the slippery". If you are already uncomfortable with IQ tests as assessments then you probably won't accept any calibration of the new technique, and thus will view it even more skeptically than the existing techniques.
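To make the calibration point concrete, here is a minimal sketch (the numbers are entirely made up, nothing from the paper) of what "correlating with an existing scheme" means in practice, and why a correlation around 0.2 explains very little of the variance:

```python
import numpy as np

# Hypothetical data: conventional IQ scores and a new measure for the same
# 10 subjects (values invented purely for illustration).
iq_scores  = np.array([ 95, 102, 110,  88, 120, 105,  99, 130,  92, 115])
new_metric = np.array([0.42, 0.55, 0.48, 0.40, 0.61, 0.39, 0.52, 0.58, 0.47, 0.50])

# Pearson correlation between the two measurement schemes.
r = np.corrcoef(iq_scores, new_metric)[0, 1]

# r^2 is the fraction of variance in one measure "explained" by the other;
# a correlation of ~0.2 corresponds to only ~4% of the variance.
print(f"r = {r:.2f}, r^2 = {r**2:.2f}")
```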
Much of what is being said here is correct. Since the cancellation of the USA's SSC in the early 90s (a device that would have found the Higgs 15 years or so sooner), big science physics projects have had a hard go of things. Of course, book publishers will also pounce on a catchy "God particle" marketing gimmick. Physicists will privately grimace even more at such over-hyping of the significance, but the difficulty of funding makes them shy away from outright rebuttal. The same people who are most "expert" in the domain have a direct interest in the domain seeming "interesting" to the ordinary folk who have to pay for it.
The Higgs mechanism proper only generates masses for the W and Z *gauge bosons* (quark and lepton masses arise from separate Yukawa couplings to the Higgs field -- see any good Wikipedia page), and it certainly does not give mass to "all matter", which is what a lot of the *officially* popular pieces indicate through inappropriate brevity. Without a Higgs-like particle the gauge bosons for the weak force ought to be massless like photons. Now, without the W, Z, and Higgs, electroweak interactions would be very different, but it is almost totally insane to attribute everyday "mass" to the Higgs alone. Indeed, roughly 99% of "everyday mass" comes from the binding energy of the strong force inside nucleons, not from the *rest* masses of quarks and electrons. "God particle" was never remotely appropriate. Various ideas about anti-gravity and the like are completely off track. The Higgs is important, to be sure, but blown out of proportion (almost) beyond belief.
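For anyone who wants the back-of-envelope version of that 99% claim, here is a quick sketch using rounded, textbook-style values for the proton and light-quark masses (treat the exact numbers as approximate):

```python
# Back-of-envelope check: how much of a proton's mass comes from its quarks'
# rest masses versus QCD binding/kinetic energy? (Rounded values.)
m_proton = 938.3   # MeV/c^2
m_up     = 2.2     # MeV/c^2
m_down   = 4.7     # MeV/c^2

quark_rest_mass = 2 * m_up + m_down          # proton = uud
fraction = quark_rest_mass / m_proton

print(f"quark rest masses: {quark_rest_mass:.1f} MeV "
      f"(~{100 * fraction:.0f}% of the proton); "
      f"the other ~{100 * (1 - fraction):.0f}% is QCD dynamics")
```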
This all leads to "what bad analogies come next" in two to three decades when people want to fund (and promote) the Next Big Accelerator (NBA). The discoveries anticipated may have to do with supersymmetric partners. Could that lead to Jesus and Lucifer "offspring of the God particle" or "wars in heaven" BS analogies, or perhaps equally poor religious backlashes against already nutty analogies objecting to new pantheons or whatnot? Beats me. It seems likely that, even allowing for global economic growth, the "N.B.A." will be an even bigger fractional expense and so drive even greater craziness. Steel yourselves!
I actually was thinking of "what city would I like to live in".
If the real question is "what university would I like to be near" then a city is also the wrong aggregation unit, so not only the normalization but also the aggregation should change. I believe per university/per student or per professor/group output is what most academics would like to know for bragging rights or even funding priority reasons, but they usually make such evaluations themselves on a per department basis.
If you read the paper or click on the maps you will actually see that they DO NOT CORRECT for local population density. So, the metric in question is absolute rather than "per capita" productivity. This doesn't entirely invalidate it, but it calls into question how you would verbalize or interpret the results.
I mean, if 8 of the top 10 cities for science *by any metric* are also 8 of the top 10 cities by population, you have said something less interesting. These cities are already top cities for "being" at all.
It would be far more interesting to normalize in a per capita sense. There are clearly some major outliers in that sense when scrolling around on the map. Vancouver leapt out at me, but I'm sure others could find more. Now, wouldn't it be nice if the fancy visualization researchers helped us along in that task?
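For what it's worth, the normalization I have in mind is trivial if the underlying counts were exposed; here is a toy sketch with invented city names and numbers (not data from the paper):

```python
# Toy per-capita normalization: invented numbers, purely to illustrate
# why absolute paper counts and papers-per-resident give different rankings.
cities = {
    # city: (papers, metro population)
    "Big City":     (50_000, 20_000_000),
    "Mid City":     ( 9_000,  2_600_000),
    "College Town": ( 4_000,    300_000),
}

per_capita = {
    city: papers / pop * 100_000      # papers per 100k residents
    for city, (papers, pop) in cities.items()
}

# "Big City" wins on absolute counts, "College Town" wins per capita.
for city, rate in sorted(per_capita.items(), key=lambda kv: -kv[1]):
    print(f"{city:>12}: {rate:7.1f} papers per 100k residents")
```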