The top 100 most cited papers are actually a motley crew of methods, data resources and software tools that, through usability, practicality and a little bit of luck, have been propelled to the top of an enormous corpus of scientific literature.
The article itself never mentions 'data resources' as far as I saw, but there's a problem in many fields where the standard is to cite the 'first results' paper for a dataset rather than the data resource itself.
There are similar issues with software citation -- everyone cites the paper announcing the existence of the software, but how can you track who might have relied on a buggy version, so you can let them know they may need to re-run their analysis? I'm not as active in this field, but the arguments remain the same (giving proper attribution, documenting everything to make it reproducible, etc.). See the 2013 Knepley et al. paper, "Accurately Citing Software and Algorithms used in Publications", and the work of the Software Sustainability Institute (which also covers writing better research software, as was alluded to in the article).
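To make the version-tracking problem concrete: if people only cite the announcement paper, there's no machine-readable record of which release actually produced the published results. A minimal sketch of capturing that record at analysis time (the package names are placeholders; substitute whatever the analysis actually imports):

```python
# Minimal sketch: snapshot the exact versions of the packages an analysis
# used, so a later bug report against a specific release can be matched
# back to published results. Package names below are illustrative only.
import json
import sys
from importlib import metadata

ANALYSIS_DEPENDENCIES = ["numpy", "scipy"]  # placeholders for real deps

def snapshot_environment(packages):
    """Return a dict mapping each package to its installed version."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = "not installed"
    return {"python": sys.version, "packages": versions}

if __name__ == "__main__":
    # Archive this alongside the results (or deposit it with a DOI)
    # so readers know exactly which versions ran.
    print(json.dumps(snapshot_environment(ANALYSIS_DEPENDENCIES), indent=2))
```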
It's probably also worth mentioning that our current ways of tracking the 'importance' of papers are flawed. See the Altmetrics Manifesto for a collection of links to efforts to come up with other metrics, and CiTO, the Citation Typing Ontology, which enables a way to classify why something was cited (it might be for criticism; in most of the cases in the article, it would be "uses method in", which not all disciplines feel needs to be cited).
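For a sense of what a CiTO-typed citation looks like in practice, here's a rough sketch using Python's rdflib. The two DOIs are hypothetical placeholders; usesMethodIn and critiques are actual properties from the ontology at http://purl.org/spar/cito/:

```python
# Rough sketch of typed citations with CiTO (http://purl.org/spar/cito/).
# The DOIs below are hypothetical placeholders; usesMethodIn and critiques
# are properties defined by the Citation Typing Ontology.
from rdflib import Graph, Namespace, URIRef

CITO = Namespace("http://purl.org/spar/cito/")

g = Graph()
g.bind("cito", CITO)

citing = URIRef("https://doi.org/10.9999/example.citing.paper")  # placeholder
method = URIRef("https://doi.org/10.9999/example.method.paper")  # placeholder

# "uses method in" -- the relationship behind most of the top-100 citations
g.add((citing, CITO.usesMethodIn, method))
# a citation could instead be typed as criticism:
# g.add((citing, CITO.critiques, method))

print(g.serialize(format="turtle"))
```

With typed links like these, a citation count could distinguish "we used this method" from "we disagree with this paper", which is exactly the nuance raw counts throw away.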