
Some Science Journals That Claim To Peer Review Papers Do Not Do So (economist.com) 122

A rising number of journals that claim to review submissions do not bother to do so. Not coincidentally, this seems to be leading some academics to inflate their publication lists with papers that might not pass such scrutiny. The Economist: Experts debate how many journals falsely claim to engage in peer review. Cabells, an analytics firm in Texas, has compiled a blacklist of those which it believes are guilty. According to Kathleen Berryman, who is in charge of this list, the firm employs 65 criteria to determine whether a journal should go on it -- though she is reluctant to go into details. Cabells' list now totals around 8,700 journals, up from a bit over 4,000 a year ago. Another list, which grew to around 12,000 journals, was compiled until recently by Jeffrey Beall, a librarian at the University of Colorado. Using Mr Beall's list, Bo-Christer Bjork, an information scientist at the Hanken School of Economics, in Helsinki, estimates that the number of articles published in questionable journals has ballooned from about 53,000 a year in 2010 to more than 400,000 today. He estimates that 6% of academic papers by researchers in America appear in such journals.

  • by whitesea ( 1811570 ) on Sunday June 24, 2018 @11:07AM (#56837944)
    They prey on people whose career depends on the quantity of publications. My friend published a paper that became famous in his area of research overnight. Everybody and their brother cited the paper, even mainstream media mentioned it. His dept chairman said, "We are satisfied with the quality of your papers. It's the quantity that's insufficient."
    • by gl4ss ( 559668 )

      well, they gave themselves a set of criteria and didn't think further than that.

      also, with the rise of pseudoscience in general, like publishing a paper on how michael jackson was able to do the smooth criminal 45-degree lean.. something that was revealed in a making-of documentary 20+ years ago.

      it's not looking good. on the upside, good science is as healthy as ever. it's just that there are shitloads of fluff science careers in effect right now and they only count those numbers.

      so of course there's more journals to

      • if they can't find out anything new, that should be the criterion.

        No, that should be one criterion. Falsifiability, replicability, and originality are all vital to science, and all of it needs to be published for the world to see.

        It's also vital for scientists to actually publish papers that disprove their own theories. In these cases, the "negative" is just as important as the "positive."

        At any rate, I'm glad to read of these kinds of discoveries; the fear is that the anti-intellectual, anti-science types will point to this and say, "See!? Science is all a

        • It's also vital for scientists to actually publish papers that disprove their own theories. In these cases, the "negative" is just as important as the "positive."

          That doesn't happen directly all that much. It's more common for someone to publish a competing theory (which only disproves the old one by inference), then the old one slowly falls into obscurity and effectively vanishes.

    • by jma05 ( 897351 ) on Sunday June 24, 2018 @11:44AM (#56838072)

      Peter Higgs says he would not have survived in this system.

      https://www.theguardian.com/sc... [theguardian.com]

    • by Anonymous Coward on Sunday June 24, 2018 @11:46AM (#56838078)

      As a tenured professor, I can say the papers are just the tip of the iceberg. Academic science, at least in the biomedical sciences, is falling apart due to a variety of problems: ponzi schemes with doctoral and postdoctoral training, the indirect-cost portion of grants inflating their value to universities as a profit margin, cuts from state governments that pass the buck of research funding to the federal government, and a general "widget production" model of science being demanded by administrators and conservative legislative overseers.

      I was in a faculty meeting a couple of years ago. A junior faculty member was undergoing annual review, and some of the faculty expressed concern that they were publishing too much in open access journals. These weren't questionable open access journals, though: they were pretty well-established ones that just weren't traditional academic journals. More importantly, the junior faculty member's impact factors, number of citations, etc. were all fine, comparable to any other successful junior faculty at that stage. However, some of the senior faculty felt that the journals weren't prestigious enough. So they created a memo to be circulated around the department, a list of journals, saying "these are journals junior faculty should be publishing in."

      The memo was justified in the interest of fairness and clarity, and I get the intent, but on its face it is absurd. The focus should be on the quality of the research, not the reputation of the journal. It's as if someone denigrated Lolita as a work of literature because the publisher was of poor reputation.

      These lists of predatory journals that float around are useful, and the journals should be criticized. But with a problem of this scope, these journals aren't the problem, they're a symptom. What you have now is an oversupply of very talented researchers, an underfunding of science (above and beyond the annual federal research budget, which we shouldn't be so dependent on), a focus on celebrity over substance, superficial indicators of productivity, nepotism... I could go on and on.

      Publishing in particular is sort of a house of cards. A rational outside observer would ask what the economic reasons for the current structure are, with such low costs to publishing now on the web. Why do peer-reviewed journals even exist now? Are scientists really paying attention to what they should be? Open access journals are one solution, but if you look at them closely, they probably in aggregate do more harm than good because they are pay-to-publish, which creates huge misincentives.

      These sorts of predatory journals are completely predictable, and are the tip of the iceberg. Keep in mind these are journals where there are *obvious* improprieties. Things get even more problematic if you realize that there are even more "legitimate" journals that leverage moral greyness or plausible deniability as a way of avoiding these lists.

      Most of the time I feel like academic science is in crisis, and even more so every day. There's a huge disparity between how science actually occurs and how people are compensated and recognized.

      • The memo was justified in the interest of fairness and clarity, and I get the intent, but on its face it is absurd. The focus should be on the quality of the research, not the reputation of the journal.

        Agreed, but the problem is: how do you fairly evaluate a colleague's research at a global level when you do not work in that area? Given the need to do this, high-quality, peer-reviewed journals make sense since, if your colleague can get his/her work published in one, then clearly the rest of their field also thinks their research is high quality.

        • Agreed, but the problem is: how do you fairly evaluate a colleague's research at a global level when you do not work in that area?

          You ask someone who does work in that area. I think that would be obvious. Merely having a paper published in a "prestigious" journal will not answer the question of whether they do good work in general or whether the specific work in question is actually valuable. Even the best journals with the highest reputations sometimes publish some shit science [thelancet.com] unintentionally. What determines the credibility of research is the quality of the work that builds from it.

          If you really want to evaluate the work of a r

          • You ask someone who does work in that area.

            That's practical for tenure and promotion, but for every annual evaluation? Really? You'd suggest getting an external referee (since colleagues in the department may be biased one way or the other) for evaluations? As for getting it wrong, how can you be any more certain that this external referee is likely to do a better job than those selected by a reputable journal?

            All it tells you is that a few "peers" felt the paper was potentially worthwhile....It doesn't mean they've done a deep dive to corroborate the research.

            True, but your suggested method has one peer decide. Having a few peers decide with those peers changing from one paper to the next seems lik

            • That's practical for tenure and promotion but for every annual evaluation?

              If you've asked once it's not like a lot will have changed in the last 12 months publication-wise. The work to check recent performance should be fairly minimal by comparison. And if that is too much work for an evaluation then the manager is either lazy or incompetent.

              You'd suggest getting an external referee (since colleagues in the department may be biased one way or the other) for evaluations?

              If you don't know what you are evaluating then absolutely yes. If you and I are both physicists then there is a good chance I know enough to properly evaluate your work. If we are in different fields then chances are you will need to ask

              • Who said anything about one? If you want to know someone's reputation in a field you will have to get a sample size greater than one.

                I think you have an extremely unrealistic view of how willing people are going to be to respond if they are bombarded by the number of requests to review someone that will be required if every single year every faculty member has to have multiple external referees to evaluate them. Additionally, if they are only evaluating the work for that one year it is hard to see how they will manage to come up with anything significantly different from what you would obtain from reviewing that same person's publicatio

      • The focus should be on the quality of the research, not the reputation of the journal.

        The whole publish-or-perish problem is caused by them not being willing to judge the quality of the research themselves. They are operating on the philosophy that the more papers you publish and the more prestigious the journals, the better the research. They are basically using the journals to "grade" the research.

        • The focus should be on the quality of the research, not the reputation of the journal.

          The whole publish-or-perish problem is caused by them not being willing to judge the quality of the research themselves. They are operating on the philosophy that the more papers you publish and the more prestigious the journals, the better the research. They are basically using the journals to "grade" the research.

          The concern about the prestige of journals wouldn't be quite as much of an issue if 'published in a prestigious journal' were strongly and positively correlated with 'is a high-quality, important paper.' Prestige seems to correlate more with the name of the journal right now, which may admittedly be due more to inertia than anything else. It used to have a lot to do with the number of citations papers published in them receive, and open access journals tend to get more because it's just plain less of

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        "conservative legislative overseers"

        How hilarious. Even when you manage to worm your way into an industry that's self-admittedly 99% liberal, you still blame conservatives for your problems.

      • by epine ( 68316 ) on Sunday June 24, 2018 @02:54PM (#56838668)

        I recently read George Dyson's book on the founding of the IAS.

        The intellectual and academic caliber of people fleeing Germany (and nearby regions potentially subject to German influence) was unbelievable, yet the cowboy-era administrative end-runs to secure stipends for many of these people were off the charts.

        Then America had its middle-class golden era between 1950 and 1980. If I've learned anything from my recent return to the history books it's this: this is the least economically normative period of the last 400 years. Short of handing Stalin the keys to the hydrogen bomb years earlier than he should have got them, it was a pretty amazing time for an empire unlike any that came before it.

        Not so long ago, 10% of the population went off to university. Now 41% of women in Canada attain a bachelor's degree or higher. This is associated with an inevitable pressure to make the filters of meritocracy ever more fine-grained. So, of necessity, academia invents a cascade of credentialism mountain ranges. But there isn't enough legitimate signal to make this work, so we're forced to invent credentialist hoops.

        Zoom all the way up to the TARP bailout of 2008. There's a large group of economists who think this was too large/unnecessary/ineffective, another large group who think it was too small/absolutely essential/effective so far as it went (with maybe a sliver of fence-sitters eating porridge at the perfect temperature).

        Prospective judgement is counterfactual. Retrospective judgement is counterfactual. In none of these matters are we afforded a virtual mulligan to run the simulation again.

        If you're not Einstein, you're probably facing some kind of peer-group proxy measure, with the intervals between the scythe having contracted from leap years to stutter-step quarters (who can summon the effort to leap any more, when the relentless ankle-blades never let up?)

        Goodbye Erdos number two. Hello author position number 997 on a single published paper.

        Meritocracy is this weird nod-along concept. Sure sounds like a good idea (especially after a long stretch where things are anything but meritocratic). But merit turns out to be a terribly, terribly hard thing to implement well in practice, with acceptable consequences.

        On TARP, we never achieved a retrospective standard of merit, all we got was two lousy, tribal camps locked into an egocentric crowing competition. I mention TARP mainly because the stakes here are in the trillion dollar range. Surely if incentive porn FTW, this would be the ultimate case study concerning the collective human incentive to get the individual incentives sorted out.

        ———

        I found these two books exceptionally interesting to read back to back:

        * Twilight of the Elites: America After Meritocracy (2012)
        * Coming Apart (2012)

        Same publication year, same thematic material, anode vs. cathode political perspective, but ultimately the same message: implementing meritocracy is far harder than it looks.

        Basically the problem here boils down to not enough lions. Actual survival was once a pretty good proxy measure of who had it together and who didn't. (Until the fated day came when your (former) best friend hid your Nikes.) Problem: the objective measure of the lion cull was just a tad morally blind.

        One of the problems with incentive porn is the notion that incentive gradients should be pervasive and perpetual: don't go to university, struggle to pay the rent; don't graduate at the top of your class, struggle to pay your student loans. Etc.

        The other model is that you mill around aimlessly (sort of) until something clicks, and then you go off on a mad tear, when there's clearly something special you feel that you can achieve. If you succeed, you get perks (recognition, fancy jobs, fancy peers). If not, you're simply cast back into the milling pond.

        A UBI is one way to provide a giant milling pond of opportunity.

        It would surely

        • Thanks for this - the most interesting thing I've read on slashdot in over a decade. The only point that I don't think that you covered is that as society moves further into specialized roles there are many people who simply do not rise above the threshold in any particular specialty - the statistics behind this are quite simple. Yet we need to maintain these "unproductive" people specifically so that we have a broad enough pool to avoid over-fitting what we focus on.

      • Why are you posting this comment AC?

      • The memo was justified in the interest of fairness and clarity, and I get the intent, but on its face it is absurd. The focus should be on the quality of the research, not the reputation of the journal. It's as if someone denigrated Lolita as a work of literature because the publisher was of poor reputation.

        I'm not sure you really got the intent. The reason science has these flaws is senior faculty, who enforce these ridiculous norms to protect their own reputations. If junior faculty m

      • Many have responded to your comment, but not a single one of them took note of the most crucial point of your lament.

        Academic science, at least in the biomedical sciences, is falling apart due to a variety of problems: ponzi schemes with doctoral and postdoctoral training

        What you have now is an oversupply of very talented researchers

        Only researchers know.

      • Academic science is in crisis. One point you did not even mention is the reproducibility problem. Studies are not being reproduced, not only because they cannot be, but also because there is no incentive to even try.

  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Sunday June 24, 2018 @11:17AM (#56837982)

    As far as I can tell, academia has turned into quite a self-referential navel-gazing affair in recent decades. Case in point: I've finally gone to college and started my BSc in Media-CompSci, and on my first project I got a stern look for quoting details on RSA from a book from 1998. They told me they're embarrassed to quote anything that's older than 5 years and I should never quote a book from 1998. ... WTF? Seriously? Even if it covers elementary basics that haven't changed a single bit in decades? Mind you, this is CompSci, where you would expect some sort of academic credibility, unlike Sociology or "Gender Studies". Granted, CompSci is still miles behind math, physics and engineering, but I would still expect it to be somewhat mature.

    This paper-publishing appears to me like some intellectual masturbation academics like to indulge in, and not necessarily something that brings humanity forward.

    Maybe I'm wrong, but I still can't shake the feeling that academia has degraded severely and the avant-garde has moved on.

      Perhaps the citation age limit is a feature of the field, since I know that my CS colleagues rarely publish journal papers and instead focus on conferences: they claim it is rare for a research topic to still be cutting edge by the time it has gone through the writing, review and publication process, which can take up to a year. It certainly does not apply in physics - I cited papers going all the way back to 1932 in my thesis and back to 1963 in my papers.

      As for degrading, we are part of society and as society moves away
      • In theoretical CS (except for results that are immediately applicable to a specific field), many papers are first cited 5-6 years after publication. Reasons:
        1. Theory is hard. Few can do it.
        2. Breakthroughs are hard to come by.

    • by mrvan ( 973822 )

      They told me they're embarrassed to quote anything that's older than 5 years and I should never quote a book from 1998. ... WTF? Seriously? [..] Mind you, this is CompSci, where you would expect some sort of academic credibility, unlike Sociology or "Gender Studies"

      I won't make any comments about gender studies, but I would like to point out that sociology and the other social sciences are usually a lot better at keeping track of and citing the literature. I did my PhD in AI but have since moved to social science, and I'm often embarrassed at the quality of the literature review / previous work sections in comp sci papers: extremely short and superficial, and often more aimed at dismissing everything that was done before than at showing the tradition, context, and frame

    • References to papers less than 5 years old help your colleagues get tenure, and help your fellow students get master's degrees and PhDs.

      Older references are essentially citations to influential papers. Those papers were written by influential people who already have tenure. Those people would prefer you help the junior faculty and the master's and PhD students.

      To be a friend, you need to cite the newer research.

      BTW: I didn't invent this system.

    • Case in point: I've finally gone to college and started my BSc in Media-CompSci, and on my first project I got a stern look for quoting details on RSA from a book from 1998. They told me they're embarrassed to quote anything that's older than 5 years and I should never quote a book from 1998. ... WTF?

      I'd agree with the WTF. I've spent a lot of time in academia though and never had that. You might have just got unlucky with an idiot for an adviser.

      This paper-publishing appears to me like some intellectual masturba

  • by Anonymous Coward

    Knowledge is finding out that all of academia is a scam. Publish-or-perish and self-selection biases abound. An academic doctorate is akin to indoctrination into an elite club, one where no free thinkers or contrarians are allowed; any and all dissent will be censured. This begins with grad school admissions, is enforced through faculty-advisor cronyism, and is confirmed by tenure.

  • The whole theory behind peer review breaks down in a world in which areas of expertise are too narrow.

    A study that no one seems to want to do: which fact-finding process ends up being more accurate, the criminal court system (with its trials and adversarial review) or the quality-control system of scientific publications? Most advanced scientific research can only be checked by very few peers. And the only check they perform is whether they can replicate the results. And even that is not always the standar

    • Re:so? (Score:5, Interesting)

      by godrik ( 1287354 ) on Sunday June 24, 2018 @12:04PM (#56838112)

      I publish Computer Science articles frequently. While I am not necessarily happy about how the peer review process works in my field, it often means something other than what people expect.

      In my opinion the review process verifies two things: Does the result seem correct? Is the paper interesting?

      Whether the paper is "interesting" or not is a judgment call. In the context of a conference, I have asked to reject papers that were correct, but whose problem and results I could not believe would interest a room of 80 people for half an hour.

      Whether the result is correct is a much more complicated question. If the paper is theoretical, you should be able to verify it during the peer review process. There are typically 3 reviewers, and that is usually enough to get a clear idea of whether the models and proofs are sound. If a part of the paper is still obscure to the 3 reviewers, then clearly the paper lacks clarity and should be revised before being accepted.

      The real problem in CS comes from experimental papers, because reproducing experimental results is hard and sometimes not possible. Maybe you don't have access to the code (research code is not always made available). Maybe you don't have access to the data (some data is proprietary and cannot be shared). Maybe you don't have access to the machine (I think only Chinese nationals can get access to Tianhe-2, for instance; I myself wrote a paper about an experimental, not-yet-released system). Even if you could reproduce the result, it could take months. So in practice, you don't attempt to reproduce most experimental CS papers.
      What you do is check for consistency. Does the result make sense? Does the technique produce an output that is coherent with expectations? If it doesn't, is the discrepancy explained in the paper? Is there a clear drawback to the method that is not mentioned in the paper? Do I believe that the paper contains all the information necessary to reproduce the results if I wanted to? That is the type of thing you check. Some are pushing to include experimental results as supplementary material to experimental papers, or to make experimental results more reproducible in general. (See the work of Arnaud Legrand or of Lucas Nussbaum for instance, but many others work on that.) The SC conference now has a reproducibility initiative to help with that.
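
      To make that "check for consistency" concrete, here is a minimal, purely illustrative Python sketch of the kind of arithmetic a reviewer can do on a reported results table. All names and numbers below are invented; a real review would use the figures from the paper at hand.

        # Hypothetical results table: runtimes in seconds plus the speedups the paper claims.
        baseline_seconds = {"graph_A": 120.0, "graph_B": 480.0, "graph_C": 45.0}
        optimized_seconds = {"graph_A": 30.5, "graph_B": 118.0, "graph_C": 46.0}
        claimed_speedup = {"graph_A": 4.0, "graph_B": 4.1, "graph_C": 3.9}

        for name in baseline_seconds:
            implied = baseline_seconds[name] / optimized_seconds[name]
            claimed = claimed_speedup[name]
            # Flag entries where the claimed speedup disagrees with what the raw times imply
            # (the 10% tolerance is arbitrary).
            if abs(implied - claimed) / claimed > 0.10:
                print(f"{name}: claimed {claimed:.1f}x, but the times imply {implied:.2f}x -- ask the authors")

      A failure of this kind of check is usually sloppy reporting rather than fraud, but it is exactly the discrepancy a reviewer should ask about.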

      The adversarial review that you are talking about happens AFTER publication. That is where the real peer review starts. It starts when dozens of master's and PhD students compare their methods to the state of the art. And that is when you find out what will stick and what won't, because they will make the comparisons with different frameworks, on different machines, on different datasets.

  • .. by publishing elusive works written by a text generator; it was done before, and it worked, so when they try it again, destroy them with "fire and fury".

  • So, a company that decides whether you peer review properly won't tell anyone their criteria for doing so?

    Why do I suspect that buying whatever they sell is part of the criteria for deciding that your pub peer-reviews properly?

  • by bluegutang ( 2814641 ) on Sunday June 24, 2018 @12:13PM (#56838148)

    Exactly which journals allow you to publish papers without peer review?

    Not that it matters, but I will be facing a tenure committee soon...

    • Exactly which journals allow you to publish papers without peer review?

      Not that it matters, but I will be facing a tenure committee soon...

      Ones you have to pay a fee to publish in.

  • The article is paywalled, so we know little about how this finding was made, but let's test each journal by submitting several plausible but fake papers to it. Peer review would then be rated on how many of the falsies are caught.
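
    A rough sketch of how that rating could work, assuming you track, per journal, how many decoy papers were submitted and how many were rejected (the journal names, numbers, and the 80% threshold below are all made up for illustration):

      # Hypothetical sting results: decoys submitted vs. decoys rejected, per journal.
      sting_results = {
          "Journal A": {"submitted": 10, "rejected": 9},
          "Journal B": {"submitted": 10, "rejected": 2},
          "Journal C": {"submitted": 8, "rejected": 0},
      }

      for journal, r in sting_results.items():
          catch_rate = r["rejected"] / r["submitted"]
          verdict = "looks like real review" if catch_rate >= 0.8 else "suspect"
          print(f"{journal}: caught {catch_rate:.0%} of the fake papers ({verdict})")

    In practice the decoys would have to be calibrated so that any competent reviewer should reject them, which is the hard part.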

  • by Anonymous Coward on Sunday June 24, 2018 @04:57PM (#56839128)

    Posting anonymously to avoid legal problems from disclosing the following information.

    Fake journals are not the only problem. Supposedly "serious" journals with a corrupt editor running a coercive citation [wikipedia.org] scheme are also an important issue. Coercive citation is typically used to increase the metrics of the journal, but I have seen at least one case in which it was used to increase the metrics of the journal editor (!)

    Specific example: Springer's Journal of Supercomputing [springer.com] is edited by the infamous Hamid Arabnia [uga.edu]. Yes, this is the same fellow who used to run the WorldComp [google.com] conference series in Vegas, now rebranded as CSCE [americancse.org] after it became widely known that they accepted any crap as long as you paid for registration.

    Well, some years ago I submitted a real research paper to this Journal of Supercomputing. It was serious research, not top-level, but reasonable. The reviews were reasonable, but H. Arabnia requested that we add citations to FOUR of his own papers, completely unrelated to our submission, in order to accept the paper. We didn't add any (and the paper was eventually accepted), but we could verify that he does this routinely: see for example this paper [springer.com], in which the authors cite TEN unrelated papers by the editor of this journal. I don't blame the authors: in many cases, they badly need the publication and agree to the coercive mechanism.

    You can also check H. Arabnia's Google Scholar page [google.com], with a very high h-index value. However, this page also allows you to check the citations of the papers. If you check the 88 citations to this paper [google.com] from 1995, you can see that it was almost unnoticed for twenty years, and suddenly it resurged in 2015... with ALL citations [google.com] coming from the Journal of Supercomputing, which he edits!!

    The funny thing: the Journal of Supercomputing has a JCR impact factor of 1.326 in the latest (2016) list, putting it in the second quartile (Q2) of its category. Let's see the update, coming in a few days/weeks. According to the rankings, this should be a respected journal, but it happens to be the playground of this clown, abusing it to increase his own metrics.
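
    For what it's worth, the pattern described above (nearly all citations to a paper coming from the one journal its editor controls) is easy to flag mechanically if you can export the list of citing venues. A minimal sketch with made-up data; the counts below only mimic the 88-citation case and are not a real export:

      from collections import Counter

      # Hypothetical list of venues citing one target paper (e.g. exported from a scholar profile).
      citing_venues = (
          ["Journal of Supercomputing"] * 80
          + ["Some Unrelated Venue"] * 5
          + ["Another Venue"] * 3
      )

      counts = Counter(citing_venues)
      venue, n = counts.most_common(1)[0]
      share = n / len(citing_venues)
      # A single venue supplying the overwhelming majority of citations is a red flag
      # worth a closer look (the 80% threshold is arbitrary).
      if share > 0.8:
          print(f"Red flag: {share:.0%} of citations come from '{venue}'")

    Nothing like this proves coercion on its own, but it would make these cases cheap to spot.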

    • Can someone please mod this up? I can confirm that this is a common practice by many today... but who can blame them? The only lesson is that citation count is no longer trustworthy. Also, Google Scholar now counts blog posts as papers, and web links as citations.

      (I have mod points but can't moderate since I have commented on this article)

  • by SoftwareArtist ( 1472499 ) on Sunday June 24, 2018 @09:22PM (#56839938)

    Scientists know which journals are reputable. In any field, there are maybe a half dozen or a dozen journals that most papers get published in. Everyone knows what they are. And then there's a hundred wannabe journals with intentionally similar names that exist just to make money. No one reads them, and no one reputable publishes in them. If you look at someone's publication list and see a lot of papers in journals you've never heard of, you immediately know to be suspicious.

    • Problem is that they have the citation counts... because there are so many of them citing each other's trivial results.

  • https://www.counterpunch.org/2... [counterpunch.org]
    "David Noble: ... And again, going back to your first question, the purpose of peer review is prior censorship and I believe very strongly that if people want to criticize something that you write or I write, they have every right to do that AFTER it's published, not before it's published. To me that's the critical issue."

    Search on "peer review is censorship" for similar opinion pieces.

    This is not to defend any journals being misleading though...

  • My GF works in research, so we occasionally discuss topics like this. The Economist article mentions a group asking for a more transparent review process - that sounds very reasonable. Another thing: if you participate in the peer review process, this should earn you merit similar to publishing a paper. Peer review done right takes time, and if it is done anonymously there is not much incentive for you to do it, since you are already too busy producing all the papers the system requires of you.
  • Peer review has been broken for quite a while, and there are many articles on why. The basic issue is that there's no money in reviewing papers, and it doesn't help anyone with their own career growth.
