Absolutely- the proper typesetting gave airs of polish and correctness. The effect is long gone now, but I do remember seeing mathematical preprints, typeset nicely in TeX, which had as the first line something to the effect of: "Warning: although this looks like a final result, do not be fooled by its appearance. It is really preliminary and should be gauged as if it were haphazard handwritten scribbles rather than polished typeset mathematics." These days people are used to seeing all kinds of mathematical tripe typeset nicely (perhaps generated automatically, in fact), so that kind of disclaimer is no longer needed!
It's hard to tell from the summary or article what is going on here. I suspect a decent fraction of these may be people submitting proposals under different programs for similar or overlapping projects. Sometimes a scientific project will fall between programs, and people will submit more-or-less the same proposal to two different parts of the NSF, hoping for funding from one. Given the low funding rate and the great deal of uncertainty about funding (thanks, oh-so-functional Congress!), it is pretty common for people to submit to multiple programs or to have several co-Principal Investigators, each with a component of a larger-scale project. And people definitely recycle their earlier proposals, funded or not. Many fields also require sections on ``broader impact'' that may not have much in common with the specific proposal and may be copied from other proposals. To me, there is a huge difference between ``self-plagiarism'' or duplication, recycling a broader impact statement from a colleague, and outright plagiarism, unknown to the person being copied, which is genuine scientific theft of ideas. From what is described, it sounds like they aren't yet distinguishing between these cases.
There is a good description on the arXiv of plagiarism detection methods and tuning parameters for efficiently detecting duplication and plagiarism, applied to a large part of the body of arXiv submissions. That algorithm is run routinely now, which is why you see those ``article has significant text overlap'' messages, detailed here.
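For a sense of how that kind of overlap detection works, here is a minimal sketch of n-gram ``shingling'' in Python. It is purely illustrative, not the actual arXiv algorithm; the window size (n=7) and the flagging threshold (0.2) are made-up parameters.

    # Minimal sketch of n-gram "shingling" overlap detection.
    # Illustrative only -- NOT the actual arXiv algorithm; the window
    # size and the flagging threshold are invented parameters.

    def shingles(text, n=7):
        """Set of overlapping n-word windows ("shingles") in the text."""
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 0))}

    def overlap_fraction(doc_a, doc_b, n=7):
        """Fraction of doc_a's shingles that also occur in doc_b."""
        a, b = shingles(doc_a, n), shingles(doc_b, n)
        return len(a & b) / len(a) if a else 0.0

    old_text = ("we prove the main theorem using a standard "
                "compactness argument and a counting lemma")
    new_text = ("in this note we prove the main theorem using a standard "
                "compactness argument and a counting lemma from earlier work")

    frac = overlap_fraction(new_text, old_text)
    print(f"overlap fraction: {frac:.2f}")
    if frac > 0.2:
        print("significant text overlap -- flag for human review")

Real systems presumably add normalization, hashing to handle the scale of the full corpus, and exclusion of boilerplate like bibliographies, but the basic idea is the same: flag submission pairs whose overlap exceeds a tuned threshold and let a human look at them.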
Thanks, and I agree that many associate profs do just continue at full strength out of momentum and self-motivation. The system and general academic research culture encourage complete devotion to your research, and you get used to it. Going into academic research is a terrible idea if you don't truly love what you do and get your own internal rewards from research success as well as the external ones.
Glad you like the username, I have fond memories of the Apple ][, though that was a while ago. I have a lot of brain cells still (uselessly) remembering old commands...
Not quite accurate- the progression, at least at US academic research institutions, is typically as follows:
- grad student: typically 4-6 years of hard work under the supervision of a senior faculty member. Many at this stage do not get research postdocs and leave academia, either because their research isn't strong enough or because they aren't interested.
- postdoc: typically 3 years at another institution, again usually under the supervision of a senior faculty member; people often do more than one postdoctoral position at more than one institution. Hard work, with the potential to significantly broaden experience. Many at this stage never get any options besides additional postdocs and leave academia.
- assistant professor: potential for eventually getting tenure at that institution. Typically considered for tenure in the sixth year. Lots of pressure to get grants, be productive, show work independent from advisor and postdoctoral mentors. Those who do not get tenure leave academia or head to a lesser institution to possibly get tenure there. Often, if people are not being very productive, they leave after three years rather than risk being rejected for tenure when the time comes.
- associate professor: congratulations, typically getting tenure results in a promotion to associate professor. This job is basically permanent. There is strong pressure to get grants and be productive if you are interested in being promoted to full professor. Some people burn out at this point, stop getting grants and being productive, and become "associate professor for life." Sometimes they are moderately productive, sometimes they are bitter, sometimes they become more teaching-focused, sometimes they do some research but nothing dramatic, sometimes they become more involved in administrative work, etc.
- "Full professor" (technically, the title is "Professor" but many people use the word "full" to emphasize that it is not assistant or associate) has been considered for promotion from associate prof and been promoted. Generally not a rigid timetable, often about 7 to 10 years past associate, with most institutions requiring big grants, big results, big recognition, etc. for promotion to "full" professor. Even after promotion to full, people are so used to working like crazy for so many years and interested in getting further recognition, grants, etc. that they are still going full bore, despite there not being explicit pressure.
Adding up time in years, roughly: 5 as grad student, 5-8 as postdoc, 5 as assistant, 5 as associate, means that a typical full prof has 20-30+ years in the field as a researcher. Stronger people may proceed up through the ranks more quickly, but that is about typical. And at each stage, many people leave or get stuck.
The ACM is pretty terrible on this front and compares very poorly to other professional societies, for example the American Mathematical Society. The AMS has more reasonable fees, much more reasonable copyright assignment for its journals, charges less for those journals, does not have a difficult-to-use online journal system, and in fact its modestly-priced journals and books effectively subsidize the rest of its operations. The Society for Industrial and Applied Mathematics is also good by all these measures. Both of them are much smaller than the ACM. The ACM has a glossier magazine, though, so there is that.
"Impact factors" are an important part of the old dead-tree for profit publishers efforts to persist. The most common one, Thomson JCR, is itself run by a for-profit publisher with strong incentives to maintain the illusion that the only high-impact journals are the for-profit ones. In fact, to be considered for the JCR entry, the journal must actually pay real money, and many open-access/inexpensive journals can't afford to pay or have chosen not to participate. So the ranking is then more biased toward the traditional model. Further, the metrics for "impact" are somewhat dubious and many authors and publishers are aware of the metrics and make strong efforts to game the existing system based upon citations. As I mention in another comment, many open-access journals are self-conscious about having low standards, and accept only the very-top quality papers, and as a result don't have many articles appear each year.
As mentioned in the other recent academic publishing story, there has been some progress, but it has been slow. One thing that I am hopeful about is "epijournals," which separate the review from the distribution by serving as overlays to the remarkably successful arXiv preprint servers, at least in many areas of math and physics.
There are a number of issues here, many of which have been brought up often before. A few to recap:
- Prestige: As a fully-promoted researcher who isn't worried much about prestige, I am free to submit articles only to journals which are either open-access or reasonably priced (for example, many of those run by universities or professional societies rather than by for-profit organizations). That is what I choose to do, and I have made it clear for many years via the Banff Protocol and now the CostOfKnowledge petition. However, when collaborations with junior researchers lead to publications, I am willing to submit to some of the other journals, since for the co-author the prestige may be important for getting a job, tenure, promotion, or grant funding.
- Standards: Oddly, many of the open-access/free electronic journals have standards that are much higher than many of the for-profit journals. The second and lower tiers of for-profit journals will often publish less-than-impressive to just plain terrible articles, and they have much lower standards than the typical electronic ones. They have economic incentives to publish many articles, and there are sites devoted to exposing various sham journals and the editorial failings of journals from Elsevier, etc. I think that many of the free electronic journals are worried about not being prestigious enough, and so they tend to have high standards, significantly higher than many for-profit ones. I've had papers rejected by good electronic journals that were then accepted quickly, and with high praise, by middling traditional journals.
- Author-pays model: there has been a proliferation of "open-access" publications, some of which are outright scams, see Beall's list of predatory "open-access" publishers. Often, these journals have names very similar to existing prestigious-to-middling journals, which complicates things and has made many authors naturally suspicious of various open-access journals as a whole.
- Institutional culture: it takes a while for things to change. There have been a few mass resignations of for-profit journal editorial boards to start basically identical but less-expensive or free versions, though not nearly as many as I would have hoped. Tim Gowers' efforts and the recent White House memo in the USA are progress, but of course there is still a long way to go.
I did misread what is there; $80k/year is the total for direct expenses. Those include "Server costs" of $41,700/year and "Network bandwidth & telephony" of $1,550/year; the rest is staff computers, staff travel, and advisory board travel. I have no idea about hosting costs in general. My experience with the arXiv has been that they are very cost-efficient compared to many academic institutions. They do have millions of downloads, and some of those articles are hundreds of pages long. I suspect they are quite cautious when it comes to secure archiving and so on, but I don't know much about that.
It's not the bandwidth expenses; there is a staff and moderation and a great deal of effort to make it as useful as it is.
There is a very competent staff and a good description of the arXiv expenses in their FAQ. Cornell has been trying to get other institutions to contribute as well; everyone agrees the arXiv is amazingly useful, and hopefully the expense will be shared across many institutions. Current institutional contributors are already listed here, contributing to the tune of under $3k/year per institution. The Simons Foundation has given a good amount of money towards expenses as well, $50k/year.
Their operating costs are laid out transparently in this document and come to about $800k a year, with Cornell contributing $300k a year presently. There are 3 full-time-equivalent staff positions which make up most of that. The hosting expenses are about $80k a year presently it seems.
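Putting these figures together with the contributor numbers mentioned above (my own back-of-the-envelope arithmetic, not theirs): if Cornell keeps contributing $300k/year and the Simons Foundation $50k/year, the gap left for other institutions at the quoted contribution level is roughly

    \$800\text{k} - \$300\text{k} - \$50\text{k} \approx \$450\text{k}, \qquad \$450\text{k} / (\$3\text{k per institution}) \approx 150 \text{ institutions},

so on the order of 150 contributing institutions would close the gap, which does not seem unreasonable given how widely the arXiv is used.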
I did find that other prominent people are resigning from Elsevier boards; here's a senior researcher in malaria who resigned from an editorial board on the life-sciences side. His motivation was particularly strong- he works in malaria research, and the idea that people who could benefit from the research may well not be able to pay to get past the paywall is abhorrent. But I think the same rationale applies to all of science- why keep research from people who cannot pay for it?
In other Elsevier news, I found some more journal shenanigans described here which include both rigging the reviews to be sock-puppet reviews and getting into their editorial board systems, resulting in yet more retractions.
Well done! Since you are already doing this, have you thought about signing the Cost of Knowledge petition, if you haven't already? In theory, this will prevent at least Elsevier editors from asking you to review in the first place, and hopefully it will help build more momentum for change.
In my experience with declining requests to review, I have commonly mentioned access and/or price as concerns for declining and have found some sympathy with various editors. If this becomes more commonplace, hopefully that will speed change to more reasonable publishing models.
For those not familiar with the arXiv, it is a preprint server that is free to use (expenses footed by multiple institutions around the world, notably Cornell University in the USA). Researchers upload their preprints generally around the time that they submit the article for consideration for publication at a typical (e.g., primarily dead-tree) journal. The article will be considered, accepted, rejected, modified, etc. by the journal, which has generally asked other academics to review it (for free, motivated by a sense of community, typically), and then sometimes the author makes changes and gives the final version to the journal. They may or may not update the arXiv posting to reflect the changes (typos, revisions, serious issues) that have been made in response to the reviewing process. In any case, most of the people actually interested in the result will have seen the arXiv posting long before the journal publication happens, so for those who submit their work to the arXiv, the journal is principally playing the role of a validator of importance, significance, originality, correctness, etc., rather than of dissemination.
Different disciplines have different levels of participation in the arXiv; high-energy physics and many areas of math generally have broad participation, whereas computer science, statistics, and other areas of math have lower overall levels of participation and different publishing cultures.
The epijournals are a project to keep the validation process similar, but without the bother of actually running a (primarily dead-tree) journal. Rather, they will be overlays to the arXiv, so the hosting and logistical expenses are already sorted out. There are multiple free electronic journals, but the costs associated with archiving, etc. are generally borne either through "page charges" to authors, various institutional support options, or private generosity. See for example the Electronic Journal of Combinatorics, a long-standing top journal in combinatorics. With the hosting on the arXiv, this should remove one of the barriers to entry for new journals.
There are many open-source research software efforts already, and of course it would be good to see this become more widespread. These range from small-scale individual researcher one-off efforts to broad multi-institution efforts that are well-maintained over years. The software that I develop in the course of my mathematical research is available freely from our webpages, with intermittent downloads. And I still get inquiries about using it, to which I just say that it's all on our webpages already.
One barrier to broader efforts in the US is that science agencies (at least the National Science Foundation) generally support research proper rather than development of tools. Oddly, I am much more likely to get a grant to work out research that perhaps 20 to 50 people may be interested in than to get a grant to develop research tools that may be useful in furthering the research of a few hundred researchers. Nevertheless, it is becoming more common for universities and funding agencies to expect data and software from research to be made freely available. Many people drag their feet on these requirements because they worry that other researchers will use their tools to scoop them, but I think such instances are very rare.
Elsevier's financial interests have repeatedly come into conflict with the broader interest in good-quality scientific publishing at a reasonable cost. There is a reason that many scientists have organized an Elsevier boycott; see this earlier Slashdot story. Very little has changed since then, aside from some superficial Elsevier posturing.
There are good-quality, affordable journals, run by professional societies or universities, which are an excellent alternative to Elsevier and the other expensive for-profit journals. For the health of science, it is important that people choose to submit there. For untenured people, who are under a great deal of pressure to submit to "top journals," this is a difficult quandary, but for those of us for whom that isn't a concern, I don't see a reason to continue supporting journals and publishers which have repeatedly behaved poorly.