
Comment Re:Since you seem to know how this works... (Score 1) 223

Journals that want high impact factor select articles that will get a lot of citations. This often means somewhat controversial results, rather than good science. Also "good" journals are often crap because of this.

Yes, I'd agree. The generally agreed definition of "good" is unfortunately "high impact factor". Not "has a fair and honest review process and publishes interesting work". Shame.

Comment Re:Since you seem to know how this works... (Score 1) 223

Also, does anyone know if the current open access policy covers review papers?

I'm not subject to the policy so I can't really say for sure... but I'd imagine not. Review papers tend to be done in a researcher's "own time", or rather, not funded out of a specific project and so not subject to the policy of a given funding body.

Remember the open access policy only relates to research funded by the NIH, and to support that policy I would imagine the NIH allows you to allocate project funds to cover publication charges. If you're writing a review you might be able to fiddle things so it looks like it's related to a particular project and claim funding. But equally you could say "well, I just wrote that myself, it's not related to the project" and publish it in a "closed access" journal. The downside is that when the NIH comes to review the project you couldn't claim it as a useful outcome. You also couldn't get them to cover the publication charges.

Comment Re:It sounds simplistic because it is. (Score 1) 223

If your research is that valuable, don't take federal money. A lot of universities are taking federal money for research and then selling those discoveries to companies that sell them back to the taxpayers. It's not always that clean but it just doesn't seem right.

That's not what's happening, nor is it federal money being taken. Federally-funded research products lead to patentable inventions. Those patents are held by the government. In order to make that research commercially valuable, additional research is needed and private investment is required to bring the research to a marketable level of maturity. In turn, private entities agree to fund the necessary further research, without which the first sets of patents are worthless.

Sure, I guess the issue is that there are different models for doing that. If researchers didn't patent an invention but simply published the idea openly, those inventions would most likely still make it into products. Quite regularly, in fact, a company will develop a product without holding the patents and only acquire them once it has proved the concept.

Without the patents those ideas would be in the public domain. The researchers would be providing a service which supplies inventions to industry and the public on an open basis. Note, I'm not saying this would be a better model, but it's a possible way of distributing the IP without patents.

Essentially what patent protection does is ensure that the funding body or researcher who developed the invention gets something back. That keeps some of the money generated within the country that funded the project. So, for example, you can't end up with the situation where the US government funds a project, it gets exploited in Japan, and the US gets nothing. In the current model, if the US develops it and a Japanese company wishes to exploit it, they have to buy rights to the invention.

Comment Re:Since you seem to know how this works... (Score 4, Informative) 223

I've long wondered--what is it that academic journals DO, precisely? They don't seem to provide any services that a vanity press couldn't do better and cheaper.

Is there something I'm unaware of that they merely overcharge massively for, or are they actually the complete and total parasites that they sound like?

They basically provide quality control by making sure that the peer review process happens. A good journal will first screen out a lot of papers that are entirely unsuitable, then find relevant experts in the field to review the paper. Your paper won't get published unless the referees think it's good enough. So they manage that process. That process is attractive to authors because a good journal garners a lot of respect in the scientific community. More than that, it affects how much funding the university gets (universities often get more government funding if they maintain staff with high-impact publications). If you want to know more about how impact is measured look at http://en.wikipedia.org/wiki/Impact_factor . Impact factor is a fairly stupid way to measure the quality of a paper, but it's what people do. Hopefully we will move towards citation counts or some similar metric, but essentially all these metrics are quite coarse.

But, back to the main point. Journals do provide a useful service in managing the peer review process. And yes, they massively overcharge for that service and often force you to assign copyright to them so they can extract as much money as possible from your work. That's part of the reason people are now looking towards open access... however, ironically, open access tends to end up costing the author more (as the journal can no longer charge subscription fees, they charge higher publication fees).

Comment Re:Well, of course! (Score 4, Insightful) 223

I smell sarcasm but just in case there are people reading who don't know how academic publishing works...

Publishing companies need to make enormous amounts of money so they can do important things like:

  • Paying researchers top dollar for important publications

Scientific authors don't get paid for publications. Often the author has to pay a publication charge in order to get published. In particular, if you have color figures, you often have to pay extra.

  • Offering large emoluments for Reviewers

Referees don't get paid either; they do it out of the kindness of their hearts. :) Actually, why they do it is a bit of a mystery, but it keeps you connected with the academic community.

  • Hiring top-notch editors to perform quality typesetting

Many journals force authors to fiddle with their manuscripts endlessly until the formatting meets the journal's specification.

  • Host powerful commercial publishing access sites, as universities, libraries, and professional organizations are simply unwilling to pitch in.

Not sure what this means...

Comment Re:Why not? (Score 3, Insightful) 493

Yes, I've heard of gprof; that's a profiler, not profile-guided optimization, and they are different things. Turns out the Mozilla build system does support PGO with gcc though: https://developer.mozilla.org/en/Building_with_Profile-Guided_Optimization . I'm not sure how it compares with the PGO in the Microsoft compiler. I don't even know if the distros tend to build it with PGO. In any case, gcc tends to lag behind commercial compilers in terms of performance (that is, after all, the selling point of commercial compilers). GCC tends to be ahead in terms of standards compliance.

Comment Re:Not suprised (Score 1) 493

It would be interesting to try a Linux version compiled using the intel compiler (or perhaps pathscale). I'd guess the intel compiler performs at least as well as the MS compiler. I totally agree with you though, this test is basically a compiler benchmark if anything.

Comment Re:Why not? (Score 1) 493

Whenever I've seen PGO it's been a compiler feature, not a feature of the build system/software. The Intel compiler supports PGO, and perhaps the Microsoft one does as well. To my knowledge, gcc, which I assume the Linux version of Firefox is built with, doesn't.

There is, however, no reason why Firefox couldn't be built using the Linux Intel compiler and PGO. However, I would imagine most distros would reject this idea (the Intel compiler isn't in any sense free). A complete Linux distro compiled with the Intel compiler would be interesting; on average its code tends to be 10% faster than gcc's.

Comment Re:Read About Face... (Score 2, Informative) 553

For example with a virtual machine you have all of the metadata that you need to serialize, and transport data. With C, C++, and assembler you must explicitly say I have four bytes that need to go to point a. A big big difference in my mind.

That's not really true, is it? With C++ I can get a library that serializes my object, transmits it to a file, over a network, or to a cluster via MPI, and then reconstructs it at another point or on another computer (e.g. Boost.Serialization, Boost.MPI etc.). It just isn't a language feature, it's a library feature. Because it's a library feature it can be harder to use than, say, Java's. That's the trade-off: flexibility and performance over ease of use.

Some people might say that flexibility isn't important any more. Well... for me it is. A language like C++ gives me a huge range, I can program anything from embedded controllers to operating systems, to video games, to large computational simulations in it. There are few other languages with that versatility.

Comment Re:Nonsense (Score 1) 555

MS also already in effect gives Windows away free in a lot of countries (certainly where I live) by tolerating blatant piracy (shops in city centre shopping malls, every PC retailer around).

Yep, I've always thought that one of MS's strokes of genius was not including any significant copy protection on any of their products. It's not only in countries where piracy is commonplace: the majority of Office installations on student/home PCs are most likely pirated.

MS must love it. It locks these users in to those products, which in turn means they will promote them in the workplace, where it counts. Any loss of funds in the home market can be made up for by increasing prices in the business market.

So-called "emerging" markets where piracy is commonplace in business must worry them, though. I'm guessing this may lead to a push towards an online, service-based model if lobbying for stricter enforcement doesn't work.

Comment Re:Nonsense (Score 5, Insightful) 555

I'd like to point out that open source does not have to mean free.

And free does not have to mean open source. The article gives several reasons why MS might want to give Windows away, but no compelling reasons why it should make it open source. Closed source isn't just about getting paid for software, it's about control. They control the APIs and all the little gotchas that make producing a Windows clone difficult. If Windows were fully open sourced, I'd bet we'd have a fully working Wine within months. At that point MS Windows just becomes "another Windows API implementation". You could say "so what!? They just start targeting the Windows API on Linux, it's an even bigger market!" The problem is that controlling the API gives MS unique advantages. Exchange integrates tightly with Active Directory; MS gets to add features to the operating system API simply to make their apps work better. If the API is open they don't get to do that, and in general they get pushed towards open standards rather than proprietary ones. No more "you can't get features X, Y and Z unless you use Outlook", no more lock-in.

Comment Re:Will they allow encryption? (Score 1) 342

The first stage of any decent encryption pipeline should be to compress the data. Compressing the data removes redundant information that could be used in cryptanalysis.

There are possibly economies of scale if you compress a large amount of data in a single block, but it shouldn't be more than a few percent and probably isn't worth it for the associated hit in access time. If you ban encrypted data for this reason, you should also ban compressed data (so no uploading JPEGs or much of anything else).
