
Comment: Re: New Accounts every 3 months (Score 1) 368

Well, you can still resubscribe for a long time if the constraint is just to use a different credit card. Pretty much every store you visit nowadays wants to give you a credit card; I already have about a dozen credit card numbers I could use. And remember that credit cards expire, so issuers send you a brand new card periodically. I seriously hope, for their sake, that a credit card number is not how they plan on identifying returning users...

Identifying users by name and address, and verifying the name against the credit card information, seems a bit better.

Comment: Re:Actually much sooner (Score 1) 298

Actually, that is the reason why I think we should keep oil precisely for these applications. It seems like there are applications where fossil fuel can be replaced by renewable energy and applications where it cannot.

Instead of treating fossil fuel like a commodity, maybe we should treat it like "an endangered resource".

Comment: Re:The title game (Score 1) 124

Or better yet, untie H1Bs from a company, make it a 2 year visa, and let them go wherever they want.

I am on an H1B right now and will probably get a green card soon. Tying the visa to a particular job is, in my opinion, the worst thing about the H1B. From the employee's perspective, it is pretty bad: they have no leverage for negotiation at all. For the company, it is pretty good: they can keep paying substandard salaries. For the country, it is pretty bad, since it creates small monopolistic job markets.

Let them compete on the national job market, and I think you will solve both the problem of tech emigration and the lack of STEM workers in the US.

Comment: Re:For anything less than 600 miles... (Score 1) 515

It really depends on how you count, and I respectfully disagree with how you count.

Driving costs gas; it depends on gas prices and the fuel efficiency of your car, but 35 miles/gallon at $3/gallon seems reasonable. That's about 8 cents a mile.
But driving also wears your car, which costs repairs. Here again, the exact cost is unclear, but assuming a $20k car plus $10k of repairs and maintenance over a 200k-mile lifetime, that's about 15 cents a mile on average.
There is also typically an insurance cost. (Not if you only drive once in a while, but a regular traveler would pay it.)
So I feel the cost of using the car will not be below 25 cents a mile. That is actually below the federal reimbursement rate of 53 cents per mile. So driving LA->SF would cost about $95, while the federal value would be $214. (Double both for a round trip.)
The train journey itself would certainly be cheaper. Meanwhile, if you take the train, you'll get there much faster and be free to do what you want during the ride. That is certainly worth something as well, especially in CA where salaries are so high.

There are costs induced by taking the train, but even with a $200 round-trip fare and two $50 taxi rides, the price remains about the same. And you saved both your time and your stress. Properly operated trains are cheap and great.
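To make the arithmetic explicit, here is a minimal Python sketch of that comparison; the gas price, mileage, car cost, road distance, and fares are the rough assumptions from above, not measured values:

```python
# Back-of-the-envelope driving vs. train costs, LA->SF.
# All inputs are the rough assumptions from the comment above.
GAS_PRICE = 3.00                       # $/gallon
MPG = 35.0                             # miles per gallon
WEAR = (20_000 + 10_000) / 200_000     # ($ car + $ lifetime maintenance) / lifetime miles
FED_RATE = 0.53                        # federal reimbursement, $/mile
ONE_WAY = 380                          # assumed LA->SF road distance in miles

gas_per_mile = GAS_PRICE / MPG         # ~$0.086/mile
drive_per_mile = gas_per_mile + WEAR   # ~$0.24/mile; insurance pushes it toward $0.25

miles = 2 * ONE_WAY                    # round trip
print(f"driving (marginal cost): ${drive_per_mile * miles:.0f}")
print(f"driving (federal rate):  ${FED_RATE * miles:.0f}")
print(f"train + taxis:           ${200 + 2 * 50}")
```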

Comment: Re:What? - Question Solved. (Score 1) 174

Well... I beg to disagree.
Computer science is not mostly discrete mathematics; that used to be true in 1980, but that was last century. Also, we have "reproducibility" issues in proofs as well. Many proofs in the field are not correctly written, and that causes many of them to be incorrect. For instance, there was a critical flaw in the proof of TimSort which caused problems recently.

But computer science is much more than that nowadays. Algorithms get tested in practice because they are only proved on models of computers. You need to investigate runtime, numerical stability, and so on. It is good to know that an algorithm is in O(n^2), but the Big-Oh notation hides a constant (and a threshold beyond which the bound holds) which are found experimentally. And that is only algorithm design.
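As a toy illustration of that hidden constant, here is a small Python sketch (the quadratic kernel and the input sizes are made up for the example) that times the algorithm and estimates the constant c in t ≈ c·n² empirically:

```python
import time

def pairwise_sums(xs):
    """A deliberately O(n^2) kernel: sum over all ordered pairs."""
    total = 0
    for a in xs:
        for b in xs:
            total += a + b
    return total

# Estimate the constant c in t ~ c * n^2 from a few runs.
for n in (500, 1000, 2000):
    xs = list(range(n))
    start = time.perf_counter()
    pairwise_sums(xs)
    elapsed = time.perf_counter() - start
    print(f"n={n:5d}  t={elapsed:.4f}s  c ~ {elapsed / n**2:.3e} s per n^2")
```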
The entire field of networking and performance is essentially built around a loop of modeling, experiments, and reconciling discrepancies.
All of data mining/machine learning is likewise built on hypothesizing that a model fits reality and then validating it (a minimal sketch of that loop follows below).
Programming languages/middleware is also fairly experimental: you hypothesize that one programming language delivers a better effort-quality tradeoff than another, and you verify it experimentally by measuring the performance of a human population given different tools.
Human-computer interaction is also very experimental: after designing a new system or mode of operation, you hypothesize that it enables performing a task faster than some other one, which you verify by comparing the performance of a set of users.
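Here is the promised sketch of that hypothesize-then-validate loop for data mining, in Python; the linear model and the synthetic data are invented for the example:

```python
import random

# Hypothesis: a linear model y = a*x + b fits the data.
# Validation: fit on one half, measure error on the held-out half.
random.seed(0)
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.5)) for x in range(200)]
random.shuffle(data)
train, test = data[:100], data[100:]

# Ordinary least squares for a and b on the training half.
n = len(train)
sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# If the hypothesis is wrong, the held-out error will be large.
mse = sum((y - (a * x + b)) ** 2 for x, y in test) / len(test)
print(f"fit: y = {a:.2f}x + {b:.2f}, held-out MSE = {mse:.3f}")
```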

Comment: Re:What? - Question Solved. (Score 1) 174

I don't get the argument. Are you saying that because most trained computer scientists work as engineers, computer science is not a science?
Then I guess medicine is not a science either, since most medical doctors are saving lives and treating illnesses, not researching their root causes.
I also assume mechanics is not a science, since most professionals are building bridges and designing components, not solving the Navier-Stokes equations.

There is a science called computer science that gets published in scientific journals and conferences. And there are reproducibility issues there as well; maybe even more than in psychology, because those guys get beaten up over experimental protocols, while in CS we are more relaxed about it.

Comment: Re:What? - Question Solved. (Score 1) 174

Actually, 39% is not bad at all. I am sure it is no better in computer science. As a reviewer, I typically need to fight with authors just to get enough detail to even attempt reproduction. Most CS papers lack basic information on:
- how the code is written (language, major data structures),
- how it is executed (compiled or interpreted, which level of optimization),
- where it is executed (which machine, complete spec, operating system, idle load, whether parallelization is used),
- what datasets are used ("randomly generated" does not say anything unless you give the distributions),
- what is precisely measured (did you include or exclude I/O, did you only measure the kernel you are interested in, did you measure the entire algorithm, did you include startup and shutdown of your execution engine?)

If these are not mentioned, then you lack the information to even attempt to reproduce the result. And even if you have them, there is always a possibility the result will differ from what was reported.
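As a toy illustration of the last point in the list, here is a Python sketch (the workload and sizes are invented) showing how much "kernel only" and "end to end, including input parsing" measurements can diverge for the same computation:

```python
import time

def measure(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.3f}s")

def end_to_end():
    data = [str(i) for i in range(1_000_000)]   # stand-in for input parsing/I/O
    nums = list(map(int, data))                 # setup / conversion
    nums.sort()                                 # the kernel of interest

def kernel_only():
    nums = list(range(1_000_000, 0, -1))        # pre-built input, no parsing
    nums.sort()                                 # only the kernel is timed

measure("end-to-end ", end_to_end)
measure("kernel only", kernel_only)
```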

No seriously, psychology is probably better at this game than computer science.

Comment: Re:If it's published, it must be true? (Score 1) 174

I tend to seriously dislike the kind of comment that attributes malicious intent to researchers. I do not think this is a problem of collusion. The problem is that making a sound and reproducible experiment is HARD. It is easy to forget to report a phase of your experiment that you did not think about but that turns out to be important. It is also easy to have an implicit bias you did not recognize: an obvious one could be that you ran your experiment on a Sunday, so you excluded all the churchgoers.

Comment: Re:This is stupid (Score 1) 109

No, it is great. In an era where we find it so difficult to get young people interested in science, I think his comment is brilliant. It is the perfect example of what many people should do: try to capture people's interest using whatever reasonable communication channel is available.

Comment: Re:Useless for budget scientific computing (Score 4, Insightful) 110

I was at the keynote at GTC this morning, and it really depends on what you are doing. If you want to do numerical simulation, it is not very useful because the double precision performance is terrible. But if you do data mining, you mostly care about bandwidth and single precision performance, and then 12GB isn't too much. Actually, I find it still a bit on the low side; Intel Xeon Phis are featuring 16GB these days. And in the realm of data analysis, fitting the data on the accelerator is what makes the difference between the accelerator being great and the accelerator being useless.
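As a crude illustration of that "does it fit" question, here is a Python sketch; the dataset shape and the 1.5x workspace overhead are assumptions for the example, not properties of any particular card:

```python
def fits_on_accelerator(rows, cols, bytes_per_value, mem_gb, overhead=1.5):
    """Rough check: dataset size times a workspace overhead vs. device memory."""
    needed = rows * cols * bytes_per_value * overhead
    return needed <= mem_gb * 1024**3

# 100M samples x 16 features on a 12GB card:
print(fits_on_accelerator(100_000_000, 16, 4, 12))   # float32: True, it fits
print(fits_on_accelerator(100_000_000, 16, 8, 12))   # float64: False, it doesn't
```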

Comment: Re:While publish or perish has problems... (Score 2) 112

Of course we are publishing more. We are pushed to publish, so obviously papers get forgotten. But it is not clear to me that this is a bad thing. What publish-or-perish accomplished is that we are communicating more. Clearly we are communicating smaller ideas, smaller experiments, smaller contributions, but we are also communicating earlier in the process.

It is frequent nowadays for one idea to be spun into three papers: a preliminary workshop paper, a conference paper, and a journal paper. Clearly, once the journal version is published, the workshop and conference versions will not receive many citations. But does that mean they were not useful? The citations they did get mean that some people read those papers and that the knowledge/insight contained in them was spread. This might not be a bad thing.

Now, if you were using publication/citation counts as a metric of how good people are, that metric is probably ruined now. But it was a terrible metric to begin with.

Comment: Re:Sure (Score 1) 44

I am a university professor and I do not think it is going to worsen it.

Those who don't have other access to higher education will certainly learn from it.
Those who do have access to higher education now have a new type of resource they can use to learn.

Those who skip classes and say "I'll watch the video the week before the exam" or "I don't need to learn it, there is a video about it" will certainly suffer from MOOCs. But clearly they weren't ready to put in the effort necessary to learn the material, so they were not going to learn anyway.

