Comment: Re:For anything less than 600 miles... (Score 1) 515

It really depends on how you count, and I respectfully disagree with your way of counting.

Driving costs gas; the exact figure depends on gas prices and your car's fuel efficiency, but 35 miles/gallon at $3/gallon seems reasonable. That's about 8.5 cents a mile.
But driving also wears out your car, which costs repairs. Here again, the exact cost is unclear, but assuming a $20k car plus $10k of repairs and maintenance over a 200k-mile lifetime, that's about 15 cents a mile on average.
There is also typically an insurance cost. (Not if you only drive once in a while, but a regular traveler would pay it.)
All told, I doubt the cost of using the car falls below 25 cents a mile, and that is still below the federal reimbursement rate of about 53 cents per mile. So driving LA->SF (roughly 380 miles) would cost about $95, while the federal rate would put it around $200. (Double both for a round trip.)
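
A back-of-the-envelope sketch of that arithmetic (every constant, including the ~380-mile one-way distance, is my rough assumption from above):

```python
# Rough per-mile driving cost; all figures are assumptions, not measurements.
GAS_PRICE = 3.00          # $/gallon
MPG = 35.0                # miles/gallon
CAR_PRICE = 20_000        # $ purchase price
REPAIRS = 10_000          # $ repairs/maintenance over the car's lifetime
LIFETIME_MILES = 200_000  # miles driven over that lifetime
FEDERAL_RATE = 0.53       # $/mile federal reimbursement rate
LA_SF_MILES = 380         # approximate one-way driving distance

gas = GAS_PRICE / MPG                          # ~0.086 $/mile
wear = (CAR_PRICE + REPAIRS) / LIFETIME_MILES  # 0.15 $/mile
print(f"gas+wear per mile: ${gas + wear:.2f}")  # ~0.24, before insurance
print(f"LA->SF at $0.25/mile:   ${0.25 * LA_SF_MILES:.0f}")
print(f"LA->SF at federal rate: ${FEDERAL_RATE * LA_SF_MILES:.0f}")
```
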
The train would certainly be cheaper. And if you take the train, you'll get there much faster and be free to do whatever you want during the ride. That is worth something too, especially in CA where salaries are so high.

There are costs induced by taking the train too, but even with a $200 round-trip ticket and two $50 taxi rides, the price remains about the same. And you save both your time and your stress. Properly operated trains are cheap and great.

Comment: Re:What? - Question Solved. (Score 1) 174

Well... I beg to disagree.
Computer science is not mostly discrete mathematics; that may have been true in 1980, but that was last century. Also, we have "reproducibility" issues in proofs as well. Many proofs in the field are not written correctly, and that causes many of them to be incorrect. For instance, a critical flaw was recently found in the proof of TimSort's correctness, which caused problems.

But computer science is much more than that nowadays. Algorithms get tested in practice because they are only proved correct on models of computers. You need to investigate runtime, numerical stability, and so on. It is good to know that an algorithm is in O(n^2), but the Big-Oh notation hides a constant factor (and a threshold below which the bound doesn't yet hold) which are found experimentally. And that is only in algorithm design.
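
To make that concrete, here is a toy sketch (my own example, nothing from the article) of recovering that hidden constant empirically: time an O(n^2) kernel for growing n and watch time/n^2 level off.

```python
import time

def quadratic_kernel(n):
    """A deliberately O(n^2) loop nest standing in for a real algorithm."""
    s = 0
    for i in range(n):
        for j in range(n):
            s += i ^ j
    return s

# The hidden Big-Oh constant: time(n) / n^2 should stabilize as n grows;
# for small n (below the threshold of validity) it typically will not.
for n in (200, 400, 800, 1600):
    t0 = time.perf_counter()
    quadratic_kernel(n)
    elapsed = time.perf_counter() - t0
    print(f"n={n:5d}  time={elapsed:.4f}s  time/n^2={elapsed / n**2:.3e}")
```
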
The entire field of networking and performance is essentially built around a loop of modeling, experiments, and reconciling discrepancies.
All of data mining/machine learning is likewise built on hypothesizing that a model fits reality and then validating it.
Programming languages/middleware research is also fairly experimental: you hypothesize that one programming language delivers a better effort-quality tradeoff than another, and you verify it experimentally by measuring the performance of a human population using the different tools.
Human-computer interaction is also very experimental: after designing a new system or mode of operation, you hypothesize that it lets users perform a task faster than some other one, which you verify by comparing the performance of a set of users.
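
A toy sketch of the kind of comparison those last two paragraphs describe, with made-up completion times for two hypothetical user groups (scipy is my choice of tool here, not something from the original):

```python
from scipy import stats

# Hypothetical task-completion times in seconds for users of two systems.
system_a = [41.2, 38.5, 44.1, 40.3, 39.8, 43.0, 37.9, 42.4]
system_b = [35.1, 33.8, 36.9, 34.2, 37.5, 32.6, 35.8, 34.9]

# Two-sample t-test: are the mean completion times genuinely different?
t_stat, p_value = stats.ttest_ind(system_a, system_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p supports "system B is faster"; a real study would also report
# effect size, counterbalancing, participant demographics, and so on.
```
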

Comment: Re:What? - Question Solved. (Score 1) 174

I don't get the argument. You are saying that because most trained computer scientists work as engineers, computer science is not a science?
Then I guess medicine is not a science either, since most medical doctors are saving lives and treating illnesses rather than researching their root causes.
I suppose mechanics is not a science either, since most professionals are building bridges and designing components rather than solving the Navier-Stokes equations.

There is a science called computer science that gets published in scientific journals and conferences. And there are reproducibility issues there as well, maybe even more than in psychology, because psychologists get beaten up over experimental protocols while in CS we are more relaxed about it.

Comment: Re:What? - Question Solved. (Score 1) 174

Actually, 39% is not bad at all. I am sure it is no better in computer science. As a reviewer, I typically have to fight with authors just to get enough detail to even attempt a reproduction. Most CS papers lack basic information on:
- how the code is written (language, major data structures),
- how it is executed (compiled or interpreted, what level of optimization),
- where it is executed (which machine, complete specs, operating system, idle load, whether parallelization is used),
- what datasets are used ("randomly generated" says nothing unless you give the distributions),
- what is precisely measured (Did you include or exclude I/O? Did you measure only the kernel you are interested in, or the entire algorithm? Did you include startup and shutdown of your execution engine?)

If these are not mentioned, you lack the information to even attempt to reproduce the result. And even when you have them, there is always a chance the result will differ from what was reported.
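
For what it's worth, even a few lines that record the environment next to every timing would go a long way. A minimal sketch of what I mean (the helper name is mine, not any standard):

```python
import json
import platform
import sys
import time

def run_and_report(kernel, *args):
    """Time `kernel` alone (no I/O, no interpreter startup) and log the environment."""
    t0 = time.perf_counter()
    result = kernel(*args)
    elapsed = time.perf_counter() - t0
    print(json.dumps({
        "kernel": kernel.__name__,
        "seconds": elapsed,              # kernel only; startup/shutdown excluded
        "python": sys.version.split()[0],
        "machine": platform.machine(),
        "processor": platform.processor(),
        "os": platform.platform(),
    }, indent=2))
    return result

# Example: time the built-in sort on worst-case descending data
# (a "random" dataset would still need its distribution documented).
run_and_report(sorted, list(range(1_000_000, 0, -1)))
```
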

No, seriously: psychology is probably better at this game than computer science.

Comment: Re:If it's published, it must be true? (Score 1) 174

I seriously dislike the kind of comment that attributes malicious intent to researchers. I do not think this is a problem of collusion. The problem is that making a sound and reproducible experiment is HARD. It is easy to forget to report a phase of your experiment that you did not think about but that turns out to be important. It is also easy to have an implicit bias you did not recognize: an obvious one would be running your experiment on a Sunday, thereby excluding all the churchgoers.

Comment: Re:Useless for budget scientific computing (Score 4, Insightful) 110

by godrik (#49280771) Attached to: NVIDIA's GeForce GTX TITAN X Becomes First 12GB Consumer Graphics Card

I was at the GTC keynote this morning, and it really depends on what you are doing. If you want to do numerical simulation, it is not very useful because the double-precision performance is terrible. But if you do data mining, you mostly care about bandwidth and single-precision performance, and then 12GB isn't too much. Actually, I find it still a bit on the low side; Intel Xeon Phis feature 16GB these days. In the realm of data analysis, fitting the data on the accelerator is what makes the difference between the accelerator being great and the accelerator being useless.
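
To illustrate why fitting on the card hinges on precision, a quick sketch (the 12GB is the Titan X figure; the dataset shape is a made-up example):

```python
GIB = 1024 ** 3
GPU_MEMORY = 12 * GIB  # Titan X

# Hypothetical dense dataset: 40M samples x 64 features.
n_samples, n_features = 40_000_000, 64

for dtype, bytes_per_value in (("float32", 4), ("float64", 8)):
    size = n_samples * n_features * bytes_per_value
    verdict = "fits" if size <= GPU_MEMORY else "does NOT fit"
    print(f"{dtype}: {size / GIB:5.1f} GiB -> {verdict} in 12 GiB")
```

The same data that fits comfortably in single precision blows past the card in double precision, which is exactly the great/useless split.
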

Comment: Re:While publish or perish has problems... (Score 2) 112

by godrik (#49262985) Attached to: Scientific Study Finds There Are Too Many Scientific Studies

Of course we are publishing more. We are pushed to publish, so obviously papers get forgotten. But it is not clear to me that this is a bad thing. What publish-or-perish accomplished is that we are communicating more. Clearly we are communicating smaller ideas, smaller experiments, smaller contributions, but we are also communicating earlier in the process.

It is common nowadays for one idea to be spun into three papers: a preliminary workshop paper, a conference paper, and a journal paper. Clearly, once the journal version is published, the workshop and conference versions will not receive many citations. But does that mean they were not useful? The citations they did get mean that some people read those papers and that the knowledge/insight in them spread. This might not be a bad thing.

Now, if you were using publication/citation counts as a metric of how good people are, that metric is probably ruined now. But it was a terrible metric to begin with.

Comment: Re:Sure (Score 1) 44

by godrik (#49233613) Attached to: edX Welcomes 'The University of Microsoft' Into Its Fold

I am a university professor, and I do not think this is going to make higher education worse.

Those who don't otherwise have access to higher education will certainly learn from it.
Those who do have access to higher education now have a new type of resource they can use to learn.

Those who skip classes and say "I'll watch the video the week before the exam" or "I don't need to learn it, there is a video about it" will certainly suffer from MOOCs. But clearly they weren't ready to put in the effort necessary to learn the material, so they weren't going to learn anyway.

Comment: Re:Weren't deep convolutional nets debunked? (Score 1) 142

by godrik (#49077723) Attached to: Breakthrough In Face Recognition Software

I don't see the practical relevance of this. You cannot walk through an airport with a scrambled face, so the images the camera gets are "regular" images. Sure, you can generate ridiculous images that trigger false positives, but those images will probably never be fed to an actual system.
