
Submission + - Can Bad Scientific Practice Be Fixed? 3

HughPickens.com writes: Richard Horton writes that a recent symposium on the reproducibility and reliability of biomedical research discussed one of the most sensitive issues in science today: the idea that something has gone fundamentally wrong with science (PDF), one of our greatest human creations. The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession with pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. According to Horton, editor-in-chief of The Lancet, a United Kingdom-based medical journal, the apparent endemicity of bad research behaviour is alarming. In their quest to tell a compelling story, scientists too often sculpt data to fit their preferred theory of the world, or retrofit hypotheses to fit their data.

Can bad scientific practices be fixed? Part of the problem is that no one is incentivized to be right. Instead, scientists are incentivized to be productive and innovative. Tony Weidberg says that, following several high-profile errors, the particle physics community now invests great effort in intensive checking and rechecking of data prior to publication. By filtering results through independent working groups, physicists are encouraged to criticize. Good criticism is rewarded. The goal is a reliable result, and the incentives for scientists are aligned around this goal. "The good news is that science is beginning to take some of its worst failings very seriously," says Horton. "The bad news is that nobody is ready to take the first step to clean up the system."

Comment InfoSec implications of AI (Score 1) 421

I am an Information Security practitioner, and not an expert in this field, because nobody is. My experience is that nobody knows what they are doing. Most information systems are not secure, in the mistaken belief that nobody would bother breaking them; others are just secure enough to deter low-knowledge attacks. Almost everyone practices what is known as proportional-value deterrence, yet treats high-value systems as truly isolated even though so many side channels exist.

If malicious AI ever shows up, we are screwed. We have zero hope of securing any information system against it. The only hope is that it won't end us, because there is a good chance that a lot of the hardware such an AI would need will go dark.

Comment Isowhat? (Score 4, Informative) 95

I had to read TFA to figure out what "isostatic" means.

"Bizarrely enough, if we wanted to reach the Earth’s mantle, our best bet would be to dive down to the ocean floor and dig there; we’d “only” have to go through maybe 3 km of crust, as opposed to upwards of 25 km atop the Himalayas. This concept is known as isostatic compensation, and was actually uncovered by the famed British astronomer George Airy."

Comment State business (Score 2, Informative) 288

In Russia, there is no such thing as an independent large corporation; there are only nominally private and formally state-owned corporations. While Kaspersky does some good work, it should be treated the same way NIST is in the USA: as an organization whose primary mission is to protect and advance state interests.

Comment Primary purpose is to drive (Score 1) 287

I still remember how awful early consumer operating systems were. They crashed, they had ridiculous requirements, and they were badly designed. While all of this was unfortunate, the improvements that followed went to the primary purpose of these systems.

For cars, the awfulness of the digital platform lies in secondary functions: these systems do not improve how the car drives, yet the implications for your safety when something goes wrong are much higher.

Comment Re:AI is not predictable to humans (Score 1) 408

I drive roadsters that can stop on a dime. If I stand on my brakes because I hallucinated a wall in the middle of the highway, I can guarantee that the people behind me will rear-end me. Would you say they were following me too closely and are at fault?

Driving requires a great deal of prediction; it is simply not feasible to drive 100% defensively in most urban environments. People will cut in front of you, and if you accept no risk at all you will never merge, never change lanes, never turn left at a light. If other drivers start acting irrationally (or differently-rationally, like AI), accidents will drastically increase. AI not only has to drive well, it also has to drive somewhat like a human, or other humans will keep crashing into it.

Comment AI is not predictable to humans (Score 2, Interesting) 408

The big issue with AI-controlled cars in human-dominated traffic is that AI doesn't react the way people do. Sure, all-AI traffic would likely be more efficient and less prone to accidents, but we are nowhere near that. Instead we have AI that humans find hard to predict.

For example, take a huge puddle on the road: most humans would unwisely drive through it. What would AI do? No idea, and I wouldn't want to be driving behind it when that happens. What about a hobo at the end of the offramp begging for change? Would AI freak out about a pedestrian on the road? No idea, and I wouldn't want to be driving behind it to find out.
