Comment Re:I disagree. (Score 2, Insightful) 270

I don't understand why the source code itself is the distinction between something being verifiable or non-verifiable. It seems the results of the machine under standard testing would be.

If we stood your biocube up to tests similar to those any real device would go through (being presented with samples of a known alcohol percentage), your system would get some right and some wrong, and you could then say: out of a sample size X, the device accurately determined the BAC of Y samples within an accuracy window of Z.

This data (as well as the device itself, generally) would be available for use during any case in which it was used. For example, if (for the sake of argument) the initial tests came out with 100% accuracy, the defense would have the device available to conduct another set of tests (and another, and another) until they could show evidence that the measuring device itself was faulty, or at least inconsistent. In an ideal system, this would lead to reasonable doubt that the test worked, and the defense would win the case.

I don't see why the real device wouldn't be put to the same test regardless of what drives it (hardware, software, vanilla pudding, whatever) to determine whether it provides data accurately enough to be used as evidence. If the device is known to have inconsistencies, as the summary states, testing should be able to show this (whether caused by software or not), hence reasonable doubt about any readings. It's up to us (as people who would be on a jury) to determine whether Y correct out of X samples is accurate enough to convict someone in our society.

I'm all for the company releasing the software, but I'm not 100% convinced that a software bug being shown in court would cause a device like this to fail on the above criteria.

Let's say (for the sake of argument) that the device consists of:
a physical sensor of some sort
a way to adjust the input of the sensor (since there's always variance on those sorts of things),
software which reads this adjusted input and displays the result (perhaps averaging samples over time, etc, etc).

A normal production model would go through the testing described above, have the input adjusted until the results displayed by the software met standard criteria, then the adjustment would be locked and the unit would ship.

Even in the case where the software had a significant bug, like always adding .15 to every reading, the system as a whole would be adjusted to compensate for it.
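To make that concrete, here's a minimal sketch of the idea (all names and numbers are made up for illustration, not taken from any real breathalyzer): calibrate the input adjustment against reference samples of known BAC, and a constant additive software bug gets absorbed by the adjustment.

    # Hypothetical sketch: end-to-end calibration absorbs a constant software bias.

    BUG_OFFSET = 0.15  # the hypothetical "always adds .15" software bug

    def buggy_software_reading(sensor_value, adjustment):
        """Software layer: applies the input adjustment, then the (buggy) conversion."""
        return (sensor_value + adjustment) + BUG_OFFSET

    def calibrate(known_samples):
        """Find the input adjustment that minimizes average error on reference
        samples with known BAC (the factory calibration step described above)."""
        best_adj, best_err = 0.0, float("inf")
        for step in range(-300, 301):          # brute-force search over candidates
            adj = step / 1000.0
            err = sum(abs(buggy_software_reading(s, adj) - true_bac)
                      for s, true_bac in known_samples) / len(known_samples)
            if err < best_err:
                best_adj, best_err = adj, err
        return best_adj

    # Reference samples: (raw sensor value, known BAC). The sensor is assumed to
    # read the true BAC directly, so the only error here is the software bug.
    reference = [(0.04, 0.04), (0.08, 0.08), (0.12, 0.12)]
    adj = calibrate(reference)
    print(adj)                                  # ~ -0.15: calibration cancels the bug
    print(buggy_software_reading(0.10, adj))    # ~ 0.10: accurate despite the bug

The specific search doesn't matter; the point is that calibration is done against the whole system's output, so a constant error in the software is indistinguishable from a sensor offset and gets tuned out before the unit ships.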

And while the bug in this case would definitely be used in court, I have the feeling it would be more of the "get a guilty person off" kind of use rather than the "show true reasonable doubt about the system" kind, since the device as a whole would be accurate and precise whether that bug existed or not.

Space

Submission + - Intergalactic Clouds of Missing Mass Missing Again

Ponca City, We Love You writes: "Researchers at the University of Alabama in Huntsville have discovered that some x-rays thought to come from intergalactic clouds of "warm" gas are instead probably caused by lightweight electrons, leaving the mass of the universe as much as 10 to 20 percent lighter than previously calculated. In 2002 the same team reported finding large amounts of extra "soft" (relatively low-energy) x-rays coming from the vast space in the middle of galaxy clusters. Their cumulative mass was thought to account for as much as ten percent of the mass and gravity needed to hold together galaxies, galaxy clusters and perhaps the universe itself. When the team looked at data from a galaxy cluster in the southern sky, however, they found that the energy from those additional soft x-rays doesn't look the way it should. "The best, most logical explanation seems to be that a large fraction of the energy comes from electrons smashing into photons instead of from warm atoms and ions, which would have recognizable spectral emission lines," said Dr. Max Bonamente."
Education

Submission + - MIT's SAT Math Error

theodp writes: "The Wall Street Journal reports that for years now, MIT wasn't properly calculating the average freshman SAT scores (reg.) used to determine U.S. News & World Report's influential annual rankings. In response to an inquiry made by The Tech regarding the school's recent drop in the rankings, MIT revealed that in past years it had excluded the test scores of foreign students as well as those who fared better on the ACT than the SAT, both violations of the U.S. News rules. MIT's reported first-quartile SAT verbal and math scores for the 2006 incoming class totaled 1380, a drop of 50 points from 2005."
