Comment It all sounds ok to me. Score one for a dumb crook (Score 2) 532

A search history on a personal computer is a personal document of whoever did the search. If the defendant is the only person able to access the PC, he has to live with the document. A very similar situation would be a spiral notebook with detailed lists, notes, and entries identified by day (a real OCD piece of work), all about how to kill your wife, all in the defendant's handwriting. I see no difference in admitting both of these into evidence, given a proper foundation. Murder is a crime and should be punished. The problem is not that the record itself is bad. The problem is that most people do not know how to do searches without leaving behind a broad trail of bread crumbs for whoever might follow.

Comment Chicken Feather Charcoal IS carbon nanotubes (Score 1) 318

Why is this so surprising? My understanding was that carbonized chicken feathers, like many charcoals obtained from natural biological materials, contain significant amounts of carbon nanotubes, buckyballs, and all sorts of unrelated glop. The nanotubes cannot be separated from the glop, so researchers write off the whole thing as a failure. Now to find the reference. That is going to be a pain. I heard this a long time ago.

Comment Re:Worse, he is incompetently wrong. (Score 1) 404

I have had all kinds of experience. Some of it a little strange.

A couple of things I did around the end of law school bear mention here.

I probably should not discuss it, but I helped calibrate the Multi-State Bar exam during my third year of law school. Most lawyers will scream bloody murder that I was allowed anywhere near the data.

It is not like it sounds. I was working with a real psychometrician. He knew the statistics and methodology, and I knew the practical parts of computer systems. We both knew SAS very well.

(Statistical Analysis System - it is its own little language. In many ways, the language is an improvement over languages like Fortran, and I *like* Fortran.)

The data was double-blinded. Neither my friend nor I saw the questions or any of the answers. Someone else handled that part. All we had was, for each of the thousands of examinees and each question, whether or not the examinee got the correct answer. The question order was also scrambled, so we did not even know the order in which the examinees took the questions.

FWIW, for the Multi-State Bar in my state, and many others, only one thing counts: the total correct. Nothing is taken off for wrong answers. A passing score is much higher than 25% of the total. I do not recall the exact figure now, but somewhere in the 60-80% neighborhood of correct answers was needed to pass (actually it was a combined score from the written and Multi-State, but if you got only 25% on the Multi-State, you failed, period).
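For concreteness, here is a minimal sketch of that kind of blinded response matrix and the scoring rule, in Python. The real work was done in SAS, and the sizes, names, and cutoff below are my own placeholders, not the actual figures:

    import numpy as np

    # Rows are anonymized examinees, columns are scrambled question IDs.
    # Each cell is 1 if that examinee answered that question correctly, else 0.
    rng = np.random.default_rng(0)
    responses = rng.integers(0, 2, size=(10_000, 200))  # sizes are illustrative only

    # Only the total correct counts; nothing is taken off for wrong answers.
    totals = responses.sum(axis=1)
    passed = totals >= int(0.65 * responses.shape[1])   # 65% cutoff is a placeholder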

Bad statisticians get crappy results because they make wrong assumptions. Whoever wrote the article, never let him do your statistics. He makes assumptions that competent psychometricians know are false.

I know the article's assumptions. I made them myself until I worked with my friend.

Strange things happen with really good test questions. What follows is not all of it, but most.

First, some guy blindly guessing, say, by going down the questions and always taking the first answer, will fail. Even if he is incredibly lucky and lands nearly three standard deviations above the mean (a very unlikely outcome for a uniform random guesser), he will still fail the exam.
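To put rough numbers on that (my own back-of-the-envelope, assuming about 200 questions with five answer choices; those are not the official figures):

    import math

    n, p = 200, 1 / 5                  # assumed question count and blind-guess probability
    mean = n * p                       # expected correct: 40
    sd = math.sqrt(n * p * (1 - p))    # standard deviation: about 5.7

    lucky = mean + 3 * sd              # nearly three standard deviations of pure luck
    print(f"{lucky:.0f} of {n} correct, about {lucky / n:.0%}")
    # ~57 correct, roughly 28% -- nowhere near a 60-80% pass line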

Second, if his answers were educated guesses instead of blindly picking from a, b, c, d, or e, then his chances of getting the correct answer went down.

You cannot even take the exam until you graduate from an accredited law school, unless you are in California. *Every* single person who took the exam made at least educated guesses on most of the answers.

(One of the top guys in my law school class decided he was going to give the psychometricians a heart attack by answering every single question correctly. He bragged about it before the exam. The smartest guy I ever met, bar none. Later he said that at the end of the day, he looked up, realized that he had twenty-five questions to go, and there were five minutes left. He *blindly* answered the last twenty-five questions (all with (b), IIRC) and turned in his exam. He passed, of course.)

My friend and I could sort of tell the order of some of the questions, in spite of the double-blind. The more difficult questions, in the last fifty or sixty, clearly had a higher random component in the correct answers. Taking into account the difficulty of each question and the size of its random component, we felt confident that we could identify and order three-fourths of the final forty questions.

Difficulty == the number of examinees getting a correct answer, adjusted for their relative ability to correctly answer all the other questions. For small numbers of examinees, this is perilous. Our sample set was more than ten thousand examinees, verified against results from previous years. Those results, in turn, were sampled and diligently validated against LSAT scores (an aptitude test used for law school admission), law school grades, relative difficulty of law school, undergraduate grades, and personal inquiries to individual law school professors. No problem with either validity or sample size.
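Here is one way to sketch that adjustment in code, a rough illustration of the idea only, not the SAS procedures we actually ran. The "rest score" ability proxy and the decile breakdown are my own simplifications:

    import numpy as np

    def item_stats(responses, item):
        # Ability proxy: each examinee's total correct on all *other* questions.
        rest = responses.sum(axis=1) - responses[:, item]
        # Raw difficulty: overall proportion answering this question correctly.
        p_correct = responses[:, item].mean()
        # Adjusted view: proportion correct within each ability decile.
        cuts = np.quantile(rest, np.linspace(0.1, 0.9, 9))
        decile = np.digitize(rest, cuts)
        by_decile = [responses[decile == d, item].mean() for d in range(10)]
        return p_correct, by_decile

Plot by_decile for a well-behaved question and you get exactly the "S" described below.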

Educated guessers with weaker skills almost always got the wrong answer. Their odds were not the one in four or five of getting the correct answer that you would assume from a uniform random distribution. Typically, the odds were one in twenty or thirty, and on many questions much lower.

Plot the percentage of examinees answering a question correctly against their ability and the curve is "S" shaped. For those below the "S", the chances of guessing correctly were slim. For those above the "S", almost all of the answers were correct. The number of incorrect answers in that group was consistent with the expected number of people simply filling in the wrong answer when they knew the correct one. Inexplicably, the converse was not true for the weaker examinees. They tended not even to mistakenly get the correct answer. In the "S" area, the percentage getting a correct answer started slowly, accelerated up, then decelerated and leveled off.
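That shape is what item-response people model with a logistic curve. Here is a toy version, the standard three-parameter logistic form with parameters I picked for illustration, not anything taken from the actual calibration:

    import numpy as np
    import matplotlib.pyplot as plt

    def icc(theta, a=1.5, b=0.0, c=0.03):
        # Probability of a correct answer as a function of examinee ability (theta).
        # a: steepness of the "S", b: where it is centered (difficulty),
        # c: lower floor -- kept tiny because weak examinees rarely even
        #    lucked into the correct answer.
        return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

    theta = np.linspace(-4, 4, 200)
    plt.plot(theta, icc(theta))
    plt.xlabel("examinee ability")
    plt.ylabel("chance of a correct answer")
    plt.show()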

New questions, still being evaluated, would sometimes have a very wide "S" area, sometimes spanning the entire range of examinees. Everyone who got such a question correct got the point for it, but the question was not used again the next year.

Everyone taking the exam knew most of this before the exam started.

None of the questions gave inverted results, meaning weaker examinees consistently getting correct answers while very strong candidates consistently got wrong answers.

As for the article's comparison of a guy who knows two answers versus a guy who knows only one out of the hundreds on the exam: there was no meaningful difference. Both would have failed the Multi-State, and by a wide margin.

The other bit comes from some research work I did for a statistician. In plain language, he was applying Lotfi Zadeh's fuzzy logic to the relative performance of examinees answering a set of questions of varying difficulty. Straight-line statistics were imperfect in dealing with the information available. Fuzzy logic, however, looked like it had been hand-crafted to fit the problem. The fit really was that good.
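As a flavor of the approach only (a toy of my own, not his research code), Zadeh-style fuzzy membership replaces hard cutoffs like "strong examinee" and "hard question" with degrees between 0 and 1:

    def strong_examinee(score_fraction):
        # Membership in "strong examinee" rises smoothly from 0 at 50% correct to 1 at 80%.
        return min(max((score_fraction - 0.5) / 0.3, 0.0), 1.0)

    def hard_question(p_correct):
        # A question is "hard" to the degree that few examinees answer it correctly.
        return min(max((0.6 - p_correct) / 0.4, 0.0), 1.0)

    # Zadeh's AND is a min: how strongly does "a strong examinee answered a
    # hard question correctly" hold for this examinee/question pair?
    evidence = min(strong_examinee(0.75), hard_question(0.25))   # about 0.83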

He gave me articles to study in preparation for assisting his research. One article mentioned the C.P.O. exam. The author was a psychometrician and statistician who had worked for the Navy in calibrating that exam. This is the exam the U.S. Navy gave before promoting sailors to Chief Petty Officer.

Each CPO had to pass the exam, and only veteran sailors with eleven or more years in the Navy were allowed to take it.

Again, inexplicably, there was one super-question in the set of exam questions. Because it was so good, the Navy took pains to maintain the usefulness of the question. It was not given in every exam, and those in the know would not discuss the details. If I knew them, I would not discuss them, either.

Every time a sailor answered that question, his answer correctly predicted his ability to handle the CPO job. Those who answered it incorrectly but scored high on the remaining questions were invariably incompetent or, at best, mediocre as CPOs. Those who got it correct and managed to become CPOs were competent, no matter what their other indicators might have been. The random effects of guessing were negligible to the point of being invisible.

If you could keep it secret, that one question could be the whole exam. If the article were correct, that question could not possibly exist.

The mere existence of that one single question makes the parent article irrelevant to the point of being incompetent.
