Comment Re:RTFA before commenting (Score 1) 629

Standardized tests are one measurement, but not the only or best one... just the cheapest and the easiest for politicians and lazy reporters to spout about.

"Best" would imply some set of criteria, right? If inexpensive, consistent, apparently-easy-to-understand, and status-quo are part of your criteria, then couldn't standardized tests be the "best"? While the states place far too much confidence in the results (e.g. they do not even report the students' scores in error bands), they may be justified in their selection of standardized tests as a method of assessment.

Many (most) states use tests that are far below industry standards. But we shouldn't besmirch all standardized tests because the state chooses poorly.

Comment Re:I say test the teachers (Score 2, Interesting) 629

Test the teachers on the material they are teaching.

James Popham, a professor emeritus at UCLA, wrote that if we want to know something about someone, we measure that something in that someone. To measure something in the students and then draw a conclusion about the teacher is "a second-step inference." He pointed out that current psychometric theory (see the AERA, APA, NCME 1999 Standards for Educational and Psychological Testing) only deals with first-step inferences.

Note that the LA Times analysis used value-added methods, which have not been fully vetted in the psychometric literature. In particular, the degree to which measurement error (which is operationalized slightly differently in psychometrics than in other fields) interacts with value-added methods has not been established. Given that the false-result rate on New York State's tests is around 5% (which is probably close to CA's), I doubt you can rely on the scores as much as this analysis does.
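To make the concern concrete, here is a minimal Monte Carlo sketch in Python (not the LA Times' method; the class size, score scale, reliability, and true gain are all made-up numbers) of how measurement error alone spreads out a naive gain-score estimate of "teacher effect," even when every simulated classroom is taught identically well:

    # Toy simulation: identical teachers, noisy tests, naive value-added estimates.
    # All constants below are hypothetical, chosen only for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    N_CLASSROOMS = 1000
    CLASS_SIZE = 25
    TRUE_GAIN = 10.0          # every classroom makes the same true gain
    SCORE_SD = 40.0           # spread of true scores on the test scale
    RELIABILITY = 0.90        # error SD = SCORE_SD * sqrt(1 - reliability)
    ERROR_SD = SCORE_SD * np.sqrt(1 - RELIABILITY)

    estimates = []
    for _ in range(N_CLASSROOMS):
        true_pre = rng.normal(500, SCORE_SD, CLASS_SIZE)
        true_post = true_pre + TRUE_GAIN
        # Each administration adds its own independent measurement error.
        obs_pre = true_pre + rng.normal(0, ERROR_SD, CLASS_SIZE)
        obs_post = true_post + rng.normal(0, ERROR_SD, CLASS_SIZE)
        # Naive "value added": the classroom's average observed gain.
        estimates.append(np.mean(obs_post - obs_pre))

    estimates = np.array(estimates)
    print(f"true gain in every classroom: {TRUE_GAIN}")
    print(f"SD of estimated gains across identical classrooms: {estimates.std():.2f}")
    off = np.mean(np.abs(estimates - TRUE_GAIN) > 0.3 * TRUE_GAIN)
    print(f"share of estimates off by more than 30%: {off:.0%}")

The point is not the particular numbers; it's that the error in each individual score propagates into the classroom-level estimate, and any serious value-added analysis has to account for that.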

Comment Re:Validity (Score 1) 571

I'm a research psychologist who specializes in testing and assessment, and asking "is this test valid?" is exactly the right response to this article.

I hope you're referring to the test being valid as a shorthand way of communicating with non-psychologists. The latest Standards for Educational and Psychological Testing (AERA, APA, NCME, 1999), Chapter 1, explains quite clearly that tests are not valid or invalid, nor are a test's results. It is the inferences we draw from test results that can be more or less valid. This, of course, follows from Messick's (1995) work and is a derivation (or, rather, an evolution) of classical validity, which was overseen by Cronbach.

Comment Re:Labeling (Score 4, Insightful) 228

Also, right now, ASD clumps together symptoms even though they may have different etiologies. Having a biological test for a trait correlated with autism may help tease out the degree to which different conditions result in the same symptoms. When children test negative, but still exhibit ASD, we know there is another pathway to the condition that may be better served through different treatment.

This could be HUGE.

Comment Re:Bad idea in the first place (Score 4, Interesting) 44

Especially when there are already laws against the behavior in question and these laws already put the onus on the companies. (This isn't original to me, but I'm too lazy to look up the original reference.)

It works like this: If Person A pretends to be me and gets something without paying for it, that's fraud, not "ID theft." But with fraud, I'm not the victim; whoever accepted the fraudulent credentials is.

Over the last 15 years we've seen a new crime called "ID theft," wherein the victim is no longer the entity with the power to impede the crime; the victim is a third party. That way credit-granting agencies can ignore the warning signs and then bill the wrong person for the transaction.

If we stopped talking about "ID theft" and just went back to fraud, the companies would already have the motivation to tighten their ID checks.

Comment Re:Rather a Poor Metric (Score 1) 659

The latest standards from AERA, APA, and NCME require test publishers (and this includes surveys, self-report tools, etc.) to collect evidence supporting the interpretations they claim can be made of the test results. That doesn't mean they all do, and instruments developed by researchers for their own studies usually lack that evidence. Whether or not a test has such evidence largely determines its quality. Higher-end (expensive) tests like the Student Self-Concept Scale pay for the research to support it.

The whole subfield of supporting certain interpretations of test results is called "test validity," which is slightly different from either logical validity or scientific validity. The popular model is based on the work of Lee Cronbach, but the most advanced model (which is canonized in the latest standards) came from the work of Samuel Messick. The Wikipedia articles reflect this duality, with "Validity (Statistics)" describing Cronbach's view and "Test Validity" describing Messick's.

To answer your question, correlation has been an enormous part of validity, to the point that a correlation coefficient has been called a "validity coefficient," though this terminology is falling out of favor. (As a graduate student, I was humbled by an established leader in the field when he dismissed my correlations with, "You can get anything to correlate.") Correlation is an important tool, but it's a first step.
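That "anything correlates" quip is easy to demonstrate with a toy simulation (the sample size and threshold here are arbitrary choices, purely for illustration):

    # Toy illustration: with small samples, two variables generated with no
    # relationship at all still produce "moderate" correlations fairly often.
    import numpy as np

    rng = np.random.default_rng(1)

    N_STUDIES = 10_000
    SAMPLE_SIZE = 20        # a small study
    THRESHOLD = 0.3         # a correlation many would call "moderate"

    hits = 0
    for _ in range(N_STUDIES):
        x = rng.normal(size=SAMPLE_SIZE)
        y = rng.normal(size=SAMPLE_SIZE)   # independent of x by construction
        if abs(np.corrcoef(x, y)[0, 1]) >= THRESHOLD:
            hits += 1

    print(f"share of pure-noise 'studies' with |r| >= {THRESHOLD}: {hits / N_STUDIES:.1%}")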

Some studies do ask other people to verify someone's self-rating, and some scales (e.g., the Vineland Adaptive Behavior Scales) have others (informants) fill out ratings on the examinee. The examinee never even sees the test (though the examiner must have their or their legal guardian's permission).

Comment Questions about the linked instrument (Score 1) 659

Holy Donald Campbell, Batman!

That instrument may have a few serious issues. I would like to see the data before trusting it.

1. It uses a bunch of negatively worded statements that would work better as positively worded statements with reverse coding.

2. It has an odd number of response categories. (This is somewhat of a religious issue in the field.)

3. Each item is scored on a straight 5-point scale. The assumption that the response categories are equally spaced may or may not hold. A Rating Scale Model (1-parameter logistic) would establish the extent to which it holds for each item (a crude version of this check is sketched below).

Add to this issues of perception vs. reality (which is a concern with all self-report scales) and you get a practically useless instrument.
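For points 1 and 3, here is a rough sketch of the data handling involved (the item names, which items are negatively worded, and the responses are all made up; a real check of point 3 would fit a Rating Scale Model with a proper IRT package rather than the crude rest-score comparison below):

    # Sketch: reverse-code negatively worded items, then eyeball whether the
    # five response categories behave as if they were equally spaced.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)

    ITEMS = ["q1", "q2", "q3", "q4"]
    NEGATIVE_ITEMS = ["q2", "q4"]     # hypothetical negatively worded items
    MAX_CATEGORY = 5                  # 1..5 response scale

    # Fake responses from 200 examinees.
    data = pd.DataFrame(rng.integers(1, MAX_CATEGORY + 1, size=(200, len(ITEMS))),
                        columns=ITEMS)

    # Reverse-code so that "5" always means more of the trait.
    for item in NEGATIVE_ITEMS:
        data[item] = MAX_CATEGORY + 1 - data[item]

    # Crude interval check: for each item, the mean rest-score (sum of the other
    # items) within each response category, and the gaps between adjacent means.
    # Truly equal-interval categories would show roughly even gaps.
    for item in ITEMS:
        rest = data.drop(columns=item).sum(axis=1)
        category_means = rest.groupby(data[item]).mean()
        print(item, np.round(np.diff(category_means.values), 2))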

Comment Re:Rather a Poor Metric (Score 2, Insightful) 659

As a professor, I agree with your observation that empathetic behaviors have not changed in the last 20 years. I wonder whether real empathy has remained the same or students today are just better at faking it. (Conversely, they could be more empathetic and worse at showing it.)

The relation between the measurement results and the actual trait would need to be established, assuming we could get an objective measure of empathy.

All TFA shows is that student perception of their own empathy, as measured by self-report instruments, has decreased. The "why" is another study.

Comment Re:Rather a Poor Metric (Score 1) 659

That's known as SDR (Socially Desirable Responding) in psychometrics, and it's a well-explored phenomenon. For self-report instruments such as this, SDR is an accepted risk because there is really no better way to measure these traits. (The legendary Donald Campbell tried for 20 years, but gave up.)

I'm not saying this scale is a good scale, only that we must temper our interpretations of the results (which is central to validity in measurement). About all we can say is that the resulting scores have decreased over the last two decades. Tying that to actual empathy is a huge stretch.

For example, I do a lot of work in measuring confidence, specifically the trait of self-efficacy. When I write up my results, I am very careful to only talk about perceptions, not actual traits.

Comment Re:Midas Touch (Score 1) 175

It's well known in a lot of places thanks to the documentary "Beer Wars." In the DC area, where I live, there are several Dogfish Head alehouses, and the local Wegmans stocks several of their beers as well. I don't normally like beer, but Dogfish Head makes excellent, varied, and eccentric beers that actually taste good.

For those of you on the West Coast: Wegmans is a Rochester-based grocer that puts anything else to shame. Seriously, I moved here from the Bay Area.

Comment Re:Religion (Score 1) 892

How is it faith to take the basis of this description at face value? How can scientific evidence be revelation if it's tested again and again?

That's the way it works in theory... Let me know when you get your own LHC fired up so you can personally replicate those findings.
