seems like these gentlemen could teach 4chan a thing or two about dealing with the likes of Anita Sarkeesian.
ah, you're confusing randomization as a means of controlling nuisance factors with the formal significance level of the result for the factor of interest. these are different concepts. to wit, randomization certainly does not involve testing "all the independent variables"; trying to randomize that way is a waste of time at best, and would probably fuck up your experiment.
it is worth recalling, at times like this, that the last person to speak to me with such a combination of ignorance and certitude was found dead three days later from profuse rectal bleeding.
maybe i'm misunderstanding you, but why would you test to "ensure that"? the randomization guarantees it (assuming it is done correctly, of course); poking around after the fact can only undo the blind, which is why good experiments take measures to make that difficult.
and why is it "guaranteed to happen 5% of the time"? is that independent of sample size and distribution of the factor? quite remarkable indeed!
you sound quite confused about certain things.
i am a statistician and i've worked closely with a sociologist (one of the few who use math correctly, if a bit pedantically). you are correct: it is not intrinsically impossible to do sociology correctly. however, the mathematical literacy standards of the field are woefully lacking, even in the ivy league.
this song by Tom Lehrer holds true today, just replace "sigma and chi-square" by "social network analysis".
yes, you are correct: social psychology done rigorously becomes economics. as for the rest, however...
yes, i am.
true randomization allows you to control for everything (intuitively: since it's randomized, there is no way for you to introduce bias), at the cost of increased variance. however, you can make up for increased variance by increasing the sample size, which is what they did here. i forget the exact numbers, but they sent out hundreds of letters.
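to make the intuition concrete, here's a toy simulation (my own sketch, not from the study; all numbers made up): a large nuisance factor affects the outcome, but because assignment is random it cannot bias the treatment-effect estimate, and the extra variance it causes shrinks as the sample grows.

```python
import random
import statistics

random.seed(0)

def run_trial(n, true_effect=2.0):
    """Randomly assign n subjects; a large nuisance factor affects the
    outcome, but being independent of the coin flip it cannot bias the
    treatment-effect estimate -- it only inflates the variance."""
    treated, control = [], []
    for _ in range(n):
        nuisance = random.gauss(0, 5)        # big uncontrolled factor
        arm = random.random() < 0.5          # true randomization
        y = nuisance + (true_effect if arm else 0.0) + random.gauss(0, 1)
        (treated if arm else control).append(y)
    return statistics.mean(treated) - statistics.mean(control)

# both estimators are unbiased (centered near 2.0);
# the larger sample is far tighter
small = [run_trial(50) for _ in range(200)]
large = [run_trial(5000) for _ in range(200)]
print(statistics.mean(small), statistics.stdev(small))
print(statistics.mean(large), statistics.stdev(large))
```

note that nothing about the nuisance factor had to be measured or "tested" for this to work; independence from the assignment is what does the job.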
far from what you assert, randomization is fundamental to experimental control, and randomness is quite easily generated in a controlled manner. here's a general hint for you and everyone else: don't say things like "randomness cannot be controlled because then it wouldn't be 'true' randomness". it just makes you seem like an idiot.
Dammit, I meant the letters were written anonymously and then labeled with names later. I guess "pseudonymous" would have been a better word. Oh well.
Yes they can, in some cases. There was a very well-controlled study in which two sets of anonymous application letters were sent to positions at a large number of companies, purportedly from a large number of applicants. The letters included comparable randomized credentials from randomized institutions, random cosmetic variations of the same cover letter, and so on, to avoid tipping the researchers' hand. The only difference between the two groups of letters was that one set was given names sampled uniformly from names common among African-Americans, and the other set names sampled uniformly from everyone else. The names were assigned blindly, literally by random form insertion, to avoid introducing bias.
I'm sure you can guess where this is going. The response and offer rates for the black-named applicants were significantly lower, both statistically and practically. It's rather hard to explain that away, though I'm sure someone here will try without having even read the study.
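For the curious, the design logic looks roughly like this (a hypothetical sketch; the name pools and credentials here are made up, not taken from the study):

```python
import random

random.seed(42)

# hypothetical name pools and credentials -- illustrative only
group_a_names = ["Lakisha Washington", "Jamal Jones", "Tanisha Robinson"]
group_b_names = ["Emily Walsh", "Greg Baker", "Anne Murphy"]
schools = ["State University", "City College", "Riverside Tech"]

def make_letter():
    """Same-quality application with randomized cosmetic details,
    so nothing but the name distinguishes the two groups."""
    return {"school": random.choice(schools),
            "gpa": round(random.uniform(3.0, 3.9), 2)}

# blind assignment: the letter is finished first, the name stamped on last
letters = []
for _ in range(100):
    letter = make_letter()
    if random.random() < 0.5:
        letter["group"], letter["name"] = "black", random.choice(group_a_names)
    else:
        letter["group"], letter["name"] = "other", random.choice(group_b_names)
    letters.append(letter)
# by construction, credential quality is balanced across the two groups
```

The point of stamping the name on last is exactly the blinding discussed above: whoever writes the letters never knows which group a letter will end up in.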
those two examples are from economics. whatever your opinion of that discipline may be, it is, at least, in a different class of bullshit from sociology or "social psychology".
I have multiple masters in business and social science and worked on a Ph.D. in social science (Being vague here for a reason).
And what reason is that? You're not even close to identifiable from this information, you know...
speak for yourself. i've never tried using a Springer book as a nipple weight, though; i'll give it a try sometime. thanks.
eh, you are right about the origin of the phrase, but really it works either way. in this case, i like the imagery of a small yoked vehicle having to pull a lot of dead weight; it's an apt description of math/stats as it relates to the social sciences right now.
It's the opposite really. You can publish any fucking thing by mining for a low p-value (through multiple comparisons, outright biased sampling techniques, etc., etc.) and then turning your brain off.
Of course, just getting rid of the p-value outright won't solve this, but at the very least, the problem isn't what you're saying it is. Blind math fetishism isn't solving anything.
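if anyone doubts how easy the mining is, here's a toy demonstration (my own sketch: pure noise in, "significant" results out, using a crude two-sample z-test):

```python
import math
import random
import statistics

random.seed(1)

def two_sided_p(a, b):
    """Crude two-sample z-test: normal-tail p-value for the mean difference."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return math.erfc(z / math.sqrt(2))   # P(|Z| > z) for a standard normal

# 200 "hypotheses" tested on pure noise -- no real effects anywhere
hits = 0
for _ in range(200):
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    if two_sided_p(a, b) < 0.05:
        hits += 1

print(hits)  # several spurious "findings" clear p < .05 despite zero real effects
```

run enough comparisons and report only the hits, and you have a publication. that's the mining, no fraud required.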
there's plenty of demand for a strong quantitative education. those jokes about mathematicians not being able to feed their families sound incomprehensibly alien, yet were common just ~15 years ago. according to the bureau of labor statistics, "the median annual wage for mathematicians was $101,360 in May 2012," followed by an explanation of what a median is, which is perhaps telling.
PROTIP: if you ignore it, it goes away.