Right. So finding women attractive for their intelligence is a bad thing. Got it.
According to some random website of potentially dubious accuracy, he makes 10 million a year and has a 150 million net worth. This is approximately how much money he makes in 7 minutes. I'm gonna go out on a limb and say that there are probably better uses of his time than the equivalent of picking a penny up off the sidewalk.
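Back-of-the-envelope, assuming the $10 million/year figure is accurate (the net worth doesn't enter into it), the per-minute rate is easy to sanity-check:

```python
# Rough check of the "7 minutes" claim, assuming $10M/year earned
# around the clock. (A simplification; the exact sum being compared
# to isn't stated here, so this only shows the order of magnitude.)
annual_income = 10_000_000            # dollars/year, per the site
minutes_per_year = 365 * 24 * 60      # 525,600 minutes
per_minute = annual_income / minutes_per_year
seven_minutes = 7 * per_minute
print(f"${per_minute:.2f}/minute, ${seven_minutes:.2f} in 7 minutes")
```

That works out to roughly $19 a minute, or about $133 for seven minutes of his time.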
A year or so ago a friend of mine sent me a text that said "Have you ever considered becoming an actuary?" The following day I had ads for an actuarial school in my newsfeed. Are they saying that was a coincidence?
This is why I'm making an effort to move away from Google products. Anything that doesn't turn enough profit disappears. Smaller companies are better suited to handle smaller products that aren't directly related to the Google business model. I won't use SketchUp for this reason, nor will I use anything new they put out in the future unless it appears to be generating substantial income for them. Everything else just disappears.
Nonetheless, we do know that psychology is applied biology, chemistry, and fundamentally physics.
Right. And art is applied paint.
It's beyond absurd to compare the performance of single cities to the performance of entire nations. There's so much money riding on making US schools look bad so that they can be privatized. It's unfortunate people keep falling for these bullshit statistics.
How are they supposed to function without metrics?
Thanks for this. As an inexperienced programmer I appreciate when unintuitive things are simply explained.
False. Half didn't vote at all.
Is this a marketing ploy to promote Ender's Game?
Why do Luddite trolls always get modded "insightful?"
Granted, I'm saying this without investigating the specific studies referenced, but my experience with educational research (master's in instructional tech) has left me very wary of these studies. Most of them deal with a very small sample and conclude that a particular piece of software is a resounding success. Others will use a larger sample to evaluate a piece of software, but fail to give the teachers any training on how to use it, and then conclude that the software is a failure when the reality is that most of the teachers involved got frustrated and simply stopped using it because they didn't know how. Still others make sweeping conclusions of the "technology has no effect on student performance" variety after finding no difference between writing on the board and using PowerPoint. Very few studies are of any quality or are even worth being aware of. Two books worth looking at for those interested are "Using Technology Wisely" by Harold Wenglinsky and "Scaling Up Success" by Chris Dede and others.
You're running into the problem that exists with all metrics: metrics affect what they measure whenever the subject, or those influencing the subject, can respond to the result. If you grade based on the amount of time spent doing a task, even those who can do it quickly will slow down for a better grade. Measure kids in such a way that a typo on 2+2 gets averaged in with a correct answer on an algebra problem, and you end up with a very strange result. (Test prep companies advise kids to spend more time on the easy problems and ignore the deeper, more difficult ones. Take a moment to consider the implications of that.) Knowledge is the most rudimentary level of learning; the ability to apply it is what we need more of. So as you observed your teachers teaching to the test and made your assessment on that basis, you didn't question whether anything of value was being measured. I have never seen anything that shows me it's possible to test deeper levels of knowledge with a standardized test. I'd be delighted to be wrong about this, so by all means, cite something that supports that claim. You're now the second person to complain about me not citing anything without citing anything yourself. If this exists, I need to see it.
Fair point. Unfortunately I'm not able to properly hunt anything down right now, other than remembering that Dan Pink references a few in his book Drive. But does it really seem like that much of a stretch to say that a single high-pressure multiple-choice test is a worse indicator of ability than a larger number of lower-pressure tests? I'd also point out that your BS detector doesn't seem to detect anything wrong with the idea that standardized tests are a good metric, despite the fact that you haven't cited anything either. Perhaps you mistakenly bought a mislabeled conflicts-with-my-bias detector?