Academics gotta publish. Novel results are more publishable than confirmations of previous research. "If you torture the data enough, it will confess." Even honest researchers will mistake randomness for a pattern, especially if they like the result for ideological, personal, or professional reasons.
Combine these and other things, and you get the reason for this paper: "Why Most Published Research Findings Are False".
This was from 2005. I wonder if since then any published research findings confirm his conclusions?
One appalling aspect of the Spanish Flu epidemic is that, due to wartime censorship, information about the disease was suppressed, including information about where it was, and how to avoid it.
Not in Spain, though. The disease was in a lot of places, including Spain. Spain wasn't in WW1, so wartime censorship did not apply there.
Information about it was in Spanish newspapers, and that's how it got the name -- even though it apparently started in Kansas, and spread through overcrowded US military barracks and troop ships.
As was stated in a podcast -- http://www.econtalk.org/archiv... -- failure to finish doesn't mean much.
If you can get what you came for after a few hours, or after doing 90% of the course, why bother to go on? You got value from it.
And if you quit after a quick look, is that a bug or a feature? Do people always attend every university they visit? Do people never drop a class when they realize after a session or two that it isn't going to work for them?
There may not be a "right" metaphor for this, but if there is, it's probably not related to academia.
A MOOC is more like a library book than a college class. You're not obligated to complete what you start, and it's silly to suggest that not completing it is some kind of "failure".
"On August 6, U.S. District Judge Anthony Trenga ordered the Executive Branch ..."
One of my pet peeves: the journalistic practice of pretending that the federal judiciary is not part of the federal government.
Sounds even stupider when I put it into words.
Well, if 95+% of the elected, appointed and hired people in the federal government are busy being punished -- presumably by arrest, prosecution and imprisonment, but I'd settle for hickory switches methodically administered in a measured number of strokes for each infraction, by the nearest available taxpayer -- who's going to run the government?
I dunno, but wouldn't it be nice to find out?
It didn't quit being evil just because it ended. History is chock-full of examples of things that worked out badly. It's important to remember them.
I'd offer some obvious examples, but I'm trying to avoid Godwin's Law.
It's nicknamed the Conspiracy Channel now? Before I quit cable, it was the War Channel.
And if the federal government can't improve its processes for such things, perhaps it will quit attempting them. After all, screw up enough things badly enough, and it'll run out of money, and go away. Such is the nature of failed institutions.
Oh, wait. That won't happen. Two reasons: IRS and Federal Reserve.
Of course, it could go away without running out of money. Two examples: Weimar Germany and Zimbabwe. The places were still there, but they had dramatic changes in management.
Such is the nature of failed institutions.
There were people seeing this "menace" back in the 1970s, and offering similar "solutions". http://duckduckgo.com/?s=the.t... (I'm seeing the upside of becoming an old fart. Perspective. Spotting patterns of alarmism.)
There will still be need for people to make lace and stockings and cloth, after the machines take over, Mr. Ludd. (Whoops! Wrong iteration.)
There will still be need for people to do whatever it is that machines can't do, or can't do at a competitive price.
"Can't" is a bigger category than technological infeasibility. People still commission oil painting portraits, after over a century of photography. There are still restaurants, and not a proliferation of automats. ("Automats"? https://en.wikipedia.org/wiki/...) One-off tasks -- organizing a conference, for instance -- could be automated to a degree, but ultimately someone is going to have to conceive of it, identify key participants, convince them to attend, obtain sponsors, etc.
"At a competitive price". It may be possible to design a robot that picks up cigarette butts and other minor debris in every possible location and situation, or does the gardening for every conceivable landscape, but doing it well enough might just be too damn expensive. Centralizing pre-made decisions has proven damnably hard to do well, at any cost. It won't be any different when those decisions live in software, whether they concern litter or aesthetics and locally-suitable horticulture.
Moore's Law would not have made the Soviet Union workable, and isn't going to help in many situations in the future. Not help enough to matter. Even free computing and data gathering and data transmission won't do it. People on the scene are flexible, and have knowledge distant theoreticians don't even know to acquire. (http://duckduckgo.com/?s=I.pencil+ferrule+graphite )
The real problem will be the pace. Personal adaptability allowed some blacksmiths to become automobile mechanics, and the rest were able to get by during the transition, sticking with the horses. The important skill in the future will not be a specialization in any area, or guessing which skills will not be automated (people will always need shrinks or whores or physical therapists or tactful portraitists or wine stewards or the like), but adaptability itself.
Hardly an original thought. As I recall, skeptics were offering it back in the 1970s. (And, I bet, back in the 1770s.)
The substitution of capital goods for labor is more appealing when interest rates are low.
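The point about interest rates can be made concrete with the standard annuity formula: a machine's up-front price amortizes into a yearly cost that shrinks as rates fall, so cheap credit tips the machine-vs.-worker comparison toward the machine. A minimal sketch, with all prices, wages, and rates being illustrative assumptions rather than anything from the comment above:

```python
def annual_capital_cost(price, rate, years):
    """Level annual payment that amortizes `price` over `years`
    at interest `rate` (standard annuity formula)."""
    if rate == 0:
        return price / years
    return price * rate / (1 - (1 + rate) ** -years)

MACHINE_PRICE = 500_000   # assumed up-front cost of the machine
LIFETIME = 10             # assumed useful life, in years
WAGE = 60_000             # assumed annual cost of the worker it replaces

for rate in (0.01, 0.05, 0.10):
    cost = annual_capital_cost(MACHINE_PRICE, rate, LIFETIME)
    verdict = "machine wins" if cost < WAGE else "labor wins"
    print(f"rate {rate:4.0%}: machine {cost:,.0f}/yr vs wage {WAGE:,} -> {verdict}")
```

With these made-up numbers, the machine undercuts the wage at 1% interest but not at 5% or 10%, which is the whole mechanism: artificially low rates subsidize substituting capital for labor.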
Central banks attempt to keep interest rates artificially low -- at least for certain borrowers. (Guess who?)
Yet another way governments use central banks to screw over "the little guy".