What will they think of next? Remote desktop? It was called X Windows in the 1980s.
Kids these days think they invented sex.
Of course, you can get estimates of water vapour from IR satellite measurements. I saw this done in the 1980s too. At the time I didn't understand all the math, but I remember that it involved taking IR emissions over several wavelength bands and somehow combining them to infer the water vapour content at various heights in the atmosphere under each pixel. These satellites certainly have 2-mile spatial resolution, but the problem I see is that the polar-orbiting satellites providing this data pass over any given spot on earth only about 4 times per day, so the temporal resolution is no better than the balloons'.
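For what it's worth, here's a toy sketch of what I believe that inversion amounts to: each IR channel measures a weighted average of the water vapour profile, and you recover the profile with regularized least squares. The weighting functions and all the numbers below are entirely made up for illustration; the real thing involves proper radiative transfer.

    import numpy as np

    # Invented setup: 4 IR channels, each with a Gaussian weighting
    # function peaking at a different height. NOT real satellite physics.
    heights = np.linspace(0.0, 15.0, 30)          # km above ground
    peaks = [2.0, 5.0, 8.0, 11.0]                 # each channel "sees" a different layer
    W = np.array([np.exp(-0.5 * ((heights - p) / 2.0) ** 2) for p in peaks])
    W /= W.sum(axis=1, keepdims=True)             # normalize each weighting function

    true_q = np.exp(-heights / 3.0)               # a made-up water-vapour profile
    y = W @ true_q + 0.005 * np.random.randn(len(peaks))  # noisy channel radiances

    # Four measurements can't pin down thirty levels, so regularize
    # (Tikhonov smoothing): this is roughly where the hairy math lives.
    lam = 0.1
    A = np.vstack([W, lam * np.eye(len(heights))])
    b = np.concatenate([y, np.zeros(len(heights))])
    q_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

    print(np.round(q_hat[:5], 3), "vs true", np.round(true_q[:5], 3))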
Finally, data from airliners is going to be largely restricted to cruising altitude, so you're missing a large cross-section of the atmosphere there.
And don't get me started on weather observations over the ocean, where there are very few ground stations or balloons.
The issue is that the Navier-Stokes equations being solved in weather forecasting are very sensitive to initial conditions, so it's really crucial to get the data right to set up the calculations. Sitting in my armchair, I remain a bit skeptical that we will ever be able to get the true initial conditions.
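You don't need the full Navier-Stokes machinery to see the problem; the flavor of it fits in a few lines. Here's a sketch using the Lorenz system, the famously stripped-down convection model, with two runs whose starting points differ by one part in a billion:

    import numpy as np

    def step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # One crude forward-Euler step of the Lorenz system
        # (a rough integrator, but fine for illustration).
        x, y, z = s
        return np.array([x + dt * sigma * (y - x),
                         y + dt * (x * (rho - z) - y),
                         z + dt * (x * y - beta * z)])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-9, 0.0, 0.0])  # a one-part-in-a-billion "measurement error"

    for i in range(1, 3001):
        a, b = step(a), step(b)
        if i % 500 == 0:
            print(i, np.linalg.norm(a - b))
    # The gap grows exponentially; long before the end, the two
    # "forecasts" have nothing to do with each other.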
Despite all this, I'm always impressed that the NWS manages to put out pretty decent one-week forecasts, given the nearly impossible task it faces. There must be some deep voodoo in those numerical models!
I think it's impossible to simulate intention. People mistake intention for mathematical knowledge. That's why the term "artificial intelligence" is so wrong and so limited. Consciousness and intention are much richer than mathematical intelligence. Consciousness and awareness require subtle skills that we humans use to interact with the real world; skills so subtle that we barely notice ourselves using them. We certainly don't understand those skills, and since we don't understand them we can't build them into a computer program.
Writing software that interacts with the real world is very hard, because the world is too complicated and variable. Have you ever tried to write code that handles a user interface? It's very hard, because users vary so much in their assumptions about what your program is doing. Hell, you don't have to be a programmer to know that: we've all run into bad UIs. That's why the iPhone was such a big hit: its designers put a huge amount of time and effort into making sure the UI worked well. Previous phones had UIs written by the phones' engineers as an afterthought. Those UIs sucked, and the iPhone ate their lunch.
And even when a UI is well written, users still have to learn how to use it and get accustomed to its limitations. A general-purpose UI that could interact with the untamed world is probably impossible to write. How much harder, then, would it be to write a program that can interact with the real world and plan that world's destruction?
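To make the point about varied user assumptions concrete, here's a tiny made-up example (the format list is invented for illustration): even something as small as accepting a date forces you to anticipate every convention a user might bring with them.

    from datetime import datetime

    # A toy illustration: try the user's input against each convention
    # we thought to anticipate, and give up if none of them fit.
    FORMATS = ["%Y-%m-%d", "%m/%d/%Y", "%d/%m/%Y", "%B %d, %Y", "%d %b %Y"]

    def parse_date(text):
        text = text.strip()
        for fmt in FORMATS:
            try:
                return datetime.strptime(text, fmt)
            except ValueError:
                continue
        raise ValueError("no idea what the user meant by %r" % text)

    print(parse_date("May 1, 2013"))
    print(parse_date("2013-05-01"))
    # And "01/05/2013" still parses as whichever format happens to be
    # listed first: January 5th to an American, May 1st to a European.
    # The code can't read the user's mind.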
Then I had a transformative experience during a one-week stay at the University of Waterloo, where we spent all our time in front of teletypes programming in APL. Boy, those teletypes were noisy, especially in a room with twenty of them going at it full blast. When I went to bed my ears would still be ringing. Yes, APL was my first programming language, and frankly the trauma hasn't worn off yet.
When I started university in 1980 at McGill in Montreal, there were still punch-card machines and card readers at the computer center and its satellite centers. However, pretty much everyone at the school had a "computer code", i.e. an account on the time-sharing system called MUSIC (see Wikipedia). It was a lovely system, and you could run commands interactively. It came with Fortran, SNOBOL, Pascal, Lisp, IBM 360 assembler, and, of course, Adventure!
I spent many hours playing Adventure, and of course mapped the whole cave in detail.
At McGill, there were two kinds of terminals: video terminals for the lucky people, and paper terminals from Digital Equipment called DECwriters for everyone else. The former had WYSIWYG code editors, on which you could prepare your source code and, when ready, "submit" your program to be compiled, run, and printed. You would then walk down to the basement of Burnside Hall, where a surly operator would hand you your printout. That's when you learned that you had a typo in your code. Needless to say, we were told over and over to WRITE YOUR PROGRAMS OUT before typing them in, but nobody listened then, and nobody listens now either. In any case, the turnaround time from "submit" to picking up your printout was on the order of 15 minutes, not overnight as in TFA.
WATFIV Fortran was designed for fast compilation. Your code wouldn't be optimized, but that was OK, because as a student the majority of your runs ended in compilation errors. Typically you ran your fully working program only once or twice, after dozens of botched compiles.
All in all, it was a great experience. What I appreciate the most, in retrospect, was that CS students were required to learn IBM assembler. Higher level languages don't fully make sense until you know what's going on at the CPU level. I still can't fathom how an intro course in Java can give you any true knowledge of how computers work.
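In the same spirit, and though it's Python bytecode rather than IBM 360 assembler, the standard-library dis module gives a similar machine's-eye view; this is just a small illustration of the idea:

    import dis

    def average(xs):
        return sum(xs) / len(xs)

    # Disassembling even a one-liner makes the hidden machinery explicit:
    # every name lookup, function call, and division is its own instruction.
    dis.dis(average)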
Are you referring to this video by James Duane? "Don't Talk to the Police".
Highly recommended, assuming you can survive Duane's machine-gun rate of speech!
Prozac is the most obvious fabrication. In many studies its effect can't be distinguished from placebo (http://en.wikipedia.org/wiki/Fluoxetine#cite_note-78). Of course, if you ARE on Prozac and stop taking it suddenly, you will likely get depressed (and sometimes suicidal), but those are most likely withdrawal symptoms, not a manifestation of an underlying condition.
I suspect that pharmaceutical researchers can't think of a way to mask the symptoms of Asperger's, so there is no need to list it in the DSM. Call me cynical if you like.