Analog computing is more of a '60s thing...
Solar power was providing enough lift to keep his butt (and the overnight batteries) aloft - that's enough to be more impressive than a balloon.
Idiot, yes. Blowhard, yes. Wrong about Moore's Law being an infinite exponential and not an S-Curve, yes.
Also right about how surprisingly quickly some things can come to pass. Looking at human history from 5000 years ago until 250 years ago, the progress of the last 100 years would have been unimaginable. Literally: stories like Jules Verne's didn't start to appear until things like steam engines were running around.
When self improving AI does emerge, some things could be changing very rapidly - Hollywood movie overnight kind of rapidly. Or we might just plod along for another 5000 years making incremental improvements and minor discoveries.
I was kind of hoping for a Saturnian style ring, of mostly pure iron... the devil is in the details.
As I understand it, the engine will be used for other missions, just none of them manned.
Iron-rich asteroid? Just have to tip it out of solar orbit and get a lucky aerobrake in the Venusian atmosphere; that should melt and purify the iron nicely in the process...
It's bad planning, to have a "human rated" design that will only be used once. If the engine would find other human carrying missions in the future, it wouldn't be so bad, but that does not seem to be the case.
The bit of wire wrapped around the push-mower handle that keeps the engine brake from engaging the moment I release my grip on the handle.
Oh, it's for the safety of the children! Think of the puppies! No. Just, No.
Agreed - the early days of neural nets surprised a lot of people with their ability to "learn" optical character recognition. Then they kind of fizzled out. Numenta is doing some interesting stuff with video processing, but it's a slow grinding progress, not an explosive revolution like Moore's Law. The "nets" can be massive for low cost now, but there are apparently not many more orders of magnitude to be gotten from them.
Not that this particular theory is going anywhere, but a fun one I've heard is a sort of "survival of the fittest pattern replication" where an "event" is encoded as a repeating pattern of discharges. Multiple (hundreds, even thousands of) groups of neurons take up variations of the theme, and the dominant pattern(s) are the ones passed on to the next level of processing, where a similar process occurs - transforming the previous level's patterns into new patterns on the current level. It's a departure from the resource-starved ideas of early digital processing, where you have one fragile chance to get the right answer processed through the system. Instead, hordes of parallel processing units reach consensus, with lots of potential outcomes considered and discarded in favor of the eventual result.
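The competition idea above can be sketched as a toy simulation - this is entirely made up for illustration (the group count, noise rate, and bit-pattern encoding are all arbitrary assumptions), not a model from the neuroscience literature. Many groups each hold a noisy variant of an input pattern, and the variant that best matches the input is the one handed to the next level:

```python
import random

random.seed(1)

def compete(input_pattern, n_groups=100, noise=0.2):
    """Each of n_groups holds a noisy copy of the pattern; best match wins."""
    def variant(p):
        # each group randomly flips some bits of the pattern it "heard"
        return [b if random.random() > noise else 1 - b for b in p]
    def fitness(v):
        # fitness = how many bits agree with the actual input
        return sum(1 for a, b in zip(v, input_pattern) if a == b)
    variants = [variant(input_pattern) for _ in range(n_groups)]
    return max(variants, key=fitness)

pattern = [1, 0, 1, 1, 0, 0, 1, 0]
winner = compete(pattern)
print(winner)
```

With 100 competing groups, the odds are overwhelming that at least one variant survives undistorted, so the consensus winner is robust even though every individual copy is unreliable - which is the point of the "hordes reach consensus" framing.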
Some kind of departure, as different as "fittest patterns" is from AND and OR gates, is likely to be the next big step.
I'd say that the neural net model is "good enough" for maze navigation performance - if the net is programmed with "cat avoidance" then it, too, can deal with the cat, for purposes of navigation. For purposes of optically recognizing "cat" and evolving to learn why cats should be avoided, that's probably a stretch for neural nets.
In the end, I would expect AI to do things differently, otherwise we've just replicated the wetware. Doing things using the same internal mechanisms might be a shortcut to achieving "artificial cognition", but I think there's better odds of finding a novel approach first.
Are we at ground zero? Absolutely not. But 40 years in, I don't think we're even 10% along the road to human-level artificial intelligence. Still - study your Kurzweil - the next 90% could come in much less than 40 years.
My point in this area would be: does our knowledge allow us to generate desired outcomes in novel subjects with any level of certainty?
For instance: we know with great certainty that you can stimulate the optic nerve and cause the subject to "see things" (and also: not see things that are really there).
On the other hand, with respect to cognition, can we do anything that simulates (reconstructs) a biological cognition system?
Can we learn a maze the way a rat does? I think so. Neural nets with reward and punishment inputs can perform approximately the same.
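The reward-and-punishment maze learning mentioned above is roughly what tabular Q-learning does. Here's a minimal toy sketch - a 1-D corridor rather than a real maze, with all parameters (learning rate, rewards, episode count) made up for illustration, and no claim that this is how a rat actually does it:

```python
import random

# Toy "maze": a 1-D corridor of 6 cells; start at cell 0, reward at cell 5.
N_STATES = 6
ACTIONS = [-1, +1]                    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        # "punishment" of -0.01 per step, "reward" of 1.0 at the goal
        r = 1.0 if s2 == N_STATES - 1 else -0.01
        q[(s, a)] += ALPHA * (r + GAMMA * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# After training, the greedy policy should head right (+1) from every cell
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The net effect is the same "approximately the same performance" claim: the agent starts by blundering around, and the reward/punishment signal gradually shapes a policy that runs the corridor directly.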
Can we process language the way a person does? I think not. We've had the ELIZA program for decades, and doubtless there are many around today that are orders of magnitude more complex, but can they learn, adapt, and handle novel situations the way (some) people can?
Awesome, now that we have another 50 years of technological advance, maybe it's time to revisit the issue...
Instead of having a special case every few years, how about going ahead and making a millisecond of adjustment every day, as needed? The adjustments could start at 0 or 1 millisecond, and as the oceans slosh us ever slower, we could start making 1 or 2 millisecond adjustments every day at midnight.
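As a back-of-the-envelope check on whether 1-2 ms per day is even the right order of magnitude: roughly 27 leap seconds were inserted between 1972 and 2017 (figures approximate), which works out as follows:

```python
# Rough historical leap-second rate, spread over the whole period.
# The counts here are approximate, for illustration only.
leap_seconds = 27
years = 2017 - 1972
drift_ms_per_day = leap_seconds * 1000 / (years * 365.25)
print(round(drift_ms_per_day, 2))  # about 1.64 ms/day
```

So the average drift is a bit over 1.5 ms/day - exactly the territory where alternating 1 and 2 millisecond daily adjustments would absorb it, with no 61-second minute ever needed.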
Would also keep the stars better aligned to the official time.
It doesn't even have to target humans.... how about a mutation that makes Hitchcock's Birds a reality, on demand when they smell a certain chemical that is odorless to humans....