Users can still log in without issue. However, they arrive at empty inboxes: no custom folders, no messages in "Sent" or "Deleted," nothing. As one might expect, the abruptness (and unexpectedness) of the purge has left some of Hotmail's long-time users a bit in the dark.
It's much easier to teach a biologist how to program than it is to teach a programmer about emergence.
The problem with AI is that it's been done by programmers. Cognitive Science should have stayed a Life Science, the way it was before 1955.
The glossy screens on MacBooks are no problem at all as long as you are wearing a black turtleneck.
Yes, I also instantly saw the correct answer before reading the article. In fact, I even missed the sentence about not using diodes.
I knew about voltage drops. It was the single fact that the problem had been solved (patented) that made me think about it again in a different light.
It's interesting that when the first solution that comes to mind (diodes) doesn't work (because of the voltage drop), it blocks our problem-solving process from finding other solutions.
I deal with this daily in my work; I try to come up with Holistic solutions to problems that have been traditionally (and in vain) attacked using the much more common Reductionist (model based) methods.
We're working at one level below that. How do humans learn *anything* at all? Language is a special case of that.
The lower you go, the easier the problem gets. But you need to make sure that when you are measuring your progress, you really are measuring the right thing.
> But we know that after a certain critical period it becomes functionally impossible for a human being to learn language
This is simply incorrect. I know five languages. I don't see a problem learning another one as long as I can learn *anything*.
If you want to form an opinion about my competence, perhaps you should watch a video or two of the ones I've posted on the web, or check out http://artificial-intuition.com/
- Monica
What if some non-academic AI researcher said the Singularity was unlikely and provided an argument to support that? Not a proof, mind you, but something roughly at the level of the pro-singularity arguments. And made arguments that AI has failed because it set its sights too high (logical perfection in an imperfect, complex world)? Would they still be "fringe"? In a 60-year-old discipline, have we even made enough progress to designate what the core is with enough clarity that it won't be completely redefined in the next decade?
10% of the AI community is in direct opposition to the other 90%. This is not well known except by the people in this 10%. We're known as "The Subsymbolicists". We're not "fringe", just marginalized.
Anderson's rule: "All good books about AI are out of print"
This is because people only buy books about the 90%.
Oh BTW, I do make these arguments towards the end of the video named "A new direction in AI research" at http://videos.syntience.com/
- Monica Anderson
Actually, axilmar hit the nail on the head. There's more than one nail here, but that's not bad at all.
The next nail is "What patterns are *salient*". This is the billion dollar question in AI.
We hit *that* nail around 2003. In fact, we're several nails further along...
I'm part of the crowd that thinks AI is much simpler than most people think. It's still not trivial.
But there's a *big* difference between a project to "tell the computer everything about the world in first-order predicate calculus" and "figuring out how learning in the brain might work, implementing that in a computer to test it, figuring out what might be wrong, and repeating the process until we have something that is capable of learning anything we tell it roughly the way humans do".
The first approach is doomed to fail for reasons explained on my website below, including the simple
reason that the everyday mundane world is more complex than we think. Any ontology or semantic
web based project is thus doomed to fail.
The latter is "only" hampered by the fact that we haven't tried it yet. Attacking it from the
neuroscience angle is one way, but it's actually *easier* to attack it from the Epistemology
angle. "How is it possible to learn *anything*? What is it possible to learn at all? How *might*
the brain go about doing what it does? How could we duplicate it in a computer to see if
the theory is correct?" Repeat until we succeed.
A million person-years have been wasted on 20th Century style AI. We have so far put 10 person-years into 21st Century AI. To wit:
I (and my company) have been working on our idea of how this is supposed to be done since 2001, and though we have some interesting results and many insights, we haven't yet been able to demonstrate effects stronger than what you can do with regular programming. We have good benchmarks, but we're currently at 80-85% on tasks where regular programming can do 95% and humans 99.99%, though we're slowly improving. And as opposed to *many* AI projects, we are writing code, running experiments daily (and overnight), have built our own extra-large computers (32 GB RAM Linux systems), etc. We are attempting to learn human languages (any language) by unsupervised training: simply reading books (Jane Austen, in our case). We have good semantic-level reading comprehension tests that can be completely automated and that work at *very low levels of IQ and reading comprehension*.
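To give a flavor of what "unsupervised training from raw text plus a fully automated comprehension test" means in the simplest possible terms, here is a toy sketch (this has nothing to do with our actual technology): a bigram model learned from unlabeled text, scored with a cloze-style fill-in-the-last-word test.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Learn word-pair counts from raw text -- no labels, no grammar rules."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def predict_next(model, word):
    """Guess the most frequent next word, or None if `word` was never seen."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

def cloze_score(model, phrases):
    """Automated comprehension proxy: hide the last word of each phrase
    and count how often the model predicts it correctly."""
    correct = sum(
        1 for p in phrases
        if predict_next(model, p.lower().split()[-2]) == p.lower().split()[-1]
    )
    return correct / len(phrases)

corpus = ("it is a truth universally acknowledged that a single man "
          "in possession of a good fortune must be in want of a wife")
model = train_bigrams(corpus)
print(predict_next(model, "good"))  # fortune
print(cloze_score(model, ["a good fortune", "in want of a wife"]))  # 0.5
```

The point of the sketch is the shape of the loop, not the model: train on nothing but raw text, then grade comprehension entirely automatically, so experiments can run overnight without a human in the loop.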
We've funded all of this work ourselves and hope to leverage this effort (once we get it to work) into a market-leading position on various semantic technologies, including web search support technologies, true semantic search, and superior speech understanding. Ask me for a Use Cases and Markets document.
When comparing the ideas in my company to those of almost all other AI research, *including TFA*, I'd like to think that *we* at least got the Most Significant Bit correct. And we feel sad that most people entering the field of AI today are being taught the wrong things, perpetuating the old myths and mistakes and thereby guaranteeing we won't get decent AI any time soon.
I have an unpublished article that I'm trying to get into some mainstream magazine at
http://syntience.com/AIResearchInThe21stCentury.pdf - feel free to peek at it.
It's not a direct response to the MIT article, but it argues from a different angle and aims at roughly the same audience (you!).
If you want more info beyond that, then check out our other online resources:
Theory and motivational site (2 years old) : http://artificial-intuition.com/
Video site (latest insights, more detailed info) http://videos.syntience.com/ (or go to Vimeo.com and search for "syntience") Axilmar will enjoy "Models vs. Patterns" video.
Blog: http://monicasmind.com/
Corporate: http://syntience.com/
Meetup: http://ai-meetup.org/ (great fun twice a month if you live in the SF Bay Area; if you want to talk to me in person, this is an easy way)
Investor inquiries welcome.
- Monica Anderson
CEO, Syntience Inc.
Living on Earth may be expensive, but it includes an annual free trip around the Sun.