OK, that's the conceit of NNs in a nutshell: "just like a biological brain", as you said. To me that's like saying a camera is like an eye. The brain is more than just neurons firing over synapses and reinforcing the ability to communicate across those synapses. For example, nitric oxide diffuses through brain tissue and is used in signalling. There are other things like that going on.
Meh. So there are some additional signalling mechanisms. Are those actually essential? It seems unlikely to me, but they could certainly be added if they are.
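For anyone unfamiliar with just how thin the "like a brain" analogy is: the "neuron" in an artificial NN is nothing but a weighted sum of its inputs squashed through a nonlinearity, and "reinforcing a synapse" is just nudging one of the weights. A minimal sketch in plain Python (no libraries, toy numbers):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': weighted sum of inputs, squashed by a sigmoid."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid nonlinearity

# A large positive weight on an active input pushes the output toward 1
# ("firing"), much as a reinforced synapse makes a downstream neuron more
# likely to fire. That's the whole extent of the biological metaphor.
output = neuron([1.0, 0.0], [4.0, -2.0], -1.0)
```

Everything else in the brain (diffuse chemical signalling included) is simply absent from this model, which is the parent's point; whether any of it matters for building useful systems is the question.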
It may be the start of a good way to model the actual working of the brain.
Irrelevant. Oh, I suppose it may someday be relevant to neuroscientists whose goal is to understand the brain rather than to create useful systems. But for the people interested in being able to create automated systems that can be taught to make complex decisions effectively, what really matters is that it seems to work very well. Sure, the fact that we don't understand how they work means they may occasionally do insane things, but that's also true of complex decision-making algorithms we do understand (or think we do). And it's also true of living brains.
AI has a very, very long and ignoble history of overhyping its wares, dating back to the '60s, then the '80s, then the '90s... oh, fuck it: every 10 or 15 years.
You say that as though everyone here isn't fully aware of it. But it's obvious, and it's common. It's in no way particular to AI. People are tremendously bad in general at predicting future technology for the -- rather obvious, actually -- simple reason that future technology will be built based on knowledge that we don't yet possess. You can't accurately predict the results of applying knowledge you don't yet have.
The idea, which a lot of naive people believe, that we're anywhere close to "the Singularity", anywhere close at all, is just not true.
You're wrong. And so are they.
The truth is that we have no idea how close we are to that point, and really won't have any idea until we're there, or until we prove that it's impossible.
However, none of that has anything to do with the topic at hand. Neural networks (biological or electronic) are almost certainly not the only way to construct the information flows underlying intelligence. It's also perfectly possible that our current NN models are inadequate for producing intelligence. So what? There are huge numbers of tasks for which we don't need general intelligence. All we need is a good automated decision-maker which makes the right decisions and doesn't cost too much to build.
That is where NNs are awesome. How many engineer-weeks of effort would it take to hand-code an algorithm that, fed only the raw pixel data from the screen, can play a video game effectively? With Google DeepMind's NN, it takes about a week of computer time, with little or no human involvement at all.
Neural networks may or may not be useful in reaching toward general AI. But they absolutely are useful tools, enabling us to build useful automated systems now.
I give you 100 to 1 that their self driving cars FAIL as a general mode of transportation by the year 2050.
I'll take that action. How much will you put on it? Let's define the terms and work out the logistics. Also, I'd be fine with pulling in that year by a couple of decades. I'll bet that self-driving cars on Google's model (full self-driving, highways and in town, with vanishingly few situations the car can't handle) will SUCCEED by 2030.