I think it's more selection bias on the part of the news media/sources.
There haven't been that many stories, but a few of them have gotten a large amount of publicity, often repeatedly; that repetition is the selection bias at work.
This doesn't prove it's an incorrect assumption that Tesla drivers try to get the car to drive itself. Just a week ago I saw an electric wheelchair jaywalking diagonally across a four-lane boulevard (official speed limit 35 mph) while its driver was busily texting on their phone. But notice that nobody EVER claimed THAT was safe behavior. So if anything, that's evidence that renaming the "autopilot" wouldn't help.
I've got to disagree...though not totally. ISTM that overloaded operators need to be marked, rather than eliminated. I once suggested that overloaded operators be enclosed in pipe chars, e.g. |+|, but nearly any mark would do, and it would only be used for operators. I also wanted to allow alternative symbols, names, etc. to be used as operators, but there I ran into the precedence problem.
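For contrast, here's what unmarked overloading looks like today in Python (the `Vec` class is just a made-up illustration, not part of the original suggestion); under the |+| idea, the call site itself would carry the mark:

```python
# Ordinary (unmarked) operator overloading in Python: nothing at the
# call site tells you '+' has been redefined for this type.
class Vec:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):      # silently overloads '+'
        return Vec(self.x + other.x, self.y + other.y)

a = Vec(1, 2)
b = Vec(3, 4)
c = a + b    # under the suggestion above this might read:  c = a |+| b
print(c.x, c.y)
```

The point is that `a + b` reads identically whether it's integer addition or arbitrary user code; a mandatory mark like |+| would make the overload visible at every use site.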
The language that's easy to read is the one you know well. I've used Python enough to think that it's easier to read than C, or often C++, code that does the same thing. C's problem is indirection via multiple levels of pointers and macros. With C++ the problem is that the language as a whole is too large, and I only know parts of it well, though it can include C's problems as well (it doesn't need to, but it can).
I've looked at Nim a couple of times, most recently earlier this month. I didn't get much beyond looking, as I need various libraries as well as the basic language, but it did look interesting. If you only need one or two external libraries it might be worth your while to look at it more deeply than I did.
I really doubt that their code generation averages as fast as decently hand-crafted code, but it may well be a lot faster to write.
You've got your definition, I've got mine. If you don't like mine, let's hear yours. (Mine would include not doing things that are clearly going to leave you in a situation that is worse, from your own evaluation, than the current situation, and which you have reason to know will have that result...unless, of course, all the alternatives would lead to even worse results.)
Well, we know of no fundamental law against it. I would claim that humans are NOT a counter-example existence proof, because to claim we're intelligent flies in the face of a large body of evidence to the contrary. So it may be impossible.
But it's clearly possible to get as close to intelligent as humans are, because there *IS* an existence proof. It may (hah!) require advanced nano-mechanics, but I really, really, doubt it.
That said, it doesn't take that much intelligence to be a threat to human existence. Just being able to issue the necessary commands with the required authority.
That said, while I do consider AIs to be an existential threat to humanity, I consider them LESS of a threat than the current political system. There's a reasonable chance that they'll have sufficient foresight and strong enough ethical constraints that they'll avoid the problem.
FWIW, we can't understand current deep-learning systems either. Different people understand different parts of them. Some understand fairly large parts, but nobody understands the complete program.
FWIW, that was even true with Sargon 40 years ago, and that wasn't an AI in any reasonable sense. It was basically an alpha-beta pruner and an evaluation function. And I understood a LOT of it, but by no means all. (The source code was printed as a book for the Apple ][, but it naturally didn't include the system routines, etc.)
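For anyone who hasn't seen one, an alpha-beta pruner of the kind Sargon used fits in a few lines. This is my own toy sketch in Python over a nested-list game tree (leaves are evaluation scores), not Sargon's actual code, which was assembly:

```python
# Minimal alpha-beta pruning: leaves are static evaluation scores,
# interior nodes are lists of children; players alternate max/min.
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if not isinstance(node, list):          # leaf: return its evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cutoff: prune the rest
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:               # alpha cutoff
                break
        return value

# Small example tree; its minimax value is 6.
tree = [[[5, 6], [7, 4, 5]], [[3]]]
print(alphabeta(tree))  # 6
```

The pruning is what made programs like that playable on 1970s hardware: whole subtrees are skipped once they provably can't affect the result, while the answer stays identical to plain minimax.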
People have limits on the complexity that they can understand. It varies slightly between people, but rarely exceeds certain bounds. I tend to call this our "stack depth", though that's clearly a poor analogy. But "working memory" can be measured (inexactly, it's true), and any idea too complex to be held in working memory can't be understood. We handle this by breaking it up into communicating modules, but the communication puts limits on the kinds of ideas we can handle. This is why, when programming in parallel, I tend to use a simplified message-passing actor model. But some things can't be handled that way. If you doubt it, try to imagine (visualize) a rotating tesseract. I have trouble even with a simple general quadratic curve, and need to solve it and plot it out unless it's in one of a very few special forms.
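By "simplified message-passing actor model" I mean something like the following Python sketch (my own illustration with the standard library, not any particular framework): each actor owns a queue and a thread, and the ONLY way to affect it is to send it a message.

```python
# A tiny message-passing actor: one thread, one inbox queue,
# no shared mutable state between actors.
import threading
import queue

class Actor:
    def __init__(self, handler):
        self.inbox = queue.Queue()
        self.handler = handler
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        self.inbox.put(msg)

    def stop(self):
        self.inbox.put(None)            # poison pill: shut the actor down
        self._thread.join()

    def _run(self):
        while True:
            msg = self.inbox.get()
            if msg is None:
                break
            self.handler(msg)

# Usage: an accumulator actor that collects the numbers it is sent.
received = []
acc = Actor(received.append)
for n in (1, 2, 3):
    acc.send(n)
acc.stop()
print(sum(received))
```

Each actor is small enough to reason about on its own, which is exactly the working-memory trade-off above: you give up some kinds of tightly coupled designs in exchange for pieces a human can actually hold in their head.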
You are making unreasonable assumptions about its motivational basis. Here's a hint: It won't be analogous to any mammal, though it may be able to fake it so as to seem understandable.
That said, it *might* destroy all humans, possibly by causing us to destroy each other. Were I an AI, and had I decided upon that as an intermediate goal, I think I'd proceed by causing the social barriers against biological warfare to be reduced.
They have NOT re-established their reputation as a reputable technology company. That's going to take a LOT of work. Possibly as much as it took to build it in the first place: over the course of a year they destroyed something that had taken decades to build, and then repeated the offense multiple times by doing things like hiring people to put rootkits into their devices, and then offering a "repair" that left you vulnerable to trivial attacks.
It's going to take lots of time and effort to repair their reputation. One good device that isn't yet known to be backdoored isn't going to do the job.
If they were looking at calcium channel opening, then I'd agree with you. They appear to be looking at things from a much more abstract level. And their results aren't proof, but certainly raise reasonable questions.
Whoa, whoa, whoa... back up a bit with the gender mud. I'm pretty sure Obama wants to be called "he".
Doesn't matter. They'll just be calling us all "cucks" since we don't beat our wives or espouse genocide against people of color.
The camera "sees" the user and even knows which user it is seeing. The camera then locks the screen immediately when the user is not present.
How long before the computer "sees" the user and notifies the police that they can pick up their known dissident? I mean, really, given the kind of governance we're about to enter into, this (not to mention Alexa-like audio-surveillance "features") is the last thing I'd want on any equipment in my home.
And no, I don't have anything to hide. But conversely, I also don't use the restroom in the middle of 5th Avenue. Privacy is a thing, even in a world full of morons who think it isn't.
Don't believe the EULA limitations. A lot of them are just there as intimidation. Which terms are enforceable depends on your state, and local laws trump the EULA.
Yes, quite carried away. Your exposition is naive in assuming that people think on the scale you think they do. The failure to respond has been repeated quite a number of times historically.
And I think your timing is off by 50+ years; nothing will happen until people are really starving.
Nothing will likely happen until the 0.1% are starving, by which time it will be too late to do anything. The only reason to even hold out what little hope there is, is that people like the grandparent are at least thinking about, and worrying about, these things. If enough do, then real change can happen. Like the outcry that forced the Republicans to back off (at least for now) gutting the House Ethics committee, when the masses do voice their concern, they are heard. Unfortunately we all feel too weak, and too powerless, to make much noise unless things really hit the fan (by which point it is often too late). This is not an accident, and there are very specific reasons we as citizens are constantly made to feel powerless (hint: it benefits those running the show, on whichever side of the aisle).
If you have a procedure with 10 parameters, you probably missed some.