What keeps your robot from realizing that your food bill cuts into his electricity bill and he would be better off without you?
Communism can work. As soon as people prefer working to being rich.
Someone already did, I'd say...
What's the big difference between not having a job in Detroit and not having a job in San Francisco? Probably that you can afford the rent in Detroit, considering that everyone's fleeing...
Making content free will probably not be a good idea. But it would still beat the current situation.
But either extreme is bad. We have to get back to IP laws that follow the idea behind them: to give people an incentive to create, and to strike a balance between those who create and those who want to enjoy that creation.
Right now, IP law fails in both of these aspects. It does not promote creation, since it allows an author to milk his success forever. Not only that, it allows his heirs to do just the same. And please don't come with the "but a true artist will create even if he is rich already" argument. The same argument can be fielded for abolishing IP laws altogether.
Look at the bright side. It means that the economy is not as badly off as it could be; at least 90% of businesses out there still provide goods and services instead of trying to IP troll.
Well, pretty much every story I know that deals with the "three laws" stresses their shortcomings, be it AIs finding loopholes to abuse or the laws keeping robots from doing exactly what those laws were supposed to make them do.
I'd say logically.
When is IP important to you, as a business? Patents matter if you hold them and are heavily invested in R&D, and copyright is something you care about strongly if you're creating content, be it music, movies or software. Otherwise, at best it's uninteresting to you. At worst, it's a headache, since you always have to watch out whether or not something trivial you do steps on someone's patent toes.
What I am arguing is that these systems are good in their one single field and unable to learn anything outside of them. If you want to call that intelligence, so be it, but we're still a far cry from an AI that can actually pose a threat to the point where its "morality" starts to come into play.
Some obscure browser that only runs on an obsolete OS.
It suffers from one basic, major flaw. The same flaw the MS antivirus suite suffers from: it is the one thing most people have, and thus the one thing no malware author can afford to ignore.
Malware is the biggest reason to avoid IE like the plague. No, not because it's more susceptible. I don't even want to discuss whether it is more or less secure than $obscure_browser. It is simply the bigger target. It would make more sense for a malware writer to try to infect via a timing hole that lets one out of ten attempts succeed (because IE was so superspecialawesomely secured that this remained the only way to use it for an infection) than to write one that succeeds every single time against some obscure, unknown browser via a trivial exploit that even an idiot could find and write attack code for. He'd STILL infect way more people.
But even if you spray paint it gold, it still stinks.
That's an AI in the same sense that an amoeba is an animal. Yes, technically it qualifies, but it still makes a rather poor pet.
These AIs are more like expert systems that are great at doing their job, but they are not really able to learn anything outside their field of expertise. That's like calling an idiot savant a genius. Yes, he can do things in his tiny field that are astonishing, but he is absolutely unable to apply this to anything else.
The same applies to those "AIs". And as long as AIs are like that, we need not worry about their morality. They may be many things, but not really intelligent in the human sense.
Well, maybe next time they should use animals with a lifespan longer than a gnat's. It's not like you get to die of lung cancer at 25 when you started smoking at 20. Usually, people tend to die of it around age 50 or so.
Let's be honest here: these "laws" were part of a fictional universe. They were never endorsed by any institution that has any kind of impact on actual legislation. It's not even something the UN seriously discussed, let alone called for.
Why should any government bend itself to the limits imposed by a fiction writer? Yes, it would be very nice and very sane to limit the abilities of AIs, especially if you don't plan to make them "moral", in the sense of imposing some other limits that keep the AI from realizing that we're at the very best superfluous, and at worst a source of irritation.
What intelligence without a shred of morality looks like can easily be seen in corporations. They are already the very epitome of intelligence without morals (because everyone can justify putting his mind behind the corporation while shifting the blame for anything morally questionable onto circumstances or "someone else"). Now imagine all that, but also efficient, and without everyone's primary intent being personal gain rather than the corporation's interest.