Clarke did very little writing on robot brains.
Um, I'll have to assume that you weren't around for April, 1968, when the leading AI in popular culture for a long, long time was introduced in a Kubrick and Clarke screenplay and what probably should have been attributed as a Clarke and Kubrick novel. And a key element of that screenplay was a priority conflict in the AI.
Well, you've just given up the argument, and have basically agreed that strong AI is impossible.
Not at all. Strong AI is not necessary to the argument. It is perfectly possible for an unconscious machine not considered "strong AI" to act upon Asimov's Laws. They're just rules for a program to act upon.
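To make that concrete, here is a minimal sketch of how a program could act on a priority-ordered rule table with no consciousness or understanding involved. The predicates (`harms_human`, etc.) are hypothetical stand-ins, not anything Asimov specified; the point is only that rule-following is ordinary code.

```python
# Hypothetical sketch: Asimov-style laws as a priority-ordered rule table.
# No "strong AI" required -- an action is simply checked against each rule.

def harms_human(action):
    # Stand-in predicate; a real system would need far more than this.
    return action == "push_person"

def disobeys_order(action, order):
    return order is not None and action != order

def endangers_self(action):
    return action == "self_destruct"

def permitted(action, order=None):
    """Return True if no law, checked in priority order, forbids the action."""
    laws = [
        lambda a: not harms_human(a),            # First Law
        lambda a: not disobeys_order(a, order),  # Second Law
        lambda a: not endangers_self(a),         # Third Law
    ]
    return all(law(action) for law in laws)

print(permitted("push_person"))           # False: vetoed by the First Law
print(permitted("fetch", order="fetch"))  # True: no law violated
```

Nothing in that loop "understands" harm; it evaluates predicates, which is the whole point.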
In addition, it is not necessary for Artificial General Intelligence to be conscious.
Mind is a phenomenon of a healthy living brain and is seen nowhere else.
We have a lot to learn about consciousness yet. But what we have learned so far seems to indicate that consciousness is a story that the brain tells itself, and is not particularly related to how the brain actually works. Descartes' self-referential attempt aside, it would be difficult for any of us to actually prove that we are conscious.
You're approaching it from an anthropomorphic perspective. It's not necessary for a robot to "understand" abstractions any more than it is required to understand mathematics in order to add two numbers. It just applies rules as programmed.
Today, computers can classify people in moving video and apply rules to their actions such as not to approach them. Tomorrow, those rules will be more complex. That is all.
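A hedged sketch of what "classify and apply rules" amounts to in practice: detections from an upstream video classifier feed a plain rule check. The `Detection` type, labels, and the two-meter standoff are illustrative assumptions, not any particular system's API.

```python
# Hypothetical sketch: applying a "do not approach people" rule to
# detections assumed to come from an upstream video classifier.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # classifier output, e.g. "person"
    distance_m: float   # estimated distance from the robot, in meters

MIN_STANDOFF_M = 2.0    # assumed safety margin

def next_move(detections):
    """Stop if any person is inside the standoff distance, else advance."""
    for d in detections:
        if d.label == "person" and d.distance_m < MIN_STANDOFF_M:
            return "stop"
    return "advance"

print(next_move([Detection("person", 1.2)]))  # stop
print(next_move([Detection("tree", 0.5)]))    # advance: not a person
```

"More complex rules tomorrow" just means a longer version of that loop, not a different kind of thing.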
Agreed that a Robot is no more a colleague than a screwdriver.
I think you're wrong about Asimov, though. It's obvious that to write about theoretical concerns of future technology, the author must proceed without knowing how to actually implement the technology, but may be able to say that it's theoretically possible. There is no shortage of good, predictive science fiction written when we had no idea how to achieve the technology portrayed. For example, Clarke's orbital satellites were steam-powered. Steam is indeed an efficient way to harness solar power if you have a good way to radiate the waste heat, but we ended up using photovoltaics. But Clarke was on solid ground regarding the theoretical possibility of such things.
VMWare is a GPL violator and got off of its most recent case on a technicality. Any Linux developer can restart the case.
The Linux foundation is sort of like loggers who claim to speak for the trees. Their main task is to facilitate the exploitation of Open Source rather than contribution to it.
Bitcoins aren't really worth anything. There are just some people who have convinced themselves that they are worth something. You can't really rely on such people continuing their belief.
I view streaming content on a variety of devices off of a perfectly acceptable cable internet connection and I still see the compression, but the worst of it is seen on the "main" family TV. Netflix offers the best experience (followed by Amazon Video, followed by the truly horrific Google Play), but it's still there.
I fully admit that I am not a hardcore video guy and not obsessed with tweaking a bunch of TV settings so there is indeed room to make adjustments. That said, I'm very happy with up-scaled DVDs of the same movies on the same TV. Adjusting contrast/brightness would only force the shadows even deeper for disk-based video and that's not an acceptable trade-off.
I should clarify my previous statement above. When I wrote "Visible gradients ruin every single scene always" I didn't mean to imply I'm seeing gradients all the time. I'm only seeing them in scenes containing large percentages of darkness/black.
Call it anything you want: "Netflix uses bagels to compress video" I don't really care. I just wish they would take a closer look at the darkest parts of a scene and stop compressing the hell out of it. Visible gradients ruin every single scene always.
"Marriage is low down, but you spend the rest of your life paying for it." -- Baskins