Link to Original Source
This is probably a function of the age of corporate executives, i.e., older folks who don't actually browse the web very much. Advertising around unmoderated comment sections is like placing ads in bathroom stalls. It's done, and it can be done successfully, but generally for local businesses and only in certain categories.
Clarke did very little writing on robot brains.
Um, I'll have to assume that you weren't around in April 1968, when the leading AI in popular culture for a long, long time was introduced in a Kubrick and Clarke screenplay, and in what probably should have been attributed as a Clarke and Kubrick novel. And a key element of that screenplay was a priority conflict in the AI.
Well, you've just given up the argument, and have basically agreed that strong AI is impossible.
Not at all. Strong AI is not necessary to the argument. It is perfectly possible for an unconscious machine not considered "strong AI" to act upon Asimov's Laws. They're just rules for a program to act upon.
In addition, it is not necessary for Artificial General Intelligence to be conscious.
Mind is a phenomenon of the healthy living brain and is seen nowhere else.
We still have a lot to learn about consciousness. But what we have learned so far seems to indicate that consciousness is a story that the brain tells itself, and is not particularly related to how the brain actually works. Descartes' self-referential attempt aside, it would be difficult for any of us to actually prove that we are conscious.
You're approaching it from an anthropomorphic perspective. It's no more necessary for a robot to "understand" abstractions than it is for a robot to understand mathematics in order to add two numbers. It just applies rules as programmed.
Today, computers can classify people in moving video and apply rules to their actions such as not to approach them. Tomorrow, those rules will be more complex. That is all.
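The classify-and-apply-rules pipeline described above can be sketched in a few lines. Everything here is hypothetical for illustration — the Detection type, the "person" label, and the 2-meter threshold are invented, and the point is that no "understanding" is involved, only mechanical rule application:

```python
# Hypothetical sketch: apply a hand-written rule to the output of a
# video-frame classifier. The classifier itself is assumed to exist
# elsewhere; here we only show the rule-application step.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "person", as assigned by a trained classifier
    distance_m: float   # estimated range to the detected object

MIN_APPROACH_M = 2.0    # rule parameter; the robot does not "understand" it

def next_action(detections):
    """Return a motion command by mechanically applying rules."""
    for d in detections:
        if d.label == "person" and d.distance_m < MIN_APPROACH_M:
            return "stop"       # rule: do not approach people
    return "continue"

print(next_action([Detection("person", 1.5)]))  # stop
print(next_action([Detection("crate", 0.5)]))   # continue
```

Tomorrow's "more complex rules" are just more clauses in `next_action`; the structure stays the same.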
Agreed that a Robot is no more a colleague than a screwdriver.
I think you're wrong about Asimov, though. It's obvious that to write about theoretical concerns of future technology, the author must proceed without knowing how to actually implement the technology, but may be able to say that it's theoretically possible. There is no shortage of good, predictive science fiction written when we had no idea how to achieve the technology portrayed. For example, Clarke's orbital satellites were steam-powered. Steam is indeed an efficient way to harness solar power if you have a good way to radiate the waste heat, but we ended up using photovoltaic. But Clarke was on solid ground regarding the theoretical possibility of such things.
VMware is a GPL violator and got off in its most recent case on a technicality. Any Linux developer can restart the case.
The Linux Foundation is sort of like loggers who claim to speak for the trees. Their main task is to facilitate the exploitation of Open Source rather than contribution to it.
Fits real well for tools that don't have your particular preferences baked in...
Every time I need to use whatever editor happens to be available on whatever platform I need to maintain on whatever day I maintain it, those damn tab settings screw over formatting.
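One common mitigation (assuming the editors in question support it, which most modern ones do) is to check an EditorConfig file into the repository, so indentation follows the project rather than whatever the editor happens to have baked in. The values below are purely illustrative, not a recommendation:

```ini
# .editorconfig — sketch; the widths chosen here are illustrative
root = true

[*]
indent_style = space
indent_size = 4
end_of_line = lf
trim_trailing_whitespace = true

# Makefiles require hard tabs regardless of preference
[Makefile]
indent_style = tab
```

This only helps on editors that honor the file, of course; the random tool on a random box may still screw you over.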
Note how that last comment refers to "the average cloud-ready server"...
NO competent large scale developer would ever even think in terms of a "cloud-ready server"! That's exactly what I meant by technological refactoring. It IS happening, but we're not bothering to notice it. (Other than some fat-fingering maintenance at Amazon last week.) We have uptime expectations, performance expectations, that were impossible a few years ago.
As younger generations of developers move in to replace older ones, the loss of the implicit and limiting assumptions of the older ones will allow for newer ways of thinking about the problem space. That is where the stepwise improvements will come from, just as they have been arriving all along.
I remember those days too... No stack, non-reentrant architecture, insufficient resources to emulate it in software (no base-indexed addressing like the IBM 360, for example). In fairness, the sorts of things we did with -8's were simple enough that the lack of resources was an acceptable tradeoff.
In response to the endless comments about bloated software, etc., expectations have increased either in step with, or perhaps ahead of, capacity improvements due to Moore's law. Being old enough to remember how much time was spent on a given "capability" in 1973, the godlike power granted to developers by a $10 pi-Zero-W, $35 Pi (quad 64 bit? way past Sci-Fi...) or Odroid is a wonder to behold. The average cloud-ready server is multiple orders of magnitude more powerful yet.
I would argue that we have no real sense of how dramatically today's world differs from those days, even those of us who were there. The word "inconceivable" just doesn't reach far enough into fantasy to compare with normal expectations. It is unsurprising that some deep technological refactoring is in order given the orders of magnitude differences between then and now.
There has been some paradigm-shifting between then and now, but more is needed to really take advantage of current technology. Until then improvements will look a bit anemic. I wonder whether I'll live long enough to see that shift.... It promises to be *very* interesting.
Bitcoins aren't really worth anything. There are just some people who have convinced themselves that they are worth something. You can't really rely on such people continuing their belief.
Man is the best computer we can put aboard a spacecraft ... and the only one that can be mass produced with unskilled labor. -- Wernher von Braun