Link to Original Source
Looks like they're trying to head off a negative news cycle sparked by this tweet from the owner of No Starch Press:
"Images of counterfeit copies of Python for Kids being sold on Amazon. Legit copies are thicker, color, layflat binding, nicer paper. @amazon"
Also see discussion on HN:
Luckily he won't have to. The latest diff patch slated for April 1 should fix over 72% of those. It weighs in at a mere 73GB and is considered essential by Microsoft because an exec's husband wrote portions of it for the Bob project. Awfully caring of Microsoft to help out users like that.
There is one real upside to this. Microsoft, as you know, only puts out small, efficient updates in the minimal needed package sizes. This should be a great comfort to users on metered connections, who are only being lovingly graced with the minimum needed number of bytes. Can you imagine if Microsoft was one of those companies that pushed out near-daily 100+MB behemoths to update a spelling error in Notepad's FAQ? Luckily they don't do this, and we all win!
Note: Yes this is sarcasm. If you didn't get that by the 19th word, go play with some tiles.
Clarke did very little writing on robot brains.
Um, I'll have to assume that you weren't around for April 1968, when the leading AI in popular culture for a long, long time was introduced in a Kubrick and Clarke screenplay and what probably should have been attributed as a Clarke and Kubrick novel. And a key element of that screenplay was a priority conflict in the AI.
Well, you've just given up the argument, and have basically agreed that strong AI is impossible
Not at all. Strong AI is not necessary to the argument. It is perfectly possible for an unconscious machine not considered "strong AI" to act upon Asimov's Laws. They're just rules for a program to act upon.
In addition, it is not necessary for Artificial General Intelligence to be conscious.
Mind is a phenomenon of a healthy living brain and is seen nowhere else.
We have a lot to learn about consciousness yet. But what we have learned so far seems to indicate that consciousness is a story that the brain tells itself, and is not particularly related to how the brain actually works. Descartes' self-referential attempt aside, it would be difficult for any of us to actually prove that we are conscious.
You're approaching it from an anthropomorphic perspective. It's not necessary for a robot to "understand" abstractions any more than they are required to understand mathematics in order to add two numbers. They just apply rules as programmed.
Today, computers can classify people in moving video and apply rules to their actions such as not to approach them. Tomorrow, those rules will be more complex. That is all.
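The point about applying rules to classifier output can be sketched in a few lines. This is a minimal, hypothetical example (the `Detection` type, labels, and distance threshold are all invented for illustration): the program doesn't "understand" people, it just matches rules against labels a classifier produced.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One object reported by a (hypothetical) video classifier."""
    label: str        # e.g. "person", "vehicle"
    distance_m: float # estimated distance from the robot

def plan_action(detections):
    """Apply hard-coded rules to classifier output.

    Rule: if any detected person is closer than 2 m, stop.
    No comprehension involved -- just pattern matching on labels.
    """
    for d in detections:
        if d.label == "person" and d.distance_m < 2.0:
            return "stop"
    return "proceed"

print(plan_action([Detection("person", 1.5)]))   # -> stop
print(plan_action([Detection("vehicle", 1.0)]))  # -> proceed
```

Tomorrow's rules would be longer and the classifier better, but the structure is the same: labels in, rules applied, action out.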
Agreed that a Robot is no more a colleague than a screwdriver.
I think you're wrong about Asimov, though. It's obvious that to write about theoretical concerns of future technology, the author must proceed without knowing how to actually implement the technology, but may be able to say that it's theoretically possible. There is no shortage of good, predictive science fiction written when we had no idea how to achieve the technology portrayed. For example, Clarke's orbital satellites were steam-powered. Steam is indeed an efficient way to harness solar power if you have a good way to radiate the waste heat, but we ended up using photovoltaic. But Clarke was on solid ground regarding the theoretical possibility of such things.
VMWare is a GPL violator and got off of its most recent case on a technicality. Any Linux developer can restart the case.
The Linux foundation is sort of like loggers who claim to speak for the trees. Their main task is to facilitate the exploitation of Open Source rather than contribution to it.
Bitcoins aren't really worth anything. There are just some people who have convinced themselves that they are worth something. You can't really rely on such people continuing their belief.