They probably won't use Oreo, because that name is a trademark. (You could argue that this is another area of business... but better to avoid legal wrangles.) Orangeade would probably be safe, but it's not similar to marshmallow or nougat. I suppose they could just use orange. (That would let them use quince for "q". But I still don't see how they'd handle "x".)
OK. I'll say color management was introduced on the Apple ][.
OTOH, this is a really stupid argument. Names are just names, and two guys named Harry can be quite different, yet both really named Harry. I don't know what the MSWind color management system is like, or what the Android color management system is like, but even if they're quite different they can both reasonably be called by the same name if they do *something* having to do with managing colors.
What you have said is, in theory, correct. There are, however, cases that cast a lot of doubt on how that theory is actually applied in practice. Naturally we probably only notice outliers, but those outliers *do* exist, so there is a definite chance that the judge could take a large number of years to decide that he's "unpersuadable".
Now the standard of "beyond a reasonable doubt" also has a large number of counter-examples, especially when the accusation is something that most people consider horrendous. In fact, if the accusation is vile enough, many people won't even consider whether the evidence has any validity, or whether it could have been faked, or... well, much of any mitigating factor. He is being accused of obstruction of justice, which may or may not be true, and he will be punished extensively before there's ever a trial at which he, presumably, will have a fair defense. Many, however, never receive a fair defense, or even an only moderately poor defense. And even if he's found not guilty he will already have been punished extensively.
I don't know what an ideal way of handling things would be, but don't fantasize that we in the US have something even coming close to something fair to those who are poor or unpopular.
You are making the presumption that he is guilty. This may or may not be true. He might be innocent and actually have forgotten his password.
OTOH, just consider, if he gives them his password, they will be able to implant any evidence they choose onto his disks. Whoops! Forging dates isn't that hard.
Sorry, but that's not true. I know I've forgotten my own password a few times. I had to reinstall and recover from backups. (I really don't like sudo, and I also don't like logging in as root, and a decade or so ago I forgot the root password twice. It was really annoying, but not a real problem as one of the times I'd been thinking about switching distros anyway. The other time was more annoying, but I had the original CDs, and there hadn't been THAT many updates. [I said it was over a decade ago. I think it was while I was using Red Hat Professional edition, before I switched to KRUD Linux.])
That would be destruction of evidence...if done by the accused. After a warrant was served.
OTOH, if it's set to do this after n failed attempts, then it could quite likely be done by inept police, who didn't bother to image the disk before working on it. And then it's the police who destroyed the evidence...if such it was.
That said, I really doubt that applies in this case. But I know *I've* forgotten passwords, and had to reinstall, and recover from backups. So "I've forgotten the password" sounds possible, though unconvincing.
Clarke did very little writing on robot brains.
Um, I'll have to assume that you weren't around for April, 1968, when the leading AI in popular culture for a long, long time was introduced in a Kubrick and Clarke screenplay and what probably should have been attributed as a Clarke and Kubrick novel. And a key element of that screenplay was a priority conflict in the AI.
Well, you've just given up the argument, and have basically agreed that strong AI is impossible.
Not at all. Strong AI is not necessary to the argument. It is perfectly possible for an unconscious machine not considered "strong AI" to act upon Asimov's Laws. They're just rules for a program to act upon.
In addition, it is not necessary for Artificial General Intelligence to be conscious.
Mind is a phenomenon of a healthy living brain and is seen nowhere else.
We have a lot to learn about consciousness yet. But what we have learned so far seems to indicate that consciousness is a story that the brain tells itself, and is not particularly related to how the brain actually works. Descartes' self-referential attempt aside, it would be difficult for any of us to actually prove that we are conscious.
You're approaching it from an anthropomorphic perspective. It's no more necessary for a robot to "understand" abstractions than it's necessary for a computer to understand mathematics in order to add two numbers. It just applies rules as programmed.
Today, computers can classify people in moving video and apply rules to their actions such as not to approach them. Tomorrow, those rules will be more complex. That is all.
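A minimal sketch of the point above, in Python (all names, labels, and thresholds here are hypothetical, not any real robotics API): the "rule" is just a comparison applied to classifier output, with no understanding of what a person is.

```python
# Hypothetical controller: apply a hard-coded "do not approach people" rule
# to detections produced by some upstream classifier. The rule is a plain
# comparison; nothing here "understands" people.

SAFE_DISTANCE_M = 2.0  # assumed safety threshold, in metres

def choose_action(detections):
    """detections: list of (label, distance_m) pairs from a classifier."""
    for label, distance in detections:
        if label == "person" and distance < SAFE_DISTANCE_M:
            return "stop"  # the programmed rule fires
    return "proceed"

print(choose_action([("person", 1.5), ("chair", 0.5)]))  # -> stop
print(choose_action([("chair", 0.5)]))                   # -> proceed
```

Tomorrow's rules would be longer and conditioned on more inputs, but structurally the same: classify, then match against programmed conditions.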
Agreed that a robot is no more a colleague than a screwdriver.
I think you're wrong about Asimov, though. Obviously, to write about theoretical concerns of a future technology, the author must proceed without knowing how to actually implement it, but may still be able to say that it's theoretically possible. There is no shortage of good, predictive science fiction written when we had no idea how to achieve the technology portrayed. For example, Clarke's orbital satellites were steam-powered. Steam is indeed an efficient way to harness solar power if you have a good way to radiate the waste heat, but we ended up using photovoltaics. Clarke was nevertheless on solid ground regarding the theoretical possibility of such things.
So did I. In fact I think I read about a new COBOL compiler just last year. That doesn't keep it from being a lousy language (with a few good features).
COBOL was a horrible language even for its day. Fortran was much better... but it was harder for the bosses to pretend to understand. (OTOH, COBOL was decent at some formatting tasks compared to Fortran, and I believe it included BCD numbers, whereas in Fortran you needed to use a library for that.)
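The point about BCD (decimal) numbers is easy to demonstrate in any modern language; here's a small Python sketch of the same issue COBOL's decimal types sidestepped, since binary floats cannot represent most decimal fractions exactly, which matters when summing money:

```python
# Binary floating point vs. exact decimal arithmetic.
from decimal import Decimal

binary_total = sum(0.10 for _ in range(3))               # IEEE binary float
decimal_total = sum(Decimal("0.10") for _ in range(3))   # exact decimal

print(binary_total == 0.3)               # False (0.30000000000000004)
print(decimal_total == Decimal("0.30"))  # True
```

For business ledgers full of cents, that exactness (plus PICTURE-style output formatting) was a genuine COBOL strength, whatever the rest of the language was like.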
It's true that there's a lot of ancient COBOL code still live, and I attribute that partly to the fact that nobody can really understand it, and partly to the fact that, unlike assembler, it didn't automatically die when you changed processors.
"Show me a good loser, and I'll show you a loser." -- Vince Lombardi, football coach