HTML5 video has mechanisms, via the Encrypted Media Extensions, to restrict media access based on client properties. For example, there is a robustness parameter which implementations are expected to evaluate according to their perceived ability to prevent user-controlled access to content.
I suspect that Widevine (the DRM module used by Firefox) did not provide a robustness level on Linux that Netflix was comfortable with. To a degree, this is still ongoing: I think the maximum resolution you can get on Linux is still 720p, while Windows will go to at least 1080p.
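For the curious, that robustness knob lives in the Encrypted Media Extensions API rather than in the video tag itself. A minimal configuration sketch of how a player might probe Widevine for hardware-backed robustness follows; the key-system string and robustness values are the ones Widevine defines, the codec string and everything else is illustrative:

```typescript
// Illustrative sketch: ask the browser whether the Widevine CDM can offer
// hardware-backed robustness. Services gate higher resolutions on the
// robustness level the CDM can satisfy.
const config: MediaKeySystemConfiguration[] = [{
  initDataTypes: ["cenc"],
  videoCapabilities: [{
    contentType: 'video/mp4; codecs="avc1.640028"', // illustrative codec
    // Widevine robustness levels, weakest to strongest:
    // SW_SECURE_CRYPTO, SW_SECURE_DECODE, HW_SECURE_CRYPTO,
    // HW_SECURE_DECODE, HW_SECURE_ALL.
    robustness: "HW_SECURE_ALL",
  }],
}];

navigator
  .requestMediaKeySystemAccess("com.widevine.alpha", config)
  .then(() => console.log("hardware-backed robustness available"))
  .catch(() => console.log("only software robustness (or no CDM) here"));
```

If the promise rejects, a player would typically retry with a weaker robustness string and serve a lower resolution accordingly.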
wouldn't that let the cat out of . .
>Lena Dunham or Amy Shumer, mostly because I dislike their material.
wait a minute, "their"?
Are you saying that she is, er they are, not the same person?
Tivo is still around.
Well, sort of.
I switched from DirecTV to the hated cable company to get back to a TiVo (a Roamio). Turns out that the interface just isn't, well, what we liked TiVo for.
Rather than one click on record in the listings, it's something like three clicks. And for a season pass, rather than clicking record twice, it's several. Because, gosh, they've got to make buying it to watch the default first choice, don't they?
Can't screen for series premiere any more either.
Now, it's just a slightly better DVR.
Clarke did very little writing on robot brains.
Um, I'll have to assume that you weren't around in April 1968, when the leading AI in popular culture for a long, long time was introduced in a Kubrick and Clarke screenplay, and in what probably should have been attributed as a Clarke and Kubrick novel. And a key element of that screenplay was a priority conflict in the AI.
Well, you've just given up the argument, and have basically agreed that strong AI is impossible
Not at all. Strong AI is not necessary to the argument. It is perfectly possible for an unconscious machine not considered "strong AI" to act upon Asimov's Laws. They're just rules for a program to act upon.
In addition, it is not necessary for Artificial General Intelligence to be conscious.
Mind is a phenomenon of a healthy living brain and is seen nowhere else.
We have a lot to learn about consciousness yet. But what we have learned so far seems to indicate that consciousness is a story that the brain tells itself, and is not particularly related to how the brain actually works. Descartes' self-referential attempt aside, it would be difficult for any of us to actually prove that we are conscious.
You're approaching it from an anthropomorphic perspective. It's not necessary for a robot to "understand" abstractions, any more than it is required to understand mathematics in order to add two numbers. It just applies rules as programmed.
Today, computers can classify people in moving video and apply rules to their actions such as not to approach them. Tomorrow, those rules will be more complex. That is all.
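A toy sketch of what "apply rules as programmed" means here; every name and threshold below is invented for illustration:

```typescript
// Toy rule application: given detections from a video classifier, decide a
// motion command. No "understanding" involved; just rules applied to labels.
interface Detection {
  label: string;      // e.g. "person", "chair" (hypothetical classifier output)
  distanceM: number;  // estimated distance in meters
}

// Rule: do not approach a person closer than 2 meters.
function motionCommand(detections: Detection[]): "advance" | "halt" {
  const tooClose = detections.some(
    (d) => d.label === "person" && d.distanceM < 2.0,
  );
  return tooClose ? "halt" : "advance";
}
```

Tomorrow's "more complex" rules are just more clauses in functions like this, not comprehension.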
Agreed that a robot is no more a colleague than a screwdriver is.
I think you're wrong about Asimov, though. Obviously, to write about the theoretical concerns of a future technology, the author must proceed without knowing how to actually implement it, but may be able to say that it's theoretically possible. There is no shortage of good, predictive science fiction written when we had no idea how to achieve the technology portrayed. For example, Clarke's orbital satellites were steam-powered. Steam is indeed an efficient way to harness solar power if you have a good way to radiate the waste heat, but we ended up using photovoltaics. Clarke was on solid ground, though, regarding the theoretical possibility of such things.
In the mid-80s, my older engineering professors commented that *their* professors refused to get on airplanes for that reason: "normal" engineering tolerance was 300%-400%, and planes were 10% to 15% . . .
>PAR2 uses Reed-Solomon error correction.
I'm no expert, but it seems to me that when the correct value of the data is disputed, cutting the data in half as a solution is a Bad Idea(TM) . . .