Unlike cinema screens, TV screens are not projected onto through an anamorphic lens; the spacing between pixels on a TV panel is uniform. So curving a TV screen simply ADDS the very distortion that curved cinema screens are designed to prevent.
This is the worst part though:
The slight curvature also reduces visual geometric distortion. When you watch a perfectly flat TV screen, Soneira explained, the corners of the screen are farther away than the center so they appear smaller. "As a result, the eye doesn't see the screen as a perfect rectangle - it actually sees dual elongated trapezoids, which is keystone geometric distortion," Soneira wrote.
WHAT? The screen is a rectangle, so our eye sees it as a rectangle, just as it would any other rectangular object! The visual cortex of our brain makes sure of that. How can someone who works with TVs not understand basic concepts of human vision?
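For what it's worth, the geometric effect Soneira describes can be quantified directly. The sketch below (screen size and viewing distance are my own illustrative assumptions, not figures from the article) compares the viewer's distance to the center of a flat screen with the distance to its corner:

```python
import math

def corner_vs_center(diagonal_in=55, aspect=(16, 9), viewing_dist_in=96):
    """Distances from a centered viewer to a flat screen's center and corner.

    Apparent linear size scales roughly as 1/distance, so the returned
    ratio is how large the corner appears relative to the center.
    """
    diag = math.hypot(*aspect)
    w = diagonal_in * aspect[0] / diag  # screen width in inches
    h = diagonal_in * aspect[1] / diag  # screen height in inches
    d_center = viewing_dist_in
    # Straight-line distance to a corner: viewing distance plus
    # half-width and half-height, combined in 3D.
    d_corner = math.hypot(viewing_dist_in, w / 2, h / 2)
    return d_center, d_corner, d_center / d_corner

center, corner, ratio = corner_vs_center()
print(f"center: {center:.1f} in, corner: {corner:.1f} in, "
      f"corner apparent size: {ratio:.1%} of center")
```

With these assumed numbers (a 55-inch 16:9 screen viewed from 8 feet), the corner is only a few percent farther than the center, so the retinal-image difference Soneira invokes is real but tiny, and it is exactly the kind of small perspective cue our visual system compensates for when judging shape.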
The question, if anything, is not "can they brick services if you don't agree to the terms?" It's "can you sell a TV advertising certain features, then change your terms in the future making those features unavailable to certain people?"
I am no lawyer but I am inclined to say "yes". The legal precedent seems to be "you can change your terms of service AS LONG AS you notify your users". Which LG is doing.
Though one could argue that it was not clear from the marketing that any agreement was required in order to use the advertised services. If there is any case against LG, I would guess that would be it.
Does a manufacturer have the right to "brick" certain integral services just because the end user doesn't feel comfortable sharing a bunch of info with LG and other, unnamed third parties?
If by "right" you mean "legal right", then yes. Next question.
The federal government would need a warrant from a judge if it wants the cooperation of California officials
I'm pretty sure the NSA can already get a lot of information WITHOUT cooperating with state government officials.
"Well I DO have a moral system!" you say. Well then do what that moral system tells you to do. If the answer is indeterminate, then your moral system is logically inconsistent. Simple.
Also, I can't imagine this "random" system having much success. The legal implications alone would be tremendous. Not to mention the fact that there's no way I would buy such a vehicle.
should a robotic car sacrifice its owner’s life, in order to spare two strangers?
If such a car ever exists, I won't buy it, that's for sure! I'll buy from another car manufacturer. I imagine most people would feel similarly. Are you suggesting that there should be a law requiring all automated vehicles to behave this way? Ha! Good luck finding a politician who's willing to take that up.
all other options point to a chaos of litigation, or a monstrous, machine-assisted Battle Royale, as everyone’s robots—automotive or otherwise—prioritize their owners’ safety above all else, and take natural selection to the open road
We already have human drivers that prioritize their own safety above all else (I know I do!). Replacing these with superior robot drivers could only make things better, no?
the leap from a crash-reduced world to a completely crash-free one is an assumption
Only an idiot would make that assumption. Stop treating your readers like idiots. Oh wait, it's Popular Science. Never mind.
Even if it were possible to simply order all robots to never hurt a person, unless they are suddenly able to conquer the laws of physics, or banish the Blue Screen of Death in all its vicissitudes, large automated machines are going to roll or stumble or topple into people.
More often than human drivers already do?
And this is where the slashdot idealists come in and say "well if that's the case, we just shouldn't have automated cars. We should only have them if they're completely safe." But be real. What would you rather have: an automated vehicle that works 99.9% of the time, or no automated vehicle at all? I know what I'd choose. Besides, with the car handling the routine driving, you could devote your full attention to watching for potential hazards.