Actually, this may not be so bad. If they're not government agencies, then they're not immune to lawsuits and when they bust in the wrong house, that person can sue the hell out of them, right?
Come as a surprise? If they WEREN'T doing this, then the people running the company would be incompetent and should be tossed out the door.
The irony is that he's 180 degrees off from the main problem with his story, which is that he thinks robots are magic too. The reason robots will not be making ethical decisions is that they can't, not only because getting them to reason ethically would require us to agree on a system of ethics for them to follow, but because even if they had such a system, they don't have enough data to act on it with the degree of accuracy that would be required for the premise of the article to make sense. The author essentially assumes that these car-driving robots will be omniscient, or that they will be able to trust the omniscience of the robots in other cars with which they are communicating. The first supposition is nonsensical; the second is unlikely to be true, for the same reason that video game cheats are a problem.
He does no such thing. He assumes that the programmers who write the algorithms controlling the robots will consider various possible responses to an emergency situation and will make ethical decisions when deciding how to code those algorithms. There may indeed be circumstances where the robot does not have all of the data needed to make a valid ethical decision. Robots will certainly not be omniscient. Their sensors will not be infallible, nor will they be able to accurately discern all of the factors in all of the cases. But that doesn't mean there are no cases in which ethics will play a factor. A robot would almost certainly be able to tell the difference between a bus and a small passenger car, and it's reasonable to assume that the bus carries more passengers than the car, even if there are some cases where that would not be true. If a bus turns left in front of you when you have the right-of-way and the robot calculates that it cannot avoid a collision altogether, should it hit the bus or swerve into the next lane, hitting the passenger car there? Some variant of that scenario will almost certainly happen if self-driving cars become common, and it's one the algorithm should take into account. That doesn't at all mean the robot-cars are capable of thinking, of calculating ethics, or are omniscient. The question is how the programmers writing the algorithms should code the decision-making tree.
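To make the point concrete: the "decision tree" here doesn't require any machine intelligence at all. A purely hypothetical sketch (all names and occupancy numbers are invented for illustration, not anyone's actual implementation) of how a programmer might encode the bus-vs-car choice as a simple cost comparison over rough sensor estimates:

```python
# Hypothetical sketch: an unavoidable-collision decision coded ahead of
# time by a programmer, using only coarse estimates, not omniscience.

def choose_maneuver(options):
    """Return the maneuver with the lowest estimated harm.

    `options` maps a maneuver name to a rough harm estimate, e.g. the
    guessed number of occupants in the object that would be struck.
    """
    return min(options, key=options.get)

# The bus-vs-car scenario from the text, with made-up occupancy guesses:
estimates = {
    "brake_and_hit_bus": 30,   # assume a bus usually carries many people
    "swerve_into_car": 2,      # assume a small car usually carries few
}
print(choose_maneuver(estimates))  # -> swerve_into_car
```

The ethics live in how the programmer chose to weight the options, not in any reasoning done by the car.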
True, they did not, but I would put that at the level of mistake rather than being unreasonable.
I'm reasonably certain that the OpenSSL team did not do this on purpose. It likely wasn't sabotage by a malicious developer. I seriously doubt someone paid the team to install the bug intentionally. You're almost certainly right that it was a mistake. But arrogance, ignorance and other weaknesses lead to mistakes that should not be made, and when they are, it's jake to point the finger. Just because it was a mistake doesn't mean it was out of their control.
I would consider an alternative periodic table a success if it predicts new elements or new interactions that the old one didn't.
This, right here. This is the only valid argument for changing an existing and well-understood model when there's no new evidence to consider.
The Periodic Table isn't a model, or at least not a functional model. It's a chart - a way to represent data. Arguably, a chart is a model of sorts, but given your comment about "new evidence," you seem to be implying that it's a model of how things function and that this new proposal provides an alternate functional model, which isn't the case. The proposed alternative isn't a new theory of elements. It doesn't change our idea of how things work. It simply presents the same information and understanding in a different way. If the new table doesn't provide any new predictive ability at all but does, say, present the information in a way that's easier to grasp or makes relationships clearer, then it's worth considering and possibly worth adopting.
Let's turn that around. Assume, for the moment that (like myself) you are not a US citizen. Now you are told that this surveillance is only carried out on non-US nationals, as if that is somehow OK and the action of a good neighbour.
How would that make you feel?
What happened to "We hold these truths to be self-evident, that ALL men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness."? (Emphasis mine.) Granted that the Declaration of Independence isn't legally binding in the way that the Constitution is, there's still no way that you can square the "Constitutional guarantees only apply to citizens" doctrine with the fundamental principle of human rights.
I was just thinking. If your switch doesn't support this fancy stuff (first, what the heck are you doing, get a managed switch)
Exactly. You (the submitter) are aware that this is trivial on any enterprise switch, right? Often, it's not a direct capability to turn the port on and off at a specified time but it's effectively the same. For example, you might create an access list which drops all traffic on a port during a specified timeframe and passes everything outside it. The port is technically still enabled but since no traffic comes in or goes out, it might as well be shut down.
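As a rough illustration only (exact syntax varies by vendor and version, and the interface name, ACL names, and times here are all hypothetical), a time-based ACL of the kind described might look something like this on a Cisco IOS switch:

```
! Define the window during which traffic should be dropped
time-range AFTER-HOURS
 periodic weekdays 18:00 to 23:59
!
! Drop everything inside the window, pass everything outside it
ip access-list extended BLOCK-AFTER-HOURS
 deny ip any any time-range AFTER-HOURS
 permit ip any any
!
! Apply it to the port in question
interface GigabitEthernet0/5
 ip access-group BLOCK-AFTER-HOURS in
```

The port stays administratively up the whole time; the ACL just makes it useless during the blocked window, which is the "effectively the same" behavior described above.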
What's even worse is the implied equivalence of "knowing some random fact" and "being smarter." Wikipedia makes it easier to be smarter, but it also makes it MUCH easier to believe you are smarter... when really you're just lazy.
Did you RTFA?
How many years before we have a brain interface to Google? You'd know everything. And it's not crazy to think that soon after, we'd find ourselves limited by how slowly our brains process information. The obvious next step being to augment our brains, our thinking, and in the process - augment who we are. That's what our scientists will be working on then (and, of course, are already working on now).
NC headphones help but by themselves won't block out everything. Get some noise-cancelling headphones and play music - just don't play music you like. Find something you don't completely hate but that really doesn't float your boat. Something without vocals is preferable. You can grab all sorts of classical, big band, early jazz ensembles, etc. for free. It's not going to grab your attention and distract you like music you really like, but it will block out the sound around you. At least, that works for me. I use the trick for writing in public spaces.
The talk was completely off-topic and couldn't possibly improve the environment of the conference.
And, of course, that opinion is the only one that matters, so it's OK to lie and use whatever other cheap, underhanded methods you can use to impose your perspective on everyone else, right? "Rape trigger" is a convenient tool because it shuts down all further conversation.
A: "Rape trigger!"
B: "But I..."
A: "What, do you support rape? What kind of sleazy, disgusting asshole are you?"
B: (slinks away)
Aside from the "this is a hack?" issue, get a headboard and clamp the tablet holder to that.
Except for Anathem, which has the most boring, uninteresting start to a book I've ever tried to read. After several attempts I've only made it a few chapters in.
To each his own, different strokes make the world go 'round, etc. But I found the first half of Anathem incredibly good and the second half (once they left the Math) much less exciting. Part of that may be because I'm a fan and amateur student of philosophy.
The problem isn't whether or not it's "easy to use".
The problem is that it's designed to be easy to use on tablets, and tablets are rubbish for doing real work. On desktop machines, a tablet-first interface just gets in the way.
That fails to explain why a three-year-old has no problems using it.
Two things. First, a three-year-old doesn't have to unlearn years of expectations of a system acting a certain way. Second, what a three-year-old is trying to accomplish on a PC might be just slightly different from the purposes of a typical business user.
My advice? Do the responsible thing and stick it out until retirement or mortgage/kiddo's schooling is paid off, then take your walkabout.
You can also start looking for new opportunities but don't quit your day job until you have something solid lined up.
Are your tech skills solely limited to coding? Even if you can't get out of the IT field, you might try a different area. I retired from the Navy (I was an Electronics Technician) at age 39 and got a job as a Network Technician. I got my CCNA, which got my foot in the door. Within three years I'd been promoted to Network Engineer, and now, six years after retiring, I'm the Lead Site Engineer of a network with some 1200 devices and 15,000 users. It's still IT but it's very different from coding.
Not only is the cure worse than the disease, but the severity of the disease appears to be widely overstated: