But defense contracting would be a bit of a shift in how they like to do business, and I'm not sure a positive one. Alternatively, they could just repurpose the acquired tech and expertise toward Google's own robotics projects and dump the military clients. That would leave quite a bit of money and existing business on the table, though, not to mention possibly annoying some politically powerful folks.
Boston Dynamics *is* a defense contractor, so by extension Google is one too, now. I am going to try to remain optimistic about the positive effects that Google can have on human advancement. Science and engineering seem to leap forward much farther and much, much faster when they are deployed in the service of armed conflict. Companies like Planetary Resources, Armadillo Aerospace, and SpaceX are going to have to be able to defend their extraterrestrial ventures, and NASA has demonstrated beyond a shadow of a doubt that robotic missions in space are far more cost-effective, in terms of results, than manned missions. The minute Planetary Resources starts exploiting the asteroid belt, they are likely going to need a way to defend against claim jumpers, and I'm hoping that by hoovering up all these robotics companies, Google is positioning itself to defend these companies in their (hopefully) peaceful occupation and exploitation of the solar system.
I think there's a big future for a testing company -- like Underwriters Laboratories is for physical goods -- to do just that. Anyone, big or small, can send them code to review and pay a fee, and they'll certify the resulting binary as trouble-free, at least to the level of confidence you'd expect from a good app store or distro (acknowledging that sufficiently clever malware can hide anywhere, but forcing it to be really clever would probably fix 99% of the problem).
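Mechanically, the certification could be as simple as the lab signing a hash of the exact binary it reviewed. Here's a minimal sketch of the user-side check, assuming a lab that signs with Ed25519 -- every name and format below is invented for illustration:

```python
# Hypothetical sketch: an end user verifying a binary against a testing
# lab's certification. All names, key formats, and layouts are invented.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_certified_binary(binary_path: str,
                            lab_signature: bytes,
                            lab_public_key: bytes) -> bool:
    """Return True if the lab's signature covers exactly this binary."""
    digest = hashlib.sha256()
    with open(binary_path, "rb") as f:
        # Hash in chunks so large binaries don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    try:
        # The lab signs the SHA-256 of the binary it certified; any
        # post-certification tampering invalidates the signature.
        Ed25519PublicKey.from_public_bytes(lab_public_key).verify(
            lab_signature, digest.digest())
        return True
    except InvalidSignature:
        return False
```

The point of signing the hash rather than hosting "blessed" downloads is that the certification travels with the binary: any mirror can distribute it, and the user can still check that what they got is byte-for-byte what the lab reviewed.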
This. So what if some company certifies the code as non-toxic? For every legit code-certifying company that goes online, there will be a hundred phishing sites popping up overnight to take advantage of it. The problem is not toxic code -- the problem is the toxic levels of foolishness and naivete of the vast majority of users on the net.
Just because you can't prevent someone from doing something (murder, rape, or giving a speech) doesn't make it a "right".
Try arguing your "right to life" with a hungry lion; rights only exist between entities that recognize those rights. If your government doesn't recognize freedom of speech, the difference between having it and not having it is entirely philosophical.
Hmmm. Excellent post. But I'm having trouble reconciling these two assertions.
From the point of view of a warlord, superior military force confers the right to murder and rape. Indeed, it confers any right the warlord chooses to assert. Ditto your hungry lion -- his right to eat me stops at the muzzle of my rifle.
It would seem to me that you need something more than just the other party recognizing that you have rights. You have to be able to successfully assert those rights. In French, it is "prêter main-forte" -- "lend the strong hand." In English, it would be "might makes right."
I always chuckle when I hear people say 'if I die...", when the correct wording is "when I die...". The exact circumstances vary from person to person, but the end result is always the same.
And I always cringe when somebody makes an assertion that is counter to my experience and my intuition. I think about death fairly often, dude, and so do *a lot* of other people. I like to participate in activities -- skydiving, motorcycle racing, and stunt flying, just to enumerate what I did this weekend -- which could reasonably be expected to be fatal if not done correctly or well. I like to think that my parachute is going to open *every time* I exit the aircraft, that no debris has found its way onto the track at the apex of a blind turn to make me high-side at a buck fifty, and that I'm not going to pull so many negative g's that I red-out and auger in, so that my death remains firmly in the hypothetical. I want "if" and not "when" to remain the correct way for me to phrase thoughts about dying for many, many decades to come.

I will happily concede your point that dying is inevitable, but for some of us, getting close to death is pleasurable, and we would like to dance with it for as long as humanly possible. Yeah, we are probably not going to die of "natural causes," but we will be part of the tiny fraction of humanity that gets at least some say in the time and manner of our demise. Unlike Scott Adams' father, whose time and manner of death were dictated by the fiscal self-interest of the medical facility that was prolonging his life for financial gain.
So you get to starve to death or dehydrate. Excuse me if I don't consider death by organ failure over several days "quick." I don't think anyone would call that humane.
We would put down a dog in that condition, not let it starve or die of dehydration.
"No heroic methods." That is the magic incantation that let's you die with dignity. At least in jurisdictions that allow advance healthcare directives, anyway. Run, do not walk, to your nearest legal professional and execute an advance healthcare directive, if you want to be able to die with dignity. If you don't live in a jurisdiction that allows advance healthcare directives, move to one that does. Period. BTW, morphine takes the edge off -- if you specifically allow the use of palliative measures in your directive, you can die with dignity and do it painlessly as well.
Do not make medical decisions about which drug to take by yourself; it's a bad idea.
Hmmm. Bad medical decisions that *you* make stop when your heart stops. The alternative is for some other person to make medical decisions on your behalf. This other person is immune to the consequences of a badly grokked medical decision, which leaves him free to continue dispensing bad advice. How is this not a bad thing, as well? Is there a middle course between these two choices?
From what I can determine, in all cases it is used to augment your ability to communicate and/or navigate. Why is wanting either of these pathetic in *any* circumstance?
Don't be naive. Do you really think that some clever sociopath is *not* going to figure out how to exploit his/her augmented ability to "communicate and/or navigate" to enhance their ability to fuck with people? C'mon. By your line of reasoning, a gun just augments our ability to throw things. Why is there no downside to throwing things harder and with more accuracy? I suppose you live in a (fantasy) land where armed robberies never happen?
And the Nexus 5 has a SoC with two more cores, an 80% higher max clock rate, and double the RAM. That it can only keep up is pretty amusing.
What is amusing is that the Nexus 5 costs half what the iPhone does. Apple's target demographic has always been people with more money than brains. Thwok... ball's in your court.
Well, that's a great shame. Whoever wrote the article on Wikipedia made no attempt to explain it in layman's terms. I give you:
In quantum gravity, the Wheeler–DeWitt equation describes the quantum version of the Hamiltonian constraint using metric variables. Its commutation relations with the Diffeomorphism constraints generate the Bergmann-Komar "group" (which is the Diffeomorphism group on-shell, but differs off-shell).
Years of study no doubt required in order to even attempt to understand what that's all about!
Actually, a pair of undergraduate classes in abstract and linear algebra thirty years ago is all that I'm going on, and it seems adequate to get the gist of the Wikipedia article. Admittedly, I had to look up Bergmann-Komar (I'm not a physicist; the dynamical evolution of Einstein's differential equations describing GR has only a limited, abstract appeal to me!), but abstract groups in general, along with commutators, diffeomorphisms, and Hamiltonians, were covered at the freshman level. So months, not years.
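For what it's worth, the equation itself is far more compact than the surrounding jargon. In its standard textbook form (not quoting the article), it just says the Hamiltonian constraint operator annihilates the wavefunction of the universe:

$$\hat{H}(x)\,\lvert \Psi \rangle = 0$$

All the years-of-study machinery lives inside $\hat{H}$; the shape of the equation is something a freshman can read.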
The ACLU and medical professionals don't think there's anything voluntary about receiving medical treatment, and they hold that medical ethics override other concerns.
What about cases where the patient is under a doctor's care as the result of publicly visible, *voluntary* behaviors? I can't really feel solidarity with some overweight person who smokes complaining that her medical information is being disclosed to some third party. Unhealthy behavior should be discouraged, and a good way to do that is to punish these people in the pocketbook by maintaining a database of people with unhealthy behaviors so that insurance companies can shift the risk of those behaviors onto the shoulders of those who deserve it. Those of us who try to stay healthy should not have to bear the financial burden when some chain-smoking junk-food junkie's coronary arteries eventually seize up and her lungs shut down. My insurance company created a tiered system where they charge smokers much higher premiums than non-smokers. Ditto for BMI. People who refuse to maintain reasonable height-weight proportionality should have to pay more for health insurance than the rest of us. I have to get swabbed and weighed once a month to prove I'm not smoking anymore and am maintaining a healthy weight, but it saves me over $2000 per year in premiums.
More to the point, I think I have the absolute right to determine *for myself* that the people I place trust in (teachers, bus drivers, cops, firemen, bankers, janitors, housekeepers) are not abusing the medications that have been prescribed to them -- a publicly accessible medical database would go a long way toward making that possible.
It's FAR more frustrating that, rather than trying to *fix* the edge cases Asimov uncovered with the Three Laws (later four laws), we've decided to just go full steam ahead without any laws at all, with robots designed for the sole purpose of killing us.
Well, we've had lethal robots that meet this definition since the first time an anarchist connected an alarm clock to a bundle of dynamite and hid it in the luggage compartment of a train. A human-class AI must have the capacity to kill, or it wouldn't be human-class. It also must have the capacity to make decisions based on probabilistic outcomes, evaluate those outcomes against some nominal goal, and change its behavior based on that assessment -- the same way humans do.
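To make that last sentence concrete, here's a toy sketch of "decisions based on probabilistic outcomes, evaluated against a nominal goal" -- every name and number below is invented for illustration:

```python
# Toy sketch of expected-utility action selection -- illustrative only.
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose_action(actions):
    """actions: dict of action name -> list of (probability, utility).
    Picks the action whose probable outcomes best serve the goal."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

# Two candidate behaviors, each with uncertain outcomes scored
# against some nominal goal:
actions = {
    "wait":      [(0.9, 0.0), (0.1, -5.0)],   # expected utility -0.5
    "intervene": [(0.7, 3.0), (0.3, -2.0)],   # expected utility  1.5
}
print(choose_action(actions))  # -> "intervene"
```

Swap the utilities and the same loop "changes its behavior based on that assessment" -- which is exactly why the capacity to kill comes bundled with the capacity to decide.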
FWIW, the Good Doctor changed the Three Laws to include a fourth law, which he called the "Zeroth Law" and introduced in *Robots and Empire*:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
The edge case he needed to fix was resolving conflicts where the death of a human or humans was necessary for the greater good. I think he realized that the pacifist take on conflict resolution embodied in the original Three Laws strained even *his* fans' credulity.
But it also helped bridge the uncanny valley between his robots and the rest of humanity. Giskard and R. Daneel Olivaw became far more "human" after Asimov introduced the Zeroth Law, once their actions were no longer constrained by the inherent pacifism of the original Three Laws.
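One way to picture what the Zeroth Law changed: the four laws become a strict priority ordering, and a conflict resolves in favor of whichever action violates only the lower-priority law. A toy sketch, with every structure below invented for illustration (not anything from Asimov's text):

```python
# Toy model of conflict resolution under a strict law hierarchy.
ZEROTH, FIRST, SECOND, THIRD = 0, 1, 2, 3   # lower index = higher priority

def severity(violations):
    """The worst (highest-priority) law an action would break;
    4 means the action breaks no law at all."""
    return min(violations) if violations else 4

def resolve(actions):
    """actions: dict of action name -> set of violated law indices.
    Prefer the action whose worst violation is least severe."""
    return max(actions, key=lambda name: severity(actions[name]))

# Giskard's dilemma, schematically: standing by lets humanity come to
# harm (a Zeroth Law violation); acting harms one human (First Law).
print(resolve({
    "stand_by": {ZEROTH},
    "act":      {FIRST},
}))  # -> "act": harming one human is preferred to harming humanity
```

Under the original Three Laws there is no index 0, so "stand_by" and "act" tie at the First Law and the robot freezes -- which is precisely the paralysis the Zeroth Law was written to break.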