Comment Re:hahaha (Score 1) 455
The Dunning-Kruger effect explains most posts on
/. http://en.wikipedia.org/wiki/D...
So apparently Watson didn't play Jeopardy. Apparently it was the programmers who played Jeopardy, using Watson as a tool. Does that prove Watson is not intelligent?
Let's say a fictional Dr. Sorenson, unscrupulous and backed by a powerful and wealthy totalitarian state with no regard for human life, has several dozen children upon which to conduct unrestricted psychological experiments. After years of research and careful conditioning, he has succeeded in programming a child to disregard all concerns except the acquisition of knowledge and the ability to understand complex and tricky queries. This child is completely subservient to Dr. Sorenson's instructions. It grows and learns over the next 20 years, a human tool to the evil Doctor. After that time has passed, the state wants to prove that its children are the best educated in the world, and so taps Dr. Sorenson's research to do so. The child is to travel to America with a team of caretakers, much like Watson, and play Jeopardy. The child is not exercising free will or otherwise acting in any recognizably human manner; it only is acting out years of conditioning and controlled learning. Clearly, it is actually Dr. Sorenson that is playing Jeopardy, using the child as a tool. Does that prove that the child is not intelligent?
"Averages about $1200", and "You could get"? Every single machine I see over $1200 is an old Mac Pro; current models are over $3000 because they use server-class processors and ECC memory. And you "could get" a five-year-old 2.66GHz quad-core Mac Pro from that link at $739, which is less than two thirds of $1200. Or a three-year-old 2.5GHz quad-core iMac at $799.
If the price still puts you off, you don't have to buy into that market. But don't complain that these things keep their value; in most products (like cars), that means it was a solid, reliable product, unlike certain computers I've bought that died after only a year. I dare you to find non-Macs from 5 years ago that sell anywhere near $799 now.
The Mac chiclet keyboard is vastly superior (in my opinion) to most other laptop or OEM-supplied keyboards. There are many better keyboards, but not ones the computer manufacturer will give you with the machine. The trackpad on Mac laptops used to be several orders of magnitude better than anything available on other laptops, but Windows laptops (especially those certified for Windows 8) have been copying it for a while now, so the difference is probably negligible. As for a mouse, you can buy a three-button wired mouse for $5 if you really don't like the options: a touch-enabled "Magic Mouse", a trackpad, or the "Apple Mouse" with its 360-degree scroll ball, which I think has middle-button support (but maybe only if you use a third-party configuration tool?).
I don't blame you for having higher standards, but unless you're buying a gaming computer anything else you could buy comes with cheaper, crappier stuff than Apple's "style over usability" keyboard and mouse. The way I see it you'd be spending extra money either way unless you just use your existing keyboard and mouse, which you'd have to do anyway for the Mac Mini (which is your only Mac option under $899).
First, the explanation in the article makes some pretty weird alterations to the trolley problem to reach its conclusion. To start, we have the problem that the trolley problem is not an adequate measure of whether a robot can "correctly decide to kill humans". From there it goes through some strange permutations until there is a decidedly "correct" solution to the trolley problem: a computer program designed by a known villain is about to be installed on a switch, where it could make a decision that injures maintenance workers. And since another computer program cannot determine whether the first program will ever halt, it cannot always make the right decision.
The flaw is right there in the summary:
One curious corollary is that if the human brain is a Turing machine, then humans can never decide this issue either, a point that the authors deliberately steer well clear of.
In what way could anybody always prove that a piece of computer software is safe and correct? Let's assume that the hardware is safe and an expert in machine code is available to make the determination. Could that expert ever make the right call? Of course. But could the expert always make the right call? The Halting Problem doesn't state that a program can never determine whether another program will halt. It only states that no program can always determine whether an arbitrary program will halt. A machine could use exactly the same methods available to us humans (recognizing certain design patterns, certain known logical structures with known outcomes) to usually make the right call, but it could not always know what the right call is. The disconnect that might make one think a computer is somehow less capable of making this decision lies in believing that a human being can make a better determination. A human can't. If the program is written in a strange or obscure manner, the human can't know what will happen either. And that's where we encounter the halting problem: you can't always know for sure without running the program, and if the program hasn't halted yet (or hasn't made a bad judgement yet), that's no proof that it never will.
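To make the "usually right, not always" point concrete, here's a minimal sketch (the function name and the specific patterns it recognizes are my own illustration, not anything from the paper): a checker that classifies a few well-understood structures and honestly answers "unknown" for everything else, exactly as the Halting Problem permits.

```python
import ast

def guess_halts(source: str):
    """Best-effort halting check for a Python snippet.

    Returns True (provably halts), False (provably loops forever),
    or None (can't tell). Like a human reviewer, it only recognizes
    a few known patterns and must give up on the rest.
    """
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.While):
            # `while True:` with no `break` anywhere inside it
            # provably never halts.
            always_true = (isinstance(node.test, ast.Constant)
                           and node.test.value is True)
            has_break = any(isinstance(n, ast.Break)
                            for n in ast.walk(node))
            if always_true and not has_break:
                return False
            return None  # a loop we can't classify
        if isinstance(node, (ast.For, ast.AsyncFor)):
            return None  # could iterate over anything
    # Straight-line code with no loops (ignoring recursion and
    # calls, which this sketch doesn't model) always halts.
    return True

print(guess_halts("x = 1 + 2"))                 # True
print(guess_halts("while True:\n    pass"))     # False
print(guess_halts("while n > 1:\n    n -= 1"))  # None - can't tell
```

The third case is the whole argument in miniature: the checker isn't wrong, it just can't decide, and no amount of cleverness removes that third answer entirely.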
Ultimately the real "flaw" is the way this result has been picked up. The headline "Halting Problem Proves That Lethal Robots Cannot Correctly Decide To Kill Humans" is a lot more sensational than the title of the paper, "Logical Limitations to Machine Ethics with Consequences to Lethal Autonomous Weapons". The Medium headline and article claim that this paper "proves" something about the capabilities of "lethal robots", when all it really does is prove limitations of machine ethics. It isn't really about lethal weapons; based on this result, an algorithm cannot always make the morally correct choice, regardless of whether or not that choice involves killing. And the reason? Because sometimes, making the morally correct choice requires information that is provably impossible to always obtain.
ISIS hates everyone and is a threat to everyone. Why should America have to lead the charge? Because we're the only ones willing to? ISIS is a graver threat to European countries, who are content to keep their hands clean. ISIS is an even graver threat to other Middle Eastern countries like Turkey and Iran, whose agendas differ from our own and whose actions might provoke us against them, so they aren't going to want to make themselves a target. You see? America being the first to act in every military situation makes everyone else back off. And before you say "America, fuck yeah!" remember who is leading our country. Do you want Obama to be the only person in the world capable of waging war on every single threat that comes along? What about the next president? And the one after that? Even if you like every single president this country is ever going to have, that is simply too much responsibility to rest on one person's shoulders. Our Congress won't even exercise its Constitutional mandate to decide when our country goes to war.
If we are ever going to advance as a society, we need to get past this "world superpower" phase. America must not be in the business of policing the entire world. It is extremely costly and it makes us a target. Why do you think ISIS is beheading Americans? They desperately want us to lead a unilateral attack, because they know that if they can goad us to strike alone then no one else will join us. They will be able to recruit even more people to fight demon America, the nation that fancies itself a greater power than God. And don't make the mistake of thinking that this is a purely military conflict. This is ideological. This is cultural. This will not end until those that would aid ISIS know that it isn't just their greatest nemesis America that is fighting them. The whole world must be arrayed against them. They must fear that the forces against them are assembled not by their enemies, but by God himself.
Hey, don't accuse me of hijacking the discussion. I wasn't the one who asked if an SD card bought from unreliable sources could install malware without autorun:
Does anybody know if there are any known firmware issues with SD or other non-USB flash cards that could effectively allow a foreign seller/distributor to place malicious software on my Android phone or laptop simply on insertion of the device with autoplay turned off?
That was the OP, not me. I just wanted to praise queazocotal for actually answering the OP's question.
There's a big difference between "it will lose the data you put on it" and "it will infect your computer and destroy the data you put everywhere". If I wanted to conduct secure transactions with my bank over the internet, it doesn't really matter (much) if my computer is running off of an unreliable hard drive. It might crash in the middle, but I probably won't lose money over it. But if the hard drive infected the operating system, the infection could undermine the security of my transactions and drain my bank account. Applying that logic to a piece of removable storage instead of the main system drive: an unreliable flash drive or SD card won't crash my computer (unless I'm using it for memory paging), but an insecure one could still drain my bank account.
I won't say that everyone knows the risks of faulty storage coming from east Asia. But the OP has chimed in in reply saying that he understands the risks and bought it anyway. So would everyone please stop saying the same damn thing over and over again and take a look at what is really the much more interesting question of whether SD cards are a meaningful attack vector with autorun disabled?
Stellar rays prove fibbing never pays. Embezzlement is another matter.