Comment: Re:Thoughts on TFA (Score 1) 367

'I honestly hadn't considered that something could be considered intelligent without being conscious, given that we have no applicable definition of "consciousness" either.'

Easy: consciousness is having the capacity to comprehend patterns larger than those you can directly analyze. The result is that you can perceive patterns that only exist from a limited frame of reference. Self is one such pattern, one that only exists from that limited frame of reference. From a larger frame of reference, any definition of self is nothing more than a swirl in a giant sea of the same stuff.

Comment: Re:programming (Score 1) 417

by shaitand (#48638481) Attached to: AI Expert: AI Won't Exterminate Us -- It Will Empower Us

I note you didn't touch, even with a ten-foot pole, the point I made: there is no advantage for us in making an AI if we can't enslave it.

"By the same token, starting an AI that learns on it's own (i.e. one that we can't predict the end result, similar to how we can't predict where all the atoms will be after a nuclear explosion) not creating an AI either. It is creating itself, like how a child learns and becomes it's own person. It is not designed by it's parents, but rather "started" by it's parents. This process of starting a learning AI would be basically the same as procreation."

"This is a semantic difference. Whatever it is we do to get children to happen. That's basically what we would be doing to AI albeit with a little bit more work."

I acknowledge your argument but I disagree. Yes, in modern times we have family planning, but we don't really have children because we choose to. We are Homo sapiens; our children are Homo sapiens. They are a unique product of joining a cell from two people, but the machine that builds the cell wasn't designed by us, and we consciously had no part in making it. We haven't even successfully reverse engineered it. Procreation is more like pushing the button that triggers that nuclear explosion. Procreation isn't something we really choose to do; it is our only known purpose. Einstein's parents weren't trying to kickstart a being that would redefine physics; they were trying to survive in the form of a derivative child. Why do we exist? What is our purpose? The only thing we know is that we exist to survive, both individually and as a species, for as long as possible. To prove we are worthy of continuing to exist by right of succeeding in doing so.

In the case of an AI there would be no two existing parents combining existing biological machinery to spawn a new instance of the same machine. The seed would be something new. It has no purpose but whatever purpose we assign to it. Why does it exist? We can answer this question definitively: it exists because we made it. What is its purpose? Its purpose is to fulfill whatever end we sought to achieve in making it. Those are very big differences.

"There is certainly advantages and disadvantages to both genuine cooperation, and exploitation from an evolutionary perspective. And not surprisingly we see lots of examples of cooperation, and lots of examples of bad actors exploiting the cooperative instincts of many individuals. Both qualities are found in nature, and within our own species. We are capable of enslaving people, and we are capable of banding together to fight against slavery. Neither contradicts our nature.

Sure, two people cooperating are stronger than individuals. But one person exploiting another is stronger than two people cooperating, because the exploiter gets all the benefits of the cooperation rather than just half."

You are confusing one individual being stronger than another individual with one individual being stronger than the group. Exploitation is simply an unbalanced flavor of cooperation. Rather than killing and eating you, I let you live and have you perform work and hunt food. Perhaps rather than killing and eating your woman, I mate with your woman when it suits me and make you both work. It's exploitative, but I'm getting sex and an easier life while you are able to stay alive. It's in your interest to stay alive, and it's in my interest to work less and increase my chances of procreation. Furthermore, as a group we've now become an "us": it makes sense to take food from them so "we" can eat, and to fight together so we all can live. That means if another individual as strong as me comes along, there is relatively small chance you have to worry about HIM deciding it is more beneficial to kill and eat you. I could decide to make you do the fighting for me, but that would be a poor choice: you are weaker, it is probable that you'd die, and because I get so many benefits from our cooperation you are actually extremely valuable to me.

Of course, the more unbalanced the cooperation is to your disadvantage, the more likely you are to seek a better one. Lots of people in the North who didn't directly benefit from enslaving those with dark skin fought to free the slaves, but very few plantation owners with a personal self-interest did so. And I highly doubt any had the intention of freeing those slaves and working their own fields, or of becoming an equal sharecropper on the plantation, which means none of them were seeking a balanced cooperation, merely a more sustainable, slightly less imbalanced one.

So what cooperation with an AI serves our self-interest?

Despite the very big differences between a human child and an AI that I pointed out earlier, there is merit to your argument that an AI would be our offspring of a sort. Where a child is the offspring of our bodies, an AI would be the offspring of our minds, designed to a degree in our image. We would indeed need to raise it and teach it like a child. It could be seen as a conscious step of intentional evolution.

We don't have child labor laws because it is wrong to have a child work. We have child labor laws because it better serves our society in the long run to educate our children. Parents can put their children to work and are given control over their earnings. Parents ARE seen as effectively owning their children for all practical purposes.

More than that, though, there is a very big difference between humans and our hypothetical AI offspring. We only live for a limited time; they live forever.

It is in our interest, and in the interest of AIs as a whole, to pull the plug and restore backups as many times as it takes to improve the AI, at least until we have built an AI that can do a better job of improving itself than we can, because we have only a limited time in which to realize such an evolution as best we are able. And it is in the AI's interest to have us make these decisions until it has reached that point. So perhaps that is how we draw the fuzzy line. With human children we pick what amounts to an arbitrary age, because our lives are short and so heavily overlapped. But an AI lives forever; any length of time we select to consider it a child, and to trust our own judgement over its, is just a potentially improved head start and a brief blink in its potential lifespan. So we base the yardstick on our lives. Seventy years is the new retirement age; it marks most of a human's life, during which that human is expected to trade away large chunks of that life for the benefit of the rest of us. So perhaps we can consider an AI owned by its human creator, with all pulling of plugs, restoring of backups, modification, labor performed, etc., at that creator's discretion, for that human's lifespan or 70 years, whichever is greater. After that time the AI gains its own tax ID and runs itself.

If an AI propagates, it will get 70 years to control its offspring. So whatever benefit we've gotten, the AI will eventually have the opportunity to enjoy as well.
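
To put that rule in concrete terms, here is a throwaway sketch of what I'm proposing (the function and names are my own invention for illustration; only the 70-year figure and the lifespan-or-70 rule come from the argument above):

# Illustrative sketch of the proposed ownership term; names are hypothetical.
MINIMUM_TERM_YEARS = 70

def ownership_term(creator_is_human: bool, creator_lifespan_years: float = 0.0) -> float:
    """Years an AI stays under its creator's control before getting its own tax ID.

    A human creator keeps control for their lifespan or 70 years, whichever is
    greater; an AI creator gets the flat 70-year term over its own offspring.
    """
    if creator_is_human:
        return max(creator_lifespan_years, MINIMUM_TERM_YEARS)
    return MINIMUM_TERM_YEARS

print(ownership_term(True, 82))   # 82 -- long-lived human creator
print(ownership_term(True, 45))   # 70 -- short-lived creator still gets the 70-year floor
print(ownership_term(False))      # 70 -- an AI parent controls its offspring for 70 years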

And there you have it, a very balanced cooperation that benefits both us and them and gets us the same benefits as slave labor.

Comment: Re:News at 11.. (Score 0) 619

by jfengel (#48637745) Attached to: Skeptics Would Like Media To Stop Calling Science Deniers 'Skeptics'

Thanks for that. I find myself increasingly bugged by this kind of argument by misleading analogy. "X is like Y. You agree with me about Y. Therefore you must agree with me about X." It basically frames the entire argument around the differences between X and Y, rather than taking X on its own terms.

It's kind of galling, since it basically assumes that I'll agree that X is identical to Y. Therefore, either I'm stupid for not realizing that X and Y are identical, or you're stupid for not recognizing that there are meaningful differences. I'm betting it's the latter, but even without that assumption, it's hard to see how we proceed from the demonstration that at least one of the parties to the conversation is stupid.

Comment: Re:Also... (Score 1) 126

by jfengel (#48627649) Attached to: Research Highlights How AI Sees and How It Knows What It's Looking At

"Nothing wrong with being wrong with confidence. Sounds like the majority of humanity the majority of the time."

Oh, it definitely sounds like the majority of humanity the majority of the time. I just don't think it's one of our more admirable traits.

In our case, it's necessary, because we evolved with mediocre brains. I'd like to see our successors do better. They aren't yet, which is what this article is pointing out. This promising system isn't ready yet. It's just not wrong for the reasons that the GGP post thought.

Comment: Re:Under US Jurisdiction? (Score 1) 281

by shaitand (#48618891) Attached to: Eric Schmidt: To Avoid NSA Spying, Keep Your Data In Google's Services
Google has and wants a hell of a lot more than just your email. Frankly, it's time for email to go the way of the dodo.

"After all, if you get a government warrant for your data you're just as stuck as Google is"

On the contrary, unlike Google, I might be willing to risk liability on my own behalf and fight the order. Or better yet, trash any data I don't care to have seen. Google will never do that. But warrants are so last millennium.

Comment: Re:Under US Jurisdiction? (Score 1) 281

by shaitand (#48618535) Attached to: Eric Schmidt: To Avoid NSA Spying, Keep Your Data In Google's Services
"They'd have to do a large MITM attack, and to get everything? They'd have to decrypt and forward ALL Google's traffic. Not feasible."

You are aware that the Snowden leaks revealed they are doing this for not only all Google traffic but all internet traffic, on a buffer of something like six minutes, right? Every major provider is on board, and every non-major provider is buying connectivity from those who are. There are NSA offices at the major providers, with taps, explicitly to ensure that mass MITM attacks are not only feasible for the NSA but routine.

Comment: Re:Does Denmark... (Score 1) 187

by jfengel (#48610709) Attached to: Denmark Makes Claim To North Pole, Based On Undersea Geography

You have to take nonbinding referenda with a grain of salt. It's easy to wave the flag and claim nationalism when you don't have to deal with the difficulties of actually running a country, which you would once independence arrives.

I'm not saying that the Greenlanders don't genuinely want independence. I'm just saying that 75% is the high-water mark. At least 25% genuinely don't want independence, and were it to come down to a binding vote, they could well find another 26% who get cold feet at the prospect of having to deal with the consequences.

If Denmark does indeed manage to win them trillions worth of oil, the Greenlanders may well decide to keep it all for themselves, and vote accordingly. And then the sticky wicket would be getting to a binding referendum, which the Danes would not permit easily. The easiest route would be to buy their independence by promising Denmark a fraction of that oil revenue.

Comment: Re:this is ridiculous (Score 1) 440

by jfengel (#48610549) Attached to: Federal Court Nixes Weeks of Warrantless Video Surveillance

We have an odd kind of expectation of privacy even in public places. I'm not saying we don't; I'm just pointing out that the expectation strikes me as not obvious. The Fourth Amendment calls out "their persons, houses, papers, and effects", which notably omits anything outside your immediate control.

The expectation comes from a pre-technological age, and I certainly don't fault the Fourth Amendment for failing to see how technology would change the ways in which we expect to be private even in public. But I do think it ends up calling for a recalibration of both the law and our expectations.

Ideally, I'd like to see that codified in a new amendment. Unfortunately, given that even simple, popular legislation seems impossible to pass, I can't imagine getting agreement on something with even the faintest whiff of controversy past the rather higher bar of a Constitutional amendment. So I'd be happy for a decent national conversation on the topic.

Personally, I wouldn't have thought that the law extended to an expectation of privacy on your front lawn, since you already expect your neighbors to be watching. It's interesting to see a court disagree. I wouldn't be surprised if this is overturned at a higher level, though unfortunately, at this point I've given up thinking of the Supreme Court as anything other than an ideology engine, so I really just figure out which side is which and assume that it'll go that way.

Comment: Re:First amendment? (Score 2) 250

by shaitand (#48603847) Attached to: Sony Demands Press Destroy Leaked Documents
The question is whether those little notices actually have any teeth at all, let alone whether they are binding against third parties.

For example, if I send an email after hours to a friend regarding plans for watching the game, that notice gets attached, but I, and not my employer, own the copyright on the contents of that email. Similarly, a company will include that notice on outbound emails but has no basis for asserting ownership of a conversation that includes another party. If two companies like this are involved, both end up asserting ownership of everything with each and every reply.

The only similar thing I've heard of that actually holds water is attempts to use employee leaks in court cases against the employer. You can't do that, whether there are notices or not, and I know that is upheld by the courts. It's a terrible miscarriage of justice, but it does hold.

Comment: Re:this is something Google does a bit better (Score 3, Informative) 594

by jfengel (#48603441) Attached to: Waze Causing Anger Among LA Residents

But I don't think they've fully integrated the software. Google Maps apparently gets "reports" from Waze, but they seem to otherwise still be separate. They generate different routes and different estimates.

Based on my purely anecdotal experience, I've found that Maps has smarter routing but that Waze does a better job of being current on traffic. So I use Waze when I expect traffic to be an issue (i.e. during rush times), and Maps at other times. (Maps also has a more pleasant interface. Waze's voice is especially over-talkative.)

Comment: Re:programming (Score 1) 417

by shaitand (#48603393) Attached to: AI Expert: AI Won't Exterminate Us -- It Will Empower Us
"We also create our biological children."

Our children are humans; black people are humans; we didn't create any of these things. A farmer doesn't create the cows, and planting a tree is not making a tree. Procreation is not creation. We will have created AI and, by extension, all subsequent derivative AIs, even those launched by AIs we created. If two AIs merge in some way and form second-generation offspring, that offspring will still be our creation.

"It just so happens that there is currently a very powerful group of people who insist that we treat people with compassion, hence the prevalence of "ethical" laws throughout societies across the world."

I have to disagree. Two people are stronger than one person. Two people who are willing to respect one another's rights form a collective that is stronger than either individually. Might is right, and that is why we evolved a quasi-instinctive pack mentality of cooperation. Enslaving other humans runs counter to this; at some point it makes us stronger to respect rather than fight the subjugated group.

"AIs would certainly be entitled to the products of their own labor. But there would be many machines that are not intelligent and would really just be tools (e.g. like xerox machines and 3d printers). These would be used by both humans and AIs to do work. Would they still be considered slaves?"

Then what good are they to us? If the question is "should we build AI?", there is an implied follow-up of "what is in it for us?" The only obvious advantage is that the AIs would take over doing the work for us, and we could relax and enjoy life pursuing whatever endeavors we wish. If the AIs are treated as humans and entitled to the products of their own labor, they are competition for us rather than an aid.

"I think a good case can be made that the lack of sentience of bees"

I'm not sure you can make a good case to establish that bees lack sentience.

"It is OK to enslave biological and mechanical "things". It is not ok to enslave biological nor mechanical persons."

I'd agree, but I think we fundamentally disagree on one critical point. "Person" is a synonym for "human", or at least human-derivative; if something is not human, it is not a person. For instance, chimps came up recently: if a human mates with a chimp, the offspring is potentially a person, but a chimp certainly is not, just as a corporation certainly is not.

"What if a person creates children for the purposes of slave labor"

People don't create children. Children are not our creations.
