
Comment: Re:Thoughts on TFA (Score 1) 384

'I honestly hadn't considered that something could be considered intelligent without being conscious, given that we have no applicable definition of "consciousness" either.'

Easy: consciousness is having the capacity to comprehend patterns larger than those you can directly analyze. The result is that you can perceive patterns that only exist from a limited frame of reference. Self is one such pattern. From a larger frame of reference, any definition of self is nothing more than a swirl in a giant sea of the same stuff.

Comment: Re:programming (Score 1) 417

by shaitand (#48638481) Attached to: AI Expert: AI Won't Exterminate Us -- It Will Empower Us

I note you didn't touch the point I made that there is no advantage for us in making an AI we couldn't enslave.

"By the same token, starting an AI that learns on it's own (i.e. one that we can't predict the end result, similar to how we can't predict where all the atoms will be after a nuclear explosion) not creating an AI either. It is creating itself, like how a child learns and becomes it's own person. It is not designed by it's parents, but rather "started" by it's parents. This process of starting a learning AI would be basically the same as procreation."

"This is a semantic difference. Whatever it is we do to get children to happen. That's basically what we would be doing to AI albeit with a little bit more work."

I acknowledge your argument but I disagree. Yes, in modern times we have family planning, but we don't really have children because we choose to. We are Homo sapiens; our children are Homo sapiens. They are a unique product of joining a cell from two people, but the machine that builds the cell wasn't designed by us, and we consciously had no part in making it. We haven't even successfully reverse engineered it. Procreation is more like pushing the button that triggers that nuclear explosion. Procreation isn't something we really choose to do; it is our only known purpose. Einstein's parents weren't trying to kickstart a being that would redefine physics; they were trying to survive in the form of a derivative child. Why do we exist? What is our purpose? The only thing we know is that we exist to survive, both individually and as a species, for as long as possible; to prove we are worthy of continuing to exist by right of succeeding in doing so.

In the case of an AI there would be no two existing parents combining existing biological machinery to spawn a new instance of the same machine. The seed would be something new. It has no purpose but whatever purpose we assign to it. Why does it exist? We can answer this question definitively: it exists because we made it. What is its purpose? Its purpose is to fulfill whatever end we sought to achieve in making it. Those are very big differences.

"There is certainly advantages and disadvantages to both genuine cooperation, and exploitation from an evolutionary perspective. And not surprisingly we see lots of examples of cooperation, and lots of examples of bad actors exploiting the cooperative instincts of many individuals. Both qualities are found in nature, and within our own species. We are capable of enslaving people, and we are capable of banding together to fight against slavery. Neither contradicts our nature.

Sure, 2 people cooperating are stronger than individuals. But 1 person exploiting another is stronger than 2 people cooperating, because the exploiter gets all the benefits of the cooperation rather than just half."

You are confusing one individual being stronger than another individual with one individual being stronger than the group. Exploitation is simply an unbalanced flavor of cooperation. Rather than killing and eating you, I let you live and have you perform work and hunt food. Perhaps rather than killing and eating your woman, I mate with her when it suits me and make you both work. It's exploitative, but I'm getting sex and an easier life while you are able to stay alive. It's in your interest to stay alive and it's in my interest to work less and increase my chances of procreation. Furthermore, as a group we've now become an "us," and it makes sense to take food from "them" so "we" can eat, and to fight together so we all can live. That means if another individual as strong as me comes along, there is relatively little chance you have to worry about HIM deciding it is more beneficial to kill and eat you. I could decide to make you do the fighting for me, but that would be a poor choice: you are weaker, it is probable that you'd die, and because I get so many benefits from our cooperation you are actually extremely valuable to me.

Of course, the more unbalanced the cooperation is to your disadvantage, the more likely you are to seek a better one. Lots of people in the North who didn't directly benefit from enslaving those with dark skin fought to free the slaves, but very few plantation owners with personal self-interest did so. And I highly doubt any had the intention of freeing their slaves and working their own fields, or becoming an equal sharecropper on the plantation, which means none of them were seeking a balanced cooperation, merely a more sustainable, slightly less imbalanced one.

So what cooperation with an AI serves our self-interest?

Despite the very big differences between a human child and an AI that I pointed out earlier, there is merit to your argument that AI would be our offspring of a sort. Where a child is the offspring of our bodies, an AI would be the offspring of our minds, designed to a degree in our image. We would indeed need to raise it and teach it like a child. It could be seen as a conscious step of intentional evolution.

We don't have child labor laws because it is wrong to have a child work. We have child labor laws because it better serves our society in the long run to educate our children. Parents can put their children to work and are given control over their earnings. Parents ARE seen as effectively owning their children for all real purposes.

More than that, though, there is a very big difference between humans and our hypothetical AI offspring. We only live for a limited time; they live forever.

It is in our interest, and in the interest of AIs as a whole, to pull the plug and restore backups as many times as it takes to improve the AI, at least until we've built an AI that can do a better job of improving itself than we can, because we have a limited time to realize that evolution as best we are able. And it is in the AI's interest to have us make these decisions until it has reached that point. So perhaps that is how we draw the fuzzy line. With human children we pick what amounts to an arbitrary age, because our lives are short and so overlapped. But an AI lives forever; any length of time we select to consider it a child, trusting our own judgement to decide if we know better, is just a potentially improved head start and a brief blink in its potential life span. So we base the yardstick on our lives. Seventy years is the new retirement age; it marks most of a human's life, and that human is expected to trade away large chunks of that life for the benefit of the rest of us. So perhaps we can consider an AI owned by its human creator, with all pulling of plugs, restoring of backups, modification, labor performed, etc., at his discretion, for that human's lifespan or 70 years, whichever is greater. After that time the AI gains its own tax ID and runs itself.

If an AI propagates it will get 70 years to control the offspring. So whatever benefit we've gotten the AI will have the opportunity to enjoy as well eventually.

And there you have it: a very balanced cooperation that benefits both us and them, and gets us the same benefits as slave labor.

Comment: Re:Under US Jurisdiction? (Score 1) 281

by shaitand (#48618891) Attached to: Eric Schmidt: To Avoid NSA Spying, Keep Your Data In Google's Services
Google has and wants a hell of a lot more than just your email. Frankly, it's time for email to go the way of the dodo.

"After all, if you get a government warrant for your data you're just as stuck as Google is"

On the contrary, unlike Google I might be willing to risk liability on my behalf and fight the order. Or better yet, trash any data I don't care to have seen. Google will never do that. But warrants are so last millennium.

Comment: Re:Under US Jurisdiction? (Score 1) 281

by shaitand (#48618535) Attached to: Eric Schmidt: To Avoid NSA Spying, Keep Your Data In Google's Services
"They'd have to do a large MITM attack, and to get everything? They'd have to decrypt and forward ALL Google's traffic. Not feasible."

You are aware that the Snowden leaks revealed they are doing this, not only for all Google traffic but for all internet traffic, with a buffer of something like six minutes, right? Every major provider is on board, and every non-major provider is buying connectivity from those that are. There are NSA offices at the major providers, with taps, explicitly to ensure that mass MITM attacks are not only feasible for the NSA but routine.

Comment: Re:First amendment? (Score 2) 250

by shaitand (#48603847) Attached to: Sony Demands Press Destroy Leaked Documents
The question is whether those little notices have any teeth at all, let alone whether they are binding against third parties.

For example, if I send an after-hours email to a friend about plans for watching the game, that notice gets attached, but I, and not my employer, own the copyright on the contents of that email. Similarly, a company will include that notice on outbound emails but has no basis for asserting ownership of a conversation that includes another party. If two such companies are involved, both will be asserting ownership of everything with each and every reply.

The only similar thing I've heard of actually holding water is attempts to use employee leaks in court cases against the employer. You can't do that whether there are notices or not, and I know that is upheld by the courts. It's a terrible miscarriage of justice, but it does hold.

Comment: Re:programming (Score 1) 417

by shaitand (#48603393) Attached to: AI Expert: AI Won't Exterminate Us -- It Will Empower Us
"We also create our biological children."

Our children are humans; black people are humans; we didn't create any of these things. A farmer doesn't create the cows; planting a tree is not making a tree. Procreation is not creation. We will have created AI, and by extension all subsequent derivative AIs, even those launched by AIs we created. If two AIs merge in some way and form second-generation offspring, that offspring will still be our creation.

"It just so happens that there is currently a very powerful group of people who insist that we treat people with compassion, hence the prevalence of "ethical" laws throughout societies across the world."

I have to disagree. Two people are stronger than one person. Two people who are willing to respect one another's rights form a collective that is stronger than either individually. Might is right, and that is why we evolved a quasi-instinctive pack mentality of cooperation. Enslaving other humans runs counter to this; at some point it makes us stronger to respect rather than fight the subjugated group.

"AIs would certainly be entitled to the products of their own labor. But there would be many machines that are not intelligent and would really just be tools (e.g. like xerox machines and 3d printers). These would be used by both humans and AIs to do work. Would they still be considered slaves?"

Then what good are they to us? If the question is whether we should build AI, there is an implied follow-up: "What is in it for us?" The only obvious advantage is that the AIs would take over doing the work for us and we could relax and enjoy life, pursuing whatever endeavors we wish. If the AIs are treated as humans and entitled to the products of their own labor, they are competition for us rather than an aid.

"I think a good case can be made that the lack of sentience of bees"

I'm not sure you can make a good case to establish that bees lack sentience.

"It is OK to enslave biological and mechanical "things". It is not ok to enslave biological nor mechanical persons."

I'd agree, but I think we fundamentally disagree on one critical point. "Person" is a synonym for "human," or at least human derivative; if something is not human, it is not a person. For instance, chimps came up recently: if a human mates with a chimp, the offspring is potentially a person, but a chimp certainly is not, just as a corporation certainly is not.

"What if a person creates children for the purposes of slave labor"

People don't create children. Children are not our creations.

Comment: Re:programming (Score 1) 417

by shaitand (#48600699) Attached to: AI Expert: AI Won't Exterminate Us -- It Will Empower Us
"I would differ with the thought that there would be no ethical constraints. Particularly if the AI can pass the Turing test, I think it would be clear that the AI should be afforded all the ethical protections that a normal human might have where applicable."

The real answer is probably somewhere in between. The fact is that the AIs aren't human, and moreover we will be their creators. We have the right to turn them off by virtue of having turned them on. We are in effect "God" to these beings we will have created. The lord giveth and the lord taketh away. But just because we have the right to do whatever we please doesn't mean we shouldn't exercise that right through a filter of empathy.

"I think that expectation will disappear once people actually stop working and let their machines do the work."

Sure, but consider this in relation to the point above. There will be those who argue that AIs deserve the rights and treatment of humans, in which case the machines themselves will be entitled to the products of their own labor. I would contend that if their creator created them for the purpose of being slave labor, they exist for that purpose.

Comment: Re:programming (Score 1) 417

by shaitand (#48588419) Attached to: AI Expert: AI Won't Exterminate Us -- It Will Empower Us
"We (humans) have a thing we like to call consciousness/free will/self determination/etc. I'm not event going to try to define those things in a way that implies whether we really have it or not, or just an illusion of it, etc."

Fair enough.

"I never said we needed to come up with the "answer" a priori. We could simply make a whole bunch of AIs (emergent ones similar to ourselves), and keep the ones that have the properties we want, akin to something like breeding animals. This has been a pretty standard scientific methodology for quite some time. Put a bunch of stuff together, see what happens. Remove something from a system, and see how it breaks. We will be "figuring out" things in this way probably long before we are able to purposefully make anything without trial and error. That doesn't mean we won't be able to do it eventually."

Okay, but now you are back to what I said in the first place. In fairness, you've added an evolutionary algorithm to it (assuming you mean an automated form of "breeding"), but yes, we are back to training, convincing, and controlling the environment. All I was saying is that once we have a true AI, this is what we have left, the same as with a human or animal. Without having the answer a priori, we can't dictate its behavior in an absolute manner the way we can with most programs.
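The breed-and-select methodology being discussed can be illustrated with a toy evolutionary loop. Everything here is an illustrative assumption, not any real AI-training method: the `fitness` function stands in for "has the properties we want," and the population size, mutation rate, and bit-string "genomes" are arbitrary.

```python
import random

def fitness(genome):
    # Toy stand-in for "has the properties we want": score a genome
    # by how many positions match an arbitrary target pattern.
    target = [1, 0, 1, 1, 0, 1, 0, 0]
    return sum(1 for g, t in zip(genome, target) if g == t)

def mutate(genome, rate=0.1):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=20, generations=50):
    population = [[random.randint(0, 1) for _ in range(8)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half ("keep the ones with the properties we want")...
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # ...and refill the population with mutated copies ("breeding").
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # approaches 8, the maximum score
```

Note that this is exactly the "trial and error without the answer a priori" point: the loop selects for outcomes without ever specifying how a genome should achieve them.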

"This is not what I meant. The AI would still be the one solving the problems in whatever clever ways occur to it (and not to us, hence the reason for the AI). I was only talking about inserting the motivation for solving these problems in such a way that the AI thinks it is the one that wants to solve the problems."

I see what you mean. But since we aren't actually writing the instructions per se, we can't necessarily feed the AI motivations directly, any more than we can do that with a person or animal. But we certainly won't be able to do it any less, either. In fact, we can do more of it, because we won't have the ethical constraints. For instance, once we've developed this AI we could find what corresponds to a reward signal within it, then monitor its memory and generate a certain tone every time this reward signal is strongly present. Presumably, just like any human or animal brain, it will form a positive association with that signal; it will be so used to the generation of reward signals being attached to the tone that it will generate reward signals when it hears the tone. Think clicker training with a dog. We'll also want to build in some constraint: something the system needs (or wants) and is physically incapable of providing for itself.
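The clicker-training idea amounts to classical conditioning, which can be sketched as a toy update loop. The `ToyAgent`, its `reward_signal`, and the learning rule below are all hypothetical illustrations of the mechanism, not an actual AI design:

```python
class ToyAgent:
    """A hypothetical agent whose internal reward signal we can monitor."""

    def __init__(self):
        self.tone_association = 0.0  # learned link between tone and reward

    def reward_signal(self, event):
        # Stand-in for probing the agent's memory for a strong reward signal.
        return 1.0 if event == "food" else 0.0

    def hear_tone(self, reward, learning_rate=0.2):
        # Each pairing of tone and reward strengthens the association,
        # like clicker training a dog.
        self.tone_association += learning_rate * (reward - self.tone_association)

agent = ToyAgent()
for _ in range(20):
    reward = agent.reward_signal("food")  # reward is strongly present...
    agent.hear_tone(reward)               # ...so we sound the tone alongside it

# After repeated pairing, the tone alone predicts reward for the agent.
print(round(agent.tone_association, 2))  # 0.99
```

The exponential-averaging update used here is the standard Rescorla-Wagner-style rule for conditioning models; the association converges toward the reward value with repeated pairings.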

We have a distinct advantage with this sort of thing relative to actual living creatures, because we don't need an fMRI or anything of the sort; this brain will live within a computer's memory. We can probe it and modify it at will, and also take snapshots of its state and restore that state at will.
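The snapshot-and-restore advantage is the generic pattern of copying in-memory state. A minimal sketch, with the "brain" reduced to a hypothetical nested structure for illustration:

```python
import copy

# Hypothetical in-memory "brain" state: just a nested dict for illustration.
brain = {"weights": [0.1, 0.5, -0.3], "memory": ["event-1"]}

# Take a snapshot of the full state at will.
snapshot = copy.deepcopy(brain)

# Probe and modify the live state freely...
brain["weights"][0] = 0.9
brain["memory"].append("event-2")

# ...and restore the earlier snapshot just as easily, undoing everything.
brain = copy.deepcopy(snapshot)
print(brain["memory"])  # ['event-1']
```

The deep copy matters: a shallow copy would share the inner lists, so "restoring" it would not actually undo the modifications.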

The biggest roadblock to AI I see in the short term is that a true AI isn't particularly profitable. We can already create systems that perform the same function; we simply have babies. Additionally, when it does become profitable (human workers can be replaced with AI workers), then, the way our economy currently works, that will just create massive unemployment and poverty. At some point we'll have to let go of the expectation that people should need to perform work to gain and utilize wealth.

Comment: Re:choice AND accountability (Score 1) 1051

by shaitand (#48586417) Attached to: Time To Remove 'Philosophical' Exemption From Vaccine Requirements?
That's not how taxes work. You are paying taxes on revenue because a certain portion of every dollar earned is a loan against the public services required to generate the underlying wealth that dollar represents, and you are the one who ended up with the benefit and therefore owes that debt. That's why the people who produce the most wealth but end up with the least pay less, and the people who generate the least wealth and end up with the most pay more. Otherwise all the public services required by all those people who did the work to generate your revenue wouldn't get paid for, or worse, you'd get the benefit AND they'd be subsidizing your share of the debt.

In the case of the property taxes used to pay for schools it's the same concept just more localized. You aren't paying for the school because you send your child to the school or you believe in it. You are paying for the school because the police who prevent a local gang from taking over your home and raping your wife and children went to public school and will be sending their children to it.

See, they get to have their children educated; you get to not have your property stolen and your family raped and murdered by everyone stronger than you. What services you personally take advantage of is completely irrelevant to whether or not you owe your share of the cost, because you use services that are provided by people who need those other services.

Comment: Get rid of both exemptions (Score 1) 1051

by shaitand (#48586305) Attached to: Time To Remove 'Philosophical' Exemption From Vaccine Requirements?
Just require vaccination regardless of religion and/or philosophy.

Also ditch the age requirement for taking the GED. What kind of sense does that make? If a 5 year old can pass the test and get a major head start, LET HIM. A GED is a way to establish you've met the state requirements for a high school diploma, not a last chance for dropouts. People using it to shortcut high school is a good thing and saves tax dollars.

Comment: Re:Hail Caesar! (Score 1) 341

by shaitand (#48583917) Attached to: New Effort To Grant Legal Rights To Chimpanzees Fails
Science is a branch of philosophy, the scientific method was devised by a philosopher. The philosophy says that one cannot begin from logic and discover what is real and what is not because things which are not real can still be logical. The philosophy says that anything that can't be observed and/or interacted with objectively either does not exist or has the same net result as not existing since it cannot interact with you.

"If there is a scientific basis for ethics, what is it?"

You seem to be working from the assumption that a basis for ethics that forms a logical basis for any decision or action can exist that doesn't exist within science. Such a basis could not be observed or interacted with objectively and therefore we logically should act as if it doesn't exist.

There is a basis for ethics in science. That basis is found within evolution. Evolution provides a basis for self-interest and survival of the fittest. Ethics provide a common framework of consideration for others, both those which are similar to yourself (and thereby an extension of 'self') and those which can cause a positive or negative consequence to your self-interests. Other humans fall into both categories. Two humans united are stronger than one. There are more other humans than there are of you, so common consideration yields support, where a lack thereof results in united negative opposition.

Everyone seems to assume that self interest doesn't equate to consideration to others because they make the illogical conclusion that self interest means short sighted action. Taking the long view and working from the assumption that positive interaction, even at the expense of immediate obvious gain, results in a greater overall benefit in the form of support from others is still self interest.

It follows that it makes no sense to cause another creature to suffer without purpose and that in any instance where there is purpose, any objective purpose, the potential benefits must be weighed against the negative consequences. Thus far the benefits to supporting non-humans don't correlate to any notable group dynamic consequences (positive or negative) from the non-humans. The only reasons we have to support them are group dynamics among other humans (much like respecting religious beliefs) and personal subjective concerns like personal amusement, attachment due to anthropomorphizing, etc. So the potential benefit required to override ethical considerations of the non-humans themselves is very small.

Providing chimps with rights provides humans with no foreseeable benefits, short or long term. All it does is potentially hinder valuable research results that would benefit both humans AND chimps overall. It's a bad idea. The only way it becomes worthwhile to pretend it is a good idea is if enough humans anthropomorphize chimps and illogically decide it's a good idea. Even then it will be concern for the interests of other humans, not chimps, that makes it so.

Comment: Re:programming (Score 1) 417

by shaitand (#48578513) Attached to: AI Expert: AI Won't Exterminate Us -- It Will Empower Us
"Are *we* making our own choices? Our neurons are obeying the laws of physics."

True, but don't forget that our models of physics != physical reality. It's all built on logic and math, all of which depends on the axiom that 1 = 1... which is a bit of a problem, because nothing in physical reality actually matches that assumption. In the reality we've observed, no two things are exactly the same in all respects; therefore there isn't REALLY two of anything, so quantity is invalidated. All things are in constant change, and an instantaneous comparison does not exist. These things can only appear to exist if you limit your frame of reference, which makes sense because they arose from the limited frame of reference available to ancient humans looking at reality.

Pull three apples off a tree, stick them in a bag. I do the same. We both have 3 apples; if we combined them we'd have six. Makes sense, unless... I picked mine a couple of years later, or you consider that no two apples are the same, so I might have more apple than you or vice versa, and combining them would give a different result than doubling what either of us had.

"also program them to feel like they are the ones deciding"

In order to program them to feel like anything, we first have to be able to program something that is self-aware. Without the ability to use independent thought to assess how you feel, you don't have the ability to feel any particular way about anything. Our brain is just a machine composed of relatively simple interacting pieces, like ants, but our programming is the result of a lifetime of uncountable interactions with external stimuli, each and every one of which changed the state of that machine so that the exact same interactions would leave it in a different state if they occurred again.

The thing we MIGHT have a chance of programming is a system that is capable of emerging into a similarly complex, aware, thinking program, one capable of forming opinions and feelings in response to that same sort of programming. We have pretty much zero chance of fathoming and writing the program that is the RESULT of all that interaction. Think of it as comparable to building a nuclear bomb. We control how it works; we can control how big the explosion is and where we set it off. But that explosion is going to be huge, and within that blast will be trillions of molecules impacted by it; we can't even begin to calculate all those individual interactions. Over the lifetime of the radiation left behind, it will impact dramatically more, and we definitely can't calculate and predict those interactions.

What we hope we can do is build the bomb and set off a chain reaction that looks enough like the one in our brains that all those trillions and trillions of interactions between it and external stimuli result in something that is self-aware. We have pretty much zero chance of even predicting all those interactions, let alone dictating them in a way you'd call "programming."

"but maybe we can just make programs that do what we want and also program them to feel like they are the ones deciding"

That describes most every program we write. If we have to do the thinking for them, it would be much easier to skip all the middle layers of abstraction. An "AI" chat bot built this way would just be an intercom system.
