
Comment Re:First amendment? (Score 2) 250

The question is whether those little notices actually have any teeth at all, let alone whether they are binding against third parties.

For example, if I send an email after hours to a friend about plans for watching the game, that notice gets attached, but I, and not my employer, own the copyright on the contents of that email. Similarly, a company will include the notice on outbound emails but has no basis for asserting ownership of a conversation that includes another party. If two companies like this are involved, both will be asserting ownership of everything with each and every reply.

The only similar thing I've heard of that actually holds water is attempts to use employee leaks in court cases against the employer. You can't do that whether there are notices or not, and I know the courts have upheld that. It's a terrible miscarriage of justice, but it does hold.

Comment Re:programming (Score 1) 417

"We also create our biological children."

Our children are humans; black people are humans; we didn't create any of them. A farmer doesn't create the cows, and planting a tree is not making a tree. Procreation is not creation. We will have created AI, and by extension all subsequent derivative AIs, even those launched by AIs we created. If two AIs merge in some way and form second-generation offspring, that offspring will still be our creation.

"It just so happens that there is currently a very powerful group of people who insist that we treat people with compassion, hence the prevalence of "ethical" laws throughout societies across the world."

I have to disagree. Two people are stronger than one. Two people who are willing to respect one another's rights form a collective that is stronger than either individually. Might is right, and that is why we evolved a quasi-instinctive pack mentality of cooperation. Enslaving other humans runs counter to this; at some point it makes us stronger to respect rather than fight the subjugated group.

"AIs would certainly be entitled to the products of their own labor. But there would be many machines that are not intelligent and would really just be tools (e.g. like xerox machines and 3d printers). These would be used by both humans and AIs to do work. Would they still be considered slaves?"

Then what good are they to us? If the question is "should we build AI," there is an implied follow-up of "what's in it for us?" The only obvious advantage is that the AIs would take over doing the work for us, and we could relax and enjoy life, pursuing whatever endeavors we wish. If AIs are treated as humans and entitled to the products of their own labor, they are competition for us rather than an aid.

"I think a good case can be made that the lack of sentience of bees"

I'm not sure you can make a good case to establish that bees lack sentience.

"It is OK to enslave biological and mechanical "things". It is not ok to enslave biological nor mechanical persons."

I'd agree, but I think we fundamentally disagree on one critical point: "person" is a synonym for "human," or at least human derivative; if something is not human, it is not a person. For instance, chimps came up recently: if a human mates with a chimp, the offspring is potentially a person, but a chimp certainly is not, just as a corporation certainly is not.

"What if a person creates children for the purposes of slave labor"

People don't create children. Children are not our creations.

Comment Re:programming (Score 1) 417

"I would differ with the thought that there would be no ethical constraints. Particularly if the AI can pass the Turing test, I think it would be clear that the AI should be afforded all the ethical protections that a normal human might have where applicable."

The real answer is probably somewhere in between. The fact is that the AIs aren't human, and moreover we will be their creators. We have the right to turn them off by virtue of having turned them on. We are in effect "God" to these beings we will have created. The lord giveth and the lord taketh away. But just because we have the right to do whatever we please doesn't mean we shouldn't exercise that right through a filter of empathy.

"I think that expectation will disappear once people actually stop working and let their machines do the work."

Sure, but consider this in relation to the point above. There will be those who argue that AIs deserve the rights and treatment of humans, in which case the machines themselves would be entitled to the products of their own labor. I would contend that if their creator created them for the purpose of being slave labor, they exist for that purpose.

Comment Re:programming (Score 1) 417

"We (humans) have a thing we like to call consciousness/free will/self determination/etc. I'm not event going to try to define those things in a way that implies whether we really have it or not, or just an illusion of it, etc."

Fair enough.

"I never said we needed to come up with the "answer" a priori. We could simply make a whole bunch of AIs (emergent ones similar to ourselves), and keep the ones that have the properties we want, akin to something like breeding animals. This has been a pretty standard scientific methodology for quite some time. Put a bunch of stuff together, see what happens. Remove something from a system, and see how it breaks. We will be "figuring out" things in this way probably long before we are able to purposefully make anything without trial and error. That doesn;t mean we won't be able to do it eventually."

Okay, but now you are back to what I said in the first place. In fairness you've added an evolutionary algorithm to it (assuming you mean an automated form of "breeding"), but yes, we are back to training, convincing, and controlling the environment. All I was saying is that once we have a true AI, this is what we have left, the same as with a human or animal. Without having the answer a priori we can't dictate its behavior in an absolute manner the way we can with most programs.
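
To make the "breed and select" idea concrete, here's a minimal sketch of that kind of evolutionary loop in Python. Everything in it is invented for illustration: the fitness function is a stand-in for "has the properties we want," and a real AI candidate would obviously be far more than three numbers.

    import random

    TARGET = [0.8, 0.2, 0.5]  # hypothetical "properties we want"

    def fitness(candidate):
        # Higher is better: negative squared distance from the target traits.
        return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

    def mutate(candidate, rate=0.1):
        # Random variation, like mutation in animal breeding.
        return [c + random.gauss(0, rate) for c in candidate]

    def breed(a, b):
        # Uniform crossover: each trait comes from one parent at random.
        return [random.choice(pair) for pair in zip(a, b)]

    population = [[random.random() for _ in range(3)] for _ in range(50)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]  # keep the ones we like
        population = survivors + [
            mutate(breed(random.choice(survivors), random.choice(survivors)))
            for _ in range(40)
        ]

    best = max(population, key=fitness)
    print(best, fitness(best))  # best candidate found by "breeding"

Nobody hand-wrote the winning candidate; we only defined the selection pressure and let trial and error do the rest, which is exactly the "figuring out without a priori answers" being described.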

"This is not what I meant. The AI would still be the one solving the problems in whatever clever ways occur to it (and not to us, hence the reason for the AI). I was only talking about inserting the motivation for solving these problems in such a way that the AI thinks it is the one that wants to solve the problems."

I see what you mean. But since we aren't actually writing the instructions per se, we can't necessarily feed the AI motivations directly, any more than we can with a person or animal. But we certainly won't be able to do it any less than with a person or animal either. In fact, we can do more of it, because we won't have the ethical constraints. For instance, once we've developed this AI we could find what corresponds to a reward signal within it, monitor its memory, and generate a certain tone every time this reward signal is strongly present. Presumably, just like any human or animal brain, it will form a positive association with that signal; it will be so used to reward signals being paired with the tone that it will generate reward signals when it hears the tone. Think clicker training with a dog. We'll also want to build in some constraint that the system needs (or wants) and is physically incapable of providing for itself.
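
As a rough illustration of that clicker-training idea, here's a hypothetical sketch. Every name in it (read_reward_signal, play_tone, the threshold) is made up, and locating "whatever corresponds to a reward signal" in an emergent AI is exactly the hard part this glosses over.

    REWARD_THRESHOLD = 0.8  # invented cutoff for a "strongly present" signal

    def read_reward_signal(agent_state):
        # Placeholder for probing the AI's memory for its reward signal.
        return agent_state.get("reward", 0.0)

    def play_tone():
        print("*click*")  # stand-in for generating the tone

    def conditioning_step(agent_state):
        # Pair the tone with naturally occurring strong rewards, so the
        # agent comes to associate the two -- clicker training for an AI.
        if read_reward_signal(agent_state) >= REWARD_THRESHOLD:
            play_tone()

    for state in [{"reward": 0.3}, {"reward": 0.95}]:
        conditioning_step(state)  # only the second state triggers the tone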

We have a distinct advantage with this sort of thing relative to actual living creatures, because we don't need an fMRI or anything of the sort; this brain will live within a computer's memory. We can probe it and modify it at will, and also take snapshots of its state and restore that state at will.
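
The snapshot-and-restore part really is that simple once the "brain" is just data in memory. A minimal sketch, using a plain dict as a toy stand-in for the AI's state:

    import copy

    brain_state = {"weights": [0.1, 0.2, 0.3], "reward": 0.0}  # toy stand-in

    snapshot = copy.deepcopy(brain_state)  # checkpoint the running brain

    brain_state["weights"][0] = 9.9  # probe and modify it at will
    brain_state["reward"] = 1.0

    brain_state = copy.deepcopy(snapshot)  # roll back to the checkpoint
    print(brain_state)  # original state, restored exactly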

The biggest roadblock to AI I see in the short term is that a true AI isn't particularly profitable. We can already create systems that perform the same function; we simply have babies. Additionally, when it does become profitable (human workers can be replaced with AI workers), the way our economy currently works that will just create massive unemployment and poverty. At some point we'll have to let go of the expectation that people should need to perform work to gain and utilize wealth.

Comment Re:choice AND accountability (Score 1) 1051

That's not how taxes work. You are paying taxes on revenue because a certain portion of every dollar earned is a loan against the public services required to generate the underlying wealth that dollar represents, and you, as the one who ended up with the benefit, owe that debt. That's why the people who produce the most wealth but end up with the least pay less, and the people who generate the least wealth but end up with the most pay more. Otherwise all the public services required by all the people who did the work to generate your revenue wouldn't get paid for; or worse, you'd get the benefit AND they'd be subsidizing your share of the debt.

In the case of the property taxes used to pay for schools, it's the same concept, just more localized. You aren't paying for the school because you send your child to it or because you believe in it. You are paying for the school because the police who prevent a local gang from taking over your home and raping your wife and children went to public school and will be sending their children to it.

See, they get to have their children educated; you get to not have your property stolen and your family raped and murdered by everyone stronger than you. Which services you personally take advantage of is completely irrelevant to whether or not you owe your share of the cost, because you use services that are provided by people who need those other services.

Comment Get rid of both exemptions (Score 1) 1051

Just require vaccination regardless of religion and/or philosophy.

Also, ditch the age requirement for taking the GED. What kind of sense does that make? If a 5-year-old can pass the test and get a major head start, LET HIM. A GED is a way to establish that you've met the state requirements for a high school diploma, not a last chance for dropouts. People using it to shortcut high school is a good thing and saves tax dollars.

Comment Re:Hail Caesar! (Score 1) 341

Science is a branch of philosophy; the scientific method was devised by a philosopher. The philosophy says that one cannot begin from logic and discover what is real and what is not, because things which are not real can still be logical. It says that anything that can't be observed and/or interacted with objectively either does not exist or has the same net result as not existing, since it cannot interact with you.

"If there is a scientific basis for ethics, what is it?"

You seem to be working from the assumption that there can exist a basis for ethics, one that forms a logical basis for any decision or action, outside of science. Such a basis could not be observed or interacted with objectively, and therefore we logically should act as if it doesn't exist.

There is a basis for ethics in science. That basis is found within evolution. Evolution provides a basis for self-interest and survival of the fittest. Ethics provide a common framework of consideration for others, both those which are similar to yourself (and thereby an extension of "self") and those which can cause positive or negative consequences for your self-interest. Other humans fall into both categories. Two humans united are stronger than one. There are more other humans than there are of you, so common consideration yields support, where a lack thereof results in united negative opposition.

Everyone seems to assume that self-interest doesn't equate to consideration for others, because they draw the illogical conclusion that self-interest means short-sighted action. Taking the long view, and working from the assumption that positive interaction, even at the expense of immediate obvious gain, results in a greater overall benefit in the form of support from others, is still self-interest.
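
That long-view argument is essentially the lesson of iterated game theory. Here's a toy iterated prisoner's dilemma sketch (the payoff values are the usual textbook ones, not anything from this thread): defection wins a single encounter, but cooperators paired together end up far ahead.

    PAYOFF = {  # (my move, their move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(their_history):
        # Cooperate first, then mirror the opponent's last move.
        return their_history[-1] if their_history else "C"

    def always_defect(their_history):
        return "D"  # the short-sighted strategy

    def play(strategy_a, strategy_b, rounds=100):
        score_a = score_b = 0
        hist_a, hist_b = [], []  # each side's record of the other's moves
        for _ in range(rounds):
            move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            hist_a.append(move_b)
            hist_b.append(move_a)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual consideration
    print(play(always_defect, tit_for_tat))  # (104, 99): defector "wins," poorly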

It follows that it makes no sense to cause another creature to suffer without purpose, and that in any instance where there is a purpose, any objective purpose, the potential benefits must be weighed against the negative consequences. Thus far the benefits of supporting non-humans don't correlate with any notable group-dynamic consequences (positive or negative) from the non-humans themselves. The only reasons we have to support them are group dynamics among other humans (much like respecting religious beliefs) and personal subjective concerns like amusement, attachment due to anthropomorphizing, etc. So the potential benefit required to override ethical considerations of the non-humans themselves is very small.

Providing chimps with rights provides humans with no foreseeable benefits, short or long term. All it does is potentially hinder valuable research that would benefit both humans AND chimps overall. It's a bad idea. The only way it becomes worthwhile to pretend it is a good idea is if enough humans anthropomorphize chimps and illogically decide it's a good idea. Even then it will be concern for the interests of other humans, and not chimps, that makes it so.

Comment Re:programming (Score 1) 417

"Are *we* making our own choices? Our neurons are obeying the laws of physics."

True, but don't forget that our models of physics != physical reality. It's all built on logic and math, all of which depends on the axiom that 1 = 1... which is a bit of a problem, because nothing in physical reality actually matches that assumption. In the reality we've observed, no two things are exactly the same in all respects; therefore there aren't REALLY two of anything, so quantity is invalidated. All things are in constant change, and an instantaneous comparison does not exist. These things can only appear to exist if you limit your frame of reference, which makes sense, because they arose from the limited frame of reference available to ancient humans looking at reality.

Pull three apples off a tree and stick them in a bag. I do the same. We both have 3 apples; if we combined them we'd have six. Makes sense, unless... I picked mine a couple of years later, or you consider that no two apples are the same, so I might have more apple than you or vice versa, and combining them would give a different result than doubling what either of us had.
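
For what it's worth, the apple example is easy to make concrete: counting treats the two collections as interchangeable, while any measurement that respects their differences does not. (The masses below are invented.)

    my_apples = [150.0, 162.5, 148.2]    # grams each, hypothetical
    your_apples = [210.1, 198.7, 205.3]  # grams each, hypothetical

    # Counting says we have equal amounts, and combining doubles them...
    print(len(my_apples), len(your_apples), len(my_apples + your_apples))  # 3 3 6

    # ...but by mass, no two "equal" apples were equal in the first place.
    print(sum(my_apples), sum(your_apples))  # 460.7 vs 614.1 grams of "apple"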

"also program them to feel like they are the ones deciding"

In order to program them to feel like anything, we first have to be able to program something that is self-aware. Without the ability to use independent thought to assess how you feel, you don't have the ability to feel any particular way about anything. Our brain is just a machine composed of relatively simple interacting pieces, like ants, but our programming is the result of a lifetime of uncountable interactions with external stimuli, each and every one of which changes the state of that machine, so that the exact same interactions would leave it in a different state if they occurred again.

The thing we MIGHT have a chance of programming is a system that is capable of emerging into a similarly complex, aware, and thinking program, one capable of forming opinions and feelings in response to that same sort of programming. We have pretty much zero chance of fathoming and writing the program that is the RESULT of all that interaction. Think of it as comparable to building a nuclear bomb. We control how it works; we can control how big the explosion is and where we set it off. But that explosion is going to be huge, and within the blast will be trillions of molecules impacted by it; we can't even begin to calculate all those individual interactions. Over the time the radiation left behind lasts, it will impact dramatically more, and we definitely can't calculate and predict those interactions.

What we hope we can do is build the bomb and set off a chain reaction that looks enough like the one in our brains that all those trillions and trillions of interactions between it and external stimuli result in something that is self-aware. We have pretty much zero chance of even predicting all those interactions, let alone dictating them in a way you'd call "programming."

"but maybe we can just make programs that do what we want and also program them to feel like they are the ones deciding"

That describes almost every program we write. If we have to do the thinking for them, it would be much easier to skip all the middle layers of abstraction. An "AI" chat bot built this way would just be an intercom system.

Comment Re:programming (Score 1) 417

We've already got that. A calculator does that. Being able to evaluate subjective criteria, choose objectives subjectively, and reach its own conclusions on the best way to accomplish those objectives is what it means to be an artificial intelligence.

"faster processing abilities and deep databases"
"do what it is programming to do"

That is nothing more than a bigger and better calculator with more clever algorithms. It will do what it's programmed to do, but if that programming is anything other than a completely functional implementation of "figure it all out" without us having to define what "it all" is, we haven't built a true AI.

You could redefine AI as simply a digital intelligence, or merely a smart computer, but that isn't the amazing goal most of us are talking about when we say a true AI. We are talking about a piece of software that is BOTH at least as intelligent as a human AND self-aware, a system capable of ambitions, imagination, and dreams. We don't have the capability to engineer that, and there is no logical reason to think an objective set of instructions can ever be used to engineer a subjective processor. However, it might be possible to use objective instructions to build the platform that a subjective processor emerges from.

We can control its environment and the dependencies innate to the underlying objective platform. But just as with people and intelligent animals, by definition it ultimately makes its own decisions, and those are merely tools that can be used to influence the choices IT MAKES. Anything else would mean we are making the choice and not it, and that wouldn't be a true AI.

Comment Re:good (Score 1) 341

"By denying chimps some basic rights"

Remember, first, that we are talking about rights, not considerations or protections, and that we aren't talking about SOME rights; we are talking about equating chimps to human beings. That means chimps born in the US are citizens, for example; they have the right to marry and to vote; killing one is murder; putting one in a zoo is kidnapping; they have a right to trial; etc.

But your logic isn't sound. Either the humanzee is a distinct species, in which case whether or not we recognize chimps as having rights equal to humans' will neither grant nor deny humanzees those rights, or it is seen as a hybrid of two species and not a distinct species, in which case it will inherit rights from its human parent.

It would still be logical to group all creatures which are our offspring as human derivatives with human rights, while denying everything that is not our offspring.

I've never met a Chiman (it's sort of like when a white guy and a black girl have a baby: the white people call it black and the black people call it white, so the chimps would call the offspring a Humanzee, and we'd call it a Chiman). So I reserve judgment on what level of consideration or protection I'd support for one, but I wouldn't favor considering one to have human rights. I'm confident the Chiman will reserve for itself the right to disagree, and just like the Chimp, I welcome its counterarguments.

Comment Re:good (Score 1) 341

"Why do you think this is at all hypocritical?"

Lab A tests on animals. Lab B sells cruelty free products.

The hypocrite wants to continue using cosmetics without supporting the animal testing needed to make sure those products won't hurt them. The result is not that, once Lab B appears, nothing new ever gets developed. Nor is it that Lab B tests on people; that would be too great a risk. Instead, Lab A continues to test on animals, and Lab B reads the results of that testing and produces cosmetics based on Lab A's results.

Even if Lab B were genuinely making products without testing rather than utilizing the results of Lab A's research, you won't find these activists stepping up to be the ones to test them. As soon as one of those cruelty-free consumers got skin cancer from their untested skin cream, they'd be champing at the bit to sue. And they would be right to do so; not testing on animals first is a foolhardy and reckless risk to take with human health and life.
