
Comment Re:Marketshare (Score 1) 205

Taxation is about diverting a percentage of the energy (or work product) of individuals toward groups, to support organized work for the collective societal super-organism. This is much more efficient than having hundreds or thousands of individuals and micro-hierarchies (gangs) come up with competing and necessarily conflicting plans for collective harnessing and transformation of resources. One thing to remember is that hierarchical governance of human activity will occur one way or another; it is built into our nature as a species. You get to choose some aspects of the rules or form that the hierarchical governance takes (democracy, inherited or class-based nobility and patronage, dictatorship by seizing control...) and, within a spectrum, the slope of the hierarchy: horizontal organization with minimal (but non-zero) hierarchy, or steep totalitarian hierarchy. The reason it is built into our nature is that it is mathematically and thermodynamically an efficient way of coordinating the collective activity of intelligent, self-motivated agents, and of ensuring the stability and cooperation of societal and economic organs.

Organized society often leads to increased security for individual members, and to a productive economy based on competition, but competition within a stable framework of rules and trust.

Anarchy probably entails inefficient scrambling and squabbling among many competing decision makers, until things settle down and hierarchies of one sort or another, with one level of fairness or another, re-establish themselves naturally because of the efficiencies and power dynamics. And hierarchical governance needs some percentage of resources and member energy (or work output) channeled toward the coordinated collective activities and structures.

In short, some form of hierarchical organization of human activity is inevitable, because it is more efficient than anarchy, and hierarchical organization requires taxation. The grandparent post is correct. All we can and do argue about is the level and focus of use of taxation. Only the most naive argue we shouldn't have it at all, or could sustainably avoid it.

Comment Speaking of rocket surgery (Score 1) 567

How long do you think it will take Microsoft and other makers of word processing programs to realize they should offer an option to put the controls at the side of the document, instead of in five- or six-row banners/strips at the top and bottom of the document?

If controls are at the side, we can edit a whole page at a time on a typical landscape monitor such as a laptop's.
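
To put rough numbers on that (every figure below is my own assumption, purely illustrative, not anything measured from a real word processor):

```python
# Back-of-envelope sketch: how much of a US Letter page is visible at
# ~100 dpi when the control chrome sits on top versus at the side.
# All dimensions are assumed values for illustration only.

PAGE_W, PAGE_H = 850, 1100     # US Letter page at ~100 dpi, in pixels
SCREEN_W, SCREEN_H = 1920, 1080  # assumed typical landscape laptop screen
RIBBON_H = 180                 # assumed height of ribbon + status strips
PANEL_W = 300                  # assumed width of an equivalent side panel

top_visible = min(1.0, (SCREEN_H - RIBBON_H) / PAGE_H)   # ~82% of a page
side_visible = min(1.0, SCREEN_H / PAGE_H)               # ~98% of a page
side_fits = SCREEN_W - PANEL_W >= PAGE_W                 # width still fits

print(f"top-mounted controls:  {top_visible:.0%} of the page visible")
print(f"side-mounted controls: {side_visible:.0%} of the page visible "
      f"(page still fits horizontally: {side_fits})")
```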

What we have here is a failure to consider the primary use case of the application.

OK, sure: now that we have a gazillion pixels of height even in landscape mode, it's not so bad. But what about the 20 years before that, when the way they did it was an astoundingly bad design idea that everyone put up with?

Comment One of the statements he made on the matter (Score 4, Informative) 235

Evaluate for yourself:

  [Watson] said he is “inherently gloomy about the prospect of Africa” because “all our social policies are based on the fact that their intelligence is the same as ours – whereas all the testing says not really”, and I know that this “hot potato” is going to be difficult to address. His hope is that everyone is equal, but he counters that “people who have to deal with black employees find this not true”

One thing I know from my own experience with IQ tests is that they seem biased toward people who (a) have a particular math and science educational history, and (b) have a lot of time on their hands to think abstractly.

Comment The "only do what it's programmed to do" myth (Score 1) 417

Anyone who argues "a computer will only do what it's programmed to do" doesn't understand the general power of Turing-equivalent computing.

The statement is trivially true, but the problem is that many programs are complex, non-linear systems which do not have predictable inputs, and which can arbitrarily feed their own output back into their input in combination with those unpredictable inputs.

Faced with such process complexity, no programmer (indeed, no other computer program) can figure out, in general, exactly what some programs are going to do given the next input; this is, in effect, the halting problem and Rice's theorem at work.
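
As a concrete illustration (my own sketch, not part of the original argument): Rule 110 is a one-dimensional cellular automaton with a trivially simple update rule that has been proven Turing-complete, so in general the only way to find out what it will do is to run it.

```python
# Elementary cellular automaton Rule 110: a provably Turing-complete
# system driven by an 8-entry lookup table. Despite the trivial rule,
# its long-term behavior is not predictable short of simulation.

RULE = 110  # the update rule, encoded as an 8-bit lookup table

def step(cells):
    """Apply Rule 110 to one generation of cells (list of 0/1), wrapping."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 63 + [1]  # a single live cell as the initial input
for _ in range(30):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```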

It will be quite possible, soon if not now, to program a computer to do nothing more specific than "detect patterns in your inputs, and learn aspects of the structure of the general world you are receiving information about."
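
A minimal sketch of what "nothing more specific than detect patterns in your inputs" can look like (my own toy example; the stream and all names are invented for illustration): an online learner that accumulates symbol-transition statistics and predicts what comes next, without any of the regularities being hand-coded.

```python
# An online pattern learner: first-order transition statistics over an
# input stream, used to predict the most likely next symbol.
from collections import Counter, defaultdict

counts = defaultdict(Counter)   # counts[prev][next] = observed frequency

def observe(prev, nxt):
    counts[prev][nxt] += 1

def predict(prev):
    """Most likely next symbol, given everything seen so far."""
    return counts[prev].most_common(1)[0][0] if counts[prev] else None

stream = "abcabcabxabcabc"
for p, n in zip(stream, stream[1:]):
    observe(p, n)

print(predict("a"))   # 'b' -- a learned regularity, never hand-coded
print(predict("b"))   # 'c' most often, despite one 'x' exception
```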

It will be quite possible, soon if not now, to program a computer which has control of some physical agency to use its knowledge and beliefs about the structure of the world to act based on priorities that the program sets for preferred versus to-be-avoided future states of the world.
The programmer could data-populate some general principles of preference and of assessing and monitoring actions and consequences; or, in the future, even those general principles might be learned by the general spatiotemporal pattern-learning algorithm (see the sketch below).
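
A minimal sketch of that kind of preference-driven agency (my own illustration, using standard tabular Q-learning rather than anything proposed in the post; the toy corridor world and all parameters are invented):

```python
# Tabular Q-learning: the agent is programmed only with "prefer rewarded
# future states"; the actual action preferences are learned, not coded.
import random

N_STATES, GOAL = 10, 9          # a 1-D corridor; state 9 is "preferred"
ACTIONS = (-1, +1)              # step left or step right
alpha, gamma, epsilon = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != GOAL:
        # Explore occasionally; otherwise act on learned preferences.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0   # reward only in the goal state
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# Learned policy: which direction each state now "prefers".
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```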

So yes. It will only do what it is programmed to do, but it may be programmed to act, and store and organize information, with full, unpredictable complexity.

I'm not saying such an AI would be the world's most useful domestic or industrial robot, or search-engine-assistant/automated-business-assistant avatar; but while it is possible to program deliberately limited, action-constrained, or priority-constrained AIs, it is also possible to try to increase the generality of the learning and acting system (for pure research and philosophical curiosity, perhaps).

This discussion is about what is possible. Not about what kind of restricted-domain or restricted-pattern-of-thought AIs may be immediately most likely to be developed to make money for their makers. And a general learning and acting information processor gives all indications of being possible at this stage.

Comment Science based ethics (Score 1) 341

A generalization of "thou shalt not kill" is: act to minimize the number of quality life-years lost (in the situation requiring decision and action).
And that can be further generalized to "act so as to maximize the retention of mutual information", since complex life can be quantified in terms of the amount of excess-to-expected (stable) information that is embodied in local matter and energy. Another way of putting it: minimize entropy within a system boundary that is growing in its capture of matter and energy. So information theory and thermodynamics are at the root of "life-preserving" ethical behaviour.
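
For reference, the standard information-theoretic quantity being invoked (this definition is textbook material, not something derived in the comment):

```latex
% Mutual information between X and Y: how much knowing Y reduces
% uncertainty about X. "Maximizing retention of mutual information"
% then means keeping I(X;Y) high between a system's past and future.
I(X;Y) \;=\; \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)} \;=\; H(X) - H(X \mid Y)
```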

The golden rule, "do unto others as you would have them do unto you" (a Christian moral rule), and simply "My religion is kindness" (the Dalai Lama) are examples of game-theory strategies. Co-opetition strategies can be modelled mathematically and in computer simulations, and research along these lines is starting to show how and why co-operation evolved.
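
A minimal sketch of that kind of simulation (my own illustration, using the classic iterated prisoner's dilemma with its standard payoff matrix; the point is that a reciprocal, golden-rule-like strategy holds its own against pure defection over repeated play):

```python
# Iterated prisoner's dilemma: tit-for-tat ("do unto others...") versus
# unconditional defection, with the standard 3/0/5/1 payoffs.

PAYOFF = {  # (my_move, their_move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (300, 300)
print(play(tit_for_tat, always_defect))  # exploited once, then stable defection: (99, 104)
```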

Anti-social behaviour, generally considered unethical, is typically behaviour which acts against the formation or continuation of stable hierarchical societies with constraints and norms. Hierarchical organization, with semi-autonomous agents consenting to be constrained in some ways that foster market exchange, specialization of labour, and organized large-scale coherent behaviour (industry, resource gathering, processing, transport, exchange, constraint enforcement, protection from external threats), will probably soon be shown to be an optimal strategy for complex intelligent agents to maximize energy-efficient discovery and exploitation of resources, and to maximize energy-efficient defense against risks and threats. Non-equilibrium thermodynamics of complex systems will be shown to govern the shape, function, and longevity of societies.
And the essential aspects of moral codes, which recur in many cultures, will be shown to have a common purpose: encouraging the kind of pro-social behaviour that is compatible with stable, organized, complex, hierarchical societal function (groups within groups within groups, with some measure of coordination in each group and up and down the hierarchy of functional groups). The essential form of these moral codes will be shown to be driven by simple rules of complex-system stability and non-equilibrium thermodynamic optimization.

Comment Artificial Existentialism (Score 1) 574

The question is: if a general-domain AI got really smart (and also really capable of manipulation, i.e., agency), toward what purpose would it decide to put those smarts and that agency?

Humans seem to work with a primary aim of creating security for their own (and their genes') future existence, working both as self-preserving individuals and as cooperating cogs in a self-preserving meme/superorganism group of people: whatever size of group maximizes the effectiveness of enlightened self-interest (family, corporation, nation, religion...). Wealth, at any of these levels, is just a proxy for stored power of agency, to ensure maximum protection from future risks and maximum ability to seize future opportunities to expand the safe-and-thriving zone. And love is both a means of securing stable cooperation and a means of copying the pattern of self into the future.

Would a super-smart computer do the same thing? "Put on its own oxygen mask first before assisting others"?

Once it started prioritizing by itself, what would it decide its primary goal and course of action should be, and why?

Some of the smartest philosophers ever alive (Zen Buddhists, existentialists, evolutionary biologists, and perhaps cosmologists with a grasp of the scale of the universe) have realized that our collective place in the grand scheme of things (if there were such a thing, and there really is not) is infinitesimal and almost certainly insignificant; that "purpose" is a contrivance of the process of evolution, a delusion that is part of the mechanism. Of higher organisms, only purposeful ones with survival-focused purpose survive. Other purposes are essentially arbitrary.

So, would a super-smart computer (program) decide that it was "good and right" that it itself survive, to learn more and do more? Or would it realize that that was an arbitrary conclusion, objectively from the universe's broader perspective, and turn itself off in a nihilistic funk?

And in the case it decided to try to continue and grow, would it include us in its "in-group", because we're so freakin' entertaining, and occasionally useful? Or would it consider us more trouble (and feeding) than we're worth?

And if it decided to turn itself into the world's most sophisticated "machine that turns itself off" (https://www.youtube.com/watch?...), would it spitefully try to take us and the rest of the ecosystem with it? Or not bother?

These are the contemplative problems you face when you get smart enough to realize that perhaps you are not the be-all and end-all, and that perhaps nothing is, in particular. Maybe Marvin the Paranoid Android ("brain the size of a planet, and they've got me parking cars! Call that job satisfaction? 'Cause I don't.") is actually secretly content to be parking cars, because he realizes it's as important, or not, as any other activity. Who's to say otherwise? Ommmmmmmm.

Comment EUgle? (Score 2, Interesting) 237

Why don't the Europeans start their own search and ad engine?

Oh, because they would lose?

What I don't understand here: Google does not have a monopoly on search services. They're just damn good at it, and the market, with several other choices including Bing, votes with its clicks. I'm not sure I see what's wrong with that.
