Comment In the (sadly) late Iain Banks Culture novels... (Score 1) 206
... Culture "Minds", drones, and humans/cyborgs all have privacy over what is in their own thoughts and memories. However, anything in a non-sentient "databank" is public to all (so externally stored communications or designs, in that sense, are publicly shareable). I'm just re-reading "Excession" (out loud to my kid), where Banks made that point. In the "Culture", Banks makes it clear that sentient beings of any sort (including typical drones) have a variety of rights related to independence. When I first read that, coming from an idea of free software and free culture, it seemed somehow strange or wrong that the AI "Minds" or drones would have that sort of privacy, but now it makes more and more sense to me, given the sort of issues raised in the article, including that there can be many times when the line between human and machine is blurred. But probably the deeper issue is what it means to have an advanced post-scarcity "Culture" where many of the citizens are entirely non-biological (like the AI "Minds" that run much of everything).
BTW, the original "R.U.R." story from 1920 (where the term "robot" came from) has almost exactly the same plot as you outline for BG.
http://en.wikipedia.org/wiki/R....
A lot of long-term robotics (like Asimo) is implicitly the quest for the ideal "slave". The question is, at what point does something acquire rights? In the USA and elsewhere, animals have had some legal rights (or at least laws to protect them) starting about 150 years ago, and I've heard that campaign eventually led to children having independent rights (on the logic of: why should a horse or dog have rights when a child does not?).
http://en.wikipedia.org/wiki/A...
http://www.nal.usda.gov/awic/p...
"The first national law to regulate animal experimentation was passed in Britain in 1876--the Cruelty to Animals Act of 1876. This bill created a central governing body that reviewed and approved all animal use in research. After that, there were numerous countries in Europe that adopted some regulations regarding research with animals. "
Also:
http://www.humanium.org/en/chi...
"At the beginning of the 20th century, children's protection starts to be put in place, including protection in the medical, social and judicial fields. This kind of protection starts first in France and spreads across Europe afterwards. Since 1919, the international community, following the creation of The League of Nations (later to become the UN), starts to give some kind of importance to that concept and elaborates a Committee for child protection."
However, going back to hunter/gatherer times thousands of years ago, many such cultures (from what remains of them) had at least an ethic of giving thanks to the larger "animal" kind (e.g. "Rabbit") you killed, thanking it for letting itself be killed so you might survive. But it's hard to know for sure what such cultures really believed day-to-day in all circumstances. And some such cultures had various sorts of slavery.
I don't know where the line is at which a mechanism (mechanical or electronic or photonic or fluidic or other) becomes self-aware, or even if that should be the line. Or at what point can a mechanism feel "pain" or "pleasure"? Is that ultimately a political and/or religious question?
http://www.rfreitas.com/Astro/...
http://en.wikipedia.org/wiki/R...
http://news.bbc.co.uk/2/hi/tec...
And also:
http://www.aspcr.com/
"We are the American Society for the Prevention of Cruelty to Robots, founded in 1999 in Seattle, Washington.
This article makes an insightful point:
http://www.bostonglobe.com/opi...
"In her 2012 paper, she quotes Immanuel Kant to the effect that a man shooting a dog "damages in himself that humanity which it is his duty to show toward mankind." So how we treat our robots will tell us volumes about ourselves."
Anyone who "owns" one or more slaves becomes a slave master. There is a certain social and psychological dynamic to being a slave master, and a lot of it is self-justifying righteousness about the need for consciously dispensing cruelty or reward to keep order, and for ignoring the pain or pleasure or hopes and dreams or social relationships felt by others who are defined as lesser beings, and for justifying taking almost everything that entity produces for ourselves. Do we as a global society really want to go there again in a big way? What are the consequences and how far would that sort of thinking spread? Of course, one might argue we are still very much in that mental space in the way we as a society (especially in the USA) relate to "wage slaves" (versus a "basic income"), to compulsory schooling, to the population of other countries "our" big corporations do business in, or to various ecosystems or billions of farm animals -- although I would like to think we are improving overall in some ways.
http://en.wikipedia.org/wiki/W...
http://www.whywork.org/rethink...
"Work makes a mockery of freedom. The official line is that we all have rights and live in a democracy. Other unfortunates who aren't free like we are have to live in police states. These victims obey orders or else, no matter how arbitrary. The authorities keep them under regular surveillance. State bureaucrats control even the smaller details of everyday life. The officials who push them around are answerable only to higher-ups, public or private. Either way, dissent and disobedience are punished. Informers report regularly to the authorities. All this is supposed to be a very bad thing. And so it is, although it is nothing but a description of the modern workplace. The liberals and conservatives and Libertarians who lament totalitarianism are phonies and hypocrites. There is more freedom in any moderately de-Stalinized dictatorship than there is in the ordinary American workplace. You find the same sort of hierarchy and discipline in an office or factory as you do in a prison or a monastery. In fact, as Foucault and others have shown, prisons and factories came in at about the same time, and their operators consciously borrowed from each other's control techniques. A worker is a part-time slave. The boss says when to show up, when to leave, and what to do in the meantime. He tells you how much work to do and how fast. He is free to carry his control to humiliating extremes, regulating, if he feels like it, the clothes you wear or how often you go to the bathroom. With a few exceptions he can fire you for any reason, or no reason. He has you spied on by snitches and supervisors, he amasses a dossier on every employee. Talking back is called "insubordination," just as if a worker is a naughty child, and it not only gets you fired, it disqualifies you for unemployment compensation. 
Without necessarily endorsing it for them either, it is noteworthy that children at home and in school receive much the same treatment, justified in their case by their supposed immaturity. What does this say about their parents and teachers who work? "
Sadly and ironically, the very technology that should be liberating more humans from drudgery in the workplace or "classroom" is instead being used to make such places even more controlling. As one very insightful comment on Slashdot said months ago (wish I had the link), essentially we were promised technology would liberate us with household robots and flying cars, but instead it is being used to enslave us with 24/7 surveillance. Some other thoughts on that by me:
http://pcast.ideascale.com/a/d...
"Now, there are many people out there (including computer scientists) who may raise legitimate concerns about privacy or other important issues in regards to any system that can support the intelligence community (as well as civilian needs). As I see it, there is a race going on. The race is between two trends. On the one hand, the internet can be used to profile and round up dissenters to the scarcity-based economic status quo (thus legitimate worries about privacy and something like TIA). On the other hand, the internet can be used to change the status quo in various ways (better designs, better science, stronger social networks advocating for some healthy mix of a basic income, a gift economy, democratic resource-based planning, improved local subsistence, etc., all supported by better structured arguments like with the Genoa II approach) to the point where there is abundance for all and rounding up dissenters to mainstream economics is a non-issue because material abundance is everywhere. So, as Bucky Fuller said, whether is will be Utopia or Oblivion will be a touch-and-go relay race to the very end. While I can't guarantee success at the second option of using the internet for abundance for all, I can guarantee that if we do nothing, the first option of using the internet to round up dissenters (or really, anybody who is different, like was done using IBM computers in WWII Germany) will probably prevail. So, I feel the global public really needs access to these sorts of sensemaking tools in an open source way, and the way to use them is not so much to "fight back" as to "transform and/or transcend the system". As Bucky Fuller said, you never change thing by fighting the old paradigm directly; you change things by inventing a new way that makes the old paradigm obsolete."
More ideas by others: http://www.ieet.org/
"The Institute for Ethics and Emerging Technologies is a nonprofit think tank which promotes ideas about how technological progress can increase freedom, happiness, and human flourishing in democratic societies. We believe that technological progress can be a catalyst for positive human development so long as we ensure that technologies are safe and equitably distributed. We call this a "technoprogressive" orientation. Focusing on emerging technologies that have the potential to positively transform social conditions and the quality of human lives - especially "human enhancement technologies" - the IEET seeks to cultivate academic, professional, and popular understanding of their implications, both positive and negative, and to encourage responsible public policies for their safe and equitable use. The IEET was founded in 2004 by philosopher Nick Bostrom and bioethicist James J. Hughes. By promoting and publicizing the work of international thinkers who examine the social implications of scientific and technological progress, we seek to contribute to the understanding of the impact of emerging technologies on individuals and societies, locally and globally. We also aim to shape public policies that distribute the benefits and reduce the risks of technological advancement. "