Part of something I posted in 2000 to Doug Engelbart's "Unfinished Revolution II" colloquium touching on corporations as "AIs":
========= machine intelligence is already here =========
I personally think machine evolution is unstoppable, and the best hope for humanity is the noble cowardice of creating refugia and trying, like the duckweed, to create human (and other) life faster than other forces can destroy it.
Note, I'm not saying machine evolution won't have a human component -- in that sense, a corporation or any bureaucracy is already a separate machine intelligence, just not a very smart or resilient one. This sense of the corporation comes out of Langdon Winner's book "Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought".
You may have a tough time believing this, but Winner makes a convincing case. He suggests that all successful organizations "reverse-adapt" their goals and their environment to ensure their continued survival. These corporate machine intelligences are already driving for better machine intelligences -- faster, more efficient, cheaper, and more resilient. People forget that corporate charters used to be routinely revoked for behavior outside the immediate public good, and that corporations were not considered persons until around 1886 (that decision perhaps being the first major example of a machine using the political/social process for its own ends).
Corporate charters are granted supposedly because society believes it is in the best interest of *society* for corporations to exist. But when was the last time people were able to pull the "charter" plug on a corporation not acting in the public interest? It's hard, and it will get harder when corporations don't need people to run them.
I'm not saying the people in corporations are evil -- just that they often have very limited choices of actions. If corporate CEOs do not deliver short term profits, they are removed, no matter what they were trying to do. Obviously there are exceptions for a while -- William C. Norris of Control Data was one of them -- but in general, the exception proves the rule. Fortunately though, even in the worst machines (like in WWII Germany) there were individuals who did what they could to make them more humane ("Schindler's List" being an example).
Look at how much William C. Norris http://www.neii.com/wnorris.ht... of Control Data was ridiculed in the 1970s for suggesting the then radical notion that "business exists to meet society's unmet needs". Yet his pioneering efforts in education, employee assistance plans, on-site daycare, urban renewal, and socially-responsible investing are in part what made Minneapolis/St. Paul the great area it is today. Such efforts are now being duplicated to an extent by other companies. Even the company that squashed CDC in the mid 1980s (IBM) has adopted some of those policies and directions. So corporations can adapt when they feel the need.
Obviously, corporations are not all powerful. The world still has some individuals whose wealth equals that of major corporations. There are several governments that are as powerful as or more so than major corporations. Individuals in corporations can make persuasive pitches about their future directions, and individuals with controlling shares may be able to influence what a corporation does (as far as the market allows). In the long run, many corporations are trying to coexist with people to the extent they need to. But it is not clear what corporations (especially large ones) will do as we approach this singularity -- where AIs and robots are cheaper to employ than people. Today's corporation, like any intelligent machine, is more than the sum of its parts (equipment, goodwill, IP, cash, credit, and people). Its "plug" is not easy to pull, and it can't be easily controlled against its short term interests.
What sort of laws and rules will be needed then? If the threat of corporate charter revocation is still possible by governments and collaborations of individuals, in what new directions will corporations have to be prodded? What should a "smart" corporation do if it sees this coming? (Hopefully adapt to be nicer more quickly. :-) What can individuals and governments do to ensure corporations "help meet society's unmet needs"?
Evolution can be made to work in positive ways, by selective breeding, the same way we got so many breeds of dogs and cats. How can we intentionally breed "nice" corporations that are symbiotic with the humans that inhabit them? To what extent is this happening already as talented individuals leave various dysfunctional, misguided, or rogue corporations (or act as "whistle blowers")? I don't say here that the individual directs the corporation against its short term interest. I say that individuals affect the selective survival rates of corporations with various goals (and thus corporate evolution) by where they choose to work, what they do there, and how they interact with groups that monitor corporations. To that extent, individuals have some limited control over corporations even when they are not shareholders. Someday, thousands of years from now, corporations may finally have been bred to take the long term view and play an "infinite game".
========= saving what we can in the worst case =========
However, if preparations fail, and if we otherwise cannot preserve our humanity as is (physicality and all), we must at least adapt with grace, preserving or somehow embodying in future systems whatever of our best values we can. So, an OHS/DKR to that end (determining our best values, and strategies to preserve them) would be of value as well.