http://www.corporatecrimerepor...
Fines or imprisoning CEOs do little to change the pattern of relationships and values and policies that make an organization what it is, any more than a human body losing some skin cells or even brain cells usually changes how a person behaves very much.
Seriously, why should any corporate communications have any expectation of privacy? Corporations with "limited liability" are chartered for the public interest. 150 years ago, US Americans put such creatures on very short leashes because they had seen what trouble resulted from big British corporations in the American colonies. Individuals have now lost pretty much all informational privacy due to large corporations and the current internet. Why should bigger, more powerful creatures than humans, like corporations, have more privacy in practice than humans do? See also David Brin's "The Transparent Society". Any argument that corporations need privacy (like for salaries or payments for services) for some sort of commercial advantage is trumped by the public interest in understanding what corporations are doing, and by the fact that if all corporations were transparent there would be a level playing field. Granted, it would require new ways of doing business, but books like "Honest Business" also extol the value of "open books". Or perhaps corporations should be forced to choose -- if they want limited liability for shareholders, then they need to be transparent; if every shareholder accepts full responsibility for all actions of the organization, then they can have privacy?
And see also my comments from 2000, the relevant section copied below (sadly a lot of links there have rotted):
http://www.dougengelbart.org/c...
========= machine intelligence is already here =========
I personally think machine evolution is unstoppable, and the best hope
for humanity is the noble cowardice of creating refugia and trying, like
the duckweed, to create human (and other) life faster than other forces
can destroy it. [Well, I now in 2014 think there are also other options, like symbiosis, maybe friendly AI, and in general trying to be nicer to each other like with a basic income in hopes that leads to a happier singularity...]
Note, I'm not saying machine evolution won't have a human component --
in that sense, a corporation or any bureaucracy is already a separate
machine intelligence, just not a very smart or resilient one. This sense
of the corporation comes out of Langdon Winner's book "Autonomous
Technology: Technics out of control as a theme in political thought".
http://www.rpi.edu/~winner/
You may have a tough time believing this, but Winner makes a convincing
case. He suggests that all successful organizations "reverse-adapt"
their goals and their environment to ensure their continued survival.
These corporate machine intelligences are already driving for better
machine intelligences -- faster, more efficient, cheaper, and more
resilient. People forget that corporate charters used to be routinely
revoked for behavior outside the immediate public good, and that
corporations were not considered persons until around 1886 (that
decision perhaps being the first major example of a machine using the
political/social process for its own ends).
http://www.adbusters.org/magaz...
Corporate charters are granted supposedly because society believes it is
in the best interest of *society* for corporations to exist.
But, when was the last time people were able to pull the "charter" plug
on a corporation not acting in the public interest? It's hard, and it
will get harder when corporations don't need people to run them.
http://www.adbusters.org/magaz...
http://www.adbusters.org/campa...
I'm not saying the people in corporations are evil -- just that they
often have very limited choices of actions. If corporate CEOs do not
deliver short-term profits, they are removed, no matter what they were
trying to do. Obviously there are exceptions for a while -- William C.
Norris of Control Data was one of them, but in general, the exception
proves the rule. Fortunately though, even in the worst machines (like in
WWII Germany) there were individuals who did what they could to make
them more humane ("Schindler's List" being an example).
Look at how much William C. Norris http://www.neii.com/wnorris.ht... of
Control Data got ridiculed in the 1970s for suggesting the then radical
notion that "business exists to meet society's unmet needs". Yet his
pioneering efforts in education, employee assistance plans, on-site
daycare, urban renewal, and socially-responsible investing are in
part what made Minneapolis/St.Paul the great area it is today. Such
efforts are now being duplicated to an extent by other companies. Even
the company that squashed CDC in the mid 1980s (IBM) has adopted some of
those policies and directions. So corporations can adapt when they feel
the need.
Obviously, corporations are not all powerful. The world still has some
individuals whose wealth equals that of major corporations. There are
several governments that are as powerful as or more powerful than major
corporations. Individuals in corporations can make persuasive pitches
about their future directions, and individuals with controlling shares
may be able to influence what a corporation does (as far as the market
allows). In the long run, many corporations are trying to coexist with
people to the extent they need to. But it is not clear what corporations
(especially large ones) will do as we approach this singularity -- where
AIs and robots are cheaper to employ than people. Today's corporation,
like any intelligent machine, is more than the sum of its parts
(equipment, goodwill, IP, cash, credit, and people). Its "plug" is not
easy to pull, and it can't be easily controlled against its short-term
interests.
What sort of laws and rules will be needed then? If the threat of
corporate charter revocation is still possible by governments and
collaborations of individuals, in what new directions will corporations
have to be prodded? What should a "smart" corporation do if it sees
this coming? (Hopefully adapt to be nicer more quickly. :-) What can
individuals and governments do to ensure corporations "help meet
society's unmet needs"?
Evolution can be made to work in positive ways, by selective breeding,
the same way we got so many breeds of dogs and cats. How can we
intentionally breed "nice" corporations that are symbiotic with the
humans that inhabit them? To what extent is this happening already as
talented individuals leave various dysfunctional, misguided, or rogue
corporations (or act as "whistleblowers")? I don't say here that the
individual directs the corporation against its short-term interest. I
say that individuals affect the selective survival rates of
corporations with various goals (and thus corporate evolution) by where
they choose to work, what they do there, and how they interact with
groups that monitor corporations. To that extent, individuals have some
limited control over corporations even when they are not shareholders.
Someday, thousands of years from now, corporations may finally have
been bred to take the long-term view and play an "infinite game".