Issues with Group Ethics
Ethics are an individual thing. Choices about whether to pursue a technology or an idea are made by individuals who understand it, or the ideas behind it, well enough to see some potential new utility in it.
In contrast, institutionalized ethics as a group activity, in the form of ethics boards and impact committees, seems less useful. Medical ethicists argue over what's ethical now and what will be ethical later, but those views are fluid, shifting with their institutional affiliation, their government, and the exigencies of the moment (and likely the grant money available). It's simply too much to believe that technologists in other areas would be much different.
It's interesting to note that a product like Thalidomide, which became taboo in the early 1960s, has been rehabilitated for exploration and use in different medical contexts (treating leprosy complications and multiple myeloma, for example) some 50 years later. Presumably we now have a deeper and more mature understanding of what it does and how it works, but that implies someone was poking around with it in a different context all along.
Technologies tend to be pursued whether they are ethically blessed or not. Cheap virus cookers and nuclear ballistic missiles are pursued and perfected by governments, willing individuals, and any other group with the means and desire to make a controlling social impact. The only way to prevent (dangerous) technologies of this sort from spreading is to keep them secret. But the chances of any given technology staying hidden seem slim to none, given that:
- Governments aren't particularly secure organizations, especially in democratic countries, where everything outs eventually, if only for budgetary reasons,
- Serendipity and synchronicity among different researchers and groups will occur anyway (who invented calculus: Newton or Leibniz? who invented the computer: Babbage or Mauchly or Atanasoff?), and
- Private intellectual property rights are merely legal protections; they don't prevent others from reinventing a similar solution if they're willing to spend the time and energy.
Finally, it seems particularly naive to think that all the consequences of a technology can be determined and evaluated in advance by some well-meaning ethics body. Nature will have its way, and humans are far too creative to restrict themselves from exploring new or related ideas. "Unintended consequences" is a pretty good descriptor for quite a few of the major discoveries and disasters throughout history. Trying to eliminate them seems a fruitless task at best. Better to try to foresee the nastier ones and work aggressively to forestall them, or blunt their effects, before they turn from consequences into disasters. But that seems a dim hope as well.
For me, it comes back to individuals making smart choices about what's ethical to pursue, what's ethical to share and when, and whether a particular technology (warhead design, nanobots, gene sequencing, AI) is basically beneficial or basically malevolent in the broader set of social contexts within which we work.