Submission: Is research into recursive self-improvement becoming a safety hazard? (foommagazine.org)
Gazelle Bay writes: One of the earliest speculations about machine intelligence was that, because it would be made of much simpler components than biological intelligence—source code instead of cellular tissue—a machine would have a much easier time modifying itself. In principle, it would also have a much easier time improving itself, and therefore improving its ability to improve itself, potentially leading to exponential growth in cognitive performance, or an 'intelligence explosion,' as envisioned in 1965 by the mathematician Irving John Good.
Recently, this long-envisioned objective, called recursive self-improvement (RSI), has started to be publicly pursued by scientists and openly discussed by AI corporations' senior leadership. Perhaps the most visible sign of this trend is that a group of academic and corporate researchers will host, in April, the first formal workshop explicitly focused on the subject, at the International Conference on Learning Representations (ICLR), a premier conference for AI research. In their workshop proposal, the organizers state they expect more than 500 attendees.
However, prior to these recent discussions, RSI was often—but not always—seen as posing serious risks from the AI systems that executed it. These concerns typically focused less on RSI itself and more on its consequences, such as the intelligence explosion it might (hypothetically) generate. Were such an explosion not carefully controlled, or perhaps even if it were, various researchers argued that it might not preserve the values or ethics of the system, even while bringing about exponential improvements to its problem-solving capabilities—thereby making the system unpredictable or dangerous.
Recent developments have therefore raised questions about whether the topic is being treated with a sufficient safety focus. David Scott Krueger of the University of Montreal and Mila, the Quebec Artificial Intelligence Institute, is critical of the research. "I think it's completely wild and crazy that this is happening, it's unconscionable," said Krueger to Foom in an interview. "It's being treated as if researchers are just trying to solve some random, arcane math problem ... it shows you how unserious the field is about the social impact of what it's doing."