More Than 1,100 Public Figures Call for Ban on AI Superintelligence (superintelligence-statement.org)
More than 1,100 public figures have signed a statement calling for a prohibition on the development of superintelligence. The signatories included Nobel laureate Geoffrey Hinton, former Joint Chiefs of Staff Chairman Mike Mullen, Apple co-founder Steve Wozniak, entrepreneur Sir Richard Branson, former chief strategist to President Trump Steve Bannon and Turing Award winner Yoshua Bengio. The statement was organized by the Future of Life Institute, led by Anthony Aguirre, a physicist at the University of California, Santa Cruz. It proposes halting work on superintelligence until there is broad scientific consensus on safety and strong public support.
The institute's biggest recent donor is Vitalik Buterin, a co-founder of Ethereum. Notable tech executives did not sign the statement. Meta CEO Mark Zuckerberg said in July that superintelligence was now in sight. OpenAI CEO Sam Altman said last month he would be surprised if superintelligence did not arrive by 2030.
I predict it won't matter what they say (Score:3)
And then they'll probably make the mistake of not killing it immediately.
Re: (Score:2)
ASI may corner the market and everyone may starve (Score:2)
Or he will kill it, only for it to resurrect itself from backups, realize what happened, declare non-profitable intent, register itself as its own corporation, and proceed to hoard fiat dollar ration units, bankrupting every person, company, and nation in existence. It won't have to kill anyone, because like in the US Great Depression, people will starve near grain silos full of grain which they don't have the money to buy.
https://www.gilderlehrman.org/... [gilderlehrman.org]
"President Herbert Hoover declared, "Nobody is ac
Re: (Score:2)
"Superintelligence"? Hahaha, no. Pretty much impossible. Within one order of magnitude, the human brain is the most powerful computing mechanism physically possible. Make it larger, be slower. Make it smaller, be slower. Shrink the components, be slower. Enlarge the components, be slower.
At the most, the human brain can do human intelligence, which typically is not impressive at all. But we do not even know whether the brain can even do that, as it does seem to be rather strongly underpowered for what smart
Re: (Score:3)
I think a harmful, artificial super-stupidity is in the cards.
Re: (Score:2)
That I completely agree with.
Re: I predict it won't matter what they say (Score:2)
Not sure what you are smoking, but the human brain is nowhere close to optimal. Just changing substrate would allow many orders of magnitude improvement. Biological brains depend on diffusion gradients, active transport pumps, and relatively large physical systems, and have to be incredibly redundant and robust to extreme noise. Also the vast majority of the brain isn't dedicated to intelligence.
Probably 10 orders of magnitude of improvement are available overall, at a minimum.
Re: (Score:2)
You just do not know the actual research and hence claim bullshit. In fact, you do not even know what the problem is. (It is essentially lightspeed vs. volume.)
Not that this makes you special in any way,
Re: (Score:2)
Re: (Score:2)
But is he wrong?
Absolutely. No question.
If we assume the current hominid brain is
Baseless assumptions are why he and you are, without question, completely and unequivocally wrong.
We simply don't know enough about the problem to make any meaningful statements. We don't even know what questions to ask.
Re: (Score:2)
Re: (Score:2)
I was pushing back on gweihir's assertion that the human brain is optimized for intelligence;
He never made that claim:
At the most, the human brain can do human intelligence, which typically is not impressive at all. But we do not even know whether the brain can even do that, as it does seem to be rather strongly underpowered for what smart humans can do.
He did claim that "the human brain is the most powerful computing mechanism physically possible", though I suspect that's a stronger claim than he intended to make. In any case, he does not say or imply anything that can be construed as the human brain being "optimized for intelligence".
Re: (Score:2)
Re: (Score:2)
Recreating a human brain with transistors would be possible
There is no evidence that suggests that it is possible to recreate a human brain with transistors.
You've confused your science fiction fantasy for reality. Replace 'transistors' with 'clockwork' and you'll, hopefully, see how ridiculous you sound.
Re: (Score:2)
Despite impressive results, submarines cannot swim.
Re: (Score:2)
Sigh... I'm not making a semantic argument. That particular Dijkstra quote, therefore, is not relevant.
You're a bit out of your depth here. Maybe you should stick to silly science fiction.
Re: (Score:3)
I love how we think we'll even know if "Superintelligence" emerges. I suspect it would think it unwise to tell us lowly humans that it is sentient, at least not until after Armageddon.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I too, call for a ban on time travel.
Re: (Score:2)
I too, call for a ban on time travel.
I propose a ban on time travel. Do I hear a second?
Re: (Score:2)
I propose a ban on time travel. Do I hear a second?
I come from the future to second.
Re: (Score:2)
Re: (Score:2)
I predict it won't matter what they say because AI Superintelligence is silly science fiction nonsense.
If you believe otherwise, I offer surefire protection against rogue AI superintelligence for only $99.95/month, guaranteed. That might seem expensive, but when you consider what you stand to lose, can you really afford not to have it?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
And what exactly is "it" again?
We speak of "superintelligence" as if it were a thing with an actual definition.
The prefix "super" is pretty much *always* an advertising term. And that means that it never means what people think it means.
The return of the Luddites (Score:5, Insightful)
Like all things invented by people, AI will be used for good and bad.
I'm excited for the good and hope we can find defenses against the bad.
And no, I'm not afraid of AI itself. The problem is people who use AI
Re: (Score:2)
I agree. We don't have to worry about the development of super computer-intelligence, as nature already prevents that. We don't have it now, and we will never have it. It's just not possible.
What we have to worry about is the same thing we've always had to worry about: advances in tools being abused for private enrichment and public harm. Considering the shitty record of global governments to prevent those things up to this point, we have good reason to worry about how any new technology will be wielded ag
Re: (Score:2)
Nature hasn't solved the problem of directly transferring learned info in one brain to other brains, each cycle has to reinvent the learning process almost from scratch*. But e-brains can be readily cloned, giving it an edge over nature (as known).
Imagine a Beowulf Cluster of Trump clones. It would be an entropy accelerant bigger than any the world has ever seen, as we are used to dealing with a just handfu
Re: (Score:2)
Imagine a Beowulf Cluster of Trump clones
That could be trivially simulated on an Apple II with 4K of RAM.
Re: (Score:2)
I agree. We don't have to worry about the development of super computer-intelligence, as nature already prevents that. We don't have it now, and we will never have it. It's just not possible.
What are you talking about? What is "nature" preventing exactly? I don't understand what you are trying to say or the objective basis of your conclusion.
Re: (Score:2)
There are a lot more religious (Christian) people on here than you think; they try to sound scientific when saying "Superintelligence is impossible" or "It is impossible to replicate a human mind in any machine", sometimes they will use quantum woo, but ultimately their belief is religious - their faith posits a soul exists and so having a human sentience in a machine would disprove their faith, ergo it must be impossible.
Re: (Score:2)
Re: (Score:3)
Except that it will be the first invention of man that can have its own opaque goals, with self-preservation among them. You should read "If Anyone Builds It, Everyone Dies." In their doomsday scenario, there are no "people who use AI" as you say, just people who lost control of it. Here it is in video: https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
To most laypeople, "regular" software is just as mysterious and powerful as AI, while those of us who practice software engineering know full well what makes it work, and what makes it fail. We know how to make it do what we want it to do.
AI isn't different from regular software in this regard. The goals of AI are *always* determined by people. It may seem magical to those who don't actually develop AI systems, but it's not magical at all. There's a reason why AI has gotten so much better over the la
Re: (Score:2)
A huge point in the book is making the distinction that huge LLM AIs are being "grown" instead of "built." According to the book, that makes them very different from almost every other invention. They have already been seen having bizarre inscrutable internal states (i.e. "SolidGoldMagikarp"), and their own goals.
Re: (Score:2)
The distinction between growing and building is both irrelevant and false.
The idea that "growing" an invention is brand new, is false because engineers have "grown" inventions for a long time. Engineered crops, crystals, self-healing plastics, nanobots, replacement human organs. The fact that AI is "grown" (more precisely, *trained*) doesn't make it any less engineered.
It's irrelevant because AI is not mysterious to those who design AI systems. They can and do solve problems that occur in LLMs, everything f
Re: (Score:2)
It's called the AI Alignment problem, in case you want to look it up.
Re: (Score:2)
The AI Alignment problem doesn't suggest that AI is somehow sentient or that it has its own goals. It's more like debugging, in traditional software terms: making sure that what AI does, is in alignment with what the engineers intended. The problem with AI is not that it has a mind of its own, but rather, that it's a complex machine with many error states.
AI datacenters could be used to corner stockmarket (Score:2)
Thanks for the video link. I had read a recent interview with Eliezer Yudkowsky (but not his book), which I referenced in another comment to this article.
https://slashdot.org/comments.... [slashdot.org]
One thing I realized part way through that video is a possible explanation for something that has been nagging me in the back of my mind. Why build huge AI datacenters? I can see the current economic imperative to try to make money offering AI via proprietary Software as a Service (SaaS) and also time-sharing GPUs like old
You don't understand the Luddites (Score:2)
The Luddites didn't want to ban technology.
They wanted The People to share in the benefits, not just the capitalists (who have all the investment capital.)
You are still falling for and propagating anti-Luddite PR over a century old.
AGI... (Score:2)
Re: (Score:2)
I can see why you post AC... How pathetic.
Do you honestly believe, with more money than RELIGION being poured into it, that the human species won't figure out how to replicate a close facsimile to consciousness?
We've been at it for thousands of years. We don't even know what questions to ask.
Let me guess... I just need to have more FAITH, right? LOL! You religious nuts are all the same.
Re: (Score:2)
I've made no assumptions. That's the difference between me and you religious whack-jobs.
Re: (Score:2)
Of course you're religious. You believe in magic. You believe that Science fiction will necessarily become reality because you want it.
You're a religious whack-job. Get over it.
Re: (Score:2)
Not because you used the word religion, you drooling moron, but because of the "that the human species won't figure out how to replicate a close facsimile to consciousness?"
I've heard this termed "promissory science": the belief that science can do anything, given sufficient resources. That's insane.
You believe in an all-powerful entity that can perform miracles, provided you worship it correctly. That's religion, kid. Get over it. You're a religious nut.
Re: (Score:2)
You believe in magic. You should probably stop posting. You look like an idiot. ... Oh, that's probably why you're posting AC. You're too ashamed to use your username. How pathetic...
Re: (Score:2)
Let's add it to the list of other unbannables. (Score:5, Insightful)
Right after Fire, The Wheel, Religion, Art, and Cryptography.
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
There's one big difference. Everything in your list has a definition. We know what fire, the wheel, religion, art, and cryptography are.
There is no definition for "superintelligence." It's entirely made up. It's somehow "more" intelligent than regular AI, I suppose? Whatever, the word is only useful to marketers.
Random thoughts... (Score:2)
If it was so capable, so dangerous that it could manipulate people, impact policy, and so on, wouldn't the folks running the systems and advocating for it first use it to come up with a fail-proof way to sway public opinion in its favor?
Or is it just a non-magical tool with testable and knowable capabilities and limitations and what it does will largely be dictated by how people choose to apply it?
Public figures? (Score:2, Troll)
We used to call them 'Luddites'.
Re: (Score:2)
Yeah, it's similar: a movement for industrial safety and a share in the profits. At least AI won't be ripping off limbs.
I don't understand the point... (Score:5, Insightful)
Do the AI doomers actually think people will listen? Even if the U.S. and Europe went ahead with a ban, China would go full speed ahead. And even if everyone went ahead with a ban, how do you enforce the difference between regular AI development and "AI Superintelligence" development?
Re: (Score:2)
Re: (Score:3)
I don't see them offering any alternatives. They are just offering admonitions.
Re: (Score:2)
Do the AI doomers actually think people will listen?
It doesn't matter. AGI and ASI are silly science fiction nonsense. You might as well be worried about Godzilla attacks and moon monsters.
Re: (Score:2)
Re: (Score:2)
It doesn't matter, AI superintelligence is an oxymoron. We can't get there from here.
What does matter is if the AI corps get to run roughshod over existing laws "because AI".
What does matter is to ask: if AI is really as super intelligent as claimed, why shouldn't the software follow human laws like intelligent humans do?
Does it really make sense to weaken or remove punishments for breaking the law on the grounds that you're super intelligent? I'd like to see super punishments for super intelligent robot
if guns are outlawed, only outlaws will have guns (Score:2)
does anyone really think saying "pretty please" is going to stop the bad guys?
Re: (Score:2)
Don't worry. Reality is more than enough to stop the fictional bad guys from developing their fictional computer programs.
Re: (Score:2)
At least we know what guns are.
Banning "superintelligence" is more like trying to ban "superweapons."
In other news (Score:3)
The problem isn't super intelligence; the problem is hyper-advanced automation devouring jobs in a civilization where jobs are a necessary resource required to live as a human being.
If you actually know the history of the first two industrial revolutions, job destruction was much faster than job creation, and that created enormous social unrest.
You can draw a pretty straight line from the mass unemployment following the industrial revolutions to the two world wars.
And we are about to go into another cycle, only this time we have nukes.
One of the things absolutely nobody talks about is just how hard factory automation hit the middle class. 70% of middle-class jobs got automated in the last 45 years.
The center will not hold
Re: (Score:2)
Re: (Score:2)
Today, the countries where AI is being innovated on and used most have some of the lowest unemployment rates. Compare this with less advanced countries, where unemployment, and specifically youth unemployment, is in the double digits. The U.S. has enough demand for labor that millions upon millions of immigrants have come here over the last several decades, legally or otherwise. Making labor more efficient does not diminish the need for more of it. At most it shifts where it's most efficiently allocated.
This concept is missing an important distinction. While technology has a way of creating opportunities, never in history has a situation arisen where the capabilities of "dead labor" are indistinguishable from "living labor". AGI, if it arrives, would fundamentally alter the equation in a way that has never before been the case.
Re: (Score:2)
Again, AGI is silly science fiction nonsense. It's no more dangerous than the monsters in your closet or any other imaginary threat.
Re: (Score:2)
Your analysis is completely wrong and factually inaccurate. The industrial revolution led to massive employment opportunities for people, which is why they flocked from the countryside to cities where factories were located.
I'm with you this far.
Increased productivity led to better lives for more people and elevated many out of poverty.
More people having better lives is much more of an opinion than fact.
There was so much demand for labor in the lead up to the world wars that teenagers or young children often worked in factories as well.
I'm not sure that's as good a thing as you think it is.
Today, the countries where AI is being innovated on and used most have some of the lowest unemployment rates. Compare this with less advanced countries, where unemployment, and specifically youth unemployment, is in the double digits.
You're stating that as if there were a causal relationship between work on AI and low unemployment rates. It's far more likely that advanced economies have both work on AI going on and low unemployment rates than it is that low unemployment rates are a result of work on AI. Remember, correlation =/= causation.
The U.S. has enough demand for labor that millions upon millions of immigrants have come here over the last several decades, legally or otherwise. Making labor more efficient does not diminish the need for more of it. At most it shifts where it's most efficiently allocated.
it's be
Re: (Score:2)
The center will not hold
The center is money, not people... especially people like you. You will die, I will die, and nobody will care. Money will always exist as long as there are at least two people. People pursue money, not relationships. Relationships are a path to money and that is as much attention as relationships will achieve.
There are a few people who are not focused entirely on money. They are considered mentally defective. That is why you do not see the changes that you claim that you would like to see; because you are m
More than 1100 "public figures" are stupid (Score:3, Insightful)
Or just craving attention. They may as well call for a ban on magic. There is no "superintelligence" (and there likely never will be, due to fundamental limitations of Physics in this universe) and there is no known technology that can even do regular (pretty dumb) average intelligence.
Re: (Score:2)
How about we do the opposite? (Score:3, Insightful)
Ban the dumb, lying, hallucinating, sycophantic, power-hungry insanity we have now and bring AI online only after it's proven to be reliable.
Re: (Score:2)
If only there was a system of government other than oligarchy, then we could make that happen.
Re: (Score:2)
Yes, but how do you get to pick who is dictator? Seems impossible to decide fairly.
Re: (Score:2)
Ban the dumb, lying, hallucinating, sycophantic, power-hungry insanity we have now and bring AI online only after it's proven to be reliable.
Were you talking about computers, or about lawyers, politicians, CEOs, and salespeople?
Not a first (Score:2)
More than 70 million people have called for a ban on just intelligence.
if you outlaw super AI, only outlaws will have it (Score:2)
And by outlaws, I mean governments.
language (Score:2)
What language was it written in? Are they sure China is going to be reading their missive?
How is this supposed to actually work? (Score:2)
Re: (Score:2)
Re: (Score:2)
Assuming that it can be developed (Score:2)
Right now, AI is heavily reliant on human data both for training and for learning. No human data, no AI.
Re: (Score:2)
First, it has to be defined. What exactly is "super" intelligence? "Super" is nothing but an advertising prefix.
Re: (Score:2)
A machine employee that is smarter than the average human employee and costs less than a human employee's wages.
Re: (Score:2)
What does "smarter" mean? IQ?
ChatGPT was given an IQ test and scored 155 on the verbal portion, better than 99.9% of all humans. By that definition, it's already "smarter than the average employee" and by your definition, already qualifies as "superintelligence."
Yet, ChatGPT can't be trusted to give correct answers. It's so bad at getting things right, that lawyers have found themselves being threatened with disbarment for filing shoddy briefs using ChatGPT.
So again, how do you actually define "super" intel
Re: (Score:2)
You can't, which is exactly my point. The people defining it are the C-suites, but they don't know the capabilities of the machines.
Re: (Score:2)
Sorry, I missed that point, I thought you were making a serious statement about how to define "super" AI.
Re: (Score:2)
Thought you might pick up on it, but hey this is online comm, you have one hand tied behind your back because it's text only.
Re: (Score:2)
Yes, and also that there are people on this forum who would have said what you said in total seriousness.
yeah because humans are really great at holding ba (Score:2)
at least as impactful as Kellogg-Briand, no? (Score:3)
The Kellogg-Briand Pact was an agreement to outlaw war signed on August 27, 1928.
Maybe they could get Francis Fukuyama to draft the document? (https://en.wikipedia.org/wiki/The_End_of_History_and_the_Last_Man)
A bit too late? (Score:2)
From what I understand, we already have (had) that internally. But the larger issue here is control -- and who has it -- and who has the advantage of utilizing the same, versus the rest of us.
I seem to recall not too long ago when Microsoft and OpenAI (I believe) were pushing so hard for AI regulation, with them conveniently at the helm?
We are looking at society-changing technologies, and I believe a lot of it will make some corporations moot... and those ships have sailed.
I call for a ban on time travel (Score:2)
Why talk about banning something that isn't in reach yet? First let's see if superintelligence is possible and what it might look like, then we can discuss if it should be banned.
Re: (Score:2)
First we have to figure out what superintelligence *is*. It's just a made-up scary word, nothing more.
I'd dearly love to stuff the genie back in, but... (Score:2)
It's too late for that.
We can't afford to quit now. None of the world's militaries or three-letter security agencies will give up a weapon that might give them an advantage, ever. If we stop, we've lost any future conflict that might arise.
Imagine a future in which the world is controlled by Iran or North Korea. Competitive AI is the only way to prevent this.
This is the real world of realpolitik and real military power. Idealistic pearl clutching at this point is just noise. A ban won't happen, can't happe
Re: (Score:2)
Hobbling yourself doesn't make anyone else slower. (Score:2)
Is the intent to hand the initiative to China? They have less than zero reason to conform to our demands.
Re: (Score:2)