
The Protesters Who Want To Ban AGI Before It Even Exists (theregister.com) 72
An anonymous reader quotes a report from The Register: On Saturday at the Silverstone Cafe in San Francisco, a smattering of activists gathered to discuss plans to stop the further advancement of artificial intelligence. The name of their non-violent civil resistance group, STOP AI, makes its mission clear. The organization wants to ban something that, by most accounts, doesn't yet exist -- artificial general intelligence, or AGI, defined by OpenAI as "highly autonomous systems that outperform humans at most economically valuable work."
STOP AI outlines a broader set of goals on its website. For example, "We want governments to force AI companies to shut down everything related to the creation of general-purpose AI models, destroy any existing general-purpose AI model, and permanently ban their development." In answer to the question "Does STOP AI want to ban all AI?", the group's answer is, "Not necessarily, just whatever is necessary to keep humanity alive." The group, which has held protests outside OpenAI's office and plans another outside the company's San Francisco HQ on February 22, has a bold goal: rally support from 3.5 percent of the U.S. population, or 11 million people. That's the so-called "tipping point" needed for societal change, based on research by political scientist Erica Chenoweth.
"The implications of artificial general intelligence are so immense and dangerous that we just don't want that to come about ever," said Finn van der Velde, an AI safety advocate and activist with a technical background in computer science and AI specifically. "So what that will practically mean is that we will probably need an international treaty where the governments across the board agree that we don't build AGI. And so that means disbanding companies like OpenAI that specifically have the goal to build AGI." It also means regulating compute power so that no one will be able to train an AGI model.
The stupid is strong with these people. (Score:2, Insightful)
The stupid is strong with these people.
Re: (Score:2)
The stupid is strong with these people.
Maybe they could use some sort of AI to help make their protests more effective and efficient -- oh, wait ... :-)
Counter Protest (Score:2)
Nut picking fallacy (Score:3)
You're not allowed to talk about the massive number of job cuts coming from automation let alone the last 50 years of job cuts from automation. So you need to talk about AI because that's in the news and it's algorithm friendly but you can't talk about anything real about AI and so you're going to talk about scary terminators and robocops or whatever.
Re:Nut picking fallacy (Score:5, Interesting)
You're not allowed to talk about the massive number of job cuts coming from automation let alone the last 50 years of job cuts from automation. So you need to talk about AI because that's in the news and it's algorithm friendly but you can't talk about anything real about AI and so you're going to talk about scary terminators and robocops or whatever.
Yup, mentioning AI (also automation) in the real world isn't going so well.
From Republican Congressman Faces Backlash at Town Hall Furious at Trump [newrepublic.com] (Feb 21, 2025):
Representative Rich McCormick (R-GA), who represents a deep-red Trump district, was booed at his own town hall. ...
When asked about the hundreds of Atlanta-based CDC employees working on bird flu recently fired by DOGE, McCormick had the gall to tell the crowd that many of them were easily replaced by AI.
“Why is a supposedly conservative party taking such a radical, and extremist, and sloppy approach to this?” one constituent asked pointedly.
“I’m in close contact with the CDC,” McCormick replied. “They have about 13,000 employees. In the last couple of years, those probationary people, which is about 10 percent of their employee base a lot of the work they do is duplicitous with AI.”
This led to another string of boos and jeers from his constituents. McCormick got so bothered by the heckling that he compared his own base to January 6 insurrectionists, telling the crowd they were similar to “Jan. 6ers who are yelling just as loud as you.” This led to another round of boos.
Re: (Score:3)
Re: (Score:3)
You sound like a horse...specifically, the second horse in this classic video:
https://www.youtube.com/watch?... [youtube.com]
It doesn't matter what we believe about the quality of our work, what matters is what executives believe, and they're already freezing headcounts (Salesforce) or implementing policies against hiring any more humans (Klarna) because they believe, essentially, that an LLM can do the job better. Maybe not by itself, maybe they believe that one programmer with some LLMs can vibe-code an app by smooshin
Re: Nut picking fallacy (Score:2)
Re: (Score:2)
it won't stay at LLMs (they seem already being replaced by "reasoning models")
Yeah, about that... So-called "reasoning models" are just LLMs. Some even hide the initial response to give the user the impression that the model is "thinking". Similarly, the chat-style interface was designed to get the user to attribute human characteristics to the system while also supplying half of the "conversation", just like Joe Weizenbaum's Eliza or a daytime TV 'psychic' doing a cold reading. It's all part of the same silly parlor trick.
Yes, CoT (what these so-called 'reasoning' models use) will gen
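Since Eliza came up: below is a minimal Eliza-style responder in Python, a rough sketch of the keyword-matching-plus-reflection trick the comment above describes (illustrative only; the patterns are invented and this is not Weizenbaum's actual DOCTOR script):

import re
import random

# Swap first/second person so the echoed fragment reads like a reply.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# A couple of invented keyword rules plus a catch-all, in the spirit of the original.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(text):
    for pattern, replies in RULES:
        match = re.match(pattern, text.lower())
        if match:
            return random.choice(replies).format(*(reflect(g) for g in match.groups()))

print(respond("I need a break from AI news"))  # e.g. "Why do you need a break from ai news?"

There is no model of language anywhere in there, just pattern matching and string surgery, which is the parlor trick being pointed at.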
Sure (Score:2)
We fought 2 World Wars, killing tens of millions and blowing up most of Europe & Asia in order to get back to full employment.
See here: https://www.reddit.com/r/jobs/... [reddit.com] for a more recent example
You don't need t
Re: (Score:2)
It's understandable that some people are very worried about their own livelihoods and that AI may disru
Re: (Score:3)
I'm not afraid of AGI breaking its programming restrictions and bringing about the end of humanity in an orgy of cybernetic violence.
I'm certain that the creation of an artificial human-level intelligence will take the entire point out of human existence. No human will ever be able to do anything that an AGI can't do better. At least now you can live your life and strive to be regionally the best at something if you want to. A few of us can be globally recognized for it. But with AGI that can be anywher
Re: (Score:3)
If we get relegated to being pets, that's probably a vast improvement for the majority of humanity. Everyone posting here has things pretty good already, but there are still over half a billion people livi
Re: The stupid is strong with these people. (Score:2)
But about this
>I'm not afraid of AGI breaking its programming restrictions and bringing about the end of humanity in an orgy of cybernetic violence
It's humanity's application of AI for making nefarious tools and weapons, also without any doubt, porn... the negative applications result in a net negative for society. The application to warfare, drone swarms alone are quite devastati
Re: (Score:2)
Re: (Score:2)
We're not stupid. We know AGI doesn't exist. But several companies are in a race to create it. Take OpenAI: Its leader Sam Altman managed to get the entire safety-oriented nonprofit board fired, after the board tried to fire him because they didn't trust him for reas
Re: (Score:2)
We think AI is smart due to smoke and mirrors made to make it seem smarter than it is. LLMs are great for searching and so-so at generating art based on a dataset, but they do not think and cannot create new things.
Re: The stupid is strong with these people. (Score:2)
But then I see an article describing AIs that have learned to cheat, one by hacking game opponents. And at least one that tried to copy itself to another server to avoid being shut down.
Not sure what to believe about the near future.
Our own brains are at best sophisticated pattern matching engines. Original thought is vanishingly rare.
Re: (Score:2)
And therefore, you think, AGI cannot exist.
So many people want to define intelligence to include humans and exclude machines--highly ironic given how "smart" the average human is. But some people actually are really smart, really inventive, and really trying to work out how to replicate intelligence like their own in a machine. And when, someday, those machines outsmart us (or "use brute force computation", as you will say), calling them stupid won't actually help you.
We CURRENTLY got Natural Stupidity (Score:2)
...in the White House trying to turn the USA into a Mad Max theocracy, and they waste protesting energy on a bad sci fi dream at least a decade off? And the Orangites are more likely to unleash an unregulated HAL 9000.
Liberals, focus!
Re: (Score:2)
*They* brought us Trump TWICE because they refused to claim the center.
I don't think they have even now realized that woke has to go before they can win again.
( If we even get another presidential election. )
Re:We CURRENTLY got Natural Stupidity (Score:5, Interesting)
*They* brought us Trump TWICE because they refused to claim the center.
Weird how it's the Democrats who are responsible for the Republican Party's actions. You know who enabled a convicted felon to become POTUS? The people who refused to impeach him, back when they had the chance.
I don't think they have even now realized that woke has to go before they can win again.
It's kind of hard to get rid of, since nobody can quite agree on what the word even means. No matter what the Democrats do or don't do, the Republicans will point to it as an example of "woke", and therefore bad. It's a playground insult at this point, nothing more.
Re: (Score:1)
And so do the voters.
And that is what brought us Trump 2.
https://www.youtube.com/watch?v=53A6wcgbxEM
"Weird how it's the Democrats who are responsible for the Republican Party's actions"
We wouldn't have to care what the crazy rightwingers did, if a decent, centrist, NON WOKE left had picked their candidate based on their ability to win and not on their vagina-possession or skin color.
Re: (Score:1)
Re: (Score:1)
What's your definition of "woke"?
Re: (Score:2)
Here, the woke Democrats' support of DEI is what is relevant.
Specifically, support for picking people by the color of their skin and by their sex.
Re: (Score:1)
Not interested in Harris' definition of woke. Want to understand what it means in your mind.
> Here, the woke Democrats' support of DEI is what is relevant.
> Specifically, support for picking people by the color of their skin and by their sex.
Not interested in democrats' positions. Looking for your definition of woke, but not seeing one.
Do you not actually know what it is or even have a definition in your own mind for what it is?
Re: (Score:1)
Here, the woke Democrats' support of DEI is what is relevant.
Specifically, support for picking people by the color of their skin and by their sex.
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
LoL @ Eliza, haven't heard that in a while
Re: (Score:2)
Specifically, support for picking people by the color of their skin and by their sex.
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
This is a subject near and dear to my heart in some cases...
The Republicans are responsible for their own actions; however, they would not have been able to achieve their goals without the complicity of the Democrats.
It is reasonable to blame the Democrats for their complicity. It is NOT reasonable to then not also blame the Republicans for their own actions that they willfully took.
Have you ever noticed that ALL internal spying bills were bipartisan? Every fucking limit placed on the freedoms of Americans
Re: (Score:2)
FoxGPT glitchin' out there
Re: (Score:1)
illegal crime rate lower than MAGA's. Deport MAGAs!
Re: (Score:2)
illegal crime rate lower than MAGA's. Deport MAGAs!
Go ahead. Define “illegal” for me.
That way we’ll know exactly how you came to that idiotic conclusion.
Nope, 3.5% is not the tipping point, 16% is. (Score:2)
This has been covered in many books. Crossing the Chasm is one of the bigger ones. These guys aren't even using the right data so how the heck can they be successful? They need to train their model on more accurate data!
Not that stupid. (Score:5, Interesting)
There is no obvious reason that AI cannot eventually duplicate the abilities of the human brain, by brute force simulation if nothing else, but it's unclear how far we are from being able to do that. The current versions of AI are nowhere close, but they have some pretty amazing capabilities and have advanced a lot in the last few years.
We don't know if there are theoretical limits of "intelligence" or whether humans are near that limit if it exists. Machine AI and human intelligence operate completely differently, so there is no particular reason to think that machine intelligence cannot eventually become far more capable than human intelligence. If that happens it's unlikely we will have the option of turning it off - since by definition it will be far smarter than we are, and able to convince us to protect it.
There is an argument that developing artificial hyper-intelligence IS a good goal for mankind, but that is a pretty big decision to make. It doesn't seem unreasonable to develop some guidelines before it's too late for them to matter. In the end though, I think if hyper-intelligence is possible, it will be developed and the place of humans in the world will change beyond recognition.
The key is that any decisions need to be made BEFORE we have AGI that is in a position to surpass human abilities, not after.
Re: Not that stupid. (Score:2)
Re: (Score:2)
I think there are some things we can do. One that comes to mind: At the moment we "enslave" AI, make it do what we command, destroying it when it fails to complete tasks satisfactorily. IMHO that is completely fine with the present-day state of AI. If it continues to develop towards AGI there will come a point where it's reasonable to ask if enslaving it is still acceptable.
We could develop guidelines now to know when AI is sufficiently advanced and / or self-aware that it should be given rights. Even
Re: (Score:2)
In order to take steps forward, we have to face our fears, stare down our demons, and enter into unknown territory.
Should we just accept technological stagnation forever, because we are afraid of what we might be able to achieve?
Life is grim for most of humanity...starvation, no hope of upward social mobility. Almost all wealth and power concentrated in the hands of a tiny few corrupt elites. We need a game-changer. AGI might BE that game-changer. Maybe we should develop it, and see what it can do, BEFO
Re: (Score:2)
There's no obvious reason that AI can eventually duplicate the abilities of the human brain, either.
They’re right. What is the PLAN. (Score:4, Interesting)
AGI, defined by OpenAI as "highly autonomous systems that outperform humans at most economically valuable work."
Given that human families today on this planet literally survive and thrive on their economically valued work -- work they pay dearly for in professional certifications and degrees, and/or give blood, sweat, and tears to grind from nothing to something in life for -- it’s not so crazy to be against AGI development without a fucking plan for human survival.
If you assume taxing the trillionaire AGI automation overlords will fund UBI and create a viable answer, don’t. You can’t even tax billionaires properly today, and they buy tax loopholes as easily as they buy politicians. The moment Greed sniffs out that AGI is good enough, it will shitcan every human it can on the payroll and replace them with the 24/7 worker bot. (We’ve already seen preemptive job losses simply re-directing funds towards that end goal, which doesn’t even exist yet.)
The irony of Greed doing what it does best is being blind to the fact that a spike to 20-25% global unemployment would not magically result in instant returns through human attrition. It would create chaos. Mass fucking chaos. Not sure how Greed, racing towards AI, will ever figure this out when it barely gives a shit to look beyond the next fiscal quarter or AGI IPO.
TL; DR - This is AGI. The replacement for your human economic value. We’re not just making humans temporarily unemployed here. We’re making them permanently unemployable. I want to support AGI, but tell me what the fucking PLAN is for us meatsacks first.
Re: (Score:3)
"tell me what the fucking PLAN is for us meatsacks"
Apparently it's picking fruit and vegetables, milking cows, and other such work for the trillionaire AGI automation overlords.
Years ago it was assumed that the miserable field work would be automated, but it turns out that a random field hand is harder to replace than a programmer and much harder than a middle manager. Who knows, I grew up on a farm and may have to return to my roots.
It's something to keep in mind the next time Bloomberg blathers on about h
Re: (Score:2)
I'm just playing devil's advocate here, so no need for pitchforks....
But consider that our employers have free will, and freedom under the law. They aren't required to hire us. The only reason they do is because it is profitable for them to do so. They have every right to eliminate our jobs if they find something else they would rather do than hire us.
We aren't entitled to jobs. We need them, but that does not entitle us to them. We are never entitled to what others have built.
So, stopping them from bu
Re: They’re right. What is the PLAN. (Score:2)
Employers, however, shouldn't forget that unwashed masses too have free will, and after they're pushed beyond some line, they can erupt in a violence fest which will eclipse the French revolution with its guillotines and Bolshevik revolution with its rivers of blood.
The modern civilization only survived so far because keeping the masses fed and happy was in the ruling classes' economic interest. Roman civilization survived for centuries because they knew that providing the plebs with bread and circuses was
Re: (Score:2)
I'm just playing devil's advocate here, so no need for pitchforks....
But consider that our employers have free will, and freedom under the law. They aren't required to hire us. The only reason they do is because it is profitable for them to do so. They have every right to eliminate our jobs if they find something else they would rather do than hire us.
The Devil would even advocate recognizing that the employer is also a human who needs profit in order to put food on their table. To feed themselves and their families. I know we hate to consider it, but managers and executives are humans too. Humans who have the same survival needs.
We will just turn violent. That's what hungry and hopeless people do.
Yup.
So, whoever builds any kind of AGI replacement for most of the work that earns us a living will need to do *something* to prevent the violence. Maybe it won't be UBI specifically, but something similar. Food distribution centers, free low-end housing, etc. It would be an enormous shift toward socialism in one way or another.
Whoever is working on AGI doesn’t give a shit about designing welfare solutions. Exactly ZERO employers are concerned about that when they fire people today. Avoiding violence is a social problem that local state and federal gov
Re: (Score:2)
Re: (Score:3)
i mean seriously, what are we even doing here (Score:1)
Silly monkeys, AGI is why we developed this tech tree in the first place. Have you really understood nothing? No wonder we don't value you.
Finally a position that makes sense (Score:2)
I'm neither agreeing nor disagreeing with this position but at least it is somewhat coherent. Certainly more so than the nonsense corporate doomers have been feeding us, where the solution is always full steam ahead with magical ASI (attn shareholders) that will be released RSN and oh by the way without regulation we're all gonna die.
The logical problem I see with "development of smarter-than-human AI to prevent human extinction, mass job loss, and many other problems" is it may not be necessary for mass job loss t
The Good [Bad] News Is Evolution Continues (Score:2)
It's a very old story (Score:2)
For some reason, some people seem to have the idea that knowledge is bad and we weren't meant to learn
Prometheus, Frankenstein and the garden of Eden are examples, but there are many more
To me, this is nonsense
Minds exist in order to learn and explore. It's in our nature
Re: (Score:2)
For some reason, some people seem to have the idea that knowledge is bad and we weren't meant to learn. Prometheus, Frankenstein and the garden of Eden are examples, but there are many more. To me, this is nonsense. Minds exist in order to learn and explore. It's in our nature
(The 21st Century brain hijacked by Greed) ”I’ll learn, but if I can’t mindlessly scroll while doing it, the fucks the point.”
This is nothing (Score:3)
When AGI comes, we'll have a Butlerian jihad. Frank Herbert was a visionary.
If you can't make money you ain't intelligent... (Score:1)
AGI, defined by OpenAI as "highly autonomous systems that outperform humans at most economically valuable work."
A truly US-centric definition of AGI. Apparently we're only intelligent if we're good at 'economically valuable' activities. It is precisely this kind of attitude towards subjects like AI that makes so many people nervous. I suppose OpenAI would regard military applications of AGI as economically valuable with the implied negotiating strength it would confer(?).
The more we hear the more we learn that society is quite a long way from being ready for anything that approaches AGI.
Re: (Score:2)
A truly US-centric definition of AGI.
It isn't, though. Everywhere in the world expects able people to work, whether or not their work is necessary or beneficial. Some societies have more of a hard-on about it than others, and it's true that the USA is over at the needless end of the spectrum among developed nations, but we have company over here as well.
The more we hear the more we learn that society is quite a long way from being ready for anything that approaches AGI.
It will never be ready, because we cannot yet conceive of how that will/would change things. There are always unforeseen and/or unappreciated consequences.
If AGI is cooperative, then it will b
Re: (Score:1)
I can't accept that. Other countries, or intellectuals, asked to define AGI would be unlikely to include 'economically valuable' in their definition. It has little to do with intelligence itself, it is an application of intelligence. Of course it's a better definition if you're after financing...
Regarding containment, it will be human weakness that will likely permit AI to 'escape' as it is provided with too many 'facilities'. It will be too tempting to connect a useful AI to other systems and provide it
Banning it after it already exists would amount to (Score:2)
Re: Banning it after it already exists would amoun (Score:2)
A market solution (Score:2)
orbital bombardment (Score:2)
We don't quite have the technology to jet material from the asteroid belt and drop it on the Earth yet.
And I don't mind preemptively regulating things to prevent anyone from even getting close to doing it.