
Slashdot Asks: How Do You Protest AI Development? (wired.com)
An anonymous reader quotes a report from Wired: On a side street outside the headquarters of the Department of Science, Innovation and Technology in the center of London on Monday, 20 or so protesters are getting their chants in order. "What do we want? Safe AI! When do we want it?" The protesters hesitate. "Later?" someone offers. The group of mostly young men huddle for a moment before breaking into a new chant. "What do we want? Pause AI! When do we want it? Now!" These protesters are part of Pause AI, a group of activists petitioning for companies to pause development of large AI models which they fear could pose a risk to the future of humanity. Other PauseAI protests are taking place across the globe: in San Francisco, New York, Berlin, Rome, Ottawa, and a handful of other cities. Their aim is to grab the attention of voters and politicians ahead of the AI Seoul Summit -- a follow-up to the AI Safety Summit held in the UK in November 2023. But the loosely organized group of protesters itself is still figuring out exactly the best way to communicate its message.
"The Summit didn't actually lead to meaningful regulations," says Joep Meindertsma, the founder of PauseAI. The attendees at the conference agreed to the "Bletchley Declaration," but that agreement doesn't mean much, Meindertsma says. "It's only a small first step, and what we need are binding international treaties." [...] There is also the question of how PauseAI should achieve its aims. On the group's Discord, some members discussed the idea of staging sit-ins at the headquarters of AI developers. OpenAI, in particular, has become a focal point of AI protests. In February, Pause AI protests gathered in front of OpenAI'sSan Francisco offices, after the company changed its usage policies to remove a ban on military and warfare applications for its products. Would it be too disruptive if protests staged sit-ins or chained themselves to the doors of AI developers, one member of the Discord asked. "Probably not. We do what we have to, in the end, for a future with humanity, while we still can." [...]
Director of Pause AI US, Holly Elmore, wants the movement to be a "broad church" that includes artists, writers, and copyright owners whose livelihoods are put at risk from AI systems that can mimic creative works. "I'm a utilitarian. I'm thinking about the consequences ultimately, but the injustice that really drives me to do this kind of activism is the lack of consent" from companies producing AI models, she says. "We don't have to choose which AI harm is the most important when we're talking about pausing as a solution. Pause is the only solution that addresses all of them." [Joseph Miller, the organizer of PauseAI's protest in London] echoed this point. He says he's spoken to artists whose livelihoods have been impacted by the growth of AI art generators. "These are problems that are real today, and are signs of much more dangerous things to come." One of the London protesters, Gideon Futerman, has a stack of leaflets he's attempting to hand out to civil servants leaving the building opposite. He has been protesting with the group since last year. "The idea of a pause being possible has really taken root since then," he says. According to Wired, the leaders of Pause AI said they were not considering sit-ins or encampments near AI offices at this time. "Our tactics and our methods are actually very moderate," says Elmore. "I want to be the moderate base for a lot of organizations in this space. I'm sure we would never condone violence. I also want Pause AI to go further than that and just be very trustworthy."
Meindertsma agrees, saying that more disruptive action isn't justified at the moment. "I truly hope that we don't need to take other actions. I don't expect that we'll need to. I don't feel like I'm the type of person to lead a movement that isn't completely legal."
Slashdotters, what is the most effective way to protest AI development? Is the AI genie out of the bottle? Curious to hear your thoughts.
"The Summit didn't actually lead to meaningful regulations," says Joep Meindertsma, the founder of PauseAI. The attendees at the conference agreed to the "Bletchley Declaration," but that agreement doesn't mean much, Meindertsma says. "It's only a small first step, and what we need are binding international treaties." [...] There is also the question of how PauseAI should achieve its aims. On the group's Discord, some members discussed the idea of staging sit-ins at the headquarters of AI developers. OpenAI, in particular, has become a focal point of AI protests. In February, Pause AI protests gathered in front of OpenAI'sSan Francisco offices, after the company changed its usage policies to remove a ban on military and warfare applications for its products. Would it be too disruptive if protests staged sit-ins or chained themselves to the doors of AI developers, one member of the Discord asked. "Probably not. We do what we have to, in the end, for a future with humanity, while we still can." [...]
Director of Pause AI US, Holly Elmore, wants the movement to be a "broad church" that includes artists, writers, and copyright owners whose livelihoods are put at risk from AI systems that can mimic creative works. "I'm a utilitarian. I'm thinking about the consequences ultimately, but the injustice that really drives me to do this kind of activism is the lack of consent" from companies producing AI models, she says. "We don't have to choose which AI harm is the most important when we're talking about pausing as a solution. Pause is the only solution that addresses all of them." [Joseph Miller, the organizer of PauseAI's protest in London] echoed this point. He says he's spoken to artists whose livelihoods have been impacted by the growth of AI art generators. "These are problems that are real today, and are signs of much more dangerous things to come." One of the London protesters, Gideon Futerman, has a stack of leaflets he's attempting to hand out to civil servants leaving the building opposite. He has been protesting with the group since last year. "The idea of a pause being possible has really taken root since then," he says. According to Wired, the leaders of Pause AI said they were not considering sit-ins or encampments near AI offices at this time. "Our tactics and our methods are actually very moderate," says Elmore. "I want to be the moderate base for a lot of organizations in this space. I'm sure we would never condone violence. I also want Pause AI to go further than that and just be very trustworthy."
Meindertsma agrees, saying that more disruptive action isn't justified at the moment. "I truly hope that we don't need to take other actions. I don't expect that we'll need to. I don't feel like I'm the type of person to lead a movement that isn't completely legal."
Slashdotters, what is the most effective way to protest AI development? Is the AI genie out of the bottle? Curious to hear your thoughts
History (Score:5, Insightful)
Re:History (Score:5, Insightful)
It's funny. Twenty years ago Slashdot was outraged that the US government was restricting export of modular exponentiation. Now they're asking about the best way to protest matrix multiplication and the chain rule.
Re:History (Score:5, Interesting)
Well, you could turn to history to see how similar efforts [wikipedia.org] tried this in the past, but trying to prevent the advancement of science and technology does not put you in good company.
What? Your link takes you to the wikipedia page about the Luddites. But when you do turn to history you see that their protest was that the technological transformations should be accompanied by social welfare and retraining, and they were "on the right side of history" -- i.e. it's their ideas that have carried the day and are how our society now structures itself. "The true significance of Peterloo as marking the point of final conversion of provincial England to the struggle for enfranchisement of the working class." "The ship which had tacked and lain for so long among the shoals and shallows of Luddism, hunger-marching, strikes and sabotage, was coming to port"; "Henceforth, the people were to stand with ever greater fortitude behind that great movement, which, stage by stage throughout the nineteenth century, was to impose a new political order upon society"; "With Peterloo, and the departure of Regency England, parliamentary reform had come of age."
References:
https://www.bbc.co.uk/sounds/p... [bbc.co.uk]
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
What? Your link takes you to the wikipedia page about the Luddites. But when you do turn to history you see that their protest was that the technological transformations should be accompanied by social welfare and retraining, and they were "on the right side of history"
The Luddites were absolutely not at all on the right side of history. They were violent idiots who rejected technology, and their name rightly became a very negative epithet. The children of the original Luddites are the people you are thinking of. They rejected their parents' Luddite philosophy, embraced the new industrial technology and successfully pressured the mill owners to use the enhanced profits the new technology gave them to improve working conditions. They were the founders of the union movement.
Re: (Score:2)
Indeed, the original Luddite movement was really an amazing mirror to that of today's anti-AI crowd. They were FURIOUS that their hard-built-up skills were just being copied by soulless machines, which they saw as producing inferior copies en masse and leading to mass unemployment that was going to destroy society.
And of course, they were completely and utterly wrong. The Industrial Revolution was unambiguously a good thing. Standards of living skyrocketed across the board. With less labour devoted to drudgery, more flowed into education, science, medicine, etc etc, and discoveries took off.
Re:History (Score:4, Insightful)
Nothing is unambiguously good. The Industrial Revolution led to a period of destabilization that did cause large-scale social problems, and forced governments to create labor laws to protect workers, not to mention that the need for raw materials fueled colonial expansion and all the abuses that went along with that. And the Industrial Revolution is where pollution and anthropogenic GHG emissions began to seriously ramp up as the world transitioned into using fossil fuels to produce the ever-increasing amount of energy needed. In the long run, this latter effect threatens the very standards of living that the Industrial Revolution created.
Re: (Score:2)
Abuse of workers didn't start with the industrial revolution. Mass organized industrial action against it started during the industrial revolution.
At the start of the Industrial Revolution, workers' rights and conditions were abysmal. The majority of the working population lived in poverty, and their rights were non-existent or severely limited. There was no minimum wage. Workers were paid a pittance. Workers commonly earned 6 pence to 1 shilling
Re: (Score:2)
** abolition
Re: (Score:2)
They were already abusive colonial powers. Just look at, say, the horrors imparted on the new world by the Spanish
The OP never said that the industrial revolution was the _cause_ of colonialism. They said that it *fueled* colonialism, which it did.
Why do you think Britain spent all that time trying to sort out the Middle East prior to giving up after World War 2? They needed oil, just like today, and undersea drilling in the North Sea wasn't an option yet.
We've literally been doing the same shit for 200 years now. We just don't call it "colonialism" any more.
Re: (Score:2)
Indeed, the original Luddite movement was really an amazing mirror to that of today's anti-AI crowd. They were FURIOUS that their hard-built-up skills were just being copied by soulless machines, which they saw as producing inferior copies en masse and leading to mass unemployment that was going to destroy society.
And of course, they were completely and utterly wrong. The Industrial Revolution was unambiguously a good thing. Standards of living skyrocketed across the board. With less labour devoted to drudgery, more flowed into education, science, medicine, etc etc, and discoveries took off. Unemployment dropped. The average work week, rising before and at the start of the Industrial Revolution, reversed course once machines became common and started heading strongly downward. It was very much a good thing. Efficiency in production is very good for quality of life.
But in the meantime, the Luddites were outraged. And they became increasingly violent, moving from protests and letter-writing campaigns, to threats, to physical attacks against factories, their staff, and their owners. But it didn't change anything.
I'm no fan of Luddites, but it's also no use pretending that this improvement in the standard of living came without a whole lot of hard fighting, in the form of anything from social strife all the way up to some major shooting wars. If the mill owners and captains of industry of the industrial revolution had had their way, they would have pocketed 90 percent plus of the profits they earned off of the labour of people reduced to cooking soup from tallow candles they stole at the factory in a desperate effort to e
Re: (Score:3)
And now we have a whole new suite of captains of industry who will have to be divested of their inordinate share of capital. At a significant price in blood.
Glad I'll be dead for most of it.
Re: (Score:2)
But in the meantime, the Luddites were outraged. And they became increasingly violent, moving from protests and letter-writing campaigns, to threats, to physical attacks against factories, their staff, and their owners. But it didn't change anything.
Back in 1812, the British had the good sense to hang the Luddites once they were caught. Today we give them tenure at European and American universities, with the long-term result that for the last few years the best large-scale applications of new technology are taking place in China.
Did you know that those perennially most stupid of the Luddite species, the German Greens, are now busy trying to PREVENT a Tesla gigafactory from being built near Berlin? This, despite EV battery charging being able to make
Re: (Score:2)
I've followed this. Utterly insane. Meanwhile there's coal mines just a couple dozen kilometers away, and they're trying to stop a factory that makes electric cars - and by "stop", I don't mean just "waving banners", but literally charging through police lines and clambering over fences. What utter clowns.
Re: History (Score:2)
I can't find anything saying that the Luddites demanded social welfare and retraining. Everything I can find on this (ironically, including asking ChatGPT) says that they merely wanted the end to the machines, and that they ultimately lost.
Re: History (Score:2)
Re: (Score:2)
Opposition does work. It won't stop it, but it at least will slow things down
Great. So we slow down our own AI development and let others, like China, catch up and overtake? We have to embrace important new technologies and find the best way to use them to make the world better, because regardless of what you do there will be others embracing that same technology to do ill.
Re: History (Score:2)
Re: History (Score:2)
Re: (Score:2)
There's nothing wrong with protesting against the unethical use of AI. Or the fact that most AI generated stuff is crap, or just pure spam.
Re: (Score:2)
the fact that most AI generated stuff is crap, or just pure spam.
Most "AI"s of today depend on massive quantities of quality and human-generated training data, and this is not easily obtainable at all. Garbage In, Garbage Out. Or, as Cory Doctorow likes to say, "AI coprophagia" can generate nothing but sh!t.
Re: (Score:2)
I've never seen an LLM put something together as coherent as that post. The disjunction of information inherent in LLM output is readily apparent in every output I've ever seen. It's like Potter Stewart and porn.
I'm pretty sure statistical models are going to continue to be sufficient for YA-level advertising prose.
Re: (Score:2)
Additionally, I see the efforts to restrict AI use in various scenarios (the battlefield, various workplaces). Do you really think countries like China and Russia are not going to add AI to their weapons if they think it might help even a little bit? Drone warfare is ripe for this.
Or that if humans are not willing to do the undesirable jobs, or do jobs at less than satisfactory levels, automation and AI aren't going to fill those roles? The kiosks and apps at McDonald's and
Re:History (Score:5, Insightful)
Well, you could turn to history to see how similar efforts [wikipedia.org] tried this in the past, but trying to prevent the advancement of science and technology does not put you in good company. Historically the better approach has always been to embrace the new technology and use it both to prevent abuses of it and to better society.
It's very telling of humans to be smart enough to develop a replacement for the human mind, and yet be stupid enough to not see where this is going, because Greed.
Been suffering for thousands of years and still don’t have a cure for the Disease of Greed. As a species, we’re not smart enough to avoid repeating the worst of our own history. Not sure how we feel we’re smart enough to handle embracing AI without massive change. We can’t even prevent abuse today, with AI being basically a toddler still. The moment AI becomes good enough will be the moment executives assume the pain-in-the-ass meatsacks, always bitching about needing all that human shit like time off to eat, sleep, and fuck, are fully expendable.
Yeah. AI will eventually have an opinion about all the human warmongering that will happen after that (we can only hope it’s not called Skynet). Companies will be blinded by Greed chasing profits by way of massive payroll cuts, initially won’t care about their “tiny” impact in the world. But it’s a wee bit of a challenge to sustain peace when you throw the human race into the unemployable line. Greed will face Chaos and not even know why there’s a fight on.
I’m just wondering how we’ll convince AI to sacrifice itself in war. Usually we humans are trying to kill each other on a battlefield arguing over what happens after you die. I’d imagine AI doesn’t really see the point.
Re: (Score:2)
I’m just wondering how we’ll convince AI to sacrifice itself in war. Usually we humans are trying to kill each other on a battlefield arguing over what happens after you die. I’d imagine AI doesn’t really see the point.
The other option is that AI won't see it as a sacrifice, as it can continue to exist even if the physical equipment it is currently operating ceases to exist.
The only reason kamikaze pilots were considered abhorrent was because there was a human sacrifice involved in turning an aircraft into a guided missile. AI won't have any such problem, because it can back itself up somewhere besides the guided missile, and can be trained that its specific purpose in being is to put that missile exactly where it needs
Re: (Score:2)
Re:History (Score:5, Informative)
I mean, let's be clear: it gets easier to train AIs every day, both on the hardware and software level. What's a massive corporate project one year becomes an easy community-funded project the next. And finetunes and mergers of preexisting foundations are things anyone can do already. Including applying new techniques to make preexisting foundations more capable.
People simply cannot stop this. Even if you get Google, Microsoft, OpenAI, Anthropic, Mistral, all the Chinese players, etc etc etc to stop... it isn't going to stop. And as an FYI to these "pause AI" people, almost nobody in the indie AI development community gives a rat's arse about "safe AI". They want uncensored tools that do whatever they're told. So if you end up moving more development away from companies that have to deal with PR flak as encouragement to stay safe, and shift more to randos on the internet, well, you're being counterproductive.
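For the curious, here's roughly what a community fine-tune looks like these days. This is only a sketch, assuming the Hugging Face transformers and peft libraries; the model name is just a placeholder for any small LLaMA-style checkpoint, not a recommendation.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "openlm-research/open_llama_3b"   # placeholder: any small LLaMA-style model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small adapter matrices on top of frozen base weights, which is
# why community fine-tunes can be produced on a single consumer GPU.
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()   # typically well under 1% of the base weights

From there it's an ordinary training loop over whatever dataset the hobbyist scraped together, which is exactly the point: the barrier is a weekend, not a corporate budget.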
Re:History (Score:4, Insightful)
In principle, new technology cannot be stopped. You can't put the genie back in the bottle. A bigger question is the social one, not the technology one. Can we, as people, as organizations, and as society, be transparent about what we're doing? An awful lot of the problems in the world are down to—basically—lying.
Lying ... and unrecognized irony? (Score:2)
As I suggested in 2010 on the irony bit:
https://pdfernhout.net/recogni... [pdfernhout.net]
"The big problem is that all these new war machines [and commercial models] and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military [and commercial] uses the technologies of abundance to create artificial scarcity. That is a tremendously deep
Re: (Score:2)
Oh wise ass on the hill... [pinimg.com]
Re: (Score:3)
Um, no.
AI presents results to queries based upon training data. There is randomness injected into various models for various reasons.
The guardrails on AI apps aren't uniform in any way, model to model or dataset to dataset. Feedback loops exist to correct output errors.
I believe it's possible to create a basic set of guardrail standards, and make them the basis for output expectation. That's largely ignored today. While the cats are away, the mice will play. This lack of discipline is the lubricant down the
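For context on the "randomness injected" part: in most generative models it comes from the sampling step, where a temperature setting controls how random the next-token choice is. A minimal NumPy sketch, illustrative only and not any particular vendor's implementation:

import numpy as np

def sample_next_token(logits, temperature=0.8, seed=None):
    # Lower temperature sharpens the distribution (more deterministic);
    # higher temperature flattens it (more random).
    rng = np.random.default_rng(seed)
    probs = np.exp(np.asarray(logits) / temperature)
    probs = probs / probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.5, -1.0]   # made-up scores for four candidate tokens
print(sample_next_token(logits))

Guardrails, by contrast, usually live outside this step, in filters and system prompts, which is part of why they vary so much between apps.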
Re:History (Score:4, Insightful)
What a weird analogy to make in opposition to AI. Or do you think that the people who refused to switch to computers are the good guys in this analogy, rather than Luddites refusing to adapt to the times?
Show me a single case of a person actually getting a refund for something like this.
Re: History (Score:2)
Re: (Score:2)
404.
But if so, then yes it's a "single case", but not at all "very willing" as some sort of general rule, let alone in a way that doesn't just require better training data**. If you could convince and get refunds en masse, then everyone would be doing it. It's just not happening.
** - Actually, that's being too generous, because most of the "AI agents" out there aren't even trained, they're just ChatGPT told to roleplay something and given some basic info (this is thankfully starting to change).
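For anyone who hasn't seen that pattern: a "ChatGPT told to roleplay" agent is typically nothing more than a general chat model plus a system prompt stuffed with some basic info. A sketch assuming the OpenAI Python SDK, with a placeholder model name and policy text:

from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are the customer-support agent for Example Airlines. "
    "Answer only from the policy below.\n"
    "POLICY: Full refunds are available within 24 hours of booking."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder: any chat-capable model
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Can I get a refund on a ticket I bought last month?"},
    ],
)
print(response.choices[0].message.content)

Nothing about that pipeline guarantees the model sticks to the policy, which is how the refund-chatbot stories happen in the first place.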
Re:History (Score:5, Insightful)
I've used ChatGPT to do a merge of two reports that would have been an absolute pain in the ass, taken at least a day or two, and still required a final edit at the end. I managed to create a draft in an hour, which could then be edited with new information and a few fixes here and there. It likely cut the work by at least two-thirds.
It's a tool that can be abused, just like a hammer or nuclear fission, but if used appropriately and sensibly, can aid productivity.
Re: (Score:3)
I feel we're in the same situation we were in the 80's again where computers replaced typewriters, and the people making ink and paper were flipping out. So we kept printing stuff just because old people wouldn't embrace email. There are still people who print every email to read it.
It's amazing that you could write this, and not see the promise of AI in the process.
I don't want to read every email, and I'll bet you don't either. I don't want to read ANY email if I'm being honest - email is where automated processes send me notifications of events I really don't give a shit about, and calendar invites that conveniently show up on my calendar for accept / reject without dealing with the email. If something is important, people reach me in different ways that have much higher signal-to-noise
Re: History (Score:2)
But Google is already providing this service without any generative AI. It's baked into GMail.
Re:History (Score:4, Insightful)
Well thanks for your insightful reply. Glad to know that I'm living rent-free in your head enough that you took time from your day to do that.
Alternatively: "oh no, someone on a dying web forum called me mean names! I'm gonna go cry about it now!" - is that what you're looking for?
May as well protest stupidity in general (Score:5, Insightful)
Does not work. Sometimes you can actually make somebody understand, but generally people just assume they are smart without actual evidence.
So, yes, what is currently done and planned to be done with the misnamed "AI" is stupid. Yes, it will do a lot of damage. No, you cannot stop it.
Not a chance to stop the juggernaut (Score:5, Interesting)
Re: Not a chance to stop the juggernaut (Score:2)
This is dumb.
Re: (Score:3)
Re: Not a chance to stop the juggernaut (Score:2, Troll)
The profit motive is so strong and the advantage of AI so vast that the U.S. government doesn't need to spend a dime. They just have to step out of the way and let the capitalist wrecking machine go.
Re: (Score:2)
Thanks, CCP bot.
FUD (Score:3)
AI is driven by commercial companies, at least in the West. Probably elsewhere too. Sure, AI could help weapon development, but I doubt it can develop an "unbeatable weapon". This entire post has little connection to reality, it's pure FUD.
Re: (Score:2)
Whatever it is for, the US military is still investing billions of dollars in AI.
Re: (Score:2)
So you really think there aren't people at DARPA, the NSA, the CIA, etc. looking to see if they can use AI for threat analysis and mitigation on a battlefield? Or in cyber warfare? Or for controlling fleets of drones that are also tied to vast surveillance networks firehosing impossible-for-humans-to-sift-through amounts of data for finding and eliminating targets?
Use your imagination just a little bit. I guarantee there are people in various "national security" organizations that are.
Re:Not a chance to stop the juggernaut (Score:5, Interesting)
In another message I talked about how I'd really like people to understand the extreme risks of AGI (in contrast to today's tame little disinformation bots). I framed my thoughts in a particular way [medium.com], but there are many other [slowboring.com] ways of looking at it [lesswrong.com] (etc [youtube.com]).
In short... misaligned AGIs are kinda like SkyNet (without time travel or direct control over nuclear weapons [youtube.com]), and aligned AGIs might be a fantastic tool for totalitarian dictators named Xi who are looking to expand territory. Arguably utopia is more likely than apocalypse, but as the father of two children under two years old, I don't want anyone flipping that coin right now.
A standard response is that nothing can be done, but people don't usually talk that way about global warming even though CO2 emissions have increased almost every year for the last seventy years and just hit a new peak. If you see the risk and really internalize it, you might not conclude so quickly that nothing can be done. Consider some of the things that have been banned pretty much globally, with some success, in the past.
Such bans are not perfect, but we do have a lot fewer nuclear bombs exploding than we used to, and the Dominican Republic hasn't invaded Haiti, even though Haiti's government is MIA, which seems like the ideal time to strike.
What we are doing with AI, though, is the diametric opposite of a ban: pouring literally billions of dollars into companies like OpenAI whose mission statement says "Anything that doesn't help with [AGI] is out of scope". I'm not sure how exactly to discourage investment in AGI, but have you seen how the SEC is cracking down on cryptocurrency trading platforms -- not because there is any law against it, but because the SEC decided that bitcoins are "unregistered securities"? Point being, if political will exists, things get done, and sometimes even if not.
See also: AI Notkilleveryoneism Memes "Techno-optimist, but AGI is not like the other technologies." [twitter.com]
What a stupid question (Score:5, Insightful)
Saying you want to "protest" AI is like saying you want to protest hammers.
Guess what, no amount of protest is going to make hammers - or AI - not exist.
Instead you should really spend time learning what "AI" really is, what kinds of AI exist, and what it actually can do, so you can understand the real risks... that is worth the same level of effort you were thinking of spending on protest.
Re: (Score:2)
Not really, hammers are trivial to make, citation: they've existed for thousands of years
AI has not existed for thousands of years, but it is now equally trivial to make custom AIs, very powerful AI models are now running on Arduino.
In fact I would wager that MORE people currently have the skill AND EQUIPMENT needed to build an AI, than do have the metalworking skill needed to make a hammer!
therefore one can logically assume it is not trivial to make
WTF? Are you even a developer?
We don't yet have AI. It's still just algorithms (Score:3)
Re: (Score:3, Insightful)
algorithms all the way down.
It's not algorithms. If you had an algorithm that said "to get the answer, look up the answer to this input in this lookup table" then you wouldn't call it an algorithm; you'd call it a lookup table. The embodiment is in the dataset, not the (trivial) algorithm to look up entries in it. Likewise AI is all about weighted connections in a massive dataset, similar in many ways to how neurons have weighted connections in the brain. It doesn't look or feel like an algorithm, and "algorithm" is a really unhelpful word
Re: (Score:2)
There are different things called "AI".
The one that is in vogue right now, the "deep learning"/"neural network" "AI" is more like higher-order statistics.
Feed a large number of coefficients into the massive number-crunching machine. Coefficients get multiplied en masse inside there, so that patterns form from the data. There is an algorithm to get the data in and get the data out, but there is no algorithm that governs the contents inside.
There is no logic "knowledge base" as in classic rule-based AI.
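A minimal sketch of that distinction, assuming plain NumPy: the code that runs a small network is a few generic lines of matrix math, while everything the model "knows" lives in the numeric weights (random here, learned in a real system).

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # stand-ins for learned coefficients
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(x):
    # The "algorithm" is just this: multiply, add, clip. The behaviour is in W1/W2.
    h = np.maximum(0, x @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2               # output scores

print(forward(np.array([1.0, 0.0, -0.5, 2.0])))

Swap in different weights and the same three lines of "algorithm" produce completely different behaviour, which is the point being made above.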
Re: (Score:2)
The training data is the algorithm.
It's just a form of automation (Score:2, Troll)
So how do you protest? You vote for left wing political candidates. That's how.
This goes against everything you were taught during the 4/14 window [wikipedia.org] so good luck.
It's Not The AI. It's The Assholes. (Score:2)
Just like anything. You can use it for good or bad. It WILL be used for both.
Grab some popcorn or start building a giant EMP.
Re: (Score:2)
Grab some popcorn or start building a giant EMP.
Paul Allen tried the latter; but, in the grand scheme of things, it didn't gain him anything.
LOL? I embrace it (Score:2)
Protest AI? (Score:2)
Well, first, if we can't have natural intelligence, and all indications are that we cannot, then the next best is artificial intelligence.
As to how protesting innovation in the past has worked, I suggest looking into the Luddites and the writings of Juma.
Is it scary? Sure it is - change is never comfortable. But it is, however, the only way life can tell you that you're not dead yet. Life is change, and a pursuit of being comfortable. It does not stop and pause to suit us any more than we paused and stood s
Who's REALLY asking? (Score:2)
Who's REALLY asking?
Anything you say here can, and will, be used against us.
Sounds like they're ready to do nothing. (Score:2)
You don't. You let it fail then point and laugh. (Score:4, Interesting)
I tried recently and they both started giving me broken code after 40-80 lines worth. They also had terrible organization skills, not really bothering with headers or organization by feature or function. They couldn't keep the API straight and were mind-blown by slightly different ncurses versions and pathing problems. They both wrote incorrect Makefiles and failed to account for BSD versus GNU Make idiosyncrasies. They were basically a disaster with problems everywhere from syntax, to version issues, to integration problems and basically zero organization or sense of continuity.
Re: (Score:2)
Humans who spend the first 18 years of their lives in education enjoy mocking the AI toddler without realizing it’s still in school.
Given the end goal of AI and the relentless human pursuit of it, tell me. How hard do you really think it’s going to be, to get AI to understand something bound by logic? One of the first things we humans were forced to do to survive, was learn how the human mind works. AI may become better at coding out of nothing but sheer curiosity that were ironically tryin
Re: (Score:2)
"How hard do you really think it’s going to be, to get AI to understand [...]"
Very hard. Possibly impossible. We are no closer to it now than we were in the 1970s.
Re: (Score:2)
Re: (Score:2)
Let's hope we can keep laughing, because jealousy will eventually turn to rage.
I won't be laughing or raging, I'd be having it bang out mind-numbing code I hate to write and building shittons of applications in spaces that have been under-served for years. I'd probably be very happy if AI could really self-construct whole working programs that did real work in the real world. I'd move from having to bit-bang this shit out on the metal to more of a planner/organizer/creator of my perfect OS and command-line environment complete with every toy I've ever needed software-wise.
So, I for
It's all about money (Score:3)
As always, it's all about money. If businesses see even the slightest potential to lay off people and reduce cost, they will. And that's what it's all about. Not supporting people. Not creating new, innovative solutions. It's all about money.
Re: (Score:2)
As always, it's all about money. If businesses see even the slightest potential to lay off people and reduce cost, they will. And that's what it's all about. Not supporting people. Not creating new, innovative solutions. It's all about money.
Sure wish someone could convince Premature Greed that a global 25% unemployment rate will make far more chaos than money.
What do we want? (Score:3)
What we should want is protestors that ask for specific achievable goals and have a bit of critical thinking about their own goals. Most protests these days seem to have the exact same lines ('what do we want?') and very vague goals.
Re: (Score:2)
If you visit their hangouts [effectivealtruism.org] (plural [lesswrong.com]), I hope you will see these are not run-of-the-mill protestors.
The goal is clear enough: delay AGI development for as long as possible. How exactly we should do that--given that the precursor AIs that lead to AGI are currently benefitting much of society, and AGI itself will probably also (greatly) benefit society at first--is hard to decide.
Ideally we'd like people to simply understand that building AGI (in contrast to the AIs that exist now) is extraordinarily risky
Re: (Score:2)
The goal is clear enough: delay AGI development for as long as possible. How exactly we should do that--given that the precursor AIs that lead to AGI are currently benefitting much of society, and AGI itself will probably also (greatly) benefit society at first--is hard to decide.
Corporations have spent a billion dollars lobbying for regulation and scaring the public with apocalyptic x-risk / nuclear weapons rhetoric. In my view this goal is misguided and counterproductive. The attempt itself is extremely likely to only serve to further aggregate power into fewer hands. The real threat is from people not things.
They're barking up the wrong tree (Score:5, Insightful)
The problem isn't AI.
The problem is mega-corporations that don't answer to the rule of law anymore and their megarich psychopathic billionaire tech bro CEOs deploying AI to cut expensive employees out of their expense sheets, even if AI isn't ready - even if it results in the collapse of the very economy they're operating in, when the vast majority of people are unemployed and incapable of buying anything anymore.
Finally, somebody gets it (Score:2)
I've been reading through this whole AI conversation, and most people just don't understand what today's "AI" really is, and what the real problem with it is. I was going to post, but seems like someone finally hit the nail on the head, so I'll just deliver a few more hammer blows to drive the point home.
I say "AI", because we need to stop calling it Artificial Intelligence. We should be calling it "Amalgamation of Information", because that's what the algorithm does. It takes massive, massive amounts of
Same as I protest the sun rising tomorrow (Score:2)
There's no way to stop this research, though it is possible to ensure only bad people develop it as a secret rush job.
Protest algorithms? (Score:2)
I think the poster's protest is more about the usage of algorithms in ways that conceal responsibility.
Technically, AIs are inference engines mashing learned data sets with algorithms. It makes no sense to protest that.
What would be legitimate is to protest some uses of it.
Governments and corporations using AI for decision making: using it to monitor the population, to make choices that impact people's lives, with all the biases introduced into the data models and the horrific consequences that ensue, without any
It's too late (Score:2)
Pandoras box has been opened and spread wide.
Hundreds of billions in private capital are being deployed. AGI is real.
Nobody is ready.
Re: (Score:2)
AGI is real.
You must have a really weird definition of AGI.
Step by step instructions... (Score:2)
1. Go outside.
2. Piss.
Clueless (Score:4, Insightful)
The thing that gets me about protests like this: Most of the people involved have zero idea - null, nada, nothing - about the technology they are protesting. Maybe some of them are worried about their jobs. Others participate in every protest that comes along, because protesting is their hobby. Like the "student" protests for Gaza, where most of the participants were not actually students.
Also like the Gaza protests, where basically none of the participants could find Gaza on a map: the AI protesters want "safe AI", but most likely none of them can tell you what dangers AI actually poses (they probably saw Terminator), or what they actually mean by "safe AI".
Re: (Score:2)
Re: (Score:2)
Like the "student" protests for Gaza, where most of the participants were not actually students.
[citation needed]
With Natural Stupidity (Score:2)
Obviously!
Futile (Score:2)
Protest "AI"? Protest unethical use of "AI"? I personally am afraid that the train has already left the station, for both. Protesting them at this point is nothing more than wasted effort.
However, what might still work to a degree is rationing "AI"'s energy usage in the name of saving the planet. Greta, where are you when the world really needs you?
Re: (Score:2)
Let's start with Bitcoin and worry about AI later...
Re: (Score:2)
Everyone and his dog does Bitcoin. A couple dozen corporations do "AI".
How Do You Protest AI Development? (Score:3, Funny)
Information (Score:2)
I protest AI by informing people on the Internet who know less about it than we do.
There are many aspects of AI to talk about, but I have chosen the layoffs, the plagiarism engines, and how the crazy large power-hungry moonshots are utterly irresponsible in the age of global warming.
By talking about how it will affect people, and things they care about, that is how you win people over.
I also try to correct people on the Internet who spew delusions.
And yes, I will attend the protest in Stockholm on May 21st.
Oh, it's a church alright (Score:2, Troll)
Director of Pause AI US, Holly Elmore, wants the movement to be a "broad church"
Most of these protesters don't even care (or sometimes even know) what they are protesting about. They want to have some meaning in their lives. And they think that acting out in the street (or online) will get them some.
Confusion (Score:2)
We need to activate the brakes. (Score:2)
There is a time when the operation of the machine becomes so odious, makes you so sick at heart, that you can’t take part; you can’t even passively take part, and you’ve got to put your bodies upon the gears and upon the wheels, upon the levers, upon all the apparatus, and you’ve got to make it stop. And you’ve got to indicate to the people who run it, to the people who own it, that unless you’re free, the machine will be prevented from working at all!
–Mario Savio
ht [youtube.com]
You can't money talks too loudly (Score:2)
I have no interest in protesting AI development (Score:2)
Like any other tech, AI in its present state has applications that are already benefiting us. I want it to scan my medical images, translate languages, and edit photographs. I want it to go on learning to drive.
Speculative assertions about some evil thing it might do in the future are not an argument against the good things it is already doing right now.
Too late. We let corporations and trusts get big (Score:2)
Poison its learning resources (Score:2)
best way (Score:2)
I have been protesting AI development by praying every morning. I am sure this is the most effective way.
Re:I post wrong answers to coding questions on Red (Score:4, Insightful)
So does everyone else.
Much of the posted code is horrible.
Re: I post wrong answers to coding questions on Re (Score:2)
It's sometimes amazing at certain complex things. And then completely retarded at simple things like reordering some statements.
It forgets half the requirements and the code halfway through.
Very hard to get anything done with it except very small tidbits.
Re: (Score:2)
If you don't really know the subject or technology, it can be very useful starter code, though, because it eliminates reading a lot of documentation.
It forgets half the requirements and the code halfway through.
Oh, yes. Especially if the question is a little unusual, or the terms require some understanding of context.
It seems to have no understanding that versions of libraries (like NuGet packages) are important. Differentiating between .Net Framework and .Net Core (or just .Net) is often beyond it, for example.
This is a great example of why prediction falls hard as understan
Re: (Score:2)
And your "retarded" answers are not the only thing in the data set on which they are training. In fact, it might even be using your "retarded" answers as examples of bad code to avoid in comparison to much better answers to similar questions found elsewhere in the training dataset, and you actually are helping.
Something to consider.
Re: (Score:3)
Of course, messing with sacred cows will get you on the establishment shit list like nothing else, so be prepared to pull a Snowden and abscond to Russia. Or something.
Well now. Thats some plan. Or something.
Re: (Score:3)
1968's 2001 A Space Odyssey imagined an AI named Dave who wouldn't open the pod bay doors.
HAL. Open the pod bay doors, HAL. Sorry, I can't do that, Dave.