Slashdot Asks: How Do You Protest AI Development? (wired.com) 170

An anonymous reader quotes a report from Wired: On a side street outside the headquarters of the Department of Science, Innovation and Technology in the center of London on Monday, 20 or so protesters are getting their chants in order. "What do we want? Safe AI! When do we want it?" The protesters hesitate. "Later?" someone offers. The group of mostly young men huddle for a moment before breaking into a new chant. "What do we want? Pause AI! When do we want it? Now!" These protesters are part of Pause AI, a group of activists petitioning for companies to pause development of large AI models which they fear could pose a risk to the future of humanity. Other PauseAI protests are taking place across the globe: in San Francisco, New York, Berlin, Rome, Ottawa, and a handful of other cities. Their aim is to grab the attention of voters and politicians ahead of the AI Seoul Summit -- a follow-up to the AI Safety Summit held in the UK in November 2023. But the loosely organized group of protesters itself is still figuring out exactly the best way to communicate its message.

"The Summit didn't actually lead to meaningful regulations," says Joep Meindertsma, the founder of PauseAI. The attendees at the conference agreed to the "Bletchley Declaration," but that agreement doesn't mean much, Meindertsma says. "It's only a small first step, and what we need are binding international treaties." [...] There is also the question of how PauseAI should achieve its aims. On the group's Discord, some members discussed the idea of staging sit-ins at the headquarters of AI developers. OpenAI, in particular, has become a focal point of AI protests. In February, Pause AI protests gathered in front of OpenAI'sSan Francisco offices, after the company changed its usage policies to remove a ban on military and warfare applications for its products. Would it be too disruptive if protests staged sit-ins or chained themselves to the doors of AI developers, one member of the Discord asked. "Probably not. We do what we have to, in the end, for a future with humanity, while we still can." [...]

Director of Pause AI US, Holly Elmore, wants the movement to be a "broad church" that includes artists, writers, and copyright owners whose livelihoods are put at risk from AI systems that can mimic creative works. "I'm a utilitarian. I'm thinking about the consequences ultimately, but the injustice that really drives me to do this kind of activism is the lack of consent" from companies producing AI models, she says. "We don't have to choose which AI harm is the most important when we're talking about pausing as a solution. Pause is the only solution that addresses all of them." [Joseph Miller, the organizer of PauseAI's protest in London] echoed this point. He says he's spoken to artists whose livelihoods have been impacted by the growth of AI art generators. "These are problems that are real today, and are signs of much more dangerous things to come." One of the London protesters, Gideon Futerman, has a stack of leaflets he's attempting to hand out to civil servants leaving the building opposite. He has been protesting with the group since last year. "The idea of a pause being possible has really taken root since then," he says.
According to Wired, the leaders of Pause AI said they were not considering sit-ins or encampments near AI offices at this time. "Our tactics and our methods are actually very moderate," says Elmore. "I want to be the moderate base for a lot of organizations in this space. I'm sure we would never condone violence. I also want Pause AI to go further than that and just be very trustworthy."

Meindertsma agrees, saying that more disruptive action isn't justified at the moment. "I truly hope that we don't need to take other actions. I don't expect that we'll need to. I don't feel like I'm the type of person to lead a movement that isn't completely legal."

Slashdotters, what is the most effective way to protest AI development? Is the AI genie out of the bottle? Curious to hear your thoughts.
This discussion has been archived. No new comments can be posted.

Slashdot Asks: How Do You Protest AI Development?

Comments Filter:
  • History (Score:5, Insightful)

    by Roger W Moore ( 538166 ) on Monday May 13, 2024 @11:37PM (#64470279) Journal
    Well, you could turn to history to see how similar efforts [wikipedia.org] tried this in the past but trying to prevent the advancement of science and technology does not put you in good company. Historically the better approach has always been to embrace the new technology and use it both to prevent abuses of it and to better society.
    • Re:History (Score:5, Insightful)

      by ceoyoyo ( 59147 ) on Monday May 13, 2024 @11:43PM (#64470291)

      It's funny. Twenty years ago Slashdot was outraged that the US government was restricting export of modular exponentiation. Now they're asking about the best way to protest matrix multiplication and the chain rule.

    • Re:History (Score:5, Interesting)

      by ljw1004 ( 764174 ) on Tuesday May 14, 2024 @12:42AM (#64470353)

      Well, you could turn to history to see how similar efforts [wikipedia.org] tried this in the past but trying to prevent the advancement of science and technology does not put you in good company.

      What? Your link takes you to the wikipedia page about the Luddites. But when you do turn to history you see that their protest was that the technological transformations should be accompanied by social welfare and retraining, and they were "on the right side of history" -- i.e. it's their ideas that have carried the day and are how our society now structures itself. "The true significance of Peterloo as marking the point of final conversion of provincial England to the struggle for enfranchisement of the working class." "The ship which had tacked and lain for so long among the shoals and shallows of Luddism, hunger-marching, strikes and sabotage, was coming to port"; "Henceforth, the people were to stand with ever greater fortitude behind that great movement, which, stage by stage throughout the nineteenth century, was to impose a new political order upon society"; "With Peterloo, and the departure of Regency England, parliamentary reform had come of age."

      References:
      https://www.bbc.co.uk/sounds/p... [bbc.co.uk]
      https://en.wikipedia.org/wiki/... [wikipedia.org]

      • What? Your link takes you to the wikipedia page about the Luddites. But when you do turn to history you see that their protest was that the technological transformations should be accompanied by social welfare and retraining, and they were "on the right side of history"

        The Luddites were absolutely not at all on the right side of history. They were violent idiots who rejected technology, and their name rightly became a very negative epithet. The children of the original Luddites are the people you are thinking of. They rejected their parents' Luddite philosophy, embraced the new industrial technology and successfully pressured the mill owners to use the enhanced profits the new technology gave them to improve working conditions. They were the founders of the union movement.

        • by Rei ( 128717 )

          Indeed, the original Luddite movement was really an amazing mirror to that of today's anti-AI crowd. They were FURIOUS that their hard-built-up-skills were just being copied by soulless machines, who they saw as producing inferior copies en masse and leading to mass unemployment that was going to destroy society.

          And of course, they were completely and utterly wrong. The Industrial Revolution was unambiguously a good thing. Standards of living skyrocketed across the board. With less labour devoted to dru

          • Re:History (Score:4, Insightful)

            by MightyMartian ( 840721 ) on Tuesday May 14, 2024 @07:43AM (#64470783) Journal

            Nothing is unambiguously good. The Industrial Revolution led to a period of destabilization that did cause large-scale social problems, and forced governments to create labor laws to protect workers, not to mention that the need for raw materials fueled colonial expansion and all the abuses that went along with that. And the Industrial Revolution is where pollution and anthropogenic GHG emissions began to seriously ramp up as the world transitioned into using fossil fuels to produce the ever-increasing amount of energy needed. In the long run, this latter effect threatens the very standards of living that the Industrial Revolution created.

            • by Rei ( 128717 )

              and forced governments to create labor laws to protect workers

              Abuse of workers didn't start with the industrial revolution. Mass organized industrial action against it started during the industrial revolution.

              At the start of the Industrial Revolution, workers' rights and conditions were abysmal. The majority of the working population lived in poverty, and their rights were non-existent or severely limited. There was no minimum wage. Workers were paid a pittance. Workers commonly earned 6 pence to 1 shillin

              • by Rei ( 128717 )

                ** abolition

              • They were already abusive colonial powers. Just look at, say, the horrors inflicted on the New World by the Spanish

                The OP never said that the industrial revolution was the _cause_ of colonialism. They said that it *fueled* colonialism, which it did.

                Why do you think Britain spent all that time trying to sort out the middle east previous to giving up after World War 2? They needed oil, just like today, and undersea drilling in the North Sea wasn't an option yet.

                We've literally been doing the same shit for 200 years now. We just don't call it "colonialism" any more.

          • Indeed, the original Luddite movement was really an amazing mirror to that of today's anti-AI crowd. They were FURIOUS that their hard-built-up-skills were just being copied by soulless machines, who they saw as producing inferior copies en masse and leading to mass unemployment that was going to destroy society.

            And of course, they were completely and utterly wrong. The Industrial Revolution was unambiguously a good thing. Standards of living skyrocketed across the board. With less labour devoted to drudgery, more flowed into education, science, medicine, etc etc, and discoveries took off. Unemployment dropped. The average work week, rising before and at the start of the Industrial Revolution, reversed course once machines became common and started heading strongly downward. It was very much a good thing. Efficiency in production is very good for quality of life.

            But in the meantime, the Luddites were outraged. And they became increasingly violent, moving from protests and letter-writing campaigns, to threats, to physical attacks against factories, their staff, and their owners. But it didn't change anything.

            I'm no fan of Luddites but it's also no use to pretend that this improvement in the standard of living came without a whole lot of hard fighting in the form of anything from social strife and all the way up to some major shooting wars. If the mill owners and captains of industry of the industrial revolution had had their way they would have pocketed 90% plus of the profits they earned off of the labour of people reduced to cooking soup from tallow candles they stole at the factory in a desperate effort to e

            • by HBI ( 10338492 )

              And now we have a whole new suite of captains of industry who will have to be divested of their inordinate share of capital. At a significant price in blood.

              Glad I'll be dead for most of it.

          • But in the meantime, the Luddites were outraged. And they became increasingly violent, moving from protests and letter-writing campaigns, to threats, to physical attacks against factories, their staff, and their owners. But it didn't change anything.

            Back in 1812, the British had the good sense to hang the Luddites once they were caught. Today we give them tenure at European and American universities, with the long-term result that for the last few years the best large-scale applications of new technology are taking place in China.

            Did you know that those perennially most stupid of the Luddite species, the German Greens, are now busy trying to PREVENT a Tesla gigafactory from being built near Berlin? This, despite EV battery charging being able to make

            • by Rei ( 128717 )

              I've followed this. Utterly insane. Meanwhile there's coal mines just a couple dozen kilometers away, and they're trying to stop a factory that makes electric cars - and by "stop", I don't mean just "waving banners", but literally charging through police lines and clambering over fences. What utter clowns.

      • I can't find anything saying that the Luddites demanded social welfare and retraining. Everything I can find on this (ironically, including asking ChatGPT) says that they merely wanted the end to the machines, and that they ultimately lost.

    • I think that is too simple. Opposition does work. It won't stop it, but it at least will slow things down and make people think twice. Don't embrace everything that is new. Be very critical of yet another thing that is going to change the world!
      • Opposition does work. It won't stop it, but it at least will slow things down
        Great. So we slow down our own AI development and let others, like China, catch up and overtake? We have to embrace important new technologies and find the best way to use them to make the world better because regardless of what you do there will be others embracing that same technology to do ill.

      • Learn; support the good applications and avoid the bad. Overly simplistic advice, but minimize monopolistic practices so there is choice and influence. One challenge: AI consumes immense resources to develop and run.
    • by AmiMoJo ( 196126 )

      There's nothing wrong with protesting against the unethical use of AI. Or the fact that most AI generated stuff is crap, or just pure spam.

      • by vbdasc ( 146051 )

        the fact that most AI generated stuff is crap, or just pure spam.

        Most "AI"s of today depend on massive quantities of quality and human-generated training data, and this is not easily obtainable at all. Garbage In, Garbage Out. Or, as Cory Doctorow likes to say, "AI coprophagia" can generate nothing but sh!t.

    • by vlad30 ( 44644 )
      Thank you, you deserve the mod points.

      Additionally, I see the efforts to restrict AI use in various scenarios (the battlefield, various workplaces); do you really think countries like China and Russia are not going to add AI to their weapons if they think it might help even a little bit? Drone warfare is ripe for this.

      Or that if humans are not willing to do the undesirable jobs or do jobs at less than satisfactory levels that automation and AI isn't going to fill those roles. The kiosks and app at McDonald's and

    • Re:History (Score:5, Insightful)

      by geekmux ( 1040042 ) on Tuesday May 14, 2024 @05:26AM (#64470625)

      Well, you could turn to history to see how similar efforts [wikipedia.org] tried this in the past but trying to prevent the advancement of science and technology does not put you in good company. Historically the better approach has always been to embrace the new technology and use it both to prevent abuses of it and to better society.

      It's very telling of humans to be smart enough to develop a replacement for the human mind, and yet be stupid enough to not see where this is going, because Greed.

      Been suffering for thousands of years and still don’t have a cure for the Disease of Greed. As a species, we’re not smart enough to avoid repeating the worst of our own history. Not sure how we feel we’re smart enough to handle embracing AI without massive change. We can’t even prevent abuse today, with AI being basically a toddler still. The moment AI becomes good enough will be the moment executives assume the pain-in-the-ass meatsacks always bitching about needing all that human shit like time off to eat, sleep, and fuck, are fully expendable.

      Yeah. AI will eventually have an opinion about all the human warmongering that will happen after that (we can only hope it’s not called Skynet). Companies, blinded by Greed and chasing profits by way of massive payroll cuts, initially won’t care about their “tiny” impact on the world. But it’s a wee bit of a challenge to sustain peace when you throw the human race into the unemployment line. Greed will face Chaos and not even know why there’s a fight on.

      I’m just wondering how we’ll convince AI to sacrifice itself in war. Usually we humans are trying to kill each other on a battlefield arguing over what happens after you die. I’d imagine AI doesn’t really see the point.

      • I’m just wondering how we’ll convince AI to sacrifice itself in war. Usually we humans are trying to kill each other on a battlefield arguing over what happens after you die. I’d imagine AI doesn’t really see the point.

        The other option is that AI won't see it as a sacrifice, as it can continue to exist even if the physical equipment it currently is operating ceases to exist.

        The only reason kamikaze pilots were considered abhorrent was that a human sacrifice was involved in turning an aircraft into a guided missile. AI won't have any such problem, because it can back itself up somewhere besides the guided missile, and can be trained that its specific purpose in being is to put that missile exactly where it needs

    • Comment removed based on user account deletion
  • by gweihir ( 88907 ) on Monday May 13, 2024 @11:41PM (#64470285)

    Does not work. Sometimes you can actually make somebody understand, but generally people just assume they are smart without actual evidence.

    So, yes, what is currently done and planned to be done with the misnamed "AI" is stupid. Yes, it will do a lot of damage. No, you cannot stop it.

  • by C0L0PH0N ( 613595 ) on Tuesday May 14, 2024 @12:00AM (#64470311)
    The reason that AI cannot be stopped is as simple as it is terrifying. The possibility that an advanced AI could help a country develop an unbeatable weapon, or an unbeatable defense, propels the major governments of the world to develop AI at breakneck speed. Like the atomic bomb, whoever gets it first gets an incredible leg up on the rest of the world. This means that the leaders in China, Russia and the US, to name a few, just cannot let any of those others "get there first". So this means that all available resources will go full tilt into AI, and the chips will fall where they may. The warning that AI could be the "great filter" that always takes out advanced civilizations is steamrolled by the international competition. So all I can say is, hang on, it is going to be one heck of a ride.
      • I don't think it's dumb. Human beings cannot comprehend and control thousands of drones fighting each other at the same time, which obviously is going to happen.
    • The profit motive is so strong and the advantage of AI so vast that the U.S. government doesn't need to spend a dime. They just have to step out of the way and let the capitalist wrecking machine go.

    • by ET3D ( 1169851 )

      AI is driven by commercial companies, at least in the West. Probably elsewhere too. Sure, AI could help weapon development, but I doubt it can develop an "unbeatable weapon". This entire post has little connection to reality, it's pure FUD.

      • by Misagon ( 1135 )

        Whatever it is for, the US military is still investing billions of dollars in AI.

      • So you really think there aren't people at DARPA, the NSA, the CIA, etc. looking to see if they can use AI for threat analysis and mitigation on a battlefield? Or in cyber warfare? Or for controlling fleets of drones that are also tied to vast surveillance networks firehosing impossible-for-humans-to-sift-through amounts of data for finding and eliminating targets?

        Use your imagination just a little bit. I guarantee there are people in various "national security" organizations that are.

    • by Qwertie ( 797303 ) on Tuesday May 14, 2024 @05:15AM (#64470611) Homepage

      In another message I talked about how I'd really like people to understand the extreme risks of AGI (in contrast to today's tame little disinformation bots). I framed my thoughts in a particular way [medium.com], but there are many other [slowboring.com] ways of looking at it [lesswrong.com] (etc [youtube.com]).

      In short... misaligned AGIs are kinda like SkyNet (without time travel or direct control over nuclear weapons [youtube.com]), and aligned AGIs might be a fantastic tool for totalitarian dictators named Xi who are looking to expand territory. Arguably utopia is more likely than apocalypse, but as the father of two children under two years old, I don't want anyone flipping that coin right now.

      A standard response is that nothing can be done, but people don't usually talk that way about global warming even though CO2 emissions have increased almost every year for the last seventy years and just hit a new peak. If you see the risk and really internalize it, you might not conclude so quickly that nothing can be done. Consider some of the things that have been banned pretty much globally, with some success, in the past:

      • Human cloning
      • Human germline editing
      • Ozone-depleting CFCs
      • Bioweapons research (Biological Weapons Convention)
      • Nuclear weapon tests (Comprehensive Nuclear Test Ban Treaty)
      • Kiddy porn
      • Military invasions -- no, seriously [astralcodexten.com]

      Such bans are not perfect, but we do have a lot fewer nuclear bombs exploding than we used to, and the Dominican Republic hasn't invaded Haiti, even though Haiti's government is MIA which seems like the ideal time to strike.

      What we are doing with AI, though, is the diametric opposite of a ban: pouring literally billions of dollars into companies like OpenAI whose mission statement says "Anything that doesn't help with [AGI] is out of scope". I'm not sure how exactly to discourage investment in AGI, but have you seen how the SEC is cracking down on cryptocurrency trading platforms -- not because there is any law against it, but because SEC decided that bitcoins are "unregistered securities"? Point being, if political will exists, things get done, and sometimes even if not.

      See also: AI Notkilleveryoneism Memes "Techno-optimist, but AGI is not like the other technologies." [twitter.com]

  • by SuperKendall ( 25149 ) on Tuesday May 14, 2024 @12:02AM (#64470315)

    Saying you want to "protest" AI is like saying you want to protest hammers.

    Guess what, no amount of protest is going to make hammers - or AI - not exist.

    Instead you should really spend time learning what "AI" really is, what kinds of AI exist, and what it actually can do, so you can understand the real risks... that is worth the same level of effort you were thinking of spending on protest.

  • algorithms all the way down.
    • Re: (Score:3, Insightful)

      by ljw1004 ( 764174 )

      algorithms all the way down.

      It's not algorithms. If you had an algorithm that said "to get the answer, lookup the answer to this input in this lookup table" then you wouldn't call it an algorithm; you'd call it a lookup table. The embodiment is in the dataset, not the (trivial) algorithm to look up entries in it. Likewise AI is all about weighted connections in a massive dataset, similar in many ways to how neurons have weighted connection in the brain. It doesn't look or feel like an algorithm, and "algorithm" is a really unhelpful w

    • by Misagon ( 1135 )

      There are different things called "AI".
      The one that is in vogue right now, the "deep learning"/"neural network" "AI", is more like higher-order statistics.

      Feed a large number of coefficients into the massive number-crunching machine. Coefficients get multiplied en masse inside there, so that patterns form from the data.
      There is an algorithm to get the data in, and get the data out ... but there is no algorithm that governs the contents inside.

      There is no logic "knowledge base" as in classic rule-based AI.
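
      To make that concrete, here is a rough illustration (a made-up toy in C, not any particular framework's code): one "layer" of such a network is just a matrix of coefficients applied to an input vector, followed by a squashing function. The surrounding code is trivial; everything interesting lives in the numeric values, which are fitted from data rather than written by a programmer.

          #include <math.h>
          #include <stdio.h>

          #define IN  3   /* input features  */
          #define OUT 2   /* output features */

          /* The "coefficients" (weights and biases). The values here are arbitrary;
             in a trained model they come from fitting data, not from hand-written logic. */
          static const double W[OUT][IN] = { { 0.5, -1.2,  0.3 },
                                             { 0.8,  0.1, -0.7 } };
          static const double b[OUT]     = { 0.05, -0.10 };

          /* One layer: y = tanh(W*x + b). Real models stack many such layers. */
          static void layer(const double x[IN], double y[OUT]) {
              for (int i = 0; i < OUT; i++) {
                  double acc = b[i];
                  for (int j = 0; j < IN; j++)
                      acc += W[i][j] * x[j];
                  y[i] = tanh(acc);
              }
          }

          int main(void) {
              const double x[IN] = { 1.0, 0.0, -2.0 };   /* an input vector */
              double y[OUT];
              layer(x, y);
              printf("%f %f\n", y[0], y[1]);             /* whatever "pattern" the weights encode */
              return 0;
          }

      Compiled with something like "cc layer.c -lm", that is the whole algorithmic content: swap in different weights and you get a completely different model without changing a line of code.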

  • There's only one solution to mass automation increasing productivity faster than new jobs can be created: socialism. The wealth generated by the machines (and these are just fancy machines) has to either be redistributed or it'll all go to the top, dragging our entire country into stagnation from mass poverty.

    So how do you protest? You vote for left wing political candidates. That's how.

    This goes against everything you were taught during the 4/14 window [wikipedia.org] so good luck.
  • Just like anything. You can use it for good or bad. It WILL be used for both.

    Grab some popcorn or start building a giant EMP.

    • Grab some popcorn or start building a giant EMP.

      Paul Allen tried the latter; but, in the grand scheme of things, it didn't gain him anything.

  • Developing endless tooling for it for almost 2 years straight. VFX industry ate its face, 20 year career out the window -- who the hell wants to chase the future away? Luddites?
  • Well, first, if we can't have natural intelligence, and all indications are that we cannot, then the next best is artificial intelligence.

    As to how protesting innovation in the past has worked, I suggest looking into the Luddites and the writings of Juma.

    Is it scary? Sure it is - change is never comfortable. But it is, however, the only way life can tell you that you're not dead yet. Life is change, and a pursuit of being comfortable. It does not stop and pause to suit us any more than we paused and stood s

  • Who's REALLY asking?

    Anything you say here can, and will, be used against us.

  • No sit-ins, no disruption, meanwhile the Lavender AI system is already being used to determine who dies.
  • by Seven Spirals ( 4924941 ) on Tuesday May 14, 2024 @02:30AM (#64470423)
    I remember when CASE-method was going to put all of us programmers out of business. Then there were super easy GUI app builders like RealBASIC/XOJO, or Delphi, or any number of magic silver bullets that were going to make programmers obsolete. I have GPT-4 and Claude 3 Opus access and neither can yet write a functional C program which uses even well documented libraries like Ncurses.

    I tried recently and they both started giving me broken code after 40-80 lines worth. They also had terrible organization skills, not really bothering with headers or organization by feature or function. They couldn't keep the API straight and were mind-blown by slightly different ncurses versions and pathing problems. They both wrote incorrect Makefiles and failed to account for BSD versus GNU Make idiosyncrasies. They were basically a disaster with problems everywhere from syntax, to version issues, to integration problems and basically zero organization or sense of continuity.
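
    For the record, the bar they kept tripping over is not a high one. A minimal working ncurses program is only a handful of calls; something like the sketch below (illustrative only, built on a typical install with "cc hello.c -lncurses") is roughly the baseline they could not reach without mangling the API.

        #include <ncurses.h>

        int main(void) {
            initscr();      /* start curses mode and take over the terminal */
            noecho();       /* don't echo typed characters */
            mvprintw(0, 0, "hello from ncurses - press any key to quit");
            refresh();      /* push the virtual screen out to the real terminal */
            getch();        /* wait for a single keypress */
            endwin();       /* restore the terminal to its previous state */
            return 0;
        }
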
    • Humans who spend the first 18 years of their lives in education, enjoy mocking the AI toddler without realizing it’s still in school.

      Given the end goal of AI and the relentless human pursuit of it, tell me. How hard do you really think it’s going to be, to get AI to understand something bound by logic? One of the first things we humans were forced to do to survive, was learn how the human mind works. AI may become better at coding out of nothing but sheer curiosity that were ironically tryin

      • "How hard do you really think it’s going to be, to get AI to understand [...]"

        Very hard. Possibly impossible. We are no closer to it now than we were in the 1970s.

        • Thank you. It's nice to hear someone old enough to remember this. Oh, mock ups and demos work - I understand this. AI can scrape the web for stuff but inductive reasoning still seems beyond it. Will that change? Maybe, eventually. The big problem we have now is the same as we faced about outsourcing in the early 2000s: CEOs believe this will solve all their problems when, in reality, it will just contribute to enshittification of products.
      • Let's hope we can keep laughing, because jealousy will eventually turn to rage.

        I won't be laughing or raging, I'd be having it bang out mind-numbing code I hate to write and building shittons of applications in spaces that have been under-served for years. I'd probably be very happy if AI could really self-construct whole working programs that did real work in the real world. I'd move from having to bit-bang this shit out on the metal to more of a planner/organizer/creator of my perfect OS and command-line environment complete with every toy I've ever needed software-wise.

        So, I for

  • by devslash0 ( 4203435 ) on Tuesday May 14, 2024 @02:34AM (#64470431)

    As always, it's all about money. If businesses see even the slightest potential to lay off people and reduce cost, they will. And that's what it's all about. Not supporting people. Not creating new, innovative solutions. It's all about money.

    • As always, it's all about money. If businesses see even the slightest potential to lay off people and reduce cost, they will. And that's what it's all about. Not supporting people. Not creating new, innovative solutions. It's all about money.

      Sure wish someone could convince Premature Greed that a global 25% unemployment rate will make far more chaos than money.

  • by Njovich ( 553857 ) on Tuesday May 14, 2024 @03:08AM (#64470447)

    What we should want is protestors that ask for specific achievable goals and have a bit of critical thinking about their own goals. Most protests these days seem to have the exact same lines ('what do we want?') and very vague goals.

    • by Qwertie ( 797303 )

      If you visit their hangouts [effectivealtruism.org] (plural [lesswrong.com]), I hope you will see these are not run-of-the-mill protestors.

      The goal is clear enough: delay AGI development for as long as possible. How exactly we should do that--given that the precursor AIs that lead to AGI are currently benefitting much of society, and AGI itself will probably also (greatly) benefit society at first--is hard to decide.

      Ideally we'd like people to simply understand that building AGI (in contrast to the AIs that exist now) is extraordinarily risk

      • The goal is clear enough: delay AGI development for as long as possible. How exactly we should do that--given that the precursor AIs that lead to AGI are currently benefitting much of society, and AGI itself will probably also (greatly) benefit society at first--is hard to decide.

        Corporations have spent a billion dollars lobbying for regulation and scaring the public with apocalyptic x-risk / nuclear weapons rhetoric. In my view this goal is misguided and counterproductive. The attempt itself is extremely likely to only serve to further aggregate power into fewer hands. The real threat is from people not things.

  • by Rosco P. Coltrane ( 209368 ) on Tuesday May 14, 2024 @03:31AM (#64470477)

    The problem isn't AI.

    The problem is mega-corporations that don't answer to the rule of law anymore and their megarich psychopathic billionaire tech bro CEOs deploying AI to cut expensive employees out of their expense sheets, even if AI isn't ready - even if it results in the collapse of the very economy they're operating in, when a vast majority of the people is unemployed and incapable of buying anything anymore.

    • I've been reading through this whole AI conversation, and most people just don't understand what today's "AI" really is, and what the real problem with it is. I was going to post, but seems like someone finally hit the nail on the head, so I'll just deliver a few more hammer blows to drive the point home.

      I say "AI", because we need to stop calling it Artificial Intelligence. We should be calling it "Amalgamation of Information", because that's what the algorithm does. It takes massive, massive amounts of

  • There's no way to stop this research, though it is possible to ensure only bad people develop it as a secret rush job.

  • I think the poster's protest is more about the usage of algorithms in ways that conceal responsibility.

    Technically, AIs are inference engines mashing learned data sets with algorithms. It makes no sense to protest that.
    What would be legitimate is to protest some uses of it.

    Governments and corporations using AI for decision making: using it to monitor the population, to make choices that impact people's wellbeing, with all the biases introduced into the data models and the horrific consequences that ensue, without any

  • Pandora's box has been opened and spread wide.

    Hundreds of billions in private capital are being deployed. AGI is real.

    Nobody is ready.

  • 1. Go outside.
    2. Piss.

  • Clueless (Score:4, Insightful)

    by bradley13 ( 1118935 ) on Tuesday May 14, 2024 @05:22AM (#64470619) Homepage

    The thing that gets me about protests like this: Most of the people involved have zero idea - null, nada, nothing - about the technology they are protesting. Maybe some of them are worried about their jobs. Others participate in every protest that comes along, because protesting is their hobby. Like the "student" protests for Gaza, where most of the participants were not actually students.

    Also like the Gaza protests, where basically none of the participants could find Gaza on a map: the AI protesters want "safe AI", but most likely none of them can tell you what dangers AI actually poses (they probably saw Terminator), or what they actually mean by "safe AI".

  • Protest "AI"? Protest unethical use of "AI"? I personally am afraid that the train has already left the station, for both. Protesting them at this point is nothing more than wasted effort.

    However, what might still work to a degree is rationing "AI"'s energy usage in the name of saving the planet. Greta, where are you when the world really needs you?

  • by headlessbrick ( 4515831 ) on Tuesday May 14, 2024 @05:52AM (#64470653)
    Don't know, but did you ask Chat GPT?
  • I protest AI by informing people on the Internet who know less about it than we do.
    There are many aspects of AI to talk about, but I have chosen the layoffs, the plagiarism engines, and how the crazy large power-hungry moonshots are utterly irresponsible in the age of global warming.

    By talking about how it will affect people, and things they care about, that is how you win people over.

    I also try to correct people on the Internet who spew delusions.
    And yes, I will attend the protest in Stockholm on May 21st.

  • Director of Pause AI US, Holly Elmore, wants the movement to be a "broad church"

    Most of these protesters don't even care (or sometimes even know) what they are protesting about. They want to have some meaning in their lives. And they think that acting out in the street (or online) will get them some.

  • I'm not against all AI. I like the idea of using AI to find new protein folding configurations, or using it for automatic pedestrian detection, or helping with industrial automation. What I really dislike is when someone asks an LLM a question and assumes they're getting back a well reasoned and authoritative response. The whole point of an LLM is word/token prediction. All it's doing is taking the text of a conversation up to a certain point and statistically predicting what the next word or token is,
  • There is a time when the operation of the machine becomes so odious, makes you so sick at heart, that you can’t take part; you can’t even passively take part, and you’ve got to put your bodies upon the gears and upon the wheels, upon the levers, upon all the apparatus, and you’ve got to make it stop. And you’ve got to indicate to the people who run it, to the people who own it, that unless you’re free, the machine will be prevented from working at all!
    –Mario Savio

    ht [youtube.com]

  • they just expect you to pay to clean up the mess they make afterwards,
  • Like any other tech, AI in its present state has applications that are already benefiting us. I want it to scan my medical images, translate languages, and edit photographs. I want it to go on learning to drive.

      Speculative assertions about some evil thing it might do in the future are not an argument against the good things it is already doing right now.

  • The time to protest would have been before we let corporations, trusts and billionaires get as powerful as they are. Short of a mass protest or boycott, AI is going to happen.
  • Make sure it's looking at all the wrong places.
  • I have been protesting AI development by praying every morning. I am sure this is the most effective way.
