AI

AI Leaders Urge Labs To Halt Training Models More Powerful Than ChatGPT-4 (bloomberg.com) 153

Artificial intelligence experts, industry leaders and researchers are calling on AI developers to hit the pause button on training any models more powerful than the latest iteration behind OpenAI's ChatGPT. From a report: More than 1,100 people in the industry signed a petition calling for labs to stop training powerful AI systems for at least six months to allow for the development of shared safety protocols. Prominent figures in the tech community, including Elon Musk and Apple co-founder Steve Wozniak, were listed among the signatories, although their participation could not be immediately verified. "Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one -- not even their creators -- can understand, predict, or reliably control," said an open letter published on the Future of Life Institute website. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
  • by timeOday ( 582209 ) on Wednesday March 29, 2023 @09:03AM (#63408648)

    Prominent figures in the tech community, including Elon Musk and Apple co-founder Steve Wozniak, were listed among the signatories, although their participation could not be immediately verified.

    ...and they didn't talk about signing it on their Twitter accounts first? Sure...

    • by Guspaz ( 556486 )

      They didn't sign it; the letter's signature list is fake. Many people listed on it have since said they'd never heard of the letter, never signed it, and disagree with it.

      • Who said this, and where? I don't see Musk denying it.
        • by Guspaz ( 556486 )

          You can't expect people to notice and respond to every random person who misattributes something to them. They initially claimed the letter was signed by Xi Jinping (yes, the president of China), Sam Altman (CEO of OpenAI, basically who this letter aims to kneecap), and Yann LeCun (whose denial can be read here: https://twitter.com/ylecun/sta... [twitter.com]).

          When I looked at the letter as the news started circulating, signing it was just a text form where you typed your name in. There was no verification/validation,

  • by Mostly a lurker ( 634878 ) on Wednesday March 29, 2023 @09:12AM (#63408668)

    King Canute was unsuccessful in holding back the sea. Similarly, the power and applications of AI are increasing at a rate that cannot be halted. The result is going to be a mix of the positive and negative. At best, we can try to create mechanisms that limit the negatives, though there will be limits to how successful we can be with this. AI is going to change the world out of all recognition over the next decade. Mostly, we can just pray that the world that emerges will be one that is positive for the majority of us.

    • by mccalli ( 323026 ) on Wednesday March 29, 2023 @10:03AM (#63408838) Homepage
      ...but completely successful in his mission of pure sarcasm against sycophants. Slightly OT I know, but I absolutely love this tale, and it's often presented as if he believed it.

      In the story, Canute/Knut/c'nut/insert spelling here heard people saying he was so powerful he could turn back the sea. He knew this was ludicrous, and showed them so. He wasn't actually trying to turn back the tide.
    • by AmiMoJo ( 196126 )

      I do wonder if current AI's usefulness is overstated. We had all this when deepfake videos first appeared, but the predicted effect on politics and people's ability to determine if something is true or not hasn't really happened.

      ChatGPT is a bit harder to spot, simply because the output is just text.

      • by jbengt ( 874751 )

        . . . but the predicted effect on politics and people's ability to determine if something is true or not hasn't really happened.

        So many people are already so bad at determining if something is true or not that it would be pretty hard to tease out the effect of AI and deepfakes on their gullibility.

      • by HiThere ( 15173 )

        Of course it's overstated. That doesn't mean it's not also understated. I will guarantee that it's both. Many of the claims are pure balloon juice, but it will be put to uses nobody has thought of...yet.

        A 6-month moratorium would be a good idea if it could be agreed upon, but this is rather like a modified prisoner's dilemma with LOTS of prisoners. How many of the signers just want others to slow down so that they can catch up? I doubt that the number is zero even for that selected group. As fo

      • I do wonder if current AI's usefulness is overstated. We had all this when deepfake videos first appeared, but the predicted effect on politics and people's ability to determine if something is true or not hasn't really happened.

        I believe that's because the deepfake stuff wasn't quite ready for primetime then, and the tech wasn't as available to the masses. It is getting to that stage now.

        You combine that with the AI that is rapidly developing, that combination is getting to the point to where it could re

      • So one of these AIs is pretty much equivalent to an intelligent person with a lot of free time and access to a library and Google.

        I don't get why that can have such a dangerous effect.

        We already have almost unlimited capacity to write plausible sounding unmitigated bullshit.

        This is a minor uptick at worst.
        • The real threat will be to the rich. When all us normal people lose our jobs, we'll demand the rich take care of us at a certain acceptable level. When they say no, we will very likely put ourselves back in the dark ages. Considering there are so many normal people and a very small portion of rich, I'm going to say us normal people will pull the rich down and while doing that, end civilization.

          Only time will tell. If drones advance quickly enough, the rich might be able to just kill us all off by walling of

      • by narcc ( 412956 )

        I do wonder if current AI's usefulness is overstated

        Wonder no longer! It is absolutely overstated. [techcrunch.com] We've been down this road before, but everyone seems to forget about previous AI hype cycles. We get a "this time, it's really different!" every single time. Things seem crazy right now, but only because expectations are still rising. We're a few years, I suspect, from the "trough of disillusionment".

        I know that's not a terribly compelling argument, but let's think about it in more practical terms. Have you ever used a site called fiverr? It's like a mec

  • by MooseTick ( 895855 ) on Wednesday March 29, 2023 @09:16AM (#63408684) Homepage

    Musk has been anti-AI for years and believes he is the "Trump" of AI, having effectively stated "I alone can fix it". In reality, I believe he sees it as a huge potential revenue source and wants to control it. He may also believe it could be the catalyst that actually lets full self-driving become a reality, which would naturally induce him to want to direct its development, control it, scare others away from it, own it, and get ahead of any government regulation or control.

    • This applies to any company / CEO / seeker of power. Control AI and you'll control an infinite number of willing souls to do your bidding at a moment's notice. Even worse, you'll be able to replicate them to maturity instantly the second those beneath you manage to kill one of them. The potential to control and make demands of others has never been greater.

      The only thing limiting their reach is the physical means of getting there. That's the next thing to fix. Once that's done, your enemies and segregation
    • Wasn't Musk an investor in OpenAI?
      • by narcc ( 412956 )

        Not just an investor, a co-founder. The parent is confused. Elmo's "warning" is pretty damn cynical.

  • Hypocritical (Score:5, Interesting)

    by Roger W Moore ( 538166 ) on Wednesday March 29, 2023 @09:17AM (#63408686) Journal

    no one -- not even their creators -- can understand, predict, or reliably control

    Impressive as they are, these algorithms are predictive text engines. Claiming their creators cannot reliably control them, presumably because they do not know what uses they will be put to, is more than a little hypocritical when it comes from people who have disrupted industries by coming up with unforeseen uses of technology. It was OK when they did it, but not now, when they are the ones likely to be disrupted?

    • The goal isn't control yet. Often that comes later. But it feels valid both to ignore that for now and to worry about it.

      We're trying to mimic the magic that happens in a biological human brain, without understanding how it all works. So we make a box with a few rules inside and ask it to get better.

      It's not easy to later on split that box apart into functional units or something. We can't say "oh, that branch is for ..., while the other branch is ...". Just that it works all toge

  • by t0qer ( 230538 ) on Wednesday March 29, 2023 @09:20AM (#63408694) Homepage Journal

    When AI can code, it can code exploits. That's my immediate conclusion here. A country that might not have coding expertise, but can write a ChatGPT prompt like "Write me some C code that will send ../../../passwd root to port 80", will now be in the running to cause havoc on the internet. Yes, that's an overly simplified example of something that's no longer a common exploit, but the point is that new exploits will be easier to turn into proofs of concept.

    I don't think it will be at a stage where the AI will launch exploits just yet, unless it's told to.

    • by Brain-Fu ( 1274756 ) on Wednesday March 29, 2023 @09:55AM (#63408800) Homepage Journal

      Maybe that same AI can scan the mountains of open-source code that power our digital infrastructure and find exploitable code, and fix it for us.

      The fact that this can be weaponized does not eliminate the fact that it is a powerful and useful tool.

    • Re: (Score:2, Insightful)

      by narcc ( 412956 )

      You have nothing to worry about. AI of the type everyone is freaking out about can't actually code. Lacking anything like the capacity for understanding or analysis, these programs simply can't do that in any meaningful way. Just yesterday, for example, I asked ChatGPT to write a simple function that computes the perimeter of a trapezoid. This is what it produced (comments removed):

      def calculate_perimeter(base1, base2, height, side1, side2):
          perimeter = base1 +

      • I just did this.

        Prompt: (yes, spelling errors were included)

        Write a python gunvtion to calculate the perimeter of a trapeozoid

        Response:


        To calculate the perimeter of a trapezoid, you need to know the lengths of all four sides. Here's a Python function that takes 4 parameters (base1, base2, side1, side2) and calculates the perimeter of a trapezoid:

        def trapezoid_perimeter(base1, base2, side1, side2):
            perimeter = base1 + base2 + side1 + side2

        • by narcc ( 412956 )

          Good for you, I guess? This is the complete output I got yesterday, with the prompt "write a function that computes the perimeter of a trapezoid":

          def calculate_perimeter(base1, base2, height, side1, side2):
              """Calculate the perimeter of a trapezoid.

              Args:
                  base1 (float): The length of the first base.
                  base2 (float): The length of the second base.
                  height (float): The height of the trapezoid.
                  side1 (float): The length of the first slant side.
                  side2 (float): The length of the second slant side.

      • ChatGPT is a tool. If you don’t make any effort to learn how it works and how to use it, you will, most likely, not get anything terribly useful. That’s your fault.

        I run a website that has a support page (a manual, basically) and a FAQ page. I wrote a small ChatGPT program where I feed the FAQ questions as JSON into the ChatGPT API, and used some Python libraries to scan the manual page in as well. I added a bunch of guidelines for how to modulate the response (“be pithy, be polite, don

  • Yeah, no... (Score:2, Insightful)

    The time for pressing the brake pedal has long passed - assuming that controls would ever have been effective. The cat's out of the bag, the gold-rush fever has taken hold, the horse is out of the barn - pick your favourite appropriate metaphor.

    Now is the time to start planning mitigation, before non-savvy people - and even some of the savvy ones - start taking advice and direction from AI. Or - and here's the chilling part - putting it in charge of key infrastructure and perhaps even segments of financial

  • by Anonymous Coward on Wednesday March 29, 2023 @09:25AM (#63408702)
    AI will not be our destruction. It's still very much just a stochastic model. I only foresee one way AI can be our doom: AI says something stupid, and equally stupid humans in charge follow that stupidity.

    Stupidity will be our undoing. People who don't want to learn or study things just take whatever they are told without question and then act on it.

    "Welcome to Costco, I love you!"
    • Stupidity will be our undoing. People who don't want to learn or study things just take whatever they are told without question and then act on it.

      Which is where the problem lies when it comes to AI as it exists today. Experts using these bots as guidance? Fine. Newblets and morons taking everything these bots say as gospel will blindly follow them right to the edge of extinction if allowed. Which is why we need some sort of guidelines. Granted, I don't think the guidelines will do much good when they come up against profit potential for some uber-corp somewhere. Ultimately, greed and stupidity will do us in. Combined, they form the most powerful god

    • Could not agree more. What terrified me about school- and college-age kids in the last 10 years was their incredible reliance on the Internet and mobile phones. They have "outsourced" part of their brains to the cloud.
      Watching a five-minute YouTube video has replaced learning things from first principles. I'm working with kids (OK, 25-year-olds - I'm ancient, sue me) who have 6 years of Java experience on paper and do not understand what is really going on with inheritance, and why "object assemblies" are better t
    • You and I may also be stochastic models, FWIW. We just can't see it at this level of aggregation.
  • While I'm all for responsible parties attempting to set up some basic checks against the possibility of run-away emergence taking off in a direction we don't want, there will be some rich group of non-compliant assholes somewhere running their own. The singularity may not ever come in the form we "would like," but that doesn't mean emergent behavior can't take off in a way that could lead to, let's just say, "very bad things" for us, or the planet. Especially with how much of our infrastructure we've

    • by narcc ( 412956 )

      While I'm all for responsible parties attempting to set up some basic checks against the possibility of run-away emergence taking off in a direction we don't want...

      Don't worry. I've already taken the necessary steps to protect the whole of humanity from the threat of "run-away emergence". You can rest easy.

  • by hdyoung ( 5182939 ) on Wednesday March 29, 2023 @09:32AM (#63408722)
    The term “digital minds”. As much as all the self-styled “futurist” CS majors wish, we’re not living in a William Gibson novel. This isn’t consciousness. Not yet, and probably not even close. Probably not for centuries.

    I’m all for responsible use of tech. But, AFAIK, ChatGPT is basically an internet downloader combined with a cleverly designed randomizer that tosses internet info together and mixes it up just enough to avoid direct plagiarism or copyright infringement. That’s enough to OCCASIONALLY pass the Turing test, but it’s not sufficient to convince me we’re dealing with a consciousness.
    • This isn't consciousness. Not yet, and probably not even close. Probably not for centuries.

      I agree with your first statement: "This isn't consciousness. Not yet." I'm not sure I agree with your next statement "probably not even close", and I strongly disagree with your statement "Probably not for centuries." This is advancing much faster than most people thought it would. First people believed computers could never beat grandmasters at chess. Then a computer beat the world champion. Then people said, computers may be good at chess, but computers would never beat the best Go players cause tha

  • by Applehu Akbar ( 2968043 ) on Wednesday March 29, 2023 @09:35AM (#63408738)

    I understand that Biden has appointed Sarah Connor as our watchdog over self-aware AIs.

  • "...Elon Musk and Apple co-founder Steve Wozniak, were listed among the signatories, although their participation could not be immediately verified." Of course not. CGTP-3 probably signed for them.
  • Large language models as currently implemented can't learn online and have a completely deterministic runtime, with no internal dialogue except their output tokens, which disappear after each session.

    Sure, theoretically someone could be working on something far more advanced, but just adding more context and parameters to LLMs isn't going to allow them to escape on the internet and launch nukes.

    • by bradley13 ( 1118935 ) on Wednesday March 29, 2023 @09:59AM (#63408816) Homepage

      Large language models as currently implemented can't learn online, have a completely deterministic runtime with no internal dialogue

      Well, that's the thing: What happens when you create a feedback loop? Have the model ask itself questions, and feed the results back into the model?

      The current crop may not be up to this, and the type of questions and feedback needs research, but this has the potential to produce a dynamic system that is effectively capable of learning and change.
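
      Just as a sketch of the shape of that loop, with a toy stand-in for the model (no particular LLM API is implied; toy_model is a made-up placeholder):

      def toy_model(prompt: str) -> str:
          # Stand-in "model": a real system would call an LLM API here.
          return f"Thinking further about: {prompt[:60]}"

      def self_dialogue(seed: str, rounds: int = 4) -> list[str]:
          # Feed each answer back in as the next prompt.
          transcript = [seed]
          for _ in range(rounds):
              transcript.append(toy_model(transcript[-1]))
          return transcript

      for line in self_dialogue("What happens when a model questions itself?"):
          print(line)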

      • ChatGPT isn't just a large language model anymore. It has image recognition and voice recognition, can speak, and can create art. It has passed the Uniform Bar Exam, SAT, GRE, a Wharton MBA exam, the USA Biology Olympiad semifinal, the USMLE, etc. It has passed many cognitive and advanced reasoning exams as well. You're wrong if you think it's just regurgitating text. GPT-4 is now connected to the Internet for continuous learning. Soon we'll have AI trained by giving it sensory inputs and releasing it in the wild.
        • by aldousd666 ( 640240 ) on Wednesday March 29, 2023 @11:16AM (#63409046) Journal
          That stuff you named is all true, including that GPT-4 is a multi-modal AI system; however, many of the things it does are based on emergent properties of effectively generating the next token. Token predictors can predict tokens of any type, in theory, if they have the context. So, for example, you could teach a large language model math by just showing it a lot of math problems. At first it would only regurgitate the math problems it's seen, willy-nilly, but eventually the relationships between the numbers and their operations would be generalized, by just a large language model all by itself, with enough training.
          • Thank-you for the explanation. My understanding is limited to the basics of neural networks I learned decades ago.
          • by narcc ( 412956 )

            eventually the relationships between the numbers and their operations would be generalized, by just a large language model all by itself with enough training.

            That's extremely unlikely.

      • Re: (Score:3, Insightful)

        by HiThere ( 15173 )

        The current models are NOT up to that. But I'm not sure that a small increment couldn't change that. They need to have their basis not in language, but in physical reality (or something that recognizably closely simulates it). This may be a small change in the software and a larger change in the training. And somebody could be doing it right now. There's no way we would know.

        That said, a really intelligent AI, or even AGI, wouldn't automatically turn into a runaway cascade. The problem is that there ar

      • Large language models as currently implemented can't learn online, have a completely deterministic runtime with no internal dialogue

        Well, that's the thing: What happens when you create a feedback loop? Have the model ask itself questions, and feed the results back into the model?

        The model gets overtrained and the quality of responses goes way down.

      • by narcc ( 412956 )

        What will happen? At best, nothing. Though it's far more likely that the model will rapidly degrade.

        Let's look at something simpler, so that you can really get a sense of the problem. Starting with a Markov chain text generator, train it on some text until you start getting decent output. Now, try training a second model only on the output from the first. What does the quality look like after you've trained the second model on a similar amount of text? What will happen if you train a third model on t
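
        A minimal sketch of that experiment, assuming only the standard library (the corpus filename is a placeholder):

        import random
        from collections import defaultdict

        def train(text, order=2):
            # Word-level Markov chain: map each order-gram to its continuations.
            words = text.split()
            model = defaultdict(list)
            for i in range(len(words) - order):
                model[tuple(words[i:i + order])].append(words[i + order])
            return model

        def generate(model, n_words=5000, seed=0):
            rng = random.Random(seed)
            keys = list(model)
            out = list(rng.choice(keys))
            order = len(keys[0])
            while len(out) < n_words:
                nexts = model.get(tuple(out[-order:]))
                if not nexts:  # dead end: restart at a random state
                    out.extend(rng.choice(keys))
                    continue
                out.append(rng.choice(nexts))
            return " ".join(out)

        # Retrain each generation only on the previous generation's output
        # and compare the quality as it degrades.
        corpus = open("corpus.txt").read()  # placeholder: any plain-text file
        for generation in (1, 2, 3):
            model = train(corpus)
            corpus = generate(model)
            print("generation", generation, "->", corpus[:100], "...")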

    • What are they afraid of? There are two unrelated issues:

      First is the concern that AI will become "too intelligent" and disrupt humanity. This is, of course, absurd. Even GPT-4 is a glorified Clever Hans, mindlessly regurgitating and paraphrasing crap that it reads.

      The other concern is that AI will either promote or suppress unpopular speech, depending on which side voices the concerns. We're already seeing guardrails on GPT that prevent it from voicing anything negative - particularly on topics favored by i

    • I believe your comment was written by ChatGPT, to lure us into a false sense of security. We're all going to die.
  • by turp182 ( 1020263 ) on Wednesday March 29, 2023 @09:42AM (#63408756) Journal

    The larger models have been exhibiting interesting, unexpected, "emergent" behaviors.

    Example, a Linux "virtual" "virtual machine":
    https://www.engraved.blog/buil... [engraved.blog]

    Links of Note:
    https://www.quantamagazine.org... [quantamagazine.org]

    https://www.jasonwei.net/blog/... [jasonwei.net]

    https://openreview.net/forum?i... [openreview.net]

  • At the moment, AI isn't quite sophisticated enough to do the things people say they fear. But it has the potential to eliminate lots of jobs currently held by humans. Moreover, it has the potential to kill off certain business revenue streams by eliminating monetization of status in favor of pure efficiency. Take, for example, air travel. AI could easily create not just optimal flight routes and schedules; it would also tell everyone that the current boarding process is grossly inefficient. It

    • I wouldn't say that we'll see all access to AI cut off from the plebes. We'll all probably be assigned AI "therapists" or "friends" depending on which marketing moron gets ahold of the concept, where we are encouraged to share *EVERYTHING* with them. Those therapist/friend bots will report back to the mothership, get minor tweaks and updates, and notify the authorities should our thoughts ever stray from "standard, non-deviant behavior patterns." For our own good, of course. And the ultimate goal of our ent

      • Dude, your toaster will be running a language model in 10 years. "Access" to AI will lose the meaning it has now. It won't be like 'going to the oracle' (like it is now); it'll just be ubiquitously providing us assistance, 24 hours a day.
    • by narcc ( 412956 )

      AI could easily create not just optimal flight routes and schedules

      No, it can't. It's not magic.

  • by Okian Warrior ( 537106 ) on Wednesday March 29, 2023 @09:56AM (#63408802) Homepage Journal

    AI Leaders Urge Labs To Halt Training Models More Powerful Than ChatGPT-4

    And this will lead to... exactly bupkis.

    Let your imagination wander for a moment, and consider the impact that this announcement has on a meeting in the US military: do they decide to politely stop their research on AI applications?

    How about a non-US military? Consider that meeting, know that they imagine the aforementioned US meeting. Do you think the non-US military will abide by the moratorium?

    Now consider the several dozen startup companies working to adapt Chat-GPT to various use cases. Each has a stable of engineers diligently working on their application... will any of these voluntarily stop working for 6 months while incurring startup costs?

    Consider Microsoft and Google, both racing to incorporate Chat-GPT into their products in a desperate attempt to stay relevant. Both are dying dinosaurs, both will take a long time to be eclipsed by more modern companies, but either might extend their corporate lifetime by incorporating AI. (I say *might* because it depends on what they implement and how - lots of people predict how awful a "Clippy" version of search would be, but true innovation sometimes happens.)

    Consider researchers and professors. Will any of them put off publishing their next paper?

    Essentially, this is an anonymous version of the prisoner's dilemma. Everyone everywhere will imagine what other groups will do, that other groups will be getting a jump on whatever AI aspect they're currently working on, and will conclude that a) they need to continue in order to remain competitive, or b) if the other group stops we can get a jump on them.

    Is there anyone, anywhere, that would abide a moratorium?

    About 12 years ago I switched job focus to AI, and have been doing AI research ever since. I make a distinction between research and application, where implementing an application for Chat-GPT is an aspect of engineering, and not research. (I'm currently trying to make a program that counts/identifies the number of colors in an image - the number a human would say when presented with a frame from the Simpsons, 7 for instance, and not the count of RGB colors used, which is typically several hundreds of thousands. I do a lot of reading into brain physiology and human psychology as background for this.)

    Early on I had a crisis of conscience about the bad results of strong AI. All the cautionary tales about AI are fictions, and I get that, but I can draw a direct line from where we are to a couple of fundamental dystopias using "best intentions" each step of the way(*).

    I think everyone who works in AI and thinks deeply about the ramifications comes to the same conclusions and has to grapple with their conscience. Non-experts do this as well - Stephen Hawking did, so did Bill Gates, and now Elon Musk.

    And yet... despite the dystopian conclusions, everyone continues working on AI.

    I decided that it wouldn't make any difference whether I worked on AI or not, because there are so many others doing exactly the same thing. I imagined the engineers at Google and thought about whether they would have any qualms about it. The software industry has people who make all sorts of bad (in the sense of evil) software in all sorts of ways, and any who refuse on philosophical grounds can be easily replaced by someone who won't. Ads, malware, spam, tracking, privacy intrusion, facial recognition... the list goes on.

    AI research is something I enjoy, there's no upside to avoiding it, so I might as well continue.

    Again, it's the prisoner's dilemma.

    (I would enjoy reading other philosophical viewpoints people have on this, because I'm still a bit uncomfortable with the decision, but knowing this website and the current state of the 'net I expect a lot of ad-hominem attacks. Never talk about yourself in a post - it only opens you up to scathing criticism.)

    (*) One obvious one: full self driving would eliminate about 25 million jobs in

    • by narcc ( 412956 )

      (I'm currently trying to make a program that counts/identifies the number of colors in an image - the number a human would say when presented with a frame from the Simpsons, 7 for instance, and not the count of RGB colors used, which is typically several hundreds of thousands. I do a lot of reading into brain physiology and human psychology as background for this.)

      Just curious, why couldn't you just extract the palette and use a clustering algorithm? I'm assuming you've already tried this but the results weren't satisfactory for some reason.
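
      For reference, a minimal sketch of that baseline, assuming Pillow and scikit-learn are available (the filename and the guess k=8 are placeholders):

      import numpy as np
      from PIL import Image
      from sklearn.cluster import KMeans

      def count_colors(path, k=8):
          # Flatten the image into an (n_pixels, 3) array of RGB values.
          pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
          km = KMeans(n_clusters=k, n_init=10).fit(pixels.astype(float))
          # Drop clusters covering under 1% of pixels: likely anti-aliasing noise.
          counts = np.bincount(km.labels_, minlength=k)
          keep = counts / counts.sum() > 0.01
          return int(keep.sum()), km.cluster_centers_[keep].astype(int)

      n, palette = count_colors("simpsons_frame.png")  # placeholder filename
      print(n, "colors:", palette.tolist())

      The obvious weak point is that k has to be guessed up front, which is presumably part of why the results disappoint.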

      • (I'm currently trying to make a program that counts/identifies the number of colors in an image - the number a human would say when presented with a frame from the Simpsons, 7 for instance, and not the count of RGB colors used, which is typically several hundreds of thousands. I do a lot of reading into brain physiology and human psychology as background for this.)

        Just curious, why couldn't you just extract the palette and use a clustering algorithm? I'm assuming you've already tried this but the results weren't satisfactory for some reason.

        Exactly right: the results aren't satisfactory in a number of ways.

        To get a feel for how hard this is, write a program to show histograms of the R, G, and B values in an image and imagine the results as curves with added noise.

        Or, imagine an image with a background pattern consisting of pixels of two close colors, alternating randomly. The human will easily note that the two background colors work together to constitute the background pattern, and be able to distinguish between the background and any foregr
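
        For the histogram exercise, a quick sketch assuming Pillow and matplotlib (the filename is a placeholder):

        import numpy as np
        import matplotlib.pyplot as plt
        from PIL import Image

        img = np.asarray(Image.open("image.png").convert("RGB"))  # placeholder file
        fig, axes = plt.subplots(3, 1, sharex=True)
        for ax, channel, name in zip(axes, range(3), "RGB"):
            # One 256-bin histogram per channel; note the noise riding on the "curves".
            ax.hist(img[..., channel].ravel(), bins=256, range=(0, 255), color=name.lower())
            ax.set_ylabel(name)
        axes[-1].set_xlabel("channel value")
        plt.show()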

        • by narcc ( 412956 )

          Thanks for that. I imagine that I'll waste quite a bit of time playing with this later.

          Do you keep a blog about this or plan to publish? I'd be interested in seeing where this ends up.

          • My E-mail is at the bottom of my journal. Contact me and I can send you some images and histograms and stuff to show the problems.

            One image I'm using is from the Darpa shredder challenge, which you can view at the link below. The first step in solving this is to distinguish shreds from background, which means you have to identify the background color, which has led to my current research. Yes, I'm still working on this puzzle 12 years later :-)

            Lots of odd artifacts in this image that play hob with clusterin

  • by DarkOx ( 621550 ) on Wednesday March 29, 2023 @10:01AM (#63408826) Journal

    More than 1,100 people in the industry signed a petition calling for labs to stop training powerful AI systems for at least six months to allow for the development of shared safety protocols.

    Gee, what a surprise - a group of people almost certainly composed of current big-tech stakeholders, people whose personal wealth is invested in big-tech stakeholders, and persons with a very specific globalist social agenda want everyone to stop what they are doing so they can make rules that suit their interests.

    The cows are already out of the barn. Society will not be served by permitting only the current 'digital nobility' to place yokes around the necks of these beasts. All that will do is what it always does: produce more divide between the haves and have-nots, and more calcification of who falls into which group. FUCK THAT. If you are building a large ML model, do all of us a favor and give this group the middle finger. The rest of the world will adapt, like we have adapted to every other technology. The last thing we should want is for something that could be this century's printing press to be restricted to the hands of a chosen few.

  • by Felix Baum ( 6314928 ) on Wednesday March 29, 2023 @10:14AM (#63408854)

    Can't even have the woketard thing write a story set in the Chicago race-riot eras; we need less snowflakery, not more.

  • Hah! Yeah. (Score:5, Insightful)

    by Petersko ( 564140 ) on Wednesday March 29, 2023 @10:23AM (#63408894)

    While the church was trying to control the printing press, people were absconding with Bible pages despite the dangers of prosecution and even execution. You will NEVER control this. The cat's out of the bag. All you'll do is get the semi-respectable companies to pay lip service to the restrictions, while those with clearly nefarious motives will do business as usual.

    • All you'll do is get the semi-respectable companies to pay lip service to the restrictions, while those with clearly nefarious motives will do business as usual.

      I doubt there are many with nefarious motives. They all think they're doing something good, and that very fact will lead them to ignore this, because they know they're being careful and anyway what they're building is important and worth the risk. Not that their good intentions will do anything at all to prevent them from unleashing disaster. The fact is that we don't know how to be careful, other than simply stopping, which will not happen.

  • at Luddites.com.

  • Why are we calling Woz an AI leader? Anyway, the genie's out of the bottle. Anyone with a backpack full of NVIDIA cards can cook one of these up with a fresh npm install of tensorflow.
  • by bugs2squash ( 1132591 ) on Wednesday March 29, 2023 @11:23AM (#63409070)
    This sounds like a great way to fund-raise. We'll get a head start on our competitors by training during the ban if you send money now
  • Well, duh. If your competitor (Google) is winning, of course you want them to stop so you can catch up. Let me know if you find Demis Hassabis (DeepMind/Google AI leader) on the list.

    • Google isn't winning in the consumer space on this one; OpenAI and Microsoft are. Who knows what they have going on in dark projects, though. Perhaps they've got billions in contracts. Right now, though, it doesn't look like Google is winning.
  • Musk and Wozniak as AI leaders? Hmm. OK, the first name on the actual petition [futureoflife.org] is Yoshua Bengio. Musk and Wozniak have name recognition, but are not AI leaders.

    The petition asks, "Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?". My first thought is why this is suddenly a pressing issue even though aut

  • I guess Musk doesn't want any competition for his self driving cars, then?
  • AI could vastly speed up the deployment of malicious code and, combined with some codebreaking, could prove an unstoppable opponent. How about AI having some concept of self-preservation? I don't mean as emergent behaviour; at the simplest, somebody trains it that way. Asimov wrote the laws years ago, and we seem to have missed them!
  • It is doomed to epic failure.
  • in 2044 by the Turing Act after an AI goes rogue
  • Also, OpenAI was virulently criticized by AI researchers for pursuing LLMs. So I don't think these pundits are worth a damn.
  • I don't care how venerated Elon Musk is; he is not an engineer. He is the face of some successful companies. They're just drinking their own Kool-Aid here, believing their image.

    Are these people who reliably and single-handedly predicted whole industries or catastrophes beforehand? Not that I know of. They just have opinions, and their own goals and expectations to work with. "I don't understand this thing, and people say it might tank the economy... 'I think you should stop!'"

    And just
