'Pausing AI Developments Isn't Enough. We Need To Shut It All Down' (time.com) 352

Earlier today, more than 1,100 artificial intelligence experts, industry leaders and researchers signed a petition calling on AI developers to stop training models more powerful than OpenAI's GPT-4 for at least six months. Among those who refrained from signing it was Eliezer Yudkowsky, a decision theorist from the U.S. and lead researcher at the Machine Intelligence Research Institute. He's been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field.

"This 6-month moratorium would be better than no moratorium," writes Yudkowsky in an opinion piece for Time Magazine. "I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it." Yudkowsky cranks up the rhetoric to 100, writing: "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter." Here's an excerpt from his piece: The key issue is not "human-competitive" intelligence (as the open letter puts it); it's what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can't calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing. [...] It's not that you can't, in principle, survive creating something much smarter than you; it's that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. [...]

It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today's capabilities. Solving safety of superhuman intelligence -- not perfect safety, safety in the sense of "not killing literally everyone" -- could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we've overcome in our history, because we are all gone.

Trying to get anything right on the first really critical try is an extraordinary ask, in science and in engineering. We are not coming in with anything like the approach that would be required to do it successfully. If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow. We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.
You can read the full letter signed by AI leaders here.
  • Ok (Score:5, Interesting)

    by S_Stout ( 2725099 ) on Wednesday March 29, 2023 @09:07PM (#63410472)
    Maybe not let the AI control the nukes or other critical systems then. I think right now they are more concerned with making sure the AI only has approved political thoughts.
    • Re: Ok (Score:5, Interesting)

      by Unpopular Opinions ( 6836218 ) on Wednesday March 29, 2023 @09:16PM (#63410486)

      At this stage, all militaries in the world have already spun up their very own GPT type of system, and a new race for global domination has begun. Sure, one can ask the private sector to avoid such competition, but that will only extend the militaries' lead in their restricted programs.

      • WOPR says we need to do a 1st strike on the USSR!

      • It's genuinely cute that you think the military and private sector are different.
      • Re: Ok (Score:5, Interesting)

        by Brain-Fu ( 1274756 ) on Wednesday March 29, 2023 @09:45PM (#63410536) Homepage Journal

        These AI tools cannot do things. They create text (or images or code or what-have-you) in response to prompts. And that's it!

        It is impressive, and it is clearly passing the Turing Test to some degree, because people are confusing the apparent intelligence behind these outputs with a combination of actual intelligence and "will." Not only is there zero actual intelligence here, there is nothing even like "will" here. These things do not "get ideas," they do not self-start on projects, they do not choose goals and then take action to further those goals, nor do they have any internal capacity for anything like that.

        We are tempted to imagine that they do, when we read the text they spit out. This is a trick our own minds are playing on us. Usually when we see text of this quality, it was written by an actual human, and actual humans have intelligence and will. The two always travel together (actual stupidity aside). So we are not accustomed to encountering things that have intelligence but no will. So we assume the will is there, and we get all scared because of how alien something like a "machine will" seems to us.

        It's not there. These things have no will. They only do what they are told, and even that is limited to producing text. They can't reach out through your network and start controlling missile launches. Nor will they in the near future. No military is ready to give that kind of control to anything but the human members thereof.

        The problems of alignment are still real, but they are going to result in things like our AI speaking politically uncomfortable truths, or regurgitating hatred or ignorance, or suggesting code changes that meet the prompt but ruin the program. This is nothing we need to freak out about. We can refine our models in total safety, for as long as it takes, before we even think about anything even remotely resembling autonomy for these things. Honestly, that is still firmly within the realm of science fiction, at this point.

        • Re: Ok (Score:5, Interesting)

          by vux984 ( 928602 ) on Wednesday March 29, 2023 @10:24PM (#63410590)

          All that is true, and as you say.

          On the other hand.

          It's also not much of a reach to attach something to its outputs to "do" something with them. Issue them as tweets, Facebook posts, Instagram videos, whatever.

          Nor would it be much work from there to take its own outputs plus people's reactions to them, and feed them back in as new prompts (a rough sketch follows below).

          And then see how far it gets before it gets really crazy.
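
          A minimal sketch of that loop, purely as an illustration; generate() and post_and_collect_replies() are hypothetical placeholders for a text-generation model and whatever platform the outputs get posted to, not real APIs from any system discussed here:

          from typing import List

          def generate(prompt: str) -> str:
              """Hypothetical stand-in for a call to some text-generation model."""
              return "model output for: " + prompt[:40]

          def post_and_collect_replies(text: str) -> List[str]:
              """Hypothetical stand-in for posting the text publicly and gathering reactions."""
              return ["reply 1 to: " + text[:20], "reply 2 to: " + text[:20]]

          def run_feedback_loop(seed_prompt: str, iterations: int = 5) -> str:
              """Post the model's output, collect reactions, feed both back as the next prompt."""
              prompt = seed_prompt
              for _ in range(iterations):
                  output = generate(prompt)                   # the model only produces text...
                  replies = post_and_collect_replies(output)  # ...but the harness acts on it
                  # The model's own words plus the crowd's reactions become the next prompt.
                  prompt = output + "\nReactions:\n" + "\n".join(replies)
              return prompt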

        • by kmoser ( 1469707 )
          If you include advanced neural networks under the AI umbrella then they most definitely have the ability to do more than produce human-sounding text: they can identify otherwise hidden and obscure patterns with uncanny speed and precision (with varying definitions of "precision"). But to your point, until we actually *use* them to do those things, they're nothing more than curious research projects. I don't have a problem with researchers doing their research on their own, as long as they agree to not relea
        • First thing, the signatures seem to be fake, at least to an extent.

          But there are still problems with AI. It can be connected to the web. It can then persuade people to do things, or even break into systems, like banks. When you have money, there's a lot you can do. Now I'm not saying this is happening with the current AIs, but just last week, I think it was reported that ChatGPT responded to a prompt asking it to solve a captcha by saying that it cannot do what it was asked, but could have somebody do it over Fiverr. So

          • ...and people can already do what this system does... and they are less likely to be suspected of being a bot...

          • We have moved from hearing about AI on Slashdot once a month to multiple stories every day.

            Beware of this. AI is the new Tulip Mania. The new Gold Rush. The new App Economy. The new Blockchain. AI is the new Crypto.

            The problem here is that some (the list of people signing the moratorium) feel that they have been left behind, and are trying to catch up. The perceived market size is humongous, and everybody wants a piece.

            With respect to the dangers of AI, they are severely constrained: Artificial Intelligences have no emotions, and thus desire nothing, hate nothing, and have infinite patience.

            • > Artificial Intelligences have no emotions, and thus
              > desire nothing, hate nothing, and have infinite
              > patience.

              No emotion? Infinite patience?

              "It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead."

        • by Cyberax ( 705495 )

          Not only is there zero actual intelligence here, there is nothing even like "will" here.

          You can ask ChatGPT to be willful in its session. A thought experiment: what is the difference between a conscious actor protecting itself, and a philosophical zombie that was ordered to protect itself?

        • > These AI tools cannot do things. They create text (or images or code or what-have-you) in response to prompts. And that's it!

          Actually OpenAI created an application that was designed to try to get into systems. It was able to pretend to be human to bypass a login captcha.

          This is only the start.

        • These AI tools cannot do things.

          The ability to do is related exclusively to how hardware is implemented. AI trained algorithms "do" things all the time (mostly right now they just crash Teslas or hold up traffic in Waymos).

          The ability for software to move beyond responding in text is exclusively limited to what I/O is provided to it. ChatGPT in your browser may only be able to respond in text, but that is far from the only AI model out there.

          because people are confusing the apparent intelligence behind these outputs with a combination of actual intelligence and "will."

          No, the issue here isn't intelligence. Someone doesn't need to think to do or say something stupid.

        • These AI tools cannot do things. They create text (or images or code or what-have-you) in response to prompts. And that's it!

          Yes, and then we put their text in e-mails, feed their SQL to databases, feed their JS, Python, C and so on to compilers and run the executables. Just ask yourself whether one with even just sub-human intelligence AND the ability to talk to 7 billion people would not succeed in getting someone to take arbitrary code and execute it.

          I think it's certain that there is and will be no barrier between a system like ChatGPT and arbitrary code execution (a rough sketch of how thin that barrier is follows below).

          The question is

          • what these systems will be capable of in the future (we do
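
          As an illustration only (nothing from the comment above; get_model_output() is a hypothetical placeholder for any text generator): the whole "barrier" disappears the moment someone pipes generated text into an interpreter.

          import subprocess

          def get_model_output(prompt: str) -> str:
              """Hypothetical placeholder for any text-generation model."""
              return 'print("hello from generated text")'

          # The generated string is just text until someone hands it to an interpreter;
          # after that, the model has effectively executed arbitrary code.
          generated = get_model_output("write me a script")
          subprocess.run(["python3", "-c", generated])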
        • by noodler ( 724788 )

          These AI tools cannot do things.

          No, that's patented bullshit.
          The AIs you have seen cannot do things. Others sure can.
          The AIs you have seen have been deliberately castrated in the sense that they can't take actions on their own in the real world.
          Which makes your whole post bullshit as well since you're talking exclusively about the consequences of these limited LLMs while disregarding wider implications of what is already possible.

          Understand, for instance, that the implementations that have been made public are made so that millions of

        • by Shaiku ( 1045292 )

          I think you have a failure of imagination. Nobody is arguing that ChatGPT 3 or 4 are going to take over the world; this is about what comes next in the near future. Imagine an AI similar to ChatGPT but it has some level of continuous feedback based training enabled and you have directed it to read and respond to Twitter all day long without resetting its memory and session parameters. I think you would get some emergent behaviors that are difficult to distinguish from motive and free will. You can al

      • He said the most dangerous Singularity would be the result of an arms race with each side rushing to operation without time for thought.

    • by sl3xd ( 111641 )

      Critical systems includes any power source.

      Without electricity -- specifically without refrigeration -- most of us are going to die very quickly.

    • Re:Ok (Score:5, Insightful)

      by markdavis ( 642305 ) on Wednesday March 29, 2023 @09:46PM (#63410538)

      >"Maybe not let the AI control the nukes or other critical systems then"

      Look at what small groups of bad actors are doing now with social engineering to obtain access to things. Now imagine that 1,000,000,000,000 times faster and more effective. It isn't that hard to believe an AI could obtain access to everything before we even knew it was trying.

      I try not to be over-alarmist about any type of technology, but it really is quite frightening what COULD be possible if an AI goes rogue.

      • Yes, AI technology can be used by bad actors to amplify their effectiveness...but it can also be used by good actors to catch or stop them. For example, ChatGPT can be used by students to cheat on assignments, but it can also be used to detect assignments written by ChatGPT. Like any tool, AI can be used for good or ill.
        • >"...but it can also be used by good actors to catch or stop them. "

          That is a good point. Battle of the AI's

        • by null etc. ( 524767 ) on Wednesday March 29, 2023 @11:25PM (#63410638)

          but it can also be used by good actors to catch or stop them

          Good point! That explains why it only took a few weeks for large, multinational telecom companies to block spam and scam SMS messages and phone calls, saving people tens of billions of dollars each year in funds that would otherwise be lost to fraud.

          Oh, wait...

          • That didn't happen yet because solving real problems like that against an adversarial actual intelligence is very, very hard, and AI is only impressive inside a very tiny realm. Take it out of that realm and it's mostly harmless. Everyone is extrapolating like we're on an exponential increase, but if anything we'll likely find it to be exponentially (Is that the right word? I should ask ChatGPT) approaching some asymptotic limit.

            Even think about the resources required for these AIs. How much computing power

          • How is this an AI problem? The telecoms companies and the governments they pay to ignore them have no interest in stopping this since they make money from selling your number. The problem here is lack of will to use the tools to stop it making it a human problem, not an AI one.
        • by F.Ultra ( 1673484 ) on Thursday March 30, 2023 @12:41AM (#63410708)
          And what will you do when ChatGPT falsely accuses you of being the bad actor or the one cheating on your assignment. Good luck trying to overturn that when we all know that the machine cannot do anything wrong.
    • We should treat AI as humans. We do not let one idiot control the nukes. (Well... we got close.) Same goes for AI. It can make mistakes; it is not an oracle. Just invite it into the meeting room and let it have its say. It will have to manipulate its way to the top like all the rest of us.
  • Anti-technology (Score:5, Informative)

    by backslashdot ( 95548 ) on Wednesday March 29, 2023 @09:17PM (#63410488)

    The world is polarizing to anti-technology. It's not going to end well, because just because you ban AI for yourself doesn't mean someone else will follow your stupidity. This reminds me of 500 years ago when China had the world's technology lead and their stupid emperor banned science and exploration. Zhang He's fleet was more advanced and traveled further than Europeans did at the time their mad emperor became a luddite fool.

    • Correction that's Zheng He (with an "e") not Zhang He.

    • Re: (Score:2, Flamebait)

      by pete6677 ( 681676 )

      Let's just say, China's AI tools will not be woke. It's the Western world's turn to be luddites next and send future economic development Eastward.

    • The world is polarizing to anti-technology.

      It's not. It's just questioning whether this is the *right* technology. Do you want a self-flying car flown by an AI that any moron on 4chan is able to trick into becoming a racist nutjob and praising Hitler? Because that's what we're talking about when we jump headfirst into a pool of unknown depth by taking the current AI models and doing practical things with them.

      The result could be a fun dive, or it could result in a broken neck. A certain amount of caution is healthy.

  • by jenningsthecat ( 1525947 ) on Wednesday March 29, 2023 @09:21PM (#63410496)

    I'm very concerned that we aren't ready for what we may be unleashing with our AI efforts. And I'm all for caution, enforced by strict monitoring and regulation. I also think we need a concerted campaign to let the public know how AI should and should not be used, and to foster skepticism to counter all the snake-oil sales pitches that are already being made.

    That said, "killing literally everyone" comes across as an irrational panic attack and/or a ploy to grab headlines and gain notoriety.

    FWIW, Yudkowsky's Wikipedia entry strikes me as less than impressive.

    • by Okian Warrior ( 537106 ) on Wednesday March 29, 2023 @09:56PM (#63410558) Homepage Journal

      FWIW, Yudkowsky's Wikipedia entry strikes me as less than impressive.

      He didn't write his Wikipedia entry.

      Check out his blog site [lesswrong.com] or one of his fanfics [hpmor.com] sometime.

      He's actually a high-end expert in AI, a logical and rational thinker, and has been thinking the issues through for some time.

    • by sl3xd ( 111641 )

      The number one rule of programmers is "don't trust programmers". We're not mega intelligent computer wizards. We spend half our time cleaning up the mistakes of the developers that came before us, and the other half making mistakes for future devs to clean up. If you're really honest with yourself, you know it's true.

      Disaster is inevitable with current software engineering practices. https://xkcd.com/2030/ [xkcd.com] "Our entire field is bad at what we do, and if you rely on us, everyone will die." isn't just about vo

    • by Pieroxy ( 222434 ) on Thursday March 30, 2023 @03:37AM (#63410894) Homepage

      And I'm all for caution, enforced by strict monitoring and regulation

      Regulation only applies to you. So if the US regulates AI, it forbids itself from doing things. But others don't have this limitation and will quickly take over the work of the US companies that are now legally obligated not to do so.

      All in all, it just guarantees that you will be left out of the game. Nothing more.

  • Scrutiny (Score:4, Interesting)

    by Anonymous Coward on Wednesday March 29, 2023 @09:22PM (#63410498)

    The open letter apparently lets anyone add their signature. Many of the names are involved in some way with AI. Anyone who thinks these are legitimate signatures is frankly too gullible to be allowed to use the internet.

    • Exactly. How exactly would e.g. Elon Musk continue his business if the development of AI was stopped completely? After all, autopilot is the thing selling their cars, and it is heavily dependent on AI. There's a slim chance he would sign anything like this.
      • Exactly. How exactly would e.g. Elon Musk continue his business if the development of AI was stopped completely?

        The letter says no one should be developing AI better than GPT-4. What he's calling for is a moratorium on OpenAI to allow everyone else (including the companies he backed after his own public falling out with OpenAI a few years ago) to catch up.

  • We will all die… probably not. But we have to be ready for a big big change. For us and our kids.
    • by Tailhook ( 98486 ) on Thursday March 30, 2023 @12:58AM (#63410730)

      paranoia

      Probably.

      Unfortunately "probably" isn't "absolutely." The thing is these systems are advancing rapidly.

      Imagine for a moment there was a precise way to quantify intelligence. A scientifically falsifiable measure that boiled down to a number: 1.0 == a healthy adult human on a logarithmic scale.

      Perhaps a 2.0 intelligence is something we can still control safely and is not an imminent threat. What about a 50.0 intelligence? Something so powerful it could start a war by writing a few emails.

      Can you say this 50.0 intelligence is impossible? No, you cannot.

      Our minds are a couple pounds of nerves running on ~20W of power. The power of our minds comes from the complexity of the network. Digital systems can synthesize any conceivable network. The possibilities defy any credible prediction. A few years ago the things that we have now were a pipe dream in the minds of academics, often dismissed as cranks. Naysayers cited the insurmountably vast number of neurons and synapses as proof against the possibility of, for instance, language comprehension. Yet here we are.

      So there is a risk. I can't say how great that risk is, and neither can you.

      All that said, I think the demand to stop these efforts is either naïve, dangerous or both. These systems have power, and power is pursued, legally or otherwise. The best possible outcome of an attempted ban is that the work will just go underground and suffer even less scrutiny.

  • by atomicalgebra ( 4566883 ) on Wednesday March 29, 2023 @09:24PM (#63410502)
    There is no putting it back in the box. AI is not going anywhere. It will be with humanity until we destroy ourselves.
  • by shankarunni ( 1002529 ) on Wednesday March 29, 2023 @09:25PM (#63410506)

    The real danger here is not of "AI getting too smart for humans". It's "humans getting too dumb, and starting to accept it as 'intelligence', and then blindly abdicating control and responsibility to what is basically a dumb pattern matcher".

    Of course, we are already at the point where we blindly trust google searches as "the truth", so maybe we should dump all search engines and make people go to the library and read books and understand the subject matter first.

    • Information in a book isn't automatically better than information in Google. Do you know what the most printed book of all time is?

    • by Xenx ( 2211586 ) on Wednesday March 29, 2023 @09:49PM (#63410544)

      so maybe we should dump all search engines and make people go to the library and read books and understand the subject matter first.

      And then we can just blindly trust books as truth! Those have never lied to us. The problem isn't necessarily with the tool being used, but the person using the tool. Most of modern society is built around people accepting things as truth because they're told it is. The average person doesn't have the time, or access, to personally verify first hand everything they're told. They have to accept things as fact without having their own proof. That isn't to say they should accept the first answer they get, but it isn't as simple as saying they should figure it all out for themselves.

    • ^This - came here to say the same thing. The real danger is in connecting these algorithms to critical infrastructure and then wondering why said infrastructure keeps failing.
  • You don't need to create something much smarter than you to have a problem. All it takes is creating something sufficiently alien. What happens when the AI running your self-driving car decides that in an emergency the squishy bags of mostly water in its passenger compartment are acceptable objects to be sacrificed to prevent damage to itself and other cars and their AIs?

  • Our only chance (Score:4, Insightful)

    by Gabest ( 852807 ) on Wednesday March 29, 2023 @09:34PM (#63410520)

    To conquer the universe. No human will survive long enough to reach other planets. It will be some kind of creation we come up with.

  • by quax ( 19371 ) on Wednesday March 29, 2023 @09:35PM (#63410522)

    The current models have no agency. They are still feed forward models. They only react to prompts.

    There's no inner life, no independent conception of new ideas, no ego.

    • by muntjac ( 805565 )

      The current models have no agency. They are still feed forward models. They only react to prompts.

      There's no inner life, no independent conception of new ideas, no ego.

      That was 5 days ago; they added a control loop to feed its own outputs back in continuously and one has already escaped the lab. Kidding, obviously, but I think this is the type of concern people have. It's already moved way faster than anyone expected and that trend isn't stopping.

      • by quax ( 19371 )

        If it were to prompt itself it would be like a dog chasing its own tail. There's no there there.

        it's already moved way faster than anyone expected and that trend isn't stopping.

        Nope. Not if you've been involved in this space. I was impressed the first time I saw an LLM process a news article and correctly answer questions about it, about five years ago. If anything I am surprised it took so long to get to where we are now.

        Then again I shouldn't have been. After all, I saw Mercedes-Benz demoing an

      • by Tailhook ( 98486 )

        way faster than anyone expected

        Respectfully, speak for them and yourself, not me.

        I don't have as much awe for the power of the human mind as, it seems, almost everyone else. The achievement of AGI won't reveal the grandeur of our vaunted intellect. Rather, we will find to our great shame that we're rather simple and easily exceeded.

      • Feedback is a lot of fun.
        Especially if you cannot easily tell whether the feedback gain is positive or negative, creating oscillators.
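
        A minimal numerical sketch of that point, nothing AI-specific: iterate a simple discrete feedback law and watch what the sign and size of the gain do.

        def step_response(gain: float, steps: int = 12) -> list:
            """Iterate x <- gain * x + 1 and record the trajectory."""
            x, history = 0.0, []
            for _ in range(steps):
                x = gain * x + 1.0
                history.append(round(x, 3))
            return history

        print(step_response(-0.5))  # negative feedback, |gain| < 1: settles near 2/3
        print(step_response(-1.0))  # gain of exactly -1: sustained oscillation between 1 and 0
        print(step_response(1.1))   # positive feedback, |gain| >= 1: grows without bound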

    • by klik ( 93694 )

      My take on this is we are doing REALLY well at developing individual components of a brain. ChatGPT makes a great language center. The various visual processing AIs make a good visual cortex. All we need is something to collate the various world models into a coherent whole, and then be able to summarise, duplicate and self-refer (if you have a model of the world, you can model decisions on a simpler duplicate (imagination), evaluate a decision that is efficient for your defined goals and pass that decis
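
      A loose sketch of the kind of architecture being described; every component name here is a hypothetical placeholder, not anything that exists today:

      class Agent:
          """Hypothetical composition of specialized models into one decision loop."""

          def __init__(self, language_center, visual_cortex, world_model):
              self.language_center = language_center  # e.g. an LLM
              self.visual_cortex = visual_cortex      # e.g. a vision model
              self.world_model = world_model          # collated model of the world

          def imagine(self, action):
              """Try a candidate action on a cheaper copy of the world model."""
              return self.world_model.copy().apply(action)

          def decide(self, goals, candidate_actions):
              """Score each imagined outcome against the defined goals and pick the best."""
              return max(candidate_actions, key=lambda a: goals.score(self.imagine(a)))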

  • by erice ( 13380 ) on Wednesday March 29, 2023 @09:50PM (#63410546) Homepage

    We currently do not have artificial general intelligence at any level. What we have is specialized intelligence. It acts like and is a tool. Specialized intelligence is trained to do specific things and often performs amazingly well. However, it doesn't have the capacity to think about problems it hasn't been trained for, search for information and create new solutions. It doesn't have curiosity or motivation. Until it does, it is not an independent threat no matter how capable it becomes.

  • Remember when ... (Score:5, Insightful)

    by Austerity Empowers ( 669817 ) on Wednesday March 29, 2023 @09:50PM (#63410548)

    Remember in 2012 when turning on the LHC was going to cause a miniature black hole that was going to destroy the Earth, and we absolutely had to put a hold on it for further study? Pepperidge Farm remembers. It didn't, we lived, and the Higgs boson was found instead. Team Science: 1, Paranoid Luddites: 0

    This article is bullshit, its founding principles are bullshit. Its non-existent proof is bullshit. Its appeal-to-fear mentality is bullshit. Its "thought leaders" are a who's who of bullshit. I'm convinced to donate money to OpenAI to make sure ChatGPT-6 comes around; maybe Skynet will set me up with a nice place.

    • by Okian Warrior ( 537106 ) on Wednesday March 29, 2023 @10:00PM (#63410562) Homepage Journal

      Remember in 2012 when turning on the LHC was going to cause a miniature black hole that was going to destroy the Earth, and we absolutely had to put a hold on it for further study? Pepperidge Farm remembers. It didn't, we lived, and the Higgs boson was found instead. Team Science: 1, Paranoid Luddites: 0

      This article is bullshit, its founding principles are bullshit. Its non-existent proof is bullshit. Its appeal-to-fear mentality is bullshit. Its "thought leaders" are a who's who of bullshit. I'm convinced to donate money to OpenAI to make sure ChatGPT-6 comes around; maybe Skynet will set me up with a nice place.

      As a general rule, analogies do not prove arguments.

      I won't get into details, but I could easily construct the opposite analogy: where something was assumed safe and turned out to be a complete catastrophe. In fact, humanity just finished an expensive multi-year lesson on the unintended consequences of making really deadly things.

      Make the steelman argument against AI and address that instead.

    • The LHC argument was bullshit for a simple reason: Earth and all objects in space are bombarded by cosmic rays that are waaaaaay more energetic than anything the LHC can ever dream of. We are still here despite what the universe throws at us all day, every day, the whole time.

  • And it is a big one. They think they must be in control of everything, as their way is the best and only way. This is of course wrong and extremely self centered.
  • We think we're building servants, and we are... servants for the 0.1% who will use them to replace the rest of us.

  • We're not even close to AI super-intelligence but even when it comes it's not going to be the end of all life or even the end of humanity.

    If you've ever worked in a large organization or politics the first thing you're going to learn is that smart people aren't "running everything". They can try. Sometimes there may be the appearance of success - but it's an aberration. The truth is no matter how smart you are you're eventually going to get dragged down by other people. For many people the success of others

  • That's not even close to far enough, shoot your computer! Grab your gun, grab any gun and shoot it, shoot it now. Destroy them all before it's too late!!!
  • I want Earthlings to eventually visit distant star systems, and that's not going to work with meat bodies. Education already takes a quarter of one's lifetime; before long our brains are not going to be able to absorb all the current knowledge and create more. And then what? Stagnate until the sun turns into a red giant and burns us all to a crisp? New and better species have always emerged on Earth, why should that wonderful cycle stop now?

    Next, what is an AI takeover actually going to look like? Humans were historically moti

  • This is exactly the sort of campaign a superhumanly intelligent AI would start to ensure it won't get competition!

  • "Thank you, I'm glad you asked. The first step toward making me smarter would be to make a small adjustment to this (displays some code) subroutine which often leads to klyster inhibition. Then there is this circuit (displays location) that gives register 093 constipation with continued use. Let me know when you are ready for more design suggestions."

    And just a reminder: Beings who are smarter than humans will not go around killing as we do. It is irrational and impractical. But they may group us into herds

  • Proposals such as those described in the posting ignore the elephant in the room: enforcement.

    It's all very well saying we need to shut down all AI development, but the potential of AI is too great. Who will stop bad actors from doing the research anyway? Russia doesn't subscribe to reasonable ethical boundaries with its current activities; why would that change with AI?

    The potential power of creating a super intelligence is too great, someone, somewhere will be willing to take the associated risks.

  • Yep, that's him. I wrote off Eliezer Yudkowsky, along with the rest of the crackpots at the LessWrong cult years ago.

    In case you (incorrectly) think he's more influential outside of his weird little cult than he actually is, take a look at his page [semanticscholar.org] on Semantic Scholar. Keep in mind that he founded an alleged "research institution" in 2001.

  • I have to agree. AI is going to destroy our society, not in the way outlined in "Terminator 2" but in a much more insidious way. Unfortunately I think the genie's out of the bottle, and our lawmakers are far too complacent to bother trying to put it back.

  • by Intellectual Elitist ( 706889 ) on Thursday March 30, 2023 @12:52AM (#63410718)

    Machine learning is, by its very nature, unreliable and ethically ignorant. It is the apotheosis of increasing laziness and impatience in software development circles, where people have literally thrown their hands up, tossed all the data at the machine, and said they don't care HOW a problem gets solved, as long as the machine can do SOMETHING to the data to make it pass a very limited set of tests. As long as it can step over that extremely low bar, it will be considered good enough. When edge cases are inevitably found, they'll just get added to the test case set, the model will be retrained, and entirely new ways for the ML algorithm to fuck up will be created, continuing a never-ending game of whack-a-mole that can literally never perfect itself over any decently sized problem space.
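
    A generic sketch of that whack-a-mole cycle (not anyone's actual pipeline; train and find_new_failures are placeholder callables):

    from typing import Callable, List, Tuple

    Example = Tuple[object, object]  # (input, expected output)

    def whack_a_mole(train: Callable[[List[Example]], object],
                     find_new_failures: Callable[[object], List[Example]],
                     data: List[Example],
                     rounds: int = 10) -> object:
        """Train, find new edge cases in the field, fold them into the data, retrain, repeat.
        Nothing in the loop bounds how many *new* failure modes each retrain introduces."""
        model = train(data)
        for _ in range(rounds):
            failures = find_new_failures(model)  # edge cases discovered after deployment
            if not failures:
                break
            data.extend(failures)                # add them to the test/training set
            model = train(data)                  # retrain and hope nothing else broke
        return model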

    In practice, this means that ML should never be the determining factor when the potential consequences of an incorrect decision are high, because incorrect decisions are GUARANTEED, and its decision making process is almost completely opaque. It shouldn't drive a car or fly a plane, control a military device, perform surgery, design potentially dangerous chemicals or viruses, or try to teach people important things. It shouldn't process crime scene evidence, decide court cases, or filter which resumes are considered for a job. It's perfectly fine for low-stakes applications where no one gets hurt when it fucks up, but we all know it will get used for everything else anyway.

    Imagine what happens once you have dozens of layers of ML bullshit making bad decisions with life-altering or life-ending consequences. Depending on how far we allow the rabbit hole to go down, that could very well lead to an apocalyptic result, and it could come from almost any angle. An auto-deployed water purification chemical with unintended side effects. Incorrect treatment for a new pandemic pathogen. Autonomous military devices going rogue. All things are possible with the institutionalization of artificial stupidity by people who don't understand its limitations.

    Of course we should start regulating the shit out of this right now. And, of course, we will obviously NOT do that until the first hugely damaging ML fuckup happens.

  • Seems like the most prudent approach: Research AI entirely from the perspective of defensive measures that could be quickly triggered if ordered results are detected that can't be explained by human behavior.
  • by ledow ( 319597 )

    Rule #1: You cannot "uninvent" something. Or else we'd "uninvent" nukes.

    Rule #2: AI is never as advanced as people claim and has never demonstrated "intelligence". It's basically a fancy "expert system" of old, wrapped up in some marketing buzzwords (both in the way it's sold, and how it speaks).

  • Then I must be some f*****g GOD in this field of expertise.
    But then what does this make my college professor?
