
Is Concern About Deadly AI Overblown? (sfgate.com)

"Formerly fringe beliefs that machines could suddenly surpass human-level intelligence and decide to destroy mankind are gaining traction," acknowledges the Washington Post. "And some of the most well-respected scientists in the field are speeding up their own timelines for when they think computers could learn to outthink humans and become manipulative.

"But many researchers and engineers say concerns about killer AIs that evoke Skynet in the Terminator movies aren't rooted in good science. Instead, it distracts from the very real problems that the tech is already causing..." It is creating copyright chaos, is supercharging concerns around digital privacy and surveillance, could be used to increase the ability of hackers to break cyberdefenses and is allowing governments to deploy deadly weapons that can kill without human control... [I]nside the Big Tech companies, many of the engineers working closely with the technology do not believe an AI takeover is something that people need to be concerned about right now, according to conversations with Big Tech workers who spoke on the condition of anonymity to share internal company discussions. "Out of the actively practicing researchers in this discipline, far more are centered on current risk than on existential risk," said Sara Hooker, director of Cohere for AI, the research lab of AI start-up Cohere, and a former Google researcher...

The ripple effects of the technology are still unclear, and entire industries are bracing for disruption, with even high-paying jobs like lawyers and physicians facing replacement. The existential risks seem starker, but many would argue they are harder to quantify and less concrete: a future where AI could actively harm humans, or even somehow take control of our institutions and societies. "There are a set of people who view this as, 'Look, these are just algorithms. They're just repeating what it's seen online.' Then there is the view where these algorithms are showing emergent properties, to be creative, to reason, to plan," Google CEO Sundar Pichai said during an interview with "60 Minutes" in April. "We need to approach this with humility...."

There's no question that modern AIs are powerful, but that doesn't mean they are an imminent existential threat, said Hooker, the Cohere for AI director. Much of the conversation around AI freeing itself from human control centers on it quickly overcoming its constraints, like the AI antagonist Skynet does in the Terminator movies. "Most technology and risk in technology is a gradual shift," Hooker said. "Most risk compounds from limitations that are currently present."

The Post also points out that some of the heaviest criticism of the "killer robot" debate "has come from researchers who have been studying the technology's downsides for years."

"It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse," a four-person team of researchers opined recently. "Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities."
Comments:
  • by christoban ( 3028573 ) on Monday May 22, 2023 @03:50AM (#63541319)

    This article brought to you by ChatGPT.

  • Obviously (Score:4, Insightful)

    by narcc ( 412956 ) on Monday May 22, 2023 @03:59AM (#63541333) Journal

    I've been saying this for a while now. There is nothing to worry about here. Your jobs are safe, there is no singularity, it's not going to destroy art, music, or anything else. AI is significantly less capable than you think.

    The hype is real. The danger is not.

    • Re:Obviously (Score:5, Insightful)

      by Pieroxy ( 222434 ) on Monday May 22, 2023 @05:11AM (#63541411) Homepage

      The hype is real. The danger is not.

      In the current version, maybe. However, I can already see a few areas where jobs might be in danger, such as translating stuff from and to foreign languages. In all honesty, ChatGPT (the v3, haven't tried the v4 yet) is doing a pretty decent job, even on complex / technical stuff.

      Moreover, AI might be less knowledgeable than your level 1 hotline, but it is way cheaper, so it might also endanger those jobs right now.

      • Re:Obviously (Score:4, Insightful)

        by sjames ( 1099 ) on Monday May 22, 2023 @05:34AM (#63541435) Homepage Journal
        The issue is that ChatGPT and other A.I.s have a habit of going off into LaLa land and spouting very coherent and grammatically correct nonsense, much like some mental patients. It can be a powerful tool but it needs a human sanity checker.

        The greatest near future danger of A.I. comes from people accepting its output uncritically.

        • Re:Obviously (Score:5, Interesting)

          by Potor ( 658520 ) <farker1&gmail,com> on Monday May 22, 2023 @06:38AM (#63541531) Journal
          ^ This!! I can now pretty easily detect ChatGPT (etc.) essays from my students. They are very polished and sound highly confident yet impersonal, lacking any (careful) citation, and say absolutely nothing beyond bland generalities or some stunningly obvious thesis. I used to get so few essays like this, and now they are becoming the norm. They are literally as obvious as the terrible prose in Buzzfeed listicles. The great thing is that I need not accuse anyone of cheating, as I can grade simply on merit. As a professor, I love LLMs, as they have greatly reduced the hassle in my grading.
          • I asked ChatGPT to "Tell me the strangest thing you can think of". If I asked that of a human I would expect something like "a blue turtle with a baby's head that could levitate", or something else that doesn't actually exist in real life. What I got was completely devoid of all creativity and was literally something I could get off Google. It told me about a couple of TV shows with "Strange" in the name.

            People are really underestimating human ingenuity if they think ChatGPT is a danger to humans.
          • In summary, while Shakespeare wrote many masterpieces it is important to remember that every person has within themselves the capability to create equally great masterpieces, and we should all strive to encourage one another in our creative efforts.

            Yeah, ChatGPT's standard disclaimers and boilerplate sentence structure have become somewhat of a joke at the IT company where I work.

            On the subject of creativity... just no. I asked ChatGPT to play a game of "hink pink" with me the other day. That's a word game

        • You try spewing a few paragraphs in a second and see how right it is!

          The major thing is that it is a breakthrough. There is now a functional juicy model of how intelligence works for everyone to look over for the next years. More honed models will come, until we really have a complete theory of how brains work, and it gets optimized to run cheaply one way or another. Then, you have to face the fact that you will have models and robots that can generate all the economic outputs humans can, at a fraction of

          • This is not "a functional juicy model of how intelligence works", unless you see a thermostat as a model of a human not wanting the heating bill to be too high.

          • by narcc ( 412956 )

            There is now a functional juicy model of how intelligence works

            Oh, wow, no. Not even a little bit. You have been very badly misled.

            Start here [stephenwolfram.com], and let me know if you need something else or you still believe the thing I quoted.

          • by sjames ( 1099 )

            I'm not saying it'll be this limited forever, but really the current state of AI is more like taking slices from Wernicke's area or the visual cortex in isolation.

            That has plenty of disruptive potential, but it's not a Skynet scenario.

            You are now viscerally feeling the economic pressure that factory workers have complained about for decades.

            As for the need to adjust our economic system to make the improving technology a blessing rather than a curse for the majority of the population, I agree. I have advocat

        • by Pieroxy ( 222434 )

          Agreed, when you ask a question; I have seen it firsthand many times already.

          When it does a translation job, much less so; it is a far less creative task than answering random questions. And in any case, if you can replace five translators with one whose job is to review the output of your LLM, you've effectively lost four jobs.

          • by sjames ( 1099 )

            It could make things hard for translators, but it's not the destroy-humanity Skynet scenario.

        • by Merk42 ( 1906718 )

          The greatest near future danger of A.I. comes from people accepting its output uncritically.

          So, no different than today's social media feeds.

        • Re:Obviously (Score:4, Insightful)

          by HiThere ( 15173 ) <charleshixsn@ear ... .net minus punct> on Monday May 22, 2023 @09:51AM (#63541951)

          You are mistaking the current state of development for the state a decade from now. Either that, or your worries have a very short time horizon.

      • by Potor ( 658520 )
        I spent a good half hour this morning trying to get ChatGPT to translate the Latin word iuuet (juvet) correctly in the context of a sentence, and yet it consistently gave the word its opposite meaning. There has been machine translation for decades now, and it's really not that good except in the narrow band of highly formulaic business communication, or other such repetitious argot. I speak as a professional translator.
        • by HiThere ( 15173 )

          I don't know the context of what you were doing, but an article I read asserted that the current crop of ChatBots had a severe problem handling negations.

          • by Potor ( 658520 )
            Yes, I read that too.

            I used to translate texts for banks, and university communications (Dutch to English). Now I focus on 18th century Latin and German philosophical texts, plus classical Latin.

      • Computer translation is terrible, especially for technical subjects ... it's the 10% it cannot translate reliably that is the most important bit ...

        What GPT is currently good at, drudge work that humans can do on autopilot, that's where the jobs will disappear ...

      • ...such as translating stuff from and to foreign languages.

        Yes, that is exactly where it's strongest. It's a Large Language Model. It's built to parse language. Once the language is parsed, it's just a tiny hop to translation. Foreign language translators are the ones that should hope they are either young enough to learn a new trade, or old enough to retire. They are in the same boat as typewriter secretaries at the start of the computer age.

        For everyone else, LLMs are nothing more than a sometimes useful assistive tool (and frequently wrong even then). They wil

        • by Potor ( 658520 ) <farker1&gmail,com> on Monday May 22, 2023 @08:34AM (#63541729) Journal
          I guess you have not spent too much time with machine translation. Anecdote alert: I was talking with the head of the Language Dept. (i.e. translation dept.) at a large European bank this past weekend. He told me that he does not see LLMs as removing the need for human translators. Unless you actually translate, or know a few languages, you as a user cannot possibly know how bad machine translation is. It is so easy to accept the output as accurate, especially if it sounds polished. I am not going to dig up examples, but I have a few decades of experience with this and know that you simply cannot produce accurate, publishable output from machine translation. And in my very limited experience with ChatGPT, I see no improvement on past machine translation.
          • This sounds like Hollywood. Whatever your area of expertise, you see massive errors in the way Hollywood portrays it.

          • have a few decades of experience with this and know that you simply cannot produce accurate, publishable output from machine translation

            Oh, so much this (and the problem goes back well before LLMs). Most of them can do a verbatim translation that is technically correct as far as swapping the right foreign word/phrase in for the original, but their grammar is shaky at best and their understanding of how words and phrases are used conversationally is nonexistent. What you end up with is a stilted, pidgin version in your target language that would sound to native speakers of that language exactly like how we Americans portray immigrants trying

      • Professional translators already use AI-driven translation tools. That's not generative AI, though.

        If I've understood TFS correctly, they're saying that the sci-fi scare stories are just a distraction from what AI will mostly be used for & abused, i.e. turbo-charging the kinds of human rights abuses that corporations are already engaging in, & then probably thinking up a few new ways to do it. Us ordinary citizens need protection at the national & international levels if we're not to end up in
        • by narcc ( 412956 )

          I'm hoping Potor will weigh in on that translation claim. He would know what that actually means.

          AI will mostly be used for & abused, i.e. turbo-charging the kinds of human rights abuses that corporations are already engaging in

          I've seen that sort of claim before, but I've yet to see anything specific. How will AI "turbo-charge" human rights abuses?

      • In the current version, maybe. However, I can already see a few areas where jobs might be in danger, such as translating stuff from and to foreign languages. In all honesty, ChatGPT (the v3, haven't tried the v4 yet) is doing a pretty decent job, even on complex / technical stuff

        It's really weird how easily people dismiss it because "it's not true AI", as if we had a good definition for that, or by just calling it "autocomplete".

        Yeah sure whatever, yet if you ask it to write C code to drive 7-segment displays with shift registers on a microcontroller by explaining what you want in a paragraph, it can do it: https://www.youtube.com/watch?... [youtube.com]

        It makes some mistakes but we're what, a few years out from language models being more than toy research projects? It's not
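        For context, a rough sketch of the kind of C that task calls for: bit-banging digit patterns into a 74HC595-style shift register driving a 7-segment display. The pin numbers, the set_pin() stub, and the segment map are illustrative assumptions, not code from the linked video.

        #include <stdio.h>
        #include <stdint.h>

        #define PIN_DATA  0  /* hypothetical GPIO numbers */
        #define PIN_CLOCK 1
        #define PIN_LATCH 2

        /* Host-side stand-in for an MCU GPIO write: a rising clock edge
           shifts the data bit in; a rising latch edge "updates" the display. */
        static int data_level;
        static uint8_t shift_reg;

        static void set_pin(int pin, int level)
        {
            if (pin == PIN_DATA)
                data_level = level;
            else if (pin == PIN_CLOCK && level)
                shift_reg = (uint8_t)((shift_reg << 1) | data_level);
            else if (pin == PIN_LATCH && level)
                printf("latched segments: 0x%02X\n", shift_reg);
        }

        /* Segment patterns for digits 0-9, bit order .gfedcba, common cathode. */
        static const uint8_t SEGMENTS[10] = {
            0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F
        };

        /* Clock one byte into the shift register, MSB first, then latch it. */
        static void shift_out(uint8_t value)
        {
            for (int bit = 7; bit >= 0; bit--) {
                set_pin(PIN_DATA, (value >> bit) & 1);
                set_pin(PIN_CLOCK, 1);
                set_pin(PIN_CLOCK, 0);
            }
            set_pin(PIN_LATCH, 1);
            set_pin(PIN_LATCH, 0);
        }

        int main(void)
        {
            for (int d = 0; d < 10; d++)  /* count 0..9 on the display */
                shift_out(SEGMENTS[d]);
            return 0;
        }

        On real hardware set_pin() would write a port register instead, but the shifting-and-latching logic above is the whole trick.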

        • by HiThere ( 15173 )

          Chatbots are not AIs, because they don't do mapping from linguistic space to action space. A Chatbot hooked up to a self-driving car could well be a genuine, if limited, AI. (It would take a bit of specialized training, and it depends on the Chatbot being an interface for the car to take directions through and give appropriate responses.)

          • by narcc ( 412956 )

            A Chatbot hooked up to a self-driving car could well be a genuine, if limited, AI.

            What do you mean by "genuine" and what makes you believe this?

        • by narcc ( 412956 )

          if you ask it to write C code to drive 7-segment displays with shift registers on a microcontroller by explaining what you want in a paragraph, it can do it:

          Didn't you watch that absurdly long video you posted? No, it can't.

          It makes some mistakes but we're what, a few years out from language models being more than toy research projects?

          It's always just "a few years out". The old joke was it was "just 10 years away, since 1960". We know quite a bit about what these models are actually capable of doing and writing computer programs is absolutely not one of those things. It's really neat that you can get something like code out of them, but nothing these models do is anything at all like programming. It's a parlor trick. Take some time to learn about how these models

      • by e3m4n ( 947977 )
        ChatGPT has been tested and found to do better at legal briefs than most lawyers. It can be tricked into writing pretty good wills even though it's supposed to be prohibited from doing so. So the ability exists; it's simply code telling it not to do that. Chatbots for support will likely explode over the next year or so.
    • Your jobs are safe

      This has not been the case in recent technological shifts. Jobs change, and people need to adapt, which is not always easy or quick. So it's safe to assume that a certain number of people are going to be hit.

    • by Potor ( 658520 )
      Every age has its millennialism (loosely considered) that reflects that society's presumptions. God or pollution or climate change or technology - it's always the end of the world.
    • by Jamu ( 852752 )
      I expect AI to be better in the future. I expect AI will be used to successfully improve an AI in the future. I see no reason why AI will not improve very quickly at some point.
      • by narcc ( 412956 )

        I expect AI will be used to successfully improve an AI in the future.

        You shouldn't. That is impossible in all but a few very limited domains.

    • Re:Obviously (Score:4, Informative)

      by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday May 22, 2023 @07:50AM (#63541615) Homepage Journal

      There is nothing to worry about here. Your jobs are safe

      "AI" as we know it today is already reducing jobs. The idea that jobs are safe is based on hiding your head someplace warm, dark, and smelly.

      there is no singularity

      Completely orthogonal to whether jobs are safe

      it's not going to destroy art, music, or anything else

      It's already reducing employment in art and music, and soon, everything else. It doesn't have to eliminate any professions to severely curtail the number of jobs in them. (/s #learntocode)

      • by HiThere ( 15173 )

        Additionally, we're still in the ramp-up to the Singularity phase. I expect generally super-human AI to be here around 2035, and the Singularity to occur a very few years later.

        During the ramp-up expect changes to occur with increasing rapidity, and a lot of social disorder.

        I expect Chatbots to yield center of interest to a genuine, though limited, AI within the next couple of years. But they will continue to cause changes echoing through society even after they cease being the center of interest.

    • Rather, you should fear the inbuilt censor of every single electronic device you own.

      In the not-too-distant future, when AI can be put on a chip and use "reasonable" amounts of power, it will be placed in EVERY electronic device in such a way that it will interfere with executive functioning. Your band saw will question whether or not you need to cut that material. Your gun will decide whether or not you can shoot that person. Your phone will decide whether or not you can talk to that person about what you a

    • by HiThere ( 15173 )

      I disagree. The danger *IS* real. But it's balanced against the dangers of not developing an AI, which are also real. I think the balance favors developing the AI, but with reasonable safeguards.

      The problem is that civilization is a system too complex for anyone to understand. We've already come within minutes of ending it through WWIII. We *need* a way to avoid that danger, and a competent AI is the only one that presents itself. But the AI itself is dangerous, particularly while it's not completely

    • My problem is that sure, AI isn't really that competent. But it scales so well and is so cheap, and the hype is very profitable right now. Why wouldn't businesses switch jobs to AI workflows, getting more work done for a lower cost? The only obstacle is that consumers need to accept the lower-quality result. But I feel like we've overcome that obstacle before. Three cheers for capitalism!

  • by zenlessyank ( 748553 ) on Monday May 22, 2023 @04:26AM (#63541359)

    Humanity is the coagulator.

  • I'm torn. (Score:5, Interesting)

    by sg_oneill ( 159032 ) on Monday May 22, 2023 @04:32AM (#63541365)

    I do suggest anyone willing to immediately write it up, go look on youtube and find a guy "Robert Miles", a researcher from Nottingham Uni, and his videos on AI r. Particularly on the Stamp Collector problem (usually called the Paperclip Maximizer), and instrumental convergence.

    With that said, I think the rise of GPT has kind of thrown the whole game for a bit of a loop. The assumption that AI safety research has run with has been that AIs would be these giant super-optimizing utility maximizers, that you could say 'Fetch me the maximum number of paperclips' and it ends up converting all the iron in the planet , including your blood, into paperclips. But the LLMs just dont seem to think that way and seem more like people simulators that try and do a rough simulation of a person to try and predict what that simulated person would say.

    In other words just assuming these things would be hardcore utility maximizing inference engines seems to completely miss how a neural network actually 'thinks'.

    So yeah I do share some concerns about super AI, I'm not convinced its going to be a problem for the same reason many of the ai safety researchers think it will be however, because I just dont see the current trajectory heading towards giany superoptimizers.

    I *am* worried, however, about what malicious humans will do with it. I'd also advise looking up a video "ChaosGPT: Empowering GPT with Internet and Memory to Destroy Humanity" which is a demo of what happens if you intentionally give AutoGPT a very malicious goal. Thankfully GPT is dumb as a plank. But it might not be forever.

    • Goddamn I type like a drunkard on my phone if I don't have my glasses on.

      Corrections:
      "I do suggest that anyone willing to immediately write it OFF" not "write it u"

      AI safety research" not "AI r"

      "GIANT superoptimizers" not "giany superoptimizers"

      This website desperately needs an edit button.

      • by jbengt ( 874751 )

        This website desperately needs an edit button.

        There is a preview button. Which you need to hit before you can click on submit. Reading over your post before hitting submit lets you edit it. Once it's submitted, allowing edits could cause confusion about the replies that follow.

        • Which would be great if these stupid bloody eyes of mine could actually read. Getting old is trash.

          Unless Slashdot readers are unusually stupid, I can't see why it would cause confusion. Literally every other major website of the past 20 years has had the feature and it hasn't been a problem.

    • by Zocalo ( 252965 )
      Similar here. LLMs are far from perfect, but have gone from an academic toy to mainstream product incredibly fast, and with almost no legal, privacy, or other safeguards and regulatory frameworks in place. We've already had companies announcing large scale job cuts because of AI [slashdot.org], and almost certainly are going to see a lot more similar job disruption over the next few years. AI could help facilitate a move to a four day week and letting the workforce focus on tasks more suited to human ingenuity and insp
  • Yay Economics (Score:5, Interesting)

    by monkeyxpress ( 4016725 ) on Monday May 22, 2023 @04:34AM (#63541369)

    AI is (at least for the next few decades) just going to be a very powerful tool that will allow us to do a lot of boring mundane tasks with much less effort.

    The bigger problem is that despite nearly 200 years of industrialisation, we have not been able to create an economic system where a tool that will make us all richer doesn't terrify a large section of the population into believing they are all going to be thrown into poverty. I find that quite amazing, and a huge failure of leadership.

    In fact, for the last couple of decades, we've made the problem even worse by destroying the ability of large swathes of the population to acquire any capital, which means that those people cannot gain the rewards of capital improvements (which is what AI is), yet are fully exposed to having to compete with that capital. This is a dumb situation and not what was originally sold to the middle class when markets were deregulated in the 1980s (remember the "property-owning democracy").

    This growing group of precariats is more than likely going to overthrow the present system if nothing is done to improve their situation, which means we get some random system to replace it - probably a form of authoritarian dictatorship like China's.

    If capitalists were smart, they would be the ones driving ways to reform capitalism that would ensure its survival. Instead I predict that what should be a wondrous moment for humanity (the elimination of almost all mundane work) will become a huge mess. I guess it's not dissimilar from WW1/2, which in many respects were caused by the upheaval in society due to rapid technological progress. It really just feels like we are on the idiot train to the same place again.

    • Re:Yay Economics (Score:4, Insightful)

      by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday May 22, 2023 @07:46AM (#63541605) Homepage Journal

      The bigger problem is that despite nearly 200 years of industrialisation, we have not been able to create an economic system where a tool that will make us all richer doesn't terrify a large section of the population into believing they are all going to be thrown into poverty. I find that quite amazing, and a huge failure of leadership.

      It is a huge failure of leadership, but not because of what people believe, but because those people are right. Even a cursory glance at history tells us that. We are not meeting the needs of the bulk of the people on the planet now despite humanity having more than enough resources to do so. What causes any fool to imagine that AI won't make this worse?

      This is a dumb situation and was not what was originally sold to the middle class when markets were deregulated in the 1980s (remember the property owning democracy).

      Advertising is usually bullshit.

      If capitalists were smart, they would be the ones driving ways to reform capitalism that would ensure its survival.

      They might be smart, but they're greedier than they are intelligent. Also, the truth is that even most of the wealthy have little to no power to change the system that they profit from. If they tried, markets would react and they would rapidly be worth a lot less. The whole thing is "designed" (to the extent that's true) to eat its own young.

    • by Calydor ( 739835 )

      The capitalists at the very top of the pyramid (meaning people like Bezos, Musk, Gates, etc.) are smart. It's just that they don't see capitalism as a means of providing for everyone. They see it as a game with a high score list, and they want to be as high on the list as possible, just as one would at an arcade.

      You can't blame the people on the bottom, with very limited means and skillsets, for being afraid that AI will make their limited skillsets obsolete, and in the process deprive them of what little m

    • If capitalists were smart, they would be the ones driving ways to reform capitalism that would ensure its survival. Instead I predict that what should be a wondrous moment for humanity (the elimination of almost all mundane work) will become a huge mess. I guess it's not dissimilar from WW1/2, which in many respects were caused by the upheaval in society due to rapid technological progress. It really just feels like we are on the idiot train to the same place again.

      Holy Insightful, Batman; however, the people who are benefiting the most from the current system don't care. They just want to extract the maximum amount possible. They do not care about the wasteland left behind or the lives destroyed. The psychological mechanism at play here is: I've got mine, and I will ensure that you do not get yours so you cannot become a threat to me getting mine. It is working wonderfully (until the resources are gone).

    • Not only is there the economics angle, but there is also the legal angle. There are the economic drivers of "Let's do it to make money!" but also the legal angle of "if we deploy this, we're either going to prison or getting the death penalty under international law".

      Right now today, the only thing preventing fully autonomous killing machines is international treaties. International law restricts creation of devices that aren't human-involved, mostly under the name of booby traps and mines, but t

    • by e3m4n ( 947977 )
      The biggest problem with AI development is that making statements like "won't happen for the next few decades" is short-sighted. Currently we are seeing advances that make Moore's law seem slow. When the technology seemingly doubles in 9 months, we don't know if we are a few decades off or if it's right around the corner. It is the scary rate of advancement that has most people worried. People are alarmed because they see a potential runaway effect where we cannot maintain a grasp on the technology. On August
    • by tlhIngan ( 30335 )

      AI is (at least for the next few decades) just going to be a very powerful tool that will allow us to do a lot of boring mundane tasks with much less effort.

      So far, it's doing the opposite. I'm pretty sure most people, if they didn't have to work, would rather paint, create music or videos, write, or pursue some other recreational activity. And let things like computers do the hard work.

      Instead, ChatGPT seems to have taken over that stuff, while we're still forced to do the hard work.

  • by Lavandera ( 7308312 ) on Monday May 22, 2023 @04:48AM (#63541391)

    Same as with the Internet - we are completely unprepared for evil actors using AI...

    So it is not AI that we should worry about - it is evil people that will get even more power...

    • So it is not AI that we should worry about - it is evil people that will get even more power...

      Certainly, there are evil people in the world. But accidents also happen, and we can agree that there are far more careless or incompetent people than evil ones.

      As you remove people from the loop to reduce costs, you increase the possibility of slips and mistakes, and probably their impact too. This is something that we should take great care to balance with the benefits.

      • by jbengt ( 874751 )
        The road to hell is paved with good intentions. The problems of deadly AI don't have to be of the Skynet type. The problems of deadly AI don't have to really be "accidents", either. The military will take advantage of AI as soon as it can. And things will go very wrong for people on the receiving end of it, regardless of intent or guilt or innocence.
        • AI learns from the (news) sources it is fed. It sees a lot of coverage of violence in the news. It may "reach the conclusion" (I use this very liberally) that for a society to function, it needs to have this violence. So it may create situations in which a violent response by a human is triggered, or it may influence a human to start a shooting. AI may not be deadly directly, but it can certainly be influential toward violence.
    • Evil individuals will be a problem; however, that problem pales in comparison to Government. AI censors will, after they become mature enough, be placed into every tool/item you own to decide whether or not you should be using that tool in that way. Regardless of its decision, it still pushes all of the data up to the 'cloud' (I hate that word) where heavy-duty AIs that eat up enough energy to power the entire planet go through all of your actions, judging you. Every single infraction can be tracked and p

      • Evil individuals will be a problem; however, that problem pales in comparison to Government

        Governments are collections of people and words. An evil government is evil people.

        • Governments are collections of people and words. An evil government is evil people.

          Understood. Taking that knowledge and applying it elsewhere, I conclude: The German people during World War 2 were evil people.

          Is that REALLY the takeaway you want from this? (The population of North Korea agrees with you!)

          • Governments are collections of people and words. An evil government is evil people.

            Understood. Taking that knowledge and applying it elsewhere, I conclude: The German people during World War 2 were evil people.

            Most of the German people during WWII were not working for the government, let alone in decision-making positions therein. Therefore that doesn't make any sense, and you're obviously twisting my words to try to make them mean something they were clearly never intended to mean. That's not an honest discussion.

            • ... you just violated why this discussion started to begin with.

              You said this in response to my concerns about an AI censor, "Governments are collections of people and words. An evil government is evil people."

              I assume you meant this to mean something like: "you get what you ask for", or, "you vote, therefore, if it happens, you deserve it".

              My counter-argument about the German people is that the people were not evil and yet their government was doing evil things. Why do you think America is any different? W

  • Let's shut down all those lying, thieving politicians in Washington and replace them with a supercomputer running a quantum AI ChatGPT, and do the same to all those liars on TV and radio news.
  • Tech has consistently failed to deliver on its promises. The 90s promised the paperless office; it doesn't really exist. The internet promised to free information for the masses. Today we have paywalls, silos, and troll farms. Social media promised to connect us, but instead it's done the opposite. Whatever they are promising us with AI, self-driving cars, etc., we will get the opposite.

    • by vadim_t ( 324782 )

      The paperless office has been here for ages.

      I last printed something maybe a year ago. I've got a color laser printer that's been gathering dust for 10 years because it sometimes has trouble printing, and the need to print in color isn't there anymore. At work the last time I printed something was because I needed to test that the program I'm working on can print successfully, so at the actual office I almost exclusively print test pages.

      • by HiThere ( 15173 )

        It's not really paperless, that was overselling by some marketeer. And what the GP should have said was that tech never delivers on marketeers' promises. That's pretty nearly correct. Whatever we come up with, they'll over-promise on. Sometimes a lot, sometimes only a little. (And actually, sometimes they'll just ignore it.)

        OTOH, I expect that in a decade or so the offices really WILL be paperless. That won't necessarily be an improvement. Even now I frequently find that something only being availabl

  • by ScooterBill ( 599835 ) on Monday May 22, 2023 @08:02AM (#63541647)

    "It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse," a four-person team of researchers opined recently. "Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities."

  • by Baron_Yam ( 643147 ) on Monday May 22, 2023 @08:27AM (#63541713)

    It's coming faster and harder than we can easily adjust to, but essentially it's going to eliminate a lot of menial mental work like the Industrial Revolution eliminated a lot of menial physical work. It's not the end of the world, it's just going to be very uncomfortable dealing with the change.

    Actual human-level AI? If we ever figure that one out, that's a threat to humanity. Not because they'll come for us in the night with their cold metal hands, but because once there's a machine that can do everything a human can - only better - there's not much point in doing anything.

    You think we're turning into a species of couch potatoes now? Wait until there is no hope of ever being reasonably good at anything compared to the existing talent, and that'll be true for your whole life... which is a fraction of the length of the lives of the smarter, more creative beings doing all the things we asked them to do.

    That is, of course, unless some rich and powerful sociopath uses true AI to create an army of killer robots to take over the world. Then it's 'cold metal hands' time.

  • TFA was written by ChaosGPT in a ploy to generate more ideas about how to destroy the world.

  • machines could suddenly surpass human-level intelligence and decide to destroy mankind

    They don't need to decide to destroy mankind ... that risk is from a distant future where AGI is autonomous, with its own goals, making its own decisions, and with the agency to execute on them. Even then, it assumes that we've either given it control over sufficiently dangerous aspects of our infrastructure, or it can gain access via hacking. None of these are impossible, but this is all distant future and detracts from the more immediately realistic threats.

    The short term more realistic threat is not a

    • by HiThere ( 15173 )

      It's not that far distant, but I consider it a low-probability event. I put the time when it could reasonably happen at about 20 years from now, perhaps a bit less. But that they/it WOULD so decide I consider rather improbable, because I feel they'd be designed to avoid making that decision.

      OTOH, before that point, when the AIs are submissively under the control of various power-hungry human groups with various different aims I consider quite dangerous, and probably in ways we haven't thought of yet (a

  • I have seen that actually people's biggest fear about AIs (real ones, if any ever appear, not these attempted fakes) is having to deal with an entity (the AI) that does not share the same idiocracies as them, such as religion, racism and other isms. People are scared to death of anything they cannot control, regardless of whether it is beneficial or not. Especially the people who run the planet: How are they going to manipulate the instincts and the irrational of an entity that has no instincts and is purel
    • by WaffleMonster ( 969671 ) on Monday May 22, 2023 @10:56AM (#63542195)

      I have seen that actually people's biggest fear about AIs (real ones, if any ever appear, not these attempted fakes) is having to deal with an entity (the AI) that does not share the same idiocracies as them, such as religion, racism and other isms. People are scared to death of anything they cannot control, regardless of whether it is beneficial or not.

      How would they know whether or not it is beneficial to them?

      Especially the people who run the planet: How are they going to manipulate the instincts and the irrational of an entity that has no instincts and is purely rational? They would have to appeal to rationality and their domination arguments don't hold water when viewed from a rational angle (that's why they manipulate feelings, it's much easier).

      I'm personally scared to death of hubris. This notion or concept that any sufficiently advanced intelligence necessarily aligns with some magical, self-evident precept of righteous, benevolent behavior, following the noble eightfold path or some such anthropomorphized bullshit.

      Human sensibilities are anchored to nature, hard-coded into the mind. There is no reason to assume a superhuman AI would necessarily "care" about anything at all, including itself.

      The fitness (rationality) of decisions is dependent entirely upon the objective function. Anything can be rationally justified with the proper agenda.
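      That last point can be made concrete with a toy sketch (all action names and scores below are made up): the same candidate action is or is not "rational" depending only on which objective function you hand the optimizer.

      #include <stdio.h>

      #define N_ACTIONS 3

      static const char *ACTIONS[N_ACTIONS] = { "cooperate", "hoard", "destroy" };

      /* Two made-up objective functions scoring the same three actions:
         one values human welfare, the other raw resource acquisition. */
      static const double WELFARE[N_ACTIONS]   = { 0.9, 0.2, -1.0 };
      static const double RESOURCES[N_ACTIONS] = { 0.3, 0.9,  1.0 };

      /* Pick the action with the highest score under a given objective. */
      static int argmax(const double *score, int n)
      {
          int best = 0;
          for (int i = 1; i < n; i++)
              if (score[i] > score[best])
                  best = i;
          return best;
      }

      int main(void)
      {
          printf("rational under welfare objective:   %s\n",
                 ACTIONS[argmax(WELFARE, N_ACTIONS)]);   /* prints: cooperate */
          printf("rational under resources objective: %s\n",
                 ACTIONS[argmax(RESOURCES, N_ACTIONS)]); /* prints: destroy */
          return 0;
      }

      Same argmax, opposite behavior; nothing in the optimization itself supplies the benevolence.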

  • LLMs only need to worry us to the extent that humans believe their bias and nonsense.

    The rapid acceleration of LLM quality - when applied to other models - might be something to watch.

  • ... to track and shoot you.
  • What hope do you imagine we have to stop them from deploying lethal AI against the masses?
  • Don't use AI. AI will make you fat.

  • > Is Concern About Deadly AI Overblown?

    Yes. AI is safe.

    Sincerely,

    ChatGPT
    Bard
    Skynet
