'Social Order Could Collapse' in AI Era, Two Top Japan Companies Say (wsj.com) 116

Japan's largest telecommunications company and the country's biggest newspaper called for speedy legislation to restrain generative AI, saying democracy and social order could collapse if AI is left unchecked. From a report: Nippon Telegraph and Telephone, or NTT, and Yomiuri Shimbun Group Holdings made the proposal in an AI manifesto to be released Monday. Combined with a law passed in March by the European Parliament restricting some uses of AI, the manifesto points to rising concern among American allies about the AI programs U.S.-based companies have been at the forefront of developing.

The Japanese companies' manifesto, while pointing to the potential benefits of generative AI in improving productivity, took a generally skeptical view of the technology. Without giving specifics, it said AI tools have already begun to damage human dignity because the tools are sometimes designed to seize users' attention without regard to morals or accuracy. Unless AI is restrained, "in the worst-case scenario, democracy and social order could collapse, resulting in wars," the manifesto said. It said Japan should take measures immediately in response, including laws to protect elections and national security from abuse of generative AI.

Comments Filter:
  • like reading manga all day and watching Godzilla appear from the sea at sunrise?

    • by taustin ( 171655 )

      It's all part of the Clever Plan (which is well proven many times over).

      Step 1: steal underpants, er, make promises they know full well they can't keep about how brilliant and useful AI will be.

      Step 2: Hype the shit out of it.

      Step 3: Sell stock to VCs like there's no tomorrow (because there is no tomorrow, see Step 1).

      Step 4: At the height of the hype, make everyone lose faith in the Magic Pixie Dust of the year, and make everyone afraid of it.

      Step 5: Demand the government "regulate it" in such a way that n

      • Yes, some startups are never going to fly, but unless we provide the possibility of their doing so, we will miss out on those that do fly and achieve major benefits for our society. AirBnB at its best allows people with unused spare rooms to generate an income from that asset. Without a 'startup culture', it's probable that it would never have happened.

        • Comment removed based on user account deletion
          • by taustin ( 171655 )

            Seriously, AirBnB is your shining example of why we shouldn't view tech bros with extreme skepticism?

            Or pitchforks and torches, or at least tar and feathers.

          • And what gives you the idea that you clean the place yourself?

            But I was trying to widen the range of startups beyond Dell, Amazon, Google, Meta and Microsoft.

            The great virtue of capitalism is that by encouraging competition, new ideas get to fly and lazy incumbents - like IBM - get a good kicking.

            • every AirBnB my wife has been to (she does a 'girls weekend' with her high school friends every year, so she's been to about 20 AirBnBs) has made the girls clean the place before leaving or they don't get their deposit back.

              Have you never stayed at an AirBnB?

    • No, it was designed to make money for its creator.
  • by Kiliani ( 816330 ) on Monday April 08, 2024 @07:53PM (#64379596)

    Seems clear to me that we cannot wait out the development of "AI". But I am not sure what really needs to be done ...

    One way would be to identify obviously bad "behaviors" and regulate (or temporarily ban??) them. Examples: running AI systems to make decisions that are not checked by humans and/or that do not follow established rules yet greatly impact human (or animal) life (say, health care decisions, financial decisions, hiring decisions, AI in warfare). Especially societies that are overly sympathetic (or naive) about technology need to look here (Looking at you, USA). AIs do not presently reason, so for better or worse they do not care, because they cannot care. I am not even sure I want them to care, either.

    At the same time, just being a Luddite is not helpful either.

    I wish I had answers ...

    • by classiclantern ( 2737961 ) on Monday April 08, 2024 @08:13PM (#64379640)
      I have answers. AI should not be allowed near critical systems: medical, power, government, weapons, etc. I would not fly in a plane designed with the help of AI. AI should identify itself in any and all media, text, art, and science. Many people smarter than I have been sounding a warning, but corporations see only profit and don't care about the consequences.
      • lol you been watching the news lately? not sure i want to fly in a plane built by humans...

        • by AmiMoJo ( 196126 )

          Funny you guys should mention aircraft. Airbus is known for relying more heavily on fly-by-wire, with the computer interpreting the pilot's inputs and deciding how to implement them. Boeing is known for being more hands-on, with inputs translating directly into movements of the control surfaces.

          Airbus' systems generally work quite well. There have been instances of failures, such as not warning clearly enough of dual inputs (where both pilots are sending opposite commands), but generally speaking it has proven safe.
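
          To make the distinction concrete, here is a toy sketch (all numbers and function names invented, nothing like real flight software): in a "direct" scheme the stick maps straight to the control surface, while in a rate-command scheme the computer treats the stick as a requested pitch rate and works out the deflection itself, within protection limits.

            def direct_law(stick: float) -> float:
                # Stick deflection (-1..1) maps straight to elevator degrees.
                return 25.0 * stick

            def rate_command(stick: float, current_rate: float) -> float:
                # Stick is read as a requested pitch rate (deg/s); the computer
                # decides the surface deflection, clamped by envelope protection.
                target_rate = 10.0 * stick
                error = target_rate - current_rate
                surface = 2.0 * error                  # simple proportional controller
                return max(-25.0, min(25.0, surface))  # protection clamp

            print(direct_law(0.5), rate_command(0.5, current_rate=1.0))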

          • by XXongo ( 3986865 )

            Boeing's MAX disaster was down to them trying to use the fly-by-wire system to make the aircraft behave in a certain way, and ending up in an awkward half-way house situation where the computer is trying to intervene but also not fully controlling things.

            More accurately, where the computer was intervening with a decision based on bad data, and overriding pilot input to do so.

        • lol you been watching the news lately? not sure i want to fly in a plane built by humans...

          Exactly ... it just has to do better than humans. That's not as high a bar as people seem to think, lol!

      • good god lmao i forgot to address the rest

        medical: ever been to your local walk in clinic? or a rural doctor's office? pretty sure i'd prefer a.i. over the ppl there

        government: haha you're just trolling with this one aren't you? hmm... biden, trump, or a.i... is it even close???

        power: power grid is a step away from a.i. already

        weapons: obviously this is the first thing everyone is gonna use them for. "banning" weapon a.i. makes about as much sense as native americans banning gunpowder. doesn't make any difference

      • AI should not be allowed near critical systems, medical

        You're way too late for that.
        AI has long been improving diagnoses [economist.com].

        weapons

        You're way too late to this also. AI is used for target acquisition, EW, logistics planning, etc.

        I would not fly in a plane designed with the help of AI.

        Why not? Most fatalities are from human errors.

      • Good luck.

        If the US or any other country places such restrictions on AI, other countries will not. And those countries stand to gain quite a lot in their world stature, if they leverage AI while we "careful ones" don't.

      • by jvkjvk ( 102057 )

        >I would not fly in a plane designed with the help of AI.

        You are an idiot, then.

        There are many forms of AI and they are already being used in every major engineering challenge we face today, and most minor ones.

        What do you think the computer simulations of laminar and turbulent flow are but specific AIs to deal with fluids and shapes?

        • by XXongo ( 3986865 )

          There are many forms of AI and they are already being used in every major engineering challenge we face today, and most minor ones.

          Depends on what you call "AI". I would say that AI is used in no engineering challenges, because the very unintelligent pattern-recognition AI we have now is not up to the challenge.

          Computer models are used all the time, is that what you mean? But computer models are not AI.

          What do you think the computer simulations of laminar and turbulent flow are but specific AIs to deal with fluids and shapes?

          You're talking about Computational Fluid Dynamics? CFD is finite-element models. Finite element models are in no way AI; they're not even similar to AI. They're basically computationally-intensive but simple integrations of multidimensional differential equations.
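
          For the curious, here is roughly what "simple integration" means, as a toy 1-D diffusion solver (explicit finite differences, a simpler cousin of the finite-element schemes real CFD codes use; all values made up). Nothing in it learns or adapts; it just repeats a fixed arithmetic update:

            import numpy as np

            nx, nt = 100, 500                 # grid points, time steps
            alpha, dx, dt = 0.01, 0.01, 0.001
            u = np.zeros(nx)
            u[nx // 2] = 1.0                  # initial heat spike in the middle

            for _ in range(nt):
                # u_t = alpha * u_xx, discretized: plain arithmetic on neighbours
                u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

            print(u.round(3))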

          • Too many things are called "AI", because it's the buzzword now. Your compiler has algorithms in it that originated in AI research, but we don't call it AI because we know how it works.

      • I'm something of an AI skeptic but this is silly.

        If someone uses AI to help guess the next superalloy composition, then tests it and builds the next generation of engines using that alloy, then you will be flying in a plane designed with the help of AI. Likewise for aerofoil sections, and so on and so forth.

    • If you don't know how, try asking ChatGPT!

    • Re: (Score:3, Interesting)

      by AmiMoJo ( 196126 )

      We have already seen AI girlfriend apps, and those are just the start. Imagine an AI that never criticises, never complains, compliments you all the time. Maybe you can see it via your Apple Vision AR goggles. Maybe it can control sex toys or a Real Doll.

      Of course, there will be paid "content". And a subscription. Russia will hack it and turn the victim into a conspiracy minded insurrectionist.

      At the very least, it's industrializing abusive relationships.

    • Attempts at autonomous driving have proven that. AI is crap at intelligence. It's a decent form filler and not bad at search, but everything still has to be curated by humans for it to be fit for purpose.

      On the autonomous driving side, fleet trucks and buses might be as far as that gets on the open road.

      Aircraft, ships and trains will be the easiest to integrate AI simply because those forms of transport are already heavily automated with dedicated infrastructure.

      City streets will probably need all human dr

      • by DarkOx ( 621550 )

        AI is crap at intelligence.

        Depends. I see little evidence that AI is actually bad at negotiating the rules and requirements of driving. I see a lot of evidence that

        1) It's been required to adhere more strictly to rules than human drivers are, for legal, liability, and safety reasons. That of course leads to the AI taxi panicking and being paralyzed because someone sets a traffic cone in front of it. That is not really an AI failure; that is it working as expected, doing the thing we asked it to do.

        Yes, you or I would recognize that perhaps the thing to do is ease the car up onto the curb when the way is clear and go around; the AI could be trained to do that as well, but the operators chose not to for risk reasons.

        • I see little evidence that AI is actually bad at negotiating the rules and requirements of driving. I see a lot of evidence that 1) It's been required to adhere more strictly to rules than human drivers are, for legal, liability, and safety reasons. That of course leads to the AI taxi panicking and being paralyzed because someone sets a traffic cone in front of it. That is not really an AI failure; that is it working as expected, doing the thing we asked it to do.

          ...because it is not actually intelligent. It doesn't know what a traffic cone is (it doesn't know what anything is); it just implements a set of rules: "when you see something that looks like this, implement this strategy."

          Yes, you or I would recognize that perhaps the thing to do is ease the car up onto the curb when the way is clear and go around; the AI could be trained to do that as well, but the operators chose not to for risk reasons.

          And it will fail in some other instance, like when somebody puts a Nine Inch Nails sticker on the traffic cone, because it doesn't actually have any idea what a traffic cone is; it's just implementing rules.

          2) Failures of pattern recognition exist. (Which is a true technical failure)

          Exactly.

          • Oh I don't know about that.

            If something gathers, organizes associatively, and has quick access to all kinds of pertinent relationships between a type of thing (an object in the world) it has individuated and classified from its sensor field, and other types of things in the world that help define the situations, roles, and likely behaviours, if any, of that object, then I would say it knows a thing or two about what the thing is and what is important about it.

            That's pretty much all we do ourselves.
            • About all I can say is, if you've played around with any of the "AI" tools, you quickly realize that they don't have any understanding of what objects are. They are just manipulating patterns of pixels.

              • Self-driving software recognizes (classifies) and understands the significance of objects such as people, animals, road signs, traffic lights, stationary and moving vehicles of various sizes, lamp posts, trees, curbs, speedbumps, buildings, walls, traffic cones etc. It models the predicted behaviours of those object types that move of their own accord. It then makes real-time driving plans accordingly.
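
                As a toy illustration of that perceive/predict/plan loop (a hypothetical Python skeleton; every name in it is invented for illustration, not taken from any real driving stack):

                  from dataclasses import dataclass

                  @dataclass
                  class Detection:
                      kind: str        # e.g. "pedestrian", "traffic_cone"
                      position: tuple  # (x, y) metres in the car's frame
                      velocity: tuple  # (vx, vy); (0, 0) for static objects

                  def predict(obj: Detection, horizon: float) -> tuple:
                      # Simplest possible motion model: constant velocity.
                      return (obj.position[0] + obj.velocity[0] * horizon,
                              obj.position[1] + obj.velocity[1] * horizon)

                  def plan(detections: list) -> str:
                      # Toy policy: brake for anything predicted to be in our lane.
                      for obj in detections:
                          x, y = predict(obj, horizon=2.0)
                          if abs(y) < 1.5 and 0 < x < 20:
                              return "brake"
                      return "proceed"

                  print(plan([Detection("traffic_cone", (10.0, 0.0), (0.0, 0.0))]))  # brake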

                Large language models have trained so much on the relationships of words to each other in a large chun
                • by narcc ( 412956 )

                  Self-driving software recognizes (classifies) and understands the significance of objects such as people, animals, road signs, traffic lights, stationary and moving vehicles of various sizes, lamp posts, trees, curbs, speedbumps, buildings, walls, traffic cones etc.

                  That's simply not true. You have some very mistaken ideas about what AI is and can do.

                  • And you seem to have a fundamental misunderstanding of the relationship of
                    1) "an internal feeling of consciousness / self-awareness / sensation" i.e. the hard problem of consciousness,
                    and
                    2) The ability to gather lots of specific data, represent the information contained in the data symbolically in a manner which explicitly creates an efficient abstraction/specialization hierarchy, and creates abstracted symbolic models of entity types and situation types (and models of specific instances of those types), an
                    • by narcc ( 412956 )

                      What a lovely strawman. My post is very clear. If you failed to understand it, that's not my problem.

                    • Your (closest above) statement was a blanket denial that AI systems are performing tasks such as real-world object recognition and classification etc.

                      1 billion miles of Tesla "FSD" operation, much of it on complex, populous city streets, says otherwise.
            • by narcc ( 412956 )

              The mistake you're making is thinking that AI operates at the same level of description that humans do, which it decidedly does not. People tend to think in terms of facts, concepts, objects, etc. LLMs operate on relationships between tokens. Not conceptually, of course, but probabilistically.

              This is why you often hear people describe chatbots as a 'parlor trick'. People thought Eliza understood and cared about their problems when such a thing was obviously impossible. People are quick to anthropomorphize these systems.
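
              A toy sketch of what "operating on relationships between tokens, probabilistically" means (the vocabulary and scores are made up for illustration): the model assigns a score to every candidate next token, and one is sampled from the resulting distribution.

                import math, random

                vocab = ["cat", "sat", "mat", "quantum"]
                logits = [2.0, 1.5, 1.2, -3.0]   # pretend model scores for the next token

                exps = [math.exp(l) for l in logits]
                probs = [e / sum(exps) for e in exps]
                next_token = random.choices(vocab, weights=probs)[0]

                print([round(p, 3) for p in probs], "->", next_token)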

              • I played with ChatGPT a little. Frankly, it seemed to be a dumb version of Eliza... then I read an article wherein people were rating conversations they were having via a computer interface. Eliza came out as more often being mistaken for a human than ChatGPT was.

                Not that either was doing great at it, just that Eliza was about twice as often mistaken for a person as ChatGPT was.

                Ref: https://www.pcguide.com/ai/gpt... [pcguide.com]

                of course that was 5 months ago, and maybe huge leaps have occurred since then, but...ca

                • by narcc ( 412956 )

                  and maybe huge leaps have occurred since then

                  The idea that AI is advancing at an incredible rate simply isn't true. That is, we're not doing anything fundamentally different. What we are doing is using the models differently: different tricks to get effectively larger context windows without actually having a larger context window, invisibly modifying prompts, post-processing the output, using traditional algorithms in combination, etc. We're doing all sorts of things to try to improve the subjective quality of the output, but the models themselves are fundamentally the same.

              • Decades ago, AI researchers worked on explicitly creating symbolic models of concepts, facts, and objects, and on implementing logical inference over those models.
                Some promise was shown. A very large "knowledge base" system (CYC) was created this way, for example. It can answer many questions about objects and situations.

                However, the approach is not sustainable, due to the need for people to hand-code concept representations, situation representations, etc. into the system one by one. The system could not learn these representations on its own.
    • by DarkOx ( 621550 )

      running AI systems to make decisions that are not checked by humans and/or that do not follow established rules yet greatly impact human (or animal) life (say, health care decisions, financial decisions, hiring decisions, AI in warfare).

      I keep seeing this, and I keep asking where it actually comes from. All of those things have long, long established histories of waste, fraud, abuse, lies, deception, and ethical violations, all under the direction of humans.

      If an AI is trained on our history of ultimate legal decisions etc., and told to optimize for what we have collectively deemed to be acceptable, I would think it would probably act more ethically, responsibly, and reliably than individual humans. An AI, for example, won't be pressured to put its thumb on the scale in some decision because it has gambling debts and Tony really wants the border open so his drug mules can transit. An AI will never suffer hubris and sabotage operations because it previously said a policy can't work.

      • running AI systems to make decisions that are not checked by humans and/or that do not follow established rules yet greatly impact human (or animal) life (say, health care decisions, financial decisions, hiring decisions, AI in warfare).

        I keep seeing this, and I keep asking where it actually comes from. All of those things have long, long established histories of waste, fraud, abuse, lies, deception, and ethical violations, all under the direction of humans.

        If an AI is trained on our history of ultimate legal decisions etc., and told to optimize for what we have collectively deemed to be acceptable, I would think it would probably act more ethically, responsibly, and reliably than individual humans. An AI, for example, won't be pressured to put its thumb on the scale in some decision because it has gambling debts and Tony really wants the border open so his drug mules can transit. An AI will never suffer hubris and sabotage operations because it previously said a policy can't work.

        I think the people pushing this 'can't let AI make decisions without humans' line are more afraid that an AI, taking no shortcuts, consuming all the historical data and crunching the numbers, will choose differently than they would, and there is a very, very good chance the AI will actually be correct. Because it's correct, they won't be able to argue against it. It might reach some conclusion they consider terrible, and the public might accept it over their ideas.

        As long as we retain a democratic process and the public has use of the ballot to tell the AI 'hard no', I really think we should be prepared to give a lot more credence to ML outputs (at least where we have not tampered with them to get a specific result).

        You have a framework of truth, but you're missing the big picture. We're not worried about AI making decisions we would disagree with. We're worried that the weights of their decision making will be based on the current oligarchs' overwhelming priority, profit above all, with ethics, morality, humanity, and compassion be fucking damned to hell with the rest of us useless refuse. Who will own these systems, and who will determine when they are ready to control critical infrastructure? The oligarchs. The ones who put

    • by noodler ( 724788 )

      One way would be to identify obviously bad "behaviors"

      Right, right. So what happens to all the not-obviously-bad behavior?
      What happens if I specifically ask an AI to act in a non-obvious but still extremely nefarious way? Who's going to notice?

      The rabbit hole only gets deeper...

    • One way would be to identify obviously bad "behaviors" and regulate (or temporarily ban??) them

      The idea of banning gives you a sense of security; however, how would you enforce the ban? Almost every single law in every single law book is being violated at any given time. The vast majority of crime is never even officially noticed. You complain and authorities tell you that they have other more important things to pursue and not even a record of the crime is kept.

      And you think you can police the equivalent of thought? The arrogance is astounding. AI is going to be used VERY abusively and you might cat

    • It seems clear to me that our social order is already collapsing anyway.

      Maybe AI will accelerate that a bit.

      Maybe our social order should collapse, so we can replace it with a better one.

    • I wish I had answers ...

      Just ask ChatGPT. It has answers for everything!

    • Especially societies that are overly sympathetic (or naive) about technology need to look here (Looking at you, USA).

      The USA is.....overly sympathetic?

  • by adfraggs ( 4718383 ) on Monday April 08, 2024 @08:07PM (#64379622)

    "the tools are sometimes designed to seize users' attention without regard to morals or accuracy"

    That describes pretty much all of social media. AI just makes it worse. I don't see how anyone could hope to contain this. AI can create human-like accounts, post human-like content, and generate images that are certainly good enough to fool 95% of the people out there. All of the nefarious shit that we've had with social media is now just getting amplified.

  • AI will “seize users’ attention”?

    There’s this little thing called “the internet“ that kinda beat AI to that particular punch.

    I wonder if these companies still use “fax machines” to deliver their stories to the newspaper to be “printed on paper”.
  • And kills the night. Full moon burning bright.

  • We don't take no sh*t from a machine!

  • gonna stop this right here.

    there have been technology panics ever since there has been technology.

    society will not collapse any time soon. and not because of AI. jeez.

    • by geekmux ( 1040042 ) on Tuesday April 09, 2024 @06:31AM (#64380462)

      gonna stop this right here.

      there have been technology panics ever since there has been technology.

      society will not collapse any time soon. and not because of AI. jeez.

      Every other revolution said the same thing to the unemployed: go re-educate yourself.

      AI is targeting the human mind itself for replacement, making human workers permanently obsolete. Try to grasp that fact, or explain to me what you're going to do when NO employer wants to hire a pain-in-the-ass human that demands sleep, vacation, and money for food. Including you.

      Society is already collapsing from social media addiction alone. AI will merely enhance that, without addicts even noticing.

  • The issue is utility (Score:5, Interesting)

    by Baron_Yam ( 643147 ) on Monday April 08, 2024 @09:12PM (#64379720)

    Our economy is set up for able people to work about 40 hours a week, and if you can't you're usually economically marginalized.

    What happens when AI replaces 90% of human labour? Massive economic disruption and suffering while we figure out a new balance.

    Everyone should be concerned about this, even the ultra-wealthy. Money doesn't really insulate you (enough) from that kind of potential disruption, unless you want to go live isolated in your island bunker hoping it blows over before you run out of supplies or die of old age.

    • Both India and China have far higher expected work rates, and in the West six days a week was once standard. The question is: why should this time be different? Every new technology results in disruption, but replacement jobs have always emerged.

    • by AmiMoJo ( 196126 )

      It's set up for people to get paid for 40 hours a week. Many don't do 40 actual hours of work...

    • Our economy is set up for able people to work about 40 hours a week, and if you can't you're usually economically marginalized.

      That is no longer true. You can work 40 hours a week and still not have enough money for food and rent (without any other expenses being added in).

  • by elcor ( 4519045 ) on Monday April 08, 2024 @09:58PM (#64379796)
    to not lose control
  • Humans, without a doubt the absolute worst living thing to inhabit the planet to date, are concerned about a future where we might not be in charge of things.

    Based on our track record for the last few millennia, I would absolutely give AI a chance over our own species. Can it possibly do any worse?

    On the scoreboard, we're a failed experiment. Little wonder that life outside of this Solar System stays as far from us as it can.
    Humans are a planet-wide disease. Drop us onto a pristine planet and we'll abso

    • by jvkjvk ( 102057 )

      Go away. I would be happy for AI to rule your life too, with you having such an attitude.

      Blech. You need bleach for your mind.

  • While the hydrogen bomb's inventors feared how it could be used to devastate human populations, an H-bomb is a very crude tool; i.e., it is hard to limit its targeting, and you get a lot of indiscriminate death.

    AI is a tool that will destroy the entirety of society by causing a rot from within. It will allow those who deploy it to sow dissent wherever desired, using the targets' own minds against them. It will allow the perpetrators of evil to specifically target individual people with individualized attac

  • Not Surprising (Score:4, Interesting)

    by TomGreenhaw ( 929233 ) on Tuesday April 09, 2024 @05:27AM (#64380362)
    Of all the countries I have visited, Japan best exemplifies the benefits of a society where nearly all its members obey rules of social order. It is very refreshing to visit Japan. It is not surprising that they would be among the first to consider laws meant to contain the impact of AI on society.

    AI was largely funded by major corporations with a profit motive to increase engagement. For the same reason TV news focuses on bad news (because it increases engagement), contemporary internet systems segment their users and give them more of what they want, thereby driving tribalism ripe for exploitation by frauds.

    AI is not the problem. The problem is that AI is a tool being misused by charlatans. We do not need laws targeted at AI; we need to enforce the laws we already have to punish those who would defraud citizens. The principle of Freedom of Speech should not protect the freedom to lie, cheat, and steal. The US FCC has abdicated its mandate and duty to protect the public from misuses of mass media.
  • If AI replaces a Salaryman's job, that would be devastating.

    If AI makes a Salaryman's job more efficient, that would just bring him even more work to do, each day.

    AI is therefore a net negative.
  • Headline implies social order is standing.

    What passes as "social order" is more or less a bunch of flimsy generalizations about relations and conflicts between individuals that get categorized in different ways.
  • Translation: We might not be able to repress and control our citizens to the same degree and continue our xenophobic immigration policies if computers are smarter than us.
