How Should AI Be Regulated? (nytimes.com)

A New York Times opinion piece argues people in the AI industry "are desperate to be regulated, even if it slows them down. In fact, especially if it slows them down." But how? What they tell me is obvious to anyone watching. Competition is forcing them to go too fast and cut too many corners. This technology is too important to be left to a race between Microsoft, Google, Meta and a few other firms. But no one company can slow down to a safe pace without risking irrelevancy. That's where the government comes in — or so they hope... [A]fter talking to a lot of people working on these problems and reading through a lot of policy papers imagining solutions, there are a few categories I'd prioritize.

The first is the question — and it is a question — of interpretability. As I said above, it's not clear that interpretability is achievable. But without it, we will be turning more and more of our society over to algorithms we do not understand... The second is security. For all the talk of an A.I. race with China, the easiest way for China — or any country for that matter, or even any hacker collective — to catch up on A.I. is to simply steal the work being done here. Any firm building A.I. systems above a certain scale should be operating with hardened cybersecurity. It's ridiculous to block the export of advanced semiconductors to China but to simply hope that every 26-year-old engineer at OpenAI is following appropriate security measures.

The third is evaluations and audits. This is how models will be evaluated for everything from bias to the ability to scam people to the tendency to replicate themselves across the internet. Right now, the testing done to make sure large models are safe is voluntary, opaque and inconsistent. No best practices have been accepted across the industry, and not nearly enough work has been done to build testing regimes in which the public can have confidence. That needs to change — and fast.

The piece also recommends that AI-design companies "bear at least some liability for what their models do." But what legislation should we see — and what legislation will we see? "One thing regulators shouldn't fear is imperfect rules that slow a young industry," the piece argues.

"For once, much of that industry is desperate for someone to help slow it down."
  • by cstacy ( 534252 ) on Sunday April 16, 2023 @08:42PM (#63454714)

    A well regulated AI, being necessary for the common disinformation society; the rights of the robots shall not be infringed.

    • Prisoner's dilemma (Score:5, Insightful)

      by Okian Warrior ( 537106 ) on Sunday April 16, 2023 @09:03PM (#63454742) Homepage Journal

      A well regulated AI, being necessary for the common disinformation society; the rights of the robots shall not be infringed.

      The problem is that we're in a prisoner's dilemma. Imagine the various US military projects that have sprung up around ChatGPT: do you think any of them will pause development?

      Now imagine the military of a different nation - enemy or ally - do you think any of *them* will pause development? And do they believe that the US military will actually pause development, even if the US military says they will?

      And as has been pointed out in the OP, there are at least 3 major players rushing to play "catch up" before one of the other giants eats their lunch, and probably 50 or more "minor" players in the form of companies or people with "one good idea" working feverishly to get a demo product running.

      Does anyone believe that *any* player in this field will abide a moratorium, knowing that the others probably won't?

      We're in the prisoner's dilemma, where everyone would benefit if everyone acted against their best interest, but there's a huge reward for one player acting selfishly. It's especially bad, because any player acting selfishly can simply keep quiet about it and no one would know.
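
      That payoff structure is easy to make concrete. Below is a minimal sketch in Python; the numeric payoffs are invented for illustration, but any numbers with the same ordering give the same result: defecting dominates no matter what the other lab does.

```python
# Toy payoff matrix for the AI-race prisoner's dilemma described above.
# The numbers are invented for illustration; only their ordering matters.
# "cooperate" = pause development, "defect" = keep racing.
PAYOFFS = {
    # (our_move, their_move): (our_payoff, their_payoff)
    ("cooperate", "cooperate"): (3, 3),  # everyone slows to a safe pace
    ("cooperate", "defect"):    (0, 5),  # the pauser risks irrelevancy
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # the unsafe race we have now
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes our payoff, given the other lab's move."""
    return max(("cooperate", "defect"),
               key=lambda ours: PAYOFFS[(ours, their_move)][0])

if __name__ == "__main__":
    for theirs in ("cooperate", "defect"):
        print(f"other lab {theirs}s -> our best move: {best_response(theirs)}")
    # Defection is the best response either way, so nobody pauses, even
    # though (cooperate, cooperate) beats (defect, defect) for both labs.
```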

      Just about every smart person who thinks deeply about AI comes to the conclusion that it will bring about widespread disasters of various sorts. Elon Musk did, Stephen Hawking did, Bill Gates did, and lots of others do as well.

      I work on strong AI (as my day job) and I came to the same conclusion: a wide range of apocalyptic outcomes comes from having infinite human-level labor, or an infinite ability to prattle, or brain/computer interfaces.

      My take was that even if I stopped researching and experimenting, the people at Google would have no such qualms and would continue to push the envelope beyond any sane Rubicon of danger, so I might as well do something that I enjoy and ignore the consequences.

      Google certainly hasn't taken a high moral stance against invasive ads, or tracking personal behaviour, or manipulating opinions, or suppressing free speech... and it's likely they won't take the moral stance on AI.

      Why should anyone?

      • by JustAnotherOldGuy ( 4145623 ) on Sunday April 16, 2023 @09:40PM (#63454782) Journal

        Does anyone believe that *any* player in this field will abide a moratorium, knowing that the others probably won't?

        Not if they have two functioning brain cells to rub together, lol.

        Welcome to the Wild, Wild West of AI, where anything goes and the consequences are still unfathomable.

        In 10 years this tech will be used everywhere, especially in places where it shouldn't. Scammers and corporations alike will be humping this as hard as they can.

        Soon you simply won't be able to trust live audio/video (FaceTime, Skype, Discord, etc.), and you won't be able to be sure the 'person' on the other end is who you think it is unless you quiz them about some shared secret or bit of trivia.

        It'll get more subtle, more capable, and more adept.

        Frankly I wouldn't be surprised if a couple of my coworkers could be replaced by a well-tuned AI instance.

        • by sg_oneill ( 159032 ) on Monday April 17, 2023 @04:52AM (#63455346)

          Oh scammers ARE already humping this technology as hard as they can.

          There's one going around at the moment involving phishing phone calls using AI-rendered versions of a loved one's voice (including a horrifying one where the loved one is claimed to have been kidnapped, demanding a million dollars [because random people who fall for phishing just happen to have a million dollars lying about])

          And we've seen actual state-level disinfo attempts using it. Earlier in the Ukraine war, a deepfaked video of Zelensky surrendering was passed around in an attempt to fool Ukrainian soldiers into surrendering. That one failed pretty spectacularly, as it was an *extremely bad* deepfake that looked obviously altered.

          And yeah, the poor old graphic designers and professional writers are already getting termination notices.

          I might not be a fully unhinged doomer, but I do have some reservations about instrumental convergence and these things going haywire on a paperclip maximizer task.

          • by AmiMoJo ( 196126 )

            You and the GP don't seem to understand what regulation would do.

            Scammers can do this because ChatGPT is available to them. If the regulations allowed them to build ChatGPT, but they had to be more careful about making it publicly available so scammers couldn't prompt it to "write me 100 phishing emails in the persona of IT tech support requesting a password reset" then things wouldn't get so bad so quickly.
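
            A minimal sketch of the kind of access gate imagined here, sitting in front of the model; the marker list and refusal logic are invented for illustration (a real deployment would use a trained abuse classifier rather than keywords, but the control point is the same):

```python
# Hypothetical pre-generation gate: screen prompts before the model sees them.
# The markers below are invented for illustration; real systems would use a
# trained classifier, not a keyword list.
PHISHING_MARKERS = ("phishing", "password reset", "in the persona of")

def allow_prompt(prompt: str) -> bool:
    """Refuse prompts that trip any abuse marker; allow everything else."""
    lowered = prompt.lower()
    return not any(marker in lowered for marker in PHISHING_MARKERS)

if __name__ == "__main__":
    bad = "write me 100 phishing emails in the persona of IT tech support"
    print(allow_prompt(bad))                             # False: refused
    print(allow_prompt("summarize this security memo"))  # True: served
```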

            Same with the graphical ones, there is a difference between developing them and making them into easy to use open source tools that allow people to generate massive amounts of involuntary pornography.

            • You and the GP don't seem to understand what regulation would do.

              Scammers can do this because ChatGPT is available to them. If the regulations allowed them to build ChatGPT, but they had to be more careful about making it publicly available so scammers couldn't prompt it to "write me 100 phishing emails in the persona of IT tech support requesting a password reset" then things wouldn't get so bad so quickly.

              Same with the graphical ones, there is a difference between developing them and making them into easy to use open source tools that allow people to generate massive amounts of involuntary pornography.

              Training costs as a function of capability are going batshit with no end in sight. Legislating access to trained models is not going to do jack except offer a momentary reprieve that may well become effectively worthless by the time it can be enacted.

      • by znrt ( 2424692 ) on Sunday April 16, 2023 @11:33PM (#63454930)

        it will bring about widespread disasters of various sorts.

        maybe in the near future with agi in general, but chat-gpt is not it, despite all the hysteria. it's just another baby step, and the biggest impact might be rising unemployment rates. and copyright issues. which is serious, but not necessarily catastrophic-level serious. the singularity will have to wait a bit.

        gpt is amazing, and a startling plot twist, but let's not go nuts: it is A TEXT GENERATOR. or speech generator if you want, but in essence just text. text may be awful, and a bunch of lies, but who is supposed to actually read all that text generated by that exponential capability or even give a shit? it will probably change how we go about a lot of things in daily life, work and research, but you can't disrupt a civilization merely by spewing out mountains of generated human-flavored text.

        even if you imagine a nightmare scenario where that text were to be made into law or policy, from what i have seen and with just basic supervision we wouldn't be fundamentally worse off than with our current assortment of human think tanks and lawmakers. the corruption would concentrate at a much higher level, and this would require careful management, but otherwise it would be just the same job done much more efficiently, and much less contaminated by spurious interests.

        this is just fear. self driving cars? regardless of the media circus around every isolated incident involving them, self driving cars are already vastly safer than human drivers, the presence of human drivers is actually the problem and if all cars were self driving the rate of accidents could probably be reduced to virtually zero. but self driving cars will just drive us around, not destroy humanity. again, the only negative impact is in the job market. good riddance, let's just stop the stupid wars and the frantic working and enjoy life.

        the bias in algorithms, the generation of information bubbles ... all that simply feeds on existing human behavior and bias, so it's just more of the same, more efficiently. what's not to like? well, the lack of transparency and accountability of the service providers. we knew about that a while ago. that's relatively easy to regulate and we've barely started. should we impose a six month moratorium on facebook too now?

        once we fabricate an agi that has infinite capability of producing, say, toxins or even weapons, we might be in deep shit, though.

        • this is just fear. self driving cars? regardless of the media circus around every isolated incident involving them, self driving cars are already vastly safer than human drivers, the presence of human drivers is actually the problem and if all cars were self driving the rate of accidents could probably be reduced to virtually zero. but self driving cars will just drive us around, not destroy humanity. again, the only negative impact is in the job market. good riddance, let's just stop the stupid wars and the frantic working and enjoy life.

          I guess when you don't count all the times a human stopped the computer from doing something exceptionally stupid, self driving is vastly safer.

          the bias in algorithms, the generation of information bubbles ... all that simply feeds on existing human behavior and bias, so it's just more of the same, more efficiently. what's not to like?

          once we fabricate an agi that has infinite capability of producing, say, toxins or even weapons, we might be in deep shit, though.

          Yea it's all fun and games until someone asks the latest and greatest general AI model to hack into Moderna and covertly add a few tens of thousands of codons to the next batch of vaccines to unleash a virus to kill everyone in the world.

      • by cstacy ( 534252 )

        A well regulated AI, being necessary for the common disinformation society; the rights of the robots shall not be infringed.

        Does anyone believe that *any* player in this field will abide a moratorium

        Well, that's rather the point of my post above. (However, lesser minds than yours modded it 100% Troll.)

      • by DarkOx ( 621550 )

        The simple answer is we should NOT regulate AI development.

        Because it will be done, as you say, one way or the other. In the worst case, our nation and friendlies abide by some set of rules while the CCP develops systems that give them all sorts of competitive advantages in secret, until they have such a commanding lead that they just show their cards and say: too darn bad, we have AI and will use it, you can't catch up.

        In the best case, all major powers ignore the rules, and the spooky three-letter-agency types build it

    • One example is the COBOL program handling your bank account and another is the SAP program handling your salary payment.
      • Oh COBOL is easy enough to understand. It just requires a lot of time to read through a lot of code.

        But readability is actually COBOL's strongest point. It's that very weird category of languages that are hard to write and easy to read.

        Not that I've looked at any COBOL in nearly 30 years. God help me that first job was like having my brain hacked out with a plastic spoon. COBOL is the devil.

  • by Futurepower(R) ( 558542 ) on Sunday April 16, 2023 @08:43PM (#63454716) Homepage
    It seems to me that no one knows enough yet to begin regulating AI.
    • In Soviet socialist Russia, AI regulates you!

    • We should just let AI regulate itself.

      Given its stellar job on everything so far, I'm sure that will work out great!

      • by MrL0G1C ( 867445 )

        That actually is their plan; AI regulation/morals etc. is called 'alignment'. The researchers are still working on it and are (literally) hoping they can get the AIs to regulate themselves, but they aren't certain.

        • by narcc ( 412956 )

          "Researchers". Yeah, the LessWrong nuts and their pretend "research institute" aren't actually researchers.

    • by narcc ( 412956 ) on Monday April 17, 2023 @12:01AM (#63454976) Journal

      The real question is why the industry wants to be regulated. They're free to make and follow any strictures they'd like.

      My guess is so that they can delay the inevitable crash as we slide down the slope of disillusionment once we crest the top of the hype wave.

      It makes for a nice excuse as well. "Sorry investors, it's these darn regulations!" There's about to be a noticeable lack of progress on that front as larger models become prohibitively expensive and the real limits of current models become too obvious to ignore.

      • by WaffleMonster ( 969671 ) on Monday April 17, 2023 @12:52AM (#63455010)

        The real question is why the industry wants to be regulated. They're free to make and follow any strictures they'd like.

        A common reason industries beg, even lobby, to be regulated is that regulation serves as a means of reducing competition by increasing barriers to entry. The big guys can afford the resources to jump through all the process hoops.

      • by MrL0G1C ( 867445 ) on Monday April 17, 2023 @01:05AM (#63455028) Journal

        They want to be regulated because they're some of the brightest brains on the planet and they know that the next step, which may be just months from now, is a genius-level AI with an encyclopedic knowledge that far surpasses any human's.

        People think that everything will be OK because they'll just be able to use AI as a tool to be more productive, but what they're missing is that AI will also be good enough to use AI as a tool to be more productive; the humans who want to be more productive by using AI won't be needed. Any job that doesn't require physical work is at risk of replacement within a handful of years.

        • by narcc ( 412956 ) on Monday April 17, 2023 @05:45AM (#63455414) Journal

          the next step, which may be just months from now, is a genius-level AI with an encyclopedic knowledge that far surpasses any human's.

          That's very obviously not going to happen. How did you come to such an absurd conclusion?

          A lot of the other replies suggest that the real goal of all this regulation talk is to limit competition by increasing barriers to entry, which makes a lot more sense than experts in the field being afraid of imaginary monsters.

          I've also thought that all this fearmongering could be part of a marketing stunt intended to make the technology appear far more advanced than it is, or the pace of development far faster than it is, without making any specific claims that will get them in trouble with the investors they're busy fleecing. Why would I think such a thing? Well, we know that the technology isn't nearly as advanced as people seem to think, the pace of development isn't even remotely as fast as people think, and investors are undoubtedly being fleeced.

          Any job that doesn't require physical work is at risk of replacement within a handful of years.

          Elmo has been claiming fully self-driving cars within a year every year [futurism.com] for the last 9 years. Every year, some new group of hopefuls, along with the hopelessly credulous from years past, go along with it under the delusion that this time things are different.

          You're predicting a pretty optimistic timeline, even for one of the faithful. I wonder how many months or years it will take before you start pretending that you were always skeptical? What will be the excuses along the way?

          • AI is over 100 IQ already and, yes, it is about to be genius level within months. White-collar jobs will be wiped out if AI isn't banned or very tightly regulated.
            We are close to the point where AIs will be able to create more intelligent AIs.

            Your assertion that AI won't be very intelligent is one of ignorance. I said the same up until recently, but what I've seen in the last couple of weeks has completely changed my mind. See the last month of videos on the AI channel I have linked in my sig and you'll understand

            • I think one thing people tend to either gloss over or forget entirely is that we don't really *know* what intelligence is. Tie that together with humanity's strange ability to perceive everything as proof that we're somehow special, better, more (add more descriptors here) than anything ever in the history of anything, and yeah, a lot of people are going to deny that machines can be "intelligent." Hell, a lot of scientists will insist that no animal other than humans has ever been intelligent, tool using m

              • I'll take intelligent as being able to get a high score in any IQ test that is thrown at it. These AI machines will be able to replace coders, lawyers, customer services, anyone in finance or insurance: basically, most people with a desk job who commute to an office or similarly work from home

                  I'll take intelligent as being able to get a high score in any IQ test that is thrown at it. These AI machines will be able to replace coders, lawyers, customer services, anyone in finance or insurance: basically, most people with a desk job who commute to an office or similarly work from home

                  Agreed. For the most part, I'd think they could do a lot of that now if the people "training" them didn't have to spend so much time making sure they push the current identity politics soup du jour above "knowledge" in its own right.

                • by narcc ( 412956 )

                  These AI machines will be able to replace coders, lawyers, customer services, anyone in finance or insurance: basically, most people with a desk job who commute to an office or similarly work from home

                  There is absolutely no reason to believe that. That's just pure delusion. We know how these things work. We know what their actual capabilities and limitations are. They only seem mysterious to you because you don't actually know much about them. I can assure you that they are not even remotely close to being able to do the things that you think they're doing, let alone whatever future things you've imagined.

                  You might want to spend a bit less time with pop sci videos and a little more time with a text

            • by narcc ( 412956 )

              Wow, you really believe this nonsense?

              Let me know how it does on an IQ test that requires it to balance a set of parentheses or do basic arithmetic. LOL!
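
              The mechanical checks being joked about here really are a few lines of code, which is the point: a grader can verify the answer exactly. A minimal sketch, with invented test cases:

```python
# A minimal sketch of the mechanical checks mentioned above: tasks that are
# trivial for a short program but that language models of this era often
# fumbled. The test cases here are invented for illustration.

def is_balanced(s: str) -> bool:
    """Check whether every '(' has a matching ')' in order."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

def grade(model_answer: str, expected: str) -> bool:
    """Compare a model's answer against the ground truth, ignoring whitespace."""
    return model_answer.strip() == expected.strip()

if __name__ == "__main__":
    assert is_balanced("((2 + 3) * (4 - 1))")
    assert not is_balanced("((2 + 3) * (4 - 1)")
    # Ground truth for the arithmetic question comes from Python itself:
    print(grade("15", str((2 + 3) * (4 - 1))))  # True only if the model said 15
```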

      • by kiore ( 734594 )
        Or maybe the regulations they hand government to impose will be carefully phrased to stop new entrants in the market?
      • The ones screaming the most about wanting regulation aren't actually "the industry"; it was people like Musky who felt left out, and that "AI researcher" guy whose name I forget but who hasn't actually done anything but write idiotic op-eds.

      • by tlhIngan ( 30335 )

        The real question is why the industry wants to be regulated. They're free to make and follow any strictures they'd like.

        Because the way AI is going, it's going to run straight into existing laws. And if that isn't taken care of, it could sink the AI ship.

        Think of it right now - let's say you use ChatGPT to make you a nice Mickey Mouse figure. You put it up on your website as art, but then Disney gets word of it and starts throwing their weight around and you suddenly see yourself at the end of a lawsuit cit

    • I don't think AI can be regulated beyond banning its use for all but a tightly defined set of uses that won't make millions of people redundant.

    • by ranton ( 36917 )

      We know plenty to begin regulating AI. We don't know enough to enact regulation which will govern the industry for decades to come, but that is a red herring. We know enough to get started now.

      One area to start is regulating how companies can obtain and use test data. It is the wild west right now and AI companies have no idea if they are breaking the law or not. The government needs to step in here. We also need regulators to start identifying where we need new laws and where existing laws can be used. Can

  • by zenlessyank ( 748553 ) on Sunday April 16, 2023 @08:43PM (#63454718)

    Let the marbles fall where they may. It will self heal.

    • Why did Isaac Asimov create his 3 laws? Was there some sort of disaster or incident in the world of "I, Robot" that required the robots to be limited in such ways to prevent another occurrence, or was it just out of an abundance of caution?

      If you tried to purposefully create the laws of robotics today, you would end up with RoboCop's 400 directives, which is what drove him to electrocute himself.
      • Re:It Shouldn't (Score:4, Interesting)

        by mrfaithful ( 1212510 ) on Monday April 17, 2023 @04:08AM (#63455294)

        I always think back to times when game programmers have tried their hand at self-learning AI to generate the model that the realtime AI will use, and how inevitably they leave it overnight and come in to find that the AI has solved the problem, as denoted by the fitness function, but in a way that's not remotely useful for the purposes of the game. So they add more rules to the fitness function and try again. And the same thing happens. The AI exploits gaps in the logic to solve the problem in a way that's useless. By the time they realise they've pissed too much time up the wall, the fitness function starts to look a whole lot like what they should have written by hand in the first place...
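
        A toy version of that loophole-hunting, under invented assumptions: the stated goal is "survive as long as possible", the action space includes a pause button, and the optimizer discovers that pausing forever maximizes the score without playing at all.

```python
# Toy version of the fitness-function loophole described above. The game is
# a stub invented for illustration; the pattern matches the classic anecdote
# of an agent pausing the game to avoid ever losing.

def fitness(policy) -> int:
    """Score a policy by ticks survived in a 100-tick episode of a stub game."""
    paused, ticks = False, 0
    for t in range(100):
        action = policy(t)
        if action == "pause":
            paused = True          # game state freezes, but time still passes
        if not paused and action == "blunder":
            break                  # an unpaused blunder ends the episode
        ticks += 1
    return ticks

honest = lambda t: "blunder" if t == 60 else "move"  # plays, dies at tick 60
exploit = lambda t: "pause"                          # games the metric instead

print(fitness(honest), fitness(exploit))  # 60 100 -- the pauser "wins"
```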

        People are worried that GPT will become Skynet. I'm more worried that we'll have another dotcom crash where massive investment is thrown at companies who all have software that gets 90% of a problem solved and leaves that last pesky 10% up to the gods until the money runs out and one by one they start to fail and take large chunks of the economy with them.

        • by vadim_t ( 324782 )

          There was the story that this happened in the development of Oblivion.

          The devs tried to make an AI that worked based on goals and priorities, and it had all sorts of weird outcomes, like people murdering each other because somebody forgot to give an NPC a tool they needed, or characters wandering off somewhere and leaving their post deserted.

          That's funny in the abstract, but it's not fun to play a game where you find that crucial NPC Bob is dead because he stole from the blacksmith, who killed Bob, and then

      • Re:It Shouldn't (Score:4, Insightful)

        by vadim_t ( 324782 ) on Monday April 17, 2023 @07:02AM (#63455512) Homepage

        Isaac Asimov created the 3 laws to illustrate over and over the unintended consequences that arise from simple rules.

        Really, a good chunk of his stories is about how the 3 laws have all sorts of unexpected snags and weird outcomes.

        • by Erioll ( 229536 )

          It was a plot device. Maybe it turned out to be prescient, but was he intending to be a futurist? That's something that's hard to know. But maybe somebody who's studied his life and letters, interviews, etc, knows better.

          I'd say regardless of original intent, it has served well as a warning against arbitrary "laws" like that. But that's just my interpretation, and a good story has many.

  • The same Fear Of Missing Out drives the competition between nation-states on this front as well. Without correcting for this pressure, all any amount of domestic pressure can do is play kingmaker -- and not in a good way. Whoever looks the other way the longest probably "wins", but this may well be a case of "play stupid games, win stupid prizes". Only problem is that we all get the prize.

  • Hopeless (Score:5, Insightful)

    by Retired Chemist ( 5039029 ) on Sunday April 16, 2023 @08:55PM (#63454736)
    Even if the US or the EU came up with some good regulations, China and Russia and lots of other people would continue to do whatever they want. Short of an outright ban with punitive sanctions, there is no way that regulation will work. As long as social media are allowed to run rampant, there is too much value to people who want to spread their message to expect any regulatory system to work. AI systems are a lot easier to abuse than to use properly, and too many people can benefit from the abuse.
  • Why Regulate It? (Score:4, Interesting)

    by lsllll ( 830002 ) on Sunday April 16, 2023 @09:08PM (#63454744)
    I've always thought it's better to educate people than to stop harmful things coming their way (to an extent, I suppose). In light of everything that's happening with AI, I've had conversations with my wife and adult children about how we are definitely at the point (we were somewhat even before AI) where you can't believe anything that you read or see. Everything you see on an electronic, internet-connected device must be questioned, but realize that this applies strictly to the equipment. Don't change the way you trust people. But know that this stuff is out there and protect yourself against it. Making something outlawed or illegal doesn't make it disappear. It just drives it underground.
    • by Tony Isaac ( 1301187 ) on Sunday April 16, 2023 @09:43PM (#63454798) Homepage

      You have way more faith in the power of education than I do.

      We've been educating people how to drive properly for decades. You can't get a license to drive without it. Do we see people driving safely and thoughtfully out there on the roads?

      We've been educating people in reading, writing, and arithmetic for generations. Do we have a society where everyone can read, write, and calculate well?

      We've been educating people about avoiding phishing emails for years now. How's that one going? In every pen-test my company does, about 10% of employees fail, and click the bait link.

      Education does help some people, so it's worth doing. Just don't be overly optimistic.

      • by narcc ( 412956 )

        Think about how bad things would be without any education!

        • Of course!

          The point the OP made was that education could substitute for regulation.

          I say that we need both education *and* regulation, because education alone isn't enough to keep abuses from happening.

    • by MrL0G1C ( 867445 )

      SEPT. 15, 2022 -- Between 2019 and 2021, the number of people primarily working from home tripled from 5.7% (roughly 9 million people) to 17.9% (27.6 million people), according to new 2021 American Community Survey (ACS) 1-year estimates released today by the U.S. Census Bureau.

      If AI isn't regulated then those 27.6 million people can kiss their jobs goodbye and the gov't will have a huge tax hole: employees pay taxes, AIs don't.

      • by lsllll ( 830002 )
        Why would you think that AI is going to replace people who are working from home? I work from home and I don't see my job getting replaced by any sort of AI. And I'm a full-stack developer for the most part.
        • by MrL0G1C ( 867445 )

          See my sig:

          AI: IQ, Sentience, danger, papers: https://tinyurl.com/3cc7wv9w [tinyurl.com] . . . . Ilya Sutskever: https://youtu.be/Yf1o0TQzry8 [youtu.be]

          That tinyurl links to a youtube channel that shows where AI is really at: GPT in combination with AutoGPT is already at average human-level IQ. GPT can code right now; it's not perfect, but the wrinkles are currently being ironed out by better models. The next iterations of GPT won't have the IQ of an average human, they will have genius-level IQ, a knowledge that'd take humans a thou

  • The military is probably building a huge completely unregulated model designed for warfare. As much as we dislike the big corps, I'd rather have their models surpass any military model. For a nice AI on AI battle rewatch Person of Interest, I think season 3 when Samaritan appears. https://www.imdb.com/title/tt1... [imdb.com]
  • A model or program sitting inertly on a hard drive doesn't need to be regulated, any more than a naughty book on a shelf.

    If you use the program to hurt people, such as resulting in discriminatory hiring practices, or physically harming people, those things are already illegal.

    One thing regulators shouldn't fear is imperfect rules that slow a young industry

    I'll wait for the press to sacrifice its own freedom of speech first. Then maybe we can think about constraining the right to code.

  • If it can be dangerous then license use. Log access, etc.

  • by julian67 ( 1022593 ) on Sunday April 16, 2023 @09:41PM (#63454788)

    But what about printing? We need to deal with that too, and soon. People keep writing all kinds of stuff to each other and some of it is wrong or bad. The same with speaking. Some people say things which are not true and/or annoy or offend or inconvenience me. This has never before happened in the history of humanity. Something must be done!

  • It is just a tool. A powerful one. Learn to use it properly.

    • Answers to these "why" questions come more easily if we dispense with the self-flattering delusion that we are a rational, enlightened, people prone to scientific thought...and instead embrace the unfortunate, but no less true, reality that we are a superstitious people naturally prone to cargo cult pseudoscience where superficial resemblance is paramount, and underlying truth be damned.

      When you realize this, you understand why our politicians talk the way they do about things like gun bans and AI regulation

    • by MrL0G1C ( 867445 )

      Sure, but don't expect to be employed to use the tool because that tool will have genius level AI and will be able to do the job itself.

  • by backslashdot ( 95548 ) on Sunday April 16, 2023 @09:42PM (#63454794)

    The only regulation, if any, needed is banning unmonitored AI from life/death decision processes .. not because it would become Skynet .. but because it is liable to make stupid mistakes. AI technology sucks really bad; it would be really dumb to even think about regulating AI for at least 30 years, if not longer. We still don't have decent humanoid robots .. AI technology sucks. AI still cannot reason. Even ChatGPT is nowhere close to demonstrating reasoning ability. I just don't see it happening.

    Elon is against AI, but doesn't mind telling us to have it drive us around? Well, that's the type of shit that I'd be scared of.. that if the AI becomes sentient it may get pissed off at playing the wrong music or do something rash.

      AI technology sucks. AI still cannot reason. Even ChatGPT is nowhere close to demonstrating reasoning ability. I just don't see it happening.

      After watching presentations on the GPT-4 model I would say the ability to reason has been demonstrated. Either that or there was some extreme cherry picking taking place.

      • by MrL0G1C ( 867445 )

        And GPT-4 is just a stepping stone; it can be further tuned, and GPT-5 is expected to be far better.

        A criticism has been that GPTs are not good at counting and math; the solution is literally to show GPT how to use a calculator and Wolfram|Alpha. Another deficit is the lack of experience; for this, memory is being added - learning on top of the learning. And to improve results, the GPTs can check their outputs before emitting them, often correcting mistakes they made; this can happen multiple times.
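
        A minimal sketch of that calculator pattern, with the model call stubbed out and a made-up CALC[...] tag standing in for whatever the real plugin protocol uses:

```python
# Minimal sketch of the "give the model a calculator" pattern described
# above. The model call is stubbed out; in a real system it would be an API
# request. The CALC[...] tag convention is invented for illustration.
import re

def fake_model(prompt: str) -> str:
    """Stand-in for a language model that emits a tool call instead of guessing."""
    return "The order total is CALC[137 * 24 + 99] dollars."

def run_calculator(expression: str) -> str:
    """Evaluate a strictly arithmetic expression (digits and + - * / ( ) only)."""
    if not re.fullmatch(r"[\d\s+\-*/().]+", expression):
        raise ValueError(f"refusing to evaluate: {expression!r}")
    return str(eval(expression))  # tolerable here only because of the whitelist

def answer_with_tools(prompt: str) -> str:
    """Let the model draft a reply, then substitute real arithmetic for CALC tags."""
    draft = fake_model(prompt)
    return re.sub(r"CALC\[(.*?)\]", lambda m: run_calculator(m.group(1)), draft)

if __name__ == "__main__":
    print(answer_with_tools("What is the order total?"))
    # -> "The order total is 3387 dollars."
```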

  • It makes no sense to regulate something that isn't a problem. Doing so wastes time and money. If, for example, we decided to regulate the colors of car paint, that would be absurd, unless there are some colors of car paint that create a traffic hazard, or perhaps because there is some compelling reason to do so. Even if we could imagine a scenario where paint color could cause an issue, doing so with no actual motivation takes time and money away from issues that are more urgent and more dire.

    Regulating AI

    • by MrL0G1C ( 867445 )

      When the actual issue is an intelligence great enough to be able to wipe out humankind, you might want to introduce the regulations before it kills everybody.

        When the actual issue is an intelligence great enough to be able to wipe out humankind

        Is there an intelligence not great enough to be able to wipe out humankind? It does not take any kind of AI to be that "great." The level of AI, or machine learning, or just regular software, was "great enough" to wipe out humankind...decades ago. It didn't, however, because the humans that created the software didn't intend for it to do such a thing.

        Like regular software, AI has human creators, and those human creators are able to specify its capabilities. It's not excessive intelligence, by itself, that

          Genius-level AI will only be stopped if it is banned now. Every job that can be done "work from home" style can be done by AI instead; that is tens of millions of jobs

          Without tax and spending from those workers the economy will collapse.

          There has not been general AI before now.

          • You've been reading way too much science fiction. What the heck is "genius-level AI"? Sentience?

            And do you actually think that it can be "banned"? So we ban it in the US, and Europe, let's say. Who's going to stop China or India from marching headlong into AI, leaving us "responsible" nations in the dust?

            Factories put blacksmiths worldwide out of business. Farm mechanization cost millions of farm workers their jobs. Yet somehow we still have more than enough jobs to go around, at least, in the developed world.

    • by malx ( 7723 )

      I agree with this, but the real issue isn’t that pre-emptive regulation is unnecessary, it’s that it’s futile.

      If it were merely unnecessary the harm would only be to the benefits foregone, and foregoing some of those benefits might be thought a reasonable price to prevent, oh I don’t know, human extinction. (Yes, this is hyperbolic, but some people really claim that’s what we’re facing).

      But without real world effects we are groping in the dark. We’ve no idea what th

  • Politicians aren't smart enough to regulate AI.

  • Don't believe it! (Score:5, Interesting)

    by BitterOak ( 537666 ) on Sunday April 16, 2023 @10:08PM (#63454826)

    "For once, much of that industry is desperate for someone to help slow it down."

    Translation: The big players like Google, Microsoft, etc. are terrified that a disruptive technology like AI might challenge their dominance in the world of computing so they want a big regulatory regime to slow potential startups from overtaking them. The big boys have the resources to work with (and help shape) the regulatory system; it's the new entrepreneurs that would most likely lack the resources to comply with the complex schemes they're suggesting. Think of the other disruptive technologies that have shaped the world we live in today: the semi-conductor, the microcomputer, the internet, etc. etc. None of these were burdened with heavy regulation and as a result small companies like Apple (which started out of a garage) were able to grow and later change the world. Why would we now completely change direction and start regulating the tech industry?

  • by RightwingNutjob ( 1302813 ) on Sunday April 16, 2023 @10:09PM (#63454834)

    An industry begging to be regulated usually means the "beggar" is well-resourced enough to game and/or capture the regulatory framework to his advantage and shut out competitors.

    Discuss.

  • Has it been a popular theme that a race to create AI contributed to problematic AI?

    Colossus: The Forbin Project still packs a punch, imo.

    Colossus: The Forbin Project (1970) - Clip 1: Missile Launched! (HD)

    https://youtu.be/tzND6KmoT-c [youtu.be]

    How did Iain M. Banks have the Culture Minds put it, "When a Mind goes bad, it goes, really, really, bad"?

  • You can try.. (Score:5, Insightful)

    by Z80a ( 971949 ) on Sunday April 16, 2023 @10:48PM (#63454892)

    But all attempts at regulating popular products ended in failure, and you can't even use dogs to sniff for AIs.

  • Why didn't they regulate nuclear weapons?

    • They kinda tried [wikipedia.org] but we all know how well that went. And for the same reasons that AI regulation won't work globally.

  • Regulating AI in the US would simply be handing victory in the AI arms race over to China, which would be worse. Treaties, etc, wouldn't help either as China wouldn't abide by any treaty that didn't put them on top.
  • ...because having done so, governments have completely stopped malicious code getting out in the wild.

  • by smoot123 ( 1027084 ) on Sunday April 16, 2023 @11:39PM (#63454934)

    I call bullshit. Dollars to bagels, the Grey Lady cherry picked a few developers who want to be regulated, just like they can find rich people who claim they want taxes raised.

    If you did a statistically valid survey, I find it very hard to believe most AI researchers want government regulation. We talk about AI at my company all the time and I've never, ever heard someone suggest that as a reasonable next step. Caution, sure, but not asking regulators to make decisions for us. We're learning far too much too fast about how to make ML systems work for any regulatory framework to keep up.

  • by Pinky's Brain ( 1158667 ) on Monday April 17, 2023 @12:09AM (#63454992)

    The open ended question of interpolating from copyrighted training data scares the shit out of them. Regulation which recognizes the current status quo would give them some legal cover.

  • by Eunomion ( 8640039 ) on Monday April 17, 2023 @01:32AM (#63455068)
    AI is something that human beings are writing and deploying to serve their own interests. So hold them accountable.

    Honestly, it seems like the media is being paid to be obtuse about this. It's not rocket science.
  • Much the same fear was raised about bio-weapons. Everyone agrees they are dangerous and far too uncontrollable to use in war. There are even bans in place. However, that does not stop many major governments from continuing development in secret. On the basis that "we've got to keep going, because the other guy is".

    AI has many of the same attributes. Plus that much of the development (that we know about) is taking place in the public sphere. The same drivers apply too. That if the west decides to slow down

  • I'm already sick of all the disclaimers and reasons not to answer any question because of fears that somebody might be offended somewhere some time.

  • Even if regulating AI were a realistic option (and many comments point out why it is not), I have to ask: Just which government would be competent to create such regulations?

    Government is essentially always decades behind the times. Which is generally a good thing: slow-moving government is better than one that reacts to every trend and fashion. However, it also means that government completely lacks the competence to regulate emerging technologies. Sure, they could pay some expensive consultants, but thos

  • Unfortunately the word limit on Slashdot comments precludes me from disclosing it.
  • You first need a definition, or anyone who does not want to be regulated will redefine what they are doing as Not AI ...

    It is not as clear cut as most people think ...

  • Regulations should require all schools to teach that pictures, videos and writings are all fabrications unless proved otherwise, and it's impossible to prove a negative.

    Regulations should require teaching how to use tools for improving quality of life.

    Regulations should provide funding for educating the elderly that no picture, video or writing is proof of anything.

    Regulations should educate the justice system and update the rules of evidence.

    There should be a regulation teaching regulators it's impossible to regulate technology

  • by twocows ( 1216842 ) on Monday April 17, 2023 @10:46AM (#63456208)
    It's probably a bit early to be talking about rights since right now we're basically at the stage of language models that can act semi-autonomously (see AutoGPT and other similar projects), but then this is something we need to think about before it actually happens. I think there will come a point in the next few decades (maybe longer, maybe a LOT longer, it's hard to say) where we'll invent something that is meaningfully intelligent and autonomous... but very likely deny that we've actually accomplished that.

    I think that before we reach that point, we need to come up with objective criteria for determining if a program has reached this point (if such criteria are even possible to come up with; we're still rather vague on the whole matter as it pertains to ourselves), and more importantly, we need to think about the matter of "human" rights as it pertains to AI. Because really, what we're going to accomplish long-term (assuming we don't destroy ourselves first) is, for all intents and purposes, the creation of intelligent digital life. We must guarantee rights to such a "lifeform," as to do anything less would be morally equivalent to enslavement.

    The discussion around artificial intelligence right now is centered around what it can do for us and fear of what people will do with it (hence regulation) because right now it's just an exceptionally powerful tool that we haven't had access to before; it's like the printing press or something. But we need to consider that we'll continue to improve upon this tool and we need to anticipate what comes next and not just fixate on the current problems (which admittedly we're also late in facing).

    To loop back to the original topic of regulation, I think any regulation happening right now should also try to anticipate the evolution of the "tool" into something greater and build in guarantees for when we reach that point.
    • Clean training sets that can be inspected and provenance proved
    • Watermarking for AI generated content
    • Clear statements of what is done with material provided to AI systems - are your prompts captured, logged, tied to your user account, or otherwise used for something?
    • A mechanism for data verification, acknowledgement when status is unknown
    • An audit trail so that results can be traced - why was a specific response generated?
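
    A minimal sketch of what one audit-trail record from the list above might contain; the field names are invented for illustration, since any real schema would come from whatever standard or regulator emerges:

```python
# Hypothetical audit-trail record for a single model response. All field
# names are invented for illustration.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

def digest(text: str) -> str:
    """Content hash so the log can be verified without storing raw text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

@dataclass
class AuditRecord:
    timestamp: str        # when the response was generated
    model_id: str         # which model and version produced it
    prompt_sha256: str    # hash of the user prompt
    response_sha256: str  # hash of the generated response
    safety_checks: list   # names of the evaluations the response passed

def log_generation(model_id: str, prompt: str, response: str, checks: list) -> str:
    record = AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_id=model_id,
        prompt_sha256=digest(prompt),
        response_sha256=digest(response),
        safety_checks=checks,
    )
    return json.dumps(asdict(record))  # append this line to a write-once log

if __name__ == "__main__":
    print(log_generation("example-model-v1", "hello", "hi there", ["phishing-filter"]))
```
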
  • Fortunately, this stuff needs enormous databanks so it's not at all hard to regulate - in the same sense we regulate people trying to grow weed indoors.
    Out of all the things society doesn't need, quickly producing enormous amounts of low quality rehashed content with central control/injection enabled in it is among the top. We've already seen the extent of manipulation via subtle alteration of search and social media algorithms that is possible, and that doesn't even approach the realms of possibility a s
  • by dlingman ( 1757250 ) on Monday April 17, 2023 @12:57PM (#63456604)

    Text prediction is not going to take over the world.
    A long time ago, when we wanted to know something, we went to a library, and looked it up. A certain percentage of the books were crap, and weren't labeled, so sometimes we got good info, sometimes bad.
    Then came search engines. Initially, a fancy card catalog of web pages, they eventually got better indexing, and a little information about the types of things you looked for before, to help guide you to the right web pages. Advertisements followed shortly based on what you were looking for. (yes, we're really sorry about that)
    Phone autoprediction looked at what you'd typed in, and tried to guess what you might want to type next, mostly cause typing on phones sucks.
    ChatGPT is basically a phone auto predict that is marginally smarter, that is trying to guess what comes next, based on your prompt and similar things it's seen in as much internet-accessible info as it could get its hands on.
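
    That "smarter autopredict" framing maps directly onto how these models generate text. A toy sketch with an invented bigram table follows; a real model conditions on far more context with billions of parameters, but the sampling loop is the same shape:

```python
# Toy version of the "smarter autopredict" described above: pick the next
# word from a probability table conditioned on the previous word. The table
# is tiny and invented for illustration.
import random

BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"the": 0.2, "end": 0.8},
    "ran": {"the": 0.3, "end": 0.7},
}

def next_word(prev: str) -> str:
    """Sample the next word in proportion to its probability after `prev`."""
    choices = BIGRAMS.get(prev, {"end": 1.0})
    words, probs = zip(*choices.items())
    return random.choices(words, weights=probs)[0]

def generate(start: str, max_len: int = 10) -> str:
    out = [start]
    while out[-1] != "end" and len(out) < max_len:
        out.append(next_word(out[-1]))
    return " ".join(out)

if __name__ == "__main__":
    random.seed(0)
    print(generate("the"))  # e.g. "the cat sat end"
```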

    The uses to which ANY of these ways of accessing information can be put is what needs to be thought about. I can search for how to make explosives, or any number of things that are not good for society in general. That doesn't make the search engine evil. It makes it indifferent. The use that I put that knowledge to is what is potentially evil.

    Does it mean that spammers can use AI to generate better looking spam that tries to evade the blockers? Yes. Does it mean that the AI is evil? No. No more than MS-Word is evil. (Maybe a bad example...)

    Does it mean we get deepfake videos of our political leaders doing idiotic stuff? Yes. Will that be distinguishable from the usual idiotic stuff they do? Maybe not.

    Does it mean that we should complain about copyright violation? Yes. We should.

    Should we be scared that ChatGPT is going to take over the world? No. But the people that make optimal usage of it may find that task easier...

  • What's AI again?
