ChatGPT Risks Divide Biden Administration Over AI Rules in EU (bloomberg.com) 36

Biden administration officials are divided over how aggressively new artificial intelligence tools should be regulated -- and their differences are playing out this week in Sweden. From a report: Some White House and Commerce Department officials support the strong measures proposed by the European Union for AI products such as ChatGPT and Dall-E, people involved in the discussions said. Meanwhile, US national security officials and some in the State Department say aggressively regulating this nascent technology will put the nation at a competitive disadvantage, according to the people, who asked not to be identified because the information isn't public. This dissonance has left the US without a coherent response during this week's US-EU Trade and Technology Council gathering in Sweden to the EU's plan to subject generative AI to additional rules. The proposal would force developers of artificial intelligence tools to comply with a host of strong regulations, such as requiring them to document any copyrighted material used to train their products and more closely track how that information is used. National Security Council spokesman Adam Hodge said the Biden administration is working across the government to "advance a cohesive and comprehensive approach to AI-related risks and opportunities."
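
For a concrete sense of what "document any copyrighted material used to train their products" could mean in practice, here is a minimal, hypothetical sketch of a provenance manifest that a training pipeline might append to as data is ingested. The record fields, license labels, and file format are illustrative assumptions, not anything the EU proposal actually specifies:

    import hashlib
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class TrainingDocument:
        """One record in a hypothetical training-data provenance manifest."""
        source_url: str
        license_tag: str  # e.g. "public-domain", "CC-BY-4.0", "copyrighted"
        sha256: str       # content hash, so the exact text used is identifiable later

    def record_document(manifest_path: str, source_url: str, license_tag: str, text: str) -> None:
        """Append a provenance record before the text enters the training corpus."""
        doc = TrainingDocument(
            source_url=source_url,
            license_tag=license_tag,
            sha256=hashlib.sha256(text.encode("utf-8")).hexdigest(),
        )
        with open(manifest_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(doc)) + "\n")

    # Example: log a copyrighted article before adding it to the corpus.
    record_document("manifest.jsonl", "https://example.com/article", "copyrighted", "full article text ...")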
  • It’s funny how AI has rapidly developed since 2012. It went from being busy work for doctoral students to a commercially viable technology with its own subculture of tech proponents. Thousands of engineers and scientists are coming out of school with some experience in AI, and many engineers who are not formally trained in AI are exploring it for their projects. And it’s easy for them to do that, because many of the landmark papers are open access, and there is plenty of open-source code and even openly released models and datasets.
    • by AmiMoJo ( 196126 )

      It's sad to hear the language they are using to justify this. "Competitive disadvantage" ignores the huge gains from properly regulating AI. The problem is those gains are for you, not for big corporations with lobbying power.

      • by WDot ( 1286728 )
        Without intending to start a fight, honestly what are the “huge gains” to be had from regulating AI? From my perspective, regulation in the US, whether well-intended or once effective, seems to generally be a method for big businesses to consolidate their lead by raising the baseline cost of their industry. Maybe this makes (or made) sense for things like food, medicine, and utilities, but it seems to make less sense for “software in general,” and AI is just a particular method of making software.
        • by ranton ( 36917 )

          Without intending to start a fight, honestly what are the “huge gains” to be had from regulating AI? From my perspective, regulation in the US, whether well-intended or once effective, seems to generally be a method for big businesses to consolidate their lead by raising the baseline cost of their industry.

          Consumer and employee protection are two areas where regulation has been very beneficial, and those are directly related to how AI regulation will benefit regular people. All (or at least most) of your concerns about regulation are valid, and regulatory capture is especially dangerous, but claiming regulation doesn't help average people is simply ridiculous. Of course you can find plenty of times where regulation has caused problems, but a modern world without regulation would be dystopian.

        • I'm also curious as to viable proposals as to how you would actually go about regulating it. It would have to be done on a world scale to be effective. The cat's out of the bag. It's not like nuclear weapons where you need hard to acquire equipment and materials.
    • by gweihir ( 88907 )

      Thousands of engineers and scientists are coming out of school with some experience in AI

      Yep. My IT Security students learned three weeks ago that ChatGPT is completely useless at producing even a pretty simple firewall configuration. I would say it was a valuable lesson.

  • by bradley13 ( 1118935 ) on Wednesday May 31, 2023 @10:37AM (#63564161) Homepage

    The thing is: US and EU politicians are profoundly unqualified to even discuss these issues. The people they have advising them are not much better. If they try to tailor laws and regulations to this specific technology, they will screw it up.

    What they can do - what they should do - is write laws and regulations that address the actual important issues. The EU did this with their privacy laws, for example: you cannot deal in people's private data. They didn't say "you can't use JavaScript to deal in private data" - the laws are not tied to a specific technology.

    Ignore the "AI" part, ignore ChatGPT and Bard. What *effects* are important? What rights need to be protected? Because those effects and those rights need to be protected *anyway*, regardless of what technology you are protecting them from.

    • by sinij ( 911942 )
      AI has specific issues that are not covered by existing laws. For example: who is responsible for libel generated by an LLM, whether there are limitations on publicly accessible data when training LLMs, and who holds authorship/copyright to LLM outputs. It is unwise to leave this to the courts to decide.
      • I disagree: this is not AI-specific. If any piece of software does something illegal, who is responsible? You buy tax software, and it miscalculates your taxes. Your car's navigation software sends you somewhere very, very wrong. You use a web scraper and a bug in the software DoSes the target site. Windows installs an update and reboots your laptop in the middle of an important presentation.

        Software is a tool. Granted, a complex tool, but still a tool. Laws should address the effects, not the specifics of the technology.

      • by wed128 ( 722152 )

        The *publisher* is responsible for the libel. If I generate an essay via LLM and then put it on my website, I'm liable for its content.

        • by sinij ( 911942 )
          It is not that simple. What if AI generates libel in response to a query? [washingtonpost.com]
          • by wed128 ( 722152 )

            What if I type libel into Microsoft Word? It's a tool; it should be treated as such.

            • by sinij ( 911942 )
              It is very obviously not the same thing, as an LLM sits somewhere between a search engine and a news publisher. Closest would be auto-fill suggestions by Google, and even then it is not exactly the same.
              • by wed128 ( 722152 )

                That perception, right there, is exactly the problem. It's not a search engine, and it's not a news publisher. It's a language model. Given a query, it gives a "statistically likely" response. This response is trained on "this looks like valid language", not "this is truth".

                There is no relationship between GPT responses and the truth. Sometimes, likely responses correlate with the truth, but this is just a side effect of how the training works.

                These AI tools are much closer to "auto-fill suggestions" than to a news publisher.
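
                A toy illustration of "statistically likely, not true": the sketch below is a minimal bigram model in Python. It is a deliberate simplification (GPT itself uses transformer networks trained on vast corpora), but it shows the core mechanism of continuing text from observed word statistics, with no notion of truth anywhere in the process.

                    import random
                    from collections import defaultdict

                    # "Train" a toy bigram model: record which word follows which.
                    corpus = "the moon is made of rock . the moon is made of cheese .".split()
                    follows = defaultdict(list)
                    for current_word, next_word in zip(corpus, corpus[1:]):
                        follows[current_word].append(next_word)

                    def continue_text(prompt: str, length: int = 4) -> str:
                        """Extend the prompt with statistically likely next words."""
                        words = prompt.split()
                        for _ in range(length):
                            candidates = follows.get(words[-1])
                            if not candidates:
                                break
                            # Sample in proportion to observed frequency: "likely", not "true".
                            words.append(random.choice(candidates))
                        return " ".join(words)

                    print(continue_text("the moon is"))
                    # Prints "the moon is made of rock ." or "the moon is made of cheese ."
                    # with equal probability: the model tracks word statistics, not facts.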

                • by sinij ( 911942 )

                  That perception, right there, is exactly the problem. It's not a search engine, and it's not a news publisher. It's a language model. Given a query, it gives a "statistically likely" response.

                  If you design a device that looks like a stoplight, operates like a stoplight, gets installed in place of a stoplight, and then gets someone killed by giving a 'statistically unlikely' four-way green, you can't blame users for misinterpreting the green light.

    • 1. AI generated content must be labeled as such
      2. Liability for any damage caused by AI falls on the entity deploying AI, not the developer (e.g. a medical help bot says to take *way* too much ibuprofen)

      • 1. AI generated content must be labeled as such
        2. Liability for any damage caused by AI falls on the entity deploying AI, not the developer (e.g. a medical help bot says to take *way* too much ibuprofen)

        1. No, that's the point, what about deceptive content created by other means? "Deceptive or misleading content must be labeled as such", who cares how it was created? You can lie with Photoshop, or hell, multiple exposures on film.
        2. Liability for damage caused by anything should be based on fitness for claimed purpose. If whoever provides it makes claims that whatever it is cannot support, then they should be liable to whatever extent the claims were false.

        There, two reasonable and reasonably simple rules.

        • by JBMcB ( 73720 )

          1. No, that's the point, what about deceptive content created by other means? "Deceptive or misleading content must be labeled as such", who cares how it was created? You can lie with Photoshop, or hell, multiple exposures on film.

          The point is AI-generated content is not provably reliable. Humans aren't reliable either, but someone should be told whom they are interacting with so they can make an educated judgment.

          2. Liability for damage caused by anything should be based on fitness for claimed purpose.

          Again, because AI is not reliable, anyone deploying it should bear the brunt of liability. If a third-party developer is making claims about its reliability, then the entity deploying the AI can sue them for breach of contract, but the deploying entity should not be held blameless. Once it can be scientifically proven that an AI system is reliable, that calculus could change.

          • by ranton ( 36917 )

            The point is AI-generated content is not provably reliable. Humans aren't reliable either, but someone should be told whom they are interacting with so they can make an educated judgment.

            Does the same hold true for something manipulated with Photoshop? It really isn't hard to see why requiring the labeling of AI content is ridiculous.

          • by DarkOx ( 621550 )

            Yes, these are simple rules that should generally apply in all spaces.

            You can make a bot, and you can give it whatever abilities you can design, but you should not be able to disclaim responsibility for what said bot does.

            You can make spoof photos/sounds/etc., but if your intention is to cause someone to act in a way that is harmful or against their interest, that is _fraud_.

            From a regulatory perspective, AI should not be the interesting part. Which is why we should all be very, very skeptical of the large industry players lobbying for AI-specific rules.

        • 1. No, that's the point, what about deceptive content created by other means? "Deceptive or misleading content must be labeled as such", who cares how it was created? You can lie with Photoshop, or hell, multiple exposures on film.

          In general, it is not against the law to lie...protected speech.

          A few exceptions, like when talking to the Feds (always a bad idea...lawyer up if they come calling on you)...and if you're committing fraud or slander (or is it libel, I get confused).

          But just telling lies alone is not illegal.

          • by ranton ( 36917 )

            In general, it is not against the law to lie...protected speech.

            He was probably referring to fraud, where it is illegal to lie to claim a payment or procure property or services (among other things). An AI lying should be fine, as long as it wouldn't be considered fraud.

      • by ranton ( 36917 )

        1. AI generated content must be labeled as such

        That sounds simple and reasonable, but AI has been used to generate digital images for a long time. If I am looking at an advertisement in a magazine where the moon was digitally generated using AI, does that advertisement need some text in the lower right corner of the image saying some of this image was generated with AI? If they remove a tattoo from a model using AI, do we need to be told? I sure hope some banner doesn't need to be put on screen when I'm watching the next MCU blockbuster.

      • 1. AI generated content must be labeled as such 2. Liability for any damage caused by AI falls on the entity deploying AI, not the developer (e.g. a medical help bot says to take *way* too much ibuprofen)

        Great! Now define "AI" for the purposes of enforcing these laws.

    • by ranton ( 36917 )

      This should be the litmus test for any regulations targeting AI. If the law has to specifically target AI, it is a bad law. At best it may be important to clarify how existing laws apply to AI, but new AI-specific legislation should not be created.

      If you are worried about misleading content created by AI, you should be just as worried about misleading content created manually by an artist / photographer / writer / etc.

      If you are worried about copyrights being infringed by AI, you should be just as worried about copyrights being infringed by humans.

    • They didn't say "you can't use JavaScript to deal in private data" - the laws are not tied to a specific technology. Ignore the "AI" part, ignore ChatGPT and Bard. What *effects* are important? What rights need to be protected? Because those effects and those rights need to be protected *anyway*, regardless of what technology you are protecting them from.

      US law is littered with that. Very often it has been a way to score points on being "tough on crime", making things that are already illegal even more illegal. Other times it is because of prosecutors who fear that the courts won't enforce the laws the way they want, so they don't bother bringing a case without a very specific law on the books. In the end we get tons of ultra-specific laws dealing with a specific technology or a specific method of doing things, and then we have the general form that doesn't get used.

      • US law is littered with that. Very often it has been a way to score points on being "tough on crime", making things that are already illegal even more illegal.

        Yep, like the so-called "hate crimes" they try to push....

        If someone is murdered...bad. If they are killed, apparently because they are (insert race or sexuality here)...somehow that is worse?

        Dead is dead...doesn't matter why you killed someone, the result is the crime.

      • by ranton ( 36917 )

        As a simple example, fraud itself is illegal under federal law. Then we have specialized laws on mail fraud, wire fraud, credit card fraud, [...]

        That is a good example, and while I'm sure I would disagree with the creation of some of those laws, it is probably important to make some kinds of fraud more severe than others. Especially for those where financial damages are difficult to discern, like citizenship fraud. And while IANAL, it seems like some of these laws primarily exist to ensure a violation is considered a federal crime. That could potentially be an area where I could see new regulations, if it was necessary to make existing laws enforceable.

    • THIS.

      AI regulation implies that AI systems must behave in a morally acceptable manner.

      Do we really trust politicians to properly define morality today?
    • by hey! ( 33014 )

      No single person or profession has the qualifications to draft laws and regulations for this stuff yet. It's an interdisciplinary problem.

      We went through something very much like this when computer technology supplanted paper record keeping. In 1972 the Nixon Administration assembled an interdisciplinary panel of technical experts, business leaders, social scientists and legal scholars to study the dangers computers posed to privacy. That multi-disciplinary approach clearly worked, because their report laid the groundwork for the Privacy Act of 1974.

    • You can't always see what the effects of having rights are, but that doesn't mean those effects justify taking them away.

      Strenuously agree they aren't qualified to even talk about this subject.
  • by Tokolosh ( 1256448 ) on Wednesday May 31, 2023 @10:56AM (#63564219)

    "The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by an endless series of hobgoblins, most of them imaginary."

  • Whatever rules you put in place will be ignored. Those directly in the line of fire will play nice with the regulators, but behind closed doors they'll quietly experiment so that when the guardrails are dropped they aren't behind.

    Besides, you're only regulating the friendlies. The bad actors are not in the path.

  • Every artist should keep a journal of every artwork, piece of music, turn of language, and copyrighted work they have ever been influenced by, so we can properly tax/compensate the original works when the art sells. You will quickly come to know just how derivative the "human" element is.
