
AI Generated Content Should Be Labelled, EU Commissioner Jourova Says (reuters.com) 45

Companies deploying generative AI tools such as ChatGPT and Bard with the potential to generate disinformation should label such content as part of their efforts to combat fake news, European Commission deputy head Vera Jourova said on Monday. From a report: Unveiled late last year, Microsoft-backed OpenAI's ChatGPT has become the fastest-growing consumer application in history and set off a race among tech companies to bring generative AI products to market. Concerns however are mounting about potential abuse of the technology and the possibility that bad actors and even governments may use it to produce far more disinformation than before.

"Signatories who integrate generative AI into their services like Bingchat for Microsoft, Bard for Google should build in necessary safeguards that these services cannot be used by malicious actors to generate disinformation," Jourova told a press conference. "Signatories who have services with a potential to disseminate AI generated disinformation should in turn put in place technology to recognise such content and clearly label this to users," she said. Companies such as Google, Microsoft and Meta Platforms that have signed up to the EU Code of Practice to tackle disinformation should report on safeguards put in place to tackle this in July, Jourova said.


Comments Filter:
  • We already have fake news from human-generated sources. Labeling ChatGPT as potentially fake doesn't make non-ChatGPT information any less fake. You're just setting it up so people implicitly trust human-generated news more, despite the fact that it's not any more trustworthy, because again, humans lie.

    The solution to fake news is simple: multiple independent sources with fact-checking and evidence. Label anything that isn't that as "potentially fake".

    • Re:Wrong way (Score:5, Insightful)

      by Tony Isaac ( 1301187 ) on Monday June 05, 2023 @01:17PM (#63577483) Homepage

Maybe if you're talking about *text* news. But it could be a good thing to have ChatGPT images and videos watermarked as such. At least then it would limit the damage caused by fake news text when it relies on photographic or video "proof" that the nutjob pontificator is using to support his claims.

      • by ranton ( 36917 )

        But it could be a good thing to have ChatGPT images and videos watermarked as such.

That sounds simple enough, but what about other generative AI tools? What about open source tools? Nothing stops a bad actor from bypassing any of these rules. And what about when people start using the new Photoshop features to make image editing easier with generative AI? Do all of those need to be labelled? Soon nearly all image and video editing is likely to make some use of generative AI.

        • Re:Wrong way (Score:4, Interesting)

          by test321 ( 8891681 ) on Monday June 05, 2023 @02:43PM (#63577719)

It can't cover every situation, but the proposal is quite reasonable. There is a push for all those companies to provide detection tools. The proposal means that Facebook/... will have to scan content with the OpenAI tool that detects ChatGPT text, the Adobe tool that detects Photoshop-generated images, etc. (or develop their own). It won't catch every case, but it will cover the low-hanging fruit and reduce its nefarious influence. It's like a spam detector playing cat and mouse with the originators: not perfect, but it's what we have.
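The scanning pipeline described here could be sketched roughly like this. Every detector name, scoring function, and threshold below is hypothetical (standing in for whatever vendor classifiers a platform would actually plug in), not a real API:

```python
# Minimal sketch of a multi-detector scanning step: run incoming content
# through several classifiers and label it if any detector is confident
# enough. All names and scores here are illustrative placeholders.

def detect_ai_generated(content, detectors, threshold=0.9):
    """Return the names of detectors that flag `content` as AI-generated.

    `detectors` maps a detector name to a function returning a
    confidence score in [0, 1] that the content is machine-generated.
    """
    return [name for name, score in detectors.items()
            if score(content) >= threshold]

# Toy stand-ins for vendor-provided classifiers.
detectors = {
    "text-detector": lambda c: 0.95 if "as an ai language model" in c.lower() else 0.1,
    "image-detector": lambda c: 0.0,  # no image payload in this toy example
}

flagged = detect_ai_generated("As an AI language model, I cannot...", detectors)
print(flagged)  # ['text-detector']
```

The cat-and-mouse part is that both the detectors and the generators keep changing, so in practice the detector set and thresholds would need constant retuning, exactly as with spam filters.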

          • by ranton ( 36917 )

            Most fakers want quick and easy. If they have to actually WORK for their forgeries, the actors would be limited mostly to deep-pocketed governments.

This assumes most people using these tools will be doing so to forge artwork, which is very unlikely to be the case. For the most part it will be used to increase productivity on content creation tasks. If Photoshop can make it easier to remove tattoos or change the color of a model's hair using generative AI, that shouldn't then mean the newspaper ad utilizing this technique needs some kind of watermark. We didn't do that when these tools started allowing artists to touch up photographs. We shouldn't do

Just as with any kind of authentication, there is no sure way to catch every case of forgery. But there are ways to make it more difficult. The point of anti-forgery techniques is not to eliminate the possibility of fraud, but to make it harder. Watermarks do that. Open source AI tools won't have the capabilities of the major commercial platforms, not even close. This is because it takes money, and a lot of it, to make and effectively train an AI tool. For this reason, forgers (those who want to produce fa

          • Open source AI tools are currently way more capable than commercial ones, at least in the image generation space. There would have to be a new innovation that can't be replicated easily for that to change. I wouldn't put my money on that.

            And no, there's no reason why I can't generate an image in the super-advanced thing photoshop will have 10 years from now and then remove the watermark in 2023's stable diffusion. It still won't require any artistic skill.
Would that extend to the image manipulation tools that already exist in Photoshop? What happens when Photoshop gets AI image generators built right in? What happens when my camera gets them built right in?

        • Yes, it applies, and Photoshop and camera manufacturers will no doubt be subject to the same regulation, just as printer vendors today are required to include features that prevent making realistic replicas of paper money.

Maybe if you're talking about *text* news. But it could be a good thing to have ChatGPT images and videos watermarked as such. At least then it would limit the damage caused by fake news text when it relies on photographic or video "proof" that the nutjob pontificator is using to support his claims.

        Bad actors...aren't going to label anything.

        In due time, the AI technology will be easy enough to get that you won't need to be a major corporation nor a state actor to have the tech available.

        You get to

        • AI technology, including open source AI, may become cheaper, but the training will remain expensive. The training part will limit the effectiveness of open source AI technology.

          Yes, it is an arms race. Just as printers these days are good enough to create good copies of paper currency, they are also created with anti-forgery technology. Can a bad actor get around this by making their own printer? After all, printers are cheap, right? It's not so easy, and does raise the bar of effort required.

          • AI technology, including open source AI, may become cheaper, but the training will remain expensive. The training part will limit the effectiveness of open source AI technology.

What will stop the home-grown AI person, or a "bad actor" that doesn't care about copyright... from using the whole web as their training data?

The most important aspect of AI training is not the quantity of information. The most important thing is the quality of the feedback. The training process involves feeding data to the system, noting the output, and then providing feedback telling the system whether the output was correct or accurate. That feedback is the most expensive part of AI development, and requires lots of humans. OpenAI alone employs thousands of humans for this task. https://www.semafor.com/articl... [semafor.com]

              That home-grown "bad actor" who

    • "multiple independent sources with fact-checking and evidence"

It's a lost cause. In a world of power struggles there is no way to ensure such sources are independent, and there is no way to ensure they are doing their job honestly unless they were legally compelled to do so, which, even if it were possible, would be such a long and drawn-out process to enforce that its bandwidth would be close to zero compared to how much news is generated. Just look at the Hunter Biden laptop example, whatever you think of

      • In a world of power struggles there is no way to ensure such sources are independent,

        Can we at least know if they are independent from each other?

      • Re: (Score:2, Insightful)

        by peterww ( 6558522 )

        > In a world of power struggles there is no way to ensure such sources are independent

        In the same sense that there's no such thing as truth? You can still get pretty close to it. Perfection is the enemy of the good.

        > there is no way to ensure they are doing their job honestly

        That's why you ask for evidence and multiple independent sources. Journalism 101.

> would be such a long and drawn-out process to enforce that its bandwidth would be close to zero compared to how much news is generated

        Granted, i

        • There was only 1 hard drive, not multiple, and its provenance couldn't be proven even under forensic analysis. The rest of the story was literally just accusations, conspiracies, and reactions.

          The feds got the laptop. They verified the content was real and belonged to Hunter Biden.

    • by Luckyo ( 1726890 )

You unironically described the best addition to Twitter: Community Notes. Except that it's a blacklist-style thing, not a whitelist as you suggest.

  • This is silly. (Score:2, Insightful)

    by CAIMLAS ( 41445 )

    This is silly - it's a bunch of troglodytes trying to pass ultimatums on things they simply don't understand.

    AI image generation is difficult. It is not something an unskilled person can do trivially, and gaining the skills to do it is an extended, drawn out process.

    Sure, someone can use the tools and luck into a good result, but the odds of that are exceedingly slim. Even then, getting a good result requires some degree of "compositing" and other techniques quite similar to traditional video, image, and au

    • Re:This is silly. (Score:5, Informative)

      by Tony Isaac ( 1301187 ) on Monday June 05, 2023 @01:19PM (#63577489) Homepage

It's not such a remote possibility. On May 22, images showing explosions at the Pentagon circulated widely, causing the stock market to dip until people started to realize that the image was fake and likely AI generated. https://www.npr.org/2023/05/22... [npr.org]

A watermark would have helped conclusively prove the images to be fake. In this case, the damage was minimal, but it's easy to imagine scenarios that would cause much more harm.

    • by Okian Warrior ( 537106 ) on Monday June 05, 2023 @01:33PM (#63577521) Homepage Journal

      AI image generation is difficult. It is not something an unskilled person can do trivially, and gaining the skills to do it is an extended, drawn out process.

      Sure, someone can use the tools and luck into a good result, but the odds of that are exceedingly slim. Even then, getting a good result requires some degree of "compositing" and other techniques quite similar to traditional video, image, and audio production and editing.

      There is no "idiot button" which produces anything of value, and honestly... if a single-line rhyme or Haiku can have IP protection, so should the derivative of an LLM prompt. If not, we should seriously just throw out all of IP law, because AI-generated material is at least on par with anything made by Pollock for uniqueness and "fakeness".

      Check out the reddit thread [reddit.com] on people using stable diffusion and then get back to us on how difficult it is.

      Or check out any of the Photoshop ads [youtube.com] that use the new AI features.

      Then get back to us on how difficult it is.

    • I do not consider something that can produce professional quality work in under 160 hours of effort to be "difficult".
      It's even easier for people with any computer and art experience (of which there are millions to tens of millions).

      AI photos and videos are going to be a problem. It breaks one of the easy ways we verified things were real/true.

  • And all porn should be labelled, and we should prevent people from removing those labels.
What if OpenAI writes an article and no one proofs it? What if OpenAI writes an article and someone proofs it? What if they fix typos? What if they rewrite every word in the article? What if the AI writes a thousand-word summary and someone manually condenses it to 50 words? What if a human wrote those thousand words and the AI summarized it? What if someone used the AI autocomplete while typing? What if they used Grammarly? What if they asked OpenAI "give me five synonyms for stupid" but eve
    • All those questions. I don't know how anybody could possibly cover them all with one simple caveat. How could it be done? How? How???

      E-stamped at the bottom of all relevant articles and other stuff: "AI was used in the course of preparing this content for publication."

      Anything else I can help you with?

  • But they won't let the AI copyright or patent its works...

That's not true. At least not in the U.S. A U.S. court has ruled that AI-generated IP can't be patented. Copyright is a whole different ball of wax.

  • Have AI constructed people wear the scarlet letters A and I on their forehead? Kidding!

    Or maybe a variation on what Iain M. Banks imagined for VR worlds, and with AI enhanced imagery there has to be a little button you can mouse over/toggle that highlights what isn't real.

  • by waspleg ( 316038 ) on Monday June 05, 2023 @03:01PM (#63577787) Journal

Even if they pass a law for this, it won't matter. Half (maybe more?) of the world is under authoritarian dictatorships that will do whatever benefits them regardless of the cost.

    This is about to turn into some B sci-fi shit and will definitely be weaponized in short order.

Your CIA equivalent can go after the foreign adversaries with any means they choose. You can't do that to citizens. You have to have laws for that.
    • by AmiMoJo ( 196126 )

      This isn't about trying to control the attacks from outside the EU. Obviously. That would be dumb.

This is about companies operating in the EU using AI to generate articles, reports, or images. There has been talk around similar issues for years, e.g. photoshopping models to attain unrealistic looks that are known to cause mental health problems for children and young adults.

  • Utterances of the EU/US/other bureaucrats/politicians are labelled as such, why shouldn't AI get credit where credit is due?

    Slightly less silly, at what level of human intervention should AI be credited? It's been years, but IIRC, MS-Word gives a green underline for wording it considers ungainly. Does this count as AI?

As I understand it, AI runs with what you give it. What happens if you steer it a lot, say more than once every sentence? And check references!

  • In the near future most word processors and graphics tools will have some kind of AI copilot. So I guess all content in the EU will need to be labelled.

    It makes me wonder who's advising the EU on tech, though. It seems to be a case of policy-by-knee-jerk.

Given the nature of these interactions, all this labeling will have to be voluntary. Bing already labels its output, and legitimate blogs and newspapers could use labels as well. I think there will be exceptions where AI is expected, like smart speakers. But, again, it is all voluntary.

    Those who have something to hide, on the other hand?

Any visible watermark can be distorted or deleted using Photoshop. Invisible ones are, by definition, not visible to end users. If we somehow add this to text generated by GPT, it can be fe
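For what it's worth, the statistical idea behind "invisible" text watermarks proposed in the research literature can be sketched in a few lines. This toy word-level version is my own simplification, not any deployed scheme (real proposals bias the model's token logits during generation):

```python
# Toy illustration of a statistical text watermark: a generator prefers
# words from a pseudo-random "green list" seeded by the previous word,
# and a detector measures how often adjacent word pairs land on the
# green list. No visible mark appears in the text itself.
import hashlib

def is_green(prev_word, word):
    # Roughly half of all words are "green" for any given context,
    # determined by a hash so generator and detector agree.
    h = hashlib.sha256(f"{prev_word}:{word}".encode()).digest()
    return h[0] % 2 == 0

def green_fraction(text):
    """Fraction of adjacent word pairs that are 'green' (in [0, 1])."""
    words = text.split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)
```

Unwatermarked text should hover near 0.5 here, while a generator biased toward green words pushes the fraction well above that, which a detector can flag statistically; the obvious caveat, per the thread, is that paraphrasing or re-editing the text erodes the signal.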

  • Face it: Trump has been getting duped by fake AI for a long time.
    Both he and a significant US population have been severely conned.

    How do we get the truth now?
    When will our legislators see the urgency of the need to get actual truth unquestionably distributed?
