

AI Generated Content Should Be Labelled, EU Commissioner Jourova Says (reuters.com)
Companies deploying generative AI tools such as ChatGPT and Bard with the potential to generate disinformation should label such content as part of their efforts to combat fake news, European Commission deputy head Vera Jourova said on Monday. From a report: Unveiled late last year, Microsoft-backed OpenAI's ChatGPT has become the fastest-growing consumer application in history and set off a race among tech companies to bring generative AI products to market. Concerns, however, are mounting about potential abuse of the technology and the possibility that bad actors, and even governments, may use it to produce far more disinformation than before.
"Signatories who integrate generative AI into their services like Bingchat for Microsoft, Bard for Google should build in necessary safeguards that these services cannot be used by malicious actors to generate disinformation," Jourova told a press conference. "Signatories who have services with a potential to disseminate AI generated disinformation should in turn put in place technology to recognise such content and clearly label this to users," she said. Companies such as Google, Microsoft and Meta Platforms that have signed up to the EU Code of Practice to tackle disinformation should report on safeguards put in place to tackle this in July, Jourova said.
Wrong way (Score:2)
We already have fake news from human-generated sources. Labeling ChatGPT as potentially fake doesn't make non-ChatGPT information any less fake. You're just setting it up so people implicitly trust human-generated news more, despite the fact that it's not any more trustworthy, because again, humans lie.
The solution to fake news is simple: multiple independent sources with fact-checking and evidence. Label anything that isn't that as "potentially fake".
Re:Wrong way (Score:5, Insightful)
Maybe if you're talking about *text* news. But it could be a good thing to have ChatGPT images and videos watermarked as such. At least then this would limit the damage caused by fake news text when it relies on photographic or video "proof" that the nutjob pontificator is using to support his claims.
Re: (Score:3)
But it could be a good thing to have ChatGPT images and videos watermarked as such.
That sounds simple enough, but what about other generative AI tools? What about open source tools? Nothing stops a bad actor from bypassing any of these rules. And what about when people start using the new Photoshop features that make image editing easier with generative AI? Do all those need to be labelled? Soon nearly all image and video editing is likely to make some use of generative AI.
Re:Wrong way (Score:4, Interesting)
It can't cover every situation, but the proposal is quite reasonable. There is a push for all those companies to provide detection tools. The proposal means that Facebook/... will have to scan content with the OpenAI tool that detects ChatGPT text, the Adobe tool that detects Photoshop-generated images, etc. (or develop their own). It won't cover all cases, but it will cover the low-hanging fruit and reduce its nefarious influence. It's like a spam detector playing cat and mouse with spammers. It's not perfect, but it's what we have.
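As a rough sketch of how such a scanning pipeline might look (all detector names and signatures here are hypothetical assumptions; none of these vendor detection APIs are public), a platform could fan content out to whatever detectors are available and label anything that scores above a threshold:

```python
from typing import Callable

# Hypothetical detector type: takes raw content, returns a confidence in [0, 1]
# that the content was machine-generated.
Detector = Callable[[bytes], float]

def label_if_generated(content: bytes,
                       detectors: dict[str, Detector],
                       threshold: float = 0.9) -> list[str]:
    """Run every available detector and return the names of those that flag the content."""
    return [name for name, detect in detectors.items() if detect(content) >= threshold]

# Stand-in detectors for illustration; real ones would call vendor tools.
detectors = {
    "openai_text": lambda b: 0.95,   # pretend this one flags the content
    "adobe_image": lambda b: 0.10,   # pretend this one does not
}
```

The cat-and-mouse dynamic shows up in the threshold: set it too low and you mislabel human work, too high and forgeries slip through, exactly as with spam scoring.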
Re: (Score:2)
Most fakers want quick and easy. If they have to actually WORK for their forgeries, the actors would be limited mostly to deep-pocketed governments.
This assumes most people using these tools will be doing so to forge art work, which is very unlikely to be the case. For the most part it will be used to increase productivity for content creation tasks. If Photoshop can make it easier to remove tattoos or change the color of a model's hair using generative AI, that shouldn't then mean the newspaper ad utilizing this technique needs some kind of watermark. We didn't do that when these tools started allowing artists to touch up photographs. We shouldn't do
Re: (Score:2)
We didn't do that when these tools started allowing artists to touch up photographs.
Though we did: Israel (2012), France (2017, https://www.france24.com/en/20... [france24.com] ), Norway (2021), Australia ( https://theconversation.com/th... [theconversation.com] ), Italy and Spain ( https://www.digitaltrends.com/... [digitaltrends.com] ). It was also proposed in the US (2011): https://www.forbes.com/sites/m... [forbes.com]
Also, Instagram had apparently decided to do it https://screenrant.com/instagr... [screenrant.com]
Re: (Score:2)
Very informative, thanks. I live in the US so I've never seen this, and hopefully few countries go down such a misguided path again. Considering one of the articles showed how the disclaimers didn't even work for their intended purpose, there should be more evidence this time around that it is a very bad idea.
Re: (Score:3)
Just as with any kind of authentication, there is no sure way to catch every case of forgery. But there are ways to make it more difficult. The point of anti-forgery techniques is not to eliminate the possibility of fraud, but to make it harder. Watermarks do that. Open source AI tools won't have the capabilities of the major commercial platforms, not even close. This is because it takes money, and a lot of it, to build and effectively train an AI tool. For this reason, forgers (those who want to produce fa
Re: (Score:1)
And no, there's no reason why I can't generate an image in the super-advanced thing photoshop will have 10 years from now and then remove the watermark in 2023's stable diffusion. It still won't require any artistic skill.
Re: (Score:2)
Would that extend to the image manipulation tools that already exist in photoshop? What happens when photoshop gets AI image generators built right in? What happens when my camera gets them built right in?
Re: (Score:2)
Yes, it applies, and Photoshop and camera manufacturers will no doubt be subject to the same regulation, just as printer vendors today are required to include features that prevent making realistic replicas of paper money.
Re: (Score:2)
Bad actors...aren't going to label anything.
In due time, the AI technology will be easy enough to get that you won't need to be a major corporation or a state actor to have the tech available.
You get to
Re: (Score:2)
AI technology, including open source AI, may become cheaper, but the training will remain expensive. The training part will limit the effectiveness of open source AI technology.
Yes, it is an arms race. Just as printers these days are good enough to create good copies of paper currency, they are also created with anti-forgery technology. Can a bad actor get around this by making their own printer? After all, printers are cheap, right? It's not so easy, and does raise the bar of effort required.
Re: (Score:2)
What will stop the home-grown AI person, or a "bad actor" who doesn't care about copyright, from using the whole web as their training data?
Re: (Score:2)
The most important aspect of AI training is not the quantity of information. The most important thing is the quality of the feedback. The training process involves feeding data to the system, noting the output, and then providing feedback telling the system whether the output was correct or accurate. That feedback is the most expensive part of AI development and requires lots of humans. OpenAI alone employs thousands of humans for this task. https://www.semafor.com/articl... [semafor.com]
That home-grown "bad actor" who
Re: (Score:2)
"multiple independent sources with fact-checking and evidence"
It's a lost cause. In a world of power struggles there is no way to ensure such sources are independent, and there is no way to ensure they are doing their job honestly unless they were legally compelled to do so, which, even if it were possible, would be such a long and drawn-out process to enforce that its bandwidth would be close to zero compared to how much news is generated. Just look at the Hunter Biden laptop example, whatever you think of
Re: (Score:3)
In a world of power struggles there is no way to ensure such sources are independent,
Can we at least know if they are independent from each other?
Re: (Score:2, Insightful)
> In a world of power struggles there is no way to ensure such sources are independent
In the same sense that there's no such thing as truth? You can still get pretty close to it. Perfection is the enemy of the good.
> there is no way to ensure they are doing their job honestly
That's why you ask for evidence and multiple independent sources. Journalism 101.
> would be such a long and drawn out process to enforce that its bandwith would be close to zero compared to how much news are generated
Granted, i
Re: (Score:3)
The feds got the laptop. They verified the content was real and belonged to Hunter Biden.
Re: (Score:2)
You unironically described the best addition to Twitter: Community Notes. Except that it's a blacklist-style thing, not a whitelist as you suggest.
This is silly. (Score:2, Insightful)
This is silly - it's a bunch of troglodytes trying to pass ultimatums on things they simply don't understand.
AI image generation is difficult. It is not something an unskilled person can do trivially, and gaining the skills to do it is an extended, drawn out process.
Sure, someone can use the tools and luck into a good result, but the odds of that are exceedingly slim. Even then, getting a good result requires some degree of "compositing" and other techniques quite similar to traditional video, image, and au
Re:This is silly. (Score:5, Informative)
It's not such a remote possibility. On May 22, images showing explosions at the Pentagon circulated widely, causing the stock market to dip until people started to realize that the image was fake and likely AI generated. https://www.npr.org/2023/05/22... [npr.org]
A watermark would have helped conclusively prove the images to be fake. In this case, the damage was minimal, but it's easy to imagine scenarios that would cause much more harm.
Get back to us when... (Score:4, Informative)
AI image generation is difficult. It is not something an unskilled person can do trivially, and gaining the skills to do it is an extended, drawn out process.
Sure, someone can use the tools and luck into a good result, but the odds of that are exceedingly slim. Even then, getting a good result requires some degree of "compositing" and other techniques quite similar to traditional video, image, and audio production and editing.
There is no "idiot button" which produces anything of value, and honestly... if a single-line rhyme or Haiku can have IP protection, so should the derivative of an LLM prompt. If not, we should seriously just throw out all of IP law, because AI-generated material is at least on par with anything made by Pollock for uniqueness and "fakeness".
Check out the reddit thread [reddit.com] on people using stable diffusion and then get back to us on how difficult it is.
Or check out any of the Photoshop ads [youtube.com] that use the new AI features.
Then get back to us on how difficult it is.
Re: (Score:1)
I do not consider something that can produce professional quality work in under 160 hours of effort to be "difficult".
It's even easier for people with any computer and art experience (of which there are millions to tens of millions).
AI photos and videos are going to be a problem. It breaks one of the easy ways we verified things were real/true.
Re: (Score:2)
Maybe, to prove your content to be authentic, you could hold a contest inviting people to try to prove you wrong, and offering a money reward if successful. Surely that would give credibility to your work, right? https://www.npr.org/2023/04/27... [npr.org]
Re: (Score:1)
Yeah, right. (Score:1)
What is AI generation? (Score:1)
Re: (Score:2)
All those questions. I don't know how anybody could possibly cover them all with one simple caveat. How could it be done? How? How???
E-stamped at the bottom of all relevant articles and other stuff: "AI was used in the course of preparing this content for publication."
Anything else I can help you with?
AI content will have to be labelled (Score:1)
But they won't let the AI copyright or patent its works...
Re: (Score:3)
That's not true. At least not in the U.S. A U.S. court has ruled that AI-generated IP can't be patented. Copyright is a whole different ball of wax.
Like Rimmer in Red Dwarf with his "H" (Score:2)
Have AI constructed people wear the scarlet letters A and I on their forehead? Kidding!
Or maybe a variation on what Iain M. Banks imagined for VR worlds, and with AI enhanced imagery there has to be a little button you can mouse over/toggle that highlights what isn't real.
No matter how strict or intelligently you write (Score:3, Insightful)
law for this it won't matter. Half (maybe more?) the world are under authoritarian dictatorships that will do whatever benefits themselves regardless of the cost.
This is about to turn into some B sci-fi shit and will definitely be weaponized in short order.
The point is to shut down local think tanks (Score:2)
Re: (Score:2)
This isn't about trying to control the attacks from outside the EU. Obviously. That would be dumb.
This is about companies operating in the EU. Using AI to generate articles and reports, or images. There has been talk around similar issues for years, e.g. photoshopping models to attain unrealistic looks that are known to cause mental health problems for children and young adults.
Fair is Fair, EU labelled !! (Score:2)
Utterances of the EU/US/other bureaucrats/politicians are labelled as such, why shouldn't AI get credit where credit is due?
Slightly less silly, at what level of human intervention should AI be credited? It's been years, but IIRC, MS-Word gives a green underline for wording it considers ungainly. Does this count as AI?
As I understand it, AI runs with what you give it. What happens if you steer it a lot, say more than once every sentence? And check references!
Gmail completed my sentence. Is that AI? (Score:2)
In the near future most word processors and graphics tools will have some kind of AI copilot. So I guess all content in the EU will need to be labelled.
It makes me wonder who's advising the EU on tech, though. It seems to be a case of policy-by-knee-jerk.
"Voluntary" labels (Score:2)
Given the nature of interactions, all this labelling will have to be voluntary. Bing already labels them; legitimate blogs and newspapers could use them as well. I think there will be exceptions where AI is expected, like smart speakers. But, again, it is all voluntary.
Those who have something to hide, on the other hand?
Any visible watermark can be distorted or deleted using Photoshop. Invisible ones, ..., are by definition not visible to end users. If we somehow add this to text generated by GPT, it can be fe
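To make the invisible-watermark idea concrete for text, here is a toy sketch of the statistical "green list" watermark that researchers have proposed for language models (the function names and the 50% split are illustrative assumptions, not any vendor's actual scheme): the generator is biased toward a pseudorandom subset of the vocabulary keyed by the previous token, and a detector measures how often tokens land in that subset.

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically derive a 'green' subset of the vocabulary from the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detector side: fraction of tokens drawn from the green list keyed by their predecessor.
    Watermarked text scores well above the baseline fraction; ordinary text sits near it."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, vocab))
    return hits / max(len(tokens) - 1, 1)
```

Note this only answers the comment's concern partway: the detector needs the secret seeding scheme, and paraphrasing or retokenizing the text degrades the signal, so it is another raising-the-bar measure rather than proof.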
Like Trump... (Score:1)
Both he and a significant US population have been severely conned.
How do we get the truth now?
When will our legislators see the urgency of the need to get actual truth unquestionably distributed?