Ask Slashdot: Could a Form of Watermarking Prevent AI Deep Faking? (msn.com) 67

An opinion piece in the Los Angeles Times imagines a world after "the largest coordinated deepfake attack in history... a steady flow of new deepfakes, mostly manufactured in Russia, North Korea, China and Iran." The breakthrough actually came in early 2026 from a working group of digital journalists from U.S. and international news organizations. Their goal was to find a way to keep deepfakes out of news reports... Journalism organizations formed the FAC Alliance — "Fact Authenticated Content" — based on a simple insight: There was already far too much AI fakery loose in the world to try to enforce a watermarking system for dis- and misinformation. And even the strictest labeling rules would simply be ignored by bad actors. But it would be possible to watermark pieces of content that deepfakes.

And so was born the voluntary FACStamp on May 1, 2026...

The newest phones, tablets, cameras, recorders and desktop computers all include software that automatically inserts the FACStamp code into every piece of visual or audio content as it's captured, before any AI modification can be applied. This proves that the image, sound or video was not generated by AI. You can also download the FAC app, which does the same for older equipment... [T]o retain the FACStamp, your computer must be connected to the non-profit FAC Verification Center. The center's computers detect if the editing is minor — such as cropping or even cosmetic face-tuning — and the stamp remains. Any larger manipulation, from swapping faces to faking backgrounds, and the FACStamp vanishes.
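The article never specifies a mechanism, but a capture-time stamp along these lines would presumably be a signature over a hash of the raw sensor output. A minimal sketch in Python, using Ed25519 from the `cryptography` package (the key handling and names here are hypothetical; on a real device the key would live in tamper-resistant hardware):

```python
# Hypothetical sketch of a capture-time stamp: the device signs a SHA-256
# hash of the raw sensor bytes before any software can touch them.
# Requires: pip install cryptography
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# On real hardware this key would live in a tamper-resistant chip,
# not in application memory.
device_key = Ed25519PrivateKey.generate()
device_pub = device_key.public_key()

def stamp(sensor_bytes: bytes) -> bytes:
    """Sign the digest of the captured bytes at capture time."""
    return device_key.sign(hashlib.sha256(sensor_bytes).digest())

def verify(image_bytes: bytes, signature: bytes) -> bool:
    """Check that the bytes are exactly what the device signed."""
    try:
        device_pub.verify(signature, hashlib.sha256(image_bytes).digest())
        return True
    except InvalidSignature:
        return False

capture = b"raw sensor data"
sig = stamp(capture)
print(verify(capture, sig))              # True
print(verify(capture + b"edit", sig))    # False: any change breaks the stamp
```

Note that a plain signature like this breaks on any edit at all; the article's "minor edits keep the stamp" behavior would require the Verification Center to inspect each edit and re-sign, which is where most of the complexity, and the attack surface, would live.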

It turned out that plenty of people could use the FACStamp. Internet retailers embraced FACStamps for videos and images of their products. Individuals soon followed, using FACStamps to sell goods online — when potential buyers are judging a used pickup truck or secondhand sofa, it's reassuring to know that the image wasn't spun out or scrubbed up by AI.

The article envisions the world of 2028, with the authentication stamp appearing on everything from social media posts to dating app profiles: Even the AI industry supports the use of FACStamps. During training runs on the internet, if an AI program absorbs excessive amounts of AI-generated rather than authentic data, it may undergo "model collapse" and become wildly inaccurate. So the FACStamp helps AI companies train their models solely on reality. A bipartisan group of senators and House members plans to introduce the Right to Reality Act when the next Congress opens in January 2029. It will mandate the use of FACStamps in multiple sectors, including local government, shopping sites and investment and real estate offerings. Counterfeiting a FACStamp would become a criminal offense. Polling indicates widespread public support for the act, and the FAC Alliance has already begun a branding campaign.
But all this leaves Slashdot reader Bruce66423 with a question. "Is it really technically possible to achieve such a clear distinction, or would, in practice, AI be able to replicate the necessary authentication?"
  • Fiction (Score:5, Informative)

    by ceoyoyo ( 59147 ) on Sunday January 14, 2024 @07:17PM (#64158693)

    In case anyone is looking for FACStamp to see how it's implemented, this appears to be a complete work of fiction.

Would it work? Not as described. There are already cryptographic signing mechanisms built into some cameras for photojournalists, but they reveal any manipulation of the photo at all; they only work because they're embedded in the camera firmware, and because of that they could presumably be hacked by someone sufficiently motivated.

    • Re:Fiction (Score:5, Insightful)

      by ffkom ( 3519199 ) on Sunday January 14, 2024 @07:56PM (#64158749)
Given that the firmware of many cameras has already been hacked to enable features (that manufacturers were trying to withhold from users unless they bought higher-priced models) [personal-view.com], it is certain that any reasonably motivated group could also hack a camera to inject pictures into its certification process that were not recorded by the sensor.
But sure, regimes could have an interest in banning all non-certified photographs as "fake," simply to gain another tool of oppression by authorizing certificates only for known-to-be-regime-friendly photographers. Or they could, instead of just warning citizens not to publish recordings of thieves [abc7.com], refuse to certify "porch camera" recordings, and thus "prove" that all complaints about theft problems are just fake news.
    • Everything they're talking about could *possibly* work for cloud based AI because it's easy to regulate those. But try regulating deepfakes generated by a script kiddie using nothing more than Python and a run-of-the-mill GPU.

      • by ceoyoyo ( 59147 )

        They're not talking about regulating AI. The idea is to mark the true images, not the fake ones.

    • by Anonymous Coward
      Could a Form of Watermarking Prevent AI Deep Faking?

      No. Next question.
    • We NEED digital signatures standardized globally for all HUMANS to use!

Try to watermark the metadata if you'd like, but make it standard metadata. You can also add a hash of the source image to the metadata for signed edits. Devices can sign all images by default as well; you simply add or replace your signature.

      If I don't sign it, then it's not me. I can control if I sign it; I can't control everybody else.

      If you are media, it's fake unless they sign it.

      Fake media like Fox can sign their delusions so you can di
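A rough sketch of the metadata idea above, with invented field names (the comment is truncated, so the details here are assumptions): the signature travels alongside the image, and a signed edit records the hash of its source.

```python
# Rough sketch of the sidecar-metadata idea: the author signs the image hash,
# and a signed edit carries the hash of its source image. Field names are
# invented for illustration. Requires: pip install cryptography
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

author_key = Ed25519PrivateKey.generate()

def sign_image(image_bytes: bytes, author: str, source_sha256: str = None) -> str:
    """Produce sidecar metadata: author, image hash, optional source link, signature."""
    image_sha256 = hashlib.sha256(image_bytes).hexdigest()
    signature = author_key.sign(image_sha256.encode())
    return json.dumps({
        "author": author,
        "image_sha256": image_sha256,
        "source_sha256": source_sha256,  # ties a signed edit to its original
        "signature": signature.hex(),
    })

original = b"original photo bytes"
edited = b"cropped photo bytes"
print(sign_image(original, "alice"))
print(sign_image(edited, "alice",
                 source_sha256=hashlib.sha256(original).hexdigest()))
```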

    • In case anyone is looking for FACStamp to see how it's implemented, this appears to be a complete work of fiction.

I thought that was clear from the sentence saying it was created May 1, 2026.

    • " they only work because they're embedded in the camera firmware, and because of that could presumably be hacked if you were sufficiently motivated."

      I'm sure an AI can fake it just as well as a tiny chip in a camera.

      • by ceoyoyo ( 59147 )

You might want to look up the Wikipedia page on hash functions and cryptographic signing.

    • Journalists are going to skip the coding part and go straight to the vaporware phase.
    • by Reziac ( 43301 ) *

      And if it did exist, is there some reason AI couldn't incorporate this watermarking into its generated images?

      • by ceoyoyo ( 59147 )

        It's not a watermark. It's a cryptographically signed hash. If you've got a hash that's signed by, e.g. Canon, that matches the hash of the image in your hands, that image is genuine, at least up to the limits of the technology. If the hash doesn't match, the photo isn't the same one that was signed.

        You can't fake that a posteriori with AI or any other technology. The only way to do it is to acquire Canon's key or compromise the actual signing chip in the camera.
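A toy illustration of that point, using Ed25519 from Python's `cryptography` package as a stand-in for whatever a camera vendor would actually ship: a signature made with any key other than the vendor's simply fails to verify.

```python
# Toy illustration: without the vendor's private key, a signature over a fake
# image will not verify against the vendor's public key.
# Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_key = Ed25519PrivateKey.generate()    # stays inside the camera
vendor_pub = vendor_key.public_key()         # published for verification

real_photo = b"genuine sensor output"
real_sig = vendor_key.sign(real_photo)

attacker_key = Ed25519PrivateKey.generate()  # attacker's own key
fake_photo = b"AI-generated image"
forged_sig = attacker_key.sign(fake_photo)

for label, photo, sig in (("real", real_photo, real_sig),
                          ("forged", fake_photo, forged_sig)):
    try:
        vendor_pub.verify(sig, photo)
        print(label, "verifies")
    except InvalidSignature:
        print(label, "rejected")
```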

        • by Reziac ( 43301 ) *

          Okay, that's good to know. I was wondering if something could be done with a hash and a server that would not have touched the image, so anyone could confirm it; same principle.

          • by ceoyoyo ( 59147 )

You could, sure, but how does that server know the image is what it says it is?

            The crypto smart contract stuff has the same problem. You can absolutely sign things, and hashes work really well, but as soon as they meet the real world you've got a verification problem.

            If the hash is computed and signed right in the camera then it's still possible to fake, but it's a lot of hardware hacking. If it's done right, it's very difficult and expensive, like hacking the secure store in a phone. On the other hand, I

  • by Lehk228 ( 705449 ) on Sunday January 14, 2024 @07:18PM (#64158697) Journal
    a watermark wouldn't do the job, but cameras could include a tamper resistant chip that signs original captured images and video to provide a higher confidence of authenticity (someone could still take measures to project an image upon a disassembled camera sensor or otherwise inject an image into the system prior to signing)
    • And no editing (Score:5, Interesting)

      by Okian Warrior ( 537106 ) on Sunday January 14, 2024 @07:56PM (#64158753) Homepage Journal

      a watermark wouldn't do the job, but cameras could include a tamper resistant chip that signs original captured images and video to provide a higher confidence of authenticity (someone could still take measures to project an image upon a disassembled camera sensor or otherwise inject an image into the system prior to signing)

      Which would technically work, but then you'd be unable to edit the video for better production values: crop the video, enhance the color scheme, extract only the interesting 3 minutes from a 20 minute video, and so on.

      To be fair, this would prevent some unfair editing such as cropping out important context (showing a crowded protest that fills the image, instead of a small crowd of people in an otherwise empty street), or misleading edits (leaving out words in the middle of a sentence to reverse a meaning, leaving out a question or context words such as "let's suppose we had a situation that"), and so on.

And to the OP's point, having a watermark would work very well - if you could convince China to abide by it, and the CIA, and MI5, and the Ukrainians, and the Iranians, and Turkey, and Russia, and... well, I guess it wouldn't work very well after all.

      The complete and simple way to keep deep fakes out of news reports is for journalists to DO THEIR FUCKING JOB!

      Don't post unverified videos: see if you can find actual witnesses to the event, call around to see if the venue was rented by that person, see if you can get security camera footage from the local businesses, see if there are multiple videos from different viewpoints, and generally verify that what you're posting is actually true.

      Of course that takes time and you'll only be scooped by some other MSM outlet that didn't wait, but when no one trusts the MSM any more because they don't bother to check for accuracy, we won't have MSM any more.

      Basically, the death of MSM is inevitable at this point.

Not so. You can still edit; you just have to retain the digital original for comparison should anyone question the reasons for editing. It would be pretty clear if the context of the image was significantly altered. You could even provide a link to the unedited original as a routine procedure for all published edited images.
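A sketch of that "link to the unedited original" workflow (the manifest format is invented for illustration):

```python
# Hypothetical edit manifest: the published edit links to the retained
# original by hash, so anyone can fetch the original and compare.
import hashlib
import json

def make_manifest(edited: bytes, original: bytes, original_url: str) -> str:
    return json.dumps({
        "edited_sha256": hashlib.sha256(edited).hexdigest(),
        "original_sha256": hashlib.sha256(original).hexdigest(),
        "original_url": original_url,  # where the unedited capture is archived
    })

def check_manifest(manifest: str, edited: bytes, fetched_original: bytes) -> bool:
    m = json.loads(manifest)
    return (hashlib.sha256(edited).hexdigest() == m["edited_sha256"] and
            hashlib.sha256(fetched_original).hexdigest() == m["original_sha256"])
```

The hashes only prove which original an edit claims to descend from; whether the edit distorts the original's context is still a judgment call for whoever compares them.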

    • Or point the camera at a screen. You don't need to get fancy.
  • Quis custodiet...? (Score:5, Interesting)

    by The Last Gunslinger ( 827632 ) on Sunday January 14, 2024 @07:25PM (#64158707)
    Setting aside the risible condition of requiring all computers to connect to a "non-profit FAC Verification Center," how, exactly, do you prevent rogue AI from inserting a FACStamp into its own content?
    • And don't forget, "non-profit" does not mean free. Even if the organization is "non-profit" they will need to have a paid staff and programmers and that money has to come from somewhere. The highest paid CEO of a "non-profit" made over $32 million in 2022. How expensive would these license fees need to be?

      Licensing the feature to the camera manufacturers will mean a slightly higher price per camera - license fees to be determined later. Also, do you charge the photographer per image or do you charge the

like they prevent painting forgeries or counterfeit money: someone who makes fakes tends to do it for criminal purposes and usually avoids voluntarily marking them as obvious fakes.

    In other words, the only deepfakes that will bear watermarks are those done for comedic or homage purposes, whose author readily advertises that they're deepfakes, and those are already clearly marked as such today.

  • by vadim_t ( 324782 ) on Sunday January 14, 2024 @07:47PM (#64158731) Homepage

    How is that supposed to work?

Russia, as much as the current government sucks, still has plenty of smart people in it. If you create a signing mechanism and Russia or China wants to create chaos with AI fakes, then it won't be much effort for them to obtain phones, hack them, and sign whatever they please.

    Worst case, they take the device apart, and hook up something instead of the sensor to feed the right image to the signing chip.

    More likely, they'll find some hole to exploit, or even print whatever they want in high quality, and take a photo of it really carefully.

    This kind of thing can't possibly dissuade a serious attacker like a government.

    • Simply don't trust Russian, Chinese and other untrustworthy sources that can't be verified. Or at least make content from places like that really obvious (visual notification that it's from an untrustworthy source).

On the flip side, allow only authenticated signatures from good sources, or make content notify you that the signature was verified from a legit source.

      • by vadim_t ( 324782 )

        How do you plan to enforce that? Remember we're talking about state level actors.

        1. Buy a bunch of phones in the US
        2. Ship to Russia
        3. Take apart, hook up probes instead of the camera.
        4. Fake the entire environment -- spoof GPS signal, spoof cell tower.
        5. Produce whatever signed picture you need, that seems to have been taken elsewhere.

Sure, this takes work, but all the tooling for it is available and it's peanuts to a government; there are even small companies that own such hardware for testing purposes.

        • I don't know. I'm not sure if the goal is 100%. Maybe it would be like copy protection. Yes, it will be broken, and yes, it is not a perfect system even in theory. But it may cut down casual copying. Even if we can identify the majority of deepfakes, I think that would be a win. I don't think we can 100% identify anything, so I just don't think that should be the goal.
  • The government will mandate the use of FACstamps?

    Not as long as we have a First Amendment they won't. Good lord.

Ok, then another way: They won't mandate it, but anything made without one carries the stigma of being considered fake until proven otherwise.

      You act like this would be the first time the 1st gets assraped because it's convenient.

Watermarks in originals would need to withstand post-processing and image editing up to some degree, but break beyond it, and that degree would depend on the contents and meaning of the image. That is not possible today, and nobody knows whether it is possible at all. Watermarks in AI output would need to be robust against watermark-removal techniques. That is also not possible today, and probably impossible in general.

    Hence: no.
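The fragility is easy to demonstrate with the simplest scheme, least-significant-bit embedding: even a mild requantization, a crude stand-in for JPEG recompression, wipes the mark out. A toy sketch:

```python
# Toy demo: a least-significant-bit watermark does not survive even mild
# requantization (a crude stand-in for JPEG recompression).
pixels = list(range(0, 256, 8))        # fake 8-bit grayscale samples
mark = [1, 0, 1, 1, 0, 1, 0, 0]        # watermark bits

def embed(px, bits):
    """Overwrite each sample's low bit with a watermark bit."""
    return [(p & ~1) | bits[i % len(bits)] for i, p in enumerate(px)]

def extract(px, n):
    return [p & 1 for p in px[:n]]

def requantize(px, step=4):
    """Drop the low bits, as lossy compression effectively does."""
    return [(p // step) * step for p in px]

marked = embed(pixels, mark)
print(extract(marked, 8))              # [1, 0, 1, 1, 0, 1, 0, 0] -- intact
print(extract(requantize(marked), 8))  # [0, 0, 0, 0, 0, 0, 0, 0] -- gone
```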

This. You can't prove with a watermark that an image wasn't deepfaked or manually altered, but it might be possible to prove that an image was taken with a real camera, or at least existed in a certain state at a certain time, with digital signatures and notary services. You could also thwart an AI's attempts to train on or alter an image, at least in a limited and temporary way, by modifying it with adversarial-example techniques such as Glaze or Photoguard.

      • by gweihir ( 88907 )

        but it might be possible to prove that an image was taken with a real camera

        That is still likely impossible. Cameras are not HSMs and some TPM just does not cut it (TPM is somewhat ok for Digital Restriction Management, but not whole system security against the user). It may also give you a lot less than you think. After all, you could deepfake the picture and then take a photo of it. That may be hard or impossible to detect and will give you a false sense of security.

  • by SubmergedInTech ( 7710960 ) on Sunday January 14, 2024 @08:18PM (#64158779)

    Step 1: Generate fake content.
    Step 2: Display on 8K monitor ($4k) or make a high-res printout.
    Step 3: Take photo of that using FACStamp device.
    Step 4: Profit.

    No hacking required. And it's not even counterfeiting the FACStamp. It's a perfectly valid, unretouched photo of an 8K monitor.

    • You'd need a lot of setup between steps 2 and 3 to make sure you have the exact correct angle, distance, lighting, etc... Otherwise it'll be obvious that it's a picture of a monitor, and not a real life scene.

It'd be like the difference between a DVD rip of a movie and a theater-cam version.

I agree it's not point-and-shoot with no planning. But it actually isn't too hard.

        Look at how *bad* paparazzi videos and photos are. These are the target, not a fake National Geographic special on unicorns and dragons.

        - Lighting is easy because the display *is* the light source. EXIF information for shutter speed and aperture could be a little obnoxious to match for outdoor sunlit shots, but new displays are pretty bright and could easily be used to replicate an indoor or night scene. Color temper

      • My videos go from camera - hdmi - video capture device - computer. I think that is pretty common for professional workflows as well, maybe not the majority, but certainly a lot of them.
        I could easily swap out the camera for a video feed from another computer, and the recording computer would never know.

    • There has never been a picture taken of a computer screen that hasn't obviously been a picture of a computer screen. https://xkcd.com/1814/ [xkcd.com]

      Movie studios used to do this for VFX for years and it was always obvious what was being done.

    • Exactly this.
      Circumventing it would be as trivial as circumventing "uneditable" pdfs.

Interestingly, the more the world's authorities insist that this imaginary bullshit is impregnable, the more the value of such schemes (and thus the certainty of their being hacked) rises.

  • It's a real photo of a fake image, it's real fake, 100% legit deep.
  • by OneOfMany07 ( 4921667 ) on Sunday January 14, 2024 @09:59PM (#64158901)

    "But it would be possible to watermark pieces of content that deepfakes."

    That deepfakes what?

  • It's pretty simple, and it's what people do today (md5 anyone?). Make the hardware add a hash that 'proves' where it came from. I've assumed some hardware company would do this for a good 5+ years already. Yes, Google, Samsung, Sony, and Apple... you're stupid not to have included this for many MANY years already.

  • by LostMyBeaver ( 1226054 ) on Sunday January 14, 2024 @10:29PM (#64158921)
When DVD Jon published the crack for DVD CSS, many of us had been watching encrypted DVDs using open source for quite some time. It was simple: if an electronic device is capable of playing the media, the media can be extracted. I think when I ripped my first DVD, about a year before DeCSS, I spent 3-4 hours single-stepping one of the Windows DVD players (might have been a Mac one), and while I didn't reverse engineer the encryption, I did cut/paste the function and use it in Linux instead. Point being: if it can be displayed, it can be extracted.

    I've been doing signal processing for about 20 years now. Sometimes as a profession, sometimes just to solve puzzles. Watermarks are far more useless today than ever before.

A file watermark, which we could instead see as simply signing a file, can be ignored when reading it. There's nothing of value there other than to authenticate the origination of the file. Unless the full file workflow has the private keys involved, the file can't be color-corrected or any such thing. And if the keys are transferable, the signature has no value at all.

    I would spend more time on file watermarks, but they're useless for anything other than proving whether they originated unaltered at a specific source.

    There are visible watermarks such as those which TV networks would place on the screen to advertise their brand and identify the source of material. At one point we cropped this. Now, we simply execute a convnet to identify the position of a watermark and its edges, delete it and in-paint it. For a picture this is a little more difficult than a movie since you only have two dimensions to in-paint from. Neural networks are nice for this particular purpose, but if the watermark was big enough (more or less interfering with the photo) or complex enough, modern neural networks could lack enough training data to infer the missing region. For film, it's a non-issue because signal processing can easily reproduce missing regions from motion as it's pretty likely that future and past frames probably hold the missing content. Then neural networks can infer the rest.

    Then there's "invisible watermarks" which I've experimented with over time. I've come up with some pretty creative techniques which are certainly circumventable which employed altering quantization factors in individual macroblocks within videos. It would really take little more than two similar streams with different watermarks and a simple delta comparison to reverse the process. But it is mathematically impossible to produce an invisible watermark which would survive any editing at all... such as recompression. And if it's known how to create or identify a watermark, then it's possible to remove it and add a new one. If the watermark is a signature, the private keys would be needed, but there's little or no chance of storing a signature with enough bits to be useful without altering the image and making the signature visible.
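A sketch of the delta comparison described above, on toy data: two copies of the same frame carrying different marks differ exactly where the watermark lives, which both exposes the marked samples and lets you scrub them.

```python
# Sketch of the delta attack: two copies of the same frame with different
# invisible marks differ only where the watermark lives, so a simple
# comparison exposes the marked samples (and averaging scrubs them).
frame = [120, 121, 119, 122, 120, 118, 121, 120]

def mark(samples, bits):
    """Toy scheme: carry one watermark bit in each sample's parity."""
    return [(s & ~1) | bits[i % len(bits)] for i, s in enumerate(samples)]

copy_a = mark(frame, [1, 0, 1, 0])
copy_b = mark(frame, [0, 1, 1, 1])

exposed = [i for i, (a, b) in enumerate(zip(copy_a, copy_b)) if a != b]
print(exposed)   # every differing sample betrays the watermark channel

scrubbed = [(a + b) // 2 for a, b in zip(copy_a, copy_b)]
print(scrubbed)  # averaged copy with the mark destroyed
```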

    I wish there were a way to invest against companies making these technologies so I could make money by publishing how to circumvent all their algorithms. It would be really fun :) Wait for them to become big and then short trade and publish that their algorithm has been irreversibly cracked?
    • I wish there were a way to invest against companies making these technologies so I could make money by publishing how to circumvent all their algorithms. It would be really fun :) Wait for them to become big and then short trade and publish that their algorithm has been irreversibly cracked?

      Short trading would indeed be the way to "invest against" them. If you're the one with the irreversible crack, you might want to consult with lawyers before shorting the stock prior to releasing said crack.

Because the likely result of doing so would be, at the very least, getting sued.

      • by dfghjk ( 711126 )

        There would be nothing illegal or unethical about shorting a stock prior to releasing factual information, given that you are not an insider. You sound like an entitled douche. Worried that your Tesla stock might plummet?

        • You're the one sounding like an entitled asshole. I mean, what the hell does my imaginary Tesla stock have to do with it? We're talking about a company/organization doing image watermarking, not EV cars.

          Note that I didn't say not to do it. I merely said to consult a lawyer first.

          You've been around slashdot long enough to know that people releasing cracks/vulnerabilities have a tendency to get sued or even criminal charges on occasion, and that's without the complexities of short selling their stock as we

An article written by tech illiterates with no real understanding of the flaws of what they are proposing. Any such system would be incredibly easy to bypass just by providing a faked feed as the live visuals to a system that then stamps the fake as FACT.
    • Or the Fact as Fake. Much worse.

    • by dfghjk ( 711126 )

...and the editors at /. are simultaneously (1) too ignorant to know any better, (2) too lazy to write their own content rather than steal from other sites, and (3) too dumb not to claim it as their own work with the "Ask Slashdot" tag. Not long ago the site would have known that an article this stupid would not appeal to its literate audience; now it's too illiterate to know and wrongly assumes it does. Nothing left but the SuperKendalls.

  • by sonoronos ( 610381 ) on Sunday January 14, 2024 @11:58PM (#64158997)

The whole premise of this depends on a "non-profit" that performs the verification. This fictional story masquerading as news is shit because it rests on the fallacy that non-profits are somehow not politically motivated or otherwise corrupted. Non-profits merely earn no profit, and are easily taken over by radicals on the left and right to advance political agendas. In fact, virtually every non-profit in America is designed to further an agenda.

    Once one political party or group takes over the verification non-profit, it will work to actively silence people who disagree with them.

The dystopian future that the fictional story posing as a news article is trying to prevent is written by an idiot who doesn't understand the system or how it can be abused.

  • by Todd Knarr ( 15451 ) on Monday January 15, 2024 @12:19AM (#64159013) Homepage

    What's to stop the deepfake people from just using the same process to add the FACStamp to their content? I can think of multiple ways off-hand to do that. Then their deepfakes will register as authenticated content and nobody can tell the difference.

    Second problem is the requirement for connectivity to a central database. What do you do when you don't have connectivity for whatever reason? Or when the central database is down? Or when one of the deepfake people has hacked into the central database and wiped or contaminated it?

Now, there is a way to do this successfully, but you have to start by asking "How do we verify the source of a message without depending on a central authority to tell us the result?" The initials "HKP" might help you along; the principles apply equally to X.509.
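A minimal sketch of that decentralized direction, assuming PGP-style local trust rather than a central verification center (the names are invented; distributing and vouching for keys is the hard part the poster is hinting at):

```python
# Sketch of verification with no central database: each client keeps its own
# set of trusted signer keys (accumulated web-of-trust style, as with HKP/PGP
# or an X.509 chain you choose to root) and verifies content locally.
# Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

trusted_keys = {}                            # your local keyring, not a FAC Center

newsroom_key = Ed25519PrivateKey.generate()  # hypothetical signer you chose to trust
trusted_keys["newsroom photo desk"] = newsroom_key.public_key()

def verify_locally(content: bytes, signature: bytes) -> str:
    for name, pub in trusted_keys.items():
        try:
            pub.verify(signature, content)
            return "signed by trusted key: " + name
        except InvalidSignature:
            continue
    return "no trusted signer; treat as unverified"

photo = b"image bytes"
print(verify_locally(photo, newsroom_key.sign(photo)))
print(verify_locally(photo, b"\x00" * 64))
```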

You can process it in thousands of ways; watermarking is useless.
  • by dfghjk ( 711126 ) on Monday January 15, 2024 @01:58AM (#64159095)

About 20 years ago I scuba dived with a lot of professional photographers who were rather consumed with promoting photo competitions. Back then the transition from film to digital was beginning, and UW photographers were among the last to change. Sadly, slide film was the backstop of photo competition back then; photographers were not (as a group) clever enough to fake slides.

    It was a common belief that all you needed to do to restore fairness in competition was to force competitors to submit results in a "raw" format. After all, everyone knew that raw formats could not be duplicated on a computer! Raw formats were produced by the essence of what made a camera a Canon or a Nikon, you know, the very magic that people like SuperKendall argue about. Cameras were definitely NOT computers.

    But then, Adobe introduced DNG and some camera makers started supporting it out of camera. Curious. Good thing, too, considering how far digital photography has come since then and how much better UW photography is now in such a short period of time.

    It's funny that people forget that photography is about lying with light, the whole art of photography is in capturing a scene whether it ever existed or not. Does using a flash render an image fake? using a wide angle or telephoto lens? A gradient filter? How about shooting 10 frames a second and choosing just the right moment?

You will never overcome stupid; there are far too many of them, and being a photographer requires a low IQ more than a high one. Ask SuperKendall, he's a published photographer!

Ever wonder if AI will be trouble, along with ChatGPT?
Unless you put "FAKE" across the images in large letters, most people are not going to care whether they are real or deepfakes. It might make identifying fakes as such easier, but most people couldn't care less. I want to see whomever doing whatever, and if I get a pic/video of it, I'm good regardless of whether it's real or fake.
  • by macwhiz ( 134202 ) on Monday January 15, 2024 @10:14AM (#64159677)

    Sure, FACstamp sounds alluring, until you think about it a little. How do you get a FACstamp?

    A central authority has to provide it. How does that authority know it's from a trustworthy source? Obviously you're going to have to provide proof of identity, to show that it really came from you. And that proven identity will be tied to an account, and that account data will be part of the FACstamp.

    And no world government will ever abuse a system that requires your photo to be absolutely traced back to you, where people believe there's no way that attribution could be faked. After all, FACstamp is all about proving authenticity!

    No one would use it to track down the dissident that took a photo of government misbehavior... or fake evidence that a political rival committed a heinous act... and certainly no government would ever think to suborn that central authority to make such things even easier.

    It's not just that the idea assumes that it's possible to create an unhackable, indelible watermark that proves beyond doubt that an image is a genuine representation of reality. Nor the naïve belief that such a system will make photos "show true facts" and prevent the creation of photos that mislead. It's the failure to consider the unintended consequences of such a system! (Or, as usual, to assume those consequences can be overcome by "nerding harder"...)

  • Imagine the [unlikely?] case where someone wants to implement FACstamp on their own computer. Can they?

    They'd end up facing a similar problem as DRM standards: whoever backs it can't allow any independent implementations, because that would undermine the purpose: preventing people from signing the "wrong" data.

    So this FACstamp idea requires proprietary software for every step of the process, with a key obfuscated or hidden inside a TPM chip or something like that. Wanna write something that is interoperable

  • Betteridge's law of headlines:
    "Any headline ending with a question mark can be answered with no."
