Cornell Researchers Develop Invisible Light-Based Watermark To Detect Deepfakes

Cornell University researchers have developed an "invisible" light-based watermarking system that embeds unique codes into the physical light that illuminates the subject during recording, allowing any camera to capture authentication data without special hardware. By comparing these coded light patterns against recorded footage, analysts can spot deepfake manipulations, offering a more resilient verification method than traditional file-based watermarks. TechSpot reports: Programmable light sources such as computer monitors, studio lighting, or certain LED fixtures can be embedded with coded brightness patterns using software alone. Standard non-programmable lamps can be adapted by fitting them with a compact chip -- roughly the size of a postage stamp -- that subtly fluctuates light intensity according to a secret code. The embedded code consists of tiny variations in lighting frequency and brightness that are imperceptible to the naked eye. [Peter Michael, the lead author] explained that these fluctuations are designed based on human visual perception research. Each light's unique code effectively produces a low-resolution, time-stamped record of the scene under slightly different lighting conditions. [Abe Davis, an assistant professor] refers to these as code videos.

"When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos," Davis said. "And if someone tries to generate fake video with AI, the resulting code videos just look like random variations." By comparing the coded patterns against the suspect footage, analysts can detect missing sequences, inserted objects, or altered scenes. For example, content removed from an interview would appear as visual gaps in the recovered code video, while fabricated elements would often show up as solid black areas. The researchers have demonstrated the use of up to three independent lighting codes within the same scene. This layering increases the complexity of the watermark and raises the difficulty for potential forgers, who would have to replicate multiple synchronized code videos that all match the visible footage.
The concept is called noise-coded illumination and was presented on August 10 at SIGGRAPH 2025 in Vancouver, British Columbia.
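To make the general idea concrete, here is a minimal sketch of correlation-based code recovery. This is an illustration of the principle, not the authors' actual algorithm; the code amplitude, scene model, and recovery formula are all invented for the example. A light's brightness is modulated by a secret pseudorandom code, and a "code video" is recovered from footage by correlating each pixel's time series with that code; tampered regions would decorrelate instead of reconstructing the scene.

```python
import numpy as np

# Illustrative sketch of coded illumination, NOT the Cornell implementation.
rng = np.random.default_rng(42)
frames, h, w = 240, 16, 16
amp = 0.01                    # ~1% brightness fluctuation (imperceptibly small)

code = amp * rng.choice([-1.0, 1.0], size=frames)   # secret per-frame code

# Simulate a static scene lit by the coded light: pixel = scene * (1 + code[t])
scene = rng.uniform(0.2, 0.8, size=(h, w))
video = scene[None, :, :] * (1.0 + code[:, None, None])

# Recover the "code video" by correlating each pixel's time series with the code.
c = code - code.mean()
corr = np.tensordot(c, video - video.mean(axis=0), axes=(0, 0))
recovered = corr / (c @ c)    # ~= scene for authentic footage

print(np.allclose(recovered, scene))  # a tampered region would not correlate
```

In this toy model the recovery is exact because the scene is static and noiseless; the real system has to contend with motion, sensor noise, and rolling shutters.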
  • by ndsurvivor ( 891239 ) on Tuesday August 12, 2025 @09:32PM (#65586354)
    And it makes common sense. The problem is that there is a percentage of people who like to be lied to, and who will deny a video was manipulated, or won't care.
    • by AmiMoJo ( 196126 ) on Wednesday August 13, 2025 @04:59AM (#65586782) Homepage Journal

      Will it even work? Things like this tend to be vulnerable to AI being trained to reproduce it.

      • If the light is encoded properly, you could embed SHA-256 hashes; even if an attacker could mimic the light pattern itself, they would not be able to produce a valid hash for altered video.
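One reading of this suggestion (purely illustrative; the Cornell system is not described as working this way, and all names here are hypothetical): the light broadcasts a keyed hash of each time slot, so a verifier holding the key can check that codes recovered from footage are authentic, while a forger without the key cannot mint valid codes for altered video.

```python
import hashlib
import hmac

SECRET_KEY = b"studio-secret"   # hypothetical key held by the light's operator

def light_code(timestamp: int) -> bytes:
    """Code the light would flash during the given time slot."""
    return hmac.new(SECRET_KEY, str(timestamp).encode(), hashlib.sha256).digest()

def verify(timestamp: int, recovered_code: bytes) -> bool:
    """Verifier recomputes the expected code and compares in constant time."""
    return hmac.compare_digest(light_code(timestamp), recovered_code)

genuine = light_code(1723500000)
print(verify(1723500000, genuine))        # True for the authentic code
print(verify(1723500000, b"\x00" * 32))   # False: a forged code fails
```

The practical obstacle is bandwidth: flashing 256 bits per time slot through imperceptible brightness changes, and recovering them reliably from compressed video, is much harder than this sketch suggests.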

    • Actually I do not see how this is great, for similar reasons: it relies entirely on someone saying that a video is fake because it did not have some coded light pattern. How do we know that this person is telling the truth? If a genuine, but embarrassing, video was filmed under normal lights they could simply claim that it was fake because it lacked the special light pattern.

      This method lets the creator of the video know whether or not it has been manipulated but since they were there they should know tha
    • by znrt ( 2424692 )

      uncritical thinking is way more widespread than you think ...

  • by kwerle ( 39371 ) <kurt@CircleW.org> on Tuesday August 12, 2025 @10:16PM (#65586440) Homepage Journal

    I reread 1984 a few years ago and the thing that really struck me is what Orwell got wrong: the notion that you need to erase evidence of factual data (at great effort/expense) in order to propagate lies. It turns out that you just need to shout a little louder and a lot of folks will eat it up.

    Which should have been obvious by then, but which was not even obvious to me when I read it the first time (in HS - around '84). But at this point we've all very much lived through it (and continue to).

    The number of people who care about what's factual or actual isn't enough.

    • Re: (Score:3, Informative)

      by ndsurvivor ( 891239 )
      I read 1984 in 1984 myself :-), it seemed like an interesting thing to do. I think we all knew from 1930s Germany that repeating lies over and over again makes them true in some people's minds. Trolls get some kind of joy, I think, in repeating lies. Simply pointing them to a fact check does not seem to matter. That is what this technology does: it points them to a fact check, but I don't think it will matter much. The Nancy Pelosi video where she looked drunk went viral, and all MAGAs seem to rep
    • by Mr. Dollar Ton ( 5495648 ) on Tuesday August 12, 2025 @10:38PM (#65586468)

      what Orwell got wrong: the notion that you need to erase evidence of factual data (at great effort/expense) in order to propagate lies

      No, he didn't. That, as was made patently clear by the long and tedious dialogs between O'Brien and Smith in the torture room, was a temporary measure while people were trained to deny reality. There was even the break-through scene, really visual:

      "'How many fingers do you see, kwerle?' asked the ICE agent gently. And this time, for the first time kwerle saw with clarity two fingers raised" or somesuch.

      He even got right the bit that even if social pressure and fake news will do it without the need of physical torture, the torture is still necessary because cruelty is the point. It ain't a surprise, Orwell experienced it first-hand in Spain, so he knew how it works. Fakery and pretense was everyday life behind the Iron Curtain not 40 years ago, the deep fa... I mean the "AI revolution" has only made it just a tad smoother and that's all.

      • off topic, but that reminds me of when Picard was captured by the Cardassians and was tortured in Star Trek TNG. I never linked it to the book 1984 before.
        • Yea, it's the "2 + Torture = 5" trope [tvtropes.org]. That Chain of Command Part II episode is one of the best of the season, possibly of the series. And a hell of a lead into the start of a new spin off, Deep Space 9.

        • that reminds me of when Picard was captured by the Cardassians and was tortured in Star Trek TNG.

          They made him listen to Kanye? That would be pretty awful.

      • by znrt ( 2424692 )

        spot on comment, but one small correction: orwell didn't experience torture "first hand" in spain, at least there is no account from him that i know of. he did know of some of his comrades being subjected to torture and dying in suspicious circumstances.

        it's very likely though that he heard many testimonies that he didn't record. torture was ofc pretty common, especially and more systematically on the fascist side, and is well documented but still keeps getting denied today by believers despite

        • orwell didn't experience torture "first hand" in spain, at least there is no account from him that i know of

          It's been a long while since I've read anything about or from Mr. Blair, but I'm imagining that I remember reading bits from his biography about his time as a journalist in Spain, his contact with the Stalinist faction there, which was trying to take the lead of the anti-fascist movement there, playing as dirty against the competition as you can imagine, and how the whole Stalinist campaign disgusted him so much that he turned into a skeptic that democracy can work. That, of course, on top of what he was cr

    • I reread 1984 a few years ago and the thing that really struck me is what Orwell got wrong: the notion that you need to erase evidence of factual data (at great effort/expense) in order to propagate lies. It turns out that you just need to shout a little louder and a lot of folks will eat it up.

      That was a time when "photoshop" meant an actual workshop with lenses, cameras, artists, photographers, and so much else that only a government or the exceedingly wealthy could afford. I saw a YouTube video recently on how wealthy women in the days of black and white photography would spend good money to have photos of themselves taken and manipulated to show them as having unnaturally thin waists or whatever was fashionable. Even then they'd take steps to simplify the process by having a blank background

      • I am practiced in the art of retouching images, including editing 4"x5" negatives on glass, which involves picking at tones to reduce them and using a stipple brush to build tones with the same grain as the film. Mostly, the technique was used to 'paint in' open eyes, as the exposure was so long the subject posed eyes closed. All portraits from about 1870 to 1900 received this standard treatment, so manipulation was sort of baked in from the get-go.
    • reread 1984 a few years ago and the thing that really struck me is what Orwell got wrong: the notion that you need to erase evidence of factual data (at great effort/expense) in order to propagate lies. It turns out that you just need to shout a little louder and a lot of folks will eat it up.

      I think you may have missed the point. It's not just about controlling Outer Party members, it's the hubris of literally trying to change the past, of eliminating objective reality. A hundred years hence when some cog in the MinRec machine looks up historical choco ration figures they will only see what the Party says they were.

      Reality is what the Party decides it is.
      Eventually even Inner Party members believe the world is what they say it is. It's not just about subjugation, despite the famous jackboot lin

  • What's a postage stamp?

    I'm only half joking, as I know what a postage stamp looks like, but I'm not sure I've put a postage stamp on anything in the last decade. I've sent things in the mail, but it's been either in a prepaid envelope or I took something to the post office where they did whatever to indicate postage paid on it. At least they didn't give the size in Libraries of Congress or football fields. If the press is going to use such size comparisons then perhaps they could use something more relatable to t

  • So, the same way the government tracks the location of videos: recorded AC power fluctuations from the power plants are detected from small lighting changes in the tracked video.

    I saw this in a documentary years ago. Not new.

    • I was kind of "yawn" at the thought that this is new or original, so I started talking about... ok.. this is fake. what now? a portion of the population doesn't seem to care.
    • So, the same way the government tracks the location of videos: recorded AC power fluctuations from the power plants are detected from small lighting changes in the tracked video.

      I saw this in a documentary years ago. Not new.

      I'm curious, can someone fill me in? I'd like to know how and when such techniques were used.

      I do recall hearing how secure facilities would break the connection to the electric grid with a motor-generator set. This was to prevent anyone on the outside from picking up signals from the inside on the wires. The idea was that the mass of the spinning motor-generator was a kind of low pass filter that would stop any attempts to pick up data over the wires. Now I see that this works both ways

      • I think this is what was being referenced. It's not actually from light fluctuations, but from the hum that 50 Hz power lines induce. The frequency fluctuates all the time, so if you isolate the background hum and map the fluctuations, you can compare them to recorded grid values from the past to find a real timestamp for the video.

        https://www.perpetuityresearch... [perpetuityresearch.com]
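This is known as ENF (electrical network frequency) analysis. A toy sketch of the matching step, with entirely invented numbers: given a log of the grid's frequency wobble and a short ENF trace extracted from a recording, slide the trace along the log and pick the offset that fits best.

```python
import numpy as np

# Toy ENF (electrical network frequency) matching; all values are invented.
rng = np.random.default_rng(0)

# Pretend a grid operator logged the mains frequency wobble around 50 Hz.
grid_log = 50.0 + 0.02 * rng.standard_normal(10_000)

# A recording's hum yields a short, slightly noisy ENF trace that actually
# starts (unknown to the analyst) at sample 6200 of the log.
true_start = 6200
video_enf = grid_log[true_start:true_start + 300] + 0.001 * rng.standard_normal(300)

# Slide the trace along the log; the minimum-error offset dates the recording.
errors = [np.sum((grid_log[i:i + 300] - video_enf) ** 2)
          for i in range(len(grid_log) - 300)]
estimated_start = int(np.argmin(errors))
print(estimated_start == true_start)  # the match recovers the timestamp
```

Real ENF forensics extracts the trace from audio hum or, as in more recent work, from rolling-shutter light flicker, and matches against archived grid measurements.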
  • So entire videos are now being deepfaked. Somehow, they think that noise encoding won't be deepfaked along with it? Strange. Almost as if the researchers don't understand what a deepfake is.
    • Personally, I just use common sense to know if a video is deepfaked. I know that almost everything on Bitchute is fake, and almost everything on a credible news source is credible (after checking a few other sources). There does seem to be a disturbingly large percentage of Americans who seem to have let their brains fall out when they tilt their head.
    • by ceoyoyo ( 59147 )

      It's much harder to get very subtle details right. Especially subtle details that are right down in the noise and have to be consistent across the entire video in all four dimensions. You can probably train an image generator to create a decent version of this watermark but it would likely be much more difficult than getting the right number of fingers on the hands.

  • will senator Vreenak be able to tell if it's a fake?
    • I looked up the reference to that... seems like a good reference: Upon examination of the data rod, however, Vreenak discovered that the evidence was forged, and headed back to Romulus threatening to expose the plot. However, due to sabotage by Garak, Vreenak's shuttle exploded while en route, killing the senator, and at least two of his guards. The subsequent investigation by the Tal Shiar uncovered the fabricated evidence, but its defects appeared to be the result of the explosion. The Romulans logically
  • Until future AI models are trained to replicate the coding in surrounding parts of the video

  • I see some problems with this approach, even though using structured light is intrinsically cool.
    1. It only protects video, not photos.
    2. Unless the watermarks are chosen carefully, in the worst case it adds fluorescent-light-style flicker, which is indeed perceptible and annoying.
    3. The authors say it is generally robust but weak against at least one type of attack (reflectance-only), and the threat landscape is likely to evolve.
    4. An adversary who can derive a watermark, read it from the equipment or

  • Can one make a light that will mimic this watermarking on a real video? Imagine a reporter with such a light filming something which is later flagged as AI-generated because the illumination has that watermark. Criminals could carry such a light to ensure no security footage was ever admissible in court.
  • Legally require all AI audio, video, and image software to embed watermarks in the content that identify it as AI-generated. Even generated text could contain a watermark if it was sufficiently large.
    • How would you legally require it for models not hosted in the country(/countries) where such legislation would hypothetically be implemented? How would you handle the existence of open source models without requiring intrusive surveillance to achieve the goal?
      • by DrXym ( 126579 )
        You start by making it the law somewhere big like the EU and any AI company wanting to do business there will comply and probably enable it everywhere. Open source or not is irrelevant.
        • But that's the thing: many open source models are not tied to any company, and many more can be made that aren't tied to any one company. Not to mention that it is still the internet; if it is publicly available online, people will get to it.
  • First, watermarks have never performed well. This one will not either. Second, if you try to use a watermark to convince somebody that something is genuine, that person has to be smart enough to understand the argument behind it. Most people are not.

  • by nashv ( 1479253 )

    It is relatively trivial to remove any such pattern using an FFT and pyramidal decomposition.
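As a rough illustration of this removal claim (a hypothetical sketch, not a demonstration against the actual scheme): if the embedded code occupied a narrow temporal frequency band, notching that band out of each pixel's time series with an FFT would strip the modulation. A noise-like code spread across many frequencies, as the name "noise-coded illumination" suggests, would be far harder to remove this way without visibly degrading the video.

```python
import numpy as np

# Hypothetical sketch: removing a NARROWBAND temporal flicker with an FFT notch.
fps, frames = 30, 300
t = np.arange(frames) / fps

flicker = 0.05 * np.sin(2 * np.pi * 4.0 * t)   # imagined 4 Hz coded flicker
signal = 0.5 + flicker                          # one pixel's brightness over time

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(frames, d=1 / fps)
spectrum[np.abs(freqs - 4.0) < 0.5] = 0         # notch out the flicker band
cleaned = np.fft.irfft(spectrum, n=frames)

print(np.max(np.abs(cleaned - 0.5)))  # residual flicker is negligible
```

This only works because the toy flicker sits in one known band; a broadband pseudorandom code overlaps the frequencies of real scene motion, so filtering it out also damages the content.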
