Intel Unveils Real-Time Deepfake Detector, Claims 96% Accuracy Rate (venturebeat.com)
An anonymous reader quotes a report from VentureBeat: On Monday, Intel introduced FakeCatcher, which it says is the first real-time detector of deepfakes -- that is, synthetic media in which a person in an existing image or video is replaced with someone else's likeness. Intel claims the product has a 96% accuracy rate and works by analyzing the subtle "blood flow" in video pixels to return results in milliseconds. Ilke Demir, senior staff research scientist in Intel Labs, designed FakeCatcher in collaboration with Umur Ciftci from the State University of New York at Binghamton. The product uses Intel hardware and software, runs on a server and interfaces through a web-based platform.
Unlike most deep learning-based deepfake detectors, which look at raw data to pinpoint inauthenticity, FakeCatcher is focused on clues within actual videos. It is based on photoplethysmography, or PPG, a method for measuring the amount of light that is absorbed or reflected by blood vessels in living tissue. When the heart pumps blood, it moves through the veins, which change color slightly with each pulse. With FakeCatcher, PPG signals are collected from 32 locations on the face, she explained, and then PPG maps are created from the temporal and spectral components. "We take those maps and train a convolutional neural network on top of the PPG maps to classify them as fake and real," Demir said. "Then, thanks to Intel technologies like [the] Deep Learning Boost framework for inference and Advanced Vector Extensions 512, we can run it in real time and up to 72 concurrent detection streams."
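In rough outline, the pipeline looks something like the sketch below. This is illustrative only, not Intel's actual code: the region coordinates, window size, and tiny CNN are all placeholder assumptions, and face detection, alignment, and the 72-stream serving side are omitted.

import numpy as np
import torch
import torch.nn as nn

def ppg_map(frames: np.ndarray, regions, win: int = 64) -> np.ndarray:
    """Build a (region x channel x bin) map from per-region green-channel means.

    frames:  (T, H, W, 3) uint8 face-video clip, T >= win
    regions: list of (y0, y1, x0, x1) face patches (32 in the paper)
    """
    rows = []
    for (y0, y1, x0, x1) in regions:
        # Mean green intensity per frame approximates the PPG signal: blood
        # volume changes subtly modulate how much light the skin reflects.
        g = frames[:win, y0:y1, x0:x1, 1].mean(axis=(1, 2)).astype(np.float64)
        g = (g - g.mean()) / (g.std() + 1e-8)            # temporal component
        spec = np.abs(np.fft.rfft(g))[: win // 2]        # spectral component
        spec = spec / (spec.max() + 1e-8)
        rows.append(np.stack([g[: win // 2], spec]))
    return np.stack(rows)                                # (32, 2, win // 2)

class PPGClassifier(nn.Module):
    """Toy CNN over PPG maps; stands in for the real, larger model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),                            # logits: [real, fake]
        )

    def forward(self, x):
        return self.net(x)

# Demo on random frames standing in for a detected, aligned face:
frames = np.random.randint(0, 256, (64, 128, 128, 3), dtype=np.uint8)
regions = [(16 * r, 16 * r + 16, 16 * c, 16 * c + 16)
           for r in range(4) for c in range(8)]          # 32 face patches
m = torch.from_numpy(ppg_map(frames, regions)).float()   # (32, 2, 32)
logits = PPGClassifier()(m.permute(1, 0, 2).unsqueeze(0))  # (1, 2)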
"FakeCatcher is a part of a bigger research team at Intel called Trusted Media, which is working on manipulated content detection -- deepfakes -- responsible generation and media provenance," she said. "In the shorter term, detection is actually the solution to deepfakes -- and we are developing many different detectors based on different authenticity clues, like gaze detection." The next step after that will be source detection, or finding the GAN model that is behind each deepfake, she said: "The golden point of what we envision is having an ensemble of all of these AI models, so we can provide an algorithmic consensus about what is fake and what is real." Rowan Curran, AI/ML analyst at Forrester Research, told VentureBeat by email that "we are in for a long evolutionary arms race" around the ability to determine whether a piece of text, audio or video is human-generated or not.
"While we're still in the very early stages of this, Intel's deepfake detector could be a significant step forward if it is as accurate as claimed, and specifically if that accuracy does not depend on the human in the video having any specific characteristics (e.g. skin tone, lighting conditions, amount of skin that can be see in the video)," he said.
This won't last long (Score:5, Insightful)
This is a PR move, not anything really useful. It will be incredibly easy to train an AI model to defeat this, so the 96% rate will only last as long as evading this particular check isn't worth training into the models' weights.
Re: (Score:2)
Exactly my first thought. All this does is make deepfakes better.
Re: (Score:1)
Generation 1:
A: Deep Fakes
B: Deep Fake Detector
Generation 2:
A: Deep Deep Fakes
B: Deep Deep Fake Detector
Generation 3:
A: Deep Deep Deep Fakes
B: Deep Deep Deep Fake Detector
Etc...
Re:This won't last long (Score:4, Insightful)
Exactly. Intel just made the perfect algorithm to plug into my GAN to make deep fakes that are THAT much harder to detect. :-)
This is how AI works... the cat-and-mouse cycle can be nearly automatic, since a detector can be plugged straight in as the discriminator that trains a GAN (a minimal sketch below).
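A hypothetical sketch of that loop in PyTorch: a frozen detector folded into the generator's loss so the generator is rewarded for evading it. The toy models and sizes are placeholders, not any real deepfake system.

import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(100, 3 * 64 * 64), nn.Tanh())   # toy G
detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))   # stand-in
for p in detector.parameters():
    p.requires_grad_(False)          # the detector is fixed; only G trains

opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    fake = generator(torch.randn(16, 100))
    # Ask the frozen detector to call the fakes "real" (label 0, by assumed
    # convention) and descend that loss: the generator is rewarded precisely
    # for fooling the detector. Gradients flow through the frozen weights.
    loss = bce(detector(fake), torch.zeros(16, 1))
    opt.zero_grad()
    loss.backward()
    opt.step()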
Re: (Score:2)
You'd still have to retrain the model, since this information hasn't been ingested into any of the existing models, but that is only a matter of time.
Re: (Score:2)
Retraining a model costs $1 million or less. Look at Stable Diffusion. That's well within the reach of some multimillionaire, billionaire, or government that wants to play with people's lives and deepfake stuff.
Re: (Score:2)
The paper on this technique was published two years ago. It's actually old news, so there's been plenty of time for someone to have done this already:
https://ieeexplore.ieee.org/do... [ieee.org]
Re: (Score:3)
And in an environment where most videos are still not deepfakes, "96% accuracy" means that most of the time, when it identifies a fake, it will be wrong. (If 1% of videos are deepfakes, the algorithm will flag roughly 5 out of every 100: about 1 real fake plus 4 false positives from the 99 authentic ones, so around 80% of its "fake" verdicts will be wrong.)
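The base-rate arithmetic, assuming "96% accuracy" means 96% on both fakes and real videos (the claim doesn't say which):

prevalence = 0.01                                # assumed: 1% of videos are fakes
sensitivity = specificity = 0.96                 # assumed reading of "96% accuracy"

true_pos = prevalence * sensitivity              # fakes correctly flagged
false_pos = (1 - prevalence) * (1 - specificity) # real videos wrongly flagged
precision = true_pos / (true_pos + false_pos)

print(f"flagged per 100 videos: {100 * (true_pos + false_pos):.1f}")  # ~4.9
print(f"share of 'fake' verdicts that are correct: {precision:.0%}")  # ~20%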
Re: (Score:2)
Except it identifies fakes by failing to find them authentic, not by looking for markers of fakeness. The dataset where they achieved 96% was the "Face Forensics" dataset, and it doesn't say whether that's the version where 100% of the videos are fakes or the one where 50% are. Neither would give you a 4% error rate on real-world data, because the share of authentic videos in the wild is much higher than in the test data.
For reference, the actual article on this research was published over two years ago. It's technically old news.
Re: This won't last long (Score:2)
It will not matter if it is consistently wrong once they are the de facto deepfake detector. Everyone will look to Intel to tell them whether something is a deepfake, and everyone will take that as gospel. You won't need an algorithm, you will just need someONE to say "this is fake [or not]". Obvious deepfakes will continue to be obvious, but people will believe them because the authority on detecting deepfakes said it is okay to believe.
Re: (Score:2)
Yeah, just buy it and train your AI to evade it. Then they buy your AI and train theirs to detect yours, and the arms race continues long past the point where humans can no longer detect anything themselves. Humans end up paying big companies who train their AIs to detect each other. This money is pooled and used to power the AI training arms race for tricking humans about what is real. Want to know what's real? Only if you can afford it - only if your financial will and ability not to be tricked is greater than someone else's will to trick you.
Should have named it "FakeBlock" (Score:2)
I expect a similar level of success from Intel's version.
Other Uses (Score:3)
If they're doing this for something as trivial as deepfakes, I bet the same tech is being used for much more interesting purposes by intelligence agencies. I wouldn't have thought you could measure blood flow like that, but since you can do it from a simple video, how about measuring the health and heart condition of other world leaders? How about a sort of lie detector? Combined with other signals like pupil dilation, it seems like you could effectively simulate a lie detector test, as long as you knew that certain things said on the video were true or false. There are probably other, more interesting uses I'm not even thinking of.
Re: (Score:2)
how about measuring the health and heart condition of other world leaders
There's an order of magnitude more complexity in inferring something about someone's underlying condition than in merely measuring whether something changes. It's frankly astonishing that this kind of subtle change survives compression algorithms; I would have expected such minor visual deviations to be compressed out.
Incidentally, photoplethysmography is already used for medical analysis. Heck, you probably have a device that uses this technology on your wrist right now, measuring your heart rate.
Re: (Score:2)
It's quite simple: I just wait for the post to show up on Truth Social and assume the opposing view is right. It's a trend that has held up so far :-)