
 



Intel AI

Intel Unveils Real-Time Deepfake Detector, Claims 96% Accuracy Rate (venturebeat.com) 27

An anonymous reader quotes a report from VentureBeat: On Monday, Intel introduced FakeCatcher, which it says is the first real-time detector of deepfakes -- that is, synthetic media in which a person in an existing image or video is replaced with someone else's likeness. Intel claims the product has a 96% accuracy rate and works by analyzing the subtle "blood flow" in video pixels to return results in milliseconds. Ilke Demir, senior staff research scientist in Intel Labs, designed FakeCatcher in collaboration with Umur Ciftci from the State University of New York at Binghamton. The product uses Intel hardware and software, runs on a server and interfaces through a web-based platform.

Unlike most deep learning-based deepfake detectors, which look at raw data to pinpoint inauthenticity, FakeCatcher is focused on clues within actual videos. It is based on photoplethysmography, or PPG, a method for measuring the amount of light that is absorbed or reflected by blood vessels in living tissue. When the heart pumps blood, it goes to the veins, which change color. With FakeCatcher, PPG signals are collected from 32 locations on the face, she explained, and then PPG maps are created from the temporal and spectral components. "We take those maps and train a convolutional neural network on top of the PPG maps to classify them as fake and real," Demir said. "Then, thanks to Intel technologies like [the] Deep Learning Boost framework for inference and Advanced Vector Extensions 512, we can run it in real time and up to 72 concurrent detection streams."
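Demir's description amounts to a three-step pipeline: extract per-region color signals from the face, combine their temporal and spectral components into a "PPG map," and classify that map with a CNN. A minimal sketch of the idea follows; the region coordinates, green-channel choice, and map layout here are illustrative assumptions, not Intel's implementation:

```python
import numpy as np

def ppg_signals(video, regions):
    """Mean green-channel intensity per facial region over time.

    video: array of shape (T, H, W, 3); regions: list of (y0, y1, x0, x1).
    Returns an array of shape (len(regions), T).
    """
    return np.stack([
        video[:, y0:y1, x0:x1, 1].mean(axis=(1, 2))  # green channel assumed as the blood-flow proxy
        for (y0, y1, x0, x1) in regions
    ])

def ppg_map(signals):
    """Stack the centered temporal signals with their magnitude spectra
    into one 2-D 'map' that a small CNN could classify as real vs. fake."""
    temporal = signals - signals.mean(axis=1, keepdims=True)
    spectral = np.abs(np.fft.rfft(temporal, axis=1))
    return np.concatenate([temporal, spectral], axis=1)

# Toy demo: 64 frames of random 32x32 "video", 4 regions instead of 32.
rng = np.random.default_rng(0)
video = rng.random((64, 32, 32, 3))
regions = [(0, 16, 0, 16), (0, 16, 16, 32), (16, 32, 0, 16), (16, 32, 16, 32)]
m = ppg_map(ppg_signals(video, regions))
print(m.shape)  # (4, 97): 64 temporal samples + 33 spectral bins per region
```

The final CNN classification step is omitted; the point is only how raw pixels become a fixed-size map that a standard image classifier can consume.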

"FakeCatcher is a part of a bigger research team at Intel called Trusted Media, which is working on manipulated content detection -- deepfakes -- responsible generation and media provenance," she said. "In the shorter term, detection is actually the solution to deepfakes -- and we are developing many different detectors based on different authenticity clues, like gaze detection." The next step after that will be source detection, or finding the GAN model that is behind each deepfake, she said: "The golden point of what we envision is having an ensemble of all of these AI models, so we can provide an algorithmic consensus about what is fake and what is real."
Rowan Curran, AI/ML analyst at Forrester Research, told VentureBeat by email that "we are in for a long evolutionary arms race" around the ability to determine whether a piece of text, audio or video is human-generated or not.

"While we're still in the very early stages of this, Intel's deepfake detector could be a significant step forward if it is as accurate as claimed, and specifically if that accuracy does not depend on the human in the video having any specific characteristics (e.g. skin tone, lighting conditions, amount of skin that can be seen in the video)," he said.


  • by omnichad ( 1198475 ) on Wednesday November 16, 2022 @05:05PM (#63056680) Homepage

    This is a PR move, not anything really useful. It will be incredibly easy to train an AI model to defeat this, so the 96% rate will only last until this detector's criteria are worth adding to the weights of the model.

    • by gweihir ( 88907 )

      Exactly my first thought. All this does is make deepfakes better.

      • With computer-generated content, the limit is your imagination. If you can imagine a deepfake, a computer can render it. That is going to be the end state, and we should be prepared for it. This reminds me of the movie Equilibrium, a highly underrated Christian Bale movie set in a dystopian future where all human emotion is suppressed by regimented 12-hour injections of a drug. It is justified by a post-WW3 conclusion that man's inhumanity to man was emotion-based. In the movie -SPOILER ALERT- there are elit
      • by Tablizer ( 95088 )

        Generation 1:
        A: Deep Fakes
        B: Deep Fake Detector

        Generation 2:
        A: Deep Deep Fakes
        B: Deep Deep Fake Detector

        Generation 3:
        A: Deep Deep Deep Fakes
        B: Deep Deep Deep Fake Detector

        Etc...

    • by anonymouscoward52236 ( 6163996 ) on Wednesday November 16, 2022 @05:25PM (#63056734)

      Exactly. Intel just made the perfect algorithm to plug into my GAN to make deep fakes that are THAT much harder to detect. :-)

      This is how AI works... the cat-and-mouse cycle can be nearly automatic, since a detector can then be used as a trainer in a GAN.
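The cat-and-mouse point is concrete: any queryable fakeness score can be minimized directly, which is exactly the generator update in a GAN. A toy illustration with a stand-in scalar "detector" (nothing here resembles FakeCatcher's actual scoring):

```python
# Toy illustration: treat a frozen detector's score as a loss term
# and nudge a fake until the detector calls it real.

def detector(x):
    # Stub "fakeness score": pretend values far from 1.0 look fake.
    return (x - 1.0) ** 2

def evade(x, steps=100, lr=0.1, eps=1e-4):
    # Gradient descent on the detector's score via finite differences,
    # i.e. the generator update a GAN would perform against this discriminator.
    for _ in range(steps):
        grad = (detector(x + eps) - detector(x - eps)) / (2 * eps)
        x -= lr * grad
    return x

fake = 5.0
print(detector(fake))      # high fakeness score before adaptation
adapted = evade(fake)
print(detector(adapted))   # near zero: the detector is now fooled
```

In a real pipeline the "fake" would be a video and the update would flow through the generator's weights, but the feedback loop is the same shape.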

      • You'd still have to retrain the model, since this information hasn't been ingested into any of the existing models, but that is only a matter of time.

        • Retraining a model is a $1 million-or-less thing to do. Look at Stable Diffusion. This is within the realm of some multimillionaire/billionaire/government to do, to play with people's lives and deepfake stuff.

    • And in an environment that is still not mostly deepfakes, "96% accuracy" means that most of the time when it identifies a fake it will be wrong... (If 1% of videos are deepfakes, the algorithm will flag roughly 5 out of 100 as fake: about 1 actual fake plus 4 false positives, so it is wrong about 80% of the time.)
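That base-rate arithmetic can be checked directly, assuming "96% accuracy" means both 96% sensitivity and 96% specificity (the reporting does not say which):

```python
# Base-rate check: how often is a "fake" verdict actually wrong
# when only 1% of videos are deepfakes?
fakes, reals = 10, 990          # 1% of 1,000 videos are deepfakes
true_pos = 0.96 * fakes         # fakes correctly flagged: 9.6
false_pos = 0.04 * reals        # reals wrongly flagged: 39.6
precision = true_pos / (true_pos + false_pos)
print(round(precision, 2))      # 0.2: ~80% of "fake" verdicts are wrong
```

The headline accuracy only translates into trustworthy verdicts when the prevalence of fakes is high, which it is not on real-world platforms.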

      • Except it identifies fakes by failing to find them authentic, not looking for markers of fakeness. The dataset where they achieved 96% was the "Face Forensics" dataset, but it doesn't say whether it's the version where 100% are fakes or the one where 50% are fakes. Neither would give you a 4% error rate on real world data because there is a much higher percentage of authentic videos relative to the test data.

        For reference, the actual research article was published over 2 years ago. It's technicall

      • It will not matter if it is consistently wrong once they are the de facto deepfake detector. Everyone will look to Intel to tell them whether something is a deepfake or not, and everyone will take that as gospel. You won't need an algorithm, you will just need someONE to say "this is fake [or not]". Obvious deepfakes will continue to be obvious, but people will believe them because the authority on detecting deepfakes said it is okay to believe.

    • Yeah, just buy it and train your AI to evade it. Then they buy your AI and train theirs to detect yours, and the arms race continues long past the point where humans can no longer detect it. Humans end up paying big companies who train their AIs to detect each other. This money is pooled and used to power the AI training arms race for tricking humans about what is real. Want to know what's real? Only if you can afford it - only if your financial will and ability not to be tricked is greater than someon

  • ...and feed it into a "Generative Adversarial Network" model builder and have even better fakes!
  • I expect a similar level of success from Intel's version.

  • They checked 100% of Facebook and TikTok; they couldn't verify the other 4% as fake yet. Still wondering why it says the cat playing the piano is real
  • 60 hz noise (Score:4, Interesting)

    by laughingskeptic ( 1004414 ) on Wednesday November 16, 2022 @05:58PM (#63056796)
    PPG is effectively filtering for fine-grained, blood-induced changes in the skin surface. It should be really easy to add this as a post-processing step to a deepfake.
    • by adrn01 ( 103810 )
      On the other hand, after an uploaded vid is processed by whatever is at the server end, will the needed fine detail still be there? Maybe the original high-def can be sussed out, but will the "YouTube" quality be verifiable?
  • by imunfair ( 877689 ) on Wednesday November 16, 2022 @07:17PM (#63056922) Homepage

    If they're doing this for something as trivial as deepfakes, I bet the same tech is being used for much more interesting purposes by intelligence agencies. I wouldn't have thought you could measure blood flow like that, but since you can tell from a simple video, how about measuring the health and heart condition of other world leaders? How about a sort of lie detector, especially when combined with other aspects like pupil dilation, etc - seems like you could effectively simulate a lie detector test as long as you knew that certain things said on the video were true or false. Probably other more interesting uses I'm not even thinking of as well.

    • how about measuring the health and heart condition of other world leaders

      There's an order of magnitude more complexity in inferring something about someone's underlying condition than in merely measuring whether something changes. It's frankly astonishing that this kind of subtle change makes it through compression algorithms; I would have expected these kinds of minor visual deviations to be compressed out.

      Incidentally photoplethysmography is used for medical analysis. Heck you probably have a device which uses this technology on your wrist right now measuring your h

  • ... is liable to be used for feedback in learning algorithms to produce even more believable deepfakes in the future.

  • by John.Banister ( 1291556 ) * on Wednesday November 16, 2022 @10:52PM (#63057236) Homepage
    If a fake is aimed at the public, how do you communicate the detected fakeness to the public in an unfakeable manner? If I make a deepfake and you detect it, what happens when, the minute you report your detection to the public, I generate a report appearing to have an identical origin and the opposite conclusion? Each of us will call ourselves the touchstone of truth and the other a liar, in a manner that only Intel's detector knows isn't true. Unless everyone reading the internet has personal access to that detector, it will be more useful to researchers than to the general public.
    • It's quite simple, I just wait for the post to show up on Truth Social and assume the opposing view is right. It's a trend that has stood up so far :-)

      • A fun thing to do on Truth Social would be to make a fake Trump account, and post things Trump actually would have said if he'd thought of it. The conflict between taking the win and protesting the authenticity would be fun to watch, and, if the account didn't get deleted then it could post other opinions as Trump's later on.
