Comment Problematic (Score 1) 56
I see some problems with this approach, even though using structured light is intrinsically cool.
1. This only addresses deepfake video, not photos.
2. Unless they can choose watermark codes more carefully, in the worst case it adds fluorescent-style light flicker, which is indeed perceptible and annoying.
3. The authors say it is generally robust but weak against at least one type of attack (reflectance-only), and the threat landscape is likely to evolve.
4. An adversary who can derive the watermark, read it from the equipment, or control its definition could compromise all video ever taken with that equipment.
5. A fake watermark could be applied to video that never used the watermarking technology, falsely suggesting authorship, integrity (i.e., no cuts or insertions), or that a faked or cut video is genuine. These possibilities are even scarier: modeling and simulating the scene in 3D could potentially yield a realistic watermark that varies with the video content.
6. In the analyst-vs.-adversary game, the paper says, "The adversary’s goal is then to find a point on the plausible manifold that can be used to spread false or misleading information". Shrinking that manifold with specific criteria, such as the weather on the day of the recording, or by ensuring the analyst holds the only copy of the video in existence, makes the analyst's job easier. But if the video producer is the adversary, or if the original video was unwatermarked, cut, and only then watermarked before dissemination, the shoe is on the other foot and the analyst is out of luck.
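To make the single-secret failure mode in point 4 concrete, here is a minimal sketch, assuming (this is my assumption, not the paper's actual scheme) that the watermark is a pseudorandom per-frame flicker code expanded from one per-device secret seed. The function name `flicker_code` and the SHA-256 counter-mode expansion are purely illustrative:

```python
import hashlib

def flicker_code(seed: bytes, frame_count: int) -> list[int]:
    """Derive a pseudorandom +/-1 per-frame flicker code from a secret seed.

    Hypothetical sketch: the seed is expanded with SHA-256 in counter
    mode, one code bit per frame. Not the paper's actual construction.
    """
    bits = []
    counter = 0
    while len(bits) < frame_count:
        digest = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        for byte in digest:
            for i in range(8):
                bits.append(1 if (byte >> i) & 1 else -1)
        counter += 1
    return bits[:frame_count]

# The code is fully determined by the seed: anyone who extracts or
# controls the seed regenerates the exact same sequence and can embed
# it in a forged video -- every recording made with that device's seed
# is then compromised at once.
genuine = flicker_code(b"device-secret", 120)
forged = flicker_code(b"device-secret", 120)
assert genuine == forged
```

The point of the sketch is that, under this kind of design, the entire trust chain rests on keeping one seed secret, which is exactly why a key-extraction attack is so damaging.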