These are picked up by peers when they review a paper formally, but usually skipped when they peruse the paper cursorily.
"Peruse" means something different from what you think it means — traditionally it means to read thoroughly, which is the opposite of skimming cursorily.
Linux natively supports these devices, and the V4L2 APIs make it trivial to read frames. Using libavcodec from the ffmpeg project, you can encode the frames to practically any format imaginable. I encode all 5 cameras to MPEG-4 at 30 fps using minimal CPU power. All of this is possible with only about 500 lines of C code. Of course, in my own version I've added a lot of fancy features over time, and the project has gotten quite large, with support for low-bandwidth streaming, crude motion detection, time-lapse video, etc.
I won't lie to you and tell you that the documentation for ffmpeg is any good. But there are tutorials out there that explain how to use libavcodec, and everything else is a piece of cake. Don't underestimate how simple it is to get something basic working.
Windows Registry Editor Version 5.00

; Remap Caps Lock (scancode 0x3A) to Left Shift (scancode 0x2A).
; Layout of the hex blob: 8 zero bytes (version + flags), then the entry
; count including the null terminator (02 = one remap), then one DWORD per
; remap with the new scancode in the low word and the original in the high
; word (little-endian), ending with a 4-byte null terminator.
; Takes effect after logging out and back in (or rebooting).
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
"Scancode Map"=hex:00,00,00,00,00,00,00,00,02,00,00,00,2a,00,3a,00,00,00,00,00
Always draw your curves, then plot your readings.