mpv lets the user supply GLSL scripts with the --opengl-shaders=filename option, it can save single screenshots to files after those shaders have been applied (Ctrl+S), and it is scriptable (in Lua or C). So all you need to do is write a script that single-steps through the video and writes each post-processed screenshot to a pipe, which you can then use as input to "ffmpeg".
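A minimal sketch of what such an mpv Lua script might look like, dropped into mpv's scripts directory. This is untested: the output path, the "window" screenshot flag (chosen on the assumption that it captures the post-shader frame, like Ctrl+S does), and the use of the estimated-frame-number property as a "new frame rendered" signal are all assumptions, not verified behavior.

```lua
-- Hypothetical ~/.config/mpv/scripts/dump-frames.lua (untested sketch)
local count = 0

local function grab()
    -- assumed output path; could just as well be a named pipe for ffmpeg
    local path = string.format("/tmp/frame-%06d.png", count)
    -- assumption: the "window" flag captures the frame after GLSL shaders ran
    mp.commandv("screenshot-to-file", path, "window")
    count = count + 1
    mp.commandv("frame-step")  -- advance exactly one frame while paused
end

-- assumption: this property changes once per rendered frame, even when paused
mp.observe_property("estimated-frame-number", "number", function(_, n)
    if n ~= nil then grab() end
end)

mp.set_property_bool("pause", true)
```

After a run like that, something along the lines of `ffmpeg -framerate 24 -i /tmp/frame-%06d.png out.mkv` would reassemble the frames (framerate and filenames are placeholders).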
Doesn't mpv support direct output to a series of PNGs? MPlayer does it simply with -vo png.
Incidentally, I'm working on something related to the original question. I use shaders for math art demos, and I already have the option of using image files as input (shameless example). It would be trivial to accept a new file for each frame, so it could process video from a series of images. The speed would only be a couple of FPS due to the I/O bottleneck, but it won't be realtime anyway. The reason I haven't done this so far is that my focus is on the math of iterated shaders, not on processing some existing video. Still, it would be fun to do some day, and of course I'm looking at ways to do it in realtime (the GPU is fast enough, but the I/O is harder).
Lastly, you could use screen recording software instead of the clunky series-of-screenshots approach. I did this to put my first few demos on YouTube, but the quality was awful, so I much prefer dumping the PNGs and encoding them separately.