Reading the abstract, it is clear that what they did was image analysis using an algorithm (albeit on an FPGA) modeled on what happens in the retina. Other than the speed advantage, there is nothing special about this that makes it an artificial retina. If you take a picture with a cellphone and do edge detection in software, is that an artificial retina? I would argue no more or less so than what is described here.
TFS makes it sound like the image detectors are actually doing edge detection the way the retina does. The image sensors (CCD or CMOS or whatever) are doing no such thing. The sensors are providing raw images that are then analyzed with edge-detection algorithms on an FPGA.
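Just to be concrete, "edge detection in software" here means something like convolving the raw sensor frame with gradient kernels. A minimal sketch in Python/NumPy (Sobel kernels; the function name and input are stand-ins, nothing from the paper):

    import numpy as np
    from scipy.ndimage import convolve

    def sobel_edges(img):
        # Convolve with horizontal and vertical Sobel kernels, then
        # take the gradient magnitude. Plain software, no retina involved.
        kx = np.array([[-1, 0, 1],
                       [-2, 0, 2],
                       [-1, 0, 1]], dtype=float)
        gx = convolve(img.astype(float), kx)
        gy = convolve(img.astype(float), kx.T)
        return np.hypot(gx, gy)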
There are VLSI implementations of retina-like processing, i.e., center-excite/surround-inhibit, that can do edge detection/enhancement, but this ain't it.
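For anyone curious, center-excite/surround-inhibit is essentially a difference of Gaussians: a narrow excitatory center minus a wider inhibitory surround, which responds most strongly at edges. A rough sketch, with made-up sigma values not taken from any actual chip:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def center_surround(img, sigma_center=1.0, sigma_surround=3.0):
        # Narrow excitatory center minus wider inhibitory surround
        # (difference of Gaussians); output is large near edges.
        img = img.astype(float)
        return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)

The VLSI versions compute this in analog hardware at the pixel level, which is what would actually earn the "artificial retina" name.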