So, to summarize the paper
http://arxiv.org/pdf/1409.1565... :
They have developed an algorithm for quickly producing a rough interpretation of the raw data stream coming out of the detector, i.e. converting information like "value of pixel A = 12, value of pixel B = 43, ..." into useful physics data like "a particle with momentum vector P and charge Q was probably created 2 m from the collision point". The algorithm is special in that it can be implemented on an FPGA, and it is loosely inspired by the retina of our eyes. Because it can run on an FPGA, it has the potential to be much faster, and to handle much larger data fluxes, than current algorithms.
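For intuition, here is a toy sketch of the retina-like idea (my own minimal version, not the paper's implementation; the grid, sigma, and hit values are invented): each cell of a grid in track-parameter space acts like a "receptor" that accumulates a Gaussian-weighted response from every hit, and the cell with the strongest response is the most likely track.

```python
import math

def retina_response(hits, m_grid, q_grid, sigma=0.1):
    """Response of each (m, q) cell for candidate lines y = m*x + q:
    sum over hits of exp(-d^2 / (2*sigma^2)), where d is the vertical
    distance from the hit to the candidate line."""
    response = {}
    for m in m_grid:
        for q in q_grid:
            r = 0.0
            for (x, y) in hits:
                d = y - (m * x + q)
                r += math.exp(-d * d / (2 * sigma * sigma))
            response[(m, q)] = r
    return response

# Hits lying (noisily) on the line y = 0.5*x + 1.0
hits = [(0.0, 1.02), (1.0, 1.49), (2.0, 2.01), (3.0, 2.48), (4.0, 3.01)]
m_grid = [i * 0.1 for i in range(-10, 11)]  # slopes -1.0 .. 1.0
q_grid = [i * 0.1 for i in range(0, 21)]    # intercepts 0.0 .. 2.0

resp = retina_response(hits, m_grid, q_grid)
best = max(resp, key=resp.get)
print(best)  # the cell nearest the true line (m=0.5, q=1.0) wins
```

The appeal for an FPGA is that every cell's response is independent of every other cell's, so all of them can be computed in parallel in hardware instead of looping as above.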
This is needed because in a few years we will upgrade the LHC so that it produces many more collisions per second, i.e. the data rates will be much higher. We do this to collect more statistics, which may uncover rare physics processes (as happened with the Higgs boson). Not all of this data deluge can be written to disk (or even read off the detector hardware), so we use a trigger that decides which collisions are interesting enough to read out and store. The trigger works by downloading *part* of the data to a computing cluster that sits in the next room (yes, it runs on Linux), quickly reconstructing the event, and sending the "READ" signal to the rest of the detector if the event fits certain criteria indicating that (for example) a heavy particle was created. If the data rate goes up, so must the processing speed, or else the buffers on the detector will overflow.
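To make the buffering argument concrete, here is a toy simulation (the buffer depth, event contents, and "interesting event" criterion are all invented): events arrive into a finite on-detector buffer, a fast reconstruction issues keep/discard decisions, and if decisions come slower than arrivals, events are lost undecided.

```python
from collections import deque
import random

BUFFER_DEPTH = 8  # invented: how many events the detector can hold

def fast_reconstruct(event):
    # Stand-in for the quick reconstruction: "interesting" here just
    # means the summed energy exceeds an arbitrary threshold.
    return sum(event) > 100.0

def run_trigger(events, decision_period):
    """One decision is made every `decision_period` arrivals.
    Returns (events kept, events lost to buffer overflow)."""
    buf, kept, lost = deque(), 0, 0
    for t, ev in enumerate(events):
        if len(buf) < BUFFER_DEPTH:
            buf.append(ev)
        else:
            lost += 1  # buffer full: event dropped before any decision
        if t % decision_period == 0 and buf:
            if fast_reconstruct(buf.popleft()):
                kept += 1
    return kept, lost

random.seed(0)
events = [[random.uniform(0, 30) for _ in range(5)] for _ in range(1000)]

kept_fast, lost_fast = run_trigger(events, 1)  # keeps pace: no losses
kept_slow, lost_slow = run_trigger(events, 2)  # half speed: overflow
print(lost_fast, lost_slow)
```

With one decision per arrival nothing is lost; at half the decision rate the buffer fills within a few tens of events and roughly every other event afterwards is dropped, which is exactly why the processing speed must scale with the collision rate.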