You have particles entering the detector every ~40ns and hundreds of different instruments making measurements, which leads to a ton of data very quickly.
Not exactly true. The LHC runs at 40 MHz, so the bunch spacing is 25 ns, not 40 ns. Further, you don't exactly have to 'crunch' all the data as it comes in: multiple trigger levels throw most of it away, based on momentum cuts and other criteria, before it ever makes it out of the detector.
In ATLAS, for example, there are ~10^9 interactions/sec. The Level-1 trigger consists of fast, custom electronics programmed in terms of adjustable parameters that control the filtering algorithms. Input comes from summing electronics in the EM and hadron calorimeters, plus signals from the fast muon trigger chambers. The info is rather coarse at this point (transverse-momentum cuts, narrow-jet criteria, etc.), and Level 1 reduces the rate in about ~2 µs (including communication time) from 40 MHz to about 75 kHz.

Level 2 then takes a closer look, spending more time and focusing on specific regions of interest (RoIs). This takes about 10 ms, and the rate is reduced to about 1 kHz for sending to the event filter. The event filter uses the full granularity of the detector ('detector' meaning all the bits - inner detectors: pixels, strips, transition radiation tracker; the calorimeters; the muon tubes at the outside radius) and runs whatever selection algorithms are in use. This takes a few seconds, and the output is reduced to about 100 Hz and written to disk for a gazillion grad students (like myself) to analyze endlessly and get our PhDs.
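Just to make the cascade idea concrete, here's a toy sketch (my own illustration, not ATLAS code) of events with a falling pT spectrum passing through successively tighter thresholds. Real trigger menus combine many criteria, not a single pT cut, and the thresholds below are made up purely for illustration:

```python
import random

def run_trigger(events, cuts):
    """Pass each event through successive pT thresholds; return survivor counts per level."""
    surviving = [len(events)]
    for cut in cuts:
        events = [e for e in events if e["pt"] >= cut]
        surviving.append(len(events))
    return surviving

random.seed(42)
# Fake events with an exponentially falling pT spectrum (mean ~5 GeV),
# roughly the shape you see in real collisions.
events = [{"pt": random.expovariate(1 / 5.0)} for _ in range(100_000)]

# Hypothetical thresholds standing in for Level 1 / Level 2 / event filter.
counts = run_trigger(events, cuts=[10.0, 20.0, 30.0])
for level, n in enumerate(counts):
    print(f"after level {level}: {n} events")
```

Each level only sees what the previous one kept, which is why the later, slower stages (RoI processing, full-granularity reconstruction) can afford to take milliseconds or seconds per event.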
There is much more to it, of course, but you can find info about it online if you're really interested in the details. Have a look at the ATLAS Technical Design Report: http://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/TDR/TDR.html