Well, assuming the laser pulses are completely uniform (and they are very close to it), each wavefront of light is mathematically indistinguishable from the ones before and after it. So in a very metaphysical sense, stitching together a video from frames taken on successive pulses is in the end no different from collecting all the data in a single shot. In fact they have to aggregate data from millions of frames into each final frame, because without aggregation they expect only something like half a photon per frame; averaging is what brings the SNR up -- see the longer presentation
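The half-a-photon-per-frame point is worth dwelling on: single frames are almost pure noise, and it is only the averaging of many identically-prepared frames that recovers the signal. Here is a toy sketch of that effect, not their actual pipeline -- the photon rate and frame count are made-up numbers, and the photon arrivals are simulated as a simple Poisson process:

```python
import random

random.seed(0)
TRUE_RATE = 0.5      # hypothetical: ~half a photon expected per frame
N_FRAMES = 100_000   # hypothetical: aggregate many repeated frames

def poisson(rate):
    """Draw a Poisson count by counting exponential inter-arrival
    times that fit inside one unit exposure interval."""
    count, t = 0, 0.0
    while True:
        t += random.expovariate(rate)
        if t > 1.0:
            return count
        count += 1

# One simulated photon count per frame; most frames see 0 or 1 photons.
counts = [poisson(TRUE_RATE) for _ in range(N_FRAMES)]
mean = sum(counts) / N_FRAMES   # converges to TRUE_RATE

# For Poisson (shot-noise-limited) light, signal = N*rate and
# noise = sqrt(N*rate), so SNR grows like sqrt(N):
snr_single = TRUE_RATE ** 0.5                 # ~0.7 for one frame
snr_aggregated = (N_FRAMES * TRUE_RATE) ** 0.5  # ~224 for 100k frames
```

The sqrt(N) scaling is why they need millions of repetitions, not just a handful, to get a clean image out of half a photon per frame.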
here (it's quite a remarkable presentation -- I have seen about a third of it so far. It spends a lot of time early on discussing their work to see around corners; that isn't a new thing to attempt -- it has been done before, with some success, using standard camera equipment and a projected pattern -- but doing it with laser pulses is new).
The cool thing about what they have done is that you can watch the actual wavefront move through the scene like an expanding contact lens. To my knowledge, nobody has ever seen this before. Sure, it's data from billions of expanding contact lenses, but it shows you in a very visual way that the universe works the way the mathematics says it does.
No, the camera is not literally a trillion frames per second. But it shows you events that unfold over trillionth-of-a-second timescales, as they would look if it were possible to capture data that quickly in a single shot (it is not) and to solve the SNR issues over that timescale (it is not).