They do, but some surge protection devices can absorb only a limited number of surges before they have to be replaced. After enough surges, it's entirely feasible for the protection chain to fail at some point.
An anecdote from a few weeks ago about a data center I help manage. It has a backup generator, automatic switchgear and a Schneider Electric Galaxy double-conversion UPS. Yes, we don't have two, but we ain't an airline. We do have another data center on another site to take over if needed, though.
So a few weeks back our phones go wild with texts fired off by the UPS tossing SNMP traps around. One sprint later, the UPS console is showing no input power, and our in-house electricians lay rubber from one end of the campus to the other to get to the substation in time. As we wait for the UPS to hit the magic five-minute mark where it triggers the auto-shutdown sequences on the servers, the sparkies discover the sub's output is fine and the generator isn't running.
Then all hell breaks loose: ten power cycles on the UPS input, some lasting long enough to switch from battery to mains, some not. With ten minutes left on the batteries, the UPS gives up, shuts the inverter and charger down and switches the load to static bypass. The room goes silent except for the UPS alarms, and then the eleventh return cycle comes and goes in about three seconds. We hear PSU fans starting and then winding down. I drop the master breaker on the distribution board and isolate the room from the UPS. We're down until the sparkies figure it out. There go three hours of our lives.
Turns out the automatic switchgear had some arc damage on the utility-side contactor feeding the control boards, probably caused by the eight months of load-shedding (read: utility-driven power cuts to ration power) we had experienced two years ago. That was enough to drop the voltage at one sensor below the trigger threshold, which caused that contactor and the main load contactor to open. Before it could start the generator, the control board decided the utility had returned, so it closed the contactors again. And opened again, and closed again. The sound of a 3-phase 480 V 500 A contactor switching twice a second is enough to make the sparkies use words a sailor would be proud of.
We had to lock out the sensors, rig a temporary bypass around the contactors to power the room from the generator-feed side, and replace the damaged contactors before we were fully safe again. We lost two PSUs out of 90 and no data. We were lucky.
I relate this to show that no matter how good the power protection architecture is (multiple UPSes, twin feeds and so on), shit can and does happen. We were lucky to have people on site who knew what trouble sounds like and were willing to isolate the room.
So I'm willing to accept that BA lost a data center to power problems. What I'm not willing to accept is that the loss of a single data center can shut down global operations. BA must surely have multiple redundant data centers with a seamless failover mechanism, and if losing one site took everything down, that failover failed. That is a failure of IT, pure and simple.