Normally you are supposed to design these facilities with lots of margin. If they can't even take 40 C, that is not a lot of margin over what normally happens in a year. Hardware tends to be rated anywhere from 60 C to 100+ C max operating temperature, so it is just a game of temperature deltas. Also, if power is an issue on the hottest days, there are backup generators on site.
Seeing as these outfits can do whatever they want with the hardware design, the obvious answer is to run negative-pressure liquid cooling to all of the real heat-producing components, which have ratings anywhere from 85 C to 100+ C, with back-of-rack radiators so air cooling can be used for the lower-powered stuff, some of which has max ratings as low as 60 C. If you have ever liquid cooled a server CPU with many cores, or a large-die GPU or other big accelerator, they tend to stay fairly close to the temperature of the liquid in the water block on top of them. If there is a problem with, say, a high-performance CPU, you can always dynamically scale back its TDP so its temperature delta doesn't climb too far above the warm coolant on an exceedingly hot day (see the sketch below).

For the lower-powered components on air cooling, it shouldn't take much energy to chill the coolant for the back-of-rack cooler to, say, 30 C when it is 40 C outside. For example, my window AC (high efficiency for a window unit) was chilling its output down to -20 C according to my temperature gun when it was 35 C outside, granted the room was still warm. It is just that a window AC has to be designed to much different cooling parameters than a back-of-rack cooler focused only on the lower-powered server components, while a separate, more direct cooling loop handles the high-powered ones.
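To make the dynamic TDP scaling concrete, here is a rough sketch of a control loop that derates the CPU package power limit as the coolant warms up. The sensor path, the power-limit path, and the wattage/temperature thresholds are all placeholders I made up for illustration; on real hardware you would point them at whatever your loop monitoring and power-capping interface actually expose (RAPL via the Linux powercap sysfs, a BMC/Redfish call, vendor tooling, etc.).

```python
# Illustrative sketch only: paths and numbers below are hypothetical.
import time

COOLANT_SENSOR = "/sys/class/hwmon/hwmon3/temp1_input"  # hypothetical loop sensor, millidegrees C
POWER_LIMIT_KNOB = "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw"  # RAPL package limit, microwatts

FULL_TDP_W = 280      # normal package power limit
MIN_TDP_W = 180       # floor we are willing to throttle down to
COOLANT_OK_C = 45.0   # below this, run at full TDP
COOLANT_MAX_C = 55.0  # at or above this, clamp to the floor

def read_coolant_temp_c() -> float:
    with open(COOLANT_SENSOR) as f:
        return int(f.read()) / 1000.0

def set_package_limit_w(watts: float) -> None:
    with open(POWER_LIMIT_KNOB, "w") as f:
        f.write(str(int(watts * 1_000_000)))

def target_tdp(coolant_c: float) -> float:
    """Linearly derate the package power limit between the two coolant thresholds."""
    if coolant_c <= COOLANT_OK_C:
        return FULL_TDP_W
    if coolant_c >= COOLANT_MAX_C:
        return MIN_TDP_W
    frac = (coolant_c - COOLANT_OK_C) / (COOLANT_MAX_C - COOLANT_OK_C)
    return FULL_TDP_W - frac * (FULL_TDP_W - MIN_TDP_W)

if __name__ == "__main__":
    while True:
        set_package_limit_w(target_tdp(read_coolant_temp_c()))
        time.sleep(10)
```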
Talking about temperature deltas in a direct liquid cooling setup, my liquid-cooled high-end PCs see ~6 C from room temp to coolant temp and up to 2.5 C from the 'cold' side to the 'hot' side of the loop under full load, and this is with some of the higher-power enthusiast hardware. Then for liquid-cooled GPUs, for example, I see another ~12 C rise over the coolant temperature when pulling ~320 W. This all adds up to: you could feed it 40 C air at full load no problem and see a GPU only get up to ~60 C when it is rated for 85 C max. A data center would probably try to squeeze more out of the radiators placed outside and try to do more with a lower flow rate per device, so then maybe you end up with a GPU getting up to 70 C when it is 40 C outside. Still perfectly acceptable, with room to spare as the climate crisis gets worse. CPUs are generally rated up to 100 C, and when you have a big CPU package with 64 cores to spread the heat load across, keeping that CPU under 100 C with a water block on it should be easy enough.
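Doing the arithmetic on those deltas (my own measured numbers, not anything universal), the stack-up on a 40 C day looks like this:

```python
# Summing the measured deltas from my loops for a hypothetical 40 C day.
ambient_air_c      = 40.0   # intake air on the hottest day
air_to_coolant_c   = 6.0    # coolant runs ~6 C over ambient air
loop_gradient_c    = 2.5    # 'cold' side to 'hot' side of the loop at full load
gpu_over_coolant_c = 12.0   # die rise over coolant at ~320 W through a water block

gpu_die_c = ambient_air_c + air_to_coolant_c + loop_gradient_c + gpu_over_coolant_c
print(f"GPU die ~{gpu_die_c} C against an 85 C rating")  # ~60 C, plenty of headroom
```

The more pessimistic 70 C data-center case is just the same sum with a worse radiator approach and a lower per-device flow rate.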
If push comes to shove, maybe you start setting up the servers with a liquid metal thermal compound between the die and the water block. Granted, the heat load on server CPUs is much more spread out in most cases, so that would only matter in an extreme case where license restrictions, say on your Oracle server, push you toward one of those server CPUs set up more like a gaming CPU, extracting more from each core to keep the per-core licensing costs down.