They made a db change. They have effectively infinite amounts of real production flow data they can use to test changes.
In this case, the system failed because it was hard coded to a maximum of 200 and they added more. If they had tested against that production data, the limit would have been found.
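To make the failure mode concrete, here's a minimal Rust sketch of the pattern, not their actual code: a hard-coded cap that nothing enforces when entries are added, plus a caller that treats exceeding it as impossible. The constant, the names, and the unwrap() are all illustrative assumptions.

```rust
// Hypothetical sketch of a silent hard-coded cap. All names and numbers
// are made up for illustration.

const MAX_RULES: usize = 200; // the quiet, hard-coded limit

#[derive(Debug)]
struct Rule {
    id: u32,
}

fn load_rules(incoming: Vec<Rule>) -> Result<Vec<Rule>, String> {
    if incoming.len() > MAX_RULES {
        // The limit only surfaces once real data finally exceeds it.
        return Err(format!(
            "rule count {} exceeds hard cap {}",
            incoming.len(),
            MAX_RULES
        ));
    }
    Ok(incoming)
}

fn main() {
    // 200 entries: fits for years, so nobody remembers the cap exists.
    let ok = load_rules((0..200).map(|id| Rule { id }).collect());
    assert!(ok.is_ok());

    // One change pushes it to 201, and the caller treats the error as
    // impossible: unwrap() turns a recoverable condition into a crash.
    let boom = load_rules((0..201).map(|id| Rule { id }).collect()).unwrap();
    println!("{} rules loaded", boom.len());
}
```

Replaying even a single day of real production traffic through a staging copy with the new data would have tripped that branch immediately.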
This is not an obscure, rarely used feature. This is a key feature of what their entire service is built around. They are selling filtered, clean incoming traffic for very large production sites. Who thought it was a good idea to have a hard cap on how many rules could be applied in the first place?
This is very much "no one will ever need more than 640k" thinking.
The closest I've ever come to something similar was using an incremental numeric DNS naming scheme based on 3- or 4-digit names like web001-999 or service0001-9999, knowing that it wouldn't be a surprise if we ever ran out of names, especially considering we had a dozen servers at the time which could easily handle 50x the current traffic load. But running out of a numeric naming scheme isn't a surprise. Long before that point we changed to "datacenter-service-number", so web005 became dc3-web-005, giving us up to 1000 web servers per data center, in data centers that didn't have space for another 1000 servers anyway.
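For comparison, a rough sketch of the two schemes described above (the role and numbers are just illustrative): the flat scheme wears its ceiling on its face, while the per-datacenter scheme raises it past anything the building could physically hold.

```rust
fn flat_name(n: u32) -> String {
    // web001..web999: the ceiling is obvious just by reading the name.
    format!("web{:03}", n)
}

fn dc_name(dc: u32, n: u32) -> String {
    // dc3-web-005: up to 1000 "web" hosts per data center before renaming again.
    format!("dc{}-web-{:03}", dc, n)
}

fn main() {
    assert_eq!(flat_name(5), "web005");
    assert_eq!(dc_name(3, 5), "dc3-web-005");
}
```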
But this secret, hard-coded db limit is simple incompetence and a lack of real-world experience.
Again, this is the very core of their business model, yet apparently no one knew how their own systems worked. It wasn't a complex problem. It was a dumb hard-coded cap.
I have also seen very complex systems collapse under their own weight. This was not one of those times.