...why?
Your outermost gateway should be a simple NAT/port-forwarder/load balancer plus a honeypot server. Web traffic goes to the front-end servers; everything else goes to the honeypot. There should be no live DNS: computers don't need readable names, strings are often where mistakes are made, and replying to an IP doesn't require name resolution. The NAT/load balancing at this level should be per-inbound-packet, not per-session or per-time-interval. That means attacks on server resources (if they get through at all) are divided evenly across your cluster, which buys the machines time to detect and counter the problem.
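The per-packet (rather than per-session) round-robin can be sketched in a few lines — hypothetical Python, names made up, just to show why a flood gets divided evenly:

```python
from itertools import count

class PerPacketBalancer:
    """Round-robin each inbound packet across the cluster.

    Per-packet, not per-session: an attacker's flood is split
    evenly over all backends instead of pinning one of them.
    """
    def __init__(self, backends):
        self.backends = list(backends)
        self._counter = count()

    def route(self, packet):
        # Every packet advances the counter, so no single backend
        # can be singled out by holding a session open.
        return self.backends[next(self._counter) % len(self.backends)]

balancer = PerPacketBalancer(["fe1", "fe2", "fe3"])
hits = {}
for pkt in range(9000):          # simulate a 9000-packet flood
    fe = balancer.route(pkt)
    hits[fe] = hits.get(fe, 0) + 1
print(hits)                      # each front-end sees an equal share
```

A real deployment would do this in the NAT box itself, of course; the point is only that the division of load is per packet.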
Your front-end servers should be little more than static content delivery systems, proxying everything else through your outer defences. OpenBSD is ideal for this: fast, simple, bullet-proof. The middle-level defences should be a very basic firewall (maximum stability, maximum throughput) with an active NIDS running in parallel (so as not to slow down traffic).
Inside that, you have at least two load balancers, one on hot standby, farming dynamic requests out to the mainline servers. The mainline servers hold no static content, only dynamic. If the dynamic content changes slowly (e.g. the BBC), put a cache server in front of the actual content server; there's no point regenerating unchanged content.
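That cache layer amounts to a TTL check in front of the real render. A minimal sketch (the names and the 60-second TTL are made up for illustration):

```python
import time

class PageCache:
    """Sit in front of the content server; only regenerate a page
    once its cached copy has expired."""
    def __init__(self, generate, ttl=60.0):
        self.generate = generate      # the expensive dynamic render
        self.ttl = ttl                # seconds before a page counts as stale
        self._store = {}              # url -> (expires_at, page)

    def get(self, url, now=None):
        now = time.time() if now is None else now
        hit = self._store.get(url)
        if hit and hit[0] > now:
            return hit[1]             # unchanged content: serve cached copy
        page = self.generate(url)     # only hit the content server on a miss
        self._store[url] = (now + self.ttl, page)
        return page

renders = []
def render(url):
    renders.append(url)
    return f"<html>{url}</html>"

cache = PageCache(render, ttl=60.0)
cache.get("/news", now=0.0)           # first request: rendered
cache.get("/news", now=30.0)          # within TTL: no regeneration
cache.get("/news", now=90.0)          # expired: rendered once more
print(len(renders))
```

In production this would be Squid, Varnish or similar rather than hand-rolled code, but the economics are the same: one render serves many requests.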
Content servers talk through another firewall (it can also be simple) to your database servers. Unrelated data should live on distinct servers, both for security and for seek time. Since the content servers are read-only, they need only hit the database cache servers, with the actual databases behind those. If you absolutely must have FQDNs, zone-transfer the critical records and bounce all other DNS requests via the internal network to the regular DNS source. That way, your at-risk gateway doesn't contain stupid holes in the wall.
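The database cache layer is the same idea one tier down: a read-through cache, so only misses ever reach the real database. A toy sketch (the query function is a stand-in for a real database call):

```python
class ReadThroughCache:
    """Database cache server: the read-only content servers query
    this, and only cache misses touch the actual database behind it."""
    def __init__(self, db_query):
        self.db_query = db_query      # call into the real database
        self._cache = {}
        self.db_hits = 0              # how often the real DB was touched

    def read(self, key):
        if key not in self._cache:
            self._cache[key] = self.db_query(key)
            self.db_hits += 1         # miss: one trip behind the firewall
        return self._cache[key]

backend = ReadThroughCache(lambda key: f"row-for-{key}")
for _ in range(1000):
    backend.read("front-page-stories")  # a thousand reads...
print(backend.db_hits)                  # ...one trip to the database
```

Because the content side never writes, there is no invalidation problem to speak of; the cache can be as dumb as this.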
The internal corporate network would have a firewall and switch linking up to the content servers and cache servers, then a different firewall to the database servers. These would be heavier-duty firewalls, as the traffic is more complex. Logins of any kind should be permitted only over an IPSec tunnel, and all unused ports should be closed.
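The policy those inner firewalls enforce is simple to state: default deny, a short list of service ports, and login ports reachable only through the IPSec tunnel. A hypothetical check (port numbers are illustrative, not a recommendation):

```python
# Hypothetical inner-firewall policy: default deny everything,
# with logins gated on arrival via the IPSec tunnel.
OPEN_PORTS = {80, 443}     # the only services this box offers
LOGIN_PORTS = {22}         # e.g. SSH

def permit(dst_port, via_ipsec):
    if dst_port in OPEN_PORTS:
        return True
    if dst_port in LOGIN_PORTS:
        return via_ipsec   # logins only over the IPSec tunnel
    return False           # all unused ports closed
```

A real ruleset (pf, iptables, whatever the firewall runs) expresses exactly this, just per-interface and per-direction.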
For the outermost systems, logins should be over IPSec only, from a cache server. (Content servers have three Ethernet connections, none going to the firewall.)
This arrangement will take punishment. The arrangements where everything (database included) is in the DMZ with no shielding against coding errors, THOSE are the ones that fall over when people sneeze.
Ok, so my topology would cost a few thousand more. To Amazon, the BBC, any of the online banks, any of the online nuclear power stations, a few thousand is executive-lunch money, while a disaster would certainly cost far more in spending and losses. My layout gives security and performance, though the better corporate giants might be able to do better in both departments.
Doesn't matter if they can. What matters is that nobody at that level should be less secure than this. This is your minimal standard.