So, time for me to rant, but on-topic, for a second.
Everybody knows, I would hope, that best practice is to never allow an Internet-facing server to initiate outbound traffic. Partly that's because, should the server get compromised, it becomes a new attack vector, as with Code Red or SQL Slammer. And partly it's because, as in Target's case, it makes it fairly trivial to exfiltrate stolen data.
But services persist that require this very access to be enabled. My current case in point: ReCAPTCHA. Google hosts this service, intended to provide additional security, at a www.google.com URL, which means that, at minimum, I have to allow outbound access on port 443 from any server hosting a ReCAPTCHA to everything Google owns. In practice, of course, it's all but impossible to keep track of Google's address space for firewall purposes, so this means I have to allow that server out on port 443 to the entire Internet. It's either that, or set up a proxy solution that can do URL filtering and then require the CAPTCHA verification code to use it. Not exactly something your typical smaller company using ReCAPTCHA is apt to do.
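To make the dilemma concrete, here's a minimal sketch of what this looks like in an egress firewall policy. This assumes a Linux host using iptables; the rules are illustrative, not a recommended configuration.

```shell
# Illustrative egress policy for a web server that embeds ReCAPTCHA.

# The best-practice default: the server never initiates outbound traffic.
iptables -P OUTPUT DROP

# Replies to established inbound connections are still allowed
# (this is stateful return traffic, not server-initiated).
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# What server-side CAPTCHA verification forces you to add: since
# Google's address space is impractical to enumerate and keep current,
# the rule ends up being outbound 443 to the entire Internet.
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
```

That last rule is the problem: it's exactly the kind of broad, server-initiated egress hole that makes exfiltration easy, and a plain packet filter can't narrow it to one URL; only a URL-filtering proxy can.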
I've talked to competing, for-pay services, and they require the same thing. They're smaller and have only a few well-defined networks, but they still won't commit to keeping me up to date with network changes.
We really need to start pushing back on this crap. Servers accepting inbound traffic should never need to initiate outbound communications.