Of the firms I've worked for, only the large ones (>$20B/yr in revenue) that depend heavily on IT had a dedicated in-house incident response team. Smaller shops ($5-20B/yr), or those that rely less on IT, would outsource it. A small enterprise with a 1-5 person security team probably has just a written plan that's never been tested. Anything under $1B/yr in revenue probably doesn't have a security team at all unless it's an Internet-based company.
So when a corporation inspects traffic it's to detect security breaches, but when a Service Provider does the same thing it's surveillance?
Use of wildcard certs is one thing, but BlueCoat technology isn't designed for surveillance any more than network analysis tools are.
All you need to do is read Cisco's documentation to learn about their backdoors.
But the foolish design decision here was having the machine know the outcome of the ticket before it prints (or at all).
By law, individual machines generally need to maintain a guaranteed payout rate. As a result, they need to know whether the player will win or not. When the numbers are computer-generated, that can be exploited via software. If the machine is instead dispensing a roll of pre-printed tickets, the roll is already configured with a specific payback rate.
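To make the pre-configured roll concrete, here's a toy sketch (the function name and all numbers are hypothetical, purely for illustration) of how a roll could be built to hit a target payback rate before a single ticket is sold. The machine then just dispenses the next ticket; it never needs to decide outcomes at print time:

```python
import random

def build_roll(num_tickets, ticket_price, payback_rate, prize):
    """Pre-configure a roll: fix the winners up front so that
    total prizes / total wagered equals the target payback rate."""
    total_wagered = num_tickets * ticket_price
    num_winners = int(total_wagered * payback_rate / prize)
    roll = [prize] * num_winners + [0] * (num_tickets - num_winners)
    random.shuffle(roll)  # winners are scattered through the roll
    return roll

roll = build_roll(num_tickets=1000, ticket_price=1.00,
                  payback_rate=0.85, prize=5.00)
# The payout is baked in before any ticket is dispensed:
print(sum(roll) / 1000)  # 0.85
```

The point of the sketch is that the "knowledge" lives in the roll itself, not in the dispensing machine, so there's no outcome for machine software to leak or be exploited for.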
The thing I don't like about the public cloud is the real possibility for permanent vendor lock-in, IBM mainframe style.
What many people don't realize is that this is why OpenStack is so popular. As cloud providers "standardize" on the OpenStack platform and APIs (except for AWS, which doesn't because it's the 900 lb gorilla in the market), they become interchangeable by nature. The common denominator for compatibility is how your provisioning and migration engine interfaces with the cloud provider. If you're built against the OpenStack API, you can migrate or provision your workloads on any provider that supports that API - no lock-in. All you need to do is update DNS to point at your new hosting provider and you're in business.
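As a toy illustration of that interchangeability (the class and method names here are hypothetical stand-ins, not the real OpenStack SDK): once your tooling talks only to a standard API surface, the concrete provider is reduced to a swappable endpoint:

```python
from dataclasses import dataclass

@dataclass
class CloudProvider:
    """Any provider exposing the same (OpenStack-style) API surface."""
    name: str
    endpoint: str

    def provision(self, workload: str) -> str:
        # In reality this would be a call to the provider's
        # OpenStack-compatible API; here we just record placement.
        return f"{workload} running at {self.endpoint}"

def migrate(workload: str, new_provider: CloudProvider) -> str:
    # Because both providers speak the same API, "migration" is just
    # re-provisioning against a different endpoint (plus the DNS
    # update, which happens outside this layer).
    return new_provider.provision(workload)

old = CloudProvider("ProviderA", "a.example.com")
new = CloudProvider("ProviderB", "b.example.com")
print(old.provision("webapp"))   # webapp running at a.example.com
print(migrate("webapp", new))    # webapp running at b.example.com
```

The design point is that nothing in `migrate` mentions a specific vendor - which is exactly the property you lose when your tooling is written against one provider's proprietary API.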
Case in point: the NASA shuttle avionics system - a CMMI level 5 certified software development program with a track record of 2 Sev-1 defects per year during development.
Look at the "Timeline Analysis and Lessons Learned" material (see page 7/slide 6): you'll find that there were hundreds of unknown latent Sev-1 defects (potentially causing loss of payload and human life), and even ~150 defects 15 years after the program started.
The question isn't whether your team is capable of or willing to fix the issue; you must acknowledge that there is near-100% certainty that there are unknown vulnerabilities in any software you write. The question goes back to whether a bug bounty program will ever cross the inflection point of an ROI chart.
What was that in response to??
The answer lies in quantifying the project impact, not in calling it low/medium/high (which are subjective, relative terms). Also, as the business grows (or shrinks), the measure of impact should be re-weighted as well. For example, a project that generates $1M/yr in revenue is a big deal when you're making $2M/yr, but not so much when you're making $20M/yr.
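The weighting argument is easy to see in numbers (the figures below are just the illustrative ones from the example):

```python
def impact_share(project_revenue, company_revenue):
    """Project impact expressed as a fraction of total revenue,
    rather than a subjective low/medium/high label."""
    return project_revenue / company_revenue

# The same $1M/yr project has very different relative impact:
print(impact_share(1_000_000, 2_000_000))   # 0.5  -> half the business
print(impact_share(1_000_000, 20_000_000))  # 0.05 -> a minor line item
```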
In the end, limited resources need to be focused on the areas where they make the most impact, rather than on trying to solve everyone's problems. That is exactly what IT management's job is.
The other answer is that no group/team/company does this really well; it comes down to an individual manager's or IC's style and how they dismiss the trivial requests.