You don't think breaches of this kind would negatively affect the company?
Beside the point entirely. I was accused of victim blaming. Say you are a construction worker and you lend me your tools to look after until tomorrow. I don't give a damn and leave your tools lying around where anyone can steal them. Next day the tools are gone, and when you ask me for them I just shrug my shoulders. Who is the victim - me or you? Yeah, OK, I have lost credibility. You will never lend me any tools again. But I wouldn't call myself a VICTIM. If anything, I am an accomplice.
Is it any wonder that UX designers are getting a horrible reputation among some segments of the tech-savvy crowd?
The main reason for this is that people who self-describe as UX experts, as opposed to HCI experts, tend to be the ones that favour form over function and ignore the last 40 or so years of research into how to design usable interfaces. Most of them wouldn't know Fitts' Law if it dragged them to the corner of the screen and made them infinitely long.
There isn't much testing of the C bindings. They're also in the process of being deprecated in favour of machine-generated ones that are less API stable and have no ABI stability guarantees (precisely because most people don't actually use them from C, they use them from some other language with C bindings). For everything else, there's a big regression test suite that works by feeding some code (source code when testing clang, IR or assembly when testing bits of LLVM) into one of the tools and then checking that the output matches expected patterns. Bugs still slip in quite easily, unfortunately.

The second tier of tests involves compiling and running a bunch of benchmarks and similar sample code and checking that they haven't got slower (by a statistically significant margin) and that they still produce the right answers. There's a network of buildbots running on a variety of operating systems and architectures that first builds and runs the regression test suite on every commit and then (less frequently) runs the executable tests. These catch most regressions, but not all - the rest are caught by users testing betas and filing bug reports.
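For the curious, the regression tests described above are typically lit/FileCheck-style tests: a file of input IR with embedded RUN and CHECK directives that assert on the optimised output. A minimal sketch (the test itself is invented for illustration, but the directive syntax is real):

```llvm
; RUN: opt -passes=instcombine -S %s | FileCheck %s

; instcombine should fold away the add of zero.
define i32 @add_zero(i32 %x) {
; CHECK-LABEL: @add_zero(
; CHECK-NEXT:  ret i32 %x
  %r = add i32 %x, 0
  ret i32 %r
}
```

The lit test runner executes the RUN line and FileCheck verifies that the tool's output matches the CHECK lines, which is the "checking that the output matches" step above.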
There's been a lot of research work on improving this. The LLVM Miscompilation Detector, for example, had a semantic model of LLVM IR and would feed real and randomly-generated IR through the optimisation pipeline and then use a theorem prover to attempt to prove that the semantics of the before and after versions were the same. This could then be combined with the LLVM bugpoint tool to find the optimisation pass that performed an invalid transform.
It's a tradeoff. Blowing away the i-cache is a good way of killing performance, but so is having a load of function calls that do almost no work. If you had to do a virtual method call for comparing two unsigned integers and a different virtual function call for comparing two signed integers when inserting them into a set then you'd have a lot more overhead. In a typical std::set implementation, the compare operations are inlined and so the costs are very low.
The real problem with C++ is that the programmer has to make the decision about whether to use static or dynamic dispatch up front and the syntax for both is very different, so you can't trivially switch between them when it makes sense to do so.
They will never be able to get through them all.
I dunno, you can get through a lot of cases quickly when all you ever do is take them round the back and cut their heads off.
Nothing is unhackable.
In theory. In practice, humans go for the low-hanging fruit. This store was probably hacked because of ridiculous password security, SQL injection, or some other trivial technique. You don't need to build government-level security to convince a bad guy to move on to an easier target.
Also, the store is not the victim. The customers who trusted the store are.