And in many cases they probably can't do it over. We're talking about major financial and operational programs that weren't designed so much as they evolved along with the business over the course of the last half-century (since the introduction of the IBM System/360). The specs and requirements, if they exist at all, are buried in the back of a warehouse the size of Warehouse 13 and have probably been turned into nests for the mutant rats. The source code in many cases doesn't match the binaries or doesn't exist at all thanks to errors in migrating data and mistakes in editing files. The running binaries may literally be the only authoritative statement of what rules the company's accounting follows. There's a reason every single IBM mainframe since the S/360 has been capable of emulating an S/360 down to the hardware level, after all.
The difference between someone with a Computer Science degree and one who's learned practical coding is the difference between a residential-home architect and a construction-oriented master carpenter. The first can design your home and tell you why it's designed that way. The second can actually build it, tell you what goes into the construction and why, and when certain design elements are going to muck up the physical realities. In the end, you're going to need both skillsets unless you restrict yourself to just building cookie-cutter copies of existing house plans. And ideally your senior people should have both skillsets so their designs take into account the grungy details of turning them into working code.
The absolute worst situation is senior architects/designers with no practical experience; they tend to turn out beautiful, elegant masterpieces that're a nightmare to actually implement. Close behind are pure practical programmers trying to do high-level design while ignorant of the overarching principles and abstract concepts that guide you toward the best way to approach a problem.
I'd note that most software engineers aren't philosophically opposed to dressing well, or to reasonable dress codes. They're mostly opposed to stupid dress codes that make them uncomfortable while working for no good reason. Reasonable dress for a meeting with outside customers is different from that for a group of engineers banging out a solution to a code problem, and what's reasonable when you've hauled someone in on their day off to deal with an emergency isn't the same as what they'd wear during a normal workday. Management tends to lose sight of all this because their jobs, and therefore their routine situations and their dress norms, are very different from the engineers'.
The problem there is that the Windows firewall itself creates its own attack surface. You have such a large range of internal machines needing access to so many different services on the servers (for monitoring, administration, deployment, support and so on), and so many of those services are either poorly documented or multiplex so many different functions over the same port, that it's difficult to write specific rules for them. In the end your firewall rules for the servers become unmanageably complex. They don't protect you nearly as much as you think, and they actively cause problems and contribute to failures: I could count on spending at least half a day every week diagnosing firewall-rule-related problems, and every release tended to result in several rollbacks and re-deployments over the course of a couple of days because of errors or omissions in firewall rule changes, which we also had to diagnose. Plus, for all that cost, the primary threat wasn't from other compromised servers; it was from internal machines which legitimately had access to the servers (ie. the desktops belonging to DBAs, sysadmins, managers and so on) and which were compromised by malware coming in via other vectors that bypassed all the firewalls.
You said they disabled the local firewall. That's how I'd run most Windows servers on a network of any size, because the local firewall just eats up resources on the server that could be better used for the server's actual job. The firewalls should be proper hardware firewalls built into the networking infrastructure located a) between the outside world and the client networks to control access to the network in general, b) between the POS terminal segment and the server segment to control what access the terminals have to the servers and to block the servers from unnecessary access back to the POS terminals, and c) between the two client networks you mention to control what access each client has to the other's network.
The Windows Firewall itself is fairly useless in a large network because as far as incoming connections go it can't control things any better than a hardware firewall can, and for outgoing connections it's pointless because any malware that might try making unwanted outbound connections has to be assumed to have enough access to disable or bypass the Windows Firewall.
People who're worried about climate change are likely people who've already started cutting electricity usage. If you've been cutting down for several years, how likely are you to still be able to make big gains? Not very. It's a lot easier to get those when you haven't cared and can still do the easy things, like replacing burned-out incandescent bulbs with CFLs or LEDs, or replacing an old inefficient refrigerator with a new one when remodeling the kitchen. It's not so easy when you did all those things, and replaced the windows with double-pane insulated ones, and upgraded the heating/cooling system to a modern unit, several years ago, and now all that's left is very-big-ticket items like a solar power system or infeasible ones like completely rebuilding the house with modern materials and construction.
It's a good idea as long as everything's working perfectly, but the failure mode in the event of avionics problems makes it unacceptable.
Amazon is confusing users by making it so that setting the parental controls to "no in-app purchases allowed" leaves the game in a condition where in-app purchases are still allowed. If I get in a car, put the car into Reverse to back out of a parking spot, then put it in Drive to go forward, a reasonable person would expect the car to go forward. They wouldn't expect it to continue to act as if it were in Reverse for another few minutes before the Reverse setting expired and it began to act in accordance with the gearshift setting. Similarly when you set the parental controls in an app you'd expect the app to act according to the controls, not to ignore your setting for several more minutes because you've entered the password recently (as part of setting the parental controls, not to authorize purchases).
I think Amazon's problem is going to be that just refunding the purchases doesn't help the parents. If the kid maxes out the credit card on in-app purchases, the parents have to deal not just with those purchases but with the fees and interest from over-limit charges on the card and/or the additional costs associated with any declined charges (eg. if I pay a bill on-line using my card and the charge is declined, I get hit for late fees and possibly service disconnections). Having this happen when you're out of town is even worse (eg. the kid does it while the family's on vacation, and at checkout you can't pay your hotel bill and have to figure out why without being able to check your accounts on-line for unexpected charges). The only acceptable way of handling things is what Amazon should've done from the start: once parental controls are turned on in an app, every action that would cause a charge or change the parental controls requires a PIN (and ideally there'd be an option to say "don't allow charges, period, until parental controls are turned off again").
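The rule proposed above can be sketched as code. This is purely illustrative: the class and function names are made up, and none of this reflects Amazon's actual API. The key point is that there is no time-based grace window at all.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParentalControls:
    enabled: bool
    block_all_charges: bool  # hypothetical "don't allow charges period" option
    pin: str

def authorize_charge(controls: ParentalControls, entered_pin: Optional[str]) -> bool:
    """Every charge needs a freshly entered PIN while controls are on.

    Deliberately no grace window: a PIN typed earlier (e.g. while changing
    the settings themselves) never carries over to authorize a purchase.
    """
    if not controls.enabled:
        return True
    if controls.block_all_charges:
        return False
    return entered_pin is not None and entered_pin == controls.pin
```

With this shape, the "password entered recently" state that caused the original complaint simply doesn't exist: authorization is a pure function of the current settings and the PIN entered for this specific charge.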
That assumes they're paying their Excel programmers. More likely they don't have any programmers on staff to pay, they subcontract that tedious and non-core-business detail out to an outsourcing firm in India or China or somewhere.
Sounds like a typical bollix-up: the system was a drastic change from the existing one, was difficult to use, and had performance problems on top of that, but management still sent it live and turned the old system off without making sure everyone had thorough training. On top of that they had no extra resources to absorb the added workload as people learned the new program on the job, and nobody familiar with the program available to help the users. End result: the entirely predictable train wreck. But of course the management responsible for this will never be held accountable for it. Instead the blame will be put on "the software", instead of on the managers who signed off on the software as acceptable when it manifestly was not.
I'm reminded, from earlier cases of problems with Chinese-sourced products, that the prevailing attitude is very much "It's the buyer's responsibility to make sure they're getting what they ordered and paid for. If they don't check, it's their fault for being so gullible." Not exactly the attitude I'd be looking for from a manufacturing center.
This is just an extension of the kind of coerced upgrade Microsoft's attempted before. With Vista and then with Win7, when they didn't take off on their own MS tried to force the issue by making the latest versions of IE and DirectX and such available only for Vista/7, not XP. This is the same thing: "Upgrade to Win8 or take the heat for running a vulnerable OS." Thing is, it'll backfire the same way "no latest DirectX on XP" did. Win7's install base is so large that developers can't afford to write code that won't run on it, so they won't be able to use the new Win8-only safe functions. Which means applications will remain just as vulnerable on Win8 as on Win7, where they also run.
If they're going to track your cell phone, they're assuming you have your cell phone on you. So why not send the authorization code to your cell phone and let you give it to the merchant? That way it doesn't matter if the card's stolen: the merchant can't get an auth code unless you're present with your phone. Or better yet, have an app that lets you punch in the merchant's ID and transaction number and initiate the payment from your end, rather than having the merchant handle your card. That makes stealing the card pointless, because having the card alone isn't enough to make a charge.
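The reason a stolen card becomes useless in this scheme is that the authorization code is bound to a specific merchant and transaction. A minimal sketch of that binding, using an HMAC over the transaction details: everything here (names, key handling, code format) is invented for illustration, and no real payment network works exactly this way.

```python
import hashlib
import hmac
import secrets

def issue_auth_code(account_key: bytes, merchant_id: str, transaction_id: str) -> str:
    """Issued by the bank and delivered to the cardholder's phone.

    The code commits to this merchant and this transaction, so it can't be
    replayed for a different charge.
    """
    nonce = secrets.token_hex(8)
    msg = f"{merchant_id}:{transaction_id}:{nonce}".encode()
    mac = hmac.new(account_key, msg, hashlib.sha256).hexdigest()[:8]
    return f"{nonce}-{mac}"

def verify_auth_code(account_key: bytes, merchant_id: str,
                     transaction_id: str, code: str) -> bool:
    """Checked by the bank when the merchant submits the charge."""
    try:
        nonce, mac = code.split("-")
    except ValueError:
        return False
    msg = f"{merchant_id}:{transaction_id}:{nonce}".encode()
    expected = hmac.new(account_key, msg, hashlib.sha256).hexdigest()[:8]
    return hmac.compare_digest(mac, expected)
```

A code issued for one merchant fails verification for any other merchant or transaction, which is exactly the property that makes possession of the card alone worthless.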
Unlike with Lavabit, there's no single master key for TrueCrypt that can be gotten from the developers that'll decrypt any TC partition. The best the NSA could get is the ability to create their own signed binary package with their own modifications and have it appear as the official package on TC's site. The problem with that is that the TC code's open so anybody can build from source and compare with the official build and see that they aren't the same. And any compromise of the source (eg. weakening the cryptography) would be instantly revealed in the diffs. The whole NSL thing sounds dodgy, and doesn't quite fit. It seems more likely that, with Win7 and later moving to supporting only GPT disks, the TC developers found they can't add that support and decided to throw in the towel.
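The "build from source and compare" check described above boils down to hashing your own build against the official download. A minimal sketch, with the caveat that this assumes a reproducible build (same toolchain, same flags, no embedded timestamps); in practice bit-for-bit comparison can be harder, and you'd diff the source tree for changes like weakened cryptography as well.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file incrementally so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(official_binary: str, local_build: str) -> bool:
    """True only if the official download is byte-for-byte what the source produces."""
    return sha256_of(official_binary) == sha256_of(local_build)
```

Any modification to the official package, however small, changes the digest and shows up immediately in this comparison.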
In any case, the version of TC from before this change is still available and, as far as anyone can tell, still secure. I'd be leery of switching to other encryption software that's known to be less secure until someone comes up with a definitive vulnerability in 7.1a.