The 'topmost tiers' are threatened by other tiers, even when those tiers are not direct replacements. A workload might not have another viable closed-source DB or Unix player or mainframe platform to move to, but many of those workloads are moving out of those tiers entirely instead. On the flip side, you don't see a lot of workload living happily outside IBM's wheelhouse that is eager to jump in. The signs all suggest that IBM's most believable favorable outcome is slowing the erosion rather than capturing a lot of new growth. That wouldn't be such a terrible thing, except that their business leaders and shareholders think that no growth == dead and act accordingly.
at least not worry about Chinese spyware
Considering the reality of the manufacturing and supply chains of *all* the vendors, there isn't a scenario where you are justified in not worrying on that score. The nationality of the CEO doesn't really help or hurt the ability of intelligence agencies to infiltrate product development and manufacturing.
Well, perhaps some of my comment should be modded down, but I really want people to cringe if they find themselves ever typing backtick, popen, or system() when doing web development, exploit or no exploit. It's just a very bad thing to do, to be reserved as a very last resort in a very controlled situation.
any CGI program + any non-Debian Linux => vulnerable
No, only CGI programs that use system/popen/etc to call out to things that may be bash.
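To make the distinction concrete, here is a minimal sketch in Python (names are illustrative): os.system() and popen() route the command through /bin/sh, which on some distros is bash, while invoking the program directly with an argv list spawns no shell at all, so hostile environment variables never reach a bash interpreter in that path.

```python
import subprocess

def run_listing(path):
    # Direct exec with argv as a list: no /bin/sh (possibly bash) is
    # involved, unlike os.system("ls -l " + path), which shells out.
    return subprocess.run(["ls", "-l", path], capture_output=True, text=True)

result = run_listing("/tmp")
print(result.returncode)  # → 0
```

Of course, if the program you exec is itself a bash script, you are back in the vulnerable case; the point is that the shell is avoidable for most of what CGI scripts shell out for.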
(For once, the PHP programmers are ahead security-wise, due to the ubiquity of mod_php...)
Well, for one, in most languages the equivalent facility is available and usually used, since it is a requirement to scale. For another, even the silly 'fork and exec' Perl or PHP or Python isn't vulnerable if said script avoids system/popen/backticks/what have you.
I guess I was wrong to play down the severity of the bash bug, but my hope was that people would consider it a mistake to have ever put bash in a CGI context at all, for reasons beyond this exploit.
The DHCP case is truly a bad situation.
I still say that people shouldn't be using things like system() in a CGI context except in very limited, hacked-up, internal-only little web pages. It has the same problem as using bash directly: it's a massive waste of resources for an HTTP request to spawn a new process.
This blog post mentions PHP, C++, Python, et al., as other attack vectors.
And while that underscores the need for this to be fixed, it should also be an opportunity to educate people to be wary of popen or system. If you lean on those a lot in a CGI context, you have created a significant potential bottleneck to scaling, versus using language libraries to accomplish the same goal without a mandatory fork/exec.
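As an illustrative sketch of "use the language library instead": reading a compressed file with Python's own gzip module, rather than something like popen("zcat file.gz"), means no fork/exec happens per request at all. The helper name here is hypothetical.

```python
import gzip
import os
import tempfile

def read_gz(path):
    # In-process decompression via the stdlib: no external zcat/gzip
    # process is spawned, so nothing to fork per HTTP request.
    with gzip.open(path, "rt") as f:
        return f.read()

# quick demonstration with a throwaway file
with tempfile.NamedTemporaryFile(suffix=".gz", delete=False) as tmp:
    path = tmp.name
with gzip.open(path, "wt") as f:
    f.write("hello\n")
content = read_gz(path)
os.unlink(path)
print(content)  # prints "hello"
```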
I guess the point is that if a significant application is either written in Bourne shell script, or is calling something like system() to do nearly everything, and it isn't some internal low-security thing, then something bad is going on.
That's not to say the bash thing isn't bad, but it *should* also be a wake-up call to people to be mindful of invoking external utilities willy-nilly when it is not appropriate.
Ok, perhaps I understated the importance, but if you are using 'xzgrep' in a CGI context in a serious situation, I would say that is still a mistake. Forking and execing in response to an HTTP request is terrible performance-wise, before even getting to the security dubiousness of it all.
The dhclient-script stuff is pretty significant, and I think I would be in a weak position saying that it has no business execing system commands/scripts. However, it does suggest it may be worthwhile to have a non-root helper, granted specific capabilities for the key operations, to limit its ability to do damage.
To be fair, anyone using bash as the CGI handler for anything remotely serious was already doing it wrong. Bash by its nature is a facility that tries to let its presumably authenticated user do whatever they want, even if the request looks somewhat weird. Yes, this bug warrants fixing, but putting bash or similar in a path where untrusted environment variables and/or argv are present is a very dubious design decision. Besides, a fork and exec for every request is a huge no-no, and that's the only way to fly with bash.
Outside of malicious HTTP headers landing in environment variable in CGI land, I'm hard pressed to think of another reasonable vector for this bug to be a problem...
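For anyone unclear on why CGI is the obvious vector: the gateway copies each HTTP request header into an HTTP_* environment variable before handing off to the handler process (per the CGI spec, RFC 3875). A rough sketch of that mapping, with an illustrative hostile header:

```python
def cgi_environ(headers):
    # Sketch of the CGI convention: "User-Agent" becomes HTTP_USER_AGENT,
    # so attacker-controlled header text lands directly in the environment
    # of whatever the handler execs -- including any bash underneath.
    env = {}
    for name, value in headers.items():
        env["HTTP_" + name.upper().replace("-", "_")] = value
    return env

env = cgi_environ({"User-Agent": "() { :;}; echo owned"})
print(env["HTTP_USER_AGENT"])  # → () { :;}; echo owned
```

The payload string here is the well-known shellshock shape; in a vulnerable bash, importing that environment variable executes the trailing command.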
Netflix still cocks up randomly on a stream and forces retries. I suspect it's not as rosy as they like to say, and that the random death of services is more disruptive than they notice or acknowledge.
Meanwhile, even with their 'kill stuff randomly' methodology, the wrong thing still dies every so often and brings the whole thing to a screeching halt.
That seems like a very, very big oversight.
It's the nature of the beast. Live migrations without shared storage are really not commonplace. Amazon does not bother with shared storage and thus cannot live migrate. Even if they did have the ability to live migrate with no shared storage, the time to migrate such a workload would be impractical.
In short, EC2 strives for cheap and no migration is part of 'cheap'.
My personal favorite Azure feature, is that SQL Azure randomly drops database connections by design.
I have seen that mentality in a few places beyond Azure, and I find it moderately annoying. I guess the theory is to assure that *some* failure will happen to you soon, even if you don't properly test, so you don't go too long without a failure and get surprised. However, it tends to lead to stacks that occasionally spaz out for a particular user, and to accepting that as OK because the user can just retry.
You are actually required to program your application to expect failed database calls.
On the other hand, you should always design your application to expect failed database calls. There might be some regrettable performance cost or unavoidable awkwardness in some cases around a failed database call (which makes it rude to drop connections needlessly), but such an occurrence is to be expected at least occasionally no matter the stack.
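A minimal sketch of what "expect failed database calls" means in practice (all names here are illustrative, not any particular client library's API): wrap the call in a bounded retry instead of assuming the connection is immortal.

```python
import time

def with_retries(call, attempts=3, delay=0.1):
    # Retry a transiently-failing call a bounded number of times,
    # re-raising on the final failure rather than looping forever.
    for i in range(attempts):
        try:
            return call()
        except ConnectionError:
            if i == attempts - 1:
                raise
            time.sleep(delay)

# demo: a stand-in query that fails once, then succeeds
state = {"calls": 0}
def flaky_query():
    state["calls"] += 1
    if state["calls"] < 2:
        raise ConnectionError("connection dropped by the service")
    return "row"

print(with_retries(flaky_query))  # → row
```

Real clients would also distinguish retryable errors from fatal ones and add backoff/jitter, but the shape is the same.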
That is, support *functional* dependencies between processes,
Well, explicitly stated dependencies are already there. If you mean something beyond that, I get very concerned.
caching of input/output.
What I/O are you referring to? I/O generally is already cached as intelligently as the filesystem or block subsystem can manage. At the filesystem level or lower, or inside the application, are your opportunities to enhance things; there isn't much room in between. If you mean caching data that is piped around or networked around, that is an absolutely horrible idea that is really infeasible unless it's done in the application (it is impossible for the infrastructure to ascertain in a generic fashion whether a cached result is good enough, since it isn't in the middle of the transactions or privy to the flow).
automatic starting of processes when configurations change, etc.
This would be horrible. If it is a process that reads its config only at startup, you have no way of knowing when the changed on-disk copy is 'ready'. You cannot graft magic onto such a daemon. On-the-fly reconfiguration is already available, even in standard libraries, if applications want to do it. This is another problem that cannot reasonably be added in a sensible way without the cooperation of the managed applications.
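The conventional cooperative pattern, sketched minimally here, is the daemon itself deciding when to re-read its config, typically on SIGHUP; only the application knows when the on-disk file is in a consistent state, which is exactly why outside automation can't do this for it.

```python
import signal

# Illustrative daemon state; a real daemon would re-parse its config file
# inside the handler (or set a flag checked by its main loop).
config = {"reloads": 0}

def on_hup(signum, frame):
    config["reloads"] += 1  # re-read the config file here

signal.signal(signal.SIGHUP, on_hup)
signal.raise_signal(signal.SIGHUP)  # simulate an operator's `kill -HUP <pid>`
print(config["reloads"])  # → 1
```

This is the cooperation the comment is talking about: the reload hook exists because the application author put it there, not because an init system guessed.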
Right now, my computer has to reboot whenever stuff changes
Something is very, very wrong in your case. For updates it is sometimes more practical to reboot, just to be sure that stale copies of vulnerable libraries are surely out (and certain platforms require a reboot to replace open files at all), but no reconfiguration necessitates a reboot, short of reconfiguring very particular kernel/driver settings.
Being paid to program doesn't make you a professional.
Being paid to do anything by definition makes you a professional. Professional does not mean 'better'; it just carries that connotation, since frequently the reason one person can get paid for work where another cannot comes down to something that is lacking. In coding, sometimes being 'professional' versus 'amateur' really boils down to being loud enough to get taken seriously.
People have reported corrupt log files. The result is all the data is unrecoverable. The complaints have been answered 'as designed'.
When things are right, it works as intended. When things are bad, it can go far off the rails. Considering it is the system log, used to debug what is wrong when things are off the rails, a fully binary log is a dubious proposition.
There are benefits to a binary log, but they could have been achieved to varying degrees with structured text and/or external binary metadata, rather than a corruptible binary blob.