RHEL 7 is systemd though. Which means CentOS is going to switch. And that means Facebook is going to switch.
If they stick with CentOS, sure. That's not a given at this point, and it's definitely not likely to happen until systemd is more ready for primetime. It's why I took issue with your 2018 time frame: systemd is going to be the way forward, whether people like it or not. The issue is that it's -not- ready for primetime, nor ready to be the default init system for 95% of the market. Because of this, the OS upgrade road, which is always difficult to begin with, is going to be slower than usual: no sane VP of Engineering or CIO is going to risk their ass on being the first one to do a wide deployment without a compelling reason, and there isn't one right now.
I know for a fact they are using it. They are using it for a backend I'm working with. Though it isn't terribly consequential to anything, so it isn't a great piece of evidence; the system it's on would run fine on Xenix.
I'm pretty confident that the production engineers over there know what they're working with.
I included the quote. You missed the part where I said they were rolling their own hardware and the key point of who certifies it.
Fair enough, but the hardware certification is a marketing point Red Hat sells, not anything useful for day to day configs. Incompatible hardware very quickly becomes compatible when you have Facebook money to throw at the problem. I'll be happy to tell you stories of vendor chats I had when working at Apple, and how the threat of losing seven-and-eight figure contracts quickly turns unsupported use cases into supported use cases, including the backporting of kernel patches for a kernel that wasn't the vendor's preferred one.
Your claim from the start has been that systemd is unsuitable for servers. Your claim below, though, is much weaker and something I would mostly agree with. So in terms of not backing up the original claim, you are equivocating.
My explanation is why it's unsuitable for servers. Just because you can run servers with it doesn't mean it's anywhere near the best tool for the job, and it introduces more headaches than it solves. Can that change? Of course. Given that the vendors are shoving it down our throats and I'll probably have to upgrade to it within the next few years, I'm not just advocating against it for now, I'm doing what I can to make sure that it gets shored up.
You picked Facebook from my list:
a) They run CentOS. CentOS is switching.
b) They get their hardware certified by Red Hat, who is the single largest proponent of systemd.
I would say that's not the opposite. But most importantly, if you read the context, I gave Facebook as someone advocating PaaS, not someone advocating systemd. The PaaS vendors are the ones who care (and should care) about OS-level components like systemd. I don't think clients like change.org should be concerned with the infrastructure at all. That's the whole point of DevOps: it helps to further break down the accidental bleed-over between platform specifics and higher-level software, which is what the whole enterprise Java movement was attempting to do for client/server.
Hey, it'd be nice if I could do less infrastructure. But when we've tried to switch over to platforms, it hasn't gone well. Our Chef setup was handled by Opscode until it became unreliable and their suggestion was to run our own. We've tried a few vendors for platforms, and they can't handle our traffic patterns when a petition goes viral.
I'm not comfortable going into more detail about our infrastructure on a public forum (unsurprisingly, we get targeted a lot by parties getting sunlight shone on them), but you're welcome to email me at jpierce at change dot org and I can go into the troubles we've had relating to vendors and external platforms.
I think that's unfair and untrue. We just disagree about what constitutes a reliable source. I'm mainly interested in vendors because they have breadth; you are mainly interested in engineers because they see things up close. The way you are phrasing it is unnecessarily harsh.
That's a fair criticism. I do tend to be direct and blunt when communicating over the 'Net, because subtlety doesn't work on it. I tend to be wary of vendors because their goal is to get me to be a return customer, and that goal does not always line up with the needs of the engineering staff (and company as a whole).
That's not about systemd but just to defend our guys:
a) I'd love to do accurate cost assessments where IT companies use a sane rate of interest and depreciate their IT infrastructure over 10-20 years. We aren't the ones who force companies to do ROI accounting as if their depreciation / cost of borrowing / interest rate were 400%. That's not the engineers either (they are mainly on our side about that one). Blame your finance guys, not us. But ultimately, if the customer is mainly focused on the 1-year or 3-year cost, then we build a solution that keeps the 1- or 3-year cost low, often by letting it explode in the out years.
Well, that's a symptom of bean counters viewing infrastructure engineering as a cost center rather than a value provider, and undervaluing employee retention in engineering. It's also why I don't work at those sorts of shops anymore except on short-term contract at exorbitant rates, because those tend to be the most soul-crushing jobs in the industry.
b) In terms of staff churn, often the point of an integrated solution is to prompt staff churn, i.e. displacement of the people. We get involved quite often because people are unhappy with what they are getting from their in-house engineering staff. When the in-house engineering staff is buying it, they are generally picking a technology they are enthusiastic to use / learn, or they already have the right skill set. If you aren't the one buying it, you aren't the customer.
I'll point out that the companies that are kicking ass right now around this area are the ones where the engineering staff are intimately involved in these sorts of purchases and decisions. Displacing current staff and building against their core skills tends to be very much like engineers wanting to greenfield existing infrastructure -- it never works as planned, it ends up costing a ton more than expected, and you spend most of that money just getting back to where you were, whether it's in engineered products or engineers themselves.
c) Of course there is often vendor lock-in! We have long term ongoing relationships with the vendors where we work account after account after account. In a vague sense we are on the same team as the vendors. The vendor lock-in is often how we get the good price. But we (as a profession generally not individually) are happy to construct solutions with less lock-in if the customer (who remember is many times not the IT group) wants it. As an aside, in-house software creates employee lock-in which is for most companies worse.
Vendor lock-in is currently why I'm having to spend far more than I should be for certain parts of my infrastructure. Said vendor-neutral solutions also don't have to be in-house ones -- NIH syndrome can be (and often is) just as bad as vendor lock-in. No arguments there. In-house software should be documented as well as any vendor solution when done right. If it's not, that's a company culture issue. I comment and document my code like the person coming after me is a serial killer who knows where I live. I expect the same from the engineers I work with.
I agree with this, except for the claims that developers are unable to think in a multi-system context and that this isn't driven by customers. I think I deal with more customers than you do. Getting away from the knobs and switches that experienced sysadmins use, and toward generic solutions and commoditization, is exactly what customers want. You may not like that they want that, but that's the reality. Almost all customers love hardware abstraction, the more the better, and they are willing to use an extra 2-5% of boxes to achieve it.
I'm not complaining. Someone has to clean up after the mess when bean-counters run an infrastructure into the ground, and it's a lucrative side gig. They all love it until they have to engineer around it.
This ain't the Linux of the 1990s, when it was a hobbyist OS for guys like me who couldn't afford an SGI or Sun at home and wanted a Unix. Linux today is a professional server OS. Systemd came out of Red Hat. If vendors didn't matter, Debian wouldn't be following in Red Hat's wake on systemd and we wouldn't be having this conversation. Damn right vendors matter.
It won't stay a professional server OS (at least, current distributions of it) if major vendors continue to radically change how things are done using software that's not ready for the task. It's gotten the reputation for being a professional server OS because the enterprise distros have historically been VERY conservative with changes and introduced them iteratively. The idea that you're lecturing engineers with hands-on, professional experience about us treating it like a hobbyist OS is, frankly, ridiculous.
We also won't mention that the largest on-demand infrastructure provider out there isn't implementing systemd. That also happens to be the Linux distro that their platform products use as well. Not that it matters, because vendors don't matter. Vendors aren't keeping businesses afloat, especially those focused on technology.