
Comment Re:What? (Score 1) 555

RHEL 7 is systemd though. Which means Cent is going to switch. And that means Facebook is going to switch.

If they stick with CentOS, sure. That's not a given at this point, and it's definitely not likely to happen until systemd is more ready for primetime. That's why I took issue with your 2018 time frame: systemd is going to be the way forward, whether people like it or not. The issue is that it's -not- ready for primetime, or to be the default init system for 95% of the market. Because of this, the OS upgrade road, which is always difficult to begin with, is going to be slower than usual: no sane VP of Engineering or CIO is going to risk their ass on being the first to do a wide deployment without a compelling reason, and right now there isn't one.

I know for a fact they are using it; they are using it for a backend I'm working with. Though it isn't terribly consequential to anything, so it isn't a great piece of evidence -- the system it's on would run fine on Xenix.

I'm pretty confident that the production engineers over there know what they're working with.

I included the quote. You missed the part where I said they were rolling their own hardware and the key point of who certifies it.

Fair enough, but the hardware certification is a marketing point Red Hat sells, not anything useful for day-to-day configs. Incompatible hardware very quickly becomes compatible when you have Facebook money to throw at the problem. I'll be happy to tell you stories of vendor chats I had while working at Apple, and how the threat of losing seven- and eight-figure contracts quickly turns unsupported use cases into supported ones, including the backporting of kernel patches to a kernel that wasn't the vendor's preferred one.

Your claim from the start has been that systemd is unsuitable for servers, though your claim below is much weaker and something I would mostly agree with. So in terms of not backing it up, you are equivocating.

My explanation is why it's unsuitable for servers. Just because you can run servers with it doesn't mean it's anywhere near the best tool for the job, and it introduces more headaches than it solves. Can that change? Of course. Given that the vendors are shoving it down our throats and I'll probably have to upgrade to it within the next few years, I'm not just advocating against it for now, I'm doing what I can to make sure that it gets shored up.

You picked Facebook from my list:

a) They run CentOS, and CentOS is switching. b) They get their hardware certified by Red Hat, who is the single largest proponent of systemd.

I would say that's not the opposite. But most importantly, if you read the context, I gave Facebook as someone advocating PaaS, not someone advocating systemd. The PaaS vendors are the ones who care (and should care) about OS-level components like systemd. I don't think clients like change.org should be concerned with the infrastructure at all. That's the whole point of DevOps: it helps to further break down the accidental bleed-over between platform specifics and higher-level software, which is what the whole enterprise Java movement was attempting to do for client/server.

Hey, it'd be nice if I could do less infrastructure. But when we've tried to switch over to platforms, it hasn't gone well. Our Chef setup was handled by Opscode until it became unreliable and their suggestion was to run our own. We've tried a few vendors for platforms, and they can't handle our traffic patterns when a petition goes viral.

I'm not comfortable going into more detail about our infrastructure on a public forum (unsurprisingly, we get targeted a lot by parties getting sunlight shone on them), but you're welcome to email me at jpierce at change dot org and I can go into the troubles we've had relating to vendors and external platforms.

I think that's unfair and untrue. We just disagree about what constitutes a reliable source. I'm mainly interested in vendors because they have breadth; you are mainly interested in engineers because they see things up close. The way you are phrasing it is unnecessarily harsh.

That's a fair criticism. I do tend to be direct and blunt when communicating over the 'Net, because subtlety doesn't work on it. I tend to be wary of vendors because their goal is to get me to be a return customer, and that goal does not always line up with the needs of the engineering staff (and company as a whole).

That's not about systemd but just to defend our guys:

a) I'd love to do accurate cost assessments where IT companies use a sane rate of interest and depreciate their IT infrastructure over 10-20 years. We aren't the ones who force companies to do ROI accounting as if their depreciation / cost of borrowing / interest rate were 400%. That's not the engineers either (they are mainly on our side about that one). Blame your finance guys, not us. But ultimately, if the customer is mainly focused on the 1-year or 3-year cost, then we build a solution that keeps the 1- or 3-year cost low, often by letting it explode in the out years.
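(To put illustrative numbers on that, with made-up figures: a $300k infrastructure build-out that saves $100k/year looks like a $200k loss on a 1-year budget, but at a sane 5% discount rate the 10-year stream of savings is worth about $772k, a net gain of roughly $472k. Evaluate it at an absurd effective rate and only the first year counts.)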

Well, that's a symptom of bean counters viewing infrastructure engineering as a cost center rather than a value provider, and undervaluing employee retention in engineering. It's also why I don't work at those sorts of shops anymore except on short-term contract at exorbitant rates, because those tend to be the most soul-crushing jobs in the industry.

b) In terms of staff churn, often the point of an integrated solution is to prompt staff churn, i.e. displacement of the people. We get involved quite often because people are unhappy with what they are getting from their in-house engineering staff. When the in-house engineering staff is buying it, they are generally picking a technology they are enthusiastic to use or learn, or they already have the right skill set. If you aren't the one buying it, you aren't the customer.

I'll point out that the companies that are kicking ass right now around this area are the ones where the engineering staff are intimately involved in these sorts of purchases and decisions. Displacing current staff and building against their core skills tends to be very much like engineers wanting to greenfield existing infrastructure -- it never works as planned, it ends up costing a ton more than expected, and you spend most of that money just getting back to where you were, whether it's in engineered products or engineers themselves.

c) Of course there is often vendor lock-in! We have long-term ongoing relationships with the vendors, where we work account after account after account. In a vague sense we are on the same team as the vendors. The vendor lock-in is often how we get the good price. But we (as a profession generally, not individually) are happy to construct solutions with less lock-in if the customer (who, remember, is many times not the IT group) wants it. As an aside, in-house software creates employee lock-in, which for most companies is worse.

Vendor lock-in is currently why I'm having to spend far more than I should for certain parts of my infrastructure. Said vendor-neutral solutions also don't have to be in-house ones -- NIH syndrome can be (and often is) just as bad as vendor lock-in. No arguments there. In-house software should be documented as well as any vendor solution when done right; if it's not, that's a company culture issue. I comment and document my code like the person coming after me is a serial killer who knows where I live, and I expect the same from the engineers I work with.

I agree with this except for the developers being unable to think in a multi-system context and this not being something driven by customers. I think I deal with more customers than you do. Getting away from knobs and switches that experienced sysadmins use and towards generic solutions and commoditization is exactly what the customers do want. You may not like that they want that, but that's the reality. Almost all customers love hardware abstraction, the more the better. And they are willing to use an extra 2-5% of boxes to achieve it.

I'm not complaining. Someone has to clean up after the mess when bean-counters run an infrastructure into the ground, and it's a lucrative side gig. They all love it until they have to engineer around it.

This ain't the Linux of the 1990s, when it was a hobbyist OS for guys like me who couldn't afford an SGI or Sun at home and wanted a Unix. Linux today is a professional server OS. Systemd came out of Red Hat. If vendors didn't matter, Debian wouldn't be following in Red Hat's wake on systemd and we wouldn't be having this conversation. Damn right vendors matter.

It won't stay a professional server OS (at least, current distributions of it won't) if major vendors continue to radically change how things are done using software that's not ready for the task. It's gotten the reputation of being a professional server OS because the enterprise distros have historically been VERY conservative with changes and introduced them iteratively. The idea that you're lecturing engineers with hands-on professional experience about treating it like a hobbyist OS is, frankly, ridiculous.

We also won't mention that the largest on-demand infrastructure provider out there isn't implementing systemd. That also happens to be the Linux distro that their platform products use as well. Not that it matters, because vendors don't matter. Vendors aren't keeping businesses afloat, especially those focused on technology.

Comment Re:What? (Score 1) 555

Facebook has been on a CentOS variant running on hardware certified by RedHat Labs for years. RHEL is systemd. Whether they have switched yet or not I don't know, but by 2018 or earlier they will be on systemd. If it isn't in production, it isn't in production yet.

RHEL 6 (and its CentOS variants) run Upstart, not systemd. Just admit you were wrong about them using it. You also forgot to mention that it's in testing alongside other, non-systemd Linux variants.

Also, Facebook's been rolling its own hardware for quite a while now, dude.

I don't work out there. Absolutely not. But PaaS is much bigger than the Valley. The ideas and technologies developed for DevOps are being deployed much more broadly.

The point is that if you knew engineers working at those companies out here, you could have found out what they were actually running on by asking rather than making claims you can't back up.

Maybe time to use your real name if you are going to play that game.

Hi. I'm Jeff. My LinkedIn is the Homepage link next to my name. My apologies for not having it there previously.

No you haven't. You've said that you have to change stuff you do for an existing infrastructure. That's about it. Lots of hyperbole and nonsense claims about desktop.

Well, we've already established that you've been lax about doing your research before making claims.

I don't administer systems. I most certainly do design the internals of system architectures. What do you think happens when you sell a solution where you've put together random parts without thinking about how they work? I've done the specialist job where you knew every little detail of a system, and I've been through large-scale changes before as an engineer. I remember, back when I was an engineer, people like you griping about the changeover from DECnet to IP and how the sky would fall. I was intimately involved in the migration of working systems from Metacode to PostScript and AFP. The systems are far better today for progress.

From my experience, given that for many of my jobs I've been the guy hired to clean up after a "systems integrator" with a cost sheet full of buzzwords and marketing woo who came in and sold magic beans to bigwigs who didn't listen to their engineers, your line of work tends to over-engineer a "solution", under-calculate the cost of operations, and leave a company with severe vendor lock-in disease and an engineering staff stuck with a new solution that's outside the team's core expertise. That leads to staff churn, high retraining costs for those who do stay, and dissatisfaction all around.

When was the last time you actually acted as an implementor on one of your plans?

If you're one of the rare good ones out there, then please accept my most heartfelt apologies. But thus far, I don't see it. Your argument is "It must be fine because vendors use it and my consulting firm deploys it!" and that size of installation equals complexity of installation. You haven't provided a technical argument in favor of systemd on the server in the slightest.

As far as progress is concerned, systems are better than before because of it. However, just because something is new and shiny doesn't mean it's progress. There was rapid adoption of Puppet and Chef because those tools massively increased productivity and added flexibility to cluster design. Docker has taken off because the cost in system performance is outweighed by the man-hours savings in testing on/for multiple platforms, and because they successfully implemented a Linux equivalent to Solaris zones and/or BSD jails.
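(A minimal sketch of the jail-like isolation I mean, using a throwaway container rather than anyone's production setup:)

    # Hypothetical example: the shell below gets its own PID, mount, and
    # network namespaces plus a memory cap -- zone/jail-style isolation.
    docker run --rm -it --memory=512m debian /bin/bash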

I have: Unix since 1988, Linux since 1995. I've also touched a wide range of the big-box Unixes, zSeries, iSeries, and VMS, which gives some perspective on the styles of solutions for process management I've seen over the years. I'm certainly not a Linux specialist. I rely on Linux specialists.

So, if you're not a Linux specialist, under what merit are you arguing for systemd? Big shot names don't matter, especially when half or more of those you mentioned aren't running systemd in production. Some of the biggest system shitshows I've seen have been in larger shops because the architects hadn't touched production in years (hi, Citigroup!).

Yep. What do you think management does?

Takes other people's word for things rather than testing it themselves. Good engineering management runs interference for the people in the weeds so they can be as productive as possible. They tend to be more concerned about process than implementation.

They also get wowed by big vendor names, tasty lunches at expensive steakhouses, and drop buzzwords like they're going out of style.

You might have been an engineer once, dude. But you've got nothing but PHB-speak coming out of your mouth now.

I don't use PaaS? Really?

Go back and read what you quoted again, then point out where I said that you don't use platforms.

You tell me what am I using for the clients we are deploying too?

You're claiming that all the companies you've namedropped thus far are your clients?

I don't know about every client out there. I do know that the claim that you were making that systemd was for desktop and didn't have support for server is BS.

And if I had actually claimed that, it would be BS. But I didn't. Systemd is focused on the desktop and mobile. One common API, like it provides, is pretty awesome in those spaces. You might be confusing me for someone else. I don't think systemd is terrible software. It's not optimal for servers, and the lead developers on it show some serious Miguel de Icaza syndrome, so I have trouble trusting the longevity of the project.

You can run a server on it, but it's not ideal. It removes many of the knobs and switches that experienced sysadmins and engineers used to get extra performance out of their systems. It adds appreciable layers of overhead just to do the same things the parts it replaces have been doing for years. The developers themselves have shown an inability to think about it in a multi-system context. It presents a large attack surface because of the dependency chain it's seeking out and building up. Maybe it'll be ready for primetime by 2018. I know I'll be hacking on it and submitting patches, since the major vendors decided that selling new integration packages was more important than keeping their users and customers happy.
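(One small illustration of those extra layers, sketched with a hypothetical daemon called myappd: a tweak that used to be a one-line edit of /etc/init.d/myappd becomes a drop-in override plus a reload.)

    # systemd way: override the unit instead of editing the init script.
    mkdir -p /etc/systemd/system/myappd.service.d
    cat > /etc/systemd/system/myappd.service.d/override.conf <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/usr/local/bin/myappd --threads 8
    EOF
    systemctl daemon-reload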

I talk to far too many vendors who sell server solutions to believe that.

You still think vendors matter. This ain't Windows.

I talk to far too many system admins -- cloud and colo admins who see hundreds of clients -- to not know if systemd were causing problems for any significant fraction of them. I work with one of the research groups that publishes studies on this, collecting the data.

Then you're not paying attention.

So sure, you are an admin in the Valley. So what? You aren't the only place doing DevOps, though you guys do invent many of the best ideas. It's gone mainstream. And believe it or not, people on opposite coasts do talk to one another.

I realize we're not the only place that does DevOps, if for no other reason than the number of internal recruiters from companies nationwide blowing up my inbox to the tune of 10 to 20 messages each weekday. But you missed the location point entirely -- I'm not bragging about where I work. I'm saying that, thanks to social events, I've worked with or regularly interact with the SREs and DevOps engineers at many of the big shops you're namedropping, and they're not using what you say they're using.

I appreciate the discussion, though!

Comment Re:What? (Score 1) 555

You are so full of it.

Facebook most definitely does not use a single distro in production that uses systemd.

Neither does Netflix.

You don't work out here. I do. It's not that big of an industry when it comes to systems administration and DevOps out here. Don't make claims that anyone with a LinkedIn worth a damn can debunk with a few messages.

What's my side? Argue against what I've said, thanks. Where have I said anything about monolithic being a problem? I've been very clear about the gripes I have about it.

You don't administer systems. You don't design the internals of system architecture. I can't even tell from your LinkedIn (we're third-degree contacts) that you've ever touched a Linux system in your life. You've been management for over a decade. You seriously don't think the infrastructure used at the shops you namedrop is -the- solution for every use case, especially out here, do you? Especially when they don't even use what you're advocating for?

You're not an engineer. You're an integrator. I'm sure you're great at it. But stick to your expertise and don't try to lecture engineers when you're out of your depth there. Thanks.

Comment Re:What? (Score 1) 555

If you think platform setups are the most complex server installations out there, I've got some oceanfront property in Denver I'd like to sell you.

I still haven't seen anything that makes me want to use it on a server. There's a clear use case on desktop. Nothing you've posted changes that, or even bothers to make a case for it. It's all buzzword salad from you.

Comment Re:What? (Score 2) 555

Yeah, that user tried the same thing with me as well a few days back.

While linking me to Red Hat's PaaS offering.

I've got the same complaints about systemd, namely that I either have to find engineering man-hours to convert all of the existing supporting infrastructure I have to a systemd world, or find engineering man-hours to maintain an in-house distro. Even if systemd is a clear upgrade over every single component it has its tentacles in (for the sake of argument), it isn't enough to justify refactoring a working infrastructure on a DevOps team that's already understaffed as it is.
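(To make that conversion cost concrete -- a hedged sketch with a hypothetical in-house daemon: every init script has to become a unit file like the one below, and then everything that wrapped or called the old script has to be retested.)

    # /etc/systemd/system/myappd.service (hypothetical example)
    [Unit]
    Description=In-house application daemon
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/myappd --config /etc/myappd.conf
    Restart=on-failure
    User=myapp

    [Install]
    WantedBy=multi-user.target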

I'm sure Red Hat has a Solutions Architect that's happy to have me pay tens-to-hundreds of thousands just to get my infrastructure back up to where it was prior to systemd, though!

Comment Re: Agner Krarup Erlang - The telephone in 1909! (Score 1) 342

It doesn't speed things up. It serves two purposes -- optics to bring in more customers, and ensuring there's the optimal number of orders going through the system for as long as possible. If you're hungry and you see a long drive-through line, it may dissuade you from stopping at the restaurant. Two lanes merging into one hides some of that.
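(Back-of-the-envelope, with made-up numbers: if the pay window serves one car per minute, 30 queued cars take 30 minutes whether they wait in one lane of 30 or two lanes of 15 that merge. Throughput is capped by the single service point; the second lane just halves the visible queue.)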

Comment Re:Hope! (Score 1) 522

I've not had good experiences with Heroku.

Beyond that, if I want PaaS functionality, I've got Docker and/or Elastic Beanstalk for the simpler applications. PaaS is fine if you're running a simple app backend or a medium-traffic frontend. But a data warehouse isn't going there. Log analysis isn't happening there, as much as I'd like to outsource that, because the expense is ridiculous. My time-series metrics and monitoring aren't going there. Sensitive PII isn't going up there.

Those options (sans Heroku) are great if you're trying to get a proof of concept off the ground, or if you've got high enough margins that you can eat the pain when the cost of outsourcing so much of your infrastructure catches up with you. Those "good problems to have" aren't as good to have as people think.

Comment Re:Hope! (Score 1) 522

Tell you what. You go do that with RHEL/CentOS 7 or with the expected package set for Jessie, and tell me what you come back with.

In reality, if you think that process is in any way worthwhile for large server installations, then you work for a small shop handling bullshit traffic, or you're riding your coworkers' coattails while you screw around with hobbyist installations -- and that's if you work as a server/infrastructure engineer at all.

Let's say that even works right now and gives me a working box. I lead an infrastructure team, boss. My ass gets fired or my stock becomes worthless if I'm not working on a 5-10 year outlook plan. That might mean I'll have to go with a systemd-based distro, which means internal tooling and software chains need to be retested, if not outright rewritten, along with much of the automation in place, plus additional monitoring on the new attack surface that systemd opens up.

The OS is supposed to be the base of the stack: dumb as hell, a stable foundation for the software that does the actual work on top of it, and providing as few attack vectors as possible. The major distributions seem to be tossing that role away, along with embedded systems, in favor of trying to beat iOS and Android on mobile and Windows and OS X on the desktop.

Noble goals, but that ain't what butters the bread, and it's not what's kept Linux kicking for this long.

Comment Re:What are they going to change to? (Score 1) 522

As someone who does spin up a metric fuckload of instances in the cloud (or more specifically, has his monitoring system trigger a set of scripts to do it based on site and API traffic), I can guarantee you that you are full of shit and haven't actually had to do those things as part of your career thus far.
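(Roughly the shape of that trigger -- a hedged sketch with made-up endpoint and script names, not our actual tooling:)

    #!/bin/sh
    # Hypothetical: poll requests/sec from the monitoring system and
    # spin up extra instances when traffic spikes past a threshold.
    RPS=$(curl -s http://monitor.example.internal/metrics/rps)
    if [ "$RPS" -gt 5000 ]; then
        /usr/local/bin/spin-up-instances 10
    fi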

I -love- new technology that makes my life easier. I'm a big fan of the Vagrant -> Docker -> Deploy workflow for apps where that flexibility outweighs the overhead costs. There's no way I could manage my cluster in a sane manner without the central config management apps.

Systemd ain't the way. At best, it's a large attack surface and single point of failure. At worst, it's an anti-pattern.

Comment Re:Hope! (Score 1) 522

I'm all for having systemd available as a choice, or even having a "desktop" spin that defaults to it. While I can't speak for everyone, I don't mind the concept of systemd, because there are use cases where the market has decided that they value speed over stability -- see mobile apps, desktops, etc.

I don't want it as the init system for my servers, though. Even if I thought it was cool as the init system, I sure as hell don't want it handling login, messaging, logging, acting as a superserver, etc. My metal boxes reboot only when a security patch requires it. The virtual instances boot the first time and never again. My log aggregator wants plaintext logs. I already have supervisord or monit keeping an eye on my daemon processes. Systemd could be $DEITY's gift to UNIX, and it still wouldn't go on an existing cluster of mine, because any gains from it are offset by the engineer man-hours I'd have to pay out to thoroughly test all of the software I run against it, convert all of our in-house tooling and software to it, etc.
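(What that existing setup looks like, roughly -- a sketch reusing the hypothetical daemon name from above: supervisord already handles restart-on-failure, and the log stays a plain file my aggregator can tail.)

    ; /etc/supervisor/conf.d/myappd.conf (hypothetical)
    [program:myappd]
    command=/usr/local/bin/myappd --config /etc/myappd.conf
    autorestart=true
    stdout_logfile=/var/log/myappd.log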

Comment Re:They're hiring you... (Score 2) 224

Then, as the holder of the patent, you have the option not to license the patent to them if the monetary offer for a license is insufficient. If that blows up the hiring process, consider yourself lucky that you found out what sort of assholes you'd be working for prior to signing the paperwork and starting the gig. If you've got patents under your belt, it's not like you'll be hurting for work, since a patent pretty much acts as a credential signifying that you'll do good work.
