Comment Re:This is great news! (Score 1, Interesting) 485

Up until the Goldwater Republicans (the fiscal conservatives) wrested control of the party from the law-and-order business Republicans in the late 70s, the Republicans were actually very, very responsible -- at least, they implemented policy with responsibility in mind, rather than governing with the intent to make government not work. They believed in spending on infrastructure, and in keeping the budget something resembling balanced, not just as small as possible.

Even 1994's GOP is to the left of where it is today.

As far as personality cults go, though, Ted Cruz scares the crap out of me. He strikes me as the sort who'd make Joe McCarthy look like a reasonable man.

Comment Re:This is great news! (Score 5, Insightful) 485

MightyYar's right, and this is coming from a bleeding heart California liberal who is not happy that the GOP is going to get rewarded for its antics with increased power in DC, and who is also really not happy that Silicon Valley (also known as where I work and live) is starting to tilt to the right.

The difference between the two parties right now comes down almost entirely to wedge issues. They have the same monetary policy and the same foreign policy, neither party is realistic about tax policy on the middle class (it needs to be higher, along with taxes on high earners), neither party wants to bust the cap on Social Security and Medicare (while I appreciate the extra bucks at the end of the year, I think those programs need them more than I do), etc.

For all the hype about the "core differences" in the 2012 election, Mr. Romney and Mr. Obama were so close on the political compass that it was a John Jackson vs. Jack Johnson situation.

I happen to feel that the social issues are important enough to make the Democratic party the clear choice, but to get back to MightyYar's point -- Silicon Valley is very business-driven, and CA law would preserve nearly all of the protections the Republicans could take away at the federal level (barring the PPACA) as far as social politics are concerned. From a Silicon Valley business perspective, both parties are roughly the same when you consider the direct effect they'd have, and even more so when you realize that FWD.US and the other H1-B visa supporters have figured out that the only way they'll get the increased H1-Bs they want is to get some sort of immigration reform done, even if that means supporting an odious Republican policy rather than a Democratic solution that isn't showing any signs of life.

Not to mention that most Republicans in the Bay Area would be considered Democrats down in Bakersfield or Orange County.

Comment Re:Process management in a consistent way (Score 1) 928

You need to use systemd.automount units and/or manage the mount through an /etc/fstab entry, so that the remote filesystem itself can be declared as a dependency for bringing up the service, rather than just listing autofs as a dependency -- a running autofs may or may not have actually mounted the filesystem in question.

Only despair lies down the path of trying to make those two coexist.
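
For what it's worth, here's a minimal sketch of the unit-based approach, assuming a hypothetical NFS export and service (the hostname, mount point, and service name are placeholders, not anything from a real setup):

# /etc/fstab entry -- systemd generates the mount/automount units from this:
#   filer.example.com:/export/data  /srv/data  nfs  noauto,x-systemd.automount,_netdev  0 0

# /etc/systemd/system/myapp.service
[Unit]
Description=Example service that needs the NFS share mounted first
RequiresMountsFor=/srv/data
After=remote-fs.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target

The point being that RequiresMountsFor orders the service after the actual mount (or its automount trigger) rather than after a bare "autofs is running" check.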

Comment Re:Process management in a consistent way (Score 1) 928

All it takes is one buggy release, or a junior admin making the sort of bonehead error that we all made while earning our stripes.

$DEITY forbid that you want to remount your NFS shares because you're cutting over to a snapmirror of your Netapp to perform maintenance (or have the vendor perform it). Pardon my language, but systemd absolutely shits the bed when that happens.

These are edge cases, to be fair, but they're edge cases that affect many common enterprise-grade setups, and I'd hope it's common sense to expect that an application-level edge case won't crater my systems. Like another poster said earlier, I really wish they'd split systemctl off from the rest of it. systemctl is a sweet piece of software that handles services much, much better than what came before. It's the rest of the bundle that turns systemd from a plus into a liability.

Comment Re:Process management in a consistent way (Score 2) 928

My counter to that:

If I'm at a shop that has 5 9s SLAs, I don't want a superserver crash to take out my whole OS. If supervisord or inetd craters and does so in a fashion that doesn't allow it to come back up without intervention, I want the option of running in a degraded state by bringing up the supervised processes via init.d until I can either troubleshoot the superserver, or chalk it up to gremlins and bring up a replacement instance or spare box.

If there's one fatal flaw in systemd's design, it's that system designs based around systemd assume that systemd will never crash. Even the best software out there crashes, or locks up to the point of being a de facto crash, so as a systems architect I need failsafes that keep me running in a degraded state so I don't blow my SLAs -- and systemd's answer is "Don't worry, I won't fail!"

Comment Re:What? (Score 1) 555

RHEL 7 is systemd though. Which means Cent is going to switch. And that means Facebook is going to switch.

If they stick with CentOS, sure. That's not a given at this point, and it's definitely not likely to happen until systemd is more ready for primetime. That's why I took issue with your 2018 time frame: systemd is going to be the way forward whether people like it or not, but it's -not- ready for primetime, or to be the default init system for 95% of the market. Because of that, the OS upgrade road, which is always difficult to begin with, is going to be slower than usual, because no sane VP of Engineering or CIO is going to risk their ass on being the first to do a wide deployment without a compelling reason to do so, and there isn't one.

I know for a fact they are using it. They are using it for a backend I'm working with. Though it isn't terribly consequential to anything, so it isn't a great piece of evidence -- the system it's on would run fine on Xenix.

I'm pretty confident that the production engineers over there know what they're working with.

I included the quote. You missed the part where I said they were rolling their own hardware and the key point of who certifies it.

Fair enough, but hardware certification is a marketing point Red Hat sells, not anything useful for day-to-day configs. Incompatible hardware very quickly becomes compatible when you have Facebook money to throw at the problem. I'd be happy to tell you stories about vendor chats from my time at Apple, and how the threat of losing seven- and eight-figure contracts quickly turns unsupported use cases into supported ones, including backporting kernel patches to a kernel that wasn't the vendor's preferred one.

Your claim from the start has been that systemd is unsuitable for server. Though your claim below is much weaker and something I would mostly agree with. So in terms of not backing it up you are disambiguating.

My explanation is why it's unsuitable for servers. Just because you can run servers with it doesn't mean it's anywhere near the best tool for the job, and it introduces more headaches than it solves. Can that change? Of course. Given that the vendors are shoving it down our throats and I'll probably have to upgrade to it within the next few years, I'm not just advocating against it for now, I'm doing what I can to make sure that it gets shored up.

You picked Facebook from my list:

a) They run CentOS. CentOS is switching.
b) They get their hardware certified by RedHat, who is the single largest proponent of systemd.

I would say that's not opposite. But most importantly if you read the context I gave Facebook as someone advocating PaaS not someone advocating systemd. The PaaS vendors are the ones who care (and should care) about OS level components like systemd. I don't think clients like change.org should be concerned with the infrastructure at all. That's the whole point of DevOps it helps to further break the accidental bleed over between platform specifics and higher level software, which is what the whole enterprise Java movement was attempting to do for client / server.

Hey, it'd be nice if I could do less infrastructure. But when we've tried to switch over to platforms, it hasn't gone well. Our Chef setup was handled by Opscode until it became unreliable and their suggestion was to run our own. We've tried a few vendors for platforms, and they can't handle our traffic patterns when a petition goes viral.

I'm not comfortable going into more detail about our infrastructure on a public forum (unsurprisingly, we get targeted a lot by parties getting sunlight shone on them), but you're welcome to email me at jpierce at change dot org and I can go into the troubles we've had relating to vendors and external platforms.

I think that's unfair and untrue. We just disagree about what constitutes a reliable source. I'm mainly interested in vendors because they have breadth you are mainly interested in engineers because they see things up close. The way you are phrasing it is unnecessarily harsh.

That's a fair criticism. I do tend to be direct and blunt when communicating over the 'Net, because subtlety doesn't work on it. I tend to be wary of vendors because their goal is to get me to be a return customer, and that goal does not always line up with the needs of the engineering staff (and company as a whole).

That's not about systemd but just to defend our guys:

a) I'd love to do accurate cost assessments where IT companies use a sane rate of interest and depreciate their IT infrastructure over 10-20 years. We aren't the ones who force companies to do ROI accounting as if their depreciation / cost of borrowing / interest rate were 400%. That's not the engineers either (they are mainly on our side about that one). Blame your finance guys not us. But ultimately if the customer is mainly focused on the 1 year or 3 year cost, then we build a solution to keep the 1 or 3 year cost low and often by letting it explode in the out years.

Well, that's a symptom of bean counters viewing infrastructure engineering as a cost center rather than a value provider, and undervaluing employee retention in engineering. It's also why I don't work at those sorts of shops anymore except on short-term contract at exorbitant rates, because those tend to be the most soul-crushing jobs in the industry.

b) In terms of staff churn often the point of an integrated solution is to prompt staff churn i.e. displacement of the people. We get involved quite often because people are unhappy with what they are getting from their in house engineering staff. When the in house engineering staff is buying it they are generally picking a technology they are enthusiastic to use / learn or they already have the right skill set. If you aren't the ones buying it you aren't the customer.

I'll point out that the companies that are kicking ass right now around this area are the ones where the engineering staff are intimately involved in these sorts of purchases and decisions. Displacing current staff and building against their core skills tends to be very much like engineers wanting to greenfield existing infrastructure -- it never works as planned, it ends up costing a ton more than expected, and you spend most of that money just getting back to where you were, whether it's in engineered products or engineers themselves.

c) Of course there is often vendor lock-in! We have long term ongoing relationships with the vendors where we work account after account after account. In a vague sense we are on the same team as the vendors. The vendor lock-in is often how we get the good price. But we (as a profession generally not individually) are happy to construct solutions with less lock-in if the customer (who remember is many times not the IT group) wants it. As an aside, in-house software creates employee lock-in which is for most companies worse.

Vendor lock-in is currently why I'm having to spend far more than I should on certain parts of my infrastructure. That said, vendor-neutral solutions don't have to be in-house ones -- NIH syndrome can be (and often is) just as bad as vendor lock-in. No arguments there. In-house software should be documented as well as any vendor solution when done right; if it's not, that's a company culture issue. I comment and document my code like the person coming after me is a serial killer who knows where I live, and I expect the same from the engineers I work with.

I agree with this except for the developers being unable to think in a multi-system context and this not being something driven by customers. I think I deal with more customers than you do. Getting away from knobs and switches that experienced sysadmins use and towards generic solutions and commoditization is exactly what the customers do want. You may not like that they want that, but that's the reality. Almost all customers love hardware abstraction, the more the better. And they are willing to use an extra 2-5% of boxes to achieve it.

I'm not complaining. Someone has to clean up after the mess when bean-counters run an infrastructure into the ground, and it's a lucrative side gig. They all love it until they have to engineer around it.

This ain't the Linux of the 1990s when it was hobbyist OS for guys like me who couldn't afford an SGI or Sun at home and wanted a Unix. Linux today is a professional server OS. Systemd came out of RedHat. If vendors didn't matter Debian wouldn't be following in Redhat's wake on systemd and we wouldn't be having this conversation. Damn right vendors matter.

It won't stay a professional server OS (at least, current distributions of it won't) if major vendors keep radically changing how things are done using software that's not ready for the task. It earned its reputation as a professional server OS because the enterprise distros have historically been VERY conservative with changes and introduced them iteratively. The idea that you're lecturing engineers with hands-on, professional experience about treating it like a hobbyist OS is, frankly, ridiculous.

We also won't mention that the largest on-demand infrastructure provider out there isn't implementing systemd, and that its distro also happens to be the one its platform products run on. Not that it matters, because vendors don't matter. Vendors aren't keeping businesses afloat, especially the ones focused on technology.

Comment Re:What? (Score 1) 555

Facebook had been on a CentOS variant running on hardware certified by RedHat Labs for years. RHEL is systemd. Whether they have switched yet or not I don't know, but by 2018 or earlier they will be on systemd. If it isn't in production, it isn't in production yet.

RHEL 6 (and its CentOS variants) runs Upstart, not systemd. Just admit you were wrong about them using it. You also forgot to mention that it's in testing alongside other, non-systemd Linux variants.

Also, Facebook's been rolling its own hardware for quite a while now, dude.

I don't work out there. Absolutely not. But PaaS is much bigger than the Valley. The ideas and technologies developed for DevOps are being deployed much more broadly

The point is that if you knew engineers working at those companies out here, you could have found out what they were actually running on by asking rather than making claims you can't back up.

Maybe time to use your real name if you are going to play that game.

Hi. I'm Jeff. My LinkedIn is the Homepage link next to my name. My apologies for not having it there previously.

No you haven't. You've said that you have to change stuff you do for an existing infrastructure. That's about it. Lots of hyperbole and nonsense claims about desktop.

Well, we've already established that you've been lax about doing your research before making claims.

I don't administer systems. I most certainly do design the internals of system architectures. What do you think happens when you sell a solution that you put together random parts without thinking about how they work? I've done the specialist job when you knew every little detail of a system, and I went through large scale changes before as an engineer. I remember when I was an engineer people like you griping about the changeover from DECnet to IP and how the sky would fall. I was intimately involved in the migration of working systems from Metacode to Postscript and AFP. The systems are far better today for progress.

From my experience, your line of work tends to over-engineer a "solution", underestimate the cost of operations, and leave a company with a severe case of vendor lock-in and an engineering staff stuck with a new stack outside the team's core expertise -- which leads to staff churn, high retraining costs for those who do stay, and dissatisfaction all around. I say that as someone who, on many of my jobs, has been the guy hired to clean up after a "systems integrator" with a cost sheet full of buzzwords and marketing woo came in and sold magic beans to bigwigs who didn't listen to their engineers.

When was the last time you actually acted as an implementor on one of your plans?

If you're one of the rare good ones out there, then please accept my most heartfelt apologies. But thus far, I don't see it. Your argument is "It must be fine because vendors use it and my consulting firm deploys it!" and that size of installation equals complexity of installation. You haven't provided a technical argument in favor of systemd on the server in the slightest.

As far as progress is concerned, sure, systems are better than they used to be because of it. But just because something is new and shiny doesn't mean it's progress. Puppet and Chef saw rapid adoption because those tools massively increased productivity and added flexibility to cluster design. Docker has taken off because the cost in system performance is outweighed by the man-hour savings in testing on and for multiple platforms, and because it successfully implemented a Linux equivalent to Solaris zones and BSD jails.

I have, Unix since 1988, Linux since 1995. I've also touched a wide range of the big box Unixes, zSeries, iSeries and VMS. Which gives some perspective on having seen styles of solutions for process management over the years. I'm certainly not a Linux specialist. I rely on Linux specialists.

So, if you're not a Linux specialist, on what grounds are you arguing for systemd? Big-shot names don't matter, especially when half or more of the ones you mentioned aren't running systemd in production. Some of the biggest systems shitshows I've seen have been at larger shops, precisely because the architects hadn't touched production in years (hi, Citigroup!).

Yep. What do you think management does?

Takes other people's word for things rather than testing them themselves. Good engineering management runs interference for the people in the weeds so they can be as productive as possible. They tend to be more concerned with process than implementation.

They also get wowed by big vendor names, tasty lunches at expensive steakhouses, and drop buzzwords like they're going out of style.

You might have been an engineer once, dude. But you've got nothing but PHB-speak coming out of your mouth now.

I don't use PaaS? Really?

Go back and read what you quoted again, then point out where I said that you don't use platforms.

You tell me: what am I using for the clients we are deploying to?

You're claiming that all the companies you've namedropped thus far are your clients?

I don't know about every client out there. I do know that the claim that you were making that systemd was for desktop and didn't have support for server is BS.

And if I had actually claimed that, it would be BS. But I didn't -- you might be confusing me with someone else. Systemd is focused on the desktop and mobile, and one common API, like it provides, is pretty awesome in those spaces. I don't think systemd is terrible software. It's just not optimal for servers, and the lead developers on it show some serious Miguel de Icaza syndrome, so I have trouble trusting the longevity of the project.

You can run a server on it, but it's not ideal. It removes many of the knobs and switches that experienced sysadmins and engineers used to get extra performance out of their systems. It adds appreciable layers of overhead just to do the same things the parts it replaces have been doing for years. The developers themselves have shown an inability to think about it in a multi-system context. It presents a large attack surface because of the dependency chain it's seeking out and building up. Maybe it'll be ready for primetime by 2018. I know I'll be hacking on it and submitting patches, since the major vendors decided that selling new integration packages was more important than keeping their users and customers happy.

I talk to far too many vendors who sell server solutions to believe that.

You still think vendors matter. This ain't Windows.

I talk to far too many people -- sysadmins, cloud and colo admins who see hundreds of clients -- not to know if systemd were causing problems for any significant fraction of them. I work with one of the research groups that publishes studies on this, collecting the data.

Then you're not paying attention.

So sure, you are an admin in the Valley. So what? You aren't the only place doing DevOps, though you guys do invent many of the best ideas. It's gone mainstream. And believe it or not, people on the opposite coast do talk to one another.

I realize we're not the only place that does DevOps, if for no other reason than the number of internal recruiters from companies nationwide blowing up my inbox to the tune of 10 to 20 messages each weekday. But you missed the location point entirely -- I'm not bragging about where I work, I'm saying that I've worked with, or regularly interact with thanks to social events, the SREs and DevOps engineers at many of the big shops you're namedropping, and they're not using what you say they're using.

I appreciate the discussion, though!

Comment Re:What? (Score 1) 555

You are so full of it.

Facebook most definitely does not use a single distro in production that uses systemd.

Neither does Netflix.

You don't work out here. I do. It's not that big of an industry when it comes to systems administration and DevOps out here. Don't make claims that anyone with a LinkedIn worth a damn can debunk with a few messages.

What's my side? Argue against what I've said, thanks. Where have I said anything about monolithic being a problem? I've been very clear about the gripes I have about it.

You don't administer systems. You don't design the internals of system architectures. I can't even tell from your LinkedIn (we're third-degree contacts) that you've touched a Linux system in your life. You've been in management for over a decade. You seriously don't think the infrastructure at some of the shops you namedrop is -the- solution for every use case, especially out here, do you? Especially when they don't even use what you're advocating for?

You're not an engineer. You're an integrator. I'm sure you're great at it. But stick to your expertise and don't try to lecture engineers when you're out of your depth there. Thanks.

Comment Re:What? (Score 1) 555

If you think platform setups are the most complex server installations out there, I've got some oceanfront property in Denver I'd like to sell you.

I still haven't seen anything that makes me want to use it on a server. There's a clear use case on desktop. Nothing you've posted changes that, or even bothers to make a case for it. It's all buzzword salad from you.

Comment Re:What? (Score 2) 555

Yeah, that user tried the same thing with me as well a few days back.

While linking me to Red Hat's PaaS offering.

I've got the same complaints about systemd, namely that I either have to find engineering man-hours to convert all of the existing supporting infrastructure I have to a systemd world, or find engineering man-hours to maintain an in-house distro. Even if systemd is a clear upgrade over every single component it has its tentacles in (for the sake of argument), it isn't enough to justify refactoring a working infrastructure on a DevOps team that's already understaffed as it is.

I'm sure Red Hat has a Solutions Architect that's happy to have me pay tens-to-hundreds of thousands just to get my infrastructure back up to where it was prior to systemd, though!
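
To give a concrete sense of the conversion work I'm talking about, here's a rough sketch of what a single legacy init script turns into (the daemon name, paths, and flags below are made-up placeholders): every hand-rolled /etc/init.d script with its own start/stop/reload logic has to be re-expressed as something like this, then re-tested.

# /etc/systemd/system/mydaemon.service -- hypothetical replacement for an /etc/init.d/mydaemon script
[Unit]
Description=Legacy in-house daemon, converted from a SysV init script
Wants=network-online.target
After=network-online.target

[Service]
Type=forking
PIDFile=/run/mydaemon.pid
ExecStart=/usr/local/sbin/mydaemon --config /etc/mydaemon.conf
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure

[Install]
WantedBy=multi-user.target

Now multiply that by every service we run, plus the config management recipes, monitoring checks, and runbooks that still call the old init scripts. That's where the engineering man-hours go.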

Comment Re: Agner Krarup Erlang - The telephone in 1909! (Score 1) 342

It doesn't speed things up. It serves two purposes -- optics to bring in more customers, and keeping an optimal number of orders flowing through the system for as long as possible. If you're hungry and you see a long drive-through line, it may dissuade you from stopping at the restaurant. Two lanes merging into one hides some of that.

Comment Re:Hope! (Score 1) 522

I've not had good experiences with Heroku.

Beyond that, if I want PaaS functionality, I've got Docker and/or Elastic Beanstalk for the simpler applications. PaaS is fine if you're running a simple app backend or a medium-traffic frontend. But a data warehouse isn't going there. Log analysis isn't happening there, as much as I'd like to outsource that, because the expense is ridiculous. My time-series metrics and monitoring aren't going there. Sensitive PII isn't going up there.

Those options (sans Heroku) are great if you're trying to get a proof-of-concept off the ground, or if your margins are high enough that you can eat the pain when the cost of outsourcing so much of your infrastructure catches up with you. Those "good problems to have" aren't as good as people think.
