
Comment Re:What? (Score 1) 555

RHEL 6 (and its CentOS variants) are upstart, not systemd.

RHEL 7 is systemd, though. Which means CentOS is going to switch. And that means Facebook is going to switch.

Just admit you were wrong about them using it.

I know for a fact they are using it. They are using it for a backend I'm working with. Though it isn't terribly consequential to anything, so it isn't a great piece of evidence: the system it's on would run fine on Xenix.

"on hardware certified by RedHat Labs "
Also, Facebook's been rolling its own hardware for quite a while now, dude.

I included the quote. You missed the part where I said they were rolling their own hardware and the key point of who certifies it.

The point is that if you knew engineers working at those companies out here, you could have found out what they were actually running on by asking rather than making claims you can't back up.

Your claim from the start has been that systemd is unsuitable for servers. Your claim below, though, is much weaker and something I would mostly agree with. So in terms of not backing it up, you are the one shifting your claim. You picked Facebook from my list:

a) They run CentOS. CentOS is switching.
b) They get their hardware certified by RedHat, which is the single largest proponent of systemd.

I would say that's not opposite. But most importantly, if you read the context, I gave Facebook as someone advocating PaaS, not someone advocating systemd. The PaaS vendors are the ones who care (and should care) about OS-level components like systemd. I don't think clients like change.org should be concerned with the infrastructure at all. That's the whole point of DevOps: it helps to further break down the accidental bleed-over between platform specifics and higher-level software, which is what the whole enterprise Java movement was attempting to do for client / server.

Hi. I'm Jeff. My LinkedIn is the Homepage link next to my name. My apologies for not having it there previously.

Fair enough. Change.org DevOps architect is legit experience.

Well, we've already established that you've been lax about doing your research before making claims.

I think that's unfair and untrue. We just disagree about what constitutes a reliable source. I'm mainly interested in vendors because they have breadth; you are mainly interested in engineers because they see things up close. The way you are phrasing it is unnecessarily harsh.

From my experience (for many of my jobs I've been the guy hired to clean up after a "systems integrator" with a cost sheet full of buzzwords and marketing woo came in and sold some magic beans to bigwigs who didn't listen to their engineers), your line of work tends to over-engineer a "solution", under-calculate the cost of operations, and leave a company with severe vendor lock-in disease and an engineering staff stuck with a new solution that's outside of the team's core expertise. That leads to staff churn, high retraining costs for those that do stay, and dissatisfaction all around.

That's not about systemd but just to defend our guys:

a) I'd love to do accurate cost assessments where IT companies use a sane rate of interest and depreciate their IT infrastructure over 10-20 years. We aren't the ones who force companies to do ROI accounting as if their depreciation / cost of borrowing / interest rate were 400%. That's not the engineers either (they are mainly on our side about that one). Blame your finance guys, not us. But ultimately, if the customer is mainly focused on the 1-year or 3-year cost, then we build a solution to keep the 1- or 3-year cost low, often by letting it explode in the out years. (A toy example of what I mean follows after point c.)

b) In terms of staff churn, often the point of an integrated solution is to prompt staff churn, i.e. displacement of the people. We get involved quite often because people are unhappy with what they are getting from their in-house engineering staff. When the in-house engineering staff is buying it, they are generally picking a technology they are enthusiastic to use / learn, or they already have the right skill set. If you aren't the one buying it, you aren't the customer.

c) Of course there is often vendor lock-in! We have long term ongoing relationships with the vendors where we work account after account after account. In a vague sense we are on the same team as the vendors. The vendor lock-in is often how we get the good price. But we (as a profession generally not individually) are happy to construct solutions with less lock-in if the customer (who remember is many times not the IT group) wants it. As an aside, in-house software creates employee lock-in which is for most companies worse.
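Here's the toy example for point (a). All of the numbers are invented, purely to illustrate the accounting point: whether the "cheap for 3 years, expensive afterwards" solution or the flat-cost solution looks better depends entirely on the discount rate the finance guys make you use.

    # Toy net-present-cost comparison (all figures invented for illustration).
    # "cheap_now" is low cost for 3 years, then the costs balloon; "flat" is steady.
    cheap_now = [100] * 3 + [400] * 7   # yearly cost over a 10-year horizon
    flat = [200] * 10

    def npc(costs, rate):
        """Net present cost of a yearly cost stream at a given discount rate."""
        return sum(c / (1 + rate) ** t for t, c in enumerate(costs))

    for rate in (0.05, 4.0):            # a sane 5% rate vs an absurd effective 400%
        print("rate", rate, "cheap_now", round(npc(cheap_now, rate)),
              "flat", round(npc(flat, rate)))

At 5% the front-loaded solution costs more over the decade; at an effective 400% the out years barely register and it looks like a bargain. That's the distortion I'm complaining about.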

You're claiming that all the companies you've namedropped thus far are your clients?

No, I'm claiming I have DevOps clients. I named those companies as being large users of DevOps and PaaS. Netflix incidentally is a client. Though I'll be honest here: I don't give a crap about their software; I only care about some of their handoffs to various local cable companies. I don't care what their software does as long as it uses X amount of bandwidth at the right times.

You can run a server on it, but it's not ideal. It removes many of the knobs and switches that experienced sysadmins and engineers used to get extra performance out of their systems. It adds appreciable layers of overhead just to do the same thing the parts it replaces have been doing for years. The developers themselves have shown an inability to think about it in a multi-system context. It presents a large attack surface because of the dependency chain it's seeking out and building up. Maybe it'll be ready for primetime by 2018. I know I'll be hacking on it and submitting patches since the major vendors decided that selling new integration packages was more important than keeping their users and customers happy.

I agree with this, except for the claims that the developers are unable to think in a multi-system context and that this is not something driven by customers. I think I deal with more customers than you do. Getting away from the knobs and switches that experienced sysadmins use, and towards generic solutions and commoditization, is exactly what the customers do want. You may not like that they want that, but that's the reality. Almost all customers love hardware abstraction, the more the better. And they are willing to use an extra 2-5% of boxes to achieve it.

You still think vendors matter. This ain't Windows.

This ain't the Linux of the 1990s either, when it was a hobbyist OS for guys like me who couldn't afford an SGI or Sun at home and wanted a Unix. Linux today is a professional server OS. Systemd came out of RedHat. If vendors didn't matter, Debian wouldn't be following in RedHat's wake on systemd and we wouldn't be having this conversation. Damn right vendors matter.

Comment Re:What? (Score 1) 555

Facebook most definitely does not use a single distro in production that uses systemd.

Facebook had been on a CentOS variant running on hardware certified by RedHat Labs for years. RHEL is systemd. Whether they have switched yet or not I don't know, but by 2018 or earlier they will be on systemd. If it isn't in production, it isn't in production yet.

You don't work out here. I do. It's not that big of an industry when it comes to systems administration and DevOps out here.

I don't work out there. Absolutely not. But PaaS is much bigger than the Valley. The ideas and technologies developed for DevOps are being deployed much more broadly.

Don't make claims that anyone with a LinkedIn worth a damn can debunk with a few messages.

Maybe time to use your real name if you are going to play that game.

I've been very clear about the gripes I have about it.

No, you haven't. You've said that you have to change stuff you do for an existing infrastructure. That's about it. Lots of hyperbole and nonsense claims about it being desktop-only.

You don't administer systems. You don't design the internals of system architecture

I don't administer systems. I most certainly do design the internals of system architectures. What do you think happens when you sell a solution? That you just put together random parts without thinking about how they work? I've done the specialist job where you knew every little detail of a system, and I've been through large-scale changes before as an engineer. I remember, back when I was an engineer, people like you griping about the changeover from DECnet to IP and how the sky would fall. I was intimately involved in the migration of working systems from Metacode to PostScript and AFP. The systems are far better today for the progress.

I can't even tell from your LinkedIn (we're third degree contacts) that you've touched a Linux system in your life.

I have: Unix since 1988, Linux since 1995. I've also touched a wide range of the big-box Unixes, zSeries, iSeries and VMS. Which gives some perspective from having seen different styles of solutions for process management over the years. I'm certainly not a Linux specialist. I rely on Linux specialists.

You've been management for over a decade.

Yep. What do you think management does?

You seriously don't think the same infrastructure as the shops you namedrop is -the- solution for every use case, especially out here, do you? Especially when they don't even use what you're advocating for?

I don't use PaaS? Really? You tell me: what am I using for the clients we are deploying to?

I don't know about every client out there. I do know that the claim you were making, that systemd was for desktop and didn't have support for servers, is BS. I talk to far too many vendors who sell server solutions to believe that. I talk to far too many cloud and colo sysadmins who see hundreds of clients not to know if systemd were causing problems for any significant fraction of them. I work with one of the research groups that publishes studies on this, collecting the data.

So sure, you are an admin in the Valley. So what? You aren't the only place doing DevOps, though you guys do invent many of the best ideas. It's gone mainstream. And believe it or not, people on opposite coasts do talk to one another.

Comment Re:What? (Score 1) 555

Google, IBM, Facebook, Netflix, Random House, GE... Those are complex server installations. Your side hasn't said much of anything. Mainframes, minis, and big-box Unix have had monolithic systems for decades. I'm presenting evidence; you aren't presenting any well-known vendor who disagrees.

Comment Re:I still don't see what's wrong with X (Score 1) 226

>>And it sucks for them that your use case tramples their use case. These things are symmetrical. There are choices. Some are helped and some are harmed.

Bull!

You can't run around claiming you don't want to talk about internals and then just make statements like that. You don't know what you are talking about. The fact that you don't like reality doesn't mean it isn't reality. Yes, there are choices in life. Not everyone gets to have everything. Do you really think that the debates regarding network transparency for the last four decades were because of spite, and that Microsoft, Apple, Commodore, Digital, NVidia, Intel, IBM, Sun, SGI, the current X-Windows team... are all just wrong while your "I don't like it, so it isn't true" is right? Grow up!

This is /. Stop cursing. Start thinking about these problems from the developer's end. Either the application and graphical buffer is shared or it is separate. Either the network protocol is intelligently decomposing graphical objects or it is oblivious to them and just passes buffers around. No you cannot have both.
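Just to make that concrete, a toy back-of-the-envelope (the numbers are illustrative, not from any real implementation): a protocol that decomposes graphical objects ships small drawing commands over the wire, while a buffer-passing protocol ships rendered pixels.

    # Two models from the paragraph above, in rough wire-size terms.
    draw_command = ("fill_rect", 10, 10, 200, 100, 0xFF0000)  # tens of bytes per primitive

    width, height, bytes_per_pixel = 1920, 1080, 4
    buffer_bytes = width * height * bytes_per_pixel            # one full frame of pixels

    print("decomposed protocol sends commands like", draw_command)
    print("buffer protocol sends about", round(buffer_bytes / 2**20), "MB per full-frame update")

Compression and damage tracking help the buffer model, but the two starting points really are different designs, which is the choice I'm talking about.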

If supporting the X protocol means someone else's video doesn't play right then fine, don't support the X protocol. (Makes no sense to me; I can play videos all day long without seeing tearing, whatever the hell that is.) Just make sure it still has the possibility to do something that works like an X terminal but uses RDP or VNC or whatever protocol makes you happy.

That's what it does. There is no reason you can't build a VNC client that boots instantly on bad hardware. The VNC use case is covered by Wayland today. But that is not network transparency.

So? The least common automatically gets no support?

No. But in a well-designed system, when there are tradeoffs, that's the group that gets disadvantaged.

Somebody always must be discriminated against? Where is the logic in that?

The logic of that is we live in the real world.

I'm sure there are many others using LAN support. So what if it's the least common. It's still common enough to be important!

No, it isn't. And we know that because we have data. NEC didn't leave the X terminal business 15 years ago because sales were too strong to keep up with. SunRays are selling on eBay for less than the case is worth; that's not a sign of strong demand. It is time to deal with reality. Your use case is a tiny niche. A well-supported niche, and one likely to continue to be supported in a limited fashion for decades. But you are under 2% now, not 90% anymore.

Comment Re:What? (Score 1) 555

Even if systemd is a clear upgrade over every single component it has its tentacles in (for the sake of argument), it isn't enough to justify refactoring a working infrastructure on a DevOps team that's already understaffed as it is.

So don't use it. But that's very different from claiming it is only for desktop and not for servers. Which both of you tried, and which is provably false given that the most complex server installations are pushing systemd. DevOps, BTW, is one of its best use cases.

Comment Re:How (Score 1) 555

As far as the init system goes, the vast majority of packages are not daemons. Only daemons require init support.

I agree. Most packages aren't a problem. But many packages depend directly or indirectly on daemons. Which is how chains of dependencies form.

But the task of maintaining a couple hundred init scripts wouldn't be hard for a small committee of volunteers.

That's easy. But that's not the task. Systemd does process monitoring. Systemd has ties to PaaS. Systemd handles power management and alerting applications to be responsible about their power usage... All that code needs to be maintained. This is where it gets to be serious programming.

For the non-init stuff, the trick is to convince upstream developers to support diversity, which can be done by continuing to embrace open standards and APIs.

How? The fact that upstream developers liked the features of systemd and kept wanting to use them is what drove Debian to feel they had to make the switch in the first place. Sure, if the world were different, Debian would have made other choices. But how do you convince developers to embrace "open standards"? Especially since FreeDesktop has put out a systemd spec, and there exists a systembsd implementing that spec, systemd is arguably an open standard itself.

Comment Re:What? (Score 1) 555

First, the majority of the market is not PaaS vendors.

True. But the claim was, "Systemd may be fine for a desktop, but not fine for a server". Obviously the PaaS vendors are doing server.

I know of some that don't want it

Which PaaS vendor has come out against systemd?

Comment Re:I still don't see what's wrong with X (Score 1) 226

So somebody else's problem with X11 means that my own use case gets trampled? That really sucks.

And it sucks for them that your use case tramples their use case. These things are symmetrical. There are choices. Some are helped and some are harmed.

I'm sure I'm not alone here in using network transparency.

You aren't. But of the three main cases (local, LAN, WAN), yours is the least common.

If you want Windows Remote Desktop why not just use Windows?

They could say the same thing to you. If you want 1990 Unix why not use a 1990 Unix?

What I do really really care about in Ratpoison is the tiling I like... Can tiling be done with Wayland?

There have been tiling compositors for Wayland since 2012. The algorithms for tiling are standard programming exercises that are easy to implement, so they should be in the major compositors once larger issues get resolved.
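For anyone curious what I mean by a standard programming exercise, here's a minimal sketch of a master/stack layout in Python. A real compositor would feed actual output dimensions and surfaces into something like this; the function name and ratio are just illustrative.

    # Minimal master/stack tiling: one master pane on the left, the remaining
    # windows stacked evenly on the right. Returns (x, y, w, h) rectangles.
    def tile(screen_w, screen_h, n_windows, master_ratio=0.6):
        if n_windows <= 0:
            return []
        if n_windows == 1:
            return [(0, 0, screen_w, screen_h)]
        master_w = int(screen_w * master_ratio)
        rects = [(0, 0, master_w, screen_h)]           # the master pane
        stack_h = screen_h // (n_windows - 1)          # even vertical split for the rest
        for i in range(n_windows - 1):
            rects.append((master_w, i * stack_h, screen_w - master_w, stack_h))
        return rects

    print(tile(1920, 1080, 3))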

Comment How (Score 1) 555

Let's ignore the issue of whether the fork is a good idea. How are they going to accomplish this? Debian has thousands of packages. Upstream developers mostly like systemd. At least a few dozen packages are becoming hard-dependent on systemd. Assume this number doubles every year (not unlikely). What is the Debian fork going to do? Assume that about 200 or so already have reduced functionality without systemd, and again let that go up 50% per year for the next few years. How are they going to fix this?
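To put rough numbers on those assumptions (the starting counts are my guesses, not measured data):

    # Back-of-the-envelope projection using the growth rates assumed above:
    # hard dependencies doubling yearly, reduced-functionality packages up 50% yearly.
    hard_deps, degraded = 36, 200   # "a few dozen" and "about 200" as starting guesses
    for year in range(2014, 2019):
        print(year, hard_deps, degraded)
        hard_deps *= 2
        degraded = int(degraded * 1.5)

By 2018 that's several hundred hard dependencies and around a thousand degraded packages to patch around, which is where the estimate below comes from.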

This sounds like hundreds if not many thousands of man-years of work per year, every year, just trying to keep up. How is the Debian fork possibly going to make it? The best they can do is release a traditionalist sub-distribution which uses init. OK, that's easy, but that's not a fork. And frankly, if they start patching a few things, why not just roll those patches either upstream or into Debian?

How is this fork going to work and what are they going to do?

Comment Re:And this is why Linux will never win the deskto (Score 0) 555

Linux works out of the box in the same way that MacOS or Windows does.

Not really. It has gotten worse at this in the last decade. 10 years ago I'd say Linux was likely easier to install on random hardware. Today the relentless desire to hack up drivers has dried up (understandably; it's a ton of work that never stops). The better desktop distributions went broke. Mandrake is gone. Caldera (pre-SCO) is gone. RedHat makes a server OS, not a desktop one. YellowDog (PPC) gone... Xandros gone. It is getting harder and harder to get Linux to install and work on the desktop.

Comment Re:I still don't see what's wrong with X (Score 1) 226

I was using XTerms (the real thing, not emulators) starting in 1988, and was using one as my primary computer by late 1992. I know what XTerms are. The LTSP was just a way in the early 1990s to get Linux boxes, primarily cheap old PCs that couldn't run Windows 3.1 / 95 anymore, running as X terminals. I've been familiar with that project for two decades. I'm not failing to understand you. But you were being a bit unclear about what you wanted before.

Check out the Linux Terminal Server Project ltsp.org. Can something like that be implemented in Wayland?

If by that you mean a dumb system giving you near real time performance, no it can't. That's what network transparency means, and that's what Wayland doesn't support.

X-terminal can be a truly cut down device with little more than a kernel and X. Boot time is super fast because all you are loading is a kernel plus X.

It doesn't even really need anything as complex as a Windows kernel. You can cut it way below that. X11 ran on DOS. You can easily create a dumb X-term which would be done booting before you could move your arm from the power switch to the keyboard. The NCR used an 88100 @ 20MHz and could boot in under 5 seconds.

By X11 having that do you mean PulseAudio?

There are lots of solutions. The X11 protocol is extendable; one extension that's been implemented multiple times is sound. Anyway, to set up PulseAudio: http://www.freedesktop.org/wik...

I want a terminal that is basically a dedicated second head to the main machine.

That Wayland doesn't do. You have your choice: smart networking, or application and video card on the same bus. Someone might figure out some way to get that to work by running virtual machines on either side and hacking together a virtual bus that runs over the network, but what you want is what X11 is optimized for. Keep running X11 as long as you can and see where the world is in 2030 or so.
