
Comment Re:And again (Score 1) 928

There's NOTHING wrong with script files.

Using script files to set up and manage daemons, because the init system is as primitive as SysVinit, is simply a bad idea.

I'm sorry, but just saying "X is 'primitive'" isn't actually very useful. There is nothing primitive about scripts.

Also, this isn't about SysVinit; this is about systemd. Picking at the one isn't an argument for the other. In fact SysVinit is pretty sophisticated in its own way, but it's fine if we make it better or replace it with something BETTER. Again, just pointing out the problems with it isn't automatically enough to make systemd the correct alternative. I've pointed out negatives about the systemd approach; you need to address those.

Mixing executable code and declarative config statements, like script-based init systems do, is simply indefensible; it makes them hard to parse for both humans and machines.

There are two problems with this statement. First of all, properly written init scripts, à la Red Hat, put all their config in /etc/sysconfig, not mixed into any script. This is a perfectly easy practice that has been the state of the art for at least 10 years. Secondly, blanket statements decreeing what is and isn't 'indefensible' are ridiculous. You need to make concrete arguments; I and many others are FAR past the point where decrees like this mean squat to us. There are many cases where basic invariant configuration elements are perfectly reasonably placed within a script. These are things that are not going to be changed in your installation but should be parameterized as good PROGRAMMING practice. Don't confuse those with configuration options that the system's user is going to want to change; they are very different things.
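
To make the split concrete, here is a minimal sketch in that Red Hat style. The service name mydaemon and the variable MYDAEMON_OPTS are hypothetical; the point is only the pattern: user-changeable settings live in /etc/sysconfig, and the script keeps nothing but invariant values and logic.

#!/bin/sh
# Sketch of /etc/init.d/mydaemon (hypothetical service).
# The only file a user edits is /etc/sysconfig/mydaemon, e.g. a single line:
#   MYDAEMON_OPTS="--workers 4 --listen 127.0.0.1"

# Invariant, installation-level values: parameterized as good programming
# practice, but not something the system's user is expected to change.
DAEMON=/usr/sbin/mydaemon
PIDFILE=/var/run/mydaemon.pid

# Pull in the user's configuration, if present.
[ -f /etc/sysconfig/mydaemon ] && . /etc/sysconfig/mydaemon

case "$1" in
  start)
    $DAEMON $MYDAEMON_OPTS
    ;;
  stop)
    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")"
    ;;
  *)
    echo "Usage: $0 {start|stop}" >&2
    exit 1
    ;;
esac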

No one would ever design an init-system these days that didn't use pure text files for daemon config. SysVinit and similar are relics from a time when computing was done in a completely different way than today.

I am not trying to be disrespectful here toward the pioneers who made various OSes back in the 1960s, 1970s and 1980s. It is just that some of the design choices they made reflected the contemporary problems they had.

They made some simple but very flexible init systems based on shell scripts. But the simplicity just pushed all kinds of problems over to the user-space developer side, like handling privilege dropping when a daemon needed a low-numbered port, etc. And many people, including me, have long been of the opinion that such init systems have been obsolete for years (if not decades). Most if not all certified Unix vendors have replaced their script-based init systems (SMF and launchd were major inspirations for systemd).

Horse Petunias. Computing hasn't fundamentally changed, and the same use cases that existed back in the mid-1980s, when the init systems in use today were born, exist today. Believe me, I was there, and I know. A machine room today is solving the same problems that it was back then. Some of the technology is different and there are obviously some new things, but my 1980s Sun Unix machines ran pretty much like my Linux servers do today. If you want to tell me that a mobile phone is quite different from a minicomputer, well, yes. However, nobody is asserting that a mobile phone should be running the same init system as a web server, except the people making everything depend on systemd....

As I've said elsewhere in this thread, you are failing to understand good overall system design and factoring if you think a large complex program that incorporates every behavior into itself and puts only configuration options into files is 'superior'.

I do think I understand enough about how OSes work to have a qualified opinion. We just happen to disagree about some things and are having a civilized exchange of arguments.

The core of systemd is certainly a lot more complex than SysVinit and similar, and the complexity isn't avoided by using SysVinit; it is just moved into other external programs.

systemd isn't large, and all the core daemons are really lightweight when it comes to memory, CPU and other resources.

A better solution would be individual scripts which can perform all the functions required for each service and coordinate with each other, with all the commonalities between them pulled out into shared library code. This allows for any level of flexibility and initialization strategy a packager or developer requires, without forcing anyone to do anything and without large, disruptive changes in key subsystems.
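
As a rough sketch of that factoring (all the names here, functions.sh, start_daemon, stop_daemon and mydaemon, are hypothetical, not any existing distro's API), the common mechanics live in one shared file and each service script stays a few lines long:

#!/bin/sh
# /etc/init.d/mydaemon -- per-service script: only what is specific to this
# service lives here; the common mechanics come from the shared library.
. /lib/init/functions.sh

case "$1" in
  start) start_daemon /var/run/mydaemon.pid /usr/sbin/mydaemon ;;
  stop)  stop_daemon  /var/run/mydaemon.pid ;;
  *)     echo "Usage: $0 {start|stop}" >&2; exit 1 ;;
esac

# /lib/init/functions.sh -- the shared library those scripts source (sketch):
start_daemon() {    # start_daemon <pidfile> <command> [args...]
    pidfile=$1; shift
    "$@" &
    echo $! > "$pidfile"
}

stop_daemon() {     # stop_daemon <pidfile>
    [ -f "$1" ] && kill "$(cat "$1")" && rm -f "$1"
}

A service that needs something unusual just adds it to its own script; nothing else in the system has to change.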

I am really not interested in whether or not you understand the Unix way of doing things, but it is a real and highly beneficial approach which you might do well to actually understand. I don't have a big issue with a lot of the functionality that we're talking about here, though I think you are very much oversold on binary logging, but it doesn't all need to be bundled together in one package from which it is, in any practical sense, impossible to take what you want without being forced to swallow the whole thing.

Sure, an improved "Super" SysVinit would have been easier to deal with for many. Upstart was one such attempt. But making such an improved script-based init system is harder than it appears: why should upstream support it if it breaks compatibility or offers only small improvements? Changing how everything works for a small gain will mean little traction for such a project, which in turn will mean little support from upstream projects.

What makes systemd so attractive for upstream projects is that, while it changes things a lot, it also really helps them in many ways: by providing a cross-distro compatibility layer, by providing needed low-level functions like logind, and by simplifying daemon development and daemon configs. I am aware that you think systemd provides no benefit for your use case; fair enough, but it is a mistake to think other people don't have use cases where systemd is a huge improvement over existing solutions.

I think many of the design decisions made in systemd are really sensible; if someone wanted to design a modern, general-purpose init system that scaled from embedded to supercomputers, and had the same features as systemd, including backward compatibility, they would end up with a design very much like systemd.
I think it is armchair OS design to think one could have systemd's features by making many small, totally independent programs; all performance would be killed by inter-process communication, and the design would be really, really fragile and complicated.

Obviously we will have to agree to disagree. As a highly experienced system architect I'll take my instincts over those of people that seem to be going down the wrong path every time. IPC isn't a big deal, and if your init system is doing enough work that 'killing performance' is even a remote possibility, then something is VERY wrong. At the most basic level there are a whole series of different problems here: starting and stopping services at system initialization/termination, managing hotplugging, general event management, and a "compatibility layer". These need not in any sense be lumped together into one solution. Respectfully, you are wrong on this. I've been doing the system engineering trade for a LONG LONG time, since an early age, and when I know I'm right about something, I'm just right. It's not that common, but this is one of those cases, so you're going to have to do detailed exposition of every single technical point if you expect guys like me to be AT ALL convinced.

And no, I'm not really being given a choice when the only option I have is something like Arch Linux, which, no offense to its maintainers, is not something I'm going to run my bank on top of. Neither am I going to suddenly convert said operation to using systemd willingly. I've already had to deal with a lot of fallout from this stupid idea. Frankly, I didn't have any problems before that needed this solution, and the changes ARE disruptive while we've noted zero practical benefit.

There is Slackware too. But I understand what you mean; at the moment there simply isn't a stable, long-term-release Linux distro besides Slackware and its derivatives. If you run a traditional server setup, Gentoo and other rolling-release distros aren't so attractive.

It would be strange if such a distro didn't materialize at some point, however.

Yeah, I'm actually not pissed off at the people writing systemd. I'm pissed off at the people that maintain RHEL because they showed incredibly poor judgement, and making them pay for it is going to be quite a lot of work that should never have needed to be done.

Comment Re:I'll explain it this way... (Score 1) 928

I don't know what 'SCO' has to do with it. Tandem wasn't SCO; they were supplying their own NonStop solution to the financial industry, which drove DEC's VAX cluster solutions clean out of that space. They were acquired by Compaq in 1997. I know because I was and still am friends with a number of the DEC engineers who worked in the clustering group in the late '80s and early '90s.

Linux had nothing in the way of traction in the workstation market in the '90s; it was not a player at all in any commercial sense. There was very limited use of Linux on Multias by DEC as a way to make a cheap X terminal out of a failed workstation product, and that was about it. Workstations at that time were heavily dominated by SGI, along with IBM, DEC, Sun, and some smaller third-party Alpha and MIPS system resellers. Many of those were NT 4.0-based boxes. I know people who used these things, sometimes thousands of them, and none of them ran Linux; it was unheard of. They were almost entirely used to run specific commercial applications (CAD, 3D software like LightWave 3D, etc.) which were rarely, if ever, ported to Linux (for instance, there were some custom ports of Adobe products, which certain shops had access to around this time, but they were never made widely available even to Adobe's bigger clients).

BSDs were indeed quite popular in the hosting space in the early-to-mid '90s. Linux eventually eclipsed BSD in most shops by around '97, but particularly in the '93-'94 time frame there were significant issues with using Linux in line-of-business server applications. I should know; I was a significant supplier to, and engineer of, commercial web solutions, local ISPs, etc. during this time. I set up a LOT of these people's IP networks and server rooms, built a lot of their first web applications, etc. This goes all the way back to things like the first discussion boards. In fact I may well have created THE first HTTP-based discussion board: hard-coded handlers compiled directly into the CERN httpd that used BDB index files to implement multi-level threaded boards. It was all run under IRIX on big quad-processor SGI servers. We had 128 megs of RAM in our main server, which handled the NGA and some other organizations. I did buy a pile of Multias and got early Red Hat working on them, mainly because BSD wasn't available for the Alpha processor they had in them. In any case, the point is that BSD was at least as widely used in the early server rooms as Linux was.

One of the real issues with Linux on a workstation in those days was A) there were no good graphics cards for PCs that could compete with the stuff on the workstations (and no drivers, so you couldn't run Linux on the DEC/SGI hardware), and B) CDE wasn't available on Linux. CDE was no prize, but TWM was far more limited back then and it was mostly the only alternative. These days you can make a realistic Linux workstation, but back then all the high-performance hardware was proprietary and even the fastest 486s weren't in the same league as workstation processors, so I am not even really sure what you would have called a "Linux Workstation" back then. Believe me, had it been possible for there to be one, I'd have been running it. My company had VERY good access to advanced hardware and software at that time and we played around with and used Linux a lot, but for real workstations it was SGIs or some of the Alpha-based boxes, at least until around '98 when some of the Pentium II Xeons started to show up with enough oomph to do the job.

Comment Re:And again (Score 1) 928

There's NOTHING wrong with script files. As I've said elsewhere in this thread, you are failing to understand good overall system design and factoring if you think a large complex program that incorporates every behavior into itself and puts only configuration options into files is 'superior'. A better solution would be individual scripts which can perform all the functions required for each service and coordinate with each other, with all the commonalities between them pulled out into shared library code. This allows for any level of flexibility and initialization strategy a packager or developer requires, without forcing anyone to do anything and without large, disruptive changes in key subsystems.

I am really not interested in whether or not you understand the Unix way of doing things, but it is a real and highly beneficial approach which you might do well to actually understand. I don't have a big issue with a lot of the functionality that we're talking about here, though I think you are very much oversold on binary logging, but it doesn't all need to be bundled together in one package from which it is, in any practical sense, impossible to take what you want without being forced to swallow the whole thing.

And no, I'm not really being given a choice when the only option I have is something like Arch Linux, which, no offense to its maintainers, is not something I'm going to run my bank on top of. Neither am I going to suddenly convert said operation to using systemd willingly. I've already had to deal with a lot of fallout from this stupid idea. Frankly, I didn't have any problems before that needed this solution, and the changes ARE disruptive while we've noted zero practical benefit.

Comment Re:I'll explain it this way... (Score 1) 928

It wasn't Linux that killed DEC; I was there and saw it. By 1994 DEC was toast, and that was purely because you could spend $5k on an x86 box that could do as much as your $50k DEC Alpha 4000-series. It was mostly stuff like Tandem and commercial Unix running on x86 boxes that ate into their business, not Linux, which was NOTHING at the time. Recall the products DEC rolled out to try to fight it: there was Alpha itself, then there was the Multia, and there were several other workstation products, none of which could compete with NT on a Compaq.

Linux was a little sprout hiding in a bush watching the giants kill each other back then. It sure had nothing to do with the workstation market, though around '94 it started to become modestly popular as a quick, cheap 'put it on some old 386s' server for mail/BIND/etc. and some casual web server stuff. FreeBSD was actually both a better choice and more popular in all those roles at the time. The only reason Linux thrived was that the community was open and easy to play with, so it grew large quickly. The BSDs were a lot more elitist; you weren't WORTHY to commit code there. Anyway, I don't think systemd is going to sit well with the community, ever.

Comment Proper factoring (Score 1) 928

You have the factoring wrong on this sort of system design. You shouldn't have a monolithic controller implementing all these features. Instead you should have individual scripts that CAN do it any way they want, and then a large library of tools that properly implement all the things that people ACTUALLY want to do. This allows incremental adoption, avoids dependency lock-in, and preserves flexibility and modularity. Instead of replacing init, trying to incorporate every possible feature into systemd, and exposing those features as configuration options, what systemd SHOULD be is a set of functions that daemons can call, plus some wrappers that let you compensate for or enhance whatever ill-behaved services already exist that can't simply be fixed up with changes to their init scripts.
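
To illustrate the "wrappers" idea with a hedged sketch (the name limit-daemon and its arguments are hypothetical, not an existing tool): a tiny program that applies the policy an ill-behaved daemon won't apply to itself, then gets out of the way.

#!/bin/sh
# limit-daemon (hypothetical, sketch):
#   usage: limit-daemon <max-open-files> <command> [args...]
nofile=$1; shift

ulimit -n "$nofile"   # cap open file descriptors for whatever we exec
cd /                  # don't keep the caller's directory (or a mount) busy
umask 027             # sane default file permissions

# Replace ourselves with the real daemon; the wrapper leaves nothing resident.
exec "$@"

An init script that needs the fix just changes its command line to something like "limit-daemon 1024 /usr/sbin/baddaemon"; nothing else in the system has to change.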

As for security, that's an independent cross-cutting concern which is already handled by other tools. If there's a need for some command-line tools to expose some kernel functionality, or some libraries to do so, then that's one thing, but why is this grafted into the same tool that performs system startup? That's not good factoring.

We could have the same discussion WRT container support and other elements of systemd, most of which belong in completely separate tools. If there needs to be additional framework around those different tools so they can do some new/better things, then that should be done in the wider community so that we can have standards, not dumped on everyone as a fait accompli.

Comment And again (Score 1) 928

This is forced on me, when I don't need or want it, for exactly what reason that you can explain? If I want binary logging there are already solutions, which you have JUST NAMED, which work fine. I've no need or interest in having an init system that is too big for its britches foisting another one on me. Compartmentalization IS important.

Comment Sure, but why the kitchen sink? (Score 2) 928

I have all sorts of high-reliability services running on quite a few different servers, and I have plenty of them that will restart themselves, monitor their own crash rate, and terminate completely if they crash N times in M minutes, etc. This is not rocket science and doesn't need to be built into PID 1 or anything like that.
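
As a minimal sketch of that kind of supervision in a plain wrapper script (mydaemon and the 5-crashes-in-10-minutes policy are just illustrative):

#!/bin/sh
# Restart the service when it dies, but give up if it crashes more than
# MAX_CRASHES times within WINDOW seconds.
MAX_CRASHES=5
WINDOW=600            # 10 minutes

count=0
window_start=$(date +%s)

while :; do
    /usr/sbin/mydaemon --foreground   # hypothetical daemon, run in the foreground
    now=$(date +%s)

    # Reset the counter once we are outside the measurement window.
    if [ $((now - window_start)) -gt "$WINDOW" ]; then
        count=0
        window_start=$now
    fi

    count=$((count + 1))
    if [ "$count" -ge "$MAX_CRASHES" ]; then
        echo "mydaemon: crashed $count times in ${WINDOW}s, giving up" >&2
        exit 1
    fi

    sleep 1   # brief pause before restarting
done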

Frankly I think it's a bad idea to create dependency chains onto huge complicated subsystems that often aren't appropriate, aren't needed, or are simply overkill.

Comment Re:StraighTalk (Score 1) 170

Yeah, that could be better for some people. The $30 plan is nice if you aren't going to use a HUGE amount of data and just want 4G. T-Mobile's network only has patchy 4G around here, though, so I've not tried it. Truthfully, all wireless internet kinda sucks in one way or another. At least some of the resellers are FAIRLY honest about what you get, like Straight Talk, and the price is pretty reasonable considering it's an unlimited-everything plan. At least you never need to be watching your usage.

Comment Re:By yourself you know others (Score 1) 583

But all of this AI nonsense is silly. Why would an AI be 'powerful'? First of all, it is hard to imagine that any early-generation AI will even be close to human in its capabilities; that would require mind-boggling amounts of computational power. Secondly, we aren't even close to understanding how to make an efficient sort of AI, and even further from making a complete one. What we're likely to produce is something pathetic by human standards, though clearly, to be useful, it will be very good at some specific thing(s). Beyond that, why would an AI be dangerous? It will be a box or a facility somewhere. How will it actually accomplish anything except through us? And why would we be moronic enough to make it otherwise? All the silly fantasies aside, computers actually have relatively little direct control over anything. Nor do stupid fantasies like "the program escaped" à la 'Person of Interest' or some such bulldung have any relationship to reality. An AI will be something to pity at best, if it is even possible to pity it. 100 or 1000 years from now? Who knows, but today mankind is still a vastly greater threat to mankind than any machine is or can be in the foreseeable future.

Comment Bad UI (Score 1) 286

I think the PA GUI control programs are the biggest issue; pavucontrol and the other tools are just utterly confusing and obtuse. It's the typical developer-designed UI paradigm: make a widget for each configuration parameter instead of thinking through the use cases and constructing abstractions that make sense to the user rather than the developer. Once the configuration is properly presented and a task-oriented UI is constructed around it, I don't think PA will give people so many issues. There are a lot of neat things you CAN do with it, IF you can figure out how. It's just that no mortal human (myself included) can make heads or tails of the frikking thing.
