Comment Re:What OpenRC ? (Score 1) 95

As for what starts and stops at each runlevel, that's as easy as an "ls". Beats grepping a myriad of MSDOS ini files

Agreed that it isn't quite as nice and easy as an "ls",

Scratch that. It is as easy as an "ls". I was poking around in the /etc/systemd directory and realized /etc/systemd/system is functionally equivalent to /etc/rcX.d/. It is maintained by systemctl (enable and disable create and remove the symlinks), and it consists of a bunch of symlinks to service unit files that let you very quickly and easily see which services are dependencies of a particular target.
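
To make that concrete, a quick sketch (multi-user.target is the stock systemd target; the paths are the usual Debian/Fedora defaults):

    # Everything "wanted" by a target is just a symlink, one per service:
    ls -l /etc/systemd/system/multi-user.target.wants/
    # Roughly the systemd analogue of:
    ls -l /etc/rc3.d/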

Comment Re:Well that's nice (Score 2) 95

So if I update any of the libraries that init uses, all I have to do is a "telinit q"?

systemctl daemon-reexec

That one isn't mapped to a telinit equivalent (I don't think).
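
For reference, a sketch of the two related commands (both are real systemctl verbs; the telinit analogy is mine):

    # Re-read unit files (roughly what "telinit q" did for /etc/inittab):
    systemctl daemon-reload
    # Serialize state and re-execute systemd itself, picking up a
    # freshly updated binary and libraries:
    systemctl daemon-reexec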

Last I checked, that was broken in upstart

Lots of things were broken in upstart. Systemd is a tremendous improvement over upstart, much to the chagrin of Mark Shuttleworth.

And systemd now lets me drop to single user mode? That's an improvement.

That's been there for a while. I'm not sure who these "perps" are that you speak of, but systemd will do anything you tell it to. If you want a single user mode, define a target that creates a single user mode, just like you would define runlevel 1 to not start multiple ttys. I didn't follow every development of systemd as it was happening, and only really jumped in when it came (officially) to Red Hat 7.2 and then Ubuntu 16.04. While there are still some lingering integration issues (mostly with specific daemons), I would say it works pretty well, and the distro maintainers have done a lot of good work with backwards-compatibility scripts to help people transition from sysvinit. So yes, there is a rescue.target, which is also aliased as runlevel1.target on both Fedora and Debian.
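
If it helps, here is how you actually get there (rescue.target and the telinit shim are standard; nothing here is distro-specific as far as I know):

    # Drop to single-user/rescue mode on a running system:
    systemctl isolate rescue.target
    # or via the sysvinit compatibility shim:
    telinit 1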

Comment Re:What OpenRC ? (Score 1) 95

One of the absolute worst features of systemd (and inittab when abused) is automatic restart.

I never said anything about automatic restart. Systemd allows you to be alerted to and to respond to process failures. To me, that's predictability. If I start a bunch of network services and one of them fails, systemd will decide whether to continue (ie: the dependency tree allows it) or to fail. Regardless, the outcome is entirely predictable. Services that depend on other services (which includes the target state itself) will have all of their dependencies satisfied, or they won't be started, and anything that fails will be logged in a consistent manner that is easily parseable by a system monitoring utility. When you "telinit 3", sysvinit runs all of the scripts in /etc/rc3.d and if they start they start, if they fail they fail. It's up to you to scrape the logs and keep tabs on all of the daemons. The state "runlevel 3" is not guaranteed.
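
A minimal sketch of what an explicit dependency looks like in practice; myapp.service and its paths are made up for illustration:

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=Example network service
    Requires=network-online.target
    After=network-online.target

    [Service]
    ExecStart=/usr/local/bin/myapp

    [Install]
    WantedBy=multi-user.target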

Because the start order is 100% predictable.

Ah, ok, that's a different kind of predictable. I agree, start order is not predictable with systemd. I would argue, though, that it doesn't need to be, because you have explicit dependencies that let you depend on the actual started-and-functioning state of a prior process (as opposed to just a numbering scheme), and logged events that let you determine precisely when and where (and often why) a dependency tree failed. You don't need to step through the boot process one script at a time, because the log tells you exactly what failed, and you can start your debugging right at that point.
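
Something like this is usually all the "stepping through" you need (myapp.service is the same made-up unit as above):

    # What failed this boot, and why:
    systemctl --failed
    systemctl status myapp.service
    journalctl -b -u myapp.service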

As for what starts and stops at each runlevel, that's as easy as an "ls". Beats grepping a myriad of MSDOS ini files

Agreed that it isn't quite as nice and easy as an "ls", but it definitely is not as complicated as grepping the unit files and trying to figure out when things start. The nice thing about explicitly declaring your dependencies is that you can have systemd show them to you. Let it do the work so you don't have to.
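
For instance (multi-user.target and sshd.service are just stock examples; substitute whatever unit you care about):

    # Everything a target pulls in, as a tree:
    systemctl list-dependencies multi-user.target
    # And the reverse: everything that pulls in a given service:
    systemctl list-dependencies --reverse sshd.service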

And I find it to combine the worst aspects of Windows 95 .ini files

That's kind of funny, because .ini files were one of the better parts of Win95. They were simple, human-readable, human-editable text configuration files that happened to use [bracketed] section headers, but whatever. Samba actually uses that convention for smb.conf, by the way. Anyway, Windows became much worse when they took away the .ini files and replaced them with the registry. If I need to change a configuration, I would rather do it in a plain, non-executable text file than in something structured but cumbersome like XML, or something with variables and conditionals built in that takes time to parse and study, like a shell script.
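
To illustrate the convention (the share name and path below are made up; the parameters themselves are real smb.conf ones):

    # /etc/samba/smb.conf
    [global]
        workgroup = WORKGROUP

    [public]
        path = /srv/public
        read only = yes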

Comment Re:What OpenRC ? (Score 1) 95

I like good old-fashioned runlevels, and not named abstractions that may differ from system to system

Um, why do you think "old-fashioned" runlevels are any less abstract than named process groups? A runlevel is just a group of processes to start that happens to be named with a number (ex: runlevel 3 could just as easily be called "network-enabled" and function identically). The fact that most Linux distributions used runlevels may have been a convention, but it was hardly a standard. In fact, Red Hat famously used runlevel 5 to distinguish an X environment from a console one, whereas Debian treated runlevels 2 through 5 identically (multi-user, with a display manager if one was installed). So I would definitely call runlevels "named abstractions that may differ from system to system". Since derivative distributions (ex: Ubuntu from Debian and Mandrake from Red Hat) tended to adopt the original's runlevel classification, it may have given the appearance of a de facto standard, but there really wasn't one.
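
You can see the abstraction directly on a systemd box, where the numbered runlevels survive only as aliases (the path may be /lib/systemd/system on Debian-family systems):

    # Each runlevelN.target is just a symlink to a named target:
    ls -l /usr/lib/systemd/system/runlevel?.target
    # e.g. runlevel3.target -> multi-user.target,
    #      runlevel5.target -> graphical.target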

Predictability is good.

Correct. Which includes knowing, when you change runlevels, that your processes actually started, and not just that you told them to start and maybe they failed.

So are posix scripts, which continue working even on systems where /bin/sh is lightweight ash or some other bourne family shell that isn't bash.

Some do, some don't. It depends on who wrote the script. When Ubuntu switched /bin/sh to dash, which was one of the first attempts to speed up boot times years ago, quite a few of the boot scripts broke and had to be rewritten. If you upgraded Ubuntu and suddenly one of your services didn't start, switching /bin/sh back to bash was usually the easiest fix. They eventually ironed out all of the bugs, but it was shaky for a while.
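
A typical example of that breakage, as a sketch (the script fragment is made up, but [[ ]] really is a bashism that dash rejects):

    #!/bin/sh
    # Works when /bin/sh is bash, fails under dash with "[[: not found":
    if [[ "$1" == start ]]; then
        echo "starting"
    fi
    # The portable POSIX form that works in dash, ash, busybox sh, etc.:
    if [ "$1" = start ]; then
        echo "starting"
    fi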

Comment Re:IoA (Score 1) 125

1 IPv4 address hosting a subnet with NAT vs. an IPv6 /64 prefix are roughly equivalent

Uhhh....
    2^32 = ~4.3 billion
    2^64 = ~18 billion billion

So they are only "roughly equivalent" if by that you mean "within 10 orders of magnitude of each other".

I think you meant "one IPv4 Internet (4.3 billion hosts) where each host NATs an entire IPv4 internet vs. one IPv6 /64 prefix (4.3 billion IPv4 Internets) are exactly equivalent".

In practice you can't assign anything smaller than a /64
-- snip --
It's still way more address space than we'll ever reasonably need, but not quite as ridiculous as it looks at first glance.

While true, that /64 is assigned to you out of a 64-bit prefix space. In other words, there are 18 billion billion prefixes, each with 18 billion billion addresses, so it really is as ridiculous as it looks at first glance. Not complaining, though....
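
Spelling out the arithmetic:

    2^64 prefixes x 2^64 addresses per prefix = 2^128
                                              = ~3.4 x 10^38 addresses total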

Comment Re:And this will change nobody's minds.. (Score 1) 378

This is why some 3rd world countries won't use it, not fear of GMO itself, but they don't want to be beholden to an American company for their seeds.

Hardly. Most developing countries want to use GM crops (read: farmers want to use them, but governments forbid it), but the countries they export to, like Germany, are poised to instantly block all imports if they allow GM crops to be used. Seriously, it really is that crazy. Needing to buy seed from somewhere is the least of their concerns.

Comment Re:Brace for shill accusations in (Score 1) 378

Safety is a red herring.

When you are talking about GM the technology and whether it should be used, safety is the only consideration that matters, because safety (along with effectiveness) is the only real unknown. Every other consideration is concerned with the industry of agriculture itself and has nothing to do with GM directly.

I have two objections to GM crops: biodiversity and lock-in

Biodiversity was a concern long before GM crops were on the scene. Any kind of controlled breeding and selection of popular varieties (driven by the free market) can create problems with biodiversity. Cavendish bananas are not genetically-modified, and yet they are by far the most widely used cultivar globally. Lock-in is a more valid concern, although the recent Supreme Court decision on the patentability of genes may make it less so. If seed companies can only patent the seeds, but not the actual genetic modifications, it would be the same situation as currently exists with patentable crop varieties where there is plenty of room for free market competition. Of course, the government could also refuse to recognize any patents on food crops. Either way, it is a regulatory problem, not a technology problem.

Comment Re:Stupid appers (Score 1) 127

It's not really the version of the library that's the problem, in the majority of cases. As a few have already mentioned, the interfaces often don't change between library versions, so older software can often compile fine against newer libraries. The problem is, most people want binary distributions. Source distributions are great, and very flexible, until you want to A) install something closed-source, or B) install something large and complex, like LibreOffice. Most people, myself included, don't care to sit around and wait for 2-3 hrs for something to compile just so they can get some work done. If you can just install it, you are usually much happier.

The problem is, to get binaries to share dependencies, it is not just the filenames and locations that must be the same. The symbol tables have to be exactly what the app is looking for (ie: what it was linked against). So that means the build environment has to be the same, or at least generic enough, to get the required compatibility. If you change the ABI of something like glibc, everything compiled against it has to be recompiled and relinked. That is the major source of frustration, and it is not an easy problem to solve.
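
A quick way to see what a given binary actually expects, as a sketch (/usr/bin/some-app is a placeholder for any dynamically linked executable):

    # Which shared libraries it resolves against:
    ldd /usr/bin/some-app
    # Which versioned glibc symbols it was linked against:
    objdump -T /usr/bin/some-app | grep GLIBC_ | sort -u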

Comment Re:Why? (Score 1) 127

Not really. Deb and rpm can handle multiple versions just fine, as long as the underlying software supports having multiple versions installed. Remember, a package is just a collection of files with some instructions about where to put them. So if you try to install two files with the same name in the same place, you are going to have problems. In other words, it's not the versioning, it's the inherent limitation of the filesystem itself. If a library renames itself between versions, it won't have problems, but very few go to that trouble.
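
For what it's worth, a quick sketch of checking this on a Debian-family box (libfoo is a placeholder library name; the multiarch path is the Debian/Ubuntu default):

    # Two differently-named library versions can coexist on disk,
    # so their packages can too:
    ls -l /usr/lib/x86_64-linux-gnu/ | grep libfoo
    # And list any installed packages that provide them:
    dpkg -l 'libfoo*'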

Comment Re:Read about it before commenting, people! (Score 1) 127

Interesting. That's more information than I was able to find anywhere else. Thanks.

Here's what I'm most worried about, though. How dependent is it on non-lazy packagers? In other words, the easiest and most convenient way to package anything is to ship with all dependencies and the app uses those. The problem, though, is each application is then solely responsible for updating itself, including to patch bugs in any dependencies, so it quickly leads to running a million app updaters in the background, which is the current nightmare on Windows and OS X. Ideally, this system would be smart enough to use the base system by default and only use the supplied dependency if the base system can't provide it or if there is a conflict of some sort. But I doubt it will do that, which means it is on the packagers to check the base system first before installing their own dependencies. Somehow I doubt they are going to do that, though.

Comment Re:Scant on details, high on assumptions (Score 1) 127

g. RPM dependencies are calculated from files and SONAMEs, but can also be specified manually by the packager, including version inequalities of other packages.

Debian has this too, and I think it is actually a good deal more flexible than rpm, at least from what I remember from my brief stint with Red Hat back in the day. There's a reason Debian was able to have apt long before Red Hat/Fedora had yum.

https://www.debian.org/doc/man...
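
As a sketch of what those hand-specified, versioned dependencies look like in a debian/control file (the package names and version numbers are made up; ${shlibs:Depends} is the real substitution variable that pulls in SONAME-derived dependencies):

    Package: myapp
    Depends: libc6 (>= 2.17), libfoo1 (>= 1.2), ${shlibs:Depends}
    Breaks: myapp-legacy (<< 2.0)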

Well, then that's really a problem with the community not enforcing proper requirement standards that reflect reality on important packages.

This is the real problem. And I dare say it is 99% an Ubuntu problem, because they really like to break everything with each subsequent release. Debian's testing and unstable branches have effectively been a rolling release since forever, and Debian is renowned for the incredible robustness with which packages can be shared between stable, testing, and even unstable, as well as the ease of transitioning from one to the other (ex: when testing becomes the new stable). And they have once or twice had to do some massive renaming of library dependencies, but managed it without a hiccup, which is a testament to the quality of the deb system.

No, the problem is Ubuntu. Their versioning is a constant clusterfsck of broken, incompatible package naming. And they heavily abuse "virtual" packages for their own purposes which leads to the breakage in Samba like the GP described. It is horrible release management and is one of the many things wrong with Ubuntu. However, Ubuntu manages to stay more up-to-date, and has some pretty nifty userland tools, so I find myself using it much more than Debian. But I lament every time I have to upgrade, or if I want to move packages between versions.

Snap sounds like a system with some much-needed features, but what I would really like is for those features to be integrated into deb. Unfortunately, Ubuntu is following their usual pattern of aloofness. Both Debian and Ubuntu would benefit tremendously if they could work together to enhance deb. Transactional updates? Who doesn't want that? That is a great feature. But nope, that's not going to happen, apparently. We're going to end up with another Mir, or Upstart, or Compiz (shudder).

Comment Re:Scant on details, high on assumptions (Score 1) 127

The details on this new packaging system are scarce--and I've checked--but it looks like a reimplementation of Docker,

I guess we'll find out more in time, because I too couldn't find any details on how this is implemented. If it does use containers (a la Docker), that would be really cool. As soon as Docker started getting more fleshed out, this was the first application I thought of that would be perfect for it.

An application being able to use alternative libraries is definitely a need on modern Linux. I can't count the number of times that I needed to do massive upgrades of the system just to install a newer version of an app I was using. My only worry is that working well will depend on app developers not being lazy. Snap packages can use the underlying system, but only if app developers take the time to specify their dependencies, which is something they apparently already don't want to do. So instead, they bundle their own libraries, even if those are already available on the system, and we end up with OS X-style bundles, which I'm not a big fan of. Ideally the snap system would default to using a system library when it satisfies a dependency and fall back to a bundled library only when it doesn't, but based on the scarce information available, that doesn't seem to be the case.

It would also be nice if there was a quick way to determine library versions in all installed snaps, so that you can see which might be vulnerable to recent security errata, for example. Not sure if they have plans for tools like that, but it would sure be useful.

Comment Re:Why? (Score 1) 127

Why? That has been the standard way to do what you are trying to do on Linux since forever. The way installers, like the Ubuntu installer, work under the hood is: 1) they format the disk, 2) they set up a staging area, and 3) they install everything the system needs using the package manager. Afterward they do some initial config to get the system to boot. The package manager will only install files that belong to its packages. So it won't 1) delete or empty directories that have already been created, or 2) overwrite or delete files that don't belong to it (ie: user-created files). Package-owned config files that you have modified may be overwritten (dpkg will usually prompt you; rpm saves an .rpmsave/.rpmnew copy), but in many cases there is a *.d/ directory that lets you put in custom config that won't get touched when you update or reinstall. That's why things like network interfaces are preserved when you update: the interface configs are written into a .d/ directory, allowing the package-owned config file to be upgraded without wiping away the interfaces.
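
A concrete sketch of the *.d/ pattern (the file name and addresses are made up; this assumes the stock /etc/network/interfaces that sources interfaces.d/, as Debian and Ubuntu ship):

    # /etc/network/interfaces.d/eth1.cfg -- user-created, owned by no
    # package, so upgrades and reinstalls leave it alone:
    auto eth1
    iface eth1 inet static
        address 192.168.10.5
        netmask 255.255.255.0
        gateway 192.168.10.1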

So to do a safe reinstall, the instructions are accurate: tell the installer not to format the disk, use the same partitions that were already in place, and that's it. It is actually a very well designed system. If I anticipated needing to do this frequently, the only thing I might do differently is keep /home on a separate partition (and maybe /usr/local, depending on how much I use it) so that I could safely format the root partition. But like I say, that isn't necessary just to reinstall without losing your files.

Comment Re:The future of dosage? (Score 1) 113

For that matter, the machine would not be producing the drugs, it would just be packaging them

That was my reaction (no pun intended!) too at first, but no, this is actually chemical synthesis from starting materials. It is not quite as modular as the summary implies; you need to clean and re-standardize the system to change products. But the idea is that it is capable of following a programmed synthesis and purification strategy. The purification is actually the coolest part to me. The synthesis uses an optimized flow chemistry design (think small solvent volumes, short reaction times, high temperatures and pressures), but this is fairly standard process chemistry. The purification is the complicated part, because the machine has to do liquid partitioning, column purification, and multiple recrystallization steps without human monitoring. And it has to meet USP standards for quality control at the end. That is really cool. There was some serious engineering that went into it, so even though it has somewhat limited applicability right now, it is an impressive feat.

That said, I'm not sure where this really fits. I can't think of many situations where you would benefit from on site synthesis. Remember, you would still need to preselect which drug you want to synthesize ahead of time and have all of the materials ready to go. And it would still take anywhere from 1-3 days start to finish. So in a hospital where you might work with hundreds of drug formulations, which ones are you going to maintain this system for? And is it really easier to synthesize on site as opposed to just managing shipments from manufacturing facilities? It might be able to help in the case of manufacturing shortages, but that seems like it would be a fairly rare occurrence....
