Comment Re:Old codes I remember using (Score 1) 611

Modern printers don't use Epson ESC/P or any other traditional imperative, escape-code-driven control. ESC/P hasn't been used directly by mainstream software since the end of DOS-style word processors; at best, newer stuff would just use the control codes to do single/double/quad-density raster printing. Printers are now either dumb, basically accepting a raw bitmap (or equivalent) to drive the print head, or they accept PostScript/PDF/PCL. It's been that way since the dawn of scalable type (DR GEM with Type 1, MS Windows with TrueType, MacOS). The only place I've used ESC/P in the last two decades was point-of-sale software: driving receipt and label printers, including barcode printing, plus dot matrix output for billing. But obtaining printers which supported it, even from Epson, was difficult 12 years ago, so I'm not sure what it's like today. Most of their stuff from the last decade dropped ESC/P support entirely; it's all REMOTE mode, which is basically going back to those giant bitmaps, with everything in the client-side driver for the most part. (I used to be one of the printer driver maintainers for Linux.) ESC/P does still exist, but only in custom niches with software written to drive specific hardware.
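For anyone curious what that niche use looks like, here's a minimal Python sketch of driving a receipt printer with raw escape codes. The device path and the cut command are assumptions on my part; the exact ESC/P and ESC/POS code sets differ between models, so check the printer's manual.

    # Minimal sketch: send raw ESC/POS-style codes to a receipt printer.
    # /dev/usb/lp0 and the cut sequence are illustrative, not universal.
    ESC = b"\x1b"
    GS = b"\x1d"

    with open("/dev/usb/lp0", "wb") as printer:
        printer.write(ESC + b"@")             # initialise the printer
        printer.write(b"RECEIPT #0042\n")     # plain text, one line per LF
        printer.write(b"Total: 12.50\n\n\n")  # feed a few lines before cutting
        printer.write(GS + b"V" + b"\x01")    # partial paper cut (ESC/POS)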

Comment Re:Money to be made... (Score 1) 307

They have, but not at the small scale required. I've tried to find a link for this but can't. When I was chatting to a specialist concrete engineer, he mentioned that some reactors have digital electronics embedded into the reactor walls. Transistor junctions are created from ceramic pieces wrapped/plated with gold, which are then further encased in ceramic and embedded in the concrete walls. This is mega-electronics rather than micro-electronics, operating at very high voltages, so you can't fit much logic into the limited space, but for simple, safety-critical tasks in high-radiation environments it works: the structures are so large that radiation damage doesn't affect them significantly the way it does small-scale electronics.

Comment Re:Radiation wrecks robots? (Score 3, Insightful) 307

Where "everything" is a light water reactor such as the BWRs and PWRs of this period, it certainly looks like that's the case. That isn't universally true though; the graphite-moderated Magnox and AGR designs of the same era can passively cool entirely by CO convection. The downside is they have a lower power density, but the only failure I've read about was a partial melt of a single Magnox fuel rod after a blockage in a single channel interrupted the airflow.

Comment Re:"wannabe GitHub alternative" ? (Score 1) 101

A runner is a job scheduler running on a remote host, which can be your own machine or hosted wherever you like. When you push a branch or open a merge request, you can have it trigger builds on any registered runners (you can have as many as you like). A workspace is a place to store the stuff resulting from that build, such as libraries, binaries, documentation and so on. This means you can have a CI workflow and deployment hooked directly into the merge request and code review process. This stuff also works at a higher level than GitHub: with GitHub, Travis and other CI builds are tied to a single project, whereas with GitLab they can also operate at the level of an organisation. That lets you use workspaces to test multiple projects in sequence, doing CI and deployment across multiple repositories to test all the downstream dependencies, or whatever you need to do. It's all documented on the GitLab site. I'm still in the early stages of trying all this stuff out myself.
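As a flavour of how the CI half is wired up, here's a minimal .gitlab-ci.yml sketch. The job names, the "linux" runner tag and the build commands are all made up, and what I called a workspace above corresponds roughly to the job artifacts here; the real file lives in the repository root and any registered runner with a matching tag will pick the jobs up.

    # Illustrative .gitlab-ci.yml: a push or merge request triggers "build"
    # on any runner tagged "linux"; its output is kept as artifacts for the
    # "test" stage and for download from the merge request.
    stages:
      - build
      - test

    build:
      stage: build
      tags: [linux]
      script:
        - make
      artifacts:
        paths:
          - build/

    test:
      stage: test
      tags: [linux]
      script:
        - make check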

Comment Re:"wannabe GitHub alternative" ? (Score 4, Interesting) 101

GitLab does a bunch of stuff which GitHub doesn't. The most significant for me is the integrated CI, and the fact that you can host your own runners and workspaces on your own infrastructure (or some cloud provider). Compared with Travis or some other CI hook on GitHub, this is vastly more flexible and powerful. I also find the ability to assign people for review, milestones and such on issues and merge requests to be very nice features which GitHub lacks. It is a GitHub clone, but they seem to have taken the lead in implementing more advanced functionality. At work, we're currently looking into a trial of GitLab plus our own multi-platform CI runners as an alternative to GitHub+Travis and an internal Jenkins with several hundred jobs. It stands to greatly reduce the number of failures, and the admin and developer time spent keeping that lot going.

Comment Re:What happens to ZFS? (Score 1) 127

Same underlying codebase, yes, but it's integrated into the system much better on FreeBSD. On FreeBSD I can use the full NFSv4 permissions model; on Linux it's restricted to standard permissions and maybe POSIX ACLs mapped to NFSv4 ACLs (not sure if that's functional). With a suitable NFS or CIFS client, those extended ACLs are available and useful on client systems as well. On FreeBSD any user can run the zfs, zpool and other commands, with proper permissions control over which actions may be performed; on Linux, these all require root. On FreeBSD it's also possible to delegate admin permissions on a per-dataset basis, e.g. to allow a user or group to snapshot and clone, or send and receive datasets. None of that works on Linux. On FreeBSD I can set the sharenfs property and immediately get a dataset (and all its child datasets) recursively exported. Doesn't work on Linux.
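To give a flavour of the delegation and sharing bits on FreeBSD (the pool, dataset and user names here are made up):

    # Delegate snapshot/clone/send/receive rights on one dataset to one user:
    zfs allow alice snapshot,clone,send,receive tank/home/alice

    # Export a dataset (and its children) over NFS straight from the property:
    zfs set sharenfs=on tank/export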

Those are just the few I noticed. If you want to use ZFS seriously, FreeBSD gives you a much more useful environment; Linux needs to better integrate it at several levels to bring it up to the same place. Linux sorely needs NFSv4 ACL support in the VFS for starters; it would also make NFSv4 vastly more usable.

Comment Re:Thank you Debian maintainers (Score 1) 124

NFS is still broken for me (Ubuntu 16.10). It fails to mount on startup almost every time; occasionally it succeeds, maybe one time in ten. Some race, I guess, but who knows? It's too much of a black box to debug easily. With sysv-rc I could step through every script by hand and pinpoint a failure to the line. Contrast that with the "old" FreeBSD and Debian systems using BSD init or sysvinit: they manage to mount the NFS filesystems reliably, every damned time. Which is of course what I'm looking for. Is reliable startup too much to ask for?

Comment Re:One can hope (Score 1) 124

There's another factor to the migration: familiarity.

Moving from a pre-systemd to a systemd Linux system fundamentally changes many aspects of system administration and maintenance, not just because of the init system replacement, but also because of all the other tools which systemd absorbed, replaced or made mandatory. While FreeBSD has its minor differences, a contemporary BSD system is significantly more similar to a pre-systemd Linux system than to a systemd one. I personally feel more at home there (it's very similar to a traditional Debian system in many respects), and my 2+ decades of Unix expertise is as relevant there as it ever has been.

I'm not a Luddite, but change for the sake of change isn't automatically a good thing, and while systemd has some good ideas, the design and implementation leave a lot to be desired. There doesn't seem to have been as much consideration for backward compatibility of interfaces and configuration as there could have been. Linux was a mature platform, and you don't go around making gratuitous incompatible changes to such systems. At least, not without major fallout, which we continue to see.

Comment Re:One can hope (Score 4, Interesting) 124

This behaviour is where I really dislike the systemd way: the "it will be done this way, and only this way" attitude. It's my system; why should I not be the one who gets to decide policies such as this? In the initscripts world, this would have been handled through a little configuration file in /etc/default which customised the behaviour of the script (or you could edit the script itself if you wanted something truly custom). While systemd does allow some modicum of customisation in the unit files, there's a heck of a lot of policy and behaviour encoded directly in the implementation, which an admin isn't going to be able to touch without rebuilding the thing. While old and crusty, sysv-rc and initscripts left every part out in the open and hence amenable to changing and tweaking. So the "don't boot if a single filesystem in fstab fails to mount" policy would have been a tweak to the mountall script (or better, one of the mount helper shell functions).
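For anyone who never saw it, the /etc/default pattern was roughly this simple. The file name and the variable below are purely illustrative, not real initscripts knobs:

    # /etc/default/mountall -- plain shell fragment owned by the admin
    # (hypothetical variable, purely to show the shape of the mechanism)
    FAIL_ON_MISSING_FS=no

    # ...which the matching rc.d script sources and acts on:
    [ -r /etc/default/mountall ] && . /etc/default/mountall
    if [ "$FAIL_ON_MISSING_FS" != yes ]; then
        echo "warning: some filesystems failed to mount; continuing anyway"
    fi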

Comment Re:One can hope (Score 1) 124

Yep, exactly.

I used Linux exclusively from the mid-90s until almost exactly two years ago. I was aware of the BSDs, occasionally read about them, and once installed one for a few hours just to see, but never had any real reason to bother with them. It seemed like it would be a lot of pain for a worse experience, particularly when you had to build all the ports and cope with worse hardware compatibility.

One of my work colleagues is a long-time Debian user. For the last 18 months, his servers have been running OpenBSD.

When it came time to upgrade from wheezy to jessie, I had the option of futzing with the system to retain or reinstall sysvinit, but since that's clearly not supported properly and several key packages deliberately depend upon systemd, you get an inferior experience which is likely to continue to regress. So I looked at FreeBSD, anticipating it would be awful.

What a revelation. With the new pkg tool, installing and upgrading packages is on a par with apt, and with 25,000 packages it's rare to find anything missing; it's at least as comprehensive as the Debian archive. It's also gratifyingly up to date for the most part, and if you track the weekly (rather than default quarterly) builds, you're pretty much always current with the tools you need (e.g. cmake and clang for me). And then there's ZFS, the "killer feature": absolutely superb, really well integrated, and vastly easier to use and more featureful than the Linux port. That alone makes it worth using for archiving and serving files.

While there are plenty of people who cope with systemd, or even like it, it's spurred an awful lot of people to step out of the "comfort zone" of Linux, and take a proper look at the alternatives. For some of us, it's been an eye-opener to see just how capable those alternatives are, and we've not looked back.

Comment Re:One can hope (Score 4, Informative) 124

Yes, we did have that modularity.

We previously had these components:
  • sysvinit (low-level process spawning and runlevel change triggering; all done from /etc/inittab)
  • sysv-rc (intermediate-level script to effect runlevel changes by batch invoking rc.d scripts)
  • initscripts (high-level rc.d scripts to do the actual work of bringing the system up or down, with helper scripts to unify logic shared between multiple scripts)
  • package-specific scripts which use the initscripts helpers to do each package's specific actions
  • insserv and startpar, which use the LSB script headers to compute a global dependency graph (with make-like semantics) and allow sysv-rc to start the scripts up in parallel with proper dependency constraints (see the header sketch just after this list)
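For reference, the LSB header block that insserv reads from the top of each rc.d script looks like this (the service name is just an example):

    ### BEGIN INIT INFO
    # Provides:          mydaemon
    # Required-Start:    $network $remote_fs
    # Required-Stop:     $network $remote_fs
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: Example daemon started from /etc/init.d
    ### END INIT INFO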

Not only is it modular, the system is fully composable, allowing the admin to build each layer upon the ones below to their own liking. The layers are not tightly coupled, and it's entirely possible to replace any or all of them:

  • You can swap out sysvinit but retain sysv-rc and all the higher-level parts; example: s6
  • You can swap out sysv-rc; examples: file-rc, which gives a BSD-style startup with a single file configuring what starts, or daemontools, which runs directly from inittab and does process supervision
  • You can replace sysv-rc and the initscripts with whatever you like while retaining sysvinit; example: OpenRC, which replaces sysv-rc and uses its own scripts

When people complain about sysvinit being old and outdated, these claims usually treat the sysvinit+sysv-rc+initscripts triad as a single entity. sysvinit is old, but it's a tool with just two purposes: running specified programs and runlevel switching. You can build anything you want on top of that. It does exactly what it was designed to do, and *only* what it was designed to do. It's not broken, and never was. If you want more functionality, you build it on top.
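To put that in perspective, the whole of sysvinit's own configuration is a handful of /etc/inittab lines along these lines (a trimmed illustration, not a complete Debian inittab):

    # default runlevel
    id:2:initdefault:
    # run the rc scripts at boot and on each runlevel change
    si::sysinit:/etc/init.d/rcS
    l2:2:wait:/etc/init.d/rc 2
    # respawn a getty on tty1
    1:2345:respawn:/sbin/getty 38400 tty1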

Some parts of the old system were crusty, for example dynamic networking configuration. But the vast majority worked pretty well, and pretty efficiently. And it would have been perfectly possible to fix those issues, with vastly less effort and disruption than throwing it all away and breaking much backward compatibility in the name of inter-distribution uniformity (and consequent stagnation).

Note that while common distributions came with their defaults, it was absolutely possible to run with all sorts of different combinations of components; Debian supported several. file-rc was a supported alternative to sysv-rc, and daemontools and other alternatives were also available. It's this very flexibility which allowed systemd to be swapped in relatively easily. But consider that once systemd was adopted, the vast majority of this flexibility was lost. The low-level init, the rc runner and the initscripts are all in one place, and it's no longer possible to swap one part for another or tweak one little bit. It's all or nothing, and that will effectively entrench it.

As an ex-Debian sysvinit/sysv-rc/initscripts maintainer, I wasn't dictating that you use them all together. You want to use OpenRC, or daemontools, or s6? Go for it; you don't need me to approve it, you do what you like. Want to change the initscripts around to do something different? Be my guest. We also took care not to break any custom setups on upgrade, e.g. preserving file-rc configuration when adding/removing/upgrading packages, as well as keeping the helper script API stable.

Contrast that with the top-down, dictatorial approach which comes from the systemd people: you'll use the system the way we tell you to, and no, we don't approve of you doing anything non-standard unless we like it (and good ideas only come from us, so forget it). And if you do change stuff and it breaks, that's 100% your fault, since we don't care to consider it. That's the real difference: the attitude and thought behind the design, and how that affects your freedom to use your system as you see fit. And that's one major reason why my servers now run FreeBSD.

Comment What's better (Score 1) 403

Why would it be desirable to run bash on Windows 10 when I'd get a better experience using bash on anything else, be that Linux or a BSD, whether native or virtualised? I can understand that for some people this might be their only choice, but that doesn't make it good; it's just making the best of a bad situation. If they want me to try it, they'll have to make it better than on Linux, not just "good enough to ship". Because if I'm going to use Windows 10, it had better have some concrete benefit, given all its massive downsides.
