Comment Re:plausible for some setups (Score 2) 164

As for what you say about mainframe stability vs. other HW: I don't think that's entirely true.

If you mean just individual servers, I agree, regular HW isn't too bad, and doesn't take a ton of admin power. I was thinking more of the case of replacing a big mainframe with a cluster, which has a whole different kind of administrative overhead. You can generally assume that a mainframe stays internally connected and working: CPU cards don't randomly lose connections to each other, your database and application software don't have to deal with networking hiccups or node failures, etc. Whereas if you move that to a cluster architecture, even using some kind of orchestration layer like OpenStack, you can't really assume that everything will "just work". Instead it has to be architected and administered as a distributed system, which needs quite a bit of effort on both the development and operations sides.
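
To make that concrete, here's a minimal Python sketch (the service URL and payload are hypothetical) of the plumbing that every remote call tends to grow once the pieces can fail independently. On a single machine this would just be a function call that either works or raises:

```python
import random
import time

import requests  # assumed HTTP client; any RPC library raises the same issues


def call_with_retries(url, payload, attempts=5, timeout=2.0):
    """Call a remote service, retrying with jittered exponential backoff.

    Across a network you have to budget for timeouts, transient
    failures, and the fact that a timed-out request may actually have
    succeeded on the far side (so the call must be safe to retry).
    """
    for attempt in range(attempts):
        try:
            resp = requests.post(url, json=payload, timeout=timeout)
            resp.raise_for_status()
            return resp.json()
        except (requests.Timeout, requests.ConnectionError):
            if attempt == attempts - 1:
                raise
            # Jittered backoff so a flapping node isn't hammered by
            # every client retrying in lockstep.
            time.sleep(random.uniform(0, 0.1 * 2 ** attempt))


# Hypothetical usage; the caller still has to decide what a final
# failure means for the rest of the transaction:
# result = call_with_retries("http://inventory.internal/reserve", {"sku": 42})
```

And that's just one call; multiply by every inter-service interaction, plus idempotency, monitoring, and deployment.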

Comment Re:they are dying (Score 4, Insightful) 164

Thanks, that's an interesting comment. Especially with x86 servers getting fairly big these days (the 80-core, 4TB-RAM monsters you mention), I can see that being plausible for some scenarios. Is each of the services you previously ran now able to fit in a single x86 server? If so, that sounds like it'd greatly ease migration. One of the big pain points of migrating from mainframes to x86 clusters has traditionally been that it's hugely expensive to re-architect complex software so that it runs (and runs reliably) on a distributed system when it was originally written to run on a single system. But if the biggest single service you run fits in your biggest x86 box, you don't have to do the distributed-system rewrite.

Comment plausible for some setups (Score 5, Insightful) 164

The IBM pricing really is quite high (there are a ton of licensing fees for the hardware, maintenance, and software), but the systems work reliably. You get a giant system that can run a whole lot of VMs, with fast and reliable interconnects, transparent hardware failover (e.g. CPUs inside most mainframes come in redundant pairs), etc. To get a similar setup on commodity hardware you need some kind of "cloud" orchestration environment, like OpenStack, which can deal with VM management and migration, network storage, communication topology, etc. The advantage of an x86-64/OpenStack cluster is that the hardware and licensing costs are loads cheaper, and you don't have IBM levels of vendor lock-in. The disadvantage is that it doesn't really work reliably: you're not going to get five nines of uptime on any significantly sized OpenStack deployment, and it will take an army of devops people to babysit it. The application complexity also tends to be higher, because failures are handled at the application level rather than at the system level: all your services need to be able to deal with non-transparent failover, split-brain scenarios, etc. And the I/O interconnects between parts of the system (even on 10GigE) are much worse than mainframe interconnects.
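
As a sketch of what "handling failures at the application level" means in practice, here's a hypothetical quorum check of the kind a clustered service has to run before acting as primary, to avoid split-brain; the node counts are purely illustrative:

```python
def have_quorum(votes_received: int, cluster_size: int) -> bool:
    """True if this node can see a strict majority of the cluster.

    A strict majority guarantees at most one partition can act as
    primary at a time: two disjoint partitions cannot both contain
    more than half the nodes. A mainframe never has to ask this
    question, because its components don't partition from each other.
    """
    return votes_received > cluster_size // 2


# With 5 nodes, a partition of 3 keeps quorum and may serve writes;
# the partition of 2 must refuse to, rather than silently diverge.
assert have_quorum(3, 5) and not have_quorum(2, 5)
```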

Comment Re:Fuck Me (Score 4, Informative) 553

Putting it in PID 1 is mostly driven by cgroups (the Linux kernel's hierarchical process-grouping/resource-management system). The initial kernel design for cgroups was a shared resource managed via a pseudofilesystem (cgroupfs), but the developers of that subsystem seem to have decided that design was unworkable, and are moving towards a design where there can be exactly one userspace controller of the cgroups system at any given time. That more or less has to also be the process supervisor, or else you can't really do sensible things with tying resource management to services (and, increasingly, containers). And that all has to happen when the system is brought up, too. So either it needs to be in PID 1, or it needs to be in several PIDs that are tightly coupled via an IPC mechanism. The systemd designers consider the second design more complex and error-prone. See e.g. here, plus a third-party comment here.
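
For context, this is roughly what driving cgroups through the pseudofilesystem looks like from userspace. A minimal sketch, assuming the cgroup v2 unified hierarchy mounted at /sys/fs/cgroup, root privileges, and the memory controller enabled for the subtree; the cgroup name and limit are invented. Note that it's all bare mkdirs and file writes with no locking, which is exactly why the kernel developers want a single userspace writer:

```python
from pathlib import Path

CGROUP_ROOT = Path("/sys/fs/cgroup")  # assumes cgroup v2 unified hierarchy


def limit_service(name: str, pid: int, mem_bytes: int) -> None:
    """Create a cgroup, cap its memory, and move a process into it.

    Everything happens via mkdir and file writes on the
    pseudofilesystem; two uncoordinated programs doing this to the
    same hierarchy would trample each other's layout.
    """
    cg = CGROUP_ROOT / name
    cg.mkdir(exist_ok=True)
    (cg / "memory.max").write_text(str(mem_bytes))  # v2 memory limit
    (cg / "cgroup.procs").write_text(str(pid))      # migrate the process


# Hypothetical usage (requires root):
# limit_service("webapp", pid=1234, mem_bytes=512 * 1024 * 1024)
```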

Comment Re:Fuck Me (Score 4, Informative) 553

It's a process supervisor / service management system. Booting the machine isn't really the most difficult job of such a system, just the special case of starting some things on boot. More of the work goes into the non-boot case, and at the moment a lot of interest is in container-based virtualization. The kernel cgroups system provides the basic primitives for building such systems: hierarchical process groups, resource limits, etc., but you need a userspace layer to make it usable, e.g. managing creation/destruction of containers (and their associated networking, resources, etc.). Systemd is the userspace layer.
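
To illustrate what that userspace layer buys you: the same resource-capping as raw cgroupfs writes, delegated to systemd in a single call. A sketch using the real systemd-run tool; the unit name and limit are invented, and MemoryMax= assumes a cgroup-v2-era systemd (older versions spell the property MemoryLimit=):

```python
import subprocess

# Run a command in a transient systemd scope with a memory cap.
# systemd creates the cgroup, applies the limit, tracks the processes,
# and cleans up when they exit; the bookkeeping is all delegated.
subprocess.run(
    [
        "systemd-run", "--scope",
        "--unit", "demo-capped",   # hypothetical unit name
        "-p", "MemoryMax=256M",    # resource-control property
        "sleep", "60",
    ],
    check=True,
)
```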

There are fairly similar approaches in other Unixes, each with its own tradeoffs. Solaris uses SMF, and OSX uses launchd, both of which replaced more old-school shell-script-based systems for similar reasons. FreeBSD has toyed on and off with porting launchd from OSX, but the porting effort stalled. For the moment it relies on a more "DIY" approach, where it's up to the sysadmin to maintain a tangle of shell scripts plugging things together, e.g. integrating jail management with resource constraints (RCTL), services, and networking. All the pieces are there, but either you write your own shell scripts to glue them together, or you use something like cbsd, which has tradeoffs of its own.
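
For flavor, here's a hypothetical fragment of that DIY glue, wrapping the real jail(8) and rctl(8) tools from Python; the jail name, path, and limit are invented, and it assumes a FreeBSD kernel with RACCT/RCTL enabled:

```python
import subprocess


def start_jail_with_limits(name: str, path: str, mem: str = "512m") -> None:
    """Start a persistent jail and attach an RCTL memory limit to it.

    This is the kind of glue the base system leaves to the admin:
    jail(8) creates the environment, rctl(8) applies the resource
    rule, and nothing ties their lifecycles together unless your
    script does.
    """
    # Create a minimal persistent jail (parameters per jail(8)).
    subprocess.run(
        ["jail", "-c", f"name={name}", f"path={path}",
         f"host.hostname={name}", "persist"],
        check=True,
    )
    # Deny memory allocations beyond the cap (rule syntax per rctl(8)).
    subprocess.run(
        ["rctl", "-a", f"jail:{name}:memoryuse:deny={mem}"],
        check=True,
    )


# Hypothetical usage (as root):
# start_jail_with_limits("web1", "/usr/jails/web1")
```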

Comment not great, but probably not very important either (Score 1) 105

This kind of exploit, a local privilege escalation, used to be very significant, but it matters in a declining number of cases, as old-style multiuser Unix systems make up a smaller and smaller share of deployed machines. In all likelihood anyone with a user account on a North Korean computer is pretty heavily monitored, and policy compliance can be enforced by "other means" than Unix permissions.

Comment Re:Nope (Score 2) 165

In this case it appears to actually be a firm hired by the porn company in question. The complaint was sent by Takedown Piracy LLC, one of those fly-by-night operations that mass-produces incompetently drafted DMCA requests on behalf of clients (it's important not to do any competent lawyering, because that would cut into the profit margin). Here they were hired by a company called Adam & Eve (NSFW, obviously) and sent the letters on that company's behalf.

Comment any repercussions? (Score 5, Interesting) 165

In theory, submitting a false DMCA request is illegal. There are theoretically plausible civil suits as well, if someone submits a false or reckless DMCA request that damages your business. But has anyone in history actually suffered any repercussions for submitting a false DMCA request? People seem to submit false ones all the time, and not just borderline mistakes: everything from reckless disregard for the truth to outright maliciously false requests (e.g. for SEO purposes). Yet I have never heard of anyone being prosecuted or sued for it.

Comment Re:Even more useless than politicians (Score 1) 300

Some are less sci-fi than others though. An astrobiologist studying the possibility of life on Mars at least has some pretty concrete work they can do: there is new data coming in, there are experiments that can be performed with probes to confirm or rule out some theories, etc. An astrobiologist studying the possibility of star-eating lifeforms in deep space has... less concrete work to do.

Comment Re:feeding the trolls (Score 1) 41

Yeah, the FCC has a bit of a garbage-in, garbage-out problem with the whole complaint system. The proportion of knowledgeable complaints about things like signal interference or fraudulent business practices is pretty low. Lots more people use the FCC complaint system to file a complaint about curse words or a flash of a breast on network TV.
