Being kept on life support by those who still care definitely counts as alive, but I wouldn't say "well" for any of them. They're all in the long-tail phase of life: those who still use them won't change unless forced, but basically nothing new is being done with them, so the user and support bases will slowly dwindle until they truly are dead.
Exactly. Using ridiculous name-calling like "immature twats" for folks challenging systemd is taking sides.
Let's take a look at the full quote again...
Glad to hear. And for what it's worth, I think it's a shame some elements in the community behaved like they did. I chalk it off to them being immature twats, but mostly it's that people are people, and a good chunk of humanity are just idiots.
To me, that's calling the subset of the community who were shitty about it immature twats, not taking sides in the debate at all. Anyone can agree that there were plenty of people on both sides of the systemd issue who were most certainly deserving of the title "immature twat", so I don't have any problem with this statement.
I think you then sort of pot-kettle-black yourself with this:
It's not possible to have a reasonable collaboration so long as systemd has activist fans who do not take the time and care to understand the criticisms.
As if the pro-systemd side was the only one with activist fans who don't understand the actual situation. Many of the criticisms have merit, but others do not and yet are continually parroted. Take the binary log, for example: the anti-systemd folks constantly complain that it's not ACID and that it'll be easily corrupted in a crash, but they never quite manage to explain how the plain-text log doesn't have the same problem.
Personally I dislike systemd's breadth due to its impact on portability for those apps that would have to interact with it in a systemd environment, but I want an init system that is aware of dependencies so my boot process doesn't have to wait on something slow. As a side benefit I can have it automatically restart dependent services when something important needs to be restarted, like networking. Basically I'm not a fan of systemd, but it seems to be the only realistic chance to get what I want in an init system any time soon, so given the choice between it and the status quo I say: all hail our new integrated overlords.
Problem is, switching from bash scripts to systemd isn't going to make you go any faster. Bash scripts were designed for systems with clock speeds of single-digit megahertz. On a modern system, spawning a dozen is hardly noticeable.
It's not the script itself, it's the fact that init scripts are run sequentially. I have eight cores, there's no reason for seven of them to be sitting idle while one of the services being started shoves its thumb up its ass. Anything else that doesn't care whether the stuck service lives or dies should be able to start on its own.
Traditional init scripts cannot do that.
Obviously that doesn't matter to the person who admins a cloud style infrastructure where each host (virtual or physical) has only one role and thus dependencies are basically sequential, but to anyone on an end-user machine or anyone operating a multi-role server it's very useful.
tl;dr: systemd may not be the right answer, but bash scripts are not without some major flaws which will require reducing or eliminating their influence on the boot process to resolve.
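The dependency-and-parallelism point above can be sketched in a few lines. The service names and dependency graph here are made up for illustration, and Python's standard-library `graphlib` stands in for a real init system's dependency resolver:

```python
# Sketch of dependency-aware parallel startup: anything whose dependencies
# are satisfied starts immediately, instead of waiting in a sequential queue.
# Service names and the 10 ms "startup work" are hypothetical.
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter
import time

# service -> set of services it depends on (assumed example graph)
deps = {
    "network": set(),
    "syslog": set(),
    "database": {"network"},
    "webapp": {"database", "syslog"},
}

started = []

def start(service):
    time.sleep(0.01)  # stand-in for the real startup work
    started.append(service)

ts = TopologicalSorter(deps)
ts.prepare()
with ThreadPoolExecutor() as pool:
    while ts.is_active():
        ready = list(ts.get_ready())
        # start everything whose dependencies are already met, in parallel
        list(pool.map(start, ready))
        for svc in ready:
            ts.done(svc)

print(started)  # "network" and "syslog" run concurrently; "webapp" comes last
```

If "database" hangs here, "syslog" still comes up on its own core, which is exactly what a sequential rc script can't give you.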
If the task is complex, you aren't making it any simpler by hiding it in a black box. Hiding the details only makes things look deceptively tidy. It doesn't actually make them tidy.
If the task is complex, you can do it over dozens of times or you can do it once well. No one's hiding anything, it's all open source. It's just that with systemd or a similar system any tangled messes that may have to remain aren't directly exposed to people who just want to change a command line flag or run their homebrewed app as a service.
When you use a lot of software from outside the normal system packages, you tend to notice that a lot of init scripts end up looking very much like a lot of others. There seem to be a few basic templates that people started modifying and that ended up forked a bunch of different ways, all to accomplish the same basic things.
If you implement all of those common tasks in one standard tool or package of tools, suddenly all of those different reinventions of the wheel, each potentially buggy in its own way, become one easily updated thing. You have to get that one thing right, but with large distros going this way any major issues are likely to show themselves quickly. As long as they can be fixed in a timely fashion it's not a huge problem.
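To make the "one standard tool" point concrete, here's a sketch of a hypothetical systemd unit (the service name and paths are invented): the daemonizing, restart-on-failure, privilege-dropping, and dependency-ordering boilerplate that every init script reimplements becomes a few declarative lines.

```ini
# /etc/systemd/system/myapp.service -- hypothetical example service
[Unit]
Description=My homebrewed app
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --port 8080
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

Someone who just wants to change a command-line flag edits `ExecStart` and is done; the shared machinery behind it is maintained and fixed in one place.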
For the record I take no position on systemd. I believe that a parallel, dependency-based replacement for init is far overdue so I like the concept as far as that goes but I don't know enough about the implementation to judge systemd specifically.
I may be mistaken in my understanding of heat and efficiency, but I believe that if you have electric (resistive) heat in your home, the "waste" heat from a computer costs the same per unit of heat as the heat from your heaters.
If gas is cheaper than electricity for you like it is for me that doesn't really help as much, but if you have electric heat you may as well run Folding/SETI/Bitcoin/whatever during the cold season.
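The arithmetic behind this is simple enough to sketch; the prices below are assumptions for illustration, not real rates:

```python
# Back-of-the-envelope check: with resistive electric heat, a computer's
# waste heat costs the same per kWh of heat as the heater itself, because
# both turn essentially 100% of the electricity they draw into room heat.
ELECTRICITY_PER_KWH = 0.13  # USD, assumed rate
GAS_PER_KWH = 0.04          # USD per kWh of gas energy, assumed rate

def electric_heat_cost(kwh_of_heat):
    """Cost of delivering kwh_of_heat from any resistive electric load."""
    return kwh_of_heat * ELECTRICITY_PER_KWH

heater_cost = electric_heat_cost(10)   # baseboard heater delivering 10 kWh
pc_cost = electric_heat_cost(10)       # PC running Folding@home, same 10 kWh
furnace_cost = 10 * GAS_PER_KWH / 0.90 # 90%-efficient gas furnace, same heat

print(heater_cost, pc_cost, furnace_cost)
```

The heater and the PC come out identical per unit of heat; the gas furnace only wins because gas costs less per kWh of energy where it's available.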
Agreed as a general principle, but in this case it's a matter of power in vs. power out. If the average power out over a reasonable time exceeds the average power in, it's producing power. There's not much room for fooling anyone as far as this is concerned; we can easily measure both. If we can objectively measure a claimed ability, it's just a matter of having enough unrelated individuals or teams perform the measurement that reasonable doubt is defeated. All that's needed here is a search for experimental flaws. If they're truly metering power in and power out, the rest is a matter of what the numbers say.
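The test really is that mechanical; a toy version with made-up meter readings looks like this:

```python
# Toy illustration of the power-in vs. power-out test: average both over a
# measurement window and compare. All sample values are invented.
def average(samples):
    return sum(samples) / len(samples)

power_in_watts = [102.0, 98.5, 101.2, 99.8]   # metered input, assumed samples
power_out_watts = [61.3, 60.8, 62.0, 61.1]    # metered output, assumed samples

gain_watts = average(power_out_watts) - average(power_in_watts)
produces_net_power = gain_watts > 0

print(produces_net_power)  # for these numbers the device consumes more than it makes
```

No judgment call is involved: repeat the metering with independent teams, and either the average output exceeds the average input or it doesn't.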
"Psychics" and those claiming magical powers over subjective matters, that's where magicians shine. When the test involves a judgement of the mind, bring in those who make a living by fooling it.
Perhaps I'm being a little hard on the Linux users. Many of those at Linux conferences are carrying MacBooks.
I was this guy for a while. I like open source, but it's not my top priority. I just want a computer that lets me get done what I need to get done most effectively. For admining *nix boxes and diagnosing networks it's pretty helpful to be on a *nix box yourself, but which variety is not really important as they all tend to have the same basic tools available.
A Mac laptop provides a *nix machine that is 100% supported by the manufacturer, with a well-known OS and a good selection of both free and commercial applications. For a while they were better built than most, too; my last MacBook Pro was the generation before the "unibody" models, and only ThinkPads really compared as far as sturdiness. I haven't yet found a laptop that feels stiffer than one of the unibody models, though some of the PC vendors have adopted that design as well and are thus in the same range.
These days I'm running a refurb Acer because the battery on the MBP wore out and I couldn't see putting more than a third of its remaining value into a replacement on a Core 2 whose motherboard couldn't reliably handle 4GB RAM sticks (only 2+2 was officially supported; 2+4 was reported to work with most sticks, 4+4 was not usable in most cases). Core i7, 8GB, 1080p, and 100% of the hardware works in Ubuntu 14.04, so it fits what I need in a laptop.
Why anyone who doesn't need Final Cut would ever buy a Mac desktop I'm not really sure. The G5-like Mac Pro was intermittently price competitive with other workstations in dual-socket form but the new trash can model strikes me as the second coming of the G4 Cube.
Mozilla hasn't made any notable public comments I can find since acknowledging that they would support EME (Encrypted Media Extensions) back in May. I do not see anything about the feature having made it to even the trunk yet, so it'll probably be a while.
Also curious, what difference do you see between Firefox and Chrome as far as UI? I'm on a Windows machine right now so I can't verify if it's the same on Linux but aside from slightly rounder tabs and more blue Firefox 32 looks pretty much the same as Chrome 39. Firefox has a separate search bar by default and the back/forward/refresh layout is a bit different but if I ignore the extra buttons my various extensions have added to both the color scheme is the most significant difference.
Did I ever say I had a problem with Windows overall? I don't, at least no more than any other ordinary OS. It's that second part...the one that starts with an X and ends with a P. That's the problem. Like I said, deploying new Windows XP is fucking stupid.
Windows itself is a fine core platform these days. The key is "these days," meaning not a full major revision and two lesser (but hard to call minor) revisions ago.
I'd still personally prefer Linux or a BSD, but I'd have a hard time making a purely technical case for that.
Now they are cheap PCs running poorly configured operating systems.
The important part. Brand new systems are still being deployed with Windows XP. Anyone who doesn't see how fucking idiotic that is should never be allowed to make an IT-related decision again, but unfortunately the people who make these decisions don't know and aren't held accountable for their stupidity.
Most of the local banks have installed new Diebold ATMs that scan checks automatically. I saw one reboot the other day. Take a wild guess what OS...
Fuck "enterprise IT" and the bullshit anti-update mentality. If you can't update, you're doing it wrong.
It's called free market: demand sets the price. Suck it up.
A free market requires competition. If you're required to use this specific model, there is no competition. That is not a free market. Suck it yourself.
Because school districts taxing property owners and buying calculators is so much more efficient than students obtaining their own calculators with that same money.
Who said the students would keep the calculators? The only situation where you MUST HAVE THIS SPECIFIC CALCULATOR is in the classroom. Keep the calculator there! The special calculator stays where people find it worthwhile, everywhere else the rest of us can use a computer like a normal person.
If you're actually going in to a field where having a fancy calculator is useful versus a smartphone you can buy it yourself then. Most of us have absolutely no need for these things beyond the few tests for which they're required.
Best not to say "Try it in Linux" on Slashdot; you're a lot more likely to run into someone who already has. My laptop and server are exclusively Linux and my desktop dual-boots. Ubuntu LTS all around: 14.04.1 on the desktop/laptop, and I haven't gotten around to upgrading the server from 12.04.5 yet. AMD even lost performance per clock compared to their own earlier chips with their recent ones.
My home server previously ran a Phenom II X4 945, a 3 GHz quad core released in mid-late '09. That motherboard blew up after a power event, so I switched to an A10-7850K, a 3.7 GHz (4.0 GHz turbo) quad core released in January of this year. It's both clocked faster and a full four years newer, plus I threw double the RAM at it since I had to buy new sticks anyway for DDR3 vs. the old DDR2, yet somehow it's slower in the real world. My usenet downloads parcheck/extract slower, my Minecraft server bogs down more often, and it can't even manage to proxy Steam traffic at the full 100 Mbit/s my internet connection allows.
As for Phoronix, how's this one? The very processor I'm running, the top model of the latest core AMD has released.
In looking at the results the AMD A10-7850K is supposed to be in line with the Intel Core i5 4670K according to AMD's expectations. However, with Ubuntu Linux on this hardware the Core i5 4670 (non-K) was generally running noticeably faster than the Kaveri APU. This is a big deal since the Kaveri APU sells for $190 USD where as the i5-4670 is not much more at a price of about $218.
It barely keeps pace with the Core i3s on average.
I have historically defaulted to AMD. My last Intel outside of laptops was a 300 MHz Pentium II. I regret going with AMD for the server and unless they pull something huge out of their ass in the near future I'll be changing my desktop over as soon as the prices drop on the now last-gen Intels.
Since slashdot stupidly still doesn't allow edits, here's the mandatory car analogy:
SSDs are like snow tires in Colorado. Sure you can get along without them but you're losing a lot by doing so.
Because SSDs are literally the best thing you can do for your computer's performance in desktop applications. Most of the time you're nowhere close to CPU limits and these days standard RAM levels are finally high enough that only the cheapest shitboxes hit swap in normal browsing/chatting/office type tasks. Everything is waiting on the slow old hard drive. Make that an order of magnitude faster and it shouldn't be a surprise that you can rejuvenate even an old computer.
My work laptop is a Dell Vostro from 2010 with a sub-2GHz Core 2 Duo processor. It runs circles around most of my customers' computers in day-to-day stuff, even when they have Core i-series processors, solely because it has enough RAM (8GB) and, more importantly, an SSD. It's not even a great SSD, just a cheap Kingston, but it makes a huge difference.
The correct answer for any new computer is a reasonably sized SSD for the OS and applications, combined with a regular hard disk for larger stuff like media collections where random access time isn't as important. Only gamers really need to compromise; with so many games these days exceeding 10GB it's still too expensive for a lot of us to have our entire game collections on SSD, but in that case it's still not hard to install whatever you play most to the SSD and put older/less commonly played titles on the HD.
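The "park old games on the big disk" move is easy to script; a minimal sketch, with hypothetical paths and nothing Steam-specific:

```python
# Move a game directory from the SSD to the HDD and leave a symlink behind,
# so launchers and save paths keep working. Paths below are invented examples.
import os
import shutil

def park_on_hdd(game_dir, hdd_dir):
    """Move game_dir into hdd_dir and replace it with a symlink."""
    target = os.path.join(hdd_dir, os.path.basename(game_dir))
    shutil.move(game_dir, target)       # relocate the actual data
    os.symlink(target, game_dir)        # old path now points at the HDD copy
    return target

# Usage (hypothetical paths):
# park_on_hdd("/ssd/Steam/steamapps/common/OldGame", "/hdd/games")
```

Run it the other way (move back, remove the link) when you pick an old title up again.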