I've found avahi works quite well for local/P2P name resolution (at least, much more reliably than Samba + winbind). Now, admittedly, I started using it long after it had stabilized, but I would like to know what you were using instead.
The overarching problem is that systemd can decide to rearrange the boot sequence at any moment.
If an admin has set the sequence to XYZ, they have a very good reason for doing so. So why should the init system suddenly decide that YZX is the way to go?
Because parallelism is good for performance, or so the argument goes.
The real issue is that there's no way to reproduce that partial ordering. IMO, there should be some kind of debug facility to restrict it to sequential startup and then try all possible orders. (Yes, factorial complexity is never fun, but at least it would provide a means to debug this kind of problem. Making open source software easy to debug, or at least easy to capture logs from, is hugely important, because that's where most of your drive-by patches come from.)
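For what it's worth, an admin can already pin an explicit order between two units with ordering directives; a minimal sketch, assuming hypothetical units foo.service and bar.service:

```ini
# /etc/systemd/system/bar.service (fragment)
[Unit]
Description=Bar (must start after Foo)
# After= only constrains ordering; Requires= adds the dependency itself,
# so you usually want both when "start after" really means "depends on".
After=foo.service
Requires=foo.service
```

And `systemd-analyze critical-chain` will at least show the order a given boot actually used, even if it won't enumerate all possible orders.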
Time to start a masculinism movement, because anti-male gender discrimination has hit the mainstream.
It exists, but isn't particularly well known.
IMO, feminism and masculinism are both inherently flawed, in that they only focus on one set of symptoms and don't try to fix the underlying issues. Both genders have issues, but the moment you try to draw attention to this people tend to take it as a competition / zero sum game. The real solution is something like egalitarianism, where the focus is on not discriminating against people regardless of their gender, etc.
KDE 4 broke a lot of the functions I used in 3 (like, for instance, email: KMail was great; now I'm stuck with the inferior but functional Thunderbird). And they never did fix them. Still broken, and worse with every revision.
So I'm dreading the day when the only supported KDE will be the still-not-fully-functional version 5. What have they broken now, never to fix?
Speaking as someone who only started using KDE after 4 had stabilized, what did they break in KMail?
The reasons are clearly described here
I read through that and didn't see anything about "We're all idiots".
Their reasons involve context switching and interprocess communication. Context switching has got to happen (unless they run IE in kernel space), so just get it over with. Interprocess communication has always been a weakness in Microsoft systems. Since day one. Multitasking OSs are here, folks. Get over DOS.
If your context switches are too slow, the correct solution is to fix the kernel or add syscalls to reduce the overhead (see sibling post). Moving parts of your application into kernel-space is bad design no matter how you look at it. (Besides, wasn't it only a few years ago they had a vulnerability in their kernel-mode font driver?)
It's usually cheaper to use consumer drives and some better software to manage the inevitable failure than to use enterprise drives.
There is NO difference in reliability between "consumer" and "enterprise" drives. The only reason to buy enterprise drives is because you have excess money that you are too stupid to keep. All the big storage companies use consumer grade drives, and several of them, including Google and Backblaze, have published data that clearly show there is no reliability or performance reason to buy "enterprise" drives. They are a scam.
IIRC, there is one difference: how they respond to read errors. Consumer drives will keep retrying to maximise the likelihood of a successful read, while the enterprise ones will just fail immediately, since they're expected to be in a RAID array where there's another copy of the data, and taking longer to reply just kills throughput.
Serious question: how does one meaningfully distinguish between hardware and software patents? As I understand it, software is supposed to be unpatentable because it is just math, but the same could also be said of Widlar's negative feedback amplifier, since the mathematical models by which transistors function were well established at that point.
The best argument I've heard against software patents is that they inhibit interoperability (e.g. the MPEG patents), but that's not specific to software - the same is true of Apple's MagSafe connector.
OK, so let's see. Other than the network card, mouse, 2D graphics, sound, CPU, 3D graphics, battery and the fact that normal usage melted it, it works awesome. I think I'll stick with Windows 10 TP on my laptop, where I've only had minor network issues requiring a reboot to get it back sometimes.
To be fair, Arch is a distro for people who are fine with things breaking all the time, which is what you'd expect of a bleeding edge rolling release distro. A review of someone who spent a year running Ubuntu on their laptop would be far more realistic in terms of what a casual user would experience.
If they offer average pay, they'll get average employees. If they offer above average pay, they miss out on exceptional employees. And if they offer exceptional pay, they'll likely go bankrupt as most of their employees will not be exceptional.
Negotiation exists because there's no objective way to evaluate the value of the employee to a company before they've been hired. If someone can get twice the work done (and can demonstrate this), they can justifiably demand twice the pay. Of course, the subjectivity is a double-edged sword, because it means that individual prejudices can affect the hiring process.
One way to solve this problem is to handle all negotiation through a well-defined algorithm. The would-be employee shouldn't even interact with a person for this part of the process, just with a webpage. Strong AI is obviously impractical, but you could probably do a pretty good job of predicting performance if you managed to trawl a big enough dataset for some key statistics.
Another approach is what this poster suggested, where pay (above a base salary) is determined by one's peers. In that case, the individual prejudices are averaged (which ideally negates them), and the valuation of the employee is done by their peers. You'd have to be very careful about how you implemented it though, as you run the risk of creating some major social/political problems with that approach.
git pull --rebase origin master
There might possibly be no other command in the history of software development that has saved more man-hours than this gem.
Except when you forget the --rebase and now have hours of work fixing your tree.
You could avoid that by setting branch.master.rebase=true for that repo, or setting branch.autosetuprebase=always globally. But even if you did accidentally merge instead of rebasing, the merge commit will contain the hashes of both parents, so it's simple enough to reset the branch.
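A minimal sketch of those settings plus the recovery step (run inside the affected repo, which is assumed to exist):

```shell
# Make plain `git pull` rebase instead of merge in this repo:
git config pull.rebase true          # or: git config branch.master.rebase true

# Or make every new tracking branch rebase by default, globally:
git config --global branch.autosetuprebase always

# If you did accidentally merge, ORIG_HEAD still points at the
# pre-merge tip, so the branch can be rewound in one step:
git reset --hard ORIG_HEAD
```

Note that `git reset --hard` discards uncommitted changes in the working tree, so stash first if you have any.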
So use a different GUI library. WPF is pretty bad even when compared to WinForms. The Qt libraries are cross-platform, have existing bindings for C#, and are an absolute pleasure to work with. IMO, they blow the pants off WPF.
The value in Microsoft open sourcing
C# is an excellent language, and arguably the best-positioned language in its niche - C++ has too much cruft, Java is struggling to catch up (and seems to lack a decent GUI library), D's userbase and funding is close to non-existent, and Objective-C/Swift aren't used outside of Macs. All the other languages are too immature, or lack the requisite feature set, to be competitors.
I lost it at "The Apple Watch crown is a revolutionary new interface."
IT'S A FUCKING SCROLL WHEEL on a watch.
(This is a direct reference to the "on a computer" patents, for the humour impaired.)
I wonder if IBM would produce a POWER CPU for the desktop at less than Intel i7 pricing. I would not mind if the chip was made in China.
If they do, hopefully it will be a 96-bit version, with a programmable little-endian/big-endian mode.
Why on earth would you want a 96-bit CPU? Even the current 64-bit ones can 'only' address 48 bits of memory (i.e. 281 TB).
And if you want it for a certain computationally expensive load, 128-bit would make more sense (or just doing the computation over two 64-bit words).
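To illustrate the two-word approach, here's a minimal sketch (names are mine, not from any particular library) of a 128-bit add built from 64-bit words with an explicit carry - essentially what a compiler emits as an add/add-with-carry instruction pair:

```python
MASK64 = (1 << 64) - 1  # mask down to a 64-bit word

def u128_add(a, b):
    """Add two 128-bit values, each given as a (hi, lo) pair of 64-bit words."""
    a_hi, a_lo = a
    b_hi, b_lo = b
    lo = (a_lo + b_lo) & MASK64
    carry = 1 if lo < a_lo else 0          # did the low word wrap around?
    hi = (a_hi + b_hi + carry) & MASK64
    return (hi, lo)

# (2^64 - 1) + 1: the low word wraps to 0 and carries into the high word.
print(u128_add((0, MASK64), (0, 1)))  # → (1, 0)
```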
The main advantage of GitHub, etc. isn't the hosting - you can use any SSH-capable server for that. It's the issue tracking and other built-in features.
That means it makes more sense to have your backup server pull updates from GitHub, since the backup server can't provide those features itself.
Of course, an even better approach would be to use an alternative like Gitorious (now GitLab?) that allows you to host it yourself, so you don't lose access to anything if your primary hosting goes down...
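A minimal sketch of the pull-style backup, assuming a hypothetical GitHub URL and backup path:

```shell
# One-time setup on the backup server: a bare mirror of everything
# (all branches, tags, and notes). URL and path are placeholders.
git clone --mirror https://github.com/example/project.git /srv/backup/project.git

# Then refresh it periodically (e.g. from cron):
cd /srv/backup/project.git
git remote update
```

Note this only mirrors the repository itself; issues and wiki data live outside git and would need GitHub's API to back up.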
I keep explaining that we need to cut the entire college education system away from the Government's hands. Leave that to the market; leave it to businesses to say, "Fuck! We are paralyzed, because we have to pay $250,000 for a professional, and need more of them than are available to accomplish our business strategies!" Businesses should never be in that position, because their mode of growth gives them more-than-adequate warning about what positions they'll need filled; therefore, they should hire, train, and send to college cheap entrant employees, weighing that against the lower-risk but similar-cost investment of hiring an already-available professional.
The problem I see with this is that it would give corporations power over the employees they have educated. No business would pay to have an employee educated if there was a chance they'd leave immediately after, so they'd either require them to stay with the company for N years, or make the entire debt repayable the moment they quit.
There also seems to be the implicit assumption that there will be enough jobs available for the entire population. I don't know how true that will be in the future, or if it's even true now...