A new magazine says it will donate half the "profit."
What else should they donate? The costs?
If the Debian team had never shoved that unneeded thing down the users' throats, none of this heated exchange would have happened in the first place
"Shoving down" implies some external force. But last I checked, everyone is completely free to choose between Linux distributions. I think, the license even allows it, to start your own distribution, without having to pay anyone anything. Possibly, you can even start your own distribution without asking anyone for allowance. So talking about "shoving down" seems to be a little bit exaggerated.
The people doing the hard work behind Debian are completely free to change the project any way they like. Under no circumstances should they get harassed for how they decide to change their project (because I think it belongs only to the people putting effort into it). If they decided to remove the init system entirely and let the users manage all services by hand, it would be their (should I say "god-given"?) right. They are free people, and they can change their project as they choose. If they decided to switch to the NT kernel and toss Linux completely, it would be their choice, and everyone else would have to accept it.
Oh, good to know that you are reading this thread. So I will take the opportunity to say thank you for maintaining systemd in the past. I really appreciate that you put time and effort into maintaining Debian packages. I used Debian in the past and am currently running Ubuntu, so I am directly and indirectly benefiting from your hard work. Until now I have not contributed to any init system, so I am a complete freeloader here.
With regards to init systems, I have no really strong opinion. But I know for sure that without an init system, I would have to manage all the services by hand. I have done this on embedded hardware (anyone remember the uCsimm embedded Linux system?) and it was no fun at all. So even if systemd were really bad (and I doubt it is), it would still save me a lot of work. So thank you very much for providing software that saves me from managing services by hand.
What I don't really get are all the freeloaders who think they can harass the people doing the real work. When doing real work, you always have to make some sub-optimal decisions. There is no perfect way, and reality always demands its tribute. But in the end you get something that works (up to a certain degree...). Just think about it: Facebook was written in PHP, and it works! So what really counts is getting things done. And the people who build and maintain systemd are getting things done, which is something I really respect. A bunch of freeloaders, speculating about conspiracy theories (e.g. Red Hat enslaving the other distributions through systemd...) and harassing working people, are not getting anything done. Worse: they are sabotaging the working people. And that is something we should never accept.
Fair competition and objective, calm discussion are good for everyone. They help produce a better end result. But harassment and sabotage (psychological or otherwise) are bad and should not be an accepted option. They do not lead to a good end result (because bullies are usually not deep thinkers, and emotions are usually not good guides in complex systems), and worse: they are unfair to the people doing the real work.
One day corporations might get their dream and have no wage bill at all, but then no one will have a job to earn the money to buy their stuff, so where will they be then? [...]
When you come to such conclusions, it is a clear hint that your mental model has reached its limit. Capitalism is just a model for human interaction and resource sharing. It works quite well when used properly, but you describe a situation where capitalism will not work any more.
In your scenario, the following will happen: there are machines that automatically transform matter into a given form (e.g. crops into bread). There are people controlling these machines; they can let the machines supply them with the things they need. No money needed, no corporation needed.
Then there are people not controlling these machines. In the worst case they starve; in better cases they can get some of the surplus from the people controlling the machines.
And then there are some empty corporations and some irrelevant money. The corporations are empty because no manual work is needed. The money has some arbitrary value (e.g. zero or infinite), because there is nothing useful you can buy with it. The machines are controlled by other means, and the people controlling them could simply have the machines create money if they had any use for it.
Capitalism as a model also fails to describe the real situation properly when people cannot make well-informed choices (e.g. because information is withheld or drowned in noise) or when there is no fair arbitration (e.g. because someone's life is at stake).
I know somebody who tried this. At around 5000 threads he saw no real progress whatsoever anymore. That was a while back, but at that time Java was already a few years old.
The thread limitation comes from the operating system, not from the Java virtual machine. Modern operating systems are not designed to handle a huge number of parallel threads; scheduling the threads and synchronizing between them usually eats up most of the system's resources.
The Java VM does have some shortcomings regarding multiprocessing: when using multiple cores on the same socket, the Java VM sometimes accesses the same cache lines from all cores. This leads to strange cache-invalidation patterns and slows down all the affected cores. Currently there is no way to mitigate this through Java APIs or VM parameters. But this is a very special problem: when it is indeed the bottleneck, the underlying application is most likely already well optimized and running quite fast.
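The effect described here is usually called false sharing. A common application-level workaround (hedged sketch: the JVM does not guarantee field layout, and the JIT may strip unused padding fields, so this is a heuristic rather than a guarantee) is manual padding so that two counters updated by different threads land on separate cache lines:

```java
// Sketch: manual padding to keep two hot counters on separate cache lines.
// Each counter is written by exactly one thread, so no locking is needed;
// volatile gives visibility of the final values to the main thread.
public class PaddedCounters {
    static class PaddedLong {
        volatile long value;
        // Padding: 7 longs (56 bytes) to fill out a typical 64-byte cache line.
        long p1, p2, p3, p4, p5, p6, p7;
    }

    final PaddedLong a = new PaddedLong();
    final PaddedLong b = new PaddedLong();

    public static void main(String[] args) throws InterruptedException {
        PaddedCounters c = new PaddedCounters();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1_000_000; i++) c.a.value++; });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1_000_000; i++) c.b.value++; });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.a.value + " " + c.b.value); // prints "1000000 1000000"
    }
}
```

Without the padding fields, `a.value` and `b.value` would likely sit on the same cache line and the two threads would invalidate each other's caches on every write.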
If you want to process a huge amount of data, currently the best approach is to run exactly as many threads as you have processor cores. Then you feed each thread with work items using non-blocking data structures, and for communication the threads use non-blocking IO. Java is extremely well prepared for this scenario, perhaps even better than node.js; examples are Vert.x and Akka. In some older benchmarks, Vert.x had no problem serving over 300'000 parallel requests per second on a six-core machine.
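A minimal sketch of the thread-per-core idea using only `java.util.concurrent` (Vert.x and Akka build full event loops on the same principle; the summing workload here just stands in for real CPU-bound tasks):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: one worker thread per core, each fed an independent slice of work.
public class PerCorePool {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // Split the range 0..999999 into one slice per core and sum in parallel.
        int n = 1_000_000, slice = n / cores;
        List<Future<Long>> results = new ArrayList<>();
        for (int i = 0; i < cores; i++) {
            final int from = i * slice;
            final int to = (i == cores - 1) ? n : from + slice;
            results.add(pool.submit(() -> {
                long sum = 0;
                for (int j = from; j < to; j++) sum += j;
                return sum;
            }));
        }

        long total = 0;
        for (Future<Long> f : results) total += f.get();
        pool.shutdown();
        System.out.println(total); // prints 499999500000 (sum of 0..999999)
    }
}
```

The pool size matches the core count, so the OS never has to juggle more runnable threads than the hardware can actually execute, which is exactly the limitation discussed above.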
Edited on 18:05 Wednesday 17 September 2014: fixed some typos.
- User says the software feels sluggish
- User is not as productive as before
- User says that in general buttons and menu items are not where expected
Bug trackers are not the right tool for dealing with usability problems. Just imagine how many usability bug reports it would take to get from a Nokia 6070 to the first iPhone.
There is a third (unproven, but likely) option:
3) Bribe the officials to starve the project to death. Wait until valid complaints from the users come in.
Clues for this (but of course this does not prove anything):
- According to Wikipedia, they use the following quite outdated software:
version 4 available from August 2011 is based on Ubuntu 10.04 LTS, although using KDE Desktop 3.5 and version 4.1 available from August 2012 is also based on Ubuntu 10.04 LTS
(Source: Wikipedia page about LiMux.) Especially the desktop environment is really old; it was first published around 2002 if I remember correctly.
- Microsoft moves its German headquarters to Munich (source)
- Munich's lord mayor Reiter is a self-confessed Microsoft fan (source)
Correction: An earlier version of this story reported that the study was funded in part by the James S. McDonnell Foundation and the Army Research Office. In fact, the study received no external funding.
Which kinds of dependencies do you track?
Off the top of my head I can think of run-time libraries, compile-time libraries, compile-time tools (code generators, etc.), test libraries, and test tools (e.g. dummy applications).
Are you tracking any others?
Read more at: http://phys.org/news/2014-03-w...