[firefox user's voice]
My thesis is that (1) each package is more complex, which leads to (2) each individual package maintainer concentrating on fewer packages, which (3) makes the perceived (to them) importance of this package go up, which (4) makes them more eager to push down changes that improve the package but might hose people's systems. My friends say there has been no change there. Instead, the problem is that my personal systems got more complex and hence I get stabbed in the back more often. What's your take?
The truth of the matter is that it is very hard to support random other languages on VMs written for certain languages.
All these dynamic languages do one thing or another that puts a hole in your plan. Ruby with its continuations is right up there, but Python with "modify anything fundamental anytime" isn't much better. The native environment has a huge head start.
We should all move to LLVM.
FreeBSD is about to release 8.0, with ZFS, and ZFS has officially been labeled production-ready:
For me, it works well on a storage server. There are some edge cases left; for example, I could provoke a panic by combining ZFS on a 3-disk array with automatically parking the disks on timeout. Accessing the filesystem with powered-down disks didn't go so well. This was in 9-current, though.
ZFS is still the only thing out there providing all of:
- attributes such as compression can be turned on and off by directory tree
- a "filesystem-aware raid", aka something that avoids the RAID hole
- and, as mentioned, optional extra redundancy (more than one disk can die) and the checksumming. That means, for example, you can run your filesystem like raid-5 overall but make some directories as redundant as raid-6
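A sketch of what those per-tree attributes look like in practice; the pool name "tank" and the disk device names here are invented:

```shell
# Hypothetical pool layout; pool and disk names are illustrative only.
zpool create tank raidz2 da0 da1 da2 da3 da4   # pool survives two dead disks
zfs create tank/src
zfs set compression=on tank/src                # compression per subtree
zfs create tank/photos
zfs set compression=off tank/photos            # skip already-compressed data
zfs create tank/important
zfs set copies=2 tank/important                # extra redundancy per dataset
```

Each dataset inherits or overrides attributes independently, which is what makes the "compression here, extra redundancy there" layout possible.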
Until Linux gets BTRFS (and I'm not sure how complete it is with regards to all those features) it's the best thing for a storage server out there.
Apple's departure either means they are doing their own thing (ZFS seems excessively hard to integrate), or it is based on patent concerns, or the former because of the latter.
But if you look at the core of it, NetApp tries to claim patents on everything that does filesystem-integrated snapshots (as opposed to the lame LVM raw device layer snapshots in Linux). Reimplementing a filesystem with snapshots, whether you call it ZFS or not, won't help here.
Memtest86 tests much less of the memory than you think. It is 100% no-load. It does find outright broken memory cells, but it tells you nothing if the memory interface runs unreliably under load.
To test your memory interface under stress, you use a program named SuperPi and run the "32M" test. It is available for Linux and runs on FreeBSD. I find a lot more problems with SuperPi than with memtest; a lot of memtest-stable machines don't actually work right once you stress-test them.
To test the CPUs/cores, you use "MPrime" or "Prime95" (same thing). It is the hardest load test that the overclocking record chasers have found, and they try very hard to find ever nastier tests to prove that their competitors' overclocks are not valid. They do this all day long; you should profit from their research.
You run MPrime with one instance per core. Available for Linux, IIRC also works on FreeBSD.
Be warned that the CPU temperature during MPrime will rise to levels that no other program I am aware of reaches. That's the point. MPrime also does a very high amount of plausibility checking on its intermediate results. The combination of those two factors is why it is such an effective hardware test.
So, in summary:
1) MPrime for 36 hours (all cores simultaneously, one MPrime each)
2) 24 hours of memtest86
3) a whole bunch of SuperPi 32M.
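For step 1, a rough launch script on Linux; it assumes mprime was downloaded from mersenne.org into the current directory, that `taskset` (from util-linux) is available, and that `-t` selects the torture-test mode:

```shell
# Launch one mprime torture-test instance per core, pinned to its own CPU.
cores=$(nproc)
for i in $(seq 0 $((cores - 1))); do
    taskset -c "$i" ./mprime -t &
done
wait    # leave running; aim for the full 36 hours
```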
If there is ever any 3D graphics used, you also run Futuremark's 3DMark (Windoze only).
Oh, and you will have to note the CPU temperature that you get during that mprime run and never exceed that temperature during everyday work from then on. This usually isn't a problem since mprime will heat your CPU like nothing else.
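One way to record that peak temperature, assuming lm-sensors on Linux (on FreeBSD, `sysctl dev.cpu.0.temperature` with the coretemp module loaded does the same job):

```shell
# Log CPU temperatures once a minute while mprime runs.
while sleep 60; do
    date +%T
    sensors | grep -i core
done | tee mprime-temps.log
```

The highest reading in the log is the ceiling you never want to exceed afterwards.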
Good luck. Notebooks in particular, and cheap ready-made desktops not distributed by Dell, tend to fail this outright. If any of these steps fail, you can't pass any important data through this computer; it can, and sooner or later will, scramble your hard drive contents silently, so that your backup USB drive already has the corrupted version by the time you notice.
All of us cursed GNU creeping featurism in the commandline utilities
Funny. I tend to curse the lack of features in non-GNU utilities. For instance, grep -o.
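For readers who haven't met it: `grep -o` prints only the matching part of each line, one match per line, which is awkward to replicate portably. A small example:

```shell
# -o prints each match on its own line instead of the whole matching line.
printf 'src=10.0.0.1 dst=10.0.0.2\n' | grep -oE '[0-9]+(\.[0-9]+){3}'
# prints:
# 10.0.0.1
# 10.0.0.2
```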
Well, it goes both ways. GNU dropped `tail -r` which is pretty bad since at the time many scripts were written that was the only portable way to reverse the lines in a file.
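Without `tail -r`, reversing lines means reaching for sed or awk (GNU systems have `tac`, but that is no more portable than `tail -r` was):

```shell
# Two portable ways to reverse the lines of input; both print c, b, a.
printf 'a\nb\nc\n' | sed -n '1!G;h;$p'   # classic hold-space trick
printf 'a\nb\nc\n' | awk '{l[NR]=$0} END{for(i=NR;i>0;i--) print l[i]}'
```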
But anyway, this was part of the "what this is not" intro. If you run this you get GNU *utils.
So, can I expect debian packages of BSD userland soon?
What you probably want to do is have a FreeBSD userland in a chroot.
That way you have the binaries built for Debian at the root, and the regular FreeBSD binaries inside the chroot.
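Roughly like this, with illustrative paths (the name of the distribution-set tarball varies by FreeBSD release):

```shell
# Unpack a FreeBSD userland into a directory and chroot into it.
mkdir -p /compat/freebsd
tar -xpf base.txz -C /compat/freebsd   # base set fetched from a FreeBSD mirror
chroot /compat/freebsd /bin/sh
```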
If said scenario still exists, there needs to be some long, loud, and pointed complaining engaged in on X's mailing lists.
My attempts to even get a single new Xorg bug fixed ended in frustration, and in me feeling sure that I had been lied to by the developer in question when he said it works on his installation. I went through all the trouble of doing a git-version Xorg build (there's no single module tree with a central build anymore, and that Python metabuilder they have did not go through for me), only to find that the bug behaved the same way in their devhead.
I haven't been very active in the FreeBSD project since I now work for a company that uses Linux. The pressure from things that don't work at work is off. I'd like to dig my teeth into something that is broken for me every day. But even though Xorg fits that criterion (too well...), they pretty much excluded themselves from the list.
And I have a hunch that I am not the only one who feels that way. Why would people who value correctness join a project that doesn't? Human nature says they won't.
Maybe the way to go is a second fork from pre-license-change XFree86, one that emphasizes correctness over features and over support for hardware that long-time computer professionals don't use in the first place.
This sounds insane to people who approach this from the usual angle. Linux has a lot more support for all the junk and semi-junk hardware out there, but some of the GNU core Unix userland is of questionable quality. All of us cursed GNU creeping featurism in the commandline utilities and GNU libc problems at some time or another. You would think people want the Linux kernel and the FreeBSD Unix userland. So why go the other way round?
There are very specific needs being addressed by using the FreeBSD kernel inside a Debian.
FreeBSD's ports system for third-party applications only has a devhead, and that has caused an increasing number of problems. FreeBSD has stable branches and releases for kernel, for "core Unix" userland including binutils and gcc/g++, but not for third-party applications. At the time that this was created it was great, because what we wanted at the time was a stable base system to do "server stuff" with, and the ports/applications were just for accessing the things, a light desktop that didn't do much except run xterm and emacs.
Today, I see two main problems with what worked a few years back:
1) those "server style" third-party applications aren't sitting flat on a Unix anymore. They are stacks of dependencies of considerable depth. It's not an apache with mod_cgi and the base perl system anymore.
2) some third-party applications became very aggressive lately and can be unusable in their newest releases. Many people bash GNOME and/or KDE, myself my favorite target is Xorg. The Xorg server has caused the most headaches across all my Linux and FreeBSD machines in the last years.
So, here's the catch. FreeBSD only has one branch in ports, so even if you use an older -STABLE release branch of the FreeBSD core system you still get the newest releases of third-party applications via ports. That's why my *most* stable OS (FreeBSD) has caused me the most headaches lately: it upgrades me to the newest Xorg *first*, not last like it should.
I don't want to distract too much from the point of this posting by giving reasons why people want the FreeBSD kernel, let's just say there are enough of us. But no matter how much you want the FreeBSD kernel, many see increasing problems with ports/applications for the reasons I gave.
Debian provides stable branches for all applications, and that makes some people who don't generally like Linux still go "PLING!".
In addition to all that, there's Debian's packaging system and the way that it is kept working (few package screwups when upgrading).
So, very suddenly you have a demand for the FreeBSD kernel in a Debian application provision system and here we are.
(BTW, what blows my mind for real is that FreeBSD is now partially sold based on driver availability, because they kept their NDIS Windoze driver integration system alive and maintained when Linux didn't.)
People mentioned the self-checks and debugging features that used to be turned on in FreeBSD development branches and beta releases.
Self-checks, which are the major source of kernel slowdowns in those kernel options, are not turned on in the 8.0 release candidates.
Debugging is on, but unless you are very short of memory it should not cause a noticeable slowdown.
FreeBSD's slowness in these benchmarks can be attributed to two factors:
1) the compiler. The GPL v3 is unacceptable for FreeBSD, so newest GCCs cannot be used as the base compiler. Users can choose to install a new gcc on their own (as a port) and then even go and compile all ports and even the base system with that new compiler (some parts might not have been cleaned up to comply with new language strictness that might have come with new gccs).
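As an illustration, an /etc/make.conf fragment for that route; the compiler names and version suffix depend on which GCC port you actually install, so treat these as placeholders:

```
# Build with a GCC installed from ports instead of the base compiler.
# Assumes a lang/gcc port was installed, providing e.g. gcc48/g++48.
CC=gcc48
CXX=g++48
```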
2) threading, as in the userland threading library support. It is very hard to tell whether there is some performance problem in FreeBSD's thread libraries or whether applications just happen to have been optimized and tested only with Linux.
That happens a lot, and you can also see Solaris with its M:N threading fail miserably on some threading benchmarks that do dumb things, such as just creating and destroying threads at a high rate.
The problem of "thread benchmarks" benchmarking bogus things and/or having been written with Linux's thread model in mind has been going on for some twelve years now. Benchmarking thread systems in a realistic manner is very difficult. In real-world applications you don't create and destroy threads at a high rate, and you minimize locking. Benchmarking this is almost as hard as benchmarking programming languages.
I haven't benchmarked threading libraries, knowing that it will take time that I don't have right now. I can tell you, however, that just the I/O subsystem in FreeBSD, as in filesystems and networking, isn't any slower than Linux. Not to mention GbE and today's disks are too slow to really show an OS difference for most tests anyway.
The real question for I/O subsystems will come when ZFS+RAID-Z is standard in FreeBSD and BTRFS is standard in Linux. A couple of years from now, nobody will understand why we ran today with no snapshots, with the raid hole (from block-layer raid systems), and without transparent compression in the subtrees where we want it.
But these filesystems are complicated and there's some real performance difference visible.
An area completely overlooked in the benchmark is the VM subsystem. Namely: what happens when you overload your RAM and paging commences? Linux used to make very bad choices here (dropping read-only pages, which is a wise thing as such, but at rates about 10 times higher than wise).
FreeBSD used to be the go-to OS on memory shortage thanks to John Dyson's VM work (backed by a large database company that provided support and a realistic benchmarking environment during that development).
But today? Nobody knows. I'm not aware of any benchmarks that you can download that simulate memory stress and map the tradeoffs that the OS makes.
In general, the biggest obstacle to improving Linux, FreeBSD and everybody else's OS performance is a lack of high-quality benchmarks.
Why don't people develop more benchmarks? Because they get annoyed that today, in 2009, no realistic OS benchmark will show a single number as a result. All OS benchmarks today can only make a map of what tradeoffs the OS chose, what part of the running application suite got the short end in favor of what other part. This isn't sexy and publishers don't like it.
People like reality reduced to single numbers, but in the area of OS benchmarks (and language benchmarks) that party is over.
Myself, I found myself gasping many years ago when I benchmarked network I/O versus userland CPU load. I hammered a couple of GbE interfaces while at the same time running moderately memory-intensive CPU benchmarks (with no network access from those CPU load processes). That was back in the day when GbE would consume much of a then-current system. Much to my surprise, the supposedly better desktop OS Linux would keep the networking flow up and penalize the CPU eaters (and badly so), whereas the ninja-macho networking OS FreeBSD would provide CPU resources to the userland CPU benchmarks and let network traffic break down. About the opposite of what the public image of the two OSes would have suggested.