Comment Re:EXT appropriate for desktop? (Score 1) 319

If you look at the numbers, the majority of the files on a Linux desktop are not "small files" (by which I mean files substantially smaller than a blocksize). Given that this is the case, why optimize for them?
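
If you want to check the numbers on your own system, a quick and rough sketch --- the path and the one-blocksize threshold are just examples:

    # Count files smaller than one 4k block, then the total, and compare.
    find /usr -type f -size -4k | wc -l
    find /usr -type f | wc -l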

As far as whether or not the defaults of ext3 are "acceptable" --- it's open source! You can change the defaults if you want, or a distribution can change them if it wants. I suppose I could add a tuning knob to /etc/mke2fs.conf so you can change the defaults for your system. Regardless, I think it's rather silly to choose an open source filesystem based on whether you like the defaults. After all, Homo sapiens is a thinking animal; people can think for themselves, and if someone doesn't care about the safety of the files stored on the filesystem (or knows that the data is protected in other ways, such as RAID-6 with hot spares, PLUS regular full and incremental backups), he/she can use different filesystem tuning parameters. Or the defaults can be changed --- if you want to distribute a fork of e2fsprogs called "fast and loose with your data progs", there is absolutely nothing in the GPL which stops you from doing that.
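
(As it happens, /etc/mke2fs.conf already lets you override the format-time defaults; a minimal sketch using options that exist today --- the values shown are just examples:)

    [defaults]
            base_features = sparse_super,filetype,resize_inode,dir_index,ext_attr
            blocksize = 4096
            inode_ratio = 16384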

Comment Re:A metaphysical question (Score 1) 319

The ext2/ext3/ext4 filesystems do a periodic check of the filesystem for correctness out of paranoia, because PC-class disks, well, have the reliability of PC-class disks, and that's what most people use these days on Linux systems. Other filesystems, such as reiserfs and xfs, are subject to the same kind of random filesystem corruption caused by hardware errors that ext3 is; in fact, in some cases their filesystem formats are more brittle than ext2/3/4 against random hardware failures --- a single bad block that corrupts the root node of a reiserfs filesystem, for example, can be catastrophic. It's just that their filesystem checkers don't require a periodic check based on time and/or the number of mounts.

If you want to configure ext3 filesystems to have the same happy-go-lucky attitude towards assuming hard drives never fail as reiserfs, you can do so; it's just a matter of using the tune2fs program --- check out the -c and -i options in the tune2fs man page. Then you won't do a filesystem check at reboot time.
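
For example (the device name is a placeholder for your own filesystem):

    # Disable both the mount-count and the time-based checks.
    tune2fs -c 0 -i 0 /dev/sdXN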

What I do recommend, especially if you are using LVM anyway, is to have a cron script run periodically (say, once a month, on a Sunday at 2am, or at some other low-utilization period) that takes a snapshot of your filesystem, runs e2fsck on the snapshot, and, if it finds errors, sends e-mail to the system administrator advising them that it is time to schedule downtime to repair the filesystem corruption. This gives you the best of both worlds: you can do much more frequent checks to make sure the filesystem is consistent, and you don't have to take the system down for long periods of time to do the test, since you can run e2fsck on the snapshot while keeping the system live.
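
A rough sketch of such a script --- the volume group name, logical volume name, snapshot size, and mail recipient are all assumptions you would adjust for your system:

    #!/bin/sh
    # Check a snapshot of an LVM-backed ext3 filesystem while it stays mounted.
    VG=vg0                   # volume group name (assumption)
    LV=rootlv                # logical volume holding the filesystem (assumption)
    SNAP=${LV}-fsckcheck
    LOG=/tmp/e2fsck-$LV.log

    lvcreate -s -L 1G -n "$SNAP" "/dev/$VG/$LV" || exit 1
    if ! e2fsck -f -n "/dev/$VG/$SNAP" > "$LOG" 2>&1; then
        mail -s "e2fsck found problems on $VG/$LV" root < "$LOG"
    fi
    lvremove -f "/dev/$VG/$SNAP"
    rm -f "$LOG"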

Comment Re:Problems: IO priority, large #s of files. (Score 4, Interesting) 319

NFS semantics require that data be stably written to disk before the client's RPC request can be acknowledged. This can cause some very nasty performance problems. One of the things that can help is to use a second hard drive to store an external journal. Since the journal is only written during normal operation (you only need to read it when recovering after a system crash), and the writes are contiguous on disk, this eliminates nearly all of the seek delays associated with the journal. If you use data journalling, so that data blocks are written to the journal, the fact that no seeks are required means that the data can be committed to stable storage very quickly, and this will accelerate your NFS clients. If you want things to go _really_ fast, use battery-backed NVRAM as your external journal device.
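
A minimal sketch of the setup --- the device names are placeholders, and note that the journal device's blocksize must match the filesystem's:

    mke2fs -O journal_dev -b 4096 /dev/sdb1            # format the second disk as an external journal
    mke2fs -j -J device=/dev/sdb1 -b 4096 /dev/sda1    # create the filesystem pointing at it
    mount -t ext3 -o data=journal /dev/sda1 /export    # optionally enable full data journalling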

Comment Re:The article is incorrect with respect to ext4.. (Score 5, Informative) 319

Oh, by the way... I forgot to mention: if you are looking for benchmarks, there are some very good ones done by Steven Pratt, who does this sort of thing for a living at IBM. They were intended to be in support of the btrfs filesystem, which is why the URL is http://btrfs.boxacle.net/. The benchmarks were done in a scrupulously fair way: the exact hardware and software configurations are given, multiple workloads are described, and each filesystem is measured against each workload in multiple configurations.

One interesting thing from these benchmarks is that a filesystem will sometimes do better at one workload and one setting, but then be disastrously worse at another workload and/or configuration. This is why doing a fair comparison of filesystems is difficult in the extreme. You have to run multiple benchmarks, with multiple workloads and multiple hardware configurations, because if you only pick one filesystem benchmark result, you can almost always make your filesystem come out the winner. As a result, many benchmarking attempts are very misleading, because they are often done by a filesystem developer who, consciously or unconsciously, wants their filesystem to come out on top, and there are many ways of manipulating the choice of benchmark or benchmark configuration to make sure this happens.

As it happens, Steven's day job as a performance and tuning expert is to do this sort of benchmarking, but he is not a filesystem developer himself. And it should also be noted that although some of the BTRFS numbers shown in his benchmarks are not very good, btrfs is a filesystem under development, which hasn't been tuned yet. There's a reason why I try to stress the fact that it takes a long time and a lot of hard work to make a reliable, high performance filesystem. Support from a good performance/benchmarking team really helps.

Comment Re:what fs out there... (Score 5, Informative) 319

Ext4 supports up to 128 megabytes per extent, assuming you are using a 4k blocksize. On architectures where you can use a 16k page size, ext4 would be able to support 2^15 * 16k == 512 megs per extent. Given that you can store 341 extent descriptors in a 4k block, and 1,365 extent descriptors in a 16k block, this is plenty...
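
The arithmetic, as a quick sketch (1 << 15 is the 15-bit extent length field, and each on-disk extent descriptor is 12 bytes):

    echo $(( (1 << 15) * 4096  / (1024 * 1024) ))   # 128 MB per extent with 4k blocks
    echo $(( (1 << 15) * 16384 / (1024 * 1024) ))   # 512 MB per extent with 16k blocks
    echo $(( 4096 / 12 ))                           # 341 extent descriptors per 4k block
    echo $(( 16384 / 12 ))                          # 1365 extent descriptors per 16k block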

Comment The article is incorrect with respect to ext4... (Score 5, Informative) 319

The article states that ext4 was a Bull project, and that is not correct.

Bull is one of the companies involved with ext4 development, but the Bull developers were certainly by no means the primary contributors. A number of the key ext4 advancements, especially the extents work, were pioneered by the Clusterfs folks, who used them in production for their Lustre filesystem (Lustre is a cluster filesystem that used ext3 with enhancements, which they supported commercially as an open source product); a number of their enhancements went on to be adopted as part of ext4. I was the e2fsprogs maintainer, and especially in the last year, as the most experienced upstream kernel developer, have been responsible for patch quality assurance and pushing the patches upstream. Eric Sandeen from Red Hat did a lot of work making sure everything was put together well for a distribution to use (there are lots of miscellaneous pieces needed for full filesystem support by a distribution, such as grub support, etc.). Mingming Cao from IBM did a lot of coordination work, and was responsible for putting together some of the OLS ext4 papers. Kawai-san from Hitachi supplied a number of critical patches to make sure we handle disk errors robustly; some folks from Fujitsu have been working on the online defragmentation support. Aneesh Kumar from IBM wrote the 128->256 inode migration code, as well as doing a lot of the fixups on the delayed allocation code in the kernel. Val Henson from Red Hat has been working on the 64-bit support for e2fsprogs.

So there were a lot of people, from a lot of different companies, all helping out. And that is one of the huge strengths of ext4: we have a large developer base, from many different companies. I believe that this wide base of developer support is one of the reasons why ext3 was more successful than, say, JFS or XFS, which had a much smaller base of developers, primarily from a single employer.


Submission + - An Ethical Question Regarding Ebooks (thunk.org)

tytso writes: "Suppose there is a book that you want to read on your ebook reader, but it is out of print (so even if you purchase the dead-tree version of the book used, the author won't receive any royalties) and the publisher has refused to make it available as an ebook. You can buy it from Amazon as a used book, but that isn't your preferred medium. It is available on the internet as a pirated etext, however. This blog post outlines a few possibilities, and then asks the question: 'What is the right thing to do? And why?' I'm also curious whether the answers change depending on whether you are a Baby Boomer, Gen X, Gen Y, etc. — I've noticed that attitudes around copyright seem to change depending on whether someone is a college student or a recent college graduate, versus someone who can remember a time when the Internet did not exist."
