Filesystems are fine (Score 1)
Filesystems are fine. The author needs to bone up. I have several filesystems with in excess of 50 million inodes on them right now. We have a grok tree with over 100 million inodes in it. I'd post the df outputs but slashdot's lameness filter won't let me.
To be fair, an old filesystem like UFS is creaky when it comes to directories, but modern filesystems have no problems with directories. And no filesystem has had issues with large files for ages (even UFS did a fairly decent job back in the day). BTRFS, EXT4, ZFS, HAMMER2 (my personal favorite, since I wrote it), and XFS (which is actually a very old filesystem that we used on SGI Challenge systems many years ago) all handle this fine. It is just not a problem.
Generally these filesystems use hashes, radix trees, or B-tree / B+tree style lookups for directories and inodes. H2, for example, uses a variable block-size radix tree, which means that a directory with only a few entries in it will be very shallow (even just all in one level if it's small), despite the 64-bit filename hashes being evenly spread throughout the entire numerical space. But as the directories grow in size, the tables are collapsed into radix ranges and slowly become deeper. Indirect radix blocks are 64KB, so it doesn't take very many levels to cover a huge directory.
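To get a feel for why a few levels suffice, here is a back-of-the-envelope sketch. It assumes a 128-byte block reference, so a 64KB indirect block holds 512 references; the 128-byte figure is my assumption for illustration, not something stated above.

```python
import math

BLOCKREF_SIZE = 128                 # assumed size of one block reference, in bytes
INDIRECT_BLOCK_SIZE = 64 * 1024     # 64KB indirect radix blocks, per the text
FANOUT = INDIRECT_BLOCK_SIZE // BLOCKREF_SIZE   # 512 references per indirect block

def radix_depth(num_entries: int) -> int:
    """Levels of indirection needed to cover num_entries directory
    entries, assuming each bottom-level reference covers one entry."""
    if num_entries <= FANOUT:
        return 1    # a small directory stays all in one level
    return math.ceil(math.log(num_entries, FANOUT))

print(radix_depth(100))           # small directory: 1 level
print(radix_depth(100_000_000))   # 100M-entry directory: 3 levels
```

With a fanout of 512 per 64KB block, even a 100-million-entry directory needs only three levels of indirection, which is the point being made about huge directories staying cheap to traverse.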
The only way one could do better (and only slightly better, to be perfectly frank) is to use some of the object-store features built directly into later NVMe chipset standards. Basically the idea there is that any SSD has an indirect block table anyway, so why not just turn it directly into a (key,data) object store at the SSD firmware level and have filesystems use the keys directly instead of linear block numbers? It's totally doable, and not even very difficult, given that most modern filesystems already use keys for directory, inode, and block indexing.
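To make the idea concrete, here is a toy model of what such an SSD-level object store would look like to a filesystem: the drive's indirection table becomes a key-to-data map, and the filesystem hands it its existing keys rather than translating them into linear block numbers first. The class and method names are hypothetical, loosely modeled on the store/retrieve/delete operations in the NVMe Key Value command set.

```python
class KVNamespace:
    """Toy model of an SSD exposing a (key, data) object store instead of
    a linear block address space. All names here are illustrative."""

    def __init__(self):
        # Stands in for the firmware's indirection table, now keyed
        # directly by filesystem-supplied keys.
        self._objects = {}

    def kv_store(self, key: bytes, data: bytes) -> None:
        self._objects[key] = data

    def kv_retrieve(self, key: bytes) -> bytes:
        return self._objects[key]

    def kv_delete(self, key: bytes) -> None:
        del self._objects[key]

# A filesystem that already indexes inodes by key could use those keys as-is:
ssd = KVNamespace()
ssd.kv_store(b"inode:42", b"serialized inode contents")
print(ssd.kv_retrieve(b"inode:42"))
```

The win is the removal of one translation layer: the filesystem's own directory/inode/block keys go straight to the firmware, instead of being mapped to block numbers that the firmware then remaps internally anyway.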
In any case, the author needs to do some serious research and catch up to modern times.
-Matt