Comment The basic premise is already not scalable (Score 1) 209

"In a Substack article, Didgets developer Andy Lawrence argues his system solves many of the problems associated with the antiquated file systems still in use today. "With Didgets, each record is only 64 bytes which means a table with 200 million records is less than 13GB total, which is much more manageable," writes Lawrence. Didgets also has "a small field in its metadata record that tells whether the file is a photo or a document or a video or some other type," helping to dramatically speed up searches."
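For what it's worth, the quoted size math itself does check out; a trivial check (decimal gigabytes assumed):

```c
#include <stdint.h>

/* Sanity-check the quoted claim: 200 million 64-byte records. */
static uint64_t table_bytes(uint64_t records, uint64_t record_size) {
    return records * record_size;
}
/* table_bytes(200000000, 64) = 12.8e9 bytes = 12.8 decimal GB,
 * just under the quoted 13GB figure. */
```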

Yah... no. This is the "if we make the records small enough we can cache the whole thing in RAM" argument. It doesn't work in real life. UFS actually tried something similar long ago to work around its linear directory scan problem. It fixed only a subset of use cases and blew up within a few years as use cases exceeded its abilities.

The problem is that you have to make major assumptions about both the size of the filesystem people might want to use AND the amount of RAM in the system accessing that filesystem.

The instant you have insufficient RAM, performance goes straight to hell. Put those 13GB on a hard drive with insufficient RAM and performance will drop to 400 TPS from all the seeking. It won't matter how linear that 13GB is on the drive... the instant the drive has to seek, it's game over.

This is why nobody does this in a serious filesystem design any more. There is absolutely no reason why a tiny little computer (or VM) with a piddling amount of RAM should not be able to mount a petabyte filesystem. Filesystems must be designed to handle enormous hardware flexibility, because one just can't make any assumptions about the environment the filesystem will be used in.

This is why hierarchical filesystem layouts, AND hierarchical indexing methods (e.g. B-Tree/B+Tree, radix tree, hash table) work so well. They scale nicely and provide numerous clues to caching systems that allow the caches to operate optimally.

-Matt

Comment Re:It's mostly about the metaphor. (Score 1) 209

Yes, you can still have trees with an object store. The object identifier can be wide... for example, the NVMe standard I believe uses 128-bit 'keys'. Sigh. Slashdot really needs to fix its broken lameness filter; I can't even use brackets to represent bit spaces.

So a filesystem can be organized using keys like this for the inode:

(parent_object_key, object_key)

And this for the file content:

(object_key, file_offset | extent_size)

For example, a file block could easily be encoded as a 64-bit integer byte offset, with a 63-bit positive offset space and an extent size encoded in the low 6 bits (radix 1 to radix 63, allowing extents up to (1 << 63) bytes). Since the low 6 bits are used for the extent, the minimum extent size would be 64 bytes. The negative key space could be used for auxiliary records associated with the file or directory. HAMMER2 uses this very method to encode its radix trees, allowing each recursion to use a variable-sized extent and to represent any 64-bit sub-range within the hash space (but H2 runs on top of a normal block device; it doesn't extend the encoding down to the device).
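A minimal sketch of that offset-plus-radix encoding (the function names are mine, and HAMMER2's actual on-disk structures differ in detail; this just illustrates the bit layout described above):

```c
#include <assert.h>
#include <stdint.h>

/* 64-bit key: low 6 bits hold the extent radix (the extent covers
 * 2^radix bytes), upper bits hold the byte offset.  Because the low
 * 6 bits are consumed, offsets are aligned to at least 64 bytes. */
static inline uint64_t key_encode(uint64_t offset, unsigned radix) {
    assert(radix >= 6 && radix <= 63);
    assert((offset & ((1ULL << radix) - 1)) == 0); /* offset aligned to extent */
    return offset | (uint64_t)radix;
}

static inline uint64_t key_offset(uint64_t key) { return key & ~63ULL; }
static inline unsigned key_radix(uint64_t key)  { return (unsigned)(key & 63); }
static inline uint64_t key_size(uint64_t key)   { return 1ULL << key_radix(key); }
```

Note how the alignment requirement falls out of the encoding for free: an extent's offset must be a multiple of its size, so the low `radix` bits of the offset are always zero and can carry the radix instead.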

A set of directory entries could be encoded as follows, where [object_key] is the inode number of the directory.

(object_key, filename_hash_key)

Though doing so would almost certainly not be optimal since directory entries are very small.

Inode numbers wind up just being object keys.

This is readily doable... actually, this sort of methodology has been used many times before. I did a turnkey system 20 years ago that used this method to create a simple-stupid filesystem for NOR flash.

The problem with this methodology is that, if done at the kernel/filesystem level, it requires the underlying storage to directly implement the key-store, as well as to support the key width required by the filesystem... which seriously restricts what the filesystem can be built on top of.

-Matt

Comment Filesystems are fine (Score 1) 209

Filesystems are fine. The author needs to bone up. I have several filesystems with in excess of 50 million inodes on them right now. We have a grok tree with over 100 million inodes in it. I'd post the df outputs but Slashdot's lameness filter won't let me.

To be fair, an old filesystem like UFS is creaky when it comes to directories, but modern filesystems have no such problems: BTRFS, EXT4, ZFS, HAMMER2 (my personal favorite, since I wrote it), and XFS (which is actually a very old filesystem that we used on SGI Challenge systems many years ago) all handle them fine. And no filesystem has had issues with large files for ages (even UFS did a fairly decent job back in the day). It is just not a problem.

Generally these filesystems use hashes, radix trees, or B-tree / B+tree style lookups for directories and inodes. H2, for example, uses a variable block-size radix tree, which means that a directory with only a few entries in it will be very shallow (even all in one level if it's small), despite the 64-bit filename hashes being evenly spread throughout the entire numerical space. But as directories grow, the tables are collapsed into radix ranges and slowly become deeper. Indirect radix blocks are 64KB, so it doesn't take very many levels to cover a huge directory.
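To see why the depth stays small, assume (my assumption; the text only gives the 64KB indirect block size) a 128-byte block reference per entry, for a fan-out of 64KB / 128B = 512 per level:

```c
#include <stdint.h>

/* How many indirect levels does a tree with the given fan-out need
 * to cover a directory with `entries` entries? */
static unsigned levels_needed(uint64_t entries, uint64_t fanout) {
    unsigned levels = 0;
    uint64_t reach = 1;                 /* entries reachable at this depth */
    while (reach < entries) {
        reach *= fanout;                /* each level multiplies coverage */
        levels++;
    }
    return levels;
}
/* With fan-out 512, three levels already reach 512^3 = ~134 million
 * entries, i.e. a 100-million-entry directory is only 3 levels deep. */
```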

The only way one could do better (and only slightly better, to be perfectly frank) is to use some of the object-store features built directly into later NVMe standards. Basically the idea there is that any SSD has an indirect block table anyway, so why not just make it into a (key, data) object store at the SSD firmware level and have filesystems use the keys directly instead of linear block numbers? It's totally doable, and not even very difficult given that most modern filesystems already use keys for directory, inode, and block indexing.
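A toy sketch of the (key, data) interface idea, with an in-memory table standing in for what the SSD firmware would provide. Everything here is illustrative: the names are hypothetical, not a real driver API; only the 128-bit key width echoes the NVMe keys mentioned earlier.

```c
#include <stdint.h>
#include <string.h>

/* 128-bit object key, as two 64-bit halves. */
typedef struct { uint64_t hi, lo; } obj_key_t;

#define STORE_SLOTS 16
#define STORE_VAL   64

/* Toy backing store: a small linear table of (key, data) slots. */
static struct { obj_key_t key; unsigned char data[STORE_VAL]; int used; }
    store[STORE_SLOTS];

static int key_eq(obj_key_t a, obj_key_t b) {
    return a.hi == b.hi && a.lo == b.lo;
}

/* Store a value under a key; a filesystem would use such keys directly
 * for inodes and extents instead of linear block numbers. */
static int obj_put(obj_key_t key, const void *data, unsigned len) {
    if (len > STORE_VAL) return -1;
    for (int i = 0; i < STORE_SLOTS; i++) {
        if (!store[i].used || key_eq(store[i].key, key)) {
            store[i].key = key;
            store[i].used = 1;
            memcpy(store[i].data, data, len);
            return 0;
        }
    }
    return -1; /* table full */
}

static int obj_get(obj_key_t key, void *buf, unsigned len) {
    for (int i = 0; i < STORE_SLOTS; i++)
        if (store[i].used && key_eq(store[i].key, key)) {
            memcpy(buf, store[i].data, len);
            return 0;
        }
    return -1; /* not found */
}
```

Real firmware would of course index the keys rather than scan linearly; the point is only the shape of the interface a filesystem would sit on.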

In any case, the author needs to do some serious research and catch up to modern times.

-Matt

Comment A few simple rules to follow (Score 1) 225

A few simple rules will make life easier.

#1 - Have two credit card accounts. Put trusted services that would be annoying to re-enter on one (e.g. trusted online accounts such as Amazon or Apple), and make untrusted, ad-hoc, and over-the-counter purchases with the other.

#2 - Use Apple Pay or Google Pay instead of a physical card whenever possible, and in particular at gas stations. You can associate your 'secure' card account with your phone. Always use phone-based NFC payment whenever possible.

#3 - Never make purchases with your debit card. Use the debit card for ATMs only, preferably inside locations or at banks.

#4 - If your bank's online account has the feature, have it text you the location and amount whenever your card is used.

That's pretty much it. Either my wife's or my card (usually my wife's) would get lifted maybe once a year, but it was never an inconvenience because it was always the untrusted card, and it was almost always due to use at a gas station that did not have wireless payment at the pump.

These days of course we have chip cards, and even cards which don't print the credit card number on the front any more (but still print it on the back). It doesn't matter much, though, because modern-day thieves often don't lift cards via the magnetic stripe any more; they install cameras that take pictures of both sides of the card instead.

It is also possible to copy a chip card (actually not even all that hard, because the extra information on the chip that isn't on the face of the card is often not actually checked by the CC company, so blank cards can be programmed fairly easily). And it is also possible to fake or break a chip reader at a merchant, causing the merchant to fall back to a slider or direct entry.

And regardless of that, if your CC number is stolen at all, thieves will use it at locations which don't require chip cards. And even though your bank will back the charges out for you, you still have to go through the hassle of changing cards. Which is why you have two accounts... it removes that hassle. So chip cards, even secure ones, don't actually remove the hassle; take the steps above to minimize it instead.

-Matt

Comment Re:Chrome was superior when introduced (Score 1) 408

Firefox is still more cross-platform than Chrome. Firefox will build on several of the BSDs, though not all of them. It will build on many other operating systems as well, but they must have a Rust compiler, unfortunately. Google won't take upstream patches for anything but the big 3... Win/Mac/Linux. For some people that is all of computing now, but the reality is that Google is killing the little guy, and they love it.

The number one complaint I get is about browsers. Can't upstream to Chromium. Too small for Firefox. LLVM wants me to buy them servers to get upstreamed, and LLVM is a dependency for Rust. What does this all mean? I have to spend two weeks porting LLVM, a few weeks on Rust, and a few months on Firefox. Meanwhile, I can get a new WebKit building in a few hours. Mozilla needs to stop fighting open source and embrace it. Take the patches. Work with people. Be the exclusive browser in more places. Push it forward. Get momentum among tech-oriented people again. People ask us what to use. We tell them what we use. It's that simple.

Comment Re:"at it's best, it's ultimately customizable" (Score 1) 408

WebKit exists. It's the basis for Safari, GNOME Web (Epiphany), and Midori. We should be focusing on getting it up to snuff, because it's the only engine that's tolerable to port to different platforms.

Firefox is a mess. It needs Rust. It has thousands of OS #ifdef checks throughout the code. It's not well thought out. In many places it doesn't even default to assuming a Unix-like OS, which is much more common than anything else.

You can spend months porting Firefox now; it used to take a few weeks. It's a nightmare to work with. They won't take upstream patches for smaller OS projects, which means re-porting over and over and over and over.

On platforms that Firefox supports, it's actually not terrible, aside from losing features over time. That's the most frustrating part. I think I'm two features away from being able to switch to Firefox as my primary browser on some of the OSes I use.
1. Easier integration with YubiKey. Firefox does support it, but it's a hassle, whereas it just works in Chrome.
2. CASTING. If I could cast content to a Chromecast without using Chrome, it would be awesome.

Comment Re:It might be fast already... (Score 1) 153

This is also true for most languages in the last few decades. Look at PHP: it was break after break up through 7.x, and 4.x to 5.x was really bad. Rust devs love to keep adding and modifying things and then expect everyone to do yearly upgrades. Go is the same way.

The biggest problem with the Python upgrade from 2.x to 3.x is all the build software sitting on top of 2.x. So many apps depend on Python code to build, including X.org, GNOME, Chrome and its variants, etc. I just recently started deprecating large parts of the Python 2.x modules in my OS, and there are still issues with that too. So much software has to be removed from package repos because of this.

Luckily people are starting to wise up. Ninja has been reimplemented in C as samurai, and I'm hoping many of the other tools are as well. We can't take the hit again with a Python 4.

Comment Re:High risk (Score 1) 83

You don't actually know that. Right now IBM's strategy is to become RedHat to save its cloud initiative after consulting alone failed. They used to push OS/2, Lotus, and DB2. Lotus got sold, and DB2 is dying. IBM morphs every decade or so into something new. RedHat will die; the only question is when.

At best it will be Lotus Notes or OS/2, and live on with a small company somewhere after the fall.

By destroying trust, IBM has weakened the RedHat brand.

Comment Re:Apple should join Chrome (Score 2) 14

Google already controls the web; there is no reason to go down to 2 engines instead of 3. Also, remember that Apple and Google can't get along on this. Google used to use WebKit and forked it into Blink because of that. It was the same engine previously!

WebKit is the most portable browser engine, followed by Firefox's engine. Killing WebKit means killing browsers on some platforms entirely. Firefox is now limited to platforms and architectures that Rust supports, and Google refuses to take upstream patches for other operating systems.

Comment Re:unique passwords (Score 1) 25

UniFi gear does need a controller, but Ubiquiti has an ISP line (EdgeRouter, etc.) that does not require one. It's the same hardware platform, but different software. Being able to log in to one app and control all of your hardware is convenient, and the UniFi controller can run on Linux, FreeBSD, Windows, or macOS in addition to the dedicated hardware controller they sell. You can just spin up a VM for it if you want. The hardware controller device can be powered over PoE from the switch.
