
Comment Re:standard plug is need and no 3rd party repair l (Score 1) 85

CCS can go up to around 400kW. Well, actually I think it is 500kW now. That works out to 1000VDC x 400A or 500A.

Most BEVs can't go that high. In fact, I think there are only one or two that can actually max out current 350kW chargers for any decent amount of time.

-Matt

Comment Re:Feeding stations... (Score 1) 85

Yes, but nobody fast DC charges to 100%. The charge rate drops modestly past 60% and precipitously above 80%. So people only charge to 60-80% and no more. Usually 30-40 minutes max. And if your final destination is close and destination charging is available, only enough to get there. So for trips just beyond the vehicle range, the charging stop can be very short, like 10-15 minutes.
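As a rough illustration of why those 60-80% stops stay short, here's a toy charging-curve model. The pack size, peak rate, and taper thresholds are made-up round numbers for illustration, not any real vehicle's curve:

```python
def minutes_to_charge(start_pct, end_pct, pack_kwh=75, peak_kw=250):
    """Toy charging-curve model: full rate below 60%, half rate to 80%,
    one-fifth rate above 80%. Real curves vary by vehicle and temperature."""
    minutes = 0.0
    for pct in range(start_pct, end_pct):
        if pct < 60:
            kw = peak_kw
        elif pct < 80:
            kw = peak_kw * 0.5
        else:
            kw = peak_kw * 0.2
        minutes += (pack_kwh / 100) / kw * 60  # time to add one percent
    return minutes

print(round(minutes_to_charge(10, 80)))   # -> 16 (a short stop)
print(round(minutes_to_charge(10, 100)))  # -> 34 (the last 20% doubles the wait)
```

The taper is why stopping at 80% roughly halves the total stop time in this toy model.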

At home, or at a destination, people will charge to 100% overnight if they will be taking a long trip the next day, and otherwise only charge to 70% or 80%. Unless it's a Model 3 with an LFP battery, in which case people charge to 100% overnight.

-Matt

Comment Re:Feeding stations... (Score 1) 85

Yah. The connector standard has settled down, which is good. Chargers are typically only able to do AC or DC anyway, not both. CCS on the vehicle allows both J1772 (AC only) and also has the extra pins for high amperage DC.

L1 (120VAC): 11-16A (in-vehicle charger)
L2 (240VAC): 24-80A (in-vehicle charger)
L3: was never implemented
Fast DC: direct DC to the battery, dynamically managed up to 1000VDC and 500A.

Actual power is limited by the lower of what the external unit can supply and what the vehicle can accept.
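That "lower of the two" rule can be sketched in a few lines. The figures in the example are hypothetical, and real CCS sessions renegotiate limits continuously rather than once:

```python
def negotiated_power_kw(charger_max_v, charger_max_a, vehicle_max_v, vehicle_max_a):
    """Deliverable DC power is capped by whichever side is more limited,
    on voltage and current independently. Simplified one-shot model."""
    volts = min(charger_max_v, vehicle_max_v)
    amps = min(charger_max_a, vehicle_max_a)
    return volts * amps / 1000.0

# A 1000V/500A charger feeding a 400V-pack vehicle that accepts 400A:
print(negotiated_power_kw(1000, 500, 400, 400))  # -> 160.0 (kW)
```

So a "500kW" charger delivers nowhere near that to a 400V pack: the vehicle's limits dominate.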

CHAdeMO is being steadily removed. The cable standard was too limited. So if you own an old Leaf, you need to start carrying around an adapter.

-Matt

Comment Re:How about at highway rest areas? (Score 1) 85

I'm sure it is looked at. The bigger fast DC chargers have to be located near fairly hefty distribution lines (several thousand volts AC is preferred), in order to be able to situate a sufficient number of DC supplies at a location. A DC fast charger outputs 300VDC to 1000VDC based on the vehicle's battery pack requirements, and up to 500A. All dynamically controlled via continuous communication with the vehicle.
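The need for hefty distribution feeds falls out of simple arithmetic. A sketch, where the stall count and the utilization factor are assumptions for illustration:

```python
def site_demand_mw(stalls, per_stall_kw=350, utilization=0.8):
    """Peak grid demand for a DC fast-charging site, in MW.
    utilization models the fact that not every stall draws max at once."""
    return stalls * per_stall_kw * utilization / 1000.0

# Eight 350kW stalls at 80% simultaneous draw:
print(site_demand_mw(8))  # -> 2.24 (MW), well beyond an ordinary service drop
```

A couple of megawatts is substation territory, which is why siting near distribution lines matters.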

-Matt

Comment The basic premise is already not scaleable (Score 1) 209

"In a Substack article, Didgets developer Andy Lawrence argues his system solves many of the problems associated with the antiquated file systems still in use today. "With Didgets, each record is only 64 bytes which means a table with 200 million records is less than 13GB total, which is much more manageable," writes Lawrence. Didgets also has "a small field in its metadata record that tells whether the file is a photo or a document or a video or some other type," helping to dramatically speed up searches."

Yah... no. This is the "if we make the records small enough we can cache the whole thing in ram" argument. It doesn't work in real life. UFS actually tried to do something similar to work around its linear directory scan problem long ago. It fixed only a subset of use cases and blew up within a few years as use cases exceeded its abilities.

The problem is that you have to make major assumptions as to both the size of the filesystem people might want to use AND the amount of ram in the system accessing that filesystem.

The instant you have insufficient ram, performance goes straight to hell. Put those 13GB on a hard drive with insufficient ram, and performance will drop to 400tps from all the seeking. It won't matter how linear that 13GB is on the drive... the instant the drive has to seek, it's game over.
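The back-of-the-envelope behind that collapse is straightforward. The seek and rotational figures below are typical hard-drive ballpark numbers, not measurements:

```python
def random_read_tps(seek_ms=8.0, rotational_ms=4.0):
    """Lookups/sec when every access pays a full seek plus half a rotation.
    Transfer time is negligible for 64-byte records, so latency dominates."""
    return 1000.0 / (seek_ms + rotational_ms)

# Once the working set falls out of ram, every lookup becomes a seek:
print(round(random_read_tps()))  # -> 83 lookups/sec
```

Versus millions of lookups per second when the same records are cached in ram: a gap of four to five orders of magnitude, regardless of how compact the on-disk layout is.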

This is why nobody does this in a serious filesystem design any more. There is absolutely no reason why a tiny little computer (or VM) with a piddling amount of ram, should not be able to mount a petabyte filesystem. Filesystems must be designed to handle enormous hardware flexibility because one just can't make any assumptions about the environment the filesystem will be used in.

This is why hierarchical filesystem layouts, AND hierarchical indexing methods (e.g. B-Tree/B+Tree, radix tree, hash table) work so well. They scale nicely and provide numerous clues to caching systems that allow the caches to operate optimally.
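The scaling payoff is easy to quantify: lookup depth grows only logarithmically with entry count. A sketch assuming a hypothetical fan-out of 1024 per level:

```python
import math

def tree_depth(entries, fanout=1024):
    """Levels needed for a B-tree/radix-style index with the given fan-out."""
    if entries <= 1:
        return 1
    return math.ceil(math.log(entries, fanout))

print(tree_depth(1000))         # -> 1 (small index stays flat)
print(tree_depth(200_000_000))  # -> 3 (huge index is still shallow)
```

Three levels for 200 million entries means the upper levels are tiny and cache beautifully, while the leaves can stay on disk: no assumption about total ram is required.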

-Matt

Comment Re:It's mostly about the metaphor. (Score 1) 209

Yes, you can still have trees with an object store. The object identifier can be wide... for example, the NVMe standard I believe uses 128-bit 'keys'. Sigh. Slashdot really needs to fix its broken lameness filter; I can't even use brackets to represent bit spaces.

So a filesystem can be organized using keys like this for the inode:

parent_object_key, object_key

And this for the file content:

object_key, file_offset|extent_size

For example, a file block could easily be encoded as a 64-bit integer byte offset, with a 63 bit positive offset space, and an extent size encoded in the low 6 bits (radix 1 to radix 63, allowing extents up to (1 << 63) bytes). Since the low 6 bits are used for the extent, the minimum extent size would be 64 bytes. The negative key space could be used for auxiliary records associated with the file or directory. HAMMER2 uses this very method to encode its radix trees, allowing each recursion to use a variable-sized extent and to represent any 64-bit sub-range within the hash space (but H2 runs on top of a normal block device, it doesn't extend the encoding down to the device).
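A minimal sketch of that offset-plus-radix packing. The helper names are mine, and I'm assuming extents are naturally aligned to their size, which is what frees the low 6 bits of the offset to hold the radix:

```python
def encode_extent_key(byte_offset, radix):
    """Pack a byte offset and a 6-bit extent radix into one 64-bit key.
    The extent covers 2**radix bytes; natural alignment at radix >= 6
    guarantees the low 6 bits of the offset are zero."""
    assert 6 <= radix <= 63
    assert 0 <= byte_offset < (1 << 63)
    assert byte_offset % (1 << radix) == 0  # naturally aligned extent
    return byte_offset | radix

def decode_extent_key(key):
    radix = key & 0x3F
    offset = key & ~0x3F
    return offset, radix

key = encode_extent_key(0x10000, 16)  # a 64KB extent at offset 64KB
print(decode_extent_key(key))         # -> (65536, 16)
```

Because the key is just an integer, range lookups on (object_key, file_offset) fall out of ordinary ordered-key iteration.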

A set of directory entries could be encoded as follows, where [object_key] is the inode number of the directory.

object_key, filename_hash_key

Though doing so would almost certainly not be optimal since directory entries are very small.

Inode numbers wind up just being object keys.

This is readily doable... actually, this sort of methodology has been used many times before. I did a turnkey system 20 years ago that used this method to create a simple-stupid filesystem for NOR flash.

The problem with this methodology is that, if done at the kernel/filesystem level, it requires the underlying storage to directly implement the key-store, as well as to support the key width required by the filesystem... which seriously restricts what the filesystem can be built on top of.

-Matt

Comment Filesystems are fine (Score 1) 209

Filesystems are fine. The author needs to bone up. I have several filesystems with in excess of 50 million inodes on them right now. We have a grok tree with over 100 million inodes in it. I'd post the df outputs but slashdot's lameness filter won't let me.

To be fair, an old filesystem like UFS is creaky when it comes to directories, but modern filesystems have no problems with them. And no filesystem has had issues with large files for ages (even UFS did a fairly decent job back in the day). BTRFS, EXT4, ZFS, HAMMER2 (my personal favorite, since I wrote it), and XFS (which is actually a very old filesystem that we used on SGI Challenge systems many years ago) all handle this fine. It is just not a problem.

Generally these filesystems are using hashes, radix trees, or B-tree / B+tree style lookups for directories and inodes. H2, for example, uses a variable block-size radix tree, which means that a directory with only a few entries in it will be very shallow (even just all in one level if it's small), despite the 64-bit filename hashes being evenly spread throughout the entire numerical space. But as the directories grow in size, the tables are collapsed into radix ranges and slowly become deeper. Indirect radix blocks are 64KB, so it doesn't take very many levels to cover a huge directory.
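Concretely, the fan-out of a 64KB indirect block is large enough that depth grows very slowly. A sketch, where the 128-byte per-reference size is my assumption for illustration, not H2's exact on-disk layout:

```python
import math

def dir_levels(entries, block_bytes=65536, ref_bytes=128):
    """Radix-tree levels needed to index a directory, given how many
    references fit in one indirect block."""
    fanout = block_bytes // ref_bytes  # 512 references per 64KB block
    if entries <= 1:
        return 1
    return max(1, math.ceil(math.log(entries, fanout)))

print(dir_levels(100))          # -> 1 (small directories stay flat)
print(dir_levels(100_000_000))  # -> 3 (100M entries, still shallow)
```

Three indirect levels for a hundred-million-entry directory is why directory size just isn't a problem in these designs.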

The only way one could do better (and only slightly better, to be perfectly frank) is to use some of the object-store features built directly into later NVMe chipset standards. Basically the idea there is that any SSD has an indirect block table anyway, so why not just turn it into a (key,data) object store at the SSD firmware level and have filesystems use the keys directly instead of linear block numbers? It's totally doable, and not even very difficult given that most modern filesystems already use keys for directory, inode, and block indexing.

In any case, the author needs to do some serious research and catch up to modern times.

-Matt
