Comment Re:standard plug is need and no 3rd party repair l (Score 1) 85

CCS can go up to around 400kW. Well, actually I think it is 500kW now. Which is 1000VDC x 400A or 500A.

Most BEVs can't go that high. In fact, I think there are only one or two that can actually max out current 350kW chargers for any decent amount of time. Neither of them T

-Matt

Comment Re:Feeding stations... (Score 1) 85

Yes, but nobody fast DC charges to 100%. The charge rate drops modestly past 60% and precipitously above 80%. So people only charge to 60-80% and no more. Usually 30-40 minutes max. And if your final destination is close and destination charging is available, only enough to get there. So for trips just beyond the vehicle range, the charging stop can be very short, like 10-15 minutes.

At home, or at a destination, people will charge to 100% overnight if they will be taking a long trip the next day, and otherwise only charge to 70% or 80%. Unless it's a Model 3 with an LFP battery, in which case people charge to 100% overnight.

-Matt

Comment Re:Feeding stations... (Score 1) 85

Yah. The connector standard has settled down, which is good. Chargers are typically only able to do AC or DC anyway, not both. CCS on the vehicle allows both J1772 (AC only) and also has the extra pins for high amperage DC.

L1 (120VAC) 11-16A (in-vehicle charger)
L2 (240VAC) 24-80A (in-vehicle charger)
L3 (was never implemented)
Fast DC charging, direct DC to battery, dynamically managed up to 1000VDC and 500A.

Limited by the lower of what the external unit can supply and the vehicle can accept.
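
As a rough sketch of that "lower of the two" rule (the pack voltages and current limits below are illustrative numbers, not taken from any particular vehicle or charger):

    #include <stdio.h>

    /* Delivered DC power is the lower of what the charger can supply and what
     * the vehicle will accept at the pack's current voltage.  A simplified
     * illustration, not the actual CCS negotiation protocol. */
    static double charge_kw(double pack_volts, double charger_max_amps,
                            double vehicle_max_amps)
    {
        double amps = charger_max_amps < vehicle_max_amps
                    ? charger_max_amps : vehicle_max_amps;
        return pack_volts * amps / 1000.0;
    }

    int main(void)
    {
        /* 400V-class pack on a 500A-capable charger, vehicle limited to 300A. */
        printf("%.0f kW\n", charge_kw(400.0, 500.0, 300.0));   /* 120 kW */
        /* 800V-class pack that can accept the full 500A. */
        printf("%.0f kW\n", charge_kw(800.0, 500.0, 500.0));   /* 400 kW */
        return 0;
    }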

CHAdeMO is being steadily removed; the cable standard was too limited. So if you own an old Leaf, you need to start carrying around an adapter.

-Matt

Comment Re:How about at highway rest areas? (Score 1) 85

I'm sure it is looked at. The bigger fast DC chargers have to be located near fairly hefty distribution lines (several thousand volts AC is preferred), in order to be able to situate a sufficient number of DC supplies at a location. A DC fast charger outputs 300VDC to 1000VDC based on the vehicle's battery pack requirements, and up to 500A. All dynamically controlled via continuous communication with the vehicle.

-Matt

Comment The basic premise is already not scaleable (Score 1) 209

"In a Substack article, Didgets developer Andy Lawrence argues his system solves many of the problems associated with the antiquated file systems still in use today. "With Didgets, each record is only 64 bytes which means a table with 200 million records is less than 13GB total, which is much more manageable," writes Lawrence. Didgets also has "a small field in its metadata record that tells whether the file is a photo or a document or a video or some other type," helping to dramatically speed up searches."

Yah... no. This is the "if we make the records small enough we can cache the whole thing in ram" argument. It doesn't work in real life. UFS actually tried to do something similar to work around its linear directory scan problem long ago. It fixed only a subset of use cases and blew up within a few years as use cases exceeded its abilities.

The problem is that you have to make major assumptions as to both the size of the filesystem people might want to use AND the amount of ram in the system accessing that filesystem.

The instant you have insufficient ram, performance goes straight to hell. Put those 13GB on a hard drive with insufficient ram, and performance will drop to 400tps from all the seeking. It won't matter how linear that 13GB is on the drive... the instant the drive has to seek, it's game over.
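
For a sense of scale, here is a quick back-of-the-envelope sketch using the article's 200 million records and the ~400 seeks/sec figure above (both taken as given, not measured):

    #include <stdio.h>

    int main(void)
    {
        /* Figures from the discussion above: 200 million records and roughly
         * 400 random IOPS once the working set no longer fits in ram. */
        double records = 200e6;
        double iops    = 400.0;

        /* Worst case: every lookup misses the cache and costs one seek. */
        printf("full random pass: %.1f hours\n", (records / iops) / 3600.0);

        /* Even a 1% miss rate hurts at this scale. */
        printf("1%% misses: %.1f minutes\n", (records * 0.01 / iops) / 60.0);
        return 0;
    }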

This is why nobody does this in a serious filesystem design any more. There is absolutely no reason why a tiny little computer (or VM) with a piddling amount of ram should not be able to mount a petabyte filesystem. Filesystems must be designed to handle enormous hardware flexibility because one just can't make any assumptions about the environment the filesystem will be used in.

This is why hierarchical filesystem layouts, AND hierarchical indexing methods (e.g. B-Tree/B+Tree, radix tree, hash table) work so well. They scale nicely and provide numerous clues to caching systems that allow the caches to operate optimally.
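
To put numbers on why those hierarchical indexes scale, here is a small sketch computing worst-case lookup depth for a balanced tree; the fanout of 1024 is just an assumed round number, not any particular filesystem's:

    #include <math.h>
    #include <stdio.h>

    /* Depth of a balanced tree with the given fanout holding n entries.
     * Purely illustrative; real filesystems vary fanout per level. */
    static unsigned depth(double n, double fanout)
    {
        return (unsigned)ceil(log(n) / log(fanout));
    }

    int main(void)
    {
        double counts[] = { 1e3, 1e6, 1e9, 1e12 };

        for (int i = 0; i < 4; i++)
            printf("%.0e entries -> %u levels at fanout 1024\n",
                   counts[i], depth(counts[i], 1024.0));
        return 0;
    }

Lookups stay logarithmic, and each level maps onto a block-sized node that the cache can keep hot.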

-Matt

Comment Re:It's mostly about the metaphor. (Score 1) 209

Yes, you can still have trees with an object store. The object identifier can be wide... for example, the NVMe standard I believe uses 128-bit 'keys'. Sigh. Slashdot really needs to fix its broken lameness filter, I can't even use brackets to represent bit spaces.

So a filesystem can be organized using keys like this for the inode:

parent_object_key, object_key

And this for the file content:

object_key, file_offset|extent_size

For example, a file block could easily be encoded as a 64-bit integer byte offset, with a 63-bit positive offset space, and an extent size encoded in the low 6 bits (radix 1 to radix 63, allowing extents up to (1 << 63) bytes). Since the low 6 bits are used for the extent, the minimum extent size would be 64 bytes. The negative key space could be used for auxiliary records associated with the file or directory. HAMMER2 uses this very method to encode its radix trees, allowing each recursion to use a variable-sized extent and to represent any 64-bit sub-range within the hash space (but H2 runs on top of a normal block device, it doesn't extend the encoding down to the device).
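
A minimal sketch of that offset-plus-radix encoding, assuming the offset is aligned to the extent size so its low bits are free; this just illustrates the idea and is not HAMMER2's actual on-disk format:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Pack a byte offset and extent size into one 64-bit key: the low 6 bits
     * hold the radix (extent is 1 << radix bytes) and the upper bits hold the
     * offset.  The offset is assumed to be aligned to the extent size, so its
     * low bits are zero and free to reuse. */
    static uint64_t key_make(uint64_t offset, unsigned radix)
    {
        return offset | radix;
    }

    static uint64_t key_offset(uint64_t key) { return key & ~(uint64_t)63; }
    static unsigned key_radix(uint64_t key)  { return (unsigned)(key & 63); }

    int main(void)
    {
        /* 64KB extent (radix 16) starting at a 1MB file offset. */
        uint64_t key = key_make((uint64_t)1 << 20, 16);

        printf("offset=%" PRIu64 " extent=%" PRIu64 " bytes\n",
               key_offset(key), (uint64_t)1 << key_radix(key));
        return 0;
    }

The same packed key then sorts naturally by file offset, which is what makes it usable directly as a tree key.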

A set of directory entries could be encoded as follows, where [object_key] is the inode number of the directory.

object_key, filename_hash_key

Though doing so would almost certainly not be optimal since directory entries are very small.

Inode numbers wind up just being object keys.

This is readily doable... actually, this sort of methodology has been used many times before. I did a turnkey system 20 years ago that used this method to create a simple-stupid filesystem for NOR flash.

The problem with this methodology is that if done at the kernel/filesystem-level, it requires the underlying storage to directly implement the key-store, as well as to support the key width required by the filesystem.... which seriously restricts what the filesystem can be built on top of.

-Matt

Comment Filesystems are fine (Score 1) 209

Filesystems are fine. The author needs to bone up. I have several filesystems with in excess of 50 million inodes on them right now. We have a grok tree with over 100 million inodes in it. I'd post the df outputs but Slashdot's lameness filter won't let me.

To be fair, an old filesystem like UFS is creaky when it comes to directories, but modern filesystems have no problems with directories. And no filesystem has had issues with large files for ages (even UFS did a fairly decent job back in the day). BTRFS, EXT4, ZFS, HAMMER2 (my personal favorite, since I wrote it), and XFS (which is actually a very old filesystem that we used on SGI Challenge systems many years ago) all handle them fine. It is just not a problem.

Generally these filesystems are using hashes, radix trees, or B-tree / B+tree style lookups for directories and inodes. H2, for example, uses a variable block-size radix tree, which means that a directory with only a few entries in it will be very shallow (even just all in one level if it's small), despite the 64-bit filename hashes being evenly spread throughout the entire numerical space. But as the directories grow in size, the tables are collapsed into radix ranges and slowly become deeper. Indirect radix blocks are 64KB, so it doesn't take very many levels to cover a huge directory.

The only way one could do better (and only slightly better, to be perfectly frank) is to use some of the object-store features built directly into later NVMe chipset standards. Basically the idea there is that any SSD has an indirect block table anyway, so why not just make it directly into a (key,data) object store at the SSD firmware level and then have filesystems use the keys directly instead of linear block numbers? It's totally doable, and not even very difficult given that most modern filesystems already use keys for directory, inode, and block indexing.
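
As a toy illustration of what such a device-level (key,data) store might look like from the filesystem side (every name and size below is invented for the example; this is not the NVMe key-value command set):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy in-memory (key,data) store standing in for a device-level object
     * store.  All names and sizes are invented for illustration; real NVMe
     * key/value support is defined by the NVMe KV specification. */
    #define SLOTS 64
    #define BLKSZ 512

    struct obj { uint64_t key; int used; uint8_t data[BLKSZ]; };
    static struct obj store[SLOTS];

    static int obj_put(uint64_t key, const void *buf, size_t len)
    {
        int slot = -1;

        if (len > BLKSZ)
            return -1;
        for (int i = 0; i < SLOTS; i++) {
            if (store[i].used && store[i].key == key) { slot = i; break; }
            if (!store[i].used && slot < 0) slot = i;
        }
        if (slot < 0)
            return -1;                          /* store full */
        store[slot].used = 1;
        store[slot].key = key;
        memcpy(store[slot].data, buf, len);
        return 0;
    }

    static int obj_get(uint64_t key, void *buf, size_t len)
    {
        for (int i = 0; i < SLOTS; i++)
            if (store[i].used && store[i].key == key) {
                memcpy(buf, store[i].data, len > BLKSZ ? BLKSZ : len);
                return 0;
            }
        return -1;                              /* no such key */
    }

    int main(void)
    {
        /* A filesystem that already keys its extents could hand those keys to
         * the device directly instead of allocating linear block numbers.
         * Toy key: inode 42, file offset 1MB, 64KB extent (radix 16). */
        uint64_t key = ((uint64_t)42 << 32) | (1 << 20) | 16;
        char out[32];

        obj_put(key, "hello, object store", 20);
        obj_get(key, out, sizeof(out));
        printf("%s\n", out);
        return 0;
    }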

In any case, the author needs to do some serious research and catch up to modern times.

-Matt

Comment A few simple rules to follow (Score 1) 225

A few simple rules will make life easier.

#1 - Have two credit card accounts. Put trusted services that would be annoying to re-enter on one (e.g. online accounts such as Amazon or Apple), and make untrusted, ad-hoc, and over-the-counter purchases with the other.

#2 - Use Apple Pay or Google Pay instead of a physical card whenever possible, particularly at gas stations. You can associate your 'secure' card account with your phone, then always use the phone-based NFC payment.

#3 - Never make purchases with your debit card. Use the debit card for ATMs only, preferably inside locations or at banks.

#4 - If your bank's online account has the feature, have it text you the location and amount whenever your card is used.

That's it pretty much. Either my wife's or my card (usually my wife's) would get lifted maybe once a year, but it was never an inconvenience because it was always the untrusted card, and almost always due to use at a gas station that did not have wireless payment at the pump.

These days of course we have chip cards, and even cards which don't print the credit card number on the front any more (but still print it on the back). It doesn't matter much, though, because modern-day thieves often don't lift cards via the magnetic strip any more; they install cameras that take pictures of both sides of the card instead.

It is also possible to copy a chip card (actually not even all that hard, because the extra information on the chip that isn't on the face of the card is often not actually checked by the CC company, so blank cards can be programmed fairly easily). And it is also possible to fake or break a chip reader at a merchant, causing the merchant to back off to a slider or direct entry.

And regardless of that, if your CC number is stolen at all, thieves will use it at locations which don't require chip cards. And even though your bank will back the charges out for you, you still have to go through the hassle of changing cards. Which is why you have two accounts... it removes that hassle. So chip cards, even secure ones, don't actually remove the hassle. Thus, take the steps above to minimize it instead.

-Matt

Submission + - Slashdot Alum Samzenpus's Fractured Veil Hits Kickstarter

CmdrTaco writes: Long time Slashdot readers remember Samzenpus, who posted over 17,000 stories here, sadly crushing my record in the process! What you might NOT know is that he was frequently the Dungeon Master for D&D campaigns played by the original Slashdot crew, and for the last few years he has been applying those skills with fellow Slashdot editorial alum Chris DiBona to a survival game called Fractured Veil. It's set in a post-apocalyptic Hawaii with a huge world based on real map data to explore, as well as a careful balance between PVP & PVE. I figured a lot of our old friends would love to help them meet their Kickstarter goal and then help us build bases and murder monsters! The game is turning into something pretty great and I'm excited to see it in the wild!

Comment Re: For anyone wondering how Robinhood makes money (Score 1) 147

Actually, they probably do front-run trades, just indirectly. Not Robinhood themselves, but the exchange they route the orders to might, by virtue of timing latencies to the consolidated tape. This is not specifically illegal, but it is exploited.

Basically the way it works is that trading specialists with servers sitting right next to the major exchanges can arbitrage a trade that comes up on one exchange against a better price that exists on another exchange. This is in addition to payments for order routing. Arbitrage in and of itself is perfectly reasonable; that's how many trades get execution. But it can be exploited in ways that slowly drain money from frequent traders without those traders actually understanding that it is happening.

So Robinhood basically just has to route the order to the worst exchange (which would likely be BATS) and stay just on the legal side of the law. Since Robinhood's investors are idiots who day-trade, the cumulative payments and losses from poor order flow execution and delays wind up being significant.

--

In any case, as a long-time investor myself it is very obvious to me that Robinhood is exploiting the stupidity of its investor base. People with no experience trying to execute complex options trades (or even normal trades using 'intelligent' trading modes such as stop-loss orders) are virtually guaranteed to lose all of their money. Experience rules the roost here. Gambling addictions make it worse. Particularly when options are used... it is very easy for an inexperienced retail investor to see a few 'wins' from their day-trading and unconsciously discard the losses... not realizing that they are actually taking on inordinate risk and losing their nest egg until they've actually lost most of it.

-Matt

Comment Re:amd needs an e3 level cpu with ECC for systems (Score 1) 115

The actual ECC handling is done by the CPU, not the motherboard. So it doesn't matter if it's a cheap motherboard or an expensive one. The motherboard support itself amounts to early-boot configuration of the memory controller by the BIOS.

For AMD, all AGESA updates (part of the BIOS image) as of around 2 years or so ago included full detection and enablement support for ECC on Zen, Zen+, and Zen 2 platforms.

There really isn't much of a distinction between 'cheap' motherboards and 'expensive' motherboards any more. It really just comes down to overclockability and how beefy the VRMs are; after that you are simply paying for board features.

In fact, there is almost no distinction between commercial and consumer motherboards any more either... it again just comes down to what bits of hardware the mobo maker puts on the motherboard. So, e.g., any modern AMD motherboard with IPMI is effectively server (colo, machine-room, rack) capable.

-Matt

Comment Re:About those motherboards (Score 1) 115

That's not really how it works. Both file data and filesystem meta-data are cached in memory. If you have 128GB of ram, then that's up to potentially 128GB worth of data and meta-data subject to bit-rot.

Many modern filesystems must update significant amounts of meta-data whenever they synchronize modifications to storage. Even small modifications can result in hundreds of kilobytes being written to storage. Much of that is meta-data that had been cached in ram and then modified as part of the topology sync.

You certainly cannot depend on getting 'application errors' when ram goes bad. That sort of thing is far more likely if the CPU messes up (due to an unstable overclock, for example). Bit-rot in ram can be far more insidious.

-Matt

Comment Re:PBO works? (Score 1) 115

Unless you are gaming at a low resolution like 1080p, or have an extreme refresh-rate monitor (120Hz or higher), the fps differences are irrelevant. And even if you do, if you game at any decent resolution (1280p maybe, but mostly 1440p or higher), the game will be GPU-bound and the differences will not really be noticeable.

I look at it this way... why spend an extra few hundred dollars on an Intel chip when you could instead spend that extra dough on a better GPU or more memory and still get a CPU that's just as fast? Intel just isn't price competitive against AMD any more. So unless you are on an unlimited budget, AMD is the better choice.

In any case, PBO isn't going to do much better than stock settings. It really comes down to how much power you are willing to burn to get performance, cooling, and memory, and that's it. So you really need to line up all the pieces to lift off from stock settings, i.e. you need to use DDR 3800 (3733) (i.e. max out the IF in 1:1 mode) and a good liquid or tower cooler, along with good case cooling for the VRMs, to see a performance uplift vs stock.

The same is true for a modern Intel CPU these days as well. Though it is modestly easier to O.C. an Intel chip, the power consumption goes into insane-land a whole lot quicker. It often comes down to just how much power (and thus heat) and noise you are willing to tolerate to get the O.C. you want. For many people it just isn't worth the heat and noise.

-Matt
