
Comment Re:Oh goody (Score 1) 264

The point of write amplification is solely to get more sectors into the erasure party. A single small write forces the SSD to migrate full sector A to empty sector B.

If you could send direct physical sector erasure and physical write commands to the media, you could just tell it to erase each sector and rewrite its first byte repeatedly until that sector failed, and then march to the next sector.

But, you don't have the opportunity to do that. Instead, you must interact at the filesystem level, and there's an FTL between the file system and the media. So, if your goal is to ruin the media quickly with those two layers between you and the flash, you want to minimize the FTL's ability to filter out your writes and reduce the number of erasures they force.

I'm aware that erase times for sectors are huge, and will slow your I/O rate accordingly. You're right that write amplification doesn't necessarily shorten the calendar days to failure, since a different write pattern may have triggered the same number of erasures in the same time frame with a larger number of writes. But writes aren't free either, so there's at least some, err, benefit to minimizing the number of writes required to ruin your SSD, if your goal is to ruin your SSD as quickly as possible.
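
For the curious, "a single small write" forced all the way to the media looks roughly like this from user space. This is a Linux/POSIX sketch, untested; the path and sector size are placeholders, and O_DIRECT may need _GNU_SOURCE on some toolchains.

    // Force one 512-byte write past the page cache so the OS can't coalesce
    // it away; the FTL still has to migrate/erase a whole flash page for it.
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdlib>
    #include <cstring>

    int main() {
        const size_t kSector = 512;
        void *buf;
        if (posix_memalign(&buf, kSector, kSector)) return 1;  // O_DIRECT wants aligned buffers
        memset(buf, 0xA5, kSector);

        int fd = open("victim.dat", O_WRONLY | O_CREAT | O_DIRECT | O_SYNC, 0644);
        if (fd < 0) { free(buf); return 1; }
        pwrite(fd, buf, kSector, 0);   // one tiny logical write
        close(fd);
        free(buf);
        return 0;
    }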

Comment Re:Oh goody (Score 1) 264

If you want to write once and forget it, you can fill the thing right to the top of the advertised capacity, and you don't have to worry about failure due to wear. Instead, you have to worry about failure due to electrons migrating off the bits. So, you need to refresh all the bits every so often, much like DRAM, only with a much slower refresh interval. Even if you refresh all the bits on the drive once a day, if you do so in a nice, orderly manner, I'd imagine you won't reach the rewrite limit for the drive in your lifetime.
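
The refresh itself is nothing fancy; something along these lines would do it (untested sketch, error handling trimmed, filename and block size made up; use fseeko/off_t for files past 2 GB).

    // Read every block of a file and write the same bytes back, so the data
    // ends up in freshly programmed cells.
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        const std::size_t kBlock = 1 << 20;          // 1 MB at a time
        std::vector<char> buf(kBlock);
        FILE *f = std::fopen("archive.bin", "r+b");
        if (!f) return 1;
        long off = 0;
        std::size_t n;
        while ((n = std::fread(buf.data(), 1, kBlock, f)) > 0) {
            std::fseek(f, off, SEEK_SET);            // required between read and write
            std::fwrite(buf.data(), 1, n, f);
            off += static_cast<long>(n);
            std::fseek(f, off, SEEK_SET);            // and between write and next read
        }
        std::fclose(f);
        return 0;
    }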

Still, I'm not sure I'd choose an SSD for that.

Comment Re:Oh goody (Score 1) 264

Yes, I know what Stacker, DoubleSpace and DriveSpace were as a technical implementation. My point is mainly that modern Windows still offers a mechanism to compress files on a live filesystem; it just lets you select at folder granularity rather than whole-disk granularity. I didn't think the fact that they're implemented at different layers of the stack was relevant here.

Stacker et al. worked at the sector level so that you didn't need to modify DOS or the umpteen programs that made use of sector-level access to the filesystem and insisted on that level of access in order to function. Copy protection schemes and disk editors both relied on it. (Defraggers too, although defragging a compressed volume is... crazy.) Databases may have as well; I'm not certain. Windows NT forces programs through a narrower, more controlled set of APIs to access the file system.

I actually had a Stacker'd hard drive back in the day and had read up on all the tech, so it's not like I'm unfamiliar with it.

Comment Re:Oh goody (Score 1) 264

OK, I just checked on my WinXP box: you can right-click a folder, go to "Properties", click "Advanced", and there's an option to "Compress contents to save disk space." I'm too lazy to go get my Win7 laptop to see if that's still there.

So, some version of TroubleSpace...err...DoubleSpace...err...DriveSpace survived beyond Win98.
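
If memory serves, the same per-folder bit the checkbox flips can also be set programmatically via the FSCTL_SET_COMPRESSION ioctl. Hedged sketch below; the path is a placeholder, and on a directory this only sets the default for newly created files (existing contents have to be compressed separately, e.g. with compact.exe).

    #include <windows.h>
    #include <winioctl.h>

    bool CompressPath(const wchar_t *path) {
        HANDLE h = CreateFileW(path, GENERIC_READ | GENERIC_WRITE,
                               FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                               OPEN_EXISTING,
                               FILE_FLAG_BACKUP_SEMANTICS,  // needed to open a directory
                               nullptr);
        if (h == INVALID_HANDLE_VALUE) return false;

        USHORT fmt = COMPRESSION_FORMAT_DEFAULT;
        DWORD bytes = 0;
        BOOL ok = DeviceIoControl(h, FSCTL_SET_COMPRESSION, &fmt, sizeof(fmt),
                                  nullptr, 0, &bytes, nullptr);
        CloseHandle(h);
        return ok != FALSE;
    }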

Comment Re:Oh goody (Score 3, Informative) 264

If you know something about the drive's sector migration policies, in theory you could construct a worst-case amplification attack against a given drive, leveraging that knowledge against the drive's wear leveling policies. But that seems rather unlikely.

Flash pages retain their data until they're erased. You can write at the byte level, but you must erase at the full page level. You can't rewrite a byte until you erase the page that contains it. That's the heart of the attack: Rewriting sectors with new data. You can't rewrite a sector in-place. You mark the old location as "dirty but free", and write the new data to a new location. The SSD can't reclaim the dirty-but-free sectors for writing until they're erased.

Thus, the basic idea goes something like this: Fill the disk to 99.9% full. Then, selectively rewrite individual sectors, forcing the sector to migrate to a new flash page. Wash, rinse, repeat until the drive fails.

If the drive only performs dynamic wear leveling, all subsequent rewrites will erase and reuse only the free space. (Note: this free space includes all of the space the drive reserves to itself for dynamic wear leveling purposes.) Now all you need to do is reach the erase/rewrite limit of the dynamic wear leveling pool, which is significantly smaller than the full drive capacity. You can achieve this by rewriting a small subset of sectors until the disk falls over.

Modern drives perform a blend of dynamic and static wear leveling. Dynamic wear leveling only erases/rewrites among the "free" space. Static wear leveling gets otherwise untouched sectors into the fray by wear leveling over all sectors. This blended approach defers static wear leveling until it becomes absolutely necessary. The flash translation layer (FTL) detects when the wear difference between sectors gets too imbalanced, and migrates static sectors into the worn regions and wear-levels over the previously "static" sectors.

A successful attack would take this into account and attempt to keep track of which sectors would be marked "static" vs. "dynamic". It would also predict how the static sectors were grouped together into pages, so it could cherry-pick and inflict the maximum damage: all it needs to do is write to a single sector in each static flash page (creating a bunch of unallocated "dirty-but-free" holes), continuing until the SSD is forced into a garbage collection cycle. That GC cycle would then have to touch all the static pages (or at least a significant fraction of them) to compact the holes away and make space available for future writes.

If you can keep that up, you can magnify your writes by the ratio between the page size and the sector size. If you have 512 byte sectors and 512K byte pages, the amplification factor is 1024.

But, as I suggested above, to achieve this directly, you need to have some idea of how the SSD marks things static vs. dynamic. Without such knowledge, you have to approximate.

I imagine if you really wanted to kill an SSD without any knowledge of its algorithms, you could do something simple like rewrite every allocated sector in an arbitrary order, shuffling the order each time. SSD algorithms assume a distribution of "hotness" (i.e. some sectors are "hot" and will be rewritten regularly, while most are "cold" and will be rewritten rarely if ever), so rewriting all sectors in a random order will cause rather persistent fragmentation, recurring GC cycles, and pretty noticeable amplification.
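
A back-of-the-napkin sketch of that shuffle-and-rewrite loop, assuming Linux, O_DIRECT I/O, 512-byte logical sectors, and a filler file that already occupies most of the drive. Everything named here is a placeholder, not a tested tool; for a big drive you'd want to shuffle coarser-grained chunks so the index table fits in RAM.

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/stat.h>
    #include <algorithm>
    #include <cstdint>
    #include <cstdlib>
    #include <cstring>
    #include <numeric>
    #include <random>
    #include <vector>

    int main() {
        const size_t kSector = 512;
        int fd = open("filler.bin", O_RDWR | O_DIRECT | O_SYNC);
        if (fd < 0) return 1;

        struct stat st;
        if (fstat(fd, &st) != 0) return 1;
        const uint64_t sectors = static_cast<uint64_t>(st.st_size) / kSector;

        void *buf;
        if (posix_memalign(&buf, kSector, kSector)) return 1;

        std::vector<uint64_t> order(sectors);        // sector indices to visit
        std::iota(order.begin(), order.end(), 0);
        std::mt19937_64 rng(12345);

        for (;;) {                                   // until the drive falls over
            std::shuffle(order.begin(), order.end(), rng);
            for (uint64_t s : order) {
                memset(buf, static_cast<int>(s), kSector);   // "new" data each pass
                if (pwrite(fd, buf, kSector,
                           static_cast<off_t>(s * kSector)) < 0)
                    return 1;                        // likely the failure point
            }
        }
    }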

You wouldn't get to the 40 day mark, but if you started with a mostly full SSD, you might get to a few months.

That's my back-of-the-napkin, "I wrote an FTL once and had to reason through all this" estimate.

Comment Re:It's great, but we try not to use it. (Score 1) 435

I actually use C++ for embedded programming, because when used with care it can actually do a better job than C for a number of things. I use template metaprogramming to compute various things at compile time, such as, say, register initialization values and whatnot. Sure, I can do the same with #define and a boatload of macros, but that has its own issues. Not only are macros messy in their own way, they don't provide a good way to sanity check your settings. With templates and types done right, I can actually get the compiler to sanity check my settings at compile time. I don't know how many times I've chased down a bug due to swapped macro parameters that could have been caught at compile time with some type checking / trait checking.
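
Something in the spirit of what I mean, though nothing like my actual library (C++11 static_assert is used here for brevity, and the register layout is made up):

    // Describe a register field as a type, compute the packed value at
    // compile time, and let the compiler reject out-of-range settings.
    #include <cstdint>

    template <unsigned Lsb, unsigned Width>
    struct Field {
        static_assert(Lsb + Width <= 32, "field does not fit in a 32-bit register");
        template <uint32_t V>
        struct Value {
            static_assert(Width == 32 || V < (1u << Width), "value too wide for field");
            static const uint32_t encoded = V << Lsb;
        };
    };

    // Hypothetical UART line-control register layout.
    typedef Field<0, 2> DataBits;   // bits [1:0]
    typedef Field<2, 1> StopBits;   // bit  [2]
    typedef Field<3, 3> Parity;     // bits [5:3]

    // Computed entirely at compile time; ends up as a literal in the binary.
    static const uint32_t kLineCtrl =
        DataBits::Value<3>::encoded |   // 8 data bits
        StopBits::Value<1>::encoded |   // 2 stop bits
        Parity::Value<4>::encoded;      // some parity mode, say

    // Parity::Value<9>::encoded would fail to compile: 9 doesn't fit in 3 bits.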

I've written an entire C++ based support library just for this purpose. One of its goals is extreme compactness and cycle efficiency, since the code often needs to run in RTL simulation. Software RTL simulation of a large SoC runs in the 10s to 1000s of cycles per second, so cycle efficiency is at an extreme premium.

What my library largely replaces is other C and assembly code that (often hamfistedly) computes everything at run time, and so my code can handily beat that.

I haven't quite hit the nirvana of generating an entire MMU page tree from a compact memory map description using templates (I have a perl script for that), but it sure beats spending 100,000s of cycles or more computing it at run time when that translates to hours of sim time. (Fun fact: some rather popular modern processors run really slowly until you turn the MMU on, because they can't cache any data until you do.)

I have however written dynamic code generators that use templates and function overloading to resolve as much of the opcode encoding as possible at compile time, so that the run-time portion usually is just a "store constant" or maybe a quick field insert into a constant followed by a store. Those can pump opcodes to memory as fast as an opcode per cycle (and in some special cases, faster), which is pretty darn good. Again, all typechecked as much as possible at compile time, to minimize or eliminate the possibility I generate invalid instructions.
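
A toy flavor of the idea, nothing like the real generator, just to show where the work lands. The "ISA" is invented: a 32-bit "load immediate" with the opcode in bits [31:26], destination register in [25:21], and a 16-bit immediate in [15:0].

    #include <cstdint>

    template <unsigned Rd>
    struct LoadImm {
        static_assert(Rd < 32, "no such register");
        static const uint32_t kBase = (0x0Du << 26) | (Rd << 21);  // fixed bits,
                                                                   // resolved at compile time
        // The only run-time work left: insert the immediate and store the word.
        static uint32_t *emit(uint32_t *pc, uint16_t imm) {
            *pc++ = kBase | imm;      // typically one OR and one store
            return pc;
        }
    };

    // Usage: emit "load 0x1234 into r5" into a code buffer.
    //   uint32_t *pc = codeBuffer;
    //   pc = LoadImm<5>::emit(pc, 0x1234);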

Comment Re:STL is painful to use (Score 1) 435

Suppose you want to determine if a collection c contains an element e. In any other language, you'd write something like c.contains(e).

I have good news for you! Sure, you still need to provide begin() and end() to specify a range, but it's a step forward. And, with the new non-member begin() and end() you can even use it on plain arrays.

Yeah, you still have to put all the pieces together yourself, but the pieces are a bit more uniform now and there are usually fewer of them to worry about. (Especially now with auto.)
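
Concretely, the "put the pieces together yourself" version of c.contains(e) is a one-time helper like this (names are mine), using the C++11 non-member begin()/end() so it also works on plain arrays:

    #include <algorithm>
    #include <iterator>

    template <typename Container, typename T>
    bool contains(const Container &c, const T &e) {
        using std::begin;
        using std::end;
        return std::find(begin(c), end(c), e) != end(c);
    }

    // Usage:
    //   std::vector<int> v = {1, 2, 3};
    //   int a[] = {4, 5, 6};
    //   contains(v, 2);   // true
    //   contains(a, 7);   // false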

Comment Re:C++ admits it is too complex with "auto" for ty (Score 1) 435

Before auto, it seemed like C++'s error messages were downright passive-aggressive: "If you don't know what to put here, I'm not going to tell you." At least, it wouldn't tell me the concise thing to put there. It would tell me the completely flattened type, which can be quite huge if you're trying to, say, get an iterator to a nested STL container holding a template class composed against some other classes (which might themselves be templated) à la the Policy pattern.
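
To make that concrete (the container choice is just an example):

    #include <map>
    #include <string>
    #include <vector>

    void walk(const std::map<std::string, std::vector<std::pair<int, double>>> &m) {
        // Pre-auto you had to spell out the iterator's full type:
        //   std::map<std::string,
        //            std::vector<std::pair<int, double>>>::const_iterator it = m.begin();
        // ...and error messages quote the even longer flattened form, with the
        // comparator and allocator defaults filled in.

        for (auto it = m.begin(); it != m.end(); ++it) {
            (void)it;   // the deduced type is the same monster, just unspoken
        }
    }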

I just wish I didn't have to code for the lowest-common-denominator compiler at work, so that I could be sure I can use auto with impunity. :-)

Comment Re:Simple (Score 1) 435

I unfortunately claim ignorance on the license for the runtime. I know some of my employer's products use Dinkumware for the C++ library, but I'm not sure what this processor uses. (TMS320C6600 family, if you're curious.) I'm usually at the other end of the pipeline, using the pre-alpha tools before the silicon exists, so I'm pretty far removed from the customer toolchain distribution end of things. Sorry I can't be more helpful on that detail. I can tell you all about VLIW instruction scheduling and cache memory system pipeline behavior though!

Comment Re:The STL is too general purpose (Score 1) 435

You do have to be sure to compile with full optimization enabled, though, for STL to have a minimal hit. I use STL quite happily to do things I all too eagerly rolled my own implementations of years ago and then clung to, even when they weren't a perfect fit. For example, for eons I carried around an AVL tree implementation I wrote for a data structures class, and used it to implement associative containers just so I wouldn't have to write it again. These days it's simply map< yadda, yadda > and I'm on my way. I'm willing to bet map<> beats that creaky old AVL tree any day.

Without optimization, the STL containers can slow down quite a bit. I've heard the effect is especially large on some versions of MSVC++, since they have special debugging versions of the iterators that incur their own performance penalties in return for extra checks. I wouldn't know; I do all my development under Linux or for embedded processors on bare metal.

With optimization on, I rarely if ever notice a performance issue due to STL. I do run into the occasional limitation, such as needing an actual resizeable 2-D array-like structure. (A vector< vector< ... > > doesn't cut it, because resizing the inner dimension doesn't resize all rows.) But, that's more exception than rule.
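
When I do hit that 2-D case, the workaround is usually a flat vector with index math, so a resize really does affect every row. Minimal sketch, names are mine, no frills beyond an assert:

    #include <cassert>
    #include <cstddef>
    #include <vector>

    template <typename T>
    class Grid {
    public:
        Grid(std::size_t rows, std::size_t cols, const T &init = T())
            : rows_(rows), cols_(cols), data_(rows * cols, init) {}

        T &at(std::size_t r, std::size_t c) {
            assert(r < rows_ && c < cols_);
            return data_[r * cols_ + c];
        }

        // Resize both dimensions, preserving the overlapping region.
        void resize(std::size_t rows, std::size_t cols, const T &init = T()) {
            std::vector<T> next(rows * cols, init);
            for (std::size_t r = 0; r < rows_ && r < rows; ++r)
                for (std::size_t c = 0; c < cols_ && c < cols; ++c)
                    next[r * cols + c] = data_[r * cols_ + c];
            data_.swap(next);
            rows_ = rows;
            cols_ = cols;
        }

    private:
        std::size_t rows_, cols_;
        std::vector<T> data_;
    };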

My biggest complaint about C++11 is that I won't realistically be able to use it for another few years. Grrr.

Comment Re:Simple (Score 1) 435

Just for fun, I tried the same experiment on one of our DSPs, and it pulled in just over 64K. I think our library is generally leaner in the locale department. In fact, I didn't see any locale data linked in. Most of what it pulled in looks to be actual ios/istream/ostream stuff, basic_string<char> and basic_string<wchar_t>.
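
For anyone who wants to repeat it, the experiment amounts to roughly this (my reconstruction; the actual test program may differ): link the smallest program that drags in iostream and see what the linker pulls in, e.g. via the map file or the size utility.

    #include <iostream>

    int main() {
        std::cout << "hello" << std::endl;
        return 0;
    }

    // $ g++ -Os -o hello hello.cpp    (host toolchain shown; the DSP toolchain differs)
    // $ size hello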

Comment Re:Phones yeah (Score 1) 227

Yes, imagine a world where the laws of thermodynamics don't apply.

The peak theoretical efficiency of an internal combustion engine is bounded by the efficiency of an equivalent ideal Carnot cycle, which, if I remember my ME301 Thermo class, is a bit below 40%. Wikipedia backs me up on this, quoting a limit of 37% for a steel engine block. That jibes with what I remember learning in Thermo.
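
For reference, the Carnot bound in question, with temperatures in kelvin:

    \eta_{\max} = 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}}

No heat engine operating between a hot reservoir at T_hot and a cold one at T_cold can do better, regardless of the working fluid or the cleverness of the design; the oft-quoted ~37% figure additionally folds in the practical temperature limits of a steel engine block.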

To get 80% efficiency out of gasoline would require a different method of releasing its energy than an internal combustion engine.
