
Comment: Dumbass developers, too (Score 1) 121

I'm reminded of the old Bag O' Glass SNL skit - some products (or product features) are just plain dangerous, and saying "but we explain the risks on page 17 of the manual" isn't a good excuse.

How much effort would it take to set defaults that (1) disable anonymous FTP for addresses outside of the local subnet, and (2) inject a fake robots.txt that prevents search engine indexing? And then add an explanation of the risks if you try to disable those defaults?
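The first default is just a subnet membership test. Here's a minimal sketch in Python - the `allow_anonymous` hook, the `192.168.1.0/24` subnet, and the robots.txt wiring are all illustrative assumptions, not any real FTP server's API:

```python
import ipaddress

# Illustrative local subnet; a real server would read this from its config
# or derive it from the interface address.
LOCAL_SUBNET = ipaddress.ip_network("192.168.1.0/24")

# A robots.txt that tells well-behaved crawlers to index nothing.
ROBOTS_TXT = "User-agent: *\nDisallow: /\n"

def allow_anonymous(client_ip: str) -> bool:
    """Hypothetical policy hook: permit anonymous logins only from the local subnet."""
    return ipaddress.ip_address(client_ip) in LOCAL_SUBNET

print(allow_anonymous("192.168.1.42"))  # a machine on the local subnet
print(allow_anonymous("203.0.113.9"))   # an outside address
```

The point is how little code the safe default takes; the explanation of the risks of turning it off is the harder part.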

Comment: Re:Interesting idea, nasty downsides (Score 4, Insightful) 93

Drive performance is kind of like airplane legroom - people gripe about it, but in the end they ignore it and buy the cheap ticket.

Shingled drives aren't better - they're bigger, and that's what people pay for. WD's 10TB helium drive is shingled, and I would guess that every drive over 10TB will be shingled for the foreseeable future. By the time HAMR and BPM come out, SSDs will probably have killed off the high-performance drive market, so those technologies will probably be released as capacity-optimized shingled drives, too.

Submission: New Seagate shingled hard drive teardown

Peter Desnoyers writes: Shingled Magnetic Recording (SMR) drives are starting to hit the market, promising larger drives without heroic (and expensive) measures such as helium fill, but at a cost — data can no longer be over-written in place, requiring SSD-like algorithms to handle random writes.

At the USENIX File and Storage Technologies conference in February, researchers from Northeastern University (disclaimer — I'm one of them) dissected shingled drive performance both figuratively and literally, using both micro-benchmarks and a window cut in the drive to uncover the secrets of Seagate's first line of publicly-available SMR drives.

TLDR: It's a pretty good desktop drive — with write cache enabled (the default for non-server setups) and an intermittent workload it performs quite well, handling bursts of random writes (up to a few tens of GB total) far faster than a conventional drive — but only if it has long powered-on idle periods for garbage collection. Reads and large writes run at about the same speed as on a conventional drive, and at $280 it costs less than a pair of decent 4TB drives. For heavily-loaded server applications, though, you might want to wait for the next generation.

Videos (in 16x slow motion) showing the drive in action — sequential read after deliberately fragmenting the drive, and a few thousand random writes.

Comment: Those travel time signs on the highway... (Score 1) 168

in Massachusetts (and probably other places) use Bluetooth phone tracking:

"The GO Time real time traffic system measures travel times between two points by anonymously tracking the Bluetooth enabled devices carried by motorists and their vehicles. The system complies with new federal legislation that requires real time traffic information to be provided to the public."

Comment: Reasonable conclusions, bad methodology (Score 1) 256

The author of the study makes a lot of arguments based on factors that are easily changed, like the configuration of an SSD. However, there are a few basic technological trends:

1. Disks and NAND flash are both getting more dense at fairly comparable speeds - disk has been getting cheap faster than flash lately, but may hit a hiccup in the next few years. Where flash has conclusively replaced disk is in applications like iPods and mobile phones, where "enough" storage is cheaper than a single disk. (The iPod went flash when 2GB of flash reached $50, which was the price of a micro-disk.) It's not going to replace disk for high-volume data storage anytime soon.

2. With today's disks and chips, a hard disk drive has a relatively fixed cost (the cost of the factory amortized over the number of drives produced) and similarly flash has a relatively fixed cost (cost of fabrication plant over the number of chips produced in its useful lifespan). The number of bits on each doesn't really matter - that's why packing them more tightly makes the bits cheaper.

3. Disk bandwidth for 7200 RPM drives isn't going to go over, say, 300MB/s anytime soon with today's perpendicular recording technology - if the disk is moving past the head at a constant speed, the only way to get more bits through per second is to pack them more closely on the platter. And the best you can do by spinning faster is a factor of 2, at 15K RPM. (And those drives are very low capacity and very expensive.)
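The bandwidth ceiling in #3 is simple arithmetic: the head sees at most one track's worth of data per revolution. A quick back-of-the-envelope check, where the 2.5 MB/track figure is an assumption for illustration rather than a datasheet value:

```python
# Sequential disk throughput = bytes per track * revolutions per second.
# 2.5 MB/track is an assumed round number chosen to illustrate the ceiling.

def disk_throughput_mb_s(bytes_per_track: float, rpm: int) -> float:
    revs_per_sec = rpm / 60
    return bytes_per_track * revs_per_sec / 1e6

print(disk_throughput_mb_s(2.5e6, 7200))   # ~300 MB/s at 7200 RPM
print(disk_throughput_mb_s(2.5e6, 15000))  # spinning ~2x faster only ~doubles it
```

Pushing past that number means denser tracks, not faster motors - which is exactly why #3 says the recording technology is the limit.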

2 and 3 mean that flash can easily supply cheaper bandwidth than disk - it's the SSD maker's choice how widely they want to stripe data over the chips in the drive. (64 ways isn't unreasonable.) There's a huge advantage today; it will stay the same (see #2) if flash chips don't get faster, and grow if they do. (At some point getting that speed may require paying for more flash than you need, but at that point a single disk will be bigger than you need, too.)
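The striping argument is also just multiplication - aggregate bandwidth scales with how many chips you write in parallel. A sketch with an assumed per-chip write speed (the 10 MB/s figure is illustrative, not from any datasheet):

```python
# SSD aggregate write bandwidth = per-chip bandwidth * stripe width.
# 10 MB/s per chip is an assumed figure for a modest flash chip.

def ssd_write_bw_mb_s(per_chip_mb_s: float, stripe_ways: int) -> float:
    return per_chip_mb_s * stripe_ways

print(ssd_write_bw_mb_s(10, 64))  # 64-way stripe of slow chips: 640 MB/s
```

Even unimpressive chips striped 64 ways beat the ~300 MB/s disk ceiling, which is the whole point of #2 and #3 taken together.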

For years flash was getting slower and less reliable (requiring more complex error correcting codes) as it got denser - that's partly why it got cheap so much faster than e.g. RAM, where you can't cut those corners. The next generation of flash (3D NAND) may reverse that for a while; in addition SSDs are finally a noticeable fraction of the market so there's an incentive for vendors to make faster flash. (3 years ago SSDs were 3% of the flash market, and the rest went into iPods, phones, and removable drives and cards - SSD vendors had to make do with flash that was designed for systems where you don't care about performance)

Comment: Bubble memory anyone? (Score 1) 256

Does anyone else remember when bubble memory was supposed to replace hard drives? There's a long road between the current state of post-NAND technologies (Phase Change Memory, spin-torque-transfer magnetic RAM, Resistive RAM, and a few others) and mass-market high-volume chips. If one of them becomes good enough for someone to risk a $5 billion fab on, and it gives more bits per dollar than flash, then it will probably replace flash almost instantly. If no one bets a cutting-edge fab on it, however, it doesn't matter how promising the technology is. (In particular, the "10x better" claim is based on assumptions that e.g. PCM can be built in sizes vastly smaller than today's flash - of course we don't know how to build the fabrication plants to do that yet. No one has a story for something 10x better at the same feature size.)

Comment: not quite... (Score 1) 256

The paper is from Steve Swanson's group at UCSD, *not* Microsoft Research.
And the reasons for slowdown with more bits per cell: (a) writing is done in incremental steps, which have to be smaller for the more precise levels needed for 8 or 16 levels per cell, requiring more steps, and (b) the charge on a flash cell can't be measured directly; instead the chip can measure which cells in a page are above (or below) a threshold voltage, so sensing 16 levels requires 15 separate read operations.
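Point (b) is easy to see in code: if the only primitive is "is the cell above threshold t?", then telling apart n levels takes up to n - 1 threshold reads. A toy model (the function and its bottom-up scan order are illustrative, not how any particular chip sequences its reads):

```python
# Toy model of multi-level cell sensing: the chip can only compare the cell
# against one threshold at a time, so resolving one of n levels costs up to
# n - 1 threshold reads when scanning thresholds from the bottom up.

def read_level(cell_level: int, n_levels: int) -> tuple[int, int]:
    """Return (decoded level, threshold reads used)."""
    reads = 0
    for t in range(1, n_levels):   # one threshold between each adjacent pair of levels
        reads += 1
        if cell_level < t:         # cell charge is below this threshold
            return t - 1, reads
    return n_levels - 1, reads     # cell is at the top level

level, reads = read_level(15, 16)
print(level, reads)   # decoding the top of 16 levels costs all 15 reads
```

That worst case of 15 reads for 16 levels is exactly the slowdown described in (b).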

Comment: Re:Conflating open access and open source (Score 1) 172

Remember that most scientific papers are vehicles to describe work which has been done by the authors. It makes no sense for someone else to modify it - they typically don't have enough information about the work the paper is based on, and when they're done it's no good to them, as it still describes someone else's work.

There are probably people - e.g. tech journalists - who could make use of my writing if I used a liberal license, but you don't have the same reciprocity you have in open source software. People who contribute to open source software also benefit from others' contributions, while in the case of scholarly writing the benefits would primarily flow in one direction.

Comment: Re:Special snowflakes (Score 1) 370

We academics understand perfectly well that other people in the world have hard jobs too.

Well, as another academic, I think *some* of us are aware of that. In general, a lot of people with desk jobs seem to feel that their profession is uniquely difficult, and that the guy who cleans their office in the middle of the night gets paid less because he doesn't work as hard. Academics seem just as likely to believe that as anyone else.

    All we ask is that other people recognize that our jobs are, first and foremost, jobs, like anyone else's.

Amen to that.

Comment: Re:TRIM equivalent (Score 1) 205

All SSDs have a bit more storage than their rating. Partitioning a little less space on a vendor-fresh drive can double or triple the extra storage available to the SSD's internal wear leveling algorithms.

This won't actually work - partitions don't exist from a disk's point of view, but are just bytes in sector 0. The SSD will religiously preserve the useless data in the sectors outside of the partition you create, using up space that could otherwise be put to good use.

As other posters have explained in bits and pieces, flash chips can be written in pages (2KB or 4KB, usually) but have to be erased in blocks (64 or 128 pages). If you overwrite 10 pages in the middle of a block, the writes will go into fresh pages somewhere else, and the original 10 will be useless until you get around to erasing the entire block. Since the other pages in that block are holding live data, you can't erase it until they either (a) get replaced by additional new writes, or (b) are moved somewhere else. If you end up having to copy, say, 3 pages of data for every new page you write, your write performance is going to go down by a factor of 4.
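That "factor of 4" is the write amplification: physical writes per logical write. The one-line arithmetic, for anyone who wants to plug in other numbers:

```python
# Write amplification: if garbage collection copies k live pages for every
# page of new host data, the flash performs (k + 1) physical page writes per
# logical write, and throughput drops by roughly that factor.

def write_amplification(copied_pages: int, new_pages: int) -> float:
    return (copied_pages + new_pages) / new_pages

print(write_amplification(3, 1))  # copying 3 pages per new page -> 4.0
print(write_amplification(0, 1))  # fully empty blocks to erase -> 1.0, the ideal
```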

The more free space you have, the more likely it is that even with totally random writes there will be some blocks that are entirely empty and can be erased without having to copy any data. That's why the 32GB Intel X25-E (the enterprise drive) has 40GB of flash chips inside it. On the other hand, just about every consumer drive has 6.7% or so free space, because that lets them use say 64GB (64 * 2^30) of flash chips and legally advertise it as 64GB (64 * 10^9).
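The "free" overprovisioning from the GB-vs-GiB gap is worth checking with actual numbers - ship 64 * 2^30 bytes of flash, advertise 64 * 10^9:

```python
# Hidden overprovisioning from binary vs decimal gigabytes:
# 64 GiB of physical flash sold as 64 GB of advertised capacity.

physical = 64 * 2**30    # bytes of flash chips inside the drive
advertised = 64 * 10**9  # bytes the label promises

spare = physical - advertised
print(spare / physical)    # ~0.069 of the physical flash is spare
print(spare / advertised)  # ~0.074 relative to the advertised capacity
```

About 6.9% of the physical flash ends up as spare area - the "6.7% or so" figure above, give or take rounding - without the vendor buying a single extra chip.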

Typically your file system has a fair amount of free space (compared to 6.7%), because performance suffers and you run the risk of running out of space when you get close to 100% usage. Without TRIM, however, the SSD can't make use of that space, and carefully preserves the contents of every block on the file system free list. In theory TRIM should allow the OS to identify the file system free list to the SSD, which will then have much more space available for garbage collection, resulting in reduced copying and better performance. In practice, your mileage may vary.
