
Comment: Re:Facebook's application is poorly coded (Score 1) 370

by jjgm (#28480309) Attached to: Facebook VP Slams Intel's, AMD's Chip Performance Claims

Unfortunately, enterprises tend to pre-purchase shared storage, and buying 8TB of disk when you only need 1TB of space gets you noticed during economic downturns.

There will always be a market need for small, fast drives, and - to bring this back to the original poster's point - that's because, by several very practical measures, performance per raw TB has actually declined.

Comment: Re:Facebook's application is poorly coded (Score 2, Informative) 370

by jjgm (#28479655) Attached to: Facebook VP Slams Intel's, AMD's Chip Performance Claims

That may be so. The newer drive may indeed have four times the raw read throughput. But how much larger is it? Five times.

And even more tellingly, look at the seek performance. I looked up the two drives you mentioned, and you'll find it's unchanged at 8.5ms. So we're seeking at the same speed, but over five times the data.

In practice, then, in terms of throughput per provisioned GB we are about 24% worse off, and in terms of seek time per megabyte we are FIVE times worse off today!

To illustrate what I mean, based on those numbers above: slurping 10TB off an idealised JBOD array of the newer drives in parallel would take 89 seconds; slurping 10TB off an idealised array of the older drives in parallel would take only 72 seconds, because each newer drive holds five times the data but reads it only four times as fast. A similar (but far worse) story applies to random seek time performance, especially for busy transaction systems.
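The arithmetic behind those figures can be sketched in a few lines. The drive specs below are assumed round numbers for illustration, not the actual models from the parent post - all that matters is the 5x capacity / 4x throughput ratio and the unchanged seek time:

```python
import math

# ASSUMED illustrative specs: the newer drive has 5x the capacity and
# 4x the sequential throughput of the older one, with average seek
# time unchanged at 8.5 ms.
OLD = {"capacity_gb": 300,  "mb_per_s": 100, "seek_ms": 8.5}
NEW = {"capacity_gb": 1500, "mb_per_s": 400, "seek_ms": 8.5}

def throughput_per_gb(d):
    """Sequential MB/s available per provisioned GB."""
    return d["mb_per_s"] / d["capacity_gb"]

def iops_per_gb(d):
    """Random seeks per second available per provisioned GB."""
    return (1000.0 / d["seek_ms"]) / d["capacity_gb"]

def jbod_slurp_seconds(total_gb, d):
    """Wall-clock time to stream total_gb off an idealised JBOD:
    just enough identical drives to hold the data, all read in
    parallel, so the array finishes when each drive has streamed
    its share."""
    drives = math.ceil(total_gb / d["capacity_gb"])
    return (total_gb / drives) * 1000.0 / d["mb_per_s"]  # GB -> MB

# Per-GB sequential throughput drops to 4/5 of the old figure (~0.8),
# and random-seek capacity per GB drops to 1/5 (~0.2).
seq_ratio = throughput_per_gb(NEW) / throughput_per_gb(OLD)
iops_ratio = iops_per_gb(NEW) / iops_per_gb(OLD)

# Slurping the same 10 TB therefore takes ~20-25% longer on the
# newer array, despite each individual drive being 4x faster.
slurp_ratio = jbod_slurp_seconds(10_000, NEW) / jbod_slurp_seconds(10_000, OLD)
```

The exact percentages depend on the real drives' spec sheets, but the shape of the result does not: as long as capacity grows faster than throughput, the fully parallel array gets slower per provisioned TB.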

One might challenge the exact figures, but it doesn't matter - the point is, drive size is an important gotcha in storage performance optimisation today, and it's because performance has not really kept pace with drive size. The issue is not offset by the bigger caches they're turning up with, although that helps for some workloads.

We haven't talked dollars. The cost is important, but that's another dimension. Let's keep this to engineering chatter.

So what happens in shops that need really high performance? Well, if it's an application with lots of random reads but with hotspots, then cache will do nicely. But for raw random write performance - i.e. heavy transaction-processing applications - it's gotta be more 15K RPM spindles at lower capacity. Or go crazy and solid state, but that's another party.

When some people discover the truth, they just can't understand why everybody isn't eager to hear it.