Comment Re:Do the math (Score 1) 512

I run about 90% of the systems I manage in RAID 10 (there are a few oddballs in there: some only support two drives, so those are RAID 1, and there are a few where I don't care about performance but do care about drive space, so those run RAID 5/6). The real-world performance difference of RAID 10 over a single drive is very large. Assuming a four-drive RAID 10 array, expect between a 2x and 4x improvement in both random and sequential read/write performance.
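For anyone who wants to sanity-check that 2x-4x figure, here's a rough back-of-the-envelope sketch in Python (the per-drive numbers are assumptions; real results depend heavily on the controller, workload, and queue depth):

```python
# Idealized RAID 10 scaling estimate for a 4-drive array.
# Assumption: reads can be served by either member of a mirror (~N x best case),
# while writes must hit both members of each mirror (~N/2 x).

def raid10_estimate(n_drives, drive_read_iops, drive_write_iops):
    pairs = n_drives // 2
    return {
        "read_iops": n_drives * drive_read_iops,   # all spindles can serve reads
        "write_iops": pairs * drive_write_iops,    # each write lands on a whole pair
    }

# Hypothetical 7200 RPM drive: ~80 random IOPS in either direction.
print(raid10_estimate(4, 80, 80))  # {'read_iops': 320, 'write_iops': 160}
```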

With that in mind, at $dayjob, we run a lot of VMs. Before SSDs were affordable, we could usually fit between 6 and 8 VMs on a single host (with 4x or 6x 7200 RPM drives in RAID 10) before they became unusably slow, with tons of time spent in disk wait. CPU time and memory usage were rarely the limiting factors. As soon as we started deploying SSDs, the only problem was running out of space. Right now we have over 50 VMs running on a single 8x SSD RAID 10 array, and it's blindingly fast.
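To illustrate why disk wait was the ceiling, here's the kind of napkin math involved (the per-VM IOPS figure below is an assumption for the sketch, not a measurement from our environment):

```python
# How many VMs fit before the array is saturated with random I/O?
# All figures are illustrative assumptions, not benchmarks.

def vm_capacity(array_iops, iops_per_vm=50):
    return array_iops // iops_per_vm

hdd_array = 6 * 80        # 6x 7200 RPM drives, optimistic linear scaling
ssd_array = 8 * 80_000    # 8x commodity SSDs, same optimistic scaling

print(vm_capacity(hdd_array))  # ~9 -- in the 6-8 range once writes eat into it
print(vm_capacity(ssd_array))  # ~12800 -- you run out of space long before IOPS
```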

There's a similar story with databases. Back before SSDs were affordable, we bought a machine with enough RAM to keep the entire database cached in memory, as it was just too slow to run off of 15k RPM SAS drives. On a fresh boot, we'd still need to precache the database into memory, and with said HDDs, that's a job that took something like 10 minutes and was almost entirely disk bound. We recently upgraded that machine to SSDs, and the same precache task now takes under 30 seconds.
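To put those precache times in perspective, here's the implied throughput (the database size is a placeholder I picked for illustration; the real figure isn't given here):

```python
# Effective sequential throughput implied by the precache times.
# The 100 GB working set is an assumed figure, purely for illustration.

db_size_gb = 100

def throughput_mb_s(size_gb, seconds):
    return size_gb * 1024 / seconds

print(f"15k SAS HDDs: {throughput_mb_s(db_size_gb, 10 * 60):.0f} MB/s")  # ~171 MB/s
print(f"SSDs:         {throughput_mb_s(db_size_gb, 30):.0f} MB/s")       # ~3413 MB/s
```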

As for home users, well, that's a different story. Personally, I think it's downright irresponsible to run any system with a single drive (HDD or SSD), but the fact that the overwhelming majority of existing machines have only a single drive suggests that my opinion on this matter is not widely held.

I guess my issue with your proposal is that I just can't see very many cases where it's practical. The low end of the market is dominated by laptops/desktops/tablets/whatever that cost under $500 and all have only a single drive, as an extra $100 for another drive is going to be a dealbreaker most of the time (if another drive would even physically fit). The high end of the market, where performance is critical, is completely dominated by SSDs. You can read countless stories of big companies replacing full racks (42U) of HDDs with 1U or 2U of SSDs. I guess somewhere in the middle there is a small set of people who:

  • store a lot of non-media* files (over 500 GB or 1 TB)
  • are not overly concerned with performance
  • have the technical know-how to set up and maintain a RAID array
  • are significantly more concerned with reliability than most
  • are still relatively cost-sensitive

Those people would probably be better served by a 4x HDD RAID 10 array than a 2x SSD RAID 1 array.
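For a rough sense of why that middle group leans toward HDDs, here's a cost-per-usable-terabyte sketch (the drive prices are ballpark assumptions, not quotes):

```python
# Usable capacity and cost per usable TB for the two setups above.
# Prices are rough, assumed figures.

def usable_tb_raid10(n_drives, tb_each):
    return n_drives * tb_each / 2   # mirrored pairs: half the raw capacity

def usable_tb_raid1(n_drives, tb_each):
    return tb_each                  # every drive holds the same data

hdd_cost = 4 * 150   # 4x 4 TB HDDs at an assumed ~$150 each
ssd_cost = 2 * 500   # 2x 1 TB SSDs at an assumed ~$500 each

print(hdd_cost / usable_tb_raid10(4, 4))  # ~$75 per usable TB (8 TB usable)
print(ssd_cost / usable_tb_raid1(2, 1))   # ~$1000 per usable TB (1 TB usable)
```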

* If you're storing media files on SSDs, you either have too much money to burn, or zero sense. They're huge and 99% of the time are read/written sequentially.

Comment Re:Do the math (Score 1) 512

Most workloads are in fact dominated by small, mostly random, reads and writes, which is why SSDs are just that much faster in the majority of cases.

If you're talking about mainly sequential reads, then the situation for the four RAID1 HDDs is even grimmer. RAID1 provides virtually no speedup for single-reader sequential reads, as doing so would require tons of seeks from the drives (which, as we know, HDDs fail at), or an extremely large file and a very large stripe size (plus a matching amount of memory for intermediate buffers). Most RAID1 implementations don't even bother trying.

Having said that, HDDs are substantially better at sequential reads and writes than random ones, and if your workload really, truly is dominated by sequential operations (and it probably isn't), you can generally match the performance of a single SSD with a RAID10 of roughly a dozen HDDs (or a RAID0 of half a dozen, but say goodbye to reliability). This ignores the fact that a dozen of even the cheapest HDDs is substantially more expensive than an SSD, due to actual unit cost, the extra power draw, the extra physical space required for them, the extra HBA(s) to plug the drives into, the extra manpower to install/manage them and the extra manpower to deal with them when they die.
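Here's roughly where the "dozen HDDs" figure comes from (the per-device throughput numbers are assumptions, and real arrays rarely scale this cleanly):

```python
import math

# How many cheap HDDs does it take to match one SSD on sequential throughput?
# Assumed figures: ~500 MB/s for a SATA SSD, ~100 MB/s for a bargain HDD.

ssd_mb_s = 500
hdd_mb_s = 100

raid0_drives = math.ceil(ssd_mb_s / hdd_mb_s)   # every drive contributes
raid10_drives = 2 * raid0_drives                # mirrored, so double it

print(raid0_drives, raid10_drives)  # 5 and 10 -- call it half a dozen and a dozen
```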

There are still reasons to use HDDs, but performance is absolutely not one of them. It's not even close. Take it from someone who manages several hundred HDDs + SSDs.

Comment Re:Do the math (Score 1) 512

More to the point, you can buy 4 4TB HDDs for $800 and set up a RAID1 and get a lot of the same read performance as an SSD while having heavy redundancy.

Where by "a lot of", you mean less than 1% of, right?

Typical IOPS on a 7200 RPM HDD is around 80. Typical IOPS on a garden-variety SSD is 80,000. We'll be generous and assume linear speedup for the four HDDs, which gives us 320 IOPS, or 0.4% of the performance of a single SSD.
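The arithmetic, spelled out with those same round numbers:

```python
# Four 7200 RPM HDDs vs. one commodity SSD, random IOPS.
hdd_iops = 80
ssd_iops = 80_000

array_iops = 4 * hdd_iops              # generous linear scaling
print(array_iops)                      # 320
print(f"{array_iops / ssd_iops:.1%}")  # 0.4%
```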

Comment Re: Foreshadowing (Score 1) 376

Well I might be the odd man out among tech-savvy users, but I run most applications maximized on my 24" screen...

This is frustratingly common, and it is incredibly painful when I see other people do this. Currently I have 19 windows visible on the current virtual desktop and a total of 76 windows open across 5 virtual desktops. Needless to say, alt-tabbing through 76 windows doesn't cut it.

Comment Re:I just had this conversation with a coworker: (Score 1) 547

I expect to get modded down, but what's so bad about not having to keep track of a silver disk to play a game? Steam has that model.

Because when the mothership (i.e., Valve or Microsoft) decides that you're no longer allowed to play said game, you're no longer allowed to play said game. Make no mistake, that day WILL come; the only question is when.

Comment Re:Steambox (Score 1) 435

I don't care about second hand games, I'd rather buy a new one.

Steam however is pretty much the worst possible thing that could have happened to gaming in a long time. Not only is it a massive single point of failure, but it forces DRM on every game distributed through it. On top of that, it is increasingly common for games to be distributed exclusively on Steam, even when the developer of said game isn't Valve. However, that's not even the worst part. The worst part is that so many people not only turn a blind eye to the fundamental problems of Steam, but that they treat it as some sort of panacea of gaming.

Comment Re:Specs for the interested (Score 1) 168

I don't really understand the market for something like this either. When the S1200 was launched, Intel was careful to point out that if you try to scale it up as a cheap alternative to E5/E7 Xeons, the economics and power consumption of the S1200 (let alone the complexity of managing an order of magnitude more servers) are not favourable. Totally understandable, as Intel would be foolish to cannibalize its own Xeon market.

Having said that, I do like the S1200, but more for something like a low-traffic VPN gateway, where you want IPMI (which is orthogonal to the actual CPU, but because the S1200 is positioned as a server chip, it will be easy to find alongside one) and the added reliability of ECC memory, but really won't use any of the extra horsepower or expandability (and cost and power usage) you'd get from a real Xeon.

Comment Re: Adoption by Mass Market? (Score 1) 301

The peak transfer rate for the mini-SAS interface is 3 Gb/s (3 gigabits, not bytes, per second); this is an absolute maximum of 375 MB/sec.

I'm sorry, but you're wrong. Have a look at this review, for example.

Each mini-SAS cable provides four lanes of SAS (3 Gbit/s), SAS2 (6 Gbit/s), or SAS3 (12 Gbit/s), depending on the HBA in use. That equates to 12 Gbit/s, 24 Gbit/s or 48 Gbit/s per cable. Also, with SAS2 being out since 2009, it's pretty hard to even find a SAS1 card anymore.
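For the skeptical, the per-cable math looks like this (raw line rate only; real payload throughput is somewhat lower due to encoding and protocol overhead):

```python
# Aggregate bandwidth of a 4-lane mini-SAS cable, per SAS generation.
lanes = 4
for name, gbit_per_lane in [("SAS", 3), ("SAS2", 6), ("SAS3", 12)]:
    total_gbit = lanes * gbit_per_lane
    print(f"{name}: {total_gbit} Gbit/s per cable (~{total_gbit / 8 * 1000:.0f} MB/s raw)")
```

The 375 MB/sec figure is what a single SAS1 lane works out to; it only applies if you ignore the other three lanes in the cable.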
