Haswell is a laptop/desktop/server microarchitecture, but Intel doesn't care very much about the desktop anymore, so expect little press coverage of that angle.
I don't care about second-hand games; I'd rather buy a new one.
Steam, however, is pretty much the worst thing that could have happened to gaming in a long time. Not only is it a massive single point of failure, but it forces DRM on every game distributed through it. On top of that, it is increasingly common for games to be distributed exclusively on Steam, even when the developer isn't Valve. And that's not even the worst part. The worst part is that so many people not only turn a blind eye to Steam's fundamental problems, but treat it as some sort of panacea for gaming.
I don't really understand the market for something like this either. When the S1200 was launched, Intel was careful to point out that if you try to scale it up as a cheap alternative to E5/E7 Xeons, the economics and power consumption of the S1200 (let alone the complexity of managing an order of magnitude more servers) are not favourable. Totally understandable, as Intel would be foolish to cannibalize its own Xeon market.
Having said that, I do like the S1200, but more for something like a low-traffic VPN gateway: a box where you want IPMI (which is orthogonal to the CPU itself, but given the S1200's positioning as a server chip, will be easy to find alongside it) and the added reliability of ECC memory, but where you really won't use any of the extra horsepower or expandability (or pay the cost and power draw) of a real Xeon.
The peak transfer rate for the mini-SAS interface is 3 Gbit/s (3 gigabits, not bytes, per second); that's an absolute maximum of 375 MB/sec.
I'm sorry, but you're wrong. Have a look at this review, for example.
Each mini-SAS cable provides four lanes of SAS (3 Gbit/s), SAS2 (6 Gbit/s), or SAS3 (12 Gbit/s), depending on the HBA in use. That equates to 12 Gbit/s, 24 Gbit/s, or 48 Gbit/s per cable. Also, with SAS2 having been out since 2009, it's pretty hard to even find a SAS1 card anymore.
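If anyone wants to sanity-check the arithmetic, it's a one-minute exercise. Quick Python sketch (raw line rates only; I'm deliberately ignoring encoding overhead, which knocks the usable numbers down a bit):

    # Sanity check on the per-cable math above. These are raw line rates;
    # usable throughput is lower after encoding overhead (8b/10b for
    # SAS1/SAS2, 128b/150b for SAS3).
    LANES_PER_CABLE = 4

    for gen, gbit_per_lane in (("SAS1", 3), ("SAS2", 6), ("SAS3", 12)):
        total_gbit = LANES_PER_CABLE * gbit_per_lane
        total_mb = total_gbit * 1000 / 8  # gigabits -> megabytes
        print(f"{gen}: {total_gbit} Gbit/s per cable (~{total_mb:.0f} MB/s raw)")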
Compare a Thunderbolt cable to a Cisco 10 gig copper cable and tell me Thunderbolt is overpriced.
Sure, but even Denon's $500 ethernet cable looks like a great deal compared to Cisco gear.
What could I connect this to?
Several RAID arrays...
I wouldn't suggest it. It'll only take two SSDs to saturate a Thunderbolt bus (or four SSDs with 20 Gbit Thunderbolt).
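Back-of-envelope, assuming a typical fast SATA consumer SSD at ~550 MB/s (my figure, not anything official), the raw numbers work out as below; real-world Thunderbolt throughput lands below the raw rate once protocol overhead is accounted for, which is why two fast drives are enough:

    # Raw bus rate vs. typical SATA SSD throughput (550 MB/s is an
    # assumption for a fast consumer drive; protocol overhead ignored).
    SSD_MB_S = 550

    for name, gbit in (("10 Gbit Thunderbolt", 10), ("20 Gbit Thunderbolt", 20)):
        raw_mb_s = gbit * 1000 / 8
        print(f"{name}: {raw_mb_s:.0f} MB/s raw = {raw_mb_s / SSD_MB_S:.1f} SSDs' worth")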
Prior to this you needed really expensive Fibre Channel equipment to deliver the same kind of performance.
No, not really. You can get an 8-bay enclosure (like this) with SFF-8088 connectivity for half a grand and pull 4 gigabytes per second read/write.
Which is unfortunate. That was the main reason I opened the PDF.
Space is at an extreme premium in those drives. There's a reason they feel so heavy/dense.
I don't know what SSDs you've been using, but I've never picked up an SSD (OCZ Vertex 2/3, Intel X25-M/320/330/335/510/520) that didn't feel light and sound nearly hollow.
In the same way that SSDs help virtually all workloads in a big way, so does more RAM, by functioning as an extended disk cache. All modern operating systems do this transparently, possibly even Windows these days. Unless your entire data set (including the stuff stored on disk) already fits in RAM, more RAM will help. In light of the 10T disk array, it's very likely that his dataset is quite large, and any time those CPUs need to hit the disk for data, they're going to be starved.
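You can watch the page cache do its thing with a trivial test: read a large file twice and time both passes. Minimal sketch below (the path is a placeholder; point it at any file of a few hundred MB that isn't already cached):

    import time

    def timed_read(path):
        """Read the whole file in 1 MB chunks, return elapsed seconds."""
        start = time.time()
        with open(path, "rb") as f:
            while f.read(1 << 20):
                pass
        return time.time() - start

    PATH = "/tmp/bigfile"  # placeholder: any large, not-yet-cached file
    print(f"cold read: {timed_read(PATH):.2f}s (from disk)")
    print(f"warm read: {timed_read(PATH):.2f}s (from the page cache)")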
With dual E5s, you can have up to 256G of RAM for a linear cost increase from 64G (or 512G for a very large premium).
2x eight-core, 64 GB RAM, 9.3 TB RAID5, Quadro 6000 and 30" + 24" IPS screens
You dropped $2200*-$4200** on CPUs, but only put 64G ($500) of RAM in the machine? Cheapskate.
- * Xeon E5-2650 = $1107
- ** Xeon E5-2690 = $2061
Not all Xeons have hardware virtualization. Only some of the most expensive chips have it, and even then it can be spotty.
Not true. Every Xeon since 2006 has shipped with VT-x support. Look at the Xeon 5030, for example. Absolute bottom of the line ($150 at launch) from 2006, and it supports VT-x.
You're probably thinking of Intel's desktop line, where they do artificially hobble large swaths of their CPUs with respect to VT-x.
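If you want to check a specific box rather than trust the spec sheets, on Linux the CPU advertises VT-x via the 'vmx' flag in /proc/cpuinfo. Quick sketch (note the flag reflects the silicon; firmware can still have VT-x disabled, in which case KVM will complain when you try to use it):

    # Check for VT-x on Linux via the CPUID flags the kernel exposes.
    # ('vmx' is Intel's flag; AMD's equivalent is 'svm'.)
    with open("/proc/cpuinfo") as f:
        flags = set()
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    print("VT-x (vmx) present" if "vmx" in flags else "no vmx flag")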
Yes, that is the disgustingly awful part about C-11, but you missed the upside:
The government eventually arrived at a trade-off that most Canadians would make: a tougher provision to target sites that facilitate infringement (the law already allows rights holders to do this) in return for a full cap on liability for non-commercial infringement. This applies not only to individuals (likely bringing to an end the prospect of file sharing lawsuits in Canada) but to any non-commercial entity, including educational institutions and libraries (who may adopt more aggressive interpretations of the law with less risk of liability).
Emphasis mine, see http://www.michaelgeist.ca/content/view/6544/125/
Drivers, installed base, drivers, familiar windows interface, drivers, most users can barely power their machine on much less install linux, drivers, forget installing linux software...see comment before the last comment, drivers, lack of vendor support, and drivers.
Oh did I mention drivers?
You play weird video games. Personally, I like playing the "my computer works already, I didn't have to hunt down twenty drivers from twenty different sites and make sure I kept them all up to date individually" game; that's why I already use Linux (and have for nearly a decade).
Just wondering: Is there a point (or is this close to it) where, using HDDs and certain RAID configurations, you can match or beat SSD speed while getting better redundancy and larger capacity from cheaper drives? What is the main application these excel at? I assume power would be one, and cached content on webservers? Help me understand.
You'd need several dozen hard drives to even approach the IOPS of a single consumer-level SSD. The SSD wins so many times over it's not funny.
Now, if you're talking about sequential read/write speeds, that's a whole different matter. You'd need roughly 3-4 hard drives in RAID 0 (no redundancy) to match the typical sequential read/write speeds of an SSD; double that figure for RAID 10. At that point, the raw cost of the hard drives far exceeds that of the SSD, and that's ignoring the extra SATA ports, cooling, physical space, and additional drive failures you'd need to deal with. So the SSD wins again, hands down.
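For anyone who wants the numbers behind both comparisons, here's the back-of-envelope version. All figures are ballpark assumptions on my part: ~150 IOPS and ~150 MB/s for a 7200 RPM drive, ~40,000 IOPS and ~500 MB/s for a consumer SATA SSD:

    import math

    HDD_IOPS, HDD_MB_S = 150, 150      # ballpark 7200 RPM hard drive
    SSD_IOPS, SSD_MB_S = 40_000, 500   # ballpark consumer SATA SSD

    print(f"drives to match SSD random IOPS: {math.ceil(SSD_IOPS / HDD_IOPS)}")
    print(f"drives to match SSD sequential:  {math.ceil(SSD_MB_S / HDD_MB_S)} (RAID 0)")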
Now, say you need to store more than roughly 200 gigabytes of data and performance doesn't matter at all; in that case, hard drives will be more cost-effective than SSDs.
Basically, hard drives excel at bulk storage of stuff where performance doesn't matter. SSDs excel at everything else.