
Comment Re: Quad Sockets: (Score 1) 138

OP specifically said quad socket motherboards hadn't been made since the Athlon 64 era (10+ years ago). I provided two examples of current generation quad socket servers that are on sale today.

Sure, Google and Facebook probably don't run quad socket servers (they're not built for that kind of thing), but your bank probably does.

Comment Re:Quad Sockets: (Score 2) 138

Fun fact: Quad sockets are SUPER-RARE.

Go ahead. Try and Google for a quad motherboard. They haven't made them since like... the Athlon 64-era Opterons.

Really? You should tell all the people that are making quad socket servers today.

Maybe you should try following your own advice.

Comment Re:UPS (Score 1) 236

The numbers I provided are actually generously overstated, as you'll only hit them when you're 100% loading everything.

When you're doing something useful (like playing a game), you don't have all 4+ CPU cores loaded 100%, nor do you have all 2+ GPUs 100% loaded, so their actual power usage is significantly lower than their max TDP. It is extremely difficult to put 100% load on a highly parallel resource like a multi-core CPU or (multiple) modern GPUs, and to do both at the same time (while doing anything useful) is exponentially more difficult, and extremely system specific. Even with a stationary target like a console, developers still struggle to fully utilize all of the hardware after 5+ years of experience with it.

Comment Re:UPS (Score 2) 236

Are you running quad SLI? No? Then your computer isn't using 700W of power. Typical gaming rigs these days rarely break 200W, even though they may be sold with 1000W (or more) PSUs.

Even with four GTX 980s in quad SLI, they would only be drawing 4*165W (= 660W) at peak. Even with a $1000 CPU (~130W TDP) and a dozen drives you're still going to be extremely hard pressed to get anywhere close to 700W.
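To put rough numbers on it, here's the TDP math for a fairly loaded single-GPU build. All figures are worst-case ceilings rather than typical in-game draw, and the per-drive and miscellaneous numbers are my own assumptions:

```python
# Peak (TDP) power budget for a single-GPU gaming rig.
# Real in-game draw sits well below these ceilings.
gpu_tdp = 165      # one GTX 980
cpu_tdp = 130      # high-end desktop CPU (~$1000 class)
drives  = 12 * 8   # a dozen drives at a generous 8 W each (assumed)
misc    = 50       # motherboard, RAM, fans (rough allowance)

peak = gpu_tdp + cpu_tdp + drives + misc
print(peak)  # 441 -- still nowhere near 700 W, even at full TDP
```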

Also, here's an 865W UPS for $180; that's pretty typical pricing, and it's hugely oversized for a single machine.

Comment Re:SSDs will outpace platter drives (Score 4, Interesting) 296

With a fraction of the energy usage, densities increasing, and hopefully a reversal in the recent trend towards less durability, SSDs will probably also overtake platter drives in price per terabyte within 5 years.

It's not so much a trend as an unavoidable problem with increasing SSD densities. As you shrink the process size to fit more bits in the same space, durability goes down. As you move from SLC to MLC to TLC to fit more bits in the same space, durability goes down again. Substantial technological advances will be required to produce a 20TB SSD with both acceptable durability and acceptable cost.

In comparison, shortscreen monitors (often mislabeled as widescreen) are a trend which has no logical or technical underpinning.

Comment Re:Ok seriously though ... (Score 1) 367

Sure. Kylix and Quake 2 are the first that come to mind (in terms of commercial software). But if you want to see something more GPL/Open Source originating, take, say, XFree86 from Slackware 4.0 and try to run it on Slackware 14. Same thing.

Now you're talking about something different. The OP specifically said upgrading a kernel (not a distribution). You can take the kernel from RHEL 6 and run it on RHEL 5 (in fact, this is exactly what Oracle does with OEL).

Userspace backwards compatibility is a whole different can of worms. For userspace, you're at the mercy of any libraries you dynamically link against, and few of them promise binary compatibility indefinitely. Your Linux-native hello world program compiled in 1991 will still run, unmodified, on today's distros, as it doesn't require any libraries. For more complex programs, you're looking at shipping local copies of the libraries you depend on, either via static linking or by bundling copies of the dynamically linked libraries. The latter option can even be done after the fact.
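As a sketch of the "ship local copies" approach, here's a hypothetical helper that pulls the resolved library paths out of ldd output. The parsing is an assumption about ldd's usual `name => path (address)` line format:

```python
import re

# Hypothetical helper: given `ldd` output for a binary, list the
# resolved library paths you'd bundle alongside it.
def bundled_libs(ldd_output: str) -> list[str]:
    # lines look like: "libfoo.so.1 => /usr/lib/libfoo.so.1 (0x...)"
    return re.findall(r"=>\s*(\S+)\s*\(0x", ldd_output)

sample = """\
    linux-vdso.so.1 (0x00007ffd12340000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f0012340000)
"""
print(bundled_libs(sample))  # ['/lib/x86_64-linux-gnu/libc.so.6']
```

At deploy time you'd copy those files into a lib/ directory next to the binary and point LD_LIBRARY_PATH at it from a wrapper script.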

Of course, if you still have the source, things are much easier. A simple recompile is often sufficient to fix any dynamic linking issues, since source compatibility is broken far less often than binary compatibility. While not every old Linux program may run out of the box, it should be fairly trivial to make most of them work on a modern distro.

Now, if you want to talk about running old programs on new versions of Windows, let's talk about IE6 on Windows 8, without using virtualization. Good luck with that!

Comment Re:Do the math (Score 1) 512

I run about 90% of the systems I manage in RAID 10. There are a few oddballs in there: some only support two drives, so those are RAID 1, and a few where I don't care about performance but do care about drive space run RAID 5/6. The real-world performance difference of RAID 10 over a single drive is very large. Assuming a four-drive RAID 10 array, expect between a 2x and 4x improvement in both random and sequential read/write performance.
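A back-of-envelope sketch of where the 2x-4x figure comes from: reads can be serviced by every spindle, while writes have to land on both halves of each mirror pair. The single-drive baseline numbers are assumptions:

```python
# Rough RAID 10 scaling for a 4-drive array, assuming a single
# 7200 RPM drive does ~150 MB/s sequential.
drives = 4
mirror_pairs = drives // 2

seq_read  = 150 * drives        # reads hit every spindle: ~4x
seq_write = 150 * mirror_pairs  # writes duplicated per mirror: ~2x

print(seq_read, seq_write)  # 600 300
```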

With that in mind, at $dayjob, we run a lot of VMs. Before SSDs were affordable, we could usually fit between 6 and 8 VMs on a single host (with 4x or 6x 7200 RPM drives in RAID 10) before they became unusably slow, with tons of time spent in disk wait. CPU time and memory usage were rarely the limiting factors. As soon as we started deploying SSDs, the only problem was running out of space. Right now we have over 50 VMs running on a single 8x SSD RAID 10 array, and it's blindingly fast.

There's a similar story with databases. Back before SSDs were affordable, we bought a machine with enough RAM to keep the entire database cached in memory, as it was just too slow to run off of 15k RPM SAS drives. On a fresh boot, we'd still need to precache the database into memory, and with said HDDs, that's a job that took something like 10 minutes and was almost entirely disk bound. We recently upgraded that machine to SSDs, and the same precache task now takes under 30 seconds.
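For a sense of scale, assuming a hypothetical ~90GB working set (the post doesn't state the actual database size), those timings imply roughly:

```python
# Implied sequential throughput from the precache timings above.
# The 90 GB working set is an assumption, not a stated figure.
db_gb = 90
hdd_seconds = 10 * 60   # ~10 minutes on the 15k SAS array
ssd_seconds = 30        # under 30 seconds on the SSDs

hdd_rate = db_gb * 1000 / hdd_seconds  # MB/s
ssd_rate = db_gb * 1000 / ssd_seconds  # MB/s
print(hdd_rate, ssd_rate)  # 150.0 3000.0
```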

As for home users, well, that's a different story. Personally I think it's downright irresponsible to run any system with a single drive (HDD or SSD), but the overwhelming majority of existing machines having only a single drive suggests my opinions on this matter are not widely held.

I guess my issue with your proposal is that I just can't see very many cases where it's practical. The low end of the market is dominated by laptops/desktops/tablets/whatever that cost under $500 and all have only a single drive; an extra $100 for another drive is going to be a dealbreaker most of the time (if another drive would even physically fit). The high end of the market, where performance is critical, is completely dominated by SSDs. You can read countless stories of big companies replacing full racks (42U) of HDDs with 1U or 2U of SSDs. I guess somewhere in the middle there is a small set of people who:

  • store a lot of non-media* files (over 500GB or 1TB)
  • are not overly concerned with performance
  • have the technical know-how to set up and maintain a RAID array
  • are significantly more concerned with reliability than most
  • are still relatively cost-sensitive

Those people would probably be better served by a 4x HDD RAID 10 array than a 2x SSD RAID 1 array.

* If you're storing media files on SSDs, you either have too much money to burn, or zero sense. They're huge and 99% of the time are read/written sequentially.

Comment Re:Do the math (Score 1) 512

Most workloads are in fact dominated by small, mostly random, reads and writes, which is why SSDs are just that much faster in the majority of cases.

If you're talking about mainly sequential reads, then the situation for the four RAID 1 HDDs is even grimmer. RAID 1 provides virtually no speedup for single-reader sequential reads, as doing so would require tons of seeks from the drives (which, as we know, HDDs fail at), or an extremely large file and a very large stripe size (plus a matching amount of memory for intermediate buffers). Most RAID 1 implementations don't even bother trying.

Having said that, HDDs are substantially better at sequential reads and writes than random ones, and if your workload really, truly is dominated by sequential operations (and it probably isn't), you can generally match the performance of a single SSD with a RAID10 of roughly a dozen HDDs (or a RAID0 of half a dozen, but say goodbye to reliability). This ignores the fact that a dozen of even the cheapest HDDs is substantially more expensive than an SSD, due to actual unit cost, the extra power draw, the extra physical space required for them, the extra HBA(s) to plug the drives into, the extra manpower to install/manage them and the extra manpower to deal with them when they die.

There are still reasons to use HDDs, but performance is absolutely not one of them. It's not even close. Take it from someone who manages several hundred HDDs + SSDs.

Comment Re:Do the math (Score 1) 512

More to the point, you can buy 4 4TB HDDs for $800 and setup a RAID1 and get a lot of the same read performance as an SDD while having heavy redundancy.

Where by "a lot of", you mean less than 1% of, right?

Typical IOPS on a 7200 RPM HDD is around 80. Typical IOPS on a garden variety SSD is 80,000. We'll be generous and assume linear speedup for the four HDDs, which gives us 320 IOPS, or 0.4% of the performance of a single SSD.
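The arithmetic, spelled out:

```python
# Random IOPS comparison: four 7200 RPM HDDs (generous linear
# scaling) versus one garden-variety SSD, using the figures above.
hdd_iops = 80
ssd_iops = 80_000

array_iops = 4 * hdd_iops        # 320, assuming perfect scaling
fraction = array_iops / ssd_iops

print(f"{fraction:.1%}")  # 0.4%
```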
