Comment Re:License Audit (Score 2) 57

Or do you think Microsoft desperately wants a share of that market?

Actually, Microsoft does. Because that's a heck of a lot of PCs, and if they are running Windows and Office, that's a heck of a lot of PCs not running Linux, OpenOffice or other software. Even if Windows and Office are pirated.

All the big commercial vendors pretty much say as much: it's better to have the software pirated than to have those users seek out the competition, whatever it may be.

So even a pirated copy of Windows makes a user less likely to try Linux instead. Because if they ever tried the competition, they might like it.

Comment Re:Oh yeah, almost forgot about Ebola... (Score 1) 70

We have no experience curing Ebola.

We have lots of experience trying to keep people alive while what's left of their immune system defeats the virus.

It helps if the people who catch it are fit and well before they catch it.

You know, most diseases aren't cured by drugs but by the immune system. The drugs just help out by making you feel better so you don't run yourself down and weaken your immune system. E.g., painkillers and all that make it easier to get rest, so you're not so worn down that your immune system is compromised.

And Ebola talk died down because people were freaking out over it when, in fact, they're far more likely to die of influenza than Ebola. Yes, the flu has killed an order of magnitude more people in the US than Ebola has worldwide.

Oh yeah, with the flu it also helps if you're fit and well when you catch it. The people who die from the flu are generally immunocompromised or otherwise vulnerable. Enterovirus D68 (a separate virus, not actually part of the influenza family) is particularly dangerous to children, and it has hit hard so far this season.

Ebola outcomes are better in the western world because we have access to clean drinking water. Ebola wreaks havoc by causing blood vessels to leak, so being able to get fluids into the body gives the patient a fighting chance of surviving.

Comment Re:Shyeah, right. (Score 1) 284

I used to work at an LTO manufacturer and asked why we never drove the older generations down into the SMB space. The answer is simple: the components are *really* expensive. The R/W head is the biggest component cost in the drive, probably accounting for 25% of it on its own, and you just can't push the price down much further; it costs what it costs. Also, the HUGE majority of these things go into libraries with hundreds of drives, thousands of slots, and robots that can move at upwards of 90 km/h.

Why? Is the head made of exotic materials that cost a lot?

I mean, a hard drive also has a read-write head that is tiny and made to fine precision, but the immense R&D that went into production, plus the cost and process optimizations forced by mass production, drove the price down so far that you can pick up a ton of storage for not a lot of money, like 2TB portable hard drives for under $100. And that neglects the fact that the mechanical parts of said hard drive are far tinier and hold tighter tolerances in order to stuff that much storage into a space smaller than a single tape.

The price is probably high because no one has ever bothered to scale production from thousands of units to millions.

Comment Re:Fuck That Shit (Score 1) 64

Fuck naming shit to appeal to the plebes and media. It's not a popularity contest. It's a fucking security vulnerability that needs to be patched. You don't get points for media mentions.

I know, I mean, if they hadn't called it "heartbleed" there would still be millions of easily exploitable servers and security appliances out there to rip data from. Instead they had to go and get media attention and force people to actually examine their systems and update them. After all, a few months later about 80% of vulnerable machines were patched.

And stuff like OpenVPN would be much easier to break into if people didn't force updates to their VPN appliances and stuff.

Comment Re:HDD Pros (Score 1) 438

HDDs are not as recoverable as you seem to think. I have several bricked drives to show for it. Plus there is a trade-off in that your HDD's chance of failure goes up dramatically over time no matter how little or how much you use it. Even keeping it on a shelf won't make it last longer. SSD failure mechanics are a very different beast. If your SSD is barely worn after 3 years of operation (and most will be), the failure rate will not be appreciably higher than when it was new. The chance of multi-bit failures eventually overwhelming the automatic scan/relocation (visible in SMART) will increase once appreciable wear occurs, but that wear is write-based, not time-based, and for most SSD users that means reliability can be maintained far longer than the 3 years one can normally depend on a HDD for (assuming it isn't one of those 5% of HDDs that fail every year anyway).

And, again... You don't make backups? Depending on the recoverability of your hard drive virtually guarantees that you will lose all your data one day.

-Matt

Comment Re:I like both (Score 1) 438

I hear this argument quite often and gotta ask... what, you don't have backups? When any of my storage dies I throw the drive away, stick in a new one, and restore from one of my two real-time backups (one on-site, one off-site). For that matter, I don't even trust any HDD that is over 3 years old; it gets replaced whether it reports errors or not. And I've had plenty of HDDs fail with catastrophic errors over the years. Relying on a HDD to fail gracefully is a mistake.

Another statistic to keep in mind is that SSD failure rates are around 1.5% per year, compared to 5% failure rates for HDDs. And, I suspect, since HDD technology has essentially hit a mechanical brick wall with regard to failure rates (if you still want to pay $80 for one), SSD failure rates (which are more a function of firmware) will continue to drop while HDD failure rates stay about the same from here on out. And that's assuming the HDD is powered on the whole time. Power down a HDD for a month and its failure rate goes up dramatically once you've powered it back on. HDDs can't even be reliably used for off-line backups; SSDs can. SSDs have a lot of room to get even better. HDDs just don't.

It is also a lot easier to run a SSD safely for many more years than a HDD simply by watching the wear indicator or the sector relocation count ramp (actual life depends on the write load), whereas a hard drive's life is related more to power-up time regardless of load. If I only have to replace my SSDs (being conservative) once every 5-7 years vs my HDDs once every 3 years, that cuts many costs out right there. I have yet to replace a single SSD, but have replaced several HDDs purchased after that first SSD was bought. Just looking at the front-end cost doesn't really tell the whole story. Replacement cost, lost opportunity cost, time cost (time is money). There are many costs that matter just as much.

In terms of speed, I think you also don't understand the real problem. The problem is not comparing the 100-200 MByte/sec sequential transfer rate of a HDD to the 500-550 MByte/sec of a SSD. The problem is that once the computer has to seek, the hard drive's 100-200 MBytes/sec drops to 20 MBytes/sec, and to 2 MBytes/sec in the worst case. The SSD, on the other hand, will still maintain ~400-550 MBytes/sec even doing completely random accesses. Lots of things can cause this... de-duplication, for example. Background scans. Background applications (Dropbox scans, security scans). Paging memory. Filesystem fragmentation. Game updates (fragmented data files). Whatever.
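The collapse falls out of simple latency arithmetic: once the per-request positioning cost dwarfs the transfer time, throughput craters. Here's a back-of-the-envelope sketch; the seek times, transfer rates, and request size are illustrative assumptions, not measurements of any particular drive:

```python
# Effective throughput when every request pays a positioning cost.
# All numbers below are illustrative assumptions, not benchmarks.

def effective_mb_per_s(access_ms: float, sequential_mb_s: float,
                       request_kb: float) -> float:
    """Throughput for random requests of request_kb each, given a
    per-request positioning cost (seek + rotational latency on a HDD,
    controller latency on a SSD) and a sequential transfer rate."""
    request_bytes = request_kb * 1024
    transfer_s = request_bytes / (sequential_mb_s * 1e6)
    total_s = access_ms / 1000 + transfer_s
    return request_bytes / total_s / 1e6

# 7200 rpm HDD: ~8 ms seek+rotation, 150 MB/s sequential, 64 KB requests
hdd = effective_mb_per_s(8.0, 150, 64)
# SATA SSD: ~0.05 ms access latency, 500 MB/s sequential, same requests
ssd = effective_mb_per_s(0.05, 500, 64)

print(f"HDD random: {hdd:.1f} MB/s, SSD random: {ssd:.1f} MB/s")
```

With these assumed figures the HDD lands in single-digit MB/s territory while the SSD barely notices, which is exactly the effect described above; shrink the request size and the HDD number gets even uglier.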

People notice the difference between SSDs and HDDs because of the above, and it matters even for casual users like, say, my parents, who mostly only mess with photos and videos. They notice it. It's a big deal. It's extremely annoying when a machine reacts slowly. The SSD is worth its weight in gold under those conditions. And machines these days (laptops and desktops certainly) do a lot more work in the background than they used to.

There are still situations where HDDs are useful. I use HDDs on my backup boxes and in situations where I need hundreds of gigabytes of temporary (but linear) storage... mostly throw-away situations where I don't care if a drive dies on me. But on my laptops and workstations it's SSD-only now, and they are a lot happier for it. For that matter, in a year or two most of our servers will likely be SSD-only as well. Only the big crunchers will need HDDs at all.

Nobody who has switched from a HDD to a SSD ever switches back. People will happily take a big storage hit ($150 2TB HDD -> $150 256GB SSD) just to be able to have that SSD. Not a whole lot of people need huge amounts of storage anyway with so much video and audio now being streamed from the cloud. For that matter, even personal storage is starting to get backed up 'on the cloud' and there is no need to have a completely local copy of *everything* (though I personally do still keep a local copy).

-Matt

Comment Re:What about long-term data integrity? (Score 2) 438

You might as well ask the same question about a hard drive. If you power down a hard drive and put it on a shelf for a year, there is a better than even chance that it will be dead when you try to power it up again, and an even higher chance that it will die within a few days of being powered back on.

A powered-down SSD that has been written to once should be able to retain data for ~10 years or so, longer if kept in a cool place. As wear builds up, the retention time drops; you can look up the flash chip specs for a more precise answer. A powered-up SSD should be able to retain data almost indefinitely, since the self-check will relocate failing sectors as they lose charge. However, in practical terms it also depends on how the drive firmware is stored: the drive will die when the firmware is no longer readable. But that is true for hard drives as well.

-Matt

Comment Re:Question (Score 1) 438

Hybrid drives do not use their meager flash to cache writes; the flash would wear out in an instant if they did. It's strictly useful for boot data and that is pretty much it, if a few seconds matter to you and you don't want to buy a separate SSD. For any real workload, the hybrid drive is a joke.

-Matt

Comment Re:Question (Score 1) 438

Never buy hybrid drives, period. You are just multiplying the complexity of the firmware (resulting in more bugs, as Seagate's earlier attempts at hybrid drives revealed), and decreasing the determinism of the failure cases. And there's no point. A hybrid drive has a *tiny* amount of flash on it. It's good for booting and perhaps holding a program or two, and that is pretty much it. For someone who does so little on their computer that it would actually fit on the flash portion of a hybrid, a hard drive will be almost as fast. For someone who uses the computer more significantly, the hybrid flash is too small to matter.

My recommendation is to use only a SSD for workstations and desktops as long as you don't need terabytes of storage. For your server, if you can't afford a large enough SSD, then a SSD+HDD combination (or SSD + HDD/RAID) works very well. In this situation you put the boot and swap space on the SSD, plus you cache HDD data on your SSD.

This is pretty much what we do on our systems now. The workstations and desktops are SSD-only, the servers are SSD + HDD(s).

The nice thing about this is that with, say, a 256G SSD on the server caching roughly ~200GB worth of HDD data, the HDDs do not need to be fast. We can just use 2.5" 2TB green drives. Plus we can use large swap-backed RAM disks and so on and so forth. Makes the servers scream.

-Matt

Comment Re:LOL (Score 1) 438

A 7200 rpm HDD can do 200-400 IOPS or so, semi-random accesses (normal database access patterns). A 15K HDD can do ~400-600 or so. Short-stroking a normal drive also gains you at least 100 IOPS (so, say 300-500 IOPS on a short-stroked 7200 rpm HDD). That's off the top of my head.

A SATA SSD, of course, can do 60000-100000 IOPS or so and a PCI-e SSD can do even more.
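Those rotational figures fall out of basic service-time arithmetic: one random I/O costs an average seek plus half a rotation. A quick sketch of the fully random worst case (the average seek times here are assumed typical values, and real semi-random or short-stroked workloads cut the seek portion, which is what pushes drives into the higher ranges quoted above):

```python
# Worst-case random IOPS for a rotating drive. Seek times below are
# assumed typical values, not specs for any particular model.

def random_iops(rpm: int, avg_seek_ms: float) -> float:
    """One service time = average seek + average rotational latency
    (half a revolution). IOPS is the reciprocal of that."""
    rotational_latency_ms = 60_000 / rpm / 2  # half a rev, in ms
    service_ms = avg_seek_ms + rotational_latency_ms
    return 1000 / service_ms

iops_7200 = random_iops(7200, 8.5)   # fully random, ~8.5 ms avg seek
iops_15k = random_iops(15000, 4.0)   # fully random, ~4 ms avg seek

print(f"7200 rpm: {iops_7200:.0f} IOPS, 15K rpm: {iops_15k:.0f} IOPS")
```

This lands well below the semi-random numbers above, which is the point: the formula is the floor, and locality (shorter seeks) is the only thing that lifts a spinning drive off it.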

-Matt

Comment Re:Reliability (Score 2) 438

Depends on the application. For a workstation or build box, we configure swap on the SSD.

The point is not that the build box needs to swap (not with 32G or more of RAM), but that having swap in the mix lets you make full use of your CPU resources. You can scale the build up to the point where its 'peaks' eat just a tad more RAM than you actually have (and thus page), which is fine, because the rest of the build then makes better use of the RAM and CPU that are there. So putting swap on a SSD actually works out quite nicely on a build box.
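The sizing logic above is just arithmetic: pick the largest parallel job count whose steady-state footprint fits in RAM, letting only the transient peaks spill to swap. A toy sketch, where every per-job memory figure is a made-up assumption for illustration:

```python
# Toy build-parallelism sizing: steady-state use must fit in RAM,
# and only the transient peak overflow may spill to swap.
# Per-job memory figures are illustrative assumptions.

def max_jobs(ram_gb: float, steady_gb_per_job: float,
             peak_gb_per_job: float, swap_gb: float) -> int:
    """Largest -j where (jobs * steady) fits in RAM and
    (jobs * peak - RAM) fits in swap."""
    j = 0
    while True:
        steady = (j + 1) * steady_gb_per_job
        peak_overflow = (j + 1) * peak_gb_per_job - ram_gb
        if steady > ram_gb or peak_overflow > swap_gb:
            return j
        j += 1

# 32 GB box: with no swap you must size for the 1.5 GB peaks;
# with 16 GB of SSD swap you can size for the 0.75 GB steady state.
no_swap = max_jobs(32, 0.75, 1.5, 0)
with_swap = max_jobs(32, 0.75, 1.5, 16)
print(no_swap, with_swap)
```

Under these assumptions the swap-backed box runs noticeably more jobs, even though almost none of that swap is touched outside the peaks, which is the whole trick.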

Similarly, for a workstation, the machine simply does not page enough that one has to worry about paging wearing out the SSD. You put swap on your SSD for another reason entirely... to allow the machine to hold onto huge amounts of data in virtual memory from open applications, and to allow the machine to get rid of idle memory (page it out) to make more memory available for active operations, without you as the user of the workstation noticing when it actually pages something in or out.

A good example of this is doing mass photo editing on hundreds of gigabytes of data. If the bulk storage is not a SSD, or if it is accessed over a network, that can cause problems. But if the program caches pictures ahead and behind and 'sees' a large amount of memory available, having swap on the SSD can improve performance and latency massively.

And, of course, being able to cache HDD or networked data on your SSD is just as important, so it depends how the cache mechanism works in the OS.

So generally speaking, there are actually not very many situations where you WOULDN'T want to put your swap on the SSD. On machines with large ram configurations, the name of the game is to make the most of the resources you have and not so much to overload the machine to the point where it is paging heavily 24x7. On machines with less ram, the name of the game is to reduce latency for the workload, which means allowing the OS to page so available ram can self-tune to the workload.

-Matt
