The consumer-grade SSD in my laptop can happily handle 200-300MB/s of sustained writes (and, simultaneously, 200MB/s of sustained reads). If you're doing linear writes, then you're the optimal workload for wear levelling. You'll be hard pressed to find a drive that isn't guaranteed for 5 years of writes at the maximum throughput the drive can handle.
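For scale, here's the back-of-the-envelope arithmetic that claim implies (a sketch only; the 300MB/s figure is just my drive's, not a general spec):

    # How much data do 5 years of sustained writes at 300MB/s amount to?
    seconds = 60 * 60 * 24 * 365 * 5
    written = 300e6 * seconds
    print(written / 1e15, "PB")   # ~47 PB over the warranty period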
Although, as other posters have pointed out, you'll get better sequential write speed and reliability from a RAID array of slower disks.
Now what about storage durability? With 3 bits per cell, how long before the data fades?
I was under the impression that the controller would handle this. Cells are typically marked as dead once their thresholds have shifted to the point where they can't be guaranteed to hold their contents for a year (there was an interesting paper at EuroSys this year about extending the lifespan by using these cells for short-lived data and exposing that functionality to the OS). If data has sat unmodified on a cell for long enough that its integrity is in danger, the controller uses the same mechanism as wear levelling to read it and write it back (either in the same place or somewhere else). Most of the time, this happens as part of normal wear levelling anyway: unmodified data are moved around to sit on cells that have been rewritten a few times, spreading the wear onto some of the cells that were only written once.
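To make that refresh policy concrete, here's a toy model of it in Python. The age threshold, data structures, and block-picking rule are all invented for illustration; real controller firmware is proprietary and far more involved:

    import time

    REFRESH_AGE = 180 * 24 * 3600  # hypothetical: refresh data untouched for ~6 months

    class Block:
        def __init__(self):
            self.erase_count = 0      # wear on this block
            self.last_written = None  # None means the block is free
            self.data = None

    def refresh_pass(blocks):
        # One sweep of the controller's housekeeping loop.
        now = time.time()
        free = [b for b in blocks if b.last_written is None]
        for b in blocks:
            if b.last_written is None or now - b.last_written < REFRESH_AGE:
                continue
            # Data is approaching its retention limit: read it and write it
            # back. Following the static wear-levelling idea above, park the
            # cold data on the most-worn free block; the lightly-worn block
            # it vacates is then free to absorb writes of hot data.
            target = max(free, key=lambda f: f.erase_count, default=b)
            target.data, target.last_written = b.data, now
            target.erase_count += 1
            if target is not b:
                free.remove(target)
                b.data, b.last_written = None, None
                free.append(b)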
Tape was never alive for consumers
Not true. In the '90s, you could buy a tape drive to back up your £120 hard drive: £100 for the drive and £20 for a tape. I remember quite a few companies including tape drives on their more expensive consumer machines for exactly this reason. The tapes have stayed at about that price, but now the drive is £1000 - and that's a single drive, not a tape library. That doesn't just price it out of the market for consumers, it does for small businesses too. It won't be long before it's also too expensive for medium-sized businesses. And for very large companies like Google, it's also too expensive, because the bandwidth to tape is too slow unless you buy so many drives that the cost is prohibitive.
Also, we use raw storage in the context of _individual_ incompressible backup sets, not backup data at scale, because very few places back up a high ratio of incompressible data overall.
I'm not convinced that's true. At home, my NAS uses compression, so the raw capacity of the tapes is likely the relevant one, unless the tape drive somehow manages to recompress lz4-compressed blocks and gain a benefit (not entirely impossible, as lz4 is optimised for speed rather than ratio, but pretty unlikely). At work, the NetApp filer that the tape backups run from also compresses and deduplicates online, so there's not much redundancy left there either.
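As a quick sanity check on the recompression point (using zlib, which ships with Python, rather than lz4 - the principle is the same):

    import zlib

    text = b"backup data " * 100_000          # highly compressible input
    once = zlib.compress(text)
    twice = zlib.compress(once)               # recompressing compressed output
    print(len(text), len(once), len(twice))   # the second pass gains ~nothing

Compressed output looks close to random from the second compressor's point of view, which is why compressing at the filer and again at the tape drive doesn't stack.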
What's the cost of doubling your storage capacity with either technology, for a few iterations? It's buy more tapes vs. $2&%fhqwgads!!1
Not really, unless you're talking about longer backup cycles. With tape, the backup time can quite quickly become a bottleneck, so you end up needing a second jukebox.
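For a sense of scale (assuming LTO-5's nominal 140MB/s native rate - check your own drive's figures):

    # Rough backup-window estimate for a single drive.
    backup_set = 4e12        # a modest 4TB backup set
    drive_rate = 140e6       # bytes/second, LTO-5 native
    print(backup_set / drive_rate / 3600, "hours")   # ~7.9 hours

And that's assuming you can keep the drive streaming at full speed the whole time; if you can't feed it fast enough, throughput gets worse.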
I'd love to use tape for home use. I have a NAS that I back a couple of laptops up onto. It has 3x2TB drives in a RAID-Z configuration with compression and deduplication enabled for the backup volumes. If I could get an eSATA tape drive with 2-4TB cartridges, then I could easily back that up and store the tapes somewhere else. LTO-5 / LTO-6 would do the job. LTO-5 tapes are pretty cheap now and LTO-6 isn't too bad, but the drives are insanely expensive. For the price of the drive, I could buy two more NAS boxes with the same size disks, stick them in different people's houses, and zfs send to them periodically for backups - and still have enough money left over to pay for their electricity consumption for the next 5 years.
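The zfs send idea is roughly this (a sketch only: the pool, dataset, and host names are made up, and a real script would want incremental sends and error handling):

    import datetime
    import subprocess

    snap = "tank/backups@" + datetime.date.today().isoformat()
    remote = "nas.friend.example"  # hypothetical offsite box

    subprocess.run(["zfs", "snapshot", snap], check=True)
    # Stream the snapshot over ssh into the remote pool.
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(["ssh", remote, "zfs", "receive", "-F", "tank/offsite"],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()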
If LTO-7, with its planned 6.4TB tapes, launches with a consumer-oriented drive, then I might consider being an early adopter, just buying a couple of tapes initially and assuming that the price will go down in a few years. It won't, though. And we know from history that any industry that concentrates on the high-margin high end of the market eventually sees that market shrink, as the quality of cheaper low-end equivalents gradually improves until no one can justify the expensive option any more (see SGI, or the US steel industry, for examples).
However, there is another issue that is mostly unrelated: the U.S. is less densely populated than most "Western" countries, and the cost of infrastructure for providing comparable service is provably higher.
And that's exactly the sort of thing that the grandparent is talking about. The vast majority of the US population lives on the coasts, near the big cities, where the population density is significantly higher than in most of Europe, yet the telephone networks are inferior. Nationwide population-density statistics for the USA are skewed by the enormous areas where basically no one lives. If you focus on the areas where 95% of the population lives, the US and most of the EU have quite similar population densities. Both can suck for Internet access if you're in the other 5%, but the US also doesn't do a very good job for the 95%.
Not quite, but we're getting there. This is part of the reason why lots of people are moving to smartphones and tablets as their primary computing platforms: something with the computing power and memory of a laptop from 5 years ago is ample for their needs. If it can browse the web, play back music and video, send and receive emails, and edit basic office documents, then that's enough for a massive chunk of the population. It's not enough for everyone, and some of the people that it's not enough for have very deep pockets.
I was recently talking to someone at ARM about Moore's law and how it applied to different market segments. Moore's law, in its original formulation, says that the number of transistors you can get on an IC for a fixed cost doubles every 12 months. In desktop processors, that's meant that the price has stayed roughly constant while the number of transistors has doubled. In the microcontroller world, they've been using about half of the Moore's Law dividend to increase transistor count and half to reduce cost. A lot of customers would rather have cheaper microcontrollers than faster ones, and getting ones that are a bit faster and a bit cheaper every generation is a clear win (faster reduces development costs, cheaper reduces production costs).

I just got a Cortex M3 prototyping board. It's got 64KB of SRAM, 512KB of Flash, and a 100MHz 3-stage pipeline. That's an insane amount of processing power and storage in comparison to the microcontrollers of 20 years ago, but it's nowhere near as big a jump as mainstream CPUs have made. It used to be that a microcontroller was a CPU from 10 years earlier (that's about how long it took the Z80, for example, to go from being a CPU in home computers to being an embedded microcontroller), but the M3 isn't even as powerful as a MIPS chip from 1993, not by a long way. The M0 has about the same transistor count as the very first ARM chip back in the mid-'80s.
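One way to make the "half the dividend" split concrete (my interpretation with illustrative numbers, not ARM's figures): if each process generation doubles transistors-per-dollar, spending half of that on more transistors and half on lower cost means each factor moves by √2 per generation:

    # Each generation: 2x transistors per dollar, split evenly between
    # more transistors and a lower price (illustrative numbers only).
    transistors, cost = 1.0, 1.0
    for gen in range(1, 6):
        transistors *= 2 ** 0.5   # ~1.41x more logic
        cost /= 2 ** 0.5          # ~0.71x the price
        print(gen, round(transistors, 2), round(cost, 2))

After five generations that's about 5.7x the transistors at roughly a fifth of the price, and the product of the two still tracks the full 32x Moore's law improvement.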
Happiness is twin floppies.