It is very easy to find SLC drives; you just have to look at enterprise drives instead of consumer drives. For example, OCZ offers this: http://www.oczenterprise.com/ssd-products/deneva-2-r-sata-6g-2.5-slc.html (50PB endurance...)
The firmware on most SSDs is stored in a dedicated SRAM chip on the board, but most drives are set up to stop functioning once S.M.A.R.T. shows the drive as failed or there are too many bad blocks to hold all of the data. This is why a 256GB drive holding 20GB of data will basically never go bad, but if it is constantly holding 250GB it will go bad very fast.
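The full-vs-empty point can be made concrete with a toy wear-leveling model (my own simplification, not from the comment; the endurance and write-rate numbers are assumptions):

```python
# Toy model: writes can only be spread across blocks that aren't pinned
# by static data, so wear per block grows as free space shrinks.
TOTAL_GB = 256
PE_CYCLES = 3000          # assumed program/erase endurance per block
DAILY_WRITES_GB = 20      # assumed host writes per day

def drive_lifetime_days(static_data_gb):
    """Days until the free blocks exhaust their P/E cycles (very rough)."""
    free_gb = TOTAL_GB - static_data_gb
    total_writes_gb = free_gb * PE_CYCLES  # writes land on free blocks only
    return total_writes_gb / DAILY_WRITES_GB

print(drive_lifetime_days(20))   # mostly empty drive: ~35,400 days
print(drive_lifetime_days(250))  # nearly full drive:  ~900 days
```

Real controllers also shuffle static data around, so this overstates the gap, but the direction matches the comment: the fuller the drive, the faster the remaining blocks wear out.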
It is mostly not the controllers, depending of course on which Agility/Vertex you are referring to. In non-sync NAND drives, most of the time it is the NAND developing bad blocks - those drives are cheap and not meant for high-usage scenarios, so if you use them that way, you probably won't have very good results. Generally, the drives that use first-generation Sandforce-based controllers, which did not have very efficient garbage collection, have problems too - if you don't enable TRIM, your drive will destroy its own NAND pretty quickly.
What it comes down to is that if a drive were something like $15 more expensive but had failure protection, the consumer market would simply not buy it. Right now we are really forced into making drives as cheap and as fast as possible, with reliability being a lesser concern. The issue is mostly that once a single SSD company makes a less reliable but faster and cheaper drive, it will outsell the companies that make slower, more expensive, more reliable drives, so everyone has to follow suit. There are lots of very reliable drives out there, though - they are just made as enterprise drives. They will have better controllers, more reliable firmware, and more over-provisioning, but will be much slower and cost roughly double that of a consumer drive.
What you are basically describing is an enterprise-class SSD. Many companies make them - they are normally over-provisioned by around 100% (a 120GB drive will actually have 240GB of NAND).
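As a quick sanity check on that math (a minimal sketch; the function name and the consumer-drive numbers are mine, not from the comment):

```python
def overprovisioning_pct(raw_nand_gb, usable_gb):
    """Spare NAND expressed as a percentage of user-visible capacity."""
    return (raw_nand_gb - usable_gb) / usable_gb * 100

# The enterprise case from the comment: 240GB of NAND behind a 120GB drive.
print(overprovisioning_pct(240, 120))  # → 100.0
# An illustrative consumer drive for contrast.
print(overprovisioning_pct(256, 240))  # ≈ 6.7
```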
Most SSDs are not designed for iMac use; it is basically something that is thrown in later (excluding the ones made to be built in). You could probably get a huge performance increase by updating your firmware and/or waiting for a better firmware release.
I am an engineer at an SSD company and I would like to vouch for this being a great explanation. Thank you.
That is insanely cheap, wow. Hop on that. One thing to note, though, is that this is the first-generation Sandforce controller, not the 2200 series in most of the drives discussed here.
And to make this more ridiculous, the Agility 3 is like the 5th fastest SATA drive OCZ makes.
I have probably 40TB of SSDs sitting on my desk at work, but zero at home, and don't need any. Right now, with prices on all HDDs and SSDs so high, I found that I didn't care about load times enough to invest another $200, so I just use an old 320GB SATA HDD. The benefits for me would be situational at best, though, since I tend to play lightweight games and never restart my computer.
Yeah, seriously, it's UCSD and that's how it's always been.
Tech.Luver writes: "The Telegraph reports: 'New claims that Leonardo da Vinci's The Last Supper contains a hidden image of a woman holding a child are provoking a storm of interest on the internet. The figure allegedly appears when the 15th-century mural painting is superimposed with its mirror image, and both are made partially transparent. According to Slavisa Pesci, an Italian amateur scholar, the resulting composite picture shows a figure clutching what appears to be a young child.'"
John Regehr writes: "Students, prospective students, and professors care about how their department is ranked relative to other departments. Current ranking schemes for computer science departments are subjective: they are compiled, for example, by asking department chairs to rank other departments. An alternative, objective way to rank research departments is the meta h index, which we have compiled for a number of computer science departments. The h index, a measure of productivity of an individual researcher, is "the number of papers with citation number higher or equal to h." So to have an h index of 5, one must have written 5 papers that are each cited at least 5 times. The meta h index, then, extends this idea to measure the number of researchers in a department with h index higher than or equal to h. Of course it is easy to find flaws in this way of measuring research departments, but it has the advantages of being simple and of having no parameters to tune. Also, perhaps surprisingly, the meta h index correlates rather well with the ranking produced by US News."
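The two measures described in the submission can be sketched in a few lines (my own implementation of the stated definitions; the function names are mine):

```python
def h_index(citations):
    """Largest h such that h of the papers each have >= h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i       # the i most-cited papers all have >= i citations
        else:
            break
    return h

def meta_h_index(researcher_h_indices):
    """Largest h such that h researchers each have an h index >= h."""
    # The meta h index applies the same rule one level up, so we can
    # reuse h_index with per-researcher h indices in place of citations.
    return h_index(researcher_h_indices)

print(h_index([10, 8, 5, 4, 3]))        # → 4
print(meta_h_index([12, 9, 7, 5, 3]))   # → 4
```

So a department whose researchers have h indices [12, 9, 7, 5, 3] gets a meta h index of 4: four of its researchers have an h index of at least 4.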
strredwolf writes "Despite generating over $12K in funds, well short of the $250K goal, the Tux 500 Project was able to secure a spot in the Indy 500 with driver Roberto Moreno piloting the Linux #77 Indy car. He's back in the pack in 31st place (only 5.5 MPH separates 31st place from 1st) but was able to secure it by re-qualifying with an average speed of 220.299 MPH. Will Moreno be able to pilot the penguin-tipped Indy car to victory next week at the 91st Indianapolis 500?"