Start-up Claims SSD Achieves 180,000 IOPS
Lucas123 writes "Three-year-old start-up Pliant Technology today announced the general availability of a new class of enterprise SAS solid state drives that it claims can achieve up to 180,000 IOPS, without using any cache, at sustained read/write rates of 500MB/sec and 320MB/sec, respectively. The company also claims an unlimited number of daily writes to its new flash drives, guaranteeing 5 years of service with no slowdown. 'Pliant's SSD controller architecture is not vastly different from those of other high-end SSD manufacturers. It has twelve independent I/O channels to interleaved single-level cell (SLC) NAND flash chips from Samsung Corp. The drives are configured as RAID 0 for increased performance.'"
/me gets out the tub o' salt (Score:1)
Re: (Score:1, Interesting)
Actually, current SSDs are bottlenecked by the SATA connection at 300MB/s reads, so getting 500 with specialized hardware doesn't seem all that fantastic.
Re: (Score:2)
Actually, current SSDs are bottlenecked by the SATA connection at 300MB/s reads, so getting 500 with specialized hardware doesn't seem all that fantastic.
The easy way around the SATA speed limit is software RAID and multiple drives. I have two Kingston 160GB (relabelled Intel G1) SSDs on an Intel Matrix Controller motherboard with software RAID 0. I get read rates over 400MB/s with technology that is roughly a year old. I'm sure newer technology on higher-end controllers can easily achieve 500.
Re: (Score:3, Insightful)
That doesn't get around the bottleneck at all. You've got the same ratio of actual bandwidth used to theoretical bandwidth possible.
A single drive with multiple SATA interfaces, acting like RAID 0, would alleviate the bottleneck.
Re:/me gets out the tub o' salt (Score:5, Insightful)
That doesn't get around the bottleneck at all.
I get nearly 2X the speed of a single drive that is limited by SATA. Theoretically, that might not be the same thing but for all *PRACTICAL* purposes, it gets around the bottleneck just fine for me :-)
Re: (Score:2, Insightful)
Yep, doubling your bus count usually doubles your transfer speed. *rolling eyes*
SAS not SATA (Score:2, Insightful)
TFA said serial-attached SCSI (SAS) was currently 6Gb/sec going on to 12 by 2012. SATA III is also 6Gbit/sec.
0.5GB/sec is 4Gbit/sec, which is under the SAS limit.
Even if it were SATA @ 3Gbit/sec that would still be quite fast.
Re:SAS not SATA (Score:5, Informative)
Due to the 8b/10b encoding on SATA, SAS, and a few other serial technologies, it's easy to convert the raw line rate into usable bandwidth: just divide the line rate by 10. For SATA/SAS at 3 Gb/s that works out to 300 MB/s; for 6 Gb/s, 600 MB/s.
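The divide-by-10 rule is easy to sanity-check with a throwaway sketch (it ignores protocol framing overhead, which eats a little more in practice):

```python
def usable_mb_per_s(line_rate_gbit: float) -> float:
    # 8b/10b encoding: every 10 bits on the wire carry 8 data bits,
    # so payload bytes/s = (line bits/s) * 8/10 / 8 = (line bits/s) / 10
    return line_rate_gbit * 1e9 / 10 / 1e6

assert usable_mb_per_s(3) == 300.0  # SATA II / SAS 1.0
assert usable_mb_per_s(6) == 600.0  # SATA III / SAS 2.0
```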
Re: (Score:3, Interesting)
Re: (Score:2)
By making new and better controllers, end users are now on a software upgrade path for hardware.
Why sell the captive enthusiast market one drive every 3-5 years when you can sell them 2 or 3 over the same time?
Re: (Score:1)
That's why their product is an Enterprise SAS drive, not a SATA drive. SAS can get 3 gigabits per second.
SAS is Serial Attached SCSI, which isn't the same thing as SATA.
SATA is a consumer-level/workstation technology, whereas SAS is for servers.
You can plug a SATA drive into a SAS port, but you can't plug a SAS drive into a SATA port. :)
Re: (Score:3, Informative)
Re: (Score:1)
Yay. (Score:1, Funny)
Re: (Score:2, Insightful)
What an elaborate comment.
Re: (Score:1)
Indeed
Re: (Score:2)
"The company refused to release [...] retail price (Score:5, Funny)
Re:"The company refused to release [...] retail pr (Score:1, Troll)
So they will realize the data is useless and go with what the CEO has asked for.
Re:"The company refused to release [...] retail pr (Score:5, Funny)
Only worth about 10$? You're crazy, I'd pay up to 20$ for such a drive!
Re: (Score:2)
Heck, I'd even pay $25 Canadian!
Re: (Score:1, Redundant)
Only worth about 10$? You're crazy, I'd pay up to 20$ for such a drive!
Heck, I'd even pay $25 Canadian!
Mod parent redundant. :)
Re: (Score:2)
Not really since today's rate was 20 U.S. dollars = 21.6840012 Canadian dollars. ;)
Anyway I'm also Canadian, so he was willing to pay 5$CAD more than me, which is completely insane!
Re: (Score:2)
But then I would want FOUR of them! ^^
Man, some people just deserve to be ripped off!
Re:"The company refused to release [...] retail pr (Score:1)
The secret: they're actually trying to price the 2GB model and scaling from there.
Re: (Score:1)
The list price for Sun's 16GB flash drive (XTA7210-LOGZ18GB) is >$8,000 US. This is write-optimized flash for use in a ZFS hybrid storage pool.
E.g., the intended use is as the "log device" (Sun's equivalent of the ext3 journal) to accelerate synchronous writes to your storage.
And it has a small fraction of the IOPS and speeds Pliant is claiming.
When they bring this to market, we can expect it to cost 15x to 20x that price per GB, easily, at least initially.
Re: (Score:2)
No, nobody would buy Pliant SSDs at 15x the price of STEC SSDs since Pliant only has 5x the IOPS.
Congrats (Score:5, Insightful)
Start-up Claims SSD Achieves 180,000 IOPS
Claims? As in no one else but the company has stated this "fact"? I wish this article waited for a review before being posted :S
Re:Congrats (Score:5, Funny)
I can claim that I have confirmed it if you like.
Re: (Score:2)
I can claim that I have confirmed it if you like.
Who do you think you are, Netcraft?
Re:Congrats (Score:4, Funny)
Re: (Score:2)
Reviews of enterprisey hardware are near-impossible to find, so you may be waiting a while.
Re: (Score:2)
Completely possible: they acquired another company which performed testing.
Re: (Score:1)
A company doesn't necessarily even have to test their product for as long as they claim it will last. Often they'll just test the product at a usage level x times greater than the expected average and do the math. An example in the context of this story might be testing an SSD with an amount of reads and writes 5 times greater than they'd expect an average person to generate. If it lasts for a year, they can claim it will last 5.
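The accelerated-test arithmetic above as a sketch (the linear scaling is an assumption; real wear-out is not always linear in load):

```python
def extrapolated_lifetime_years(test_years: float, acceleration: float) -> float:
    # Run the device at `acceleration` times the expected average workload,
    # observe it surviving `test_years`, and scale linearly.
    return test_years * acceleration

# the example from the comment: 5x the average load, survived for one year
assert extrapolated_lifetime_years(1, 5) == 5
```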
Re: (Score:3, Informative)
In this case it's probably more a matter of just doing the math.
They know their cells can handle 100,000 writes in their lifetime, they know the maximum number of writes they'll see (180,000/s for 5 years for the 3½ inch model), and they can merely do the math to figure out how many cells they need to have in their product to survive.
I did the math elsewhere, and to do it with 4 kB/write they'd only need 136 GB. Even when looking at the 320 MB/s write rate, you're only averaging 1.9 kB/write if you're
Re: (Score:2)
Re: (Score:2)
Congrats! Oh wait...
Start-up Claims SSD Achieves 180,000 IOPS
Claims? As in no one else but the company has stated this "fact"? I wish this article waited for a review before being posted :S
It's not outside what I'd expect for a next-gen enterprise SSD. The PCI-E FusionIO cards can easily do 100K IOPS sustained. I'm just surprised the SAS bus can send that many commands per second. I guess the SCSI wire protocol scales better than I assumed.
The bandwidth numbers are actually relatively low - that's the bus speed limiting the drive. I suspect that pretty soon, most enterprise-grade SSDs will connect using the PCI bus to avoid that.
I've used pre-production versions. They are FAST. (Score:5, Informative)
I used pre-production versions of these. I tested them with terabytes of test data in random write tests. They are amazing and can saturate a 1Gb FC connection with random writes. They are very resilient. We put these in my company's demo boxes to show that our architecture can compete with EMC. Kind of cheating, but we told them it was a special drive that lets us show the limits of our storage management architecture in a small 1U box, instead of just showing the limits of physical hard drives.
We beat their 8Us of EMC hard drives by 34% with just one of these 2.5" drives, and we had bottlenecks all over the place in our small demo box. And they did the testing, not us.
The thing about these drives is that they are more expensive ($/GB) even than registered ECC DDR2/3 RAM, which obviously is going to be even faster.
Re: (Score:1)
1GbFC? How well can this stand up to modern 8GbFC or 10GbE iSCSI?
Re: (Score:2)
Re: (Score:2)
The thing about these drives is that they are more expensive ($/GB) even than registered ECC DDR2/3 RAM, which obviously is going to be even faster.
So, how much do they cost exactly?
Re: (Score:2)
The thing with these kinds of prices is that you start with the off-the-shelf price, if there is one, and then negotiate the real price. The final price is, of course, confidential; otherwise clients would start comparing prices on the internet. If he posted the price, it would point directly to the company that paid it and signed the confidentiality contract.
Re:I've used pre-production versions. They are FAS (Score:3, Interesting)
Two problems:
1) They're bottlenecked by SAS, which, if they're using 3Gbit controllers, probably won't go that much higher than ~500MB/s
2) Their cost is probably insane, if they're setting the upper bounds at $6000
By comparison, Fusion-IO claims 100,000 IOPS (not as high, but not far off) on their drives, and are about to introduce a new model for $895. They use a PCI-e 4x slot, which assuming v1.x, should give them about 10gbit/s (before overhead) to play with.
Also, Woz is their chief scientist, so bonus.
Re:I've used pre-production versions. They are FAS (Score:2)
The thing about these drives is that they are more expensive ($/GB) even than registered ECC DDR2/3 RAM, which obviously is going to be even faster.
That only means it would probably be better to use RAM for read-only applications. An application that needs to commit a write to a database, ensuring that the bits are actually written to a physical medium, will be able to utilise flash rather than a hard disk. A lot of database servers would gain increased performance from such an arrangement.
Re: (Score:2)
The $/GB of DRAM is misleading. Sure, 300 GB of RAM is cheap, but how much does the server cost that can hold it?
Re: (Score:2)
That's why we need more development in RAM-based disk emulators. Much like the now-archaic Gigabyte i-Ram, I would kill for a PCI-E card that takes 8 or more registered RAM modules and spits out a bunch of SAS or SATA connectors to be Raid-0'ed, with battery backup. It would be cheaper than a high-end SSD and much much faster.
Re: (Score:2)
I'm late reading this article, but there *are* products out there that do exactly what you state. I can't recall any company names off-hand (and it'd sound too much like an advertisement anyway), but I think one of them was mentioned earlier by someone. The ones I've seen will take 6 sticks of ECC-R DDR2, and have a small external connector for power to maintain the contents of RAM while the computer is off. You're still limited by how many PCI-E slots you have in your servers (most 1U servers have 1-2 for
Re: (Score:2)
An application that needs to commit a write to a database, ensuring that the bits are actually written to a physical medium, will be be able to utilise flash rather than a hard disk.
Log files are sequential, so there is no speed benefit in using flash media instead of hard drives.
Re: (Score:2)
In an application where you need to ensure data is written to somewhere physical on commit the database will call fsync(). This can only be done 250 times per minute on a 15k rpm disk drive [livejournal.com]. That limits the database to 250 commits a minute. Battery backed or flash cache increases performance here - "The really enormous performance increases that have been found for update-heavy database loads come from another hardware enhancement, namely the RAID controller with battery-backed cache.... On a test involving [linuxfinances.info]
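The 250 figure (per second, as the poster corrects downthread) falls straight out of the rotation rate; a quick sketch of the worst case where every synchronous commit waits a full platter revolution:

```python
def max_sync_commits_per_second(rpm: int) -> float:
    # worst case: one committed (fsync'd) transaction per revolution
    return rpm / 60

assert max_sync_commits_per_second(15_000) == 250.0
assert max_sync_commits_per_second(7_200) == 120.0
```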
Re: (Score:2)
Write speeds for a given rotational speed have been getting better over the years, and any real system is going to be using a battery backed cache anyway. With a properly tuned file system I don't see any benefit from an SSD for purely sequential writes. Add in cache on a controller card, and you don't even need that. Modern drives can cache the fsync() operations and perform them as a long single write.
Re: (Score:2)
Re: (Score:2)
Of course I meant 250 commits per second, not per minute. That's still assuming the best case of one transaction being written every rotation of the disk. Actual world results will probably be lower.
Re:I've used pre-production versions. They are FAS (Score:5, Interesting)
Well, I don't know the whole setup, just that it was about 10 drives (15k) SCSI (not SAS) in a RAID 5. I don't know how much cache. It was a CLARiiON unit. But the customer thinks, "Wow, your little box that I've never heard of has just beaten EMC." They don't get into the technical details when they make that sort of decision.
Re: (Score:2, Interesting)
Re: (Score:3, Insightful)
awesome (Score:2, Funny)
Re: (Score:2)
I thought the need to store information in a buffer before writing it was kind of important, especially with writes coming in as fast as SAS is supposed to be able to transfer them.
What's the point of a buffer if SAS can barely keep up with the drive's IO speed? All writes will be at the limit of the SAS link. Surely you don't think they put cache buffers in hard drives for data integrity, do you? That makes no sense; the cache is more volatile than the storage medium, by definition.
Think of it this way, slow drives need lots and lots of cache, fast drives need very little cache. Does your RAM have some other cache before it sends stuff off to the CPU? Same thing here, except its g
Considering they're in RAID 0 (Score:3, Interesting)
Re:Considering they're in RAID 0 (Score:4, Informative)
The 12 independent channels can be accessed as RAID-0 if needed, giving upwards of 12x the speed of a single channel, but this is done by the onboard controller, not by anything else.
Intel uses 10 independent channels to achieve their speeds, also in a "RAID-0" like setup.
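A toy illustration of that RAID-0-style channel interleaving (purely illustrative; real controllers stripe at the flash page/plane level and layer wear levelling on top, and all names here are made up):

```python
def stripe(data: bytes, num_channels: int, chunk: int) -> list:
    # round-robin fixed-size chunks across the channels, RAID-0 style,
    # so each channel can be written in parallel
    channels = [bytearray() for _ in range(num_channels)]
    for n, i in enumerate(range(0, len(data), chunk)):
        channels[n % num_channels] += data[i:i + chunk]
    return channels

# 8 bytes over 4 channels in 2-byte chunks: one chunk per channel
assert stripe(b"abcdefgh", 4, 2) == [bytearray(b"ab"), bytearray(b"cd"),
                                     bytearray(b"ef"), bytearray(b"gh")]
```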
Re: (Score:2)
Re: (Score:2)
The Intel X25-E delivers about 5,000 IOPS. How are you going to match a drive doing 9x more with RAID?
Summary prematurely terminated (Score:1)
Re: (Score:2)
Wonder what controller they used (Score:4, Interesting)
Re: (Score:2)
With SAS there are basically two choices: LSI 1068 (with IT firmware for maximum performance) or the not-yet-released LSI 9210.
Re: (Score:2)
I'll soon be configuring another ICH10 box with an X25-E; if you want to send me some benchmarks to run I could probably do it.
Re: (Score:2)
Re: (Score:2)
Milk them for a few more years, then it's a race to the bottom as a commodity product.
Re: (Score:2)
The solution to your problem is to not have a chipset in the way - like with FusionIO ioDrives. PCIe based SSD, direct connection to the CPU! Doesn't even use much CPU time, because it's so damn fast that unlike HDDs, it doesn't spend much time waiting.
Coincidentally, those ioDrives are also faster. I'd love the 1.4/1.5 GB/sec write/read variety, but I have a feeling I'd have to sell my car, and maybe my house. Even their low end model "for Desktop PCs" costs $900 for 80GB. If these guys can keep the price
Re: (Score:2)
Doesn't even use much CPU time
It sounds like you haven't used Fusion io; when you saturate the card the driver uses an entire CPU core.
Re: (Score:2)
It sounds like you haven't used Fusion io; when you saturate the card the driver uses an entire CPU core.
I haven't, and I was aware of that.
Perhaps I would've been more correct in saying "makes efficient use of CPU time"?
Most of us have quad-core CPUs with multiple cores sitting idle while we game or work on stuff. Using one core to give other cores and programs access to data much faster is a good tradeoff.
And HDD access for a relatively small number of drives(12?) will saturate a core too, unless you have a decent controller card. It's all that RAID parity checking and stuff. But 12 HDDs won't come close in
Unlimited writes? (Score:3, Interesting)
They're using SLC NAND flash, which has lower wear than MLC NAND [wikipedia.org], but that doesn't mean there is no wear at all. It looks like a nice drive anyway.
Re: (Score:2)
True, but in this context the word "unlimited" is being used to mean "you can't wear it out in 5 years". It's vaguely similar to "unlimited" Internet: The ISP may not slow you down at a set data limit, but you still can't pull more than ~300GB through a 1Mb connection per month.
But yeah, I don't like how marketing departments use the word unlimited either.
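The ~300GB ceiling mentioned above follows from the link rate alone; a quick sketch (30-day month assumed, decimal GB):

```python
def max_gb_per_month(link_mbit_per_s: float, days: int = 30) -> float:
    # a fully saturated link: bits/s -> bytes/s -> bytes/month -> GB
    bytes_per_s = link_mbit_per_s * 1e6 / 8
    return bytes_per_s * days * 24 * 3600 / 1e9

# a maxed-out 1 Mb link moves roughly the ~300GB the comment cites
assert round(max_gb_per_month(1)) == 324
```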
Re:Unlimited writes? (Score:5, Informative)
They didn't say "unlimited writes forever" they said "unlimited writes for 5 years", and that's obviously limited to what the drive can do, i.e. 180,000 operations per second for their 3½ inch drive.
At 180,000 IOPS * 5 years you're looking at 28,401,233,400,000 write operations.
At 320 MB/s * 5 years you're looking at writing 47 petabytes worth of data.
Now, obviously none of those figures are realistic, as there is no way you would be writing 100% and never ever reading your data again. But they are claiming that their drives can handle those loads without failing. In order for their device to handle that many writes, they'll need a minimum of 284,012,334 cells. That's assuming 1 bit/write of course. The more realistic thought is 4 kB/operation. Now you're looking at 9,306,516,160,512 cells or 136 GB, and I think it's safe to assume that their 3½ inch drive will store more than 136 GB of data.
It's not unlimited forever, it's unlimited within a timespan and capabilities of the device. And just doing the math makes this seem entirely plausible.
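Those figures reproduce exactly with a 365.242-day year (evidently what was used) and the 100,000 write-cycles-per-SLC-cell assumption made elsewhere in the thread:

```python
SECONDS_PER_YEAR = 31_556_926                 # ~365.242 days

ops = 180_000 * SECONDS_PER_YEAR * 5          # 5 years at 180,000 IOPS
assert ops == 28_401_233_400_000              # the quoted operation count

cells = ops // 100_000                        # 100k write cycles per cell, 1 bit/write
assert cells == 284_012_334                   # the quoted minimum cell count
```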
Re: (Score:2)
That's only plausible assuming you only use a log-structured filesystem. Use something that stores something in a fixed position on disk, say... a journal, and you'll find it can survive a lot less than 28 trillion writes.
Re: (Score:3, Informative)
Re: (Score:2)
But they are claiming that their drives can handle those loads without failing.
Yeah, right. HDD manufacturers have been claiming for decades now that their drives would survive a decade of normal loads, while in reality they often fail in 2-3 years or less.
Sorry, but with my experience, I won't believe a word of their claims. My data is worth too much.
Re: (Score:2)
4KB per op, 5 years, 180000 operations per second, 100000 overwrites allowed before the flash becomes unreliable.
That gives me: 4 kB × 2.840184 × 10^13 total ops / 100K writes ≈ 1 TB. Not 136 GB.
Or perhaps the SLC flash they are using allows 1M overwrites? But in another post your assumption was 100k.
What am I missing?
Re: (Score:2)
No, you're not missing anything. I used Google to calculate the number:
135 GB [google.com].
However, I did have to do a bit of checking up, and I found out what was wrong:
1.06 TB [slashdot.org]
Capitalization isn't just important in husbandry.
When I did my 135 GB calculation, I checked their website and found that their lowest-capacity unit was 150 GB, so I expected that I was right and the kb vs kB error hadn't crossed my mind.
I did just check Wikipedia [wikipedia.org] though:
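The kb-vs-kB mix-up is just a factor of eight; redoing the capacity math both ways (same assumed 100,000 write cycles per cell, binary units):

```python
ops = 28_401_233_400_000   # 180k IOPS for 5 years, from the thread's math
cycles = 100_000           # assumed write endurance per SLC cell

capacity_4kB = ops * 4096 / cycles / 2**40   # 4 kB per write -> TiB
capacity_4kb = ops * 512 / cycles / 2**30    # 4 kb per write -> GiB (the error)

assert round(capacity_4kB, 2) == 1.06        # the corrected ~1.06 TB figure
assert round(capacity_4kb) == 135            # the original ~135 GB figure
```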
Sounds good except that I ..... (Score:2)
have to wonder about the accuracy of the following claim:
I have no problems with their claimed speed since frankly, if you run multiple smaller internal unit in parallel, you can pretty much get any speed you desire. But it's my understanding that the wearing out of the storage cells is a physical problem and in order for their claim to hold true, the
Re: (Score:2)
onWrite(data, location) {
    // if this location is far more worn than the least-written block,
    // swap: move the cold block's data here and put the hot data there
    if (location.writeCount > threshold * drive.leastWornBlock.writeCount) {
        write(drive.leastWornBlock.data, location)
        write(data, drive.leastWornBlock)
    } else {
        write(data, location)
    }
}
(i'm sure this is a sub-op
ASIC to the rescue (Score:2, Informative)
Seems like the same massive advantage of an Application-Specific Integrated Circuit (ASIC) over general processors and even FPGAs that I see in video compression, a field I keep tabs on.
At one time I had wondered why a $100 camcorder could encode video in real-time, when my seemingly much more powerful desktop took hour
Re: (Score:2)
Most ASICs really aren't that good. For instance, the compression efficiency on most cameras is not great, but it only has to be good enough to squeeze the video down to a size the camera can write to its medium fast enough. On a PC the encoders are heavily optimized for efficiency, as you want to reduce storage and transfer costs as much as possible.
Today, with the sheer number of transistors on a general purpose CPU, it would cost way too much to develop an ASIC that is faster for many purposes in raw pow
After all the data loss because of HDs and SSDs... (Score:2)
I stopped caring about speed. As long as it's fast enough to boot before I'm done in the kitchen and bathroom, and as long as it manages to stream HD movies and the like, it's OK.
I'd much rather have a good ZFS pool and a set of reliable drives that survive not only the first 3 years, but also the next 20!
And when they do fail, (Score:2, Informative)
in RAID 0 you are in deep, deep doo-doo.
Most people know that striping 2 or more disks can give a performance increase, but the idea of putting business-critical data in a RAID 0 config is IMHO just plain crazy.
Re:And when they do fail, (Score:4, Informative)
Yeah, but a head crash on a hard drive kills the entire drive, same with a motor failure or most hard drive failures, even though there are multiple heads and platters. Think of channels in an SSD as platters in a hard drive, not separate hard-drive-lets.
With a solid state drive, with block recovery algorithms, no moving parts, etc, it's less of a risk. There's still a risk of course, but it's less ridiculous. Anyway, internal RAID 0, RAID 5, RAID 10, all killed totally by a total device failure.
Re: (Score:3, Informative)
Um, what now? RAID5 can sustain at least one drive failure (or more, depending on the configuration of the array), and RAID10 can sustain one to two drive failures depending which drives go. Unless the whole controller goes, in which case you're totally screwed.
But in theory, SSDs should be a bit more durable than spinning platters - and I'd assume it's also easier to recover the data (or at least most of it) without the need for a clean room. Emphasis on "in theory" as I had an SSD go with absolutely no warning less than 48 hours after installation, but I'm filing that under bad luck.
Re: (Score:3, Insightful)
Emphasis on "in theory" as I had an SSD go with absolutely no warning less than 48 hours after installation, but I'm filing that under bad luck.
I'd call that good luck. Bad luck would be 48 days.
Re: (Score:2)
So you get 4 of these things and you RAID5 the 4 of them. So it's actually RAID50, but c'est la vie.
Think of the SSD itself as a RAID array, and you can just RAID it like normal with other SSDs. Duh. It's exactly what IT admins have been doing. For bulk data or large writes use RAID5/RAID6, for database IO use RAID10. Don't concern yourself with what the SSD is doing internally. It may be more or less reliable than a hard drive. Probably more. But ignore that, treat it like a regular hard drive, just really
Not compatible with RAID (Score:3, Informative)
All this talk of RAID is nonsense and doesn't apply to these drives. RAID stands for "Redundant Array of *Inexpensive* Disks". These SSD are probably bloody expensive.
Re: (Score:2)
+3 Informative? Really? *rolls eyes*
I guess I forgot to include the "sarcasm" HTML tags.
Re: (Score:2)
Re: (Score:1)
It really isn't necessarily that crazy to mirror stuff..
You need backups for business-critical data anyway; the things true RAID* gives you protection from are only a part of the spectrum of things that can go wrong. If you have a good backup policy and don't care so much about high availability that you'd get from RAID, but you do care about performance (maybe you're doing video editing or something, I dunno), then mi
Re: (Score:1)
Yes, of course. /me goes to crawl under a rock
Why not internal RAID5? (Score:2)
The same principle should be extendable to RAID5.
Several separate, smaller devices combined into a RAID5 array, all inside one 3.5" case. That would take care of failures in one of the sub-devices. In case the "mainboard" that connects them all and holds the SAS interface fails, make the "mainboard" exchangeable. Swapping it will revive the drive.
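The single-sub-device-failure recovery that RAID5 provides boils down to XOR parity; a toy sketch (nothing here is specific to these drives):

```python
def xor_blocks(blocks):
    # parity (or a rebuilt block) is the bytewise XOR of all other blocks
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"\x01\x02", b"\x0f\x0f", b"\xaa\x55"]
parity = xor_blocks(data)

# lose data[1]: XOR the surviving blocks with the parity to rebuild it
assert xor_blocks([data[0], data[2], parity]) == data[1]
```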
Re: (Score:1)
Because RAID5 is the footgun of storage solutions. It sux. It always did. It always will. Don't use it. Ever!
Now, I'll admit, with SSDs a few of the arguments against it no longer apply, but there are still enough arguments left. http://www.baarf.com/ [baarf.com] has a nice list.
Re: (Score:2)
Re: (Score:2)
True, but the latency on your approach is a deal-breaker (see also: recent carrier pigeon vs. African ISP experiment).
re: your sig (Score:1)
Re - your sig "How are sites slashdotted when nobody reads TFAs?"
My guess: Robot overlords [wikipedia.org], including those which are allegedly evil [bing.com], allegedly non-evil [google.com], and others [wikipedia.org].
Re: (Score:1)