Sun Unveils RAID-Less Storage Appliance
pisadinho writes "eWEEK's Chris Preimesberger explains how Sun Microsystems has completely discarded RAID volume management in its new Amber Road storage boxes, released today. Because it uses the Zettabyte File System, the Amber Road has eliminated the use of RAID arrays, RAID controllers and volume management software — meaning that it's very fast and easy to use."
Sun shoots, and... well, you already know. (Score:5, Insightful)
Correct me if I'm wrong, but doesn't charging enterprise prices for simplified hardware that relies on commodity software solutions, kind of defeat the point?
Unless I'm misunderstanding this hardware, the entire idea is to move data safety away from hardware redundancy toward software-driven duplication. In that way, the data is safe from failure in the same way that GoogleFS protects against individual machine failures. The only difference is that Google probably doesn't pay $11,000 for 2TB of storage. :-/
One of these days, I really will understand why Sun regularly shoots themselves in the foot. Until then, I suppose I must trust them to somehow find a customer who's willing to pay exorbitant prices for an otherwise good idea. (i.e. I'd really love to see Sun bring Google-style reliability from unreliability to the market.)
BTW, here's the link to Sun's marketing on this:
http://www.sun.com/storage/disk_systems/unified_storage/index.jsp [sun.com]
It's actually pretty cool tech. Sun could own the market if they just understood how the market views pricing and features.
Re:Sun shoots, and... well, you already know. (Score:5, Funny)
I suppose I must trust them to somehow find a customer who's willing to pay exorbitant prices for an otherwise good idea.
Have you worked with any of Sun's customers recently? I believe P.T. Barnum was involved in the development of their business strategy.
You're missing the point (Score:4, Insightful)
This is meant to be 100x faster than the storage you're talking about:
First [sun.com]: This uses Hybrid Storage Pool:
The Hybrid Storage Pool combines DRAM, SSDs, and HDDs in the same system, dramatically reducing bottlenecks and providing breakthrough speed.
Second [sun.com]: The system's hybrid architecture gives you the speed and performance you need to shatter the I/O bottlenecks with no administrator intervention. In fact, Hybrid Storage Pools with SSDs can improve I/O performance by 100x compared to mechanical disk drives.
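To make the tiering concrete, here's a toy Python sketch of what that kind of hybrid read path amounts to. The class, the dict-backed tiers and the latency figures in the comments are my own illustration, not Sun's actual Fishworks/ZFS internals:

class HybridPool:
    # Toy model of a hybrid storage pool: DRAM cache in front of an
    # SSD read cache in front of spinning disks. Purely illustrative.
    def __init__(self, dram_cache, ssd_cache, hdd_backend):
        self.dram = dram_cache   # dict-like: fastest, smallest tier
        self.ssd = ssd_cache     # dict-like: fast, larger tier
        self.hdd = hdd_backend   # dict-like: slow, largest tier

    def read(self, block_id):
        if block_id in self.dram:            # roughly nanosecond-class latency
            return self.dram[block_id]
        if block_id in self.ssd:             # roughly microsecond-class latency
            data = self.ssd[block_id]
            self.dram[block_id] = data       # promote the hot block to DRAM
            return data
        data = self.hdd[block_id]            # millisecond-class (seek + rotation)
        self.ssd[block_id] = data            # warm the SSD cache for next time
        self.dram[block_id] = data
        return data

pool = HybridPool({}, {}, {42: b"block data"})
print(pool.read(42))   # first read hits the disk tier and warms both caches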
Re: (Score:3, Interesting)
> Correct me if I'm wrong, but doesn't charging enterprise prices for simplified hardware
> that relies on commodity software solutions, kind of defeat the point?
Yea, that is amazing. You could put in a pair of 1U servers with RAID 1 on each for a fraction of that price tag. Use any of a number of ways to cluster the two units, including OpenSolaris, and you get everything they are selling except the pretty front end for about half the sticker. Go SCSI/SAS on all of the drives and do it in 2U machines.
Re:Sun shoots, and... well, you already know. (Score:4, Informative)
Is that 2U 1M IOPS unit racked right next to your 1U, 64K core, 1024 TeraHertz system?
FYI, a loaded HDS 9990V (i.e. hundreds of spindles and multiple gigabytes of cache) manages to provide 200,000 IOPS (SPC-1). Even the Texas Memory Systems RamSan-400 (i.e. SDRAM) can only manage 400,000 IOPS. Hell, it was only a couple of weeks ago that TMS announced they had sold a RamSan-5000, which is the only storage device I've ever seen specced to 1,000,000 IOPS. And it's actually ten separate RAM-cached, flash-backed units.
Sounds like apples to oranges comparison (Score:4, Informative)
to me. Coming from high-performance transaction processing land, where an operation means 'the data is ON the platter', you can't do that more often than the platter rotates to the point where the head is over the sector where the write operation starts. Basic math: a 15k RPM spindle = roughly 250 times/sec. Multiply by however many spindles you've got, and that's what your max throughput is.
This is one reason why IN THEORY at least an SSD would be so great, that latency is much less. So basically I'm thinking they just aren't talking about what you're talking about, and maybe that makes sense, if you're running a trading operation say, you just DO NOT CARE what is buffered someplace, if it isn't physically on the drive, it doesn't exist.
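For the curious, the back-of-the-envelope version of that rotation math, assuming at most one committed write per revolution per spindle (which ignores command queueing, short-stroking and write coalescing):

# Synchronous-write ceiling set by rotational latency (rough model).
rpm = 15000
revs_per_sec = rpm / 60.0            # = 250 revolutions per second
spindles = 48
print(revs_per_sec)                  # ~250 committed writes/sec per spindle
print(revs_per_sec * spindles)       # ~12,000/sec across 48 spindles, best case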
Re: (Score:3, Insightful)
You, sir, are never allowed anywhere near my data centers!
Re:Sun shoots, and... well, you already know. (Score:5, Insightful)
Well, that's a nice little department file server that you specced out there.
Sun targets a *slightly* different market with their device (think: databases, mission critical, pink slips).
Re:Sun shoots, and... well, you already know. (Score:5, Insightful)
If you're driving 48 SATA drives on one bus, you're:
A) Not looking at the minimum 11.5TB layout
B) Not paying $35,000
C) Not a small-business customer
Which brings me back to: Sun is promising to target the small business and yet totally missed the mark. This is Enterprise hardware.
Re:Sun shoots, and... well, you already know. (Score:4, Informative)
Just a comment about the 48-disk setup; it is not always about getting the most space, but often about getting the fastest response time. In this case the important factor is the number of spindles. 11.5TB divided across 48 disks would be ~240GB per disk. Many companies would want 48 70GB disks because they do not need more space, only faster response times.
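Quick sanity check on those numbers; the per-drive IOPS figure below is just a commonly quoted ballpark for one 15k spindle, not anything from Sun's spec sheets:

total_tb = 11.5
disks = 48
print(total_tb * 1000 / disks)        # ~240 GB per disk, as above

# Random IOPS scale with spindle count, not capacity (rough ballpark):
iops_per_disk = 175                   # assumed figure for one fast spindle
print(iops_per_disk * 48)             # ~8,400 IOPS from 48 small disks
print(iops_per_disk * 12)             # ~2,100 IOPS from 12 big disks of the same total size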
Re: (Score:3, Interesting)
Small businesses are, by definition, businesses that make under $25M/year. I can imagine small businesses being in the market for inexpensive, high-throughput SANs.
Re: (Score:2)
the hardware to drive 48 SATA drives and not saturate the bus still isn't cheap.
Actually, SAS HBAs and JBODs (which is what Sun is using) are cheap; that's why it's odd that Sun is charging so much. For example, the 7210 is the same hardware as the X4540 yet it appears to cost much more.
Re: (Score:2)
You're not reading what you quoted. The hardware to *DRIVE* the HDs is not cheap. You're not talking about using ONE PCI bus for the entire server, for example.
Re: (Score:2)
If you can't understand what I wrote, go back to trolling newegg please, kthx hand
Re:Sun shoots, and... well, you already know. (Score:5, Insightful)
I don't think it's that expensive.
I use Promise's VtrakJ610s at work (16x1TB SATA), and it cost about half that - but I still need a server for it (DL385 in our case). And I need to fit the disks myself (16x4 countersunk screws...) into the ultra-cheap harddrive containers.
A MSA70 full of SAS-disks (25) costs 10k, IIRC - but you need a server, HBAs etc.
I'm soooooo sick of the "I could build one for XXX% less using YYY"-comments.
Please, all the whiners: go and start your own company selling and supporting storage systems.
Good night and good luck....
Re:Sun shoots, and... well, you already know. (Score:5, Funny)
It's SUN.
Their goal is to stay relevant; their strategy is to make headlines.
It's like a cross between a child acting up for attention and an emo cutting themselves.
Re: (Score:2)
You can build a single server containing 12TB powered by ZFS and RAIDZ for about $3000-3500 Canadian, including hot-swap drive bays. And the drives.
Sure, there's probably a lot of redundancy in these Sun boxes, but if they're relying on ZFS/RAIDZ to provide much of the reliability, and you build your $3500 box (which is housed in a mid-tower case with 9 drive bays) using OpenSolaris, you're most of the way there. At that point, you've got the data reliability, you just might not have quite the same uptime.
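If you want to sanity-check the capacity side of a box like that, a quick sketch (this ignores metadata overhead, hot spares and the TB-vs-TiB marketing gap; the function is just my own helper, not a ZFS tool):

def raidz_usable(n_drives, drive_tb, parity=1):
    # raidz1 gives up one drive's worth of space to parity, raidz2 two, etc.
    return (n_drives - parity) * drive_tb

print(raidz_usable(9, 1.5, parity=1))   # 9 bays of 1.5 TB drives -> 12.0 TB usable
print(raidz_usable(9, 1.5, parity=2))   # raidz2 instead -> 10.5 TB usable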
Re: (Score:2)
Yea, I recently built an 8TB server. It cost me about $5,000, has no RAID controller, and uses Linux software RAID.
If I really wanted, I could buy a second $5000 server and do DRBD between them to have 2x redundancy.
Re: (Score:3, Insightful)
the entire idea is to move data safety away from hardware redundancy toward software-driven duplication
You are exactly right. When you pay the exorbitant price, you pay for great hardware, the development of great software (which you could have gotten for free), the convenience of a prepackaged solution, and for the hardware and software support.
Should anything happen to these machines, you can always get your data back. If you can't afford another set of machines like these, simply plug the drives into anything that runs Solaris (or generally ZFS), and you have your data.
Just because it's open doesn't m
Re: (Score:3, Insightful)
Even if you put it together and test it using slave labour, you're not getting much change from $11k.
Sure, you could just plonk three 1.5T Seagates in there, shove a RAIDZ over it and call it a day, and that would
Re: (Score:2)
At the enter
Hate to be the one to tell you all this... (Score:3, Insightful)
But SUN is FAR from being the inventor of charging people $50k for something they could just as well get for free...
Name ANY big IT vendor, they all do it. My father can tell some amazing stories on that subject. Not a new phenomenon either.
Now, if you are the GOVERNMENT, they'll give you the special bonus public sector price, $150k!!!
Re:Sun shoots, and... well, you already know. (Score:5, Insightful)
With the same level of assurance that the solution will operate, first time - every time?
With the same level of confidence that Some Vendor will bend over backwards to fix it if it doesn't work?
Will your solution be as well tested and engineered?
It's not like you can just grab 3 1TB SATA drives, throw them into RAID-5 and say that you've got 2TB of production ready storage. Well, you can, but you'd be an idiot.
Your "home brew" solution will not meet any of the objectives Sun are achieving with this product. Your spindle count will suck, so concurrent access will be slow. You will probably be limited to one of iSCSI, CIFS, NFS or WebDAV, I doubt your solution would have all - and if it does, the integration will suck.
Will your solution have the diagnostic tools that Sun can provide? Oh wait, you don't have the millions of dollars to invest in engineering quality diagnostics, right from disk analysis (Sector scanning, remapping, etc) through to performance related faults? Well, then your solution will suck. What about snap-cloning?
In short, yes - storage is cheap. You can grab large drives very cheaply and put together something that works. That does not mean it will be good. Production quality storage is expensive, and for good bloody reason.
As for doing this using SSD storage, that's just ridiculous. 2048GB of storage would be at least sixteen 128GB SSDs, and that's not counting any disks for redundancy (i.e., RAID-5/6 parity) or hot spares. Assuming 2 drives for RAID-6 parity and 2 hot spares, you'd need 20 SSDs; with 10 grand, you're expecting to pay $500 per disk, with nothing left over for other hardware, i.e., motherboard, case, cooling (more important than you think), etc.
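Spelling out that arithmetic, for anyone who wants to fiddle with the numbers (a throwaway Python calculation; the disk sizes, parity count and budget are the ones from the paragraph above):

import math

target_gb = 2048
ssd_gb = 128
data_disks = math.ceil(target_gb / ssd_gb)        # 16 SSDs just for capacity
parity_disks = 2                                  # RAID-6 style double parity
hot_spares = 2
total_disks = data_disks + parity_disks + hot_spares
print(total_disks)                                # 20 SSDs
print(10000 / total_disks)                        # $500 per SSD, before any other hardware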
So, until you have a clue about designing production quality storage systems, please refrain from making statements you have no clue about, you're only serving to confuse those people who are actually interested in what this product has to offer them. Keep to building crappy 3 or 4 disk RAID-5 systems using extremely large drives for storing your music, movies and pr0n on, but don't ever ever ever ever think about using those in any situation where your financial livelihood depends on that data.
Re:Sun shoots, and... well, you already know. (Score:5, Interesting)
Okay perhaps not with SSD.
I built 1.5 TB RAID systems over a year ago for $1,200 apiece. FULL, including swappable drives, gigabit Ethernet and a generous 2 GB of RAM to cache the system. They are FAST and reliable. They worked the FIRST TIME, and have worked exactly perfectly for 1.5 years. Drives fail; we have hot-swappable spares available. While not quite 2 TB, they are also 1.5 years old now, and I'll be replacing them in another 1.5 years with bigger-drive systems.
And these will become spares and lower-priority systems when I update them with newer stuff in a year or two.
Expensive technology for the sake of all that other stuff you listed is just silly. It is exactly why Sun doesn't get it, and why some pointy-haired boss buys the BS.
Production quality storage means that it works for the time needed. I've actually had WORSE reliability from Name brand "Server" quality stuff. We've got HP Proliant Servers in production, and at least THREE from three different lots have all failed due to MOBO Failures. While they do send out a tech to replace the MOBO, it is really really annoying to have to tell people that the server is down because the MOBO failed. And all the great diagnostic tools HP has on those servers didn't predict nor would they fix the errors.
If you can build it for half as much, then you can easily have two on hand in case one dies.
Re:Sun shoots, and... well, you already know. (Score:5, Interesting)
We've got HP Proliant Servers in production
Some things you should keep to yourself no matter how bad it operates ;)
This is a real case of quality, support and "bling" factor. To use a (bad) car analogy: There is no need to buy a Mercedes when you can own a Nissan for half the price and it has exactly the same features (it may even be more powerful in some cases). However anyone can drive a Nissan (or can afford to), so there is a certain bling factor to driving the Mercedes. Just like there is a hell of a "bling" factor to owning Sun equipment as opposed to the "hack job" we can all put together. Personally I would prefer to spend twice as much and know that it's no longer my problem, even if it crashes, but that's just the opinion of one Network Admin.
Completely off hand: I've never had a mobo fail in any server, IBM or Dell based.
Re:Sun shoots, and... well, you already know. (Score:5, Interesting)
Sure.
Heck, I'll even throw in the same vendor!
Even better. It will have had the same testing and engineering, PLUS a pre-existing history of operating in the marketplace.
I give you, the Sun Fire X4500 Server [sun.com]:
12TB (48x250GB) - $23,995.00
24TB (48 x 500GB) - $34,995.00
48TB (48 x 1TB) - $61,995.00
Let us compare with Sun's new line, shall we?
11.5 TB (46 x 250GB) - $34,995.00
22.5 TB (45 x 500GB) - $71,995.00
44.0 TB (44 x 1TB) - $117,995.00
So... twice the price for the same storage? To steal a line from a very famous "programmer":
Brillant
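For anyone who wants that markup in dollars-per-terabyte terms, a quick calculation from the list prices above (purely illustrative, list prices only, no discounts):

x4500 = {12: 23995, 24: 34995, 48: 61995}            # Sun Fire X4500 configs
new_line = {11.5: 34995, 22.5: 71995, 44.0: 117995}  # new unified storage configs

for tb, price in sorted(x4500.items()):
    print(f"X4500    {tb:>5} TB: ${price / tb:,.0f}/TB")
for tb, price in sorted(new_line.items()):
    print(f"7000-ish {tb:>5} TB: ${price / tb:,.0f}/TB")
# Works out to roughly $1,300-2,000/TB for the X4500
# versus roughly $2,700-3,200/TB for the new boxes.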
Re: (Score:3, Informative)
12TB (48x250GB) - $21,995.00 [sun.com]
and is virtually identical to the new 7210 box (configured with 48 x 250GB) that sells for $34,995.00, proving that the same hardware is sold at a 60% markup! Someone mod the parent up; he laid out the perfect counter-argument to the GP.
Re:Sun shoots, and... well, you already know. (Score:5, Interesting)
That's exactly what Google and many others do, and they spend their money, and significantly less than this, on managing that storage effectively. It works. When it boils down to it, you can have all the exorbitantly expensive and brilliant 'enterprise ready' tools you want but the bottom line is you need redundancy - and that's pretty much it.
Sun say they are targeting small businesses, and they have lost already with this poor showing. They have advanced no further than when they stiffed all the Cobalt Cube customers and withdrew the product, who then went out and bought Windows SBS servers ;-). If you think people are going to jack them in for this then you need a stiff drink.
Ahhh, shit. I'm heart broken. What I'd like to know is how a small business will handle a behemoth like that, how they'll fund the electricity for all those drives and who'll manage it all. I expect that will be an ongoing cost to Sun support ;-).
I have news for you. People have been doing it for years, and Sun's business has been going down the toilet to commodity servers, Linux and Windows, especially with small businesses, for the past ten years for exactly this reason.
Sun need to stop pretending that they can package up some commodity shit with some features very, very, very, very, very few need (and is waaaaaaaaaaaaay outside their target market) and label it as 'enterprise ready', which they think justifies an exorbitant price tag and ongoing support. They lost with this strategy with x86 and Solaris where they tried to protect SPARC, they lost with the exodus from SPARC after the dot com boom and they will keep on losing.
Re: (Score:2)
It's not like you can just grab 3 1TB SATA drives, throw them into RAID-5 and say that you've got 2TB of production ready storage. Well, you can, but you'd be an idiot.
That's exactly what Google and many others do, and they spend their money, and significantly less than this, on managing that storage effectively. It works. When it boils down to it, you can have all the exorbitantly expensive and brilliant 'enterprise ready' tools you want but the bottom line is you need redundancy - and that's pretty much it.
You're trolling, right? You spend money for reliability either on hardware engineering, or software engineering. If you're doing things on the scale Google is, obviously it saves you more money to do software engineering, since that can be replicated easily on a large scale. For most companies, even "enterprises", hardware reliability gives you a better bang for the buck, because you can't bloody afford multiple data centers.
Re: (Score:3, Informative)
For most companies, the "cheap crap" is more than adequate. They simply don't have the money to spend, and they get by perfectly well despite not having drunk the Kool-Aid. Also, overpriced hardware with the label "enterprise" plastered on it is not going to do anything to prevent the need for multiple data centers. Overpriced enterprise hardware and multiple data centers solve different problems.
Re: (Score:3, Insightful)
the bottom line is you need redundancy - and that's pretty much it.
Um, well, good for you if that's all you need, and Sun will surely be happy to sell you something appropriate for that too, but for some of us, we kind of need that 2TB to do more than 300 IOPS.
Re: (Score:2)
It seems others disagree with you about 2TB SSD systems.
http://www.superssd.com/products/ramsan-500/ [superssd.com]
Ironically, these devices are less than an order of magnitude more than the Sun storage.
Re: (Score:2)
How does 2TB of Solid State Memory compare to 2TB (14 x 146GB) of spinning disks [sun.com]? Apples to oranges, my friend. Apples to oranges.
(FWIW, you need to spend about $71 grand [sun.com] (!) to get a mere 18GB of SSD. Almost like a cracker-jack prize or something. :-P)
Re: (Score:2)
Will your solution be as well tested and engineered?
Sun doesn't do much engineering here. Grab a bunch of SATA drives engineered by someone else, put them on a SATA chipset engineered by someone else, skip adding RAID to cut engineering costs and you have a storage array. Someone in China crimps the cables and packs it in a box, one metal and then one cardboard.
But I could go to my local PC enthusiast store buy a pair of 800W or 1200W power supplies, a mobo with quad core, 8GB of RAM, 6 SATA ports built in, 2 x 12 port SATA add ons, 30 1.5TB Seagate drive
Re:Sun shoots, and... well, you already know. (Score:4, Insightful)
Nice. How's the replication work on that rig you just built? And how many IOPS you getting? And how quickly will your vendor bring replacement hardware to you? How many filesystem snapshots can you take with your fancy ICH9 supporting linux? You gonna back that up over NDMP? How's the thin provisioning working out for you there? How much data you pushing through those two slots? Where's the other 2 gig ethernet ports. You got hot swappable power supplies there? After you're done stuffing all that gear onto your mobo, how many pcie slots you got left for future growth?
No offense, but try and get some clue as to what it takes to have a commodity class storage appliance.
Re: (Score:2, Insightful)
Have you ever used an EMC Clariion? Did you check those insane prices? The CX4-120 costs around $4,000 and the software for one user another $4,000 (prices vary).
The folks at Sun are not stupid, especially when it comes to HPC.
And BTW, storage space isn't the most important thing. Have you ever wondered why Google keeps offering more space for GMail? They need huge amounts of IOPS (
Re: (Score:2)
I could build a setup that would be way more powerful, with more storage, for way less.
What kind of drugs are these people on? two TB for 10K ???? ARE YOU NUTS SUN????
I could probably build this using SSD for less. SHEESH
Promise VTrak 16-drive array...$4,500
(16) Seagate 1TB SATA300 @ $130 each...$2,080
$6,580 for 16TB of disk space connected to one server.
Then you install OpenSolaris and install ZFS.
Re: (Score:3, Insightful)
You forgot the SSD's for ZFS secondary ARC cache. Oh, and the server.
Sun replace RAID with RAID (Score:2)
What a stupid and misleading title. You can, and I suspect most people will, use RAID with these boxes. RAID-Z more than likely, though other types of RAID are possible too. It is not a RAID-less box, it's a box without a dedicated RAID controller.
Re:Sun replace RAID with RAID (Score:5, Funny)
"Sun replace RAID with RAID"
No, they replaced it with "RAD"; they took the "I" right out of it.
Re:Sun replace RAID with RAID (Score:4, Funny)
No, they replaced it with "RAD"; they took the "I" right out of it.
It's all fun and games until someone loses an "I".
How well does it integrate in a M$ environment? (Score:2)
Re: (Score:2, Informative)
It supports active directory, and user mapping between AD and LDAP. The CIFS stack is in-kernel.
No RAID? (Score:5, Informative)
"All of the new unified storage systems include comprehensive data services at no extra cost, Fowler said. These include snapshots/cloning, restores, mirroring, RAID-5, RAID-6, replication, active-active clustering, compression, thin provisioning, CIFS (Common Internet File System), NFS (Network File System), iSCSI, HTTP/FTP and WebDAV (Web-based Distributed Authoring and Versioning)."
Note that this system includes "RAID".
Re: (Score:3, Funny)
"All of the new unified storage systems include comprehensive data services at no extra cost, Fowler said. These include snapshots/cloning, restores, mirroring, RAID-5, RAID-6, replication, active-active clustering, compression, thin provisioning, CIFS (Common Internet File System), NFS (Network File System), iSCSI, HTTP/FTP and WebDAV (Web-based Distributed Authoring and Versioning)."
Note that this system includes "RAID".
(overheard in the Sun IT break room)
"You know that fucking clueless Marketing guy? Yeah, he asked me to write up something for the new RAID-free array. Heh, I hooked him up."
Sun's storage strategy (Score:5, Interesting)
Considering that they've purchased MySQL, StorageTek and Cluster File Systems (of Lustre fame), developed ZFS, implemented CIFS in OpenSolaris from scratch (not Samba-based), participated in NFSv4 and built the Thumper, these machines hardly come as a surprise.
For the last two years, almost all their moves have been aimed at one goal: enter the storage market from a non-conventional angle. They want to do it unconventionally because they know that storage, more than anything else, is becoming THE commodity, and today's toys won't cut it. Plus, at this point, all the mainstream storage vendors have difficulty tapping the low end. They may be able to sell their expensive products to clients with deep pockets, but for small businesses it's a different story. Not to mention that they are unwilling to reinvent themselves. OTOH, with all these inventions Sun may be trying to do what it did with workstations when it started in the 80s: start low and move up. It remains to be seen whether they can pull it off.
Re:Sun's storage strategy (Score:4, Informative)
Sun's CIFS wasn't reimplemented from scratch by Sun; it was code they got from their Procom acquisition. It remains to be seen whether putting a CIFS server into an otherwise stable kernel is a good idea or not :-).
Jeremy.
Re: (Score:2)
Plus, at this point, all the mainstream storage vendors have difficulty tapping the low end. They may be able to sell their expensive products to clients with deep pockets, but for small businesses it's a different story.
This doesn't seem like the "low end" for small business to me. Someone up the page quoted that their cheapest model is $11k for 2TB. You should be able to get >10TB of disk space for that price.
I'm not trying to say that Sun is a bad value. You might get some really great features for all that extra money. I wouldn't know because it's not worth investigating at those prices. There's no way I could justify spending $11k for 2TB.
What Everyone is Missing (Score:4, Informative)
This system will intelligently move the data around to put frequently accessed bits on the SSDs. This is a lot more than a 2u server with a few TB drives in a raid 10.
Re: (Score:3, Informative)
Sun rocks.
Real engineering here.
Re: (Score:2)
SSD drives can usually detect they are part of a cluster of traditional disks, and can request that data to be written to the traditional disks in the cluster be written to them instead, to improve I/O?
Re: (Score:2)
Not according to the Intel X25-E specs...
Care to provide some proof? I don't believe for a second that an SSD drive can intelligently copy bits from a hard-drive. It would violate the way storage drivers work. The system does this, not the drive.
Re: (Score:2)
I apologize if I jumped the gun, but that wasn't clear from his post.
Zettabyte? (Score:3, Informative)
ZFS doesn't stand for zettabyte anything. "The name originally stood for "Zettabyte File System", but is now an orphan acronym." from wikipedia, sourced from http://blogs.sun.com/bonwick/entry/you_say_zeta_i_say [sun.com] .
and of course "RAID array" is delightfully redundant phrasing, since the "A" already stands for "array".
Re: (Score:2)
It could be a cheap, redundant, RAID array of drives :P
Re: (Score:2)
Should have just called it the Zinc File System. Then when people ask, tell em it's cause Zinc is good for you.
For the love of FSM... (Score:5, Informative)
That's not even apples and oranges, it's more like apples and redwoods.
Last I checked Netapp was still charging $10,000 per TB! [dedupecalc.com] Do you really think there is no reason for this?
Re:For the love of FSM... (Score:4, Insightful)
I hate to say it, but for the small business market, they should be compared. If you're selling a 2TB redundant storage device to a small business without a huge IT department, then you're competing against what can be built from commodity parts (aka, crap from Newegg + Linux + RAID) because often cost, not performance, is the defining factor.
Re: (Score:2)
Last I checked Netapp was still charging $10,000 per TB! Do you really think there is no reason for this?
Of course there's a reason for this...the CEO needs a new private jet.
Really, the profit in the storage arena is insane. Part of that is because the marketing convinces management they need 50,000 IOPS to run what is essentially a flat-file database. Add in the "reliability" and "support" (which generally means that as long as you don't mind being told a replacement part won't be in for a week, and as long as you follow standard backup strategies, you probably won't actually lose any data), and it's pretty much
RAID-Less how??? (Score:5, Insightful)
The third one I believe--the rest I'm skeptical about...
Re:RAID-Less how??? (Score:4, Funny)
Maybe instead of an array of disks it is a hash or maybe even a linked list.
Re: (Score:2)
It's a RAEP solution, which sounds like it's probably illegal.
A RAEP-ier solution would be even worse. http://en.wikipedia.org/wiki/The_Xtacles [wikipedia.org]
Re: (Score:2)
Okay, I've still got a couple of points left. Just let me mod you up now.
Re: (Score:2)
Crap.
Re: (Score:2)
The same way you can have computers around the world that are international business machines, but you can't call them IBM computers. RAID is a branding term as much as a redundancy system. If you're using something that's noticeably different, you need to say so.
RAID-less? Not so! RTFA! (Score:2)
FTFA:
So, these RAID-less devices all include optional RAID-5 and optional RAID-6?
Putting the RAID as part of
oh ok... (Score:4, Interesting)
Also, to those who say small businesses can't afford this: it's really an option. Some places like open-source hodgepodges of hardware, and some don't, because their small business generates enough money that it's worth investing in enterprise-class hardware with gold four-hour response from a solid company with a history of UNIX experience and Solaris integration.
Also, said Fortune 500 companies get massive discounts, as what you're seeing is retail price.
this is not competing with the diy market (Score:5, Informative)
The goal of this product is to compete with Netapp. If you've ever experienced Netapp licensing/pricing, this Sun solution is a bargain. People seem to be forgetting that this is a storage appliance.
Re: (Score:2)
While that may be true, you're hardly comparing apples to apples. The entry level 2TB model has 14 146GB 10K RPM SAS drives.
You'd still be able to whitebox it for a lot cheaper, but not 4x1TB SATA cheap.
It was also mentioned on the pre-announcement discussion that some people at Sun wanted to price it lower, but internally the powers that be didn't want their hardware to look "cheap". As such, prices went up. The good news is that supposedly the VARs will have some room to play with on the pricing. Not
Re:Looks great.. but (Score:5, Informative)
Will that $600 box be using 14 146 GB 10k RPM SAS disks?
These boxes aren't about providing stupid storage; they're about providing massive I/O throughput. The larger boxes scale to 44TB and 576TB respectively. The system also automatically moves frequently accessed data to flash drives (and RAM) for even faster I/O.
These are absolutely monstrous compared to anything you could build for $600. There seems to be quite a bit of custom hardware to power this setup.
Re: (Score:2)
Second-level ARC is standard in recent ZFS; you could just plonk some X25-Ms into your X4240 [sun.com], attach a disk shelf [compaq.com] to it, configure ZFS to use the SSDs as secondary ARC, and pretty much have something like what Sun are selling.
You know, just with less vendor support, and more effort involved in building, configuring, tuning and testing. If you come out of it with change from $10k, you probably earned it with the effort you put in.
Re: (Score:2)
You should use a mix of SLC and MLC. MLC for the frequent read, infrequent write, SLC for the frequent write.
There is more underneath the covers than meets the eye.
Re: (Score:3, Informative)
With that said, Linux REALLY needs ZFS, and not in userspace.
Due to deliberate licensing issues we won't have native ZFS in Linux any time soon. However, BtrFS [kernel.org] should be merging into the mainline kernel soon enough (~2.6.29), and it includes most of ZFS's features plus a few of its own: storage pools, checksumming, mutable snapshots, built-in extent-level striping and mirroring, etc. It even supports in-place, reversible conversion from ext3 via a copy-on-write snapshot.
Re: (Score:3, Interesting)
It's funny how this viewpoint is always the one promoted on slashdot. One could argue that the Linux GPL is the problem. FreeBSD and Mac OS X had no problem integrating ZFS into their code precisely because the ZFS license (CDDL) allowed it.
Re: (Score:3, Insightful)
Yes yes yes, you can do that with just $1000 and an afternoon at Fry's or browsing Newegg, right?
Everyone's missing the point here, and a lot of what is being said could be applied (just as wrongly) to NetApp... after all, those are just x86 boxes running a BSD kernel.
The special sauce here is not so much the underlying OpenSolaris OS (which does provide the I/O and services such as CIFS, NFS, iSCSI, data replication, and so on) but the Fishworks software put on top of it. Built-in failover clustering, the int
Re: (Score:2)
For anyone interested, when I registered to allow me to do the download, it took the Sun server 30 seconds from the time I hit the "Register" button until it returned the result page.
I guess they aren't using the bad-ass storage they are trying to sell.
Re: (Score:2)
This is BS. Clearly.
You have certainly never done this yourself.
First of all, the P800 is a PoS for anything but the included RAID5 or 6 (we haven't even tested RAID6, IIRC).
It has a maximum number of logical disks it can create, and you will most likely have to reboot the server and go into the Array Manager to set up another "array" (single disk). You can't use the RAID on the card, because ZFS wants to control the disks themselves, without a RAID controller in between (and ideally no cache RAM).
My co-worker's be
Re: (Score:2)
Linux does not have ZFS.
Re: (Score:2)
It does, using Fuse. If OP used ZFS via Fuse for production boxes, he's a bigger moron than I thought possible.
Re: (Score:2)
You have certainly never done this yourself.
Or maybe he's still on his 1st or 2nd home-built RAID. In my experience techs tend to build maybe 3 or 4, and be around a while to see the problems, before they say "fuck this" and buy from a vendor.
Home-built RAID is great for home use and as a learning experience. Personally, I'd go with an Enhance UltraStor RS16-JS SAS and an Areca ARC-1680ix, but that's just me, and because I use Macs at home. And I don't really mind it taking 16 hours to rebuild. Oh no, I can't stream my movies at full speed today. Try t
Re: (Score:2)
When you lose a disk ZFS has to rebuild and unless your system was over-engineered to begin with, there will be performance degradation. But now there's a pretty graph showing you how big the degradation is.
Re:DL180/185 (Score:5, Interesting)
Guh. Sorry. I'm tired, and re-reading my comment, the English is well-formed but the concepts are jumbled nonsense. Let me try again, by your leave...
Yes, it's unavoidable to rebuild when you lose a disk, and there will be a performance hit unless you go for full on 100% redundancy, and not many companies can afford to do that with a lot of data.
ZFS offers a number of benefits, though, in the event of drive failure-triggered rebuild, in that it basically knows where the data is and only bothers with that. A hardware controller has no idea what's data and what is blank space and so just redoes everything. In theory, assuming the MB/s of rebuild is the same, a ZFS rebuild of a half-full array should take half the time of a traditional controller.
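Rough numbers on what that buys you, assuming (purely for illustration) that both kinds of rebuild move data at the same sustained 100 MB/s:

drive_tb = 1.0
used_fraction = 0.5                 # array is half full
rebuild_mb_per_s = 100.0            # assumed, identical for both cases

hw_raid_hours = drive_tb * 1e6 / rebuild_mb_per_s / 3600
zfs_hours = drive_tb * used_fraction * 1e6 / rebuild_mb_per_s / 3600
print(round(hw_raid_hours, 1), round(zfs_hours, 1))   # ~2.8 vs ~1.4 hours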
It is also much more intelligent about *what* it rebuilds, starting at the top and then descending down the FS tree, marking it as known good along the way. This means that if a second drive fails halfway through the resync, instead of a catastrophic failure you still have the data up to the point of failure.
I can't remember where I read that; maybe here: http://blogs.sun.com/bonwick/entry/smokin_mirrors [sun.com]
But I didn't even want to talk about drive-failure rebuilding; what I actually wanted to say is that ZFS is, in theory, less likely to get itself into an inconsistent state in the case of power fluctuations, controller RAM failures, drive failures with pending writes, that kind of thing. That's the kind of rebuild I meant: after some kind of catastrophic failure. I should probably have said "integrity checking" though.
By design, ZFS never holds critical data in memory only and so at least in theory should always be consistent on-disk. Basically it shouldn't need to fsck. That is a giant advantage to me, if it turns out to be as good in reality as it sounds on paper. Of course, that also has a lot to do with the capabilities of the FS proper, but removing the evil, evil HW controllers from the picture can only be a plus.
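A toy model of that idea, for anyone curious; this is NOT ZFS's actual code, just a sketch of the "write new blocks first, then flip a single root pointer" pattern that makes fsck unnecessary in principle:

class ToyCowStore:
    # Minimal copy-on-write store: data blocks are append-only, and the
    # single 'root' pointer is only moved after the new data is written.
    def __init__(self):
        self.blocks = {}    # block_id -> data
        self.root = None    # defines the current consistent state
        self.next_id = 0

    def write(self, data):
        new_id = self.next_id
        self.next_id += 1
        self.blocks[new_id] = data   # step 1: write new data to fresh blocks
        # (a real system would flush these blocks to stable storage here)
        self.root = new_id           # step 2: atomically repoint the root
        # A crash before step 2 just leaves unreferenced blocks behind;
        # the old root still describes a fully consistent state.

store = ToyCowStore()
store.write("old contents")
store.write("new contents")   # the old version is untouched until the pointer flips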
I don't know why, but RAID controllers are the most unreliable pieces of hardware I have ever known, besides the drives themselves (but at least they are consistent and expected to fail). Get a few of them together and something WILL go wrong, more often than not in a horrible and unexpected way. When some RAM goes bad in a HW RAID controller you are in for a whole lot of subtle, silent-error-prone fun. Anything that gets the HW controllers out of the picture is a win for me.
And don't even mention the batteries in HW raid controllers. They are the wrong solution to the power failure problem, especially since it's always after a failure that a disk will decide it's had enough of spinning and would just like to sit still for a while, thank you very much. Drive failure with pending writes! Exactly the words every administrator wants to hear. Almost as good as power failure with pending writes. Combine the two (highly likely!) for maximum LULZ. Ok, this is turning into a rant, I better stop.
Anyway, thanks for the corrections. My original comment (and probably this one) came across as a confused mess upon re-reading .. sorry .. will sleep now : )
Re: (Score:2)
And furthermore, everyone knows Sun RRPs are just there to make the discounts they then offer you look better. No-one pays more than 50-80% of that if you buy more than a couple of things at a time and you have a decent VAR.
Oh man, you haven't seen IBM work that game. There's retail. Then there's preferred which is about 1/2 X. Then there's the "just because we love you discount" bringing things down to about 2/5 X. Still above reasonable, but, hey, senior management never got fired for buying IBM right?
Then, next year, you find out what the annual support and maintenance cost. 20%. Of the freaking *RETAIL* price.
Re: (Score:2)
I did a benchmark comparison of the HP P400 in a DL185 vs a simple LSI 1068E SAS jbod controller. (HP branded of course) HP said it wouldn't work with the 1068E in the 12x DL185 expander setup, but of course it only took me about 10min of looking at the LSI website to find the Initiator Target firmware and get it fixed.
Basically my benchmark showed that in all cases the JBOD setup with linux software raid was 10-50% faster than the P400 controller.
They didn't ship a BBU for the P400 so it wouldn't do any
Re: (Score:2)
I'm not sure which part of ZFS is considered "old". Glad RAID is working out for you. Be happy that you have not hit any of the issues that RAID has.
But we're in the terabyte-size-drive age now. If all you can do is RAID, your data is going to go bye-bye. You need ZFS. Go google the intro paper they wrote on why to use ZFS.