Sun Unveils Thumper Data Storage 285
zdzichu writes "At today's press conference, Sun Microsystems is showing off a few new systems. One of them is the Sun Fire X4500, previously known under the 'Thumper' codename. It's a compact dual-Opteron rack server, 4U high, packed with 48 SATA-II drives. Yes, at a time when the standard for a 4U server is four to eight hard disks, Thumper delivers forty-eight HDDs with 24 TB of raw storage. And that capacity will double within the year, once 1 TB drives go on sale. More information is also available at Jonathan Schwartz's blog."
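The headline arithmetic is easy to check. A quick sketch in Python, assuming decimal terabytes and the 500 GB drives shipping at launch:

```python
# Sanity check of the summary's numbers; 500 GB per drive is the launch config.
drives = 48
gb_per_drive = 500
print(drives * gb_per_drive / 1000)  # 24.0 TB raw today
print(drives * 1000 / 1000)          # 48.0 TB once 1 TB drives are available
```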
:O (Score:4, Funny)
Twitterpated. (Score:4, Funny)
Orly Owl: Why, don't you know? He's twitterpated.
Thumper: Twitterpated?
Orly Owl: Yes. Nearly everybody gets twitterpated in the Thumper room. For example: You're walking along, minding your own business. You're looking neither to the left, nor to the right, when all of a sudden you run smack into a pretty rack holding 24 TB of pretty disks! Woo-woo!
Re::O (Score:2)
Re::O (Score:5, Interesting)
Re::O (Score:5, Funny)
Re::O (Score:3, Insightful)
I second the motion (Score:2)
Re::O (Score:2)
Re::O (Score:2)
TNG= 7 seasons
DS9= 7 seasons
VOY= 7 seasons
ENT= 4 seasons
Re::O (Score:2)
Re::O (Score:2)
Re::O (Score:2)
Re::O (Score:2)
I keep it on all the servers on the internet.
( Apologies to Steven Wright ).
I want one! (Score:3, Interesting)
Re:I want one! (Score:5, Informative)
48 HDs, 2 CPUs, and still less than 1200 watts.
Oh man. A data farm in a single rack.
Re:I want one! (Score:2)
I doubt this is the case though. Sun tends to make pretty good hardware. At least that's my limited experience.
Re:I want one! (Score:3, Interesting)
Re:I want one! (Score:3, Informative)
Re:I want one! (Score:2)
Re:I want one! (Score:3, Insightful)
Re:I want one! (Score:2)
Re:I want one! (Score:2)
I am suddenly beginning to doubt that greenhouse gases are responsible for global warming. Al Gore needs to make a movie about 48-drive servers.
Interesting (Score:3, Funny)
Hey, honey - remember how I said I wanted to store *all* the movies on the server? Get a load of this
Holy SHIT! (Score:2)
Did you see how tightly packed the drives were? Is heat a concern or is there a tornado cooling system in place?
Re:Holy SHIT! (Score:2, Insightful)
Re:Holy SHIT! (Score:2, Insightful)
Re:Holy SHIT! (Score:5, Informative)
It's 48 HDs in a 4U case. 48 HDs is about 600 W under full load.
If you compare this to the fact that there are dual-socket, dual-core servers out there that push 300 W through a 1U case, that's nothing.
Also, a 4U case allows the use of nice fat 12 cm fans in the front, while the horizontal backplane allows for free airflow (in contrast to the vertical ones used before).
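A rough way to check the parent's power figures; the per-drive draw below is an assumed value, typical of 7200 RPM SATA drives of that era:

```python
# Back-of-envelope power estimate; per-drive wattage is an assumption.
drives = 48
watts_per_drive = 12.5
disk_watts = drives * watts_per_drive
print(disk_watts)          # 600.0 W under full load, matching the parent
print(disk_watts / 1200)   # 0.5 -- disks are about half of the <1200 W budget
```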
Re:Holy SHIT! (Score:3, Interesting)
Meanwhile, the x4600 (an 8-socket dual-core Opteron system) apparently does use two 12 cm fans.
With all those disks, I suppose it might not make much difference, but I would rather have seen them use 12 cm fans on the x4500 as well.
Re:Holy SHIT! (Score:3, Interesting)
$42,000 (Score:2)
Doesn't sound like much... but that's $42,000 for the top 24 TB model.
Perhaps it's time to start using "per TB" costs for these things. Surely no one sells sub-terabyte storage servers anymore.
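Normalizing the sticker price is a one-liner; a quick sketch using the $42,000 / 24 TB figures quoted above:

```python
# Cost per TB and per GB for the top model (decimal units assumed).
price_usd = 42_000
raw_tb = 24
print(price_usd / raw_tb)           # $1750.0 per raw TB
print(price_usd / (raw_tb * 1000))  # $1.75 per GB
```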
Re:$42,000 (Score:2)
Most "storage" servers sold today have less than a terabyte
of capacity.
Re:$42,000 (Score:2)
Re:$42,000 (Score:2, Interesting)
If I didn't, only a fool would buy the more expensive version. Just go in for the cheap array, purchase 750 GB drives yourself, and re-sell the originals.
Re:Indeed, Sun's list prices are way too high (Score:2, Insightful)
Re:Indeed, Sun's list prices are way too high (Score:4, Insightful)
Another problem is vibration. If you don't have a good mounting scheme for all these disks, cross-drive vibration will adversely affect not only performance, but MTBF as well.
Lastly, what about performance? I've seen this machine sustain raw access to the disks at 3 GB/s.
That's *bytes*. Through the filesystem (ZFS), you get close to 2 GB/s if you're careful. The machine has 10 fully independent PCI buses inside - not a bottleneck in sight. Let's see the PCI bridge of your $500 mobo take that.
Once you do all of this, you're not $1/GB anymore, you don't fit in 4 RU anymore, and you certainly won't get the same performance. So I think that to build a similar box, there's no way you can significantly beat the price. Plus, you have to remember that almost nobody pays Sun's list price. Most VARs that sell Sun gear will give you a good discount. Comparing Sun list price to We-won't-be-here-next-week computers is not a valid comparison, either.
Re:Indeed, Sun's list prices are way too high (Score:2)
Here:
http://www.rackmountnet.com/Rackmount-Chassis/5U-Rackmount-Chassis/5U-Rackmount-Chassis-48-Multilane-SATA-hot-swap-34-depth-1350W-redundant-power-supply-RMC5D/ [rackmountnet.com]
But you need 1 RU more. It's 5 RU.
Amazing that Sun (or Andy B.'s Kealia) was able to cram it into 4 RU.
Re:Indeed, Sun's list prices are way too high (Score:3, Interesting)
The disks would go in the chassis (see my itemized list). You may not know it, but Sun is not the first company to use a chassis with vertical bays. Here is one example [datadirectnet.com] among many. The price would more likely be around 2 or 3 grand, by the way, instead of 1 grand. But anyway, this doesn't change the fact that this Sun box is way overpriced, even with a good 40% discount.
Regarding the mobo, just pick one with two AMD 8131 or 8132 PCI-X bridges. This will give you 4 independent PCI-X busses. The two PCI-X
Re:Indeed, Sun's list prices are way too high (Score:3, Insightful)
Re:Indeed, Sun's list prices are way too high (Score:3)
OK. Yours works out to be 3x larger than the 4U the Sun comes in. Did you have some other math for me to do?
As far as external SATA goes, I was wrong. I'm OK with that sometimes in a congenial conversation. It's a little different to say "I may be wrong" than posting with IANAL. This is
Yeah, cabling arbi
Re:$42,000 (Score:2)
$42,000 for 24TB is dirt cheap.
It is a pretty good price, but not insane. Apple's cheap RAID servers cost about $1.85 per gig. (I know, Apple + low price = crazy.) Mind you, you'd need 8U, not 4U, to fit them. That is really where I see the advantage here for companies that need more storage but have real space constraints.
Okay... (Score:5, Funny)
Re:Okay... (Score:4, Funny)
Re:Okay... (Score:5, Funny)
Re:Okay... (Score:2)
Re:Okay... (Score:2)
Thumpers have the opposite effect on the maker, IIRC.
Ah, yes...a machine labelled "Sun"... appropriate! (Score:2)
Really snazzy tech, but that's a lot of moving parts in a little space... and probably too hot to touch. Could you imagine the cooling required for a densely packed data center of these things?
Or am I way off base here?
Re:Ah, yes...a machine labelled "Sun"... appropria (Score:3, Interesting)
Dune.. (Score:4, Funny)
Re:Dune.. (Score:2)
Re:Dune.. (Score:5, Funny)
cooling (Score:3, Interesting)
Re:cooling (Score:2)
I'd be interested to see how your actual overall power consumption within a rack and within a data center is affected by this thing.
Re:cooling (Score:2)
Re:cooling (Score:2)
Ok. Now if I could just afford it. (Score:2)
Re:Ok. Now if I could just afford it. (Score:2)
NAS servers [elitepc.com]
Nowhere near those Suns in capacity or performance, but they are less expensive.
ATA over Ethernet (Score:2)
Re:ATA over Ethernet (Score:2)
http://www.opensolaris.org/os/project/iscsitgt/ [opensolaris.org]
Wow (Score:2, Funny)
And if MY math is right (Score:2)
That brings me back... (Score:2)
Re:Wow (Score:2)
Holy shit, this can easily be one 1024x768 jpeg image per person! If we automatically throw out the dudes and everyone under 18* or over 35**, there would be enough space for a small gallery!
--
*, ** - adjust to your preference; further filtering depends on available data
Re:Wow (Score:3, Funny)
Variable redundancy? (Score:5, Insightful)
It would be nice if the system had a setting where you could transparently specify a redundancy factor at the expense of capacity. For example, I could set a ratio of 1:3 where each bit is stored on three separate disks. This ratio could scale up to the number of disks in the system. And of course, little red lights appear on failed disks, at which point you simply swap them out and everything operates as if nothing happened (duh). Sure, we have a degree of this already, but managing redundant arrays is still a very manual process, and when we start talking about tens or soon hundreds of terabytes, increased automation becomes a necessity.
Re:Variable redundancy? (Score:4, Informative)
It makes managing this sort of storage box a snap, and allows you to dial the level of redundancy up or down by using either mirroring (2-way, 3-way, or more) or RAIDZ. And soon, RAIDZ2.
Additionally, Solaris running on the machine has fault management support for the drives, can work with the SMART data to predict drive failures, and exposes the drives to inspection via IPMI and other management interfaces. Fault LEDs light up when drives experience failures, making them a snap to find and replace.
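A toy model of the capacity/redundancy dial the parent describes; it ignores ZFS metadata overhead and assumes the 48 drives divide evenly into groups, so treat the numbers as illustrative only:

```python
# Usable capacity under different redundancy schemes (illustrative only).
DISKS, SIZE_TB = 48, 0.5   # 48 x 500 GB drives

def mirrored(ways):
    """N-way mirroring: 1/N of raw space is usable."""
    return DISKS * SIZE_TB / ways

def raidz(group, parity=1):
    """RAIDZ-style groups of `group` disks, `parity` parity disks each."""
    return (DISKS // group) * (group - parity) * SIZE_TB

print(mirrored(2))          # 12.0 TB usable, survives 1 failure per pair
print(mirrored(3))          #  8.0 TB usable, survives 2 failures per trio
print(raidz(8))             # 21.0 TB usable, single parity
print(raidz(8, parity=2))   # 18.0 TB usable, RAIDZ2-style double parity
```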
Re:Variable redundancy? (Score:4, Informative)
Re:Variable redundancy? (Score:2)
Is that a missing zero, or have I misunderstood something?
(I'm really not sniping, it sounds interesting)
Re:Variable redundancy? (Score:2)
Re:Variable redundancy? (Score:2)
I was thinking about that. With that case design, you have to pull the entire server and pop off the cover to yank one drive. I couldn't tell from the pictures how the enclosures worked, either; handles didn't seem evident. The idea is interesting, though. It almost looks like they could even squeeze it down to 2U with some creative cooling.
Since they have 12-ro
Re:Variable redundancy? (Score:2)
Pfft... (Score:2)
Re:Pfft... (Score:3, Interesting)
Re:Pfft... (Score:2)
The return of Sparc Storage Array (Score:2)
Considering.... (Score:2)
Is it even possible (even on Slashdot) (Score:2)
Sorry, couldn't resist; I'm usually about a day late for that particular well-worn meme.
24TB for $70k (Sun) or 24TB for $16k (generic) (Score:3, Interesting)
But, unfortunately, they're not quite as cheap as I had thought. (Friend on the inside thought Sun was going to price them at $1.25 per GB, not $2 per GB)
Instead, we've been using these. Very good cooling:
http://www.rackmountpro.com/productpage.php?prodi
32 SATA-II 750 GB drives = 24 TB, the same as the Sun X4500, but for only $16,000 for the entire system (chassis, mobo, RAM, drives) instead of $70,000 for the Sun Thumper. Huge difference, especially if you're ordering many of them.
Re:24TB for $70k (Sun) or 24TB for $16k (generic) (Score:2)
Re:24TB for $70k (Sun) or 24TB for $16k (generic) (Score:3, Interesting)
Re:24TB for $70k (Sun) or 24TB for $16k (generic) (Score:3, Insightful)
Also, the one you're linking to is a 7U unit, whereas Sun's is a 4U unit. IOW you can mount, I think, 6 units from Rackmount or 10 units from Sun, for 144 TB/rack vs. 240 TB/rack. (That's with a 42U rack, which I believe is standard.)
I won't get into anything wrt serviceability, management, etc., as I've absolu
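The rack math in the parent is easy to reproduce, assuming a 42U rack with no space reserved for switches:

```python
# Units and raw TB per 42U rack for each chassis height (42U assumed).
rack_u = 42
for name, unit_u, tb in (("7U generic", 7, 24), ("4U Sun X4500", 4, 24)):
    units = rack_u // unit_u
    print(f"{name}: {units} units, {units * tb} TB per rack")
# 7U generic: 6 units, 144 TB; 4U Sun X4500: 10 units, 240 TB
```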
Re:24TB for $70k (Sun) or 24TB for $16k (generic) (Score:5, Insightful)
Unfortunately, with a generic motherboard and an off-the-shelf SATA RAID controller, good luck fixing the thing when a drive fails. What's that? The RAID controller is reporting a bad drive, but you have no idea which drive it is because there's no way to light it up without shutting down the server and going into the RAID controller BIOS and telling it to flash the drive light?
Tough luck. There is a reason why Sun is a little more expensive: RAS. RAS is Sun's main hardware principle. It stands for Reliability, Availability, and Serviceability. Sun hardware is truly built with these concepts in mind. Concepts like: A failed component should trigger a visible alert (warning light), as well as a human readable syslog message that calls out the exact part that failed. You will never see these things in a self-built beige box without some serious hardware hacking on your own, and at that point, you might as well hire a team of EEs to reinvent the wheel.
Congratulations, Jonathan! (Score:2)
uh oh (Score:2)
ZFS (Score:5, Insightful)
ZFS blurs the traditional boundaries between volume management, RAID and file systems. All disks are added into one big pool that can be carved out into either the native ZFS filesystem format or virtual volumes that can be formatted as other filesystem formats. It has many other interesting features like instantaneous snapshots and copy-on-write clones.
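A toy sketch of the copy-on-write idea behind those instantaneous snapshots; this is a simplification of the concept, not ZFS's real data structures:

```python
# Copy-on-write in miniature: blocks are never overwritten in place, so a
# snapshot is just a frozen set of references -- taking one costs almost nothing.
class CowStore:
    def __init__(self):
        self.blocks = {}   # block id -> data
        self.live = {}     # name -> block id
        self.next_id = 0

    def write(self, name, data):
        self.blocks[self.next_id] = data   # always allocate a fresh block
        self.live[name] = self.next_id
        self.next_id += 1

    def snapshot(self):
        return dict(self.live)             # copy references, not data

s = CowStore()
s.write("file", "v1")
snap = s.snapshot()                        # effectively instantaneous
s.write("file", "v2")                      # old block left untouched
print(s.blocks[s.live["file"]])            # v2
print(s.blocks[snap["file"]])              # v1, still readable via the snapshot
```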
Re:ZFS (Score:2)
Two and a Half Libraries of Congress (Score:3, Funny)
Crazy (Score:3, Funny)
Beware of the 2.5" disk drives (Score:3, Informative)
Re:Beware of the 2.5" disk drives (Score:2)
See my benchmarks [spod.cx] - I might get around to doing one of the X4100s we've got for comparison...
Bad idea from a storage management point of view (Score:4, Interesting)
If you are interested in storage consolidation and increasing utilization while reducing storage islands, this isn't for you.
With 48 disks, you'll want protection... all implemented in software RAID. So you do RAID-5 and probably create RAID groups of 12 disks? 8 disks? As the number of disks in the RAID group goes down, the amount of disk you waste on parity goes up, as do the CPU cycles spent calculating parity.
As the industry moves to FC boot and iSCSI boot to alleviate the need to stock disk drives from 15 different vendors, this is an interesting idea for those who don't want to have a RAID array. But in most shops, huge internal storage is sooooo '90s.
How do you replicate this beast? Veritas Volume Replicator. Serverless backup? Nope.
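The parity-overhead point above is easy to quantify for single-parity RAID-5 groups:

```python
# Fraction of raw space consumed by parity as RAID-5 group width shrinks.
for group in (4, 8, 12, 24):
    print(f"{group}-disk groups: {1 / group:.1%} of raw space is parity")
# 4-disk: 25.0%, 8-disk: 12.5%, 12-disk: 8.3%, 24-disk: 4.2%
```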
Re:Bad idea from a storage management point of vie (Score:4, Informative)
Re:Bad idea from a storage management point of vie (Score:2)
Re:Bad idea from a storage management point of vie (Score:3, Informative)
What 4U "standard" (Score:3, Informative)
Bullpucky. Maybe on your planet. A PC 4U NAS box in my world holds 24 SATA HDDs. Oh, you mean a standard 4U server... which usually means a quad-CPU box with 4 GB of RAM and a couple of fugly FC controllers. See, your problem is that Thumper is for storage, where the 4U form factor is for drives, and the standard is more like 12 to 24.
</flame>
Re:Software RAID only, plus 7200 RPM no10k or 15k (Score:4, Informative)
The old-time "big-ticket" item was checksum calculation, but that is now an also-ran. Distributing the I/O? Software can do it as well as hardware.
Both hardware and software have to be familiar with the blocking factor.
Where software wins is that it can be aware of blocks that have never been used (or are not presently in use) and skip the reads needed to fill them, which hardware RAID controllers cannot avoid doing.
The idea is to tie the RAID more tightly into the filesystem.
As to lower-speed drives -- did you count the heads? Each is active at the same time. Yes, an individual I/O would complete faster with 10k or 15k spin, but the total throughput is based on the number of heads. For RAID-5, reading multiple blocks will give you pretty much all the read performance you can stomach.
Write performance for an individual write operation would be improved, but generally application buffering deals with it. The trade-off is number of heads, spin rate, and heat. The right balance? For you: write performance up and, keeping heat constant, number of heads down (I presume you are dealing with transactional loads, with commits). For me, it tends to go the other way (my workload is general storage, with a bit of database).
As always, YMMV
Ratboy
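A sketch of the heads-vs-spin-rate trade-off described above; the per-drive streaming rates are assumptions roughly in line with 2006-era drives:

```python
# Aggregate streaming throughput scales with spindle count...
def aggregate_mb_s(drives, per_drive_mb_s):
    return drives * per_drive_mb_s

print(aggregate_mb_s(48, 60))   # 2880 MB/s: many slow 7200 RPM spindles
print(aggregate_mb_s(16, 90))   # 1440 MB/s: fewer 15k spindles, despite each being faster

# ...while per-I/O latency is what spin rate actually buys you.
for rpm in (7200, 10000, 15000):
    print(rpm, round(30000 / rpm, 2), "ms average rotational latency")
```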
Re:Software RAID only, plus 7200 RPM no10k or 15k (Score:4, Informative)
From this box, one can serve out file systems with NFS and/or SMB/CIFS (aka a traditional NAS), and in future releases of Solaris 10, also serve out LUNs over iSCSI and FCP while having all that data backed by the performance, reliability, and features of ZFS. The only thing it's missing is a consolidated, centralized CLI for manipulating storage, a la NetApp and ONTAP... but all the requisite pieces are there to turn Solaris, and especially Solaris-on-Thumper, into a NetApp killer at less cost.
Re:Software RAID only, plus 7200 RPM no10k or 15k (Score:3, Informative)
The other hidden advantage here is storage density. If for some reason you needed 1PB of data storage in as small a space as possible, this is a big win for you. You would need about 45 of these servers to get 1PB of capacity. That would fit nicely into less than 5 racks of space, with room to spare for your networking and monitoring gear. A 1PB EMC Symmetrix is going to be a _LOT_ bigger.
No other storage platform has higher density (that I am aware of). Power use is good but not a
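Reproducing the petabyte math above, with the parent's ~45-server figure read as the raw minimum plus a little headroom for redundancy (the exact padding is my assumption):

```python
import math

tb_per_server = 24
servers_raw = math.ceil(1000 / tb_per_server)  # 42 servers for 1 PB raw
servers = 45                                    # parent's figure, with headroom
rack_units = servers * 4                        # 4U per server -> 180U
print(servers_raw, rack_units, round(rack_units / 42, 1))  # ~4.3 x 42U racks
```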
Re:Software RAID only, plus 7200 RPM no10k or 15k (Score:3, Insightful)
Re:Vista (Score:2)
Re:sun infatuated with sw-raid ? (Score:5, Informative)
I can give you a few reasons they might. Having been through some hardware RAID nightmares I have first hand experience with a few of them.
HW RAID makes you dependent upon the manufacturer of the card both for the RAID implementation and for drivers. We once had a couple of hardware RAID cards managing a large (at the time) RAID 0+1 array that would occasionally glitch and fail a drive or two (or occasionally every drive on the controller). The driver and monitoring daemon wouldn't report anything until a second drive failed. Despite battery backup on the card cache, a single drive failure would often corrupt the data on the mirrored drive. The manufacturer was nowhere to be found when we requested updates or bug fixes.
We eventually switched to software RAID and found that in addition to making the array reliable, it improved our performance. This was in part because the 6 CPUs on the machine were significantly faster than the 25 MHz i960 managing the RAID cards. We could also mirror across controllers on the 4 separate PCI buses, which gets rid of a major bottleneck (the I/O on a PCI bus can easily be saturated by a few drives).
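A back-of-envelope check on that PCI bottleneck claim; classic 32-bit/33 MHz PCI peaks near 133 MB/s shared across the whole bus, and the per-drive rate below is an assumption:

```python
# How many drives does it take to saturate one classic PCI bus?
pci_peak_mb_s = 133   # 32-bit / 33 MHz PCI, theoretical shared peak
drive_mb_s = 40       # assumed sustained rate per drive of that era
print(pci_peak_mb_s / drive_mb_s)   # ~3.3 drives fill the bus
```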
There are other benefits to being able to RAID across controllers. A RAID controller is a single point of failure. If a controller fails in a HW RAID system, your array goes down. With SW RAID (done properly), a single controller can go away without a problem.
The most reliable storage system we have (a Network Appliance rack) is entirely software RAID. (RAID 4, a number you don't hear often).