Sun Unveils Thumper Data Storage

zdzichu writes "At today's press conference, Sun Microsystems is showing off a few new systems. One of them is the Sun Fire x4500, previously known under the codename 'Thumper.' It's a compact dual-Opteron rack server, 4U high, packed with 48 SATA-II drives. Yes, while the standard for a 4U server is four to eight hard disks, Thumper delivers forty-eight HDDs with 24 TB of raw storage. And that will double within the year, when 1TB drives go on sale. More information is also available at Jonathan Schwartz's blog."
  • I want one! (Score:3, Interesting)

    by andrewman327 ( 635952 ) on Tuesday July 11, 2006 @04:23PM (#15700952) Homepage Journal
    This is perfect for the space constraints applied to many server rooms nowadays. I wonder how they managed to control the heat output. My laptop only has one HDD and it gets pretty warm. I am very impressed that (according to Sun) it costs $2 per gig! As always, I hope it works as promised.
  • cooling (Score:3, Interesting)

    by Zheng Yi Quan ( 984645 ) on Tuesday July 11, 2006 @04:31PM (#15701027)
    Heat output from all those drives is a concern, but if you look at the photo on the ponytailed hippie's blog, you can see that the box has 20 fans in the front and probably more in the back. Makes you wonder what the thrust-to-weight ratio is. This box is going to make a screaming database server. 2GB/sec throughput to the internal disk beats anything out there, -and- the customer doesn't need to invest in SAN hardware to do it.
  • Re:$42,000 (Score:2, Interesting)

    by bryerton ( 524453 ) on Tuesday July 11, 2006 @04:42PM (#15701135)
    Is it? I recognize that the system itself is impressive. But buying 48 750GB SATA-II 3.5" drives costs around $24,000 and gives you ~36TB. If you look at the pricing, it becomes obvious Sun is drastically over-pricing the drives. The only difference I noticed at first glance between the $40k and the $90k option was the size of the drives. Perhaps I missed something...

    If I didn't, only a fool would buy the more expensive version. Just go in for the cheap array, and purchase 750GB drives yourself, re-sell the original 48x250GB ones, and you'll save yourself a rather large sum of money.
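
A rough sanity check of the upgrade math above, as a Python sketch. The system prices are the $40k/$90k options from the comment, the 750GB drive price follows from the ~$24,000-for-48 figure, and the resale value of the original 250GB drives is purely a guess:

    # Back-of-envelope: buy the cheap Thumper, swap in 750GB drives yourself.
    cheap_system = 40_000      # $40k option, ships with 48 x 250GB drives
    expensive_system = 90_000  # $90k option, same box with bigger drives
    drive_750gb = 500          # ~$24,000 / 48 drives, per the comment
    resale_250gb = 60          # guessed resale value per original 250GB drive

    diy = cheap_system + 48 * drive_750gb - 48 * resale_250gb
    print(f"DIY upgrade total: ${diy:,}")                        # ~$61,000
    print(f"Saved vs. $90k option: ${expensive_system - diy:,}")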
  • Re::O (Score:5, Interesting)

    by FuturePastNow ( 836765 ) on Tuesday July 11, 2006 @04:54PM (#15701223)
    28 seasons of Star Trek + all the movies = 250GB.
  • by geoff lane ( 93738 ) on Tuesday July 11, 2006 @05:06PM (#15701314)
    The (redundant) power supply is rated at 1800 watts, which implies about 6300 BTU/hr of heat out of the box. For 24TB and a server, that is remarkably low.
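
For reference, the standard watts-to-BTU conversion behind that estimate (1 W ≈ 3.412 BTU/hr) lands a little below the comment's round figure but in the same ballpark, assuming the full PSU rating is dissipated as heat:

    # Convert the 1800W PSU rating to heat output (worst case: all power -> heat).
    watts = 1800
    btu_per_hr = watts * 3.412  # 1 watt = 3.412 BTU/hr
    print(f"{btu_per_hr:.0f} BTU/hr")  # ~6142 BTU/hr, close to the quoted ~6300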
  • by linuxbaby ( 124641 ) * on Tuesday July 11, 2006 @05:11PM (#15701353)
    We were waiting anxiously for this item to be announced, because we have about 100TB of storage (now) and add about 8TB per month. Perfect customer for these.

    But, unfortunately, they're not quite as cheap as I had thought. (Friend on the inside thought Sun was going to price them at $1.25 per GB, not $2 per GB)

    Instead, we've been using these. Very good cooling:
    http://www.rackmountpro.com/productpage.php?prodid=2348 [rackmountpro.com]

    32 SATA-II 750GB drives = 24TB, the same as the Sun X4500, but for only $16,000 for the entire system (chassis, mobo, RAM, drives) instead of $70,000 for the Sun Thumper. Huge difference, especially if you're ordering many of them.
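
Spelling out the per-gigabyte gap in that comparison (system prices as quoted in the comment, both boxes holding 24TB raw):

    # Cost per GB for the two 24TB options mentioned above.
    for name, price in [("DIY 32-bay chassis", 16_000), ("Sun X4500 (quoted)", 70_000)]:
        print(f"{name}: ${price / 24_000:.2f}/GB")
    # DIY: ~$0.67/GB vs. Sun: ~$2.92/GB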
  • Re:Pfft... (Score:3, Interesting)

    by spun ( 1352 ) <loverevolutionary@@@yahoo...com> on Tuesday July 11, 2006 @05:32PM (#15701531) Journal
    You can also buy commodity 3U server chassis that hold 16 drives. We built a number of these as ROCKS cluster head nodes for Los Alamos National Labs. Two 3ware SATA RAID cards running 8-drive RAID 5 arrays, bonded together in software as a RAID 0 array. Decent performance, relatively inexpensively, which is after all what the "I" in RAID is supposed to stand for. If you do this, get the SATA backplane that uses 4 Infiniband cables instead of 16 SATA cables, and the cards that support that. I've done it both ways, and trust me, your knuckles will thank you for the four-fold reduction in cables. As an interesting aside, the chassis we used has a space up top for a 2.5" laptop hard drive to use as the system disk. It's the only way to fit a system disk in that chassis.
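
A quick sketch of the capacity math for the layout described above: two 8-drive RAID 5 arrays striped together as RAID 0 (i.e. RAID 50). The per-drive size is a placeholder, not something the comment specifies:

    # Usable capacity of two 8-drive RAID 5 arrays striped as RAID 0 (RAID 50).
    drive_gb = 750         # placeholder per-drive capacity
    drives_per_array = 8
    arrays = 2
    raw = arrays * drives_per_array * drive_gb
    usable = arrays * (drives_per_array - 1) * drive_gb  # RAID 5: one drive of parity per array
    print(f"raw {raw} GB, usable {usable} GB ({usable / raw:.0%})")  # 12000 / 10500 GB (88%)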
  • by larien ( 5608 ) on Tuesday July 11, 2006 @05:47PM (#15701616) Homepage Journal
    If you buy 10 at a time, it comes down to around $47k each (http://store.sun.com/CMTemplate/CEServlet?process=SunStore&cmdViewProduct_CP&catid=151017). Also, if you're paying list price on Sun kit, you're doing something wrong.
  • Re:Holy SHIT! (Score:3, Interesting)

    by UberLame ( 249268 ) on Tuesday July 11, 2006 @06:09PM (#15701775) Homepage
    It might have allowed for 12cm fans, but if you had looked, you would have seen that they are using 10 much smaller fans. Ick.

    Meanwhile, the x4600 (an 8 dual-core Opteron system) apparently does use 2 12cm fans.

    With all those disks, I suppose it might not make much difference, but I would rather have seen them use 12cm fans on the x4500 as well.
  • Re:Holy SHIT! (Score:3, Interesting)

    by buysse ( 5473 ) * on Tuesday July 11, 2006 @06:19PM (#15701846) Homepage
    Sun typically worries more about redundancy than noise. The 10 small fans are hot-swappable and run at ridiculous speeds (and yes, sound like an A320 revving up for takeoff), but I bet the thermal budget allows four of them to be dead at any given time.
  • by HockeyPuck ( 141947 ) on Tuesday July 11, 2006 @06:21PM (#15701862)
    If you liked the concept of the e450, you'll like this box.

    If you are interested in storage consolidation and increasing utilization while reducing storage islands, this isn't for you.

    With 48 disks, you'll want protection... all implemented in software RAID. So you do RAID-5 and probably create RAID groups of 12 disks? 8 disks? As the number of disks in a RAID group goes down, the amount of disk you waste on parity and the amount of CPU cycles spent calculating parity go up.

    As the industry moves to FC boot and iSCSI boot to alleviate the need to stock disk drives from 15 different vendors, this is an interesting idea for those who don't want to have a RAID array. But in most shops, huge internal storage is sooooo '90s.
    How do you replicate this beast? Veritas Volume Replicator. Serverless backup? Nope.
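
The parity tradeoff that comment gestures at, worked through for plain RAID-5 groups (one parity disk's worth of capacity per group; the group sizes are illustrative):

    # Parity overhead for 48 disks split into RAID-5 groups of varying size.
    total_disks = 48
    for group in (4, 6, 8, 12, 24):
        parity = total_disks // group  # one parity disk per group
        print(f"group size {group:2d}: {parity:2d} parity disks "
              f"({parity / total_disks:.0%} of raw capacity)")
    # Smaller groups burn more capacity on parity; larger groups risk longer rebuilds.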
  • by h8sg8s ( 559966 ) on Tuesday July 11, 2006 @07:07PM (#15702167)
    When I saw my first 1.6 MB Diablo removable disk, the rumor was "they go bad, don't trust them. Not as reliable as drum memory.." When I saw my first CDC 300MB "washing machine" style disk (circa 1980) the rumor was "they crash - not as reliable as the Diablo.." When I saw my first Winchester (Fujitsu Eagle) the rumor was "they crash - not as reliable as the old CDC drives.." When I saw my first 5.25" Winchester I freaked. This couldn't be a good idea. I was wrong. When I saw my first 3.5" disk I thought "cute, but it'll never catch on." I was wrong. When I saw my first sub 1" high 3.5" disk, I wondered how it would perform. It blew me away. When I saw my first SFF drive in the Sun 4200 server I was cautiously optimistic. So far it's worked out great. Each one of these changes increased disk reliability and performance greatly. Change is good, conventional wisdom usually isn't.
  • Re:I want one! (Score:3, Interesting)

    by hpavc ( 129350 ) on Tuesday July 11, 2006 @08:15PM (#15702489)
    The heat and energy solution is the amazing part of that product; with its use of heat sinks and pipes, it works well.
  • by this great guy ( 922511 ) on Wednesday July 12, 2006 @04:19AM (#15703933)

    The disks would go in the chassis (see my itemized list). You may not know it, but Sun is not the first company to use a chassis with vertical bays. Here is one example [datadirectnet.com] among many. The price would more likely be around 2 or 3 grand, by the way, instead of 1 grand. But anyway, this doesn't change the fact that this Sun box is way overpriced, even with a good 40% discount.

    Regarding the mobo, just pick one with two AMD 8131 or 8132 PCI-X bridges. This will give you 4 independent PCI-X busses. The two PCI-X bridges would have to be on 2 different HT links in order not to dangerously approach the theoretical one-way data throughput limit of 3.2 GB/s of one 1600 MT/s 16-bit HT link. The two PCI-X bridges could be connected either to different CPUs or to the same CPU, because the Opteron XBAR _can_ easily handle the ~3 GB/s you speak about; it has been designed to support 19.2 GB/s of HT traffic, and even more with the recent upgrade to 2000 MT/s ccHT links. Now with the 4 independent PCI-X busses, you could put 4 SATA HBAs on the 1st and 2nd busses, and 2 HBAs on the 3rd and 4th busses. This way the first 2 busses will run at 100 MHz and the other 2 will run at 133 MHz, giving a practical throughput of 3.4 GB/s (2 * (100 MHz * 64 bits / 8) + 2 * (133 MHz * 64 bits / 8), assuming the 90% efficiency found on most PCI/PCI-X busses), which is enough to handle the 3 GB/s you are speaking about. There are plenty of single AMD 8131 mobos on the market right now starting at $250. I am sure you can find one with two AMD 813x for $500 max.

    Now that I think about it, you could even use SATA port multipliers in order to reduce the number of HBAs, allowing all busses to run at 133 MHz. I am aware of 12-port and 24-port SATA HBAs (Areca comes to mind), but those are outrageously expensive and are not necessary to handle all that throughput. My experience, and that of my friends playing with high-end enterprise gear, shows that _very_ simple and inexpensive PCI-X SATA chips such as the SiI3124 or Marvell 88SXxxxx are more than sufficient to handle the maximum combined read throughput of any number of disks attached to their SATA ports. The reason is that the designers of such chips have come up with a simple and performant hardware interface optimized to reduce CPU load. I know for a fact that the SiI3124 design is somewhat close to the AHCI spec, which is the best example of a performant SATA hardware interface.

    So I _do_ believe that it is possible to build a $13-14k server with 48 SATA disks in 4U offering ~3 GB/s of raw read throughput. I don't understand why so many people refuse to believe that, especially since other posters in this thread have pointed out that some vendors are already selling similarly priced servers!
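
The bus arithmetic from that comment, worked through; the 90% efficiency derating is the commenter's own assumption:

    # Aggregate throughput of 2 x 100 MHz + 2 x 133 MHz PCI-X busses, 64 bits wide.
    def pcix_bw_bytes(mhz, width_bits=64):
        return mhz * 1e6 * width_bits / 8  # theoretical bytes/sec per bus

    total = 2 * pcix_bw_bytes(100) + 2 * pcix_bw_bytes(133)
    print(f"theoretical: {total / 1e9:.2f} GB/s")              # 3.73 GB/s
    print(f"at 90% efficiency: {0.9 * total / 1e9:.2f} GB/s")  # 3.36 GB/s -- the ~3.4 GB/s quoted above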
