Sun Unveils Thumper Data Storage

zdzichu writes "At today's press conference, Sun Microsystems is showing off a few new systems. One of them is the Sun Fire x4500, previously known under the codename 'Thumper.' It's a compact dual-Opteron rack server, 4U high, packed with 48 SATA-II drives. Yes, when the standard for a 4U server is four to eight hard disks, Thumper delivers forty-eight HDDs with 24 TB of raw storage. And that will double within the year, once 1 TB drives go on sale. More information is also available on Jonathan Schwartz's blog."
This discussion has been archived. No new comments can be posted.

  • Re:I want one! (Score:5, Informative)

    by cyanics ( 168644 ) on Tuesday July 11, 2006 @04:25PM (#15700972) Homepage Journal
    And they are especially showing off the low power usage in that amount of space.

    48 HDDs, 2 CPUs, and still less than 1200 watts.

    Oh man. A data farm in a single rack.
  • Re:I want one! (Score:3, Informative)

    by Jeff DeMaagd ( 2015 ) on Tuesday July 11, 2006 @04:33PM (#15701041) Homepage Journal
    It's not that big of a problem. A 7200 RPM drive might take 15 W max, so 48 drives brings the total up to roughly 720 W. Not bad in the server world, especially given the capacity.
  • by Anonymous Coward on Tuesday July 11, 2006 @05:02PM (#15701273)
    Check out ZFS-- http://www.opensolaris.org/os/community/zfs [opensolaris.org]

    It makes managing this sort of storage box a snap, and allows you to dial up or down the level of redundancy by using either mirroring (2-way, 3-way, or more) or RAIDZ. And soon, RAIDZ2.

    Additionally, Solaris running on the machine has fault management support for the drives, can work with the SMART data to predict drive failures, and exposes the drives to inspection via IPMI and other management interfaces. Fault LEDs light up when drives fail, making them a snap to find and replace.
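    As a rough illustration of that dial-up/dial-down trade-off, here is a small capacity estimator; the 500 GB drive size and the 8-wide RAID-Z group are illustrative assumptions, not Sun's configuration:

      # Rough usable-capacity estimates for the redundancy modes mentioned above.
      # Drive size and RAID-Z group width are illustrative assumptions.
      DRIVE_TB = 0.5       # 48 x 500 GB = the 24 TB raw config
      TOTAL_DRIVES = 48

      def usable_tb(drives, layout, width=8):
          """Very rough usable space, ignoring metadata and filesystem overhead."""
          raw = drives * DRIVE_TB
          if layout == "mirror2":   # every block stored twice
              return raw / 2
          if layout == "mirror3":   # every block stored three times
              return raw / 3
          if layout == "raidz":     # one parity disk per group of 'width'
              return raw * (width - 1) / width
          if layout == "raidz2":    # two parity disks per group
              return raw * (width - 2) / width
          raise ValueError(layout)

      for mode in ("mirror2", "mirror3", "raidz", "raidz2"):
          print(f"{mode:8s}: ~{usable_tb(TOTAL_DRIVES, mode):.1f} TB usable of 24 TB raw")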
  • Re:Holy SHIT! (Score:5, Informative)

    by imsabbel ( 611519 ) on Tuesday July 11, 2006 @05:03PM (#15701282)
    Why does everybody here get so worked up about "The HEAT!!111"?
    It's 48 HDDs in a 4U case, and 48 HDDs is about 600 W under full load.
    Compare that to the dual-socket, dual-core servers out there that push 300 W through a 1U case, and it's nothing.

    Also, a 4U case allows the use of nice fat 12 cm fans in the front, while the horizontal backplane allows free airflow (in contrast to the vertical backplanes used before).
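    A quick back-of-the-envelope check of those figures (the per-drive and per-CPU wattages below are ballpark assumptions, not measurements):

      # Rough power budget for a 48-drive, dual-socket box.
      DRIVES, WATTS_PER_DRIVE = 48, 12   # ~10-15 W per 7200 RPM SATA drive
      SOCKETS, WATTS_PER_CPU = 2, 95     # dual-core Opteron TDP class

      drive_w = DRIVES * WATTS_PER_DRIVE
      cpu_w = SOCKETS * WATTS_PER_CPU
      print(f"drives ~{drive_w} W, CPUs ~{cpu_w} W, subtotal ~{drive_w + cpu_w} W")
      # Motherboard, RAM and fans come on top, which still leaves headroom
      # under the ~1200 W figure quoted earlier in the thread.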
  • Actually, software RAID is an advantage, performance-wise.

    The old-time "big ticket" item was checksum calculation, but that is now an "also-ran". Distributing the I/O? Software can do it as well as hardware.

    Both hardware and software have to be familiar with the blocking factor.

    Where software wins is that it can know when a block has never been used (or is not PRESENTLY in use) and skip the reads needed to fill it, something hardware RAID controllers cannot avoid doing.

    The idea is to tie the RAID more tightly into the filesystem.

    As to lower-speed drives -- did you count the heads? They are all active at the same time. Yes, an individual I/O would complete faster at 10k or 15k spin, but total throughput is based on the number of heads. For RAID 5, reading multiple blocks will give you pretty much all the read performance you can stomach.

    Write performance for an individual write operation would be improved, but application buffering generally deals with that. The tradeoff is number of heads, spin rate, and heat. The right balance? For you, write performance up and, keeping heat constant, number of heads down (I presume you are dealing with transactional loads, with commits). For me? It tends to go the other way (my workload is general storage, with a bit of database).

    As always, YMMV
    Ratboy
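    A quick sketch of the heads-vs-spindle-speed point; the per-drive streaming rates are era-appropriate guesses, not benchmarks of this box:

      # Aggregate streaming throughput: many slower spindles vs. fewer fast ones.
      configs = {
          "48 x 7.2k SATA": (48, 60),   # (drives, rough MB/s per drive)
          "12 x 15k SCSI":  (12, 90),
      }
      for name, (drives, mb_s) in configs.items():
          print(f"{name}: ~{drives * mb_s} MB/s aggregate sequential read")
      # Individual seeks finish sooner on the 15k drives, but the wide SATA
      # array wins on total throughput simply because it has more heads.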
  • by Wesley Felter ( 138342 ) <wesley@felter.org> on Tuesday July 11, 2006 @05:04PM (#15701296) Homepage
    ZFS can provide anywhere between 200% and 10% redundancy depending on what mode and stripe size you use. It should also automatically repair when failed disks are replaced.
  • by Anonymous Coward on Tuesday July 11, 2006 @05:09PM (#15701333)
    It would be nice if the system had a setting where you could transparently specify a redundancy factor at the expense of capacity. For example, I could set a ratio of 1:3 where each bit is stored on three separate disks. This ratio could increase up to the number of disks in the system. And of course, little red lights appear on failed disks, at which point you simply swap them out and everything operates as if nothing happened (duh).

    With ZFS it's as easy as that.

    I saw a demonstration at LinuxTag in Wiesbaden, Germany. They used files instead of hard disks, and just filled one of them with random bytes. Everything worked as if nothing had happened...


    --
    http://moritz.faui2k3.org/ [faui2k3.org]
  • by E-Lad ( 1262 ) on Tuesday July 11, 2006 @05:09PM (#15701335)
    This box is 100% designed to be used to full advantage with ZFS. Thumper is what you would call a modern RAID array, as ZFS in this case blurs the distinction between hardware and software RAID. The CPU and memory horsepower is there for RAID-Z.

    From this box, one can serve out file systems with NFS and/or SMB/CIFS (aka a traditional NAS), and in future releases of Solaris 10, also serve out LUNs over iSCSI and FCP while having all that data backed by the performance, reliability, and features of ZFS. The only thing it's missing is a consolidated, centralized CLI for manipulating storage, a la NetApp and ONTAP... but all the requisite pieces are there to turn Solaris, and especially Solaris-on-Thumper, into a NetApp killer at less cost.
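    For the NAS case, a minimal sketch of what that looks like from the admin side, assuming a Solaris host with ZFS; the pool and dataset names are hypothetical, and iSCSI is the later addition mentioned above:

      # Create a dataset and export it over NFS via ZFS properties.
      import subprocess

      def zfs(*args):
          print("# zfs", " ".join(args))
          subprocess.run(["zfs", *args], check=True)

      zfs("create", "tank/exports")                # hypothetical pool/dataset
      zfs("set", "sharenfs=on", "tank/exports")    # NFS export as a filesystem property
      zfs("set", "compression=on", "tank/exports")
      # Later Solaris/OpenSolaris builds added iSCSI in the same property-driven
      # style, so the storage stays administered through zfs/zpool alone.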
  • by Anonymous Coward on Tuesday July 11, 2006 @05:27PM (#15701503)
    Why does Sun hate hardware RAID solutions?

    Remember that "hardware" RAID is simply a dedicated low-end microcontroller (compared to an Opteron) running pretty much the same software that "software" RAID runs, but in an environment where it's harder to apply patches and bug fixes (you need to re-flash the firmware).

    This extra dedicated microcontroller is nice when it's in a server that's doing a lot of other CPU-intensive work (rendering, say); but on a dedicated file server you're far better off having the powerful main CPUs perform the RAID logic == software RAID.

  • by xenophrak ( 457095 ) on Tuesday July 11, 2006 @05:30PM (#15701522)
    I'm glad that they are at least offering a server in this class with 3.5" disks. The 2.5" 10K RPM SAS disks in the x4100 and x4200 are just junk, pure and simple.
  • by this great guy ( 922511 ) on Tuesday July 11, 2006 @05:47PM (#15701617)

    The 12 TB config [sun.com] is sold at $33k, or $2.75/GB, but assembling such a server yourself is possible and can be done today for about a third of that price:

    • 1 x dual-Opteron mobo = 1 x $500
    • 2 x Opteron 285 = 2 x $1100
    • 8 x 2 GB DDR400 registered DIMM = 8 x $300
    • 6 x 8-port PCI-X Marvell SATA card = 6 x $100
    • 48 x 250 GB 7.2k RPM SATA disks = 48 x $110
    • 1 x Chassis+PSU+Rails = 1 x $1000
    • Total = $11980 or $1.00/GB

    (I have actually slightly overestimated the above prices.) Of course people are going to say that such a server is not as reliable as a Sun server, that it does not come with technical support, etc. But in most cases such arguments are invalid, because you save so much money that you can afford to assemble and maintain the server and replace faulty hardware parts yourself. Time is money, but by having saved money you can now afford time ;-) The living proof that such a model can be successful is Google: instead of buying Sun servers like most startups of their day, they built their servers themselves to save money.
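    Re-running the arithmetic from that parts list (quantities and unit prices copied from above):

      # DIY price check: quantities and unit prices from the list above.
      parts = {
          "dual-Opteron motherboard":        (1, 500),
          "Opteron 285":                     (2, 1100),
          "2 GB DDR400 registered DIMM":     (8, 300),
          "8-port PCI-X Marvell SATA card":  (6, 100),
          "250 GB 7.2k RPM SATA disk":       (48, 110),
          "chassis + PSU + rails":           (1, 1000),
      }
      total = sum(qty * price for qty, price in parts.values())
      raw_gb = 48 * 250
      print(f"total ${total}, raw {raw_gb} GB, ${total / raw_gb:.2f}/GB")
      # -> total $11980, raw 12000 GB, $1.00/GB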

  • by h8sg8s ( 559966 ) on Tuesday July 11, 2006 @06:50PM (#15702062)
    As to backup and replication, think ZFS: http://www.sun.com/2004-0914/feature/ [sun.com] Lots of folks are seeing this as simply a 2-socket server with lots of disk. With ZFS it's more like a huge disk farm with an open, hackable interface and nice manners at the back end.
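    A minimal sketch of what ZFS-based replication looks like, assuming the zfs snapshot and send/receive commands on a Solaris host; the pool, dataset, and host names are hypothetical:

      # Snapshot a dataset and stream it to a second box over ssh.
      import subprocess

      snap = "tank/data@nightly"                               # hypothetical dataset
      subprocess.run(["zfs", "snapshot", snap], check=True)

      send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
      subprocess.run(["ssh", "backuphost", "zfs", "receive", "backup/data"],
                     stdin=send.stdout, check=True)
      send.stdout.close()
      send.wait()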
  • Re:Seek time? (Score:1, Informative)

    by Anonymous Coward on Tuesday July 11, 2006 @07:13PM (#15702194)
    Seek time is a property of the disks; it's a couple of milliseconds, and it doesn't matter whether there's one disk or many. Having lots of disks generally reduces overall response time, though, because I/Os can be serviced concurrently.
  • by TinyManCan ( 580322 ) on Tuesday July 11, 2006 @07:18PM (#15702221) Homepage
    I 100% agree with you.

    The other hidden advantage here is storage density. If for some reason you needed 1PB of data storage in as small a space as possible, this is a big win for you. You would need about 45 of these servers to get 1PB of capacity. That would fit nicely into less than 5 racks of space, with room to spare for your networking and monitoring gear. A 1PB EMC Symmetrix is going to be a _LOT_ bigger.

    No other storage platform has higher density (that I am aware of). Power use is good but not amazing (look at Petabox) and price is excellent for the size, but loses out as you scale.

    Overall, I am stoked about them and want to try using them as backup servers. Attach one or two LTO-3 drives and a couple of 10 Gb/s Ethernet cards and you have everything you need! You can spew data over the network from the clients and then spend the whole day making very good use of your tape drive resources.

  • by SETIGuy ( 33768 ) on Tuesday July 11, 2006 @08:12PM (#15702477) Homepage
    Why does Sun hate hardware RAID solutions?

    I can give you a few reasons they might. Having been through some hardware RAID nightmares, I have first-hand experience with a few of them.

    HW RAID makes you dependent upon the manufacturer of the card both for the RAID implementation and for drivers. We once had a couple of hardware RAID cards managing a large (at the time) RAID 0+1 array that would occasionally glitch and fail a drive or two (or occasionally every drive on the controller). The driver and monitoring daemon wouldn't report anything until a second drive failed. Despite battery backup on the card cache, a single drive failure would often corrupt the data on the mirrored drive. And the manufacturer was nowhere to be found when we requested updates or bug fixes.

    We eventually switched to software RAID and found that, in addition to making the array reliable, it improved our performance. This was in part because the 6 CPUs on the machine were significantly faster than the 25 MHz i960 managing the RAID cards. We could also mirror across controllers on the 4 separate PCI busses, which gets rid of a major bottleneck (the I/O on a PCI bus can easily be saturated by a few drives).

    There are other benefits to being able to RAID across controllers. A RAID controller is a single point of failure. If a controller fails on a HW raid system, your array goes down. On SW RAID (done properly) a single controller can go away without a problem.

    The most reliable storage system we have (a Network Appliance rack) is entirely software RAID. (RAID 4, a number you don't hear often).
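    The PCI bottleneck is easy to quantify; the per-drive streaming rate below is a rough assumption for drives of that era:

      # How few drives it takes to saturate a classic 32-bit/33 MHz PCI bus.
      PCI_PEAK_MB_S = 133       # 32 bits x 33 MHz ~= 133 MB/s theoretical peak
      DRIVE_MB_S = 50           # rough sequential rate per drive

      print(f"~{PCI_PEAK_MB_S / DRIVE_MB_S:.1f} drives saturate one PCI bus")
      # Hence the win from mirroring across controllers that sit on separate
      # busses, as described above.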

  • by QuantumMajo ( 744804 ) on Tuesday July 11, 2006 @08:52PM (#15702637)
    Japan's TSUBAME (see the system at http://www.gsic.titech.ac.jp/ [titech.ac.jp]) is made up of both x4500 and x4600 systems. I've been in the Thumper room - it's loud as a jet engine in there and cooling is an issue, but only because the room is old. It's an impressive set-up, and made to be upgraded. They've got 1.1 Petabytes of storage now.
  • Re:$42,000 (Score:1, Informative)

    by Anonymous Coward on Tuesday July 11, 2006 @08:57PM (#15702654)
    Fortunately ZFS doesn't care about the geometry as long as the replacement drive is at least as large as the original.
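    A minimal sketch of that drive swap, assuming a Solaris host with ZFS; the pool name and device name are hypothetical:

      # Replace a failed disk in a pool; ZFS resilvers onto the new drive.
      import subprocess

      POOL = "tank"
      FAILED = "c4t3d0"   # new drive goes in the same slot, at least as large

      subprocess.run(["zpool", "status", POOL], check=True)      # identify the FAULTED device
      subprocess.run(["zpool", "replace", POOL, FAILED], check=True)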
  • by Anonymous Coward on Tuesday July 11, 2006 @09:26PM (#15702761)
    iSCSI. :) Most OSes have an iSCSI target available now. ~400 MB/s out of the box. You could throw in two InfiniBand or 10 Gb/s Ethernet adapters for more fun. The drawback for me is that a single box is a single point of failure, but there are many apps that don't require that extra level of redundancy.

    As an HPC system administrator, I think these three boxes fill an interesting niche. The x4600 in particular is interesting. There are smaller tier-2 vendors that make 8-socket Opterons (iWill, Supermicro, and Tyan(?) make the mobos), but those machines have a mixed reputation. Sun's support infrastructure is attractive, and I've been impressed with their hardware engineering. It's also easier to get an order for Sun, Dell, HP, or IBM through purchasing than one for these smaller vendors. Same story with the x4500. The 8000 makes sense when you look at it as an I/O-intensive system (rather than as a high-density system) or as a VMware Infrastructure box.
  • Re:I want one! (Score:2, Informative)

    by dlasley ( 221447 ) on Tuesday July 11, 2006 @09:38PM (#15702807) Homepage
    A SAN is not always the answer to large storage. Take Oracle 10g RAC, for example: say you have a multi-master setup in several datacenters, and you want dedicated high-speed fault-tolerant local storage for each instance. You set up a 4900 with internal storage for the OS and Oracle, then put your data partition(s) on a Thumper in each location. Even the best fibre MPxIO connection from the 4900 would be hard-pressed to match the speed, reliability, and responsiveness of a tuned local array.

    Given the disasters I've seen with SAN storage, I would happily spend 25% more to get the Thumper and know I could rely on Sun to have someone onsite fixing whatever problems we have within four hours - and know that the person who shows up knows not only what to do with the Thumper, but also what to do with the 4900 once things are fixed.

  • Re::O (Score:2, Informative)

    by Silver Gryphon ( 928672 ) on Tuesday July 11, 2006 @10:30PM (#15703010)
    1 LoC ( http://en.wikipedia.org/wiki/Library_of_Congress [wikipedia.org], ~20 TB ) ... 80 STCs
    1 Thumper @ 24 TB ... 1.2 LoC

    Being named Slashdot's biggest geek for knowing how many bytes are in a Star Trek Collection ... priceless.

  • by Paul Jakma ( 2677 ) on Wednesday July 12, 2006 @03:09AM (#15703772) Homepage Journal
    OpenSolaris iSCSI target [opensolaris.org] support is underway.
  • What 4U "standard" (Score:3, Informative)

    by thenerdgod ( 122843 ) on Wednesday July 12, 2006 @07:40AM (#15704257) Homepage
    Yes, when the standard for a 4U server is four to eight hard disks

    Bullpucky. Maybe on your planet. A PC 4U NAS box in my world holds 24 SATA HDDs. Oh, you mean a standard 4U server... which usually means a quad-CPU box with 4 GB of RAM and a couple of fugly FC controllers. See, your problem is that Thumper is for storage, where the 4U form factor exists to hold drives, and the standard there is more like 12 to 24.

    </flame>
  • by theProf ( 146375 ) on Wednesday July 12, 2006 @09:12AM (#15704636)
    You don't.
    You run ZFS, which does not require defragging.
