Sun Unveils Thumper Data Storage
zdzichu writes "At today's press conference, Sun Microsystems is showing off a few new systems. One of them is the Sun Fire x4500, previously known under the 'Thumper' codename. It's a compact dual-Opteron rack server, 4U high, packed with 48 SATA-II drives. Yes, when the standard for a 4U server is four to eight hard disks, Thumper delivers forty-eight HDDs with 24 TB of raw storage. And that will double within the year, when 1TB drives go on sale. More information is also available at Jonathan Schwartz's blog."
Re:I want one! (Score:5, Informative)
48 HDDs, 2 CPUs, and still less than 1200 watts.
Oh man. A data farm in a single rack.
Re:I want one! (Score:3, Informative)
Re:Variable redundancy? (Score:4, Informative)
It makes managing this sort of storage box a snap, and allows you to dial up or down the level of redundancy by using either mirroring (2-way, 3-way, or more) or RAIDZ. And soon, RAIDZ2.
Additionally, Solaris running on the machine has fault management support for the drives: it can work with the SMART data to predict drive failures, and it exposes the drives to inspection via IPMI and other management interfaces. Fault LEDs light up when drives fail, making them a snap to find and replace.
Re:Holy SHIT! (Score:5, Informative)
It's 48 HDDs in a 4U case. 48 HDDs draw about 600W under full load.
Compare this to the dual-socket, dual-core servers out there that push 300W through a 1U case, and that's nothing.
Also, a 4U case allows the use of nice fat 12cm fans in the front, while the horizontal backplane allows for free airflow (in contrast to the vertical backplanes used before).
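The rough arithmetic behind that 600W figure can be sketched as follows; the per-drive and per-CPU wattages here are illustrative assumptions, not figures from Sun's spec sheet:

```python
# Rough power budget for a Thumper-style box.
# All wattages below are illustrative assumptions, not measured figures.
DRIVE_W = 12.5      # assumed per-drive draw under load, 7200 RPM SATA
CPU_W = 95          # assumed per-socket Opteron draw
OVERHEAD_W = 150    # assumed board, RAM, and fan overhead
N_DRIVES = 48
N_CPUS = 2

drives_total = DRIVE_W * N_DRIVES
system_total = drives_total + CPU_W * N_CPUS + OVERHEAD_W

print(drives_total)   # 600.0
print(system_total)   # 940.0 -- comfortably under the quoted 1200 W
```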
Re:Software RAID only, plus 7200 RPM no10k or 15k (Score:4, Informative)
The old-time "big-ticket" item was checksum calculation, but that is now an "also-ran". Distributing the I/O? Software can do it as well as hardware.
Both hardware and software have to be familiar with the blocking factor.
Where software wins is that it can know which blocks have never been used (or are not presently in use) and skip reading them to fill stripes, something hardware RAID controllers cannot avoid doing.
The idea is to tie the RAID more tightly into the filesystem.
As to lower-speed drives -- did you count the heads? Each is active at the same time. Yes, an individual I/O would complete faster at 10k or 15k RPM, but total throughput is based on the number of heads. For RAID5, reading multiple blocks will give you pretty much all the read performance you can stomach.
Write performance for an individual write operation would be improved, but application buffering generally deals with that. The tradeoff is number of heads, spin rate, and heat. The right balance? For you: write performance up and, keeping heat constant, number of heads down (I presume you are dealing with transactional loads, with commits). For me? It tends to go the other way (my workload is general storage, with a bit of database).
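To put rough numbers on the heads-versus-spin-rate tradeoff (the per-drive streaming rates below are assumptions for illustration, not benchmarks):

```python
# Aggregate streaming throughput scales with the number of active heads.
# Per-drive MB/s figures are assumed, illustrative values.
def aggregate_mb_s(n_drives, per_drive_mb_s):
    return n_drives * per_drive_mb_s

slow_farm = aggregate_mb_s(48, 60)   # 48 x 7200 RPM drives @ ~60 MB/s each
fast_farm = aggregate_mb_s(8, 90)    # 8 x 15k RPM drives @ ~90 MB/s each

print(slow_farm)  # 2880 MB/s
print(fast_farm)  # 720 MB/s -- fewer, faster spindles lose on total throughput
```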
As always, YMMV
Ratboy
Re:Variable redundancy? (Score:4, Informative)
Re:Variable redundancy? (Score:1, Informative)
With ZFS it's as easy as that.
I saw a demonstration at LinuxTag in Wiesbaden, Germany. They used files instead of hard disks, and just filled one of them with random bytes. Everything worked as if nothing had happened...
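A toy simulation of what that demo showed (this is a simplified sketch of the idea, not ZFS's actual on-disk logic): checksum each block at write time, and when one mirror copy fails verification on read, repair it from the good copy.

```python
import hashlib

def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

# Two "mirror" copies of a block, plus the checksum recorded at write time.
block = b"important data"
good_sum = checksum(block)
mirror = [bytearray(block), bytearray(block)]

mirror[0][:] = b"random garbage"  # simulate one copy being overwritten

# Read path: verify each copy against the stored checksum and
# self-heal a corrupted copy from the intact one.
for i, copy in enumerate(mirror):
    if checksum(bytes(copy)) != good_sum:
        other = mirror[1 - i]
        assert checksum(bytes(other)) == good_sum  # the other copy is intact
        copy[:] = other                            # repair in place

assert all(bytes(c) == block for c in mirror)
print("both copies verified and repaired")
```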
--
http://moritz.faui2k3.org/ [faui2k3.org]
Re:Software RAID only, plus 7200 RPM no10k or 15k (Score:4, Informative)
From this box, one can serve out file systems with NFS and/or SMB/CIFS (aka a traditional NAS), and in future releases of Solaris 10, also serve out LUNs over iSCSI and FCP while having all that data backed by the performance, reliability, and features of ZFS. The only thing it's missing is a consolidated, centralized CLI for manipulating storage, a la NetApp and ONTAP... but all the requisite pieces are there to turn Solaris, and especially Solaris-on-Thumper, into a NetApp killer at less cost.
Re:sun infatuated with sw-raid ? (Score:1, Informative)
Remember that "hardware" RAID is simply a dedicated low-end microcontroller (compared to an Opteron) running pretty much the same software that "software" RAID runs -- but in an environment where it's harder to apply patches and bug fixes (you need to re-flash the firmware).
This extra dedicated microcontroller is nice when it's in a server that's doing a lot of other CPU-intensive work (rendering, say); but on a dedicated file server you're far better off having the powerful main CPUs perform the RAID logic == software RAID.
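The "RAID logic" in question is mostly XOR parity, which a modern CPU chews through trivially. A minimal sketch of RAID-5-style parity and single-drive reconstruction (simplified for illustration; real implementations add striping, caching, and write-hole handling):

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three drives
parity = xor_blocks(data)            # parity block on a fourth drive

# "Drive 1" dies: rebuild its block from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("reconstructed:", rebuilt)  # reconstructed: b'BBBB'
```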
Beware of the 2.5" disk drives (Score:3, Informative)
Indeed, Sun's list prices are way too high (Score:1, Informative)
The 12 TB config [sun.com] is sold at $33k, or $2.75/GB, but assembling such a server yourself is possible and can be done today for 1/3rd of this price:
(I have actually slightly overestimated the above prices.) Of course people will say that such a server is not as reliable as a Sun server, that it does not come with technical support, etc. But in most cases such arguments are invalid, because you save so much money that you can afford to assemble and maintain the server and replace faulty hardware parts yourself. Time is money, but by having saved money you can now afford time ;-)
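The price-per-gigabyte comparison works out roughly like this (the DIY figure is just the poster's claimed one-third estimate, not a verified build price):

```python
# Cost-per-GB comparison; figures taken from the post above.
sun_price = 33_000        # USD, 12 TB config
capacity_gb = 12_000      # 12 TB expressed in GB

sun_per_gb = sun_price / capacity_gb
diy_per_gb = (sun_price / 3) / capacity_gb   # poster's claimed 1/3 build cost

print(round(sun_per_gb, 2))  # 2.75
print(round(diy_per_gb, 2))  # 0.92
```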
The living proof that such a model would be successful is Google: instead of buying Sun servers like most startups in their time, they built their servers themselves to save money.
Re:Bad idea from a storage management point of vie (Score:4, Informative)
Re:Seek time? (Score:1, Informative)
Re:Software RAID only, plus 7200 RPM no10k or 15k (Score:3, Informative)
The other hidden advantage here is storage density. If for some reason you needed 1PB of data storage in as small a space as possible, this is a big win for you. You would need about 45 of these servers to get 1PB of capacity. That would fit nicely into less than 5 racks of space, with room to spare for your networking and monitoring gear. A 1PB EMC Symmetrix is going to be a _LOT_ bigger.
No other storage platform has higher density (that I am aware of). Power use is good but not amazing (look at Petabox) and price is excellent for the size, but loses out as you scale.
Overall, I am stoked on them and want to try using them as backup servers. Attach one or two LTO-3 drives and a couple of 10Gb/s Ethernet cards and you have everything you need! You can spew data over the network from the clients and then spend the whole day making very good use of your tape drive resources.
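For scale, the back-of-the-envelope math behind that petabyte claim (using raw capacity and an assumed standard 42U rack):

```python
import math

PB_IN_TB = 1000
per_server_tb = 24          # raw capacity per x4500
rack_units_per_rack = 42    # assumed standard rack height

servers = math.ceil(PB_IN_TB / per_server_tb)
racks = math.ceil(servers * 4 / rack_units_per_rack)  # each server is 4U

print(servers)  # 42 at raw capacity; ~45 once redundancy overhead is counted
print(racks)    # 4 -- under 5 racks, with headroom for network and monitoring
```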
Re:sun infatuated with sw-raid ? (Score:5, Informative)
I can give you a few reasons they might. Having been through some hardware RAID nightmares I have first hand experience with a few of them.
HW RAID makes you dependent on the card's manufacturer both for the RAID implementation and for drivers. We once had a couple of hardware RAID cards managing a large (at the time) RAID 0+1 array that would occasionally glitch and fail a drive or two (or occasionally every drive on the controller). The driver and monitoring daemon wouldn't report anything until a second drive failed. Despite battery backup on the card cache, a single drive failure would often corrupt the data on the mirrored drive. The manufacturer was nowhere to be found when we requested updates or bug fixes.
We eventually switched to software RAID and found that, in addition to making the array reliable, it improved our performance. This was partly because the 6 CPUs in the machine were significantly faster than the 25MHz i960 managing the RAID cards. We could also mirror across controllers on the 4 separate PCI buses, which gets rid of a major bottleneck (the I/O on a PCI bus can easily be saturated by a few drives).
There are other benefits to being able to RAID across controllers. A RAID controller is a single point of failure. If a controller fails on a HW raid system, your array goes down. On SW RAID (done properly) a single controller can go away without a problem.
The most reliable storage system we have (a Network Appliance rack) is entirely software RAID. (RAID 4, a number you don't hear often).
Seen them in operation already (Score:2, Informative)
Re:$42,000 (Score:1, Informative)
Re:Bad idea from a storage management point of vie (Score:1, Informative)
As an HPC system administrator, I find these three boxes fill an interesting niche. The x4600 in particular is interesting. There are smaller tier-2 vendors that make 8-socket Opteron boxes (iWill, Supermicro, and Tyan(?) make the mobos), but those machines have a mixed reputation. Sun's support infrastructure is attractive, and I've been impressed with their hardware engineering. It's also easier to get an order for Sun, Dell, HP, or IBM through purchasing than for these smaller vendors. Same story with the x4500. The 8000 makes sense when you look at it as an I/O-intensive system (rather than as a high-density system) or as a VMware Infrastructure box.
Re:I want one! (Score:2, Informative)
Given the disasters I've seen with SAN storage, I would happily spend 25% more to get the Thumper and know I could rely on Sun to have someone onsite fixing whatever problems we have within four hours - and know that the person who shows up knows not only what to do with the Thumper, but also what to do with the 4900 once things are fixed.
Re::O (Score:2, Informative)
1 Thumper @ 24TB
Being named Slashdot's biggest geek for knowing how many bytes are in a Star Trek Collection
Re:Bad idea from a storage management point of vie (Score:3, Informative)
What 4U "standard" (Score:3, Informative)
Bullpucky. Maybe on your planet. A PC 4U NAS box in my world holds 24 SATA HDDs. Oh, you mean a standard 4U server... which usually means a quad-CPU box with 4GB of RAM and a couple of fugly FC controllers. See, your problem is that Thumper is for storage, where the 4U form factor is for drives, and the standard is more like 12 to 24.
</flame>
Re:I just can't wait. . . (Score:2, Informative)
You run ZFS, which does not require defragging.