IBM's High Performance File System 208
HoosierPeschke writes "BetaNews is running a story about IBM's new file system, General Parallel File System (GPFS). The short and skinny is that the new file system attained a 102-gigabyte-per-second transfer rate. The size of the file system is also astonishing at 1.6 petabytes (petabyte == 1,024 terabytes). IBM has a page up with more information and specs on the system."
Nothing new here. Move along. (Score:3, Informative)
Re:Nothing new here. Move along. (Score:2, Informative)
Re:Nothing new here. Move along. (Score:5, Informative)
Mox
Re:Nothing new here. Move along. (Score:2, Interesting)
Re:Nothing new here. Move along. (Score:2, Funny)
Re:Nothing new here. Move along. (Score:4, Funny)
Re:Nothing new here. Move along. (Score:5, Informative)
According to this article [internetnews.com], the idea was just to see how high a sustained transfer rate they could achieve. That rate was 102 GB/s, which apparently is a record. The purpose of the project seems to have something to do with reducing the bottlenecking in parallel-computing interconnects. The machine they used, ASC Purple (a weapons-research system at Lawrence Livermore Labs), has more than 10,000 processors, so that's their obvious application.
The filesystem itself doesn't seem to be anything new -- I have no idea why the poster fixated on that, since it's kind of a minor footnote in most of the articles I've read about this today.
Well.... (Score:2, Funny)
Re:Well.... (Score:5, Funny)
Oh, come now. They just finished winning their latest legal round on FAT [slashdot.org]
Give them a moment to catch their breath, will you?
introducing OrigamiFS, you write it out on paper then fold it in half as many times as you can
Re:Well.... (Score:3, Informative)
Apparently it can be folded only 12 times [pomonahistorical.org], at most. Unless M$ has created a new form of highly unstable foldable OS
Re:Well.... (Score:2)
10 Tbytes? (Score:4, Funny)
Re: 10 Tbytes? (Score:5, Funny)
Running out of space too... maybe I should build a beowulf cluster of them.
Daniel
Re: 10 Tbytes? (Score:2, Funny)
Re: 10 Tbytes? (Score:3, Funny)
Great, but you only ever watch 7 minutes at a time! That's like 100 billion years of pr0n!!
Re: 10 Tbytes? (Score:3, Interesting)
But they have 1000 clients... so it's only 100 MB/s per client... so ~1 Gbps per client... so the clients are probably gigabit Ethernet... Otherwi
Re: 10 Tbytes? (Score:5, Informative)
According to the published/unclassified spec sheet [llnl.gov]:
"Purple has 2 million gigabytes of storage from more than 11,000 Serial ATA and Fibre Channel disks.
I think that it was this last thing, the Federation interconnect, that they were pushing the data over in this test, since it forms the backbone of the machine and links the storage nodes to the login node controllers, which then connect to the login nodes themselves (of which there are apparently over 1,400, according to this [llnl.gov]). I couldn't find much information on Federation, as it seems to be used in only a few systems, of which Purple is the most notable. One reference [sandia.gov] I found seems to put it at 1.49 GB/sec (11.92 Gbit/s) bandwidth, although it's not clear if that's "dual plane" Federation or not. 4X SDR Infiniband is around 10 Gbit/sec, IIRC, so Federation's a little faster.
It does sound a little like it was a case of "hey, what can we do with $230M worth of hardware? I know, let's break some records." So they did. I'm not sure that there's anything there that anyone else couldn't do, with different technologies, given the same investment of capital -- it's just a matter of who else wants to, and has the capability.
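The unit conversion for the Federation figure quoted above is easy to sanity-check (a sketch; the 1.49 GB/s number is the one from the linked Sandia reference, and whether it describes single- or dual-plane Federation is unclear):

```python
# Convert the quoted Federation link bandwidth from GB/s to Gbit/s
# and compare against the rough 4X SDR InfiniBand figure.
federation_gb_per_s = 1.49            # GB/s, per the Sandia reference
federation_gbit_per_s = federation_gb_per_s * 8
infiniband_4x_sdr_gbit_per_s = 10     # approximate

print(f"Federation: ~{federation_gbit_per_s:.2f} Gbit/s")   # ~11.92 Gbit/s
print(f"4X SDR InfiniBand: ~{infiniband_4x_sdr_gbit_per_s} Gbit/s")
```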
Re: 10 Tbytes? (Score:5, Insightful)
2000 x 431.99 = $863,980CAD
I don't think that that's a lot of money for a petabyte raid. Hell, you might even get a 20% discount. Now think back about 20 years. That sum of money could have bought you 1 GB - that is an order of magnitude less in hard drive space. But here is the kicker:
Approx. 20 years down the road you will get at least two magnitudes more for the same amount of money (wo/ inflation). Why? Because approx. 30 years ago, that sum of money bought you 1 MB of space.
Ray Kurzweil calls it the "Law of Accelerating Returns" [kurzweilai.net]. 20 years down the road I will call it my petaporn array. Or maybe better not [peta.org].
Re: 10 Tbytes? (Score:2)
Re: 10 Tbytes? (Score:2)
Oh, don't worry. There is never enough storage(TM). Movie encoding quality will increase, games will get more immersive (maybe movies too), more detail, more of this, more of that. If transmission speeds increase, quality will go up; if quality goes up, transmission speeds will have to increase. Mix in new technologies at any point and the more-storage-than-I-know-what-to-do-with-dept. won't close anytime soon ;)
Try six orders of magnitude (Score:3, Informative)
Peta = 1 000 Tera = 1 000 000 Giga = 1 000 000 000 Mega
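The decimal prefix ladder above can be written out directly (a sketch using SI, i.e. powers of 1000):

```python
# SI (decimal) prefixes: each step up the ladder is a factor of 1000.
si = {"kilo": 1000**1, "mega": 1000**2, "giga": 1000**3,
      "tera": 1000**4, "peta": 1000**5}

# Peta = 1 000 Tera = 1 000 000 Giga = 1 000 000 000 Mega
assert si["peta"] == 1000 * si["tera"]
assert si["peta"] == 1_000_000 * si["giga"]
assert si["peta"] == 1_000_000_000 * si["mega"]
```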
Re: 10 Tbytes? (Score:2)
petabye hdd, an all in one enclosed unit.
what might be called an array in the future might be a zettabyte
array or something similar in size.
Arash
Re: 10 Tbytes? (Score:2)
2010: 5 terabyte
2014: 50 terabyte
2018: 0.5 petabyte
The main problem is, what are people going to use them for?
We need a common benchmark (Score:5, Funny)
Re:We need a common benchmark (Score:2)
*NIX Integration (Score:2, Interesting)
Translation: (Score:2, Funny)
Re:Translation: (Score:3, Informative)
GPFS supports the current releases of AIX 5L and selected releases of Red Hat and SUSE LINUX Enterprise Server distributions. See the GPFS FAQ for a current list of tested machines and also tested Linux distribution levels.
Re:Translation: (Score:2)
Can I use it? (Score:4, Interesting)
Fast Stuff (Score:4, Funny)
So what about JFS? (Score:2)
Re:So what about JFS? (Score:4, Informative)
You *like* JFS? (Score:2)
Essentially, I liked philosophically that the act of mounting and journal replay are separated; it really makes sense. Journal replay should be more an fsck option, and I thought that was neat. And when you mount read-only, you *mean* read-only: no journal replay or anything, even on a 'dirty' filesystem.
However, I found all too frequently that after power failures, it would replay the journal and think everything was fine, until a few hours of usage lat
Bad Article Title (Score:5, Funny)
I'm Surprised (Score:5, Funny)
That aside, how do I get one for my TiVo?
Re:I'm Surprised (Score:2)
Re:I'm Surprised (Score:2)
Re:I'm Surprised (Score:2)
I'd rather have you fetch me a shrubbery.
since the /. blurb doesn't explain it... (Score:5, Informative)
It's basically data striping across 1000 disks. I suppose the hard part is coordinating all of that parallelism.
So, could someone who actually knows this stuff tell me how well I did?
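For what it's worth, the block-to-disk mapping in plain round-robin striping can be sketched in a few lines (a toy model only, not how GPFS actually lays data out; real parallel file systems add distributed locking, replication, and metadata management on top):

```python
# Toy round-robin striping: logical block i of a file lands on
# disk (i mod n_disks), at stripe offset (i // n_disks).
def stripe_location(block: int, n_disks: int = 1000) -> tuple[int, int]:
    """Return (disk_index, offset_on_disk) for a logical block number."""
    return block % n_disks, block // n_disks

# With 1000 disks, 1000 consecutive blocks hit 1000 different disks,
# so a large sequential read or write proceeds in parallel.
assert stripe_location(0) == (0, 0)
assert stripe_location(999) == (999, 0)
assert stripe_location(1000) == (0, 1)
```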
Re:since the /. blurb doesn't explain it... (Score:5, Funny)
humm that was quick
Re:since the /. blurb doesn't explain it... (Score:2, Funny)
root@ibm# rm -rf *
And as always on storage/bandwidth topics: the pr0n/ogg/divx potential of that thing... *sorry*
I only know what IBM have published (Score:2)
Since GPFS is basically RAID on speed, it should be easy for IBM to write a wrapper for Linux that would allow you to read/write GPFS, without needing to port GPFS per se. As IBM sells Linux-based machines, being able to access GPFS
Available now. (Score:2)
Some people further up in the discussion have warned however that it's not as stable on Linux as it is on AIX, which is really its native platform.
From IBM's page on GPFS [ibm.com]:
"GPFS is available as:
* GPFS for AIX 5L on POWER(TM)
* GPFS for Linux on IBM AMD processor-based servers and
IBM eServer
Re:I only know what IBM have published (Score:2)
But you sound right to me. Having said that, I would have absolutely no objection to IBM porting support for ultra-parallel RAID to Linux. In fact, there are probably a number of areas in the kernel that they could use their experience in parallel architectures to tighten up on.
NOOOO!!!! You've just finally provided SCO with the evidence it needed! Filesystems were used in UNIX and SCO owns everything UNIX related. Now they know that IBM could maybe consider integrating, ehm, UNIX technologies, we mean UNIX
Re:since the /. blurb doesn't explain it... (Score:2)
When I started playing with gpfs on our linux machines about a year ago, I got pretty angry at it pretty often (mostly because we, with other people, were making it do things that the
Re:since the /. blurb doesn't explain it... (Score:3, Informative)
Most of all... (Score:3)
Further evidence that "editor" is a misnomer 'round these parts.
Re:Most of all... (Score:3)
Re:since the /. blurb doesn't explain it... (Score:3, Informative)
It's also striping across many machines in a cluster. Each of those nodes maxes out at 'only' 15 GB/s of I/O, so they wire up all the nodes to a bunch of fibre channel cards, and plug them all into the raids, to distribute the I/O access to the nodes. GPFS also lets you do the I/O over the cluster interconnect, but then you
GPFS Information and links (Score:5, Informative)
GPFS Whitepaper - http://www-03.ibm.com/servers/eserver/pseries/soft ware/whitepapers/gpfsprimer.pdf [ibm.com]
"GPFS is a cluster file system providing normal application interfaces, and has been available on AIX® operating system-based clusters since 1998 and Linux operating system-based clusters since 2001. GPFS distinguishes itself from other cluster file systems by providing concurrent, high-speed file access to applications executing on multiple nodes in an AIX 5L cluster, a Linux cluster or a heterogeneous cluster of AIX 5L and Linux machines. The processors supporting this cluster may be a mixture of IBM System p5(TM), p5 and pSeries® machines, IBM BladeCenter(TM) or IBM xSeries® machines based on Intel® or AMD processors. GPFS supports the current releases of AIX 5L and selected releases of Red Hat and SUSE LINUX Enterprise Server distributions. See the GPFS FAQ for a current list of tested machines and also tested Linux distribution levels. It is possible to run GPFS on compatible machines from other hardware vendors, but you should contact your IBM sales representative for details.
GPFS for AIX 5L and GPFS for Linux are derived from the same programming source and differ principally in adapting to the different hardware and operating system environments. The functionality of the two products is identical. GPFS V2.3 allows AIX 5L and Linux nodes, including Linux nodes on different machine architectures, to exist in the same cluster with shared access to the same GPFS file system. A cluster is a managed collection of computers which are connected via a network and share access to storage. Storage may be shared directly using storage networking capabilities provided by a storage vendor or by using IBM supplied capabilities which simulate a storage area network (SAN) over an IP network.
GPFS V2.3 is enhanced over previous releases of GPFS by introducing the capability to share data between clusters. This means that a cluster with proper authority can mount and directly access data owned by another cluster. It is possible to create clusters which own no data and are created for the sole purpose of accessing data owned by other clusters. The data transport uses either GPFS SAN simulation capabilities over a general network or SAN extension hardware.
GPFS V2.3 also adds new facilities in support of disaster recovery, recoverability and scaling. See the product publications for details."
GPFS is not new (Score:3, Informative)
I'll caution everyone that you can get 100 GB/s of throughput only if you have a hundred-million-dollar collection of computers and disks like Livermore has.
Google File System (Score:2)
If it's scalable, there's no reas
Re:Google File System (Score:3, Insightful)
When you care about throughput as well as capacity.
Re:Google File System (Score:2)
So will this mean cheaper storage costs (Score:3, Interesting)
Given that the management/maintenance of discs will be more integrated and distributable with this, I can't help but think that OS features and the trend (rightly so) toward resilience, ease, and sharing of resources are approaching what Plan 9 set out to be.
The more we move on, the more we seem to get toward the Lego-type approach to IT, where you can just buy another box of bricks and add on, keeping your older bricks, instead of throwing the whole lot out and/or hacksawing the end off a brick and gluing it onto the side of....
Storage-wise this is a nice step forward, and having worked on AIX with its many filesystems and management tools, with the ease of getting the job done plus the option to get clever if you wish (you choose, and are not forced), this looks funky, albeit it's RAID for SANs in a way.
What I really want is an FS that will propagate automatically and resiliently in a way that accommodates network diversity, and I still come down to wanting what is to all intents a filesystem sat on a database sat on a p2p network. Alas, at the moment performance would suck, at least today, but you know how long code takes to get right and how fast hardware moves: remember, a lot of the code in Windows XP has its origins in when it was written on a humble 386 CPU, if not lower.
What this does show is how network/storage interfaces have moved forward and I/O requests don't hammer CPUs as much as they used to. Getting there
Re:So will this mean cheaper storage costs (Score:2)
Well, it is one thing to send output from thousands of nodes to thousands of nodes, and achieve very high performance, as seems to be the case here. It is another to send output from a single node to a single node as you describe, where one node is a storage controller (ESS, whatever) and the other is a large database server that is probably not running a f
Re:So will this mean cheaper storage costs (Score:2)
Tech details (Score:3, Informative)
The article says 102 GB/s transfer. This PDF about the ASC Purple says they have 11,000 SATA & fiber channel disks (amongst other neat stats). So cursory math says that's about 10 MB/s from each disk.
My question is how useful is that transfer? Pulling in at 102 GB/s is fast and all, but if you can't consume it then it's just ego boosting. What kind of useful data transfer can you do on it? Surely it's for parallel processing (ASC = Advanced Simulation & Computing) of some kind so can this parallel app handle 102 GB/s collectively?
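The cursory per-disk math above can be replayed directly (a sketch using the 102 GB/s rate and the 11,000-disk count quoted from the spec sheet):

```python
total_rate_gb_s = 102     # aggregate transfer rate from the article, GB/s
n_disks = 11_000          # SATA + Fibre Channel disks, per the ASC Purple PDF

# Spread evenly, each disk only needs to sustain a modest rate.
per_disk_mb_s = total_rate_gb_s * 1000 / n_disks
print(f"~{per_disk_mb_s:.1f} MB/s per disk")
```

So each disk only has to deliver on the order of 9-10 MB/s, which is why aggregating thousands of them is the whole trick.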
Re:Tech details (Score:2)
http://www.llnl.gov/asc/platforms/purple/sc2005-p
Re:Tech details (Score:3, Insightful)
Re:Tech details (Score:2)
Just to give an idea: when the LHC comes into operation at CERN, it will produce data at a rate equivalent to the capacity of the whole EU telephony/data network. And this is only part of the story, since you have to analyze the data, compute, compare, etc. You need to be able to move it fast between processors.
Imagine nuclear weapons simulations, hurrican
Function of Purple (Score:3, Informative)
Since they can't actually do tests, either aboveground or below, by treaty anymore, they do simulations instead. I assume these have something to do with modeling how radioactive decay affects the weapons' usability and yield over time (since I don't think they're really in the business of designing new toys, but who knows really), so that you know that a bomb is going to go "pop" instead of "fizzle" when you want it to.
I'd imagine that those
unit correction (Score:2, Informative)
petabyte == 1,000 terabytes
ref: http://en.wikipedia.org/wiki/Petabyte [wikipedia.org]
Kibibytes is just so much more fun to say. Especially when it leads to "kibbles & bits."
SCREW THAT!!! ;-) (Score:4, Insightful)
the exact number in common practice could be either one of the following:
Real geeks use powers of two; powers of ten we're only introduced for marketing purposes, which real geeks eschew.
Re:SCREW THAT!!! ;-) (Score:2)
base 2 was used because it was easier to count with.
and some jerkwad decided that 1024 is close enough.
I speek for thousands of nerds when I say (Score:2)
binary prefixes (Score:5, Insightful)
A petabyte == 1000 terrabytes
A pebibyte == 1024 terrabytes
Please see the NIST definition page:
http://physics.nist.gov/cuu/Units/binary.html [nist.gov]
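At the peta scale the two conventions differ by nearly 13% (a quick check; note that in the IEC scheme a pebibyte is 2^50 bytes, i.e. 1024 tebibytes):

```python
petabyte = 1000**5   # SI decimal: 10^15 bytes
pebibyte = 1024**5   # IEC binary: 2^50 bytes

# The gap between decimal and binary grows with each prefix step.
ratio = pebibyte / petabyte
print(f"pebibyte / petabyte = {ratio:.4f}")   # ~1.1259
```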
Re:binary prefixes (Score:2)
The plethora of SI prefixes gets more and more confusing. And remember, not everyone has adopted, or is in any way bound to adopt, the NIST convention; after all, megabyte = 1024 kilobytes was around and in use long before NIST got into the act!
Re:binary prefixes (Score:2)
1000 meters = 1 kilometer
was around long before 'megabyte = 1024 kilobytes'.
The prefixes mean something.
Just because some CS tard decided that 1024 is close enough, and that base 2 is easier than base 10,
does not make it correct.
Re:binary prefixes (Score:2)
Re:binary prefixes (Score:5, Informative)
Context-sensitive conversion of SI prefixes isn't all that difficult. Really. It's commonly understood that data is stored in powers of 2, and the subject is only relevant if (1) you're a sales type, or (2) you are being overly pedantic about an unwanted and unneeded SI standard.
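A converter that handles both conventions really is short (a sketch; the prefix values are the decimal and binary tables from the NIST page linked above):

```python
# Parse sizes in either decimal (kB, MB, ...) or binary (KiB, MiB, ...) units.
DECIMAL = {"kB": 1000, "MB": 1000**2, "GB": 1000**3, "TB": 1000**4, "PB": 1000**5}
BINARY  = {"KiB": 1024, "MiB": 1024**2, "GiB": 1024**3, "TiB": 1024**4, "PiB": 1024**5}

def to_bytes(value: float, unit: str) -> int:
    """Convert a (value, unit) pair to a byte count, accepting either convention."""
    factor = DECIMAL.get(unit) or BINARY.get(unit)
    if factor is None:
        raise ValueError(f"unknown unit: {unit}")
    return int(value * factor)

assert to_bytes(1, "PB") == 10**15    # sales-brochure petabyte
assert to_bytes(1, "PiB") == 2**50    # power-of-two pebibyte
```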
Re:binary prefixes (Score:2)
Re:binary prefixes (Score:2)
A pebibyte == 1024 terrabytes
ROFL. Not to mention "terra" is Earth, "tera" is SI. One of many issues with the new names is that they sound like complete and utter crap. I'll never ever move away from mega-, giga- and terabytes. I abbreviate them correctly with the i's (MiB, GiB, TiB), and for anal people I'd specify it as "decimal *-byte" and "binary *-byte" or just give it in raw bytes. But those names.... OMG, what nerd came up with those?
Re:binary prefixes (Score:2)
Yeah, I agree. "bi"? Maybe the guy was confused.
It would have been so much easier just to change the "a" to an "i" - petibyte instead of petabyte.
Re:binary prefixes (Score:2)
1.6 petabytes isn't that big a deal (Score:5, Informative)
thats a whole load of data:
"Although we'd all like Moore's Law to continue forever, quantum mechanics imposes some fundamental limits on the computation rate and information capacity of any physical device. In particular, it has been shown that 1 kilogram of matter confined to 1 liter of space can perform at most 1051 operations per second on at most 1031 bits of information [see Seth Lloyd, "Ultimate physical limits to computation." Nature 406, 1047-1054 (2000)]. A fully-populated 128-bit storage pool would contain 2^128 blocks = 2^137 bytes = 2^140 bits; therefore the minimum mass required to hold the bits would be (2^140 bits) / (10^31 bits/kg) = 136 billion kg.
That's a lot of gear."
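The arithmetic in that quote can be replayed directly (a sketch; the 512-byte block size is implied by the quote's 2^128 blocks = 2^137 bytes step, and 10^31 bits/kg is the per-kilogram information bound it cites):

```python
blocks = 2**128               # fully-populated 128-bit storage pool
bytes_total = blocks * 512    # 512-byte blocks -> 2^137 bytes
bits_total = bytes_total * 8  # -> 2^140 bits

bound_bits_per_kg = 10**31    # information capacity bound cited in the quote
mass_kg = bits_total / bound_bits_per_kg
print(f"~{mass_kg:.2e} kg")   # on the order of 10^11 kg
```

This comes out around 1.4 x 10^11 kg, in the same ballpark as the 136 billion kg the quote gives; the small difference presumably comes from rounding in the original.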
No, the limits are much higher than that (Score:5, Informative)
Um, no, that's wrong.
Bremermann's Limit [wikipedia.org] is the maximum computational speed in the physical universe (as defined by relativity and quantum mechanical limitations) and is approximately 2 x 10^47 bits per second per gram (or, for those who prefer sexagesimal [jean.nu], one jezend [jean.nu], 60^11, bits per second per gram).
Bousso's covariant entropy bound [elyseum.com] also called the holographic bound is a theoretical refinement on the Bekenstein Bound [wikipedia.org] that may define the limit of how compact information may be stored, based on current understanding of quantum mechanical limits, and is theorized to be equal to approximately one yezend [jean.nu] (60^37, or ~10^66) bits of information contained in a space enclosed by a spherical surface of 1 sq. cm.
Given this, 1 kg of matter can perform approximately 2 x 10^50 bit operations per second, in a space much smaller than 1 liter. Of course, other physical constraints (non-quantum related) probably limit us to a couple of orders of magnitude less computation, in a couple of orders of magnitude more space, but of course what those limits might be is very speculative
Re:1.6 petabytes isn't that big a deal (Score:3, Informative)
That'll be 10^51 and 10^31...
Comparisons to other Parallel/Clustered FS? (Score:2)
Also, how does this compare to clustered storage that is not run on the hosts themselves, like NetApp's new Spinnaker-based clustering? You also have folks like Isilon [isilon.com], Panasas [panasas.com], and Terrascale [terrascale.com].
Anybody have any good data on this?
-Ack
Re:Comparisons to other Parallel/Clustered FS? (Score:2)
For that matter, how does it compare with Tivoli TotalStorage SAN Filesystem [ibm.com], which seems to be another shared-storage filesystem from IBM/Tivoli? Trying to read IBM's descriptions is an exercise in marketing-fluff cryptography.
Re:Comparisons to other Parallel/Clustered FS? (Score:2)
don't forget about the Parallel Virtual File System (PVFS) [clemson.edu]
-metric
I don't get it (Score:2, Funny)
Well, ... (Score:2, Funny)
Bad Experience with GPFS (Score:4, Interesting)
Unfortunately we had a lot of problems with it. For one, performance was quite bad in certain cases... doing an ls in a large directory would take a very long time. Doing finds would take a very long time. Once you had a specific file you wanted, opening and reading it was reasonable (though all disk ops were still on the slow side), but multi-file operations lagged on the level of 10s of seconds or more. I think it was having to issue network checks to every machine in the set for each file or something.
Also, the CPU usage was very high across all our machines, primarily from lock manager communications. It really taxed the system. And perhaps worst of all, it would cause crashes sometimes. A single machine in the set would die (usually a GPFS assert), and though that didn't break the set permanently, a multi-minute freeze on all disk reads would take place until the set determined the machine was unavailable. We spoke with IBM about all this stuff... provided debugging output and everything, and we used the latest patches. But we never got the issues resolved. It was a very rough few months indeed. I probably averaged 4 hours sleep per night.
When I say "slow" what am I comparing it to? In the end we switched to NFS and we came up with a somewhat clever way to avoid the need for file locking. NFS used the same SAN hardware, but had a single point of failure: the head server. We doubled up there with warm failover. The load on all servers dropped dramatically (I'm talking from ~40 load to ~.1 load). Disk operations were orders of magnitude faster. And we've not had a single NFS related lockup or failure in the past year and a half *knocks on wood*.
Anyways -- GPFS probably has some good uses. But I would not recommend it for a very high-volume (lots of files, lots of traffic) mission critical situation. Unless they've made some major improvements.
Cheers.
Re:Bad Experience with GPFS (Score:2)
1.6 petabytes is overkill ... (Score:2, Funny)
The marketing geniuses at IBM strike again!! (Score:2)
First it was OS2 (OS 2? does the "2" stand for 2nd rate? Is it your 2nd attempt? Is it just a big piece of "#2"?) and now it's this. Don't get me wrong, I think their products are great, but I really think they'd have a hard time marketing air on the Moon!
(Slightly more) seriously, IBM could stand to hire the same marketing folks the beer companies hire...Especially since their markets overlap so much.
Re:The marketing geniuses at IBM strike again!! (Score:2)
Re:The marketing geniuses at IBM strike again!! (Score:2)
I ran into this the other day trying to search for discussions of it... GPFs is overwhelmingly used as the plural for General Protection Fault...
my gpfs problem (Score:3, Informative)
NTFS (Score:3, Informative)
Re:NTFS (Score:2)
Re:NTFS (Score:2)
It has to be said.... (Score:2, Redundant)
Re:How many (Score:2)
Re:Darn! (Score:2)
Chuck Norris (Score:3, Funny)
Re:Chuck Norris (Score:4, Funny)
Re:1 petabyte = 1000 terabytes, not 1024. (Score:2)