GFS, OCFS, and GPFS - Which Filesystem for Oracle? 36

Posted by Cliff
from the proper-care-and-feeding-of-a-high-octane-RDBMS dept.
amani asks: "My company has an Oracle 9i RAC database running on a Sun cluster. In 6 months we are looking to replace the cluster with either a Linux or an AIX solution that will involve SAN storage. I see that there are a variety of filesystems for Oracle and Linux. Sistina (Red Hat) has GFS, Oracle has OCFS, and IBM has GPFS. Does anyone know the pros and cons of each of these filesystems, and which one would be better for a continuously growing database?"
This discussion has been archived. No new comments can be posted.

  • VCS is the way to go (Score:3, Informative)

    by Androclese (627848) on Monday January 26, 2004 @08:54PM (#8095660)
    Have you looked at a Veritas Cluster? (VCS) The company I work for uses it and we have found it to be very stable.
  • Until there is a high-quality, well-maintained, open source clustered filesystem on the professional level of reiserfs, I'd say nothing out there is worth using. They are all either 1) closed source, and by definition poorly maintained and near-unusable with open source operating systems, 2) not *real* clustering filesystems, or 3) so ungodly expensive that only Fortune 500 companies can justify the expense.
    • What about Lustre (http://www.lustre.org)?

      BTW, implicitly, closed != bad. Yes, sometimes it does, but not always.

      Also, by what definition is a filesystem a "cluster filesystem"? One in which the cluster nodes can (a) access, (b) provide, or (c) access and provide the filesystem? Not every flavour of clustered filesystem falls in the same category.

      I do agree with the license comment on closed source systems - the per-node license fees are ridiculous.
  • by mcdrewski42 (623680) on Monday January 26, 2004 @09:19PM (#8095938)

    As someone involved in building and architecting ludicrously sized realtime transaction processing systems, I can honestly tell you that the answer is "whatever".

    If you have lots more updates than accesses, you need your redo logs etc. on raw devices (no filesystem required); these will be your biggest bottleneck. For the rest, just go for a decent hardware RAID implementation, since software RAID is a joke.

    If you have lots more accesses than updates then it's your RAM which will probably make the real impact.

    And at the end of the day, if you're looking at advice, and you're sporting a cheque in your pocket - ask the vendors to tell you which one you should buy! Ask the tricky questions and put their answers in your contract so that they pay you if they lie :)

    I know - it's a nice dream.
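(A minimal sketch of the parent's redo-log point, not anything Oracle-specific: a commit is only durable once the write reaches stable storage, which is what raw devices or direct I/O guarantee and what the page cache hides. The 512-byte record size and loop count are illustrative assumptions.)

```python
import os
import tempfile
import time

# Compare buffered writes (page cache only) with writes forced to stable
# storage via fsync -- a rough stand-in for the raw-device behaviour the
# parent recommends for redo logs.
fd, path = tempfile.mkstemp()
record = b"redo record".ljust(512, b"\0")

t0 = time.perf_counter()
for _ in range(100):
    os.write(fd, record)          # lands in the OS page cache
buffered = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(100):
    os.write(fd, record)
    os.fsync(fd)                  # forced out to stable storage
durable = time.perf_counter() - t0

os.close(fd)
os.remove(path)
print(f"buffered: {buffered:.4f}s  durable: {durable:.4f}s")
```

On rotating disks of that era the durable loop is dramatically slower, which is why redo-log placement dominates update-heavy workloads.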
  • choose AIX/JFS/SSA (Score:1, Informative)

    by Anonymous Coward
    I run a 500 MB Oracle DB for SAP on top of AIX/JFS/SSA disks. It runs fine. Everything is very stable. Performance is good. SSA is an IBM SAN-like disk technology. SSA is pricey, but very mature. With AIX 5.2 you can add/delete/move/remove FSs/disks/SSA trays with all the applications running. Avoid JFS2; it's still not mature enough to be stable. Create Oracle datafiles up to 2 GB.
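(For scale: with that 2 GB-per-datafile ceiling, here is the back-of-the-envelope arithmetic for how many datafiles a database needs. The sizes are just the ones mentioned in this thread.)

```python
import math

# Safe per-datafile ceiling on older 32-bit JFS setups, per the parent post.
MAX_FILE_GB = 2

def datafiles_needed(db_size_gb: float) -> int:
    """Minimum number of datafiles to hold a database of the given size."""
    return max(1, math.ceil(db_size_gb / MAX_FILE_GB))

print(datafiles_needed(0.5))   # the poster's 500 MB DB
print(datafiles_needed(100))   # the 100 GB DB discussed below
```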
    • 500 MB? lol.

      How would your setup handle 100 gig of data?
      • by Anonymous Coward
        100 Gig?

        Simple SW raid with 3X 300GB IDE drives for growth
        ext3, metadata journaling only,
        and I'd toss oracle and use postgresql.

        3TB is more interesting.

        (more realistically, yes, I have a 100G postgresql, ext3 database that works fine. I also have a 5GB oracle database on some veritas file system. What does the size of 100 Gig matter?)

        • Just damn. The quality of /. keeps going down and down and down.

          That setup would get you fired at most companies.
        • 100 gig was a number I picked because that is the current size of the DB I'm working with. The DB has the potential to grow to tens of terabytes if we sign up more customers.

          PostgreSQL? Come on, man... like the other poster said, you'll be fired in no time if the shit hits the fan. If PostgreSQL corrupts our data, who gets the blame? With Oracle, we'll just call Oracle and have them foot the bill for the damage done.

          IDE drives? lol.
          • With Oracle, we'll just call Oracle and have them foot the bill for damage done.

            Did the lawyers at Oracle forget to include the usual "we do not warrant this software for any particular purpose, and in any case the maximum we will be liable for is the cost of the software" clause in the End User License Agreement for their permission-to-use-our-server-software license? Every other piece of proprietary software seems to have it.

        • Simple SW raid with 3X 300GB IDE drives for growth

          Please tell me you're not implying RAID5...
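(The arithmetic behind the RAID-5 worry: with N drives, one drive's worth of capacity goes to parity, and every small write costs extra I/Os for the read-modify-write of the parity stripe, which is a poor fit for redo-heavy databases. Drive counts and sizes below are the thread's own numbers.)

```python
def raid5_usable_gb(drives: int, drive_size_gb: int) -> int:
    """Usable capacity of a RAID-5 set: one drive's worth is lost to parity."""
    return (drives - 1) * drive_size_gb

print(raid5_usable_gb(3, 300))  # the 3x 300 GB IDE setup proposed above
```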

        • RAID on IDE? Eww. Make sure you at least turn the write cache off on all the drives so you don't end up with a corrupted database on power loss.

          Disclaimer: Yes, IDE RAID has its place, but I wouldn't want to be stuck using it for a database that I cared about.
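(A hedged sketch of the write-cache advice: on Linux of that era, hdparm's real -W0 flag disables an IDE drive's write-back cache, so a completed write is actually on the platter. Dry-run only here; the device path is an assumption for illustration, and actually running this needs root and a real drive.)

```python
def write_cache_off_cmd(device: str) -> list:
    """Build (but do not run) the hdparm command that disables write caching."""
    return ["hdparm", "-W0", device]

# Hypothetical device path, shown dry-run rather than executed.
print(write_cache_off_cmd("/dev/hda"))
```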
      I've run systems with basically the same hardware setup as that and it's been fine up to ~100 GB of data (how well you tune your system can make a lot of difference to performance on this sort of setup), although I wouldn't want to go much larger than that on that sort of kit (give me an HDS9960 over Fibre and then we can start talking reasonable-size database systems).

        A couple of points for the grandparent post:
        1) Removing the SSA disks and replacing them online (assuming mirrored FSs) worked WAY befor

        • Yes, HDS9960's are great, simply because of the write performance. Basically, these have a chunk of battery backed RAM (about 32GB, I think) to which writes are stored. These are then written to disk when the array gets a chance, but write times are in the order of 1ms, as opposed to the 10ms we were seeing in FC disks (times as reported by vxstat in Veritas volume manager). For one database we have, this is a major boost as the overnight batch jobs were generating a lot of small transactions and the bot
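(Rough arithmetic behind the parent's 1 ms vs. 10 ms observation: for an overnight batch doing many small synchronous writes, per-write latency dominates wall-clock time. The transaction count is an assumption for illustration; the latencies are the thread's own figures.)

```python
def batch_hours(writes: int, latency_ms: float) -> float:
    """Wall-clock hours for a batch of serial synchronous writes."""
    return writes * latency_ms / 1000 / 3600

small_txns = 5_000_000  # assumed overnight batch size, for illustration
print(f"FC disk @ 10 ms: {batch_hours(small_txns, 10):.1f} h")
print(f"cached  @  1 ms: {batch_hours(small_txns, 1):.1f} h")
```

A 10x latency difference turns directly into a 10x difference in batch run time, which is why the battery-backed cache pays off.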
  • ask us? (Score:2, Informative)

    by Anonymous Coward
    Um.. perhaps call your Oracle support people. If your company is at all sizable, they probably have support contracts with the companies that provide them their mission-critical software? And their professional services/technical engineering people would surely be the best people to ask.
  • Why? (Score:5, Insightful)

    by sql*kitten (1359) * on Tuesday January 27, 2004 @04:44AM (#8097999)
    The real question is, why are you migrating your hardware? Is it because you want to save some money on infrastructure in the short term? Is it because you're thinking long-term and are worried about the viability of the Sun platform? Is it because performance and/or reliability aren't good enough with your present system? Is it because your company has been acquired and your new owners are in bed with IBM? Or is it because Linux is the buzzword of the day and your boss insists you use it? Forgive my nosiness, but the question you are asking isn't really a tech question that has a straightforward answer. What is the outcome you are looking for? A wise engineer chooses his tools according to the job at hand, not the other way round.

    Figure out what you want to accomplish, then figure out what you need to do that. It's easy enough to try all three and see...
  • depends on how well your projected growth is known.

    If you are going from Solaris to A.N.Other Unix-like O/S, be prepared for a learning curve. It doesn't matter what the O/S is; it will require retraining, which adds to costs.

    Also, how write-heavy is your app? You'll need to watch the O/S and Oracle tuning, as they (especially Oracle) will need specific tuning. Remember having to set all sorts of new stuff for Oracle 9 and Solaris?

    Best advice is to get yourself a decent Oracle DBA, even on a short-term contract, as this will
    • I have to agree with this.

      Unless you are talking about some unholy large data warehouse and you need to squeeze every single drop of performance out of every circuit, your resources are better spent on the high-level logical settings of Oracle.

      With a good RAID-5 setup, it's really hard to optimize things for Oracle.
  • Check out Oracle's web site. They are very specific about WHICH Linux distributions and WHICH apps/tools are supported under OCFS (http://oss.oracle.com/projects/ocfs/files/supported/). Implication? Anything not listed MIGHT corrupt the files!
  • I run one of the 99 other 9i RACs in the world. I hate it. OCFS is slow and difficult to mount, load, and install. Plus it consumes all the resources in the system. The Oracle guys don't know anything else, so they like it. But as the system admin I seem to be constantly fixing Oracle, which the Oracle guys can't. So I ask you: what good are they if they can't even fix their own stuff? I also find that Oracle is a hack. Even the DBAs say this. Why do they use Java to install stuff? What's wrong with a g
