Unix Operating Systems Software

Ask Slashdot: NFS on Free OSes Substandard? 107

Yet another fearless member of Clan Anonymous Coward wrote in with this intriguing issue: "I am trying to convince my company to move off of Digital Unix and SunOS to either FreeBSD or Linux as our primary server platform. The main argument I am getting is that NFS client performance on these free OSes is much worse than that of Solaris or DU. Can anyone give any recent data on relative NFS performance on these platforms?"
This discussion has been archived. No new comments can be posted.
  • by Anonymous Coward
    If they are arguing that Linux NFS is slow, they should show some data to support that argument, not just say it. It is like saying one OS is better than another without any tests to prove it. It is just FUD.
  • by Anonymous Coward
    I have been running a Linux NFS file server, now serving more than 20 workstations, for around two years.
    The network is partly shared 100BaseT, partly switched 100BaseT.
    Some of these workstations have multiple processors and are heavily used for big jobs with a lot of I/O to the NFS filesystem. Other systems are every now and then abused by students.
    At the beginning, I had some filesystem consistency problems, 4 or 5 in a year. Since I installed Red Hat 5.* (almost a year ago) there have been absolutely no problems at all.

    On the other hand, a Solaris server sharing some home directories with two other Sun/Solaris computers gave me trouble about once a month. In the end the owners switched to PCs running Linux.

    So, as far as robustness goes, my experience shows Linux is superior at the moment.

    As for performance, I don't have any useful data. The systems work fine, I haven't noticed any delay accessing files, and all the users are happy, so I haven't run tests.
  • by Anonymous Coward
    > Though the performance was not anything great,
    > it was usable. NFS is relatively easy to
    > configure and use, but it needs NIS database
    > for user authentication.

    Actually, NFS does /not/ require NIS, but NIS makes things a lot easier with anything more than a handful of clients. The real necessity is UID/GID consistency. I originally implemented this at home by hand.
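
    For example, one quick hand-rolled way to spot UID mismatches between two hosts (just a sketch: it assumes flat /etc/passwd files and rsh access, and 'otherhost' is a placeholder):

    # dump username/UID pairs from both machines and compare
    awk -F: '{print $1, $3}' /etc/passwd | sort > local.uids
    rsh otherhost "awk -F: '{print \$1, \$3}' /etc/passwd" | sort > remote.uids
    diff local.uids remote.uids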
  • by Anonymous Coward
    We use Linux as an NFS server for Solaris and other Linux clients here. I agree with others that NFS is one of Linux's weakest spots. In desperation I tried moving to knfsd and the situation became worse. Right now I restart the NFS daemon every 15 minutes from cron. I noticed that 2.2.7 has NFSv3 support. Hopefully by the time we come to 2.2.30+ and knfsd stabilizes, we'll have a solid NFS server (perhaps six months from now). Meanwhile, evaluate for yourself whether you have the luxury of explaining to colleagues or your boss that things will most probably get better very soon. If you don't, leave it to those who have the clout in their offices to stand by Linux.

    On the other hand, Linux is an excellent Samba server in my experience. Push for it if you see a need. Once the confidence builds, push for Linux as an NFS server. Hopefully, Linux NFS will be stable by then.

    Ramana
  • by Anonymous Coward
    We currently have a Sun SPARCstation 5 with 64 MB RAM on two 10baseT Ethernet lines as our main server, running SunOS 4.1.3_U1. At the end of this semester, we are upgrading to a dual PII 400 system with 256 MB of memory, a pair of 100baseT cards (although we will have to stay at 10baseT due to old hubs), and 6 IBM 7200 RPM SCSI disks (1 for the OS, 5 in software RAID).
    I haven't really tested for the purpose of getting hardcore numbers, but IMHO the new Linux server beats the pants off the SPARC, basically due to the huge increase in horsepower under the hood. I recently tried copying over about 3 gigs of files using the Sun as the NFS server, and it was chunking them across pretty well, although it loaded up to about a 21 load average and spawned about 10 nfsd procs. I did the same thing with the Linux server as the NFS server. It only had one nfsd proc running, and it made it to about, oh, a .2 load average. Needless to say, I was impressed. This was done using the 2.0.36 kernel. (I would love to use a 2.2 kernel on this beast, but the EtherPower II Ethernet cards won't work on anything above the 2.0 series yet.)

    Of course, this doesn't really mean much, as the old server was being used by users at the time (although no more than about 3 of 'em) and the new one was not.

    In other tests, I copied a 600 meg CD image from this new server to a Linux box, and while the server didn't break a sweat at about a 1 load average, the nfs process was consuming about 30% of the CPU time.

    For us, Linux is the only way to go. As a department with no more than a 5 grand a year equipment budget, we can't afford to upgrade our Suns at all, and for a fraction of the price of a new Ultra we have a pretty awesome machine.

    Hope this helps somewhat

    -hamster
  • by Anonymous Coward
    I just ran a series of benchmarks/simulations for NFS on FreeBSD 3.0 and Linux 2.2.1. FreeBSD NFS performance was very poor compared to Linux; this result was counter-intuitive because most of the other tests I ran for disk and network performance showed FreeBSD slightly besting Linux.
    Linux showed a 14% NFS performance penalty over local disk access, and FreeBSD showed a 55% NFS performance penalty.
    If anyone cares about the test specs, send me email at mka@ieee.org
  • by Anonymous Coward on Friday April 30, 1999 @08:37AM (#1909393)
    NFSv3 support was finished for Linux 2.2 last week. Check for the patches in the kernel-list archives.

    http://www.linuxhq.com/
  • by Anonymous Coward on Friday April 30, 1999 @09:25AM (#1909394)
    NFS performance has increased /greatly/ with the new kernel-based NFS implementation.

    It is unfortunately not NFSv3 yet, though a fair number of the features have been implemented. And there are still a few lingering bugs from what I've heard, though I personally have yet to run into a problem with it.

    The raw performance I've seen is around 40% faster on a single client, which is corroborated by Red Hat's experience. They also claim over 400% improvement in multi-client environments.

    Probably the most important consideration is: WHAT ARE YOUR NEEDS?

    If you have extremely large traffic requirements (read large number of clients or large files) or if you absolutely need NFSv3 compliance (for 64-bit file handles, etc), then don't use Linux or *BSD.

    If, on the other hand, you are handling a couple dozen clients with low-to-middling NFS requirements, save yourself a boatload of money and use a Redhat server.

    But even if the free Unices don't make sense for your NFS servers, by all means recommend them for other tasks - mail server, web server, database server, router, etc, etc.
  • by Anonymous Coward on Friday April 30, 1999 @08:07AM (#1909395)
    Some weeks ago there was a huge thread on freebsd-hackers about NFS and the implementation in FreeBSD which has "slight" problems currently. JKH estimated the time necessary to fix these problems in months. It was even suggested to fund one or two developers to take care of the NFS "thingies."

    Then there is Linux. The 2.0 kernel suffers from the userland-only nfsd implementation, which has a real impact on speed, especially over fat pipes (>100 Mbit/sec). The interaction between userland and kernel demands many context switches and data copies between the two areas, decreasing overall speed and increasing server load.

    Linux 2.1/2.2 uses a kernel NFS implementation which is currently under heavy development, so its overall reliability cannot be foreseen. It still suffers from problems with bigger read/write block sizes. But HJ Lu (I think he is working on that) and the other contributors are doing a great job, so this area will improve over time.

    If you want to run a big network with many clients (300+), you should currently go with a commercial OS such as Solaris (I don't know anything about HP-UX's NFS performance/reliability) and run it on vendor hardware (yes, I'm conservative). At the current stage of the open source implementations of NFS, it would only discredit open source, and yourself as an open source advocate, to suggest using open source software for running a huge network. You can easily go with Linux or FreeBSD if you want to build a rather small network (I have a client with ~70 networked stations depending on a FreeBSD 3.0 server) and don't need a really scalable solution.
  • by Anonymous Coward on Friday April 30, 1999 @08:25AM (#1909396)
    We have a small development network of about 8 client machines (Linux & SPARC boxes) and a Linux box for an NFS server. These are some of the times I collected while following the optimization section of the Linux NFS HOWTO:

    I use the following commands to write and read a file, respectively (see the NFS howto):

    (1) time dd if=/dev/zero of=/opt/stuff/testfile bs=16k count=4096
    (2) time dd if=/opt/stuff/testfile of=/dev/null bs=16k


    One of the sparc clients has a line like this in its /etc/vfstab file

    linuxServer:/opt/stuff - /opt/stuff nfs - yes rsize=1024,wsize=1024,rw

    A Linux client has the following line in its /etc/fstab file:

    linuxServer:/opt/stuff /opt/stuff nfs rsize=4096,wsize=4096,hard,intr,suid 0 0

    This is typical output from (1) on the sparc client (I say typical because I'm substituting the average times):

    4096+0 records in
    4096+0 records out

    real 0m20.90s
    user 0m0.24s
    sys 0m2.49s

    And for (2):

    4096+0 records in
    4096+0 records out

    real 0m0.69s
    user 0m0.04s
    sys 0m0.62s


    For the linux client, (1):

    4096+0 records in
    4096+0 records out
    0.01user 2.05system 0:21.00elapsed 9%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+0outputs (89major+15minor)pagefaults 0swaps

    and (2):

    4096+0 records in
    4096+0 records out
    0.00user 1.49system 0:36.13elapsed 4%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+0outputs (96major+15minor)pagefaults 0swaps


    For (2), the sparc is significantly faster.

    If I change the rsize and wsize of the sparc client to 4096 each, the sparc client will crash nfsd on the Linux server.

    We are just a small group of developers who happen to be linux enthusiasts, so we configured this setup ourselves. In short, we don't claim to be masters at configuring unix networks.

    The sparc client is an Ultra 5, the Linux client is a 450 MHz HP Vectra (p.o.s.), and the Linux server is a 333 MHz Dell Dimension.

  • It's been a while since I looked through the CODA pages, but there was something about defining a cache size on the client for disconnected use. Is there any way to specify which directories to back up to the cache, or does it just put the last 500 MB or whatever in there?
  • Actually, Matt Dillon has been working on NFS for a while. Fully-working (and totally stable too) NFS will be in 3.1-STABLE relatively soon, and even sooner in 4.0 of course. Also, TCP NFS does not work at all in 3.X because of a recently found bug; Matt has it working tho.
  • That's kinda true (at least a few years ago) because Solaris NFS wanted 8k packets, while Linux does 1k by default. No hard numbers for a performance change, but increasing rsize and wsize to 8192 should help out a lot.
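
    For example, on the Linux client that would be a mount option along these lines (a sketch; the server name and paths are placeholders):

    mount -t nfs -o rsize=8192,wsize=8192 server:/export/home /mnt/home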
  • Posted by Mac Daniel:

    Looking at the numbers reported on The Internet Operating System Counter, it's fairly obvious that a lot of people are migrating to Linux.

    I published an analysis, "Growing Internet Presence", which notes that during the past 3 months the number of web servers (www, ftp, mail) grew a bit over 27% -- but Linux grew by 39%. (The Mac OS was the only other OS to grow faster than the market, while Windoze was a bit behind the market.)

    That doesn't demonstrate performance, but it does show that a lot of servers that used to be Windows or other Unix variants are now going Linux. With 31.3% of all servers using Linux, more experts are choosing it than any other OS.

    Dan Knight, Mac Advocate dknight@reformed.net
    Low End Mac
    the iMac channel

    "In view of the fact that God limited the intelligence of man,
    it seems unfair that He did not also limit his stupidity."
    - Konrad Adenauer
  • Posted by Grumpy_Old_Manager:

    We've got IRIX, Solaris, and Linux running as clients and servers. IRIX and Solaris support NFS version 3. Linux currently supports only version 2. It has been my experience that NFSv3 helped performance quite a bit. Our mail server supports NFSv3. We noticed that mail user agents such as pine were running much slower on NFSv2 clients such as Linux than on NFSv3 clients. Turns out that reading mail is a very write-intensive operation! NFSv3 optimizes writes. There are of course other ways to optimize writes on a server, mostly involving saving written data to memory in a RAID controller.

    Another feature of Solaris is commands like nfsstat which can help you determine what's going on with your NFS servers and clients.
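
    For example (flag spellings as I recall them from the Solaris man page; check your local version):

    nfsstat -s    # server-side RPC and NFS call counts
    nfsstat -c    # client-side call counts and retransmission stats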

    When I was your age we didn't have NFS. We had to feed stacks of computer cards into card readers, which sometimes jammed, sending you to a keypunch to retype the 80-byte data card. Yep, NFS seems pretty fast to me, so be happy you at least got NFSv2.
  • Posted by The King of the Potato People:

    If it works, why do you want to change it? Surely your loyalty to Linux/FreeBSD isn't clouding your judgement now, is it? :) It's okay, that's the standard 'I'm new here, I don't know how anything works, let's use Linux!!' opinion that most newbies who don't know Solaris have. Maybe I've just seen this too many times.. I hate it when people get hired and want to change OPERATING SYSTEMS on the servers because they have a personal preference..

    Why reinvent the wheel?


    ash
  • CODA is a new look at networkable filesystems. It is also not yet at a trustworthy level.

    My big complaint was that it needed a partition, or at least a swap-file-esque section, for the share. So you cannot just drop files in and expect it to work; you must know the size ahead of time.

    Plus side is that it does disconnected access, so a laptop which edits a file in the share and then reconnects will copy over the new version.
  • At least on 2.0.*, Linux NFS tended to crash like crazy, both in my experience and in the widely publicized Titanic case. Linux was used in making Titanic, and the engineering team at Digital Domain had some comments about Linux NFS crashing constantly and having to hack the kernel to fix it. The hacks, of course, were thrown out by Linus. Linux 2.2.* may be better. It does allow you to access remote block devices over the network.
  • I've heard a lot of talk that NFS on Linux is a little subpar. I haven't done any benchmarks to find out, but there seems to be a lot of consensus.


    FWIW, I'd try to run an AFS-based filesystem like CODA. I'm not sure if there are any free DFS implementations for Linux, but I know you can get CODA up and running. I'm not sure how it performs relative to other network filesystems, but it is actively developed and I'm sure they are aiming at providing a high-performance solution. Development is open. There are some ways to tune NFS, but unless you specifically need it for legacy support or something, I'd go with CODA.

  • You throw all NIS+ security out the window when you run it in NIS compatibility mode. All the pain of NIS+ and none of the pleasure.
  • by BadlandZ ( 1725 ) on Friday April 30, 1999 @11:19AM (#1909407) Journal
    Sorry, I thought I might have the chance to ask this, because reading this thread REALLY raised a question in my mind.

    Doesn't the hardware itself play a large role in the NFS server? I can't see an NFS server needing massive CPU power, but I can draw some lines to memory I/O bandwidth, SCSI subsystems, and network interface devices. When any hardware component is "weak", it could potentially affect the performance by some percentage, right?

    So, comparing a SPARC w/ Solaris to an x86 w/ Linux/FreeBSD just makes me think you're actually comparing a lot more than just OSes, and I would want to know the detailed specs of the systems being compared.

    Or can someone somehow prove to me that the software is the overriding factor and the hardware doesn't matter?

  • We have a Red Hat 5.1 box used for writing CDs. Initially we just used scp to copy files over to the Linux machine to burn to CD. Then someone suggested mounting directories off our SGI O200 server so that we could write the disk image directly to the local disk off the NFS drives. Result? Every time we tried this, the O200 was dead within an hour. Any file access from other (IRIX and Solaris) NFS clients locked up the process making the request. Even typing 'ls' in a local directory when logged in to the O200 hung the shell. The only way we could get things back was to reboot - we also had to reboot the NIS domain controller (a Sun).

    Nick

  • Yeah, the broken "Ask Slashdot" slashbox is weird too. It's been stuck for weeks now.
  • by mikenguyen ( 3505 ) on Friday April 30, 1999 @08:50AM (#1909410)
    I have to agree with others who say the NFS implementation in Solaris is the one that others should be measured against. Having adminned Solaris and SunOS for many years: rock solid.

    Recent versions of HP-UX seem to have borrowed a lot of Sun technology (an update to 10.20 gave NFSv3, Sun's autofs, and ONC+ in one fell swoop, and 11.00 incorporates all of these as well). I work at an all-HP shop right now, and I have to say it works OK, though we don't make heavy use of NFS (no shared home dirs, no SW builds on NFS filesystems, etc.), just some light data sharing.

    As for FreeBSD, I only have a 3.0-CURRENT box current as of Jan. or so, and a 2.2.8 box, so I don't have firsthand experience, but reading the mailing lists, significant progress has been made on general NFS stability and functionality (e.g. NFS over TCP). I don't have a Linux box, so I can't comment there.

    Mike.
  • We had the same problem here with a Solaris 2.5 box as NFS server and Linux 2.0.36 as client. After some access from the Linux box (5 min. at best), any access from any machine to the same dir (NFS or not!) hung processes.

    The problem went away with Linux 2.2.

    IRIX 6.4 works fine here, but IRIX 6.3 as server and Linux (2.2.2 at the moment) as client locks up directories on the client machine. But this might be due to the use of autofs...
  • by Juggler ( 5256 ) on Friday April 30, 1999 @08:17AM (#1909412) Homepage Journal
    This is one of the topics Alan Cox covered in the talk he gave in Iceland last week, IIRC. In short, Linux doesn't have NFSv3 support - NFSv3 stands for "NFS done right". Commercial Unixes have this, Linux doesn't. Not sure about the *BSD family - but basically, due to the inferior design of older versions of NFS (which Linux does support) you DO want to stay away from Linux if performance matters to you.

    Of course, people are working on fixing this...

  • These patches are available; however, they are considered (extremely) beta. I would not run them on anything but a test system until more people have beaten on them.

    Of course, if you're willing to hack a little, load 'em up (especially if you have a variety of OSes and versions on the NFS server) and report bugs to the kernel list.

  • Last time I checked, there was a package, I think called rumba, that implemented an SMB client in user space. Also, you can always use smbclient from the command line if you just need to get the occasional file, as in the sketch below.
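
    For example (server, share, user, and file names are all placeholders):

    smbclient //ntserver/docs -U myuser
    smb: \> get report.doc /tmp/report.doc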
  • After looking at some comments here, and in preparation for some upgrades around here, I ran a couple of highly unscientific tests between 2 Ultra 5s and a FreeBSD-STABLE machine. Just sharing drives and mapping them (100TX full duplex via a Cisco 2924 on each) with no extra options got me about 1 MB/sec writes from Sun to Sun, and 6.5 MB/sec writes to the FreeBSD server. As a client, FreeBSD could only muster 800 KB/sec back to the Sun. I need to move down another FBSD or Linux machine to attempt an FBSD->FBSD test...

    How can it be that FreeBSD outperformed Solaris (2.6, patched)? Well, probably because the FreeBSD system has a CMD Ultra-Daytona caching (64 MB) SCSI RAID array versus 4 SCSI drives on the Ultra 5 (both systems using Adaptec 3940-ish cards). Now, some may call this cheating, but the cost of the FreeBSD HW/SW setup was only about $400 more than the Sun, and it has about 4 GB more space. (And we bought the Suns at edu discount, and the disks/controller third-party.)

    Now I admit that I don't know how stable this system will be under heavy use; it's been mostly just me for the past month or so. And Linux, not having quite as 'finished' an NFSv3 implementation or a journaling or similar FS (I'm also using soft updates under FreeBSD), may even be a few months behind FreeBSD, though it typically catches up quickly when it falls behind in an area. So if you need a large, high-performance NFS server right now, lay out the cash for a NetApp. But if you are looking for something medium-performance/size and/or planning a fall deployment, select a quality hardware caching RAID subsystem and I suspect FreeBSD and/or Linux will easily outperform a similarly priced chunk of Sun hardware.

    Also, of course, I've glossed over the essentially free Solaris/Intel, partly because I haven't had the resources to experiment with it, and partly because the last time I did (>1 year ago), I was underwhelmed...
  • by tim pickering ( 6930 ) on Friday April 30, 1999 @11:52AM (#1909416) Homepage
    i have a P5-100 and an ultra 10 sitting next to each other on my desk at work. i use NFS to cart crap back and forth between them and they're on their own ports on a 100base-T switch. with 2.0.x and 2.2.x and the latest userland nfsd (whatever the latest RH 5.2 update was) i got 2 MB/sec pretty consistently going both ways. i didn't tweak read and write block sizes; they're whatever the out-of-the-box default was. now the P5-100 is running RH 6.0, kernel 2.2.5, and knfsd 1.2.2 (also not tweaked) and i get 3 MB/sec going both ways which is faster than i usually get via ftp. if i had a faster cpu and disk on the linux end it would probably be even better. it also seems like knfsd is a lot more responsive for automounting and grabbing lots of small files, but i don't have any numbers to back that up. as a comparison, i get 5-5.5 MB/sec when moving files between two ultra 10's running solaris 2.6 (seagate cheetah drives on both ends).

    linux's forte really is as a desktop unix (my friggin' P5-100 is _so_ much more responsive under X on the console than the ultra it isn't funny) and i think even the performance hit of the userland nfsd is outweighed by the performance gains in other respects (mostly X and file caching). it's also true that linux comes with a lot more software prepackaged whereas with most commercial unices you have to spend a week digging up and compiling such basic stuff as perl, python, or even bash.

    until knfsd shakes out a bit more i probably wouldn't want to use linux as a really hardcore NFS server (multiple hundreds of clients, heavy load, etc.), but in my experience it's fine in more modest environments and as an NFS client with HP or Sun NFS servers.

    tim
  • It is not "just FUD" ... I'd like to be able
    to post performance numbers, say from a specific
    number of operations between a Netapp filer and
    a sun, lintel, linux-alpha, dec-alpha, and winnt
    box, but I don't have the extra .25 million for the netapp... But please don't dismisss this
    report as FUD... We love linux to pieces but the
    NFS performance has been a showstopper for us.
    Tuning it isn't the solution; NFS writes are slow.
    No fud here -- it's a certainty, not uncertainty,
    no doubt about it at all.
  • by fishbowl ( 7759 ) on Friday April 30, 1999 @08:55AM (#1909418)
    (damned enter key, excuse me...) It seems like most of the responses lean toward "not using NFS" or "using something else (CODA, AFS)", but apparently we are still lacking in the NFS department.

    Unfortunately for linux, in at least one place I know of, this is terrible. What if your shop wants to use Network Appliance for its storage solution? All of a sudden you have an argument against linux based strictly on a technical merit -- NFS performance. Not good. (When you get a Netapp filer talking Coda, call me!) Even dyed-in-the-wool linux advocates in my company are forced to bite the bullet and use other platforms because linux isn't suitable to the task for NFS with lots of writes.

    NFS itself is not to blame; after all, Digital Unix performs well in this context. Even a commercial NFS implementation would be okay as a solution. Poor NFS performance has been a problem with linux for too many years now. I keep waiting for the problems to magically go away, but I guess they aren't going to. I've studied filesystems but still don't think I can fix this...
  • I had big problems with the userland NFS server under linux, which went away with the kernel level server.

    The server was/is a P133 with IDE disks on a 10baseT network. (Yes, I know... pretty weak server...) When writing ~50 MB from two clients (in this case, Sun ultra 1's) the userland NFS server would hang. I am sure the problem would be worse on a 100baseT network.

    Switching to knfsd fixed the problem. AFAIK, the NFS servers on Solaris and the other big commercial flavors of Unix are kernel-level servers.



  • I don't think -STABLE has the latest NFS patches that make it stable. In -CURRENT, I hear NFS has been almost perfected (in the kernel).
  • It does? I've seen it used without NIS everywhere, even on SCO. Are you sure that your /etc/nsswitch.conf (on Solaris; can't remember if it's the same file on SCO) isn't forcing you to use NIS?
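
    For example, a files-only setup would look something like this (a sketch; real entries vary by site):

    # /etc/nsswitch.conf (no NIS anywhere)
    passwd: files
    group: files
    hosts: files dns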
  • When I came to my current job, they had a small network of Suns, but for some inexplicable reason they had an NFS server running on NT. Worse yet, it was a demo copy of some NT NFS software, and the license would expire every so often, and the Suns would hang trying to access it!

    The good news is that we ordered a Sun box for our NFS server. It should be in "any day now".

    About NIS+, I had to enable "NIS compatibility" on the Sun NIS+ server to allow Linux NIS clients to work with it, and it seems to work out of the box on RH 5.2, anyway.
  • Ok, the question didn't mention that you were using such an old version.
  • Is there something for Solaris that allows it to mount SMB shares (like smbfs for Linux)?

    I thought I remembered reading that Sun had a product to do this; I can't remember the name, though.
  • It's a distributed filesystem developed at CMU. One of the cool features is how it works for disconnected clients, e.g. a laptop.

    A disconnected client still acts like it's connected to the CODA system, and you can make changes to CODA files and they will be resynced when you connect back to the network.

    At least this is what I gather from reading about it, I haven't used it myself yet.
  • Are you only sharing data between Linux systems? It's likely that Linux NFS was designed to work with Linux NFS; it doesn't work well with others.
  • What I need is something that allows Solaris to mount SMB shares, I already have Samba installed, and it works great for serving, but it doesn't allow Solaris to mount shares.

    (Sure smbclient allows access, but it doesn't allow shares to be mounted either)
  • All of the tools you mention (perl, make, Bash) are available for Sun, from sites like www.sunfreeware.com. The only catch is once you install them, you have to call the OS GNU/Solaris (Just kidding! ;-) )
  • I have had more problems with Linux NFS than anything else on the system. Sometimes it just fails to work, both client and server side, for no apparent reason: all the correct rpc.* daemons are running, and the exports file is correct. Other times the same setup works properly.

    Also try mounting a Linux export from a Sun system and watch the Sun complain.

    This is with the userspace NFS server, I haven't tried the new kernelspace one yet.

    Why is it necessary to migrate from Sun and Digital anyway?
  • Didn't Samba recently beat the tar out of a NetApp CIFS box in recent tests?
  • well, actually, the Linux AFS client that's been distributed in binary form hasn't been a Transarc Linux AFS client. Some guy signed an NDA (hmm... non-disclosure agreement is the right word, yet that acronym doesn't seem right...) and developed the Linux AFS client on his own. It has supported the latest 2.0 kernels for a while... it had an annoying caching bug that kicked in after longish periods of uptime, but worked well otherwise... course only for x86.

    Now Transarc has decided to take over this development on their own, and just finally released their first AFS client for Linux. Course, me being without money, I've not had a chance to use it yet... but i'd imagine it works... course, my university has been having a lot of trouble with transarc lately...

    oh, and i've played with Arla. that was a while back tho.. it might be time now to look into it again...
  • hmm, i looked into Arla back around when it was at 0.10 or so (don't remember exactly) and it looked like too much effort to be worthwhile, but it seems the time has come again to look into this.

    thanks!
  • by prijks ( 9686 ) on Friday April 30, 1999 @08:03AM (#1909433) Homepage
    Linux NFS is just not in good shape, IMHO. I would recommend against using it in any critical situation... but then again, I'd recommend against using NFS in general... =)

    Since I don't yet have access to Transarc's AFS client for Linux 2.2, I am using NFS to mount AFS off another machine that is capable of AFS. Working out authentication was a pain (luckily somebody did most of the work for me) and it still isn't trustworthy. I log in remotely to machines that are really on AFS if I have to do anything more than a quick edit...

    but among the odd things I've noticed: files I don't have permission to read just don't show up. This is really annoying when, say, I accidentally open a file with a stupid mode and it suddenly disappears (spent a good while debugging programs tonight before I thought to log into a remote machine and voila... there was my file, mode 000). I don't know for sure, though, if this is Linux's fault or NFS in general...

    anyway, to sum up, I'd say stick with something else for NFS for now (I like Slowlaris... it was my first UNIX)... Linux still has a ways to go, methinks...
  • % dd if=/dev/zero of=bigfile bs=1024 count=10000
    10000+0 records in
    10000+0 records out
    10240000 bytes transferred in 0.653652 secs (15665829 bytes/sec)
    % time cp bigfile /cshomes/spfarrel/
    cp bigfile /cshomes/spfarrel/ 0.00s user 0.48s system 10% cpu 4.735 total
    % time cp bigfile /cshomes/spfarrel/
    cp bigfile /cshomes/spfarrel/ 0.00s user 0.46s system 9% cpu 4.932 total
    % time cp bigfile /cshomes/spfarrel/
    cp bigfile /cshomes/spfarrel/ 0.00s user 0.45s system 10% cpu 4.364 total

    this is with 3.1 client & server over a v3 tcp mount, 100baseT, full duplex.
  • > The only reason for moving it in the kernel is performance. But that is an indication to me that the system APIs simply aren't complete or efficient enough to allow implementation of something like NFS.
    I disagree. While a better API (such as one that avoids kernel/userland memory copies) can help, part of this is not an API problem at all: on every processor out there (at least every one I've ever seen) the switch between kernel and user mode is very expensive. There is no fix for this, except not to switch.

    cjs

  • For years, Sun NFS would insert blocks of nulls at random when reading large files over NFS. Sun NFS write performance was also horrendously poor, even (or in particular) on Suns. And I remember lots of crashes related to Sun NFS. And even today, it isn't all that speedy.

    Sun NFS is probably the best NFS there is. My point is that people have been calling it "stable" and "rock solid" even when it had very serious bugs and performance problems, and I'm not at all convinced that those aren't still lurking around (I simply gave up using NFS for anything serious).

  • I find it understandable but regrettable that Linux also has gone down the path of implementing the NFS server in the kernel. As the user level NFS server shows, there is no need for kernel privileges to implement the NFS protocol. The only reason for moving it in the kernel is performance. But that is an indication to me that the system APIs simply aren't complete or efficient enough to allow implementation of something like NFS. If this problem got addressed in a more general way, other network services (SMB, httpd, etc.) might benefit as well.
  • If implemented according to Sun's pre-NFSv3 spec, I believe writes are always going to be very slow because they are required to be synchronous (I have no idea what v3 requires, it's academic for me at this point). In the past, with Sun NFS, writes were often 4-5 times slower than reads. That used to be fixable only by adding battery backed up RAM (e.g., PrestoServe) to buffer the writes that NFS guaranteed to be synchronous.

    You don't need very complex test setups to measure those differences--simply read and write a bunch of big files with "dd". If you want to do it simultaneously from several clients, there are some simple, free tools that let you execute the same command on multiple systems in parallel.
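
    For instance, the single-client version might look like this (paths and sizes are placeholders; bs=8k matches the maximum NFSv2 block size):

    # write a ~100 MB file to the NFS mount, then read it back
    time dd if=/dev/zero of=/mnt/nfs/testfile bs=8k count=12800
    time dd if=/mnt/nfs/testfile of=/dev/null bs=8k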

  • by sboss ( 13167 )
    Linux does support NIS+ with an add-on (I do not believe there is an RPM for it). I saw it on Freshmeat a few weeks (maybe days) back. They have not implemented everything yet, but they have done a lot of it. It is supposedly solid. I am going to implement NIS+ on my Linux boxes in my data center. It will be a few weeks before I can begin, so I cannot give you feedback anytime (relatively) soon.

    Thanks

    Scott
    C{E,F,O,T}O
    sboss dot net
    email: scott@sboss.net
  • The web address is http://www.suse.de/~kukuk/linux/nisplus.html and it has the fully functional client and a semi-working server.

    Scott
    C{E,F,O,T}O
    sboss dot net
    email: scott@sboss.net
  • I've used NFS clients on PCs running Windows 95 without using NIS. Note that this was not PC-NFS but some other vendor's implementation.
  • The guy addressed this question earlier. They've got an old version of SunOS/Solaris which isn't Y2K compliant. They have to upgrade, and the Solaris licenses are quite expensive.

    That's what someone said earlier, I disclaim any knowledge of knowing whether or not it's true.

    On a different note, Sun boxes come with a real paucity of software, as far as I can tell. Of course, most of my experience is with older Sun boxes, not newer ones, so I don't know if they now include all the utilities that I'm used to with Linux (such as compilers for more than 5 languages, make, bash, perl, etc.).

    Note: I'm not advocating blindly replacing everything with Linux. It'll take a while before Linux is ready to be put on 64 CPU sun boxes (though I hear that there are people working on High end SMP stuff).
  • You can give different priorities to various files or directories, i.e. so that you don't fill the cache with just MP3s.

    You can also preload; that is, if you're going somewhere with your laptop and know you're going to use emacs, you can do some magic to make sure all the necessary files are cached.

    Clear conscience is nothing but an expensive form of luxury
  • I couldn't let this one go. Market share, market penetration, or even mindshare have no technical relevance. If you're responsible for implementing a solution and want to go with the best equipment, you need to compare the options on a technical basis. Otherwise, you'll nearly always end up with NT.

    Think about it: if your argument is "everybody's using..." or "everybody's moving to...", 99% of the time your sentence will end with Windows. Maybe your career too. Unless of course, you're a manager ;-)
    -earl

  • I was just looking through the comp.benchmarks FAQ and noticed this. I remembered this question coming up here a few days ago, and upon quick inspection, I did not see any references to it here. I hope it is not too late for this to be of use to somebody, at least for further research:

    2.9. Nhfsstone

    Benchmark intended to measure the performance of file servers that follow the NFS protocol. The work in this area continued within the LADDIS group and finally within SPEC. The SPEC benchmark 097.LADDIS (SFS benchmark suite, see separate FAQ file on SPEC) is intended to replace Nhfsstone, it is superior to Nhfsstone in several aspects (multi-client capability, less client sensitivity).


  • I disagree that it's admitting defeat. While advances in userland NFS MIGHT benefit other protocols, the performance of those other protocols cannot show that userland NFS can perform well. The NFS protocol really was designed with kernel integration in mind, and it imposes some restrictions that cause real performance problems for user-space implementations. I think kernel NFS is also good because it allows NFS to share the buffering that the kernel provides normally.

    Overall, I do feel Linux is very weak on NFS. It gets hit from too many directions. Being NFSv2 and userland exacerbates the problems of NFS itself and the problems of it being a "young" implementation.

    But, I do have hope. And I think it will get there soon. Now that linux is being seriously thought of as a server I think NFS will be pushed along.
  • A friend of mine has actually written NFS for other platforms, and he's none too chuffed with the Linux NFS in several areas.
    Apparently there are still soft spots and non-standard bits floating around in there.

    He does send problems and patchlets in to knfsd, but has no patience with the userspace nfsd.
    Remember that the Titanic Linux network often got stuck when the (SGI) NFS server fell over. (I severely doubt that particular problem is still around).

    Personally I'm waiting for CODA - NFS isn't the prettiest creature in any light, or on anyones platform.

  • You might want to check out Arla [stacken.kth.se], which is a free AFS client implementation. They also include an experimental AFS server.

    About two years ago I tried the Transarc Linux AFS client. It worked but was a real pain in the a**. It was a kernel module distributed in binary form only, and always for a very old kernel (which, incidentally, didn't support all of my hardware). Perhaps this has changed.

    -Tom
  • by divbyzero ( 23176 ) on Friday April 30, 1999 @10:51AM (#1909449) Journal
    Just to clarify what Enry said: the rsize and wsize options are flags you set either in the fstab or on the mount command line for every share you want to mount. Nothing needs to be changed on the server exporting the share.

    This is documented right at the top of the nfs man page, and it makes a world of difference. My group at work has a very similar situation to yours (most shares served by Digital Unix but adding more Linux boxes every day), and NFS was definitely a problem until we fixed this. A sketch of such an entry follows below.
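
    For example, a hypothetical /etc/fstab entry (server name and paths made up):

    duserver:/export/projects /projects nfs rsize=8192,wsize=8192,hard,intr 0 0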

    Div.

    --
    But my grandest creation,
    As history will tell,
    Was Firefrorefiddle,

  • Linux of 1.2 and prior era had lousy NFS.

    2.0 Linux had a reasonable, but not brilliant, client - certainly slower than the better commercial unices. It didn't do locking (at all), but was pretty stable (we had some occasional problems on SPARC, but none on Intel). If you didn't need locking then it worked fine (the other performance benefits of Linux outweighed the NFS degradation).

    2.2 Linux is meant to have much better performance, and locking is getting there... but it's currently flaky. However, I haven't used it seriously yet, so ignore me on post-2.0 NFS :-)

  • -STABLE doesn't have some of the VM changes which destabilised NFS in recent -CURRENT kernels :-).

    I would not go so far as to say that NFS in -CURRENT was perfected but it does look good with Matt's latest patches. I really must review them properly and try to help get them committed into -CURRENT...
  • This is almost certainly a problem with using NFSv2. The correct behaviour for an NFSv2 server is to perform all write requests synchronously, which reduces performance significantly.

    Some NFS servers can change this behaviour to improve performance but I would not recommend it since it can cause data loss if the server reboots unexpectedly. For FreeBSD, the command 'sysctl -w vfs.nfs.async=1' will improve the performance of NFSv2 clients.

  • All of the *BSD family have had NFSv3 support since it was part of the original 4.4BSD-Lite2 release. The performance of FreeBSD's NFSv3 implementation is pretty good (good enough to do software development on an NFS-mounted drive without appreciable performance loss, which is what I do regularly).

    The two-stage write-commit protocol of NFSv3 is necessary to get any kind of performance out of NFS without gross hacks at the server that violate the protocol semantics.
  • by dfr ( 30895 ) on Friday April 30, 1999 @08:17AM (#1909454)
    While there are some obscure bugs in FreeBSD's implementation of the NFS client under very high loads (many of which are fixed in FreeBSD-current or will be fixed by pending changes to FreeBSD-current), I believe it to have extremely good performance. The attribute cache which was introduced in FreeBSD-3.x reduces network and server load significantly.

    I am biased since I work on the FreeBSD kernel and in the past have been involved with fixing and optimising the NFS client but I also use NFSv3 on a regular basis and see excellent performance with 100baseTX (I haven't measured performance for about a year but I seem to remember multiple Mb/sec write performance).

    I can't comment on Linux performance since I have only tested RedHat 5.2 (which has terrible NFS performance, IMHO) and I believe that many improvements have been made in the 2.2.x kernel series.
  • The latest experience I had trying to fix the NFS problem was with kernel version 2.2.2 on the Linux box, and the Indy running 6.5.2m. lockd and statd were running okay, but when little things like shells don't even start properly, that's sorta problematic.

    Perhaps I'll try a later Linux kernel again - though that's sort of a pain due to LILO being dumb about IDE and SCSI drives :P

    As for the 5.3 IRIX problems, using the -32bitclients option on your IRIX NFS server could help - however, without more information on your setup, that's only an educated guess (from having seen it before).

  • I can't give you any performance data, but I can give you an experience of mine with Linux running as an NFS server.

    My network was originally set up with an SGI Challenge S box as an NFS server. The client machines were a combination of PCs running Linux or Solaris, and a couple of SGI Indys. With this setup, there were few to no problems with NFS.

    However, I moved a whole bunch of stuff (some home directories) over to a Linux server, and I got hit hard with problems on the client side.

    The Linux clients talk to the Linux NFS server fine, but clients like the Indy take a real dislike to it. Even forcing the Indy to use NFSv2 and trying static mounts instead of automount/autofs, any process on the Indy that tries to use NFS to the Linux server just hangs.

    No matter what I've tried, I can't seem to fix the problem. I've tried the Linux kernel implementation of nfs and the userspace versions. Same problem.

    My advice: stick with commercial versions of NFS for the time being, i.e. those that come with Solaris and IRIX (especially since NFS comes packaged with 6.5 :)

  • >You're not going to find anything faster than dedicated network-attached storage like a Network Appliance filer for NFS or CIFS.

    The #1 reason for this is that a dedicated filer isn't even _allowed_ to do other work. The #2 reason is that the dedicated filer is configured to a CPU/memory/bus/disk balance appropriate for file service.

    When you peek under the covers, though, neither the hardware nor the software is really anything special or unique. A reasonably intelligent person who knows something about performance measurement and tuning would generally have little trouble duplicating the functionality and performance of a Netapp (or any of their competitors) using commodity parts and software... for about 1/3 the price.
  • I must agree here. I'm a big fan of linux, but it still cannot match solaris in *most* critical environments. solaris is refined, and linux is not (although I think this line will fade in a few more years). linux will have its day, but for now I will stick with solaris for my mission-critical boxes.
  • I guess it doesn't need NIS if all the clients are Unix workstations. But when you are running DOS based PC-NFS clients there is no other way of authentication except NIS.
  • You might find that read caching is what's making your benchmarks 'bogus'.

    Where do I find the 'bonnie' benchmark s/w?
  • Here are 2 benchmarks I've had to do recently. They aren't a complete test of NFS performance by any means, but they illustrate that a Network Appliance (F720) can match local disk performance and trounce a Sun at NFS server performance. I would add that though I use linux as an NFS server at home for my 4-machine network with few problems, my empirical experience is that it is not ready for high-load mission-critical NFS server applications. My opinion is that a Network Appliance's clever write-caching technology reduces the NFSv2 write penalty dramatically and increases linux client usability. We run 100+ fast linux clients and 20+ Suns, and I know that this setup is best served ($$$ and performance) by a Netapp rather than ANY UNIX solution.

    This first one shows raw sustained NFS performance, using dd to read or write a 100 MByte file over switched full-duplex 100 Mbit Ethernet between a P-II 333 (3Com 905) running Redhat 5.2 and a NetApp F720. In summary, I can achieve approximately 33 Mbits/sec NFS write performance to the Network Appliance and 62 Mbits/sec NFS read performance from the Network Appliance.

    XXX.8x8.com 28: time dd if=/dev/zero of=/mnt/ianb/testfile bs=16k count=6250
    6250+0 records in
    6250+0 records out
    0.050u 3.740s 0:25.24 15.0% 0+0k 0+0io 96pf+0w
    XXX.8x8.com 29: time dd if=/dev/zero of=/mnt/ianb/testfile bs=16k count=6250
    6250+0 records in
    6250+0 records out
    0.040u 3.840s 0:23.13 16.7% 0+0k 0+0io 95pf+0w
    XXX.8x8.com 30: time dd if=/mnt/ianb/testfile of=/dev/null bs=16k
    6250+0 records in
    6250+0 records out
    0.040u 2.690s 0:14.60 18.6% 0+0k 0+0io 99pf+0w
    XXX.8x8.com 31: time dd if=/mnt/ianb/testfile of=/dev/null bs=16k
    6250+0 records in
    6250+0 records out
    0.030u 3.150s 0:12.58 25.2% 0+0k 0+0io 114pf+0w
    XXX.8x8.com 32: ls -al /mnt/ianb/testfile
    -rw-r--r-- 1 ianb users 102400000 Mar 11 10:29 /mnt/ianb/testfile


    This 2nd benchmark is a GCC compilation of 8000 lines of C in many files. It illustrates both the dramatic differences between local disk, NetApp NFS, and Solaris NFS, and the benefits of using gcc and make options to improve efficiency: running parallel compiles (which use idle CPU that is otherwise lost while the OS waits for the remote RPCs to complete in an unparallelized compile) and using interprocess communication instead of files in /tmp to communicate between compile stages. Conclusion: using a 100 Mbit network, compile performance approaches local disk performance.

    NADS BUILD (GCC) ON SUN NFS DISK (10Mb/s net)
    6.480u 1.460s 0:38.36 20.6% 0+0k 0+0io 12596pf+0w

    NADS BUILD (GCC) ON SUN NFS DISK (100Mb/s net)
    6.480u 1.110s 0:29.69 25.5% 0+0k 0+0io 12596pf+0w

    NADS BUILD (GCC) ON SUN NFS DISK (10Mb/s net)
    WITH GCC -PIPE (NO /tmp files) AND MAKE -j 5
    6.890u 1.770s 0:33.17 26.1% 0+0k 0+0io 12597pf+0w

    NADS BUILD (GCC) ON SUN NFS DISK (100Mb/s net)
    WITH GCC -PIPE (NO /tmp files) AND MAKE -j 5
    6.660u 1.280s 0:25.22 31.4% 0+0k 0+0io 11736pf+0w

    NADS BUILD (GCC) ON NETAPP NFS DISK (10Mb/s net)
    6.650u 1.230s 0:17.67 44.5% 0+0k 0+0io 12596pf+0w

    NADS BUILD (GCC) ON NETAPP NFS DISK (100Mb/s net)
    6.490u 1.250s 0:09.35 82.7% 0+0k 0+0io 12596pf+0w

    NADS BUILD (GCC) ON NETAPP NFS DISK (10Mb/s net)
    WITH GCC -PIPE (NO /tmp files) AND MAKE -j 5
    6.940u 1.430s 0:13.93 60.0% 0+0k 0+0io 11741pf+0w

    NADS BUILD (GCC) ON NETAPP NFS DISK (100Mb/s net)
    WITH GCC -PIPE (NO /tmp files) AND MAKE -j 5
    6.830u 1.140s 0:08.77 90.8% 0+0k 0+0io 11736pf+0w

    NADS BUILD (GCC) ON LOCAL DISK
    5.730u 1.020s 0:11.31 59.6% 0+0k 0+0io 11069pf+0w

    NADS BUILD (GCC) ON LOCAL DISK
    WITH GCC -PIPE (NO /tmp files)
    5.700u 1.040s 0:09.71 69.4% 0+0k 0+0io 10271pf+0w

    NADS BUILD (GCC) ON LOCAL DISK
    WITH GCC -PIPE (NO /tmp files) AND MAKE -j 5
    6.000u 1.100s 0:08.19 86.6% 0+0k 0+0io 10280pf+0w


  • Actually, the concept of NFS over TCP was invented for 4.4BSD and then adopted by Sun. They added the 64-bit stuff and a few other things, and that gave us NFSv3.
  • You're not going to find anything faster than dedicated network-attached storage like a Network Appliance filer for NFS or CIFS.

    BTW, Dell has one coming out soon, I hear. Maybe the price will be better.
  • Are these performance penalty measurements derived from NFS read operations, write operations, or a mixture of the two? If NFS writes were involved in the measurement, there is a reason for the relatively poor measured performance of FreeBSD.

    According to RFC1094, NFSv2 servers must commit data to nonvolatile storage before acknowledging that a block write request has completed successfully. Given the relatively small size of NFS blocks (8K or less for NFSv2), forcing a separate write/sync/acknowledge cycle for each block that is written can result in relatively poor NFS write performance.

    There are several solutions to this problem. Linux (and some commercial UNIX variants) acknowledge NFS write operations without forcing a disk write and sync for each individual block. This is obviously somewhat more dangerous, but yields much better NFS write performance. Some vendors offer hardware add-ins (such as the PrestoServ board) that provide nonvolatile storage that can be written to faster than a disk.

    NFS version 3 has much better support for asynchronous writes without resorting to such hacks. Hopefully, Linux NFSv3 will be usable soon.

    If you are willing to live with the risk of data not necessarily being written to disk on the server when clients think that it has been, you can force FreeBSD to acknowledge NFS writes asynchronously using the command

    /sbin/sysctl -w vfs.nfs.async=1

    Retrying the benchmark after configuring FreeBSD to act more like Linux may yield different results.
  • You can get a Solaris version (2.5 - 7) of Samba at: http://sunfreeware.com

    It's a handy URL to have if you use Solaris a lot.

    John.
