Speeding up Firewire File Transfers? 187

Milo_Mindbender asks: "I've got a pretty common problem: copying a ton of files from an old Windows XP computer to a new one. After noticing how long transfers were taking over my 100mbps Ethernet, I hooked up an IEEE1394/Firewire cable and things were much faster. Strangely though, Windows is still only using about 10% of the cable's 400mbps bandwidth. Does anyone know any tips/tricks for speeding this up, or any shareware mass-file-copy tools that would be faster than Explorer/file sharing? Right now, the older machine is set up with Windows file sharing and the new machine is copying from it; neither machine is using much CPU and the disks are nowhere near their max speed. The number and size of the files might be what's slowing it down, since it's gigabytes of files in the 100-200k size range."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • archive then move? (Score:4, Informative)

    by vjl ( 40603 ) <vjl&vjl,org> on Thursday June 29, 2006 @11:50PM (#15633936) Homepage Journal
    Have you tried to archive/compress them first [gzip/zip/etc], then move the big file over? Lots of small files take longer to move than fewer larger files. /vjl/
    • archive/compress them first [gzip/zip/etc], then move the big file over?

      Pr0n jpgs do not compress very well. WTF not just let it run overnight?

      • by biglig2 ( 89374 ) on Friday June 30, 2006 @12:34AM (#15634129) Homepage Journal
        Nothing to do with compression (although that may help); it's about one big file being faster to copy than lots of small files that add up to the same size. Even if you zip them up without compressing (it'll be an option somewhere), this will help.

        Another thing is that even without looking at third-party tools, you should be using XCOPY in preference to Windows Explorer.

        There is an Exchange server utility that is optimised for moving gigantic files very fast; doubtless you can find similar programs about.
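        A rough sketch of the archive-then-copy idea, in Unix shell for illustration (on Windows, zip's store-only option would be the equivalent; all the paths here are made up):

```shell
# Bundle many small files into one uncompressed archive; a single big
# sequential file avoids paying per-file open/close/metadata overhead.
mkdir -p /tmp/smallfiles_demo
for i in 1 2 3; do
    echo "data $i" > "/tmp/smallfiles_demo/file$i.txt"
done

# 'tar cf' stores without compressing (like zip's store-only mode).
tar cf /tmp/smallfiles.tar -C /tmp smallfiles_demo

# On the destination, unpack the single archive:
tar xf /tmp/smallfiles.tar -C /tmp
```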
        • I haven't used xcopy in a while. Certainly not on an XP machine. The last time I used it was on a w98 box and it truncated all of the filenames to 8.3 format (7~.3 really). Is there now a version that supports long filenames?
        • by blincoln ( 592401 ) on Friday June 30, 2006 @01:36AM (#15634321) Homepage Journal
          Robocopy is approximately a hundred trillion trillion trillion times better than xcopy.

          To put that in perspective, you would need to weld fourteen quadrillion VW Beetles end to end, then use the resulting Beetle Bar as a lever and an object with the displacement of eleven million Libraries of Congress as the fulcrum in order to give xcopy the same Windows command-line file copying power as Robocopy.
          • Robocopy has nice features, but it's slow. I've started using HAS [heatsoft.com] instead. It costs, but it's got more features, a nice GUI if you happen to like that, and most importantly, it's about 3x faster on my servers than Robocopy.
        • you should be using XCOPY in preference to windows explorer.

          Better, use xxcopy [xxcopy.com]. Similar CLI, free; avoids the common problem of long/short file names getting scrambled. The "pro" version apparently has network features, but I've never used that.

        • There is an Exchange server utility that is optimised for moving gigantic files very fast; doubtless you can find similar programs about.

          Perhaps you mean ROBOCOPY.EXE (Robust File Copy)? It is a tool in the Win2k/2k3 Admin toolkit designed to do just what you describe: faithfully, accurately, (and efficiently) copy large files from one place to another.
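          For reference, a typical invocation looks something like this (flags from Robocopy's documented set; the source and destination paths are placeholders, and the guard just lets the sketch degrade gracefully on machines without the tool):

```shell
# /E    copy subdirectories, including empty ones
# /R:2 /W:5   retry twice, wait 5 s, instead of the near-infinite defaults
if command -v robocopy >/dev/null 2>&1; then
    robocopy 'C:\olddata' 'D:\newdata' /E /R:2 /W:5
else
    echo "robocopy not available here"
fi
```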
        • you should be using XCOPY in preference to windows explorer


          We used to run a Windows to Windows performance lab to collect file transfer statistics. FTP beat the pants off of any Windows networking thing, even XCOPY.

      • The point is not to make them smaller - it is to make them all one file. Of course, smaller is better too.

        I wonder how the Microsoft backup tool would work in this situation. Backup to a file, copy the file, then restore from it. I've done it in the past, though not for the same reason, and it's worked...not sure about speed.

        Clearly, moving the disk would be the best option. I often use a firewire Wiebetech Drive (useful to have around if you often find the need to do such things) dock to perform a similar ta
      • Put a freeware FTP server on one of them and FTP everything. Windows file copy is TERRIBLE if you want speed
    • by mgv ( 198488 ) *
      Have you tried to archive/compress them first [gzip/zip/etc], then move the big file over? Lots of small files take longer to move than fewer larger files.

      Is it just possible that you are confusing bits with bytes per second? 400 Mb/s is about 40 MB/s in practice (you rarely get the full theoretical 50 MB/s that the raw bit rate would equate to).

      Michael
    • It would be faster to 'tar' directly to the target machine. Tar will stream the output and is probably much more efficient than Windows Explorer.

      Also, booting the machine from a Linux boot CD and mounting the drive read-only and using tar to move the files is probably the fastest method. I do this all the time to recover/backup machines at work.

      boot gentoo install cd...
      mount network share/external drive
      mount local drive '-o ro'
      cd /path/to/local/drive
      tar -cvf /path/to/remote/drive/backup.tar ./*
  • by TheArtfulTodger ( 879073 ) on Thursday June 29, 2006 @11:52PM (#15633946)
    Why not just plug the old hard drive on the secondary channel on the new PC, reboot and then just file copy? Or do I need to reread the question?
  • Here (Score:5, Informative)

    by abscissa ( 136568 ) on Thursday June 29, 2006 @11:52PM (#15633948)
    Here you go.

    Firewire is crippled in Windows by default. You need the patch here [microsoft.com] to restore functionality.
  • This being slashdot, I'm sure someone will correct me if I'm wrong, but your hard drive is most likely not fast enough to receive the full 400 Mb/s stream from the firewire. The fastest SCSI drives are 320Mb/s and that's not sustained.

    To get full firewire transfer goodness, you need a raid of fast drives, on both systems.
    • by Anonymous Coward
      your hard drive is most likely not fast enough to receive the full 400 Mb/s stream from the firewire. The fastest SCSI drives are 320Mb/s and that's not sustained.

      You are confusing MByte/s and MBit/s. Firewire is 400 MBit/s, while SCSI is 320 MByte per second.
    • by ArbitraryConstant ( 763964 ) on Friday June 30, 2006 @01:17AM (#15634275) Homepage
      Firewire 400 is 400 megabits per second.

      A modern SATA drive can do just shy of 70 megabytes per second, which is 560 megabits.
    • Actually, hard drives, both IDE/EIDE and normal SCSI, are measured in MB/s; note the capital B, meaning BYTEs, not BITs. External communication channels are measured in bits/s. Only the new SATA and Serial-Attached-SCSI drives are rated in bits/sec. If you calculate it out, a 3.0Gb/s SATA drive pulls about 375MB/s burst rate. It's marketing manipulation.

      I'm not sure why your transfers aren't that fast; for me, firewire from my external hard drive is just as fast as getting stuff off my fileserver *6 disk raid 0*. I have yet to
    • It is remarkable that a /. user, who should have some measure of tech-savvy, confuses megabits per second with megabytes per second. This is pretty basic stuff, kiddies. Firewire (the IEEE1394A type) tops out at 400 Mb/s, which is 50 MB/s. Eight bits in a byte, remember? SATA is 150 MB/s or 1.2 Gb/s. SATA-2 doubles that. Ultra320 SCSI is 320 MB/s or 2.56 Gb/s. The newer IEEE1394B is 800 Mb/s, but you will typically only find that on a Mac.
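      Spelled out, the divide-by-eight conversions go like this:

```shell
# One byte is eight bits, so megabits/s divided by 8 gives megabytes/s.
for mbit in 400 800; do
    echo "FireWire ${mbit}: $((mbit / 8)) MB/s"
done

# And the other direction: Ultra320 SCSI at 320 MB/s is 320*8 megabits.
echo "Ultra320 SCSI: $((320 * 8)) Mb/s"
```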
  • by Bob Cat - NYMPHS ( 313647 ) on Thursday June 29, 2006 @11:55PM (#15633962) Homepage
    I bet when you wake up in the morning, things will look much brighter. They always do.

    IF you wake up. Muahahahahahaaa....
  • If you're not maxing out your connection, any number of things could be limiting the speed of the transfer: CPU, bus speed, hard drive performance, etc. Use a system monitoring utility to see what's at 100% utilization, and then upgrade that part. Transferring larger files that are sequential on the disk will also help.
    • Since the poster already mentioned CPU, I suspect they know enough to look at the basic utilization stats. Most likely, however, the limiting factor is either the hard drive speed or the fact that Windows explorer is a piece of crap.

  • by digerata ( 516939 ) on Friday June 30, 2006 @12:01AM (#15633989) Homepage
    "...since it's gigabytes of files in the 100-200k size range."

    That's quite a collection of pr0n!

    • "...since it's gigabytes of files in the 100-200k size range."
      That's quite a collection of pr0n!


      You've obviously never seen my pr0n archives. Which reminds me, anyone know of good prices on RAID cards? Newegg maybe?
  • Some things to try (Score:2, Interesting)

    by Tycho ( 11893 )
    What is the manufacturer of the Firewire controllers in the computers you are using? VIA controllers are usually not the best Firewire controllers; Texas Instruments controllers are usually better. For that matter, depending on the situation, try compressing the files. Also, do not depend on the time-remaining estimate in Windows; I have found it wildly inaccurate at times. Windows seems to estimate the time remaining to be way too high, so YMMV, literally.
  • by inio ( 26835 ) on Friday June 30, 2006 @12:11AM (#15634031) Homepage
    Your file size, and disk seek time, are the problem. Let's say your drive has a 5ms seek time (that's pretty damn fast). Writing each file actually requires three writes: one to the file allocation table, one to the directory, and one for the contents of the file itself. Assuming the writes take another 5ms each, that's 20ms per file. That limits you to 50 files per second. At 200kiB per file, that's about 10 megs per second.
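    Plugging in those (assumed) numbers:

```shell
# 5 ms seek + three 5 ms writes (allocation table, directory, data)
# = 20 ms of disk time per file
ms_per_file=20
files_per_sec=$(( 1000 / ms_per_file ))      # 50 files/s
kib_per_sec=$(( files_per_sec * 200 ))       # at 200 KiB per file
echo "${files_per_sec} files/s -> ${kib_per_sec} KiB/s (~10 MB/s)"
```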
    • You're asking Windows to create, write, and close maybe 500 files per second. Windows file creation isn't that fast. What's the file system format on the destination side?

      Try transferring a 1GB file and report how long that takes.
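      On a Unix-ish system the equivalent experiment is easy to script (scaled to 64 MB here so it finishes quickly; raise count to 1024 for a real 1 GB test):

```shell
# Make a single large test file out of zeros...
dd if=/dev/zero of=/tmp/bigfile.bin bs=1M count=64 2>/dev/null

# ...then time copying it; compare this rate against the
# many-small-files rate to see what per-file overhead costs.
time cp /tmp/bigfile.bin /tmp/bigfile_copy.bin
```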

    • First off, I'm not sure NTFS has a file allocation table. But more relevantly, there are two filesystems I know of that do lots of small files really well: Reiser4 and XFS. At least in the case of Reiser4, Microsoft could license it.

      The key feature here is lazy allocation. It not only keeps your drive from getting as fragmented, it also means that when it does decide it has to write, it's writing all the files at once, and can make intelligent decisions like, write all the metadata out, then write all t
  • If you want to keep the files separate.. use Robocopy.. (free from MSFT)..

    If you don't mind the files being in one glump.. use a Win32 port of Tar..

    Both options above seem to speed up Firewire (or any) transfer.
  • by rwa2 ( 4391 ) * on Friday June 30, 2006 @01:18AM (#15634278) Homepage Journal
    Well, I could think of a lot of ways to speed it up under Linux using various combinations of rsync, and... well, really just rsync. See if there's a good rsync clone for Win32 that will preserve your precious file attributes. Even running it under cygwin may be better in the long run, especially because inevitably (speaking from experience) your large copy will be interrupted halfway through by an "unreadable file" or some such rubbish, and you'll find yourself having to try to fix it and start the copy all over again from the beginning, or else trying to just transfer the remaining directories you think you're missing.

    Using cygwin's rsync via ssh: (after running "ssh-host-config" on your new box and setting a "passwd" as Administrator )

    rsync -azve ssh --progress /cygdrive/c/pr0n/ Administrator@newxpbox:/cygdrive/c/pr0n/

    will do the trick, and you can just keep running it over and over again until all the files are mirrored. It will take a long time to build a list of all the files you need to transfer, but it will only transfer the files you're missing, and will attempt to do some compression (which should help because you're more IO bound than CPU bound, but just remove the -z if your CPU is pegged). Plus, you'll find rsync & scp damn useful for many other common tasks you take on.

    The bottleneck is probably your windows filesystem, and cygwin's extra abstraction layer will only make that worse. But using rsync under cygwin means you only have to transfer the files once - which will be a much bigger time saver than trying and failing to do the entire transfer several times.

    If you were doing this often, I'm guessing you might see an improvement if you defragment your old drive first, but you obviously don't really want to waste time on that for a once and final transfer.

    Also, the Windows TCP/IP stack is typically tuned for 2 - 10Mbps links. Here's some information on how to fix that: http://rdweb.cns.vt.edu/public/notes/win2k-tcpip.htm [vt.edu] It's mainly geared towards improving throughput on high-capacity WAN links, but parts are also relevant to achieving decent performance on 100Mbps+ networks as well. Also remember that a lot of network drivers suck too and are incapable of pushing the throughput even to a fraction of its rating... that's been a factor too, especially on cheap windows crap. An updated NIC driver /might/ get your net transfer to catch up with your firewire transfer somewhat.
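    The knobs those guides turn live in the registry under the Tcpip Parameters key; a fragment might look like this (the value names are real XP-era settings, but the numbers are only illustrative, not recommendations):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; enable RFC 1323 window scaling and timestamps
"Tcp1323Opts"=dword:00000003
; larger receive window: 0x0003ebc0 = 256,960 bytes, a multiple of the
; 1460-byte Ethernet MSS
"TcpWindowSize"=dword:0003ebc0
```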

    Since you're getting 40Mbps / 400Mbps firewire, you're really not doing too bad. Converting to bytes, 5MB/s is a decent fraction of the 20MB/s to 50MB/s raw speed of your older hard drives, and actually seems reasonable given that you're sending lots of small files and not a few big ones where you can actually make good use of your drive's readahead cache.
  • by Bogtha ( 906264 ) on Friday June 30, 2006 @01:20AM (#15634283)

    Strangely though, Windows is still only using about 10% of the cable's 400mbps bandwidth.

    Are you sure you aren't confusing mbps [wikipedia.org] with MBps [wikipedia.org]? 400mbps is equal to 50 megabytes per second, and "12.5% of the cable's bandwidth" sounds suspiciously like your description of the problem, "about 10% of the cable's bandwidth".

    • Are you sure you aren't confusing mbps with MBps? 400mbps is equal to 50 megabytes per second...

      Are you sure you aren't confusing mbps [wikipedia.org] with Mbps [wikipedia.org]?

      Last I checked, 400mbps is equal to 0.4 bytes per second. I remember getting speeds like that back in the days of dial-up. Like back in the day when you picked up the phone and read off ones and zeros to your friend on the other line as he copied your fortran program by punching holes in his punch card. Oh yeah, and uphill both ways. In the snow.

      ; )


    • I was about to post in a similar vein, wondering how 10% of 400 Mbps can be faster than 100 Mbps, but after reading your post it makes sense; the question submitter is saturating the LAN connection and getting ~10 MBps, and is also saturating the firewire connection thinking it can do faster because he is expecting it to go 400 MBps.
  • First of all, I'm confused by what you mean when you say it's only using 10% of bandwidth. 400Mbps means you're gonna get something like a max of about 40 Megabytes per sec transfer (remember, 8 bits per byte, plus some overhead). Are you seriously only getting 4 MB/s?

    As far as copying faster. You might want to try robocopy from the Windows 2003 resource kit [microsoft.com] or xxcopy [xxcopy.com]. I've tried xxcopy and it seems to buffer things well, such that I can do a sustained 25 MB/s or so when backing up files to my 500 Gig
  • While I'm sure it's great to get the firewire working at full speed, why don't you just put the drives on separate IDE channels in the same machine. You'll get a much higher throughput.

    You'd still need to use something like xcopy32 (or boot in linux and use tar - if both drives are fat32)... or find a windows version of tar (url:http://unxutils.sourceforge.net/)
  • Are you sure that you're not missing the bit -vs- bytes distinction? A difference of about 1/10 would appear if you are.
  • by R3d M3rcury ( 871886 ) on Friday June 30, 2006 @02:19AM (#15634472) Journal
    I did a freelance gig back in '98 where I had to use a Mac (an 8600/300 w/64 megs of RAM). It took well over 20 minutes to copy a 17 meg file from one folder on the hard drive to another. 20 minutes! At home, on my Pentium Pro 200 running NT4, the same operation would take about 2 minutes.

    (Admit it. You knew this [wikipedia.org] was coming.)
  • Have you tried this? (Score:3, Informative)

    by Anonymous Coward on Friday June 30, 2006 @02:40AM (#15634530)
    There is a problem in Windows XP SP2 with firewire transfer. It could well be the numerous small files creating problems, but it should still be faster than 100mbps Ethernet. Try this blog post regarding Windows XP SP2 Firewire Slowness [hishamrana.com] for a link to the KB and links to a few other workarounds, or just go direct to the KB article [microsoft.com].
  • Use gigabit network cards? Faster than Firewire.

    The fastest way to do this is to put the old drive in the new machine (or perhaps an external drive enclosure if we are talking about a laptop) and copy that way.

    If you are worried about special file or folder attributes then use MSBackup to copy the drive to a backup file as it will preserve everything.
  • Why not just plug in the old machine's hard disk to the new machine? Leave the lid off, have an IDE ribbon cable dangling over the side of the case, prop the disk upright with a chipped mug and a spare copy of Tanenbaum's Minix book... this is the correct old-school approach to moving data and many times faster than anything involving slinging a cable between the two boxes.
    • Dude, although this makes sense, it's also mighty inconvenient, even for a hardware geek (which I am). Firewire would do the trick much better I think. Your method involves opening up both PCs, connecting the ribbons to the other computer. Firewire method involves taking a cable, plug it in to the ports on both machines, and it's ready to go. I think simplicity wins personally, IMHO.
  • by Svenne ( 117693 ) on Friday June 30, 2006 @06:04AM (#15635024) Homepage
    No one's mentioned this?

    Bring up the properties of the firewire disk in "Device Manager". Go to the Policies tab and make sure it's set to "Optimize for performance".
    • Comment removed based on user account deletion
    • I have another post in this article describing my speed issues as well. I've also tried the setting you suggested in the past, and it made no noticeable change at all. Maybe it is useful for short bursts with a few files. The bottleneck always seems to be how Windows treats or handles large amounts of files. It might not even be the number of files, but the latency or overhead involved with opening and closing a file, multiplied by the number you have, adds up to significant delays. Anything above 10k or
  • NSCopy (Score:2, Informative)

    by megabyte405 ( 608258 )
    I use NSCopy for any decently-sized Windows File Sharing file transfer - it can copy a whole directory tree and throttle the speed down or up (to maximum "plaid") Just google for it, it's free.

    If you want more speed, I'd say get FileZilla (an FTP client) and FileZilla Server (an easy-to-use FTP server), both open source and free. Set up the server on the "source" computer, and download as fast as you can! It will use the bandwidth much better.

    One of the other suggestions about moving the hard drive would
  • 10% of firewire 400 is 40mbps; Fast Ethernet is more than that, even with the collisions. So I don't know how you got faster transfers with firewire.

    One option is to just pull the drive from the old machine and use it as the slave drive. I use this when moving large files. Another option is to get a gigabit card, now around $14 everywhere. Newer PCs already have gigabit cards. Just use a crossover cable if you won't buy the (also cheap) gigabit switch.

    As far as firewire is concerned, I've never used it to tra
  • ...I typically use ssh or rsh, depending on whether it's local or not. If it is local, there's no reason to blow the CPU time on [de]compression, and I use rsh. If you use ssh, it looks something like this:

    initiated from the files' location:
    tar cvf - file1 file2..filen | ssh user@host '( cd /someplace ; tar xvf - )'

    initiated from files' destination:
    ssh user@host '( cd /someplace ; tar cvf - file1 file2..filen)' | tar xvf -

    Not exactly a new trick but one that bears repeating. You get prompted for a password a

  • by HunterZ ( 20035 )
    If it's only using 10% of a 400mbps link, that would be 40mbps. How is that faster than a 100mbit ethernet link?
  • Synctoy (Score:2, Informative)

    by joeaic ( 56950 )
    If you don't want to use the command-line xcopy, then I suggest you download a copy of SyncToy from Microsoft.

    Whitepaper: http://www.microsoft.com/downloads/details.aspx?familyid=49818CF1-2287-40EA-8A6F-57BD8695F23D&displaylang=en [microsoft.com]

    Download:
    http://www.microsoft.com/downloads/details.aspx?familyid=E0FC1154-C975-4814-9649-CCE41AF06EB7&displaylang=en [microsoft.com]
  • "FIREHOSE gives you a basic data transfer over multiple network devices supporting TCP/IP layers. Stripe multiple 100Mbit, Gigabit, 10 Gigabit, or firewire to give one humungous pipe for firehosing your gigabytes and gigabytes of data.

    "Unlike RAID striping, FIREHOSE striping load balances the network devices so every ounce of bandwidth is utilized. Combine a 400Mbit firewire eth device with a 100Mbit eth device to get 500Mbits of power. Combine 10 100Mbit ethernet ports for a gigabit pipe. The number of dev

"Look! There! Evil!.. pure and simple, total evil from the Eighth Dimension!" -- Buckaroo Banzai

Working...