Linux Software

bdflush - Streaming Buffer-to-Disk vs. Burst I/O? 11

A not-so Anonymous Coward asks: "I'm trying to sustain heavy data input/output on an FTP server. I have been tweaking the /proc/sys/vm/bdflush and buffermem settings, but the I/O is still very bursty. The disks are IDE and use a lot of CPU when update decides to flush the dirty buffers. Opinions on the net differ: what can I do to leave some CPU for the NIC while, for example, 75% is doing the data-to-disk stream? I have read the Documentation/sysctl/vm.txt and /sysctl/proc and a couple of other hints, but no real improvement so far. Please include detailed motivations and maybe even test results."
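For reference, on a 2.2/2.4-era kernel the bdflush knobs mentioned in the question are poked directly through /proc. The following is only a sketch: the replacement values are illustrative starting points, not tested recommendations, and the exact meaning of each field depends on your kernel version (see Documentation/sysctl/vm.txt).

```shell
# Sketch of bdflush tuning on a 2.2/2.4-era kernel. The values below
# are illustrative only -- check Documentation/sysctl/vm.txt for your
# kernel's field meanings before using them.

# Show the current bdflush parameters. The first field (nfract) is
# the percentage of dirty buffers that wakes up bdflush.
cat /proc/sys/vm/bdflush

# Lowering nfract makes bdflush wake up earlier, so it writes smaller,
# more frequent batches instead of one large burst.
echo "30 500 64 256 500 3000 60 0 0" > /proc/sys/vm/bdflush
```

The trade-off is more frequent, smaller flushes versus fewer, larger ones; the bursty behaviour in the question is the large-batch end of that spectrum.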
This discussion has been archived. No new comments can be posted.

  • by DreamerFi ( 78710 ) <johnNO@SPAMsinteur.com> on Wednesday March 07, 2001 @09:57PM (#377004) Homepage
    You could either spend a lot of time making CPU cycles available to service the NIC, or spend a certain amount of money getting rid of the IDE disks and putting in a better disk subsystem, for example SCSI. I'm sure Slashdot readers will give you lots of other options as well. I don't know how valuable your time is; you'll have to do the math yourself.
  • Go into single user mode first; I forgot to do that once and fsck'd my system.
  • by DAldredge ( 2353 ) <SlashdotEmail@GMail.Com> on Wednesday March 07, 2001 @10:09PM (#377006) Journal
    Try man hdparm to see how to turn DMA on. Test first, because this could cause disk corruption, though I have never had any trouble with it. /sbin/hdparm -d 1 /dev/hdX, where X is the letter of your drive. /sbin/hdparm -t /dev/hdX is a simple benchmark you can use to test your changes. Also check out Linux.com for articles on tuning IDE disks under Linux.
  • by shippo ( 166521 ) on Thursday March 08, 2001 @04:56AM (#377007)
    High CPU utilisation during disk activity on IDE hardware indicates that DMA/UDMA is not being used, and the processor is using PIO to write all the data. DMA offloads the moving of data, leaving the CPU free to do other things.

    Using 'hdparm' may help, but only if your kernel has been compiled with the relevant drivers. Without these, DMA may work, but not at optimum performance. You will have to investigate which IDE chipset you are using.

    Neglecting to use DMA is the most common setup problem for IDE hardware across all operating systems. I know of one major UK company that rolled out Windows 95 and NT, and none of the machines had DMA enabled.

    A poor choice of NIC can also cause high CPU utilisation. Avoid NE2000 clones at all costs as these use PIO.

  • I've never had this problem.

    It is advantageous to run hdparm very early in the boot sequence, ideally before fsck checks the filesystems. Recompiling the kernel to enable DMA automatically, or passing "ide0=dma" as a boot-time option, will also help.
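    As a sketch of the two approaches above (the device name, file locations, and use of LILO are illustrative assumptions; your distribution's init layout and boot loader may differ):

```shell
# Sketch: enable IDE DMA as early as possible. Device names and file
# locations below are illustrative and distribution-specific.

# Option 1: from an early init script (e.g. rc.sysinit or rc.local),
# before heavy disk activity starts:
/sbin/hdparm -d 1 /dev/hda

# Option 2: have the kernel enable DMA on the first IDE channel
# itself, via a boot parameter in /etc/lilo.conf:
#   append="ide0=dma"
# Then rerun /sbin/lilo to install the updated configuration.
```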

  • by Detritus ( 11846 ) on Wednesday March 07, 2001 @10:20PM (#377009) Homepage
    My experience with high speed file I/O on a number of operating systems is that you need to do two things:
    • Bypass the buffer cache.
    • Use large I/O buffers. Try 32K or 64K on PCs.

    Asynchronous I/O is helpful if the operating system supports it.

    A good bus-mastering SCSI host adaptor will take much of the load off the CPU.
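    The large-buffer point above is easy to see with dd (a rough sketch: the file paths are throwaway examples, and the request counts stand in for the per-syscall overhead being described):

```shell
# Rough illustration of the "large I/O buffers" advice: the same
# 1 MB written as many small requests vs. a few large ones.
# File paths are throwaway examples.

# 256 write requests of 4 KB each
dd if=/dev/zero of=/tmp/small.bin bs=4k count=256 2>/dev/null

# 16 write requests of 64 KB each -- same data, far fewer syscalls
dd if=/dev/zero of=/tmp/bigblock.bin bs=64k count=16 2>/dev/null

# Prefix each with "time" to compare the per-request overhead.
```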

  • by Kz ( 4332 )
    The main advice seems to be "get rid of IDE drives, use SCSI", mostly because of the lower CPU utilization (SCSI cards are far more intelligent than the IDE chips on the mainboard)... but SCSI drives are so much more expensive per GB!

    So... check out the Arena Array: it's a hardware RAID box with a SCSI interface, but with IDE bays! It uses a separate IDE channel for each drive, so the throughput is as high as possible, and the host interface is SCSI, so you can use those low-CPU-demanding cards.
  • by Matthew Weigel ( 888 ) on Thursday March 08, 2001 @07:35AM (#377011) Homepage Journal
    So... check out the Arena Array: it's a hardware RAID box with a SCSI interface, but with IDE bays! It uses a separate IDE channel for each drive, so the throughput is as high as possible, and the host interface is SCSI, so you can use those low-CPU-demanding cards.

    No, the throughput is not as high as possible. It's better, sure, and it will take load off the CPU, which might be sufficient, but it's not the best. The problem is that each IDE drive can still only service one request at a time; thus, optimally, you can handle n requests with n drives. Since the question specified both input and output, you can't use a simple RAID 1 setup to serve n reads of the same data. Instead he has to go with striping, which only gives you n reads and/or writes distributed across n drives if each request happens to be for data on a different drive.

    My advice? If you can afford new hardware, buy the best you can afford; the Arena Array may be sufficient, and it will certainly be better, but you'll need to do some research - how many concurrent connections, how the file requests and uploads are distributed, etc. - to see if you can expect an IDE array to fix your problems.

    It boils down to either a) the problem has to be solved, and you'll just have to do whatever it takes to fix it (and I don't think you'll fix it with software), or b) you'd like to fix it, but it's not that important. In case b), hdparm might be your best bet, along with making sure you have a NIC that doesn't need a lot of hand-holding from the CPU.

  • by barneyfoo ( 80862 ) on Thursday March 08, 2001 @07:56AM (#377012)
    I can help you with hdparm, but I'm not familiar with vfs tuning.

    Most others here suggest turning DMA on, which is obvious, but there are many other things you can do with hdparm to help performance.

    This is the command I use on startup:

    hdparm -d1 -c1 -u1 -A1 -a255 -m16 -X66 -W1 /dev/hda

    -d1 -- turn on DMA
    -c1 -- enable 32-bit transfers (helps a lot)
    -A1 -- enable the drive's read-lookahead
    -a255 -- set filesystem read-ahead to 255 sectors (the maximum on IDE drives)
    -m16 -- permits the transfer of multiple sectors per interrupt (16 in this case - the max for my hd)
    -X66 -- set UDMA33 transfer mode. 67 is UATA66, 68 is UATA100... don't bank on that, however.
    -W1 -- enable write caching (this would help you a lot). Make sure your hard drive is stable with all the other settings before trying this, as it's dangerous.

    Also try getting the program powertweak for setting all kinds of tuning parameters. There is a powertweak-gtk with descriptive tooltips as well.
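    A sanity-check sketch to go with a command like that (the device name is illustrative; -T and -t are hdparm's stock cache-read and buffered-read benchmarks):

```shell
# Sketch: benchmark before and after applying tuning flags, so a bad
# setting shows up immediately. /dev/hda is an illustrative device.

/sbin/hdparm -Tt /dev/hda        # baseline: cached and buffered reads

/sbin/hdparm -d1 -c1 -u1 -A1 -a255 -m16 -X66 -W1 /dev/hda

/sbin/hdparm -Tt /dev/hda        # re-run; throughput should improve

# If anything misbehaves, back the risky bits out, e.g.:
#   /sbin/hdparm -W0 -d0 /dev/hda
```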
  • I forgot to mention -u1

    -u1 -- a setting of 1 permits the driver to unmask other interrupts during processing of a disk interrupt, which greatly improves Linux's responsiveness.
  • I forgot to do that once and fsck'd my system.

    Actually, it's probably a good idea to fsck your system before making changes to your hard drive settings anyway. ;)
