
Linux 2.6.17 Released 444

diegocgteleline.es writes "After almost three months, Linux 2.6.17 has been released. The changes include support for Sun Niagara CPUs, a new I/O mechanism called 'splice' which can greatly improve performance for some applications, a scheduler domain optimized for multicore machines, a driver for the widely used Broadcom 43xx Wi-Fi chips (Apple's AirPort Extreme and such), iptables support for the H.323 protocol, CCID2 support for DCCP, a softmac layer for the wireless stack, block queue I/O tracing, and many other changes listed in the changelog."
  • by shird ( 566377 ) on Monday June 19, 2006 @12:10AM (#15559963) Homepage Journal
    Modules... Only the modules (read: 'drivers') that are needed are loaded. It needs to be in the kernel because it accesses the hardware (the net card) at a fairly low level.
  • Re:Go Linux! (Score:5, Informative)

    by freralqqvba ( 854326 ) on Monday June 19, 2006 @12:12AM (#15559970) Homepage
    sendfile(2) is now a call to splice(), so programs that use the old syscall will benefit as well, and without modification.
  • by doti ( 966971 ) on Monday June 19, 2006 @12:28AM (#15560005) Homepage
    Some stuff I found interesting on the human-friendly changelog [kernelnewbies.org].

    Block queue IO tracing support (blktrace). This allows users to see any traffic happening on a block device queue. In other words, you can get very detailed statistics of what your disks are doing. User-space support tools are available at: git://brick.kernel.dk/data/git/blktrace.git

    New /proc file /proc/self/mountstats, where mounted file systems can export information (configuration options, performance counters, and so on).

    Introduce the splice(), tee() and vmsplice() system calls, a new I/O method.
    The idea behind splice is an in-kernel buffer that the user has control over: splice() moves data to/from the buffer from/to an arbitrary file descriptor, while tee() copies the data in one buffer to another, i.e. it "duplicates" it. The in-kernel buffer, however, is implemented as a set of reference-counted pointers, which the kernel moves around without actually copying the data. So while tee() "duplicates" the in-kernel buffer, in practice it doesn't copy the data but increments the reference counts, avoiding extra copies. In the same way, splice() can move data from one end to the other, but instead of bringing the data from the source into the process' memory and sending it back to the destination, it just moves it, avoiding the extra copy. This new scheme can be used anywhere a process needs to forward something from one end to another without touching or even looking at the data. Avoiding extra copies means you don't waste time copying data around, which is a huge performance improvement. For example, you could take data coming from an MPEG-4 hardware encoder, tee() it to duplicate the stream, write one of the streams to disk, and send the other to a socket for a real-time network broadcast, all without actually physically copying it around in memory.
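
    For illustration, a minimal userland sketch of that encoder example (the descriptors are hypothetical, glibc wrappers for the new syscalls are assumed, and error handling / short-write loops are trimmed):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    /* Duplicate an encoder pipe to a disk file and a socket without
     * the payload ever passing through a user-space buffer. */
    int broadcast(int encoder_pipe, int diskfd, int sockfd)
    {
        int p[2];
        ssize_t n;

        if (pipe(p) < 0)
            return -1;
        for (;;) {
            /* tee() duplicates buffer references into the second
             * pipe; no payload bytes are copied. */
            n = tee(encoder_pipe, p[1], 65536, 0);
            if (n <= 0)
                break; /* 0 = EOF, <0 = error */
            /* splice() then moves those pages to each destination. */
            splice(encoder_pipe, NULL, diskfd, NULL, n, SPLICE_F_MOVE);
            splice(p[0], NULL, sockfd, NULL, n, SPLICE_F_MOVE);
        }
        close(p[0]);
        close(p[1]);
        return n < 0 ? -1 : 0;
    }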
  • Re:Go Linux! (Score:4, Informative)

    by ip_fired ( 730445 ) on Monday June 19, 2006 @12:42AM (#15560037) Homepage
    The kernel is written in C, and so are those system calls. I don't believe you can overload a C function.
  • Re:Video Editing? (Score:1, Informative)

    by Anonymous Coward on Monday June 19, 2006 @12:46AM (#15560049)
    Like using MOL (Mac-On-Linux)?
  • Microkernel anyone? (Score:4, Informative)

    by argoff ( 142580 ) on Monday June 19, 2006 @12:46AM (#15560051)
    Why are the network drivers part of the kernel? It seems like this would make it more difficult to adopt newer hardware types. Also, since most computers have 1-2 NICs at the most, wouldn't that clog up the kernel with tons of drivers for hardware you'll never use?

    This is the essence of the microkernel debate. http://en.wikipedia.org/wiki/Microkernel/ [wikipedia.org] The truth is that the microkernel model probably is a better design, but at the time the Linux kernel was starting out, its implementation simply wasn't practical. It didn't help that the people who thought they knew how to build a better kernel decided to try to intellectually browbeat Linus into doing it instead of implementing it themselves and putting it under the GPL. This led to a lot of bitterness and resentment between the two camps. The Hurd http://en.wikipedia.org/wiki/Hurd [wikipedia.org] project is a GPL microkernel project, but it simply wasn't managed as well as Linus managed Linux.

    I think things will eventually move to a microkernel model, even though there are other ways to emulate some of its security and flexibility benefits - like Xen http://en.wikipedia.org/wiki/Xen [wikipedia.org]

  • by billstewart ( 78916 ) on Monday June 19, 2006 @12:57AM (#15560077) Journal
    So you've learned what RTFM means by now? :-) OK, it's been a while since I've read up on kernel structure either... but you _should_ do so. Linux is rather famously [coyotos.org] not a microkernel architecture [oreilly.com] that lets you partition off little pieces into user space - it's a big honkin' kernel plus loadable modules that let you add even more things. There are hardware-dependent and hardware-independent parts of the kernel. Device drivers are inherently hardware-dependent, and sharing address space with the kernel makes it easier to do things like DMA without a lot of data copying.

    As far as network drivers in particular go, the layers that use them, such as IP, live in the kernel, so it's rather annoying for them to talk to drivers that are up in user space. Specific network cards, especially wireless, might have bits that live up in user space, such as user interfaces for loading in crypto keys, but the bulk data transfer applications normally belong down in or near the kernel.

    Why are there a whole pile of network card drivers in the kernel when you'd normally use only one or two? For the same reason there's a whole pile of drivers for other devices in the kernel when you've normally only got one graphics card and one sound card: if you're shipping a pre-compiled kernel, you want it to support as many different users as possible, and all it costs is some RAM for the code you're not using - or, if you build them as loadable modules, just the work of keeping track of them. But if you want to compile your own kernel for specific machines, leave out the drivers you don't want, and while you're at it compile all your application programs with the level of optimization your hardware supports, get a copy of Gentoo Linux [gentoo.org] and have fun learning lots more detail about Linux internals.

  • Re:Video Editing? (Score:5, Informative)

    by Anonymous Coward on Monday June 19, 2006 @01:05AM (#15560101)
    Insightful? How about Kino [kinodv.org] or Cinelerra [cinelerra.org] or Lives [sf.net] or Mainactor [mainconcept.com]?
  • not like that (Score:5, Informative)

    by r00t ( 33219 ) on Monday June 19, 2006 @01:23AM (#15560151) Journal
    This is really just a way for app code to manipulate data without needing to have it copied or memory-mapped.

    Linus refused the FreeBSD-style zero-copy because it is often a loss on SMP and on modern hardware: page table and TLB updates have huge costs there.

    If you do like the Microsoft way, use Red Hat's kernel. The in-kernel server works very well.

  • by nick this ( 22998 ) on Monday June 19, 2006 @01:29AM (#15560170) Journal
    I read that as IP connection tracking to allow videoconferencing devices that follow the H.323 standard to be NATed.

    obtw: your pedant bit is apparently stuck high. Just an FYI - didn't know if you realized it. :)

  • Re:Go Linux! (Score:5, Informative)

    by pavon ( 30274 ) on Monday June 19, 2006 @01:34AM (#15560180)
    Obviously, though, it is necessary to write new functions on occasion; for example, when the new function would be worse than the old one under some circumstances.

    That is exactly why it was done. More information about it can be found at KernelTrap: here [kerneltrap.org], and here [kerneltrap.org]. It was also previously on slashdot [slashdot.org], although you'd do best to skip that - it has more misinformation than the other kind.

    In short, all the known ways of implementing zero-copy within the existing API's cause the most common usage cases of those API to be slower than they are now. Therefore, it made more sense to export this new API for the applications where speed is critical.

    In the first KernelTrap article, Linus also explains why splice is different from sendfile, contrary to the posts here claiming they are essentially the same.

  • Re:Go Linux! (Score:3, Informative)

    by iabervon ( 1971 ) on Monday June 19, 2006 @01:51AM (#15560213) Homepage Journal
    In this case, the new syscall is because a specific situation can be optimized compared to using the existing functions, but the more efficient function only works at all for certain special (but important) cases. In this case, the optimization is that copying data from outside of the program to outside of the program is more efficient if the data doesn't have to go through the program; obviously, this can't be used for the common case where the program is trying to use the data. The case that it helps a lot is when a program is sending data from the disk to the video card, or network to the disk, etc. With the new syscall, the data doesn't have to be copied into the program's address space (or the program's page tables changed to bring it in, which generally costs a TLB flush).

    Programs that want to be backwards compatible can rely on getting an error result of ENOSYS when they try, and can then fall back to using the traditional method.
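
    A minimal sketch of that fallback pattern (the helper name and descriptors are invented; a glibc splice() wrapper is assumed, error handling trimmed):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Forward up to 'len' bytes from a pipe to 'out': use splice()
     * when the kernel has it, fall back to read()/write() when not. */
    ssize_t forward(int pipe_in, int out, size_t len)
    {
        char buf[4096];
        ssize_t n = splice(pipe_in, NULL, out, NULL, len, 0);
        if (n >= 0 || errno != ENOSYS)
            return n; /* splice worked, or failed for some other reason */
        /* Pre-2.6.17 kernel: ordinary copy through user space. */
        n = read(pipe_in, buf, len < sizeof(buf) ? len : sizeof(buf));
        return n <= 0 ? n : write(out, buf, n);
    }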
  • by Osty ( 16825 ) on Monday June 19, 2006 @02:01AM (#15560242)

    Why not try an Intel board? Not to be a fanboy or anything, but their Linux support is good. Intel NICs can't be beat for support and performance, either.

    One may find it difficult to fit an AMD CPU on an Intel motherboard. Pesky competition.

    As the grandparent said, he had to exchange a motherboard. That means he wasn't intentionally upgrading, and thus expected to continue using the rest of his hardware (memory, disk, CPU). Were this a system upgrade rather than the replacement of a faulty part, it might be useful to suggest he look at an Intel-based solution. As it is, he obviously already had an AMD CPU, and so he needed another motherboard that would work with said CPU.

  • Re:Missing driver? (Score:5, Informative)

    by WhodoVoodoo ( 319477 ) on Monday June 19, 2006 @02:11AM (#15560259)

    Try hitting '/' in make menuconfig, type ov511, and hit Enter. That's a hot tip that's saved me quite a bit of time...
    It'll find it if it's there.
  • by Lobais ( 743851 ) on Monday June 19, 2006 @02:13AM (#15560262)
    But on the other hand, there are nearly no drivers for FreeBSD and Solaris.
  • Re:Where is 2.7? (Score:5, Informative)

    by x2A ( 858210 ) on Monday June 19, 2006 @02:21AM (#15560278)
    The stable/development branches might be a nice idea in theory, but in practice it didn't work. Distros would ship, for example, a "stable" 2.4.xx kernel - except it wouldn't actually be that. They would spot nice features in the 2.5 kernel that they wanted to offer their users and back-port them... along with any other nice patches floating around the net while they were at it. The result was that the kernels that shipped with distros were so heavily modified that stability (from one machine to another) went right out of the window. You couldn't go to kernel.org and download an updated kernel, since without all the patches it wouldn't work. So you had to stick to the distro's kernels.

    So instead, the 2.6 goal is to have development/stable parts of the cycle, rather than separate branches. Roughly: patches that could break things get submitted at the beginning of the cycle, and -pre1/-pre2 tarballs are released. If you want bleeding edge, you go here. Release candidates are then released, where developers get a chance to fix bugs in the code. Then any code that's still [known to be] buggy gets dropped for the final release (e.g., 2.6.17). The developer can keep working on it and try to add it again during subsequent cycles. When it works, it can be included in a final release.

    During this cycle, security and other urgent bug fixes take place in the ultra-stable branch, with versions such as 2.6.16.1, 2.6.16.2.

    (This is the rough idea, I believe; there could be some slight inaccuracies in how it actually takes place - I haven't followed it 100% - but this should be close enough to give the right idea.)

  • by nick this ( 22998 ) on Monday June 19, 2006 @02:21AM (#15560280) Journal

    Unlikely or not, that's what it appears to be. h.323 conntrack nat helper [netfilter.org]

    This patch (or module, actually) comes with an H.323 decoding library that is based on H.225 version 4, H.235 version 2 and H.245 version 7. It is extremely optimized for the Linux kernel and decodes only the absolutely necessary objects in a signal. ... The total size of code plus data is less than 20 KB.

    Doesn't look like a gatekeeper or anything; that looks like an honest-to-God ip_conntrack NAT implementation.

    For the other responder to my initial post. I have taken your offer into consideration but have decided to decline.
    lol.

  • Re:Where is 2.7? (Score:5, Informative)

    by iabervon ( 1971 ) on Monday June 19, 2006 @02:25AM (#15560288) Homepage Journal
    That was the theory. But in practice, if Y was even, the kernel was obsolete, while if Y was odd, the kernel was broken. Except, of course, 2.even.0, which was actually stable, but broke compatibility with the previous kernel that worked. And occasionally, 2.even was kept up-to-date because nobody could use 2.odd for development, because it didn't work at all. You could tell that the old model didn't actually work, because no distribution shipped any kernel that used that model; they all shipped 2.even with an arbitrary set of patches (generally hundreds) from 2.odd and elsewhere. With the new model, distros are shipping kernels with only a few patches, and those patches are getting merged upstream.

    The stable kernels aren't remotely on the bleeding edge; they contain only features which have been tested over the past three months, after being filtered out of the bleeding-edge development as being things that have already stabilized and stand a good chance of being proven in three months. It's effectively very similar, except the development series isn't left known-broken and the stabilization process happens on a quick schedule, with stuff that isn't ready pushed off to the next cycle rather than delaying the current cycle. Also, the version numbers change by less (development gets -mm, -rc, or -git; stable series change the third digit by one instead of the second by two; and bugfix releases change the fourth digit instead of the third).
  • Re:Video Editing? (Score:4, Informative)

    by Bent Mind ( 853241 ) on Monday June 19, 2006 @02:33AM (#15560303)
    I've been doing video editing with Avidemux [fixounet.free.fr]. It's a nice little program for Windows, OS X, and Linux.
  • by Frater 219 ( 1455 ) on Monday June 19, 2006 @02:44AM (#15560325) Journal
    "Zero copy" tends to be overrated. It makes some benchmarks look good, but it's only useful if your system is mostly doing very dumb I/O bound stuff. In environments where web pages have to be ground through some engine like PHP before they go out, it won't help much.
    On the contrary, there are many cases in a dynamic serving system where you can determine that, after some point, the rest of the operation merely involves copying data from a file or buffer out to the network. Or, similarly, that a large portion of the operation involves such copying.

    So even though the whole operation can't be reduced to a splice() or sendfile(), a substantial portion of it can. And the speed improvement you take isn't just that you avoid copying -- as "zero copy" implies; you also avoid unnecessary cache dirtiness.

    The usual effect of adding "zero copy" to something is that the performance goes up a little, the complexity goes up a lot, and the number of crashes increases.
    I wonder what your sample size is for "usual" there. As far as I can tell, you discuss only one case where this is so: Windows. And since we know that Windows has lots of other architectural problems leading to crashes, and in any event has an architecture entirely different from Linux's, we know that case to be irrelevant.

    All in all, your analysis is critically, embarrassingly bad. There is no "serving Web pages from the kernel" going on here. There is simply an optimization for a common case, with no degradation to the less-common cases -- that's why it's implemented as a separate system call.

  • by Nutria ( 679911 ) on Monday June 19, 2006 @02:50AM (#15560336)
    just eat up a few megs of space in /usr

    /lib/modules/$(uname -r)
    but we know what you meant...
  • Re:not like that (Score:3, Informative)

    by master_p ( 608214 ) on Monday June 19, 2006 @03:25AM (#15560402)
    "This is really just a way for app code to manipulate data without needing to have it copied or memory-mapped."

    I think you are wrong. Spliced data is not processed by userland at all: it is piped from one file to the other at the kernel level by page copying.
  • That and (Score:5, Informative)

    by Sycraft-fu ( 314770 ) on Monday June 19, 2006 @03:29AM (#15560407)
    For kernel operations, you want everything to be efficient. You want it as fast as possible, and you don't want a lot of extra code hanging around. Unfortunately, the higher-level a language you use, the more inefficiency there is. For most programs it doesn't matter: they are either not the sort of thing that needs speed (like a word processor), or the sort where you can optimize the small part of the code that takes most of the time (like a game). The kernel is a little different, though - essentially everything in there is time-critical.

    C is the best compromise. While assembly might give you the theoretically best code, it would be a giant mess to write, totally unmaintainable, and might actually end up slower and larger for it. C is pretty good because it's easy enough to generate decent code in, but it isn't much higher up the abstraction chain, so it compiles quite efficiently.

    You have to remember that object orientation and such are all human creations. Processors don't think in objects; for that matter, they don't really even think in functions. They think in memory locations, and jumps to those locations. Doing OO code means a whole messy layer the compiler has to go through to translate it into something the processor actually understands.
  • Broadcom 43xx HOWTO: (Score:5, Informative)

    by cbhacking ( 979169 ) <been_out_cruisin ... m ['hoo' in gap]> on Monday June 19, 2006 @04:22AM (#15560487) Homepage Journal
    Haven't tried the release of 2.6.17 yet, but the rcX versions required extracting the firmware for your Broadcom card from a binary such as bcmwl5.sys (the Windows driver). The tool bcm43xx-fwcutter [berlios.de] does this.

    I'm not an Ubuntu guy, but this reference [ubuntuforums.org] might be useful to anybody trying to make the new Broadcom Wi-Fi driver work in Linux. Very easy steps, and most non-Ubuntu users should find it easy to adapt to their specific distros.
  • Re:Go Linux! (Score:4, Informative)

    by waveclaw ( 43274 ) on Monday June 19, 2006 @04:32AM (#15560500) Homepage Journal
    The kernel is written in C, and so are those system calls. I don't believe you can overload a C function.


    There is no overloading going on here. Overloading is to create a new function with the same name, but taking different parameters.

    Ahem. The original function, sendfile(2), was rewritten to call splice() instead of doing something else.

    Everybody that wrote code that used the old function now has to deal with splice() running instead of the old function's logic.

    Just to hammer it home:
    Old - app -> sendfile(2) -> some logic -> return to app
    New - app -> sendfile(2) -> splice() -> splice's logic -> return to sendfile(2) -> return to app

    With the Linux kernel, as this exemplifies, you can improve the original code and get everyone (well, everyone too lazy to revert the changes) to use it. In this case you have a fixed API (sendfile(2), which is well known and published), so you don't want to just tell everybody to recompile with calls to splice().

    See the difference? Feel the difference.

    The kernel is GPL, and thus the actual source code used to compile the binary kernel you use is available to you. With a closed-source kernel, you might be able to purchase an SDK with linkable binaries and some (probably undocumented) header files. Programmers in that situation need things like function overloading and class inheritance just to do anything. One way of looking at the history of languages like C++ is as a technical solution to the ethical problem of closed-source programming. Those languages focus on extending from the outside; with OSS you can usually replace, fix, and improve on the inside. BSD and GNU differ on the point of GNU wanting everyone to share the source of those fixes if they share the resulting binaries. But I digress.

    And I can't wait to see if this breaks something.
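
    To make that concrete, a caller-side sketch (descriptors hypothetical): code like this needed no changes for 2.6.17, yet now runs through splice() internally.

    #include <sys/types.h>
    #include <sys/sendfile.h>

    /* Unchanged application code: push a file out over a socket. */
    ssize_t send_whole_file(int sockfd, int filefd, size_t count)
    {
        off_t offset = 0;
        return sendfile(sockfd, filefd, &offset, count);
    }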
  • by Anonymous Coward on Monday June 19, 2006 @04:52AM (#15560534)
    With this module you can support H.323 on a connection tracking/NAT
    firewall.

    This module supports RAS, Fast-start, H.245 tunnelling, RTP/RTCP
    and T.120 based data and applications including audio, video, FAX,
    chat, whiteboard, file transfer, etc. For more information, please
    see http://nath323.sourceforge.net/ [sourceforge.net].
  • Re:module shotguns (Score:5, Informative)

    by wertarbyte ( 811674 ) on Monday June 19, 2006 @05:07AM (#15560555) Homepage
    Many a Linux distribution I've used (most notably Debian) applies the "shotgun" approach to module loading because the hardware detection and hotplug methods are so convoluted and undependable. Kind of defeats the purpose of loadable modules if the distribution simply loads everything under the sun to see what sticks.
    Obviously you haven't used Linux for a long time. Modules are not loaded to detect hardware; instead, the hardware acquires the driver module: the kernel identifies hardware via PCI or USB device IDs, which are also stored in the modules. So hotplug (and newer versions of udev) can load the appropriate module once hardware is added to the system.
    Worse, many modules aren't smart enough to determine "hey, I'm a driver for [some non-removable component]. If I can't find my hardware, maybe I should print an error to ksyslogd and unload myself."
    The driver will not be loaded if there is no hardware, unless you explicitly tell your system to do so.
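
    For the curious, the way a module "acquires" its hardware is a device ID table compiled into the module; hotplug/udev matches the IDs the kernel reports against the aliases this table generates. A kernel-side sketch (names hypothetical, IDs for illustration only):

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/pci.h>

    /* The PCI vendor/device pairs this driver claims to handle. */
    static struct pci_device_id example_ids[] = {
        { PCI_DEVICE(0x14e4, 0x4318) }, /* e.g. a Broadcom BCM4318 */
        { 0, }
    };
    MODULE_DEVICE_TABLE(pci, example_ids);

    static int __init example_init(void) { return 0; }
    static void __exit example_exit(void) { }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");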
  • Re:not like that (Score:3, Informative)

    by cnettel ( 836611 ) on Monday June 19, 2006 @05:45AM (#15560608)
    Your parent is right. The user mode code can control what happens to data, without ever mapping it to its own memory. You are right in that it's not processed, but that's not what the original post said.
  • by ookaze ( 227977 ) on Monday June 19, 2006 @06:26AM (#15560652) Homepage
    The "splice" system call seems to be an answer to one of Microsoft's bad ideas - serving web pages from the kernel

    What is this nonsense? The khttpd in-kernel web server was implemented on Linux first, then copied by MS.
    IIRC it isn't even in 2.6 kernels anymore.

    So now Linux has a comparable "zero copy" facility

    Linux already had a zero-copy facility; splice is just a new, improved one.
    What are you talking about?

    "Zero copy" tends to be overrated. It makes some benchmarks look good, but it's only useful if your system is mostly doing very dumb I/O bound stuff. In environments where web pages have to be ground through some engine like PHP before they go out, it won't help much.
    The usual effect of adding "zero copy" to something is that the performance goes up a little, the complexity goes up a lot, and the number of crashes increases


    Not in this case. Linux got it right.
  • Re:linux (Score:3, Informative)

    by ElleyKitten ( 715519 ) <kittensunrise AT gmail DOT com> on Monday June 19, 2006 @06:50AM (#15560681) Journal
    The Broadcom driver that came with Ubuntu (same sources, maybe an earlier version) has some sort of issue with my BCM4318 :(, so it just doesn't work.
    Try this. [ubuntu.com]
  • Re:Video Editing? (Score:4, Informative)

    by miracle69 ( 34841 ) on Monday June 19, 2006 @07:34AM (#15560750)
    I recently ran across Jahshaka [jahshaka.org], which is also cross-platform.
  • Re:Broadcom 57xx (Score:2, Informative)

    by LinuxOnEveryDesktop ( 14145 ) on Monday June 19, 2006 @08:05AM (#15560807) Homepage
    what about the Broadcom 57xx?

    That's Broadcom's wired gigabit interface. It has *long* been supported by the tg3 driver. Heck, Broadcom even has an alternative GPL'd driver for this interface downloadable from their website.
  • Re:module shotguns (Score:5, Informative)

    by FireFury03 ( 653718 ) <slashdot&nexusuk,org> on Monday June 19, 2006 @08:18AM (#15560840) Homepage
    Sure there is. There's just not a consistent ABI, and that's on purpose.

    If you're contributing a driver, GREAT. It'll compile against the currently installed kernel just fine.


    Untrue, I'm afraid. If your modules aren't in-tree, then they *will* break every so often, because the kernel API is not stable - especially under the 2.6 development model. Under the previous 2.4/2.5 model you were pretty much guaranteed that API breakages would only happen in the 2.5 tree; now they can happen at any point in the 2.6 tree. (Yes, I do know this stuff - I work on out-of-tree kernel code.)

    There is some argument that all drivers should be in-tree, and for common hardware it is definitely a Good Thing to have the drivers in the tree - as the API changes, the person implementing the API change will fix up all the in-tree code that uses that API.

    For very specialist and expensive hardware it poses a problem, though: the person who makes the API change won't have the hardware to test with, and probably all the people who use that hardware are on enterprise distributions, so breakages to the module won't be spotted for a long time. It's hard for the hardware vendor to track these kinds of updates and perform the necessary regression testing.
  • Re:Go Linux! (Score:3, Informative)

    by SpinyNorman ( 33776 ) on Monday June 19, 2006 @09:16AM (#15561024)
    Overloading is just syntactic sugar - it doesn't give you any functionality.

    There's no functional difference between using an overloaded name f(a), f(x, y), f(p, q, r) or three separate ones f_a(a), f_xy(x, y), f_pqr(p, q, r).

    If you want default arguments C has them, and if you want polymorphism then C has it too (function pointers).
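
    A minimal sketch of that function-pointer polymorphism (names invented, of course):

    #include <stdio.h>

    typedef double (*area_fn)(double);

    double circle_area(double r) { return 3.14159265 * r * r; }
    double square_area(double s) { return s * s; }

    /* The call site dispatches through the pointer; it neither knows
     * nor cares which concrete function runs. */
    void print_area(area_fn area, double x)
    {
        printf("%f\n", area(x));
    }

    int main(void)
    {
        print_area(circle_area, 2.0); /* 12.566371 */
        print_area(square_area, 2.0); /* 4.000000 */
        return 0;
    }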
  • Re:S-ATA hotplug (Score:2, Informative)

    by The_Morgan ( 89220 ) <exadeath@NoSPam.yahoo.com> on Monday June 19, 2006 @09:33AM (#15561091)
    I set up a backup system using DAR and SATA hard drives, since they are a fairly cheap backup medium. I wanted hotplug - really just hotswap, so I could take out the full drive and put in an empty one. The commands I found after searching went along the lines of (the four numbers are the SCSI host, channel, ID, and LUN):

    echo "scsi remove-single-device 3 0 0 0" >/proc/scsi/scsi
    echo "scsi add-single-device 3 0 0 0" >/proc/scsi/scsi

    It has to be watched through dmesg; sometimes the drive migrates from /dev/sdX to the next unused letter. The other thing I noticed is that the drive has to be online while booting to be available; it wouldn't find a new drive. It has been a while since I set the system up, so I should upgrade to find out if this still applies.
  • Re:Video Editing? (Score:3, Informative)

    by Khuffie ( 818093 ) on Monday June 19, 2006 @09:37AM (#15561103) Homepage
    Spoken like a true Mac fanboy. Final Cut Pro is great, but Premiere Pro really narrows the gap. And After Effects still remains one of the better compositing tools out there.
  • by Dan Ost ( 415913 ) on Monday June 19, 2006 @09:52AM (#15561167)
    I agree that using GOTO is a bad idea when another control structure is adequate, but, at least in C, there are times when using GOTO is the most natural and, unequivocally, the best choice.

    Off the top of my head, I can think of two situations where using a GOTO is the best solution:

    1. Breaking out of nested loops. In C, the break command can only break out of a single loop level. If you need to break out of 2 or more loops, you can play an ugly game of setting and checking state flags at each level of looping, or you can simply create a label at the exit point and use GOTO to get there. (Sometimes you can wrap your loops in a function call, but that's often the ugliest solution.)

    2. Shared cleanup code. In a function with multiple exit points, instead of doing cleanup at each exit point, it is often clearer to set your return value and then GOTO a label that handles all cleanup before returning.

    Be cautious when using GOTO, but don't be afraid of it. Learn to recognize when GOTO is appropriate and when it should be avoided.
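
    A minimal sketch of case 1 (example invented, naturally):

    /* Break out of two loops with one goto instead of a state flag
     * checked at every level. */
    int find(int grid[3][3], int target, int *row, int *col)
    {
        int i, j;
        for (i = 0; i < 3; i++)
            for (j = 0; j < 3; j++)
                if (grid[i][j] == target)
                    goto found;
        return 0; /* not found */
    found:
        *row = i;
        *col = j;
        return 1;
    }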
  • Re:S-ATA hotplug (Score:3, Informative)

    by labratuk ( 204918 ) on Monday June 19, 2006 @11:21AM (#15561699)
    http://linux-ata.org/software-status.html#hotplug [linux-ata.org]

    It's not just a matter of enabling it, it's about making it reliable.

    More here: http://linux-ata.org/features.html [linux-ata.org]
  • by Anonymous Coward on Monday June 19, 2006 @12:53PM (#15562425)
    Networking -->
         <M>   Generic IEEE 802.11 Networking Stack
         [*]     Enable full debugging output
         <M>     IEEE 802.11 WEP encryption (802.1x)
         <M>     IEEE 802.11i CCMP support
         <M>     IEEE 802.11i TKIP encryption
         <M>     Software MAC add-on to the IEEE 802.11 networking stack
         [*]       Enable full debugging output

    Device Drivers --> Network device support --> Wireless LAN drivers (non-hamradio) & Wireless Extensions
         <M>   Broadcom BCM43xx wireless support
         [*]     Broadcom BCM43xx debugging (RECOMMENDED)
  • by Dan Ost ( 415913 ) on Monday June 19, 2006 @12:54PM (#15562428)
    1) If you can create a condition where a goto is to be placed, you can add that same condition to the top loop in the nest and let it exit out gracefully.

    If that leads to clearer code, then in the cases where you can do that, fine. Do that.

    However, there are situations where a condition doesn't make sense until you've already entered the nested loops at least once (for example, when allocating lots of chunks of memory, you can't test whether you've successfully allocated memory until after you've tried to allocate it). Also, if there are several conditions that might require a break, but they can all be handled the same way (at least until after you break out of your loops), do you really want each one to be tested at every loop test? Think how big and confusing that would make the continuation test for your outer loops.

    2) Use a clean-up function. It will return to the correct place without all the spaghetti code.

    There's nothing wrong with using cleanup functions if they are convenient for your particular purpose, but if you have to free 11 objects before returning, then you'll need to pass all 11 to the cleanup function each time you call it. I don't know about you, but I usually find functions with 5+ arguments to be ugly. I would rather simply have a 'goto cleanup' that jumps to a label that does all the cleanup in place. An acceptable compromise would be to define a macro that does the cleanup in place but hides it from casual code inspection, keeping the code clear while avoiding the use of GOTO.

    Using GOTO in the manners I've described will not lead to spaghetti code, since the flow of control will be clear and uni-directional (the antithesis of spaghetti code). In case (1), the use of GOTO is equivalent to raising an exception in Java, C++, or Python from within the loop and catching it outside the loop (idioms commonly accepted in all three communities). In case (2), the use of GOTO maps multiple exit points to a single exit point. If you feel that these techniques qualify as spaghetti code, then I would suggest that you've never seen real spaghetti code.

    When Dijkstra wrote "Goto considered harmful", he was talking about using GOTO to jump outside the scope of the current function, something not possible with C's goto (C's goto can only jump to a label within the current function). See BASIC and Pascal (I think) for examples of GOTO that can jump anywhere in the program.
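
    And a minimal sketch of case 2, the shared-cleanup pattern (example invented):

    #include <stdlib.h>

    /* Each failure jumps forward to undo exactly what has been done
     * so far; the flow of control stays clear and uni-directional. */
    int make_buffers(char **a, char **b, char **c)
    {
        *a = malloc(1024);
        if (*a == NULL) goto fail_a;
        *b = malloc(1024);
        if (*b == NULL) goto fail_b;
        *c = malloc(1024);
        if (*c == NULL) goto fail_c;
        return 0;
    fail_c:
        free(*b);
    fail_b:
        free(*a);
    fail_a:
        return -1;
    }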
