Linux 2.6.17 Released
diegocgteleline.es writes "After almost three months, Linux 2.6.17 has been released. The changes include support for Sun Niagara CPUs, a new I/O mechanism called 'splice' which can greatly improve performance for some applications, a scheduler domain optimized for multicore machines, a driver for the widely used Broadcom 43xx wifi chip (Apple's AirPort Extreme and such), iptables support for the H.323 protocol, CCID2 support for DCCP, a softmac layer for the wireless stack, block queue IO tracing, and many other changes listed in the changelog"
linux (Score:2, Funny)
Re:linux (Score:3, Interesting)
They have included much of the stuff in their version of 2.6.15.23,
but of course that doesn't have everything. The broadcom driver that came
with Ubuntu (same sources, maybe an earlier version) has some sort of issue
with my BCM4318, something to do with soft interrupt stuff.
ps. broadcom, next time make interrupts stiff.
Re:linux (Score:3, Informative)
Really helped (Score:4, Interesting)
Re:Video Editing? (Score:2, Funny)
Re:Video Editing? (Score:3, Informative)
Re:Video Editing? (Score:4, Funny)
Re:Video Editing? (Score:2)
Re:Video Editing? (Score:5, Informative)
Re:Video Editing? (Score:4, Informative)
Re:Video Editing? (Score:4, Informative)
Re:Video Editing? (Score:2)
I'm currently using MainActor 5.5 (for Linux).
Re:Video Editing? (Score:3, Funny)
Re:Really helped (Score:2, Insightful)
Re:Great, how about stable firewire support someda (Score:5, Funny)
The above comment has been marked WORKSFORME, and is now closed.
-:sigma.SB
what (Score:2, Funny)
Question for the masses. (Score:3, Interesting)
Why are the network drivers part of the kernel? It seems like this would make it more difficult to adopt newer hardware types. Also, since most computers have 1-2 NICs at the most, wouldn't that clog up the kernel with tons of drivers for hardware you'll never use?
Re:Question for the masses. (Score:5, Informative)
Re:module shotguns (Score:5, Informative)
Re:module shotguns (Score:5, Insightful)
Sure there is. There's just not a consistent ABI, and that's on purpose.
If you're contributing a driver, GREAT. It'll compile against the currently installed kernel just fine.
If it's closed-source, go die. The kernel's GPL, not LGPL.
Re:module shotguns (Score:5, Informative)
If you're contributing a driver, GREAT. It'll compile against the currently installed kernel just fine.
Untrue, I'm afraid. If your modules aren't in-tree then they *will* break every so often, because the kernel API is not stable. Especially under the 2.6 development model - under the previous 2.4/2.5 model you were pretty much guaranteed that API breakages would only happen in the 2.5 tree; now they happen at any point in the 2.6 tree. (Yes, I do know this stuff - I work on out-of-tree kernel code.)
There is some argument that all drivers should be in-tree, and for common hardware it is definitely a Good Thing to have the drivers in the tree - as the API changes, the person implementing the API change will fix up all the in-tree code that uses that API.
For very specialist and expensive hardware it poses a problem though: the person who does the API change won't have the hardware to test with, and probably all the people who use that hardware are using enterprise distributions so breakages to the module won't be spotted for a long time. It's hard for the hardware vendor to track these kinds of updates and perform the necessary regression testing.
Re:Question for the masses. (Score:2)
Second, usually you don't want to compile every driver into the kernel, so you wouldn't get that clutter. Best case scenario, you compile in only the specific driver you'll need. Worst case, you compile them as modules and load them at runtime.
Re:Question for the masses. (Score:2)
I think you've got that mixed up. Unless you enjoy recompiling the kernel every time the hardware changes.
Re:Question for the masses. (Score:3, Insightful)
In the grandparent's instance, his hardware may not change for months or years on end because, well, he doesn't want to shut his computers or servers down to experiment with random hardware... Because of this, it might make sense for him to compile the drivers directly into the kernel for a tiny boost in performance and memory utilization... That would make sense for embedded computers, too, obviously.
Re:Question for the masses. (Score:2)
Re:Question for the masses. (Score:5, Interesting)
Many have argued that Linus needs to stabilize the kernel driver ABI. On the other hand, not doing so - and instead encouraging drivers to be open source and in the kernel source tree - brings us a large amount of stability that Windows just cannot achieve. Most Windows stability problems are not caused by the kernel, which is as stable as Linux, but by third-party device drivers. Anyway, it is a trade-off, and one that is hotly contested. Personally, everything I currently use has open source drivers that come with my kernel bundle (Fedora Core). They are loaded on demand, so they don't cause memory bloat. If I were to compile my own kernel, I could choose not to build many of the drivers, reducing the disk bloat too.
One of the biggest things for me in this kernel release is the Broadcom wireless driver. Kudos to the team that clean-room reverse engineered the driver.
Re:Question for the masses. (Score:5, Interesting)
It's worth pointing out that pretty much every remotely mainstream OS *except* Linux manages to work (and work well) with a stable kernel ABI. Including ones considered at least - if not more - stable than Linux, even by Linux zealots, like FreeBSD and Solaris.
Re:Question for the masses. (Score:5, Insightful)
The FreeBSD example then just proves that a stable ABI won't bring more drivers to Linux, thus destroying the GP's argument that Linus needs to stabilize the kernel driver ABI.
Re:Question for the masses. (Score:5, Insightful)
Also worth pointing out that much of the stability trouble in Windows is caused by shoddy drivers - FOSS drivers are traditionally more stable than closed drivers (not least because when bugs are found, people with a vested interest in fixing them will often do so rather than waiting for the manufacturer to get their finger out).
Whilst a stable ABI may result in more drivers being made available, I fear it could lead to a lot of "Windows quality" drivers. And if closed drivers are officially legitimised, many companies will refuse to release open drivers, since there is very little in it for them. At the moment, many of the open drivers are there because the vendor believes that releasing a binary driver is legally dubious at best - legitimise binary drivers and this motivation goes away.
Anyone who's dealt with bugs in the nVidia drivers will know of the problems of closed development - I've reported bugs that have taken years for nVidia to fix which I would've been happy to try and fix myself if only the code was open.
Re:Question for the masses. (Score:3, Insightful)
1:
Compile everything you need for your machine to run into the kernel... no more, no less... then you're good to go. No clutter, no loading at runtime... nothing.
2:
You have no idea what you actually need past the boot (and root) FS, CPU, and hard drives. Compile everything else as a module (driver) to be loaded when you need it, and voila: no bloat to the kernel, but a few dozen MBs taken up on the HD.
In the grand scheme of things, a few extra modules for network cards will cause you no trouble.
How many Network Adapter Cards in a server? (Score:2)
Laptops these days usually have two types of network interfaces - one wired and one wireless. Occasionally you'll have different types of wireless cards to plug in, e.g. an 802.11a vs. .11g or something.
Microkernel anyone? (Score:4, Informative)
This is the essence of the Microkernel debate. http://en.wikipedia.org/wiki/Microkernel/ [wikipedia.org] The truth is that the Microkernel model probably is a better design, but in terms of when the Linux kernel was starting out, its implementation simply wasn't practical. It didn't help that the people who thought they knew how to build a better kernel decided to try and intellectually brow-beat Linus into doing it instead of implementing it themselves and putting it under the GPL. This led to a lot of bitterness and resentment between the two camps. The HURD http://en.wikipedia.org/wiki/Hurd [wikipedia.org] project is a GPL microkernel project, but it simply wasn't managed as well as Linus managed Linux.
I think over time things eventually will move to a microkernel model, even though there are other ways to emulate some of its security and flexibility benefits - like Xen http://en.wikipedia.org/wiki/Xen [wikipedia.org]
Re:Microkernel anyone? (Score:5, Insightful)
Either way, you've got a ton of drivers sitting around that you'll never use. They don't clog up the kernel, since the kernel image rarely contains many drivers. Instead, most Linux distros use modules that get loaded as needed. On a microkernel, they would be driver binaries that would get run as needed. They clog things up to exactly the same extent; they sit around on the hard drive doing nothing.
Either way, it's hard to add new drivers to old kernels. This is not a result of the fact that drivers are in the kernel, but of the fact that Linus refuses to use a stable driver API. This would preclude driver compatibility between versions just as effectively on a microkernel as it does now.
As I said, the two issues are unrelated.
Re:Microkernel anyone? (Score:3, Insightful)
The point is not a resource usage point, but a flexibility point. If you want to add a new driver or even
Re:Microkernel anyone? (Score:2)
Yeah right... the truth is probably something like an opinion.
Re:Question for the masses. (Score:3, Informative)
Re:Question for the masses. (Score:2)
Now, as to the network, it is divided into a number of sections (hardware drivers vs. logical h
Re:Question for the masses. (Score:2)
No. If you compile your kernel the way you should - i.e., editing the config before compiling, removing un-needed driver support and other things you do not need - you won't have to worry about compiling or loading modules you will not need.
On the other hand, there are generally two ways to go about this. When compiling modules and vmlinuz, you can let the kernel decide what to l
Re:Question for the masses. (Score:3, Insightful)
And of course, Linus is free to do what the hell he wants. He doesn't owe us a thing.
Go Linux! (Score:5, Insightful)
1- More work for developers, some of whom may never learn about these faster calls.
2- Old applications can't benefit
3- Applications that wish to be backwards compatible can't benefit
Obviously, though, it is necessary to write new functions on occasion - for example, when the new function would be worse than the old one under some circumstances, so the old one can't simply replace it. It may be that all the new functionality is of this type, but I don't have enough information to know for sure.
Re:Go Linux! (Score:5, Informative)
Re:Go Linux! (Score:2)
Re:Go Linux! (Score:4, Informative)
Re:Go Linux! (Score:3, Interesting)
Re:Go Linux! (Score:3, Interesting)
That and (Score:5, Informative)
C is the best compromise. While assembly might give you the theoretically best code, it'd be a giant mess to write and totally unmaintainable - it might actually end up slower and larger for it. C is pretty good because it's easy enough to generate decent code in, but it isn't much higher up the abstraction chain, so it compiles quite efficiently.
You have to remember that object orientation and such are all human creations. Processors don't think in objects; for that matter, they don't really even think in functions. They think in memory locations, and jumps to those locations. Doing OO code means a whole messy layer the compiler has to go through to translate it into something the processor actually understands.
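To make that concrete, here's a rough sketch (plain C, with made-up names) of roughly what one virtual call lowers to once the compiler strips the OO layer away: a struct of function pointers and an indirect jump.

    /* Hypothetical sketch: the "vtable" is just a struct of function
     * pointers, and a virtual call is a pointer load plus an indirect jump. */
    struct shape_ops {
        double (*area)(const void *self);
    };

    struct circle {
        const struct shape_ops *ops;  /* each object carries a vtable pointer */
        double radius;
    };

    static double circle_area(const void *self)
    {
        const struct circle *c = self;
        return 3.14159265358979 * c->radius * c->radius;
    }

    static const struct shape_ops circle_ops = { .area = circle_area };

    /* The "virtual call": load ops, load area, jump through the pointer. */
    static double area_of(const struct circle *c)
    {
        return c->ops->area(c);
    }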
Re:That and (Score:3, Insightful)
That's complete nonsense. What do you base that on? A higher-level language is only as inefficient as the compiler and/or libraries used. Which is just as true for a low-level language.
C is the best compromise. While assembly might give you the theoretically best code
As someone who's actually spent some years coding assembler, I'll tell you this: Hand-coded assembler is rarely ever better. And with the developments in process
There are valid uses for a GOTO (Score:5, Informative)
but, at least in C, there are times when using GOTO is the most natural and,
unequivocally, the best choice.
Off the top of my head, I can think of two situations where using a GOTO is
the best solution:
1. breaking out of nested loops. In C, the break command can only break
out of a single loop level. If you need to break out of 2 or more loops, you
can play an ugly game of setting and checking state flags at each level
of looping or you can simply create a label at the exit point and use
GOTO to get there. (sometimes you can wrap your loops as a function call,
but that's often the ugliest solution)
2. shared cleanup code. In a function with multiple exit points, instead
of doing cleanup at each exit point, it is often clearer to set your
return value and then GOTO a label that handles all cleanup before
returning.
Be cautious when using GOTO, but don't be afraid of it. Learn to
recognize when GOTO is appropriate and when it should be avoided.
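A minimal sketch of case 1 (the matrix layout and names are made up for the example):

    #include <stddef.h>

    /* Find the first zero in a rows x cols matrix; goto escapes both
     * loops at once, where break could only exit the inner one. */
    static int find_first_zero(const int *m, size_t rows, size_t cols,
                               size_t *out_r, size_t *out_c)
    {
        size_t r, c;

        for (r = 0; r < rows; r++)
            for (c = 0; c < cols; c++)
                if (m[r * cols + c] == 0)
                    goto found;
        return -1;                      /* no zero anywhere */

    found:
        *out_r = r;
        *out_c = c;
        return 0;
    }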
Re:There are valid uses for a GOTO (Score:5, Informative)
If that leads to clearer code, then in the cases where you can do that, fine. Do that.
However, there are situations when a condition doesn't make sense until you've already
entered the nested loops at least once (for example, when allocating lots of chunks of memory,
you can't test to see if you've successfully allocated memory until after you've tried to
allocate memory). Also, if there are several conditions that might require a break, but
they can all be handled the same (at least until after you break out of your loops),
do you really want each one to be tested at every loop test? Think how big and confusing that
would make your continuation test for your outer loops.
2) Use a clean-up function. It will return to the correct place without all the spaghetti code.
There's nothing wrong with using cleanup functions if they are convenient for your
particular purpose, but if you have to free 11 objects before returning, then you'll
need to pass all 11 to the cleanup function each time you call it. I don't know about
you, but I usually find functions with 5+ arguments to be ugly. I would rather simply have
a 'goto cleanup' that jumps to a label that does all the cleanup in place. An acceptable
compromise would be to define a macro that does the cleanup in place but hides it from casual
code inspection, thus keeping the code clear, but avoiding the use of GOTO.
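For reference, a minimal sketch of the 'goto cleanup' idiom being discussed (the three buffers are illustrative), which is also the style commonly seen in the kernel itself:

    #include <stdlib.h>

    /* One cleanup label instead of freeing at every exit point. */
    static int do_work(size_t n)
    {
        int ret = -1;
        char *a = NULL, *b = NULL, *c = NULL;

        if (!(a = malloc(n)))
            goto cleanup;
        if (!(b = malloc(n)))
            goto cleanup;
        if (!(c = malloc(n)))
            goto cleanup;

        /* ... real work would happen here ... */
        ret = 0;

    cleanup:
        free(c);    /* free(NULL) is a safe no-op */
        free(b);
        free(a);
        return ret;
    }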
Using GOTO in the manners I've described will not lead to spaghetti code, since the flow of control
will be clear and uni-directional (the antithesis of spaghetti code). In case (1), the use
of GOTO is equivalent to raising an exception in Java, C++, or Python from within the loop and
capturing the exception outside the loop (idioms commonly accepted in all three communities).
In case (2), the use of GOTO maps multiple exit points to a single exit point. If you feel
that these techniques qualify as spaghetti code, then I would suggest that you've never
seen real spaghetti code.
When Dijkstra wrote "Goto considered harmful", he was talking about using GOTO to jump outside
the scope of the current function, something not possible with C's goto (C's goto can only
jump to a label within the current function). See BASIC and PASCAL (I think) for examples of GOTO that
can jump anywhere in the program.
Re:There are valid uses for a GOTO (Score:3, Insightful)
Re:Go Linux! (Score:2)
There are plenty, and I suppose you could write your part of the kernel in whatever language you want as long as you weren't worried about it being part of the official distribution. But even if they suddenly started allowing other languages in the kernel, and AFAIK they don't, you'd still have to write your interfaces in straight C. It's the only language that basically every other language has bindings for.
Re:Go Linux! (Score:5, Funny)
Like LISP? That's what they used to use, but C was chosen for UNIX, and UNIX caught on big time, so C is the language now. I think it's about time to write an OS (kernel + tools) in LISP, so we can return to the good-old-days of Lisp machines.
Re:Go Linux! (Score:3, Informative)
There's no functional difference between using an overloaded name f(a), f(x, y), f(p, q, r) or three separate ones f_a(a), f_xy(x, y), f_pqr(p, q, r).
If you want default arguments C has them, and if you want polymorphism then C has it too (function pointers).
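In code, the parent's point looks like this (f_i/f_d/f_id are hypothetical names; in C++ all three could simply be called f):

    #include <stdio.h>

    static void f_i(int x)            { printf("int: %d\n", x); }
    static void f_d(double x)         { printf("double: %f\n", x); }
    static void f_id(int x, double y) { printf("both: %d %f\n", x, y); }

    int main(void)
    {
        f_i(1);       /* a C++ compiler would mangle f(int) into a unique */
        f_d(2.0);     /* symbol anyway; C just makes you spell it out     */
        f_id(3, 4.0);
        return 0;
    }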
Re:Go Linux! (Score:3, Insightful)
Re:Go Linux! (Score:4, Informative)
There is no overloading going on here. Overloading is to create a new function with the same name, but taking different parameters.
Ahem. The original function, sendfile(2), was rewritten to call splice() instead of doing something else.
Everybody that wrote code that used the old function now has to deal with splice() running instead of the old function's logic.
Just to hammer it home:
Old - app -> sendfile(2) -> some logic -> return to app
New - app -> sendfile(2) -> splice() -> splice's logic -> return to sendfile(2) -> return to app
With the Linux kernel, as this exemplifies, you can improve the original code and get everyone (well, those too lazy to revert the changes) to use it. In this case you have a fixed API (sendfile(2), which is well known and published), so you don't just want to tell everybody to recompile with calls to splice().
See the difference? Feel the difference.
The kernel is GPL, and thus the actual source code used to compile the binary kernel you use is available to you. With a closed-source kernel you might be able to purchase an SDK with linkable binaries and some (probably undocumented) header files. Programmers in this situation need things like function overloading and class inheritance just to do anything. One way of looking at the history of languages like C++ is as a technical solution to the ethical problem of closed source programming. Those languages focus on extending on the outside. With OSS you can usually replace, fix and improve on the inside. BSD and GNU differ on the point of GNU wanting everyone to share the source to those fixes if they share the resulting binaries. But I digress.
And I can't wait to see if this breaks something.
Re:Go Linux! (Score:2)
Re:Go Linux! (Score:2)
Linus's take on the whole thing is that if you want portability, you should just use the read(), write(), etc. system calls, since they perform pretty well anyway. If you absolutely must do something platform-specific for every ounce of performance, you should have a clean API to do it with.
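That portable baseline is just the classic copy loop - a minimal sketch (buffer size and error handling are illustrative):

    #include <unistd.h>

    /* Copy everything from in_fd to out_fd through a userspace buffer.
     * Unlike splice(), the data makes a round trip through user memory. */
    static long copy_fd(int in_fd, int out_fd)
    {
        char buf[65536];
        ssize_t n;
        long total = 0;

        while ((n = read(in_fd, buf, sizeof(buf))) > 0) {
            if (write(out_fd, buf, n) != n)   /* short writes treated as errors */
                return -1;
            total += n;
        }
        return n < 0 ? -1 : total;
    }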
Re:Go Linux! (Score:2)
Re:Go Linux! (Score:2)
Linux is for other devices too (Score:5, Insightful)
Re:Go Linux! (Score:3, Insightful)
Another way of saying this: It sucks to know that even in this day and age of faster and faster computers there are still people who cut corners and use specific hacks to gain speed instead of simply building clean and well-designed systems and let the hardware do the work.
Just saying..
Re:Go Linux! (Score:2)
Re:Go Linux! (Score:2)
Re:Go Linux! (Score:3, Insightful)
Another way of saying this: It sucks to know that even in this day and age of faster and faster computers there are still people who cut corners and use specific hacks to gain speed instead of simply building clean and well-designed systems and let the hardware do the work.
Why do you assume that all optimizations are hacks? Lifting an invariant calculation out of a loop can potentially make things MUCH faster, yet is hardly a "hack." Or how about strength-reducing "2 * x" into "x + x," is that a hack? S
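A classic example of the non-hack kind of optimization the parent means (illustrative code, not from the post):

    #include <string.h>

    /* Naive: strlen() is re-evaluated on every pass through the loop. */
    static size_t count_x_naive(const char *s)
    {
        size_t n = 0;
        for (size_t i = 0; i < strlen(s); i++)
            if (s[i] == 'x')
                n++;
        return n;
    }

    /* Hoisted: the invariant length is computed once, outside the loop. */
    static size_t count_x_hoisted(const char *s)
    {
        size_t n = 0;
        size_t len = strlen(s);
        for (size_t i = 0; i < len; i++)
            if (s[i] == 'x')
                n++;
        return n;
    }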
Re:Go Linux! (Score:5, Informative)
That is exactly why it was done. More information about it can be found at KernelTrap: here [kerneltrap.org], and here [kerneltrap.org]. It was also previously on Slashdot [slashdot.org], although you would be best to skip that - it has more misinformation than the other kind.
In short, all the known ways of implementing zero-copy within the existing APIs cause the most common usage cases of those APIs to be slower than they are now. Therefore, it made more sense to export this new API for the applications where speed is critical.
In the first KernelTrap article, Linus also explains why splice is different from sendfile, contrary to the posts here claiming they are essentially the same.
Re:Go Linux! (Score:3, Informative)
In case of slashdot, break mirror (Score:4, Funny)
Re:In case of slashdot, break mirror (Score:2)
some highlights from the changelog (Score:5, Informative)
Block queue IO tracing support (blktrace). This allows users to see any traffic happening on a block device queue. In other words, you can get very detailed statistics of what your disks are doing. User space support tools available in: git://brick.kernel.dk/data/git/blktrace.git
Introduce the splice(), tee() and vmsplice() system calls, a new I/O method.
The idea behind splice is the availability of an in-kernel buffer that the user has control over, where splice() moves data to/from the buffer from/to an arbitrary file descriptor, while tee() copies the data in one buffer to another, i.e., it "duplicates" it. The in-kernel buffer, however, is implemented as a set of reference-counted pointers which the kernel copies around without actually copying the data. So while tee() "duplicates" the in-kernel buffer, in practice it doesn't copy the data but increments the reference pointers, avoiding extra copies of the data. In the same way, splice() can move data from one end to another, but instead of bringing the data from the source into the process' memory and sending it back to the destination, it just moves it, avoiding the extra copy. This new scheme can be used anywhere a process needs to send something from one end to another but doesn't need to touch or even look at the data, just forward it. Avoiding extra copies of data means you don't waste time copying data around (a huge performance improvement). For example, you could forward data that comes from an MPEG-4 hardware encoder, tee() it to duplicate the stream, and write one of the streams to disk and the other one to a socket for a real-time network broadcast - all without actually physically copying it around in memory.
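For a feel of the API, here's a minimal userspace sketch that pushes a file out of a socket through a pipe (the pipe acts as the in-kernel buffer; the fds are assumed open and error handling is trimmed):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    /* Move len bytes from file_fd to sock_fd without copying them
     * through userspace: file -> pipe -> socket, page references only. */
    static int forward(int file_fd, int sock_fd, size_t len)
    {
        int p[2];
        if (pipe(p) < 0)
            return -1;

        while (len > 0) {
            ssize_t n = splice(file_fd, NULL, p[1], NULL, len,
                               SPLICE_F_MOVE | SPLICE_F_MORE);
            if (n <= 0)
                break;
            /* drain the pipe into the socket; a real program would
             * loop here to handle short splices */
            splice(p[0], NULL, sock_fd, NULL, n, SPLICE_F_MOVE);
            len -= n;
        }
        close(p[0]);
        close(p[1]);
        return 0;
    }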
Where is 2.7? (Score:5, Insightful)
This was a major boon for Linux: if you needed the bleeding edge, you could get it, whilst acknowledging the risks in doing so. If you needed something stable, again, you could get it. Now? It seems that the supposedly stable kernel is right out there on the bleeding edge.
Re:Where is 2.7? (Score:2)
Re:Where is 2.7? (Score:2)
Re:Where is 2.7? (Score:5, Informative)
So instead, the 2.6 goal is to have development/stable parts of the cycle, rather than separate branches. Roughly: patches that could break things get submitted at the beginning of the cycle, and -pre1/-pre2 tarballs are released. If you want bleeding edge, you go here. Release candidates are released, where developers get a chance to fix bugs etc. in the code. Then, any code that's still [known to be] buggy gets dropped for the final release (e.g., 2.6.17). The developer can work on it, and try to add it again during subsequent cycles. When it works, it can be included in a final release.
During this cycle, security and other urgent bug fixes take place in the ultra-stable branch, with versions such as 2.6.16.1, 2.6.16.2.
(This is the rough idea, I believe; there could be some slight inaccuracies in how it actually takes place - I haven't followed it 100%, but this should be close enough to get the right idea.)
Re:Where is 2.7? (Score:5, Informative)
The stable kernels aren't remotely on the bleeding edge; they contain only features which have been tested over the past three months, after being filtered out of the bleeding-edge development as being things that have already stabilized and stand a good chance of being proven in three months. It's effectively very similar, except the development series isn't left known-broken and the stabilization process happens on a quick schedule, with stuff that isn't ready pushed off to the next cycle rather than delaying the current cycle. Also, the version numbers change by less (development gets -mm, -rc, or -git; stable series change the third digit by one instead of the second by two; and bugfix releases change the fourth digit instead of the third).
Re:Where is 2.7? (Score:5, Interesting)
Or bane. The "old way" meant that the vanilla-kernel (kernel offered by kernel.org) was stable. But new features took a LONG time to appear in the vanilla-kernel. But users and distros still wanted those advanced features that were not part of the kernel (yet). What happened was that distros offered their own vendor-kernels, that were VERY different from vanilla-kernel. Distros then spent their time and energy fixing their own vendor-kernels, instead of vanilla-kernel.
This new system changes things so that new features are added to the vanilla-kernel, which means that the difference between vanilla and vendor-kernels is not that big. The distributors can focus on stabilizing the kernel, instead of adding new features to it. And porting those fixes to vanilla is a lot easier than porting changes was in the old system. This means that if you want to use a REALLY stable kernel, you should use the vendor-kernel.
In short: this new system means that things progress a lot faster for everyone, with new features appearing in the kernel. And we can still have the stability we want if we use the tested and patched vendor-kernels.
Broadcom 43xx (Score:2)
Indeed, Airport Extreme support is HUGE (Score:5, Insightful)
I'm somewhat shocked that nobody else has pointed out the new Broadcom 43xx/Airport Extreme support. That's the one thing that grabbed my attention in the whole paragraph. Not having support for Apple's built-in wireless hardware has been a showstopper for a lot of people to even consider trying out Linux on a Mac, especially the portables. This driver will open up several million possible new computers for Linux to be installed on, since at this point the wireless hardware was about the last incompatible piece of hardware on the Mac side. This is a very big deal for anyone with Mac hardware or anyone planning to buy a Mac, and for all the geeks who are already running Linux on their Mac.
Very cool.
Broadcom 43xx HOWTO: (Score:5, Informative)
I'm not an Ubuntu guy, but this reference [ubuntuforums.org] might be useful to anybody trying to make the new Broadcom Wifi driver work in Linux. Very easy steps, and most non-Ubuntu users should find it easy to adapt for their specific distros.
Sounds good (Score:5, Funny)
Sincerely,
The New Guy
"splice" - because Microsoft did it? (Score:5, Interesting)
The "splice" system call seems to be an answer to one of Microsoft's bad ideas - serving web pages from the kernel. At one point, Microsoft was claiming that an "enterprise operating system" had to be able to do that. So now Linux has a comparable "zero copy" facility.
"Zero copy" tends to be overrated. It makes some benchmarks look good, but it's only useful if your system is mostly doing very dumb I/O bound stuff. In environments where web pages have to be ground through some engine like PHP before they go out, it won't help much.
The usual effect of adding "zero copy" to something is that the performance goes up a little, the complexity goes up a lot, and the number of crashes increases.
not like that (Score:5, Informative)
Linus refused the FreeBSD-style zero-copy because it is often a lose on SMP and with modern hardware. Page table and TLB updates have huge costs on modern hardware.
If you do like the Microsoft way, use Red Hat's kernel. The in-kernel server works very well.
Re:not like that (Score:3, Informative)
I think you are wrong. Splice'd data are not processed by userland at all: they are piped from one file to the other at kernel level by page copying.
Re:not like that (Score:3, Informative)
Re:"splice" - because Microsoft did it? (Score:3, Informative)
On the contrary, there are many cases in a dynamic serving system where you can determine that, after some point, the rest of the operation merely involves copying data from a file or buffer out to the network. Or, similarly, that a large
Re:"splice" - because Microsoft did it? (Score:3, Informative)
What is this nonsense? The khttpd in-kernel web server was implemented on Linux first, then copied by MS.
IIRC it isn't even in 2.6 kernels anymore.
So now Linux has a comparable "zero copy" facility
Linux already had a zero-copy facility, splice is just a new improved one.
What are you talking about?
"Zero copy" tends to be overrated. It makes some benchmarks look good, but it's only useful if your
OK, so where are they? (Score:2)
Am I missing something here? Are the mentioned changes part of a release candidate (unstable is at RC-2)?
Re:OK, so where are they? (Score:5, Interesting)
If you have a 2P dual-core setup, the best performance for two independent tasks comes from spreading them across both chips, especially in the AMD camp. That means each task gets a full memory bus to itself. The trick is to pick up on when two tasks share memory with each other and schedule them on one chip, especially on the Intel side of things with their massive shared L2 cache.
Tom
Glad for AirPort Extreme support (Score:2)
DCCP (Score:3, Interesting)
Any applications out there using it yet?
S-ATA hotplug (Score:3, Interesting)
Re:S-ATA hotplug (Score:3, Informative)
It's not just a matter of enabling it, it's about making it reliable.
More here: http://linux-ata.org/features.html [linux-ata.org]
Re:Obligatory. . . (Score:2)
Look right at the top... very first comment
Re:Nice (Score:5, Funny)
Re:Nice (Score:2)
Re:support for the h.323 protocol, quite unlikely (Score:5, Informative)
obtw: your pedant bit is apparently stuck high. just a fyi -- didn't know if you realized it.
Re:support for the h.323 protocol, quite unlikely (Score:4, Informative)
Unlikely or not, that's what it appears to be. h.323 conntrack nat helper [netfilter.org]
This patch (or module, actually) comes with an H.323 decoding library that is based on H.225 version 4, H.235 version 2 and H.245 version 7. It is extremely optimized for the Linux kernel and decodes only the absolutely necessary objects in a signal. ... The total size of code plus data is less than 20 KB.
Doesn't look like a gatekeeper or anything; that looks like an honest-to-God ip_conntrack NAT implementation.
For the other responder to my initial post. I have taken your offer into consideration but have decided to decline.
lol.
Re:support for the h.323 protocol, quite unlikely (Score:3, Interesting)
They managed to squeeze both PER and also H.225/235/245 into just 20 kbyte of object code?!
(why implement H.235? that's crypto and wouldn't work unless you know the keys?)
That is VERY impressive.
My PER decoder alone ( http://anonsvn.wireshark.org/wireshark/trunk/epan/dissectors/packet-per.c [wireshark.org] ) is way larger than that, and that is just aligned PER decoding (ok, with some unaligned PER additions recently), and that one itself is >>20 kbyte. Adding 225/245 into the mix? Impossible!
I am
Re:Diversity in advocacy (Score:2, Insightful)
Guys will have female wallpaper because the female form is what they like to see. In the same way, girls will often like to see a bit of masculinity (granted
Re:Had to exchange a motherboard (Score:2, Informative)
One may find it difficult to fit an AMD CPU on an Intel motherboard. Pesky competition.
As the grand-parent said, he had to exchange a motherboard. That means he wasn't intentionally upgrading, thus expecting to continue to use the rest of his hardware (memory, disk, CPU). Were this a system upgrade rather than a replacement of a faulty part then it m
Re:Missing driver? (Score:5, Informative)
try hitting '/' in make menuconfig, type ov511, and hit enter. That's a hot tip that's saved me quite a bit of time...
It'll find it if it's there.