Linux Software

Writing Drivers For Multiple Operating Systems?

Matt writes: "I ran across this place while searching dmoz. KRF Tech has a piece of software called WinDriver that claims to let you write hardware drivers once and compile for Linux, Windows 9x/NT/2000/CE, OS/2, Solaris AND VxWorks. My question: why isn't everyone and their mother using this software? It seems it would make driver portability problems a thing of the past. They even have a free 30-day trial." The theory is cool, but how well does this work in practice?
  • by Anonymous Coward
    and I've written a lot of C code, C++, Ada,
    Unlucky. I also had to write Ada. That's why I now work at Burger King.
  • http://roadrunner.swansea.linux.org.uk/i2o.shtml
    http://www.i2osig.org
    http://www.simon-shapiro.org/drivers.html
    http://www.wrs.com/products/html/intro_ixworks.html
  • by Anonymous Coward
    Because most of these high-level driver kits don't really let you write a real device driver. There have been several of these over the years, and typically they supply a kernel-mode "generic" device driver that speaks to their own API (a DLL or SO). The customer writes a user-level application against that API, which talks to the "generic" kernel driver. This is a cool solution for a quick-and-dirty driver, but it falls down in the real world in several ways. First, these are least-common-denominator solutions: you can only access hardware they have already provided for, and only in the ways their API allows. Second, because they try to support as much hardware and API functionality as possible, they are heavyweight; the driver you ship will typically contain many times more code than you will ever actually use. Third, some of the solutions I've evaluated only allow a single application to access the driver, so they're OK if you want to hardware-enable a specific application, but not workable if you are a hardware vendor trying to create a shippable driver. Fourth, most of these are not open source, and if you want your driver to end up shipping with the kernel, that's not going to happen if it's not open source...
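    To make the architecture concrete, here is a minimal sketch of what a user-level "driver" built on such a generic kernel driver typically looks like (the device node, ioctl numbers, and struct are hypothetical; each real kit has its own proprietary API):

        #include <fcntl.h>      /* open() */
        #include <stdio.h>
        #include <sys/ioctl.h>  /* ioctl() */
        #include <unistd.h>     /* close() */

        /* Hypothetical ioctl numbers exported by a vendor's generic driver. */
        #define GD_READ_REG  0x4001
        #define GD_WRITE_REG 0x4002

        struct gd_reg_op { unsigned long offset; unsigned long value; };

        int main(void)
        {
            /* The vendor ships one generic kernel module; every "driver" is
             * really just an application talking to it through ioctl(). */
            int fd = open("/dev/generic_drv", O_RDWR);
            if (fd < 0) { perror("open"); return 1; }

            struct gd_reg_op op = { 0x10, 0 };   /* read board register 0x10 */
            if (ioctl(fd, GD_READ_REG, &op) == 0)
                printf("register 0x10 = 0x%lx\n", op.value);

            close(fd);
            return 0;
        }

    Note how all the device-specific knowledge lives in the application; nothing here looks like a sound or network driver that the rest of the OS could use.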
  • by Anonymous Coward
    Free Software works when lots of hobbyists have the necessary skills and equipment to develop a particular system. But only a few people understand signal processing, trellis coding, and characteristics of noise sources on our antiquated telephone network and can lay hands on line simulators, and almost all of them work for modem vendors under nondisclosure. The 2400bps protocol was probably the last one J.Random Hacker could implement adequately.
  • these folks [stg.com] are working on the linux udi implementation.

    it seems they are already in a test phase. you can read their two relevant press releases here [stg.com] and here [stg.com].


  • ha! the second press release is actually here [stg.com].
  • I see, so in other words free software needs to compromise in order to fit with your view. Bullshit, Jack.
    First off, if you want proprietary Unix, <sarcasm>I'm sure you can find someone to sell it to you</sarcasm>.
    Second, if you have a need for a piece of hardware that's not supported under Linux and you insist on using Linux, you have _extremely_poor_judgement_.
    Third, the whole open/proprietary thing is like a stare-down contest: you blink, you lose.

    Also, where is it written that it's anybody's God-given right to use _any_ of this stuff?
    You talk about supporting companies that "get it";
    what _you_ don't seem to get is that
    by compromising on issues like this you get the worst outcome of each method.

    Why are you in such a hurry to do their work for them?

    Why do you give a shit if somebody wants to spend
    some of their own time writing a driver?
    You complain it takes developers away from improving existing software.
    I don't know about the planet you live on, but where I live free software improves every day.
    Take a look at drivers for emu10k1 chipset [creative.com] soundcards as an example.
    Then look at a feature comparison [rockfish.net].
    Creative says a release quality proprietary driver is due out "summer 2000".
    I requested a feature on the open driver
    and it was implemented in less than two weeks.
    Let's see a vendor do _that_
  • While yer at it, score a _recent_ Linux
    distribution that supports that scanner,
    and compare the native driver's performance
    with the driver you get using WinDriver.
  • Aren't Linux users the ones who target Microsoft for its abuse of users?

    Those users have paid Microsoft for a product. That product does not measure up to the promises that are made, and the users are not given the opportunity to correct the problems themselves. In the Linux world, you don't pay the developers, no promises are made about the software, and if there's something you want done you are free to do it yourself. If that means a fork, then it means a fork. At least you have a choice.

    It has nothing to do with being abusive toward users. If all the users are doing is taking from the community without contributing (since most of them can't anyway), then I fail to see why someone who has invested his own time and effort into a piece of software that represents his view of correctness should be required to support these users by either working on or accepting patches that conflict with his own ideas. If you want something with UDI, you are free by virtue of the GPL to put out Fredix, or to pay SCO for it. If you don't like these options, tough. You have in the Free Software community a large number of people willing to give you something for nothing. If that something is of no value to you, then you've still come out ok. I fail to see the parallel with Microsoft; you pay them $500 for an OS and a web server that they promise will work, and then it doesn't work. Well, in that case you're out $500. You pay Linus nothing for Linux, which is delivered to you as-is; if it doesn't work, so what? Where along the line did you acquire the right to insist that he add UDI (or anything else) to his operating system for your benefit?

    The way I see it, Linux belongs to Linus. It is his OS; he has generously allowed you to have, use, modify, and redistribute the fruits of his efforts, as have all the other hundreds of major contributors. If it's useful to you, great. But it still belongs to them, and if they don't want your changes in it, that's their right. They owe you nothing.

  • And if that change happens to annoy Inexperienced End User Who Wants To Give You Market Share But Can't Because Of A Philosophy Decision, is it tough shit too?

    I do not care how much or how little market share Linux has. Whether someone else uses it means nothing to me; I can continue to use it as I please even if every single other person stops using it. Welcome to Free software.

    More users result in more developers. More developers result in more support. More support results in more things that you want for your OS.

    I don't really think this is true any more. The type of new users Linux is attracting today doesn't know a stack frame from a superblock, and doesn't intend to learn. These people aren't kernel hackers. They aren't applications developers. They aren't even helpful bug-report-submitting users who understand what quality software is all about and are proud to help it along in some small way. In other words, unlike previous Linux adopters, they take, take, take, and give nothing back. From a developer's point of view, these people do nothing to help the process of making a great system; all they do is clutter up mailing lists with useless bug reports and inane feature requests. New projects? Yeah, right. Patches? Don't hold your breath.

    Even if these people were useful developers, they wouldn't be able to help much with the issue at hand, device support. Today's support for devices is limited by documentation, not coders. Help is always appreciated of course, but one more programmer's manual would make a lot more difference than one more developer.

    You can ... sign mass petitions to get a native driver.

    I don't do that. I vote with my currency units. No open source? No specs? No money, then. Petitions are for whiners.

    To be more precise, the Linux purists are afraid that Linux users will be satisfied with the UDI drivers and won't be as eager to join mass driver movements.

    These people were satisfied with windows six months ago!!! Of course they'll be satisfied with UDI drivers; they'd be satisfied with three tons of steaming shit and a more comfortable chair. The nice thing about Linux is that the people who make it won't be satisfied with UDI drivers. But these "mass driver movements" you mention actually have very little to do with getting native drivers written. In fact, what usually ends up happening is that some competitor gets wise and mails a manual to some hacker who then writes a driver. The hundreds of petition signers don't know how to write drivers and in most cases are probably just following the Slashdot crowd anyway ("what's the petition for? d00d, it's a driver for the Mumblebazco Vapourware 5200!!! The WHAT? Oh well, I'll sign anyway.")

    If you don't want users to be complacent with UDI drivers, then tell them so.

    Right. "That driver offers 30% suboptimal performance!" "So? It plays quake."

    But don't take away the choice just because you're too lazy to do it.

    But you see, we don't write this code for others. We write it for ourselves, and share it with others because we feel it's the right thing to do. We don't have any responsibility to the users whatsoever. For the 99.8% of us who write Free Software and don't get paid for it, the fact that some user wants UDI means nothing. The simple fact is that the people who generously volunteer their time and skills to work on these projects have earned the right to decide what goes in and what does not. And UDI will not. Not because we're too lazy to do it (not that it would matter even if we were; we owe you nothing), but because we don't think it's a good idea. If you want UDI, you're free to write it yourself and maintain your own tree. Did Linus wait 10 years for Tanenbaum to open-source Minix? Did RMS sit around and wait for all the proprietary software vendors to collapse? Of course not. They rolled up their sleeves and got to work. Unlike them, you aren't forced to be a pioneer; you'll find a ready and willing crowd of people to test and maybe even contribute to your project. You can even get free project hosting if you want it. In fact, all you need to provide are your skills and time.

    There are only two ways to ensure you get what you want: pay someone for it, or do it yourself. Berating volunteers for being "too lazy" to give you what you want, against their own better judgment, is rude and selfish. Stop and think for a moment; if you felt UDI were a bad idea poorly implemented, would you want it in a world-famous product that bears your name?

  • How much hardware with available specs is unsupported?

    Thanks for making this point so succinctly. To answer the rhetorical question, essentially none. :)

  • It all seems like a wash to me. How does UDI actually make this problem worse than it is anyway?

    By significantly reducing the amount of work needed to produce something that can realistically be called "Linux support." Producing a fully native Linux driver requires significantly more work and knowledge than taking the UDI boilerplate code and throwing in the microcode for your device's ASIC. Producing a UDI driver (or letting someone else do so for their own proprietary OS) and then saying "look, it works with Linux" is much easier. It's a quick and easy way to add the increasingly important Linux checkbox to their marketing gloss without giving up any of their docs or paying someone to write a driver.

    But maybe we're just splitting hairs anyway. The simple facts are that, politics and licensing aside, UDI will not be a part of mainline Linux anytime soon, if ever, and that this decision has been made for technical reasons, rightly or wrongly. We can argue the nontechnical side of things, but the people who matter don't care in the slightest.

  • I'm not up on the technical details, SciTech being non-free software, but AFAIR they have an OS-specific driver which runs a small virtual machine, mapping system calls in a theoretical 'nice' OS to system calls on the current OS, plus a binary 'driver' which is loaded into that machine and interacts with it.

    It may sound questionable (performance-wise), but their drivers work in OS/2 and Windows at least (I remember reading that they also had a Linux version for X displays), and I've heard good things about both stability and performance.
  • Based on the way they've explained it, you don't really have to get your own copy to understand how it would have to work. There are only so many ways to code a driver. So those of us who do have experience writing drivers -- and a few of us have already posted -- do have something to offer the discussion, thank you very much.
  • >The next question goes that since winmodems suck and are slow, and
    >opensource development is supposed to yield superior results to
    >proprietary schemes, why doesn't anyone bother to show up the winmodem
    >manufacturers by making a driver for the modem that yields better
    >performance while using less CPU cycles?

    Maybe because most people running Linux or a BSD don't own a Winmodem, and have no intention of buying one?
  • Take the detonator drivers on the Riva 128 and TNT for example. Performance increased a good 35% (brought a 128 close to a Voodoo2 for QuakeII), just by changing drivers. You rarely see that kind of optimization in Open source stuff.

    That's because it's usually done right to begin with. Any company can push a crappy driver out the door, then release what they should have in the first place and claim a 35% increase.

    With open source drivers this simply doesn't happen. You put out crap, someone goes over the source and you get a whole lot of feedback that goes a lot like "Dude, your code sucks." Now, once in a while one of those flame mails will come with helpful patches, but the flames are a form of QA on open source drivers, or anything for that matter.
  • As I understand it (I'm a Mac person myself), WinModems shift a lot of the work of the modem off into software that runs on the CPU. This is a hell of a lot more work because you're not just sending commands to a black box - you have to rewrite half of the box to begin with.

    Additionally, that's a fairly inelegant way of doing things right now (particularly given how many cycles it eats up), and so I don't think anyone really wants to support them.
  • It is highly doubtful that a driver designed for an OS like Windows could work on an OS like Linux or the HURD.

    If the world could agree on one microkernel and then put a uniform driver layer on top of it, then something like this would be more feasible. Above the driver layer, the OS could look like anything you want.

    Since the world doesn't work this way, I have many doubts about how well something like this would work. I don't even think it can work -- but then, I had doubts about VMware too!

    If it's true -- great. If not... oh well, it looks like April Fools' Day is a month long...

    Leimy
  • Actually, the sad thing is that most of the posts are from those who have experience on one side or another. Maybe they don't have enough experience, or just want to speculate on why they think it isn't possible.

    If it works despite all of this speculation then you can see how groundbreaking a product this is.

    From what I have been reading so far, it does work, but isn't for high-performance settings (like everyday use for me). The question isn't does it exist or does it work, but WHY?

    Oh yeah and I write TONS of code (C++, LISP, Scheme, Python, C, ASM, PHP, Java, BASIC, COBOL and whatever else I want to learn) so don't blindly state what you obviously don't know.

    I have no respect for Anonymous Cowards!!!!

    Leimy
  • And it has already been decided that UDI won't be accepted into the official Linux releases.
  • In fact, the UDI driver code can be simpler, because it doesn't deal with interrupt masks, task switching, synchronization, etc. (Since the environment has to deal with these issues, the environment implementation is probably harder, but it only has to be done once for all drivers on that OS.)

    If they don't handle anything, it will produce very inefficient results. Don't have to handle synchronization? Implementing that on SMP means a Big Driver Lock must be held whenever driver code is executing on any CPU, whether it needs it or not. Goodbye, scalability.
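    To see the cost, here is a minimal model of the problem, written with pthreads for illustration (all names hypothetical; real environment code would live in the kernel). If drivers are written as though synchronization were someone else's problem, the environment has to serialize every driver entry point through one global lock:

        #include <pthread.h>
        #include <stdio.h>

        /* One global lock wrapped around every driver entry point, because
         * the drivers themselves were written as if locking were not their
         * job. */
        static pthread_mutex_t big_driver_lock = PTHREAD_MUTEX_INITIALIZER;

        /* Stand-ins for two unrelated, lock-free drivers. */
        static void netdrv_interrupt(void)  { puts("net irq handled"); }
        static void diskdrv_interrupt(void) { puts("disk irq handled"); }

        void env_net_interrupt(void)
        {
            pthread_mutex_lock(&big_driver_lock);   /* serializes ALL drivers */
            netdrv_interrupt();
            pthread_mutex_unlock(&big_driver_lock);
        }

        void env_disk_interrupt(void)
        {
            /* A disk interrupt waits for the network driver to finish,
             * even though the two drivers share no data at all. */
            pthread_mutex_lock(&big_driver_lock);
            diskdrv_interrupt();
            pthread_mutex_unlock(&big_driver_lock);
        }

        int main(void)
        {
            env_net_interrupt();
            env_disk_interrupt();
            return 0;
        }

    Two CPUs fielding interrupts for two unrelated cards still take turns on that single lock -- exactly the scalability loss described above.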

    Surely the kernel would be more manageable if this mass of code wasn't so tightly coupled with kernel internals?

    Only if you don't care about enhancing the Linux kernel. An API set in stone means that this has to be supported as written, no matter if you have to work around five corners to support it because it doesn't fit your kernel.

    Linux sometimes breaks driver interfaces and adds different ones during the development kernel cycle. Do they do it to annoy driver writers? No, they do it because the new interface is cleaner, more efficient, or fits better with another updated part of the kernel. For hardware drivers, UDI means freezing that development and watching performance go down the drain. Or it means designing the kernel to fit UDI, which certainly won't happen.

    Why aren't the Linux/BSD communities driving UDI? UDI can be good for everyone but Microsoft.

    UDI is mainly good for hardware vendors who don't want to release driver sources or specs. Why should the free software community support that? If the driver specs are available then drivers can be developed and maintained for every OS, no problem.

    How much hardware with available specs is unsupported?

  • This allowed for a smaller memory footprint for the kernel boot image, without requiring any of the drivers to be removed from the system.

    But modules have a higher memory footprint on a running kernel. Statically compiled, the functions of all drivers are packed one after another. Modules, on the other hand, are loaded into full memory pages (4 kB on x86), and some of the memory in the last page may be left unused. For example, a module whose code and data total 9 kB still occupies three full pages (12 kB), wasting 3 kB; multiply that by a few dozen modules and the slack adds up.

  • by Anonymous Coward
    It's really not that hard, and yes, it can work. Be Inc. made a wrapper to compile Linux drivers in BeOS. This has since disappeared, as it's a GPL violation, and to my knowledge they only used it once. It's all a matter of wrappers, for the most part, some of which can be done through macros. I myself have done the same with QNX, BeOS, and Linux: I created my own functions/macros for things like pci_find_device(). Writing to ports is also easily done. Then you can have IRQ wrappers and macros. Once you get the basic functions written, it's fairly simple to create a skeleton source for a particular function to go along with your cross-platform library. In some cases you could see a performance hit, but that can be minimized greatly by any decent programmer and the consistent use of macros. It's really not that tough -- much like writing NT or Linux drivers that work on multiple platforms.
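    A minimal sketch of the wrapper idea described above. The Linux branch uses the real 2.2-era kernel calls (pci_find_device(), outb()); the QNX and BeOS branches are left as stubs, since the exact equivalents vary by OS version, and the device IDs and register offset are made up:

        /* compat.h -- sketch of a cross-OS wrapper layer. */
        #if defined(__linux__)
        #  include <linux/pci.h>   /* pci_find_device() */
        #  include <asm/io.h>      /* outb() */
           typedef struct pci_dev *compat_pci_t;
        #  define compat_find_pci(ven, dev)  pci_find_device((ven), (dev), NULL)
        #  define compat_outb(val, port)     outb((val), (port))
        #elif defined(__QNX__)
           /* map compat_pci_t and the compat_* names to the QNX
            * PCI-scan and port-I/O calls here */
        #elif defined(__BEOS__)
           /* map to the BeOS kernel-kit equivalents here */
        #endif

        /* The driver body is then written once, against the compat_* names.
         * (Vendor/device IDs and the register offset are invented.) */
        static int mydrv_init(void)
        {
            compat_pci_t dev = compat_find_pci(0x1234, 0x5678);
            if (!dev)
                return -1;             /* card not present */
            compat_outb(0x01, 0x300);  /* hypothetical "reset" register */
            return 0;
        }

    The per-OS glue lives in one header, and the skeleton driver body never has to change.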
  • by Anonymous Coward
    This happens with I2O... and you see how much it's taken off. In I2O, it's done in the hardware: the OS is I2O-compliant and the adapter cards are I2O-compliant, so each vendor writes one driver (for their OS or device) and it works with any OS that has an I2O driver. Of course, it has to be the same type of device; a network driver on the OS side won't work with a bulk-storage driver on the other side.

    Don't know why it hasn't gone far, but oh well..

    Slightly informed AC...

  • Then they're idiots. Consistent interfaces are the lifeblood of continued interoperability.

    In the Linux world, the binary KPI is not considered relevant. The ABI has remained compatible from 1.2 (when ELF appeared) to the present day and is unlikely to change in the foreseeable future. The source KPI remains consistent within a stable series. Since the source is open and the source KPI is consistent, who cares about the binary KPI?

    To this end, BTW, documenting that behavior in detail is a valuable step but all too rare in the Linux world. Documenting dependencies on specific behavior would be nice too.

    It's happening. Look in the latest 2.3.99 versions; parts are being documented in DocBook format.

    For example, as a filesystem developer I can only shake my head at the Linux VFS and buffer-cache layers.

    Try 2.3. You aren't the only one who thought the old way sucked.

    What's missing is a widespread recognition that the most important reason to protect code from change is the importance to other components of maintaining its current behavior even in small details.

    No. The most important reason to protect code is that it is correct, fast, and maintainable. While one correct, fast, and maintainable solution can be replaced with another that is easier to interface with, no decrease in correctness, performance, or maintainability is considered acceptable. It's true that some of the kernel hackers have axes to grind in some portions of the code. This is true in any project. Unlike closed-source projects, Linux offers you the opportunity to go off on your own, implement what you want, measure its performance, and post your results.

    Unfortunately, doing this usually requires foresight in the _original_ version of the code, leaving room for versions or capability flags so that both called and callers know which behavior to expect or provide. Sadly, the code most in need of an upgrade is inevitably also most marked by lack of programmer foresight.

    This type of thing has a name: cruft. Programmer foresight is a nice thought, but nobody can think of everything. Everyone tries anyway. The nice thing about Linux is that when a failed effort to think of everything starts to cause problems, it gets replaced. In the proprietary world it stays around forever (IDE, the 640K limit, the list goes on).

    >In Solaris or Windows, where most people don't get decent performance anyway

    That's a pretty offhand and inaccurate statement, at least for Solaris. It's not great, but neither is Linux.

    Look, I've used Solaris. I admin it sometimes. It sucks on SPARC, and is simply godawful on x86. I bitch at Linux too, sometimes for other things (like NIS), but when it comes to Solaris I gripe about performance. It just sucks.

  • Tell that to the large number of people waiting for drivers that aren't forthcoming, who don't have the skills to write the driver themselves.

    I do, unapologetically. If you don't release specs so that an open source driver can be written, I won't buy your product. It's just that simple. It's unfortunate that some people won't switch to a Free operating system because they have hardware that isn't supported, but their annoyance and indignation - and that of the people who have to explain this to them - should be directed at the vendors, where it belongs, rather than at the legions of volunteers who have written excellent drivers for documented hardware. Compromising with the devil is rarely a profitable exercise. I would much prefer Linus to announce today that he will no longer allow binary-only drivers.

    A driver version should not be tied to a kernel version in the first place. With a well-defined API (i.e. UDI), this sort of backward-compatibility and forward-compatibility will work and should be encouraged. Needing to rebuild every driver because you updated the kernel is a waste of time and effort, especially when the drivers need updates to match kernel changes.

    No. Firstly, kernel source PIs do not change within a stable series. But what you are forgetting here, since we're talking about binary compatibility now, is that Linux has no consistent or guaranteed kernel binary PI. That's right, none. Neither Linus nor anyone else sees the need, and I wholeheartedly agree. Linux is an open source operating system, and the source KPI is guaranteed within a stable series. So given driver source, there is no issue here. The kernel hackers can feel free to make changes that break binary module compatibility, and it only harms binary-only drivers. Big deal. If a change needs to be made, it needs to be made. And if that change happens to annoy Big Proprietary Hardware Vendor Inc., tough shit.

    It only hinders development if poor API's were chosen to begin with.

    Only as long as those PIs continue to exist. In Linux, when it's determined that an existing KPI (it's not an API if they're not applications after all) sucks, it gets replaced in the next development series. We can do that in an open source world. Your proprietary binary-only world prevents those kinds of changes and instead encourages setting arbitrary limits like the ones you mention ("since we can never change it, we'll just make it as high as we think is necessary today"). In fact, it is the anchor of binary compatibility that holds you to the rocky bottom of 1980's OS technology.

    Jury's still out on performance. [UDI]

    Not really. Alan Cox has explained in several l-k posts, in detail, why UDI drivers, in Linux anyway, cannot have performance equal to their native counterparts. In Solaris or Windows, where most people don't get decent performance anyway, UDI sounds like a good idea. But in Linux, wherein crazy socially maladjusted individuals pride themselves on removing one cycle from the kernel's execution time, any penalty is too much. The rationale is simple: our OS kicks the living shit out of yours; why should we be the ones to change to your driver model? I have yet to hear a good answer to this question.

    Untrusted drivers could be loaded into userspace and run slowly but safely. After they've proven themselves, the user/sysadmin could choose to allow the driver to run in kernelspace for performance. Best of both worlds. (This switching could potentially even be automated...)

    Right. So on my production system, I have three choices: a userspace UDI driver that most likely won't crash my system but will run so slowly that nobody can get their work done; a kernelspace UDI driver that will do God-knows-what to my system; and a 7-year-old battle-tested open source native driver held together by volunteers who know their device better than the people who made it. Guess which one I'm going to choose, even if it means using different hardware?

    UDI represents the best hope for "fringe" operating systems (e.g. HURD) to get comprehensive driver support.

    This is not a bad point but for one thing: if the people who use those OSs want, they can take a driver from another open source OS and port it. The real "best hope" is for every piece of hardware to have available a programmer's manual. Linux is where it is today because many people felt it was a good use of their time to write drivers for hardware that wasn't previously supported. The same can be said of FreeBSD and other projects as well. If nobody feels that writing a HURD driver for some odd hardware is worth doing, then it won't get done. In 1992, nobody picked up Linux and started using it because it supported their hardware. In fact, some of them picked it up because it didn't support their hardware.

    Too much is made today of how to get the computer-illiterate digital have-not GNOME-wanting RedHat-investing Luddite fuckwits to come use our great new wonderful OS. Am I the only one who doesn't really care whether they do or not? Linux is for the people who make it. If someone else finds it useful and wants to use it, they are free to do so by virtue of the license. But it's not the responsibility of the kernel developers to put in support for every two-bit half-assed self-serving proprietary piece of crap "standard" that comes their way. In fact, in most cases, it's not their job to work on Linux at all. They do it because they choose to, not so that you can feel all warm and fuzzy telling all your 1337 friends how well it supports your brand-new XFR3-5432DSA fuxor card.

  • Again, where exactly is the compromise here? How would enabling UDI drivers enable disreputable proprietary behaviors that aren't already being practiced anyway?

    By asking this question you illustrate your lack of understanding. The point is not that keeping UDI out of the mainstream Linux kernel somehow prevents proprietary practices. Rather, if UDI is put in (ignoring for a moment the substantial body of technical reasons it should not be), and vendors begin providing the same low-quality binary-only drivers for Linux that they do for 'doze, the new Linux users unaccustomed to our high standards will accept these inferior drivers. Once they have done so, there will be less motivation than ever for hardware vendors to provide documentation ("Why do you need the docs?" "To write a Linux driver." "But we already provide one! See, here - foo4355.o; it works on UP Red Hat 7.2 on Intel only" "But I have a Sun and I run Debian" "You need foo4355.o for Linux"). Don't kid yourself - this is guaranteed to happen. In fact, it's already a problem because binary modules are allowed at all. Thankfully the developers refuse to support kernels with binary-only modules loaded. Still, UDI will only make the problem worse.

  • No, a closed source driver is not better than nothing.
    Tell that to the large number of people waiting for drivers that aren't forthcoming, who don't have the skills to write the driver themselves.
    Anyone who expects to run Linux on their computer should really buy hardware with Linux in mind. For those who are using existing hardware, I don't think UDI would offer any benefit. It is unlikely that hardware manufacturers are going to provide support for old hardware via UDI, because there's little economic benefit -- they've already sold their product, support or no support, so there's no reason to put extra effort forth (unless you want to keep your good name, but that seems to mean little to most hardware manufacturers).

    Hardware support for Linux is pretty good right now. While not everything is supported, a little of everything is supported. If you want a nice graphics card, you can get one -- not any graphics card, but you don't need any graphics card, just a single one that works well.

    UDI would keep even many good companies from releasing their specs, because they wouldn't have to do so to provide support. It would hurt Linux and its brethren.

    UDI drivers would (finally) separate out policy decisions and leave them in the kernel, where they belong. More improvements could be made to the kernel's driver code, because the API remains unchanged and drivers need not be recoded for architectural changes.
    That's not the way things work. Providing the perfect abstraction is a nice, but unattainable, ideal. Innovation is very often a matter of changing the boundaries of abstraction. And Linux is great because there is that freedom to innovate. That freedom is used a lot in the development of the kernel, and is part of what keeps the Linux kernel from getting too unwieldy.

    There will be a UDI 1.1 or 2.0 or something. Even if the people behind UDI are quite intelligent and thoughtful, they aren't oracles, they can't see into the future, they can't predict what the demands on future APIs will be.

    Open Source drivers don't need to predict the future, because they can change. If other Unices want the hardware support that Linux has, they can GPL their kernels and make their kernel APIs compatible with Linux's, and then they are set. This isn't wholly unreasonable. And if they don't, well, tough luck for them.

    If Linux wants to get better hardware support, UDI offers almost nothing. Is someone going to make a UDI driver but not a Linux driver, because UDI has a 10% larger share? No. Are they going to make one for UDI because it's binary-only? Sure, but that's not what Linux needs.

    UDI represents the best hope for "fringe" operating systems (e.g. HURD) to get comprehensive driver support.
    I think it's highly unlikely the HURD people want anything to do with UDI. Don't support it for their benefit.
  • I'd see the primary market for the product to be that of people that want to hook up one-off instruments, for things like factory control, or scientific analysis.

    In such cases, there may never be a "real" driver, as the device may be so esoteric that there will only be ten of 'em on the planet.

    In the long run, it might be preferable to have some generic "bus" (think: RS-232, Parallel Port, USB, IEEE-488, FireWire, ...) to support all sorts of such devices, and have the support code sit in user space. That is what is likely to, in the long run, limit the usefulness of this system...

  • It's probably useful to note that licensing constraints are not limited to the GPL. Those who favor BSD-style licenses, and who would prefer to use the (flameworthy) term "GPV" to describe GNU software, will have some similar problems if WinDriver has a somewhat restrictive license.

    With a BSD-like license, the notion that KRFTech "owns" portions of the driver would make it just as unacceptable to push a WinDriver-based driver into FreeBSD or NetBSD as it would be with Linux, and perhaps more so. In effect, *BSD sets a "higher" standard for freedom in this regard, in requiring that anything that becomes part of an "official" kernel be released under a license that allows people to build it into proprietary systems. That's a somewhat more "intense" requirement than is provided for in the GPL.

    Having permission to release "binary-only" versions would permit some releases of software that the GPL would forbid, with the net result that "you win some, you lose some" when taking WinDriver into a BSD-related context.

    But the crucial point here is that Licensing Really Is Important.

    There are some legitimate disagreements that may legitimately persist over which approach to licensing of "free software" is preferable.

    My position is that this parallels the multiple forms of non-Euclidean geometry, where multiple viewpoints each provide useful mathematical insight.

    The universal point is that whichever "ethic" you prefer, it is critical to understand and think through the use of your "favored" licensing scheme, as there can be great trouble otherwise.

  • SciTech has been contracted by IBM to make a "light" version of their drivers [ibm.com], so that IBM can take developers off video card support.
    I have heard good things about performance, but that could be because the previous IBM or manufacturer drivers were worse, or simply because of the switch to the newer GRADD driver model.
    Meanwhile, SciTech keeps improving a beta OS/2 version of their commercial "full" drivers.
  • As far as I know, the difference is that it's not just a 2%-5% drain; think of it as a real-time 2%-5% drain. Unlike most applications, the modem can't just sit waiting for resources -- it must have them all the time. So if it ends up waiting too long, because you started some resource-hungry app, then "blip", there goes your network connection.

  • Maybe we can get the people who make win modems to create a module for us linux folks now.
  • For the PC platform, the single-driver, multiple-platform idea is ingenious. Just imagine the efficiency of allowing your drivers and hardware to penetrate every market, appliance, etc. Take it one step further and you could connect your laser printer to, say, a Palm Pilot or a Game Boy. Anyone with common sense can see that this is good for the PC market.

    Hardware engineers are up to the task of making universal standards and interfaces for computers. The limiting market, I am sorry to say, is the software market. Who will be in control of the technology behind the universal drivers? Will it be open or closed? If Microsoft is in control, the answer is simple: you won't have universal drivers, since all they want you to run is Windows! What if Sun is in control? They will want their Java technology to get into the picture.

    To make a long story short, as long as these companies are fighting for domination and complete control, smart ideas like universal drivers will not hit the PC market to any great degree.
  • It means that the device specific software is in the application instead of the kernel. This is OK for things like industrial control and data acquisition, where the programmer writes a custom application for a specific task. It is useless if you want a standard device driver for a sound card, video card etc. The operating system and its applications assume that each class of device driver, such as sound or tape, provides a standard set of features specific to that class. The driver provided with this software does not provide those features. Every application that uses the device driver must include device specific code for the hardware that it supports. It is similar to games on MS-DOS where the sound and video drivers were part of the game, not the operating system.
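    For instance, on x86 Linux an application granted raw port access can bang on the hardware directly, with no device-specific kernel driver involved at all -- essentially the model described above. (A minimal sketch; the I/O base address and register layout are made up, and ioperm() requires root.)

        #include <stdio.h>
        #include <sys/io.h>     /* ioperm(), outb(), inb() -- glibc, x86 Linux */

        #define CARD_BASE 0x300 /* hypothetical I/O base of some one-off card */

        int main(void)
        {
            /* Ask the kernel for direct access to four ports; needs root. */
            if (ioperm(CARD_BASE, 4, 1) < 0) {
                perror("ioperm");
                return 1;
            }
            outb(0x01, CARD_BASE);                    /* made-up "reset" command */
            printf("status = 0x%02x\n", inb(CARD_BASE + 1));
            return 0;
        }

    The device works, but only inside this one program; the rest of the system still sees no sound card, no network interface, nothing.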
  • Does it much matter if it's inefficient? Once the driver was functioning correctly on multiple platforms, you could start including conditional code to optimize performance on particular platforms.

    In radical cases, you might branch and rework the architecture to bring it more in-line with the target OS's native device driver interface, but even in that case you would have a couple of advantages - a working reference implementation (regression testing, anyone?), multi-platform support, and the ability to perform the more expensive optimization work under various OS's if and when you determine that enough customers are asking for it.

    Heck, release the Linux/BSD versions under the appropriate licenses, and you'd probably have an optimized version for each OS within a few months, without much more cost than developing a single driver for Windows.

    All in all, sounds like a good deal; makes me wonder why more companies wouldn't take this route. Is the company new on the scene, or is their product just not mature enough yet to be worth spending time on?

  • Ah! Thought I saw this on their page:

    WinDriver for Linux features performance enhancement tools enabling you to move parts of your code to the Kernel level, thus eliminate unnecessary context switches between the kernel module and your application.

    So it looks like they provide for some level of performance tuning without even going the #ifdef/branch-and-rewrite route.
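    The win is easy to see if you count the user/kernel crossings. Here is a sketch of the two models (all names hypothetical -- this is the general shape of the technique, not KRFTech's actual API):

        /* Hypothetical helpers; declared so the sketch compiles. */
        extern void wait_for_interrupt(int fd);    /* blocks inside the kernel */
        extern int  read_status(int fd);           /* one ioctl() per access   */
        extern void handle_event(int ack);

        /* User-level model: at least two user/kernel crossings per event. */
        void user_mode_driver_loop(int fd)
        {
            for (;;) {
                wait_for_interrupt(fd);            /* kernel -> user switch    */
                handle_event(read_status(fd));     /* user -> kernel and back  */
            }
        }

        /* "Kernel plugin" model: the hot path runs in kernel context, and
         * only completed work is handed up, so per-event switches vanish. */
        extern int  read_status_direct(void *ctx); /* plain register read      */
        extern void queue_for_userspace(void *ctx, int ack);

        int my_kernel_isr(void *ctx)
        {
            queue_for_userspace(ctx, read_status_direct(ctx));
            return 1;                              /* interrupt handled        */
        }

    Moving the hot path into the kernel module trades some of the kit's convenience for fewer context switches per hardware event.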

  • It's a quick and easy way to add the increasingly important Linux checkbox to their marketing gloss without giving up any of their docs or paying someone to write a driver.

    Since UDI exists, you can't prevent this kind of game. They can still write a UDI driver, say "we support Linux!" and add "by the way, you need this patch to use our UDI driver" in the fine print. The glossies can still have that little checkbox, regardless of how hostile Linux kernel hackers may be to UDI. The solution to this problem is user education, not rejecting UDI environment code for a perceived (but illusory) ideological benefit.

    But maybe we're just splitting hairs anyway. The simple facts are that, politics and licensing aside, UDI will not be a part of mainline Linux anytime soon, if ever, and that this decision has been made for technical reasons, rightly or wrongly. We can argue the nontechnical side of things, but the people who matter don't care in the slightest.

    Let's get real here. The decision was not technical -- plenty of low-quality drivers are allowed into even stable-series releases, even with little technical merit. As long as it's marked "EXPERIMENTAL", Linus will allow almost any extra drivers to be included. UDI environment code could be marked "EXPERIMENTAL", and it wouldn't impact those who don't enable the option. This wouldn't hurt anyone, and should be done. However, the kernel developers don't like the idea of supporting UDI, so they find technical excuses for why it "cannot" work to avoid including it.

    I thought the Open Source way was to give the code a chance and let it succeed or fail on its own merits?

    I'll give you another example. Years ago, Oracle said they were reluctant to consider porting their database (in part) because Linux had no support for raw disk devices. Linus (for whatever reason) hated the idea of raw I/O and insisted that all disk I/O must be buffered. His response to this genuine need was to the effect of "screw that, I don't think we need it, and Linux will never support raw I/O to disk devices." (Not an actual quote.) He insisted that he would reject patches for raw disk I/O even if someone else did all the work. It was because he didn't like it.

    So, what happened with raw disk I/O in the end? It's going to be in the Linux 2.4 kernel. This is a tacit admission that there is a real need for this feature. It was resisted for years only because Linus preferred his way. (Of course, now devfs is being used as a technical excuse for this delay, because it "would have required too many device numbers" -- that never stopped other UNIX vendors...)

    UDI is a lot like raw disk I/O -- it's against the preferences of the Linux kernel developers, and they don't feel it's needed. Nevertheless, there is a real need for this stuff, and it will probably happen eventually in some fashion; refusing to allow it just causes unnecessary strife. It's clear that when it comes to UDI, Alan Cox doesn't like it. Must we go through the same struggle as with raw disk I/O?
  • Face it, your typical Joe Linguru doesn't give a rat's ass about a new potential convert, especially a Joe Sixpack one.

    It's the dichotomy of the Linux community, isn't it? On the one hand, you have people wanting to replace Windows everywhere with Linux everywhere. On the other hand, people are lamenting the lower quality of the average Linux user as more newbies show up.

    We can't have it both ways, folks. The average computer user never gets far past "clueless newbie" in their skills. Now that Linux is becoming more of a mainstream OS, we're stuck with more average users. It's a trade-off that's tough to evade. If you don't like it, switch to an OS like HURD that is more hostile to the average user. (Of course, if you could move UDI drivers from Linux to HURD, that would be a lot easier...)

    Many corporates just can't see things that way.

    Quite true. However, some companies are starting to see the light. Matrox, for example, is paying for an accelerated Linux driver to be written for the G400, to be released as Open Source. The result? They get a competitive advantage. (I just bought a G400, actually, for this reason.) Even if most companies don't get it, you only need a few enlightened ones...

    Nice, really. But legally speaking, you won't be allowed to distribute these drivers. If you don't believe me, here's a quiz:
    The KDE project is based on Qt, which isn't GPL: Yes/No.

    The KDE project is under GPL: Yes/No.

    As Debian pointed out, you can use KDE but not distribute it, as the GPL forbids you to distribute your software if it requires a non-open-source component: Yes/No.
    The correct answers are "Yes", "Yes" and "Yes". We'll have the same problem with UDI, which will be to the driver what Qt is to KDE.

    (This response is based on my understanding of copyright law, but I am not a lawyer; standard disclaimers apply. Could a real lawyer please check this and comment?)

    This is an oversimplification. Copyright law only allows the GPL to exert influence over "derived works", which is why "mere aggregation" on a distribution media does not bring other code under the scope of the GPL. (This is explicitly acknowledged in the GPL, in fact.) Since KDE is a derived work of its own GPL code and the proprietary Qt code, the GPL should apply to the whole. (Hard to enforce when the KDE developers don't seem to care...)

    UDI, on the other hand, is a specification which is not under the GPL. If Microsoft were to implement a UDI environment (an idea they're probably more open to than Alan Cox is), that UDI environment code would be a derived work of the UDI specification, and could remain proprietary. Meanwhile, if a GPL'd Linux kernel driver were ported to make a UDI driver (perhaps by Microsoft), that UDI driver would be covered by the GPL, since it would be a derived work of the original GPL code. It would also be a derived work of the UDI specification, which wouldn't interfere with using the GPL.

    So, this hypothetical scenario has a proprietary UDI environment, and a GPL-licensed UDI driver. Each is kosher on its own. The key question is, what happens when you combine them? As I understand it, if the proprietary OS loads the UDI driver dynamically, the GPL couldn't apply, because there would be no derived work for it to apply to. (This is akin to loading a GPL'd user program with a proprietary OS.) A direct quote from the GPL:
    These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works.
    Here is the "mere aggregation" quote from the GPL:
    In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.
    Even from the words of the GPL itself, it seems clear that a proprietary OS with UDI (which is clearly identifiable as a separate work; it could also use proprietary UDI drivers) and the GPL'd UDI driver could even be on the same distribution medium without violating the GPL. Static linking, on the other hand, probably does bring the whole under the GPL. All the proprietary vendor needs to do is stick to dynamic loading, and I think they're in the clear.

    Is there anything non-"100% pure GNU" that Stallman doesn't consider part of an international conspiracy of proprietarists to eradicate free software? :> (evil grin)

    Probably not! :-)
  • If you have any platform other than i386 then 99% of the time closed-source is exactly the same as nothing.

    That's usually true right now. A big part of the reason is the cost required to port to other platforms and support them. On the other hand, if no actual porting is involved (just a recompile), even proprietary companies might be more willing to release binaries for multiple platforms, since it would expand their market for a smaller cost...
  • Consistent interfaces are the lifeblood of continued interoperability, whether you're talking about GUI fluff or kernel grunge. If lots of people call a function or use a data structure, great care should be taken to preserve that item's behavior or change it only for very good reasons.

    Thank you!

    Finally, someone who understands the real issues here and why the ad hoc methodology used until now is so limiting...
  • In the Linux world, the binary KPI is not considered relevant. The ABI has remained compatible from 1.2 (when ELF appeared) to the present day and is unlikely to change in the foreseeable future. The source KPI remains consistent within a stable series. Since the source is open and the source KPI is consistent, who cares about the binary KPI?

    This is a serious problem. Do you remember what a nightmare it was to manually upgrade a.out systems to ELF? Even with all the source readily available, recompiling all that code is very painful. The more code, the more painful it is. Now that it's been stable under ELF from one kernel version to the next, hasn't life been easier? Application developers don't have to worry about capricious changes in the ABI for their ELF binaries; why aren't device driver developers accorded the same courtesy? This deliberate nonchalance about stable interfaces isn't "studly hacking"; it's amateurish and it shows.

    There's over a million lines of device driver code in the Linux kernel. That's a lot of code to review and modify when changes are made. Device driver development could be less painful if a solid API/ABI (or KPI/KBI if you prefer) could be trusted not to change often. UDI aims for that kind of interface stability. It also aims for high performance. I don't know if it will meet these goals, but I'd rather give it a chance instead of running it out of town on a rail.

    Imagine the uproar it would cause if the kernel hackers suddenly decided that they didn't care if ELF changed with every kernel revision, couldn't be bothered to keep backward compatibility with old ABIs, and refused to accept such efforts contributed by others. After all, who cares, if you have source code and the API hasn't changed? Oh, you don't have source to Oracle? Too bad, proprietary code sucks -- that'll teach 'em not to release their source!

    This attitude is childish, and impedes the progress of the entire Linux community. Clean, solid abstractions are the basis of complex code. Do you really think you'd have such fancy browsers and word processors if they had to twiddle the bits on the screen at the application level instead of using GUI toolkits? I doubt it.

    We need more stable interfaces, to make way for the Next Big Thing, whatever that may be. Right now it's hard to see the forest for the trees.
  • How will we know until we see it? If it sucks that badly, they'll probably either revise the UDI spec to make it perform well (which would eliminate the main argument against it), or punt and implement a traditional interface (which would be a tacit admission of failure). I don't know what will happen, but I wish them luck.
  • I was told by someone at SCO that Monterey would have UDI as its sole API for drivers, and after a little searching I found a reference [sco.com] on SCO's website about it -- "UDI device driver model" is listed under "Common Enabling Technologies".
  • "A driver version should not be tied to a kernel version in the first place."

    Linus and everyone else on linux-kernel disagrees with you.


    Are you suggesting that their opinions are beyond even debating then?

    Currently, device driver modules must be tied to the Linux kernel because code written for one kernel version often won't work with another version. If the API and ABI are stable, then you only need to version those and not every kernel code change, large and small.
  • Who says free software needs to compromise? UDI is a technology that has significant potential benefits for free software, independent of political considerations. Yes, UDI also offers significant benefits to commercial, proprietary developers, but that's incidental. Don't cut off your nose to spite your face; use the best technology instead of hobbling yourself for purely political reasons.

    Again, where exactly is the compromise here? How would enabling UDI drivers enable disreputable proprietary behaviors that aren't already being practiced anyway?

    As for whether developers want to work on drivers, I have no objection if that's how they want to spend their time. However, I'm concerned about people who insist that vendors shouldn't write a driver and should only release specs so we can write the driver. Even with specs, not all drivers get written, and little-used drivers might be just as poor in quality as typical vendor fare. I believe the vendor should be expected to release open source for a basic driver, along with specs.

    If they only release specs and we write the code, we're subsidizing their efforts -- let them shoulder the burden of making their products work. Once they've gone that far, it's more reasonable for them to ask the community to fix bugs and enhance the driver. Again, why should they expect us to do their work for them?
  • Anyone who expects to run Linux on their computer should really buy hardware with Linux in mind.

    That's true for early adopters. When I ordered my computer at the end of 1992, I made sure to select hardware (e.g. Adaptec 1542CF) that was supported by Linux and NetBSD, since I wasn't sure yet which I would run. (I quickly decided I liked Linux better.) Selecting the best hardware for Linux is obviously ideal, but not everyone has that luxury, or necessarily even knowledge of which hardware is best.

    For those who are using existing hardware, I don't think UDI would offer any benefit. It is unlikely that hardware manufacturers are going to provide support for old hardware via UDI, because there's little economic benefit -- they've already sold their product, support or no support, so there's no reason to put extra effort forth (unless you want to keep your good name, but that seems to mean little to most hardware manufacturers).

    I'll tend to agree with you here -- vendors probably won't rewrite drivers for old hardware. On the other hand, old hardware is more likely to have Open Source drivers available. New hardware tends to be more troublesome; if new hardware started becoming available with UDI drivers, that hardware could be used immediately. If open source drivers were later written (or maybe higher-performance Linux-native drivers?), you could switch later.

    Of course, there's always the "don't use that hardware" argument, but that attitude doesn't mesh well with the quest for World Domination (tm). Face it, your typical Joe Sixpack doesn't give a rat's ass when he buys his computer whether the hardware has Linux drivers -- but if you can make it run Linux anyway, and better than it runs Windows, then you may have a potential convert on your hands...

    Hardware support for Linux is pretty good right now. While not everything is supported, a little of everything is supported. If you want a nice graphics card, you can get one -- not any graphics card, but you don't need any graphics card, just a single one that works well.

    Linux probably has the best device driver support of any free OS, and better than many proprietary OS's. It can't match Windows 98, but it may actually have better support than NT 3.51 or NT 4.0 did. Linux is reaching critical mass and starting to become a significant target market for vendors, although still less important than Windows.

    Now that Linux is more mainstream and less of an "alternative" OS, what about the other alternative OS's? What about HURD, FreeBSD, NetBSD, and even BeOS? They may all be great OS's, but they lack the driver support of Linux (especially HURD and BeOS), which is a barrier to entry. Years ago, people used to complain about how they'd like to use Linux, but it had little driver support -- now you hear the same about HURD or BeOS. Won't we be better off if new OS's can compete on features and merit rather than sheer volume of available device drivers?

    UDI would keep even many good companies from releasing their specs, because they wouldn't have to do so to provide support. It would hurt Linux and its brethren.

    We can't stop companies from being shortsighted. You can always boycott such companies; that's nothing new. Smart companies would release UDI driver source (both for the "with enough eyeballs, all bugs are shallow" effect, and to allow the driver to run on as many platforms as possible without needing to support each binary configuration directly) and specifications (so that the community could do most of their support work for them, reducing their support costs). Dumb companies that keep their source and specs closed will suffer a competitive disadvantage, which should encourage them to change their ways.
    UDI drivers would (finally) separate out policy decisions and leave them in the kernel, where they belong. More improvements could be made to the kernel's driver code, because the API remains unchanged and drivers need not be recoded for architectural changes.
    That's not the way things work. Providing the perfect abstraction is a nice, but unattainable, ideal. Innovation very often is a matter of changing the boundaries of abstraction. And Linux is great because there is that freedom to innovate. That freedom is one that is used a lot in the development of the kernel, and is part of what keeps the Linux kernel from getting too unwieldy.

    At over 1.6 million lines of code, the Linux kernel is already somewhat unwieldy. (Nothing compared to Windows 2000, of course.) Over 1.1 million lines of that is drivers; decoupling driver development from kernel development would give both kernel developers and driver developers more freedom to innovate, because they wouldn't spend so much time tripping over each other!

    While I agree that attaining a perfect abstraction is probably impossible, UDI looks to be a very, very good one. I'm amazed at how flexible it is, especially for an API designed by committee. It remains a matter of speculation whether or not it will be fast enough to be a system's primary API, but SCO and IBM must have some faith in it -- I'm told that UDI will be the only device driver API supported by Monterey, the "High Volume Enterprise UNIX Platform" (as their whitepaper describes it) being developed for Merced. I'll be interested to see if that succeeds; it sounds like a demanding environment.

    There will be a UDI 1.1 or 2.0 or something. Even if the people behind UDI are quite intelligent and thoughtful, they aren't oracles, they can't see into the future, they can't predict what the demands on future APIs will be.

    Of this, I am certain -- sooner or later, UDI will surely have to be revised, unless it dies completely. (The obvious reason a revision might be necessary would be performance.) Project UDI has prepared for this; the UDI specs and the drivers themselves are versioned; if a UDI environment and UDI driver don't support the same API version, they will be able to tell. An OS supporting UDI 1.0 won't accidentally try to run a UDI 1.1 driver and crash the system; it will simply refuse to run the driver at all.
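    To make that concrete, here's a minimal sketch of what such a load-time version check might look like. All of the names here are invented for illustration -- the real UDI spec defines its own structures:

        /* Hypothetical sketch, NOT actual UDI API: a 1.0 environment
         * refusing any driver built against a newer interface version. */
        struct udi_style_driver {
            unsigned int api_major;   /* interface version the driver targets */
            unsigned int api_minor;
            const char  *name;
        };

        #define ENV_API_MAJOR 1
        #define ENV_API_MINOR 0

        int env_load_driver(const struct udi_style_driver *drv)
        {
            /* A newer minor version may need calls this environment
             * doesn't provide, so refuse cleanly rather than crash. */
            if (drv->api_major != ENV_API_MAJOR ||
                drv->api_minor > ENV_API_MINOR)
                return -1;
            return 0;
        }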

    In any case, hopefully such revisions would be rare occurrences. An OS could, of course, support several UDI versions simultaneously if necessary. Really, what are the odds that the UDI specification will change more often than the Linux kernel driver API already has?

    Open Source drivers don't need to predict the future, because they can change. If other Unices want to get the hardware support that Linux has, they can GPL their kernels and make their kernel APIs compatible with Linux's, and then they are set. This isn't wholly unreasonable. And if they don't, well, tough luck for them.

    Actually, the cat's out of the bag on this one. The very existence of the UDI spec presents an opportunity for proprietary OS vendors (even Microsoft) to support UDI in their system, then port Linux drivers to the UDI model. The ported driver would be GPL, but the UDI environment could remain entirely proprietary, and it would not violate the GPL since their code changes wouldn't be a "derived work" of the GPL code. It would be a derived work of the UDI spec, which isn't bound by the GPL. They can do this whether or not the Linux kernel developers support UDI.

    On the other hand, this bugaboo isn't worth getting too alarmed about. Microsoft has little incentive to "steal" drivers this way -- they already have the most comprehensive driver support, courtesy of hordes of bootlicking vendors. Proprietary UNIX vendors have an incentive, but they still have a problem with the value proposition of proprietary systems -- increasingly, Linux is becoming recognized as a best-of-breed UNIX platform in more and more areas, and the proprietary UNIX vendors are bound to fall by the wayside sooner or later, with or without device driver parity with Linux. In short, this one is a red herring.

    If Linux wants to get better hardware support, UDI offers almost nothing. Is someone going to make a UDI driver but not a Linux driver, because UDI has a 10% larger share? No. Are they going to make one for UDI because it's binary-only? Sure, but that's not what Linux needs.

    Vendors can make binary-only Linux drivers now, and some are starting to. This is more problematic, because it makes it difficult to upgrade the Linux kernel.

    As for why vendors might release UDI drivers instead of Linux ones, it's a question of leveraging their investments. If a vendor is targeting Unix-like systems, covering them all with one UDI driver would be better than investing in learning each individual native interface. Also, the UDI driver could actually be easier to write, with a stable API that hides issues such as MP synchronization from the driver...
    UDI represents the best hope for "fringe" operating systems (e.g. HURD) to get comprehensive driver support.
    I think it's highly unlikely the HURD people want anything to do with UDI. Don't support it for their benefit.

    Given Stallman's hostility towards UDI (he seems to believe it is a plot for proprietary vendors to rip off free software developers), you may be correct in believing that the HURD people will have nothing to do with UDI. That doesn't mean they wouldn't be shortsighted to do so. If they ever want "the GNU system" (with HURD) to replace "GNU/Linux", they'll need to address the driver issue, and UDI is a better solution than trying to race Linux in these driver parity wars...
  • As you admit, it's already a problem where proprietary vendors are releasing binary-only Linux drivers and refusing to release source or specs. New Linux users can be just as lulled by these (potentially) low-quality binary-only native Linux drivers. This is a user training issue and a community issue. The problem isn't going to go away on its own.

    It all seems like a wash to me. How does UDI actually make this problem worse than it is anyway?
  • I hate to reply to my own comment, but I can't figure out which one of its replies this is most appropriate for. My question: If Winmodems suck so badly, why does everyone and their mother around here whine about how they're not open enough? And in a day and age of 500 MHz Celeron chips, GHz Athlons, etc., I seriously doubt that a little CPU drain caused by the CPU emulating a modem would be missed by more than 5% of the people around here. Honestly, most people's CPUs aren't pegged at 100% 24/7, unless they're running Distributed.net or SETI clients. At least to me, saving a hundred bucks here or there is much nicer than getting an extra 10 blocks per day on distributed.net.

    The next question: since winmodems suck and are slow, and open source development is supposed to yield superior results to proprietary schemes, why doesn't anyone bother to show up the winmodem manufacturers by making a driver that yields better performance while using fewer CPU cycles?
  • I thought the whole beauty of open source was that the users/developers would make drivers as they needed them? The linuxppc guys can reverse engineer the mac motherboards, XFree86 people write their own video drivers when none are made available, but no one can take the time to figure out the modems that seem to appear in almost every shipping PC? What gives?
  • Yeah, OTOH Win95/98 being closed source and having a "stable driver interface" really helped in getting an NTFS driver for it.
    The same goes for drivers for aged hardware, or hardware from vendors that went down the drain, when you want to use them in newer Windows versions. It will not work with older/bad NT4 drivers. Try the PPPoE driver WinPoET 1.2 from iVasion on Win2000 and see how well it works, or tell me why every graphics card manufacturer has to come out with new drivers for Win2000.

    This "stable interface" is mostly a myth.
  • But this is not a feature; it's a misguided cure for non-openness. And it works well most of the time, but not always -- take a brief look at support.microsoft.com and search around. There are many problems with drivers across service packs and between NT and 2000.
  • Many of the comments I'm reading here make me think of the almighty hill much of the Slashdot community sits on.

    Here is a company TRYING to do a good thing and help people out by making driver writing easier. No, nobody ever claimed that this method of doing things was faster than regular code. One word: DUH! It's running outside the native OS; OF COURSE it's not going to run as fast. This from the same people who carry on about Transmeta chips and how fast they're going to run and blow up Intel.

    Listen, everybody step down from your high and mighty perch and come down to the normal user's level for a moment.

    1. Joe Blow user wants things that simply run. If you want to keep Linux within the confines of the computer elite, then keep this attitude about anything easier that comes along. Somehow Grandma likes running Windows 95, because it's easy and she doesn't want to recompile a kernel.

    2. Keep putting down anything that might actually help the OS that wasn't suggested by Linus or Bruce Perens. There are a lot of smart people out there, and if you continually dog good ideas instead of trying to nurture them and -- WOW -- maybe improve on them, then Linux will become, pardon the lack of a better term, "an inbred community."

    3. Not accepting the fact that some stuff will be closed source. Not all companies can make money from support. Reverse engineering of chipsets becomes significantly easier with all the specs laid out in front of you. Think about the company's interest for a moment:

    a. Release closed source drivers for Windows and Mac and not worry about competition. Put up with Linux user requests.

    b. Release specs to the community and have a competitor figure out what I'm doing and move before I can. Whether this is a logical or reasonable argument doesn't matter.

    Personally, I like the UDI idea and think it's about time. Whether or not it works as quickly as native code really doesn't matter to me if I can run Solaris, Win2000, RedHat, and Windows 95 on the same system without having to download different sets of drivers for each OS. You can't tell me that this problem cannot be beaten with a little humility, ingenuity and hard work within the community, instead of an "It CAN'T BE DONE, IT'S HARDWARE and NATIVE" argument. Hell, if Linus can program a processor to emulate x86 code, and nice people can get Windows to play in the Linux sandbox and vice versa, then this can be done. Dance with the same girl you brought, and live by the motto "It just works." Worry about upgrading performance later and free your mind. Live by the motto below:

    Hangtime
    If you continue to think the way you always thought, you will continue to get what you have always got. - Anonymous
    Hangtime's corollary
    If the Linux community keeps the attitude it has had, it will never grow beyond the most elite of computer users and never pass Apple in market share.
  • I can see how a "write once, run many" theory can be employed with apps, as apps can use abstraction layers (winelib etc.) to abstract the running code.

    As far as drivers go, with all the changes in the kernels, different compilers, different platforms (x86, ppc, alpha,...), I just don't see how this can be easily accomplished.

  • Actually, you might want to check the devel kernels, as I think preliminary winmodem support is built-in.

    Go to edge.kernelnotes.org for all da juicy details!
  • It looks as if the drivers run in userspace and there is a stubbed "superdriver" kernel module that provides a generic API for driver style interfaces between this userspace "driver" and the Linux kernel.

    It's quite possible that the "superdriver" stub could be a binary-only module too, from the looks of things.

    I didn't read far enough in to look for licence restrictions, redistribution royalties etc.

    I am certainly no kernel programming expert but it seems to me that there could be some definite performance issues with this approach, aside from these other issues I have outlined.

    Would anyone suitably clever care to comment on the performance issue here?

  • >Linux has no consistent or guaranteed kernel binary KPI. That's right, none. Linus and everyone else doesn't see the need

    Then they're idiots. Consistent interfaces are the lifeblood of continued interoperability, whether you're talking about GUI fluff or kernel grunge. If lots of people call a function or use a data structure, great care should be taken to preserve that item's behavior or change it only for very good reasons. (To this end, BTW, documenting that behavior in detail is a valuable step but all too rare in the Linux world. Documenting dependencies on specific behavior would be nice too.) Failure to preserve behavior will lead to other components failing, often in very subtle and non-obvious but no less severe ways, and in ways which require more than a mere recompile to address.

    One of the biggest problems with Linux is the lack of good internal abstractions that allow people to do different things without interfering with one another. For example, as a filesystem developer I can only shake my head at the Linux VFS and buffer-cache layers. It seems like there's always some asshole changing a function or data structure or locking schema to make their own "neat idea" work, without considering the impact their change will have on others. I've seen project after project lose time from this lack of development discipline. Many times I've seen people make the situation worse by countering one hack with another instead of improving the underlying abstraction so it serves everyone's needs better.

    Some code is "politically protected" by the fact that some Major Figure in Linux development will scream at you when you touch it. Other code is protected by being hard to understand, or in other words bad code is less likely to change. What's missing is a widespread recognition that the most important reason to protect code from change is the importance to other components of maintaining its current behavior even in small details.

    >In Linux, when it's determined that an existing KPI (it's not an API if they're not applications after all) sucks, it gets replaced in the next development series.

    And that's great. Sucky interfaces should be replaced with better ones. However, "sucky" includes "incompatible" as a major component. Major surgery should be undertaken only after understanding and agreement have been reached on the goals and ramifications, and all too often that is not the case.

    Even when an interface is completely overhauled, I see no good reason not to maintain compatibility with at least version N-1 of that interface. Unfortunately, doing this usually requires foresight in the _original_ version of the code, leaving room for versions or capability flags so that both called and callers know which behavior to expect or provide. Sadly, the code most in need of an upgrade is inevitably also most marked by lack of programmer foresight. Humility is not a common trait in this community. Everyone thinks their version can never be improved upon, so they don't build in the features that allow future improvements to be made with minimum pain.
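    For what it's worth, the foresight I'm talking about can be as cheap as a version field and capability flags in the interface structure itself. A hypothetical sketch (this is not real kernel code):

        #define FOO_OPS_VERSION 2
        #define FOO_CAP_ASYNC   (1UL << 0)   /* invented feature bits */
        #define FOO_CAP_MMAP    (1UL << 1)

        struct foo_ops {
            int version;                /* struct revision the implementor
                                           was compiled against */
            unsigned long capabilities; /* feature bits checked at run time */
            int (*read)(void *dev, void *buf, unsigned long len);
            int (*write)(void *dev, const void *buf, unsigned long len);
            /* v2 additions go below; v1 callers never look past here */
            int (*read_async)(void *dev, void *buf, unsigned long len,
                              void (*done)(void *cookie), void *cookie);
        };

        int foo_register(struct foo_ops *ops)
        {
            if (ops->version < 1 || ops->version > FOO_OPS_VERSION)
                return -1;   /* built against an unknown revision */
            if (ops->version < 2 && (ops->capabilities & FOO_CAP_ASYNC))
                return -1;   /* a v1 struct can't claim a v2 feature */
            return 0;
        }

    Both sides can then evolve independently: old callers ignore new fields, and new callers know exactly what an old implementor can do.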

    >In Solaris or Windows, where most people don't get decent performance anyway

    That's a pretty offhand and inaccurate statement, at least for Solaris. It's not great, but neither is Linux.
  • >The source KPI remains consistent within a stable series. Since the source is open and the source KPI is consistent, who cares about the binary KPI?

    I thought I was answering that question, but apparently not clearly enough. In brief, then: relying on people to change their source when you change an interface is an invitation to error, because sometimes they don't. Sometimes they're not around to do it. Sometimes they don't recognize that they need to change their code. Sometimes they change their code and introduce a whole new bug in the process. It's better for system stability if their code continues to work without intervention, thank you very much.

    >Try 2.3. You aren't the only one who thought the old way sucked.

    I'll believe it when I see it. I've seen enough false claims about what has improved in past releases that I'm just a little bit skeptical. That code had a _long_ way to go, and Kernel Traffic seems to have a lot of stuff about this filesystem or that filesystem still needing updates to comply with the new interface. My build of 2.3.99-pre4 just last night kept failing in umsdos because of this stuff. Still seems to be very much a work in progress, from where I sit.

    >While one correct, fast, and maintainable solution can be replaced with another that is easier to interface with, no decrease in correctness, performance, or maintainability is considered acceptable

    That's pretty much what I've been saying. Unfortunately, the new interface is often not provably faster or more correct than the old one - nobody even bothers writing or running the tests that would allow them to make such a statement out of knowledge rather than bravado - while the changes have severe implications for the correctness of other code that worked fine with the old interface.

    In my experience, the people who are most afflicted by the "rewriting fixes everything" mentality are junior programmers. Senior programmers have learned the hard way that rewriting something that's anywhere near functional _will_ introduce errors not present in the original, that doing it properly involves ten times more effort in regression testing than in coding, and that if you're not going to do it right you should leave well enough alone no matter how it offends your aesthetic sense. It's too bad that in the open-source community the people whose egos won't be satisfied until they've left their mark on the code - no matter how poorly justified their changes are - tend to outnumber and outyell the people who know and care about producing quality software. The signs of the creeping rot that results from all this marking of territory are everywhere in Linux, and some of the big dogs are the ones pissing on things the most.

    >The nice thing about Linux is that when a failed effort to think of everything starts to cause problems, it gets replaced. In the proprietary world it stays around forever (IDE, the 640 limit, the list goes on).

    Neither of those examples is of an attempt to think of everything. In fact, both are examples of the _lack_ of foresight that is antithetical to what I'm talking about.

    >I've used Solaris. I admin it sometimes

    And you suppose that I have less experience with Solaris? Or, for that matter, with Linux? Why? Remember, people who disagree with you may occasionally do so because _you_ are the one who needs educating.
  • There Ain't No Such Thing As A Free Lunch.

    I'm fairly certain Heinlein didn't make it up, but he did make it popular.

    Cris E
    St Paul, MN

  • Check out www.i2osig.org [i2osig.org] for an example of another project. I am working on an I2O FibreChannel-RAID adapter at my job.
    I2O allows hardware vendors to concentrate on making great hardware/firmware. The people who know the most about the OS, the OS vendors, can then concentrate on making great device drivers. The spec is open but currently you have to join the sig to redistribute the headers. Alan Cox, who is working on the Linux I2O drivers, is part of an effort to change that.
    There are I2O drivers for every major OS except one. You guessed it... Windows. Microsoft's I2O drivers have been indefinitely delayed. They are next expected as part of Win2k SP2, if ever. What is crazy is that they are a major sig member and can't even get a working set of drivers. Thanks to them I can now claim to be an NT4/Win2k device driver expert. Joy.
    Supported devices include LAN, Tape, Block Storage (usual disks), Generic SCSI and more.
  • While that SEEMS like a great idea that would work, we already have (some) trouble getting SOFTWARE drivers to work properly for some hardware on Linux; if the company won't release software drivers, what's to say that they'll release firmware drivers?

    Or what's to say that they won't charge extra for the hardware because it's being used on an 'under supported' 'minority' or 'rebel' OS?

  • What gives is that, as I understand it, winmodems are missing a big chunk of conventional modem HARDWARE that is made up in SOFTWARE as a driver. To create a video card driver, one only needs (though it's not trivial) to figure out how to access the video card.

    The winmodem concept applied to a videocard would be a card missing video ram, frequency generators, and rendering algorithms. It would essentially be a PCI/AGP card with (s)VGA plug on the end.
  • Oh, get off your jihad. Sure, that's your opinion, but you can't go around saying that that's the "Right Thing" without proof, evidence, or some shred of anything showing that this is something other than your opinion. If I got on here and said
    NO MORE OPEN SOURCE KERNEL MODULES! people would flame my ass so fast it isn't even funny. What gives people the right to do the opposite thing? In my opinion, a driver manufacturer should do what they want to do with their source. If that means that you won't buy their cards, so be it. I doubt they care. Look where demanding open source drivers has gotten us. We have all this infrastructure without many company-made drivers. Linus knows it; that's why binary-only kernel modules have gotten so easy. The XFree guys know it; that's why XFree 4.0 supports binary drivers so well. Coming from a Windows world, I can tell you that drivers matter. I still think that the hardware manufacturers are in a better position to write drivers for their cards. It's just the dynamics of the thing. How is a person not deeply involved with a project supposed to know about all the intricacies of the hardware? Docs only go so far. And if they did document all these nuances, they might as well write the driver themselves. Take the Detonator drivers on the Riva 128 and TNT, for example. Performance increased a good 35% (bringing a 128 close to a Voodoo2 for Quake II), just by changing drivers. You rarely see that kind of optimization in open source stuff.
  • Excuse me? Since when were the original Riva drivers crappy? nVidia's drivers are among the most stable ones available and have been for a long time. I don't see current OSS projects being a pinnacle of stability either.

    Look at GNOME: it is bloated and slow, and getting even more bloated and slow with upcoming releases. (Mozilla used as an HTML viewing object? Isn't it really far along and still leaking like hell, with a bloody huge memory footprint? God help us! CORBA used to implement reusable objects? What the hell is wrong with them! Even the Berlin guys admit that CORBA is slow.) At least KDE has the right idea. KOM or something else similar to COM is the way to go for reusable objects. Stuff that needs CORBA-like features can implement those at a higher level, so as to avoid bloat in the core. Still, KDE isn't exactly the world's most efficient GUI either.

    As for OSS drivers, what about all the crappy sound drivers that don't support duplexing and whatnot? True, in general OSS drivers offer higher stability, but they are rarely as fast. The only graphics driver that runs faster in Linux than in Windows (i.e. one that makes Linux look like it should) is the Matrox-made G400 driver.
  • Then you get the Mac-style plug-n-play card where you plug something in and it just works -- no kernel rebuild, no install wizards, etc., etc.

    I built Mac cards like this years ago - they were a dream to support.

    There is a downside, however -- once you do this, you freeze an API.

  • If it works, it's a really cool thing. Just imagine: there would be no more reasons to use Winblows. Every vendor could compile his drivers with this tool.
    But on the other side: 1.) Drivers made by this tool will be rather big (not to say huge) and slow (slow as a snail...).
    2.) The tool isn't Open Source, and the drivers won't be, either.
    3.) It's hard to believe that this works for ALL kinds of drivers.
  • There has got to be some sort of weird performance issue with a universal driver.

    How could you write low-level code for various devices and get ANY sort of compatibility, especially with Windows CE, which doesn't traditionally run on x86 systems while most of the others do?

    The point here is, I don't think this is such a miracle bullet. How could it possibly do all of that without some sort of emulation, or a REALLY REALLY good object model to compile down with?

    It could work if you abstracted basic commands and such, but since things like SoftICE, DDKs, and that sort of thing are still heavily used in making driver code, I don't think this product is a miracle.

    Of course, it always comes down to marketing, I suppose.

    --jay
  • Microsoft can't even get the same code to compile the same way on both 9x and NT... so how does KRFTech do it??? The answer is simple... you have to plug the "WinDriver" kernel module into whatever OS you are running. It's the WinDriver module, I assume, that interprets "cross-platform" hardware calls. Anybody out there have any experience w/ this software package... does system performance take a hit?? --dexsun
  • Personally I would rather have a poorly performing driver than not have a driver at all. Since with Linux I am usually in the latter category for just about every odd device around I think that something like this would make a great interim solution to a very difficult problem. Unfortunately, you would still have to write the driver in the first place because this is not going to just work for the current drivers on NT. Is this a good idea? Only if you are a company that is trying to be first to market.
  • What I find interesting is that all the big commercial UNIX vendors are listed as "participants" (SCO, Sun, HP, IBM), but there are NO LINUX VENDORS participating.

    What's up with that?

  • The thing is that those modems you see today -- well, they aren't modems at all!

    They don't contain a DSP or any other processor, or any electronics that does _anything_ of interest. So really, the companies that make these modems are more _software_ companies than anything. Their modems are dumb cards with just the necessary circuitry to get the signal into the computer. The drivers do all the work -- a lot of code that isn't easy to write. And it's likely different for each of the hundreds of kinds of modems. Reverse engineering every modem, when each follows its own proprietary standard, is a _big_ job.

    And furthermore, they suck, because they eat up processor time.
  • IANA3h, but I don't think there really is any steady-state performance benefit from compiling the drivers directly into the kernel.

    The Linux kernel is monolithic, meaning that just about everything, including drivers, is compiled into the kernel image. This has changed with the advent of loadable modules, where drivers can be loaded into the kernel memory space on the fly when needed. This allows for a smaller memory footprint for the kernel boot image, without requiring any of the drivers to be removed from the system.

    There is some overhead during module loading, but once the module is loaded, there really should be no performance loss/gain over a regular compiled driver statically linked into the kernel.
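    To see how little magic is involved, a minimal 2.2-era module is just two entry points -- a sketch, not a useful driver (compile with -D__KERNEL__ -DMODULE against your kernel headers):

        /* hello.c -- load with insmod, unload with rmmod */
        #include <linux/module.h>
        #include <linux/kernel.h>

        int init_module(void)            /* called by insmod */
        {
            printk(KERN_INFO "hello: loaded\n");
            return 0;                    /* nonzero aborts the load */
        }

        void cleanup_module(void)        /* called by rmmod */
        {
            printk(KERN_INFO "hello: unloaded\n");
        }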

    Of course, I just re-read your comment and realized that I didn't really answer your question, but maybe I got a couple of 31773 points for the pseudo-explanation :-)

  • I was originally writing a comment to say that there really shouldn't be a major performance issue. Once I thought about it, I found out I was quite wrong. The original architecture I thought of was that of a kernel-level module that exported an API allowing access to ports, PCI, memory, etc., which any kernel module can do. Since Linux implements only a 2-layer protection model (I'm pretty sure it doesn't implement the 4-ring protected mode model supported by ix86), any module loaded into the kernel should have full access to hardware. Then all it has to do is provide an API which acts like hardware access. Simple, right?

    Under this model, which might not be how it is done, you require 1 system call for every hardware access, unless they have a nifty command language which allows burst accesses to be sent during a single call. System call overhead is usually pretty high, especially if you're only trying to read/write a port. I could not see a huge amount of performance coming out of such a system, and it would be almost useless for anything which would require pseudo-realtime response. But it could be used to create a portable driver for a number of things, so more power to it.
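    Assuming that model, the user-space side of such a "driver" would look something like this -- the device node and ioctl numbers are invented, the point is just where the system-call cost lands:

        /* Hypothetical user-space "driver" talking to a generic kernel
         * module; every register access pays a full syscall round trip. */
        #include <stdio.h>
        #include <fcntl.h>
        #include <sys/ioctl.h>

        #define GENIO_READ_PORT 0x4001        /* invented ioctl number */

        struct genio_req {
            unsigned short port;
            unsigned char  value;
        };

        int main(void)
        {
            int fd = open("/dev/genio", O_RDWR);   /* invented device node */
            struct genio_req req;
            int i;

            if (fd < 0)
                return 1;
            for (i = 0; i < 16; i++) {
                req.port = 0x300 + i;     /* one syscall per port read... */
                ioctl(fd, GENIO_READ_PORT, &req);
                printf("port 0x%x = 0x%02x\n", req.port, req.value);
            }
            return 0;
        }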

    Of course, the architecture could be completely different from what I'm assuming, so this comment could just be full of crap. If it is, please enlighten me :-)

  • It wouldn't be too hard to make a cross-OS driver development library. The drivers themselves would not be that much bigger either, but they probably would end up being inefficient. Each OS has certain parts you can accelerate in different ways, and there are little tricks you can do specific to your card for a certain OS to get it to run faster; these would be lost in a generic cross-OS driver development kit.
  • by The Man ( 684 ) on Tuesday April 11, 2000 @08:55AM (#1139303) Homepage
    There's also the problem that doing things like direct port access is inherently nonportable. For example, many types of systems nowadays have PCI buses. Under Linux (and possibly Solaris, but it's hard to tell), PCI devices can be supported on multiple platforms by the same drivers. Using an i386-specific method may allow you to use this product to port among i386 operating systems (at severely reduced performance, of course), but costs you any opportunity to remain compatible across architectures. It's just not worth it. I find it exceedingly cool that, for example, I can stick a 3com PCI network adapter in a Sun box and have it just plain work with the exact same driver that it uses on an x86 box. Wouldn't it be a shame if 3com had decided 6 years ago to produce a pseudo-driver in binary-only format, and thus the current driver never existed at all? You'd be a slave to 3com, and to Intel, because you'd have no portable drivers at all.
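    The reason that works is that the Linux driver reaches the device through portable accessors instead of x86 port instructions, so one source tree serves every architecture with that bus. Roughly (the register offset is invented):

        #include <asm/io.h>

        #define DEV_STATUS 0x00    /* invented register offset */

        /* readl() compiles to whatever the architecture needs -- a plain
         * load on x86, properly ordered MMIO on Alpha or SPARC -- so the
         * same driver source works in a Sun box and a PC alike. */
        static unsigned int dev_read_status(unsigned long ioaddr)
        {
            return readl(ioaddr + DEV_STATUS);
        }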

    Of course, this cross platform thing would be useful for making sure there is support for as many possible platforms ASAP, then optimised platform specific ports could come at a later date...

    Yes, where "later date" == "never". You're ignoring the lessons of history. If it's "good enough," it'll stick around forever and its vastly superior replacement will never see the light of day. It's happened way too many times. How else do you explain people willingly buying a 16-bit operating system for a 32-bit CPU that can't compete with its 64-bit competitors that have been around for five years? Good enough indeed.

  • by Phaid ( 938 ) on Tuesday April 11, 2000 @08:44AM (#1139304) Homepage
    I like the line about "safe and stable... keeps inexperienced developers out of the Linux kernel"

    Yeah right. Device drivers for Linux and other Unixes aren't that difficult to write, and besides who would let an inexperienced programmer mess with a device driver anyway? You still have to understand basic concepts like I/O ranges, interrupt handlers, DMA, etc., so really other than a time savings when porting from Windows to Linux (or other) this doesn't get you anything at all.

    The only way they could implement this the way they claim -- with most of the driver sitting in user space making calls to a loadable module -- is for the module to be bloated and generic. The userspace "driver" then calls the LKM, with ioctls or some other mechanism, to get the real work done. Sounds like a great deal of indirection which would probably hurt performance.

    Getting cross-platform compatibility for device drivers isn't difficult as long as you exclude Windows from the mix. I've written numerous device drivers for various hardware, and had to port them from old SVR3 to UnixWare and Linux. In each case the kernel calls are different, the methods have different parameters, etc., but basically in most cases it's similar enough that you could almost write a Perl script to convert from one to the other. Obviously this isn't true of Windows, so I guess if you're really desperate to get a driver for Windows and you only know Unix, this might fit the bill. And of course your "driver" won't work at all if you don't install their module, and if they choose to not support or drop support for a particular OS, you're up the creek.

    My advice... If you don't know how to port your driver to a different OS, hire a contractor who does and keep the source code when they're done.
  • by A nonymous Coward ( 7548 ) on Tuesday April 11, 2000 @09:56AM (#1139305)
    No, a closed source driver is not better than nothing. Closed source == binary == specific kernel version. People will load it for other versions anyway. Closed source == little review == it-compiles-so-ship-it attitude == lousy quality. Both cases lead to people complaining about Linux when it's a specific driver at fault which they almost certainly don't know about.

    Furthermore, a closed source driver will lead to pressure to not update the kernel because it would break closed source drivers == no improvements == obsolete code, code to handle historic cases, etc etc etc.

    Hardware vendors who can't see the rationale for open source now aren't going to suddenly see the light just because it's UDI instead of native. They will still be narrow-minded and myopic. They will still imagine their competitors are so inept that they can't or won't reverse engineer the damned thing, even tho they do it themselves all the time.

    As for redundant drivers for different OSs, the problem is NOT getting vendors to write drivers, it's getting vendors to release specs so WE can write drivers. Think of it! They could release ONE spec and get drivers for free. What a concept!

    Backward and forward compatibility hinders development. You get a bloated, slow kernel because it has to support all sorts of obsolete crap and try (and fail) to support unknown future capabilities. The worst aspect of "future" compatibility is that future drivers are constrained by previous thinking, meaning you lose all the advances made since the forward compatibility was designed.

    The ONLY advantage would be for prototyping drivers. Maybe someone could write a user mode driver with a generic kernel interface. Gawddd! Swapping in a user task to handle interrupts! What a mess.

    --
  • by Achates ( 7572 ) on Tuesday April 11, 2000 @08:28AM (#1139306) Homepage
    or too crappy to work in real life?
    Use the powerful graphical Wizard (available in Windows only), to create your driver source code. The Wizard will automatically generate make files for both Linux and Windows. Move the generated code onto your Linux platform and recompile. Your driver will now run in both Linux and Windows.
    i dunno.. just seems unlikely to me.. but then again.. im not a coder.. soooO
    ----
  • by Deven ( 13090 ) <deven@ties.org> on Tuesday April 11, 2000 @09:10AM (#1139307) Homepage
    I can see several good reasons for Linux to support UDI drivers:
    • Drivers may become available that wouldn't otherwise be available. (Even a closed-source driver is better than nothing.)
    • Hardware manufacturers would have some incentive to release driver source, since only source-level compatibility is guaranteed with UDI.
    • Linux, FreeBSD and other free operating systems wouldn't have to waste time rewriting drivers already implemented, if they all support UDI -- duplication of effort isn't helpful.
    • UDI drivers are cleanly separated from policy decisions about operating system implementation; changes to the core OS architecture won't require rewriting UDI drivers.
    • Since UDI drivers would be backward-compatible and forward-compatible across all UDI-supporting versions of the Linux kernel, there would be no need to rebuild drivers when rebuilding the kernel, and it would be simple (and easier to maintain) to distribute each UDI driver as a separate package rather than including millions of lines of code in the Linux kernel for drivers, when any given user needs only a few of them.
    • The Linux kernel could be designed to allow memory protection and fault-tolerance of UDI drivers, enhancing system stability at a (further) performance cost. (I'd rather have an experimental driver run slowly than crash my system!)
    • If the performance is that bad, the existing Linux device driver API could be used for the drivers where performance matters. (Besides, some talented hacker may find a solution to the performance problem.)
    Would supporting UDI in the mainline Linux kernel really be such a bad thing?
  • The idea of writing a universal driver interface is doomed from the get-go: the kernels of operating systems differ so severely in their implementation and usage of drivers that major concessions in speed and robustness would have to be made to implement universal drivers.

    The best solution I could see is standardizing the hardware, so only one driver would have to be written for each kind of device.

    So, for example, you could have the Universal Sound Card Interface, with 3D Sound and MP3 Decoding Extensions (to satiate Creative Labs, for example). The "Sound Card" would have a standard driver, and the extensions would be dynamic pieces of that main driver. Video Cards, Network Cards, SCSI Cards, and any other hardware that's lived out its innovative life-cycle and is basically using the same design from model-to-model could have Universal Drivers written for them as well.

    The only case where this would be a problem would be in hardware that's constantly changing its hardware interface and is always implementing new features. The Universal Driver would be a nuisance and a slowdown for any hardware that wants to add a new feature, since some council would have to OK adding this new feature to the next version of the Universal Standard Interface. (I'm mainly thinking of 3D accelerators here, since most hardware doesn't really change that much.)

    It could be possible to allow anyone to write any extension they want for any interface, allowing them to add innovative new features while still providing a standardized base.
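    As a sketch of how that might look (everything here is invented for illustration):

        struct usci_ops {   /* hypothetical Universal Sound Card Interface */
            int (*play)(const short *samples, unsigned long count);
            int (*record)(short *samples, unsigned long count);
            /* Extensions are discovered by name at run time, so a vendor
             * can ship a new feature without waiting on the council. */
            void *(*query_extension)(const char *name);
        };

        struct usci_3dsound_ext {   /* one possible vendor extension */
            int (*set_position)(float x, float y, float z);
        };

        void position_voice(struct usci_ops *card)
        {
            struct usci_3dsound_ext *ext =
                card->query_extension("VENDOR_3dsound_v1");
            if (ext)
                ext->set_position(0.0f, 1.0f, 0.0f);
            /* else: fall back to the standard base interface */
        }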

    And, of course, if that isn't good enough for them, they could just write their own drivers for their own proprietary interface for every platform they intend to support, like they do now!

    It seems like it could work, if hardware vendors cared enough about the "fringe" operating systems to get along with each other. :)

    ---
    Epitaph
  • by orpheus ( 14534 ) on Tuesday April 11, 2000 @09:54AM (#1139309)
    Don't get me wrong, I'd be as happy as a jumpin' toad [1] to know I could get even rudimentary but properly functioning Linux drivers for any piece of hardware my heart desires.

    However (and herein lies the rub), would that simply decrease the pressure to produce high-performance, full-function drivers? I just don't know the culture of the Linux driver development sector well enough to judge. Worse, would it spread out and dilute the existing driver development talent, which currently focuses in depth on fewer "supported" drivers and projects?

    Will we be dooming ourselves to a sea o' crappola in the interest of "just get it done"? [How many Win installs are done every second because the end user wants an out-of-box experience, even if they know it's a crappy one? Will we fall into that same trap?] Will this make Linux look bad?

    Factors to consider:

    a) Many users are not satisfied with the rate at which quality drivers are being written at present

    b) Most users won't pitch in and write drivers. They just whine and complain. They should each buy a developer a keg for doing yeoman's work (Of course, then we'd never get any drivers written)

    c) While many programmers specialize in drivers and find it very rewarding, others will seek projects that are more "pressing" in the sense of providing "new" rather than "improved" functionality

    d) Crappy drivers can be a good starting place for respectable drivers.

    e) Linux users will tend to buy more disparate hardware if they know they can at least get it to run (vaguely). Now they quickly learn to limit themselves to the supported devices (and hence are more likely to limit themselves to the *well* supported devices)

    [1] Grenouille, IM and Crapaud, LS "Remarkable Surges in beta endorphin levels in Rana pipiens during programmed exercise" J. Bestial. Phys. 17:245-7 (1998)

    __________

  • While I see a lot of folks who are essentially saying that it can't be done, for once I'm going to actually do the download and try it out, and maybe report back to /. in a few weeks(?).

    So far I can test for NT, Win9X, OS/2, and a few of the Linuxes (2.0X kernels, though).

    I think my main test will be for a TWAIN scanner I bought back in '94, because I have the Win source code for both TWAIN21 and the scanner, but I am wondering if this is a good test or not, whether I should get a set of 2.2X kernels first, or what.

    Just an opinion here, but if a person started with a good set of base class libraries for each platform, then building the common interfaces into a cross-platform driver and testing it shouldn't be too terribly difficult. Am I missing anything in my thoughts about all of this?

  • by platypus ( 18156 ) on Tuesday April 11, 2000 @08:56AM (#1139311) Homepage
    I'm no kernel hacker, but I guess this has a snowball's chance in hell of getting adopted by the kernel folks.
    First, the idea seems old, look at the UDI project [projectudi.org].
    Looks even cleaner IMO.
    And now watch the opinion [linuxcare.com] of several high-grade kernel hackers about it.
    Two citations:
    Dave Miller: "No thanks, IMHO OS neutral driver interfaces are a nice idea but they can only lead to mediocrity. (Yes I have read and understand how your stuff works, the problem will still be there)."
    Alan Cox "Not sure why anyone thinks this is Linux relevant 8) - other than it will help to make our drivers even faster than the competition if they adopt it. Have a read, but keep a bucket handy". And after smashing the idea of OS independant drivers a bit more, he really gets funny: "So what are you going to do with it. Joysticks?"

    At a quick glance, this thing has even more factors going against its adoption:
    The worst:
    - It seems to require applications (not only drivers) to link against special headers in order to use their infrastructure.
    - Its name (WinDriver)
    - This sentence on their webpage:
    "Use the powerful graphical Wizard (available in Windows only), to create your driver source code. The Wizard will automatically generate make files for both Linux and Windows."
  • by nd ( 20186 ) <nacase AT gmail DOT com> on Tuesday April 11, 2000 @08:34AM (#1139312) Homepage
    It doesn't appear that this is intended for writing the ultimate/best device driver. It's probably intended for people who just want to have a simple driver that works for their device.

    Almost like a temporary solution until someone writes a "real" driver.
  • The Uniform Driver Interface project has been working on this problem for a while. Their homepage is here: http://www.projectudi.org/ [projectudi.org]
  • by npsimons ( 32752 ) on Tuesday April 11, 2000 @08:50AM (#1139314) Homepage Journal
    It looks like it's not really a kit that lets you write drivers specifically for each different kernel; rather, you write user-level code for a kernel module that adds the appropriate interface. This leads to a couple of problems, most of which are addressed by Linus' chapter in "Open Sources".

    Here's a brief overview of the problems I see with this:

    • Loss of speed because it's not in the kernel
    • Adding another interface to the kernel
    • Possible security holes might be added this way
    Of course, it would be nice to be able to just grab the DLL for the latest hardware and use it on both the Windows and Linux side. One has to wonder, though, how well this works on other platforms (i.e. Alpha, SPARC, and PPC).
  • by be-fan ( 61476 ) on Tuesday April 11, 2000 @12:06PM (#1139315)
    Well, extensions might not really be the best way to do this. Consider OpenGL. It really wasn't designed with a way to expand the API other than the extension mechanism. This works fine for a while, but breaks down eventually.

    Believe it or not, MS actually has it right with the DirectX model of doing things. It has proved to be flexible, versatile, and stable. (Remember, most of the instability of DirectX comes from dealing with Windows. The other part comes from the fact that it trades utter stability for speed and passable stability.) MS puts a new feature, say texture compression, into the API, emulating it at first and providing a "carrot" for vendors to come in and implement it in hardware. If a vendor wants, they can implement their own scheme (like with S3TC and the 3dfx one), but eventually everybody standardizes on one thing. This is easy for developers, because most features in the API are emulated if not available in hardware, and eventually a card will come out that implements the feature. Also, there are not a lot of competing implementations of features. This is good for the API maker because most games will use the API to a much fuller extent, and it is good for the card maker because they can implement API features in hardware and thus have a trump card over competitors.

    This is in contrast to the OpenGL extension mechanism. Updates to OpenGL come out relatively infrequently, and any new features come out as extensions first. If you take a look at the extensions supported by your card, you can see there are quite a few implementations of extensions, all incompatible. The ARB dictate kind of nullifies this, but they don't dictate very often. Also, there is no "carrot" for the hardware vendors. This difference comes mostly from the original market of each API. (OpenGL came from workstations; it supported the fast preview stuff, knowing most high-quality effects would be done in a software renderer for final rendering. Direct3D, on the other hand, wants the preview to look as high quality as possible, so it implemented a method to easily add hardware acceleration for special effects.) I'm not saying OpenGL has lower visual quality; I'm saying it is easier to extend Direct3D in a device-independent manner.

    That's why I was so happy when I heard about Fahrenheit. It was to meld the great OpenGL API with some of the cool features of Direct3D, such as easy extensibility, an object-based interface, and good integration with a 2D API (namely DirectDraw). Too bad it died.
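    For reference, this is what living with the classic OpenGL extension mechanism looks like from the application side -- a standard query against the extension string (a GL context must already be current; the S3TC extension name is real):

        #include <string.h>
        #include <GL/gl.h>

        /* Return 1 if the current GL implementation advertises the
         * named extension as a whole token in its extension string. */
        int has_extension(const char *want)
        {
            const char *ext = (const char *) glGetString(GL_EXTENSIONS);
            size_t len = strlen(want);

            while (ext && *ext) {
                if (strncmp(ext, want, len) == 0 &&
                    (ext[len] == ' ' || ext[len] == '\0'))
                    return 1;           /* exact token match */
                ext = strchr(ext, ' ');
                if (ext)
                    ext++;              /* skip past the separator */
            }
            return 0;
        }

        /* Usage: if (has_extension("GL_EXT_texture_compression_s3tc")) ... */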
  • by (void*) ( 113680 ) on Tuesday April 11, 2000 @08:50AM (#1139316)
    One at a time.
  • by Harlequin ( 11000 ) on Tuesday April 11, 2000 @08:32AM (#1139317)
    Let me start off by saying I don't have any experience writing drivers. But my guess would be that this process isn't good enough for writing the high-performance drivers that a lot of manufacturers require. For video cards, the drivers dramatically affect performance; frame rates often see double-digit improvements from early beta drivers to more mature versions.

    Admittedly, it might be good to flesh out quick support for other operating systems, but the performance most likely wouldn't be there. Anyway, that'd be my guess. Besides, do we really want hardware manufacturers putting out cheap/quick drivers for Linux/Solaris/etc. and calling it support (just so they can put more text on their boxes)?
  • by Detritus ( 11846 ) on Tuesday April 11, 2000 @08:31AM (#1139318) Homepage
    This looks like a Linux version of some software that is available for NT. It isn't a kit to write universal device drivers, it's a Linux device driver that is an agent for user-mode applications, allowing them access to kernel-mode memory addresses and I/O ports. This lets you access I/O boards from an application without having to write a driver.
  • by Deven ( 13090 ) <deven@ties.org> on Tuesday April 11, 2000 @11:58AM (#1139319) Homepage
    See my previous post [slashdot.org] for a long response to another poster. I'll only address new points below...

    This, I believe, may be only a chimera; there are enough differences between the mechanisms that are used to protect driver and kernel data structures from interrupt service routines and peer-processor access in the various OSs that it may not even be possible to create a set of UDI drivers, much less port them across major kernel architecture change.

    Most such policy decisions (uniprocessor vs. multiprocessor, singletasking vs. multitasking, privileged vs. userspace, synchronous vs. asynchronous I/O, VM or not, protected memory or not, etc.) are entirely contained within the UDI environment implementation on the OS side, and unknown to the UDI driver itself. In fact, the UDI driver code can be simpler, because it doesn't deal with interrupt masks, task switching, synchronization, etc. (Since the environment has to deal with these issues, the environment implementation is probably harder, but it only has to be done once for all drivers on that OS.)
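    Schematically (and emphatically NOT actual UDI syntax -- these names are invented, only the shape matters):

        typedef struct env_io_request env_io_request_t;

        extern void start_hw_transfer(env_io_request_t *req);  /* device-specific */
        extern void env_io_done(env_io_request_t *req, int status);

        /* The environment calls this to start an I/O; the driver just
         * programs the hardware. UP vs. SMP, preemption, interrupt
         * masking -- all of that is the environment's problem, solved
         * once per OS rather than once per driver. */
        void mydrv_io_start(env_io_request_t *req)
        {
            start_hw_transfer(req);
        }

        /* Called by the environment when our device interrupts, with
         * whatever locking the host OS needs already taken care of. */
        void mydrv_intr(env_io_request_t *req)
        {
            env_io_done(req, 0);    /* report success back to the env */
        }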

    Since some devices MUST BE present in the kernel to access a boot device (and console), there is no way to prevent rebuilding drivers into a kernel. The current system of modular drivers already permits non-boot drivers to be excluded from the kernel image, and these modules can be distributed as separate packages. BTW, I just ran a line count on 2.2.5; there is a total of 1005095 lines of source code.

    While it makes the most sense for UDI drivers to normally be loaded dynamically, there's no reason some bootstrap drivers couldn't be statically linked with the kernel. After booting, those drivers could continue to operate, or possibly be replaced with dynamically-loaded (possibly newer) versions.

    In my Linux 2.2.12 tree (generic Red Hat 6.1, I think), "wc -l `find . -name '*.[ch]' -print`" gives me 1642672 lines. Most of this is device drivers: 1117142 lines, or 68% of the total. Surely the kernel would be more manageable if this mass of code wasn't so tightly coupled with kernel internals?

    The performance will be that bad, so the existing sort-of APIs will be used and all that will have happened is a lot of unused work creating the code to implement and support the APIs.

    I agree that the jury is still out, when it comes to UDI performance. They've made efforts to keep it high-performance, but it's unproven. Even if the performance is poor, there are good reasons to support UDI.

    If the Linux, BSD, etc. communities had been the driving agency for UDI, there's a good chance that the result would be a benefit to both users and device manufacturers. Since the current initiative is by M$ and the vendors that want to dump their M$-Windows drivers on Linux, BSD, etc., then YES, it would be a REALLY, REALLY bad thing to implement UDI in Linux.

    Why aren't the Linux/BSD communities driving UDI? UDI can be good for everyone but Microsoft. Project UDI is not a Microsoft initiative; they're not even a participant! (They also have the most to lose, as the most entrenched OS vendor.) The history goes back years (1993, maybe?) to a multivendor effort to create common drivers for UNIX systems. Major participants include Sun, HP, SCO, Intel, Compaq, IBM and others. SCO and IBM are putting UDI support in AIX, Unixware and Monterey. (For Monterey, I'm told it will be the only API for device drivers.)

    Project UDI has an (outdated) HTML presentation [project-udi.org] that has some overview information to help give you a broad sense of the architecture -- it's well worth reading.
  • by Blue Lang ( 13117 ) on Tuesday April 11, 2000 @08:40AM (#1139320) Homepage
    Well, I was gonna flame this as being "Yet Another Rapid Application Development Platform That Isn't," but, after looking over their FAQ, this looks as tho it might be genuine.

    It's basically a framework that lets you write simple IO, interrupt handlers, etc, quickly, to test your card, and then write 'real' code to do the hard work.

    So, it's cutting and pasting the reference driver, with a wizard. :P Yer gonna cut and paste it anyways, so, why not? :P

    I will say, tho, that the cross-platform aspect only makes a minimum of sense to me. If the critical aspects have to be hand written anyways (and they do) then all the CP does is let you quickly write "does it respond correctly to bit X" kinda frameworks.

    Might be cool for high-speed serial devices.. I wonder if USB support could be worked out with this? Hrrrrmm..

    I'll never know, because it runs under windows. :P

    --
    blue
  • by The Man ( 684 ) on Tuesday April 11, 2000 @08:41AM (#1139321) Homepage
    on linux-kernel. UDI, for those who are unfamiliar, is an initiative by some hardware and proprietary software people to do write-once drivers for Unix. Like any such effort, it relies on an abstraction layer that interfaces the "real" OS-level driver layer with the driver components themselves.

    The problem with this product, as with UDI, is that performance suffers. The Linux people refuse to take part in UDI for a number of good reasons, which can most simply be expressed as "the performance sucks rocks." See also a similar discussion based on a misunderstanding of ImageWorks' new WAN code. Essentially, the concept of providing a common binary interface to multiple different kernels - be they different systems altogether or simply different, incompatible versions of the same system - is an old one, something of a Holy Grail to some people it seems.

    The bottom line is, hardware vendors who refuse to open up the specs to their hardware are always looking for a way to provide as much "checkbox" operating system support as possible without actually doing any work or participating in the development community. There's an important technical downside as well, besides the poor performance these abstractions cause: if a vendor writes a poor winblows driver, then ports it to $favourite_os, what do you think this does for the stability of $favourite_os when the driver is loaded? That's right, it goes to hell. Microsoft has said for a long time that the stability problems their platform is known for are caused by third-party drivers. While I don't believe that's the whole story, they have a legitimate gripe here. If someone takes that same driver and loads it into Solaris, Linux, or vxWorks, they're going to suffer many of the same problems they would on winblows.

    So no, this isn't as good as it sounds. Linux especially is rejecting such ideas from its mainline tree, but it's important that people also be aware of what their distribution vendors are shipping - it might be too tempting for one of them to say, look we can support winmodems but we'll have to add this proprietary cruftware patch to the kernel; it sucks but we'll be the first so let's do it. I'm wondering more and more whether Linus is regretting his binary-only module license exception.

  • The "cool idea" of the product is that it allows you to:

    Note that the price of a WinDriver license [jungo.com] runs somewhere between $1000 and $2000 (not 100% clear what the $1000 package is). Which means that if you want to use this to deploy a device driver, you get to pay out "a couple thousand bucks."

    What is entirely unclear is what is the status of the resultant drivers. Is the code that is generated:

    • Yours to do with as you like, including applying the GPL so that it could go into "official" kernels?
    • Partially yours, and partially KRFTech's Driver Support Code, which you can't release?

      In such a case, the only way to use the results would be as a kernel module, due to the resultant license conflict...

    • Partially KRFTech's, with a per-copy licensing fee?

    I'm not "accusing" as the web site provides no indication one way or another. I'd find it surprising if the driver became "totally free," and that lack would put a big wrench in the "general interest" in the product.

    I'll bet they sell some copies for organizations that plan to deploy Linux on embedded systems that are used internally; I suspect that the product is not of all that much "general interest."

  • by Ex-NT-User ( 1951 ) on Tuesday April 11, 2000 @09:41AM (#1139323) Homepage
    I've used WinDriver at my workplace. It's a neat idea but has a LOT of problems.

    1st the good news:

    1. YES, it does allow you to write truly cross-platform drivers. I've done this.
    2. It's easy to use (a lot easier than learning kernel stuff).
    3. It's very stable.

    The BAD news:

    1. It's SLOOOWWWW! This is because a driver written using this toolkit runs in USER space/mode, which means there is a kernel/user mode switch penalty for every I/O operation. (See the sketch after this list.)

    2. It does NOT allow real-time IRQ handling. (Well, it does, but it makes you jump through hoops, and you have to learn about KERNEL stuff for the OS you're writing this for.)

    3. It's no good to you if you're looking to write a block device driver. It's only useful if you're writing drivers for weird custom hardware.

    4. You need to redistribute their kernel/user-mode driver module with your product, i.e. the kit does NOT generate a standalone driver.

    5. Did I say it's slow?
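
    To make point 1 concrete: a user-space driver pays a full kernel round trip for every register access. Here's a minimal sketch of that pattern in plain C against ordinary Linux syscalls (this is not WinDriver's actual API; the /dev/mydev node and the register offsets are invented for illustration):

        /* Hypothetical user-space register access: every read and write is
         * a syscall, so the user/kernel transition cost is paid per
         * operation. "/dev/mydev" and the offsets are invented. */
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/dev/mydev", O_RDWR);
            if (fd < 0) {
                perror("open");
                return 1;
            }

            uint32_t status;
            /* One pread() = one syscall = one user->kernel->user trip. */
            if (pread(fd, &status, sizeof status, 0x10) != sizeof status) {
                perror("pread");
                close(fd);
                return 1;
            }
            printf("status register: 0x%08x\n", (unsigned)status);

            uint32_t cmd = 1;
            /* ...and another full round trip just to write one register. */
            if (pwrite(fd, &cmd, sizeof cmd, 0x14) != sizeof cmd)
                perror("pwrite");

            close(fd);
            return 0;
        }

    A kernel-mode driver would do the same two accesses with two MMIO instructions and no mode switch at all, which is the whole performance argument in a nutshell.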

    If anyone has specific questions, let me know... I've used this product extensively.

    -Ex-Nt-User
  • by Andreas Bombe ( 7266 ) on Tuesday April 11, 2000 @09:20AM (#1139324)

    People, we have had such stuff developed before, and it was rightfully not used by the Linux kernel. The only thing that got some discussion was a sort of standardized kernel API for drivers (called UDI, I think). The problem with such generalizations is that they hurt performance and that they don't fit everywhere. The latter adds to the former, because some standard semantics have to be emulated in an inefficient manner (where simpler, aggressively optimized operations would have been available natively).

    Those were just the problems of good frameworks. Now this KRFTech thing, on the other hand... First, it's not a real kernel driver development kit; it's an interface for user-space drivers. That has worse speed problems and big problems with interrupt handling: an IRQ stays asserted until the hardware is told to clear it, and only the driver knows how to do that (if the IRQ isn't cleared, the system will reenter the IRQ handler forever). When the driver is in user space, there is a definite problem, since it cannot handle the IRQ directly... So they probably do some heavy kludging to get around that.
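
    For the curious, the usual kludge is a tiny in-kernel stub that masks the interrupt line (so it stops reasserting) and then wakes the user-space driver to do the real work. A rough sketch, using the present-day Linux handler signature for illustration (older kernels differed); the names are invented, and the stub would be registered with request_irq() when the device is opened:

        /* Minimal in-kernel IRQ stub for a user-space driver. User space
         * cannot acknowledge the device directly, so the stub masks the
         * line and wakes the user-space handler. Names are illustrative. */
        #include <linux/interrupt.h>
        #include <linux/wait.h>

        static DECLARE_WAIT_QUEUE_HEAD(mydev_irq_wait);
        static int mydev_irq_pending;

        /* Registered with request_irq(); runs in interrupt context. */
        static irqreturn_t mydev_irq_stub(int irq, void *dev_id)
        {
            /* We can't clear the condition in the device here (only the
             * user-space driver knows how), so mask the line to stop the
             * reassert loop... */
            disable_irq_nosync(irq);
            mydev_irq_pending = 1;
            /* ...and wake whoever is blocked in the driver's read()/ioctl(). */
            wake_up_interruptible(&mydev_irq_wait);
            return IRQ_HANDLED;
        }

    User space then services the device and re-enables the line (enable_irq()) through another ioctl, and those extra round trips are exactly where the interrupt latency goes.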

    Then, their so-called "advantages":

    • Simple - No operating system or kernel knowledge needed.
    • Stable / Safe - Keeps inexperienced developers AWAY from the Linux kernel.
    Advantages? Would you really want to use a driver on your Linux box written by someone without a clue about the Linux kernel? I know what they mean by keeping inexperienced developers away from the kernel (they don't see the API), but since they access the hardware, they have the same chance of causing big trouble as a real driver programmer. It's actually the opposite, since it positively invites clueless programmers to write drivers. The Visual Basic of driver programming.

    There are even more problems. Drivers using this system require a non-free kernel module to work, the source of which won't be publicly available, judging from the web page. Sure, the developers get a copy of the source so that they can modify it for different Linux versions (how generous; we even get to choose which Linux kernel to run it on). The problem is that the end user of the driver needs this module to run on her kernel, not just the developer, and it has to be compiled with the same compiler and options as the rest of the kernel.

    Other things: does it support SMP? Architectures other than x86? The latest development kernel?

    This stuff is not worth the bytes it is written on. Don't bother reporting on it in the future.

  • by Deven ( 13090 ) <deven@ties.org> on Tuesday April 11, 2000 @08:51AM (#1139325) Homepage
    Project UDI [project-udi.org] (Uniform Driver Interface) is approaching this write-once, run-anywhere driver implementation idea in a fairly comprehensive manner.

    While it's not ready yet, the architecture is impressively clean and powerful. The same UDI driver code could potentially run (with only a recompile, no code changes) on a Windows system (e.g. Win95 or NT), a Unix system (e.g. Solaris or Linux), a small multitasking system without VM (e.g. Amiga), a small singletasking system (e.g. MS-DOS), or an intelligent I/O processor (e.g. I2O)... Each of these systems would need appropriate implementations of the UDI environment, but could run the same drivers. UDI drivers are written with very few assumptions about multitasking, memory protection, etc. You could even protect the OS from buggy drivers! (At a performance cost.)
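
    The flavor of this is hard to convey without code: UDI drivers are written against a fully asynchronous, callback-driven environment and never block or call the OS directly. The fragment below imitates that style in plain C so it can actually run; be warned that env_mem_alloc() and friends are invented names for illustration, NOT the real UDI API:

        /* Callback-style driver fragment, loosely imitating UDI's
         * asynchronous model: the driver never blocks and never calls the
         * OS directly; it asks its environment for resources and supplies
         * a continuation. All names are invented; this is NOT real UDI. */
        #include <stddef.h>
        #include <stdlib.h>

        typedef struct mydrv_ctx mydrv_ctx;
        typedef void (*env_alloc_callback)(mydrv_ctx *ctx, void *mem);

        struct mydrv_ctx {
            void *ring_buffer;   /* filled in by the continuation below */
        };

        /* Toy stand-in environment so the sketch runs; a real host (kernel,
         * user space, I/O processor) could defer the callback as it liked. */
        static void env_mem_alloc(size_t size, mydrv_ctx *ctx,
                                  env_alloc_callback cb)
        {
            cb(ctx, malloc(size));
        }

        /* Continuation: runs whenever the environment gets around to it. */
        static void mydrv_ring_allocated(mydrv_ctx *ctx, void *mem)
        {
            ctx->ring_buffer = mem;
        }

        static void mydrv_init(mydrv_ctx *ctx)
        {
            /* No kmalloc(), no blocking in the driver itself: just a
             * request plus a callback. That's what makes it portable. */
            env_mem_alloc(4096, ctx, mydrv_ring_allocated);
        }

        int main(void)
        {
            mydrv_ctx ctx = { 0 };
            mydrv_init(&ctx);
            return ctx.ring_buffer ? 0 : 1;
        }

    Because every service request goes through the environment and completes via a callback, the same driver source neither knows nor cares whether its host has threads, virtual memory, or even a scheduler.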

    As a case in point, SCO's next-generation Monterey [sco.com] operating system is slated to use UDI as its sole driver API...
  • by Deven ( 13090 ) <deven@ties.org> on Tuesday April 11, 2000 @10:53AM (#1139326) Homepage
    You seem to have a number of misconceptions...
    > No, a closed source driver is not better than nothing.
    Tell that to the large number of people waiting for drivers that aren't forthcoming, who don't have the skills to write the driver themselves.
    > Closed source == binary == specific kernel version. People will load it for other versions anyway.
    A driver version should not be tied to a kernel version in the first place. With a well-defined API (i.e. UDI), this sort of backward and forward compatibility will work and should be encouraged. Needing to rebuild every driver because you updated the kernel is a waste of time and effort, and it's worse still when a rebuild isn't enough and the driver source has to be changed to match kernel changes.
    > Closed source == little review == it-compiles-so-ship-it attitude == lousy quality. Both cases lead to people complaining about Linux when it's a specific driver at fault which they almost certainly don't know about.
    The Linux UDI environment could be implemented in such a way that the kernel and other drivers and programs are protected from buggy drivers. This is taken for granted in userspace with user programs, but everyone assumes that drivers have to be privileged and that a buggy driver can always crash the kernel. UDI drivers don't know or care if they're running in a protected environment. It might be slower, but it would make Linux more stable, not less! (It would also allow blame for buggy drivers to be placed appropriately.) Also, you would still have the ability to implement an Open Source driver, which would hopefully be of higher quality than the closed source driver, giving the best of both worlds.
    > Furthermore, a closed source driver will lead to pressure to not update the kernel because it would break closed source drivers == no improvements == obsolete code, code to handle historic cases, etc etc etc.
    Quite the opposite, actually. UDI drivers would (finally) separate out policy decisions and leave them in the kernel, where they belong. More improvements could be made to the kernel's driver code, because the API remains unchanged and drivers need not be recoded for architectural changes. Also, there would be no reason not to update the kernel for non-driver changes, since the UDI drivers written for the old kernel would be forward-compatible with the new kernel supporting the same UDI specification. In fact, rewriting drivers using the UDI model would allow existing historic cruft to be discarded easily.
    > Hardware vendors who can't see the rationale for open source now aren't going to suddenly see the light just because it's UDI instead of native. They will still be narrow minded and myopic. They will still imagine their competitors are so inept that they can't or won't reverse engineer the damned thing, even though they do it themselves all the time.
    So? We already have that problem, with companies that refuse to release anything but a Windows-specific binary-only closed-source driver, and no released specs. Handle it the same as now -- buy the products where the vendor "gets it" and shun the ones that don't. Consumers still have some power to affect these things, you know.
    > As for redundant drivers for different OSs, the problem is NOT getting vendors to write drivers, it's getting vendors to release specs so WE can write drivers. Think of it! They could release ONE spec and get drivers for free. What a concept!
    You're in such a hurry to do their work for them? The Open Source community has so many spare development cycles that we should waste them on every variation in hardware instead of developing innovative new software? I've got a better idea -- let the hardware vendors shoulder the burden of basic support for their devices. If they want a high-quality driver, they should be smart enough to release the basic driver as Open Source (and release specs) so that they get bugfixes and enhancements for free. If they're myopic enough to keep the source closed, the market will tend to converge on the smarter companies who release the source because their hardware will work better. I'd rather see development cycles spent on Mozilla than subsidizing hardware vendors...
    > Backward and forward compatibility hinders development. You get a bloated slow kernel because it has to support all sorts of obsolete crap and try (and fail) to support unknown future capabilities. Worst aspect of "future" compatibility is that future drivers are constrained by previous thinking, meaning losing all advances since the forward compatibility was designed.
    It only hinders development if poor APIs were chosen to begin with. Look at the compatibility problems caused by IBM's shortsighted BIOS interface, just in the area of hard drives. (504 Meg limit, 2 Gig limit, 8.4 Gig limit, 32 Gig limit, etc.) It is very hard to design a good interface for device drivers that doesn't hinder future code with a bad API. Project UDI has spent years designing this API, and it shows. Why don't you try reading some of the introductory material about UDI? They really thought about this, and came up with a very powerful and flexible framework. (The jury's still out on performance.) Down the road, if UDI 1.0 doesn't meet the needs, a revision of the API could be done, which probably means dual compatibility for a long time. Such a thing wouldn't be done casually.
    > The ONLY advantage would be for prototyping drivers. Maybe someone could write a user mode driver with a generic kernel interface. Gawddd! Swapping in a user task to handle interrupts! What a mess.
    UDI would be excellent for prototyping drivers (even if performance sucks), and good for keeping unstable drivers under control (if the environment is designed for it). Yes, userspace drivers would perform poorly compared to kernelspace drivers, but why not allow both modes? Untrusted drivers could be loaded into userspace and run slowly but safely. After they've proven themselves, the user/sysadmin could choose to allow the driver to run in kernelspace for performance. Best of both worlds. (This switching could potentially even be automated...)
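
    One way to picture the two-mode idea: write the driver against a small table of environment operations, and let the host decide at load time whether those operations are direct MMIO accesses (kernel, fast) or syscalls into a sandbox (user space, safe). A hand-waving sketch in C, with all names invented and a toy in-memory "device" standing in for real hardware:

        /* Same driver object, two possible hosts: the driver only ever
         * calls through env->read32()/env->write32(), so whether those are
         * direct MMIO accesses (in-kernel) or syscalls into a sandbox
         * (user space) is the host's choice at load time. Names invented. */
        #include <stdint.h>

        struct drv_env {
            uint32_t (*read32)(void *io, uint32_t reg);
            void     (*write32)(void *io, uint32_t reg, uint32_t val);
            void     *io;   /* opaque handle supplied by the host */
        };

        /* The driver proper: compiled once, hosted anywhere. */
        static void mydrv_reset(struct drv_env *env)
        {
            env->write32(env->io, 0x00, 0x1);          /* hypothetical CMD reg */
            while (env->read32(env->io, 0x04) & 0x1)   /* hypothetical BUSY bit */
                ;                                      /* spin until it clears */
        }

        /* Toy "device": two registers in plain memory, standing in for
         * either real MMIO or a syscall shim, so the sketch actually runs. */
        static uint32_t fake_regs[2];

        static uint32_t fake_read32(void *io, uint32_t reg)
        {
            return ((uint32_t *)io)[reg / 4];
        }

        static void fake_write32(void *io, uint32_t reg, uint32_t val)
        {
            ((uint32_t *)io)[reg / 4] = val;
        }

        int main(void)
        {
            struct drv_env env = { fake_read32, fake_write32, fake_regs };
            mydrv_reset(&env);   /* BUSY starts clear, so this returns */
            return 0;
        }

    The point is that mydrv_reset() is identical either way; only the function pointers behind drv_env change when the sysadmin promotes the driver from the sandbox into the kernel.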

    One other thing: UDI represents the best hope for "fringe" operating systems (e.g. HURD) to get comprehensive driver support. A new OS only needs to implement a UDI environment, and all existing UDI drivers would work "for free". Wouldn't more competition between free OS's benefit everyone? Let's relieve alternative OS authors of the burden of constantly trying to achieve device driver parity with established OS's.

    Linux has been fighting that battle for years. Now that Linux is becoming one of those established OS's, Linux users are getting cavalier about support for non-Linux OS's. Should the "next Linux" have to duplicate all that effort? Entrenched Windows, with its superior application and driver support, has always been a major barrier to entry for Linux. Sure, Linux is a better OS, and may "win" over Windows in the end. So when another OS comes out that's better than Linux, would it really be a good thing for that OS to face the same barrier Linux has had to overcome?
  • by The Code Hog ( 79645 ) on Tuesday April 11, 2000 @08:50AM (#1139327)
    Anyone who reads the Linux kernel mailing list (or even scans the digests) is familiar with the nasty recurring battle over binary compatibility of Linux drivers from stable release to stable release. The Powers That Be in the Linux kernel world are committed to not worrying about binary compatibility across releases, for a variety of reasons. Some of those reasons are good, some smack of stubbornness, but that's OK.

    I've written drivers for cards under NT, Solaris, and Irix. In some cases it was an in-house designed card supported across all three platforms; in others, a quickie driver for a specific card on a specific platform. Drivers can be very persnickety creatures requiring *lots* of fine tuning. They can also be plain-vanilla simpletons with no thought for performance.

    Solaris has a fairly nice abstract DDI/DKI layer, letting you abstract things like endianness out of your code. And my 2.5 drivers work without a hitch under 2.7.

    Windows is insane because of its compatibility heritage. There are *tons* of special cases and exceptions. After two drivers written against the HAL layer, I went out and bought a third-party C++ library that abstracted most of Windows' undergarments away from me (Vireo DriverWorks, which I recommend). There have been new developments in the Windows world since that effort, however. Luckily I am gone from that!

    Irix was icky, mainly because SGI couldn't decide if they really wanted to support PCI or not. They sure didn't want to support PCI bridge chips. Eventually we were able to get enough patches to the OS to make it work, but it wasn't thrilling. Pretty damn fast, however.

    With that as background, I have to say that one source compiling to many driver binaries will only work at all well for simple stuff. Things like linked-list DMA, card-to-card DMA, etc., are tricky beasts and very OS-sensitive.

    And endianness persists as an issue; Solaris lets you declare any arbitrary memory range to be any arbitrary endianness.
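
    For reference, this is the Solaris mechanism in question: you declare the device's endianness once in a ddi_device_acc_attr_t when mapping its registers, and ddi_get32()/ddi_put32() then byte-swap (or not) behind the scenes for whatever CPU you're on. A sketch from memory, with the attach/detach boilerplate omitted and the register-set number and offsets invented, so treat the details as approximate:

        /* Solaris DDI: declare the device little-endian once, and let
         * ddi_get32() do any byte swapping for the host CPU (SPARC or x86).
         * Sketch from memory; attach/detach boilerplate omitted. */
        #include <sys/ddi.h>
        #include <sys/sunddi.h>

        static ddi_device_acc_attr_t mydev_acc_attr = {
            DDI_DEVICE_ATTR_V0,     /* devacc_attr_version */
            DDI_STRUCTURE_LE_ACC,   /* device registers are little-endian */
            DDI_STRICTORDER_ACC     /* don't reorder accesses */
        };

        static int
        mydev_map_regs(dev_info_t *dip, caddr_t *regs, ddi_acc_handle_t *handle)
        {
            /* Map register set 1 in full; the access handle remembers the
             * declared attributes, including endianness. */
            return ddi_regs_map_setup(dip, 1, regs, 0, 0,
                                      &mydev_acc_attr, handle);
        }

        static uint32_t
        mydev_read_status(ddi_acc_handle_t handle, caddr_t regs)
        {
            /* Swapped (or not) automatically, per the declared attributes. */
            return ddi_get32(handle, (uint32_t *)(regs + 0x10));
        }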

    On the other hand, because of the firestorm over binary drivers across Linux versions, a toolkit like this that targeted just Linux would be very nice; it would let companies write closed-source drivers for Linux and not get hammered with each new stable tree. And sometimes closed source is an economic necessity for a company -- it may have significant IP tied up in that driver.

  • by ceswiedler ( 165311 ) <chris@swiedler.org> on Tuesday April 11, 2000 @09:00AM (#1139328)
    NEW from BRF Technologies...

    Our new WinCoder software makes coding as easy as speaking! The WinCoder package comes with a special "development microphone" which connects to your PC through your sound card. Simply speak clearly into the microphone and request your program.

    For example:

    "Computer, make me a Linux device driver for my PCI BleeduxEdge video card!"

    "I want a program which automatically spies on my ex-wife!"

    "Show me eight thousand different pictures of Natalie Portman pouring grits down her pants!"

    ...And as easy as that, you've got a program! No more typing! No more reference manuals! No more staying up all night trying to finish a project! We guarantee massive throughput--less than thirty words per program! Ports to any OS--Unix, Windows, Macs, and even PalmPilots!

    How it works: your program request is digitized and sent to a special warped-space cage where an infinite number of monkeys type at an infinite number of computers. A special program-matching application identifies the monkeycode which most closely matches your request and returns that program to you. The entire process takes an infinitely small amount of time!

    Order yours today!
