Could Graphics Drivers be Included on the Card? 142

starseeker asks: "With all of the difficulties (both technical and legal) caused by binary graphics card drivers (e.g. the nVidia drivers), the question naturally arises: why is it necessary to have all of this logic at the 'kernel' level in the first place? Why couldn't the necessary logic be abstracted on-board the nVidia/ATI/etc. card, with the OS using one generic driver to access the functionality in all of them? Use OpenGL or similar standards on the software side, and have the card handle things on-board from that point on down. That way, hardware manufacturers wouldn't have to listen to all the flak about binary drivers, and Linux users wouldn't have to suffer with second-rate graphics and/or deal with binary drivers in an open (and dynamic) environment. Are there technical reasons this isn't practical? Or is it simply that it's easier/cheaper to do that type of work in the OS?" There are several issues that currently make such a thing impractical, but the largest hurdle at this point is that there doesn't seem to be any interest (either commercial or technical) in making such a leap.
This discussion has been archived. No new comments can be posted.

  • Make it flashable? (Score:1, Interesting)

    by Asmor ( 775910 )
    I think it's a good idea. All of the problems that immediately jump out at me-- things getting outdated and such-- would seem to be dealt with very easily by making the internal software updatable, some simple solid-state memory.

    As a bonus, there wouldn't be any worries about updating drivers more than once. Reinstalling your OS? You already have the latest drivers!

    It could make rolling back a bitch, though that should be handleable as well. Heck, stick a jumper on there that clears the memory and re
    • by Ruie ( 30480 ) on Saturday July 29, 2006 @10:40PM (#15808720) Homepage
      I think it's a good idea. All of the problems that immediately jump out at me-- things getting outdated and such-- would seem to be dealt with very easily by making the internal software updatable, some simple solid-state memory.

      No, it is a terrible idea. The right way is to release the specs to the damn hardware. CPU manufacturers do it, why not video cards?

      We already have a driver built into the video card. It is called the VIDEO BIOS. The latest VESA specification allows for fancy things like requesting the memory map of the framebuffer so one can have direct video access (see the sketch after this comment). It is easy to envision making a spec for 3D acceleration as well. It could even be in pseudocode - one could compile the driver to whatever hardware is using it.

      So why doesn't this work? Because, aside from graphics companies making shitty BIOSes to begin with, companies like Dell intentionally cut down the BIOS to save a couple of dollars on flash RAM. Ask yourself - when was the last time you saw a widescreen laptop whose video BIOS knew how to set up the widescreen mode? And this is one of the most basic things.

      Good and thorough description of the hardware is a requirement for doing interesting things with it.
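
      For the curious, here is roughly what that "direct video access" looks like from the software side on Linux, where the vesafb driver republishes the VESA BIOS mode information through /dev/fb0. This is only a minimal sketch: the /dev/fb0 device and the fbdev ioctls are the standard Linux interface, but the grey-bar drawing and the missing error handling are purely illustrative.

      /* Query the framebuffer that the VESA BIOS / vesafb set up, then map it. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <stdint.h>
      #include <string.h>
      #include <sys/ioctl.h>
      #include <sys/mman.h>
      #include <unistd.h>
      #include <linux/fb.h>

      int main(void)
      {
          int fd = open("/dev/fb0", O_RDWR);
          if (fd < 0) { perror("open /dev/fb0"); return 1; }

          struct fb_var_screeninfo var;
          struct fb_fix_screeninfo fix;
          ioctl(fd, FBIOGET_VSCREENINFO, &var);   /* resolution and depth     */
          ioctl(fd, FBIOGET_FSCREENINFO, &fix);   /* physical address, stride */

          printf("%ux%u @ %u bpp, framebuffer at 0x%lx\n",
                 var.xres, var.yres, var.bits_per_pixel, fix.smem_start);

          /* Map the framebuffer and paint a grey bar across the top. */
          uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
          if (fb != MAP_FAILED) {
              memset(fb, 0x80, fix.line_length * 16);
              munmap(fb, fix.smem_len);
          }
          close(fd);
          return 0;
      }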

      • Apple hardware does some of this. Add-in cards have to provide basic functions to the Apple OS; that's one reason Apple's hardware works so well together - the boards have to match what the OS expects to see. That's also the real reason Apple add-in cards cost more: they usually have 2x or more BIOS memory built into the hardware. A while ago there were write-ups on the cross-platform ATI 9600 http://www.anandtech.com/showdoc.aspx?i=2502&p=9 [anandtech.com] (anandtech) that could handle the 30" displays. All the reviews we
      • Ask yourself - when was the last time you saw a widescreen laptop whose video BIOS knew how to set up the widescreen mode?

        Two months ago, when I last rebooted my Powerbook :-)

    • Heck, stick a jumper on there that clears the memory and resets to factory defaults.
      Are you kidding? Those jumpers cost two cents each...

      By the time it makes it past the lawyers (are jumpers patented?), engineering (DIP switches are cooler, man), QA (Oh, you wanted them soldered on, too?), and marketing (Exclusive New Sleeve-and-Pin Programming Control Interface!) - the card will cost $100 more.

  • by iminplaya ( 723125 ) on Saturday July 29, 2006 @09:43PM (#15808461) Journal
    The drivers should be built in. It would definitely shorten boot-up time. The present method is such a kludge. I don't understand why it happened that way in the first place. Now if I can only get a car that includes the driver...
  • It would be very hard to add new hardware features if every video card had the same driver. It could work, but only if video card tech stopped changing so much... And I don't think gamers would be too happy with that!
  • by CDPatten ( 907182 ) on Saturday July 29, 2006 @09:44PM (#15808466) Homepage
    I think for this to work (good idea) it would require a company like MS to "play ball", and they won't. It is to their benefit if the HARDWARE can't just work out of the box on ANY OS. Imagine if any card could just work on Linux and OS X? Then this would spread to TV cards and all other hardware devices.... Windows has a monopoly because most software and hardware is made for WINDOWS, and I don't see MS giving that up so easily...
    • Run gigabit Ethernet from the computer to the monitor. Have the "monitor" be an X terminal that speaks X11 protocol. There, problem solved, and you can even put the computer in the server room or garage.

      Windows still doesn't speak X11 or even VNC.
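
      A minimal sketch of that "monitor as X terminal" idea: an ordinary Xlib client on one machine drawing on a display served by another, with every request travelling over the network. The display name monitor-host:0 is a placeholder, and it assumes the remote X server accepts the connection (xhost/xauth); everything else is illustrative.

      #include <X11/Xlib.h>
      #include <unistd.h>

      int main(void)
      {
          Display *dpy = XOpenDisplay("monitor-host:0");  /* TCP to the far end */
          if (!dpy) return 1;

          int scr = DefaultScreen(dpy);
          Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                           10, 10, 320, 240, 1,
                                           BlackPixel(dpy, scr),
                                           WhitePixel(dpy, scr));
          XMapWindow(dpy, win);

          /* Each call below is just X11 protocol on the wire; the machine
           * driving the monitor does the actual drawing with its own video card. */
          GC gc = XCreateGC(dpy, win, 0, NULL);
          XFillRectangle(dpy, win, gc, 40, 40, 240, 160);
          XFlush(dpy);

          sleep(5);
          XCloseDisplay(dpy);
          return 0;
      }
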
      • Funny that, I've been running TightVNC [tightvnc.com], both server and client, on my Windows machines for years.

        Could have sworn my bullshit alarm just went off.
        • you realize that dual-link DVI is nearly 8 gigabits per second (roughly 1 gigabyte/s) of throughput, where gigabit Ethernet is only one gigabit per second (about 125 megabytes/s). Video cards do a lot of work to get the graphics on screen... even more for games. In 3D shooters, more bits are being processed on the video card's internal processor and RAM and fed straight to the monitor than in any other part of your system.
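
          The back-of-the-envelope numbers behind that comparison, using the published link rates rather than measurements (C used purely as a calculator):

          #include <stdio.h>

          int main(void)
          {
              /* Dual-link DVI: 2 links x 165 Mpixel/s max x 24 data bits per pixel. */
              double dvi  = 2.0 * 165e6 * 24;   /* ~7.92 Gbit/s */
              /* Gigabit Ethernet raw line rate. */
              double gige = 1e9;                /*  1.00 Gbit/s */

              printf("dual-link DVI   : %.2f Gbit/s (%.0f MB/s)\n", dvi / 1e9, dvi / 8e6);
              printf("gigabit Ethernet: %.2f Gbit/s (%.0f MB/s)\n", gige / 1e9, gige / 8e6);
              return 0;
          }
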
        • by r00t ( 33219 )
          DVI is a dumb protocol. It sends an uncompressed image with every refresh.

          Neither VNC nor X11 is like that.

          X11 is good enough to play 3D games over gigabit. You just send the OpenGL. Video works fine too -- you don't send it raw, and certainly not at your monitor's refresh rate.
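
          "Just send the OpenGL" in practice means GLX indirect rendering: the client asks for a non-direct context, and its GL calls are serialized over the X connection and executed by the remote X server. A minimal sketch, with remotehost:0 as a placeholder and assuming the remote server still permits indirect contexts:

          #include <X11/Xlib.h>
          #include <GL/glx.h>

          int main(void)
          {
              Display *dpy = XOpenDisplay("remotehost:0");  /* X server on the far end */
              if (!dpy) return 1;

              int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
              XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
              if (!vi) return 1;

              /* Last argument False = indirect rendering: GL calls become GLX
               * protocol requests instead of going straight to local hardware. */
              GLXContext ctx = glXCreateContext(dpy, vi, NULL, False);

              /* ... create a window, glXMakeCurrent(), then issue GL calls as usual ... */

              glXDestroyContext(dpy, ctx);
              XCloseDisplay(dpy);
              return 0;
          }
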
            • Why do people bring up X11 and VNC? Both protocols suck. If you want a good protocol try RDP, ICA, or NX. OpenGL is a good protocol, but it wouldn't be good for a display adapter unless you wanted to tie the display to a particular built-in video board; GL takes a lot of processing.
            • I can say little about VNC, ICA, or NX. The other two though...

              You couldn't be more wrong about your ranking of X11 and RDP.

              X11 is pretty decent, especially if you use the modern extensions (Xfixes, Damage, and the font-related stuff).

              RDP is a joke. The sickest thing is that we tunnel it over TCP/IP. RDP already includes those layers of the network stack. To anyone with a clue about the behavior of network protocol congestion control and similar issues, this is an obviously bad thing to do. RDP has all sorts of
      • Then you'd need the graphics card in the computer to create the data that goes through the ethernet, so you've solved nothing.
      • Here's a project that implements what you are talking about:
        http://dmx.sourceforge.net/ [sourceforge.net]

        of course, at some point, you'll still have a video card, but..
    • I think for this to work (good idea) it would require a company like MS to "play ball"

      Like hell it would.

      USB keychain drives are a perfect example. On Win98 you need driver software, but on any recent Mac, Windows, or Linux, just plug it in and it works. I have, in fact, NEVER had a USB Mass Storage device not work out of the box on Linux. True, it's not as standardized as we'd like -- UHCI vs OHCI, for instance -- but it's getting better; there's only one EHCI.

      No, all this needs is a working implementa

      • >I think for this to work (good idea) it would require a company like MS to "play ball"

        Like hell it would.

        USB keychain drives are a perfect example. ...

        How do you explain the USB situation, otherwise?


        Apologies for taking the sports metaphor further, but with USB MS *dropped* the ball. Apple made USB standard on their machines so device manufacturers were making USB devices that worked on Macs. MS had to play catch up. Now that there are tons of devices on the market, it's too late to screw with the stand
      • USB keyboards and mice, and probably other stuff, had a spec since the mid-90s... pre-Windows 98. I had motherboards in '97 with USB that never quite worked correctly because MS wanted different drivers than the spec built into the boards. So yes, the MS monopoly clearly held back proper adoption, maybe not through malice, but through laziness. I've found MS plays the "technical difficulty" card quite well over the years. They're a company with more money than god, and some of the best programmers in the world.
        • Well, we'll see, but I still suspect that third-parties will make it work on Windows, even if MS doesn't, and MS holding back adoption will only hurt them, especially when people can tri-boot Windows, Linux, or OSX86, and Doom 4 / Quake 5 / World of Starcraft works flawlessly, with all the shiny new graphical effects, on all but one of those.
  • Patch vs Flash (Score:4, Interesting)

    by lexarius ( 560925 ) on Saturday July 29, 2006 @09:44PM (#15808471)
    Driver patches happen. If the driver is in hardware, you'll have to flash it, which has somewhat more severe consequences in the event of an error.
    • Not really.
      My motherboard has Dual Bios [tomshardware.co.uk] on it, one copy is the original BIOS, the other is the custom one.

      If the custom one breaks or fails, the primary original switches on.

      problem solved.

      According to my search, there are already graphics cards which have this capability as well, here is an article about a geforce 6600 [guru3d.com] with it.
      • Good to know. It seems like an obvious solution, but I have in the past heard horror stories of flash failure reducing cards to paperweight status. As long as they implement it properly it should be ok.
    • Contrary to popular belief, just because a device uses flash memory does not mean that it cannot fail safely.

      Trash an area on flash due to power failure, bad firmware image, or dried dog snot bridging the contacts of the memory chip?

      No problem. The bootloader, in a protected area of flash, sees that the checksum is bad and just Does The Right Thing by loading good data from another (also protected) portion of the flash, and the device boots up to a state which may not be latest-and-greatest, or even fully functi
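
      A sketch of that fail-safe scheme: a tiny bootloader in a protected block verifies the main image's checksum and falls back to a known-good recovery image when it is bad. The image layout, the function names, and the choice of plain CRC-32 here are illustrative assumptions, not any particular vendor's firmware.

      #include <stdint.h>
      #include <stddef.h>

      struct fw_image {
          const uint8_t *data;
          size_t         len;
          uint32_t       stored_crc;   /* written when the image was flashed */
      };

      /* Plain bitwise CRC-32 (reflected, polynomial 0xEDB88320). */
      static uint32_t crc32_sw(const uint8_t *p, size_t n)
      {
          uint32_t crc = 0xFFFFFFFFu;
          while (n--) {
              crc ^= *p++;
              for (int k = 0; k < 8; k++)
                  crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
          }
          return ~crc;
      }

      /* Runs from the protected boot block, which updates never overwrite. */
      static const struct fw_image *select_firmware(const struct fw_image *primary,
                                                    const struct fw_image *recovery)
      {
          if (crc32_sw(primary->data, primary->len) == primary->stored_crc)
              return primary;            /* normal case: run the updated image */
          return recovery;               /* botched flash: limp home instead   */
      }

      int main(void)
      {
          static const uint8_t good[] = "known-good recovery image";
          static const uint8_t bad[]  = "corrupted update image";

          struct fw_image recovery = { good, sizeof good, 0 };
          recovery.stored_crc = crc32_sw(good, sizeof good);

          struct fw_image primary = { bad, sizeof bad, 0xDEADBEEFu };  /* wrong CRC */

          /* Falls back to the recovery image because the primary checksum fails. */
          return select_firmware(&primary, &recovery) == &recovery ? 0 : 1;
      }
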
    • Driver patches happen. If the driver is in hardware, you'll have to flash it, which has somewhat more severe consequences in the event of an error.

      It doesn't NEED to be a problem; it's just that stupid designs tend to make it one. In the case of a PC-hosted device like a graphics card, there's no good reason the card can't be reflashed after an error by logging in from another machine (even Winderz can use VNC).

      Even mainboard BIOS flashing problems SHOULDN'T brick the PC. In many cases, it DOES, but t

  • Only if you want infrequent bugfix releases and to be tied to a single high-level API.
  • by Dasher42 ( 514179 ) on Saturday July 29, 2006 @10:01PM (#15808548)
    I think NVidia is doing something right in this department. The same video driver works across the majority of their cards, from the old TNT2 to the latest GeForce. This implies that a good level of abstraction is possible with video cards, and if this is the case with video cards, one wonders how much can be done with other hardware.
    • The absurd size of these drivers (even compared to other 'abstracted' drivers like those in X.Org) means there's a LOT of redundancy or sheer failure of engineering. No, nVidia sucks and should be overthrown by force if necessary. Binary drivers are an inexcusable evil, giving us a quick workaround to a permanent problem, one which is not guaranteed to work with the combination of old cards and new kernels/X.Orgs, or at all with custom configs or patchings, let alone allow porting to other systems with any practical e

    • It's called an API, and it's not a new concept. There may be a different term when it's hardware-software as opposed to software-middleware, but there it is.

      You build your hardware always such that the newer ones understand the older instructions, just using supersets. Unfortunately it means every X years you have to start from scratch to get rid of the absurd backwards "if such and such then do this kludge".

      But it's a good concept. If published, it allows for open drivers (or whatever), as long as you
    • I think what's really going on is that the installer holds as many as several dozen drivers, and the installer gives the impression that it's just one set of driver files. Matrox has been doing this too: one installer holds the drivers for a good range of their products, and if you look in the directories you'll see files for many different models.

      Intel's driver installers are a lot like that too. It will detect what chips you have in your system and install the ones you need.

      Apple does this with iPod, every updat
      • Actually, it's not like that. First, the OpenGL driver has a large hardware-independent portion. Second, if you look at the partial specs that are available for NVIDIA cards, you'll see that the interfaces are quite similar between the hardware generations, all the way back to the Riva 128.
    • Not really. Nvidia uses its own super-secret code to talk to its own chips in assembler; that's why they won't ever let the drivers be open. They're not sending just polygons to the GPU -- the actual programming is being compiled on your CPU first, then sent over. It's horribly wasteful of resources, and Nvidia doesn't want you to see how all the fancy features are just "hacks" on old ones, or how it's not processing the instructions from the game program exactly how they tell the developers it should be
    • to clarify my last post... current GPUs are like super fancy WinModems. We all remember when those became popular. That's why professional 3D apps won't/can't touch them with a ten-foot pole.
  • VESA F'ING BIOS (Score:4, Insightful)

    by tomstdenis ( 446163 ) on Saturday July 29, 2006 @10:02PM (#15808557) Homepage
    Welcome to two decades ago.

    Tom
  • The answer here is the same as the answer to most of these "Why don't the video card makers...?" questions: the number of Linux users concerned with high-end 3D performance and objecting to binary drivers is simply too small to be worth worrying about. As others have noted, software drivers have enormous advantages -- there's simply no economic reason to forgo them to please a handful of politicized Linux users.
  • by Theovon ( 109752 ) on Saturday July 29, 2006 @10:08PM (#15808596)
    Note: I'm a graphics chip designer.

    Basically, you're asking for the software interface of the hardware to be standardized and abstracted. In a nutshell, hardware is expensive and software is cheap. Anything you can do in software with little or no impact on your performance requirements is something you should not do in hardware. ATI and nVidia have radically different approaches to GPU design. With differing internal structures, the interfaces exposed to drivers are also going to be radically different, and there's no reason not to use cheap CPU cycles to create the abstraction rather than expensive logic gates in hardware.

    Hardware is expensive because the cost of a chip goes up roughly with the fourth power of the logic area.

    IMHO, the best solution to the problem of drivers for Linux is simply to not buy hardware that doesn't have open source drivers. Do you think that makes life difficult? The Open Graphics Project [opengraphics.org] has opinions about that.

    • Thanks for the link to the Open Graphics Project; that was new to me. I run a website [osdev.org] for amateur operating system developers, and while a 3D version of a VESA BIOS would be nice, I can say that most of us would just be happy with open specs. Note to others: people are trying to make all kinds of truly open hardware; one of the biggest sites is http://www.opencores.org/ [opencores.org]
    • How about putting the driver on the board with a ROM chip, using hardware encryption or some other hardware DRM to protect it, and letting it be flashed to update it, etc.? This happens on occasion for hardware devices.... Then expose the driver through a standard API that anyone could implement, whether commercial or OSS... you don't need to disclose how it works, just how to trigger it.
    • Actually I would love for the hardware to get some standardization back. I want to plug in a modern video card and have Windows (or any OS) just use the "basic" functionality of that card. But by "basic" I mean having at least DirectX 5.0 compatibility. Right now we're dealing with a "standard" which goes back to VESA 256-color mode. This is just retarded.

      The logic that is being used is the same logic that spawned the winmodem. So there is some silly piece of junk software that is coded by chimps th

    • by jd ( 1658 )
      I can see that a standard interface for everything would be next to impossible. HOWEVER, there are some things that could be added that would greatly improve standardization without adding significant hardware expense, and would also take some of the burden off the main processor.
      • OpenGL - Many graphics cards support OpenGL, but do so in different ways. That's fine. All you need is some cheap transliteration layer that converts a generic OpenGL instruction into the card-specific OpenGL instruction. They're
      • Software generates the OpenGL instructions, which get turned into identical but probably different OpenGL instructions (on a 1:1 basis) in some OS library, which then gets translated into identical but almost certainly different OpenGL instructions (still on a 1:1 basis) by the graphics driver, which hands the data to the card.

        Erm... "identical but almost certainly different"?

        AFAIK it's nowhere near as bad as you suggest. OpenGL API calls go direct to the driver (on Windows, they're patched through by the sy
    • In a nutshell, hardware is expensive and software is cheap. Anything you can do in software with little or no impact on your performance requirements is something you should not do in hardware.

      Yes indeed Theovon, that's a commercial fact of life, and it's not going to change. Since it's not going to change, kernel designers who want to protect Linux from the problems inherent in closed binary drivers should have structured the driver architecture in a manner that addresses this, but they haven't.

      Driver ma

      • Er, I think you're confused as to who's in a position of power here. If Linux devs go out of their way to make life difficult for Nvidia and ATI, they'll just give Linux the finger and stop producing drivers for it altogether.

        From the point of view of the management of these companies, providing Linux drivers is a thing they do on the side to get good PR with open-source devs and on the off chance the OS becomes a significant market in the future. But force them to do extra work or put them at risk of
    • ...and not a software engineer.

      The abstraction you describe could just as easily be software in flash on the card as software in RAM on a PC. Your chip design wouldn't have to change at all for this to be a reality... ...except that it already has been a reality. This isn't a novel idea. It's happened at least three times in the past, and it will happen again in the future. Somebody comes up with a standard driver API, people implement it, cards work everywhere... Then some smartass graphics chip maker want
    • Agreed - hardware is expensive. However, developer time for software driver development is also expensive. I don't know if, in the long run, an onboard chip would be more or less expensive than OS driver headaches. I suppose, unfortunately, the current product offerings indicate that it's not worth the onboard hardware.

      I've been watching the Open Graphics project with interest - I hope they succeed. Such a card is exactly what I would want - I don't care about being able to play games at a zillion FPS,
  • by Anonymous Coward
    I remember sitting in stupid ass Windows 95 training classes, them going on and on about how the PCI peripherals would be able to ship with their own drivers... oh well.
  • Sue the hardware makers for only releasing drivers for an OS that was an illegal monopoly.

    There has to be a case there.
  • There's a few routes that all fit under your description.

    The first and most obvious one is a well-standardized API for using the card.
    This isn't new; in fact, for 2D stuff it's ancient news. The standard is VESA VBE (video BIOS extensions), with its lesser-known cousin VESA VBE-AF (video BIOS extensions - acceleration functions). The VBE standard itself was legally free. The VBE-AF interface was only available as a commercial specification, with distribution restrictions - but copies of the information d
  • by Ant P. ( 974313 ) on Saturday July 29, 2006 @10:16PM (#15808628)
    Why do GPUs need huge drivers when CPUs don't?
  • How about an intermediate layer between the OS and the hardware that would contain all the drivers?

    The operating systems would only have to deal with a standard interface independent of the hardware.
  • Cards would then be DirectX-compliant or OpenGL-compliant, and anything outside the scope of the chosen standard would not be usable.
    Of course, commercially, DirectX makes more sense, although that would be a problem for the id engines and for some of the people who use these cards for actual 3D work.
    CDs are also way cheaper than flash memory, and they give you an easy way to include the latest driver with your card.
    Also, the NVidia utilities have to come from somewhere; that's added value that they wouldn't w
  • EFI (Score:2, Informative)

    EFI is what you are talking about
    http://en.wikipedia.org/wiki/Extensible_Firmware_Interface [wikipedia.org]
  • I used to wonder whether the PCI, PCI-X, USB, and CardBus protocols make any allowance for downloading the driver from the 'unknown' device. The machine can always be updated with newer drivers, but the machine should never report 'unknown' -- much like Cisco routers, which have a very basic IOS in their ROM before the full IOS is loaded. Flash is cheap, especially at the size that is required for a basic driver.

    Next, I suppose the driver should be either in source format, or an intermediate machine language, or (my favorite
  • Actually, I'd like it if hardware was *less* smart. Driver bugs are much easier to diagnose and fix than firmware bugs, and the fixes are easier to apply.

    The real problem with video drivers is software patents. The graphics vendors have to be paranoiacally secretive about how their cards work, because they're all violating thousands of patents that should never have been granted in the first place. If you want to fix this situation, call your congresscritter.
    • About the patent thing: not really; in fact a better on-board API would allow them to maintain the patent secrecy even better. What you want is a card that responds to the API... similar to how serial modems used to respond to command sets. You don't care what goes on inside, just that the hardware accepts a certain set of commands. Part of that is why 3D can be so cheap... because graphics cards are like winmodems... all the heavy lifting is done on the host CPU... the GPU just crunches numbers fed to it reall
      • Serial modems implemented a standard and had to be bit-perfect or fail. Graphics companies have been known to patent their bugs upon discovering that they made things faster at the expense of a little bit of correctness, but things still looked "good enough". In the graphics world, performance trumps correctness, until the next game comes out that uses your slightly-broken feature in a new way and looks really ugly (or freezes your system), and then you need to push out a new driver. You can't do that ne
        • But that is a problem on a couple of levels.

          First, companies are not spending enough time getting the hardware correct the first time around. I remember older hardware had few driver "issues". When you couldn't "just download" stuff, manufacturers had to try a lot harder to make sure that when you opened the box it was RIGHT, or you were sending the whole thing back for a refund. Now new hardware (CPU, motherboard, video, the whole lot) often requires downloading patches before the product is even ON retail shelv

  • TI / Apple had this (Score:3, Informative)

    by Doctor Memory ( 6336 ) on Saturday July 29, 2006 @11:37PM (#15808978)
    ISTR the NuBus had this capability. Cards had the drivers on them, and they identified themselves to the bus, so when scanning the bus for hardware you just got "graphics card in slot N". They all presented the same API, with bus NAKs or something if the software requested a mode the card didn't support.

    This, of course, made changing features a bitch, since you couldn't tell the software that you had eight hardware shaders instead of one, because there wasn't space in the API data structure for that...
  • I'd much rather have the card vendors get together and agree on a system of standards that could wrap up X, Quartz, GDI, OpenGL and DirectX into some standardized video description language. Do for video what PostScript did for print.

    This would probably entail a performance penalty at first, but if the engineering resources that are currently being dedicated to creating drivers for each and every little card were re-applied, it could come out ahead in short order.

    The only thing keeping this from happening i
  • Didn't microchannel do this very thing?
  • "Commercial Advantage"

    Actually, it's the perceived commercial advantage of the horrible video card makers (ATI and NVidia) that is the problem. They seem to think that by being as different as they can from the competition, then hiding behind NDAs and funky specs, they will have some advantage. In reality there's no real advantage if the hardware is performing properly. The problem is that the drivers contain hack after kludge to work around quirks in the hardware.

    The big guys don't want you seeing how shi
    • They also don't want us to see the bizarre hacks they use to detect benchmarking in progress and cheat for higher grades ;)
  • by MadAndy ( 122592 ) on Sunday July 30, 2006 @04:32PM (#15812950)
    They used to have a ROM chip on the board. Device drivers came in the form of 'modules', and one specific to the device would be loaded off its ROM chip. If you had a newer driver, you'd load it from disk and it would replace the ROM one. It meant that the hardware worked 'out of the box', but if you needed newer drivers you could still use them.

    Main catch is that it made the card bigger/more expensive - important especially when you look at some of today's tiny cards. In this age of the internet we're probably better off just working off the unique PCI ID that every card type has. The ideal would just be a little utility that scans the IDs and fetches (or tells you) what you need. MS has done a half-assed job of it with Windows Update, but it definitely could be better.
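
    A rough sketch of that ID-scanning utility on Linux, walking the standard sysfs layout and printing each device's vendor:device pair - exactly the key a driver-fetching tool would look up in a database. The sysfs paths are standard; the output format is illustrative.

    #include <dirent.h>
    #include <stdio.h>

    int main(void)
    {
        const char *base = "/sys/bus/pci/devices";
        DIR *dir = opendir(base);
        if (!dir) { perror(base); return 1; }

        struct dirent *de;
        while ((de = readdir(dir)) != NULL) {
            if (de->d_name[0] == '.')
                continue;

            char path[512], vendor[16] = "", device[16] = "";
            FILE *f;

            snprintf(path, sizeof path, "%s/%s/vendor", base, de->d_name);
            if ((f = fopen(path, "r"))) { fscanf(f, "%15s", vendor); fclose(f); }

            snprintf(path, sizeof path, "%s/%s/device", base, de->d_name);
            if ((f = fopen(path, "r"))) { fscanf(f, "%15s", device); fclose(f); }

            /* Prints e.g. "0000:01:00.0  0x10de:0x0141" (an NVIDIA card). */
            printf("%s  %s:%s\n", de->d_name, vendor, device);
        }
        closedir(dir);
        return 0;
    }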

  • If what you're thinking of is a fully-fledged, performing 3D driver, then I'm not sure that's a possibility, because that graphics card has to work potentially with a lot of operating systems and CPU architectures - it might get plugged into anything from a Power/AIX machine to a SPARC/Solaris box, with Intel/Microcrap in between. However, there is a completely platform-agnostic way to get a video card set up, and that is OpenFirmware (OpenBoot PROM), IEEE 12something. OpenFirmware cards (SCSI, FC adapters re
  • Read my tagline.. now re-read it. If we could do away with drivers, we would have done so long ago because there is no greater pain than having idiot users complain about drivers, call tech support because their drivers are corrupt because their OS is an infested pool of shite, or whine about how your expensive gadget doesn't work on their exotic "look what I made" OS derivative. Trust me, if all hardware companies could get rid of the software part of their product, they'd do it in a heartbeat. They can
  • http://www.openfirmware.org/ [openfirmware.org]

    The basic premise is that the interface card (e.g. a video card) contains a chip holding system-independent drivers.

    It's been running on most commercial Unix systems with proprietary hardware for years.
