Root Exploit For NVIDIA Closed-Source Linux Driver

possible writes, "KernelTrap is reporting that the security research firm Rapid7 has published a working root exploit for a buffer overflow in NVIDIA's binary blob graphics driver for Linux. The NVIDIA drivers for FreeBSD and Solaris are also likely vulnerable. This will no doubt fuel the debate about whether binary blob drivers should be allowed in Linux." Rapid7's suggested action to mitigate this vulnerability: "Disable the binary blob driver and use the open-source 'nv' driver that is included by default with X."
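Rapid7's suggested workaround boils down to a one-line change in the X server configuration: point the Driver entry at the open-source 'nv' module instead of the proprietary one. A minimal sketch of the relevant xorg.conf section (the Identifier value is illustrative; yours may differ):

```
Section "Device"
    Identifier "Card0"      # name is site-specific
    Driver     "nv"         # open-source driver shipped with X, replacing "nvidia"
EndSection
```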
  • by platyk ( 696356 ) on Monday October 16, 2006 @05:28PM (#16459193) Homepage Journal
    This is one reason I think I'll stop using NVIDIA chips and start using Intel chipset graphics hardware in the future.
  • by davidwr ( 791652 ) on Monday October 16, 2006 @05:31PM (#16459239) Homepage Journal
    Hardware vendors, whether of printers, video cards, or what-not, should work to two sets of specs:

    A high-performance, possibly proprietary, specification that gives them a definite edge over their competitors. If they want to ship binary-only drivers, that's fine.

    A possibly-lesser-performance specification that does "the basics" - everything a typical device of its type can do. This specification should be public, preferably with open-source drivers. Even without drivers, those who need to can write drivers from the specification.

    For a high-end video card, this should be everything that a low- or medium-end card could do. For an all-in-one printer, this should include basic full-color printing at "typical for its technology" resolutions, basic full-color scanning at "typical for its technology" resolutions, and b&w and color faxing. For a high-end sound card, this should include at least 2-channel sound. For a communications device, it should include all internationally-accepted standards that the device supports, but need not include the most efficient or highest-performance embodiment of those standards.

    Most important is full disclosure:
    Any device that doesn't provide a full, published specification of "everything" must disclose the limits of the published specifications, so buyers know exactly what they are buying: a device that falls back to a specified, reduced level of functionality if problems are found with the drivers or it is used with an operating system that has no supported driver.
  • by Caligari ( 180276 ) on Monday October 16, 2006 @05:34PM (#16459315) Homepage
    Seeing as there is no source code, and NVidia do not appear to have released a fix, using the Open Source X driver appears to be the only viable solution. Do you have a better suggestion? You are at the mercy of your proprietary vendor.
  • by AvitarX ( 172628 ) <me@brandySTRAWwi ... .org minus berry> on Monday October 16, 2006 @05:41PM (#16459451) Journal
    Best driver if you are not worried about a buffer overflow leading to a root exploit.

    If it was OSS it would already be patched.
  • by ZephyrXero ( 750822 ) <> on Monday October 16, 2006 @05:47PM (#16459591) Homepage Journal
    God forbid fair competition where the actual hardware's merit has to stand on its own ;)
  • Re:Allowed? (Score:3, Interesting)

    by 99BottlesOfBeerInMyF ( 813746 ) on Monday October 16, 2006 @05:48PM (#16459595)

    We're talking about a graphics driver here. It pretty much has to execute in kernel mode. You know, where you can do anything you want on the system? Sure, we could have a userspace graphics driver, but it would still need a kernel-mode driver stub and it would be substantially slower, which is not really an option for most people.

    With the current design of the Linux kernel + userspace, I agree, but I'm unconvinced that that has to be the case. I see inherent stumbling blocks to untrusted video drivers, but nothing that truly prevents them from running in an untrusted mode that does not present the same level of risk. I'm not, however, competent to judge the difficulty of such an enterprise and weigh it against the amount of real benefit to the end user.

  • by Aadain2001 ( 684036 ) on Monday October 16, 2006 @05:51PM (#16459647) Journal
    While the core of your idea is not wrong, what you are suggesting would actually cost more. While a lot of silicon manufacturers (Intel, AMD, IBM, ATI, Nvidia, etc.) do have some features that they can turn "off" when they want to sell a part cheaper than the fully enabled product, I very much doubt that they have a significant number of them. Remember, these are not software features we are talking about, where the product is (roughly) the same size on the CD as the full version. In silicon manufacturing, die size is a big factor in the cost. As die size increases, the number of chips per wafer decreases, thus increasing the cost per chip. Add in the decreased yield for very large dies and the cost goes up more. Manufacturing designs with the full 24/48/64/etc. pipelines and then disabling some of them in software is a waste of space and thus wasted money. It makes more sense to develop designs that can easily have more pipelines added for the higher-end products than to waste space on the die.
  • by kelnos ( 564113 ) <> on Monday October 16, 2006 @06:06PM (#16459899) Homepage
    Personally, I don't care so much about the HW-accelerated GL support the nvidia binary driver supplies. I only use it for the 2D acceleration (which, ironically, I usually don't use as it renders my system somewhat unstable). So for some of us, switching to the open source 'nv' driver is quite feasible.
  • by Anonymous Coward on Monday October 16, 2006 @06:47PM (#16460507)
    The computer science computer labs at my university all run Linux and use Nvidia graphics cards. There are about 250 machines in all the labs. I'm sure the sysadmins don't want us to have root.
  • by QuantumG ( 50515 ) <> on Monday October 16, 2006 @06:47PM (#16460511) Homepage Journal
    This is a buffer overflow in the closed-source Nvidia X11 driver, not the kernel modules. As far as I'm aware, Nvidia has no binary blobs that get loaded into the Linux kernel. ATI does, but Nvidia doesn't; all their kernel modules are open source.

    And for the record, X11 drivers run in userland, as root so they can access hardware ports directly. There's no real reason for them to require root, except that allowing any process to access hardware ports will undermine the security and stability of the system. What you could do is use capabilities to give X11 the ability to access particular hardware ports directly and run it as a regular user instead of root. As long as only root can assign the capabilities you'll be fine.
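A rough sketch of that capabilities approach (the binary path and capability set here are illustrative; a real X server may need more than raw port access, such as /dev/mem or DRM device nodes):

```
# Grant the X server binary permission to use ioperm()/iopl()
# (covered by CAP_SYS_RAWIO) so it need not run as root.
setcap cap_sys_rawio+ep /usr/bin/Xorg
# Only root can assign file capabilities, as noted above.
```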
  • by spun ( 1352 ) <loverevolutionary@yah o o .com> on Tuesday October 17, 2006 @11:34AM (#16469843) Journal
    FTFA: This bug can be exploited both locally or remotely (via a remote X client or an X client which visits a malicious web page).

    So we have three possible routes to privilege escalation. One, the person already has shell access. This is rather rare these days. In any case, you can restrict access to X to only those people you trust or can hold accountable. Two, a remote X client. Who allows remote X connections these days? Require shell access with X connection tunneling through SSH and see #1, above.

    Three, you are running an X based web browser and visit a malicious web page. Okay, to prove this is not an issue, let me quote from the article again:

    The NVIDIA binary blob driver does not check this calculation against the size of the allocated buffer. As a result, a short sequence of user-supplied glyphs can be used to trick the function into writing to an arbitrary location in memory.

    It is important to note that glyph data is supplied to the X server by the X client. Any remote X client can gain root privileges on the X server using the proof of concept program attached.

    It is also trivial to exploit this vulnerability as a DoS by causing an existing X client program (such as Firefox) to render a long text string. It may be possible to use Flash movies, Java applets, or embedded web fonts to supply the custom glyph data necessary for reliable remote code execution.

    Okay, to work, the exploit needs to provide glyph data to be rendered. From the sound of it, without being able to supply arbitrary glyph data, the best that an attacker can accomplish is a DoS for as long as you are visiting that site. So, practice safe browsing, turn off embedded fonts, Flash, and Java for untrusted sites.

    I am predicting that this exploit will not affect many people.
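The bug class the advisory describes fits in a few lines of C. This is a hypothetical sketch, not NVIDIA's actual code: a byte count derived from client-supplied glyph dimensions drives a copy into a fixed-size buffer, and the fix is exactly the bounds check the advisory says is missing.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical model of the advisory's bug class: the size of a
 * glyph bitmap is computed from client-supplied metadata. */
struct glyph { unsigned short width, height; };

/* Copies a glyph bitmap into dst. Returns bytes written, or -1 if
 * the client-supplied dimensions would overflow the destination;
 * that check is the one the vulnerable driver reportedly omits. */
static long copy_glyph_bitmap(unsigned char *dst, size_t dst_len,
                              const struct glyph *g,
                              const unsigned char *bitmap)
{
    size_t needed = (size_t)g->width * g->height; /* attacker-influenced */
    if (needed > dst_len)  /* without this line, memcpy smashes memory */
        return -1;
    memcpy(dst, bitmap, needed);
    return (long)needed;
}
```

With a 64-byte buffer, an 8x8 glyph (64 bytes) copies cleanly, while a 16x16 glyph (256 bytes) is rejected; the vulnerable version would have written 192 bytes past the end of the buffer.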
