Microsoft Research Warns About VM-Based Rootkits

Tenacious Hack writes "According to a story on eWeek, lab rats at Microsoft Research and the University of Michigan have teamed up to create prototypes for virtual machine-based rootkits that significantly push the envelope for hiding malware and maintaining control of a target OS. The proof-of-concept rootkit, called SubVirt, exploits known security flaws and drops a VMM (virtual machine monitor) underneath a Windows or Linux installation. Once the target operating system is hoisted into a virtual machine, the rootkit becomes impossible to detect because its state cannot be accessed by security software running in the target system."
  • by LiquidCoooled ( 634315 ) on Friday March 10, 2006 @08:57PM (#14895974) Homepage Journal
    ...and nuke the entire site from orbit.
    It's the only way to be sure.

    Everything I know about rootkits tells me that you cannot detect one from within the running system; you have to be objective (I consider the current fingerprint detection to work only because of bugs in the rootkit implementation, and those will be "fixed" over time).

    Keep a known secure boot CD.

    Drain the battery and reset the BIOS, then boot from that CD.
    If there's anything sophisticated enough to bypass this level of paranoia, then it can damn well have my credit card number and I'll gladly send spam for them.
    • Deserves to be modded Funny, yes. But I feel it necessary to ask:

      Surely re-flashed BIOSes (tampered firmware, that is) wouldn't be reset by simply taking out the battery? That just clears the settings, not the entire firmware. That's what puts the "firm" in "firmware".

    • by Beardo the Bearded ( 321478 ) on Friday March 10, 2006 @09:16PM (#14896056)
      You don't have to drain the battery - you can disconnect it.

      Your virtual machine could flash your BIOS without your consent. Then you're boned. A bootstrap doesn't require a lot of space.

      Oh fuck me - the next step is a VM rootkit that flashes the bios to keep a VM rootkit.
      • Your virtual machine could flash your BIOS without your consent. Then you're boned.

        There are two ways around that.
        - Flash the BIOS chip yourself: pull the existing one out, put it in a programming unit, flash the chip, and push it back into the mainboard.
        - Use a mainboard that supports a "dual BIOS" option (e.g. some Gigabyte models).

        No virus can penetrate further without physically damaging the hardware (and that would be difficult on most modern computers).

        • by Anonymous Coward on Friday March 10, 2006 @10:27PM (#14896308)
          My roommate runs a PC repair biz. I've noticed those dual-BIOS mobos. Always felt weird to me. If they want to make sure you have a good BIOS at all times, isn't it cheaper to install ONE BIOS socket and send you two chips? Then you'd only swap if you really needed to. And the "clean" chip is guaranteed clean because it can't be tampered with while it's not in the computer.

          In any event, programs being able to flash your BIOS without telling you about it is A Very Bad Thing(TM). Disabling BIOS writes except when booted from a floppy would be a start. But at a very bare *minimum*, when the BIOS is modified by anyone or anything, the next time you boot the machine, the BIOS boot routine should throw a warning up on the screen:

          "Your BIOS has been modified since last reboot. If you have not intentionally changed your BIOS, or added new hardware, you should discard these changes. Discard changes? (Y/n)"

          And the code that performs this check, and throws up the error message, should be in a ROM or OTP chip where software can't tamper with it.
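          To make the check described above concrete, here is a minimal, self-contained C sketch. Everything in it is a stand-in: the in-memory "flash" array plays the role of the BIOS image, the "golden" hash plays the role of a value burned into ROM/OTP where software cannot rewrite it, and FNV-1a merely stands in for a real cryptographic hash.

            #include <stdio.h>
            #include <stdint.h>
            #include <string.h>

            /* Stand-ins for real firmware facilities: "flash" is the BIOS image and
             * "golden_hash" would live in ROM/OTP, out of reach of software. */
            #define FLASH_SIZE (64 * 1024)
            static uint8_t flash[FLASH_SIZE];
            static uint64_t golden_hash;

            /* FNV-1a is only a placeholder for a real cryptographic hash. */
            static uint64_t fnv1a(const uint8_t *data, size_t len)
            {
                uint64_t h = 0xcbf29ce484222325ULL;
                for (size_t i = 0; i < len; i++) {
                    h ^= data[i];
                    h *= 0x100000001b3ULL;
                }
                return h;
            }

            int main(void)
            {
                memset(flash, 0xAA, sizeof flash);        /* pretend: factory image   */
                golden_hash = fnv1a(flash, sizeof flash); /* pretend: burned into OTP */

                flash[1234] ^= 0xFF;                      /* pretend: unauthorized reflash */

                if (fnv1a(flash, sizeof flash) != golden_hash) {
                    puts("Your BIOS has been modified since the last boot.");
                    puts("If you did not intentionally change it, discard these changes. (Y/n)");
                    /* A real implementation would restore a backup image here. */
                } else {
                    puts("BIOS image matches the stored reference.");
                }
                return 0;
            }

          The only point of the sketch is the flow the poster describes: hash the image, compare it against a reference that software cannot rewrite, and warn before the boot continues.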
          • by Tyger ( 126248 ) on Friday March 10, 2006 @11:06PM (#14896427)
            Are the chips actually socketed, though? Because with the price of things these days, it's actually cheaper to solder two chips onto the motherboard than to fit one socket and two socketed chips. Sockets are not cheap, as far as the price of parts goes.

            Besides, swapping chips in a socket isn't a fun user experience, and these are probably high-end boards where money is no object anyway.
        • by vux984 ( 928602 ) on Saturday March 11, 2006 @02:25AM (#14897001)
          I think it would be a lot more sensible to have a physical switch or jumper that would have to be set to enable BIOS flashing. That's a 100% guarantee that it can't be circumvented with software, and it's equally immune to social engineering of the less literate... "When you see the WARNING DANGER DANGER YOU ARE ABOUT TO PERMANENTLY CHANGE YOUR HARDWARE window... just click 'continue anyways'." Don't worry about why, trust us...

          Failing that, a setting in the BIOS itself that determines whether or not it's flashable. I've seen a lot of BIOSes with that, and I like the feature; the default is no, you have to boot into the BIOS to change the setting, and the flashing process resets the setting back to no.

          I'm not sure how strictly secure it is, but assuming that setting can't simply be ignored by a custom flashing utility, it seems pretty good.

          • by el americano ( 799629 ) on Saturday March 11, 2006 @04:10AM (#14897193) Homepage
            "In order to enable this chat toolbar you need to move this jumper from position A to position B. Here's a photo of what it looks like. The factory incorrectly installed this, and it limits the ability of your video card to get full 3D resolution. You don't have to turn off the computer, and it will allow you to run this really cool software. All your myspace friends will love it."

      • by Dunbal ( 464142 ) on Friday March 10, 2006 @09:34PM (#14896128)
        Oh fuck me - the next step is a VM rootkit that flashes the bios to keep a VM rootkit.

              Just remind me when was Skynet supposed to become sentient again?
      • Oh fuck me - the next step is a VM rootkit that flashes the bios to keep a VM rootkit.

        Flashes your bios, writes your boot blocks, patches your microcode, wash, rinse, repeat, all that's left to do is nuke it from orbit, as the other guy said....

        C//
    • send it to me sir i am of nigerian royalty
    • by Bacon Bits ( 926911 ) on Friday March 10, 2006 @09:58PM (#14896215)
      If there's anything sophisticated enough to bypass this level of paranoia, then it can damn well have my credit card number and I'll gladly send spam for them.
      The payload for the Chernobyl virus [wikipedia.org] wrote zeros to sector 0 of your hard drive (which generally contains partition table information) and also tried to write garbage to any Flash BIOS present. You had to manually reprogram the EEPROM to recover a BIOS damaged that way.

      However, this virus dates back to the innocent days when a virus would just destroy your data or computer, rather than steal your information for profit or turn your PC into another node in some botnet collective.

    • by this great guy ( 922511 ) on Friday March 10, 2006 @10:06PM (#14896242)
      If there's anything sophisticated enough to bypass this level of paranoia, then it can damn well have my credit card number and I'll gladly send spam for them.

      This may very well astonish you, but such sophisticated infection mechanisms already exist and have been demonstrated. See this rootkit concept that overwrites your BIOS [ngssoftware.com] to create a permanent backdoor.

      Note: removing the CMOS battery will not destroy this rootkit, because pulling the battery clears the NVRAM, not the BIOS flash chip. The only known way to recover from a BIOS rootkit is to reflash your BIOS... but what if the rootkit is intelligent and tries to re-corrupt the new image being flashed? That is a real possibility. In that case your only option is to physically replace the flash chip with a known-good one. And don't forget that a modern computer has a lot of flash chips that could theoretically be infected: hard disk firmware, video card BIOS, DVD drive firmware, etc.

      • by frogstar_robot ( 926792 ) <frogstar_robot@yahoo.com> on Friday March 10, 2006 @11:27PM (#14896493)

        Stephen R. Donaldson wrote the "Gap Series". At one part of the story, the "Data First" of a pirate vessel put a virus in the firmware of one of the pieces of hardware controlling the ship. Even if the ship's computer was reloaded from known good stores, the virus would re-infect the computer. The virus was rigged to totally wipe the ship's computers if a password wasn't entered at specified intervals. Since you couldn't navigate or operate equipment without the computers, this was effective extortion. Billions of miles from home, there was simply no getting back without functional computers.

        The cure was to install known good hardware (itself tricky considering the circumstances) and to reload the ship's computer. The story also featured a kind of WORM device called a "datacore" that every ship had to carry by law. It was a combination flight-recorder and criminal evidence accumulator. Come to think of it, many IT issues were dealt with pretty well in this series. It's worth checking out. The IT issues are essential in certain parts of the story but they aren't the main point.

        • Not to mention that this series is among the best books I've ever read, if not *the* best. People may call it perverted if they want, but then they're focusing on the actions committed and not on what lies behind them - Donaldson is really good at describing people doing things out of their own personal - and believable - motives, and what drives them, often with bad or even catastrophic results because they were misinformed or misdirected. This is true for his other books as well, especially the Covenant series.
  • by Saven Marek ( 739395 ) on Friday March 10, 2006 @09:02PM (#14895990)
    Why is Microsoft researching this kind of thing? And with Linux too? It makes me wonder whether, the next time you install Windows on a partition on a machine where you also dual-boot Linux, your Linux boot will be "taken over" by Windows, so MS can insert any little hooks, DRM, inspection code or other things running underneath the Linux system you have.

    Then they can make Linux perform worse than Windows and nobody will be any the wiser.

    Except when you boot into Linux and get a blue screen, that will give it away lol.
    • by TheWanderingHermit ( 513872 ) on Friday March 10, 2006 @09:12PM (#14896038)
      That was my first thought: why is MS researching this? Pure research like this and MS just do not go together.

      Honestly, this sounds like the kind of thing they'll use as a reason that all computers should have DRM built into the chipset, which plays right into MS being able to justify why all systems should follow their boot rules that allow only Vista to run. It's just laying the groundwork to ensure that nothing but Vista can be booted on future systems.

      This is also the kind of thing that I don't think many black hats would have come up with on their own, given the amount of research involved. MS continually says it is irresponsible for people to publish info on exploits in Windows before they can patch them, yet they've just gone and published what could be one of the nastiest exploits of any OS to date. If they're doing this, it's for a reason, and experience tells us MS's reasons are good for them and bad for everyone else.
      • That was my first thought: why is MS researching this?

        "Genuine Advantage for Vista" seems one possible application. So, what were we saying about the "Signs of the end times"?

      • by arrrrg ( 902404 ) on Friday March 10, 2006 @11:35PM (#14896522)
        Pure research like this and MS just do not go together.

        Ummmm ... I'm as fanatical as the next /.er, but come on. Microsoft has plenty of legitimate theoretical research projects going on; just look at research.microsoft.com [microsoft.com]. And an issue like this one is obviously relevant to them, if they want to get their act together and improve security (or at least the appearance thereof).
      • by afidel ( 530433 ) on Saturday March 11, 2006 @01:41AM (#14896902)
        Duh, it's a propaganda piece for the Trusted Computing Platform. If they want a way to convince people to lock themselves out of their own system through software-hardware integration, what better bogeyman than super-duper undetectable spyware? Obviously the spyware wouldn't be able to install a boot loader if it didn't have an authentication key and the hardware required such a key to boot...
      • by martyros ( 588782 ) on Saturday March 11, 2006 @11:57AM (#14898524)
        You only think that because you only know the "big bad" part of Microsoft. A pure research lab is a luxury generally only affordable by a monopoly, and Microsoft is one of the few of those around. They've been hiring academics for Microsoft Research for years now, and they have "lablets" near universities around the world.

        As to why this research is done, there are two reasons. The "official" justification is that if it's possible, eventually somebody will do it, and it's a lot better if the "good guys" (yes, in this case that includes Microsoft) figure it out first and have a way to deal with it, than if some black hat figures it out and we discover it 5 years down the road when everyone's computers are 0wn3d already and we're all caught with our pants down, so to speak.

        The other reason is that it's just cool. I know the guy who did this work, and he's a brilliant "hack the system together and make it work" kind of guy. He had this crazy idea for an undetectable virus and wanted to try it out just to see if it could be done. So he went to Microsoft for a summer internship, got the prototype working with VirtualPC and a little internal help in 6 weeks or so, and spent the rest of the time analyzing defenses against it. Quite a productive summer for him.

        It actually took some doing for this paper to see the light of day, as the higher-level managers had the same reactions you guys do. They could see the headlines: "Microsoft Research invents undetectable virus", and thought, "Great, that's just what we need..."

      • "MS continaully says it is irresponsible for people to publish info on exploits in Winodws before they can patch them, yet they've just gone and published what could be one of the nastiest exploits of any OS to date. If they're doing this, it's for a reason, and experience tells us MS's reasons are good for them and bad for everyone else."

        please, be fair. first, it's not like MS released a rootkit. they just did a proof of concept internally. second, it's sound engineering to figure out how a system can be
    • by Anonymous Coward
      They are researching it so they can scare people into thinking that Trusted Computing is required for their own protection. If the rootkit loads before the OS, that just leaves the BIOS to do your security checks, right?
    • I'd definitely say that yes, they do want to have that technology mastered in case they ever decide to implement it. But I'd say in the nearer future, it's to try to create detection methods, or methods to stop it getting installed in the first place. With companies like Sony and Microsoft, they're willing to do anything if they can get away with it. So VM-based rootkits are definitely something they want to have mastered, so that when they can get away with it, they are capable of doing it.
    • Why? Because everyone knows virtualization is going to become very commonplace almost everywhere you have a datacenter. They also know that every time you change something you open the possibility of exploits. By knowing how exploits could be introduced into systems using virtualization, they can begin to look at how to combat it. Why look at Linux as well? I seem to remember MS buying some virtualization software that supports Linux guests. They also know about VMWare on Linux hosts running Windows guests,
    • Microsoft is a big company... it has an R&D budget and everything.

      They even have a public research website [microsoft.com]

    • Why is microsoft researching this kind of thing?

      Let's put it this way... Vista won't support booting from EFI based machines [slashdot.org] until they can figure out how to do the same thing there too.

      ;-)
  • by __aaclcg7560 ( 824291 ) on Friday March 10, 2006 @09:03PM (#14895991)
    You're never sure if this is a feature or a bug. Either way, they will probably charge a subscription fee to get the feature or get rid of the bug.
  • by perlionex ( 703104 ) * <josephNO@SPAMganfamily.com> on Friday March 10, 2006 @09:03PM (#14895993)

    Original Paper [umich.edu]

    Abstract

    Attackers and defenders of computer systems both strive to gain complete control over the system. To maximize their control, both attackers and defenders have migrated to low-level, operating system code. In this paper, we assume the perspective of the attacker, who is trying to run malicious software and avoid detection. By assuming this perspective, we hope to help defenders understand and defend against the threat posed by a new class of rootkits.

    We evaluate a new type of malicious software that gains qualitatively more control over a system. This new type of malware, which we call a virtual-machine based rootkit (VMBR), installs a virtual-machine monitor underneath an existing operating system and hoists the original operating system into a virtual machine. Virtual-machine based rootkits are hard to detect and remove because their state cannot be accessed by software running in the target system. Further, VMBRs support general-purpose malicious services by allowing such services to run in a separate operating system that is protected from the target system. We evaluate this new threat by implementing two proof-of-concept VMBRs. We use our proof-of-concept VMBRs to subvert Windows XP and Linux target systems, and we implement four example malicious services using the VMBR platform. Last, we use what we learn from our proof-of-concept VMBRs to explore ways to defend against this new threat. We discuss possible ways to detect and prevent VMBRs, and we implement a defense strategy suitable for protecting systems against this threat.

    • by perlionex ( 703104 ) *

      Traditional malicious software is limited because it has no clear advantage over intrusion detection systems running within a target system's OS. In this paper, we demonstrated how attackers can gain a clear advantage over intrusion detection systems running in a target OS. We explored the design and implementation of VMBRs, which use VMMs to provide attackers with qualitatively more control over compromised systems. We showed how attackers can leverage this advantage to implement malicious services that ar

      • Virtual-machine monitors are available from both the open-source community and commercial vendors.

        grrrr this is what pisses me off about Microsoft. They listed the open source community based software first in order to put a bigger emphasis on it. Like they're saying open source people are going to be the most likely to write these hackjob programs that send spam, dial porn lines and install malware on computers. Why stand for this when the whole article comes down to a FUD statement? This is the kind of thing Mi
      • by TubeSteak ( 669689 ) on Friday March 10, 2006 @09:51PM (#14896198) Journal
        > However, VMBRs have a number of disadvantages compared to traditional forms of malware. When compared to traditional forms of malware, VMBRs tend to have more state, be more difficult to install, require a reboot before they can run,
        How is that a disadvantage?

        If the bastards already have enough access to be downloading and executing code on your machine, it is trivial for them to crash your box and make you reboot... assuming they can't just reboot your box out of hand.

        Notice how one of their solutions is secure hardware?
        I think we know why MS is funding this.
        • Just like the *AA are saying that we must have tighter and tighter Digital Restrictions Management, enforce the DMCA ever more stringently, and chase down those pirates, now Microsoft is jumping on the bandwagon. "Our systems will never be secure enough until we have full TPM hardware support. This will be released as part of Windows Vista SP2 in our effort to improve security." Yep, we've found a killer way to make an awesome virus. Is the cure worse than the disease?
        • by Tyger ( 126248 )
          It's really a moot point for the reasons you point out about getting the user to reboot...

          But I don't see why it shouldn't be possible to demote a host OS running on the hardware into a guest OS running in a VM on a live system. It would probably be more trouble than it's worth considering the ease of the alternatives, but theoretically all the VM has to do is get ring 0 privileges (easy to do if you have root/administrator level access) and then hijack the thread of execution away from the OS. Then it just h
  • rootkits? (Score:3, Interesting)

    by gcnaddict ( 841664 ) on Friday March 10, 2006 @09:03PM (#14895997)
    Can anyone say dual boot?

    And another question: I can understand the risk that this may pose for enterprise servers (Virtual Server systems, just to name one), but does this hold any implications for client VMs?
    • Can anyone say dual boot?

      That's what I was thinking. If I didn't see my familiar grub screen come up, I'd worry.

      But then I guess the idea would be to have even grub come up on the VM.

      Under those conditions, couldn't one just have a program that creates a checksum of the bootblock on install and checks it regularly? Then you can do an md5 on that program from time to time to make sure it's okay.

      So either there is something terribly wrong with that idea, or it's too damned simple for MS -- but maybe they d
      • Re:rootkits? (Score:3, Insightful)

        by Dionysus ( 12737 )
        Under those conditions, couldn't one just have a program that creates a checksum of the bootblock on install and checks it regularly? Then you can do an md5 on that program from time to time to make sure it's okay.

        Where do you put the checksum? On an external hd? On the system? What's preventing the rootkit from replacing the checksum? A checksum of the checksum? If you don't allow the checksum to be replaced, how do you upgrade?
        • For the truly paranoid, you could create the checksum and burn it to CD. Every time you alter your boot block, you'd have to burn a new CD, but it might work. However, the malware in question could in theory intercept the CD burn and attempt to write a new checksum. I suppose then you could attempt to store it online someplace safe as well... dunno, the game of cat and mouse between virus writers and anti-virus writers continues yet again...
        • You could pick your own filename. Yes, there are a lot of people who would only want an automated checker and would end up as victims, but for those who are interested in their own security, there are a lot of simple steps. The checksum could be on a floppy, or a boot block could be stored on a CD, with the checksum-checking program(s) on it.

          The rootkit can't replace the checksum if it doesn't know what filename to look for. It wouldn't be hard to create a program that, when installed, can be given diffe
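          As a rough illustration of the boot-block check discussed in this thread, a minimal C sketch follows. The device and reference paths are placeholders, the reference copy is assumed to live on read-only media, and reading the raw device normally requires root. As the replies above note, a VMBR sitting underneath the OS could still present the original block to a check like this, so it only catches tampering that is visible from inside the running system.

            #include <stdio.h>
            #include <string.h>

            #define BOOT_BLOCK_SIZE 512

            int main(void)
            {
                /* Placeholder paths: the boot device and a reference copy kept on
                 * read-only media (a mounted CD, a write-protected floppy, ...). */
                const char *device = "/dev/sda";
                const char *reference = "/mnt/cdrom/bootblock.ref";

                unsigned char current[BOOT_BLOCK_SIZE], saved[BOOT_BLOCK_SIZE];

                FILE *dev = fopen(device, "rb");     /* needs read access to the raw device */
                FILE *ref = fopen(reference, "rb");
                if (!dev || !ref) {
                    perror("open");
                    return 2;
                }
                if (fread(current, 1, sizeof current, dev) != sizeof current ||
                    fread(saved, 1, sizeof saved, ref) != sizeof saved) {
                    fprintf(stderr, "short read\n");
                    return 2;
                }
                fclose(dev);
                fclose(ref);

                if (memcmp(current, saved, BOOT_BLOCK_SIZE) != 0) {
                    printf("WARNING: boot block differs from the saved reference.\n");
                    return 1;
                }
                printf("Boot block matches the saved reference.\n");
                return 0;
            }

          A real tool would store a hash rather than the raw block, but the comparison flow is the same.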
  • Of Course (Score:3, Insightful)

    by Alien54 ( 180860 ) on Friday March 10, 2006 @09:03PM (#14895998) Journal
    while I can appreciate the logic of the research, I imagine this only gives credence to the theories that companies deliberately design viruses so that they can sell more of their latest security product, or system/OS upgrade
    • I don't think that's the case here. I think they're investigating this under the guise of looking for future virus methods, when in truth they want to master it so they can implement it in a future Windows version/upgrade.
    • while I can appreciate the logic of the research, I imagine this only gives credence to the theories that companies deliberately design viruses so that they can sell more of their latest security product, or system/OS upgrade

      You mean like those trying to sell TPM / Trusted Computing?

      Seems like a solution (TPM/TC) in search of a problem consumers/end-users can identify with ("VIRUSES VIRUSES VIRUSES!"), because "protecting our intellectual property" wasn't really ringing with end-users.

      It's still an i

  • that virtualising i386 was hard and carried quite some overhead.

    i'd imagine the vm would have quite different performance patterns for some operations than the real machine. it would also pretty much by definition have to have slightly less ram.
    • In the past that was mostly true. It's going to become a lot less true with the next generation of x86_64 CPUs, though.
    • by tbigby ( 902188 ) on Friday March 10, 2006 @09:20PM (#14896073)
      that virtualising i386 was hard and carried quite some overhead.

      Not in the slightest. When you emulate a different architecture, sure, that incurs significant overhead because the machine instructions have to be translated. But virtualisation runs the existing machine instructions more or less directly on the hardware, which means it can run at near-native speed.

      Some of the latest hardware from Intel (the Vanderpool technology: http://en.wikipedia.org/wiki/Vanderpool [wikipedia.org]) can even do virtualisation directly at the hardware level.

      We are looking very seriously at replacing several servers with this virtualisation technology using VMWare ESX and VMotion. It should prove to save on hardware costs and running costs in terms of power and air conditioning, not to mention the flexibility you have! I'm sure other folks who have used this technology will be able to tell more about that.

      Also, you can check out the Wikipedia comparison of virtual machine technology (http://en.wikipedia.org/wiki/Comparison_of_virtual_machines [wikipedia.org]) - it is amazing how many of those technologies run guest OSs at native or near-native speeds!
      • Not in the slightest.

        Uhh, actually, the original poster was right. The x86 is actually quite difficult to virtualize effectively. This is because the x86 CPU has certain classes of instructions that make it exceedingly difficult to virtualize, as the x86 doesn't allow you to trap and emulate them correctly. In fact, I would contend that simulating an x86 CPU is probably as easy, or perhaps easier, technically speaking, than actually virtualizing the x86. After all, while emulators like bochs
    • spin it around. if you boot into a virtualising os, how do you know you are virtualized? sounds like a matrix plot, but it's plausible. i think we have some time before we have to worry about similar exploits, but let's not forget that a simple fix is to network boot a virus scanner, scan for a virtualizing rootkit and remove it. this is more of a problem for the many ma and pa computers out there directly connected to the internet.
      • how will a networked virus scanner help? it's still getting the system info from the OS on the compromised system, and the OS on the compromised system does not know it's compromised because the VM is UNDERNEATH it, and therefore tries to act for all intents and purposes as if it's not there!

        With a perfect bug-free VM, neglecting slight performance differences that may or may not be detectable, you pretty much have to scan the compromised hard drive by plugging it into another PC (as a non-boot drive, of course)

        • hey I just had an idea, what if you deliberately virtualised your machine in a hidden manner, so a VM rootkit trying to virtualise your OS would actually be virtualising between the good VM and the OS, and the good VM could detect and report on the bad VM :) (long way to go about it).
  • ROM Boot Keys (Score:5, Interesting)

    by PktLoss ( 647983 ) on Friday March 10, 2006 @09:05PM (#14896004) Homepage Journal
    It may not be feasible for home environments, but it could work for workplaces. What about booting off either dedicated ROM boot keys, or USB memory keys with some sort of physical read-only/read-write switch? Put the key into your machine to boot (for bonus points, the key tells the machine who you are and begins to load your roaming profile); when it comes time for a new image, the IT guys either give you a brand new ROM key or update your USB key by toggling the switch.

    My worry with keeping things inside the machine (the article indicates that AMD and Intel have ideas) is that it's just going to be a perpetual arms race. Since we can't rely on the user to know when it is and is not appropriate to allow your OS to modify your boot sector, eventually virus/malware authors will just trick people into accepting the updates.
    • That sounds waaaay too close to TPM for my liking. I tend to go along with the thought process that the hardware is along for the ride and shouldn't give a fuck about what's running on it.
  • translation (Score:5, Insightful)

    by Anonymous Coward on Friday March 10, 2006 @09:09PM (#14896026)
    You can only be secure if you run hardware with treacherous computing modules installed on the motherboard and in the "approved" CPUs and BIOS chips, and that only works with treacherous computing software; sort of an expensive hand in a designer glove.

    Kind of a sneaky advertisement, isn't it? Instill terror to sell vendor lockin hardware and operating systems. Maybe even get a law or three passed. They sort of gloss over the "get the rootkit there in the first place" part, don't they?
    • Instill terror to sell vendor lockin hardware and operating systems.

            Isn't that what anti-virus companies and (dare I say it?) politicians have been doing for many years...
  • by saikatguha266 ( 688325 ) on Friday March 10, 2006 @09:10PM (#14896030) Homepage
    Here is a link to the actual paper the article references:
        http://www.eecs.umich.edu/virtual/papers/king06.pdf [umich.edu]

    The authors make an interesting point -- the fight between users and rootkits is about control. Whichever one controls the outer layer wins. If the user is in a protected environment, a rootkit running as root can win. If the user is root, then the rootkit must be a kernel-level rootkit and run in the kernel. If the user can control the kernel, the rootkit must control the machine -- in this case, by putting the user's kernel in a VM.

    My take is: in this game of cat and mouse, you'll stop only at the hardware -- it is hard for a rootkit to control the hardware, short of the rootkit's script kiddie getting physical control. So yes, the user can win this game, if he controls the hardware that controls the software. How does the hardware control software? You guessed it: trusted computing a la TCPA, a la Palladium, etc.

    Can you think of a way to win against rootkits without TCPA?
    • Some sort of open BIOS, like LinuxBIOS, would be a good way to go too; but this is Microsoft Research we're talking about here, and Microsoft wants control over your hardware just as badly as, if not worse than, any skript kiddie.
    • by Jon Luckey ( 7563 ) on Friday March 10, 2006 @09:25PM (#14896093)
      Can you think of a way to win against rootkits without TCPA?

      A rootkit can really only win if it's undetectable. If you are playing a game of who has control of ring-zero resources, the victim, if running in a VM, should be able to do various things that would cause an exception when it tried to do ring-0-only hardware accesses. If the exceptions are not what is expected, then the victim would be able to detect that it's not in true control.


      It might be possible to make a VM that tried to emulate ring-0 hardware access in user mode. It's been a while since I looked at that area of CPUs. But if so, I'd expect it to be much more complex than a normal VM.


      But suppose it is possible to test for true ring-0 hardware access. Then the rootkit has to fall back to classical rootkit techniques: it has to subvert the detection software. That task can be made difficult by classic defenses, like tripwire, or running software from read-only sources, etc.
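      A user-space approximation of the "expected exception" probe described above, assuming x86 Linux with GCC: execute a ring-0-only instruction and confirm the fault actually arrives. A VMM that faithfully reflects the fault will still pass, so this is only one probe among many, but it illustrates the idea of checking that privilege violations behave the way real hardware would.

        #include <stdio.h>
        #include <signal.h>
        #include <setjmp.h>

        static sigjmp_buf env;

        static void on_fault(int sig)
        {
            (void)sig;
            siglongjmp(env, 1);   /* jump back out of the faulting instruction */
        }

        int main(void)
        {
            struct sigaction sa = {0};
            sa.sa_handler = on_fault;
            sigemptyset(&sa.sa_mask);
            sigaction(SIGSEGV, &sa, NULL);

            if (sigsetjmp(env, 1) == 0) {
                /* CLI is ring-0 only; from user mode the CPU must raise #GP,
                 * which Linux delivers to the process as SIGSEGV. */
                __asm__ __volatile__("cli");
                printf("No fault on CLI -- something is emulating ring 0 for us.\n");
                return 1;
            }
            printf("Got the expected fault -- privilege checks look normal.\n");
            return 0;
        }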

      • You are thinking x86; there are other fully virtualizable architectures.

        But fundamentally, can software attest to software state? Can software prove that it is correct (i.e. not under the influence of other software)? I suspect there may be a way to show that software cannot prove its own correctness, along the lines of Gödel's incompleteness theorem. That leaves only hardware that can attest to software state.

        What sort of primitives would the hardware then need to provide to help the software convince
        • Just remember:
          TCPA that gives the keys to YOU - GOOD
          TCPA that gives the keys to people SPYING ON you - BAD

          Anyway, running from read-only memory is a way to avoid rootkits with hardware while not using TCPA.

      • by Tyger ( 126248 ) on Friday March 10, 2006 @10:51PM (#14896376)
        Speaking just of the x86 architecture...

        The thing with emulating a "ring 0" environment is that there is a lot to emulate. Most everything that would not work in a true ring 0 environment would cause the CPU to raise an interrupt for the host OS to handle. Typically the OS handles it by smacking the application around for being bad and doing something it isn't supposed to do. But it is possible to instead do what the application is trying to do, and make it look like nothing was amiss.

        The trouble is there are a lot of different things to deal with. If you know your target OS, it's easier, since you don't need to emulate every little thing the CPU does, just what the OS will be using. But even then there will always be telltale fingerprints that something is amiss. Theoretically you could get around some of them by scanning ahead in the instructions to be executed, but at some point you seriously impact system performance, and that in itself will make people notice.

        Off the top of my head, the simplest way to detect it takes advantage of the fact that emulating ring 0 operations involves a context switch and some execution. Context switches tend to be rather expensive operations compared to most everything else the CPU does. The CPU has something called a timestamp counter, which basically counts every clock cycle, always incrementing, no matter what process/thread is running. An instruction should take a deterministic number of clock cycles. So just check the timestamp counter, perform a privileged instruction, then check the timestamp counter again. If it looks like it took too long, that means you are running under a virtual machine.

        Of course detection doesn't help with removal, but it's a start.
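        A user-space approximation of the timing check just described, assuming x86 with GCC or Clang. CPUID stands in as the probe because many monitors (hardware-assisted ones in particular) intercept it; the threshold is an arbitrary illustration and a real check would calibrate per CPU model and average many samples. A careful VMM can also virtualize the timestamp counter itself, so comparing against an external clock makes the check harder to fool.

          #include <stdio.h>
          #include <stdint.h>
          #include <cpuid.h>      /* __get_cpuid (GCC/Clang) */
          #include <x86intrin.h>  /* __rdtsc */

          #define SAMPLES 1000
          /* Arbitrary illustrative threshold, in cycles. A real check would
           * calibrate this per CPU instead of hard-coding it. */
          #define SUSPICIOUS_CYCLES 500

          int main(void)
          {
              unsigned int a, b, c, d;
              uint64_t total = 0;

              for (int i = 0; i < SAMPLES; i++) {
                  uint64_t start = __rdtsc();
                  __get_cpuid(0, &a, &b, &c, &d);  /* often intercepted by a VMM */
                  total += __rdtsc() - start;
              }

              uint64_t avg = total / SAMPLES;
              printf("average CPUID cost: %llu cycles\n", (unsigned long long)avg);
              if (avg > SUSPICIOUS_CYCLES)
                  printf("CPUID looks expensive -- a VMM may be intercepting it.\n");
              else
                  printf("CPUID cost looks native.\n");
              return 0;
          }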
    • by radtea ( 464814 ) on Friday March 10, 2006 @10:04PM (#14896238)
      Can you think of a way to win against rootkits without TCPA?

      Almost trivially.

      The whole point of TCPA is that "trust" is built into the machine in a way that is fundamentally inaccessible to the user.

      What is needed to defeat rootkits is to allow the user to trust the hardware. This is totally different from application vendors trusting the hardware.

      Here's an extreme example: hook a logic analyzer up to the BIOS. Look at the nice bits go by. See if they match expectations. If not, you've been rooted and had your BIOS flashed. "Expectations" are stored in a separate device.

      The issue here is strictly one of treating a computer as a fully self-contained block of hardware and software that no one is allowed or able to look inside without going through the terribly civilized interfaces. The solution is to say, "Fuck the fucking interfaces, I'm going to fucking look at what is on the fucking bus." Not civilized at all.

      I've debugged embedded code this way, by hooking a logic analyzer up to the hardware and watching the bits go by. It's educational. It would be simple to build this kind of exposure of hardware internals into the motherboard, to make it easy to plug in an external integrity checker to ensure that the basic state of the machine is as expected.

      "Trusted" computing is all about hiding the hardware state from the user. Beating VM-based rootkits is all about exposing hardware state to the user. The two are diametrically opposed.
      • > hook a logic analyzer up to the BIOS. Look at the nice bits go by. See if they match expectations.

        Precisely. You need your hardware to verify that the software is in known-good state.

        To make this approach feasible, incorporate your logic analyzer into the CPU itself. Make the verification a hash function. And voila, you have just reinvented TCPA. Congrats.

        The problem with TCPA is not that it is "inaccessible (to the user)". The problem is that it does exactly what it is intended to do, and what you int
      • The whole point of TCPA is that "trust" is built into the machine in a way that is fundamentally inaccessible to the user.

        You don't know anything about TCPA. The whole point is to do a "trusted boot" so that the state of the machine can be known and reported in an unforgeable way. This allows both users and remote parties to know that the machine is running a certain configuration, with no rootkits or malware installed. This process protects the user against attacks contrary to your statement above.

        It also allow
        • by Soko ( 17987 ) on Friday March 10, 2006 @11:37PM (#14896533) Homepage
          That's fine if you don't like this, but don't lie about the technology and say that it doesn't help the user to trust the machine. It helps everyone trust the machine. That's why it's called Trusted Computing.

          Mmmmmm... KoolAid.

          Dude, I trust a machine to do exactly as it's told. I do not trust humans to do the same. "Trusted Computing" is a euphemism for "Hey, you can trust $VENDOR, since your machine does, thanks to $TECHNOLOGY." Fuck that.

          If you r00t a computer, you're after one thing - getting information _out_ of said machine. (THINK - credit card #s or spam - it all has to leave the machine somehow.) You need to do this via a network connection, USB key or some other means. There are ways of noticing that information has left a machine, either through physical security or other means (it'll be a cold day in Hades before a vendor brings a cell phone into my data center - those things have memory, after all), since once outside the box it's no longer under the control of the r00tk1t. IOW, if someone r00ts one of my machines, it'll be either noticed or totally useless to them.

          I, and I alone, establish trust of my systems. Any vendor who says they can do that for me is sadly mistaken, unless they are willing to allow me to completely vet their trust protocol and methods. Even then, I had better be able to fully audit that system at a whim, on my terms.

          "Trusted Computing" is for those who don't want to learn or do thier job professionally, are just plain lazy or, they're willing to drink the KoolAid. As for users, they tend to trust people, like me, who fix thier broken systems, and take my advice to heart when I charge them $TEXAS for fixing thier broken assed PCs. /me sips his Rye and cola....

          Soko
        • The whole point is to do a "trusted boot" so that the state of the machine can be known and reported in an unforgeable way. This allows both users and remote parties to know that the machine is running a certain configuration, with no rootkits or malware installed. This process protects the user against attacks contrary to your statement above.

          A BIOS/Boot-sector "write enable" flip switch on the case of the computer does the same. Yet that is not an acceptable solution for those who want TCPA. That is beca

  • by nurb432 ( 527695 ) on Friday March 10, 2006 @09:12PM (#14896037) Homepage Journal
    On a normal machine, if you try to virtualize it you would notice right away that something was wrong, as it would slow down quite a bit.

    There might also be driver issues that could tip you off that something isn't right. You may not know what, but it should be apparent something is amiss. It would have to emulate all the hardware that you had installed at the time of infection, unlike something like VMWare, which presents a 'standard' (but different) set of hardware devices. That's a pretty tall order to pull off.
    • Perhaps (Score:3, Interesting)

      by phorm ( 591458 )
      But then again, maybe not. I'll play my own "devil's advocate" for a bit here and contradict my previous comments. A full-blown VM is probably detectable. But what about something like a changeroot (essentially, for non-'nix users, a subdirectory which for all intents and purposes appears as the drive root)?

      We have boxes at work which run chrooted... and the SSH server also runs in the changeroot. When you SSH in, you can't tell whether or not you're in the chroot except that we tend to have it labelled. If
  • i've been working on a compromised system to poke for holes in the concept and i hit upon a novel idea. in fact, it's really simple

    all you have to do is-END CARRIER-
  • Virtually. (Score:2, Insightful)

    by Roskolnikov ( 68772 )
    My experience with Windows and VM scenarios is that it runs better in a VM than in real life; mom and pop might not notice this, but I should hope those that are savvy enough to understand what Microsoft is proposing as a 'threat' would also be savvy enough to notice the little things that make VMs still a pain.
    examples:

    I bought 4 GB of ram and a 400 GB drive, now I have 1 GB and 150 GB drive (with 250 GB overhead for mail and porn).
    My Ultra-Monkey quad SLI Nvidia 9999 video card with 1 GB of ram now shows up a
    • Couldn't malware just work at the driver level, similar to the Sony "rootkit" that got in between the apps and the CD-ROM?

      This would be almost invisible to the user, but really, really bad things could be done (like bypassing firewalls, avoiding packet-capture programs, hiding files, etc.).
  • by LLuthor ( 909583 )
    For someone like me, who games on his PC a lot as well as working, it would be immediately obvious that there is something wrong.

    Gaming performance would take a serious hit, as would anything that would normally require privileged hardware access.

    No virtual machine can work as fast as the host system or with as much RAM.
    • It can be very, very close though. We are not talking about emulation (running PPC code on x86 or similar) here, it is virtualization. The instructions are still native, just being passed through another transparent layer.
      • by LLuthor ( 909583 ) <lexington.luthor@gmail.com> on Friday March 10, 2006 @10:03PM (#14896234)
        Some functions can't be passed through; they need to be emulated even on the same architecture: redirecting memory, storage and I/O requests, interrupt handlers and such. All these things suck performance, and in the case of games, where LOTS of memory and low-level calls to the graphics hardware are being made, performance suffers BADLY.

        Any gamer will notice a loss of 15 FPS or more in their favourite game. Developers will notice it too, when their profiler's output does not match their code's timing.

        You can't play with time, even if you are in a VM. People will notice this - even if the software won't.
        • Thanks for pointing that out. Still, most people aren't gamers. Most use only a tiny fraction of the available power they have. The difference between using 5% and 15% of the CPU will be undetectable to them. The casual computer user is also more apt to be affected by rootkits & viruses as well, making the problem worse.

          I should probably do more research on systems stuff in the future, it really isn't my focus.
        • The VM can reinterpret the software and run it, with no emulation. It doesn't need to drastically change the computer's speed. There are already VMs that do this; they are not that useful, but they can hide a rootkit.

  • by account_deleted ( 4530225 ) on Friday March 10, 2006 @09:24PM (#14896085)
    Comment removed based on user account deletion
  • Holy Crap! (Score:3, Insightful)

    by PhunkySchtuff ( 208108 ) <(ua.moc.acitamotua) (ta) (iak)> on Friday March 10, 2006 @09:31PM (#14896115) Homepage
    Why on earth is someone writing this software for the purposes of malware - why aren't they gainfully employed earning decent money?
    Seriously, whipping up your own VM that will run $HOST_OS is nowhere near in the same league as, say, hacking together a VBS macro in MS Word or similar...
  • The solution (Score:3, Informative)

    by aachrisg ( 899192 ) on Friday March 10, 2006 @09:38PM (#14896139)
    is to run under a virtualization manager from the beginning. Then there will be no way for these VM-based rootkits to actually run on the real hardware. They'll think they are doing so, but the outermost VM will be able to detect them easily.
  • by Jon Luckey ( 7563 ) on Friday March 10, 2006 @09:41PM (#14896148)
    TFA seems to propose a model where the host OS is running a rootkit that runs a VM that runs a copy of the host OS that the user works within, which hides the rootkit.

    But in that model, the host OS is still running.

    It might be possible to detect a rootkit by putting a honeypot of some sort in the true kernel. Then when the rootkit tried to do something, like say change the firewall, the true kernel could detect that and quarantine itself.

    Of course a rootkit running with ring-zero permissions would try to lobotomize that code, so the honeypot itself can't be too easy to find and alter. You'd probably need other kernel-level tripwire-type code to look for lobotomization.

    Maybe a card with boot-time code that the OS could call to verify itself. Not pure trusted computing, as any user could add such a card (assuming a free slot).
  • Just one problem: (Score:5, Insightful)

    by guruevi ( 827432 ) on Friday March 10, 2006 @09:47PM (#14896181)
    How do you install the rootkit? Yes, you guessed it: through an insecure operating system. This article is IMHO just another promotional FUD campaign for TCPA.

    If your current operating system and security measures are good enough, such rootkits-with-virtual-machines are not even going to be installable; heck, as long as you don't have to log in as administrator to print out a document or surf the web, you're pretty safe.

    And as soon as you notice your box could be r00t3d, you take it offline anyway and don't trust it. And if you don't notice that one of your boxes is generating extra traffic or doing things it shouldn't, you shouldn't have admin privileges anyway.
    • A thought just crossed my mind.
      Since admins running Unix-like systems mostly operate as non-root users, wouldn't it then be possible for malware to lurk in the background of the non-privileged sandbox until you sudo/su, and then use the newly gained privileges to wreak havoc/gather intel and hide itself? In a non-root sandbox the malware process would likely show up in the process list, but who can honestly say that they check the process list each time before they become root? Also, a malware na
  • Microsoft starts to SUPPORT Linux? And starts off with a rootkit prototype?

    Man, that is how a friend should be.
  • VM Machine Rootkits (Score:4, Interesting)

    by Orion Blastar ( 457579 ) <.orionblastar. .at. .gmail.com.> on Friday March 10, 2006 @10:24PM (#14896300) Homepage Journal
    So basically what it is, is a rootkit designed to run in a virtual machine (like VMWare, VirtualPC, Bochs, QEMU, etc) that takes root control of the virtual machine, but the host OS is unable to detect the malware because it runs under a virtual machine and not on the host OS itself.

    Microsoft had tested code under VMWARE for Linux, and VirtualPC for Windows that allowed them to gain root access to the host OS from the virtual machine, and run the rootkit malware under the virtual machine.

    Yet what they are not telling you is that the virtual machine has to run on the host OS, and that can be detected, even if the malware cannot. If you are really paranoid, just don't run a VMware or Virtual PC virtual machine or any other virtual machine, and if you find one on your OS, remove it. The problem with that is that malware scanners will be looking for virtual machine files, suspect them of being malware, and warn the user. Besides, any virtual machine has to be installed on Linux with root access anyway, and when I installed VMware Server on my Linux box it had to compile a part of itself to match my kernel and asked me to download a few libraries before it would continue. I doubt someone could install VMware as a regular user on Linux without someone with root access allowing it. Still, Xen is a virtual machine and is becoming popular with Linux; I wonder if it is vulnerable as well?

    The whole VM rootkit fails unless the malware author finds a way to install a VM on a host OS without being detected, and without root or Administrator access. The only way I can see that happening on Linux and Unix systems is if they use a trojan-horse method of making it part of a program the user or administrator wants to install, and that person uses root or administrator access to install it. On Windows it would just use an exploit to get Administrator access.
    • by Tyger ( 126248 )
      Close but not quite...

      The rootkit IS the virtual machine, AND the host OS. It is what loads when the computer boots up. Then it sets up its own virtual machine (like VMware, et al., but its own implementation) and boots the computer into that virtual machine. The OS can't detect this rootkit through normal means, because the methods it would use to detect it could be emulated by the virtual machine to look correct. There is no "host OS" to detect the rootkit or not, because the rootkit IS the host OS.

      O
  • Secure installation (Score:3, Interesting)

    by Anne Honime ( 828246 ) on Friday March 10, 2006 @10:25PM (#14896305)
    I've done it with Linux, and I suppose it's possible to achieve with Windows: have a two-disk install, and make sure that there is a read-only strap on one. Just put whatever binaries you have (/boot, /, /usr...) on that disk, then move the strap to RO; on the other disk, put /var and /home. If you're paranoid about it, have syslogd hard-print everything on an old line printer. Done. It doesn't prevent a break-in, but the attacker is stuck and can't damage the files, so when you reboot (because you notice the security log printing strange things) the evidence is easy to find.
  • by mombodog ( 920359 ) on Friday March 10, 2006 @10:29PM (#14896313)
    Here is how you detect any VMM on Linux or Windows; there is no such thing as undetectable if you know how to find it. http://www.trapkit.de/research/vmm/scoopydoo/scoopy_doo.htm [trapkit.de]
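    The linked checks exploit the fact that a few x86 instructions (SIDT, SGDT, SLDT) are sensitive but unprivileged, so ordinary user code can read where the descriptor tables live, and a relocated IDT/GDT hints at a monitor underneath. A minimal sketch in that style follows; the 0xd0000000 cutoff is the classic 32-bit-era heuristic and is not reliable on SMP or modern 64-bit systems.

      #include <stdio.h>
      #include <stdint.h>

      /* SIDT/SGDT are sensitive but not privileged on x86:
       * user mode can read the descriptor-table registers directly. */
      struct __attribute__((packed)) dtr {
          uint16_t  limit;
          uintptr_t base;
      };

      int main(void)
      {
          struct dtr idtr, gdtr;

          __asm__ __volatile__("sidt %0" : "=m"(idtr));
          __asm__ __volatile__("sgdt %0" : "=m"(gdtr));

          printf("IDT base: %p\n", (void *)idtr.base);
          printf("GDT base: %p\n", (void *)gdtr.base);

          /* Classic 32-bit heuristic: VMware-era monitors relocated the IDT
           * to a high address. Purely illustrative -- not reliable today. */
          if (idtr.base > (uintptr_t)0xd0000000u)
              printf("IDT base is unusually high -- possibly running under a VMM.\n");
          else
              printf("IDT base looks typical for a native OS.\n");
          return 0;
      }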
  • by Seraphnote ( 655201 ) on Friday March 10, 2006 @10:40PM (#14896341)
    The obvious solution is... Windows VISTA!
    Heck the OS is so large any VMBR trying to "hoist" it is going to probably:
    A.) Run out of space (memory or HDD).
    B.) Take so long to hoist the OS, the user will probably reboot thinking their machine's locked up again.
    C.) Cause CowboyNeal to acquire a hernia.

    They (MS) are probably just looking for more selling points for their new BIG baby.
  • Ultimate? (Score:3, Interesting)

    by Kaenneth ( 82978 ) on Saturday March 11, 2006 @01:06AM (#14896805) Journal
    I recall there was a proof-of-concept modification of GCC that would add itself to any GCC compiled with it - a compiler virus...

    How about a program that specifically attacks chip design software and adds malware to any chips that are laid out for production? With the millions of transistors on a modern chip, who would notice a few more? And who would know that multiplying 563473563 by 756481984 turns off all memory access interrupts, allowing the following instructions to read/write anything they want?
  • by Trelane ( 16124 ) on Saturday March 11, 2006 @11:26AM (#14898413) Journal
    The one time Microsoft ports some of their software to Linux, and it's a rootkit. ;)
