VMware has shipped with hypervisor-level memory deduplication off by default since 2014, precisely because of this style of attack.
Being secure against this sort of attack has been the norm for years now.
So getting a program to work right with SELinux takes an RHCE? And elevated access so you can drop the context rule in the right secure place?
As one of the other posters noted here, the problem isn't configuring SELinux right on one system. The problem is that configuring it right works differently on every user's system - so you either have to write the configuration 3+ times (RPM, DEB, and pick some other common format, then listen to Linux users gripe about how you didn't support THEIR package format), or you have to write some sort of complicated setuid-root shell script that does the right thing. And to install this silly game (which doesn't require root to run), you have to be root! Remember how much trouble Windows got into because you had to be Administrator to install anything? But when it's SELinux imposing the same requirement, we're supposed to call it a good thing?
SELinux is a wonderful system - IF you can enumerate all the permissions needed by all the software that will ever be installed on the system. Which is true only for toy OSes, base OS installs, or people who have solved the halting problem. And that's why the install instructions for any non-trivial software immediately suggest turning off SELinux - the defaults are too restrictive for real-world software (JIT is only allowed for Java, browsers, and other things the SELinux rule authors have seen before), and you need to know the system really well to alter the configuration while still maintaining security. The point is, installing new pieces of "normal" software is a major piece of OS functionality, which means the OS needs to handle it itself; configuring security is not something that should be foisted upon the software being installed. Really fancy software - e.g. database servers and such - may need to carry a security configuration with it. But come on - a game needs security configuration?!
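And to be concrete about what "configuring it right" even means: here's a hedged sketch of what it might look like on a Fedora/RHEL-style policy, for a hypothetical game installed under /opt/sillygame. Every path, type, and boolean name here is an illustrative assumption - they differ across distros and policy versions, which is exactly the portability problem.

```shell
# Illustrative sketch only - paths, types, and boolean names are
# assumptions and vary by distro/policy version. Everything below
# needs root, which is exactly the complaint above.

# Tell SELinux how to label the game's files, then apply the labels:
semanage fcontext -a -t bin_t '/opt/sillygame/bin(/.*)?'
restorecon -Rv /opt/sillygame

# If the game JITs, it also needs execmem - either via a boolean...
setsebool -P selinuxuser_execmem 1

# ...or a custom policy module generated from the denials it triggered:
grep sillygame /var/log/audit/audit.log | audit2allow -M sillygame
semodule -i sillygame.pp
```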
(And before the Linux people skewer me for saying Windows is better - Linux is perfectly fine. It's SELinux that is the problem.)
This, and the meta point: the fact that the poster of this "Ask Slashdot" left the conversation WITHOUT an answer to those questions is itself indicative of poor communication skills. A good communicator will convey that sort of information regardless of how poorly his report listens; a merely average manager will convey merely average general principles, and it's up to the report to pull out more information. (And a poor manager will give non-committal evals, then fire somebody without warning.)
I'm reading the OP as "my manager told me I had poor communication skills. I didn't understand what he meant, so I nodded my head, said I'd work on it, and walked out." Thus proving the point. (Though OP gets some points for at least asking somebody, a.k.a. Slashdot. It's the wrong somebody - his manager or his peers would be better choices - but Slashdot is better than nothing.) If the manager can't explain to your satisfaction, go to the next level up the chain and say "I got this feedback from my manager, we talked about it, and I didn't understand - can you help me understand?" (But no further, and don't blame your manager.)
I'm reading between the lines some here, but what I'm seeing is more conflict avoidance than anything else.
But let me put a positive spin on things: poor communication is expected of an average, very junior person. This managerial feedback should be viewed as "improve this to get promoted", not "improve this or get fired". (Well, except at a start-up, where being merely average is cause for firing.)
VMs do have good sources of entropy
The real security concern with VMs is duplication
Teams I see where the lead developer promotes ideas to the team and the manager supports those ideas end up being conspicuously stronger.
Same message but with a more positive spin. (And yes, I'm in that 200K+ category).
It's hiding in plain sight, in the part of the Fifth Amendment most armchair lawyers don't bother reading:
No person shall be held to answer for a capital, or otherwise infamous crime, unless on a presentment or indictment of a Grand Jury, except in cases arising in the land or naval forces, or in the Militia, when in actual service in time of War or public danger; nor shall any person be subject for the same offense to be twice put in jeopardy of life or limb; nor shall be compelled in any criminal case to be a witness against himself, nor be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation.
Due process of law HAS been observed. The current state of law is that if the government can prove you knew something in the past, they can compel you to disclose what you knew. In this case, if the government can prove you used to know the password (which, in this case, they could not originally but could after the FBI decrypted one drive), the government can compel you to reveal the password.
The Fifth Amendment does not protect the password (it's just a sequence of characters); the amendment protects the "testimonial" aspect that you knew that particular sequence of characters was significant. Once that fact is entered into evidence through some other means, the Fifth Amendment's "due process" requirements have been satisfied.
Does the 5th amendment right to avoid self-incrimination apply only to the particular charges being brought in a given case, or does it cover any statement that could be incriminating, even if it were in a different proceeding, or if the record from Case A were to be used as evidence in Case B?
No, it applies to all cases
If somebody were being charged for one crime that probably left evidence on the HDD(kiddie porn, say); would the fact that they know that there is evidence of CC-skimming(but, unlike the kiddie porn, the feds have no circumstantial evidence or other grounds for belief) justify a 5th-amendment refusal to decrypt the volume? Would the other potentially-incriminating stuff be irrelevant because it isn't among the charges(even if the court record could be used as evidence to bring future charges)? Would the suspect be compelled to divulge the key; but the prosecution only have access to material relevant to the charges being filed, with some 3rd party forensics person 'firewalling' to exclude all irrelevant material?
I didn't see the warrant specifically mentioned, but "normally" the search warrant has to specify exactly what is being searched, and is thus ONLY valid for what is being searched. For example, the search warrant would say "the file named kiddie_porn.jpeg", and thus only that file (and not ccfraud.txt) becomes evidence. That said, warrants can also be broad - the hard drives themselves were presumably seized because the search warrant said "any computers and electronic storage devices located at 123 Perpetrator Street". Fishing expedition warrants saying "all files showing evidence of kiddie porn" tend to get thrown out, but a warrant saying "all files under C:\kiddie_porn" backed up by evidence (a P2P log) showing that files in fact were placed within C:\kiddie_porn is probably valid - and a warrant backed up by a P2P log is almost certainly what the search warrant this judge is ruling about says.
Not being a lawyer, I can't tell you what happens if the person examining the encrypted contents happens to see evidence of some other crime. But the physical analogy is this: if the police show up with a warrant to search your house for "computers", they are obviously entitled to seize all computers. And if they walk through your house and see illegal drugs sitting on the table, that's admissible evidence ("in plain sight") (Interestingly, it cannot be seized because the warrant does not specify "drugs". But what happens is the cop calls the judge and says "I'm executing warrant A for computers and see drugs on the table, can I get warrant B to seize the drugs?" and the judge faxes over a warrant right away). But they are not allowed to rifle through all your drawers and closets - drugs found there are not admissible evidence because they are not "in plain sight". (Unless you give the police permission - and they WILL ask. Which is why lawyers always advise saying "I do not consent" - you cannot stop the search / seizure, but not consenting makes any evidence found without a warrant inadmissible and the police potentially liable for misconduct). It's difficult to guess how courts would apply this standard to searching a HDD, but they would do it by starting with the physical analogy and figuring out how it applies to electronics.
What's happening in this case is that the prosecution knows files with kiddie porn names were downloaded. But they still cannot prove the files contain actual kiddie porn. (Maybe this guy is sick and thinks naming his legal porn files with kiddie porn names is funny). So the prosecutor was hoping to compel this guy to hand over the encrypted files (whose names they knew), under a warrant that compels him to be truthful about their contents (by having a neutral 3rd party do the work). The judge decided that the prosecutor does not have enough evidence to prove this guy actually knew what was in the files (maybe he operates a repository with files stored on an encrypted disk, but does not himself have access to the files). The judge also implied that if the prosecutors DID have evidence of what was in the files (maybe 1 or 2 got left on unencrypted drives by the P2P program as intermediate files and the filenames matched?), he probably would authorize the warrant and require this guy to decrypt his drives.
Attempting to pass through security with a prohibited item is a crime - think about it, the same laws apply to passing through security with toothpaste ("might be an explosive in disguise") as apply to passing through security with a stick of dynamite, and they're definitely going to arrest you (and not "voluntarily confiscate") for carrying dynamite. Airport security isn't a game where getting caught just means going to the back of the line. You get one chance, once the agent starts looking you over you either pass or get arrested. You do have a choice: "voluntarily surrender" the stupid stuff, or refuse and be arrested for taking a prohibited item through security. That toothpaste is still "yours" despite your arrest - it's just locked up in an evidence locker until you convince a judge/jury that calling toothpaste a prohibited item is stupid ("see? no explosion, your honor"). "Voluntarily surrendering" is indeed a lawyer's trick - but it's a lawyer's trick the TSA is employing for YOUR benefit, because their only other option is to arrest you.
(Which does suggest a really entertaining protest. Get a hundred people to show up at the airport with a tube of toothpaste, go through security, and all refuse to surrender that tube of toothpaste. Watch the TSA have to deal with a hundred arrestees - I doubt they have the holding space, and press headlines "TSA ARRESTS 100 OVER TOOTHPASTE" would be hilarious. And much more likely to affect the law than whining on Slashdot.)
There's a nice spin in there. At any given time, all important apps will be present in all markets (or at least the top three markets). What really happens here is that markets are actually forced to compete with each other a) for developers b) for users (markets that demanded exclusivity would simply die, even if anyone were stupid enough to pull something like that). This is good news for everyone, and the antithesis of everything Apple stands for. No matter how much SJ tries to spin it, fragmentation is not a problem.
It's not spin - this is because Steve Jobs knows what he is talking about through experience. If the market were absolutely perfect, then indeed all important apps would be present in perfect competition with each other on each platform, and the best apps would come out winners. The problem is that this "perfect markets" idea is universally known to be an inaccurate approximation (the current favorite for adding accuracy is Search Theory). And search theory justifies exactly what Steve Jobs says, that it is easier to find things in an integrated market than in a wildly fragmented market.
Apple is not - and has never been - about providing the best technology. Apple is about providing the best customer experience (to separate consumers from their money as painlessly as possible); sometimes that requires brilliant technological innovation, sometimes that requires competition, and sometimes that requires enough control to make everybody do things one way even when the alternative may be technologically superior.
Disclaimer: VMware engineer; but I do like your blog post. It's one of the more accessible descriptions of cpu and memory overcommit that I have seen.
The SnowFlock approach and VMware's approach end up making slightly different assumptions that make each one's techniques inapplicable to the other. In a cluster it is advantageous to have one root VM because startup costs outweigh customization overhead; in a datacenter, each VM is different enough that the customization overhead outweighs the cost of starting a whole new VM. Particularly with Windows: a Windows VM essentially needs to be rebooted to be customized (at which point the memory server stops being useful), whereas Linux can more easily customize on-the-fly. Different niches of the market.
The second big difference is architectural. VMware handles more in the virtual machine monitor; KVM and Xen use simpler virtual machine monitors that offload the complex tasks to a parent partition. This means that for VMware, each additional VM instance takes ~100MB of hypervisor overhead - small relative to non-idle VMs, but large relative to idle VMs. It's purely an engineering tradeoff: a design like VMware's vmm will always be (a little bit) quicker per-VM; a design like KVM/Xen's vmm will always scale (a little bit) better with idle VMs.
These combine to make it easy to show KVM/Xen hypervisors more deeply overcommitted than VMware hypervisors by using only idle Linux VMs. VMware doesn't care about such numbers, because the difference disappears or favors VMware as load increases. If GridCentric has found a business for deeply overcommitted VMs, more power to you!
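To make the idle-vs-busy tradeoff concrete, here's a back-of-envelope sketch. The ~100MB per-VM figure is the one mentioned above; the working-set sizes are illustrative assumptions, not measured VMware or KVM numbers.

```python
# Hypervisor overhead as a fraction of the memory a VM effectively costs.
# PER_VM_OVERHEAD_MB and the working-set sizes below are illustrative
# assumptions, not measured VMware/KVM figures.
PER_VM_OVERHEAD_MB = 100

def overhead_fraction(working_set_mb, per_vm_mb=PER_VM_OVERHEAD_MB):
    """Fraction of (working set + monitor overhead) eaten by the monitor."""
    return per_vm_mb / (working_set_mb + per_vm_mb)

idle = overhead_fraction(64)     # mostly-idle Linux VM: overhead dominates
busy = overhead_fraction(4096)   # active VM: overhead is noise
print(f"idle: {idle:.0%}, busy: {busy:.0%}")
```

With all-idle VMs the fixed per-VM cost dominates the total, which is why an idle-only benchmark flatters the thin-monitor design; as load rises, that fraction shrinks into the noise.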
It's a really good idea.
Part of the idea would have to be having a REPUTABLE escrow service disinterested in publicity - a service that can work with both the vendor and the security researcher and balance the competing interests.
Every security researcher wants the severity rating of the bug maximized, an instantaneous commitment to a fix timeline, and an absurdly tight deadline (i.e. expects the vendor to drop everything to analyze the bug, fix it perfectly on the first try, and release immediately). Responsible security researchers know that "instantaneous" is not going to happen, so they are willing to wait a few months.
Every vendor wants to minimize the severity rating of a bug, and to push out the fix as long as possible. Delaying is less about saving face and more about saving money: regression testing eventually becomes free (next major release) or at least cheaper (batch several security updates and regression-test simultaneously). I know my employer prefers batching, as does my web browser's vendor (Firefox) - non-critical non-public vulnerabilities get queued and are only fixed with the next critical or public vulnerability or other minor update.
So a vulnerability escrow service needs to mediate these two competing interests. It needs to keep the pressure on for a deadline, but also needs to set a reasonable deadline in the first place (60 days sounds pretty good) and be flexible enough to move it back if the vendor can demonstrate a good-faith effort that, for reasons outside the vendor's control, won't make the deadline. (Example: the fix breaks Adobe, and Adobe now needs its own 60-day window to release an update.) For a vendor, every deadline is too short; for a security researcher, every deadline is too long. Only an escrow service with a serious reputation for integrity and serious clout will be able to force both sides to accept a compromise, especially when a security researcher who doesn't like the compromise can so easily throw an adolescent temper-tantrum and go public prematurely.
Windows is an OS kernel, a very large set of system libraries, plus a few hundred applications (everything from Calculator to Internet Explorer). Linux source is just the kernel. If you want a real comparison, compare a Linux distro (say, Ubuntu) to Windows. Wikipedia already did it for you.
XP is 40M LOC, its contemporary Debian 3.0 is 104M LOC. I don't have a source for the size of the Windows kernel source code, but Windows 7's compiled kernel is ~5.4MB; Linux's compiled (core) kernel tends to run about twice that.
Most Windows (and for that matter, Linux) security vulnerabilities are not in the kernel.
Well, you need to be faster. Much faster. As fast as open-source software. Don't say you can't do it: we can.
If this had been reported in open-source software, there wouldn't even be a fix, just a snarky e-mail (about as snarky as your post, actually) saying this was fixed four years ago and telling the user to upgrade. And woohoo, the latest (open-source) version is free! - when you don't count your time to do the upgrade.
Open-source software doesn't support 9-year-old codebases; most open-source projects (core developers) only support top-of-trunk, and even most open-source vendors (read: those who sell support contracts) only go 3-5 years out.
I've interacted with Microsoft security before. They are quite serious about fixing things; they have standards for what gets fixed on what timeline, they really do follow them, and they get back to you in a REASONABLE amount of time (usually ~1 week, not 2.5 business days). Generally, they ask whether a bug is being exploited in the wild. If it is, they react fast; if not, they take their time (a thorough investigation rather than a rushed one) - which is not the refusal you naively claim.
The problem with parent's logic (and many other self-styled security experts') is assuming that their personal security issue is the single most important issue on the planet and applying scorched-earth tactics to escalate its priority - a sign of megalomania, not of responsible security research. Is a not-in-the-wild exploit more important than an in-the-wild exploit? Is a not-in-the-wild exploit more important than Joe's long-awaited vacation with his kids? Is a not-in-the-wild exploit worth risking breakage due to an unexpected conflict? Your personal answer to all these may be "yes"; it is plain arrogance to force that answer upon everyone else. That's the difference between responsible disclosure and (this Google idiot's) irresponsible disclosure.
Every time someone talks about how great XP is working, I have this odd compulsion to point out the Linux equivalent.
If you ran Linux systems that old, you would be using a 2.4.18 kernel (remember LinuxThreads?). You would be using OSS, because ALSA was still incomplete and PulseAudio hadn't come around yet. Your system's compiler would be gcc-2.95, your python implementation would be 1.5.x and run none of today's code, you would still be on an XFree86 server that doesn't support any graphics card made after ~2004. Your web browser would be Mozilla, because Firefox hadn't come around yet (and today's Firefox doesn't support kernels that old). Your OpenSSL libraries would have started at version 0.9.6b, and been patched roughly twice a year since release.
The odd thing is, were this Linux you would be flamed for trying to get modern things running with such old versions. But as this is Windows, you feel entitled to complain about having to re-learn something new and brag about the "effort" you save.
As somebody who programs for both Linux and Windows for a living - your "saved effort" comes at a significant cost to me. It is increasingly hard to write Windows software that works on both XP and Win7; every new feature has to be written twice, once using the right Vista+ API and once to degrade gracefully on XP. Linux is marginally better - there's a new trendy library-of-choice every few years, but at least old ones disappear before too long. Hardware tends to be less than 5 years old, Linux installs tend to be less than 5 years old; yet tech-savvy XP users somehow feel entitled to stay with a 9-year-old OS. Most people don't keep cars that long; why expect an operating system to last?
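Here's what "written twice" looks like in practice - a hedged C sketch of the standard runtime-probing pattern (Windows-only; GetTickCount64 is a real Vista+ API, but the wrapper name is mine and this is a sketch, not production code):

```c
/* Sketch: prefer the Vista+ API when present, degrade gracefully on XP.
 * Windows-only; compiles against windows.h. The wrapper name ticks_ms
 * is illustrative. */
#include <windows.h>

typedef ULONGLONG (WINAPI *GetTickCount64_t)(void);

static ULONGLONG ticks_ms(void)
{
    /* GetTickCount64 exists only on Vista and later, so we must probe
     * for it at runtime instead of linking against it directly -
     * otherwise the program won't even load on XP. */
    HMODULE k32 = GetModuleHandleW(L"kernel32.dll");
    GetTickCount64_t p =
        (GetTickCount64_t)GetProcAddress(k32, "GetTickCount64");
    if (p)
        return p();                   /* Vista+ path */
    return (ULONGLONG)GetTickCount(); /* XP fallback: wraps at ~49.7 days */
}
```

Multiply that little dance by every feature that touches a post-XP API and you get a sense of what keeping XP alive costs developers.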