Yet it's still better than anything that Microsoft can come up with.
System Integrity Protection is akin to Windows Resource Protection - which has been in Windows since Vista. Welcome to 2006.
Nothing to do with Java. Buffer overflows are quite possible in Java, but this problem has everything to do with shitty coding, not the implementation language.
No, but this problem has everything to do with shitty operating system design. The login "screen" should not just be an application that maximizes its window to cover the UIs of all other applications. That is a naïve implementation, and it opens the supposed security feature up to all kinds of attacks, including shatter attacks and more. Not to mention that an application crash will cause the OS to clean up and close the "blocking" window.
Google should take a cue from Windows and make the login screen a totally separate "desktop" which is completely isolated from the "user" desktop. Switching between the two should be a privileged operation, one that can only be executed by trusted login applications. This way a mere exception will not cause the "login" program to crash, close and reveal the user desktop.
It is isolated. In order to interact with it, a user must explicitly permit it by entering an admin's username and password.
Sorry, but that is not isolation. If the prompt requires a password rather than just an accept, the launching process can *still* control it remotely through AppleScript - it just would not know what to put in the fields. That's not isolation. At best, it is a mitigating factor.
Isolation would mean that any AppleScript launched from the process was *barred* from interacting with the approval window.
The vulnerability here is architectural: windows can be remote controlled by other processes. Ask yourself this: what good is an approval window if the process can just remote control the approval itself?
On OS X, this is programmatically easier to do, but it's possible with a little more effort on Linux (if using GNOME or KDE and their password stores) and Windows (which is trickiest of all, since you specifically deal with an application's store rather than a central one; presumably you'd go for a browser).
On Windows - unlike on OS X and Linux - there is the concept of User Interface Privilege Isolation (UIPI) where a process running with a higher integrity level cannot be remote controlled by a lower-integrity process.
The real vulnerability here is NOT whether the user has allowed the process to run or not, or whether it came through the app store or not. The critical vulnerability is the lack of isolation of the window that is supposed to obtain approval from the interactive user. This lack of isolation means that an OS X application can launch an action that requires approval and then immediately - through script - approve the action itself.
Pointing to the app store approval model is missing the point entirely. Even approved applications can (and do!) contain vulnerabilities. The reason why so many apologists are out in force trying to deflect the problem as "app approval" is because this illustrates an architectural problem within OS X: even though the same user runs a number of processes, a mechanism for policing what they can do to each other is lacking.
But not a system default one.
As of Windows 10 (and soon previous versions, via Windows Management Framework 5.0) there *is* a default one: OneGet - now just called "PackageManagement". It is controlled through a PowerShell module. It is actually a package manager umbrella: a number of different package managers can plug into it as "providers".
Open a PowerShell prompt and type the following to reveal the commands of the PackageManagement module:
gcm -mod PackageManagement
(or type gcm -m pack[tab] if you want to save keystrokes - PowerShell will autocomplete the module name)
To register Chocolatey as a package source (so that you can find packages through Find-Package) type this:
Register-PackageSource -Name chocolatey -ProviderName Chocolatey -Location http://chocolatey.org/api/v2/
When is Microsoft going to get off their butts and fix their operating systems so that the first user is not defaulted to administrator rights, or at least force the first user to make a 'normal' user account for normal usage? Even 'ancient' Linuxes only add the first user to sudoers, so that they have to explicitly invoke rootly powers.
Unlike Linux, Windows uses proper security tokens. Each process has its own token governing what it can do to which resources. On Linux the "token" is - rather naively - a user id.
When you log on to Windows - since Vista - with an account with administrative rights, the token that is created for the shell process is 1) stripped of all administrative rights and 2) given an integrity level of "medium". Integrity levels are also part of the token.
What it means is that *even when you log on as an administrator* you do not possess any administrative or god-like rights. You are a standard user.
When you invoke a program that has a manifest which states that it requires some form of administrative rights, Windows will prompt you for "elevated" privileges. Only when you accept to use your administrative privileges will the process be started with a token with higher than standard user rights.
It really is a much more elegant solution than the stupid effective user in Linux, where the description of a process's rights is strongly tied to a user: there must exist a user with the specific set of rights you want the process to have. Not so on Windows: any process can have its own token with fewer or more rights/privileges.
You can turn off UAC (don't!), which is why Microsoft must write the disclaimer *If the current user is logged on with administrative user rights*. If you turn off UAC and log in with an administrative account - then you run all processes with full permissions/privileges.
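The split-token mechanism described above can be sketched as a toy model. This is an illustration of the concept only - not the real Win32 API; the names Token, filtered_token and elevate are invented for the sketch.

```python
# Toy model of Windows' split-token (UAC) logon. Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    user: str
    groups: frozenset
    integrity: str  # "low", "medium" (standard user) or "high" (elevated)

def filtered_token(full: Token) -> Token:
    """At logon, administrative groups are stripped and the integrity
    level drops to "medium" - even for an administrator account."""
    return Token(full.user, full.groups - {"Administrators"}, "medium")

def elevate(full: Token, user_consented: bool) -> Token:
    """The UAC prompt: only explicit consent yields the unfiltered token."""
    return full if user_consented else filtered_token(full)

admin_logon = Token("alice", frozenset({"Users", "Administrators"}), "high")

shell = filtered_token(admin_logon)           # what the shell and your apps get
print(shell.integrity)                        # medium
print("Administrators" in shell.groups)       # False

installer = elevate(admin_logon, user_consented=True)
print("Administrators" in installer.groups)   # True
```

The point of the model: rights live in the per-process token, not in the user account, so the same logged-on user can run processes with different rights side by side.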
When is Microsoft going to get off their butts and fix their operating systems so that the first user is not defaulted to administrator rights, or at least force the first user to make a 'normal' user account for normal usage?
They did fix it. You are just ignorant.
How many of these problems could be mitigated if this were not Microsoft's default approach?
The answer is 92% - and it is mitigated by default.
[Start] e n v
Win 10 responds with
"Edit environment variables for your account"
So, because someone throws a new cool Apple feature name out there, I should just accept that it is the ultimate security feature that will magically distinguish between malicious and legitimate writes to sudoers?
The description says that it will protect the *binaries*. Reading comprehension? (hint: sudoers is not a binary)
10.11 has a new SELinux-like 'rootless' security model that should mitigate any privilege escalation attack like this. Odds are it was naturally immune.
That's interesting. This is what I have been able to find from Apple on the feature (now called "System Integrity Protection"):
"System Integrity Protection
A new security policy that applies to every running process, including privileged code and code that runs out of the sandbox. The policy extends additional protections to components on disk and at run-time, only allowing system binaries to be modified by the system installer and software updates. Code injection and runtime attachments to system binaries are no longer permitted."
Hardly a new "security model". And from that description - no it would not have mitigated this attack.
Sounds an awful lot like Windows File Protection (later renamed to Windows Resource Protection). Welcome to 2004!
Modifying the sudoers file was only one example use for this. It allows you to write to any file that is normally only writeable to root. Modifying sudoers is a fairly simple and visible change, but modifying one of the system startup scripts that launchd runs as root would work just as well. I think it only lets you append to a file, but it would also be possible to temporarily modify sudoers, then set your worm's setuid bit and change the owner to root, then revert the sudoers change. The only user-visible thing would be the setuid bit on a suspicious binary hidden somewhere in the system (how many people check for this?). Of course, once you are root then you can do things like modify firmware and boot settings and hide inside the kernel...
Spot on. If I was a bad guy (I'm only a little bad) this is *exactly* how I would create an attack.
The only user-visible thing would be the setuid bit on a suspicious binary hidden somewhere in the system (how many people check for this?)
That part in particular highlights the problem with setuid.
It is, in effect, a deliberate hole in the security boundary: The mere existence of the setuid facility means that you can *never* audit the security policies (access rights) and be confident that they truly reflect the rights and restrictions of users.
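Checking for such binaries is at least cheap. A minimal sketch of the kind of setuid report mentioned here, in Python (the owner uid is parameterized so the sketch is testable; a real audit would pass 0 for root):

```python
# Minimal sketch: walk a directory tree and flag regular files that have
# the setuid bit set and are owned by a given uid (0 = root in practice).
import os
import stat

def setuid_files(root_dir, owner_uid=0):
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue  # unreadable entry - skip it
            if (stat.S_ISREG(st.st_mode)
                    and st.st_mode & stat.S_ISUID
                    and st.st_uid == owner_uid):
                hits.append(path)
    return hits

if __name__ == "__main__":
    # List setuid-root binaries under /usr/bin - sudo, passwd et al.
    for path in setuid_files("/usr/bin"):
        print(path)
```

Of course, a report like this only tells you *which* binaries hold root powers - not what they will do with them, which is exactly the auditing problem described here.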
Auditor: "Who can access this file?"
Admin: "Easy" (ls in the directory), "User1 can write and users in the group 'group1' can read it."
Auditor: "And no-one else can read or write the file, not even root?"
Admin: "What do you mean? Of course root can read and write the file - root can do anything. This is Unix, d'oh!"
Auditor: "Ok. Who can run as root, then? I need to have an exhaustive list, you see. The insurance company needs the list to assess the risk and calculate the premium."
Admin: (sighs, looks up in sudoers and su) "The user admin1 and users in admingroup1 can run as root".
Auditor: "And no-one else can run as root? What about that setuid bit I've heard of?"
Admin: "Yes, ok, a setuid root utility can run as root, I knew that. But I have those covered: I run a report every week which lists all utilities with the setuid bit that are owned by root. We accept only those utilities that we know. Trust me."
Auditor: "Ok then. So back to this file, how can you document to me that - say - this 'cmsagent' utility cannot access the file, now that we know it is setuid root?"
Admin: "What do you mean, I installed cmsagent myself, I'm pretty sure that it only allows remote users to access documents in the CMS system"
Auditor: "But how do *I* know that? The operating system does not protect the resource against root abuse?"
Admin: "No - this is Unix. I know what I am doing. Trust me. I have access to the source code, if you want to see what it can do".
Auditor: "Ok. I don't know how to read code, so I need to have one of our code auditors look at all the source code then. Assuming that, how do I know that the binary present on this system is the compilation of the source code you will give me?"
All of this because of a bad design decision. In other operating systems (with no all-powerful root and no setuid), the DACL of a resource *does* reflect who can access the file.
SELinux, AppArmor, etc. are ways to add (yet another) security context with a proper security boundary.
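The difference is easy to state in code. A toy DACL model (names invented for the sketch): access is granted only by explicit entries, so listing the DACL really is the complete answer to "who can access this file?" - there is no account with implicit rights.

```python
# Toy DACL model: every right must be granted explicitly; no principal
# (not even an "administrator") has implicit access. Illustrative only.
class Resource:
    def __init__(self, dacl):
        self.dacl = dacl  # mapping: principal -> set of granted rights

    def check(self, principal, right):
        return right in self.dacl.get(principal, set())

    def audit(self):
        # The DACL itself *is* the exhaustive list the auditor asked for.
        return {p: sorted(rights) for p, rights in self.dacl.items()}

doc = Resource({"user1": {"read", "write"}, "group1": {"read"}})

print(doc.check("user1", "write"))   # True
print(doc.check("group1", "read"))   # True
print(doc.check("root", "read"))     # False - no entry means no access
```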
NO, you miss the point....
You need to learn to distinguish between vulnerabilities and exploits. An *exploit* (the "installer" in this case) takes advantage of a *vulnerability* (the privilege escalation bug) to perform the attack. The underlying vulnerability exists regardless of the exploit.
You focus on the exploit and (incorrectly) claim that it is unlikely to work. That's beside the point, however, as there are many *other* ways to exploit the vulnerability, where a code execution vulnerability in a browser, email client, facebook app or whatever can be combined with this vulnerability to create true drive-by exploits.
I took issue with the dismissal of this bug as "just a privilege escalation" bug. Privilege escalation bugs are *serious* and critical vulnerabilities.
You do not need an installer to exploit this vulnerability. A simple execution bug in Firefox (last version patched 4 of them, as did practically every version before that) or a sandbox escape bug in Chrome/Safari (more rare) will get you pwned should an attacker choose to create an exploit.
As an apologist you are looking for a way to explain away the seriousness of the bug. That's the wrong (and dangerous) way to think about it. There are many attackers with tons of creativity who are ready to leverage a privilege escalation bug in any way they can.
You cannot possibly cover all those scenarios. That is why we need OS vendors and software developers to maintain and respect security boundaries: walls with as few well-defined gateways as possible, where each gateway is controlled by transparent policies that make it easy to audit what can pass through the gateway and (preferably) why.
In this case a piece of the wall crumbled, which means that you must now consider the risk that all the bad guys on the outside can venture into the protected inside and do whatever they like. You have identified one bad guy on the outside (the installer) and claim that he can be controlled. What about all those that you have not identified?
It's a privilege escalation exploit, so an attacker would already need shell access on your computer to get something done.
No shell access needed. A code execution bug in Firefox, Safari or Chrome (or whatever browser or internet-facing software you use) and the attacker is a local user. Especially Firefox, which does not have a sandbox, so a bug gives the attacker free rein. With this bug he can become root on your kit. That is bad. Blended attacks are the *norm* now - not the exception. Sometimes they are called "attack cocktails" when they try multiple vulnerabilities to gain a foothold and then use privilege escalation bugs like these to break out of sandboxes or gain root.
Every OS has privilege escalation vulnerabilities, because it's much harder to close all the holes when you allow someone to execute arbitrary code on a system.
Unix and Linux, with the braindead SUID/setuid design, are especially susceptible to privilege escalation. The design is akin to the security model of ActiveX: you let someone gain privileges far beyond what is necessary and then hope he is well behaved and - crucially - cannot be fooled into using those privileges in nefarious ways. Well, bugs are one way to fool a SUID process into doing something wrong.
SUID/setuid breaches the security boundary of the *nix security model. Once a process becomes root there is no policy that constrains what the process can do*.
* (absent bolted-on kludges like AppArmor or SELinux, with their separate security policies)
That said, this is a particularly braindead bug from Apple, and it is worrisome because it shows they aren't thinking about security, or don't have proper processes in place to ensure the system stays secure. Their programmers should have known better than to create that kind of environment variable so lightly.
Again, the trap is in the basic Unix design. A SUID process executes in the environment where it was launched, but with privileges of the file owner (typically root). That means that *anything* from the user environment is potentially an attack vector. In this case it was as simple as environment variables. So the tables turn, and now the developer must *explicitly* guard against malicious injections rather than coding to a well-defined contract where parameters are explicit. Not to mention that the developer may not even be aware that someone will change the executable to SUID or just invoke the executable as a tool from another SUID executable (example: sudo).
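One standard defensive measure - which the dynamic linker normally applies to attacker-controlled variables for privileged processes, and which this particular variable slipped past - is to scrub the inherited environment down to an allowlist before doing anything privileged. A hedged sketch; the allowlist and the pinned PATH are illustrative choices, not a canonical list:

```python
# Sketch: a privileged program scrubbing its inherited environment down to
# a small allowlist before acting on anything. Illustrative values only.
SAFE_VARS = {"TERM", "LANG"}

def sanitized_environment(environ):
    """Keep only known-safe variables and pin PATH to a trusted value,
    rather than trusting anything the (possibly hostile) caller set."""
    env = {k: v for k, v in environ.items() if k in SAFE_VARS}
    env["PATH"] = "/usr/bin:/bin:/usr/sbin:/sbin"
    return env

hostile = {
    "PATH": "/tmp/evil:/usr/bin",
    "DYLD_PRINT_TO_FILE": "/etc/sudoers",  # the attack vector in this bug
    "LANG": "en_US.UTF-8",
}
clean = sanitized_environment(hostile)
print("DYLD_PRINT_TO_FILE" in clean)   # False
print(clean["PATH"])                   # /usr/bin:/bin:/usr/sbin:/sbin
print(clean["LANG"])                   # en_US.UTF-8
```

The inverted contract is the point: with SUID, everything inherited is hostile until proven otherwise, so anything *not* explicitly allowed must be dropped.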
turned off almost all the reporting features
But not all and you can't prevent MS from changing your shit with forced updates. Or are you using Enterprise?
Yes all. And you cannot prevent Apple from changing your shit when they update OS X. What makes you think that MS will use Windows Update to change settings? They have never done so before, and have not indicated that they reserve that right for the future.
I don't understand why people think this sort of thing doesn't happen. It has been *publicly disclosed* that this level of spying takes place. The NSA was caught red-handed putting spyware in the firmware of routers being sent overseas...why in the world wouldn't they partner with Microsoft to inject spying software into Windows?
It would be naïve to think that the NSA does not try something like this. But what would Microsoft gain? They would risk their entire revenue - for what? Favors from the NSA? Microsoft - and any other vendor with business in the US - will have to comply with lawful orders. Unfortunately, FISA decisions are not public. But Red Hat or any other vendor would have to comply with the same FISA orders.
That isn't the issue. The issue is YOU being able to share MY WiFi key because I was dumb enough to let a Windows 10 user on my WiFi network. This is akin to me giving you the keys to my house so you can housesit, and you getting a hundred copies cut and distributing them to a bunch of people you know.
If you *tell* someone your WiFi password *then* there's nothing stopping them from sharing it with whomever they want. So do not do that. Not if he brings OS X or Linux or Windows.
If you want to allow some friend onto your network but not allow him to share your network with others, then *you* tap in the password at his computer when it connects - on OS X or Linux or Windows. That's what you would do today, and that's what you would do when your friend brings a Windows 10 machine. On Windows 10, simply DO NOT CHECK the "share" checkbox. It is off by default. Your network will not be shared.
Nothing has changed. Neither your network nor your password will be shared with anyone. Your friend cannot go into settings and share the network after the fact - it has to be done when connecting.
But if *you* connect to some network which you would like to share with your friends, you can check the "share" checkbox. When you do that, your password will be stored encrypted on Microsoft's servers. When one of your friends (if you share with - say - Facebook friends) is in range of that network, his Windows 10 computer can engage the network. The network will issue a challenge which must be hashed using the password as the key, and the hash returned. Modern password auth works like that to avoid sending passwords in cleartext. This means that the *actual* credential sent is a one-time hash computed from the challenge.
The computation of the hash is performed on Microsoft's servers, and your actual password is NEVER available on your friend's computer - not even in encrypted form - only the challenge-response hash. Your friend's computer must obtain the response to the challenge from Microsoft's servers - and when doing so it must prove that it belongs to a friend of yours.
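The exact Wi-Fi Sense wire protocol is not public, but the challenge-response idea described here can be sketched generically. HMAC-SHA256 stands in for whatever MSCHAPv2-style handshake is actually used:

```python
# Generic challenge-response sketch: the verifier issues a random
# challenge; the responder proves knowledge of the secret without the
# secret ever crossing the wire. Illustrative - not Wi-Fi Sense itself.
import hashlib
import hmac
import os

def make_challenge() -> bytes:
    return os.urandom(16)

def respond(secret: bytes, challenge: bytes) -> bytes:
    # In the Wi-Fi Sense description, this step runs on Microsoft's
    # servers - the friend's laptop only ever sees the response.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(respond(secret, challenge), response)

secret = b"my wifi passphrase"
challenge = make_challenge()
response = respond(secret, challenge)

print(verify(secret, challenge, response))          # True
print(verify(b"wrong guess", challenge, response))  # False
```

Because the response is keyed to a fresh random challenge, capturing it gains an eavesdropper nothing: it cannot be replayed against a different challenge, and the passphrase itself never appears on the wire.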
Furthermore, a Windows 10 machine that connects to a network in this way will *not* be allowed access to other devices on the network, except for the internet gateway. I.e. it can only be used for internet access - not for local file or media sharing.
The trouble with a lot of self-made men is that they worship their creator.