
Comment Re:isn't music already open source? (Score 1) 183

Unless someone who receives the source is allowed to redistribute that source, it does not qualify as an Open Source license. Open Source requires that the redistribution rights flow downstream.

Copyrighted music, unless explicitly licensed in a way that allows further redistribution by anyone who receives a copy, follows more of a "shared source" or "licensed source access" model, in which certain distributors are explicitly authorized by the copyright holder to redistribute it under certain terms, but in which that right is not conferred downstream. While this provides some of the same benefits, it does not meet the minimum criteria for an Open Source license.

The distinction between Open Source and Free is that the latter may not be redistributed in closed (binary) form without making the source available. An Open Source but non-Free music license would allow you to use the work, modify it, and distribute recordings (the binary form) without providing sheet music. A Free license would additionally require you to provide the altered sheet music upon request.

Comment Re:Where will this end? (Score 1) 986

I think the point is that encryption is useless against someone who can say, "give us the key or we'll disappear you."

Not if you use encryption properly. Everyone who actually cares about privacy should have a personal CA cert. When someone asks for your public key, create a new keypair for them on the spot and sign it with your CA cert. You now have a keypair that you can use to communicate with them. Rotate this key frequently, and when you're done communicating with them, destroy the pair. Inform them ahead of time so that they don't send any communication with a no-longer-valid key.

With such a scheme, you should have no trouble proving that it is not possible for you to produce the key used to encrypt the communication.
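A minimal sketch of that scheme in Python, using the "cryptography" package (an assumption on my part; the key sizes, one-week lifetime, and names are likewise mine, since the comment names no tools): a long-lived personal CA key certifies short-lived per-correspondent keys, which are destroyed after use so they cannot be produced on demand.

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Long-lived personal CA key: it signs the short-lived keys but is
# never itself used to encrypt traffic.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Alice Personal CA")])

def ephemeral_cert_for(correspondent: str):
    """Mint a fresh keypair for one correspondent and certify it."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name(
            [x509.NameAttribute(NameOID.COMMON_NAME, correspondent)]))
        .issuer_name(ca_name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        # A short lifetime forces the rotation described above.
        .not_valid_after(now + datetime.timedelta(days=7))
        .sign(ca_key, hashes.SHA256())
    )
    return key, cert

key, cert = ephemeral_cert_for("bob@example.org")
# ... encrypt traffic to/from this correspondent with `key`, then:
del key  # a destroyed key cannot be handed over later
```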

Comment Re:Take it public (Score 1) 266

Until a few hundred celebs' walls get spammed and they declare en masse that they're all moving to Google+, followed shortly thereafter by a fan exodus. Facebook might not take security seriously enough at times, but even they aren't clueless enough to think that they can ignore it entirely.

Comment Re:Take it public (Score 2) 266

They simply do not have the time or manpower to respond to every last report of "I can haxxor" or "I was haxxored and they keep doing it".

The latter is almost invariably a problem with the user's computer, and even if it isn't, the user almost never has enough information to be helpful. However, Facebook's report form should let you flag the offending post itself when reporting a problem, and Facebook should at least take the time to determine whether the post was made through a compromised password, through a third-party FB app, or by the user's own account from a computer holding a valid session cookie. The system should then send an automated message telling the user how to protect against that kind of attack in the future. This process could be entirely automated, with the user needing to follow up only when a third-party FB app made the post (which is likely a real security bug, or at best an app developer violating the developer TOS).
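A hypothetical sketch of that triage flow in Python; every name in it (PostOrigin, classify_origin, and so on) is invented for illustration, and none of it is Facebook's actual API:

```python
from dataclasses import dataclass
from enum import Enum, auto

class PostOrigin(Enum):
    PASSWORD_COMPROMISE = auto()  # login from an unrecognized device/IP
    THIRD_PARTY_APP = auto()      # posted through an app's API token
    VALID_SESSION = auto()        # posted from a browser with a valid cookie

ADVICE = {
    PostOrigin.PASSWORD_COMPROMISE:
        "Change your password and enable login notifications.",
    PostOrigin.THIRD_PARTY_APP:
        "Revoke the app's access. This report has been escalated.",
    PostOrigin.VALID_SESSION:
        "Check your computer for malware and log out other sessions.",
}

@dataclass
class Report:
    user_id: int
    post_id: int

def classify_origin(post_id: int) -> PostOrigin:
    # Stub: the real system would consult login and API logs here.
    return PostOrigin.VALID_SESSION

def send_message(user_id: int, text: str) -> None:
    print(f"to user {user_id}: {text}")  # stub delivery

def escalate_to_security(report: Report) -> None:
    print(f"escalating post {report.post_id} for human review")  # stub

def triage(report: Report) -> None:
    """Classify a 'someone posted as me' report and auto-respond."""
    origin = classify_origin(report.post_id)
    send_message(report.user_id, ADVICE[origin])
    # Only the third-party-app case is likely a real platform bug, so
    # only it needs human follow-up, as argued above.
    if origin is PostOrigin.THIRD_PARTY_APP:
        escalate_to_security(report)
```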

Also, pay attention to the section which states that you are supposed to use a TEST ACCOUNT to reproduce the problem, not hack the Big Z's timeline.

Which he did, and they dismissed his bug report, so he took the only step that he thought could prove, in FB's eyes, that the flaw was legitimate.

What I find particularly interesting is how many ACs are defending Facebook in this. It almost makes me wonder if there's an astroturfing campaign going on, either officially or unofficially, by employees of either FB or a third-party firm hired to defend them. Just saying.

Comment Re:This is so bad (Score 3, Insightful) 266

This. As soon as a bug bounty program is shown to not actually pay out when a real security flaw is found, it becomes a worthless program. From now on, instead of telling Facebook, the not-insignificant percentage of hackers for whom the bounty was the only reason to report it to FB will simply disclose the flaw immediately, resulting in a significant reduction in the site's security for everyone.

Comment Re:Devil's Advocate (Score 2) 266

How can he have an IS degree if he can't even write a decent bug report?

Most universities (even in the U.S.) don't teach that skill. I'm not at all surprised. Even many fully employed software developers write terrible initial reports. My experience has been that on average, bug reports go back to the originator a couple of times just to collect the basics, and that's not including the number of times that the engineers bounce bugs back with suggestions like "Try [x] and see if that works" that are intended both to help the person get up and running and to determine the scope of the problem more fully.

Comment Re:Take it public (Score 4, Interesting) 266

Imagine you're Facebook and you're getting piles of "I can post on someone else's timeline!" reports. Well, you can bet that 99.999% of those cases are user error - as in, the user reporting it could post because the permissions allowed it.

Even if you're right, and 99% are bogus, there's no excuse for a process in which a screener chooses "Not a bug" instead of "Need more information" with a request for steps to reproduce. Asking for more information should be drilled into employees as the only valid response until they are relatively certain that the problem was user error. This culling was premature; you must assume that the bug *might* need investigation until it is clear that it does not. Anything less is negligence.

But the bigger problem is that there's no good way for Facebook to be certain that it wasn't user error unless the account is known (by Facebook) to have settings that should have prevented posting. That's what makes the CEO's page an obvious choice. IMO, there's also no excuse for a company the size of Facebook to not provide an account that is preconfigured to not allow posts so that if a researcher successfully posts on it, the subsequent security bug report has automatic credibility (and, hopefully, additional logging by Facebook's servers, immediate reaction from their security response team, etc.). Perhaps call the test account Zark Muckerberg.

Comment Re:Take it public (Score 5, Interesting) 266

Basically all he did is say "I posted to someone's timeline, this is a bug" and linked to the post he made. He didn't explain anything.

Bzzt. If Facebook's logging weren't broken, that should be all they need. The existence of the post itself, having been posted to a wall where he should not have been allowed to post, should have been enough to determine trivially that the bug was real. Further, the post's database record should contain the posting IP address and the ID of the server that handled the request. From there, they should have been able to look at the server's request logs to determine precisely how the attack happened (assuming the researcher was using a structurally valid URL in the request, as opposed to exploiting a null character handling bug in the web server itself).
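For illustration, here is roughly what that trace looks like as code, against an invented schema; Facebook's actual post store and log layout are unknown to me, so the table name, columns, and log paths are all assumptions:

```python
import gzip
import sqlite3

def trace_post(db_path: str, post_id: int):
    """Find where a post came from and which server handled it."""
    db = sqlite3.connect(db_path)
    row = db.execute(
        "SELECT source_ip, server_id, created_at FROM posts WHERE id = ?",
        (post_id,),
    ).fetchone()
    if row is None:
        return None
    source_ip, server_id, created_at = row
    # With the handling server identified, pull its request log and
    # look at what that client sent around the post's timestamp.
    with gzip.open(f"logs/{server_id}/access.log.gz", "rt") as log:
        requests = [line.rstrip() for line in log if source_ip in line]
    return source_ip, server_id, created_at, requests
```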

But even if they looked at the logs and couldn't figure out what happened, IMO, it is still completely unacceptable to just close a bug like this. It's one of those bugs that, if real, is borderline catastrophic in scope. You do not close a bug like that as "cannot reproduce". You contact the originator and say, "Hey, can we get more information about this? We need to try to reproduce the problem."

It's sad that it takes somebody posting on the CEO's Facebook page to get the attention of Facebook's security staff. This means one of two things: they are grossly mismanaged or are woefully understaffed—probably the latter, IMO. Either way, it tells me that Facebook does not take security seriously enough. If bug screeners do not have time to properly follow up on bugs that are this severe, then they need to double or even triple the number of screeners.

Also, this brings into serious question the way that Facebook screens bugs in the first place. Where I work, a bug like this would have been tagged as a security bug the moment it came in. This causes additional people to review the bug, significantly reducing the likelihood of a serious mistake. Closing the bug without asking for more information strongly suggests that a single, hopelessly overworked individual made a mistake, and that the company as a whole failed to have proper processes in place to ensure additional review that would otherwise have caught that mistake quickly and followed up with the original reporter. Not good. Not good at all.

And as long as I'm criticizing Facebook's security practices, IMO, a service like this should have several publicly visible, official security testing accounts for precisely this purpose, with varying restrictions on posts and content, so that security researchers can properly hammer on the site's security. For example, there should be an official test account that looks an awful lot like Mark Zuckerberg's account. If a researcher is able to post on the wall of that account, there can be no doubt whatsoever that a bug exists. Likewise, there should be more complex accounts with various security settings, complete with a list of their content and the expected behavior (e.g. you should not be able to read the barcode image entitled "nude_selfie_for_my_boyfriend.jpg").
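Such accounts could even ship with machine-readable expectations. A hypothetical fixture sketch in Python, with every handle and field invented purely to illustrate the idea:

```python
# Public, documented security-test accounts: each entry states the
# account's restrictions and what a successful attack would look like.
SECURITY_TEST_ACCOUNTS = [
    {
        "handle": "zark.muckerberg",
        "wall_posting": "nobody",
        "expected": "any successful wall post by another account is a bug",
    },
    {
        "handle": "test.locked.down",
        "wall_posting": "friends only",
        "content": [
            {
                "item": "nude_selfie_for_my_boyfriend.jpg",
                "visible_to": ["test.boyfriend"],
                "expected": "unreadable by any other account",
            },
        ],
    },
]
```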

In short, I suspect there's plenty of blame to go around for this error. What matters is not who gets blamed, but rather how Facebook fixes their processes to ensure that such mistakes do not get made in the future. And I would emphasize that this does not involve firing anyone. People make mistakes. That's why processes are supposed to be designed to mitigate those mistakes. A company like Facebook is big enough that they should know this. If they don't, then perhaps this object lesson will get their attention and cause them to change their ways. If not, it's time to run, not walk, to a competing service.

Either way, what the researcher did was IMO wholly appropriate. He initially performed the smallest attack that could potentially have proven that there was a flaw. When the first report was casually dismissed, he then escalated that attack, but only to the minimum degree necessary to prove beyond any reasonable doubt that there was a flaw (by attacking a single, prominent account belonging to a readily identifiable Facebook employee). Had Facebook provided a "Zark Muckerberg" account as suggested earlier, he could have used that. They didn't, so he used the only remaining tool that was available—the CEO's real account.

Would it have been better if he had included steps to reproduce in the original bug? Sure. Is Facebook behaving like a spoiled child after getting called out for misbehavior? You bet. Does the researcher deserve the bounty? Uh, duh. More to the point, this guy deserves a job. But at the very least, he deserves a big bounty for uncovering not just a security bug, but also a serious process problem that allowed such a serious bug to be inadvertently swept under the rug. And that, IMO, is even more significant than the bug itself.

Comment Re:Take it public (Score 4, Insightful) 266

No, not almost invariably. Invariably. You always follow up on security hole bug reports. Always. If you do not do this, you are incompetent. Assuming this security researcher gave them a reasonable amount of time (the summary here doesn't say), this is once again a demonstration of Facebook talking "secure" while implementing the opposite, hyping their bounty program while refusing to pay out.

For that matter, you should always follow up on non-security bug reports unless they're obvious garbage (e.g. porn site spam submitted to your bug reporting page by a bot). But security bugs? There's no excuse for not following up on those. Ever. EVER.

Comment Re:This will be Godwinned (Score 1) 496

But that was a different world. 9/11 changed everything, man.

Seriously, as cynical as it sounds, at least in the U.S., if the Nuremberg trials were conducted today, we'd probably let them off for "just following orders". Chilling, isn't it, how quickly we have become the enemy, a mirror reflecting that which our forefathers died fighting against?

"What truth is there, but the law?" they say. "Crucify him! Crucify him!"

This is what it sounds like when justice ends and the blind and arbitrary pursuit of revenge begins. This is always what it sounds like.
