
Comment: Re:Common Sense Prevails (Score 1) 62

by Tyger (#48523939) Attached to: Negative Online Reviews Are Not Defamation (At Least In Canada)

The frivolity and abuse statement was only about the size of the monetary damages, not about the claim that it was defamation. So even removing the "flirts with" qualifier, it's still not a good overall ruling. The summary of this post completely misstates the ruling, which was primarily dismissed because the alleged defamation was not against the plaintiff but against the company he represents, and so the suit was brought by the wrong entity.

Comment: IANAL but that doesn't seem to be what it says (Score 4, Insightful) 62

by Tyger (#48523767) Attached to: Negative Online Reviews Are Not Defamation (At Least In Canada)

The main cause for dismissal wasn't that online reviews are not defamation. It was that the lawsuit was brought by the wrong entity (the lawyer who represents the website, rather than the corporation that owns the website) and that he failed to provide substantive proof of any monetary loss.

If it had been brought by the right entity and there were proof of loss, it might not have gone the same way. The judge specifically said that the review did contain defamatory language.

Comment: Or look at the GeoEye .5m resolution image (Score 5, Informative) 141

by Tyger (#37330800) Attached to: Satellite Captures Burning Man From Space

http://www.geeked.info/burning-man-2011-geoeye-satellite-image/

I can't believe the 16ft resolution image is getting so much press when the 0.5m resolution image is so much better, and was announced ahead of time (and scheduled, as you can see from the people forming shapes in it).

Networking

Ubisoft DRM Causing More Problems 279

Posted by Soulskill
from the there's-a-lesson-here-that-nobody-will-learn dept.
Joe Helfrich writes "Ubisoft's Settlers 7 servers have been causing problems for over a week for users worldwide, and Australian gamers are hardly able to connect at all. 'The problem reportedly strikes after the game has already confirmed an active Internet connection, and prevents the user from playing even the single-player campaign, returning the error "server not available." But they are available, because other people are logged into them and merrily playing away.' Wonder how they're going to describe this one as an attack."

Comment: Re:2^13? (Score 4, Interesting) 175

by Tyger (#27046917) Attached to: Google NativeClient Security Contest

The PDF was an interesting read, though I agree that the money they are dishing out is pretty paltry for all the free review they are trying to garner. Furthermore, I think they are taking platform neutrality in the wrong direction by locking the idea into the x86 architecture.

As to how it works: they are basically enforcing strict limits on how the code can be structured, limits designed to make the code easy to analyze. Anything that falls outside the strict requirements is rejected. This approach doesn't work for antivirus because antivirus has to deal with whatever code comes in, without restriction.

As to why it doesn't work for OS... There is no reason the basic concept wouldn't, aside from the performance penalty and increased code size. (Though further compiler optimization could minimize or eliminate some of that).

However, if you want to go the route of making an OS do it, you might as well pick up a decent modern RISC architecture, because you're already breaking compatibility with every past program for any OS on x86. Most of what they are doing is taking something that is standard on RISC and shoehorning it into the CISC architecture of the x86: namely, that instruction boundaries can be reliably tested for jumps. They enforce that by requiring jumps only to 32-byte boundaries, and then verifying each 32-byte block for correctness. Combined with disallowing self-modifying code and eliminating the stack completely, all code that executes can be properly analyzed ahead of time.
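To make the alignment idea concrete, here's a toy sketch of that kind of static check. The instruction representation, the 32-byte bundle constant, and the "mask before indirect jump" rule are my simplifications for illustration, not NaCl's actual encoding or verifier:

```python
BUNDLE = 32  # assumed bundle size, per the 32-byte boundary rule described above

def verify(instrs, code_size):
    """Toy static verifier. instrs is a list of (offset, kind, target)
    tuples, where kind is 'normal', 'mask', 'jump' (direct), or
    'indirect_jump'. Rejects any direct jump whose target is not
    bundle-aligned or falls outside the module, and any indirect jump
    whose register wasn't just masked down to a bundle start."""
    prev_kind = None
    for offset, kind, target in instrs:
        if kind == 'jump':
            # Direct jumps may only land on a bundle boundary inside the module.
            if target % BUNDLE != 0 or not (0 <= target < code_size):
                return False
        elif kind == 'indirect_jump':
            # Computed jumps are only allowed right after an address mask.
            if prev_kind != 'mask':
                return False
        prev_kind = kind
    return True
```

Because every legal jump target is a verified bundle start, the checker never has to discover instruction boundaries by tracing execution, which is exactly what makes ahead-of-time analysis tractable here.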

The concept looks sound to me (from experience working at a low level with the x86 architecture), but the security still relies on the implementation. Off the top of my head I can think of several ways to break the sandbox depending on how it is implemented. However, the PDF is quite short on the details needed to evaluate the implementation: what exactly qualifies as an allowed x86 instruction, and for the syscalls that are checked, what the check is; not to mention the potential for bugs in the syscall handler for otherwise valid calls, and even the state of the OS or process when the protected code is executed.

Overall, I don't think this is the right direction for the web platform. In theory, interpreted byte code should be more secure because it can't do anything the interpreter doesn't explicitly allow (JavaScript, Java, Flash, etc.), and we see where that got us.

Pohl's law: Nothing is so good that somebody, somewhere, will not hate it.
