
+ - SCOTUS denies Google's request to appeal Oracle API (c) case

Neil_Brown writes: The Supreme Court of the United States has today denied Google's request to appeal against the Court of Appeals for the Federal Circuit's ruling (PDF) that the structure, sequence and organization of 37 of Oracle's APIs (application programming interfaces) is capable of copyright protection. The case is not over, as Google can still argue that, even though the APIs are protected by copyright, its use of them amounts to "fair use".

Professor Pamela Samuelson has previously commented (PDF) on the implications if SCOTUS declined to hear the appeal.

More details at The Verge.
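As a rough illustration of what the "structure, sequence and organization" of an API covers, here is a minimal, hypothetical Java sketch (not code from the case; the class and package names are invented): the copyrightability question concerns the declaring lines and how they are organized into packages, classes and methods, not the implementing code, which Google wrote itself for Android.

// Hypothetical example for illustration only -- not Oracle's actual code.
package illustrative.api;

public final class MathUtils {

    // The declaration below -- its name, parameter types, return type, and its
    // placement within this class and package -- is the kind of "structure,
    // sequence and organization" the Federal Circuit held copyrightable.
    public static int max(int a, int b) {
        // The implementation inside the braces was never the issue:
        // Google wrote its own implementing code.
        return (a >= b) ? a : b;
    }
}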

Comment: Re:Easy fix (Score 1) 247

... there was no excuse for what they did. All engineers do have to make trade-off decisions, but the fucking deluxe fix was $11, that is it. They could have built that into the car price with virtually no impact. TFA picked one terrible example...

The problem is that there were probably hundreds or even thousands of $11.00 fixes to the car that would have made it incrementally safer. At some point the engineer has to prioritize which to implement and which not to implement.

Comment: Re:If this works, everything will change. (Score 1) 132

Cars with autonomous freeway driving will be out in just a couple of years, according to automotive manufacturers. Nearly all the major players are predicting fully autonomous cars will be a solved problem sometime between 2020 and 2025.

Why is every cool technology always exactly ten years away?

Comment: Re:Or will it? (Score 1) 132

AI will not write books, do programming, etc. Strong AI is a myth.

Unless human brains have some magical powers (like a soul blessed by God), there is no logical reason that machines shouldn't eventually be smarter than humans. The only question is how far off it is.

Not necessarily. It's entirely possible that if we build smarter machines, they will, in turn, make us smarter. If we start implanting computer chips in our brains and nano-electronic optics in our eyes, humans themselves can change and advance, and it's entirely possible that we can "beat" computers indefinitely.

+ - SpaceX's Challenge Against Blue Origin's Patent Fails to Take Off->

speedplane writes: As was previously discussed on Slashdot, back in September SpaceX challenged a patent owned by Blue Origin covering technology for landing rockets at sea. Yesterday, the judges in the case issued their opinion, stating that they were unable to initiate review of the patent on the grounds brought by SpaceX.

Although at first glance this appears to be a win for Blue Origin, a closer look shows the judges explaining that Blue Origin's patent lacks sufficient disclosure, effectively stating that the patent is invalid, just not on the specific grounds brought by SpaceX:

Because claim 14 lacks adequate structural support for some of the means-plus-function limitations, it is not amenable to construction. And without ascertaining the breadth of claim 14, we cannot undertake the necessary factual inquiry for evaluating obviousness with respect to differences between the claimed subject matter and the prior art.

If SpaceX wants to move forward against Blue Origin, this opinion bodes well for it, but it will need to take its case before a different court.

Comment: Re:he dodged the good vs evil question (Score 1) 53

He didn't dodge it. He said, "We're not worried about lawlessness. Our job is to make the most secure product we can. Our job is not to help enforce laws." It's a rejection of the premise of the question, sure. But it's not a dodge. It's a clearly articulated moral stance.

Your paraphrase would be a moral stance, but he didn't actually say that. His answer ignores that Tor is used for evil; it doesn't come out and say that any evil created by Tor is a necessary byproduct of the good that it creates.

Comment: Managing Good and Evil (Score 1) 61

Tor can be used for obvious good (e.g., subverting oppressive regimes), obvious bad (e.g., murder for hire, child porn), and the semi-bad (e.g., purchasing contraband, hate speech). Despite all of the good that Tor does, how does Tor morally justify itself in light of all the bad that occurs on its network? Is there some way of weighing the good against the bad (i.e., if it got bad enough, would you shut it down)? Or does it decline to justify itself (i.e., it's just a tool, and people will use it how they wish)?

Comment: Re:And so therefor it follows and I quote (Score 1) 353

I'm all for free software, but this reasoning sounds insane. When people buy a PC, it says "comes with Windows", so you know what you're getting, and requiring manufacturers to refund half of it seems nuts. It's like ordering a cheeseburger and then demanding a refund for the cheese. Why didn't you just order a hamburger?

