
Comment Re:Antitrust... (Score 0) 222

Check this part out:

Amazon emailed marketplace sellers to inform them it is not accepting new listings for the two media devices, and any existing listings will be removed at the end of October.

Amazon is telling *other stores* what they can sell. Amazon isn't just a store: They are a search engine for goods and an online payment processor. This is not like Best Buy refusing to sell a certain product. This is closer to Visa refusing to broker payments that involved competing products. Or perhaps Microsoft refusing to host web sites that offered products that compete with Microsoft Word. They may not be considered a monopoly in this space, but it sure looks anticompetitive.

It would be one thing if Amazon refused to sell the item themselves, but telling others they can't even list it on their service is a step further.

Comment Re:Why? (Score 1) 182

Hey Doug: See my reply to the AC above. It's clear we aren't all talking about the same thing. The parent posts and the article imply that the OS can automatically resume any network connection of any kind in any application. It does not do that. Then they run the Netflix app and go "see, it works!" when they are really testing the Netflix app, not the iOS 9 feature. What Apple provided is an efficient way for the app to switch networks. That's not the same thing. Try logging into your web banking, playing a Flash video, or playing a video game that needs a realtime connection. Everyone is mixing up a feature of the Netflix app with a feature of the OS.

Comment Re: Why? (Score 1) 182

Try it! Start a movie on Netflix with wifi on. Disable wifi while it plays. The movie will not even stutter.

That's not what they are talking about. That works because Netflix has created a protocol and an application that can do this. But the claim I am trying to refute is that this works with ANY application doing ANYTHING.

From the article:

That's helpful if you're in the middle of watching a video or some other task on the internet that you don't want interrupted by spotty Wi-Fi service.

From the post I replied to:

...And packets can be re-routed and resent. You might not be used to being able to swap and change like this on your existing phone. But that's the point, this is a new feature that other OSs don't do as yet.

So what BasilBrush and the press releases from Apple imply is that iOS 9 has a new feature no other OS has: it can transparently make streams continue on another interface, with no support from the application software. I am trying to clarify that no, that is not what Apple did, because that is not possible. Showing that one app can do this doesn't show that the OS can do it for all apps.

Someone should try playing a fast-action game online, streaming a Flash video, or logging in to a bank's bill-pay web site.

Point closed. No magic. Just clever programming by Netflix engineers.

Comment Article is insufficient (Score 3, Insightful) 444

The article does not provide enough information to support meaningful discussion or criticism. It offers no justification or data, only high-level conclusions, and those conclusions apply only to a particular implementation of a program in a particular state, so no generalizations can be drawn. It does not provide any links to information about the program or the research. Unless someone wants to do that research and provide it in the summary, there is simply nothing to see here.

Comment Re:Why? (Score 1) 182

This whole feature doesn't even sound possible. I can't find any details about how this feature is supposed to work, but there has to be more to it than "it magically opens another connection and it just works." The Wi-Fi and cellular connections have different IP addresses, so the packets would suddenly be coming from a different address. TCP and UDP do not support that.

At the transport layer, suppose a phone is on Wi-Fi at, say,, is authenticated, and is receiving data, and suppose the cell connection is There's no way to tell the server "Hey, I know I'm on, but I'm actually that guy who was on a moment ago, so start routing my packets here." You can't pick up a TCP stream and just continue it on another IP address. UDP won't work either, because the server will ignore packets from and keep sending to That is why cellular voice connections use special protocols where the towers negotiate with each other. There are unique design considerations for such a hand-off, and most protocols don't consider them.
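
Here's a minimal sketch of the problem at the socket level (Python; the addresses match the example above and are made up, so this won't actually run against a real server):

    import socket

    # A TCP connection is identified by the 4-tuple
    # (source IP, source port, destination IP, destination port).
    wifi = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    wifi.bind(("", 0))        # wifi interface (example address)
    wifi.connect(("", 443))    # hypothetical streaming server
    print(wifi.getsockname())             # e.g. ('', 54321)

    # "Moving" to cellular means opening a completely new socket.
    cell = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cell.bind(("", 0))         # cellular interface (example address)
    cell.connect(("", 443))
    # The server sees an unrelated connection with a different 4-tuple.
    # Nothing in TCP ties this socket to the first one's session.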

Supposing the transport layer could solve this, the session layer won't allow it. When you log in to a network service, you send credentials and get back some kind of security token. Those tokens are usually not valid when sent from another IP address. That's a pretty common security best practice.
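
As a sketch of that best practice (hypothetical server-side logic, not any particular framework):

    # Hypothetical session store: each token remembers the IP it was issued to.
    sessions = {}  # token -> IP address at login time

    def log_in(token, client_ip):
        sessions[token] = client_ip

    def validate(token, client_ip):
        # A token presented from a different address is treated as stolen.
        return sessions.get(token) == client_ip

    log_in("abc123", "")            # authenticated over wifi
    print(validate("abc123", ""))   # True: same address
    print(validate("abc123", ""))    # False: cellular address rejected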

You would need the application to realize that the connection went bad, then renegotiate the connection on the other IP address by sending the login credentials and accepting a new security token. Then it would need to tell the server to continue the connection from the point it left off. The OS can't do that for you.
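
In HTTP terms, that app-level recovery boils down to re-authenticating and asking the server for the data from a byte offset. Here's a rough sketch using the standard library (the URL and the bearer-token scheme are my own assumptions, not anything Apple or Netflix documented):

    import urllib.request

    def resume_fetch(url, auth_token, offset):
        # A new request means a new TCP connection, possibly from the new interface.
        req = urllib.request.Request(url)
        req.add_header("Authorization", "Bearer " + auth_token)  # re-present credentials
        req.add_header("Range", "bytes=%d-" % offset)            # continue where we left off
        return urllib.request.urlopen(req)

    # e.g. resume_fetch("", token, bytes_already_received)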

It seems to me that even if the OS transparently sent the packets from another IP, the server somehow got those packets, and for some reason the TCP stack routed them to the application - which it would not - any well-written service would probably assume it was a hack and log both connections out. Or at least ignore the second one.

I also wonder what the OS would do if both connections returned data. Now there are two response streams for a single outgoing stream.

The only way that I could see this working is if some other server in the middle is proxying all your data, and there is a way to tell the proxy about your new IP address.
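
Here's the proxy idea sketched out (a completely hypothetical protocol, just to show the shape of it): streams are keyed by a session ID instead of by the client's address, so a rebind is legal.

    from dataclasses import dataclass

    @dataclass
    class Stream:
        reply_to: tuple  # the client's current (ip, port)
        upstream: list   # stands in for the real connection to the server

    streams = {}  # session id -> Stream

    def handle_client_packet(session_id, client_addr, payload):
        stream = streams.get(session_id)
        if stream is None:
            return  # unknown session: drop it
        stream.reply_to = client_addr    # rebind replies to wherever the client is NOW
        stream.upstream.append(payload)  # forward to the server (stubbed out)

    streams["sess-42"] = Stream(reply_to=("", 5000), upstream=[])
    handle_client_packet("sess-42", ("", 5000), b"next chunk")
    # The stream survives the address change because the proxy never cared
    # about the IP in the first place, only the session ID.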

Here's a SO post on the topic of changing IP addresses:
Here's an academic paper on a proposed modification to TCP to allow this:

Comment Re:Faster..? (Score 3, Interesting) 85

Yes. I am having a hard time finding a good article on this, so I will attempt to explain. I'm a software guy with limited VLSI and electrical experience, so I bet 100 people will jump in and correct me on parts of this. But here goes...

I think the hope is that optical circuits would be lower resistance, be less susceptible to heat, not cause magnetic fields, and not act as transmitters or receivers.

When electricity passes through a wire, it experiences resistance. That resistance slows the signal and creates waste heat. "Slows the signal" means two things: first, it takes longer for the current to flow to the destination; second, since current was lost to heat, it takes longer for the destination to sink enough current to turn on. And as the wire heats up, it becomes a poorer conductor.
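
To put a rough number on "slows the signal" (the values below are illustrative, not from any real process):

    # Back-of-the-envelope RC wire delay: t ~ 0.69 * R * C for a 50% voltage swing.
    R = 1_000      # wire resistance in ohms (illustrative)
    C = 100e-15    # wire capacitance in farads (100 fF, illustrative)
    delay = 0.69 * R * C
    print(delay)   # ~6.9e-11 s: about 69 picoseconds just to charge one wire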

Also, due to the way transistors work, they briefly short-circuit while they are switching. So the longer it takes for the current to build up at the transistor's gate, the longer it short-circuits, which produces still more heat.

Another problem is that electricity in a wire creates a magnetic field. This creates more losses, but it can also cause some of the signal to jump to a neighboring wire. As transistors and wires get smaller, that kind of crosstalk becomes increasingly likely.

Electronic circuits are also sources of, and susceptible to, external noise. A 2GHz CPU is a (weak) 2GHz transmitter. And a 2GHz transmitter could induce a voltage on wires within the CPU. I don't know how much of a problem this is, though, since the wires in the CPU are very small.

Comment Re:How does injecting a cookie expose data? (Score 1) 66

What do you mean they don't allow subdomains?

Those domains don't sell subdomains to 3rd parties. I can't go and buy my own subdomain under any of them.

The point of the attack is to MITM non-HTTPS sessions with a subdomain to manipulate future HTTPS sessions.

AHhhhhhhhhhhh!!!!!!!! (I didn't read the PDF - just the linked article.) Now that you say this, the article makes much more sense. They MITM the HTTP session and set a cookie, and that cookie is then read by the HTTPS session. The cookie spec was supposed to take that into account, since it has flags for secure and not-secure cookies. Sounds like the browsers aren't really abiding by that, probably because it used to be more common to mix HTTP and HTTPS.
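
Here's the cookie mechanics sketched with Python's standard library. The Secure flag only controls where a cookie gets *sent*, not who was allowed to *set* it, which is exactly the hole ( is a stand-in for the target site):

    from http.cookies import SimpleCookie

    # What the attacker injects during the plain-HTTP MITM:
    injected = SimpleCookie()
    injected["sessionid"] = "attacker-chosen-value"
    injected["sessionid"]["domain"] = ""  # stand-in for the target site
    injected["sessionid"]["secure"] = True           # the attacker can even set this!
    print(injected.output())
    # Set-Cookie: sessionid=attacker-chosen-value;; Secure

    # The browser stores it with no record of it having arrived over plain HTTP,
    # and will send it along on the next HTTPS request to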

Comment Re:How does injecting a cookie expose data? (Score 1) 66

Thanks for the write-up. It clarifies things a lot. But there is something unfair about these examples. You show how this attack could be used against a few big-name sites -- but only if those sites allowed subdomains. Since they don't offer subdomains, it seems inappropriate to use them even as hypothetical examples. The github example is a good one.

The second means of injecting cookies seems to go without saying. If someone can MITM your connection outright, then you are screwed. They don't need this attack.

Submission Modern browsers are undefended against cookie-based MITM attacks over HTTPS->

An anonymous reader writes: An advisory from CERT warns that all web browsers, including the latest versions of Chrome, Firefox, Safari and Opera, have 'implementation weaknesses' which facilitate attacks on secure (HTTPS) sites via the use of cookies, and that implementing HSTS will not close the vulnerability until browsers stop accepting cookies from sub-domains of the target domain. This attack is possible because although cookies can be marked as HTTPS-only, there is no mechanism to determine where they were set in the first place. Without this chain of custody, attackers can 'invent' cookies during man-in-the-middle (MITM) attacks in order to gain access to confidential session data.

Comment Re:Slightly more technical (Score 1) 111

To set the record straight: I skimmed the article and missed the "no false positives" claim. Doh! But I am skeptical of that claim. The article says:

And since the technique offers up the full genomes of whatever virus it detects, it shouldn't throw up any false positives

That's like me saying "because I showed my work, my answer cannot be wrong." Just because it gives the sequence of what it thinks it found doesn't mean that it was actually present, or that the sequence is correct. I've worked on PCR systems, but never sequencing systems. Are they really 100% perfect? If so, then... wow... that's amazing.

A bug in the hand is better than one as yet undetected.