It would have helped if the summary had pointed at the actual Nature article or the arXiv preprint.
This post sums up the concept well enough. Each OAM value (usually denoted by the letter l) means that the phase of the light changes by 2pi*l going once around the beam centre. So l=0 is no change in phase, l=1 is a 2pi change, l=2 a 4pi change, and so on. There's no upper limit to OAM values, and light waves with different OAM are orthogonal, so you can theoretically have infinitely many beams with no interference between them. There are no more theoretical problems with this than with having, say, an infinite number of GigE cards and cables. There is no way you could build something that actually uses an infinite number of beams with individual vorticities, of course, but that's true of the infinite gigabit links too.
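The orthogonality claim is easy to check numerically. This is my own illustration, not anything from the article: the azimuthal phase profile of an OAM beam with topological charge l is exp(i*l*theta), and the overlap of two such profiles over one full turn vanishes whenever the charges differ.

```python
import numpy as np

# Azimuthal angle sampled once around the beam centre.
theta = np.linspace(0, 2 * np.pi, 10000, endpoint=False)

def mode(l):
    # Phase profile of an OAM beam with topological charge l.
    return np.exp(1j * l * theta)

def overlap(l1, l2):
    # Magnitude of the numerical inner product <l1|l2> over one turn.
    return np.abs(np.mean(np.conj(mode(l1)) * mode(l2)))

print(overlap(1, 1))  # ~1: a mode overlaps fully with itself
print(overlap(1, 3))  # ~0: distinct OAM values are orthogonal
```

Since the modes are orthogonal, a receiver can in principle project out each l-channel independently, which is what makes the "many beams, no interference" claim work.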
While that's true, a standard (and a popular library) for SCTP-over-UDP could be created. At most, you'd need a single well-known UDP port for inbound SCTP-over-UDP (9899 is suggested by the Internet draft for this). SCTP ports would then be used to distinguish between separate SCTP-using services on the server. I'm sure the existing Linux and BSD SCTP stacks could support this with little effort. Firewalls that only permit HTTP/HTTPS would block this variant, but it would work well enough through NATs, especially if the multiple-endpoint parts of standard SCTP were left out.
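The encapsulation itself is trivial: the whole SCTP packet (common header plus chunks) just becomes the payload of a UDP datagram. A rough sketch, assuming the draft's suggested tunnel port; the header layout is the standard 12-byte SCTP common header from RFC 4960, and the checksum is left at zero here (a real stack computes CRC32c over the packet):

```python
import struct

SCTP_OVER_UDP_PORT = 9899  # tunnel port suggested by the Internet draft (assumption)

def sctp_common_header(src_port, dst_port, vtag, checksum=0):
    # 12-byte SCTP common header: source port, destination port,
    # verification tag, checksum (CRC32c in a real implementation).
    return struct.pack("!HHII", src_port, dst_port, vtag, checksum)

# The UDP payload is simply the SCTP packet. The UDP port identifies
# the tunnel; the SCTP ports inside distinguish individual services.
payload = sctp_common_header(5000, 80, 0xDEADBEEF)
print(len(payload))  # 12
```

This is also why the NAT story is plausible: to a NAT the whole thing is just ordinary UDP, so only the multi-homing features of full SCTP cause trouble.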
I'd forgotten about the Sprint 2600:: address.
I suspect that the real answer was that fc00::/7 was created just to keep all the anti-publicly-allocated-address people happy, and was never taken that seriously. All the real connectivity would use link-local or suitably firewalled global address space. VPNs between separate companies would be handled via IPsec, so no new addresses would be needed. Of course, that works fine in theory, but will probably never happen in reality...
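For anyone who hasn't looked at how fc00::/7 is meant to be carved up: the usable half is fd00::/8, and a site is supposed to append a pseudo-random 40-bit Global ID to get its /48 (this is the RFC 4193 scheme). A minimal sketch, assuming `os.urandom` is an acceptable stand-in for the RFC's suggested generation algorithm:

```python
import os
import ipaddress

def random_ula_prefix():
    # fd (L bit set) + 40 pseudo-random bits of Global ID + zeros = a /48
    # unique-local prefix for the site, per the RFC 4193 scheme.
    global_id = os.urandom(5)  # 40 random bits
    addr_int = int.from_bytes(bytes([0xFD]) + global_id + bytes(10), "big")
    return ipaddress.IPv6Network((addr_int, 48))

net = random_ula_prefix()
print(net)  # e.g. fd12:3456:789a::/48
```

The randomness is the whole point: two companies that later merge (or VPN together) are overwhelmingly unlikely to have picked colliding prefixes, which is the problem RFC 1918 space has today.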
Have you been living under a rock? In the U.S. every major city has 100Mbit+ net service offered by many providers to residents and businesses alike. It would be more difficult not to find 100Mbit in a major urban area than it would be to find it.
Not living in the US, I wouldn't know. All I have to go on are the reports on
And now you're talking about schools running their own fiber! I'm sorry, are they going to employ their own linemen and NOC operators?
That'd be done either by whatever organisation runs the local school system or some group set up specifically as an ISP for the schools. (Assuming we're using JANET as an example...)
You also gotta realize that the UK is about the size of California. No, sorry, quite a bit smaller. And that's just one of 50 states that need to be interconnected. Ever hear of Alaska? Yeah, it's about the size of France; who thinks that'll be cheap to fiber up? I'll bet it costs the same to cross 20,000 miles of roads as it does to cross 2,000 miles, right? Maybe you can find me a single citation for the cost you're talking about. I've looked up the prices; my estimate is quite conservative. Your price of "virtually nothing" is as laughable as it is fantastic. As in: it only exists in your fantasy.
This excuse comes up way too often. I imagine it's perfectly easy to wire up New York and its surrounding areas for not much more than Sweden cost. Meanwhile, there seems to be nowhere in the US with 100Mb/s Internet that isn't sitting on a university WAN. There is nothing at all stopping an ultrafast ISP from being set up specifically in San Francisco, New York or other cities, ignoring anywhere remotely rural.
"I already have 50Mbps at home, going to 100Mbps sometime soon, with probably a 20Mbps backup - all for me."
I'm sure that makes it easy to download all those textbooks. But seriously, what the hell are you doing with 50Mbps? Oh, I'm sure you have a really good reason to need to double that bandwidth too.
I imagine he's using Virgin at home, in which case the speed doubling is their doing, not the poster's choice...
So with that disclaimer out of the way, does anyone think that prolonged disputes like these might actually end up slowing the widespread adoption of IPv6? With IPv4, the number of addresses you have to block to effectively blacklist a site that the recognized powers have deemed offensive is substantially smaller than it could be with IPv6. Even though there may be many IPv4 addresses available right now, that number is still shrinking daily, and cannot possibly last more than a few more years. With a full-scale move to IPv6, even *hoping* to block an organization by individual IP would be completely impossible on any timescale humans could identify with. So would the organizations trying to shut off places like The Pirate Bay be lobbying to slow (or even halt) the adoption of IPv6, so that what they are trying to do here doesn't end up becoming completely unworkable? Why, or why not?
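The arithmetic behind "impossible to block by individual IP" is worth making concrete. A purely illustrative sketch (the prefix and helper are my own, using the IPv6 documentation range): a single site allocation holds an astronomical number of addresses, so any practical blocklist has to work on prefixes rather than hosts.

```python
import ipaddress

# A typical IPv6 site allocation: one /48 holds 2**80 addresses,
# so enumerating individual hosts for a blocklist is hopeless.
site_prefix = ipaddress.ip_network("2001:db8:1234::/48")  # documentation prefix
print(site_prefix.num_addresses)  # 2**80

def is_blocked(addr, blocklist):
    # Prefix-based check: one blocklist entry covers every address
    # the site could ever assign within its allocation.
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in blocklist)

blocklist = [site_prefix]
print(is_blocked("2001:db8:1234:abcd::1", blocklist))  # True
print(is_blocked("2001:db8:9999::1", blocklist))       # False
```

Which suggests IPv6 changes the mechanics (block prefixes, not hosts) more than it changes the feasibility.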
Blocking by IP address appears to be near-impossible now; I fail to see that IPv6 will make it worse.
On the subject of blocking IPv6 hosts, if a
Ultimately though, the thing to remember is that at some point, people need to know what the current address is. I imagine the address-distribution methods would be disrupted to prevent updates, and then the IPs would be blocked. That applies regardless of the underlying protocol!
Apparently, the SDK has always had a basic compiler included.
As for alternatives, that's probably what will happen; people without MSDN access will just use GCC or Clang instead. However, given that the open source alternatives are far better supported under Linux or OS X, why write software for Windows? We're more likely to get new software projects targeting Linux, OS X or the mobile equivalents (Android/iOS) and ignoring Windows entirely. Alternatively, we get more web apps hosted on Linux servers that do not care about the type of client used. Either way, Microsoft and Windows users end up losing out on native software.
Neutrinos have bad breadth.