Comment Re:One is enough (Score 1) 94

In most cases, the splitter will be hard-wired to the antenna. Installing the filter on the antenna side would mean cutting into the existing coaxial cable, which most people would need an engineer to do. The affected houses will probably get a small box that plugs in between the aerial and the TV or set-top box instead. Something similar was done when Channel 5 was first broadcast on analogue, to prevent interference with video recorders. Pretty much anyone can handle plugging such a filter box in, so the installation cost is effectively zero.
Patents

Submission + - UN Wades Into Patent War Mess (bbc.com)

Rambo Tribble writes: The BBC is reporting that the worldwide, tangled mess of IP litigation has come to the attention of the UN's International Telecommunication Union. The agency has announced it will be holding talks aimed at reducing this massive drag on the digital economy. Good luck.

Comment Re:Just one word: WOW! (Score 1) 142

This post sums up the concept well enough. Each OAM value (usually denoted by the letter l) means that the phase of the light winds by l x 2pi per revolution around the beam centre: l=0 is no change in phase, l=1 is a 2pi change per turn, l=2 is 4pi, and so on. There's no upper limit to the OAM value, and light waves with different OAM are orthogonal, so you can theoretically multiplex infinitely many beams with no interference between them. There are no more theoretical problems with this than with having, say, an infinite number of GigE cards and cables. There's no way you could build something that actually uses an infinite number of beams with individual vorticities, of course, but the same is true of the infinite gigabit links.
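A quick numerical sketch of that orthogonality point (purely illustrative, with made-up names): the overlap of two azimuthal phase profiles exp(i*l*phi) vanishes whenever the two l values differ.

    # Illustrative sketch only: numerically check that OAM phase profiles with
    # different l values are orthogonal around the beam axis.
    import numpy as np

    phi = np.linspace(0, 2 * np.pi, 4096, endpoint=False)

    def oam_mode(l):
        """Azimuthal phase profile of an OAM beam: the phase winds by l * 2*pi per turn."""
        return np.exp(1j * l * phi)

    def overlap(l1, l2):
        """Magnitude of the normalised inner product of two modes."""
        return np.abs(np.mean(oam_mode(l1) * np.conj(oam_mode(l2))))

    print(overlap(1, 1))  # ~1.0: a mode overlaps fully with itself
    print(overlap(1, 2))  # ~0.0: different OAM values don't interfere
    print(overlap(0, 3))  # ~0.0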

Comment Re:Fixing the problem on the wrong layer (Score 1) 135

While that's true, a standard (and a popular library) for SCTP-over-UDP could be created. At most, you'd need a single well-known UDP port for inbound SCTP-over-UDP (the Internet draft suggests 9899 for this). The SCTP ports would then be used to distinguish between the separate SCTP-using services on the server. I'm sure the existing Linux and BSD SCTP stacks could support this with little effort. Firewalls that only permit HTTP/HTTPS would block this variant, but it would work well enough through NATs, especially if the multiple-endpoint parts of standard SCTP were left out.
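As a rough illustration of the encapsulation (a sketch under my own assumptions, not an existing stack), the receiving side only needs to listen on the one well-known UDP port and treat each datagram payload as a complete SCTP packet, pulling the SCTP ports out of the 12-byte common header:

    # Illustrative sketch: receive SCTP-over-UDP datagrams and parse the SCTP
    # common header (source port, destination port, verification tag, checksum).
    # A real stack would hand the payload to its SCTP machinery instead.
    import socket
    import struct

    SCTP_OVER_UDP_PORT = 9899  # the well-known encapsulation port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", SCTP_OVER_UDP_PORT))

    while True:
        payload, (src_ip, src_udp_port) = sock.recvfrom(65535)
        if len(payload) < 12:
            continue  # too short to contain an SCTP common header
        sctp_src, sctp_dst, vtag, checksum = struct.unpack("!HHII", payload[:12])
        # The SCTP ports, not the UDP port, select the service on this host.
        print(f"SCTP packet from {src_ip}: port {sctp_src} -> {sctp_dst}, vtag={vtag:#x}")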

Comment Re:Real lesson -- make guessing expensive! (Score 1) 198

The issue is that with unsalted hashes, you only need to do the brute-force work once to generate your rainbow table, at which point all the passwords are cracked simultaneously. (OK, in practice you wouldn't wait for a full rainbow table to be produced; you'd download a list of pre-computed common passwords and start with those.) With a salt and a strong password, you have to redo all of that brute-force work just to obtain a single password, even if you have several GPUs doing the calculations.
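To make that concrete, here's a minimal salted-hash sketch (my own illustration, using Python's standard pbkdf2_hmac): hashing the same password with two fresh salts gives unrelated digests, so a precomputed table only ever helps with the one salt it was built for.

    # Illustrative sketch: salting means the same password stores differently for
    # every account, so precomputed (rainbow) tables cannot be reused.
    import hashlib
    import os

    def hash_password(password, salt=None):
        """Return (salt, digest) using a deliberately slow key-derivation function."""
        salt = salt if salt is not None else os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    salt_a, digest_a = hash_password("correct horse battery staple")
    salt_b, digest_b = hash_password("correct horse battery staple")
    print(digest_a != digest_b)  # True: same password, different stored hashes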

Comment Re:Privacy Concerns (Score 1) 244

I'd forgotten about the Sprint 2600:: address. :(

I suspect that the real answer was that fc00::/7 was created just to keep all the anti-publicly-allocated-address people happy, and was never taken that seriously. All the real connectivity would use link-local or suitably firewalled global address space. VPNs between separate companies would be handled via IPSEC, and no new addresses would be needed. Of course, that works fine in theory, but will probably never happen in reality...
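For reference, the fd00::/8 half of fc00::/7 is carved up per RFC 4193 by picking a pseudo-random 40-bit global ID per site. A simplified sketch of my own (it just draws random bits rather than following the RFC's hash-based procedure):

    # Illustrative sketch: build an RFC 4193-style unique local /48 prefix from a
    # random 40-bit global ID. The randomness is what makes collisions unlikely
    # if two privately-addressed sites are later connected together.
    import ipaddress
    import secrets

    def random_ula_prefix():
        global_id = secrets.randbits(40)
        prefix = (0xFD << 120) | (global_id << 80)  # fd00::/8 plus the global ID
        return ipaddress.IPv6Network((prefix, 48))

    print(random_ula_prefix())  # e.g. fd3c:95b2:11aa::/48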

Comment Re:Ridiculous government waste as usual (Score 1) 292

Have you been living under a rock? In the U.S. every major city has 100mbit+ net service offered by many providers to residents and businesses alike. It would be more difficult to not find 100mbit in a major urban area than it would be to find it.

Not living in the US, I wouldn't know. All I have to go on are the reports on /. and other such sites, which give the impression that those speeds are far from the norm. (If they really are that widespread, this whole thread is rather pointless, apart from the issue of rural schools.) Not that 100mbit+ internet for end users is that common outside the US yet, either...

Comment Re:Ridiculous government waste as usual (Score 1) 292

And now you're talking about schools maintaining their own fiber! I'm sorry, are they going to employ their own linemen and NOC operators?

That'd be done either by whatever organisation runs the local school system or by some group set up specifically as an ISP for the schools. (Assuming we're using JANET as an example...)

You also gotta realize that the UK is about the size of California. No, sorry, quite a bit smaller. And that's just 1 of 50 states that need to be interconnected. Ever hear of Alaska? Yeah, it's about the size of France; who thinks that'll be cheap to fiber up? I'll bet it should cost the same to cross 20,000 miles of roads as it does to cross 2,000 miles. Maybe you can find me a single citation for the cost you're talking about. I've looked up the prices; my estimate is quite conservative. Your price of "virtually nothing" is as laughable as it is fantastic. As in: it only exists in your fantasy.

This excuse comes up way too often. I imagine it's perfectly easy to wire up New York and its surrounding areas for not much more than it costs to wire up Sweden. Meanwhile, there seems to be nowhere in the US with 100Mb/s Internet that isn't sitting on a university WAN. There is nothing at all stopping an ultrafast ISP from being set up specifically for San Francisco, New York or other cities while ignoring anywhere remotely rural.

"I already have 50Mbps at home, going to 100Mbps sometime soon, with probably a 20Mbps backup - all for me."

I'm sure that makes it easy to download all those textbooks. But seriously, what the hell are you doing with 50Mbps? Oh, I'm sure you have a really good reason to need to double that bandwidth too.

I imagine he's using Virgin at home, in which case the speed doubling is their doing, not the poster's choice...

Comment Re:Consider... (Score 1) 224

So with that disclaimer out of the way, does anyone think it is possible that prolonged disputes like these might actually end up slowing the widespread adoption of IPv6? With IPv4, the number of addresses you have to block to effectively blacklist a site that the recognized powers have deemed offensive is substantially smaller than it could be with IPv6. Even though there may be many v4 IPs available right now, that number is still shrinking daily and cannot possibly last more than a few more years. With a full-scale move to IPv6, even *hoping* to block an organization by IP would be completely impossible on any time scale humans could identify with. So would the organizations trying to shut off places like The Pirate Bay be lobbying to slow (or even halt) the adoption of IPv6, so that what they are trying to do here doesn't end up becoming completely unworkable? Why, or why not?

Blocking by IP address appears to be near-impossible now; I fail to see how IPv6 would make it worse.

On the subject of blocking IPv6 hosts: if a /64 (or whatever) is owned entirely by the organisation you want to block, blocking is easy. If it's shared with other users, though, a lot will depend on the hosting company's willingness to help. Some may force a specific address per client somehow (DHCPv6?), which makes blocking no harder than at present, or be willing to act on take-down requests. With an unhelpful hosting company, the offending site could try fast-flux DNS techniques, hopping between temporary addresses within the local /64. At that point I suspect the whole /64 would be blocked regardless of the collateral damage. Alternatively, DNS blocks could be used, with the same effectiveness that we see today.
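Blocking the whole /64 is at least trivial to implement; a small sketch of my own, using documentation prefixes, just to illustrate:

    # Illustrative sketch: a containment test against the blocked /64 catches every
    # address the target can hop to inside that prefix, fast-flux or not.
    import ipaddress

    blocked = ipaddress.ip_network("2001:db8:dead:beef::/64")

    def is_blocked(addr):
        return ipaddress.ip_address(addr) in blocked

    print(is_blocked("2001:db8:dead:beef::1"))        # True
    print(is_blocked("2001:db8:dead:beef:aaaa::42"))  # True: same /64, still caught
    print(is_blocked("2001:db8:dead:c0de::1"))        # False: different /64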

Ultimately, though, the thing to remember is that at some point people need to know what the current address is. I imagine the address-distribution methods would be disrupted to prevent updates, and then the IPs would be blocked. That applies regardless of the underlying protocol!

Comment Re:Wait, what now? (Score 3, Informative) 462

Apparently, the SDK has always had a basic compiler included.

As for alternatives, that's probably what will happen; people without MSDN access will just use GCC or Clang instead. However, given that the open source alternatives are far better supported under Linux or OS X, why write software for Windows? We're more likely to get new software projects targeting Linux, OS X or the mobile equivalents (Android/iOS) and ignoring Windows entirely. Alternatively, we get more web apps hosted on Linux servers that do not care about the type of client used. Either way, Microsoft and Windows users end up losing out on native software.
