
Comment Re: I manage Internet connections in 148 locations (Score 1) 116

Qwest are apparently doing 6rd so you should be able to get v6 with them too, albeit over a tunnel.

I have this set up, and can attest that it works reasonably well. The only real problem is that (presumably unlike native IPv6) you aren't assigned a static IPv6 prefix; it's tied to your dynamic IPv4 address. Consequently, I also have a Hurricane Electric tunnel configured with a static IPv6 prefix for use in DNS. This required some complicated source-based routing rules, though, so it's not for everyone. (You can't route HE packets out over the 6rd tunnel or vice versa, and normal routing only looks at the destination address. To make it work you have to set up multiple routing tables ("ip route add ... table ...") and select the proper one based on the source address ("ip rule add from ... table ...").)
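
For concreteness, the policy routing described above looks roughly like this. The interface names, table number, and prefixes are all made-up placeholders (2001:db8::/32 documentation addresses standing in for the real HE and 6rd prefixes):

```shell
# Hypothetical setup: "he-ipv6" = Hurricane Electric tunnel carrying the
# static prefix 2001:db8:1::/64; the 6rd tunnel carries the main table's
# default route for everything else.

# Give HE-sourced traffic its own routing table (100) with its own
# default route out the HE tunnel.
ip -6 route add default dev he-ipv6 table 100

# Packets whose source address falls in the HE prefix consult table 100;
# all other traffic falls through to the main table and the 6rd tunnel.
ip -6 rule add from 2001:db8:1::/64 table 100
```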

Of course, one could just pay extra $$$ for a static IPv4 address, which would provide a static 6rd prefix...

Comment Re:IoA (Score 3, Informative) 116

That would be well and fine if most IPv6 addresses didn't have a 64-bit or even 80-bit prefix, identical for everything routable at the endpoint.

That 64-bit network prefix is the equivalent of 4 billion entire IPv4 internets—and each "host" in each of those internets contains its very own set of 2**32 IPv4 internets in the 64-bit suffix. Quadrupling the number of bits from 32 to 128 means raising the number of addresses to the fourth power (2**32 vs. 2**128 = (2**32)**4). We can afford to spare a few bits for the sake of a more hierarchical and yet automated allocation policy that addresses some of the more glaring issues with IPv4, like the address conflicts which inevitably occur when merging two existing private networks.

Think of it this way: If we manage to be just half as efficient in our use of address bits compared to IPv4, it will still be enough to give every public IPv4 address its own private 32-bit IPv4 internet. Right now the vast majority of IPv6 unicast space is still classified as "reserved", so we have plenty of time to adjust our policies if it turns out that we need to be more frugal.
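
The arithmetic above is easy to sanity-check with BigInt (this is just a back-of-the-envelope verification, not part of the original argument):

```typescript
// Back-of-the-envelope check of the address-space math above.
const v4 = 2n ** 32n;    // total IPv4 addresses
const v6 = 2n ** 128n;   // total IPv6 addresses

// Quadrupling the bits raises the address count to the fourth power.
console.log(v6 === v4 ** 4n);       // true

// A 64-bit network prefix spans 2**64 subnets: 2**32 (4 billion)
// copies of an entire 2**32-address IPv4 internet.
const prefixes = 2n ** 64n;
console.log(prefixes === v4 * v4);  // true
```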

Then there are DHCP addressing schemes that use the MAC as part of the address, further reducing it.

Automatic address assignment (based on MAC or random addresses or whatever) comes out of the host-specific suffix, not the network prefix, so it doesn't reduce the number of usable addresses any more than the prefix alone. It does imply that you need at least a 64-bit host part in order to ensure global uniqueness without manual assignment, but the recommended 64-bit split between network and host was already part of the standard.
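
To illustrate how a MAC-based suffix fits into that 64-bit host part, here is a sketch of SLAAC's modified EUI-64 scheme (per RFC 4291); the function name and sample MAC are mine:

```typescript
// Sketch: build a modified EUI-64 interface identifier (the 64-bit
// host suffix) from a 48-bit MAC address, per RFC 4291 Appendix A.
function eui64FromMac(mac: string): string {
  const b = mac.split(":").map(h => parseInt(h, 16));
  // Flip the universal/local bit of the first octet...
  b[0] ^= 0x02;
  // ...and insert 0xFF 0xFE between the OUI and the NIC-specific half.
  const id = [b[0], b[1], b[2], 0xff, 0xfe, b[3], b[4], b[5]];
  // Group the 8 octets into four hextets, as in an IPv6 address suffix.
  const hex = id.map(x => x.toString(16).padStart(2, "0"));
  return [0, 2, 4, 6].map(i => hex[i] + hex[i + 1]).join(":");
}

// MAC 00:1a:2b:3c:4d:5e yields interface ID 021a:2bff:fe3c:4d5e
console.log(eui64FromMac("00:1a:2b:3c:4d:5e"));
```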

Comment Re:What I would do different is DNS related (Score 1) 116

1) First I would have done only countries and no other TLD.

Personally, I would have done the opposite, and demoted country-specific sites to a second-level domain like .us.gov. The Internet is an international network; forcing every domain to be classified first and foremost according to its national origin would cause needless discord. Only a small minority of sites are truly country-specific.

it could have been debian.cc or debian.de or any other that they wanted

In which case the country code would communicate zero information about the site—so why have it at all?

What might make more sense would be using registrars as TLDs (e.g. google.mm for MarkMonitor), with a convention that multiple TLDs can contain the same subdomains if and only if they mirror each other. This would tie in well with DNSSEC while also avoiding the need to defend one's domain name against scammers in a million separate TLDs.

If a government just happens to run its own registrar it could use the country code for its TLD alongside non-country TLDs. The main difference from the current system would be that TLDs would be generic rather than catering to a particular kind of site, which is mostly the case in practice anyway: .com no longer implies commerce, not every .org is a non-profit, .net does not imply an ISP, etc.

Instead, the TLD would imply a trust relationship; the name "google.mm" would imply looking up the "google" subdomain in the MarkMonitor domain registry, which would presumably be listed among the user's local trust anchors. If there were an alternative domain like "google.vs" (for VeriSign) it would be required to resolve to the same address.

Comment Re:Do away with them (Score 1) 86

But how would you even do that with dynamic languages, where the type can just change at runtime?

Obviously you can't, which is one of the arguments against programming in dynamically-typed (unityped) languages. This is why TypeScript exists: a statically-typed JavaScript derivative which compiles down to plain JS after proving that the types are satisfied (i.e. performing static code analysis), much as any other statically-typed language compiles down to unityped machine code.

Furthermore, TypeScript is handling null just like Java.

No, it isn't. Both TypeScript and Java will complain about uninitialized variables, but Java will not produce a compile-time error if you set the variable to null (directly or indirectly) and then try to use it as a reference. TypeScript (with strictNullChecks enabled) will, unless you explicitly check that the value is not null before using it. (Checking for null narrows the type from nullable to non-null within the scope of the condition.)

declare function arbitrary(): string | null;
let x: string;
let y: string | null;
x = arbitrary(); // Error, type 'string | null' is not assignable to type 'string'.
y = arbitrary(); // Fine
x.length; // Fine, x is non-nullable.
y.length; // Error, object is possibly 'null'.
if (y != null) {
  y.length; // Fine, y is non-null in this scope.
}

Comment Re:Do away with them (Score 1) 86

A NPE means you have a bug in your code, and it's better for the app to crash than to corrupt your data, or silently just lose it.

Even better would be to detect the bug statically, at compile time, as a type error, so that your program doesn't crash at some arbitrary point later and lose all the user's data.

The point is not to eliminate the concept of nullable references, which are indeed useful for representing data which is not available. The point is to distinguish between such nullable references and references which cannot be null so that the compiler can check that all the nullable references have been properly handled and warn you about any and all potential null pointer issues in advance.

Comment Re:Do away with them (Score 1) 86

The problem isn't that the language has nullable references, it's that it doesn't have a reference type which cannot be null. A nullable reference is isomorphic to the Optional or Maybe type available in most "null-free" languages, and this certainly has perfectly legitimate uses.

The issue in a language like C, Java or JavaScript is that every single operation on references takes this nullable reference type as input, and the vast majority of those operations assume that the reference will not be null and generate a runtime error otherwise. In a more principled language there would be a type distinction between a reference which may be null and a reference which cannot be null, and the programmer would need to destructure the nullable value into either null or a non-null reference before using it in operations which do not expect null. This eliminates a wide variety of common mistakes by making them type errors at compile time rather than subtle runtime errors which may or may not cause the program to crash, depending on the inputs.

If the program is written correctly then this destructuring happens at the same places where you already should be checking whether the reference is null before using it, so it doesn't even make the program significantly longer. You just need to annotate your type declarations to indicate where the reference is allowed to be null.
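
In TypeScript terms (with strictNullChecks), the distinction sketched above looks like this; the interface and function names are made up for illustration:

```typescript
// The annotation says exactly where null is allowed: 'name' cannot be
// null, 'email' may be. (Hypothetical types for illustration.)
interface User {
  name: string;
  email: string | null;
}

function mailtoLink(u: User): string | null {
  // u.email has type 'string | null' here; it must be destructured
  // before use in operations that do not expect null.
  if (u.email === null) return null;  // the null case, handled explicitly
  return `mailto:${u.email}`;         // u.email is narrowed to 'string'
}
```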

SQL databases are actually a fairly good example of how null values should be implemented, because you can specify whether a field can or cannot be null in the table definition and this constraint will be enforced by the database.

Comment So Palmer supports a fascist demagogue. (Score 5, Interesting) 847

Guess I shouldn't be surprised. Glad I gave up on Oculus the second Facebook bought them.

He's proven himself to be a duplicitous piece of shit since the acquisition. This is not shocking.

Hillary is also a piece of shit, but not one that would immediately alienate 90% of the rest of the planet, and likely plunge us into thermonuclear war within 6 months of taking office.

Comment Re:I Think this article might be a bit misleading. (Score 1) 189

Thank you. That is exactly what I said.

The only part of quantum entanglement that is "instantaneous" (or "FTL") is that when one party performs its measurement, the wave functions for both of the entangled particles collapse out of their superposed states simultaneously, no matter how far apart they might be. However, this does not communicate any information by itself; for that the two parties still need a classical channel. As you say, nothing is transferred FTL. An observer cannot tell that the wave function has collapsed without making a measurement, which would collapse the wave function anyway, and without a separate channel there is no way to know whether the other party observed the same quantum state.
