
Comment Re: I manage Internet connections in 148 locations (Score 1) 123

Qwest are apparently doing 6rd so you should be able to get v6 with them too, albeit over a tunnel.

I have this set up, and can attest that it works reasonably well. The only real problem is that (presumably unlike native IPv6) you aren't assigned a static IPv6 prefix; it's derived from your dynamic IPv4 address. Consequently, I also have a Hurricane Electric tunnel configured with a static IPv6 prefix for use in DNS. This required some complicated source-based routing rules, though, so it's not for everyone. (You can't route HE packets out over the 6rd tunnel or vice versa, and normal routing only looks at the destination address. To make it work you have to set up multiple routing tables ("ip route add ... table ...") and select the proper one based on the source address ("ip rule add from ... table ...").)
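The quoted commands can be fleshed out into a minimal sketch; the interface name, table number, and prefix below are hypothetical placeholders, not my actual configuration:

```shell
# Sketch of the source-based routing described above (illustrative only;
# "he-tunnel", table 100, and the prefix are made-up examples).

# Separate routing table whose default route points at the HE tunnel:
ip -6 route add default dev he-tunnel table 100

# Select that table whenever the packet's *source* address is in the
# HE-assigned prefix; everything else falls through to the main table,
# whose default route points at the 6rd tunnel:
ip -6 rule add from 2001:db8:1234::/48 table 100
```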

Of course, one could just pay extra $$$ for a static IPv4 address, which would provide a static 6rd prefix...

Comment Re:IoA (Score 3, Informative) 123

That would be well and fine if most IPv6 addresses didn't have a 64-bit or even 80-bit prefix, identical for everything routable at the endpoint.

That 64-bit network prefix is the equivalent of 4 billion entire IPv4 internets—and each "host" in each of those internets contains its very own set of 2**32 IPv4 internets in the 64-bit suffix. Quadrupling the number of bits from 32 to 128 means raising the number of addresses to the fourth power (2**32 vs. 2**128 = (2**32)**4). We can afford to spare a few bits for the sake of a more hierarchical and yet automated allocation policy that addresses some of the more glaring issues with IPv4, like the address conflicts which inevitably occur when merging two existing private networks.

Think of it this way: If we manage to be just half as efficient in our use of address bits compared to IPv4, it will still be enough to give every public IPv4 address its own private 32-bit IPv4 internet. Right now the vast majority of IPv6 unicast space is still classified as "reserved", so we have plenty of time to adjust our policies if it turns out that we need to be more frugal.
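The arithmetic above can be sanity-checked directly with exact BigInt math:

```typescript
// Sanity-check the address-space arithmetic with exact BigInt math.
const ipv4Space = 2n ** 32n;   // every IPv4 address
const ipv6Space = 2n ** 128n;  // every IPv6 address

// Quadrupling the bits raises the address count to the fourth power:
console.log(ipv6Space === ipv4Space ** 4n); // true

// Using bits "half as efficiently" as IPv4 (128 bits doing the work of 64)
// still yields one 32-bit internet per public IPv4 address:
console.log(2n ** 64n === ipv4Space * ipv4Space); // true
```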

Then there are DHCP addressing schemes that use the MAC as part of the address, further reducing it.

Automatic address assignment (based on MAC or random addresses or whatever) comes out of the host-specific suffix, not the network prefix, so it doesn't reduce the number of usable addresses any more than the prefix alone. It does imply that you need at least a 64-bit host part in order to ensure global uniqueness without manual assignment, but the recommended 64-bit split between network and host was already part of the standard.
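To illustrate how the MAC-based suffix is built, here is a sketch (the helper name is mine) of the modified EUI-64 scheme that derives the 64-bit interface identifier from a 48-bit MAC; modern stacks often prefer random identifiers instead:

```typescript
// Sketch of modified EUI-64: derive the 64-bit host suffix from a 48-bit
// MAC address. (Helper name is my own invention for illustration.)
function macToInterfaceId(mac: string): string {
  const bytes = mac.split(":").map(h => parseInt(h, 16));
  // Insert 0xFF 0xFE between the OUI and the NIC-specific halves...
  const eui64 = [...bytes.slice(0, 3), 0xff, 0xfe, ...bytes.slice(3)];
  // ...and flip the universal/local bit in the first byte.
  eui64[0] ^= 0x02;
  // Group into four 16-bit hextets, as in an IPv6 address.
  const hextets: string[] = [];
  for (let i = 0; i < 8; i += 2) {
    hextets.push(((eui64[i] << 8) | eui64[i + 1]).toString(16).padStart(4, "0"));
  }
  return hextets.join(":");
}

console.log(macToInterfaceId("00:11:22:33:44:55")); // "0211:22ff:fe33:4455"
```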

Comment Re:What I would do different is DNS related (Score 1) 123

1) First I would have done only countries and no other TLD.

Personally, I would have done the opposite, and demoted country-specific sites to a second-level domain like .us.gov. The Internet is an international network; forcing every domain to be classified first and foremost according to its national origin would cause needless discord. Only a small minority of sites are truly country-specific.

it could have been debian.cc or debian.de or any other that they wanted

In which case the country code would communicate zero information about the site—so why have it at all?

What might make more sense would be using registrars as TLDs (e.g. google.mm for MarkMonitor), with a convention that multiple TLDs can contain the same subdomains if and only if they mirror each other. This would tie in well with DNSSEC while also avoiding the need to defend one's domain name against scammers in a million separate TLDs. If a government just happens to run its own registrar it could use the country code for its TLD alongside non-country TLDs.

The main difference from the current system would be that TLDs would be generic rather than catering to a particular kind of site, which is mostly the case in practice anyway: .com no longer implies commerce, not every .org is a non-profit, .net does not imply an ISP, etc. Instead, the TLD would imply a trust relationship; the name "google.mm" would imply looking up the "google" subdomain in the MarkMonitor domain registry, which would presumably be listed among the user's local trust anchors. If there were an alternative domain like "google.vs" (for VeriSign) it would be required to resolve to the same address.

Comment Re:Do away with them (Score 1) 87

But how would you even do that with dynamic languages, where the type can just change at runtime?

Obviously you can't, which is one of the arguments against programming in dynamically-typed (unityped) languages. This is why TypeScript exists: a statically-typed JavaScript derivative which compiles down to plain JS after proving that the types are satisfied (i.e. performing static code analysis), much as any other statically-typed language compiles down to unityped machine code.

Furthermore, TypeScript is handling null just like Java.

No, it isn't. Both TypeScript and Java will complain about uninitialized variables, but Java will not produce a compile-time error if you set the variable to null (directly or indirectly) and then try to use it as a reference. TypeScript will, unless you explicitly check that the value is not null before using it. (Checking for null narrows the type from nullable to non-null within the scope of the condition.)

declare function arbitrary(): string | null;
let x: string;
let y: string | null;
x = arbitrary(); // Error, type 'string | null' is not assignable to type 'string'.
y = arbitrary(); // Fine
x.length; // Fine, x is non-nullable.
y.length; // Error, object is possibly 'null'.
if (y != null) {
  y.length; // Fine, y is non-null in this scope.
}

Comment Re:Do away with them (Score 1) 87

A NPE means you have a bug in your code, and it's better for the app to crash than to corrupt your data, or silently just lose it.

Even better would be to detect the bug statically, at compile time, as a type error, so that your program doesn't crash at some arbitrary point later and lose all the user's data.

The point is not to eliminate the concept of nullable references, which are indeed useful for representing data which is not available. The point is to distinguish between such nullable references and references which cannot be null so that the compiler can check that all the nullable references have been properly handled and warn you about any and all potential null pointer issues in advance.

Comment Re:Do away with them (Score 1) 87

The problem isn't that the language has nullable references, it's that it doesn't have a reference type which cannot be null. A nullable reference is isomorphic to the Optional or Maybe type available in most "null-free" languages, and this certainly has perfectly legitimate uses.

The issue in a language like C, Java or JavaScript is that every single operation on references takes this nullable reference type as input, and the vast majority of those operations assume that the reference will not be null and generate a runtime error otherwise. In a more principled language there would be a type distinction between a reference which may be null and a reference which cannot be null, and the programmer would need to destructure the nullable value into either null or a non-null reference before using it in operations which do not expect null.

This eliminates a wide variety of common mistakes by making them type errors at compile time rather than subtle runtime errors which may or may not cause the program to crash, depending on the inputs. If the program is written correctly then this destructuring happens at the same places where you already should be checking whether the reference is null before using it, so it doesn't even make the program significantly longer. You just need to annotate your type declarations to indicate where the reference is allowed to be null.
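A minimal TypeScript sketch of that distinction, with hypothetical names, where `User | null` plays the role of Optional/Maybe and plain `User` is the reference type that cannot be null:

```typescript
// Hypothetical example: `User | null` is the nullable (Optional-like) type;
// plain `User` cannot be null, so no check is needed where it appears.
interface User { name: string }

function findUser(id: number): User | null {
  return id === 42 ? { name: "Ada" } : null; // made-up lookup
}

function greet(user: User): string { // non-nullable parameter
  return `Hello, ${user.name}`;
}

const u = findUser(42);
// greet(u);          // compile-time error: 'User | null' is not 'User'
if (u !== null) {
  console.log(greet(u)); // fine: the null check destructured the type
}
```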

SQL databases are actually a fairly good example of how null values should be implemented, because you can specify whether a field can or cannot be null in the table definition and this constraint will be enforced by the database.

Comment Re:I Think this article might be a bit misleading. (Score 1) 189

Thank you. That is exactly what I said.

The only part of quantum entanglement that is "instantaneous" (or "FTL") is that when one party performs its measurement, the wave functions for both of the entangled particles collapse out of their superposed states simultaneously, no matter how far apart they might be. However, this does not communicate any information by itself; for that the two parties still need a classical channel. As you say, nothing is transferred FTL. An observer cannot tell that the wave function has collapsed without making a measurement, which would collapse the wave function anyway, and without a separate channel there is no way to know whether the other party observed the same quantum state.

Comment Re:I Think this article might be a bit misleading. (Score 1) 189

What is being "communicated" FTL, without a non-FTL classical channel, is a random superposition of all the possible quantum states. That is not "random information", it's "no information". Without the classical channel you don't even know whether the holder of the other entangled particle is measuring the same quantum states, so no information is exchanged, not even information about the measured states of the entangled particles.

But sure, as a trivial special case, it is possible to exchange zero information at FTL speeds...

Comment Re: Now for regulation (Score 1) 87

It's also limited to preventing States from designating their choice of currency (other than gold and/or silver) as legal tender. They can issue whatever currency they want, so long as it isn't close enough to U.S. federal currency to be considered counterfeit. They just can't make anyone accept it the way people are forced to accept an offer of full payment in legal tender to settle a debt—regardless of what currency or goods the debt may originally have been denominated in.

This is to prevent a particular state from picking some good it has in abundance (but which is in low demand), declaring it legal tender, and using it to "settle" debts at below-market rates. Somewhat ironically, this is exactly what the federal government did when it went off the gold standard and declared unbacked paper currency to be legal tender in payment of debts.

Comment Re:The more hated windows 10 is (Score 1) 232

I think it was an Inspiron. Almost certainly one of the consumer lines. It had a smooth underside with no ventilation holes or visible screws, no removable battery, and few ports. (One needs to remove the rubber feet to open the case.) I don't have it with me to check the exact model.

Comment Re:The more hated windows 10 is (Score 1) 232

Unless things have changed, WiFi on a decent laptop is usually implemented as a removable miniPCIe card. You can get any card you want on Ebay for $20 or less; I usually use some Intel card, I forget the model number now.

Will that work here? The last time I tried to swap out the WiFi card in a Dell laptop (fairly recent, but not an XPS) for another one from the same manufacturer (Intel) it refused to boot with the BIOS citing a problem with the WiFi card's serial number.

I like the look of this XPS 13 DE, but I wouldn't care to buy any laptop with that degree of hardware lock-down.

Comment Re: Crazy (Score 1) 136

as X IP numbers connected = amount of shares

Which is really stupid, when you think about it. If you upload 0.1% of a file to 1,000 different peers, that's one copy shared, not 1,000. And if they all do the same (within the group) that's 1,000 copies created, not 1,000,000. But the studios manage to get away with suing 1,000 peers for 1,000 copies each, which is far more than the potential revenues even if one very generously assumes that every peer who obtains a copy of the file represents a lost sale, even before you throw in nonsense like statutory damages.
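The sharing arithmetic above, made explicit (the numbers are the illustrative ones from this comment, not real swarm data):

```typescript
// Illustrative numbers only: 1,000 peers, each uploading 0.1% of the
// file to each of the others.
const peers = 1000;
const uploadedThousandths = 1; // 0.1% of the file, expressed in 1/1000ths

// Uploading 0.1% of the file to each of 1,000 peers is one copy total:
const copiesPerPeer = (peers * uploadedThousandths) / 1000; // = 1

// If all 1,000 peers do the same, the group creates 1,000 copies:
const copiesCreated = peers * copiesPerPeer; // = 1,000

// ...yet suing each of 1,000 peers for 1,000 copies counts a million:
const copiesClaimed = peers * peers; // = 1,000,000
console.log(copiesPerPeer, copiesCreated, copiesClaimed);
```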

The correct liability for a single peer with a share ratio of 1.00 or less (cumulative upload less than or equal to the file size) is no more than perhaps three times the standard retail value of the work. Not $450 or $1,500 or $150,000 or $21M, no matter how many other peers were involved.

Comment Re:The Legoland example (Score 1) 199

The ability to skip the line is obviously a limited resource. The park generally alternates service between the fast and slow lines to ensure both make some progress rather than following a strict priority order. If "everyone" skipped the line (because the price was set too low; or from the other P.O.V., because the park wasn't paying enough in discounts to induce people to stand in the slow line) then the fast line would just become nearly as long as the slow line and few would choose to pay the extra cost (or forego the slow-line discount) for no gain.

Comment Re:Yes they are (Score 1) 199

Yeah, that's pretty much how it works now. As it should.

Sure; this was just the basics of why QoS is important, without regard to any specific QoS policy. The proposal is to let each customer set the QoS policy for its own traffic, rather than leaving that up to the ISP. Someone has to decide the QoS policy if protocols like VoIP are going to be usable, and when ISPs set the policy they tend to do things like prioritize their own VoIP service packets while leaving their competitors' languishing in the bulk-data queue. Even if it isn't malicious, there is no reason to expect the ISP to go out of its way to benefit a competitor. And of course, not every ISP implements a fair QoS system; some prefer to simply throttle specific protocols (like BitTorrent) regardless of capacity or fairness, in part because standard QoS policies are usually based around dividing capacity between "flows" rather than customers. Protocols like BitTorrent use many flows for the same transfer, with the end result that, if the QoS rules are not implemented carefully, they can obtain more than their users' fair share of the bandwidth compared to less distributed protocols that use one flow at a time.

One nice thing about IPv6 is that it would make it easier to allocate bandwidth fairly: one unique IPv6 prefix equals one customer.
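As a hypothetical sketch of what per-customer accounting could look like, here is traffic tallied by /64 prefix rather than by flow (all names and addresses below are made up):

```typescript
// Hypothetical per-customer accounting: tally traffic by /64 prefix
// instead of by individual flow. The helper expands the "::" shorthand.
function prefix64(addr: string): string {
  const [head, tail = ""] = addr.split("::");
  const left = head ? head.split(":") : [];
  const right = tail ? tail.split(":") : [];
  const missing = 8 - left.length - right.length;
  const hextets = [...left, ...Array(missing).fill("0"), ...right];
  return hextets.slice(0, 4).map(h => h.padStart(4, "0")).join(":");
}

// Tally bytes by prefix: two flows, one customer, one bucket.
const usage = new Map<string, number>();
function account(srcAddr: string, bytes: number): void {
  const key = prefix64(srcAddr);
  usage.set(key, (usage.get(key) ?? 0) + bytes);
}

account("2001:db8:aa:1::10", 500); // two different hosts...
account("2001:db8:aa:1::11", 700); // ...behind the same /64 prefix
console.log(usage.get("2001:0db8:00aa:0001")); // 1200
```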
