
Comment: Re:Cut off your nose to spite your face (Score 1) 86

That doesn't seem to be true.

The Many Flaws of Dual_EC_DRBG

Back in 2004-5, NIST decided to address a longstanding weakness of the FIPS standards, namely, the limited number of approved pseudorandom bit generator algorithms (PRGs, or 'DRBGs' in NIST parlance) available to implementers. This was actually a bit of an issue for FIPS developers, since the existing random number generators had some known design weaknesses.*

NIST's answer to this problem was Special Publication 800-90, parts of which were later wrapped up into the international standard ISO 18031. The NIST pub added four new generators to the FIPS canon. None of these algorithms is a true random number generator in the sense of collecting physical entropy. Instead, what they do is process the (short) output of a true random number generator -- like the one in Linux -- conditioning and stretching this 'seed' into a large number of random-looking bits you can use to get things done.** This is particularly important for FIPS-certified cryptographic modules, since the FIPS 140-2 standards typically require you to use a DRBG as a kind of 'post-processing' -- even when you have a decent hardware generator.

The first three SP800-90 proposals used standard symmetric components like hash functions and block ciphers. Dual_EC_DRBG was the odd one out, since it employed mathematics of the sort more typically used to construct public-key cryptosystems. This had some immediate consequences for the generator: Dual-EC is slow in a way that its cousins aren't. Up to a thousand times slower.

Now before you panic about this, the inefficiency of Dual_EC is not necessarily one of its flaws! Indeed, the inclusion of an algebraic generator actually makes a certain amount of sense. The academic literature includes a distinguished history of provably secure PRGs based on number-theoretic assumptions, and it certainly didn't hurt to consider one such construction for standardization. Most developers would probably use the faster symmetric alternatives, but perhaps a small number would prefer the added confidence of a provably-secure construction.
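To make the 'condition and stretch' idea from the excerpt concrete, here's a toy seed-stretcher in Python. To be clear, this is NOT any of the SP 800-90 constructions -- just an illustration of expanding a short true-random seed into a long pseudorandom stream (the function name and counter scheme are mine):

```python
import hashlib
import os

def stretch_seed(seed: bytes, nbytes: int) -> bytes:
    """Toy seed-stretcher: hash the seed with an incrementing counter
    to produce nbytes of pseudorandom output. Illustrative only --
    not Hash_DRBG or any other SP 800-90 generator."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:nbytes]

# A short seed from the OS's true random source expands to many bits:
seed = os.urandom(16)
stream = stretch_seed(seed, 1024)
```

The real SP 800-90 generators add state updates, reseeding, and backtracking resistance on top of this basic expand-a-seed pattern.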

Comment: Re:Cut off your nose to spite your face (Score 1) 86

I don't remember if I've seen that link before, but thanks for sharing it. That is a great explanation, and reinforces the point I've been making.

The Many Flaws of Dual_EC_DRBG

The 'back door' in Dual-EC comes exclusively from the relationship between P and Q -- the latter of which is published only in the Dual-EC specification.
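For reference, that relationship works roughly like this (a sketch of the Shumow-Ferguson observation, not the full spec): the generator's internal state s advances as s' = x(sP), and each output is a truncation of x(sQ). If whoever chose the constants knows a d with

    P = dQ,

then from a reconstructed output point R = sQ they can compute

    dR = d(sQ) = (ds)Q = s(dQ) = sP,

whose x-coordinate is the next internal state -- letting them predict all subsequent output.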

Comment: Re:it would be OK if..... (Score 2) 241

in other words, net neutrality would remain, but content providers could pay to BOOST the speed at which the internet provider customers received their content

Which only lasts until the next increment in consumer connection speed is rolled out. Then the companies that pay get to use it, but - SURPRISE! - nobody else does.

If this proposal had gone into effect before broadband became common, you'd be hooked to your, say, 5 Mbps DSL line, trying to watch videos at 56 kbps.

Comment: And wrong battleground. (Score 1) 241

The problem here isn't differentiated services - which can be valuable to a lot of us. The problem is that here in the US we have effective ISP monopolies or duopolies in nearly every region.

The other part of the problem is that the net neutrality advocates have been fighting on the wrong battleground.

As you point out: The problem isn't some packets getting preference over others: Sometimes that makes things BETTER for users. The problem is companies using their ability to configure this to give their own (and affiliates') carried-by-ISPs services an advantage, or artificially DISadvantage other providers' packets unless an extra toll is paid, to the detriment of their customers.

The FCC is not the place to fight that battle. The correct venues are the Department of Justice's Antitrust division (is giving content the ISP's affiliate provides an advantage over that of others an illegal "tying"?), the FTC (is penalizing others' packets a consumer fraud, providing something less than what is understood to be "internet service"?) and perhaps congress.

I don't see how this can reasonably be resolved short of breaking up media conglomerates to separate information transport from providing "content" and other information services beyond information transport. Allowing them to be combined into a single company is a recipe for conflict-of-interest, at the cost of the consumer.

Comment: Re: About time! (Score 1) 235

by cold fjord (#46828797) Attached to: ARIN Is Down To the Last /8 of IPv4 Addresses

Why don't you ask Interop why they basically returned a Class A network address block?

Interop Returns 16 Million IPv4 Addresses

Interop gives back a month’s worth of IPv4 addresses

Apparently Interop, the holder of the 45.x.x.x block since 1995, no longer needs that much space. They're now returning 99 percent of it to ARIN, the American Registry for Internet Numbers, which handles IP address distribution in North America. Interop is holding on to a small fraction of the 45/8 block that's currently in active use.

Comment: Here's how that works. (Score 1) 140

by Ungrounded Lightning (#46827697) Attached to: Asteroid Impacts Bigger Risk Than Thought

My math isn't very strong; can you explain the (1-0.3*0.03)^10 part?

You mean (1-0.3*0.03)^100? (You lost a digit.) Let's walk it:

0.3 land fraction = probability a given meteor hits over land (assuming it is equally likely to hit any given area).
0.3 * 0.03 Multiply by the fraction of land that's urban to get the probability it hits over urban land.
1- 0.3*0.03 Convert to the probability it misses all urban land. (P(hit) + P(miss) = 1 (certainty)).
(1-0.3*0.03)^100 We get a hundred of 'em in 50 years (assuming 2000-2013 is typical). Raise to the hundredth power to get the jackpot probability that they ALL miss.
1-(1-0.3*0.03)^100 Convert to the probability that at least one doesn't miss.
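The steps above, in runnable form (Python just for the arithmetic):

```python
# Redo the walkthrough numerically.
land_fraction = 0.3     # P(a given impact is over land)
urban_fraction = 0.03   # fraction of land that is urban
n_impacts = 100         # assumed impacts over ~50 years

p_hit_urban = land_fraction * urban_fraction   # 0.009
p_all_miss = (1 - p_hit_urban) ** n_impacts    # ~0.405
p_at_least_one = 1 - p_all_miss                # ~0.595
```

So even with each individual impact having less than a 1% chance of being over urban land, the odds that at least one of a hundred hits a populated area come out to roughly 60%.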

Comment: Re:Cut off your nose to spite your face (Score 1) 86

You could keep Dual_EC_DRBG by updating the standard to have a new set of constants just like you can update the standard to remove Dual_EC_DRBG entirely. It isn't that hard.

I never claimed that the existing constants were created via an open process. What I pointed out is that a new set of constants could be created by an open process and that addresses the trust issue.

Comment: Re:About time! (Score 1) 235

by cold fjord (#46825401) Attached to: ARIN Is Down To the Last /8 of IPv4 Addresses

That would have about as much effect as pissing into the ocean would have on raising sea levels.

That isn't completely true, due to the high degree of leverage that can occur with NAT. It only takes a relatively small number of public addresses to service millions of private-IP clients. There are very large numbers of public IP addresses being wasted on hosts that could sit behind NAT. One properly used Class A block could allow you to service many billions of client computers.
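The leverage arithmetic above is easy to check. Here's a back-of-the-envelope sketch in Python; the clients-per-address figure is an assumption I'm picking for illustration, not a measured number:

```python
# One /8 ("Class A") block spans 2**24 public addresses. With
# port-based NAT, each public address can multiplex thousands of
# concurrent client flows out of its 16-bit port space, so even
# conservative leverage per address covers billions of clients.
public_addrs = 2 ** 24        # 16,777,216 addresses in a /8
clients_per_addr = 1000       # assumed (conservative) NAT leverage
total_clients = public_addrs * clients_per_addr   # ~16.8 billion
```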

I agree that we do need to move to IPv6.

Comment: Re:About time! (Score 3, Insightful) 235

by cold fjord (#46823963) Attached to: ARIN Is Down To the Last /8 of IPv4 Addresses

And hopefully more large companies and organizations that hold large blocks of public IP addresses will start moving to private IP addresses and release the public IP addresses for use by others. I know some places that have large numbers of systems with public IP addresses that are behind firewalls and really have no business having a public IP address on those systems anymore.
