To quantify the degree of obfuscation, they have precise computational metrics based on their stylometric algorithms. But to judge the quality of the obfuscation, there are no objective metrics. Instead:
To measure soundness and properness, obfuscations will be sampled and handed out to participants for peer-review.
which seems to me to make the contest rather less meaningful. Why not just peer-review the quality of every obfuscation that exceeds some minimum standard?
That's from 2002, and I wonder if it's even true of Cisco anymore. I've watched Cisco firewalls hard-crash under too many connections with 256 MB of RAM.
This site seems to indicate 16 KB per connection, which doesn't leave much once you've subtracted the memory needed for the OS, daemons, etc.
That would be bad. However, I see that later in that document there's a section entitled "Ideal case: firewalling-only machine" where it says:
sizeof(struct ip_conntrack) is around 300 bytes on i386 for 2.6.5, but heavy development around 2.6.10 make it vary between 352 and 192 bytes!
For safety we might want to assume recent kernels have doubled that again, to perhaps 800 bytes. That still puts us under 2 MB of RAM for 2000 connections. For greater certainty, I tried to check sizeof() against the kernel v4.3 source, but NAT has changed drastically in the 4.x series kernels.
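As a sanity check on that arithmetic, here's a quick sketch. The per-entry sizes are the estimates quoted above (192 and 352 bytes from the 2.6-era docs, 800 bytes as a pessimistic guess), not measured values:

```python
# Back-of-envelope conntrack memory use for a home NAT box.
# Per-entry sizes are the estimates discussed above, not measurements.
BYTES_PER_MB = 1024 * 1024

def conntrack_mem_mb(entry_bytes, connections):
    """Total conntrack table size in MB for a given per-entry size."""
    return entry_bytes * connections / BYTES_PER_MB

for entry_bytes in (192, 352, 800):
    print(entry_bytes, "bytes/entry ->",
          round(conntrack_mem_mb(entry_bytes, 2000), 2), "MB")
# Even the pessimistic 800-byte guess stays under 2 MB for 2000 connections.
```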
This is not as good as it appears. Their "Enterprise router" has 128 MB of RAM, and there's no way that's going to hold up to a significant number of simultaneous connections, let alone the 64 MB of RAM that most of the devices have.
Is that really an issue? According to this, each NAT entry needs less than 200 bytes, in which case 2000 simultaneous connections (plenty for almost any single dwelling) require less than 1 MB of RAM.
It wasn't that long ago that even enterprise-class routers got by on 32 MB of RAM or less.
It's hard to say without doing all the implementation work, but the paper does say the algorithm is "...general enough to describe both local polynomial and Gaussian process approximations..." and there is a section called "Local Gaussian process surrogates". So they do in fact incorporate this into the larger framework of their algorithm.
In fact, they claim "...that the accuracy is nearly identical for all the cases, but the approximate chains use fewer evaluations of the true model, reducing costs by more than an order of magnitude for quadratic or Gaussian process approximations (Figure 10b)."
Indeed, though that quote is simply pointing out that their algorithm performs best, relatively speaking, when its mode is set to local Gaussian approximations.
Huh. You'd think they would have realized that more quickly.
I'm sure the authors are well aware of it. It's the press hype that I'm pointing out.
How is this relevant? The algorithm is for speeding up Markov chain Monte Carlo (MCMC) analyses.
The paper phrases it in terms of MCMC, but it's more generally applicable if you think of it as an optimizer.
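To make the surrogate idea concrete, here's a toy sketch (my own illustration, not the paper's algorithm): a random-walk Metropolis-Hastings chain run entirely on a local quadratic fit of an "expensive" log-density, built from just three true-model evaluations. For a Gaussian target the quadratic fit of the log-density happens to be exact, so the chain samples the right distribution while the expensive model is called only three times, no matter how long the chain runs:

```python
import math
import random

n_true_calls = 0

def true_logpdf(x):
    """Stand-in for an expensive model run (standard normal log-density)."""
    global n_true_calls
    n_true_calls += 1
    return -0.5 * x * x

def quad_interp(xs, ys):
    """Quadratic Lagrange interpolant through three (x, y) points."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    def p(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return p

def metropolis(logpdf, n_steps, x0=0.0, step=1.0, seed=1):
    """Plain random-walk Metropolis-Hastings on a 1-D log-density."""
    rng = random.Random(seed)
    x, lp = x0, logpdf(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = logpdf(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Build the surrogate from three expensive evaluations, then sample from it.
design = [-2.0, 0.0, 2.0]
surrogate = quad_interp(design, [true_logpdf(x) for x in design])
chain = metropolis(surrogate, 5000)
# The expensive model ran only 3 times, regardless of chain length.
```

The paper's actual scheme is adaptive (it refines the local approximation as the chain explores), but the cost structure is the same: nearly all proposals are evaluated against the cheap surrogate rather than the true model.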
I am a US citizen as frustrated about unauthorized domestic surveillance as anyone. But this summary goes too far. Finding, keeping and using vulnerabilities is exactly what the NSA is supposed to do, and there is nothing questionable about that behavior.
If the submitter wants the government to have a group that finds and discloses vulnerabilities as part of its remit, then make a case for creating such a group. Don't saddle the NSA with the job.
It has some leaked aspects that I think are truly terrible, such as the intellectual-freedom troubles.
This is why "trade" agreements are reviled by default these days. They have a couple of chapters about trade and a dozen chapters dedicated to screwing with countries' national laws.
While I agree with you that it's a valid reason to revile trade agreements by default, I perceive the revulsion to be composed more of protectionist, beggar-thy-neighbor sentiment mixed with ugly patriotism. See "Squiddie" above, the guy who thinks doubling (!) the daily wages of benighted bastards in a poor country isn't worth the risks to American workers.
Start making trade agreements about trade again and people will start respecting them again.
Not as optimistic on that score as you are. Cheers.
Old programmers never die, they just branch to a new address.