
Comment: They didn't! (Score 2) 135

by GrievousMistake (#45728485) Attached to: Academics Should Not Remain Silent On Government Hacking

What a non-story. The flaws in Dual EC DRBG were widely published shortly after release.

The backdoor was first published by Dan Shumow and Niels Ferguson in August 2007.

Bruce Schneier wrote the same year:

My recommendation, if you're in need of a random-number generator, is not to use Dual_EC_DRBG under any circumstances. If you have to use something in SP 800-90, use CTR_DRBG or Hash_DRBG.

This was common knowledge if you had more than a passing interest in cryptography. I think TFA is mistaken when it says that it didn't get enough attention. The reason academics didn't take it more seriously is that it was seen as so obvious, it was mostly harmless shenanigans.

You would only use it in a serious cryptographic product if you were an incompetent crackhead, or if the NSA had stuffed your ass full of money.

Incidentally, RSA, the large security firm, shipped it in a serious cryptographic product for years and years.

Comment: Re:End of certificates, please? (Score 1) 80

by GrievousMistake (#45696685) Attached to: IETF To Change TLS Implementation In Applications

The trouble with Convergence, I think, is the reliance on online notaries, which become highly centralized single points of failure.

They don't, really. The great thing about notaries as opposed to CAs is that you can use as many of them as you want, and the client decides how to handle discrepancies and outages. So a browser could ship preconfigured with 8 independent notaries, and alert the user if more than four of them were down, or if any single one of them disagreed with the rest.

In the same way, CAs can still act as authoritative notaries for domains they have signed. But now if they misbehave they can be instantly delisted, and users will fall back on the standard Convergence protection.
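The client-side policy sketched above can be made concrete. This is a minimal illustration, not any real Convergence client code; the notary names and thresholds are made up to match the "8 notaries, alert on more than four outages or any disagreement" example:

```python
# Illustrative quorum policy for a Convergence-style client: query each
# configured notary for the certificate fingerprint it observes for a
# host, then decide based on availability and agreement.
from collections import Counter

def evaluate(observations, max_down=4):
    """observations: dict of notary name -> fingerprint, or None if unreachable."""
    down = [n for n, fp in observations.items() if fp is None]
    seen = Counter(fp for fp in observations.values() if fp is not None)
    if len(down) > max_down:
        return "warn: too many notaries unreachable"
    if len(seen) > 1:
        return "warn: notaries disagree"
    return "ok"

obs = {f"notary{i}": "ab:cd:ef" for i in range(8)}
print(evaluate(obs))            # ok
obs["notary3"] = "00:11:22"     # one notary sees a different certificate
print(evaluate(obs))            # warn: notaries disagree
```

The point is that the thresholds live in the client, so each vendor or user can tune how paranoid to be.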

+ - UK spies continue "quantum insert" attack via LinkedIn, Slashdot pages->

Submitted by Anonymous Coward
An anonymous reader writes "In the academic literature, these are called "man-in-the-middle" attacks and have long been known to the commercial and academic security communities. More specifically, they are examples of "man-on-the-side" attacks.

They are hard for any organization other than the NSA to reliably execute, because they require the attacker to have a privileged position on the Internet backbone and exploit a "race condition" between the NSA server and the legitimate website. This top-secret NSA diagram, made public last month, shows a Quantum server impersonating Google in this type of attack."


Comment: Re:Tired of bashing Bitcoin, yet? (Score 0) 285

by GrievousMistake (#45360057) Attached to: Security Breach Forces Bitcoin Bank Inputs.io To Halt Operations

I disagree. The "proof of work" busywork is wasteful and makes it hard to prove any real security. The Bitcoin protocol scales poorly and consumes disproportionate resources.

I am sure it is possible to do both the ledger and the currency distribution more elegantly than Bitcoin does.

For instance, an IOU system like Ripple could facilitate a Hawala-like transaction network without the meaningless arms race caused by allocating new coins proportionally to hashing power.

Or zero-knowledge protocols could be used to vastly enhance the anonymity of transactions.

Bitcoin is an interesting proof of concept, but "as elegant as a decentralised digital transactions system could be" is overselling it by far.

+ - The Future Will Be Modular: Tinkertoy-Like Blocks Will Build Bridges, Planes

Submitted by cartechboy
cartechboy (2660665) writes "Does that sketchy bridge on your commute to work freak you out? How about that budget airplane seat your boss puts you in once a month? If you're nervous about that, then you'll probably freak out about this: Future airplanes, bridges, boats, even spacecraft may be built from modular blocks that snap together like Tinkertoys. While the idea seems strange, the parts are claimed to be up to 10 times stiffer than existing ultralight materials and the construction work will be done by tiny robots crawling along the structure as it's built. It would even be possible to disassemble one structure, say, a bridge, and repurpose it into a new building. Imagine taking apart one wing of your office building and turning it into a boat--just be sure to bring your life jacket."
Television

Legislators Introduce Bill To Stop Set Top Boxes From Watching You 161

Posted by Soulskill
from the stop-looking-at-me dept.
An anonymous reader writes "For a few years now, we've been hearing about TV-related devices that have built-in cameras and microphones. Their stated purpose is to monitor consumers and gather data — often to target advertising. (We'll set aside any unstated purposes — the uses they tell us about are bad enough.) Now, two members of the U.S. House of Representatives have submitted legislation to regulate this sort of technology. '[They] said they want to get out ahead of the release of this new technology and pass legislation that ensures it would include beefed up privacy protections for consumers. They added that this legislation is particularly relevant given the recent revelations about the National Security Agency's Internet surveillance programs. ... Additionally, the bill requires a cable box or set-top device to notify consumers when the monitoring technology is activated and in use by posting the phrase "We are watching you" across their TV screens.'"

Comment: Re:AF_BUS -- a[n] implementation of the D-BUS (Score 3, Informative) 61

by GrievousMistake (#42668423) Attached to: LTSI Linux Kernel 3.4 Released

Hadn't heard about AF_BUS before...
I found the rationale, and a summary of the argument against.

I get that doing multicast in userspace isn't optimal, but I'm a bit mystified what people are doing with D-Bus that would require any kind of performance. Wasn't D-Bus supposed to be a simple pub-sub system for notification of events and the like?

Comment: Re:Requires local access (Score 1) 210

by GrievousMistake (#42301035) Attached to: Denial-of-Service Attack Found In Btrfs File-System

this will be easily stopped by adding a filename prefix or suffix

No it won't. It is still easy to make collisions with a known prefix or suffix. You would have to include a random component.
Even if that was a feasible workaround, it's hardly a common best practice, nor should it be.
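To see why a fixed prefix doesn't help, here is a birthday attack against a 32-bit filename hash. It uses zlib's CRC-32 as a stand-in for the crc32c that btrfs actually uses (the attack is the same); the prefix is arbitrary:

```python
# Birthday attack on a 32-bit hash: after roughly 2**16 candidate names
# we expect two distinct names with the same checksum, regardless of any
# fixed prefix (or suffix) the filesystem user imposes.
import itertools
import zlib

def find_collision(prefix: bytes):
    seen = {}
    for i in itertools.count():
        name = prefix + format(i, "x").encode()  # distinct name per i
        h = zlib.crc32(name)
        if h in seen:
            return seen[h], name  # two distinct names, same hash
        seen[h] = name

a, b = find_collision(b"userfile_")
assert a != b and zlib.crc32(a) == zlib.crc32(b)
print(a, b)
```

This runs in well under a second; only a random, secret component in the name would push the attacker back to guessing.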

There goes this script kiddie's

He discovered this vulnerability himself, and wrote the attack code; he is by definition not a script kiddie. Never mind that he's a professor and published cryptographer.

whine about experimental software not being perfect.

This has nothing to do with being experimental software. This is not a bug, it is a weakness in the design. Furthermore, the bad behaviour will not manifest by accident - you have to deliberately provoke it.
This is the type of problem that isn't fixed before someone finds and reports it -- like Junod did.

Please cease your inane babbling.

Comment: Re:Brilliant references! (Score 4, Funny) 197

Also be sure to check out the brilliant paper recently published by Hakin9 in their issue on Nmap.

The authors detail the working of their DARPA Inference Cheking Kludge Scanner (DICKS), and cite such prominent references as
Z. Sun, "Towards the synthesis of vacuum tubes," Journal of Concurrent, Extensible Technology, vol. 84, pp. 1-19, Feb. 2005.
C. Hoare, J. Wilkinson, and D. Ritchie, "Contrasting Scheme and Internet QoS using SluicyMash," Journal of Flexible, Omniscient Epistemologies, vol. 20, pp. 154-194, Feb. 2000.

Some excerpts:

"Obviously, event-driven modalities and web browsers are based entirely on the assumption that extreme programming and digital-to-analog converters are not in conflict with the deployment of massive multiplayer online role-playing games."

"We show our method's real-time evaluation in Figure 1. We consider a framework consisting of n flip-flop gates. Such a claim might seem counter intuitive but is derived from known results. Next, NMAP does not require such a theoretical emulation to run correctly, but it doesn't hurt. This seems to hold in most cases. We use our previously enabled results as a basis for all of these assumptions. This seems to hold in most cases."

"Figure 1.3: The 10th-percentile latency of NMAP, as a function of popularity of IPv7"

Comment: Re:Waiting for ad.doubleclick.net ...zzz... (Score 1) 275

by GrievousMistake (#38821425) Attached to: Google's SPDY Could Be Incorporated Into Next-Gen HTTP

Some web browsers just render the page assuming that included scripts won't call document.write(), and then render the page again when the scripts have loaded, in case they do.
I think Chrome does this, and Opera has it as an experimental option in opera:config ("Delayed script execution").
It speeds up things a lot, especially if you aren't blocking ads. Many sites spend most of their loading time just waiting for ad servers.

There ought to be an attribute or something that webmasters could use to explicitly request XHTML semantics... Something like

Comment: Re:Some Discrepancies with Your Bitching (Score 1) 194

by GrievousMistake (#38730230) Attached to: Google Ports Box2D Demo To Dart

Tying NaCl to a specific architecture was a very bad move in the first place, and PNaCl doesn't help a lot.
LLVM bitcode isn't intended to be a platform-independent transport of code - it isn't frozen, so you'll have to tie yourself to a specific LLVM version, while LLVM is still improving a lot with each release.
Neither is it very portable - it isn't endian independent, and it reflects details of the ABI, which means you can't even portably call C functions. It's really just a compiler IR.

See also e.g. this post.

I can certainly see reasons that you'd want to tie a VM to the browser instead of being stuck with ECMAScript for every situation, but you need to bring a real, portable VM to the table. LLVM isn't it, and the idea of putting architecture-dependent binaries on the web is patently ridiculous, as should be obvious just from the time NaCl spent as x86-only. Imagine if web site owners had to recompile their site for every new architecture that became supported. "This site is best viewed on an x86"

Comment: Re:So why do I trust the notaries? (Score 1) 127

by GrievousMistake (#37965150) Attached to: SSL Certificate Authorities vs. Convergence, Perspectives

*Ideally*, in the CA relationship, you would at least have assurance that the site being validated worked explicitly with a trustworthy CA. In the reputation system, the site being validated didn't work with anyone and has no way to authoritatively 'tell' someone they got compromised.

A CA could be one such authentication step. Consider a network of independent notaries to which the CAs could securely push public certificates and tie them to a domain name.
Now you have to compromise the CA (or a sufficient number of the notaries, some perhaps run by the CAs themselves), and you have to perform the MITM upstream, not downstream, so the perspectives-like notaries will still see a consistent view.

Comment: Re:So why do I trust the notaries? (Score 2) 127

by GrievousMistake (#37962806) Attached to: SSL Certificate Authorities vs. Convergence, Perspectives

-DNSSEC secured results enumerating the CAs the site selected to secure the domain. If DigiNotar signs yourdomain.com and your DNSSEC says 'Thawte', then there is an issue.
-Multiple CAs signing a certificate. If you have 3 or so CAs (all listed in your DNSSEC record of course), then compromising all three would be required to compromise your security.

What does this gain you over storing the cert signature itself in DNSSEC?
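The alternative alluded to here is essentially what was later standardized as DANE/TLSA: publish a hash of the certificate itself in a DNSSEC-signed record and have the client compare it against what the server presents. A minimal sketch, with an illustrative pin value rather than real DNS lookups:

```python
# Client-side check of a certificate against a hash pinned in a
# DNSSEC-signed record (DANE/TLSA style). No CA is consulted at all:
# whoever controls the signed zone controls which cert is accepted.
import hashlib

def matches_pin(cert_der: bytes, pinned_sha256_hex: str) -> bool:
    return hashlib.sha256(cert_der).hexdigest() == pinned_sha256_hex

cert = b"...DER-encoded certificate bytes..."       # placeholder cert
pin = hashlib.sha256(cert).hexdigest()              # value served via DNSSEC
print(matches_pin(cert, pin))                       # True
print(matches_pin(b"forged certificate", pin))      # False
```

If the zone data is signed end to end, listing CAs on top of this adds little: the DNSSEC record is already the root of trust.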

Since the people attesting to the authenticity of a certificate have zero 'special' interaction, it remains feasible to fool them.

Nothing prevents a notary from taking extra steps to verify the authenticity of a certificate. That is one of the advantages of the concept: other methods of authentication can be added in a modular way.
In some ways the notary system gives you the security of the strongest of the notaries you trust, and the CA system gives you the security of the weakest of the CAs you trust.

Comment: Re:So why do I trust the notaries? (Score 2) 127

by GrievousMistake (#37962730) Attached to: SSL Certificate Authorities vs. Convergence, Perspectives

if someone MITM's very close to you (think the people who own/control the AP you're connecting through at a hotel), they could MITM *all* of the notaries as well

The communication with the notaries is in all likelihood encrypted and signed with predistributed keys, similar to CA certificates today. That's not a large problem, because ultimately you have to trust the software you are running anyway.
That still retains all the benefits over the CA system that you mention; you get multiple points of trust that all have to be compromised, and if one is compromised you can distrust it with minimal consequences.
