Haha, that sounds like a badass idea! Does battlebot have any rules in place for "electronic warfare" like that?
The contest works as follows:
- every team creates a "Cyber Reasoning System", which is software that takes a vulnerable application binary as input and outputs an exploit and a patched version of the binary
- when the contest starts, DARPA releases a crap-ton of applications (for the qualifying event, there were 131, some of which were complex applications comprising multiple binaries).
- each team's CRS analyzes these binaries (without human intervention), and submits the resulting exploits and patches to DARPA
For the final event, there will be multiple "rounds", in which our CRSes will attempt to hack the *patched* binaries provided to us by our competitors. Additionally, their exploits will be actively launched against our binaries, so we can do some traffic analysis on top of our program analysis.
For the contest, Shellphish put on our researcher hats (we are a bunch of graduate students) and condensed a lot of our recent research into an automated Cyber Reasoning System. Given that this was a student effort, there was the expected level of chaos (for example, at one point, one of my teammates accidentally ran "rm -rf
In the more general sense of what "Shellphish does", we are a CTF (Capture The Flag) team. By CTF, in this context, I mean a computer security Capture the Flag contest, in which teams have to exploit services (network applications) to steal "flags" (random, secret data) from other teams and redeem them for points. Some popular CTFs are the iCTF (run by us at UCSB for students to participate in, http://ictf.cs.ucsb.edu/), CSAW CTF (run by NYU Poly, https://ctf.isis.poly.edu/), and, of course, Defcon CTF (the world championship, http://legitbs.net/). Shellphish is, I think, the oldest CTF team that's still playing (at least, definitely the oldest still qualifying for and playing Defcon CTF). I don't know how good a distinction that is, but it's something.
Security is definitely a constantly evolving arms race, and it's exactly that cat-and-mouse game that makes it fascinating. A key thing to keep in mind is that this contest isn't necessarily about creating an AI that evolves to respond to emerging attacks or new techniques. In fact, the scope of the Cyber Grand Challenge is quite narrowly defined: identifying, exploiting, and patching memory corruption vulnerabilities.
The goal of the CGC, as we understand it, is to create a system that, given this human-specified model of "badness" and a model of a protection technique, is able to handle the rest in an automated fashion. The "arms race", for the time being, is going to continue to be played between humans -- new attack techniques and new defense techniques would be discovered by humans and programmed into the "Cyber Reasoning Systems", as the CGC terms our auto-hacking software. Rather than taking that fun part away from humans, the goal of the CGC is to relieve us of the task of analyzing/exploiting/patching individual pieces of software.
As another commenter mentioned, the CGC looks at compiled binaries, regardless of language. In practice, most (all?) of the challenges were written in C. While, in principle, the choice of language shouldn't matter overly much, some languages make heavy use of constructs which seriously complicate analysis. For example, C++ vtables (https://en.wikipedia.org/wiki/Virtual_method_table) or Objective C's dynamic method lookup (http://stackoverflow.com/questions/14219840/how-does-objective-c-handle-method-resolution-at-run-time).
As for (b), we're all students and are pretty swamped. There are plenty of companies that do provide professional services. Grammatech (one of the other teams) and ForAllSecure (yet another competitor) both do, for example.
Hello! I'm the "team leader" of team Shellphish, one of the seven finalists. Super cool to see a story about us! If people have questions, I'd love to answer them if I can
A long time ago, I set up Siege of Avalon (at that point, already a 5-year-old game) and, upon getting to some specific level, found that performance had gone down the toilet. I fiddled around for a while, then (for some reason) called the support number. They told me to update my video card drivers. I told them that my video card drivers were already about 4.5 years newer than the game itself, so their suggestion made no sense. We debated for a while, but they stuck to their guns. I hung up, frustrated.
Updating my video card drivers fixed the issue.
Maybe you are.
While base products, like TP or toothpaste, are more expensive on Amazon than in physical stores, the price difference isn't *that* much. To some people, an extra dollar or two is easily worth not having to worry about it at the store next time. If you tally up your yearly usage of toothpaste (say, if you're an insanely prolific tooth brusher, or have a family) to be a giant tube a month, that's $30 a year from Amazon as opposed to, say, $12 from a real store.
If you're well-organized and go to the store regularly, the $18 isn't worth it. Personally, I am not perfectly organized, and am insanely busy. That $18 difference is worth not having to go without toothpaste for a week because I forgot it at the store a few times in a row. Of course, it's not even an $18 difference: I probably go through two tubes a year, so it's a $3 difference. That's almost literally nothing.
It's been this way for years. ATI/AMD support for Linux is unbelievably bad. nVidia support is basically perfect, with the exception of the open-source issue. In the past, I've bought a brand new (nVidia) video card, right after it was released, brought it home, and got it running under Linux, day 1, with no headaches. If you want decent Linux graphics, go nVidia.
Another interesting one is Diamond Age (aka A Young Lady's Illustrated Primer). Pretty interesting book that introduces a lot of CS concepts (although also explicitly mentions CS).
That's actually the opposite of true. Many techniques (http://static.usenix.org/event/woot09/tech/full_papers/paleari.pdf, http://roberto.greyhats.it/pro..., http://honeynet.asu.edu/morphe..., http://www.symantec.com/avcent...) exist to identify the presence of a CPU emulator, because these things aren't (and will likely never be) perfect. Most of those techniques don't even rely on timing attacks. Once you introduce timing attacks (*especially* if there's an external source of time information), all bets are off.
This reads like an urban legend... Every field office got a copy, (seemingly) lots of employees were notified, but it's only public 30 years later? Hmm...
I'm glad people are out there thinking about this. As I understand it, though, there are a couple of drawbacks to this specific approach.
1. The unique identifier now allows you to be tracked across every application you use. I guess this could be solved by having multiple IDs per app; you might want to consider that.
2. "Pay per authentication"...
3. Requirement for your phone to have connectivity. While this doesn't matter most of the time, it can be important when, for example, you're traveling abroad and don't have phone service.
4. You need to be a trusted party for your users. If you're compromised, the whole system is screwed.
Other approaches, such as Google Authenticator, provide 2FA without the requirements of connectivity, trackability, trust, or payment. The only advantage (and this is also quite a weakness) that I can see with your approach is that it's probably easier to replace a lost phone; just call you guys and have you reroute the passwords to a different app. The problem is that this opens the door to social engineering attacks (see #4).
I read that as "Spocks-as-a-Service". That'd be a waay cooler market.
Nvidia (no fucking way)
If you're enough of a dumbass to ignore the right solution (nVidia stuff *works*, binary blob or not, as opposed to ATI's, also binary-blob, braindead crap), you deserve to fail. Every media PC I've built has been nVidia; no problems on the graphics side.