I'm not sure if you were sarcastic or not, but your email address is at gmail, and I'm gonna mention Fight Club, and there's no I in team. Do you want me to post your email address more plainly?
So, yeah, posting email hashes is only a little bit safer than posting the full text.
I browsed the PDF, and it seems they place some trampoline code in the first 64KB of memory, and that code contains "unsafe" instructions the sandboxed code itself isn't allowed to use--so the trampoline can do more dangerous things. The idea is that the untrusted code can only interface with the trampoline code, which checks that nothing funny is going on before it interacts with the real OS.
The primary weakness I see is that they support threads. Start a thread and have it try to interfere with another thread that's calling the trampoline code--basically, mess about with the "stack", trying to get execution to jump to a non-32-byte boundary. The trampoline code looks like a very weak spot, and attacking it seems like the easiest way in. It's very difficult to make trampoline code safe from attacks by other threads in the same address space (it may not actually be possible to make it bulletproof). Try to attack the trampoline so that failing security checks become passing ones--the trampoline code has to store its data somewhere, so just try to modify that data.
I think they may have some weaknesses in mmap, mprotect, etc.--they need to check these calls very carefully. Try to remap the trampoline code to another address (where it would then be vulnerable). Try to map a library in over the trampoline code. The PDF itself says they check open() carefully, but then not read()... which suggests they're being too clever and not defensive enough.
Another area is races--is it possible to present one copy of the code to the checker while a different copy actually gets loaded into memory? This is surprisingly difficult to get right, and it depends a great deal on how they load code (or rather, on how the code is presented to them in the first place--by a browser, I'd guess).
Note that any check the trampoline code makes might be bypassable by a clever thread that changes the data after the sandbox check completes but before the OS call is made. OS calls that take in buffers probably don't "snapshot" the data to protect it from being changed by other threads, so there may be a large window in which a thread can break the sandbox security (the security check passed, but then a thread changes the data to unsafe values before the OS acts on it).
And of course, try to break out of the sandbox by exposing OS-level bugs, or just by triggering extreme events--opening too many files, overflowing structures--to create a way out of the sandbox.
If you have time to try all of the above, enjoy your $512.
This is called "clean room" engineering.
However, it is my understanding there is no settled legal basis for this extreme view. Can you cite any court cases where copying concepts from code was considered illegal even though the copy differed significantly? And where it was ruled that a clean-room technique would have been valid?
I think the closest analogy, which seems pretty settled, is book authorship. If I write a book about a girl, her dog, a scarecrow, and a tin man heading to Oz to meet a wizard, etc., then I have a good chance of losing a copyright infringement claim brought by the owners of The Wizard of Oz--even if I never read the book, and only a 3rd party told me the broad outline of the story. Unless it's funny. (Really--parody is an exception.)
However, lots of people write books inspired by other books, even "borrowing" characters, and generally this is OK. It doesn't matter whether you read the book or not, or whether some 3rd party told you the story.
The moon is made of green cheese. -- John Heywood