
Comment: Say goodbye to your 5th amendment rights. (Score 1) 87

by Zeek40 (#47417203) Attached to: A Brain Implant For Synthetic Memory
If you think DARPA is funding "development of multi-scale computational models with high spatial and temporal resolution that describe how neurons code declarative memories" because they care about veterans, and not because they're looking for a more effective way to pull memories from people's minds than water-boarding, you haven't been paying attention to how America treats its military veterans.

Comment: Re:Arbitrage (Score 2, Insightful) 382

by Zeek40 (#47175795) Attached to: High Frequency Trading and Finance's Race To Irrelevance
Yeah, the neo-liberal set seems to have forgotten how the Reagan administration gutted SEC regulations and cut the tax brackets of America's richest by over 50%, which turned our economy from a steadily-growing powerhouse into the shitty cycle of booms and busts we're currently trapped in. Allowing banks to grow "too big to fail" and letting Wall Street create financial instruments so complicated that even industry leaders can't explain them to Congress are indicators that de-regulation went WAY too far.

Comment: Re:So you are arguing to leave bugs in place ? (Score 1) 125

by Zeek40 (#47125943) Attached to: Imparting Malware Resistance With a Randomizing Compiler
An uninitialized variable can be caught with a style-checker. There's no need to resort to something like randomized binaries to solve a problem like that. I'm not arguing in favor of leaving bugs in place, I'm arguing in favor of choosing a specific set of binaries to focus your testing efforts on. The bottom line is that testing resources are finite, and one of the key steps to fixing a bug is identifying a method of repeatably demonstrating that bug. Having randomized binaries severely complicates that one critical task and will result in significantly lower-quality testing with the same level of resources.

I agree with you completely that cross-platform development is one of the best methods of exposing bugs, but I don't think this kind of stack randomization is really comparable. When doing cross-platform development, you have a very specific, very well-defined set of target environments that you test a single version of the software on. This stack randomization is an effectively infinite number of variations on a theme being tested in a single environment. One lends itself to repeatable testing; the other lends itself to versioning hell when trying to replicate bugs in order to solve them.

I agree it's worth looking into, but I'm currently having difficulty seeing how the benefits outweigh the costs.

Comment: Re:the crutch of determinism (Score 3, Insightful) 125

by Zeek40 (#47124365) Attached to: Imparting Malware Resistance With a Randomizing Compiler
You respectfully disagree with his points without actually providing any reasons why, and while nick's post makes complete sense, your statements have a ton of unexplained assumptions built in.
  1. What kinds of bugs do you think would manifest earlier using this technique, and why do you think that earlier manifestation of that class of bugs will outweigh the tremendous burden of chasing down all the heisenbugs that only occur on some small percentage of randomized builds?
  2. How does such an environment reward programmers who invest more time in validation? More time spent in validation will result in better code regardless of whether you're using a randomized or non-randomized build. More time spent in validation is a cost you're paying, not some free thing provided by the randomized build process.
  3. I don't know what this sentence means: "Debugging suck, if instigated soon enough to matter, returns 100x ROI as compared to debugging code." If what instigated soon enough?
  4. "Determinism should not be reduced to a crutch for failing to code correctly" - What does this even mean? An algorithm is either deterministic or non-deterministic. If your build system is changing a deterministic algorithm into a non-deterministic algorithm, your build system is broken. If your algorithm was non-deterministic to begin with, a randomized build is not going to make it any easier to track down why the algorithm is not behaving as desired.

All in all, your post reads like a smug "Code better, noob!" while completely ignoring the tremendous extra costs that are going to be necessary to properly test hundreds of thousands of randomized builds for consistency.
