Comment Confused! DevShare *is* opt-in for developers (Score 1) 198

I actually read the article (I know, you can't do that on Slashdot). It says DevShare is opt-in for developers, not opt-out, and that's what inserts the additional stuff into the executables. So were the GIMP folks just confused? It sounds like GIMP left SourceForge over something that was in their control in the first place. (No, I don't work for any of these folks.)

Comment There are no exoplanets. IAU says so. (Score 1) 116

The IAU has decided that a planet - at least around our Sun - has to "clear the neighbourhood" around its orbit. But there will always be objects we can detect without being able to tell whether their neighbourhood is cleared (currently that's true of all so-called exoplanets).

One solution is to give "planet" one definition in our Solar System and a different one everywhere else. But that is inconsistent. What we should do is use the same definition everywhere; I suggest "orbits a star" and "is massive enough to be round". If that means Pluto and Ceres are planets, well, that's just fine.

Comment Missing the point (Score 1) 330

But how, exactly, were you going to use those alternative compilers? If you just use an alternative compiler executable, maybe the original executable was okay and the alternative was subverted - so now you have introduced corruption into the compiler executable you cared about. Just using a different compiler in the obvious way simply moves the problem somewhere else; it doesn't actually solve anything. In DDC, an attacker has to subvert both compiler executables, which is significantly harder.

Ken Thompson's trusting trust paper didn't describe how to solve the problem. The only approach that had been proposed was to rewrite everything yourself, which is impractical.
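
To make the idea concrete, here's a minimal sketch of the DDC check (all file names and invocations are hypothetical; real compilers need flags, full source trees, and reproducible builds):

    # cA = compiler binary under test, sA.c = its source,
    # cT = a different, trusted compiler.
    ./cT -o stage1 sA.c        # stage 1: trusted compiler builds A's source
    ./stage1 -o stage2 sA.c    # stage 2: the stage-1 result rebuilds A's source
    # If cA was honestly generated from sA by self-compilation (and the
    # build is deterministic), stage2 must match cA bit-for-bit:
    cmp stage2 cA && echo "cA corresponds to its source"

To escape detection, an attacker now has to subvert both cA and cT, consistently.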

Comment Bruce Schneier connection (Score 3, Informative) 330

Oh, and a Bruce Schneier connection: In 2006 Bruce wrote a summary of my ACSAC paper on diverse double-compiling (DDC). Bruce's article is simply titled Countering "Trusting Trust".

Bruce completely understood the approach. He explained it very well in his blog, and he also did a nice job explaining its larger ramifications. His conclusions are still true: the "trusting trust" attack has actually gotten easier over time, because compilers have gotten increasingly complex, giving attackers more places to hide their attacks. DDC lets you use a simpler compiler -- one you can trust more -- to act as a watchdog on the more sophisticated and more complex compiler.

Comment Re:Diverse Double-Compiling (trust but verify) (Score 5, Informative) 330

I've gotten a lot of hits, and that's a good thing. As I noted in another post, I got hit by reddit earlier this year. In general people are becoming more interested in protecting and verifying build environments, as this post about Tor demonstrates.

So please take a look at my Fully Countering Trusting Trust through Diverse Double-Compiling (DDC) page!

Comment Diverse Double-Compiling (trust but verify) (Score 5, Insightful) 330

Thanks for pointing out my Diverse Double-Compiling (DDC) paper!

My page on Fully Countering Trusting Trust through Diverse Double-Compiling (DDC) has more details, including detailed material so you can duplicate the experiments and re-verify the proofs. Note that you do not have to take my word for it.

You have to trust some things. But you can work to independently verify those things, to determine if they're trustworthy. I don't always agree with Bruce Schneier, but after watching what he's done for years, I've determined that he's quite trustworthy. This is the same way we decide whether to trust anyone or anything. In short: "trust, but verify".

Comment Make scales just fine (see: Peter Miller) (Score 1) 179

Make scales just fine. Using make badly, through mistakes like recursive make, is what causes scalability problems.

The paper "Recursive Make Considered Harmful" by Peter Miller identifies common mistakes in using make, and how to fix them. The biggest is recursive make, which is widespread but NOT required by make. Once you stop making this mistake, make is suddenly much faster. (A sketch of the non-recursive style follows.)
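
Here's a minimal sketch of the non-recursive approach (directory and variable names are hypothetical): instead of each subdirectory invoking its own make, one top-level makefile includes per-directory fragments, so a single make process sees the whole dependency graph:

    # Top-level Makefile
    include src/module.mk     # each fragment adds its targets and variables
    include lib/module.mk
    all: $(ALL_TARGETS)       # one complete DAG, no "cd subdir && $(MAKE)"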

Two other issues with standard make were not part of POSIX, but they are now:

Issue 1: Historically, standard make only implemented deferred assignment (where values are calculated when referenced, not when set). This meant that as size grows, there was an exponentially increasing calculation effort (eek). Miller recommends using an immediate-assignment operator, but although GNU make has one (:=), it wasn't in the POSIX standard. He also suggests using an append-assignment operator (+=), which wasn't in POSIX either. Since then, POSIX has added the immediate-assignment operator ::= and the append-assignment operator += (see http://austingroupbugs.net/view.php?id=330). GNU make 4.0 implements "::=", so you can now start using it. This gets rid of a major scalability problem.
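
A tiny illustration of the difference (GNU make syntax; $(shell ...) is a GNU extension, used here just to make the cost visible):

    SLOW = $(shell find . -name '*.c')    # deferred: find re-runs at EVERY reference
    FAST ::= $(shell find . -name '*.c')  # immediate (::= in POSIX; := in older GNU make): runs once
    CFLAGS += -O2                         # append-assignment, now also in POSIX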

Issue 2: The "obvious" ways to implement automatic dependency generation in make require the ability to "include" multiple files from one line, and the ability to silently ignore errors when including, and those weren't in POSIX either. These have since been added to POSIX (in http://austingroupbugs.net/view.php?id=333 and http://austingroupbugs.net/view.php?id=518).
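
Here's a common sketch of what that enables (file names hypothetical; gcc's -MMD writes a .d dependency file next to each object, and the leading "-" on include tells make to silently skip .d files that don't exist yet, e.g. on a first build):

    CFLAGS += -MMD            # gcc emits main.d, util.d alongside the objects
    OBJS ::= main.o util.o
    prog: $(OBJS)
    	$(CC) -o $@ $(OBJS)
    -include $(OBJS:.o=.d)    # one line includes every generated dependency file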

Just getting something into the POSIX spec doesn't cause anything magical to happen. But if a capability is in a standard, it's way more likely to be implemented, and people are far more willing to depend on it.

Comment Lisp s-expression notation can be readable (Score 1) 179

Previous poster: "Being simpler for a computer means it is simpler to write evaluators for LISP expressions. Because of the simplicity of LISP an evaluator + applicator gives you a compiler or runtime environment. That is a huge huge advantage."

Yes, but that doesn't require using the old s-expression notation from the 1950s.

Check out http://readable.sourceforge.net/ - this adds additional abbreviations to s-expressions, just as 'x already means (quote x), so that people can produce much more readable code and data. It's implemented in Scheme and Common Lisp, and is released as open source software under the MIT license.
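
For a taste of the difference, here's roughly the same code in traditional s-expression notation and in the project's "sweet-expression" notation (the second form is my sketch of that notation; see the site for the exact rules):

    ; Traditional s-expressions:
    (define (factorial n)
      (if (<= n 1)
          1
          (* n (factorial (- n 1)))))

    ; Sweet-expressions: indentation replaces the outer parens,
    ; f(x) means (f x), and {a op b} means (op a b):
    define factorial(n)
      if {n <= 1}
        1
        {n * factorial{n - 1}}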

Comment Replacing make with... make (Score 1) 179

There are a lot of build systems that provide more built-in features than straight-up make. Heck, GNU make itself has LOTS more features than POSIX make.

But many of those more-automated build systems run on top of... make. In particular, if you use cmake or automake/autotools, they *generate* makefiles, so you still need a capable "make" program. In fact, you *want* a "make" underneath with lots of capabilities, so the tool you use directly can generate better results.

Ant and Maven are nice tools... but usually they're only used with Java. Rake is great, but it's typically only used with Ruby. I like Python (the language), but several articles showed that, at least at the time, Scons was *slow* (and thus had trouble scaling). Autoconf's syntax is still baroque, but if you follow certain conventions it's actually not too bad, and it's much easier to use now that a number of annoying bugs have been fixed.

For general-purpose build systems, the autotools or cmake are still reasonable build systems to look at (unless you're using Java or Ruby). And since they generate makefiles, it's important to have a great tool underneath to process the makefile, even if you don't use make directly.

Comment Interactive Fiction is very alive (Score 4, Informative) 106

These games are now typically called "Interactive Fiction"; there are LOTS of them, and they are still being developed. It's a small community, but active. Two good post-Infocom games are Bronze (by Emily Short) and Anchorhead (by Michael Gentry).

More info: http://en.wikipedia.org/wiki/Interactive_fiction

A gentle intro: http://emshort.wordpress.com/how-to-play/

Comment Coverity: Static analyzer (Score 5, Informative) 187

Coverity sells software that does static analysis on source code, looking for patterns that suggest defects. E.g., a code sequence that allocates memory, followed later by something that de-allocates that memory, followed still later by something that de-allocates the same memory again (a double-free).
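
A minimal C example of that kind of pattern (hypothetical code, not from Coverity's documentation):

    #include <stdlib.h>

    void process(void) {
        char *buf = malloc(64);   /* allocation */
        if (buf == NULL)
            return;
        /* ... use buf ... */
        free(buf);                /* first de-allocation */
        /* ... more code; some path reaches here with buf still set ... */
        free(buf);                /* second de-allocation: a double-free */
    }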

The product is not open source software, but a number of open source software projects use it to scan their software to find defects: https://scan.coverity.com/ It's a win-win, in the sense that Coverity gets reports from real users using it on real code, as well as press for their product. The open source software projects get reports on potential defects before users have to suffer with them.

Comment The king is dead, long live the king (Score 4, Insightful) 570

"Unix" - as they define it - is going away. But what's really happening is that old implementations of Unix are being replaced by modern implementations and re-implementations of Unix.

Servers are increasingly using Red Hat Enterprise Linux, Fedora, Debian, Ubuntu, etc. On the client side, the #1 smartphone (by popularity) is Android, based on Linux. The #2 smartphone is iOS, based on Unix. On the desktop, Macs are running MacOS, also based on Unix.
