Comment Re:The killer application (Score 2) 302

Essentially for free, like how printing this 300-page textbook is essentially free: $8 for good Double A 22-pound copy paper (not HP's 20-pound, poorly cut crap that jams your printer) and 15 cents per page for ink. Only about $53.
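
(The arithmetic, if you want to check it: 300 pages at 15 cents/page is $45 in ink, plus $8 for the ream of paper, comes to roughly $53.)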

This is, of course, why I own a color laser printer that can print for 2 cents/page or less. Plastic filament, however, is expensive.

Comment Re:Content Creation isn't there (Score 3) 302

Back in the day, my dad knew a guy in the military who thought Microsoft would be big. Dad said the guy was an idiot. The guy's pitch was, "Look man, computers, pre-packaged software, nobody wants to write their own code!" Dad had written a voice recognition system on the VIC-20 and figured pre-packaged software was a no-go industry, but that computers would be big and everyone would learn to program them.

You're going to see a lot of "My 3D printer is awesome because I can model things and print them, and everyone in the world will want to print things they make!" idiocy going around here. 3D printers aren't universal constructors; they can't produce the high-quality plastic you get from injection molding, carving, or shaping, much less metal and circuits. And even then, it takes specialized processes to make certain materials: you can etch ICs easily enough with a universal fabricator, but what happens when you want an electric-motor-driven ceramic burr coffee grinder? Glass, ceramic (what kind? what grade, what process?), steel (what type?), plastic, screws, basins, lids, hoppers, shafts, bearings. Just working with "metal" is an exercise in working with hundreds of different materials--thousands when you start getting into anything like car parts or fountain pens.

Comment Re:Why? (Score 1) 148

GPL code is why we can't have nice things like ffmpeg/libav producing good-quality AAC. You see, there are open-source AAC encoder libraries out there, but we can't link them against ffmpeg and redistribute a binary that's capable of using the shared object, because the GPL says so.

Comment Re:Militia, then vs now (Score 1) 1633

The reasonable-person test is fine in some contexts; we can still define boundaries around it. For example, a "reasonable person" test would be appropriate for criminal sexual behavior laws covering, say, a college professor and a student. If the student is 18 and a reasonable person looking at the evidence about the relationship would conclude that the student's grades were unaffected and that there was no abuse of a power dynamic, this should not be a crime. (I knew a girl who was screwing around with her teacher, but she already had A's in his class and he never put administrative pressure on her; they both figured it was fine, but here that's a Category 3 sexual offense even if the student is 35 or so, and a Cat 3 is what you get for screwing a 12-year-old.)

Comment Re:Or.. (Score 1) 360

You just don't get it. You're assuming that a function is perfect, that its code is provably correct. Well, we were originally talking about integrating new, not-broken, correct code into a code body which is incorrect, so that assumption is immediately useless here. Further, if your code is correct and perfect, why would you come back to modify it?

We're coming in to clean up bugs. Now we're taking code from one changing code base and integrating it into another changing code base. The new code may work properly, while the old has corner cases that break synergistically: some body of code may work properly only in the presence of a defect in some other body of code. When Microsoft Windows source code leaked a decade or so ago, it was rife with this--hacks on top of hacks.

Do you honestly think that this can be done by just "checking the function for correctness"? You've integrated some correct code from another codebase which now makes this part of your code correct. Unfortunately some other part of your code now fails and, in the most extreme cases, we've created a new exploitable condition!

So now you have to go back and assess impact. By changing this function, you affect a lot of other code. That code has the same logic, but different data coming into it--it's the same cog shoved into a different machine. You can do analysis to see whether your new function ever does anything different from the old one--whether, for the inputs it actually gets, it can now produce different output. If you can't prove that it doesn't, then you have to assume your changes may have wide impact. That includes the case where the output is supposed to be between 1 and 255, and it still is, but it now returns 47 where it used to return 42.
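
To make that last point concrete, here's a contrived C sketch (every name in it is hypothetical): both versions of the function return a value in the same documented range for this input, but a caller that was quietly tuned against the old value misbehaves anyway.

    #include <stdio.h>

    /* Contrived illustration of "47 instead of 42": the new, "corrected"
     * function still returns a value in the same documented range, but a
     * caller grew to depend on the old, buggy value. */
    static int old_scale(int x) { return x * 6; }      /* 42 for x == 7 (buggy) */
    static int new_scale(int x) { return x * 6 + 5; }  /* "fixed": 47 for x == 7 */

    static void dispatch(int code)
    {
        if (code == 42)                     /* worked by accident for years */
            printf("fast path\n");
        else
            printf("slow path (code %d)\n", code);
    }

    int main(void)
    {
        dispatch(old_scale(7));             /* fast path */
        dispatch(new_scale(7));             /* slow path: same range, different value */
        return 0;
    }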

You can reduce the probabilities, but you cannot eliminate the risk. If OpenSSL merges LibreSSL code without doing impact analysis and making sure the new code doesn't open new vulnerabilities, it incurs risk. Hell, LibreSSL itself faces the risk of creating new vulnerabilities as it goes; you may eliminate 100 problems and cause 1, and that's okay because you're 99 problems better off. The risk OpenSSL faces is greater, because it may import defective code from LibreSSL, or it may import code that works correctly but is no longer broken in the particular way that other (still broken) code--code still present in OpenSSL but no longer in LibreSSL--depends on.

Comment Re:Or.. (Score 1) 360

This kind of thing can only be evaluated on a case-by-case basis.

In the same way that a 6,880-pound-kerb-weight gasoline V8 pickup truck with a 400 HP engine getting worse MPG than a 3,220-pound-kerb-weight V6 passenger car with 220 HP can only be evaluated on a case-by-case basis.

I'm sorry, but I'm going by a scientific basis here, by known principles that have been put into practice, re-examined, and attacked repeatedly by method (agile project management is one such method for making repeated work less of a problem; it does reduce the problem and reduce the risk compared to waterfall for high-risk projects, but it still doesn't reach parity). It simply takes less effort to consolidate multiple tasks into a single task. For example, understanding a code base (as you write it) and ensuring cross-platform portability takes two passes if you bolt portability on later: you have to re-examine the code base, find the non-portable code, and then re-envision how to write it so that it meets both the old and the new requirements. That same work gets done the first time you write the code, just under different constraints and with more output per pass; doing it that way the first time eliminates the difference.

I guess you like to drive to the supermarket and spend 2 hours shopping for detergent, eggs, bread, yogurt, flour, pasta, sauces, cheese, and produce, then drive home and go out again for another 15 minutes to the corner store to get milk. Me, I'll take the 30-second diversion to grab the milk while I'm grabbing the yogurt at the supermarket.

Comment Re:Or.. (Score 1) 360

Hint: Python doesn't have #ifdef blocks.

You do understand that POSIX and the standard C library are essentially universal, and that OS-specific facilities are less universal, right? POSIX compatibility gets me all the BSDs, AIX, System V, HP-UX, Solaris, and OS X; but I usually validate anything non-trivial against what's sane for Windows, if Windows is an eventual target. For something like a system allocator, Windows may not be a target: you can't really replace the system allocator there anyway, whereas your allocator could get adopted by various libc implementations. For something like a MIDI sequencer, the usual consideration is a choice between wxWidgets plus portable middle-layer libraries, or the cautious use of GTK+ facilities that are portable across all targets.
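
A minimal sketch of what I mean (a hypothetical wrapper, not code from any real allocator): hide the platform-specific page allocation behind one small function from day one, and the rest of the code never needs to know which platform it's on.

    #include <stddef.h>

    #ifdef _WIN32
    #include <windows.h>
    #else
    #include <sys/mman.h>
    #endif

    /* Hypothetical portable page allocator: mmap() on POSIX systems,
     * VirtualAlloc() on Windows.  Callers only ever see alloc_pages() and
     * free_pages(). */
    void *alloc_pages(size_t len)
    {
    #ifdef _WIN32
        return VirtualAlloc(NULL, len, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    #else
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    #endif
    }

    void free_pages(void *p, size_t len)
    {
    #ifdef _WIN32
        (void)len;                      /* Windows tracks the size itself */
        VirtualFree(p, 0, MEM_RELEASE);
    #else
        munmap(p, len);
    #endif
    }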

From day 0, I weigh anything I do in my code against what my target platforms will be and which facilities they're likely not to share. For example: all Unix systems are likely to share malloc() and mmap(). They're unlikely to share inotify (Linux-specific) or kevent (BSD-specific), so those should be used with caution. Likewise, relying on specialized system behavior is a big problem: a lot of people relied on the sizes of ints and longs, whereas I've always used uint32_t or int64_t or whatnot when the size had some real importance (e.g. in OpenSSL, the TLS heartbeat payload length is a 16-bit value; you'd better not refer to it as 'unsigned short int', but as 'uint16_t' from C99).
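
For instance, here's an illustrative sketch (not OpenSSL's actual code) of handling that 16-bit length with a fixed-width type and checking it against what was really received:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Illustrative only: parse a 16-bit big-endian payload length out of a
     * heartbeat-style record and refuse to copy more than the record
     * actually contains. */
    int copy_payload(const uint8_t *rec, size_t rec_len,
                     uint8_t *out, size_t out_len)
    {
        if (rec_len < 2)
            return -1;

        uint16_t payload_len = (uint16_t)((rec[0] << 8) | rec[1]);

        /* The bounds check that was missing in Heartbleed: the claimed
         * length must fit inside the bytes we were actually handed. */
        if (payload_len > rec_len - 2 || payload_len > out_len)
            return -1;

        memcpy(out, rec + 2, payload_len);
        return (int)payload_len;
    }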

I learned C after I learned assembly. It seems unnatural not to consider the full impact of your code.

Comment Re:Or.. (Score 1) 360

Oh I see. You're trying to ask for evidence that the sky is blue. Got it.

(It is fairly well known that doing things to achieve the final goal in the first pass is vastly more efficient than doing things haphazardly, disregarding part of the end goal, and then going back to add the remaining requirements and redo the work to fit them. This is the basis for some archaic and probably outdated behavior commonly known as "planning".)

Comment Re:de Raadt (Score 1) 304

Ah, OK. So like SEGMEXEC in PaX, where the address space is dual-mapped and split 50/50, or W^X, where part of the memory is NX.

Mapped and unmapped memory is not relevant to the protection mechanism, UNLESS you want per-page protection (which is kinda stupid anyway; code should reside in its own descriptor).

Code resides in segments in a binary. A library has data and code segments, mapped separately. They're mapped per-page for memory management: under memory pressure, small (4 KB) pages can be swapped out, and code pages just get invalidated and re-read from the original file on disk if needed.
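
For reference, a minimal sketch of per-page protection being used W^X-style through mprotect(2) (illustrative only; this is plain POSIX, not PaX internals, and strict W^X kernels may want extra allowances before the PROT_EXEC flip):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);

        /* 1. Map one page read+write (not executable) and fill it. */
        unsigned char *buf = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;
        memset(buf, 0xC3, pagesz);          /* x86 'ret' bytes, just filler */

        /* 2. Flip the page to read+execute; it is never writable and
         *    executable at the same time. */
        if (mprotect(buf, pagesz, PROT_READ | PROT_EXEC) != 0)
            return 1;

        printf("page is now R+X, never W+X\n");
        munmap(buf, pagesz);
        return 0;
    }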

The LDT in 32-bit flat protected mode is an interface I never saw used. Odd. I only knew about the flat single-segment (code, data) layout using CS, DS, and SS.
