
Comment: Re:Militia, then vs now (Score 1) 1615

by bluefoxlucid (#46832423) Attached to: Retired SCOTUS Justice Wants To 'Fix' the Second Amendment
The Reasonable Person test is fine in some contexts, and we can still define boundaries around it. For example, a "reasonable person" test would be appropriate for criminal sexual behavior laws covering, say, a college professor and a student. If the student is 18 and a reasonable person, looking at the evidence about the relationship, would determine that the student's grades were unaffected by the liaison and that there was no abuse of a power dynamic, then it should not be a crime. (I knew a girl who was screwing around with her teacher, but she already had A's in his class and he never put administrative pressure on her; they both figured it was fine, yet this is a Category 3 sexual offense here even if the student is 35 or so, and a Cat 3 is what you get for screwing a 12-year-old.)

Comment: Re:Or.. (Score 1) 348

by bluefoxlucid (#46832347) Attached to: Not Just a Cleanup Any More: LibreSSL Project Announced

You just don't get it. You're assuming that a function is perfect, that its code is provably correct. Well, we were originally talking about integrating new, not-broken, correct code into a code body which is incorrect, so that assumption is immediately useless here. Further, if your code is correct and perfect, why would you come back to modify it?

We're coming in to clean up bugs. Now we're taking code and integrating it from one changing code base into another changing code base. The new code may work properly, while the old has corner cases that break synergistically: some body of code may work properly only in the presence of a defect in some other body of code. When Microsoft Windows' source code leaked a decade or so ago, it was rife with this--hacks on top of hacks.

Do you honestly think that this can be done by just "checking the function for correctness"? You've integrated some correct code from another codebase, which now makes this part of your code correct. Unfortunately, some other part of your code now fails and, in the most extreme cases, you've created a new exploitable condition!

So now you have to go back and assess impact. By changing this function, you affect a lot of other code. That code has the same logic but different data coming into it--it's the same cog shoved into a different machine. You can do analysis to see whether your new function ever behaves differently from the old one, i.e. whether inputs that used to yield one output can now yield another. If you can't prove the output never differs, then you must assume your change may have wide impact--even when the output stays within the expected range of 1 to 255 but now returns 47 where it used to return 42.
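
As a minimal sketch of that kind of impact analysis (the function names and formulas here are hypothetical, not from OpenSSL), a brute-force differential check over the whole input domain is often the cheapest way to find out whether the outputs ever diverge:

    /* Hypothetical sketch: compare the old and "fixed" versions of a function
     * over the entire input domain.  old_scale() and new_scale() are made-up
     * stand-ins; they stay in a similar small range yet disagree on some
     * inputs, and every disagreement means the callers need re-examination. */
    #include <stdio.h>

    static int old_scale(unsigned char x) { return x / 6 + 1; }    /* old code */
    static int new_scale(unsigned char x) { return (x + 5) / 6; }  /* "fixed" code */

    int main(void)
    {
        int diffs = 0;
        for (int x = 0; x <= 255; x++) {
            int a = old_scale((unsigned char)x);
            int b = new_scale((unsigned char)x);
            if (a != b && diffs++ < 5)
                printf("input %3d: old=%d new=%d\n", x, a, b);
        }
        printf("%d of 256 inputs differ\n", diffs);
        return 0;
    }

If the check comes back clean for every input the function actually sees, you've reduced the risk; if it doesn't, the change is not a drop-in replacement no matter how correct it looks in isolation.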

You can reduce the probabilities, but you cannot eliminate the risk. If OpenSSL merges LibreSSL code without doing impact analysis and making sure the new code doesn't open new vulnerabilities, it incurs risk. Hell, LibreSSL itself faces the risk of creating new vulnerabilities as it goes, so it may eliminate 100 problems and cause 1, and that's okay because it's 99 problems better off. The risk OpenSSL faces is greater: it may import defective code from LibreSSL, or it may import code that works but is no longer broken in the particular way that other (still broken) code--code present in OpenSSL but no longer in LibreSSL--depends on.

Comment: Re:Or.. (Score 1) 348

by bluefoxlucid (#46832225) Attached to: Not Just a Cleanup Any More: LibreSSL Project Announced

This kind of thing can only be evaluated on a case by case basis.

In the same way that a 6,880-pound-kerb-weight gasoline V8 pickup truck with a 400 HP engine getting worse MPG than a 3,220-pound V6 passenger car with 220 HP can only be evaluated on a case-by-case basis.

I'm sorry, but I'm going by a scientific basis here--by known principles that have been put into practice, re-examined, and attacked repeatedly by method (agile project management is one such method for making rework less of a problem; it does reduce the problem and reduce risk compared to waterfall for high-risk projects, but it still doesn't reach parity). It simply takes less effort to consolidate multiple tasks into one. For example, understanding a code base (as you write it) and ensuring cross-platform portability, done separately, requires two passes: if you implement portability later, you have to re-examine the code base, find non-portable code, and then re-envision how to write that code so it meets both the old and the new requirements. That same work is done the first time you write it, just with different constraints and more output per pass; doing it that way the first time eliminates the difference.

I guess you like to drive to the supermarket and spend two hours shopping for detergent, eggs, bread, yogurt, flour, pasta, sauces, cheese, and produce; then drive home and go out again for another 15 minutes to the corner store to get milk. Me, I'll take the 30-second diversion to grab the milk while I'm grabbing the yogurt at the supermarket.

Comment: Re:Or.. (Score 1) 348

by bluefoxlucid (#46826257) Attached to: Not Just a Cleanup Any More: LibreSSL Project Announced

Hint: Python doesn't have #ifdef blocks.

You do understand that POSIX and standard C libraries are essentially universal, and that OS-specific facilities are less-universal, right? POSIX compatibility gets me all BSD, AIX, System V, HPUX, Solaris, and OSX platforms; but I usually validate anything non-trivial against what's sane for Windows, if Windows is an eventual target. For something like a system allocator, Windows may not be a target: you can't really replace the system allocator anyway, whereas your allocator could get adopted by various libc implementations. For something like a MIDI sequencer, the choice of wxWidgets and portable middle-layer libraries or the cautious use of gtk+ facilities portable across all systems is usually a consideration.

From day 0, I weigh anything I do in my code against what my target platforms will be and what facilities they are likely not to share. For example: all Unix systems are likely to share malloc() and mmap(). They're unlikely to share facilities like Linux's inotify or BSD's kevent, which one should use with caution. Likewise, relying on specialized system behavior is a big problem: a lot of people relied on the sizes of ints and longs, whereas I've always used uint32_t or int64_t or whatnot when the size had some real importance (e.g. in OpenSSL, the TLS heartbeat length is a 16-bit value; you'd better not refer to it with 'unsigned short int', but rather with 'uint16_t' from C99).
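
As a small, hedged illustration of the fixed-width point (the parse_len() helper and the buffer layout are mine, not OpenSSL's actual heartbeat code):

    /* Hypothetical sketch: reading a 16-bit, network-order length field out
     * of a record using C99's uint16_t rather than 'unsigned short int',
     * whose width is only guaranteed to be at least 16 bits. */
    #include <stdint.h>
    #include <stddef.h>

    static int parse_len(const unsigned char *rec, size_t rec_len, uint16_t *out)
    {
        if (rec_len < 2)
            return -1;                               /* field not fully present */
        *out = (uint16_t)((rec[0] << 8) | rec[1]);   /* big-endian on the wire */
        return 0;
    }

The caller still has to check that the claimed length fits inside what was actually received--a missing check of exactly that kind is what Heartbleed was.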

I learned C after I learned assembly. It seems unnatural not to consider the full impact of your code.

Comment: Re:Or.. (Score 1) 348

by bluefoxlucid (#46825085) Attached to: Not Just a Cleanup Any More: LibreSSL Project Announced

Oh I see. You're trying to ask for evidence that the sky is blue. Got it.

(It is fairly well known that doing things to achieve the final goal in the first pass is vastly more efficient than doing things haphazardly with disregard for part of the end goal, then going back, adding the remaining requirements, and redoing the work to fit them. This is the basis for some archaic and probably outdated behavior commonly known as "planning".)

Comment: Re:de Raadt (Score 1) 304

by bluefoxlucid (#46823479) Attached to: OpenBSD Team Cleaning Up OpenSSL

Ah, OK. So like SEGMEXEC in PaX, where the address space is dual-mapped and split 50/50, or W^X, where part of the memory is NX.

Mapped and unmapped memory is not relevant for the protection mechanism, UNLESS you want per-page protection (which is kinda stupid anyway, code should reside in its own descriptor).

Code resides in segments in a binary. A library has data and code segments, mapped separately. They're mapped per-page for memory management: if you experience memory pressure, small pages (4K) can be swapped out. Code pages get invalidated and re-read from the original source on disk if needed.
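
A minimal sketch of that file-backed behavior from user space (the library path is an assumption for a typical Linux system; any readable file works):

    /* Hypothetical sketch: a private, read-only, file-backed mapping.  Under
     * memory pressure the kernel can drop these clean pages and fault them
     * back in from the file on disk, which is the same mechanism used for a
     * binary's code pages. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/lib/x86_64-linux-gnu/libc.so.6";  /* assumed path */
        int fd = open(path, O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0) { perror(path); return 1; }

        void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        printf("mapped %lld bytes of %s at %p\n", (long long)st.st_size, path, p);
        munmap(p, (size_t)st.st_size);
        close(fd);
        return 0;
    }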

LDT in 32-bit flat protected mode is an interface I never saw. Odd. I only knew about the single segment (code, data) layout using CS and DS and SS.

Comment: Re:Or.. (Score 1) 348

by bluefoxlucid (#46822249) Attached to: Not Just a Cleanup Any More: LibreSSL Project Announced
I target all platforms and avoid using any OS-specific facilities outside of #ifdef blocks, unless I'm writing something specifically for an OS that requires a particular facility (i.e. to enhance the OS, like udev). Often I leave this to the people who wrote whatever Python modules I'm using these days.
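
For what it's worth, a minimal sketch of the #ifdef pattern I mean (watch_path() is a made-up wrapper; the Linux branch uses inotify, the BSD/OSX branch uses kqueue, and everything else falls back to something portable):

    /* Hypothetical sketch: one OS-specific facility (file-change notification)
     * kept behind #ifdef, with a portable fallback.  watch_path() is an
     * illustrative wrapper, not from any real project; error handling is
     * minimal on purpose. */
    #include <stddef.h>

    #if defined(__linux__)
    #  include <sys/inotify.h>
    static int watch_path(const char *path)
    {
        int fd = inotify_init1(IN_NONBLOCK);          /* Linux-only facility */
        if (fd < 0)
            return -1;
        return inotify_add_watch(fd, path, IN_MODIFY) < 0 ? -1 : fd;
    }
    #elif defined(__FreeBSD__) || defined(__OpenBSD__) || defined(__APPLE__)
    #  include <sys/types.h>
    #  include <sys/event.h>
    #  include <sys/time.h>
    #  include <fcntl.h>
    static int watch_path(const char *path)
    {
        int kq = kqueue();                            /* BSD-only facility */
        int fd = open(path, O_RDONLY);
        if (kq < 0 || fd < 0)
            return -1;
        struct kevent ev;
        EV_SET(&ev, fd, EVFILT_VNODE, EV_ADD | EV_CLEAR, NOTE_WRITE, 0, NULL);
        return kevent(kq, &ev, 1, NULL, 0, NULL) < 0 ? -1 : kq;
    }
    #else
    static int watch_path(const char *path)
    {
        (void)path;               /* portable fallback: caller polls stat() */
        return -1;
    }
    #endif

The callers never see the #ifdef; they just call watch_path().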

Comment: Re:de Raadt (Score 1) 304

by bluefoxlucid (#46822217) Attached to: OpenBSD Team Cleaning Up OpenSSL

I actually find that quite hard to believe. The LDT was introduced with the 80286, in 1982. If you check both manuals I mentioned, you'll find indications that you cannot store a TSS, and some other special descriptor types, in the LDT.

As per Wikipedia:

There was no 'Executable' flag in the page table entry (page descriptor) in the 80386 and later x86 processors, until, to make this capability available to operating systems using the flat memory model, AMD added a "no-execute" or NX bit to the page table entry in its AMD64 architecture, providing a mechanism that can control execution per page rather than per whole segment.

What happens is that your program's virtual address space starts with 16MB of non-mapped memory. Then you have the executable .text segment. Directly above that is the non-executable brk() segment (the heap). Directly above that are anonymous mappings, including executable shared-library .text segments. Finally, you have the stack.

So memory looks approximately like: nnnnXXXXWWWWXXWXWXnnnnnnnWWWWW for non-mapped, eXecutable, and Writable memory. All mapped is readable to simplify this model.

On x86 without NX, PROT_READ implies PROT_EXEC; there aren't two separate bits. While you can use a segmentation setup, the executable region must be contiguous: setting the code segment limit at the top of the main executable makes the heap non-executable, but also makes all library code non-executable. Setting it just below the stack makes the stack non-executable, but leaves the heap and anonymous data mappings executable. There is no bit to say, "This part is executable, and this part isn't, and this next part is, but this next part isn't, this part is, and this last part isn't."
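
A minimal sketch of the W^X discipline mentioned above, assuming a POSIX system with an NX-capable MMU and 4K pages (on pre-NX 32-bit x86 the read-only stage would still be executable, which is the whole problem):

    /* Hypothetical sketch: map a page writable but not executable, copy code
     * in, then flip it to read+execute before use, so it is never writable
     * and executable at the same time. */
    #include <string.h>
    #include <sys/mman.h>

    static void *load_code(const void *code, size_t len)
    {
        size_t sz = (len + 4095) & ~(size_t)4095;     /* round up to 4K pages */
        void *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return NULL;
        memcpy(p, code, len);                         /* write stage: no exec */
        if (mprotect(p, sz, PROT_READ | PROT_EXEC) != 0) {  /* exec stage: no write */
            munmap(p, sz);
            return NULL;
        }
        return p;
    }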

Hence the tricks with setting up ring-0 memory protection on data pages and then forcing a DTLB load if they're read/written, to act as a kernel-controlled NX bit.

Comment: Re:Or.. (Score 1) 348

by bluefoxlucid (#46822159) Attached to: Not Just a Cleanup Any More: LibreSSL Project Announced

You're missing a large point here: the function 'int getBoundaryLength(myObject *p)' returns a piece of information about boundary length. A lot of things use boundary length, and it is stored in a variable 'L = getBoundaryLength(p)' which gets passed around, assigned to things in objects (structures, classes), and subsequently used by other functions such as 'int copyBuffer(char *d, char *s)'.

Modifying how getBoundaryLength() produces its return value has an impact on all of this code. Buffers allocated from that length and passed to other functions may be too short; copy operations may run too long. These are things you must verify. So a one-line modification to a function can have huge, sweeping impact across your program.
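
To make that concrete with a hedged sketch (the names mirror the ones above, but the bodies and the extra length parameter on copyBuffer() are mine):

    /* Hypothetical sketch of the ripple effect: L flows from
     * getBoundaryLength() into an allocation and a copy.  If the function is
     * changed to compute the length differently (say, to include a header or
     * terminator), every site that sized something from L must be re-checked. */
    #include <stdlib.h>
    #include <string.h>

    typedef struct { size_t payload_len; char payload[256]; } myObject;

    static int getBoundaryLength(myObject *p)
    {
        return (int)p->payload_len;        /* a one-line change here... */
    }

    static int copyBuffer(char *d, const char *s, int len)
    {
        memcpy(d, s, (size_t)len);         /* ...is trusted blindly here */
        return len;
    }

    static char *duplicatePayload(myObject *p)
    {
        int L = getBoundaryLength(p);      /* L gets passed around and stored */
        char *d = malloc((size_t)L);       /* buffer sized from L */
        if (d != NULL)
            copyBuffer(d, p->payload, L);  /* copy sized from the same L */
        return d;
    }

    int main(void)
    {
        myObject o = { 5, "hello" };
        free(duplicatePayload(&o));
        return 0;
    }

If a "fix" makes getBoundaryLength() return payload_len + 1, duplicatePayload() still works, but any caller that quietly compensated for the old value now over- or under-sizes its buffers.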

You can pretend you know more about programming because you think you've invented a way for modifications to one function to have no impact on any other part of the code; but if you ever achieve that goal, gcc's dead-code elimination will remove your entire function--unless it's exported, in which case gcc can't verify that nothing else calls it to useful effect.

Comment: Re:Or.. (Score 0) 348

by bluefoxlucid (#46817893) Attached to: Not Just a Cleanup Any More: LibreSSL Project Announced

They're doing a huge amount of work. They're going to write portable code because they are good programmers but not going to do the porting---just like OpenSSH. They write good solid portable code, other people port it, everyone wins.

Most people's definition of "portable code" is that it's, you know, portable. It runs on multiple platforms. Write once, run across all substantially similar systems. For example: Unix utilities running on the POSIX platform are portable because the exact same unmodified source code can be compiled on any POSIX platform against the standard POSIX system headers, linked with the standard libraries, and run. Much portable code also has OS-specific performance enhancements: it may take advantage of a non-portable OS facility if one is available. Non-portable code, by contrast, must be modified to compile on other operating systems using standard, portable interfaces--a non-portable OS facility is used in all cases, and if it isn't available you cannot compile the code.

Your fallacy: Equivocation, the informal logical fallacy of calling two different things by the same name. In this case, "portability" (the ability to simply carry one thing from one place to another--in programming, the ability to compile unmodified code on various platforms which supply a standardized API) and "porting" (the act of making a thing portable--in programming, the act of rewriting non-portable software to be more portable by making it compile on additional platforms).

It's funny how you cite "economics" as your argument for why people should give you free stuff.

Yes. It's called wealth production. You see, if you use 1 unit of labor and produce 1 unit of output, you have created 0 wealth. If you use 2 units of labor and produce 1 unit of output, you destroy 1 unit of wealth. If you use 1 unit of labor and produce 2 units of output, you create wealth.

As I've explained, it takes some units of labor (effort, work) to fork a code base, greatly improve it in a way which makes it non-portable to the platforms the original code base was portable to, and then apply additional labor to modify the result to again make it portable to the same original target platforms. It takes some fewer units of labor to simply retain portability as you make the improvements. The end result of both of these strategies is the same; however, the second strategy requires fewer units of labor input--it destroys less wealth in the process of creating the same wealth output, thus it is economically more efficient.

Think about paying $10,000 for a car, then $1,000 for new tires and $3,000 to replace the I4 with a V6 engine. Now consider instead paying $12,000 for the higher model that comes with the upgraded tires and the V6. In both cases you get the same car; in one case you pay $14,000 and in the other $12,000. In the first case, additional labor is used to install and ship the original equipment and then remove it, after which the new equipment must be installed and shipped. Doing it right the first time avoids that first install-ship-remove cycle (and any re-shipping to get those parts somewhere they're useful), which is where the $2,000 savings in this example comes from (we assume the automaker uses a static margin model, where everything is produced and then has a certain marginal profit slapped onto it).

Why would you waste effort making additional work?

"Consistency requires you to be as ignorant today as you were a year ago." -- Bernard Berenson

Working...