
Comment Re:Or.. (Score 1) 360

Hint: Python doesn't have #ifdef blocks.

You do understand that POSIX and the standard C libraries are essentially universal, and that OS-specific facilities are less universal, right? POSIX compatibility gets me all the BSD, AIX, System V, HPUX, Solaris, and OSX platforms; and I usually validate anything non-trivial against what's sane for Windows, if Windows is an eventual target. For something like a system allocator, Windows may not be a target: you can't really replace the system allocator there anyway, whereas your allocator could get adopted by various libc implementations. For something like a MIDI sequencer, you weigh wxWidgets plus portable middle-layer libraries against the cautious use of gtk+ facilities that are portable across all of your target systems.

From Day 0, I weigh anything I do in my code against what my target platforms will be, and what facilities they are likely not to share. For example: all Unix systems are likely to share malloc() and mmap(). They're unlikely to share inotify (Linux-specific) or kqueue/kevent (BSD-specific), so those should be used with caution. Likewise, relying on specialized system behavior is a big problem: a lot of people relied on the size of ints and longs, whereas I've always used uint32_t or int64_t or whatnot when the size had some real importance (e.g. in OpenSSL, that TLS heartbeat length is a 16-bit value; you'd better not use 'unsigned short int' to refer to it, but rather 'uint16_t' from C99).
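For instance, a minimal sketch in C of reading a 16-bit length field with a fixed-width type (the packet layout here is made up for illustration; it is not OpenSSL's actual heartbeat structure):

    /* A minimal sketch: parse a 16-bit big-endian length field from a
       network buffer. uint16_t is exactly 16 bits on every platform;
       'unsigned short' is only guaranteed to be at least 16. */
    #include <stdint.h>
    #include <stdio.h>

    static uint16_t read_u16_be(const unsigned char *p)
    {
        return (uint16_t)((p[0] << 8) | p[1]);
    }

    int main(void)
    {
        const unsigned char packet[] = { 0x00, 0x40 };  /* length = 64 */
        uint16_t len = read_u16_be(packet);
        printf("payload length: %u\n", (unsigned)len);
        return 0;
    }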

I learned C after I learned assembly. It seems unnatural not to consider the full impact of your code.

Comment Re:Or.. (Score 1) 360

Oh I see. You're trying to ask for evidence that the sky is blue. Got it.

(It is well known that building toward the final goal on the first pass is vastly more efficient than doing the work haphazardly, disregarding part of the end goal, and then going back to bolt on the remaining requirements and redo the work to fit them. This is the basis for some archaic and probably outdated practice commonly known as "planning".)

Comment Re:de Raadt (Score 1) 304

Ah, ok. So like SEGMEXEC in PaX, where the address space is dual-mapped and split 50/50, or W^X, where part of the memory is NX.

Mapped and unmapped memory is not relevant for the protection mechanism, UNLESS you want per-page protection (which is kinda stupid anyway; code should reside in its own descriptor).

Code resides in segments in a binary. A library has data and code segments, mapped separately. They're mapped per-page for memory management: if you experience memory pressure, small pages (4K) can be swapped out. Code pages get invalidated and re-read from the original source on disk if needed.
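A rough sketch of what that separate mapping looks like (the file name, sizes, and offsets are invented; a real dynamic loader reads them from the ELF program headers):

    /* A rough sketch: map a hypothetical library's code and data segments
       separately. Offsets are invented; real loaders parse ELF headers. */
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("libfoo.so", O_RDONLY);
        if (fd < 0)
            return 1;

        /* Code segment: read-only and executable, shared between processes. */
        void *text = mmap(NULL, 0x2000, PROT_READ | PROT_EXEC,
                          MAP_PRIVATE, fd, 0);

        /* Data segment: readable and writable, copy-on-write per process. */
        void *data = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE, fd, 0x2000);

        close(fd);
        return (text == MAP_FAILED || data == MAP_FAILED);
    }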

An LDT in 32-bit flat protected mode is an interface I had never seen. Odd. I only knew the flat layout with a single code segment and a single data segment through CS, DS, and SS.

Comment Re:Or.. (Score 1) 360

I target all platforms and avoid using any OS-specific facilities outside of #ifdef blocks, unless I'm writing something specifically for an OS that requires a particular facility (e.g. to enhance the OS, say udev). Often I leave this to the people who wrote whatever Python modules I'm using these days.
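A minimal sketch of that pattern in C (inotify is a real Linux facility; the fallback behavior here is just illustrative):

    /* A minimal sketch: keep an OS-specific facility behind #ifdef so a
       portable fallback still compiles everywhere else. */
    #include <stdio.h>
    #include <unistd.h>

    #ifdef __linux__
    #include <sys/inotify.h>    /* Linux-only file-change notification */
    #endif

    int watch_file(const char *path)
    {
    #ifdef __linux__
        int fd = inotify_init();
        if (fd >= 0) {
            if (inotify_add_watch(fd, path, IN_MODIFY) >= 0)
                return fd;      /* caller read()s events from this fd */
            close(fd);
        }
    #endif
        (void)path;             /* portable fallback: caller polls stat() */
        return -1;
    }

    int main(void)
    {
        printf("watch fd: %d\n", watch_file("/tmp/example"));
        return 0;
    }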

Comment Re:de Raadt (Score 1) 304

I actually find that quite hard to believe. The LDT was introduced with the 80286, in 1982. If you check both manuals I mentioned, you'll find that a TSS (and certain other special descriptor types) cannot be stored in the LDT.

As per Wikipedia:

There was no 'Executable' flag in the page table entry (page descriptor) in the 80386 and later x86 processors, until, to make this capability available to operating systems using the flat memory model, AMD added a "no-execute" or NX bit to the page table entry in its AMD64 architecture, providing a mechanism that can control execution per page rather than per whole segment.

Your program's VMA starts with 16MB of unmapped memory. Then comes the executable .text segment. Directly above that is the non-executable brk() segment (the heap). Directly above that are anonymous mappings, including the executable .text segments of shared libraries. Finally, you have the stack.

So memory looks approximately like nnnnXXXXWWWWXXWXWXnnnnnnnWWWWW, for Non-mapped, eXecutable, and Writable memory. (All mapped memory is readable, to simplify the model.)
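On Linux you can see this layout for yourself (a quick sketch; /proc/self/maps is Linux-specific):

    /* A quick sketch (Linux-specific): dump this process's own mappings;
       the permission column shows r/w/x per region, e.g. "r-xp" for .text. */
    #include <stdio.h>

    int main(void)
    {
        FILE *maps = fopen("/proc/self/maps", "r");
        char line[256];

        if (!maps)
            return 1;
        while (fgets(line, sizeof line, maps))
            fputs(line, stdout);
        fclose(maps);
        return 0;
    }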

On pre-NX x86, PROT_READ implies PROT_EXEC; there aren't two separate bits. You can use a segmentation setup, but the executable region must be contiguous: set the code segment limit at the top of the main executable and the heap is non-executable, but so is all library code. Set it just below the stack and the stack is non-executable, but the heap and anonymous data mappings remain executable. There is no bit to say, "This part is executable, and this part isn't, and this next part is, but this next part isn't, this part is, and this last part isn't."

Hence the tricks of setting ring-0 protection on data pages and then forcing a DTLB load when they're read or written, to act as a kernel-controlled NX bit.
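On NX-capable hardware (or with that kind of kernel emulation), per-page execute control is exactly what mprotect() exposes. A sketch; note that on pre-NX 32-bit x86, the data page below would silently remain executable:

    /* A minimal sketch: per-page execute control via mprotect(). On pre-NX
       x86 the first page would still be executable, because the hardware
       had no per-page execute bit. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t pagesz = 4096;   /* assume 4K pages for the sketch */
        unsigned char *buf = mmap(NULL, 2 * pagesz, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
            return 1;

        /* First page: data only. Second page: readable and executable. */
        if (mprotect(buf + pagesz, pagesz, PROT_READ | PROT_EXEC) != 0)
            return 1;

        printf("data page at %p, exec page at %p\n",
               (void *)buf, (void *)(buf + pagesz));
        return 0;
    }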

Comment Re:Or.. (Score 1) 360

You're missing a large point here: the function 'int getBoundaryLength(myObject *p)' returns a piece of information, the boundary length. A lot of things use that boundary length: it gets stored in a variable 'L = getBoundaryLength(p)', passed around, assigned into objects (structures, classes), and subsequently used by other functions such as 'int copyBuffer(char *d, char *s)'.

Modifying how getBoundaryLength() produces its return value has an impact on all of that code. Buffers allocated with that length and passed to other functions may be too short; copy operations may run too long. These are things you must verify. So a one-line modification to one function can have a huge, sweeping impact across your program.
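A toy sketch of that coupling (the function names come from the comment above; the struct and the bodies are invented):

    /* A toy sketch: every buffer below is sized from getBoundaryLength(),
       so changing how that length is computed ripples into all of them. */
    #include <stdlib.h>
    #include <string.h>

    typedef struct { int width, height; } myObject;

    /* If this ever changes (say, to include a border), every allocation
       and copy sized from its result is affected. */
    int getBoundaryLength(myObject *p)
    {
        return 2 * (p->width + p->height);
    }

    int main(void)
    {
        myObject o = { 10, 5 };
        int L = getBoundaryLength(&o);

        char *d = malloc((size_t)L);    /* sized from L */
        char *s = malloc((size_t)L);    /* also sized from L */
        if (!d || !s)
            return 1;
        memset(s, 'x', (size_t)L);
        memcpy(d, s, (size_t)L);        /* copy length follows L too */

        free(d);
        free(s);
        return 0;
    }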

You can pretend you know more about programming because you think you've invented a way for modifications to one function to have no impact on any other part of the code; but if you ever achieve that goal, gcc's dead-code eliminator will remove the entire function, unless it's exported and gcc therefore can't verify that nothing else calls it to useful effect.
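You can watch gcc do this (a sketch; the file and function names are made up):

    /* A sketch: compile with `gcc -O2 -S dead.c` and unused_helper()
       vanishes from the assembly output; exported_helper() survives
       because gcc cannot prove no other translation unit calls it. */
    static int unused_helper(int x) { return x * 2; }  /* removed */

    int exported_helper(int x) { return x + 2; }       /* kept */

    int main(void) { return 0; }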

Comment Re:Or.. (Score 0) 360

They're doing a huge amount of work. They're going to write portable code because they are good programmers, but they're not going to do the porting, just like OpenSSH. They write good solid portable code, other people port it, everyone wins.

Most people's definition of "portable code" is that it's, you know, portable. It runs on multiple platforms. Write once, run across all substantially similar systems. For example: Unix utilities targeting POSIX are portable because the exact same unmodified source code can be compiled on any POSIX platform against the standard POSIX headers, linked with the standard libraries, and run. Much portable code also has OS-specific performance enhancements: it may take advantage of a non-portable OS facility when available. Non-portable code, by contrast, must be modified to compile on other operating systems using standard, portable interfaces: a non-portable OS facility is used unconditionally, and if it's unavailable, the code won't compile.

Your fallacy: equivocation, the informal logical fallacy of calling two different things by the same name. In this case, "portability" (the ability to simply carry a thing from one place to another; in programming, the ability to compile unmodified code on various platforms that supply a standardized API) and "porting" (the act of making a thing portable; in programming, the act of rewriting non-portable software so that it compiles on additional platforms).

It's funny how you cite "economics" as your argument for why people should give you free stuff.

Yes. It's called wealth production. You see, if you use 1 unit of labor and produce 1 unit of output, you have created 0 wealth. If you use 2 units of labor and produce 1 unit of output, you destroy 1 unit of wealth. If you use 1 unit of labor and produce 2 units of output, you create wealth.

As I've explained, it takes some units of labor (effort, work) to fork a code base, greatly improve it in a way that makes it non-portable to the platforms the original was portable to, and then apply additional labor to make the result portable to those same platforms again. It takes fewer units of labor to simply retain portability as you make the improvements. The end result of both strategies is the same; the second simply requires less labor input. It destroys less wealth in the process of creating the same wealth output, and is thus economically more efficient.

Think about it: you pay $10,000 for a car, then $1,000 for new tires and $3,000 to swap the I4 for a V6. Now suppose instead you pay $12,000 for the higher trim that comes with the upgraded tires and the V6. In both cases you get the same car, but in one case it costs $14,000 and in the other $12,000. In the first case, additional labor is spent installing, shipping, and then removing the original equipment, which is then replaced with new equipment that must itself be built, shipped, and installed. Doing it right the first time avoids that install-ship-remove cycle (and any re-shipping to move those parts somewhere they're useful), which is where the $2,000 savings in this example comes from. (We assume the automaker uses a static margin model, where everything is produced and then has a fixed marginal profit slapped on.)

Why would you waste effort creating additional work?

Comment Re:Or.. (Score 1) 360

Who says they're merging back? If they are, then your whining is for nothing, since it will be merged back. If not, then your point is moot.

The libav people go to the ffmpeg repos, get code, and merge it into libav, and vice versa. Do you think only the OpenBSD LibreSSL developers could merge code back into OpenSSL? Probably someone else is going to pull code from LibreSSL and merge it; otherwise, wouldn't the OpenBSD LibreSSL developers just be OpenSSL developers?

Apparently you don't understand how programming works as a group process either, or how community dynamics in open source software work, or something. Somewhere you've failed to figure out how code gets from one place to another.

Comment Re:Or.. (Score 1) 360

One of 3 things. It can also compute something based on the data passed to it and not modify that data. That's functional style, and is generally considered good practice.

And then you don't store that result anywhere, so the function doesn't impact any other code anywhere, because it doesn't affect any value that gets passed on through the program, right?
