I did mean that about time pressure, but I don't understand what you are suggesting.
The scenario is to suppose that they have some audio algorithm that
depends on creating and executing a block of code many times, but only
for a very small fraction of a second, and then (this is the critical
assumption) creating and executing a slightly different block of code,
and so on, over and over.
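Concretely, the shape I have in mind is something like this minimal sketch
(my own illustration, not their code; it assumes x86-64 Linux, and the
emitted bytes encode "mov eax, imm32; ret"):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* One page we can both write and (for now) execute. A W^X-enforcing
           system would refuse this mapping, which is where the security
           point below comes in. */
        uint8_t *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) return 1;

        for (uint32_t i = 0; i < 10; i++) {
            buf[0] = 0xB8;              /* mov eax, imm32       */
            memcpy(buf + 1, &i, 4);     /* the part that varies */
            buf[5] = 0xC3;              /* ret                  */
            printf("%u\n", ((uint32_t (*)(void))buf)());  /* run it */
        }
        return 0;
    }

(I'm leaning on x86's coherent instruction cache here; on ARM and friends
each rewrite would also need an explicit instruction-cache flush.)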
Long ago I saw a demo of an Amiga video capture program that had precisely
that sort of thing going on, where they could only get the throughput they
needed by using self-modifying code, changing one instruction in an inner
loop on a regular basis, and their approach held up under the scrutiny of
a hundred developers critiquing it; it really was needed.
So I am sure that such things do arise in the real world; whether the current
codebase truly has such a need, I don't know.
So *if* they have this need, then every time they create or modify a code
block, they need it to be writable, but when they execute what they wrote,
security concerns insist that it go from (write, no-execute) to
(no-write, execute), every single time.
Making that permission change requires a system call (mprotect or the
local equivalent) every time, unavoidably.
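Under that constraint the loop above becomes roughly the following (again
my own sketch, assuming x86-64 Linux, error handling elided):

    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Hypothetical emitter: writes a fresh "mov eax, imm32; ret" block. */
    static void emit_block(uint8_t *buf, uint32_t imm) {
        buf[0] = 0xB8;
        memcpy(buf + 1, &imm, 4);
        buf[5] = 0xC3;
    }

    int main(void) {
        uint8_t *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        for (uint32_t i = 0; i < 10; i++) {
            /* Syscall: back to (write, no-execute) before rewriting. */
            mprotect(buf, 4096, PROT_READ | PROT_WRITE);
            emit_block(buf, i);

            /* Syscall: flip to (no-write, execute) before running. */
            mprotect(buf, 4096, PROT_READ | PROT_EXEC);
            ((uint32_t (*)(void))buf)();
        }
        return 0;
    }

Two mprotect calls per block; at the iteration rates audio work implies,
that overhead is exactly what would push someone toward a shortcut.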
Any scheme that allows avoiding that system call is inherently making different
assumptions than I did above. Without knowledge of their algorithms, I don't
see how we can be sure that assumptions like that are either right or wrong;
I'm just pointing out that *if* they are correct, they explain why the codebase
does what it does.
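(The usual example of such a scheme is double-mapping: the same physical
pages mapped twice, one view writable and one executable, so no protection
ever changes. That avoids the system calls, but only by assuming the threat
model tolerates a standing writable alias of executable code, which is
precisely the kind of different assumption I mean. A sketch, assuming a
Linux memfd:

    #define _GNU_SOURCE
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* One shared memory object, mapped twice. */
        int fd = memfd_create("jit", 0);
        ftruncate(fd, 4096);
        uint8_t *w = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        uint8_t *x = mmap(NULL, 4096, PROT_READ | PROT_EXEC,  MAP_SHARED, fd, 0);

        for (uint32_t i = 0; i < 10; i++) {
            /* Write through the writable view... */
            w[0] = 0xB8; memcpy(w + 1, &i, 4); w[5] = 0xC3;
            /* ...execute through the executable view: no mprotect at all. */
            ((uint32_t (*)(void))x)();
        }
        close(fd);
        return 0;
    })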
An alternative explanation, which I tried to give a nod to, is that they
may have simply done a premature optimization that was not actually
needed, but again, one motivated by the reasoning outlined above.