"it could begin the next instruction while still computing the current one, as long as the current one wasn't required by the next " Doesn't this go without saying?
Back in the day, pipelining - issuing, say, a new multiply instruction every clock, even though several earlier multiplies were still working their way through the pipeline - was too expensive for most architectures. An instruction might take multiple clock cycles to execute, and in most architectures a multi-clock instruction would tie up its functional unit until the computation was done - you might only be able to issue a new multiply every 10 clocks or so. Pipelining takes more gates and more design work, because you don't want one slow stage to determine the clock rate of the whole design.
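A quick back-of-the-envelope, using the illustrative 10-clock figure above (not timings from any particular machine), shows why pipelining is such a big throughput win:

```python
# Compare issuing multiplies back to back on a unit that is tied up for
# the whole latency vs. a fully pipelined unit that accepts one per clock.
def cycles_unpipelined(n_ops: int, latency: int = 10) -> int:
    # the functional unit is busy for the full latency of each op
    return n_ops * latency

def cycles_pipelined(n_ops: int, latency: int = 10) -> int:
    # one new op enters per clock; first result appears after `latency`
    return latency + (n_ops - 1)

print(cycles_unpipelined(100))  # 1000 cycles
print(cycles_pipelined(100))    # 109 cycles
```

Same latency per multiply, but the pipelined unit approaches one result per clock once it's full.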
Which leads us to the early RISC computers. I can recall an early Sun SPARC architecture that lacked a hardware integer multiply instruction. The idea at the time was that every instruction should take one clock, and any operation that demanded too long a clock should be unrolled in software. So that version of SPARC did multiplies with shifts and adds. At the time, that was a pure RISC design. One of the key insights in RISC, still useful today, is to separate main memory access from other computation.
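For the curious, here's the classic shift-and-add multiply in Python - a sketch of the technique, not the actual instruction sequence a SPARC compiler emitted:

```python
def mul_shift_add(a: int, b: int) -> int:
    """Multiply two non-negative integers using only shifts and adds,
    the kind of software sequence used where there's no hardware multiply."""
    result = 0
    while b:
        if b & 1:        # low bit of the multiplier set: add multiplicand
            result += a
        a <<= 1          # shift the multiplicand left one place
        b >>= 1          # consume one bit of the multiplier
    return result

print(mul_shift_add(1234, 5678))  # 7006652, same as 1234 * 5678
```

One iteration per bit of the multiplier - which is exactly why a 32-bit multiply done this way costs dozens of single-clock instructions instead of one long one.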
The CPU design course I took in the late 80's said Seymour Cray invented that idea of separating loads and stores from computation, because even then, even with static RAM as main memory, accessing main memory was slower than accessing registers. By separating loads from RAM into registers and stores from registers back to RAM, the compiler could pre-schedule loads and stores so that they would not stall the functional units. Cray also pioneered pipelining, another key feature of most modern CPUs (I'm not sure when ARM adopted pipelining, but I'm pretty sure ARM designs have had it for a while now). And of course Cray had vector registers and the consequent vector SIMD hardware.
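A toy model (made-up latencies, hypothetical register names, not any real machine) shows what that compiler pre-scheduling buys. Hoisting the loads ahead of the arithmetic hides the memory latency behind useful work:

```python
# Toy in-order machine: a load's result is usable 3 cycles after issue;
# everything else is 1 cycle. An instruction that needs a register whose
# value isn't ready yet stalls the pipeline until it is.
LOAD_LATENCY = 3

def run(program):
    ready = {}      # register -> cycle at which its value becomes available
    cycle = 0
    for op, dst, srcs in program:
        # stall until every source register is ready
        cycle = max([cycle] + [ready.get(r, 0) for r in srcs])
        cycle += 1  # issue the instruction
        ready[dst] = cycle + (LOAD_LATENCY - 1 if op == "load" else 0)
    return max([cycle] + list(ready.values()))

# naive order: each add stalls waiting on the load just before it
naive = [("load", "r1", []), ("add", "r2", ["r1"]),
         ("load", "r3", []), ("add", "r4", ["r3"])]
# compiler-scheduled order: both loads issued early, adds overlap the latency
scheduled = [("load", "r1", []), ("load", "r3", []),
             ("add", "r2", ["r1"]), ("add", "r4", ["r3"])]

print(run(naive), run(scheduled))  # 8 5
```

Same four instructions, same latencies - only the order changed, and the stalls mostly disappear.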
I don't think Cray invented out-of-order execution, but then I don't think he needed it: in Cray's architectures, it was the compiler's job to order instructions to prevent stalls. In CISC architectures, OOO is mostly a trick for preventing stalls without the compiler needing to worry about it (also, with so many models and versions of the Intel instruction architecture out there, it would be painful to have to compile differently for each and every model). So, for example, the load part of an instruction can be scheduled early enough that the data is in a register by the time the rest of the instruction needs it.
Anyway, the upshot is that modern CPU designs owe a bigger debt to Cray than to any other single designer.