Even in the days of the x86, they got there by accident.
Guess you've never heard of a "pivot" before.
Before pivots had their Andy Warhol moment: lucky lightning strike; after pivots became the in thing: just another day at the office at Innovation Inc.
[*] Here "Inc." stands for Incarnate.
The 432 was intended to be the successor of the 8086, but it was complicated and late.
Meanwhile Intel had an 860 team and a follow-on 486 team competing with each other.
———
Oral History of John H. Crawford 2014 Computer History Museum Fellow — 24 February 2014
So I think that the key to success of the 486 was not that we were going to beat RISC but we were going to be able to keep up and we were able to adopt a lot of the same engineering techniques, maybe not the same but we were able to get performance that was consistent with that.
Well, we had to spend more transistors. It was more complicated. But we had a big enough market to justify the investment, so our goal was not to beat all the RISC guys. Our goal was to be within, I don't know, 20 percent or something of performance, at which point we thought that other considerations would carry the day, and we still could be quite successful. Yeah, the key thing was the one clock per instruction throughput on the 486.
And a couple of interesting things about that. I had mentioned Pat Gelsinger's push to accommodate that within the execution pipeline. Even before that I remember a conversation with engineers from Sun and Bill Joy was in the audience, and we were probably trying to sell him on the 386, and back and forth questions.
And a question that he asked that really struck me as profound was, "Well, how do you know when to start ..." because we had this complicated instruction set. "How many clocks does it take you from the time you start decoding one instruction to know where the next one starts?"
And that's a key parameter that we had to deal with in order to get one clock per instruction throughput. I mean in order to get one clock per instruction throughput everything has to happen at a one clock pace. So you start decoding one instruction. Next clock you better be decoding the next one or you missed it.
He goes on to explain that you sort of had to guess about instruction boundaries, but the accuracy turned out to be 95 to 99 percent. You also had to guess a bit when prefetching from the microcode ROM, but there the quick path covered 90 percent of the cases.
———
So Intel actually had three different approaches running in parallel, and the different groups sometimes cross-fertilized.
But it was a friendly rivalry; in fact, Les Kohn [of the 860 RISC team] was very helpful in helping us set direction for the cache that we were going to put on the 486. I think he provided a cache simulator and a couple of other things we were able then to change to try to predict how things might operate and what size we should try to focus on and some of those kind of questions.
Through the fuller lens of history, this begins to look less like a pivot, and more like covering your bases in spades.
———
Intel is no stranger to bad management. We don't need to revise history to invent more bad management than actually existed, over and above what's already tumbling out of the horn of plenty.