Next Generation Stack Computing 347
mymanfryday writes "It seems that stack computers might be the next big thing. Expert Eric
Laforest talks
about stack computers and why they are better than register-based
computers. Apparently NASA uses stack computers in some of their probes. He
also claims that a kernel would only be a few kilobytes large! I wonder if
Windows will be supported on a stack computer in the future?"
Twelfth of Never (Score:5, Funny)
In Redmond, 640 bytes isn't enough for anybody.
Re:Twelfth of Never (Score:3, Insightful)
Re:Twelfth of Never (Score:3, Insightful)
Re:Twelfth of Never (Score:2)
Re:Computer-Science Motto: Back to the Future (Score:2, Funny)
Assembly Code was fun (Score:2, Informative)
FYI - the Open Office format slide links don't work, so sadly I had to open the PPT file in Open Office instead.
Re:Assembly Code was fun (Score:5, Funny)
Re:Assembly Code was fun (Score:4, Funny)
Oh? (Score:4, Funny)
Re:Oh? (Score:2)
Re:Oh? (Score:2)
Re:Oh? (Score:3, Funny)
I should be finished unshuffling them in another six or seven months.
Re:Oh? (Score:5, Informative)
Computer languages can be Turing complete, but physical computers cannot be.
Re:Oh? (Score:3, Informative)
Don't forget the classic HP3000 (Score:2)
Re:Oh? (Score:2)
Also, the Java JVM is a stack architecture, and we have lots of microcontrollers that run JVM bytecode natively. So basically you are all way behind the times on this "stack cpu fad"
wikipedia link (Score:5, Informative)
http://en.wikipedia.org/wiki/Stack_machines [wikipedia.org]
Re: transputer wikipedia link (Score:2, Interesting)
Re:wikipedia link (Score:4, Informative)
Sadly I actually still work on a stack computer, and I had to go look it up.
Re:wikipedia link (Score:2)
Re:wikipedia link (Score:2, Offtopic)
Size and functionality (Score:5, Insightful)
I've seen sub-1k kernels for FORTH systems before. The question is, how much functionality do you want to wrap into that kernel? More capable kernels would, of course, be correspondingly larger.
That said, stack computing and languages like FORTH have long been underrated. Depending on the application, the combination of stack computers and postfix languages can be quite powerful.
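The parent's point about postfix languages can be made concrete. Here is a minimal sketch (in Python, not FORTH, purely to show the evaluation model) of how a Reverse Polish Notation expression executes against a stack: operands push, operators pop and push, and no operand addressing is ever needed.

```python
# Minimal sketch of Reverse Polish Notation evaluation -- the core of
# how a stack machine (or a FORTH interpreter) executes expressions.
def eval_rpn(tokens):
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()          # top of stack is the right operand
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))   # literals are simply pushed
    return stack.pop()

# (b + c) * (d - e) with b=2, c=3, d=7, e=4, written in postfix:
print(eval_rpn("2 3 + 7 4 - *".split()))  # 15
```

Note that the evaluator itself is just a loop and a stack, which is the whole appeal: the "machine" is nearly free.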
Re:Size and functionality (Score:2)
Why would the type of notation matter? Couldn't you program a stack computer just as well with a prefix functional language like Scheme?
For the same reason language choice always matters (Score:2)
If the problem is "make this work on a stack based machine" then look out! You're gonna have aging LISP programmers crawling out of the woodwork to show off their obsolete, er, elite, programming skills.
Re:For the same reason language choice always matt (Score:5, Informative)
OTOH, most FORTHs I've seen use 3 or more stacks. I.e., most of them have a separate stack for floats. What would be *really* nice is if someone built a machine that used Neon as its assembler. Neon is/was an object-oriented dialect of FORTH for the Mac that allowed the user to specify early or late binding for variables. It was developed by Kriya Systems, a now-defunct software house. Unfortunately Neon died during a transition to MSWind95. I understand that it is survived by MOPS, but I've never had a machine that MOPS would run on, so I don't know how similar it was.
I think that FORTH would make a truly great assembler...and the more so if that dialect of FORTH were NEON. But I forget how many stacks it used. At least three, but I have a vague memory that it was actually four. The main stack, the return stack, the floating stack, and ??the Object stack??...I don't know.
Re:For the same reason language choice always matt (Score:3, Informative)
Re:Size and functionality (Score:4, Insightful)
Sure you can - and it compiles to postfix notation anyways, rather inefficiently I might add (get it, add????)
let's say you wanted to write a function like:
function addsubandmultiply(b, c, d, e) {
a = (b + c) * (d - e);
return a;
}
and you've got assembly level instructions such as mov, add, sub, mult, push, and pop, as well as
the very stack-centric stor and lod, allowing you to move one or more stack variables to memory and
the reverse.
A typical register based computer might compile the above as:
pop b
pop c
pop d
pop e
mov b, ax
mov c, bx
add bx
mov ax, temp_memory
mov d, ax
mov e, bx
sub bx
mov temp_memory, bx
mult bx
push ax
Whereas a stack-based computer might compile as:
add
stor temp_memory
sub
lod temp_memory
mult
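The five-instruction sequence above can be checked with a toy simulator. The opcode semantics here are my reading of the parent's sketch (add/sub/mult pop two operands and push the result; stor/lod move the top of stack to/from a named memory cell), not a real ISA:

```python
# Toy simulator for the five-instruction stack program above.
# Semantics are assumptions: arithmetic ops pop two operands and push
# the result; "stor"/"lod" move the top of stack to/from memory.
def run(program, stack, memory):
    for op, *arg in program:
        if op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "sub":
            b, a = stack.pop(), stack.pop()
            stack.append(a - b)
        elif op == "mult":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "stor":
            memory[arg[0]] = stack.pop()
        elif op == "lod":
            stack.append(memory[arg[0]])
    return stack

# a = (b + c) * (d - e) with b=2, c=3, d=7, e=4.
# The caller pushes d, e first, then b, c, so "add" sees b and c on top.
stack = [7, 4, 2, 3]
program = [("add",), ("stor", "temp"), ("sub",), ("lod", "temp"), ("mult",)]
run(program, stack, {})
print(stack)  # [15]
```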
In a stack based computer, operations are carried out directly on your stack... it's very convenient,
since most languages compile function calls to use the stack anyways, and as you can see not having
to deal with an accumulator register makes for much terser code. Between 20 - 40% of your compiled code is spent moving data in and out of the accumulator register, since most instructions depend on
specific data being in that register - to the point that they introduced zero-cycle add/mov functionality in the P4 line - basically, if your code performs an add and then movs ax immediately
out to memory (like the above code - and possibly the most common arithmetic operation in compiled code), if the pipeline and data caches are all available, the P4 will
execute both instructions with enough time to put something else in the instruction pipeline that
cycle. It's not really a zero-cycle function - you can do something like 2.5 (add,mov,add,mov,add) a cycle if you stack them back to back to back, for instance...
Yes, Intel released a benchmark for it. No, I can't imagine why you would want to keep adding and moving the results around memory - maybe some esoteric functions like a Fibonacci generator or even a DSP algorithm of some sort might need to do it, but I don't think it'll be all that often... or that any compiler would have an optimisation to specifically output that sequence if appropriate...
Re:Size and functionality (Score:4, Insightful)
What most register machines compile the following code:
function addsubandmultiply(b, c, d, e) {
a = (b + c) * (d - e);
return a;
}
Into something like (sorry for PPC asm):
add r3, r3, r4
sub r4, r5, r6
mulw r3, r3, r4
blr #(return)
Now tell me that is not just as simple as (or even simpler than) the stack-based one?
Re:Size and functionality (Score:5, Informative)
specific data being in that register - to the point that they introduced zero-cycle add/mov functionality in the P4 line - basically, if your code performs an add and then movs ax immediately
out to memory (like the above code - and possibly the most common arithmetic operation in compiled code), if the pipeline and data caches are all available, the P4 will
execute both instructions with enough time to put something else in the instruction pipeline that
cycle. It's not really a zero-cycle function - you can do something like 2.5 (add,mov,add,mov,add) a cycle if you stack them back to back to back, for instance...
The only zero-cycle mov I'm familiar with on the P4 is a register-to-register mov, and that just takes advantage of the fact that the P4 has a physical register file and a map between the architectural registers and the physical ones. E.g. given
add bx, [cx]
mov ax, bx
the mapper might assign bx to physical register 10. It will then realize that ax is just a copy of bx, so it will make ax point at register 10 as well, and the mov never has to execute at all, thus 'zero cycle'.
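The renaming trick described above can be sketched in a few lines. This is a toy model only (the register numbers and values are made up); the point is that architectural registers are just names pointing into a physical register file, so a register-to-register mov is eliminated by copying the mapping:

```python
# Toy sketch of move elimination via register renaming: "ax" and "bx"
# are names mapped onto a physical register file, so a mov between
# them only copies a mapping, never data -- hence 'zero cycle'.
phys = {}          # physical register file: number -> value
rename = {}        # architectural name -> physical register number

# add bx, [cx]: the result lands in a freshly allocated physical
# register, and the name "bx" is pointed at it.
rename["bx"] = 10
phys[10] = 42      # pretend 42 is the sum the add produced

# mov ax, bx: no data moves at all -- "ax" just aliases the same
# physical register.
rename["ax"] = rename["bx"]

print(phys[rename["ax"]])  # 42
```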
You seem to be saying that the P4 can write the result of an add to the cache in zero cycles, or more than two values in a cycle, which doesn't mesh with what I know of the P4, which is that it has a two-ported cache. But I'm only intimately familiar with early revs of the P4; if you know what rev this was added in I would be interested.
Joy! (Score:3, Interesting)
I've looked into it a couple times, and it seems pretty neat. In a word, functional concatenation.
Plus, as we all know, functional languages are so much more fun than procedural.
Linking to 300MB video files from Slashdot? (Score:3, Funny)
Please try the bittorrent. No, wait... Teach em a lesson, make em burn.
.NET Compatibility (Score:5, Interesting)
Of course... that would still depend on a version of Windows for it to run on.
Re:.NET Compatibility (Score:4, Informative)
They're great (Score:5, Funny)
Hardware engineers like stack computers because the hardware is interesting and easy to design
Investors hate them because they keep losing money on them.
Re:They're great (Score:2)
Yeah, but do they ever make any of it back?
We've heard this before... (Score:4, Funny)
In space no one can hear you blue screen of death. Unless you work for Lucas Films.
PC Stacks (Score:5, Funny)
Awesome (Score:4, Insightful)
Re:Awesome (Score:2)
Saturn (Score:2)
Re:Awesome (Score:2)
Doesn't matter, they're both write-only languages.
Forth? (Score:2, Informative)
I remember that FORTH is a language that supports STACK COMPUTING. Hopefully that's not totally wrong. Unfortunately, it is really hard to understand a FORTH program.
http://en.wikipedia.org/wiki/Forth_programming_language [wikipedia.org]
There is one very widely used FORTH-type language (Score:3, Insightful)
Re:There is one very widely used FORTH-type langua (Score:2)
Re:Forth? (Score:3, Informative)
face; mouth; teeth; brush; wash; wash;
Does it run Windows?!? (Score:5, Funny)
No, no, no, NO! This is SLASHDOT! The proper response is "Does it run Linux [wikipedia.org]"?
Re:Does it run Windows?!? (Score:2)
Re:Does it run Windows?!? (Score:2, Funny)
I think you mean a Beowulf STACK...
Which would be better, a cluster of Beowulf Stacks, or a stack of Beowulf Clusters? Of course, the answer is a stacked cluster of Beowulf Clustered Stacks.
Re:Does it run Windows?!? (Score:5, Funny)
No, the *PROPER* response is (Score:3, Funny)
Next Generation? (Score:2)
X86 FPU's finally losing their stackness (Score:5, Interesting)
Re:X86 FPU's finally losing their stackness (Score:2, Interesting)
As someone who has written several Forth compilers for the x86 I'd like to point out that the design of the stacks on the x86 is very inefficient. The main stack is just that: a stack tied to one particular register. The FP stack was just a joke; a dead weasel could have designed something better. Anyway, I do like using Forth even under the x86 model - it's nice to know that my entire programming system fits into the on-die cache!
Re:X86 FPU's finally losing their stackness (Score:3, Funny)
No no no, since the dawn of time, Man has yearned to destroy the Sun!
x86 came much later, right after the COBOL and the other dinosaurs.
mod parent funny! (Score:2)
-Rick
Re:X86 FPU's finally losing their stackness (Score:2)
Or, at least close enough, for non-technical people.
The X86 is an example of everything! (Score:3, Interesting)
Cup of Joe (Score:2)
Therefore, we should consider moving to Java-based Operating Systems and accelerator chips!
[...]
In case anyone is wondering, I'm only half joking. Java is a stack-based platform, perfectly suited to processors that don't actually exist in real-life. Sun created the picoJava [sun.com] in the 90's, and claimed that it was faster than the Pentium of the day.
Re:Cup of Joe (Score:3, Informative)
Yes, I'm aware of these processors. However, they're not actually stack-based. They convert the Java instructions into ARM RISC instructions which are register-based. So while such chips are very useful in accelerating Java on standard RISC architectures (also VLIW architectures such as MAJC), they are not actually stack machines.
The only modern example of a stack-based processor
Fun and games (Score:5, Funny)
Text of PPT (Score:4, Informative)
Discovered field by chance in 2000 (blame the Internet)
Hobby project (simulations and assembly) until 2004
Transformed into Independent Study thesis project
Overview of current state of research
Focus on programmer's view
Stack Computers: Origins
First conceived in 1957 by Charles Hamblin at the University of New South Wales, Sydney.
Derived from Jan Lukasiewicz's Polish Notation.
Implemented as the GEORGE (General Order Generator) autocode system for the DEUCE computer.
First hardware implementation of LIFO stack in 1963: English Electric Company's KDF9 computer.
Stack Computers: Origins (Part 2)
Independently discovered in 1958 by Robert S. Barton (US).
Implemented in the Burroughs B5000 (also in 1963).
Better known
Spawned a whole family of stack computers
The First Generation
The First Generation: Features
Multiple independent stacks in main memory
Stacks are randomly accessible data structures
Contained procedure activation records
Evaluated expressions in Reverse Polish Notation
Complex instruction sets trying to directly implement high-level languages (e.g.: PL/1, FORTRAN, ALGOL)
Few hardware buffers (four or less typically)
Supplanted in the 1980's by RISC and better compilers
Stack Computers: A New Hope
Enter Charles H. ("Chuck") Moore:
Creator of the stack-based FORTH language, circa 1970
Left Forth, Inc. in 1981 to pursue hardware implementations
NOVIX (1986), Sh-BOOM (1991), MuP21 (1994), F21 (1998), X18 (2001)
Currently CTO of Intelasys, still working on hardware
product launch expected April 3, 2006 at Microprocessor Summit
Enter Prof. Philip Koopman, Carnegie-Mellon University
Documented salient stack designs in "Stack Computers: The New Wave", 1989
The Second Generation
The Second Generation: Features
Two or more stacks separate from main memory
Stacks are not addressable data structures
Expression evaluation and return addresses kept separate
Simple instruction sets tailored for stack operations
Still around, but low-profile (RTX-2010 in NASA probes)
Strangely, missed by virtually all mainstream literature
Exception: Feldman & Retter's "Computer Architecture", 1993
Arguments and Defense
Taken from Hennessy & Patterson's "Computer Architecture: A Quantitative Approach", 2nd edition
Summary: Valid for First Generation, but not Second
Argument: Variables
More importantly, registers can be used to hold variables. When variables are allocated to registers, the memory traffic reduces, the program speeds up (since registers are faster than memory), and the code density improves (since a register can be named with fewer bits than a memory location).
[H&P, 2nd ed, pg 71]
Manipulating the stack creates no memory traffic
Stacks can be faster than registers since no addressing is required
Lack of register addressing improves code density even more (no operands)
Globals and constants are kept in main memory, or cached on stack for short sequences of related computations
Ultimately no different than a register machine
Argument: Expression Evaluation
Second, registers are easier for a compiler to use and can be used more effectively than other forms of internal storage. For example, on a register machine the expression (A*B)-(C*D)-(E*F) may be evaluated by doing the multiplications in any order, which may be more efficient due to the location of the operands or because of pipelining concerns (see Chapter 3). But on a stack machine the expression must be evaluated left to right, unless special operations or swaps of stack position are done.
[H&P, 2nd ed, pg. 71]
Less pipelining is required to keep a stack machine busy
Location of operands is always the stack: no WAR, WAW dependencies
However: always a RAW dependency between instructions
Infix can be easily compiled to postfix
Dijkstra's "shunting yard" algorithm
Stack swap operations equivalent to register-register move operations
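The shunting-yard bullet can be made concrete. A minimal Python sketch of Dijkstra's algorithm (binary left-associative operators and parentheses only, no functions or unary minus) compiling infix to the postfix form a stack machine executes:

```python
# Minimal sketch of Dijkstra's shunting-yard algorithm: compile an
# infix token stream to postfix (Reverse Polish) notation.
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def shunting_yard(tokens):
    output, ops = [], []
    for tok in tokens:
        if tok in PREC:
            # pop operators of higher-or-equal precedence first
            while ops and ops[-1] != "(" and PREC[ops[-1]] >= PREC[tok]:
                output.append(ops.pop())
            ops.append(tok)
        elif tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops[-1] != "(":
                output.append(ops.pop())
            ops.pop()                      # discard the "("
        else:
            output.append(tok)             # operands go straight out
    while ops:
        output.append(ops.pop())
    return output

print(" ".join(shunting_yard("( b + c ) * ( d - e )".split())))
# b c + d e - *
```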
JVM (Score:5, Informative)
Some previous poster noted that CLI is also a stack based model. I can't verify that myself but it wouldn't surprise me; Microsoft is, after all, highly 'innovative' or something.
Re:JVM (Score:3, Informative)
Appropriate instruction set (Score:5, Insightful)
Beginning with the Pentium II, and propagating to pretty much all of the other architectures in a short time, none of the mainstream CPUs have exposed their metal. We have an instruction set, but it's torn into primitives and scheduled for execution. We don't see the primitives, not even in assembler. AFAIK, there isn't even a way to use the true primitives, except perhaps on the Transmeta, where it was undocumented.
So in this light, since we're already fairly far from the true metal, it seems to me that it makes a lot of sense to re-evaluate the instruction set itself. Of course one could raise the Itanium argument, but I would also argue that politics were too big a part, there. Then again, one could also argue that x86 and amd64 are just so entrenched that it doesn't matter, and they do run well on today's hardware.
Then again I could cite my old favorite, the 6809. It started from the same origins and precepts as RISC, but a different attitude. RISC simply tried to optimize the most common operations, at the expense of less common ones. With the 6809, they tried to understand WHY certain things were happening, and how those things could be done better and faster. They ended up with a few more transistors, the same speed, and something approaching 3X the throughput, as compared to the 6800. More similar to the current topic, there was a paper on 'contour mapping', mapping blocks of cache into stacks and data structures. The 6809 was too old for a cache, but it seems to me that combining its concepts with the contour mapping would be interesting indeed.
But like stack engines, it's not x86/amd64 compatible.
Re:Appropriate instruction set (Score:2)
What are you talking about, you clueless git?
Nearly every device driver in your Windows, Linux or Mac machine has assembly code modules which are HAND-TUNED to the processor type (which is why every processor offers a CPUID). And I'm not referring just to graphics cards... There are teams where I work that still need to use MSofts MASM 6.22 to compile 16 bit portions of BIOS code.
I'd say it is 50/50 assembly vs. higher
Re:Appropriate instruction set (Score:3, Insightful)
Re:Appropriate instruction set (Score:2)
Ok, please enlighten me with the 50's era microcode-translation CPU you are referring to, because I've never heard of such a device. This is your chance to show us how smart you are.
Too bad you posted as AC.
Re:Appropriate instruction set (Score:3, Interesting)
The 6809 was not only easy and fun to program, 6809 programs tended to benchmark out significantly faster than programs for comparable CPUs like the Z80, 6800 and 8080. If the industry ever decides to scrap the x86 mess -- which they won't -- going back to the 6809 for a starting point might not be a bad idea at all. I once did a plot of measured times for a benchmark where timings were available for a bunch of CPUs (Sieve of Eratosthenes). When y
Why these downright stupid comments? (Score:4, Insightful)
Re:Why these downright stupid comments? (Score:2)
Stop Hurting My Eyes (Score:5, Informative)
Please stop describing undergrads doing independent studies as "Experts". There's a reason that mainstream processors haven't picked up on "Stack Processors", and it has nothing to do with binary compatibility, the difficulty of writing a compiler for their instruction set, or general programming complexity. Stack Machines are really only good for In-Order processing. Wonder why NASA probes have Stack Processors? Because they don't freaking need to do out of order processing in order to get the performance they require, and they probably found stack processors to have a favorable power / performance ratio for their application. You will never see a full blown Windows running on a Stack processor, because Superscalar processors destroy their performance.
"My research project shows that some people wrote nifty papers in the 1970s, but everyone ignored them for an obvious reason I don't understand." -> Not an Expert
Re:Stop Hurting My Eyes (Score:3, Insightful)
Actually, even asserting that register based computers solve the problems that you are describing is not a general truth. You need to specify how many registers of what type can deal with how many out of order processes. And I suspect that a stack computer with 5 or 6 stacks could deal as flexibly with th
Question about stack computer types (Score:4, Funny)
Are maple syrup and butter options?
Re:Question about stack computer types (Score:2)
Are maple syrup and butter options?
I'm more into grid computing, where it's all about the waffles!
that's a common misnomer (Score:2)
Stack machines - again? (Score:4, Insightful)
Who can forget the English Electric Leo-Marconi KDF9 [carleton.ca], the British stack machine from 1960. That, and the Burroughs 5000, were where it all began.
Stack machines are simple and straightforward to build, but are hard to accelerate or optimize. Classically, there's a bottleneck at the top of the stack; everything has to go through there. With register machines, low-level concurrency is easier. There's been very little work on superscalar stack machines. This student paper from Berkeley [berkeley.edu] is one of the few efforts.
It's nice that you can build a Forth machine with about 4000 gates, but who cares today? It would have made more sense in the vacuum tube era.
Re:Stack machines - again? (Score:2)
Considering that we seem to be entering the vacuum tube era in nano-tech, perhaps a 4000 gate forth machine can be used to run programmable nano-machines.
Not a good idea (Score:5, Insightful)
How important is this parallelism? Consider that modern processors have 10-30 pipeline stages, 3-6 execution units that can have an instruction executing at each stage; moreover, most of them have out-of-order execution units that handle instructions more in the order that data is available for them rather than the order they are listed in the object file (and main memory is hundreds of times slower than the processors themselves, so this is important!). Typically, such processors can have more than 100 instructions in some stage of execution (more than 250 for IBM POWER5 :-)
Consider, also, that the only pieces of anything-like-current stack hardware are Intel x87-style floating point units, that Intel is throwing away -- for good reason! -- in favor of (SSE) vector style units. In the current Intel processors, the vector unit emulates an x87 if it needs to -- but giving only a quarter of the performance.
Someone made remarks about Java and .Net interpreters: in both cases, the interpreter is simulating a purely scalar machine with no fine grained parallelism; no wonder an extensible software-stack implementation is one of the simplest to implement. Stacks are not the way that true Java compilers like gcj generate code, though!
No, stack-based hardware is not a good idea. And it hasn't been since some time in the eighties, when processors started to be pipelined, and processor speed started outstripping memory speed.
Re:Not a good idea (Score:3, Interesting)
NASA (Score:5, Insightful)
Is that supposed to be a ringing endorsement? I thought NASA was using components the rest of the world treated as obsolete due to their proven durability and reliability in the radiation of space.
Re:NASA (Score:3, Interesting)
Essentially correct. It is so costly and time-consuming to get a new component certified for use that it's usually less work to find a clever way to use old components. Then ten months after launch production ceases on the old part, and you have to have special ones built at a hundred times the cost (military option) or scavenge them on eBay (scientific opti
FORTH post! (Score:2, Funny)
No, they're not better (Score:4, Interesting)
But why do you need out-of-order execution? Well, misses to memory are very expensive these days - it can easily take from 200 to 400 cycles to service a load that misses all the way to main memory. This can have a significant effect on performance. What out-of-order execution does is to allow independent instructions that are younger than the load to execute in parallel with it. Quite often these instructions executed in parallel will generate other misses to main memory, overlapping their latencies. So - latency of loads that miss is still very high, but at the very least the processor is not idle while servicing them (for a good read see "MLP Yes! ILP no!" by Andy Glew)
Itanium and Sparc compensate for the fact that they don't do stuff out-of-order by putting sh*tloads of L2/3 cache on-chip. The cost of a miss is still very high, but it happens much less often. The manufacturing cost of a chip is also much higher.
Note that what NASA is sending into space is "old" tech. The reason - well, cosmic rays are much stronger in outer space, and the smaller the gate, the easier it is for them to flip its state.
P.S. I'm a computer architect.
Stack computers are hardly new (Score:5, Insightful)
There were stack computers in the 1960s and 1970s. There was a resurgence of interest in the 1980s--primarily because of Forth's popularity in embedded systems--resulting in a slew of stack-based "Forth engines." Forth creator Chuck Moore has been working on a series of custom Forth CPUs for 20+ years now. His latest version has 24 cores on one chip (and was entirely designed by one person and uses MILLIWATTS of power).
Stack processors and languages have one big advantage: they minimize the overall complexity of a system. The tradeoff is that they often push some of that complexity onto the programmer. That's why Forth tends to shine for small, controlled systems (like a fuel injector or telescope controller or fire alarm), but you don't see people writing 3D video games or web browsers in Forth.
Re:Stack computers are hardly new (Score:2)
Saying it is low power is meaningless when there are CPUs that use hundreds of MICROWATTS in a plethora of embedded devices today.
Re:Stack computers are hardly new (Score:2)
I think the fault is more with the submitter of the story than the author's presentation. The author just gives an overview of how stack computers work and their history. The submitter apparently never knew about stack computers and is all excited about them as a possible future of computing. The presentation is simply a history, mostly about stuff that's 20+ years old. So, yes, while stack processors have been commercially available and they've
Re:Stack computers are hardly new (Score:2)
Truer words were never typed. However, as a low-overhead, "portable assembly language", Forth is a beautiful way to go. The nature of the language causes you to think about the problem as a hierarchy of "procedural objects", which is really ideal for polling the inputs and turning on the lights and motors. I used Forth a lot in an environment where others were using PLCs and
Who likes them? (Score:2)
Re:Who likes them? (Score:2, Interesting)
Probably mostly just an accident of history: register machines went superscalar first and "won" (mostly because, since stack machines were more efficient, maybe the need for superscalarity didn't hit so early...). But, in short: stack machines, with similar design overheads to r
Intel x87 is A STACK ENGINE (Score:2)
You pushed FP numbers onto F(0:7) and the operations worked on the stack. They then had to be popped off to be stored to memory.
Kids these days, I tells ya.
Stack - bad for speed, good for low power (Score:5, Insightful)
The main advantage of stack machines is that all or most parameters for each instruction are implicit. Aside from stack shuffle/rotate instructions, the operands are always the top few on the stack. This makes instructions very small. The logic is also exceedingly simple (for fixed-stack designs). If you want a simple, low-power CPU, a stack machine is what you want.
Where I explored this issue, however, is in the realm of high-performance computing. The key advantage of a stack architecture is that smaller instructions take less time to fetch from memory. If your RISC instructions are 32 bits, but your stack machine instructions are 8 bits, then your instruction caches are effectively 4x larger, and your over-all cache miss penalty is greatly reduced.
The problem with stack machines is that they're damn near impossible to add instruction-level parallelism to. With a RISC machine, near-by instructions that deal with different registers (i.e. no dependencies) can be executed in parallel (whether that's multi-issue or just pipelining). With a stack machine, everything wants to read/write the top of the stack.
I came up with two things to deal with this problem, that are very much like the CISC-to-RISC translation done by modern x86 processors, so it's more of a stack ISA on a RISC architecture. One is that the stack is virtual. When you want to pop from the stack, what's happening in the front-end of the CPU is that you're just popping register numbers corresponding to a flat register file. When you want to push, you're allocating an assigned register number from the flat register file. Now, if you can get two instructions going that read different parts of the stack and write (naturally) to different locations, you can parallelize them. The second part is a healthy set of register shuffling instructions. Since you're doing all of this allocation up front, shuffling registers is as simple as renumbering things in your virtual stack. So a swap operation swaps two register numbers (rather than their contents), and a rotate operation renumbers a bunch of them, but the pending instructions being executed still dump their results in the same physical registers.
This all sounds great, but there are some problems with this:
(1) The shuffling instructions are separate instructions. With a RISC processor, you have more information all in one unit. Although you could try to fetch and execute multiple stack instructions at once, it's much more complicated to execute four stack instructions in parallel than to execute a single RISC instruction, even though they require the same amount of memory.
(2) You need a lot of shuffling instructions. Say your stack contains values A, B, C, and D, and you want to sum them. Without shuffling, you'd add A and B, yielding E, then add E and C, yielding F, then add F and D. Three add instructions. If your adder(s) is/are pipelined, you'd like to add A+B and C+D in parallel or overlapping, THEN wait around for their results and do the third add. The problem is that to do that, you'd need to add A+B, then rotate C to the top then D to the top, then add, then add again. The first case was 3 instructions; the second case is 5 instructions. Depending on your architecture, the extra shuffle instructions may take so long to process that you might as well just have waited. No speed gain at all.
(3) The extra shuffling instructions take up space. Optimizers are hard to write. Although it's conceivable that one could optimize for this architecture so as to avoid as many shuffling instructions as possible, you still end up taking up quite a lot of space with them, potentially offsetting much of the space savings that you got from switching from RISC to stack.
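The instruction-count tradeoff in point (2) can be checked concretely. Both sequences below compute the same sum of the top four stack entries; the opcode names and the "rot" semantics (bring the third-from-top entry to the top) are illustrative, not a real ISA:

```python
# Summing the top four stack entries two ways: serially (3 instructions,
# every add depends on the previous one) and paired (A+B and C+D could
# overlap, but two rotates are needed to expose the second pair).
def run(program, stack):
    for op in program:
        if op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "rot":
            stack.append(stack.pop(-3))   # third-from-top to the top
    return stack

serial = ["add", "add", "add"]                 # 3 instructions, no ILP
paired = ["add", "rot", "rot", "add", "add"]   # 5 instructions, some ILP

# Stack holds D, C, B, A with A on top:
print(run(serial, [4, 3, 2, 1]))   # [10]
print(run(paired, [4, 3, 2, 1]))   # [10]
```

Same result, but the "parallel-friendly" form is 5 instructions instead of 3, which is exactly the overhead the parent describes.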
So, there you have it. Somewhat OT, because surely NASA's primary goal has got to be low-power, but also somewhat on-topic because stack architectures aren't the holy grail. Just ideal for some limited applications.
Re:Stack - bad for speed, good for low power (Score:3, Interesting)
With a stack machine, running one instruction stream in parallel is very hard, while very easy on a register-based one. But the flip side of this is that on a stack machine running multiple instruction streams in parallel is incredibly easy while *Very* difficult on a register based CPU.
For instance, take "add 1 to each element of this 30-length array"
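That example decomposes naturally: each element gets its own tiny, fully independent instruction stream. A hedged Python sketch of the idea (the per-element "program" and its opcodes are made up for illustration):

```python
# "Add 1 to each element of this 30-length array" as 30 independent
# stack programs -- no inter-stream dependencies, so each could run on
# its own small stack core.
def stack_core(value):
    stack = [value]
    stack.append(1)                     # push literal 1
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)                 # add
    return stack.pop()

array = list(range(30))
result = [stack_core(x) for x in array]  # each call is an independent stream
print(result[:5])  # [1, 2, 3, 4, 5]
```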
Parallelism (Score:3, Interesting)
If a stack machine is that much simpler, couldn't you either have:
The second refers to an instruction coding similar to VLIW such that you parallelise the code on multiple stacks but it still shares an instruction/data cache and allows for parallelism without heavy multi-threading at the high-level (and instead having parall
MMIX uses a register stack (Score:3, Interesting)
A bumper sticker I saw once (Score:5, Funny)
A useful HTML article (better than 9999TB AVI) (Score:5, Informative)
A Near Miss for Stack Computing Circa 1981 (Score:5, Interesting)
An excerpt from a bit longer essay I wrote [geocities.com]:
If Stack-Based Computing Is So Great... (Score:3, Interesting)
Re:I Know... (Score:4, Funny)
Re:I Know... (Score:2)