Next Generation Stack Computing

mymanfryday writes "It seems that stack computers might be the next big thing. Expert Eric Laforest talks about stack computers and why they are better than register-based computers. Apparently NASA uses stack computers in some of their probes. He also claims that a kernel would only be a few kilobytes large! I wonder if Windows will be supported on a stack computer in the future?"
  • by Tackhead ( 54550 ) on Thursday August 10, 2006 @02:20PM (#15883677)
    > He also claims that a kernel would only be a few kilobytes large! I wonder if Windows will be supported on a stack computer in the future?"

    In Redmond, 640 bytes isn't enough for anybody.

    • But in reality this would be a major redesign of the OS, and all apps would need to be recompiled or emulated. Registers are a core part of assembly language. Having to remake Windows would be like making Windows for the PowerPC, if not more difficult.
      • by real_b0fh ( 557599 )
        actually, if windows is 'done right' all it would take is a recompile. I don't think there is a lot of assembler code in the windows source that needs to be rewritten; most code will be in C or C++.

      • I've run C code on stack computers before. What's harder is running C on a stack-less computer. Register-based CPUs typically have some form of stack, but some microcontrollers don't have enough space for a call stack, or they have some unusual limitation (like no CALL/RET instructions).
  • Sounds like fun. I remember writing RISC assembly code. I felt like a real man back then. C# just doesn't cut it.

    fyi - The Open Office format Slide links don't work, so sadly I had to open the PPT file in Open Office instead.
  • Oh? (Score:4, Funny)

    by qbwiz ( 87077 ) * <johnNO@SPAMbaumanfamily.com> on Thursday August 10, 2006 @02:23PM (#15883713) Homepage
    I thought the 387 and Burroughs B5000 were odd, antiquated architectures, but apparently they're the wave of the future.
    • Everything old is new again! *kick-starts the old ENIAC*
      • ENIAC's a great machine. All you need to do is re-implement it in nano-technology. Unfortunately, it's analog, so you won't be able to translate C to run on it.
        • Re:Oh? (Score:3, Funny)

          I had a good C interpreter ported over to punch cards, but one day I accidentally dropped crate #147 off the forklift and they went everywhere. Damn my lazy habit of not labelling my media!

          I should be finished unshuffling them in another six or seven months.
    • Don't forget the venerable HP3000 "Classic" [wikipedia.org] machines like the Series 68 and 70 machines.
    • It's like how bellbottoms were poised to make a comeback in the early 1990s. It was just a big scam cooked up by the fashion industry because they were out of ideas (again).

      Also, the Java JVM is a stack architecture, and we have lots of microcontrollers that run JVM bytecode natively. So basically you are all way behind the times on this "stack CPU fad".
  • wikipedia link (Score:5, Informative)

    by whitelines ( 715819 ) on Thursday August 10, 2006 @02:24PM (#15883727) Homepage
    I didn't know either:
    http://en.wikipedia.org/wiki/Stack_machines [wikipedia.org]
  • by Angst Badger ( 8636 ) on Thursday August 10, 2006 @02:25PM (#15883742)
    He also claims that a kernel would only be a few kilobytes large!

    I've seen sub-1k kernels for FORTH systems before. The question is, how much functionality do you want to wrap into that kernel? More capable kernels would, of course, be correspondingly larger.

    That said, stack computing and languages like FORTH have long been underrated. Depending on the application, the combination of stack computers and postfix languages can be quite powerful.
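
    A minimal Python sketch of why postfix maps so directly onto a stack (the evaluator and its operator table are my own illustration, not any particular FORTH's inner interpreter):

    # Minimal RPN (postfix) evaluator: operands are pushed; each operator
    # pops its arguments and pushes the result back.
    def eval_rpn(tokens):
        ops = {"+": lambda a, b: a + b,
               "-": lambda a, b: a - b,
               "*": lambda a, b: a * b,
               "/": lambda a, b: a / b}
        stack = []
        for tok in tokens:
            if tok in ops:
                b = stack.pop()          # top of stack is the right operand
                a = stack.pop()
                stack.append(ops[tok](a, b))
            else:
                stack.append(float(tok)) # a number is simply pushed
        return stack.pop()

    print(eval_rpn("2 3 + 4 *".split()))  # (2 + 3) * 4 = 20.0
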
    • Depending on the application, the combination of stack computers and postfix languages can be quite powerful.

      Why would the type of notation matter? Couldn't you program a stack computer just as well with a prefix functional language like Scheme?

      • because you should choose a language that fits the problem, not the reverse.

        If the problem is "make this work on a stack based machine" then look out! You're gonna have aging LISP programmers crawling out of the woodwork to show off their obsolete, er, elite, programming skills.
        • Sorry, but LISP (though I don't mean Common LISP) is just as much a stack language as FORTH. I think the first LISP that wasn't was LISP 1.5...but I'm rather vague on LISP history. Still, s-expressions are as stack oriented as FORTH is. The interesting thing is the first Algol 60 compiler (well...really an interpreter) I ever used was a stack machine. (That was why it was an interpreter. The real computer was an IBM 7090/7094 BCS system so it ran a stack machine program that Algol was compiled to run on. Whee!) So if you want a good stack computer language you could pick Algol 60. But FORTH is easier, and could even be the assembler language.

          OTOH, most FORTHs I've seen use 3 or more stacks. I.e., most of them have a separate stack for floats. What would be *really* nice is if someone built a machine that used Neon as its assembler. Neon is/was an object-oriented dialect of FORTH for the Mac that allowed the user to specify early or late binding for variables. It was developed by Kyria Systems, a now-defunct software house. Unfortunately Neon died during a transition to MSWind95. I understand that it is survived by MOPS, but I've never had a machine that MOPS would run on, so I don't know how similar it was.

          I think that FORTH would make a truly great assembler...and the more so if that dialect of FORTH were NEON. But I forget how many stacks it used. At least three, but I have a vague memory that it was actually four. The main stack, the return stack, the floating stack, and ??the Object stack??...I don't know.
          • If you miss Neon, you'll be happy to know that you can get about 90% of its implementation and 100% of its concepts in the form of PowerMops [powermops.org], which is open-source and runs great and natively on Leopard. I haven't used it for anything recently, but it's worked fine for hobbyist stuff I've done in the past. I strongly encourage you to check it out.
      • by merlin_jim ( 302773 ) <James.McCrackenNO@SPAMstratapult.com> on Thursday August 10, 2006 @02:55PM (#15883996)
        Couldn't you program a stack computer just as well with a prefix functional language like Scheme?

        Sure you can - and it compiles to postfix notation anyways, rather inefficiently I might add (get it, add????)

        let's say you wanted to write a function like:
        function addsubandmultiply(b, c, d, e) {
        a = (b + c) * (d - e);
        return a;
        }

        and you've got assembly level instructions such as mov, add, sub, mult, push, and pop, as well as the very stack-centric stor and lod, allowing you to move one or more stack variables to memory and the reverse.

        A typical register based computer might compile the above as:

        pop b               ; fetch the arguments from the call stack
        pop c
        pop d
        pop e
        mov b, ax           ; ax = b (operand order here is src, dst)
        mov c, bx           ; bx = c
        add bx              ; ax = ax + bx = b + c
        mov ax, temp_memory ; spill b + c to memory
        mov d, ax           ; ax = d
        mov e, bx           ; bx = e
        sub bx              ; ax = ax - bx = d - e
        mov temp_memory, bx ; bx = b + c (reload the spilled sum)
        mult bx             ; ax = ax * bx = (d - e) * (b + c)
        push ax             ; return the result on the stack

        Whereas a stack-based computer (assuming b, c, d, and e were already pushed, with b and c on top) might compile as:

        add                 ; pop b and c, push b + c
        stor temp_memory    ; pop b + c, spill it to memory
        sub                 ; pop d and e, push d - e
        lod temp_memory     ; push b + c back onto the stack
        mult                ; pop both results, push (d - e) * (b + c)

        In a stack based computer, operations are carried out directly on your stack... it's very convenient, since most languages compile function calls to use the stack anyways, and as you can see not having to deal with an accumulator register makes for much terser code. Between 20 - 40% of your compiled code is spent moving data in and out of the accumulator register, since most instructions depend on specific data being in that register - to the point that they introduced zero-cycle add/mov functionality in the P4 line - basically, if your code performs an add and then movs ax immediately out to memory (like the above code - and possibly the most common arithmetic operation in compiled code), if the pipeline and data caches are all available, the P4 will execute both instructions with enough time to put something else in the instruction pipeline that cycle. It's not really a zero-cycle function - you can do something like 2.5 (add,mov,add,mov,add) a cycle if you stack them back to back to back, for instance...

        Yes, Intel released a benchmark for it. No, I can't imagine why you would want to keep adding and moving the results around memory - maybe some esoteric functions like a Fibonacci generator or even a DSP algorithm of some sort might need to do it, but I don't think it'll be all that often... or that any compiler would have an optimisation to specifically output that sequence if appropriate...
        • by Anonymous Coward on Thursday August 10, 2006 @03:27PM (#15884260)
          Actually, x86 is in between a stack machine and a register-based machine.
          Most register machines would compile the following code:
          function addsubandmultiply(b, c, d, e) {
          a = (b + c) * (d - e);
          return a;
          }
          Into something like (sorry for PPC asm):
          add r3, r3, r4    # r3 = b + c (arguments arrive in r3..r6)
          sub r4, r5, r6    # r4 = d - e
          mulw r3, r3, r4   # r3 = (b + c) * (d - e), the return value
          blr               # return

          Now tell me that is not just as simple as (or even simpler than) the stack-based one?
        • by Chris Burke ( 6130 ) on Thursday August 10, 2006 @06:11PM (#15885295) Homepage
          Between 20 - 40% of your compiled code is spent moving data in and out of the accumulator register, since most instructions depend on specific data being in that register - to the point that they introduced zero-cycle add/mov functionality in the P4 line - basically, if your code performs an add and then movs ax immediately out to memory (like the above code - and possibly the most common arithmetic operation in compiled code), if the pipeline and data caches are all available, the P4 will execute both instructions with enough time to put something else in the instruction pipeline that cycle. It's not really a zero-cycle function - you can do something like 2.5 (add,mov,add,mov,add) a cycle if you stack them back to back to back, for instance...


          The only zero-cycle mov I'm familiar with on the P4 is a register-to-register mov, and that just takes advantage of the fact that the P4 has a physical register file and a map between the architectural registers and the physical ones. E.g. given
          add bx, [cx]
          mov ax, bx

          the mapper might assign bx to physical register 10. It will then realize that ax is just a copy of bx, so it will make ax point at register 10 as well, and the mov never has to execute at all, thus 'zero cycle'.
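
          A toy Python model of that rename trick (my own simplification, not the actual P4 machinery):

          # Rename map: architectural register names -> physical register numbers.
          phys = [0] * 128                       # physical register file
          rename = {"ax": 1, "bx": 2, "cx": 3}
          free_list = list(range(4, 128))

          # add bx, [cx]: the result goes to a freshly allocated register.
          dest = free_list.pop(0)                # say this yields register 10
          phys[dest] = phys[rename["bx"]] + 42   # 42 stands in for the load from [cx]
          rename["bx"] = dest

          # mov ax, bx: pure bookkeeping -- ax now names the same physical
          # register, so the mov never has to execute at all.
          rename["ax"] = rename["bx"]
          assert rename["ax"] == rename["bx"] == dest
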

          You seem to be saying that the P4 can write the result of an add to the cache in zero cycles, or more than two values in a cycle, which doesn't mesh with what I know of the P4, which is that it has a two-ported cache. But I'm only intimately familiar with early revs of the P4; if you know what rev this was added in, I would be interested.
      • Joy! (Score:3, Interesting)

        How come no one has mentioned the language Joy?
        I've looked into it a couple times, and it seems pretty neat. In a word, functional concatenation.
        Plus, as we all know, functional languages are so much more fun than procedural.
  • by Colin Smith ( 2679 ) on Thursday August 10, 2006 @02:25PM (#15883743)
    Someone's having a larf. Oh you do crack me up Messrs mymanfryday and CmdrTaco.

    Please try the bittorrent. No, wait... Teach em a lesson, make em burn.

  • .NET Compatibility (Score:5, Interesting)

    by DaHat ( 247651 ) on Thursday August 10, 2006 @02:26PM (#15883744)
    Interestingly enough the Microsoft Intermediate Language (MSIL) that .NET apps are compiled to before being JITed into machine code is actually built around a stack based system as well... No doubt porting the .NET Framework over to such a system would be quite easy... and give much in the way of performance boosts (especially on startup).

    Of course... that would still depend on a version of Windows for it to run on.
    • by jfengel ( 409917 ) on Thursday August 10, 2006 @02:31PM (#15883788) Homepage Journal
      The Java Virtual Machine is also sort of stack-based. The JVM bytecode set uses stack operations but the safety checks that it runs make it equivalent to a sliding set of registers not unlike, say, the SPARC architecture. A JIT implementation could do away with the stack, at least in the individual method calls, though the call-return stack would still have to be there.
  • by TechyImmigrant ( 175943 ) * on Thursday August 10, 2006 @02:26PM (#15883746) Homepage Journal
    Mathematicians like stack computers because it's easier to formally prove the behaviour of algorithms using stacks.
    Hardware engineers like stack computers because the hardware is interesting and easy to design.
    Investors hate them because they keep losing money on them.
  • by __aaclcg7560 ( 824291 ) on Thursday August 10, 2006 @02:26PM (#15883750)
    Apparently NASA uses stack computers in some of their probes.

    In space no one can hear you blue screen of death. Unless you work for Lucasfilm.
  • PC Stacks (Score:5, Funny)

    by celardore ( 844933 ) on Thursday August 10, 2006 @02:26PM (#15883758)
    I once had a job where I had to sort through stacks of computers. Overall the stacks were pretty useless, a bunch of burnt-out 286s. Even putting all your redundant computing power into a stack doesn't necessarily make it better!
  • Awesome (Score:4, Insightful)

    by LinuxFreakus ( 613194 ) on Thursday August 10, 2006 @02:27PM (#15883760)
    Does this mean my old HP48GX will be considered cutting edge? I should get ready to sell it on eBay when the craze hits! All my old classmates will be forced to allow me to have the last laugh after I was on the receiving end of much ridicule for using the HP when the TI was the only thing "officially" endorsed by all the calculus textbooks. I don't know if I could ever part with it though. I still use it almost daily; the thing continues to kick ass.
  • Forth? (Score:2, Informative)

    by dslmodem ( 733085 )

    I remember that FORTH is a language supporting STACK COMPUTING. Hopefully that's not totally wrong. Unfortunately, it is really hard to understand FORTH programs.

    http://en.wikipedia.org/wiki/Forth_programming_language [wikipedia.org]

  • by Stealth Dave ( 189726 ) on Thursday August 10, 2006 @02:30PM (#15883785) Homepage
    I wonder if Windows will be supported on a stack computer in the future?

    No, no, no, NO! This is SLASHDOT! The proper response is "Does it run Linux [wikipedia.org]?"
  • Patrick Stewart would be displeased by this misleading headline.
  • by GGardner ( 97375 ) on Thursday August 10, 2006 @02:31PM (#15883793)
    Since the dawn of time, the x86 FPU has been organized as a stack, which has been recognized as a mistake by modern computer architects. For one thing, it is hard to get a stack architecture to take advantage of multiple functional units. Only recently, with the development of SSE, 64 bit modes and other additions have we been able to move away from the stack on the x86.
    • by Anonymous Coward
      we been able to move away from the stack on the x86

      As someone who has written several Forth compilers for the x86 I'd like to point out that the design of the stacks on the x86 is very inefficient. The main stack is just that: a stack tied to one particular register. The FP stack was just a joke; a dead weasel could have designed something better. Anyway, I do like using Forth even under the x86 model - it's nice to know that my entire programming system fits into the on-die cache!

    • Since the dawn of time, the x86 FPU has been organized as a stack

      No no no, since the dawn of time, Man has yearned to destroy the Sun!

      x86 came much later, right after the COBOL and the other dinosaurs.
    • > Since the dawn of time, the x86 FPU has been organized as a stack

      Or, at least close enough, for non-technical people.
    • I did a computer architecture course a number of years ago. One day, we came to the consensus that the X86 architecture was an example of every computer architecture in existence. You want load store: look at all those MOV AX, xxxx instructions. You want register RISC, look at all those registers AX, BX, CX, DX, SI, DI, SP, BP. You want stack based: look at the FPU. You want vector parallel processing, look at those MMX/SSE instructions. You want symmetric multi-processing, look at those dual cores.

  • Expert Eric Laforest talks about stack computers and why they are better than register-based computers. Apparently NASA uses stack computers in some of their probes.

    Therefore, we should consider moving to Java-based Operating Systems and accelerator chips!

    [...]

    In case anyone is wondering, I'm only half joking. Java is a stack-based platform, perfectly suited to processors that don't actually exist in real-life. Sun created the picoJava [sun.com] in the 90's, and claimed that it was faster than the Pentium of the day.

  • by Carnildo ( 712617 ) on Thursday August 10, 2006 @02:32PM (#15883796) Homepage Journal
    It's all fun and games until someone hits a stack underflow.
  • Text of PPT (Score:4, Informative)

    by Anonymous Coward on Thursday August 10, 2006 @02:40PM (#15883851)
    Introduction
    Discovered field by chance in 2000 (blame the Internet)
    Hobby project (simulations and assembly) until 2004
    Transformed into Independent Study thesis project
    Overview of current state of research
    Focus on programmer's view

    Stack Computers: Origins
    First conceived in 1957 by Charles Hamblin at the University of New South Wales, Sydney.
    Derived from Jan Lukasiewicz's Polish Notation.
    Implemented as the GEORGE (General Order Generator) autocode system for the DEUCE computer.
    First hardware implementation of LIFO stack in 1963: English Electric Company's KDF9 computer.
    Stack Computers: Origins (Part 2)
    Independently discovered in 1958 by Robert S. Barton (US).
    Implemented in the Burroughs B5000 (also in 1963).
    Better known
    Spawned a whole family of stack computers
    The First Generation
    The First Generation: Features
    Multiple independent stacks in main memory
    Stacks are randomly accessible data structures
    Contained procedure activation records
    Evaluated expressions in Reverse Polish Notation
    Complex instruction sets trying to directly implement high-level languages (e.g.: PL/1, FORTRAN, ALGOL)
    Few hardware buffers (four or less typically)
    Supplanted in the 1980's by RISC and better compilers
    Stack Computers: A New Hope
    Enter Charles H. ("Chuck") Moore:
    Creator of the stack-based FORTH language, circa 1970
    Left Forth, Inc. in 1981 to pursue hardware implementations
    NOVIX (1986), Sh-BOOM (1991), MuP21 (1994), F21 (1998), X18 (2001)
    Currently CTO of Intelasys, still working on hardware
    product launch expected April 3, 2006 at Microprocessor Summit
    Enter Prof. Philip Koopman, Carnegie-Mellon University
    Documented salient stack designs in "Stack Computers: The New Wave", 1989
    The Second Generation
    The Second Generation: Features
    Two or more stacks separate from main memory
    Stacks are not addressable data structures
    Expression evaluation and return addresses kept separate
    Simple instruction sets tailored for stack operations
    Still around, but low-profile (RTX-2010 in NASA probes)
    Strangely, missed by virtually all mainstream literature
    Exception: Feldman & Retter's "Computer Architecture", 1993
    Arguments and Defense
    Taken from Hennessy & Patterson's "Computer Architecture: A Quantitative Approach", 2nd edition
    Summary: Valid for First Generation, but not Second
    Argument: Variables
    More importantly, registers can be used to hold variables. When variables are allocated to registers, the memory traffic reduces, the program speeds up (since registers are faster than memory), and the code density improves (since a register can be named with fewer bits than a memory location).
    [H&P, 2nd ed, pg 71]
    Manipulating the stack creates no memory traffic
    Stacks can be faster than registers since no addressing is required
    Lack of register addressing improves code density even more (no operands)
    Globals and constants are kept in main memory, or cached on stack for short sequences of related computations
    Ultimately no different than a register machine
    Argument: Expression Evaluation
    Second, registers are easier for a compiler to use and can be used more effectively than other forms of internal storage. For example, on a register machine the expression (A*B)-(C*D)-(E*F) may be evaluated by doing the multiplications in any order, which may be more efficient due to the location of the operands or because of pipelining concerns (see Chapter 3). But on a stack machine the expression must be evaluated left to right, unless special operations or swaps of stack position are done.
    [H&P, 2nd ed, pg. 71]
    Less pipelining is required to keep a stack machine busy
    Location of operands is always the stack: no WAR, WAW dependencies
    However: always a RAW dependency between instructions
    Infix can be easily compiled to postfix
    Dijkstra's "shunting yard" algorithm
    Stack swap operations equivalent to register-register move operations
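
    The "shunting yard" algorithm mentioned in the slides, as a minimal Python sketch (left-associative binary operators and parentheses only; the details are my own compression of Dijkstra's algorithm, not part of the original presentation):

    # Dijkstra's shunting-yard algorithm: infix -> postfix (RPN).
    PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

    def to_postfix(tokens):
        output, ops = [], []
        for tok in tokens:
            if tok in PREC:
                # pop operators of equal or higher precedence first
                while ops and ops[-1] != "(" and PREC[ops[-1]] >= PREC[tok]:
                    output.append(ops.pop())
                ops.append(tok)
            elif tok == "(":
                ops.append(tok)
            elif tok == ")":
                while ops[-1] != "(":
                    output.append(ops.pop())
                ops.pop()                  # discard the "("
            else:
                output.append(tok)         # operands go straight to the output
        while ops:
            output.append(ops.pop())
        return " ".join(output)

    print(to_postfix("( b + c ) * ( d - e )".split()))  # b c + d e - *
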
  • JVM (Score:5, Informative)

    by TopSpin ( 753 ) * on Thursday August 10, 2006 @02:40PM (#15883855) Journal
    Java bytecode is interpreted on a virtual stack based processor. Most bytecode gets JITed into native register based instructions, but the model JVM processor is a stack processor.

    Some previous poster noted that CLI is also a stack based model. I can't verify that myself but it wouldn't surprise me; Microsoft is, after all, highly 'innovative' or something.

    • Re:JVM (Score:3, Informative)

      by Anonymous Coward
      It's not like Java was super-innovative in using a stack-based architecture. Java was designed with web applications in mind, and as such having small code size was extremely important for bandwidth reasons. One of the best features of stack machines is the small instruction size (no need to store register locations). So a stack machine is a natural choice for the JVM. If you wanna nag on .NET copying Java, there are plenty of good reasons, but this isn't one.
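
      A back-of-the-envelope Python illustration of that code-density point (the encodings below are invented for illustration, not the real JVM or any actual RISC format):

      # A stack op is a bare opcode (operands implicit on the stack);
      # a register op must also name three registers.
      stack_program = ["add", "stor", "sub", "lod", "mul"]   # (b+c)*(d-e)
      register_program = [("add", 7, 3, 4),                  # r7 = b + c
                          ("sub", 8, 5, 6),                  # r8 = d - e
                          ("mul", 3, 7, 8)]                  # r3 = r7 * r8

      stack_bytes = len(stack_program) * 1        # 1-byte opcodes: 5 bytes
      register_bytes = len(register_program) * 4  # opcode + 3 regs: 12 bytes
      print(stack_bytes, register_bytes)          # 5 12
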
  • by dpilot ( 134227 ) on Thursday August 10, 2006 @02:46PM (#15883908) Homepage Journal
    Even in assembler, the mainstream hasn't been programming to the metal since Pentium I.

    Beginning with the Pentium II, and propagating to pretty much all of the other architectures in a short time, none of the mainstream CPUs have exposed their metal. We have an instruction set, but it's torn into primitives and scheduled for execution. We don't see the primitives, not even in assembler. AFAIK, there isn't even a way to use the true primitives, except perhaps on the Transmeta, where it was undocumented.

    So in this light, since we're already fairly far from the true metal, it seems to me that it makes a lot of sense to re-evaluate the instruction set itself. Of course one could raise the Itanium argument, but I would also argue that politics were too big a part, there. Then again, one could also argue that x86 and amd64 are just so entrenched that it doesn't matter, and they do run well on today's hardware.

    Then again I could cite my old favorite, the 6809. It started from the same origins and precepts as RISC, but with a different attitude. RISC simply tried to optimize the most common operations, at the expense of less common ones. With the 6809, they tried to understand WHY certain things were happening, and how those things could be done better and faster. They ended up with a few more transistors, the same speed, and something approaching 3X the throughput, as compared to the 6800. More similar to the current topic, there was a paper on 'contour mapping', mapping blocks of cache into stacks and data structures. The 6809 was too old for a cache, but it seems to me that combining its concepts with the contour mapping would be interesting indeed.

    But like stack engines, it's not x86/amd64 compatible.
    • Even in assembler, the mainstream hasn't been programming to the metal since Pentium I.

      What are you talking about, you clueless git?

      Nearly every device driver in your Windows, Linux or Mac machine has assembly code modules which are HAND-TUNED to the processor type (which is why every processor offers a CPUID). And I'm not referring just to graphics cards... There are teams where I work that still need to use MSoft's MASM 6.22 to compile 16-bit portions of BIOS code.

      I'd say it is 50/50 assembly vs. higher
    • ***Then again I could cite my old favorite, the 6809.***

      The 6809 was not only easy and fun to program; 6809 programs tended to benchmark out significantly faster than programs for comparable CPUs like the Z80, 6800 and 8080. If the industry ever decides to scrap the x86 mess -- which they won't -- going back to the 6809 for a starting point might not be a bad idea at all. I once did a plot of measured times for a benchmark where timings were available for a bunch of CPUs (Sieve of Eratosthenes). When y

  • by Jerk City Troll ( 661616 ) on Thursday August 10, 2006 @02:47PM (#15883920) Homepage
    You “wonder if Windows will run on a stack computer?” Where do you people come up with this nonsense? This is as irrelevant as saying: "someday, car tires will not be made of rubber. I wonder if Windows will support them?" Really, there is no need to try to come up with insightful remarks or questions to tack on the end of your story submissions. Just present the article and leave it at that. Let everyone else do the thinking.
  • Stop Hurting My Eyes (Score:5, Informative)

    by Anonymous Coward on Thursday August 10, 2006 @02:49PM (#15883933)
    Dear Slashdot Contributors,

    Please stop describing undergrads doing independent studies as "Experts". There's a reason that mainstream processors haven't picked up on "Stack Processors", and it has nothing to do with binary compatibility, the difficulty of writing a compiler for their instruction set, or general programming complexity. Stack Machines are really only good for In-Order processing. Wonder why NASA probes have Stack Processors? Because they don't freaking need to do out-of-order processing to get the performance they require, and they probably found stack processors to have a favorable power/performance ratio for their application. You will never see a full-blown Windows running on a Stack processor, because Superscalar processors destroy their performance.

    "My research project shows that some people wrote nifty papers in the 1970s, but everyone ignored them for an obvious reason I don't understand." -> Not an Expert
    • I believe that your criticisms apply to only specific stack based architectures. That they do apply to the commonly presumed architectures I accept, but this is far from asserting them as general truths.

      Actually, even asserting that register based computers solve the problems that you are describing is not a general truth. You need to specify how many registers of what type can deal with how many out of order processes. And I suspect that a stack computer with 5 or 6 stacks could deal as flexibly with th
  • by thewiz ( 24994 ) * on Thursday August 10, 2006 @02:51PM (#15883956)
    Do these come in short- and tall-stack versions?
    Are maple syrup and butter options?
  • by Animats ( 122034 ) on Thursday August 10, 2006 @02:54PM (#15883987) Homepage

    Who can forget the English Electric Leo-Marconi KDF9 [carleton.ca], the British stack machine from 1960. That, and the Burroughs 5000, were where it all began.

    Stack machines are simple and straightforward to build, but are hard to accelerate or optimize. Classically, there's a bottleneck at the top of the stack; everything has to go through there. With register machines, low-level concurrency is easier. There's been very little work on superscalar stack machines. This student paper from Berkeley [berkeley.edu] is one of the few efforts.

    It's nice that you can build a Forth machine with about 4000 gates, but who cares today? It would have made more sense in the vacuum tube era.

    • It's nice that you can build a Forth machine with about 4000 gates, but who cares today


      Considering that we seem to be entering the vacuum tube era in nano-tech, perhaps a 4000 gate Forth machine can be used to run programmable nano-machines.
  • Not a good idea (Score:5, Insightful)

    by coats ( 1068 ) on Thursday August 10, 2006 @02:55PM (#15883997) Homepage
    The reason modern systems are so fast is that they hide a lot of fine-grained parallelism behind the scenes. It is very hard to express this kind of parallelism in a way that can be executed on a stack machine.

    How important is this parallelism? Consider that modern processors have 10-30 pipeline stages and 3-6 execution units that can have an instruction executing at each stage; moreover, most of them have out-of-order execution units that handle instructions more in the order that data becomes available for them than in the order they are listed in the object file (and main memory is hundreds of times slower than the processors themselves, so this is important!). Typically, such processors can have more than 100 instructions in some stage of execution (more than 250 for the IBM POWER5 :-)

    Consider, also, that the only pieces of anything-like-current stack hardware are Intel x87-style floating point units, that Intel is throwing away -- for good reason! -- in favor of (SSE) vector style units. In the current Intel processors, the vector unit emulates an x87 if it needs to -- but giving only a quarter of the performance.

    Someone made remarks about Java and .NET interpreters: in both cases, the interpreter is simulating a purely scalar machine with no fine-grained parallelism; no wonder an extensible software-stack implementation is one of the simplest to implement. Stacks are not the way that true Java compilers like gcj generate code, though!

    No, stack-based hardware is not a good idea, and it hasn't been since some time in the eighties, when processors started to be pipelined and processor speed started outstripping memory speed.

  • NASA (Score:5, Insightful)

    by HTH NE1 ( 675604 ) on Thursday August 10, 2006 @02:55PM (#15883999)
    Apparently NASA uses stack computers in some of their probes.

    Is that supposed to be a ringing endorsement? I thought NASA was using components the rest of the world treated as obsolete due to their proven durability and reliability in the radiation of space.
    • Re:NASA (Score:3, Interesting)

      by Medievalist ( 16032 )

      I thought NASA was using components the rest of the world treated as obsolete due their proven durability and reliability in the radiation of space.

      Essentially correct. It is so costly and time-consuming to get a new component certified for use that it's usually less work to find a clever way to use old components. Then ten months after launch production ceases on the old part, and you have to have special ones built at a hundred times the cost (military option) or scavenge them on eBay (scientific opti

  • FORTH post! (Score:2, Funny)

    by Anonymous Coward
    Nothing to see here. Sorry.
  • by vlad_petric ( 94134 ) on Thursday August 10, 2006 @03:01PM (#15884046) Homepage
    I didn't even try to torrent-download the thing, but I can tell you why stack machines aren't better than register-based ones. The main reason is that it's much much much harder to do renaming of a stack than of "regular" registers. Without renaming you can't do out-of-order execution... Currently, there are two "mainstream" architectures that include stacked register files: Itanium and SPARC. Neither of them has an out-of-order implementation.

    But why do you need out-of-order execution? Well, misses to memory are very expensive these days - it can easily take from 200 to 400 cycles to service a load that misses all the way to main memory. This can have a significant effect on performance. What out-of-order execution does is allow independent instructions that are younger than the load to execute in parallel with it. Quite often these instructions executed in parallel will generate other misses to main memory, overlapping their latencies. So - the latency of loads that miss is still very high, but at the very least the processor is not idle while servicing them (for a good read see "MLP Yes! ILP no!" by Andy Glew).

    Itanium and SPARC compensate for the fact that they don't do stuff out-of-order by putting sh*tloads of L2/L3 cache on-chip. The cost of a miss is still very high, but it happens much less often. The manufacturing cost of the chip is also much higher.

    Note that what NASA is sending into space is "old" tech. The reason - well, cosmic rays are much stronger in outer space, and the smaller the gate, the easier it is for them to flip its state.

    P.S. I'm a computer architect.

  • by Junks Jerzey ( 54586 ) on Thursday August 10, 2006 @03:09PM (#15884100)
    Normally this kind of stuff doesn't bug me, but this is like an article in 2006 proclaiming the benefits of object-oriented programming. Doesn't anyone know their computing history?

    There were stack computers in the 1960s and 1970s. There was a resurgence of interest in the 1980s--primarily because of Forth's popularity in embedded systems--resulting in a slew of stack-based "Forth engines." Forth creator Chuck Moore has been working on a series of custom Forth CPUs for 20+ years now. His latest version has 24 cores on one chip (and was entirely designed by one person and uses MILLIWATTS of power).

    Stack processors and languages have one big advantage: they minimize the overall complexity of a system. The tradeoff is that they often push some of that complexity onto the programmer. That's why Forth tends to shine for small, controlled systems (like a fuel injector or telescope controller or fire alarm), but you don't see people writing 3D video games or web browsers in Forth.
    • How fast does it run SPECint at milliwatts? It doesn't? Oh, there ya go.

      Saying it is low power is meaningless when there are CPUs that use hundreds of MICROWATTS in a plethora of embedded devices today.
    • Okay, I posted before reading the article. Can you tell :)

      I think the fault lies more with the submitter of the story than with the author's presentation. The author just gives an overview of how stack computers work and their history. The submitter apparently never knew about stack computers and is all excited about them as a possible future of computing. The presentation is simply a history, mostly about stuff that's 20+ years old. So, yes, while stack processors have been commercially available and they've
    • > ...often push some of that complexity onto the programmer. That's why Forth tends to shine for small, controlled systems...

      Truer words were never typed. However, as a low-overhead, "portable assembly language", Forth is a beautiful way to go. The nature of the language causes you to think about the problem as a hierarchy of "procedural objects", which is really ideal for polling the inputs and turning on the lights and motors. I used Forth a lot in an environment where others were using PLCs and
  • Ivory-tower types LOVE stack computers. No messy "addresses". No messy "registers". Nice and simple. Many an undergrad has written a stack machine emulator. And occasionally a stack machine makes it to market. A few old Burroughs mainframes. The HP 3000 series. Oops, that one had to be recalled because it was so ungodly slow. Lots of software virtual stack machines: the Pascal P-machine. UCSD P-code. FORTH. PostScript. Java byte code (I think). All kinds of weird and wonderful and general
    • Re:Who likes them? (Score:2, Interesting)

      by Anonymous Coward
      You're Just Wrong(tm) about that, actually. See BOOST. No, not the masochistic C++ template library (ANYTHING written in C++ is masochistic), Berkeley's Out of Order Stack Thingy. http://www.cs.berkeley.edu/~satrajit/cs252/BOOST.pdf [berkeley.edu]

      It's probably mostly just an accident of history that register machines went superscalar first and "won" (mostly; maybe because stack machines were more efficient, the need for superscalarity didn't hit so early...). But, in short: stack machines, with similar design overheads to r
  • In case y'all forgot, the Intel x87 FP engine was stack based until SSE. And it is still in there!

    You pushed FP numbers onto F(0:7) and the operations worked on the stack. Results then had to be popped off to store to memory.

    Kids these days, I tells ya.

  • by Theovon ( 109752 ) on Thursday August 10, 2006 @03:59PM (#15884504)
    I'm a chip designer, and I am working on my Ph.D. in CS. The idea of stack machines is something I have researched a bit, and I have drawn some of my own conclusions.

    The main advantage of stack machines is that all or most parameters for each instruction are implicit. Aside from stack shuffle/rotate instructions, the operands are always the top few on the stack. This makes instructions very small. The logic is also exceedingly simple (for fixed-stack designs). If you want a simple, low-power CPU, a stack machine is what you want.

    Where I explored this issue, however, is in the realm of high-performance computing. The key advantage of a stack architecture is that smaller instructions take less time to fetch from memory. If your RISC instructions are 32 bits, but your stack machine instructions are 8 bits, then your instruction caches are effectively 4x larger, and your over-all cache miss penalty is greatly reduced.

    The problem with stack machines is that they're damn near impossible to add instruction-level parallelism to. With a RISC machine, near-by instructions that deal with different registers (i.e. no dependencies) can be executed in parallel (whether that's multi-issue or just pipelining). With a stack machine, everything wants to read/write the top of the stack.

    I came up with two things to deal with this problem, that are very much like the CISC-to-RISC translation done by modern x86 processors, so it's more of a stack ISA on a RISC architecture. One is that the stack is virtual. When you want to pop from the stack, what's happening in the front-end of the CPU is that you're just popping register numbers corresponding to a flat register file. When you want to push, you're allocating an assigned register number from the flat register file. Now, if you can get two instructions going that read different parts of the stack and write (naturally) to different locations, you can parallelize them. The second part is a healthy set of register shuffling instructions. Since you're doing all of this allocation up front, shuffling registers is as simple as renumbering things in your virtual stack. So a swap operation swaps two register numbers (rather than their contents), and a rotate operation renumbers a bunch of them, but the pending instructions being executed still dump their results in the same physical registers.
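
    A toy Python model of that virtual-stack idea (my own sketch of the scheme described above, not any shipped design): the "stack" holds physical register numbers, so a swap renumbers registers instead of moving data.

    # Virtual stack of physical-register numbers over a flat register file.
    phys = [0] * 64              # flat physical register file
    free = list(range(64))       # free physical registers
    stack = []                   # register numbers; top of stack is stack[-1]

    def push(value):
        r = free.pop(0)          # a push allocates a physical register
        phys[r] = value
        stack.append(r)

    def pop():
        r = stack.pop()
        free.append(r)           # the physical register can be recycled
        return phys[r]

    def swap():                  # exchanges register *numbers*, not contents
        stack[-1], stack[-2] = stack[-2], stack[-1]

    push(3); push(5)
    swap()                       # no data moved; the mapping was renumbered
    assert pop() == 3 and pop() == 5
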

    This all sounds great, but there are some problems with this:

    (1) The shuffling instructions are separate instructions. With a RISC processor, you have more information all in one unit. Although you could try to fetch and execute multiple stack instructions at once, it's much more complicated to execute four stack instructions in parallel than to execute a single RISC instruction, even though they require the same amount of memory.
    (2) You need a lot of shuffling instructions. Say your stack contains values A, B, C, and D, and you want to sum them. Without shuffling, you'd add A and B, yielding E, then add E and C, yielding F, then add F and D. Three add instructions. If your adder(s) is/are pipelined, you'd like to add A+B and C+D in parallel or overlapping, THEN wait around for their results and do the third add. The problem is that to do that, you'd need to add A+B, then rotate C to the top then D to the top, then add, then add again. The first case was 3 instructions; the second case is 5 instructions. Depending on your architecture, the extra shuffle instructions may take so long to process that you might as well just have waited. No speed gain at all. (A small simulation of both sequences follows this list.)
    (3) The extra shuffling instructions take up space. Optimizers are hard to write. Although it's conceivable that one could optimize for this architecture so as to avoid as many shuffling instructions as possible, you still end up taking up quite a lot of space with them, potentially offsetting much of the space savings that you got from switching from RISC to stack.
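
    To make (2) concrete, a tiny Python simulation of both instruction sequences (the swap/rotate semantics are invented for illustration):

    # Summing the top four stack values A, B, C, D (A on top) two ways.
    def run(program, stack):
        for op in program:
            if op == "add":
                stack.append(stack.pop() + stack.pop())
            elif op == "swap":               # exchange the top two entries
                stack[-1], stack[-2] = stack[-2], stack[-1]
            elif op == "rot3":               # bring the third entry to the top
                stack.append(stack.pop(-3))
        return stack

    serial = ["add", "add", "add"]                   # 3 instructions, fully serial
    shuffled = ["add", "swap", "rot3", "add", "add"] # 5 instructions; A+B and C+D
                                                     # are now independent adds
    assert run(serial, [40, 30, 2, 1]) == [73]
    assert run(shuffled, [40, 30, 2, 1]) == [73]
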

    So, there you have it. Somewhat OT, because surely NASA's primary goal has got to be low-power, but also somewhat on-topic because stack architectures aren't the holy grail. Just ideal for some limited applications.
    • Uh, I guess I'm too daft to get a Ph.D., but it sure seems to me like optimizing at the instruction level on a stack machine is solving the wrong problem.

      With a stack machine, running one instruction stream in parallel is very hard, while it's very easy on a register-based one. But the flip side of this is that on a stack machine running multiple instruction streams in parallel is incredibly easy, while it's *very* difficult on a register-based CPU.

      For instance, take "add 1 to each element of this 30-length array"
    • Parallalism (Score:3, Interesting)

      by Peaker ( 72084 )
      How about achieving parallelism by using multiple stacks?

      If a stack machine is that much simpler, couldn't you either have:
      1. A vast number of cores for many unrelated threads
      2. Or: Multiple pipelines and explicit division of instructions into the pipelines?

      The second refers to an instruction coding similar to VLIW such that you parallelise the code on multiple stacks but it still shares an instruction/data cache and allows for parallelism without heavy multi-threading at the high-level (and instead having parall

  • by bunratty ( 545641 ) on Thursday August 10, 2006 @04:06PM (#15884552)
    Knuth's MMIX architecture [wikipedia.org] uses registers, but the registers themselves are on a register stack. Perhaps this architecture provides the best of both worlds.
  • by Michael Woodhams ( 112247 ) on Thursday August 10, 2006 @04:17PM (#15884638) Journal
    You Forth (heart) if honk then
  • by Baldrson ( 78598 ) * on Thursday August 10, 2006 @06:12PM (#15885296) Homepage Journal
    Stack computing came close to changing the course of the computer industry, including moving networking forward 15 years (displacing Microsoft's stand-alone approach to software), back in 1981.

    An excerpt from a bit longer essay I wrote [geocities.com]:

    In August 1980, Byte magazine published its issue on the Forth programming language.

    At that time, I was working with Control Data Corporation's PLATO project [cyber1.org], pursuing a mass market version of that system using the Intelligent Student Terminal (IST). The ISTs were Z80 processor terminals sporting 512*512 bit mapped displays with touch sensitive screens and 1200bps modems that went for about $1500. We were shooting for, and actually successfully tested, a system that could support almost 8,000 simultaneous users on 7600-derived Cybers (the last machine designed by Seymour Cray to be marketed by CDC -- with 60 bits per word, 6 bits per character, no virtual memory, but very big and very fast) with under 1/4 second response time (all keys and touch inputs went straight to the central processor) for $40/month flat rate including terminal rental. Ray Ozzie [cio.com] had been working at the University of Illinois on offloading the PLATO central system to the Z80 terminal through downloaded assembly language programming, doing exotic things like "local key echo" and such functions.

    I was interested in extending Ray's work to offload the mass-market version of the PLATO central system. In particular I was looking at a UCSD Pascal [brighton.ac.uk]-based approach to download p-code [brighton.ac.uk] versions of terminal functions -- and even more in particular the advanced scalable vector graphics commands of TUTOR (the "relative/rotatable" commands like rdraw, rat, rcircle, rcircleb, etc.) if not entire programs, to be executed offline. Pascal was an attractive choice for us at the time because CDC's new series of computers, the Cyber 180 (aka Cyber 800) was to have virtual memory, 64 bit words, 8 bit characters and be programmed in a version of the University of Minnesota Pascal called CYBIL [utah.edu] (which stood for Cyber Implementation Language). Although this was a radically different architecture than that upon which PLATO was then running, I thought it worthwhile to investigate an architecture in which a reasonable language (you should have seen what we were used to!) could be made to operate on both the server and the terminal so that load could be dynamically redistributed. This idea of dynamic load balancing would, later, contribute to the genesis of Postscript.

    Over one weekend a group of us junior programmers managed to implement a good portion of TUTOR's (PLATO's authoring language) advanced graphics commands in CYBIL. Our little hunting pack at CDC's Arden Hills Operations was in a race against the impending visit of Dave Anderson of the University of Illinois' PLATO project who was promoting what he called "MicroTUTOR". Anderson was going to take the TUTOR programming language and implement a modified version of it for execution in the terminal -- possibly in a stand-alone mode. Many of us didn't like TUTOR, itself, much. Indeed, I had to pull teeth to get the authorization to put local variables into TUTOR -- and we were determined to select a better board from our quiver with which to surf Moore's Shockwave into the Network Revolution [amazon.com]. CDC management wasn't convinced that such a radical departure from TUTOR would be wise, and we hoped to demonstrate that a p-code Pascal approach could accomplish what microTUTOR purported to -- and more. We quickly ported a TUTOR central sy

  • by Nom du Keyboard ( 633989 ) on Thursday August 10, 2006 @11:13PM (#15886785)
    If stack-based computing is so great, powerful, and cheap, why aren't IBM PPC, AMD Athlon, Intel Core pick-a-number, and Sun SPARC dueling it out for the best stack-based chip? Why aren't the next-gen game consoles all using it, since Microsoft and Sony at least (the Wii is just a faster GC) went to new architectures? Don't tell me no one has ever heard of the concept before. The Burroughs 5500 dates back to the late 1960s. I think there's more here than is being told.
