High-level Languages and Speed

nitsudima writes to tell us Informit's David Chisnall takes a look at the 'myth' of high-level languages versus speed and why it might not be entirely accurate. From the article: "When C was created, it was very fast because it was almost trivial to turn C code into equivalent machine code. But this was only a short-term benefit; in the 30 years since C was created, processors have changed a lot. The task of mapping C code to a modern microprocessor has gradually become increasingly difficult. Since a lot of legacy C code is still around, however, a huge amount of research effort (and money) has been applied to the problem, so we still can get good performance from the language."
  • Old debate (Score:5, Informative)

    by overshoot ( 39700 ) on Tuesday July 18, 2006 @07:21AM (#15735454)
    Twenty years ago we were still in the midst of the "language wars" and this was a hot topic. The argument then, as now, was whether a high-level language could be compiled as efficiently as a low-level language like C [1].

    Well, we ran our own tests. We took a sizable chunk of supposedly well-written time-critical code that the gang had produced in what was later to become Microsoft C [2] and rewrote the same modules in Logitech Modula-2. The upshot was that the M2 code was measurably faster, smaller, and on examination better optimized. Apparently the C compiler was handicapped by essentially having to figure out what the programmer meant with a long string of low-level expressions.

    Extrapolations to today are left to the reader.

    [1] I used to comment that C is not a high-level language, which would induce elevated blood pressure in C programmers. After working them up, I'd bet beer money on it -- and then trot out K&R, which contains the exact quote, "C is not a high-level language."
    [2] MS originally relabeled another company's C compiler under license (I forget their name; they were an early object lesson.)

    • Re:Old debate (Score:5, Insightful)

      by StarvingSE ( 875139 ) on Tuesday July 18, 2006 @07:34AM (#15735485)
      C is not a low level language. If you're not directly manipulating the registers on the processor, you are not in a low level language (and forget about the "register" keyword, modern compilers just treat register variables in C/C++ as memory that needs to be optimized for speed).

      If anything, C is a so-called mid level language. If it wasn't, you'd be using an assembler instead of a compiler.
      • Re:Old debate (Score:5, Insightful)

        by Bastian ( 66383 ) on Tuesday July 18, 2006 @09:35AM (#15736003)
        The article addressed this point by mentioning that the definitions of high and low level language are a moving target. Nowadays I think most people consider assembly language to be its own thing, and the low-level classification has now been shifted into a domain that was once described completely by the term high-level. The term "high-level language" has been replaced by the term "programming language."

        If you're going to go with the jargon as it's most often used nowadays (which is a perfectly reasonable thing to do), then C would certainly be about as low as you can get without manipulating individual registers - i.e., without being assembly language.

        • Re:Old debate (Score:4, Insightful)

          by jacksonj04 ( 800021 ) <nick@nickjackson.me> on Tuesday July 18, 2006 @09:58AM (#15736147) Homepage
          Low level says what you want the system to do. High level says what you want the language (via compiler, interpreter, etc.) to make the system do.
          • New debate (Score:4, Interesting)

            by Dzonatas ( 984964 ) on Tuesday July 18, 2006 @11:54AM (#15737050) Homepage
            High level languages have always been compared to cognitive semantics and grammatical styles. That is, the higher the level of the language, the easier it is for us humans to read and write it. Conversely, the lower the level of the language, the more discrete steps are needed to describe an instruction or data.

            The speed of a programming language or machine language isn't measured only by how high or low level it is to us; languages are also measured by the time it takes to develop and implement a program. The article basically makes this point: it's "better to let someone else" optimize the low-level code while you write in the high-level language. You could write a super-fast machine-coded program, but it would take you much longer to write than with a simpler, higher-level language.

            The new debate is over datatypes and the available methods to manipulate them. Older hardware gave us the old debate with primitive datatypes and a general set of instructions to manipulate the data. Newer hardware can give us more than just primitives: for example, a unicoded string datatype seen by the hardware as a complete object instead of an array of bytes. With hardware instructions to manipulate unicoded strings, that would practically take away any low-level implementation of unicoded strings. The same could be done for UTF-8 strings. We could implement hardware support for XML documents and other common protocols. How these datatypes are actually implemented in hardware is the center of the debate.

            Eventually, there will be so many datatypes that there will be separate low-level languages specifically designed for a domain of datatypes. The article makes the point that there exists an increase in complexity for newer compilers to understand what was intended by a set of low-level instructions. Today's CPUs have a static limit of low-level instructions. The future holds hardware-implemented datatypes and their dynamic availability of low-level instructions. Newer processors will need to be able to handle the dynamic set of machine language instructions.

            Does the new debate conflict with Turing's goal to simply make a processor unit extensible without the need to add extra hardware? For now, we have virtualization.
        • Re:Old debate (Score:3, Informative)

          by fyngyrz ( 762201 )

          C would certainly be about as low as you can get without manipulating individual registers - i.e., without being assembly language.

          Actually, I think Forth is a little lower. The RPN nature of the language makes for a considerably closer mapping from language use to stack use for one thing, and for another, Forth atoms tend to be more primitive and more prefab than what a particular expression in C might produce.

          C remains my favorite for anything that requires speed. It has always seemed to me that

      • Re:Old debate (Score:3, Interesting)

        by NickFitz ( 5849 )

        The statement "C is not a high level language" is not logically equivalent to the statement "C is a low level language", so the OP is still entitled to his beer money :-)

    • Re:Old debate (Score:3, Informative)

      ...trot out K&R, which contains the exact quote, "C is not a high-level language."

      Actually the quote from my copy of K&R, on my desk beside me is,

      C is not a "very high level" language...

      emphasis is mine.
      • Re:Old debate (Score:3, Informative)

        by cerberusss ( 660701 )
        It also says in the introduction (next page):
        C is a relatively "low level" language.

        • Re:Old debate (Score:3, Insightful)

          by StarvingSE ( 875139 )
          Key word is "relatively." C is low level compared to languages such as Java and C#, which do a lot of things such as memory management for you.
    • Re:Old debate (Score:5, Informative)

      by shreevatsa ( 845645 ) <<shreevatsa.slashdot> <at> <gmail.com>> on Tuesday July 18, 2006 @08:19AM (#15735642)
      For what it's worth, at The Computer Language Shootout [debian.org], OCaml does pretty well [debian.org]. Of course, C is still faster [debian.org] for most things (but note that the really high factors (29 and 281) are in OCaml's favour!), but OCaml is pretty fast compared to Java [debian.org] or Perl [debian.org]. Haskell does pretty well too. Functional programming, anyone?
      Of course, these benchmarks measure only speed, are just for fun, and are "flawed [debian.org]", but they are still interesting to play with. If you haven't seen the site before, enjoy fiddling with things to try and get your favourite language on top :)
      • Re:Old debate (Score:5, Interesting)

        by bloodredsun ( 826017 ) <martin@nosPam.bloodredsun.com> on Tuesday July 18, 2006 @08:50AM (#15735781) Journal

        If I had mod points I'd certainly mod you informative. Those benchmarks might be synthetic and flawed but as a general illustration of how the various languages differ, that link is fantastic.

        Of course I'll just use it for my own ends by convincing my managers that we're using the right languages - "Yes boss you'll see that we use C++ for the stuff that needs to be fast with low memory overhead, Java for the server side stuff, stay the fuck away from Ruby and if you say 'Web 2.0' at me one more time I'll be forced to wham you with a mallet!" ;-)

        • Re:Old debate (Score:5, Interesting)

          by rainman_bc ( 735332 ) on Tuesday July 18, 2006 @09:58AM (#15736152)
          stay the fuck away from Ruby

          What's wrong with Ruby, as a replacement for a very ugly language called Perl?

          Ruby is an elegant language, fully Object Oriented, and does just as well as Python and Perl...

          Ruby on Rails, OTOH, is a different story and I don't want to get into a flame war over it, but Ruby itself is pretty good for a lot of things you'd otherwise write in Perl but don't like the ugliness of Perl...

          I've found some people don't get the distinction between Ruby and Ruby on Rails.
          • Re:Old debate (Score:4, Interesting)

            by Julian Morrison ( 5575 ) on Tuesday July 18, 2006 @11:54AM (#15737051)
            If you looked at the shootout you'd see what was wrong in Ruby: it's just about the slowest serious scripting language. It seems to be using pure bloody-minded interpretation without any bytecode or JIT stage.

            Nothing wrong with the language that a proper implementation couldn't cure, basically.
          • Re:Old debate (Score:4, Interesting)

            by Dr. Zowie ( 109983 ) <slashdot@defores t . org> on Tuesday July 18, 2006 @03:13PM (#15738811)
            There are two main problems with Ruby as a replacement for "a very ugly language called Perl":

            * It's not sufficiently prettier than Perl

            * It's not Perl

            Perl may look ugly but it is to most programming languages as English is to most other languages. Perl is a brawling, sprawling mess of borrowed, Hamming-optimized idioms that is extremely ugly from the POV of a syntax engineer and extremely expressive from the POV of a fluent speaker.

            Ruby is more like Esperanto - elegant, clean, and spoken by practically no-one because it isn't very expressive.
      • Re:Old debate (Score:4, Interesting)

        by masklinn ( 823351 ) <.slashdot.org. .at. .masklinn.net.> on Tuesday July 18, 2006 @09:14AM (#15735903)
        Haskell also does very well, and Digital Mars' impressive D is consistently in the top spots (one wonders why the hell Stroustrup is still trying to improve C++ when he could just switch to D and build from there)
        • Haskell does OK. But compare to Clean, another pure lazy functional language. Clean blows away Haskell most of the time and competes favourably with C, sometimes beating it.
  • Bah (Score:5, Insightful)

    by perrin ( 891 ) on Tuesday July 18, 2006 @07:24AM (#15735458)
    So we "still can get good performance" from C? The implication is that C will somehow be overtaken by some unnamed high-level language soon. That is just wishful thinking. The article is not very substantial, and where it tries to substantiate, it misses the mark badly. The claim that C cannot handle SIMD instructions well is not true. You can use them directly from C, or the C compiler can use them through autovectorization, as in gcc 4.1. The claim that C cannot inline functions from another source file is also wrong. This is a limitation in gcc, but other compilers can do it, and IIRC the intel compiler can. It is certainly not "impossible".
    • Re:Bah (Score:5, Insightful)

      by TheRaven64 ( 641858 ) on Tuesday July 18, 2006 @07:38AM (#15735503) Journal
      The claim that C cannot handle SIMD instructions well is not true. You can use them directly from C, or the C compiler can use them through autovectorization, as in gcc 4.1

      You have two choices when using SIMD instructions in C:

      1. Use non-portable (between hardware, and often between compilers) intrinsics (or even inline assembly).
      2. Write non-vectorised code, and hope the compiler can figure out how to optimally decompose these into the intrinsics. Effectively, you think vectorised code, translate it into scalar code, and then expect the compiler to translate it back.
      Compare the efficiency of GCC at auto-vectorising FORTRAN (which has a primitive vector type) and C (which doesn't), if you don't believe me.
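
      To make the two options concrete, here's a minimal sketch, assuming an x86 target with SSE and a compiler that ships <xmmintrin.h> (gcc and icc both do); the function names are made up:

          #include <xmmintrin.h>   /* SSE intrinsics: not portable across architectures */

          /* Option 1: explicit intrinsics -- you say exactly which vector ops to use. */
          void add4_intrinsics(float *dst, const float *a, const float *b)
          {
              __m128 va = _mm_loadu_ps(a);              /* load 4 floats */
              __m128 vb = _mm_loadu_ps(b);
              _mm_storeu_ps(dst, _mm_add_ps(va, vb));   /* 4 adds in one instruction */
          }

          /* Option 2: plain scalar C -- and hope the auto-vectoriser (gcc 4.1's
             -ftree-vectorize, for instance) rediscovers the vector operation. */
          void add_scalar(float *dst, const float *a, const float *b, int n)
          {
              int i;
              for (i = 0; i < n; i++)
                  dst[i] = a[i] + b[i];
          }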

      The claim that C cannot inline functions from another source file is also wrong. This is a limitation in gcc, but other compilers can do it, and IIRC the intel compiler can. It is certainly not "impossible".

      When you pass a C file to a compiler, it generates an object file. It has absolutely no way of knowing where functions declared in the header are defined. You can hack around this; pass multiple source files to the compiler at once and have it treat them as a single one, for example, but this falls down completely when the function is declared in a library (e.g. libc) and you don't have access to the source.
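
      A tiny sketch (hypothetical file names) of why separate compilation gets in the way:

          /* util.c -- compiled by itself into util.o */
          int add(int a, int b) { return a + b; }

          /* main.c -- the compiler only ever sees the declaration below (from a
             header), so while emitting main.o it cannot inline add(); all it can
             do is emit a call and let the linker fill in the address. Whole-program
             tricks (feeding both files to one compiler invocation, or the link-time
             optimisation some compilers offer) are the way around it. */
          int add(int a, int b);
          int main(void) { return add(2, 3); }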

      • Re:Bah (Score:3, Insightful)

        by rbarreira ( 836272 )

        Use non-portable (between hardware, and often between compilers) intrinsics (or even inline assembly).

        Which usually isn't a big problem anyway, since the code sections in which that's an advantage are usually quite small and infrequent, so if you really need the performance you can make the very small sacrifice of inserting conditional compilation statements with different code for the platforms you are interested in.

        It's certainly not an ideal solution but it's a very attractive one, and it has the advan

      • Re:Bah (Score:3, Insightful)

        by eraserewind ( 446891 )

        Compare the efficiency of GCC at auto-vectorising FORTRAN (which has a primitive vector type) and C (which doesn't), if you don't believe me.

        You see this all the time in SW Engineering. If there is a well defined high level API specifying what something is trying to do rather than how it should be efficiently (at the time) done, it will eventually be far more efficient to use the API, because it will get dedicated instructions in the chipset or even be completely implemented in a dedicated HW device where

    • Re:Bah (Score:5, Insightful)

      by Anonymous Coward on Tuesday July 18, 2006 @07:42AM (#15735518)
      C is faster in the same sense that assembly is faster: You have more control over the resulting machine code, so the code can by definition always be faster. You can optimize by hand. But that comes at a price: You have to optimize by hand. That's why C isn't always faster, especially not when it's supposed to be portable. The question isn't whether there could be a faster program in a language of choice, it's whether a language is at the right level of abstraction for a programmer to describe what the program must do and not a bit more. Overspecification prevents optimization. If you write for (int i=0; i<100; i++) where you really meant for (i in [0..99]), how is the compiler going to know if order is important? The latter is much more easily parallelized, for example. C is full of explicitness where it is often not needed. Assembly even more so. That's the problem of low level languages.
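
      A hedged sketch of that difference, using OpenMP as one way (assuming a compiler that supports it) to hand the "order doesn't matter" fact back to the tools, since plain C syntax can't express it:

          /* Sequential: C promises the iterations run in order, whether or not you care. */
          void apply_seq(double *out, const double *in, double (*f)(double))
          {
              int i;
              for (i = 0; i < 100; i++)
                  out[i] = f(in[i]);
          }

          /* Parallelisable: the pragma asserts the iterations are independent,
             so the implementation is free to reorder or split them across CPUs. */
          void apply_par(double *out, const double *in, double (*f)(double))
          {
              int i;
              #pragma omp parallel for
              for (i = 0; i < 100; i++)
                  out[i] = f(in[i]);
          }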
  • High Level (Score:5, Insightful)

    by HugePedlar ( 900427 ) on Tuesday July 18, 2006 @07:24AM (#15735459) Homepage
    I remember back in the days of the Atari ST and Amiga, C was considered to be a high-level language. People would complain about the poor performance of games written in C (to ease the porting from Amiga to ST and vice versa) over 'proper' Assembly coded games.

    Now I hear most people referring to C and C++ as "low level" languages, compared to Java and PHP and visual basic and so on. Funny how that works out.

    I like Assembler. There's something about interacting intimately with your target hardware. It's a shame that it's no longer feasible with today's variety of hardware.
    • Assembler (Score:4, Insightful)

      by backwardMechanic ( 959818 ) on Tuesday July 18, 2006 @08:36AM (#15735722) Homepage
      Every serious hacker should have a play with assembler, or even machine code. There is real magic in starting up a uP or uC on a board you built yourself, and making it flash a few LEDs under the control of your hand assembled program. I found a whole new depth of understanding when I built a 68hc11 based board (not to mention memorizing a whole bunch of op-codes). Of course, I'd never want to write a 'serious' piece of code in assembly, and it still amazes me that anyone ever did!
    • Re:High Level (Score:3, Interesting)

      by rockmuelle ( 575982 )
      "I like Assembler. There's something about interacting intimately with your target hardware. It's a shame that it's no longer feasible with today's variety of hardwar"

      A minor observation about the feasibility of working with the target hardware: the two most popular instruction set architectures for commodity hardware, PowerPC and IA-32, have both been stable since the mid 90s. The programming guide for PowerPC processors is still pretty much the same document as it was in 1996, around the same time the P
  • by ChrisRijk ( 1818 ) on Tuesday July 18, 2006 @07:25AM (#15735461)
    Not really much "meat" here. The proof is in the pudding, as they say - but there are no benchmarks here. Just some minor talk about how things should compare.

    I don't agree with the basic premise of the article at all - but I've also written equivalent programs in C and more modern languages and compared the performance.
  • Inaccurate summary (Score:5, Insightful)

    by rbarreira ( 836272 ) on Tuesday July 18, 2006 @07:26AM (#15735465) Homepage
    The task of mapping C code to a modern microprocessor has gradually become increasingly difficult.

    This is not true. What they mean, I think, is "the task of mapping C code to efficient machine code has gradually become increasingly difficult".
  • It's very simple (Score:5, Interesting)

    by dkleinsc ( 563838 ) on Tuesday July 18, 2006 @07:33AM (#15735484) Homepage
    The speed of code written in a computer language is determined by the number of CPU cycles required to carry it out. That means that the speed of any higher-level language is related to the efficiency of code executed by the interpreter or produced by the compiler. Most compilers and interpreters these days are pretty darn good at optimizing, making the drawback of using a higher-level language less and less important.

    If you don't believe me, I suggest you look at some of the assembly code output of gcc. I'm no assembly guru, but I don't think I would have done as well writing assembly by hand.
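
    For anyone who wants to look, a minimal way to do that (sum() is just a made-up example):

        /* sum.c -- a trivial function to inspect */
        int sum(const int *a, int n)
        {
            int i, s = 0;
            for (i = 0; i < n; i++)
                s += a[i];
            return s;
        }

    Compiling with "gcc -O2 -S sum.c" writes the generated assembly to sum.s, ready to compare against what you would have written by hand.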

    • Re:It's very simple (Score:5, Informative)

      by rbarreira ( 836272 ) on Tuesday July 18, 2006 @07:41AM (#15735514) Homepage
      I'm no assembly guru, but I don't think I would have done as well writing assembly by hand

      I don't believe this as much as the people who I see repeating that sentence all the time...

      Not many years ago (with gcc), I got an 80% speed improvement just by rewriting a medium-sized function in assembly. Granted, it was a function which was itself half C code, half inline assembly, which might hinder gcc a bit. But it's also important to note that if the function had been written in pure C code, the compiler wouldn't have generated better code anyway, since it wouldn't use MMX opcodes... Last I checked, MMX code is only generated from pure C in modern compilers when it's quite obvious that it can be used, such as in short loops doing simple arithmetic operations.

      An expert assembly programmer in a CPU which he knows well can still do much better than a compiler.
      • Re:It's very simple (Score:3, Interesting)

        by spinkham ( 56603 )
        True, since they can always start with the compiler output, and thus will at least do no worse.
        The more interesting question is whether a person with only passing familiarity with assembly can do better than the compiler, and the answer to that is usually no these days.
      • by Terje Mathisen ( 128806 ) on Tuesday July 18, 2006 @10:34AM (#15736399)
        I've probably written more assembly than most slashdot readers, and most of what you say is true:

        It used to be the case that I could always increase the speed of some random C/Fortran/Pascal code by rewriting it in asm; part of that speedup came from realizing better ways to map the current problem to the actual cpu hardware available.

        However, I also discovered that much of the time it was possible to take the experience gained from the asm code, and use that to rewrite the original C code in such a way as to help the compiler generate near-optimal code. I.e. if I can get within 10-25% of 'speed_of_light' using portable C, I'll do so nearly every time.
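
        One small example of the kind of rewrite meant here (a sketch, not from the original post): C99's restrict qualifier, which promises the compiler that the arrays don't overlap, so it is free to reorder, unroll or vectorise.

            /* Without restrict, the compiler must assume dst and src might alias,
               which blocks many reorderings. With it, the loop is trivially vectorisable. */
            void scale(float *restrict dst, const float *restrict src, float k, int n)
            {
                int i;
                for (i = 0; i < n; i++)
                    dst[i] = k * src[i];
            }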

        There are some important situations where asm still wins, and that is when you have cpu hardware/opcodes available that the compiler cannot easily take advantage of. I.e. back in the days of the PentiumMMX 300 MHz cpu it became possible to do full MPEG2/DVD decoding in sw, but only by writing an awful lot of hand-optimized MMX code. Zoran SoftDVD was the first on the market, I was asked to help with some optimizations, but Mike Schmid (spelling?) had really done 99+% of the job.

        Another important application for fast code is in crypto: If you want to transparently encrypt anything stored on your hard drive and/or going over a network wire, then you want the encryption/decryption process to be fast enough that you really don't notice any slowdown. This was one of the reasons for specifying a 200 MHz PentiumPro as the target machine for the Advanced Encryption Standard: If you could handle 100 Mbit Ethernet full duplex (i.e. 10 MB/s in both directions) on a 1996 model cpu, then you could easily do the same on any modern system.

        When we (I and 3 other guys) rewrote one of the AES contenders (DFC, not the winner!) in pure asm, we managed to speed it up by a factor of 3, which moved it from being one of the 3-4 slowest to one of the fastest algorithms among the 15 alternatives.

        Today, with fp SIMD instructions and a reasonably orthogonal/complete instruction set (i.e. SSE3 on x86), it is relatively easy to write code in such a way that an autovectorizer can do a good job, but for more complicated code things quickly become much harder.

        Terje
    • Re:It's very simple (Score:4, Interesting)

      by jtshaw ( 398319 ) on Tuesday July 18, 2006 @08:12AM (#15735617) Homepage
      Most compilers and interpreters these days are pretty darn good at optimizing, making the drawback of using a higher-level language less and less important.


      In the past, most compilers were dreadful at optimizations. Now, they are just horrible. I guess that is an improvement, but I still believe there is a lot of good research to come here.

      I do agree that the playing field has become pretty even. For example, with the right VM and the right code you can get pretty good performance out of Java. Problem is, "the right VM" depends greatly on the task the program is doing.. certainly not a one-VM-fits-all out-of-the-box solution (ok.. perhaps you could always use the same VM, but app-specific tuning is often necessary for really high performance).

      At any rate.. people just need to learn to use the best tool for the job. Most apps don't actually need to be bleedingly fast, so developing them in something that makes the development go faster is probably more important than developing them in something to eke out that tiny performance gain nobody will probably notice anyway.
  • by Anonymous Coward on Tuesday July 18, 2006 @07:35AM (#15735488)
    Isn't the JIT for Java written in C, though?

    ahah now we know why my java program is so slow. damn C slowing it down.
  • by TeknoHog ( 164938 ) on Tuesday July 18, 2006 @07:38AM (#15735500) Homepage Journal
    This is exactly what I've been saying over and over, why I think that e.g. Fortran is better than C in many respects. The main point is neatly summarized at the end:
    the more information you can give to your optimizer, the better the job it can do. When you program in a low-level language, you throw away a lot of the semantics before you get to the compilation stage, making it much harder for the compiler to do its job.
  • It goes both ways (Score:5, Interesting)

    by JanneM ( 7445 ) on Tuesday July 18, 2006 @07:46AM (#15735533) Homepage
    Sure, CPUs look quite a bit different now than they did 20+ years ago. On the other hand, CPU designs do heavily take into account what features are being used by the application code expected to run on them, and one constant you can still depend on is that most of that application code is going to be machine-generated by a C compiler.

    For instance, 20 years ago there was nothing strange about having an actual quicksort machine instruction (VAXen had it). One expectation was still, at the time, that a lot of code would be generated directly by humans, so instructions and instruction designs catering to that use-case were developed. But by around then, most code was machine generated by a compiler, and since the compiler had little high-level semantics to work with, the high-level instructions - and most low-level ones too - went unused; this was one impetus for the development of RISC machines, by the way.

    So, as long as a lot of coding is done in C and C++ (and especially in the embedded space, where you have most rapid CPU development, almost all coding is), designs will never stray far away from the requirements of that language. Better compilers have allowed designers to stray further, but stray too far and you get penalized in the market.
    • Re:It goes both ways (Score:5, Informative)

      by pesc ( 147035 ) on Tuesday July 18, 2006 @08:21AM (#15735650)
      20 years ago there was nothing strange about having an actual quicksort machine instruction (VAXen had it).

      While the VAX had some complex instructions (such as double-linked queue handling), it did not have a quicksort instruction.

      Here [hp.com] is the instruction set manual.
    • For instance, 20 years ago there was nothing strange about having an actual quicksort machine instruction (VAXen had it). One expectation was still, at the time, that a lot of code would be generated directly by humans, so instructions and instruction designs catering to that use-case were developed. But by around then, most code was machine generated by a compiler, and since the compiler had little high-level semantics to work with, the high-level instructions - and most low-level ones too - went unused;

      • That reminds me of the most specialized machine instruction I ever saw. Back in the 80s I was in a EE lab where we made our own CPUs on breadboards out of AMD bitslice chips, then we implemented the specified instruction set in microcode. A large chunk of the grade was based on the lab instructor running a standard test program on each team's "system" and checking the expected results.

        One guy I knew realized that he was never going to get his rig stable enough to run through the whole test, so he set up a

  • by Bogtha ( 906264 ) on Tuesday July 18, 2006 @07:47AM (#15735534)

    The more abstract a language is, the better a compiler can understand what you are doing. If you write out twenty instructions to do something in a low-level language, it's a lot of work to figure out that what matters isn't that the instructions get executed, but the end result. If you write out one instruction in a high-level language that does the same thing, the compiler can decide how best to get that result without trying to figure out if it's okay to throw away the code you've written. Optimisation is easier and safer.
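
    A small C-level illustration of the same idea (assuming nothing beyond the standard library): the one-call form states the intent, the loop only implies it, and the compiler has to prove they are equivalent before it can substitute the fast path.

        #include <string.h>

        void clear_loop(unsigned char *buf, size_t n)
        {
            size_t i;
            for (i = 0; i < n; i++)   /* compiler must recognise this as a plain fill */
                buf[i] = 0;
        }

        void clear_intent(unsigned char *buf, size_t n)
        {
            memset(buf, 0, n);        /* the intent, stated directly */
        }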

    Furthermore, the bottleneck is often in the programmer's brain rather than the code. If programmers could write code ten times faster, that executes a tenth as quickly, that would actually be a beneficial trade-off for many (most?) organisations. High-level languages help with programmer productivity. I know that it's considered a mark of programmer ability to write the most efficient code possible, but it's a mark of software engineer ability to get the programming done faster while still meeting performance constraints.

    • by Eivind ( 15695 ) <eivindorama@gmail.com> on Tuesday July 18, 2006 @08:01AM (#15735577) Homepage
      If programmers could write code ten times faster, that executes a tenth as quickly, that would actually be a beneficial trade-off for many (most?) organisations.

      Especially since you can combine. Even in high-performance applications there's typically only a tiny fraction of the code that actually needs to be efficient; it's perfectly common to have 99% of the time spent in 5% of the code.

      Which means that in basically all cases you're going to be better off writing everything in a high-level language and then optimize only those routines that need it later.

      That way you make fewer mistakes and get higher-quality code quicker for the 95% of the code where efficiency is unimportant, and you can spend even more time on optimizing those few spots where it matters.

    • "If programmers could write code ten times faster, that executes a tenth as quickly, that would actually be a beneficial trade-off for many (most?) organisations."

      This sound perfectly reasonable in theory. In practice, however, it's not. Users want speedy development AND speedy execution. I developed a Java image management program for crime scene photos, and the Sheriff Patrol's commander told me flat out: we'll never use this. It's too slow.

      I rewrote the program using C++ and Qt, and gained a massive
    • The more abstract a language is, the better a compiler can understand what you are doing

      Except it doesn't. Nobody has written a compiler that smart, and I don't care what anyone says: I don't think anyone ever will.

      Learning how to invent and develop algorithms is important. Learning how to translate those algorithms into various languages is important. And knowing how the compiler will translate those algorithms into machine instructions- and how the CPU itself will process those machine instructions, will
  • by mlwmohawk ( 801821 ) on Tuesday July 18, 2006 @07:48AM (#15735540)
    The first mistake: Confusing "compile" performance with execution performance. The job of mapping C/C++ code to machine code is trivial.

    I've been programming professionally for over 20 years, and for those 20 years, the argument is that computers are now fast enough to allow high level languages and we don't need those dirty nasty assemblers and low level languages.

    What was true 20 years ago is still true today, well written code in a low level language tailored to how the computer actually works will always be faster than a higher level environment.

    The problem with computer science today is that the professors are "preaching" a hypothetical computer with no limitations. Suggesting that "real" limitations of computers are somehow unimportant.

    If computer science isn't about computers, what is it about? I hate that students coming out of universities, when asked about registers, or how they would write a multiply routine if they only had shifts and adds, ask "why do I need to know this?"
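
    (For the record, a minimal sketch of the shift-and-add multiply in question -- classic binary long multiplication:)

        unsigned mul_shift_add(unsigned a, unsigned b)
        {
            unsigned result = 0;
            while (b) {
                if (b & 1)      /* low bit of b set: add the current shifted a */
                    result += a;
                a <<= 1;        /* move to the next binary digit of b */
                b >>= 1;
            }
            return result;
        }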

    Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and .NET will make software worse.
    • by Anonymous Coward on Tuesday July 18, 2006 @08:08AM (#15735600)
      "I've been programming professionally for over 20 years, and for those 20 years, the argument is that computers are now fast enough to allow high level languages and we don't need those dirty nasty assemblers and low level languages."

      The "appeal to an expert" fallacy?

      "What was true 20 years ago is still true today, well written code in a low level language tailored to how the computer actually works will always be faster than a higher level environment."

      It also means that portability becomes ever harder, as well as adaptability to new hardware.

      "If computer science isn't about computers, what is it about? I haate that students coming out of universities, when asked about registers and how would they write a multiply routine if they only had shifts and adds, ask "why do I need to know this?""

      It's about algorithms. Computers just happen to be the most convenient means for trying them.

      "The problem with computer science today is that the professors are "preaching" a hypothetical computer with no limitations. Suggesting that "real" limitations of computers are somehow unimportant."

      With the trend towards VM's and virtualization, that "hypothetical" computer comes ever closer.

      "Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and .NET will make software worse."

      Now who's handwaving?
      • Now who's handwaving?

        I'd say you are. His first statement wasn't a logical fallacy; he was just pointing out this argument has been going on for a long time.

        You made a good point about portability, but I think that was your only point. And it's easily shot down by the fact that it's just as easy to port a standard C/C++ API to a new environment as it is to port Java/.NET to a new environment.

        He made an excellent point about many new graduates not knowing how the CPU actually works and you replied w

        • by Anonymous Coward
          "In the real world we need to be able to get a solution in the minimum amount of time. VMs always take more time."

          I'd argue that in the real world (or at least business world) we need the solution to be developed in the shortest amount of time, with the most amount of security. While a VM based language is not guaranteed to provide quicker time / security, in most cases it probably will.
      • The "appeal to an expert" fallacy?

        I've never come across that fallacy in philosophy class, however, if you mean the "Improper Appeal to Authority" fallacy then it isn't. If the above poster was a movie star or a well known public figure and their comments about the article are being referenced to prove a point (assuming said movie star or public figure isn't an expert programmer), then that would be an improper appeal to authority. In any case, the insight and experience of long time programmer is valuabl

      • With the trend towards VM's and virtualization, that "hypothetical" computer comes ever closer.

        Yay. With continued displays of attitudes like that, I'm going to leave the industry.

        It is getting increasingly difficult to hire S/W engineers that understand that there is an operating system and also hardware beneath the software they write. I need people NOW that can grok device drivers, understand and use Unix facilities, fiddle with DBs, write decent code in C, C++, Java, and shell, and can also whip tog

    • by cain ( 14472 ) on Tuesday July 18, 2006 @08:13AM (#15735626) Journal
      If computer science isn't about computers, what is it about?

      "Computer science is no more about computers than astronomy is about telescopes" -- Edsger Dijkstra quotes (Dutch computer Scientist. Turing Award in 1972. 1930-2002)

      Sorry, you're arguing against Dijkstra: you lose. :)

      • "Computer science is no more about computers than astronomy is about telescopes" -- Edsger Dijkstra (Dutch computer scientist, 1930-2002, Turing Award 1972)

        I see this quote everywhere, and just because it's by some semi-famous academic, nobody questions it; it's simply taken for granted. The quote is utter rubbish.

        With astronomy you have stars, which aren't man-made and thus only scarcely understood, and the tools we use to look at them, telescopes, which are man-made. We understand them.

        Computers and Comp
        • The quote is utter rubbish. ... With astronomy you have stars, which aren't man-made ... Computers and Computer Science are both things that are entirely man-made. There is no natural phenomenon that we call 'computer' and a science that studies this natural phenomenon called "computer science".

          Not. Even. Wrong.

          If astronomy was called "telescope science" you'd also forget that it was about ways of looking at the skies. Computers are more flexible than that - they are used to model and study all kinds of nat
    • by arevos ( 659374 ) on Tuesday July 18, 2006 @08:31AM (#15735696) Homepage
      The first mistake: Confusing "compile" performance with execution performance. The job of mapping C/C++ code to machine code is trivial.

      I've designed compilers before, and I wouldn't class constructing a C/C++ compiler as "trivial" :)

      If computer science isn't about computers, what is it about? I hate that students coming out of universities, when asked about registers, or how they would write a multiply routine if they only had shifts and adds, ask "why do I need to know this?"

      One could also make the opposite argument. Many computer courses teach languages such as C++, C# and Java, which all have connections to low level code. C# has its pointers and gotos, Java has its primitives, C++ has all of the above. There aren't many courses that focus more heavily on highly abstracted languages, such as Lisp.

      And I think this is more important, really. Sure, there are many benefits to knowing the low level details of the system you're programming on; but it's not essential to know, whilst it is essential to understand how to approach a programming problem. I'm not saying that an understanding of low level computational operations isn't important, merely that it is more important to know the abstract generalities.

      Or, to put it another way, knowing how a computer works is not the same as knowing how to program effectively. At best, it's a subset of a wider field. At worst, it's something that is largely irrelevant to a growing number of programmers. I went to a University that dealt quite extensively with low level hardware and networking, and a significant proportion of the marks of my first year came from coding assembly and C for 680008 processors. Despite this, I can't think of many benefits such knowledge has when, say, designing a web application on Ruby on Rails. Perhaps you can suggest some?

      Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and .NET will make software worse.

      I disagree. I think software sucks because software engineers don't understand programming

    • by Oligonicella ( 659917 ) on Tuesday July 18, 2006 @08:36AM (#15735726)
      "The job of maping C/C++ code to machine code is trivial."

      Which machine, chum?

      "I've been programming professionally for over 20 years..."

      OK, bump chests. I've been at it for 35+. And? Experience doth not beget competence. There are uses for low-level languages and those that require them will use them. Try writing a 300+ module banking application in assembler. By the time you do, it will be outdated. Not because the language will change, but because the banking requirements will. Using assembler to write an application of that magnitude is like trying to write an Encyclopedia article with paper and pencil. Possible, but 'tarded.

      "Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and .NET will make software worse."

      More like, 'software sucks today for the same reason it always has -- fossilized thinkers can't change to make things easier for those who necessarily follow them.' Ego, no more.

    • by embracethenerdwithin ( 989333 ) on Tuesday July 18, 2006 @09:07AM (#15735868)
      I thought it might be helpful for a current student to let you know what it is we learn today at my college. I'm a senior Software Engineering major, not a comp sci major. Comp Sci is another department and has a totally different focus. They focus on super-efficient algorithms, we focus on developing large software projects.

      My software engineering program has been very Java intensive. My software engineering class, object oriented class, and software testing class were all java based. We dabbled in C# a bit as well.

      However, I also had an assembly class, a programming languages class where we learned Perl and Scheme (this language sucks) and about five algorithms classes in C++. I also had an embedded systems class in both C and assembly (learned assembly MCU code, then did C).

      I feel like this is all pretty well rounded; I've learned a bunch of languages and am not really specialized in one. I'd say I am best at Java right now, but I can also write C++ code just fine.

      I've never been told a computer has any kind of crazy limitless performance. In embedded systems, I learned about performance. Making a little PIC microcontroller calculate arctan was fun(took literally 30 seconds without a smart solution). I also learned that there is a trade off between several things such as performance, development time, readability, and portability.

      We are taught to see languages as tools: you look at your problem and pull a tool out of the toolbox that you think fits the problem best. You have to weigh what's important for the project and choose based off of that.

      The final thing I'd like to point out is that one huge issue with software today is it is bug ridden. How easy something is to test makes a big difference in my opinion. Assembly and C will pretty much always be harder to test than languages like Java and C#.

      I don't think the universities are the problem, at least not in my experience.
  • by jaaron ( 551839 ) on Tuesday July 18, 2006 @07:56AM (#15735566) Homepage
    Here's a print view [informit.com] of the article so that you don't have to keep moving through the pages. Despite that annoyance, it was a good article. I wish there had been more concrete examples though.
  • by rbarreira ( 836272 ) on Tuesday July 18, 2006 @08:04AM (#15735586) Homepage
    OK, the article isn't bad but contains a few misleading parts... Some quotes:

    one assembly language statement translates directly to one machine instruction

    OK, this is nitpicking, but there are some exceptions - I remember that TASM would automatically convert long conditional jumps into the opposite conditional jump + an unconditional long jump, since there was no long conditional jump instruction.

    Other data structures work significantly better in high-level languages. A dictionary or associative array, for example, can be implemented transparently by a tree or a hash table (or some combination of the two) in a high-level language; the runtime can even decide which, based on the amount and type of data fed to it. This kind of dynamic optimization is simply impossible in a low-level language without building higher-level semantics on top and meta-programming--at which point, you would be better off simply selecting a high-level language and letting someone else do the optimization.

    This paragraph is complete crap. If you're using a Dictionary API in a so-called "low-level language", it's just as possible for the API to do the same optimization as it is for the runtime he talks about; and you're still letting "someone else do the optimization".
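
    A sketch of what that looks like (hypothetical names): the caller never sees whether a hash table, a tree, or some adaptive hybrid sits behind the opaque pointer, so the library author is just as free to optimize as any runtime.

        /* dict.h -- an opaque dictionary API; the representation lives in dict.c */
        typedef struct dict dict;     /* incomplete type: layout hidden from callers */

        dict *dict_new(void);
        void  dict_put(dict *d, const char *key, void *value);
        void *dict_get(const dict *d, const char *key);
        void  dict_free(dict *d);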

    When you program in a low-level language, you throw away a lot of the semantics before you get to the compilation stage, making it much harder for the compiler to do its job.

    That's surely true. But the opposite is also true - when you use an immense amount of overly complex semantics, it can be translated into a pile of inefficient code. Sure, this can improve in the future, but right now it's a problem of very high level constructs.

    Due to the way C works, it's impossible for the compiler to inline a function defined in another source file. Both source files are compiled to binary object files independently, and these are linked.

    Not exactly true I think [greenend.org.uk]. Yes, the approach on that page is not standard C, but on section 4 he also talks about some high level performance improvements which are still being experimented on, so...
  • by s_p_oneil ( 795792 ) on Tuesday July 18, 2006 @08:11AM (#15735615) Homepage

    I didn't see anything mentioning that many high-level languages are written in C. And I don't consider languages like FORTRAN to be high-level. FORTRAN is a language that was designed specifically for numeric computation and scientific computing. For that purpose, it is easy for the compiler to optimize the machine code better than a C compiler could ever manage. The FORTRAN compiler was probably written in C, but FORTRAN has language constructs that are more well-suited to numeric computation.

    Most truly high-level languages, like LISP (which was mentioned directly in TFA), are interpreted, and the interpreters are almost always written in C. It is impossible for an interpreted language written in C (or even a compiled one that is converted to C) to go faster than C. It is always possible for a C programmer to write inefficient code, but that same programmer is likely to write inefficient code in a high-level language as well.

    I'm not saying high-level languages aren't great. They are great for many things, but the argument that C is harder to optimize because the processors have gotten more complex is ludicrous. It's the machine code that's harder to optimize (if you've tried to write assembly code since MMX came out, you know what I mean), and that affects ALL languages.

    • Most truly high-level languages, like LISP (which was mentioned directly in TFA), are interpreted, ...

      Programming languages are not "interpreted". A language IMPLEMENTATION may be based on an interpreter. Every major implementation of Common Lisp today has a compiler, and most of them don't even have an interpreter any more - everything, including command-line/evaluator input, is compiled on-the-fly before being executed.

      ... and the interpreters are almost always written in C. It is impossible for an int
  • Flawed Argument (Score:4, Interesting)

    by logicnazi ( 169418 ) <gerdesNO@SPAMinvariant.org> on Tuesday July 18, 2006 @08:51AM (#15735788) Homepage
    The fact that C code is not as close to assembly code as it once was isn't the relevant issue. The question is whether C code is still closer to the assembly than high level languages are. This is undoubtedly true. If you don't believe this, try adding constructs to Ruby or Lisp to let you do low level OS programming and see how difficult it would be.

    I'm a big fan of high level languages, and I believe eventually it will be the very distance from assembly that high level languages provide that will make them faster, by allowing compilers/interpreters to do more optimization. However, it is just silly to pretend that C is not still far closer to the way a modern processor works than high level languages are.

    If nothing else just look at how C uses pointers and arrays and compare this to the more flexible way references and arrays work in higher level languages.
  • Imaginary history (Score:5, Interesting)

    by dpbsmith ( 263124 ) on Tuesday July 18, 2006 @08:53AM (#15735805) Homepage
    Whoa! This article seems to be making up history out of whole cloth. I'm not even sure where to begin. It's just totally out to lunch.

    C was not a reaction to LISP. I can't even imagine why anyone would say this. LISP's if/then/else was an influence on ALGOL and later languages.

    C might have been a reaction to Pascal, which in turn was a reaction to ALGOL.

    LISP was not "the archetypal high-level language." The very names CAR and CDR mean "contents of address register" and "contents of decrement register," direct references to hardware registers on the IBM 704. When the names of fundamental languages constructs are those of specific registers in a specific processor, that is not a "high-level language" at all. Later efforts to build machines with machine architectures optimized for implementation of LISP further show that LISP was not considered "a high-level language."

    C was not specifically patterned on the PDP-11. Rather, both of them were based on common practice and understanding of what was in the air at the time. C was a direct successor to, and reasonably similar to, BCPL, which ran on the Honeywell 635 and 645, the IBM 360, the TX-2, the CDC 6400, the Univac 1108, the PDP-9, the KDF 9 and the Atlas 2.

    C makes an interesting comparison with Pascal; you can see that C is, in many ways, a computer language rather than a mathematical language. For example, the inclusion of specific constructs for increment and decrement (as opposed to just writing A := A + 1) puts it closer, not to PDP-11 architecture, but to contemporary machine architecture in general.
    • Re:Imaginary history (Score:4, Informative)

      by masklinn ( 823351 ) <.slashdot.org. .at. .masklinn.net.> on Tuesday July 18, 2006 @09:56AM (#15736132)

      LISP was not "the archetypal high-level language." The very names CAR and CDR mean "contents of address register" and "contents of decrement register," direct references to hardware registers on the IBM 704.

      You forgot "CONS", which comes from the IBM cons cells (a 36-bit machine word on the 704), which is the block holding both a CAR and a CDR.

      The thing is, the names only existed because no one found any better name for them, or any more interesting name (Common Lisp now offers the "first" and "rest" aliases to CAR and CDR... yet quite a lot of people still prefer using CAR and CDR).

      LISP has always been a high level language, because it was started from mathematics (untyped lambda calculus) and only then adapted to computers.

      And the fact that Lisp Machines (trying to get away from the Von Neumann model) were built doesn't mean that Lisp is a low level language, only that AI labs needed power that the Lisp => Von Neumann machine mappings could not give them at that time.

      Lisp is a high level language, because Lisp abstracts the machine away (no manual memory management, not giving a fuck about registers or machine words [may I remind you that Lisp was one of the first languages with unbounded integers and automatic promotion from machine to unbounded integers?])

  • by Rod, Hot ( 672270 ) * on Tuesday July 18, 2006 @09:24AM (#15735942)
    Dusted this off from the rec.arts.humor archive... It seemed appropriate.

    From:

    Subject: The truth about 'C++' revealed

    Date: Tuesday, December 31, 2002 5:20 AM

    On the 1st of January, 1998, Bjarne Stroustrup gave an interview to the IEEE's 'Computer' magazine.

    Naturally, the editors thought he would be giving a retrospective view of seven years of object-oriented design, using the language he created.

    By the end of the interview, the interviewer got more than he had bargained for and, subsequently, the editor decided to suppress its contents, 'for the good of the industry' but, as with many of these things, there was a leak.

    Here is a complete transcript of what was said, unedited and unrehearsed, so it isn't as neat as planned interviews.

    You will find it interesting...

    ____________________________________________________________________

    Interviewer: Well, it's been a few years since you changed the world of software design, how does it feel, looking back?

    Stroustrup: Actually, I was thinking about those days, just before you arrived. Do you remember? Everyone was writing 'C' and, the trouble was, they were pretty damn good at it. Universities got pretty good at teaching it, too. They were turning out competent - I stress the word 'competent' - graduates at a phenomenal rate. That's what caused the problem.

    Interviewer: problem?

    Stroustrup: Yes, problem. Remember when everyone wrote Cobol?

    Interviewer: Of course, I did too

    Stroustrup: Well, in the beginning, these guys were like demi-gods. Their salaries were high, and they were treated like royalty.

    Interviewer: Those were the days, eh?

    Stroustrup: Right. So what happened? IBM got sick of it, and invested millions in training programmers, till they were a dime a dozen.

    Interviewer: That's why I got out. Salaries dropped within a year, to the point where being a journalist actually paid better.

    Stroustrup: Exactly. Well, the same happened with 'C' programmers.

    Interviewer: I see, but what's the point?

    Stroustrup: Well, one day, when I was sitting in my office, I thought of this little scheme, which would redress the balance a little. I thought 'I wonder what would happen, if there were a language so complicated, so difficult to learn, that nobody would ever be able to swamp the market with programmers?' Actually, I got some of the ideas from X10, you know, X windows. That was such a bitch of a graphics system, that it only just ran on those Sun 3/60 things. They had all the ingredients for what I wanted. A really ridiculously complex syntax, obscure functions, and pseudo-OO structure. Even now, nobody writes raw X-windows code. Motif is the only way to go if you want to retain your sanity.

    [NJW Comment: That explains everything. Most of my thesis work was in raw X-windows. :)]

    Interviewer: You're kidding...?

    Stroustrup: Not a bit of it. In fact, there was another problem. Unix was written in 'C', which meant that any 'C' programmer could very easily become a systems programmer. Remember what a mainframe systems programmer used to earn?

    Interviewer: You bet I do, that's what I used to do.

    Stroustrup: OK, so this new language had to divorce itself from Unix, by hiding all the system calls that bound the two together so nicely. This would enable guys who only knew about DOS to earn a decent living too.

    Interviewer: I don't believe you said that...

    Stroustrup: Well, it's been long enough, now, and I believe most people have figured out for themselves that C++ is a waste of time but, I must say, it's taken them a lot longer than I thought it would.

    Interviewer: So how exactly did you do it?

    Stroustrup: It was only supposed to be a joke, I never thought people would take the book seriously.

  • the author... (Score:3, Insightful)

    by ynohoo ( 234463 ) on Tuesday July 18, 2006 @09:38AM (#15736017) Homepage Journal
    the author is only a couple of years out of college and he is already well on his way to becoming a professional troll. I see a bright future for him...
  • by alispguru ( 72689 ) <bob,bane&me,com> on Tuesday July 18, 2006 @10:29AM (#15736361) Journal
    Existing high-level languages, such as LISP, provided too much abstraction for implementing an operating system

    Huh? I would argue that commercially successful (as in boxes sold to Fortune 500 companies and used in production) operating systems have been written in three languages:

    * Assembly

    * C

    * Lisp [andromeda.com]

    Are there any commercially successful OSs written in C++ yet?

    (revealing my ignorance and posting flamebait, all in one)

  • More Myth here (Score:3, Informative)

    by wonkavader ( 605434 ) on Tuesday July 18, 2006 @10:36AM (#15736407)
    It's possible to say everything said in this article -- vaguely, as it is said in this article -- and be right, and yet still dance around the reality.

    Take a look yourself on http://shootout.alioth.debian.org/ [debian.org]

    C's faster than Java. It will probably always be so in general, unless you're trying to run C code on a hardware Java box.

    This article says Java, for example, CAN be faster. But it doesn't say "C is almost always faster than Java or Fortran, usually faster than Ada, and C can be mangled (in the form of Digital Mars' D, for instance) to be faster than C usually is. Often, Java is a pig compared to C, BUT THERE ARE TIMES WHEN IT ISN'T. Really. There are times, few and far between, when it's actually, get this, FASTER. It's fun to look for those few times. And if you write programs which do that, that'd be cool. And as processors get wackier and wackier, there will be more and more times where this is true. Meanwhile, if your developers write good code, Java's easier to develop in and debug." Which would be more completely correct.

    Excuse me now. I have to go back to my Perl programming.
  • by master_p ( 608214 ) on Tuesday July 18, 2006 @10:46AM (#15736479)

    When Fortran was made, nobody thought that CPUs 30 years in the future would have vector processing instructions. In fact, as Wikipedia says [wikipedia.org], vector semantics in Fortran arrived only in Fortran 90.

    The only advantage of current Fortran over C is that the vector processing unit of modern CPUs is better utilised, thanks to Fortran semantics. But, in order to be fair and square, the same semantics could be applied to C, and then C would be just as fast as Fortran.

    The fact that C does not have vector semantics reflects the domain C is used in: most apps written in C do not need vector processing. When such processing is needed, Fortran can easily interoperate with C: just write your time-critical vector processing modules in Fortran.

    As for higher-level-than-C languages being faster than C, it is purely a myth. Code that operates on hardware primitives (e.g. ints or doubles) runs at exactly the same speed in C, Java and other languages...but higher-level languages have semantics that can hurt performance as easily as help it. All the checks VMs do add overhead that C does not have; the little VM routines that run here and there all add up to slower performance, as does the fact that some languages are overengineered or invite sloppy programming (for example, creating new objects on every call instead of reusing static members).
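
    To make the aliasing point concrete, here's a minimal sketch of how C99's "restrict" qualifier hands a C compiler the same no-overlap guarantee that Fortran array arguments carry implicitly (function and parameter names are illustrative); with it, GCC-class compilers will usually auto-vectorize the loop:

    #include <stddef.h>

    /* "restrict" promises that x and y never alias, so the compiler is
       free to use the vector unit without emitting an overlap check. */
    void scale_add(size_t n, float alpha,
                   const float * restrict x, float * restrict y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = alpha * x[i] + y[i];
    }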

  • Forth (Score:3, Interesting)

    by Drasil ( 580067 ) on Tuesday July 18, 2006 @11:45AM (#15736958)
    It can be made to be fast, and it can be made to be as high level as you want. I often wonder what the world would have been like if more programmers had gone the Forth way instead of the C/*nix way.
  • by Animats ( 122034 ) on Tuesday July 18, 2006 @01:21PM (#15737883) Homepage

    The article is a bit simplistic.

    With medium-level languages like C, some of the language constructs sit at a lower level than the operations the hardware actually provides. Thus, a decent compiler has to figure out what the user's code is doing and generate the appropriate instructions. The classic example is

    char tab1[100], tab2[100];
    int i = 100;
    char* p1 = tab1; char* p2 = tab2;
    while (i--) *p2++ = *p1++;

    Two decades ago, C programmers who knew that idiom thought they were cool. In the PDP-11 era, with the non-optimizing compilers that came with UNIX, that was actually useful. The "*p2++ = *p1++;" explicitly told the compiler to generate auto-increment instructions, and considerably shortened the loop over a similar loop written with subscripts. By the late 1980s and 1990s, it didn't matter. Both GCC and the Microsoft compilers were smart enough to hoist subscript arithmetic out of loops, and writing that loop with subscripts generated the same code as with pointers. Today, if you write that loop, most compilers for x86 machines will recognize it as a block copy and emit a call to memcpy or a short string-move sequence. The compiler has to actually figure out what the programmer intended and rewrite the code. This is non-trivial. In some ways, C makes it more difficult, because it's harder for the compiler to figure out the intent of a C program than a FORTRAN or Pascal program. In C, there are more ways that code can do something weird, and the compiler must make sure that the weird cases aren't happening before optimizing.
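
    For comparison, here is the same copy written with subscripts; on a reasonably modern GCC or Microsoft compiler at normal optimization levels, this usually produces the same block-copy code as the pointer version (the snippet is just illustrative):

    char tab1[100], tab2[100];
    int i;
    for (i = 0; i < 100; i++)
        tab2[i] = tab1[i];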

    The next big obstacle to optimization is the "dumb linker" assumption. UNIX has a tradition of dumb linkers, dating back to the PDP-11 linker, which was written in assembler with very few comments. The linker sees the entire program, but, with most object formats, can't do much to it other than throw out unreachable code. This, combined with the usual approach to separate compilation, inhibits many useful optimizations. When code calls a function in another compilation unit, the caller has to assume near-unlimited side effects from the call. This blocks many optimizations. In numerical work, it's a serious problem when the compiler can't tell, say, that "cos(x)" has no side effects. In C, the compiler can't assume that; in FORTRAN, it can, which is why some heavy numerical work is still done in FORTRAN. The compiler usually doesn't know that "cos" is a pure function; that is, that x == y implies cos(x) == cos(y). This is enough of a performance issue that GCC has some cheats to get around it; look up "mathinline.h". But that doesn't help when you call some one-line function in another compilation unit from inside an inner loop.
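
    A sketch of how that information can be supplied by hand in C, using a GCC extension ("my_cos" is an illustrative stand-in, not the libm routine):

    #include <stddef.h>

    /* Defined in some other compilation unit.  The "const" attribute
       declares that the result depends only on the argument and the call
       has no side effects, so the call in the loop below can be hoisted
       even though the body isn't visible here. */
    double my_cos(double x) __attribute__((const));

    double sum_scaled(const double *a, size_t n, double phase)
    {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += a[i] * my_cos(phase);  /* loop-invariant once marked const */
        return s;
    }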

    C++ has "inline" to help with this problem. The real win with "inline" is not eliminating the call overhead; it's the ability for the optimizers to see what's going on. But really, what should be happening is that the compiler should check each compilation unit and output not machine code, but something like a parse tree. The heavy optimization should be done at link time, when more of the program is visible. There have been some experimental systems that did this, but it remains rare. "Just in time" systems like Java have been more popular. (Java's just-in-time approach is amusing. It was put in because the goal was to support applets in browsers. (Remember applets?) Now that Java is mostly a server-side language, the JIT feature isn't really all that valuable, and all of Java's "packaging" machinery takes up more time than a hard compile would.)

    The next step up is to feed performance data from execution back into the compilation process. Some of Intel's embedded system compilers do this. It's most useful for machines where out of line control flow has high costs, and the CPU doesn't have good branch prediction hardware. For modern x86 machines, it's not a big win. For the Itanium, it's essential. (The Itanium needs a near-omniscient compiler to perform well, because you have to decide at compile time which instructions should be executed

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...