High-level Languages and Speed

nitsudima writes to tell us Informit's David Chisnall takes a look at the 'myth' of high-level languages versus speed and why it might not be entirely accurate. From the article: "When C was created, it was very fast because it was almost trivial to turn C code into equivalent machine code. But this was only a short-term benefit; in the 30 years since C was created, processors have changed a lot. The task of mapping C code to a modern microprocessor has gradually become increasingly difficult. Since a lot of legacy C code is still around, however, a huge amount of research effort (and money) has been applied to the problem, so we still can get good performance from the language."
  • by ChrisRijk ( 1818 ) on Tuesday July 18, 2006 @07:25AM (#15735461)
    Not really much "meat" here. The proof is in the pudding, as they say - but there are no benchmarks here. Just some minor talk about how things should compare.

    I don't agree with the basic premise of the article at all - but I've also written equivalent programs in C and more modern languages and compared the performance.
  • It's very simple (Score:5, Interesting)

    by dkleinsc ( 563838 ) on Tuesday July 18, 2006 @07:33AM (#15735484) Homepage
    The speed of code written in a computer language is based on the number of CPU cycles required to carry it out. That means that the speed of any higher-level language is related to the efficiency of code executed by the interpreter or produced by the compiler. Most compilers and interpreters these days are pretty darn good at optimizing, making the drawback of using a higher-level language less and less important.

    If you don't believe me, I suggest you look at some of the assembly code output of gcc. I'm no assembly guru, but I don't think I would have done as well writing assembly by hand.

  • It goes both ways (Score:5, Interesting)

    by JanneM ( 7445 ) on Tuesday July 18, 2006 @07:46AM (#15735533) Homepage
    Sure, CPUs look quite a bit different now than they did 20+ years ago. On the other hand, CPU designs do heavily take into account what features are being used by the application code expected to run on them, and one constant you can still depend on is that most of that application code is going to be machine-generated by a C compiler.

    For instance, 20 years ago there was nothing strange about having an actual quicksort machine instruction (VAXen had it). One expectation at the time was still that a lot of code would be written directly by humans, so instructions and instruction designs catering to that use-case were developed. But by around then, most code was machine generated by a compiler, and since the compiler had little high-level semantics to work with, the high-level instructions - and most low-level ones too - went unused; this was one impetus for the development of RISC machines, by the way.

    So, as long as a lot of coding is done in C and C++ (and especially in the embedded space, where you have most rapid CPU development, almost all coding is), designs will never stray far away from the requirements of that language. Better compilers have allowed designers to stray further, but stray too far and you get penalized in the market.
  • Re:It's very simple (Score:3, Interesting)

    by spinkham ( 56603 ) on Tuesday July 18, 2006 @08:04AM (#15735588)
    True, since they can always start with the compiler output, and thus will at least do no worse.
    The more interesting question is whether a person with only passing familiarity with assembly can do better than the compiler, and the answer to that is usually no these days.
  • Re:It's very simple (Score:4, Interesting)

    by jtshaw ( 398319 ) on Tuesday July 18, 2006 @08:12AM (#15735617) Homepage
    Most compilers and interpreters these days are pretty darn good at optimizing, making the drawback of using a higher-level language less and less important.

    In the past, most compilers were dreadful at optimizations. Now, they are just horrible. I guess that is an improvement, but I still believe there is a lot of good research to come here.

    I do agree that the playing field has become pretty even. For example, with the right VM and the right code you can get pretty good performance out of Java. Problem is, "the right VM" depends greatly on the task the program is doing... certainly not a one-VM-fits-all out-of-the-box solution (ok... perhaps you could always use the same VM, but app-specific tuning is often necessary for really high performance).

    At any rate, people just need to learn to use the best tool for the job. Most apps don't actually need to be blazingly fast, so developing them in something that makes the development go faster is probably more important than developing them in something to eke out that tiny performance gain nobody will probably notice anyway.
  • Re:Old debate (Score:1, Interesting)

    by Anonymous Coward on Tuesday July 18, 2006 @08:40AM (#15735738)
    Yes, the guy's main point is valid - an optimizer needs lots of semantic information, and putting that information into a language tends to make that language higher-level. But C++ has lots of this metadata now and it's still horribly low-level in use (lack of garbage collection makes sure of that). I spent a year prototyping a language myself which was just an extension to C with lots of metadata directed at the compiler. It was still a low-level language. So high vs. low doesn't determine speed; some other aspect of language design, as yet unnamed, is responsible for how fast the program ends up. And of course, the lowest-level language of all is your assembler, and with the complete chip specs on one hand and a keyboard on the other (and plenty of time!) a programmer can always beat any compiler.
  • Re:Old debate (Score:5, Interesting)

    by bloodredsun ( 826017 ) on Tuesday July 18, 2006 @08:50AM (#15735781) Journal

    If I had mod points I'd certainly mod you informative. Those benchmarks might be synthetic and flawed but as a general illustration of how the various languages differ, that link is fantastic.

    Of course I'll just use it for my own ends by convincing my managers that we're using the right languages - "Yes boss you'll see that we use C++ for the stuff that needs to be fast with low memory overhead, Java for the server side stuff, stay the fuck away from Ruby and if you say 'Web 2.0' at me one more time I'll be forced to wham you with a mallet!" ;-)

  • Flawed Argument (Score:4, Interesting)

    by logicnazi ( 169418 ) on Tuesday July 18, 2006 @08:51AM (#15735788) Homepage
    The fact that C code is not as close to assembly code as it once was isn't the relevant issue. The question is whether C code is still closer to the assembly than high-level languages are. This is undoubtedly true. If you don't believe this, try adding constructs to Ruby or Lisp to let you do low-level OS programming and see how difficult it would be.

    I'm a big fan of high-level languages, and I believe eventually it will be the very distance from assembly that high-level languages provide that will make them faster, by allowing compilers/interpreters to do more optimization. However, it is just silly to pretend that C is not still far closer to the way a modern processor works than high-level languages are.

    If nothing else just look at how C uses pointers and arrays and compare this to the more flexible way references and arrays work in higher level languages.
  • Imaginary history (Score:5, Interesting)

    by dpbsmith ( 263124 ) on Tuesday July 18, 2006 @08:53AM (#15735805) Homepage
    Whoa! This article seems to be making up history out of whole cloth. I'm not even sure where to begin. It's just totally out to lunch.

    C was not a reaction to LISP. I can't even imagine why anyone would say this. LISP's if/then/else was an influence on ALGOL and later languages.

    C might have been a reaction to Pascal, which in turn was a reaction to ALGOL.

    LISP was not "the archetypal high-level language." The very names CAR and CDR mean "contents of address register" and "contents of decrement register," direct references to hardware registers on the IBM 704. When the names of fundamental language constructs are those of specific registers in a specific processor, that is not a "high-level language" at all. Later efforts to build machines with machine architectures optimized for implementation of LISP further show that LISP was not considered "a high-level language."

    C was not specifically patterned on the PDP-11. Rather, both of them were based on common practice and understanding of what was in the air at the time. C was a direct successor to, and reasonably similar to, BCPL, which ran on the Honeywell 635 and 645, the IBM 360, the TX-2, the CDC 6400, the Univac 1108, the PDP-9, the KDF 9 and the Atlas 2.

    C makes an interesting comparison with Pascal; you can see that C is, in many ways, a computer language rather than a mathematical language. For example, the inclusion of specific constructs for increment and decrement (as opposed to just writing A := A + 1) puts it closer, not to PDP-11 architecture, but to contemporary machine architecture in general.
  • Re:Old debate (Score:4, Interesting)

    by masklinn ( 823351 ) on Tuesday July 18, 2006 @09:14AM (#15735903)
    Haskell also does very well, and Digital Mars' impressive D is consistently in the top spots (one wonders why the hell Stroustrup is still trying to improve C++ when he could just switch to D and build from there)
  • Re:Old debate (Score:2, Interesting)

    by Anonymous Coward on Tuesday July 18, 2006 @09:16AM (#15735914)
    The O'Caml examples are not functional, they're imperative. The best-performing Haskell examples are also written imperatively.
  • Re:Old debate (Score:3, Interesting)

    by StarvingSE ( 875139 ) on Tuesday July 18, 2006 @09:30AM (#15735966)
    Java is hardly going the way of a legacy language. It is heavily used in the business world for web applications, which are becoming much more popular, not less.

    And although you call Cobol legacy, it really isn't. Many financial institutions still run applications written in Cobol since it is too costly and risky to migrate the old code to a new language. Cobol was meant for the financial industry, and it's probably there to stay. Colleges and universities are even starting to teach it again since it is in high demand in the job market right now (older Cobol programmers are retiring).
  • Re:High Level (Score:3, Interesting)

    by rockmuelle ( 575982 ) on Tuesday July 18, 2006 @09:39AM (#15736025)
    "I like Assembler. There's something about interacting intimately with your target hardware. It's a shame that it's no longer feasible with today's variety of hardwar"

    A minor observation about the feasibility of working with the target hardware: the two most popular instruction set architectures for commodity hardware, PowerPC and IA-32, have both been stable since the mid 90s. The programming guide for PowerPC processors is still pretty much the same document as it was in 1996, around the same time the PowerPC ISA was defined. IA-32 has undergone some changes with each new major processor family, but is still backwards compatible at the instruction level with processors released in the 80s.

    Contrast this with high-level (i.e., non-assembly) languages. Java has undergone a few major revisions in its 10-year lifespan. C++ has yet to have a compiler that fully implements the spec (think export and the really fun template games). Scripting languages are constantly evolving and sometimes aren't backwards compatible over a 4-year period. Then there's the Microsoft switch to .NET that invalidated billions of lines of VB and VC++ code. Compared to these languages, machine code is incredibly stable and portable (across processor iterations, at least).

    Of course, there are architecture considerations for squeezing performance out of code. But, again, these haven't changed much in the last 10 years, either. The memory bus is still the bottleneck and you still get 50-80 instructions for 'free' on each load, even if you're not filling the pipeline completely. If you are doing something that isn't memory bound, it's not that hard to look up the instruction latencies in the manual and code things up to fully utilize the processing units and keep the pipeline full. At least, it's no more difficult than developing scalable EJB applications for your favorite web application engine...


  • by rs79 ( 71822 ) on Tuesday July 18, 2006 @09:45AM (#15736066) Homepage
    "People who are good at C will insist that they can code something as quickly as anyone else can code the same thing in a higher level language. Well, maybe. I got my own awakening when I watched two students struggling to write something in C that they could have written in ten lines of Basic (it was a while ago).

    In the embedded world, programming DSPs is a wonderful example. We used to write assembly code for the parts that had to be fast because we could get tighter code than a compiler could produce. Now the tools are so good that even good programmers are better off letting the tools do the work.

    So, is there any point writing in a low level language, even where speed matters? I don't think so. In any event, it will take longer to write the code. It just doesn't seem like a winning proposition"

    I would never say it's just as quick to write stuff in C. But I will say the code runs faster and you can support higher loads than if it was written in something else.

    A 10 line basic program isn't a good example. Rewrite, say, Apache, or a unix kernel in something besides C (or assembler) and watch it suck, bad.

    There's a reason all the important stuff is in C. I should have thought this was obvious. What should be obvious too is that it doesn't all need to be in C. The 80/20 rule applies someplace; the config/startup stuff in, say, Apache doesn't really matter what it's written in. Use LISP and it wouldn't matter. But the task spawning and http service parts damn well better be in C. Even better in Assembler, but then portability goes out the door.

    C may not be perfect but it's less imperfect than anything else. It's about the optimal compromise between portability and speed.

  • Re:Old debate (Score:5, Interesting)

    by rainman_bc ( 735332 ) on Tuesday July 18, 2006 @09:58AM (#15736152)
    stay the fuck away from Ruby

    What's wrong with Ruby, as a replacement for a very ugly language called Perl?

    Ruby is an elegant language, fully Object Oriented, and does just as well as Python and Perl...

    Ruby on Rails OTOH is a different story and I don't want to get into a flame war over it, but Ruby itself is pretty good for a lot of things you'd otherwise write in Perl but don't like the ugliness of Perl...

    I've found some people don't get the distinction between Ruby and Ruby on Rails.
  • by glindsey ( 73730 ) on Tuesday July 18, 2006 @10:20AM (#15736305)
    Well, that's not entirely true: compilers for PCs may not be a big business anymore, but compilers for embedded systems are still a huge business, despite the availability of GCC for many platforms. You need only look at IAR to confirm that...
  • Re:Old debate (Score:3, Interesting)

    by NickFitz ( 5849 ) on Tuesday July 18, 2006 @10:23AM (#15736321) Homepage

    The statement "C is not a high level language" is not logically equivalent to the statement "C is a low level language", so the OP is still entitled to his beer money :-)

  • Re:It goes both ways (Score:3, Interesting)

    by Waffle Iron ( 339739 ) on Tuesday July 18, 2006 @10:25AM (#15736335)
    That reminds me of the most specialized machine instruction I ever saw. Back in the 80s I was in an EE lab where we made our own CPUs on breadboards out of AMD bitslice chips, then we implemented the specified instruction set in microcode. A large chunk of the grade was based on the lab instructor running a standard test program on each team's "system" and checking the expected results.

    One guy I knew realized that he was never going to get his rig stable enough to run through the whole test, so he set up a single opcode to just dump the entire expected output of the test program to the printer then halt. IIRC, he pulled it off.

  • by porkchop_d_clown ( 39923 ) on Tuesday July 18, 2006 @11:35AM (#15736866) Homepage
    C is best at what it was designed for - controlling the computer. It used to be that people chose the language to match the app they were writing: For math, use Fortran or APL. For reports use Cobol or RPG. C for flipping bits. Pascal for teaching.

    We're where we are today because, for many years, C was the one you could get for free. The others cost hundreds of dollars.

    I remember the first time I encountered a computer that shipped from the vendor with GCC instead of a proprietary compiler - it was like seeing a death sentence for Abacus, Lightspeed, and all those other little compiler companies.
  • Forth (Score:3, Interesting)

    by Drasil ( 580067 ) on Tuesday July 18, 2006 @11:45AM (#15736958)
    It can be made to be fast, and it can be made to be as high level as you want. I often wonder what the world would have been like if more programmers had gone the Forth way instead of the C/*nix way.
  • New debate (Score:4, Interesting)

    by Dzonatas ( 984964 ) on Tuesday July 18, 2006 @11:54AM (#15737050) Homepage
    High-level languages have always been compared to cognitive semantics and grammatical styles. That is, the higher the level of the language, the easier it is for us humans to read and write it. Conversely, the lower the level of the language, the more discrete steps are needed to describe an instruction or data.

    The speed of a programming language or machine language is not measured only by how high or low level it is to us; it is also measured by the time to develop and implement the program. The article basically makes a point of it: it's "better to let someone else" optimize the low-level code while you write with the high-level language. You could write a super fast machine-coded program, but it'll take you much longer to write it than with a simpler higher-level language.

    The new debate is over datatypes and the available methods to manipulate them. Older hardware gave us the old debate with primitive datatypes and a general set of instructions to manipulate the data. Newer hardware can give us more than just primitives. For example, a unicoded string datatype seen by the hardware as a complete object instead of an array of bytes. With hardware instructions to manipulate unicoded strings, that would practically take away any low-level implementation of unicoded strings. The same could be done for UTF-8 strings. We could implement hardware support for XML documents and other common protocols. How these datatypes are actually implemented in hardware is the center of the debate.

    Eventually, there will be so many datatypes that there will be separate low-level languages specifically designed for a domain of datatypes. The article makes the point that there exists an increase in complexity for newer compilers to understand what was intended by a set of low-level instructions. Today's CPUs have a static limit of low-level instructions. The future holds hardware-implemented datatypes and their dynamic availability of low-level instructions. Newer processors will need to be able to handle the dynamic set of machine language instructions.

    Does the new debate conflict with Turing's goal to simply make a processor unit extensible without the need to add extra hardware? For now, we have virtualization.
  • Re:Old debate (Score:4, Interesting)

    by Julian Morrison ( 5575 ) on Tuesday July 18, 2006 @11:54AM (#15737051)
    If you looked at the shootout you'd see what was wrong in Ruby: it's just about the slowest serious scripting language. It seems to be using pure bloody-minded interpretation without any bytecode or JIT stage.

    Nothing wrong with the language that a proper implementation couldn't cure, basically.
  • Re:Old debate (Score:2, Interesting)

    by Anonymous Coward on Tuesday July 18, 2006 @11:54AM (#15737056)
    Yeah, but Clean is endorsed by a small Dutch company (whereas Haskell is promoted by several people with English-sounding names who can write research papers in perfect English), so nobody cares about Clean.

  • by orthogonal ( 588627 ) on Tuesday July 18, 2006 @11:59AM (#15737095) Journal
    Here's an actual data point:

    I sped up some C code by unrolling a loop with Duff's Device. Duff's Device, for those who haven't encountered it, makes an ingenious use of the often-maligned C behavior that case statements, in the absence of a break or return statement, fall through.

    Duff's Device takes advantage of the fall-through by jumping into the middle of an unrolled loop of repeated instructions. If eight instructions are unrolled, Duff's Device iterates the loop

    count divided by eight (count / 8 )

    times, but enters the loop by jumping to the

    count mod eight (count % 8)

    'th unrolled instruction from the end of the loop. (This sounds complicated, but isn't; just look at the code and it becomes clear.)

    The whole point of Duff's Device is speed and locality of code. Speed: because the loop is unrolled, more instructions are executed for each jump back to the top (and jumps are, relatively, expensive, because they mean any preloaded instructions must be tossed out and re-read). Locality: (hopefully) all the instructions can be cached, so the processor doesn't have to re-read them from memory.
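
    For reference, here is a minimal sketch of Duff's Device in C. The function name is mine, and copying into an ordinary buffer is an illustrative stand-in for the original's use case of writing repeatedly to one memory-mapped output address:

```c
#include <stddef.h>

/* Duff's Device: an 8-way unrolled copy that uses case fall-through to
   enter the loop body part-way through, so the count % 8 leftover
   elements are handled on the first pass instead of in a separate loop. */
static void duff_copy(short *dst, const short *src, size_t count)
{
    if (count == 0)
        return;                      /* the classic form assumes count > 0 */
    size_t n = (count + 7) / 8;      /* total iterations: count / 8, rounded up */
    switch (count % 8) {             /* jump into the middle of the unrolled body */
    case 0: do { *dst++ = *src++;
    case 7:      *dst++ = *src++;
    case 6:      *dst++ = *src++;
    case 5:      *dst++ = *src++;
    case 4:      *dst++ = *src++;
    case 3:      *dst++ = *src++;
    case 2:      *dst++ = *src++;
    case 1:      *dst++ = *src++;
            } while (--n > 0);
    }
}
```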

    But what gcc does with Duff's Device on ARM targets is just bizarre. gcc uses a jump table (good) to directly change the Program Counter (good, so far). But instead of jumping into the loop (which would be good), gcc uses the jump table to jump to ...

    a redundant assignment and ...

    an unconditional jump.

    Yes, gcc very smartly makes a jump table (which directly changes the Program Counter, just like a jump would) to jump to a jump. This is simply a waste of code and time:

    I'd show you the entire assembly code gcc produces, but slashdot won't let me: "Your comment violated the "postercomment" compression filter. Try less whitespace and/or less repetition. Comment aborted."
    cmp r2, #7
    ldrls pc, [pc, r2, asl #2] <-- directly modify the Program Counter making it pc + ( r2 << 2 )
    b .L70
    .p2align 2
    .L79: <-- jump table
    .word .L71
    .word .L72
    .word .L73
    .word .L74
    .word .L75
    .word .L76
    .word .L77
    .word .L78
    .L72: <-- first jump table destination
    mov r1, lr <-- redundant assignment made at every destination
    b .L80 <-- actual jump into unrolled loop
    [ 7 repeats of the above, with different branch targets, elided]
    .L87: <-- for each iteration of the loop, we're moving exactly 8 halfwords = 4 words
    ldrh r3, [r0], #2 <-- what would be fastest is to load multiple four words,
    <-- then shift high words down
    strh r3, [ip, #0] @ movhi
    [6 repeats of the above elided]
    ldrh r3, [r0], #2
    strh r3, [ip, #0] @ movhi
    sub r1, r1, #1 <-- a subs instruction here would obviate the need for the
    cmp r1, #0 <-- cmp instruction that follows it, saving a cycle per iteration
    bgt .L87

    Why a jump table just to set up an unconditional jump? Why the redundant mov, which could have been done once, prior to the jump table jump? Who knows, that's what gcc does.

    In this particular case, the object is to copy halfwords to a memory address, which address is really mapped to an output device. ARM processors, of course, are optimized for word addresses, so the "best" way to do this would be to load multiple words (LDM), shift the upper

  • Re:Old debate (Score:1, Interesting)

    by Anonymous Coward on Tuesday July 18, 2006 @11:59AM (#15737105)
    You can manipulate individual registers in some compilers by using inline assembly. Furthermore, on many microcontrollers you can manipulate the registers inside the memory mapped peripherals in the C code. Many compilers provide facilities for writing interrupt handling functions in C as well.

    I argue that C is not a low-level language, but can provide low-level interaction with the hardware components (arguably C# with the .NET Framework 2.0 can do this too with the SerialPort class).

    The language is high-level and compiled into machine language, unlike other high-level languages that are compiled into byte-code and interpreted.

    I think a good border is:

    1) Low-level - Machine code that can be directly converted into assembly.
    2) High-level - Machine code that can be directly converted into assembly; however, there is no 1-1 mapping of it into the language it was originally written in (i.e. C, because different compilers will create different machine code for the same platform). What I aim to say is that it's not possible to figure out the exact C code from looking at the assembly.

  • This seems generally to be true, but some small outfits are apparently still making money selling compilers. In the early 90s I used the Power C compiler for DOS. It was a nice compiler and cheap ($20). Recently I was amazed to see that the company, Mix Software [], is still in business, with the same low prices. How they do this I have no idea.

  • by Anonymous Coward on Tuesday July 18, 2006 @12:49PM (#15737615)

    Did you finish reading my post? I mentioned that some languages which are often interpreted can also be compiled. And if you write a LISP compiler in LISP, what do you compile the compiler with? Since the new compiler hasn't been built yet, you would have to compile it with a different LISP compiler,

    This, obviously, goes for all compiled languages whose compiler is written in the language in question. In particular, it is true for C. LISP pre-dates C by a significant number of years (2 decades, in fact), so LISP compilers definitely do not have to go back to C. I am involved in work on a native-code compiler for a high-level language that is written in the language itself. Yes, there once was an "ur-version" of the compiler written in a different language. But that was more than 20 years ago, and the other language was not C. In fact, that language was LISP...

      and I doubt that one was written in LISP.

    Why do you doubt that? It certainly wasn't written in C either. Moreover, all this is completely orthogonal to the discussion. A compiler does not have to be efficient in order to produce efficient code.

    The main reason for C's popularity these days is that it is self-sustaining: You can always count on having a C compiler at hand, and since C is important, even hardware designers take some pains in making sure that C can be compiled at least semi-efficiently. But, as the article points out correctly, the efforts involved are becoming more and more heroic.
  • by Animats ( 122034 ) on Tuesday July 18, 2006 @01:21PM (#15737883) Homepage

    The article is a bit simplistic.

    With medium-level languages like C, some of the language constructs are lower-level than the machine hardware. Thus, a decent compiler has to figure out what the user's code is doing and generate the appropriate instructions. The classic example is

    char tab1[100], tab2[100];
    int i = 100;
    char *p1 = tab1, *p2 = tab2;    /* arrays decay to pointers; &tab1 would have the wrong type */
    while (i--) *p2++ = *p1++;

    Two decades ago, C programmers who knew that idiom thought they were cool. In the PDP-11 era, with the non-optimizing compilers that came with UNIX, that was actually useful. The "*p2++ = *p1++;" explicitly told the compiler to generate auto-increment instructions, and considerably shortened the loop over a similar loop written with subscripts. By the late 1980s and 1990s, it didn't matter. Both GCC and the Microsoft compilers were smart enough to hoist subscript arithmetic out of loops, and writing that loop with subscripts generated the same code as with pointers. Today, if you write that loop, most compilers for x86 machines will generate a single MOV instruction for the copy. The compiler has to actually figure out what the programmer intended and rewrite the code. This is non-trivial. In some ways, C makes it more difficult, because it's harder for the compiler to figure out the intent of a C program than a FORTRAN or Pascal program. In C, there are more ways that code can do something weird, and the compiler must make sure that the weird cases aren't happening before optimizing.

    The next big obstacle to optimization is the "dumb linker" assumption. UNIX has a tradition of dumb linkers, dating back to the PDP-11 linker, which was written in assembler with very few comments. The linker sees the entire program, but, with most object formats, can't do much to it other than throw out unreachable code. This, combined with the usual approach to separate compilation, inhibits many useful optimizations. When code calls a function in another compilation unit, the caller has to assume near-unlimited side effects from the call. This blocks many optimizations. In numerical work, it's a serious problem when the compiler can't tell, say, that "cos(x)" has no side effects. In C, it doesn't; in FORTRAN, it does, which is why some heavy numerical work is still done in FORTRAN. The compiler usually doesn't know that "cos" is a pure function; that is, x == y implies cos(x) = cos(y). This is enough of a performance issue that GCC has some cheats to get around it; look up "mathinline.h". But that doesn't help when you call some one-line function in another compilation unit from inside an inner loop.

    C++ has "inline" to help with this problem. The real win with "inline" is not eliminating the call overhead; it's the ability for the optimizers to see what's going on. But really, what should be happening is that the compiler should check each compilation unit and output not machine code, but something like a parse tree. The heavy optimization should be done at link time, when more of the program is visible. There have been some experimental systems that did this, but it remains rare. "Just in time" systems like Java have been more popular. (Java's just-in-time approach is amusing. It was put in because the goal was to support applets in browsers. (Remember applets?) Now that Java is mostly a server-side language, the JIT feature isn't really all that valuable, and all of Java's "packaging" machinery takes up more time than a hard compile would.)

    The next step up is to feed performance data from execution back into the compilation process. Some of Intel's embedded system compilers do this. It's most useful for machines where out of line control flow has high costs, and the CPU doesn't have good branch prediction hardware. For modern x86 machines, it's not a big win. For the Itanium, it's essential. (The Itanium needs a near-omniscient compiler to perform well, because you have to decide at compile time which instructions should be executed

  • Re:Old debate (Score:4, Interesting)

    by Dr. Zowie ( 109983 ) on Tuesday July 18, 2006 @03:13PM (#15738811)
    There are two main problems with Ruby as a replacement for "a very ugly language called Perl":

    * It's not sufficiently prettier than Perl

    * It's not Perl

    Perl may look ugly but it is to most programming languages as English is to most other languages. Perl is a brawling, sprawling mess of borrowed, Hamming-optimized idioms that is extremely ugly from the POV of a syntax engineer and extremely expressive from the POV of a fluent speaker.

    Ruby is more like Esperanto - elegant, clean, and spoken by practically no-one because it isn't very expressive.
  • Re:Old debate (Score:3, Interesting)

    by masklinn ( 823351 ) on Tuesday July 18, 2006 @03:18PM (#15738842)

    How about the fact that you can't use an integer as an array index in Ada and you have to use natural numbers (defined as a positive or null integer), because array indexes can't be negative (in most languages anyway, some -- like Python -- are exceptions to this quite common rule) and you therefore shouldn't be allowed to use a number that might ever be negative as an index. C# merely gives you a warning if your index is explicit (e.g., myArray[-1]) and doesn't do anything otherwise, before throwing an IndexOutOfRangeException at runtime.

    That's one measly example, but I find it quite interesting.

    So no, C#'s unsafe keyword isn't a factor (and the lack of implicit conversion clearly isn't, if anything implicit conversion is the sure sign of quirky and unsafe type systems).

    When people say that an Ada program that compiles will usually work without problem, they're not joking, Ada's type system is so extensive and so strong that it misses very few errors (that it could handle, that is, flaws in your own logic can't be patched by a compiler).

  • by Anonymous Coward on Tuesday July 18, 2006 @05:30PM (#15739741)
    Quality of generated code still counts for a lot in embedded systems. GCC generates significantly worse code than commercial alternatives for ARM. And it doesn't have good support for some processors, like the 68HC11 or 8051, still important in the embedded world even if the desktop PC crowd has long since moved on.

    GCC's long suit is portability to new architectures, at which it is unparalleled. However, for any particular processor architecture, the specialist compilers generally do better. And the cost of the compiler is small compared to the cost of the programmer or the cost of high-volume manufacturing. Saving some flash and RAM space adds up to a lot of money.

    Of course, the compiler is only a small part of the whole toolchain. Embedded systems also need to be debugged. You can't download a free JTAG emulator to go with GCC. Information might want to be free, but hardware doesn't.

"Conversion, fastidious Goddess, loves blood better than brick, and feasts most subtly on the human will." -- Virginia Woolf, "Mrs. Dalloway"