High-level Languages and Speed 777

nitsudima writes to tell us Informit's David Chisnall takes a look at the 'myth' of high-level languages versus speed and why it might not be entirely accurate. From the article: "When C was created, it was very fast because it was almost trivial to turn C code into equivalent machine code. But this was only a short-term benefit; in the 30 years since C was created, processors have changed a lot. The task of mapping C code to a modern microprocessor has gradually become increasingly difficult. Since a lot of legacy C code is still around, however, a huge amount of research effort (and money) has been applied to the problem, so we still can get good performance from the language."
This discussion has been archived. No new comments can be posted.

  • Bah (Score:5, Insightful)

    by perrin ( 891 ) on Tuesday July 18, 2006 @07:24AM (#15735458)
    So we "still can get good performance" from C? The implication is that C will somehow be overcome by some unnamed high-level language soon. That is just wishful thinking. The article is not very substantial, and where it tries to substantiate, it misses the mark badly. The claim that C cannot handle SIMD instructions well is not true. You can use them directly from C, or the C compiler can use them through autovectorization, as in gcc 4.1. The claim that C cannot inline functions from another source file is also wrong. This is a limitation in gcc, but other compilers can do it; IIRC the Intel compiler can. It is certainly not "impossible".
  • High Level (Score:5, Insightful)

    by HugePedlar ( 900427 ) on Tuesday July 18, 2006 @07:24AM (#15735459) Homepage
    I remember back in the days of the Atari ST and Amiga, C was considered to be a high-level language. People would complain about the poor performance of games written in C (to ease the porting from Amiga to ST and vice versa) over 'proper' Assembly coded games.

    Now I hear most people referring to C and C++ as "low level" languages, compared to Java and PHP and visual basic and so on. Funny how that works out.

    I like Assembler. There's something about interacting intimately with your target hardware. It's a shame that it's no longer feasible with today's variety of hardware.
  • Inaccurate summary (Score:5, Insightful)

    by rbarreira ( 836272 ) on Tuesday July 18, 2006 @07:26AM (#15735465) Homepage
    The task of mapping C code to a modern microprocessor has gradually become increasingly difficult.

    This is not true. What they mean, I think, is "the task of mapping C code to efficient machine code has gradually become increasingly difficult".
  • Re:Old debate (Score:5, Insightful)

    by StarvingSE ( 875139 ) on Tuesday July 18, 2006 @07:34AM (#15735485)
    C is not a low level language. If you're not directly manipulating the registers on the processor, you are not in a low level language (and forget about the "register" keyword, modern compilers just treat register variables in C/C++ as memory that needs to be optimized for speed).

    If anything, C is a so-called mid level language. If it wasn't, you'd be using an assembler instead of a compiler.
  • Re:Old debate (Score:2, Insightful)

    by dpilot ( 134227 ) on Tuesday July 18, 2006 @07:35AM (#15735487) Homepage Journal
    Ain't it great to know that Modula-2 - and essentially ALL of the strongly typed and structured languages - have pretty much died out. I did piles of stuff in M2, including reading and parsing legacy binary files, re-entrant interrupt handlers in DOS, etc.
  • Re:Bah (Score:5, Insightful)

    by TheRaven64 ( 641858 ) on Tuesday July 18, 2006 @07:38AM (#15735503) Journal
    The claim that C cannot handle SIMD instructions well is not true. You can use them directly from C, or the C compiler can use them through autovectorization, as in gcc 4.1

    You have two choices when using SIMD instructions in C:

    1. Use non-portable (between hardware, and often between compilers) intrinsics (or even inline assembly).
    2. Write non-vectorised code, and hope the compiler can figure out how to decompose it optimally into the intrinsics. Effectively, you think in vectorised code, translate it into scalar code, and then expect the compiler to translate it back.
    Compare the efficiency of GCC at auto-vectorising FORTRAN (which has a primitive vector type) and C (which doesn't), if you don't believe me.
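    To make the two choices concrete, here is a minimal C sketch (the function names are mine, not from the thread): the same four-float add written once with x86 SSE intrinsics, which is fast but non-portable, and once as plain scalar code that the autovectorizer may or may not rediscover.

    ```c
    /* Choice 2: portable scalar code; the compiler may (or may not)
       autovectorize this, e.g. with gcc -O2 -ftree-vectorize. */
    void add4_scalar(float *dst, const float *a, const float *b) {
        for (int i = 0; i < 4; i++)
            dst[i] = a[i] + b[i];
    }

    /* Choice 1: explicit SSE intrinsics -- guaranteed vector code,
       but tied to x86 hardware and an SSE-aware compiler. */
    #if defined(__SSE__)
    #include <xmmintrin.h>
    void add4_simd(float *dst, const float *a, const float *b) {
        _mm_storeu_ps(dst, _mm_add_ps(_mm_loadu_ps(a), _mm_loadu_ps(b)));
    }
    #else
    void add4_simd(float *dst, const float *a, const float *b) {
        add4_scalar(dst, a, b);   /* fallback on non-SSE targets */
    }
    #endif
    ```

    The `#if defined(__SSE__)` guard is itself the point: the intrinsic path has to be fenced off per target, whereas the scalar path is portable but leaves performance to the optimizer.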

    The claim that C cannot inline functions from another source file is also wrong. This is a limitation in gcc, but other compilers can do it, and IIRC the intel compiler can. It is certainly not "impossible".

    When you pass a C file to a compiler, it generates an object file. It has absolutely no way of knowing where functions declared in the header are defined. You can hack around this; pass multiple source files to the compiler at once and have it treat them as a single one, for example, but this falls down completely when the function is declared in a library (e.g. libc) and you don't have access to the source.

  • Re:Bah (Score:5, Insightful)

    by Anonymous Coward on Tuesday July 18, 2006 @07:42AM (#15735518)
    C is faster in the same sense that assembly is faster: You have more control over the resulting machine code, so the code can by definition always be faster. You can optimize by hand. But that comes at a price: You have to optimize by hand. That's why C isn't always faster, especially not when it's supposed to be portable. The question isn't whether there could be a faster program in a language of choice, it's whether a language is at the right level of abstraction for a programmer to describe what the program must do and not a bit more. Overspecification prevents optimization. If you write for (int i=0; i<100; i++) where you really meant for (i in [0..99]), how is the compiler going to know if order is important? The latter is much more easily parallelized, for example. C is full of explicitness where it is often not needed. Assembly even more so. That's the problem of low level languages.
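    The overspecification point can be sketched in C itself. There is no `for (i in [0..99])` in the language, but an OpenMP pragma is one existing way to tell the compiler the iteration order doesn't matter (function names here are illustrative):

    ```c
    /* Overspecified: plain C promises a serial iteration order,
       even though this loop body is order-independent. */
    void square_serial(const int *in, int *out, int n) {
        for (int i = 0; i < n; i++)
            out[i] = in[i] * in[i];
    }

    /* One way to say "for i in [0..n-1], order unimportant":
       an OpenMP hint. Compilers without OpenMP support ignore the
       pragma, so the behaviour is identical either way. */
    void square_unordered(const int *in, int *out, int n) {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            out[i] = in[i] * in[i];
    }
    ```

    Both functions compute the same result; only the second grants the compiler explicit freedom to parallelize.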
  • by Bogtha ( 906264 ) on Tuesday July 18, 2006 @07:47AM (#15735534)

    The more abstract a language is, the better a compiler can understand what you are doing. If you write out twenty instructions to do something in a low-level language, it's a lot of work to figure out that what matters isn't that the instructions get executed, but the end result. If you write out one instruction in a high-level language that does the same thing, the compiler can decide how best to get that result without trying to figure out if it's okay to throw away the code you've written. Optimisation is easier and safer.

    Furthermore, the bottleneck is often in the programmer's brain rather than the code. If programmers could write code ten times faster, that executes a tenth as quickly, that would actually be a beneficial trade-off for many (most?) organisations. High-level languages help with programmer productivity. I know that it's considered a mark of programmer ability to write the most efficient code possible, but it's a mark of software engineer ability to get the programming done faster while still meeting performance constraints.

  • by mlwmohawk ( 801821 ) on Tuesday July 18, 2006 @07:48AM (#15735540)
    The first mistake: Confusing "compile" performance with execution performance. The job of mapping C/C++ code to machine code is trivial.

    I've been programming professionally for over 20 years, and for those 20 years, the argument is that computers are now fast enough to allow high level languages and we don't need those dirty nasty assemblers and low level languages.

    What was true 20 years ago is still true today, well written code in a low level language tailored to how the computer actually works will always be faster than a higher level environment.

    The problem with computer science today is that the professors are "preaching" a hypothetical computer with no limitations. Suggesting that "real" limitations of computers are somehow unimportant.

    If computer science isn't about computers, what is it about? I hate that students coming out of universities, when asked about registers and how they would write a multiply routine if they only had shifts and adds, ask "why do I need to know this?"
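    For readers who never met the exercise, the shift-and-add multiply the parent mentions looks like this (a textbook version, not anyone's production code):

    ```c
    /* Multiply two unsigned integers using only shifts and adds:
       for each set bit of b, add a correspondingly shifted copy of a. */
    unsigned mul_shift_add(unsigned a, unsigned b) {
        unsigned result = 0;
        while (b) {
            if (b & 1)        /* lowest bit of b set: accumulate */
                result += a;
            a <<= 1;          /* a * 2 */
            b >>= 1;          /* b / 2 */
        }
        return result;
    }
    ```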

    Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and .NET will make software worse.
  • Re:Bah (Score:1, Insightful)

    by Anonymous Coward on Tuesday July 18, 2006 @07:55AM (#15735564)
    When you pass a C file to a compiler, it generates an object file. It has absolutely no way of knowing where functions declared in the header are defined. You can hack around this; pass multiple source files to the compiler at once and have it treat them as a single one, for example, but this falls down completely when the function is declared in a library (e.g. libc) and you don't have access to the source.

    You say this as if C defines an object format and you can toss libraries around without assuming a particular compiler, linker, and loader facility, e.g. a specific C implementation such as GCC with the GNU toolchain!

    C compilers can and do store intermediate forms in "object" files such that the linker can do final inter-procedural optimization at link time or even dynamic load time. The SGI Irix compiler did this, for example.
  • by hummassa ( 157160 ) on Tuesday July 18, 2006 @07:59AM (#15735574) Homepage Journal
    An expert assembly programmer in a CPU which he knows well can still do much better than a compiler.
    FOR ONE FUNCTION. If you programmed the whole system in asm, you'd see that the you-plus-assembler combo would lose many of the optimization opportunities that a good compiler gets. And that's the whole point of the article.
  • by Eivind ( 15695 ) <eivindorama@gmail.com> on Tuesday July 18, 2006 @08:01AM (#15735577) Homepage
    If programmers could write code ten times faster, that executes a tenth as quickly, that would actually be a beneficial trade-off for many (most?) organisations.

    Especially since you can combine. Even in high-performance applications, typically only a tiny fraction of the code actually needs to be efficient; it's perfectly common for 99% of the time to be spent in 5% of the code.

    Which means that in basically all cases you're going to be better off writing everything in a high-level language and then optimizing only those routines that need it.

    That way you make fewer mistakes and get higher-quality code quicker for the 95% of the code where efficiency is unimportant, and you can spend even more time optimizing the few spots where it matters.

  • by rbarreira ( 836272 ) on Tuesday July 18, 2006 @08:04AM (#15735586) Homepage
    OK, the article isn't bad but contains a few misleading parts... Some quotes:

    one assembly language statement translates directly to one machine instruction

    OK, this is nitpicking, but there are some exceptions - I remember that TASM would automatically convert long conditional jumps into the opposite conditional jump plus an unconditional long jump, since there was no long conditional jump instruction.

    Other data structures work significantly better in high-level languages. A dictionary or associative array, for example, can be implemented transparently by a tree or a hash table (or some combination of the two) in a high-level language; the runtime can even decide which, based on the amount and type of data fed to it. This kind of dynamic optimization is simply impossible in a low-level language without building higher-level semantics on top and meta-programming--at which point, you would be better off simply selecting a high-level language and letting someone else do the optimization.

    This paragraph is complete crap. If you're using a dictionary API in a so-called "low-level language", it's just as possible for the API to do that optimization as it is for the runtime he talks about; and you're still letting "someone else do the optimization".
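    The point about a C dictionary API can be made concrete with an opaque-handle sketch (all names hypothetical). Because callers never see the representation, the library is free to back the handle with a hash table, a tree, or to switch between them as the data grows, which is exactly the optimization freedom the article reserves for high-level runtimes:

    ```c
    #include <stdlib.h>
    #include <string.h>

    typedef struct dict dict;   /* opaque to callers */

    /* One possible backing: a flat array, fine for small sizes.
       A real library could promote itself to a hash table or tree
       behind this same interface without changing any caller. */
    struct dict {
        int n;
        struct { char key[32]; int value; } slots[64];
    };

    dict *dict_new(void) { return calloc(1, sizeof(dict)); }

    void dict_put(dict *d, const char *key, int value) {
        for (int i = 0; i < d->n; i++)
            if (strcmp(d->slots[i].key, key) == 0) {
                d->slots[i].value = value;   /* update in place */
                return;
            }
        if (d->n < 64) {
            strncpy(d->slots[d->n].key, key, 31);
            d->slots[d->n].value = value;
            d->n++;
        }
    }

    /* Returns 1 and fills *value on hit, 0 on miss. */
    int dict_get(const dict *d, const char *key, int *value) {
        for (int i = 0; i < d->n; i++)
            if (strcmp(d->slots[i].key, key) == 0) {
                *value = d->slots[i].value;
                return 1;
            }
        return 0;
    }
    ```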

    When you program in a low-level language, you throw away a lot of the semantics before you get to the compilation stage, making it much harder for the compiler to do its job.

    That's surely true. But the opposite is also true - when you use an immense amount of overly complex semantics, it can be translated into a pile of inefficient code. Sure, this may improve in the future, but right now it's a problem with very high-level constructs.

    Due to the way C works, it's impossible for the compiler to inline a function defined in another source file. Both source files are compiled to binary object files independently, and these are linked.

    Not exactly true I think [greenend.org.uk]. Yes, the approach on that page is not standard C, but on section 4 he also talks about some high level performance improvements which are still being experimented on, so...
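    One standard-C workaround, in the spirit of the linked page: put small function definitions in the header as `static inline`, so every translation unit sees the body and can inline it. (Link-time optimization, e.g. gcc's `-flto`, is the other route.) A trivial illustration, names mine:

    ```c
    /* Imagine this living in a shared header (say, mymath.h):
       because each .c file that includes it gets its own copy of
       the body, the compiler can inline it across "source files"
       with no linker cooperation at all. */
    static inline int clamp(int x, int lo, int hi) {
        return x < lo ? lo : x > hi ? hi : x;
    }
    ```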
  • by Anonymous Coward on Tuesday July 18, 2006 @08:08AM (#15735600)
    "I've been programming professionally for over 20 years, and for those 20 years, the argument is that computers are now fast enough to allow high level languages and we don't need those dirty nasty assemblers and low level languages."

    The "appeal to an expert" fallacy?

    "What was true 20 years ago is still true today, well written code in a low level language tailored to how the computer actually works will always be faster than a higher level environment."

    It also means that portability becomes ever harder, as well as adaptability to new hardware.

    "If computer science isn't about computers, what is it about? I hate that students coming out of universities, when asked about registers and how they would write a multiply routine if they only had shifts and adds, ask "why do I need to know this?""

    It's about algorithms. Computers just happen to be the most convenient means of trying them.

    "The problem with computer science today is that the professors are "preaching" a hypothetical computer with no limitations. Suggesting that "real" limitations of computers are somehow unimportant."

    With the trend towards VM's and virtualization, that "hypothetical" computer comes ever closer.

    "Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and .NET will make software worse."

    Now who's handwaving?
  • by s_p_oneil ( 795792 ) on Tuesday July 18, 2006 @08:11AM (#15735615) Homepage

    I didn't see anything mentioning that many high-level languages are implemented in C. And I don't consider languages like FORTRAN to be high-level. FORTRAN is a language that was designed specifically for numeric computation and scientific computing. For that purpose, it is easy for the compiler to optimize the machine code better than a C compiler could ever manage. The FORTRAN compiler was probably written in C, but FORTRAN has language constructs that are better suited to numeric computation.

    Most truly high-level languages, like LISP (which was mentioned directly in TFA), are interpreted, and the interpreters are almost always written in C. It is impossible for an interpreted language written in C (or even a compiled one that is converted to C) to go faster than C. It is always possible for a C programmer to write inefficient code, but that same programmer is likely to write inefficient code in a high-level language as well.

    I'm not saying high-level languages aren't great. They are great for many things, but the argument that C is harder to optimize because the processors have gotten more complex is ludicrous. It's the machine code that's harder to optimize (if you've tried to write assembly code since MMX came out, you know what I mean), and that affects ALL languages.

  • by iotaborg ( 167569 ) <exa@sof t h o m e.net> on Tuesday July 18, 2006 @08:12AM (#15735620) Homepage
    If computer science isn't about computers, what is it about?

    I was rather under the impression that computer science was the theory of computation, where the computer is simply a tool; just as much as a soldering iron is a tool in electrical engineering.
  • by cain ( 14472 ) on Tuesday July 18, 2006 @08:13AM (#15735626) Journal
    If computer science isn't about computers, what is it about?

    "Computer science is no more about computers than astronomy is about telescopes" -- Edsger Dijkstra quotes (Dutch computer Scientist. Turing Award in 1972. 1930-2002)

    Sorry, you're arguing against Dijkstra: you lose. :)

  • Re:Bah (Score:3, Insightful)

    by rbarreira ( 836272 ) on Tuesday July 18, 2006 @08:17AM (#15735632) Homepage
    Use non-portable (between hardware, and often between compilers) intrinsics (or even inline assembly).

    Which usually isn't a big problem anyway, since the code sections where that's an advantage are usually quite small and infrequent. So if you really need the performance, you can make the small sacrifice of inserting conditional-compilation statements with different code for the platforms you are interested in.

    It's certainly not an ideal solution but it's a very attractive one, and it has the advantage that you can have experts on each CPU optimizing the code of the platform they know best.
  • by rbarreira ( 836272 ) on Tuesday July 18, 2006 @08:23AM (#15735662) Homepage
    Of course, an Idiot might write nonsense code in .NET, but that doesn't mean .NET is a bad thing.

    I think his point was not that abstractions are bad, but that not knowing what's happening behind the scenes isn't good.
    Even to optimize .NET code, sometimes it's good to inspect the generated CIL (or even asm!) code in order to know why something isn't going fast.
  • Re:High Level (Score:5, Insightful)

    by radarsat1 ( 786772 ) on Tuesday July 18, 2006 @08:25AM (#15735674) Homepage
    No. Well, generally you'll have faster code if you code it in assembly. But things change when you enter the world of embedded programming... you're right, portability isn't AS important as speed. Sometimes. In certain parts of your program. But I recommend you DON'T disregard portability, even when it comes to microprocessors. In a real-world engineering project, you never know when one day parts will change, parts become obsolete, and you don't want to be left having to translate thousands of lines of assembly code.

    Rather, what's usually done is that most of the code is written in C, and only those parts that REALLY REALLY have to be optimized, like interrupt handlers for example, are done in assembly. People use assembly for routines that, for example, have to take exactly a certain number of instruction cycles to complete.

    But it should be avoided as much as possible. It's just not worth losing the portability.

    More and more these days, microprocessors are embedding higher level concepts, and even entire operating systems, just to make software development easier.
  • by arevos ( 659374 ) on Tuesday July 18, 2006 @08:31AM (#15735696) Homepage
    The first mistake: Confusing "compile" performance with execution performance. The job of maping C/C++ code to machine code is trivial.

    I've designed compilers before, and I wouldn't class constructing a C/C++ compiler as "trivial" :)

    If computer science isn't about computers, what is it about? I haate that students coming out of universities, when asked about registers and how would they write a multiply routine if they only had shifts and adds, ask "why do I need to know this?"

    One could also make the opposite argument. Many computer courses teach languages such as C++, C# and Java, which all have connections to low-level code. C# has its pointers and gotos, Java has its primitives, C++ has all of the above. There aren't many courses that focus more heavily on highly abstracted languages, such as Lisp.

    And I think this is more important, really. Sure, there are many benefits to knowing the low-level details of the system you're programming on; but they're not essential to know, whilst it is essential to understand how to approach a programming problem. I'm not saying that an understanding of low-level computational operations isn't important, merely that it is more important to know the abstract generalities.

    Or, to put it another way, knowing how a computer works is not the same as knowing how to program effectively. At best, it's a subset of a wider field. At worst, it's something that is largely irrelevant to a growing number of programmers. I went to a university that dealt quite extensively with low-level hardware and networking, and a significant proportion of my first-year marks came from coding assembly and C for 68008 processors. Despite this, I can't think of many benefits such knowledge has when, say, designing a web application in Ruby on Rails. Perhaps you can suggest some?

    Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and .NET will make software worse.

    I disagree. I think software sucks because software engineers don't understand programming.

  • Assembler (Score:4, Insightful)

    by backwardMechanic ( 959818 ) on Tuesday July 18, 2006 @08:36AM (#15735722) Homepage
    Every serious hacker should have a play with assembler, or even machine code. There is real magic in starting up a uP or uC on a board you built yourself, and making it flash a few LEDs under the control of your hand-assembled program. I found a whole new depth of understanding when I built a 68hc11-based board (not to mention memorizing a whole bunch of op-codes). Of course, I'd never want to write a 'serious' piece of code in assembly, and it still amazes me that anyone ever did!
  • by Oligonicella ( 659917 ) on Tuesday July 18, 2006 @08:36AM (#15735726)
    "The job of mapping C/C++ code to machine code is trivial."

    Which machine, chum?

    "I've been programming professionally for over 20 years..."

    OK, bump chests. I've been at it for 35+. And? Experience doth not beget competence. There are uses for low-level languages and those that require them will use them. Try writing a 300+ module banking application in assembler. By the time you do, it will be outdated. Not because the language will change, but because the banking requirements will. Using assembler to write an application of that magnitude is like trying to write an Encyclopedia article with paper and pencil. Possible, but 'tarded.

    "Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and .NET will make software worse."

    More like, 'software sucks today for the same reason it always has -- fossilized thinkers can't change to make things easier for those who necessarily follow them.' Ego, no more.

  • by backwardMechanic ( 959818 ) on Tuesday July 18, 2006 @08:44AM (#15735751) Homepage
    I love these hard definitions of soft concepts. Just because you write down some rules, it doesn't mean we follow them. Any programmer understands roughly what 'high level' and 'low level' mean, but I'm sure we'll all argue over where the boundaries are - they're not well defined. I guess you stopped at 101?
  • by 14CharUsername ( 972311 ) on Tuesday July 18, 2006 @08:54AM (#15735809)

    Now who's handwaving?

    I'd say you are. His first statement wasn't a logical fallacy; he was just pointing out that this argument has been going on for a long time.

    You made a good point about portability, but I think that was your only point. And it's easily shot down by the fact that it's just as easy to port a standard C/C++ API to a new environment as it is to port Java/.NET to a new environment.

    He made an excellent point about many new graduates not knowing how the CPU actually works, and you replied with: "It's about algorithms. Computers just happen to be the most convenient means of trying them." What the hell does that mean? Handwaving indeed.

    His main point was that VMs are always slower than compiled machine code. Even if computers are doubling in speed every 18 months or whatever, native machine code will still be faster than virtual machine code.

    With the trend towards VM's and virtualization, that "hypothetical" computer comes ever closer.

    Right there you have just proven yourself to be an academic. Trends do not make reality. Besides that, what about gcj? If VMs were so great, why would anyone want to compile java to native code? In the real world, people care about performance. Academics are satisfied that a problem has a solution. In the real world we need to be able to get a solution in the minimum amount of time. VMs always take more time.

    Now you may continue your handwaving.

  • by Rinzai ( 694786 ) on Tuesday July 18, 2006 @08:56AM (#15735822) Journal
    From TFA: The closer to the metal you can get while programming, the faster your program will compile -- or so conventional wisdom would have you believe. In this article, I will show you how high-level languages like Java aren't slow by nature, and in fact low level languages may compile less efficiently.

    I believe the phrase the faster your program will compile means "the faster the compiler will translate your program into machine-executable code." Apparently the author means "the compiler will generate faster code." He then makes the same mistake again, equivocating between the process of compilation and the quality of the compiled output.

    If you can't manage to write a clear sentence defining what topic you're exploring...what else might you be getting wrong?

  • by D-Cypell ( 446534 ) on Tuesday July 18, 2006 @09:04AM (#15735858)
    I am going to assume there are some rounding errors in your 8+3 years, because you are going back to a time prior to the first Java release. It is feasible that you worked for Sun, but I would think you would have mentioned it.

    "It's the best of both worlds"

    The problem with that assertion is that software development has more than two worlds :). I remain a Java booster, but even I would have raised an eyebrow if you had come to me suggesting the development of a photo manipulation tool in Java. Sure, it can be done, but you can also loosen a bolt with a hammer and chisel. To be perfectly frank, your example sounds like a textbook case of a poor workman blaming his tools.

    There are applications that benefit from running in a managed environment, and spend the vast majority of their time waiting for input or shifting memory around. These are cases where QT and C++ would be bad choices (the consequences of 'mis-shifting' memory in a language like C++ are well documented). Java wouldn't be the only choice, but I wouldn't call you crazy (or a bad work-man) for making that choice.

    Please don't fall into the trap of using the wrong tool and then blaming the tool when things go wrong. This is exactly the kind of thing that has been plastered all over these discussions for the last ten years or so.
  • Re:Bah (Score:3, Insightful)

    by eraserewind ( 446891 ) on Tuesday July 18, 2006 @09:05AM (#15735861)
    Compare the efficiency of GCC at auto-vectorising FORTRAN (which has a primitive vector type) and C (which doesn't), if you don't believe me.
    You see this all the time in software engineering. If there is a well-defined high-level API specifying what something is trying to do, rather than how it should (at the time) be done efficiently, it will eventually be far more efficient to use the API: the API may get dedicated instructions in the chipset, or even be implemented entirely in a dedicated hardware device, whereas the hand-rolled "how to do it" version will forever be limited by how it's doing it.

    For PCs this isn't so obvious, since generic hardware plus the biggest CPU going tends to get used, but in embedded devices dedicated hardware is much more often the way to go than a processor upgrade. On my last project I can think of two APIs that gave us this benefit immediately without software effort on our part, and a third area that benefited from ripping out all the "optimized" code that bypassed the API and using the (now hardware-accelerated) API directly.
  • by Azarael ( 896715 ) on Tuesday July 18, 2006 @09:14AM (#15735901) Homepage
    The "appeal to an expert" fallacy?
    I've never come across that fallacy in philosophy class; if you mean the "improper appeal to authority" fallacy, then this isn't one. If the above poster were a movie star or a well-known public figure and their comments about the article were being referenced to prove a point (assuming said movie star or public figure isn't an expert programmer), then that would be an improper appeal to authority. In any case, the insight and experience of a long-time programmer is valuable. Sure, they can be wrong, but they still know their stuff front to back. Likely the GP poster knows very well that you can throw as much virtualization as you want at a problem, but no matter what, you're still bound by the limitations of the underlying hardware. Maybe at some point hardware with almost infinite flexibility will exist, but I'd be surprised if that happened any time soon.
  • Re:Old debate (Score:3, Insightful)

    by StarvingSE ( 875139 ) on Tuesday July 18, 2006 @09:18AM (#15735922)
    Key word is "relatively." C is low level compared to languages such as Java and C#, which do a lot of things such as memory management for you.
  • by aadvancedGIR ( 959466 ) on Tuesday July 18, 2006 @09:24AM (#15735945)
    as much as development process.

    CPU power is available and cheap, but time to market is critical. Most of the time, you don't need to write the fastest program ever, but a program that works reasonably well and that you can debug easily (some may say those are the same requirement).

    C may not be the best tool for any given task but it is a pretty decent swiss army knife that most people know how to use reasonably well.

    Disclaimer: I'm not in web development but in embedded real-time on DSPs. With 8 dedicated ALUs (2 mul, 2 add/sub, 2 logic and 2 load/store) running at the same time on the chip, there are still not many good alternatives to C (let the compiler optimize and pray) and ASM (massive headache).
  • Re:Bah (Score:2, Insightful)

    by sasdrtx ( 914842 ) on Tuesday July 18, 2006 @09:34AM (#15735994)
    From the first sentence: "The closer to the metal you can get while programming, the faster your program will compile..." WTF? How fast a language compiles has nothing to do with the so-called myth, which is that low-level languages allow a good programmer to produce programs that run faster. They may well compile faster (and they probably retain that advantage), but that's beside the point.

    Oddly enough, he proceeds to jump back on track and discuss optimization techniques and levels, most of which is OK. But he berates Java for implementing arrays (that's supposed to be an advantage over C and C++, which don't), and ignores the advantages of managed memory provided by a virtual machine.

    C. Needs more work.

    (yes, that's a pitiful pun.)
  • Re:Old debate (Score:5, Insightful)

    by Bastian ( 66383 ) on Tuesday July 18, 2006 @09:35AM (#15736003)
    The article addressed this point by mentioning that the definitions of high and low level language are a moving target. Nowadays I think most people consider assembly language to be its own thing, and the low-level classification has now been shifted into a domain that was once described completely by the term high-level. The term "high-level language" has been replaced by the term "programming language."

    If you're going to go with the jargon as it's most often used nowadays (which is a perfectly reasonable thing to do), then C would certainly be about as low as you can get without manipulating individual registers - i.e., without being assembly language.

  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Tuesday July 18, 2006 @09:36AM (#15736009)
    "Computer science is no more about computers than astronomy is about telescopes." -- Edsger Dijkstra (Dutch computer scientist, Turing Award in 1972, 1930-2002)

    I see this quote everywhere, and just because it's by some semi-famous academic, nobody questions it; everyone takes it for granted. The quote is utter rubbish.

    With astronomy you have stars, which aren't man-made and thus only scarcely understood, and the tools we use to look at them, telescopes, which are man-made. We understand them.

    Computers and Computer Science are both things that are entirely man-made. There is no natural phenomenon that we call 'computer' and a science that studies this natural phenomenon called "computer science". It's all one thing. The quote is rubbish and contains no useful information whatsoever. On the contrary: the conclusion it draws is absolutely false.
  • the author... (Score:3, Insightful)

    by ynohoo ( 234463 ) on Tuesday July 18, 2006 @09:38AM (#15736017) Homepage Journal
    the author is only a couple of years out of college and he is already well on his way to becoming a professional troll. I see a bright future for him...
  • by Anonymous Coward on Tuesday July 18, 2006 @09:40AM (#15736037)
    People will always argue over two things:
    1) whether assembly is faster than C
    2) whether interpreted languages are faster than C/C++

    The real question here is - which type of language does well for your application?

    Ultimately C will be faster in the hands of a good programmer who understands the language and the application. However, will he be more productive? I'd never write a third-person shooter in Python, Perl, or Java. However, what I might do is add a 3D engine to a Python statistics modeling program that's already written in one of those languages. Most people would agree that writing a web interface in C is just insane if you have anything particularly useful you want to write. However, I'll probably write a multi-process webserver in C, just because it makes sense for speed (I know Python has a built-in webserver, but there are features it doesn't have. You may be able to write it in Python, but will all those features that Apache has be fast?).

    The bottom line is:
    - Define the application you want to build.
    - Define your requirements (responsiveness, robustness [security, reliability, etc.], extensibility, deadlines).
    - Do a little research with a few languages (just experimentation). Write prototype interfaces in the language, do a little benchmarking, just play with it.
    - Make a decision on a language based on what you've found and what's required.

    As more high level languages appear (functional languages look very promising), see what those languages have over what's already out there. If it has an applicability to what you're doing, use it.

    I'm tired of seeing everyone beat a dead horse. Yes, I know the two arguments:
    - X is faster
    - Y is just as fast as X, but can do it in less lines of code.

    X & Y are different, there's no ignoring it. There's more dimensions to languages than speed and time to market, don't ignore them.
  • by p3d0 ( 42270 ) on Tuesday July 18, 2006 @09:45AM (#15736065)
    Well, generally you'll have faster code if you code it in assembly.
    No, generally you'll have slower code. In a few specific, well-chosen places, you may get faster code. If you had unlimited time, patience, and performance tuning expertise, then you could beat the compiler on a large application, but how realistic is that?

    Coding large apps in assembly is usually way beyond the point of diminishing returns in terms of performance.

  • by mrsbrisby ( 60242 ) on Tuesday July 18, 2006 @09:56AM (#15736134) Homepage
    The more abstract a language is, the better a compiler can understand what you are doing

    Except it doesn't. Nobody has written a compiler that smart, and I don't care what anyone says: I don't think anyone ever will.

    Learning how to invent and develop algorithms is important. Learning how to translate those algorithms into various languages is important. And knowing how the compiler will translate those algorithms into machine instructions- and how the CPU itself will process those machine instructions, will yield a lot more performance than choice of languages.

    Consider djbfft [cr.yp.to], one of the fastest FFT implementations: it outruns many FFT implementations in Java, Haskell, Lisp, or assembly, and yet it's written in C.

    Don't misunderstand me: I'm not saying C is fast, or that C is good; I'm saying djbfft is good. Reordering the instructions in the C code will lower the efficiency, even if the code is otherwise equivalent.

    That said, I agree with almost everything else in your post.

  • Re:Old debate (Score:4, Insightful)

    by jacksonj04 ( 800021 ) <nick@nickjackson.me> on Tuesday July 18, 2006 @09:58AM (#15736147) Homepage
    Low level says what you want the system to do. High level says what you want the language (Via compiler, interpreter etc) to make the system do.
  • Don't be so sure (Score:4, Insightful)

    by overshoot ( 39700 ) on Tuesday July 18, 2006 @10:01AM (#15736170)
    Well, generally you'll have faster code if you code it in assembly.
    I wouldn't even grant that in the general case.

    Amazingly far back (try the 80s) a professor friend of mine had a marvelous example of compiler-generated code where the compiler had done such an amazing job of optimizing register use that you had to go through more than 20 pages of assembler output with colored markers to trace a register from where it was loaded to where it was used.

    No way I would ever have the huevos to code that way in assembler. On a RISC machine or (Heaven help us) the Itanic it gets lots worse.

  • by Erich ( 151 ) on Tuesday July 18, 2006 @10:04AM (#15736195) Homepage Journal
    That low-level stuff is important if you have code that needs to run fast. Need to multiply a number by a constant? You can use shifts and adds instead. Does the same thing, but takes the processor less time.

    INCORRECT. Shifts and adds are sometimes faster for certain constants: a power of two, maybe a power of two plus one. But for any arbitrary constant, this is false on most processors. Multipliers are much faster than a stream of many shifts and adds. Furthermore, the compiler should hold the knowledge of when a shift-and-add is better-performing than a multiply for what constant values. And, if you're not using MyPrettySchoolProjectCC, it probably does.

    Now what your compiler *really* hopefully knows about is how to make division by a constant into a multiply. That can really save time. Division is an iterative process and is very hard to make fast. Multiplies are highly parallel; you can do large multiplies fully pipelined and with pretty low latency. And you can typically turn a 32 bit / 32 bit divide into a 32x32->64 multiply with the reciprocal. Since you can determine the reciprocal at compile time this is probably a win.

    Maybe you just went to a school where they didn't show you how multiplies are actually implemented on modern hardware. Shift registers with accumulators they aren't. This is also potentially a reason why the professor will tell you that you can't outsmart the compiler. The typical college student can't, because he or she doesn't understand enough about how things really work. But any engineer with a decent amount of experience -- or most grad students -- can outsmart a compiler easily.
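
    To make the division-by-constant point above concrete, here's a hand-done C sketch of what a compiler typically emits for an unsigned divide by 7. The magic constant 0x24924925 and the add/shift fixup are the classic published sequence for this divisor; a real compiler derives such constants itself, so treat this as an illustration, not something you'd write by hand in practice.

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* What compilers do for "n / 7": multiply by a precomputed magic
     * reciprocal (0x24924925), take the high 32 bits, then fix up with
     * an add and two shifts, because the reciprocal for 7 doesn't fit
     * the simple multiply-and-shift form. */
    static uint32_t div7(uint32_t n)
    {
        uint32_t t = (uint32_t)(((uint64_t)n * 0x24924925u) >> 32); /* mulhi */
        return (t + ((n - t) >> 1)) >> 2;
    }

    int main(void)
    {
        /* Exhaustively check a range plus the extreme value. */
        for (uint32_t n = 0; n < 1000000u; n++)
            assert(div7(n) == n / 7);
        assert(div7(UINT32_MAX) == UINT32_MAX / 7);
        puts("multiply-by-reciprocal matches n / 7");
        return 0;
    }
    ```

    The win is exactly the one described above: the divide is an iterative, high-latency operation, while the multiply is fully pipelined, and the reciprocal is known at compile time.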

  • Initially (Score:3, Insightful)

    by Vexorian ( 959249 ) on Tuesday July 18, 2006 @10:20AM (#15736296)
    The article later points out that the native version was running slower because the optimization options weren't being used correctly; once that was fixed, the native version ran 15% faster than the managed version.
  • by alispguru ( 72689 ) <bob@bane.me@com> on Tuesday July 18, 2006 @10:29AM (#15736361) Journal
    Existing high-level languages, such as LISP, provided too much abstraction for implementing an operating system

    Huh? I would argue that commercially successful (as in boxes sold to Fortune 500 companies and used in production) operating systems have been written in three languages:

    * Assembly

    * C

    * Lisp [andromeda.com]

    Are there any commercially successful OSs written in C++ yet?

    (revealing my ignorance and posting flamebait, all in one)

  • by Terje Mathisen ( 128806 ) on Tuesday July 18, 2006 @10:34AM (#15736399)
    I've probably written more assembly than most slashdot readers, and most of what you say is true:

    It used to be the case that I could always increase the speed of some random C/Fortran/Pascal code by rewriting it in asm; part of that speedup came from realizing better ways to map the problem at hand to the actual cpu hardware available.

    However, I also discovered that much of the time it was possible to take the experience gained from the asm code, and use that to rewrite the original C code in such a way as to help the compiler generate near-optimal code. I.e. if I can get within 10-25% of 'speed_of_light' using portable C, I'll do so nearly every time.

    There are some important situations where asm still wins, and that is when you have cpu hardware/opcodes available that the compiler cannot easily take advantage of. I.e. back in the days of the PentiumMMX 300 MHz cpu it became possible to do full MPEG2/DVD decoding in sw, but only by writing an awful lot of hand-optimized MMX code. Zoran SoftDVD was the first on the market, I was asked to help with some optimizations, but Mike Schmid (spelling?) had really done 99+% of the job.

    Another important application for fast code is crypto: if you want to transparently encrypt anything stored on your hard drive and/or going over a network wire, then you want the encryption/decryption process to be fast enough that you really don't notice any slowdown. This was one of the reasons for specifying a 200 MHz PentiumPro as the target machine for the Advanced Encryption Standard: if you could handle 100 Mbit Ethernet full duplex (i.e. 10 MB/s in both directions) on a 1996-model cpu, then you could easily do the same on any modern system.

    When we (I and 3 other guys) rewrote one of the AES contenders (DFC, not the winner!) in pure asm, we managed to speed it up by a factor of 3, which moved it from being one of the 3-4 slowest to one of the fastest algorithms among the 15 alternatives.

    Today, with fp SIMD instructions and a reasonably orthogonal/complete instruction set (i.e. SSE3 on x86), it is relatively easy to write code in such a way that an autovectorizer can do a good job, but for more complicated code things quickly become much harder.

    Terje
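
    As a hedged illustration of the autovectorization point above: the sketch below has the loop shape current vectorizers handle best — unit stride, no aliasing (promised via C99 `restrict`), no data-dependent branches. With something like gcc's `-O3 -msse3` (flags indicative, not gospel) a loop like this typically compiles to packed SSE arithmetic; more complicated code quickly stops fitting this mold.

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Unit-stride, alias-free loop: the shape an autovectorizer likes.
     * The restrict qualifiers tell the compiler x and y never overlap,
     * so it is free to load, multiply, and store several elements at
     * a time with SIMD instructions. */
    void saxpy(size_t n, float a, const float *restrict x, float *restrict y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        float x[4] = {1, 2, 3, 4};
        float y[4] = {10, 20, 30, 40};
        saxpy(4, 2.0f, x, y);
        assert(y[0] == 12.0f && y[1] == 24.0f && y[2] == 36.0f && y[3] == 48.0f);
        return 0;
    }
    ```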
  • Re:Old debate (Score:3, Insightful)

    by exp(pi*sqrt(163)) ( 613870 ) on Tuesday July 18, 2006 @10:35AM (#15736403) Journal
    Haskell does OK. But compare to Clean, another pure lazy functional language. Clean blows away Haskell most of the time and competes favourably with C, sometimes beating it.
  • by master_p ( 608214 ) on Tuesday July 18, 2006 @10:46AM (#15736479)

    When Fortran was made, nobody thought that CPUs 30 years in the future would have vector processing instructions. In fact, as Wikipedia says [wikipedia.org], vector semantics arrived only in Fortran 90.

    The only advantage of current Fortran over C is that the vector processing unit of modern CPUs is better utilised, thanks to Fortran semantics. But, in order to be fair and square, the same semantics could be applied to C, and then C would be just as fast as Fortran.

    The fact that C does not have vector semantics reflects the domain C is used: most apps written in C do not need vector processing. In case such processing is needed, Fortran can easily interoperate with C: just write your time-critical vector processing modules in Fortran.

    As for higher-level-than-C languages being faster than C, it is purely a myth. Code that operates on hardware primitives (e.g. ints or doubles) has exactly the same speed in C, Java and other languages... but higher-level languages have semantics that can hurt performance as much as they can help it. All the checks VMs do add overhead that C does not have; the little VM routines running here and there all add up to slower performance, as does the fact that some languages are overengineered or invite sloppy programming (for example, creating new objects on every call instead of using static members).
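
    For what it's worth, C99's `restrict` qualifier is one existing way to apply the Fortran-style no-aliasing semantics to C. The hypothetical sketch below shows why the promise matters: without it, the compiler must assume pointers can overlap and keep the loop strictly serial, because with overlapping pointers a serial loop and a vectorized (load-everything-first) loop give different answers.

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* No restrict here: the compiler must assume dst and src may
     * overlap, so it has to execute the loads and stores in strict
     * order rather than several elements at a time.  Declaring the
     * parameters "int *restrict dst, const int *restrict src" would
     * make the Fortran-style promise that they never overlap. */
    static void shift_add(int *dst, const int *src, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = src[i] + 1;
    }

    int main(void)
    {
        int a[5] = {0, 0, 0, 0, 0};
        shift_add(a + 1, a, 4);  /* deliberately overlapping call */
        /* Serial semantics feed each store into the next load, so a
         * becomes {0, 1, 2, 3, 4}; a vectorized load-all-first version
         * would have produced {0, 1, 1, 1, 1}. */
        for (int i = 0; i < 5; i++)
            assert(a[i] == i);
        puts("serial aliasing semantics observed");
        return 0;
    }
    ```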

  • Re:Old debate (Score:2, Insightful)

    by MetaKey ( 896166 ) on Tuesday July 18, 2006 @11:38AM (#15736896)
    To fill out your item [2], the name of the company was Lattice. MS bought the Lattice compiler and renamed it MS C.

    It was an early example of the MS method of software development: buy out someone who has a viable product and do a much better job of marketing that product.

    I maintain that MS has never been much of a software development company but, rather, a software marketing company. Certainly, the vast majority of their "innovations" have been in marketing. MS tends to incrementally improve on other developers software while being very innovative in their marketing of that software.

    Lattice C was an early example. Excel is a mid-life example. A more recent example is the Groove Networks collaboration tool. MS recently bought them and will include Groove in the next version of Office. They pretty much had to do this, as Office is pretty stale. Who really needs a newer version of Word, for example? And OpenOffice is coming along and is free. The only way to improve the Office product enough to warrant an upgrade was to add serious collaboration capabilities. And, this being MS we're talking about, the only way to do that was to go out and buy serious collaboration capabilities. Now they'll integrate it into Office and market the bejeezus out of it.

    I rest my case...

  • Re:High Level (Score:1, Insightful)

    by Anonymous Coward on Tuesday July 18, 2006 @11:54AM (#15737047)
    If C is low level, then so is Java. They have almost identical syntax

    No. It's the semantics that matter, not the syntax.
  • by Anonymous Coward on Tuesday July 18, 2006 @12:04PM (#15737148)
    "In the real world we need to be able to get a solution in the minimum amount of time. VMs always take more time."

    I'd argue that in the real world (or at least the business world) we need the solution to be developed in the shortest amount of time, with the most security. While a VM-based language is not guaranteed to provide quicker development or better security, in most cases it probably will.
  • Re:Old debate (Score:2, Insightful)

    by letxa2000 ( 215841 ) on Tuesday July 18, 2006 @12:17PM (#15737285)
    Haven't people generally considered C to be kind of a cross platform assembler? That certainly seems to be the attitude of the Scheme crowd...


    Anyone that considers C to be a "cross-platform assembler" probably has never worked in assembler, and almost definitely hasn't done so on more than one platform.

    'C' is only "low level" to those that don't get any closer to the hardware than, say, Visual Basic. Anyone that has programmed in assembly language will assure you that 'C' is quite high level. I'd be willing to accept "mid-level", but in reality, once you've worked at the assembly level, you realize there's very little difference between 'C' and Visual Basic: both are essentially high-level languages. 'C' just seems more intimidating than Visual Basic to the Visual Basic programmer, which is probably why they call it mid-level.

    When you write a single line in 'C' and realize that it can correspond to hundreds of assembly language instructions, you realize that 'C' is very much a high-level language. When you try to do floating-point math on an 8-bit processor with no floating-point instructions, you realize that 'C' is very much a high-level language. When you try to add three numbers and multiply the result by a fourth, coming from 'C', you realize that (1 + 2 + 3) * 4 is a heck of a lot more complicated than you imagined.

    The main difference between VB and 'C' is that VB gives you more self-contained packages to let you interact with today's GUI's. 'C' gave you printf which was fine for writing to a terminal. VB gives you all kinds of controls to let you do pretty GUI stuff. The concept is exactly the same, and both are high level.

    I say all of this having programmed in assembly language, then Basic, then QuickBasic, then 'C', then VisualBasic, and now almost exclusively 'C' and assembly language in truly embedded systems (embedded != Windows or Linux in a small form factor).

  • Re:Old debate (Score:3, Insightful)

    by Julian Morrison ( 5575 ) on Tuesday July 18, 2006 @02:43PM (#15738605)
    Nah, that's nonsense. Python has string objects, Scheme has continuations. Ruby's still slower.

    First-class reentrant continuations and dynamic typing (another major efficiency hog) probably constrain you to, in the best case, the same box as compiled Scheme - about the same as Java.
  • Ugly? Idiomatic! (Score:2, Insightful)

    by Paolone ( 939023 ) on Tuesday July 18, 2006 @05:08PM (#15739620)
    Perl is not ugly, just really, really idiomatic. As with all idiomatic languages, you can't grok what something means if you haven't been exposed to it.
    It's just a matter of "if you can't stand the line noise, get out of the code-kitchen!" :)
    Even though I can easily understand Perl code, what I really can't stand is C pointer arithmetic when it goes too far...
  • by EmbeddedJanitor ( 597831 ) on Tuesday July 18, 2006 @06:08PM (#15739956)
    I do stuff in embedded space using IAR or GreenHills and gcc. For the most part, the proprietary vendors are losing ground to gcc. The proprietary advantage is shrinking, especially with more modern micros and as gcc improves. For the most part, code that comes out of gcc is no worse than code coming out of IAR or GreenHills. Where the proprietary guys have a real advantage is in better Clib implementations: the Clib and newlib normally used with gcc are huge and bloated in comparison.
  • Re:Old debate (Score:3, Insightful)

    by jgrahn ( 181062 ) on Tuesday July 18, 2006 @06:41PM (#15740112)
    This is the first valid criticism of C++ vs C I've ever seen. Most complaints about C++ are "it's not what I'm used to" from C programmers, but this is just a fundamental design flaw in C++.

    What fundamental design flaw -- that malloc() is less convenient to use in C++? For crying out loud, use new!

    Ok, void pointers are less useful in C++ than in C. In my experience, that has been a non-problem. But then I've never tried to program in C with a C++ compiler -- I have enough problems without creating artificial ones.

  • Re:wasted ink (Score:3, Insightful)

    by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Tuesday July 18, 2006 @07:47PM (#15740408) Journal
    Unfortunately, just because a new generation is growing up doesn't mean we'll want to rewrite absolutely everything. It'd be much better if things were developed rationally as soon as possible -- that reduces the total amount of legacy C/C++ code which will ultimately have to be rewritten later.

    Besides, it's not a new concept, and if this generation of programmers didn't get it, neither will the new generation, because among the very first generation of programmers were people who understood Lisp machines. Of course, if a new generation really does start using mostly Ruby when the current one can't handle Lisp, we'll know it was those darned parentheses. Just as any sufficiently advanced technology is indistinguishable from magic, any sufficiently advanced language is indistinguishable from Lisp.

    It will be funny to see this turned on its head, if there are ever enough, say, Python or Ruby programmers to improve python/ruby compilers/runtimes to where, a couple generations of processors later, it's C that has a lack of optimizations and is actually farther from the hardware. We may actually see a C virtual machine as a necessity!

    More practically, I try to work with languages that suit the task at hand, which is really never C unless I'm dealing with a huge existing C codebase.
  • Re:wasted ink (Score:3, Insightful)

    by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Wednesday July 19, 2006 @02:32AM (#15741548) Journal
    Oh, hell no.

    Java feels way slower than anything else. My college courses were mostly in Eclipse. It runs fast enough, but it takes forever to start, which is true of many, many Java apps.

    Which means that when these same programmers end up learning C/C++, they'll think Java is slow because it's "interpreted". I guess there's at least the hope that they'll wind up using C#, and thinking Java is slow because it sucks. Which is good enough, because Java sucks for other reasons, even though it isn't really slow.

    But really, with Generics, Java has basically picked up most of the features and syntax of C++, added garbage collection and much more anal-retentive restrictions, and called it a whole new language. The bytecode and virtual machine are really not relevant to the awfulness of the language itself -- you can write a perfectly good language for the JVM -- but the JVM, specifically, has its own drawbacks, in that it's hard to write more libraries for Java, and many of the existing libraries suck in profound ways compared to C/C++ alternatives, or even .NET.

    Frankly, the only good thing about them learning Java is that at least for awhile, their code may be portable, because it's so hard to make OS-specific or arch-specific Java.
  • by moro_666 ( 414422 ) <kulminaator@gmai ... Nom minus author> on Wednesday July 19, 2006 @04:21AM (#15741772) Homepage
    The problem with yum is in the design of the application, not the language.

    Python is fast enough for almost any package management quest, but yum is the worst piece of ... that I have seen on that frontier. Proper indexes and logical stops would make it much faster. There's your chance to write it: choose whatever language you want; the design has to be good.
