
High-level Languages and Speed

Posted by ScuttleMonkey
from the ever-changing-animal dept.
nitsudima writes to tell us Informit's David Chisnall takes a look at the 'myth' of high-level languages versus speed and why it might not be entirely accurate. From the article: "When C was created, it was very fast because it was almost trivial to turn C code into equivalent machine code. But this was only a short-term benefit; in the 30 years since C was created, processors have changed a lot. The task of mapping C code to a modern microprocessor has gradually become increasingly difficult. Since a lot of legacy C code is still around, however, a huge amount of research effort (and money) has been applied to the problem, so we still can get good performance from the language."
This discussion has been archived. No new comments can be posted.

  • Old debate (Score:5, Informative)

    by overshoot (39700) on Tuesday July 18, 2006 @06:21AM (#15735454)
    Twenty years ago we were still in the midst of the "language wars" and this was a hot topic. The argument then, as now, was whether a high-level language could be compiled as efficiently as a low-level language like C [1].

    Well, we ran our own tests. We took a sizable chunk of supposedly well-written time-critical code that the gang had produced in what was later to become Microsoft C [2] and rewrote the same modules in Logitech Modula-2. The upshot was that the M2 code was measurably faster, smaller, and on examination better optimized. Apparently the C compiler was handicapped by essentially having to figure out what the programmer meant with a long string of low-level expressions.

    Extrapolations to today are left to the reader.

    [1] I used to comment that C is not a high-level language, which would induce elevated blood pressure in C programmers. After working them up, I'd bet beer money on it -- and then trot out K&R, which contains the exact quote, "C is not a high-level language."
    [2] MS originally relabeled another company's C compiler under license (I forget their name; they were an early object lesson.)

  • Re:Slashdot (Score:2, Informative)

    by jamie (78724) <jamie@slashdot.org> on Tuesday July 18, 2006 @06:27AM (#15735467) Journal
    We had to make a change to our 'comments' table schema that would have locked up the site if we had allowed full access. At over 15M rows, this takes some time. Sorry about that.
  • Re:Old debate (Score:1, Informative)

    by Anonymous Coward on Tuesday July 18, 2006 @06:38AM (#15735504)
    "[2] MS originally relabeled another company's C compiler under license (I forget their name; they were an early object lesson.)"

    Lattice

    From a gray fox....
  • Re:It's very simple (Score:5, Informative)

    by rbarreira (836272) on Tuesday July 18, 2006 @06:41AM (#15735514) Homepage
    I'm no assembly guru, but I don't think I would have done as well writing assembly by hand

    I don't believe this as much as the people who I see repeating that sentence all the time...

    Not many years ago (with gcc), I got an 80% speed improvement just by rewriting a medium-sized function in assembly. Granted, it was a function which was, in itself, half C code and half inline assembly, which might hinder gcc a bit. But it's also important to note that if the function had been written in pure C, the compiler wouldn't have generated better code anyway, since it wouldn't use MMX opcodes... Last I checked, modern compilers only generate MMX code from pure C when it's quite obvious that it can be used, such as in short loops doing simple arithmetic operations.

    An expert assembly programmer in a CPU which he knows well can still do much better than a compiler.
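    The parent's point about compilers only vectorizing "obvious" loops can be illustrated with a sketch like the following: a simple element-wise loop with no loop-carried dependency is exactly the pattern an optimizer can turn into SIMD code, while anything messier usually stays scalar. The function name and compiler flags mentioned are illustrative, not taken from the parent's actual code.

```c
/* Sketch: a loop with no loop-carried dependency, the "obvious" pattern
 * an optimizer can turn into SIMD (MMX/SSE) code. Whether it actually
 * does depends on the compiler and flags; nothing here is guaranteed. */
#include <stddef.h>

/* y[i] = a*x[i] + y[i] over n elements; "saxpy" is a conventional name. */
void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

    Compiling with something like gcc -O2 -ftree-vectorize (or -O3) and inspecting the generated assembly is the usual way to check whether SIMD instructions were actually emitted; there is no guarantee they will be.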
  • Re:Old debate (Score:3, Informative)

    by StrawberryFrog (67065) on Tuesday July 18, 2006 @06:49AM (#15735541) Homepage Journal
    essentially ALL of the strongly typed and structured languages have pretty much died out.

    Uh, Java and C# are strongly typed and structured languages.
  • by Toreo asesino (951231) on Tuesday July 18, 2006 @06:52AM (#15735557) Journal
    Of course, lower-level languages can be faster, but I'd suggest that writing code at a very low-level is rarely worth the extra effort.

    Take Quake II [slashdot.org] for instance; as quoted from the article, 'the managed version initially ran faster than the native version', which suggests higher-level languages are certainly capable of competing with their lower-level siblings.

    Also, take into account the added developer time gained from factors like memory-management being, well, managed, and ever-falling processor & memory prices, and the logical conclusion is usually "write at a higher-level".

    There are of course more considerations than these when deciding on a development platform, but essentially, I think there'd have to be very good reasons for writing green-field projects too close to the machine.

  • by jaaron (551839) on Tuesday July 18, 2006 @06:56AM (#15735566) Homepage
    Here's a print view [informit.com] of the article so that you don't have to keep moving through the pages. Despite that annoyance, it was a good article. I wish there had been more concrete examples though.
  • The criteria for a high-level language are: 1) you aren't allowed to do direct memory or register manipulation (i.e., you can't run off the end of an array into other areas), and 2) you are interpreted. Either of these can qualify a language as high-level. C has direct memory and register manipulation and it is not interpreted, therefore it cannot be a high-level language.
  • Re:Bah (Score:3, Informative)

    by gbjbaanb (229885) on Tuesday July 18, 2006 @07:02AM (#15735581)
    It seemed to me the article was criticising C and trying to compare Java favourably. I.e., C is a low-level language that cannot be optimised; Java is a high-level language that can. Roughly.

    It didn't say much at all otherwise, but it did have a nice collection of adverts.

    Optimisation:
    You don't have to hack around; some compilers do it for you. The new MS compiler does a 'whole program optimisation' where it will link things together from separate object modules. It still cannot handle libraries, but then, that's just an issue that applies to all programs that are split into component parts. (Except, as the article implies, Java that uses the bytecode in class libraries... except when compiled to native code, as the first page of the article mentioned as a way to boost speed. Can't have it both ways :-) )
  • Re:Old debate (Score:3, Informative)

    by CapnOats.com (805246) <(moc.staonpac) (ta) (ekim)> on Tuesday July 18, 2006 @07:07AM (#15735599) Homepage
    ...trot out K&R, which contains the exact quote, "C is not a high-level language."

    Actually the quote from my copy of K&R, on my desk beside me is,

    C is not a "very high level" language...

    emphasis is mine.
  • by billcopc (196330) <vrillco@yahoo.com> on Tuesday July 18, 2006 @07:17AM (#15735634) Homepage
    The main reason C is "faster" than high-level languages is that C doesn't cover bad programmers' butts with elaborate type checking, ref counting and garbage collection. Take a properly designed C app with graceful error handling and secure inputs, and you will take a performance hit. Let's face it: most of the code we write in C involves error handling and idiot-proofing, the boring, repetitive slabs of code we all hate writing, and exactly the things most high-level languages have built-in functionality for.

    I see no reason why a high-level application couldn't be compiled as skillfully as a feature-equivalent low-level application. It's just a matter of breaking down the code into manageable building blocks.
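    As a rough sketch of those "boring, repetitive slabs" of error handling, here is the classic goto-cleanup pattern in C: every fallible call needs an explicit check and a path to release resources, where a garbage-collected language with exceptions makes most of this implicit. The file names and buffer size below are made up for illustration.

```c
/* Sketch of C's manual error handling: the goto-cleanup pattern.
 * Every fallible call gets an explicit check; all resources are
 * released on one exit path. File names here are illustrative. */
#include <stdio.h>
#include <stdlib.h>

/* Returns 0 on success, -1 on any failure. */
int copy_file(const char *src, const char *dst)
{
    int rc = -1;
    FILE *in = NULL, *out = NULL;
    char *buf = NULL;

    in = fopen(src, "rb");
    if (!in) goto cleanup;                          /* check #1 */
    out = fopen(dst, "wb");
    if (!out) goto cleanup;                         /* check #2 */
    buf = malloc(4096);
    if (!buf) goto cleanup;                         /* check #3 */

    size_t n;
    while ((n = fread(buf, 1, 4096, in)) > 0)
        if (fwrite(buf, 1, n, out) != n) goto cleanup;  /* check #4 */

    rc = 0;                                         /* success */
cleanup:                                            /* single exit point */
    free(buf);
    if (out) fclose(out);
    if (in) fclose(in);
    return rc;
}
```

    Roughly half the lines are checks and cleanup rather than the actual copying, which is the parent's point.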

  • Re:Old debate (Score:5, Informative)

    by shreevatsa (845645) <shreevatsa,slashdot&gmail,com> on Tuesday July 18, 2006 @07:19AM (#15735642)
    For what it's worth, at The Computer Language Shootout [debian.org], OCaml does pretty well [debian.org]. Of course, C is still faster [debian.org] for most things (but note that the really high factors (29 and 281) are in OCaml's favour!), but OCaml is pretty fast compared to Java [debian.org] or Perl [debian.org]. Haskell does pretty well too. Functional programming, anyone?
    Of course, these benchmarks measure only speed, are just for fun, and are "flawed [debian.org]", but they are still interesting to play with. If you haven't seen the site before, enjoy fiddling with things to try and get your favourite language on top :)
  • Re:It goes both ways (Score:5, Informative)

    by pesc (147035) on Tuesday July 18, 2006 @07:21AM (#15735650)
    20 years ago there was nothing strange about having an actual quicksort machine instruction (VAXen had it).

    While the VAX had some complex instructions (such as double-linked queue handling), it did not have a quicksort instruction.

    Here [hp.com] is the instruction set manual.
  • by mrchaotica (681592) * on Tuesday July 18, 2006 @07:36AM (#15735723)
    The proof is in the pudding as they say

    No, what they say is "the proof of the pudding is in the eating." (Just pointing it out because most people get it wrong.)

  • by StormReaver (59959) on Tuesday July 18, 2006 @07:40AM (#15735739)
    "If programmers could write code ten times faster, that executes a tenth as quickly, that would actually be a beneficial trade-off for many (most?) organisations."

    This sounds perfectly reasonable in theory. In practice, however, it's not. Users want speedy development AND speedy execution. I developed a Java image management program for crime scene photos, and the Sheriff Patrol's commander told me flat out: "We'll never use this. It's too slow."

    I rewrote the program using C++ and Qt, and gained a massive speed improvement. The Sheriff Patrol and detective units have been using it ever since, and they love it. I had been a Java booster for upwards of eight years until then. That was (roughly) three years ago, and I haven't written a line of Java since. I have, however, run my historic Java programs in SUN's most recent JVM. The newer hardware runs it faster, but Qt/C++ still smokes Java. Qt gives me speedy development, and C++ gives me fast execution. It's the best of both worlds.
  • Re:Old debate (Score:3, Informative)

    by cerberusss (660701) on Tuesday July 18, 2006 @07:46AM (#15735760) Homepage Journal
    It also says in the introduction (next page):
    C is a relatively "low level" language.

  • by Anonymous Coward on Tuesday July 18, 2006 @07:59AM (#15735831)
    Badly researched to the point of being irresponsible.

    1. Unsupported implication that 'C' was created in response to PDP-11 assembly language.

    2. Using vector attached processors as evidence of HLL obsolescence. First, the Altivec/MMC unit is not the entire processor; it doesn't even do most of the work, it's an *attached* unit. There is still a main MPU to do the spaghetti code. Second, they are easily used by HLLs via optimized LIBRARIES; that's the beauty and breakthrough of 'C' that has become a model for HLLs.

    3. JIT examples fail to include the runtime of the JIT compiler itself. The program may speed up by 10%, but running the JIT before the program will blow that time out of the water.

    4. Article totally ignores the "RISC revolution" of the '80s, where processors were actually designed based on HLLs, designed specifically to speed them up, acting in concert with the compilers & linkers. This concept is now old hat. Maybe the author wasn't born yet.

    Need I continue??
  • by embracethenerdwithin (989333) on Tuesday July 18, 2006 @08:07AM (#15735868)
    I thought it might be helpful for a current student to let you know what it is we learn today at my college. I'm a senior Software Engineering major, not a comp sci major. Comp Sci is another department and has a totally different focus. They focus on super efficient algorithms; we focus on developing large software projects.

    My software engineering program has been very Java intensive. My software engineering class, object oriented class, and software testing class were all java based. We dabbled in C# a bit as well.

    However, I also had an assembly class, a programming languages class where we learned Perl and Scheme (this language sucks), and about five algorithms classes in C++. I also had an embedded systems class in both C and assembly (learned assembly MCU code, then did C).

    I feel like this is all pretty well rounded; I've learned a bunch of languages and am not really specialized in one. I'd say I am best at Java right now, but I can also write C++ code just fine.

    I've never been told a computer has any kind of crazy limitless performance. In embedded systems, I learned about performance. Making a little PIC microcontroller calculate arctan was fun (it took literally 30 seconds without a smart solution). I also learned that there is a trade-off between several things, such as performance, development time, readability, and portability.

    We are taught to see languages as tools: you look at your problem and pull the tool out of the toolbox that you think fits the problem best. You have to weigh what's important for the project and choose based on that.

    The final thing I'd like to point out is that one huge issue with software today is that it is bug-ridden. How easy something is to test makes a big difference, in my opinion. Assembly and C will pretty much always be harder to test than languages like Java and C#.

    I don't think the universities are the problem, at least not in my experience.
  • Re:Old debate (Score:3, Informative)

    by masklinn (823351) <slashdot,org&masklinn,net> on Tuesday July 18, 2006 @08:11AM (#15735886)
    No they're not; they're statically typed, but many languages exist with much stronger type systems (Ada, Modula-2, Haskell).
  • by LizardKing (5245) on Tuesday July 18, 2006 @08:24AM (#15735939)

    One interesting feature the compiler/IDE system I was using at the time (TopSpeed's) had was that all their language compilers (M2, C, C++, etc.) compiled into an intermediate binary form, and their final compiler did very heavy optimizations on that "byte code".

    That's no different to most compilers. GCC for instance parses the "frontend" language (C, C++, etc) into an intermediate language and performs most optimisations on that intermediate language before translating it to assembler instructions. Optimisation can be performed in the high level language, and even the assembler, but most is performed at the intermediate level as this way all frontends can potentially benefit.

  • by pfdietz (33112) on Tuesday July 18, 2006 @08:36AM (#15736008)
    The more recent versions of GCC also perform transformations on a tree-based intermediate form, before converting that into the older RTL form. There are certain high level optimizations that just work better on abstract syntax trees.
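    A toy example of the kind of high-level optimization that works better on abstract syntax trees: constant folding. This is in no way GCC's actual representation, just a sketch of why a rewrite like 2 + 3 * 4 collapsing to 14 is natural once the program is a tree.

```c
/* Toy constant folder over an expression tree. Not GCC's representation;
 * just a sketch of why such rewrites are natural on a tree. */
#include <stdlib.h>

typedef struct Expr {
    enum { CONST, ADD, MUL } kind;
    int value;                 /* meaningful when kind == CONST */
    struct Expr *lhs, *rhs;    /* meaningful for ADD and MUL */
} Expr;

Expr *leaf(int v)
{
    Expr *e = calloc(1, sizeof *e);
    e->kind = CONST;
    e->value = v;
    return e;
}

Expr *node(int kind, Expr *l, Expr *r)
{
    Expr *e = calloc(1, sizeof *e);
    e->kind = kind;
    e->lhs = l;
    e->rhs = r;
    return e;
}

/* Fold children first, then collapse this node if both sides are constant. */
Expr *fold(Expr *e)
{
    if (e->kind == CONST)
        return e;
    e->lhs = fold(e->lhs);
    e->rhs = fold(e->rhs);
    if (e->lhs->kind == CONST && e->rhs->kind == CONST) {
        int v = (e->kind == ADD) ? e->lhs->value + e->rhs->value
                                 : e->lhs->value * e->rhs->value;
        e->kind = CONST;
        e->value = v;          /* children are leaked; fine for a sketch */
    }
    return e;
}
```

    On a flat instruction stream the same rewrite needs data-flow bookkeeping; on a tree it is one recursive walk.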
  • Re:Old debate (Score:3, Informative)

    by Bastian (66383) on Tuesday July 18, 2006 @08:43AM (#15736052)
    I'm pretty sure that over my C programming career I've managed (sometimes by accident, sometimes by misguided attempts at creating a "clever hack") to cram data of every type into a bin that was reserved for every other type, without the use of a cast. C is statically typed, but I wouldn't say it's strongly typed at all.
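    For anyone who hasn't tried it, here is a small sketch of that point: in C, data of one type lands in storage "reserved" for another with no cast in sight. The conversions below are well-defined, but nothing stops them from silently losing data.

```c
/* Sketch: in C, data of one type lands in storage "reserved" for another
 * with no cast anywhere. Well-defined here, but silently lossy. */

unsigned char narrow(int v)
{
    unsigned char c = v;   /* implicit conversion: 300 wraps to 300 % 256 = 44 */
    return c;
}

int truncate_float(float f)
{
    int i = f;             /* implicit conversion: 3.9f truncates to 3 */
    return i;
}

/* A union gives the same bytes two "bins" at once, still without a cast. */
union Pun { unsigned int bits; float f; };

unsigned int float_bits(float f)
{
    union Pun p;
    p.f = f;               /* write the bytes as a float ...              */
    return p.bits;         /* ... read them back as an integer
                              (resulting value is implementation-defined) */
}
```

    A strongly typed language would reject all three outright, or at least demand an explicit conversion.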
  • by LWATCDR (28044) on Tuesday July 18, 2006 @08:43AM (#15736055) Homepage Journal
    C became popular because of Unix. Since you could get the source code for Unix, most big universities used Unix in their OS courses. And since it was written in C, you were going to learn C if you took Computer Science. Textbooks started to assume you knew C. Magazines started to assume you knew C. People wrote free small C compilers, and then came GCC, so now you could have a good free C compiler for just about any system. But before GCC all the buzz was about Smalltalk. Smalltalk was the future. OOP was going to replace structured programming. The problem was that very few people had a computer that could run Smalltalk. So C++ was born.
    A final blow to Modula-2 was simply that Borland didn't create a Modula-2 compiler. For many years, when you said Pascal you really meant Turbo or Borland Pascal. Borland was the Pascal company, and they added objects to Pascal and eventually created Delphi.
    I am sure TopSpeed has closed up shop. There just isn't much room for compiler makers anymore. You have the free software at the bottom end and the Microsoft monster at the top. Only a few niche players are left. Ada seems to be a place where a good compiler company can still make a few dollars.
  • Re:It goes both ways (Score:3, Informative)

    by Trailer Trash (60756) on Tuesday July 18, 2006 @08:46AM (#15736070) Homepage
    For instance, 20 years ago there was nothing strange about having an actual quicksort machine instruction (VAXen had it). One expectation was still, at the time, that a lot of code would be generated directly by humans, so instructions and instruction designs catering to that use-case were developed. But by around then, most code was machine generated by a compiler, and since the compiler had little high-level semantics to work with, the high-level instructions - and most low-level one's too - went unused; this was one impetus for the development of RISC machines, by the way.

    As someone else mentioned, there is no quicksort instruction. That's far too complex and involves looping and conditional branching. Probably the most complex of VAX instructions was the polyf/polyg instruction, which would compute a polynomial to 7 iterations, thus allowing one instruction to compute a trigonometric function. There were also instructions for copying strings up to 64k (and those instructions were interruptable), and instructions to format numbers a la COBOL PICs. These instructions were generally emulated on the smaller MicroVAXen and such, but were in microcode on the larger ones. Note that even x86 has a string copy instruction.

    Now, here's where you're really wrong. Those instructions weren't put in there as a convenience to humans writing in assembly. Instead, they were put in there as a convenience to compiler writers, who could make use of the high-level assembly instructions to ease their code generation. The COBOL compiler was almost unnecessary; they had numeric data types to cover it. It was nuts.

    They also had instructions to deal with octawords (128 bit integers), and of course the vax allowed accesses of any size integer on any boundary, which could mean a couple of fetches for a particular piece of data. There are assembly instructions to force alignment.

    The only non-magic of which I'm aware is that it was "required" that between writing a piece of code into memory and executing it there should be an intervening rei instruction, apparently to clear all caching. I put the word "required" in quotes for a reason. A professor at a college that I attended wrote a very popular Scheme compiler. I mentioned one day to a grad-student friend this requirement, and somehow we ended up getting to the prof. He didn't have that in his compiler and it worked just fine writing to a piece of memory then executing it. I showed him the page in the VAX Architecture Handbook (probably around 276 or 278) and we got a good chuckle.

    Anyway, shortly after the VAX came out, people started to seriously think about simplifying the instruction set and putting more burden on the compilers. I still believe the Alpha is probably the king of RISC, ironic given that the VAX is the king of CISC. Most of the lessons that the VAX taught us were in the negative.

  • Re:Old debate (Score:2, Informative)

    by Anonymous Coward on Tuesday July 18, 2006 @08:48AM (#15736081)
    And although you call Cobol legacy, it really isn't. Many financial institutions still run applications written in Cobol since it is too costly and risky to migrate the old code to a new language.
    Errr, well if they're no longer running it, you can't debate whether it's legacy, can you? The code would be gone. Not to be an English nazi or anything, but as a word, legacy just means it was handed down from a predecessor, usually a different generation. I've always assumed this meaning carried over to programming as well.
  • Re:Old debate (Score:3, Informative)

    by Marcos Eliziario (969923) on Tuesday July 18, 2006 @08:54AM (#15736118) Homepage Journal
    Not really. Modern pipelined architectures make hand-written assembler slower than compiler-generated code. A human can't really deal with out-of-order execution.
  • Re:Imaginary history (Score:4, Informative)

    by masklinn (823351) <slashdot,org&masklinn,net> on Tuesday July 18, 2006 @08:56AM (#15736132)

    LISP was not "the archetypal high-level language." The very names CAR and CDR mean "contents of address register" and "contents of decrement register," direct references to hardware registers on the IBM 704.

    You forgot "CONS" which comes from the IBM cons cells (a 36bit machine word on the 704), which is the block holding both a CAR and a CDR.

    The thing is, the names only existed because no one found any better, or any more interesting, name for them (Common Lisp now offers the "first" and "rest" aliases for CAR and CDR... yet quite a lot of people still prefer using CAR and CDR).

    LISP has always been a high level language, because it was started from mathematics (untyped lambda calculus) and only then adapted to computers.

    And the fact that Lisp Machines (trying to get away from the Von Neumann model) were built doesn't mean that Lisp is a low level language, only that IA labs needed power that the Lisp => Von Neumann machines mappings could not give them at that time.

    Lisp is a high-level language, because Lisp abstracts the machine away (no memory management, not giving a fuck about registers or machine words [may I remind you that Lisp was one of the first languages with unbounded integers and automatic promotion from machine to unbounded integers?])

  • Re:Old debate (Score:3, Informative)

    by Megane (129182) on Tuesday July 18, 2006 @08:57AM (#15736139) Homepage

    Uh, K&R is slightly older than Java or C#... there was no such thing as memory management or virtual machines (as we know them today) back then.

    Actually, there were virtual machines [wikipedia.org] back then, just not on micros or minis.

    And as far as this high-level/low-level thing goes, I'd call C a "mid-level" language.

  • Re:Old debate (Score:3, Informative)

    by dzfoo (772245) on Tuesday July 18, 2006 @09:03AM (#15736183)
    >> Uh, K&R is slightly older than Java or C#... there was no such thing as memory management or virtual machines (as we know them today) back then.

    Didn't Infocom implement their "database query system" (which eventually became their famous text-adventure game engine) using a virtual machine they called the Z-machine? As far as I know, that system predated Java and C# by a few decades.

    http://en.wikipedia.org/wiki/Z-machine [wikipedia.org]

            -dZ.
  • Re:Old debate (Score:3, Informative)

    by masklinn (823351) <slashdot,org&masklinn,net> on Tuesday July 18, 2006 @09:11AM (#15736237)

    Yep, C is very weakly typed (some would say that it's untyped, as is ASM), as only the compiler does some sanity checking, and even then it doesn't work too hard at it.

  • Um no. (Score:3, Informative)

    by wonkavader (605434) on Tuesday July 18, 2006 @09:17AM (#15736277)
    No, what they SAY is "The proof is in the pudding" --

    From google:

    Results 1 - 10 of about 326,000 for "the proof is in the pudding". (0.47 seconds)
    Results 1 - 10 of about 118,000 for "the proof of the pudding is in the eating" [definition]. (0.30 seconds)

    They're not right, of course, but then, sadly, you're not either, since what people say has changed. It's changed to something nonsensical, which people quote without understanding, which is annoying, like "I could care less!":

    Results 1 - 10 of about 2,180,000 for "I could care less". (0.28 seconds)
    Results 1 - 10 of about 776,000 for "I couldn't care less". (0.22 seconds)

    But "the proof is in the pudding" kind of rolls off the tongue better... like a pudding which tastes nasty and you are therefore gently, but suavely, spitting out.
  • by alispguru (72689) <bane AT gst DOT com> on Tuesday July 18, 2006 @09:18AM (#15736287) Journal
    Most truly high-level languages, like LISP (which was mentioned directly in TFA), are interpreted, ...

    Programming languages are not "interpreted". A language IMPLEMENTATION may be based on an interpreter. Every major implementation of Common Lisp today has a compiler, and most of them don't even have an interpreter any more - everything, including command-line/evaluator input, is compiled on the fly before being executed.

    ... and the interpreters are almost always written in C. It is impossible for an interpreted language written in C (or even a compiled one that is converted to C) to go faster than C.

    Again, this is a property of implementations, not of languages. The highest-performance Common Lisp implementations have scaffolding written in C and assembly, but they do not use a C compiler when they compile Lisp code. They often use non-C ABI conventions for argument passing and stack handling, to make their style of function calling faster.

    I don't mean to be harsh, but the "Lisp is slow because it's interpreted" meme is about twenty years out of date. It tends to be spread primarily by college professors whose last exposure to Lisp was pre-1980, and it really grates on those of us who know better.
  • by Anonymous Coward on Tuesday July 18, 2006 @09:29AM (#15736362)
    I see this quote everywhere, and just because it's by some semi-famous academic, nobody questions it and everyone takes it for granted. The quote is utter rubbish.

    I agree. Why should we give any weight to the sayings of some random guy? What the hell would he know about computer science? [utexas.edu]

    The quote is rubbish and contains no useful information whatsoever. On the contrary: the conclusion it draws is absolutely false.

    It seems to me that you are a good example of the type of person the OP was complaining about (i.e., not knowing much about computer science). If you read about the history of computer science, you would see that it started as a pure mathematical discipline that just happened to use computing devices because the algorithms were too complex to be solved quickly by hand. The early computer was just a tool that made things easier for mathematicians, much like a telescope for astronomers. Of course, modern computer science focuses much more on algorithms specifically related to computer functions like disk caching, task scheduling, etc. So Dijkstra's comment may not be as relevant today, but at the time he said it, it was pretty accurate.

  • More Myth here (Score:3, Informative)

    by wonkavader (605434) on Tuesday July 18, 2006 @09:36AM (#15736407)
    It's possible to say everything said in this article -- vaguely, as it is said in this article -- and be right, and yet still dance around the reality.

    Take a look yourself on http://shootout.alioth.debian.org/ [debian.org]

    C's faster than Java. It will probably always generally be so, unless you're trying to run C code on a hardware Java box.

    This article says Java, for example, CAN be faster. But it doesn't say "C is almost always faster than Java or Fortran, usually faster than Ada, and C can be mangled (in the form of D from Digital Mars, for instance) to be faster than C usually is. Often, Java is a pig compared to C, BUT THERE ARE TIMES WHEN IT ISN'T. Really. There are times, few and far between, when it's actually, get this, FASTER. It's fun to look for those few times. And if you write programs which do that, that'd be cool. And as processors get wackier and wackier, there will be more and more times when this is true. Meanwhile, if your developers write good code, Java's easier to develop in and debug." Which would be more completely correct.

    Excuse me now. I have to go back to my Perl programming.
  • by smcdow (114828) on Tuesday July 18, 2006 @09:38AM (#15736422) Homepage
    With the trend towards VM's and virtualization, that "hypothetical" computer comes ever closer.

    Yay. With continued displays of attitudes like that, I'm going to leave the industry.

    It is getting increasingly difficult to hire S/W engineers that understand that there is an operating system and also hardware beneath the software they write. I need people NOW that can grok device drivers, understand and use Unix facilities, fiddle with DBs, write decent code in C, C++, Java, and shell, and can also whip together a decent WS interface. Someone who does all of those.

    WhyTF has the S/W industry become so compartmentalized? I can hire a device driver person, but he won't know anything about web services. I can hire a DB person, but she won't know a damn thing about poking values into registers. I can hire a web-services person, but he will have never worked on a Unix platform before. WTF? Really, WTF?

    In short, I can't hire someone who can take ownership of an entire system. It's always, "Well, that's a hardware thing, go ask Foo", "Oh, it looks like the database, need to talk to Bar", "The Web interface is borked, we'll need to bring Baz in", "Hm, it doesn't do this when we run it on Windows" (this one always pisses me off, because they can never explain why, and that's because they know nothing about Unix). How come I can't hire someone who could understand a whole vertical stack (and maintain it, and provide analysis and fixes when something breaks)?

    I do this kind of thing now. If I can do it, it can't be that hard. But everybody thinks they have to specialize. THIS IS WHAT'S WRONG WITH THE INDUSTRY.

  • by Wudbaer (48473) on Tuesday July 18, 2006 @10:21AM (#15736748) Homepage
    Borland didn't create a Modula-2 compiler

    Small nitpick: they did indeed create a Modula-2 compiler - I think it was even called Turbo Modula-2 - at the end of the '80s, for CP/M. I purchased it back then for my C-128 (those were the days *looks at current laptop* - not). However, CP/M had already begun its slide into obsolescence, and Borland's German division needed almost six months to deliver the damn thing. When I finally got it, it was more or less unusable, as the IDE froze or something like that when you tried to compile something. In that respect, it's better to think they never released this abomination.
  • Re:Old debate (Score:3, Informative)

    by fyngyrz (762201) on Tuesday July 18, 2006 @10:30AM (#15736828) Homepage Journal
    C would certainly be about as low as you can get without manipulating individual registers - i.e., without being assembly language.

    Actually, I think Forth is a little lower. The RPN nature of the language makes for a considerably closer mapping from language use to stack use for one thing, and for another, Forth atoms tend to be more primitive and more prefab than what a particular expression in C might produce.

    C remains my favorite for anything that requires speed. It has always seemed to me that when someone who understands what is going on at the machine level writes C code, they can produce quite fast results, as compared to someone who has learned C syntax but doesn't have a sense of what is happening with stacks, LEAs, or how a particular problem may map to float, fixed or integer approaches on top of a particular processor or chip set. C++ approaches appear overrated to me. If I want objects, I make them. If I want a *really* high-level approach, I use Python.

    Basically, give me C or give me Python.
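    The point above about Forth's RPN mapping closely onto stack use can be sketched with a toy evaluator: each word is a primitive stack operation, so "3 4 + 2 *" maps almost one-to-one onto pushes, pops and ALU operations. This is a deliberately minimal illustration (integers, + and * only), not a real Forth.

```c
/* A deliberately tiny RPN evaluator: each "word" is a primitive stack
 * operation, the way Forth words map almost directly onto pushes, pops
 * and ALU ops. Integers and the + and * words only; not a real Forth. */
#include <stdlib.h>
#include <string.h>

int rpn_eval(const char *src)
{
    int stack[64], sp = 0;             /* the data stack */
    char buf[256];
    strncpy(buf, src, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';        /* strtok needs a writable copy */

    for (char *tok = strtok(buf, " "); tok; tok = strtok(NULL, " ")) {
        if (strcmp(tok, "+") == 0) {
            sp--; stack[sp - 1] += stack[sp];   /* pop two, push sum */
        } else if (strcmp(tok, "*") == 0) {
            sp--; stack[sp - 1] *= stack[sp];   /* pop two, push product */
        } else {
            stack[sp++] = atoi(tok);            /* literal: push */
        }
    }
    return stack[sp - 1];              /* result is the top of stack */
}
```

    Each token becomes one or two primitive stack moves, which is why the translation from Forth source to machine operations is so direct.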

  • by Anonymous Coward on Tuesday July 18, 2006 @11:15AM (#15737270)
    I'm not sure that this has to do with a low-level/high-level language debate any more. Consider, for example,
    that C++ offers both very low and very high-level semantics. When properly used, this yields high level
    programs with excellent performance.

    But, so what? Neither C++ today nor any other very widely-used programming language adequately manages the
    real problem, which is concurrency.

    Herb Sutter has written an excellent paper on this topic, called "The Free Lunch is Over". Let's get off this
    hobby horse and on to some real (and interesting) problems!

    Here is Mr. Sutter's article: http://www.gotw.ca/publications/concurrency-ddj.htm [www.gotw.ca]
  • Re:Old debate (Score:5, Informative)

    by The_Wilschon (782534) on Tuesday July 18, 2006 @11:20AM (#15737327) Homepage
    Garbage collection [wikipedia.org], a form of memory management in widespread use today, was invented "around 1959" by John McCarthy as he discovered LISP. This predates K&R [wikipedia.org], first edition in 1978, by quite a bit.
  • by StrawberryFrog (67065) on Tuesday July 18, 2006 @11:42AM (#15737556) Homepage Journal
    The quote is utter rubbish. ... With astronomy you have stars, which aren't man-made ... Computers and computer science are both things that are entirely man-made. There is no natural phenomenon that we call 'computer' and a science that studies this natural phenomenon called "computer science".

    Not. Even. Wrong.

    If astronomy were called "telescope science" you'd also forget that it was about ways of looking at the skies. Computers are more flexible than that - they are used to model and study all kinds of natural phenomena. Algorithms are, strictly speaking, mathematics, which is a feature of the universe and not "man made" if anything ever was. Computers are used to store and manipulate data about all kinds of things, most of which are not about computers. Learning how to do all that is computer science.

  • Re:Old debate (Score:5, Informative)

    by civilizedINTENSITY (45686) on Tuesday July 18, 2006 @11:44AM (#15737573)
    Actually, there was, way before C (let alone Java or C#.)

    "Lisp is very old language, second only to Fortran in the family tree of high level languages." A Little history [bath.ac.uk]

    Whereas C (rather like Fortran) wanted to stay "close to the metal", Lisp wanted to transcend the metal to get closer to the math [stanford.edu]. Hence, innate elegance :-)
    Towards the end of the initial period, it became clear that this combination of ideas made an elegant mathematical system as well as a practical programming language. Then mathematical neatness became a goal and led to pruning some features from the core of the language. This was partly motivated by esthetic reasons and partly by the belief that it would be easier to devise techniques for proving programs correct if the semantics were compact and without exceptions. The results of (Cartwright 1976) and (Cartwright and McCarthy 1978), which show that LISP programs can be interpreted as sentences and schemata of first order logic, provide new confirmation of the original intuition that logical neatness would pay off.
    It is true that Lisp ran inside an interpreter rather than a VM. Still, garbage collection is *old*, and memory management techniques from the 1950s/60s shouldn't be considered a new thing.

    Still waiting for the Visual.Lisp.Net, though :-) When UML and visual design paradigms are finally swallowed by Lisp, oh what fun times we'll have! ;-)
  • by Anonymous Coward on Tuesday July 18, 2006 @12:40PM (#15738056)
    C was a reaction to both BCPL (via the language B) and PL/I. The UNIX designers, Thompson, Ritchie, et al., came from the Multics project, and Multics was mostly written in PL/I with some BCPL. Most (but not all) of the BCPL code was written by the Bell Labs members of the Multics team. BCPL was a completely type-less language; C introduced a few rudimentary types.

    By contrast, PL/I had a much more complete type system, although it was not even close to "strongly typed".

    PASCAL was still very very new when C was designed.

    In particular, PL/I strings and arrays were first class data types with compiler-known lengths,
    and buffer overflows were MUCH MUCH less common. (not impossible - just much less common).

    Full PL/I was an enormous language and hard to compile, but the ANSI G subset was actually quite
    reasonable and not hard to compile for. The DEC PL/I (ANSI G subset) and C compilers for the VAX used the same code generator back-end (written by Dave Cutler who also designed RSX-11/M, VMS, and Windows NT), but the PL/I compiler produced better code for string and array handling, precisely because the compiler knew more about what the programmer actually intended. It could take better advantage of the VAX instruction set, particularly for strings of maximum known length. String instructions, such as on the VAX or the IBM System/360 could easily handle PL/I strings, but null-terminated C strings were much harder to compile for. This is not surprising, since IBM designed PL/I as a language for the System/360.
  • by Anonymous Coward on Tuesday July 18, 2006 @01:33PM (#15738521)
    Further: TopSpeed was a suite of compilers by JPI (Jensen & Partners International, the group that split away from Borland). They ended up writing the compiler for Clarion (well, reusing their existing compiler technology, as all their compilers - C/C++/Pascal/Modula-2 - shared the same object format, and as such they could mix and match languages within a single exe/dll - sound familiar!). After providing this compiler, they merged with Clarion to form TopSpeed the company, whose main product was Clarion...
  • Re:Old debate (Score:3, Informative)

    by pthisis (27352) on Tuesday July 18, 2006 @05:46PM (#15740147) Homepage Journal
    Well, all I can say is that C++ is (so much) more than just a "stricter" C.

    While I agree with your core point, I have to take exception to the implication that C++ is at all a stricter C (even if it's also more). C++ and C are different languages, and C is not a subset of C++. There are valid C programs that are invalid in C++ (even without using things like variables named "new"), and C has features like implicit void* conversion that C++ lacks. There are also programs that are valid C and valid C++ but behave differently.

    And that's without getting into features of modern C (variable size arrays, language built-in complex numbers, restricted pointers, etc) that are not in C++ as far as I know.

    But as far as your main point, yes, the reason to use C++ is if you want/need C++ features. My original objection was to the suggestion that you just "write C but use a C++ compiler to add namespaces and nothing else". Many of the drawbacks of C++ compared to C are pretty minor, and may be worth the tradeoff if you're going to take advantage of a lot of language features. Writing "C in C++" is just silly, though.
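
    A minimal illustration of the "valid C, invalid C++" point: the file below compiles cleanly as C, but a C++ compiler rejects the malloc line because C++ does not allow implicit conversion from void* to int*.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Valid C: malloc returns void*, which converts implicitly
           to int*. C++ would require an explicit cast here. */
        int *p = malloc(4 * sizeof *p);
        if (!p)
            return 1;
        p[0] = 42;
        printf("%d\n", p[0]);
        free(p);
        return 0;
    }
    ```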
  • Re:Old debate (Score:3, Informative)

    by chthonicdaemon (670385) on Wednesday July 19, 2006 @01:18AM (#15741515) Homepage Journal
    Automated memory mgmt via garbage collection has been a feature of Lisp and many other languages since the early 1960s http://www-128.ibm.com/developerworks/library/j-jtp10283/ [ibm.com]

