High-level Languages and Speed
nitsudima writes to tell us Informit's David Chisnall takes a look at the 'myth' of high-level languages versus speed and why it might not be entirely accurate. From the article: "When C was created, it was very fast because it was almost trivial to turn C code into equivalent machine code. But this was only a short-term benefit; in the 30 years since C was created, processors have changed a lot. The task of mapping C code to a modern microprocessor has gradually become increasingly difficult. Since a lot of legacy C code is still around, however, a huge amount of research effort (and money) has been applied to the problem, so we still can get good performance from the language."
Article is theory not practice - no measurements (Score:3, Interesting)
I don't agree with the basic premise of the article at all - but I've also written equivalent programs in C and more modern languages and compared the performance.
It's very simple (Score:5, Interesting)
If you don't believe me, I suggest you look at some of the assembly code output of gcc. I'm no assembly guru, but I don't think I would have done as well writing assembly by hand.
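For instance (just a sketch; the file and function names are made up), feed gcc something trivial and read what comes out:

/* sum.c -- compile with "gcc -S -O2 sum.c" and read the generated sum.s */
int sum(const int *a, int n)
{
    int total = 0;
    int i;
    for (i = 0; i < n; i++)    /* gcc will typically unroll and/or vectorize this */
        total += a[i];
    return total;
}

Compare the -O0 and -O2 output side by side and you'll see how much register allocation and instruction scheduling you'd otherwise be doing by hand.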
It goes both ways (Score:5, Interesting)
For instance, 20 years ago there was nothing strange about having an actual quicksort machine instruction (VAXen had it). One expectation at the time was still that a lot of code would be written directly by humans, so instructions and instruction-set designs catering to that use case were developed. But by around then, most code was being generated by a compiler, and since the compiler had little high-level semantics to work with, the high-level instructions - and most low-level ones too - went unused; this was one impetus for the development of RISC machines, by the way.
So, as long as a lot of coding is done in C and C++ (and especially in the embedded space, where you have the most rapid CPU development, almost all coding is), designs will never stray far from the requirements of those languages. Better compilers have allowed designers to stray further, but stray too far and you get penalized in the market.
Re:It's very simple (Score:3, Interesting)
The more interesting question is whether a person with only passing familiarity with assembly can do better than the compiler, and the answer to that is usually no these days.
Re:It's very simple (Score:4, Interesting)
In the past, most compilers were dreadful at optimizations. Now, they are just horrible. I guess that is an improvement, but I still believe there is a lot of good research to come here.
I do agree that the playing field has become pretty even. For example, with the right VM and the right code you can get pretty good performance out of Java. Problem is, "the right VM" depends greatly on the task the program is doing... certainly not a one-VM-fits-all, out-of-the-box solution (OK, perhaps you could always use the same VM, but app-specific tuning is often necessary for really high performance).
At any rate, people just need to learn to use the best tool for the job. Most apps don't actually need to be bleedingly fast, so developing them in something that makes the development go faster is probably more important than developing them in something to eke out that tiny performance gain nobody will probably notice anyway.
Re:Old debate (Score:5, Interesting)
If I had mod points I'd certainly mod you informative. Those benchmarks might be synthetic and flawed but as a general illustration of how the various languages differ, that link is fantastic.
Of course I'll just use it for my own ends by convincing my managers that we're using the right languages - "Yes boss you'll see that we use C++ for the stuff that needs to be fast with low memory overhead, Java for the server side stuff, stay the fuck away from Ruby and if you say 'Web 2.0' at me one more time I'll be forced to wham you with a mallet!" ;-)
Flawed Argument (Score:4, Interesting)
I'm a big fan of high-level languages, and I believe that eventually the very distance from assembly that high-level languages provide will make them faster, by allowing compilers/interpreters to do more optimization. However, it is just silly to pretend that C is not still far closer to the way a modern processor works than high-level languages are.
If nothing else just look at how C uses pointers and arrays and compare this to the more flexible way references and arrays work in higher level languages.
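A quick sketch of what I mean (illustrative code only): in C an array is basically a bare block of memory and a pointer is just an address you can do arithmetic on, which is about as close to the hardware as you can get without writing assembly:

#include <stdio.h>

int main(void)
{
    int a[4] = {10, 20, 30, 40};
    int *p = a;                /* the array name decays to a plain address */

    printf("%d\n", a[2]);      /* subscripting... */
    printf("%d\n", *(p + 2));  /* ...is defined as pointer arithmetic: no bounds
                                  check, no object header, just address + offset */
    return 0;
}

A reference to an array in Java or Python, by contrast, drags along a length, a type tag and a bounds check on every access, all of which the compiler or VM has to see through before it can emit anything resembling the C above.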
Imaginary history (Score:5, Interesting)
C was not a reaction to LISP. I can't even imagine why anyone would say this. LISP's if/then/else was an influence on ALGOL and later languages.
C might have been a reaction to Pascal, which in turn was a reaction to ALGOL.
LISP was not "the archetypal high-level language." The very names CAR and CDR mean "contents of address register" and "contents of decrement register," direct references to hardware registers on the IBM 704. When the names of fundamental language constructs are those of specific registers in a specific processor, that is not a "high-level language" at all. Later efforts to build machines with architectures optimized for implementing LISP further show that LISP was not considered "a high-level language."
C was not specifically patterned on the PDP-11. Rather, both of them were based on common practice and the understanding of what was in the air at the time. C was a direct successor to, and reasonably similar to, BCPL, which ran on the Honeywell 635 and 645, the IBM 360, the TX-2, the CDC 6400, the Univac 1108, the PDP-9, the KDF 9 and the Atlas 2.
C makes an interesting comparison with Pascal; you can see that C is, in many ways, a computer language rather than a mathematical language. For example, the inclusion of specific constructs for increment and decrement (as opposed to just writing A = A + 1).
Re:Old debate (Score:3, Interesting)
And although you call COBOL legacy, it really isn't. Many financial institutions still run applications written in COBOL, since it is too costly and risky to migrate the old code to a new language. COBOL was meant for the financial industry, and it's probably there to stay. Colleges and universities are even starting to teach it again, since it is in high demand in the job market right now (older COBOL programmers are retiring).
Re:High Level (Score:3, Interesting)
A minor observation about the feasibility of working with the target hardware: the two most popular instruction set architectures for commodity hardware, PowerPC and IA-32, have both been stable since the mid-90s. The programming guide for PowerPC processors is still pretty much the same document as it was in 1996, around the time the PowerPC ISA was defined. IA-32 has undergone some changes with each new major processor family, but is still backwards compatible at the instruction level with processors released in the 80s.
Contrast this with high-level (i.e., non-assembly) languages. Java has undergone a few major revisions in its 10-year lifespan. C++ has yet to have a compiler that fully implements the spec (think export and the really fun template games). Scripting languages are constantly evolving and sometimes aren't backwards compatible over a 4-year period. Then there's the Microsoft switch to .NET.
Of course, there are architecture considerations for squeezing performance out of code. But, again, these haven't changed much in the last 10 years, either. The memory bus is still the bottleneck, and you still get 50-80 instructions for 'free' on each load, even if you're not filling the pipeline completely. If you are doing something that isn't memory bound, it's not that hard to look up the instruction latencies in the manual and code things up to fully utilize the processing units and keep the pipeline full. At least, it's no more difficult than developing scalable EJB applications for your favorite web application engine...
-Chris
Re:Speed of programming (Score:2, Interesting)
In the embedded world, programming DSPs is a wonderful example. We used to write assembly code for the parts that had to be fast because we could get tighter code than a compiler could produce. Now the tools are so good that even good programmers are better off letting the tools do the work.
"So, is there any point writing in a low level language, even where speed matters? I don't think so. In any event, it will take longer to write the code. It just doesn't seem like a winning proposition"
I would never say it's just as quick to write stuff in C. But I will say the code runs faster and you can support higher loads than if it was written in something else.
A 10 line basic program isn't a good example. Rewrite, say, Apache, or a unix kernel in something besides C (or assembler) and watch it suck, bad.
There's a reason all the important stuff is in C. I should have thought this was obvious. What should be obvious too is that it doesn't all need to be in C. The 80/20 rule applies someplace; it doesn't really matter what the config/startup stuff in, say, Apache is written in. Use LISP and it wouldn't matter. But the task-spawning and HTTP service parts damn well better be in C. Even better in assembler, but then portability goes out the door.
C may not be perfect but it's less imperfect than anything else. It's about the optimal compromise between portability and speed.
Re:Old debate (Score:5, Interesting)
What's wrong with Ruby, as a replacement for a very ugly language called Perl?
Ruby is an elegant language, fully Object Oriented, and does just as well as Python and Perl...
Ruby on Rails, OTOH, is a different story, and I don't want to get into a flame war over it, but Ruby itself is pretty good for a lot of things you'd otherwise write in Perl but don't like the ugliness of Perl...
I've found some people don't get the distinction between Ruby and Ruby on Rails.
Re:Old debate (Score:3, Interesting)
The statement "C is not a high level language" is not logically equivalent to the statement "C is a low level language", so the OP is still entitled to his beer money :-)
Re:It goes both ways (Score:3, Interesting)
One guy I knew realized that he was never going to get his rig stable enough to run through the whole test, so he set up a single opcode to just dump the entire expected output of the test program to the printer then halt. IIRC, he pulled it off.
Depends on the job... (Score:3, Interesting)
We're where we are today because, for many years, C was the one you could get for free. The others cost hundreds of dollars.
I remember the first time I encountered a computer that shipped from the vendor with GCC instead of a proprietary compiler - it was like seeing a death sentence for Abacus, Lightspeed, and all those other little compiler companies.
New debate (Score:4, Interesting)
The speed of a programming language or machine language isn't measured only by how high- or low-level it is to us. It's also measured by the time it takes to develop and implement the program. The article basically makes the point that it's "better to let someone else" optimize the low-level code while you write in the high-level language. You could write a super fast machine-coded program, but it'll take you much longer to write it than with a simpler, higher-level language.
The new debate is over datatypes and the available methods to manipulate them. Older hardware gave us the old debate, with primitive datatypes and a general set of instructions to manipulate the data. Newer hardware can give us more than just primitives. For example, a Unicode string datatype seen by the hardware as a complete object instead of an array of bytes. Hardware instructions to manipulate Unicode strings would practically do away with any low-level software implementation of them. The same could be done for UTF-8 strings. We could implement hardware support for XML documents and other common protocols. How these datatypes are actually implemented in hardware is the center of the debate.
Eventually, there will be so many datatypes that there will be separate low-level languages specifically designed for a domain of datatypes. The article makes the point that it is increasingly complex for newer compilers to understand what was intended by a set of low-level instructions. Today's CPUs have a static set of low-level instructions. The future holds hardware-implemented datatypes and, with them, a dynamic set of low-level instructions. Newer processors will need to be able to handle that dynamic set of machine language instructions.
Does the new debate conflict with Turing's goal of making a processing unit extensible without the need to add extra hardware? For now, we have virtualization.
Re:Old debate (Score:4, Interesting)
Nothing wrong with the language that a proper implementation couldn't cure, basically.
Re:Along those lines... (Score:5, Interesting)
I sped up some C code by unrolling a loop with Duff's Device [catb.org]. Duff's Device, for those who haven't encountered it, makes ingenious use of the often-maligned C behavior that case statements, in the absence of a break or return statement, fall through.
Duff's Device takes advantage of the fall-through by jumping into the middle of an unrolled loop of repeated instructions. If eight instructions are unrolled, Duff's Device iterates the loop roughly count/8 times, but enters the loop by jumping to the (count mod 8)'th unrolled instruction from the end of the loop. (This sounds complicated, but isn't; just look at the code and it becomes clear.)
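For reference, here's roughly the shape of it (a sketch of the classic device for a halfword copy to a fixed output register; the names 'to', 'from' and 'count' are just illustrative, and count is assumed to be greater than zero):

/* Duff's Device: copy 'count' halfwords from 'from' to the memory-mapped
   output register 'to'; 'to' is not incremented because it's a device register. */
void send(volatile short *to, short *from, int count)
{
    int n = (count + 7) / 8;      /* number of passes through the unrolled body */
    switch (count % 8) {          /* jump into the middle of the loop... */
    case 0: do { *to = *from++;   /* ...and fall through the remaining cases */
    case 7:      *to = *from++;
    case 6:      *to = *from++;
    case 5:      *to = *from++;
    case 4:      *to = *from++;
    case 3:      *to = *from++;
    case 2:      *to = *from++;
    case 1:      *to = *from++;
            } while (--n > 0);
    }
}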
The whole point of Duff's Device is speed and locality of code. Speed: because the loop is unrolled, more instructions are executed for each jump back to the top (and jumps are, relatively, expensive, because they mean any preloaded instructions must be tossed out and re-read). Locality: (hopefully) all the instructions can be cached, so the processor doesn't have to re-read them from memory.
But what gcc does with Duff's Device on ARM targets is just bizarre. gcc uses a jump table (good) to directly change the Program Counter (good, so far). But instead of jumping into the loop (which would be good), gcc uses the jump table to jump to a redundant assignment followed by an unconditional jump.
Yes, gcc very smartly makes a jump table (which directly changes the Program Counter, just like a jump would) to jump to a jump. This is simply a waste of code and time:
Why a jump table just to set up an unconditional jump? Why the redundant mov, which could have been done once, prior to the jump table jump? Who knows, that's what gcc does.
In this particular case, the object is to copy halfwords to a memory address, which address is really mapped to an output device. ARM processors, of course, are optimized for word addresses, so the "best" way to do this would be to load multiple words (LDM), shift the upper and lower halfwords out of each word, and store the halfwords to the output address.
Re:Old debate (Score:1, Interesting)
I argue that C is not a low-level language, but that it can provide low-level interaction with the hardware components (arguably C# with the unsafe keyword can, too).
The language is high-level and compiled into machine language, unlike other high-level languages that are compiled into byte-code and interpreted.
I think a good border is:
1) Low-level - code that can be directly converted into assembly, with a 1-1 mapping between the two.
2) High-level - code that is still compiled into machine code; however, there is no 1-1 mapping back to the language it was originally written in (e.g. C, because different compilers will create different machine code for the same platform). What I aim to say is that it's not possible to figure out the exact C code from looking at the assembly.
Re:C and Smalltalk is what happened. (Score:3, Interesting)
This seems generally to be true, but some small outfits are apparently still making money selling compilers. In the early 90s I used the Power C compiler for DOS. It was a nice compiler and cheap ($20). Recently I was amazed to see that the company, Mix Software [mixsoftware.com], is still in business, with the same low prices. How they do this I have no idea.
Re:What I didn't see in TFA... (Score:1, Interesting)
This, obviously, goes for all compiled languages whose compiler is written in the language in question. In particular, it is true for C. LISP pre-dates C by a significant number of years (nearly a decade and a half, in fact), so LISP compilers definitely do not have to go back to C. I am involved in work on a native-code compiler for a high-level language that is written in the language itself. Yes, there once was an "ur-version" of the compiler written in a different language. But that was more than 20 years ago, and the other language was not C. In fact, that language was LISP...
Why do you doubt that? It certainly wasn't written in C either. Moreover, all this is completely orthogonal to the discussion. A compiler does not have to be efficient in order to produce efficient code.
The main reason for C's popularity these days is that it is self-sustaining: You can always count on having a C compiler at hand, and since C is important, even hardware designers take some pains in making sure that C can be compiled at least semi-efficiently. But, as the article points out correctly, the efforts involved are becoming more and more heroic.
Some of the real optimization issues (Score:5, Interesting)
The article is a bit simplistic.
With medium-level languages like C, some of the language constructs are lower-level than the machine hardware. Thus, a decent compiler has to figure out what the user's code is doing and generate the appropriate instructions. The classic example is
char tab1[100], tab2[100];
int i = 100;
char *p1 = tab1; char *p2 = tab2;
while (i--) *p2++ = *p1++;
Two decades ago, C programmers who knew that idiom thought they were cool. In the PDP-11 era, with the non-optimizing compilers that came with UNIX, it was actually useful. The "*p2++ = *p1++;" explicitly told the compiler to generate auto-increment instructions, and considerably shortened the loop over a similar loop written with subscripts. By the late 1980s and 1990s, it didn't matter. Both GCC and the Microsoft compilers were smart enough to hoist subscript arithmetic out of loops, and writing that loop with subscripts generated the same code as with pointers. Today, if you write that loop, most compilers for x86 machines will recognize it and generate a block move (or a call to memcpy) for the copy. The compiler has to actually figure out what the programmer intended and rewrite the code. This is non-trivial. In some ways, C makes it more difficult, because it's harder for the compiler to figure out the intent of a C program than of a FORTRAN or Pascal program. In C, there are more ways that code can do something weird, and the compiler must make sure that the weird cases aren't happening before optimizing.
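For what it's worth, the version you'd actually write today is the boring subscripted one (same illustrative arrays as above), and a decent compiler turns it into the same thing, or better:

/* The subscripted form of the same copy; modern gcc and MSVC will typically
   recognize it and emit a block move or a call to memcpy. */
int i;
for (i = 0; i < 100; i++)
    tab2[i] = tab1[i];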
The next big obstacle to optimization is the "dumb linker" assumption. UNIX has a tradition of dumb linkers, dating back to the PDP-11 linker, which was written in assembler with very few comments. The linker sees the entire program but, with most object formats, can't do much to it other than throw out unreachable code. This, combined with the usual approach to separate compilation, inhibits many useful optimizations. When code calls a function in another compilation unit, the caller has to assume near-unlimited side effects from the call. This blocks many optimizations. In numerical work, it's a serious problem when the compiler can't tell, say, that "cos(x)" has no side effects. In C, it doesn't; in FORTRAN, it does, which is why some heavy numerical work is still done in FORTRAN. The compiler usually doesn't know that "cos" is a pure function; that is, x == y implies cos(x) == cos(y). This is enough of a performance issue that GCC has some cheats to get around it; look up "mathinline.h". But that doesn't help when you call some one-line function in another compilation unit from inside an inner loop.
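As a sketch of the kind of cheat involved (this one is gcc-specific, and my_cos is a made-up name standing in for any out-of-line function): if you promise gcc the function has no side effects, it's allowed to hoist the call out of the loop:

/* Declaring the function __attribute__((const)) tells gcc it reads nothing
   but its arguments and has no side effects, so equal arguments give equal
   results and the call below can be moved out of the loop. */
double my_cos(double x) __attribute__((const));

double sum_scaled(const double *a, int n, double y)
{
    double total = 0.0;
    int i;
    for (i = 0; i < n; i++)
        total += a[i] * my_cos(y);   /* my_cos(y) is loop-invariant */
    return total;
}

Without the attribute, the compiler has to assume my_cos might write to a global or set errno, and dutifully calls it n times.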
C++ has "inline" to help with this problem. The real win with "inline" is not eliminating the call overhead; it's the ability for the optimizers to see what's going on. But really, what should be happening is that the compiler should check each compilation unit and output not machine code, but something like a parse tree. The heavy optimization should be done at link time, when more of the program is visible. There have been some experimental systems that did this, but it remains rare. "Just in time" systems like Java have been more popular. (Java's just-in-time approach is amusing. It was put in because the goal was to support applets in browsers. (Remember applets?) Now that Java is mostly a server-side language, the JIT feature isn't really all that valuable, and all of Java's "packaging" machinery takes up more time than a hard compile would.)
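A sketch of the difference (illustrative names again): put the body where the optimizer can see it and the call disappears; hide it in another compilation unit and every call is a black box unless something re-optimizes at link time.

/* scale.h -- the body is visible to every caller, so the compiler can inline
   it, see that it has no side effects, and optimize the caller's loop. */
static inline double scale(double x) { return x * 0.5; }

/* If scale() instead lived only in scale.c, the loop below would make a real
   call per element, with the caller assuming arbitrary side effects each time. */
void halve(double *a, int n)
{
    int i;
    for (i = 0; i < n; i++)
        a[i] = scale(a[i]);
}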
The next step up is to feed performance data from execution back into the compilation process. Some of Intel's embedded-system compilers do this. It's most useful for machines where out-of-line control flow has high costs and the CPU doesn't have good branch-prediction hardware. For modern x86 machines, it's not a big win. For the Itanium, it's essential. (The Itanium needs a near-omniscient compiler to perform well, because you have to decide at compile time which instructions should be executed in parallel.)
Re:Old debate (Score:4, Interesting)
* It's not sufficiently prettier than Perl
* It's not Perl
Perl may look ugly but it is to most programming languages as English is to most other languages. Perl is a brawling, sprawling mess of borrowed, Hamming-optimized idioms that is extremely ugly from the POV of a syntax engineer and extremely expressive from the POV of a fluent speaker.
Ruby is more like Esperanto - elegant, clean, and spoken by practically no-one because it isn't very expressive.
Re:Old debate (Score:3, Interesting)
How about the fact that you can't use an integer as an array index in Ada and you have to use natural numbers (defined as zero-or-positive integers), because array indexes can't be negative (in most languages anyway; some -- like Python -- are exceptions to this quite common rule) and you therefore shouldn't be allowed to use a number that might ever be negative as an index. C# merely gives you a warning if your index is explicit (e.g., myArray[-1]) and doesn't do anything otherwise, before throwing an IndexOutOfRangeException at runtime.
That's one measly example, but I find it quite interesting.
So no, C#'s unsafe keyword isn't a factor (and the lack of implicit conversion clearly isn't, if anything implicit conversion is the sure sign of quirky and unsafe type systems).
When people say that an Ada program that compiles will usually work without problem, they're not joking, Ada's type system is so extensive and so strong that it misses very few errors (that it could handle, that is, flaws in your own logic can't be patched by a compiler).
Re:C and Smalltalk is what happened. (Score:1, Interesting)
GCC's long suit is portability to new architectures, at which it is unparalleled. However, for any particular processor architecture, the specialist compilers generally do better. And the cost of the compiler is small compared to the cost of the programmer or the cost of high-volume manufacturing. Saving some flash and RAM space adds up to a lot of money.
Of course, the compiler is only a small part of the whole toolchain. Embedded systems also need to be debugged. You can't download a free JTAG emulator to go with GCC. Information might want to be free, but hardware doesn't.