High-level Languages and Speed
nitsudima writes to tell us Informit's David Chisnall takes a look at the 'myth' of high-level languages versus speed and why it might not be entirely accurate. From the article: "When C was created, it was very fast because it was almost trivial to turn C code into equivalent machine code. But this was only a short-term benefit; in the 30 years since C was created, processors have changed a lot. The task of mapping C code to a modern microprocessor has gradually become increasingly difficult. Since a lot of legacy C code is still around, however, a huge amount of research effort (and money) has been applied to the problem, so we still can get good performance from the language."
Bah (Score:5, Insightful)
High Level (Score:5, Insightful)
Now I hear most people referring to C and C++ as "low level" languages, compared to Java and PHP and visual basic and so on. Funny how that works out.
I like Assembler. There's something about interacting intimately with your target hardware. It's a shame that it's no longer feasible with today's variety of hardware.
Inaccurate summary (Score:5, Insightful)
This is not true. What they mean, I think, is "the task of mapping C code to efficient machine code has gradually become increasingly difficult".
Re:Old debate (Score:5, Insightful)
If anything, C is a so-called mid level language. If it wasn't, you'd be using an assembler instead of a compiler.
Re:Old debate (Score:2, Insightful)
Re:Bah (Score:5, Insightful)
You have two choices when using SIMD instructions in C:
The claim that C cannot inline functions from another source file is also wrong. This is a limitation in gcc, but other compilers can do it; IIRC the Intel compiler can. It is certainly not "impossible".
When you pass a C file to a compiler, it generates an object file. It has absolutely no way of knowing where functions declared in the header are defined. You can hack around this; pass multiple source files to the compiler at once and have it treat them as a single one, for example, but this falls down completely when the function is declared in a library (e.g. libc) and you don't have access to the source.
Re:Bah (Score:5, Insightful)
High-level languages have an advantage (Score:5, Insightful)
The more abstract a language is, the better a compiler can understand what you are doing. If you write out twenty instructions to do something in a low-level language, it's a lot of work to figure out that what matters isn't that the instructions get executed, but the end result. If you write out one instruction in a high-level language that does the same thing, the compiler can decide how best to get that result without trying to figure out if it's okay to throw away the code you've written. Optimisation is easier and safer.
Furthermore, the bottleneck is often in the programmer's brain rather than the code. If programmers could write code ten times faster, that executes a tenth as quickly, that would actually be a beneficial trade-off for many (most?) organisations. High-level languages help with programmer productivity. I know that it's considered a mark of programmer ability to write the most efficient code possible, but it's a mark of software engineer ability to get the programming done faster while still meeting performance constraints.
Typical Java Handwaving (Score:5, Insightful)
I've been programming professionally for over 20 years, and for those 20 years the argument has been that computers are now fast enough to allow high-level languages, and that we don't need those dirty nasty assemblers and low-level languages.
What was true 20 years ago is still true today: well-written code in a low-level language, tailored to how the computer actually works, will always be faster than a higher-level environment.
The problem with computer science today is that the professors are "preaching" a hypothetical computer with no limitations, suggesting that the "real" limitations of computers are somehow unimportant.
If computer science isn't about computers, what is it about? I hate that students coming out of universities, when asked about registers or how they would write a multiply routine given only shifts and adds, respond with "why do I need to know this?"
Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and
Re:Bah (Score:1, Insightful)
You say this as if C defines an object format and you can toss libraries around without assuming a particular compiler, linker, and loader facility, e.g. a specific C implementation such as GCC with the GNU toolchain!
C compilers can and do store intermediate forms in "object" files such that the linker can do final inter-procedural optimization at link time or even dynamic load time. The SGI Irix compiler did this, for example.
Re:It's very simple (Score:2, Insightful)
Re:High-level languages have an advantage (Score:5, Insightful)
Especially since you can combine the two. Even in high-performance applications, typically only a tiny fraction of the code actually needs to be efficient; it's perfectly common for 99% of the time to be spent in 5% of the code.
Which means that in basically all cases you're better off writing everything in a high-level language and then optimizing only the routines that need it.
That way you make fewer mistakes and get higher-quality code quicker for the 95% of the code where efficiency is unimportant, and you can spend even more time optimizing the few spots where it matters.
Some comments on the article (Score:5, Insightful)
OK, this is nitpicking, but there are some exceptions - I remember that TASM would automatically convert long conditional jumps into the opposite conditional jump plus an unconditional long jump, since there was no long conditional jump instruction.
This paragraph is complete crap. If you're using a Dictionary API in a so-called "low-level language", it's as possible for the API to do the same optimization as it is for the runtime he talks about - and you're still letting "someone else do the optimization".
That's surely true. But the opposite also holds - when you pile on overly complex semantics, they can be translated into a pile of inefficient code. Sure, this can improve in the future, but right now it's a real problem with very high-level constructs.
Not exactly true, I think [greenend.org.uk]. Yes, the approach on that page is not standard C, but in section 4 he also talks about some high-level performance improvements which are still being experimented with, so...
Typical "/." Handwaving (Score:5, Insightful)
The "appeal to an expert" fallacy?
"What was true 20 years ago is still true today, well written code in a low level language tailored to how the computer actually works will always be faster than a higher level environment."
It also means that portability becomes ever harder, as well as adaptability to new hardware.
"If computer science isn't about computers, what is it about? I haate that students coming out of universities, when asked about registers and how would they write a multiply routine if they only had shifts and adds, ask "why do I need to know this?""
It's about algorithms. Computers just happen to be the most convenient means of trying them out.
"The problem with computer science today is that the professors are "preaching" a hypothetical computer with no limitations. Suggesting that "real" limitations of computers are somehow unimportant."
With the trend towards VMs and virtualization, that "hypothetical" computer comes ever closer.
"Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and
Now who's handwaving?
What I didn't see in TFA... (Score:4, Insightful)
I didn't see anything mentioning that many high-level languages are implemented in C. And I don't consider languages like FORTRAN to be high-level. FORTRAN is a language that was designed specifically for numeric computation and scientific computing. For that purpose, it is easy for the compiler to optimize the machine code better than a C compiler could ever manage. The FORTRAN compiler was probably written in C, but FORTRAN has language constructs that are better suited to numeric computation.
Most truly high-level languages, like LISP (which was mentioned directly in TFA), are interpreted, and the interpreters are almost always written in C. It is impossible for an interpreted language written in C (or even a compiled one that is converted to C) to go faster than C. It is always possible for a C programmer to write inefficient code, but that same programmer is likely to write inefficient code in a high-level language as well.
I'm not saying high-level languages aren't great. They are great for many things, but the argument that C is harder to optimize because the processors have gotten more complex is ludicrous. It's the machine code that's harder to optimize (if you've tried to write assembly code since MMX came out, you know what I mean), and that affects ALL languages.
Re:Typical Java Handwaving (Score:2, Insightful)
I was rather under the impression that computer science was the theory of computation, where the computer is simply a tool; just as much as a soldering iron is a tool in electrical engineering.
Re:Typical Java Handwaving (Score:5, Insightful)
"Computer science is no more about computers than astronomy is about telescopes" -- Edsger Dijkstra quotes (Dutch computer Scientist. Turing Award in 1972. 1930-2002)
Sorry, you're arguing against Dijkstra: you lose. :)
Re:Bah (Score:3, Insightful)
Which usually isn't a big problem anyway, since the code sections where that's an advantage are usually quite small and infrequent; if you really need the performance, you can make the small sacrifice of inserting conditional compilation statements with different code for the platforms you're interested in.
It's certainly not an ideal solution, but it's a very attractive one, and it has the advantage that you can have experts on each CPU optimizing the code for the platform they know best.
Re:Typical Java Handwaving (Score:3, Insightful)
I think his point was not that abstractions are bad, but that not knowing what's happening behind the scenes isn't good.
Even to optimize
Re:High Level (Score:5, Insightful)
Rather, usually what's done is that most of the code is written in C, and only those parts that REALLY REALLY have to be optimized, like interrupt handlers for example, are done in assembly. People use assembly for routines that, for example, have to take exactly a certain number of instruction cycles to complete.
But it should be avoided as much as possible. It's just not worth losing the portability.
More and more these days, microprocessors are embedding higher level concepts, and even entire operating systems, just to make software development easier.
Re:Typical Java Handwaving (Score:5, Insightful)
I've designed compilers before, and I wouldn't class constructing a C/C++ compiler as "trivial" :)
One could also make the opposite argument. Many computer courses teach languages such as C++, C# and Java, which all have connections to low-level code. C# has its pointers and gotos, Java has its primitives, C++ has all of the above. There aren't many courses that focus more heavily on highly abstracted languages, such as Lisp.
And I think this is more important, really. Sure, there are many benefits to knowing the low-level details of the system you're programming on; but it's not essential to know, whilst it is essential to understand how to approach a programming problem. I'm not saying that an understanding of low-level computational operations isn't important, merely that the abstract generalities matter more.
Or, to put it another way, knowing how a computer works is not the same as knowing how to program effectively. At best, it's a subset of a wider field. At worst, it's something that is largely irrelevant to a growing number of programmers. I went to a university that dealt quite extensively with low-level hardware and networking, and a significant proportion of my first-year marks came from coding assembly and C for 68000-series processors. Despite this, I can't think of many benefits such knowledge has when, say, designing a web application on Ruby on Rails. Perhaps you can suggest some?
I disagree. I think software sucks because software engineers don't understand programming
Assembler (Score:4, Insightful)
Re:Typical Java Handwaving (Score:5, Insightful)
Which machine, chum?
"I've been programming professionally for over 20 years..."
OK, bump chests. I've been at it for 35+. And? Experience doth not beget competence. There are uses for low-level languages and those that require them will use them. Try writing a 300+ module banking application in assembler. By the time you do, it will be outdated. Not because the language will change, but because the banking requirements will. Using assembler to write an application of that magnitude is like trying to write an Encyclopedia article with paper and pencil. Possible, but 'tarded.
"Software sucks today because software engineers don't understand computers, and that's why languages and environments like Java and
More like, 'software sucks today for the same reason it always has - fossilized thinkers can't change to make things easier for those who necessarily follow them.' Ego, no more.
Re:high level vs. low level 101 (Score:2, Insightful)
Re:Typical "/." Handwaving (Score:3, Insightful)
I'd say you are. His first statement wasn't a logical fallacy; he was just pointing out that this argument has been going on for a long time.
You made a good point about portability, but I think that was your only point. And it's easily shot down by the fact that it's just as easy to port a standard C/C++ API to a new environment as it is to port Java/.NET to a new environment.
He made an excellent point about many new graduates not knowing how the CPU actually works, and you replied with: "It's about algorithms. Computers just happen to be the most convenient means for trying them." ??? What the hell does that mean? Handwaving indeed.
His main point was that VMs are always slower than compiled machine code. Even if computers are doubling in speed every 18 months or whatever, native machine code will still be faster than virtual machine code.
Right there you have just proven yourself to be an academic. Trends do not make reality. Besides, what about gcj? If VMs were so great, why would anyone want to compile Java to native code? In the real world, people care about performance. Academics are satisfied that a problem has a solution; in the real world we need to get a solution in the minimum amount of time. VMs always take more time.
Now you may continue your handwaving.
Dude can't even write a clear sentence (Score:2, Insightful)
I believe the phrase "the faster your program will compile" means "the faster the compiler will translate your program into machine-executable code." Apparently the author means "the compiler will generate faster code." He then makes the same mistake again, equivocating between the process of compilation and the quality of the compiled output.
If you can't manage to write a clear sentence defining what topic you're exploring...what else might you be getting wrong?
Re:High-level languages have an advantage (Score:2, Insightful)
"It's the best of both worlds"
The problem with that assertion is that software development has more than two worlds
There are applications that benefit from running in a managed environment, and spend the vast majority of their time waiting for input or shifting memory around. These are cases where QT and C++ would be bad choices (the consequences of 'mis-shifting' memory in a language like C++ are well documented). Java wouldn't be the only choice, but I wouldn't call you crazy (or a bad work-man) for making that choice.
Please don't fall into the trap of using the wrong tool and then blaming the tool when things go wrong. This is exactly the kind of thing that has been plastered all over these discussions for the last ten years or so.
Re:Bah (Score:3, Insightful)
For PCs this isn't so obvious, since generic hardware plus the biggest CPU going tends to get used, but in embedded devices dedicated hardware is much more often the way to go than a processor upgrade. On my last project I can think of two APIs that gave us this benefit immediately, without SW effort on our part, and a third area that benefited from ripping out all the "optimized" code that bypassed the API and using the (now HW-accelerated) API directly.
Re:Typical "/." Handwaving (Score:3, Insightful)
Re:Old debate (Score:3, Insightful)
Programs don't need optimization... (Score:2, Insightful)
CPU power is available and cheap, but time to market is critical. Most of the time, you don't need to write the fastest program ever, but a program that works reasonably well and that you can debug easily (some may say that's the same requirement).
C may not be the best tool for any given task but it is a pretty decent swiss army knife that most people know how to use reasonably well.
Disclaimer: I'm not in web development but in embedded real time on DSPs. With 8 dedicated ALUs (2 mul, 2 add/sub, 2 logic and 2 load/store) running at the same time on the chip, there are still not many good alternatives to C (let the compiler optimize and pray) and ASM (massive headache).
Re:Bah (Score:2, Insightful)
Oddly enough, he proceeds to jump back on track and discuss optimization techniques and levels, most of which is OK. But he berates Java for implementing arrays (that's supposed to be an advantage over C and C++, which don't), and ignores the advantages of managed memory provided by a virtual machine.
C. Needs more work.
(yes, that's a pitiful pun.)
Re:Old debate (Score:5, Insightful)
If you're going to go with the jargon as it's most often used nowadays (which is a perfectly reasonable thing to do), then C would certainly be about as low as you can get without manipulating individual registers - i.e., without being assembly language.
Quoted often, but still wrong (Score:3, Insightful)
I see this quote everywhere, and just because it's by some semi-famous academic, nobody questions it; it's simply taken for granted. The quote is utter rubbish.
In astronomy you have stars, which aren't man-made and thus only scarcely understood, and the tools we use to look at them, telescopes, which are man-made. We understand them.
Computers and computer science are both entirely man-made. There is no natural phenomenon called "computer" and a separate science that studies that phenomenon called "computer science". It's all one thing. The quote contains no useful information whatsoever; on the contrary, the conclusion it draws is absolutely false.
the author... (Score:3, Insightful)
Whaaa! My language is better. (Score:2, Insightful)
1) whether assembly is faster than C
2) whether interpreted languages are faster than C/C++
The real question here is - which type of language does well for your application?
Ultimately C will be faster in the hands of a good programmer who understands the language and the application. But will he be more productive? I'd never write a third-person shooter in Python, Perl, or Java. However, what I might do is add a 3D engine to a Python statistics modeling program that's already written in one of those languages. Most people would agree that writing a web interface in C is just insane if you have anything particularly useful to write. However, I'll probably write a multi-process webserver in C, just because it makes sense for speed (I know Python has a built-in webserver, but there are features it doesn't have. You may be able to write it in Python, but will all those features that Apache has be fast?).
The bottom line is:
- Define the application you want to build.
- Define your requirements (responsiveness, robustness [security, reliability, etc.], extendability, deadlines).
- Do a little research with a few languages (just experimentation). Write prototype interfaces in the language, do a little benchmarking, just play with it.
- Make a decision on a language based on what you've found and what's required.
As more high level languages appear (functional languages look very promising), see what those languages have over what's already out there. If it has an applicability to what you're doing, use it.
I'm tired of seeing everyone beat a dead horse. Yes, I know the two arguments:
- X is faster
- Y is just as fast as X, but can do it in less lines of code.
X & Y are different; there's no ignoring it. There are more dimensions to languages than speed and time to market - don't ignore them.
The myth of assembly performance (Score:4, Insightful)
Coding large apps in assembly is usually way beyond the point of diminishing returns in terms of performance.
Re:High-level languages have an advantage (Score:3, Insightful)
Except it doesn't. Nobody has written a compiler that smart, and I don't care what anyone says: I don't think anyone ever will.
Learning how to invent and develop algorithms is important. Learning how to translate those algorithms into various languages is important. And knowing how the compiler will translate those algorithms into machine instructions, and how the CPU itself will process those instructions, will yield a lot more performance than the choice of language.
Consider djbfft [cr.yp.to], one of the fastest FFT implementations: it outruns many FFT implementations in Java, Haskell, Lisp, or assembly, and yet it's written in C.
Don't misunderstand me: I'm not saying C is fast, or C is good; I'm saying djbfft is good. Reordering the instructions in the C code will lower the efficiency, even if the code is otherwise equivalent.
That said, I agree with almost everything else in your post.
Re:Old debate (Score:4, Insightful)
Don't be so sure (Score:4, Insightful)
Amazingly far back (try the '80s), a professor friend of mine had a marvelous example of compiler-generated code in which the compiler had done such an amazing job of optimizing register use that you had to trace through more than 20 pages of assembler output with colored markers to follow a register from where it was loaded to where it was used.
No way I would ever have the huevos to code that way in assembler. On a RISC machine or (Heaven help us) the Itanic it gets lots worse.
Re:Typical Java Handwaving (Score:3, Insightful)
INCORRECT. Shifts and adds are sometimes faster for certain constants: powers of two, maybe a power of two plus one. But for an arbitrary constant, this is false on most processors. Multipliers are much faster than a stream of many shifts and adds. Furthermore, the compiler should hold the knowledge of when a shift-and-add out-performs a multiply for which constant values. And if you're not using MyPrettySchoolProjectCC, it probably does.
Now what your compiler *really* hopefully knows about is how to turn division by a constant into a multiply. That can really save time. Division is an iterative process and is very hard to make fast. Multiplies are highly parallel; you can do large multiplies fully pipelined and with pretty low latency. And you can typically turn a 32-bit / 32-bit divide into a 32x32->64 multiply by the reciprocal. Since you can determine the reciprocal at compile time, this is probably a win.
Maybe you just went to a school where they didn't show you how multiplies are actually implemented on modern hardware. Shift registers with accumulators they aren't. This is also potentially a reason why the professor will tell you that you can't outsmart the compiler. The typical college student can't, because he or she doesn't understand enough about how things really work. But any engineer with a decent amount of experience -- or most grad students -- can outsmart a compiler easily.
Initially (Score:3, Insightful)
Lisp and operating systems (Score:3, Insightful)
Huh? I would argue that commercially successful (as in boxes sold to Fortune 500 companies and used in production) operating systems have been written in three languages:
* Assembly
* C
* Lisp [andromeda.com]
Are there any commercially successful OSs written in C++ yet?
(revealing my ignorance and posting flamebait, all in one)
An expert assembly programmer in a CPU... (Score:5, Insightful)
It used to be the case that I could always increase the speed of some random C/Fortran/Pascal code by rewriting it in asm; part of that speedup came from realizing better ways to map the problem at hand onto the actual CPU hardware available.
However, I also discovered that much of the time it was possible to take the experience gained from the asm code, and use that to rewrite the original C code in such a way as to help the compiler generate near-optimal code. I.e. if I can get within 10-25% of 'speed_of_light' using portable C, I'll do so nearly every time.
There are some important situations where asm still wins, and that is when you have cpu hardware/opcodes available that the compiler cannot easily take advantage of. I.e. back in the days of the PentiumMMX 300 MHz cpu it became possible to do full MPEG2/DVD decoding in sw, but only by writing an awful lot of hand-optimized MMX code. Zoran SoftDVD was the first on the market, I was asked to help with some optimizations, but Mike Schmid (spelling?) had really done 99+% of the job.
Another important application for fast code is crypto: if you want to transparently encrypt everything stored on your hard drive and/or going over a network wire, then you want the encryption/decryption process to be fast enough that you don't really notice any slowdown. This was one of the reasons for specifying a 200 MHz PentiumPro as the target machine for the Advanced Encryption Standard: if you could handle 100 Mbit Ethernet at full duplex (i.e. 10 MB/s in both directions) on a 1996-model CPU, then you could easily do the same on any modern system.
When we (three other guys and I) rewrote one of the AES contenders (DFC, not the winner!) in pure asm, we managed to speed it up by a factor of 3, which moved it from being one of the 3-4 slowest to one of the fastest algorithms among the 15 alternatives.
Today, with fp SIMD instructions and a reasonably orthogonal/complete instruction set (i.e. SSE3 on x86), it is relatively easy to write code in such a way that an autovectorizer can do a good job, but for more complicated code things quickly become much harder.
Terje
Re:Old debate (Score:3, Insightful)
The advantage of Fortran is purely coincidental. (Score:3, Insightful)
When Fortran was made, nobody thought that CPUs 30 years in the future would have vector processing instructions. In fact, as Wikipedia says [wikipedia.org], vector semantics arrived in Fortran only with Fortran 90.
The only advantage of current Fortran over C is that the vector processing unit of modern CPUs is better utilised, thanks to Fortran's semantics. But, to be fair and square, the same semantics could be applied to C, and then C would be just as fast as Fortran.
The fact that C does not have vector semantics reflects the domain C is used in: most apps written in C do not need vector processing. In case such processing is needed, Fortran can easily interoperate with C: just write your time-critical vector processing modules in Fortran.
As for higher-level-than-C languages being faster than C, that is purely a myth. Code that operates on hardware primitives (e.g. ints or doubles) has exactly the same speed in C, Java and other languages... but higher-level languages have semantics that can hurt performance as much as they can help it. All the checks VMs do add overhead that C does not have; the little VM routines running here and there all add up to slower performance, as does the fact that some languages are overengineered or open the way to sloppy programming (like, for example, not using static members but creating new ones each time there is a call).
Re:Old debate (Score:2, Insightful)
It was an early example of the MS method of software development: buy out someone who has a viable product and do a much better job of marketing that product.
I maintain that MS has never been much of a software development company but, rather, a software marketing company. Certainly, the vast majority of their "innovations" have been in marketing. MS tends to incrementally improve on other developers software while being very innovative in their marketing of that software.
Lattice C was an early example. Excel is a mid-life example. A more recent example is the Groove Networks collaboration tool. MS recently bought them and will include Groove in the next version of Office. They pretty much had to do this, as Office is pretty stale. Who really needs a newer version of Word, for example? And OpenOffice is coming along and is free. The only way to improve the Office product enough to warrant an upgrade was to add serious collaboration capabilities. And, this being MS we're talking about, the only way to do that was to go out and buy serious collaboration capabilities. Now they'll integrate it into Office and market the bejeezus out of it.
I rest my case...
Re:High Level (Score:1, Insightful)
No. It's the semantics that matter, not the syntax.
Re:Typical "/." Handwaving (Score:3, Insightful)
I'd argue that in the real world (or at least the business world) we need the solution to be developed in the shortest amount of time, with the greatest amount of security. While a VM-based language is not guaranteed to provide quicker development or better security, in most cases it probably will.
Re:Old debate (Score:2, Insightful)
Anyone that considers C to be a "cross-platform assembler" probably has never worked in assembler, and almost definitely hasn't done so on more than one platform.
'C' is only "low level" to those who never get any closer to the hardware than, say, Visual Basic. Anyone who has programmed in assembly language will assure you that 'C' is quite high level. I'd be willing to accept "mid-level", but in reality, once you've worked at the assembly level, you realize there's very little difference between 'C' and Visual Basic: both are essentially high-level languages. 'C' just seems more intimidating to the Visual Basic programmer, so he concludes it clearly can't be a high-level language like VB.
When you write a single line in 'C' and realize that it can correspond to hundreds of assembly language instructions, you realize that 'C' is very much a relatively high-level language. When you try to do floating-point math on an 8-bit processor with no floating-point instructions, you realize that 'C' is very much a high-level language. When you try to add three numbers and multiply the sum by a fourth, coming from 'C', you realize that (1 + 2 + 3) * 4 is a heck of a lot more complicated than you imagined.
The main difference between VB and 'C' is that VB gives you more self-contained packages for interacting with today's GUIs. 'C' gave you printf, which was fine for writing to a terminal; VB gives you all kinds of controls for pretty GUI work. The concept is exactly the same, and both are high level.
I say all of this having programmed in assembly language, then Basic, then QuickBasic, then 'C', then VisualBasic, and now almost exclusively 'C' and assembly language in truly embedded systems (embedded != Windows or Linux in a small form factor).
Re:Old debate (Score:3, Insightful)
First-class reentrant continuations and dynamic typing (another major efficiency hog) probably constrain you to, in the best case, the same box as compiled Scheme - about the same as Java.
Ugly? Idiomatic! (Score:2, Insightful)
It's just a matter of "if you can't stand the line noise, get out of the code-kitchen!"
Even if I can easily understand Perl code, what I really can't stand is C pointer arithmetic when it goes too far...
Less space for alternative vendors (Score:3, Insightful)
Re:Old debate (Score:3, Insightful)
What fundamental design flaw -- that malloc() is less convenient to use in C++? For crying out loud, use new!
Ok, void pointers are less useful in C++ than in C. In my experience, that has been a non-problem. But then I've never tried to program in C with a C++ compiler -- I have enough problems without creating artificial ones.
Re:wasted ink (Score:3, Insightful)
Besides, it's not a new concept, and if this generation of programmers didn't get it, neither will the new generation, because among the very first generation of programmers were people who understood Lisp machines. Of course, if a new generation really does start using mostly Ruby when the current one can't handle Lisp, we'll know it was those darned parentheses. Just as any sufficiently advanced technology is indistinguishable from magic, any sufficiently advanced language is indistinguishable from Lisp.
It will be funny to see this turned on its head, if there are ever enough, say, Python or Ruby programmers to improve python/ruby compilers/runtimes to where, a couple generations of processors later, it's C that has a lack of optimizations and is actually farther from the hardware. We may actually see a C virtual machine as a necessity!
More practically, I try to work with languages that suit the task at hand, which is really never C unless I'm dealing with a huge existing C codebase.
Re:wasted ink (Score:3, Insightful)
Java feels way slower than anything else. My college courses were mostly in Eclipse. It runs fast enough, but it takes forever to start, which is true of many, many Java apps.
Which means that when these same programmers end up learning C/C++, they'll think Java is slow because it's "interpreted". I guess there's at least the hope that they'll wind up using C#, and thinking Java is slow because it sucks. Which is good enough, because Java sucks for other reasons, even though it isn't really slow.
But really, with generics, Java has basically picked up most of the features and syntax of C++, added garbage collection and much more anal-retentive restrictions, and called it a whole new language. The bytecode and virtual machine are really not relevant to the awfulness of the language itself (you can write a perfectly good language for the JVM), but the JVM, specifically, has its own drawbacks, in that it's hard to write more libraries for Java, and many of the existing libraries suck in profound ways compared to C/C++ alternatives, or even
Frankly, the only good thing about them learning Java is that, at least for a while, their code may be portable, because it's so hard to make OS-specific or arch-specific Java.
Re:While we're at it (Score:3, Insightful)
python is fast enough for almost any package management quest, but yum is the worst piece of