The End of Native Code? 1173
psycln asks: "An average PC nowadays holds enough power to run complex software programmed in an interpreted language which is handled by runtime virtual machines, or just-in-time compiled. Of particular note to Windows programmers, the announcement of MS-Windows Vista's system requirements means that future Windows boxes will laugh at the memory/processor requirements of current interpreted/JIT-compiled languages (e.g. .NET, Java, Python, and others). Regardless of the negligible performance hit compared to native code, major software houses, as well as a lot of open-source developers, prefer native code for major projects even though interpreted languages are easier to port cross-platform, often have a shorter development time, and are just as powerful as languages that generate native code. What does the Slashdot community think of the current state of interpreted/JIT-compiled languages? Is it time to jump on the boat of interpreted/JIT-compiled languages? Do programmers feel that they are losing an (arguably needed) degree of low-level control when they use interpreted languages? What would we be losing besides more gray hair?"
What else (Score:3, Funny)
No, wait, too late.
On the subject of loosers... (Score:4, Funny)
Re:On the subject of loosers... (Score:3, Funny)
I was beginning to think I had gone mad, or perhaps that there was a committee that changed the spelling of "lose" without telling me. I honestly haven't seen anyone spell it correctly in months. It's starting to annoy me as much as people who can't tell they're from there from their.
Re:On the subject of loosers... (Score:3, Insightful)
Wouldn't that logic make the Scottish and the Welsh "British"?
Re:On the subject of loosers... (Score:5, Funny)
I admire people like the parent poster who have the courage of their convictions and are willing to stand up in front of the crowd and tell someone off when they think that's called for. So let me express my deep admiration to you, err... Mr Anonymous Coward.
It's a name, not an adjective. (Score:3, Informative)
I'd think that computer people could understand the difference. The 'South' in 'South America' is part of the string, it's not a prepended descriptive modifier.
"South America" is not a region of a larger area known as "America." "South America" is the name of a particular region (actually, an entire continent), period. (In the same way that "South Dakota" is the name of a place, and not just a southern region of some place called "Dakota.") Occasionally we confuse th
Euro-English (Score:5, Funny)
whereby English will be the official language of the EU rather than
German which was the other possibility. As part of the negotiations,
Her Majesty's Government conceded that English spelling had some
room for improvement and has accepted a 5 year phase-in plan that
would be known as "Euro-English".
In the first year, "s" will replace the soft "c". Sertainly, this will make the
sivil servants jump with joy. The hard "c" will be dropped in favour of
the"k". This should klear up konfusion and keyboards kan have 1 less
letter.
There will be growing publik enthusiasm in the sekond year, when the
troublesome "ph" will be replaced with "f". This will make words like
"fotograf" 20% shorter.
In the 3rd year, publik akseptanse of the new spelling kan be
ekspekted to reach the stage where more komplikated changes are
possible. Governments will enkorage the removal of double letters,
which have always ben a deterent to akurate speling. Also, al wil agre
that the horible mes of the silent "e"s in the language is disgraseful,
and they should go away.
By the fourth year, peopl wil be reseptiv to steps such as replasing "th"
with "z" and "w" with "v". During ze fifz year, ze unesesary "o" kan be
dropd from vords kontaining "ou" and similar changes vud of kors be
aplid to ozer kombinations of leters.
After zis fifz yer, ve vil hav a reli sensibl riten styl. Zer vil be no mor
trubl or difikultis and evrivun vil find it ezi to understand ech ozer. Ze
drem vil finali kum tru!
Re:Euro-English (Score:4, Informative)
See http://www.spellingsociety.org/news/media/spoofs.
Re:On the subject of looses... (Score:3, Funny)
Re:On the subject of loosers... (Score:5, Funny)
I loose my gray hair when I get off work. The ponytail and smoothly coiffed beard are necessary to convey the appropriate image in the office, but in the privacy of my home I let the beard go bushy and the tresses bounce about my shoulders.
But maybe this is more information than you really wanted to know...
Re:On the subject of loosers... (Score:3, Funny)
Re:What else (Score:4, Interesting)
Re:What else (Score:5, Interesting)
The question you have to ask, of course, is where is the bottleneck. And the answer is fairly obvious if you analyse the performance of modern applications on a variety of different hardware: IO is the bottleneck in almost every case. There's no other explanation for why my 400MHz desktop (with a nice, fast hard disk) performs as well as or better than my 1.7GHz laptop (with a slow, energy saving hard disk but otherwise similar specs) for many applications (including Firefox, OpenOffice, etc... the kind of things that the average user runs daily) while the laptop wipes the floor with it for others (media players, SketchUp).
The point is, if you're going to be waiting 50ms for disk access, why bother shaving 2ms of processing time by running in a native compiled language? Nobody will ever notice. And you may find the more modern and high-level design of the interpreted language's library allows you to write faster performing IO code more easily than the simple & low level libraries that are supplied with most compiled languages, at which point you may get better results for the same programming effort for using that language.
In the end, fast programs are about good design, not language choice. Higher level languages often allow you to spend more time on design and less on implementation. All real-world projects have a limited time scale; ISVs just try to do the best they can with the time they have available, which isn't usually producing something miraculous.
Re:What else (Score:4, Insightful)
What they will notice is when the GC decides it needs to scan a memory area that has been swapped out, crowding out any other IO on the system.
Average performance only matters for a few time-consuming tasks (and they do still exist); what matters far more in end-user apps is any apparent hang. If a button takes 100ms to get a response I probably won't notice unless I'm gaming, but if a button takes 10ms 99% of the time and 1 second the rest, then I damn well will notice despite the better average performance. App startup time is also a killer in terms of perceived performance (and languages like Java are terrible for this, especially the first run after boot).
And you may find the more modern and high-level design of the interpreted language's library allows you to write faster performing IO code more easily than the simple & low level libraries that are supplied with most compiled languages, at which point you may get better results for the same programming effort for using that language.
java.io really sucks for some types of apps, as it basically forces you to have one thread per socket, and the new java.nio isn't really any higher level than BSD sockets. I don't know what the situation is like over in
you'll learn (Score:4, Insightful)
First of all, when experienced programmers write big systems in interpreted languages, you can rest assured that they know what they are doing and are doing the benchmarks to make sure they aren't losing performance where they need it. If they need special, high-performance algorithms or libraries, they will figure out the minimal set of C/C++ primitives they need and make them a native code library inside the scripting language.
And whether code is "clean" really has nothing to do with the language. People can write clean Perl code and unclean C code.
Finally, "the finer grains of computer science" are absolutely and positively not concerned with the kind of low-level mess that C exposes.
I'm currently working on learning SDL in C/C++ for exactly that reason.
Good, so you are in a very early stage of your development as a programmer. As you mature, you'll figure out how to get the job done without wasting all your time on C/C++ programming.
In general, when experienced programmers use languages like Python or Ruby with native code plug-ins, or when they use languages like Java or C#, they produce code with better performance and fewer bugs than straight C/C++, simply because they end up having more time to implement good data structures and focus their efforts where it counts.
Re:you'll learn (Score:3, Informative)
Your ignorance of the limitations of programming languages demonstrates that you're young, and still have a lot to learn. Your ability to devise a coherent argument is also somewhat lacking.
Again, I'm being rather blunt, which, admittedly, may not be the most diplomatic approach.
Consider this tutorial [turbogears.org]. It demonstrates how, in 20 minutes, you can construct a s
Re:What else (Score:3, Funny)
Have you tried coding anything hard? (Score:4, Insightful)
Re:Have you tried coding anything hard? (Score:5, Informative)
> of 43,000 queries per hour, tell me which part of it you want coded in PHP.
hmm, the warehouse I work on has multiple databases with billions of rows in them, can hit insert rates of 100,000 rows a second, can experience 60,000 queries/hour - many of which are trending data over 13 months, has hundreds of users. Many of these users are allowed to directly hit some of the databases with whatever query tool they want. Scans of a hundred million rows at a time aren't uncommon (though seldom happen more than a few dozen times a day).
This app is completely written in korn shell, python, php and sql (db2). Looks like Ruby is also coming into the picture now, and will probably supplant much of the php in order to improve manageability.
Oh yeah, and the frequency of releases is quick and its defect rate is low. And we're planning to begin adding over 400 million events a day soon. I've done similar projects in C and java, never anywhere near as successfully as in python and php.
We might consider rewriting a few select python classes in C. Maybe, if we port the ETL over to the Power5 architecture, where Psyco doesn't run. Otherwise, it's cheaper to just buy more hardware at this point, since each ETL server can handle about 3 billion rows of data/day with our python programs.
Re:Have you tried coding anything hard? (Score:3, Informative)
BTW: I use Perl with Postgres, and yes, I wouldn't want Postgres to be written in Perl or PHP. I do, however, love using Perl for most everything
Re:Have you tried coding anything hard? (Score:4, Insightful)
the warehouse I work on has multiple databases with billions of rows in them, can hit insert rates of 100,000 rows a second, can experience 60,000 queries/hour
Scans of a hundred million rows at a time aren't uncommon (though seldom happen more than a few dozen times a day).
Yes they are. Go read what you wrote.
This app is completely written in korn shell, python, php and sql (db2).
One guess where 99% of the cycles are in that (and 90% of the dollars).
My one guess (Score:5, Insightful)
One guess where 99% of the cycles are in that
I'll take a guess! And it's even the one you want me to guess: the db2 instance. That's the fucking *point*. The fast C code that's executing has already been written. Some of it is in the python interpreter, some of it is in the ksh and php interpreters, most of it is in the db2 interpreter. Very fast algorithms doing what they do best: optimized, super fast loops operating on static types.
That is WHY python and other interpreted languages achieve the speed they achieve.. because what they do is allow you to glue together C code written by other people. And, because the Python code is much simpler, you can understand the interactions between the fast code more easily, and see where your code fails to perform well. It's always because you're putting loops together inefficiently and making poor design choices, not because of the speed of the interpreter--and now that your code is short enough for you to see that, you can fix it.
Your application logic doesn't need to be super fast. It needs to be super agile, so you can refactor and accommodate changing requirements and make smart decisions about which pieces you are going to use and how you are going to use them together.
C won't die, at least, not for a long, long time*, and that doesn't bother me, a hardcore Python programmer, in the least. Somebody has to do the dirty job of writing those fast loops. Meanwhile I'll be here zipping through the application implementation.
*It will eventually be replaced by Pyrex, of course.
Re:My one guess (Score:4, Insightful)
In reality, it is a compromise between many factors, including cost, flexibility, rate of change, manageability, and performance.
The only REAL requirement is that it does its job at a cost that is reasonable and sustainable to the company.
If you spend 10 times more on development and increase time to delivery in order to save a small fraction of that on hardware, you've lost.
For what it's worth, we do ALL of our development in interpreted languages, mostly Java, some PHP, Ruby on Rails, etc., and it all comes down to whatever is the best tool for the job. Very rarely do we ever come across a situation where 2 clients have needs that result in the exact same tools being used, unless it's just to use tools that we're more familiar with so that we can get the job done faster for them.
It's all about balancing compromise.
Re:Have you tried coding anything hard? (Score:4, Insightful)
Re:Have you tried coding anything hard? (Score:3, Informative)
Thank you. In that one, concise post, you have provided the only credible answer to the question in the title: no.
As always, we should use the right tool for the job. For anything where processing performance matters, native code blows away anything interpreted, and always will. I loved this little bit of rhetoric in the origin
Re:Have you tried coding anything hard? (Score:3, Informative)
Well, db2 is obviously managing quite a lot of it. Certainly all of the queries, but also the very fast loads. DB2 is running on four-way Power4 & Power5 hardware with 4-12 disk arrays per server, a 64-bit architecture, and typically 8 GB of memory. It's running extremely fast.
By the time the data hits PHP it is typically just small result hits - that is, a scan of a few million rows
Re:Have you tried coding anything hard? (Score:5, Interesting)
Re:Have you tried coding anything hard? (Score:5, Funny)
Re:Have you tried coding anything hard? (Score:3, Funny)
Re:Have you tried coding anything hard? (Score:3, Informative)
But, right, PHP is slow. That's the second reason why I wish to move my web development to Python. Python+Psyco kicks ass unbelievably (speed-wise): add "import psyco; psyco.profile()" to the end of your site.py
Oh, and the first reason is that PHP gets mess
-1 flamebait (Score:4, Funny)
LISP, BASIC, FORTH, P-Code, Java+Netscape (Score:5, Interesting)
BASIC had its problems, warping a generation of programmers (including me), but it was small and light and didn't take long to learn unless you wanted to find enough tricks to get real work done.
FORTH was smaller, lighter, and faster. It was overly self-important, considering its reinvention of the subroutine to be something new and radical, but if you wanted to program toasters or telescopes it was the language to use. Postscript was somewhat of a Forth derivative.
P-Code was a nice portable little VM you could implement other things on.
And then there was Java, which grew out of Gosling's experiences with NeWS, a Postscript-based windowing system. If you wonder why you're not using Netscape and maybe not using Java, and why you've probably got Windows underneath your Mozilla, it's because it became obvious to lots of people that Netscape+Java was a sufficiently powerful and easily ported environment that the operating system underneath could become nearly irrelevant - so Microsoft had to go build a non-standards-compliant browser and wonky Java implementation and start working on .NET to kill off the threat. It wasn't that conquering the market for free browsers was a big moneymaker - it was self-defense to make sure that free browsers didn't conquer the OS market, allowing Windows+Intel to be replaced by Linux/BSD/QNX/MacOS/OS9/SunOS/etc.
What makes you think Java won't rule the client? (Score:5, Interesting)
I am a Pascal programmer from ancient days and have been pretty much a Delphi person on account of my Pascal affinity and other requirements, but I have implemented GUI apps in C++, C#, Java, Matlab, and VB. I am seriously looking at Java/Swing as the next wave of what started as DOS/Turbo Pascal and got reimplemented in Windows/Delphi. Java simply couldn't do in 1997 what I was doing even at that time in Windows, just plain couldn't from the standpoint of features and performance. Java is not-quite-there-yet with the features I use in Windows, but it is much farther along in 2006 than in 1997 and is closing the gap with graphics acceleration and other features. It may surpass Delphi for what I do if it proves easier to do multi-threaded apps to take advantage of multi-core.
While my complex data visualization stuff is a long way off from being done in Java, the sort of simple data visualization stuff that I was doing in 1997 under Windows works quite well under Java, and it works equally well under Linux. If anything will get me to switch to Linux it will be that I have a collection of graphical data visualization programs for the work I do written in Java that will work equally well under Linux. While I can write a faster program with more features in Windows, the Java implementation is proving good enough for a lot of stuff that I am doing, and it breaks me loose from Windows as well.
SUN seems to be in this Java business for the long haul, seemingly spinning their wheels making it available for free and always being a step behind Windows in features. But at some point Java/Swing programs will have accumulated enough performance and features that they are good enough for what people want to do, and they have the added advantage of not being tied to Windows. This idea that something like Java could transcend the OS may yet happen for client GUI apps.
Re:What makes you think Java won't rule the client (Score:4, Interesting)
It is not a Java problem per se, but goes right back to the issue of creating cross-platform client apps in the first place. Many of us like to think of the OS as something that provides services - disk access, windowing, etc - that look like they can easily be abstracted - and they can. However, as well as being OS, Windows, OS X, KDE and GNOME are platforms - a set of programming APIs and a philosophy.
Rather than transcending these differences, Swing is yet another variation. Potentially you could make a Swing app that looked and behaved identically to a Windows app, but it would feel plain wrong on OS X. The reverse is equally true (well, just about; I don't think you can use the top-of-screen menu bar in Swing apps).
I think SWT may be the better approach - it's not write-once run-anywhere, but you are reducing the amount you need to port. And as said above, you need to consider the philosophical differences between platform HCI anyway.
Ironically, one of the few really successful Java GUI apps I know is a data visualisation tool. It mostly consists of OpenGL calls, so it's a bit of a stretch to say it's Java, but it's back to the point that it's the APIs that count. OpenGL is a nice x-platform API.
simple (Score:3, Insightful)
(1) Java's market presence for UI applications has been decreasing: applets have largely disappeared, and the JRE is preinstalled on fewer and fewer desktops.
(2) Even on OS X, where Java is pre-installed and exceptionally well supported and integrated, there are few applications written in Java, and even fewer written in Java using Swing.
(3) Java's UI classes don't integrate well with native desktops, and it is impossible with them to write a cross-platform UI that conforms to ever
Re:simple (Score:3, Interesting)
What few people seem to have realised is that the best way of achieving cross platform portability is not to throw out the systems you're porting to and implement everything from scratch (the AWT/SWING approach). This just results in applications that feel wrong whichever system you run them on. The answer is to use native widgets in a way that is flexible enough t
Re:LISP, BASIC, FORTH, P-Code, Java+Netscape (Score:4, Interesting)
I have started to believe that the proof is in the pudding. I don't know lisp but I know some zope. Zope, much like lisp, is elegant, innovative, comprehensive, well designed and capable of almost anything. Just like you probably scratch your head and wonder why people code in PHP or java when they could code in lisp, I wonder why people code in PHP or java when they could have used zope and python.
But I am ready to give that up. I am now under the impression that zope isn't everything I thought it was. I mean, if zope is so great, then how come there are only three or four blogs written for it, and not one of them is 1/10th as good as wordpress, which is written in PHP? How come not one ticket tracker written in zope is 1/10th as good as eventum, written in php?
I ask those questions rhetorically though. I know the answer. The answer is that zope is very hard. You have to be a very smart and very dedicated person to climb the ladder of zope and attain zope zen, and there are just not enough people in this world that are willing to put forth that much effort.
In the end it's better to be easy than to be good. Look at how gracefully ruby balances on that rope. ROR is easy and it's innovative. That's why great software is being written in rails while the zope folks are pounding on zope3, trying to make it easier for developers to write decent software.
BTW I am not even going to attempt to learn zope3. I have to break up with zope. Thanks for the great times guys.
Re:LISP, BASIC, FORTH, P-Code, Java+Netscape (Score:3, Informative)
You might think you know the answer, but you're wrong. The real answer is very obvious. It is this: zope has a low installed base at ISPs. Perhaps this is because PHP is easier than Zope, I don't know. I suspect it's just because more people use PHP becaus
Re:-1 flamebait (Score:4, Funny)
Emacs.
Re:-1 flamebait (Score:3, Insightful)
Therefore, Lisp can't be used to create an operating system!
Heh, I love it when kiddies try to do logic. Learn some history [wikipedia.org], damnit!
Its inevitable (Score:5, Insightful)
And this is a good thing, because it means more independence from certain CPU architectures.
Someday, you will be able to use any OS on any CPU and any Application on any OS. This is one step in that direction.
Re:Its inevitable (Score:3, Insightful)
This cracks me up. As we head towards multi-core and massively-multi-core, you are telling me that things are going to get better for interpreted languages? Compilers are about to be kicked in the pants because we can only do thread-level-paral
Re:Its inevitable (Score:4, Insightful)
Re:Its inevitable (Score:5, Insightful)
Secondly, if it is a question of taking too long to compile, realize that you can always ship optimized binaries from high-level languages (e.g. GCJ), but you cannot readily make your optimized native code work on a new platform.
New binaries for every architecture (Score:3, Funny)
Are you going to compile new binaries for every architecture and combination of cores?
As a Gentoo user that is an emphatic yes!
Re:Its inevitable (Score:5, Insightful)
Basically you are smoking crack if you think compiled languages are going to thrive on multi-core. They aren't. Hell, it's hard enough to keep data access correct with just a single thread. And with a "safe" language like Java the compiler *knows* there are no aliases for an array, so some kinds of access can automatically be done in parallel, whereas in a separately compiled/linked language like C there are few ways for the compiler to know this. When there aren't enough active threads per core, the other cores can GC the inactive programs. Safe languages have huge advantages on multi-core.
Re:Its inevitable (Score:5, Informative)
It's done by changing the paradigm. Stream programming [wikipedia.org], for one? You don't "magically" take linear code and make it fast. You get rid of "linear code". Linear code goes the way of the goto instruction... Very little of the computational heavy lifting is truly and unavoidably "linear".
Re:Its inevitable (Score:5, Interesting)
Forget your C/C++/Java/whatever. Side effects and multiple assignment are bad. Program in a pure functional language, such that all functions are referentially transparent; that is, f(x1,x2,...) always returns the same value given the same x1, x2,
Now, since most of your code is made up of referentially transparent functions, the compiler can automatically split independent pieces of code up and perform them in parallel without fear that a call to b(x) somehow affects the results of c(y).
When you absolutely need side effects (for IO, for example), you use something (uniqueness types, monads; I'm guessing) that explicitly orders the code and in this case, would presumably prevent the compiler from parallelizing it.
Compilers aren't there yet. The things I'm (vaguely) familiar with require specific annotation of potentially parallel paths. Try Occam, for instance. Another example I've read only slightly more about is parallel Haskell, which includes similar annotation primitives (par and seq). However, just because you annotate something as parallel doesn't mean it will be performed in parallel. The compiler/runtime/I'm-not-sure-which decides what to run in parallel from among the massive potential of parallelism in such a program.
If you're asking how it's possible in Java: it isn't. But then, Java already sucks when it comes to concurrency compared to systems designed for it like, say, Erlang (which, incidentally, is VM interpreted, but still blows the pants off most conventional C/whatever programs within its application domain (massively concurrent/fault-tolerant systems), lending some credence to the point of this article, not that the same things necessarily couldn't be done with native code).
Negligible performance hit my... (Score:3, Informative)
On every computer I use with Windows it takes up to 20-30 seconds to launch Java. Web page has a little "yes, you have Java" applet? Prepare for a massive slowdown. I'd hate to see what it does with applications that may just appear to hang the computer while Java launches. And don't get me started on that stupid "Welcome to Java 2" dialog that pops up from the taskbar.
Now on my Mac, things are different. Java applets launch just as fast as Flash, if not faster (basically, instantly). This is on my G4, so things would only be better with a Core Duo. Same goes for applications. I've been using an application called YourSQL for over a year. It accesses a MySQL server and works great. It's very fast and has a perfectly native interface. You would think it is a native app, but I recently discovered that it's Java. The end user would NEVER notice that kind of thing, except I was trying to debug a problem in my own code, so I went to investigate how it worked. It was Open Source, and when I downloaded it... it was Java.
Java is fantastic on Mac OS X. I don't know how fast it is on Linux. But as long as there is a 20-30 second launching penalty on Windows, Java will never be accepted. I don't think .NET has this problem, but probably because MS is keeping it memory resident in Vista even if no one is using it.
Then again, maybe Mac OS X preloads Java. I don't know if it has tricks, or if the Windows implementation is just that bad.
Java and Mac OS X (Score:5, Informative)
The downside is that things don't work quite the same as they do in Sun's Java runtime, so there are differences between Java-on-Windows and Java-on-Mac. For instance, my wife is an avid Puzzle Pirates [puzzlepirates.com] player, and the game client is a Java app. There've been Mac-specific bugs in the past, and at one point a major slowdown appeared when the game was run on a Mac. It hasn't been fixed, so while she can still do crafting on the Mac, whenever she does anything multiplayer, she has to switch to the Windows box.
Re:Negligible performance hit my... (Score:5, Informative)
Apple's JVM implementation has something called Class data sharing [javalobby.org] to speed Java startup after the first invocation. The first time is just as slow as always. Since then the feature has been added to Sun Java 1.5, so if you're up to date, you should have this.
Not quite the end yet (Score:2, Informative)
Application! (Score:2)
The answer is to use a mixture. (Score:2)
two things (Score:5, Insightful)
(b) the obvious answer is that native vs interpreted is basically simply the balance of developer cost versus cost of end-user resources (ram, cpu, users time). Interpreted code is getting faster every day, no matter what "OMG JAVA IS SO SLOW DUDE" geniuses on the interweb tell you, but there'll always be problem spaces where a 5% speedup pays huge dividends.
Re:two things (Score:5, Insightful)
There are plenty of cases where it is far more cost effective to pay somebody $10k/week to optimize the hell out of a piece of code, because a 1% optimization will save thousands of dollars over the course of a year. The market for supercomputing applications is growing substantially. It's quite frequently cheaper to prototype in a supercomputer than it is to do something 'in the real world.'
I always laugh when I see people point out benchmarks where Java is compared to C in terms of the Linpack benchmark, entirely ignoring the fact that in both cases the actual 'work' is being done in neither Java nor C, but in a BLAS library that is written in Fortran. It's hardly surprising they have similar speeds: they're running the exact same routines, from the exact same Fortran library.
The thing I see is this: the market for interpreted languages is fairly static -- I remember playing simple games written in BASIC on my parents' Apple II. I recall word processors, education software, etc. -- all written in interpreted languages.
The region of 'corner cases' where native-compiled code is substantially faster than interpreted languages hasn't changed significantly over my lifetime. High-performance games were, are, and will remain native-compiled code for the foreseeable future. The same applies to supercomputing. Embedded machines are also a bastion of native code, simply because they are produced on a scale that favors code written natively; the tradeoff being more expensive hardware, the economics never work out such that software (including its one-time development cost) is cheaper than hardware.
There's nothing wrong with either; they are tools, to be used appropriately. Being a rabid fanboy (or hater) of either only proves one is willfully ignorant of reality. Fifteen years ago, an interpreted language kept many of the world's largest mainframes running -- it wasn't Java, it was BASIC (or one of quite a few other interpreted languages).
The languages used may have changed, but the amount of (and use cases for) interpreted vs. native code hasn't changed that much over the decades. Shiny-new Java didn't change it, neither did
Don't think for a second that interpreted languages are taking over; or that they're losing ground. The more things change, the more they stay the same.
Why isn't anything compiled natively anymore? (Score:3, Insightful)
Someone's been reading too many benchmarks (Score:4, Insightful)
Yeah... people keep saying that. Okay, let's take the benchmark I hear about most: http://kano.net/javabench/ [kano.net] "The Java is Faster than C++ and C++ Sucks Unbiased Benchmark". Unbiased my foot. "I was sick of hearing people say Java was slow" is not a good way to start an unbiased benchmark. Let's have a few more problems:
Y'know, I think there's a reason for that...
Y'know, a couple of decades ago I was running non-native applications on a 7 MHz system with 1MB RAM (my old A500). They were fast, but not quite as fast as native. I'm now using a system in the region of 500 times faster, in terms of raw CPU, and with 2,048 times more memory. And y'know what, non-native stuff is fast, but not quite as fast as native. Something about code expanding to fill the available CPU cycles, methinks...
Re:Someone's been reading too many benchmarks (Score:3, Insightful)
As A Developer (Score:3, Interesting)
Java is the language I use the most, and it's good for small programs. It's definitely noticeably slower for large applications, but I don't think that's the big reason a lot of developers don't like it. Swing is nice, but the problem with Java and a lot of other "modern" languages is that they try so hard to protect developers from themselves and to enforce a certain development paradigm that the same features that make them really nice for writing small programs end up standing in your way for large and complex application development. Looking at the other side of the issue, C++ is fast, it can be fairly portable if it's written correctly, and it has a huge number of libraries available. C++ will let you shoot yourself in the foot, but that's because it's willing to stand out of the way and say "oh, you really want to do that? OK...". This makes it easy to write bad/buggy programs if you don't know what you're doing, but if you pay attention, have some experience, and have a plan for writing the software, then C++ can be less stressful to develop in.
Aside from any reasoned argument, I think a lot of developers are just attached to C/C++. I know that I just enjoy coding in C++ more than in Java. Not that Java is bad, and it can be fun to code in at times, but the lower-level languages just give me more of a feeling of actually creating something on the computer, as opposed to something inside some runtime environment.
Finally, one major reason to stick with C++ is that many interpreted languages aren't really as portable as they pretend to be. A language like C++ that really is only mostly portable, and then only if you keep portability in mind, can sometimes be more portable than other languages that claim to be perfectly portable and then make you spend weeks trying to debug the program because things are fouling up.
Analogies suck, but... (Score:4, Insightful)
One of the oldest analogies in computing is comparing algorithms to cooking recipes. We even have books like "Numerical Recipes" and "Perl Cookbook".
Well, to me, interpreted languages are like frozen dinners. They will do if you come home late at night and are too hurried and hungry to cook a proper meal. But C is like a fully equipped kitchen. It takes *much* more skill to cook a proper meal than to heat a frozen dinner in a microwave oven, but the results are incomparably better, not to mention the pleasure you get from doing it the right way.
Re:Analogies suck, but... (Score:5, Insightful)
By what metric? Expressiveness? Ease of implementation? Ease of maintenance? Error rate? Because, last I checked, low-level languages like C fail on all those points compared to a higher-level language.
Re:Analogies suck, but... (Score:3, Funny)
By what metric? Expressiveness? Ease of implementation? Ease of maintenance? Error rate? Because, last I checked, low-level languages like C fail on all those points compared to a higher-level language.
It's a little unfair to pick on the low-level language programmers. There'd be more of them here to defend themselves but they're all so busy looking for memory leaks and buffer overflows.
Cross Platform not related to language (Score:3, Informative)
Even if I use Java or C#, but don't use a cross platform toolkit (e.g. Windows Forms would not be cross platform), the application won't be cross platform.
It doesn't matter if the language compiles to byte code, if that byte code doesn't use a cross platform toolkit, it won't be cross platform.
Not today, not tomorrow. (Score:3)
Commodore 64 BASIC was interpreted. Computers now are obviously powerful enough to run C-64 BASIC code very quickly. Does that mean native code should have been abandoned years ago because technology advanced enough to allow C-64 code to run quickly? JIT code will always be slower than native code, and because both JIT and native programs will keep growing more complex as the technology advances, interpreted code can never catch up.
GRAMMAR NAZI (Score:3, Funny)
List of things you could be losing:
- your gray hairs (unless you can command them somehow)
- control
- the big game
- your way
List of things you could be loosing:
- the hounds
- your belt
- an arrow
- responsibility
CPUs still have *A LOT* to evolve (Score:4, Insightful)
Now find me one CPU that can do a decent simulation of the physics of continuous media. Why isn't there any game where you ride a surfboard realistically? Why do meteorologists use the most powerful number crunching systems in the world to be wrong in 50% of the cases when predicting weather a week ahead?
And what about artificial intelligence and neural networks? Find me a CPU that can do a decent OCR, or speech recognition. What about parsing natural language? Why can't I search in Google by abstract concepts, instead of isolated words?
No matter how powerful CPUs are, they are still ridiculously inadequate for a large range of real world problems. When you go beyond textbook examples, one still needs to squeeze every bit of performance that only optimized compilers can get.
Re:CPUs still have *A LOT* to evolve (Score:5, Insightful)
Re:CPUs still have *A LOT* to evolve (Score:3, Informative)
Well, yes and no (Score:5, Insightful)
Yes, we are seeing more development in non-native code, but it gets its power from the underlying libraries and core code that is native. The line between them gets fuzzy when you toss in JIT and scripting-to-native-code compilers. It really depends on the problem area. If I'm just parsing apart a bunch of log files to make reports, Perl or Python would be the best. Web apps seem to benefit from the safety net of non-native code, but I'm sure there are exceptions.
OTOH there are plenty of apps that need all the speed and memory the machine can provide. My current job involves real time financial data delivery. Writing that in Python or Java would (probably) not work out too well. OS code that works directly with hardware will probably stay in assembler or C. Fast low level stuff is what allows the slower high level stuff to be useful.
Either way you still need to know what you're doing, because in the end both native code and interpreted code run as opcodes on a CPU and use hardware resources. You need to mind memory use in Java just like in C, just in different ways. You need to watch what you do in inner loops in both Python and C++. Linear lookups can cause scaling problems in Perl, Java, Python or C/C++.
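The linear-lookup point can be sketched in C (the function names here are mine, purely for illustration): the same scaling trap bites in every language, interpreted or native.

```c
#include <stdlib.h>

/* O(n) per lookup: fine for a handful of keys, a scaling problem
   once both n and the number of lookups grow. */
int linear_find(const int *a, size_t n, int key) {
    for (size_t i = 0; i < n; i++)
        if (a[i] == key) return (int)i;
    return -1;
}

static int cmp_int(const void *p, const void *q) {
    int a = *(const int *)p, b = *(const int *)q;
    return (a > b) - (a < b);
}

/* O(log n) per lookup, provided the array is kept sorted. */
const int *sorted_find(const int *a, size_t n, int key) {
    return bsearch(&key, a, n, sizeof *a, cmp_int);
}
```

The same trade-off exists in Perl hashes vs. array scans, Java HashMap vs. ArrayList.contains, and so on; the language only changes the constant factor, not the big-O.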
It all depends on how fast you want to get from problem to solution, how much hardware you can throw at it, how complicated the problem is, how much time you have and many other factors.
Languages are tools, not a religion. The broader your knowledge the more tools you have at your disposal. Pick the best one for the job at hand.
Depends on the task (Score:5, Insightful)
If you're making a pretty GUI, you may want to use an easy-to-use and portable language and may not care about performance as much. If you're creating a high-performance backend, or doing some realtime processing, an interpreted language is practically useless.
Before deciding which paradigm is superior, you must narrow down the question to a type of task. Since the variety of tasks we use software for does not seem to be shrinking, it seems that this issue will not be resolved decisively anytime soon.
It depends (Score:5, Insightful)
Interpreted & JIT languages are "within a constant factor" of native code's speed, and CS students are taught that such things don't matter. ;-)
And for many types of apps, they really don't. Ten times slower than instantaneous is still instantaneous.
But people use computers for lots of things, and believe it or not, some of those things are still CPU-bound, and take so much work that humans can perceive the delay. Your word-processor is 99% idle so surely it doesn't need to be native, but you know that somewhere on this planet, a poor shmuck is staring at an hourglass icon, waiting for a macro to finish. The real question is: who cares? Is that guy's time worth more, or is the programmer's time worth more?
Bingo (Score:4, Insightful)
A lot of folks use languages like PDL, IDL, MatLab, Octave, or even NumPy to do array processing, and tout the fact that for large arrays those languages run "essentially as fast as C". But that's bullshit. All those languages vectorize their operations in exactly the wrong order - if you have a hundred million datapoints and you want to do six operations on each one, each of those vectorized languages will dutifully swap each of your hundred million datapoints out of RAM into the processor, multiply it by seven (or whatever), and push it back out to RAM before pulling them all back in to add six to each one. What you really want is to vectorize in pipeline order, doing all the operations you plan to on each data point once and for all so that you can take advantage of your processor's nice, fast cache. Nobody has (to my knowledge) figured out a way to do that, that is robust enough for an interactive/JIT language, so just writing it in "C" and getting the loops nested in the right order can speed you up by a factor of more than 10 on a modern AMD or Intel CPU.
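The loop-ordering difference described above can be sketched in C (a toy illustration of the idea, not any particular library's implementation): the vectorized-library style makes one full pass per operation, while pipeline order fuses the operations into a single pass.

```c
#include <stddef.h>

/* Vectorized-library order: two full passes over the data, so every
   element is pulled through the cache once per operation. */
void two_passes(double *x, size_t n) {
    for (size_t i = 0; i < n; i++) x[i] *= 7.0;
    for (size_t i = 0; i < n; i++) x[i] += 6.0;
}

/* Pipeline order: one pass, both operations applied while the
   element is still in a register. */
void one_pass(double *x, size_t n) {
    for (size_t i = 0; i < n; i++) x[i] = x[i] * 7.0 + 6.0;
}
```

Both compute the same result; for arrays far larger than cache, the fused version touches main memory half as often, which is where the claimed 10x can come from once you have half a dozen chained operations.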
If we all quit using native languages.... (Score:3, Interesting)
All is native, all is managed (Score:3, Informative)
The reverse is true if I pass my java source- or byte-code through a compile-once/not-JIT "native" compiler. Managed code suddenly goes native.
I predict people will work in the environment that is most efficent for them, where efficiency takes into account development costs, maintenance costs, run-time costs, political costs, etc. etc. etc.
There's also the question of "what exactly is managed code." If your program compiles against an exception-handling library, as most large programs today do, is that not a primitive form of code management? Granted, you may have to write your own management layer, but it's still not totally unmanaged. Even running as a process in a modern OS is a form of management, since a fatal-to-the-process error can invoke OS-level clean-up routines to close files and return resources.
To borrow from Shakespeare: Managed or unmanaged, that is the question.
The answer depends on your perspective.
Horses for courses (Score:3, Interesting)
Thinking back a few years, iirc the first Apple Mac had the Quickdraw graphics package written in machine language, didn't it? Not assembler, but instructions made of hand-mapped binary digits. It's the reason why those early Mac GUIs were able to extract such amazing graphic performance out of the Motorola 68000.
You can still buy Zilog Z8's, and embedded applications still exist for them.
Re:Horses for courses (Score:3, Interesting)
PHP vs ASP vs C++ vs JavaScript (Score:4, Informative)
In summary, C++ was 4 times faster than PHP, and in turn PHP was 3 times faster than ASP and JavaScript was truly appalling. I can't think of many applications that wouldn't benefit from being 4 to 12 times faster.
Re:PHP vs ASP vs C++ vs JavaScript (Score:3, Insightful)
So, the question is what is more important? Time to code or Time to run?
If you have a one-off task, and your developer costs money, you want quick-to-develop.
If you are going to run the program a bijillion times (a lot), you want quick-to-run at whatever developer cost.
I've coded C++, Java, and .NET services that all did what they needed to do (long running data collection applications).
There is more than one way to skin a cat, and the end result
Teh funnay! (Score:3, Funny)
No, stupid programming maybe... yes.
The problem: our native-code languages are bad (Score:5, Insightful)
The problem isn't native-code vs interpretive code. It's that our native code languages are terribly flawed.
Programming backed itself into a corner with C and C++. They're useful languages, but they're not safe. Now this has nothing to do with performance; you can have safety in a hard-compiled language. Ada, the Modula family, and the Pascal/Delphi family did it. The problem is that, because of some bad design decisions in C (the equivalence of arrays and pointers being the big one), you have to lie to the language to get anything done. This makes safety hopeless. The basic problem of C is that you have to obsess on "who owns what" for memory allocation purposes, and the language gives you no help with this. The language doesn't even adequately address "how big is this". With those two design defects, we're doomed to have memory safety problems. Which we do.
C++ at first seemed like an improvement, but as it turned out, C++ adds hiding to C without improving safety. Note that this seems to be unique to C++; no prior language did that, and no language since has taken that route. Attempts have been made to work around the problem within the structure of C++, but with limited success. The "auto_ptr" debacle and the endless problems of trying to make sound reference-counted allocation work reliably indicate the fundamental limitations of the language. You just can't fix those problems in C++ without breaking backwards compatibility. (See my postings in comp.std.c++ over the last decade for more details on this.)
Java was invented mostly to get around the memory safety problems of C and C++. The fact that Java is usually semi-interpretive has nothing to do with the language design; that's a consequence of Sun's original focus on applets. There are native-code compilers for Java; GCC contains one. There are competitive advantages of locking the user into a giant environment (J2EE in the Java world, .NET in the Microsoft world), which is part of why we're seeing so much of that. But it's not a language design issue.
Microsoft came up with C# as their answer to Java, and most of the same issues as with Java apply.
What's so embarrassing about all this is that it's quite fixable. The solutions were known twenty years ago. If you have a language where the language knows how big everything is, and the subscript checks are hoisted out of loops at compile time, you get safety with high performance. There were Pascal compilers that got this right in the 1980s.
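The hoisting idea can be written out by hand in C (illustrative only; in a safe language the compiler performs this transformation for you):

```c
#include <stdlib.h>
#include <stddef.h>

/* Naive safe subscripting: one bounds check per element access. */
long sum_checked(const int *a, size_t len, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i >= len) abort();   /* check inside the loop */
        sum += a[i];
    }
    return sum;
}

/* Hoisted check: a single test before the loop proves that every
   access is in bounds, so the loop body runs at full speed. */
long sum_hoisted(const int *a, size_t len, size_t n) {
    if (n > len) abort();        /* one check, hoisted out */
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += a[i];
    return sum;
}
```

Both versions are equally safe; the second costs essentially nothing over unchecked code, which is the point: bounds safety and native-code performance are not in conflict.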
On the allocation front, you can use either garbage collection or reference counting to automate that process. Java and C# are garbage-collected; Perl and Python are reference-counted, and in practice, programmers in those languages seldom have to think about memory allocation issues. Allocation overhead can also be hoisted out of loops. Java compilers are starting to do this, allocating temporary variables on the stack. Reference count updates can be optimized similarly. There's nothing to prevent using these techniques in a native-code compiler.
And that's how we got to where we are today, with buffer overflows, zombies, and blue screens of death, papered over with a layer of inefficient interpreters. Fortunately the hardware people have held up their end and made it possible to live with this, but we on the software side should have the understanding and grace to be embarrassed by it.
Re:The problem: our native-code languages are bad (Score:3, Interesting)
Well, programming languages are about the man/machine interface. Their design is very much a tradeoff between the needs of the humans that'll be using them and the needs of the computers that'll be running them. Simple example - adding features to a language makes it harder to learn but potentially more efficient.
The other thing is you have to separate language from implementation. Java isn't just slow because of the design of the language, but many other choices that went into it like using a virtual mac
Re:The problem: our native-code languages are bad (Score:3, Insightful)
But the _real_ issue is that there are times: 1) when you want arrays and pointers to be the same thing, and 2) when you don't
No, the real issue is that you can't talk about the size of variable-sized data in C.
If C had syntax like
int write(int fd, size_t length, char buf[length]);
instead of
int write(int fd, char* buf, size_t length);
then size information would be carried along with the data, and checking would be possible. In the syntax we have now, you're lying to the compiler; you're sa
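For what it's worth, C99 does accept a declaration very close to the proposed one, but only as documentation: the compiler still passes a bare pointer and checks nothing. A sketch (the helper function is made up for illustration):

```c
#include <stddef.h>

/* C99 lets the length parameter appear before the array it sizes,
   but the [length] carries no enforcement: no bounds check is
   generated, and a shorter buffer compiles without complaint. */
size_t count_zeros(size_t length, const char buf[length]) {
    size_t zeros = 0;
    for (size_t i = 0; i < length; i++)
        if (buf[i] == 0) zeros++;
    return zeros;
}
```

So the syntax exists; what C lacks is any obligation for the implementation to act on it, which is the poster's point about lying to the compiler.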
The End of [Insert Something] (Score:4, Insightful)
As always this is a non-existent problem hyped up by people who don't have a clue.
First, the performance hit of managed code is not "negligible". For tasks that rely on raw math power it can be significant, like 3D engines, data processing and so on.
But if you're doing, say, a rich client, your code will most likely just call existing multimedia, communication and input APIs. Then managed code's performance hit is next to nothing, since most of the time is spent processing the commands from the APIs, not running your own code anyway.
Thing is: can we drop the raw power of native code and go all managed? Hell no. And we never will. While plenty of consumer applications perform well as managed code, there will always be a class of professional software where 20% better performance means tens of thousands of dollars saved for a middle-sized company.
But the big miss in the article is that managed and non-managed code can coexist. In
So wait, we were talking about end of what again?
Philosophical reason why higher level is key (Score:4, Insightful)
Note that this is nothing new in software engineering. Most software a few decades ago was written in low-level code, even assembly language. If you did a survey then about the ratio of low-level to high-level coding, and compared it to a survey today, you'd realize we don't even need to ponder where things are going: there is plenty of evidence today that going higher-level is the path the software industry is taking.
Native coding will become more and more of a niche, confined first to operating systems, drivers, kernels and such. But eventually I can see even a fully-operational OS being written in a high-level language. In a sense that's what's happening today when you combine all the Web Services and tools we find on the Web.
So, it's just a matter of time before everyone codes in high-level languages, and even today's high-level languages will seem low-level compared to what we're going to replace them with in the future.
Re:What?!?!? (Score:5, Funny)
Re:What?!?!? (Score:5, Funny)
fun fact: slashdot is written in an interpreted language (perl).
wait a minute, the kid might be onto something ...
Re:What?!?!? (Score:5, Informative)
And no, the STL does not suck.
You could always try "D" (Score:3, Informative)
Re:What?!?!? (Score:5, Informative)
In which frigging parallel universe are you living, please? I want to go there. C being orders of magnitude faster than interpreted languages I agree with, but C easier to debug? Either you've never tried interpreted languages (say Python or C#; PHP is not a language) or you never got past "hello world" (hell, even hello world is harder to debug in C).
In a word, you want D.
Or another nice high-level compiled language. Most of them are functional though (Haskell, *ML) so you may have some trouble adapting. And they usually don't allow variable-length strings, being functional and all.
Re:What?!?!? (Score:3, Informative)
In Java, some of the behavior -- indeed, a lot of the underlying behavior -- when it comes to fine-tuning for performance, or 'why does this thing eat 400 megs of RAM?!' is hidden from the user; it's part of the underlying interpreter, and beyond the view of the debugger. In C, I can track my memory allocations and performance much more readily in a debugger than I can in
Re:What?!?!? (Score:2)
Re:What?!?!? (Score:3, Informative)
Re:What?!?!? (Score:3, Interesting)
It's all bs.
15 years ago I benchmarked assembler vs. C for graphics code - C was 200x slower. There is NO way that any interpreted runtime will even begin to approach the "bare metal", never mind C.
Most of the benchmarks crowing about the speed of JIT compilers ignore the startup and initialization time, as well as the end-run time.
I couldn't believe some of the naive assumptions in one published benchmark - they had the Java code print out its own start and end time and said "see, only 4x slower than C"
Re:What?!?!? (Score:3, Insightful)
Re:What?!?!? (Score:3, Insightful)
In this day and age, the compiler is probably smarter than you are. It can be told which processor your executable is targeting and use its instruction set as appropriate. This is particularly important for x86 chips, as there have been a number of SIMD extensions added to the instruction set in the last 15 years.
15 years ago, 3D graphics on a computer were unheard of. Now, they are ubiquitous
Re:What?!?!? (Score:3, Insightful)
Re:What?!?!? (Score:5, Interesting)
One of the neat things was the 4k graphics demo contests - try to write the most impressive graphics demo in only 4k of assembler. There was a LOT of code writing code in memory, code using other code that had already run as raw data for designing the next iteration, then using it again as code ... a 4k program that could take you through a 3-dimensional roller coaster ride for 20 minutes, never repeating, all done in real time, on hardware that you wouldn't deign to pick out of the scrap heap.
Re:What?!?!? (Score:3, Informative)
I guess we can call that the new Peter Principle - every piece of code rises to its coder's level of incompetence :-)
Every year we hear stories about how C is dead, but it's still alive and kicking. The "java chip" that the Java OS was supposed to run on never materialized. Various other managed languages are either unmanageable (hello, Perl! - sorry, had to throw that one in) or just can't compete
Re:What?!?!? (Score:3)
--- In other words, the lousy formatting of the source that follows is not my fault
To get a real comparison, just try running these two programs:
#include <stdio.h>

int main(int argc, char* argv[], char* env[]) {
    int a, b, c, d, e, f, total;
    total = 0;
    printf("Content-type: text/html\r\n\r\n");
    /* ... the nested counting loops over a through f, and the second
       comparison program, were cut off in the original post ... */
    return 0;
}