
The End of Native Code? 1173

psycln asks: "An average PC nowadays holds enough power to run complex software programmed in an interpreted language which is handled by runtime virtual machines, or just-in-time compiled. Particular to Windows programmers, the announcement of MS-Windows Vista's system requirements means that future Windows boxes will laugh at the memory/processor requirements of current interpreted/JIT compiled languages (e.g. .NET, Java, Python, and others). Regardless of the negligible performance hit compared to native code, major software houses, as well as a lot of open-source developers, prefer native code for major projects even though interpreted languages are easier to port cross-platform, often have a shorter development time, and are just as powerful as languages that generate native code. What does the Slashdot community think of the current state of interpreted/JIT compiled languages? Is it time to jump in the boat of interpreted/JIT compiled languages? Do programmers feel that they are losing - an arguably needed low-level - control when they do interpreted languages? What would we be losing besides more gray hair?"
This discussion has been archived. No new comments can be posted.

  • What else (Score:3, Funny)

    by Anonymous Coward on Monday June 12, 2006 @08:18PM (#15520725)
    We might be loosing our ability to spell the verb "lose".

    No, wait, too late.
    • by Kelson ( 129150 ) * on Monday June 12, 2006 @08:24PM (#15520755) Homepage Journal
      No, no, obviously, they're loosing grey hair in the same sense that one "looses the dogs" -- i.e. they're setting the grey hair free.
      • by Anonymous Coward
        Thank you!

        I was beginning to think I had gone mad, or perhaps there was a committee that changed the spelling of "lose" without telling me. I honestly haven't seen anyone spell it correctly in months. It's starting to annoy me as much as people who can't tell they're from there from their.
      • by mysticgoat ( 582871 ) * on Tuesday June 13, 2006 @12:18AM (#15521765) Homepage Journal

        I loose my gray hair when I get off work. The ponytail and smoothly coiffed beard are necessary to convey the appropriate image in the office, but in the privacy of my home I let the beard go bushy and the tresses bounce about my shoulders.

        But maybe this is more information than you really wanted to know...

    • Re:What else (Score:4, Interesting)

      by eonlabs ( 921625 ) on Tuesday June 13, 2006 @12:38AM (#15521842) Journal
      If your native code is running as slow as interpreted, I would really recommend getting that looked at. It would seem that people are losing the ability to write clean code since the crutch of interpreted languages is hiding so much of the finer grains of computer science. Sure, if you're writing apps that are fine being slow, interpreted doesn't matter. If you're writing higher-end programs like games, I would recommend cross-platform libraries in a native language. I'm currently working on learning SDL in C/C++ for exactly that reason.
      • Re:What else (Score:5, Interesting)

        by julesh ( 229690 ) on Tuesday June 13, 2006 @07:14AM (#15522841)
        If your native code is running as slow as interpreted, I would really recommend getting that looked at.

        The question you have to ask, of course, is where is the bottleneck. And the answer is fairly obvious if you analyse the performance of modern applications on a variety of different hardware: IO is the bottleneck in almost every case. There's no other explanation for why my 400MHz desktop (with a nice, fast hard disk) performs as well as or better than my 1.7GHz laptop (with a slow, energy saving hard disk but otherwise similar specs) for many applications (including Firefox, OpenOffice, etc... the kind of things that the average user runs daily) while the laptop wipes the floor with it for others (media players, SketchUp).

        The point is, if you're going to be waiting 50ms for disk access, why bother shaving 2ms of processing time by running in a native compiled language? Nobody will ever notice. And you may find the more modern and high-level design of the interpreted language's library allows you to write faster performing IO code more easily than the simple & low level libraries that are supplied with most compiled languages, at which point you may get better results for the same programming effort for using that language.

        In the end, fast programs are about good design, not language choice. Higher level languages often allow you to spend more time on design and less on implementation. All real-world projects have a limited time scale; ISVs just try to do the best they can with the time they have available, which isn't usually producing something miraculous.
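
The 50ms-vs-2ms point above can be sketched with a toy benchmark. Everything here is invented for illustration (the delays, the record format, the function names); a real read would block in the kernel rather than sleep:

```python
import time

def fetch_record():
    # Simulated disk wait: ~50 ms of IO latency, the figure quoted above.
    time.sleep(0.050)
    return b"some,raw,record"

def parse_record(raw):
    # The "interpreted overhead" part: a couple of ms of CPU work in the VM.
    return raw.split(b",")

start = time.perf_counter()
fields = parse_record(fetch_record())
elapsed = time.perf_counter() - start
print(f"total: {elapsed * 1000:.0f} ms, fields: {fields}")
```

Even if a native rewrite made parse_record free, total wall time barely moves; the sleep (the disk) dominates.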
        • Re:What else (Score:4, Insightful)

          by petermgreen ( 876956 ) <plugwash.p10link@net> on Tuesday June 13, 2006 @08:16AM (#15523064) Homepage
          The point is, if you're going to be waiting 50ms for disk access, why bother shaving 2ms of processing time by running in a native compiled language? Nobody will ever notice.
          what they will notice is when the gc decides it needs to scan a memory area that has been swapped out crowding out any other IO on the system.

          Average performance only matters for a few time-consuming tasks (and they do still exist); what matters far more in end-user apps is any apparent hang. If a button takes 100ms to get a response I probably won't notice unless I'm gaming; if a button takes 10ms 99% of the time and 1 second the rest, then I damn well will notice despite the better average performance. App startup time is also a killer in terms of perceived performance (and languages like java are terrible for this, especially the first run on boot).
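
The pause complaint is not specific to Java; any GC'd runtime has it, and the usual workaround looks the same everywhere. A minimal sketch in CPython (the handler is a placeholder; note CPython's gc module only controls the cyclic collector, reference counting still frees promptly):

```python
import gc

def handle_button_press():
    # Placeholder for the latency-sensitive work (redraw, respond, ...).
    return "ok"

gc.disable()                 # no cyclic-collection pauses inside the handler
try:
    result = handle_button_press()
finally:
    gc.enable()
    gc.collect()             # pay the GC cost at a moment of our choosing
print(result)
```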

          And you may find the more modern and high-level design of the interpreted language's library allows you to write faster performing IO code more easily than the simple & low level libraries that are supplied with most compiled languages, at which point you may get better results for the same programming effort for using that language.
          java.io really sucks for some types of apps as it basically forces you to have one thread per socket, and the new java.nio isn't really any higher level than bsd sockets. I don't know what the situation is like over in .net land, though; maybe it's better there.
      • you'll learn (Score:4, Insightful)

        by m874t232 ( 973431 ) on Tuesday June 13, 2006 @09:03AM (#15523284)
        If your native code is running as slow as interpreted, I would really recommend getting that looked at. It would seem that people are losing the ability to write clean code since the crutch of interpreted languages is hiding so much of the finer grains of computer science.

        First of all, when experienced programmers write big systems in interpreted languages, you can rest assured that they know what they are doing and are doing the benchmarks to make sure they aren't losing performance where they need it. If they need special, high-performance algorithms or libraries, they will figure out the minimal set of C/C++ primitives they need and make them a native code library inside the scripting language.
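
The "native primitives inside a scripting language" pattern can be as small as pushing a hot loop into a builtin that is already implemented in C. A toy sketch (the checksum itself is invented for illustration; a real project would use a C extension module for anything this simple can't cover):

```python
def checksum_pure(data):
    # The per-byte loop runs entirely in the interpreter.
    total = 0
    for b in data:
        total = (total + b) % 65521
    return total

def checksum_native(data):
    # Same arithmetic, but the loop over the bytes now happens inside
    # sum(), which CPython implements in C -- the "native code library"
    # here is just the interpreter's own runtime.
    return sum(data) % 65521

data = bytes(range(256)) * 1000
assert checksum_pure(data) == checksum_native(data)
print(checksum_native(data))
```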

        And whether code is "clean" really has nothing to do with the language. People can write clean Perl code and unclean C code.

        Finally, "the finer grains of computer science" are absolutely and positively not concerned with the kind of low-level mess that C exposes.

        I'm currently working on learning SDL in C/C++ for exactly that reason.

        Good, so you are in a very early stage of your development as a programmer. As you mature, you'll figure out how to get the job done without wasting all your time on C/C++ programming.

        In general, when experienced programmers use languages like Python or Ruby with native code plug-ins, or when they use languages like Java or C#, they produce code with better performance and fewer bugs than straight C/C++, simply because they end up having more time implementing good data structures and focussing their efforts where it counts.
  • by Anonymous Coward on Monday June 12, 2006 @08:21PM (#15520741)
    When your web-based datastore gets 50,000 inserts per second, hovers between 15 and 20 billion rows and endures a sustained query rate of 43,000 queries per hour, tell me which part of it you want coded in PHP.
    • by kpharmer ( 452893 ) on Monday June 12, 2006 @09:37PM (#15521092)
      > When your web-based datastore gets 50,000 inserts per second, hovers between 15 and 20 billion rows and endures a sustained query rate
      > of 43,000 queries per hour, tell me which part of it you want coded in PHP.

      hmm, the warehouse I work on has multiple databases with billions of rows in them, can hit insert rates of 100,000 rows a second, can experience 60,000 queries/hour - many of which are trending data over 13 months, has hundreds of users. Many of these users are allowed to directly hit some of the databases with whatever query tool they want. Scans of a hundred million rows at a time aren't uncommon (though seldom happen more than a few dozen times a day).

      This app is completely written in korn shell, python, php and sql (db2). Looks like Ruby is also coming into the picture now; it will probably supplant much of the php in order to improve manageability.

      Oh yeah, and the frequency of releases is quick and its defect rate is low. And we're planning to begin adding over 400 million events a day soon. I've done similar projects in C and java, never anywhere near as successfully as in python and php.

      We might consider rewriting a few select python classes in c. Maybe, if we port the ETL over to the Power5 architecture, where Psyco doesn't run. Otherwise, it's cheaper to just buy more hardware at this point - since each ETL server can handle about 3 billion rows of data/day with our python programs.
      • I think you guys are missing the original poster's point. I think he is using the standard "right tool for the right job" line. He is saying that the db system shouldn't be an interpreted language since performance is very important there. That is the one system you probably wouldn't want to be in PHP. (Disclaimer: I'm just clarifying what I guess to be their point.)

        BTW: I use Perl with Postgres, and yes, I wouldn't want Postgres to be written in Perl or PHP. I do, however, love using Perl for most everything
      • by Fulcrum of Evil ( 560260 ) on Monday June 12, 2006 @11:56PM (#15521657)

        the warehouse I work on has multiple databases with billions of rows in them, can hit insert rates of 100,000 rows a second, can experience 60,000 queries/hour

        Scans of a hundred million rows at a time aren't uncommon (though seldom happen more than a few dozen times a day).

        Yes they are. Go read what you wrote.

        This app is completely written in korn shell, python, php and sql (db2).

        One guess where 99% of the cycles are in that (and 90% of the dollars).

        • My one guess (Score:5, Insightful)

          by xant ( 99438 ) on Tuesday June 13, 2006 @03:02AM (#15522258) Homepage

          One guess where 99% of the cycles are in that

          I'll take a guess! And it's even the one you want me to guess: the db2 instance. That's the fucking *point*. The fast C code that's executing has already been written.. some of it is in the python interpreter, some of it is in the ksh and php interpreters, most of it is in the db2 engine. Very fast algorithms doing what they do best: optimized, super fast loops operating on static types.

          That is WHY python and other interpreted languages achieve the speed they achieve.. because what they do is allow you to glue together C code written by other people. And, because the Python code is much simpler, you can understand the interactions between the fast code more easily, and see where your code fails to perform well. It's always because you're putting loops together inefficiently and making poor design choices, not because of the speed of the interpreter--and now that your code is short enough for you to see that, you can fix it.
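
The "putting loops together inefficiently" failure mode has a classic Python instance: building a big string with += (which can copy everything accumulated so far on each pass) instead of letting join make one pass in C. A minimal sketch, with invented data:

```python
def build_report_slow(rows):
    # Each += may copy the whole string built so far: quadratic in the
    # worst case, and every iteration runs at interpreter speed.
    out = ""
    for row in rows:
        out += row + "\n"
    return out

def build_report_fast(rows):
    # One pass over all the pieces, inside the C implementation of join.
    return "\n".join(rows) + "\n"

rows = ["row %d" % i for i in range(1000)]
assert build_report_slow(rows) == build_report_fast(rows)
```

Same output, very different scaling; the fix happened at the Python level, with no interpreter change needed.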

          Your application logic doesn't need to be super fast. It needs to be super agile, so you can refactor and accommodate changing requirements and make smart decisions about which pieces you are going to use and how you are going to use them together.

          C won't die, at least, not for a long, long time*, and that doesn't bother me, a hardcore Python programmer, in the least. Somebody has to do the dirty job of writing those fast loops. Meanwhile I'll be here zipping through the application implementation.

          *It will eventually be replaced by Pyrex, of course.
          • Re:My one guess (Score:4, Insightful)

            by nettdata ( 88196 ) on Tuesday June 13, 2006 @04:28AM (#15522477) Homepage
            Well said... too many people lose sight of the goal, and think that all efficiency boils down to CPU cycles.

            In reality, it is a compromise between many factors, including cost, flexibility, rate of change, manageability, and performance.

            The only REAL requirement is that it does its job at a cost that is reasonable and sustainable to the company.

            If you spend 10 times more on development and increase time to delivery in order to save a small fraction of that on hardware, you've lost.

            For what it's worth, we do ALL of our development in interpreted languages, mostly Java, some PHP, Ruby on Rails, etc., and it all comes down to whatever is the best tool for the job. Very rarely do we ever come across a situation where 2 clients have needs that result in the exact same tools being used, unless it's just to use tools that we're more familiar with so that we can get the job done faster for them.

            It's all about balancing compromise.

      • by teknomage1 ( 854522 ) on Tuesday June 13, 2006 @12:00AM (#15521678) Homepage
        I suspect you've missed the point. The database (db2) is doing most of the work and is indeed written in native code. The interface logic most certainly is appropriate to be high level, but the database engine itself is probably better off as native code. Ditto for the operating system kernel.
          The interface logic most certainly is appropriate to be high level, but the database engine itself is probably better off as native code. Ditto for the operating system kernel.

          Thank you. In that one, concise post, you have provided the only credible answer to the question in the title: no.

          As always, we should use the right tool for the job. For anything where processing performance matters, native code blows away anything interpreted, and always will. I loved this little bit of rhetoric in the origin

    • by Memnos ( 937795 ) on Tuesday June 13, 2006 @02:49AM (#15522220) Journal
      Hmm.. as well. I worked on a team that developed a DB app that was nine PETABYTES and growing constantly. (Our little test database was 60 terabytes.) It will soon be one of the five largest databases in the world, and could extend into the exabyte range (you can guess who it's for.) We use Java and ASP.NET on the server and Java and an AJAX solution on the client. We throw shitloads of big boxes at it and we don't give a damn, because it works. Do not get me started on how analytically complex the algorithms are that use that data...
      To be fair it's more up to the database engine - and they _do_ seriously differ in speed despite what you might think otherwise (e.g., MSSQL is up to 100 times slower on lots of simple selects than MySQL or Firebird - those I have extensive experience with).

      But, right, PHP is slow. That's the second reason why I wish to move my web-development to Python. Python+Psyco kick ass unbelievably (speed-wise) - add "import psyco; psyco.profile()" to the end of your site.py :)

      Oh, and the first reason is that PHP gets mess
  • by Anonymous Coward on Monday June 12, 2006 @08:24PM (#15520753)
    Didn't we already do this with lisp, like 40 years ago?
    • by billstewart ( 78916 ) on Monday June 12, 2006 @09:21PM (#15521031) Journal
      LISP was a simple, elegant language that demonstrated that almost any language written after 1961 was unnecessary, except for demonstrations of concepts like Object-Oriented programming that could then be re-implemented into LISP, and that any code written in older languages could be replaced with something better :-)

      BASIC had its problems, warping a generation of programmers (including me), but it was small and light and didn't take long to learn unless you wanted to find enough tricks to get real work done.

      FORTH was smaller, lighter, and faster. It was overly self-important, considering its reinvention of the subroutine to be something new and radical, but if you wanted to program toasters or telescopes it was the language to use. Postscript was somewhat of a Forth derivative.

      P-Code was a nice portable little VM you could implement other things on.

      And then there was Java, which grew out of Gosling's experiences with NeWS, a Postscript-based windowing system. If you wonder why you're not using Netscape and maybe not using Java, and why you've probably got Windows underneath your Mozilla, it's because it became obvious to lots of people that Netscape+Java was a sufficiently powerful and easily ported environment that the operating system underneath could become nearly irrelevant - so Microsoft had to go build a non-standards-compliant browser and wonky Java implementation and start working on .NET to kill off the threat. It wasn't that conquering the market for free browsers was a big moneymaker - it was self-defense to make sure that free browsers didn't conquer the OS market, allowing Windows+Intel to be replaced by Linux/BSD/QNX/MacOS/OS9/SunOS/etc.

      • by Latent Heat ( 558884 ) on Tuesday June 13, 2006 @12:02AM (#15521689)
        A lot of people are dismissive of Java as having failed on client GUI apps. What is it now, 2006, and Java came out around 1996? I know we talk about "Internet time", but major software concepts can take years to evolve, and Windows started out sometime in the 1980's but it wasn't until Windows 95 that it started kicking backsides and taking names. So maybe Java will eventually have its day.

        I am a Pascal programmer from ancient days and have been pretty much a Delphi person on account of my Pascal affinity and other requirements, but I have implemented GUI apps in C++, C#, Java, Matlab, and VB. I am seriously looking at Java/Swing as the next wave of what started as DOS/Turbo Pascal and got reimplemented in Windows/Delphi. Java simply couldn't do in 1997 what I was doing even at that time in Windows, just plain couldn't from the standpoint of features and performance. Java is not-quite-there-yet with the features I use in Windows, but it is much farther along in 2006 than in 1997 and is closing the gap with graphics acceleration and other features. It may surpass Delphi for what I do if it proves to be easier to do multi-threaded apps to take advantage of multi-core.

        While my complex data visualization stuff is a long way off from being done in Java, the sort of simple data visualization stuff that I was doing in 1997 under Windows works quite well under Java, and it works equally well under Linux. If anything will get me to switch to Linux it will be that I have a collection of graphical data visualization programs for the work I do written in Java that will work equally well under Linux. While I can write a faster program with more features in Windows, the Java implementation is proving good enough for a lot of stuff that I am doing and it breaks me loose from Windows as well.

        SUN seems to be in this Java business for the long haul, seemingly spinning their wheels making it available for free and always being a step behind Windows in features. But at some point Java/Swing programs will have accumulated enough performance and features that they are good enough for what people want to do, and they have the added advantage of not being tied to Windows. This idea that something like Java could transcend the OS may yet happen for client GUI apps.

        • by JulesLt ( 909417 ) on Tuesday June 13, 2006 @04:28AM (#15522476)
          I'm not sure that we've moved that much. I think Gosling and the other originators of Java are still pushing in the wrong direction with GUI; see his remarks on Eclipse / SWT.

          It is not a Java problem per se, but goes right back to the issue of creating cross-platform client apps in the first place. Many of us like to think of the OS as something that provides services - disk access, windowing, etc - that look like they can easily be abstracted - and they can. However, as well as being OS, Windows, OS X, KDE and GNOME are platforms - a set of programming APIs and a philosophy.

          Rather than transcending these differences, Swing is yet another variation. Potentially you could make a Swing app that looked and behaved identically to a Windows app - but it would feel plain wrong on OS X. The reverse is equally true (well, just about - I don't think you can use the top-of-screen menu bar in Swing apps).

          I think SWT may be the better approach - it's not write-once run-anywhere, but you are reducing the amount you need to port. And as said above, you need to consider the philosophical differences between platform HCI anyway.

          Ironically one of the few really successful Java GUI apps I know is a data visualisation tool - it mostly consists of OpenGL calls so it's a bit of a misnomer to say it's Java, but it's back to the point that it's the APIs that count. OpenGL is a nice x-platform API.
        • simple (Score:3, Insightful)

          by m874t232 ( 973431 )
          There are several reasons.

          (1) Java's market presence for UI applications has been decreasing: applets have largely disappeared, and the JRE is preinstalled on fewer and fewer desktops.

          (2) Even on OS X, where Java is pre-installed and exceptionally well supported and integrated, there are few applications written in Java, and even fewer written in Java using Swing.

          (3) Java's UI classes don't integrate well with native desktops, and it is impossible with them to write a cross-platform UI that conforms to ever
          • Re:simple (Score:3, Interesting)

            by julesh ( 229690 )
            As a Linux and Mac user, I don't want cross-platform scraps thrown to me, I want high-quality applications that integrate well with my desktop.

            What few people seem to have realised is that the best way of achieving cross platform portability is not to throw out the systems you're porting to and implement everything from scratch (the AWT/SWING approach). This just results in applications that feel wrong whichever system you run them on. The answer is to use native widgets in a way that is flexible enough t
      • by killjoe ( 766577 ) on Tuesday June 13, 2006 @01:26AM (#15521989)
        I agree with you but...

        I have started to believe that the proof is in the pudding. I don't know lisp but I know some zope. Zope, much like lisp, is elegant, innovative, comprehensive, well designed and capable of almost anything. Just like you probably scratch your head and wonder why people code in PHP or java when they could code in lisp, I wonder why people code in PHP or java when they could have used zope and python.

        But I am ready to give that up. I am now under the impression that zope isn't everything I thought it was. I mean, if zope is so great then how come there are only three or four blogs written for it and not one of them is 1/10th as good as wordpress, which is written in PHP? How come not one ticket tracker written in zope is 1/10th as good as eventum, written in php?

        I ask those questions rhetorically, though. I know the answer. The answer is that zope is very hard. You have to be a very smart and very dedicated person to climb the ladder of zope and attain zope zen, and there are just not enough people in this world that are willing to put forth that much effort.

        In the end it's better to be easy than to be good. Look at how gracefully ruby balances on that rope. ROR is easy and it's innovative. That's why great software is being written in rails while the zope folks are pounding on zope3 trying to make it easier for developers to write decent software.

        BTW I am not even going to attempt to learn zope3. I have to break up with zope. Thanks for the great times guys.
        • if zope is so great then how come there are only three or four blogs written for it and not one of them is 1/10th as good as wordpress which is written in PHP? How come not one ticket tracker written in zope is 1/10th as good as eventum written in php?

          You might think you know the answer, but you're wrong. The real answer is very obvious. It is this: zope has a low installed base at ISPs. Perhaps this is because PHP is easier than Zope, I don't know. I suspect it's just because more people use PHP becaus
  • Its inevitable (Score:5, Insightful)

    by greywire ( 78262 ) on Monday June 12, 2006 @08:24PM (#15520754) Homepage
    As the overhead of interpreted languages gets smaller (through faster systems, JIT, and other optimizations), it's inevitable that eventually we'll all be using one (unless you are one of the few people who have to program the virtual machines, the JIT compilers, etc).

    And this is a good thing, because it means more independence from certain CPU architectures.

    Someday, you will be able to use any OS on any CPU and any Application on any OS. This is one step in that direction.

    • Re:Its inevitable (Score:3, Insightful)

      by lbrandy ( 923907 )
      As the overhead of interpreted languages gets smaller (through faster systems, JIT, and other optimizations), it's inevitable that eventually we'll all be using one (unless you are one of the few people who have to program the virtual machines, the JIT compilers, etc).

      This cracks me up. As we head towards multi-core and massively-multi-core, you are telling me that things are going to get better for interpretive languages? Compilers are about to be kicked in the pants because we can only do thread-level-paral
      • Re:Its inevitable (Score:4, Insightful)

        by David Greene ( 463 ) on Monday June 12, 2006 @09:47PM (#15521129)
        There will come a day where we expect our compilers to encode parallel information into the code so it will run faster on our 1024 core machines.
        Too late. That day has been around [cray.com] for 20 years already.
      • Re:Its inevitable (Score:5, Insightful)

        by evought ( 709897 ) <{moc.xobop} {ta} {thguove}> on Monday June 12, 2006 @10:16PM (#15521237) Homepage Journal
        Your argument actually points out how much *more* valuable interpreted and JIT languages will get. Are you going to compile new binaries for every architecture and combination of cores? Or, are you going to encode the logic of the application and have your JIT figure out how to optimize for the specific platform. Before you say that JITs cannot hack this, remember that they use exactly the same technology as your 'standard' compilers.

        Secondly, if it is a question of taking too long to compile, realize that you can always ship optimized binaries from high-level languages (e.g. GCJ), but you cannot readily make your optimized native code work on a new platform.
      • Re:Its inevitable (Score:5, Insightful)

        by cryptoluddite ( 658517 ) on Tuesday June 13, 2006 @01:33AM (#15522007)
        I actually have a dual-core machine at work and the only single program I have ever seen use more than one CPU at a time was written in Java. Even the single-threaded Java programs use like 110% CPU as the garbage collector (or whatever) runs in parallel (this was the genome benchmark from language shootout iirc). As in, "cpu time 110s, wall clock time 100s". Java is already ahead on multi-core.

        Basically you are smoking crack thinking that compiled languages are going to thrive on multi-core. They aren't. Hell, it's hard enough to keep data access correct with just a single thread. And with a "safe" language like Java the compiler *knows* there are no aliases for an array, so some kinds of access can automatically be done in parallel, whereas in a separately compiled/linked language like C there are few ways for the compiler to know this. When there's not enough active threads per core the other cores can GC the inactive programs. Safe languages have huge advantages on multi-core.
  • by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Monday June 12, 2006 @08:26PM (#15520763) Homepage
    Have you ever USED a Java application or applet on Windows? Once they launch they perform pretty well. Once they launch.

    On every computer I use with Windows it takes up to 20-30 seconds to launch Java. Web pages have a little "yes, you have Java" applet? Prepare for a massive slowdown. I'd hate to see what it does with applications that may just appear to hang the computer while Java launches. And don't get me started on that stupid "Welcome to Java 2" dialog that pops up from the taskbar.

    Now on my Mac, things are different. Java applets launch just as fast as Flash if not faster (basically, instantly). This is on my G4, so things would only be better with a CoreDuo. Same goes for applications. I've been using an application called YourSQL for over a year. It accesses a MySQL server and works great. It's very fast and has a perfectly native interface. You would think it is a native app, but I recently discovered that it's Java. The end user would NEVER notice that kind of thing, except I was trying to debug a problem in my own code so I went to investigate how it worked. It was Open Source and when I downloaded it... it was Java.

    Java is fantastic on Mac OS X. I don't know how fast it is on Linux. But as long as there is a 20-30 second launching penalty on Windows, Java will never be accepted. I don't think .NET has this problem, but probably because MS is keeping it memory resident in Vista even if no one is using it.

    Then again, maybe Mac OS X preloads Java. I don't know if it has tricks, or if the Windows implementation is just that bad.

    • Java and Mac OS X (Score:5, Informative)

      by Kelson ( 129150 ) * on Monday June 12, 2006 @08:46PM (#15520871) Homepage Journal
      Mac OS X treats Java as just another app framework [apple.com], equivalent to Cocoa or Carbon. (I'm fairly certain I've seen an older version of that diagram that also listed Classic in that layer.) I imagine they've done a bunch of optimizations to tie it into the system, though I don't know whether it launches the runtime at boot or not. You've probably noticed that on Mac OS, you get your Java runtime from Apple, not from Sun or IBM.

      The downside is that things don't work quite the same as they do in Sun's Java runtime, so there are differences between Java-on-Windows and Java-on-Mac. For instance, my wife is an avid Puzzle Pirates [puzzlepirates.com] player, and the game client is a Java app. There've been Mac-specific bugs in the past, and at one point a major slowdown appeared when the game was run on a Mac. It hasn't been fixed, so while she can still do crafting on the Mac, whenever she does anything multiplayer, she has to switch to the Windows box.
    • by NutscrapeSucks ( 446616 ) on Monday June 12, 2006 @08:51PM (#15520895)
      Let me add some content to your Apple advertisement :)

      Apple's JVM implementation has something called Class data sharing [javalobby.org] to speed Java startup after the first invocation. The first time is just as slow as always. Since then the feature has been added to Sun Java 1.5, so if you're up to date, you should have this.
  • At this point, there is still a lot of development happening in "native" languages. Additionally, there are projects in motion to turn bytecode from environments like Java and Python into native code. One of the reasons a lot of people are seeing this seemingly massive movement is because of the technologies these "non-native" solutions leverage. Take both Java and .Net - the support libraries are huge and designed to (more or less) work together. All of that said, I'm a bit sick of either having to put u
  • You're going to get a lot of the same sort of responses now - lots of people arguing about a requirement that these non-compiled programming languages aren't suited for. Every language has a different purpose, set when its creators decide what direction to take.
  • The old saw is to not optimize until you have to. Write in an interpreted language, but be ready to dive into native code when the need for speed arises.

  • two things (Score:5, Insightful)

    by bunions ( 970377 ) on Monday June 12, 2006 @08:29PM (#15520776)
    (a) 'loosing': oh jesus christ
    (b) the obvious answer is that native vs. interpreted is simply the balance of developer cost versus the cost of end-user resources (RAM, CPU, the user's time). Interpreted code is getting faster every day, no matter what the "OMG JAVA IS SO SLOW DUDE" geniuses on the interweb tell you, but there will always be problem spaces where a 5% speedup pays huge dividends.
  • by Crussy ( 954015 ) on Monday June 12, 2006 @08:37PM (#15520824)
    Outside of introspection and similar technologies, there is no reason why code cannot be compiled natively. Linux users are aware of compiling Java code natively, and Microsoft is working on a native C# compiler, so why does everyone else think it's still okay to compile to bytecode or scripts? It's not; end of story. I do not like it when every new processor that comes out is stomped on by new programs requiring more resources to do the same job. How many Java programmers use runtime reflection or introspection? In how many programs is it actually needed? If you're not using it, then you should compile natively. Just because Vista is wasting precious resources on its silly Aero Glass, etc., doesn't make it right for everyone else too. What happens when someone tries writing a kernel in an interpreted language? Stage 3 bootloaders'll throw us into a JIT environment now. I can just imagine the efficiency there. Native languages are the way to go, and we're in for big problems if they don't stay around.
  • by Xugumad ( 39311 ) on Monday June 12, 2006 @08:39PM (#15520835)
    "Regardless of the negligible performance hit compared to native code"

    Yeah... people keep saying that. Okay, let's take the benchmark I hear about most: http://kano.net/javabench/ [kano.net] "The Java is Faster than C++ and C++ Sucks Unbiased Benchmark". Unbiased my foot. "I was sick of hearing people say Java was slow" is not a good way to start an unbiased benchmark. Let's list a few more problems:

    • This is not Java vs C++. This is Sun's JDK 1.4.2 vs GCC 3.3.1 on a P4 mobile processor.
    • GCC is not a fast compiler, it's a portable compiler that happens to be fairly fast. A fast compiler might be something like Intel's own compiler: http://www.linuxjournal.com/article/4885 [linuxjournal.com]
    • Having proven that method calls take almost twice as long under G++: http://kano.net/javabench/graph [kano.net] - the author then implemented several of the tests recursively ( http://kano.net/javabench/src/cpp/fibo.cpp [kano.net] ). When this benchmark came out, various people on /. managed to get around 1,000 times better performance (under G++) by switching to a fixed-memory, non-recursive implementation.


    Regardless of the negligible performance hit compared to native code, major software houses, as well as a lot of open-source developers, prefer native code for major projects even though interpreted languages are easier to port cross-platform, often have a shorter development time, and are just as powerful as languages that generate native code.


    Y'know, I think there's a reason for that...

    Particular to Windows programmers, the announcement of MS-Windows Vista's system requirements means that future Windows boxes will laugh at the memory/processor requirements of current interpreted/JIT compiled languages (e.g. .NET, Java , Python, and others).


    Y'know, a couple of decades ago I was running non-native applications on a 7Mhz system with 1MB RAM (my old A500). They were fast, but not quite as fast as native. I'm now using a system in the region of 500 times faster, in terms of raw CPU, and with 2,048 times more memory. And y'know what, non-native stuff is fast, but not quite as fast as native. Something about code expanding to fill the available CPU cycles, methinks...
  • As A Developer (Score:3, Interesting)

    by miyako ( 632510 ) <miyako AT gmail DOT com> on Monday June 12, 2006 @08:41PM (#15520845) Homepage Journal
    Of the development I do, about 60% is in non-native code (mostly java) and about 40% is in native code (usually C++). What I have found is this:
    Java is the language I use the most, and it's good for small programs. It's definitely noticeably slower for large applications, but I don't think that's the big reason a lot of developers don't like it. Swing is nice, but the problem with Java and a lot of other "modern" languages is that they try so hard to protect developers from themselves, and to enforce a certain development paradigm, that the same features which make them really nice for writing small programs end up standing in your way in large, complex application development. Looking at the other side of the issue, C++ is fast, it can be fairly portable if it's written correctly, and it has a huge number of libraries available. C++ will let you shoot yourself in the foot, but that's because it's willing to stand out of the way and say "oh, you really want to do that? ok...". This makes it easy to write bad/buggy programs if you don't know what you're doing, but if you pay attention, have some experience, and have a plan for writing the software, then C++ can be less stressful to develop in.
    Aside from a reasoned argument, I think a lot of developers are just attached to C/C++. I know that I just enjoy coding in C++ more than in Java. Not that Java is bad- and it can be fun to code in at times, but the lower level languages just give me more of a feeling of actually creating something on the computer- as opposed to some runtime environment.
    Finally, one major reason to stick with C++ is that many interpreted languages aren't really as portable as they pretend to be. A language like C++ that really is only mostly portable, and then only if you keep portability in mind, can sometimes be more portable than other languages that claim to be perfectly portable and then make you spend weeks trying to debug the program because things are fouling up.
    • by mangu ( 126918 ) on Monday June 12, 2006 @10:15PM (#15521231)
      the lower level languages just give me more of a feeling of actually creating something on the computer- as opposed to some runtime environment.


      One of the oldest analogies in computing is comparing algorithms to cooking recipes. We even have books like "Numerical Recipes" and "Perl Cookbook".


      Well, to me, interpreted languages are like frozen dinners. They will do if you come home late at night and are too hurried and hungry to cook a proper meal. But C is like a fully equipped kitchen. It takes *much* more skill to cook a proper meal than to heat a frozen dinner in a microwave oven, but the results are incomparably better, not to mention the pleasure you get from doing it the right way.

      • by Abcd1234 ( 188840 ) on Monday June 12, 2006 @10:40PM (#15521338) Homepage
        but the results are incomparably better

        By what metric? Expressiveness? Ease of implementation? Ease of maintenance? Error rate? Because, last I checked, low-level languages like C fail on all those points compared to a higher-level language.
        • By what metric? Expressiveness? Ease of implementation? Ease of maintenance? Error rate? Because, last I checked, low-level languages like C fail on all those points compared to a higher-level language.

          It's a little unfair to pick on the low-level language programmers. There'd be more of them here to defend themselves but they're all so busy looking for memory leaks and buffer overflows. ;-)
  • The choice of language does not determine if something is cross platform. It has more to do with the choice of toolkits. If you are using GTK or wxWidgets you are pretty safe for being cross platform. C/C++ are cross platform languages, but if you use MFC and COM, they're not.

    Even if I use Java or C#, but don't use a cross platform toolkit (e.g. Windows Forms would not be cross platform), the application won't be cross platform.

    It doesn't matter if the language compiles to byte code, if that byte code doesn't use a cross platform toolkit, it won't be cross platform.
  • by Perseid ( 660451 ) on Monday June 12, 2006 @08:56PM (#15520919)
    Silly question. The answer is and will always be: No.

    Commodore 64 BASIC was interpreted. Computers now are obviously powerful enough to run C-64 BASIC code very quickly. Does that mean native code should have been abandoned years ago because technology advanced enough to allow C-64 code to run quickly? JIT code will always be slower than native code, and because both JIT and native programs grow more complex as the technology advances, interpreted code can never catch up.
  • by illuminatedwax ( 537131 ) <stdrange@alumni. ... u ['go.' in gap]> on Monday June 12, 2006 @08:56PM (#15520920) Journal
    List of things you cannot loose:
    - your gray hairs (unless you can command them somehow)
    - control
    - the big game
    - your way

    List of things you could be loosing:
    - the hounds
    - your belt
    - an arrow
    - responsibility
  • by mangu ( 126918 ) on Monday June 12, 2006 @09:06PM (#15520969)
    Interpreted languages have been OK for a long time, for applications where performance isn't the most important parameter.


    Now find me one CPU that can do a decent simulation of the physics of continuous media. Why isn't there any game where you ride a surfboard realistically? Why do meteorologists use the most powerful number crunching systems in the world to be wrong in 50% of the cases when predicting weather a week ahead?


    And what about artificial intelligence and neural networks? Find me a CPU that can do a decent OCR, or speech recognition. What about parsing natural language? Why can't I search in Google by abstract concepts, instead of isolated words?


    No matter how powerful CPUs are, they are still ridiculously inadequate for a large range of real world problems. When you go beyond textbook examples, one still needs to squeeze every bit of performance that only optimized compilers can get.

    • by An Onerous Coward ( 222037 ) on Monday June 12, 2006 @09:49PM (#15521136) Homepage
      You seem to be under the impression that these problems you cite display inadequacies in the hardware, rather than the software. But, in the words of some fictional professor from a book I can't remember: "If you speed up a dog's brain by a factor of a million, you'll have a machine that takes only three nanoseconds to decide to sniff your crotch." Given the current software and algorithms available, more computing power alone wouldn't solve any of the problems you describe.
    • Highly accurate general purpose speech recognition is an AI problem, which as others have pointed out, currently hits the limits of our knowledge not our hardware.
  • Well, yes and no (Score:5, Insightful)

    by BigCheese ( 47608 ) <dennis.hostetler@gmail.com> on Monday June 12, 2006 @09:19PM (#15521023) Homepage Journal
    Don't you hate that answer?

    Yes, we are seeing more development in non-native code, but it gets its power from the underlying libraries and core code that are native. The line between them gets fuzzy when you toss in JIT and scripting-to-native-code compilers. It really depends on the problem area. If I'm just parsing a bunch of log files to make reports, Perl or Python would be best. Web apps seem to benefit from the safety net of non-native code, but I'm sure there are exceptions.
    OTOH there are plenty of apps that need all the speed and memory the machine can provide. My current job involves real time financial data delivery. Writing that in Python or Java would (probably) not work out too well. OS code that works directly with hardware will probably stay in assembler or C. Fast low level stuff is what allows the slower high level stuff to be useful.

    Either way, you still need to know what you're doing, because in the end both native code and interpreted code run as opcodes on a CPU and use hardware resources. You need to mind memory use in Java just like in C, just in different ways. You need to watch what you do in inner loops in both Python and C++. Linear lookups can cause scaling problems in Perl, Java, Python, or C/C++.

    It all depends on how fast you want to get from problem to solution, how much hardware you can throw at it, how complicated the problem is, how much time you have and many other factors.

    Languages are tools, not a religion. The broader your knowledge the more tools you have at your disposal. Pick the best one for the job at hand.
  • by DigitalCrackPipe ( 626884 ) on Monday June 12, 2006 @09:19PM (#15521025)
    Ok, assuming the post isn't flamebait... This issue keeps coming up. A good programmer should understand that the language choice depends on the task at hand.

    If you're making a pretty GUI, you may want an easy-to-use, portable language and may not care as much about performance. If you're building a high-performance backend, or doing some realtime processing, an interpreted language is practically useless.

    Before deciding which paradigm is superior, you must narrow down the question to a type of task. Since the variety of tasks we use software for does not seem to be shrinking, it seems that this issue will not be resolved decisively anytime soon.
  • It depends (Score:5, Insightful)

    by Sloppy ( 14984 ) on Monday June 12, 2006 @09:50PM (#15521141) Homepage Journal

    Interpreted & JIT languages are "within a constant factor" of native code's speed, and CS students are taught that such things don't matter. ;-)

    And for many types of apps, they really don't. Ten times slower than instantaneous, is instantaneous.

    But people use computers for lots of things, and believe it or not, some of those things are still CPU-bound, and take so much work that humans can perceive the delay. Your word-processor is 99% idle so surely it doesn't need to be native, but you know that somewhere on this planet, a poor shmuck is staring at an hourglass icon, waiting for a macro to finish. The real question is: who cares? Is that guy's time worth more, or is the programmer's time worth more?

    • Bingo (Score:4, Insightful)

      by Dr. Zowie ( 109983 ) <slashdot@defores t . org> on Tuesday June 13, 2006 @01:48AM (#15522074)
      I do a lot of interactive data processing. I use PDL [perl.org], a variant of Perl (which, recall, is JIT-compiled) that is designed for array handling. For most of what I do, PDL is great -- the CPU spends most of its time waiting for me to make up my mind what I want to do, and moving my ponderously slow fingers to type the command at 110 baud. But some of the stuff I do (magnetohydrodynamic simulation [swri.edu]) is extremely CPU-bound, and that stuff I write in C.


      A lot of folks use languages like PDL, IDL, MatLab, Octave, or even NumPy to do array processing, and tout the fact that for large arrays those languages run "essentially as fast as C". But that's bullshit. All those languages vectorize their operations in exactly the wrong order - if you have a hundred million datapoints and you want to do six operations on each one, each of those vectorized languages will dutifully swap each of your hundred million datapoints out of RAM into the processor, multiply it by seven (or whatever), and push it back out to RAM before pulling them all back in to add six to each one. What you really want is to vectorize in pipeline order, doing all the operations you plan to on each data point once and for all so that you can take advantage of your processor's nice, fast cache. Nobody has (to my knowledge) figured out a way to do that, that is robust enough for an interactive/JIT language, so just writing it in "C" and getting the loops nested in the right order can speed you up by a factor of more than 10 on a modern AMD or Intel CPU.

  • by Dcnjoe60 ( 682885 ) on Monday June 12, 2006 @10:29PM (#15521286)
    If we all quit using native languages, then what are we going to use to a) write embed code, b) write drivers, c) write operating systems and d) write the interpreted languages that we use to replace our native ones?
  • by davidwr ( 791652 ) on Monday June 12, 2006 @11:38PM (#15521582) Homepage Journal
    When I run so-called-compiled code under an emulator like Bochs, *poof* it's no longer native. In theory, it can be very managed if the emulator is capable of doing sophisticated things like moving threads around virtual processors based on the potentially-changing resources available on the underlying host environment, adding processors or memory on the fly (assuming an OS that supported such things), etc., things clearly beyond the abilities of most "native" PCs.

    The reverse is true if I pass my java source- or byte-code through a compile-once/not-JIT "native" compiler. Managed code suddenly goes native.

    I predict people will work in the environment that is most efficent for them, where efficiency takes into account development costs, maintenance costs, run-time costs, political costs, etc. etc. etc.

    There's also the question of "what exactly is managed code." If your program compiles against an exception-handling library, as most large programs today do, is that not a primitive form of code management? Granted, you may have to write your own management layer, but it's still not totally unmanaged. Even running as a process in a modern OS is a form of management, since a fatal-to-the-process error can invoke OS-level clean-up routines to close files and return resources.

    To borrow from Shakespeare: Managed or unmanaged, that is the question.
    The answer depends on your perspective.
  • Horses for courses (Score:3, Interesting)

    by Nefarious Wheel ( 628136 ) on Monday June 12, 2006 @11:43PM (#15521602) Journal
    It's still horses for courses, mate. Look at the niche markets -- embedded systems for example -- and you'll find opportunities to shave a few cents by using a smaller configuration that would profit from having tighter code.

    Thinking back a few years, IIRC the first Apple Mac had the QuickDraw graphics package written in machine language, didn't it? Not assembler, but instructions made of hand-mapped binary digits. It's the reason those early Mac GUIs were able to extract such amazing graphic performance from the Motorola 68000.

    You can still buy Zilog Z8's, and embedded applications still exist for them.

  • by PassMark ( 967298 ) on Monday June 12, 2006 @11:46PM (#15521609) Homepage
    We wrote the same search engine code in 4 languages, PHP, ASP, C++ & JavaScript. The results are published here, http://www.wrensoft.com/zoom/benchmarks.html [wrensoft.com]

    In summary, C++ was 4 times faster than PHP, and in turn PHP was 3 times faster than ASP and JavaScript was truly appalling. I can't think of many applications that wouldn't benefit from being 4 to 12 times faster.
    • But it will take you 4 times longer to write it in C++...

      So, the question is what is more important? Time to code or Time to run?

      If you have a one-off task, and your developer costs money, you want quick-to-develop.

      If you are going to run the program a bijillion times (a lot), you want quick-to-run at whatever developer cost.

      I've coded C++, Java, and .NET services that all did what they needed to do (long running data collection applications).

      There is more than one way to skin a cat, and the end result

  • Teh funnay! (Score:3, Funny)

    by chris_eineke ( 634570 ) on Tuesday June 13, 2006 @12:34AM (#15521828) Homepage Journal
    The irony of the tagsoup is delicious...

    No, stupid programming maybe... yes.
  • by Animats ( 122034 ) on Tuesday June 13, 2006 @01:45AM (#15522064) Homepage

    The problem isn't native-code vs interpretive code. It's that our native code languages are terribly flawed.

    Programming backed itself into a corner with C and C++. They're useful languages, but they're not safe. Now this has nothing to do with performance; you can have safety in a hard-compiled language. Ada, the Modula family, and the Pascal/Delphi family did it. The problem is that, because of some bad design decisions in C (the equivalence of arrays and pointers being the big one), you have to lie to the language to get anything done. This makes safety hopeless. The basic problem of C is that you have to obsess on "who owns what" for memory allocation purposes, and the language gives you no help with this. The language doesn't even adequately address "how big is this". With those two design defects, we're doomed to have memory safety problems. Which we do.

    C++ at first seemed like an improvement, but as it turned out, C++ adds hiding to C without improving safety. Note that this seems to be unique to C++; no prior language did that, and no language since has taken that route. Attempts have been made to work around the problem within the structure of C++, but with limited success. The "auto_ptr" debacle and the endless problems of trying to make sound reference-counted allocation work reliably indicate the fundamental limitations of the language. You just can't fix those problems in C++ without breaking backwards compatibility. (See my postings in comp.std.c++ over the last decade for more details on this.)

    Java was invented mostly to get around the memory safety problems of C and C++. The fact that Java is usually semi-interpretive has nothing to do with the language design; that's a consequence of Sun's original focus on applets. There are native-code compilers for Java; GCC contains one. There are competitive advantages of locking the user into a giant environment (J2EE in the Java world, .NET in the Microsoft world), which is part of why we're seeing so much of that. But it's not a language design issue.

    Microsoft came up with C# as their answer to Java, and most of the same issues as with Java apply.

    What's so embarrassing about all this is that it's quite fixable. The solutions were known twenty years ago. If you have a language where the language knows how big everything is, and the subscript checks are hoisted out of loops at compile time, you get safety with high performance. There were Pascal compilers that got this right in the 1980s.

    On the allocation front, you can use either garbage collection or reference counting to automate that process. Java and C# are garbage-collected; Perl and Python are reference-counted, and in practice, programmers in those languages seldom have to think about memory allocation issues. Allocation overhead can also be hoisted out of loops. Java compilers are starting to do this, allocating temporary variables on the stack. Reference count updates can be optimized similarly. There's nothing to prevent using these techniques in a native-code compiler.

    And that's how we got to where we are today, with buffer overflows, zombies, and blue screens of death, papered over with a layer of inefficient interpreters. Fortunately the hardware people have held up their end and made it possible to live with this, but we on the software side should have the understanding and grace to be embarrassed by it.

  • by suv4x4 ( 956391 ) on Tuesday June 13, 2006 @08:10AM (#15523030)
    I'm sick of cliched sensationalist articles with the sensationalist titles, how about you?

    As always, this is a non-existent problem hyped up by people who don't have a clue.

    First, the performance hit of managed code is not "negligible". For tasks that rely on raw math power it can be significant, like 3D engines, data processing and so on.

    But if you're doing, say, a rich client, your code will most likely just call existing multimedia, communication, and input APIs. Then managed code's performance hit is next to nothing, since most of the time is spent processing the commands from the APIs, not your own code anyway.

    Thing is: can we drop the raw power of native code and go all-managed? Hell no. And we never will. While plenty of consumer applications perform well as managed code, there will always be a class of professional software where 20% better performance means tens of thousands of dollars potentially saved for a mid-sized company.

    But the big miss in the article is that managed and non-managed code can coexist. In .NET this is pretty easy: "unsafe" sections and "managed" sections can originate in the same project and even the same source code, so you can optimize those critical 10% of your code where 90% of the time is spent (you prolly know the cliche), and use managed to take benefits of the advancements in the platform.

    So wait, we were talking about end of what again?
  • by eyefish ( 324893 ) on Tuesday June 13, 2006 @10:52AM (#15523942)
    There is a philosophical reason to go high-level for all this. If you observe evolution in all its forms, it always goes from low-level stuff to high-level stuff (from tool making to societal behavior, with countless examples in fields like business, economics, etc.). In our brains, it just makes sense to always think at ever-higher levels, because if we had to keep track of all the underlying details we'd spend more time dealing with details than with the goals of what we're trying to accomplish.

    Note that this is nothing new in software engineering. Most software a few decades ago was written in low-level code, even assembly language. If you did a survey then about the ratio of low-level to high-level coding, and compare it to a survey today, you'd realize that we do not even ponder about where things are going, as there is plenty of evidence today to tell us that going higher-level is the path the software industry is taking.

    Native coding will become more and more of a niche, first for operating systems, drivers, kernels, and such. But eventually I can see even a fully operational OS being written in a high-level language. In a sense, that's what's happening today when you combine all the Web Services and tools we find on the Web.

    So, it's just a matter of time before everyone codes in high-level languages, and even today's high-level languages will seem low-level by what we're going to replace them in the future.
