The Future of Computing (184 comments)

An anonymous reader writes "Penn State computer science professor Max Fomitchev explains that computing has evolved in a spiral pattern from a centralized model to a distributed model that retains some aspects of centralized computing. Single-task PC operating systems (OSes) evolved into multitasking OSes to make the most of increasing CPU power, while the introduction of the graphical user interface at the same time consumed much of that power and fueled demand for even more efficiency. "The role of CPU performance is definitely waning, and if a radical new technology fails to materialize quickly we will be compelled to write more efficient code for power consumption costs and reasons," Fomitchev writes. Slow, bloated software entails higher costs in terms of both direct and indirect power consumption, and the author reasons that code optimization will likely involve the replacement of blade server racks with microblade server racks where every microblade executes a dedicated task and thus eats up less power. The microblades should also collectively far outnumber the original "macro" blades. Fully isolating software components should enhance the system's robustness thanks to the potential for real-time component hot-swap or upgrade and the total removal of software installation, implementation, and patch conflicts. The likelihood of all this happening hinges on energy costs, which directly determine how much code optimization pays off."
  • Bloat (Score:4, Insightful)

    by metamatic ( 202216 ) on Sunday July 23, 2006 @11:43AM (#15766015) Homepage Journal
    Every time I think software can't get any more bloated, I wait a year or two and it doubles in size again.
    • Re:Bloat (Score:5, Interesting)

      by Poromenos1 ( 830658 ) on Sunday July 23, 2006 @11:49AM (#15766035) Homepage
      That's very true. I always wonder why some programs have to be a few tens of megabytes (especially some shareware ones) and then a (usually open source) program comes along that's 1/10th of the size of the previous program and has more features (e.g. uTorrent vs everything else). I know that processor speed and memory are practically unlimited so you don't have to worry about them, but this is just stupid.
      • Re:Bloat (Score:5, Interesting)

        by Cal Paterson ( 881180 ) on Sunday July 23, 2006 @02:14PM (#15766411)
        Small binary size != Fast program

        uTorrent != Open Source

        uTorrent isn't the fastest torrent program around, nor does it have the most features. It probably doesn't strike the best balance either.

        Next time you get the "uTorrent is b3tt4r!" bull from the #footorrents channel, read the "only 6MB memory requirement" or the "170KB binary size" statistics: consider the fact that uTorrent is missing lots of features, isn't FOSS, depends on an OS with a circa 256MB base requirement, and isn't as fast or as nice with IO as some other clients [rakshasa.no].

        Then perhaps later, consider that the hallmarks of a good program aren't good benchmarks, but good design. The fact that Debian comes on seven CD-ROMs and with 18,000 programs doesn't mean that WinXP is faster because it only comes on one.
      • Re:Bloat (Score:4, Informative)

        by chris_eineke ( 634570 ) on Sunday July 23, 2006 @02:27PM (#15766447) Homepage Journal
        Please remember that a Windows system by default doesn't come with a shitload of libraries like any desktop Linux distribution does nowadays. KTorrent's payload is
        ceineke@lapsledge:/home/eineke$ ls -l /usr/bin/ktorrent
        -rwxr-xr-x 1 root root 284636 2006-05-23 14:51 /usr/bin/ktorrent*
        284636 bytes. Not too bad for a K-app. But consider this, Batman: (leading whitespace removed)
        ceineke@lapsledge:/home/eineke$ ldd /usr/bin/ktorrent
        linux-gate.so.1 => (0xffffe000)
        libktorrent.so.0 => /usr/lib/libktorrent.so.0 (0xb7e38000)
        libkparts.so.2 => /usr/lib/libkparts.so.2 (0xb7df5000)
        libkio.so.4 => /usr/lib/libkio.so.4 (0xb7aca000)
        libkdeui.so.4 => /usr/lib/libkdeui.so.4 (0xb7807000)
        libkdesu.so.4 => /usr/lib/libkdesu.so.4 (0xb77f1000)
        libkwalletclient.so.1 => /usr/lib/libkwalletclient.so.1 (0xb77e1000)
        libkdecore.so.4 => /usr/lib/libkdecore.so.4 (0xb75b9000)
        libDCOP.so.4 => /usr/lib/libDCOP.so.4 (0xb7588000)
        libresolv.so.2 => /lib/tls/i686/cmov/libresolv.so.2 (0xb7574000)
        libutil.so.1 => /lib/tls/i686/cmov/libutil.so.1 (0xb7571000)
        libart_lgpl_2.so.2 => /usr/lib/libart_lgpl_2.so.2 (0xb755c000)
        libidn.so.11 => /usr/lib/libidn.so.11 (0xb752d000)
        libkdefx.so.4 => /usr/lib/libkdefx.so.4 (0xb7501000)
        libqt-mt.so.3 => /usr/lib/libqt-mt.so.3 (0xb6d18000)
        libaudio.so.2 => /usr/lib/libaudio.so.2 (0xb6d03000)
        libXt.so.6 => /usr/lib/libXt.so.6 (0xb6cb5000)
        libjpeg.so.62 => /usr/lib/libjpeg.so.62 (0xb6c96000)
        libXi.so.6 => /usr/lib/libXi.so.6 (0xb6c8e000)
        libXrandr.so.2 => /usr/lib/libXrandr.so.2 (0xb6c8b000)
        libXcursor.so.1 => /usr/lib/libXcursor.so.1 (0xb6c82000)
        libXinerama.so.1 => /usr/lib/libXinerama.so.1 (0xb6c7e000)
        libXft.so.2 => /usr/lib/libXft.so.2 (0xb6c6c000)
        libfreetype.so.6 => /usr/lib/libfreetype.so.6 (0xb6c03000)
        libfontconfig.so.1 => /usr/lib/libfontconfig.so.1 (0xb6bd5000)
        libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0xb6bd2000)
        libpng12.so.0 => /usr/lib/libpng12.so.0 (0xb6baf000)
        libXext.so.6 => /usr/lib/libXext.so.6 (0xb6ba1000)
        libX11.so.6 => /usr/lib/libX11.so.6 (0xb6abb000)
        libSM.so.6 => /usr/lib/libSM.so.6 (0xb6ab3000)
        libICE.so.6 => /usr/lib/libICE.so.6 (0xb6a9b000)
        libXrender.so.1 => /usr/lib/libXrender.so.1 (0xb6a93000)
        libz.so.1 => /usr/lib/libz.so.1 (0xb6a7f000)
        libfam.so.0 => /usr/lib/libfam.so.0 (0xb6a76000)
        libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb6a64000)
        libacl.so.1 => /lib/libacl.so.1 (0xb6a5c000)
        libattr.so.1 => /lib/libattr.so.1 (0xb6a58000)
        libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0xb6983000)
        libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb6961000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb6956000)
        libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb6827000)
        libXfixes.so.3 => /usr/lib/libXfixes.so.3 (0xb6823000)
        libexpat.so.1 => /usr/lib/libexpat.so.1 (0xb6804000)
        /lib/ld-linux.so.2 (0xb7efb000)
        libXau.so.6 => /usr/lib/libXau.so.6 (0xb6800000)
        I'm sure that when you claimed uTorrent isn't as large as other shareware programs, you looked into its library dependencies. Or did you?
        • 284636 bytes. Not too bad for a K-app. But consider this, Batman: (leading whitespace removed)

          For a good time, try this:

          ldd /usr/bin/ktorrent | awk '/=> \// { print $3 }' | xargs wc -c
          I don't have ktorrent installed, but other simple KDE apps show dependencies on at least 22MB of libraries.
      • I think they're large because of debug information. I noticed that in Visual Studio even a simple command-line program took a second or two to execute; after I turned off a bunch of stuff, the program would actually execute in an OK time, similar to the default options in Dev-C++, where it executed instantaneously the first time, out of the box.

        For a software giant, they [MS] aren't that user-friendly. Total bullshit.

        If everything built in the industry is like this then I guess they have the backing of the power companies.
    • Re:Bloat (Score:5, Insightful)

      by euice ( 953774 ) on Sunday July 23, 2006 @11:50AM (#15766044)
      Well, they keep thinking that 100 developers can do the same job as a handful of good developers. That's wrong most of the time, as in "9 women can have one baby in one month".
      • "Well, they keep thinking that 100 developpers can do the same job as a handful of good developpers."

        Yes, companies are falling over themselves to pay 100 salaries instead of a small handful of slightly larger salaries.
      • 9 women can have one baby in one month

        This is also the reason why a mere increase in the number of cores is no panacea. Although programmers need to start taking explicit parallelism more seriously, certain tasks simply cannot be split up. In many cases a task cannot continue until it has data that has been calculated by a previous task. There is no way around this. So, with the exception of embarrassingly parallel [wikipedia.org] tasks, developers and corporations are just going to have to bite the bullet in terms of incr
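        To put rough numbers on that, here is a minimal Python sketch of Amdahl's law; the 95% parallel fraction is just an assumed figure for illustration, not a measurement:

        # Amdahl's law: if only a fraction p of a task can run in parallel,
        # the serial remainder (1 - p) caps the overall speedup no matter
        # how many cores you throw at it.
        def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
            serial_fraction = 1.0 - parallel_fraction
            return 1.0 / (serial_fraction + parallel_fraction / cores)

        for cores in (2, 4, 8, 64, 1024):
            # Even with 95% of the work parallelizable, the serial 5% dominates:
            # the speedup creeps toward 1 / 0.05 = 20x and never exceeds it.
            print(cores, round(amdahl_speedup(0.95, cores), 2))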
        • Re:Bloat (Score:4, Interesting)

          by cnettel ( 836611 ) on Sunday July 23, 2006 @03:26PM (#15766612)
          On the other hand, even "obviously" serial tasks can be made faster if you let other threads handle highly speculative precalcing/prefetching/whatever. In a UI context, latency is king. If you can write your code so that processing starts in a background thread twenty ms before the actual click (when the mouse only hovered over the button/menu item), you'll still get the benefit of a faster response. Try to make the processing that actually depends on the input from the previous task as small as possible. Try to guess, if you'll otherwise just be idle. Reindex your DB on another thread, even if it will only save 2% on your main thread(s). Given, of course, that performance and latency are what you care about.
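          For what it's worth, here is a minimal Python/tkinter sketch of that hover-then-click trick; expensive_query() is a made-up stand-in for whatever slow work the button would normally trigger:

          import time
          import tkinter as tk
          from concurrent.futures import ThreadPoolExecutor

          executor = ThreadPoolExecutor(max_workers=1)
          pending = {}   # name -> Future for work started speculatively

          def expensive_query():
              time.sleep(0.2)            # stand-in for a slow lookup or render
              return "result ready"

          def on_hover(_event):
              # Kick the work off as soon as the pointer enters the button,
              # typically a few tens of ms before the actual click.
              if "query" not in pending or pending["query"].done():
                  pending["query"] = executor.submit(expensive_query)

          def on_click(_event):
              # By click time the result is often already (mostly) computed,
              # so this blocks only for whatever work remains.
              future = pending.get("query") or executor.submit(expensive_query)
              label.config(text=future.result())

          root = tk.Tk()
          button = tk.Button(root, text="Run report")
          label = tk.Label(root, text="(no result yet)")
          button.bind("<Enter>", on_hover)
          button.bind("<Button-1>", on_click)
          button.pack()
          label.pack()
          root.mainloop()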
        • Re:Bloat (Score:4, Insightful)

          by bit01 ( 644603 ) on Sunday July 23, 2006 @06:44PM (#15767074)

          certain tasks simply cannot be split up.

          That's a popular idea. It's almost always wrong.

          It may be true that something can't be split up well automatically but pretty much any practical task can be parallelized to some degree manually.

          Seriously, I challenge anybody here to name even one real-world CPU- or IO-intensive task that cannot be split up. Even things like encryption and compression can be pipelined, and there are complicated mathematical and statistical tricks, including speculative execution, that can be applied as well.

          It may not be cost effective to do the split but if money is no object some parallelism will almost always help. Yes, tasks can have chokepoints but these become irrelevant if you can parallelize the work before and after the chokepoint.

          There are obscure mathematical exceptions, dependencies on external events, and it may be hard to do with the tools being used, but that's not what I'm talking about here.
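          As a concrete illustration, here's a minimal Python sketch of pipelining compression across cores by chunking the input; it writes length-prefixed zlib blocks rather than a real .gz stream, so treat it as a sketch of the idea, not a drop-in tool:

          import sys
          import zlib
          from concurrent.futures import ProcessPoolExecutor

          CHUNK_SIZE = 4 * 1024 * 1024   # compress the input 4 MB at a time

          def read_chunks(path):
              with open(path, "rb") as f:
                  while chunk := f.read(CHUNK_SIZE):
                      yield chunk

          def compress_file(src, dst):
              with ProcessPoolExecutor() as pool, open(dst, "wb") as out:
                  # map() hands chunks to worker processes but yields results in
                  # input order, so the "chokepoint" (ordered output) doesn't stop
                  # the chunks themselves from being compressed in parallel.
                  for block in pool.map(zlib.compress, read_chunks(src)):
                      out.write(len(block).to_bytes(4, "big"))   # length prefix
                      out.write(block)

          if __name__ == "__main__":
              compress_file(sys.argv[1], sys.argv[2])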

          ---

          Creating simple artificial scarcity with copyright and patents on things that can be copied billions of times at minimal cost is a fundamentally stupid economic idea.

    • Re:Bloat (Score:3, Funny)

      by slughead ( 592713 )
      Every time I think software can't get any more bloated, I wait a year or two and it doubles in size again.

      It's true! No GUI has ever been as snappy as classic Mac OS!
      • Re:Bloat (Score:2, Informative)

        by Tolleman ( 606762 )
        BeOS
      • Re:Bloat (Score:3, Informative)

        That is true, but that code was wickedly hand-optimized assembly code written by Andy Hertzfeld [wikipedia.org]. The hardware was known and closed. That sort of thing is frowned on these days.
        • Re:Bloat (Score:2, Interesting)

          by lostguru ( 987112 )
          Well, not quite.

          Depending on which of the three great Mac classic ages we're talking about (7, 8, or 9), that wasn't really true.

          7: had to run on both 68k machines as well as the newer PPC machines and the numerous clones of the day, but still managed to perform quite well on all of them.

          8: was the first to run on PPC and the G3 line of chips, along with supporting a new programming system with the Carbon libraries.

          9: well, nine sucked, and I'm turning into a troll as I type, so I'll just call it non-classic.
    • Re:Bloat (Score:2, Interesting)

      by resonte ( 900899 )
      One reason, I think, is that programming languages have become more high-level over time, decreasing production time while sacrificing program efficiency.

      Companies that can decrease the time it takes to produce a program spend less money on developers. Efficiency is not a priority, as most users do not understand the concept. If a program runs slowly on a user's computer, the novice user will think it's a problem with the computer and not the program.

      • Re:Bloat (Score:5, Informative)

        by TheRaven64 ( 641858 ) on Sunday July 23, 2006 @12:30PM (#15766137) Journal
        One reason, I think, is that programming languages have become more high-level over time, decreasing

        Really? Languages don't get much more high-level than Smalltalk, and Squeak does things that C/C++ programs seem to require a lot more bloat to manage.

      • Re:Bloat (Score:5, Insightful)

        by metamatic ( 202216 ) on Sunday July 23, 2006 @03:20PM (#15766599) Homepage Journal
        No, bloat is avoidable yet not avoided in high level languages too. For example, I wrote an RSS and Atom library in Ruby, because I didn't like the one in the standard library--it was ugly code and badly documented. I expected my replacement to be slower and larger, because it was clean understandable code. To my surprise it was half the size and twice as fast. But we're still stuck with the crappy one, because it was the first one hacked together and therefore became part of the standard libraries. And that's the root problem--we have a culture where implementation speed is valued over everything else.
        • Re:Bloat (Score:3, Interesting)

          by 0111 1110 ( 518466 )
          we have a culture where implementation speed is valued over everything else.

          Hear, hear. Well said. That, precisely, is the core of the problem. The only way that is going to change is if the market forces their hand. If/when the speed of a single core finally does hit a wall, we may see this. It's all about priorities. Developers are making an explicit choice in favor of reduced development time at the expense of exploding minimum machine requirements for nearly identical tasks. The end user really has no wa
        • Where can I get my hands on your library? I have recently been asked to write RSS code in Ruby, and I wasn't overly impressed with the standard library, so I'm certainly willing to investigate other options. Also, standard libraries are not completely static. For example, I believe PHP has already gone through three different complete rewrites (complete with a completely new API for each) of their "standard" XML library. If your lib is as good as you say it is then maybe it could also seek inclusion in t
    • As the size of features on an integrated circuit continues to shrink, we eventually reach a point at which the width of the gate of an NMOS transistor is only 1 atom. Other features have dimensions that measure only 1 atom. The charges (i.e., the collections of electrons) that these features house are so tiny that failures due to alpha particles, cosmic rays, etc. are quite common.

      The only solution to the curse of infinitesimally small features is modular redundancy: hardware duplication. Triple modula

  • heh (Score:5, Funny)

    by JustNiz ( 692889 ) on Sunday July 23, 2006 @11:46AM (#15766024)
    >> we will be compelled to write more efficient code

    Spoken like a true Microsoft programmer.
  • The Foley and van Dam classic, "Fundamentals of Interactive Computer Graphics" cites Myer and Sutherland's description of adding more intelligence to graphics processors until they become the equivalent of CPUs, at which point they repeatedly find themselves slower than mass-production CPUs and are turned back into simple devices driven by fast external CPUs once more (;-))

    --dave

    • Yup, exactly. In fact, Fred Brooks extends it in general to any specialized processor --- you find that your specialized processor can never keep up with doing the same thing in a generalized processor. Consider, for example, the IBM System/38 --- which became the AS/400, simulating the S/38's wild-ass object-based architecture on cheap commodity processors; the Symbolics Lisp Machine, which was wiped out as a Lisp platform by pretty much the next generation of GP processors; or graphics processors, like
    • Was that a revelation? This has been going on for years, although at the present time I do not see specialized graphics CPUs losing any ground to their general-CPU brethren, in large part because they are architected as part of a complete system which is designed entirely for massive data manipulation but no expansion, random peripherals, or anything else. So long as that architectural decision remains, it's unlikely we will see the downward spiral of performance predicted. But that's just my opinion.
  • Pardon my ignorance, but all the blades are going to have a lot of extra software running too (OS / app manager / network communication, etc.). So isn't there a chance that the micro-blades end up eating even more power (especially if the software is still bloated)? Splitting the code in different blades is definitely not really code optimization anyway.
    • Splitting the code in different blades is definitely not really code optimization anyway.

      Of course it is not. Why doesn't everybody realize that this Max Fomitchev has absolutely no idea what he's talking about? This is complete rubbish. "Microblades" to save power? Come on, do the math: more power supplies that produce energy loss (no power supply has an efficiency of 100%), more complex software (because tasks are split up over different CPUs and have to communicate over a sort of network connection)...

  • Wirth's law (Score:2, Insightful)

    by mangu ( 126918 )
    "software is getting slower faster than computers are getting faster"


    That's why I don't buy those Python/Ruby/Java productivity boasts. I'd rather do it efficiently in C/C++ right now than wait for a faster CPU that may never come.

    • Every tool has its purpose. If you wish to waste your time writing a tool in C/C++ that will parse text files or organize your mp3 collection, then have at it. Not every application's bottleneck is the CPU (in fact, most aren't), in which case the speed of compiled code is moot. And even some of those tasks that need speed can be managed by higher-level languages, because many of their modules are written in C/C++ and/or assembly. Also, why must you choose between high- and low-level languages? They ar
    • I'd rather do it efficiently in C/C++ right now than wait for a faster CPU that may never come.
      Even if it takes you ten times longer?
      • Re:Wirth's law (Score:3, Insightful)

        by Bert64 ( 520050 )
        Typically more time is spent running the code than writing it...
        For a one-off script that's gonna be run once and never used again, slow inefficient code that's quick to write makes sense... But for the majority of code that's going to be run over and over again, the time you saved writing the code could be wasted 10 times over waiting for it to run.
        • Typically more time is spent running the code than writing it...

          That's only relevant if the user is waiting for the code to run.

          Consider a "report feature" in many systems. More often than not you are waiting for the database to return the query and then the printer to print it. While it might take 30 seconds to produce the report from start to finish the "code" you wrote only runs for 1 second total, even in a high level interpreted script. Is there really a point to spending 10x the effort to cut the code
          • Similarly, consider an interactive system, where the software pauses at each step to get input from the user. If the high-level scripted system responds within 0.1 seconds, it seems instant to the user.

            Yeah. How often does that happen? If you can manage to allow ALL of your code to run while waiting for a user response (or the hard drive or whatever) then bravo. But that is not the case for the vast majority of the code out there and you know it. Obviously no one is going to start optimizing code with
        • But for the majority of code that's going to be run over and over again, the time you saved writing the code could be wasted 10 times over waiting for it to run.
          Then why not use a language that supports churning out inefficient programs in a short time and highly-efficient programs in a bit more time? :P
    • That's why I don't buy those Python/Ruby/Java productivity boasts. I'd rather do it efficiently in C/C++ right now than wait for a faster CPU that may never come


      That's why I don't buy those Ford/Honda/Toyota productivity boasts. I'd rather travel efficiently on my bicycle right now than wait for a faster vehicle that may never come.

      • Uh, your analogy is broken, please debug and try again.

        The Ford is the faster vehicle, and we already have the infrastructure.
        • The Ford is the faster vehicle, and we already have the infrastructure.


          Precisely my point. In both the software and transportation scenarios, the modern mechanisms already exist and are already faster and more efficient than the old ones. Sorry if my sarcasm wasn't obvious enough.

          • In the original post:
            O is real. C is real. D is not real.
            O needs D.
            O is not used because there is no D so C is used.

            In your post:
            O is D. O is real, but despite their being the same thing D is not. Problem #1. C is real.
            O and D are the same thing, somehow dependent on each other. Problem #2.
            O is not used because it is real but it is not real. Problem #3. C is used.
    • by Inoshiro ( 71693 ) on Sunday July 23, 2006 @01:03PM (#15766228) Homepage
      It's the algorithm. It's straight complexity theory [wikipedia.org]; C/C++ is not a panacea. If you write a 2^n or n! algorithm in C, it'll have its doors blown off by an nlogn algorithm in Python.

      Either you have constant-time, n log n, or even n algorithms that run OK (CPUs today are fast enough that even for a decent-sized n, an n algorithm will execute quickly). However, no computer humans can ever build that works on the same principles as your desktop computer will be able to do 2^n, n^n, or n! algorithms in any kind of useful time for large n.

      You might be able to get results in a lesser amount of time if you can parallelize the work (see the Distributed.net cracking efforts on factoring into large prime numbers), but if you can't make the algorithm work in parallel or otherwise reduce it to a polynomial time algorithm, even a supercomputer from the year 50,000 won't solve these problems for large n.

      Don't focus on the language; that's the wrong place to look.
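      A tiny Python sketch of that point; the factor of 100 below is an arbitrary assumed "slow language" penalty, not a measured Python-vs-C ratio:

      import math

      SLOW_FACTOR = 100   # pretend the n*log(n) code pays a 100x language penalty

      for n in (4, 8, 16, 32, 64):
          exponential_steps = 2 ** n                    # the "fast language" 2^n algorithm
          nlogn_steps = SLOW_FACTOR * n * math.log2(n)  # the "slow language" n*log(n) one
          print(n, exponential_steps, round(nlogn_steps))

      # By n = 16 the exponential version already needs 65536 steps against
      # roughly 6400, and the gap explodes from there -- the constant factor
      # only moves the crossover point, it never changes the winner.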
      • It's the algorithm. It's straight complexity theory; C/C++ is not a panacea. If you write a 2^n or n! algorithm in C, it'll have its doors blown off by an nlogn algorithm in Python.

        And here is where a couple of people would disagree with you. C/C++ has extremely well-performing optimizing compilers (alignment, instruction sets, etc.), and so for verrrry small datasets, a 2^n algorithm in C will most likely run faster than an n log n algorithm in, say, Ruby.

        (I don't have data to support my hypothesis, but I wi

        • Column 1: n. Column 2: 2^n. Column 3: n * log n. Column 4: n * log n + 100.

          n    2^n    n * log n            n * log n + 100
          1    2      0                    100
          2    4      0.602059991327962    100.602059991328
          3    8      1.43136376415899     101.431363764159
          4    16     2.40823996531185     102.408239965312
          5    32     3.49485002168009     103.49485002168
          6    64     4.66890750230186     104.668907502302
          7    128    5.9156862800998      105.9156862801

          As you can see, for n less than 7, n * log n + 100 (which assumes our language is 100 times slower to run our n*log(n) algorithm vs. our 2
          • I 100% agree with your conclusion.

            Please don't forget that I emphasized "verrrrrry small datasets". I was trying to point out that, let's say, for three elements using a bubble-sort algorithm is faster than quicksort.

            And yes, compilers aren't little magical black boxes. But the human brain is. :P
          • What planet are you from where you get a factor of 100 difference by adding 100 rather than multiplying? Try using standard arithmetic.

            If you do, you will find that the crossover is around n=647 for a relative speed of 100x (647*647 = 418609 vs 100*647*log(647)=418760), or around n=282 for a relative speed of 50x.

            However, your broader point stands, since 50-100x ratios are what you might expect for a poorly interpreted language, and JITs or other optimizing compilers perform at the same order of magnitude
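            For anyone who wants to check that arithmetic, a quick Python sketch (it assumes the natural log, which is what makes the 418760 figure come out):

            import math

            def crossover(penalty):
                # Smallest n where n*n finally exceeds penalty * n * ln(n),
                # i.e. where the n^2 algorithm stops being the cheaper one.
                n = 2
                while n < penalty * math.log(n):
                    n += 1
                return n

            print(crossover(100))   # 648 -- i.e. "around n=647" as above
            print(crossover(50))    # 283 -- i.e. "around n=282"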
      • It's the algorithm. It's straight complexity theory; C/C++ is not a panacea. If you write a 2^n or n! algorithm in C, it'll have its doors blown off by an nlogn algorithm in Python.

        A programmer who knows how to choose the right algorithm will do so regardless of the language being used. So given the correct algorithm, it boils down to the BIG FAT CONSTANTS that determine better performance. Lower level languages like C++ can make those constants smaller.
      • When you compare an n^2 C++ implementation to an n log n Python one, you are varying two parameters. When comparing two things you should only vary one, or you're blowing hot air. In other words, I could counter your argument by saying a C++ n log n implementation is better than a Python n^4 one, but what does that tell us about CPU performance when comparing languages? NOTHING.

        The GP is comparing the speed difference for the same implementation. So if you implement a stupid n^2 bubble sort in C++, it will be slow as hell, but still better
      • This is a simplistic view of the world. It assumes that the only means of optimization is a big-O algorithm change. If you're using a bad algorithm and you're working with non-negligible data sets, then obviously you should choose a better algorithm -- that's just CS101. The interesting conversation doesn't even start until you are already using the best algorithm available to you.

        Is C/C++ a panacea? Of course not -- straw man. But when your algorithms are equal, high-level languages will execute an al
      • It's the algorithm. It's straight complexity theory;

        Not so. [debian.org]
    • Re:Wirth's law (Score:2, Informative)

      by PostPhil ( 739179 )
      No one using Python is "waiting for a faster CPU". Languages like Python and Ruby do have productivity gains that are worth whatever overhead they have.

      If the only good thing going for such languages is that they are "high-level", and higher level languages must be slow and clunky (like BASIC, which doesn't belong in the same category), then I could see your point. However:

      1. Languages like Python gained popularity as a glue language. 90% of it is running C/C++ for the heavy lifting anyway.

      2. Such
    • That's why I don't buy those Python/Ruby/Java productivity boasts. I'd rather do it efficiently in C/C++ right now than wait for a faster CPU that may never come.

      You aren't doing it (much) faster in C or C++; at least not in all cases. Even for algorithms that are routinely used to check language performance (such as Linpack) customised Java VMs equal C code. Java is not interpreted - it is translated to byte code that is then compiled into machine code with a considerable amount of run-time optimisation
      • Java is not interpreted - it is translated to byte code that is then compiled into machine code with a considerable amount of run-time optimization

        I saw the Byte magazine in a newsstand for the first time in August 1978, the cover story was about Pascal and I bought it. I still have that magazine. Inside there's an article about how Pascal was compiled to an intermediate form called "P-code". As you can see, the Java VM isn't such a new invention.

        But optimization isn't about byte code alone. As someone men

        • I saw the Byte magazine in a newsstand for the first time in August 1978, the cover story was about Pascal and I bought it. I still have that magazine. Inside there's an article about how Pascal was compiled to an intermediate form called "P-code". As you can see, the Java VM isn't such a new invention.

          It certainly isn't. The VM is much older than that - Smalltalk was using a VM in 1972.

          But optimization isn't about byte code alone. As someone mentioned in this thread, algorithms can be much more important.
          • There is something really special that Java can do for optimisation that C and C++ can't - and that is to be able to optimise at run-time for the specific processor you deploy on.

            Until the day when you can say "hey, Duke Nukem Whenever is written in Java", that's all theory. If you can do run-time optimization for Java you can also do it for C++. The only thing that keeps anyone from writing a byte-code compiler for C++ that dynamically optimizes it for the processor is that, in the bottom line, the advant

    • Very, very few modern computer programs are CPU-bound. Your computer spends 99% of its time waiting for either a hard disk or a network or some other input.
  • by NotInTheBox ( 235496 ) on Sunday July 23, 2006 @12:03PM (#15766074) Homepage
    Writing small is difficult, and I don't think it can be done in a group. Most software which is small is written by fewer than 3 programmers.

    Compare: "Easy writing makes hard reading." -- Ernest Hemingway
  • by geoff lane ( 93738 ) on Sunday July 23, 2006 @12:11PM (#15766095)
    ...is strongly dependent on the interfaces it presents to the world. The pressure is to push more and more functions onto a chip so that external interfaces can be eliminated. This is the victory of the general-purpose computer. While in the short term it is always possible to build faster, more specialised hardware to perform a function, eventually a faster CPU chip which implements the same facility in software becomes cheaper and generic.
  • What units are on the two axes of this spiral? Why isn't it nested tetrahedrons?
  • If I had a dime for every time /. has had a story titled "The Future of Computing".

    Seriously. Not the same story, true, but the same title, over and over. Just look. [slashdot.org]

  • by plasmacutter ( 901737 ) on Sunday July 23, 2006 @12:18PM (#15766115)
    Gaming continues to be highly demanding on computer systems.

    While I believe processors are currently heavily outmuscling the exchange rate of primary memory, and that this gap should be closed, I don't believe the era of power expansion is over.

    While chipmakers are becoming increasingly environmentally conscious by increasing performance per watt, they are also abandoning hype-based "clock speed" development and actually focusing on reducing cycles per instruction, raising instructions per second, optimizing pipelining, and increasing responsiveness.

    While this might not be seen as power growth, it is; it's similar to the difference between overall horsepower and torque on a vehicle.

    In the previous decade, most vehicles had decent horsepower but low torque; now the carmakers focus on less fuel-hungry but higher-torque engines, and as a side effect they also get more HP per liter.
  • Some thoughts (Score:5, Interesting)

    by madcow_bg ( 969477 ) on Sunday July 23, 2006 @12:32PM (#15766145)
    > "The role of CPU performance is definitely waning, and if a radical new technology fails to materialize quickly we will be compelled to write more efficient code for power consumption costs and reasons," Fomitchev writes.
    Yes, he is right. The problem is that the http://en.wikipedia.org/wiki/Unix_philosophy [wikipedia.org] has long been forgotten by the manufacturers of the OS on 90% of the PCs around the world. I do not want to start a flamewar, just consider: how many features of the OS do you really need? It is arguably a GOOD practice to put everything you can in an OS, but for cryin' out loud, at least there must be a way to remove the unneeded parts.

    > Slow, bloated software entails higher costs in terms of both direct and indirect power consumption, and the author reasons that code optimization will likely involve the replacement of blade server racks with microblade server racks where every microblade executes a dedicated task and thus eats up less power.
    That looks like where we're heading now. Just consider the 1000 projects for distributed computing out there, and the whole virtualization thingy. But this by itself doesn't necessarily mean much less power. If you want less consumption, you have to rely on technology AND on more optimized software.

    > The collective number of microblades should also far outnumber initial "macro" blades. Fully isolating software components should enhance the system's robustness thanks to the potential of real-time component hot-swap or upgrade and the total removal of software installation, implementation, and patch conflicts.
    YES!!! That's what we're talking about, man! We need separate modules to do the work. Just for info, try googling for microkernels vs. monolithic. Tanenbaum has good arguments in favor of microkernels in terms of stability. I don't want to take either side, but it is true that whilst a mere 99.999% of the cars don't suffer from reboots of their onboard computers, our desktops still do. Remember the old joke: "You've moved your mouse. Please restart your computer for the changes to take effect."

    > The likelihood of this happening is reliant on the factor of energy costs, which directly feeds into the factor of code optimization efficiency."
    Maybe we should move to higher-level programming languages that keep most of the optimizations hidden from the programmer. For example, I have recently read a review claiming that optimized Java code is VERY near native C performance. Even if that is not true, C is not well adapted to the various SSE/SIMD optimizations in modern PCs. Yes, GCC makes all kinds of optimizations, but maybe WE need to move to higher-order logic for our programs?
    • I don't want to take either side, but it is true that whilst a mere 99.999% of the cars don't suffer from reboots of their onboard computers, our desktops still do.

      This is actually not true. Car computers *do* crash and reboot, they just do it automatically and very very quickly. The thing has to respond correctly within X milliseconds, or a supervising piece of hardware simply resets it, and it starts working again within Y more milliseconds - maybe you get a couple misfires, but the car basically keeps on
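      A rough software analogue of that watchdog scheme, as a minimal Python sketch; real ECUs do the supervision in hardware, and the timings here are invented for illustration:

      import multiprocessing as mp
      import time
      from queue import Empty

      HEARTBEAT_DEADLINE = 0.5   # seconds of silence before we "reset" the worker

      def worker(beats):
          while True:
              # ... the real control-loop work would go here ...
              beats.put(time.monotonic())   # heartbeat: "still responding"
              time.sleep(0.1)

      def supervisor():
          while True:
              beats = mp.Queue()
              proc = mp.Process(target=worker, args=(beats,), daemon=True)
              proc.start()
              try:
                  while True:
                      beats.get(timeout=HEARTBEAT_DEADLINE)   # raises Empty on a hang
              except Empty:
                  proc.terminate()   # the "reset": kill the wedged worker...
                  proc.join()
                  # ...and the outer loop immediately starts a fresh one.

      if __name__ == "__main__":
          supervisor()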

  • I should have been an academic after all, I think. All I'd have to do is write a paper that says the same thing Fred Brooks said 25 years earlier and people would think I'm brilliant.
  • by drDugan ( 219551 ) * on Sunday July 23, 2006 @12:49PM (#15766199) Homepage
    ok, my BS meter is pegged

    while the article has lots of interesting data and information, he doesn't know much about predicting the future

    He's right on focusing on memory (vs. CPU) - this is where the major bottlenecks are

    He completely missed the boat though on virtualization. Everywhere I look there are different examples of virtualization that are driving development choices - and he doesn't mention it once.

    he also is missing the tide happening right now with metaprogramming and generators

    also missing the boat on the trends in language flexibility that are turning application development into "domain specific language" development. we're at a tipping point over the current 2-3 year horizon where developers are building out the language AT THE SAME time they write their application. coupled with effective reuse strategy, this will revolutionize how quickly and how functional all our apps can be.

    it sucks that text is static; there are a huge number of ideas here, and I have not expressed them as well as I'd like, but alas, once submitted, the text can't change, and it presents the same info to each reader, no matter what their context or background is. I like talking to people much better.

  • by Teckla ( 630646 ) on Sunday July 23, 2006 @01:03PM (#15766229)
    Here's a link to the single page print version [ddj.com] of the article.
  • Buy SUNW (Score:2, Interesting)

    by alucinor ( 849600 )
    Wow, if this guy is right, then I'd say to buy some SUNW stock, because this power-consumption problem is exactly what their latest line of server offerings directly addresses.

    Oh, and I don't have any Sun stock myself, heheh ... yet :)
  • Today we have bulky boxen and entire rooms filled with computers. We have computers taking up space in our offices and homes. We dedicate energy to just keeping them cool. Tomorrow (ok, so not really tomorrow, probably in the semi-distant future) we won't really see computers at all in terms of our daily routines. They'll be so miniaturized as to become transparent. The only aspect of computing we'll see in our daily lives will be the user interfaces. The actual computers themselves will be invisible, or at least barely noticeable. They'll become mere extensions of our every whim, capable of reinforcing and improving our minds in a seamless fashion. That, I believe, is the future of computing.

    Take for example Google. What happens when you can send a query to Google without actually interfacing with an external device like a laptop with a wireless internet connection? Or to Wikipedia? You'll be able to answer questions within seconds of being asked. Maybe less. This is a bigger change than you might think. Where does this leave conventional schooling, for example?

    To me, it's exciting. And I wish it were here already.

    TLF
    • You're thinking of the '70s and '80s notion of future computing. We already have a lot of that. Your toaster, car, watch, etc. all have computers (microcontrollers) in them.

      The true future of computing is highly parallel systems. Think quantum computing. We will eventually eliminate the entire concept of serialized computing like we have today. Computing time will no longer be an issue as everything will compute instantly. Software will be much more about the algorithm rather than the hardware. In a highly p
  • Wow, was this article hard to read. It looks like the author never read the first draft and merely ran a spell checker on it. There are scores of typos, missing commas, and improper homonyms. Here are just a few:

    We have not really come back to good old centralized computing but rather to arrived at distributed computing model. Although a bulk of work may be done by centralized resources such as servers providing computational services, our desktop PCs and client workstations handle independently multitu

  • As I see it, the history of computing is one of repeated waves that crash against the beach as the next wave gathers behind it. In terms of corporate IT, this is marked by shifts back and forth between centralized and distributed computing, but the phenomenon encompasses all of computing.

    What happens in each phase is that the new model is adopted first at the grass roots -- individual users or small groups see the "new thing" as a way to get something done, and start using it under the radar. It's frequ

  • Where's the *journal* in Dr. Dobb's Journal? It has editors, but apparently no one actually edits? I can forgive the lack of "the" articles in an article by what I assume is a Russian writer, but not the dozens of basic errors.

    Discreet elements were gradually replaced with integrated circuits
    "Discrete elements"

    Intel's new "Woodcrest" server chip as only 14
    "Woodcrest server chip has only 14"

    speculative threading in the vane of to Intel's Mitosis.
    new manufacturing technology in the
  • "Single-task PC operating systems (OSes) evolved into multitasking OSes to make the most of increasing CPU power"

    I don't think so. The motivation for multitasking was to allow data to be exchanged between applications while both were still running, as well as allowing a longer-term task to run in the background while the user does other work. The trade-offs between running a single application and multiple applications at the same time are actually rather independent of CPU speed.
  • That should be a given, regardless of your nice new shiny hardware. Anything less is pure laziness.
  • by Sloppy ( 14984 ) on Monday July 24, 2006 @01:48AM (#15768025) Homepage Journal
    What a twisted perspective:
    So as CPU power grew to meet specific tasks we wanted our PCs to perform it became too much for general tasks such as text editing or spread-sheeting. That extra power just as in the case of old mainframes led to the adoption of multi-tasking operating systems on desktop and personal computers. We had extra power and we wanted to do something with it.

    Multitasking is for getting the most out of your computer (whether it's fast or slow) but pays off most rewardingly when it's slow. If processors and I/O were infinitely fast, people wouldn't give a damn about multitasking, because they would never be waiting for their computer to complete a task. It's when you have to wait for something that you most enjoy multitasking; it lets you use your machine for doing something else instead of twiddling your thumbs staring at the progress indicator.

    Let's say you want to render a graphics scene, download a file, and edit a text document. An MSDOS user would do those things serially, sadly knowing that:

    • while he was rendering, his serial port was idle
    • while he was downloading, his CPU was nearly idle
    • while he was editing, his serial port and CPU were both nearly idle

    And this was true whether it was a 4.7 MHz XT or a 100 MHz 486. "Extra power" had nothing to do with it. Indeed, the 486 user probably lamented MSDOS' lack of multitasking less (not more, as the author suggests) because the rendering would be so much faster.

    Meanwhile, the 7 MHz Amiga user, despite the seemingly "wimpiness" of his machine (HA!), did all three operations in parallel. His CPU stayed at 100% utilization, his serial port downloaded as fast as it could, and his text editor easily kept up with his typing. The Amiga user gets the most out of his machine. Not because the Amiga is fast, but because multitasking mitigates slowness.

    It's the mere desire to get the most work done that led to multitasking on personal computers. It wasn't the "extra power" that did it. It just seemed that way to the x86 users (and probably only the x86 users) because the slow chips (8086) just happened to have very poor support for multitasking compared to the fast chips (80386). So multitasking appears to correlate with speed. But for the x86ers, it was really a question of CPU features, rather than performance.

    The crux of the author's error is this: "We had extra power and we wanted to do something with it." He has forgotten that "we wanted to do something with it" whether or not we had "extra power."
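    To make the "slowness is exactly when multitasking pays" point concrete, here's a minimal Python sketch where a CPU-bound "render" and an I/O-bound "download" stand in for the MSDOS-era jobs above:

    import threading
    import time

    def render_scene():
        deadline = time.monotonic() + 2.0
        while time.monotonic() < deadline:      # ~2 s of busy CPU work
            sum(i * i for i in range(1000))

    def download_file():
        time.sleep(2.0)                         # ~2 s waiting on the "serial port"

    start = time.monotonic()
    render_scene()
    download_file()
    print("one at a time:", round(time.monotonic() - start, 1), "s")   # ~4 s

    start = time.monotonic()
    jobs = [threading.Thread(target=render_scene),
            threading.Thread(target=download_file)]
    for job in jobs:
        job.start()
    for job in jobs:
        job.join()
    print("overlapped:   ", round(time.monotonic() - start, 1), "s")   # ~2 s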
