The Future of Computing (184 comments)

An anonymous reader writes "Penn State computer science professor Max Fomitchev explains that computing has evolved in a spiral pattern, from a centralized model to a distributed model that retains some aspects of centralized computing. Single-task PC operating systems (OSes) evolved into multitasking OSes to make the most of increasing CPU power, while the introduction of the graphical user interface consumed much of that power and fueled demand for still more performance. "The role of CPU performance is definitely waning, and if a radical new technology fails to materialize quickly we will be compelled to write more efficient code for reasons of power consumption and cost," Fomitchev writes. Slow, bloated software entails higher costs in terms of both direct and indirect power consumption, and the author reasons that code optimization will likely involve replacing blade server racks with microblade racks in which every microblade executes a dedicated task and thus consumes less power. The microblades would collectively far outnumber the original "macro" blades. Fully isolating software components should also make the system more robust, since components could be hot-swapped or upgraded in real time and software installation, implementation, and patch conflicts would disappear. How likely this is to happen depends on energy costs, which directly drive the payoff of optimizing code."
This discussion has been archived. No new comments can be posted.

  • Re:Bloat (Score:5, Informative)

    by TheRaven64 ( 641858 ) on Sunday July 23, 2006 @12:30PM (#15766137) Journal
    One reason I think is the fact that programming languages have become more high-level over time. Decreasing

Really? Languages don't get much more high-level than Smalltalk, and Squeak does things that C/C++ programs seem to need a lot more bloat to accomplish.

  • Re:Bloat (Score:2, Informative)

    by Tolleman ( 606762 ) <.jens. .at. .tollofsen.se.> on Sunday July 23, 2006 @12:31PM (#15766139) Homepage
    BeOS
  • Re:Bloat (Score:3, Informative)

    by $RANDOMLUSER ( 804576 ) on Sunday July 23, 2006 @12:37PM (#15766171)
    That is true, but that code was wickedly hand-optimized assembly code written by Andy Hertzfeld [wikipedia.org]. The hardware was known and closed. That sort of thing is frowned on these days.
  • by Teckla ( 630646 ) on Sunday July 23, 2006 @01:03PM (#15766229)
    Here's a link to the single page print version [ddj.com] of the article.
  • Re:Bloat (Score:4, Informative)

    by chris_eineke ( 634570 ) on Sunday July 23, 2006 @02:27PM (#15766447) Homepage Journal
Please remember that a Windows system by default doesn't come with the shitload of libraries that any desktop Linux distribution ships nowadays. KTorrent's payload is
ceineke@lapsledge:/home/eineke$ ls -l /usr/bin/ktorrent
    -rwxr-xr-x 1 root root 284636 2006-05-23 14:51 /usr/bin/ktorrent*
    284636 bytes. Not too bad for a K-app. But consider this, Batman: (leading whitespace removed)
    ceineke@lapsledge:/home/eineke$ ldd /usr/bin/ktorrent
    linux-gate.so.1 => (0xffffe000)
    libktorrent.so.0 => /usr/lib/libktorrent.so.0 (0xb7e38000)
    libkparts.so.2 => /usr/lib/libkparts.so.2 (0xb7df5000)
    libkio.so.4 => /usr/lib/libkio.so.4 (0xb7aca000)
    libkdeui.so.4 => /usr/lib/libkdeui.so.4 (0xb7807000)
    libkdesu.so.4 => /usr/lib/libkdesu.so.4 (0xb77f1000)
    libkwalletclient.so.1 => /usr/lib/libkwalletclient.so.1 (0xb77e1000)
    libkdecore.so.4 => /usr/lib/libkdecore.so.4 (0xb75b9000)
    libDCOP.so.4 => /usr/lib/libDCOP.so.4 (0xb7588000)
    libresolv.so.2 => /lib/tls/i686/cmov/libresolv.so.2 (0xb7574000)
    libutil.so.1 => /lib/tls/i686/cmov/libutil.so.1 (0xb7571000)
    libart_lgpl_2.so.2 => /usr/lib/libart_lgpl_2.so.2 (0xb755c000)
    libidn.so.11 => /usr/lib/libidn.so.11 (0xb752d000)
    libkdefx.so.4 => /usr/lib/libkdefx.so.4 (0xb7501000)
    libqt-mt.so.3 => /usr/lib/libqt-mt.so.3 (0xb6d18000)
    libaudio.so.2 => /usr/lib/libaudio.so.2 (0xb6d03000)
    libXt.so.6 => /usr/lib/libXt.so.6 (0xb6cb5000)
    libjpeg.so.62 => /usr/lib/libjpeg.so.62 (0xb6c96000)
    libXi.so.6 => /usr/lib/libXi.so.6 (0xb6c8e000)
    libXrandr.so.2 => /usr/lib/libXrandr.so.2 (0xb6c8b000)
    libXcursor.so.1 => /usr/lib/libXcursor.so.1 (0xb6c82000)
    libXinerama.so.1 => /usr/lib/libXinerama.so.1 (0xb6c7e000)
    libXft.so.2 => /usr/lib/libXft.so.2 (0xb6c6c000)
    libfreetype.so.6 => /usr/lib/libfreetype.so.6 (0xb6c03000)
    libfontconfig.so.1 => /usr/lib/libfontconfig.so.1 (0xb6bd5000)
    libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0xb6bd2000)
    libpng12.so.0 => /usr/lib/libpng12.so.0 (0xb6baf000)
    libXext.so.6 => /usr/lib/libXext.so.6 (0xb6ba1000)
    libX11.so.6 => /usr/lib/libX11.so.6 (0xb6abb000)
    libSM.so.6 => /usr/lib/libSM.so.6 (0xb6ab3000)
    libICE.so.6 => /usr/lib/libICE.so.6 (0xb6a9b000)
    libXrender.so.1 => /usr/lib/libXrender.so.1 (0xb6a93000)
    libz.so.1 => /usr/lib/libz.so.1 (0xb6a7f000)
    libfam.so.0 => /usr/lib/libfam.so.0 (0xb6a76000)
    libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb6a64000)
    libacl.so.1 => /lib/libacl.so.1 (0xb6a5c000)
    libattr.so.1 => /lib/libattr.so.1 (0xb6a58000)
    libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0xb6983000)
    libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb6961000)
    libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb6956000)
    libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb6827000)
    libXfixes.so.3 => /usr/lib/libXfixes.so.3 (0xb6823000)
libexpat.so.1 => /usr/lib/libexpat.so.1 (0xb6804000)
/lib/ld-linux.so.2 (0xb7efb000)
    libXau.so.6 => /usr/lib/libXau.so.6 (0xb6800000)
I'm sure that when you claimed that uTorrent isn't as large as other shareware programs, you looked into its library dependencies. Or did you?
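    A minimal sketch (Python; the binary path follows the example above, and the ldd parsing is approximate) of how one might total a binary's own size together with the on-disk size of every shared library it links against:
    import os
    import re
    import subprocess

    # Sum the sizes of all shared libraries a binary pulls in, to
    # compare its "real" footprint with its own on-disk size.
    binary = "/usr/bin/ktorrent"
    out = subprocess.run(["ldd", binary], capture_output=True, text=True).stdout
    # ldd lines look like "libfoo.so => /usr/lib/libfoo.so (0x...)".
    paths = re.findall(r"=>\s*(\S+)\s*\(", out)
    total = sum(os.path.getsize(p) for p in paths if os.path.exists(p))
    print(f"{binary}: {os.path.getsize(binary)} bytes on its own, "
          f"plus {total} bytes of shared libraries")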
  • by Inoshiro ( 71693 ) on Sunday July 23, 2006 @02:39PM (#15766478) Homepage
Column 1: n. Column 2: 2^n. Column 3: n * log10(n). Column 4: n * log10(n) + 100. (Logs are base 10.)

    1   2     0                  100
    2   4     0.602059991327962  100.602059991328
    3   8     1.43136376415899   101.431363764159
    4   16    2.40823996531185   102.408239965312
    5   32    3.49485002168009   103.49485002168
    6   64    4.66890750230186   104.668907502302
    7   128   5.9156862800998    105.9156862801

As you can see, with a flat overhead of 100 units (standing in for a language that is 100 times slower at running our n*log10(n) algorithm than our 2^n language), the exponential algorithm stays cheaper only up to n = 6. If our language is only 50 times slower, the boundary drops to n = 5.

How much slower would a language have to be (in units) for that crossover n to be not incredibly small -- say you have AI for an RTS where you want 20 units on screen? Well, if we scale up our little spreadsheet table, we see that 2^20 is about 1.0x10^6 larger than 20 * log(20). This leads us to the conclusion that if we are writing AI for a game (such as Warcraft) where we want 20 units on screen, and we have a choice between C with a 2^n decision algorithm or an interpreted language with an n*log n decision algorithm, the interpreted language would have to be 1048550.0 units slower in total -- or 52428.0 units of time slower per iteration of the algorithm -- to be merely equally effective (and it would have to have an overhead of greater than 52,428 units/iteration to be LESS effective!).

The order of the algorithm is the dominant factor in the time performance from input to output. Compilers are not little god boxes and will not fix broken algorithms. Even a very large per-iteration overhead (which doesn't exist in practice, since interpreted languages use caches, P-code, or even decent JIT techniques) isn't enough to sink their performance.
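    A quick sketch (Python, using base-10 logs as in the table above) that reproduces these numbers and locates the crossover:
    import math

    # Reproduce the table: an O(2^n) algorithm in a "fast" language
    # vs. an O(n log n) algorithm carrying a flat 100-unit overhead
    # (the stand-in for a language that is ~100x slower).
    for n in range(1, 8):
        nlogn = n * math.log10(n)
        print(n, 2 ** n, nlogn, nlogn + 100)

    # Find where the exponential algorithm finally overtakes the slow one.
    n = 1
    while 2 ** n <= n * math.log10(n) + 100:
        n += 1
    print("2^n overtakes n*log10(n) + 100 at n =", n)  # -> 7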
  • Re:Wirth's law (Score:2, Informative)

    by PostPhil ( 739179 ) on Sunday July 23, 2006 @02:42PM (#15766486)
    No one using Python is "waiting for a faster CPU". Languages like Python and Ruby do have productivity gains that are worth whatever overhead they have.

If the only thing such languages had going for them were being "high-level", and if higher-level languages had to be slow and clunky (like BASIC, which doesn't belong in the same category anyway), then I could see your point. However:

1. Languages like Python gained popularity as glue languages. 90% of the time it's C/C++ doing the heavy lifting underneath anyway (a minimal sketch of this pattern follows the list).

2. Such languages are also prototyping languages. A programmer who prototypes in one of them can still translate to C/C++ later, and will be much more productive because these languages let you experiment freely with your working design. There's less reason to fear starting over if necessary. Simply taking the elitist view that you begin and end with C isn't going to make you more productive, nor does it guarantee a faster program if the design turns out bad but you'd already written 5,000 lines of code (instead of 500) that you'd hate to throw away.

    3. Face it, there are varying skill-levels for programmers of all languages. Optimized standard libraries and built-in higher-level datatypes are tried and tested code within the language that works. Leveraging this code reduces the chance that a newbie will try to re-invent a higher-level data structure, and do it wrong, which would be slower than simply using an optimized one already available.

    4. "Higher-level" doesn't mean "slow". JIT compilers are getting to the point where it's more efficient to let the compiler or interpreter handle garbage collection than doing it yourself.
  • by gtwilliams ( 738565 ) on Sunday July 23, 2006 @03:33PM (#15766626)
Wow, was this article hard to read. It looks like the author never reread the first draft and merely ran a spell checker over it. There are scores of typos, missing commas, and misused homonyms. Here are just a few:

    We have not really come back to good old centralized computing but rather to arrived at distributed computing model. Although a bulk of work may be done by centralized resources such as servers providing computational services, our desktop PCs and client workstations handle independently multitude of tasks.

    As computer clock speed increased from kilohertz to gigahertz so did out imagination and understanding of what can be done with this computational power to serve our needs;

    So on one had we have a habit (but rarely a need) for higher performance and on the other hand we have a looming fossil fuel crisis, global warming and rising energy prices.

    Yet the only piece of evidence on AMD's involvement with speculative threading that so far surfaced is infamous U.S. patent # 6,574,725 that looks like hardware support for speculative threading in the vane of to Intel's Mitosis.

    Still, with Itanium disappointment tarnishing commercial VLIW prospects perhaps permanently we are unlikely to see more general-purpose VLIW computers, but instead are likely to seem them in niece markets employed for solving a very limited set of special-purpose tasks.

    (for example, new manufacturing technology in the vane of IBM's recent report of experimental SiGe chips running at 350 GHz at room temperature and at 500 GHz when chilled by liquid helium).

    Further more we thought that a better CPU makes a better computer, which is no longer so.

  • Re:Bloat (Score:2, Informative)

    by Kjella ( 173770 ) on Monday July 24, 2006 @05:33AM (#15768305) Homepage
    Seriously, I challenge anybody here to name even one real-world CPU or IO intensive task that cannot be split up.

Well, if you want one that can't be split up well, try any modern first-person 3D game (FPS, RPG or otherwise). If you want the game to feel good, you have an extremely limited response time. You have no chance to predict the player's input in advance, and you don't have time to ship a frame out to a render farm and get it back in time. And while some tasks can be "outsourced" to a second CPU/core, it scales far worse than linearly. I doubt quad-core and beyond will do anything at all for gaming. You can throw all the parallelism you want at it, but you couldn't beat a modern PC no matter how many Pentium IIs and Voodoo cards you threw at it.
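    That worse-than-linear scaling is essentially Amdahl's law. A small sketch (Python, with a purely hypothetical figure of 40% of per-frame work being parallelizable) shows how quickly extra cores stop helping:
    # Amdahl's law: with a fraction p of the work parallelizable,
    # speedup on n cores is 1 / ((1 - p) + p / n).
    def amdahl(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # Hypothetical: 40% of a game frame parallelizes cleanly.
    for cores in (1, 2, 4, 8, 64):
        print(f"{cores:2d} cores -> {amdahl(0.4, cores):.2f}x")
    # Even 64 cores yield only ~1.65x -- the serial 60% dominates.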
