The Future of Computing 184

An anonymous reader writes "Penn State computer science professor Max Fomitchev explains that computing has evolved in a spiral pattern, from a centralized model to a distributed model that retains some aspects of centralized computing. Single-task PC operating systems (OSes) evolved into multitasking OSes to make the most of increasing CPU power, while the graphical user interface, arriving at the same time, soaked up much of that performance and fueled demand for still more. "The role of CPU performance is definitely waning, and if a radical new technology fails to materialize quickly we will be compelled to write more efficient code for power consumption costs and reasons," Fomitchev writes. Slow, bloated software entails higher costs in terms of both direct and indirect power consumption, and the author reasons that code optimization will likely involve the replacement of blade server racks with microblade server racks where every microblade executes a dedicated task and thus eats up less power. The collective number of microblades should also far outnumber initial "macro" blades. Fully isolating software components should enhance the system's robustness thanks to the potential of real-time component hot-swap or upgrade and the total removal of software installation, implementation, and patch conflicts. Whether this happens depends on energy costs, which directly determine the payoff of code optimization."
This discussion has been archived. No new comments can be posted.

  • Re:Bloat (Score:5, Interesting)

    by Poromenos1 ( 830658 ) on Sunday July 23, 2006 @11:49AM (#15766035) Homepage
    That's very true. I always wonder why some programs have to be tens of megabytes (especially some shareware ones), and then a (usually open source) program comes along that's a tenth the size and has more features (e.g. uTorrent vs. everything else). I know processor speed and memory are practically unlimited, so you don't have to worry about them, but this is just stupid.
  • by namityadav ( 989838 ) on Sunday July 23, 2006 @11:57AM (#15766055)
    Pardon my ignorance, but all the blades are going to have a lot of extra software running too (OS, app manager, network communication, etc.). So isn't there a chance the microblades end up eating even more power (especially if the software is still bloated)? Splitting the code across different blades is definitely not code optimization anyway.
  • by geoff lane ( 93738 ) on Sunday July 23, 2006 @12:11PM (#15766095)
    ...is strongly dependent on the interfaces it presents to the world. The pressure is to push more and more functions onto a chip so that external interfaces can be eliminated. This is the victory of the general-purpose computer. While in the short term it is always possible to build faster, more specialised hardware to perform a function, eventually a faster CPU chip that implements the same facility in software becomes the cheaper, generic solution.
  • Re:Bloat (Score:2, Interesting)

    by resonte ( 900899 ) on Sunday July 23, 2006 @12:15PM (#15766105)
    One reason, I think, is that programming languages have become more high-level over time, decreasing production time while sacrificing program efficiency.

    Companies that can decrease the production time of a program spend less money on developers. Efficiency is not a priority because most users do not understand the concept: if a program runs slowly, a novice user will think it's a problem with the computer, not the program.

  • by plasmacutter ( 901737 ) on Sunday July 23, 2006 @12:18PM (#15766115)
    Gaming continues to be highly demanding on computer systems.

    While I believe processors currently heavily outmuscle the transfer rates of primary memory, and that this gap should be closed, I don't believe the era of power expansion is over.

    While chipmakers are becoming increasingly environmentally conscious by increasing performance per watt, they are also abandoning hype-based "clock speed" development and actually focusing on reducing cycles per instruction, raising instructions per second, optimizing pipelining, and increasing responsiveness.

    This might not be seen as power growth, but it is; it's similar to the difference between overall horsepower and torque in a vehicle.

    In the previous decade, most vehicles had decent horsepower but low torque; now carmakers focus on less fuel-hungry but higher-torque engines, and as a side effect they also get more HP per liter.
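
    To put rough numbers on that analogy (illustrative figures of mine, not from the article): effective throughput is roughly clock speed times instructions per cycle (IPC). A 3.8 GHz core averaging 0.8 IPC retires about 3.0 billion instructions per second; a 2.4 GHz core averaging 1.5 IPC retires about 3.6 billion, on a smaller power budget. The lower-clocked, higher-IPC core is the high-torque engine.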
  • Some thoughts (Score:5, Interesting)

    by madcow_bg ( 969477 ) on Sunday July 23, 2006 @12:32PM (#15766145)
    > "The role of CPU performance is definitely waning, and if a radical new technology fails to materialize quickly we will be compelled to write more efficient code for power consumption costs and reasons," Fomitchev writes.
    Yes, he is right. The problem is that the http://en.wikipedia.org/wiki/Unix_philosophy [wikipedia.org] has long been forgotten by the manufacturers of the OS on 90% of the PCs around the world. I do not want to start a flamewar, just consider: how many features of the OS do you really need? It is arguably GOOD practice to put everything you can in an OS, but for cryin' out loud, at least there must be a way to remove the unneeded parts.

    > Slow, bloated software entails higher costs in terms of both direct and indirect power consumption, and the author reasons that code optimization will likely involve the replacement of blade server racks with microblade server racks where every microblade executes a dedicated task and thus eats up less power.
    That looks like where we're heading now. Just consider the 1000 projects for distributed computing out there, and the whole virtualization thingy. But this by itself cannot mean much less power. If you want less consumption, you have to rely on technology AND on more optimized software.

    > The collective number of microblades should also far outnumber initial "macro" blades. Fully isolating software components should enhance the system's robustness thanks to the potential of real-time component hot-swap or upgrade and the total removal of software installation, implementation, and patch conflicts.
    YES!!! That's what we're talking about, man! We need separate modules to do the work. Just for info, try googling for microkernels vs. monolithic kernels. Tanenbaum has good arguments in favor of microkernels in terms of stability. I don't want to take either side, but it is true that while a mere 99.999% of cars don't suffer from reboots of their onboard computers, our desktops still do. Remember the old joke: "You've moved your mouse. Please restart your computer for the changes to take effect."

    > Whether this happens depends on energy costs, which directly determine the payoff of code optimization.
    Maybe we should move to higher-level languages that keep most of the optimizations hidden from the programmer. For example, I recently read a review claiming that optimized Java code comes VERY near native C performance. Even if that is not true, C is not well adapted to the various SSE/SIMD optimizations in modern PCs. Yes, GCC makes all kinds of optimizations, but maybe WE need to move to higher-order logic for our programs?
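
    As a minimal sketch of what "SSE optimization" means in practice (my own illustration, not from the article or the review): the same dot product written in plain portable C and hand-written with SSE intrinsics. A vectorizing compiler (e.g. gcc -O3 with SSE enabled) can sometimes derive the second form from the first automatically:

        /* Illustrative only: dot product, scalar vs. SSE intrinsics. */
        #include <xmmintrin.h> /* SSE */

        float dot_plain(const float *a, const float *b, int n)
        {
            float sum = 0.0f;
            for (int i = 0; i < n; i++)
                sum += a[i] * b[i];
            return sum;
        }

        float dot_sse(const float *a, const float *b, int n)
        {
            __m128 acc = _mm_setzero_ps();        /* four parallel partial sums */
            int i;
            for (i = 0; i + 4 <= n; i += 4)       /* four floats per iteration */
                acc = _mm_add_ps(acc, _mm_mul_ps(_mm_loadu_ps(a + i),
                                                 _mm_loadu_ps(b + i)));
            float part[4];
            _mm_storeu_ps(part, acc);
            float sum = part[0] + part[1] + part[2] + part[3];
            for (; i < n; i++)                    /* scalar tail when n % 4 != 0 */
                sum += a[i] * b[i];
            return sum;
        }

    Four multiplies per instruction instead of one; a good JIT can in principle make the same transformation behind the programmer's back, which is exactly the point about hiding optimizations.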
  • by crmartin ( 98227 ) on Sunday July 23, 2006 @12:42PM (#15766183)
    Yup, exactly. In fact, Fred Brooks extends it in general to any specialized processor: you find that your specialized processor can never keep up with doing the same thing on a general-purpose processor. Consider, for example, the IBM System/38, which became the AS/400, simulating the S/38's wild-ass object-based architecture on cheap commodity processors; the Symbolics Lisp Machine, which was wiped out as a Lisp platform by pretty much the next generation of GP processors; or graphics processors, like Pixar's specialized graphics machine, which has been replaced by a bunch of Linux boxes.

    In this case, the guy is re-inventing massively parallel computing; a useful technique for certain problems, but hard to map to general computing.

    How different is this from multi-core SPARCs running a thread per core?
  • by drDugan ( 219551 ) * on Sunday July 23, 2006 @12:49PM (#15766199) Homepage
    OK, my BS meter is pegged.

    While the article has lots of interesting data and information, he doesn't know much about predicting the future.

    He's right to focus on memory (vs. CPU); this is where the major bottlenecks are.

    He completely missed the boat, though, on virtualization. Everywhere I look there are different examples of virtualization driving development choices, and he doesn't mention it once.

    He's also missing the tide happening right now with metaprogramming and generators.

    He's likewise missing the trends in language flexibility that are turning application development into "domain-specific language" development. We're at a tipping point over the current 2-3 year horizon where developers build out the language AT THE SAME time they write their application. Coupled with an effective reuse strategy, this will revolutionize how quickly our apps can be built and how functional they can be.

    It sucks that text is static. There are a huge number of ideas here, and I have not expressed them as well as I'd like, but alas, once submitted, the text can't change, and it presents the same info to each reader, no matter what their context or background is. I like talking to people much better.

  • by Inoshiro ( 71693 ) on Sunday July 23, 2006 @01:03PM (#15766228) Homepage
    It's the algorithm. It's straight complexity theory [wikipedia.org]; C/C++ is not a panacea. If you write a 2^n or n! algorithm in C, it'll have its doors blown off by an n log n algorithm in Python.

    Constant-time, n log n, or even linear algorithms run OK (CPUs today are fast enough that even for a decent-sized n, a linear algorithm finishes quickly). However, no computer humans can ever build on the same principles as your desktop will run 2^n, n^n, or n! algorithms in any kind of useful time for large n.

    You might get results sooner if you can parallelize the work (see the Distributed.net cracking efforts on factoring large numbers), but if you can't make the algorithm work in parallel or otherwise reduce it to a polynomial-time algorithm, even a supercomputer from the year 50,000 won't solve these problems for large n.

    Don't focus on the language; that's the wrong place to look.
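
    A toy illustration of the point, in C both times (my sketch, nothing from the parent): the same function in exponential versus linear time. No compiler flag or extra GHz rescues the slow version for large n:

        /* Illustrative only: algorithm choice dwarfs language choice. */
        #include <stdio.h>

        /* Exponential time: recomputes the same subproblems over and over. */
        unsigned long long fib_slow(int n)
        {
            return n < 2 ? (unsigned long long)n
                         : fib_slow(n - 1) + fib_slow(n - 2);
        }

        /* Linear time: one pass, two variables. */
        unsigned long long fib_fast(int n)
        {
            unsigned long long a = 0, b = 1, t;
            for (int i = 0; i < n; i++) { t = a + b; a = b; b = t; }
            return a;
        }

        int main(void)
        {
            printf("%llu\n", fib_fast(90));  /* effectively instant */
            printf("%llu\n", fib_slow(50));  /* minutes, in any language */
            return 0;
        }

    fib_slow(50) makes on the order of 10^10 recursive calls while fib_fast(90) is a 90-iteration loop; that gap dwarfs any constant-factor difference between C and Python.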
  • Buy SUNW (Score:2, Interesting)

    by alucinor ( 849600 ) on Sunday July 23, 2006 @01:08PM (#15766242) Journal
    Wow, if this guy is right, then I'd say buy some SUNW stock, because this power-consumption problem is exactly what their latest line of server offerings directly addresses.

    Oh, and I don't have any Sun stock myself, heheh ... yet :)
  • by yope ( 656090 ) on Sunday July 23, 2006 @01:31PM (#15766309)
    Splitting the code across different blades is definitely not code optimization anyway.

    Of course it is not. Why doesn't everybody realize that this Max Fomitchev has absolutely no idea what he's talking about? This is complete rubbish. "Microblades" to save power? Come on, do the math (rough numbers below): more power supplies producing conversion losses (no power supply has an efficiency of 100%), more complex software (because tasks are split across different CPUs and have to communicate over some sort of network connection)...
    How on earth is such a thing going to be simpler and more energy-efficient? OK, in the past there was one big mainframe where we now have a rack full of smaller servers (blades), but every one of those small servers does much more than that old mainframe did. This is a simple balance of two forces. First, computers need to get more powerful and efficient, to do more computing with less power in less space. Second, since computers get smaller, we can put more of them in the same space to get even more computing power. The point of balance between small and powerful is a simple matter of cost: one huge mainframe with many CPUs eventually gets more expensive to build than a collection of smaller servers with a few CPUs each.

    I can't believe how many people who are supposed to be knowledgeable can talk so much nonsense. He's supposed to be a computer science professor, isn't he? And then there are so many who believe that stuff without even thinking about it.
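
    The math, with made-up but plausible numbers: one server drawing 400 W behind a single 85%-efficient supply pulls about 400 / 0.85 ≈ 470 W from the wall. Split the same workload across ten microblades at 45 W each behind ten 75%-efficient supplies and you pull 10 × 45 / 0.75 = 600 W, before counting anything spent on inter-blade communication. Better supplies or a much lower per-blade load can change the outcome, but the power savings certainly don't come automatically.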
  • Re:Bloat (Score:5, Interesting)

    by Cal Paterson ( 881180 ) on Sunday July 23, 2006 @02:14PM (#15766411)
    Small binary size != Fast program

    uTorrent != Open Source

    uTorrent isn't the fastest torrent program around either, nor does it have the most features. It probably doesn't strike the best balance either.

    Next time you get the "uTorrent is b3tt4r!" bull from the #footorrents channel, read past the "only 6 MB memory requirement" and "170 KB binary size" statistics: consider that uTorrent is missing lots of features, isn't FOSS, depends on an OS with a circa 256 MB base requirement, and isn't as fast or as nice with I/O as some other clients [rakshasa.no].

    Then perhaps later, consider that the hallmark of a good program isn't good benchmarks but good design. The fact that Debian comes on seven CD-ROMs with 18,000 programs doesn't mean that WinXP is faster because it only comes on one.
  • Re:Bloat (Score:2, Interesting)

    by lostguru ( 987112 ) on Sunday July 23, 2006 @02:28PM (#15766452) Homepage
    Well, not quite.

    Depending on which of the three great Mac classic ages we're talking about (7, 8, or 9), that wasn't really true.

    7: had to run on 68k machines as well as the newer PPC machines and the numerous clones of the day, but still managed to perform quite well on all of them.


    8: was the first to run on PPC and the G3 line of chips, along with supporting a new programming system with the Carbon libraries.


    9: well, nine sucked, and I'm turning into a troll as I type, so I'll just call it non-classic and leave.


    Of course, I see no real problem with hand-optimized assembly if that's what gets the job done right, and I still think the Mac GUI is wonderful and faster than the others (KDE, GNOME/metashitty).
  • Re:Bloat (Score:4, Interesting)

    by cnettel ( 836611 ) on Sunday July 23, 2006 @03:26PM (#15766612)
    On the other hand, even "obviously" serial tasks can be made faster if you let other threads handle highly speculative precalculation/prefetching/whatever. In a UI context, latency is king. If you can write your code so that processing starts in a background thread twenty ms before the actual click (when the mouse merely hovers over the button or menu item), you still get the effect of a faster response. Keep the processing that actually depends on the user's input as small as possible. Guess, if you'd otherwise just be idle. Reindex your DB on another thread, even if it only saves 2% on your main thread(s). Given, of course, that performance and latency are what you care about.
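
    A minimal sketch of the hover trick in C with POSIX threads (names and timings invented for illustration; a real toolkit would supply the event hooks):

        /* Illustrative only: start speculative work on hover, collect it on click. */
        #include <pthread.h>
        #include <stdio.h>
        #include <unistd.h>

        typedef int result_t;                  /* stand-in for the real payload */

        static pthread_t worker;
        static int in_flight = 0;
        static result_t speculative;

        static result_t expensive_query(void)  /* stand-in for the real work */
        {
            usleep(50 * 1000);                 /* pretend this takes 50 ms */
            return 42;
        }

        static void *precompute(void *arg)
        {
            (void)arg;
            speculative = expensive_query();
            return NULL;
        }

        void on_hover(void)                    /* cursor reaches the button */
        {
            if (!in_flight) {
                in_flight = 1;
                pthread_create(&worker, NULL, precompute, NULL);
            }
        }

        void on_click(void)                    /* user actually clicks */
        {
            if (in_flight) {
                pthread_join(worker, NULL);    /* usually already finished */
                in_flight = 0;
                printf("result: %d\n", speculative);
            }
        }

        int main(void)                         /* simulate hover, a pause, a click */
        {
            on_hover();
            usleep(60 * 1000);
            on_click();
            return 0;
        }

    If the user never clicks, the speculative result is simply discarded; the bet costs only cycles that would have been idle anyway.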
  • Re:Bloat (Score:3, Interesting)

    by 0111 1110 ( 518466 ) on Sunday July 23, 2006 @03:40PM (#15766637)
    we have a culture where implementation speed is valued over everything else.

    Hear, hear. Well said. That, precisely, is the core of the problem. The only way that is going to change is if the market forces their hand. If/when the speed of a single core finally does hit a wall, we may see this. It's all about priorities. Developers are making an explicit choice in favor of reduced development time at the expense of exploding minimum machine requirements for nearly identical tasks. The end user really has no way to know how efficient the code is, or how much faster it might have been had the developer used C/C++, Ada, or whatever fast language, with ample hand-coded assembly where needed (yes, different routines for each platform).

    Not that development time is a non-issue. It is very important. There needs to be a balance struck between code efficiency/optimization and development time. Right now there is no balance. For modern developers, it is efficiency that is the non-issue.
  • by spage ( 73271 ) <`moc.egapreiks' `ta' `egaps'> on Sunday July 23, 2006 @05:41PM (#15766882)
    Where's the *journal* in Dr. Dobb's Journal? It has editors, but apparently no one actually edits? I can forgive the missing "the"s from what I assume is a Russian writer, but not the dozens of basic errors.

      Discreet elements were gradually replaced with integrated circuits
    "Discrete elements"

      Intel's new "Woodcrest" server chip as only 14
    "Woodcrest server chip has only 14"

      speculative threading in the vane of to Intel's Mitosis.
      new manufacturing technology in the vane of IBM's
      in the vane of Sun's UltraSparc
    "in the vein of..."!

      although it's new Efficieon CPU
    "Its" here is not a contraction of "it is" or "it has", so no apostrophe, also garbled name "Efficeon"

      the cores itself would become more simple and less-deeply pipelined (kind of like UltraSparc T1 is doing already).
    The cores themselves would become simpler and less-deeply pipelined (similar to the UltraSparc T1)

      while other cores might be deprived of such capacity
    He means "capability"

      unless a way of frequency increases is found that does not result in the market increase in power consumption
    "Unless a way to increase frequency is found that does not result in a marked increase in power consumption"

      instead are likely to seem them in niece markets
    "see them in niche markets"

      Code efficiency is at all time low and potentially hide at least a order of magnitude performance boost
    "Code efficiency is at an all-time low and potentially hides at least an order-of-magnitude performance boost"

      the role of CPU is likely to diminish with time living little reason for further clock-speed improvement
    "leaving little reason..." !!

      extremely bloated code that out GHz-rated CPUs execute
    "that ouR ... CPUs execute"

      there is amble room for software optimization
    "ample room" !

      Quite another alternative to VLIW that is already sprouting profusely
    WTF?

    Crap editing makes text difficult to read, so people won't read carefully, leading to superficial scanning and the decline of RTFA.
  • by sgtrock ( 191182 ) on Sunday July 23, 2006 @05:54PM (#15766924)
    Since the GP suggested that graphics-intensive games would not perform well, might I suggest a side-by-side comparison between Quake2 and Jake2 [bytonic.de]? There's even a benchmarks page [bytonic.de]. :)

    I think that should help quell the fears about Java vs. C, anyway.
