The Future of Computing
An anonymous reader writes "Penn State computer science professor Max Fomitchev explains that computing has evolved in a spiral pattern from a centralized model to a distributed model that retains some aspects of centralized computing. Single-task PC operating systems (OSes) evolved into multitasking OSes to make the most of increasing CPU power, while the introduction of the graphical user interface at the same time soaked up CPU performance and fueled demand for still more. "The role of CPU performance is definitely waning, and if a radical new technology fails to materialize quickly we will be compelled to write more efficient code for power consumption costs and reasons," Fomitchev writes. Slow, bloated software entails higher costs in terms of both direct and indirect power consumption, and the author reasons that code optimization will likely involve the replacement of blade server racks with microblade server racks where every microblade executes a dedicated task and thus eats up less power. The collective number of microblades should also far outnumber initial "macro" blades. Fully isolating software components should enhance the system's robustness thanks to the potential of real-time component hot-swap or upgrade and the total removal of software installation, implementation, and patch conflicts. The likelihood of this happening is reliant on the factor of energy costs, which directly feeds into the factor of code optimization efficiency."
Re:Bloat (Score:5, Interesting)
Code optimization != specialized blades (Score:2, Interesting)
the cost of hardware... (Score:5, Interesting)
Re:Bloat (Score:2, Interesting)
Companies that can decrease the time needed to create a program will spend less money on developers. Efficiency is not a priority, as most users do not understand the concept. If a program runs slowly on a user's computer, the novice user will think it's a problem with the computer, not the program.
"radical new technology"? (Score:5, Interesting)
While I believe processors are currently heavily outmuscling the bandwidth of primary memory, and that this gap should be closed, I don't believe the era of power expansion is over.
While chipmakers are becoming increasingly environmentally conscious by increasing performance per watt, they are also abandoning hype-based "clock speed" development and actually focusing on reducing cycles per instruction, raising instructions per second, optimizing pipelining, and increasing responsiveness.
This might not be seen as power growth, but it is; it's similar to the difference between overall horsepower and torque in a vehicle.
In the previous decade, most vehicles had decent horsepower but low torque; now carmakers focus on less fuel-hungry but higher-torque engines, and as a side effect they also get more HP per liter.
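The cycles-per-instruction point above can be put in rough numbers. A minimal sketch (all clock and CPI figures below are hypothetical, chosen only to illustrate the arithmetic):

```python
def instructions_per_second(clock_hz: float, cpi: float) -> float:
    """Throughput = clock rate divided by average cycles per instruction."""
    return clock_hz / cpi

# A hypothetical "clock speed race" chip: high clock, deep pipeline, poor CPI.
old_chip = instructions_per_second(clock_hz=3.8e9, cpi=1.9)

# A hypothetical efficiency-focused chip: lower clock, much better CPI.
new_chip = instructions_per_second(clock_hz=2.4e9, cpi=0.8)

print(f"old: {old_chip / 1e9:.1f} GIPS, new: {new_chip / 1e9:.1f} GIPS")
```

The lower-clocked chip retires more instructions per second, which is the horsepower-vs-torque effect in numeric form.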
Some thoughts (Score:5, Interesting)
Yes, he is right. The problem is that the http://en.wikipedia.org/wiki/Unix_philosophy [wikipedia.org] has long been forgotten by the manufacturers of the OS on 90% of the PCs around the world. I do not want to start a flamewar, just consider: how many features of the OS do you really need? It is arguably a GOOD practice to put everything you can in an OS, but for cryin' out loud, at least there must be a way to remove the unneeded parts.
> Slow, bloated software entails higher costs in terms of both direct and indirect power consumption, and the author reasons that code optimization will likely involve the replacement of blade server racks with microblade server racks where every microblade executes a dedicated task and thus eats up less power.
That looks like where we're heading now. Just consider the 1000 projects for distributed computing out there, and the whole virtualization thingy. But this by itself cannot mean that much less power. If you want less consumption, you have to rely on technology AND on more optimized software.
> The collective number of microblades should also far outnumber initial "macro" blades. Fully isolating software components should enhance the system's robustness thanks to the potential of real-time component hot-swap or upgrade and the total removal of software installation, implementation, and patch conflicts.
YES!!! That's what we're talking about, man! We need separate modules to do the work. Just for info, try googling for Microkernels vs. Monolithic. Tanenbaum has good arguments in favor of microkernels in terms of stability. I don't want to take either side, but it is true that whilst a mere 99.999% of cars don't suffer from reboots of their onboard computers, our desktops still do. Remember the old joke: "You've moved your mouse. Please restart your computer for the changes to take effect."
> The likelihood of this happening is reliant on the factor of energy costs, which directly feeds into the factor of code optimization efficiency."
Maybe we should move to higher-level programming languages that keep most of the optimizations hidden from the programmer. For example, I recently read a review claiming that optimized Java code comes VERY near native C performance. Even if that is not entirely true, C is not well adapted to the various SSE/SIMD optimizations in modern PCs. Yes, GCC makes all kinds of optimizations, but maybe WE need to move to higher-order logic for our programs?
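The idea of leaving optimization to the runtime can be sketched in miniature. A hypothetical Python illustration (the function names are made up): both functions compute the same thing, but the second states the intent declaratively and lets the runtime's optimized machinery do the iteration, much as a JIT or autovectorizing compiler picks SSE/SIMD code paths for you.

```python
def sum_squares_loop(n: int) -> int:
    # The low-level way: the programmer spells out every step.
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_squares_declarative(n: int) -> int:
    # Same intent, expressed at a higher level; the built-in sum()
    # iterates in optimized C code rather than interpreted Python.
    return sum(i * i for i in range(n))

assert sum_squares_loop(1000) == sum_squares_declarative(1000)
```

The higher-level version isn't just shorter; it hands the runtime the freedom to execute the idea however it can do so fastest.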
Re:This is called "the wheel of reincarnation" (Score:3, Interesting)
In this case, the guy is re-inventing massively parallel computing; a useful technique for certain problems, but hard to map to general computing.
How different is this from multi-core SPARCs running a thread per core?
virtualization, generators, and languages (Score:4, Interesting)
While the article has lots of interesting data and information, he doesn't know much about predicting the future.
He's right to focus on memory (vs. CPU); this is where the major bottlenecks are.
He completely missed the boat though on virtualization. Everywhere I look there are different examples of virtualization that are driving development choices - and he doesn't mention it once.
He's also missing the tide happening right now with metaprogramming and generators.
Also missing the boat on the trends in language flexibility that are turning application development into "domain-specific language" development. We're at a tipping point over the current 2-3 year horizon where developers are building out the language AT THE SAME time they write their application. Coupled with an effective reuse strategy, this will revolutionize how quickly our apps can be built and how functional they can be.
It sucks that text is static. There are a huge number of ideas here, and I have not expressed them as well as I'd like, but alas, once submitted, the text can't change, and it presents the same info to each reader, no matter what their context or background is. I like talking to people much better.
It's not the language, stupid! (Score:5, Interesting)
You can have constant-time, n log n, or even n algorithms that run fine (CPUs today are fast enough that even for a decent-sized n, an n algorithm will finish quickly). However, no computer humans can ever build that works on the same principles as your desktop computer will be able to run 2^n, n^n, or n! algorithms in any kind of useful time for large n.
You might be able to get results in less time if you can parallelize the work (see the Distributed.net brute-force key-cracking efforts), but if you can't make the algorithm work in parallel or otherwise reduce it to a polynomial-time algorithm, even a supercomputer from the year 50,000 won't solve these problems for large n.
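The gap between polynomial and exponential growth can be made concrete with a quick back-of-the-envelope script (the one-billion-steps-per-second machine is a hypothetical figure):

```python
import math

def nlogn_steps(n: int) -> int:
    # Rough step count for an O(n log n) algorithm.
    return int(n * math.log2(n))

def exponential_steps(n: int) -> int:
    # Step count for a 2^n brute-force algorithm.
    return 2 ** n

STEPS_PER_SECOND = 1e9  # a hypothetical machine

for n in (30, 60, 90):
    poly = nlogn_steps(n) / STEPS_PER_SECOND
    expo = exponential_steps(n) / STEPS_PER_SECOND
    print(f"n={n}: n log n takes {poly:.2e} s, 2^n takes {expo:.2e} s")

# At n=60 the exponential algorithm already needs roughly 36 years;
# at n=90 it needs longer than the age of the universe. No constant-factor
# hardware speedup closes a gap like that.
```

This is why the parent says the language is the wrong place to look: a faster machine or a faster language only shaves the constant factor, never the exponent.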
Don't focus on the language; that's the wrong area to look.
Buy SUNW (Score:2, Interesting)
Oh, and I don't have any Sun stock myself, heheh
Re:Code optimization != specialized blades (Score:2, Interesting)
Of course it is not. Why doesn't everybody realize that this Max Fomitchev has absolutely no idea what he's talking about? This is complete rubbish. "Microblades" to save power? Come on, do the math: more power supplies that produce energy loss (no power supply has an efficiency of 100%), more complex software (because tasks are split up over different CPUs and have to communicate over a sort of network connection)...
How on earth is such a thing going to be simpler and more energy-efficient? OK, in the past there was one big mainframe, where we now have a rack full of smaller servers (blades), but every one of those small servers does much more than that mainframe of the past. This is a simple balance of two forces: first, computers need to get more powerful and efficient, in order to do more computing with less power in less space. Second, since computers get smaller, we can put more computers in that same space to get even more computing power. The point of balance between small and powerful is a simple matter of costs. One huge mainframe with many CPUs eventually gets more expensive to build than a collection of smaller servers with only a few CPUs each.
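The "do the math" point can be sketched with hypothetical numbers: every extra power supply pays its own conversion loss, and small supplies are typically less efficient than large ones. The wattage and efficiency figures below are illustrative assumptions, not measurements:

```python
def wall_power(load_watts: float, efficiency: float) -> float:
    """Power drawn from the wall to deliver a given DC load."""
    return load_watts / efficiency

# One large server PSU at an assumed 90% efficiency feeding a 1000 W load:
one_big = wall_power(1000, 0.90)

# Fifty microblades at 20 W each, each behind its own small PSU
# at an assumed 75% efficiency:
fifty_small = 50 * wall_power(20, 0.75)

print(f"single PSU: {one_big:.0f} W at the wall")
print(f"fifty micro-PSUs: {fifty_small:.0f} W at the wall")
```

Under these assumptions the microblade arrangement draws roughly 20% more at the wall for the same delivered load, which is the commenter's objection in numeric form.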
I can't believe how many people, who are supposed to be knowledgeable, can talk so much nonsense. He's supposed to be a computer science professor, isn't he? And then there are so many who believe that stuff without even thinking about it.
Re:Bloat (Score:5, Interesting)
uTorrent != Open Source
uTorrent isn't the fastest torrent program around, nor does it have the most features. It probably doesn't strike the best balance either.
Next time you get the "uTorrent is b3tt4r!" bull from the #footorrents channel, read past the "only 6 MB memory requirement" or the "170 KB binary size" statistics: consider the fact that uTorrent is missing lots of features, isn't FOSS, depends on an OS with a circa 256 MB base requirement, and isn't as fast or as nice with I/O as some other clients [rakshasa.no].
Then perhaps later, consider that the hallmarks of a good program aren't good benchmarks, but good design. The fact that Debian comes on seven CD-ROMs with 18,000 programs doesn't mean that WinXP is faster because it only comes on one.
Re:Bloat (Score:2, Interesting)
Depending on which of the three great Mac classic ages we're talking about (7, 8, or 9), that wasn't really true:
7: had to run on 68k machines as well as the newer PPC machines and the numerous clones of the day, but still managed to perform quite well on all of them.
8: was the first to run on PPC and the G3 line of chips, along with supporting a new programming system with the Carbon libraries.
9: well, nine sucked, and I'm turning into a troll as I type, so I'll just call it non-classic and leave it at that.
Of course, I see no real problem with hand-optimized assembly if that's what gets the job done right, and I still think that the Mac GUI is wonderful and faster than others (KDE, GNOME/metashitty).
Re:Bloat (Score:4, Interesting)
Re:Bloat (Score:3, Interesting)
Hear, hear. Well said. That, precisely, is the core of the problem. The only way that is going to change is if the market forces their hand. If/when the speed of a single core finally does hit a wall, we may see this. It's all about priorities. Developers are making an explicit choice in favor of reduced development time at the expense of exploding minimum machine requirements for nearly identical tasks. The end user really has no way to know how efficient the code is or how much faster it might have been if the developer had used C/C++, Ada, or whatever fast language, with ample hand-coded assembly where needed (yes, different routines for each platform).
Not that development time is a non-issue. It is very important. There needs to be a balance struck between code efficiency/optimization and development time. Right now there is no balance. For modern developers, it is efficiency that is the non-issue.
Dr. Dobbs editors don't edit (Score:2, Interesting)
Discreet elements were gradually replaced with integrated circuits
"Discrete elements"
Intel's new "Woodcrest" server chip as only 14
"Woodcrest server chip has only 14"
speculative threading in the vane of to Intel's Mitosis.
new manufacturing technology in the vane of IBM's
in the vane of Sun's UltraSparc
"in the vein of..."!
although it's new Efficieon CPU
"Its" here is not a contraction of "it is" or "it has", so no apostrophe, also garbled name "Efficeon"
the cores itself would become more simple and less-deeply pipelined (kind of like UltraSparc T1 is doing already).
The cores themselves would become simpler and less-deeply pipelined (similar to the UltraSparc T1)
while other cores might be deprived of such capacity
He means "capability"
unless a way of frequency increases is found that does not result in the market increase in power consumption
"Unless a way to increase frequency is found that does not result in a marked increase in power consumption"
instead are likely to seem them in niece markets
"see them in niche markets"
Code efficiency is at all time low and potentially hide at least a order of magnitude performance boost
"Code efficiency is at an all-time low and potentially hides at least an order-of-magnitude performance boost"
the role of CPU is likely to diminish with time living little reason for further clock-speed improvement
"leaving little reason..." !!
extremely bloated code that out GHz-rated CPUs execute
"that ouR
there is amble room for software optimization
"ample room" !
Quite another alternative to VLIW that is already sprouting profusely
WTF?
Crap editing makes text difficult to read, so people won't read carefully, leading to superficial scanning and the decline of RTFA.
Re:It's not the language, stupid! (Score:3, Interesting)
I think that should help quell the fears of Java vs. C, anyway.