The Future of Computing
An anonymous reader writes "Penn State computer science professor Max Fomitchev explains that computing has evolved in a spiral pattern, from a centralized model to a distributed model that retains some aspects of centralized computing. Single-task PC operating systems (OSes) evolved into multitasking OSes to make the most of increasing CPU power, while the introduction of the graphical user interface at the same time consumed much of that performance and fueled demand for still more. "The role of CPU performance is definitely waning, and if a radical new technology fails to materialize quickly we will be compelled to write more efficient code for cost and power-consumption reasons," Fomitchev writes. Slow, bloated software entails higher costs in terms of both direct and indirect power consumption, and the author reasons that code optimization will likely involve replacing blade server racks with microblade racks in which every microblade executes a dedicated task and thus draws less power. The collective number of microblades should also far outnumber the original "macro" blades. Fully isolating software components should enhance the system's robustness, thanks to the potential for real-time component hot-swap or upgrade and the total removal of software installation and patch conflicts. How likely this is to happen depends on energy costs, which feed directly into the payoff of code optimization."
Re:Bloat (Score:5, Informative)
Really? Languages don't get much more high-level than Smalltalk, and Squeak does things that C/C++ programs seem to require a lot more bloat to manage.
You're not understanding. (Score:3, Informative)
n    2^n    n*log10(n)    n*log10(n) + 100
1      2    0.000         100.000
2      4    0.602         100.602
3      8    1.431         101.431
4     16    2.408         102.408
5     32    3.495         103.495
6     64    4.669         104.669
7    128    5.916         105.916
As you can see, 2^n stays below n*log10(n) + 100 (the +100 assumes our language pays a flat 100-unit penalty to run the n*log(n) algorithm versus the 2^n language) only up to n = 6, so that's where the boundary sits. If our language pays only a 50-unit penalty, the boundary drops to 5.
How much slower would a language have to be (in units) for that n to not be incredibly small? Say you have AI for an RTS where you want 20 units on screen. Scaling up our little spreadsheet table, 2^20 is about 1.0x10^6 larger than 20 * log(20). So if we are writing AI for a game (such as Warcraft) with 20 units on screen, and the choice is between C with a 2^n decision algorithm and an interpreted language with an n*log n decision algorithm, the interpreted language would have to be 1,048,550 units slower overall, or about 52,428 units of time slower per iteration of the algorithm, just to break even (and it would have to carry an overhead of more than 52,428 units/iteration to be LESS effective!).
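The crossover point is easy to sanity-check with a short script (a sketch mirroring the table's additive-overhead model; the function name is mine, not from the thread):

```python
import math

def boundary(overhead):
    """Largest n for which 2**n is still cheaper than
    n * log10(n) + overhead, i.e. the last input size where
    the fast language's exponential algorithm wins."""
    n = 1
    while 2 ** n <= n * math.log10(n) + overhead:
        n += 1
    return n - 1

print(boundary(100))  # 6, matching the table
print(boundary(50))   # 5
```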
The order of the algorithm is the dominant factor in how running time grows with input size. Compilers are not little god boxes and will not fix broken algorithms. Even a very large per-iteration overhead (which doesn't exist in practice, since interpreted languages use caches, P-code, or even decent JIT techniques) isn't enough to sink their performance.
Re:Wirth's law (Score:2, Informative)
If the only thing such languages had going for them were being "high-level", and if higher-level languages had to be slow and clunky (like BASIC, which doesn't belong in the same category anyway), then I could see your point. However:
1. Languages like Python gained popularity as a glue language. 90% of it is running C/C++ for the heavy lifting anyway.
2. Such languages are also prototyping languages. A programmer who uses these languages as the prototype can still translate to C/C++ later, and they'll be much more productive because these languages allow you to more freely experiment with your working design. There's less reason to fear starting over if necessary. Simply taking an elitist view that you begin and end with C isn't going to make you more productive, nor does it guarantee your program to be faster if it ends up with a bad design because you already had 5000 lines of code (instead of 500 lines) written that you'd hate to just throw away.
3. Face it, there are varying skill-levels for programmers of all languages. Optimized standard libraries and built-in higher-level datatypes are tried and tested code within the language that works. Leveraging this code reduces the chance that a newbie will try to re-invent a higher-level data structure, and do it wrong, which would be slower than simply using an optimized one already available.
4. "Higher-level" doesn't mean "slow". JIT compilers and garbage collectors have reached the point where letting the runtime manage memory can be as efficient as managing it yourself.
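Point 3 in action (my own illustration, not from the comment): compare a hand-rolled priority queue that rescans the list for the minimum on every pop against the standard library's heapq, which does the same job in O(log n) per pop.

```python
import heapq

def drain_naive(items):
    # The "re-invented" structure: scan for the minimum and
    # remove it on every pop. O(n) per pop, O(n^2) to drain.
    items = list(items)
    out = []
    while items:
        smallest = min(items)
        items.remove(smallest)
        out.append(smallest)
    return out

def drain_heap(items):
    # The tried-and-tested built-in: O(log n) per pop.
    heap = list(items)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

data = [9, 3, 7, 1, 5]
print(drain_naive(data))  # [1, 3, 5, 7, 9]
print(drain_heap(data))   # [1, 3, 5, 7, 9]
```

Both produce the same output; only the scaling differs, which is exactly why leaning on the optimized library wins.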
Re:Bloat (Score:2, Informative)
Well, if you want one that can't be split up well, try any modern first-person 3D game (FPS, RPG or otherwise). If you want the game to feel good, you have an extremely limited response time. You have no chance to predict the player's input in advance, and no time to ship a frame out to a render farm and get it back in time. And while some tasks can be "outsourced" to a second CPU/core, it scales far worse than linearly. I doubt quad-core and beyond will do anything at all for gaming. You can throw all the parallelism you want at it, but you couldn't beat a modern PC no matter how many Pentium IIs and Voodoo cards you throw at it.
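The worse-than-linear scaling the parent describes is what Amdahl's law quantifies (the law isn't named in the comment, but it's the standard model): if only a fraction p of each frame's work can run in parallel, n cores give a speedup of 1 / ((1 - p) + p/n), capped at 1/(1 - p) no matter how many cores you add.

```python
def amdahl_speedup(p, n):
    """Speedup on n cores when a fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# If half of each frame is inherently serial, no number of
# cores ever gets past a 2x speedup:
print(amdahl_speedup(0.5, 2))     # ~1.33
print(amdahl_speedup(0.5, 4))     # 1.6
print(amdahl_speedup(0.5, 1000))  # just under 2.0
```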