


OLPC Inspires Open Source Projects

Don Marti writes "A loose network of developers representing many commonly used open source projects is working to develop a new generation of low-memory, efficient code. This targeted code is being designed for a system of which only 500 prototype boards now exist: the 'Children's Machine 1' from the One Laptop Per Child project." From the article: "Gettys says measuring existing performance has to come before trying those changes. 'We've been pulling in every decent performance tool Linux has so we can optimize when and where it really matters,' he says. A key automated testing tool is Tinderbox, a build and test management tool originally developed for Mozilla, which new OLPC developer Chris Ball has installed to build and test OLPC software. And after Red Hat kernel developer Dave Jones gave a standing-room-only talk at the 2006 Linux Symposium titled 'Why Userspace Sucks (Or, 101 Really Dumb Things Your App Shouldn't Do),' his reports of suckiness, which include kernel-based measurements of wasteful behavior, are helpful, Blizzard says."
  • by hcob$ ( 766699 ) on Friday October 27, 2006 @03:26PM (#16613802)
    I swear, it's about time people got away from the "aww, we've got as much memory as we need! No need to worry if this is way too big for our needs!" attitude.
  • Re:I do Hope... (Score:3, Insightful)

    by vandon ( 233276 ) on Friday October 27, 2006 @04:13PM (#16614628) Homepage
    efficiently using memory being one of them
    I've got to agree. Even simple programs use up way more memory than needed.
  • by JonTurner ( 178845 ) on Friday October 27, 2006 @04:40PM (#16615020) Journal
    >>Even simple programs use up way more memory than needed.

    Agreed. It's obscene to write a simple "Hello, world" and look at the memory usage. I used to fret about a few dozen bytes... now I allocate megs and don't even think about it. Such is progress. Some people (such as Steve Gibson at GRC.com) are still coding Windows apps in assembly. I know people here aren't too impressed with Gibson (for all his showboating) but I've gotta say it's damned impressive seeing a real, honest, full-featured Windows app that's smaller than the Slashdot.gif picture in the upper left corner of this page. That's just cool.

    Also, look at the demo scene. There are some absolutely stunning apps being written that use procedural rendering to accomplish skeletal character animation with inverse kinematics, plus soundtracks and advanced effects, in just a few hundred kilobytes. Amazing stuff.

    So coding for efficiency is happening, but it's rare -- a case of someone showing off. Or is it?

    This brings up an interesting point: due to changes in architecture and hardware, coding for efficiency (usually performance) is already resulting in smaller code size. Let me explain.

    In the early days of microcomputers (C64, Apple ][, TRS-80), system resources were extremely limited and CPU power was slight (e.g. 6502/8088, 8-bit, 1 MHz, 32K RAM, 40K floppies, no HDD or only tape for storage), so programmers had no choice but to code for efficient performance within the boundaries of storage limitations.

    Then around the days of the M68000 Macintosh and the 386, with its extended memory addressing, coding for performance meant pre-computing tables and looking up values as needed. Memory was cheaper than CPU.

    This trend reversed in the early 90s when storage became cheap and bus speeds increased but couldn't keep pace with CPU speed advances. Suddenly it was "cheaper" to compute values at the time they were needed, because bus speeds imposed a huge penalty on looking up values. Blowing the on-chip RAM cache could make or break a tight graphics rendering loop, so that was the priority. (Remember, at this point software rendering was still common.)
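    The table-versus-recompute trade-off described above can be sketched in a few lines of Python. This is only an illustration: the table size, function names, and the choice of sine are arbitrary, and on modern hardware you would have to profile to know which side wins.

```python
import math

# "Memory is cheaper than CPU" era: precompute a table once, then look up.
TABLE_SIZE = 4096
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def sin_lookup(angle):
    """Approximate sin(angle) by indexing into the precomputed table."""
    index = int(angle / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
    return SINE_TABLE[index]

def sin_compute(angle):
    """"CPU is cheaper than a cache miss" era: just compute it every time."""
    return math.sin(angle)
```

    Whether the lookup is faster depends on exactly the factors in the comment above: if the table stays in cache it's nearly free, and if it evicts hot data from a tight loop it costs more than the computation it saved.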

    Introduce the extremely high-powered GPU video cards we have today and the situation changes again. With huge computational loads offloaded onto a dedicated graphics engine, the system CPU is something of a traffic cop, ensuring subsystems have a steady flow of sound, textures, geometry, and network packets... oh, and occasionally performing game logic.

    So it appears we've come full circle.

    The bad news is now you guys have to listen to dinosaurs like me who cut their teeth coding in 6502 assembly ramble on about "well, sonny, back in MY day..."
  • Boot time (Score:4, Insightful)

    by Tribbin ( 565963 ) on Friday October 27, 2006 @05:26PM (#16615738) Homepage
    If you speed up a computer's boot time by one second, and every PC in the world (approx. a billion) starts up every day, you would save ~12,000 human-years every year. Every year you would save ~150 lives!

    Every millisecond of speedup in software everybody uses every day would save 12 lives!
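    The arithmetic above checks out roughly, and can be verified in a few lines of Python. The billion-PC count, one boot per day, and the 80-year lifespan are the poster's assumptions, not measured figures:

```python
# Sanity-check the boot-time claim: one second saved per boot,
# a billion PCs each booting once a day.
SECONDS_PER_YEAR = 365 * 24 * 3600

pcs = 1_000_000_000        # assumed number of PCs in the world
saved_per_boot = 1         # seconds saved per boot
boots_per_year = 365       # one boot per PC per day

seconds_saved = pcs * saved_per_boot * boots_per_year
human_years = seconds_saved / SECONDS_PER_YEAR   # roughly 11,600 human-years
lifetimes = human_years / 80                     # assuming an 80-year lifespan
```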

    NOW GO BACK TO WORK! You murderer!
  • by vtcodger ( 957785 ) on Friday October 27, 2006 @07:20PM (#16617200)
    ***They poll I/O ports?! Have these people never heard of hardware interrupts? I knew that a lot of lore had been lost in the PC revolution, but I had no idea the situation was this bad.***

    I don't think that the PS/2 Auxiliary Device Port is designed to generate an interrupt when something is hot-plugged into it.

    BTW -- notwithstanding what your computer science teachers taught you, polling is quite efficient if loads are predictable. Polling is usually much less resource intensive than interrupts ... if the polls have a high hit rate. And it's much less subject to weird, difficult-to-reproduce problems and to race conditions. Across the broad spectrum of computing, there are probably far more cases where interrupts are used when polling would work better than vice versa.

    That said, 20 polls per second seems excessive for detecting a new device on a port that is rarely used except for mice. Once every 5 seconds would seem more appropriate.
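    As a rough illustration of what that rate change buys (wakeup counts only, not a measured power workload -- the rates come from the comment above):

```python
# Compare daily wakeups at the reported 20 Hz poll rate
# versus polling once every 5 seconds.
SECONDS_PER_DAY = 24 * 3600

def wakeups_per_day(polls_per_second):
    """Number of poll wakeups in one day at a given poll frequency."""
    return int(polls_per_second * SECONDS_PER_DAY)

fast = wakeups_per_day(20)      # 20 polls/sec: 1,728,000 wakeups/day
slow = wakeups_per_day(1 / 5)   # one poll per 5 sec: 17,280 wakeups/day
```

    A 100x reduction in wakeups matters on a machine like the OLPC, where every needless wakeup keeps the CPU out of its low-power state.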

  • by Anonymous Coward on Friday October 27, 2006 @11:20PM (#16619126)
    How quickly /. forgets, sigh.

    Without open documentation for the hardware the OLPC is not a truly open source platform.

    From a "help the chiiildren" point of view that's ok, except OLPC are trying to bullshit the FOSS community into doing their development for them by claiming they have an open platform.
