Crusoe: new benchmarks 117

c't has published some new TM5600 benchmarks. Sony's new Vaio notebook uses 10W per hour to power 128 MB of RAM (112 MB usable), a 12 GB hard drive, an ATI Rage Mobility graphics controller and a 9 inch display (resolution: 1024 x 480). This compares with 15-22W for a normal notebook with a bigger display. Intel's Pentium III was usually 50 percent faster at a given frequency, but sometimes virtually no faster and sometimes twice as fast. Code Morphing's impact was measurable: some programs (Quake III, and PovRay run on desk.pov) ran 10-20 percent faster the second time they were run.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    wtf does 10 watts per hour mean? Does it really mean that after 5 hours the thing draws 50 watts?
  • by Anonymous Coward on Wednesday October 11, 2000 @01:23PM (#713755)
    let's put this into perspective:

    this is not a real 9" display with the 4:3 aspect ratio we are used to. this display is 1/2 the size of a real 9" display at 4:3.

    i want to see power specs on a real laptop with a real display, not a palmtop with a 9" x 3" display.

  • My gaming is the lightest task the notebook does (mame really isn't processor heavy), but compiling code *is* processor intensive. I do it on the plane to try and get things right by the time I get to a customer site.

    Both battery life and processor power are important to me, thus I've resorted to carrying extra batteries and plugging in my laptop while at the airport. I'm the luser sitting on the floor in the corner because I found an outlet.
  • Hear, hear!

    10W/h is nice, other processors being 50% or 200% faster/slower is nice, but what I'd like to know is whether I'd be stupid to buy this for my laptop. How many DVDs will I be able to watch on the plane? Does Quake 3 run acceptably? Q2? How is gaming vs the current standard (PIII-500/K7-600, whatever) CPUs? Where is this processor best suited? I believe that from the beginning it was said that this isn't supposed to be a q3 fps crunching beast, but we need better benchmarks :)

    ie: sucks/doesn't suck/sucks less :)
  • The laser has more mass than a disk though - how would this save power?

    Perhaps you should be considering a mechanism below the CD which is basically a rotating mirror. The problem is the dimensions of this mechanism would make it *really* hard to construct in a non-flimsy manner.
  • Well, if it helps, I'm not forgiving. I think that TransMeta's technologies are very unimpressive and that their results are disappointing.
  • Hard drives already have a "power saving mode", which is the reason why if you leave your computer unattended for more than a few minutes, it porks out making you wait while the disk spins up. You can hear it. And of course the machine sits there like a moron while you're waiting for it to do something. This is why most people disable power management in the computer's BIOS, to stop this annoyance.

    I understand that laptop hard drives are supposedly designed with more efficient motors, which was requested by Intel about two years ago when they wrote some standard paper about how component makers can make their components more energy efficient. While at the same time spitting out CPUs that consume upwards of 35 watts. It must be good to be the king.
  • um - the near future, you're talking Geological time-scales right?

    Face it, the 1ghz VAPORWARE announcement was a pathetic attempt to soften the blow they all knew was coming the day before they had to announce their profit warning. Didja see the stocks today? Oh! the humanity!

    Face it, if Motorola would just get out of the AIM partnership, PPC could be a great chip again. Right now, IBM has faster chips than Motorola is capable of producing, and CAN'T SELL THEM due to a clause in the contract that forbids, well, actual competition.
  • by jafac ( 1449 ) on Wednesday October 11, 2000 @01:21PM (#713762) Homepage
    While I'm initially disappointed, most geeks know that a very large chunk of laptop power goes into the screen, and another into the hard drive. Silly as it may sound, considering the performance hit you take by using this chip over the real deal (Pentium), and considering the power savings are watered down because of the screen and HD, maybe MP would be the way to go. . .

    Of course, I've heard wonderful things about PPC-based laptops, and how very much more efficient they are with batteries. If only Moronola would get off their butts and ramp clock speed. (The "twice as fast" argument doesn't wash when the clock speed is half as fast.)
  • Just wondering, but why were you using beowulf? PVM is good enough to do pvmPov renders. I have done it with straight pvm setups.
  • What seems to be forgotten in this discussion is the fact that there is more than one component in laptops that consumes large amounts of energy. The CPU and the chipset are certainly a significant contributor to the overall balance, but they are by no means the only one.

    You still have to deal with (at least) RAM, graphics card, backlit LCD display, mass storage devices, NIC/modem, etc. Of those, the display is likely the worst offender.

    I remember in the late 80's, Atari demoed a laptop that supposedly ran 12h on one set of batteries. Unfortunately, it was never marketed and I can therefore not verify the claim. The interesting bit of information, though, is that Atari opted for a monochrome 640x400 _reflective_ display. While nobody could sell that type of hardware some 10+ years later, it does show where we need to make changes before extended battery life is feasible. Maybe OLEDs will someday mature to fill this niche.
  • If Transmeta were to provide an API to expose their cache to Linux (and potentially other operating systems), then the translated code could be saved in the filesystem or swap and reloaded as needed. This would remove the code morphing penalty and decrease memory pressure. This would be similar to what is done on AS/400.
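The caching idea in the comment above can be sketched in a few lines. This is a toy model, not Transmeta's actual mechanism: `translate` is a stand-in for the real x86-to-VLIW step, and all names here are invented for illustration.

```python
import hashlib

# Toy sketch of a translation cache keyed by a hash of the original
# x86 bytes, so a block is only translated on the first encounter.
translation_cache = {}

def translate(x86_bytes):
    # Placeholder for the real x86 -> VLIW translation step.
    return b"VLIW:" + x86_bytes

def morph(x86_bytes):
    key = hashlib.sha1(x86_bytes).digest()
    if key not in translation_cache:          # translate only on a cache miss
        translation_cache[key] = translate(x86_bytes)
    return translation_cache[key]

morph(b"\x90\x90")  # first call translates and caches
morph(b"\x90\x90")  # second call is a pure cache hit
print(len(translation_cache))  # 1
```

Persisting that dictionary to disk (or exposing it through an OS API) is the commenter's suggestion: the same keyed lookup would then survive a reboot.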
  • Somehow, testing this processor in a system with a tiny display doesn't seem like a very good way to compare it to a realistic real world notebook

    No, but they could have compared it to Intel version of the picture book.

    (Sure, some people may buy this, but the display seems too small for many real world applications.)

    The only time the display was really inconveniently small was when a dialog "knew" it would have enough height, but my display was too short. Bloody pain. Happened fairly often in Windows, but pretty much never in BSD. The X apps mostly had scroll bars, or did something else when denied the height. When not, I could at least drag the window up and down with ease; Windows made it really hard to do that.

    It is a pity Sony didn't support BSD (or Linux) on that system, it was much more usable.

  • by stripes ( 3681 ) on Wednesday October 11, 2000 @01:44PM (#713767) Homepage Journal
    Keep in mind that ultra portable machines using low-power-consumption RISC processors and components achieve a sub-1W rate.

    What machines would these be? While the StrongARM uses less power than the Crusoe, most of that 10W isn't the CPU. I don't think we will get a 1W laptop until hard drives are replaced by something that sucks less power, DRAM gets replaced by MRAM, and most importantly we can make the big power-sucking LCD backlight go away, or at least make it much smaller (like an eyeglasses backlight....).

    Until then our power sippers will be palm pilot like displays with no backlight most of the time, very little RAM, and no hard drive...

  • by stripes ( 3681 ) on Wednesday October 11, 2000 @06:04PM (#713768) Homepage Journal
    Sure, the StrongARM processor sucks about 1W.

    Oh, I didn't mean to pick on the SA because it sucks a watt. I actually thought it sucked more like half that. I was trying to say that replacing the Crusoe with another CPU in a 10W box will only give you at best an 8W box (assuming the Crusoe sucks 2W, and the new CPU zero).

    So to suck dramatically less we either have to go the Palm Pilot route and drop the hard drive, most of the RAM, and lots of other stuff...or find a way to get all that other stuff to sip power rather than gulp it.

    Typical ARM processors' power consumption is much more like a few milliwatts! Hundreds of times less than the Crusoe.

    They tend to run slower as well. I picked the StrongARM because it was in the same ballpark (even if its integer performance is likely to be about half the Crusoe's, and the FP will be abysmal because it has no FPU). The XScale would have been better, but I didn't think of it at the time.

    Good technology for embedded, ultraportables, and even wearables.

    Yep, as long as you don't need x86 compatibility it is better than the Crusoe. Then again, something almost always beats the x86 if you don't need x86 compatibility (well, it has price/performance going for it in some price and performance bands, but I'm wandering offtopic...).

  • Hard drives already have a "power saving mode", which is the reason why if you leave your computer unattended for more than a few minutes, it porks out making you wait while the disk spins up. [...]

    This arrangement is decidedly sub-optimal for certain sparse workloads, e.g., mine, which I suspect is why Xeger said:

    Why not produce a hard drive that "idles" at low RPM [...]

    It sounds like he was thinking the same thing I am, namely that instead of stopping when idle and spinning all the way up to 4800 RPM (or whatever it is) when needed, it should idle at an extremely low speed, like 120 RPM (no, I didn't forget a zero), so it can handle occasional small tasks slower than normal but without needing to spin up at all.

    Here's the thinking: my typical workload when I'm mobile tends to generate a single I/O once every 70 seconds or so. As a result, my HD spins unnecessarily for about a minute, spins down, and, about five seconds later, forces the application to freeze for two or three seconds while it spins back up to answer a single I/O request, and then continues spinning unnecessarily. Rinse and repeat until the battery is drained. Meanwhile, suffer a three-second latency for common actions.

    Okay, it's not quite that pathological, but I'm not exaggerating by much. I'm mostly reading and editing, which involves hardly any I/O at all once my files are open. So my HD should be idle all the time, right? Well, sorta. The thing is, every now and then I click a control, switch windows, or save a buffer, which generates a small but nonzero amount of activity.

    This is where the extremely-slow-but-not-stopped mode would come in: 120 RPM = 1/2 second per rotation = 1/4 second per half rotation, so the mean latency would be a quarter of a second. Pretty horrible compared to the typical 8 milliseconds at normal speed, right? Well, yeah, but since it takes two to three seconds to reach normal speed, this would actually be much better. Of course, you'd still want it to spin all the way up when you do something big, like a compile or launching or quitting an application. Also, maybe you'd want to double the speed to a whopping 240, or even 300 RPM to keep the max latency under the quarter-second annoyance threshold, but I think that with some work, this could be a really good idea.

    David Gould
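The latency arithmetic in the comment above can be checked with a quick sketch. The RPM values are the commenter's hypotheticals; mean rotational latency is half a rotation.

```python
def mean_rotational_latency_s(rpm):
    # One rotation takes (60 / rpm) seconds; on average the data you
    # want is half a rotation away, so halve it.
    return (60.0 / rpm) / 2

for rpm in (120, 240, 300, 4800):
    ms = mean_rotational_latency_s(rpm) * 1000
    print(f"{rpm:5d} RPM -> {ms:7.2f} ms mean latency")
```

At 120 RPM this gives the quarter-second figure from the comment, and at 4800 RPM it lands near the "typical 8 milliseconds" ballpark (6.25 ms of rotational latency, before seek time).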
  • Thanks. I suspected as much, but I wasn't sure, and I figured I'd already rambled enough... Any idea what minimum speed is required by the air cushion? Maybe there could still be an intermediate state between off and on, to better handle the usage characteristics I described, even if it couldn't be that slow. Or is the whole idea just ridiculous? Oh, well.

    David Gould
  • The original Mac laptop had no backlight, as well. Easier to do with monochrome. Just look at your watch (LCD w/ no backlight).
  • In fact, unless you make the screen see-thru, you can't make a color laptop w/o backlight with existing tech.
  • Somehow, testing this processor in a system with a tiny display doesn't seem like a very good way to compare it to a realistic real world notebook.
    Some of us real realistic people in the realistic real world carry our real laptops to real places, and prefer not to lug around eight-pound monsters. :-)

    Don't get me wrong, I love my dual-monitor 3200x1200 setup at home, but 500K pixels is easily enough to get useful work done. (Besides, standard laptops aren't much better at 1024x768. Only 60% more pixels.)

    There is a large and growing market for subnotebooks, and it's obvious that's the part of the laptop market where a power-frugal processor would be most valuable. Most of the other Crusoe-powered laptops coming out have similar displays (Loox is 1024x480, TP240 is 800x600). It makes perfect sense to evaluate the processor in the context of the systems in which it'll actually be used, rather than big desktop-replacement laptops where nobody cares about power consumption...


  • by Mr Z ( 6791 )

    ROFL!!!! Someone mod that (+1, Funny). It certainly is not offtopic if you consider what we're discussing (e.g. Watt vs. Watt-hour, and parsec being a unit of distance, not time).

  • Yes, you can measure time in meters (which, incidentally, makes velocity a unitless quantity as it's merely a slope at that point), but in reality, nobody seems to as a practical matter. Part of the problem is that there's a sqrt(-1) in there that's rather annoying.

    And yes, I took a semester of quantum mechanics myself.

  • The reason that a benchmark runs faster the second time is that the Code Morphing software doesn't need to retranslate, and with these short-lived benchmarks, the translation time is a significant amount of the timing. Rebooting won't necessarily result in a faster translation, as the Code Morphing software supposedly re-morphs sections of code more aggressively over time anyway if they get called often. Basically, if you rebooted your kernel, you might reboot more quickly, but the steady-state performance of the system would be identical after a minute or so.

    This is the reason standard benchmarking is unreliable on a Transmeta part. Basically, the benchmark runs end to end touching many features of an application, but not really reusing many of them, so you get charged the startup and initial Code Morphing overhead on a large body of code and you don't get to see the actual steady-state performance of the device. In contrast, if a user's sitting there using Word for an hour, they'll spend 99% of their time at steady state using just a few features the bulk of the time.

    So, no, you don't need to reboot to get a faster kernel on a Transmeta device, unless you just want to watch it boot 2 seconds faster the second time.

  • by Mr Z ( 6791 )

    Ack, you people don't get it! Here's the short, simple explanation:

    When the program is run the first time, you see "Code Morphing Time + Program Running Time." To the user, this manifests itself as "Total Running Time." The second time you run it you mostly only see "Program Running Time" (and some "Code Morphing Time", but not nearly as much), and so "Total Running Time" looks somewhat smaller. The reality is that "Program Running Time" didn't really change much, if at all.

    In a real world scenario (not a benchmark), "Program Running Time" is the important figure, as you typically end up using the program for quite a long period of time and so the "Code Morphing Time" ends up being in the noise, rather than being one of the dominant terms as it is in some of these benchmarks.
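The explanation above can be put in numbers. This toy model uses the c't desk.pov figures (20 s first run, 16 s on repetition); the split into a 4 s one-time translation cost plus a 16 s steady-state run time is an assumption consistent with those totals, not a measurement.

```python
# Toy model: why a short benchmark looks faster on its second run.
morph_time = 4.0   # assumed one-time Code Morphing cost, seconds
run_time = 16.0    # steady-state execution time, seconds

first_run = morph_time + run_time   # user sees "Total Running Time" = 20 s
second_run = run_time               # morphing cost already paid = 16 s

print(first_run, second_run, first_run / second_run)
```

The "Program Running Time" never changed; only the one-time overhead dropped out, producing an apparent 1.25x speedup on a short workload that would vanish on a long-running one.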

  • by Mr Z ( 6791 ) on Wednesday October 11, 2000 @02:51PM (#713778) Homepage Journal

    DISCLAIMER: I am not a kernel hacker, so I might have some factual errors in the text below. Kernel hackers: Feel free to correct me.

    It does bad things if the clock rate varies, as this affects micro-delay loops that are used when talking to certain (broken) peripherals. The execution speed of the instructions varies even on true Intel parts. The kernel has two mechanisms to cope with this, and the important one should work fine on Transmeta.

    (Reference: arch/i386/lib/delay.c in the kernel source.)

    The older mechanism is the BogoMIPS busy loop. This mechanism relies on a tight loop that fits in cache and should run with fixed behavior on a given device. This mechanism probably doesn't work real well on a Transmeta part, though I suspect Code Morphing would hit steady state real soon and so the BogoMIPS loop wouldn't be hurt too badly. Still, it's suboptimal. That leads me to the second mechanism.

    The newer mechanism which is available on most modern CPUs is the Time Stamp Counter, which returns a cycle count rendered in terms of CPU clock cycles. As long as you know the MHz rate of the CPU, you can measure time very accurately. Presumably, despite the Code Morphing layer, the Transmeta CPU will return a meaningful, coherent clock count for this instruction.

    The problem with varying clock rates is that the time-base for the BogoMIPS or TSC clock changes and the kernel isn't notified. In theory, Transmeta could actually just use a fixed-rate counter for the TSC whose time-base didn't vary as the CPU's clock-rate varied, thus fixing the problem entirely. But then, that'd make too much sense. ;-)

    As for HLT, I thought Linux did that already? That's how come my CPU stays nice and ice cold when I'm not running my Distributed Net client. A quick look at arch/i386/kernel/process.c shows the uniprocessor idle loop calling __asm__("hlt"); as long as the CPU supports it.
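The TSC mechanism described above boils down to one division, and the hazard with varying clock rates is visible right in it. A sketch (the `cpu_hz` value here is an illustrative 600 MHz, matching the Crusoe's top speed):

```python
def tsc_to_seconds(cycle_delta, cpu_hz):
    # Valid only while cpu_hz actually matches the counter's tick rate.
    # If the CPU silently switches frequency (as a power-saving part can),
    # this conversion goes wrong and the kernel is never told.
    return cycle_delta / cpu_hz

# 6,000,000 cycles on a 600 MHz part is 10 ms of wall time.
print(tsc_to_seconds(6_000_000, 600_000_000))
```

A fixed-rate counter, as suggested above, would make the divisor a constant and sidestep the whole problem.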

  • So now M$ tech support will have a legitimate reason to tell the poor user to reboot if his system is slow!

  • I'm glad someone pasted a translation. I couldn't remember the web site that has the translation program...
    And 10% of the time the site goes down before I get to read the article.
  • Theoretically the chip could get better branch prediction or figure out better ways to translate instructions if it has lots of sample data about how a program runs. Thus a program will get faster the longer you use it. I doubt this information is persistent across different sessions - unless it just optimizes based on overall CPU action rather than on a per-process basis. I don't really know what I'm talking about, by the way.

  • I apparently don't understand the idea behind Code Morphing. I thought it was that there was a cache for the translated x86 instruction stream, and that when those instructions were run again, the processor says "hey, I have those in the cache already" and so doesn't have to translate them again. However, this would make the second time through the loop faster. I can't figure out how any technology on the processor would make a program faster after it was quit and restarted. Can anyone clear this up?
  • Oh, I didn't mean to pick on the SA because it sucks a watt. I actually thought it sucked more like half that.

    Oh, I actually thought that the SA sucked way more than that.
  • by Hanzie ( 16075 ) on Wednesday October 11, 2000 @01:19PM (#713784)
    Crusoe: not the fastest, but economical
    [ 11.10.2000 17:13 ]

    For several days now, the c't lab has been measuring the performance of Transmeta's TM5600 processor. After the first results on memory performance, further benchmark results are now in.

    The Crusoe sits in the Sony Vaio PCG-C1VE notebook with 128 MB of main memory, a 12 GB hard disk, an ATI Rage Mobility, and a 9-inch display with a resolution of 1024 x 480 pixels. The processor runs at either 300 MHz with 1.2 V core voltage or 600 MHz with 1.6 V, and can be switched between the two frequencies during operation. It has 128 KB of Level 1 and 256 KB of Level 2 cache. It cannot execute x86 code directly, but first translates it into its internal VLIW (Very Long Instruction Word) instruction set. To avoid repeating this process continuously, the Crusoe stores the translated code in a Code Morphing memory, carving 16 MB out of main memory for it, so that only 112 MB remain for the operating system and applications.

    At 300 MHz, the c't battery benchmark yields a run time of roughly two hours. Sony rates the battery capacity at about 20 Wh, so the notebook draws only about 10 watts in total: quite remarkable, given that most notebooks allow themselves between 15 and 22 watts. In raw performance, however, the Crusoe clearly trails a fast-clocked mobile Pentium III:

                         BAPCo          PovRay 3.1   3DMark 2000   UT      Cinema
    [MHz]                SYSMark 2000   chess2.pov   CPU Marks     [fps]
    Crusoe 300           31             124 PPS      33            8.4     1.8
    Crusoe 600           50             257 PPS      56            11.8    3.7
    Pentium III 500      86             347 PPS      78            14.9    5.5
    Pentium III 600      92             417 PPS      81            15.4    6.6

    Comparative measurements were made on an Acer TravelMate 522 TXV with a Pentium III-600 (with SpeedStep), 128 MB of main memory, and likewise the ATI Rage Mobility.

    Some benchmarks we ran several times in a row to observe the influence of Code Morphing (the translation of x86 machine code into Crusoe instructions). In theory, a benchmark should run faster the second time, since the processor can fall back on the morphing memory and does not have to translate the code again. In practice this effect actually shows up in some benchmarks: the Quake III frame rate rose 10 percent, from 13.5 fps to 14.9 fps, in the second run. PovRay rendered "desk.pov" in 20 seconds on the first pass and needed only 16 seconds on repetitions. (Both measured at 300 MHz.)

    The results for 3DMark 2000, Unreal Tournament, and PovRay's "chess2.pov" run, however, remained constant, and most individual scores of the BAPCo suite varied only by the two percent that is usual on all systems. These benchmarks evidently execute most program sections several times anyway, so the speed advantage of repeatedly passing through a section of code already enters into the benchmark result. For example, the BAPCo subtest "Elastic Reality" consists mainly of computing 150 frames. After the first frame, the code should sit completely in the morphing memory, so the Crusoe can compute the remaining 149 frames at full speed. Code Morphing would have to be extremely slow for an influence to be measurable here.

    Further results will follow in c't issue 22/00 (in stores from 23 October). (jow / c't)

  • This is exactly what I'm looking for. I have the old PictureBook (PCG-C1X, 266MHz Pentium MMX) and on the standard battery, I'm lucky to get 1 hr of usage. With the double length battery, I get 2.5-3 hrs of life. As for performance? It gets the performance of a 266 MMX. Sony also has their slightly newer version, the PCG-C1XS, which sports a Pentium II 400MHz, and theoretically gets the same battery life as mine. For me, I bought it because it was small. Battery life was a tradeoff, so if I could get the same form factor, slightly faster, with noticeably longer battery life, I'm all for it.

    I'm actually going to the review site right now, so I'm going to see what benchmarking they do. If I can easily do the same tests, I'll post my results here. But, from the outward looks of the new PictureBook, it looks like it is the same base hardware as the Pentium-II 400 model, not mine, so a comparison to that one would be better. (I get the feeling that they were just using the Crusoe as a drop-in for the Pentium-II 400.)

    Okay, I'm going to try to find 3DMark2000 and SYSMark2000 and run them on mine... Unfortunately, I only get a 33.6 connection, so if they're too big, I'm not going to bother. I'll post my results tomorrow if I get them. But, for the record, a Pentium MMX 266 should get about 1/4 the scores of a Pentium III 600. (I know because my desktop is a P3/600, and all benchmarks I *HAVE* run show about a 4 to 1 advantage.) So, it looks like the Crusoe is noticeably faster than my PictureBook, and probably slightly faster than the Pentium-II 400-powered one. Oh, yes... Playing Unreal Tournament I get about 1-2fps... :-)

  • Houses use power measured in KiloWatt-Hours, right? This should be 10 Watt-Hours, I suppose.
  • I knew that. Watts do seem like the natural thing to say, however. Joules/hour just doesn't seem to cut it.

    Gawd, College Physics is a long way off from here. Anybody know the best way to say "the laptop sucks a quantity of power X from the battery in one hour"? 10 amp-hours, maybe?
  • Damn, beat me to that one.

    All that shows is the submitter doesn't know physics.
  • It sounds like it.

    This code morphing stuff does seem to have many strange effects. For example, how many times do you have to run a benchmark before the numbers stop changing?

    More to the point, how many times will you run a benchmark before reporting the results? Kind of depends if you like Intel or Crusoe, doesn't it?

    The Crusoe must be stopped before it renders ALL benchmarks useless!
  • a third of the weight.
    face it - other than gaming, how much heavy processing are you going to do on a plane?
    You're going to write emails, dick around with Word or listen to MP3's.
    (Even with MP3's, you can close it to turn off the monitor.)
    When you're not on the road, plug in and add a monitor...
    Jim in Tokyo

  • No... though you could say 'if on for one hour, the laptop will consume 10 watt-hours'.. but that's the same as saying it's a 10W device.

    The amount of energy used is measured in kilowatt-hours... meaning the amount of energy equivalent to drawing a kilowatt for an hour.
  • The best way to say it is not to say it at all.

    Saying 'this laptop sucks 10 watt-hours per hour' would be correct, but can be reduced to 'it's a 10 watt laptop'.

    10 Wh/h = 10 W...

    An amp-hour is meaningless unless you know the voltage. That's why batteries can get away with being rated in amp-hours: the voltage is known and fixed
    (there is no need to specify watt-hours when you already know the voltage, since watts = volts * amps). It's saying the same thing.
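The units debate in this subthread reduces to a couple of lines of arithmetic. The figures are the ones quoted for the Vaio elsewhere in the thread: a 20 Wh battery pack and a 10 W whole-notebook draw.

```python
battery_wh = 20.0   # Sony's quoted pack capacity, watt-hours (energy)
draw_w = 10.0       # measured whole-notebook draw, watts (power)

runtime_h = battery_wh / draw_w   # energy / power = time: 2 hours
energy_j = battery_wh * 3600      # 1 Wh = 3600 J, so 72,000 J in the pack

print(runtime_h, energy_j)
```

So "10 W" is the complete statement: watts are already a rate (joules per second), and "10 W per hour" adds nothing.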
  • Yes. How does that make what I said wrong?

    A '10 watt-hour' is a '10 joule/second' load for an hour.

  • The hard drive doesn't suck as much power as you'd think, under the right conditions. If you provide an extremely generous disk cache with tons of read-ahead, and spin down your drive after a minute of disuse, it won't be such a big problem. Why not produce a hard drive that "idles" at low RPM and then kicks into high gear when it receives an I/O request? A variable-speed hard drive would be significantly harder, but probably still doable.
  • Lithium polymer batteries have started hitting the market--the iPaq uses them, for one, and my new cell phone does as well. With their significantly higher energy density and the Crusoe's power-saving, we'll be seeing laptops with a running time of 6-8 hours--or to put it another way, laptops with a running time of 4 hours that have virtually no battery. This still isn't enough, and definitely isn't worth the 50% performance hit reported on some applications.

    In my experience, the biggest power drains on a mobile system are (of course) the display and the CD-ROM/DVD drive. I'm waiting for a new display technology (light-emitting polymer, for example) that will make more difference than the Crusoe or lithium polymer combined.
  • Ummmm...because you don't have to spin the laser? I'll leave it to somebody else to remember the formula for kinetic energy for a rotating body, but I know it's a hell of a lot more than moving a laser back and forth on a 3" track...
  • You mean somebody makes a CDROM that doesn't sound like a flippin' gas turbine engine spooling up? Woo hoo! I have been reluctant to replace my 4x because, believe it or not, I object to the noise.
  • I think we're talking around each other. The Kenwood drives DO spin the disc, but at a lower speed than other drives. However, instead of reading just one row of pits per rotation, it reads several. (five, I think) This multiplies the effective data rate by five.

    I thought you were wanting to spin the laser under the disc, which would make for some icky engineering and packaging challenges.

    Multihead IS a good idea, and I think that's what the Kenwood drives do.
  • Yes. Somehow, I think the original poster knew this too.

    Let's say a G4 is twice as efficient as a Pentium at every task. If its max speed is 500 MHz, it's still slower than a 1.1 GHz Pentium.

    The original promise of RISC was that reducing the instruction complexity would lead to more cycles per second, a net gain despite each cycle being less efficient.

    It appears that the opposite has happened.
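The G4-versus-Pentium argument above is just throughput arithmetic. In this sketch, the 2x work-per-cycle figure is the comment's hypothetical efficiency claim, not a measured number.

```python
g4_work_per_cycle = 2.0        # hypothetical "twice as efficient" claim
pentium_work_per_cycle = 1.0
g4_mhz, pentium_mhz = 500, 1100

g4 = g4_work_per_cycle * g4_mhz                  # 1000 units of work/us
pentium = pentium_work_per_cycle * pentium_mhz   # 1100 units of work/us

print(pentium > g4)  # True: the clock advantage wins in this scenario
```

Per-clock efficiency only helps until the clock deficit exceeds the efficiency ratio; at 500 MHz vs 1.1 GHz, it already has.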
  • Odd to see a Linux-focussed site only thinking about x86. Not a moan at you in particular, but the bunch here in _general_.

    If you're running Linux and open-source-only software, you can run LinuxPPC. If you can run LinuxPPC, use a PowerBook or iBook, and get that battery life _today_.

    Or, for that matter, get one of the larger WinCE boxes (remember the IBM WorkPad z50?) which has no drives and can run for most of a day with no heroic measures. Possibly a Psion 7 could have the same treatment too, just never heard of it being done.
  • by MarcoAtWork ( 28889 ) on Wednesday October 11, 2000 @01:50PM (#713801)
    So, the chip is much slower than the p3, and extrapolating from the provided numbers, if you fit it with a real screen (not the ridiculous PDA size this benchmark has been run with) it sucks almost as much power as the aforementioned vanilla p3 notebook, and probably more or less the same as a p3 notebook at half of the Crusoe's clock speed (since that's a comparable speed given its performance).

    Can anybody explain to me the point of all the hoopla that has been going on about this? If Intel or AMD created a processor like this they would be fried and grilled here, but since it's Linus' employer I have the feeling that the /. community is much more forgiving.

    Don't get me wrong, from a company that has never produced CPUs the Crusoe is an excellent first product, but I fail to see why this should be hailed as the second coming or something.

    Am I being too cynical ?
  • I'll bet using Kenwood's CD scheme would help with CD power drain.

    Kenwood drives (like the 52x and 72x) spin much slower, but read 7 tracks in parallel to give much faster read times. I would guess that the spinning would eat up most of the CD power.

    So to have a 12x drive, you would only have to really spin at about 2x, and it probably would spin up and down faster.
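The multi-beam arithmetic above, as a sketch. The 7 parallel beams figure is from the comment, and "12x" is its hypothetical target speed.

```python
beams = 7             # tracks read in parallel (per the comment)
effective_speed = 12  # desired effective read speed, in "x"

# Effective rate = spindle speed * beams, so solve for spindle speed.
spindle_speed = effective_speed / beams

print(round(spindle_speed, 2))  # ~1.71x, i.e. roughly the "2x" estimate
```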

  • Well, we ran PVM or something for PVMpov. We called it a beowulf cluster. I'm not really the person to talk to about how the system was set up. I only took care of the povray part and building the machines up. I'm not all that linux inclined, but my friend is. I think we completely explained how we set up the machines with linux. It was kind of cool, too. We wrote some perl scripts that copied one hard drive to another or something like that, and then would distribute the necessary povray file for rendering. I might be slightly wrong, but that's pretty much how it is. But really, I was just the povray freak of the whole project :).

  • Well, the 486's and P-100's were sitting in a back room at our school so we used those in our project, free of cost.

    They had 515 MB hard drives. Just enough for a small install of Debian and povray. Keep in mind you don't need to load up a bunch of crap like X or whatnot. The only box that we installed X on was the server, so we could show off the images.

    How much energy? Well, figure how much wattage a big high school puts through a single wall outlet. We plugged EVERYTHING into ONE SINGLE outlet. How stupid is that? :) That means hubs, computers, and monitors (2 total) were all chained together with power strips and plugged into one outlet. Fun :)

    Network throughput? Hrmm, dunno what you mean by that, but we were on a 10 Mbit network because the only cards we had free were all 10 Mbit, so that worked just fine. Keep in mind we did this because we could, but we really wanted a mention on Slashdot because that would make us cool :) Anyways, it wasn't switched, it was just hubbed. It didn't matter because it was fast enough as it was.

    It doesn't matter anyways; our cluster is in pieces and god knows where. Our school sold off the old boxes to other places. We didn't fully get the school's permission to use them in the first place, but we got the SYSADMIN at our school to help us with some things. We call him Mr. Network. He's a big geek :)

    Hope that somewhat answers your questions.

  • Both IBM and Motorola are currently shipping PPC chips at 700MHz. I don't think these faster parts include AltiVec, so Apple refuses to use them. And that is the key to the problem. THE PARTS ARE AVAILABLE AND APPLE REFUSES TO BUY THEM. As far as I know, there is nothing preventing other companies from buying them.

    IBM did a demo of a 1GHz PPC over two years ago. Granted, the PPC in the demo was hand built and only used a subset of the PPC instruction set, but it served to show what the PPC chip is capable of. As it is, IBM is currently focusing on the big iron, where bus speed and IO typically matter more than CPU clock speed, and Motorola is focusing on embedded applications. The focus of the other two AIM partners, in effect, leaves Apple out to dry. Of course, this is bound to happen in a partnership where one or more of the partners doesn't actually make chips, so it's not like this situation isn't Apple's fault.

    If I were Steve Jobs, I'd talk to Paul Allen about getting Transmeta to make some PPC instruction set compatible Ghz chips. That would be interesting.

    have a day,


  • Intel's response is that, in actuality, the Crusoe uses the same amount of power as the corresponding Intel chip. The Crusoe merely takes twice as long to do so. Another area where Intel is faster...
  • by drivers ( 45076 ) on Wednesday October 11, 2000 @01:12PM (#713807)
    How do you use 10W per hour? Considering that is a rating of power (energy per unit time), not energy.
  • I couldn't care less about how fast the clock on the computer is -- it matters how much it can do. It's like that 'hard working' incompetent employee that every office seems to have -- they'll be there for 12hrs/day, and still not get shit done.

    If a 400 Mhz machine can do something in 4 cycles that a 600Mhz machine takes 8 cycles to do, then technically, the 400Mhz machine is the 'faster' machine for that process. You start seeing that behavior on a few dozen/hundred instructions, and it makes real sense to get the lower clock speed chip.
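    The arithmetic in that example can be sketched in a few lines (the cycle counts are the hypothetical ones from the comment above, not measurements of any real chip):

```python
# Execution time = cycles / clock rate, so a lower-clocked CPU can finish
# first if it needs fewer cycles for the same work.

def exec_time_ns(cycles, clock_mhz):
    """Nanoseconds needed to retire `cycles` cycles at `clock_mhz` MHz."""
    return cycles / (clock_mhz * 1e6) * 1e9

t_400 = exec_time_ns(4, 400)  # hypothetical 400 MHz machine, 4 cycles
t_600 = exec_time_ns(8, 600)  # hypothetical 600 MHz machine, 8 cycles
print(t_400, t_600)  # ~10 ns vs ~13.3 ns: the 400 MHz machine wins here
```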
  • It sounds like he was thinking the same thing I am, namely that instead of stopping when idle and spinning all the way up to 4800 RPM (or whatever it is) when needed, it should idle at an extremely low speed, like 120 RPM (no, I didn't forget a zero)
    Unfortunately, normal hard disks can't do this. The head hovers over the disk on an air cushion produced by the fast-spinning disk. Reducing the RPM with the head still over an active area of the disk will lead to a head crash.
  • I think the reason why we don't have any real numbers is that there is no benchmark simulating a real user. All the benchmarks available are either some sort of number-crunching benchmark or give the raw power consumption of the processor. I haven't found anything really useful.

    I would imagine something like this: using a word processor for half an hour, a spreadsheet with a fair amount of calculation for another half an hour, some web page rendering, and one hour in sleep mode. The results of the benchmark could be the total amount of energy consumed and also the number of times this benchmark could be run before the laptop battery is empty.
  • ("twice as fast" argument doesn't wash when clock speed is half as fast).

    Benchmarks are actually showing that a 500 MHz G4 is roughly equivalent to a 1 GHz Athlon. These aren't Apple benchmarks either, but numbers from x86 oriented sites. If the 500 MHz chip uses less than half the power of the 1 GHz chip, I'm all over it. Makes you wonder what the heck Intel and AMD are doing with all that wattage.
  • People just think that CPU clock speed is some accurate way to measure processing speed. As you already pointed out, this isn't true.

    I'm quite sure that a 150MHz DSP can code and decode those MP3s faster than any Pentium or whatever. The trick is simple: the hardware is designed for such jobs in the DSP core. There is no loop overhead; single-instruction-cycle MACs (multiply and accumulate), circular buffering, etc. are built in.

  • Umm, all the tests conducted ARE real apps. Thankfully, we seem to have evolved past the Winstone era. Unreal Tournament is a real FPS measure, Cinema4D and Povray test actual rendering times, and Bapco Sysmark is a script of around a dozen common Windows applications (Photoshop, Office, Netscape, etc.). So all these results ARE valid, and the loop thing really wouldn't change anything. (The score reported by Bapco on the Photoshop segment is going to be damn close to actual Photoshop performance, because it IS actual Photoshop performance.) The performance is still pretty good though, just not as groundbreaking as Transmeta would have you believe.
  • What I'd really be interested in is a comparison with the older Pentium-233-based PictureBook, which most likely has the same screen and built-in peripheral support. If the TM5600 can't outperform and outlast (battery life) the much older P233, the TM chips don't stand a chance in the laptop (or sub-laptop) arena. That's not to say they still don't have a purpose for other handheld or embedded devices...


  • Actually, I thought it was morph + run time the first time, and an optimized runtime the second time so the result looks like this:
    1st run:             Morph + Runtime
    Each additional run: Runtime - x%

    X being however much it was able to adapt to the instruction stream and how well it cached. Notice that the "how well it cached" statement is somewhat misleading, because this part, as you indicate, is what you're paying the piper for no matter what; however, it doesn't change the fact that it is supposed to be making a dynamic effort to actually (re)optimize the instruction stream. I *assume* this happens even when new and subtle code branches are hit which may affect the morph cache. In other words, it would seem very reasonable to newly optimize a portion of related cached code when a new branch has been hit, which may lead to a slightly improved optimization over the original cached version. The net effect is that it may be, for example, 1% faster through that section of code on the third pass, as the second pass optimized over and above what was done on the first pass. Make sense?

  • by Datafage ( 75835 ) on Wednesday October 11, 2000 @01:21PM (#713816) Homepage
    This is an intriguing set of benchmarks. At first it would appear to validate everything Transmeta has been telling us about the superior battery lifetime of a Crusoe-powered computer. The lower performance is well-nigh negligible; I mean really, do you care how fast POVRAY runs on your laptop? I didn't think so.

    On the other hand, the article failed to mention the size of the screen on the P!!! laptop, along with what effect the 9" screen had on the Crusoe's power consumption. Considering the power the screen consumes, and how small 9" is compared with the size of a normal notebook screen, that could be very relevant.


  • by jonnythan ( 79727 ) on Wednesday October 11, 2000 @01:32PM (#713817) Homepage
    A Watt-hour is a measure of raw energy. A watt is energy/time, so energy/time * time gives you energy.

    Maybe what the poster was going for was something like "10 Watt-hours per hour," which is of course just 10 Watts. A 100 Watt light bulb consumes 100 Watt-hours every hour, hence 100 Watts.

    Watts have the time component built in.
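    The distinction above can be made concrete in a couple of lines of code (a throwaway sketch; the function names are made up for illustration):

```python
# Power (watts) is a rate; energy (watt-hours or joules) is an amount.
# Energy = power * time.

def watt_hours(power_w, hours):
    """Energy drawn at a constant power_w watts for `hours` hours."""
    return power_w * hours

def joules(power_w, seconds):
    """The same quantity in SI units: 1 W = 1 J/s."""
    return power_w * seconds

print(watt_hours(100, 1))  # the 100 W bulb above: 100 Wh each hour
print(joules(10, 3600))    # 10 W sustained for an hour: 36000 J
```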
  • We should read "the new Vaio uses 10 Watts of energy, whereas the previous ones used twice as much for the same functions."

    Wrong again. Let's try this one more time class, Watts are power, power is work per time, work is energy...Energy is not power, or watts.
  • The amount of power used is measured in kilowatt-hours... meaing [sic] the amount of energy equivalent to drawing a kilowatt for an hour.

    One more time, power != energy! Kilowatts is the power, Kilowatt-hours is the energy!

    Power is energy divided by time.

    Energy is power multiplied by time.

    Of course, power is really an instantaneous measurement (the derivative of energy vs. time) while energy is often thought of as cumulative (the definite integral of power vs. time on some interval).

  • Well, that would be the case, if the Code Morphing Layer was a stupid one-to-one translator from x86 to native VLIW instructions, but in reality, that is not the case. The Code Morphing layer inserts small bits of accounting code into the VLIW instructions generated which it uses to profile code while it is actually running. From this it can find the hotspots and spend more time aggressively optimizing those parts.

    So, if you run one section of code 1 time, or the same section of code 1 million times, you get different results, because the Code Morphing layer is smart enough to realize that a large proportion of the execution time is spent in that section and that spending more time optimizing it will in the end yield greater performance.
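    The profile-then-reoptimize behavior described above can be caricatured in a few lines (a loose sketch only; the threshold and the "aggressive" step are invented, since Transmeta's actual Code Morphing internals aren't public):

```python
# Count executions per translated block; once a block proves hot,
# spend extra translation time producing a better version of it.

HOT_THRESHOLD = 50  # made-up cutoff, not Transmeta's real heuristic

class MorphCache:
    def __init__(self):
        self.counts = {}        # block -> times executed
        self.optimized = set()  # blocks given the aggressive treatment

    def execute(self, block):
        self.counts[block] = self.counts.get(block, 0) + 1
        if self.counts[block] >= HOT_THRESHOLD:
            self.optimized.add(block)  # re-translate aggressively
        return block in self.optimized

cache = MorphCache()
for _ in range(1000):
    cache.execute("inner_loop")  # hot: gets reoptimized
cache.execute("error_path")      # cold: keeps the cheap translation
print(cache.optimized)           # only the hot block is reoptimized
```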
  • I believe what was meant was that it uses 10 Watts, which of course means 10 Joules per second or 36,000 Joules per hour. For those who prefer the old English measures, that is roughly 34 1/8 BTU per hour. Or roughly 26,550 foot-pounds per hour. Or 0.0134 horsepower.
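    Those conversions can be sanity-checked against the usual textbook factors (a quick throwaway calculation):

```python
# Standard conversion factors:
# 1 BTU = 1055.06 J, 1 foot-pound = 1.35582 J, 1 hp = 745.7 W

POWER_W = 10
JOULES_PER_HOUR = POWER_W * 3600            # 36000 J
BTU_PER_HOUR = JOULES_PER_HOUR / 1055.06    # ~34.1 (about 34 1/8)
FTLB_PER_HOUR = JOULES_PER_HOUR / 1.35582   # ~26,550
HORSEPOWER = POWER_W / 745.7                # ~0.0134

print(JOULES_PER_HOUR, round(BTU_PER_HOUR, 1),
      round(FTLB_PER_HOUR), round(HORSEPOWER, 4))
```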
  • Well, Sonys usually retail at about the same price as on the Sony website, which is $2299.00. I still want to see some real battery-life figures. If I can get 8 hours on the double-life battery (hell, even 6...) I may buy one of these...


  • Amen. This is sort of what the x86 compatibility layer on the Alphas did under NT... It translated the instructions and stored them, then on each reload it optimized them further and restored them, so the apps actually got faster. Unfortunately with the Crusoe you lose all the translations when you shut down.

    Hell, on a 12 gig drive under linux I'd be willing to give up a gig to store optimized translations. Ah well...


  • What's the ESRP on this Sony palmtop? Anyone know?
  • Let's take a look at those numbers

    50% faster

    10 W vs 15-22 W

    Since 15W = 50% greater than 10W, the Crusoe technology has a really meager effect.

    It seems the speed penalty is too great for the power savings. 50% less speed, and 50% less power consumption !!

  • I'm hoping this reduced power usage will also allow the notebook's battery to last longer, because most notebook owners (at least business travelers) look for longer-lived notebook batteries without the additional weight that most larger-sized batteries suffer from. Also, finally a bigger leap in notebook improvements has come. A year or so ago most notebooks were upgraded in many aspects, which made them much more usable, but then the advancement of usability seemed to trickle off. Now this'll hopefully give the notebook industry a large push.

  • I read that the guys at Transmeta had a really hard time optimizing for Win95 and its IA16 code, and I don't suppose c't ran those benchmarks on NT or W2K. That might have hit the performance too.

    I mostly agree with your point... but would it hold for Q3 also? Hopefully the morph cache should have the rendering code and have optimized it very hard, especially since no other CPU activity takes place in that case.

  • We all see how many watts those processors are supposed to consume. But I can't understand why nobody has ever given us real numbers. Let me make myself more clear. The question for the average user is HOW many hours your notebook can be actively used (meaning that you actually do something that might be battery-consuming, unlike manufacturers' usual numbers, which tend to be 3 times too high). I'd really applaud someone finally explaining what good any power-saving scheme is, and whether it can do something more than a 10% better amount of time online. Just my 2 cents here
  • Don't be silly -- code morphing is simply a firmware form of just-in-time compiling of x86 instructions to native Transmeta code. If you reboot, you'll clear the "JIT cache". It's essentially the Java VM problem -- if you interpret, it will be slower, if you JIT it, it goes faster second time around -- and that's it. The first time is slower because the "compile" occurs then.
  • I don't think so... First, laptops (despite their appearance in _Star Trek_ spinoffs) are the past; the future is wearables. Second, wearables will be "always on" devices, since we will end up folding in the cellphone, PDA, and other extras (that replacement software already exists: Voice over IP, PIM's). Third, wearables are NOT devices typically used while we're plugged into external power sources/chargers.

    Speaking as one of the earliest adopters of laptop technology (I consult, and used a Model 100 with the ancient Radio Shack Model I when working as tech-writer/documentation specialist!), as well as a current user, I predict laptops will die out over the next 5-10 years, maybe sooner. Instead, we'll see divergence into portable cube/brick type computers, plus wearables. So power consumption will count for a lot, though I admit externally-invisible, projected screens will help.

  • Although the technology is intriguing, Crusoe-based notebooks will not break through, because most people are satisfied with their notebook and its battery charge life span.

    Only the ones who really need to keep their notebooks constantly on for more than 3 hours will jump on Crusoe Inside [TM] ;-).

    Code morphing is however definitely the future.

    Just my opinion.

  • . It runs native, if I remember correctly.

    You don't and it doesn't. The Crusoe is an advanced microprocessor that has a software layer which provides the x86 instruction set. That's the whole point.

    Now hiring experienced client- & server-side developers

  • by carlivar ( 119811 ) on Wednesday October 11, 2000 @01:23PM (#713833)
    It seems that other components of this notebook are now guilty of the most power usage, so the focus should turn to other high-energy devices. What are these? My first couple of guesses are the display screen and the hard drive. Here's a good application for those goggles that have a video screen built into them (so that a regular-sized monitor appears before your eyes). Those things can't use very much energy, can they? So use that for a display. I'd be interested to see how long the battery lasts then. Carl hi mom
  • You just answered the question yourself; it gets so much attention because Linus is working on the project. It's funny how he's not even doing anything commercial with Linux; he knows you can't make money directly.
  • That's all well and good that your bunch of 486s and P100s beat a K6 and a Celeron, but how much space did these boxes use up? How much energy do they consume? What is the network throughput? I hope you are running it switched; otherwise hubs would suck.
  • I assume you mean 10W, since 10W/hr is not something you see very often. A Watt is a measure of energy per unit time, and can be broken down into Joules / Second. A Volt can be broken down into Joules / Coulomb, and an Ampere (the measure of current) is Coulombs / Second. Power (which is what Watts measure) is voltage times current (P=VI) and the units cancel thusly: (Joules/Coulomb)*(Coulombs/Second)=Joules/Second.

    Joules/Second/Second, or Joules/Second^2 would indicate an accelerating energy consumption.

    --Steve, BSEE UVa 1993.
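    The unit cancellation in that derivation can even be checked mechanically (a toy sketch: units are represented as dicts mapping base units to exponents, which is not any standard library's API):

```python
# Multiplying two quantities adds their unit exponents; any unit whose
# exponent reaches 0 cancels out entirely.

def mul_units(u1, u2):
    out = dict(u1)
    for unit, exp in u2.items():
        out[unit] = out.get(unit, 0) + exp
        if out[unit] == 0:
            del out[unit]  # fully cancelled, e.g. Coulombs in P = V*I
    return out

volt = {"J": 1, "C": -1}  # Joules per Coulomb
amp = {"C": 1, "s": -1}   # Coulombs per Second
watt = mul_units(volt, amp)
print(watt)  # Joules per Second: the Coulombs cancel
```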

  • Motorola will be shipping 1 GHz PPC G4 chips at some point in the nearish future, although it isn't known if they will show up in Macs. see this article [] for details.
  • Once the code has been translated, optimized and cached, you should see the same performance improvement for every run - but that's a one time bonus. If it runs 10% faster the second time, it will run at the same 10% faster than the first run in subsequent runs.
  • Clock speed is not the only measure of CPU performance. The G4 does more per clock cycle than a Pentium at the same clock speed. The G4 also has a much better floating point unit, and has a rather nifty vector processing unit, which performs the sort of functions that you find in photoshop much more rapidly than is possible on Intel chips. The twice as fast claim refers to specific instances of Photoshop filters which have been optimized for the Altivec vector processing pipeline in the G4.
  • I seem to recall something about Linux developing a kernel tuned specifically for the Crusoe processors. It runs native, if I remember correctly. But if you were running a non-native instruction set instance of the kernel, it would run faster the second time around. Frequently used instructions have their translations cached and are optimized. So it would run faster the second time around. ;)
  • That would be energy per unit time per unit time = energy / unit time^2 = accelerating power usage! Given sufficient time, it would take all the energy in the universe to power the chip. ;)
  • by techmuse ( 160085 ) on Wednesday October 11, 2000 @01:14PM (#713842)
    Somehow, testing this processor in a system with a tiny display doesn't seem like a very good way to compare it to a realistic real world notebook. (Sure, some people may buy this, but the display seems too small for many real world applications.)
  • Using code-Murphying to write this down: if anything can go wr

  • Probably; relativistically speaking, you can essentially make a parsec a unit of time instead of space. At least when physicists start talking amongst themselves.

    I took a couple semesters of quantum mechanics, and the typical unit of "mass" used is the eV (electron volt). Remember, e=mc^2?

    'nuff said.
  • I wanna zwackt myself 16 MByte from the primary storage, too.

    Is there a Howto on zwackting?
  • Yes, but if the kernel hangs, it goes at half speed next time and then it will die again due to the previously morphed bug. Nevertheless, it's nice to see these comparisons, although I can't believe a program runs twice as fast the next time. That supposes the cache is activated and used from the first instruction of the program. And the hyped "morphing" does not imply faster second rounds; caching parsed code does.
  • a 9-Zoll-Display with 1024 x 480 points dissolution

    It's amazing what those Marketers will come up with.

  • Sure, the StrongARM processor sucks about 1W.

    But the StrongARM is only one branch in the huge ARM tree. StrongARM is in fact the ARM-based processor which consumes the most.

    Typical ARM processor power consumption is much more like a few milliwatts! Hundreds of times less than the Crusoe.

    If you want to know more go and visit the ARM site, it's well worth the look :

    Good technology for embedded systems, ultraportables, and even wearables.
  • by javaDragon ( 187973 ) on Wednesday October 11, 2000 @01:16PM (#713854) Homepage
    Good point. Watt is a unit of power, not work. We should read "the new Vaio uses 10 Watts of energy, whereas the previous ones used twice as much for the same functions."

    BTW, 10 W is still too much for a laptop, because it still takes large batteries to run the computer for a decent time (in this case, it takes a quadruple-size battery to run the Vaio for 8 hours). Keep in mind that ultra portable machines using low-power consumption RISC processors and components achieve a sub-1W rate.
  • Keep in mind that ultra portable machines using low-power consumption RISC processors and components achieve ...

    The NY Times today ran an article [] about Intel's PR counter-offensive (essentially), assailing Transmeta as having been erroneously knighted the low-power mobile-CPU provider. Intel claims:
    Intel, the world's largest semiconductor maker, said that its current generation of mobile Pentium processors already consumed less power on average than Transmeta's, and that a set of technologies on the horizon for 2002 or 2003 would keep Intel comfortably in the lead.
    Anyway, with direct regard to your last point on CPU battery draw: "The biggest power consumer is the LCD display ... It has an entire light bulb behind it," notes the Times.

    My balls itch, man!
  • Would the fact that it is faster the second time around have a positive effect on recursive programming, AI, lisp, etc...?
  • by WillSeattle ( 239206 ) on Wednesday October 11, 2000 @01:36PM (#713868) Homepage
    OK, we're dealing with laptops and webpads.

    Based on batteries coming out of B.C. right now, I'd say Transmeta can cut the power on a system with color video screens and CD to about 80-90% of current usage. If you beef up the RAM to the gills, probably 70-85% power usage.

    Main drain is monitor power for most people -- the fastest way to cut this is better screen technology. Time to market of a usable low-power high-res screen is probably the 2002-2003 product cycle. This will still leave you at 50-60%.

    Not a lot of hope on the CD power usage.

    On the other hand, a non-CD Ethernet laptop, with honking big RAM and improved hard drive could probably cut power consumption down to 25-30% of current usage. This is with a total redesign. Expect to see these babies in late 2002. Price mark will be high until early 2004.

    [Note - I am expecting to put in an indication of interest for TMTA IPO shares - I am biased]

  • by Going for -100 Karma ( 241181 ) on Wednesday October 11, 2000 @01:17PM (#713869)
    Half the display size (vertically), half the processor speed, half the power consumption... I think they should ship OS/2 with that thing, just for the bad pun.

Trying to be happy is like trying to build a machine for which the only specification is that it should run noiselessly.