Transmeta Founder Talks Chips

gManZboy writes "Dave Ditzel, CTO and Founder of Transmeta (you remember Transmeta? weren't they supposed to kick some Intel booty?) sits down and speaks with Alpha and StrongARM chip designer Dan Dobberpuhl about the history of CPUs, where they're heading, and how the heck we'll keep up Moore's Law (if we can)."
  • by Anonymous Coward on Friday November 07, 2003 @02:11PM (#7418106)
    (you remember Transmeta? weren't they supposed to kick some Intel booty?)

    Uh, 1992 called. They want their slang back (and their processors, while you're at it.)
  • October? (Score:2, Funny)

    by michaelhood ( 667393 )
    In an October 1998 article, EE Times named him one of "40 forces that will shape the semiconductor industry of tomorrow."

    Hmm. I wonder what day in October 1998 that was supposed to be? I don't remember any big change.
  • by glassesmonkey ( 684291 ) * on Friday November 07, 2003 @02:17PM (#7418176) Homepage Journal
    I can't find it again, but I saw an interesting discussion that took the exponential growth in the number of processors and embedded processors, along with the scaling of MIPS and of energy per MIPS, and compared the total against the energy output of the sun. It made very clear that at some point, in a surprisingly short amount of time, you run out of energy to power all the CPUs.

    I wish I could find it again. (please let me know if you know)
    • In related news, the next 50 years will see such an explosion in human population that we will be standing shoulder to shoulder stacked five high.

      Seriously though, everyone is aware that exponential growth is unsustainable. This is not news, and something will give. Chips will get smaller and smaller. They will also get more efficient and less power hungry. Power sources will also change radically.

      In any case, however, I'd be curious to see this paper. I can't imagine the number of electronic devices
      • the next 50 years will see such an explosion in human population that we will be standing shoulder to shoulder stacked five high.

        Why do so many humans think that just having gonads gives them a license to procreate without limit? We have a brain, so why don't we use it to realize that having four kids while living in trash isn't a good idea? Legislating families like China is most certainly a bad idea, but I think humans have some serious cultural issues to work out (the Catholics telling people to fill
      • In related news, the next 50 years will see such an explosion in human population that we will be standing shoulder to shoulder stacked five high.

        In my introductory calculus class, the professor gave us this problem: Assuming the human population continues to grow at its current exponential rate (and ignoring relativistic effects), how many years will it be until the surface of the expanding sphere of human bodies reaches the speed of light?

        IIRC, the answer was only a few thousand years.
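
        Here's a quick sanity check of that answer in Python (the growth rate, starting population, and body volume are my assumptions, not the professor's):

        import math

        r = 0.012                      # assumed growth rate, ~1.2%/year
        N0 = 6e9                       # assumed starting population
        v = 0.07                       # assumed volume of one body, m^3
        c = 3.0e8 * 3.156e7            # speed of light, meters per year

        # Sphere radius R(t) = R0 * exp(r*t/3), so dR/dt = (r/3) * R(t).
        R0 = (3 * N0 * v / (4 * math.pi)) ** (1 / 3)   # starting radius, ~465 m
        t = (3 / r) * math.log(3 * c / (r * R0))       # time until dR/dt = c
        print(f"surface hits lightspeed after ~{t:,.0f} years")   # ~9,000 years

        So "a few thousand years" checks out.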

        • Except the human race is not growing at an exponential rate, as income is a factor in population growth. As populations get richer they want/need fewer children on average, and populations fall.

          Witness Western Europe as an example of this. Latin America is an example of poorer countries with higher growth rates. The USA is an example of a country whose established (higher income) population shrinks and is replenished by its immigrant population (lower average skills/education, thus lower wages) who in turn beco
    • by bartash ( 93498 ) on Friday November 07, 2003 @02:36PM (#7418367)
      Was it this [incep.com] or this [usc.edu] or this [isqed.org] perhaps?
    • Wouldn't we run out of raw materials to build them long before we ran out of energy?
  • by Anonymous Coward on Friday November 07, 2003 @02:18PM (#7418181)
    I'd like to see the future of computing (and I do mean desktop computing) where the whole system has no moving parts. You read me: no spinning hard drive, only solid-state MRAM drives (or something). No fans, not even in the power supply. 5W CPUs with the same processing muscle as today's 60W beasts. Oh, and OLED screens.

    Well that's enough fantasizing for one day.
    • The hard drive issue can almost be solved with a USB 2.0 solid-state NVRAM flash disk. They've already reached 1GB on a keychain-sized device; something the size of a 2.5" hard disk could probably hold 8 times that much.

      I think that despite the introduction of Serial ATA, SAS, iSCSI and all these other storage technologies, the venerable hard disk will meet its end sooner or later; probably later.
      • The problem with these devices is that they're slow - much slower than hard drives.
        • Good point, but once you take it off USB and put it closer to the motherboard, you can use much faster buses, and many parallel memory (channels and) banks to speed things up. Imagine a SATA connection to a "drive" with a RAID controller that stripes the data to 8 or more flash chips; could be pretty good already.
      • The hard drive ... can ... be [replaced by] a ... flash disk [holding] 8 [GB].

        (1) You need more like 80-200GB to replace a hard disk these days.

        (2) Flash is appallingly slow at writing and does not seem to be getting much faster anytime soon.

        The hard disk is a moving target, and flash is not catching up.
        • (2) Flash is appallingly slow writing and does not seem to be getting much faster anytime soon.
          www.sandisk.com/consumer/ultra2.asp [sandisk.com]

          9 megabytes per second is not good enough for you?

          • Well, let's see. 64 bit bus at 1GHz = 8 gigabytes per second, so 9 megabytes per second is about 1000 times slower than I'd like.
          • 9 megabytes per second is not good enough for you?

            I wish I could believe that spec is realizable in a real system, but even if so, no, it's really not good enough for me. I can push at least 5 times that into my hard disk, and if anything I want and need more, not less.
        • Flash also has a limited number of writes before it dies. Yes, it's tens or hundreds of thousands of write cycles, but try to use a flash disk as the home of a swap file and you're screwed. 100MHz memory could write 100M times in a second. Goodbye, flash.
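
          The back-of-the-envelope version, in Python (both figures are assumed round numbers):

          endurance = 100_000       # assumed write cycles before a flash cell dies
          write_rate = 100e6        # writes per second, hammering one address at 100MHz
          print(endurance / write_rate, "seconds")   # 0.001 -- dead in a millisecond

          Without wear leveling, swap traffic could burn out a cell in about a millisecond.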
          • I look forward (or not) to the day when there is trojan code out there that drills holes in the memory map of flash drives through repetitive writes.

            I'm sure there will be people tuning the code to do the maximum damage. How tightly spaced do burned out locations need to be on a flash drive before it renders big chunks of the drive useless?
        • I never said that hard disks would definitely be replaced by flash; it was just an example of how a hard disk can be replaced by a solid-state storage device.

          As for slow Flash memory, how about this:

          http://www.pcworld.com/news/article/0,aid,113332,00.asp
          • As for slow Flash memory, how about this:
            http://www.pcworld.com/news/article/0,aid,113332,00.asp


            10MBps, that's pretty impressive (still several times slower than HD though), but it's still several hundred times as much $ per GB as a HD. Just call me Mr. Negative :-)

        • How about MRAM [wired.com]

          "MRAM is up to six times faster than today's static RAM," said IBM spokesman Richard Butner. "It also has the potential to be extremely dense, packing more information into a smaller space."

          "Researchers have been trying for years to find a 'universal' RAM replacement, a device that is non-volatile, inexpensive, fast and low-power," Way said. "DRAM (dynamic RAM), flash and SRAM (static RAM) all have one or two of these characteristics, but MRAM appears to offer the best hope of an overall

        • The hard disk is a moving target, and flash is not catching up.

          Isn't the whole point of flash to have no moving parts? ;-)

    • by fnj ( 64210 )
      I'd like to see ... 5W CPUs with the same processing muscle as today's 60W beasts

      It would be a fine thing, but there's no sign of it happening. Instead, the next desktop CPUs are due to dissipate more like 103 watts. It's sad.
      • So, call up Radisys [radisys.com] and have them send you a LS855 with the Pentium M onboard! That's what I'm looking into (just waiting on quotes for 1 unit of either their CPU-less model or one with a 1.3GHz Pentium M).
        • Radisys ... LS855 with the Pentium M

          Yum! I could go for that. Pentium M is ideal for SFF.

          • Unless Micro-ATX (9.6"x9.6") is your favorite SFF mobo form factor, don't spring for THAT one. Lippert has a Pentium M mobo, but it's kinda strange for Mini-ITX, and it looks like it could be vapor (never trust a "Mid/End of Q4, 2003" and a "Price: Not fixed yet" at the beginning of Q4, 2003). And Commell (the company behind the Pentium 4/4-M Mini-ITX mobos - they're switching away from industrial thanks to those two boards, and are developing SFF PC products now) is working on a Mini-ITX Pentium M mobo that

      • Are you guys saying that a CPU only uses as much power as a regular light bulb?
        • CPU power (Score:3, Interesting)

          by fnj ( 64210 )
          Are you guys saying that a CPU only uses as much power as a regular light bulb?

          Absolutely. But grab a 60-100W light bulb that's been on a few minutes (PLEASE DON'T REALLY!) and tell me what it feels like. That is one heck of a lot of wasted heat energy.

          BTW, the body heat of one human is also approximately the same as this figure, and look how much food (energy) we use up each day. It's just spread over a lot of surface area so the peak temperature isn't as high.
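
          The arithmetic, for the curious (assuming a 2000 kcal/day diet):

          kcal_per_day = 2000                   # assumed typical diet
          watts = kcal_per_day * 4184 / 86400   # joules per day over seconds per day
          print(f"{watts:.0f} W")               # ~97 W -- light-bulb territory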

          • Well, I tended to think that we are very, very efficient machines, considering all those things I've heard about how long you'd have to run on a treadmill to burn off that Snickers bar you ate. Diet books are full of such trivia.
          • by pmz ( 462998 )
            look how much food (energy) we use up each day. It's just spread over a lot of surface area so the peak temperature isn't as high.

            Er, I don't quite get it. I used up a whole can of peanut butter, and I just feel sticky. The temperature hasn't changed a bit.
      • there's no sign of it happening.

        Sun, IBM, Transmeta, VIA, etc. have been producing sub-20-watt CPUs for years. Even the once top-of-the-line UltraSPARC II burns only 19 watts, yet has the FP power of a Pentium III at twice the clock.

        Intel's marketing machine is really quite sad, considering the cumulative megawatt-hours of electricity wasted in the quest for more MHz. Hell, I'd bet all the well-designed "enterprize" CPUs out there (sans Itanic) are more efficient than any Intel offering for their p
        • That's why you use Pentium Ms. x86 compatibility, ridiculously low power, the Intel name (it means something, especially when they invented the x86, and it's an x86...), etc., etc. Of course, that's exactly what Intel DOESN'T want you to do (why else did they go to socket 479?), but RadiSys has a nice board that'll fix that little problem - the LS855.
    • It's already here... (Score:3, Interesting)

      by bhtooefr ( 649901 )
      Flash media, and not MRAM, thank you very much. As for fans, well, just look at some Mini-ITX boxes. And ask for something that can take a 1GHz ULV Pentium M, which outputs ~7W, and is as powerful as a 2GHz Pentium 4, which outputs ~60W. About your OLED screen, why not the billboard-grade eInk that can pull 70FPS (for your Intel Extreme Graphics 2 that can only pull 50 on a good day)?
      • Flash memory wears out after a short while, about 1000 writes if I remember correctly. Good Flash modules have spare flash memory cells with control circuitry that replaces the fizzled out flash cells on the main part of the memory of your USB stick/Compact Flash/etc. Stretches the time to failure.

        MRAM is just as durable as DRAM and is just as fast. It is the future.

        You are right about the eInk or similar technology instead of OLED. OLED pixels wear out way too soon to be usable in your desktop screen at
        • I wish I could find a fanless power supply for a decent price. Also, instead of one with a huge heatsink, I'd like one similar to a laptop power supply (a brick that sits on the floor, so I don't have to worry about it heating up the case) that splits into the various connectors once inside my PC. As it is, I've got a Zalman one that I can only hear if there's no other noise in my apartment (currently, the old CRT I've got hooked up to it makes more noise with its constant hum - I guess I'll replace it wi
    • Sounds like a Palm Pilot, except the OLED screen. (Personally I'd prefer a reflective "digital-paper" type screen while we're at it).

    • 8bit Atari anyone? Cartridge-based... :)

      I don't see why you can't have a PC like that now. USB harddrive, a big copper heatsink with no moving parts on the CPU, the CPU downclocked some 30%, a custom-made power supply without moving parts (not hard at low load). Standard, non-accelerated VGA and a standard CRT monitor... unless you consider electrons flying freely through vacuum "moving parts".
    • There's an idea - a desktop version of the Zaurus! A flatscreen monitor that contains the computer. A few CF/SD slots on the side for removable storage, and the usual USB/Firewire/PS2 ports. Dunno about the hard drive, though; flash storage is quite slow, and if you write to it too many times it will begin to fail. Would be a huge bugger to upgrade... (expensive)

      Would there be a market for this kind of thing?
      • by DG ( 989 ) on Friday November 07, 2003 @03:51PM (#7419066) Homepage Journal
        I've been looking for a 100% solid-state DVR.

        Why? On-board camera for my race car.

        If I can get it to turn on recording at the same time as I push the DATA RECORD switch on the datalogger, then I get video and sound synched to the data log - and that would be a HUGE advantage.

        Why solid-state? Because race cars take a lot of abuse. 1.6G to -1.6G in the space of half a second or so.

        I figure an MPEG2 capture card, an audio capture card, the OS on EPROM and Compact Flash as the filesystem. Video IN and stereo audio IN. Record at full-speed every time the RECORD pin goes to ground. Operate at 10V-16V.

        I've found a number of VERY similar devices (for security cameras), but nothing yet that does full speed video and sound. Build one, price it cheap, and I'll buy it.

        DG
  • Moore's Law (Score:3, Interesting)

    by ajs ( 35943 ) <{ajs} {at} {ajs.com}> on Friday November 07, 2003 @02:19PM (#7418197) Homepage Journal
    For those of you who don't know what Moore's Law is (and especially for those of you who THINK you know what it is), allow me to quote from the Intel Web site [intel.com]:
    In his original paper, Moore observed an exponential growth in the number of transistors per integrated circuit and predicted that this trend would continue
    Many people have made the observation that Moore's Law is probably a limited phenomenon, and while other increases may continue to fuel increased processing power, Moore's Law does not actually have anything directly to do with processing power.
    • Re:Moore's Law (Score:2, Insightful)

      Many people have made the observation that Moore's Law is probably a limited phenomenon, and while other increases may continue to fuel increased processing power, Moore's Law does not actually have anything directly to do with processing power.

      Who needs Moore's Law when we've got Beowulf clusters?

      And Beowulf clusters of Beowulf clusters.

      And Beowulf clusters of Beowulf clusters of Beowulf clusters.

      And...
    • by fnj ( 64210 ) on Friday November 07, 2003 @02:54PM (#7418540)
      Moore's Law is probably a limited phenomenon.

      <pedantic>
      Probably? Assuredly, I would say. If transistor count continues to double every year, with 42M transistors per CPU in 2000, you would have 43 billion in 2010, 44 trillion in 2020, 47*10^21 in 2050, and 53*10^36 in 2100. If that hasn't reached the number of atoms in the known universe, then keep counting years and it will.
      </pedantic>
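
      If anyone wants to check the pedantry, here's a quick Python sketch (the ~1e80 atom count is the usual rough figure, my assumption rather than a quoted one):

      import math

      base, base_year = 42e6, 2000
      for year in (2010, 2020, 2050, 2100):
          print(year, f"{base * 2 ** (year - base_year):.1e}")    # doubling yearly

      # When does the count pass ~1e80 atoms in the known universe?
      print(base_year + math.ceil(math.log(1e80 / base, 2)))      # ~2241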
      • by Surt ( 22457 ) on Friday November 07, 2003 @03:13PM (#7418714) Homepage Journal
        What, you're just going to blatantly assume that we'll not have discovered a way to perform our computations in another universe in the next 80+ years?

        What are we, lazy?

        • What are we, lazy?

          [insert Edgar Buchanan's voice from Petticoat Junction]

          Lazy? Why listen, sonny :-) Back in my day, we used slide rules and big heavy books with tables of logarithms. I had one of those new fangled all-aluminum slide rules, and the slide galled and bound to the stationary part. Let me tell you, it took real muscle to move that sucker.

          These whippersnappers nowadays couldn't find an alternate universe if it was staring them in the face :-)
          • by Anonymous Coward
            Let me tell you, it took real muscle to move that sucker.

            Have you tried loosening manufacturer-mounted screws in a PC case? Have you carried an SGI Challenge down from the 3rd floor? Have you ever needed to squeeze a TP wire through 10 cm of empty space between two 2cm holes in opposite walls filled with other wires? What about smuggling a PC across the border secretly? Been there, done that. Gets you more than a little sweat.
          • "[insert Edgar Buchanan's voice from Petticoat Junction]"

            You didn't need any further examples of your age after that.
        • What, you're just going to blatantly assume that we'll not have discovered a way to perform our computations in another universe in the next 80+ years?

          Well, some people argue that quantum computers would in fact take advantage of parallel universes to do their work. The huge number of alternative computations are done in parallel in their own universes, then only the correct answer ends up in our universe when the wave function collapses.

          I'm not sure that this viewpoint is actually valid, but it seems t

      • <pedantic>
        Probably? Assuredly, I would say. If transistor count continues to double every year, with 42M transistors per CPU in 2000, you would have 43 billion in 2010, 44 trillion in 2020, 47*10^21 in 2050, and 53*10^36 in 2100. If that hasn't reached the number of atoms in the known universe, then keep counting years and it will.
        </pedantic>

        <more pedantic>
        you are, of course, overlooking the inevitable creation of sub-atomic transistors!
        </more pedantic>

      • The exponential trend that has gained the greatest public recognition has become known as "Moore's Law." Gordon Moore, a pioneer of integrated circuits and later Chairman of Intel, noted in the mid-1970s that we could squeeze twice as many transistors onto an integrated circuit every 24 months. Given that the electrons have less distance to travel, the circuits also run twice as fast, providing an overall quadrupling of computational power.

        After sixty years of devoted service, Moore's Law will di

      • Well obviously the transistor will be replaced by something far more efficient by then. What Moore's Law shows in the real world is that technology does advance at a remarkable rate. And that rate is increasing.
      • Probably? Assuredly, I would say. If transistor count continues to double every year, with 42M transistors per CPU in 2000, you would have 43 billion in 2010, 44 trillion in 2020, 47*10^21 in 2050, and 53*10^36 in 2100. If that hasn't reached the number of atoms in the known universe, then keep counting years and it will.

        The number of atoms in the universe is not the limit for computation. The true limit is set by quantum states. It is actually possible to calculate these limits; Professor Seth Lloyd
      • FYI, the number of atoms in the universe is about 10^79, and roughly 90% of them are hydrogen.
  • Every now and then there would be a statement like the following:

    DOBBERPUHL The power is dissipated mostly in the transistors, either as they switch or as they just sit there and leak.

    You can calculate the dynamic power dissipation with the formula P = CV^2f, where V is the supply voltage, C is the capacitance that is being switched, and f is the switching rate. There are some additional factors, but fundamentally the dynamic power is given by that formula.
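
    For anyone who wants to plug in numbers, here's a rough sketch (the voltage, clock, and wattage are my guesses at typical desktop figures, not values from the article):

    V = 1.4       # assumed supply voltage, volts
    f = 3.0e9     # assumed switching rate, Hz
    P = 60.0      # assumed dynamic power, watts
    C = P / (V ** 2 * f)                # solve P = C*V^2*f for C
    print(f"effective switched capacitance ~{C * 1e9:.0f} nF")    # ~10 nF

    Note the V^2 term: lowering the supply voltage buys quadratically more than lowering the clock.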

    ...and now my head hurts from all that sm

  • did you rtfa? (Score:5, Informative)

    by glassesmonkey ( 684291 ) * on Friday November 07, 2003 @02:36PM (#7418372) Homepage Journal
    Not that you actually read the article, but here's a great paper (pdf) [stanford.edu] on low-power processor design, with lots of graphs and equations showing where the architecture can trade off power to keep your silicon chips from melting.

    The paper is out of Stanford, paid for by your tax dollars. Hopefully you won't notice the part about the address at Stanford University being the William Gates Computer Science Bldg
    • The paper is out of Stanford, paid for by your tax dollars. Hopefully you won't notice the part about the address at Stanford University being the William Gates Computer Science Bldg

      Universities name buildings after donors. It was "paid for" not just by your tax dollars but also partly by William Gates.
      • "Paid for" refers to it being an ARPA-funded paper. The Bill Gates building comment was only that it's a little unnerving to me that the SU Computer Systems Dept. resides in *his* building (a man who based his career on stealing ideas and reselling them).
  • Isn't Transmeta's new, super-kewl uberchip running at a whopping 1.1GHz? Or was it 1.4?
    • Just because a CPU runs at 1.1GHz (it was 1.1) doesn't mean it can't kick Pentium 4 ass. After all, you can take the clock speed of a Pentium M, add a gigahertz to it, and say that it's roughly equivalent to a P4 at the speed you just came up with. The Pentium M goes up to 1.7GHz, and I've seen benchmarks showing a 1.4GHz Pentium M MURDERING a Pentium 4 @ 2.8GHz (especially considering the Pentium M was in a laptop and the Pentium 4 was a desktop!)

      BTW, if you're interested in the mode
      • Ahh but my laptop is a 1.8 Pentium M.

        jayson@Jayson jayson $ cat /proc/cpuinfo
        processor : 0
        vendor_id : GenuineIntel
        cpu family : 15
        model : 2
        model name : Mobile Intel(R) Pentium(R) 4 - M CPU 1.80GHz
        stepping : 7
        cpu MHz : 1794.389

        So I guess 1.7 isn't the top. I know 1.8 isn't that much more but it is more. BTW it's a Toshiba Satellite Pro 6100.
        • Herein lies the problem. If your CPU were really a Pentium M, here's what the info would be (guesstimating the name, and certainly not the exact cpu MHz - this is synthesized, as I don't have access to a Pentium M based system):

          processor: 0
          vendor_id: GenuineIntel
          cpu family: 6 (same as a Pentium III or other P6-based cpus, not 15 like your NetBurst cored Pentium 4-M)
          model: 9 (in between Coppermine and Tualatin P3)
          model name: Mobile Intel (R) Pentium (R) M CPU 1.70GHz (see the difference?)
          stepping: ????? (
          • D'oh! Forgot something: the Intel spec finder is royally fscked - it doesn't list the ULV (900-1000MHz), LV (1.1GHz), BGA (all speeds, permanently soldered), or 1.7GHz CPUs. It also says that the four CPUs they list are all Socket 478, when they're actually Socket 479.
        • A Pentium 4-M is not a Pentium M. What is the cache size mentioned by /proc/cpuinfo? It should be 1 MB.
    • by Anonymous Coward
      It's amazing how the MHz Myth continues to this day.

      Let me say it once again: MHz does NOT equate to performance. If you still believe it does, I will gladly trade you this nice new 3GHz Celeron for your 1800MHz Athlon64.

  • by Skapare ( 16644 ) on Friday November 07, 2003 @02:40PM (#7418409) Homepage

    How Moore's Law affects some computer users, as measured in the time it takes to do something - like render a page of a document on screen in a word processor window - is shown in this example:

    • 1992 1.25 seconds
    • 1993 800 milliseconds
    • 1994 500 milliseconds
    • 1995 320 milliseconds
    • 1996 200 milliseconds
    • 1997 125 milliseconds
    • 1998 80 milliseconds
    • 1999 50 milliseconds
    • 2000 32 milliseconds
    • 2001 20 milliseconds
    • 2002 12500 microseconds
    • 2003 8000 microseconds
    • 2004 5000 microseconds
    • 2005 3200 microseconds
    • 2006 2000 microseconds
    • 2007 1250 microseconds
    • 2008 800 microseconds
    • 2009 500 microseconds
    • 2010 320 microseconds
    • 2011 200 microseconds
    • 2012 125 microseconds
    When you are doing something interactively and have to wait the better part of a second (or worse) for each step to complete, it can be a big pain. A faster CPU would be nice. But once that wait gets down into a certain range (varies depending on what the task actually is), it won't really matter as much, if at all.

    Even faster CPUs will still be needed for many things. The use of cryptography will certainly be increasing, and that is a big driver of the need for more CPU speed. Larger, more bloated operating systems and applications (in terms of steps of code, in addition to RAM and disk space) will need faster (and larger) CPUs, too (though many have learned to avoid such bloat to avoid the costs of software and hardware upgrades).

    But the market for faster CPUs will gradually leave behind more and more people who do the kinds of things that just don't need it. The threshold has been reached for many, and soon will be for many more. Hopefully new and expanded uses will keep (or restore) the markets in a thriving condition.
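
    Incidentally, the table above is just "ten times faster every five years" - a halving roughly every 18 months. A short Python sketch reproduces it (the 10^0.2 yearly factor is inferred from the table itself):

    t = 1.25                          # seconds per page render in 1992
    for year in range(1992, 2013):
        print(year, f"{t:.6f} s")     # 1.25 s, 0.79 s, 0.50 s, ...
        t /= 10 ** 0.2                # a fifth of a decade each year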

    • While your numbers do show the exponential pattern, as a Windows user I have noticed no particular speedup over the years of loading a document in say Word or Acrobat Reader ;-(
    • Amdahl's law (Score:2, Interesting)

      by kcm ( 138443 )
      I think you meant Amdahl's Law... the improvement to the user is only as noticeable as the original experience was poor.

      The faster the original redraw, the less of an effect the speedier redraws have on the user's interaction experience.
    • The problem is this:

      1) You can do MORE. Display a '92 webpage on a current box and it will take 8000 microseconds. But try installing a modern OS and displaying a new HTML 4.01 page with JavaScript, CSS2, and possibly some Flash content on a '92 computer. Nowadays the page may load in 1.5s; how would it run on such an old box?

      And this is good.

      2) You can afford to do things WORSE. Nobody really writes games in ASM nowadays. Hell, you hardly ever see anyone writing ANYTHING in ASM. They just use some high-level languages
    • though many have learned to avoid these steps to avoid the costs of upgrades to software and hardware

      By running Slackware 4.0??

      OS/2??

      Certainly not by running Windows or a bloated freenix desktop.
  • by stevesliva ( 648202 ) on Friday November 07, 2003 @02:45PM (#7418456) Journal
    Okay, I look at the impressive resumes belonging to both the interviewer and interviewee, and I cannot believe how little substance there is to their conversation. Why is that? They're almost powerless (no pun intended) to influence the development of process technologies. Transmeta is a fabless company that contracts with TSMC, I believe, to manufacture their processors, and the interviewee just started another fabless company. If you want to speculate on where process technology is going, ask someone with a fab!

    They spend several paragraphs discussing NMOS capacitors in CMOS processes circa 1994, but apparently neither knew enough to speculate about MIM or trench capacitor structures, two mature technologies used in DRAM. Yes, they were leading into the gate leakage issue, but the substance of that boiled down to, "Leakage sure is a big problem." Their solution is low-voltage chips with fewer transistors. Revolutionary!

    There's way more substance in press releases from Intel.

    • by Anonymous Coward
      Steve,

      For those unlucky enough to read your pointless remarks, I must give a rebuttal.

      Some fabless semi companies have more process engineers than the fabs themselves, and those engineers do more to fine-tune the process than you could imagine.

      Also, fewer transistors may not be revolutionary, but doing the same or more work with fewer certainly is.

  • ACM Queue... "Tomorrow's computing today"

    so tomorrow, I get to look forward to more underpowered web servers?
  • Being a Moore, I can't help but comment on Moore's law. In my lifetime, there have been a number of unforeseen and incredible advances that have helped Moore's law significantly, besides the usual annual technological improvements. Moore's law will continue to advance mostly because of these unforeseen advances, and I believe that the annual technological improvements that have become commonplace will also continue. Long live progress!
  • Translator code... (Score:4, Interesting)

    by SharpFang ( 651121 ) on Friday November 07, 2003 @03:12PM (#7418706) Homepage Journal
    One of the best Transmeta features was supposed to be the replaceable "translator layer" code, so it could run as an x86, Motorola, Alpha, or whatever CPU you wanted (so you could boot Amiga, Mac, and PC stuff on the same box, just loading the proper translator code at bootup). But AFAIK only x86 translator code was ever created. Does anybody know about progress on other platforms?
    • The reason only x86 was done was that the other platforms already had low-power chips (especially Motorola), and no one needs a low-power Alpha.
    • I don't think the idea was to make one box able to boot as any processor.

      I think the idea was to make one die which could be configured to behave like any processor. But once you pick the one you want, you're pretty much stuck with it.

      The architectures of all the different systems are far too varied for one motherboard to support them all.

      I have never heard a thing about any architecture besides x86. I doubt any work on any other translation layer has even begun. They have their hands full with x86 as i
    • by Fnord ( 1756 ) <joe@sadusk.com> on Friday November 07, 2003 @04:45PM (#7419614) Homepage
      Sigh, this comes up every time someone mentions Transmeta. Yes, the "translator code" (it's actually called Code Morphing) is cool. Yes, it takes x86 and converts it to the Crusoe's native instruction set, which is actually a 4-way VLIW processor. No, that was not done to run multiple instruction sets. It was done so that some of the complexity of the chip could live in software instead of silicon, making the chip smaller and less power hungry. In fact, they've repeatedly said that while it's theoretically possible to code-morph other instruction sets, they designed the underlying, real instruction set to effectively run x86 code, just in a simpler and more efficient manner. The whole hype about multiple instruction sets came from people speculating about what could be done with this cool new code-morphing thing, and then others reading those comments and assuming it was already planned. Transmeta themselves never contributed to that hype in the slightest.
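
      If you want a feel for the general technique - dynamic translation with a translation cache - here's a toy Python sketch (purely illustrative; not Transmeta's actual scheme or instruction set):

      translation_cache = {}

      def translate(block):
          # "Compile" a list of (op, arg) guest instructions into one native function.
          def native(acc):
              for op, arg in block:
                  if op == "add":
                      acc += arg
                  elif op == "mul":
                      acc *= arg
              return acc
          return native

      def run(block_id, block, acc):
          fn = translation_cache.get(block_id)
          if fn is None:                # translate on first execution...
              fn = translation_cache[block_id] = translate(block)
          return fn(acc)                # ...then reuse the cached translation

      print(run("b1", [("add", 2), ("mul", 3)], 1))   # 9: translated, then run
      print(run("b1", [("add", 2), ("mul", 3)], 2))   # 12: reuses the cached code

      The idea, roughly, is the one Code Morphing exploits: pay the translation cost once, then run the cached native code, keeping the translator out of the hot loop.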
  • Transmeta a Go-Go [geekculture.com] followed by his encounter with Riker and Geordie... here. [geekculture.com] :-)
  • DITZEL
    In some sense one can say that the people who design operating systems and new processors are looking for ways that you won't have sufficient performance with the machine you have today, so that you'll need to buy something new. Is there any way out of this vicious cycle?
    ME
    They call it "Linux".
    • Not exactly.

      I mean, sure, Linux provides you with more "user-usable stuff per CPU cycle" on average, but I still found I just can't fit a decent install of RedHat on the 200MB disk of an old Sun. I went with NetBSD for that and was amazed at how much I got. And no, I'm not saying NetBSD is the solution. I'm just saying Linux is far from such perfection.
      • Ok, but your example doesn't negate my real point, which is that Microsoft benefits by increasing the bloat of its OS so that you buy new hardware (which typically comes with MS installed).

        RH may be bloated (by Linux standards) in terms of disk space, but it will run great in much less RAM (and CPU) than WinXP, while providing equivalent functionality.

        I concede that NetBSD or other distros may be even better performers on grandma's toaster or whatever.
  • by emil ( 695 )

    DITZEL In terms of technology that might save us, in the last few years we've heard a lot about something called silicon on insulator, a variation of standard CMOS. Is that going to replace standard CMOS technology in the future?

    DOBBERPUHL Well, the proponents would say that it will, and the opponents will say that it won't, and only time will tell. The issue I think it struggles with is that it has an advantage over standard silicon in terms of performance and power of about 25 to 30 percent--which is a

  • Moore's law applies directly to the number of transistors on a chip, but since we are all using Maas Biochips these days, none of that applies anymore. Unless you own one of those old Ono-Sendai pieces of crap...
  • We've developed lots of tools to do performance analysis for software development--and to understand the hot spots from a performance point of view in code and make improvements to tune code, to improve performance, or to reduce memory footprint, etc.

    But programmers don't use them.

    With my command of assembly language, there probably aren't many coders out there who could write faster code than I. I'm not bragging; it's a simple fact that if you can fit the entire executable into the processor's cach

"May your future be limited only by your dreams." -- Christa McAuliffe

Working...