Intel Looks to Billion-Transistor Processors

Weedstock writes: "EE Times has an article about Intel's next-decade roadmap. It explains the current issues with the existing "plastic bumped organic land grid array" packaging technology and how it will be modified into a "bumpless package with built-up layers" to accommodate billion-transistor processors."
  • The magnetic fridge story was covered here earlier [slashdot.org] and was thus yanked out as a duplicate article.
  • Hopefully people will be putting all of this power to good use. I wonder if C programming will become as rare as assembler programming is today?
    • Think: hardware acceleration. Graphics processing took the road it did when someone got the great idea to accelerate popular functions in the HARDWARE, instead of relying on software to carry out the functions.
      • That great idea had already been established years earlier in the Amiga. It always takes the PC world a few years (or decades) to catch on. I think EIDE is finally getting close to SCSI speeds after what, a decade?
        • Right, because SCSI was invented for the Amiga, and is not available for PCs, nor has SCSI developed at all in the last 10 years.

          Hell, long before the Amiga, you had a separate computer that did nothing but handle the display (e.g. Pluto, Pixar Image Computer, Ikonas), and people thought it was pretty cool when you could integrate graphics into your main computer (not on the CPU, but in the same box).
      • True, but re-read this please:
        accelerate popular functions in the HARDWARE, instead of relying on software to carry out the functions.


        With that in mind...one word:

        Winmodems

        But I agree. I've said to many people: "Never replace hardware with software".

    • Not if Gnome has anything to say about it.
    • A friend of mine was explaining some of the notation used in piano sheet music. It seems that as the instruments became more capable, the sheet music became more complex. In the computer industry, we can see the same progression. Considering the length of time that it takes to develop quality software, by the time that it is written, hardware that will support it will be just around the corner. Early computer programs had to be written to use very little resources. I believe that there are plenty of examples out there of code that is sloppily written that runs fine because most people have more computer than they need... until they decide to upgrade to the latest greatest OS! So will people make the most out of the new hardware? That all depends on the people who write the software. It depends on the true artist who expresses himself not with code... but in spite of it.
    • What would replace C programming given massively powerful processors?
    • Just as C programming made a 12-MHz 80286 almost as powerful as a 4-MHz Z-80 programmed in hand-tuned assembly language, the multiply abstract and fantastically elegant languages of the future will make those terahertz machines almost as powerful for real work as a TRS-80 Model 1 programmed in Level II BASIC.
      • Let us remember: all computing power belongs to Microsoft; no matter how powerful your computer is, your Microsoft operating system will make it feel like a 20 MHz 386 running Windows 3.11.

        From Microsoft's point of view the ultra fast and powerful processor will allow them to write the 2012 version of Windows in Visual Basic 13.0; they will be able to hire beggars off the streets of Bombay at $0.40 a day to write their OS - no more expensive college grads to hire. Here is the real reason (from the Microsoft perspective) for more powerful computers. Naturally their PR people will tell everyone that the new version of Windows cost almost a trillion dollars to write, and everyone in the press will solemnly repeat that claim.

        If you project down that path a little more you can arrive at true artificial intelligence so that Microsoft can have computers writing the next generation of Windows without the need of human intervention. That way they can cut out their largest expense - programmers - and jump their gross profit margins from 90% of sales to 99.9%. Once this occurs you will actually start to see faster versions of Windows as machines won't need dumbed down languages to program in.

  • by OneShotUno ( 531386 ) <OneShotUno.yahoo@com> on Friday January 04, 2002 @11:07PM (#2789336)
    http://www.anandtech.com/showdoc.html?i=1542 If the URL is bad, go to www.anandtech.com, click CPU on the right side, and look under recent articles for the BBUL story.
  • bottleneck (Score:3, Interesting)

    by Transient0 ( 175617 ) on Friday January 04, 2002 @11:08PM (#2789340) Homepage
    I was interested to see that the article indicates chip speed is about to hit a bottleneck with the array package. Of course, as with all things, everything needs to be upgraded in step in order to reap the benefits.

    The thing I'm curious about is whether these changes in chip packaging will result in a disorganized series of changes in chip/board interface standards: Socket 7, Slot A, Socket 370, etc.

    Will the various companies (most notably Intel and AMD) all be independently trying to solve the same problem in different ways? And will this mean not only rapid interface generations within the same company, but even further incompatibility between chips of competing companies?
  • 1 THz processors are nice and all, but what about the necessary advancements in motherboard bus technology to match? I mean, you can have as fast a car as you want, but if you get it to a track and flatten its tires, it's not going to go very far. Personally, I would like to see a better partnership between chipset manufacturers and processor manufacturers to make sure that the rise in processor speeds is proportionate to the rise in chipset speeds.
    • First things first, I didn't read the article (yet) =) so I may be completely off base.

      Also, a large and important factor is how those billion transistors get used. They could go into a large onboard cache, or a massively parallel adder, or something completely useless. And the something-completely-useless part is probably what Intel will produce, not because their products are necessarily crap, but because they continually use that pathetic x86 architecture. No matter how many clever tricks you use to decode, how many stages you make the pipeline, and how RISC-like your core is, the external instruction set is still a severe limiting factor. It becomes uneconomical (in theory) compared to simpler alternatives. At least with IA-32 it is awful (it excited me in middle school, then I realized how toy-like it was compared to something useful, like a MIPS or an IBM PPC or something). I'm not as sure about the IA-64 architecture. If they're going to make something that sophisticated, I'd hate to see it blown by a lousy implementation. "Yay, my CPU has .5 billion transistors employed to decode x86 instructions. That's got to be better than using 2 million of them to decode a simple RISC ISA."
    • Well, since Intel wants to own the chipset business, it probably won't happen. Of course, Intel actually did own the chipset business until it got too cozy with Rambus...
  • Wow! This is pretty amazing. Just makes you wonder when traditional computing ( i.e. not quantum computing ) will reach its limits. I remember reading that this could occur around 2010, but then again that is barring new advances in physics.

    Even then we do know that there are limits, for example there is a minimum limit to the amount of heat produced in a computation ( this is a result of the Second Law of Thermodynamics ). So there is a limit to the number of transistors that can be fit into any given area, otherwise the processor would be putting out too much heat energy.

    Well anyway, this is very interesting and will make running simulations of real life scientific phenomena better, and as a result our understanding of the universe around us will be enhanced.

    • Actually Charles Bennett has shown that you can perform computation reversibly, which means that in theory, heatless computation should be possible.


      But we are nowhere near that limit.
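
      For a sense of scale, the relevant figure is Landauer's bound: erasing one bit must dissipate at least kT ln 2 of heat. Here's a minimal back-of-the-envelope sketch in C; the room temperature and the bits-per-cycle count are my own illustrative assumptions, not figures from the article:

      #include <math.h>
      #include <stdio.h>

      /* Landauer bound: erasing one bit dissipates at least k*T*ln(2) of heat.
         All numbers below are illustrative assumptions. */
      int main(void)
      {
          const double k = 1.380649e-23;    /* Boltzmann constant, J/K */
          const double T = 300.0;           /* assumed room temperature, K */
          double e_bit = k * T * log(2.0);  /* minimum energy per erased bit, J */

          /* hypothetical chip erasing 1e9 bits per cycle at 1 THz */
          double watts = e_bit * 1e9 * 1e12;

          printf("Landauer limit: %.2e J per erased bit\n", e_bit);
          printf("thermodynamic floor for that workload: %.1f W\n", watts);
          return 0;
      }

      That floor works out to a few watts; real chips spend many orders of magnitude more energy per bit, which is the sense in which we're nowhere near the limit.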

  • A billion here, a billion there, sooner or later you're talking about a really, really hot piece of silicon.
  • When you have software bloat, slow, SloW, SLOW buses, (almost) unresponsive hard drives and low bandwidth????
  • What we've all been waiting for...a gigazistor! Enjoy it while you can. No doubt, sooner or later, the LinguisticallyCorrectNazis from Academia will change the name to gibizistor.
    • No, you will still be able to call it a "gigazistor". The -bi endings for the prefixes only apply to powers of 2. In this case, from what I got, this will have 10^9 transistors, therefore using the giga- prefix.
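
      For what it's worth, the two prefixes differ by about 7%; a quick sketch of the arithmetic in plain C (nothing here is specific to the article):

      #include <stdio.h>

      /* giga- is the SI prefix (10^9); gibi- is the binary prefix (2^30). */
      int main(void)
      {
          unsigned long long giga = 1000000000ULL;   /* 10^9 */
          unsigned long long gibi = 1ULL << 30;      /* 2^30 = 1073741824 */

          printf("giga = %llu\n", giga);
          printf("gibi = %llu (about %.1f%% larger)\n",
                 gibi, 100.0 * (double)(gibi - giga) / (double)giga);
          return 0;
      }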
  • Heating a problem? (Score:4, Interesting)

    by PM4RK5 ( 265536 ) on Saturday January 05, 2002 @12:15AM (#2789536)
    Maybe I'm wrong, and if I am, I'll just crawl back into my hole and shut up. But the article claims that the new technology will allow them to *embed* the processor(s) inside the casing material, unlike today where the core actually sticks out above the packaging.

    But the advantage, as I see it, to having the core *above* the packaging is that heatsinks, thermal grease, etc. all have direct (or extremely close to direct) contact with the core - which is what generates the heat. Maybe in reducing voltage, heat output will drop significantly, but I digress. With the core embedded in the casing, it would seem hard to help cool the core when a heatsink doesn't have direct contact.

    I may be wrong, and in that case just ignore this comment, but I don't know how Intel would plan on dealing with that as a problem (if it in fact is one).
    • by cperciva ( 102828 )
      By "embed" they mean "stick the core into a hole so that the top of the core is level with the surface of the packaging".

      In other words, your heatsink will have more or less direct contact with the core, but there will be other material around which will make sure that you don't accidentally crush the core when you push down on the heatsink.
    • by grahamsz ( 150076 ) on Saturday January 05, 2002 @01:06AM (#2789639) Homepage Journal
      Firstly, they did mention reducing gate leakage current by a factor of 3, I believe, which means the chip will produce a lot less heat.

      As for embedding the core in the packaging - it's probably a great bonus. As has been pointed out, this means that the top of your chip will be completely flush, so you'll hopefully get better thermal transfer since you have a bigger surface area.

      On a current Intel chip, the space between the packaging and the heatsink acts as an insulator (since air insulates best when it's not moving).

      In addition to this, I would speculate that if the core is embedded into the packaging it might allow for small heat pipes to run directly into the core, allowing particularly hot areas of the chip to have additional passive cooling.

      That said, given fabrication facilities I'd struggle to make even a single PNP transistor, and whilst I could probably remember how to build simple MOS (and hence CMOS) gates, I'd struggle to replicate what Intel was doing in the 70s... so don't take me as any sort of authority on this one.
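
      To put the thermal-transfer argument in concrete terms, here is a minimal junction-temperature sketch using the usual series thermal-resistance model; every value below is a made-up assumption for illustration, not an Intel figure:

      #include <stdio.h>

      /* Simple series model: Tj = Tambient + P * (Rcore_to_sink + Rsink_to_air).
         A flush core with a larger contact area mostly shows up as a lower
         core-to-heatsink resistance. All values are illustrative assumptions. */
      int main(void)
      {
          double t_ambient = 25.0;   /* C */
          double power     = 60.0;   /* W, assumed die power */
          double r_sink    = 0.30;   /* C/W, assumed heatsink-to-air resistance */

          double r_exposed = 0.40;   /* C/W, assumed exposed-core interface */
          double r_flush   = 0.25;   /* C/W, assumed flush/embedded interface */

          printf("exposed core: Tj = %.1f C\n",
                 t_ambient + power * (r_exposed + r_sink));
          printf("flush core:   Tj = %.1f C\n",
                 t_ambient + power * (r_flush + r_sink));
          return 0;
      }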
  • by smallblackdog ( 266198 ) <schipperke@smal[ ... g ['lbl' in gap]> on Saturday January 05, 2002 @12:45AM (#2789594) Homepage
    Does this mean faster pr0n?!
  • by Anonymous Coward
    The funny thing is that people thought the chips would get cooler when the gate sizes got smaller. Obviously this is not happening. But the thing is this - if the package is getting bumpless, the real challenge is getting the power through the package and onto the power mesh of the chip. Plus, will the new package substrates thermally match to the system boards they will be attached to? By the time Intel intends to have sub 50nm line widths, the die power is supposed to approach something close to your garden variety NUCLEAR REACTOR.

    Then you have the issue of signal integrity, particularly for high-speed analog and differential pair signals, which smaller traces only aggravate. The most advanced flip chip packages in the world currently push around 2000 connections for power and signals. This will only get more aggravated and congested at the board and package level as the level of integration increases and feature sizes on the silicon shrink. Small lines increase impedances, and merely cutting layers away will NOT help. Differential signals are supposed to be pushing 40Gb/s per pair in a year or so using modulation on top of differential signaling, so what are they expecting that these packages will be supporting when they have such strict routing requirements both in the signal and redistribution layer routing AND through the package? Not to mention the fact that they still have to attach these monsters using a substitute for lead solder to avoid alpha particles causing false switching in already small noise margins.

    Instead, you need different package materials than simple organic laminate substrate and different silicon process materials than silicon dioxide and tungsten vias. When it gets to this, they have to rely on material science, which is the gating factor in a lot of science right now. I don't believe that Intel's core competency includes material science per se, so they'll be relying on outside companies and research labs for a good chunk of the new materials. Since this is out of their direct control, I don't see how they can deterministically schedule their packaging roadmap - not without forming clear strategic alliances with companies whose core competencies lie in material science related to the above-listed materials. I wish them all the luck and blessings in getting there though.
  • Who needs a BILLION transistors in a processor, for crying out loud?! Let me tell you something. A slow 4- or 8-bit processor can execute amazing things when coded correctly. Embedded developers have interfaced these processors to memory, hard drives, CD-ROMs, the ISA and PCI busses, and just about every kind of peripheral out there. I'm beginning to think that a fully functional and FAST computer can be built with NO x86 processor, but with about $20.00 (US) worth of these cheap, slow and small processors. It's the software that needs to be engineered correctly, and I'm afraid that nearly all software out there isn't.

    What happened to the good ol' days when programmers--real programmers--wrote very clever, small and fast programs? When it had to be written correctly or it didn't work?

    Try explaining to me why nearly all hardware needs to be engineered correctly, for a minimum of components and a maximum of performance, yet nearly all software is slopped together, taking up tens or hundreds of megs and running noticeably slow on today's powerhouse machines. You know what? There's no excuse.

    I've seen a hard real time operating system coded in 700 words. I've seen processors with 128 bytes of RAM control industrial robotics. Speaking of industrial stuff, I've seen an automation system that packs a real time operating system, high speed communication, interactive user interface (including full control of the display hardware), and all the automation software... in 20 kilobytes. Seeing this, I cannot understand why something simple like a word processor program should be several megs in size (and why it should hog a ton of memory).

    So back to the billion transistors question... why? Why should the processor have to predict the next mess of instructions, load them into a cache, find out it predicted incorrectly, dump the cache, find the correct location, load the instructions... Why are processors marketed by their internal clock speed when they spend most of their time waiting for data? And above all, why does software suck so badly?

    OH WELL.

    The Lord of the Rings. The book rocks. The movie sucks. Yeah, it SUCKS! I left the theater halfway through it. It SUCKS! But the book is awesome.

    OH WELL.

    • Re:Why?! (Score:3, Interesting)

      You need to look at what's driving processor design these days. It isn't word processing and spreadsheets, that's for sure. There are only four areas that I can think of that are really driving the desire for more and more transistors:

      #1 - Larger memory sizes. Terabyte databases require terabytes of RAM. Current 32-bit processors can't touch that with a 10-bit pole (see the quick address-space sketch after this list). Even the most elegant 4- and 8-bit processors can't do anything about their memory addressing limitations without huge kludges.

      #2 - Engineering/Scientific problems. Ever try to model the fluid/thermal dynamics of a star? You need ungodly amounts of processor power to do this properly, or ungodly numbers of processors. Preferably both.

      #3 - 3D multimedia and design. This is my area of work. I've got five (count 'em, five) dual Athlons right this moment rendering like mad, churning through a 1 hour 3D animated sequence with lots of volumetric lights, NURBS, and tons of polygons. 3D eats cycles like they're going out of style, and in my business if you can cut your render time in half, you've just doubled your production capability. You can never buy enough render power.

      #4 - Gaming. Yes, games. Doom. Quake. Doom II. Quake 2. Quake 3. Unreal Tournament. Every game pushes the triangle count, texture resolution, and framerate to higher highs. Photorealism is the holy grail, and it's going to take absurd amounts of transistors running at an unheard of clockrate to do this.
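
      On #1, a quick back-of-the-envelope sketch of why a flat 32-bit address space runs out (my own illustrative numbers, in C):

      #include <stdio.h>

      /* A flat 32-bit address space tops out at 4 GiB; a terabyte-scale
         working set needs wider addresses (or ugly segmentation/windowing). */
      int main(void)
      {
          unsigned long long max32 = 1ULL << 32;   /* 4 GiB */
          unsigned long long tib   = 1ULL << 40;   /* 1 TiB */

          printf("32-bit limit: %llu bytes (%llu GiB)\n", max32, max32 >> 30);
          printf("1 TiB is %llu times that space\n", tib / max32);
          return 0;
      }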

      You'll note that business apps aren't anywhere in there, and they shouldn't be. Your average desktop processor spends about 99% of its time idle waiting on the operator between keystrokes. Nobody needs a 2GHz P4 or a 1.6GHz Athlon for these tasks, despite Intel's propaganda to the contrary.

      I know you long for fast, tight code, but that isn't being taught in college anymore (heck, it wasn't even when I went through in 1990). Profs are encouraging rapid design and quick-to-market code over elegant design. It's unfortunate, but the market itself is rewarding this philosophy. I don't agree with it, but the fact is that the company that produces a "good enough" piece of software quickly will generally steamroller a company that produces "elegant" software but comes out later.

      After all, beta means alpha, and 1.0 is really an extended beta. Kick it out the door, the marketing campaign is scheduled to start! Who cares if it works, we can always patch it later or put the bugfixes in version 2.0!

      Oh, and I strongly disagree with your assessment of Lord of the Rings. I found it a very good adaptation of such a sprawling book. What did you dislike about it so much that you descend to profanity to describe it?
      • Another item for your list could be carrier-grade communication systems. Do you think the next generation of networking systems is going to be provided by a bunch of parallel 8-bit processors? Hell no. In order to move several terabits of data at crazy speeds we're going to need really fast switching technology to do it. Yet another is post-production work. After you render all of your CG sequences, somebody else with their own ungodly number of processors edits it all together into a final product. The faster the processor is and the more memory it has, the faster you can composite your video and audio and polish it up.
    • Re:Why?! (Score:4, Insightful)

      by TheAJofOZ ( 215260 ) <adrian@symphonio[ ]net ['us.' in gap]> on Saturday January 05, 2002 @01:53AM (#2789754) Homepage Journal
      What happened to the good ol' days when programmers--real programmers--wrote very clever, small and fast programs?

      We decided we wanted to do more with our computers. It's all very well to long for the days of very clever, small and fast programs but it's entirely another thing to create software which does all the things we have come to expect today while still keeping the software incredibly small and fast. It's even harder when you want to stay within a tight schedule and budget.

      Let's look at something near and dear to our hearts, something that many of us here have contributed to and something that isn't affected by budgets or timelines (well, mostly) - the Linux kernel. The Linux kernel is undoubtedly a very good piece of software development, arguably the best that's currently available, and it has been created by a wide range of people, many of whom come from the days when RAM and CPU time were expensive. Despite this, the Linux kernel is certainly not small, and it shouldn't be. It has a wide range of devices to support, it has to be able to handle multiple users simultaneously and it provides a bunch of services that previously would never have been provided in an OS, let alone in a kernel.

      It could be argued that the Linux kernel is clever, and with my lack of knowledge of the kernel source I can't really comment. I think it is safe to assume that it's not as clever as it could be, though - it doesn't use every trick in the book to reduce file size and increase efficiency because it's no longer small enough to make that kind of thing feasible. It's also modularised so that things can be loaded and unloaded as needed; there's extra code and overhead required to provide that. Finally, it supports a range of architectures now and is more portable. Going back to the old ways of doing things gives up all those benefits.

      Finally, the Linux kernel is not fast - it is comparably fast for all the things it does, but it is not as fast on a per-cycle basis as OSes were back when every cycle mattered. It does however provide more features (like loadable modules), more portability and a faster release schedule for fewer man hours.

      So when you really sit down and think about it, while programs these days take up more RAM and CPU power, there are a range of benefits that come from this. You should also note that, comparatively, the overall experience of using a computer has become radically faster than it used to be. You may think that a program feels slow when you run it on a 3 year old machine, but what you fail to realise is that you've just gotten used to how much faster your new machine is. Having said that, some software is just plain crap, but so are some cars and bridges, so the bad apples don't just come from software engineering.

      Why should the processor have to predict the next mess of instructions, load them into a cache, find out it predicted incorrectly, dump the cache, find the correct location, load the instructions...

      Incredibly poor chip design, actually. This problem really only becomes significant when pipelines are made too long (such as in the P4). The pipelines are extended to make it possible to use a higher MHz rating - though because of the extended pipeline and the problems caused by having to guess ahead so far, the CPU doesn't actually function anywhere near as fast as the MHz would indicate it should. This is why people talk about the Megahertz Myth - there's a ton of information on it around the web.
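
      A toy throughput model makes the trade-off visible. The clock rates, pipeline depths, branch frequency and mispredict rate below are assumptions picked purely to illustrate the effect, not measured P4 or Athlon figures:

      #include <stdio.h>

      /* Toy model: a scalar pipeline that flushes its full depth on every
         branch mispredict.
             effective IPC = 1 / (1 + branch_freq * mispredict_rate * depth)
         All inputs are illustrative assumptions. */
      static double eff_ips(double ghz, int depth,
                            double branch_freq, double mispredict_rate)
      {
          double ipc = 1.0 / (1.0 + branch_freq * mispredict_rate * depth);
          return ghz * 1e9 * ipc;
      }

      int main(void)
      {
          double shallow = eff_ips(1.4, 10, 0.2, 0.1);  /* shorter pipe, lower clock */
          double deep    = eff_ips(2.0, 20, 0.2, 0.1);  /* deeper pipe, higher clock */

          /* The clock is ~43% higher but effective throughput rises by far
             less - the gap the "Megahertz Myth" refers to. */
          printf("10-stage @ 1.4 GHz: %.2e instr/s\n", shallow);
          printf("20-stage @ 2.0 GHz: %.2e instr/s\n", deep);
          return 0;
      }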

      Why are processors marketed by their internal clock speed when they spend most of their time waiting for data?

      Because consumers don't understand computers well enough to know this, and MHz has been used as a rating mechanism for so long (and previously it had been reasonably accurate). Marketers will jump at any opportunity to make their product sound better than the competition.

      And above all, why does software suck so badly?

      It doesn't. There is and always has been poorly written software, but to say that all software sucks is unjustified. There are cars that break down due to manufacturing defects, bridges that collapse, constructions which go over time and budget, and a myriad of failures from all types of engineering, so of course not all software is perfect - but it is improving; whether or not you like the way it is improving is another matter.

      • This problem really only becomes significant when pipelines are made too long (such as in the P4). The pipelines are extended to make it possible to use a higher MHz rating - though because of the extended pipeline and the problems caused by having to guess ahead so far, the CPU doesn't actually function anywhere near as fast as the MHz would indicate it should.

        Actually, pipelines aren't made longer only to get a higher MHz rating, but to increase throughput [in the optimal case]. The current crop of CPUs does more per clock than older ones (well, not counting the P4, usually). You can nowadays add more than two numbers in one clock cycle and possibly do an additional multiplication at the same time. Even the P4 should be really fast if all you do is basic operations without loops. The P4 has a 3+ GHz ALU for this! Unfortunately, we really don't need that much computing power but rather logic power, partly because we have additional processors on our sound and graphics cards where the computing power really counts. If you really need to emulate a DSP in software, then the P4 is what you need; otherwise the deep pipeline is going to hurt badly.

        Perhaps it's just that you didn't expect that much from computers a couple of years ago. I remember using a 75MHz Pentium with a sucky graphics adapter not too many years ago and it felt plenty fast. I'd hate to have to use that kind of crap anymore - no matter what software I used. And that's because I know better.

    • What happened to the good ol' days when programmers--real programmers--wrote very clever, small and fast programs? When it had to be written correctly or it didn't work?
      We got to the point where people cost more than computers. This happened years ago, and I, for one, don't want to go back. Computers are our tools. They should obey our will, and we shouldn't be bending to them.
    • > Who needs a BILLION transistors in a processor?

      Intel needs new products coming all the time to stay in business. This is the exact same business model that Microsoft follows. In the near future, when the saturation of technology hits some level, you will see some truly stupid product ideas to get people to buy even more crap.

    • I'll probably get ripped a new one for mentioning this... but...

      Go out and buy an Amiga :) - seriously. Even though they are largely unsupported (compared to 5~6 years ago), 16 megs of RAM is still a colossal amount of memory for your average Amiga - and they can do just about anything your desktop PC can do now.

      Anyhoo - the way I justify software bloat is that hardware is so cheap these days, does it really matter? I mean on a desktop level...

      My favorite comment about memory and the Amiga was an issue of Amiga Format that had a full (older) copy of Real 3D - which was one of the first programs to ever do particle kinematics. Anyhoo - the label said "warning: requires at least 4 megs of ram" - I probably have the disk around here somewhere if someone doesn't believe me.

      Anyhoo - software is bloated sure, but does it make any difference when hardware is so cheap?
    • An 8-bit processor just refers to the length of its instruction, and thus how much memory it can address. It has nothing to do with the size, and it could have just as many transistors as a modern chip. The common data elements are the same on both chips; they just need more repetitions or bus lines to move from 8 to 32 bits (e.g., decoder, adder). Most of these functions are well worked out and only a small portion of the chip, so the majority of the transistors go towards more advanced components or controls (advanced control handling being pipelining, SMT, etc.). Also don't forget what a carry-lookahead adder teaches you: more hardware is faster than less, if you can do more in parallel. So don't get confused on the x-bit area; it's merely data crunching and representation of numbers. Having an 8-bit CPU would be horrible for multimedia and science apps which need precision.
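
      To illustrate the "more repetitions or bus lines" point, here is a sketch of a 32-bit add carried out one 8-bit chunk at a time with explicit carry propagation, the way a narrow ALU would have to do it (purely illustrative C):

      #include <stdint.h>
      #include <stdio.h>

      /* Emulate a 32-bit add using only 8-bit pieces: four passes with a
         carry chained between them. */
      static uint32_t add32_via_8bit(uint32_t a, uint32_t b)
      {
          uint32_t result = 0;
          unsigned carry = 0;

          for (int i = 0; i < 4; i++) {
              uint8_t  ab  = (uint8_t)(a >> (8 * i));
              uint8_t  bb  = (uint8_t)(b >> (8 * i));
              unsigned sum = (unsigned)ab + bb + carry;  /* 8-bit add + carry-in */

              result |= (uint32_t)(sum & 0xFF) << (8 * i);
              carry   = sum >> 8;                        /* carry-out to next chunk */
          }
          return result;
      }

      int main(void)
      {
          uint32_t a = 0x89ABCDEF, b = 0x12345678;
          printf("%08X (native: %08X)\n",
                 (unsigned)add32_via_8bit(a, b), (unsigned)(a + b));
          return 0;
      }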

      As to the massively parallel chips idea, it's good in theory but horrible in practice. Most code can't be broken up into bite-size chunks to be handled independently. You rely on previous data, and have to be sure it's completed. You can't access the same segment of memory freely, because you may be reading/writing the wrong information. And to solve this it takes more code, not less. So you may have a slower implementation and one that's harder to design and maintain.

      The ATM problem shows this: you have a husband and wife both taking money out at the same time (e.g., emptying it). If both can access the data attribute, they both check to see that it has cash and are allowed to withdraw. The bank goes into the red. You say, make the withdrawal repetitious, one dollar at a time, but you still need a check:
      if (x > 0)
          x--;
      The if goes through, but the husband withdraws at the same time. The bank still loses, in the worst case, $1. Dealing with parallelized code is a pain with shared resources, and since it's bigger, more code is cranked out. Hopefully you get a speedup by having more code running in parallel on many chips, yet often less code is faster. Maintenance is hell, so often you do this work only on the important aspects, not every little thing.
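
      The check-then-act race in that ATM example is easy to reproduce and to fix; here is a minimal pthreads sketch (the thread names and the one-dollar balance are made up for illustration):

      #include <pthread.h>
      #include <stdio.h>

      static int balance = 1;   /* one dollar left in the machine */
      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

      /* BROKEN: the "if (x > 0)" check and the "x--" can interleave across
         threads, so both spouses may withdraw and the balance goes negative. */
      static void *withdraw_racy(void *arg)
      {
          (void)arg;
          if (balance > 0)
              balance--;
          return NULL;
      }

      /* FIXED: the check and the decrement happen atomically under a mutex. */
      static void *withdraw_locked(void *arg)
      {
          (void)arg;
          pthread_mutex_lock(&lock);
          if (balance > 0)
              balance--;
          pthread_mutex_unlock(&lock);
          return NULL;
      }

      int main(void)
      {
          pthread_t husband, wife;

          /* run the safe version; swap in withdraw_racy to see the hazard */
          pthread_create(&husband, NULL, withdraw_locked, NULL);
          pthread_create(&wife,    NULL, withdraw_locked, NULL);
          pthread_join(husband, NULL);
          pthread_join(wife,    NULL);

          printf("final balance: %d\n", balance);   /* never drops below 0 */
          return 0;
      }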

      And you'd ask us all to write in assembly, ugh. Think about writing 8 or so lines for a simple switch statement, keeping track of jumps/labels, dealing with a small # of registers, dealing with memory. A simple program is horrible to write in assembly, since simple if-else code takes branches, jumps, labels, etc. - longer code. Put this on something more than a few lines of C, and it's hard to debug since it's all addi's and beq's for every little command... stacks to return from a function. It's messy! Try dealing with your parallel goodness in assembly... insane. And compilers know all the tricks, so often a modern compiler is better than a skilled assembly writer, since it can do far more tricks more easily. You need to know which instructions are slow and not to use, optimize register loads/stores to reduce stalls, etc. Sure, a perfect assembly writer may know it all, but it's insane on chips with huge numbers of instructions and registers, and on a big project. The myth that good assembly is faster than a compiler is just that, a myth. Ideally it's true; in practice time is more important and a compiler often wins.

      We do parallelize code like crazy, but in smart ways. Up at the CPU level it's okay and done a lot, but not much more efficient. Go down a level. We use pipelining to parallelize the CPU stages, so it's not stuck computing one instruction through the whole process; each stage can work on one. 1 in, 1 out every cycle (different instructions), or just the same one in 5 (multiplier waiting for the decoder). Look at SMT, which fills in the bubbles (stalls) when a stage must wait by simulating another CPU so other data can fill it. Think ILP and EPIC with predication to replace branch prediction, using more hardware to do the task in less time. Instead of picking the result for an if statement while waiting for memory to respond and being wrong, you do both simultaneously and throw out the incorrect data. Sure, it's brute force rather than trying to be 'smart', but it's faster. That 10% of the time you're wrong is gone, so you're better off.
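
      The "do both and throw out the incorrect data" idea (predication, roughly what EPIC exposes to the compiler) can be mimicked even at the C source level by computing both arms and selecting without a branch; a small sketch:

      #include <stdio.h>

      /* Branchy version: the CPU has to predict which arm executes. */
      static int max_branchy(int a, int b)
      {
          if (a > b)
              return a;
          return b;
      }

      /* "Predicated" version: turn the condition into a mask and blend the
         two candidate results, so there is no branch to mispredict. */
      static int max_branchless(int a, int b)
      {
          int take_a = -(a > b);               /* all ones if a > b, else zero */
          return (a & take_a) | (b & ~take_a);
      }

      int main(void)
      {
          printf("%d %d\n", max_branchy(3, 7), max_branchless(3, 7));
          printf("%d %d\n", max_branchy(9, 2), max_branchless(9, 2));
          return 0;
      }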

      I could go on, but I spent so much freaking time writing this for no reason. Don't need nor will likely get mod points, doubt you would care enough to learn. If you would like to know more, ask though. I'll leave you with this:

      What happened to the good ol' days when programmers--real programmers--wrote very clever, small and fast programs? When it had to be written correctly or it didn't work?

      Programmers have to write big programs, smart and clever in radically different and innovative ways. Design is no longer about size, but modularity, cleanness, reducing debugging and maintenance time, and adding features. Larger code is acceptable if it's better code - it's easier to fix, and only slightly slower. Today's real programmers deal with designing massive, complex projects - not optimizing to hell for some platform or language. We leave the platform designers to optimize their end and the compiler to optimize to the hardware. Most real programmers have more important things to spend their time on.
  • But when am I... going to get a hard drive that can keep up, or RAM that can run at the clock speed? I'd much rather have MRAM (magnetic RAM) than a processor that will idle most of the time except for the most extreme calculations.
  • by Animats ( 122034 ) on Saturday January 05, 2002 @03:33AM (#2789970) Homepage
    Maybe we need the transistor count to make HDTV work. But I don't think so.

    Thought for today: why do HDTV receivers cost so much? A GeForce 3 board has 35 million transistors in the GPU, 64MB of RAM, and costs under $200 at retail. The radio part of a cell phone, which is more elaborate than the radio receiver for HDTV, has a parts cost of about $10. $600 will buy a pretty good computer, monitor and all. Why do HDTV receivers cost upwards of $500 without a display device?

    • Because they are a rip-off. Since they are marketed now for the high-end "power user" who supposedly has money to burn, they jack the price up to outrageous levels. There's also the cost of licensing the various technologies (you wouldn't want those nasty pirates out there to get ahold of HDTV now, would you?) and finally there are merely the economies of scale at work here. HDTV hasn't sold particularly well in the States (gee, it adds $500 to the cost of the set so I can see a couple of shows I never watch in extra clarity? Sign me up! Oh, and we're still not entirely sure if we're going to keep the scheme that doesn't work very well in the city either... OH! and your local cable provider won't support it either, and you can forget about VHS tapes and pretty much all DVD players...).

      This will teach them for dragging their feet on High Definition Television!
