Intel Reveals More Larrabee Architecture Details

Ninjakicks writes "Intel is presenting a paper at the SIGGRAPH 2008 industry conference in Los Angeles on Aug. 12 that describes features and capabilities of its first-ever forthcoming many-core architecture, codenamed Larrabee. Details unveiled in the SIGGRAPH paper include a new approach to the software-rendering 3-D pipeline, a many-core programming model, and performance analysis for several applications. Initial product implementations of the Larrabee architecture will target discrete graphics applications, support DirectX and OpenGL, and run existing games and programs. Additionally, a broad range of highly parallel applications, including scientific and engineering software, will benefit from the Larrabee native C/C++ programming model."
This discussion has been archived. No new comments can be posted.

  • Good old SIGGRAPH (Score:5, Insightful)

    by Gothmolly ( 148874 ) on Monday August 04, 2008 @08:58AM (#24465189)

    With the supposed death of Usenet, the closing of PARC, and the general Facebookification of the Internet, it's nice to see a bunch of nerds get together and geek out simply for the sake of it.

    • Re:Good old SIGGRAPH (Score:5, Informative)

      by TheRaven64 ( 641858 ) on Monday August 04, 2008 @09:09AM (#24465317) Journal
      Unlike, say, any other academic conference where exactly the same thing happens. People don't go to SIGGRAPH for the sake of it; they go because it's the main conference of the ACM Special Interest Group on GRAPHics, and getting a paper accepted there earns people in the graphics field a lot of respect. Many of the other ACM SIG* conferences are similar, and most other academic conferences are similar in form, but typically smaller.
      • Re: (Score:3, Interesting)

        by Duncan3 ( 10537 )

        Other areas of CS have multiple conferences throughout the year. Graphics has only one, and that's SIGGRAPH. If your paper is not accepted at SIGGRAPH, you are considered to have done nothing worthwhile that year. You could win every special effects award in Hollywood, but no SIGGRAPH paper = no cred.

        That's just how it works.

        • Re: (Score:3, Informative)

          by TheRaven64 ( 641858 )
          Not really. A lot of good papers go to IEEE Visualisation and a few other conferences. Outside the US, Eurographics is pretty well respected too. SIGGRAPH is the largest conference, and probably has the highest impact factor, but it's certainly not the only one people care about.
    • the closing of PARC

      Eh? [parc.com]

  • by yoinkityboinkity ( 957937 ) on Monday August 04, 2008 @09:03AM (#24465249)
    With more and more emphasis going toward GPUs and other specialized processors, I wonder if this is to try to fight that trend and have Intel processors able to handle the whole computer again.
    • by TheRaven64 ( 641858 ) on Monday August 04, 2008 @09:16AM (#24465415) Journal

      It almost certainly won't work. In the past, there has been a swing between general and special purpose hardware. General purpose is cheaper, special purpose is faster. When general purpose catches up with 'fast enough' then the special purpose dies. The difference now is that 'cheap' doesn't just mean 'low cost,' it also means 'low power consumption,' and special-purpose hardware is always lower power than general-purpose hardware used for the same purpose (and can be turned off completely when not in use).

      If you look at something like TI's ARM cores, they have a fairly simple CPU and a whole load of specialist DSPs and DSP-like parts that can be turned on and off independently.

      • by Kjella ( 173770 ) on Monday August 04, 2008 @09:28AM (#24465587) Homepage

        It almost certainly won't work. In the past, there has been a swing between general and special purpose hardware.

        Except with unified shaders and earlier variations the GPU isn't that "special purpose" anymore. It's basically an array of very small processors that individually are fairly general. Sure, they won't be CPUs, but I wouldn't be surprised if Intel could specialize their CPUs and make them into a competitive GPU. At the very least, good enough to eat a serious chunk of the graphics market from the bottom up, as they're already big on integrated graphics.

        • Your comment, "... as they're already big on integrated graphics." is true for some values of "big". Intel has been big in integrated graphics the way a dead whale is big on the beach.

          Basically, once you discover what Intel graphics has not been able to do, you buy an ATI or Nvidia graphics card.
          • More people in the world need Intel level graphics than need ATI/NVIDIA. This is borne out in sales numbers - Intel is the #1 graphics chip maker and has been so for many years.

            • Intel graphics has been TERRIBLE. We buy ATI video adapters (about $20) [ewiz.com] to put in business computers we build. (We've never bought from eWiz.com, or the particular video cards shown. That is just an example.)
              • Because what you do is clearly representative of what the rest of the world does, right?
              Why? It's easy enough to say you do this, but _why_ do you do this? Are your business users game testers? Or are you buying 2 year old motherboards with 2 year old integrated graphics? Even 965G chipsets should be plenty adequate for business use.
                • We've had a lot of problems with Intel graphics software. You are correct, however, we haven't tested the latest offerings from Intel. We felt so abused by the previous chipsets that we have had no desire to test the new software.

                  The last video driver we tested was version 14311 for the 945 chipset. It had a LOT of problems. There was a LOT of denial by Intel [intel.com] that there were problems.

                  So, I would be very interested to know: Is the video in the 965 chipset better? Is the software trouble-free? How about
          • And once you discover what kind of driver support they offer, you go right back to Intel.

            The new Intel G45 chipset recently made me order a new motherboard just to replace my video card. It's "fast enough", one might say...

            Personally, I can't wait to get all that proprietary crap out of my kernel. Shouldn't have fallen for the temptation in the first place.

        • Re: (Score:2, Interesting)

          Except the part where GPUs have 256-512 bit wide, 2 GHz+ dedicated memory interfaces and Intel processors are...way, way less. Add that to the ability to write tight code on a GPU that uses caching efficiently and doesn't waste a cycle, compared to the near impossibility of writing such code on a host processor which you share with an OS and other apps... meh.

          There might be some good stuff that can be done with this architecture, but I am not convinced it's a competitor to GPUs pound for pound. You have t

        • by adisakp ( 705706 )
          Except with unified shaders and earlier variations the GPU isn't that "special purpose" anymore. It's basically an array of very small processors that individually are fairly general.

          Even with all the advances in shaders, GPUs are not quite general-purpose, for several reasons. There is hardcoded data fetch logic (yes, there is some support for more arbitrary memory reads, but those are limited and take a fairly big performance hit). GPUs also have poor performance for dynamic branching -- sure, they support it, but
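          A scalar sketch of why that is (hypothetical kernel, not any vendor's real shader code): on wide SIMD hardware, if the lanes of one vector disagree at a branch, both sides effectively execute with inactive lanes masked off, so the cost is roughly the sum of the two paths.

            #include <array>
            #include <cmath>

            constexpr int kLanes = 16;

            // Scalar emulation of one 16-lane group executing a branchy kernel.
            void branchy_kernel(std::array<float, kLanes>& data) {
                std::array<bool, kLanes> mask;
                for (int i = 0; i < kLanes; ++i) mask[i] = data[i] > 0.0f;

                // "Then" path: evaluated for the whole group, kept where mask is set.
                for (int i = 0; i < kLanes; ++i)
                    if (mask[i]) data[i] = std::sqrt(data[i]) * 3.0f;

                // "Else" path: also evaluated, kept where mask is clear.
                for (int i = 0; i < kLanes; ++i)
                    if (!mask[i]) data[i] = data[i] * 0.5f;
            }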
      • General purpose is cheaper, special purpose is faster.

        Only sort of. Special purpose is often cheaper, hence the profusion of ASICs. General purpose is more flexible, and so more desirable as a result. Also, special purpose is only cheaper if "general purpose" isn't quite up to the task. Special purpose is also only cheaper if you're doing it all the time.

        For instance, on the low end, MP3 players often have (had?) MP3 decoder ASICs, because it was too expensive to perform on the very small CPU. On a PC, the

      • When general purpose catches up with 'fast enough' then the special purpose dies.

        ...except when it doesn't. When special purpose isn't so fast and special anymore, it often gets integrated as a feature into the CPU...

        See: x87 FPUs, cryptographic accelerators, video decoding, GPUs, etc.

    • by Churla ( 936633 ) on Monday August 04, 2008 @09:23AM (#24465503)

      I don't think so. The fact is that, with the right architecture (which Intel is trying to get into place), which exact core on which processor handles a specific task should become less and less relevant.

      What this technology will hopefully provide is the ability to have a more flexible machine which can task cores for graphics, then re-task them for other needs as they come up. Your serious gamers and rendering heads will still have high-end graphics cards, but this would allow more flexibility for "generic" business-build PCs.

    • What'll be more interesting is if it fragments the PC market.

      If you want a super-fast ray-tr, erm, protein folding application you need one with the Larrabee chipset. If you want to play the latest game you'll need a traditional PC + graphics card. Would it be possible that business PCs turn to Larrabee and home PCs stick with current architectures?


  • Neither the summary nor TFA itself mentions the words "Ray Tracing" or "Rasterization" [slashdot.org].

    Am I missing something here?
    • by Anonymous Coward on Monday August 04, 2008 @09:16AM (#24465411)

      No, because the article is about Intel explaining that the purpose of Larrabee is NOT to be specialised like that. It's meant to be a completely programmable architecture that you can use for rasterization, ray tracing, folding, superPi or whatever else you want to program onto it.
      Basically, they're trying to say "it's not REALLY a GPU as such, it's actually a really fat, very parallel processor. But you can use it as a GPU if you really want to".
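      In that spirit, under a plain C/C++ model the "whatever else you want" part would presumably look like ordinary threaded code rather than shader kernels. A rough, hypothetical sketch using standard C++ threads (not any actual Larrabee API):

        #include <algorithm>
        #include <functional>
        #include <thread>
        #include <vector>

        // Run work(i) for i in [0, n), split across however many cores are present.
        // Whether 'work' rasterizes a tile, traces rays, or folds proteins is up to
        // the caller -- the hardware just runs ordinary parallel C++.
        void parallel_for(int n, const std::function<void(int)>& work) {
            const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
            std::vector<std::thread> pool;
            for (unsigned c = 0; c < cores; ++c) {
                pool.emplace_back([c, cores, n, &work] {
                    for (int i = static_cast<int>(c); i < n; i += static_cast<int>(cores))
                        work(i);
                });
            }
            for (auto& t : pool) t.join();
        }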


      • The biggest debate in all of graphics-dom [graphixery?] for the last six months or a year has been Ray Tracing -vs- Rasterization.

        So what happened?

        I just don't understand how you can have an article about next-generation GPU tech and not ask whether the logic gates & data buses are going to be optimized for Ray Tracing or for Rasterization or for both [which would require at least twice the silicon, if not twice the wattage and twice the heat dissipation].

        Has Intel completely abandoned the idea
        • by TheRaven64 ( 641858 ) on Monday August 04, 2008 @12:10PM (#24468091) Journal
          This is SIGGRAPH. They've been having the 'ray tracing versus rasterisation' debate for about three decades there. If you put anything definitive into your paper then you are likely to get a reviewer who is in the other camp, and get your paper rejected. If you say 'speeds up all graphics techniques and even some non-graphics ones' then all of your reviewers will be happy.

          • My bad - when something is this irrational, I guess the first suspicion should be politics - instead, I had simply assumed incompetence [or insouciance or absence of inquisitiveness] on the part of the author.

            I will work to up my cynicism.
          • SIGGRAPH often includes papers describing major upcoming graphics or graphics-related hardware. So, it's no surprise that it was published. Most papers at the conference are on software techniques, but hardware papers for new platforms are often accepted.

            Another likely reason for acceptance is the last author listed, Pat Hanrahan, a giant in computer graphics (and a truly nice guy). Of course, the review process is supposedly scrupulously blind (the referees don't see the authors' names), but once th

        • That debate has for the most part been confined to speculation in various media outlets. It is obvious to everyone in the graphics community that Larrabee will be optimized for rasterization. It would not be possible to sell a graphics card that did not get comparable performance to ATI/NVIDIA in today's games, which means DirectX/OpenGL, which means rasterization. Intel wants to sell Larrabee, so it will have to rasterize extremely well. If Intel has to make a decision between optimizing a piece for ra

  • I get a warm-fuzzy feeling seeing that OpenGL isn't dead. I was first and best impressed with it when I played Neverwinter Nights. Why hasn't it caught on more? Why don't more open source games use it (as opposed to reusing the Quake engine)?

    • Re:OpenGL (Score:5, Informative)

      by TheRaven64 ( 641858 ) on Monday August 04, 2008 @09:21AM (#24465475) Journal
      The Quake engine uses OpenGL (or its own software renderer, but I doubt anyone uses that anymore), so games based on it do use OpenGL. Most open source games that use 3D use it, as do most OS X games, and quite a lot of console games. OpenGL ES is supported on most modern mobile phone handsets (all Symbian handsets, the iPhone and Android) and the PS3. I don't know why you'd think OpenGL was dead or dying - it's basically the only way of writing portable, hardware-accelerated 3D code at the moment.
      • Re:OpenGL (Score:5, Interesting)

        by Ed Avis ( 5917 ) <ed@membled.com> on Monday August 04, 2008 @09:30AM (#24465621) Homepage

        The Quake engine uses OpenGL (or its own software renderer, but I doubt anyone uses that anymore),

        Isn't the point of Larrabee to change that? With umpteen Pentium-compatible cores, each one beefed up with vector processing instructions, software rendering might become fashionable again.
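        To make that concrete, here is the flavour of inner loop such a software renderer would live on -- a hypothetical flat span fill written in blocks of four pixels, the sort of loop that maps onto a 4-wide (or, on Larrabee, reportedly much wider) vector unit:

          #include <cstdint>

          // Fill pixels [x0, x1) of one scanline with a solid colour, four at a time.
          // On SIMD hardware the block body collapses into a single vector store.
          void fill_span(uint32_t* scanline, int x0, int x1, uint32_t colour) {
              int x = x0;
              for (; x + 4 <= x1; x += 4) {
                  scanline[x + 0] = colour;
                  scanline[x + 1] = colour;
                  scanline[x + 2] = colour;
                  scanline[x + 3] = colour;
              }
              for (; x < x1; ++x)        // scalar tail
                  scanline[x] = colour;
          }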

        • Re: (Score:2, Informative)

          by Anonymous Coward

          You still need an API - which OpenGL provides. On the hardware side of things, few chips actually implement the (idealized) state machine that OpenGL specifies, it's always a driver in between that translates the OpenGL model to the chip model.

        • Re:OpenGL (Score:4, Informative)

          by TheRaven64 ( 641858 ) on Monday August 04, 2008 @11:26AM (#24467369) Journal

          OpenGL is just an abstraction layer. Mesa implements OpenGL entirely in software. Implementing it 'in hardware' doesn't really mean 'in hardware' either, it means implementing it in software for a coprocessor that has an instruction set better suited to graphical operations than the host machine.

          Sure, you could write your own rasteriser for Larrabee, but it wouldn't make sense to do so. If you use an off-the-shelf one then a lot more people are likely to be working on optimising it. And if you're implementing an off-the-shelf rasteriser, then implementing an open specification like OpenGL for the API makes more sense than making everyone learn a new one, and means that there's already a load of code out there that can make use of it.

          • For generalized game engines, it's probably true that writing your own rasterizer is pointless, but there are many more specialized domains that have less general needs, and might well benefit from their own software rasterizer.
        • by mikael ( 484 )

          If you are going to replace the GPU for rendering, then you are going to have to replace the Z-buffer (for depth-testing), triangle rasterisation with hardware texture mapping and fragment shaders, hardware shadow-mapping, and vertex shaders (for character animation). All of this should also work with multi-sampling at HDTV resolutions (2048 x 1500+ pixels). Someone made the observation that the average pixel would be rendered at least seven times, so that would have to be taken into account. The only alterna
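          For what it's worth, the z-buffer item on that list is conceptually tiny in software; the hard part is doing it (and everything else above) fast enough at those resolutions and sample counts. A minimal, hypothetical depth-tested write:

            #include <cstdint>

            // Write a pixel only if it is nearer than what is already stored
            // (smaller depth value = closer, by convention here).
            inline void write_pixel(uint32_t* colour, float* depth, int width,
                                    int x, int y, uint32_t c, float z) {
                const int i = y * width + x;
                if (z < depth[i]) {      // depth test
                    depth[i]  = z;       // depth write
                    colour[i] = c;       // colour write
                }
            }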

          • by Ed Avis ( 5917 )

            I was just prompted by what the other poster mentioned about Quake. The old Quake rendering engine was optimized like crazy for Pentium processors. Larrabee is going to be a phalanx of them (well, not quite the original Pentium, something a bit better). The Quake code might run on it rather well, with each CPU rendering a small tile of the display. Of course, Quake's visual effects are hardly state-of-the-art nowadays, but it would be an interesting hack.

            • by mikael ( 484 )

              I certainly agree. The assembly language used for texture mapping actually took advantage of the separate add/multiply processor units so that the divide-by-z for perspective projection took no extra clock cycles. The geometry for the environment was designed to fit into cache pages. Combined with light-mapping and a complex environment, it was an amazing experience to see.
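              A rough, hypothetical sketch of the trick being described (simplified, and not the actual Quake source): u/z, v/z and 1/z are linear in screen space, so they can be stepped with adds, and an expensive divide recovers z per pixel. Quake did the divide once per 16-pixel block so the FDIV could overlap with integer work on the Pentium's separate units; here it is per pixel for clarity.

                #include <cstdint>

                // Perspective-correct texturing of one horizontal span.
                void texture_span(uint32_t* dst, const uint32_t* tex, int tex_w, int count,
                                  float u_z, float v_z, float inv_z,       // u/z, v/z, 1/z at span start
                                  float du_z, float dv_z, float dinv_z) {  // per-pixel steps
                    for (int i = 0; i < count; ++i) {
                        const float z = 1.0f / inv_z;                  // the expensive divide
                        const int u = static_cast<int>(u_z * z);       // no wrapping/clamping,
                        const int v = static_cast<int>(v_z * z);       // for brevity
                        dst[i] = tex[v * tex_w + u];
                        u_z += du_z;  v_z += dv_z;  inv_z += dinv_z;
                    }
                }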

    • There's a difference between the Quake engine and OpenGL. OpenGL is just a graphics library; it pretty much just outputs primitives.

      The Quake engine manages meshes, does collision detection, handles all the mess of drawing the right textures for the right models, managing lighting etc.

      If there were an OSI model for graphics, OpenGL would be layer 4, and the Quake Engine would be layer 5/6.
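      To make the layering concrete: this is roughly the lowest layer an application touches in (legacy, fixed-function) OpenGL -- one coloured triangle. Everything the Quake engine does (meshes, collision, textures, lighting) sits above calls like these. (Header name varies by platform; this assumes a desktop GL setup.)

        #include <GL/gl.h>

        // Draw one flat-coloured triangle. OpenGL neither knows nor cares whether
        // it belongs to a monster, a wall, or a particle effect.
        void draw_triangle() {
            glBegin(GL_TRIANGLES);
            glColor3f(1.0f, 0.0f, 0.0f);
            glVertex3f(-0.5f, -0.5f, 0.0f);
            glVertex3f( 0.5f, -0.5f, 0.0f);
            glVertex3f( 0.0f,  0.5f, 0.0f);
            glEnd();
        }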

  • "its first-ever forthcoming many-core architecture, codenamed Larrabee" The Core architecture has duos and quads. Nehalem is just about to launch, going up to octocores at least. The point of the article eluded me until I went to Wikipedia and discovered that the Larrabee being talked about is a *GPU* rather than a CPU. Could have used that information somewhere in the original post.
    • Well it's really a little of both. The only piece of hardware on Larrabee which is *definitely* GPU-specific is the fixed-function texture sampling hardware (which implements trilinear sampling, anisotropic filtering, and texture decompression). The rest of it is basically just a multicore CPU, where each CPU has something like an expanded MMX/SSE unit (4x the size, with some new and interesting features), and all the 3D stuff (shaders, z-buffering, clipping, etc) is implemented in software on top of that

  • Bearing in mind all the other promises Intel has made about their previous graphics offerings, I'm rather inclined to think that once again this will underwhelm. Especially considering all the crap that's been coming out of Intel about real-time raytracing. (It's always been just around the corner because rasterisation always gets faster.)

    That's not to say that it's an interesting bit of tech, but from what I've seen so far it looks like the x86 version of Cell. Of course though it's a PC part and won't be

    • by TheRaven64 ( 641858 ) on Monday August 04, 2008 @09:24AM (#24465515) Journal
      I think Larrabee is quite believable. They are quoting performance numbers that make sense and a power consumption of 300W. The only unbelievable idea is that a component that draws 300W is a mass-market part in an era when computers that draw over 100W total are increasingly uncommon and handhelds (including mobile phones) are the majority of all computer sales, with laptops coming in second and desktops third.
      • It isn't nearly as big as a more economical (both in terms of power usage and cost) market, but it is still large. ATi and nVidia are not having problems moving their new high end accelerators that draw hundreds of watts. On the contrary, there were some low stocks of the new GeForces after the prices were dropped.

      • by renoX ( 11677 )

        Uh? Most people don't even know what the power consumption of their desktop is!
        And most of those who care only do so because power --> heat --> fan noise and a big case.

        So I don't think that 300W is an issue (with a heatpipe to avoid fan noise); the software for this kind of new architecture will be the biggest issue: GPUs have evolved 'progressively', providing new features AND good performance for the existing games, and this will be much more difficult for Larrabee.

    • > so far it looks like the x86 version of Cell

      Then you missed the fact that the article says it uses a coherent 2-level cache for inter-core communications; the Cell BE is quite exotic in that it uses DMA transfers and has no memory coherency between the SPEs.

      The article doesn't explicitly state that the Larrabee cores are homogeneous, but I would be surprised if they weren't; the Cell cores are somewhat heterogeneous if you want to use the PowerPC core to squeeze the last drop of processing power out of

      • by 32771 ( 906153 )

        Thanks, you just made me have a look at Almasi and Gottlieb (1994) again; where else would I need it than on Slashdot. Chapter 10, Section 3 and following is good to have a look at.

        Your last statement leaves me puzzled. It seems that you must have something to connect caches and memory with, so we could be content with a simple bus, which A&G describe as existing in Sequent's cost-effective machines.

        When you read further you come across the KSR1 which uses a hierarchical ring architecture, sporting large c

  • There are like a LOT of computers with really good CPUs and really weak video chips, like laptops and Dell computers.

    Why not just do a software-mode driver for them?

    That probably would make the 3D gaming market a bit bigger without forcing people to buy a 3D accelerator card (something that is kinda impossible to do on most laptops).
    • without forcing people to buy a 3D accelerator card (something that is kinda impossible to do on most laptops)

      Who forced you to buy a more 3D-oriented graphics card?

      These days you can pick your system according to your usage: if you want to play a lot of games, do a lot of encoding or work a lot with media, you'll get a more advanced graphics card and are willing to make a bigger investment in that. If you don't, you're perfectly fine with an integrated graphics card. The choice is there, and it's to be made by yo

    • by mdwh2 ( 535323 )

      There are software drivers, but I suspect they'd tend to be far slower than even cheap graphics hardware.

      In fact one of the reasons why Intel integrated graphics are so slow is because they do some things (like vertex shaders) in software.

    • Because software mode is even slower than the crappy Intel chips.
  • ..a, uh, beowulf cluster...I just can't put my heart into it anymore!
    • Is this a new slashdot meme? Never finishing the jokes?

      Ok well, there was an AI bot running on a larrabee, an Irish Priest and a Soviet Russian who walked into a bar. The Irish priest orders a scotch. Then suddenly...

      • ...the priest says with a slur, "In Soviet Russia, Larrabee bot overlords welcome YOU!"

        There, fixed that for you.
  • by Futurepower(R) ( 558542 ) on Monday August 04, 2008 @09:33AM (#24465667) Homepage
    Today at a coder's party we had a discussion about Intel's miserable corporate communications.

    Intel's introduction of "Larrabee" is an example. Where will it be used? Only in high-end gaming computers and graphics workstations? Will Larrabee provide video adapters for mid-range business desktop computers?

    I'm not the only one who thinks Intel has done a terrible job communicating about Larrabee. See the ArsTechnica article, Clearing up the confusion over Intel's Larrabee [arstechnica.com]. Quote: "When Intel's Pat Gelsinger finally acknowledged the existence of Larrabee at last week's IDF, he didn't exactly clear up very much about the project. In fact, some of his comments left close Larrabee-watchers more confused about the scope and nature of the project than ever before."

    The Wikipedia entry about Larrabee [wikipedia.org] is somewhat helpful. But I don't see anything which would help me understand the cost of the low-end Larrabee products.
    • Re: (Score:1, Insightful)

      by Anonymous Coward

      As soon as it actually exists somewhere other than Intel's laboratories, they're usually pretty forthcoming on details (to the point we even have specs on how to use their graphics hardware, which is more than we can say for e.g. nVidia.)

      OTOH, Larrabee is still Labware, and should be thought of as such. Unless you're willing to sign away your life in NDAs, don't expect to know too much yet.

    • According to the wikipedia article, the launch is currently planned for late 2009 or 2010. So it's a good bet that Intel won't have a good idea about what products they will be releasing for another year or so.

  • This only goes to show that the people at Intel really can't count...

    (Firmly tongue in cheek, of course :)

  • by ponos ( 122721 ) on Monday August 04, 2008 @10:11AM (#24466221)

    What most people don't seem to realize is that Larrabee is not about winning the 3d performance crown. Rather, it is an attempt to change the playground: you aren't buying a 3d card for games. You are buying a "PC accelerator" that can do physics, video, 3d sound, Dolby decoding/encoding etc. Instead of just having SSE/MMX on chip, you now get a complete separate chip. AMD and NVIDIA already try to do this with their respective efforts (CUDA etc), but Larrabee will be much more programmable and will really pwn for massively parallel tasks. Furthermore, you can plug in as many Larrabees as you want, no need for SLI/crossfire. You just add cores/chips like we now add memory.

    P.

    • The need for SLI/crossfire is because the bandwidth needed for multiple cards to work on a frame buffer is too high for even the newest PC memory bus.

      Intel's cards are not going to be able to get around this, so we will most likely add a third method of card interconnect to the mess.

    • Larrabee is not about winning the 3d performance crown

      I beg to differ. Nobody will buy Larrabee unless there's software for it, but nobody will write the software until there's a large market of Larrabees out there to run it on. Luckily the entire library of PC games can be made to run on Larrabee via DirectX/OpenGL, and that's Intel's way around this problem. However, in order for this to work the price/performance has to be competitive. Intel's best bet is to use their superior fabrication facilities t

  • Larrabee looks very interesting for scientific computing, but what makes it better for graphics than an ATI/NVIDIA GPU?
    • Its architecture could (potentially) make for better multi-GPU solutions (i.e. with a shared frame buffer across all cores instead of x amount of RAM per GPU), and the use of tile-based rendering has a fair amount of efficiency benefits to make it interesting.

      It's way too early to say whether it'll even be equivalent performance-wise to AMD and NVIDIA's GPU designs in Larrabee's release time frame, and it'll be very dependent on its compiler and drivers, but as a concept right now it's hugely interesting i
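      Regarding the tile-based rendering point above: a crude sketch (hypothetical, not Larrabee's actual binner) of the binning step that makes it work. Each triangle's screen-space bounding box decides which tiles -- and therefore which cores, each owning its tiles' slice of the frame buffer -- will later shade it.

        #include <algorithm>
        #include <vector>

        struct Tri { float x[3], y[3]; };           // screen-space vertices

        constexpr int kTile = 64;                   // pixels per tile edge (assumed)

        // bins[ty * tiles_x + tx] collects indices of triangles touching that tile.
        void bin_triangles(const std::vector<Tri>& tris, int w, int h,
                           std::vector<std::vector<int>>& bins) {
            const int tiles_x = (w + kTile - 1) / kTile;
            const int tiles_y = (h + kTile - 1) / kTile;
            bins.assign(static_cast<size_t>(tiles_x) * tiles_y, {});

            for (int i = 0; i < static_cast<int>(tris.size()); ++i) {
                const Tri& t = tris[i];
                // Bounding box, clamped to the screen.
                const float x0 = std::max(0.0f, std::min({t.x[0], t.x[1], t.x[2]}));
                const float x1 = std::min(float(w - 1), std::max({t.x[0], t.x[1], t.x[2]}));
                const float y0 = std::max(0.0f, std::min({t.y[0], t.y[1], t.y[2]}));
                const float y1 = std::min(float(h - 1), std::max({t.y[0], t.y[1], t.y[2]}));
                if (x0 > x1 || y0 > y1) continue;   // fully off-screen

                for (int ty = int(y0) / kTile; ty <= int(y1) / kTile; ++ty)
                    for (int tx = int(x0) / kTile; tx <= int(x1) / kTile; ++tx)
                        bins[ty * tiles_x + tx].push_back(i);
            }
        }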
    • I'm more of a 'high-end' graphics coder (read: not games). Let's say we want to do some complex soft-body animation. We need to be able to access a coherent data structure that represents the entire geometry mesh to be able to do that. You can't do that on the GPU - triangles and vertices are all you get.

      Let's say we want to use the numerous deformation techniques that do not work by transforming normals via a matrix (i.e. they must re-compute the normals because the deformation is non-linear - FFDs, for exam
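      Concretely, the "re-compute the normals" step is a whole-mesh pass over the deformed positions, which is exactly the kind of thing that wants a coherent view of the geometry. A minimal, hypothetical version:

        #include <cmath>
        #include <cstdint>
        #include <vector>

        struct Vec3 { float x, y, z; };

        static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static Vec3 cross(Vec3 a, Vec3 b) {
            return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
        }

        // Rebuild smooth vertex normals from deformed positions: accumulate
        // (area-weighted) face normals per vertex, then normalize.
        void recompute_normals(const std::vector<Vec3>& pos,
                               const std::vector<uint32_t>& idx,     // 3 indices per triangle
                               std::vector<Vec3>& nrm) {
            nrm.assign(pos.size(), {0.0f, 0.0f, 0.0f});
            for (size_t i = 0; i + 2 < idx.size(); i += 3) {
                const Vec3 n = cross(sub(pos[idx[i + 1]], pos[idx[i]]),
                                     sub(pos[idx[i + 2]], pos[idx[i]]));
                for (int k = 0; k < 3; ++k) {
                    Vec3& v = nrm[idx[i + k]];
                    v.x += n.x; v.y += n.y; v.z += n.z;
                }
            }
            for (Vec3& n : nrm) {
                const float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
                if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
            }
        }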
  • Does this mean that Norton will scan my drive in 3D?

    Seriously, manymanycores architectures are nice for public servers that are coded very well. Potentially able to serve N clients at once, the machines running Larrabees will usually bottleneck somewhere else.

    For the desktop user, manymanycores mean that the main window will move smoothly in the foreground while anti-blackware, indexes and updates consume the background.

    For the power gamer, even manymanycores won't be enough. There's no such thing as
  • But can Intel make good drivers, as their on-board ones suck?

    Their on-board video cards look good on paper but then come in dead last next to NVIDIA and ATI on-board video, and that is without using SidePort RAM. ATI's new on-board video can use SidePort RAM.

  • I live in Larrabee, IA [google.com] and I'm getting a kick out of these replies . . .
    • by 32771 ( 906153 )

      This is remarkable, Larrabee is a fairly low resolution area, what are you guys trying to hide?
      Also notice the rectangular street patterns, uncanny how it resembles a chip layout.

      So are you renaming one of those streets into memory lane, the parking lot in front of the bank into Cache Plaza maybe ...

  • by Vaystrem ( 761 ) on Monday August 04, 2008 @11:36AM (#24467511)

    The AnandTech article is much more detailed than the one linked in the article summary. It can be found here. [anandtech.com]

    • Wow, I always thought Anand was a bit of an Intel fanboy, but does he ever gush over Intel in that one.

      Some fun out of context quotes:

      "adds a level of value to the development community that will absolutely blow away anything NVIDIA or AMD can currently (or will for the foreseeable future) offer."

      "Larrabee could help create a new wellspring of research, experimentation and techniques for real-time graphics, the likes of which have not been seen since the mid-to-late 1990s."

      "Larrabee is stirring up in the Old

  • I wonder what a 486 core would perform like on a modern fab process. After all that chip had a modest I/D cache, single instruction/cycle performance for many instructions, and integrated floating point - all with a tiny transistor budget by modern standards.
    • by faragon ( 789704 )
      In my opinion, very close to any RISC CPU, like back in the day. 486 IPC [wikipedia.org] was about 0.8, similar to contemporaneous MIPS and ARM processors (also in-order execution CPUs).
      • by LarsG ( 31008 )

        What I would like to know is how high you might clock a 486 core built in a modern fab.

        • by faragon ( 789704 )
          It could be very difficult to scale it above 500MHz frequencies, because of clock signal propagation assumptions. More complex processors, to achieve higher frequencies, have to propagate data and clock together, without having an omnipresent "main clock" signal.
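          (Back-of-the-envelope, using the numbers in this sub-thread: at the ~0.8 IPC quoted above, a 486-class core at 500 MHz would peak around 0.8 × 500M ≈ 400 million instructions per second, versus roughly 0.8 × 66M ≈ 53 million for an original 486DX2-66 -- ignoring memory, which would almost certainly dominate in practice.)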
          • by LarsG ( 31008 )

            Yeah, that was sort of what I was expecting. My understanding is (IANAChipDesigner, etc) that to reach high clock rates, you'd need to have deeper pipelines than on the 486. The reason being that each step can then be considered an independent unit with regards to clock propagation.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...