Is SMT In Your Future?
Dean Kent writes "Simultaneous MultiThreading is a method of getting greater throughput from a processor by essentially implementing multi-tasking within a single CPU. Information on Compaq's plans for SMT in the EV8 can be found in this article and this article. Also, there is some speculation that Intel's Foster CPU (based upon the Willamette core) will also have SMT and that the P4 may even have the circuitry already included, as discussed briefly in forums."
Already been done by tera computing! (Score:1)
Re:Just one question... why? (Score:1)
Re:Just one question... why? (Score:1)
With an SMP, each thread has resources dedicated to it: caches, function units, etc. In an SMT system these are shared dynamically across threads. Theoretically, each thread uses just as many resources as it needs for its level of instruction-level parallelism. So instead of each processor using, say, 2 integer units out of an available four, you now have 8 integer units being used at 90% capacity by multiple threads.
Note that these threads need not all be from the same program, either. SMT works great in a multiprogrammed environment.
Due to its capacity for fast context switching, we're going to see some...interesting applications of threading. Check out the MICRO/ISCA/PACT, etc. papers on Dynamic Multithreading, Polypath architectures and Simultaneous Subordinate Microthreading (all of which, BTW, increase the performance of single-threaded applications). Wild stuff is on the horizon.
--
Re:Are compilers amd CPUs really that bad? (Score:1)
This is why we have compilers and hardware. Compilers already do a fair amount of program transformation. Often the programmer, in a quest for "optimization," ends up screwing something up for the compiler, usually by doing "fast" pointer manipulation or using global variables.
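For instance (contrived C of my own, just to illustrate), pointer aliasing alone is enough to tie the compiler's hands:

    /* "Clever" pointer code. Because dst and src may alias, every store
       through dst can potentially change *src, so the compiler must
       re-load *src on each iteration and can't vectorize or reorder. */
    void scale(float *dst, const float *src, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = dst[i] * (*src);   /* *src re-read every time */
    }

    /* With C99 restrict the programmer promises no aliasing, and the
       compiler is free to hoist the load and schedule aggressively. */
    void scale_restrict(float *restrict dst, const float *restrict src, int n)
    {
        float s = *src;                 /* hoisted once */
        for (int i = 0; i < n; i++)
            dst[i] *= s;
    }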
--
Re:Billion Transistor Chips (Score:1)
Do you have a reference for the paper? I know several folks who'd be interested. Where did the 20% improvement of the MSC come from?
I can only assume the MSC is an abbreviation for the MultiScalar architecture (MSC == MultiScalar Computer?) that came out of Wisconsin. Is this correct?
--
Re:Are compilers amd CPUs really that bad? (Score:1)
There are many factors that affect the ILP available in "typical" programs. The two most important limiting factors are the memory subsystem and the branch predictor. Anywhere from 30%-60% (depending on architecture) of your dynamic instructions are memory operations. When these miss in the cache, it takes a long time to service them. This backs everything up in the instruction window. O-O-O tries to get around this by issuing independent instructions to the core. The problem is, either no such instructions are available or they also block on a memory operation.
On the I-fetch side, the branch predictor is responsible for feeding the "right" stuff into the instruction queue. If a prediction is incorrect, the processor generally has to blow away everything in progress and start over on the right path. With the deeper pipelines we're seeing, this is only going to get more expensive. Even a 90% correct predictor incurs a huge penalty because the processor sees a branch every 5-8 instructions or so. The multiplicative factors ensure that accuracy diminishes quickly: staying on the correct path past just ten branches (50-80 instructions) happens only 0.9^10, or about 35%, of the time. No one has yet come up with a good multiway branch predictor.
So on one level, the hardware is to blame because it doesn't work right. Not only do memory and branches choke off the available ILP, the machine can't look "far away" to discover distant parallelism. Instructions after a function call, for example, are often independent of those before the call, but there is no way the processor can fetch that far ahead.
Enter the compiler. It is the compiler's job to schedule instructions such that the processor can "see" the parallelism. Unfortunately, this is very hard to do. Mostly this is due to the static nature of the compiler -- while the compiler can look far ahead (theoretically at the whole program, in fact), it doesn't know what will happen at runtime. The hardware has the advantage of (eventually) knowing the "right" path of execution. A compiler generally cannot schedule instructions above a branch because it doesn't know whether it is valid to execute them. In fact, the validity changes with each dynamic pass through the code.
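A tiny made-up C example of that validity problem:

    /* The load *p doesn't depend on the branch outcome as a value, but
       the compiler still can't schedule it above the branch: when use_p
       is false, p may be invalid and the hoisted load would fault.
       Whether the hoist is "valid" changes with each dynamic execution. */
    int fetch(int *p, int use_p)
    {
        if (use_p)
            return *p;   /* only provably safe on this path */
        return 0;
    }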
We're seeing some of these limitations being lifted with dynamic translation and recompilation. Unfortunately, you're now saddling the compiler with the limitations of the hardware: limited lookahead. It is too expensive to do a "really good" job of optimization at runtime. Still, there is some improvement to be had here.
To sum up, the blame lies neither solely with the hardware nor with the software. There is a complex interplay here that is only now beginning to be understood.
--
Re:Are compilers amd CPUs really that bad? (Score:1)
--
Re:IBM's SMT (Score:1)
Perhaps he's referring to the G5 processor? IBM's POWER line (S390, etc.) has used multiple execution cores for a while, but not for throughput. They use them for verification and reliability. One core checks the other and if one fails, the processor shuts down and its work is transferred to another node in the SMP system.
--
Re:Register freeing (Score:1)
Reference? I don't recall reading anything about "register freeing" instructions wrt. SMT. A compiler "releases" a register by redefining its value. They've done that for years. :)
It's true that a machine must hold onto a physical register until a redefinition of the corresponding logical register is committed, but this isn't a problem in "traditional" O-O-O architectures, where the number of physical registers is adjusted to eliminate any difficulties this might imply. Register-caching architectures need to worry about stuff like this, but it has nothing to do with SMT per se.
--
Re:Similar complexity, better effects (Score:1)
An SMT doesn't really "context switch" in the traditional sense of the word. The reason it is "simultaneous" is that all threads are executing at the same time, like an SMP. It only "context switches" in the sense of fetching "more" from whatever threads are not currently blocked (or predicted to be blocked).
An MTA (multi-threaded architecture, of which SMT is a variation) really does "context switch," but it is fast enough that it can be done every few instructions. This is a very fine grain of threading, more akin to instruction-level parallelism than SMP.
With an SMT/MTA, the job of the OS (as on an SMP) is to schedule runnable threads on the processor. Beyond that the hardware takes care of deciding what to fetch and from where.
--
Re:Intel IXP1200 already does this (kinda) (Score:1)
Similar complexity, better effects (Score:1)
The first thing you do is get the OS out of the way by making the processor able to handle multiple threads at once without involving OS code in context switching. This is a definite win, because context switches done by the OS kill the pipeline and often kill the cache. If the processor is in charge of all of this, it can do better, because it is closer to what is going on.
After that, the only innovation is to use the out-of-order-execution support on the whole set of things the processor is doing, instead of just one thread.
Considering what they're already doing, this isn't a lot of new complexity, but it should help performance significantly. The main problem with out-of-order stuff has generally been that there isn't really all that much implicit parallelism within a thread. Adding other threads which do not, in general, have any dependencies gives the instruction scheduler much more to work with.
Re:Linux has been running on Alpha for quite a whi (Score:1)
The Transmeta Crusoe?
Re:250Watts!!!! (Score:1)
Absolute balderdash. In your data centre, power usage is planned at every level:
Then you have to make sure that the peak power drawn by distribution boxes attached to one of your UPS line doesn't exceed what it is rated for.
Then you have to make sure the total peak draw on your UPS is below its spec.
Then you have to make sure you have enough mains 3 phase to feed your UPS at peak output.
Then you have to make sure that your generators can supply the UPS at peak.
So if equipment such as servers were to suddenly jump in power requirement from ~700W peak to maybe 2.5kW peak, this would poke a huge hole in your power planning.
Re:IBM's SMT (Score:1)
Actually, the AS/400 is just a new generation of the S/38, which was 64-bit back at least as far as the early 80s.
Re:Just one question... why? (Score:1)
The problem with 8 CPUs on one die is that if one CPU has one tiny flaw, you have to chuck the whole die.
WRONG!
With 8 CPUs on a die, you can deactivate the damaged CPU. This is often done with level 2 cache and I believe it's even done with memory. Sell it as a 7 CPU chip? 4?
Obviously this can't be done with LCD displays since you can see which pixels would be deactivated.
Re:Are compilers amd CPUs really that bad? (Score:1)
more polygons per frame, of course... and higher resolution images.... hardware NURBS.... real-time ray tracing.... don't worry, there are plenty of ways to use the extra CPU cycles!
A couple of problems: Congress, and the INS (Score:1)
Re:Ripped off Ars Technica lately? (Score:1)
Re:Time to kick a friend's ass (Score:1)
True, although register renaming eliminates some of the "data-dependency" problem he mentioned.
Re:IBM's SMT isn't SMT its vertical threading. (Score:1)
The deal is that the processor executes instructions from one stream until it takes a cache miss, at which point it pays a small context-switch overhead (something like 6 cycles) and starts executing another thread until that thread takes a cache miss. IBM's implementation is more a way of hiding memory latency than a way to increase IPC. Of course, hiding memory latency this way causes the average system-wide IPC to go up, but it also slows down the thread-specific IPC. There are also issues with the TLB, cache, etc. being shared between both threads, which can cause TLB thrashing -- a nasty performance problem. The result is that the performance improvement isn't nearly what the theory says it can be, but it is a nice, cheap way to gain a modest yet significant performance improvement.
Re:IBM's SMT (Score:1)
Ripped off Ars Technica lately? (Score:1)
You think that's fast? (Score:1)
No. (Score:1)
Re:250Watts!!!! (Score:1)
You could heat a swimming pool with that.
Actually, 2kW is just the power consumption of one or two cooking plates. So we can expect that the properly networked kitchen is only a short time away, although it probably won't be the fridge which has the most computing power...
Finally, a chip you can *really* fry eggs on! :-) (Score:1)
Let's take a look at the projected power consumption of the 21464/EV8:
250W / 1.2V = 208.3 A
I wonder how they're going to push that kind of current through the processor core, and how they want to cool that baby. Consider that the die size has only increased by a factor of 1.5 since the 21064, while the power consumption has increased by a factor of 8.3 -- so the cooling will have to be 5.6 times as efficient as for the 21064.
Will the 21464 have to be submerged in liquid nitrogen to avoid death by spontaneous evaporation?
Time to kick a friend's ass (Score:1)
Re:Billion Transistor Chips (Score:1)
Actually, I believe you're thinking of VLIW (aka EPIC) architectures. They need special compilers because parallelism is expressed in the instruction stream itself rather than discovered by the scheduling logic in the CPU. (Essentially, the compiler does some or all of the work of instruction scheduling.)
SMT does not have this problem. It needs support in the OS (to present OS-level processes/threads as CPU-level threads), not the compiler.
[BTW, I work for Compaq on the Araña (aka EV8) project that this article is about.]
Re:Finally, a chip you can *really* fry eggs on! : (Score:1)
I work for Compaq on the Araña (aka EV8) project that this article is about.
Floating around our offices there's this comic somebody in the Alpha group drew years ago about the somewhat laughable concept of an Alpha-based portable. It shows the "Alpha notebook/backyard barbecue" crunching numbers and grilling steaks at the same time!
[There actually was an Alpha notebook back in '96 [unixpac.com.au], but AFAIK there wasn't much demand for it and it died fairly quickly. Who would want to run VMS on a laptop anyway?]
Future Architectures (Score:1)
The major microprocessor developers are all pursuing one of the following architectural paths: SMT, VLIW, or MCU.
The reason for pursuing these is really a matter of differences in underlying philosophy. SMT is based on the philosophy that throughput is more important than single-stream performance. VLIW is based on the belief that single-stream performance is most important. MCU is based on the notion that time-to-market is key.
Re:Billion Transistor Chips (Score:1)
Ryan Earl
Student of Computer Science
University of Texas
Here's the paper! (Score:1)
"Billion Transistor Architectures" in PDF format [utexas.edu].
And here's his homepage [utexas.edu] with other articles you should find interesting. He's the hauss; one of the best professors I've ever had the pleasure of taking a class from. The architecture is called CMT = Chip MultiProcessor.
Ryan Earl
Student of Computer Science
University of Texas
A couple of points about the PowerPC Northstar (Score:2)
In part two, Paul DeMone states:
The IBM PowerPC RS64, also known as Northstar, is rumored to incorporate two way coarse grained multithreading capability, although it is not utilized in some product lines.
The Northstar processor does in fact incorporate CMT. According to http://www.as400.ibm.com/beyondtech/arch_nstar_perf.htm, "Emphasis was placed on ... increasing the processor utilization by alternately executing two independent instruction streams." A similar page targeted at the RS/6000 audience (http://www.rs6000.ibm.com/resource/technology/nstar.html) makes no mention of multithreading, so it appears that this feature is not used in that product line.
Also, "Northstar" refers to the A50 (AS/400) / RS64-II (RS/6000) processor. An older processor, the A35/RS64-I "Apache" did not have hardware multithreading.
SMT is the only way to scale performance (Score:2)
The problem with processors today is that they stall, often for hundreds (or thousands) of CPU cycles waiting for memory (or device registers). Adding more processors helps some, as other programs can run on the other processors, but they can also stall for hundreds of cycles on every cache miss.
What SMT does is allow another thread to use the execution units on THIS processor during those hundreds or thousands of cycles that the processor is waiting on a cache line. SMT is an enormous win, especially for programs that have forced cache misses (such as reading memory that was DMAed from a device, as is the case for network packets and disk blocks). Forced cache misses are why OS kernels don't scale with the hardware as much as applications do. They are also the reason larger caches don't continue to buy more performance. (They also occur on SMPs when different processors modify the same cache line.)
There are downsides to SMT, of course. First, it increases cache pressure (more working sets need to be kept in cache at a time). However, the larger caches now offer more benefit. And even if the cache isn't large enough, SMT allows the processor to maximally utilize it. Second, SMT increases the processor's complexity. Fortunately, most of the circuitry required for SMT is already required for out-of-order execution. Interestingly, that circuitry was removed from Intel's IA64 processors. Third, since the processors can't issue enough simultaneous bus transactions now, they would be further starved with SMT. This can be solved by increasing the number of outstanding requests that can be supported (which would be done for any reasonable SMT implementation).
Explicit prefetching can also be beneficial at improving the execution of a single thread, but current processors do not allow enough outstanding memory transactions (typically 4-8: 4 on Intel, 8 on some RISCs) for prefetching to be very useful. Rambus memory is so "slow" on x86 because the processor can't issue enough requests to the memory controller to keep the pipeline full. Think of the memory bus as a highway: we can engineer it to have more lanes, or increase the speed limit, but we need to put more "cars" on the road to see the performance advantage. Being stuck with 4 is pathetic.
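For the curious, here's roughly what explicit prefetching looks like in C, using GCC's __builtin_prefetch extension. This is just a sketch -- the prefetch distance is a tuning guess, and (my point above) the CPU still caps how many misses can actually be outstanding:

    #define AHEAD 8   /* how many elements ahead to prefetch: a guess */

    long sum(const long *a, int n)
    {
        long s = 0;
        for (int i = 0; i < n; i++) {
            if (i + AHEAD < n)
                __builtin_prefetch(&a[i + AHEAD], 0, 0);  /* read, low locality */
            s += a[i];
        }
        return s;
    }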
I hope the Alpha kicks some serious butt.
Been hearing about it for a while (Score:2)
Re:Billion Transistor Chips (Score:2)
Who says the threads all need to be from the same context? I asked a Compaq guy about that 14 months ago and he said they could be from different VM contexts, and (this was a surprise to me) loads/stores in different MMU contexts can be done in one cycle.
I don't recall if he was on the EV8 team, but he was in their CPU design department (his focus was on heat though).
Re:Billion Transistor Chips (Score:2)
My guess is that the paper and Compaq have slightly different definitions of SMT. I assume the paper chose one the author thought was interesting, or easy to evaluate, or easy to implement, or most constructive to evaluate, and Compaq chose one that would give good value for the design and transistor investment.
Given that very little existing software could use all-in-one-MMU-context SMT (multithreaded programs only, and only CPU-bound ones would take much advantage), while pretty much any CPU-bound server workload (anything with more than one process) could take advantage of SMT with multiple MMU contexts.... of course that assumes the implementation cost isn't too horrific, but given that they picked it...
Anyway, if you do get another copy of the paper I would love to see it. Even if it doesn't exactly address Compaq's SMT, it sounds interesting. I can't find it with Google, but maybe if you remember the title of the paper, or any authors other than Prof Berger?
Re:Just one question... why? (Score:2)
Thanks
Bruce
Re:Just one question... why? (Score:2)
Re:Just one question... why? (Score:2)
This could result in an extremely fast, relatively cheap SMP machine. Yields would actually be much higher than for normal CPUs, even though the chips would be bigger.
Each CPU could have its own level 1 cache, but they could share a big level 2 cache, and all the inter-CPU communication would be on the single chunk of silicon -- very fast!
Hmmm. How about a Beowulf cluster of those! (duck).
Torrey Hoffman (Azog)
Re:Just one question... why? (Score:2)
-B
Re:250Watts!!!! (Score:2)
Re:Intel IXP1200 already does this (kinda) (Score:2)
One cannot compare a network processor to a general purpose processor since all of the NPs I've looked at are very specific to one application, networking.
For example, one could not run SETI@Home calculations on a network processor, nor run Linux on it, as their memory architectures are often limited, with most of the program memory residing on-chip and/or in very fast SRAM. Right now the largest high-speed SRAM chip available is around 4MB. It becomes impractical to add more than 16 MB of SRAM due to loading of the bus (assuming 64-bit). At 166MHz it is even worse.
As for multiple contexts, many of the network processors can switch between contexts very quickly, but also remember that NP cores do not have many of the things a general purpose processor has. There's no paging or fancy memory management, nor is there floating point.
Re:How is this different from Tera MTA ? (Score:2)
Isn't the whole point of SMT specifically because there usually isn't enough ILP in a single thread to keep the CPU busy...so you expose additional thread-level parallelism to the out-of-order execution engine to hopefully keep things humming?
If you think SMT is different, please explain!
Are compilers amd CPUs really that bad? (Score:2)
Re:Are compilers amd CPUs really that bad? (Score:2)
Re:Are compilers amd CPUs really that bad? (Score:2)
Do you have any idea if this type of code rewriting/reordering can actually be effective?
Re:Time to kick a friend's ass (Score:2)
5 years ago roughly what you describe was already being implemented by someone as a research project. So I don't know why you were pissed, it's been a known idea for quite a while.
Re:Because .... (Score:2)
Latency to main memory is only one of many problems you're trying to solve. The Tera MTA solves only that problem; SMT solves more.
Re:Intel IXP1200 already does this (kinda) (Score:2)
From reading the datasheet, the IXP1200 has nothing to do with SMT.
And you don't need a new benchmark for multi-threaded processors; current benchmarks generally cover both single process performance and workloads. For workstations, these benchmarks are SPEC2000 and SPECrate2000...
Re:How is this different from Tera MTA ? (Score:2)
If you think it looks cool, you're the only person on the planet who does. It made me retch the first time I saw it.
Re:Just one question... why? (Score:2)
There are many real workloads that can use more than 4 units. Many numerical programs, for example. SMT is a great way of running these programs really fast while doing as well as several current CPUs on workloads which can't use as many functional units.
Re:Time to kick a friend's ass (Score:2)
Re:Are compilers amd CPUs really that bad? (Score:2)
The other reason is that to tolerate high memory latencies and other delays while keeping the processor busy, you need a really big instruction window. But that is very expensive to build and doesn't really scale, and furthermore requires lots of very accurate branch prediction; but programs have a certain amount of inherent unpredictability.
Re:Similar complexity, better effects (Score:2)
Re:Already been done by tera computing! (Score:2)
Register freeing (Score:2)
Re:Similar complexity, better effects (Score:2)
PS, I was an intern at DEC SRC while Eggers was there on sabbatical helping design the EV8.
Re:IBM's SMT (Score:2)
He interned at IBM and is working on speculative multithreading with Todd Mowry.
Re:Register freeing (Score:2)
However, it does seem logical to me that it could be a problem. Imagine you have two SMT threads, one which wants lots of physical registers and one which wants only a few. You'd have to tie up a bunch of physical registers to back up all the logical registers for the second thread, even though they're not really needed. How much more elegant to have the compiler insert instructions to say "I'm not going to need the value in this register".
Re:Register freeing (Score:2)
SMT research at the University of Washington CSE (Score:2)
A link to his SMT page is here: http://www.cs.washington.edu/research/smt [washington.edu]
Since I'm not really qualified to say much about SMT, I recommend that those who are interested visit the link above and read some of the research. I attended Prof. Levy's lectures on SMT and it sounded very interesting.
One very interesting note I'd like to make is that SMT is a way of keeping today's superscalar out-of-order architecture and pumping it up with the benefits of running multiple threads without a context switch. VLIW machines rely on the COMPILER to organize and arrange machine code to take advantage of the parallelism inside the VLIW architecture. Of course, the problem with VLIW is that you live and die by the compiler. Not only that, but because the scheduling is static for VLIW, subtle changes in the architecture could result in the code no longer being optimally scheduled.
SMT allows the processor to execute multiple threads "simultaneously" (i.e. without requiring a context switch). You get maximum utilization of your functional units because a math-hungry thread can run alongside a "light" thread. As others have pointed out, this helps increase utilization especially with today's long cache-miss latencies. And, because the processor does this dynamically, you can achieve close to optimal utilization across different running scenarios, and across multiple iterations of the architecture.
Please correct me if I made mistakes, either through mis-understanding or lack of proof-reading.
Re:Time to kick a friend's ass (Score:2)
Having no data dependence isn't necessarily a good thing - it tends to lead to needing caches and TLBs that are twice as big, or to having the existing caches/TLBs thrash. Some SMT schemes assume compilers that do things like generate speculative threads and share data and address mappings closely in order not to choke.
Re:Multiple prefetch queues? (Score:2)
Ummm ... maybe, maybe not .... in an out-of-order, register-renaming CPU like an Athlon/Pentium/etc., 'pipelines' are pretty amorphous. Apart from the prefetch, there's basically just a bunch of instructions waiting for chances to get done - you may even have speculatively gone down both sides of a conditional branch, intending to toss some of them once the branch is resolved (or even have speculatively guessed at the result of a load and gone down that path ....). Expanding this to SMT is a pretty simple process - you just expand the size of the 'name' that you rename things to, and tag loads/stores to use a particular TLB mapping.
Now ifetch (and as a result decode) is a harder problem - ports into icaches are expensive, though running 4 caches with associated decoders is possible. But remember, the idea here is to use existing hardware that's unused some portion of the time - not to make the whole design 4 times larger. So more likely you're going to do something like provide some back pressure to the decoder logic, giving information about how many micro-ops are waiting for each thread, and use that to interleave fetch and decode from the various threads.
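That back-pressure idea is basically the ICOUNT fetch heuristic from the academic SMT papers. A rough sketch in C (my pseudo-hardware, nobody's shipping design):

    #include <stdbool.h>

    /* Each cycle, fetch from the runnable thread with the fewest
       micro-ops already queued, so no thread hogs the front end and
       stalled threads get out of the way. */
    int pick_fetch_thread(const int uops_waiting[], const bool can_fetch[], int nthreads)
    {
        int best = -1;
        for (int t = 0; t < nthreads; t++)
            if (can_fetch[t] && (best < 0 || uops_waiting[t] < uops_waiting[best]))
                best = t;
        return best;   /* -1: no thread can fetch this cycle */
    }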
Now IMHO the conditions that make SMT viable are somewhat transient - it may make sense for a particular architecture one year, and maybe not the next; it depends on a confluence of technologies. (For example, I still think the RISC to CISC transition made sense mostly because of the level of integration available at the time and the sudden speed-up of ifetch bandwidth over the core.) Apart from the supercomputer (everything's a memory access) crowd, SMT may be a passing fad - not worth breaking your ISA for, or creating a new one with SMT as its raison d'etre (ie add a few primitives, don't go crazy).
(note to patent lawyers - I'm "skilled in the art" I find all the above is obvious)
Re:Just one question... why? (Score:2)
Anyways, now, to answer your valid concern:
From an adoption standpoint (i.e. how well your CPU will sell), putting 8 or more CPUs on one die isn't a great thing. How many operating systems do you know that run well on 8 or more processors? However, almost every OS today uses multiple threads/processes, which will benefit from this architecture.
Of course, we're talking about an Alpha here, which basically runs Unix (and the various flavours thereof). When Linux gets ported to this processor, I imagine it'll perform stellarly. That's why I want Linux to succeed, actually.
Dave
Barclay family motto:
Aut agere aut mori.
(Either action or death.)
Re:Linux has been running on Alpha for quite a whi (Score:2)
There is also going to be some needed kernel support too. Since the threads need to be distinguishable to the EV8, the kernel will have to name them (looks like two bits would do it, and that makes sense to me; but maybe they'll use just one, or three or four for some headroom).
Actually, you're right, I was speaking specifically of this hypothetical EV8; but only because I imagine it'll be a while before it comes out. In that time, I hope Linux becomes mainstream enough that some nice high-quality (read: nice graphics [not necessarily 3D; Diablo was great]) games will be ported to or written for it. That way, with a nicely updated GCC and an EV8-aware Linux kernel, those games would just scream.
I haven't played a real game in about a year now - it's all old hat. There's just not a whole lot more you can do with today's processors. I hope the EV8 inspires someone.
Dave
Barclay family motto:
Aut agere aut mori.
(Either action or death.)
Re:Ripped off Ars Technica lately? (Score:2)
Re:Ripped off Ars Technica lately? (Score:2)
Re:Just one question... why? (Score:2)
Multiple prefetch queues? (Score:2)
Not knowing anything about modern processor design, maybe this is naive, but...
If the processor itself is dealing with thread-local state, wouldn't you include more than one prefetch queue/pipeline, and match available pipelines to working threads just like any other register set or other thread-local stuff?
"In this thread, I am executing clock two of a floating pt divide, have this suite of values in the registers, and have prefetched sixty probable future instructions." Multiply by four for a 4xSMT processor.
It's still using a single fetch mechanism to feed stuff to the pipelines, but one stalled thread doesn't waste the whole clock cycle that could have been spent fetching for another thread's pipeline.
Like I said, I once was able to grok the actual transistor design of a 6502, but nothing more modern than that.
Re:Bugs? (Score:2)
Re:Just one question... why? (Score:2)
Crossbars are for anywhere you have more than 1 source and/or more than one destination and you want to have multiple flows going at the same time -- imagine a bus architecture as an old thinwire ethernet and a crossbar architecture as an ethernet switch.
That is, CPU A can be fetching from memory bank 2 at the same time as CPU C is writing to bank 3:
A--[_4x4__]--0
B--[_cross]--1
C--[__bar_]--2
D--[switch]--3
Re:Just one question... why? (Score:2)
It's not quite a symmetrical multiprocessor (Score:2)
The cache bottleneck strikes again. The thing is intended to support multiple threads in the same address space. If the threads are in different address spaces, or doing drastically different things, the load on the cache goes up, apparently to unacceptable levels. This has a number of implications, most of them bad.
The obvious application, just running lots of processes as if it were a big symmetrical multiprocessor, isn't what this is optimized for, apparently. What these people seem to have in mind are multithreaded applications where all the threads are doing roughly the same thing. SGI used to have a parallelizing compiler for their multiprocessor MIPS machines, so you could run one application faster, and this machine would be ideal for that. But that's a feature for the large-scale number-crunching community, and you just can't sell that many machines to that crowd.
Graphics code would benefit, but the graphics pipeline is moving to special-purpose hardware (mostly from nVidia) which has much higher performance on that specific problem.
I think if this is to be useful, it has to be as general-purpose as a shared-memory multiprocessor. Historically, parallel machines less general than an SMP machine have been market flops. This idea may fly if they can squeeze in enough cache to remove the single-address-space restriction. Then it makes sense for big server farms.
Re:How is this different from Tera MTA ? (Score:2)
SDSC has one which is the coolest looking computer in the bunch - blue and kinda wavy shaped. And we all know that that's what really matters most.
Re:Just one question... why? (Score:2)
--ricardo
Threat? (Score:2)
Alpha EV8 (Part 1): Simultaneous Multi-Threat
I thought the threat part was Microsoft's job?
IBM AS/400 has "HMT" (Hardware Multithreading) (Score:2)
The CPU has two register files, each of which is called a "thread." All of the architected registers are duplicated, so it is like having two processes/executable units cached on the processor. When one is executing, if it stalls on memory the processor context switches to the other. The context switch is a hard boundary - only one thread can execute at a time.
This isn't as fine grained as SMT, but it is easy to implement and it provides pretty good bang for the buck. It improves throughput, not speed. The throughput improves because the processor can try to do more work on the other thread while the first one is stalled. Some deadlock prevention stuff is thrown in, as well as some operating system tweaks to make it run better.
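A toy model of that behaviour, if it helps (hypothetical simulator code, not IBM's design; the 6-cycle switch cost is the figure quoted elsewhere in this thread):

    /* Run one thread until it misses in the cache, pay a fixed switch
       cost, then run the other. Only one thread executes at a time --
       that's the "hard boundary". */
    enum result { OK, CACHE_MISS };
    #define SWITCH_PENALTY 6           /* cycles */

    struct cpu { long cycles; };

    /* Stub standing in for "execute one instruction from this thread". */
    static enum result step(struct cpu *c, int thread)
    {
        (void)c; (void)thread;
        return OK;
    }

    static void run(struct cpu *cpu, long budget)
    {
        int active = 0;                /* which register file is live */
        while (cpu->cycles < budget) {
            cpu->cycles++;
            if (step(cpu, active) == CACHE_MISS) {
                active ^= 1;           /* hard switch to the other thread */
                cpu->cycles += SWITCH_PENALTY;
            }
        }
    }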
There is a published paper - it's in the IEEE archives, from 1997 or 1998.
This is relevant because it's been out for a few years (1998), it's commercially available, and thousands of AS/400 customers are using it today. (And it works so well, most don't even know that it is there.)
Re:250Watts!!!! (Score:2)
That's stupid, troll (Score:2)
Re:Just one question... why? (Score:2)
Intel IXP1200 already does this (kinda) (Score:3)
Sounds like it may be time to make a new benchmark to cover multi-threaded processors.
Re:Just one question... why? (Score:3)
Just because benchmarks are single threaded doesn't mean that they can't benefit from multiple execution units. Typical chips today (Pentium III, Alpha) have a lot more than 1 execution unit, and get a benefit from it most of the time.
The benefit of SMT over N smaller CPUs is flexibility: a program that can use the entire chip at once is damn fast, or several programs can share it.
IBM's SMT (Score:3)
Working on an architecture that does this... (Score:3)
We have a simulator that can be set to simulate a processor with any number of closely coupled cores, and any number of threads per core. We get good results at an 8 core * 4 threads setup (total up to 32-way parallel).
Using some basic automatic parallelization on a piece of code designed to run in a single thread, we have generated up to a 26X speedup, 8 core * 4 threads versus 1 core * 1 thread.
The advantage of SMT over a normal processor is that it makes use of clock cycles that would otherwise be wasted, eg waiting for the cache to fill. If your architecture spends half of its time stalled, and you can make use of those cycles by adding SMT, then you can increase your processor performance very efficiently.
SMT basically requires you to duplicate all of the processor's registers n times (n = #threads), + a little extra hardware ('little', relative to duplicating the entire core). So for ((1 * core) + (2 * registers) + SMT hardware) you are getting the performance of ((2 * core) + (2 * registers)). Good bang per buck ratio, when you count up the transistors.
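In C-struct terms, the bookkeeping looks something like this (a simulator-style sketch with made-up sizes, not any real chip): only the architected state is replicated per thread, while the expensive parts of the core are shared.

    #include <stdint.h>

    #define NTHREADS   4     /* hardware threads (n) */
    #define NARCHREGS 32     /* architected registers per thread */
    #define NPHYSREGS 256    /* one big shared physical register file */

    struct hw_thread {                       /* duplicated n times */
        uint64_t pc;
        int rename_map[NARCHREGS];           /* architected -> physical reg */
    };

    struct smt_core {
        struct hw_thread thread[NTHREADS];   /* the "(n * registers)" term */
        uint64_t phys_reg[NPHYSREGS];        /* shared */
        /* ... shared caches, function units, issue queues: the "1 * core" ... */
    };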
But SMT naturally gives you diminishing returns for each thread you add - the whole point is that each new thread is using up wasted cycles - and once you reach ~4 threads there are very few cycles left over. At this point, if you have room left over on the die, you may as well start thinking about SMP on the same die.
Surprised the article didn't mention SMT & AMD. Check out this link [chip-architect.com].
Re:Just one question... why? (Score:3)
Sharing the cache is hard
Cache is the vast majority of chip area in a modern processor; as others have pointed out, it's obvious that multiple processors should share a cache. However, this is difficult. The problem is that every load/store unit from every processor must share the same cache bandwidth.
Thus, for a 2-way chip with only a shared cache, per-processor bandwidth to the cache---the best possible case---is cut in half.
We can work around this by using various tricks up to and including multiported caches---but most of these tricks increase latency (lowering maximum clock speed) or require much more circuitry in the caches (we were sharing the cache because it was so big, remember?).
It makes much more sense to share the circuitry that feeds into the cache.
Those are the superscalar execution units! Thus, SMT.
Utilization
Instead of keeping half the execution units busy, we attempt to keep them all busy. Extrapolating very roughly from Figure 2 [realworldtech.com], we can expect to issue about half as many instructions as we have issue slots (actually fewer if we have a lot of execution units). The basic idea is that we can cut the number of empty issue slots in half each time we add a new thread: if one thread fills ~50% of the slots, a second brings that to ~75%, and a third to ~87%. Further, instructions from separate threads do not need to be checked against each other for resource overlaps---and that checking circuitry is the main source of complexity in a modern processor.
What's happening now has been predicted for a long time. The extra resources (a bigger register set, TLB, extra fetch units) required for multithreading are now cheaper than the extra resources you'd need (mostly pipeline overlap logic) to get a similar increase in single-threaded performance.
SMT easier than SMP?
Moving thread parallelism into the processor is actually easier for the compiler and programmer; the weak memory models implied by cache coherence models aren't an issue when threads share exactly the same memory subsystem.
To get an idea for how hard it is to really understand weak memory models, consider Java (which actually tries to explain the problem to programmers---in every other language you're on your own). Numerous examples of code in the JDK and elsewhere contain an idiom---double-checked locking---which is wrong [umd.edu] on weakly-ordered architectures. What's this mean? Your "portable" Java code will break mysteriously when you move it to a fast SMP. Alternatively, you will need to run your code in a special "cripple mode" which is extremely slow.
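For anyone who hasn't seen it, here's the idiom transliterated into C (the Java version has the same shape). This is the BROKEN form, shown only to illustrate the trap:

    #include <pthread.h>
    #include <stdlib.h>

    /* On a weakly-ordered SMP the store to "instance" can become visible
       to another CPU before the stores initializing the object, so a
       second thread may see a non-NULL pointer to garbage. */
    struct singleton { int value; };

    static struct singleton *instance;   /* read without any memory barrier */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    struct singleton *get_instance(void)
    {
        if (instance == NULL) {           /* 1st check: no lock, no ordering */
            pthread_mutex_lock(&lock);
            if (instance == NULL) {       /* 2nd check, under the lock */
                struct singleton *tmp = malloc(sizeof *tmp);
                tmp->value = 42;          /* initialize ... */
                instance = tmp;           /* ... then publish: may be reordered! */
            }
            pthread_mutex_unlock(&lock);
        }
        return instance;                  /* may point at uninitialized memory */
    }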
From a programmer's perspective, SMT (as opposed to SMP) architectures will be a godsend.
Just one question... why? (Score:3)
How is this different from Tera MTA ? (Score:3)
Tera's home-brew supercomputers used what they called the Tera MTA - multi-threaded architecture processors. You could get a 4-processor MTA machine that would significantly outperform much larger supercomputers.
Essentially the MTA cpu has knowledge of 128 virtual threads of execution inside of it. AFAICT, the point of the MTA design, and apparently of this one, is to minimize the penalty for branches, context switches, etc, wherever possible by putting fine grained execution knowledge in the CPU itself.
Given that superscaling has reached its limit and superpipelining is getting nastier and nastier, this might be a good way to go. Apparently Tera gets great numbers with their MTA stuff.
Other similar concepts (Score:3)
The Denelcor HEP. Only a few were made, and this dates way back to 1985, but it was a really neat multi-threaded CPU. It ran a variety of Unix, and had some reasonable extensions to adapt Fortran (even now probably the most popular number crunching language) to the multi-threaded CPU world.
The Alewife project at MIT. A variety of interesting ideas. Nothing much beyond the prototypes was ever finished, to my knowledge. The concepts of operation are fun to examine.
These are an interesting complement to the SMP approach.
250Watts!!!! (Score:3)
--------
Make something idiot proof and someone will make a better idiot.
Re:Just one question... why? (Score:3)
So DEC's idea was, hell, grab some work from some other thread and do that.
Pretty cool, IMO.
Re:Just one question... why? (Score:3)
So that brings us to a second reason. Wide-issue superscalar processors end up using very little of that issue width most of the time. You just can't get enough parallelism out of single threaded applications. SMT offers the ability to use that wasted issue width by scheduling different threads onto the wasted functional units.
A third benefit to SMT is that it drives the industry in the right direction. Writing code to take advantage of SMT is basically the same as for SMP. You want to find ways to break your application into separate threads. If SMT becomes a common feature on CPUs, then perhaps we'll have lots more SMP-favorable code. There will also be greater incentive to write efficient parallelizing compilers. CMP is more efficient than SMT for high levels of parallelism, so in the future people will probably be moving from SMT to CMP.
It's true that SMT doesn't help with standard single-threaded benchmarks. That's probably what's delayed the industry in adopting it. But the industry is finding out that it's running out of ways to speed up processors. Increasing the clock rate isn't enough because memory latency becomes a greater bottleneck. So increased parallelism becomes more and more crucial.
Yes, definitely... (Score:3)
Steven
Re:Finally, a chip you can *really* fry eggs on! : (Score:3)
" ... grandmas want them! college kids want them! ... " etc.
Re:Time to kick a friend's ass (Score:4)
Actually, that's pretty much what the Pentium Pro (and hence the P2, P3, Celeron, Celeron2 and the P4) does - only there it's done using "virtual registers", which means that the register "eax" can map to a completely different physical register if the instruction scheduler needs it to.
For example, you could write your code like this:
mov ebx, Pointer        ; load the base pointer
movzx ecx, word [ebx]   ; load a 16-bit offset, zero-extended (a bare cx can't index in 32-bit addressing)
mov eax, [ebx+ecx]      ; load the value at base + offset
mov Pointer2, eax       ; store the result
(now I'm pretty sure that's not the best way to do it - it's just an example, ok?)
Now, if you have another multi-instruction operation after this and it's going to use any of the registers used above, the CPU will see in the decoding phase that "a-ha! eax has received a value that doesn't depend on its old contents (i.e. a completely new value)" and will assign a different physical register to "eax" until it's overwritten again. (This is also the reason why xor reg,reg is not the preferred way of clearing a register on the PPro and up.) Same for ebx and ecx and the other regs. By the time the CPU is finished decoding these instructions (this would take 1 and 1/3 cycles for the PPro through P3 and 1 cycle for the P4, due to the 4-1-1-1 decoders), the reorder buffer (which receives the decoded instructions, also called micro-ops or uops) will have been filled up with previously decoded instructions and will be able to put as many uops into the execution "ports" as possible (3 per cycle on the PPro through P3; not sure about the P4).
This, of course, assumes that the code is organised so that the decoders can feed the reorder buffer with more than 3 micro-ops per decoding cycle, so that there's something to reorder. But this will, for the most part, take care of that data-dependency problem.
Personally, I prefer explicit register setting (a la PowerPC, 32 int regs + 32 fp regs) so that the CPU won't have to schedule instructions for me...
(all this information, except for the p4 decoder uop-max series, comes from the excellent pentopt.txt [agner.org] file.)
Re:How is this different from Tera MTA ? (Score:4)
The Tera MTA requires a compiler to multi-thread all processes. You only get 1 functional unit (and huge latencies == terrible speed) if your program can't be transformed by the compiler.
SMT, in contrast, can work on programs which can't be multi-threaded by a compiler. It works on "instruction level parallelism" (ILP). This is a much finer grain than parallelism that a compiler can find and exploit with another thread.
Because .... (Score:4)
Having gone down the route of doing a paper design for an SMT, I know that one of the real problems with SMT in traditionally piped CPUs (i.e. non-out-of-order) is that with today's deep pipelining the cost of thread switches is really high - often to the point of being useless.
The alternative (SMP) is good for other reasons - you can potentially reduce the size of synchronous clock domains on a die, and design time may be lower (build one core and lay out 8). The downsides have to do with memory architectures (crossbars, buses, cache paths, etc.).
Billion Transistor Chips (Score:5)
The cores are actually able to execute in different contexts as well, not just within the same context as with SMT. This opens up parallelization across more than one process.
One of the more interesting problems in a billion-transistor chip is wire delay. With processes so small that a billion transistors can be put on a moderate-sized die, the clock rate is so high that the wire delay from one side of the chip to the other can be over 100 clock cycles! So locality of information becomes extremely important. With multiple, simple processing cores, all the logic for the pipeline is close together. The data is readily available in L1 cache. The scheduling logic is mostly handled outside the cores; all they have to do is crunch numbers within their context as fast as possible. They don't have to worry about sending/receiving signals from very far away on the chip and the resultant delay, so everything is local and fast.
Additionally, it's the least complex chip to design. Only one processing core needs to be designed and tested, since it's duplicated 4 times. The core is much simpler than other designs. The scheduling logic is all much simpler and easier to test. Most of the die space is devoted to localized caches and execution units, not scheduling logic.
In the benchmarks the SMT and MSC processors vastly outperformed a conventional massively pipelined/parallel billion-transistor processor. And the MSC outperformed the SMT processor by an additional 20+% on average.
On top of all that, to get the best performance from SMT processors you need very smart compilers that are able to find parallelizable code and generate the binary for it. With MSC this isn't a problem. It'll run multi-threaded code simultaneously, but it'll also run multiple processes, or any combination of processes and threads, simultaneously, without help from smart compilers.
Ryan Earl
Student of Computer Science
University of Texas