

Intel Turbo Boost vs. AMD Turbo Core Explained
An anonymous reader recommends a PC Authority article explaining the whys and wherefores of the Intel Turbo Boost and AMD Turbo Core approaches to wringing more apparent performance out of multi-core CPUs. "Gordon Moore has a lot to answer for. His prediction in the now-seminal 'Cramming more components onto integrated circuits' article from 1965 evolved into Intel's corporate philosophy and has driven the semiconductor industry forward for 45 years. This prediction was that the number of transistors on a CPU would double every 18 months, and it has driven CPU design into the realm of multicore. But the thing is, even now there are few applications that take full advantage of multicore processors. What this has led to is the rise of CPU technology designed to speed up single-core performance when an application doesn't use the other cores. Intel's version of the technology is called Turbo Boost, while AMD's is called Turbo Core. This article neatly explains how these speed up your PC, and the difference between the two approaches. Interesting reading if you're choosing between Intel and AMD for your next build."
Can we get.. (Score:5, Funny)
...Turbo switches on our workstations again like back in the day?
Re:Can we get.. (Score:5, Funny)
Plus a straightforward way of figuring out how to best assign processes to particular cores? (which ones are faster and which are slower)
PS. (Score:4, Interesting)
For that matter, can we have one more thing: a way to limit max core usage to, say, 10%? (Imagine you're playing an old game on a laptop, for example Diablo 2; many games have the unfortunate habit of consuming all available CPU power whether they need it or not, draining the battery along the way.)
Re:PS. (Score:5, Informative)
aptitude install cpulimit
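For example (untested sketch; the PID and the "wine" process name below are just placeholders for whatever your game actually runs as):
    # cap an already-running process (by PID) at 10% of one core
    cpulimit -p 12345 -l 10
    # or match by executable name and keep enforcing the limit
    cpulimit -e wine -l 10
cpulimit works by periodically sending SIGSTOP/SIGCONT to the target, so it caps average CPU use without touching the clock speed.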
Re: (Score:2)
Re: (Score:3, Informative)
http://appdb.winehq.org/appview.php?iVersionId=49 [winehq.org]
Re: (Score:3, Informative)
It's called SpeedStep. (OK, it doesn't reduce the CPU usage, but it reduces the CPU clock speed, which is more effective.)
Re: (Score:3, Insightful)
Yes, one can force a SpeedStep setting - but the game would still be consuming all available cycles, preventing the CPU from dropping into sleep states (which is even more effective)
Re: (Score:2, Insightful)
Re: (Score:3, Interesting)
What do you mean "that's why SpeedStep is used, against normal CPU throttling"? SpeedStep is CPU throttling; but on top of that C-states are also highly effective, or at least Thinkpad Wiki thinks so [thinkwiki.org], and I see no reason to disbelieve them...
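If you want to check whether those C-states are actually being reached, a rough sketch (assumes a Linux kernel exposing the sysfs cpuidle interface):
    # show how long cpu0 has spent in each idle state (time is in microseconds)
    for d in /sys/devices/system/cpu/cpu0/cpuidle/state*; do
        printf '%s: %s us\n' "$(cat "$d/name")" "$(cat "$d/time")"
    done
Run it before and after starting the game and you'll see whether the deeper states are still getting any residency.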
Re: (Score:3, Informative)
In the off chance that you're running on Windows;
http://mion.faireal.net/BES/ [faireal.net]
( ugly UI, does the job )
Re: (Score:2)
RMClock does CPU-Throttling. That may be what you're after...
Re: (Score:3, Informative)
Re: (Score:3, Interesting)
How about a small daemon that at intervals re-assigns the running processes to the cores in a balanced way (or one of your choice), and also sets the affinity for new processes. Should be about 30 minutes with any fast language of your choice that can call the appropriate commands.
I think you could even do it with bash, although it would not be very resource-saving. (Hey, everything is a file even those settings! If not, then they did UNIX wrong. ;)
Remember: You are using a computer. Not an appl(e)iance. Yo
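A rough sketch of what that might look like (untested; assumes taskset from util-linux, and that naive round-robin over the busiest processes is good enough):
    #!/bin/bash
    # naive affinity balancer: every 5s, pin the N busiest processes
    # round-robin across the N online cores
    ncpu=$(getconf _NPROCESSORS_ONLN)
    while true; do
        i=0
        for pid in $(ps -eo pid --sort=-%cpu --no-headers | head -n "$ncpu"); do
            taskset -cp $((i % ncpu)) "$pid" >/dev/null 2>&1
            i=$((i + 1))
        done
        sleep 5
    done
In practice the kernel scheduler already load-balances across cores, so this mostly buys you determinism rather than speed.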
Re: (Score:3, Informative)
How about a small daemon that at intervals re-assigns the running processes to the cores in a balanced way (or one of your choice), and also sets the affinity for new processes. Should be about 30 minutes with any fast language of your choice that can call the appropriate commands.
The Linux scheduler doesn't do this? The OS X one certainly does; it also moves processes from core to core based on which one is getting hot.
Re:Can we get.. (Score:5, Funny)
Plus a straightforward way of figuring out how to best assign processes to particular cores? (which ones are faster and which are slower)
Heh, trick question. You almost got me there.
You see, Intel stack their cores from fastest to slowest in order to maximise heat dissipation. This is known as a High-Endian architecture. AMD, on the other hand, use a Low-Endian architecture, stacking their cores from slowest to fastest because they claim it lowers power usage. So the real trick when trying to figure out which cores are faster is finding a cross-platform approach that won't penalise any given processor.
The Slaughterhouse-5[*] method says that with a non-randomised Tralfamadorean transform, you can infer where your sample data is going to end up before you actually send it there. So you just measure the incipient idiopathic latency of your unsent bytes and then apply a parsimonious lectern to the results and voilà!
... Why, yes, I am in Marketing. Why do you ask?
------------------
[*] As developed by Billy Pilgrim [wikipedia.org]. Po tee-weet
Re: (Score:2)
You have it – assign the processes, and the cores they're assigned to will become fast. This doesn't need software fighting against it all the time.
Re:Can we get.. (Score:5, Informative)
Actually, that’s pretty easy to do with Linux right now. Just choose any ACPI button (you at least have a power button, often more), and in your /etc/acpi/ directory, modify the scripts so they call “cpufreq-set -f $freq” on the right events. (You may need a state file in your /var/state/ dir, to remember which mode you are in. But you can also toggle a keyboard LED that you don’t use much.)
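A minimal sketch of the idea (untested; the event string and frequencies are just examples and will differ on your machine, and it assumes acpid plus cpufrequtils):
    # /etc/acpi/events/toggle-speed
    event=button/prog1.*
    action=/etc/acpi/toggle-speed.sh

    # /etc/acpi/toggle-speed.sh
    #!/bin/sh
    # assumes the userspace governor is active; otherwise run cpufreq-set -g userspace first
    STATE=/var/state/lowspeed            # remembers which mode we're in
    if [ -e "$STATE" ]; then
        cpufreq-set -c 0 -f 2000MHz      # back to full speed (use your CPU's max)
        rm -f "$STATE"
    else
        cpufreq-set -c 0 -f 800MHz       # battery-saver mode
        touch "$STATE"
    fi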
And this is why I love Linux. If you can think of it, and it’s physically possible... you can do it. :)
Next: Using the graphics ram that is unused while in 2D mode, as a fast swap/tmpfs/cache. ;)
Re: (Score:3, Insightful)
But perhaps not without exceeding the amperage value for which power lines are rated...
Re: (Score:2)
heh, care to guess how many watts a CPU pulls?
Now can you guess how many watts a wall outlet+power cable can give?
Now compare those two....
Re: (Score:2)
He's talking about PSU power rails, not mains power.
Re: (Score:2)
The CPU pins, motherboard traces to them and the traces from them into the CPU proper are probably all narrower than the PSU wires.
Re:Can we get.. (Score:4, Interesting)
Yes, but it consumes more power and puts out more heat than they'd like. Binning is also a bigger deal than you think with CPUs. My CPU can be overclocked significantly, because I got a lucky unit, but not nearly as far as what some people get. My CPU isn't stable at the memory speeds most overclockers report online, either. So in some ways I got a good CPU; in others, it's meh.
On the other hand, there's no way I'd sell a company my CPU & motherboard at the speed I've boosted it up to. Not a chance. It's not 100% stable, there are infrequent glitches, etc. I improved my cooling, decreased my overclock, and I've still had it not wake up from S3 sleep and do a couple of other odd things.
So no, super turbo boost is not what you think it is. Is it a marketing ploy? Everything is a marketing ploy, but it's also a useful feature. Especially on laptops, where all but one core of the CPU can completely shut down and the remaining one can nearly double its clock speed.
Re:Can we get.. (Score:5, Interesting)
So what you do is get people to code apps that use lighter-weight threads. Apple's GCD and the FOSS ports of GCD spawn low-overhead threads, so you can cram more in, make them smaller, and relieve part of the dirty-cache problem that comes with using them.
Spawn threads across cores, keep each thread's life simpler. Make those freaking cores actually do something. It can be done. It's just that Mac OS, Linux, or BSD has to be used to run the apps/games.
Don't get me started on GPU threading.
Re: (Score:2)
Well, what I find very stupid is that we programmers always preach modularity, but we still use mostly huge monolithic apps on the graphical desktop.
I’d prefer the following:
Imagine Photoshop. Or an office program. Now imagine every one of those tools in the palette, buttons in the button bar, and menu items becoming a completely separate program, with them all interacting through a standardized interface, as an analogy to piping in shell scripting. And the property sidebars become the properties of t
Re:Can we get.. (Score:4, Informative)
Re: (Score:2)
Huh? (Score:5, Insightful)
That read like two press releases pasted together. It did very little to explain what is going on beyond press-grade buzzwords.
Re:Huh? (Score:5, Informative)
Re: (Score:2)
Essentially they both just detect whether other cores can be powered down, power them down, and then crank up the clock speed on the remaining core(s), because heat/power doesn't matter as much if the other cores are turned off or idling in the low megahertz. AMD's solution feels like an afterthought, because their architecture is older than Intel's, while Intel's was built into the architecture.
It's actually existed since the original Phenom series of chips that came out a few years ago; they've only recently provided the BIOS code for it in these newer chips.
On my Phenom II 720BE (which I unlocked to a quad) I use Phenom MSR Tweaker to control my power states and multiplier settings. I can have 1, 2, 3, or all 4 of my cores overclocked. Core 1 hits 3.8GHz better than 2, 3, and 4, but I leave them all at 3.5GHz.
Re: (Score:3)
Intel is better.
Has been that way for many years now. Yes, it's more expensive.
Re:Huh? (Score:5, Informative)
Intel is better.
Has been that way for many years now. Yes, it's more expensive.
Depends on your metrics. If the only thing that matters is pure raw speed out of a single die, Intel does eke out on top, but not by as much as you'd think.
If you're going for massive multi-processor, multi-core systems, it's AMD.
If it's power vs performance out of a single die, then it depends - idle or full throttle. Intel for the former, AMD for the latter, depending upon weighting.
and so on. At least as of the last set of performance benchmarks I read just a few months ago on the topic, meaning they're probably completely out of date by now.
Re: (Score:2)
As for power, the parent didn't mention it.
Re: (Score:3, Informative)
Performance on its own isn't meaningful for everyone. If I'm setting up a server or, better yet, a data center, performance against cost has to be considered. And that's not just the cost of the hardware but also things like MTBF and power costs.
I'm not implying that one is better than the other here, but a raw performance comparison between like-for-like processors is not enough information to make a spending decision with. It may be better value to buy the system with poor performance and spend the savings
Re:Huh? (Score:4, Informative)
The way I understand it (and I could be wrong) is that on a quad-core 1.6GHz i7 each core is actually capable of going up to 2.8GHz, although I'm not sure if they are all capable of going to 2.8GHz at the same time. If you run a program that can't take advantage of more than 1 core, and it starts maxing out that core at 100%, the CPU will increase the clock speed of that core, up to 2.8GHz, until it isn't maxed out anymore. In order to keep energy consumption and heat down the CPU will also lower the clock speeds of the other cores as needed.
With older multi-core processors, if you had a quad-core 1.6GHz and you had a program that could only use 1 core, then you would effectively just have a 1.6GHz processor, in which case a dual-core 2.8GHz would be way better. With Turbo Boost you can essentially get the best of both worlds.
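If you want to watch this happen, a crude sketch on Linux (the values reported this way are approximate, since per-core turbo bins aren't always reflected in these counters):
    # watch the reported per-core clocks while a single-threaded load runs
    watch -n1 'grep "cpu MHz" /proc/cpuinfo'
Tools like turbostat (shipped in the kernel source tree) read the APERF/MPERF counters instead and give a truer average frequency.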
Or, put another way (Score:2)
How to sell people lots and lots of cores but only have to actually deliver on one of them.
Neat.
Re: (Score:2)
See, my problem with this is the expectation that the application can use multiple cores. I don't care if an application can use multiple cores, I want the operating system to be able to make use of them.
The day an application sees that I have four cores and can feel free to use all of them, we're pretty much hosed, because we'll have apps ch
Re:Huh? (Score:4, Insightful)
Re: (Score:2)
It also reads like AMD bashing rather than a tech article.
Re: (Score:2, Informative)
Re: (Score:2)
When will we stop referencing Moore's law?
When it's clearly no longer true or applicable - neither of which is the case yet.
As a child I loved the TV show SPACE: 1999. But it's gone now and not relevant.
That's because the moon was not - and has never been - catastrophically blasted out of the Earth's orbit with Martin Landau and Barbara Bain onboard, running into countless humanoid aliens-of-the-week along the way.
Moore's law was, and remains, true, and a fundamental aspect of progress in the computer industry.
Cooling fan noise anyone? (Score:2)
Rather than cranking up the GHz of each core to obtain more speed, I wish they'd concentrate on keeping it cool. I hate the fan noise, and multicore was a way around that because it rarely heats up with standard usage. Hence less or no cooling required.
"We've got to find some way to get that fan to rotate to annoy the users... ah I have a cunning plan..."
Re:Cooling fan noise anyone? (Score:5, Informative)
Re: (Score:3, Interesting)
I bought an Intel i7-860 recently and the supplied HSF is barely able to keep the core temperatures under 95 deg. C with eight threads of Prime95 running. Eek!! I replaced it with a cheap Hyper TX3 cooler (larger coolers won't fit with four DIMMs fitted), and it runs at least 20 deg. C cooler under the same conditions. The supplied fan is a little noisy under full load, but for gaming etc. it's not a problem.
Turbo Boost is cute, but I've opted to overclock it at a constant 3.33GHz (up from
Re:Cooling fan noise anyone? (Score:5, Interesting)
predictable performance
Predictable power-drain, you mean, and a predictable shortening of the life of your hardware -- assuming it doesn't just overheat and underclock itself, which I've seen happen a few times.
CPU scaling has been mature for a while now, and it's implemented in hardware. Can you give me any real examples of it causing a problem? The instant I need that speed (for gaming, etc.), it's there. The rest of the time, I'd much rather it coast at 800MHz all around, especially on a laptop.
with no temperature or stability issues. YMMV.
Understatement of the year.
Overclocking is a bit of a black art, for a number of reasons. First problem: How do you know it's stable? Or rather, when things start to go wrong, how do you know if it's a software or a hardware issue? The last time I did this was taking a 1.8GHz machine to 2.7GHz. I ran SuperPi, 3DMark, and a number of other things, and it seemed stable, but occasionally crashed. Clocked it back to 2.4GHz, it crashed less often, but there were occasionally subtle filesystem corruption issues -- which was much worse, because I had absolutely no indication anything was wrong (over months of use) until I found my data corrupted for no apparent reason. Finally set it back to the factory default (and turned on the scaling) and it's been solid ever since.
Second problem: Even with the same chip, it varies a lot. All that testing I did is nothing compared to how the manufacturer actually tests the chip -- but they only test what they're actually selling. That means if they're selling you a dual-core chip that's really a quad-core chip with two cores disabled, it might just be surplus, the extra cores might be fine, but they haven't tested them. Or maybe they have, and that's why they sold it as a dual-core instead of quad-core.
So even if you follow a guide to the letter, it's not guaranteed.
I'm sure you already know all of the above, but I'm at the point in my life where, even as a starving college student, even as a Linux user on a Dvorak keyboard, it's much saner for me to simply buy a faster CPU, rather than trying to overclock it myself.
Re: (Score:2)
blah blah blah, get with the times old folks with small UIDs. A hefty majority of the build-your-own-pc crowd overclocks.
We test with a multitude of stress testing programs that test all parts and instructions of the architecture. We find the maximum frequency we can run at acceptable voltage and heat. There's a linear region of overclocking and an exponential region. Most people find the divergence point and sit there. Those with water cooling can go a little further.
It pretty much is guaranteed, because 9
Re: (Score:2)
Assume away. I'll repeat myself: at full load the replacement heatsink runs substantially cooler than the stock (and warrantied) Intel cooler. The Intel cooler at 90+ deg. C - barely within the thermal specs. for the CPU - did not thermal trip; why would the replacement cooler trip at 65 deg. C?! Your overclocking exam
So buy a better cooler (Score:2)
The stock Intel coolers are designed to be economical and to meet the thermal requirements, not to be good.
I use an Arctic Cooling Freezer 7 Pro. With my Q9550 I cannot make the fan spin up past the minimum, which is about 300rpm. The Intel board I use figures the CPU should maintain about a 20 degree thermal margin, meaning run 20 degrees below its rated max. If it is running hotter than that, the fan spins faster up to the max. If it is running cooler than that, the fan throttles back as low as the minimum. Idle,
Re: (Score:2)
Re: (Score:2)
Get a goat.
Then after it craps in your bed and chews up your linens and brays all night, get rid of it.
Your computer will seem so much cooler and quieter after you get rid of the goat!
(my current PC from 2007 is soooo much quieter than the 2002-era PC it replaced)
"Apparent performance" (Score:5, Insightful)
Re:"Apparent performance" (Score:4, Insightful)
What's "apparent performance"? It's either faster or it's not.
You have obviously never worked in UI design! (though in this area I don't know who/what they would be trying to fool or how they would be trying to fool them/it so your response is probably quite right)
Re: (Score:2)
You have obviously never worked in UI design! (though in this area I don't know who/what they would be trying to fool or how they would be trying to fool them/it so your response is probably quite right)
And apparently you have never worked in sentence design. ;)
Re:"Apparent performance" (Score:5, Insightful)
Many programs simply do not benefit from multiple cores. This technology is basically a trade off between partially disabling one core and increasing the frequency of the other core.
Re:"Apparent performance" (Score:5, Interesting)
A better explanation (Score:5, Informative)
The article kinda glosses over things. So a more detailed explanation of how Intel's turbo boost works:
As stated, every core has a budget for the maximum heat it can give off and the maximum power it can use, as well as a max clock speed that it can handle. However, when you look at these things they aren't all even; one ends up being the limiting factor. So Intel said: OK, we design a chip to always run at a given speed and stay under the thermal and power envelopes. However, if it isn't bumping up against those envelopes, we allow for speed increases. It can increase the speed of cores in 133MHz increments. If things go over, it throttles back down again.
This can be done no matter how many cores are active, but the fewer that are active, the more headroom there is likely to be. On desktop chips it isn't a big deal, since they usually run fairly near their speed limit anyhow, so you may see only 1 or 2 133MHz increments at most. For laptop chips, in particular quad cores, it can be a lot more.
The Intel i7-720QM runs at 1.6GHz and has 1/1/6/9 turbo boost multipliers. That means with all 4 cores running it can clock up at most 1 increment, to 1.73GHz. However, with only one running it can go to 2.8GHz, nine 133MHz steps up. That lets a processor that would otherwise be too fast to live in the laptop go in there, with some flexibility. A desktop Core i7-930 is 2.8GHz with 1/1/1/2 turbo mode. That means it'll clock up to 2.93GHz with 2-4 cores active, and 3GHz with 1. Much less flexible, since it is already running near its rated max clock speed.
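Back-of-the-envelope version of those numbers (plain shell arithmetic using the figures quoted above):
    base=1600   # i7-720QM base clock in MHz
    bclk=133    # MHz per turbo bin
    for bins in 1 1 6 9; do    # bins for 4, 3, 2, 1 active cores
        echo "$((base + bins * bclk)) MHz"
    done
    # prints 1733, 1733, 2398 and 2797 MHz, i.e. roughly 1.73GHz with all
    # cores busy and about 2.8GHz with a single core active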
Now this is not the same as SpeedStep, which is their technology to downclock the CPUs when they aren't in so much use. Similar idea, but that one is purely based on how hard the CPU is being asked to work, not on whether the system can handle the higher speeds.
As an aside, I'll call BS on the "Little uses multiple cores." Games these days are heavily going at least dual core, some even more. Reason is, if nothing else, the consoles are that way too. The Xbox 360 has 3 cores, 2 threads each. The PS3 has a weak CPU attached to 7 powerful SPUs. On a platform like that, you learn to do parallel or your games don't look as good. Same knowledge translates to the PC.
However there are still single core things, hence the turbo boost thing can be real useful. In laptops this is particularly the case. If the i7 quad was limited to 1.6GHz, few people would want it over one of the duals that can be 2.53GHz or more. Just too much loss in MHz to be worth it. However now, it can be the best of all worlds. A slower quad, a faster dual, whatever the apps call for, it handles.
Re: (Score:3)
What I find interesting is that the current OSes most people use, with the exception of some RealTime and big-iron custom dealies, are still built in such a monolithic way that it becomes more "profitable" to the user experience to ramp up single cores as opposed to having most cores running at the same speed.
With the exception of some high-demand apps like games, extensive math apps, and stuff that could or should be offloaded to GPUs, desktop OSes don't need a VERY fast single core; they instead need lots of e
Re:A better explanation (Score:5, Informative)
Re: (Score:2)
I'd say games qualify for "little uses". Believe it or not, most people don't use their computers for high end gaming.
Most people don't use their computers for high end _anything_ and whether the machine has one core or a dozen is basically irrelevant.
Re:A better explanation (Score:3, Insightful)
I'd say games qualify for "little uses". Believe it or not, most people don't use their computers for high end gaming.
Afaict most people don't use their computers for anything that strains the CPU at all. Most people would be perfectly happy with a bottom of the range C2D, i3, late P4 or maybe even less as long as the system was kept free of crapware and had enough ram for their OS and applications of choice.
However of those apps that DO strain the CPU (e.g. games, video encoding, scientific software, softw
Why not use the extra transistors... (Score:3, Interesting)
...for more cache instead of more processors? Think of something with as many transistors as a hex core but with only two cores and the rest used for L1 cache! I'd suggest lots more registers as well, but that would mean giving up on x86.
Re:Why not use the extra transistors... (Score:5, Interesting)
Larger caches are slower. Moving to a larger L1 cache would either require that the chip run at a lower clock rate, or increase the latency (increasing the length of time it takes to retrieve the data).
As for registers, they did increase them, from 8 to 16 with x64. IIRC, AMD stated that moving to 16 registers gave 80% of the performance increase they would have gained by moving to 32 registers.
Re: (Score:2)
As for registers, they did increase them, from 8 to 16 with x64. IIRC, AMD stated that moving to 16 registers gave 80% of the performance increase they would have gained by moving to 32 registers.
AMD64 retains the index registers &c and increases general purpose registers from 4 to 16, not 8 to 16. And to be totally critical, x86 has zero general-purpose registers, because many (most?) instructions require that operands be in specific registers, and that the result will be placed in a specific register. Those other four registers are specific-purpose as well, you usually can't even use them to temporarily hold some data because you need to put addresses in them to execute instructions. x86 has f
Re: (Score:3, Insightful)
L1 and sometimes L2 caches are small not because of die area but because there is a tradeoff between cache size and cache speed. Only the lowest level cache (L3 on the i series) takes significant chip area (and it already takes a pretty large proportion on both the quad and hex core chips).
Re: (Score:2)
Because the cache is shared on newer multicore processors you essentially do get more cache. Cache is the largest user of real estate on die. The added processors you get are just a bonus.
Don't need to (Score:2)
Turns out, you can figure out, based on the kinds of programs you run, how much cache you need to give good performance. With a sufficient amount of cache, you can have total effective throughput better than 90% of the throughput of just the cache itself. Thus more cache doesn't really get you anything. You find it is very much a logarithmic kind of function. With no cache, your performance is limited by the speed of the system memory. Just a little cache gives you a big increase. More makes sense, to a point
Re: (Score:3, Interesting)
PII is also of the PPro lineage. And even if PII, PIII and to some degree P-M and Core1 aren't that different, there are supposed to be some notable changes with Core2 and, especially, Nehalem.
Besides, if the tech is good and it works... (look what happened when they tried "innovating" with PIV)
Re: (Score:2)
IIRC, Intel got their head handed to them by AMD when they "lost" the MHz race to AMD's marketing dept.
Intel with the P4 had a very long pipeline which they could crank the speed of faster than AMD could. On the other hand, AMD went for multiple shorter pipelines through the CPU, which meant that most of what people did went through the CPU faster. (I am purposefully skipping over cache misses, delays on getting things through the pipeline, etc.) The benchmarks at the time proved it. Also, the addition of t
Re: (Score:2)
They didn't lose it just to AMD's marketing department. Athlons were simply a better core, capable of higher clocks (and without such obvious tricks as the P4's; they often had higher IPC than the P3)
If anything, Intel was driven too much by marketing back then... (remember the P3 Coppermine 1.13 GHz fiasco?)
Re: (Score:3, Interesting)
Re: (Score:2)
Somebody please either lambaste this or tell me it isn't that far off.
Oversimplifying...
The PII was a PPro with an off-die L2 cache and MMX. The PIII was a PII with SSE. The Pentium M was a PIII with a P4 FSB and SSE2.
Turbo button comming back? (Score:2, Funny)
So they are bringing the Turbo Button back?
Seriously, when I was looking at laptops, two laptops were pretty much the same in specs, except one had a "Turbo" CPU while the other's CPU ran at the speed of the "boosted" one next to it...
The price difference... $20.00!!! I'll pay an extra $20 to have FULL SPEED ALL THE TIME!
Re: (Score:2)
No, this is automatic at the hardware level -- not a manual switch. In fact, it's more or less useless on desktop machines (as someone excellently explained above) since the speed improvements are small. On laptops with >2 cores, however, it seems to be very, very nice. A fairly easy way to have both reasonably powerful parallel processing with multiple cores, fairly fast single-thread processing, and not creating a level of heat that could damage components.
Also, if you're overclocking a desktop (which
Re: (Score:2)
That turbo boosted CPU also had hyperthreading, AES-NI and a few newer instruction sets, and will last longer on battery. I can't imagine why you'd want battery on a laptop, though... it's all about teh megahurz!
Why? (Score:3, Insightful)
Re:Why? (Score:5, Insightful)
Because it's a pain in the ass and very hard for most coders.
What we need is either a simple library for threading or a new language (like Haskell) for auto-parallelization.
Re: (Score:2, Insightful)
And more importantly, not all tasks CAN be parallelized.
Re: (Score:2)
And more importantly, not all tasks CAN be parallelized.
Why not just run two for twice the price?? [youtube.com] Only, this one can be secret.
Because Intel knows their history (Score:4, Interesting)
When Intel came out with the Pentium Pro, they had a good 32-bit machine, and it ran UNIX and NT, in 32-bit mode, just fine. People bitched about its poor performance on 16-bit code; Intel had assumed that 16-bit code would have been replaced by 1995.
Intel hasn't made that mistake again. They test heavily against obsolete software.
Re: (Score:2)
Why this compromise? There's a huge need for developers to start thinking in terms of multicore CPUs. Offering them this solution is just postponing the inevitable. We need change now.
Legacy applications. Anyway, we need better multicore support at the OS level, just like we need GPU rendering support at the OS level. Leaving it up to applications programmers to figure out either for themselves is a total failure.
Question... (Score:2)
Does Intel's architecture adjust its management scheme based on CPU temperature? It'd be nice if having a better heat sink or a cooling system would allow the system to run even faster.
I've also been wondering why, given the new poly-core systems, we don't see a mix of CPU types in a system. Throwing a bunch of slower but less complex and therefore less expensive cores in with a few premium cores would result in a better balance, allowing the system to concentrate heavy-load apps on the faster CPUs while
Re: (Score:3, Insightful)
> I've also been wondering why, given the new poly-core systems, we
> don't see a mix of CPU types in a system.
How would the OS decide which process to assign to which core?
Re: (Score:2, Interesting)
Looking at the history of the CPU-time-to-running-time ratio for each process, or perhaps also at what typically causes spikes of usage, and moving the process to a faster core at that point? Plus a central DB of what to expect from specific processes.
(I'm not saying it's necessarily a good idea; just that it could be not so hard, OS-wise)
Re: (Score:3, Funny)
How about, every app that runs in the background or as a tray icon by default gets a cheesy core? :-P
Also takes more complex hardware (Score:2)
Adding more CPUs isn't as simple as just putting another socket on the board. There are real issues to be dealt with. That's one of the reasons you see such jumps in price for things. Making a CPU and chipset that deal with only a single processor is easier than multiple. Also at a certain point you end up having to add "glue" chips which deal with all the issues of all the multiple CPUs.
Ok well that is all for symmetric multiprocessing, where all CPUs are equal. Add on a whole new layer of complexity if th
Re: (Score:2)
Though we do see a mix, in a different way, with GPGPU adoption.
In layman's terms (Score:5, Funny)
Just wanted to clarify some of the misconceptions about the Turbo Boost...
The technology is fairly simple. At its most basic level, we take the exhaust from the CPU fan and route it back into the intake of the system. If you're using Linux you can see the RPM increase by running 'top' (google Linux RPM for more information).
The turbo itself is a fairly simple technology. As you're aware, we can use pipes to stream the outputs of different applications together. In the case of Linux, we pipe the stdout stream to the stdin (the intake) of the turbo (tr) which increases the speed and feeds it into a different application. For example, we can increase the throughput of dd as follows:
dd if=/dev/zero | tr rpm | tee /proc/cpuinfo
This will increase the CPU speed by feeding output from dd into the turbo (and increasing the rpm) and finally pumping it back into the CPU.
On other platforms there are some proprietary solutions. For example, take the output of Adobe AIR to HyperV to PCSpeedup! then back into the processor.
Hope this helps...
Take advantage of what? (Score:2)
Too bad people don't use even a single core to correct their mistakes.
The article in one line (Score:2)
Turbo Boost and Turbo Core overclock the cores in use, up to a thermal limit.
Hardly cutting edge stuff.
correct me if i'm wrong, but (Score:4, Interesting)
Correct me if I'm wrong, and maybe I'm missing something here, but I think it's possible to simulate this kind of functionality on Linux with a script. Cores 2 to N are taken offline (echo 0 > /sys/devices/system/cpu/cpuN/online), the "performance" governor is used for cpu0 (which causes it to run at full clock), then the script monitors usage of cpu0 and brings the other cores online as load on cpu0 goes up. When load goes down, the other cores can be taken offline again.
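Something along these lines, perhaps (untested sketch; needs root, and the 0.8 load threshold is an arbitrary placeholder):
    #!/bin/bash
    # keep only cpu0 online until the 1-minute load rises, then wake the rest
    cpufreq-set -c 0 -g performance
    while true; do
        busy=$(awk '{print ($1 > 0.8) ? 1 : 0}' /proc/loadavg)
        for c in /sys/devices/system/cpu/cpu[1-9]*; do
            echo "$busy" > "$c/online" 2>/dev/null
        done
        sleep 5
    done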
Re: (Score:2)
Re: (Score:2)
"You will not find a turbine (ok, rotor with blades) in your CPU"
Maybe not *IN* the CPU but the Delta that was on top of my heatsink was most certainly close to a turbine, right down to the ultra-high whining noise.
Re: (Score:2)
Can i get turbo with this one: http://www.hardwarecanucks.com/wp-content/uploads/newegg-sells-fake-core-i7-cpus-9.jpg [hardwarecanucks.com] ???
Re:"Your next build" - who builds PCs anymore? (Score:5, Insightful)
For $300 you can get a brand new Dell - who builds a PC anymore?
Someone who wants something better than a $300 Dell?
Re: (Score:2)
I bought one of those $400 laptops for my wife last year... 2.2GHz dual-core Toshiba Satellite with a fairly recent Intel 4000 integrated graphics chip.
My upgraded 8-year-old 2.2GHz dual SMP Athlon XP-M with an AGP nVidia 6800GS still blows the doors off of it for gaming. Works great as long as the games don't require DX10 or 64-bit. And it actually didn't cost all that much since I waited to upgrade the CPUs, video, and RAM after they got cheap.
Re: (Score:2)
Re: (Score:2)
I built my desktop and I will build the next one when I finally upgrade. I build my desktop computers because I end up getting exactly what I want, with the compatibility that I need, and I end up getting it cheaper than anything Dell throws my way. For example, I built my desktop 4 years ago, and not only did it cost me just 300 euros for an Athlon X2 4000+ system with 2GB of RAM and a 19'' monitor, it has also given me zero problems up to this day.
Knowing that, I took a peek at Dell's low bid o
Re: (Score:2)
The difference in heat dissipation requirements is negligible when it's a choice between 1cm or 2cm thick asbestos padding to avoid 3rd degree burns from your MacBook Pro.
Re: (Score:2)
You know that you can buy hardware elsewhere, and software too, right? Also you’re a smart grownup. You don’t need the Apple “’tardpadding”. ^^
Re: (Score:2)
There has to be a Core 2 Quad with lower power consumption... the current Mac mini's power supply is 110W.
I'm afraid the next Mac mini bump is going to be either another Core 2 Duo paired with the new nVidia 320M, or a Core i3 with a crap Intel GPU...