Intel Turbo Boost vs. AMD Turbo Core Explained 198

An anonymous reader recommends a PC Authority article explaining the whys and wherefores of Intel's Turbo Boost and AMD's Turbo Core approaches to wringing more apparent performance out of multi-core CPUs. "Gordon Moore has a lot to answer for. His prediction in the now-seminal 'Cramming more components onto integrated circuits' article from 1965 evolved into Intel's corporate philosophy and has driven the semiconductor industry forward for 45 years. The prediction, that the number of transistors on a CPU would double every 18 months, has pushed CPU design into the realm of multicore. But the thing is, even now there are few applications that take full advantage of multicore processors. This has led to the rise of CPU technology designed to speed up single-core performance when an application doesn't use the other cores. Intel's version of the technology is called Turbo Boost, while AMD's is called Turbo Core. This article neatly explains how these speed up your PC, and the difference between the two approaches. Interesting reading if you're choosing between Intel and AMD for your next build."
This discussion has been archived. No new comments can be posted.


  • by vjlen ( 187941 ) on Tuesday May 04, 2010 @07:46PM (#32092484) Homepage

    ...Turbo switches on our workstations again like back in the day?

    • by sznupi ( 719324 ) on Tuesday May 04, 2010 @07:57PM (#32092564) Homepage

      Plus a straightforward way of figuring out how to best assign processes to particular cores? (which ones are faster and which are slower)

      • PS. (Score:4, Interesting)

        by sznupi ( 719324 ) on Tuesday May 04, 2010 @08:02PM (#32092604) Homepage

        For that matter, can we have one more thing: a way to cap a process's CPU usage at, say, 10%? (Imagine you're playing an old game on a laptop, say Diablo II; many games have the unfortunate habit of consuming all available CPU power whether they need it or not, taking the battery with them.)
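
        For what it's worth, something close to this already exists on Linux as the third-party cpulimit tool, which caps a process's average CPU share by stopping and resuming it with SIGSTOP/SIGCONT. A minimal sketch, where the PID and process name are placeholders:

            cpulimit -l 10 -p 1234      # cap PID 1234 at ~10% of one core
            cpulimit -l 10 -e game      # or match a running process by name

        No kernel support needed, though the duty-cycling is coarser than a real per-process clamp.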

      • Re: (Score:3, Informative)

        by wealthychef ( 584778 )
        I like Grand Central Dispatch [wikipedia.org]. Don't shoot me, it's from Apple. But it's open source, so it's good, right? What I like about it is that it relieves a programmer from the burden of choosing the number of threads to run, initializing all the various mutexes, etc. Very nice model. I don't see a big driver for adoption, unfortunately, outside of HPC geeks like yours truly.
      • Re: (Score:3, Interesting)

        by Hurricane78 ( 562437 )

        How about a small daemon that at intervals re-assigns the running processes to the cores in a balanced way (or one of your choice), and also sets the affinity for new processes. Should be about 30 minutes with any fast language of your choice that can call the appropriate commands.

        I think you could even do it with bash, although it would not be very efficient. (Hey, everything is a file, even those settings! If not, then they did UNIX wrong. ;) A bare-bones sketch follows.
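
        Here's that sketch in shell, using taskset from util-linux; the blind round-robin policy is only an illustration, a real balancer would weigh per-process CPU usage:

            #!/bin/bash
            # Naive affinity daemon: round-robin every process across the cores.
            NCORES=$(nproc)
            while true; do
                i=0
                for pid in $(ps -eo pid --no-headers); do
                    # Pin this PID to one core; kernel threads may refuse, hence the redirect.
                    taskset -pc $((i % NCORES)) "$pid" >/dev/null 2>&1
                    i=$((i + 1))
                done
                sleep 60    # re-balance once a minute
            done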

        Remember: You are using a computer. Not an appl(e)iance. Yo

        • Re: (Score:3, Informative)

          by beelsebob ( 529313 )

          How about a small daemon that at intervals re-assigns the running processes to the cores in a balanced way (or one of your choice), and also sets the affinity for new processes. Should be about 30 minutes with any fast language of your choice that can call the appropriate commands.

          The Linux scheduler doesn't do this? The OS X one certainly does; it also moves processes from core to core based on which one is getting hot.

      • by grcumb ( 781340 ) on Wednesday May 05, 2010 @01:48AM (#32094590) Homepage Journal

        Plus a straightforward way of figuring out how to best assign processes to particular cores? (which ones are faster and which are slower)

        Heh, trick question. You almost got me there.

        You see, Intel stack their cores from fastest to slowest in order to maximise heat dissipation. This is known as a High-Endian architecture. AMD, on the other hand, use a Low-Endian architecture, stacking their cores from slowest to fastest because they claim it lowers power usage. So the real trick when trying to figure out which cores are faster is finding a cross-platform approach that won't penalise any given processor.

        The Slaughterhouse-5[*] method says that with a non-randomised Tralfamadorean transform, you can infer where your sample data is going to end up before you actually send it there. So you just measure the incipient idiopathic latency of your unsent bytes and then apply a parsimonious lectern to the results and voilà!

        ... Why, yes, I am in Marketing. Why do you ask?

        ------------------
        [*] As developed by Billy Pilgrim [wikipedia.org]. Po tee-weet

      • You have it – assign the processes, and the cores they're assigned to will become fast. This doesn't need software fighting against it all the time.

    • Re:Can we get.. (Score:5, Informative)

      by Hurricane78 ( 562437 ) <deleted&slashdot,org> on Wednesday May 05, 2010 @01:10AM (#32094434)

      Actually, that’s pretty easy to do with Linux right now.
      Just choose any ACPI button (you at least have a power button, often more), and in your /etc/acpi/ directory, modify the scripts so they call “cpufreq-set -f $freq” on the right events. (You may need a state file in your /var/state/ dir, to remember which mode you are in. But you can also toggle a keyboard LED that you don’t use much.)
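
      A rough sketch of such a handler (the script name, frequencies, and state-file path are illustrative; cpufreq-set ships with cpufrequtils, and -f implies the userspace governor):

          #!/bin/sh
          # /etc/acpi/toggle-freq.sh -- hypothetical script wired to a button event
          STATE=/var/state/freq-mode
          if [ "$(cat "$STATE" 2>/dev/null)" = "slow" ]; then
              cpufreq-set -f 2.4GHz    # back to full speed
              echo fast > "$STATE"
          else
              cpufreq-set -f 800MHz    # battery-saver mode
              echo slow > "$STATE"
          fi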

      And this is why I love Linux. If you can think of it, and it’s physically possible... you can do it. :)

      Next: Using the graphics ram that is unused while in 2D mode, as a fast swap/tmpfs/cache. ;)

  • Huh? (Score:5, Insightful)

    by Wyatt Earp ( 1029 ) on Tuesday May 04, 2010 @07:46PM (#32092490)

    That read like two press releases pasted together. It did very little to explain what is going on beyond press-grade buzzwords.

    • Re:Huh? (Score:5, Informative)

      by Darkness404 ( 1287218 ) on Tuesday May 04, 2010 @07:51PM (#32092518)
      Essentially they both just detect whether other cores can be powered down, power them down, and then crank up the clock speed on the remaining core, because heat/power doesn't matter if the other cores are turned off or idling in the low megahertz. AMD's solution is more of an afterthought because their architecture is older than Intel's, while Intel's was built into the architecture.
      • Essentially they both just detect whether other cores can be powered down, power them down, and then crank up the clock speed on the remaining core, because heat/power doesn't matter if the other cores are turned off or idling in the low megahertz. AMD's solution is more of an afterthought because their architecture is older than Intel's, while Intel's was built into the architecture.

        It has actually existed since the original Phenom series of chips that came out a few years ago; they've only recently exposed it through BIOS code in these newer chips.

        On my Phenom II 720 BE (which I unlocked to a quad) I use Phenom MSR Tweaker to control my power states and multiplier settings. I can have 1, 2, 3, or all 4 of my cores overclocked. Core 1 hits 3.8GHz better than cores 2, 3, and 4, but I leave them all at 3.5GHz.

    • Re:Huh? (Score:4, Informative)

      by DeadboltX ( 751907 ) on Tuesday May 04, 2010 @08:18PM (#32092734)

      The way I understand it (and I could be wrong) is that on a quad-core 1.6GHz i7, each core is actually capable of going up to 2.8GHz, although I'm not sure if they are all capable of going to 2.8GHz at the same time. If you run a program that can't take advantage of more than 1 core, and it starts maxing out that core at 100%, the CPU will increase the clock speed of that core, up to 2.8GHz, until it isn't maxed out anymore. In order to keep energy consumption and heat down, the CPU will also lower the clock speeds of the other cores as needed.

      With older multi-core processors, if you had a 1.6GHz quad core and a program that could only use 1 core, then you effectively just had a 1.6GHz processor, in which case a 2.8GHz dual core would be way better. With Turbo Boost you can essentially get the best of both worlds.

      • How to sell people lots and lots of cores but only have to actually deliver on one of them.

        Neat.

         

      • With older multi-core processors, if you had a 1.6GHz quad core and a program that could only use 1 core, then you effectively just had a 1.6GHz processor

        See, my problem with this is the expectation that the application can use multiple cores. I don't care if an application can use multiple cores, I want the operating system to be able to make use of them.

        The day an application sees that I have four cores and can feel free to use all of them, we're pretty much hosed, because we'll have apps ch

    • Re:Huh? (Score:4, Insightful)

      by oldhack ( 1037484 ) on Tuesday May 04, 2010 @09:37PM (#32093232)
      The damn thing could (and should) have been a two-paragraph memo - it reads like it was written by a high school kid trying to fill a page quota. Oh well - the info, shallow as it is, is still something I didn't know before.
    • by mrmeval ( 662166 )

      It also reads like AMD bashing rather than a tech article.

    • Re: (Score:2, Informative)

      by Athanasius ( 306480 )
      This [tomshardware.com] might make things a little clearer for the Intel case. Certainly it gives more detail about how it works (for one thing it's not just a "base speed or Turbo speed" thing, there are multiple boost steps depending on the exact situation).
  • Rather than cranking up the GHz of each core to obtain more speed, I wish they'd concentrate on keeping it cool. I hate the fan noise, and multicore was a way around that because it rarely heats up with standard usage. Hence less or no cooling required.

    "We've got to find some way to get that fan to rotate to annoy the users... ah I have a cunning plan..."

    • by washu_k ( 1628007 ) on Tuesday May 04, 2010 @08:40PM (#32092862)
      There are a multitude of aftermarket CPU coolers which are much quieter than the stock ones from Intel or AMD. Some chips can even be run passive with the right heatsink. Take a look at the reviews on http://www.silentpcreview.com/ [silentpcreview.com]
    • Re: (Score:3, Interesting)

      by ozbird ( 127571 )
      Do both.

      I bought an Intel i7-860 recently and the supplied HSF is barely able to keep the core temperatures under 95 deg. C with eight threads of Prime95 running. Eek!! I replaced it with a cheap Hyper TX3 cooler (larger coolers won't fit with four DIMMs fitted), and it runs at least 20 degrees C cooler under the same conditions. The supplied fan is a little noisy under full load, but for gaming etc. it's not a problem.

      Turbo Boost is cute, but I've opted to overclock it at a constant 3.33GHz (up from
      • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Tuesday May 04, 2010 @10:13PM (#32093456) Journal

        predictable performance

        Predictable power-drain, you mean, and a predictable shortening of the life of your hardware -- assuming it doesn't just overheat and underclock itself, which I've seen happen a few times.

        CPU scaling has been mature for a while now, and it's implemented in hardware. Can you give me any real examples of it causing a problem? The instant I need that speed (for gaming, etc.), it's there. The rest of the time, I'd much rather it coast at 800MHz all around, especially on a laptop.

        with no temperature or stability issues. YMMV.

        Understatement of the year.

        Overclocking is a bit of a black art, for a number of reasons. First problem: How do you know it's stable? Or rather, when things start to go wrong, how do you know if it's a software or a hardware issue? The last time I did this was taking a 1.8GHz machine to 2.7GHz. I ran SuperPi, 3DMark, and a number of other things, and it seemed stable, but occasionally crashed. Clocked it back to 2.4GHz and it crashed less often, but there were occasionally subtle filesystem corruption issues -- which was much worse, because I had absolutely no indication anything was wrong (over months of use) until I found my data corrupted for no apparent reason. Finally set it back to the factory default (and turned on the scaling) and it's been solid ever since.

        Second problem: Even with the same chip, it varies a lot. All that testing I did is nothing compared to how the manufacturer actually tests the chip -- but they only test what they're actually selling. That means if they're selling you a dual-core chip that's really a quad-core chip with two cores disabled, it might just be surplus, the extra cores might be fine, but they haven't tested them. Or maybe they have, and that's why they sold it as a dual-core instead of quad-core.

        So even if you follow a guide to the letter, it's not guaranteed.

        I'm sure you already know all of the above, but I'm at the point in my life where, even as a starving college student, even as a Linux user on a Dvorak keyboard, it's much saner for me to simply buy a faster CPU, rather than trying to overclock it myself.

        • blah blah blah, get with the times, old folks with small UIDs. A hefty majority of the build-your-own-PC crowd overclocks.

          We test with a multitude of stress testing programs that test all parts and instructions of the architecture. We find the maximum frequency we can run at acceptable voltage and heat. There's a linear region of overclocking and an exponential region. Most people find the divergence point and sit there. Those with water cooling can go a little further.

          It pretty much is guaranteed, because 9

        • by ozbird ( 127571 )

          Predictable power-drain, you mean, and a predictable shortening of the life of your hardware -- assuming it doesn't just overheat and underclock itself, which I've seen happen a few times.

          Assume away. I'll repeat myself: at full load the replacement heatsink runs substantially cooler than the stock (and warrantied) Intel cooler. The Intel cooler at 90+ deg. C - barely within the thermal specs for the CPU - did not thermal-trip; why would the replacement cooler trip at 65 deg. C?! Your overclocking exam

    • The stock Intel coolers are designed to be economical and to meet the thermal requirements, not to be good.

      I use an Arctic Cooling Freezer 7 Pro. With my Q9550 I cannot make the fan spin up past the minimum, which is about 300rpm. The Intel board I use figures the CPU should maintain about a 20 degree thermal margin, meaning run 20 degrees below its rated max. If it is running hotter than that, the fan spins faster up to the max. If it is running cooler than that, the fan throttles back as low as the minimum. Idle,

    • by DAldredge ( 2353 )
      Then buy one of the low voltage / extreme low voltage variants that AMD and Intel both make.
    • by rwa2 ( 4391 ) *

      Get a goat.

      Then after it craps in your bed and chews up your linens and brays all night, get rid of it.

      Your computer will seem so much cooler and quieter after you get rid of the goat!

      (my current PC from 2007 is soooo much quieter than the 2002-era PC it replaced)

  • by macshome ( 818789 ) on Tuesday May 04, 2010 @08:03PM (#32092618) Homepage
    What's "apparent performance"? It's either faster or it's not.
    • by asdf7890 ( 1518587 ) on Tuesday May 04, 2010 @08:13PM (#32092684)

      What's "apparent performance"? It's either faster or it's not.

      You have obviously never worked in UI design! (though in this area I don't know who/what they would be trying to fool or how they would be trying to fool them/it so your response is probably quite right)

      • You have obviously never worked in UI design! (though in this area I don't know who/what they would be trying to fool or how they would be trying to fool them/it so your response is probably quite right)

        And apparently you have never worked in sentence design. ;)

    • by phantomcircuit ( 938963 ) on Tuesday May 04, 2010 @08:14PM (#32092690) Homepage

      Many programs simply do not benefit from multiple cores. This technology is basically a trade-off: partially disable one core and increase the frequency of the other.

    • by pwnies ( 1034518 ) <j@jjcm.org> on Tuesday May 04, 2010 @08:15PM (#32092708) Homepage Journal
      Not necessarily. If they're overclocking a single core while underclocking the rest, it may all balance out to an average core speed that's less than what it was. However, doing this may actually increase performance if a single app requires a lot of CPU time (and isn't threaded). In reality the total speed of the computer is being reduced, while the performance as viewed by the user is increasing.
  • A better explanation (Score:5, Informative)

    by Sycraft-fu ( 314770 ) on Tuesday May 04, 2010 @08:11PM (#32092668)

    The article kinda glosses over things. So a more detailed explanation of how Intel's turbo boost works:

    As stated, every core has a budget for the maximum heat it can give off and the maximum power it can use, as well as a max clock speed it can handle. But these limits aren't all reached at once; one ends up being the limiting factor. So Intel said: OK, we design a chip to always run at a given speed and stay under the thermal and power envelopes. However, if it isn't hitting those limits, we allow for speed increases. It can increase the speed of cores in 133MHz increments. If things go over, it throttles back down again.

    This can be done no matter how many cores are active, but the fewer that are active, the more headroom there is likely to be. On desktop chips it isn't a big deal, since they usually run fairly near their speed limit anyhow, so you may see only 1 or 2 133MHz increments at most. For laptop chips, in particular quad cores, it can be a lot more.

    The Intel i7-720QM runs at 1.6GHz and has 1/1/6/9 Turbo Boost multipliers. That means with all 4 cores running it can clock up at most 1 increment, to 1.73GHz. However, with only one running it can go to 2.8GHz, nine 133MHz steps up. That allows a processor that would otherwise run too hot for the laptop to go in there with some flexibility. A desktop Core i7-930 is 2.8GHz with 1/1/1/2 turbo mode. That means it'll clock up to 2.93GHz with 2-4 cores active, and 3GHz with 1. Much less flexible, since it is already running near its rated max clock speed.
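
    To make those numbers concrete: each turbo "bin" is one 133MHz step on top of the base clock, so the boosted speed is roughly base + bins x 133MHz. A quick sanity check in shell (rounding explains the 2.8GHz and 3GHz figures above):

        echo $((1600 + 9 * 133))    # i7-720QM, 1 core active: 2797, i.e. ~2.8GHz
        echo $((2800 + 2 * 133))    # Core i7-930, 1 core active: 3066, i.e. ~3.06GHz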

    Now, this is not the same as SpeedStep, which is their technology to downclock the CPU when it isn't in much use. Similar idea, but that's purely based on how hard the CPU is being asked to work, not on whether the system can handle the higher speeds.

    As an aside, I'll call BS on the claim that little uses multiple cores. Games these days are going heavily multi-core, at least dual core, some even more. The reason is, if nothing else, that the consoles are that way too. The Xbox 360 has 3 cores with 2 threads each. The PS3 has a weak CPU attached to 7 powerful SPUs. On a platform like that, you learn to do parallel or your games don't look as good. The same knowledge translates to the PC.

    However, there are still single-core things, hence Turbo Boost can be really useful, particularly in laptops. If the i7 quad were limited to 1.6GHz, few people would want it over one of the duals that can run at 2.53GHz or more; just too much loss in MHz to be worth it. Now, though, it can be the best of all worlds: a slower quad, a faster dual, whatever the apps call for, it handles.

    • by juuri ( 7678 )

      What I find interesting is that the current OSes most people use, with the exception of some real-time and big-iron custom dealies, are still built in such a monolithic way that it becomes more "profitable" to the user experience to ramp up single cores as opposed to having most cores running at the same speed.

      With the exception of some high-demand apps like games, extensive math apps, and stuff that could or should be offloaded to GPUs, desktop OSes don't need a VERY fast single core; they instead need lots of e

  • by John Hasler ( 414242 ) on Tuesday May 04, 2010 @08:31PM (#32092804) Homepage

    ...for more cache instead of more processors? Think of something with as many transistors as a hex core but with only two cores and the rest used for L1 cache! I'd suggest lots more registers as well, but that would mean giving up on x86.

    • by glsunder ( 241984 ) on Tuesday May 04, 2010 @09:03PM (#32093026)

      Larger caches are slower. Moving to a larger L1 cache would either require that the chip run at a lower clock rate, or increase the latency (increasing the length of time it takes to retrieve the data).

      As for registers, they did increase them, from 8 to 16 with x64. IIRC, AMD stated that moving to 16 registers gave 80% of the performance increase they would have gained by moving to 32 registers.

      • As for registers, they did increase them, from 8 to 16 with x64. IIRC, AMD stated that moving to 16 registers gave 80% of the performance increase they would have gained by moving to 32 registers.

        AMD64 retains the index registers &c and increases general purpose registers from 4 to 16, not 8 to 16. And to be totally critical, x86 has zero general-purpose registers, because many (most?) instructions require that operands be in specific registers, and that the result will be placed in a specific register. Those other four registers are specific-purpose as well, you usually can't even use them to temporarily hold some data because you need to put addresses in them to execute instructions. x86 has f

    • Re: (Score:3, Insightful)

      by petermgreen ( 876956 )

      L1 and sometimes L2 caches are small not because of die area but because there is a tradeoff between cache size and cache speed. Only the lowest level cache (L3 on the i series) takes significant chip area (and it already takes a pretty large proportion on both the quad and hex core chips).

    • Because the cache is shared on newer multicore processors, you essentially do get more cache. Cache is the largest user of real estate on the die. The added processors you get are just a bonus.

    • Turns out you can figure out, based on the kinds of programs you run, how much cache you need for good performance. With a sufficient amount of cache, you can get total effective throughput better than 90% of the throughput of the cache itself, so beyond that, more cache doesn't really get you anything. You find it is very much a logarithmic kind of function. With no cache, your performance is limited by the speed of the system memory. Just a little cache gives you a big increase. More makes sense, to a point

  • So they are bringing the Turbo Button back?

    Seriously, when I was looking at laptops, two laptops were pretty much the same in specs, except one had a "Turbo" CPU and the other's CPU ran all the time at the speed of the "boosted" one next to it...
    The price difference... $20.00!!! I'll pay an extra $20 to have FULL SPEED ALL THE TIME!

    • No, this is automatic at the hardware level -- not a manual switch. In fact, it's more or less useless on desktop machines (as someone explained well above) since the speed improvements are small. On laptops with >2 cores, however, it seems to be very, very nice: a fairly easy way to get both reasonably powerful parallel processing across multiple cores and fairly fast single-thread performance, without creating a level of heat that could damage components.

      Also, if you're overclocking a desktop (which

    • That turbo boosted CPU also had hyperthreading, AES-NI and a few newer instruction sets, and will last longer on battery. I can't imagine why you'd want battery on a laptop, though... it's all about teh megahurz!

  • Why? (Score:3, Insightful)

    by Bootarn ( 970788 ) on Tuesday May 04, 2010 @08:43PM (#32092886) Homepage
    Why this compromise? There's a huge need for developers to start thinking in terms of multicore CPUs. Offering them this solution is just postponing the inevitable. We need change now.
    • Re:Why? (Score:5, Insightful)

      by Shikaku ( 1129753 ) on Tuesday May 04, 2010 @09:03PM (#32093034)

      Because it's a pain in the ass and very hard for most coders.

      What we need is either a simple library for threading or a new language (like Haskell) for auto-parallelization.

    • by Animats ( 122034 ) on Tuesday May 04, 2010 @11:19PM (#32093826) Homepage

      When Intel came out with the Pentium Pro, they had a good 32-bit machine, and it ran UNIX and NT, in 32-bit mode, just fine. People bitched about its poor performance on 16-bit code; Intel had assumed that 16-bit code would have been replaced by 1995.

      Intel hasn't made that mistake again. They test heavily against obsolete software.

    • Why this compromise? There's a huge need for developers to start thinking in terms of multicore CPUs. Offering them this solution is just postponing the inevitable. We need change now.

      Legacy applications. Anyway, we need better multicore support at the OS level, just like we need GPU rendering support at the OS level. Leaving it up to applications programmers to figure out either for themselves is a total failure.

  • Does Intel's architecture adjust its management scheme based on CPU temperature? It'd be nice if having a better heat sink or a cooling system would allow the system to run even faster.

    I've also been wondering why, given the new poly-core systems, we don't see a mix of CPU types in a system. Throwing a bunch of slower but less complex and therefore less expensive cores in with a few premium cores would result in a better balance, allowing the system to concentrate heavy-load apps on the faster CPUs while

    • Re: (Score:3, Insightful)

      by John Hasler ( 414242 )

      > I've also been wondering why, given the new poly-core systems, we
      > don't see a mix of CPU types in a system.

      How would the OS decide which process to assign to which core?

      • Re: (Score:2, Interesting)

        by sznupi ( 719324 )

        Looking at the history of the CPU-time-to-running-time ratio for each process, or perhaps also at what typically causes spikes of usage, and moving the process to a faster core at that point? Plus a central DB of what to expect from specific processes.
        (I'm not saying it's necessarily a good idea; just that it could be not so hard, OS-wise)

      • Re: (Score:3, Funny)

        by TheSHAD0W ( 258774 )

        How about, every app that runs in the background or as a tray icon by default gets a cheesy core? :-P

      • Adding more CPUs isn't as simple as just putting another socket on the board. There are real issues to be dealt with. That's one of the reasons you see such jumps in price for things. Making a CPU and chipset that deal with only a single processor is easier than multiple. Also at a certain point you end up having to add "glue" chips which deal with all the issues of all the multiple CPUs.

        Ok well that is all for symmetric multiprocessing, where all CPUs are equal. Add on a whole new layer of complexity if th

    • by sznupi ( 719324 )

      Though we do see a mix, in a different way, with GPGPU adoption.

  • by digitalhermit ( 113459 ) on Tuesday May 04, 2010 @09:31PM (#32093190) Homepage

    Just wanted to clarify some of the misconceptions about the Turbo Boost...

    The technology is fairly simple. At its most basic level, we take the exhaust from the CPU fan and route it back into the intake of the system. If you're using Linux you can see the RPM increase by running 'top' (google Linux RPM for more information).

    The turbo itself is a fairly simple technology. As you're aware, we can use pipes to stream the outputs of different applications together. In the case of Linux, we pipe the stdout stream to the stdin (the intake) of the turbo (tr) which increases the speed and feeds it into a different application. For example, we can increase the throughput of dd as follows:

            dd if=/dev/zero | tr rpm | tee /proc/cpuinfo

    This will increase the CPU speed by feeding output from dd into the turbo (and increasing the rpm) and finally pumping it back into the CPU.

    On other platforms there are some proprietary solutions. For example, take the output of Adobe AIR to HyperV to PCSpeedup! then back into the processor.

    Hope this helps...

  • take full advantage of multicore processers

    Too bad people don't use even a single core to correct their mistakes.

  • Turbo boost and turbo core over-clock cores in use up to a thermal limit.

    Hardly cutting edge stuff.

  • by yanyan ( 302849 ) on Wednesday May 05, 2010 @03:16AM (#32095002)

    Correct me if I'm wrong, and maybe I'm missing something here, but I think it's possible to simulate this kind of functionality on Linux with a script. Cores 2 to N are taken offline (echo 0 > /sys/devices/system/cpu/cpuN/online), the "performance" governor is used for cpu0 (which makes it run at full clock), then the script monitors usage of cpu0 and brings the other cores online as load on cpu0 goes up. When load goes down, the other cores can be taken offline again.
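
    A rough sketch of that script, assuming root, cpufrequtils for the governor, and the standard sysfs hotplug interface; the thresholds and polling interval are arbitrary:

        #!/bin/bash
        # Poor man's Turbo Core: keep only cpu0 online until it gets busy.
        cpufreq-set -c 0 -g performance            # pin cpu0 at full clock
        for c in /sys/devices/system/cpu/cpu[1-9]*; do
            echo 0 > "$c/online"                   # park the other cores
        done
        while sleep 5; do
            idle=$(vmstat 1 2 | tail -1 | awk '{print $15}')   # idle % over 1s
            load=$((100 - idle))
            for c in /sys/devices/system/cpu/cpu[1-9]*; do
                if [ "$load" -gt 80 ]; then
                    echo 1 > "$c/online"           # busy: wake the spare cores
                elif [ "$load" -lt 20 ]; then
                    echo 0 > "$c/online"           # quiet: park them again
                fi
            done
        done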
