
The Apple News That Got Buried 347

Posted by kdawson
from the times-eight dept.
An anonymous reader writes, "Apple's Showtime event was all well and good, but the big news today was on Anandtech.com. They found that the two dual-core CPUs in the Mac Pro were not only removable, but that two quad-core Clovertown CPUs could be inserted in their place. OS X recognized all eight cores and everything worked fine. Anandtech could not release performance numbers for the new monster, but did report they were unable to max out the CPUs."
This discussion has been archived. No new comments can be posted.

  • by GrahamCox (741991) on Tuesday September 12, 2006 @11:31PM (#16093920) Homepage
    Typing this on an 8-core Mac Pro, I managed to get first post! Wow, it IS fast!
    • I guess (Score:5, Funny)

      by Anonymous Coward on Wednesday September 13, 2006 @12:19AM (#16094125)
      with 8 cores, that no one cares about Beowulf clusters anymore. :(
      • Re:I guess (Score:5, Funny)

        by heatdeath (217147) on Wednesday September 13, 2006 @01:40AM (#16094416)
        with 8 cores, that no one cares about Beowulf clusters anymore. :(

        I suppose you could run 8 VMs on the machine and make a Beowulf cluster out of those.
      • by Yvanhoe (564877)
        But just imagine...
      • Re: (Score:3, Informative)

        by hpcanswers (960441)
        Too many cores on the same bus will cause a lot of contention for memory access. There will always be a place for NUMA architectures, including clusters. That place is for the ultra-high end though, not for scientists who merely want a few processors for a Gaussian computation.
        • Re: (Score:3, Insightful)

          by Doctor Memory (6336)
          I wonder if that's why they "couldn't max out the CPUs" — the bus was saturated.
        • Re: (Score:3, Interesting)

          by camperslo (704715)
          Speaking of memory access, it seems Anandtech showed the Pro in the worst light. They pointed out (fairly) where the higher latency of FB-DIMMs slowed performance, but ran the benchmarks with only a pair of DIMMs instead of four, failing to show the boost from quad-channel memory access. Doubling memory bandwidth could have raised some of the scores.

          It would have been fun to see something better show the potential gains available from additional cores. A utility like Visual Hub [techspansion.com] can use mul
      • Re: (Score:3, Informative)

        by hey! (33014)
        with 8 cores, that no one cares about Beowulf clusters anymore. :(

        Which puts me in mind of the sex researchers Masters and Johnson, who forty years ago established under rigorous experimental conditions that degree of, uh, masculine endowment doesn't make any difference. Notwithstanding this, people always care about what they can't have.
    • That's because with 8 cores, it's more difficult to clog up the tubes with internets
  • CPU upgrade market (Score:2, Interesting)

    by BWJones (18351) *
    Hrmmm. Well, seeing as how I just took delivery of a new quad 3.0GHz Mac Pro, this dulls my bragging rights a bit. However, this bodes well for the CPU upgrade market. Companies like Sonnett, Newer, Powerlogix and OWC have had a tough time with the IBM/Freescale market because of poor performance, among other critical reasons. The old 1.0 GHz G4 I have at home as a media server is still an adequate system that currently holds a terabyte of storage, and I'd love to drop in a good 2.0 GHz or higher chip
    • Re: (Score:3, Interesting)

      by the_humeister (922869)
      However, this bodes well for the CPU upgrade market. Companies like Sonnett, Newer, Powerlogix and OWC have had a tough time with the IBM/Freescale market because of poor performance among other critical reasons.


      And it will still bode poorly for these companies, because now that the Mac is built from off-the-shelf components, so are the CPU upgrades.
      • by Pope (17780) on Wednesday September 13, 2006 @09:38AM (#16095684)
        There are enough old G4s lying around for the aftermarket to last a few more years. I'm keeping mine until the thing dies because I still need an OS 9 native environment; Classic still can't do everything, and is no longer available on x86 Macs.
    • "this dulls my bragging rights a bit."

      Not at all. The quad cores are not on the market yet, but when they come out, you'll be able to drop them into your box. I'm jealous.

      • by Yakman (22964)
        Sure, but he'll have to drop $2,000 or whatever it will cost to buy two of these puppies first. His credit card is probably already strained from buying a $5,000 desktop to start with :)

        In Australian dollars at least, it is over $1,000 extra to get the 3GHz vs the 2.66GHz CPUs in the Mac Pro - that's about US$750 at the current rate. So chances are these quad-core CPUs will be pricey.
        • In Australian dollars at least, it is over $1,000 extra to get the 3GHz vs the 2.66GHz CPUs in the Mac Pro - that's about US$750 at the current rate.

          FYI, this processor bump costs exactly US$800 (plus applicable tax, of course) from Apple for buyers in the US.

          Having always presumed it a foregone conclusion that the processors would be swappable, I opted for the standard 2.66GHz configuration and an eventual upgrade as it becomes necessary. Considering the current cost of FB-DIMMs with huge heat sinks (a

  • by ShaunC (203807) on Tuesday September 12, 2006 @11:33PM (#16093931)
    "Crimson and Clover."
    • by dangitman (862676)
      Isn't Clovertown where all the leprechauns hang out? 'Tis a fine place to spend your gold on some Guinness while watching some midget porn. Just don't get into an argument about pipe tobacco with one of those short-assed little shits.
    • Re: (Score:3, Funny)

      by Kenshin (43036)
      "Eight Arms To Hold You"

      Or "Octomac"
  • Great!! (Score:4, Interesting)

    by yabos (719499) on Tuesday September 12, 2006 @11:36PM (#16093942)
    I can't say I'm surprised that it works, since it's pin-compatible, but I think it's good news that it works so easily. It definitely bodes well for future upgrades.
    • You know, I thought I would never say this. You're right, this is great. The one MAJOR thing I did not like about Apple is that you can't change the hardware much. Back in high school I swore I would never own a Mac unless I could upgrade the CPU. With OS X and tweakable hardware, the Mac is looking more and more worthwhile.
  • Bash fork bomb (Score:5, Interesting)

    by Anonymous Coward on Tuesday September 12, 2006 @11:38PM (#16093951)
    Here's a guaranteed way to max out those CPUs:

    :(){ :|:& };:

    It's the ultimate performance benchmark! How fast does your system halt?
    • Can someone explain this to me so I don't have to run it to find out what it does? :D

      I imagine it forks processes like crazy, but, not knowing much Bash, I can't see how.
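For what it's worth, the de-obfuscated form is `bomb() { bomb | bomb & }; bomb`: it defines a function whose body pipes one copy of itself into another (spawning two processes per call) and backgrounds the pair, then invokes it, so the process count doubles every round. A depth-capped Python sketch of the same doubling (the cap and the log file are my additions, so it terminates instead of exhausting the process table):

```python
import os, tempfile

def bomb(depth, log):
    """Spawn two children per call, like ':|:', but stop at a fixed depth."""
    if depth == 0:
        return
    for _ in range(2):
        pid = os.fork()
        if pid == 0:                      # child: record itself, recurse, exit
            with open(log, "a") as f:     # "a" opens with O_APPEND: atomic short writes
                f.write("spawned\n")
            bomb(depth - 1, log)
            os._exit(0)
    for _ in range(2):                    # parent: reap both children
        os.wait()

log = tempfile.NamedTemporaryFile(delete=False).name
bomb(3, log)
total = sum(1 for _ in open(log))
print(total)   # 2 + 4 + 8 = 14 processes in three rounds of doubling
```

Without the depth check, the doubling continues until fork() starts failing, which is exactly the "Resource temporarily unavailable" behavior reported below.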
    • Re: (Score:3, Informative)

      by Anonymous Coward
      Actually, on MacOS X, I get 60 or so "-bash: fork: Resource temporarily unavailable" messages without any huge amounts of CPU usage.
    • by Amouth (879122)
      ahh so tempting.... it is like you are just making funny faces at the prompt..

      till at least --- it starts making funny faces back at you....
    • Amdahl's Law (Score:3, Interesting)

      The system is probably far too constrained elsewhere (RAM bandwidth etc) to effectively feed 8 cores.

      Amdahl's Law might have been written for Big Iron, but it applies even more so to smaller systems.
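Amdahl's Law makes that ceiling concrete: if a fraction p of the work parallelizes, n cores give at most 1/((1-p) + p/n) speedup. A quick sketch (the 0.90 figure is illustrative, not from the article):

```python
def amdahl_speedup(p, n):
    """Speedup on n cores when a fraction p of the work is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a 90%-parallel workload gets nowhere near 8x on 8 cores:
print(round(amdahl_speedup(0.90, 8), 2))   # 4.71
```

The serial 10% dominates; doubling the core count again only nudges the result toward the 1/(1-p) = 10x asymptote.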

    • Mac OSX kills it (Score:5, Informative)

      by goombah99 (560566) on Wednesday September 13, 2006 @12:32AM (#16094187)
      Trying this on Mac OS X, the bomb dies when the number of forks exceeds a certain depth, so it's harmless:

      $ :(){ :|:& };:
      bash: fork: Resource temporarily unavailable
      bash: fork: Resource temporarily unavailable
      bash: fork: Resource temporarily unavailable
      bash: fork: Resource temporarily unavailable
      bash: fork: Resource temporarily unavailable
      bash: fork: Resource temporarily unavailable
      bash: fork: Resource temporarily unavailable
      bash: fork: Resource temporarily unavailable

        Done

      • Re: (Score:3, Interesting)

        by Jetson (176002)
        It's harmless in the sense that it won't crash your computer, but it will still block that user from running any additional programs because it uses up their thread quota. Of course, if you can trick someone into running it as root....

        I remember writing stuff similar to this back in the '80s to trip the watchdog on the VAX when the system operator was away and the machine needed a reboot. I think the C code of choice was something like "main(){while(fork(fork())||!fork(fork()))fork();}". We'd get a few
    • Re: (Score:3, Informative)

      by pod (1103)
      Funny, but doubtful. Standard, off-the-shelf PCs are still plagued by relatively crappy bus bandwidth. They can't max them out because memory can't keep up feeding data to crunch.
  • Apple Cores (Score:5, Funny)

    by dotslashdot (694478) on Tuesday September 12, 2006 @11:39PM (#16093955)
    Shouldn't they be calling them "Apple Cores?"
  • by Desolator144 (999643) on Tuesday September 12, 2006 @11:43PM (#16093971)
    "they were unable to max out the CPUs" that is ridiculous! On PC's in VB it's pretty simple:
    dim Processor1Thread as new thread(addressof sub1)
    dim Processor2Thread as new thread(addressof sub2)
    Processor1Thread.start()
    Processor2Thread.start()
    dim x as integer
    sub sub1()
    for x = 0 to 1000000000000000
    end sub
    sub sub2()
    dim x as integer
    for x = 0 to 1000000000000000
    end sub
    and repeat for 6 other threads and subs. So they either proved it doesn't really work well at all or programming on a mac is impossibly hard...or they're lying to make it sound more dramatic. So whether they're lying about not maxing it out or they're lying and you just plain can't use all 8 cores at once, it's not as good as it sounds.
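The shape of that experiment (one busy loop per core) is fine regardless of dialect; here is a minimal sketch of the same idea in Python, using processes rather than threads because CPython threads share one interpreter lock (the loop bound is arbitrary):

```python
import multiprocessing as mp

def spin(n):
    # busy work: pins one core for as long as the loop runs
    total = 0
    for i in range(n):
        total += i
    return total

ctx = mp.get_context("fork")              # fork start method: no __main__ guard needed
with ctx.Pool(mp.cpu_count()) as pool:    # one busy worker per core
    results = pool.map(spin, [10**6] * mp.cpu_count())
print(results[0])   # 499999500000
```

While the pool is running, every core sits at 100%, which is the trivial sense of "maxing out" the machine; the article's point is about doing that with useful, well-threaded work.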
    • No, silly. The problem with your nitpick is that the first sub finishes its loop before the last one starts. That's the problem - one speedy Mac ;)
    • Re: (Score:3, Interesting)

      by jericho4.0 (565125)
      Your sig reads (to me) like you are a (younger) CS student. Assuming you are, here's what you're missing; in the real world, we need to max out those cores doing something productive, or we get in trouble. Very few users have apps that can use even more than one core usefully.
      • Not that it's hard to do that either - ripping 8 movies to XVID at once will certainly do it, and it's definitely "useful". Other options: 16-32 Xen virtual servers, 3D rendering, etc.

        There are a lot of tasks that parallelize nicely. There are many that don't.
      • Re: (Score:3, Funny)

        by Desolator144 (999643)
        I'm 19, been in college since I was 17 cuz they made me go early since I was so smart. And forget that CS theory bullshit, the department is called IT and that's what's written on the degree. People that go to 4 year colleges for programming are beyond stupid and I've heard many stories of how all that theory and little experience forced them to go to my college for a year before anyone would hire them. But gee, at least they know when C++ was invented and how they decided to name memory addresses. And
        • by SoCalChris (573049) on Wednesday September 13, 2006 @02:22AM (#16094510) Journal
          Sounds like you're going to something like DeVry [devry.edu], correct?

          Here's a hint... Most companies won't give a DeVry graduate any more consideration than someone without a degree. In fact, many companies will take someone who is self-taught without a degree over a DeVry graduate.

          And forget that CS theory bullshit
          Good luck with ever being more than a code monkey. If you don't understand the theory behind programming, you'll never do more than writing basic code that conforms to the specifications that the architects gave you.

          P.S. My sig says that because the teacher, a 15 year programming veteran, and some other crazy expert with natural skills like me all couldn't design the project we were working on as fast as I could and only one other person's was virtually crash proof.
          If a second-year student is writing better code than the teacher, that says a lot about the school. That goes back to what I said about how most companies don't give much (if any) weight to a degree in "PC programming/Web Development with a certificate in Web Design", because the types of schools that give those out are usually not of the highest caliber.

          And I'm not trying to be a dick, but drop the attitude; you're not the super programmer that you think you are. Relax, and pay attention to what others are telling you, you'll learn something.

          ps... Graduating high school and starting college at 17 isn't all that special, tons of people do that.
          • by ceoyoyo (59147)
            I had some friends who did two year certificate programs at a local college with a VERY good CS program. Anyone who graduated from that program was almost guaranteed a job at a certain darling of the game development world.

            These guys told me a story once. Some hotshot with a degree from DeVry was hired one day. He was fired within two weeks for incompetence.

            I'm always suspicious of an institution of higher education that finds it necessary to advertise on TV, radio and by SPAM!
          • GP claims correctness because he was one of the best programmers at his school, and he started school at 17. I started university at 15 and similarly out-performed (most of) the (largely mediocre) students at my (less-than-prestigious) university as well as many of the professors. Ergo, if we assume the GP's correctness, my opinions must carry equal or greater weight than the GP's, by his own arguments.

            However, I agree with the parent and think the GP is full of crap. This contradicts the starting assump
        • by gbulmash (688770) *
          "Or maybe they got lucky and wasted thousands of dollars on learning about Shakespear, atoms, Africa, grammar,"

          The reason colleges make you take all those stupid classes is to help round out your education, so you learn to think in a variety of different ways and learn different methods of analysis... at least at good colleges. If you really want to be a better programmer, take a class on the philosophy of language.

          "cuz they made me go early since I was so smart."

          Book smart, life foolish. You com
          • Re: (Score:3, Funny)

            by Desolator144 (999643)
            way to make the 4 year assumption there. I'm going to write and sell software myself. I looked at the badly designed crap that's out there and decided to become a programmer because I can do infinitely better. That same theory applied to computer repair and that business is running pretty well for me at the moment too.
            And 4 year colleges rerun all that info from high school and middle school because they assume you paid no attention and must have cheated on the SAT/ACT's or something to get in. It's an
        • by tolldog (1571)
          Just a guess: they don't teach English there, do they? And I want to guess you skipped typing when you left high school early and went straight on to "college".

          I think what they meant in the article is that they have no applications that thread to 8 threads nicely.
          It's easy to max out 8 CPUs/cores with 8 different tasks (or 9-10 tasks if you want to take advantage of context switches and iowait). It's harder to find something that scales past 4 threads because most programmers just don't program for it. A
    • by Anonymous Coward on Wednesday September 13, 2006 @02:22AM (#16094508)
      I run Blender (www.blender3d.org), and the latest version supports 8 CPUs. When integrated with POV-Ray (blend2pov), you get really nice rendering of very powerful models and can animate the lot (plus add hair/cloth/particle effects) plus sound/animation, etc. When you add Catmull-Clark subdivisions and advanced effects, and POV-Ray the lot at 24 frames per second, your CPUs can be pinned at 100% for literally hundreds of hours at a crack. My single 1.8 GHz processor can easily be pinned working on the same job for months on end (6 at least). Double the processor speed and you could look at 3 months. Now divide by 8 processors, and 90 days turns into 11.25 days, pinned at 100%. Now I take the animation and add 3 more scenes, and we are back up to 45 days of rendering with 8 cores twice as fast as what I am running now. There are literally a million computer applications that suck time hard. Over at Pixar, one frame from Finding Nemo took 4500 computers over 90 hours to render. Supercomputers with hundreds of thousands of processors (BlueGene/L, etc.) are usually capped to not run jobs that take more than two weeks. Short answer: they did not try very hard to 'max the processors'.
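The parent's render-time arithmetic checks out, taking six months as roughly 180 days:

```python
single_core_days = 6 * 30                    # ~6 months pinned on one 1.8 GHz core
doubled_clock_days = single_core_days / 2    # twice the clock: ~90 days
eight_core_days = doubled_clock_days / 8     # spread across 8 cores
print(eight_core_days)                       # 11.25
```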
    • Re: (Score:3, Funny)

      by GaryPatterson (852699)
      Apart from a missing 'next' statement, why wouldn't any half-decent compiler just optimise out the pointless empty looping?

      I'm pretty sure you've got to do something in a loop or it'll be dropped by the compiler as a trivial optimisation. But hey! What do I know after years of VB, VBA programming, in addition to *real* languages like C++ or *useful* things like SQL? I'm a babe in the woods compared to a Uni student full of piss and vinegar!

      So - when will you debunk AnandTech? Clearly you're more knowledgea
  • by BandwidthHog (257320) <inactive.slashdo ... icallyenough.com> on Tuesday September 12, 2006 @11:43PM (#16093976) Homepage Journal
    The NeXT architecture of OS X has always been more “at ease” with multiple CPUs than various versions of NT. Not that NT can’t handle them, but that OS X does a better job of dividing tasks sanely to more fully utilize the chips and from what I’ve heard is much more capable once you move past four. That being the case, as multiple CPUs/cores become more commonplace, I think OS X will end up with the reputation of being the faster of the two.

    • Re: (Score:3, Informative)

      by Sycraft-fu (314770)
      Windows divides work just fine across multiple cores. It just spreads threads around, and can even move things core to core (or CPU to CPU, as the case may be) as needed. Remember, there ARE 32-processor versions of Windows. I have a friend who works on them; they do large SQL databases on 32-processor Itanium Superdomes (HP) running Windows.

      I've never seen any good benchmarking on it, probably because there haven't been higher order Intel Macs until recently, but I'm going to bet you find little difference when running apps
      • Remember there ARE 32 processor versions of Windows. I have a friend who works on them, they do large SQL databases on 32-processor Itanium Superdomes (HP) running Windows.

        I thought that the >4 CPU Windows systems were, in essence, specially tweaked systems to make it all worthwhile and that standard setups couldn’t really make effective use of more than four processors. If so, I stand corrected. *looks around* Err, sit corrected, sorry.

        • Re: (Score:3, Informative)

          by Osty (16825)

          I thought that the >4 CPU Windows systems were, in essence, specially tweaked systems to make it all worthwhile and that standard setups couldn't really make effective use of more than four processors. If so, I stand corrected. *looks around* Err, sit corrected, sorry.

          Multi-core restrictions on Windows versions are mostly artificial. For example, 8-CPU systems run just fine on Windows 2003 Advanced Server without any special tweaking. The system the grandparent referred to must have been runnin

        • Re: (Score:2, Informative)

          by Foolhardy (664051)
          That was ten years ago. A lot has been done for concurrency since then.

          For example, Windows Server 2003 Kernel Scaling Improvements [72.14.203.104] (Google MS Word->HTML version)
        • by Sycraft-fu (314770) on Wednesday September 13, 2006 @04:54AM (#16094834)
          Well, according to MS, Windows has no problems supporting 32 processors for 32-bit software and 64 processors for 64-bit software. Given versions of Windows are limited to a lower number of processors, though not cores: one processor is one processor regardless of cores by MS's licensing. Indeed, you'll find XP Pro, while only supporting 2 processors, will happily run 2 dual-core processors and see and use all 4 cores.

          You have to remember that Windows is not static, they improve it all the time. They rolled out a 32-processor version back with Windows 2000. It's called Data Center Edition. You can't buy it over the counter, only from OEMs that make systems with tons of processors. You've likely never encountered it since it's fairly rare to see systems with that many processors. Generally you cluster smaller systems rather than get one large one. However there are cases where the big iron is called for, hence why HP sells them.

          Also, I think multiprocessing in the OS is less complicated than many people make it out to be. The OS isn't where the magic has to happen; it's the app. The OS already has things broken up for it in the form of threads and processes. A thread, by definition, can be executed in parallel. So the OS simply needs to decide on the optimum placement of all the threads it's being asked to run on its cores. Also, it doesn't have to stick with where it puts them (unless software requests a certain CPU); it can move them if there's reason to. The hard part is in the app: breaking the work into pieces that can be processed at the same time and keeping them all in sync.
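The "unless software requests a certain CPU" aside is visible from user space; on Linux, for example, a process can pin itself to one core and then hand placement back to the scheduler (a sketch, not Windows-specific):

```python
import os

original = os.sched_getaffinity(0)            # cores this process may run on
one_core = {min(original)}
os.sched_setaffinity(0, one_core)             # pin: the scheduler may no longer migrate us
print(os.sched_getaffinity(0) == one_core)    # True
os.sched_setaffinity(0, original)             # unpin: give placement back to the scheduler
```

Absent such a request, the scheduler is free to shuffle threads among cores exactly as described above.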

          My guess is that it's mostly FUD floated by anti-Windows people. There is, unfortunately, a lot of that going around. For example, it was reported on /. that Vista won't support OpenGL (http://slashdot.org/article.pl?sid=05/08/06/177251). Well, it turns out this isn't just false, but the exact opposite of the truth. Vista indeed supports OpenGL in three different ways:

          1) The method mentioned there, as an emulation that is limited to 1.4 and isn't that fast. Bonus is it works on any system with Vista graphics drivers, even if the manufacturer doesn't provide GL.

          2) Old style ICD. This is the kind of driver used on XP today. This more or less takes over the display, and thus will turn off all the nifty effects while active. The bonus is there's little to update. However this is probably not going to be used because there's...

          3) The new ICD. This provides full, hardware accelerated GL and is fully compatible with the shiny new compositing engine. For that matter, you can add any API you want via an ICD that works with the new UI.

          So not only does the OS have the ability to support GL, it can do so better than XP can, because GL can be used in the same way as DX. However, to read the /. story, you'd think they'd all but disabled hardware GL in their OS. As it stands, nVidia has beta drivers with a GL ICD. I haven't tried them, but the release notes suggest it's a new ICD that works with the compositor. ATi's drivers don't have an ICD, though ATi claims to be working on it and says they'll have it for launch. Intel doesn't have any driver status for Vista on their website.

          When it comes to Windows info, you do need to check sources, as with anything else. There's plenty of misinformation floating around. Often people who don't like Windows believe they know what they are talking about so post incorrect information.
      • by Eivind (15695)
        Why would anyone want to do that ? Seriously.

        Yes, you "can", in the same way that you *can* put peas up your nose. It's not terribly useful though.

        For all practical purposes, Windows has one advantage today: larger availability of enduser-software. That's it.

        There's zero advantage, and a lot of disadvantage to running Windows on a big-iron database-server.

        • Re: (Score:3, Insightful)

          by Sycraft-fu (314770)
          Well I'm not going to justify their business case to you since I don't work for them. However, I'm going to go out on a limb here and say you've got no idea what you are talking about. I'm going to guess you probably do not develop enterprise telecom apps for a living. This is, in fact, what the company my friend works for does (no I'm not going to name them). I don't know why they use what they use, I don't work for them, however I'm going to guess, given that they do a good job making money, that their ch
    • by gad_zuki! (70830)
      >I think OS X will end up with the reputation of being the faster of the two.

      Maybe, but in TFA they're running XPSP2.
    • Re: (Score:2, Insightful)

      by dfghjk (711126)
      "...but that OS X does a better job of dividing tasks sanely to more fully utilize the chips and from what I've heard is much more capable once you move past four."

      Wonder where you heard that.

      "That being the case, as multiple CPUs/cores become more commonplace, I think OS X will end up with the reputation of being the faster of the two."

      Reputation maybe, after all OS X has the reputation of being God's gift in certain circles. Somehow I think reality will be different just as it is now. NT's design is vast
    • Re: (Score:3, Informative)

      by drsmithy (35869)
      The NeXT architecture of OS X has always been more "at ease" with multiple CPUs than various versions of NT.

      Your evidence for this being what, exactly? Tea leaves?

      NeXT didn't even *support* multiple processors until Apple's OS X reinvention, whereas NT was designed from the ground up with multi-CPU machines in mind and has supported them since its first release in 1993.

      Not that NT can't handle them, but that OS X does a better job of dividing tasks sanely to more fully utilize the chips and from what

  • by brundog (675895) on Tuesday September 12, 2006 @11:45PM (#16093986)
    ..."but did report they were unable to max out the CPUs."


    Try installing Vista.

  • Summary is wrong. (Score:4, Informative)

    by Anonymous Coward on Tuesday September 12, 2006 @11:55PM (#16094026)
    From summary:
    Anandtech could not release performance numbers for the new monster, but did report they were unable to max out the CPUs.


    From TFA:
    We definitely had a difficult time stressing 8 cores in the Mac Pro, but if you have a handful of well threaded, CPU intensive tasks then a pair of slower Clovertowns can easily outperform a pair of dual core Woodcrest based Xeons.


    There's a big difference between "unable to" and "had a difficult time". When I first read the summary I thought that there must be some problem with the system if they're unable to get all the CPUs under full load.
    • Re:Summary is wrong. (Score:5, Interesting)

      by adam31 (817930) <adam31.gmail@com> on Wednesday September 13, 2006 @03:07AM (#16094613)
      I thought that there must be some problem with the system if they're unable to get all the CPUs under full load.


      It's actually really easy to do if your memory system isn't meant to service 8 cores. And the article pretty much backs this up, every time the quad cores fail to shine it's blamed on the memory. But to me, the really interesting aspect of this is that they always blame FB-DIMM, which gains bandwidth by sacrificing latency. They even go so far as to suggest:

      if Apple were to release a Core 2 (Conroe/Kentsfield) based Mac similar to the Mac Pro, it could end up outperforming the Mac Pro by being able to use regular DDR2 memory.

      So, I think regular DDR2 @ 667 = 5.4 GB/s... divided amongst 8 cores is just ~675 MB/s per core. It seems insane to think that would work (maybe it would; maybe my numbers are wrong, also). If you want to attack latency but simply can't give up the bandwidth, wouldn't the SMP model work better: just swap out the L2-miss-stalled thread, and run the other full bore. Now you've reduced the problem to distributing your register bank among active threads. Well, I think that's how video cards do it, and memory latency is their enemy #1.

      In any event, there you have it. The performance pendulum has left GHz, is briefly swinging toward more cores, but appears headed now toward memory systems. Does anyone else think it's funny that L1 is still just 32 KB? (Oughta be enough for anybody.)
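For what it's worth, the parent's per-core figure is easy to sanity-check, assuming the 5.4 GB/s refers to a single 64-bit DDR2-667 channel:

```python
transfers_per_sec = 667e6                             # DDR2-667: 667 MT/s
bytes_per_transfer = 8                                # one 64-bit channel
bandwidth = transfers_per_sec * bytes_per_transfer    # ~5.3 GB/s total
per_core = bandwidth / 8                              # split evenly across 8 cores
print(round(per_core / 1e6))                          # 667 (MB/s per core)
```

A dual-channel configuration would double both numbers, which is the same quad-channel argument made elsewhere in the thread about the Mac Pro's FB-DIMM riser cards.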

  • XP 64? (Score:4, Interesting)

    by TheSHAD0W (258774) on Wednesday September 13, 2006 @12:17AM (#16094118) Homepage
    I notice this machine was tested with XPSP2. Are the Macs able to run the 64-bit version of XP?
    • Sorta. In theory, you could put Win64 on it, but all Apple's drivers are 32-bit for Windows. So you can do it, but you lose a lot of stuff.
    • Re: (Score:3, Informative)

      by Kunimodi (1002148)
      Yes, and it runs very well (drivers for all major devices). Note that installing XP of any sort on the Mac Pro is a bit of an endeavor currently, due to the need to slipstream drivers, or you get 1/20th of the SATA performance. http://forums.macrumors.com/showthread.php?t=231901 [macrumors.com]
    • Re: (Score:2, Informative)

      by chriscappuccio (80696)
      With a Core 2 chip, sure. It has the 64-bit mode, but the 'Core' that Apple shipped in the first Intel Macs did not have a 64-bit mode.
  • by constantnormal (512494) on Wednesday September 13, 2006 @07:08AM (#16095117)
    ... an 8-CPU monster with only 2GB of RAM and a standard disk setup.

    The poor baby's probably starved for data to crunch, having only 256MB of RAM per CPU and apparently just the standard disk setup.

    And it appears that they left the default OS X limit of 100 tasks per user in place as well.

    Gotta open things up to let those puppies breathe!
