The Apple News That Got Buried

Posted by kdawson
from the times-eight dept.
An anonymous reader writes, "Apple's Showtime event was all well and good, but the big news today was on Anandtech.com. They found that the two dual-core CPUs in the Mac Pro were not only removable, but could be swapped for two quad-core Clovertown CPUs. OS X recognized all eight cores and everything worked fine. Anandtech could not release performance numbers for the new monster, but did report they were unable to max out the CPUs."
  • CPU upgrade market (Score:2, Interesting)

    by BWJones (18351) * on Tuesday September 12, 2006 @11:31PM (#16093922) Homepage Journal
    Hrmmm. Well, seeing as how I just took delivery of a new quad 3.0 GHz Mac Pro, this dulls my bragging rights a bit. However, this bodes well for the CPU upgrade market. Companies like Sonnett, Newer, Powerlogix and OWC have had a tough time with the IBM/Freescale market because of poor performance among other critical reasons. The old 1.0 GHz G4 I have at home as a media server is still an adequate system that currently holds a terabyte of storage space, and I'd love to drop a good 2.0 GHz or higher chip in it for a reasonable cost. There are some 1.8 GHz chips out there that may do the job just fine, but the market has been stuck at 1.8 GHz for quite some time.

    And yes, my blog is down until we get a new transformer installed at my building...... Hopefully tomorrow by noon as they are installing a new one as we speak.

  • Great!! (Score:4, Interesting)

    by yabos (719499) on Tuesday September 12, 2006 @11:36PM (#16093942)
    I can't say I'm surprised that it works since it's pin-compatible, but I think it's good news that this works so easily. It definitely bodes well for future upgrades.
  • Bash fork bomb (Score:5, Interesting)

    by Anonymous Coward on Tuesday September 12, 2006 @11:38PM (#16093951)
    Here's a guaranteed way to max out those CPUs:

    :(){ :|:& };:

    It's the ultimate performance benchmark! How fast does your system halt?
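
    (For the curious: the shell one-liner defines a function named ":" that pipes itself into itself and backgrounds the result, so the number of processes grows exponentially. A rough C equivalent, a sketch rather than an exact translation, looks like this. Don't run either version on a machine you care about.)

    /* Rough C analogue of :(){ :|:& };: -- keep forking until the
     * per-user process limit (ulimit -u) is exhausted.  Every child
     * keeps looping too, so the process count grows exponentially. */
    #include <unistd.h>

    int main(void)
    {
        for (;;)
            fork();
        return 0;   /* never reached */
    }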
  • by the_humeister (922869) on Tuesday September 12, 2006 @11:39PM (#16093957)
    However, this bodes well for the CPU upgrade market. Companies like Sonnett, Newer, Powerlogix and OWC have had a tough time with the IBM/Freescale market because of poor performance among other critical reasons.


    And it will still bode poorly for these companies: now that the Mac is all off-the-shelf components, so are the CPU upgrades.
  • by BandwidthHog (257320) <inactive.slashdo ... icallyenough.com> on Tuesday September 12, 2006 @11:43PM (#16093976) Homepage Journal
    The NeXT architecture of OS X has always been more “at ease” with multiple CPUs than various versions of NT. Not that NT can’t handle them, but OS X does a better job of dividing tasks sanely to more fully utilize the chips, and from what I’ve heard it is much more capable once you move past four. That being the case, as multiple CPUs/cores become more commonplace, I think OS X will end up with the reputation of being the faster of the two.

  • by jericho4.0 (565125) on Wednesday September 13, 2006 @12:05AM (#16094069)
    Your sig reads (to me) like you are a (younger) CS student. Assuming you are, here's what you're missing: in the real world, we need to max out those cores doing something productive, or we get in trouble. Very few users have apps that can usefully use even more than one core.
  • by HaloZero (610207) <protodekaNO@SPAMgmail.com> on Wednesday September 13, 2006 @12:05AM (#16094070) Homepage
    To be perfectly honest, I can see an immediate application for this where I work.

    We're introducing a virtual infrastructure very quickly, using Xserve RAIDs as our storage LUNs. That being said, with VMware's soon-to-be Mac OS X offering, this would give our Mac-toting engineers the ability to build a virtual machine locally before deploying it into the wider infrastructure. That is a truly valuable tool.

    There are three of us at work who heavily rely on our non-Mac machines - a pair of us doing some reasonably heavy VM work. I'd love to transition to a straight Mac platform (not Mac OS X + SuSE + XP). It's such a pain in the ass to have to suspend one and start another constantly because my performance starts to block. It's not disk I/O - the I/O never pegs (most of the stuff is resident, anyway). The RAM issue can be mitigated by adding more RAM (4GB currently). More than once I've watched procmon show me that the vmx process is pegged on the
  • Amdahl's Law (Score:3, Interesting)

    by EmbeddedJanitor (597831) on Wednesday September 13, 2006 @12:10AM (#16094090)
    The system is probably far too constrained elsewhere (RAM bandwidth etc) to effectively feed 8 cores.

    Amdahl's Law might have been written for Big Iron, but it applies even more so to smaller systems.
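
    (To put rough numbers on that: Amdahl's Law says the best-case speedup on n cores is 1 / ((1 - p) + p/n), where p is the fraction of the work that can run in parallel. The fractions below are made-up illustrations, not measurements of any real workload.)

    /* Toy Amdahl's Law calculator: speedup = 1 / ((1 - p) + p / n).
     * The parallel fractions here are assumptions, for illustration only. */
    #include <stdio.h>

    int main(void)
    {
        double fractions[] = { 0.50, 0.90, 0.99 };  /* assumed parallel fractions */
        int n = 8;                                  /* cores */

        for (int i = 0; i < 3; i++) {
            double p = fractions[i];
            printf("p = %.2f -> speedup on %d cores = %.2fx\n",
                   p, n, 1.0 / ((1.0 - p) + p / n));
        }
        return 0;
    }

    Even a job that is 99% parallel tops out around 7.5x on eight cores, and memory contention effectively shrinks p further.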

  • XP 64? (Score:4, Interesting)

    by TheSHAD0W (258774) on Wednesday September 13, 2006 @12:17AM (#16094118) Homepage
    I notice this machine was tested with XP SP2. Are the Macs able to run the 64-bit version of XP?
  • Re:Mac OSX kills it (Score:3, Interesting)

    by Jetson (176002) on Wednesday September 13, 2006 @01:52AM (#16094447) Homepage
    It's harmless in the sense that it won't crash your computer, but it will still block that user from running any additional programs because it uses up their thread quota. Of course, if you can trick someone into running it as root....

    I remember writing stuff similar to this back in the 80's to trip the watchdog on the VAX when the system operator was away and the machine needed a reboot. I think the C code of choice was something like "main(){while(fork(fork())||!fork(fork()))fork();} ". We'd get a few dozen students to run it at the same time and the machine would reboot.
  • by Anonymous Coward on Wednesday September 13, 2006 @02:22AM (#16094508)
    I run blender (www.blender3d.org), and the latest version supports 8 CPUs. When integrated with povray (blend2pov), you get really nice rendering of very powerful models, and you can animate the lot (plus add hair/cloth/particle effects) plus sound/animation, etc. When you add Catmull-Clark subdivisions and advanced effects, and povray the lot at 24 frames per second, your CPUs can be pinned at 100% for literally hundreds of hours at a crack.

    My single 1.8 GHz processor can easily be pinned working on the same job for months on end (6 at least). Double the processor speed and you could be looking at 3 months. Now divide by 8 processors, and 90 days turns into 11.25 days -- pinned at 100%. Now I take the animation and add 3 more scenes, and we are back up to 45 days of rendering with 8 cores twice as fast as what I am running now.

    There are literally a million computer applications that suck time hard. Over at Pixar, one frame from Finding Nemo took 4500 computers over 90 hours to render. Supercomputers with hundreds of thousands of processors (BlueGene/L, etc.) are usually capped to not run jobs that take more than two weeks.

    Short answer: they did not try very hard to 'max the processors'.
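
    (The scaling above assumes perfectly linear speedup with both clock speed and core count, which is generous. Worked out for a hypothetical six-month render:)

    /* Back-of-the-envelope render-time scaling, assuming perfect linear
     * speedup with clock speed and core count (a generous assumption). */
    #include <stdio.h>

    int main(void)
    {
        double base_days   = 180.0;  /* ~6 months on a single 1.8 GHz core */
        double clock_ratio = 2.0;    /* double the clock speed */
        int    cores       = 8;
        int    scenes      = 4;      /* original scene plus three more */

        double per_scene = base_days / clock_ratio / cores;  /* 11.25 days */
        printf("one scene : %.2f days\n", per_scene);
        printf("%d scenes  : %.2f days\n", scenes, per_scene * scenes);  /* 45 days */
        return 0;
    }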
  • Re:Summary is wrong. (Score:5, Interesting)

    by adam31 (817930) <adam31.gmail@com> on Wednesday September 13, 2006 @03:07AM (#16094613)
    I thought that there must be some problem with the system if they're unable to get all the CPUs under full load.


    It's actually really easy for that to happen if your memory system isn't meant to service 8 cores. And the article pretty much backs this up: every time the quad cores fail to shine, it's blamed on the memory. But to me, the really interesting aspect of this is that they always blame FB-DIMM, which gains bandwidth by sacrificing latency. They even go so far as to suggest:

    if Apple were to release a Core 2 (Conroe/Kentsfield) based Mac similar to the Mac Pro, it could end up outperforming the Mac Pro by being able to use regular DDR2 memory.

    So, I think regular DDR2 @ 667 = 5.4 GB/s... divided amongst 8 cores is just 677 MB/s per core. It seems insane to think that would work (maybe it would, maybe my numbers are wrong also). If you want to attack latency but simply can't give up the bandwidth, wouldn't the SMT model work better -- just swap out the L2-miss-stalled thread and run the other full bore? Now you've reduced the problem to distributing your register bank among active threads. Well, I think that's how video cards do it, and memory latency is their enemy #1.

    In any event, there you have it. The performance pendulum has left GHz, is briefly swinging toward more cores, but appears headed now toward memory systems. Does anyone else think it's funny that L1 is still just 32 KB? (oughta be enough for anybody)
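
    (Sanity-checking the numbers above, with theoretical peak figures only; a dual-channel DDR2 setup would double the totals, and real sustained bandwidth is lower.)

    /* Peak bandwidth of one DDR2-667 channel, split evenly across 8 cores.
     * Theoretical peak only; dual-channel doubles it, sustained is lower. */
    #include <stdio.h>

    int main(void)
    {
        double transfers_per_sec = 667e6;  /* DDR2-667: ~667 million transfers/s */
        double bytes_per_xfer    = 8.0;    /* 64-bit channel */
        int    cores             = 8;

        double total_mb = transfers_per_sec * bytes_per_xfer / 1e6;
        printf("channel peak : %.0f MB/s (~%.1f GB/s)\n", total_mb, total_mb / 1000.0);
        printf("per core     : %.0f MB/s\n", total_mb / cores);
        return 0;
    }

    That works out to roughly 5.3 GB/s for a single channel and about 670 MB/s per core, in the same ballpark as the figures quoted above.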

  • by constantnormal (512494) on Wednesday September 13, 2006 @07:08AM (#16095117)
    ... an 8-CPU monster with only 2 GB of RAM and a standard disk setup.

    The poor baby's probably starved for data to crunch, having only 256 MB of RAM per CPU and apparently just the standard disk setup.

    And it appears that they left the default OS X limit of 100 tasks per user in place as well.

    Gotta open things up to let those puppies breathe!
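
    (If you want to check that limit yourself: on Darwin the per-user process cap is exposed through sysctl. The sysctl name below is the one Darwin uses for the per-user limit; the exact default varies by OS X release, and the shell equivalents are "sysctl kern.maxprocperuid" or "ulimit -u". This is just a sketch of reading it.)

    /* Read the per-user process cap on Darwin via sysctlbyname(). */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    int main(void)
    {
        int maxproc = 0;
        size_t len = sizeof(maxproc);

        if (sysctlbyname("kern.maxprocperuid", &maxproc, &len, NULL, 0) == 0)
            printf("kern.maxprocperuid = %d\n", maxproc);
        else
            perror("sysctlbyname");
        return 0;
    }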
  • by Anonymous Coward on Wednesday September 13, 2006 @07:31AM (#16095164)
    Listen dude, you sound like an attention whore when you finish a post on Slashdot informing everyone that your blog is down but you expect it up soon. Come on, who doesn't have a blog? Do you really think anyone is going to care? Do you really think anyone has ever heard of you? Or did you just want to say, "Hey everyone look at me! I have a blog. It will be back up soon so give it a look."

    Even if there really is some person on Earth that found your blog being down distressing enough to email you about it, I'm sure you put their fears at ease with a return email right? I'm thinking it's pretty safe to say that your responsibility to the public ends there. I doubt someone is going to say, "Hey, I saw your post on Slashdot yesterday, what an ass, you didn't even update us on the status of your blog! We're on pins and needles here. If I give you my cell number would you please give me a call the moment it's back up?"

    So like the other guy said, nobody cares that your blog is down. To go one further, nobody cares that you even have a blog so don't fool yourself into thinking you are going to impress anyone by telling them you have a blog. "Hey baby, how's about we go back to my place for a drink? We can get a little more comfortable and have a nice long talk, right after I check my blog. Oh yeah, you heard me right baby. That's right, I'm a blog star."

    As for calling into question the validity of a person's opinion because they posted AC, Slashdot should have policies to protect people like you from yourself. How ignorant is it to post on Slashdot, especially engaging in any sort of confrontational banter, using your name and having links to your blog in your profile, where someone can easily google all sorts of the critical information you have sprawled all over the web in just minutes? While you can know nothing more than that I exist, I can know everything about you. The person with a problem isn't the person posting AC; it's the person with their full name, address, phone number and place of employment in their sig. Whether you know it or not, that's what you have been doing every time you post on Slashdot.

    Welcome to the internet. Please stow all personally identifiable information in public forums where it is likely you will draw unwanted attention to yourself. Keep your hands, feet, and self promoting blogs to yourself at all times.
  • by Anonymous Coward on Wednesday September 13, 2006 @09:15AM (#16095540)
    You have no idea how nice I was, I think most people that read this little thread will see that.

    You question the productivity of my post as well as the motivation behind it. Very well, if you insist, I will support my statement and the contribution it was intended to make. In your first post you unethically advertised your blog. You were called on it. You then attacked the person that called you on it and attempted to deny your self-promotion by explaining it away as a public service announcement. It was laughable at best.

    Your actions personally made me sick. Citing web statistics in an attempt to inflate your image. While I'm sure they inflated nothing more than your ego, they are most irrelevant. If you feel your blog being down is of general interest to Slashdot visitors, submit it as a news story. Quite frankly you are not fooling anyone, you did not post about your blog out of concern for your audience whom you think the majority of Slashdot is a member of. My contempt for your behavior led me to refute your claims of public service.

    It was my desire to mitigate the vile stench your post brought with a somewhat lighthearted ribbing. I think I accomplished that task. I also wanted to refute your implication that nothing credible can be said in anonymity. Quite to the contrary, in fact: seldom does one's name add to the validity, credibility, or productivity of a post. The only identity that would have supported the AC's claim that nobody cares would be Official Slashdot Poll Administrator. Since any other identity would not have added anything, anonymity certainly did not diminish his point.

    The vast majority of Slashdot users do not use their real name. There is a reason for this, this is a public forum and some people can venomously disagree with even the slightest of opinion in opposition to their own. But you know that, you gave quite a response to someone who expressed an opinion of apathy to your cause. It is not courageous to use your real name while participating in flamewars on Slashdot, it's foolhardy. You have no idea who is watching and what your words might motivate them to do.

    I would have no problem at all posting on Slashdot with my real name, so long as I did nothing more than add factual on-topic information to the conversation. However, informing you that your ego is hanging out is a situation that I feel calls for anonymity. I have no idea what kind of psychotic reaction you might have. There's certainly nothing to come from sharing my identity with you that would warrant such a risk. I was merely reminding you of the same.

    Other than amusement the point of my post was to embarrass you and others who would use Slashdot for their own self-promotion into not doing it. If everyone typed up some gibberish post to brag about having a new Mac and a blog then Slashdot would be less than it is now.
  • Re:I guess (Score:3, Interesting)

    by camperslo (704715) on Wednesday September 13, 2006 @09:32AM (#16095653)
    Speaking of memory access, it seems Anandtech showed the Pro in the worst light. They pointed out (fairly) where the higher latency of FB-DIMMs slowed performance, but ran the benchmarks with only a pair of DIMMs instead of four, failing to show the boost from quad-channel memory access. Doubling memory bandwidth could have boosted some of the scores.

    It would have been fun to see something better show the potential gains available from additional cores. A utility like VisualHub [techspansion.com] can use multiple cores to transcode multiple .AVIs (MPEG-4, etc.) simultaneously and generate a DVD image (MPEG-2). For a benchmark, just give it multiple copies of the same video clip to work with. It isn't cross-platform though (a generic command-line equivalent is sketched below).
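
    (VisualHub itself is a GUI app, so as a generic stand-in, here is a small harness that launches N copies of whatever command-line encoder you have handy and waits for them all. The encoder name passed on the command line is entirely up to you; nothing here is specific to any particular tool.)

    /* Run N copies of a command in parallel and wait for them all.
     * Usage: ./spawn 8 some_encoder input.avi
     * The command is whatever CPU-bound tool you have handy. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(int argc, char **argv)
    {
        if (argc < 3) {
            fprintf(stderr, "usage: %s <count> <command> [args...]\n", argv[0]);
            return 1;
        }
        int n = atoi(argv[1]);
        for (int i = 0; i < n; i++) {
            if (fork() == 0) {              /* child: run one copy of the command */
                execvp(argv[2], &argv[2]);
                perror("execvp");
                _exit(127);
            }
        }
        while (wait(NULL) > 0)              /* parent: wait for all children */
            ;
        return 0;
    }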
  • by Anonymous Coward on Wednesday September 13, 2006 @10:15AM (#16095920)
    $ uname -rs
    Darwin 9.0.0d1
    $ ulimit -u
    266

    Stock settings (though who knows what the final release will use).
  • Re:I guess (Score:3, Interesting)

    by Doctor Memory (6336) on Wednesday September 13, 2006 @11:13AM (#16096356)
    There should be a considerable performance improvement if the cores are on the same chip die, since communication doesn't have to go through the motherboard.
    If the bulk of your bus traffic is inter-CPU transfers, yes. However, if you've now got four cores and they all need to get to memory (or, heaven forbid, the disk), then they're all going to be sucking down bus bandwidth and sitting in wait states until the cache refills. A single processor can waste over a hundred cycles on a cache miss; I don't even want to think about how long a cache fetch will take when it has to share the same bus with three other processors.

    <idea>Maybe up the CPU quantum in the scheduler on multi-processor machines, to reduce the bus traffic spent on cache spill & refill.</idea>
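
    (A crude way to see the cost being described is to time strided reads over a buffer much larger than L2. This is only a sketch: there's no warm-up or randomized pointer chasing, hardware prefetch will hide some of the latency, and the numbers vary wildly from machine to machine.)

    /* Crude cache-miss illustration: walk a 64 MB buffer one cache line
     * at a time, so nearly every read touches a new line. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define SIZE   (64 * 1024 * 1024)   /* far larger than any L2 cache */
    #define STRIDE 64                   /* one typical cache-line size */

    int main(void)
    {
        char *buf = malloc(SIZE);
        if (!buf)
            return 1;
        memset(buf, 1, SIZE);           /* touch every page first */

        volatile long sink = 0;
        clock_t start = clock();
        for (size_t i = 0; i < SIZE; i += STRIDE)
            sink += buf[i];             /* each access lands on a new cache line */
        clock_t end = clock();

        printf("%d reads in %.3f s (sink=%ld)\n",
               SIZE / STRIDE, (double)(end - start) / CLOCKS_PER_SEC, (long)sink);
        free(buf);
        return 0;
    }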
