The Apple News That Got Buried 347

An anonymous reader writes, "Apple's Showtime event was all well and good, but the big news today was on Anandtech. They found that the two dual-core CPUs in the Mac Pro were not only removable, but that they were able to insert two quad-core Clovertown CPUs. OS X recognized all eight cores and it worked fine. Anandtech could not release performance numbers for the new monster, but did report they were unable to max out the CPUs."
Comments Filter:
  • by Anonymous Coward on Tuesday September 12, 2006 @11:43PM (#16093978)
    And yes, my blog is down until we get a new transformer installed at my building...... Hopefully tomorrow by noon as they are installing a new one as we speak.

    Nobody cares that your blog is down. You're not that important. Get over yourself.

  • Re: Bash fork bomb (Score:5, Informative)

    by Anonymous Coward on Tuesday September 12, 2006 @11:51PM (#16094010)
  • Summary is wrong. (Score:4, Informative)

    by Anonymous Coward on Tuesday September 12, 2006 @11:55PM (#16094026)
    From summary:
    Anandtech could not release performance numbers for the new monster, but did report they were unable to max out the CPUs.

    From TFA:
    We definitely had a difficult time stressing 8 cores in the Mac Pro, but if you have a handful of well threaded, CPU intensive tasks then a pair of slower Clovertowns can easily outperform a pair of dual core Woodcrest based Xeons.

    There's a big difference between unable to and had a difficult time. When I first read the summary I thought that there must be some problem with the system if they're unable to get all the CPUs under full load.
  • Re:Bash fork bomb (Score:3, Informative)

    by Anonymous Coward on Wednesday September 13, 2006 @12:01AM (#16094050)
    Actually, on MacOS X, I get 60 or so "-bash: fork: Resource temporarily unavailable" messages without any huge amounts of CPU usage.
  • by Sycraft-fu ( 314770 ) on Wednesday September 13, 2006 @12:19AM (#16094127)
    Windows divides work just fine across multiple cores. It just spreads threads around, and can even move things from core to core (or CPU to CPU, as the case may be) as needed. Remember, there ARE 32-processor versions of Windows. I have a friend who works on them; they do large SQL databases on 32-processor Itanium Superdomes (HP) running Windows.

    I've never seen any good benchmarking on it, probably because there haven't been higher-order Intel Macs until recently, but I'm going to bet you'd find little difference when running the same apps. I'm sure some apps will suck at it because they won't thread out properly, but those that do shouldn't have any trouble.
  • Mac OSX kills it (Score:5, Informative)

    by goombah99 ( 560566 ) on Wednesday September 13, 2006 @12:32AM (#16094187)
    Trying this on Mac OS X, the bomb dies when the number of forks exceeds a certain depth, so it's harmless: :(){ :|:& };:

    -bash: fork: Resource temporarily unavailable
    -bash: fork: Resource temporarily unavailable
    -bash: fork: Resource temporarily unavailable
    -bash: fork: Resource temporarily unavailable
    -bash: fork: Resource temporarily unavailable
    -bash: fork: Resource temporarily unavailable
    -bash: fork: Resource temporarily unavailable
    -bash: fork: Resource temporarily unavailable


  • Re:XP 64? (Score:3, Informative)

    by Kunimodi ( 1002148 ) on Wednesday September 13, 2006 @12:53AM (#16094270)
    Yes, and it runs very well (drivers for all major devices). Note that installing XP of any sort on the Mac Pro is a bit of an endeavor currently, due to the need to slipstream drivers; otherwise you get 1/20th of the SATA performance. []
  • by Osty ( 16825 ) on Wednesday September 13, 2006 @12:58AM (#16094290)

    I thought that the >4 CPU Windows systems were, in essence, specially tweaked systems to make it all worthwhile and that standard setups couldn't really make effective use of more than four processors. If so, I stand corrected. *looks around* Err, sit corrected, sorry.

    Multi-core restrictions on Windows versions are mostly artificial. For example, 8-CPU systems run just fine on Windows 2003 Advanced Server without any special tweaking. The system the grandparent referred to must have been running Windows 2003 Data Center Edition to support more than 8 processors, but that should still require no special tweaking.

    That said, I'm sure the systems that make it to the top of TPC benchmarks [] are highly tweaked.

  • by Foolhardy ( 664051 ) <(moc.liamg) (ta) (23htimsc)> on Wednesday September 13, 2006 @01:15AM (#16094354)
    That was ten years ago. A lot has been done for concurrency since then.

    For example, Windows Server 2003 Kernel Scaling Improvements [] (Google MS Word->HTML version)
  • Re:Bash fork bomb (Score:3, Informative)

    by pod ( 1103 ) on Wednesday September 13, 2006 @01:20AM (#16094365) Homepage
    Funny, but doubtful. Standard, off-the-shelf PCs are still plagued by relatively poor bus bandwidth. They can't max the cores out, because memory can't keep up feeding them data to crunch.
  • by SoCalChris ( 573049 ) on Wednesday September 13, 2006 @02:22AM (#16094510) Journal
    Sounds like you're going to something like DeVry [], correct?

    Here's a hint... Most companies won't give a DeVry graduate any more consideration than someone without a degree. In fact, many companies will take someone who is self-taught without a degree over a DeVry graduate.

    And forget that CS theory bullshit
    Good luck with ever being more than a code monkey. If you don't understand the theory behind programming, you'll never do more than writing basic code that conforms to the specifications that the architects gave you.

    P.S. My sig says that because the teacher, a 15 year programming veteran, and some other crazy expert with natural skills like me all couldn't design the project we were working on as fast as I could and only one other person's was virtually crash proof.
    If a second-year student is writing better code than the teacher, that says a lot about the school. That goes back to what I said about how most companies don't give much (if any) weight to a degree in "PC programming/Web Development with a certificate in Web Design", because the types of schools that give those out are usually not of the highest caliber.

    And I'm not trying to be a dick, but drop the attitude; you're not the super programmer that you think you are. Relax, and pay attention to what others are telling you, you'll learn something.

    ps... Graduating high school and starting college at 17 isn't all that special, tons of people do that.
  • Re:XP 64? (Score:2, Informative)

    by chriscappuccio ( 80696 ) on Wednesday September 13, 2006 @02:46AM (#16094561) Homepage
    With a Core 2 chip, sure. It has the 64-bit mode, but the Core Duo that Apple shipped in the first Intel Macs did not have a 64-bit mode.
  • by drsmithy ( 35869 ) <drsmithy@gmai[ ]om ['l.c' in gap]> on Wednesday September 13, 2006 @03:10AM (#16094618)
    The NeXT architecture of OS X has always been more "at ease" with multiple CPUs than various versions of NT.

    Your evidence for this being what, exactly ? Tea leaves ?

    NeXT didn't even *support* multiple processors until Apple's OS X reinvention, whereas NT was designed from the ground up with multi-CPU machines in mind and has supported them since its first release in 1993.

    Not that NT can't handle them, but that OS X does a better job of dividing tasks sanely to more fully utilize the chips and from what I've heard is much more capable once you move past four.

    Heard from who ? Apple zealots who think OS X isn't dog-slow to use and multitasks well because Expose still works when the machine is under load ?

    As good old Ars [] describes, the multiprocessor support in OS X before 10.4 was only average, to put it charitably.

    It's doubtful that the multiprocessor capabilities in OS X at the moment are even as mature as they were in Windows 2000.

    That being the case, as multiple CPUs/cores become more commonplace, I think OS X will end up with the reputation of being the faster of the two.

    Well, it's got a lot of work to do before it's faster than anything except earlier versions of OS X.

  • by bangenge ( 514660 ) on Wednesday September 13, 2006 @03:33AM (#16094675)
    Dude, give the kid a break. He didn't learn anything about Shakespeare, atoms, Africa, grammar, or how to turn on a computer (his words, not mine). By the time we get to be managers (if you aren't already), he'll still be in college, trying to figure out why he can't get laid, and we can make it a point not to hire one-trick ponies with big ego problems. He was 17 when he entered college, for crying out loud!
  • by CreateWindowEx ( 630955 ) on Wednesday September 13, 2006 @04:21AM (#16094758)
    I'm not quite sure what you mean by "mitigate their single-threaded nature", but if you run 8 single-threaded processes on an 8 core machine on any modern OS, the OS will end up spreading the workload across all 8 processors without having to do anything special. Normally, the OS will move threads from core to core as it sees fit, depending on the whims of the thread scheduler. However, you can override this (e.g., in XP by using the task manager and setting the processor affinity mask). The main reason to do this is for processes that have special synchronization bugs, but this shouldn't be true for a joe-blow single threaded process.

    So while multiple-core machines will not perform single-threaded tasks faster than a single-core machine of the same speed, if you are running multiple applications you can still saturate all the cores even if all your apps are single-threaded, as long as the apps you are running have a high ratio of CPU work to disk activity/OS calls (e.g., video compression or encryption or calculating pi, not running MS Word or reindexing your mp3 collection). In practice, this won't happen that often, especially with 4 or 8 cores.
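    The parent's two points (the scheduler spreads single-threaded processes on its own, and affinity is an override you rarely need) are easy to sketch on Linux, where taskset plays the role of XP's Task Manager affinity mask. The commands below are Linux-specific, not something 2006-era OS X exposes:

```shell
#!/bin/sh
# Two single-threaded CPU hogs; on a machine with >= 2 cores the
# scheduler will give each one its own core without any help from us.
nproc                       # number of cores available to the scheduler
yes > /dev/null & first=$!  # CPU-bound single-threaded process #1
yes > /dev/null & second=$! # CPU-bound single-threaded process #2
sleep 1                     # let them run for a moment
kill "$first" "$second"
# Overriding the scheduler, i.e. setting the affinity mask:
#   taskset -c 0 ./some_single_threaded_app   # pin to core 0 (app name hypothetical)
```

    Watching top during the one-second window would show both hogs near 100% on separate cores, which is all "mitigating their single-threaded nature" amounts to.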

  • Re:I guess (Score:3, Informative)

    by hpcanswers ( 960441 ) on Wednesday September 13, 2006 @04:27AM (#16094775)
    Too many cores on the same bus will cause a lot of contention for memory access. There will always be a place for NUMA architectures, including clusters. That place is for the ultra-high end though, not for scientists who merely want a few processors for a Gaussian computation.
  • by Gary W. Longsine ( 124661 ) on Wednesday September 13, 2006 @04:45AM (#16094815) Homepage Journal

    NeXT multiprocessed the guts of OS X on 2-4 processors. The result is that the mach kernel doesn't scale the processors linearly. There isn't the straightline performance boost of adding another processor beyond 4 cores with Mac OS X's mach kernel.

    Let's assume for the moment that none of us in this forum actually know anything factual about how many years Apple (or even NeXT before them) have been running Mach on machines with more than 4 processors on the corporate campus behind locked doors.

    However, we can probably reason this out if we try. We're all bright geek types, right? There are several clues. NeXT bought Apple for a negative $400 million or so in what, December of 1996?

    The heritage of NeXT that you mention is a pretty big clue. I don't recall off the top of my head how many processors were supported by the production shipping Mach build for SPARC and PA-RISC back in the NeXT days, but let's assume it was 2, just for the sake of argument. Both of those platforms offered ready availability of systems with many processors even way back then. Perhaps there were systems like that in the lab.

    Mach was originally a research project with an interesting goal: clean support of certain abstractions in a platform-independent way. One of those abstractions was support for multiple processors, beyond the typical SMP architectures we see today, which means that the authors' concept of platform-independent went quite some distance beyond a different instruction set on a different RISC architecture. Dig this:

    Mach kernel []
    Unlike UNIX, which was developed without regard for multiprocessing, Mach incorporates multiprocessing support throughout. Its multiprocessing support is also exceedingly flexible, ranging from shared memory systems to systems with no memory shared between processors. Mach is designed to run on computer systems ranging from one to thousands of processors. In addition, Mach is easily ported to many varied computer architectures. A key goal of Mach is to be a distributed system capable of functioning on heterogeneous hardware.

    That text is unattributed at the Wikipedia page, but comes from this document: Appendix B [] from the book: Operating System Concepts []

    An excellent book entirely about Mach is: Programming under Mach [], which also mentions the design intent.

    The original project was funded by DARPA, with the specific goal of developing operating systems technologies which would support super computers with hundreds or thousands of processors.

    The Mach project developed new techniques which have migrated directly (via actual Mach code to OSF, NeXT, Mac OS X, et al.) or indirectly into pretty much every modern operating system.

    Mach research spanned a very long period of time, and two universities. Curious, bright, and arguably insane people (or they would have been making money instead of slaving away on Mach on a grad-student salary) with access to multiple-processor machines and DARPA-funded directives to make it scale to hundreds of processors. Hmm... that seems like a clue.

    NeXT was, and Apple is, a hardware engineering company. Apple has been building multiple-processor boxes since before the reverse acquisition. I know; I had the, uh, perverse and shameful pleasure of running BeOS on one of them for sport.

    If any joker with a web site can get ahold of pre-

  • Re:I guess (Score:3, Informative)

    by hey! ( 33014 ) on Wednesday September 13, 2006 @04:50AM (#16094826) Homepage Journal
    with 8 cores, that no one cares about Beowulf clusters anymore. :(

    Which puts me in mind of the sex researchers Masters and Johnson, who forty years ago established under rigorous experimental conditions that degree of, uh, masculine endowment doesn't make any difference. Notwithstanding this, people always care about what they can't have.
  • by Sycraft-fu ( 314770 ) on Wednesday September 13, 2006 @04:54AM (#16094834)
    Well, according to MS, Windows has no problems supporting 32 processors for 32-bit software and 64 processors for 64-bit software. Particular versions of Windows are limited to lower numbers of processors, though not cores; one processor is one processor regardless of cores under MS's licensing. Indeed, you'll find XP Pro, while only supporting 2 processors, will happily run 2 dual-core processors and see and use all 4 cores.

    You have to remember that Windows is not static, they improve it all the time. They rolled out a 32-processor version back with Windows 2000. It's called Data Center Edition. You can't buy it over the counter, only from OEMs that make systems with tons of processors. You've likely never encountered it since it's fairly rare to see systems with that many processors. Generally you cluster smaller systems rather than get one large one. However there are cases where the big iron is called for, hence why HP sells them.

    Also, I think multiprocessing in the OS is less complicated than many people make it out to be. The OS isn't where the magic has to happen; it's the app. The OS already has things broken up for it in the form of threads and processes. Threads, by definition, can be executed in parallel. So the OS simply needs to decide on the optimum placement of all the threads it's being asked to run on its cores. Also, it doesn't have to stick with where it puts them (unless software requests a certain CPU); it can move them if there's reason to. The hard part is in the app: breaking the work up into pieces that can be processed at the same time and keeping them all in sync.

    My guess is that it's mostly FUD floated by anti-Windows people. There is, unfortunately, a lot of that going around. For example, it was reported on /. that Vista won't support OpenGL []. Well, it turns out this isn't just false, but is the exact opposite of the truth. Vista indeed supports OpenGL in three different ways:

    1) The method mentioned there, as an emulation that is limited to 1.4 and isn't that fast. Bonus is it works on any system with Vista graphics drivers, even if the manufacturer doesn't provide GL.

    2) Old style ICD. This is the kind of driver used on XP today. This more or less takes over the display, and thus will turn off all the nifty effects while active. The bonus is there's little to update. However this is probably not going to be used because there's...

    3) The new ICD. This provides full, hardware accelerated GL and is fully compatible with the shiny new compositing engine. For that matter, you can add any API you want via an ICD that works with the new UI.

    So not only does the OS have the ability to support GL, it can do so better than XP can, because GL can be used in the same way as DX. However, to read the /. story, you'd think they'd all but disabled hardware GL in their OS. As it stands, nVidia has beta drivers with a GL ICD. I haven't tried them, but the release notes suggest it's a new ICD that works with the compositor. ATi's drivers don't have an ICD, though ATi claims to be working on it and says they'll have it for launch. Intel doesn't have any driver status for Vista on their website.

    When it comes to Windows info, you do need to check sources, as with anything else. There's plenty of misinformation floating around. Often people who don't like Windows believe they know what they are talking about so post incorrect information.
  • by argent ( 18001 ) <peter.slashdot@2006@taronga@com> on Wednesday September 13, 2006 @09:04AM (#16095473) Homepage Journal
    would you be able to somehow mitigate their single-threaded nature by assigning the respective processes to their own core?

    First, pretty much any application on the Mac is multithreaded just because of the way the user interface works. Apple's OpenGL implementation is partly software, for example... this is why you can run games that expect hardware T&L on the Mac mini with its GMA950 GPU - the OS does the T&L in software on the second core, even in single-threaded games.

    Second, OS X does a pretty good job of distributing applications to cores without having to explicitly bind them. Binding an application to a core would most likely slow it down... unless the program has been written to use a lot of fine-grained shared state between threads... and what you're doing with processor affinity is *preventing* it from multiprocessing.

    Processor affinity is like 64 bit. Unless you're doing something on the edge you probably don't need it, and if you need it you're probably already doing it.

    Here's the summary:

    The bad news is that OS X doesn't provide a hook for processor affinity. The good news is that Mach does support it, and you could use the Darwin sources to figure out how to implement it in OSX using direct Mach calls. The bad news is that it's really hard. The good news is you don't need to do it unless you're trying to prevent multiprocessing anyway.

    Summary of the summary: Don't worry, be happy.
  • by Pope ( 17780 ) on Wednesday September 13, 2006 @09:38AM (#16095684)
    There are enough old G4s lying around for the after market to last for a few more years. I'm keeping mine til the thing dies because I still need an OS 9 native environment; Classic still can't do everything, and is no longer available on x86 Macs.
