The Apple News That Got Buried 347
An anonymous reader writes, "Apple's Showtime event was all well and good, but the big news today was on Anandtech.com. They found that the two dual-core CPUs in the Mac Pro were not only removable, but that two quad-core Clovertown CPUs could be inserted in their place. OS X recognized all eight cores and it worked fine. Anandtech could not release performance numbers for the new monster, but did report they were unable to max out the CPUs."
So fast, I got first post! (Score:5, Funny)
I guess (Score:5, Funny)
Re:I guess (Score:5, Funny)
I suppose you could run 8 VMs on the machine and make a Beowulf cluster out of those.
Re: (Score:2)
Re: (Score:3, Informative)
Re: (Score:3, Insightful)
Re: (Score:3, Interesting)
If the bulk of your bus traffic is inter-CPU transfers, yes. However, if you've now got four cores and they all need to get to memory (or, heaven forbid, the disk), then they're all going to be sucking down bus bandwidth, and sitting in wait states until the cache refills. A single processor can waste over a hundred cycles on a cache miss, I don't e
Re: (Score:3, Interesting)
It would have been fun to see something better show the potential gains available from additional cores. A utility like Visual Hub [techspansion.com] can use mul
Re:I guess (Score:4, Funny)
Re: (Score:3, Informative)
Which puts me in mind of the sex researchers Masters and Johnson, who forty years ago established under rigorous experimental conditions that degree of, uh, masculine endowment doesn't make any difference. Notwithstanding this, people always care about what they can't have.
Re: (Score:2)
CPU upgrade market (Score:2, Interesting)
Re: (Score:3, Interesting)
And it still bodes poorly for these companies, because now that the Mac is built from off-the-shelf components, so are the CPU upgrades.
Re:CPU upgrade market (Score:4, Informative)
Re: (Score:2)
Not at all. The quad cores are not on the market yet, but when they come out, you'll be able to drop them into your box. I'm jealous.
Re: (Score:2)
In Australian dollars at least, it is over $1,000 extra to get the 3GHz vs the 2.66GHz CPUs in the Mac Pro - that's about US$750 at the current rate. So chances are these quad-core CPUs will be pricey.
Re: (Score:2)
FYI, this processor bump costs exactly US$800 (plus applicable tax, of course) from Apple for buyers in the US.
Having always presumed it a foregone conclusion that the processors would be swappable, I opted for the standard 2.66GHz configuration and an eventual upgrade as it becomes necessary. Considering the current cost of FB-DIMMs with huge heat sinks (a
Re: (Score:3, Funny)
Alternate Blog (Score:3, Funny)
Coming soon to an Apple Store near you... (Score:3, Funny)
Re: (Score:2)
Re: (Score:3, Funny)
Or "Octomac"
Great!! (Score:4, Interesting)
Re: (Score:2)
Re: (Score:2)
Great, now that Apple no longer has complete control over the hardware they are trying to take over the acronyms so the CPU is now CUP, Central Unit of Processing.
Re:Great!! (Score:5, Funny)
Re:Great!! (Score:5, Funny)
Bash fork bomb (Score:5, Interesting)
It's the ultimate performance benchmark! How fast does your system halt?
Re: (Score:2)
I imagine it forks processes like crazy, but not knowing much Bash, I can't see how.
Re: Bash fork bomb (Score:5, Informative)
Cheers!
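For anyone else who can't parse it: the classic bash fork bomb (presumably what the parent explained) is `:(){ :|:& };:`. It defines a function named `:` whose body pipes a call to itself into another call to itself and backgrounds the result, so every invocation spawns two more and the process count at least doubles each round. A harmless sketch of that growth in Python, with no real fork() calls, just arithmetic:

```python
# Simulate the growth of the bash fork bomb ":(){ :|:& };:" without
# forking anything. Each generation, every live invocation spawns two
# replacements (one per side of the "|"), so the population at least
# doubles; this models that lower bound.

def processes_after(generations: int) -> int:
    """Minimum process count after the given number of doubling rounds."""
    count = 1
    for _ in range(generations):
        count *= 2  # each process forks at least two via ": | : &"
    return count

for g in (0, 10, 20):
    print(g, processes_after(g))
```

After twenty rounds you're past a million processes, which is why the per-user process limit trips almost instantly.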
Re: (Score:3, Informative)
Re: (Score:3, Informative)
Re: (Score:2)
till at least --- it starts making funny faces back at you....
Amdahl's Law (Score:3, Interesting)
Amdahl's Law might have been written for Big Iron, but it applies even more so to smaller systems.
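For the record, Amdahl's Law says the speedup on n cores is S = 1 / ((1 - p) + p/n), where p is the fraction of the work that parallelizes. A quick sketch of what that means for an 8-core box:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Speedup predicted by Amdahl's Law: the serial part is untouched,
    the parallel part is divided evenly across the cores."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / cores)

# Even a 90%-parallel workload gets nowhere near 8x on 8 cores.
for p in (0.5, 0.9, 0.99):
    print(f"p={p}: {amdahl_speedup(p, 8):.2f}x")
```

A half-parallel workload tops out below 2x no matter how many cores you add, which is exactly why eight cores are hard to max out with everyday software.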
Mac OSX kills it (Score:5, Informative)
$ bash: fork: Resource temporarily unavailable
bash: fork: Resource temporarily unavailable
bash: fork: Resource temporarily unavailable
bash: fork: Resource temporarily unavailable
bash: fork: Resource temporarily unavailable
bash: fork: Resource temporarily unavailable
bash: fork: Resource temporarily unavailable
bash: fork: Resource temporarily unavailable
Done
Re: (Score:3, Interesting)
I remember writing stuff similar to this back in the 80's to trip the watchdog on the VAX when the system operator was away and the machine needed a reboot. I think the C code of choice was something like "main(){while(fork(fork())||!fork(fork()))fork();} ". We'd get a few
Re: (Score:3, Informative)
Re: (Score:2)
Apple Cores (Score:5, Funny)
completely impossible statement (Score:4, Funny)
dim Processor1Thread as new thread(addressof sub1)
dim Processor2Thread as new thread(addressof sub2)
Processor1Thread.start()
Processor2Thread.start()
sub sub1()
dim x as long
for x = 0 to 1000000000000000
next x
end sub
sub sub2()
dim x as long
for x = 0 to 1000000000000000
next x
end sub
and repeat for 6 other threads and subs. So either they proved it doesn't really work well at all, or programming on a Mac is impossibly hard... or they're lying to make it sound more dramatic. So whether they're lying about not maxing it out, or you just plain can't use all 8 cores at once, it's not as good as it sounds.
Re: (Score:2)
Re: (Score:2)
Completely missed that one didn't we?
Re: (Score:3, Interesting)
Re: (Score:2)
There are a lot of tasks that parallelize nicely. There are many that don't.
Re: (Score:3, Funny)
Re:completely impossible statement (Score:4, Informative)
Here's a hint... Most companies won't give a DeVry graduate any more consideration than someone without a degree. In fact, many companies will take someone who is self-taught without a degree over a DeVry graduate.
Good luck with ever being more than a code monkey. If you don't understand the theory behind programming, you'll never do more than writing basic code that conforms to the specifications that the architects gave you.
If a second-year student is writing better code than the teacher, that says a lot about the school. That goes back to what I said about most companies not giving much (if any) weight to a degree in "PC programming/Web Development with a certificate in Web Design," because the types of schools that give those out are usually not of the highest caliber.
And I'm not trying to be a dick, but drop the attitude; you're not the super programmer that you think you are. Relax, and pay attention to what others are telling you, you'll learn something.
P.S. Graduating high school and starting college at 17 isn't all that special; tons of people do that.
Re: (Score:2)
These guys told me a story once. Some hotshot with a degree from DeVry was hired one day. He was fired within two weeks for incompetence.
I'm always suspicious of an institution of higher education that finds it necessary to advertise on TV, radio and by SPAM!
Parent is correct by GP's own standards (Score:3, Funny)
However, I agree with the parent and think the GP is full of crap. This contradicts the starting assump
Re: (Score:2)
The reason colleges make you take all those stupid classes is to help round out your education, so you learn to think in a variety of different ways and learn different methods of analysis... at least at good colleges. If you really want to be a better programmer, take a class on the philosophy of language.
"cuz they made me go early since I was so smart."
Book smart, life foolish. You com
Re: (Score:3, Funny)
And 4 year colleges rerun all that info from high school and middle school because they assume you paid no attention and must have cheated on the SAT/ACT's or something to get in. It's an
Re: (Score:2)
I think what they meant in the article is that they have no applications that thread to 8 threads nicely.
It's easy to max out 8 CPUs/cores with 8 different tasks (or 9-10 tasks if you want to take advantage of context switches and iowait). It's harder to find something that scales past 4 threads because most programmers just don't program for it. A
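The "easy" case above can be sketched in a few lines. This is an illustrative Python example (the `busy` task is made up, not from the article): one independent CPU-bound process per core will peg the box with no coordination between tasks at all.

```python
import multiprocessing as mp

def busy(n: int) -> int:
    """A self-contained CPU-bound task: sum of squares below n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    cores = mp.cpu_count()
    # One independent task per core: the trivially scalable case the
    # parent describes. Nothing here shares state, so it scales to
    # however many cores the scheduler can hand out.
    with mp.Pool(processes=cores) as pool:
        results = pool.map(busy, [200_000] * cores)
    print(cores, "tasks done, each returned", results[0])
```

Scaling one *application* past 4 threads is the hard part, because then the tasks have to share state and stay in sync.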
Re: (Score:2, Informative)
Re:completely impossible statement (Score:5, Interesting)
Re: (Score:3, Funny)
I'm pretty sure you've got to do something in a loop or it'll be dropped by the compiler as a trivial optimisation. But hey! What do I know after years of VB, VBA programming, in addition to *real* languages like C++ or *useful* things like SQL? I'm a babe in the woods compared to a Uni student full of piss and vinegar!
So - when will you debunk AnandTech? Clearly you're more knowledgea
Re: (Score:2)
Re: (Score:2)
You are new around here aren't you?
Thanks, that was REALLY funny to read right after your rant about how smart you are, then your message about how it "must be night time" because you screwed up an empty loop.
Also the "are you new around here" is always hilarious, but even more so from someone who's 357 ids short of the million mark.
Re: (Score:2)
Must be hard typing under the bedcovers after mummy has turned out the lights.
How does this bode for NT6? (Score:3, Interesting)
Re: (Score:3, Informative)
I've never seen any good benchmarking on it, probably because there haven't been higher order Intel Macs until recently, but I'm going to bet you find little difference when running apps
Re: (Score:2)
I thought that the >4 CPU Windows systems were, in essence, specially tweaked systems to make it all worthwhile and that standard setups couldn’t really make effective use of more than four processors. If so, I stand corrected. *looks around* Err, sit corrected, sorry.
Re: (Score:3, Informative)
Multi-core restrictions on Windows versions are mostly artificial. For example, 8-CPU systems run just fine on Windows 2003 Advanced Server without any special tweaking. The system the grandparent referred to must have been runnin
Re: (Score:2, Informative)
For example, Windows Server 2003 Kernel Scaling Improvements [72.14.203.104] (Google MS Word->HTML version)
Re:How does this bode for NT6? (Score:5, Informative)
You have to remember that Windows is not static; they improve it all the time. They rolled out a 32-processor version back with Windows 2000, called Datacenter Server. You can't buy it over the counter, only from OEMs that make systems with tons of processors. You've likely never encountered it, since it's fairly rare to see systems with that many processors. Generally you cluster smaller systems rather than getting one large one. However, there are cases where the big iron is called for, which is why HP sells them.
Also, I think multiprocessing in the OS is less complicated than many people make it out to be. The OS isn't where the magic has to happen; it's the app. The OS already has things broken up for it in the form of threads and processes. A thread, by definition, can be executed in parallel. So the OS simply needs to decide on the optimum placement of all the threads it's being asked to run on its cores. Also, it doesn't have to stick with where it puts them (unless software requests a certain CPU); it can move them if there's reason to. The hard part is in the app: breaking it up into pieces that can be processed at the same time and keeping them all in sync.
My guess is that it's mostly FUD floated by anti-Windows people. There is, unfortunately, a lot of that going around. For example it was reported on
1) The method mentioned there: an emulation layer that is limited to GL 1.4 and isn't that fast. The bonus is it works on any system with Vista graphics drivers, even if the manufacturer doesn't provide GL.
2) Old style ICD. This is the kind of driver used on XP today. This more or less takes over the display, and thus will turn off all the nifty effects while active. The bonus is there's little to update. However this is probably not going to be used because there's...
3) The new ICD. This provides full, hardware accelerated GL and is fully compatible with the shiny new compositing engine. For that matter, you can add any API you want via an ICD that works with the new UI.
So not only does the OS have the ability to support GL, it can do so better than XP can, because GL can be used in the same way as DX. However to read the
When it comes to Windows info, you do need to check sources, as with anything else. There's plenty of misinformation floating around. Often people who don't like Windows believe they know what they are talking about so post incorrect information.
Re: (Score:2)
Yes, you "can", in the same way that you *can* put peas up your nose. It's not terribly useful though.
For all practical purposes, Windows has one advantage today: larger availability of enduser-software. That's it.
There's zero advantage, and a lot of disadvantage to running Windows on a big-iron database-server.
Re: (Score:3, Insightful)
Re: (Score:2)
Maybe, but in TFA they're running XPSP2.
Re: (Score:2, Insightful)
Wonder where you heard that.
"That being the case, as multiple CPUs/cores become more commonplace, I think OS X will end up with the reputation of being the faster of the two."
Reputation maybe, after all OS X has the reputation of being God's gift in certain circles. Somehow I think reality will be different just as it is now. NT's design is vast
Re: (Score:3, Informative)
Your evidence for this being what, exactly ? Tea leaves ?
NeXT didn't even *support* multiple processors until Apple's OS X reinvention, whereas NT was designed from the ground up with multi-CPU machines in mind and has supported them since its first release in 1993.
Not that NT can't handle them, but that OS X does a better job of dividing tasks sanely to more fully utilize the chips and from what
Couldn't max out the CPUs? (Score:5, Funny)
Try installing Vista.
Summary is wrong. (Score:4, Informative)
From TFA:
There's a big difference between "unable to" and "had a difficult time." When I first read the summary I thought that there must be some problem with the system if they're unable to get all the CPUs under full load.
Re:Summary is wrong. (Score:5, Interesting)
It's actually really easy to do if your memory system isn't meant to service 8 cores. And the article pretty much backs this up, every time the quad cores fail to shine it's blamed on the memory. But to me, the really interesting aspect of this is that they always blame FB-DIMM, which gains bandwidth by sacrificing latency. They even go so far as to suggest:
So, I think regular DDR2 @ 667 = about 5.3 GB/s... divided amongst 8 cores is just ~667 MB/s per core. It seems insane to think that would work (maybe it would, maybe my numbers are wrong also). If you want to attack latency but simply can't give up the bandwidth, wouldn't the SMT model work better: just swap out the L2-miss-stalled thread and run the other full bore. Now you've reduced the problem to distributing your register bank among active threads. Well, I think that's how video cards do it, and memory latency is their enemy #1.
In any event, there you have it. The performance pendulum has left GHz, is briefly swinging toward more cores, but appears headed now toward memory systems. Does anyone else think it's funny that L1 is still just 32 KB? (Oughta be enough for anybody.)
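Spelling out the bandwidth arithmetic (this assumes a single 64-bit DDR2-667 channel for simplicity; the real Mac Pro uses FB-DIMM channels, so treat it as a ballpark):

```python
# DDR2-667: ~667 million transfers/s, 8 bytes per transfer on a 64-bit bus.
transfers_per_sec = 667e6
bytes_per_transfer = 8

channel_bw = transfers_per_sec * bytes_per_transfer  # bytes/sec for one channel
per_core = channel_bw / 8                            # split evenly across 8 cores

print(f"channel:  {channel_bw / 1e9:.1f} GB/s")      # ~5.3 GB/s
print(f"per core: {per_core / 1e6:.0f} MB/s")        # ~667 MB/s
```

Two-thirds of a gigabyte per second per core is not much headroom for eight cores that each want their cache lines refilled.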
XP 64? (Score:4, Interesting)
Re: (Score:2)
Re: (Score:3, Informative)
Re: (Score:2, Informative)
certainly difficult to max out .. (Score:5, Interesting)
The poor baby's probably starved for data to crunch, having only 256 MB of RAM per CPU and apparently just the standard disk setup.
And it appears that they left the default OS X limit of 100 tasks per user in place as well.
Gotta open things up to let those puppies breathe!
Re: (Score:2)
Re:Yeah... really BIG news... bah (Score:5, Insightful)
We do! "News for Nerds", remember?
Re:Yeah... really BIG news... bah (Score:4, Interesting)
We're introducing a virtual infrastructure very quickly, using Xserve RAIDs as our storage LUNs. That said, with VMware's soon-to-be Mac OS X offering, this would give our Mac-toting engineers the ability to build a virtual machine locally before deploying it into the wider infrastructure. That is a truly valuable tool.
There are three of us at work who rely heavily on our non-Mac machines, a pair of us doing some reasonably heavy VM work. I'd love to transition to a straight Mac platform (not Mac OS X + SuSE + XP). It's such a pain in the ass to have to suspend one VM and start another constantly because my performance starts to block. It's not disk I/O; the I/O never pegs (most of the stuff is resident, anyway). The RAM pressure can be mitigated by adding more RAM (4GB currently). More than once I've watched procmon show me that the vmx process is pegged on the
Re: (Score:2)
Whoops. Machine must've gotten away from me there!
Re: (Score:2)
Really, who the frig cares from a general computing standpoint? Who needs 8 CPUs?
When you try to raytrace a few hundred million polygons with soft shadows, radiosity, and every optical effect switched on, you will have your answer, grasshopper.
Re: (Score:2)
Re: (Score:2)
Which was my point really. It's pretty cool and all, and hell, I would like 8 CPUs (although you're talking mucho power drain I imagine.) It's just not the BIG news as the submitter tries to suggest.
The other things were bigger news... this is just cool geekiness.
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2)
Be aware that most high quality renderers aren't multithreaded through an entire render job though.
Case in point: a Maya mental ray render uses a single thread for translation, displacement map triangulation and subsurface scattering map processing before the render itself begins. Most dynamics calculations are also single-threaded.
So, on a dual-core workstation rendering a scene fitting the above description in 5 minutes, I see only one core utilized for about 2 1/2 of those minutes.
Re: (Score:2)
Re: (Score:2)
Where this really pays off right now is with virtualization. For the cost of 1 & 1/2 boxes, you get the value of 8. That may not seem like a "general computing standpoint" to you, but virtualization is getting absolutely huge in the software development and server world. Besides, since when is the Mac Pro dual core a "general computing" machine? My guess is >75% of the buyers are buying them for specific heavy lifting.
Huge news to me! (Score:2)
My CPU is maxed out. I've got plenty of RAM (2GB) and I can only wait to get a faster hard drive as >5400RPM SATA drives aren't readily available. Give me a user-installable multi-core CPU PLEASE!
I wish I could convey how w
Re: (Score:2)
Scientific computing (Score:2)
Why don't I just farm the software out to a Beowulf cluster? Well, I do, but we have a queue for ours. When I'm testing the software I need to run, stop, and rerun the software, something which would be inefficient on a remote clust
Re: (Score:2)
There is so much better out there than LightWave and its clunky interface. Say, Maya or Cinema 4D for starters?
Re:have to ask (Score:5, Funny)
Re:have to ask (Score:5, Funny)
Re: (Score:2)
Re:can't max out CPUs? uh oh (Score:5, Funny)
You know what happens when you make assumptions. (Score:5, Informative)
Let's assume for the moment that none of us in this forum actually know anything factual about how many years Apple (or even NeXT before them) have been running Mach on machines with more than 4 processors on the corporate campus behind locked doors.
However, we can probably reason this out if we try. We're all bright geek types, right? There are several clues. NeXT bought Apple for a negative $400 million or so in what, December of 1996?
The heritage of NeXT that you mention is a pretty big clue. I don't recall off the top of my head how many processors were supported by the production shipping Mach build for SPARC and PA-RISC back in the NeXT days, but let's assume it was 2, just for the sake of argument. Both of those platforms offered ready availability of systems with many processors even way back then. Perhaps there were systems like that in the lab.
Mach was originally a research project with an interesting goal: clean support of certain abstractions in a platform-independent way. One of those abstractions was support for multiple processors, beyond the typical SMP architectures we see today, which means that the authors' concept of platform-independent went quite some distance beyond a different instruction set in a different RISC architecture. Dig this:
That text is unattributed at the Wikipedia page, but comes from this document: Appendix B [wiley.com] from the book: Operating System Concepts [wiley.com]
An excellent book entirely about Mach is: Programming under Mach [amazon.com], which also mentions the design intent.
The original project was funded by DARPA, with the specific goal of developing operating systems technologies which would support super computers with hundreds or thousands of processors.
The Mach project developed new techniques which have migrated directly (via actual Mach code, to OSF, NeXT, Mac OS X, et al.) or indirectly into pretty much every modern operating system.
Mach research spanned a very long period of time and two universities. Curious, bright, and arguably insane people (or they would have been making money instead of slaving away on Mach on a grad-student salary), with access to multiple-processor machines and DARPA-funded directives to make it scale to hundreds of processors. Hmm... that seems like a clue.
NeXT was, and Apple is, a hardware engineering company. Apple has been building multiple-processor boxes since before the reverse acquisition. I know; I had the, uh, perverse and shameful pleasure of running BeOS on one of them for sport.
If any joker with a web site can get ahold of pre-
Re: (Score:2)
Re: (Score:3, Informative)
Bad news/good news/bad news/good news... (Score:4, Informative)
First, pretty much any application on the Mac is multithreaded just because of the way the user interface works. Apple's OpenGL implementation is partly software, for example... this is why games that expect hardware T&L still run on the Mac mini with its GMA950 GPU: the OS does that work in software on the second core, even in single-threaded games.
Second, OS X does a pretty good job of distributing applications to cores without having to explicitly bind them. Binding an application to a core would most likely slow it down... unless the program has been written to use a lot of fine-grained shared state between threads... since what you're doing with processor affinity is *preventing* it from multiprocessing.
Processor affinity is like 64 bit. Unless you're doing something on the edge you probably don't need it, and if you need it you're probably already doing it.
Here's the summary:
The bad news is that OS X doesn't provide a hook for processor affinity. The good news is that Mach does support it, and you could use the Darwin sources to figure out how to implement it in OSX using direct Mach calls. The bad news is that it's really hard. The good news is you don't need to do it unless you're trying to prevent multiprocessing anyway.
Summary of the summary: Don't worry, be happy.