And in other news...well-traveled street corners are magnets for panhandlers. Details at 11!
Now we know where to look for these people online. Who knew?
Of course, this doesn't mean that Facebook doesn't have other users. I mean...does the fact that a panhandler hangs out near the spot you walk by every day to pick up a cup of coffee mean that everyone else you see walking by that spot is also a panhandler?
I suppose it might.
You are correct that discrete graphics isn't going anywhere, but your argument for why gaming performance is so important in integrated graphics contradicts that. I think the same argument applies there. What the people riding Intel over the performance of their integrated GPUs, and predicting that AMD will spank them in the integrated GPU space, are forgetting is: 1) heat, 2) power, and 3) bandwidth. When you integrate the GPU with the CPU, you have to share the heat, power, and bandwidth budget with the CPU.
It will be interesting to see what AMD comes up with when they have an integrated GPU, but frankly, the fact that it has taken them so long to get to that step since acquiring ATI tells me that they are likely finding it rather difficult to do well. I will be very happy if they somehow come up with something that can "spank" Intel in the integrated GPU space, but I wouldn't be surprised to find that they had to cut out so many features to fit the power/bandwidth budget that it ends up being...about the same.
Another point worth keeping in mind is that a huge portion of the market where Intel's GPUs do reign is the business desktop space, where the only thing the machine needs to do is run web browsers, email clients, and office applications. Blu-ray decoding? In that space, it doesn't matter.
Speak for yourself.
I feel for those who can't find jobs, but if you've got one, then just live well within your means, and retirement remains an option.
And take all those calls you get from your bank asking you to enroll in income/debt protection plans as a constant reminder that the bank isn't really on your side in this effort.
Even a beam several meters across, let's say 10 for simplicity, illuminates only about 100 square meters at a time. If you had a 1,000 square meter area to send the warning to, and you wanted to get the warning out in, say, 5 minutes, you could point your signal at each location in your danger zone for 30 seconds. That is slow enough to give everyone a chance to see it only if everyone in the warning zone has the habit of looking up at the sky in the direction of the satellite at least once every 30 seconds (not bloody likely).
In reality, though, 1,000 square meters is a rather small area. Given the way tsunamis propagate outward in a circular pattern from the point of the shock, something like 10,000 square meters is more realistic, in which case, to get the warning out to the entire area in 5 minutes with a 100 square meter beam, you could illuminate each spot for only about 3 seconds. That obviously isn't going to work.
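The coverage arithmetic is easy to redo as a quick sketch. Treating the beam as a 10 m by 10 m spot (about 100 square meters) and the warning deadline as 5 minutes:

```python
# Back-of-envelope dwell-time calculation for a scanning warning beam.
# The beam size and deadline are the figures from the discussion above;
# the rest is simple arithmetic.

def dwell_time(total_area_m2, beam_area_m2, deadline_s):
    """Seconds the beam can linger on each patch before it must move on."""
    patches = total_area_m2 / beam_area_m2
    return deadline_s / patches

beam = 10 * 10       # a 10 m x 10 m spot, about 100 square meters
deadline = 5 * 60    # 300 seconds to sweep the whole danger zone

print(dwell_time(1_000, beam, deadline))    # 30.0 seconds per patch
print(dwell_time(10_000, beam, deadline))   # 3.0 seconds per patch
```

And a real coastal warning zone would be measured in square kilometers, not thousands of square meters, which drives the per-patch time down toward nothing.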
That all sets aside the fact that it won't work when the weather is poor, since visible light doesn't penetrate cloud cover, and also sets aside the other point that a warning system that relies on visual cues is utterly useless if people are indoors or asleep.
You seriously think that the battery used to power an electric car has enough energy to illuminate hundreds of miles of coastline from orbit? I don't think you have a clear sense of how much energy it takes to get a beam of light to travel such a large distance.
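For a rough sense of scale (all of these inputs are my own assumptions, chosen to be generous to the satellite idea): midday sunlight is on the order of 1000 W per square meter, so even adding just 1 W per square meter of extra light, the bare minimum to be noticeable, over a modest strip of coastline adds up fast:

```python
# Hedged back-of-envelope: power needed to visibly light a strip of
# coastline, and how long an electric-car battery could sustain it.
# All inputs are assumptions for illustration, not measured figures.

added_irradiance = 1.0        # W/m^2 of added light; midday sun is ~1000 W/m^2

coast_length_m = 100 * 1609   # 100 miles of coastline, in meters
coast_depth_m = 1000          # warn a strip reaching 1 km inland

area_m2 = coast_length_m * coast_depth_m
power_w = added_irradiance * area_m2           # watts delivered to the ground

battery_j = 75 * 3.6e6        # a typical ~75 kWh EV pack, in joules

print(f"{power_w / 1e6:.0f} MW")               # ~161 MW, continuously
print(f"{battery_j / power_w:.1f} s runtime")  # under 2 seconds at 100% efficiency
```

And that ignores every loss along the way: beam spread over the distance from orbit, atmospheric absorption, and daylight washing the signal out entirely.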
And even if you had a satellite with enough energy, what is your warning system going to do when it is cloudy in the area where the tsunami is going to hit?
And even if you somehow solved that problem too, what if the people you are warning are asleep because it is nighttime? How are they going to see your warning? What if they are awake but inside a building?
Vastly more effective are the systems already in place: a siren so loud that it is essentially impossible not to hear it unless you are deaf.
The problem you are alluding to, as I see it, is not that the memory architecture does not serve the instruction sets. The problem is that latencies from the core to memory are very long, and writing code that is tolerant of those latencies is either impossible (or impossibly difficult) for the majority of typical applications, desktop applications in particular. Any code that has a lot of branches (think: any software with a lot of "if" or "switch" statements) invariably reaches the point where the processor cannot perform any more useful work until it evaluates the condition of the logical expression. And eventually, unless your code fits in the processor's cache, that means stalling and waiting on memory to retrieve some data.
Code needs to be very deeply pipelined to tolerate those kinds of latencies. For some applications that is easy to do; for some it is possible but very difficult; for many, it simply isn't possible.
The real problem there is that processing speeds in the core have been increasing very fast over the last few decades, and memory technology simply isn't improving at the same rate.
If you look back to the early days (the original Pentium was the last time this was true, I believe), the speed of the processor and the depth of the processor pipelines actually did match the latency to memory, and the cores were slow enough that they could absorb a memory latency hit without incurring long multi-cycle stalls. But now the cores are just too fast. Now, if your code needs to wait for a result from memory, it is going to wait for a large number of cycles, because the cores are so much faster.
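That gap can be put into numbers with the standard stall-cycle model. The figures below are illustrative round numbers, not measurements of any real chip:

```python
# Rough model of how cache misses inflate cycles-per-instruction (CPI).
# All inputs are illustrative round figures chosen to show the shape
# of the problem, not measurements of a particular processor.

def effective_cpi(base_cpi, mem_refs_per_instr, miss_rate, miss_penalty_cycles):
    """Base CPI plus the average stall cycles added by misses that go to memory."""
    return base_cpi + mem_refs_per_instr * miss_rate * miss_penalty_cycles

base = 1.0       # a core that could retire 1 instruction/cycle if memory were free
refs = 0.3       # memory references per instruction
penalty = 200    # cycles to reach DRAM on a modern, fast core

# Even a 2% miss rate more than doubles the effective CPI:
print(round(effective_cpi(base, refs, 0.02, penalty), 2))   # 2.2
# At a 5% miss rate the core spends most of its time stalled:
print(round(effective_cpi(base, refs, 0.05, penalty), 2))   # 4.0
```

On the old cores, a miss penalty of a handful of cycles made that second term negligible; a 200-cycle penalty makes it dominate.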
People have been arguing, as you are, that x86's bloated CISC instruction set is inferior to a cleaner RISC architecture for 20+ years. Nobody has ever backed up the claim that the elegance of the instruction set matters with hard data, though.
What evidence we do have goes against that argument.
Apple machines used a cleaner RISC architecture (PowerPC) in the desktop space for a while. They never performed any better than equivalent x86-based machines, and in the end Apple abandoned RISC and moved to x86.
Intel came out with a cleaner, designed-from-scratch instruction set for the Itanium line. If x86 were really as bad as you say, Itanium chips would be running circles around the x86-based server chips provided by both Intel and AMD. That isn't happening.
Another thing you might not realize: all modern x86 chips, from both Intel and AMD, once you strip them down to the micro-op level, ARE RISC-like designs under the hood. RISC is the cleaner way to implement the micro-ops and the underlying execution architecture, but all the historical data seems to indicate that whether the instruction set sitting on top of that is RISC or CISC is irrelevant to performance. It is arguably more complicated to design a CISC-fronted chip like x86, but that clearly has not been an obstacle for Intel's or AMD's engineers in competing with RISC on performance.
How long before game makers write code that supports this new chip?
Answer: a very long time. This is a Xeon part designed for large database servers. It isn't intended for desktops. Some fool might try to put it into a gaming rig eventually, but that person really will be worthy of the title "fool". That would be like putting an engine designed for a freight train into a Ferrari.
When will other software vendors have code that supports this many cores?
Answer: they already do. The companies that write database management software for very large back-end database servers already have code that scales to very large core counts, as do many HPC software vendors. That is the intended market segment for this chip, and that segment has plenty of software ready to burn through all those cores today.
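The difference between those two answers is essentially Amdahl's law: speedup on many cores is capped by whatever fraction of the work stays serial. A quick sketch, with hypothetical parallel fractions chosen just to contrast a game-like workload with a database-like one:

```python
# Amdahl's law: ideal speedup on n cores when a fraction p of the
# runtime can be parallelized. The fractions below are hypothetical,
# picked to illustrate the contrast, not measured from real software.

def amdahl_speedup(p, n_cores):
    """Ideal speedup when fraction p of the work spreads across n cores."""
    return 1.0 / ((1.0 - p) + p / n_cores)

cores = 32
print(round(amdahl_speedup(0.50, cores), 1))   # 1.9x: half-serial code barely benefits
print(round(amdahl_speedup(0.99, cores), 1))   # 24.4x: mostly-parallel code scales
```

Databases and HPC codes live near the second line; most desktop software lives near the first, which is why throwing dozens of cores at it buys so little.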
If you can't understand it, it is intuitively obvious.