mrdirkdiggler writes: ArsTechnica's Hannibal takes a look at how the power concerns that currently plague datacenters are shaping next-generation computing technologies at the levels of the microchip, the board-level interconnect, and the datacenter. In a nutshell, engineers are now willing to take on a lot more hardware overhead in their designs (thermal sensors, transistors that put components into sleep states, buffers and filters at the ends of links, etc.) in order to get maximum power efficiency. The article, which has lots of nice graphics to illustrate the main points, mostly focuses on the specific technologies that Intel has in the pipeline to address these issues.
Justin Wheeler writes: "Intel has been slowly trickling information on their new Penryn cores (the next release after Merom/Conroe), as well as their upcoming Nehalem cores. From the articles: "At a press meeting today, Intel's Pat Gelsinger also made a number of high-level disclosures about the successor to Penryn, the 45nm Nehalem core.
Unlike Penryn, which is a shrink/derivative of Core 2 Duo (Merom), Nehalem is architected from the ground up for 45nm. This is a major new design, and Gelsinger revealed some truly tantalizing details about it.
Nehalem has its roots in the four-issue Core 2 Duo architecture, but the direction it will take Intel is apparent in Gelsinger's insistence that "we view Nehalem as the first true dynamically scalable microarchitecture." What Gelsinger means by this is that Nehalem is not only designed to take Intel up to eight cores on a single die, but those cores are meant to be mixed and matched with varied amounts of cache and different features in order to produce processors that are tailored to specific market segments.""
CafreeDC writes: Intel's Pat Gelsinger revealed all sorts of information about the upcoming Penryn today. The 45nm processor family will support SSE4 and offer better virtualization performance:
Right now, a lot of folks who're testing out VT have been disappointed that its performance isn't much better than that of existing, non-VT-based virtualization solutions like VMware. Specifically, VMware products use a binary translation engine that ingests regular x86 OS code and produces a "safe" subset; VMware claims that this binary translation approach is as fast as, or faster than, VT-based approaches because the OS doesn't have to do costly VM transitions in order to execute privileged instructions. (These claims are debated; I'm merely reporting the fact that they are made.)
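For readers unfamiliar with the technique, here's a toy sketch of what binary translation does conceptually: scan a block of guest code, pass harmless instructions through, and rewrite privileged ones into calls that the VMM emulates, so the guest never executes them on real hardware. The two-opcode "ISA" and handler names here are invented for illustration; this is nothing like VMware's actual engine.

```python
# Toy binary-translation sketch. Opcodes and handlers are made up.
PRIVILEGED = {"CLI", "STI", "OUT"}  # pretend these would trap on real hardware

def vmm_emulate(instr, state):
    """VMM-side handler: emulate the privileged instruction's effect
    on the virtual CPU state instead of executing it natively."""
    if instr == "CLI":
        state["interrupts"] = False
    elif instr == "STI":
        state["interrupts"] = True
    elif instr == "OUT":
        state["io_log"].append("port write")
    return state

def translate(block):
    """Produce a 'safe' version of a basic block: unprivileged
    instructions pass through; privileged ones become emulation thunks."""
    safe = []
    for instr in block:
        if instr in PRIVILEGED:
            safe.append(("EMULATE", instr))   # redirect into the VMM
        else:
            safe.append(("DIRECT", instr))    # would run natively
    return safe

def run(block, state):
    for kind, instr in translate(block):
        if kind == "EMULATE":
            state = vmm_emulate(instr, state)
        # "DIRECT" instructions would execute natively; no-op in this toy
    return state

state = run(["MOV", "CLI", "ADD", "STI", "OUT"],
            {"interrupts": True, "io_log": []})
print(state)  # → {'interrupts': True, 'io_log': ['port write']}
```

The point of the sketch is that the translated block never contains a privileged instruction, which is why no VM transition is needed to run it.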
A major decrease in VM transition times would help the performance of VT-based solutions like Xen, and it would make the "which virtualization package to use?" debate even more about management and less about relative performance than it already is.
At the same conference, Gelsinger also talked about the 45nm Nehalem core, Penryn's successor. Among the disclosures came the fact that Nehalem will sport an on-die memory controller, as well as an integrated graphics processor.
Reading between the lines on this comment and others, I can say with a pretty high degree of certainty that Intel will be using its packaging skills to put a GPU in the same package as a Nehalem CPU. Furthermore, this is going to help out with mobile products, small-form-factor devices (*cough* Apple), and anywhere else that power and cooling are more critical than raw performance. I'd expect that such CPU/GPU devices will cut down on the number of cores that you can put on the CPU die (for power-dissipation reasons).
Jeff Pierce writes: "AMD's CEO, Hector Ruiz, explained today why AMD's revenues won't meet expectations this quarter. According to Ruiz, this is because the company couldn't produce enough chips to meet growing OEM demand. (Funny he didn't mention the price war with Intel.) Also covered in the presentation is Ruiz's vision of what you might call "x86 everywhere." Ruiz thinks that the x86 processor market is by no means "mature," and that x86 will expand into home entertainment devices, appliances, education, and lots of other places where we don't even currently use microprocessors. AMD intends to have a big slice of that growing pie."
PreacherTom writes: Scientists at the NEC Research Institute in Princeton, NJ are reporting that they have broken the speed of light. In the experiment, the researchers manipulated a vapor of laser-irradiated atoms, producing a pulse that traveled about 300 times faster than it would have through the same distance in a vacuum, to the point where the pulse seemed to exit the chamber before even entering it. Apparently, Uncle Albert is still resting comfortably: relativity only states that an object with mass cannot travel faster than light. Still, the results are sufficient to merit publication in the prestigious journal Nature.
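Some background the summary doesn't spell out (so treat this as context, not a claim from the article): experiments like this exploit anomalous dispersion, where the group velocity of a pulse,

```latex
v_g = \frac{c}{n(\omega) + \omega \, \frac{dn}{d\omega}}
```

can exceed c, or even go negative, when dn/dω is strongly negative near an atomic resonance. The peak of the pulse envelope gets reshaped so it appears at the output "early," but the leading edge of the pulse still carries no information faster than c, which is why relativity survives intact.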
Peacemaker writes: Response to Rep. Boucher's FAIR USE Act from around the web has been mostly positive, but the bill may become the biggest roadblock in the quest for meaningful copyright reform. 'Why would Boucher, traditionally a staunch supporter of real DMCA reform, choose to put it on the back burner this session in favor of reforming secondary liability rules? It's a pretty good guess that Boucher's allies in the consumer electronics industry had a big influence on his decision. Indeed, the legislation appears to be an attempt by the consumer electronics industry to make a separate peace with copyright interests, leaving the broader movement for balanced copyright policies to soldier on without its support.' While the bill would solve some real problems, 'Boucher should defend that proposal on the merits instead of pretending that his legislation would reform the DMCA or shore up fair use.'
Poodlie_Doo writes: "AMD's first ATI-designed chipset, the 690G, is now out. The new chipset has integrated graphics courtesy of the ATI X1250 and supports HDMI with HDCP. Tech Report has a review up, and the benchmarks look pretty good."