MrSeb writes: "Transistor announcements aren’t the sexiest occasions on the block, but Intel’s 22nm SoC unveil is important for a host of reasons. As process nodes shrink and more components move on-die, the characteristics of each new node have become particularly important. 22nm isn’t a new node for Intel; it debuted the technology last year with Ivy Bridge, but SoCs are more complex than CPU designs and create their own set of challenges. Like its 22nm Ivy Bridge CPUs, the upcoming 22nm SoCs rely on Intel’s Tri-Gate implementation of FinFET technology. According to Intel engineer Mark Bohr, the 3D transistor structure is the principal reason why the company’s 22nm technology is as strong as it is. Other evidence backs up this point. Earlier this year, we brought you news that Nvidia was deeply concerned about manufacturing economics and the relative strength of TSMC’s sub-28nm planar roadmap. Morris Chang, TSMC’s CEO, has since admitted that such concerns are valid, given that performance and power are only expected to improve by 20-25% as compared to 28nm. The challenge for both TSMC and GlobalFoundries is going to be how to match the performance of Intel’s 22nm technology with their own 28nm products. 20nm looks like it won’t be able to do so, which is why both companies are emphasizing their plans to move to 16nm/14nm ahead of schedule. There’s some disagreement over which node comes next; GlobalFoundries and Intel are talking up 14nm, while TSMC is implying a quick jump to 16nm. Will it work? Unknown. TSMC and GlobalFoundries both have excellent engineers, but FinFET is a difficult technology to deploy. Ramping it up more quickly than expected while simultaneously bringing up a new process may be more difficult than either company anticipates."
MrSeb writes: "For the past thirty years, desktop system longevity has been defined in sockets. I cut my teeth as an enthusiast on Socket 7, and I’ve owned examples of virtually every AMD and Intel socket standard that followed. For the past eight years, Intel has used an LGA (land grid array) socket in which the pins sit in the motherboard socket and mate with flat contact pads on the CPU. This packaging has served the company well; it’s scaled the number of contacts from 775 to 2011 on Sandy Bridge-E, and had no trouble with high TDP parts. According to a leaked roadmap, Haswell will be the last Intel chip slated for an LGA package. All of the Broadwell parts on the map are dual- and quad-core SoCs with 47-57W TDPs that would be soldered to the motherboard, using BGA (ball grid array, the usual method for soldering surface-mount chips to PCBs). Dual-core Broadwells also pick up 10W and 15W TDP brackets; the same leak suggests Intel intends to abolish the 35W segment altogether. Is Intel doing this simply to ramp up chipset sales, by forcing users to upgrade the whole motherboard rather than just the CPU? Unlikely, given the tiny number of users who still upgrade CPUs rather than replacing their entire system. More likely, the move away from LGA to BGA is intended to reduce electrical resistance, thus reducing the TDP of the parts and pushing newer chips into lower power envelopes."
MrSeb writes: "At a joint press conference in London, Motorola and Intel have unveiled the Razr I smartphone. The Razr I has an edge-to-edge 960×540 4.3-inch Super AMOLED display, a layer of Kevlar on the back, and — most importantly — an Intel Medfield SoC clocked at 2GHz as the brains of the operation. Rounding out the hardware specs, there’s an 8-megapixel rear shooter, a front-facing VGA camera, NFC, a 2000 mAh battery, and the entire phone (including the internal components) is protected by a “splash-guard” water repellent coating. On the software side of things, rather excitingly, it looks like the Razr I runs an almost-vanilla version of Android 4.0 Ice Cream Sandwich. One of the weirder aspects of today’s product announcement is that the Razr I won’t be made available in North America; it’ll only ever see the light of day in Europe and some parts of Latin America (Mexico, Brazil) sometime in October. In fact, to date, four Medfield-powered Android smartphones have been released — the Xolo X900, Lenovo K800, Orange San Diego, and now the Razr I — but not a single one of them is available in the US. Why? The most likely reason is that Intel isn’t quite ready to face off against the latest and greatest SoCs in the most hotly contested smartphone battleground."
MrSeb writes: "Fully unveiled at the Intel Developer Forum over the last few days, Intel’s next-generation architecture, codenamed Haswell, isn’t just another “tock” in Intel’s tick/tock cadence; it’s a serious threat to both AMD and Nvidia. For the first time, Intel is poised to challenge both companies in the mainstream graphics market while simultaneously eroding Nvidia’s edge in the GPGPU business. For a start, the Haswell CPU core will be 10-15% faster than Ivy Bridge, but thanks to the addition of AVX2, Haswell's floating point performance will be monstrous: a quad-core part should be capable of 256 (double-precision) gigaflops, which should be enough to outpace Nvidia's GTX 680. On the GPU side of things, Haswell will massively increase the number of processing cores, offering "up to 2x" the performance of Ivy Bridge's HD 4000. Even a conservative take on that promise spells trouble for AMD and Nvidia. According to benchmarks, Trinity’s GPU is an average of 18% faster than Llano’s across a range of 15 popular titles. Compared to Sandy Bridge, Trinity was almost 80% faster. Against Ivy Bridge, it’s just 20% faster. Given what we know of Haswell’s GPU shader counts and performance targets, it shouldn’t be hard for Intel to deliver a 30-50% performance boost in real-world games. If it does, Trinity goes from the fastest integrated GPU on the market to an also-ran, and AMD loses the superior graphics hole card it’s been playing since it launched the AMD 780G chipset four years ago. It isn't looking good for either AMD or Nvidia."
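The 256-gigaflop figure quoted for a quad-core Haswell is easy to sanity-check. Below is a back-of-the-envelope sketch; the two 256-bit FMA units per core and the 4GHz clock are assumptions made for the example (they are not stated in the summary), chosen because they are what the quoted number requires:

```python
# Peak double-precision throughput estimate for an AVX2/FMA-era CPU.
# Assumed, not sourced: two 256-bit FMA units per core, 4 GHz clock.

def peak_dp_gflops(cores, clock_ghz, fma_units=2, vector_bits=256):
    doubles_per_vector = vector_bits // 64   # 4 doubles fit in a 256-bit register
    flops_per_fma = 2                        # a fused multiply-add counts as 2 FLOPs
    flops_per_cycle = fma_units * doubles_per_vector * flops_per_fma
    return cores * clock_ghz * flops_per_cycle

print(peak_dp_gflops(cores=4, clock_ghz=4))  # -> 256
```

Under those assumptions each core retires 16 double-precision FLOPs per cycle, so four cores at 4GHz land exactly on the 256 gigaflops cited above; a real chip at stock clocks would come in somewhat lower.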
MrSeb writes: "Intel often uses the Intel Developer Forum (IDF) as a platform to discuss its long-term vision for computing as well as more practical business initiatives. This year, the company has discussed the shrinking energy cost of computation, as well as the point at which it believes the energy required for “meaningful compute” will approach zero, making computing ubiquitous, by the year 2020. The idea that we could push the energy cost of computing down to nearly immeasurable levels is exciting. It’s the type of innovation that’s needed to drive products like Google Glass or VR headsets like the Oculus Rift. Unfortunately, Intel’s slide neatly sidesteps the greatest problem facing such innovations — compute already accounts for less than half the total energy expenditure of a smartphone or other handheld device. Yes, meaningful compute might approach zero energy — but touchscreens, displays, radios, speakers, cameras, audio processors, and other parts of the equation are all a long way away from being as advanced as Intel's semiconductor processes."
MrSeb writes: "When Intel goes looking for new chip manufacturing technology to invest in, the company doesn’t play for pennies. Chipzilla has announced a major investment and partial purchase of lithography equipment developer ASML. Intel has agreed to invest €829 million (~$1B USD) in ASML’s R&D programs for EUV and 450mm wafer deployment, to purchase €1.7B worth of ASML shares ($2.1B USD, or roughly 10% of the total shares available), and to invest general R&D funds totaling €3.3B (~$4.1B USD). The goal is to bring 450mm wafer technology and extreme ultraviolet lithography (EUVL) within reach despite the challenges facing both deployments. Moving to 450mm wafers is a transition Intel and TSMC have backed for years, while smaller foundries (including GlobalFoundries, UMC, and Chartered, when it existed as a separate entity) have dug in their heels against the shift — mostly because it costs an insane amount of money. It’s effectively impossible to retrofit 300mm equipment for 450mm wafers, which makes shifting from one to the other extremely expensive. EUVL is a technology that’s been percolating in the background for years, but the deployment time frame has slipped steadily outwards as problems stubbornly refused to roll over and solve themselves. Basically, this investment is a signal from Intel that it intends to push its technological advantage over TSMC, GloFo, UMC, and Samsung even further."
MrSeb writes: "In the past week, both AMD and Intel have given us a tantalizing peek at their next-generation neuromorphic (brain-like) computer chips. These chips, it is hoped, will provide brain-like performance (i.e. processing power and massive parallelism way beyond current CPUs) while consuming minimal amounts of power. First, AMD last week announced that its future APUs will feature ARM Cortex cores, first to implement TrustZone (ARM's hardware security technology), but then eventually as part of a proper x86-ARM-GPU heterogeneous system architecture (HSA). It isn’t too crazy to think that a future AMD (or Texas Instruments) chip might have a few GPU cores, a few x86 CPU cores, and thousands of tiny ARM cores, all working in perfect, parallel, neuromorphic harmony — as long as the software toolchain is good enough that you don’t have to be a parallel-programming guru to use all of those resources efficiently. Intel, on the other hand, today unveiled a neuromorphic chip design based on multi-input lateral spin valves (LSVs) and memristors. LSVs are microscopic magnets that change their magnetism to match the spin of electrons being passed through them (spintronics). Memristors are electronic components that increase their resistance as electricity passes through them one way, and reduce their resistance when electricity flows in the opposite direction — and when no power flows, the memristor remembers its last resistance value (meaning it can store data). Unlike state-of-the-art CMOS transistors that require volts to switch on and off, the LSV neurons only require a handful of electrons to change their orientation, which equates to a switching voltage of around 20 millivolts. For some applications, Intel thinks its neuromorphic chip could be up to 300 times more energy efficient than the CMOS equivalent."
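The memristor behavior described in that summary (resistance drifts one way under forward current, the other way under reverse current, and holds steady with no current) can be captured in a toy model. This is purely illustrative and bears no relation to Intel's actual device physics; every name and constant here is invented for the example:

```python
# Toy memristor: resistance drifts with the direction of current flow and
# is retained when no current flows. All constants are arbitrary.

class Memristor:
    def __init__(self, resistance=1000.0, r_min=100.0, r_max=10000.0, rate=50.0):
        self.resistance = resistance
        self.r_min = r_min      # physical devices saturate at lower/upper bounds
        self.r_max = r_max
        self.rate = rate        # ohms of drift per unit of current per time step

    def apply_current(self, current, dt=1.0):
        # Positive current raises resistance, negative current lowers it;
        # zero current changes nothing, which is the "memory" property.
        self.resistance += self.rate * current * dt
        self.resistance = min(self.r_max, max(self.r_min, self.resistance))
        return self.resistance

m = Memristor()
m.apply_current(+2.0)   # forward current: resistance rises to 1100
m.apply_current(-1.0)   # reverse current: resistance falls to 1050
m.apply_current(0.0)    # no current: stored value is unchanged
print(m.resistance)     # -> 1050.0
```

The zero-current case is what makes the component usable as non-volatile storage: the last resistance value can be read back later as a stored datum.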
MrSeb writes: "In an interview with ExtremeTech, Mike Bell — Intel's new mobile chief, previously of Apple and Palm — has completely dismissed the decades-old theory that x86 is less power efficient than ARM. From the story: “There is nothing in the instruction set that is more or less energy efficient than any other instruction set,” Bell says. “I see no data that supports the claims that ARM is more efficient.” The interview also covers Intel's inherent tech advantage over ARM and the foundries ("There are very few companies on Earth who have the capabilities we’ve talked about, and going forward I don’t think anyone will be able to match us," Bell says), the age-old argument that Intel can't compete on price, and whether Apple will eventually move its iOS products from ARM to x86, just like it moved its Macs from Power to x86 in 2005."
MrSeb writes: "Intel may have officially launched its new 22nm Ivy Bridge CPUs last month, but this morning’s ultrabook announcement is the chip’s real debut. The new dual-core IVB processors are at the center of the company’s ultrabook plans, and OEM partners like Dell, HP, Asus, and Lenovo have readied a full range of product SKUs. Intel’s new dual-core, Hyper-Threaded Ivy Bridge processors retain the same core/thread configuration as their Sandy Bridge-based predecessors. At the upper end of the market, the replacement cycle starts now; Intel delayed the introduction of mobile IVB until now to give OEMs time to clear stocks of older parts. Ivy Bridge’s greatest advantage over Sandy will be in GPU-centric workloads, but higher clock speeds will deliver 12-28% higher CPU performance as well. Interestingly enough, battery life is one area Intel isn’t pushing upward with this refresh. That’s not to say we won’t see Ivy Bridge systems with better battery life, but the company has put its focus on other improvements."
MrSeb writes: "After months of being teased by the China-only Lenovo K800 and the India-only Xolo X900, the Western world is finally getting a piece of the Intel Atom-powered smartphone action: The Orange San Diego (previously codenamed Santa Clara) will launch in the UK on June 6 and cost just £199 ($300) on a prepaid tariff, or £15.50 ($24) per month on a 2-year contract. Under the San Diego’s hood there’s an Atom (Medfield) Z2460 SoC, which features a single-core 1.6GHz CPU with Hyper-Threading. Combined with Intel’s XMM 6260 modem, the new phone supports HSPA+, and pentaband (worldwide) UMTS 3G. There’s an 8-megapixel shooter on the back that’s capable of 1080p video capture (with software image stabilization) and 10-frames-per-second “burst” shooting. Rounding out the hardware is a lovely 1024×600 4-inch (297 PPI) display, backed by a PowerVR SGX540 GPU clocked at 400MHz. Benchmark-wise, the San Diego will perform very similarly to the Xolo X900 (they're basically the same phone), which means it beats out most of the current phones on the market, but falls behind any Snapdragon S4 (Krait) smartphones. As one Intel exec told ExtremeTech, though, Medfield’s purpose is simply to earn “a seat at the table.” Now that it’s finally there, expect Chipzilla’s girth to rapidly expand until there’s only space for one or two ARM cleaner fish."
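The 297 PPI figure quoted for the San Diego's display is just the panel's pixel diagonal divided by its physical diagonal. A quick sketch of the arithmetic:

```python
import math

# Pixel density from resolution and diagonal size. For the 1024x600,
# 4-inch panel described above this works out to roughly 297 PPI.

def pixels_per_inch(width_px, height_px, diagonal_in):
    diagonal_px = math.hypot(width_px, height_px)  # sqrt(w^2 + h^2)
    return diagonal_px / diagonal_in

print(round(pixels_per_inch(1024, 600, 4.0)))  # -> 297
```

The same formula confirms the figure given for the Xolo X900 elsewhere in this digest, since the two phones share the same panel specification.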
MrSeb writes: "When Intel launched Ivy Bridge last week, it didn’t just release a new CPU — it set a new record. By launching 22nm parts at a time when its competitors (TSMC and GlobalFoundries) are still ramping their own 32/28nm designs, Intel gave notice that it’s now running a full process node ahead of the rest of the semiconductor industry. That’s an unprecedented gap and a fairly recent development; the company only began pulling away from the rest of the industry in 2006, when it launched 65nm. But how has Intel managed to pull so far ahead? Joel Hruska of ExtremeTech talks to Mark Bohr, Senior Intel Fellow and Director of Process Architecture and Integration, to find out some of Chipzilla's tips and tricks."
MrSeb writes: "Details of a new, ultra-compact computer form factor from Intel, called the Next Unit of Computing (NUC), are starting to emerge. First demonstrated at PAX East at the beginning of April, and at Intel’s Platinum Summit in London last week, the NUC is a complete 10x10cm (4x4in) Sandy Bridge Core i3/i5 computer. On the back, there are Thunderbolt, HDMI, and USB 3.0 ports. On the motherboard itself, there are two SO-DIMM (laptop) memory slots and two mini PCIe headers. On the flip side of the motherboard is a CPU socket that takes most mobile Core i3 and i5 processors, along with a heatsink and fan assembly. Price-wise, it's unlikely that the NUC will approach the $25 Raspberry Pi, but an Intel employee has said that the price will "not be in the hundreds and thousands range." A price point around $100 would be reasonable, and would make the NUC an ideal HTPC or learning/educational PC. The NUC is scheduled for release in the second half of 2012."
MrSeb writes: "'Intel can’t make smartphones.' That statement has summarized the opinion of any number of engineers and business analysts ever since Intel launched Atom back in 2008. Intel, meanwhile, went ahead and built a smartphone anyway. That device, the Android-powered Xolo X900, is what ExtremeTech is reviewing today. At first glance, the X900 is impressive: It has a 4-inch 1024x600 (297 PPI) screen, an 8MP rear camera, a micro USB/HDMI port — and it weighs less than the iPhone 4S. In benchmarking, the Medfield-powered Xolo isn't earth-shattering, but it does beat out most Android handsets already on the market. Perhaps silencing the x86-can't-be-as-efficient-as-ARM naysayers, the Xolo even has quite impressive power usage characteristics. In short, the X900 is a solid, well-built Android phone, period. It might not beat out the iPhone 4S in some benchmarks, but remember, this is Intel's first x86 phone; it will improve. With the Xolo X900, Intel has officially put the ARM manufacturers on notice. Intel is going to power smartphones, and it is going to be competitive in that space."
MrSeb writes: "Intel’s Ivy Bridge (IVB) has been one of the hottest tech topics of the past 12 months — we haven’t seen this much interest in a CPU since Intel launched Nehalem. Ivy Bridge is the first 22nm processor at a time when die shrinks have become increasingly difficult, the first CPU to use FinFETs (Intel calls its specific implementation Tri-Gate), and it’s a major component of Intel’s ultrabook initiative. In benchmarking, these changes equate to a CPU speed boost of around 5-10% over Sandy Bridge, but with a significantly lower TDP (around 25% lower). If Ivy Bridge’s CPU is a bit boring, the new GPU more than makes up for it. Ivy Bridge’s integrated graphics core increases the total number of execution units by 33% (to 16, up from 12), and implements support for DirectX 11, OpenCL 1.1, and OpenGL 3.1. There are now two texture units instead of one, and the GPU can issue twice as many MADs (multiply-add operations) per clock. Ivy Bridge incorporates a small, dedicated L3 cache of its own, but retains its ability to share data across the high-bandwidth ring bus that connects it to the processor, if necessary. GPU performance is up by around 25-50%. Read the full review for more details."
MrSeb writes: "Ivy Bridge’s debut is still a few weeks away, but Intel has decided to launch its new 7 Series chipsets (codenamed Panther Point) ahead of the CPU’s ship date. The new Panther Point chipset family is the follow-up to the 6 Series that debuted 14 months ago, and it’s very much an evolution of that initial design. Z77, the enthusiast member of the family, adds a number of features that the 6 Series lacked and includes the PCIe 3.0 support that was previously confined to the high-end X79 chipset. Intel’s DZ77GA-70K board also offers a gorgeous UEFI interface and improved fan controls that are far more accessible than the company’s previous products, and it’s compatible with both Ivy Bridge and Sandy Bridge chips. But is the Z77 a drop-in successor and easy upgrade choice? Unfortunately, the Z77 has a ton of caveats and "known issues" that suggest that Intel has pushed this one out the door a little early — plus, with a Sandy Bridge chip, the motherboard's performance is virtually identical to the board it replaces."