Intel Aims To Take on Nvidia With a Processor Specially Designed for AI (fastcompany.com)
An anonymous reader shares a report: In what looks like a repeat of its loss to Qualcomm on smartphones, Intel has lagged behind graphics chip (GPU) maker Nvidia in the artificial intelligence revolution. Today Intel announced that its first AI chip, the Nervana Neural Network Processor, will roll out of factories by year's end. Originally called Lake Crest, the chip gets its name from Nervana, a company Intel purchased in August 2016, taking on the CEO, Naveen Rao, as Intel's AI guru. Nervana is designed from the ground up for machine learning, Rao tells me. You can't play Call of Duty with it. Rao claims that ditching the GPU heritage made room for optimizations like super-fast data interconnections allowing a bunch of Nervanas to act together like one giant chip. They also do away with the caches that hold data the processor might need to work on next. "In neural networks... you know ahead of time where the data's coming from, what operation you're going to apply to that data, and where the output is going to," says Rao.
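Rao's point about dropping caches is the interesting one: if the dataflow is known ahead of time, software can explicitly stage the next chunk of data while the current one is processed (double buffering) instead of hoping a cache guesses right. A toy Python sketch of that idea; everything here is illustrative, not Nervana's actual architecture:

```python
import numpy as np

def tiled_sum(data, tile=4):
    # Known access pattern: we always consume tiles in order, so we can
    # "prefetch" the next tile into a second buffer before it is needed,
    # rather than relying on a cache to keep it warm.
    buffers = [data[0:tile].copy(), None]  # stage tile 0 up front
    total, cur = 0.0, 0
    for start in range(0, len(data), tile):
        nxt = start + tile
        if nxt < len(data):
            # Stage the next tile into the other buffer ahead of use.
            buffers[1 - cur] = data[nxt:nxt + tile].copy()
        total += buffers[cur].sum()  # compute on the already-staged tile
        cur = 1 - cur                # swap buffers
    return total

data = np.arange(16, dtype=np.float64)
print(tiled_sum(data))  # 120.0
```

On a real accelerator the "copy" would be a DMA transfer into on-chip SRAM scheduled by the compiler, overlapping with compute.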
Not even Intel's first... (Score:3)
AI chip that is, rather than n00b ;)
http://articles.baltimoresun.com/1993-02-13/business/1993044090_1_neural-networks-chips-michael-glier
NI-1000, back in 1993, for missiles (or, more to the point, to cash in on defense spending, of course)
https://www.intel.com/content/www/us/en/embedded/products/quark/mcu/se-soc/overview.html
A little more recently we have Quark-SE, with 'pattern matching hardware'
https://www.hpcwire.com/2017/08/28/intel-debuts-myriad-x-vision-processing-unit-neural-net-inferencing/
And of course, most recently, the Myriad X vision processing unit with dedicated neural-net inferencing.
So basically Intel is SkyNet.. (Score:4, Interesting)
Re: So basically Intel is SkyNet.. (Score:1)
Don't the activation functions typically applied to the otherwise-linear layers make NNs non-linear? NNs are particularly good no-theory-needed classifiers for non-linear relationships. Applied theory in the domain is usually better, but when you have something complex or need faster performance, they work quite well.
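Right: without the activation, stacked layers collapse into a single linear map, which is exactly what the nonlinearity prevents. A minimal numpy sketch with made-up weights, purely illustrative:

```python
import numpy as np

# Tiny two-layer "network" with hand-picked weights (not trained).
W1 = np.array([[1.0, -1.0],
               [0.5,  2.0]])
W2 = np.array([[1.0, 1.0]])
x = np.array([1.0, 2.0])

# With no activation, two linear layers are equivalent to one linear map:
assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

# A ReLU between the layers breaks that collapse, making the net non-linear:
relu = lambda v: np.maximum(v, 0.0)
print(float(W2 @ relu(W1 @ x)))   # 4.5
print(float((W2 @ W1) @ x))       # 3.5 -- the purely linear result differs
```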
Re: (Score:1)
AI doesn't sell games, so money isn't spent making it better. Also, it is usually given only a very small budget for how much processing it can take up, lest it get in the way of what actually does sell games, like moar pixels and moar framerates.
Re:So basically Intel is SkyNet.. (Score:5, Insightful)
AI in video games is very different from the AI that these chips would be using. The AI in video games sucks for three reasons:
1. If the AI were too good, players would quit because the game is too hard. Certain games are specifically designed to be extremely hard, but this isn't the norm. In a first-person shooter, this would be the equivalent of fighting against an aimbot. (Aimbots are a great example of AI: they're designed to be perfect, or to use information that the player doesn't normally have.) Making the AI imperfect becomes a game balance and design decision.
2. Complex AI can be very computationally intensive. Some algorithms trade calculation speed between "good enough" and "perfect." When a good AI has to weigh a lot of variables in a complex game, it takes a lot of processor cycles to calculate how to respond to you in a way that makes the game harder. Take Civilization, for example, which takes far longer between turns at the end of a scenario than at the beginning, because there are more variables to consider.
3. Given 1 and 2, the easiest method to implement a balanced AI is to cheat - give the computer advantages that the players don't have or to hardcode certain behaviors. Those behaviors can be easy for humans to learn and beat, even when the computer is cheating.
The chips in this article are designed to accelerate the math for one particular class of calculation, by making assumptions about the kind of calculation being done and taking shortcuts, with the drawback that they can't perform other calculations as fast.
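One concrete example of that kind of shortcut, common to many AI accelerators (this is a generic reduced-precision trick, not necessarily what Nervana actually does): do the multiply-accumulate in small integers with a shared scale factor, trading exactness for much cheaper silicon.

```python
import numpy as np

def quantize(x, scale):
    # Map float values onto int8 with a shared scale factor -- the kind of
    # reduced-precision representation deep-learning hardware leans on.
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

rng = np.random.default_rng(42)
a = rng.uniform(-1, 1, (64, 64)).astype(np.float32)
b = rng.uniform(-1, 1, (64, 64)).astype(np.float32)

scale = 1.0 / 127.0
qa, qb = quantize(a, scale), quantize(b, scale)

# Accumulate in int32, then rescale back to float: cheap in hardware,
# approximate in value.
approx = (qa.astype(np.int32) @ qb.astype(np.int32)) * (scale * scale)
exact = a @ b

print(np.max(np.abs(approx - exact)))  # small but nonzero error
```

The same silicon could not, say, run double-precision physics at full speed, which is the "can't perform other calculations as fast" part.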
Re: (Score:2)
What company is going to delay releasing the game for several years while the AI gets built?
Re: (Score:2)
1. By far most game AIs are not "learning" in any sense of the word, and since learning is essential to intelligence, calling them AI is really a misnomer. They do what they do; when you've found a flaw in the algorithms you can just exploit it over and over again, and it'll never change or improve.
2. There's no such thing as "perfect play" in any game more complex than tic-tac-toe. Even chess computers only do "good enough".
3. Cheating is orthogonal to AI, there's nothing inherent to AI about "use information t
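On point 2: tic-tac-toe really is the boundary case, because it is small enough to solve outright with minimax, which is exactly what "perfect play" means. A quick sketch (illustrative code, not from any game engine):

```python
from functools import lru_cache

# All eight winning lines on a 3x3 board stored as a 9-char string.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    # Minimax value: +1 if X wins under perfect play, -1 if O wins, 0 draw.
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    scores = [value(board[:i] + player + board[i + 1:], nxt)
              for i, cell in enumerate(board) if cell == "."]
    return max(scores) if player == "X" else min(scores)

print(value("." * 9, "X"))  # 0 -- perfect play from both sides is a draw
```

For chess the same recursion is well-defined but the state space is astronomically larger, so engines settle for "good enough" search depth.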
Re: (Score:1)
There was a third-party AI mod for Quake II that was actually good:
* the developer went to some lengths to "hide" info from it that it shouldn't have
* it included variable error; you could set it to be perfectly "aimbot" accurate, or to exhibit random inaccuracy, or simulated human inaccuracy (starts off inaccurate and gets better for successive shots)
* it learned which areas, routes and even weapons were more effective for a given map and given opponent(s)
* you could set its health and damage modifier to "chea
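That "starts off inaccurate and gets better for successive shots" behaviour is easy to model. A hypothetical sketch; the function and parameter names are invented here, nothing from the actual mod:

```python
import random

def aim_spread(shot_number, base_deg=10.0, decay=0.6, floor_deg=0.5):
    # Simulated human inaccuracy: the first shot at a target is wide,
    # and each successive shot tightens toward a small floor error.
    return max(floor_deg, base_deg * decay ** shot_number)

def aim_error(shot_number, rng=random):
    # Angular error actually applied to a shot, drawn within the spread.
    s = aim_spread(shot_number)
    return rng.uniform(-s, s)

print([round(aim_spread(n), 3) for n in range(6)])
# [10.0, 6.0, 3.6, 2.16, 1.296, 0.778]
```

Setting `decay=1.0` and `base_deg=0.0` would recover the perfectly accurate "aimbot" mode, and resetting `shot_number` when the bot loses sight of the target keeps it feeling human.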
Re: (Score:2)
Which bot was that?
I know Steve Polge did the ReaperBot for Quake 1...
Anyone know which bot for Quake 2 this was ?
Re: (Score:2)
Re: (Score:2)
/Oblg. Intel, Santa, GPU [bestofmicro.com]
Sounds like a Hail Mary for Intel (Score:3)
For Intel's sake, this had better rock, or else it's DOA.
I'm guessing you'd need to purchase a specialized motherboard with accompanying chipset to use one of these. Whereas GPUs can just plug into slots that most motherboards have already.
GPUs, like cassette tapes, may be with us for awhile before something else comes along that competes well enough with them in cost and utility to make switching a no-brainer.
Re: (Score:2)
Why wouldn't it use standard PCIE 16x or similar existing bus? Your assumptions are way off.
Well, it might, but that wouldn't really allow for the "super-fast data connections" between chips mentioned in TFS. That kind of bandwidth would need to be on a board that supports multiple chips with high-bandwidth connections between them.
Initially, this chip may find a vertical market in AI farms that might otherwise use GPUs. But it won't have access to the kind of crossover market that GPUs do, so it will face greater challenges with broader adoption.
Re: (Score:3)
For Intel's sake, this had better rock, or else it's DOA.
I'm guessing you'd need to purchase a specialized motherboard with accompanying chipset to use one of these. Whereas GPUs can just plug into slots that most motherboards have already.
GPUs, like cassette tapes, may be with us for awhile before something else comes along that competes well enough with them in cost and utility to make switching a no-brainer.
Slots come and go. Just in my (young) lifetime I have seen 4 different slot standards for graphics cards (ISA, PCI, AGP, and PCI-E) just for the consumer market, plus a bunch of other ones of varying popularity for the server market. If something better comes along and doesn't get mired in a patent fight, the slot will change again.
Re: (Score:2)
Don't forget the different address/data width standards on each of those busses:
* (E)ISA: 8/16/32-bit, PIO/DMA.
* VLB/MCA (bet you forgot about those!): 25/33/50 MHz for VLB; not sure about MCA.
* PCI(-X): 32/64-bit, 33-133 MHz, 3.3/5 V, a variety of PCI latencies, and different max PCI memory apertures. Some cards may or may not work on different PCI chipsets as a result.
* AGP: a dedicated PCI channel with a fast one-way DMA aperture from main memory to the card. 3.3V/1.8V/1.5V signalling; chipsets generally supported either AGP 2x/4x or AGP 4x/8x, or only one of those standards at only one or two signal voltages. Some early Pentium-era hardware may be AGP 2x only, at AGP 1x speed.
* PCIe: at least 3 major standards. PCIe 1.1/2.0 weren't very different; 3.0/4.0 added 64-bit BARs and a variety of other features that may cause breakage of newer devices on older PCIe busses, similar to
Some of this may be inaccurate, but is damn close given it is all off the top of my head.
This only covers *desktop PC* standards, too. It gets even messier if you start looking at laptop busses or Mac/SGI/Sun/HP/DEC/etc. busses as well. Go look at how many MXM (lack of) standards there are, and try to find a GPU upgrade for your laptop. If it is pre-Intel ME, you have maybe 3 options (all with Vulkan compatibility, and maybe 1 with basic OpenCL/CUDA support), assuming you had one of the standards-compliant notebooks. Similar problem for anything newer, although the cards got a bit better standardized; but good luck disassembling your laptop and getting your new card under the heatsink, thermally contacted, and actually wattage/heat compatible with your system.
I did not forget about MCA. I had an IBM PS/2 Model 80 back when they had fallen to around $75 but before they became collectors' items. I didn't end up using it for anything, but it was worth the $75 just to open it up and admire the build quality.
Re: (Score:2)
Here we go again (Score:3)
For Intel's sake, this had better rock, or else it's DOA.
Odds are it is already DOA. Intel being Intel, they get the itch every now and then and feel the need to capture some high-yield revenue stream other than the x86 family.
Over the years they have done it all. FPGAs? Embedded controllers? RISC processors? Switch Chips? Infiniband? GPUs? The list is endless. From this perspective "neural net" is nothing new.
Start with some tech acquisition, run up a bunch of hype, do some trade shows or even TED talks. At the end of the day it doesn't run Windows s
Mention GPUs and suddenly (Score:1)
everyone starts talking games and PCs.
This is about AI. We just had a discussion recently over an article ridiculing the energy cost of NVidia's approach to self-driving cars. Chips like this are why that article was ridiculous. NVidia's current approach is a developmental solution. Purpose-made solutions like this will rule the AI market and, within a few short years, use several orders of magnitude less energy to perform the same work.
No such thing as an 'AI chip' (Score:1)
Unfortunate phrasing (Score:2)
Nervana is designed from the ground up for machine learning, Rao tells me. You can't play Call of Duty with it.
One could also read that as "We're trying to develop AI, but it's still so primitive that you can't even make a machine-learned bot for a contemporary game with it yet." ;)
Re: (Score:2)
They'll never beat Microsoft [theverge.com] at this rate.