
Comment Re:Herald of the future? (Score 1) 5

100%. The point of learning-based AI is that it's faster and cheaper to develop than conventional engineered algorithms. It also tends to execute faster with fewer resources than conventional algorithms. Apple, Nvidia and other companies already do this locally pretty extensively: DLSS, background segmentation and other processing in videoconferencing, audio processing, photo processing including object and person recognition, text to speech and speech recognition, information extraction from e-mails, etc.

You probably actually mean large language models. Those too. Language models are so compelling because they seem to have personalities and they can interact with us like people. People are going to want theirs personalized. The current approach is to shove context into a hidden preamble for every prompt, but that's expensive and very limited. In the future you'll have a local version that learns and adapts to you: what you like for breakfast, what time you get up, what kind of jokes you like, whether you're a furry. These things are all over sci-fi, from Niven and Heinlein to Star Wars, Star Trek and Marvel.
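For the curious, here's a rough sketch of what that "shove context into every prompt" approach amounts to. The profile fields and the helper function are entirely made up for illustration; real systems do this with system messages and retrieval, but the idea is the same: everything the model "knows" about you gets re-sent on every request.

```python
def build_prompt(profile: dict, user_message: str) -> str:
    """Prepend a hidden preamble built from a stored user profile.

    This is the naive personalization scheme: nothing is learned,
    the profile is just flattened into text ahead of each prompt.
    """
    context = "\n".join(f"- {k}: {v}" for k, v in profile.items())
    return (
        "You are a personal assistant. Known user preferences:\n"
        f"{context}\n\n"
        f"User: {user_message}\nAssistant:"
    )

# Hypothetical stored profile, re-sent (and re-billed) every single turn.
profile = {"breakfast": "oatmeal", "wake time": "06:30", "humor": "dry"}
prompt = build_prompt(profile, "What should I eat this morning?")
print(prompt)
```

Note the cost problem falls straight out of the sketch: the preamble grows with everything you want the model to remember, and you pay for those tokens on every request, which is why a locally adapting model is the more attractive endgame.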

No reason why it can't be open either. The ridiculous amount of power poured into training language models today is down to an arms race. Six months behind the behemoths, it's all enthusiasts reenacting the early days of PCs in their basements.

Comment Re:Temu missiles (Score 1) 257

They did. If you think military contractors don't build things as cheaply as they can, or that there's something magical about "military-grade," you're dreaming. They charge as much as they can because they don't face any real competition.

Iron Dome's interceptor, the Tamir missile, costs about $40-50k. Patriots run around $4 million, SM3s $10-30 million. The Tamir works fine and is that cheap because Israel is a small country with limited resources and lots of demands on those resources. Patriots and SM3s are that expensive because the US is a big country with lots of resources, not nearly as many demands on them, and you guys didn't listen to Eisenhower.

Comment Re:Abstract Syntax Tree (Score 1) 159

In the old(er) days of AI there was a philosophical split between people who believed the way forward was building larger and larger databases of hand-compiled facts and rules for relating them, and people who favoured doing as little hand engineering as possible and letting learning algorithms figure things out for themselves. The latter group is certainly dominant now, but old habits die hard.

If we dispense with the need for humans to understand programs, then we won't have a "language designed for" programming models. The models will generate their own internal representation, and we'll swap out translation layers for specific hardware the way machine-translation systems swap language layers to handle arbitrary language pairs.

Comment Re:Not according to Intel or AMD (Score 2) 125

You can "do AI shit" on a processor you can build in your mom's basement, or on a 1970s Z80 that's already down there. Tesla wanted to do it a bit faster than that with less power, so their HW3 was built by Samsung on a 14 nm process using DUV lithography. HW4 is probably a TSMC 7 nm part, which could be DUV or EUV lithography.

Intel announced its first 14 nm fab in 2011 at $5 billion, and it quotes its 7 nm fabs at $10 billion.

Tesla/SpaceX might be talking about a cutting-edge fab, in which case $20 billion is too low. There isn't really any need for that, though, and if they're serious about space then they might well prefer a larger, more robust process node.
