Comment Re:The Varginha mass hysteria incident (Score 1) 11
It's interesting how practically everything is "the most convincing proof we have."
Are you crying? There's certainly some starting, but probably a ways to go yet.
Do you not already have this? Amazon has local lockers here in drug stores, gas stations and convenience stores, most of which are open 24 hours. Most of the postal locations are also in drug stores.
100%. The point of learning-based AI is that it's faster and cheaper to develop than conventional engineered algorithms. It also tends to execute faster with fewer resources than conventional algorithms. Apple, Nvidia and other companies already do this locally pretty extensively: DLSS, background segmentation and other processing in videoconferencing, audio processing, photo processing including object and person recognition, text to speech and speech recognition, information extraction from e-mails, etc.
You probably actually mean large language models. Those too. Language models are so compelling because they seem to have personalities and they can interact with us like people. People are going to want theirs personalized. The current approach is to shove context into a hidden background for every prompt, but that's expensive and very limited. In the future you'll have a local version that learns and adapts to you: what you like for breakfast, what time you get up, what kind of jokes you like, if you're a furry. These things are all over sci-fi, from Niven and Heinlein to Star Wars, Star Trek and Marvel.
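The "shove context into the background of every prompt" approach can be sketched roughly like this; the profile contents and helper names here are made up for illustration, not any particular vendor's API:

```python
# Hypothetical sketch of per-prompt context injection.
# The profile fields and function names are assumptions for illustration.

USER_PROFILE = {
    "breakfast": "black coffee, toast",
    "wake_time": "06:30",
    "humor": "dry puns",
}

def build_prompt(user_message: str, profile: dict) -> str:
    """Prepend the persistent user context to every request.

    This is the expensive part: the same profile tokens get re-sent
    (and re-billed) on every single prompt, and the profile can only
    grow as large as the model's context window allows.
    """
    context = "\n".join(f"- {k}: {v}" for k, v in profile.items())
    return (
        "You are a personal assistant. Known facts about the user:\n"
        f"{context}\n\n"
        f"User: {user_message}\nAssistant:"
    )

prompt = build_prompt("What should I have for breakfast?", USER_PROFILE)
print(prompt)  # the profile preamble rides along with every request
```

A locally adapted model would instead fold those facts into its weights (or a learned cache), so nothing needs to be re-sent each turn.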
No reason why it can't be open either. The ridiculous amounts of power put into training language models today is because it's an arms race. Six months behind the behemoths it's all enthusiasts reenacting the early days of PCs in their basements.
Both your and the OP's numbers have a certain... fragrance.
They did. If you don't think military contractors build things as cheaply as they can, or if you think there's something magical about "military-grade", you're dreaming. They charge as much as they can because they don't have any proper competition.
Iron Dome interceptors, the Tamir missile, cost about $40-50k. Patriots are around $4 million, SM3s $10-30 million. The Tamir works fine and is that cheap because Israel is a small country with limited resources and lots of demands on those resources. Patriots and SM3s are that expensive because the US is a big country with lots of resources, not nearly as many demands on them, and you guys didn't listen to Eisenhower.
Are they the same ones they put in vaccines?
Probably not. If the Chinese had their own pre-installed backdoors in Cisco gear they could just use those instead of exploiting the Cisco and US government ones in order to install their own.
Currently modded flamebait, a sure sign someone with mod points knows it's true.
Yep. Still registering all the payback from 1953.
Someone who knows what "empathetic" means.
There's a simple solution to that, and a somewhat less simple but one-time solution that also lets you visit the US afterward.
In the old(er) days of AI there was a philosophical split between people who believed the way forward was creating larger and larger databases of hand-compiled facts and rules for relating them, and people who favoured doing as little hand engineering as possible and letting learning algorithms figure things out for themselves. The latter group is certainly dominant now, but old habits die hard.
If we dispense with the need for humans to understand programs then we won't have a "language designed for" programming models. The models will generate their own internal representation and we'll swap out translation layers for specific hardware the way language translation systems swap language layers to go to and from arbitrary language pairs.
I'm very curious what you think the answer is.
You can "do AI shit" on a processor you can build in your mom's basement, or on a 1970s Z80 that's already there. Tesla wanted to do it a bit faster than that with less power so their HW3 was built by Samsung on a 14 nm process using DUV lithography. The HW4 is probably a TSMC 7 nm which could be DUV or EUV lithography.
Intel announced their first 14 nm fab in 2011 at $5 billion and they quote their 7 nm fabs at $10 billion.
Tesla/SpaceX might be talking about a cutting-edge fab, in which case $20 billion is too low. There isn't really any need for that, though, and if they're serious about space then they might well prefer a larger, more robust process.