Comment Re:Prepare now because (Score 1) 58
Reality ain't popular. It's why politicians lie their asses off.
Liar.
I can think of a few that did, with similar results. Chasing fads is not a guarantee of success. Quite the opposite, as it happens.
China's geezer party leaders probably got jealous of all the young whippersnappers having fun, so had to rein it in.
Theory is that's also why the Old Testament was written. "If my wanker no longer works, those whippersnappers are no longer allowed to snap their whips either! Parity of misery, Hallelujah, pass the wine!"
Legged robots are little more than 'reflexes', continuously responding to various sensors to keep from tipping over. As for "trying new things", keep in mind that you're watching propaganda. Take anything seen or said with a few grains of salt.
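The "reflexes" point above can be sketched as a feedback loop: read a tilt sensor, push back against the lean, repeat. This is a toy PD controller with made-up gains, not any real robot's control stack.

```python
# Toy sketch of a balance "reflex": continuously read the tilt sensor
# and output a torque that opposes the lean.  Gains are illustrative.

def balance_step(tilt, tilt_rate, kp=12.0, kd=3.0):
    """Return a corrective torque opposing the current lean.

    tilt: lean angle in radians (positive = leaning forward)
    tilt_rate: how fast the lean is changing, in rad/s
    """
    return -(kp * tilt + kd * tilt_rate)

# Leaning forward while still tipping forward -> torque pushes back.
torque = balance_step(0.1, 0.05)
```

Run that in a tight loop against real sensors and you get something that stays upright without any model of the world, which is the whole point: it looks lifelike, but it's pure reflex.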
You're deeply confused. The parent is talking about autonomous robots, not generating text. Companies have indeed been "faking it" for years, sometimes with dancers dressed like robots, sometimes with proper robots controlled by a remote operator.
As for "grammatical perfection", you seem to have confused the whole of AI with silly chat bots. Those also "fake it" in countless ways, though I should probably point out that grammatically correct output is the easy part. Still, Joseph Weizenbaum's Eliza proves that we can not only fake "grammatical perfection" in a chat bot, doing so is trivial.
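How trivial? An Eliza-style responder is just regex matching plus canned templates. This is a two-rule sketch in the spirit of the original, not Weizenbaum's actual 1966 script: the output is grammatically flawless and the program understands nothing.

```python
import re

# Minimal Eliza-style responder: keyword patterns with fill-in templates.
# Perfect grammar, zero comprehension.

RULES = [
    (re.compile(r"i am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.+)", re.I), "How long have you felt {0}?"),
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(m.group(1))
    return "Please, go on."

print(respond("I am worried about robots"))
# -> Why do you say you are worried about robots?
```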
You'll find that a lot of the superficial 'human-like' things have been little more than smoke and mirrors for about as long as we've been making humanoid robots. A mix of actual technology and stage magic will get you a vacuum-tube robot that can respond to voice commands in 1939.
The rush is that burning it is buggering up the planet. If the US refuses, it becomes a security issue and we'll be dealt with appropriately.
While there is no "NLP layer", and LLMs do fall under that category, he's not exactly wrong here. A surprising amount of tinkering with the input and output outside of the model goes into making them seem less silly. As for "communicating with a system in natural language", having pretend conversations isn't quite the same thing, is it? If your goal is to translate messy NL input into precise commands, LLMs can only really fake it. Sure, it's good enough for things like function calling, as long as the stakes are low and the alternative is almost certain failure. Given how they work, however, this is not something they'll ever be able to do reliably enough for anything that actually matters.
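The "tinkering outside of the model" looks roughly like this in practice: the model's reply is treated as untrusted text, parsed, and checked against a whitelist before anything executes. Everything here, the function name, the schema, the JSON wire format, is invented for illustration; it's a sketch of the pattern, not any vendor's API.

```python
import json

# Guardrail layer around LLM function calling: the model only *suggests*
# a call as text; this code decides whether it's safe to act on.

# Hypothetical whitelist: function name -> exact set of required args.
ALLOWED = {"get_weather": {"city"}}

def validate_call(raw):
    """Return (name, args) if raw is a well-formed, whitelisted call, else None."""
    try:
        call = json.loads(raw)
        name, args = call["name"], call["args"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return None  # model produced free text instead of a call
    if name not in ALLOWED or set(args) != ALLOWED[name]:
        return None  # hallucinated function or wrong parameters
    return name, args

validate_call('{"name": "get_weather", "args": {"city": "Oslo"}}')
```

Note that the validator can only reject malformed output; it can't make the model reliably produce correct calls, which is the "can only really fake it" problem: when the model drifts, the best the outer layer can do is fail.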
I just asked myself... what would John DeLorean do? -- Raoul Duke