Comment Re: The latest pro-AI talking point... (Score 2) 104

Exactly my point. AI boosters seem to think the details don't matter, but code is literally all details. The big picture or architecture *guides* the thousands of tiny choices you make, but each of them still matters to the whole. LLMs don't understand cause and effect, and they can't map a conceptual understanding from one problem to another - in short, they aren't capable of the kind of cognition you need in order to break down and solve problems *while* respecting an overall design. They also don't have any sort of consistent internal model of the world, so the choices they made before don't guide the choices they make now. Anyone who claims we're going to be project-managing teams of agents anytime soon, or that the tech is capable of building large projects without constant correction from people who *actually know what they're doing*, is deluding themselves. The people breathlessly hyping LLMs all seem to have an enormous amount of money poured into a technology that *still* isn't turning a profit. The whole thing is a house of cards, and it won't stand for much longer. All the "new developments" in the tech are parlor tricks or smoke and mirrors. A social network of LLMs? Okay??? "Agents" that nuke your inbox? Fascinating! Sign me the f*** up for a cloud subscription immediately! I cannot wait for the bubble to pop.

Comment The latest pro-AI talking point... (Score 4, Insightful) 104

...seems to be that with LLMs doing all the "pointless drudgery" of actually writing code, we can finally focus on the bigger picture or architecture while the LLM takes care of the details. Whenever I read this take it makes me think about *actual* architecture. Architects draw building plans. They visualize, plan out, and draw the structure of the house/office/whatever. To do this successfully, i.e. to draw a building that can actually exist in the real world, they have to understand the limits of their building materials. You cannot build a skyscraper out of wood and nails. Architects don't have to be engineers, but they do have to understand the basic constraints of physics and materials. In software engineering, that base understanding of the building blocks comes from *writing code*, lots of it, and *making mistakes*, lots of them, that lead you to an overall understanding of what is and is not possible. Those who gain an understanding of the details can also become adept at big-picture thinking. They understand the role of each component part in holding up the structure and how those components fit together. There's no shortcut to this understanding, and despite what you hear from different AI boosters every single f***ing month, no LLM writes good enough code that you can ignore the details. No, not even Claude or whatever the hype-du-jour is. None of them, and they never will. "Prompt Engineer" is not, and will never be, a job title. That's like hiring an architect who doesn't know what a brick is to design your house. The drawing he does is very pretty, but a mild breeze knocks it over in the real world.

Comment Re:LLM's are prediction machines (Score 5, Insightful) 46

100% - LLMs don't do cognition and they never will. You can train a dog to sniff out drugs or fetch the paper - that doesn't mean it understands what drugs or papers are. LLMs will never understand cause and effect, or do human-cognition things like apply knowledge from one context to another by mapping concepts - that's not what they do, and it never will be. The tech fundamentally cannot do those things. I can't wait for the C-suites to catch up so this "useless AI in everything" era can come to a close.

Comment "That does not mean there isn't promise for AI" (Score 2) 32

Yeah, just like the promise that crypto would replace fiat currency, and the promise that buying nothing and receiving a JPG of a cartoon monkey as a receipt was a wise investment vehicle. I can't wait for the bubble to pop so we can get this "Useless chatbots in literally everything" era over with.

Comment Re: And they lost? (Score 1) 45

That applies to things that can be improved. The flaws inherent to LLMs are permanent. Make the models as gigantic as you want: LLMs will always hallucinate, and will never be able to reason or understand cause and effect, which makes them unsuitable for pretty much every use case the executive class has been salivating over them for. That hasn't stopped people from trying, of course, and fundamental misunderstandings of the tech's abilities persist everywhere, but as more projects fail in spectacular, money-losing ways, those misunderstandings will clear up, and the bubble will deflate. Blockchain was a "game changing technology" too - how much impact has it had on people's everyday lives? It's a grift, my guy, the signs are all there, and the lies of marketing departments and the gullibility of executives are going to cause a recession. Meet me here in 5 years, we'll see who's right.

Comment Re: And they lost? (Score 2) 45

It isn't at all ubiquitous - it's well known that most corporate AI pilots are failing to deliver any value. Investment is cooling off, and consumers and C-suites are beginning to understand the tech's serious, unsolvable flaws. Companies that have invested heavily in it are motivated to lie about the outcomes, but that can only last so long before the truth comes out. It's a bubble, and it's deflating.

Comment Re: Typical (Score 2) 329

In a June 15 tweet, President Trump said testing "makes us look bad." At his campaign rally in Tulsa five days later, he said he had asked his "people" to "slow the testing down, please." At a White House press conference on July 13th, he told reporters, "When you test, you create cases."
