Comment Re: I may be "old fashoned", but... (Score 1) 175
That myth has absolutely caused harm. That you can't see that is a personal failing. Try thinking before blindly reacting.
It's a silly toy, not a tool. If you have any sense, you'll realize that in time.
You're certainly producing more code, but that doesn't mean you're more productive, only that you're producing more code. What a lot of people are discovering is that they're getting a lot of 'dead' code and redundant code, particularly when using agents. Here is one guy's experience, if you're curious.
The writing is on the wall. The promised improvements are never going to materialize, as I've been saying for years, due to fundamental limitations of LLMs. Using AI might make you feel more productive, but all you're really doing is making your code base larger, more fragile, and less secure. It will cost you more in the long run than you think you're saving in the near term.
I think that a programmer would be stupid not to use them.
In 10 years time, you'll deny you ever believed such a ridiculous thing.
Well, it used to be anyway...
We covered more CS content in my intro course than kids get today in 4 years. The average CS undergrad program is little more than a glorified programming bootcamp these days.
Python seems very instinctive to me. It seems very similar to BASIC.
Python is nothing like BASIC. The comparison is both misleading and, to beginners, actively harmful.
As you've probably already figured out, Python is a terrible language for beginners.
It takes a while before they realize the poor computer executes the code line by line and does not interpret anything.
Python obscures many basic concepts. For example, I've also found that the whitespace rules obscure blocks, making it very difficult for beginners to understand control flow. Even simple for loops are needlessly burdened. BASIC makes these things so simple and obvious that they almost don't require any explanation.
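To make the claim concrete, here is a minimal sketch (my own illustration, not from the original post) of what the poster is pointing at: in Python, indentation alone marks where a block begins and ends, with no visible delimiter like BASIC's NEXT or END IF.

```python
# Toy illustration: block boundaries exist only as whitespace.
def pick_evens(numbers):
    evens = []
    for n in numbers:          # nothing below says where this loop ends
        if n % 2 == 0:         # this line is inside the for-loop body
            evens.append(n)    # this line is inside the if-block
    return evens               # dedenting ends both blocks at once

print(pick_evens([1, 2, 3, 4]))  # prints [2, 4]
```

Whether invisible block delimiters help or hurt beginners is exactly the disagreement in this thread; the snippet just shows what's being argued about.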
BASIC would be a good starting point for some.
No question. Dijkstra was wrong about BASIC.
Education is, however, a world built on compromises. It would be nice to research, though.
If you haven't already, Papert's Mindstorms is worth a read.
If an executive can code an app talking to AI...he won't hire programmers.
It's not like we haven't seen that countless times over the years. It always ends the same way. It turns out that specifying the kind of program you want in sufficient detail for a computer to generate it is just programming.
If a call center can handle all of its calls with a couple of AI bots...they will fire the 1,200 employees they have.
Sure
New means jack shit. These sorts of blind appeals to "new" technology rather than technical merit have always been a drag on the industry.
I've never once regretted ignoring the latest programming fad. The time, effort, and money saved over 40+ years is incalculable.
The difference between "AI coding" and the other tools you mention is that those are accurate, consistent, reliable, and inexpensive. LLMs are not.
That said, AI coding agents do provide a competitive advantage
You're the only one talking about Searle, dipshit.
as it does in biological brains
Prove it.
We don't know how brains work, you drooling moron.
You're such a fucking joke!
"LLMs objectively can not do those things" is nonsense
If you believe otherwise, you've got to overturn 100 years of computer science.
I'm sure that you, someone who randomly brings up Searle's argument and still fails to understand it, are capable of such a feat!
What a fucking joke!
You can stick your head in the sand and ignore reality if you want, but you're only hurting yourself and any unfortunate person who takes your advice.
Take some time to actually learn what LLMs do and how they work. It will very quickly dispel whatever ridiculous science-fiction nonsense you inexplicably believe is science fact.
, it actually shows the exact list of sources for the summary under "detailed answer"
Again, these things can't actually summarize text. The "sources" are just the pages that had some of their content included in the context when that part of the response was generated.
That's a nonsense solution. You can ask for "well sourced material", but odds are against you actually getting that; the model has no concept of "well sourced material". You can ask it to "include the relevant snippets from the sources", but odds are against whatever it produces actually existing in the source documents. (Ask it to quote relevant snippets from something you know doesn't exist and it will still produce quotes.) You can ask it to check that the sources support the conclusion, but you'd be better off flipping a coin. That'll get you the right answer more often.
The problem here is that LLMs objectively can not do those things. Remember that they don't operate on facts and concepts, nor can they actually follow instructions (this is provable). All they do is next-token prediction. It's only through careful training and framing that we're able to create the illusion that they can do those things.
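For anyone unclear on what "next-token prediction" means here, a hypothetical toy sketch (mine, not from the post): a bigram "model" that only counts which token tends to follow which, then greedily emits the most frequent follower. It has no representation of facts or instructions, just frequencies; real LLMs are vastly larger and learned, but the core operation is the same shape.

```python
from collections import Counter, defaultdict

def train(corpus):
    # Count, for each token, which tokens follow it and how often.
    follows = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, token):
    # Greedy next-token prediction: most frequent observed follower.
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

model = train("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # prints cat
```

The point of the toy is that nothing in it "knows" anything about cats or mats; it only reproduces statistical patterns in its training text.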
These are all perfectly reasonable use cases. After reading the insanity above, this seemed praiseworthy.
Never trust anyone who says money is no object.