> Yes, AI will struggle with doing full tasks unsupervised
Which means it's NOT the panacea that the people pouring all the hope and money and FIRINGS into it think it will be.
> But it can still do most of the work for many tasks
But, stripped of your hand-wavy, unquantified wish-fulfilment: HOW MUCH of the work? How WELL? How MANY tasks?
> It just needs supervision by someone who understands the task
1) Which means you STILL NEED THE PERSON.
2) And in the future, how do we get people who understand the task and have the supervision-level skills if they DON'T DO THE TASKS THEMSELVES?
Do you work in some environment where the managers and supervisors actually understand the ground-level work as well as the people who've been DOING the ground-level work? If so, good for you, spread the joy. If, as is more likely, NOT, then what the hell do you think is going to make it better in a future where no supervisor has ever done the ground-level work needed to understand the tasks?
> Sometimes the problem is the AI making incorrect assumptions about the task (it wasn't fully framed), sometimes as stated in the summary, the AI context window is too small, so it forgets things, and sometimes it just chooses a really bad approach.
All I heard was "three problems that were never fixed before they rushed this garbage to market and started firing people".
> I have been using Claude Code a lot recently. It's really good at summarizing existing code.
How big a codebase have you given it to analyse, and how are you measuring "good at summarising"? How big a program can you give it before it doesn't understand a f___ing thing? Because I've seen people using in-house tools, Perplexity and ChatGPT to analyse code, and they still fail to understand the point of single-screen scripts.
> It's good at specific targeted changes. It's pretty bad at designing solutions.
All I'm hearing is "it does classification and detection tasks, which is exactly what neural nets - and more importantly, the maths functions neural nets really are - do".
Plus "It's good at changing a specific targeted thing my own fingers could've done faster if I weren't so invested in translating the problem into an English space and typing out the less-precise English to the literally brainless AI".
> I find that while it's usually still faster than doing it manually, I often have to point out where there's a better (usually simpler) solution.
1) Which means you-and-it are putting out much, much more slop a lot faster. Thank you oh so much for your contribution to the world.
2) Also, going back to the METR study: developers thought AI made them ~20% faster when it was in fact making them ~19% SLOWER - a perception-versus-reality gap, a net enshittification, of roughly 40 percentage points.
> So AI doesn't replace the human, but when used correctly, it makes the human more productive.
Producing WHAT? Crap? Nineteen percent slower, when putting that same time into doing it yourself would've produced much better code, if you'd bothered to exercise the grey cells? Thank you for the best argument to start investing AI-levels of money into, oh, I don't know, f__king EDUCATION.
Not to mention, your conclusion rests on a whole bunch of premises we've seen massive problems with. Which - and what, maybe you need to ask the AI to work this out for you? - is a giant logical problem, meaning the conclusion is worthless. Which is yet another argument for investing AI-levels of money into, oh, I don't know, f__king EDUCATION.
> If instead of having a human do the task manually and compare that to the time taken for a human to supervise AI doing the task, you'll probably find for many that the human can do a lot more with AI. (Yes, I know some studies have shown the opposite, but I think that's mostly people not understanding how to effectively manage AI, which may take some experience and training.)
1) Because when the product that is the AI-entity goes mainstream, *everybody* will be so incredibly well educated on how to manage it. Like every other tool currently on the planet. Yes, both kinds of tools.
2) I'm sorry, did you just say we should "start investing AI-levels of money into, oh, I don't know, f__king EDUCATION"?
3) "You'll probably find for many that the human can do a lot more with AI" which fails to cover the "quality of output" argument.
4) And your argument dismisses, with nothing more than a literal "but I think"-grade anecdote with ZERO DATA, the ACTUAL DATA of "some published studies have shown the opposite".
> But AI is far better at almost everything that it was a year ago. So even if it's 2.5% now, it may be 25% next year and 90% a year later. We're living in interesting times.
And given that we know LLMs hallucinate more as they're fed more data, and given that we know AIs training on AI output leads to slop-meltdown, it may be 0.25% next year and 0.09% the year after. And that's JUST as likely as the unfounded supposition you've presented here.
> We're living in interesting times.
That's the secret, Cap. We're always living in "interesting times". Even in the late-19th/early-20th-century curse sense of the term. Especially right goddamn now.
Do I sound like a downer or an AI-luddite? Maybe. At this point I don't care anymore, because the actual studies, data, analysis, evidence and the logic that follows from them - not to mention plenty of anecdotes we can find just as easily - are on that side too.
I mean, I doubt you're an AI, simply because the English in your text was better. But the grade of LOGIC in your entire text was... well, as bad as an LLM's. Or, for that matter, as bad as the logic of an AI-booster. Or of a polite but still fervent religious zealot. Or of the average human being, because we don't invest AI-levels of money into, oh, I don't know, f__king healthcare and EDUCATION.