You made a bunch of statements about how wrong I am, but didn't back any of them up. I guess I just have to take your word for it.
Nor did you, so I figured I'd match you claim for claim. Your one attempt at doing so was a misfire and a half.
On the productivity article, yes, I sent the wrong link. Here's the right one. https://www.cnbc.com/2023/04/2... [cnbc.com] This happened because I used Gemini to dig for the data, and you may have heard about how great AI is at attribution (not). It got the reference links mixed up.
LOL.
Bullshit.
Search engine attribution doesn't come from the training data (which would indeed be dangerously inaccurate). It comes from tool-based querying. You sent the wrong link after filtering out the ones that didn't fit your narrative. Don't worry about it; happens to the best bullshit artists.
But let's get back to the actual article you meant to reference:
FTA:
We find that less-skilled and less-experienced workers improve significantly across all productivity measures we consider, including a 34% increase in the number of issues they are able to resolve per hour.
The 14% is across the entire workforce.
I don't see how the math doesn't jump out at you.
A 14% productivity increase for service agents means you can handle the same ticket volume with roughly 12% fewer staff (1 − 1/1.14).
That number is fucking huge.
Microsoft's biggest layoff in history wasn't anywhere close to 14%.
Worse for these guys: the low-end folks are at 34%. CSRs are buggy-whip manufacturers; they're done for.
The cost to improve them by 14-34% is a tiny fraction of their cost as an employee. To replace them entirely is just a matter of figuring out logistics.
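For what it's worth, the headcount math is slightly sub-linear: a productivity gain of g lets you drop 1 − 1/(1+g) of the staff while serving the same load, so 14% more productive means about 12% fewer agents, and 34% means about 25%. A quick back-of-envelope sketch (the function name and numbers-as-inputs are mine, not from the study):

```python
# Back-of-envelope: how much headcount a fixed workload still needs
# after a fractional productivity gain. Illustrative sketch only.

def headcount_cut(gain: float) -> float:
    """Fraction of staff no longer needed to handle the same
    ticket volume after a fractional productivity gain."""
    return 1.0 - 1.0 / (1.0 + gain)

for gain in (0.14, 0.34):
    print(f"{gain:.0%} more productive -> ~{headcount_cut(gain):.1%} fewer agents")
```

Still a huge number either way; the point stands even after rounding down.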
There are various studies that show different levels of productivity gains. I honestly don't believe the more dramatic ones. Sure, AI can write thousands of lines of code in the blink of an eye. That kind of thing would boost productivity numbers for developers, if you use metrics like number of lines of code. But are those lines of code right? No, not by a long shot. I use multiple AI developer tools and models from Microsoft, Anthropic, and Google.
Developers are safe for a while. LLMs can assist there, but they really can't operate unsupervised. Everyone's tried it. It's just not there yet. Anyone who claims otherwise is definitely drinking the Kool-Aid. But they will continue to get better. I began transitioning to an LLM-jockey role about a year ago. I recommend anyone concerned about job security do the same.
Even when the AI bubble pops, like .coms and housing before it, corporations will continue to become more and more dependent on LLMs. They'll just be a lot fucking cheaper.
I have yet to see any of them get more than a few lines of code right, without having to be corrected. If vibe coding were a real thing, we'd be seeing all kinds of vibe-coded stuff in production by now.
There are a few ways to evaluate this claim.
1) You're full of shit.
2) You just haven't figured out how to interface with them yet.
3) You have no actual programming skills.
Entire projects are written by LLMs now.
We are seeing vibe-coded shit in production now.
The purest, vibiest form of vibe coding, where a programming-illiterate person tries to build something, is usually pretty limited in scope. But even then: a coworker of mine (whom we sadly just laid off) built an entire fucking front-end for our ticketing and accounting systems with almost no programming experience. His skill level was "can sometimes successfully make small changes to scripts to suit him."
Your claim doesn't reflect reality, at all.
I genuinely want to understand why, but I'm finding it difficult to do so without being insulting.