Comment Re: Hmmmm (Score 3, Funny)
The prior year. Laws were made to be broken.
These are concentration camps. That you felt the need to post AC tells me you know that already. You're on the wrong side of history.
What happens when the datacenter rush ends? Either because we have enough capacity or because the AI bubble bursts?
"Look at all the jobs building this one thing will create! Surely, those will last forever and just vanish at the end of the project!"
Think.
In case you haven't noticed, there's not a lot of hope or optimism to be found anywhere at the moment. We're staring down a global recession, possible nuclear war, and watching the leader of the free world build concentration camps. If that wasn't enough, isolationist policies are creating a power vacuum that the most dangerous actors are best positioned to fill. Oh, and fascism and authoritarianism are making a comeback.
Maybe you can find a story about a police officer not shooting a puppy when responding to a welfare check, but I doubt it.
Yes, they could. It would be quick, simple, and efficient. You could even call it AI.
Of course, you'd have a hard time charging millions for a system like that.
This is a perfectly sensible use for AI. There's a lot more to AI than silly chatbots, after all. No one would be foolish enough to
The software uses generative AI
...
After a few rounds of testing, I think we'll have an idea about what is the right time to call it successful or not
Yes, I suppose you will.
What gets posted there doesn't matter fuck all to me.
It should. I don't "use that shit" either, but the shit that gets posted there has killed people that I care about. It's also at least partially responsible for the rapidly developing police state and the newly constructed concentration camps, like "Alligator Auschwitz".
Solution: don't use that shit!
Ignoring the problem won't make it go away. We need real solutions. What those look like, I can't say, but I'm confident that algorithms designed to drive engagement will probably need to go, as will systems controlled by a single entity. Truly open standards for social media, similar to email or the web, would prevent a lot of the worst kinds of abuse. This is an achievable goal, but it would require entities that would otherwise prefer to maintain their own little monopolies to act in a socially responsible way...
You're just denying reality at this point.
One nice thing about this study is that it highlights the false belief that AI is actually saving people time. The developers in this study, just like you, thought that AI was saving them considerable time and effort. Just like I've been saying since 2023, that's clearly not true. I've seen people in real life struggle with a stupid chatbot for hours before declaring 'it took just 15 minutes!'. I don't know exactly why this happens, but it does. Maybe it's the novelty. Maybe they feel like they're better focused with AI. Maybe it's AI psychosis. Whatever the reason, we know that we can't trust self-reported productivity gains.
I'm sure you feel more productive, and I'm sure that 'knowing how to use AI' when so many other people don't makes you feel important, but odds are good that you're just deluding yourself.
People who learn to use it go faster
The evidence suggests otherwise. It turns out, like I've been saying for years, people only think it makes them faster.
I'm asking what you think that training would look like. What kinds of things do you think these developers aren't doing that would make them actually more productive, and not just make them think they're more productive?
From what we've found, simply not using AI would net an easy boost in productivity.
We aren't going back. That's the only thing I am sure of.
Don't bet on it. AI is expensive. A lot more expensive than people realize. Add to that the astonishing technical debt it creates and the increasing evidence that it isn't actually saving any time.
Predictions are hard, especially about the future, but this one is as clear as it gets. The only reason things haven't crashed already is the insane belief that things are rapidly improving. The simple fact is that for all the hope people put in the magic of emergence, there are fundamental limits here that are becoming increasingly difficult to ignore. Those aren't going away, no matter how much you want to believe in the inevitability of progress. LLMs are a dead-end. We've taken them just about as far as they can go. The only way forward is with a fundamentally different approach. It's really that simple.
What "training" do you think would change the results here?
Especially by thoughtless people.
Bullshit. Find just one example of that claim being made on this site. Make it snappy.
just another tool in the toolbox
This is the dumbest take. The most thoughtless "opinion" on anything you'll ever see.
You can say the same thing about *anything* and it would be just as relevant:
"Two-color corn so you can read binary in your shit!" Useless? Maybe if you don't know how to use it! To the experienced developer, it's just another tool in the toolbox(tm).
AI coding tools can slow down seasoned developers by 19% even though they think it makes them 20% more productive. Add to that the ridiculous cost of AI and I have to wonder why you're still calling the silly thing a 'tool' and why you're letting it take up space in your 'toolbox'.
Last yeer I kudn't spel Engineer. Now I are won.