
Submission + - Devs Gaining Little (If Anything) From AI Coding Assistants (cio.com)

snydeq writes: Code analysis firm Uplevel's recent study of GitHub Copilot use found no major productivity benefits for developers based on key metrics, reports CIO.com's Grant Gross. 'Coding assistants have been an obvious early use case in the generative AI gold rush, but promised productivity improvements are falling short of the mark — if they exist at all. Many developers say AI coding assistants make them more productive, but a recent study that set out to measure their output found no significant gains. Use of GitHub Copilot also introduced 41% more bugs, according to the study,' Gross writes, adding that results reported from the trenches are mixed: few developers see productivity gains, and most find themselves spending more time on code review. As Gehtsoft's Ivan Gekht puts it: 'It becomes increasingly challenging to understand and debug the AI-generated code, and troubleshooting becomes so resource-intensive that it is easier to rewrite the code from scratch than to fix it.'

Devs Gaining Little (If Anything) From AI Coding Assistants

  • While this may hold true for large code bases or complex code, I have seen a notable increase in productivity from using Cursor, the AI-powered IDE. Yes, you have to review all the code that it generates (why wouldn't you?), but oftentimes it just works. It removes tedious tasks like querying databases, writing model code, and writing and processing forms, and a lot more. Some forms can have hundreds of fields, and processing those fields along with checking for valid input is time-consuming, but c
    • Cursor/Claude are great, but the code they produce is almost never great quality.

      Even with these tools, the junior/intern teams still cannot outpace the senior devs. Great for learning, maybe, but the productivity angle isn't quite there... yet.

      It's damned close, though. Give it 3-6 months.
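
The "hundreds of form fields" tedium described in the thread above can be made concrete with a short sketch. This is a hypothetical example, not code from the discussion: the field names and validation rules are invented for illustration, and it simply shows the kind of repetitive check-every-field code that commenters say an assistant generates reasonably well.

```python
# Hypothetical sketch of per-field form validation boilerplate.
# Each rule is (required, check_function, error_message); all names
# and rules here are invented for illustration.

def validate_form(data, rules):
    """Return a dict mapping field name -> error message for failing fields."""
    errors = {}
    for field, (required, check, message) in rules.items():
        value = data.get(field)
        if value in (None, ""):
            if required:
                errors[field] = f"{field} is required"
            continue  # optional and absent: nothing to check
        if not check(value):
            errors[field] = message
    return errors

rules = {
    "email": (True, lambda v: "@" in v, "invalid email"),
    "age": (True, lambda v: str(v).isdigit() and 0 < int(v) < 130, "invalid age"),
    "nickname": (False, lambda v: len(v) <= 32, "nickname too long"),
}

print(validate_form({"email": "user@example.com", "age": "42"}, rules))
# {} (no errors)
```

With hundreds of fields, the `rules` table grows mechanically, which is exactly the sort of rote expansion the commenter credits the assistant with handling.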

  • I suspect the results are quite a bit more nuanced than that. Even beyond the code review the article mentions, I expect the change is a shift in where and how the time is spent, not necessarily in how much time is spent. There were almost always "good enoughs" left in my code that I would have addressed if time and resources had permitted, or refactors that I simply didn't have time for. I can't imagine that an AI assist changes the fact that it's usually the deadline driving the final state of the code, more
