No way are experienced developers letting AI generate bug fixes or entirely new features by talking to it over Slack on the way to work.
Depends on whether they can review the code and tests effectively first. I frequently push commits without ever typing a line of code myself. The loop goes:

1. Tell the LLM to write the test, and how to write it. Check the test; tell the LLM how to tweak it if necessary.
2. Tell the LLM to write the code and verify the test passes. Check the code; tell the LLM what to fix. Repeat until good.
3. Tell the LLM to write the commit message (which I also review), then tell it to commit and push.
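To make the first two steps concrete, here's a hypothetical sketch (the trim() function and its test are invented for illustration, not from a real session; it assumes GoogleTest):

    #include <gtest/gtest.h>
    #include <string>

    std::string trim(const std::string& s);

    // Step 1: the test the LLM writes first, and which I review first.
    TEST(TrimTest, StripsSurroundingWhitespace) {
        EXPECT_EQ(trim("  hello "), "hello");
        EXPECT_EQ(trim(""), "");
        EXPECT_EQ(trim(" \t \n "), "");
    }

    // Step 2: only after the test looks right does the LLM write this.
    std::string trim(const std::string& s) {
        const auto first = s.find_first_not_of(" \t\n");
        if (first == std::string::npos) return "";  // all whitespace
        const auto last = s.find_last_not_of(" \t\n");
        return s.substr(first, last - first + 1);
    }

The point is the order: the test exists and has been reviewed before any implementation does, so reviewing the code amounts to checking it against a test I already trust.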
Actually "tell the LLM what to fix/tweak" is often not right. More often it's "Ask the LLM why it chose to do X". I find I program via the Socratic Method a lot these days. The LLM usually immediately recognizes what I'm getting at and fixes it -- most often not because the code was wrong but because the implementation was more complex than necessary, or duplicated code that should be factored out, or similar. Sometimes it provides a good explanation and I agree that the LLM got it right.
As an example from immediately before I started typing this comment: the LLM wrote some code that included a line like [[maybe_unused]] auto ignored = ptr.release();. It had recognized that the linter was going to flag the unused return value (which it had named "ignored" to make clear to readers that ignoring it was intentional) and inserted the annotation to suppress the warning. This was all unnecessarily complex, and it was only necessary because the LLM had earlier used get() to grab the raw pointer value before checking it, and then (right after the release()) stuffed that raw pointer into another smart pointer object to return. The release() call was needed to keep the first smart pointer from deleting the pointed-at object. I typed "Why not move the pointer directly from release() to the new smart pointer?" The LLM said "Oh, that would be cleaner and then I could get rid of the temporaries entirely" and reorganized the code that way. That's a trivial code-structure example, of course, but the pattern often holds with deeper bugs, including sometimes that my question makes the LLM realize its whole approach (which is often what I suggested) was wrong and go into planning mode to develop a correct strategy.
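For concreteness, here's roughly the shape of that refactor, reconstructed with hypothetical names (Widget and adopt_*; the real code was different, and the real return type, unlike std::shared_ptr, had no constructor that accepts a std::unique_ptr directly):

    #include <memory>

    struct Widget {};  // stand-in for the real pointed-at type

    // Roughly what the LLM wrote first: get() the raw pointer, check it,
    // release() so the unique_ptr won't delete the object, and annotate
    // the unused "ignored" value to silence the linter.
    std::shared_ptr<Widget> adopt_before(std::unique_ptr<Widget> ptr) {
        Widget* raw = ptr.get();
        if (raw == nullptr) {
            return nullptr;
        }
        [[maybe_unused]] auto ignored = ptr.release();
        return std::shared_ptr<Widget>(raw);
    }

    // After "Why not move the pointer directly from release() to the new
    // smart pointer?": no raw temporary, no ignored value, no annotation.
    std::shared_ptr<Widget> adopt_after(std::unique_ptr<Widget> ptr) {
        if (!ptr) {
            return nullptr;
        }
        return std::shared_ptr<Widget>(ptr.release());
    }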
There are exceptions, of course. Sometimes the LLM seems incredibly obtuse, and after a couple of prompts I click "stop" and type what I want myself, at least enough that I can then tell the LLM, "See what I did? That's what I mean."
"Writing code" with AI assistance is mostly reviewing code and you can often do that on a small screen and without a keyboard.