I wrote a programming book with it
I have used ChatGPT since December; I was (and still am) impressed by what it can generate. I think you need to be careful about what questions you ask it, and how you ask them. If you ask ChatGPT about a bunch of "edge" cases, or questions that require a lot of inference, it will not do well. But if you ask it about topics that are well understood and well researched, it can write a very coherent reply.
As an experiment, I used ChatGPT to write a programming book about FORTRAN77. I wrote it over a week in January. It took me longer to map out the book's outline and figure out what questions to ask than it took ChatGPT to generate its responses. You might learn FORTRAN77 by reading this book, but that wasn't the point of writing it. I was experimenting with ChatGPT to see how far I could push it. I picked FORTRAN77 as my topic for two reasons:
(1) FORTRAN77 has been around for a long time, so a lot has already been written about it. This is a well researched topic and I figured ChatGPT would have a lot to work with. And it did.
(2) I thought it was funny to ask an AI to write about FORTRAN77 in 2023. Because let's face it, I did the book for fun.
Some interesting things here: ChatGPT generates text that is of "average" complexity and sentence length. There weren't any really short sentences, nor any really long sentences. Everything was kind of "moderate" length. Also, in the 80 pages of the book, the only semicolons were in the code, or in my end-of-chapter commentary. ChatGPT doesn't seem to use semicolons in its writing.
Another interesting thing is that you need to be careful of errors. If you're relying on an AI to do the writing, then you need to fact-check it. In the book, ChatGPT boldly labeled code samples as FORTRAN77 but then wrote something else, probably some more recent version of Fortran. (I stopped programming in FORTRAN77 as an undergraduate in the early 1990s, just as Fortran90 was starting to replace F77. So I don't know the later editions of Fortran.) Based on other experiments I've done, I think if you (a human) know a topic fairly well, you can probably catch most (maybe all?) of the errors that ChatGPT makes. But you need to be careful and watch closely. If you're using ChatGPT's output verbatim, that fact-checking and editing phase might outweigh its usefulness. But if you're using ChatGPT as a starting point to write something on your own, then that might be okay.
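To give a flavor of the kind of fact-checking I mean: some Fortran 90 constructs (the "::" declaration syntax, END DO, free-form "&" continuations) simply don't exist in the FORTRAN77 standard, so you can mechanically flag code that can't be what it claims to be. This little Python sketch is my own toy checker, not anything from the book, and the pattern list is illustrative rather than exhaustive:

```python
import re

# A few constructs introduced after FORTRAN 77 (illustrative, not exhaustive).
# All three arrived with the Fortran 90 standard, though some compilers
# offered them earlier as extensions.
POST_F77_PATTERNS = [
    ("Fortran 90 '::' declaration", re.compile(r"::")),
    ("Fortran 90 END DO", re.compile(r"\bEND\s+DO\b", re.IGNORECASE)),
    ("Fortran 90 free-form '&' continuation", re.compile(r"&\s*$")),
]

def flag_post_f77(source):
    """Return warnings for lines that use post-FORTRAN77 syntax."""
    warnings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in POST_F77_PATTERNS:
            if pattern.search(line):
                warnings.append("line %d: %s" % (lineno, name))
    return warnings

# A sample that claims to be FORTRAN77 but uses two Fortran 90 features.
sample = """      PROGRAM HELLO
      INTEGER :: I
      DO I = 1, 3
         PRINT *, 'HELLO'
      END DO
      END
"""

for warning in flag_post_f77(sample):
    print(warning)
```

Running this on the sample flags the "::" declaration and the END DO, which is exactly the sort of mislabeling I kept catching. A heuristic like this won't prove code IS valid F77, of course; it only catches obvious anachronisms, and a human who knows the language still has to do the real review.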
I think we're still in the early days of AI. Today, you're seeing a lot of people using ChatGPT to write annual goals, or to respond to exam questions, or to write a book - and we're all "wow'ed" by that right now. If you set the clock ahead 5 or 10 years, we'll all use AI as some kind of "co-author." It won't seem weird anymore.