But give it a few years and it will be.
Really? Because it seems to me that, unlike most technology, it gets harder and harder to get anything meaningfully better out of an LLM. The first ones I saw were impressive in that the text they generated seemed to be going somewhere; it seemed to have a point and a purpose, but it took very little reading to tell that, in fact, it did not.
Made-up GPT-2-style example: "Once upon a a pink deer frolicked through a strawberry field. Then as the princess said he burst into flames! 'He's dead Jim,' said no one in particular. But you can put the fires out with sodium bicarbonate."
I won't bother with GPT-3 or 4; we all know they're good. But the leap between 2 and 3 is big enough that the output goes from barely more than gibberish to something coherent: you can't read more than a paragraph without instantly telling the difference between 2 and 3. Between 3 and 4? It might not be so clear, unless GPT-3 does something especially airheaded. Between 4 and 5? People can tell it's not the same, but many prefer 4 despite 5 being advertised as more correct!