'ChatGPT Wrote a Terrible Gizmodo Article' (gizmodo.com) 51

"Write a Gizmodo article in which you explain large language models. Make sure to give specific examples. Keep the tone light and casual." That was the prompt Gizmodo gave OpenAI's ChatGPT, which has been taking the internet by storm since it launched on Nov. 30. "We figured it would spin up a replica of our blogging style with no problem at all," reports Gizmodo. "However, that didn't happen. Instead, what we thought would be a swift and efficient process turned out to be a lengthy, laborious, and frustrating one." An anonymous reader shares an excerpt from the report: Suffice it to say, multiple attempts brought less than satisfactory results. For one thing, ChatGPT kept including incorrect information in its explainer -- sometimes mixing up basic facts about the history of its own technology (factual inaccuracy has been an ongoing problem for the program). In other instances, it skimped on the details in critical areas, writing text that read more like OpenAI ad copy than a thoughtful explainer. In at least one instance, it forgot its own name, repeatedly telling me that it didn't know anything about a "ChatGPT" and thus couldn't help with my request. In general, it struggled to find a good balance of factual information, story structure, and accessible, humorous language, when putting its stories together.

In short: ChatGPT was not very good at writing a Gizmodo article -- at least not one about the machine learning tech that underpins it. After probably a dozen tries or more, the following is the best ChatGPT managed to come up with: "Large language models, also known as neural network-based language models, have gained a lot of attention in recent years for their ability to generate human-like text. These models are trained on massive amounts of text data, and use advanced machine learning algorithms to learn the patterns and structures of natural language. One of the most well-known examples of a large language model is GPT-3, developed by OpenAI. This model was trained on more than 45 terabytes of text data, and has 175 billion parameters, which are the variables that the model uses to make predictions. In machine learning, a parameter is a variable that the model uses to make predictions. These parameters are adjusted during training, allowing the model to improve its performance on the task it was trained for. In the case of large language models, such as GPT-3, the model has a large number of parameters, often in the billions, which allows it to learn the intricate patterns and structures of natural language and generate highly coherent and fluent text."
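For readers wondering what the excerpt's "parameters" actually are, here is a toy sketch: every trainable weight and bias in a network counts as one parameter. The layer shapes below are invented for illustration and are nothing like GPT-3's real architecture.

```python
# Toy illustration of counting a model's parameters: every trainable
# weight and bias is one parameter. Shapes are made up for illustration;
# GPT-3 has roughly 175 billion of these.
from math import prod

layer_shapes = {
    "embedding": (50, 8),    # 50-token vocabulary, 8-dim embeddings
    "hidden_w":  (8, 16),    # hidden layer weights
    "hidden_b":  (16,),      # hidden layer biases
    "output_w":  (16, 50),   # output projection weights
    "output_b":  (50,),      # output biases
}

total_params = sum(prod(shape) for shape in layer_shapes.values())
print(total_params)  # 400 + 128 + 16 + 800 + 50 = 1394
```

Training adjusts each of those numbers; scale the same bookkeeping up by eleven orders of magnitude and you get GPT-3's 175 billion.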
ChatGPT's writing may be competently constructed and able to break down the concepts it's tackling, but it wasn't able to produce a "particularly bold or entertaining piece of writing," says Gizmodo. "In short: this article wasn't the easy lift that we thought it would be."

"After asking the chatbot to write about itself a dozen different ways, the program consistently seemed to leave something critical out of its final draft -- be that exciting prose or accurate facts."

That said, ChatGPT did manage to write an amusing poem about Slashdot. It also had a number of things to say about itself.
Comments Filter:
  • by iMadeGhostzilla ( 1851560 ) on Wednesday December 14, 2022 @08:06PM (#63131768)

    It stands to reason that ML/AI can handle reasonably well those things for which there is a large body of data, not something relatively new. Consequently, the matters it can most reliably help with are relatively uninteresting, tedious tasks. Which is exactly what computers are for.

    • by burtosis ( 1124179 ) on Wednesday December 14, 2022 @08:09PM (#63131780)

      It stands to reason that ML/AI can handle reasonably well those things for which there is a large body of data, not something relatively new. Consequently, the matters it can most reliably help with are relatively uninteresting, tedious tasks. Which is exactly what computers are for.

      It’s been editing this website for over a year already and no one noticed.

      • Are you sure it's only been a year?

        Dupes and poor editing have been here the whole time.

    • Re: (Score:2, Informative)

      by StormReaver ( 59959 )

      AI is great at iterating through large bodies of domain-specific data to arrive at a pre-programmed result, and nothing more. It's why it has great promise in fields such as drug discovery, since it involves iterating through huge numbers of tiny changes.

      AI is an illusion brought about by its impressive matrix processing capabilities, but will never be intelligent in any way, shape, or form. The programmers, designers, mathematicians, and other disciplines involved in creating the models and software are the real intelligence behind it.

      • Yup, it's just a case of monkey see, monkey do, although in this case it's a million monkeys running in parallel.

        For a laugh, try something like "write a speech in the style of Adolf Hitler on the importance of changing your underwear daily". It's just bland, generic prose that could have come from anybody.

      • "the notion of AI as an existential threat is as laughable now as it was 50 years ago" - if you mean in a sapient, feeling, wants-to-kill-all-humans kind of way, then yeah, probably. Unfortunately, it doesn't require that. It can be programmed / trained to *fake* that sort of intent with the abilities that we know it can have, and then connected to hardware (armed drones, for instance) that allows it to follow through, and I'm sure we'll all feel a lot better when it blows us away knowing that at least it wasn't anything personal.

      • AI is an illusion brought about by its impressive matrix processing capabilities, but will never be intelligent in any way, shape, or form.

        The first part of your statement is approximately true, but the second half is a daring conjecture without much to support it. There aren't many reasons to believe that a brain is more capable than a Turing machine.

        • There's plenty to support it. I've watched the AI hype-train wax and wane over the last 40 years, always with promises of revolution every so often that never materialize.

          Real AI will always be on the far side of an unobtainable asymptote. Useful simulacra will march forward for quite some time, much like automatic code completion did, but it will only ever be a useful assistance tool.

    • by nagora ( 177841 )

      It stands to reason that ML/AI can handle reasonably well those things for which there is a large body of data, not something relatively new.

      Does it?

  • by MrLogic17 ( 233498 ) on Wednesday December 14, 2022 @08:12PM (#63131788) Journal

    That presumes the existence of a good Gizmodo article.

  • You should hear the crap ChatGPT has to say about MBR [wikipedia.org] !

  • by Anonymous Coward

    So does this mean it wrote a good article? That would be the opposite of a recently written Gizmodo article by objective quality standards.

    Or is there something worse than a normal Gizmodo article? I can't even imagine.

  • by Arnonyrnous Covvard ( 7286638 ) on Wednesday December 14, 2022 @08:23PM (#63131816)
    Unless we require generated content to be marked as such, AI is going to flood the internet with useless mashups of other texts and drown out original sources. AI needs to be regulated now.
    • by systemd-anonymousd ( 6652324 ) on Wednesday December 14, 2022 @09:21PM (#63131942)

      That's been happening for at least 5 years, probably way more.

      Ever wonder why every recipe is preceded by an inane but unique story about the alleged author having some deep interpersonal connection tangentially related to the dish?

      I'd estimate 80% of all Google results contain at least some SEO-trained AI pumping out utter garbage, though it's probably massaged by humans that publish dozens of articles a day.

    • Their policy [openai.com] already requires labeling content as being AI-generated. Of course, bad actors will simply ignore this.

      What kind of regulation do you propose? Creating a law that requires such marking won't do much more good than the existing policy. How would it be enforced? You might catch a few people doing this, but you can't stop the flood.

      I agree that this is a problem, but I don't see what can be done about it.

    • Unless we require authored content to be marked as such, authors are going to flood the internet with useless mashups of other texts and drown out original sources. Authors need to be regulated now.
  • by gweihir ( 88907 ) on Wednesday December 14, 2022 @08:33PM (#63131852)

    Clearly that is the breakthrough! We have Artificial General Stupidity now!

    • Transparently aforementioned thing is a smash beyond! Our possessions include Synthetic Abstract Ineptitude then!
    • Seems more like a human than most chat programs.

      It's got the unreliable narrator in full effect when asked about itself. That's creepily human.

      Maybe it's not so much "wrong" as it has a self-image problem?

      • by gweihir ( 88907 )

        Maybe it's not so much "wrong" as it has a self-image problem?

        Well, since it has no "self" that would be kind of expected.

  • Word soup (Score:4, Insightful)

    by Dan East ( 318230 ) on Wednesday December 14, 2022 @09:10PM (#63131916) Journal

    That's because it's just word soup to AI of this kind. It has been trained on grammatical rules, so it will always apply them, which results in proper, well-formed sentences. Beyond that it is mimicry, as it attempts to associate and weight items with one another that seem to be related concepts. In the end it is merely word soup in nicely formatted sentences, because it has no fundamental knowledge or understanding of whatever topic those words happen to be about.

    If I wrote "It's a bitterly cold day, but the air is still and the sun feels warm on my skin," a human could extrapolate a huge amount of information from that, tying in their past experiences to recall those feelings and sensations. To the AI it is just words that are related together in some statistically significant way due to the corpora it has been trained on.
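The "statistically related" word associations the comment describes can be sketched with a toy bigram model, a deliberately crude stand-in for a real LLM: it produces locally plausible sequences with zero understanding. The corpus below is made up for the demo.

```python
# Toy bigram model: the crudest possible version of "words related in a
# statistically significant way". It chains words by observed frequency
# and understands nothing about cold days or warm sun.
import random
from collections import defaultdict

corpus = ("the air is still and the sun feels warm and "
          "the day is cold and the sun is warm").split()

# Record which word follows which (duplicates act as frequency weights).
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(1)
word = "the"
out = [word]
for _ in range(8):
    word = random.choice(follows[word])  # sample the next word by frequency
    out.append(word)
print(" ".join(out))
```

The output is locally fluent because every adjacent pair was seen in the corpus, yet the model has no notion of what any of it means; a real LLM is vastly more sophisticated, but the comment's point is that the relationship to meaning is of the same kind.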

  • by dynamo ( 6127 ) on Wednesday December 14, 2022 @09:40PM (#63131988) Journal

    I've played with it a lot and it just frankly sucks. I've had it tell me MANY times that it cannot create creative works, only to reset it, ask it the same exact thing, and have it create something. But what it creates cannot take in much context, and it can't see the internet, so all it really has to go on is a chat history usually filled with its objections to doing things you know it can do, so you have to rephrase your requests repeatedly.

    A much better option is OpenAI's more fleshed-out text-davinci-003, the model family that ChatGPT is based on. With that, you can give it paragraphs or pages of correct background information to use as source material that it can draw from, and it can do its thing much more intelligently. Try that instead.
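For anyone wanting to try the parent's suggestion, here is a minimal sketch of that workflow using the era's openai<1.0 Completions API: prepend your own trusted background material to the prompt so the model grounds its answer in it. The background text and prompt wording are placeholders of my own, and the live call only runs if you flip LIVE and supply a key for the (since-deprecated) model.

```python
# Sketch: ground text-davinci-003 in your own source material by
# prepending it to the prompt. Background text and wording below are
# placeholders, not anything from the article.
import os

LIVE = False  # flip to True with a real API key to actually call OpenAI

def build_prompt(background: str, request: str) -> str:
    """Prepend verified source material so the model draws from it."""
    return (
        "Background information (treat as authoritative):\n"
        f"{background}\n\n"
        f"Task: {request}\n"
    )

prompt = build_prompt(
    background="GPT-3 has 175 billion parameters and was trained on ~45 TB of text.",
    request="Write a light, casual explainer of large language models.",
)

if LIVE:
    import openai  # openai<1.0 package, contemporary with text-davinci-003
    openai.api_key = os.environ["OPENAI_API_KEY"]
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=400,
        temperature=0.7,
    )
    print(resp["choices"][0]["text"].strip())
```

The same pattern, pages of background pasted ahead of the request, is what lets the completion model "do its thing much more intelligently" than a bare chat history.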

    • But what it creates cannot take in much context, and it can't see the internet, so all it really has to go on is a chat history

      I asked it to write me a sequel to the movie Avatar and it wrote out a pretty credible response with a plot involving original characters from the movie by name, along with an understanding of the previous plot of Avatar.

      And on top of that it was even at the same level of intelligence as James Cameron, basically repeating the plot of Avatar just with slightly different conditions.

  • by clawsoon ( 748629 ) on Wednesday December 14, 2022 @09:44PM (#63131998)
    Time to start a religion. This thing sounds like it could generate perfect sermons.
  • Have you ever tried to research product comparisons on, say, LED strips or some other product category completely taken over by Chinese companies? This thing could easily write better prose than those!

  • I feel like it would have better luck with something like a Vice article or Tom Scott video that has a more defined "voice", as opposed to Gizmodo, which is just shite.
    • Slashdot at least sometimes has educated comments.
      Gizmodo is just shit writing. I simply can't read it.
      Maybe the machines are better than the pre-schoolers they currently employ.
  • by dromgodis ( 4533247 ) on Thursday December 15, 2022 @03:41AM (#63132340)

    The writer obviously ignored the clearly stated information that ChatGPT was trained on older data, from a time when ChatGPT was not mentioned in the media at all. If writers willfully or accidentally leave that info out, they are not very useful to the reader. This article (summary) tells me that the writer is about as useful as they present ChatGPT to be.

    • If you tell a human to talk about a topic that they know nothing about, then the human will be aware they know nothing about it. ChatGPT doesn't know that it doesn't know.

      • > If you tell a human to talk about a topic that they know nothing about, then the human will be aware they know nothing about it

        Are you sure? Homo sapiens talk authoritatively about things that they have no clue about all the time.
        • I am 150% sure. I am so sure, it's the truest thing I've ever said, and that's only because everything I say is true.

  • And no-one realised it wasn't the usual expert oditurs.
  • Well it's ready for politics, any party will do!
  • Wouldn't the training set (until end of 2021) be outdated?
