ChatGPT Is Being Used To Declassify Redacted Government Docs

Last month, OpenAI launched GPT-4 with vision (GPT-4V), allowing the chatbot to read and respond to questions about images. One of the many ways users are putting this new feature to work is to decode redacted government documents on UFO sightings. "ChatGPT-4V Multimodal decodes a redacted government document on a UFO sighting released by NASA," one tweet raves. "Maybe the truth isn't out there; it's right here in GPT-V." Decrypt reports: Trying to fill gaps in a string of text is basically what LLMs do. The user did the next best thing when trying to test GPT-4V's capabilities and made it guess parts of a text that he had censored himself. "Nearly 100% intent accuracy," he reported. Of course, it's hard to verify whether its guess at what's otherwise obscured is accurate -- it's not like we can ask the CIA how well it did peering through the black lines. Some other ways users are utilizing GPT-4V include: deciphering a doctor's handwriting; understanding medical images, such as X-rays, and receiving analysis and insights for specific medical cases; providing information about the nutritional content of meals or food items; assisting interior design enthusiasts by offering design suggestions based on personal preferences and images of living spaces; and providing technical analysis for stocks and cryptocurrencies based on screenshots.
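For context, image queries of this kind are sent to a vision-capable model alongside a text prompt. A minimal sketch, assuming the openai Python SDK and API access to a vision-capable model; the model name, prompt, and image URL are illustrative placeholders, not taken from the article:

from openai import OpenAI

# Sketch: ask a vision-capable model to read a scanned document page.
# The client reads OPENAI_API_KEY from the environment.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # illustrative vision-capable model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Transcribe the readable text on this page."},
                {"type": "image_url", "image_url": {"url": "https://example.com/scanned_page.png"}},
            ],
        }
    ],
    max_tokens=500,
)
print(response.choices[0].message.content)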
  • by ihavesaxwithcollies ( 10441708 ) on Friday October 13, 2023 @05:44PM (#63923671)
    Hopefully ChatGPT can find winning scratch tickets.
  • This is idiotic (Score:5, Informative)

    by dlleigh ( 313922 ) on Friday October 13, 2023 @05:50PM (#63923691)

    Large language models like ChatGPT manufacture plausible details according to statistical likelihoods. They have no way of recreating the redacted information.

    Because the statistics come from large collections of open literature, and not actual classified documents that allegedly discuss aliens, all the newly filled-in text will do is reinforce the same preconceptions that the general population already has.

    Garbage in, garbage out.
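
    As a quick illustration (a sketch assuming the Hugging Face transformers library; the sentence is made up), a masked language model just ranks statistically plausible fillers drawn from open text:

    from transformers import pipeline

    # The model ranks likely fillers for the blank; it has no access to
    # whatever actually sat under the redaction bar.
    fill = pipeline("fill-mask", model="bert-base-uncased")

    for guess in fill("The object recovered near [MASK] was transferred to another facility."):
        print(f"{guess['token_str']:>12}  p={guess['score']:.3f}")

    Whatever comes back is simply the most common way such a sentence gets completed in public text, not the redacted content.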

    • Oh... I was wondering why these redacted government documents regarding UFOs referred to Hari Seldon so many times.

    • Re:This is idiotic (Score:5, Insightful)

      by Fuck_this_place ( 2652095 ) on Friday October 13, 2023 @06:39PM (#63923789)

      It's not about making aliens real, it's about making idiots real. Lunacy sells.

      • by Tablizer ( 95088 )

        The Large Tinted Dude has claimed lots of powers, but he's yet to claim having special Area 51 knowledge or powers. Is he slipping?

        • You mean like being able to stop all the bad things if only he had the power? I seem to recall he had that power and bad things continued unabated anyway. A throw-rug is more useful at covering up dirt.

          Give it time, he'll get to the "it's aliens" part soon.

          • by Tablizer ( 95088 )

            But oil was cheap, and that's the Most Important Thing. Econ crashes and deficits are for the little people to worry about.

      • Yup. Instead of hallucinating aliens for yourself you can get ChatGPT to hallucinate them for you.
    • by MrL0G1C ( 867445 )

      Thank you. I agree. It's just putting in words that are likely to follow the previous words, based on other literature that was scanned previously; it has about as much chance of being correct as a horse has of winning a race after having won other races.

    • by gweihir ( 88907 )

      Sure. But being fed garbage is what most people desire and what makes them feel good.

      • by taustin ( 171655 )

        More important, the web sites that put out stories like this know what will sell ads. And selling ads is the only thing they care about (or dare to care about).

    • by taustin ( 171655 )

      As one actual expert explained, it knows what the correct answer should look like, but that's not the same thing as knowing the correct answer.

      And the difference matters.

    • My expectation is that when you fill in the gaps, the worst possible outcome for humanity is uncovered, with maximum human suffering and the most evil actions.
  • "Nearly 100% intent accuracy." He found what he intended to find?
    • by taustin ( 171655 )

      He found what he intended to find?

      One generally does. To quote an episode of CSI, "If you're looking for something specific, there's only one right answer."

    • by Barny ( 103770 )

      Given the whole "deciphering doctor's handwriting" thing, what could possibly go wrong for those edge cases? /s

  • by pieisgood ( 841871 ) on Friday October 13, 2023 @05:59PM (#63923709) Journal

    It doesn't help that the redacted information likely contains the largest acronyms known to man, which decompose into code words for a project anyway, removing any sort of probabilistic ad-lib effectiveness. This is AI-generated hopium.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      That's not true. By applying CREDIBLESCHNAUZER techniques to the data acquired via LIQUIDTRIPWIRE, a FOOTLOOSEHOTLINE analyst has a high likelihood of detecting any ROUNDBLUEECHO patterns. This leads to revealing any hidden words behind redaction marks.

      (Disclaimer: I just made up all of those all-caps names.)

  • one tweet raves. "Maybe the truth isn't out there; it's right here in GPT-V."

    I think you picked the wrong tagline to play off of.

  • FWIW there are open source alternatives...
    https://llava-vl.github.io/ [github.io]

    The technology is impressive but what TFA is talking about WRT UFO documents doesn't seem to have anything substantive to do with it.

  • by Kernel Kurtz ( 182424 ) on Friday October 13, 2023 @06:18PM (#63923747)
    Either way, isn't this what LLMs do - guess the word(s) that come next after being trained on the entire content of the internet?

    Redacted text seems like a perfect test. Might be hard to find a control group in this case though.
    • It would be fairly easy to download some lengthy but public report on a similar topic, something vaguely technical and not common vocabulary, then redact sections and use this technique. That would give some measure of the accuracy.
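
      For instance, a rough sketch of that kind of test using the Hugging Face transformers fill-mask pipeline: redact single words from a made-up passage and score how often the model's top guess matches. (A real test of GPT-4V would swap in an API call against the actual redacted images.)

      from transformers import pipeline

      fill = pipeline("fill-mask", model="bert-base-uncased")

      # Any paragraph from a public report would do as ground truth.
      text = ("The committee reviewed the incident report and concluded that "
              "the observed object was a weather balloon launched by the university.")

      words = text.split()
      hits = 0
      for i, word in enumerate(words):
          # Redact one word at a time and ask the model to fill it back in.
          masked = " ".join(words[:i] + ["[MASK]"] + words[i + 1:])
          top_guess = fill(masked)[0]["token_str"].strip()
          hits += top_guess.lower() == word.lower().strip(".,")
      print(f"Top-1 recovery rate: {hits}/{len(words)} words")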
      • by narcc ( 412956 )

        I would expect anything sensitive enough to be redacted to also be unique enough that accurate predictions of this kind wouldn't be possible.

      • That might work, but it'd be necessary to make sure the model's training data didn't contain the original document, otherwise it'd be filling the gaps from previous knowledge.

        • by kiore ( 734594 )
          I wonder how hard it would be to get some unique but plausible documents written that have never been near the searchable parts of the internet. Then redact those & see what GPT produces.
  • by OrangeTide ( 124937 ) on Friday October 13, 2023 @06:20PM (#63923749) Homepage Journal

    Feed a computer faulty data, and it will confidently give you the wrong answer.

    • Same thing happens when the data isn't faulty. That's not what causes hallucinations.

      • AI training is basically the science of classifying data, almost all of it of tremendously low quality. Maybe I exaggerate for effect, but it's standard practice to feed AI garbage.

    • by gweihir ( 88907 )

      The "confidently" part is the real breakthrough here...

    • by taustin ( 171655 )

      In this case, not faulty data, but rather irrelevant data.

      Consider what the AI was trained on: The internet. What's on the internet? Real classified documents? No, for the most part, not. Instead, the AI was trained on:

      1) Fiction
      2) Conspiracy theories
      3) Ramblings from the mentally ill
      4) Deliberate misinformation by officials who want to distract the public from their graft and corruption
      5) Deliberate lies by a "news" media that only cares about selling ads in their desperate struggle to survive.

      Small wonder.

      • by narcc ( 412956 )

        But what if the mad ramblings from the crazy conspiracy nuts were really true all along? No secret is safe!

  • by 93 Escort Wagon ( 326346 ) on Friday October 13, 2023 @06:24PM (#63923761)

    ChatGPT is not being used to declassify anything. Some UFO nutter is using ChatGPT to guess at what the redacted parts of already-declassified documents might say.

  • So ChatGPT finally has found one unconditionally positive application after all!

    • by taustin ( 171655 )

      I've found another thing it's very, very good at.

      Think up the most ridiculous headline for a tabloid article, and ask it to write the article. It's indistinguishable from the real thing (and, these days, may well be the real thing):

      Title: Batboy's Love Child Kidnapped by Mole People: A Tabloid Exclusive!

      By [Your Name]
      [Your Tabloid Publication]

      In a shocking and sensational twist of events that could only happen in the bizarre world of the supernatural, we've uncovered a story that will make your skin crawl!

      • by gweihir ( 88907 )

        Nice! Not a fundamentally different thing, though; it just addresses larger but less flashy groups of morons.

  • ChatGPT Is Being Used To Declassify Redacted Government Docs

    Can't we just get an ex-President to do this, maybe with his mind? /snark :-)

  • Whoever is buying this stuff needs to buy a bridge from me.
  • Government agencies are now aware of this possibility, so you can expect them to take countermeasures.

    I'd expect they will start doing a draft redaction, then running it through GPT to see if it is recoverable, slowly increasing the size of the redaction until GPT no longer gets close.
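
    In loop form it might look something like this (an entirely hypothetical sketch; model_guess is a placeholder for whatever reconstruction check they would actually run):

    import difflib

    # Hypothetical helper: ask a model to reconstruct the blacked-out span.
    def model_guess(redacted_text: str) -> str:
        raise NotImplementedError("wire up a real model call here")

    def recoverable(original: str, redacted: str, threshold: float = 0.6) -> bool:
        # Compare the model's reconstruction against the real text.
        guess = model_guess(redacted)
        return difflib.SequenceMatcher(None, original, guess).ratio() >= threshold

    def widen_until_safe(text: str, start: int, end: int) -> str:
        # Keep growing the redaction window until the model no longer gets close.
        redacted = text[:start] + "█" * (end - start) + text[end:]
        while recoverable(text, redacted) and (start > 0 or end < len(text)):
            start, end = max(0, start - 5), min(len(text), end + 5)
            redacted = text[:start] + "█" * (end - start) + text[end:]
        return redacted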
