
Nature Bans AI-generated Art From Its 153-Year-Old Science Journal (arstechnica.com)

Renowned scientific journal Nature has announced in an editorial that it will not publish images or video created using generative AI tools. From a report: The ban comes amid the publication's concerns over research integrity, consent, privacy, and intellectual property protection as generative AI tools increasingly permeate the world of science and art. Founded in November 1869, Nature publishes peer-reviewed research from various academic disciplines, mainly in science and technology. It is one of the world's most cited and most influential scientific journals. Nature says its recent decision on AI artwork followed months of intense discussions and consultations prompted by the rising popularity and advancing capabilities of generative AI tools like ChatGPT and Midjourney.

Comments Filter:
  • Why not have a separate category?

    • Re:Separate category (Score:5, Informative)

      by timeOday ( 582209 ) on Monday June 12, 2023 @04:02PM (#63596696)
      The statement does say, "Apart from in articles that are specifically about AI..."

      So, they can still publish research on generative AI.

      • I had a brain fart; I was thinking about the Nature image competition, for some reason.
        In my defense, I've been working on something for many hours now, and my gray matter is mushier than usual.

    • This has little if anything to do with scientific/ethical integrity & everything to do with IP. So far, it looks like you can't copyright AI output. Nature (Springer actually) is an academic publisher & so they're an IP company. If they can't copyright & control it, it's no good to them.
  • by xevioso ( 598654 ) on Monday June 12, 2023 @04:03PM (#63596702)

    If I use Photoshop in almost any capacity where I am using an algorithm to apply a lighting or other filter to a work of art, which is done almost any time you do anything digital these days, am I not using AI? I am not using crowd-sourced LLMs, true, to apply dithering and a sharpen filter to my nifty piece of artwork, but this is still AI to a degree. How are they differentiating?
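    For illustration, here is a minimal Pillow sketch of the kind of algorithmic filtering described above (the file names are placeholders): an unsharp-mask sharpen and a dithered palette reduction, both of which are deterministic pixel arithmetic with no trained generative model involved.

        # Minimal sketch, assuming Pillow is installed; "photo.png" is a placeholder.
        from PIL import Image, ImageFilter

        img = Image.open("photo.png").convert("RGB")

        # Classic unsharp mask: blur the image, then add back a scaled difference.
        sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))

        # Dithering: reduce to a 64-colour palette using error diffusion.
        dithered = sharpened.quantize(colors=64)

        dithered.save("photo_processed.png")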

    • Re: (Score:3, Insightful)

      by fleeped ( 1945926 )
      They can't differentiate. They just want to make a statement, and they do that by drawing an arbitrary line. Old farts not keeping up with the implications of this technology, or with how image processing ALREADY works under the hood, as you also point out.
      • by Rei ( 128717 )

        Anti-AI artists: "AI art is evil! BAN AI!"

        Also Anti-AI artists: "Now let me just use some content-aware fill here, then run it through this AI upscaler..."

        --

        Anti-AI artists: "AI art generators just copy!"

        Also Anti-AI artists: "Now let me just stare at this batch of "inspiration images" that I googled so I can figure out how to make 'my' painting look good."

        • Exactly. It's such a grey zone that any arbitrarily drawn line is really problematic. Established notions of originality and plagiarism are going to be seriously challenged.
    • by mkwan ( 2589113 )

      Likewise, most text editors will soon have Gmail-style sentence completion. So most papers will be at least partly AI-generated - and probably more readable as a result.

    • by Opyros ( 1153335 ) on Monday June 12, 2023 @08:06PM (#63597188) Journal
      Well, the actual editorial [nature.com] is titled "Why Nature will not allow the use of generative AI in images and video." So apparently it's not all AI which is banned.
    • Probably referring to the use of the big name AI art gens like SD and Midjourney. Stuff like Adobe Photoshop Generative Fill is probably going to be a bit harder to discern though, so idk where that would fall.

    • It's not even close to the same thing here. Altering an image as opposed to creating one is a lot different, don't ya think?
  • They state they won't accept any photo, video, or even illustration generated wholly or partly using AI. So that covers everybody who's using any sufficiently advanced image/video processing application? Do they understand how modern image processing works? Way to show that they're old fossils, out of touch outside their realm of expertise.
    • by Rei ( 128717 )

      Yeah. Like, from a quick glance, it looks like 8 of ffmpeg's filters are neural-network-based, including some really mundane things like deinterlacing.

      Do they understand how widely used AI upscalers are these days, in order to meet publication resolution requirements?
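      As a rough illustration, here is a Python sketch of running one such filter, assuming an ffmpeg build that includes the neural-network-based nnedi deinterlacer and a local copy of its weights file (file names are placeholders):

          # Minimal sketch: run ffmpeg's nnedi deinterlacer from Python.
          # Assumes ffmpeg is on PATH and nnedi3_weights.bin is in the working directory.
          import subprocess

          subprocess.run(
              [
                  "ffmpeg",
                  "-i", "interlaced_input.mp4",
                  # nnedi is a neural-network deinterlacer: an "AI" step buried
                  # inside a very mundane video-processing pipeline.
                  "-vf", "nnedi=weights=nnedi3_weights.bin",
                  "deinterlaced_output.mp4",
              ],
              check=True,
          )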

  • by jenningsthecat ( 1525947 ) on Monday June 12, 2023 @05:39PM (#63596948)

    If it's an illustration or a rendering, how will they know whether an image was generated partially or even entirely by AI? I think it will be difficult or impossible to tell much of the time.

    • Likely by having artists submit their workflow for verification. Some artists I follow, for example, will sometimes publish PSDs and timelapse videos to show how they created something.

      If nothing else, it keeps the most blatant generative-AI works from getting in.

  • AI generated is a loaded term.

    "AI" tools are used constantly to help us understand the natural world. They are used for content enhancement, element identification, comparison etc. And when those outputs need to be presented to a human a representation of that information needs to be generated.

    For example, colourised ultrasound is most often built off of ML systems.

    What might be a better approach is to ban imagery that is based off of simulated/artificial data. Rather than banning the end result, you ban the artificial source data behind it.

    • by Rei ( 128717 )

      But then why do the tools matter? Surely "banning non-real/non-factual information" should be a requirement in its own right, at least where the content is presented as real/factual?

      If something is for illustrative purposes, all that matters is that it's accurate. And assessing accuracy is the whole point of the peer-review process.

      If the standard is, however, no enhancements to the raw data, presented as in, like, "you're looking at raw data", then ooh boy, you had better not have done pretty much anything to it at all.

  • The process of publishing, as far as both science and art are concerned, is underpinned by a shared commitment to integrity. That includes transparency. As researchers, editors and publishers, we all need to know the sources of data and images, so that these can be verified as accurate and true. Existing generative AI tools do not provide access to their sources so that such verification can happen.

    This is nonsensical. These systems are very much deterministic: images are reproducible provided the model, prompt, parameters, and seed value are recorded. Most tools store the data necessary to exactly reproduce a given image (see the sketch at the end of this comment).

    Then there's attribution: when existing work is used or cited, it must be attributed. This is a core principle of science and art, and generative AI tools do not conform to this expectation.

    No different than citing use of third party software.

    Consent and permission are also factors. These must be obtained if, for example, people are being identified or the intellectual property of artists and illustrators is involved. Again, common applications of generative AI fail these tests.

    Since when is consent required to be influenced by the publicly available works of others?

    Generative AI systems are being trained on images for which no efforts have been made to identify the source. Copyright-protected works are routinely being used to train generative AI without appropriate permissions. In some cases, privacy is also being violated, for example when generative AI systems create what look like photographs or videos of people without their consent. In addition to privacy concerns, the ease with which these 'deepfakes' can be created is accelerating the spread of false information.

    Copyright claims are assessed on the result, not the modality. If I use MS Paint to reproduce a copyrighted work, I'm not going to get a pass because of the tool I used.
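    As a minimal sketch of the reproducibility point above, assuming the Hugging Face diffusers library, a CUDA GPU, and the widely used stable-diffusion-v1-5 checkpoint (the prompt and file name are placeholders): recording the model, prompt, parameters, and seed is enough to regenerate the same image on the same hardware and software stack.

        # Minimal sketch: deterministic image generation from a recorded seed.
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
        pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

        # Fixing the seed (plus model, prompt, and parameters) makes the output reproducible.
        generator = torch.Generator(device="cuda").manual_seed(42)
        image = pipe(
            "schematic illustration of a cell membrane",
            num_inference_steps=30,
            guidance_scale=7.5,
            generator=generator,
        ).images[0]
        image.save("figure.png")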

    • by WDot ( 1286728 )
      From a technical standpoint, this view is silly. If I was super worried about AI regulation though, I might discourage AI art submissions just in case these generative AI models get hit with copyright infringement claims regarding the training data.
  • For a publication like Nature, I'd say this is a pretty obvious decision. This is very much about facts and the truth. Illustrating with some algorithm's interpretation doesn't make sense.
