Nature Bans AI-generated Art From Its 153-Year-Old Science Journal (arstechnica.com) 30
Renowned scientific journal Nature has announced in an editorial that it will not publish images or video created using generative AI tools. From a report: The ban comes amid the publication's concerns over research integrity, consent, privacy, and intellectual property protection as generative AI tools increasingly permeate the world of science and art. Founded in November 1869, Nature publishes peer-reviewed research from various academic disciplines, mainly in science and technology. It is one of the world's most cited and most influential scientific journals. Nature says its recent decision on AI artwork followed months of intense discussions and consultations prompted by the rising popularity and advancing capabilities of generative AI tools like ChatGPT and Midjourney.
Separate category (Score:1)
Why not have a separate category?
Re:Separate category (Score:5, Informative)
So, they can still publish research on generative AI.
Re: (Score:1)
I had a brain fart; I was thinking about the Nature image competition, for some reason.
In my defense, I've been working on something for many hours now, and my gray matter is mushier than usual.
Re: (Score:2)
Re: (Score:3)
RTA.
"Apart from in articles that are specifically about AI"
They will be able to tell exactly how? (Score:5, Interesting)
If I use Photoshop in almost any capacity where an algorithm applies a lighting or other filter to a work of art, which happens almost any time you do anything digital these days, am I not using AI? True, I'm not using crowd-sourced LLMs to apply dithering and a sharpen filter to my nifty piece of artwork, but it's still AI to a degree. How are they differentiating?
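For what it's worth, the kind of filter the parent describes is just a fixed convolution with hand-picked weights, no training data involved. A minimal numpy sketch of a classic 3x3 sharpen kernel (a hand-rolled convolution for illustration, not any particular Photoshop internals):

```python
import numpy as np

def sharpen(img: np.ndarray) -> np.ndarray:
    """Classic 3x3 sharpen: a fixed convolution kernel, no 'AI' anywhere."""
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=float)
    padded = np.pad(img, 1, mode="edge")       # replicate border pixels
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return np.clip(out, 0, 255)                # keep values in 8-bit range

flat = np.full((4, 4), 100.0)
print(np.allclose(sharpen(flat), flat))        # uniform areas pass through unchanged
```

The kernel weights sum to 1, so flat regions are untouched while edges get exaggerated, which is the whole trick.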
Re: (Score:3, Insightful)
Re: (Score:2)
Anti-AI artists: "AI art is evil! BAN AI!"
Also Anti-AI artists: "Now let me just use some content-aware fill here, then run it through this AI upscaler..."
--
Anti-AI artists: "AI art generators just copy!"
Also Anti-AI artists: "Now let me just stare at this batch of 'inspiration images' that I googled so I can figure out how to make 'my' painting look good."
Re: (Score:2)
Re: (Score:2)
Likewise, most text editors will soon have Gmail-style sentence completion. So most papers will be at least partly AI-generated - and probably more readable as a result.
Re: They will be able to tell exactly how? (Score:2)
Have you read many papers by authors in the exact sciences lately? Their strengths usually don't lie in wielding language. ChatGPT is an improvement for many.
Re:They will be able to tell exactly how? (Score:4, Informative)
Re: They will be able to tell exactly how? (Score:2)
Probably referring to the use of big-name AI art generators like SD and Midjourney. Stuff like Adobe Photoshop's Generative Fill is probably going to be a bit harder to discern, though, so idk where that would fall.
Re: (Score:2)
Re: (Score:1)
They are. Just like you're not allowed to fake things in Photoshop, you're not allowed to do it with an ANN.
Re: (Score:2)
Nature does not have a flat ban on using Photoshop in submitted papers. They just ban fraud.
Why they'd single out specific tools and not others is beyond me.
The accolades make this even more embarrassing (Score:2)
Re: (Score:2)
Yeah. Like, from a quick glance, looks like 8 of ffmpeg's filters are neural network based. Including some really mundane things, like deinterlacing.
Do they understand how widely used AI upscalers are these days, in order to meet publication resolution requirements?
How will they know? (Score:3)
If it's an illustration or a rendering, how will they know whether an image was generated partially or even entirely by AI? I think it will be difficult or impossible to tell much of the time.
Re: How will they know? (Score:2)
Likely by having artists submit their workflow for verification. Some artists I follow, for example, will sometimes publish PSDs and timelapse videos to show how they created something.
If nothing else, it keeps the most blatant generative AI works from getting in.
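One concrete (if easily stripped) verification signal: several Stable Diffusion front ends embed the prompt and seed in a PNG text chunk; AUTOMATIC1111, for instance, uses a key named "parameters". A Pillow sketch of reading such metadata, where the key name and the fake settings string are assumptions about the generator used:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def generation_metadata(path: str) -> dict:
    """Return any text chunks embedded in a PNG; generators often stash
    the prompt/seed here, though nothing stops an author deleting them."""
    with Image.open(path) as img:
        return dict(getattr(img, "text", {}))

# Simulate a generator writing its settings into the file.
meta = PngInfo()
meta.add_text("parameters", "a cat, Steps: 20, Seed: 1234")  # hypothetical settings string
Image.new("RGB", (8, 8)).save("sample.png", pnginfo=meta)

print(generation_metadata("sample.png").get("parameters"))
```

Absence of such a chunk proves nothing (a screenshot or re-save wipes it), which is why a workflow dump like a PSD plus timelapse is the stronger ask.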
Where do you draw the line? (Score:2)
"AI-generated" is a loaded term.
"AI" tools are used constantly to help us understand the natural world. They are used for content enhancement, element identification, comparison, etc. And when those outputs need to be presented to a human, a representation of that information needs to be generated.
For example, colourised ultrasound is most often built on top of ML systems.
What might be a better approach is to ban imagery that is based on simulated/artificial data. Rather than banning the end result, you ban the use of fabricated data as a source.
Re: (Score:2)
But then why do the tools matter? Surely "banning non-real/non-factual information" should be a requirement in its own right, at least where the content is presented as real/factual?
If something is for illustrative purposes, all that matters is that it's accurate. And assessing accuracy is the whole point of the peer-review process.
If, however, the standard is no enhancements to the raw data, presented as in "you're looking at raw data", then oh boy, you'd better not have done pretty much anything to it.
Nature's nonsensical pretexts (Score:2)
The process of publishing, as far as both science and art are concerned, is underpinned by a shared commitment to integrity. That includes transparency. As researchers, editors and publishers, we all need to know the sources of data and images, so that these can be verified as accurate and true. Existing generative AI tools do not provide access to their sources so that such verification can happen.
This is nonsensical. These systems are very much deterministic: an image is reproducible provided the model, prompt, sampler parameters, and seed value are recorded. Most tools store the necessary data to exactly reproduce a given image.
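The reproducibility claim can be illustrated without any actual model: treat the sampler as a pure function of the recorded prompt and seed. A toy stand-in, not a real diffusion pipeline:

```python
import random

def toy_sampler(prompt: str, seed: int, steps: int = 4) -> list:
    """Stand-in for a diffusion sampler: the 'image' is just pseudo-random
    draws, fully determined by the recorded prompt and seed."""
    rng = random.Random(f"{prompt}|{seed}")    # seed the RNG from the inputs
    return [round(rng.random(), 6) for _ in range(steps)]

a = toy_sampler("micrograph of a cell", seed=42)
b = toy_sampler("micrograph of a cell", seed=42)
print(a == b)   # True: identical inputs reproduce the output exactly
```

Real pipelines have the same property in principle (given the same model weights, software versions, and hardware numerics), which is the commenter's point: record the inputs and the image is verifiable.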
Then there's attribution: when existing work is used or cited, it must be attributed. This is a core principle of science and art, and generative AI tools do not conform to this expectation.
No different than citing use of third party software.
Consent and permission are also factors. These must be obtained if, for example, people are being identified or the intellectual property of artists and illustrators is involved. Again, common applications of generative AI fail these tests.
Since when is consent required to be influenced by the publicly available works of others?
Generative AI systems are being trained on images for which no efforts have been made to identify the source. Copyright-protected works are routinely being used to train generative AI without appropriate permissions. In some cases, privacy is also being violated, for example when generative AI systems create what look like photographs or videos of people without their consent. In addition to privacy concerns, the ease with which these 'deepfakes' can be created is accelerating the spread of false information.
Copyright claims are assessed on the result, not the modality. If I use MS Paint to reproduce a copyrighted work, I'm not going to get a pass because of the tool I used.
Re: (Score:2)
Nature should be about actual reality. (Score:1)