Comment Re: Cue all the people acting shocked about this.. (Score 1) 41

Based on the Office's understanding of the generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material. Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output.[28] For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare's style.[29] But the technology will decide the rhyming pattern, the words in each line, and the structure of the text.[30]

Compare with my summary:

" their argument was that because the person doesn't control the exact details of the composition of the work"

I'll repeat: I accurately summed up their argument. You did not.

Comment Re:AI Incest (Score 2, Interesting) 41

Yes, "you've been told" that by people who have no clue what they're talking about. Meanwhile, models just keep getting better and better. AI images have been out for years now. There's tons on the net.

First off, old datasets don't just disappear. So the *very worst case* is that you just keep developing your new models on pre-AI datasets.

Secondly, there is human selection on what gets posted. If humans don't like the look of something, they don't post it. In many regards, an AI image is replacing what would have been a much crappier alternative.

Third, dataset gatherers don't just blindly use a dump of the internet. If there's a place that tends to be a source of crappy images, they'll just exclude or downrate it.

Fourth, images are scored with aesthetic gradients before they're used. That is, humans train models to assess how much they like images, and then those models look at all the images in the dataset and rate them. Once again, crappy images are excluded / downrated.
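A toy sketch of that aesthetic filtering step (the filenames, scores, and threshold here are made up for illustration; real pipelines use a learned predictor trained on human ratings to produce the scores):

```python
# Toy sketch: filter dataset candidates by predicted aesthetic score.
# In a real pipeline, `predicted` would come from a model trained to
# mimic human ratings; here the scores are simply given.

def filter_by_aesthetic_score(images, scores, threshold=5.0):
    """Keep only images whose predicted aesthetic score clears the threshold."""
    return [img for img, s in zip(images, scores) if s >= threshold]

candidates = ["img_a.png", "img_b.png", "img_c.png"]
predicted = [6.2, 3.1, 5.5]  # hypothetical scores on a 0-10 scale

kept = filter_by_aesthetic_score(candidates, predicted)
# img_b.png falls below the threshold and is excluded / downrated
```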

Fifth, trainers do comparative training, look at per-sample loss rates, and automatically exclude problematic ones. For example, if you have a thousand images labeled "watermelon" but one is actually a zebra, the zebra will have an anomalous loss spike that warrants more attention (either from humans or in an automated manner). Loss rates can also be compared between data +sources+ - whole websites or even whole datasets - and whatever is working best gets used.
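The loss-spike idea can be sketched in a few lines (the threshold rule and the numbers are illustrative, not any particular trainer's implementation):

```python
import statistics

def flag_loss_spikes(losses, k=3.0):
    """Flag samples whose training loss sits more than k standard
    deviations above the mean -- likely mislabeled, like the zebra
    among the 'watermelon' images."""
    mean = statistics.mean(losses)
    stdev = statistics.pstdev(losses)
    return [i for i, loss in enumerate(losses) if loss > mean + k * stdev]

# 999 watermelons with ordinary loss, one zebra with an anomalous spike
losses = [0.4] * 999 + [9.0]
suspects = flag_loss_spikes(losses)  # flags only the last sample
```

The same comparison works at coarser granularity: average the losses per website or per dataset instead of per sample, and downrate the sources that consistently score worse.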

Sixth, trainers also do direct blind human comparisons for evaluation.

This notion that AIs are just going to get worse and worse because of training on AI images is just ignorant. And demonstrably false.

Comment Re:Cue all the people acting shocked about this... (Score 4, Interesting) 41

As for why I think the ruling was bad: their argument was that because the person doesn't control the exact details of the composition of the work, then the basic work (before postprocessing or selection) can't be copyrighted. But that exact same thing applies to photography, outside of studio conditions. Ansel Adams wasn't out there going, "Okay, put a 20 meter oak over there, a 50 meter spruce over there, shape that mountain ridge a bit steeper, put a cliff on that side, cover the whole thing with snow... now add a rainbow to the sky... okay, cue the geese!" He was searching the search space for something to match a general vision - or just taking advantage of happenstance findings. And sure, a photographer has many options at their hands in terms of their camera and its settings, but if you think that's a lot, try messing around with AUTOMATIC1111 with all of its plugins some time.

The winner of Nature Photographer of the year in 2022 was Dmitry Kokh, with "House of Bears". He was stranded on a remote Russian archipelago and discovered that polar bears had moved into an abandoned weather station, and took photos of them. He didn't even plan to be there then. He certainly didn't plan on having polar bears in an abandoned weather station, and he CERTAINLY wasn't telling the bears where to stand and how to pose. Yet his work is a classic example of what the copyright office thinks should be a copyrightable work.

And the very notion that people don't control the layout with AI art is itself flawed. It was an obsolete notion even when they made their ruling - we already had img2img, instructpix2pix, and ControlNet. The author CAN control the layout, down to whatever level of intricate detail they choose. Unlike, say, a nature photographer. And modern models give increasing levels of control even with the prompt itself - with SD3 (unlike SD1/2 or SC) you can do things like "A red sphere on a blue cube to the left of a green cone". We're heading toward - if not there already - a point where you could write a veritable short story's worth of detail to describe a scene.

I find it just plain silly that Person A could grab their cell phone and spend 2 seconds snapping a photo of whatever happens to be out their window, and that's copyrightable, but a person who spends hours searching through the latent space - let alone with ControlNet guidance (controlnet inputs can be veritable works of art in their own right) - isn't given the same credit for the amount of creative effort put into the work.

I think, rather, it's very simple: the human creative effort should be judged not on the output of the work (the work is just a transformation of the inputs), but the amount of creative effort they put into said inputs. Not just on the backend side - selection, postprocessing, etc - but on the frontend side as well. If a person just writes "a fluffy dog" and takes the first pic that comes up, obviously, that's not sufficient creative endeavour. But if a person spends hours on the frontend in order to get the sort of image they want, why shouldn't that frontend work count? Seems dumb to me.

Comment Cue all the people acting shocked about this... (Score 4, Informative) 41

... when the original ruling itself plainly said that though the generated content itself isn't copyrightable, human creative action such as postprocessing or selection can render it copyrightable.

I still think the basic ruling was bad for a number of reasons, and it'll increasingly come under stress in the coming years. But there's nothing shocking about this copyright. The copyright office basically invited people to do this.

Comment Don't care about the cause (Score 4, Interesting) 304

If I'm running a business and an employee tries to occupy my office for a protest, they're going to be immediately terminated and escorted out by security.

Who in their right mind thinks this is a good idea? If you think the company is evil, you quit and take action as a free person. These protesters acted like idiot adult children.

Comment Re:Sigh... (Score 1) 49

Here we go again with this.

NVidia shipped 100k AI GPUs last year, which - if run nonstop - would consume 7.4 TWh. Crypto consumes over 100 TWh per year, and the world as a whole consumes just under 25,000 TWh per year.

AI consumption of power is a pittance. To get these huge numbers, they have to assume long-term extreme exponential scaling. But you can make anything give insane numbers with an assumption like that.
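A quick back-of-the-envelope check on those shares, using the figures exactly as stated above (not independently verified):

```python
# Figures as stated in the comment above
ai_twh = 7.4         # worst-case annual draw of last year's AI GPU shipments
crypto_twh = 100.0   # crypto's annual consumption
world_twh = 25000.0  # total global electricity consumption per year

ai_share = ai_twh / world_twh * 100      # AI's share of world consumption, in percent
crypto_share = crypto_twh / world_twh * 100
ratio = crypto_twh / ai_twh              # how many times more crypto draws than AI

# ai_share comes out around 0.03%, crypto_share around 0.4%,
# and crypto draws on the order of 13x what AI does.
```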

I simply don't buy the assumption. Even setting aside an AI bust - even assuming that AI keeps hugely growing, and that nobody rests on their laurels but rather keeps training newer and better foundations - the simple fact is that there's far too much progress being made toward vastly more efficient architectures at every level: model structure, neuron structure, training methodologies, and hardware. Not like "50% better", but like "orders of magnitude better". I just don't buy these notions of infinite exponential growth.
