Comment Re:AI Incest (Score 2, Interesting) 41

Yes, "you've been told" that by people who have no clue what they're talking about. Meanwhile, models just keep getting better and better. AI images have been out for years now. There's tons on the net.

First off, old datasets don't just disappear. So the *very worst case* is that you just keep developing your new models on pre-AI datasets.

Secondly, there is human selection on what gets posted. If humans don't like the look of something, they don't post it. In many regards, an AI image is replacing what would have been a much crappier alternative choice.

Third, dataset gatherers don't just blindly use a dump of the internet. If there's a place that tends to be a source of crappy images, they'll just exclude or downrate it.

Fourth, images are scored with aesthetic gradients before they're used. That is, humans train models to assess how much they like images, and then those models look at all the images in the dataset and rate them. Once again, crappy images are excluded / downrated.
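
To make that concrete, here's a minimal sketch of the kind of aesthetic filtering being described: train a small regressor on human ratings of image embeddings, then score and cull the rest of the dataset. The network shape, threshold, and tensor shapes are illustrative assumptions, not any particular lab's actual pipeline.

```python
# Minimal sketch of aesthetic filtering, assuming you already have CLIP-style
# embeddings for candidate images plus a smaller set of human-rated images.
# Threshold and architecture are illustrative assumptions.
import torch
import torch.nn as nn

def train_aesthetic_head(rated_embeddings, human_scores, epochs=100):
    """Fit a tiny regressor mapping image embeddings -> predicted human rating."""
    head = nn.Sequential(nn.Linear(rated_embeddings.shape[1], 64),
                         nn.ReLU(),
                         nn.Linear(64, 1))
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        pred = head(rated_embeddings).squeeze(-1)
        loss = nn.functional.mse_loss(pred, human_scores)
        loss.backward()
        opt.step()
    return head

def filter_by_aesthetics(head, candidate_embeddings, threshold=5.0):
    """Keep only images whose predicted rating clears the threshold."""
    with torch.no_grad():
        scores = head(candidate_embeddings).squeeze(-1)
    return scores >= threshold  # boolean mask over the candidate images

# Hypothetical usage: rated_emb is (N, 512), ratings is (N,) with 1-10 scores.
# keep_mask = filter_by_aesthetics(train_aesthetic_head(rated_emb, ratings), all_emb)
```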

Fifth, trainers do comparative training, look at per-image loss rates, and automatically exclude problematic ones. For example, if you have a thousand images labeled "watermelon" but one is actually a zebra, the zebra will show an anomalous loss spike that warrants more attention (either from humans or in an automated manner). Loss rates can also be compared between data *sources* - whole websites or even whole datasets - and whatever is working best gets used.
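
A similarly hedged sketch of that loss-spike screening: compute per-sample losses within a label group and flag the statistical outliers. The z-score cutoff is an arbitrary illustrative choice.

```python
# Sketch of loss-based outlier screening, assuming you have per-sample training
# losses for a batch of images that share a label (e.g. "watermelon").
import numpy as np

def flag_anomalous_samples(per_sample_losses, z_cutoff=3.0):
    """Return indices of samples whose loss spikes far above the group's typical loss."""
    losses = np.asarray(per_sample_losses, dtype=float)
    mean, std = losses.mean(), losses.std()
    if std == 0:
        return np.array([], dtype=int)
    z_scores = (losses - mean) / std
    return np.where(z_scores > z_cutoff)[0]  # e.g. the mislabeled zebra among watermelons

# The same idea extends to whole sources: average per-sample losses per website
# or per dataset and prefer the sources the model fits best.
```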

Sixth, trainers also do direct blind human comparisons for evaluation.

This notion that AIs are just going to get worse and worse because of training on AI images is just ignorant. And demonstrably false.

Comment Re:Cue all the people acting shocked about this... (Score 4, Interesting) 41

As for why I think the ruling was bad: their argument was that because the person doesn't control the exact details of the composition of the work, then the basic work (before postprocessing or selection) can't be copyrighted. But that exact same thing applies to photography outside of studio conditions. Ansel Adams wasn't out there going, "Okay, put a 20 meter oak over there, a 50 meter spruce over there, shape that mountain ridge a bit steeper, put a cliff on that side, cover the whole thing with snow... now add a rainbow to the sky... okay, cue the geese!" He was searching the space for something to match a general vision - or just taking advantage of happenstance findings. And sure, a photographer has many options at their hands in terms of their camera and its settings, but if you think that's a lot, try messing around with AUTOMATIC1111 with all of its plugins some time.

The winner of Nature Photographer of the Year in 2022 was Dmitry Kokh, with "House of Bears". He was stranded on a remote Russian archipelago and discovered that polar bears had moved into an abandoned weather station, and took photos of them. He didn't even plan to be there then. He certainly didn't plan on having polar bears in an abandoned weather station, and he CERTAINLY wasn't telling the bears where to stand and how to pose. Yet his work is a classic example of what the copyright office thinks should be a copyrightable work.

And the very notion that people don't control the layout with AI art is itself flawed. It was an obsolete notion even when they made their ruling - we already had img2img, InstructPix2Pix, and ControlNet. The author CAN control the layout, down to whatever level of intricate detail they choose - unlike, say, a nature photographer. And modern models give increasing levels of control even with the prompt itself - with SD3 (unlike SD1/2 or SC) you can do things like "A red sphere on a blue cube to the left of a green cone". We're heading toward - if not there already - the point where you could write a veritable short story's worth of detail to describe a scene.
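
As a hedged illustration of that kind of layout control, here's roughly what it looks like with the open-source diffusers ControlNet pipeline. The model IDs and the input sketch filename are just example choices; any edge, depth, or pose conditioning model works the same way.

```python
# Sketch of ControlNet-guided layout control (filenames are hypothetical).
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Turn the author's own composition sketch into an edge map that pins the layout.
sketch = cv2.cvtColor(cv2.imread("my_composition_sketch.png"), cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(sketch, 100, 200)
edge_map = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The prompt supplies content and style; the edge map dictates where everything sits.
result = pipe("a red sphere on a blue cube to the left of a green cone",
              image=edge_map, num_inference_steps=30).images[0]
result.save("composed_scene.png")
```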

I find it just plain silly that Person A could grab their cell phone and spend 2 seconds snapping a photo of whatever happens to be out their window, and that's copyrightable, but a person who spends hours searching through the latent space - let alone with ControlNet guidance (ControlNet inputs can be veritable works of art in their own right) - isn't given the same credit for the amount of creative effort put into the work.

I think, rather, it's very simple: the human creative effort should be judged not on the output of the work (the work is just a transformation of the inputs), but on the amount of creative effort put into said inputs. Not just on the backend side - selection, postprocessing, etc. - but on the frontend side as well. If a person just writes "a fluffy dog" and takes the first pic that comes up, obviously that's not sufficient creative endeavour. But if a person spends hours on the frontend in order to get the sort of image they want, why shouldn't that frontend work count? Seems dumb to me.

Comment Cue all the people acting shocked about this... (Score 4, Informative) 41

... when the original ruling itself plainly said that though the generated content isn't copyrightable on its own, human creative action such as postprocessing or selection can render it copyrightable.

I still think the basic ruling was bad for a number of reasons, and it'll increasingly come under stress in the coming years. But there is nothing shocking about this copyright. The copyright office basically invited people to do this.

Comment No (Score 1) 437

No, it's not. It wouldn't be SO bad, though, except that they no longer allow memory upgrades after the fact (at least on all the entry-level stuff, where they start at 8GB), and whilst 8GB of memory for a PC costs $25 or so, Apple wants $200 to bump up from 8GB to 16GB of memory.

Apple uses their software to lock their users into their hardware ecosystem where they charge exorbitant amounts for stuff.

Comment Re: Doesn't like military using their services (Score 1) 307

So, people can protest so long as the things or people you are protesting against aren't inconvenienced or have to look at your protest.

To a large degree, yes.

You don't annoy people into submission. There is a societal contract where we all have to live together at some baseline level of cooperation. There can be disagreements that don't affect that, but when you start interfering with societal level functioning (blocking traffic, etc), then the rest of the public just becomes angry at the protestors.

Societal controls are what keep those other people from mowing you down wholesale with their cars. You can't expect to benefit from those parts of organized society while trying to halt others, because eventually the people in the cars will start "protesting" in their own way by running you over.

If you want society to keep them from running you over, then you also have to expect society to clear the road.

Comment Re:Sigh... (Score 1) 49

Here we go again with this.

Nvidia shipped 100k AI GPUs last year, which - if run nonstop - would consume 7.4 TWh. Crypto consumes over 100 TWh per year, and the world as a whole consumes just under 25,000 TWh per year.
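
As a quick back-of-the-envelope check of those proportions (using the figures exactly as quoted above):

```python
# Rough scale comparison of the quoted annual figures.
ai_twh = 7.4          # estimated draw of last year's AI GPUs, run nonstop
crypto_twh = 100.0    # rough annual crypto consumption
world_twh = 25_000.0  # rough annual world electricity consumption

print(f"AI GPUs: {ai_twh / world_twh:.3%} of world electricity")     # ~0.03%
print(f"Crypto:  {crypto_twh / world_twh:.2%} of world electricity")  # ~0.40%
```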

AI consumption of power is a pittance. To get these huge numbers, they have to assume long-term extreme exponential scaling. But you can make anything give insane numbers with an assumption like that.

I simply don't buy the assumption. I'm not even assuming an AI bust - even assuming that AI keeps hugely growing, and that nobody rests on their laurels but rather keeps training newer and better foundation models - the simple fact is that there's far too much progress being made towards vastly more efficient architectures at every level: model structure, neuron structure, training methodologies, and hardware. Not like "50% better", but like "orders of magnitude better". I just don't buy these notions of infinite exponential growth.

Comment Re:Support Palestinians! (Score 1) 507

Hamas was in control of the Gaza Strip in the first place (for the last 20ish years) solely and entirely at the agreement and behest of Israel trying to gain peace. The attacks broke that agreement; what did anyone think was going to happen?

And the last is rhetorical, of course, because what is happening now is exactly what Hamas and all their supporters expected. They got in their licks; of course the Israelis responded just like any state would to such an attack; a bunch of quasi-innocent cannon fodder is getting wiped out while the masterminds call the shots from Tehran, Qatar, etc., and spin up the rest of the Western liberal apologists to put pressure on Israel to stop - again - so they can try to consolidate their position.

Lather, rinse, repeat; it's the same story over and over. You can almost excuse a bunch of 20-somethings not realizing they are being played, if *every single step of this tawdry cycle weren't clearly documented on the very internet they are supposedly experts about*. The cycle is about 20 years for a reason - anyone older than about 20 isn't dumb enough to fall for the same scam again.

Comment Re:We dropped 2 atomic bombs on Japan. (Score 4, Insightful) 507

Exactly. The sole reason that Hamas exists and controlled the Gaza Strip was that Israel unilaterally agreed to it - land for peace. Hamas violently broke that agreement (as anyone older than about 25 could have easily predicted) with a series of brutal, barbaric murders. Now they are running off to the world community again, hoping for someone else to intervene. Classic chickenshit "punch someone in the back of the head and run for home". This is just the latest example; I have seen it over and over in my lifetime.

Comment Re:Words matter (Score 1) 507

That's like saying those against US's Iraq invasion were "pro-Saddam".

Of course, that's merely sophistry. I note the invasion of Iraq enjoyed wide bipartisan support - until it appeared to be giving Bush 43 too much political currency, whereupon, presto-chango, the hard left flipped and then it became the worst thing ever.

Comment Re:Well... (Score 3, Insightful) 125

It's not just a question of whether it's justifiable. It's simply nonsense to think that they can enforce this. Anyone can run Stable Diffusion on their own computer. There's a virtually limitless number of models finetuned to make all kinds of porn. IMHO it's extremely annoying how much porn floods the model sites; I think something like three quarters of the people using these tools are guys making wank material. And even for models that aren't tuned specifically for porn, rarely does anyone (except the foundation model developers, like StabilityAI) specifically try to *prevent* it.

The TL;DR is: if you think stopping pirated music was hard, well, *good luck* stopping people from generating porn on their own computers. You might as well pass a law declaring it illegal to draw porn.
