Comment Re:AI Incest (Score 2, Interesting) 40

Yes, "you've been told" that by people who have no clue what they're talking about. Meanwhile, models just keep getting better and better. AI images have been out for years now; there are tons of them on the net.

First off, old datasets don't just disappear. So the *very worst case* is that you just keep developing your new models on pre-AI datasets.

Secondly, there is human selection on the things that get posted. If humans don't like the look of something, they don't post it. In many regards, an AI image is replacing what would have been a much crappier alternative.

Third, dataset gatherers don't just blindly use a dump of the internet. If there's a place that tends to be a source of crappy images, they'll just exclude or downrate it.

Fourth, images are scored with aesthetic gradients before they're used. That is, humans train models to assess how much they like images, and then those models look at all the images in the dataset and rate them. Once again, crappy images are excluded / downrated.
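As a toy illustration of that kind of score-based filtering (the function and the scores here are made up for illustration; a real pipeline would run a learned aesthetic model over image embeddings):

```python
# Hypothetical sketch of aesthetic-score filtering. score_fn stands in
# for a trained aesthetic model; here it's just a dict lookup.

def filter_by_aesthetic(images, score_fn, threshold=5.0):
    """Keep only images whose predicted aesthetic score meets the threshold."""
    return [img for img in images if score_fn(img) >= threshold]

# Fake scores standing in for model output:
fake_scores = {"img_a": 6.2, "img_b": 3.1, "img_c": 7.8}
kept = filter_by_aesthetic(fake_scores, fake_scores.get, threshold=5.0)
print(kept)  # ['img_a', 'img_c']
```

The crappy image (`img_b`) never makes it into the training set; downrating instead of excluding would just attach the score as a sampling weight.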

Fifth, trainers do comparative training, monitor per-image loss rates, and automatically exclude problematic images. For example, if you have a thousand images labeled "watermelon" but one is actually a zebra, the zebra will show an anomalous loss spike that warrants more attention (either from humans or in an automated manner). Loss rates can also be compared between data *sources* - whole websites or even whole datasets - and whatever is working best gets used.
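A minimal sketch of that loss-spike flagging, with fabricated loss values standing in for a real per-sample training pass:

```python
import statistics

# Hedged sketch: flag likely-mislabeled samples whose training loss is
# anomalously far above the mean. The loss values below are invented.

def flag_loss_outliers(losses, k=3.0):
    """Return indices of samples whose loss exceeds mean + k * stdev."""
    mean = statistics.mean(losses)
    stdev = statistics.stdev(losses)
    return [i for i, loss in enumerate(losses) if loss > mean + k * stdev]

# 999 "watermelon" images with typical losses, plus one zebra with a spike:
losses = [0.4] * 999 + [9.0]
print(flag_loss_outliers(losses))  # [999], the zebra
```

Real pipelines are fancier (per-source aggregation, repeated passes), but the principle is the same: the mislabel sticks out statistically rather than needing a human to eyeball every image.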

Sixth, trainers also do direct blind human comparisons for evaluation.

This notion that AIs are just going to get worse and worse because of training on AI images is just ignorant. And demonstrably false.

Comment Re:Cue all the people acting shocked about this... (Score 4, Interesting) 40

As for why I think the ruling was bad: their argument was that because the person doesn't control the exact details of the composition of the work, the basic work (before postprocessing or selection) can't be copyrighted. But that exact same thing applies to photography outside of studio conditions. Ansel Adams wasn't out there going, "Okay, put a 20 meter oak over there, a 50 meter spruce over there, shape that mountain ridge a bit steeper, put a cliff on that side, cover the whole thing with snow... now add a rainbow to the sky... okay, cue the geese!" He was searching the search space for something to match a general vision - or just taking advantage of happenstance findings. And sure, a photographer has many options at their disposal in terms of their camera and its settings, but if you think that's a lot, try messing around with AUTOMATIC1111 with all of its plugins some time.

The winner of Nature Photographer of the year in 2022 was Dmitry Kokh, with "House of Bears". He was stranded on a remote Russian archipelago and discovered that polar bears had moved into an abandoned weather station, and took photos of them. He didn't even plan to be there then. He certainly didn't plan on having polar bears in an abandoned weather station, and he CERTAINLY wasn't telling the bears where to stand and how to pose. Yet his work is a classic example of what the copyright office thinks should be a copyrightable work.

And the very notion that people don't control the layout with AI art is itself flawed. It was already obsolete when the ruling was made - we had img2img, InstructPix2Pix, and ControlNet. The author CAN control the layout, down to whatever level of intricate detail they choose - unlike, say, a nature photographer. And modern models give increasing levels of control even within the prompt itself: with SD3 (unlike SD1/2 or SC) you can do things like "A red sphere on a blue cube to the left of a green cone". We're heading toward - if not already at - the point where you could write a veritable short story's worth of detail to describe a scene.

I find it just plain silly that Person A could grab their cell phone and spend 2 seconds snapping a photo of whatever happens to be out their window, and that's copyrightable, but a person who spends hours searching through the latent space - let alone with ControlNet guidance (ControlNet inputs can be veritable works of art in their own right) - isn't given the same credit for the amount of creative effort put into the work.

I think, rather, it's very simple: the human creative effort should be judged not on the output of the work (the work is just a transformation of the inputs), but the amount of creative effort they put into said inputs. Not just on the backend side - selection, postprocessing, etc - but on the frontend side as well. If a person just writes "a fluffy dog" and takes the first pic that comes up, obviously, that's not sufficient creative endeavour. But if a person spends hours on the frontend in order to get the sort of image they want, why shouldn't that frontend work count? Seems dumb to me.

Comment Cue all the people acting shocked about this... (Score 4, Informative) 40

... when the original ruling itself plainly said that though the generated content itself isn't copyrightable, human creative action such as postprocessing or selection can render it copyrightable.

I still think the basic ruling was bad for a number of reasons, and it'll come under increasing stress in the coming years. But there's nothing shocking about this copyright. The copyright office basically invited people to do this.

Comment Re:Victory through bankruptcy! Play along, please. (Score 1) 68

Yes, you could. If you decide that your criteria for having won don't factor in things like your own survival as an organization, or the safety of the folks around you, but only whether your enemy is damaged, you could decide that you won. Case in point... Hamas. One can easily make the case that Hamas has won, even if they (as a discrete, identifiable group) cease to exist. They've torpedoed in-progress changes in the region that were to Israel's benefit, the world's support for Israel has been severely compromised, and the forces of other nations with similar views are slowly being mobilized. No matter what happens, Hamas "won". The outstanding issues don't change that... they just help shape the events of the next year or two.

Comment Re:Don't sit on this bench(mark.) (Score 3, Interesting) 19

LLMs cannot do it. Hallucination is baked-in.

LLMs alone definitely can't do it. LLMs, however, seem (to me, speaking for myself as an ML developer) to be a very likely component in an actual AI. Which, to be clear, is why I use "ML" instead of "AI", as we don't have AI yet. It's going to take other brainlike mechanisms to supervise the hugely flawed knowledge assembly that LLMs generate before we even have a chance to get there. Again, IMO.

I'd love for someone to prove me wrong. No sign of that, though. :)

Comment Don't sit on this bench(mark.) (Score 3, Insightful) 19

I'll be impressed when one of these ML engines is sophisticated enough to say "I don't know" instead of just making up nonsense by stacking probabilistic sequences; it also needs to be able to tell fake news from real news. Although there's an entire swath of humans who can't do that, so it'll be a while, I guess. That whole "reality has a liberal bias" truism ought to be a prime training area.

While I certainly understand that the Internet and its various social media cesspools are the most readily available training ground(s), it sure leans into the "artificial stupid" thing.

Comment Victory through bankruptcy! Play along, please. (Score 1) 68

Maybe we can eliminate pilot (or even soldier) risk altogether, and move conventional war strictly into the economic realm. Whoever has more of the best toys to smash together wins. Of course, that means that if a country is at a disadvantage, it's in their best interest to move the fight into the unconventional sphere... attacks on civilians through all sorts of unpalatable methods... the aggressive pursuit of nuclear parity/superiority/relevance... bioweapons... terrorism... The Geneva Conventions and other norms all boil down to "be civil and fight fair". As the technology gap grows, fewer and fewer opponents will choose to do either of those things.

Comment Re:This is just sad and funny at the same time (Score 1) 247

I'm not necessarily going to defend this protest, but criticism of Israel is hardly some "woke Commie" position (whatever the hell that even means). One can sincerely believe Israel's actions against the Palestinians are unjust without, say, wanting state control of the economy.

Comment Re:insubordination (Score 1) 247

"Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances."

It's been protected since 1791.

Comment Re:insubordination (Score 0) 247

I expect that general anxiety about anti-Semitism is driving this. Like it or not, condemnation of Israel comes with certain baggage, and as can be seen on campuses throughout the Western world, criticism of Israel can turn into anti-Zionism which then turns into anti-Semitism very quickly. The lines are very thin. The business world is very risk averse, and coming down on the wrong side of this particular debate can have a whole lot of consequences. Beyond that, of course, Alphabet is a business, not a society for activists, and while it may tolerate certain kinds of activism that may not be perceived as threatening the bottom line, right now, criticism of Israel is just a step too far.

Comment Re:Sigh... (Score 1) 49

Here we go again with this.

NVidia shipped 100k AI GPUs last year, which - if run nonstop - would consume 7.4 TWh per year. Crypto consumes over 100 TWh per year, and the world as a whole consumes just under 25,000 TWh per year.
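Quick arithmetic on the shares those figures imply (the numbers are the ones quoted above, not fresh measurements):

```python
# Back-of-envelope shares of world electricity consumption,
# using the figures cited in the text above.
ai_twh = 7.4          # AI GPUs shipped last year, run nonstop
crypto_twh = 100.0    # crypto, lower bound
world_twh = 25_000.0  # world total

ai_share = ai_twh / world_twh * 100
crypto_share = crypto_twh / world_twh * 100
print(f"AI: {ai_share:.3f}% of world consumption")      # ~0.03%
print(f"Crypto: {crypto_share:.1f}% of world consumption")  # ~0.4%
```

So by these numbers, crypto draws over 13 times what the AI fleet does, and both are rounding errors against the world total.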

AI's consumption of power is a pittance. To get these huge numbers, they have to assume long-term extreme exponential scaling. But you can make anything yield insane numbers with an assumption like that.

I simply don't buy the assumption. Not even assuming an AI bust - even assuming that AI keeps hugely growing, and that nobody rests on their laurels but rather keeps training newer and better foundations - the simple fact is that there's far too much progress being made toward vastly more efficient architectures at every level: model structure, neuron structure, training methodologies, and hardware. Not "50% better", but "orders of magnitude better". I just don't buy these notions of infinite exponential growth.
