Comment Re:How will they know? (Score 1) 39

I suspect what will actually trigger these kinds of bans is posting behavior rather than direct knowledge of how something was produced. I do not know about Bandcamp specifically, but in other places where AI slop is becoming a problem, it comes from content farms that produce impossibly large amounts of material in a short time, flooding the system. If you are uploading 100 tracks a day, all similar, all tailored to what the recommendation engines will suggest so they show up in people's feeds, hopefully you will get reported and kicked.

Comment Re:What in the .... (Score 1) 49

That... is an interesting point. SCOTUS, or at least its newest additions, has been really big on the idea that agencies should not be able to write or enforce regulations; that has been a big wish-list item of the Reagan rejects for decades. But if you take away the FCC's ability to fine based on behavior or content or anything else, what stops pirate stations, or people just interfering with broadcasts? Since that hurts profit margins and isn't top-down dominance, I imagine that power will magically be retained.

Comment Maybe? (Score 1) 80

As loaded as the term 'person' is, legally it could mean almost anything, since we already have all sorts of categories floating around for entities that are 'people' with respect to specific capabilities, like entering into contracts.

Here, though... I would say probably not. AIs have no reason to be able to independently sign contracts or take legal responsibility for things. In fact, for society it would probably be a terrible idea... though for AI owners, I can see how the ability to pass legal blame onto software would be nice.

Comment Re:"Science fiction" is the lamest criticism (Score 1) 105

Does it really have no basis, though? I guess it depends on which doom we are talking about, but a lot of them just take the trends of the last few decades and extrapolate them out, with 'AI' simply being the latest step. No Skynet or Minority Report style doom, just capitalism, concentration of wealth, an increasingly dysfunctional economy driven by advertising and engagement, and how lucrative private surveillance has become. The doom that comes with AI is not anything new, but an acceleration of things already happening.

Comment Re:The cause is easy to see (Score 1) 105

Does anyone actually see that potential, though? It is sold as a reason AI will be good, but these models are really not well suited to it, and in those domains most of the people trying to apply them are AI researchers bringing their tools into other fields, not practitioners in those fields using AI tools for their own work. It is a subtle but important distinction... and one heavily tainted by the AI industry's (and culture's) fundamental disrespect for any field other than its own.

In the research domain, the big appeal of ML is that you don't have to understand a problem to solve it, so AI researchers only need to understand AI and can thus solve problems those lesser scientists in other fields could not. It is kind of creepy how anti-intellectual AI has become... and I doubt 'solving intractable problems' is really going to come out of that. Lots of junk science, on the other hand... right up their alley.

Comment Re: I am optimistic (Score 1) 105

I would say Web 2.0 did it. In the .com era, you had companies at least trying to build goods and services that consumers would want. Success or failure, their business motivation was building products that people would pay money to use. In Web 2.0, on the other hand, users became the product and advertisers became the customers... so it produced results that, well, encouraged rot. Content no longer mattered, because the consumers of the content were no longer the customers.

Comment Re:I am optimistic (Score 1) 105

There is a real question about how much it can keep improving, though. Setting aside the 'symbolic vs. statistical' issue, we are already seeing model collapse starting. Machine learning depends on training with real data... not synthetic data, and not the output of other models. As AI dumps more and more content into the same places AI trains from, weird things happen to models. The only way to stay ahead of this collapse is bigger and bigger models, which is what this massive build-out is attempting. The unknown part is what these curves will really look like... can they build fast enough to stay ahead, hold in place, or at least slow the decay?
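The collapse mechanism described above can be sketched with a toy simulation (a simplification I am assuming for illustration: the "model" is just a one-dimensional Gaussian fit, and each generation trains only on the previous generation's output). The fitted spread quietly decays because each generation under-samples the tails it inherits, and the loss compounds:

```python
import random
import statistics

def fit_and_resample(data, n):
    # "Train" a toy model: fit a Gaussian to the data by maximum
    # likelihood, then generate the next generation's "training
    # set" entirely from that fitted model.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)  # MLE (population) std dev
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
n = 20
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # "real" data
spread = [statistics.pstdev(data)]

for generation in range(500):
    data = fit_and_resample(data, n)
    spread.append(statistics.pstdev(data))

# The spread drifts toward zero: no generation ever adds back
# variation that a previous generation failed to sample.
print(f"initial spread: {spread[0]:.3f}, final spread: {spread[-1]:.6f}")
```

Real language models are vastly more complex, but the same qualitative failure (loss of distributional tails when training on model output instead of fresh real data) is what the model-collapse literature describes.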

Economically, it is not looking good. AI projects are consuming vast amounts of wealth for incremental gains, and that is a one-time burst of investment during the one period when large datasets exist that have not yet been contaminated. If they had to cut over from investment to self-sustaining revenue today, it would likely collapse.

Comment Re: More money????? (Score 1) 105

That is if we are lucky. I sometimes wonder if homebuilders will go the same route as cars someday... more and more computerized parts that cannot be repaired, only replaced. Looking at a lot of homebuilding trends, we could be moving toward a 'disposable house' market. McMansions already have lifespans of maybe 20 years...

Comment Re:"Nvidia CEO Jensen Huang Says- (Score 1) 105

While the wording was not great, they have the right idea. Statistics-based AI models really only became popular because they happened to run well on cheap hardware. They have always been the least expressive and least promising branch of AI, and today you can really see the lack of symbolic reasoning catching up with them. Since we have wiped out an entire generation of researchers looking into the other branches, we will probably have to start over...
