The Internet

RSS Co-Creator Launches New Protocol For AI Data Licensing

A group led by RSS co-creator Eckart Walther has launched a new protocol designed to standardize and scale licensing of online content for AI training. Backed by publishers like Reddit, Quora, Yahoo, and Medium, Real Simple Licensing (RSL) combines machine-readable terms in robots.txt with a collective rights organization, aiming to do for AI training data what ASCAP did for music royalties. However, it remains to be seen whether AI labs will agree to adopt it. TechCrunch reports: According to RSL co-founder Eckart Walther, who also co-created the RSS standard, the goal was to create a training-data licensing system that could scale across the internet. "We need to have machine-readable licensing agreements for the internet," Walther told TechCrunch. "That's really what RSL solves."

For years, groups like the Dataset Providers Alliance have been pushing for clearer collection practices, but RSL is the first attempt at a technical and legal infrastructure that could make it work in practice. On the technical side, the RSL Protocol lays out specific licensing terms a publisher can set for their content, whether that means AI companies need a custom license or to adopt Creative Commons provisions. Participating websites will include the terms as part of their "robots.txt" file in a prearranged format, making it straightforward to identify which data falls under which terms.
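To make the mechanism concrete, here is a purely hypothetical sketch of how a crawler might discover a site's declared licensing terms from robots.txt. The `License:` directive name and the terms URL below are illustrative assumptions for this sketch, not the RSL specification's actual syntax:

```python
# Hypothetical sketch: discovering machine-readable licensing terms
# declared in robots.txt. The "License:" directive name used here is
# an illustrative assumption, not the published RSL syntax.

def find_license_urls(robots_txt: str) -> list[str]:
    """Return URLs of any license-terms documents declared in robots.txt."""
    urls = []
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if line.lower().startswith("license:"):
            # Keep everything after the first colon as the terms URL
            urls.append(line.split(":", 1)[1].strip())
    return urls

example = """\
User-agent: *
Allow: /
# Hypothetical RSL-style declaration:
License: https://example.com/license.xml
"""

print(find_license_urls(example))  # ['https://example.com/license.xml']
```

The point of the design is that a crawler already fetches robots.txt before scraping, so attaching licensing terms there adds no extra round trip for compliant AI companies.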

On the legal side, the RSL team has established a collective licensing organization, the RSL Collective, that can negotiate terms and collect royalties, similar to ASCAP for musicians or MPLC for films. As in music and film, the goal is to give licensors a single point of contact for paying royalties and provide rights holders a way to set terms with dozens of potential licensors at once. A host of web publishers have already joined the collective, including Yahoo, Reddit, Medium, O'Reilly Media, Ziff Davis (owner of Mashable and Cnet), Internet Brands (owner of WebMD), People Inc., and The Daily Beast. Others, like Fastly, Quora, and Adweek, are supporting the standard without joining the collective.

Notably, the RSL Collective includes some publishers that already have licensing deals -- most notably Reddit, which receives an estimated $60 million a year from Google for use of its training data. There's nothing stopping companies from cutting their own deals within the RSL system, just as Taylor Swift can set special terms for licensing while still collecting royalties through ASCAP. But for publishers too small to draw their own deals, RSL's collective terms are likely to be the only option.
China

Chinese Hackers Impersonated US Lawmaker in Email Espionage Campaign (msn.com)

As America's trade talks with China were set to begin last July, a "puzzling" email reached several U.S. government agencies, law firms, and trade groups, reports the Wall Street Journal. It appeared to be from the chair of a U.S. Congressional committee, Representative John Moolenaar, asking recipients to review an alleged draft of upcoming legislation — sent as an attachment. "But why had the chairman sent the message from a nongovernment address...?"

"The cybersecurity firm Mandiant determined the spyware would allow the hackers to burrow deep into the targeted organizations if any of the recipients had opened the purported draft legislation, according to documents reviewed by The Wall Street Journal." It turned out to be the latest in a series of alleged cyber espionage campaigns linked to Beijing, people familiar with the matter said, timed to potentially deploy spyware against organizations giving input on President Trump's trade negotiations. The FBI and the Capitol Police are investigating the Moolenaar emails, and cyber analysts traced the embedded malware to a hacker group known as APT41 — believed to be a contractor for Beijing's Ministry of State Security... The hacking campaign appeared to be aimed at giving Chinese officials an inside look at the recommendations Trump was receiving from outside groups. It couldn't be determined whether the attackers had successfully breached any of the targets.

A Federal Bureau of Investigation spokeswoman declined to provide details but said the bureau was aware of the incident and was "working with our partners to identify and pursue those responsible...." The alleged campaign comes as U.S. law-enforcement officials have been surprised by the prolific and creative nature of China's spying efforts. The FBI revealed last month that a Beijing-linked espionage campaign that hit U.S. telecom companies and swept up Trump's phone calls actually targeted more than 80 countries and reached across the globe...

The Moolenaar impersonation comes as several administration officials have recently faced impostors of their own. The State Department warned diplomats around the world in July that an impostor was using AI to imitate Secretary of State Marco Rubio's voice in messages sent to foreign officials. Federal authorities are also investigating an effort to impersonate White House chief of staff Susie Wiles, the Journal reported in May... The FBI issued a warning that month that "malicious actors have impersonated senior U.S. officials" targeting contacts with AI-generated voice messages and texts.

And in January, the article points out, all the staffers on Moolenaar's committee "received emails falsely claiming to be from the CEO of Chinese crane manufacturer ZPMC, according to people familiar with the episode."

Thanks to long-time Slashdot reader schwit1 for sharing the news.
The Media

Publishers Demand 'AI Overview' Traffic Stats from Google, Alleging 'Forced' Deals (theguardian.com)

AI Overviews have lowered click-through traffic to Daily Mail sites by as much as 89%, the publisher told a UK government body that regulates competition. So they've joined other top news organizations (including Guardian Media Group and the magazine trade body the Periodical Publishers Association) in asking the regulators "to make Google more transparent and provide traffic statistics from AI Overview and AI Mode to publishers," reports the Guardian: Publishers — already under financial pressure from soaring costs, falling advertising revenues, the decline of print and the wider trend of readers turning away from news — argue that they are effectively being forced by Google to either accept deals, including on how content is used in AI Overview and AI Mode, or "drop out of all search results", according to several sources... In recent years, Google Discover, which feeds users articles and videos tailored to them based on their past online activity, has replaced search as the main source of click-throughs to content. However, David Buttle, founder of the consultancy DJB Strategies, says the service, which is also tied to publishers' overall search deals, does not deliver the quality traffic that most publishers need to drive their long-term strategies. "Google Discover is of zero product importance to Google at all," he says. "It allows Google to funnel more traffic to publishers as traffic from search declines ... Publishers have no choice but to agree or lose their organic search. It also tends to reward clickbaity type content. It pulls in the opposite direction to the kind of relationship publishers want."

Meanwhile, publishers are fighting a wider battle with AI companies seeking to plunder their content to train their large language models. The creative industry is intensively lobbying the government to ensure that proposed legislation does not allow AI firms to use copyright-protected work without permission, a move that would stop the "value being scraped" out of the £125bn sector. Some publishers have struck bilateral licensing deals with AI companies — such as the FT, the German media group Axel Springer, the Guardian and the Nordic publisher Schibsted with the ChatGPT maker OpenAI — while others such as the BBC have taken action against AI companies alleging copyright theft. "It is a two-pronged attack on publishers, a sort of pincer movement," says Chris Duncan, a former News UK and Bauer Media senior executive who now runs a media consultancy, Seedelta. "Content is disappearing into AI products without serious remuneration, while AI summaries are being integrated into products so there is no need to click through, effectively taking money from both ends. It is an existential crisis."

"At the moment the AI and tech community are showing no signs of supporting publisher revenue," says the chief executive of the UK's Periodical Publishers Association...
The Courts

Warner Bros. Discovery Sues Midjourney For Copyright Infringement

Warner Bros. Discovery has filed a major copyright lawsuit against Midjourney, accusing the AI image generator of exploiting its movies and TV shows to train models and generate near-identical reproductions of iconic characters like Batman, Bugs Bunny, and Rick and Morty. From The Hollywood Reporter: The company "brazenly dispenses Warner Bros. Discovery's intellectual property" by letting subscribers produce images and videos of iconic copyrighted characters, alleges the complaint, filed on Thursday in California federal court. "The heart of what we do is develop stories and characters to entertain our audiences, bringing to life the vision and passion of our creative partners," said a Warner Bros. Discovery spokesperson in a statement. "Midjourney is blatantly and purposefully infringing copyrighted works, and we filed this suit to protect our content, our partners, and our investments."

For years, AI companies have been training their technology on data scraped across the internet without compensating creators. It's led to lawsuits from authors, record labels, news organizations, artists and studios, which contend that some AI tools erode demand for their content. Warner Bros. Discovery joins Disney and Universal, which earlier this year teamed up to sue Midjourney. By their thinking, the AI company is a free-rider plagiarizing their movies and TV shows. In the lawsuit, Warner Bros. Discovery points to Midjourney generating images of iconic copyrighted characters. At the forefront are heroes at the center of DC Studios' movies and TV shows, like Superman, Wonder Woman and The Joker; others are Looney Tunes, Tom and Jerry and Scooby-Doo characters who've become ubiquitous household names; still others are Cartoon Network characters, including those from Rick and Morty, who've emerged as cultural touchstones in recent years. [...]

The lawsuit argues Midjourney's ability to return copyrighted characters is a "clear draw for subscribers," diverting consumers away from purchasing Warner Bros. Discovery-approved posters, wall art and prints, among other products that must now compete against the service. [...] Warner Bros. Discovery seeks Midjourney's profits attributable to the alleged infringement or, alternatively, $150,000 per infringed work, which could leave the AI company on the hook for massive damages. The thrust of the studios' lawsuits will likely be decided by one question: Are AI companies covered by fair use, the legal doctrine in intellectual property law that allows creators to build upon copyrighted works without a license?
The lawsuit can be found here.
AI

First 'AI Music Creator' Signed by Record Label. More Ahead, or Just a Copyright Quandary? (apnews.com)

"I have no musical talent at all," says Oliver McCann. "I can't sing, I can't play instruments, and I have no musical background at all!"

But the Associated Press describes 37-year-old McCann as a British "AI music creator" — and last month McCann signed with an independent record label "after one of his tracks racked up 3 million streams, in what's billed as the first time a music label has inked a contract with an AI music creator." McCann is an example of how ChatGPT-style AI song generation tools like Suno and Udio have spawned a wave of synthetic music, a movement most notably highlighted by a fictitious group, Velvet Sundown, that went viral even though all its songs, lyrics and album art were created by AI. Experts say generative AI is set to transform the music world. However, there are scant details, so far, on how it's impacting the $29.6 billion global recorded music market, which includes about $20 billion from streaming.

The most reliable figures come from music streaming service Deezer, which estimates that 18% of songs uploaded to its platform every day are purely AI generated, though they only account for a tiny amount of total streams, hinting that few people are actually listening. Other, bigger streaming platforms like Spotify haven't released any figures on AI music... "It's a total boom. It's a tsunami," said Josh Antonuccio, director of Ohio University's School of Media Arts and Studies. The amount of AI generated music "is just going to only exponentially increase" as young people grow up with AI and become more comfortable with it, he said. [Antonuccio says later the cost of making a hit record "just keeps winnowing down from a major studio to a laptop to a bedroom. And now it's like a text prompt — several text prompts." Though there's a lack of legal clarity over copyright issues.]

Generative AI, with its ability to spit out seemingly unique content, has divided the music world, with musicians and industry groups complaining that recorded works are being exploited to train AI models that power song generation tools... Three major record companies, Sony Music Entertainment, Universal Music Group and Warner Records, filed lawsuits last year against Suno and Udio for copyright infringement. In June, the two sides also reportedly entered negotiations that could go beyond settling the lawsuits and set rules for how artists are paid when AI is used to remix their songs.

GEMA, a German royalty collection society, has sued Suno, accusing it of generating music similar to songs like "Mambo No. 5" by Lou Bega and "Forever Young" by Alphaville. More than 1,000 musicians, including Kate Bush, Annie Lennox and Damon Albarn, released a silent album to protest proposed changes to U.K. laws on AI they fear would erode their creative control.

Meanwhile, other artists, such as will.i.am, Timbaland and Imogen Heap, have embraced the technology. Some users say the debate is just a rehash of old arguments about once-new technology that eventually became widely used, such as AutoTune, drum machines and synthesizers.

AI

Humans Are Being Hired to Make AI Slop Look Less Sloppy (nbcnews.com)

Graphic designer Lisa Carstens "spends a good portion of her day working with startups and individual clients looking to fix their botched attempts at AI-generated logos," reports NBC News: Such gigs are part of a new category of work spawned by the generative AI boom that threatened to displace creative jobs across the board: Anyone can now write blog posts, produce a graphic or code an app with a few text prompts, but AI-generated content rarely makes for a satisfactory final product on its own... Fixing AI's mistakes is not their ideal line of work, many freelancers say, as it tends to pay less than traditional gigs in their area of expertise. But some say it's what helps pay the bills....

As companies struggle to figure out their approach to AI, recent data provided to NBC News from freelance job platforms Upwork, Freelancer and Fiverr also suggest that demand for various types of creative work surged this year, and that clients are increasingly looking for humans who can work alongside AI technologies without relying on or rejecting them entirely. Data from Upwork found that although AI is already automating lower-skilled and repetitive tasks, the platform is seeing growing demand for more complex work such as content strategy or creative art direction. And over the past six months, Fiverr said it has seen a 250% boost in demand for niche tasks across web design and book illustration, from "watercolor children story book illustration" to "Shopify website design." Similarly, Freelancer saw a surge in demand this year for humans in writing, branding, design and video production, including requests for emotionally engaging content like "heartfelt speeches...."

The low pay from clients who have already cheaped out on AI tools has affected gig workers across industries, including more technical ones like coding. For India-based web and app developer Harsh Kumar, many of his clients say they had already invested much of their budget in "vibe coding" tools that couldn't deliver the results they wanted. But others, he said, are realizing that shelling out for a human developer is worth the headaches saved from trying to get an AI assistant to fix its own "crappy code." Kumar said his clients often bring him vibe-coded websites or apps that resulted in unstable or wholly unusable systems.

"Even outside of any obvious mistakes made by AI tools, some artists say their clients simply want a human touch to distinguish themselves from the growing pool of AI-generated content online..."
GNU is Not Unix

FSF Announces Photo Contest Honoring 40 Years of Free Software (fsf.org)

The Free Software Foundation announced a special photography contest honoring its 40th anniversary: The technology we use every day has changed dramatically since our founding nearly forty years ago, including the way we interact with it... We're incredibly grateful for the countless hours that developers and users have put into the free software programs that exist today. Without all the people who cared enough to make and use software that respects the four freedoms four decades or even a year ago, we wouldn't have much to celebrate.

We want to honor the hard work that has gone into free software and its development with the FSF40 Photo Contest. Starting on August 14, 2025, we're inviting free software supporters worldwide to share how they use free software on a daily basis. While we can think of hundreds of ways that free software can be used, there's almost certainly many of you who have thought of much more creative ways to involve libre software every day!

Shortly after the photo contest closes on August 31, 2025, we will invite you and other free software supporters to vote for your favorite of the #FSF40Photos... We will be displaying the winning photos at our fortieth [anniversary] celebration in Boston, MA on October 4, 2025 — we hope you get to see them on a big screen with us!

Earlier this month the FSF also shared 40 links from around the FSF and GNU sites "that give a sense of what we've been doing all this time as we work for your freedom." (For example, 2007's announcement of the GNU General Public License, version 3.)
Intel

Trump Confirms US Is Seeking 10% Stake In Intel (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: After the Trump administration confirmed a rumor that the US is planning to buy a 10 percent stake in Intel, US Senator Bernie Sanders (I-Vt.) came forward Wednesday to voice support for the highly unusual plan, finding rare common ground with Donald Trump. According to Commerce Secretary Howard Lutnick, the plan would see the US disbursing approved CHIPS Act grants only after acquiring non-voting shares of Intel and likely other chipmakers. That would allow the US to profit off its investment in chipmakers, Lutnick suggested, and Sanders told Reuters that he agreed American taxpayers could benefit from the potential deals.

"If microchip companies make a profit from the generous grants they receive from the federal government, the taxpayers of America have a right to a reasonable return on that investment," Sanders said. While Lutnick gave Trump credit for coming up with what White House Press Secretary Karoline Leavitt described as a "creative idea that has never been done before" to protect US national and economic security, it appears that Lutnick is driving the initiative. "Lutnick has been pushing the equity idea," insiders granted anonymity previously told Reuters, "adding that Trump likes the idea."

So far, Intel has engaged in talks, while the Taiwan Semiconductor Manufacturing Company (TSMC) and other major CHIPS grant recipients like Samsung and Micron have yet to comment on the potential arrangement the Trump administration seems likely to pursue. They may risk clawbacks of grants if such deals aren't made. On Wednesday, Taiwan Economy Minister Kuo Jyh-huei said his ministry would be consulting with TSMC soon, while noting that as yet, it's hard to "thoroughly understand the underlying meaning" of Lutnick's public comments. So far, Lutnick has only specified that "any potential arrangement wouldn't provide the government with voting or governance rights in Intel," dispelling fears that the US would use its ownership stake to try to control the world's most important chipmakers.
Further reading: Intel is Getting a $2 Billion Investment From SoftBank
Google

Gemini For Home Is Google's Biggest Smart Home Play In Years (theverge.com)

Google announced Gemini for Home, a new AI-powered voice assistant that will replace Google Assistant on Nest smart speakers and displays starting in October. Powered by Gemini's advanced reasoning and conversational capabilities, it promises more natural interactions, complex task handling, and features like Gemini Live for back-and-forth conversations. The Verge reports: According to a blog post by Anish Kattukaran, chief product officer of Google Home and Nest, using Gemini for Home will "feel fundamentally new." He says the new voice assistant leverages the "advanced reasoning, inference and search capabilities" of Google's AI models, along with adaptations for the home that allow for more natural interactions to complete more complex tasks. In short, it should be an assistant that can better understand context, nuance, and intention -- a complete change from its predecessor.

For example, Kattukaran says Gemini for Home can accurately respond to requests like "turn off the lights everywhere except my bedroom," "play that song from this year's summer blockbuster about race cars," or "set a timer for perfectly blanched broccoli." It will also create lists, calendar entries, and reminders more easily than before, he says.

Another big upgrade is that Gemini Live will be part of Gemini for Home, bringing more conversational back-and-forth voice interactions to Google Home without needing to repeatedly say "Hey Google." Kattukaran says this will allow for more detailed and personalized help -- from cooking ("I have spinach, eggs, cream cheese, and smoked salmon in the fridge. Help me make a delicious meal") to brainstorming how to buy a new car or figuring out how to fix your dishwasher, as well as more creative tasks like generating bedtime stories. [...] Google hasn't announced pricing for the paid tier of Gemini for Home, but Gemini Live, with its more advanced capabilities, is a likely candidate for a premium plan.

AI

Margaret Boden, Philosopher of Artificial Intelligence, Dies At 88

An anonymous reader quotes a report from the New York Times: Margaret Boden, a British philosopher and cognitive scientist who used the language of computers to explore the nature of thought and creativity, leading her to prescient insights about the possibilities and limitations of artificial intelligence, died on July 18 in Brighton, England. She was 88. Her death, in a care home, was announced by the University of Sussex, where in the early 1970s she helped establish what is now known as the Center for Cognitive Science, bringing together psychologists, linguists, neuroscientists and philosophers to collaborate on studying the mind.

Polymathic, erudite and a trailblazer in a field dominated by men, Professor Boden produced a number of books -- most notably "The Creative Mind: Myths and Mechanisms" (1990) and "Mind as Machine: A History of Cognitive Science" (2006) -- that helped shape the philosophical conversation about human and artificial intelligence for decades. "What's unique about Maggie is that she's a philosopher who has informed, inspired and shaped science," Blay Whitby, a philosopher and ethicist, said on the BBC radio show "The Life Scientific" in 2014. "It's important I emphasize that, because many modern scientists say that philosophers have got nothing to tell them, and they'd be advised to look at the work and life of Maggie Boden."

Professor Boden was not adept at using computers. "I can't cope with the damn things," she once said. "I have a Mac on my desk, and if anything goes wrong, it's an absolute nightmare." Nevertheless, she viewed computing as a way to help explain the mechanisms of human thought. To her, creativity wasn't divine or a result of eureka-like magic, but rather a process that could be modeled and even simulated by computers. "It's the computational concepts that help us to understand how it's possible for someone to come up with a new idea," Professor Boden said on "The Life Scientific." "Because, at first sight, it just seems completely impossible. God must have done it." Computer science, she went on, helps us "to understand what a generative system is, how it's possible to have a set of rules -- which may be a very, very short, briefly statable set of rules -- but which has the potential to generate infinitely many different structures." She identified three types of creativity -- combinational, exploratory and transformational -- by analyzing human and artificial intelligence.
Data Storage

First Ever Reviews of Mario and Zelda (404media.co)

An anonymous reader quotes a report from 404 Media: Some of the first reviews ever written for the original Legend of Zelda and Super Mario Bros. have been digitized and published by the Video Game History Foundation. The reviews appeared in Computer Entertainer, an early video game magazine that ran from 1982 to 1990. The archivists at the Foundation tracked down the magazine's entire run and have published it all online under a Creative Commons license.
Data Storage

RIP To the Macintosh HD Hard Drive Icon, 2000-2025 (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Apple released a new developer beta build of macOS 26 Tahoe today, and it came with another big update for a familiar icon. The old Macintosh HD hard drive icon, for years represented by a facsimile of an old spinning hard drive, has been replaced with something clearly intended to resemble a solid-state drive (the SSD in your Mac actually looks like a handful of chips soldered to a circuit board, but we'll forgive the creative license).

The Macintosh HD icon became less visible a few years back, when new macOS installs stopped showing your internal disk on the desktop by default. It has also been many years since Apple shifted to SSDs as the primary boot media for new Macs. It's not clear why the icon is being replaced now, instead of years ago -- maybe the icon had started clicking, and Apple just wanted to replace it before it suffered from catastrophic icon failure -- but regardless, the switch is logical (this is a computer storage pun).
Apple's iconic Macintosh HD hard drive icon was first introduced in a 2000 Mac OS X beta and remained largely unchanged for over two decades, with only subtle updates in 2012 and 2014.

The first SSD-equipped Mac was in 2008, "when the original MacBook Air came out," notes Ars. "By the time 'Retina' Macs began arriving in the early 2010s, SSDs had become the primary boot disk for most of them; laptops tended to be all-SSD, while desktops could be configured with an SSD or a hybrid Fusion Drive that used an SSD as boot media and an HDD for mass storage. Apple stopped shipping spinning hard drives entirely when the last of the Intel iMacs went away."
AI

Disney Struggles With How to Use AI - While Retaining Copyrights and Avoiding Legal Issues (msn.com)

Disney "cloned" Dwayne Johnson when filming a live-action Moana, reports the Wall Street Journal, using an AI process that they were ultimately afraid to use: Under the plan they devised, Johnson's similarly buff cousin Tanoai Reed — who is 6-foot-3 and 250 pounds — would fill in as a body double for a small number of shots. Disney would work with AI company Metaphysic to create deepfakes of Johnson's face that could be layered on top of Reed's performance in the footage — a "digital double" that effectively allowed Johnson to be in two places at once... Johnson approved the plan, but the use of a new technology had Disney attorneys hammering out details over how it could be deployed, what security precautions would protect the data and a host of other concerns. They also worried that the studio ultimately couldn't claim ownership over every element of the film if AI generated parts of it, people involved in the negotiations said. Disney and Metaphysic spent 18 months negotiating on and off over the terms of the contract and work on the digital double. But none of the footage will be in the final film when it's released next summer...

Interviews with more than 20 current and former employees and partners present an entertainment giant torn between the inevitability of AI's advance and concerns about how to use it. Progress has at times been slowed by bureaucracy and hand-wringing over the company's social contract with its fans, not to mention its legal contract with unions representing actors, writers and other creative partners... For Disney, protecting its characters and stories while also embracing new AI technology is key. "We have been around for 100 years and we intend to be around for the next 100 years," said the company's legal chief, Horacio Gutierrez, in an interview. "AI will be transformative, but it doesn't need to be lawless...." [As recently as June, a Disney/Comcast Universal lawsuit had argued that Midjourney "is the quintessential copyright free-rider and a bottomless pit of plagiarism."]

Concerns about bad publicity were a big reason that Disney scrapped a plan to use AI in Tron: Ares — a movie set for release in October about an AI-generated soldier entering the real world. Since the movie is about artificial intelligence, executives pitched the idea of actually incorporating AI into one of the characters... as a buzzy marketing strategy, according to people familiar with the matter. A writer would provide context on the animated character — a sidekick to Jeff Bridges' lead role named Bit — to a generative AI program. Then on screen, the AI program, voiced by an actor, would respond to questions as Bit as cameras rolled. But with negotiations with unions representing writers and actors over contracts happening at the same time, Disney dismissed the idea, and executives internally were told that the company couldn't risk the bad publicity, the people said...

Disney's own history speaks to how studios have navigated technological crossroads before. When Disney hired Pixar to produce a handful of graphic images for its 1989 hit The Little Mermaid, executives kept the incorporation a secret, fearing backlash from fans if they learned that not every frame of the animated film had been hand-drawn. Such knowledge, executives feared, might "take away the magic."

Disney invested $1.5 billion in Fortnite creator Epic Games, according to the article, and is planning a world in Fortnite where gamers can interact with Marvel superheroes and creatures from Avatar. But "an experiment to allow gamers to interact with an AI-generated Darth Vader was fraught. Within minutes of launching the AI bot, gamers had figured out a way to make it curse in James Earl Jones's signature baritone." (Though Epic patched the workaround within 30 minutes.)

But the article spells out another concern for Disney executives. "If a Fortnite gamer creates a Darth Vader and Spider-Man dance that goes viral on YouTube, who owns that dance?"
Programming

Fiverr Ad Mocks Vibe Coding - with a Singing Overripe Avocado (creativebloq.com) 59

It's a cultural milestone. Fiverr just released an ad mocking vibe coding.

The video features what its description calls a "clueless entrepreneur" building an app to tell if an avocado is ripe — who soon ends up blissfully singing with an avocado to the tune of the cheesy 1987 song "Nothing's Gonna Stop Us Now." The avocado sings joyously of "a new app on the rise in a no-code world that's too good to be true" (rhyming that with "So close. Just not tested through...")

"Let them say we're crazy. I don't care about bugs!" the entrepreneur sings back. "Built you in a minute, now I'm so high off this buzz..."

But despite her singing to the overripe avocado that "I don't need a backend if I've got the spark!" and that they can "build this app together, vibe-coding forever. Nothing's going to stop us now!" — the build suddenly fails. (And it turns out that avocado really was overripe...) Fiverr then suggests viewers instead hire one of their experts for building their apps...

The art/design site Creative Bloq acknowledges Fiverr's "flip-flopping between scepticism and pro-AI marketing." (They point out that a Fiverr ad last November ended with the tagline "Nobody cares that you use AI! They care about the results — for the best ones, hire Fiverr experts who've mastered every digital skill, including AI.") But the site calls this new ad "a step in the right direction towards mindful AI usage." Just like an avocado that looks perfect on the outside, once you inspect the insides, AI-generated code can be deceptively unripe.
Fiverr might be feeling the impact of vibe coding themselves. The freelancing site's share price fell over 14% this week, with one Yahoo! Finance article saying the quarterly results revealed Fiverr's active buyers dropped 10.9% compared to last year — a decline to 3.4 million buyers which "overshadowed a 9.8% increase in spending per buyer."

Even when issuing a buy recommendation, Seeking Alpha called it "a short-term rebound play, as the company faces longer-term risks from AI and active buyer churn."
AI

Anthropic Revokes OpenAI's Access To Claude Over Terms of Service Violation 10

An anonymous reader quotes a report from Wired: Anthropic revoked OpenAI's API access to its models on Tuesday, multiple sources familiar with the matter tell WIRED. OpenAI was informed that its access was cut off due to violating the terms of service. "Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5," Anthropic spokesperson Christopher Nulty said in a statement to WIRED. "Unfortunately, this is a direct violation of our terms of service." According to Anthropic's commercial terms of service, customers are barred from using the service to "build a competing product or service, including to train competing AI models" or "reverse engineer or duplicate" the services. This change in OpenAI's access to Claude comes as the ChatGPT-maker is reportedly preparing to release a new AI model, GPT-5, which is rumored to be better at coding.

OpenAI was plugging Claude into its own internal tools using special developer access (APIs), instead of using the regular chat interface, according to sources. This allowed the company to run tests to evaluate Claude's capabilities in things like coding and creative writing against its own AI models, and check how Claude responded to safety-related prompts involving categories like CSAM, self-harm, and defamation, the sources say. The results help OpenAI compare its own models' behavior under similar conditions and make adjustments as needed. "It's industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them," OpenAI's chief communications officer Hannah Wong said in a statement to WIRED. Nulty says that Anthropic will "continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry."
Businesses

Amazon Invests In 'Netflix of AI' Start-Up Fable, Which Lets You Make Your Own TV Shows 24

An anonymous reader quotes a report from Variety: Edward Saatchi isn't totally sure people will flock to Showrunner, the new AI-generated TV show service his company is launching publicly this week. But he has a vote of confidence from Amazon, which has invested in Fable, Saatchi's San Francisco-based start-up. The amount of Amazon's funding in Fable isn't being disclosed. The money is going toward building out Showrunner, which Fable has hyped as the "Netflix of AI": a service that lets you type in a few words to create scenes -- or entire episodes -- of a TV show, either from scratch or based on an existing story-world someone else has created.

Fable is launching Showrunner to let users tinker with the animation-focused generative-AI system, following several months in a closed alpha test with 10,000 users. Initially, Showrunner will be free to use but eventually the company plans to charge creators $10-$20 per month for credits allowing them to create hundreds of TV scenes, Saatchi said. Viewing Showrunner-generated content will be free, and anyone can share the AI video on YouTube or other third-party platforms. [...] Fable's Showrunner public launch features two original "shows" -- story worlds with characters users can steer into various narrative arcs. The first is "Exit Valley," described as "a 'Family Guy'-style TV comedy set in 'Sim Francisco' satirizing the AI tech leaders Sam Altman, Elon Musk, et al." The other is "Everything Is Fine," in which a husband and wife, going to Ikea, have a huge fight -- whereupon they're transported to a world where they're separated and have to find each other. [...]

Showrunner is powered by Fable's proprietary AI model, SHOW-2. Last year, the company published a research paper on how it built the SHOW-1 model. As part of that, it released nine AI-generated episodes based on "South Park." The episodes, made without the permission of the "South Park" creators, received more than 80 million views. (Saatchi said he was in touch with the "South Park" team, who were reassured the IP wasn't being deployed commercially.) [...] Out of the gate, Showrunner is focused on animated content because it requires much less processing power than realistic-looking live-action video scenes. Saatchi said Fable wants to stay out of the "knife fight" among big AI companies like OpenAI, Google and Meta that are racing to create photorealistic content. "If you're competing with Google, are you going to win?" Saatchi said. "Our goal is to have the most creative models," he said.
Piracy

Creator of 1995 Phishing Tool 'AOHell' On Piracy, Script Kiddies, and What He Thinks of AI (yahoo.com) 14

In 1995's online world, AOL existed mostly beside the internet as a "walled, manicured garden," remembers Fast Company.

Then along came AOHell, "the first of what would become thousands of programs designed by young hackers to turn the system upside down" — built by a high school dropout calling himself "Da Chronic," who says he built it on "a computer that I couldn't even afford" with "a pirated copy of Microsoft Visual Basic." [D]istributed throughout the teen chatrooms, the program combined a pile of tricks and pranks into a slick little control panel that sat above AOL's windows and gave even newbies an arsenal of teenage superpowers. There was a punter to kick people out of chatrooms, scrollers to flood chats with ASCII art, a chat impersonator, an email and instant message bomber, a mass mailer for sharing warez (and later mp3s), and even an "Artificial Intelligence Bot" [which performed automated if-then responses]. Crucially, AOHell could also help users gain "free" access to AOL. The program included a generator for fake credit card numbers (which could fool AOL's sign-up process), and, by January 1995, a feature for stealing other users' passwords or credit cards. With messages masquerading as alerts from AOL customer service reps, the tool could convince unsuspecting users to hand over their secrets...
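The "fake credit card numbers" trick worked because card numbers carry a Luhn check digit, and AOL's sign-up reportedly validated only that checksum format before granting access, rather than verifying the card with a bank. A minimal sketch of the check a generator would need to satisfy (an illustration, not AOHell's actual code):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    # Walk digits right-to-left; double every second one, subtracting
    # 9 when the doubled value exceeds 9 (i.e. sum its two digits).
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# The well-known Visa test number passes; changing one digit fails.
print(luhn_valid("4111111111111111"))  # True
print(luhn_valid("4111111111111112"))  # False
```

A generator simply emits random digit strings and adjusts the final digit until this check passes — which is why format-only validation offered no real protection.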

Of course, Da Chronic — actually a 17-year-old high school dropout from North Carolina named Koceilah Rekouche — had other reasons, too. Rekouche wanted to hack AOL because he loved being online with his friends, who were a refuge from a difficult life at home, and he couldn't afford the hourly fee. Plus, it was a thrill to cause havoc and break AOL's weak systems and use them exactly how they weren't meant to be, and he didn't want to keep that to himself. Other hackers "hated the fact that I was distributing this thing, putting it into the team chat room, and bringing in all these noobs and lamers and destroying the community," Rekouche told me recently by phone...

Rekouche also couldn't have imagined what else his program would mean: a free, freewheeling creative outlet for thousands of lonely, disaffected kids like him, and an inspiration for a generation of programmers and technologists. By the time he left AOL in late 1995, his program had spawned a whole cottage industry of teenage script kiddies and hackers, and fueled a subculture where legions of young programmers and artists got their start breaking and making things, using pirated software that otherwise would have been out of reach... In 2014, [AOL CEO Steve] Case himself acknowledged on Reddit that "the hacking of AOL was a real challenge for us," but that "some of the hackers have gone on to do more productive things."

When he first met Mark Zuckerberg, he said, the Facebook founder confessed to Case that "he learned how to program by hacking [AOL]."

"I can't imagine somebody doing that on Facebook today," Da Chronic says in a new interview with Fast Company. "They'll kick you off if you create a Google extension that helps you in the slightest bit on Facebook, or an extension that keeps your privacy or does a little cool thing here and there. That's totally not allowed."

AOHell's creator had called its password-stealing techniques "phishing" — and the name stuck. (AOL was working with federal law enforcement to find him, according to a leaked internal email, but "I didn't even see that until years later.") Later, enrolled in college, he decided to write a technical academic paper about his program. "I do believe it caught the attention of Homeland Security, but I think they realized pretty quickly that I was not a threat."

He's got an interesting perspective today, noting that with today's AI tools it's theoretically possible to "craft dynamic phishing emails... when I see these AI coding tools I think, this might be like today's Visual Basic. They take out a lot of the grunt work."

What's the moral of the story? "I didn't have any qualifications or anything like that," Da Chronic says. "So you don't know who your adversary is going to be, who's going to understand psychology in some nuanced way, who's going to understand how to put some technological pieces together, using AI, and build some really wild shit."
The Military

Kill Russian Soldiers, Win Points: Is Ukraine's New Drone Scheme Gamifying War? (bbc.com) 290

ABC News reports that Ukrainian drones struck Moscow last night — over 100 of them — closing all four of Moscow's international airports and diverting at least 134 planes. And Ukrainian commanders estimate that drones now account for 70% of all Russian deaths and injuries, according to the BBC — which means attacks on the front line are filmed, logged, and counted.

"And now put to use too, as the Ukrainian military tries to extract every advantage it can against its much more powerful opponent." Under a scheme first trialled last year and dubbed "Army of Drones: Bonus" (also known as "e-points"), units can earn points for each Russian soldier killed or piece of equipment destroyed. And like a killstreak in Call of Duty, or a 1970s TV game show, points mean prizes [described later as "extra equipment."]

"The more strategically important and large-scale the target, the more points a unit receives," reads a statement from the team at Brave 1, which brings together experts from government and the military. "For example, destroying an enemy multiple rocket launch system earns up to 50 points; 40 points are awarded for a destroyed tank and 20 for a damaged one."

Call it the gamification of war.

The article concludes that the e-points scheme "is typical of the way Ukraine has fought this war: creative, out-of-the-box thinking designed to make the most of the country's innovative skills and minimise the effect of its numerical disadvantage."

And "It turns out that encouraging a Russian soldier to surrender is worth more points than killing one," the article notes — up to 10x more, since "a prisoner of war can always be used in future deals over prisoner exchanges."

Thanks to long-time Slashdot reader schwit1 for sharing the article.
AI

WeTransfer Backtracks on Terms Suggesting User Files Could Train AI Models After Backlash (theguardian.com) 10

WeTransfer has reversed controversial terms of service changes after users protested language suggesting uploaded files could be used to "improve machine learning models."

The file-sharing service, popular among creative professionals and used by 80 million users across 190 countries, clarified that user content had never been used to train AI models and removed all references to machine learning from its updated terms. Creative users including voice actors, filmmakers, and journalists had threatened to cancel subscriptions over the changes.
Youtube

YouTube Can't Put Pandora's AI Slop Back in the Box (gizmodo.com) 75

Longtime Slashdot reader SonicSpike shares a report from Gizmodo: YouTube is inundated with AI-generated slop, and that's not going to change anytime soon. Instead of cutting down on the total number of slop channels, the platform is planning to update its policies to cut out some of the worst offenders making money off "spam." At the same time, it's still full steam ahead adding tools to make sure your feeds are full of mass-produced brainrot.

In an update to its support page posted last week, YouTube said it will modify guidelines for its Partner Program, which lets some creators with enough views make money off their videos. The video platform said it requires YouTubers to create "original" and "authentic" content, but now it will "better identify mass-produced and repetitious content." The changes will take place on July 15. The company didn't say whether this change is related to AI, but the timing can't be overlooked considering how many more people are noticing the rampant proliferation of slop content flowing onto the platform every day.

The AI "revolution" has resulted in a landslide of trash content that has mired most creative platforms. Alphabet-owned YouTube has been especially bad recently, with multiple channels dedicated exclusively to pumping out legions of fake and often misleading videos into the sludge-filled sewer that has become users' YouTube feeds. AI slop has become so prolific it has infected most social media platforms, including Facebook and Instagram. Last month, John Oliver on "Last Week Tonight" specifically highlighted several YouTube channels that crafted obviously fake stories made to show White House Press Secretary Karoline Leavitt in a good light. These channels and similar accounts across social media pump out these quick AI-generated videos to make a quick buck off YouTube's Partner Program.
