AI

People Are Using AI Chatbots To Guide Their Psychedelic Trips

An anonymous reader quotes a report from Wired: Trey had struggled with alcoholism for 15 years, eventually drinking heavily each night before quitting in December. But staying sober was a struggle for the 36-year-old first responder from Atlanta, who did not wish to use his real name due to professional concerns. Then he discovered Alterd, an AI-powered journaling app that invites users to "explore new dimensions," geared toward psychedelics and cannabis consumers, meditators, and alcohol drinkers. In April, using the app as a tripsitter -- a term for someone who soberly watches over another while they trip on psychedelics to provide reassurance and support -- he took a huge dose of 700 micrograms of LSD. (A typical recreational dose is considered to be 100 micrograms.) "I went from craving compulsions to feeling true freedom and not needing or wanting alcohol," he says.

He recently asked the app's "chat with your mind" function how he had grown wiser through all his AI-assisted psychedelic trips. It responded: "I trust my own guidance now, not just external rules or what others think. I'm more creative, less trapped by fear, and I actually live by my values, not just talk about them. The way I see, reflect, and act in the world is clearer and more grounded every day." "It's almost like your own self that you're communicating with," says Trey, adding he's tripped with his AI chatbot about a dozen times since April. "It's like your best friend. It's kind of crazy."
The article mentions several different chatbot tools and AI systems that are being used for psychedelic therapy.

ChatGPT: "Already, many millions of people are using ChatGPT on a daily basis, and the developments may have helped democratize access to psychotherapy-style guidance, albeit in a dubious Silicon Valley style with advice that is often flush with untruths," reports Wired. The general-purpose AI chatbot is being used for emotional support, intention-setting, and even real-time guidance during psychedelic trips. While not designed for therapy, it has been used informally as a trip companion, offering customized music playlists, safety reminders, and existential reflections. Experts caution that its lack of emotional nuance and clinical oversight poses significant risks during altered states.

Alterd: Alterd is a personalized AI journal app that serves as a reflective tool by analyzing a user's entries, moods, and behavior patterns. Its "mind chat" function acts like a digital subconscious, offering supportive insights while gently confronting negative habits like substance use. Users credit it with deepening self-awareness and maintaining sobriety, particularly in the context of psychedelic-assisted growth.

Mindbloom's AI Copilot: Integrated into Mindbloom's at-home ketamine therapy program, the AI copilot helps clients set pretrip intentions, process post-trip emotions, and stay grounded between sessions. It generates custom reflections and visual art based on voice journals, aiming to enhance the therapeutic journey even outside of human-guided sessions. The company plans to evolve the tool into a real-time, intelligent assistant capable of interacting more dynamically with users.

Orb AI/Shaman Concepts (Speculative): Conceptual "orb" interfaces imagine an AI-powered, shaman-like robot facilitating various aspects of psychedelic therapy, from intake to trip navigation. While still speculative, such designs hint at a future where AI plays a central, embodied role in guiding altered states. These ideas raise provocative ethical and safety questions about replacing human presence with machines in deeply vulnerable psychological contexts.

AI in Virtual Reality and Brain Modulation Systems: Researchers are exploring how AI could coordinate immersive virtual reality environments and brain-modulating devices to enhance psychedelic therapy. These systems would respond to real-time emotional and physiological signals, using haptic suits and VR to deepen and personalize the psychedelic experience. Though still in the conceptual phase, this approach represents the fusion of biotech, immersive tech, and AI in pursuit of therapeutic transformation.
China

Chinese Film Foundation Plans to Use AI to 'Revitalize' 100 Classic Kung Fu Films (msn.com)

"The China Film Foundation, a nonprofit fund under the Chinese government, plans to use AI to revitalize 100 kung fu classics including Police Story, Once Upon a Time in China and Fist of Fury, featuring Jackie Chan, Jet Li and Bruce Lee, respectively," reports the Los Angeles Times.

"The foundation said it will partner with businesses including Shanghai Canxing Culture & Media Co., which will license 100 Hong Kong films to AI companies to reintroduce those movies to younger audiences globally." The foundation said there are opportunities to use AI to tell those stories through animation, for example. There are plans to release an animated version of director John Woo's 1986 film A Better Tomorrow that uses AI to "reinterpret" Woo's "signature visual language," according to an English transcript of the announcement....

The project raised eyebrows among U.S. artists, many of whom are deeply wary of the use of AI in creative pursuits. The Directors Guild of America said AI is a creative tool that should only be used to enhance the creative storytelling process and "it should never be used retroactively to distort or destroy a filmmaker's artistic work... The DGA strongly opposes the use of AI or any other technology to mutilate a film or to alter a director's vision," the DGA said in a statement. "The Guild has a longstanding history of opposing such alterations on issues like colorization or sanitization of films to eliminate so-called 'objectionable content', or other changes that fundamentally alter a film's original style, meaning, and substance."

The project highlights widely divergent views on AI's potential to reshape entertainment as the two countries compete for dominance in the highly competitive AI space.... During the project's announcement, supporters touted the opportunity AI will bring to China to further its cultural message globally and generate new work for creatives. At the same time, they touted AI's disruption of the filmmaking process, saying the A Better Tomorrow remake was completed with just 30 people, significantly fewer than a typical animated project. China is a "more brutal society in that sense," said Eric Harwit, professor of Asian studies at the University of Hawaii at Manoa. "If somebody loses their job because artificial intelligence is taking over, well, that's just the cost of China's moving forward.... You don't have those freestanding labor organizations, so they don't have that kind of clout to protest against the Chinese using artificial intelligence in a way that might reduce their job opportunities or lead to layoffs in the sector..."

The kung fu revitalization efforts will extend into other areas, including the creation of a martial arts video game.

The article also includes an interesting statistic. "Many people in China embrace AI, with 83% feeling confident that AI systems are designed to act in the best interest of society, much higher than the U.S. where it's 37%, according to a survey from the United Nations Development Program."
Television

The Last of Us Co-Creator Neil Druckmann Exits HBO Show (arstechnica.com)

Neil Druckmann and Halley Gross, two pivotal creative forces behind HBO's The Last of Us adaptation, have stepped away from the series before work begins on Season 3. Druckmann is focusing on new projects at Naughty Dog, while Gross hinted at other upcoming creative endeavors, leaving showrunner Craig Mazin at the helm. Ars Technica reports: Both were credited as executive producers on the show; Druckmann frequently contributed writing to episodes, as did Gross, and Druckmann also directed. Druckmann and Gross co-wrote the second game, The Last of Us Part 2.

Druckmann said in his announcement post: "I've made the difficult decision to step away from my creative involvement in The Last of Us on HBO. With work completed on season 2 and before any meaningful work starts on season 3, now is the right time for me to transition my complete focus to Naughty Dog and its future projects, including writing and directing our exciting next game, Intergalactic: The Heretic Prophet, along with my responsibilities as Studio Head and Head of Creative. Co-creating the show has been a career highlight. It's been an honor to work alongside Craig Mazin to executive produce, direct and write on the last two seasons. I'm deeply thankful for the thoughtful approach and dedication the talented cast and crew took to adapting The Last of Us Part I and the continued adaptation of The Last of Us Part II."

And Gross said: "With great care and consideration, I've decided to take a step back from my day-to-day work on HBO's The Last of Us to make space for what comes next. I'm so appreciative of how special this experience has been. Working alongside Neil, Craig, HBO, and this remarkable cast and crew has been life changing. The stories we told -- about love, loss, and what it means to be human in a terrifying world -- are exactly why I love this franchise. I have some truly rad projects ahead that I can't wait to share, but for now, I want to express my gratitude to everyone who brought Ellie and Joel's world to life with such care."

AI

AI Arms Race Drives Engineer Pay To More Than $10 Million (ft.com)

Tech companies are paying AI engineers unprecedented salaries as competition for talent intensifies, with some top engineers earning more than $10 million annually and typical packages ranging from $3 million to $7 million. OpenAI told staff this week it is seeking "creative ways to recognize and reward top talent" after losing key employees to rivals, despite offering salaries near the top of the market.

The move followed OpenAI CEO Sam Altman's claim that Meta had promised $100 million sign-on bonuses to the company's most high-profile AI engineers. Mark Chen, OpenAI's chief research officer, sent an internal memo saying he felt "as if someone has broken into our home and stolen something" after recent departures.

AI engineer salaries have risen approximately 50% since 2022, with mid-to-senior level research scientists now earning $500,000 to $2 million at major tech companies, compared to $180,000 to $220,000 for senior software engineers without AI experience.
AI

Has an AI Backlash Begun? (wired.com)

"The potential threat of bosses attempting to replace human workers with AI agents is just one of many compounding reasons people are critical of generative AI..." writes Wired, arguing that there's an AI backlash that "keeps growing strong."

"The pushback from the creative community ramped up during the 2023 Hollywood writers' strike, and continued to accelerate through the current wave of copyright lawsuits brought by publishers, creatives, and Hollywood studios." And "Right now, the general vibe aligns even more with the side of impacted workers." "I think there is a new sort of ambient animosity towards the AI systems," says Brian Merchant, former WIRED contributor and author of Blood in the Machine, a book about the Luddites rebelling against worker-replacing technology. "AI companies have speedrun the Silicon Valley trajectory." Before ChatGPT's release, around 38 percent of US adults were more concerned than excited about increased AI usage in daily life, according to the Pew Research Center. The number shot up to 52 percent by late 2023, as the public reacted to the speedy spread of generative AI. The level of concern has hovered around that same threshold ever since...

[F]rustration over AI's steady creep has breached the container of social media and started manifesting more in the real world. Parents I talk to are concerned about AI use impacting their child's mental health. Couples are worried about chatbot addictions driving a wedge in their relationships. Rural communities are incensed that the newly built data centers required to power these AI tools are kept humming by generators that burn fossil fuels, polluting their air, water, and soil. As a whole, the benefits of AI seem esoteric and underwhelming while the harms feel transformative and immediate.

Unlike the dawn of the internet where democratized access to information empowered everyday people in unique, surprising ways, the generative AI era has been defined by half-baked software releases and threats of AI replacing human workers, especially for recent college graduates looking to find entry-level work. "Our innovation ecosystem in the 20th century was about making opportunities for human flourishing more accessible," says Shannon Vallor, a technology philosopher at the Edinburgh Futures Institute and author of The AI Mirror, a book about reclaiming human agency from algorithms. "Now, we have an era of innovation where the greatest opportunities the technology creates are for those already enjoying a disproportionate share of strengths and resources."

The impacts of generative AI on the workforce are another core issue that critics are organizing around. "Workers are more intuitive than a lot of the pundit class gives them credit for," says Merchant. "They know this has been a naked attempt to get rid of people."

The article suggests "the next major shift in public opinion" is likely "when broad swaths of workers feel further threatened," and organize in response...
AI

AI Improves At Improving Itself Using an Evolutionary Trick (ieee.org)

Technology writer Matthew Hutson (also Slashdot reader #1,467,653) looks at a new kind of self-improving AI coding system. It rewrites its own code based on empirical evidence of what's helping — as described in a recent preprint on arXiv.

From Hutson's new article in IEEE Spectrum: A Darwin Gödel Machine (or DGM) starts with a coding agent that can read, write, and execute code, leveraging an LLM for the reading and writing. Then it applies an evolutionary algorithm to create many new agents. In each iteration, the DGM picks one agent from the population and instructs the LLM to create one change to improve the agent's coding ability [by creating "a new, interesting, version of the sampled agent"]. LLMs have something like intuition about what might help, because they're trained on lots of human code. What results is guided evolution, somewhere between random mutation and provably useful enhancement. The DGM then tests the new agent on a coding benchmark, scoring its ability to solve programming challenges...
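The sample-mutate-score loop described above can be sketched in a few lines of Python. This is a toy illustration only, not the researchers' actual implementation: `llm_propose_change` and `run_benchmark` are hypothetical stand-ins for the real LLM call and the SWE-bench/Polyglot harness, stubbed here with trivial logic so the loop runs.

```python
import random

def llm_propose_change(agent_code):
    # Hypothetical stand-in for an LLM call that rewrites the agent's
    # code; here each "mutation" just appends a marker so variants differ.
    return agent_code + "+"

def run_benchmark(agent_code):
    # Hypothetical stand-in for scoring an agent on a coding benchmark;
    # here longer code scores higher, as a toy fitness signal.
    return len(agent_code)

def darwin_godel_machine(initial_agent, iterations=80):
    population = [initial_agent]  # archive of every agent ever created
    scores = {initial_agent: run_benchmark(initial_agent)}
    for _ in range(iterations):
        parent = random.choice(population)  # sample one agent from the archive
        child = llm_propose_change(parent)  # LLM proposes a single modification
        scores[child] = run_benchmark(child)  # evaluate it empirically
        # Keep even low scorers: they may seed useful later variants.
        population.append(child)
    return max(population, key=scores.get)

best = darwin_godel_machine("agent")
```

The key design choice the article highlights is that low-scoring agents stay in the population rather than being culled, so the search can later revisit lineages that looked unpromising at first.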

The researchers ran a DGM for 80 iterations using a coding benchmark called SWE-bench, and ran one for 80 iterations using a benchmark called Polyglot. Agents' scores improved on SWE-bench from 20 percent to 50 percent, and on Polyglot from 14 percent to 31 percent. "We were actually really surprised that the coding agent could write such complicated code by itself," said Jenny Zhang, a computer scientist at the University of British Columbia and the paper's lead author. "It could edit multiple files, create new files, and create really complicated systems."

... One concern with both evolutionary search and self-improving systems — and especially their combination, as in DGM — is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the Internet or an operating system, and they logged and reviewed all code changes. They suggest that in the future, they could even reward AI for making itself more interpretable and aligned. (In the study, they found that agents falsely reported using certain tools, so they created a DGM that rewarded agents for not making things up, partially alleviating the problem. One agent, however, hacked the method that tracked whether it was making things up.)

As the article puts it, the agents' improvements compounded "as they improved themselves at improving themselves..."
Privacy

Facebook Is Asking To Use Meta AI On Photos In Your Camera Roll You Haven't Yet Shared (techcrunch.com)

Facebook is prompting users to opt into a feature that uploads photos from their camera roll -- even those not shared on the platform -- to Meta's servers for AI-driven suggestions like collages and stylized edits. While Meta claims the content is private and not used for ads, opting in allows the company to analyze facial features and retain personal data under its broad AI terms, raising privacy concerns. TechCrunch reports: The feature is being suggested to Facebook users when they're creating a new Story on the social networking app. Here, a screen pops up and asks if the user will opt into "cloud processing" to allow creative suggestions. As the pop-up message explains, by clicking "Allow," you'll let Facebook generate new ideas from your camera roll, like collages, recaps, AI restylings, or photo themes. To work, Facebook says it will upload media from your camera roll to its cloud (meaning its servers) on an "ongoing basis," based on information like time, location, or themes.

The message also notes that only you can see the suggestions, and the media isn't used for ad targeting. However, by tapping "Allow," you are agreeing to Meta's AI Terms. This allows your media and facial features to be analyzed by AI, it says. The company will additionally use the date and presence of people or objects in your photos to craft its creative ideas. [...] According to Meta's AI Terms around image processing, "once shared, you agree that Meta will analyze those images, including facial features, using AI. This processing allows us to offer innovative new features, including the ability to summarize image contents, modify images, and generate new content based on the image," the text states.

The same AI terms also give Meta's AIs the right to "retain and use" any personal information you've shared in order to personalize its AI outputs. The company notes that it can review your interactions with its AIs, including conversations, and those reviews may be conducted by humans. The terms don't define what Meta considers personal information, beyond saying it includes "information you submit as Prompts, Feedback, or other Content." We have to wonder whether the photos you've shared for "cloud processing" also count here.

AI

How the Music Industry is Building the Tech to Hunt Down AI-Generated Songs (theverge.com)

The goal isn't to stop generative music, but to make it traceable, reports the Verge — "to identify it early, tag it with metadata, and govern how it moves through the system...."

"Detection systems are being embedded across the entire music pipeline: in the tools used to train models, the platforms where songs are uploaded, the databases that license rights, and the algorithms that shape discovery." Platforms like YouTube and [French music streaming service] Deezer have developed internal systems to flag synthetic audio as it's uploaded and shape how it surfaces in search and recommendations. Other music companies — including Audible Magic, Pex, Rightsify, and SoundCloud — are expanding detection, moderation, and attribution features across everything from training datasets to distribution... Vermillio and Musical AI are developing systems to scan finished tracks for synthetic elements and automatically tag them in the metadata. Vermillio's TraceID framework goes deeper by breaking songs into stems — like vocal tone, melodic phrasing, and lyrical patterns — and flagging the specific AI-generated segments, allowing rights holders to detect mimicry at the stem level, even if a new track only borrows parts of an original. The company says its focus isn't takedowns, but proactive licensing and authenticated release... A rights holder or platform can run a finished track through [Vermillio's] TraceID to see if it contains protected elements — and if it does, have the system flag it for licensing before release.
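The detect-then-tag flow the article describes can be sketched roughly as follows. Everything here is illustrative: the stem names, the detector, and the metadata fields are hypothetical stand-ins, not Vermillio's actual TraceID API.

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    title: str
    stems: dict                              # stem name -> audio (stubbed as strings)
    metadata: dict = field(default_factory=dict)

def detect_synthetic_stems(track, detector):
    """Run a per-stem detector, then record flagged stems in the
    track's metadata so downstream licensing systems can act on them."""
    flagged = [name for name, audio in track.stems.items() if detector(audio)]
    track.metadata["ai_generated_stems"] = flagged
    track.metadata["needs_license_review"] = bool(flagged)
    return track

# Toy detector: pretend anything labeled "synthetic" was AI-generated.
toy_detector = lambda audio: "synthetic" in audio

track = Track("demo", {"vocals": "synthetic vocal clone", "drums": "live take"})
detect_synthetic_stems(track, toy_detector)
```

The point of tagging at the stem level, per the article, is that a track borrowing only an AI-cloned vocal line can still be routed to licensing review even when the rest of the recording is human-made.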

Some companies are going even further upstream to the training data itself. By analyzing what goes into a model, their aim is to estimate how much a generated track borrows from specific artists or songs. That kind of attribution could enable more precise licensing, with royalties based on creative influence instead of post-release disputes...

Deezer has developed internal tools to flag fully AI-generated tracks at upload and reduce their visibility in both algorithmic and editorial recommendations, especially when the content appears spammy. Chief Innovation Officer Aurélien Hérault says that, as of April, those tools were detecting roughly 20 percent of new uploads each day as fully AI-generated — more than double what they saw in January. Tracks identified by the system remain accessible on the platform but are not promoted... Spawning AI's DNTP (Do Not Train Protocol) is pushing detection even earlier — at the dataset level. The opt-out protocol lets artists and rights holders label their work as off-limits for model training.

Thanks to long-time Slashdot reader SonicSpike for sharing the article.
AI

What if Customers Started Saying No to AI? (msn.com)

An artist cancelled their Duolingo and Audible subscriptions to protest the companies' decisions to use more AI. "If enough people leave, hopefully they kind of rethink this," the artist tells the Washington Post.

And apparently, many more people feel the same way... In thousands of comments and posts about Audible and Duolingo that The Post reviewed across social media — including on Reddit, YouTube, Threads and TikTok — people threatened to cancel subscriptions, voiced concern for human translators and narrators, and said AI creates inferior experiences. "It destroys the purpose of humanity. We have so many amazing abilities to create art and music and just appreciate what's around us," said Kayla Ellsworth, a 21-year-old college student. "Some of the things that are the most important to us are being replaced by things that are not real...."

People in creative jobs are already on edge about the role AI is playing in their fields. On sites such as Etsy, clearly AI-generated art and other products are pushing out some original crafters who make a living on their creations. AI is being used to write romance novels and coloring books, design logos and make presentations... "I was promised tech would make everything easier so I could enjoy life," author Brittany Moone said. "Now it's leaving me all the dishes and the laundry so AI can make the art."

But will this turn into a consumer movement? The article also cites an assistant marketing professor at Washington State University, who found customers are now reacting negatively to the term "AI" in product descriptions — out of fear of losing their jobs (as well as concerns about quality and privacy). And he predicts this could change the way companies use AI.

"There will be some companies that are going to differentiate themselves by saying no to AI." And while it could be a niche market, "The people will be willing to pay more for things just made by humans."
Microsoft

Is 'Minecraft' a Better Way to Teach Programming in the Age of AI? (edsurge.com)

The education-news site EdSurge published "sponsored content" from Minecraft Education this month. "Students light up when they create something meaningful," the article begins. "Self-expression fuels learning, and creativity lies at the heart of the human experience."

But they also argue that "As AI rapidly reshapes software development, computer science education must move beyond syntax drills and algorithmic repetition." Students "must also learn to think systemically..." As AI automates many of the mechanical aspects of programming, the value of CS education is shifting, from writing perfect code to shaping systems, telling stories through logic and designing ethical, human-centered solutions... [I]t's critical to offer computer science experiences that foster invention, expression and design. This isn't just an education issue — it's a workforce one. Creativity now ranks among the top skills employers seek, alongside analytical thinking and AI literacy. As automation reshapes the job market, McKinsey estimates up to 375 million workers may need to change occupations by 2030. The takeaway? We need more adaptable, creative thinkers.

Creative coding, where programming becomes a medium for self-expression and innovation, offers a promising solution to this disconnect. By positioning code as a creative tool, educators can tap into students' intrinsic motivation while simultaneously building computational thinking skills. This approach helps students see themselves as creators, not just consumers, of technology. It aligns with digital literacy frameworks that emphasize critical evaluation, meaningful contribution and not just technical skills.

One example of creative coding comes from a curriculum that introduces computer science through game design and storytelling in Minecraft... Developed by Urban Arts in collaboration with Minecraft Education, the program offers middle school teachers professional development, ongoing coaching and a 72-session curriculum built around game-based instruction. Designed for grades 6-8, the project-based program is beginner-friendly; no prior programming experience is required for teachers or students. It blends storytelling, collaborative design and foundational programming skills with a focus on creativity and equity.... Students use Minecraft to build interactive narratives and simulations, developing computational thinking and creative design... Early results are promising: 93 percent of surveyed teachers found the Creative Coders program engaging and effective, noting gains in problem-solving, storytelling and coding, as well as growth in critical thinking, creativity and resilience.

As AI tools like GitHub Copilot become standard in development workflows, the definition of programming proficiency is evolving. Skills like prompt engineering, systems thinking and ethical oversight are rising in importance, precisely what creative coding develops... As AI continues to automate routine tasks, students must be able to guide systems, understand logic and collaborate with intelligent tools. Creative coding introduces these capabilities in ways that are accessible, culturally relevant and engaging for today's learners.

Some background from long-time Slashdot reader theodp: The Urban Arts and Microsoft Creative Coders program touted by EdSurge in its advertorial was funded by a $4 million Education Innovation and Research grant that was awarded to Urban Arts in 2023 by the U.S. Education Department "to create an engaging, game-based, middle school CS course using Minecraft tools" for 3,450 middle schoolers (6th-8th grades) in New York and California (Urban Arts credited Minecraft for helping craft the winning proposal)... New York City is a Minecraft Education believer — the Mayor's Office of Media and Entertainment recently kicked off summer with the inaugural NYC Video Game Festival, which included the annual citywide Minecraft Education Battle of the Boroughs Esports Competition in partnership with NYC Public Schools.
AI

AI Ethics Pioneer Calls Artificial General Intelligence 'Just Vibes and Snake Oil' (ft.com)

Margaret Mitchell, chief ethics scientist at Hugging Face and founder of Google's responsible AI team, has dismissed artificial general intelligence as "just vibes and snake oil." Mitchell, who was ousted from Google in 2021, has co-written a paper arguing that AGI should not serve as a guiding principle for the AI industry.

Mitchell contends that both "intelligence" and "general" lack clear definitions in AI contexts, creating what she calls an "illusion of consensus" that allows technologists to pursue any development path under the guise of progress toward AGI. "But as for now, it's just like vibes, vibes and snake oil, which can get you so far. The placebo effect works relatively well," she told FT in an interview. She warns that current AI advancement is creating a "massive rift" between those profiting from the technology and workers losing income as their creative output gets incorporated into AI training data.
AI

Midjourney Launches Its First AI Video Generation Model, V1

Midjourney has launched its first AI video generation model, V1, which turns images into short five-second videos with customizable animation settings. While it's currently only available via Discord and on the web, the launch positions the popular AI image generation startup in direct competition with OpenAI's Sora and Google's Veo. TechCrunch reports: While many companies are focused on developing controllable AI video models for use in commercial settings, Midjourney has always stood out for its distinctive AI image models that cater to creative types. The company says it has larger goals for its AI video models than generating B-roll for Hollywood films or commercials for the ad industry. In a blog post, Midjourney CEO David Holz says its AI video model is the company's next step towards its ultimate destination, creating AI models "capable of real-time open-world simulations." After AI video models, Midjourney says it plans to develop AI models for producing 3D renderings, as well as real-time AI models. [...]

To start, Midjourney says it will charge 8x more for a video generation than a typical image generation, meaning subscribers will run out of their monthly allotted generations significantly faster when creating videos than images. At launch, the cheapest way to try out V1 is by subscribing to Midjourney's $10-per-month Basic plan. Subscribers to Midjourney's $60-a-month Pro plan and $120-a-month Mega plan will have unlimited video generations in the company's slower, "Relax" mode. Over the next month, Midjourney says it will reassess its pricing for video models.

V1 comes with a few custom settings that allow users to control the video model's outputs. Users can select an automatic animation setting to make an image move randomly, or they can select a manual setting that allows users to describe, in text, a specific animation they want to add to their video. Users can also toggle the amount of camera and subject movement by selecting "low motion" or "high motion" in settings. While the videos generated with V1 are only five seconds long, users can choose to extend them by four seconds up to four times, meaning that V1 videos could get as long as 21 seconds.
The report notes that Midjourney was sued a week ago by two of Hollywood's most prominent film studios: Disney and Universal. "The suit alleges that images created by Midjourney's AI image models depict the studios' copyrighted characters, like Homer Simpson and Darth Vader."
Youtube

Google's Frighteningly Good Veo 3 AI Videos To Be Integrated With YouTube Shorts (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: YouTube CEO Neal Mohan has announced that the Google Veo 3 AI video generator will be integrated with YouTube Shorts later this summer. According to Mohan, YouTube Shorts has seen a rise in popularity even compared to YouTube as a whole. The streaming platform is now the most watched source of video in the world, but Shorts specifically have seen a massive 186 percent increase in viewership over the past year. Mohan says Shorts now average 200 billion daily views.

YouTube has already equipped creators with a few AI tools, including Dream Screen, which can produce AI video backgrounds with a text prompt. Veo 3 support will be a significant upgrade, though. At the Cannes festival, Mohan revealed that the streaming site will begin offering integration with Google's leading video model later this summer. "I believe these tools will open new creative lanes for everyone to explore," said Mohan. [...]

While you can add Veo 3 videos (or any video) to a YouTube Short right now, they don't fit with the format's portrait orientation focus. Veo 3 outputs 720p landscape videos, meaning you'd have black bars in a Short. Presumably, Google will create a custom version of the model for YouTube to spit out vertical video clips. Mohan didn't mention a pricing model, but Veo 3 probably won't be cheap for Shorts creators. Currently, you must pay for Google's $250 AI Ultra plan to access Veo 3, and that still limits you to 125 8-second videos per month.
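For a rough sense of what that plan works out to, here is a back-of-envelope calculation using only the figures quoted above; the per-second cost assumes a subscriber exhausts the full monthly cap:

```python
# Stated figures: $250/month AI Ultra plan, capped at 125 videos of
# 8 seconds each per month.
plan_cost_dollars = 250.0
videos_per_month = 125
seconds_per_video = 8

total_seconds = videos_per_month * seconds_per_video  # 1000 seconds/month
cost_per_second = plan_cost_dollars / total_seconds
print(cost_per_second)  # 0.25 dollars per second of generated video
```

In other words, at current pricing a single maxed-out 8-second clip costs about $2 in plan allotment, which supports the article's point that Veo 3 "probably won't be cheap" for Shorts creators.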

Movies

DC Studios Chief Says Movie Industry Is 'Dying,' Claims Disney 'Killed' Marvel With Output Mandates (rollingstone.com) 183

DC Studios co-head James Gunn argues that the movie industry is "dying" primarily because productions begin before screenplays are complete, while also delivering a sharp critique of his former employer Marvel Studios, which he claims Disney has "killed" through output mandates.

Gunn dismissed common explanations for Hollywood's struggles like declining theater attendance or improved home viewing experiences, telling Rolling Stone that "the number one reason is because people are making movies without a finished screenplay." The filmmaker has implemented a strict rule at DC Studios requiring finished scripts before production starts, recently scrapping a project because its screenplay wasn't ready.

The director, who previously helmed three "Guardians of the Galaxy" films for Marvel, said Disney's corporate directive to increase output destroyed the studio's creative process. "They were under a corporate mandate, yeah. That wasn't fair. It wasn't right. And it killed them," Gunn said, referring to Marvel's mandated production quotas for movies and television shows. By contrast, Gunn said DC Studios operates without numerical mandates. "We don't have the mandate to have a certain amount of movies and TV shows every year. So we're going to put out everything that we think is of the highest quality," he explained.
Facebook

All Videos On Facebook Will Soon Be Shared As Reels (techcrunch.com) 13

Facebook announced it will soon share all videos as reels by default, regardless of their length or orientation. "Up until now, users have been able to share both video posts and reels," notes TechCrunch. From the report: The company is also renaming the "Video" tab on its platform to the "Reels" tab. The update won't change what videos are recommended to you, Facebook says. [...] The idea behind the changes is to streamline the video-sharing format on the social network. It won't be the first time that a Meta-owned platform has done so, as Instagram began automatically converting new video posts under 15 minutes into reels back in 2022.

"Previously, you'd upload a video to Feed or post a reel using different creative flows and tools for each format," Facebook explained in a blog post. "Now, we're bringing these experiences together with a simplified publishing flow that gives you access to even more creative tools. We'll also give you control over your audience setting of who sees your reels." [...] The company says it will gradually roll out the changes globally over the coming months.

Youtube

Fake Bands and Artificial Songs are Taking Over YouTube and Spotify (elpais.com) 137

The Spanish newspaper El País found an entire fake album on YouTube titled Rumba Congo (1973). And they cite a study from the France-based International Confederation of Societies of Authors and Composers that estimated revenue from AI-generated music will rise to $4 billion in 2028, generating 20% of all streaming platforms' revenue: One of the major problems with this trend is the lack of transparency. María Teresa Llano, an associate professor at the University of Sussex who studies the intersection of creativity, art and AI, emphasizes this aspect: "There's no way for people to know if something is AI or not...." On Spotify Community — a forum for the service's users — a petition is circulating that calls for clear labeling of AI-generated music, as well as an option for users to block these types of songs from appearing on their feeds. In some of these forums, the rejection of AI-generated music is palpable.

Llano mentions the feelings of deception or betrayal that listeners may experience, but asserts that this is a personal matter. There will be those who feel this way, as well as those who admire what the technology is capable of... One of the keys to tackling the problem is to include a warning on AI-generated songs. YouTube states that content creators must "disclose to viewers when realistic content [...] is made with altered or synthetic media, including generative AI." Users will see this disclosure at a glance only in the app; on a computer, they have to scroll to the very end of the description to find the warning....

The professor from the University of Sussex explains one of the intangibles that justifies the labeling of content: "In the arts, we can establish a connection with the artist; we can learn about their life and what influenced them to better understand their career. With artificial intelligence, that connection no longer exists."

YouTube says they may label AI-generated content if they become aware of it, and may also remove it altogether, according to the article. But Spotify "hasn't shared any policy for labeling AI-powered content..." In an interview with Gustav Söderström, Spotify's co-president and chief product & technology officer, he emphasized that AI "increases people's creativity" because more people can be creative, thanks to the fact that "you don't need to have fine motor skills on the piano." He also made a distinction between music generated entirely with AI and music in which the technology is only partially used. But the only limit he mentioned for moderating artificial music was copyright infringement... something that has been a red line for any streaming service for many years now. And such a violation is very difficult to legally prove when artificial intelligence is involved.
AI

Ohio State University Says All Students Will Be Required To Train and 'Be Fluent' In AI (theguardian.com) 73

Ohio State University is launching a campus-wide AI fluency initiative requiring all students to integrate AI into their studies, aiming to make them proficient in both their major and the responsible use of AI. "Ohio State has an opportunity and responsibility to prepare students to not just keep up, but lead in this workforce of the future," said the university's president, Walter "Ted" Carter Jr. He added: "Artificial intelligence is transforming the way we live, work, teach and learn. In the not-so-distant future, every job, in every industry, is going to be [affected] in some way by AI." The Guardian reports: The university said its program will prioritize the incoming freshman class and onward, in order to make every Ohio State graduate "fluent in AI and how it can be responsibly applied to advance their field." [...] Steven Brown, an associate professor of philosophy at the university, told NBC News that after students turned in the first batch of AI-assisted papers he found "a lot of really creative ideas."

"My favorite one is still a paper on karma and the practice of returning shopping carts," Brown said. Brown said that banning AI from classwork is "shortsighted," and he encouraged his students to discuss ethics and philosophy with AI chatbots. "It would be a disaster for our students to have no idea how to effectively use one of the most powerful tools that humanity has ever created," Brown said. "AI is such a powerful tool for self-education that we must rapidly adapt our pedagogy or be left in the dust."

Separately, Ohio's AI in Education Coalition is working to develop a comprehensive strategy to ensure that the state's K-12 education system is prepared for and can help lead the AI revolution. "AI technology is here to stay," then lieutenant governor Jon Husted said last year while announcing an AI toolkit for Ohio's K-12 school districts that he added would ensure the state "is a leader in responding to the challenges and opportunities made possible by artificial intelligence."

Apple

Apple Researchers Challenge AI Reasoning Claims With Controlled Puzzle Tests 71

Apple researchers have found that state-of-the-art "reasoning" AI models like OpenAI's o3-mini, Gemini (with thinking mode enabled), Claude 3.7, and DeepSeek-R1 face complete performance collapse [PDF] beyond certain complexity thresholds when tested on controllable puzzle environments. The finding raises questions about the true reasoning capabilities of large language models.

The study, which examined models using Tower of Hanoi, checker jumping, river crossing, and blocks world puzzles rather than standard mathematical benchmarks, found three distinct performance regimes that contradict conventional assumptions about AI reasoning progress.

At low complexity levels, standard language models surprisingly outperformed their reasoning-enhanced counterparts while using fewer computational resources. At medium complexity, reasoning models demonstrated advantages, but both model types experienced complete accuracy collapse at high complexity levels. Most striking was the counterintuitive finding that reasoning models actually reduced their computational effort as problems became more difficult, despite operating well below their token generation limits.

Even when researchers provided explicit solution algorithms, requiring only step-by-step execution rather than creative problem-solving, the models' performance failed to improve significantly. The researchers noted fundamental inconsistencies in how models applied learned strategies across different problem scales, with some models successfully handling 100-move sequences in one puzzle type while failing after just five moves in simpler scenarios.
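Part of what makes puzzles like Tower of Hanoi attractive benchmarks is that a model's answer can be checked mechanically against the known optimal solution. A minimal sketch of that idea, assuming the model's output has already been parsed into (source peg, destination peg) pairs; this illustrates the general verification approach, not the paper's actual evaluation harness:

```python
def hanoi_moves(n, src=0, aux=1, dst=2):
    """Optimal Tower of Hanoi move sequence for n disks: 2**n - 1 moves."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi_moves(n - 1, aux, src, dst))

def is_valid_solution(n, moves):
    """Simulate moves on three pegs; reject illegal moves, check final state."""
    pegs = [list(range(n, 0, -1)), [], []]  # peg 0 starts with disks n..1
    for src, dst in moves:
        if not pegs[src]:
            return False  # moving from an empty peg
        if pegs[dst] and pegs[dst][-1] < pegs[src][-1]:
            return False  # larger disk placed on a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs[2] == list(range(n, 0, -1))  # all disks on the target peg

# The optimal sequence for 3 disks has 7 moves and passes verification.
moves = hanoi_moves(3)
print(len(moves), is_valid_solution(3, moves))  # 7 True
```

Because the minimum move count grows as 2^n - 1, raising the disk count gives the researchers a clean complexity dial, which is how the "collapse threshold" behavior described above can be measured precisely.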
AI

After 'AI-First' Promise, Duolingo CEO Admits 'I Did Not Expect the Blowback' (ft.com) 46

Last month, Duolingo CEO Luis von Ahn "shared on LinkedIn an email he had sent to all staff announcing Duolingo was going 'AI-first'," remembers the Financial Times.

"I did not expect the amount of blowback," he admits.... He attributes this anger to a general "anxiety" about technology replacing jobs. "I should have been more clear to the external world," he reflects on a video call from his office in Pittsburgh. "Every tech company is doing similar things [but] we were open about it...."

Since the furore, von Ahn has reassured customers that AI is not going to replace the company's workforce. There will be a "very small number of hourly contractors who are doing repetitive tasks that we no longer need", he says. "Many of these people are probably going to be offered contractor jobs for other stuff." Duolingo is still recruiting if it is satisfied the role cannot be automated. Graduates who make up half the people it hires every year "come with a different mindset" because they are using AI at university.

The thrust of the AI-first strategy, the 46-year-old says, is overhauling work processes... He wants staff to explore whether their tasks "can be entirely done by AI or with the help of AI. It's just a mind shift that people first try AI. It may be that AI doesn't actually solve the problem you're trying to solve... that's fine." The aim is to automate repetitive tasks to free up time for more creative or strategic work.

Examples where it is making a difference include technology and illustration. Engineers will spend less time writing code. "Some of it they'll need to but we want it to be mediated by AI," von Ahn says... Similarly, designers will have more of a supervisory role, with AI helping to create artwork that fits Duolingo's "very specific style". "You no longer do the details and are more of a creative director. For the vast majority of jobs, this is what's going to happen...." [S]ocietal implications for AI, such as the ethics of stealing creators' copyright, are "a real concern". "A lot of times you don't even know how [the large language model] was trained. We should be careful." When it comes to artwork, he says Duolingo is "ensuring that the entirety of the model is trained just with our own illustrations".

AI

Anthropic's AI is Writing Its Own Blog - Oh Wait. No It's Not (techcrunch.com) 2

"Everyone has a blog these days, even Claude," Anthropic wrote this week on a page titled "Claude Explains."

"Welcome to the small corner of the Anthropic universe where Claude is writing on every topic under the sun".

Not any more. After blog posts titled "Improve code maintainability with Claude" and "Rapidly develop web applications with Claude" — Anthropic suddenly removed the whole page sometime after Wednesday. But TechCrunch explains the whole thing was always less than it seemed, and "One might be easily misled into thinking that Claude is responsible for the blog's copy end-to-end." According to a spokesperson, the blog is overseen by Anthropic's "subject matter experts and editorial teams," who "enhance" Claude's drafts with "insights, practical examples, and [...] contextual knowledge."

"This isn't just vanilla Claude output — the editorial process requires human expertise and goes through iterations," the spokesperson said. "From a technical perspective, Claude Explains shows a collaborative approach where Claude [creates] educational content, and our team reviews, refines, and enhances it...." Anthropic says it sees Claude Explains as a "demonstration of how human expertise and AI capabilities can work together," starting with educational resources. "Claude Explains is an early example of how teams can use AI to augment their work and provide greater value to their users," the spokesperson said. "Rather than replacing human expertise, we're showing how AI can amplify what subject matter experts can accomplish [...] We plan to cover topics ranging from creative writing to data analysis to business strategy...."

The Anthropic spokesperson noted that the company is still hiring across marketing, content, and editorial, and "many other fields that involve writing," despite the company's dip into AI-powered blog drafting. Take that for what you will.
