Music

Google's AI Music Maker Is Coming To the Gemini App

Google is bringing its Lyria 3 AI music model into the Gemini app, allowing users to generate 30-second songs from text, images, or video prompts directly within the chatbot. The Verge reports: Lyria 3's text-to-music capabilities allow Gemini app users to make songs by describing specific genres, moods, or memories, such as asking for an "Afrobeat track for my mother about the great times we had growing up." The music generator can make instrumental audio and songs with lyrics composed automatically based on user prompts. Users can also upload photographs and video references, which Gemini then uses to generate a track with lyrics that fit the vibe.

"The goal of these tracks isn't to create a musical masterpiece, but rather to give you a fun, unique way to express yourself," Google said in its announcement blog. Gemini will add custom cover art generated by Nano Banana to songs created on the app, which aims to make them easier to share and download. Google is also bringing Lyria 3 to YouTube's Dream Track tool, which allows creators to make custom AI soundtracks for Shorts.

Dream Track and Lyria were initially demonstrated with the ability to mimic the style and voice of famous performers. Google says it's been "very mindful" of copyright in the development of Lyria 3 and that the tool "is designed for original expression, not for mimicking existing artists." When prompted for a specific artist, Gemini will make a track that "shares a similar style or mood" and uses filters to check outputs against existing content.
AI

Apple Reportedly Replacing Siri Interface With Actual Chatbot Experience For iOS 27

According to Bloomberg's Mark Gurman, Apple is planning a major Siri overhaul in iOS 27 and macOS 27 where the current assistant interface will be replaced with a deeply integrated, ChatGPT-style chatbot experience. "Users will be able to summon the new service the same way they open Siri now, by speaking the 'Siri' command or holding down the side button on their iPhone or iPad," says Gurman. "More significantly, Siri will be integrated into all of the company's core apps, including ones for mail, music, podcasts, TV, Xcode programming software and photos. That will allow users to do much more with just their voice." 9to5Mac reports: The unannounced Siri overhaul will reportedly be revealed at WWDC in June as the flagship feature for iOS 27 and macOS 27. Its release is expected in September when Apple typically ships major software updates. While Apple plans to release an improved version of Siri and Apple Intelligence this spring, that version will use the existing Siri interface. The big difference is that Google's Gemini models will power the intelligence. With the bigger update planned for iOS 27, the iOS 26 upgrade to Siri and Apple Intelligence sounds more like the first step in a long overdue modernization.

Gurman reports that the major Siri overhaul will "allow users to search the web for information, create content, generate images, summarize information and analyze uploaded files" while using "personal data to complete tasks, being able to more easily locate specific files, songs, calendar events and text messages." People are already familiar with conversational interactions with AI, and Bloomberg says the bigger update to Siri will support both text and voice. Siri already uses these input methods, but there's no real continuity between sessions.
AI

Adobe Acrobat Now Lets You Edit Files Using Prompts, Generate Podcast Summaries (techcrunch.com)

Adobe has added a suite of AI-powered features to Acrobat that enable users to edit documents through natural language prompts, generate podcast-style audio summaries of their files, and create presentations by pulling content from multiple documents stored in a single workspace.

The prompt-based editing supports 12 distinct actions: removing pages, text, comments, and images; finding and replacing words and phrases; and adding e-signatures and passwords. The presentation feature builds on Adobe Spaces, a collaborative file and notes collection the company launched last year. Users can point Acrobat's AI assistant at files in a Space and have it generate an editable pitch deck, then style it using Adobe Express themes and stock imagery.

Shared files in Spaces now include AI-generated summaries that cite specific locations in the source document. Users can also choose from preset AI assistant personas -- "analyst," "entertainer," or "instructor" -- or create custom assistants using their own prompts.
The Internet

FCC Rejects Calls For Cable-like Fees on Broadband Providers (thedesk.net)

The Federal Communications Commission has rejected calls from the National Association of Broadcasters and some industry trade groups to impose cable-style regulatory fees on streaming services, tech companies and pure broadband providers. From a report: In a Report and Order issued on Friday, the FCC reaffirmed that regulatory fees are calculated based on the number of full-time equivalent employees assigned to specific industries under the agency's jurisdiction. Broadcasters, satellite operators and other licensees are already assessed annual payments, which help fund the FCC's operational costs.

The NAB, in concert with other groups like Telesat, Iridium and the State Broadcasters Associations, pressed the FCC to expand the list of fee payers to include broadband providers and large technology firms. They argued that companies operating online platforms and broadband services rely on FCC resources and should contribute to the costs of regulation. "Big Tech should not be permitted to free ride on the FCC's oversight," NAB said in comments submitted earlier this year. The NAB argued that online platforms enjoy regulatory benefits without paying into the agency's budget, as broadcasters and satellite operators do.

AI

LLMs' 'Simulated Reasoning' Abilities Are a 'Brittle Mirage,' Researchers Find (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: In recent months, the AI industry has started moving toward so-called simulated reasoning models that use a "chain of thought" process to work through tricky problems in multiple logical steps. At the same time, recent research has cast doubt on whether those models have even a basic understanding of general logical concepts or an accurate grasp of their own "thought process." Similar research shows that these "reasoning" models can often produce incoherent, logically unsound answers when questions include irrelevant clauses or deviate even slightly from common templates found in their training data.

In a recent pre-print paper, researchers from the University of Arizona summarize this existing work as "suggest[ing] that LLMs are not principled reasoners but rather sophisticated simulators of reasoning-like text." To pull on that thread, the researchers created a carefully controlled LLM environment in an attempt to measure just how well chain-of-thought reasoning works when presented with "out of domain" logical problems that don't match the specific logical patterns found in their training data. The results suggest that the seemingly large performance leaps made by chain-of-thought models are "largely a brittle mirage" that "become[s] fragile and prone to failure even under moderate distribution shifts," the researchers write. "Rather than demonstrating a true understanding of text, CoT reasoning under task transformations appears to reflect a replication of patterns learned during training." [...]

Rather than showing the capability for generalized logical inference, these chain-of-thought models are "a sophisticated form of structured pattern matching" that "degrades significantly" when pushed even slightly outside of its training distribution, the researchers write. Further, the ability of these models to generate "fluent nonsense" creates "a false aura of dependability" that does not stand up to a careful audit. As such, the researchers warn heavily against "equating [chain-of-thought]-style output with human thinking" especially in "high-stakes domains like medicine, finance, or legal analysis." Current tests and benchmarks should prioritize tasks that fall outside of any training set to probe for these kinds of errors, while future models will need to move beyond "surface-level pattern recognition to exhibit deeper inferential competence," they write.
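The probe the researchers describe is easy to sketch. A minimal illustration of the idea in Python, with `ask_model()` as a stand-in for whatever chain-of-thought model is under test (the stub, the templates, and the "irrelevant clause" shift are all assumptions of this sketch, not the paper's actual benchmark):

```python
import random

def make_problem(shifted=False):
    # Toy two-step syllogism in the style of templated training data
    a, b, c = random.sample(["wug", "blick", "dax", "fep"], 3)
    premises = [f"All {a}s are {b}s.", f"All {b}s are {c}s."]
    if shifted:
        premises.append(f"Some {c}s are red.")  # irrelevant clause
        random.shuffle(premises)                # moderate distribution shift
    if random.random() < 0.5:
        return " ".join(premises) + f" Must every {a} be a {c}? Answer yes or no.", "yes"
    return " ".join(premises) + f" Must every {c} be a {a}? Answer yes or no.", "no"

def ask_model(prompt):
    return "yes"  # stand-in: wire up the chain-of-thought model being evaluated

def accuracy(n=200, shifted=False):
    hits = 0
    for _ in range(n):
        prompt, answer = make_problem(shifted)
        hits += ask_model(prompt).lower().startswith(answer)  # crude answer check
    return hits / n

# The paper's finding, in miniature, is the gap between these two numbers
print("in-domain:", accuracy(shifted=False), "shifted:", accuracy(shifted=True))
```

On the paper's account, a model that merely pattern-matches its training templates scores well on the first number and degrades on the second, even though the underlying logic is unchanged.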

Programming

How Do You Teach Computer Science in the Age of AI? (thestar.com.my)

"A computer science degree used to be a golden ticket to the promised land of jobs," a college senior tells the New York Times. But "That's no longer the case."

The article notes that in the last three years there's been a 65% drop in job listings from companies seeking workers with two years of experience or less (according to an analysis by technology research/education organization CompTIA), with tech companies "relying more on AI for some aspects of coding, eliminating some entry-level work."

So what do college professors teach when AI "is coming fastest and most forcefully to computer science"? Computer science programs at universities across the country are now scrambling to understand the implications of the technological transformation, grappling with what to keep teaching in the AI era. Ideas range from less emphasis on mastering programming languages to focusing on hybrid courses designed to inject computing into every profession, as educators ponder what the tech jobs of the future will look like in an AI economy... Some educators now believe the discipline could broaden to become more like a liberal arts degree, with a greater emphasis on critical thinking and communication skills.

The National Science Foundation is funding a program, Level Up AI, to bring together university and community college educators and researchers to move toward a shared vision of the essentials of AI education. The 18-month project, run by the Computing Research Association, a research and education nonprofit, in partnership with New Mexico State University, is organising conferences and roundtables and producing white papers to share resources and best practices. The NSF-backed initiative was created because of "a sense of urgency that we need a lot more computing students — and more people — who know about AI in the workforce," said Mary Lou Maher, a computer scientist and a director of the Computing Research Association.

The future of computer science education, Maher said, is likely to focus less on coding and more on computational thinking and AI literacy. Computational thinking involves breaking down problems into smaller tasks, developing step-by-step solutions and using data to reach evidence-based conclusions. AI literacy is an understanding — at varying depths for students at different levels — of how AI works, how to use it responsibly and how it is affecting society. Nurturing informed skepticism, she said, should be a goal.

The article raises other possibilities. Experts also suggest the possibility of "a burst of technology democratization as chatbot-style tools are used by people in fields from medicine to marketing to create their own programs, tailored for their industry, fed by industry-specific data sets." Stanford CS professor Alex Aiken even argues that "The growth in software engineering jobs may decline, but the total number of people involved in programming will increase."

Last year, Carnegie Mellon actually endorsed using AI for its introductory CS courses. The dean of the school's undergraduate programs believes that coursework "should include instruction in the traditional basics of computing and AI principles, followed by plenty of hands-on experience designing software using the new tools."
AI

After 'AI-First' Promise, Duolingo CEO Admits 'I Did Not Expect the Blowback' (ft.com)

Last month, Duolingo CEO Luis von Ahn "shared on LinkedIn an email he had sent to all staff announcing Duolingo was going 'AI-first'," remembers the Financial Times.

"I did not expect the amount of blowback," he admits.... He attributes this anger to a general "anxiety" about technology replacing jobs. "I should have been more clear to the external world," he reflects on a video call from his office in Pittsburgh. "Every tech company is doing similar things [but] we were open about it...."

Since the furore, von Ahn has reassured customers that AI is not going to replace the company's workforce. There will be a "very small number of hourly contractors who are doing repetitive tasks that we no longer need", he says. "Many of these people are probably going to be offered contractor jobs for other stuff." Duolingo is still recruiting if it is satisfied the role cannot be automated. Graduates who make up half the people it hires every year "come with a different mindset" because they are using AI at university.

The thrust of the AI-first strategy, the 46-year-old says, is overhauling work processes... He wants staff to explore whether their tasks "can be entirely done by AI or with the help of AI. It's just a mind shift that people first try AI. It may be that AI doesn't actually solve the problem you're trying to solve... that's fine." The aim is to automate repetitive tasks to free up time for more creative or strategic work.

Examples where it is making a difference include technology and illustration. Engineers will spend less time writing code. "Some of it they'll need to but we want it to be mediated by AI," von Ahn says... Similarly, designers will have more of a supervisory role, with AI helping to create artwork that fits Duolingo's "very specific style". "You no longer do the details and are more of a creative director. For the vast majority of jobs, this is what's going to happen...." [S]ocietal implications for AI, such as the ethics of stealing creators' copyright, are "a real concern". "A lot of times you don't even know how [the large language model] was trained. We should be careful." When it comes to artwork, he says Duolingo is "ensuring that the entirety of the model is trained just with our own illustrations".

AI

Figma's Big AI Update Takes On Adobe, WordPress, and Canva

At its Config 2025 event on Wednesday, Figma unveiled four new AI-powered tools -- Sites, Make, Buzz, and Draw, positioning itself as a full-stack design platform to rival Adobe, WordPress, and Canva. These tools enable users to build websites, generate code, create marketing content, and design vector graphics without leaving the Figma ecosystem. The Verge reports: Figma's first solution is Figma Sites, a website builder that integrates with Figma Design and allows creators to turn their projects into live, functional sites. Figma Sites provides presets for layouts, blocks, templates, and interactions that aim to make building websites less complex and time-consuming. Additional components like custom animations can also be added either using existing code or by prompting Site's AI tool to generate new interaction codes via text descriptions, such as "animate the text to fall into place like a feather." Figma Sites is rolling out in beta for users with full seat access to Figma products. Figma says that AI code generation will be available "in the coming weeks," and that a CMS that allows designers to manage site content will be launched "later this year."

Figma Make is Figma's take on AI coding tools like Google's Gemini Code Assist and Microsoft's GitHub Copilot. The prompt-to-code Figma Make tool is powered by Anthropic's Claude 3.7 model and can build working prototypes and apps based on descriptions or existing designs, such as creating a functional music player that displays a disc that spins when new tracks are played. Specific elements of working design, like text formatting and font style, can be manually edited or adjusted using additional AI prompts. Make is rolling out in beta for full seat Figma users. Figma says it's "exploring integrations with third parties and design systems" for Figma Make and may apply the tool to other apps within its design platform.

Figma Buzz is a marketing-focused design app that's rolling out in beta to all users, and makes it easier for teams to publish brand content, similar to Canva's product design platform. The tool allows Figma designers to create brand-approved templates, styles, and assets that can be used by marketers to quickly assemble emails, social media posts, advertising, and more. Figma Buzz includes generative AI tools for making and editing images using text prompts, and can source information from spreadsheets to bulk create thousands of image assets at once.

Lastly, the Figma Draw vector design app is like a simplified version of Adobe Illustrator that creatives can use to make custom visuals without leaving the Figma platform. It includes a variety of brushes, texture effects, and vector editing tools to create or adjust scalable images and logos for product design projects. Figma Draw is generally available now for full seat users as a toggle in Figma Design, with some features accessible in Sites, Slides, and Buzz. It's not quite as expansive as Adobe's wider Creative Cloud ecosystem, but Figma Draw places the two companies in direct competition for the first time since Adobe killed its own XD product design platform. It also brings some new options to the creative software industry after Adobe failed to acquire Figma for $20 billion due to pressure from competition regulators.
Books

Facebook Whistleblower Demands Overturn of Interview Ban - as Her Book Remains a Bestseller (msn.com)

The latest Facebook whistleblower, a former international lawyer, "cannot grant any of the nearly 100 interview requests she has received from journalists from print and broadcast news outlets in the United States and the United Kingdom," reports the Washington Post (citing "a person familiar with the matter").

That's because of an independent arbiter's ruling that "also bars her from talking with lawmakers in the U.S., London and the EU, according to a legal challenge she lodged against the ruling..." On March 12, an emergency arbiter — a dispute resolution option outside the court system — sided with Meta by ruling that the tech giant might reasonably convince a court that Wynn-Williams broke a non-disparagement agreement she entered as she was being fired by the company in 2017. The arbiter also said that while her publisher Macmillan appeared for the hearing on Meta's motion, Wynn-Williams did not despite having received due notice. The arbiter did not make any assessments about the book's veracity, but Meta spokespeople argued that the ruling meant that "Sarah Wynn Williams' false and defamatory book should never have been published."

Wynn-Williams this week filed an emergency motion to overturn the ruling, arguing that she didn't receive proper notice of the arbitration proceedings to the email accounts Meta knows she uses, according to a copy of the motion seen by The Post. Wynn-Williams further alleged that her severance agreement including the non-disparagement provisions are unenforceable, arguing that it violates laws that protect whistleblowers from retaliation, among other points. In a statement, legal representatives for Wynn-Williams said they were "confident in the legal arguments and look forward to a swift restoration of Ms. Wynn-Williams' right to tell her story."

That book — Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism — is currently #1 on the New York Times best-seller list (and #3 on Amazon.com's best-selling books list). And the incident prompted an article by Wired editor at large Steven Levy titled "Meta Tries to Bury a Tell-All Book." ("Please pause for a moment to savor the irony," Levy writes. "Meta, the company that recently announced an end to fact-checking in posts seen by potentially millions of people, is griping that an author didn't fact-check with them?")

And this led to a heated exchange on X.com between the Wired editor at large and Meta's Chief Technology Officer Andrew Bosworth:

Steven Levy: Meta probably realizes that all-out war on this book will only help its sales. But they are furious that an insider--who signed an NDA!--is going White Lotus on them, showing what it's like on the inside.

Meta CTO Bosworth: Except that it is full of lies, Steven. Shame on you.

Steven Levy: Boz, it would be helpful if Meta called out what it believes are the factual inaccuracies, especially in cases where it calls the book "defamatory."

Meta CTO Bosworth: Sorry you don't get to make up a bunch of stories and then put the burden on the person you lied about. Read the accounts from former employees who have gone through several of the anecdotes and said flatly they did not happen as written and then extrapolate.

Steven Levy: I would love for Sheryl, Mark and Joel to speak out on those anecdotes and give their sides of the story. They are the key subjects of those stories and their direct denial of specific incidents would matter.

Meta CTO Bosworth: Did you read what I wrote? I'm sure you would love to have more fuel for your "nobody wants you to read this" headline, but that's a total bullshit expectation. It isn't unreasonable to expect a journalist like you to do basic diligence. I'm sure you have our comms email!

Steven Levy: Believe me I was in touch with your comms people...
AI

Protecting 'Funko' Brand, AI-Powered 'BrandShield' Knocks Itch.io Offline After Questionable Registrar Communications (polygon.com)

Launched in 2013, itch.io lets users host and sell indie video games online — now offering more than 200,000 — as well as other digital content like music and comics. But then someone uploaded a page based on a major videogame title, according to Game Rant. And somehow this provoked a series of overreactions and missteps that eventually knocked all of itch.io offline for several hours...

The page was about the first release from game developer 10:10 — their game Funko Fusion, which features characters in the style of Funko's long-running pop-culture bobbleheads. As a major brand, Funko monitors the web with a "brand protection" partner (named BrandShield). Interestingly, BrandShield's SaaS product "leverages AI-driven online brand protection," according to their site, to "detect and remove" things like brand impersonations "with over 98% success. Our advanced takedown capabilities save you time..." (Although BrandShield's CEO told the Verge that following AI reports "our team of Cybersecurity Threat hunters and IP lawyers decide on what actions should be taken.")

This means that after automatically spotting the itch.io page with its web-crawling software, it was BrandShield's "team of Cybersecurity Threat hunters and IP lawyers" who decided to take action (for that specific page). But itch.io founder Leaf Corcoran commented on social media: From what I can tell, some person made a fan page for an existing Funko Pop video game (Funko Fusion), with links to the official site and screenshots of the game. The BrandShield software is probably instructed to eradicate all "unauthorized" use of their trademark, so they sent reports independently to our host and registrar claiming there was "fraud and phishing" going on, likely to cause escalation instead of doing the expected DMCA/cease-and-desist. Because of this, I honestly think they're the malicious actor in all of this.
Corcoran says he replied to both his registrar (iwantmyname) and to his site's host, telling them he'd removed the offending page (and disabled its uploader's account). This satisfied his host, Corcoran writes — but the registrar's owner later told him they'd never received his reply.

"And that's why they took the domain down."

In an interview with Polygon, Corcoran points out that the web page in question had already been dealt with five days before his registrar offlined his entire site. "No communication after that.... No 'We haven't heard from you, we're about to shut your domain down' or anything like that."

Defending themselves over the incident, BrandShield posted on X.com that they'd identified an "infringement" (also calling it an "abuse"), and that they'd requested "a takedown of the URL in question — not of the entire itch.io domain." They don't say this, but it seems like their concern might've been that the page looked official enough to impersonate Funko Fusion. But X.com readers added this context. "Entire domains do not go down on the basis of a copyright takedown request of an individual URL. This is the direct result of a fraudulent claim of malicious activity."

And Corcoran also posted an angry summation on X.com: I kid you not, @itchio has been taken down by @OriginalFunko because they use some trash "AI Powered" Brand Protection Software called @BrandShieldltd that created some bogus Phishing report to our registrar, @iwantmyname, who ignored our response and just disabled the domain.
The next day Funko's official account on X.com also issued their own statement that they "hold a deep respect and appreciation for indie games, indie gamers, and indie developers." (Though "Added Context" from X.com readers notes Funko's statement still claimed a "takedown request" was issued, rather than what Corcoran says was a false "fraud and phishing" report.)

Funko.com also posted that they'd "reached out" to itch.io "to engage with them on this issue." But this just led to another angry post from Corcoran. "This is not a joke, Funko just called my mom." Corcoran then posted what looks like a screenshot of a text message his mother sent him. Though she doesn't say which company was involved, his mother's text says she "Got a strange call from a company about accusatory statements on your social media account. Call me..."

Thanks to ewhac (Slashdot reader #5,844) for sharing the news.
AI

Anthropic Says Claude AI Can Match Your Unique Writing Style (theverge.com)

Anthropic is adding a new feature to its Claude AI assistant that will give users more control over how the chatbot responds to different writing tasks. From a report: The new custom styles are available to all Claude AI users, enabling anyone to train it to match their own communication style or select from preset options to quickly adjust the tone and level of detail it provides.

This update aims to personalize the chatbot's replies and make them feel more natural or appropriate for specific applications, such as writing detailed technical documents or professional emails. Three preset styles are available: Formal for "clear and polished" text, Concise for shorter and more direct responses, and Explanatory for educational replies that need to include additional detail. If these don't suit your requirements, Claude can also generate custom styles that are trained to mimic other writing mannerisms. Anthropic says users need to upload "sample content that reflects your preferred way of communicating" to the chatbot, and then instruct it on how to match the writing style.
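Custom styles live in Claude's apps rather than the API, but the basic mechanic, conditioning the model on a sample of your own prose, can be approximated with Anthropic's public Python SDK. A rough sketch; the model ID, file name, and system prompt here are assumptions of mine, not how Anthropic implements the feature:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

writing_sample = open("my_emails.txt").read()  # hypothetical file of your own prose

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # assumed model ID; any current Claude model works
    max_tokens=500,
    system=(
        "Study the writing sample below and mimic its tone, sentence length, "
        "and word choice in every reply.\n\n--- SAMPLE ---\n" + writing_sample
    ),
    messages=[{"role": "user", "content": "Draft a short status update for my team."}],
)
print(response.content[0].text)
```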

Businesses

US Consumer Watchdog Cautions Businesses on Surveillance of Workers (msn.com)

The top U.S. consumer finance watchdog warned businesses about potential legal problems they could face from using new technology such as artificial intelligence or algorithmic scores to snoop on and evaluate their employees. From a report: The Consumer Financial Protection Bureau on Thursday said "invasive" new tools to monitor workers are governed by a law designed to ensure fairness in credit reporting, giving employees specific rights. Employees have the right to consent to the collection of personal information, to receive detailed information and to dispute inaccurate information, the CFPB said in the newly released guidance.

"Workers shouldn't be subject to unchecked surveillance or have their careers determined by opaque third-party reports without basic protections," CFPB Director Rohit Chopra said. More companies are leaning on AI and other powerful tools throughout the employment process, using software that can, for example, interview candidates and surveillance tools that can look for unsafe behavior. Americans have expressed concerns about Big Brother-style surveillance while they are on the job.

AI

Hobbyists Discover How To Insert Custom Fonts Into AI-Generated Images (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Last week, a hobbyist experimenting with the new Flux AI image synthesis model discovered that it's unexpectedly good at rendering custom-trained reproductions of typefaces. While far more efficient methods of displaying computer fonts have existed for decades, the new technique is useful for AI image hobbyists because Flux is capable of rendering depictions of accurate text, and users can now directly insert words rendered in custom fonts into AI image generations. [...] Since Flux is an open model available for download and fine-tuning, this past month has been the first time training a typeface LoRA might make sense. That's exactly what an AI enthusiast named Vadim Fedenko (who did not respond to a request for an interview by press time) discovered recently. "I'm really impressed by how this turned out," Fedenko wrote in a Reddit post. "Flux picks up how letters look in a particular style/font, making it possible to train Loras with specific Fonts, Typefaces, etc. Going to train more of those soon."

For his first experiment, Fedenko chose a bubbly "Y2K" style font reminiscent of those popular in the late 1990s and early 2000s, publishing the resulting model on the Civitai platform on August 20. Two days later, a Civitai user named "AggravatingScree7189" posted a second typeface LoRA that reproduces a font similar to one found in the Cyberpunk 2077 video game. "Text was so bad before it never occurred to me that you could do this," wrote a Reddit user named eggs-benedryl when reacting to Fedenko's post on the Y2K font. Another Redditor wrote, "I didn't know the Y2K journal was fake until I zoomed it." It's true that using a deeply trained image synthesis neural network to render a plain old font on a simple background is probably overkill. You probably wouldn't want to use this method to replace Adobe Illustrator while designing a document. "This looks good but it's kinda funny how we're reinventing the idea of fonts as 300MB LoRAs," wrote one Reddit commenter on a thread about the Cyberpunk 2077 font.
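The article doesn't include training details, but using one of these typeface LoRAs takes only a few lines with Hugging Face's diffusers library. A sketch, assuming a CUDA GPU with enough memory; the LoRA repo name is hypothetical:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Hypothetical repo id; substitute the actual typeface LoRA from Civitai or the Hub
pipe.load_lora_weights("someuser/y2k-font-lora")

image = pipe(
    'a notebook cover with the words "SUMMER MIX 2000" in bubbly y2k chrome lettering',
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("y2k_font.png")
```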

Facebook

Meta, Activision Sued By Parents of Children Killed in Last Year's School Shooting (msn.com)

Exactly one year after the fatal shooting of 19 elementary school students in Texas, their parents filed a lawsuit against the publisher of the videogame Call of Duty, against Meta, and against the manufacturer of the AR-15-style weapon used in the attack, Daniel Defense.

The Washington Post says the lawsuits "may be the first of their kind to connect aggressive firearms marketing tactics on social media and gaming platforms to the actions of a mass shooter." The complaints contend the three companies are responsible for "grooming" a generation of "socially vulnerable" young men radicalized to live out violent video game fantasies in the real world with easily accessible weapons of war...

Several state legislatures, including California and Hawaii, passed consumer safety laws specific to the sale and marketing of firearms that would open the industry to more civil liability. Texas is not one of them. But that is just one prong of the three-pronged legal push by Uvalde families. The lawsuit against Activision and Meta, which is being filed in California, accuses the tech companies of knowingly promoting dangerous weapons to millions of vulnerable young people, particularly young men who are "insecure about their masculinity, often bullied, eager to show strength and assert dominance."

"To put a finer point on it: Defendants are chewing up alienated teenage boys and spitting out mass shooters," the lawsuit states...

The lawsuit alleges that Meta, which owns Instagram, easily allows gun manufacturers like Daniel Defense to circumvent its ban on paid firearm advertisements to reach scores of young people. Under Meta's rules, gunmakers are not allowed to buy advertisements promoting the sale of or use of weapons, ammunition or explosives. But gunmakers are free to post promotional material about weapons from their own account pages on Facebook and Instagram — a freedom the lawsuit alleges Daniel Defense often exploited.

According to the complaint, the Robb school shooter downloaded a version of "Call of Duty: Modern Warfare" in November 2021 that featured on the opening title page the DDM4V7 model rifle [shooter Salvador] Ramos would later purchase. Drawing from the shooter's social media accounts, the families' attorney Josh Koskoff argued he was being bombarded with explicit marketing and combat imagery from the company on Instagram... The complaint cites Meta's practice, first reported by The Washington Post in 2022, of giving gun sellers wide latitude to knowingly break its rules against selling firearms on its websites. The company has allowed buyers and sellers to violate the rule 10 times before they are kicked off, The Post reported.

The article adds that the lawsuit against Meta "echoes some of the complaints by dozens of state attorneys general and school districts that have accused the tech giant of using manipulative practices to hook... while exposing them to harmful content." It also includes a few excerpts from the text of the lawsuit.
  • It argues that both Meta and Activision "knowingly exposed the Shooter to the weapon, conditioned him to see it as the solution to his problems, and trained him to use it."
  • The lawsuit also compares their practices to another ad campaign accused of marketing harmful products to children: cigarettes. "Over the last 15 years, two of America's largest technology companies — Defendants Activision and Meta — have partnered with the firearms industry in a scheme that makes the Joe Camel campaign look laughably harmless, even quaint."

Meta and Daniel Defense didn't respond to the reporters' requests for comment. But the article does quote a statement from Activision expressing sympathy for the communities and families impacted by the "horrendous and heartbreaking" shooting.

Activision also added that "Millions of people around the world enjoy video games without turning to horrific acts."


AI

'AI Prompt Engineering Is Dead'

The hype around AI language models has companies scrambling to hire prompt engineers to improve their AI queries and create new products. But new research hints that the AI may be better at prompt engineering than humans, indicating many of these jobs could be short-lived as the technology evolves and automates the role. IEEE Spectrum: Rick Battle and Teja Gollapudi of VMware decided to systematically test [PDF] how different prompt engineering strategies impact an LLM's ability to solve grade school math questions. They tested three different open source language models with 60 different prompt combinations each. What they found was a surprising lack of consistency. Even chain-of-thought prompting sometimes helped and other times hurt performance. "The only real trend may be no trend," they write. "What's best for any given model, dataset, and prompting strategy is likely to be specific to the particular combination at hand."

There is an alternative to the trial-and-error style prompt engineering that yielded such inconsistent results: Ask the language model to devise its own optimal prompt. Recently, new tools have been developed to automate this process. Given a few examples and a quantitative success metric, these tools will iteratively find the optimal phrase to feed into the LLM. Battle and his collaborators found that in almost every case, this automatically generated prompt did better than the best prompt found through trial-and-error. And, the process was much faster, a couple of hours rather than several days of searching.
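Stripped of the engineering, that automated process is a scored search over candidate prompts. A minimal hill-climbing sketch of the idea, where `llm()` is a placeholder for any completion API; the real tools are considerably more sophisticated:

```python
def llm(prompt):
    """Placeholder: wire up a real chat-completion API here."""
    raise NotImplementedError

def score(system_prompt, examples):
    """Fraction of grade-school math questions answered correctly."""
    hits = 0
    for question, answer in examples:
        reply = llm(system_prompt + "\n\n" + question)
        hits += answer in reply  # crude answer check
    return hits / len(examples)

def optimize(seed_prompt, examples, rounds=10):
    best, best_score = seed_prompt, score(seed_prompt, examples)
    for _ in range(rounds):
        # Ask the model itself to propose a variant of the current best prompt
        candidate = llm(
            "Rewrite this instruction so a language model solves math word "
            f"problems more accurately. Reply with the instruction only:\n{best}"
        )
        s = score(candidate, examples)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score
```

Each round costs one generation plus one pass over the example set, which is why this kind of search finishes in hours rather than the days a human spends on trial and error.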
AI

AI-Assisted Bug Reports Are Seriously Annoying For Developers (theregister.com)

Generative AI models like Google Bard and GitHub Copilot are increasingly being used in various industries, but users often overlook their limitations, leading to serious errors and inefficiencies. Daniel Stenberg of curl and libcurl highlights a specific problem with AI-generated security reports: when reports are made to look better and to appear to have a point, it takes a longer time to research and eventually discard it. "Every security report has to have a human spend time to look at it and assess what it means," adds Stenberg. "The better the crap, the longer time and the more energy we have to spend on the report until we close it." The Register reports: The curl project offers a bug bounty to security researchers who find and report legitimate vulnerabilities. According to Stenberg, the program has paid out over $70,000 in rewards to date. Of 415 vulnerability reports received, 64 have been confirmed as security flaws and 77 have been deemed informative -- bugs without obvious security implications. So about 66 percent of the reports have been invalid. The issue for Stenberg is that these reports still need to be investigated and that takes developer time. And while those submitting bug reports have begun using AI tools to accelerate the process of finding supposed bugs and writing up reports, those reviewing bug reports still rely on human review. The result of this asymmetry is more plausible-sounding reports, because chatbot models can produce detailed, readable text without regard to accuracy.

As Stenberg puts it, AI produces better crap. "A crap report does not help the project at all. It instead takes away developer time and energy from something productive. Partly because security work is considered one of the most important areas so it tends to trump almost everything else." As examples, he cites two reports submitted to HackerOne, a vulnerability reporting community. One claimed to describe Curl CVE-2023-38545 prior to actual disclosure. But Stenberg had to post to the forum to make clear that the bug report was bogus. He said that the report, produced with the help of Google Bard, "reeks of typical AI style hallucinations: it mixes and matches facts and details from old security issues, creating and making up something new that has no connection with reality." [...]

Stenberg readily acknowledges that AI assistance can be genuinely helpful. But he argues that having a human in the loop makes the use and outcome of AI tools much better. Even so, he expects the ease and utility of these tools, coupled with the financial incentive of bug bounties, will lead to more shoddy LLM-generated security reports, to the detriment of those on the receiving end.

AI

How Artists are Sabotaging AI to Take Revenge on Image Generators (theconversation.com)

Some text-to-image generators "have been trained by indiscriminately scraping online images," reports the Conversation, "many of which may be under copyright.

"Researchers who want to empower individual artists have recently created a tool named 'Nightshade' to fight back against unauthorised image scraping." The tool works by subtly altering an image's pixels in a way that wreaks havoc to computer vision but leaves the image unaltered to a human's eyes.... This can result in the algorithm mistakenly learning to classify an image as something a human would visually know to be untrue. As a result, the generator can start returning unpredictable and unintended results... [A] balloon might become an egg. A request for an image in the style of Monet might instead return an image in the style of Picasso... The models could also introduce other odd and illogical features to images — think six-legged dogs or deformed couches. The higher the number of "poisoned" images in the training data, the greater the disruption.

Because of how generative AI works, the damage from "poisoned" images also affects related prompt keywords. For example, if a "poisoned" image of a Ferrari is used in training data, prompt results for other car brands and for other related terms, such as vehicle and automobile, can also be affected. Nightshade's developer hopes the tool will make big tech companies more respectful of copyright, but it's also possible users could abuse the tool and intentionally upload "poisoned" images to generators to try and disrupt their services... [Technological fixes] include the use of "ensemble modeling" where different models are trained on many different subsets of data and compared to locate specific outliers. This approach can be used not only for training but also to detect and discard suspected "poisoned" images. Audits are another option. One audit approach involves developing a "test battery" — a small, highly curated, and well-labelled dataset — using "hold-out" data that are never used for training. This dataset can then be used to examine the model's accuracy.
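Nightshade's actual recipe isn't detailed in the article, but the pixel-level tampering it describes belongs to the well-studied family of adversarial perturbations. As a generic illustration of that family (not Nightshade itself), here is a targeted-gradient sketch in PyTorch, with an off-the-shelf classifier standing in for a generator's vision model:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

def poison(image, target_class, eps=4 / 255, alpha=1 / 255, steps=40):
    """Nudge pixels (within +/-eps) so the model leans toward `target_class`
    while the change stays imperceptible. image: [1, 3, 224, 224] in [0, 1]."""
    delta = torch.zeros_like(image, requires_grad=True)
    target = torch.tensor([target_class])
    for _ in range(steps):
        loss = F.cross_entropy(model(image + delta), target)
        loss.backward()
        # Step toward the target label, then project back into the epsilon ball
        delta.data = (delta.data - alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

# Usage: poisoned = poison(balloon_tensor, target_class=some_wrong_label)
```

Scaled up across many images and keywords, this is the mechanism by which mislabeled-looking training data can bend what a generator learns, which is also why the ensemble and audit defenses below focus on spotting outliers.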

The article adds that the most obvious fix "is paying greater attention to where input data are coming from and how they can be used.

"Doing so would result in less indiscriminate data harvesting. This approach does challenge a common belief among computer scientists: that data found online can be used for any purpose they see fit."
Security

Intelligence Researchers To Study Computer Code for Clues To Hackers' Identities (wsj.com)

Government researchers in the U.S. are studying methods to help identify hackers based on the code they use to carry out cyberattacks. From a report: The Intelligence Advanced Research Projects Activity, the lead federal research agency for the intelligence community, plans to develop technologies that could speed up investigations for identifying perpetrators of cyberattacks. "The number of attacks is increasing far more than the number of forensic experts that are available to go after these attacks," said Kristopher Reese, who is managing the research program at IARPA and holds a doctorate in computer science and engineering. The lack of forensic resources means hackers who target small organizations or companies that don't fall under critical infrastructure sectors often escape identification, he said.

Tools that are developed as part of the planned 30-month research project won't replace human analysts, who are crucial for identifying social and political dynamics that might explain why a particular hacking group targeted a victim, Reese said. But using artificial intelligence to analyze code used in cyberattacks will make investigations more efficient, he said. IARPA is accepting pitches from researchers until next month and plans to begin research next summer. [...] There hasn't been enough research into how analyzing code can reveal a hacker's identity, Reese said. Behavioral traits evident in code can reveal specific countries where hackers might be from or even the university where they were trained, he said. Some companies also have style guides outlining how employees should program, which could leave traces that indicate a person worked there, he said.
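The "behavioral traits" Reese describes are the raw material of code stylometry. As a toy illustration, a sketch that extracts a few habit-revealing features from a piece of source code; the feature set is my own choice for the example, and IARPA's program is not described at this level:

```python
import re

def style_features(source: str) -> dict:
    """A few coding habits that tend to differ between programmers."""
    code = [l for l in source.splitlines() if l.strip()]
    indents = [len(l) - len(l.lstrip()) for l in code if l.startswith((" ", "\t"))]
    names = re.findall(r"\b[a-zA-Z_][a-zA-Z0-9_]*\b", source)
    return {
        "comment_ratio": sum(l.lstrip().startswith("#") for l in code) / max(len(code), 1),
        "avg_indent": sum(indents) / max(len(indents), 1),
        "snake_case_ratio": sum("_" in n for n in names) / max(len(names), 1),
        "camel_case_ratio": sum(bool(re.search(r"[a-z][A-Z]", n)) for n in names)
        / max(len(names), 1),
        "avg_line_length": sum(map(len, code)) / max(len(code), 1),
    }
```

Feature vectors like these, gathered from known samples, are what a classifier could compare against code recovered from an attack.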

AI

Which AI Model Provides the 'Best' Answers? (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: For those looking for a more rigorous way of comparing various models, the folks over at the Large Model Systems Organization (LMSys) have set up Chatbot Arena, a platform for generating Elo-style rankings for LLMs based on a crowdsourced blind-testing website. Chatbot Arena users can enter any prompt they can think of into the site's form to see side-by-side responses from two randomly selected models. The identity of each model is initially hidden, and results are voided if the model reveals its identity in the response itself. The user then gets to pick which model provided what they judge to be the "better" result, with additional options for a "tie" or "both are bad." Only after providing a pairwise ranking does the user get to see which models they were judging, though a separate "side-by-side" section of the site lets users pick two specific models to compare (without the ability to contribute a vote on the result).

Since its public launch back in May, LMSys says it has gathered over 130,000 blind pairwise ratings across 45 different models (as of early December). Those numbers seem poised to increase quickly after a recent positive review from OpenAI's Andrej Karpathy that has already led to what LMSys describes as "a super stress test" for its servers. Chatbot Arena's thousands of pairwise ratings are crunched through a Bradley-Terry model, which uses random sampling to generate an Elo-style rating estimating which model is most likely to win in direct competition against any other. Interested parties can also dig into the raw data of tens of thousands of human prompt/response ratings for themselves or examine more detailed statistics, such as direct pairwise win rates between models and confidence interval ranges for those Elo estimates.
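Bradley-Terry assigns each model a latent strength p_i such that model i beats model j with probability p_i / (p_i + p_j). The article mentions a sampling-based fit; a minimal alternative sketch in Python uses the classic minorization-maximization update (the Elo-style scaling and the 1000-point anchor are arbitrary choices of mine):

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """wins[i][j] = number of times model i beat model j."""
    wins = np.asarray(wins, dtype=float)
    n = len(wins)
    games = wins + wins.T                 # total comparisons between each pair
    p = np.ones(n)                        # latent strength of each model
    for _ in range(iters):
        for i in range(n):
            denom = sum(games[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            p[i] = wins[i].sum() / denom
        p /= p.sum()                      # fix the scale; only ratios matter
    return 400 * np.log10(p / p.mean()) + 1000   # map onto an Elo-style scale

# Three models, toy crowdsourced pairwise outcomes
print(bradley_terry([[0, 8, 9], [2, 0, 6], [1, 4, 0]]))
```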

Chatbot Arena's latest public leaderboard update shows a few proprietary models easily beating out a wide range of open-source alternatives. OpenAI's ChatGPT-4 Turbo leads the pack by a wide margin, with only an older GPT-4 model ("0314," which was discontinued in June) coming anywhere close on the ratings scale. But even months-old, defunct versions of GPT-3.5 Turbo outrank the highest-rated open-source models available in Chatbot Arena's testbed. Anthropic's proprietary Claude models also feature highly in Chatbot Arena's top rankings. Oddly enough, though, the site's blind human testing tends to rank the older Claude-1 slightly higher than the subsequent releases of Claude-2.0 and Claude-2.1. Among the tested non-proprietary models, the Llama-based Tulu 2 and 01.ai's Yi get rankings that are comparable to some older GPT-3.5 implementations. Past that, there's a slow but steady decline until you get to models like Dolly and StableLM at the bottom of the pack (amid older versions of many models that have more recent, higher-ranking updates on Chatbot Arena's charts).

AI

'ChatGPT Detector' Catches AI-Generated Papers With Unprecedented Accuracy (nature.com)

A machine-learning tool can easily spot when chemistry papers are written using the chatbot ChatGPT, according to a study published on 6 November in Cell Reports Physical Science. From a report: The specialized classifier, which outperformed two existing artificial intelligence (AI) detectors, could help academic publishers to identify papers created by AI text generators. "Most of the field of text analysis wants a really general detector that will work on anything," says co-author Heather Desaire, a chemist at the University of Kansas in Lawrence. But by making a tool that focuses on a particular type of paper, "we were really going after accuracy."

Desaire and her colleagues first described their ChatGPT detector in June, when they applied it to Perspective articles from the journal Science. Using machine learning, the detector examines 20 features of writing style, including variation in sentence lengths, and the frequency of certain words and punctuation marks, to determine whether an academic scientist or ChatGPT wrote a piece of text. The findings show that "you could use a small set of features to get a high level of accuracy," Desaire says. The findings suggest that efforts to develop AI detectors could be boosted by tailoring software to specific types of writing, Desaire says. "If you can build something quickly and easily, then it's not that hard to build something for different domains."
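In outline, the classifier is hand-crafted style features fed to a standard supervised model. A toy scikit-learn sketch of that pipeline; the five features and the two-document "dataset" here are stand-ins of mine, not the paper's actual twenty features:

```python
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

def style_features(text):
    sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sents]
    n_words = max(len(text.split()), 1)
    return [
        np.mean(lengths),                  # average sentence length
        np.std(lengths),                   # sentence-length variation
        text.count(",") / n_words,         # comma frequency
        text.count("(") / n_words,         # parenthetical frequency
        text.lower().count("however") / n_words,
    ]

# Placeholder training data: 1 = written by a scientist, 0 = ChatGPT
docs = ["...human-written paragraphs would go here...",
        "...ChatGPT-written paragraphs would go here..."]
labels = [1, 0]
clf = LogisticRegression().fit([style_features(d) for d in docs], labels)
print(clf.predict([style_features("Some new paragraph to check.")]))
```

Narrowing the domain to one genre of writing is what lets such a small feature set carry so much signal, which is the paper's central point.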
