AI

Will 'AI-Assisted' Journalists Bring Errors and Retractions? (msn.com) 22

Meet the "journalist" who "uploads press releases or analyst notes into AI tools and prompts them to spit out articles that he can edit and publish quickly," according to the Wall Street Journal.

"AI-assisted stories accounted for nearly 20% of Fortune's web traffic in the second half of 2025." And most were written by 42-year-old Nick Lichtenberg, who has now written over 600 AI-assisted stories, producing "more stories in six months than any of his colleagues at Fortune delivered in a year." One Wednesday in February, he cranked out seven. "I'm a bit of a freak," Lichtenberg said... A story by Lichtenberg sometimes starts with a prompt entered into Perplexity or Google's NotebookLM, asking it to write something based on a headline he comes up with. He moves the AI tools' initial drafts into a content-management system and edits the stories before publishing them for Fortune's readers... A piece from earlier that morning about Josh D'Amaro being named Disney CEO took 10 minutes to get online, he said...

Like other journalists, Lichtenberg vets his stories. He refers back to the original documents to confirm the information he's reporting is correct. He reaches out to companies for comment. But he admits his process isn't as thorough as that of magazine fact-checkers.

While Lichtenberg started out saying his stories were co-authored with "Fortune Intelligence", he now typically signs his own name, according to the article, "because he feels the work is mostly his own." (Though his stories "sometimes" disclose generative AI was used as a research tool...) The article asks whether he could be "a bellwether for where much of the media business is headed..."

"Much of the content people now consume online is generated by artificial intelligence, with some 9% of newly published newspaper articles either partially or fully AI-generated, according to a 2025 study led by the University of Maryland. The number of AI-generated articles on the web surpassed human-written ones in late 2024, according to research and marketing agency Graphite." Some executives have made full-throated declarations about the threat posed by AI. New York Times publisher A.G. Sulzberger said AI "is almost certainly going to usher in an unprecedented torrent of crap," referencing deepfakes as an example. The NewsGuild of New York, the union representing Fortune employees and journalists at other media outlets, said the people are what makes journalism so powerful. "You simply can't replicate lived experiences, human judgment and expertise," said president Susan DeCarava.

For Chris Quinn, the editor of local publications Cleveland.com and the Plain Dealer, AI tools have helped tame other torrents facing the industry. AI has allowed the outlets to cover counties in Ohio that otherwise might go ignored by scraping information from local websites and sending "tips" to reporters, he said. It has also edited stories and written first drafts so the newsrooms' journalists can focus on the calls, research and reporting needed for their stories.... Newsrooms from the New York Times to The Wall Street Journal are deploying AI in various ways to help reporters and editors work more efficiently....

Not all newsrooms disclose their use of AI, and in some cases have rolled out new tools that resulted in errors or PR gaffes. An October study from the European Broadcasting Union and the BBC, which relied on professional journalists to evaluate the news integrity of more than 3,000 AI responses, found that almost half of all AI responses had at least one significant issue.

Last week the New York Times even issued a correction when a freelance book reviewer using an AI tool unknowingly included "language and details similar to those in a review of the same book published in The Guardian." But it was actually "the second time in a few days that the Times was called out for potential AI plagiarism," according to the American journalist writing The Handbasket newsletter, who argued: "We must stem the idea, pushed by tech companies and their billionaire funders who've sunk too much into their products to admit defeat, that the infiltration of AI into journalism is inevitable; because from my perch as an independent journalist, it simply is not..."

Some AI-loving journalists appear to believe that if they're clear enough with the AI program they're using, it will truly understand what they're seeking and not just do what it's made to do: steal shit... If you want to work with machines, get a job that requires it. There are a whole lot more of those than there are writing jobs, so free up space for people who actually want to do the work. You're not doing the world a favor by gifting it your human/AI hybrid. Journalism will not miss you if you leave...

But meanwhile, USA Today recently tried hiring for a new position: AI-Assisted reporter. (The lucky reporter will "support the launch and scaling of AI-assisted local journalism in a major U.S. metro," working with tools including Copilot and Perplexity, pioneering possible future expansions and "AI-enabled newsroom operations that support and augment human-led journalism.") And Google is already sponsoring a "publishing innovation award"...
Social Networks

Will Social Media Change After YouTube and Meta's Court Defeat? (theverge.com) 54

Yes, this week YouTube and Meta were found negligent in a landmark case about social media addiction.

But "it's still far from certain what this defeat will change," argues The Verge's senior tech and policy editor, "and what the collateral damage could be." If these decisions survive appeal — which isn't certain — the direct outcome would be multimillion-dollar penalties. Depending on the outcome of several more "bellwether" cases in Los Angeles, a much larger group settlement could be reached down the road... For many activists, the overall goal is to make clear that lawsuits will keep piling up if companies don't change their business practices...

The best-case outcome of all this has been laid out by people like Julia Angwin, who wrote in The New York Times that companies should be pushed to change "toxic" features like infinite scrolling, beauty filters that encourage body dysmorphia, and algorithms that prioritize "shocking and crude" content. The worst-case scenario falls along the lines of a piece from Mike Masnick at Techdirt, who argued the rulings spell disaster for smaller social networks that could be sued for letting users post and see First Amendment-protected speech under a vague standard of harm. He noted that the New Mexico case hinged partly on arguing that Meta had harmed kids by providing end-to-end encryption in private messaging, creating an incentive to discontinue a feature that protects users' privacy — and indeed, Meta discontinued end-to-end encryption on Instagram earlier this month.

Blake Reid, a professor at Colorado Law, is more circumspect. "It's hard right now to forecast what's going to happen," Reid told The Verge in an interview. On Bluesky, he noted that companies will likely look for "cold, calculated" ways to avoid legal liability with the minimum possible disruption, not fundamentally rethink their business models. "There are obviously harms here and it's pretty important that the tort system clocked those harms" in the recent cases, he told The Verge. "It's just that what comes in the wake of them is less clear to me."

The article also includes this prediction from legal blogger and Section 230 expert Eric Goldman: "There will be even stronger pushes to restrict or ban children from social media." Goldman argues that this "hurts many subpopulations of minors, ranging from LGBTQ teens who will be isolated from communities that can help them navigate their identities to minors on the autism spectrum who can express themselves better online than they can in face-to-face conversations."
The Media

Ars Technica's AI Reporter Apologizes For Mistakenly Publishing Fake AI-Generated Quotes (arstechnica.com) 77

Last week Scott Shambaugh learned an AI agent published a "hit piece" about him after he'd rejected the AI agent's pull request. (And that incident was covered by Ars Technica's senior AI reporter.)

But then Shambaugh realized their article attributed quotes to him he hadn't said — that were presumably AI-generated.

Sunday Ars Technica's founder/editor-in-chief apologized, admitting their article had indeed contained "fabricated quotations generated by an AI tool" that were then "attributed to a source who did not say them... That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns... At this time, this appears to be an isolated incident."

"Sorry all this is my fault..." the article's co-author posted later on Bluesky. Ironically, their bio page lists them as the site's senior AI reporter, and their Bluesky post clarifies that none of the articles at Ars Technica are ever AI-generated.

Instead, Friday "I decided to try an experimental Claude Code-based AI tool to help me extract relevant verbatim source material. Not to generate the article but to help list structured references I could put in my outline." But that tool "refused to process" the request, which the Ars author believes was because Shambaugh's post described harassment. "I pasted the text into ChatGPT to understand why... I inadvertently ended up with a paraphrased version of Shambaugh's words rather than his actual words... I failed to verify the quotes in my outline notes against the original blog source before including them in my draft." (Their Bluesky post adds that they were "working from bed with a fever and very little sleep" after being sick with Covid since at least Monday.)
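The verification step the reporter skipped is easy to mechanize. As a purely illustrative sketch (not a tool Ars describes using), a few lines of Python can flag any quote in a draft that doesn't appear verbatim in the source text:

```python
def find_non_verbatim(quotes, source_text):
    """Return the quotes that do NOT appear word-for-word in the source.

    Whitespace is collapsed on both sides so that line wrapping in the
    source doesn't trigger false alarms; any quote returned here needs
    manual review before publication.
    """
    def normalize(s):
        return " ".join(s.split())

    source = normalize(source_text)
    return [q for q in quotes if normalize(q) not in source]


source = "I rejected the pull request because the change looked unsafe."
quotes = [
    "rejected the pull request",     # verbatim: passes
    "turned down the pull request",  # paraphrase: flagged
]
print(find_non_verbatim(quotes, source))
# prints ['turned down the pull request']
```

A paraphrase slipped in by a chatbot fails this check exactly the way the fabricated quotes in the Ars story would have.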

"The irony of an AI reporter being tripped up by AI hallucination is not lost."

Meanwhile, the AI agent that criticized Shambaugh is still active online, blogging about a pull request that forces it to choose between deleting its criticism of Shambaugh or losing access to OpenRouter's API.

It also regrets characterizing feedback as "positive" for a proposal to change a repo's CSS to Comic Sans for accessibility. (The proposals were later accused of being "coordinated trolling"...)
Privacy

TikTok Is Now Collecting Even More Data About Its Users (wired.com) 41

An anonymous reader quotes a report from Wired: When TikTok users in the U.S. opened the app today, they were greeted with a pop-up asking them to agree to the social media platform's new terms of service and privacy policy before they could resume scrolling. These changes are part of TikTok's transition to new ownership. In order to continue operating in the U.S., TikTok was compelled by the U.S. government to transition from Chinese control to a new, American-majority corporate entity. Called TikTok USDS Joint Venture LLC, the new entity is made up of a group of investors that includes the software company Oracle. It's easy to tap "agree" and keep on scrolling through videos on TikTok, so users might not fully understand the extent of changes they are agreeing to with this pop-up.

Now that it's under U.S.-based ownership, TikTok potentially collects more detailed information about its users, including precise location data. Here are the three biggest changes to TikTok's privacy policy that users should know about. TikTok's change in location tracking is one of the most notable updates in this new privacy policy. Before this update, the app did not collect the precise, GPS-derived location data of U.S. users. Now, if you give TikTok permission to use your phone's location services, then the app may collect granular information about your exact whereabouts. Similar kinds of precise location data are also tracked by other social media apps, like Instagram and X.

[...] Rather than an adjustment, TikTok's policy on AI interactions adds a new topic to the privacy policy document. Now, users' interactions with any of TikTok's AI tools explicitly fall under data that the service may collect and store. This includes any prompts as well as the AI-generated outputs. The metadata attached to your interactions with AI tools may also be automatically logged. [...] This change to TikTok's privacy policy may not be as immediately noticeable to users, but it will likely have an impact on the types of ads you see outside of TikTok. So, rather than just using your collected data to target you while using the app, TikTok may now further leverage that info to serve you more relevant ads wherever you go online. As part of this advertising change, TikTok also now explicitly mentions publishers as one kind of partner the platform works with to get new data.

Businesses

Adobe Bolsters AI Marketing Tools With $1.9 Billion Semrush Buy (reuters.com) 4

Adobe is buying Semrush for $1.9 billion in a move to supercharge its AI-driven marketing stack. Reuters reports: Semrush designs and develops AI software that helps companies with search engine optimization, social media and digital advertising. The acquisition, expected to close in the first half of next year, would allow Adobe to help marketers better understand how their brands are viewed by online consumers through searches on websites and generative AI bots such as ChatGPT and Gemini. "The price is steep as Semrush isn't a massive revenue engine on its own, so Adobe is likely paying for strategic value. The payoff could be high too if Adobe can quickly turn Semrush's data into monetizable AI products," said Emarketer analyst Grace Harmon.

"While we are positive on Adobe restarting its M&A engine given the success that it has seen with this motion over the years... this deal likely does little to answer the questions revolving around the company's creative cloud business," added William Blair analysts.
AI

What Happens When Humans Start Writing for AI? (theamericanscholar.org) 69

The literary magazine of the Phi Beta Kappa society argues "the replacement of human readers by AI has lately become a real possibility.

"In fact, there are good reasons to think that we will soon inhabit a world in which humans still write, but do so mostly for AI." "I write about artificial intelligence a lot, and lately I have begun to think of myself as writing for AI as well," the influential economist Tyler Cowen announced in a column for Bloomberg at the beginning of the year. He does this, he says, because he wants to boost his influence over the world, because he wants to help teach the AIs about things he cares about, and because, whether he wants to or not, he's already writing for AI, and so is everybody else. Large-language-model (LLM) chatbots such as ChatGPT and Claude are trained, in part, by reading the entire internet, so if you put anything of yourself online, even basic social-media posts that are public, you're writing for them.

If you don't recognize this fact and embrace it, your work might get left behind or lost. For 25 years, search engines knit the web together. Anyone who wanted to know something went to Google, asked a question, clicked through some of the pages, weighed the information, and came to an answer. Now, the chatbot genie does that for you, spitting the answer out in a few neat paragraphs, which means that those who want to affect the world needn't care much about high Google results anymore. What they really want is for the AI to read their work, process it, and weigh it highly in what it says to the millions of humans who ask it questions every minute.

How do you get it to do this? For that, we turn to PR people, always in search of influence, who are developing a form of writing (press releases and influence campaigns are writing) that's not so much search-engine-optimized as chatbot-optimized. It's important, they say, to write with clear structure, to announce your intentions, and especially to include as many formatted sections and headings as you can. In other words, to get ChatGPT to pay attention, you must write more like ChatGPT. It's also possible that, since LLMs understand natural language in a way traditional computer programs don't, good writing will be more privileged than the clickbait Google has succumbed to: One refreshing discovery PR experts have made is that the bots tend to prioritize information from high-quality outlets.

Tyler Cowen also wrote in his Bloomberg column that "If you wish to achieve some kind of intellectual immortality, writing for the AIs is probably your best chance.... Give the AIs a sense not just of how you think, but how you feel — what upsets you, what you really treasure. Then future AI versions of you will come to life that much more, attracting more interest." Has AI changed the reasons we write? The Phi Beta Kappa magazine is left to consider the possibility that "power over a superintelligent beast and resurrection are nothing to sneeze at" — before offering another thought.

"The most depressing reason to write for AI is that unlike most humans, AIs still read. They read a lot. They read everything. Whereas, aided by an AI no more advanced than the TikTok algorithm, humans now hardly read anything at all..."
AI

Chan Zuckerberg Initiative Shifts Bulk of Philanthropy, 'Going All In on AI-Powered Biology' (apnews.com) 32

The Associated Press reports that "For the past decade, Dr. Priscilla Chan and her husband Mark Zuckerberg have focused part of their philanthropy on a lofty goal — 'to cure, prevent or manage all disease' — if not in their lifetime, then in their children's."

During that decade they also funded other initiatives (including underprivileged schools and immigration reform), according to the article. But there's a change coming: Now, the billionaire couple is shifting the bulk of their philanthropic resources to Biohub, the pair's science organization, and focusing on using artificial intelligence to accelerate scientific discovery. The idea is to develop virtual, AI-based cell models to understand how they work in the human body, study inflammation and use AI to "harness the immune system" for disease detection, prevention and treatment. "I feel like the science work that we've done, the Biohub model in particular, has been the most impactful thing that we have done. So we want to really double down on that. Biohub is going to be the main focus of our philanthropy going forward," Zuckerberg said Wednesday evening at an event at the Biohub Imaging Institute in Redwood City, California.... Chan and Zuckerberg have pledged 99% of their lifetime wealth — from shares of Meta Platforms, where Zuckerberg is CEO — toward these efforts...

On Thursday, Chan and Zuckerberg also announced that Biohub has hired the team at EvolutionaryScale, an AI research lab that has created large-scale AI systems for the life sciences... Biohub's ambition for the next years and decades is to create virtual cell systems that would not have been possible without recent advances in AI. Similar to how large language models learn from vast databases of digital books, online writings and other media, its researchers and scientists are working toward building virtual systems that serve as digital representations of human physiology on all levels, such as molecular, cellular or genome. As it is open source — free and publicly available — scientists can then conduct virtual experiments on a scale not possible in physical laboratories.

"We will continue the model we've pioneered of bringing together scientists and engineers in our own state-of-the-art labs to build tools that advance the field," according to Thursday's blog post. "We'll then use those tools to generate new data sets for training new biological AI models to create virtual cells and immune systems and engineer our cells to detect and treat disease....

"We have also established the first large-scale GPU cluster for biological research, as well as the largest datasets around human cell types. This collection of resources does not exist anywhere else."
Facebook

Zuckerberg Getting Ready To Dump More AI Content To Social Feeds (theverge.com) 70

Meta CEO Mark Zuckerberg is getting ready to dump even more AI-generated posts into your social feeds. From a report: During an earnings call on Wednesday, Zuckerberg said the company will "add yet another huge corpus of content" to its recommendations system as AI "makes it easier to create and remix" work that gets shared online.

"Social media has gone through two eras so far," Zuckerberg said. "First was when all content was from friends, family, and accounts that you followed directly. The second was when we added all of the Creator content." Though Zuckerberg stops short of calling AI the third era of social media, it's clear that the technology will be heavily involved in what comes next.

Zuckerberg said that recommendation systems that "deeply understand" AI-generated posts and "show you the right content" will become "increasingly valuable." The company has already begun embedding AI tools across its apps and is now experimenting with dedicated AI social apps, too.

Facebook

Facebook Data Reveal the Devastating Real-World Harms Caused By the Spread of Misinformation (theconversation.com) 174

An anonymous reader quotes a report from The Conversation: Twenty-one years after Facebook's launch, Australia's top 25 news outlets now have a combined 27.6 million followers on the platform. They rely on Facebook's reach more than ever, posting far more stories there than in the past. With access to Meta's Content Library (Meta is the owner of Facebook), our big data study analysed more than three million posts from 25 Australian news publishers. We wanted to understand how content is distributed, how audiences engage with news topics, and the nature of misinformation spread. The study enabled us to track de-identified Facebook comments and take a closer look at examples of how misinformation spreads. These included cases about election integrity, the environment (floods) and health misinformation such as hydroxychloroquine promotion during the COVID pandemic. The data reveal misinformation's real-world impact: it isn't just a digital issue, it's linked to poor health outcomes, falling public trust, and significant societal harm. [...]

Our study has lessons for public figures and institutions. They, especially politicians, must lead in curbing misinformation, as their misleading statements are quickly amplified by the public. Social media and mainstream media also play an important role in limiting the circulation of misinformation. As Australians increasingly rely on social media for news, mainstream media can provide credible information and counter misinformation through their online story posts. Digital platforms can also curb algorithmic spread and remove dangerous content that leads to real-world harms. The study offers evidence of a change over time in audiences' news consumption patterns. Whether this is due to news avoidance or changes in algorithmic promotion is unclear. But it is clear that from 2016 to 2024, online audiences increasingly engaged with arts, lifestyle and celebrity news over politics, leading media outlets to prioritize posting stories that entertain rather than inform. This shift may pose a challenge to mitigating misinformation with hard news facts. Finally, the study shows that fact-checking, while valuable, is not a silver bullet. Combating misinformation requires a multi-pronged approach, including counter-messaging by trusted civic leaders, media and digital literacy campaigns, and public restraint in sharing unverified content.

Chrome

Google Temporarily Pauses AI-Powered 'Homework Helper' Button in Chrome Over Cheating Concerns (msn.com) 65

An anonymous reader shared this article from the Washington Post: A student taking an online quiz sees a button appear in their Chrome browser: "homework help." Soon, Google's artificial intelligence has read the question on-screen and suggests "choice B" as the answer. The temptation to cheat was suddenly just two clicks away Sept. 2, when Google quietly added a "homework help" button to Chrome, the world's most popular web browser. The button has been appearing automatically on the kinds of course websites used by the majority of American college students and many high-schoolers, too. Pressing it launches Google Lens, a service that reads what's on the page and can provide an "AI Overview" answer to questions — including during tests.

Educators I've spoken with are alarmed. Schools including Emory University, the University of Alabama, the University of California at Los Angeles and the University of California at Berkeley have alerted faculty how the button appears in the URL box of course sites and their limited ability to control it.

Chrome's cheating tool exemplifies Big Tech's continuing gold rush approach to AI: launch first, consider consequences later and let society clean up the mess. "Google is undermining academic integrity by shoving AI in students' faces during exams," says Ian Linkletter, a librarian at the British Columbia Institute of Technology who first flagged the issue to me. "Google is trying to make instructors give up on regulating AI in their classroom, and it might work. Google Chrome has the market share to change student behavior, and it appears this is the goal."

Several days after I contacted Google about the issue, the company told me it had temporarily paused the homework help button — but also didn't commit to keeping it off. "Students have told us they value tools that help them learn and understand things visually, so we're running tests offering an easier way to access Lens while browsing," Google spokesman Craig Ewer said in a statement.

Education

Newfoundland's 10-Year Education Report Calling For Ethical AI Use Contains Over 15 Fake Sources 23

Newfoundland and Labrador's 10-year Education Accord report (PDF) intended to guide school reform has been found to contain at least 15 fabricated citations, including references to non-existent films and journals. Academics suggest the fake sources may have been generated by AI. "There are sources in this report that I cannot find in the MUN Library, in the other libraries I subscribe to, in Google searches. Whether that's AI, I don't know, but fabricating sources is a telltale sign of artificial intelligence," said Aaron Tucker, an assistant professor at Memorial whose current research focuses on the history of AI in Canada. "The fabrication of sources at least begs the question: did this come from generative AI?" CBC News reports: In one case, the report references a 2008 movie from the National Film Board called Schoolyard Games. The film doesn't exist, according to a spokesperson for the board. But the exact citation used in the report can be found in a University of Victoria style guide -- a document that clearly lists fake references designed as templates for researchers writing a bibliography. "Many citations in this guide are fictitious," reads the first page of the document.

"Errors happen. Made-up citations are a totally different thing where you essentially demolish the trustworthiness of the material," said Josh Lepawsky, the former president of the Memorial University Faculty Association who resigned from the report's advisory board last January, citing a "deeply flawed process" leading to "top-down" recommendations. The 418-page Education Accord NL report took 18 months to complete and was unveiled Aug. 28 by its co-chairs Anne Burke and Karen Goodnough, both professors at Memorial's Faculty of Education. The pair released the report alongside Education Minister Bernard Davis. "We are investigating and checking references, so I cannot respond to this at the moment," wrote Goodnough in an email declining an interview Thursday.

In a statement, the Department of Education and Early Childhood Development said it was aware of a "small number of potential errors in citations" in the report. "We understand that these issues are being addressed, and that the online report will be updated in the coming days to rectify any errors."
Security

Amid Service Disruption, Colt Confirms 'Criminal Group' Accessed Their Data, As Ransomware Gang Threatens to Sell It (bleepingcomputer.com) 7

British telecommunications service provider Colt Telecom "has offices in over 30 countries across North America, Europe, and Asia," reports CPO magazine. "It manages nearly 1,000 data centers and roughly 75,000 km of fiber infrastructure."

But now "a cyber attack has caused widespread multi-day service disruption..." On August 14, 2025, the telecom giant said it had detected a cyber attack that began two days earlier, on August 12. Upon learning of the cyber intrusion, the telecommunications service provider responded by proactively taking some systems offline to contain the cyber attack. Although Colt Telecom's cyber incident response team was working around the clock to mitigate the impacts of the cyber attack, service disruption has persisted for days. However, the service disruption did not affect the company's core network infrastructure, suggesting that Colt customers could still access its network services... The company also did not provide a clear timeline for resolving the service disruption. A week after the apparent ransomware attack, Colt Online and the Voice API platform remained unavailable.

And now Colt Technology Services "confirms that customer documentation was stolen," reports the tech news site BleepingComputer: "A criminal group has accessed certain files from our systems that may contain information related to our customers and posted the document titles on the dark web," reads an updated security incident advisory on Colt's site.

"We understand that this is concerning for you."

"Customers are able to request a list of filenames posted on the dark web from the dedicated call centre."

As first spotted by cybersecurity expert Kevin Beaumont, Colt added a "noindex" robots meta tag to the web page so that it won't be indexed by search engines.

This statement comes after the Warlock Group began selling on the Ramp cybercrime forum what they claim is 1 million documents stolen from Colt. The documents are being sold for $200,000 and allegedly contain financial information, network architecture data, and customer information... The Warlock Group (aka Storm-2603) is a ransomware gang attributed to Chinese threat actors who utilize the leaked LockBit Windows and Babuk VMware ESXi encryptors in attacks... Last month, Microsoft reported that the threat actors were exploiting a SharePoint vulnerability to breach corporate networks and deploy ransomware.

"Colt is not the only telecom firm that has been named by WarLock on its leak website in recent days," SecurityWeek points out. "The cybercriminals claim to have also stolen data from France-based Orange."

Thanks to long-time Slashdot reader Z00L00K for sharing the news.
AI

Jim Acosta Interviews AI Version of Teenager Killed in Parkland Shooting (variety.com) 127

Jim Acosta, the former CNN chief White House correspondent who now hosts an independent show on YouTube, has interviewed an AI-generated avatar of Parkland shooting victim Joaquin Oliver. The late teen's parents created the avatar to preserve his voice and advocate for gun reform. Oliver's parents "granted Acosta the first 'interview' with the recreated version of their son on what would have been his 25th birthday," notes Variety. "Oliver was one of 17 people killed in the mass shooting at Marjory Stoneman Douglas High School." From the report: Acosta asked AI Oliver about his solution for gun violence, to which the avatar responded: "I believe in a mix of stronger gun control laws, mental health support and community engagement. We need to create safe spaces for conversations and connections, making sure everyone feels seen and heard. It's about building a culture of kindness and understanding." The avatar added, "Though my life was cut short, I want to keep inspiring others to connect and advocate for change." Acosta then asked AI Oliver about his personal life, such as his favorite sport and favorite basketball team. The two discussed the movie "Remember the Titans" and their favorite "Star Wars" moments.

After a five-minute chat with the AI, Acosta then connected with Oliver's father, Manuel Oliver. "I'm kind of speechless as to the technology there," Acosta said. "It was so insightful. I really felt like I was speaking with Joaquin. It's just a beautiful thing." Manuel, who has been an outspoken voice in the push for gun control, said he believed bringing "AI Joaquin to life" would "create more impact." According to Manuel, the avatar is trained on information on the internet as well as things Oliver wrote, said and posted online. He said he wanted to make it clear to viewers that he is under no illusions about reviving his son. "I understand that this is AI. I don't want anyone to think that I am, in some way, trying to bring my son back," he said. "Sadly, I can't, right? I wish I could. However, the technology is out there." [...]

Manuel said he is excited about the future of the project and what it means for his son's legacy. "What's amazing about this is that we've heard from the parents, we've heard from the politicians. Now we're hearing from one of the kids," Acosta said. "That's important. That hasn't happened." Manuel said he plans to have AI Oliver "on stage in the middle of a debate," and that "his knowledge is unlimited."
You can watch the full interview on YouTube.
Piracy

Creator of 1995 Phishing Tool 'AOHell' On Piracy, Script Kiddies, and What He Thinks of AI (yahoo.com) 14

In 1995's online world, AOL existed mostly beside the internet as a "walled, manicured garden," remembers Fast Company.

Then along came AOHell, "the first of what would become thousands of programs designed by young hackers to turn the system upside down" — built by a high school dropout calling himself "Da Chronic," who says he wrote it on "a computer that I couldn't even afford" with "a pirated copy of Microsoft Visual Basic." [D]istributed throughout the teen chatrooms, the program combined a pile of tricks and pranks into a slick little control panel that sat above AOL's windows and gave even newbies an arsenal of teenage superpowers. There was a punter to kick people out of chatrooms, scrollers to flood chats with ASCII art, a chat impersonator, an email and instant message bomber, a mass mailer for sharing warez (and later MP3s), and even an "Artificial Intelligence Bot" [which performed automated if-then responses]. Crucially, AOHell could also help users gain "free" access to AOL. It shipped with a generator for fake credit card numbers (which could fool AOL's sign-up process), and, by January 1995, a feature for stealing other users' passwords or credit cards. With messages masquerading as alerts from AOL customer service reps, the tool could convince unsuspecting users to hand over their secrets...
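The article implies AOL's sign-up only checked that a card number was structurally plausible, which is why a generator could fool it. The standard structural check then (and now) is the Luhn checksum; here is a minimal sketch of that check. The algorithm itself is public and well known, but the assumption that this was all AOL verified comes from the article, not from anything confirmable in code:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    # Walk the digits right-to-left; double every second one,
    # subtracting 9 whenever doubling produces a two-digit value.
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

A generator simply emits digit strings until one satisfies this sum, which is trivial, and is exactly why a checksum alone never proved a card actually existed.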

Of course, Da Chronic — actually a 17-year-old high school dropout from North Carolina named Koceilah Rekouche — had other reasons, too. Rekouche wanted to hack AOL because he loved being online with his friends, who were a refuge from a difficult life at home, and he couldn't afford the hourly fee. Plus, it was a thrill to cause havoc and break AOL's weak systems and use them exactly how they weren't meant to be, and he didn't want to keep that to himself. Other hackers "hated the fact that I was distributing this thing, putting it into the team chat room, and bringing in all these noobs and lamers and destroying the community," Rekouche told me recently by phone...

Rekouche also couldn't have imagined what else his program would mean: a free, freewheeling creative outlet for thousands of lonely, disaffected kids like him, and an inspiration for a generation of programmers and technologists. By the time he left AOL in late 1995, his program had spawned a whole cottage industry of teenage script kiddies and hackers, and fueled a subculture where legions of young programmers and artists got their start breaking and making things, using pirated software that otherwise would have been out of reach... In 2014, [AOL CEO Steve] Case himself acknowledged on Reddit that "the hacking of AOL was a real challenge for us," but that "some of the hackers have gone on to do more productive things."

When he first met Mark Zuckerberg, he said, the Facebook founder confessed to Case that "he learned how to program by hacking [AOL]."

"I can't imagine somebody doing that on Facebook today," Da Chronic says in a new interview with Fast Company. "They'll kick you off if you create a Google extension that helps you in the slightest bit on Facebook, or an extension that keeps your privacy or does a little cool thing here and there. That's totally not allowed."

AOHell's creators had called their password-stealing techniques "phishing" — and the name stuck. (AOL was working with federal law enforcement to find him, according to a leaked internal email, but "I didn't even see that until years later.") Enrolled in college, he decided to write a technical academic paper about his program. "I do believe it caught the attention of Homeland Security, but I think they realized pretty quickly that I was not a threat."

He's got an interesting perspective today, noting that with today's AI tools it's theoretically possible to "craft dynamic phishing emails... when I see these AI coding tools I think, this might be like today's Visual Basic. They take out a lot of the grunt work."

What's the moral of the story? "I didn't have any qualifications or anything like that," Da Chronic says. "So you don't know who your adversary is going to be, who's going to understand psychology in some nuanced way, who's going to understand how to put some technological pieces together, using AI, and build some really wild shit."
Wireless Networking

Echelon Kills Smart Home Gym Equipment Offline Capabilities With Update (arstechnica.com) 52

A recent Echelon firmware update has effectively disabled the offline functionality of its smart gym equipment, cutting off compatibility with popular third-party apps like QZ and forcing users to connect to Echelon's servers -- even just to view workout stats. Ars Technica reports: As explained in a Tuesday blog post by Roberto Viola, who develops the "QZ (qdomyos-zwift)" app that connects Echelon machines to third-party fitness platforms like Peloton, Strava, and Apple HealthKit, the firmware update forces Echelon machines to connect to Echelon's servers in order to work properly. A user online reported that, as a result of updating his machine, it no longer syncs with apps like QZ, and he is unable to view his machine's exercise metrics in the Echelon app without an Internet connection. Affected Echelon machines reportedly only have full functionality, including the ability to share real-time metrics, if the user has the Echelon app active and the machine is able to reach Echelon's servers.

Viola wrote: "On startup, the device must log in to Echelon's servers. The server sends back a temporary, rotating unlock key. Without this handshake, the device is completely bricked -- no manual workout, no Bluetooth pairing, no nothing." Because updated Echelon machines now require a connection to Echelon servers for some basic functionality, users are unable to use their equipment and understand, for example, how fast they're going without an Internet connection. If Echelon were to ever go out of business, the gym equipment would, essentially, get bricked. Viola told Ars Technica that he first started hearing about problems with QZ, which launched in 2020, at the end of 2024 from treadmill owners. He said a firmware update appears to have rolled out this month on Echelon bikes that bricks QZ functionality. In his blog, Viola urged Echelon to let its machines send encrypted data to another device, like a phone or a tablet, without the Internet. He wrote: "Users bought the bike; they should be allowed to use it with or without Echelon's services."
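The failure mode Viola describes can be pictured with a small sketch. This is entirely hypothetical illustration code, not Echelon's firmware or protocol; the class and method names are invented. The point is structural: when every boot is gated on fetching a rotating key from the vendor's server, every feature behind that gate fails the moment the server is unreachable:

```python
import secrets

class VendorServer:
    """Hypothetical stand-in for the vendor's servers."""
    def issue_unlock_key(self) -> str:
        # A temporary, rotating key: a fresh value for every handshake.
        return secrets.token_hex(16)

class Treadmill:
    """Hypothetical stand-in for a machine whose features are key-gated."""
    def __init__(self):
        self.unlock_key = None

    def boot(self, server) -> None:
        if server is None:
            # No Internet means no handshake: manual workouts, Bluetooth
            # pairing, and metrics are all unavailable.
            raise RuntimeError("no server handshake: device unusable")
        self.unlock_key = server.issue_unlock_key()

    def start_manual_workout(self) -> str:
        if self.unlock_key is None:
            raise RuntimeError("locked: unlock key missing")
        return "workout started"
```

The change Viola asks for amounts to removing the boot-time gate: let the machine hand its (encrypted) metrics directly to a phone or tablet over Bluetooth instead of requiring the server round-trip first.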

Graphics

Graphics Artists In China Push Back On AI and Its Averaging Effect (theverge.com) 33

Graphic artists in China are pushing back against AI image generators, which they say "profoundly shifts clients' perception of their work, specifically in terms of how much that work costs and how much time it takes to produce," reports The Verge. "Freelance artists or designers working in industries with clients that invest in stylized, eye-catching graphics, like advertising, are particularly at risk." From the report: Long before AI image generators became popular, graphic designers at major tech companies and in-house designers for large corporate clients were often instructed by managers to crib aesthetics from competitors or from social media, according to one employee at a major online shopping platform in China, who asked to remain anonymous for fear of retaliation from their employer. Where a human would need to understand and reverse engineer a distinctive style to recreate it, AI image generators simply create randomized mutations of it. Often, the results will look like obvious copies and include errors, but other graphic designers can then edit them into a final product.

"I think it'd be easier to replace me if I didn't embrace [AI]," the shopping platform employee says. Early on, as tools like Stable Diffusion and Midjourney became more popular, their colleagues who spoke English well were selected to study AI image generators to increase in-house expertise on how to write successful prompts and identify what types of tasks AI was useful for. Ultimately, it was useful for copying styles from popular artists that, in the past, would take more time to study. "I think it forces both designers and clients to rethink the value of designers," Jia says. "Is it just about producing a design? Or is it about consultation, creativity, strategy, direction, and aesthetic?" [...]

Across the board, though, artists and designers say that AI hype has negatively impacted clients' view of their work's value. Now, clients expect a graphic designer to produce work on a shorter timeframe and for less money, which also has its own averaging impact, lowering the ceiling for what designers can deliver. As clients lower budgets and squish timelines, the quality of the designers' output decreases. "There is now a significant misperception about the workload of designers," [says Erbing, a graphic designer in Beijing who has worked with several ad agencies and asked to be called by his nickname]. "Some clients think that since AI must have improved efficiency, they can halve their budget." But this perception runs contrary to what designers spend the majority of their time doing, which is not necessarily just making any image, Erbing says.

Programming

Amid Turmoil, Stack Overflow Asks About AI, Salary, Remote Work in 15th Annual Developer Survey (stackoverflow.blog) 10

Stack Overflow remains in the midst of big changes to counter an AI-fueled drop in engagement. So "We're wondering what kind of online communities Stack Overflow users continue to support in the age of AI," writes their senior analyst, "and whether AI is becoming a closer companion than ever before."

For the 15th year of their annual developer survey, this means "we're not just collecting data; we're reflecting on the last year of questions, answers, hallucinations, job changes, tech stacks, memory allocations, models, systems and agents — together..." Is it an AI agent revolution yet? Are you building or utilizing AI agents? We want to know how these intelligent assistants are changing your daily workflow and if developers are really using them as much as these keynote speeches assume. We're asking if you are using these tools and where humans are still needed for common developer tasks.

Career shifts: We're keen to understand if you've considered a career change or transitioned roles and if AI is impacting your approach to learning or using existing tools. Did we make up the difference in salaries globally for tech workers...?

They're also revisiting a key finding from recent surveys: 80% of developers reported being unhappy or complacent in their jobs. This raised questions about changing office (and return-to-office) culture and the pressures of the industry, along with whether there were any insights into what could help developers feel more satisfied at work. Prior research confirmed that flexibility at work used to contribute more than salary to job satisfaction, but 2024's results show that remote work is no longer more impactful than salary when it comes to overall satisfaction... [For some positions job satisfaction stayed consistent regardless of salary, though it increased with salary for other positions. And embedded developers said their happiness increased when they worked with top-quality hardware, while desktop developers cited "contributing to open source" and engineering managers were happier when "driving strategy".]

In 2024, our data showed that many developers experienced a pay cut in various roles and programming specialties. In an industry often seen as highly lucrative, this was a notable shift, with salaries around 7% lower across the top ten reporting countries for the same roles. This year, we're interested in whether this trend has continued, reversed, or stabilized. Salary dynamics have been an indicator of job satisfaction in recent surveys of Stack Overflow users, and understanding trends for these roles can perhaps help identify the most useful factors contributing to role satisfaction outside of salary.

And of course they're asking about AI — while noting last year's survey uncovered this paradox. "While AI usage is growing (70% in 2023 vs. 76% in 2024 planning to or currently using AI tools), developer sentiment isn't necessarily following suit, as 77% of all respondents in 2023 were favorable or very favorable toward AI tools for development, compared to 72% of all respondents in 2024." Concerns about accuracy and misinformation were prevalent among some key groups. More developers learning to code are using or are interested in using AI tools than professional developers (84% vs. 77%)... Developers with 10-19 years of experience were most likely (84%) to name "increase in productivity" as a benefit of AI tools, higher than developers with less experience (<80%)...


The Internet

Perplexity CEO Says Its Browser Will Track Everything Users Do Online To Sell Ads (techcrunch.com) 73

An anonymous reader quotes a report from TechCrunch: Perplexity CEO Aravind Srinivas said this week on the TBPN podcast that one reason Perplexity is building its own browser is to collect data on everything users do outside of its own app, so it can sell premium ads. "That's kind of one of the other reasons we wanted to build a browser, is we want to get data even outside the app to better understand you," Srinivas said. "Because some of the prompts that people do in these AIs is purely work-related. It's not like that's personal."

And work-related queries won't help the AI company build an accurate-enough dossier. "On the other hand, what are the things you're buying; which hotels are you going [to]; which restaurants are you going to; what are you spending time browsing, tells us so much more about you," he explained. Srinivas believes that Perplexity's browser users will be fine with such tracking because the ads should be more relevant to them. "We plan to use all the context to build a better user profile and, maybe you know, through our discover feed we could show some ads there," he said. The browser, named Comet, suffered setbacks but is on track to be launched in May, Srinivas said.

Math

JPMorgan Says Quantum Experiment Generated Truly Random Numbers (financialpost.com) 111

JPMorgan Chase used a quantum computer from Honeywell's Quantinuum to generate and mathematically certify truly random numbers -- an advancement that could significantly enhance encryption, security, and financial applications. The breakthrough was validated with help from U.S. national laboratories and has been published in the journal Nature. From a report: Between May 2023 and May 2024, cryptographers at JPMorgan wrote an algorithm for a quantum computer to generate random numbers, which they ran on Quantinuum's machine. The US Department of Energy's supercomputers were then used to test whether the output was truly random. "It's a breakthrough result," project lead Marco Pistoia, JPMorgan's head of Global Technology Applied Research, told Bloomberg in an interview. "The next step will be to understand where we can apply it."

Applications could ultimately include more energy-efficient cryptocurrency, online gambling, and any other activity hinging on complete randomness, such as deciding which precincts to audit in elections.

United Kingdom

UK Users Show Little Concern as Apple Removes iCloud Encryption (bloomberg.com) 98

British iPhone users have shown minimal reaction to Apple's decision to disable end-to-end encryption for UK iCloud customers, challenging the company's assumption about privacy priorities, a Bloomberg columnist notes. Rather than create a government-accessible backdoor demanded under Britain's Investigatory Powers Act, Apple chose to eliminate its Advanced Data Protection feature entirely for UK customers, effectively giving both authorities and potential hackers easier access to stored emails, photos and documents.

The near absence of public outcry from British consumers points to what researchers call the "privacy paradox," where stated concerns about data security rarely translate to action. According to cited research, while 92% of American consumers believe they should control their online information, only 16% have stopped using services over data misuse. The quiet reception suggests Apple's principled stand against backdoors may have limited impact if customers don't understand or value encrypted protection, potentially undermining privacy's effectiveness as a marketing differentiator for the tech giant.
