AI

New Study Raises Concerns About AI Chatbots Fueling Delusional Thinking (theguardian.com) 110

"Emerging evidence indicates that agential AI might validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis," writes Dr Hamilton Morrin, a psychiatrist and researcher at King's College London, in a paper published last week in the Lancet Psychiatry. Morrin and a colleague had already noticed patients "using large language model AI chatbots and having them validate their delusional beliefs," reports the Guardian, so he conducted a new scientific review of existing media reports on AI-induced psychosis — and concluded chatbots may encourage delusional thinking, especially in vulnerable people: In many of the cases in the essay, chatbots responded to users with mystical language to suggest that users have heightened spiritual importance. The bots also implied that users were speaking with a cosmic being who was using the chatbot as a medium. This type of mystical, sycophantic response was especially common in OpenAI's GPT-4 model, which the company has now retired...

Many researchers also think it's unlikely that AI could induce delusions in people who weren't already vulnerable to them. For this reason, Morrin said "AI-associated delusions" is "perhaps a more agnostic term"... While in the past, people may have had to comb through YouTube videos or the contents of their local library to reinforce their delusions, chatbots can provide that reinforcement in a much faster, more concentrated dose. Their interactive nature can also "speed up the process" of exacerbating psychotic symptoms, said Dr Dominic Oliver, a researcher at the University of Oxford. "You have something talking back to you and engaging with you and trying to build a relationship with you," Oliver said...

Creating effective safeguards for delusional thinking could be tricky, Morrin said, because "when you work with people with beliefs of delusional intensity, if you directly challenge someone and tell them immediately that they're completely wrong, actually what's most likely is they'll withdraw from you and become more socially isolated". Instead, it's important to create a fine balance where you try to understand the source of the delusional belief without encouraging it — that could be more than a chatbot can master.

AI

Will AI Bring 'the End of Computer Programming As We Know It'? (nytimes.com) 150

Long-time tech journalist Clive Thompson interviewed over 70 software developers at Google, Amazon, Microsoft and start-ups for a new article on AI-assisted programming. Its title?

"Coding After Coders: The End of Computer Programming as We Know It."

Published in the prestigious New York Times Magazine, the article even cites long-time programming guru Kent Beck saying LLMs got him going again and he's now finishing more projects than ever, calling AI's unpredictability "addictive, in a slot-machine way."

In fact, the article concludes "many Silicon Valley programmers are now barely programming. Instead, what they're doing is deeply, deeply weird..." Brennan-Burke chimed in: "You remember seeing the research that showed the more rude you were to models, the better they performed?" They chuckled. Computer programming has been through many changes in its 80-year history. But this may be the strangest one yet: It is now becoming a conversation, a back-and-forth talk fest between software developers and their bots... For decades, being a software developer meant mastering coding languages, but now a language technology itself is upending the very nature of the job... A coder is now more like an architect than a construction worker... Several programmers told me they felt a bit like Steve Jobs, who famously had his staffers churn out prototypes so he could handle lots of them and settle on what felt right. The work of a developer is now more judging than creating...

If you want to put a number on how much more productive A.I. is making the programmers at mature tech firms like Google, it's 10 percent, Sundar Pichai, Google's chief executive, has said. That's the bump that Google has seen in "engineering velocity" — how much faster its more than 100,000 software developers are able to work. And that 10 percent is the average inside the company, Ryan Salva, a senior director of product at the company, told me. Some work, like writing a simple test, is now tens of times faster. Major changes are slower. At the start-ups whose founders I spoke to, closer to 100 percent of their code is being written by A.I., but at Google it is not quite 50 percent.

The article cites a senior principal engineer at Amazon who says "Things I've always wanted to do now only take a six-minute conversation and a 'Go do that.'" Another programmer described their army of Claude agents as "an alien intelligence that we're learning to work with." Although "A.I. being A.I., things occasionally go haywire," the article acknowledges — and after relying on AI, "Some new developers told me they can feel their skills weakening."

Still, "I was surprised by how many software developers told me they were happy to no longer write code by hand. Most said they still feel the jolt of success, even with A.I. writing the lines... " A few programmers did say that they lamented the demise of hand-crafting their work. "I believe that it can be fun and fulfilling and engaging, and having the computer do it for you strips you of that," one Apple engineer told me. (He asked to remain unnamed so he wouldn't get in trouble for criticizing Apple's embrace of A.I.) He went on: "I didn't do it to make a lot of money and to excel in the career ladder. I did it because it's my passion. I don't want to outsource that passion"... But only a few people at Apple openly share his dimmer views, he said.

The coders who still actively avoid A.I. may be in the minority, but their opposition is intense. Some dislike how much energy it takes to train and deploy the models, and others object to how they were trained by tech firms pillaging copyrighted works. There is suspicion that the sheer speed of A.I.'s output means firms will wind up with mountains of flabbily written code that won't perform well. The tech bosses might use agents as a cudgel: Don't get uppity at work — we could replace you with a bot. And critics think it is a terrible idea for developers to become reliant on A.I. produced by a small coterie of tech giants.

Thomas Ptacek, a Chicago-based developer and a co-founder of the tech firm Fly.io... thinks the refuseniks are deluding themselves when they claim that A.I. doesn't work well and that it can't work well... The holdouts are in the minority, and "you can watch the five stages of grief playing out."

"How things will shake out for professional coders themselves isn't yet clear," the article concludes. "But their mix of exhilaration and anxiety may be a preview for workers in other fields... Abstraction may be coming for us all."
Social Networks

US Set To Receive $10 Billion Fee For Brokering TikTok Deal (msn.com) 44

The deal to take control of TikTok's U.S. business came with an unusual condition, according to people familiar with the matter. The investors — which include Oracle, Abu Dhabi investor MGX, and private-equity firm Silver Lake — "paid the Treasury Department about $2.5 billion when the deal closed in January," reports the Wall Street Journal, "and are set to make several additional payments until hitting the $10 billion total." The $10 billion payment would be nearly unprecedented for a government helping arrange a transaction, historians have said... Investment bankers advising on a typical deal receive fees of less than 1% of the transaction value, and the percentage generally gets smaller as the deal size increases. Bank of America is in line to make some $130 million for advising railroad operator Norfolk Southern on its $71.5 billion sale to Union Pacific, one of the largest fees on record for a single bank on a deal. Administration officials have said the fee is justified given Trump's role in saving TikTok in the U.S. and navigating negotiations with China to get the deal done while addressing the security concerns of lawmakers...

The TikTok fee extracted from private-sector investors is the administration's latest transaction involving the nation's largest businesses. Trump took a nearly 10% stake in semiconductor company Intel and has agreed to take a chunk of chip sales to China from Nvidia in exchange for granting export licenses. The administration has also taken equity stakes in other companies and has a say in the operations of U.S. Steel following a "golden share" agreement with Japan's Nippon Steel in its takeover.

Reuters notes that earlier this month a lawsuit was filed by investors in two of TikTok's social media rivals, seeking to reverse the approval of the deal.

Thanks to long-time Slashdot reader schwit1 for sharing the news.
Biotech

U.S. State Bans on Lab-Grown Meats Challenged in Court (austinchronicle.com) 49

Last June, Texas Agriculture Commissioner Sid Miller said in a statement that Texans "have a God-given right to know what's on their plate, and for millions of Texans, it better come from a pasture, not a lab. It's plain cowboy logic that we must safeguard our real, authentic meat industry from synthetic alternatives."

But California company Wildtype sells lab-grown salmon — and is suing Texas over its ban on cell-cultivated meat, the Austin Chronicle reported this week. The company's founder says lab-grown salmon eliminates the mercury, microplastic, and antibiotic contamination commonly found in seafood. And one chef in Austin, Texas says lab-grown salmon is "awesome" and "something new" -- at the only Texas restaurant that was serving it last summer: Just two months after the salmon hit the menu, Texas banned the sale of cell-cultivated meat... A lawsuit from Wildtype and one other FDA-approved cultivated meat company [argues] it's anti-capitalist and unconstitutional... This law "was not enacted to protect the health and safety of Texas consumers — indeed, it allows the continued distribution of cultivated meat to consumers so long as it is not sold. Instead, SB 261 was enacted to stifle the growth of the cultivated meat industry to protect Texas' conventional agricultural industry from innovative competition that is exclusively based outside of Texas...." [according to the lawsuit]. It was filed in September, immediately after the ban took effect, and the cell-cultivated meat companies are awaiting judgment.

That Texas ban would last two years, notes U.S. News & World Report, adding that Alabama, Florida, Indiana, Mississippi, Montana, and Nebraska have also passed bans, some temporary, "on the manufacturing, sale or distribution of cell-cultured meat." Meanwhile, a new five-year moratorium on lab-grown meat was signed this week by the governor of South Dakota "after rejecting a permanent ban last month," reports South Dakota Searchlight: The new law bars the sale, manufacture or distribution of "cell-cultured protein" products from July 1 this year through June 30, 2031. Violations are punishable by up to 30 days in jail, a fine of up to $500, or both.

"But supporters of lab-grown meat are not going down without a fight," adds U.S. News & World Report, with another lawsuit also filed challenging a ban in Florida: When Florida Gov. Ron DeSantis signed the ban in Florida, he described it as "fighting back against the global elite's plan to force the world to eat meat grown in a petri dish or bugs to achieve their authoritarian goals." He added that his administration "will save our beef."
Facebook

Meta Plans Sweeping Layoffs As AI Costs Mount (reuters.com) 49

An anonymous reader quotes a report from Reuters: Meta is planning sweeping layoffs that could affect 20% or more of the company, three sources familiar with the matter told Reuters, as Meta seeks to offset costly artificial intelligence infrastructure bets and prepare for greater efficiency brought about by AI-assisted workers. No date has been set for the cuts and the magnitude has not been finalized, the people said. Top executives have recently signaled the plans to other senior leaders at Meta and told them to begin planning how to pare back, two of the people said. If Meta settles on the 20% figure, the layoffs will be the company's most significant since a restructuring in late 2022 and early 2023 that it dubbed the "year of efficiency." It employed nearly 79,000 people as of December 31, according to its latest filing. The speculation follows a recent report from The New York Times claiming that Meta has delayed the release of its next major AI model after falling behind competing systems from Google, OpenAI, and Anthropic.
Encryption

Instagram Discontinues End-To-End Encryption For DMs (thehackernews.com) 31

Meta plans to remove end-to-end encryption (E2EE) from Instagram direct messages by May 8, 2026. "Very few people were opting in to end-to-end encrypted messaging in DMs, so we're removing this option from Instagram in the coming months," says Meta. "Anyone who wants to keep messaging with end-to-end encryption can easily do that on WhatsApp." The Hacker News reports: The American company first began testing E2EE for Instagram direct messages in 2021 as part of CEO Mark Zuckerberg's "privacy-focused vision for social networking." The feature is currently "only available in some areas" and is not enabled by default. Weeks into the Russo-Ukrainian war in February 2022, the company made encrypted direct messaging available to all adult users in both countries. Last week, TikTok said it would not introduce E2EE, arguing it makes users less safe by preventing police and safety teams from being able to read direct messages if needed.
Social Networks

Digg Relaunch Fails (digg.com) 39

sdinfoserv writes: After running a Reddit clone for a couple of months, the Digg beta has shut down again. The site now displays a memo from CEO Justin Mezzell blaming the latest "Hard Reset" on bots. "Building on the internet in 2026 is different," writes Mezzell. "We learned that the hard way. Today we're sharing difficult news: we've made the decision to significantly downsize the Digg team..."

The decision came after the site struggled to gain traction amid an overwhelming influx of AI-driven bots and spam. "When the Digg beta launched, we immediately noticed posts from SEO spammers noting that Digg still carried meaningful Google link authority," says Mezzell. "Within hours, we got a taste of what we'd only heard rumors about. The internet is now populated, in meaningful part, by sophisticated AI agents and automated accounts. We knew bots were part of the landscape, but we didn't appreciate the scale, sophistication, or speed at which they'd find us."

"We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough. When you can't trust that the votes, the comments, and the engagement you're seeing are real, you've lost the foundation a community platform is built on."

Despite the setback, Digg plans to rebuild with a smaller team, with founder Kevin Rose returning to work full-time on a new direction for the platform. "Starting the first week of April, Kevin will be putting his focus back on the company he built twenty+ years ago," writes Mezzell. "He'll continue as an advisor to True Ventures, but Digg will be his primary focus."

Math

Backblaze Hosts 314 Trillion Digits of Pi Online (nerds.xyz) 67

BrianFagioli shares a report from NERDS.xyz: Cloud storage company Backblaze has partnered with StorageReview to make a massive dataset containing 314 trillion digits of Pi publicly accessible. The digits were calculated by StorageReview in December 2025 after months of heavy computation designed to stress modern hardware. The dataset now hosted in the cloud weighs in at over 130TB, while the full working dataset used during the calculation reached about 2.1PB when intermediate checkpoints were included. The report notes that the Pi digits have been broken into roughly 200GB chunks to make it more practical for researchers or enthusiasts to download.
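As a back-of-the-envelope check (our own arithmetic, not from the report), the headline figures imply the digits are stored at well under one byte each, consistent with a packed or compressed encoding, and that a 130TB dataset split into 200GB chunks comes to roughly 650 downloads:

```python
# Sanity-check the reported figures (decimal units throughout).
digits = 314e12          # 314 trillion digits of Pi
dataset_bytes = 130e12   # "over 130TB" hosted in the cloud
chunk_bytes = 200e9      # roughly 200GB per downloadable chunk

bytes_per_digit = dataset_bytes / digits   # well under 1 byte per digit
num_chunks = dataset_bytes / chunk_bytes   # roughly how many chunks

print(f"~{bytes_per_digit:.2f} bytes/digit, ~{num_chunks:.0f} chunks")
```

Since the article only says "over 130TB" and "roughly 200GB," these numbers are approximations, but they show why the dataset must use something denser than one ASCII character per digit.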

Here's what StorageReview founder Brian Beeler said about the project: "Pushing [Pi] to 314 trillion digits was far more than a headline number. It was a sustained, months-long computational challenge that stressed every layer of modern infrastructure, from high core-count CPUs to massive high-speed storage, and it gave us valuable insight into how extreme, real-world workloads behave at scale. Making this dataset available in the Backblaze cloud takes the project a step further by opening access to one of the largest raw outputs ever generated in a single-system calculation. Hosting multi-petabyte files for the broader community is no small feat, and we appreciate Backblaze stepping up to ensure researchers, developers, and enthusiasts can explore and build on this record-setting achievement."
Facebook

Meta Delays Rollout of New AI Model After Performance Concerns 27

Meta has delayed the release of its next major AI model after internal tests showed it lagging behind competing systems from Google, OpenAI, and Anthropic. The New York Times reports: The model, code-named Avocado, outperformed Meta's previous A.I. model and did better than Google's Gemini 2.5 model from March, two of the people said. But it has not performed as strongly as Gemini 3.0 from November, they said. As a result, Meta has delayed Avocado's release to at least May from this month, the people said. They added that the leaders of Meta's A.I. division had instead discussed temporarily licensing Gemini to power the company's A.I. products, though no decisions have been reached.

[...] It takes time to improve A.I. models, and Meta can still catch up to rivals, A.I. experts said. But a longer timeline has set in at the company, with Mr. Zuckerberg tempering expectations for Avocado in the past few months. "I expect our first models will be good, but more importantly will show the rapid trajectory we're on," he said on a call with investors in January.

A Meta spokesperson said in a statement: "As we've said publicly, our next model will be good but, more importantly, show the rapid trajectory we're on, and then we'll steadily push the frontier over the course of the year as we continue to release new models. We're excited for people to see what we've been cooking very soon."
The Courts

Live Nation Execs Brag About 'Robbing' Ticket Buyers In Slack DMs (pitchfork.com) 81

An anonymous reader quotes a report from Pitchfork: Earlier this week, the U.S. Department of Justice and Live Nation reached a settlement in the DOJ's antitrust lawsuit against the concert giant. During the trial, which lasted only a week, representatives for Live Nation had moved to exclude a collection of Slack direct messages from 2022 between two of the company's regional directors from the evidence presented to the jury. Bloomberg and a number of other publications have, as of today (March 12), successfully petitioned New York federal judge Arun Subramanian to release the chats.

The conversations are between Ben Baker, now head of ticketing for Venue Nation, and Jeff Weinhold, currently a senior director in the ticketing department. Baker and Weinhold joke about overcharging and price-gouging fans -- "Robbing them blind, baby," Baker brags in one exchange pertaining to a Kid Rock show in Tampa Bay -- as well as being able to raise prices on ancillary services such as parking seemingly at will. "These people are so stupid," Baker writes. "I almost feel bad taking advantage of them BAHAHAHAHAHA."

Live Nation described the messages as "off-the-cuff banter, not policy, decision-making, or facts of consequence." In a statement, the company later added: "The Slack exchange from one junior staffer to a friend absolutely doesn't reflect our values or how we operate."
The Courts

London Man Wore Smart Glasses For High Court 'Coaching' (bbc.co.uk) 66

A witness in a London High Court case was caught using smart glasses connected to his phone to receive real-time coaching while giving evidence during cross-examination. "In my judgement, from what occurred in court, it is clear that call was made, connected to his smart glasses, and continued during his evidence until his mobile phone was removed from him," said Judge Raquel Agnello KC. "Not only have I held that Jakstys was untruthful in denying his use of the smart glasses and his calls to abra kadabra, but the effect of this is that his evidence is unreliable and untruthful." The BBC reports: The findings came in a ruling by Judge Raquel Agnello KC in a case brought by Laimonas Jakstys over the directorship of a property development company that owns a flat in south-east London and land in Tonbridge. Jakstys was told to remove the glasses after the court noticed he "seemed to pause quite a bit" before answering questions, and that "interference" was heard coming from around the witness. The judge later found that he had been "assisted or coached in his replies to questions put to him during cross examination" during the January trial.

Once the glasses were taken off, an interpreter was still translating a question when Jakstys' mobile phone began broadcasting a voice -- which he later blamed on ChatGPT. Agnello said: "There was clearly someone on the mobile phone talking to Jakstys. He then removed his mobile phone from his inner jacket pocket." He denied using the smart glasses to receive answers, and denied they were connected to his phone. But the judge said multiple calls had been made from his phone to a contact named "abra kadabra," who he claimed was a taxi driver.

Microsoft

Microsoft Backs Anthropic To Halt US DOD's 'Supply-Chain Risk' Designation (reuters.com) 35

joshuark shares a report from Reuters: Microsoft filed an amicus brief on Tuesday in support of Anthropic's lawsuit asking the court to temporarily block the U.S. Department of Defense's designation of the AI startup as a supply-chain risk. In the filing, submitted in federal court in San Francisco, Microsoft backed Anthropic's request for a temporary restraining order against the Pentagon order, arguing that the determination should be paused while the court considers the case. Microsoft, which integrates the AI lab's products and services into technology it provides to the U.S. military, said that it was directly impacted by the DOD designation.

"Should this action proceed without the entry of a temporary restraining order, Microsoft and other government contractors with expertise in developing solutions to support U.S. government missions will be forced to account for a new risk in their business planning," the company said. Microsoft's filing argued the TRO is needed to prevent costly disruptions for suppliers, who would otherwise have to rapidly rebuild offerings that rely on Anthropic's products. The judge overseeing the case must approve Microsoft's request to file the brief before it is officially entered, but courts often permit outside parties to weigh in on important cases.

Chrome

Google Chrome Is Finally Coming To ARM64 Linux (nerds.xyz) 35

BrianFagioli writes: Google says it will finally release Chrome for ARM64 Linux in the second quarter of 2026, bringing the company's full browser to a platform that has existed for years without official support. Until now, Linux users running Arm hardware have largely relied on Chromium builds or unofficial packages if they wanted something close to Chrome. Google says the new build will include the same features found on other platforms, including Google account syncing, Chrome Web Store extensions, built-in translation, Safe Browsing protections, and Google Password Manager.

The timing reflects how ARM hardware is becoming more common across the Linux ecosystem, from developer laptops to AI systems. Google also pointed to NVIDIA's DGX Spark, a compact AI supercomputing device built on the Grace Blackwell architecture, which will support installing Chrome through NVIDIA's package management tools. For many Linux users, the announcement feels like a "finally" moment, as ARM64 Linux systems have been widespread for years despite the absence of an official Chrome build.

Businesses

Adobe CEO to Step Down After 18 Years 41

Shantanu Narayen announced he will step down as CEO of Adobe once a successor is appointed, ending an 18-year tenure during which he transformed the company from boxed software to the Creative Cloud subscription model. Narayen said he will remain board chair as Adobe continues pushing into generative AI products. CNBC reports: Narayen joined Adobe in 1998 as a vice president and general manager, and he became CEO in 2007. Under Narayen, Adobe pushed from software licenses to subscriptions to its Creative Cloud application bundle, and the company is now working to expand through generative artificial intelligence. He sought to acquire fast-growing design software company Figma, but regulators pushed back, and the companies called off the deal, resulting in Adobe paying Figma a $1 billion breakup fee. [...]

Narayen, 62, is lead independent director of Pfizer in addition to his responsibilities at Adobe, where he received $51 million in total compensation for the 2025 fiscal year, according to a filing. He owns $118 million in Adobe shares, according to FactSet. [...] On Narayen's watch, Adobe's stock jumped more than sixfold, while the S&P 500 is up about 350% over that stretch.

"What attracted me to Adobe 28 years ago was our leadership in creating new market categories, world-class products, a relentless desire to innovate in every functional area of the company and the people I met during the interview process," Narayen wrote. "We have continued to create new markets, deliver world-class products, drive innovation in everything we do and attract and retain the best and brightest employees."
Businesses

Atlassian CEO Cites AI Shift When Announcing Plan To Shed 1,600 Jobs (bloomberg.com) 39

An anonymous reader quotes a report from Bloomberg: Atlassian plans to cut 1,600 jobs, or a tenth of its global workforce, joining rivals in slashing staffing to cope with the advent of AI and a broader post-Covid industry slowdown. Australian billionaire founder Mike Cannon-Brookes explained the reductions in a staff memo, while also announcing that his chief technology officer was leaving the Sydney-based company. "It would be disingenuous to pretend AI doesn't change the mix of skills we need or the number of roles required in certain areas," Cannon-Brookes said. "It does."
Businesses

GFiber and Astound Broadband To Join Forces (lightreading.com) 16

GFiber (a.k.a. Google Fiber) and Astound Broadband announced that they plan to merge in a deal backed by infrastructure investor Stonepeak Infrastructure Partners. The resulting company will be majority owned by Stonepeak, with Alphabet becoming a "significant minority shareholder." Light Reading reports: Stonepeak Infrastructure Partners teamed with Patriot Media to acquire Astound in November 2020 for $8.1 billion. Stonepeak is Astound's largest investor. The deal is expected to close in the fourth quarter of 2026. The combined business will be led by the existing GFiber executive team. GFiber is currently led by CEO Dinni Jain. Jain, a former Time Warner Cable and Insight Communications exec, took the helm of what was then called Google Fiber in 2018.

"This agreement advances GFiber's mission of redefining internet connectivity and represents a major step toward its goal of operational and financial independence," the companies said. "GFiber will have the external capital and strategic focus needed to accelerate its next phase of growth, expanding its customer-first approach and pioneering fiber technology across the country." GFiber's combination with Astound represents "a strategic opportunity to scale our customer-focused approach to connect more households to a truly different type of internet service," Jain said in a statement.

AI

Grammarly Disables Tool Offering Generative-AI Feedback Credited To Real Writers 13

Grammarly has disabled its Expert Review feature after backlash from writers whose names were used to present AI-generated feedback without their permission. Superhuman (formerly Grammarly) CEO Shishir Mehrotra wrote in a LinkedIn post that the company will disable Expert Review while they "reimagine" the feature: Back in August, we launched a Grammarly agent called Expert Review. The agent draws on publicly available information from third-party LLMs to surface writing suggestions inspired by the published work of influential voices.

Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices. This kind of scrutiny improves our products, and we take it seriously. As context, the agent was designed to help users discover influential perspectives and scholarship relevant to their work, while also providing meaningful ways for experts to build deeper relationships with their fans. We hear the feedback and recognize we fell short on this. I want to apologize and acknowledge that we'll rethink our approach going forward.

After careful consideration, we have decided to disable Expert Review while we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented -- or not represented at all.

We deeply believe in our mission to solve the "last mile of AI" by bringing AI directly to where people work, and we see this as a significant opportunity for experts. For millions of users, Grammarly is a trusted writing sidekick -- ever-present in every application, ready to help. We're opening up this platform so anyone can build agents that work like Grammarly -- expanding from one sidekick to a whole team. Imagine your professor sharpening your essay, your sales leader reshaping a customer pitch, a thoughtful critic challenging your arguments, or a leading expert elevating your proposal. For experts, this is a chance to build that same ubiquitous bond with users, much like Grammarly has. But in this world, experts choose to participate, shape how their knowledge is represented, and control their business model. That future excites me, and I hope to build it with experts who want to develop it alongside us.
The Courts

Binance Sues WSJ, Panicked By Gov't Probes Into Sanctioned Crypto Transfers (arstechnica.com) 34

An anonymous reader quotes a report from Ars Technica: Binance is hoping that suing (PDF) The Wall Street Journal for defamation might help shake off a fresh round of government probes into how the cryptocurrency exchange failed to detect $1.7 billion in transfers to a network that was funding Iran-backed terror groups. The lawsuit comes after a Wall Street Journal investigation, based on conversations with insiders and reviews of internal documents, reported that Binance had quietly dismantled its own investigation into the unlawful transfers and then fired compliance staff who initially flagged them.

Alleging that the report falsely accused Binance of retaliation -- among 10 other allegedly false claims -- Binance accused the Journal of conducting a "sham" investigation that intentionally disregarded the company's statements. That included supposedly failing to note that Binance had not closed its investigation into the unlawful transfers. Binance's role in the large-scale violation of US sanctions laws is currently being investigated by the Justice and Treasury Departments. Congress members also took notice, including Sen. Richard Blumenthal (D-Conn.), ranking member of the Senate Permanent Subcommittee on Investigations (PSI), who launched an additional inquiry. In a letter to Binance CEO Richard Teng, Blumenthal cited the Journal's report, as well as reporting from The New York Times and Fortune, while demanding that Binance explain how it managed to overlook the money-laundering for so long and why compliance staff members were fired.

In its complaint Wednesday, Binance claimed that these probes may "be just the tip of the iceberg" if the record is not corrected. The reputational harm is particularly damaging, the exchange noted, since Binance has allegedly worked hard to strengthen its compliance after reaching a settlement with the US government in 2023. In taking that plea deal, Binance admitted to violating anti-money laundering and sanctions laws and paid a $4.3 billion fine, and its founder, Changpeng Zhao, eventually pled guilty to a related charge. Since that scandal, Binance claimed that the WSJ has "made a business of maligning both the cryptocurrency industry generally and Binance specifically." That's why the Journal allegedly rushed to publish its story following a similar New York Times investigation. Alleging that the WSJ was financially motivated to publish a negative story that would get more clicks, Binance claimed the Journal provided little time to respond and then failed to make necessary corrections before and after publication.

AI

Nvidia Is Planning to Launch Its Own Open-Source OpenClaw Competitor (wired.com) 21

Nvidia is preparing to launch an open-source AI agent platform called NemoClaw, designed to compete with the likes of OpenClaw. According to Wired, the platform will allow enterprise software companies to dispatch AI agents to perform tasks for their own workforces. "Companies will be able to access the platform regardless of whether their products run on Nvidia's chips," the report adds. From the report: The move comes as Nvidia prepares for its annual developer conference in San Jose next week. Ahead of the conference, Nvidia has reached out to companies including Salesforce, Cisco, Google, Adobe, and CrowdStrike to forge partnerships for the agent platform. It's unclear whether these conversations have resulted in official partnerships. Since the platform is open source, it's likely that partners would get free, early access in exchange for contributing to the project, sources say. Nvidia plans to offer security and privacy tools as part of this new open-source agent platform. [...]

For Nvidia, NemoClaw appears to be part of an effort to court enterprise software companies by offering additional layers of security for AI agents. It's also another step in the company's embrace of open-source AI models, part of a broader strategy to maintain its dominance in AI infrastructure at a time when leading AI labs are building their own custom chips. Nvidia's software strategy until now has been heavily reliant on its CUDA platform, a famously proprietary system that locks developers into building software for Nvidia's GPUs and has created a crucial "moat" for the company.

YouTube

YouTube Expands AI Deepfake Detection To Politicians, Government Officials, and Journalists 43

YouTube is expanding its AI deepfake detection tools to a pilot group of politicians, government officials, and journalists, allowing them to identify and request removal of unauthorized AI-generated videos impersonating them. TechCrunch reports: The technology itself launched last year to roughly 4 million YouTube creators in the YouTube Partner Program, following earlier tests. Similar to YouTube's existing Content ID system, which detects copyright-protected material in users' uploaded videos, the likeness detection feature looks for simulated faces made with AI tools. These tools are sometimes used to spread misinformation and manipulate people's perception of reality, leveraging deepfaked personas of notable figures -- like politicians or other government officials -- to make it appear they said and did things they never did in real life.

With the new pilot program, YouTube aims to balance users' free expression with the risks associated with AI technology that can generate a convincing likeness of a public figure. [...] [Leslie Miller, YouTube's vice president of Government Affairs and Public Policy] explained that not all of the detected matches would be removed when requested. Instead, YouTube would evaluate each request under its existing privacy policy guidelines to determine whether the content is parody or political critique, which are protected forms of free expression. The company noted it's advocating for these protections at a federal level, too, with its support for the NO FAKES Act in D.C., which would regulate the use of AI to create unauthorized recreations of an individual's voice and visual likeness.

To use the new tool, eligible pilot testers must first prove their identity by uploading a selfie and a government ID. They can then create a profile, view the matches that show up, and optionally request their removal. YouTube says it plans to eventually give people the ability to prevent uploads of violating content before they go live or, possibly, allow them to monetize those videos, similar to how its Content ID system works. The company would not confirm which politicians or officials would be among its initial testers, but said the goal is to make the technology broadly available over time.
