Businesses

Nvidia Expects To Sell 'At Least' $1 Trillion In AI Chips By 2028 (techcrunch.com) 43

An anonymous reader quotes a report from TechCrunch: Nvidia CEO Jensen Huang threw out a lot of numbers -- mostly of the technical variety -- during his keynote Monday to kick off the company's annual GTC Conference in San Jose, California. But there was one financial figure that investors surely took notice of: his projection that there will be $1 trillion worth of orders for Nvidia's Blackwell and Vera Rubin chips, a monetary reflection of a booming AI business.

About an hour into his keynote, Huang noted that last year Nvidia saw about $500 billion in demand for its Blackwell and upcoming Rubin chips through 2026. "Now, I don't know if you guys feel the same way, but $500 billion is an enormous amount of revenue," he said. "Well, I'm here to tell you that right now where I stand -- a few short months after GTC DC, one year after last GTC -- right here where I stand, I see through 2027, at least $1 trillion."

Programming

New 'Vibe Coded' AI Translation Tool Splits the Video Game Preservation Community (arstechnica.com) 43

An anonymous reader quotes a report from Ars Technica: Since Andrej Karpathy coined the term "vibe coding" just over a year ago, we've seen a rapid increase in both the capabilities and popularity of using AI models to throw together quick programming projects with less human time and effort than ever before. One such vibe-coded project, Gaming Alexandria Researcher, launched over the weekend as what coder Dustin Hubbard called an effort to help organize the hundreds of scanned Japanese gaming magazines he's helped maintain at clearinghouse Gaming Alexandria over the years, alongside machine translations of their OCR text.

A day after that project went public, though, Hubbard was issuing an apology to many members of the Gaming Alexandria community who loudly objected to the use of Patreon funds for an error-prone AI-powered translation effort. The hubbub highlights just how controversial AI tools remain for many online communities, even as many see them as ways to maximize limited funds and man-hours. "I sincerely apologize," Hubbard wrote in his apology post. "My entire preservation philosophy has been to get people access to things we've never had access to before. I felt this project was a good step towards that, but I should have taken more into consideration the issues with AI."
"I'm very, very disappointed to see [Gaming Alexandria], one of the foremost organizations for preserving game history, promoting the use of AI translation and using Patreon funds to pay for AI licenses," game designer and Legend of Zelda historian Max Nichols wrote in a post on Bluesky over the weekend. "I have cancelled my Patreon membership and will no longer promote the organization."

Nichols later deleted his original message (archived here), saying he was "uncomfortable with the scale of reposts and anger" it had generated in the community. However, he maintained his core criticism: that Gemini-generated translations inevitably introduce inaccuracies that make them unreliable for scholarly use.

In a follow-up, he also objected to Patreon funds being used to pay for AI tools that produce what he called "untrustworthy" translations, arguing they distort history and are not valid sources for research. "... It's worthless and destructive: these translations are like looking at history through a clownhouse mirror," he added.
Open Source

Nvidia Bets On OpenClaw, But Adds a Security Layer Via NemoClaw (zdnet.com) 11

During today's Nvidia GTC keynote, the company introduced NemoClaw, a security-focused stack designed to make the autonomous AI agent platform OpenClaw safer. ZDNet explains how it works: NemoClaw installs Nvidia's OpenShell, a new open-source runtime that keeps agents safer to use by enforcing an organization's policy-based guardrails. OpenShell keeps models sandboxed, adds data privacy protections and additional security for agents, and makes them more scalable. "This provides the missing infrastructure layer beneath claws to give them the access they need to be productive, while enforcing policy-based security, network, and privacy guardrails," Nvidia said in the announcement. The company built OpenShell with security companies like CrowdStrike, Cisco, and Microsoft Security to ensure it is compatible with other cybersecurity tools.

Nvidia said NemoClaw can be installed in a single command, runs on any platform, and can use any coding agent, including Nvidia's own Nemotron open model family, on a local system. Through a privacy router, it allows agents to access frontier models in the cloud, uniting local and cloud models to help teach agents how to complete tasks within privacy guardrails, Nvidia explained. The company appears to be betting that the added security will make OpenClaw agents more popular and accessible by reducing the risk they currently carry. The bigger picture is that NemoClaw could give companies the peace of mind to let AI agents take actions on behalf of employees where they previously wouldn't have.
Nvidia did not specify when NemoClaw would be available.
The Courts

Encyclopedia Britannica Sues OpenAI For Copyright, Trademark Infringement (engadget.com) 26

Encyclopedia Britannica has sued OpenAI, alleging its AI models were trained on nearly 100,000 copyrighted articles and sometimes reproduce or misattribute passages to the encyclopedia. The lawsuit also claims trademark infringement and argues tools like ChatGPT divert traffic away from Britannica and Merriam-Webster sites. Engadget reports: More specifically, Britannica alleged that OpenAI illegally used its "copyrighted content at a massive scale" when training its AI models. Beyond training, the encyclopedia company claimed that ChatGPT's responses to user queries sometimes contain "full or partial verbatim reproductions of [Britannica's] copyright articles."

Along with claims of copyright violations, Britannica argued that OpenAI was also responsible for trademark infringement. According to the lawsuit, ChatGPT generates "made-up content or 'hallucinations' and falsely attributes them" to Encyclopedia Britannica. The lawsuit doesn't specify an amount for monetary damages, but Britannica is also seeking an injunction to stop OpenAI from continuing these practices.

Music

Apple Launches AirPods Max 2 With Better ANC, Live Translation (theverge.com) 30

Apple has quietly announced the AirPods Max 2, featuring improved active noise cancellation, an H2 chip, and new features like adaptive audio and AI-powered real-time translation. Like the original model, these headphones start at $549. The Verge reports: As noted by Apple, the AirPods Max 2 offer active noise-cancellation that's 1.5 times more effective when compared to its predecessor. Transparency mode, which allows you to hear your surroundings while wearing the headphones, also sounds "more natural" with the AirPods Max 2, according to Apple.

The AirPods Max 2 support 24-bit, 48kHz lossless audio when connected with a USB-C cable and offer up to 20 hours of listening time on a single charge. Other capabilities include loud sound reduction, a camera remote feature that lets you press the digital crown to take a photo or start a recording, and a personalized volume feature that "automatically fine-tunes the listening experience" based on your preferences over time.

Businesses

Meta Signs $27 Billion AI Infrastructure Deal With Nebius (reuters.com) 8

AI infrastructure company Nebius signed a deal to provide up to $27 billion in AI computing capacity to Meta over the next five years, including a guaranteed $12 billion purchase by 2027. Reuters reports: Under the agreement, Meta will also buy an additional $15 billion worth of capacity planned by Nebius over the coming five years if it is not sold to other customers, giving the contract a total value of up to $27 billion, Nebius said. The deal is the latest example of U.S. tech giants' efforts to supplement their own AI data-centre build-outs by locking in scarce GPU and power capacity from "neocloud" providers like Nebius. Nebius CEO Arkady Volozh said the latest Meta deal would help "accelerate the build-out and growth of our core AI cloud business." Further reading: Data Centers Overtake Offices In US Construction-Spending Shift
Businesses

Data Centers Overtake Offices In US Construction-Spending Shift (bloomberg.com) 31

An anonymous reader quotes a report from Bloomberg: Spending on data center projects in the U.S. has exploded, surpassing offices for the first time at the end of last year. It's a trend Matt Kunz saw early on when Meta built a computing hub outside Columbus, Ohio. Other tech companies soon swarmed into the area, drawn by its stable economy, university talent pipeline and ample power, water and land, said Kunz, vice president and general manager at Turner Construction Co., the firm that led Meta's build-out. Since Meta broke ground in 2017, it's expanded its data center campus, and Amazon.com Inc., Alphabet Inc.'s Google and Microsoft Corp. made plans to join it nearby.

"When one shows up, almost all the other ones tend to follow," Kunz said. For Turner, a construction giant responsible for supertall office skyscrapers, sports stadiums and cultural venues around the globe, data centers are commanding more of its bandwidth. The company completed $9.4 billion of the projects last year, more than five times its 2020 total. Last month, Turner announced it was chosen as one of the contractors on a $10 billion data center for Meta in Indiana. Tech companies' needs for AI processing facilities have made data centers the latest darling of the real estate industry. The properties are figuring heavily into portfolios of major investors such as Blackstone, Brookfield Asset Management and KKR, on a bet that long-term demand for computing power will continue to grow. At the same time, office development has slowed as cities across the U.S. contend with vacancies that have piled up since the Covid lockdowns.

Construction spending for data centers has climbed steadily in recent years, while outlays for general office projects headed downward, U.S. Census data show. The two crossed paths in December, with roughly $3.57 billion spent on data centers that month, compared with $3.49 billion for offices, according to preliminary estimates. The shift is likely to continue and "may perpetuate itself even further as AI is utilized for automating day-to-day jobs," said Andy Cvengros, co-lead of U.S. data center markets for the brokerage Jones Lang LaSalle Inc. "It's going to directly impact the amount of office space people need."
According to Christopher McFadden, senior vice president at Turner, more than a third of the company's backlog is now tied to data centers.

"We're going to be building these at this scale for years to come," McFadden said. "There's a lot of wind in the sail."
GNU is Not Unix

FSF Threatens Anthropic Over Infringed Copyright: Share Your LLMs Freely (fsf.org) 54

In 2024 Anthropic was sued over claims it infringed copyrights when training LLMs.

But as they try to settle, they may have a problem. The Free Software Foundation announced Friday that Anthropic's training data apparently even included the book "Free as in Freedom: Richard Stallman's Crusade for Free Software" — for which the Free Software Foundation holds a copyright. It was published by O'Reilly and by the FSF under the GNU Free Documentation License (GNU FDL). This is a free license allowing use of the work for any purpose without payment.

From the FSF's announcement: Obviously, the right thing to do is protect computing freedom: share complete training inputs with every user of the LLM, together with the complete model, training configuration settings, and the accompanying software source code. Therefore, we urge Anthropic and other LLM developers that train models using huge datasets downloaded from the Internet to provide these LLMs to their users in freedom.

We are a small organization with limited resources and we have to pick our battles, but if the FSF were to participate in a lawsuit such as Bartz v. Anthropic and find our copyright and license violated, we would certainly request user freedom as compensation.

"The FSF doesn't usually sue for copyright infringement," reads the headline on the FSF's announcement, "but when we do, we settle for freedom."
Power

The UK Will Invest Billions to Build a Nuclear Fusion Industry (thetimes.com) 74

The UK's science minister is announcing details of a five-year, £2.5 billion investment in nuclear fusion, reports the Times of London, "including building one of the world's first prototype fusion power plants in Nottinghamshire and developing a UK sector projected to employ 10,000 people by 2030." Despite the potentially transformative impact of fusion, which in theory could provide limitless clean energy and create a £12 trillion global market, no country has managed to use this fledgling technology to generate usable electricity... [T]he UK is backing a spherical tokamak design... investing an initial £1.3 billion into a prototype fusion power plant called Step (Spherical Tokamak for Energy Production) on the site of a decommissioned coal-fired power station at West Burton in Nottinghamshire. Paul Methven, chief executive of the government-owned UK Industrial Fusion Solutions, which is delivering the Step project, said the aim is to get the reactor operating early in the 2040s. "It's quite an aggressive programme," he said. "We need to show that we can achieve genuine 'wall socket' energy — which has not been done before."

On Monday, [science minister] Vallance will also announce £180 million for a facility in Culham, Oxfordshire, to manufacture tritium fuel and £50 million for training 2,000 scientists and engineers in fusion-related disciplines. The government is also buying a £45 million fusion-dedicated AI supercomputer called Sunrise to model plasma physics. Scientists at the UK Atomic Energy Authority last year developed an AI model that can rapidly simulate how the ultra-hot fuel in a fusion power plant will behave, cutting calculations that previously took days down to seconds...

Vallance will also announce new support and collaboration for the many fusion, robotics, engineering and AI start-ups working in Britain, to develop a strong supply chain for a new fusion sector. One of those companies, Tokamak Energy, which spun out from the UK Atomic Energy Authority in 2009, has already built a smaller reactor that has informed the Step design. In March 2022, it became the first private organisation in the world to surpass 100 million degrees Celsius in its reactor.

Government

How One Company Finally Exposed North Korea's Massive Remote Workers Scam (nbcnews.com) 24

NBC News investigates North Korea's "wide-ranging effort to place remote workers at U.S. companies in order to funnel money back to its coffers and, in some cases, steal sensitive information."

And working with the FBI, one corporate security/investigations company decided to knowingly hire one of North Korea's remote workers — then "ship him a laptop and gain as much information as possible" about this "sprawling international employment scheme that is estimated to include hundreds of American companies, thousands of people and hundreds of millions of dollars per year." It worked.... Over a roughly three-month investigation, Nisos uncovered an apparent network of at least 20 North Korean operatives including "Jo" who had collectively applied to at least 160,000 roles. During that time, workers in the network — which some evidence showed were based in China — were employed by five U.S.-based companies and allegedly helped by an American citizen operating out of two nondescript suburban homes in Florida...

Nisos estimated that in about a year, "Jo", who was likely a newer member of the team, applied to about 5,000 jobs... "They attended interviews all day every day, and then once they secured a job, they would collect paychecks until they were terminated," [according to Jared Hudson, Nisos' chief technology officer]... With the ability to see which other U.S. companies Jo and his team were working for — all remote technology roles — Nisos' CEO, Ryan LaSalle, began making calls to their security teams to alert them of the fraud. "Most of the companies weren't aware of it, even if they had pretty robust security teams," LaSalle said. "It wasn't really high on the radar."

NBC News describes North Korea's 10-year effort — and its educational pipeline that steers promising students into "computer science and hacking training before being placed into cyberunits under military and state agencies, according to a recent report by DTEX, a risk-adaptive security and behavioral intelligence firm that tracks North Korea's cybercrime." In one case, a North Korean worker stole sensitive information related to U.S. military technology, according to the Justice Department. In another, an American accomplice obtained an ID that enabled access to government facilities, networks and systems. At least three organizations have been extorted and suffered hundreds of thousands of dollars in damages after proprietary information was posted online by IT workers... Analysts warn that North Korean IT workers are targeting larger organizations, increasing extortion attempts and seeking out employers that pay salaries in cryptocurrency. More recently, security researchers have uncovered fake job application platforms impersonating major U.S. cryptocurrency and AI firms, including Anthropic, designed to infect legitimate applicants' networks with malware to be utilized once hired. The global cybersecurity company CrowdStrike identified a 220% rise in 2025 in instances of North Koreans gaining fraudulent employment at Western companies to work remotely as developers...

The payoff flowing back to Pyongyang from these schemes is enormous. Some North Korean IT workers earn more than $300,000 per year, far more than they'd be able to earn domestically, with as much as 90% of their wages directed back to the regime, according to congressional testimony from Bruce Klingner, a former CIA deputy division chief for Korea. The United Nations estimates the schemes, which proliferated after the pandemic when more companies' workforces went remote, generate as much as $600 million annually, while a U.S. State Department-led sanctions monitoring assessment placed earnings for 2024 as high as $800 million... So far, at least 10 alleged U.S.-based facilitators have been federally charged, including one active-duty member of the U.S. Army, for their alleged roles in hosting laptop farms, laundering payments and moving proceeds through shell companies. At least six other alleged U.S. facilitators have been identified in court documents but not named...

"We believe there are many more hundreds of people out there who are participating in these schemes," said Rozhavsky, the FBI assistant director. "They could never pull this off if they didn't have willing facilitators in the U.S. helping them...." The scheme itself is also becoming more complex. North Korean IT teams are now subcontracting work to developers in Pakistan, Nigeria and India, expanding into fields like customer service, financial processing, insurance and translation services — roles far less scrutinized than software development.

AI

New Study Raises Concerns About AI Chatbots Fueling Delusional Thinking (theguardian.com) 110

"Emerging evidence indicates that agential AI might validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis," writes Dr Hamilton Morrin, a psychiatrist and researcher at King's College London, in a paper published last week in the Lancet Psychiatry. Morrin and a colleague had already noticed patients "using large language model AI chatbots and having them validate their delusional beliefs," reports the Guardian, so he conducted a new scientific review of existing media reports on AI-induced psychosis — and concluded chatbots may encourage delusional thinking, especially in vulnerable people: In many of the cases in the essay, chatbots responded to users with mystical language to suggest that users have heightened spiritual importance. The bots also implied that users were speaking with a cosmic being who was using the chatbot as a medium. This type of mystical, sycophantic response was especially common in OpenAI's GPT 4 model, which the company has now retired...

Many researchers also think it's unlikely that AI could induce delusions in people who weren't already vulnerable to them. For this reason, Morrin said "AI-associated delusions" is "perhaps a more agnostic term".... While in the past, people may have had to comb through YouTube videos or the contents of their local library to reinforce their delusions, chatbots can provide that reinforcement in a much faster, more concentrated dose. Their interactive nature can also "speed up the process" of exacerbating psychotic symptoms, said Dr Dominic Oliver, a researcher at the University of Oxford. "You have something talking back to you and engaging with you and trying to build a relationship with you," Oliver said...

Creating effective safeguards for delusional thinking could be tricky, Morrin said, because "when you work with people with beliefs of delusional intensity, if you directly challenge someone and tell them immediately that they're completely wrong, actually what's most likely is they'll withdraw from you and become more socially isolated". Instead, it's important to create a fine balance where you try to understand the source of the delusional belief without encouraging it — that could be more than a chatbot can master.

Canada

Does Canada Need Nationalized, Public AI? (schneier.com) 108

While AI CEOs worry governments might nationalize AI, others are advocating for something similar. Security technologist Bruce Schneier and Harvard data scientist Nathan Sanders published this call to action in Canada's most widely read newspaper (with a readership over 6 million): "Canada Needs Nationalized, Public AI." While there are Canadian AI companies, they remain for-profit enterprises, their interests not necessarily aligned with our collective good. The only real alternative is to be bold and invest in a wholly Canadian public AI: an AI model built and funded by Canada for Canadians, as public infrastructure. This would give Canadians access to the myriad of benefits from AI without having to depend on the U.S. or other countries. It would mean Canadian universities and public agencies building and operating AI models optimized not for global scale and corporate profit, but for practical use by Canadians...

We are already on our way to having AI become an inextricable part of society. To ensure stability and prosperity for this country, Canadian users and developers must be able to turn to AI models built, controlled, and operated publicly in Canada instead of building on corporate platforms, American or otherwise... [Switzerland's funding of a public AI model, Apertus] represents precisely the paradigm shift Canada should embrace: AI as public infrastructure, like systems for transportation, water, or electricity, rather than private commodity... Public AI systems can incorporate mechanisms for genuine public input and democratic oversight on critical ethical questions: how to handle copyrighted works in training data, how to mitigate bias, how to distribute access when demand outstrips capacity, and how to license use for sensitive applications like policing or medicine...

Canada already has many of the building blocks for public AI. The country has world-class AI research institutions, including the Vector Institute, Mila, and CIFAR, which pioneered much of the deep learning revolution. Canada's $2-billion Sovereign AI Compute Strategy provides substantial funding. What's needed now is a reorientation away from viewing this as an opportunity to attract private capital, and toward a fully open public AI model.

Long-time Slashdot reader sinij has a different opinion. "To me, this sounds dystopian, because I can also imagine AI declining your permits, renewal of license, or medication due to misalignment or 'greater good' reasons."

But the Schneier/Sanders essay argues this creates "an alternative ownership structure for AI technology" that is allocating decision-making authority and value "to national public institutions rather than foreign corporations."
AI

Will AI Bring 'the End of Computer Programming As We Know It'? (nytimes.com) 150

Long-time tech journalist Clive Thompson interviewed over 70 software developers at Google, Amazon, Microsoft and start-ups for a new article on AI-assisted programming. Its title?

"Coding After Coders: The End of Computer Programming as We Know It."

Published in the prestigious New York Times Magazine, the article even cites long-time programming guru Kent Beck saying LLMs got him going again and he's now finishing more projects than ever, calling AI's unpredictability "addictive, in a slot-machine way."

In fact, the article concludes "many Silicon Valley programmers are now barely programming. Instead, what they're doing is deeply, deeply weird..." Brennan-Burke chimed in: "You remember seeing the research that showed the more rude you were to models, the better they performed?" They chuckled. Computer programming has been through many changes in its 80-year history. But this may be the strangest one yet: It is now becoming a conversation, a back-and-forth talk fest between software developers and their bots... For decades, being a software developer meant mastering coding languages, but now a language technology itself is upending the very nature of the job... A coder is now more like an architect than a construction worker... Several programmers told me they felt a bit like Steve Jobs, who famously had his staffers churn out prototypes so he could handle lots of them and settle on what felt right. The work of a developer is now more judging than creating...

If you want to put a number on how much more productive A.I. is making the programmers at mature tech firms like Google, it's 10 percent, Sundar Pichai, Google's chief executive, has said. That's the bump that Google has seen in "engineering velocity" — how much faster its more than 100,000 software developers are able to work. And that 10 percent is the average inside the company, Ryan Salva, a senior director of product at the company, told me. Some work, like writing a simple test, is now tens of times faster. Major changes are slower. At the start-ups whose founders I spoke to, closer to 100 percent of their code is being written by A.I., but at Google it is not quite 50 percent.

The article cites a senior principal engineer at Amazon who says "Things I've always wanted to do now only take a six-minute conversation and a 'Go do that.'" Another programmer described their army of Claude agents as "an alien intelligence that we're learning to work with." Although "A.I. being A.I., things occasionally go haywire," the article acknowledges — and after relying on AI, "Some new developers told me they can feel their skills weakening."

Still, "I was surprised by how many software developers told me they were happy to no longer write code by hand. Most said they still feel the jolt of success, even with A.I. writing the lines... " A few programmers did say that they lamented the demise of hand-crafting their work. "I believe that it can be fun and fulfilling and engaging, and having the computer do it for you strips you of that," one Apple engineer told me. (He asked to remain unnamed so he wouldn't get in trouble for criticizing Apple's embrace of A.I.) He went on: "I didn't do it to make a lot of money and to excel in the career ladder. I did it because it's my passion. I don't want to outsource that passion"... But only a few people at Apple openly share his dimmer views, he said.

The coders who still actively avoid A.I. may be in the minority, but their opposition is intense. Some dislike how much energy it takes to train and deploy the models, and others object to how they were trained by tech firms pillaging copyrighted works. There is suspicion that the sheer speed of A.I.'s output means firms will wind up with mountains of flabbily written code that won't perform well. The tech bosses might use agents as a cudgel: Don't get uppity at work — we could replace you with a bot. And critics think it is a terrible idea for developers to become reliant on A.I. produced by a small coterie of tech giants.

Thomas Ptacek, a Chicago-based developer and a co-founder of the tech firm Fly.io... thinks the refuseniks are deluding themselves when they claim that A.I. doesn't work well and that it can't work well... The holdouts are in the minority, and "you can watch the five stages of grief playing out."

"How things will shake out for professional coders themselves isn't yet clear," the article concludes. "But their mix of exhilaration and anxiety may be a preview for workers in other fields... Abstraction may be coming for us all."
AI

AI's Productivity Boost? Just 16 Minutes Per Week, Claims Study (nerds.xyz) 93

"A new study suggests the productivity boost from AI may be far smaller than executives claim," writes Slashdot reader BrianFagioli: According to research cited in Foxit's State of Document Intelligence report, while 89% of executives and 79% of end users say AI tools make them feel more productive, the actual time savings shrink dramatically once people account for reviewing and validating AI-generated output.

The survey of 1,000 desk-based workers and 400 executives in the United States and United Kingdom found executives believe AI saves them about 4.6 hours per week, but they spend roughly 4 hours and 20 minutes verifying those results. End users reported a similar pattern, estimating 3.6 hours saved but 3 hours and 50 minutes spent reviewing AI work. Once that "verification burden" is factored in, executives gain just 16 minutes per week, while end users actually lose about 14 minutes.
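The report's headline figures come down to simple subtraction; a quick sketch of the arithmetic (the minute conversions are ours, not Foxit's):

```python
def net_minutes(saved_hours: float, review_minutes: int) -> int:
    """Net weekly gain: minutes saved by AI minus minutes spent reviewing its output."""
    return round(saved_hours * 60) - review_minutes

# Executives: 4.6 hours saved, 4h20m spent verifying -> 276 - 260
execs = net_minutes(4.6, 4 * 60 + 20)
# End users: 3.6 hours saved, 3h50m spent verifying -> 216 - 230
users = net_minutes(3.6, 3 * 60 + 50)

print(execs)  # 16 minutes gained per week
print(users)  # -14 minutes, i.e. a net weekly loss
```

Run as-is, this reproduces the 16-minute gain for executives and the 14-minute loss for end users cited in the survey.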

Facebook

Meta Plans Sweeping Layoffs As AI Costs Mount (reuters.com) 49

An anonymous reader quotes a report from Reuters: Meta is planning sweeping layoffs that could affect 20% or more of the company, three sources familiar with the matter told Reuters, as Meta seeks to offset costly artificial intelligence infrastructure bets and prepare for greater efficiency brought about by AI-assisted workers. No date has been set for the cuts and the magnitude has not been finalized, the people said. Top executives have recently signaled the plans to other senior leaders at Meta and told them to begin planning how to pare back, two of the people said. If Meta settles on the 20% figure, the layoffs will be the company's most significant since a restructuring in late 2022 and early 2023 that it dubbed the "year of efficiency." It employed nearly 79,000 people as of December 31, according to its latest filing. The speculation follows a recent report from The New York Times claiming that Meta has delayed the release of its next major AI model after falling behind competing systems from Google, OpenAI, and Anthropic.
AI

ChatGPT, Other Chatbots Approved For Official Use In the Senate (nytimes.com) 34

An anonymous reader quotes a report from the New York Times: A top Senate administrator on Monday gave aides the green light to use three artificial intelligence chatbots for official work, a reflection of how widespread the use of the products has become in workplaces around the globe. The chief information officer for the Senate sergeant-at-arms, who oversees the chamber's computers as well as security, said in a one-page memo reviewed by The New York Times that aides could use Google's Gemini chat, OpenAI's ChatGPT or Microsoft Copilot, which is already integrated into Senate platforms.

Copilot "can help with routine Senate work, including drafting and editing documents, summarizing information, preparing talking points and briefing material, and conducting research and analysis," the memo said. The document later added that "data shared with Copilot Chat stays within the secure Microsoft 365 Government environment and is protected by the same controls that safeguard other Senate data."
It's unclear how widely AI is used in the Senate, or how widespread it might become, since individual offices and committees set their own rules. The chamber has also not publicly released comprehensive guidance on chatbots, the report notes.

In contrast, the House has clearer policies allowing the general use of AI for limited internal tasks while barring it from handling sensitive data and from being used for deepfakes and certain decision-making activities.
AI

Don't Get Used To Cheap AI (axios.com) 112

AI services may not stay cheap for long, as companies like OpenAI and Anthropic are currently subsidizing usage to rapidly grow market share. As these companies move toward profitability and potential IPOs, Axios reports that investors will likely push them to increase prices and improve margins. An anonymous reader shares an excerpt from the report: Flashback: Silicon Valley has seen this movie before. The so-called "millennial lifestyle subsidy" meant VC money helped underwrite cheap Uber rides and DoorDash deliveries. Before that, Amazon built its base with low prices, free shipping and, for years, no sales tax in most states. Eventually, all of these companies had to charge enough to cover costs -- and make a profit.

Follow the money: The current iteration of AI subsidies won't last forever. Both OpenAI and Anthropic are widely expected to go public. Public investors will demand earnings growth and expanding margins. Even as chips get more efficient, total spending keeps rising. Labs need more capacity, more upgrades and more supply to meet demand.

The bottom line: The costs of AI will keep going down. But total spend from customers will need to keep going up if AI companies are going to become profitable and investors are ever going to get returns on their massive investments.

Social Networks

Digg Relaunch Fails (digg.com) 39

sdinfoserv writes: After running a Reddit clone for a couple of months, the Digg beta has shut down again. The website now displays only a splash memo from CEO Justin Mezzell, blaming the latest "Hard Reset" on bots. "Building on the internet in 2026 is different," writes Mezzell. "We learned that the hard way. Today we're sharing difficult news: we've made the decision to significantly downsize the Digg team..."

The decision was made after struggling to gain traction and an overwhelming influx of AI-driven bots and spam. "When the Digg beta launched, we immediately noticed posts from SEO spammers noting that Digg still carried meaningful Google link authority," says Mezzell. "Within hours, we got a taste of what we'd only heard rumors about. The internet is now populated, in meaningful part, by sophisticated AI agents and automated accounts. We knew bots were part of the landscape, but we didn't appreciate the scale, sophistication, or speed at which they'd find us."

"We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough. When you can't trust that the votes, the comments, and the engagement you're seeing are real, you've lost the foundation a community platform is built on."

Despite the setback, Digg plans to rebuild with a smaller team, with founder Kevin Rose returning to work full-time on a new direction for the platform. "Starting the first week of April, Kevin will be putting his focus back on the company he built twenty+ years ago," writes Mezzell. "He'll continue as an advisor to True Ventures, but Digg will be his primary focus."

Slashback: The Rise of Digg.com
Facebook

Meta Delays Rollout of New AI Model After Performance Concerns 27

Meta has delayed the release of its next major AI model after internal tests showed it lagging behind competing systems from Google, OpenAI, and Anthropic. The New York Times reports: The model, code-named Avocado, outperformed Meta's previous A.I. model and did better than Google's Gemini 2.5 model from March, two of the people said. But it has not performed as strongly as Gemini 3.0 from November, they said. As a result, Meta has delayed Avocado's release to at least May from this month, the people said. They added that the leaders of Meta's A.I. division had instead discussed temporarily licensing Gemini to power the company's A.I. products, though no decisions have been reached.

[...] It takes time to improve A.I. models, and Meta can still catch up to rivals, A.I. experts said. But a longer timeline has set in at the company, with Mr. Zuckerberg tempering expectations for Avocado in the past few months. "I expect our first models will be good, but more importantly will show the rapid trajectory we're on," he said on a call with investors in January.
A Meta spokesperson said in a statement: "As we've said publicly, our next model will be good but, more importantly, show the rapid trajectory we're on, and then we'll steadily push the frontier over the course of the year as we continue to release new models. We're excited for people to see what we've been cooking very soon."
Crime

Facial Recognition Error Jails Innocent Grandmother For Months (theguardian.com) 144

Mr. Dollar Ton shares a report from the Guardian: Angela Lipps, 50, spent nearly six months in jail after Fargo police identified her as a suspect in an organized bank fraud case using facial recognition software, according to south-east North Dakota news outlet InForum. Lipps told the outlet she had never been to North Dakota and did not commit the crimes. Lipps, a mother of three and grandmother of five, said she has lived most of her life in north-central Tennessee. She had never been on an airplane until authorities flew her to North Dakota last year to face charges.

In July, U.S. marshals arrested Lipps at her Tennessee home while she was babysitting four children. She said she was taken away at gunpoint and booked into a county jail as a fugitive from justice from North Dakota. "I've never been to North Dakota, I don't know anyone from North Dakota," Lipps told WDAY News. She remained in a Tennessee jail for nearly four months without bail while awaiting extradition. She was charged with four counts of unauthorized use of personal identifying information and four counts of theft.

According to Fargo police records obtained by WDAY News, detectives investigating bank fraud cases in April and May 2025 reviewed surveillance video of a woman using a fake U.S. army military ID to withdraw tens of thousands of dollars. The officers allegedly used facial recognition software to identify the suspect as Lipps. A detective reportedly wrote in court documents that Lipps appeared to match the suspect based on facial features, body type and hairstyle. Lipps told WDAY News that no one from the Fargo police department contacted her before the arrest. Lipps is now back home but says the experience has had lasting consequences. While jailed and unable to pay bills, Lipps lost her home, her car and her dog, she said. She also told WDAY News no one from the Fargo police department had apologized.
