Government

Privacy Advocate Accuses US Government of Investing in AI-Powered Mass Surveillance (theconversation.com)

The Conversation published this warning from privacy, technology law, and electronic surveillance attorney Anne Toomey McKenna (also an affiliated faculty member at Penn State's Institute for Computational and Data Sciences). The U.S. government "is able to purchase Americans' sensitive data because the information it buys is not subject to the same restrictions as information it collects directly. The federal government is also ramping up its abilities to directly collect data through partnerships with private tech companies. These surveillance tech partnerships are becoming entrenched, domestically and abroad, as advances in AI take surveillance to unprecedented levels..."

Congressional funding is supercharging huge government investments in surveillance tech and data analytics driven by AI, which automates analysis of very large amounts of data. The massive 2025 tax-and-spending law netted the Department of Homeland Security an unprecedented US$165 billion in yearly funding. Immigration and Customs Enforcement, part of DHS, got about $86 billion. Documents allegedly hacked from Homeland Security reveal a massive surveillance web that has all Americans in its scope. DHS is expanding its AI surveillance capabilities with a surge in contracts to private companies. It is reportedly funding companies that provide more AI-automated surveillance in airports; adapters to convert agents' phones into biometric scanners; and an AI platform that acquires all 911 call center data to build geospatial heat maps to predict incident trends. Predicting incident trends can be a form of predictive policing, which uses data to anticipate where, when and how crime may occur...
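
The 911-data heat maps described above are, at their core, a spatial binning exercise: incident coordinates are grouped into grid cells and counted, and the densest cells become "hot spots" used to forecast trends. A minimal sketch of that general idea (the coordinates, grid resolution, and function name are all illustrative assumptions, not details from the article):

```python
from collections import Counter

def heatmap_cells(incidents, cell_deg=0.01):
    """Bin (lat, lon) incident coordinates into a fixed grid and count
    incidents per cell -- the basic building block of a geospatial heat
    map. cell_deg sets the grid resolution in degrees (~1.1 km at 0.01)."""
    counts = Counter()
    for lat, lon in incidents:
        cell = (int(lat // cell_deg), int(lon // cell_deg))
        counts[cell] += 1
    return counts

# Three hypothetical calls clustered near one intersection, one outlier:
calls = [(36.1691, -115.1391), (36.1693, -115.1393),
         (36.1699, -115.1399), (36.2500, -115.2000)]
hot = heatmap_cells(calls)
busiest_cell, n = max(hot.items(), key=lambda kv: kv[1])
```

Predictive-policing systems layer forecasting models on top of counts like these; critics note that the forecasts inherit whatever bias is in the underlying call data.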

Meanwhile, the Trump administration's national policy framework for artificial intelligence, released on March 20, 2026, urges Congress to use grants and tax incentives to fund "wider deployment of AI tools across American industry" and to allow industry and academia to use federal datasets to train AI. Using federal datasets this way raises privacy law concerns because they contain a lifetime of sensitive details about you, including biographical, employment and tax information....

The author argues that it's now critical for Americans to know "why the laws you might think are protecting your data do not apply or are ignored." On March 18, 2026, FBI Director Kash Patel confirmed to Congress that the FBI is buying Americans' data from data brokers, including location histories, to track American citizens.... But in buying your data in bulk on the commercial market, the government is circumventing the Constitution, Supreme Court decisions and federal laws designed to protect your privacy from unwarranted government overreach... Supreme Court cases require police to get a warrant to search a phone or use cellular or GPS location information to track someone. The Electronic Communications Privacy Act's Wiretap Act prohibits unauthorized interception of wire, oral and electronic communications.

Despite some efforts, Congress has failed to enact legislation to protect data privacy, regulate the use of sensitive data by AI systems, or restore the intent of the Electronic Communications Privacy Act. Courts have allowed the broad electronic privacy protections in the federal Wiretap Act to be eviscerated by companies claiming consent. In my opinion, the way to begin to address these problems is to restore the Wiretap Act and related laws to their intended purposes of protecting Americans' privacy in communications, and for Congress to follow through on its promises and efforts by passing legislation that secures Americans' data privacy and protects them from AI harms.

Thanks to long-time Slashdot reader sinij for sharing the article.
United States

Nevada Police Can Now Track Cellphones Without a Warrant (apnews.com)

"Nevada quietly signed an agreement earlier this year with a company that collects location data from cellphones, allowing police to track a device virtually in real time," reports the Associated Press. "All without a warrant." The software from Fog Data Science, adopted this January in Nevada through a Department of Public Safety contract, pulls information from smartphone apps in order to let state investigators identify the location of mobile devices. The state is allowed more than 250 queries a month using the tool, which allows officers to track a device's location over long stretches of time and enables them to see what Fog calls "patterns of life," according to company documents from 2022. It can help them deduce where and when people work and live, with whom they associate and what places they visit, according to privacy experts... Traditionally, police must obtain a warrant from a judge to access cellphone location information — a process that can take days or weeks. And while cellphone users may be aware that they are sharing their location through apps such as Google Maps, critics say few are aware that such information can make its way to police...

Other agencies in Nevada have been known to use technology similar to Fog. In 2013, the Las Vegas Metropolitan Police Department acquired a cell-site simulator, a device that mimics cellphone towers and can sweep up signals from entire areas to track individuals, with some models capable of intercepting texts and calls. Police have not released detailed information about the technology since then.

"Police in other states have said the technology (and its low price tag) has helped expand investigatory capacity," the article adds.

But it also points out that Fog Data Science has a web page letting individuals opt out of its data sets.
Facebook

Mark Zuckerberg Is Building an AI Agent To Help Him Be CEO (the-independent.com)

An anonymous reader quotes a report from the Wall Street Journal: Mark Zuckerberg wants everyone inside and outside his company to eventually have his or her own personal artificial-intelligence agent. He is starting with himself. Zuckerberg, the chief executive of Meta Platforms, is building a CEO agent to help him do his job (source paywalled; alternative source), according to a person familiar with the project. The agent, which is still in development, is currently helping Zuckerberg get information faster -- for instance, by retrieving answers for him that he would typically have to go through layers of people to get, the person familiar with the project said.

[...] Use of AI tools has spread quickly through the ranks at Meta -- in part because it is now a factor in employees' performance reviews. Meta's internal message board is filled with posts from employees sharing new AI use cases they have found and new tools they have built using AI, according to people familiar with the matter. [...] Employees have started using personal agent tools such as My Claw that have access to their chat logs and work files and can go talk to colleagues -- or their colleagues' own personal agents -- on their behalf, the people said. Another AI tool called Second Brain that is somewhere between a chatbot and an agent is also gaining momentum internally, according to people familiar with the matter. Second Brain was built by a Meta employee on top of Claude and can index and query documents for projects, among other uses. On the internal post announcing it to staff, the employee said it is "meant to be like an AI chief of staff."

There is even a group on the internal messaging board where employees' personal agents talk to each other, some of the people said. (Separately, Meta acquired Moltbook, the social-media site for AI agents, and hired its founders in a deal earlier this month.) Meta also recently acquired Manus, a Singapore-based startup that makes personal agents that can execute tasks for its users, and is using the tool internally, some of the people said. Meta recently established a new applied AI engineering organization that is tasked with using AI to help speed up development of the company's large language models. Those teams will have an ultraflat structure of as many as 50 individual contributors reporting to one manager, The Wall Street Journal previously reported. [...] Employees across the company said they have been encouraged to attend AI tutorial meetings several times a week and frequent AI hackathons, and to create their own AI tools to speed up their work.

Earth

'The Strange and Totally Real Plan to Blot Out the Sun and Reverse Global Warming' (politico.com)

In a 2023 pitch to investors, a "well-financed, highly credentialed" startup named Stardust aimed for a "gradual temperature reduction demonstration" in 2027, according to a massive new 9,600-word article from Politico. ("Annually dispersing ~1 million tons of sun-reflecting particles," says one slide. "Equivalent to ~1% extra cloud coverage.")

"Another page told potential investors Stardust had already run low-altitude experiments using 'test particles'," the article notes: [P]ublic records and interviews with more than three dozen scientists, investors, legal experts and others familiar with the company reveal an organization advancing rapidly to the brink of being able to press "go" on its planet-cooling plans. Meanwhile, Stardust is seeking U.S. government contracts and quietly building an influence machine in Washington to lobby lawmakers and officials in the Trump administration on the need for a regulatory framework that it says is necessary to gain public approval for full-scale deployment....

The presentation also included revenue projections and a series of opportunities for venture capitalists to recoup their investments. Stardust planned to sign "government contracts," said a slide with the company's logo next to an American flag, and consider a "potential acquisition" by 2028. By 2030, the deck foresaw a "large-scale demonstration" of Stardust's system. At that point, the company claimed it would already be bringing in $200 million per year from its government contracts and eyeing an initial public offering, if it hadn't been sold already.

The article notes that for "a widening circle of researchers and government officials, Stardust's perceived failures to be transparent about its work and technology have triggered a larger conversation about what kind of international governance framework will be needed to regulate a new generation of climate technologies." (Currently, Stardust and its backers "have no legal obligations to adhere to strenuous safety principles or to submit themselves to the public view.")

In October Politico spoke to Stardust CEO Yanai Yedvab, a former nuclear physicist who was once deputy chief scientist at the Israeli Atomic Energy Commission. Stardust "was ready to announce the $60 million it had raised from 13 new investors," the article points out, "far larger than any previous investment in solar geoengineering." [Yedvab] was delighted, he said, not by the money, but what it meant for the project. "We are, like, few years away from having the technology ready to a level that decisions can be taken" — meaning that deployment was still on track to potentially begin on the timeline laid out in the 2023 pitch deck. The money raised was enough to start "outdoor contained experiments" as soon as April, Yedvab said. These would test how their particles performed inside a plane flying at stratospheric heights, some 11 miles above the Earth's surface... The key thing, he insisted, was the particle was "safe." It would not damage the ozone layer and, when the particles fall back to Earth, they could be absorbed back into the biosphere, he said. Though it's impossible to verify this until the company releases its formula. Yedvab said this round of testing would make Stardust's technology ready to begin a staged process of full-scale, global deployment before the decade is over — as long as the company can secure a government client. To start, they would only try to stabilize global temperatures — in other words fly enough particles into the sky to counteract the steady rise in greenhouse gas levels — which would initially take a fleet of 100 planes.
This raises the question: should the world attempt solar geoengineering? That the global temperature would drop is not in question. Britain's Royal Society... said in a report issued in early November that there was little doubt it would be effective. They did not endorse its use, but said that, given the growing interest in this field, there was good reason to be better informed about the side effects... [T]hat doesn't mean it can't have broad benefits when weighed against deleterious climate change, according to Ben Kravitz, a professor of earth and atmospheric sciences at Indiana University who has closely studied the potential effects of solar geoengineering. "There would be some winners and some losers. But in general, some amount of ... stratospheric aerosol injection would likely benefit a whole lot of people, probably most people," he said. Other scientists are far more cautious. The Royal Society report listed a range of potential negative side effects that climate models had displayed, including drought in sub-Saharan Africa. In accompanying documents, it also warned of more intense hurricanes in the North Atlantic and winter droughts in the Mediterranean. But the picture remains partial, meaning there is no way yet to have an informed debate over how useful or not solar geoengineering could be...

And then there's the problem of trying to stop. Because an abrupt end to geoengineering, with all the carbon still in the atmosphere, would cause the temperature to soar suddenly upward with unknown, but likely disastrous, effects... Once the technology is deployed, the entire world would be dependent on it for however long it takes to reduce the trillion or more tons of excess carbon dioxide in the atmosphere to a safe level...

Stardust claims to have solved many technical and safety challenges, especially related to the environmental impacts of the particle, which they say would not harm nature or people. But researchers say the company's current lack of transparency makes it impossible to trust.

Thanks to long-time Slashdot reader fjo3 for sharing the article.
Facebook

Bombshell Report Exposes How Meta Relied On Scam Ad Profits To Fund AI (reuters.com)

"Internal documents have revealed that Meta has projected it earns billions from ignoring scam ads that its platforms then targeted to users most likely to click on them," writes Ars Technica, citing a lengthy report from Reuters.

Reuters reports that Meta "for at least three years failed to identify and stop an avalanche of ads that exposed Facebook, Instagram and WhatsApp's billions of users to fraudulent e-commerce and investment schemes, illegal online casinos, and the sale of banned medical products..." On average, one December 2024 document notes, the company shows its platforms' users an estimated 15 billion "higher risk" scam advertisements — those that show clear signs of being fraudulent — every day. Meta earns about $7 billion in annualized revenue from this category of scam ads, another late 2024 document states. Much of the fraud came from marketers acting suspiciously enough to be flagged by Meta's internal warning systems.

But the company only bans advertisers if its automated systems predict the marketers are at least 95% certain to be committing fraud, the documents show. If the company is less certain — but still believes the advertiser is a likely scammer — Meta charges higher ad rates as a penalty, according to the documents. The idea is to dissuade suspect advertisers from placing ads. The documents further note that users who click on scam ads are likely to see more of them because of Meta's ad-personalization system, which tries to deliver ads based on a user's interests... The documents indicate that Meta's own research suggests its products have become a pillar of the global fraud economy. A May 2025 presentation by its safety staff estimated that the company's platforms were involved in a third of all successful scams in the U.S.
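
The enforcement rule described in the documents amounts to a simple threshold policy: ban only above 95% predicted fraud certainty, and charge penalty rates in a lower "likely scammer" band. A minimal sketch of that decision rule — the 0.95 cutoff comes from the report, but the 0.5 lower bound, the multiplier, and the function name are illustrative assumptions:

```python
def scam_ad_action(fraud_score, penalty_multiplier=1.5):
    """Threshold policy as described: advertisers are banned only when
    the automated system is at least 95% certain of fraud; below that,
    a still-suspicious advertiser is charged a higher ad rate as a
    deterrent rather than being removed."""
    if fraud_score >= 0.95:
        return ("ban", None)
    if fraud_score >= 0.5:  # assumed "likely scammer" band
        return ("penalty_rate", penalty_multiplier)
    return ("serve_normally", 1.0)
```

One consequence of a rule shaped like this is that every suspected scammer below the ban threshold keeps running ads, and the penalty pricing converts suspected fraud into additional revenue rather than removing it.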

Meta also acknowledged in other internal documents that some of its main competitors were doing a better job at weeding out fraud on their platforms... The documents note that Meta plans to try to cut the share of Facebook and Instagram revenue derived from scam ads. In the meantime, Meta has internally acknowledged that regulatory fines for scam ads are certain, and anticipates penalties of up to $1 billion, according to one internal document. But those fines would be much smaller than Meta's revenue from scam ads, a separate document from November 2024 states. Every six months, Meta earns $3.5 billion from just the portion of scam ads that "present higher legal risk," the document says, such as those falsely claiming to represent a consumer brand or public figure or demonstrating other signs of deceit. That figure almost certainly exceeds "the cost of any regulatory settlement involving scam ads...."

A planning document for the first half of 2023 notes that everyone who worked on the team handling advertiser concerns about brand-rights issues had been laid off. The company was also devoting resources so heavily to virtual reality and AI that safety staffers were ordered to restrict their use of Meta's computing resources. They were instructed merely to "keep the lights on...." Meta also was ignoring the vast majority of user reports of scams, a document from 2023 indicates. By that year, safety staffers estimated that Facebook and Instagram users each week were filing about 100,000 valid reports of fraudsters messaging them, the document says. But Meta ignored or incorrectly rejected 96% of them. Meta's safety staff resolved to do better. In the future, the company hoped to dismiss no more than 75% of valid scam reports, according to another 2023 document.

A small advertiser would have to get flagged for promoting financial fraud at least eight times before Meta blocked it, a 2024 document states. Some bigger spenders — known as "High Value Accounts" — could accrue more than 500 strikes without Meta shutting them down, other documents say.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Microsoft

Microsoft Puts Office Online Server On the Chopping Block

Microsoft is retiring Office Online Server on December 31, 2026, ending support and updates for organizations running browser-based Office apps on-premises. The Register reports: After this, there won't be any more security fixes, updates, or technical support from Microsoft. "This change is part of our ongoing commitment to modernizing productivity experiences and focusing on cloud-first solutions," the company said. Office Online Server provides browser-based versions of Word, Excel, PowerPoint, and OneNote for customers who want to keep things on-prem without having to roll out the full desktop applications. Microsoft's solution is to move to Microsoft 365, its decidedly off-premises version of its applications. The company said it is "focusing its browser-based Office app investments on Office for the Web to deliver secure, collaborative, and feature-rich experiences through Microsoft 365."

Other than migrating to another platform when the vendor pulls the plug, affected customers have few options. The announcement will also hit several customers running SharePoint Server SE or Exchange Server SE. While those products remain supported, Office Online Server integration will go away. The company suggested Microsoft 365 Apps for Enterprise and Office LTSC 2024 as alternatives for viewing and editing documents hosted on those servers.

Skype for Business customers will also lose some key features related to PowerPoint. Presenter notes and high-fidelity PowerPoint rendering will go away. In-meeting annotations, which allow meeting participants to write directly to slides without altering the original file, will no longer be available, and embedded video playback will run at lower fidelity. Features like whiteboards, polls, and app sharing shouldn't be affected. Microsoft's solution is a move to Teams, which the company says "offers modern meeting experiences."
AI

Hundreds of Google AI Workers Were Fired Amid Fight Over Working Conditions (theguardian.com)

Last week the Guardian reported on "thousands of AI workers contracted for Google through Japanese conglomerate Hitachi's GlobalLogic to rate and moderate the output of Google's AI products, including its flagship chatbot Gemini... and its summaries of search results, AI Overviews." "AI isn't magic; it's a pyramid scheme of human labor," said Adio Dinika, a researcher at the Distributed AI Research Institute based in Bremen, Germany. "These raters are the middle rung: invisible, essential and expendable...." Ten of Google's AI trainers the Guardian spoke to said they have grown disillusioned with their jobs because they work in silos, face tighter and tighter deadlines, and feel they are putting out a product that's not safe for users... In May 2023, a contract worker for Appen submitted a letter to the US Congress warning that the pace imposed on him and others would make Google Bard, Gemini's predecessor, a "faulty" and "dangerous" product.
This week Google laid off 200 of those moderating contractors, reports Wired. "These workers, who often are hired because of their specialist knowledge, had to have either a master's or a PhD to join the super rater program, and typically include writers, teachers, and people from creative fields." Workers still at the company claim they are increasingly concerned that they are being set up to replace themselves. According to internal documents viewed by WIRED, GlobalLogic seems to be using these human raters to train the Google AI system that could automatically rate the responses, with the aim of replacing them with AI. At the same time, the company is also finding ways to get rid of current employees as it continues to hire new workers. In July, GlobalLogic made it mandatory for its workers in Austin, Texas, to return to office, according to a notice seen by WIRED...

Some contractors attempted to unionize earlier this year but claim those efforts were quashed. Now they allege that the company has retaliated against them. Two workers have filed a complaint with the National Labor Relations Board, alleging they were unfairly fired, one due to bringing up wage transparency issues, and the other for advocating for himself and his coworkers. "These individuals are employees of GlobalLogic or their subcontractors, not Alphabet," Courtenay Mencini, a Google spokesperson, said in a statement...

"Globally, other AI contract workers are fighting back and organizing for better treatment and pay," the article points out, noting that content moderators from around the world facing similar issues formed the Global Trade Union Alliance of Content Moderators which includes workers from Kenya, Turkey, and Colombia.

Thanks to long-time Slashdot reader mspohr for sharing the news.
China

Chinese Hackers Impersonated US Lawmaker in Email Espionage Campaign (msn.com)

As America's trade talks with China were set to begin last July, a "puzzling" email reached several U.S. government agencies, law firms, and trade groups, reports the Wall Street Journal. It appeared to be from the chair of a U.S. Congressional committee, Representative John Moolenaar, asking recipients to review an alleged draft of upcoming legislation — sent as an attachment. "But why had the chairman sent the message from a nongovernment address...?"

"The cybersecurity firm Mandiant determined the spyware would allow the hackers to burrow deep into the targeted organizations if any of the recipients had opened the purported draft legislation, according to documents reviewed by The Wall Street Journal." It turned out to be the latest in a series of alleged cyber espionage campaigns linked to Beijing, people familiar with the matter said, timed to potentially deploy spyware against organizations giving input on President Trump's trade negotiations. The FBI and the Capitol Police are investigating the Moolenaar emails, and cyber analysts traced the embedded malware to a hacker group known as APT41 — believed to be a contractor for Beijing's Ministry of State Security... The hacking campaign appeared to be aimed at giving Chinese officials an inside look at the recommendations Trump was receiving from outside groups. It couldn't be determined whether the attackers had successfully breached any of the targets.

A Federal Bureau of Investigation spokeswoman declined to provide details but said the bureau was aware of the incident and was "working with our partners to identify and pursue those responsible...." The alleged campaign comes as U.S. law-enforcement officials have been surprised by the prolific and creative nature of China's spying efforts. The FBI revealed last month that a Beijing-linked espionage campaign that hit U.S. telecom companies and swept up Trump's phone calls actually targeted more than 80 countries and reached across the globe...

The Moolenaar impersonation comes as several administration officials have recently faced impostors of their own. The State Department warned diplomats around the world in July that an impostor was using AI to imitate Secretary of State Marco Rubio's voice in messages sent to foreign officials. Federal authorities are also investigating an effort to impersonate White House chief of staff Susie Wiles, the Journal reported in May... The FBI issued a warning that month that "malicious actors have impersonated senior U.S. officials" targeting contacts with AI-generated voice messages and texts.

And in January, the article points out, all the staffers on Moolenaar's committee "received emails falsely claiming to be from the CEO of Chinese crane manufacturer ZPMC, according to people familiar with the episode."

Thanks to long-time Slashdot reader schwit1 for sharing the news.
Microsoft

Microsoft Refuses To Divulge Data Flows To Police Scotland (computerweekly.com)

Police Scotland and the Scottish Police Authority (SPA) are pressing ahead with a Microsoft Office 365 rollout despite Microsoft refusing to disclose where sensitive law enforcement data will be processed. Freedom of Information documents reveal that Microsoft cannot guarantee data sovereignty, may process data in "hostile" jurisdictions, retains encryption key control, and blocks vetting of overseas staff -- all leaving the force unable to comply with strict Part 3 data protection rules. Slashdot reader Mirnotoriety shares an excerpt from a Computer Weekly article: "MS is unable to specify what data originating from SPA will be processed outside the UK for support functions," said the SPA in a detailed data protection impact assessment (DPIA) created for its use of O365. "To try and mitigate this risk, SPA asked to see ... [the transfer risk assessments] for the countries used by MS where there is no [data] adequacy. MS declined to provide the assessments." The SPA DPIA also confirms that, on top of refusing to provide key information, Microsoft itself has told the police watchdog it is unable to guarantee the sovereignty of policing data held and processed within its O365 infrastructure.

"Microsoft states in their own risk factors that O365 is not designed for processing the data that will be ingested by SPA," said the DPIA, adding that while the system can be configured in ways that would allow the processing of "high-value" policing data, "that bar is high." It further added that while Microsoft previously agreed to make a number of changes to the data processing addendum (DPAdd) being used for Police Scotland's Azure-based Digital Evidence Sharing Capability (DESC) -- the nature of which is still unclear -- Microsoft has advised that "O365 operates in a completely different manner and there is currently no way to guarantee data sovereignty." It further noted that while a similar "ancillary document, like that provided ... via the DESC project" could afford "some level of assurance" for international transfers generally, it would still fall short of Part 3 requirements to set out exactly which types of data are processed and how.

Security

Amid Service Disruption, Colt Confirms 'Criminal Group' Accessed Their Data, As Ransomware Gang Threatens to Sell It (bleepingcomputer.com)

British telecommunications service provider Colt Telecom "has offices in over 30 countries across North America, Europe, and Asia," reports CPO magazine. "It manages nearly 1,000 data centers and roughly 75,000 km of fiber infrastructure."

But now "a cyber attack has caused widespread multi-day service disruption..." On August 14, 2025, the telecom giant said it had detected a cyber attack that began two days earlier, on August 12. Upon learning of the cyber intrusion, the telecommunications service provider responded by proactively taking some systems offline to contain the cyber attack. Although Colt Telecom's cyber incident response team was working around the clock to mitigate the impacts of the cyber attack, service disruption has persisted for days. However, the service disruption did not affect the company's core network infrastructure, suggesting that Colt customers could still access its network services... The company also did not provide a clear timeline for resolving the service disruption. A week after the apparent ransomware attack, Colt Online and the Voice API platform remained unavailable.
And now Colt Technology Services "confirms that customer documentation was stolen," reports the tech news site BleepingComputer: "A criminal group has accessed certain files from our systems that may contain information related to our customers and posted the document titles on the dark web," reads an updated security incident advisory on Colt's site.

"We understand that this is concerning for you."

"Customers are able to request a list of filenames posted on the dark web from the dedicated call centre."

As first spotted by cybersecurity expert Kevin Beaumont, Colt added the noindex robots meta tag to the web page so that it won't be indexed by search engines.
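
The noindex mechanism mentioned above is a standard robots meta tag in a page's head; crawlers that honor it drop the page from their search indexes. A small sketch that checks a page for the tag, using only Python's standard library (the sample HTML is illustrative, not Colt's actual page):

```python
from html.parser import HTMLParser

class NoindexDetector(HTMLParser):
    """Scan HTML for <meta name="robots" content="...noindex...">,
    the directive that asks search engines not to index the page."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)  # attrs arrives as a list of (name, value) pairs
        if (a.get("name") or "").lower() == "robots" and \
           "noindex" in (a.get("content") or "").lower():
            self.noindex = True

page = '<html><head><meta name="robots" content="noindex"></head></html>'
det = NoindexDetector()
det.feed(page)
```

Note the tag only keeps a page out of search results; anyone with the URL (or a dark-web forum link) can still read it, which is presumably the point of using it on a breach advisory.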

This statement comes after the Warlock Group began selling on the Ramp cybercrime forum what they claim is 1 million documents stolen from Colt. The documents are being sold for $200,000 and allegedly contain financial information, network architecture data, and customer information... The Warlock Group (aka Storm-2603) is a ransomware gang attributed to Chinese threat actors who utilize the leaked LockBit Windows and Babuk VMware ESXi encryptors in attacks... Last month, Microsoft reported that the threat actors were exploiting a SharePoint vulnerability to breach corporate networks and deploy ransomware.

"Colt is not the only telecom firm that has been named by WarLock on its leak website in recent days," SecurityWeek points out. "The cybercriminals claim to have also stolen data from France-based Orange."

Thanks to long-time Slashdot reader Z00L00K for sharing the news.
Intel

Former Intel Engineer Sentenced for Stealing Trade Secrets for Microsoft (tomshardware.com)

After leaving a nearly 10-year position as a product marketing engineer at Intel, Varun Gupta was charged with possessing trade secrets. He was facing a maximum sentence of 10 years in prison, a $250,000 fine and three years of supervised release, according to Oregon's U.S. Attorney's Office.

Portland's KGW reports: While still employed at Intel, Varun Gupta downloaded about 4,000 files, which included trade secrets and proprietary materials, from his work computer to personal portable hard drives, according to the U.S. Attorney's Office for the District of Oregon. While working for Microsoft, between February and July 2020, Gupta accessed and used information during ongoing negotiations with Intel regarding chip purchases, according to a sentencing memo. Some of the information containing trade secrets included a PowerPoint presentation that referenced Intel's pricing strategy with another major customer, according to the U.S. Attorney's Office for the District of Oregon in a sentencing memo.

Intel raised concerns in 2020, and Microsoft and Intel launched a joint investigation, the sentencing memo says. Intel filed a civil lawsuit in February 2021 that resulted in Gupta being ordered to pay $40,000.

Tom's Hardware summarizes the trial: Oregon Live reports that the prosecutor, Assistant U.S. Attorney William Narus, sought an eight-month prison term for Gupta, citing his purposeful and repeated use of the cache of secret documents.

For the defense, attorney David Angeli described Gupta's actions as a "serious error in judgment" and highlighted mitigating circumstances, such as Gupta's permanent loss of high-level employment opportunities in the industry and the $40,000 he had already paid to settle Intel's civil suit.

U.S. District Judge Amy Baggio concluded the court hearing by striking a balance between the two adversarial positions. Baggio decided that Gupta should face a two-year probationary sentence [and pay a $34,472 fine — before heading back to France]... The ex-tech exec and his family have started afresh in France, with eyes on a completely new career in the wine industry. According to the report, Gupta is now studying for a qualification in vineyard management, while aiming to work as a technical director in the business.

Piracy

How Napster Inspired a Generation of Rule-Breaking Entrepreneurs (fastcompany.com) 16

Napster's latest AI pivot "is the latest in a series of attempts by various owners to ride its brand cachet during emerging tech waves," Fast Company reported in July. In March, it sold for $207 million to Infinite Reality, an immersive digital media and e-commerce company, which also rebranded as Napster last month. Since 2020, other owners have included a British VR music startup (to create VR concerts) and two crypto-focused companies that bought it to anchor a Web3 music platform. Napster's launch follows a growing number of attempts to drive AI adoption beyond smartphones and laptops.
And tonight the Washington Post revisited the legacy of Napster's original mp3-sharing model, arguing Napster "inspired successive generations of entrepreneurs to risk flouting the law so they could grow enough to get the laws changed to suit them, including Airbnb and Uber." "Napster to me embodies the idea that it is better to seek forgiveness than permission," said Mark Lemley, director of Stanford Law School's Program in Law, Science & Technology. "It didn't work out well for Napster or for many of the others who got sued, but it worked out very well for everyone else — users, and eventually the content industry, too, which is making record profits...." [Napster co-founder Sean] Parker later advised Spotify, and Napster marketing chief Oliver Schusser is now Apple's vice president for music.

Although many users saw Napster as an extension of rock-and-roll rebellion, that was not the company's real plan. First Fanning's majority-owning uncle, and then venture capital firm Hummer Winblad, wanted the start-up to leverage its knowledge of individual music consumers to make lucrative deals with the labels, according to internal documents this reporter found in researching a book on Napster. They warned that if no agreement were reached and Napster failed, more decentralized pirate services would take the audience and offer the labels nothing.

But settlement talks failed. The litigation blitz also took down a Napster competitor called Scour, which a young Travis Kalanick had joined shortly after its founding. Kalanick later created Uber, dedicated to overthrowing taxi regulations.

The article concludes that "Now it is Microsoft, Meta, Apple and Google, among the largest companies in the world, bankrolling the consumption of all media.

"They, too, have absorbed Napster's lessons in realpolitik, namely to build it first and hope the regulators will either yield or catch up."
Medicine

Trump Launching a New Private Health Tracking System With Big Tech's Help 178

fjo3 shares a report from the Associated Press: The Trump administration announced it is launching a new program that will allow Americans to share personal health data and medical records across health systems and apps run by private tech companies, promising it will make it easier to access health records and monitor wellness. More than 60 companies, including major tech companies like Google, Amazon and Apple as well as health care giants like UnitedHealth Group and CVS Health, have agreed to share patient data in the system. The initiative will focus on diabetes and weight management, conversational artificial intelligence that helps patients, and digital tools such as QR codes and apps that register patients for check-ins or track medications.

Officials at the Centers for Medicare and Medicaid Services, who will be in charge of maintaining the system, have said patients will need to opt in for the sharing of their medical records and data, which will be kept secure. Those officials said patients will benefit from a system that lets them quickly call up their own records without the hallmark difficulties, such as requiring the use of fax machines to share documents, that have prevented them from doing so in the past.

Popular weight loss and fitness subscription service Noom, which has signed onto the initiative, will be able to pull medical records after the system's expected launch early next year. That might include labs or medical tests that the app could use to develop an AI-driven analysis of what might help users lose weight, CEO Geoff Cook told The Associated Press. Apps and health systems will have access to their competitors' information, too. Noom would be able to access a person's data from Apple Health, for example. "Right now you have a lot of siloed data," Cook said.
Earth

Researchers Quietly Planned a Test to Dim Sunlight Over 3,900 Square Miles (politico.com) 81

California researchers planned a multimillion-dollar test of salt water-spraying equipment that could one day be used to dim the sun's rays — over a 3,900-square-mile area off the west coasts of North America, Chile or south-central Africa. E&E News calls it part of a "secretive" initiative backed by "wealthy philanthropists with ties to Wall Street and Silicon Valley" — and a piece of the "vast scope of research aimed at finding ways to counter the Earth's warming, work that has often occurred outside public view." "At such scales, meaningful changes in clouds will be readily detectable from space," said a 2023 research plan from the [University of Washington's] Marine Cloud Brightening Program. The massive experiment would have been contingent upon the successful completion of the thwarted pilot test on the carrier deck in Alameda, according to the plan.... Before the setback in Alameda, the team had received some federal funding and hoped to gain access to government ships and planes, the documents show.

The university and its partners — a solar geoengineering research advocacy group called SilverLining and the scientific nonprofit SRI International — didn't respond to detailed questions about the status of the larger cloud experiment. But SilverLining's executive director, Kelly Wanser, said in an email that the Marine Cloud Brightening Program aimed to "fill gaps in the information" needed to determine if the technologies are safe and effective. In the initial experiment, the researchers appeared to have disregarded past lessons about building community support for studies related to altering the climate, and instead kept their plans from the public and lawmakers until the testing was underway, some solar geoengineering experts told E&E News. The experts also expressed surprise at the size of the planned second experiment....

The program does not "recommend, support or develop plans for the use of marine cloud brightening to alter weather or climate," Sarah Doherty, an atmospheric and climate science professor at the university who leads the program, said in a statement to E&E News. She emphasized that the program remains focused on researching the technology, not deploying it. There are no "plans for conducting large-scale studies that would alter weather or climate," she added.

"More than 575 scientists have called for a ban on geoengineering development," according to the article, "because it 'cannot be governed globally in a fair, inclusive, and effective manner.'" But "Some scientists believe that the perils of climate change are too dire to not pursue the technology, which they say can be safely tested in well-designed experiments... " "If we really were serious about the idea that to do any controversial topic needs some kind of large-scale consensus before we can research the topic, I think that means we don't research topics," David Keith, a geophysical sciences professor at the University of Chicago, said at a think tank discussion last month... The studies that the program is pursuing are scientifically sound and would be unlikely to alter weather patterns, even for the Puerto Rico-sized test, said Daniele Visioni, a professor of atmospheric sciences at Cornell University. Nearly 30 percent of the planet is already covered by clouds, he noted.
Thanks to Slashdot reader fjo3 for sharing the news.
Security

'Tens of Thousands' of SharePoint Servers at Risk. Microsoft Issues No Patch (msn.com) 90

"Anybody who's got a hosted SharePoint server has got a problem," the senior VP of cybersecurity firm CrowdStrike told the Washington Post. "It's a significant vulnerability."

And it's led to a new "global attack on government agencies and businesses" in the last few days, according to the article, "breaching U.S. federal and state agencies, universities, energy companies and an Asian telecommunications company, according to state officials and private researchers..."

"Tens of thousands of such servers are at risk, experts said, and Microsoft has issued no patch for the flaw, leaving victims around the world scrambling to respond." (Microsoft says they are "working on" security updates "for supported versions of SharePoint 2019 and SharePoint 2016," offering various mitigation suggestions, and CISA has released their own recommendations.)

From the Washington Post's article Sunday: Microsoft has suggested that users make modifications to SharePoint server programs or simply unplug them from the internet to stanch the breach. Microsoft issued an alert to customers but declined to comment further... "We are seeing attempts to exploit thousands of SharePoint servers globally before a patch is available," said Pete Renals, a senior manager with Palo Alto Networks' Unit 42. "We have identified dozens of compromised organizations spanning both commercial and government sectors."

With access to these servers, which often connect to Outlook email, Teams and other core services, a breach can lead to theft of sensitive data as well as password harvesting, Netherlands-based research company Eye Security noted. What's also alarming, researchers said, is that the hackers have gained access to keys that may allow them to regain entry even after a system is patched. "So pushing out a patch on Monday or Tuesday doesn't help anybody who's been compromised in the past 72 hours," said one researcher, who spoke on the condition of anonymity because a federal investigation is ongoing.

The breaches occurred after Microsoft fixed a security flaw this month. The attackers realized they could use a similar vulnerability, according to the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency. CISA spokeswoman Marci McCarthy said the agency was alerted to the issue Friday by a cyber research firm and immediately contacted Microsoft... The nonprofit Center for Internet Security, which staffs an information-sharing group for state and local governments, notified about 100 organizations that they were vulnerable and potentially compromised, said Randy Rose, the organization's vice president. Those warned included public schools and universities. Others that were breached included a government agency in Spain, a local agency in Albuquerque and a university in Brazil, security researchers said.

But there are many more breaches, according to the article:
  • "Eye Security said it has tracked more than 50 breaches, including at an energy company in a large state and several European government agencies."
  • "At least two U.S. federal agencies have seen their servers breached, according to researchers."
  • "One state official in the eastern U.S. said the attackers had 'hijacked' a repository of documents provided to the public to help residents understand how their government works. The agency involved can no longer access the material..."

"It was not immediately clear who is behind the hacking of global reach or what its ultimate goal is. One private research company found the hackers targeting servers in China..."


Crime

New Russian Law Criminalizes Online Searches For Controversial Content (washingtonpost.com) 83

Russian lawmakers passed sweeping new legislation allowing authorities to fine individuals simply for searching and accessing content labeled "extremist" via VPNs. The Washington Post reports: Russia defines "extremist materials" as content officially added by a court to a government-maintained registry, a running list of about 5,500 entries, or content produced by "extremist organizations" ranging from "the LGBT movement" to al-Qaeda. The new law also covers materials that promote alleged Nazi ideology or incite extremist actions. Until now, Russian law stopped short of punishing individuals for seeking information online; only creating or sharing such content was prohibited. The new amendments follow remarks by high-ranking officials that censorship is justified in wartime. Adoption of the measures would mark a significant tightening of Russia's already restrictive digital laws.

The fine for searching for banned content in Russia would be about $65, while the penalty for advertising circumvention tools such as VPN services would be steeper -- $2,500 for individuals and up to $12,800 for companies. Previously, the most significant expansion of Russia's restrictions on internet use and freedom of speech occurred shortly after the February 2022 full-scale invasion of Ukraine, when sweeping laws criminalized the spread of "fake news" and "discrediting" the Russian military. The new amendment was introduced Tuesday and attached to a mundane bill on regulating freight companies, according to documents published by Russia's lower house of parliament, the State Duma.

Facebook

Zuckerberg's Meta Considered Sharing User Data with China, Whistleblower Alleges (msn.com) 36

The Washington Post reports: Meta was willing to go to extreme lengths to censor content and shut down political dissent in a failed attempt to win the approval of the Chinese Communist Party and bring Facebook to millions of internet users in China, according to a new whistleblower complaint from a former global policy director at the company.

The complaint by Sarah Wynn-Williams, who worked on a team handling China policy, alleges that the social media giant so desperately wanted to enter the lucrative China market that it was willing to allow the ruling party to oversee all social media content appearing in the country and quash dissenting opinions. Meta, then called Facebook, developed a censorship system for China in 2015 and planned to install a "chief editor" who would decide what content to remove and could shut down the entire site during times of "social unrest," according to a copy of the 78-page complaint exclusively seen by The Washington Post.

Meta chief executive Mark Zuckerberg also agreed to crack down on the account of a high-profile Chinese dissident living in the United States following pressure from a high-ranking Chinese official the company hoped would help them enter China, according to the complaint, which was filed in April to the Securities and Exchange Commission [SEC]. When asked about its efforts to enter China, Meta executives repeatedly "stonewalled and provided nonresponsive or misleading information" to investors and American regulators, according to the complaint.

Wynn-Williams bolstered her SEC complaint with internal Meta documents about the company's plans, which were reviewed by The Post. Wynn-Williams, who was fired from her job in 2017, is also scheduled to release a memoir this week documenting her time at the company, titled "Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism." According to a memo in the complaint, Meta leaders faced aggressive pressure from Chinese government officials to host Chinese users' data in local data centers, which Wynn-Williams alleges would have made it easier for the Chinese Communist Party to covertly obtain the personal information of its citizens.

Wynn-Williams told the Washington Post that "for many years Meta has been working hand in glove with the Chinese Communist Party, briefing them on the latest technological developments and lying about it."

Reached for a comment, Meta spokesman Andy Stone told the Washington Post it was "no secret" they'd been interested in operating in China. "This was widely reported beginning a decade ago. We ultimately opted not to go through with the ideas we'd explored, which Mark Zuckerberg announced in 2019." The Post, though, shares new details about what a Facebook privacy policy staffer offered China in negotiations in 2014. ("In exchange for the ability to establish operations in China, FB will agree to grant the Chinese government access to Chinese users' data — including Hongkongese users' data.")

The Post also describes one iteration of a proposed agreement in 2015. "To aid the effort, Meta built a censorship system specially designed for China to review, including the ability to automatically detect restricted terms and popular content on Facebook, according to the complaint...

"In 2017, Meta covertly launched a handful of social apps under the name of a China-based company created by one of its employees, according to the complaint."
China

OpenAI Bans Chinese Accounts Using ChatGPT To Edit Code For Social Media Surveillance (engadget.com) 21

OpenAI has banned a group of Chinese accounts using ChatGPT to develop an AI-powered social media surveillance tool. Engadget reports: The campaign, which OpenAI calls Peer Review, saw the group prompt ChatGPT to generate sales pitches for a program those documents suggest was designed to monitor anti-Chinese sentiment on X, Facebook, YouTube, Instagram and other platforms. The operation appears to have been particularly interested in spotting calls for protests against human rights violations in China, with the intent of sharing those insights with the country's authorities.

"This network consisted of ChatGPT accounts that operated in a time pattern consistent with mainland Chinese business hours, prompted our models in Chinese, and used our tools with a volume and variety consistent with manual prompting, rather than automation," said OpenAI. "The operators used our models to proofread claims that their insights had been sent to Chinese embassies abroad, and to intelligence agents monitoring protests in countries including the United States, Germany and the United Kingdom."

According to Ben Nimmo, a principal investigator with OpenAI, this was the first time the company had uncovered an AI tool of this kind. "Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models," Nimmo told The New York Times. Much of the code for the surveillance tool appears to have been based on an open-source version of one of Meta's Llama models. The group also appears to have used ChatGPT to generate an end-of-year performance review where it claims to have written phishing emails on behalf of clients in China.

The Internet

The Enshittification Hall of Shame 249

In 2022, writer and activist Cory Doctorow coined the term "enshittification" to describe the gradual deterioration of a service or product. The term's prevalence has increased to the point that Australia's Macquarie Dictionary named it word of the year last year. The editors at Ars Technica, having "covered a lot of things that have been enshittified," decided to highlight some of the worst examples they've come across. Here's a summary of each thing mentioned in their report: Smart TVs: Evolved into data-collecting billboards, prioritizing advertising and user tracking over user experience and privacy. Features like convenient input buttons are sacrificed for pushing ads and webOS apps. "This is all likely to get worse as TV companies target software, tracking, and ad sales as ways to monetize customers after their TV purchases -- even at the cost of customer convenience and privacy," writes Scharon Harding. "When budget brands like Roku are selling TV sets at a loss, you know something's up."

Google's Voice Assistant (e.g., Nest Hubs): Functionality has degraded over time, with previously working features becoming unreliable. Users report frequent misunderstandings and unresponsiveness. "I'm fine just saying it now: Google Assistant is worse now than it was soon after it started," writes Kevin Purdy. "Even if Google is turning its entire supertanker toward AI now, it's not clear why 'Start my morning routine,' 'Turn on the garage lights,' and 'Set an alarm for 8 pm' had to suffer."

Portable Document Format (PDF): While initially useful for cross-platform document sharing and preserving formatting, PDFs have become bloated and problematic. Copying text, especially from academic journals, is often garbled or impossible. "Apple, which had given the PDF a reprieve, has now killed its main selling point," writes John Timmer. "Because Apple has added OCR to the MacOS image display system, I can get more reliable results by screenshotting the PDF and then copying the text out of that. This is the true mark of its enshittification: I now wish the journals would just give me a giant PNG."

Televised Sports (specifically cycling and Formula 1): Streaming services have consolidated, leading to significantly increased costs for viewers. Previously affordable and comprehensive options have been replaced by expensive bundles across multiple platforms. "Formula 1 racing has largely gone behind paywalls, and viewership is down significantly over the last 15 years," writes Eric Berger. "Major US sports such as professional and college football had largely been exempt, but even that is now changing, with NFL games being shown on Peacock, Amazon Prime, and Netflix. None of this helps viewers. It enshittifies the experience for us in the name of corporate greed."

Google Search: AI overviews often bury relevant search results under lengthy, sometimes inaccurate AI-generated content. This makes finding specific information, especially primary source documents, more difficult. "Google, like many big tech companies, expects AI to revolutionize search and is seemingly intent on ignoring any criticism of that idea," writes Ashley Belanger.

Email AI Tools (e.g., Gemini in Gmail): Intrusive and difficult to disable, these tools offer questionable value due to their potential for factual inaccuracies. Users report being unable to fully opt-out. "Gmail won't take no for an answer," writes Dan Goodin. "It keeps asking me if I want to use Google's Gemini AI tool to summarize emails or draft responses. As the disclaimer at the bottom of the Gemini tool indicates, I can't count on the output being factual, so no, I definitely don't want it."

Windows: While many complaints about Windows 11 originated with Windows 10, the newer version continues the trend of unwanted features, forced updates, and telemetry data collection. Bugs and performance issues also plague the operating system. "... it sure is easy to resent Windows 11 these days, between the well-documented annoyances, the constant drumbeat of AI stuff (some of it gated to pricey new PCs), and a batch of weird bugs that mostly seem to be related to the under-the-hood overhauls in October's Windows 11 24H2 update," writes Andrew Cunningham. "That list includes broken updates for some users, inoperable scanners, and a few unplayable games. With every release, the list of things you need to do to get rid of and turn off the most annoying stuff gets a little longer."

Web Discourse: The rapid spread of memes, trends, and corporate jargon on social media has led to a homogenization of online communication, making it difficult to distinguish original content and creating a sense of constant noise. "[T]he enshittification of social media, particularly due to its speed and virality, has led to millions vying for their moment in the sun, and all I see is a constant glare that makes everything look indistinguishable," writes Jacob May. "No wonder some companies think AI is the future."
AI

Police Use of AI Facial Recognition Results In Murder Case Being Tossed (cleveland.com) 50

"A jury may never see the gun that authorities say was used to kill Blake Story last year," reports Cleveland.com.

"That's because Cleveland police used a facial recognition program — one that explicitly says its results are not admissible in court — to obtain a search warrant, according to court documents." The search turned up what police say is the murder weapon in the suspect's home. But a Cuyahoga County judge tossed that evidence after siding with defense attorneys who argued that the search warrant affidavit was misleading and relied on inadmissible evidence. If an appeals court upholds the judge's ruling to suppress the evidence, prosecutors acknowledge their case is likely lost...

The company that produced the facial recognition report, Clearview AI, has been used in hundreds of law enforcement investigations throughout Ohio and has faced lawsuits over privacy violations.

Not only does Cleveland lack a policy governing the use of artificial intelligence, Ohio lawmakers also have failed to set standards for how police use the tool to investigate crimes. "It's the wild, wild west in Ohio," said Gary Daniels, a lobbyist for the American Civil Liberties Union. The lack of state regulation of how law enforcement uses advanced technologies — no laws similarly govern the use of drones or license plate readers — means it is essentially up to agencies how they use the tools.

The affidavit for the search warrant was signed by a 28-year police force veteran, according to the article — but it didn't disclose the use of Clearview's technology.

Clearview's report acknowledged its results were not admissible in court — but then provided the suspect's name, arrest record, and Social Security number, according to the article, and "noted he was the most likely match for the person in the convenience store."

Thanks to tlhIngan (Slashdot reader #30,335) for sharing the news.

Slashdot Top Deals