The Military

Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic (x.com) 40

Saturday afternoon Sam Altman announced he'd start answering questions on X.com about OpenAI's work with America's Department of War — and all the developments over the past few days. (After that department's negotiations with Anthropic had failed, it announced it would stop using Anthropic's technology and threatened to designate the company a "Supply-Chain Risk to National Security". It then reached a deal for OpenAI's technology — though Altman says it includes OpenAI's own similar prohibitions against using its products for domestic mass surveillance and requires "human responsibility" for the use of force in autonomous weapon systems.)

Altman said Saturday that enforcing that "Supply-Chain Risk" designation on Anthropic "would be very bad for our industry and our country, and obviously their company. We said [that] to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation.... We should all care very much about the precedent... To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it."

Altman also said that for a long time, OpenAI was planning to do "non-classified work only," but this week found the Department of War "flexible on what we needed..."

Sam Altman: The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs.

I know what it's like to feel backed into a corner, and I think it's worth some empathy to the Department of War. They are... a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them "The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind." And then we say "But we won't help you, and we think you are kind of evil." I don't think I'd react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.

Question: Are you worried at all about the potential for things to go really south during a possible dispute over what's legal or not later on and be deemed a supply chain risk...?

Sam Altman: Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that...

Question: Why the rush to sign the deal? Obviously the optics don't look great.

Sam Altman: It was definitely rushed, and the optics don't look good. We really wanted to de-escalate things, and we thought the deal on offer was good.

If we are right and this does lead to a de-escalation between the Department of War and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful. I don't know where it's going to land, but I have already seen promising signs. I think a good relationship between the government and the companies developing this technology is critical over the next couple of years...

Question: What was the core difference why you think the Department of War accepted OpenAI but not Anthropic?

Sam Altman: [...] We believe in a layered approach to safety--building a safety stack, deploying FDEs [embedded Forward Deployed Engineers] and having our safety and alignment researchers involved, deploying via cloud, working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it's very important to build safe systems, and although documents are also important, I'd clearly rather rely on technical safeguards if I could only pick one...

I think Anthropic may have wanted more operational control than we did...

Question: Were the terms that you accepted the same ones Anthropic rejected?

Sam Altman: No, we had some different ones. But our terms would now be available to them (and others) if they wanted.

Question: Will you turn off the tool if they violate the rules?

Sam Altman: Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.

Questions were also answered by OpenAI's head of National Security Partnerships (who at one point posted that they'd managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years). They stressed that under OpenAI's deal with the Department of War, "We control how we train the models and what types of requests the models refuse."

Question: Are employees allowed to opt out of working on Department of War-related projects?

Answer: We won't ask employees to support Department of War-related projects if they don't want to.

Question: How much is the deal worth?

Answer: It's a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We're doing it because it's the right thing to do for the country, at great cost to ourselves, not because of revenue impact...

Question: Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a 'threat to democratic values'?

Answer: We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Other AI labs (including Anthropic) have reduced or removed their safety guardrails and relied on usage policies as their primary safeguard in national security deployments. Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. These are the terms we negotiated in our contract.

They also detailed OpenAI's position on LinkedIn: Deployment architecture matters more than contract language. Our contract limits our deployment to cloud API. Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware...

Instead of hoping contract language will be enough, our contract allows us to embed forward deployed engineers, commits to giving us visibility into how models are being used, and we have the ability to iterate on safety safeguards over time. If our team sees that our models aren't refusing queries they should, or there's more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.

U.S. law already constrains the worst outcomes. We accepted the "all lawful uses" language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can't anticipate.

Government

CISA Replaces Bumbling Acting Director After a Year (techcrunch.com) 26

New submitter DeanonymizedCoward shares a report from TechCrunch: The U.S. Cybersecurity and Infrastructure Security Agency (CISA) is reportedly in crisis following major budget cuts, layoffs, and furloughs under the Trump administration. The agency has now replaced its acting director, Madhu Gottumukkala, after a turbulent year marked by controversy and internal turmoil. During his tenure, Gottumukkala allegedly mishandled sensitive information by uploading government documents to ChatGPT, oversaw a one-third reduction in staff, and reportedly failed a counterintelligence polygraph needed for classified access. His leadership also saw the suspension of several senior officials, including CISA's chief security officer. Nextgov also reported that CISA lost another top senior official, Bob Costello, the agency's chief information officer tasked with overseeing the agency's IT systems and data policies. "Last month, CISA's acting director Madhu Gottumukkala reportedly took steps to transfer Costello, but other political appointees blocked it," added Nextgov.

Crime

Four Convicted Over Spyware Affair That Shook Greece (bbc.com) 7

A Greek court has convicted four individuals linked to the marketing of Predator spyware in the wiretapping scandal that shook the country in 2022. The BBC reports: In what became known as "Greece's Watergate," surveillance software called Predator was used to target 87 people -- among them government ministers, senior military officials and journalists. The four who had marketed the software were found guilty by an Athens court of misdemeanours of violating the confidentiality of telephone communications and illegally accessing personal data and conversations.

The court sentenced the four defendants to lengthy jail terms, suspended pending appeal. Although each faces 126 years, at most eight would typically be served, the upper limit for misdemeanours. One in three of the dozens of figures targeted had also been under legal surveillance by Greece's intelligence services (EYP). Prime Minister Kyriakos Mitsotakis, who had placed EYP directly under his supervision, called it a scandal, but no government officials have been charged in court and critics accuse the government of trying to cover up the truth.

The case dates back to the summer of 2022, when the current head of Greek Socialist party Pasok, Nikos Androulakis - then an MEP - was informed by the European Parliament's IT experts that he had received a malicious text message containing a link. Predator spyware, marketed by the Athens-based Israeli company Intellexa, can get access to a device's messages, camera, and microphone. Its use was illegal in Greece at that time but a new law passed in 2022 has since legalised state security use of surveillance software under strict conditions. Androulakis also discovered that he had been tracked for "national security reasons" by Greece's intelligence services. The scandal has since escalated into a debate over democratic accountability in Greece.

Government

The Government Just Made it Harder to See What Spy Tech it Buys 17

An anonymous reader shares a report: It might look like something from the early days of the internet, with its aggressively grey color scheme and rectangles nested inside rectangles, but FPDS.gov is one of the most important resources for keeping tabs on what powerful spying tools U.S. government agencies are buying. It includes everything from phone hacking technology, to masses of location data, to more Palantir installations.

Or rather, it was an incredible tool and the basis for countless investigations by me and others. Because on Wednesday, the government shut it down. Its replacement, another site called SAM.gov with Uncle Sam branding, frankly sucks, and makes it demonstrably harder to reliably find out what agencies, including Immigration and Customs Enforcement (ICE), are spending taxpayer dollars on.

"FPDS may have been a little clunky, but its simple, old-school interface made it extremely functional and robust. Every facet of government operations touches on contracting at one point, and this was the first tool that many investigative journalists and researchers would reach for to quickly find out what the government is buying and who is selling it, and how these contracts all fit together," Dave Maass, director of investigations at the Electronic Frontier Foundation, told me.

The Courts

New York Sues Valve For Enabling 'Illegal Gambling' With Loot Boxes (arstechnica.com) 73

New York state has filed a lawsuit against Valve alleging that randomized loot boxes in games like Counter-Strike 2, Team Fortress 2, and Dota 2 amount to a form of unregulated gambling, letting users "pay for the chance to win a rare virtual item of significant monetary value." From a report: While many randomized video game loot boxes have drawn attention and regulation from various government bodies in recent years, the New York suit calls out Valve's system specifically for "enabl[ing] users to sell the virtual items they have won, either through its own virtual marketplace, the Steam Community Market, or through third-party marketplaces."

The vast majority of Valve's in-game loot boxes contain skins that can only be resold for a few cents, the suit notes, while the rarest skins can be worth thousands of dollars through marketplaces on and off of Steam. That fits the statutory definition of gambling as "charging an individual for a chance to win something of value based on luck alone," according to the suit.

The Steam Wallet funds that users get through directly reselling skins "have the equivalent purchasing power on the Steam platform as cash," the suit notes. But if a user wants to convert those Steam funds to real cash, they can do so relatively easily by purchasing a Steam Deck and reselling it to any interested party, as an investigator did while preparing the lawsuit.

Privacy

Americans Are Destroying Flock Surveillance Cameras 153

An anonymous reader shares a report: Brian Merchant, writing for Blood in the Machine, reports that people across the United States are dismantling and destroying Flock surveillance cameras, amid rising public anger that the license plate readers aid U.S. immigration authorities and deportations.

Flock is the Atlanta-based surveillance startup valued at $7.5 billion a year ago and a maker of license plate readers. It has faced criticism for allowing federal authorities access to its massive network of nationwide license plate readers and databases at a time when U.S. Immigration and Customs Enforcement is increasingly relying on data to raid communities as part of the Trump administration's immigration crackdown.

Flock cameras allow authorities to track where people go and when by taking photos of their license plates from thousands of cameras located across the United States. Flock claims it doesn't share data with ICE directly, but reports show that local police have shared their own access to Flock cameras and its databases with federal authorities. While some communities are calling on their cities to end their contracts with Flock, others are taking matters into their own hands.

AI

Hacker Used Anthropic's Claude To Steal Sensitive Mexican Data (bloomberg.com) 22

A hacker exploited Anthropic's AI chatbot to carry out a series of attacks against Mexican government agencies, resulting in the theft of a huge trove of sensitive tax and voter information, according to cybersecurity researchers. From a report: The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup Gambit Security said in research published Wednesday.

The activity started in December and continued for roughly a month. In all, 150 gigabytes of Mexican government data was stolen, including documents related to 195 million taxpayer records as well as voter records, government employee credentials and civil registry files, according to the researchers.

Privacy

Russia Targets Telegram as Rift With Founder Pavel Durov Deepens (ft.com) 25

Russia has opened an investigation into Telegram founder Pavel Durov for "abetting terrorist activities," [non-paywalled source] in the latest sign that his uneasy relationship with the Kremlin has broken down. From a report: Two Russian newspapers, including the state-run Rossiiskaya Gazeta and Kremlin-friendly tabloid Komsomolskaya Pravda, alleged on Tuesday that the messaging app had become a tool of western and Ukrainian intelligence services.

The articles, credited to materials from Russia's FSB security service, accused Telegram of enabling attacks in Russia and said that Durov's "actions ... are under criminal investigation." Russia has restricted Telegram's functions, accusing it of flouting the law, and is seeking to divert users towards Max, a state-run rival messenger. The steps escalate pressure on a platform that remains deeply embedded in Russian public life.

Encryption

Telegram Disputes Russia's Claim Its Encryption Was Compromised (business-standard.com) 21

Russia's domestic intelligence agency claimed Saturday that Ukraine can obtain sensitive information from troops using the Telegram app on the front line, reports Bloomberg. The fact that the claims were made through Russia's state-operated news outlet RIA Novosti signals "tightening scrutiny over a platform used by millions of Russians," Bloomberg notes, as the Kremlin continues efforts to "push people to use a new state-backed alternative."

Russia's communications watchdog limited access to Telegram — a popular messaging app owned by Russian-born billionaire Pavel Durov — over a week ago for failing to comply with Russian laws requiring personal data to be stored locally. Voice and video calls were blocked via Telegram in August. The pressure is the latest move in a long-running campaign to promote what the Kremlin calls a sovereign internet that's led to blocks on YouTube, Instagram and WhatsApp...

Foreign intelligence services are able to see Russia's military messages in Telegram too, Russia's minister for digital development, Maksut Shadaev, said on Wednesday, although he added that Russia will not block access to Telegram for troops for now.

Telegram responded at the time that no breaches of the app's encryption have ever been found. "The Russian government's allegation that our encryption has been compromised is a deliberate fabrication intended to justify outlawing Telegram and forcing citizens onto a state-controlled messaging platform engineered for mass surveillance and censorship," it said in an emailed response.

Robotics

Man Accidentally Gains Control of 7,000 Robot Vacuums (popsci.com) 51

A software engineer tried steering his robot vacuum with a videogame controller, reports Popular Science — but ended up with "a sneak peek into thousands of people's homes." While building his own remote-control app, Sammy Azdoufal reportedly used an AI coding assistant to help reverse-engineer how the robot communicated with DJI's remote cloud servers. But he soon discovered that the same credentials that allowed him to see and control his own device also provided access to live camera feeds, microphone audio, maps, and status data from nearly 7,000 other vacuums across 24 countries.

The backend security bug effectively exposed an army of internet-connected robots that, in the wrong hands, could have turned into surveillance tools, all without their owners ever knowing. Luckily, Azdoufal chose not to exploit that. Instead, he shared his findings with The Verge, which quickly contacted DJI to report the flaw... He also claims he could compile 2D floor plans of the homes the robots were operating in. A quick look at the robots' IP addresses also revealed their approximate locations.

DJI told Popular Science the issue was addressed "through two updates, with an initial patch deployed on February 8 and a follow-up update completed on February 10."

Crime

DNA Technology Convicts a 64-Year-Old for Murdering a Teenager in 1982 (cnn.com) 78

"More than four decades after a teenager was murdered in California, DNA found on a discarded cigarette has helped authorities catch her killer," reports CNN: Sarah Geer, 13, was last seen leaving her friend's house in Cloverdale, California, on the evening of May 23, 1982. The next morning, a firefighter walking home from work found her body, the Sonoma County District Attorney's Office said in a news release... Her death was ruled a homicide, but due to the "limited forensic science of the day," no suspect was identified and the case went cold for decades, prosecutors said.

Nearly 44 years after Sarah's murder, a jury on February 13 found James Unick, 64, guilty of killing her. The verdict came on what would have been the victim's 57th birthday, the Sonoma County District Attorney's Office told CNN. Genetic genealogy, which combines DNA evidence and traditional genealogy, helped match Unick's DNA from a cigarette butt to DNA found on Sarah's clothing, according to prosecutors... [The Cloverdale Police Department] said it had been in communication with a private investigation firm in late 2019 and had partnered with them in hopes the firm could revisit the case's evidence "with the latest technological advancements in cold case work...."

"The FBI, with its access to familial genealogical databases, concluded that the source of the DNA evidence collected from Sarah belonged to one of four brothers, including James Unick," prosecutors said. Once investigators narrowed down the list of suspects to the four Unick brothers, the FBI "conducted surveillance of the defendant and collected a discarded cigarette that he had been smoking," prosecutors said. A DNA analysis of the cigarette confirmed James Unick's DNA matched the 2003 profile, along with other DNA samples collected from Sarah's clothing the day she was killed.

In a statement, the county's district attorney said: "While 44 years is too long to wait, justice has finally been served..."

And the article points out that "In 2018, genetic genealogy led to the arrest of the Golden State Killer, and it has recently helped solve several other cold cases, including a 1974 murder in Wisconsin and a 1988 murder in Washington."

Government

Pro-Gamer Consumer Movement 'Stop Killing Games' Will Launch NGOs in America and the EU (pcgamer.com) 28

The consumer movement Stop Killing Games "has come a long way in the two years since YouTuber Ross Scott got mad about Ubisoft's destruction of The Crew in 2024," writes the gaming news site PC Gamer. "The short version is, he won: 1.3 million people signed the group's petition, mandating its consideration by the European Union, and while Ubisoft CEO Yves Guillemot reminded us all that nothing is forever, his company promised to never do something like that again." (And Ubisoft has since updated The Crew 2 with an offline mode, according to Engadget.)

"But it looks like even bigger things are in store," PC Gamer wrote Thursday, "as Scott announced today that Stop Killing Games is launching two official NGOs, one in the EU and the other in the US." An NGO — that's non-governmental organization — is, very generally speaking, an organization that pursues particular goals, typically but not exclusively political, and that may be funded partially or fully by governments, but is not actually part of any government. It's a big tent: Well-known NGOs include Oxfam, Doctors Without Borders, Amnesty International, and CARE International... "If there's a lobbyist showing up again and again at the EU Commission, that might influence things," [Scott says in a video]. "This will also allow for more watchdog action. If you recall, I helped organize a multilingual site with easy to follow instructions for reporting on The Crew to consumer protection agencies. Well, maybe the NGO could set something like that up for every big shutdown where the game is destroyed in the future...."

Scott said in the video that he doesn't have details, but the two NGOs are reportedly looking at establishing a "global movement" to give Stop Killing Games a presence in other regions.

"According to Scott, these NGOs would allow for 'long-term counter lobbying' when publishers end support for certain video games," Engadget reports: "Let me start off by saying I think we're going to win this, namely the problem of publishers destroying video games that you've already paid for," Scott said in the video. According to Scott, the NGOs will work on getting the original Stop Killing Games petition codified into EU law, while also pursuing more watchdog actions, like setting up a system to report publishers for revoking access to purchased video games... According to Scott, the campaign leadership will meet with the European Commission soon, but is also working on a 500-page legal paper that reveals some of the industry's current controversial practices.

AI

America's Peace Corps Announces 'Tech Corps' Volunteers to Help Bring AI to Foreign Countries (engadget.com) 49

More than 240,000 Americans have volunteered for Peace Corps projects in 142 countries since the program began more than half a century ago.

But now the agency is launching a new initiative — called Tech Corps. "It's the Peace Corps, but make it AI," explains Engadget: The Peace Corps' latest proposal will recruit STEM graduates or those with professional experience in the artificial intelligence sector and send them to participating host countries.

According to the press release, volunteers will be placed in Peace Corps countries that are part of the American AI Exports Program, which was created last year from an executive order from President Trump as a way to bolster the US' grip on the AI market abroad. Tech Corps members will be tasked with using AI to resolve issues related to agriculture, education, health and economic development. The program will offer its members 12- to 27-month in-person assignments or virtual placements, which will include housing, healthcare, a living stipend and a volunteer service award if the corps member is placed overseas.

"American technology to power prosperity," reads the headline at the Tech Corps website. ("Build the tech nations depend on... See the world. Be the future.")

The site says they're recruiting "service-minded technologists to serve in the Peace Corps to help countries around the world harness American AI to enhance opportunity and prosperity for their citizens." (And experienced technology professionals can donate 5-15 hours a week "to mentor and support projects on-the-ground.")

The Courts

US Supreme Court Rejects Trump's Global Tariffs (reuters.com) 227

The U.S. Supreme Court struck down on Friday President Donald Trump's sweeping tariffs that he pursued under a law meant for use in national emergencies, rejecting one of his most contentious assertions of his authority in a ruling with major implications for the global economy. From a report: The justices, in a 6-3 ruling authored by conservative Chief Justice John Roberts, upheld a lower court's decision that the Republican president's use of this 1977 law exceeded his authority.

The court ruled that the Trump administration's interpretation that the law at issue - the International Emergency Economic Powers Act, or IEEPA - grants Trump the power he claims to impose tariffs would intrude on the powers of Congress and violate a legal principle called the "major questions" doctrine. The doctrine, embraced by the conservative justices, requires actions by the government's executive branch of "vast economic and political significance" to be clearly authorized by Congress. The court used the doctrine to stymie some of Democratic former President Joe Biden's key executive actions.

Security

How Private Equity Debt Left a Leading VPN Open To Chinese Hackers (financialpost.com) 26

An anonymous reader quotes a report from Bloomberg: In early 2024, the agency that oversees cybersecurity for much of the US government issued a rare emergency order -- disconnect your Connect Secure virtual private network software immediately. Chinese spies had hacked the code and infiltrated nearly two dozen organizations. The directive applied to all civilian federal agencies, but given the product's customer base, its impact was more widely felt. The software, which is made by Ivanti Inc., was something of an industry standard across government and much of the corporate world. Clients included the US Air Force, Army, Navy and other parts of the Defense Department, the Department of State, the Federal Aviation Administration, the Federal Reserve, the National Aeronautics and Space Administration, thousands of companies and more than 2,000 banks including Wells Fargo & Co. and Deutsche Bank AG, according to federal procurement records, internal documents, interviews and the accounts of former Ivanti employees who requested anonymity because they were not authorized to disclose customer information.

Soon after sending out their order, which instructed agencies to install an Ivanti-issued fix, staffers at the Cybersecurity and Infrastructure Security Agency discovered that the threat was also inside their own house. Two sensitive CISA databases -- one containing information about personnel at chemical facilities, another assessing the vulnerabilities of critical infrastructure operators -- had been compromised via the agency's own Connect Secure software. CISA had followed all its own guidance. Ivanti's fix had failed. This was a breaking point for some American national security officials, who had long expressed concerns about Connect Secure VPNs. CISA subsequently published a letter with the Federal Bureau of Investigation and the national cybersecurity agencies of the UK, Canada, Australia and New Zealand warning customers of the "significant risk" associated with continuing to use the software. According to Laura Galante, then the top cyber official in the Office of the Director of National Intelligence, the government came to a simple conclusion about the technology. "You should not be using it," she said. "There really is no other way to put it."

That attack, along with several others that successfully targeted the Ivanti software, illustrate how private equity's push into the cybersecurity market ended up compromising the quality and safety of some critical VPN products, Bloomberg has found. Last year, Bloomberg reported that Citrix Systems Inc., another top VPN maker, experienced several major hacks after its private equity owners, Elliott Investment Management and Vista Equity Partners, cut most of the company's 70-member product security team following their acquisition of the company in 2022. Some government officials and private-sector executives are now reconsidering their approach to evaluating cybersecurity software. In addition to excising private equity-owned VPNs from their networks, some factor private equity ownership into their risk assessments of key technologies.
