The Military

Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic (x.com) 42

Saturday afternoon Sam Altman announced he'd start answering questions on X.com about OpenAI's work with America's Department of War — and all the developments over the past few days. (After that department's negotiations with Anthropic had failed, it announced it would stop using Anthropic's technology and threatened to designate the company a "Supply-Chain Risk to National Security". It then reached a deal for OpenAI's technology — though Altman says it includes OpenAI's own similar prohibitions against using their products for domestic mass surveillance and requires "human responsibility" for the use of force in autonomous weapon systems.)

Altman said Saturday that enforcing that "Supply-Chain Risk" designation on Anthropic "would be very bad for our industry and our country, and obviously their company. We said [that] to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation.... We should all care very much about the precedent... To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it."

Altman also said that for a long time, OpenAI was planning to do "non-classified work only," but this week found the Department of War "flexible on what we needed..."

Sam Altman: The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs.

I know what it's like to feel backed into a corner, and I think it's worth some empathy to the Department of War. They are... a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them "The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind." And then we say "But we won't help you, and we think you are kind of evil." I don't think I'd react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.

Question: Are you worried at all about the potential for things to go really south during a possible dispute over what's legal or not later on and be deemed a supply chain risk...?

Sam Altman: Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that...

Question: Why the rush to sign the deal? Obviously the optics don't look great.

Sam Altman: It was definitely rushed, and the optics don't look good. We really wanted to de-escalate things, and we thought the deal on offer was good.

If we are right and this does lead to a de-escalation between the Department of War and the industry, we will look like geniuses, and like a company that took on a lot of pain to help the industry. If not, we will continue to be characterized as rushed and uncareful. I don't know where it's going to land, but I have already seen promising signs. I think a good relationship between the government and the companies developing this technology is critical over the next couple of years...

Question: What was the core difference why you think the Department of War accepted OpenAI but not Anthropic?

Sam Altman: [...] We believe in a layered approach to safety — building a safety stack, deploying FDEs [embedded Forward Deployed Engineers] and having our safety and alignment researchers involved, deploying via cloud, working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it's very important to build safe systems, and although documents are also important, I'd clearly rather rely on technical safeguards if I had to pick only one...

I think Anthropic may have wanted more operational control than we did...

Question: Were the terms that you accepted the same ones Anthropic rejected?

Sam Altman: No, we had some different ones. But our terms would now be available to them (and others) if they wanted.

Question: Will you turn off the tool if they violate the rules?

Sam Altman: Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.

Questions were also answered by OpenAI's head of National Security Partnerships (who at one point posted that they'd managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years). And they stressed that with OpenAI's deal with the Department of War, "We control how we train the models and what types of requests the models refuse."

Question: Are employees allowed to opt out of working on Department of War-related projects?

Answer: We won't ask employees to support Department of War-related projects if they don't want to.

Question: How much is the deal worth?

Answer: It's a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We're doing it because it's the right thing to do for the country, at great cost to ourselves, not because of revenue impact...

Question: Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a 'threat to democratic values'?

Answer: We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Other AI labs (including Anthropic) have reduced or removed their safety guardrails and relied on usage policies as their primary safeguards in national security deployments. Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. These are the terms we negotiated in our contract.

They also detailed OpenAI's position on LinkedIn: Deployment architecture matters more than contract language. Our contract limits our deployment to cloud API. Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware...

Instead of hoping contract language will be enough, our contract allows us to embed forward deployed engineers, commits to giving us visibility into how models are being used, and gives us the ability to iterate on safety safeguards over time. If our team sees that our models aren't refusing queries they should refuse, or that there's more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.

U.S. law already constrains the worst outcomes. We accepted the "all lawful uses" language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can't anticipate.

IT

Work-From-Office Mandate? Expect Top Talent Turnover, Culture Rot (cio.com) 95

CIO magazine reports that "the push toward in-person work environments will make it more difficult for IT leaders to retain and recruit staff, some experts say." "In addition to resistance, there would also be the risk of talent turnover," [says Lawrence Wolfe, CTO at marketing firm Converge]... "The truth is, both physical and virtual collaboration provide tremendous value...." IT workers facing work-from-office mandates are two to three times more likely than their counterparts to look for new jobs, according to Metaintro, a search engine that tracks millions of jobs. IT leaders hiring new employees may also face significant headwinds, with it taking 40% to 50% longer to fill in-person roles than remote jobs, according to Metaintro. "Some of the challenges CIOs face include losing top-tier talent, limiting the pool of candidates available for hire, and damaging company culture, with a team filled with resentment," says Lacey Kaelani, CEO and cofounder at Metaintro...

There are several downsides for IT leaders to in-person work mandates, [adds Lena McDearmid, founder and CEO of culture and leadership advisory firm Wryver], as orders to commute to an office can feel arbitrary or rooted in control rather than in value creation. "That erodes trust quickly, particularly in IT teams that proved they could deliver remotely for years," she adds. The mandates can also create new friction for IT leaders by requiring them to deal with morale issues, manage exceptions, and spend time enforcing policy instead of leading strategy, she says. "There's also a real risk of losing experienced, high-performing talent who have options and are unwilling to trade autonomy for proximity without a clear reason," McDearmid adds. "When companies mandate daily commutes without a clear rationale, they often narrow their talent pool and increase attrition, particularly among people who know they can work effectively elsewhere."

McDearmid has seen teams "sitting next to each other" who collaborate poorly "because decisions are unclear or leaders equate visibility with progress... Collaboration doesn't automatically improve just because people share a building."

And Rebecca Wettemann, CEO at IT analyst firm Valoir, warns of return-to-office mandates "being used as a Band-Aid for poor management. When IT professionals feel they're being evaluated based on badge swipes, not real accomplishments, they will either act accordingly or look to work elsewhere."

Thanks to Slashdot reader snydeq for sharing the article.
AI

Scientists Propose AI Apocalypse Kill Switches 104

A paper (PDF) from researchers at the University of Cambridge, supported by voices from numerous academic institutions including OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports: The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions -- including those designed to improve visibility and limit the sale of AI accelerators -- are already playing out at a national level. Last year US president Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models as well as the infrastructure vendors capable of training them. If you're not familiar, "dual-use" refers to technologies that can serve double duty in civilian and military applications. More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent "know-your-customer" policies to prevent persons or countries of concern from getting around export restrictions. This kind of visibility is valuable, researchers note, as it could help to avoid another arms race, like the one triggered by the missile gap controversy, where erroneous reports led to a massive build-up of ballistic missiles. While valuable, they warn that executing on these reporting requirements risks invading customer privacy and could even lead to sensitive data being leaked.

Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we've previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track them over the course of their lifecycle, even after they've left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components.
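The registry idea the researchers describe — a per-chip unique identifier tracked across custody transfers — can be sketched in a few lines. This is a toy illustration only (the class name, the smuggling heuristic, and the data layout are all invented here; the paper proposes the concept, not an implementation):

```python
class ChipRegistry:
    """Toy lifecycle registry keyed by a per-chip unique identifier."""

    def __init__(self):
        self._custody = {}  # chip_id -> ordered list of owners

    def register(self, chip_id, manufacturer):
        # First entry in the custody chain is the country/firm of origin.
        self._custody[chip_id] = [manufacturer]

    def transfer(self, chip_id, new_owner):
        # An identifier the registry has never seen suggests a smuggled
        # or counterfeit part -- the scenario the proposal aims to detect.
        if chip_id not in self._custody:
            raise ValueError(f"unknown chip id: {chip_id}")
        self._custody[chip_id].append(new_owner)

    def custody_chain(self, chip_id):
        return list(self._custody.get(chip_id, []))


registry = ChipRegistry()
registry.register("H100-0001", "fab-usa")
registry.transfer("H100-0001", "cloud-provider-eu")
print(registry.custody_chain("H100-0001"))
```

The interesting policy question — who runs the registry and compels reporting — is of course not solved by the data structure.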

At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent their use in malicious applications. [...] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: "Specialized co-processors that sit on the chip could hold a cryptographically signed digital 'certificate,' and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance." In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn't without risk: if implemented incorrectly, such a kill switch could become a target for cybercriminals to exploit.
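The licensing scheme quoted above reduces to a simple decision: valid signature and unexpired means full speed, expired means degraded, bad signature means disabled. A toy sketch, using an HMAC as a stand-in for the paper's cryptographic signature (the key, field names, and policy outcomes here are all illustrative assumptions, not anything from the paper):

```python
import hashlib
import hmac
import json

# Stand-in for the regulator's signing key; a real scheme would use
# asymmetric signatures verified by the on-chip co-processor.
REGULATOR_KEY = b"demo-regulator-signing-key"


def sign_license(payload: dict) -> dict:
    """Regulator issues a license: payload plus an HMAC 'signature'."""
    blob = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(REGULATOR_KEY, blob, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}


def check_license(cert: dict, now: float) -> str:
    """On-chip logic: full speed only with a valid, unexpired license."""
    blob = json.dumps(cert["payload"], sort_keys=True).encode()
    expected = hmac.new(REGULATOR_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cert["sig"]):
        return "disabled"   # illegitimate license: chip refuses to run
    if now > cert["payload"]["expires"]:
        return "degraded"   # expired license: reduced performance
    return "full"


lic = sign_license({"chip_id": "chip-001", "expires": 1000.0})
print(check_license(lic, 500.0))   # within validity window
print(check_license(lic, 2000.0))  # past expiry
```

The authors' warning maps directly onto this sketch: whoever can forge or replay that signature, or block the renewal channel, controls the chip.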

Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. "Nuclear weapons use similar mechanisms called permissive action links," they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they'd first need to get authorization to do so. Though a potent tool, the researchers observe that this could backfire by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn't always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea being that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as "allocation."
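The multi-party sign-off idea is essentially a k-of-n authorization gate applied above a compute threshold. A minimal sketch — the threshold value, party names, and quorum size below are invented for illustration, not taken from the paper:

```python
# Hypothetical policy parameters (illustrative only).
THRESHOLD_FLOPS = 1e26
REQUIRED_APPROVALS = 2
AUTHORIZED_PARTIES = {"regulator", "cloud_provider", "independent_auditor"}


def may_launch_training(job_flops: float, approvals: set) -> bool:
    """Small jobs run freely; jobs over the threshold need a quorum
    of sign-offs from distinct authorized parties -- the AI analogue
    of a permissive action link."""
    if job_flops < THRESHOLD_FLOPS:
        return True
    return len(approvals & AUTHORIZED_PARTIES) >= REQUIRED_APPROVALS


print(may_launch_training(1e24, set()))                         # below threshold
print(may_launch_training(1e27, {"regulator"}))                 # quorum not met
print(may_launch_training(1e27, {"regulator", "cloud_provider"}))
```

The researchers' caveat shows up even in this toy version: every legitimate large training run now waits on the slowest approver.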
Advertising

Reddit Is Removing Ability To Opt Out of Ad Personalization Based On Your Activity (techcrunch.com) 54

Ivan Mehta writes via TechCrunch: Reddit said Wednesday that the platform is revamping its privacy settings with an aim to make ad personalization and account visibility toggles consistent. Most notably though, it is removing the ability to opt out of ad personalization based on Reddit activity. The company said that it will still have opt-out controls in "select countries" without specifying which ones. It mentioned in a blog post that users won't see more ads but they will see better-targeted ads following this change.

The company is essentially removing the option to not track you based on whatever you do on Reddit. Additionally, Reddit is consolidating two toggles on showing ads based on activity and information from partners into one toggle. So there is no way to separate those two settings now. Reddit is seemingly removing toggles for getting post recommendations based on "general location" and activity on partner sites and apps. It's not clear if this means those parameters will be used for post suggestions by default and there is no way to turn them off.

The social network said it will also roll out controls to limit certain advertising categories such as alcohol, weight loss, dating, gambling, pregnancy and parenting. The company noted that the ad-limiting controls may show you fewer ads from those categories if the toggles are turned off, but won't necessarily filter out all of them. Reddit justified this by saying it uses manual tagging and machine learning to label ads, so there is a chance the labeling is not 100% accurate. Reddit is also simplifying its location customization setting under a single menu, which will be easily accessible through settings on apps and on the web.

Transportation

New York State Bill Would Require Speed Limiting Tech In New Cars (autoblog.com) 155

The New York State Senate just introduced a bill that aims to improve safety around massive trucks and SUVs. Autoblog reports: Manhattan State Senator Brad Hoylman introduced the bill, which includes language requiring the NY DMV to dictate specific rules for vehicles over 3,000 pounds. One new regulation would be that the drivers of such cars have "direct visibility of pedestrians, cyclists, and other vulnerable road users from the driver's position." It's not clear exactly what enforcing that legislation would entail. However, the meat of Hoylman's bill centers on advanced safety technology. A summary of the legislation states, "Studies have shown that Intelligent Speed Assistance (ISA) alone can reduce traffic fatalities by 20%. This, in addition to Advanced Emergency Braking (AEB), Emergency Lane Keeping Systems (ELKS), drowsiness and distraction recognition technology, and rear-view cameras, would help prevent crashes from occurring in the first place."

If you've never heard of ISA, you're not alone. The term is pretty broad in what it encompasses, including speed limit recognition and alerts, speed assist, and speed limiting. The tech is common in Europe, where automakers like Ford offer it in several models. Ford's flavor of speed limiting allows drivers to set a maximum speed and automatically limit the vehicle to within five mph of the posted speed limit. It's optional, however, so drivers can turn it off when desired. If passed, the legislation would require automakers to include those advanced driver assistance systems as standard equipment in new vehicles from 2024 on.

Iphone

Apple App Store Appears to Be Widely Removing Outdated Apps (theverge.com) 76

"Apple may be cracking down on apps that no longer receive updates," reports the Verge: In a screenshotted email sent to affected developers, titled "App Improvement Notice," Apple warns it will remove apps from the App Store that haven't been "updated in a significant amount of time" and gives developers just 30 days to update them....

In 2016, Apple said it would start removing abandoned apps from the App Store. At the time, it also warned developers that they would have 30 days to update their app before it got taken down. That said, it's unclear whether Apple has continuously been enforcing this rule over the years, or if it recently started conducting a wider sweep. Apple also doesn't clearly outline what it considers to be "outdated" — whether it's based on the time that has elapsed since an app was last updated, or if it concerns compatibility with the most recent version of iOS.

Critics of this policy argue that mobile apps should remain available no matter their age, much like old video games remain playable on consoles. Others say the policy is unnecessarily tough on developers, and claim Apple doesn't fully respect the work that goes into indie games.

Earlier this month, the Google Play Store similarly announced it would begin limiting the visibility of apps that "don't target an API level within two years of the latest major Android release version." Android developers have until November 1st, 2022 to update their apps, but also have the option of applying for a six-month extension if they can't make the deadline.

Facebook

Six Reasons Meta (Formerly Facebook) is In Trouble (msn.com) 117

Meta's stock plunged 26% Thursday — its biggest one-day drop ever, lowering its market valuation by more than $230 billion. And then on Friday it dropped just a little bit more.

A New York Times technology correspondent offers six reasons Meta is in trouble: User growth has hit a ceiling. The salad days of Facebook's wild user growth are over. Even though the company on Wednesday recorded modest gains in new users across its so-called family of apps — which includes Instagram, Messenger and WhatsApp — its core Facebook social networking app lost about half a million users over the fourth quarter from the previous quarter.

That's the first such decline for the company in its 18-year history, during which time it had practically been defined by its ability to bring in more new users. The dip signaled that the core app may have reached its peak. Meta's quarterly user growth rate was also the slowest it has been in at least three years. Meta's executives have pointed to other growth opportunities, like turning on the money faucet at WhatsApp, the messaging service that has yet to generate substantial revenue. But those efforts are nascent. Investors are likely to next scrutinize whether Meta's other apps, such as Instagram, might begin to peak in user growth....

Apple's changes are limiting Meta and Google is stealing online advertising share. Last spring, Apple introduced an "App Tracking Transparency" update to its mobile operating system, essentially giving iPhone owners the choice as to whether they would let apps like Facebook monitor their online activities. Those privacy moves have now hurt Meta's business and are likely to continue doing so...

On Wednesday, David Wehner, Meta's chief financial officer, noted that as Apple's changes have given advertisers less visibility into user behaviors, many have started shifting their ad budgets to other platforms. Namely Google. In Google's earnings call this week, the company reported record sales, particularly in its e-commerce search advertising. That was the very same category that tripped up Meta in the last three months of 2021. Unlike Meta, Google is not heavily dependent on Apple for user data. Mr. Wehner said it was likely that Google had "far more third-party data for measurement and optimization purposes" than Meta's ad platform. Mr. Wehner also pointed to Google's deal with Apple to be the default search engine for Apple's Safari browser. That means Google's search ads tend to appear in more places, taking in more data that can be useful for advertisers. That's a huge problem for Meta in the long term, especially if more advertisers switch to Google search ads.

Meta's other problems include competition from TikTok (and the problems with monetizing "Reels," Meta's own TikTok clone on Instagram), as well as pending antitrust investigations (and the way it hampers future social media acquisitions). But with Meta expected to continue spending more than $10 billion a year on virtual reality, "still the province of niche hobbyists [that] has yet to really break into the mainstream," the article also suggests its final reason for why Meta is in trouble: that "Spending on the metaverse is bonkers."
Android

Google Play Limiting Android 11+ Apps From Seeing What's Installed on Devices This May (9to5google.com) 27

Google today announced a series of policy updates for apps distributed through the Play Store. The most impactful sees Google limit most developers from seeing which Android apps are installed on your device. From a report: As part of its ongoing work to restrict the use of high risk/sensitive permissions, Google is limiting what apps can use the QUERY_ALL_PACKAGES permission that "gives visibility into the inventory of installed apps on a given device." This applies to apps that target API 30+ on devices running Android 11 and newer. Enforcement was originally meant to occur earlier, but was delayed in light of COVID-19.
Republicans

Twitter Is Limiting the Visibility of Prominent Republicans In Search Results (vice.com) 726

An anonymous reader quotes a report from VICE News: Twitter is limiting the visibility of prominent Republicans in search results -- a technique known as "shadow banning" -- in what it says is a side effect of its attempts to improve the quality of discourse on the platform. The Republican Party chair Ronna McDaniel, several conservative Republican congressmen, and Donald Trump Jr.'s spokesman no longer appear in the auto-populated drop-down search box on Twitter, VICE News has learned. It's a shift that diminishes their reach on the platform -- and it's the same one being deployed against prominent racists to limit their visibility. The profiles continue to appear when conducting a full search, but not in the more convenient and visible drop-down bar. (The accounts appear to also populate if you already follow the person.)

Democrats are not being "shadow banned" in the same way, according to a VICE News review. McDaniel's counterpart, Democratic Party chair Tom Perez, and liberal members of Congress -- including Reps. Maxine Waters, Joe Kennedy III, Keith Ellison, and Mark Pocan -- all continue to appear in drop-down search results. Not a single member of the 78-person Progressive Caucus faces the same situation in Twitter's search. Presented with screenshots of the searches, a Twitter spokesperson told VICE News: "We are aware that some accounts are not automatically populating in our search box and shipping a change to address this." Asked why only conservative Republicans appear to be affected and not liberal Democrats, the spokesperson wrote: "I'd emphasize that our technology is based on account *behavior* not the content of Tweets."

Cloud

Linux Vendors Push For Open-Source In Hybrid Datacenter Clouds 30

Nerval's Lobster writes "Linux vendors Red Hat and SUSE are pushing to make sure Linux-based virtual machines are an important part of datacenter-based hybrid clouds. The two are taking significantly different tacks toward the same destination, however. SUSE is using the visibility and cloud hype of VMware by extending its partnership with the virtualization provider to promote its SUSE Linux Enterprise Server for VMware as an alternative operating system for virtual machines running on VMware's vCloud Hybrid Service. Red Hat is happy to include VMware in its plans, but isn't limiting itself either to VMware-based clouds or, in fact, to the idea that a Linux vendor has to tag along with a cloud or virtualization developer to find its place in mixed infrastructures. 'We do not buy into the premise that a private or a hybrid platform based on one vendor's technologies and products is the answer,' wrote Bryan Che, general manager of Red Hat's Cloud Business Unit. More than 25 percent of customers want clouds or datacenter infrastructures using virtualization products from more than one vendor, according to a buyers' guide published in August by market researcher IDC."
Image

Book Review: The CERT Oracle Secure Coding Standard For Java Screenshot-sm 66

brothke writes "It has been a decade since Oracle started their 'unbreakable' campaign touting the security robustness of their products. Aside from the fact that 'unbreakable' refers only to the enterprise kernel, Oracle products can still have significant security flaws. Even though Java supports very strong security controls, including JAAS (Java Authentication and Authorization Services), it still requires a significant effort to code Java securely. With that, The CERT Oracle Secure Coding Standard for Java is an invaluable guide that provides the reader with strong coding guidelines and practices to reduce the coding vulnerabilities that can lead to Java and Oracle exploits." Read on for the rest of Ben's review.

Answers from 'Our Man in Jordan' 181

At the beginning of this month we sent your questions to Isam Bayazidi of Amman, Jordan. He's a Slashdot reader, founder of the Jordan Planet blogging community, and (I know this from personal experience) knows the best places to buy discount-priced computer components in his home town. Enjoy!

The Art of Unix Programming 358

rjnagle writes "Eric S. Raymond (or ESR) is widely known for the groundbreaking series of essays in his book, The Cathedral and the Bazaar. In TCatB, he makes a credible case for why open source software works so well, and why community-supported software won't put developers out of a job. (I once attended a delightful talk he gave where, among other things, he gave sartorial advice to open source developers, urging them to avoid formal suits at presentations to CEOs as a way to give off the auras of foreign dignitaries unused to local customs). The arguments presented in Cathedral and the Bazaar were persuasive and original, and are now regarded as obvious. In his new book, Art of Unix Programming (available for free on the web), ESR stakes an even bolder claim: that initial design decisions make Unix uniquely well-suited to take advantage of open source's power. This book is an attempt to explain why Unix is so...well, Unixy." Read on for the rest of Nagle's review of The Art of Unix Programming.
