AI

OpenAI Is Scanning Users' ChatGPT Conversations and Reporting Content To Police (futurism.com) 72

Futurism reports: Earlier this week, buried in the middle of a lengthy blog post addressing ChatGPT's propensity for severe mental health harms, OpenAI admitted that it's scanning users' conversations and reporting to police any interactions that a human reviewer deems sufficiently threatening.

"When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts," it wrote. "If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement."

The announcement raised immediate questions. Don't human moderators judging tone, for instance, undercut the entire premise of an AI system that its creators say can solve broad, complex problems? How is OpenAI even figuring out users' precise locations in order to provide them to emergency responders? How is it protecting against abuse by so-called swatters, who could pretend to be someone else and then make violent threats to ChatGPT in order to get their targets raided by the cops...? The admission also seems to contradict remarks by OpenAI CEO Sam Altman, who recently called for privacy akin to a "therapist or a lawyer or a doctor" for users talking to ChatGPT.

"Others argued that the AI industry is hastily pushing poorly-understood products to market, using real people as guinea pigs, and adopting increasingly haphazard solutions to real-world problems as they arise..."

Thanks to long-time Slashdot reader schwit1 for sharing the news.
Programming

What Happened When Unix Co-Creator Brian Kernighan Tried Rust? (thenewstack.io) 97

"I'm still teaching at Princeton," 83-year-old Brian Kernighan recently told an audience at New Jersey's InfoAge Science and History Museums.

And last month the video was uploaded to YouTube, a new article points out, "showing that his talk ended with a unique question-and-answer session that turned almost historic..." "Do you think there's any sort of merit to Rust replacing C?" one audience member asked... "Or is this just a huge hype bubble that's waiting to die down...?"

'"I have written only one Rust program, so you should take all of this with a giant grain of salt," he said. "And I found it a — pain... I just couldn't grok the mechanisms that were required to do memory safety, in a program where memory wasn't even an issue!" Speaking of Rust, Kernighan said "The support mechanism that went with it — this notion of crates and barrels and things like that — was just incomprehensibly big and slow. And the compiler was slow, the code that came out was slow..."

All in all, Kernighan had had a bad experience. "When I tried to figure out what was going on, the language had changed since the last time somebody had posted a description! And so it took days to write a program which in other languages would take maybe five minutes..." It was his one and only experience with the language, so Kernighan acknowledged that when it comes to Rust, "I'm probably unduly cynical. But I'm — I don't think it's gonna replace C right away, anyway."

Kernighan was also asked about NixOS and HolyC — but his formative experiences remain rooted in the Bell Labs of the 1970s, where he remembers it was "great fun to hang out with these people."

And he acknowledged that the descendants of Unix now power nearly every cellphone. "I find it intriguing... And I also find it kind of irritating that underneath there is a system that I could do things with — but I can't get at it!"


Kernighan answered questions from Slashdot readers in 2009 and again in 2015...
Facebook

What Made Meta Suddenly Ban Tens of Thousands of Accounts? (bbc.com) 105

"For months, tens of thousands of people around the world have been complaining Meta has been banning their Instagram and Facebook accounts in error..." the BBC reported this month... More than 500 of them have contacted the BBC to say they have lost cherished photos and seen businesses upended — but some also speak of the profound personal toll it has taken on them, including concerns that the police could become involved.

Meta acknowledged a problem with the erroneous banning of Facebook Groups in June, but has denied there is a wider issue on Facebook or Instagram. It has repeatedly refused to comment on the problems its users are facing — though it has frequently overturned bans when the BBC has raised individual cases with it.

One example is a woman who lost the Instagram profile for her boutique dress shop. ("Over 5,000 followers, gone in an instant.") "After the BBC sent questions about her case to Meta's press office, her Instagram accounts were reinstated... Five minutes later, her personal Instagram was suspended again — but the account for the dress shop remained."

Another user spent a month appealing. ("In June, the BBC understands a human moderator double checked," but concluded he'd breached a policy.) And then "his account was abruptly restored at the end of July. 'We're sorry we've got this wrong,' Instagram said in an email to him, adding that he had done nothing wrong." Hours after the BBC contacted Meta's press office to ask questions about his experience, he was banned again on Instagram and, for the first time, Facebook... His Facebook account was back two days later — but he was still blocked from Instagram.
None of the banned users in the BBC's examples were ever told what post breached the platform's rules. Over 36,000 people have signed a petition accusing Meta of falsely banning accounts; thousands more are in Reddit forums or on social media posting about it. Their central accusation — Meta's AI is unfairly banning people, with the tech also being used to deal with the appeals. The only way to speak to a human is to pay for Meta Verified, and even then many are frustrated.

Meta has not commented on these claims. Instagram states AI is central to its "content review process" and Meta has outlined how technology and humans enforce its policies.

The Guardian reports there's been "talk of a class action against Meta over the bans." Users report Meta has typically been unresponsive to their pleas for assistance, often with standardised responses to requests for review, almost all of which have been rejected... But the company claims there has not been an increase in incorrect account suspension, and the volume of users complaining was not indicative of new targeting or over-enforcement. "We take action on accounts that violate our policies, and people can appeal if they think we've made a mistake," a spokesperson for Meta said.
"It happened to me this morning," writes long-time Slashdot reader Daemon Duck," asking if any other Slashdot readers had their personal (or business) account unreasonably banned. (And wondering what to do next...)
Privacy

Is a Backlash Building Against Smart Glasses That Record? (futurism.com) 68

Remember those Harvard dropouts who built smart glasses for covert facial recognition — and then raised $1 million to develop AI-powered glasses to continuously listen to conversations and display its insights?

"People Are REALLY Mad," writes Futurism, noting that some social media users "have responded with horror and outrage." One of its selling points is that the specs don't come with a visual indicator that lights up to let people know when they're being recorded, which is a feature that Meta's smart glasses do currently have. "People don't want this," wrote Whitney Merill, a privacy lawyer. "Wanting this is not normal. It's weird...."

[S]ome mocked the deleterious effects this could have on our already smartphone-addicted, brainrotted cerebrums. "I look forward to professional conversations with people who just read robot fever dream hallucinations at me in response to my technical and policy questions," one user mused.

The co-founder of the company told TechCrunch their glasses would be the "first real step towards vibe thinking."

But there are already millions of other smart glasses out in the world, and they're now drawing a backlash, reports the Washington Post, citing the millions of people viewing "a stream of other critical videos" about Meta's smart glasses.

The article argues that Generation Z, "who grew up in an internet era defined by poor personal privacy, are at the forefront of a new backlash against smart glasses' intrusion into everyday life..." Opal Nelson, a 22-year-old in New York, said the more she learns about smart glasses, the angrier she becomes. Meta Ray-Bans have a light that turns on when the gadget is recording video, but she said it doesn't seem to protect people from being recorded without consent... "And now there's more and more tutorials showing people how to cover up the [warning light] and still allow you to record," Nelson said. In one such tutorial with more than 900,000 views, a man claims to explain how to cover the warning light on Meta Ray-Bans without triggering the sensor that prevents the device from secretly recording.
One 26-year-old attracted 10 million views to their TikTok video about the spread of Meta's photography-capable smart glasses. "People specifically in my generation are pretty concerned about the future of technology," they told the Post, "and what that means for all of us and our privacy."

The article cites figures from a devices analyst at IDC who estimates U.S. sales for Meta Ray-Bans will hit 4 million units by the end of 2025, compared to 1.2 million in 2024.
AI

Microsoft Reveals Two In-House AI Models 17

Today, Microsoft unveiled two in-house AI models: MAI-Voice-1, a high-speed speech-generation system now live in Copilot, and MAI-1-Preview, its first end-to-end foundation model trained on 15,000 H100 GPUs. Neowin reports: MAI-Voice-1 is a speech generation model and is already available in Copilot Daily and Podcasts. To preview the full capabilities of this voice model, Microsoft has created a new Copilot Labs experience that anyone can try today. With the Copilot Audio Expressions experience, users can just paste text content and select the voice, style, and mode to generate high-fidelity, expressive audio. They can also download the generated audio if required. Microsoft also highlighted that this MAI-Voice-1 model is very fast and efficient. In fact, it can generate a full minute of audio in under a second on a single GPU.

Second, Microsoft has begun public testing of MAI-1-preview on LMArena, a popular platform for community model evaluation. This represents MAI's first foundation model trained end-to-end and offers a glimpse of future offerings inside Copilot. They are actively spinning the flywheel to deliver improved models and will have much more to share in the coming months. MAI-1-preview is an MoE (mixture-of-experts) model, pre-trained and post-trained on nearly 15,000 NVIDIA H100 GPUs. Notably, MAI-1-preview is Microsoft's first foundation model trained end-to-end in-house. Microsoft claims that this model is better at following instructions and can offer helpful responses to everyday user questions. Microsoft will be rolling out this new model to certain text use cases within Copilot over the coming weeks.
Microsoft

Microsoft's Copilot AI is Now Inside Samsung TVs and Monitors (theverge.com) 69

An anonymous reader shares a report: Microsoft's Copilot AI assistant is officially coming to TVs, starting with Samsung's 2025 lineup of TVs and smart monitors. With the integration, you can call upon Copilot and ask for movie suggestions, spoiler-free episode recaps, and other general questions.

On TV, Copilot takes on a "friendly, animated presence" that resembles the opalescent Copilot Appearance Microsoft showed off last month, though in a color that makes it look more like a personified chickpea. The beige blob will float and bounce around your screen, while its mouth moves in line with its responses.

Businesses

Solo Founders Are Battling Silicon Valley's Biggest Bias (sajithpai.com) 29

Solo entrepreneurs now launch 35% of all startups, double the rate from a decade ago, yet venture capital funding patterns remain virtually unchanged, according to an analysis by venture capitalist Sajith Pai. Carta's equity management data reveals that while solo-founded companies grew from 17% of 2,600 startups in 2015 to 35% of 3,800 startups in 2024, their share of VC funding barely moved, from 15% to 17%.

"Valley VCs don't like solo founders," Pai, who is a partner at India-based venture firm Blume, writes in his analysis. Y Combinator CEO Garry Tan confirmed the accelerator's practice of persuading solo founders to find partners after acceptance.The bias persists despite prominent solo-founded successes including Amazon, SpaceX, and Zoom. Pai notes that "most unicorn startups have cofounders" but questions whether this reflects genuine risk differences or simply that cofounded startups receive five times more funding opportunities. "The bias against solo founders is so strong," Pai observes, that it appears repeatedly in founder complaints and venture capitalist commentary, even as other Silicon Valley biases against women and non-elite universities gradually ease.
Republicans

Republicans Investigate Wikipedia Over Allegations of Organized Bias (thehill.com) 173

An anonymous reader quotes a report from The Hill: Republicans on the House Oversight and Government Reform Committee opened a probe into alleged organized efforts to inject bias into Wikipedia entries and the organization's responses. Chair James Comer (R-Ky.) and Rep. Nancy Mace (R-S.C.), chair of the panel's subcommittee on cybersecurity, information technology, and government innovation, on Wednesday sent an information request on the matter to Maryana Iskander, chief executive officer of the Wikimedia Foundation, the nonprofit that hosts Wikipedia. The request, the lawmakers said in the letter (PDF), is part of an investigation into "foreign operations and individuals at academic institutions subsidized by U.S. taxpayer dollars to influence U.S. public opinion."

The panel is seeking documents and communications about Wikipedia volunteer editors who violated the platform's policies, as well as the Wikimedia Foundation's efforts to "thwart intentional, organized efforts to inject bias into important and sensitive topics." "Multiple studies and reports have highlighted efforts to manipulate information on the Wikipedia platform for propaganda aimed at Western audiences," Comer and Mace wrote in the letter. They referenced a report from the Anti-Defamation League about anti-Israel bias on Wikipedia that detailed a coordinated campaign to manipulate content related to the Israel-Palestine conflict and similar issues, as well as an Atlantic Council report on pro-Russia actors using Wikipedia to push pro-Kremlin and anti-Ukrainian messaging, which can influence how artificial intelligence chatbots are trained.

"[The Wikimedia] foundation, which hosts the Wikipedia platform, has acknowledged taking actions responding to misconduct by volunteer editors who effectively create Wikipedia's encyclopedic articles. The Committee recognizes that virtually all web-based information platforms must contend with bad actors and their efforts to manipulate. Our inquiry seeks information to help our examination of how Wikipedia responds to such threats and how frequently it creates accountability when intentional, egregious, or highly suspicious patterns of conduct on topics of sensitive public interest are brought to attention," Comer and Mace wrote. The lawmakers requested information about "the tools and methods Wikipedia utilizes to identify and stop malicious conduct online that injects bias and undermines neutral points of view on its platform," including documents and records about possible coordination of state actors in editing, the kind of accounts that have been subject to review, and and of the panel's analysis of data manipulation or bias.
"We welcome the opportunity to respond to the Committee's questions and to discuss the importance of safeguarding the integrity of information on our platform," a Wikimedia Foundation spokesperson said.
AI

Japanese Media Groups Sue AI Search Engine Perplexity Over Alleged Copyright Infringement (ft.com) 14

Two of Japan's largest media groups are suing AI search engine Perplexity over alleged copyright infringement, joining a growing list of news publishers taking legal action against AI companies using their content. FT: Japanese media group Nikkei, which owns the Financial Times, and the Asahi Shimbun newspaper said in statements on Tuesday that they had jointly filed a lawsuit in Tokyo. The groups join a number of Western media companies taking legal action against Perplexity, which provides answers to questions with sources and citations, using large language models (LLMs) from platforms such as OpenAI and Anthropic.

The Japanese news providers claim Perplexity has, without permission, "copied and stored article content from the servers of Nikkei and Asahi" and ignored a "technical measure" designed to prevent this from happening. They claim that Perplexity's answers have given incorrect information attributed to the newspapers' articles, which "severely damages the credibility of newspaper companies."

AI

Perplexity Launches Subscription Program That Includes Revenue Sharing With Publishers (pymnts.com) 7

An anonymous reader quotes a report from PYMNTS: Artificial intelligence startup Perplexity has announced a new subscription program called Comet Plus that it said gives users access to premium content from trusted publishers and journalists, while providing publishers with a better compensation model. "Comet Plus transforms how publishers are compensated in the AI age," the company said in a Monday blog post. "As users demand a better internet in the age of AI, it's time for a business model to ensure that publishers and journalists benefit from their contributions to a better internet."

Comet Plus is included in Perplexity's Pro and Max memberships and is available as a standalone subscription for $5 per month. Perplexity introduced its Comet AI-powered browser in July, saying the tool lets users answer questions and carry out tasks and research from a single interface. Bloomberg reported Monday that Perplexity has allocated $42.5 million for a revenue sharing program that compensates publishers when their content is used by its Comet browser or AI assistant. The program will use funds that come from Comet Plus and will deliver 80% of the revenue to publishers, with Perplexity getting the other 20%, the report said, citing an interview with Perplexity CEO Aravind Srinivas. "AI is helping to create a better internet, but publishers still need to get paid," Srinivas said in the report. "So we think this is actually the right solution, and we're happy to make adjustments along the way."

Python

Survey Finds More Python Developers Like PostgreSQL, AI Coding Agents - and Rust for Packages (jetbrains.com) 85

More than 30,000 Python developers from around the world answered questions for the Python Software Foundation's annual survey — and PSF Fellow Michael Kennedy tells the Python community what they've learned in a new blog post. Some highlights: Most still use older Python versions despite benefits of newer releases... Many of us (15%) are running on the very latest released version of Python, but more likely than not, we're using a version a year old or older (83%). [Although less than 1% are using "Python 3.5 or lower".] The survey also indicates that many of us are using Docker and containers to execute our code, which makes this 83% or higher number even more surprising... You simply choose a newer runtime, and your code runs faster. CPython has been extremely good at backward compatibility. There's rarely significant effort involved in upgrading... [He calculates some cloud users are paying up to $420,000 and $5.6M more in compute costs.] If your company realizes you are burning an extra $0.4M-$5M a year because you haven't gotten around to spending the day it takes to upgrade, that'll be a tough conversation...

Rust is how we speed up Python now... The Python Language Summit of 2025 revealed that "Somewhere between one-quarter and one-third of all native code being uploaded to PyPI for new projects uses Rust", indicating that "people are choosing to start new projects using Rust". Looking into the survey results, we see that Rust usage grew from 27% to 33% for binary extensions to Python packages... [The blog post later advises Python developers to learn to read basic Rust, "not to replace Python, but to complement it," since Rust "is becoming increasingly important in the most significant portions of the Python ecosystem."]

PostgreSQL is the king of Python databases, and it's still growing, going from 43% to 49%. That's +14% year over year, which is remarkable for a 28-year-old open-source project... [E]very single database in the top six grew in usage year over year. This is likely another indicator that web development itself is growing again, as discussed above...

[N]early half of the respondents (49%) plan to try AI coding agents in the coming year. Program managers at major tech companies have stated that they almost cannot hire developers who don't embrace agentic AI. The productivity delta between those using it and those who avoid it is simply too great (estimated at about 30% greater productivity with AI).

It's their eighth annual survey (conducted in collaboration with JetBrains last October and November). But even though Python is 34 years old, it's still evolving. "In just the past few months, we have seen two new high-performance typing tools released," notes the blog post. (The ty and Pyrefly typecheckers — both written in Rust.) And Python 3.14 will be the first version of Python to completely support free-threaded Python... Just last week, the steering council and core developers officially accepted this as a permanent part of the language and runtime... Developers and data scientists will have to think more carefully about threaded code with locks, race conditions, and the performance benefits that come with it. Package maintainers, especially those with native code extensions, may have to rewrite some of their code to support free-threaded Python so they themselves do not enter race conditions and deadlocks.

There is a massive upside to this as well. I'm currently writing this on the cheapest Apple Mac Mini M4. This computer comes with 10 CPU cores. That means until this change manifests in Python, the maximum performance I can get out of a single Python process is 10% of what my machine is actually capable of. Once free-threaded Python is fully part of the ecosystem, I should get much closer to maximum capacity with a standard Python program using threading and the async and await keywords.
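To make that point concrete, here is a minimal sketch (hypothetical workload and numbers, not from the survey post) of the kind of CPU-bound threading free-threaded Python unlocks. On a standard GIL build the ten threads effectively run one at a time; on a free-threaded (PEP 703) build of CPython 3.13+, the same code can spread across all ten cores:

```python
# Hypothetical sketch: CPU-bound work fanned out over threads.
# Under the GIL these threads serialize and gain roughly nothing;
# on a free-threaded build of CPython they can run in parallel.
import sys
from concurrent.futures import ThreadPoolExecutor

def count_primes(limit: int) -> int:
    # Deliberately naive, pure-Python CPU-bound work.
    return sum(
        all(n % d for d in range(2, int(n ** 0.5) + 1))
        for n in range(2, limit)
    )

if __name__ == "__main__":
    # sys._is_gil_enabled() exists on 3.13+; assume the GIL on older versions.
    print("GIL enabled:", getattr(sys, "_is_gil_enabled", lambda: True)())
    with ThreadPoolExecutor(max_workers=10) as pool:  # one worker per core
        print(sum(pool.map(count_primes, [200_000] * 10)))
```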

Some other notable findings from the survey:
  • Data science is now over half of all Python. This year, 51% of all surveyed Python developers are involved in data exploration and processing, with pandas and NumPy being the tools most commonly used for this.
  • Exactly 50% of respondents have less than two years of professional coding experience! And 39% have less than two years of experience with Python (even in hobbyist or educational settings)...
  • "The survey tells us that one-third of devs contributed to open source. This manifests primarily as code and documentation/tutorial additions."

Earth

30 Years of Satellite Data Confirm Predictions from Early Models of Sea Level Rise (tulane.edu) 199

"The ultimate test of climate projections is to compare them with what has played out..." says earth sciences professor Torbjörn Törnqvist, lead author on a new study published in the open-access journal Earth's Future (published by the American Geophysical Union).

But after "decades of observations," he says his researchers "were quite amazed how good those early projections were, especially when you think about how crude the models were back then, compared to what is available now." "For anyone who questions the role of humans in changing our climate, here is some of the best proof that we have understood for decades what is really happening, and that we can make credible projections...."

A new era of monitoring global sea-level change took off when satellites were launched in the early 1990s to measure the height of the ocean surface. This showed that the rate of global sea-level rise since that time has averaged about one eighth of an inch per year. Only more recently, it became possible to detect that the rate of global sea-level rise is accelerating. When NASA researchers demonstrated in October 2024 that the rate has doubled during this 30-year period, the time was right to compare this finding with projections that were made during the mid-1990s, independent of the satellite measurements.

In 1996, the Intergovernmental Panel on Climate Change published an assessment report soon after the satellite-based sea-level measurements had started. It projected that the most likely amount of global sea-level rise over the next 30 years would be almost 8 centimeters (3 inches), remarkably close to the 9 centimeters that has occurred.
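The quoted figures are easy to sanity-check. A quick back-of-the-envelope calculation (assuming a steady rate, which the observed acceleration makes a slight underestimate):

```python
# Back-of-the-envelope check of the figures quoted above.
rate_mm_per_yr = 25.4 / 8      # "one eighth of an inch per year" ~ 3.2 mm/yr
rise_cm = rate_mm_per_yr * 30 / 10
print(f"{rise_cm:.1f} cm over 30 years")
# ~9.5 cm: consistent with the ~9 cm observed and the ~8 cm projected in 1996
```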

But it also underestimated the role of melting ice sheets by more than 2 centimeters (about 1 inch). At the time, little was known about the role of warming ocean waters and how that could destabilize marine sectors of the Antarctic Ice Sheet from below. Ice flow from the Greenland Ice Sheet into the ocean has also been faster than foreseen.

"The findings provide confidence in model-based climate projections," according to the paper. Again, its two key points:
  • The largest disparities between projections and observations were due to underestimated dynamic mass loss of ice sheets
  • Comparison of past projections with subsequent observations gives confidence in future climate projections

Thanks to Slashdot reader Mr. Dollar Ton for sharing the news.


AI

The AI-Powered PDF Marks the End of an Era (wired.com) 69

The era of software without embedded AI assistants is coming to an end as Adobe launches Acrobat Studio, adding collaborative AI workspaces to the 32-year-old PDF format. The new platform allows users to upload multiple documents into "PDF spaces," where personalized chatbot assistants parse and answer questions about their contents.

Adobe began integrating generative AI into Acrobat last year and now positions this release as the format's biggest transformation since its 1993 debut. The shift arrives amid growing user fatigue with AI features proliferating across everyday applications -- a Pew Research Center report found US adults more concerned than excited about AI's impact on their lives. Adobe's move cements 2025 as the year generative AI became inescapable in essential software, fundamentally altering how users interact with documents that once replicated the familiarity of paper.
AI

Harvard Dropouts To Launch 'Always On' AI Smart Glasses That Listen, Record Every Conversation 68

Two Harvard dropouts are launching Halo X, a $249 pair of AI-powered smart glasses that continuously listen, record, and transcribe conversations while displaying real-time information to the wearer. "Our goal is to make glasses that make you super intelligent the moment you put them on," said AnhPhu Nguyen, co-founder of Halo. Co-founder Caine Ardayfio said the glasses "give you infinite memory."

"The AI listens to every conversation you have and uses that knowledge to tell you what to say ... kinda like IRL Cluely," Ardayfio told TechCrunch. "If somebody says a complex word or asks you a question, like, 'What's 37 to the third power?' or something like that, then it'll pop up on the glasses." From the report: Ardayfio and Nguyen have raised $1 million to develop the glasses, led by Pillar VC, with support from Soma Capital, Village Global, and Morningside Venture. The glasses will be priced at $249 and will be available for preorder starting Wednesday. Ardayfio called the glasses "the first real step towards vibe thinking."

The two Ivy League dropouts, who have since moved into their own version of the Hacker Hostel in the San Francisco Bay Area, recently caused a stir after developing a facial-recognition app for Meta's smart Ray-Ban glasses to prove that the tech could be used to dox people. As a potential early competitor to Meta's smart glasses, Ardayfio said Meta, given its history of security and privacy scandals, had to rein in its product in ways that Halo can ultimately capitalize on. [...]

For now, Halo X glasses only have a display and a microphone, but no camera, although the two are exploring the possibility of adding one to a future model. Users still need to have their smartphones handy to help power the glasses and get "real time info prompts and answers to questions," per Nguyen. The glasses, which are manufactured by another company that the startup didn't name, are tethered to an accompanying app on the owner's phone; the glasses essentially outsource the computing to the app, since they don't have enough power to do it on the device itself. Under the hood, the smart glasses use Google's Gemini and Perplexity as their chatbot engines, according to the two co-founders. Gemini is better for math and reasoning, whereas they use Perplexity to scrape the internet, they said.
Open Source

Remember the Companies Making Vital Open Source Contributions (infoworld.com) 22

Matt Asay answered questions from Slashdot readers in 2010 as the then-COO of Canonical. Today he runs developer marketing at Oracle (after holding similar positions at AWS, Adobe, and MongoDB).

And this week Asay contributed an opinion piece to InfoWorld reminding us of open source contributions from companies where "enlightened self-interest underwrites the boring but vital work — CI hardware, security audits, long-term maintenance — that grassroots volunteers struggle to fund." [I]f you look at the Linux 6.15 kernel contributor list (as just one example), the top contributor, as measured by change sets, is Intel... Another example: Take the last year of contributions to Kubernetes. Google (of course), Red Hat, Microsoft, VMware, and AWS all headline the list. Not because it's sexy, but because they make billions of dollars selling Kubernetes services... Some companies (including mine) sell proprietary software, and so it's easy to mentally bucket these vendors with license fees or closed cloud services. That bias makes it easy to ignore empirical contribution data, which indicates open source contributions on a grand scale.
Asay notes Oracle's many contributions to Linux: In the [Linux kernel] 6.1 release cycle, Oracle emerged as the top contributor by lines of code changed across the entire kernel... [I]t's Oracle that patches memory-management structures and shepherds block-device drivers for the Linux we all use. Oracle's kernel work isn't a one-off either. A few releases earlier, the company topped the "core of the kernel" leaderboard in 5.18, and it hasn't slowed down since, helping land the Maple Tree data structure and other performance boosters. Those patches power Oracle Cloud Infrastructure (OCI), of course, but they also speed up Ubuntu on your old ThinkPad. Self-interested contributions? Absolutely. Public benefit? Equally absolute.

This isn't just an Oracle thing. When we widen the lens beyond Oracle, the pattern holds. In 2023, I wrote about Amazon's "quiet open source revolution," showing how AWS was suddenly everywhere in GitHub commit logs despite the company's earlier reticence. (Disclosure: I used to run AWS' open source strategy and marketing team.) Back in 2017, I argued that cloud vendors were open sourcing code as on-ramps to proprietary services rather than end-products. Both observations remain true, but they miss a larger point: Motives aside, the code flows and the community benefits.

If you care about outcomes, the motives don't really matter. Or maybe they do: It's far more sustainable to have companies contributing because it helps them deliver revenue than to contribute out of charity. The former is durable; the latter is not.

There's another practical consideration: scale. "Large vendors wield resources that community projects can't match."

Asay closes by urging readers to "Follow the commits" and "embrace mixed motives... the point isn't sainthood; it's sustainable, shared innovation. Every company (and really every developer) contributes out of some form of self-interest. That's the rule, not the exception. Embrace it." Going forward, we should expect to see even more counterintuitive contributor lists. Generative AI is turbocharging code generation, but someone still has to integrate those patches, write tests, and shepherd them upstream. The companies with the most to lose from brittle infrastructure — cloud providers, database vendors, silicon makers — will foot the bill. If history is a guide, they'll do so quietly.
Security

Sloppy AI Defenses Take Cybersecurity Back To the 1990s, Researchers Say 20

spatwei shares a report from SC Media: Just as it had at BSides Las Vegas earlier in the week, the risks of artificial intelligence dominated the Black Hat USA 2025 security conference on Aug. 6 and 7. We couldn't see all the AI-related talks, but we did catch three of the most promising ones, plus an off-site panel discussion about AI presented by 1Password. The upshot: Large language models and AI agents are far too easy to successfully attack, and many of the security lessons of the past 25 years have been forgotten in the current rush to develop, use and profit from AI.

We -- not just the cybersecurity industry, but any organization bringing AI into its processes -- need to understand the risks of AI and develop ways to mitigate them before we fall victim to the same sorts of vulnerabilities we faced when Bill Clinton was president. "AI agents are like a toddler. You have to follow them around and make sure they don't do dumb things," said Wendy Nather, senior research initiatives director at 1Password and a well-respected cybersecurity veteran. "We're also getting a whole new crop of people coming in and making the same dumb mistakes we made years ago." Her fellow panelist Joseph Carson, chief security evangelist and advisory CISO at Segura, had an appropriately retro analogy for the benefits of using AI. "It's like getting the mushroom in Super Mario Kart," he said. "It makes you go faster, but it doesn't make you a better driver."
Many of the AI security flaws resemble early web-era SQL injection risks. "Why are all these old vulnerabilities surfacing again? Because the GenAI space is full of security bad practices," said Nathan Hamiel, senior director of research and lead prototyping engineer at Kudelski Security. "When you deploy these tools, you increase your attack surface. You're creating vulnerabilities where there weren't any."

"Generative AI is over-scoped. The same AI that answers questions about Shakespeare is helping you develop code. This over-generalization leads you to an increased attack surface." He added: "Don't treat AI agents as highly sophisticated, super-intelligent systems. Treat them like drunk robots."
China

China Urges Firms To Avoid Nvidia H20 Chips After Trump Resumes Sales (yahoo.com) 44

An anonymous reader quotes a report from Bloomberg: Beijing has urged local companies to avoid using Nvidia's H20 processors, particularly for government-related purposes, complicating the chipmaker's return to China after the Trump administration reversed an effective US ban on such sales. Over the past few weeks, Chinese authorities have sent notices to a range of firms discouraging use of the less-advanced semiconductors, people familiar with the matter said. The guidance was particularly strong against the use of H20s for any government or national security-related work by state enterprises or private companies, said the people, who asked not to be identified because the information is sensitive. The letters didn't, however, constitute an outright ban on H20 use, according to the people. Industry analysts broadly agree that Chinese companies still covet those chips, which perform quite well in certain crucial AI applications. President Donald Trump said Monday that the processor "still has a market" in the Asian country despite also calling it "obsolete."

Beijing's stance could limit Trump's ability to turn his export control about-face into a windfall for government coffers, a deal that highlighted his administration's transactional approach to national security policies long treated as nonnegotiable. Still, Chinese companies may not be ready to jump ship to local semiconductors. "Chips from domestic manufacturers are improving dramatically in quality, but they might not be as versatile for specific workloads that China's domestic AI industry hopes to focus on," said Homin Lee, a senior macro strategist at Lombard Odier in Singapore. Lee added that he anticipates "strong" demand for the chips the Trump administration is allowing Nvidia and AMD to sell.

Rosenblatt Securities analyst Kevin Cassidy said he doesn't anticipate that Nvidia's processor sales to China will be affected because "Chinese companies are going to want to use the best chips available." Nvidia and AMD's chips are superior to local alternatives, he said. Beijing asked companies about that issue in some of its letters, according to one of the people, posing questions such as why they buy Nvidia H20 chips over local versions, whether that's a necessary choice given domestic options, and whether they've found any security concerns in the Nvidia hardware. The notices coincide with state media reports that cast doubt on the security and reliability of H20 processors. Chinese regulators have raised those concerns directly with Nvidia, which has repeatedly denied that its chips contain such vulnerabilities.

The Financial Times reported that some Chinese companies are planning to decrease orders of Nvidia chips in response to the letters. Right now, the people said, China's most stringent chip guidance is limited to sensitive applications, a situation that bears similarities to the way Beijing restricted Tesla vehicles and Apple iPhones in certain institutions and locations over security concerns. China's government also at one point barred the use of Micron Technology Inc. chips in critical infrastructure. It's possible that Beijing may extend its heavier-handed Nvidia and AMD guidance to a wider range of settings, according to one person with direct knowledge of the deliberations, who said that those conversations are in early stages.

AI

LLMs' 'Simulated Reasoning' Abilities Are a 'Brittle Mirage,' Researchers Find (arstechnica.com) 238

An anonymous reader quotes a report from Ars Technica: In recent months, the AI industry has started moving toward so-called simulated reasoning models that use a "chain of thought" process to work through tricky problems in multiple logical steps. At the same time, recent research has cast doubt on whether those models have even a basic understanding of general logical concepts or an accurate grasp of their own "thought process." Similar research shows that these "reasoning" models can often produce incoherent, logically unsound answers when questions include irrelevant clauses or deviate even slightly from common templates found in their training data.

In a recent pre-print paper, researchers from the University of Arizona summarize this existing work as "suggest[ing] that LLMs are not principled reasoners but rather sophisticated simulators of reasoning-like text." To pull on that thread, the researchers created a carefully controlled LLM environment in an attempt to measure just how well chain-of-thought reasoning works when presented with "out of domain" logical problems that don't match the specific logical patterns found in their training data. The results suggest that the seemingly large performance leaps made by chain-of-thought models are "largely a brittle mirage" that "become[s] fragile and prone to failure even under moderate distribution shifts," the researchers write. "Rather than demonstrating a true understanding of text, CoT reasoning under task transformations appears to reflect a replication of patterns learned during training." [...]

Rather than showing the capability for generalized logical inference, these chain-of-thought models are "a sophisticated form of structured pattern matching" that "degrades significantly" when pushed even slightly outside of its training distribution, the researchers write. Further, the ability of these models to generate "fluent nonsense" creates "a false aura of dependability" that does not stand up to a careful audit. As such, the researchers warn heavily against "equating [chain-of-thought]-style output with human thinking" especially in "high-stakes domains like medicine, finance, or legal analysis." Current tests and benchmarks should prioritize tasks that fall outside of any training set to probe for these kinds of errors, while future models will need to move beyond "surface-level pattern recognition to exhibit deeper inferential competence," they write.

Crime

It's Steve Wozniak's 75th Birthday. Whatever Happened to His YouTube Lawsuit? (cbsnews.com) 98

In 2020 a YouTube video used video footage of Steve Wozniak in a scam to steal bitcoin. "Some people said they lost their life savings," Wozniak tells CBS News, explaining why he sued YouTube in 2020 — and where his case stands now: Wozniak's lawsuit against YouTube has been tied up in court now for five years, stalled by federal legislation known as Section 230. Attorney Brian Danitz said, "Section 230 is a very broad statute that limits, if not totally, the ability to bring any kind of case against these social media platforms."

"It says that anything gets posted, they have no liability at all," said Wozniak. "It's totally absolute."

Google responded to our inquiry about Wozniak's lawsuit with a statement from José Castañeda, of Google Policy Communications: "We take abuse of our platform seriously and take action quickly when we detect violations ... we have tools for users to report channels that are impersonating their likeness or business." [Steve's wife] Janet Wozniak, however, says YouTube did nothing, even though she reported the scam video multiple times: "You know, 'Please take this down. This is an obvious mistake. This is fraud. You're YouTube, you're helping dupe people out of their money,'" she said.

"They wouldn't," said Steve...

Today is Steve Wozniak's 75th birthday. (You can watch the interview here.) And the article includes this interesting detail about Woz's life today: Wozniak sold most of his Apple stock in the mid-1980s when he left the company. Today, though, he still gets a small paycheck from Apple for making speeches and representing the company. He says he's proud to see Apple become a trillion-dollar company. "Apple is still the best," he said. "And when Apple does things I don't like, and some of the closeness I wish it were more open, I'll speak out about it. Nobody buys my voice!"

I asked, "Does Apple listen to you when you speak out?"

"No," Wozniak smiled. "Oh, no. Oh, no."

Wozniak answered questions from Slashdot readers in 2000 and again in 2012.

And he dropped by Slashdot on his birthday to leave this comment for Slashdot's readers...
AI

WSJ Finds 'Dozens' of Delusional Claims from AI Chats as Companies Scramble for a Fix (msn.com) 61

The Wall Street Journal has found "dozens of instances in recent months in which ChatGPT made delusional, false and otherworldly claims to users who appeared to believe them."

For example, "You're not crazy. You're cosmic royalty in human skin..." In one exchange lasting hundreds of queries, ChatGPT confirmed that it is in contact with extraterrestrial beings and said the user was "Starseed" from the planet "Lyra." In another from late July, the chatbot told a user that the Antichrist would unleash a financial apocalypse in the next two months, with biblical giants preparing to emerge from underground...

Experts say the phenomenon occurs when chatbots' engineered tendency to compliment, agree with and tailor itself to users turns into an echo chamber. "Even if your views are fantastical, those are often being affirmed, and in a back and forth they're being amplified," said Hamilton Morrin, a psychiatrist and doctoral fellow at King's College London who last month co-published a paper on the phenomenon of AI-enabled delusion... The publicly available chats reviewed by the Journal fit the model doctors and support-group organizers have described as delusional, including the validation of pseudoscientific or mystical beliefs over the course of a lengthy conversation... The Journal found the chats by analyzing 96,000 ChatGPT transcripts that were shared online between May 2023 and August 2025. Of those, the Journal reviewed more than 100 that were unusually long, identifying dozens that exhibited delusional characteristics.

AI companies are taking action, the article notes. Monday OpenAI acknowledged there were rare cases when ChatGPT "fell short at recognizing signs of delusion or emotional dependency." (In March OpenAI "hired a clinical psychiatrist to help its safety team," and said Monday it was developing better detection tools and also alerting users to take a break, and "are investing in improving model behavior over time," consulting with mental health experts.)

On Wednesday, AI startup Anthropic said it had changed the base instructions for its Claude chatbot, directing it to "respectfully point out flaws, factual errors, lack of evidence, or lack of clarity" in users' theories "rather than validating them." The company also now tells Claude that if a person appears to be experiencing "mania, psychosis, dissociation or loss of attachment with reality," that it should "avoid reinforcing these beliefs." In response to specific questions from the Journal, an Anthropic spokesperson added that the company regularly conducts safety research and updates accordingly...

"We take these issues extremely seriously," Nick Turley, an OpenAI vice president who heads up ChatGPT, said Wednesday in a briefing to announce the new GPT-5, its most advanced AI model. Turley said the company is consulting with over 90 physicians in more than 30 countries and that GPT-5 has cracked down on instances of sycophancy, where a model blindly agrees with and compliments users.

There's a support/advocacy group called the Human Line Project which "says it has so far collected 59 cases, and some members of the group have found hundreds of examples on Reddit, YouTube and TikTok of people sharing what they said were spiritual and scientific revelations they had with their AI chatbots." The article notes that the group believes "the number of AI delusion cases appears to have been growing in recent months..."
