Software

Nova Launcher's Founder and Sole Developer Has Left (theverge.com) 20

Kevin Barry, founder and sole developer of Nova Launcher, has left parent company Branch Metrics after being told to stop work on both the launcher and an open-source release. While the app remains on Google Play, the launcher's website currently shows a 404 error. The Verge reports: Mobile analytics company Branch Metrics acquired Nova in 2022. The company's CEO at the time, co-founder Alex Austin, said on Reddit that if Barry were to leave Branch, "it's contracted that the code will be open-sourced and put in the hands of the community." Austin left Branch in 2023, and now, with Barry officially gone from the company too, it's unclear whether the launcher will actually be open-sourced.

"I think the newer leadership since Alex Austin left has put a different focus on the company and Nova simply isn't part of that focus in any way at all," Cliff Wade, Nova's former customer relations lead who left as part of the 2024 layoffs, tells The Verge. "It's just some app that they own but no longer feel they need or want." Wade also said that "I don't believe Branch will do the right thing any time soon with regards to open-sourcing Nova. I think they simply just don't care and don't want to invest time, unless of course, they get enough pressure from the community and individuals who care."

Users have started a change.org petition to ask for the project to be open-sourced, and Wade says it's a "great start" to apply that pressure. Wade said he hasn't personally seen Barry's contract, so couldn't corroborate the claim of a contractual obligation to open-source Nova. Still, he said that the community "deserves" for the launcher to be open-sourced. "Branch just simply needs to do the right thing here and honor what they as a company have stated as well as what then CEO Alex Austin has stated numerous times prior to him leaving Branch."

Microsoft

Some Angry GitHub Users Are Rebelling Against GitHub's Forced Copilot AI Features (theregister.com) 63

Slashdot reader Charlotte Web shared this report from The Register: Among the software developers who use Microsoft's GitHub, the most popular community discussion in the past 12 months has been a request for a way to block Copilot, the company's AI service, from generating issues and pull requests in code repositories. The second most popular discussion — where popularity is measured in upvotes — is a bug report that seeks a fix for the inability of users to disable Copilot code reviews. Both of these questions, the first opened in May and the second opened a month ago, remain unanswered, despite an abundance of comments critical of generative AI and Copilot...

The author of the first, developer Andi McClure, published a similar request to Microsoft's Visual Studio Code repository in January, objecting to the reappearance of a Copilot icon in VS Code after she had uninstalled the Copilot extension... "I've been for a while now filing issues in the GitHub Community feedback area when Copilot intrudes on my GitHub usage," McClure told The Register in an email. "I deeply resent that on top of Copilot seemingly training itself on my GitHub-posted code in violation of my licenses, GitHub wants me to look at (effectively) ads for this project I will never touch. If something's bothering me, I don't see a reason to stay quiet about it. I think part of how we get pushed into things we collectively don't want is because we stay quiet about it."

It's not just the burden of responding to AI slop, an ongoing issue for Curl maintainer Daniel Stenberg. It's the permissionless copying and regurgitation of speculation as fact, mitigated only by small print disclaimers that generative AI may produce inaccurate results. It's also GitHub's disavowal of liability if Copilot code suggestions happen to have reproduced source code that requires attribution. It's what the Servo project characterizes in its ban on AI code contributions as the lack of code correctness guarantees, copyright issues, and ethical concerns. Similar objections have been used to justify AI code bans in GNOME's Loupe project, FreeBSD, Gentoo, NetBSD, and QEMU... Calls to shun Microsoft and GitHub go back a long way in the open source community, but moved beyond simmering dissatisfaction in 2022 when the Software Freedom Conservancy (SFC) urged free software supporters to give up GitHub, a position SFC policy fellow Bradley M. Kuhn recently reiterated.

McClure says that in the last six months their posts have drawn more community support — and tells The Register there's been a second change in how people see GitHub within the last month. After GitHub moved from a distinct subsidiary to part of Microsoft's CoreAI group, "it seems to have galvanized the open source community from just complaining about Copilot to now actively moving away from GitHub."

Programming

32% of Senior Developers Say Half Their Shipped Code is AI-Generated (infoworld.com) 57

In July, Fastly surveyed 791 professional coders about their use of AI coding tools, reports InfoWorld. The results?

"About a third of senior developers (10+ years of experience) say over half their shipped code is AI-generated," Fastly writes, "nearly two and a half times the rate reported by junior developers (0-2 years of experience), at 13%." "AI will bench test code and find errors much faster than a human, repairing them seamlessly. This has been the case many times," one senior developer said...

Senior developers were also more likely to say they invest time fixing AI-generated code. Just under 30% of seniors reported editing AI output enough to offset most of the time savings, compared to 17% of juniors. Even so, 59% of seniors say AI tools help them ship faster overall, compared to 49% of juniors. Just over 50% of junior developers say AI makes them moderately faster. By contrast, only 39% of more senior developers say the same.

But senior devs are more likely to report significant speed gains: 26% say AI makes them a lot faster, double the 13% of junior devs who agree. One reason for this gap may be that senior developers are simply better equipped to catch and correct AI's mistakes... Nearly 1 in 3 developers (28%) say they frequently have to fix or edit AI-generated code enough that it offsets most of the time savings. Only 14% say they rarely need to make changes. And yet, over half of developers still feel faster with AI tools like Copilot, Gemini, or Claude.

Fastly's survey isn't alone in calling AI productivity gains into question. A recent randomized controlled trial (RCT) of experienced open-source developers found something even more striking: when developers used AI tools, they took 19% longer to complete their tasks. This disconnect may come down to psychology. AI coding often feels smooth... but the early speed gains are often followed by cycles of editing, testing, and reworking that erode them. This pattern is echoed both in conversations we've had with Fastly developers and in many of the comments we received in our survey...

Yet, AI still seems to improve developer job satisfaction. Nearly 80% of developers say AI tools make coding more enjoyable... Enjoyment doesn't equal efficiency, but in a profession wrestling with burnout and backlogs, that morale boost might still count for something.

Fastly quotes one developer who said their AI tool "saves time by using boilerplate code, but it also needs manual fixes for inefficiencies, which keep productivity in check."

The study also found the practice of green coding "goes up sharply with experience. Just over 56% of junior developers say they actively consider energy use in their work, while nearly 80% among mid- and senior-level engineers consider this when coding."

Power

Bill Gates-Backed Nuclear Fusion Developer Wants to Deploy a Reactor in Japan (japantimes.co.jp) 73

"A U.S.-based nuclear fusion developer wants to deploy a reactor in Japan in the late 2030s or early 2040s," reports Bloomberg, "in line with the Asian country's broader plans to adopt the potent, low-carbon energy source." Commonwealth Fusion Systems, which last week announced it raised $863 million from investors including Nvidia, has been in dialogue with Japanese government officials on the use of its technology, CEO Bob Mumgaard said in an interview in Tokyo on Wednesday... Several countries are eyeing the technology for its climate and energy security benefits, but only some, like China, the U.S., Russia, and South Korea, have managed to crack the basics. Japan revised its national strategy in June to support fusion deployment and build a demonstration plant in the 2030s.

The article notes that Commonwealth "does not currently have any reactors in operation" — but that Mitsubishi this week invested in the company, in collaboration with a consortium of 12 Japanese companies. From Mitsubishi's announcement: The Japanese Consortium will acquire technical and commercial expertise in policy, regulatory, and the development, construction, operation, and maintenance of ARC [power plant] from CFS's commercialization projects in the United States. In addition, each consortium company will bring together its know-how and expertise and aspire to expedite the commercialization and industrialization of fusion energy power generation in Japan.

Open Source

Rust Foundation Announces 'Innovation Lab' to Support Impactful Rust Projects (webpronews.com) 30

Announced this week at RustConf 2025 in Seattle, the new Rust Innovation Lab will offer open source projects "the opportunity to receive fiscal sponsorship from the Rust Foundation, including governance, legal, networking, marketing, and administrative support."

And their first project will be the TLS library Rustls (for cryptographic security), which they say "demonstrates Rust's ability to deliver both security and performance in one of the most sensitive areas of modern software infrastructure." Choosing Rustls "underscores the lab's focus on infrastructure-critical tools, where reliability is paramount," argues WebProNews. But "looking ahead, the foundation plans to expand the lab's portfolio, inviting applications from promising Rust initiatives. This could catalyze innovations in areas like embedded systems and blockchain, where Rust's efficiency shines."

Their article notes that the Rust Foundation "sees the lab as a way to accelerate innovation while mitigating the operational burdens that often hinder open-source development." [T]he Foundation aims to provide a stable, neutral environment for select Rust endeavors, complete with governance oversight, legal and administrative backing, and fiscal sponsorship... At its core, the Rust Innovation Lab addresses a growing need within the developer community for structured support amid Rust's rising adoption in sectors like systems programming and web infrastructure. By offering a "home" for projects that might otherwise struggle with sustainability, the lab ensures continuity and scalability. This comes at a time when Rust's memory safety features are drawing attention from major tech firms, including those in cloud computing and cybersecurity, as a counter to vulnerabilities plaguing languages like C++...

Industry observers note that such fiscal sponsorship could prove transformative, enabling projects to secure funding from diverse sources while maintaining independence. The Rust Foundation's involvement ensures compliance with best practices, potentially attracting more corporate backers wary of fragmented open-source efforts... By providing a neutral venue, the foundation aims to prevent the pitfalls seen in other ecosystems, such as project abandonment due to maintainer burnout or legal entanglements... For industry insiders, the Rust Innovation Lab represents a strategic evolution, potentially accelerating Rust's integration into mission-critical systems.

AI

Columbia Tries Using AI To Cool Off Student Tensions (theverge.com) 59

An anonymous reader shares a report: Can AI help "smooth over" discussion on abortion, racism, immigration, or Israel-Palestine? Columbia University sure hopes so. The Verge has learned that the university recently began testing Sway, an AI debate program currently in beta. Developed by two researchers at Carnegie Mellon University, Sway matches up students with opposing views to chat one-on-one about hot-button issues and "facilitates better discussions between them," according to the tool's website. Nicholas DiBella, a postdoctoral scholar at CMU who helped develop Sway, told The Verge that about 3,000 students from more than 30 colleges and universities have used the tool.

One of those may soon be Columbia. News of the potential partnership comes after more than two years of escalating tensions at Columbia between students, administrators, and the federal government. The university has spent years at the center of controversy after controversy: expulsions of pro-Palestinian student protesters, a string of police raids, and demands from the federal government.

People at Columbia's Teachers College are testing Sway in order to potentially integrate it into the conflict resolution curriculum and "bridge-building initiatives at Columbia," DiBella said. He said there's also been interest from other teams at Columbia in using Sway for the fall 2026 semester and onward. Simon Cullen, an assistant professor at CMU and the other developer behind Sway, told The Verge that the company is also in touch with Columbia University Life.

AI

FreeBSD Project Isn't Ready To Let AI Commit Code Just Yet (theregister.com) 21

The latest status report from the FreeBSD Project says no thanks to code generated by LLM-based assistants. From a report: The FreeBSD Project's Status Report for the second quarter of 2025 contains updates from various sub-teams that are working on improving the FreeBSD OS, including separate sub-projects such as enabling FreeBSD apps to run on Linux, Chinese translation efforts, support for Solaris-style Extended Attributes, and for Apple's legacy HFS+ file system.

The thing that stood out to us, though, was that the core team is working on what it terms a "Policy on generative AI created code and documentation." The relevant paragraph says: "Core is investigating setting up a policy for LLM/AI usage (including but not limited to generating code). The result will be added to the Contributors Guide in the doc repository. AI can be useful for translations (which seems faster than doing the work manually), explaining long/obscure documents, tracking down bugs, or helping to understand large code bases. We currently tend to not use it to generate code because of license concerns. The discussion continues at the core session at BSDCan 2025 developer summit, and core is still collecting feedback and working on the policy."

PHP

Laravel Inventor Tells Devs To Quit Writing 'Cathedrals of Complexity' (theregister.com) 48

Taylor Otwell, inventor and maintainer of popular PHP framework Laravel, is warning against overly complex code and the risks of bypassing the framework. From a report: Developers are sometimes drawn to building "cathedrals of complexity that aren't so easy to change," he said, speaking in a podcast for maintainable.fm, a series produced by Ruby on Rails consultancy Planet Argon.

Software, he said, should be "simple and disposable and easy to change." Some problems are genuinely complex, but in general, if a developer finds a "clever solution" which goes beyond the standard documented way in a framework such as Laravel or Ruby on Rails, "that would be like a smell."

A code smell -- for the uninitiated in The Reg readership -- is a term developers use for code that works but may cause problems at a later date. Otwell described himself as a "pretty average programmer" but reckons many others are the same, solving basic problems as quickly and efficiently as they can.

Android

What Every Argument About Sideloading Gets Wrong (hugotunius.se) 89

Developer Hugo Tunius, writing in a blog post: Sideloading has been a hot topic for the last decade. Most recently, Google has announced further restrictions on the practice in Android. Many hundreds of comment threads have discussed these changes over the years. One point in particular is always made: "I should be able to run whatever code I want on hardware I own." I agree entirely with this point, but within the context of this discussion it's moot.

When Google restricts your ability to install certain applications, they aren't constraining what you can do with the hardware you own; they are constraining what you can do using the software they provide with said hardware. It's through the operating system, not the hardware layer, that Google exerts this control. You often don't have full access to the hardware either, and building new operating systems to run on mobile hardware is impossible, or at least much harder than it should be. This is a separate, and I think more fruitful, point to make. Apple is a better case study than Google here. Apple's success with iOS partially derives from the tight integration of hardware and software. An iPhone without iOS is a very different product to what we understand an iPhone to be. Forcing Apple to change core tenets of iOS by legislative means would undermine what made the iPhone successful.

AI

Humans Are Being Hired to Make AI Slop Look Less Sloppy (nbcnews.com) 78

Graphic designer Lisa Carstens "spends a good portion of her day working with startups and individual clients looking to fix their botched attempts at AI-generated logos," reports NBC News: Such gigs are part of a new category of work spawned by the generative AI boom that threatened to displace creative jobs across the board: Anyone can now write blog posts, produce a graphic or code an app with a few text prompts, but AI-generated content rarely makes for a satisfactory final product on its own... Fixing AI's mistakes is not their ideal line of work, many freelancers say, as it tends to pay less than traditional gigs in their area of expertise. But some say it's what helps pay the bills....

As companies struggle to figure out their approach to AI, recent data provided to NBC News from freelance job platforms Upwork, Freelancer and Fiverr also suggest that demand for various types of creative work surged this year, and that clients are increasingly looking for humans who can work alongside AI technologies without relying on or rejecting them entirely. Data from Upwork found that although AI is already automating lower-skilled and repetitive tasks, the platform is seeing growing demand for more complex work such as content strategy or creative art direction. And over the past six months, Fiverr said it has seen a 250% boost in demand for niche tasks across web design and book illustration, from "watercolor children story book illustration" to "Shopify website design." Similarly, Freelancer saw a surge in demand this year for humans in writing, branding, design and video production, including requests for emotionally engaging content like "heartfelt speeches...."

The low pay from clients who have already cheaped out on AI tools has affected gig workers across industries, including more technical ones like coding. For India-based web and app developer Harsh Kumar, many of his clients say they had already invested much of their budget in "vibe coding" tools that couldn't deliver the results they wanted. But others, he said, are realizing that shelling out for a human developer is worth the headaches saved from trying to get an AI assistant to fix its own "crappy code." Kumar said his clients often bring him vibe-coded websites or apps that resulted in unstable or wholly unusable systems.

"Even outside of any obvious mistakes made by AI tools, some artists say their clients simply want a human touch to distinguish themselves from the growing pool of AI-generated content online..."

Python

New Python Documentary Released On YouTube (youtube.com) 46

"From a side project in Amsterdam to powering AI at the world's biggest companies — this is the story of Python," says the description of a new 84-minute documentary.

Long-time Slashdot reader destinyland writes: It traces Python all the way back to its origins in Amsterdam back in 1991. (Although the first time Guido van Rossum showed his new language to a co-worker, they'd typed one line of code just to prove they could crash Python's first interpreter.) The language slowly spread after van Rossum released it on Usenet — split across 21 separate posts — and Robin Friedrich, a NASA aerospace engineer, remembers using Python to build flight simulations for the Space Shuttle. (Friedrich says in the documentary he also attended Guido's first in-person U.S. workshop in 1994, and "I still have the t-shirt...")

Dropbox's CEO/founder Drew Houston describes what it was like being one of the first companies to use Python to build a company reaching millions of users. (Another success story was YouTube, which was built by a small team using Python before being acquired by Google). Anaconda co-founder Travis Oliphant remembers Python's popularity increasing even more thanks to the data science/machine learning community. But the documentary also includes the controversial move to Python 3 (which broke compatibility with earlier versions). Though ironically, one of the people slogging through a massive code migration ended up being van Rossum himself at his new job at Dropbox. The documentary also includes van Rossum's resignation as "Benevolent Dictator for Life" after approving the walrus operator. (In van Rossum's words, he essentially "rage-quit over this issue.")

But the focus is on Python's community. At one point, various interviewees even take turns reciting passages from the "Zen of Python" — which to this day is hidden in Python as an importable module, a kind of Easter egg.
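The Easter egg is one line to try: importing the standard-library module `this` prints the Zen of Python to the console, and the module also exposes the text itself (ROT13-encoded) as an attribute.

```python
# "import this" prints the Zen of Python -- a long-standing
# Easter egg shipped with the CPython standard library.
import this

# The module stores the text ROT13-encoded in its "s" attribute:
import codecs

zen = codecs.decode(this.s, "rot13")
print(zen.strip().splitlines()[0])  # -> The Zen of Python, by Tim Peters
```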

"It was a massive undertaking," the documentary's director explains in a new interview, describing a full year of interviews. (The article features screenshots from the documentary — including a young Guido van Rossum and the original 1991 email that announced Python to the world.) [Director Bechtle] is part of a group that's filmed documentaries on everything from Kubernetes and Prometheus to Angular, Node.js, and Ruby on Rails... Originally part of the job platform Honeypot, the documentary-makers relaunched in April as Cult.Repo, promising they were "100% independent and more committed than ever to telling the human stories behind technology."

Honeypot's founder Emma Tracey bought back its 272,000-subscriber YouTube channel from Honeypot's new owners, New Work SE, and Cult.Repo now bills itself as "The home of Open Source documentaries."

Over in a thread at Python.org, language creator Guido van Rossum has identified the Python community members in the film's Monty Python-esque poster art. And core developer Hugo van Kemenade notes there's also a video from EuroPython with a 55-minute Q&A about the documentary.

AI

Vivaldi Browser Doubles Down On Gen AI Ban 17

Vivaldi CEO Jon von Tetzchner has doubled down on his company's refusal to integrate generative AI into its browser, arguing that embedding AI in browsing dehumanizes the web, funnels traffic away from publishers, and primarily serves to harvest user data. "Every startup is doing AI, and there is a push for AI inside products and services continuously," he told The Register in a phone interview. "It's not really focusing on what people need." The Register reports: On Thursday, Von Tetzchner published a blog post articulating his company's rejection of generative AI in the browser, reiterating concerns raised last year by Vivaldi software developer Julien Picalausa. [...] Von Tetzchner argues that relying on generative AI for browsing dehumanizes and impoverishes the web by diverting traffic away from publishers and onto chatbots. "We're taking a stand, choosing humans over hype, and we will not turn the joy of exploring into inactive spectatorship," he stated in his post. "Without exploration, the web becomes far less interesting. Our curiosity loses oxygen and the diversity of the web dies."

Von Tetzchner told The Register that almost all the users he hears from don't want AI in their browser. "I'm not so sure that applies to the general public, but I do think that actually most people are kind of wary of something that's always looking over your shoulder," he said. "And a lot of the systems as they're built today that's what they're doing. The reason why they're putting in the systems is to collect information." Von Tetzchner said that AI in browsers presents the same problem as social media algorithms that decide what people see based on collected data. Vivaldi, he said, wants users to control their own data and to make their own decisions about what they see. "We would like users to be in control," he said. "If people want to use AI as those services, it's easily accessible to them without building it into the browser. But I think the concept of building it into the browser is typically for the sake of collecting information. And that's not what we are about as a company, and we don't think that's what the web should be about."

Vivaldi is not against all uses of AI, and in fact uses it for in-browser translation. But these are premade models that don't rely on user data, von Tetzchner said. "It's not like we're saying AI is wrong in all cases," he said. "I think AI can be used in particular for things like research and the like. I think it has significant value in recognizing patterns and the like. But I think the way it is being used on the internet and for browsing is net negative."

Piracy

Apple Pulls iPhone Torrent App From AltStore PAL in Europe (theverge.com) 31

An anonymous reader shares a report: Apple has removed the iPhone torrenting client, iTorrent, from AltStore PAL's alternative iOS marketplace in the EU, showing that it can still exert control over apps that aren't listed on the official App Store. iTorrent developer Daniil Vinogradov told TorrentFreak that Apple has revoked his distribution rights to publish apps in any alternative iOS stores, so the issue isn't tied to AltStore PAL itself.

Software

Developer Unlocks Newly Enshittified Echelon Exercise Bikes But Can't Legally Release Software (404media.co) 105

samleecole shares a report from 404 Media: An app developer has jailbroken Echelon exercise bikes to restore functionality that the company put behind a paywall last month, but copyright laws prevent him from being allowed to legally release it. Last month, Peloton competitor Echelon pushed a firmware update to its exercise equipment that forces its machines to connect to the company's servers in order to work properly. Echelon was popular in part because it was possible to connect Echelon bikes, treadmills, and rowing machines to free or cheap third-party apps and collect information like pedaling power, distance traveled, and other basic functionality that one might want from a piece of exercise equipment. With the new firmware update, the machines work only with constant internet access and getting anything beyond extremely basic functionality requires an Echelon subscription, which can cost hundreds of dollars a year.

App engineer Ricky Witherspoon, who makes an app called SyncSpin that used to work with Echelon bikes, told 404 Media that he successfully restored offline functionality to Echelon equipment and won a bounty offered by the Fulu Foundation. But he and the foundation said that he cannot open source or release it because doing so would run afoul of Section 1201 of the Digital Millennium Copyright Act, the wide-ranging copyright law that in part governs reverse engineering. There are various exemptions to Section 1201, but most of them allow for jailbreaks like the one Witherspoon developed to only be used for personal use. [...] "I don't feel like going down a legal rabbit hole, so for now it's just about spreading awareness that this is possible, and that there's another example of egregious behavior from a company like this [...] if one day releasing this was made legal, I would absolutely open source this. I can legally talk about how I did this to a certain degree, and if someone else wants to do this, they can open source it if they want to."

The Military

Defense Department Reportedly Relies On Utility Written by Russian Dev (theregister.com) 58

A widely used Node.js utility called fast-glob, relied on by thousands of projects -- including over 30 U.S. Department of Defense systems -- is maintained solely by a Russian developer linked to Yandex. While there's no evidence of malicious activity, cybersecurity experts warn that the lack of oversight in such critical open-source projects leaves them vulnerable to potential exploitation by state-backed actors. The Register reports: US cybersecurity firm Hunted Labs reported the revelations on Wednesday. The utility in question is fast-glob, which is used to find files and folders that match specific patterns. Its maintainer goes by the handle "mrmlnc", and the GitHub profile associated with that handle identifies its owner as a Yandex developer named Denis Malinochkin living in a suburb of Moscow. A website associated with that handle also identifies its owner as the same person, as Hunted Labs pointed out.
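For readers unfamiliar with glob matching: fast-glob itself is a Node.js package, but the idea it implements is the same pattern matching available in most standard libraries. A rough sketch of the concept (not fast-glob's actual API) using Python's stdlib, with a hypothetical directory layout:

```python
# Illustration of glob-style file matching -- the kind of pattern
# matching fast-glob provides for Node.js -- using Python's stdlib.
from pathlib import Path
import tempfile

# Build a small hypothetical project tree in a temp directory.
root = Path(tempfile.mkdtemp())
(root / "src").mkdir()
(root / "src" / "app.js").write_text("// app")
(root / "src" / "util.ts").write_text("// util")
(root / "README.md").write_text("# readme")

# A pattern like "**/*.ts" matches .ts files at any depth.
matches = sorted(p.relative_to(root).as_posix() for p in root.glob("**/*.ts"))
print(matches)  # -> ['src/util.ts']
```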

Hunted Labs told us that it didn't speak to Malinochkin prior to publication of its report today, and that it found no ties between him and any threat actor. According to Hunted Labs, fast-glob is downloaded more than 79 million times a week and is currently used by more than 5,000 public projects in addition to the DoD systems and Node.js container images that include it. That's not to mention private projects that might use it, meaning that the actual number of at-risk projects could be far greater.

While fast-glob has no known CVEs, the utility has deep access to systems that use it, potentially giving Russia a number of attack vectors to exploit. Fast-glob could attack filesystems directly to expose and steal info, launch a DoS or glob-injection attack, include a kill switch to stop downstream software from functioning properly, or inject additional malware, a list Hunted Labs said is hardly exhaustive. [...] Hunted Labs cofounder Haden Smith told The Register that the ties are cause for concern. "Every piece of code written by Russians isn't automatically suspect, but popular packages with no external oversight are ripe for the taking by state or state-backed actors looking to further their aims," Smith told us in an email. "As a whole, the open source community should be paying more attention to this risk and mitigating it." [...]

Hunted Labs said that the simplest solution for the thousands of projects using fast-glob would be for Malinochkin to add additional maintainers and enhance project oversight, as the only other alternative would be for anyone using it to find a suitable replacement. "Open source software doesn't need a CVE to be dangerous," Hunted Labs said of the matter. "It only needs access, obscurity, and complacency," something we've noted before is an ongoing problem for open source projects. "This serves as another powerful reminder that knowing who writes your code is just as critical as understanding what the code does," Hunted Labs concluded.

Open Source

Linux Turns 34 (tomshardware.com) 66

Mark Tyson writes via Tom's Hardware: On this day 34 years ago, an unknown computer science student from Finland announced that a new free operating system project was "starting to get ready." Linus Benedict Torvalds elaborated by explaining that the OS was "just a hobby, [it] won't be big and professional like GNU." Of course, this was the first public outing for the colossal collaborative project that is now known as Linux. Above, you can see Torvalds' first posting regarding Linux to the comp.os.minix newsgroup. The now famously caustic, cantankerous curmudgeon seemed relatively mild, meek, and malleable in this historic Linux milestone posting.

Torvalds asked the Minix community about their thoughts on a free new OS being prepared for Intel 386 and 486 clones. He explained that he'd been brewing the project since April (a few months prior), and asked for direction. Specifically, he sought input about other Minix users' likes and dislikes of that OS, in order to differentiate Linux. The now renowned developer then provided a rough summary of the development so far. Some features of Linux that Torvalds thought were important, or that he was particularly proud of, were then highlighted in the newsgroup posting. For example, the Linux chief mentioned his OS's multithreaded file system, and its absence of any Minix code. However, he humbly admitted the code as it stood was Intel x86 specific, and thus "is not portable."

Last but not least, Torvalds let it be known that version 0.01 of this free OS would be out in the coming month (September 1991). It was indeed released on September 17, 1991, but someone else decided on the OS name at the last minute. Apparently, Torvalds didn't want to release his new OS under the name of Linux, as it would be too egotistical, too self-aggrandizing. He preferred Freax, a portmanteau word formed from Free-and-X. However, one of Torvalds' colleagues, who was the administrator for the project's FTP server, did not think that 'Freax' was an appealing name for the OS. So this co-worker went ahead and uploaded the OS as 'Linux' on that date in September, without asking Torvalds.

Robotics

Nvidia's New 'Robot Brain' Goes On Sale (cnbc.com) 33

Nvidia has launched its Jetson AGX Thor robotics chip module, a $3,499 "robot brain" developer kit that starts shipping next month. CNBC reports: After a company uses the developer kit to prototype their robot, Nvidia will sell Thor T5000 modules that can be installed in production-ready robots. If a company needs more than 1,000 Thor chips, Nvidia will charge $2,999 per module. CEO Jensen Huang has said robotics is the company's largest growth opportunity outside of artificial intelligence, which has led to Nvidia's overall sales more than tripling in the past two years. "We do not build robots, we do not build cars, but we enable the whole industry with our infrastructure computers and the associated software," said Deepu Talla, Nvidia's vice president of robotics and edge AI, on a call with reporters Friday.

The Jetson Thor chips are based on a Blackwell graphics processor, which is Nvidia's current generation of technology used in its AI chips, as well as its chips for computer games. Nvidia said that its Jetson Thor chips are 7.5 times faster than its previous generation. That allows them to run generative AI models, including large language models and visual models that can interpret the world around them, which is essential for humanoid robots, Nvidia said. The Jetson Thor chips are equipped with 128GB of memory, which is essential for big AI models. [...] The company said its Jetson Thor chips can be used for self-driving cars as well, especially from Chinese brands. Nvidia calls its car chips Drive AGX, and while they are similar to its robotics chips, they run an operating system called Drive OS that's been tuned for automotive purposes.

Android

Google To Require Identity Verification for All Android App Developers by 2027 (androidauthority.com) 97

Google will require identity verification for all Android app developers, including those distributing apps outside the Play Store, starting September 2026 in Brazil, Indonesia, Singapore, and Thailand before expanding globally through 2027. Developers must register through a new Android Developer Console beginning March 2026. The requirement applies to certified Android devices running Google Mobile Services. Google cited malware prevention as the primary motivation, noting sideloaded apps contain 50 times more malware than Play Store apps.

Hobbyist and student developers will receive separate account types. Developer information submitted to Google will not be displayed to users.

Crime

Dev Gets 4 Years For Creating Kill Switch On Ex-Employer's Systems (bleepingcomputer.com) 113

Davis Lu, a former Eaton Corporation developer, has been sentenced to four years in prison for sabotaging his ex-employer's Windows network with malware and a custom kill switch that locked out thousands of employees once his account was disabled. The attack caused significant operational disruption and financial losses, with Lu also attempting to cover his tracks by deleting data and researching privilege escalation techniques. BleepingComputer reports: After a corporate restructuring and subsequent demotion in 2018, the DOJ says that Lu retaliated by embedding malicious code throughout the company's Windows production environment. The malicious code included an infinite Java thread loop designed to overwhelm servers and crash production systems. Lu also created a kill switch named "IsDLEnabledinAD" ("Is Davis Lu enabled in Active Directory") that would automatically lock all users out of their accounts if his account was disabled in Active Directory. When his employment was terminated on September 9, 2019, and his account disabled, the kill switch activated, causing thousands of users to be locked out of their systems.

"The defendant breached his employer's trust by using his access and technical knowledge to sabotage company networks, wreaking havoc and causing hundreds of thousands of dollars in losses for a U.S. company," said Acting Assistant Attorney General Matthew R. Galeotti. When he was instructed to return his laptop, Lu reportedly deleted encrypted data from his device. Investigators later discovered search queries on the device researching how to elevate privileges, hide processes, and quickly delete files. Lu was found guilty earlier this year of intentionally causing damage to protected computers, and will serve three years of supervised release following his four-year prison term.

Security

Male-Oriented App 'TeaOnHer' Also Had Security Flaws That Could Leak Men's Driver's License Photos (techcrunch.com) 112

The women-only dating-advice app Tea "has been hit with 10 potential class action lawsuits in federal and state court," NBC News reported last week, "after a data breach led to the leak of thousands of selfies, ID photos and private conversations online." The suits could result in Tea having to pay tens of millions of dollars in damages to the plaintiffs, which could be catastrophic for the company, an expert told NBC News... One of the suits lists the right-wing online discussion board 4chan and the social platform X as defendants, alleging that they allowed bad actors to spread users' personal information.
But meanwhile, a new competing app for men called "TeaOnHer" has already been launched. And it was also found to have enormous security flaws, reports TechCrunch, that "exposed its users' personal information, including photos of their driver's licenses and other government-issued identity documents..." [W]hen we looked at TeaOnHer's public internet records, it had no meaningful information other than a single subdomain, appserver.teaonher.com. When we opened this page in our browser, what loaded was the landing page for TeaOnHer's API (for the curious, we uploaded a copy here)... It was on this landing page that we found the exposed email address and plaintext password (which wasn't that far off from "password") for [TeaOnHer developer Xavier] Lampkin's account to access the TeaOnHer "admin panel"... This API landing page included an endpoint called /docs, which contained the API's auto-generated documentation (powered by a product called Swagger UI) that contained the full list of commands that can be performed on the API [including administrator commands to return user data]...

While it's not uncommon for developers to publish their API documentation, the problem here was that some API requests could be made without any authentication — no passwords or credentials were needed...
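The fix for this class of bug is to require credentials on every sensitive endpoint, not only the obviously administrative ones. As a minimal sketch of the idea (the names and token scheme here are hypothetical, not TeaOnHer's actual code), a handler can refuse to serve any request whose bearer token doesn't match a server-side secret:

```python
import hmac

# Hypothetical server-side secret; in practice this would come from a
# secrets manager or environment variable, never from source code.
API_TOKEN = "replace-with-a-long-random-value"

def authorized(headers: dict) -> bool:
    """Return True only if the request carries the expected bearer token."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    supplied = auth[len("Bearer "):]
    # compare_digest avoids leaking information through comparison timing.
    return hmac.compare_digest(supplied, API_TOKEN)

def get_user_record(headers: dict, user_id: str) -> dict:
    """Sketch of an endpoint that refuses to return data without auth."""
    if not authorized(headers):
        return {"status": 401, "error": "authentication required"}
    # ... look up and return the (redacted) user record here ...
    return {"status": 200, "user_id": user_id}
```

According to TechCrunch's description, the documented TeaOnHer endpoints answered without any such check at all, which is what turned an ordinary auto-generated docs page into a roadmap for pulling user data.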

The records returned from TeaOnHer's server contained users' unique identifiers within the app (essentially a string of random letters and numbers), their public profile screen name, and self-reported age and location, along with their private email address. The records also included web address links containing photos of the users' driver's licenses and corresponding selfies. Worse, these photos of driver's licenses, government-issued IDs, and selfies were stored in an Amazon-hosted S3 cloud server set as publicly accessible to anyone with their web addresses. This public setting lets anyone with a link to someone's identity documents open the files from anywhere with no restrictions...

The bugs were so easy to find that it would be sheer luck if nobody malicious found them before we did. We asked, but Lampkin would not say if he has the technical ability, such as logs, to determine if anyone had used (or misused) the API at any time to gain access to users' verification documents, such as by scraping web addresses from the API. In the days since our report to Lampkin, the API landing page has been taken down, along with its documentation page, and it now displays only the status of the server running the TeaOnHer API as "healthy."

The flaws were discovered while TeaOnHer was the #2 free app in the Apple App Store, the article points out. And while these flaws "appear to be resolved," the article notes a larger issue: "Shoddy coding and security flaws highlight the ongoing privacy risks inherent in requiring users to submit sensitive information to use apps and websites."

And TeaOnHer also had another authentication issue. A female reporter at Cosmopolitan also noted Friday that TeaOnHer "lets you browse through profiles before your verifications are complete. So literally anyone (like myself) can read reviews..."
