AI

Researchers Created an Open Rival To OpenAI's o1 'Reasoning' Model for Under $50

AI researchers at Stanford and the University of Washington were able to train an AI "reasoning" model for under $50 in cloud compute credits, according to a research paper. From a report: The model, known as s1, performs similarly to cutting-edge reasoning models, such as OpenAI's o1 and DeepSeek's R1, on tests measuring math and coding abilities. The s1 model is available on GitHub, along with the data and code used to train it.

The team behind s1 said they started with an off-the-shelf base model, then fine-tuned it through distillation, a process to extract the "reasoning" capabilities from another AI model by training on its answers. The researchers said s1 is distilled from one of Google's reasoning models, Gemini 2.0 Flash Thinking Experimental. Distillation is the same approach Berkeley researchers used to create an AI reasoning model for around $450 last month.
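Conceptually, the distillation step described above amounts to supervised fine-tuning on a stronger teacher model's answers. Below is a minimal sketch of the data-collection side; the function names and the stand-in teacher are illustrative assumptions, not the actual s1 pipeline:

```python
# Sketch of the distillation data step: collect a "teacher" model's answers
# (including its reasoning traces) and format them as supervised fine-tuning
# examples for a smaller student model. All names here are illustrative; a
# real pipeline would call a hosted reasoning model's API instead.

def build_distillation_examples(prompts, ask_teacher):
    """Query the teacher once per prompt and return (prompt, completion) pairs."""
    return [{"prompt": p, "completion": ask_teacher(p)} for p in prompts]

# Stand-in for the teacher (in practice: a reasoning model such as the one
# named in the article).
def fake_teacher(prompt):
    return f"<think>step-by-step reasoning about: {prompt}</think> final answer"

examples = build_distillation_examples(["What is 6 * 7?"], fake_teacher)
print(len(examples))                           # 1
print("<think>" in examples[0]["completion"])  # True
```

The student is then fine-tuned on these pairs, so it imitates not just the teacher's answers but the shape of its reasoning traces.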
Red Hat Software

Red Hat Plans to Add AI to Fedora and GNOME

In his post about the future of Fedora Workstation, Christian F.K. Schaller discusses how the Red Hat team plans to integrate AI with IBM's open-source Granite engine to enhance developer tools, such as IDEs, and create an AI-powered Code Assistant. He says the team is also working on streamlining AI acceleration in Toolbx and ensuring Fedora users have access to tools like RamaLama. From the post: One big item on our list for the year is looking at ways Fedora Workstation can make use of artificial intelligence. Thanks to IBM's Granite effort we now have an AI engine that is available under proper open source licensing terms and which can be extended for many different use cases. The IBM Granite team also has an aggressive plan for releasing updated versions of Granite, incorporating new features of special interest to developers, like making Granite a great engine to power IDEs and similar tools. We've been brainstorming various ideas in the team for how we can make use of AI to provide improved or new features to users of GNOME and Fedora Workstation. This includes making sure Fedora Workstation users have access to great tools like RamaLama, making sure that setting up accelerated AI inside Toolbx is simple, offering a good Code Assistant based on Granite, and coming up with other cool integration points.

"I'm still not sure how I feel about this approach," writes designer/developer and blogger Bradley Taunt. "While IBM Granite is an open source model, I still don't enjoy so much artificial 'intelligence' creeping into core OS development. This also isn't something optional on the end-user's side, like a desktop feature or package. This sounds like it's going to be built directly into the core system."

"Red Hat has been pushing hard towards AI and my main concern is having this influence other operating system dev teams. Luckily things seems AI-free in BSD land. For now, at least."
AI

OpenAI Holds Surprise Livestream to Announce Multi-Step 'Deep Research' Capability (indiatimes.com)

Just three hours ago, OpenAI made a surprise announcement to their 3.9 million followers on X.com. "Live from Tokyo," they'd be livestreaming... something. Their description of the event was just two words.

"Deep Research"

UPDATE: The stream has begun, and it's about OpenAI's next "agentic offering". ("OpenAI cares about agents because we believe they're going to transform knowledge work...")

"We're introducing a capability called Deep Research... a model that does multi-step research. It discovers content, it synthesizes content, and it reasons about this content." It even asks "clarifying" questions to your prompt to make sure its multi-step research stays on track. Deep Research will be launching in ChatGPT Pro later today, rolling out into other OpenAI products...

And OpenAI's site now has an "Introducing Deep Research" page. Its official description? "An agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you. Available to Pro users today, Plus and Team next."

Before the livestream began, X.com users shared their reactions to the coming announcement:

"It's like DeepSeek, but cleaner"
"Deep do do if things don't work out"
"Live from Tokyo? Hope this research includes the secret to waking up early!"
"Stop trying, we don't trust u"

But one X.com user had presciently pointed out OpenAI has used the phrase "deep research" before. In July of 2024, Reuters reported on internal documentation (confirmed with "a person familiar with the matter") code-named "Strawberry" which suggested OpenAI was working on "human-like reasoning skills." How Strawberry works is a tightly kept secret even within OpenAI, the person said. The document describes a project that uses Strawberry models with the aim of enabling the company's AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms "deep research," according to the source. This is something that has eluded AI models to date, according to interviews with more than a dozen AI researchers.

Asked about Strawberry and the details reported in this story, an OpenAI company spokesperson said in a statement: "We want our AI models to see and understand the world more like we do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time." The spokesperson did not directly address questions about Strawberry.

The Strawberry project was formerly known as Q*, which Reuters reported last year was already seen inside the company as a breakthrough... OpenAI hopes the innovation will improve its AI models' reasoning capabilities dramatically, the person familiar with it said, adding that Strawberry involves a specialized way of processing an AI model after it has been pre-trained on very large datasets.

Researchers Reuters interviewed say that reasoning is key to AI achieving human or super-human-level intelligence... OpenAI CEO Sam Altman said earlier this year that in AI "the most important areas of progress will be around reasoning ability."

Power

Could New Linux Code Cut Data Center Energy Use By 30%? (datacenterdynamics.com)

Two computer scientists at the University of Waterloo in Canada believe changing 30 lines of code in Linux "could cut energy use at some data centers by up to 30 percent," according to the site Data Centre Dynamics.

It's the code that processes packets of network traffic, and Linux "is the most widely used OS for data center servers," according to the article: The team tested their solution's effectiveness and submitted it to Linux for consideration, and the code was published this month as part of Linux's newest kernel, release version 6.13. "All these big companies — Amazon, Google, Meta — use Linux in some capacity, but they're very picky about how they decide to use it," said Martin Karsten [professor of Computer Science in Waterloo's Math Faculty]. "If they choose to 'switch on' our method in their data centers, it could save gigawatt hours of energy worldwide. Almost every single service request that happens on the Internet could be positively affected by this."

The University of Waterloo is building a green computer server room as part of its new mathematics building, and Karsten believes sustainability research must be a priority for computer scientists. "We all have a part to play in building a greener future," he said. The Linux Foundation, which oversees the development of the Linux OS, is a founding member of the Green Software Foundation, an organization set up to look at ways of developing "green software" — code that reduces energy consumption.

Karsten "teamed up with Joe Damato, distinguished engineer at Fastly" to develop the 30 lines of code, according to an announcement from the university. "The Linux kernel code addition developed by Karsten and Damato was based on research published in ACM SIGMETRICS Performance Evaluation Review" (by Karsten and grad student Peter Cai).

Their paper "reviews the performance characteristics of network stack processing for communication-heavy server applications," devising an "indirect methodology" to "identify and quantify the direct and indirect costs of asynchronous hardware interrupt requests (IRQ) as a major source of overhead...

"Based on these findings, a small modification of a vanilla Linux system is devised that improves the efficiency and performance of traditional kernel-based networking significantly, resulting in up to 45% increased throughput..."
Wine

Wine 10.0 Released (betanews.com)

BrianFagioli shares a report from BetaNews: The Wine team has officially released Wine 10.0, marking a full year of extensive development with over 6,000 changes. This stable release introduces major updates designed to enhance performance, compatibility, and visual experience when running Windows applications on Linux and other non-Windows platforms. Here's a list of the new changes and features:

- Full ARM64EC Support: Now on par with ARM64, allowing the creation of hybrid ARM64X modules blending ARM64EC and ARM64 code in a single binary.
- 64-bit x86 Emulation: Leverages ARM64EC to run internal processes natively, reducing the need for resource-intensive emulation.
- High-DPI Scaling Overhaul: Automatic adjustments for non-DPI-aware applications on high-resolution displays with customizable compatibility flags.
- Vulkan Improvements: Support for Vulkan child window rendering under X11 and compatibility with Vulkan 1.4.303.
- Direct3D Updates: Fixed-function pipeline for legacy Direct3D versions and introduced Dynamic Vulkan extensions to reduce stuttering.
- Experimental FFmpeg Backend: Better multimedia playback for applications with complex media pipelines.
- New Display Configuration Tool: Allows inspection and modification of settings, including virtual desktop resolutions.
- Wayland Graphics Driver: Enabled by default on Linux, with support for OpenGL and improved popup window placement (X11 takes precedence unless disabled).
- Input Device Improvements: Enhanced touchscreen support for X11 and expanded Bluetooth functionality.
- Internationalization Enhancements: Updated Unicode character tables and timezone data for better global compatibility.
- Upgraded Libraries: Includes FluidSynth, LibPng, and Vkd3d, alongside new developer tools like the Clang Static Analyzer and improved ARM64 support for C++ exceptions.

You can download Wine 10.0 and learn more about the release here.
Programming

Node.js 'Type Stripping' for TypeScript Now Enabled by Default (hashnode.dev)

The JavaScript runtime Node.js can execute TypeScript (Microsoft's JavaScript-derived language with static typing).

But now it can do it even better, explains Marco Ippolito of the Node.js steering committee: In August 2024 Node.js introduced a new experimental feature, Type Stripping, aimed at addressing a longstanding challenge in the Node.js ecosystem: running TypeScript with no configuration. Enabled by default in Node.js v23.6.0, this feature is on its way to becoming stable.

TypeScript has reached incredible levels of popularity and has been the most requested feature in all the latest Node.js surveys. Unlike other alternatives such as CoffeeScript or Flow, which never gained similar traction, TypeScript has become a cornerstone of modern development. While it has been supported in Node.js for some time through loaders, they relied heavily on configuration and user libraries. This reliance led to inconsistencies between different loaders, making them difficult to use interchangeably. The developer experience suffered due to these inconsistencies and the extra setup required... The goal is to make development faster and simpler, eliminating the overhead of configuration while maintaining the flexibility that developers expect...

TypeScript is not just a language, it also relies on a toolchain to implement its features. The primary tool for this purpose is tsc, the TypeScript compiler CLI... Type checking is tightly coupled to the implementation of tsc, as there is no formal specification for how the language's type system should behave. This lack of a specification means that the behavior of tsc is effectively the definition of TypeScript's type system. tsc does not follow semantic versioning, so even minor updates can introduce changes to type checking that may break existing code. Transpilation, on the other hand, is a more stable process. It involves converting TypeScript code into JavaScript by removing types, transforming certain syntax constructs, and optionally "downleveling" the JavaScript to allow modern syntax to execute on older JavaScript engines. Unlike type checking, transpilation is less likely to change in breaking ways across versions of tsc. The likelihood of breaking changes is further reduced when we only consider the minimum transpilation needed to make the TypeScript code executable — and exclude downleveling of new JavaScript features not yet available in the JavaScript engine but available in TypeScript...

Node.js, before enabling it by default, introduced --experimental-strip-types. This mode allows running TypeScript files by simply stripping inline types without performing type checking or any other code transformation. This minimal technique is known as Type Stripping. By excluding type checking and traditional transpilation, the more unstable aspects of TypeScript, Node.js reduces the risk of instability and mostly sidesteps the need to track minor TypeScript updates. Moreover, this solution does not require any configuration in order to execute code... Node.js eliminates the need for source maps by replacing the removed syntax with blank spaces, ensuring that the original locations of the code and structure remain intact. It is transparent — the code that runs is the code the author wrote, minus the types...

"As this experimental feature evolves, the Node.js team will continue collaborating with the TypeScript team and the community to refine its behavior and reduce friction. You can check the roadmap for practical next steps..."
China

Chinese RISC-V Project Teases 2025 Debut of Freely Licensed Advanced Chip Design (theregister.com)

China's Xiangshan project aims to deliver a high-performance RISC-V processor by 2025. If it succeeds, it could be "enormously significant" for three reasons, writes The Register's Simon Sharwood. It would elevate RISC-V from low-end silicon to datacenter-level capabilities, leverage the open-source Mulan PSL-2.0 license to disrupt proprietary chip models like Arm and Intel, and reduce China's dependence on foreign technology, mitigating the impact of international sanctions on advanced processors. From the report: The prospect of a 2025 debut appeared on Sunday in a post to Chinese social media service Weibo, penned by Yungang Bao of the Institute of Computing Technology at the Chinese Academy of Sciences. The academy has created a project called Xiangshan that aims to use the permissively licensed RISC-V ISA to create a high-performance chip, with the Scala source code to the designs openly available.

Bao is a leader of the project, and has described the team's ambition to create a company that does for RISC-V what Red Hat did for Linux -- although he said that before Red Hat changed the way it made the source code of RHEL available to the public. The Xiangshan project has previously aspired to six-monthly releases, though it appears its latest design to be taped out was a second-gen chip named Nanhu that emerged in late 2023. That silicon ran at 2GHz and was built on a 14nm process node. The project has since worked on a third-gen design, named Kunminghu, and published the image [here] depicting an overview of its non-trivial micro-architecture.

Open Source

New York Times Recognizes Open-Source Maintainers With 2024 'Good Tech' Award (thestar.com.my)

This week New York Times technology columnist Kevin Roose published his annual "Good Tech" awards to "shine the spotlight on a few tech projects that I think contributed positively to humanity."

And high on the list is "Andres Freund, and every open-source software maintainer saving us from doom." The most fun column I wrote this past year was about a Microsoft database engineer, Andres Freund, who got some odd errors while doing routine maintenance on an obscure open-source software package called xz Utils. While investigating, Freund inadvertently discovered a huge security vulnerability in the Linux operating system, which could have allowed a hacker to take control of hundreds of millions of computers and bring the world to its knees.

It turns out that much of our digital infrastructure rests on similar acts of nerdy heroism. After writing about Freund's discovery, I received tips about other near disasters involving open-source software projects, many of which were averted by sharp-eyed volunteers catching bugs and fixing critical code just in time to foil the bad guys. I could not write about them all, but this award is to say: I see you, open-source maintainers, and I thank you for your service.

Roose also acknowledges the NASA engineers who kept Voyager 1 transmitting back to earth from interstellar space — and Bluesky, "for making my social media feeds interesting again."

Roose also notes it was a big year for AI. There's a shout-out to Epoch AI, a small nonprofit research group in Spain, "for giving us reliable data on the AI boom." ("The firm maintains public databases of AI models and AI hardware, and publishes research on AI trends, including an influential report last year about whether AI models can continue to grow at their current pace. Epoch AI concluded they most likely could until 2030.") And there's also a shout-out to groups "pushing AI forward" and positive uses "to improve health care, identify new drugs and treatments for debilitating diseases and accelerate important scientific research."
  • The nonprofit Arc Institute released Evo, an AI model that "can predict and generate genomic sequences, using technology similar to the kind that allows systems like ChatGPT to predict the next words in a sequence."
  • A Harvard University lab led by Dr. Jeffrey Lichtman teamed with researchers from Google for "the most detailed map of a human brain sample ever created. The team used AI to map more than 150 million synapses in a tiny sample of brain tissue at nanometer-level resolution..."
  • Researchers at Stanford and McMaster universities developed SyntheMol, "a generative AI model that can design new antibiotics from scratch."

Python

Python in 2024: Faster, More Powerful, and More Popular Than Ever (infoworld.com)

"Over the course of 2024, Python has proven again and again why it's one of the most popular, useful, and promising programming languages out there," writes InfoWorld: The latest version of the language pushes the envelope further for speed and power, sheds many of Python's most decrepit elements, and broadens its appeal with developers worldwide. Here's a look back at the year in Python.

In the biggest news of the year, the core Python development team took a major step toward overcoming one of Python's longstanding drawbacks: the Global Interpreter Lock or "GIL," a mechanism for managing interpreter state. The GIL prevents data corruption across threads in Python programs, but it comes at the cost of making threads nearly useless for CPU-bound work. Over the years, various attempts to remove the GIL ended in tears, as they made single-threaded Python programs drastically slower. But the most recent no-GIL project goes a long way toward fixing that issue — enough that it's been made available for regular users to try out.

The no-GIL or "free-threaded" builds are still considered experimental, so they shouldn't be deployed in production yet. The Python team wants to alleviate as much of the single-threaded performance impact as possible, along with any other concerns, before giving the no-GIL builds the full green light. It's also entirely possible these builds may never make it to full-blown production-ready status, but the early signs are encouraging.
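The CPU-bound case is exactly where the GIL hurts: in a sketch like the one below, a standard build time-slices the two threads on one core, while a free-threaded build can run them in parallel. The computed answer is identical on either build, which is what makes them drop-in comparable:

```python
import threading

# Deliberately CPU-bound work split across two threads. Under the GIL the
# threads serialize on the interpreter lock; on an experimental free-threaded
# 3.13 build they can run on separate cores. The result is the same either way.

def count_primes(lo, hi, out, idx):
    """Naive prime count over [lo, hi)."""
    n = 0
    for x in range(lo, hi):
        if x > 1 and all(x % d for d in range(2, int(x ** 0.5) + 1)):
            n += 1
    out[idx] = n

results = [0, 0]
threads = [
    threading.Thread(target=count_primes, args=(2, 5000, results, 0)),
    threading.Thread(target=count_primes, args=(5000, 10000, results, 1)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(results))  # 1229 primes below 10,000, on GIL and no-GIL builds alike
```

On CPython 3.13+ you can check which build you are running via `sys._is_gil_enabled()`.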

Another forward-looking feature introduced in Python 3.13 is the experimental just-in-time compiler or JIT. It expands on previous efforts to speed up the interpreter by generating machine code for certain operations at runtime. Right now, the speedup doesn't amount to much (maybe 5% for most programs), but future versions of Python will expand the JIT's functionality where it yields real-world payoffs.

Python is now more widely used than JavaScript on GitHub (thanks partly to its role in AI and data science code).
United States

Trump Transition Leaders Call For Eased Tech Immigration Policy

theodp writes: In 2012, now-Microsoft President Brad Smith unveiled Microsoft's National Talent Strategy, a two-pronged strategy that called for tech visa restrictions to be loosened to allow tech companies to hire non-U.S. citizens to fill jobs until more American schoolchildren could be made tech-savvy enough to pass hiring standards. Shortly thereafter, tech-backed nonprofit Code.org emerged (led by Smith's next-door neighbor Hadi Partovi with Smith as a founding Board member) with a mission to ensure that U.S. schoolchildren started receiving 'rigorous' computer science education instruction. Around the same time, Mark Zuckerberg's FWD.us PAC launched (with support from Smith, Partovi, and other tech leaders) with a mission to reform tech visa policy to meet tech's need for talent.

Fast forward to 2024, and Newsweek reports the debate over tech immigration policy has been revived, spurred by the recent appointment of Sriram Krishnan as senior policy adviser for AI at the Trump White House. Comments by far-right political activist Laura Loomer on Twitter about Krishnan's call for loosening Green Card restrictions were met with rebuttals from prominent tech leaders who are also serving as members of the Trump transition team. Entrepreneur David Sacks, who Trump has tapped as his cryptocurrency and AI czar, took to social media to clarify that Krishnan advocates for removing country caps on green cards, not eliminating caps entirely, aiming to create a more merit-based system. However, the NY Times reported that Sacks discussed a much broader visa reform proposal with Trump during a June podcast ("What I will do is," Trump told Sacks, "you graduate from a college, I think you should get automatically, as part of your diploma, a green card to be able to stay in this country"). Elon Musk, the recently appointed co-head of Trump's new Dept. of Government Efficiency (DOGE) had Sacks' and Krishnan's backs (not unexpected -- both were close Musk advisors on his Twitter purchase), tweeting out "Makes sense" to his 209 million followers, lamenting that "the number of people who are super talented engineers AND super motivated in the USA is far too low," reposting claims crediting immigrants for 36% of the innovation in the U.S., and taking USCIS to task for failing to immediately recognize his own genius with an Exceptional Ability Green Card (for his long-defunct Zip2 startup).

Vivek Ramaswamy, who Trump has tapped to co-lead DOGE with Musk, agreed and fanned the Twitter flames with a pinned Tweet of his own explaining, "The reason top tech companies often hire foreign-born -- first-generation engineers over "native" Americans isn't because of an innate American IQ deficit (a lazy -- wrong explanation). A key part of it comes down to the c-word: culture." (Colorado Governor Jared Polis also took to Twitter to agree with Musk and Ramaswamy on the need to import 'elite engineers'). And Code.org CEO Partovi joined the Twitter fray, echoing the old we-need-H1B-visas-to-make-US-schoolchildren-CS-savvy argument of Microsoft's 2012 National Talent Strategy. "Did you know 2/3 of H1B visas are for computer scientists?" Partovi wrote in reply to Musk, Loomer, and Sacks. "The H1B program raises $500M/year (from its corporate sponsors) and all that money is funneled into programs at Labor and NSF without focus to grow local CS talent. Let's fund CS education." The NYT also cited Zuckerberg's earlier efforts to influence immigration policy with FWD.us (which also counted Sacks and Musk as early supporters), taking note of Zuck's recent visit to Mar-a-Lago and Meta's $1 million donation to Trump's upcoming inauguration.

So, who is to be believed? Musk, who attributes any tech visa qualms to "a 'fixed pie' fallacy that is at the heart of much wrong-headed economic thinking" and argues that "there is essentially infinite potential for job and company creation ['We should let anyone in the country who is hardworking and honest and will be a contributor to the United States,' Musk has said]"? Or economists who have found that immigration and globalization is not quite the rising-tide-that-raises-all-boats it's been cracked up to be?
Robotics

Startup Set To Brick $800 Kids Robot Is Trying To Open Source It First (arstechnica.com)

Last week, startup Embodied announced it was closing down, and its product, an $800 robot for kids ages 5 to 10, would soon be bricked. Now, in a blog post published on Friday, CEO Paolo Pirjanian shared that Embodied's technical team is working on a way to open-source the robot, ensuring it can continue operating indefinitely. Ars Technica reports: The notice says that after releasing OpenMoxie, Embodied plans to release "all necessary code and documentation" for developers and users. Pirjanian said that an over-the-air (OTA) update is now available for download that will allow previously purchased Moxies to support OpenMoxie. The executive noted that Embodied is still "seeking long-term answers" but claimed that the update is a "vital first step" to "keep the door open" for the robot's continued functionality.

At this time, OpenMoxie isn't available and doesn't have a release date. Embodied's wording also seems careful to leave an opening for OpenMoxie to not actually release, although the company seems optimistic. However, there's also a risk of users failing to update their robots in time and properly. Embodied noted that it won't be able to support users who have trouble with the update or with OpenMoxie post-release. Updating the robot includes connecting to Wi-Fi and leaving it on for at least an hour. "It is extremely important that you update your Moxie with this OTA as soon as possible because once the cloud servers stop working you will not be able to update your robot," the document reads. Embodied hasn't said when exactly its cloud servers will stop working.

Displays

Donald Bitzer, a Pioneer of Cyberspace and Plasma Screens, Dies At 90 (msn.com)

The Washington Post reports: Years before the internet was created and the first smartphones buzzed to life, an educational platform called PLATO offered a glimpse of the digital world to come. Launched in 1960 at the University of Illinois at Urbana-Champaign [UIUC], it was the first generalized, computer-based instructional system, and grew into a home for early message boards, emails, chatrooms, instant messaging and multiplayer video games.

The platform's developer, Donald Bitzer, was a handball-playing, magic-loving electrical engineer who opened his computer lab to practically everyone, welcoming contributions from Illinois undergrads as well as teenagers who were still in high school. Dr. Bitzer, who died Dec. 10 at age 90, spent more than two decades working on PLATO, managing its growth and development while also pioneering digital technologies that included the plasma display panel, a forerunner of the ultrathin screens used on today's TVs and tablets. "All of the features you see kids using now, like discussion boards or forums and blogs, started with PLATO," he said during a 2014 return to Illinois, his alma mater. "All of the social networking we take for granted actually started as an educational tool."

Long-time Slashdot reader theodp found another remembrance online. "Ray Ozzie, whose LinkedIn profile dedicates more space to describing his work as a PLATO developer as a UIUC undergrad than it does to his later successes as a creator of Lotus Notes and as Microsoft's Chief Software Architect, offers his own heartfelt mini-obit." Ozzie writes: It's difficult to adequately convey how much impact he had on so many, and I implore you to take a few minutes to honor him by reading a bit about him and his contributions. Links below. As an insecure young CS student at UIUC in 1974, Paul Tenczar, working for/with Don, graciously gave me a chance as a jr. systems programmer on the mind-bogglingly forward thinking system known as PLATO. A global, interactive system for learning, collaboration, and community like no other at the time. We were young and in awe of how Don led, inspired, and managed to keep the project alive. I was introverted; shaking; stage fright. Yeah I could code. But how could such a deeply technical engineer assemble such a strong team to execute on such a totally novel and inspirational vision, secure government funding, and yet also demo the product on the Phil Donahue show?

"Here's to the crazy ones. The misfits. The rebels. The troublemakers. The ones who see things differently. They're not fond of rules." You touched so many of us and shaped who we became and the risks we would take, having an impact well beyond that which you created. You made us think and you made us laugh. I hope we made you proud."

Open Source

Slashdot's Interview with Bruce Perens: How He Hopes to Help 'Post Open' Developers Get Paid (slashdot.org)

Bruce Perens, original co-founder of the Open Source Initiative, has responded to questions from Slashdot readers about a new alternative he's developing that hopefully helps "Post Open" developers get paid.

But first, "One of the things that's clear from the Slashdot patter is that people are not aware of what I've been doing, in general," Perens says. "So, let's start by filling that in..."

Read on for the rest of his wide-ranging answers....
Open Source

Ask Bruce Perens Your Questions About How He Hopes to Get Open Source Developers Paid (postopen.org)

Bruce Perens wrote the original Open Source definition back in 1997, and then co-founded the Open Source Initiative with Eric Raymond in 1998. But after resigning from the group in 2020, Perens is now diligently developing an alternative he calls "Post Open" to "meet goals that Open Source fails at today" — even providing a way to pay developers for their work.

To make it all happen, he envisions software developers owning (and controlling) a not-for-profit corporation developing a body of software called "the Post Open Collection" and collecting its licensing fees to distribute among developers. The hope? To "make it possible for an individual developer to stay at home and code all day, and make their living that way without having to build a company."

The not-for-profit entity — besides actually enforcing its licensing — could also:
  • Provide tech support, servicing all Post-Open software through one entity.
  • Improve security by providing developers with cryptographic-hardware-backed authentication guaranteeing secure software chain-of-custody.
  • Handle onerous legal requirements like compliance with the EU Cyber Resilience Act "on behalf of all developers in the Post Open Collection".
  • Compensate documentation writers.
  • Fund lobbying on behalf of developers, along with advocacy for their software's privacy-preserving features.

"We've started to build the team," Perens said in a recent interview, announcing weeks ago that attorneys are already discussing the structure of the future organization and its proposed license.

But what do you think? Perens has agreed to answer questions from Slashdot readers...

He's also Slashdot reader #3,872. (And Perens is also an amateur radio operator, currently on the board of M17 — a community of open source developers and radio enthusiasts — and in general support of Open Source and Amateur Radio projects through his non-profit HamOpen.org.) But more importantly, Perens "was the person to announce 'Open Source' to the world," according to his official site. Now's your chance to ask him about his next new big idea...

Ask as many questions as you'd like, but please, one per comment. We'll pick the very best questions — and forward them on to Bruce Perens himself to answer!

UPDATE: Bruce Perens has answered your questions!


Open Source

MacFORTH Code for 1984 Robot-Coding Game 'ChipWits' is Now Open Source (chipwits.com) 10

Back in the mid-1980s Mark Roth was in 5th grade when the game ChipWits "helped kindle his interest in coding," according to an online biography. ("By middle school, he wrote his first Commodore 64 assembler and by high school he authored a 3D Graphics library for DOS.")

And 40 years later, Slashdot reader markroth8 writes that the programming puzzle/logic game "inspired many people to become professional coders": ChipWits was first released for Mac in 1984, and was later ported to Commodore 64 and Apple II in 1985. To celebrate the game's 40th anniversary, the team behind the new Steam reboot of ChipWits (including its original co-creator Doug Sharp, also known for the game King of Chicago) is announcing the recovery and open source release of the original game's source code, written in the FORTH programming language, for both Mac and Commodore 64 platforms.

Recovering data from 40-year-old 5.25" and 3.5" disks was a challenge in and of itself, but most of the data survived unscathed! It's interesting to read the 40-year-old code, and compare it to modern game development.

"Our goal for open sourcing the original version of ChipWits is to ensure its legacy lives on," according to the announcement. (It adds that "We also wanted to share an appreciation for what cross-platform software development for 8-bit microcomputers was like in 1984.")
Programming

Verify the Rust Standard Library's 7,500 Unsafe Functions - and Win 'Financial Rewards' (devclass.com) 85

The Rust community has "recognized the unsafety of Rust (if used incorrectly)," according to a blog post by Amazon Web Services.

So now AWS and the Rust Foundation are "crowdsourcing an effort to verify the Rust standard library," according to an article at DevClass.com, "by setting out a series of challenges for devs and offering financial rewards for solutions..." Rust includes ways to bypass its safety guarantees though, with the use of the "unsafe" keyword... The issue AWS highlights is that even if developers use only safe code, most applications still depend on the Rust standard library. AWS states that there are approximately 7.5K unsafe functions in the Rust Standard Library and notes that 57 "soundness issues" and 20 CVEs (Common Vulnerabilities and Exposures) have been reported in the last three years. [28% of the soundness issues were discovered in 2024.]

Marking a function as unsafe does not mean it is vulnerable, only that Rust does not guarantee its safety. AWS plans to reduce the risk by using tools and techniques for formal verification of key library code, but believes that "a single team would be unable to make significant inroads" for reasons including the lack of a verification mechanism in the Rust ecosystem and what it calls the "unknowns of scalable verification." The plan therefore is to turn this over to the community, by posing challenges and rewarding developers for solutions.... A GitHub repository provides a fork of the Rust code and includes a set of challenges, currently 13 of them... The Rust Foundation says that there is a financial reward tied to each challenge, and that the "challenge rewards committee is responsible for reviewing activity and dispensing rewards." How much will be paid though is not stated.
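The distinction the article draws — "unsafe" meaning "Rust does not guarantee its safety," not "vulnerable" — is the usual contract/wrapper pattern. A minimal sketch (the function names here are illustrative, not from the standard library): an unsafe function documents the invariant the caller must uphold, and a safe wrapper re-establishes that invariant before calling it. Verifying that such invariants always hold for the standard library's roughly 7,500 unsafe functions is what the challenges ask for.

```rust
/// Returns an element without bounds checking.
///
/// # Safety
/// The caller must guarantee `i < s.len()`; otherwise this is
/// undefined behavior (an out-of-bounds pointer read).
unsafe fn get_unchecked_i32(s: &[i32], i: usize) -> i32 {
    unsafe { *s.as_ptr().add(i) }
}

/// Safe wrapper: checks the invariant the unsafe code relies on,
/// so callers of *this* function cannot trigger undefined behavior.
fn get_checked(s: &[i32], i: usize) -> Option<i32> {
    if i < s.len() {
        // SAFETY: `i < s.len()` was just checked above.
        Some(unsafe { get_unchecked_i32(s, i) })
    } else {
        None
    }
}

fn main() {
    let data = [10, 20, 30];
    assert_eq!(get_checked(&data, 1), Some(20));
    assert_eq!(get_checked(&data, 3), None); // out of bounds is rejected safely
    println!("ok");
}
```

A "soundness issue" in the sense the article uses is a case where a nominally safe wrapper like `get_checked` fails to fully enforce the invariant its unsafe interior needs.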

Despite the wide admiration for Rust, there is no formal specification for the language, an issue which impacts formal verification efforts.

Thanks to Slashdot reader sean-it-all for sharing the news.
Red Hat Software

Red Hat is Becoming an Official Microsoft 'Windows Subsystem for Linux' Distro (microsoft.com) 48

"You can use any Linux distribution inside of the Windows Subsystem for Linux" Microsoft recently reminded Windows users, "even if it is not available in the Microsoft Store, by importing it with a tar file."

But being an official distro "makes it easier for Windows Subsystem for Linux users to install and discover it with actions like wsl --list --online and wsl --install," Microsoft pointed out this week. And "We're excited to announce that Red Hat will soon be delivering a Red Hat Enterprise Linux WSL distro image in the coming months..."
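In practice the commands Microsoft mentions look like this, run from PowerShell or cmd on a Windows machine with WSL installed (the RHEL distro name below is hypothetical until the official image ships):

```shell
# List distros available for one-command install (official images only)
wsl --list --online

# Install an official distro by name (name here is a placeholder)
wsl --install RedHatEnterpriseLinux

# Any unofficial distro can still be imported from a root-filesystem tar
wsl --import MyRHEL C:\wsl\myrhel C:\images\rhel-rootfs.tar
```

Being in the official list only changes the first two commands; the tar import path keeps working for everything else.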

Thank you to the Red Hat team, as their feedback has been invaluable as we built out this new architecture, and we're looking forward to the release...! Ron Pacheco, Red Hat's senior director for the Red Hat Enterprise Linux Ecosystem, says:

"Developers have their preferred platforms for developing applications for multiple operating systems, and WSL is an important platform for many of them. Red Hat is committed to driving greater choice and flexibility for developers, which is why we're working closely with the Microsoft team to bring Red Hat Enterprise Linux, the largest commercially available open source Linux distribution, to all WSL users."

Read Pacheco's own blog post here.

But in addition, Microsoft is also releasing "a new way to make WSL distros," they announced this week, "with a new architecture that backs how WSL distros are packaged and installed."

Up until now, you could make a WSL distro either by creating an appx package and distributing it via the Microsoft Store, or by importing a .tar file with wsl --import. We wanted to improve this by making it possible to create a WSL distro without needing to write Windows code, and for users to more easily install their distros from a file or network share, which is common in enterprise scenarios...

With the tar-based architecture, you can start with the same .tar file (which can be an exported Linux container!) and just edit it to add details to make it a WSL distro... These options will describe key distro attributes, like the name of the distro, its icon in Windows, and its out-of-box experience (OOBE), which is what happens when you run WSL for the first time. You'll notice that the oobe_command option points to a file which is a Linux executable, meaning you can set up your full experience just in Linux if you wish.
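Based on Microsoft's description, the distro options live in a plain file inside the tar, editable entirely from Linux. A sketch of what such a configuration might look like — the file path, section names, and key names below are assumptions inferred from the post, not confirmed syntax:

```ini
# /etc/wsl-distribution.conf -- illustrative sketch only; exact file name
# and keys are assumptions based on Microsoft's description.

[oobe]
# A Linux executable run on first launch (the "oobe_command" described above)
command = /etc/oobe.sh

[shortcut]
# Icon shown for the distro in Windows
icon = /usr/share/wsl/my-distro.ico
```

The notable design point is quoted directly in the post: because the OOBE hook is a Linux executable, a distro author never has to touch Windows code.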
Google

What Happened After Google Retrofitted Memory Safety Onto Its C++ Codebase? (googleblog.com) 140

Google's transition to Safe Coding and memory-safe languages "will take multiple years," according to a post on Google's security blog. So "we're also retrofitting secure-by-design principles to our existing C++ codebase wherever possible," a process which includes "working towards bringing spatial memory safety into as many of our C++ codebases as possible, including Chrome and the monolithic codebase powering our services."

We've begun by enabling hardened libc++, which adds bounds checking to standard C++ data structures, eliminating a significant class of spatial safety bugs. While C++ will not become fully memory-safe, these improvements reduce risk as discussed in more detail in our perspective on memory safety, leading to more reliable and secure software... It's also worth noting that similar hardening is available in other C++ standard libraries, such as libstdc++. Building on the successful deployment of hardened libc++ in Chrome in 2022, we've now made it default across our server-side production systems. This improves spatial memory safety across our services, including key performance-critical components of products like Search, Gmail, Drive, YouTube, and Maps... The performance impact of these changes was surprisingly low, despite Google's modern C++ codebase making heavy use of libc++. Hardening libc++ resulted in an average 0.30% performance impact across our services (yes, only a third of a percent) ...

In just a few months since enabling hardened libc++ by default, we've already seen benefits. Hardened libc++ has already disrupted an internal red team exercise and would have prevented another one that happened before we enabled hardening, demonstrating its effectiveness in thwarting exploits. The safety checks have uncovered over 1,000 bugs, and would prevent 1,000 to 2,000 new bugs yearly at our current rate of C++ development...

The process of identifying and fixing bugs uncovered by hardened libc++ led to a 30% reduction in our baseline segmentation fault rate across production, indicating improved code reliability and quality. Beyond crashes, the checks also caught errors that would have otherwise manifested as unpredictable behavior or data corruption... Hardened libc++ enabled us to identify and fix multiple bugs that had been lurking in our code for more than a decade. The checks transform many difficult-to-diagnose memory corruptions into immediate and easily debuggable errors, saving developers valuable time and effort.

The post notes that they're also working on "making it easier to interoperate with memory-safe languages. Migrating our C++ to Safe Buffers shrinks the gap between the languages, which simplifies interoperability and potentially even an eventual automated translation."
Desktops (Apple)

ChatGPT For macOS Now Works With Third-Party Apps, Including Apple's Xcode 6

An update to OpenAI's ChatGPT app for macOS adds integration with third-party apps, including developer tools such as VS Code, Terminal, iTerm2 and Apple's Xcode. 9to5Mac reports: In a demo seen by 9to5Mac, ChatGPT was able to understand code from an Xcode project and then provide code suggestions without the user having to manually copy and paste content into the ChatGPT app. It can even read content from more than one app at the same time, which is very useful for working with developer tools. According to OpenAI, the idea is to expand integration to more apps in the future. For now, integration with third-party apps is coming exclusively to the Mac version of ChatGPT, but there's another catch. The feature requires a paid ChatGPT subscription, at least for now.

ChatGPT Plus and Team subscribers will receive access to integration with third-party apps on macOS starting today, while access for Enterprise and Education users will be rolled out "in the next few weeks." OpenAI told 9to5Mac that it wants to make the feature available to everyone in the future, although there's no estimate of when this will happen. For privacy reasons, users can control at any time when and which apps ChatGPT can read.
The app can be downloaded here.
Programming

The Ultimate in Debugging 42

Mark Rainey: Engineers are currently debugging why the Voyager 1 spacecraft, which is 15 billion miles away, turned off its main radio and switched to a backup radio that hasn't been used in over forty years!

I've had some tricky debugging issues in the past, including finding compiler bugs and debugging code, with no debugger, that had been burnt into PROM packs for terminals; however, I have huge admiration for the engineers maintaining the operation of Voyager 1.

Recently they sent a command to the craft that caused it to shut off its main radio transmitter, seemingly in an effort to preserve power and protect from faults. This prompted it to switch over to the backup radio transmitter, which uses less power. Now that they have regained communication, they are trying to determine the cause on hardware that is nearly 50 years old. Any communication takes days. When you think you have a difficult issue to debug, spare a thought for this team.

Slashdot Top Deals