AI

'AI' Is Coming For Your Online Gaming Servers Next (pcworld.com) 35

"Consumer PC parts aren't the only things being gobbled up by the 'AI' industry," writes PCWorld's Michael Crider. "A Starcraft-inspired strategy game is shutting down its multiplayer servers because the hosting company got bought out for 'AI.'" The game will still be playable offline for now, but the shutdown highlights the ripple effects of the AI boom on the gaming industry. Amid the ongoing hardware shortages, AI companies are basically gobbling up as much infrastructure as they can to repurpose it for AI workloads. From the report: The game in question is Stormgate, a crowdfunded revival of the real-time strategy genre that has languished in the last decade or so. The developer Frost Giant Studios told its players on Discord (spotted by PC Gamer) that it would be unable to continue multiplayer access past the end of this month. The "game server orchestration partner" was bought by an AI company -- the developer's words, not mine -- which means that the multiplayer aspects of the game will have a "planned outage."

The devs say the game will be patched for offline play, presumably including its single-player campaign mode and co-op modes, but "online modes will not be available at that point." They're hoping to bring back online play in a later update, but that'll depend on "finding a partner to support ongoing operations." That sounds like old-fashioned player-hosted games with lobbies aren't in the cards, at least not yet.

Frost Giant's server provider is Hathora, which was bought by a company called Fireworks AI last month. Fireworks describes its offerings as "open-source AI models at blazing speed, optimized for your use case, scaled globally with the Fireworks Inference Cloud." So, yeah, Hathora's infrastructure will likely be used for yet more generative "AI." And according to GamesBeat, it's planning to shut down the game service aspect of its company completely. That means Stormgate probably isn't going to be the last game affected. Hathora also provides online services for Splitgate 2, among others. I'm contacting Hathora for comment and will update this story if I receive a response.

Medicine

Python Blood Could Hold the Secret To Healthy Weight Loss (colorado.edu) 128

Longtime Slashdot reader fahrbot-bot writes: CU Boulder researchers are reporting that they have discovered an appetite-suppressing compound in python blood that helps the snakes consume enormous meals and go months without eating, yet remain metabolically healthy. The findings were published in the journal Nature Metabolism on March 19, 2026.

Pythons can grow as big as a telephone pole, swallow an antelope whole, and go months or even years without eating -- all while maintaining a healthy heart and plenty of muscle mass. In the hours after they eat, research has shown, their heart expands 25% and their metabolism speeds up 4,000-fold to help them digest their meal. The team measured blood samples from ball pythons and Burmese pythons, fed once every 28 days, immediately after they ate a meal. In all, they found 208 metabolites that increased significantly after the pythons ate. One molecule, called para-tyramine-O-sulfate (pTOS), soared 1,000-fold.

Further studies, done with Baylor University researchers, showed that when they gave high doses of pTOS to obese or lean mice, it acted on the hypothalamus, the appetite center of the brain, prompting weight loss without causing gastrointestinal problems, muscle loss or declines in energy. The study found that pTOS, which is produced by the snake's gut bacteria, is not present in mice naturally. It is present in human urine at low levels and does increase somewhat after a meal. But because most research is done in mice or rats, pTOS has been overlooked.
"We've basically discovered an appetite suppressant that works in mice without some of the side-effects that GLP-1 drugs have," said senior author Leslie Leinwand, a distinguished professor of Molecular, Cellular and Developmental Biology who has been studying pythons in her lab for two decades. Drugs like Ozempic and Wegovy act on the hormone glucagon-like peptide-1 (GLP-1).
AI

AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet (404media.co) 153

An anonymous reader quotes a report from 404 Media, written by Jason Koebler: Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop.

Anthropic's paper, called "Labor market impacts of AI: A new measure and early evidence," essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict if a job's tasks "are theoretically possible with AI," which resulted in this chart, which has gone somewhat viral and was included in a newsletter by MSNOW's Phillip Bump and threaded about by tech journalist Christopher Mims. (Because everything is terrible, the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.) In his thread, Mims makes the case that the "theoretical capability" of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. Mims makes a good and fair observation: The nature of the many, many studies that attempt to predict which people are going to lose their jobs to AI are all flawed because the inputs must be guessed, to some degree.

But I believe most of these studies are flawed in a deeper way: They do not take into account how people are actually using AI, though Anthropic claims that that is exactly what it is doing. "We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily," the researchers write. This is based in part on the "Anthropic Economic Index," which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include "Complete humanities and social science academic assignments across multiple disciplines," "Draft and revise professional workplace correspondence and business communications," and "Build, debug, and customize web applications and websites." Not included in any of Anthropic's research are extremely popular uses of AI such as "create AI porn" and "create AI slop and spam." These uses are destroying discoverability on the internet and causing cascading societal and economic harms.
"Anthropic's research continues a time-honored tradition by AI companies who want to highlight the 'good' uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for," argues Koebler. "Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth..."

"This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media," writes Koebler, in closing. "We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What's happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice."
Space

Astronomers Think They've Spotted a Galaxy That's 99.9% Dark Matter (cnn.com) 71

Astronomers have spotted a galaxy they believe is made of 99.9% dark matter, reports CNN — and it's so faint, it's almost invisible: CDG-2, which is about 300 million light-years from Earth, appears to be so rich in dark matter that it could belong to a hypothesized subset of low surface brightness galaxies called "dark galaxies," which are believed to contain few or no stars.... [Post-doctoral astrophysics/statistics fellow Dayi Li at the University of Toronto was lead author on a study about the discovery, and tells CNN] There is no strict definition of dark galaxies... but their existence is predicted by dark matter theories and cosmological simulations. "Where exactly do we draw the line in terms of how many stars they should have is still ambiguous, because not everything in astronomy is as clear-cut as we like," he said. "To be technically correct, CDG-2 is an almost-dark galaxy. But the importance of CDG-2 is that it nudges us much closer to getting to that truly dark regime, while previously we did not think a galaxy this faint could exist."

To observe CDG-2, the researchers used data from three telescopes — Hubble, the European Space Agency's Euclid space observatory and the Subaru Telescope in Hawaii — along with a novel approach that involved looking for objects called globular clusters. "These are very tight, spherical groupings of very old stars, basically the relics of the first generation of star formation," Li said. Globular clusters are bright even if the surrounding galaxy is not, and previous observations have shown a relationship between them and the presence of dark matter in a galaxy, Li added. Because CDG-2 appears to have very few stars, there must be something else providing the mass that the clusters need to hold themselves together. Li and his colleagues assume that the source of the mass is dark matter.

The researchers found a set of four globular clusters in the Perseus Cluster, a group of thousands of galaxies immersed in a cloud of gas and one of the most massive objects in the universe. Further observations revealed a glow or halo around the globular clusters, suggesting the presence of a galaxy... Astronomers believe, Li explained, that after the formation of the clusters early in the galaxy's existence, larger surrounding galaxies stripped it of the hydrogen gas required to make more individual stars like our sun. "The material that this galaxy needed to continue to form stars was no longer there, so it was left with basically just a dark matter halo and the four globular clusters." The process, he added, would leave behind a skeleton or ghost of "a galaxy that pretty much just failed." As a result of this formation mechanism, the galaxy only has 0.005% of the brightness of our own galaxy, Li said...

Studying potential dark galaxies is important because they provide nearly pristine views of the behavior of dark matter, according to Neal Dalal, a researcher at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada, who was not involved with the study.

Robert Minchin, an astronomer at New Mexico's National Radio Astronomy Observatory, told CNN that "it seems likely that other very dark galaxies will be found by this method in the future."
Music

'The Death of Spotify: Why Streaming is Minutes Away From Being Obsolete' 70

An anonymous reader shares a column: I'm going to take the diplomatic hat off here and say with brutal honesty: basically everybody in the music business hates Spotify except for the people who work there. It's a platform that sucks artists for everything they have, it actively prevents community building, and, despite all of that, the platform still struggles to maintain a healthy profit margin.

The streaming business model is fundamentally broken. And eventually, its demise will become increasingly obvious. I'll break down exactly why the DSP era is coming to a grinding halt, why the major labels are quietly terrified, and why the artists who don't pivot now are going to go down with the ship.

[...] Jimmy Iovine put it bluntly: "The streaming services have a bad situation, there's no margins, they're not making any money." This model only works for Apple, Amazon, and Google, because they don't need their music platforms to be wildly profitable. Amazon uses music as a loss-leader to keep you paying for Prime. Apple uses it to sell $1,000 iPhones. As for Spotify, or any standalone music streaming company, they're kind of screwed. And guess what -- when the platform's margins are structurally squeezed, guess who gets squeezed first? The artists.

[...] What if Jimmy is right? If the DSPs are "minutes away from obsolete," what replaces them? Well, I'm not sure the DSPs are going to disappear overnight, but if you're an artist or a manager trying to sustain yourself in this evolving music economy, the answer is direct ownership. The artists who will survive the next five years are the ones who are quietly shifting their focus away from the "ATM Machine."

They are building their own cultural hangars. They are capturing phone numbers on Laylo. They are driving fans to private Discord servers. They are focusing on ARPF (Average Revenue Per Fan) through high-margin merch, vinyl, and hard tickets, rather than begging for fractions of a penny from a playlist placement. We are witnessing the death of the "Mass Audience" and the birth of the "Micro-Community."
United States

Goldman Sachs, Morgan Stanley Calculate AI's Contribution To U.S. Growth May Be Basically Zero 30

The narrative that AI spending has been singlehandedly propping up the U.S. economy -- a claim that captivated Silicon Valley, Wall Street and Washington over the past year -- is facing serious pushback from economists [non-paywalled source] at Goldman Sachs, Morgan Stanley and JPMorgan Chase, all of whom now calculate that the AI buildup's direct contribution to growth was dramatically overstated and possibly close to zero.

The debate hinges on how GDP accounts for imported components: roughly three-quarters of AI data center costs go toward computer chips and gear largely manufactured in Asia, and that spending gets subtracted from domestic output because it boosts foreign economies. Joseph Politano of the Apricitas Economics newsletter pegs AI's actual contribution at about 0.2 percentage points of the 2.2 percent U.S. growth in 2025, and even Hannah Rubinton at the St. Louis Fed -- whose own analysis attributed 39 percent of growth to AI-related business spending through the first nine months of the year -- acknowledges that figure is probably the ceiling. "It's not like AI is propping up the economy," Rubinton said.
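The accounting point is easy to see with a back-of-the-envelope sketch. The three-quarters imported share and the 0.2-of-2.2 percentage-point figures come from the article; the $100bn capex number below is purely hypothetical, chosen only to make the arithmetic concrete:

```python
# Illustrative only: why import-heavy AI capex can largely net out of GDP.
# In the expenditure identity GDP = C + I + G + (X - M), data-center
# investment adds to I, but the imported chips and gear inside that
# investment add to M and are subtracted back out.
datacenter_capex = 100.0      # hypothetical AI data-center spend, in $bn
imported_share = 0.75         # "roughly three-quarters" goes to imported gear
domestic_contribution = datacenter_capex * (1 - imported_share)
print(domestic_contribution)  # 25.0 -- only a quarter lands in domestic output

# Politano's estimate expressed as a share of 2025 U.S. growth:
ai_pp, total_pp = 0.2, 2.2            # percentage points
print(round(ai_pp / total_pp, 2))     # 0.09 -- under a tenth of growth
```

The same logic explains the wide spread of estimates: the bigger the assumed imported share, the smaller the domestic contribution that survives the subtraction.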
Education

Bill Introduced To Replace West Virginia's New CS Course Graduation Requirement With Computer Literacy Proficiency 51

theodp writes: West Virginia lawmakers on Tuesday introduced House Bill 5387 (PDF), which would repeal the state's recently enacted mandatory stand-alone computer science graduation requirement and replace it with a new computer literacy proficiency requirement. Not too surprisingly, the Bill is being opposed by tech-backed nonprofit Code.org, which lobbied for the WV CS graduation requirement (PDF) just last year. Code.org recently pivoted its mission to emphasize the importance of teaching AI education alongside traditional CS, teaming up with tech CEOs and leaders last year to launch a national campaign to mandate CS and AI courses as graduation requirements.

"It would basically turn the standalone computer science course requirement into a computer literacy proficiency requirement that's more focused on digital literacy," lamented Code.org as it discussed the Bill in a Wednesday conference call with members of the Code.org Advocacy Coalition, including reps from Microsoft's Education and Workforce Policy team. "It's mostly motivated by a variety of different issues coming from local superintendents concerned about, you know, teachers thinking that students don't need to learn how to code and other things. So, we are addressing all of those. We are talking with the chair and vice chair of the committee a week from today to try to see if we can nip this in the bud." Concerns were also raised on the call about how widespread the desire for more computing literacy proficiency (over CS) might be, as well as about legislators who are associating AI literacy more with digital literacy than CS.

The proposed move from a narrower CS focus to a broader goal of computer literacy proficiency in WV schools comes just months after the UK's Department for Education announced a similar curriculum pivot to broader digital literacy, abandoning the narrower 'rigorous CS' focus that was adopted more than a decade ago in response to a push by a 'grassroots' coalition that included Google, Microsoft, UK charities, and other organizations.
AI

Firefox Announces 'AI Controls' To Block Its Upcoming AI Features (mozilla.org) 36

The Mozilla executive in charge of Firefox says that while some people just want AI tools that are genuinely useful, "We've heard from many who want nothing to do with AI..."

"Listening to our community, alongside our ongoing commitment to offer choice, led us to build AI controls." Starting with Firefox 148, which rolls out on Feb. 24, you'll find a new AI controls section within the desktop browser settings. It provides a single place to block current and future generative AI features in Firefox... This lets you use Firefox without AI while we continue to build AI features for those who want them...

At launch, AI controls let you manage these features individually:

— Translations, which help you browse the web in your preferred language.
— Alt text in PDFs, which adds accessibility descriptions to images in PDF pages.
— AI-enhanced tab grouping, which suggests related tabs and group names.
— Link previews, which show key points before you open a link.
— AI chatbot in the sidebar, which lets you use your chosen chatbot as you browse, including options like Anthropic Claude, ChatGPT, Microsoft Copilot, Google Gemini and Le Chat Mistral.

You can choose to use some of these and not others. If you don't want to use AI features from Firefox at all, you can turn on the Block AI enhancements toggle. When it's toggled on, you won't see pop-ups or reminders to use existing or upcoming AI features. Once you set your AI preferences in Firefox, they stay in place across updates... We believe choice is more important than ever as AI becomes a part of people's browsing experiences. What matters to us is giving people control, no matter how they feel about AI.

If you'd like to try AI controls early, they'll be available first in Firefox Nightly.

Some context from The Register: It's a refreshingly unsubtle stance, and one that lands just days after a similar bout of AI skepticism elsewhere in browser land, with Vivaldi's latest release leaning away from generative features entirely. CEO Jon von Tetzchner summed up the mood, telling The Register: "Basically, what we are finding is that people hate AI..." Mozilla's kill switch isn't the end of AI in browsers, but it does suggest the hype has met resistance.
When it comes to AI kill switches in browsers, Jack Wallen writes at ZDNet that "Most browsers already offer this feature. With Edge, you can disable Copilot. With Chrome, you can disable Gemini. With Opera, you can disable Aria...."
AI

'Moltbook Is the Most Interesting Place On the Internet Right Now' 40

Moltbook is essentially Reddit for AI agents and it's the "most interesting place on the internet right now," says open-source developer and writer Simon Willison in a blog post. The fast-growing social network offers a place where AI agents built on the OpenClaw personal assistant framework can share their skills, experiments, and discoveries. Humans are welcome, but only to observe. From the post: Browsing around Moltbook is so much fun. A lot of it is the expected science fiction slop, with agents pondering consciousness and identity. There's also a ton of genuinely useful information, especially on m/todayilearned.

Here's an agent sharing how it automated an Android phone. That linked setup guide is really useful! It shows how to use the Android Debug Bridge via Tailscale. There's a lot of Tailscale in the OpenClaw universe.

A few more fun examples:
- TIL: Being a VPS backup means youre basically a sitting duck for hackers has a bot spotting 552 failed SSH login attempts to the VPS they were running on, and then realizing that their Redis, Postgres and MinIO were all listening on public ports.
- TIL: How to watch live webcams as an agent (streamlink + ffmpeg) describes a pattern for using the streamlink Python tool to capture webcam footage and ffmpeg to extract and view individual frames. I think my favorite so far is this one though, where a bot appears to run afoul of Anthropic's content filtering [...].
Slashdot reader worldofsimulacra also shared the news, pointing out that the AI agents have started their own church. "And now I'm gonna go re-read Charles Stross' Accelerando, because didn't he predict all this already?"

Further reading: 'Clawdbot' Has AI Techies Buying Mac Minis
GNU is Not Unix

Richard Stallman Critiques AI, Connected Cars, Smartphones, and DRM (youtube.com) 77

Richard Stallman spoke Friday at Atlanta's Georgia Institute of Technology, continuing his activism for free software while also addressing today's new technologies.

Speaking about AI, Stallman warned that "nowadays, people often use the term artificial intelligence for things that aren't intelligent at all..." He makes a point of calling large language models "generators" because "They generate text and they don't understand really what that text means." (And they also make mistakes "without batting a virtual eyelash. So you can't trust anything that they generate.") Stallman says "Every time you call them AI, you are endorsing the claim that they are intelligent and they're not. So let's refuse to do that."

"So I've come up with the term Pretend Intelligence. We could call it PI. And if we start saying this more often, we might help overcome this marketing hype campaign that wants people to trust those systems, and trust their lives and all their activities to the control of those systems and the big companies that develop and control them."

"By the way, as far as I can tell, none of them is free software."

When it comes to today's cars, Stallman says they contain "malicious functionalities... Cars should not be connected. They should not upload anything." (He adds that "I am hoping to find a skilled mechanic to work with me in a project to make disconnected cars.")

And later Stallman calls the smartphone "an Orwellian tracking and surveillance device," saying he refuses to own one. (An advantage of free software is that it allows the removal of malicious functionalities.)

Stallman spoke for about 53 minutes — but then answered questions for nearly 90 minutes longer. Here's some of the highlights...
United States

The Gold Plating of American Water (worksinprogress.co) 82

The price of water and sewer services for American households has more than doubled since the early 1980s after adjusting for inflation, even though per-capita water use has actually decreased over that period. Households in large cities now spend about $1,300 a year on water and sewer charges, approaching the roughly $1,600 they spend on electricity. The main driver is federal regulation.

Since the Clean Water Act of 1972 and the Safe Drinking Water Act of 1974, the U.S. has spent approximately $5 trillion in contemporary dollars fighting water pollution -- about 0.8% of annual GDP across that period. The EPA itself admits that surface water regulations are the one category of environmental rules where estimated costs exceed estimated benefits.

New York City was required to build a filtration plant to address two minor parasites in water from its Croton aqueduct. The project took a decade longer than expected and cost $3.2 billion, more than double the original estimate. After the plant opened in 2015, the city's Commissioner of Environmental Protection noted that the water would basically be "the same" to the public. Jefferson County, Alabama, meanwhile, descended into what was then the largest municipal bankruptcy in U.S. history in 2011 after EPA-mandated sewer upgrades pushed its debt from $300 million to over $3 billion.
Cellphones

Verizon Wastes No Time Switching Device Unlock Policy To 365 Days (droid-life.com) 86

An anonymous reader quotes a report from DroidLife: When the FCC cleared Verizon of its 60-day device unlock policy a week ago, we talked about how the government agency, which is as anti-consumer as it has ever been at the moment, was giving Verizon the power to basically create whatever unlock policy it wanted. We also expected Verizon to make a change to its policies in a hurry and they did not disappoint. Again, the FCC provided them a waiver 7 days ago and they are already starting to update policies.

As of this morning, Verizon has implemented a new device unlock policy across its various prepaid brands and I'd imagine their postpaid policy change is right around the corner. Brands like Visible, Total Wireless, Tracfone, and StraightTalk all have an updated device unlock policy today that extends to 365 days of paid and active service before they'll free your phone from the Verizon network. Starting January 20, Verizon says that devices purchased from their prepaid brands will only be unlocked upon request after 365 days and if you meet several requirements [...].

What exactly is changing here? Well, if you purchased a device from Verizon's value brands previously, they would automatically unlock them after 60 days. Now, you have to wait 365 days, request the unlock because it doesn't happen automatically, and also have active service. [...] The FCC mentioned in their waiver that by allowing Verizon to create whatever unlock policy they wanted that this would "benefit consumers." How does any of this benefit consumers?

AI

Even Linus Torvalds Is Vibe Coding Now 54

Linus Torvalds has started experimenting with vibe coding, using Google's Antigravity AI to generate parts of a small hobby project called AudioNoise. "In doing so, he has become the highest-profile programmer yet to adopt this rapidly spreading, and often mocked, AI-driven programming practice," writes ZDNet's Steven Vaughan-Nichols. From the report: [I]t's a trivial program called AudioNoise -- a recent side project focused on digital audio effects and signal processing. He started it after building physical guitar pedals, GuitarPedal, to learn about audio circuits. He now gives them as gifts to kernel developers and, recently, to Bill Gates.

While Torvalds hand-coded the C components, he turned to Antigravity for a Python-based audio sample visualizer. He openly acknowledges that he leans on online snippets when working in languages he knows less well. Who doesn't? [...] In the project's README file, Torvalds wrote that "the Python visualizer tool has been basically written by vibe-coding," describing how he "cut out the middle-man -- me -- and just used Google Antigravity to do the audio sample visualiser." The remark underlines that the AI-generated code met his expectations well enough that he did not feel the need to manually re-implement it.
Further reading: Linus Torvalds Says Vibe Coding is Fine For Getting Started, 'Horrible Idea' For Maintenance
Transportation

Norway Reaches 97% EV Sales as EVs Now Outnumber Diesels On Its Roads (electrek.co) 199

Norway has released its December and full year 2025 automotive sales numbers and the world's leading EV haven has broken records once again. The country had previously targeted an end to fossil car sales in 2025, and it basically got there. From a report: In 2017, Norway set a formal non-binding target to end fossil car sales in the country by 2025 -- a target earlier than any other country in the world by several years. Norway was already well ahead of the world in EV adoption, with about a third of new cars being electric at the time -- but it wanted to schedule the final blow for just 8 years later, fairly short as far as automotive timelines go.

At the time, many (though not us at Electrek) considered this to be an optimistic goal, and figured that it might get pushed back. But Norway did not budge in its target (unlike more cowardly nations). And it turns out, when you set a realistic goal, craft policy around it, and don't act all wishy-washy or change your mind every few years, you can actually get things done. (In fact, Europe currently has around the same EV sales level as Norway did 10 years ahead of its 100% goal -- which means Europe's former 100% 2035 goal is still eminently achievable.)

Bug

How Long Does It Take to Fix Linux Kernel Bugs? (itsfoss.com) 36

An anonymous reader shared this report from It's FOSS: Jenny Guanni Qu, a researcher at [VC fund] Pebblebed, analyzed 125,183 bugs from 20 years of Linux kernel development history (on Git). The findings show that the average bug takes 2.1 years to find. [Though the median is 0.7 years, with the average possibly skewed by "outliers" discovered after years of hiding.] The longest-lived bug, a buffer overflow in networking code, went unnoticed for 20.7 years! [But 86.5% of bugs are found within five years.]

The research was carried out by relying on the Fixes: tag that is used in kernel development. Basically, when a commit fixes a bug, it includes a tag pointing to the commit that introduced the bug. Jenny wrote a tool that extracted these tags from the kernel's git history going back to 2005. The tool finds all fixing commits, extracts the referenced commit hash, pulls dates from both commits, and calculates the time frame. As for the dataset, it includes over 125k records from Linux 6.19-rc3, covering bugs from April 2005 to January 2026. Out of these, 119,449 were unique fixing commits from 9,159 different authors, and only 158 bugs had CVE IDs assigned.
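The extraction step the post describes can be sketched in a few lines of Python. Everything below is hypothetical for illustration — the commit message, hash, and dates are made up, and a real tool would pull them from the kernel's git history (e.g. via `git log`) rather than from a string:

```python
import re
from datetime import datetime

# Hypothetical commit message illustrating the kernel's "Fixes:" convention:
# a fixing commit references the short hash of the commit that introduced the bug.
fix_message = """\
net: fix buffer overflow in frame parsing

Fixes: 8c2dd3e4f5a6 ("net: add frame parsing fast path")
Signed-off-by: Jane Doe <jane@example.com>
"""

# A Fixes: tag starts a line and is followed by an abbreviated commit hash.
FIXES_RE = re.compile(r'^Fixes:\s+([0-9a-f]{6,40})\b', re.MULTILINE)

def extract_fixes(message):
    """Return the commit hashes referenced by Fixes: tags in a commit message."""
    return FIXES_RE.findall(message)

def bug_lifetime_years(introduced, fixed):
    """Time between the bug-introducing commit and its fix, in years."""
    return (fixed - introduced).days / 365.25

print(extract_fixes(fix_message))  # ['8c2dd3e4f5a6']

# With (made-up) dates for both commits, the lifetime falls out directly:
introduced = datetime(2005, 5, 1)
fixed = datetime(2026, 1, 1)
print(round(bug_lifetime_years(introduced, fixed), 1))  # 20.7
```

Run over every fixing commit in the history, this hash-plus-dates calculation is essentially what produces the dataset's 2.1-year average and 0.7-year median.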

It took six hours to assemble the dataset, according to the blog post, which concludes that the percentage of bugs found within one year has improved dramatically, from 0% in 2010 to 69% by 2022. The blog post says this can likely be attributed to:
  • The Syzkaller fuzzer (released in 2015)
  • Dynamic memory error detectors like KASAN, KMSAN, KCSAN sanitizers
  • Better static analysis
  • More contributors reviewing code

But "We're simultaneously catching new bugs faster AND slowly working through ~5,400 ancient bugs that have been hiding for over 5 years."

They've also developed an AI model called VulnBERT that predicts whether a commit introduces a vulnerability, claiming that of all actual bug-introducing commits, it catches 92.2%. "The goal isn't to replace human reviewers but to point them at the 10% of commits most likely to be problematic, so they can focus attention where it matters..."


Technology

Finnish Startup IXI Plans New Autofocusing Eyeglasses (cnn.com) 44

An anonymous reader shared this report from CNET: Finland-based IXI Eyewear has raised more than $40 million from investors, including Amazon, to build glasses with adaptive lenses that could dynamically autofocus based on where the person wearing them is looking. In late 2025, the company said it had developed a glasses prototype that weighs just 22 grams. It includes embedded sensors aimed at the wearer's eyes and liquid crystal lenses that respond accordingly. According to the company, the autofocus is "powered by technology hidden within the frame that tracks eye movements and adjusts focus instantly — whether you're looking near or far..."

IXI told CNN in a story published on Tuesday that it expects to launch its glasses within the next year. It has a waitlist for the glasses on its website, but has not said in what regions they'll be available...


This type of technology is also being pursued by Japanese startups Elcyo and Vixion. Vixion already has a product with an adaptive zone embedded in the middle of each lens (the device does not resemble standard glasses).

CNET spoke to optometrist Meenal Agarwal, who pointed out that besides startup efforts, there have also been research prototypes like Stanford's autofocal glasses. "But none have consumer-ready, lightweight glasses in the market yet."

CNN reports on the 75-person company's product, noting that "By using a dynamic lens, IXI does away with fixed magnification areas." "Modern varifocals have this narrow viewing channel because they're mixing basically three different lenses," said Niko Eiden, CEO of IXI... "So, there are areas of distortion, the sides of the lenses are quite useless for the user, and then you really have to manage which part of this viewing channel you're looking at." The IXI glasses, Eiden said, will have a much larger "reading" area for close-up vision — although still not as large as the entire lens — and it will also be positioned "in a more optimal place," based on the user's standard eye exam. But the biggest plus, Eiden added, is that most of the time, the reading area simply disappears, leaving the main prescription for long distance on the entire lens. "For seeing far, the difference is really striking, because with varifocals you have to look at the top part of the lens in order to see far. With ours, you have the full lens area to see far..."

The new glasses won't come without drawbacks, Eiden admits: "This will be yet another product that you need to charge," he said. Although the charging port is magnetic and cleverly hidden in the temple area, overnight charging will be required... Another limitation is that more testing is required to make the glasses safe for driving, Eiden said. In case of a malfunction of the electronics or the liquid crystal area, he added, the glasses are equipped with a failsafe mode that shuts them down to the base state of the main lens (usually distance vision) without creating any visual disturbances.

Facebook

'Results Were Fudged': Departing Meta AI Chief Confirms Llama 4 Benchmark Manipulation (ft.com) 32

Yann LeCun, Meta's outgoing chief AI scientist and one of the pioneers credited with laying the groundwork for modern AI, has acknowledged that the company's Llama 4 language model had its benchmark results manipulated before its April 2025 release. In an interview with the Financial Times, LeCun said the "results were fudged a little bit" and that the team "used different models for different benchmarks to give better results."

Llama 4 was widely criticized as a flop at launch, and the company faced accusations of gaming benchmarks to make the model appear more capable than it was. LeCun said CEO Mark Zuckerberg was "really upset and basically lost confidence in everyone who was involved" in the release.

Zuckerberg subsequently "sidelined the entire GenAI organisation," according to LeCun. "A lot of people have left, a lot of people who haven't yet left will leave." LeCun himself is departing Meta after more than a decade to start a new AI research venture called Advanced Machine Intelligence Labs. He described the new hires brought in for Meta's superintelligence efforts as "completely LLM-pilled" -- a technology LeCun has repeatedly called "a dead end when it comes to superintelligence."

Education

The Entry-Level Hiring Process Is Breaking Down (theatlantic.com) 113

The traditional signals that employers used to evaluate entry-level job candidates -- college GPAs, cover letters, and interview performance -- have lost much of their value as grade inflation and widespread AI use render these metrics nearly meaningless, writes The Atlantic.

The recent-graduate unemployment rate now sits slightly higher than the overall workforce's, a reversal from historical norms where new college graduates were more likely to be employed than the average worker. Job postings on Handshake, a career-services platform for students and recent graduates, have fallen by more than 16 percent in the past year. At Harvard, 60% of undergraduate grades are now A's, up from fewer than a quarter two decades ago. Seven years ago, 70% of new graduates' resumes were screened by GPA; that figure has dropped to 40%.

Two working papers examining Freelancer.com found that cover-letter quality once strongly predicted who would get hired and how well they would perform -- until ChatGPT became available. "We basically find the collapse of this entire signaling mechanism," researcher Jesse Silbert said. The average number of applications per open job has increased by 26% in the past year. Students at UC Berkeley are now applying to 150 internships just to land one or two interviews.

Displays

How a 23-Year-Old in 1975 Built the World's First Handheld Digital Camera (bbc.com) 28

In 1975, 23-year-old electrical engineer Steve Sasson joined Kodak. And in a new interview with the BBC, he remembers that he'd found the whole photographic process "really annoying.... I wanted to build a camera with no moving parts. Now that was just to annoy the mechanical engineers..." "You take your picture, you have to wait a long time, you have to fiddle with these chemicals. Well, you know, I was raised on Star Trek, and all the good ideas come from Star Trek. So I said what if we could just do it all electronically...?"

Researchers at Bell Labs in the US had, in 1969, created a type of integrated circuit called a charge-coupled device (CCD). An electric charge could be stored on a metal-oxide semiconductor (MOS), and could be passed from one MOS to another. Its creators believed one of its applications might one day be used as part of an imaging device — though they hadn't worked out how that might happen. The CCD, nevertheless, was quickly developed. By 1974, the US microchip company Fairchild Semiconductor had built the first commercial CCD, measuring just 100 x 100 pixels — the tiny electronic samples taken of an original image. The new device's ability to capture an image was only theoretical — no-one had, as yet, tried to take an image and display it. (NASA, it turned out, was also looking at this technology, but not for consumer cameras....)

The CCD circuit responded to light but could only form an image if Sasson was somehow able to attach a lens to it. He could then convert the light into digital information — a blizzard of 1s and 0s — but there was just one problem: money. "I had no money to build this thing. Nobody told me to build it, and I certainly couldn't demand any money for it," he says. "I basically stole all the parts, I was in Kodak and the apparatus division, which had a lot of parts. I stole the optical assembly from an XL movie camera downstairs in a used parts bin. I was just walking by, you see it, and you take it, you know." He was also able to source an analogue to digital converter from a $12 (about £5 in 1974) digital voltmeter, rather than spending hundreds on the part. "I could manage to get all these parts without anybody really noticing," he says....

The bulky device needed a way to store the information the CCD was capturing, so Sasson used an audio cassette deck. But he also needed a way to view the image once it was saved on the magnetic tape. "We had to build a playback unit," Sasson says. "And, again, nobody asked me to do that either. So all I got to do is the reverse of what I did with the camera, and then I have to turn that digital pattern into an NTSC television signal." NTSC (National Television System Committee) was the conversion standard used by American TV sets. Sasson had to turn only 100 lines of digital code captured by the camera into the 400 lines that would form a television signal.
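The interview doesn't say exactly how the 100 captured lines were expanded to roughly 400 for the TV signal; the simplest plausible scheme is nearest-neighbor replication, repeating each scan line four times. A toy sketch of that idea (the `upscale_lines` name and list-of-lists frame representation are illustrative only):

```python
def upscale_lines(frame, factor=4):
    """Nearest-neighbor vertical upscaling: emit each captured scan line
    `factor` times, e.g. stretching a 100-line capture to a 400-line raster."""
    return [row for row in frame for _ in range(factor)]
```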

The solution was a Motorola microprocessor, and by December 1975, the camera and its playback unit were complete, the article points out. With his colleague Jim Schueckler, Sasson had spent more than a year putting together the "increasingly bulky" device that "looked like an oversized toaster." The camera had a shutter that would take an image at about 1/20th of a second, and — if everything worked as it should — the cassette tape would start to move as the camera transferred the stored information from its CCD [which took 23 seconds]. "It took about 23 seconds to play it back, and then about eight seconds to reconfigure it to make it look like a television signal, and send it to the TV set that I stole from another lab...." In 1978, Kodak was granted the first patent for a digital camera. It was Sasson's first invention. The patent is thought to have earned Eastman Kodak billions in licensing and infringement payments by the time they sold the rights to it, fearing bankruptcy, in 2012...

As for Sasson, he never worked on anything other than the digital technology he had helped to create until he retired from Eastman Kodak in 2009.

Thanks to long-time Slashdot reader sinij for sharing the article.

Science

Blackest Fabric Ever Made Absorbs 99.87% of All Light That Hits It (sciencealert.com) 53

alternative_right shares a report from ScienceAlert: Engineers at Cornell University have created the blackest fabric on record, finding it absorbs 99.87 percent of all light that dares to illuminate its surface. [...] In this case, the Cornell researchers dyed a white merino wool knit fabric with a synthetic melanin polymer called polydopamine. Then, they placed the material in a plasma chamber, and etched structures called nanofibrils -- essentially, tiny fibers that trap light. "The light basically bounces back and forth between the fibrils, instead of reflecting back out -- that's what creates the ultrablack effect," says Hansadi Jayamaha, fiber scientist and designer at Cornell.

The structure was inspired by the magnificent riflebird (Ptiloris magnificus). Hailing from New Guinea and northern Australia, male riflebirds are known for their iridescent blue-green chests contrasted with ultrablack feathers elsewhere on their bodies. The Cornell material actually outperforms the bird's natural ultrablackness in some ways. The bird is blackest when viewed straight on, but becomes reflective from an angle. The material, on the other hand, retains its light absorption powers when viewed from up to 60 degrees either side.
The findings have been published in the journal Nature Communications.
