AI

Can Investors Trust AI Sales Figures? Asks Wall Street Journal Opinion Piece (wsj.com) 39

A Wall Street Journal opinion piece warns of "a troubling trend" in AI's growth. "Rather than selling software, some AI companies are paying their partners to use it."

It cites OpenAI's $1.5 billion joint venture with private-equity firms, Anthropic's $200 million contribution to a private-equity firm joint venture, and Google's $750 million subsidization of Gemini's adoption by consulting firms. "These agreements muddy the distinction between a company's sound growth trajectory and artificial financial engineering." [T]he scale and structure of the recent AI deals go beyond standard incentive mechanisms... When a seller pays customers to buy its products, it is unclear if its revenue growth reflects vibrant demand or a willingness to accept subsidies.
Slashdot reader destinyland writes: This warning comes from a prominent figure in the investing community. For six years Robert Pozen was chairman of America's oldest mutual fund company, after five years at Fidelity. An advocate for corporate governance, he's currently a lecturer at MIT's business school (and the author of the book Remote Inc.: How to Thrive at Work...Wherever You Are). "As AI companies prepare initial public offerings, investors should scrutinize their numbers closely," Pozen writes, warning about "time-limited financial support".
"In evaluating AI sales figures, analysts should consider the distorted incentives that the recent financing deals create," writes Pozen: Private-equity firms, enticed by promised returns, might demand rapid rollouts of AI products, rather than ensuring their orderly and safe development. Portfolio companies of private-equity firms may embrace AI tools not because they are needed but because adoption is mandated by their owners. Consultants may favor one set of AI models based on the subsidy instead of the merits.

If guarantees and subsidies are major factors in the rapid adoption of AI tools, investors should be skeptical of AI companies' revenue projections. Many of their customers enticed by consultants will stop paying full price when the financial incentives are gone. Many of the portfolio companies of private-equity firms could back away from selected AI tools once these joint ventures expire. The challenge with evaluating these AI financing deals is the lack of transparency. At present, AI vendors don't separate revenue driven by subsidies or joint ventures from standard sales.

The lesson from the telecom debacle is that financial engineering can obscure, for years, the difference between real customer demand and demand driven by incentives. When AI companies begin to finance their own product distribution, guaranteeing returns to investors and subsidizing sales, it's a signal for investors to dig deeper.
Investing in an AI company? Ask what percentage of enterprise revenue is coming from subsidized channels or joint ventures, Pozen suggests. And the renewal/retention rate for customers not supported by subsidies or joint ventures...
Sci-Fi

'Project Hail Mary': Real Space Science, Real Astrophotography (wcvb.com) 71

Project Hail Mary has now grossed $300.8 million globally after earning another $54.1 million this weekend from 86 markets, reports Variety, noting that after just nine days it's now Amazon MGM's highest-grossing film ever.

And last weekend it had the best opening for a "non-franchise" movie in three years, adds the Associated Press — the best since 2023's Oppenheimer: Project Hail Mary, which cost nearly $200 million to produce... is on an enviable trajectory. Its second weekend hold was even better than that of Oppenheimer, which collected $46.7 million in its follow-up frame.
But the movie is based on a book by The Martian author Andy Weir, described by one news outlet as "a former software engineer and self-proclaimed 'lifelong space nerd'... known for his realistic and clear-eyed approach to scientifically technical stories." Project Hail Mary has plenty of real science in it, whether it be space mathematics, physics, or astrobiology... The film's namesake project even comprises the space programs of other nations, such as Roscosmos from Russia, the Chinese space program, and the European Space Agency...

The story relies on work NASA has done regarding exoplanets, or planets outside our solar system... [This includes a nearby star named Tau Ceti approximately 12 light years from Earth which is orbited by four planets — two once thought to be in "the habitable zone" where liquid water can exist.] Tau Ceti has long been a setting for sci-fi authors and storytellers. Isaac Asimov used it for his Robot series. Arthur C. Clarke's "Rama" spacecraft came across a mysterious tetrahedron in the Tau Ceti system. Authors Ursula K. Le Guin and Kim Stanley Robinson also set stories there, and it serves as the extrasolar setting of the 1968 Jane Fonda film Barbarella. Most recently, Bungie's video game Marathon is set in the far-off system, its backstory built around a large-scale plan to colonize Tau Ceti.

The movie also mentions 40 Eridani A, according to the article, a real star about 16 light-years away that was said to be orbited by the fictional planet Vulcan, home to Star Trek's Mr. Spock. It's also mentioned in Frank Herbert's Dune as the star system of the planets Ix and Richese ("noted for their machine culture and miniaturisation," according to the Stellar Australis site's "Project Dune" page).

And in a video on IMAX's YouTube channel, the film's directors explain how for a crucial scene they used non-visible-light photography, which is also an important part of modern astronomy. "Even the credits incorporate real astrophotography into the final moments," the article points out, using the work of award-winning Australian astrophotographer Rod Prazeres. "The only difference between his work of capturing space data in images and what ended up on the big screen was that he gave them 'starless versions' of his photographs to make it easier to place credit text over them."

Prazeres wrote on his web site that he was touched the producers "wanted the real thing... In a world where CGI and AI are everywhere, it meant a lot..."
Facebook

Meta Delays Rollout of New AI Model After Performance Concerns 27

Meta has delayed the release of its next major AI model after internal tests showed it lagging behind competing systems from Google, OpenAI, and Anthropic. The New York Times reports: The model, code-named Avocado, outperformed Meta's previous A.I. model and did better than Google's Gemini 2.5 model from March, two of the people said. But it has not performed as strongly as Gemini 3.0 from November, they said. As a result, Meta has delayed Avocado's release to at least May from this month, the people said. They added that the leaders of Meta's A.I. division had instead discussed temporarily licensing Gemini to power the company's A.I. products, though no decisions have been reached.

[...] It takes time to improve A.I. models, and Meta can still catch up to rivals, A.I. experts said. But a longer timeline has set in at the company, with Mr. Zuckerberg tempering expectations for Avocado in the past few months. "I expect our first models will be good, but more importantly will show the rapid trajectory we're on," he said on a call with investors in January.
A Meta spokesperson said in a statement: "As we've said publicly, our next model will be good but, more importantly, show the rapid trajectory we're on, and then we'll steadily push the frontier over the course of the year as we continue to release new models. We're excited for people to see what we've been cooking very soon."
Science

Why Falling Cats Always Seem To Land On Their Feet (nytimes.com) 66

An anonymous reader quotes a report from the New York Times: In a paper published last month in the journal The Anatomical Record, researchers offered a novel take on falling felines. Their evidence offers new insight into the so-called falling cat problem, suggesting in particular that cats have a very flexible segment of their spines that allows them to correct their orientation midair. [...] People have been curious about falling cats perhaps as long as the animals have been living with humans, but the mechanism behind their acrobatic abilities remains enigmatic. Part of the difficulty is that the anatomy of the cat has not been studied in detail, explains Yasuo Higurashi, a physiologist at Yamaguchi University in Japan and lead author of the study. [...]

Modern research has split the falling cat problem into two competing models. The first, "legs in, legs out," suggests that cats correct their falling trajectory by first extending their hind limbs before retracting them, using a sequential twist of their upper and then lower trunk to gain the proper posture while in free fall. The second model, "tuck and turn," suggests that cats turn their upper and lower bodies in simultaneous juxtaposed movements. [...]

The researchers found that the feline spine was extremely flexible in the upper thoracic vertebrae, but stiffer and heavier in the lower lumbar vertebrae. The discovery matches video evidence showing the cats first turn their front legs and then their hind legs. The results suggest the cat quickly spins its flexible upper torso to face the ground, allowing it to see so that it can correctly twist the rest of its body to match. "The thoracic spine of the cat can rotate like our neck," Dr. Higurashi said.

Experiments on the spine show the upper vertebrae can twist an astounding 360 degrees, he says, which helps cats make these correcting movements with ease. The results are consistent with the "legs in, legs out" model, but definitively determining which model is correct will take more work, Dr. Higurashi says. The results also yielded another discovery: Cats, like many animals, appear to have a right-side bias. One of the dropped cats corrected itself by turning to the right eight out of eight times, while the other turned right six out of eight times.

AI

AI CEOs Worry the Government Will Nationalize AI (thenewstack.io) 125

Palantir's CEO was blunt. "If Silicon Valley believes we are going to take away everyone's white-collar job... and you're going to screw the military — if you don't think that's going to lead to the nationalization of our technology, you're retarded..."

And OpenAI's Sam Altman is thinking about the same thing, writes long-time Slashdot reader destinyland: "It has seemed to me for a long time it might be better if building AGI were a government project," Sam Altman publicly mused last week... Altman speculated on the possibility of the government "nationalizing" private AI companies into a public project, admitting more than once he's wondered what would happen next. "I obviously don't know," Altman said — but he added, "I have thought about it, of course." He hedged that "It doesn't seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important."

Could powerful AI tools one day slip from the hands of private companies to be controlled by the U.S. government? Fortune magazine's AI editor points out that "many other breakthroughs with big strategic implications — from the Manhattan Project to the space race to early efforts to develop AI — were government-funded and largely government-directed." And Fortune added that last week the Defense Department threatened Anthropic with the Defense Production Act, which allows the president to designate "critical and strategic" goods for which businesses must accept the government's contracts. Fortune speculates this would've been "a sort of soft nationalization of Anthropic's production pipeline". Altman acknowledged Saturday that he'd felt the threat of attempted nationalization "behind a lot of the questions" he'd fielded on X.com.

How exactly will this AI build-out be handled — and how should AI companies be working with the government? In a sprawling ask-me-anything session on X that included other members of OpenAI leadership, one Missouri-based developer even broached an AGI-government scenario directly with OpenAI's Head of National Security Partnerships, Katherine Mulligan. If OpenAI built an AGI — something that even passed its own Turing test for AGI — would that be a case where its government contracts compelled it to grant access to the Defense Department?

"No," Mulligan answered. At our current moment in time, "We control which models we deploy."

The article notes 100 OpenAI employees joined with 856 Google employees in an online letter titled "We Will Not Be Divided" urging their bosses to refuse their models' use in domestic mass surveillance and in autonomous killing without human oversight.

But Adafruit's managing director Phillip Torrone (also long-time Slashdot reader ptorrone) sees analogies to America's atomic bomb-building Manhattan Project, and "what happened when the scientists who built the thing tried to set conditions on how the thing would be used." (The government pressured them to back down, which he compares to the Pentagon's designating Anthropic a "supply chain risk" before offering OpenAI a contract "with the same red lines, just worded differently".)

Ironically, Anthropic CEO Dario Amodei frequently recommends the Pulitzer Prize-winning 1986 book The Making of the Atomic Bomb...
Transportation

Detroit Automakers Take $50 Billion Hit (msn.com) 179

The Detroit Big Three -- General Motors, Ford and Stellantis -- have collectively announced more than $50 billion in write-downs on their electric-vehicle businesses after years of aggressive investment into a transition that, even before Republican lawmakers abolished a $7,500 federal tax credit last fall, was already running below expectations.

U.S. EV sales fell more than 30% in the fourth quarter of 2025 once the credit expired in September, and Congress also eliminated federal fuel-efficiency mandates. More than $20 billion in previously announced investments in EV and battery facilities were canceled last year -- the first net annual decrease in years, according to Atlas Public Policy.

GM has laid off thousands of workers and is converting plants once earmarked for EV trucks and motors to produce gas-powered trucks and V-8 engines. Ford dissolved a joint venture with a South Korean conglomerate to make batteries and now plans to build just one low-cost electric pickup by 2027. Stellantis is unloading its stake in a battery-making business after booking the largest EV-related charge of any automaker so far. Outside the U.S., the trajectory looks different: China's BYD recently overtook Tesla as the world's largest EV seller.
ISS

Microbes In Space Mutated and Developed a Remarkable Ability (sciencealert.com) 22

"A box full of viruses and bacteria has completed its return trip to the International Space Station," reports ScienceAlert, "and the changes these 'bugs' experienced in their travels could help us Earthlings tackle drug-resistant infections..." Scientists aboard the space station incubated different combinations of bacteria and phages for 25 days, while the research team led by biochemist Vatsan Raman carried out the same experiments in Madison, down here on Earth. "Space fundamentally changes how phages and bacteria interact: infection is slowed, and both organisms evolve along a different trajectory than they do on Earth," the researchers explain. In the weightlessness of space, bacteria acquired mutations in genes involved in the microbe's stress response and nutrient management. Their surface proteins also changed. After a slow start, the phages mutated in response, so they could continue binding to their victims.

The team found that certain space-specific phage mutations were especially effective at killing Earth-bound bacteria responsible for urinary tract infections (UTIs). More than 90 percent of the bacteria responsible for UTIs are antibiotic-resistant, making phage treatments a promising alternative.

AI

The Risks of AI in Schools Outweigh the Benefits, Report Says (npr.org) 33

This month saw results from a yearlong global study of "potential negative risks that generative AI poses to students". The study (by the Brookings Institution's Center for Universal Education) also suggests how to prevent risks and maximize benefits: After interviews, focus groups, and consultations with over 500 students, teachers, parents, education leaders, and technologists across 50 countries, a close review of over 400 studies, and a Delphi panel, we find that at this point in its trajectory, the risks of utilizing generative AI in children's education overshadow its benefits.
"At the top of Brookings' list of risks is the negative effect AI can have on children's cognitive growth," reports NPR — "how they learn new skills and perceive and solve problems." The report describes a kind of doom loop of AI dependence, where students increasingly off-load their own thinking onto the technology, leading to the kind of cognitive decline or atrophy more commonly associated with aging brains... As one student told the researchers, "It's easy. You don't need to (use) your brain." The report offers a surfeit of evidence to suggest that students who use generative AI are already seeing declines in content knowledge, critical thinking and even creativity. And this could have enormous consequences if these young people grow into adults without learning to think critically...

Survey responses revealed deep concern that use of AI, particularly chatbots, "is undermining students' emotional well-being, including their ability to form relationships, recover from setbacks, and maintain mental health," the report says. One of the many problems with kids' overuse of AI is that the technology is inherently sycophantic — it has been designed to reinforce users' beliefs... Winthrop offers an example of a child interacting with a chatbot, "complaining about your parents and saying, 'They want me to wash the dishes — this is so annoying. I hate my parents.' The chatbot will likely say, 'You're right. You're misunderstood. I'm so sorry. I understand you.' Versus a friend who would say, 'Dude, I wash the dishes all the time in my house. I don't know what you're complaining about. That's normal.' That right there is the problem."

AI did have some advantages, the article points out: The report says another benefit of AI is that it allows teachers to automate some tasks: "generating parent emails ... translating materials, creating worksheets, rubrics, quizzes, and lesson plans" — and more. The report cites multiple research studies that found important time-saving benefits for teachers, including one U.S. study that found that teachers who use AI save an average of nearly six hours a week and about six weeks over the course of a full school year...

AI can also help make classrooms more accessible for students with a wide range of learning disabilities, including dyslexia. But "AI can massively increase existing divides" too, [warns Rebecca Winthrop, one of the report's authors and a senior fellow at Brookings]. That's because the free AI tools that are most accessible to students and schools can also be the least reliable and least factually accurate... "[T]his is the first time in ed-tech history that schools will have to pay more for more accurate information. And that really hurts schools without a lot of resources."

The report calls for more research — and makes several recommendations (including "holistic" learning and "AI tools that teach, not tell"). But this may be its most important recommendation: "Provide a clear vision for ethical AI use that centers human agency..."

"We find that AI has the potential to benefit or hinder students, depending on how it is used."
Businesses

TSMC Says AI Demand Is 'Endless' After Record Q4 Earnings (arstechnica.com) 60

An anonymous reader quotes a report from Ars Technica: On Thursday, Taiwan Semiconductor Manufacturing Company (TSMC) reported record fourth-quarter earnings and said it expects AI chip demand to continue for years. During an earnings call, CEO C.C. Wei told investors that while he cannot predict the semiconductor industry's long-term trajectory, he remains bullish on AI. "All in all, I believe in my point of view, the AI is real -- not only real, it's starting to grow into our daily life. And we believe that is kind of -- we call it AI megatrend, we certainly would believe that," Wei said during the call. "So another question is 'can the semiconductor industry be good for three, four, five years in a row?' I'll tell you the truth, I don't know. But I look at the AI, it looks like it's going to be like an endless -- I mean, that for many years to come."

TSMC posted net income of NT$505.7 billion (about $16 billion) for the quarter, up 35 percent year over year and above analyst expectations. Revenue hit $33.7 billion, a 25.5 percent increase from the same period last year. The company expects nearly 30 percent revenue growth in 2026 and plans to spend between $52 billion and $56 billion on capital expenditures this year, up from $40.9 billion in 2025.

Businesses

'White-Collar Workers Shouldn't Dismiss a Blue-Collar Career Change' (msn.com) 145

White-collar workers stuck in a cycle of layoffs and stagnant wages might want to look past the traditional tech, finance and media job postings to an unexpected source of opportunity: the blue-collar sector, which faces a labor shortage and is seeing rapid transformation through private-equity investment. These jobs are generally less vulnerable to AI, and the earning trajectory can be steep, the WSJ writes.

At Crash Champions, a car-repair chain that has grown from 13 locations in 2019 to about 650 shops across 38 states, service advisers start at roughly $60,000 after a six-month apprenticeship and can double that within 18 months, according to CEO Matt Ebert. Directors overseeing multiple locations earn more than $200,000. Power Home Remodeling, a PE-backed construction company, says tech sales professionals earning $85,000 to $100,000 could make lateral moves after a 10-week training program.

The share of workers in their early 20s employed in blue-collar roles rose from 16.3% in 2019 to 18.4% in 2024, according to ADP -- five times the increase among 35- to 39-year-olds.
Medicine

The Golden Age of Vaccine Development (worksinprogress.co) 118

Microbiology had its golden age in the late nineteenth century, when researchers identified the bacterial causes of tuberculosis, cholera, typhoid, and a dozen other diseases in rapid succession. Antibiotics had theirs in the mid-twentieth century. Both booms eventually slowed. Vaccine development, by contrast, appears to be speeding up -- and the most productive era may still lie ahead, writes Works in Progress.

In the first half of the 2020s alone, researchers delivered the first effective vaccines against four different diseases: Covid-19, malaria, RSV and chikungunya. No previous decade matched that output. The acceleration rests on infrastructure that took two centuries to assemble. Edward Jenner's 1796 smallpox vaccine was a lucky accident he didn't understand. Louis Pasteur needed ninety years to turn that luck into systematic methods -- attenuation and inactivation -- that could be applied to other diseases. Generations of scientists then built the supporting machinery: Petri dishes for bacterial culture, techniques to keep animal cells alive outside the body, bioreactors for industrial production, sterilization and cold-chain logistics.

Those tools have now compounded. Cryo-electron microscopy reveals viral proteins atom by atom, a capability that directly enabled the RSV vaccine after earlier attempts failed. Genome sequencing costs collapsed from roughly $100 million per human genome in 2001 to under $1,000 by 2014, according to data from the National Human Genome Research Institute. The mRNA platform, refined through work by Katalin Kariko, Drew Weissman, and others, allows vaccines to be redesigned in weeks rather than years. The trajectory suggests more breakthroughs are possible. Whether they arrive depends on continued investment, however.
Earth

Record Ocean Heat is Intensifying Climate Disasters, Data Shows (theguardian.com) 61

The world's oceans absorbed yet another record-breaking amount of heat in 2025, continuing an almost unbroken streak of annual records since the start of the millennium and fueling increasingly extreme weather events around the globe. More than 90% of the heat trapped by humanity's carbon emissions ends up in the oceans, making ocean heat content one of the clearest indicators of the climate crisis's trajectory.

The analysis, published in the journal Advances in Atmospheric Sciences, drew on temperature data collected across the oceans and collated by three independent research teams. The measurements cover the top 2,000 meters of ocean depth, where most heat absorption occurs. The amount of heat absorbed is equivalent to more than 200 times the total electricity used by humans worldwide.

This extra thermal energy intensifies hurricanes and typhoons, produces heavier rainfall and greater flooding, and results in longer marine heatwaves that decimate ocean life. The oceans are likely at their hottest in at least 1,000 years and heating faster than at any point in the past 2,000 years.
AI

Sal Khan: Companies Should Give 1% of Profits To Retrain Workers Displaced By AI (nytimes.com) 154

"I believe artificial intelligence will displace workers at a scale many people don't yet realize," says Sal Khan (founder/CEO of the nonprofit Khan Academy). But in an op-ed in the New York Times he also proposes a solution that "could change the trajectory of the lives of millions who will be displaced..."

"I believe that every company benefiting from automation — which is most American companies — should... dedicate 1 percent of its profits to help retrain the people who are being displaced." This isn't charity. It is in the best interest of these companies. If the public sees corporate profits skyrocketing while livelihoods evaporate, backlash will follow — through regulation, taxes or outright bans on automation. Helping retrain workers is common sense, and such a small ask that these companies would barely feel it, while the public benefits could be enormous...

Roughly a dozen of the world's largest corporations now have a combined profit of over a trillion dollars each year. One percent of that would create a $10 billion annual fund that, in part, could create a centralized skill training platform on steroids: online learning, ways to verify skills gained and apprenticeships, coaching and mentorship for tens of millions of people. The fund could be run by an independent nonprofit that would coordinate with corporations to ensure that the skills being developed are exactly what are needed. This is a big task, but it is doable; over the past 15 years, online learning platforms have shown that it can be done for academic learning, and many of the same principles apply for skill training.

"The problem isn't that people can't work," Khan writes in the essay. "It's that we haven't built systems to help them continue learning and connect them to new opportunities as the world changes rapidly." To meet the challenges, we don't need to send millions back to college. We need to create flexible, free paths to hiring, many of which would start in high school and extend through life. Our economy needs low-cost online mechanisms for letting people demonstrate what they know. Imagine a model where capability, not how many hours students sit in class, is what matters; where demonstrated skills earn them credit and where employers recognize those credits as evidence of readiness to enter an apprenticeship program in the trades, health care, hospitality or new categories of white-collar jobs that might emerge...

There is no shortage of meaningful work — only a shortage of pathways into it.

Thanks to long-time Slashdot reader destinyland for sharing the article.
Space

Rocket Crashes in Brazil's First Commercial Launch (reuters.com) 20

The first-ever commercial rocket launched at Brazil's Alcantara Space Center crashed soon after liftoff earlier this week, dealing a blow to Brazilian aerospace ambitions and shares of South Korean satellite launch company Innospace. From a report: The rocket began its vertical trajectory as planned after liftoff [Monday] at 10:13 p.m. local time (0113 GMT) but fell to the ground after something went wrong 30 seconds into its flight, Innospace CEO Kim Soo-jong said in a letter to shareholders.

The craft crashed within a pre-designated safety zone and did not harm anyone, he said. Brazil's air force said firefighters were sent to analyze the wreckage and impact zone. "We are deeply sorry that we failed to meet the expectations of our shareholders who supported our first commercial launch," the CEO wrote in the letter, which was posted on the company's website on December 23. Innospace shares plunged nearly 29% in Seoul in the company's biggest daily drop and heaviest daily trading volume since its July 2024 listing.

AI

Are Warnings of Superintelligence 'Inevitability' Masking a Grab for Power? (noemamag.com) 183

Superintelligence has become "a quasi-political forecast" with "very little to do with any scientific consensus, emerging instead from particular corridors of power." That's the warning from James O'Sullivan, a lecturer in digital humanities from University College Cork. In a refreshing 5,600-word essay in Noema magazine, he notes the suspicious coincidence that "The loudest prophets of superintelligence are those building the very systems they warn against..."

"When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future." (For example, OpenAI CEO Sam Altman "seems determined to position OpenAI as humanity's champion, bearing the terrible burden of creating God-like intelligence so that it might be restrained.") The superintelligence discourse functions as a sophisticated apparatus of power, transforming immediate questions about corporate accountability, worker displacement, algorithmic bias and democratic governance into abstract philosophical puzzles about consciousness and control... Media amplification plays a crucial role in this process, as every incremental improvement in large language models gets framed as a step towards AGI. ChatGPT writes poetry; surely consciousness is imminent... Such accounts, often sourced from the very companies building these systems, create a sense of momentum that becomes self-fulfilling. Investors invest because AGI seems near, researchers join companies because that's where the future is being built and governments defer regulation because they don't want to handicap their domestic champions...

We must recognize this process as political, not technical. The inevitability of superintelligence is manufactured through specific choices about funding, attention and legitimacy, and different choices would produce different futures. The fundamental question isn't whether AGI is coming, but who benefits from making us believe it is... We do not yet understand what kind of systems we are building, or what mix of breakthroughs and failures they will produce, and that uncertainty makes it reckless to funnel public money and attention into a single speculative trajectory.

Some key points:
  • "The machines are coming for us, or so we're told. Not today, but soon enough that we must seemingly reorganize civilization around their arrival..."
  • "When we debate whether a future artificial general intelligence might eliminate humanity, we're not discussing the Amazon warehouse worker whose movements are dictated by algorithmic surveillance or the Palestinian whose neighborhood is targeted by automated weapons systems. These present realities dissolve into background noise against the rhetoric of existential risk..."
  • "Seen clearly, the prophecy of superintelligence is less a warning about machines than a strategy for power, and that strategy needs to be recognized for what it is... "
  • "Superintelligence discourse isn't spreading because experts broadly agree it is our most urgent problem; it spreads because a well-resourced movement has given it money and access to power..."
  • "Academic institutions, which are meant to resist such logics, have been conscripted into this manufacture of inevitability... reinforcing industry narratives, producing papers on AGI timelines and alignment strategies, lending scholarly authority to speculative fiction..."
  • "The prophecy becomes self-fulfilling through material concentration — as resources flow towards AGI development, alternative approaches to AI starve..."
  • "The dominance of superintelligence narratives obscures the fact that many other ways of doing AI exist, grounded in present social needs rather than hypothetical machine gods..." [He lists data sovereignty movements "that treat data as a collective resource subject to collective consent," as well as organizations like Canada's First Nations Information Governance Centre and New Zealand's Te Mana Raraunga, plus "Global South initiatives that use modest, locally governed AI systems to support healthcare, agriculture or education under tight resource constraints."] "Such examples... demonstrate how AI can be organized without defaulting to the superintelligence paradigm that demands everyone else be sacrificed because a few tech bros can see the greater good that everyone else has missed..."
  • "These alternatives also illuminate the democratic deficit at the heart of the superintelligence narrative. Treating AI at once as an arcane technical problem that ordinary people cannot understand and as an unquestionable engine of social progress allows authority to consolidate in the hands of those who own and build the systems..."

He's ultimately warning us about "politics masked as predictions..."

"The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain. And the answer cannot be left to the corporate prophets of artificial transcendence because the future of AI is a political field — it should be open to contestation.

"It belongs not to those who warn most loudly of gods or monsters, but to publics that should have the moral right to democratically govern the technologies that shape their lives."


Space

SpaceX Alleges a Chinese-Deployed Satellite Risked Colliding with Starlink (pcmag.com) 45

"A SpaceX executive says a satellite deployed from a Chinese rocket risked colliding with a Starlink satellite," reports PC Magazine: On Friday, company VP for Starlink engineering, Michael Nicolls, tweeted about the incident and blamed a lack of coordination from the Chinese launch provider CAS Space. "When satellite operators do not share ephemeris for their satellites, dangerously close approaches can occur in space," he wrote, referring to the publication of predicted orbital positions for such satellites...

[I]t looks like one of the satellites veered relatively close to a Starlink sat that's been in service for over two years. "As far as we know, no coordination or deconfliction with existing satellites operating in space was performed, resulting in a 200 meter (656 feet) close approach between one of the deployed satellites and STARLINK-6079 (56120) at 560 km altitude," Nicolls wrote... "Most of the risk of operating in space comes from the lack of coordination between satellite operators — this needs to change," he added.

Chinese launch provider CAS Space told PCMag that "As a launch service provider, our responsibility ends once the satellites are deployed, meaning we do not have control over the satellites' maneuvers."

The article also cites astronomer and satellite-tracking expert Jonathan McDowell, who had tweeted that CAS Space's response "seems reasonable." (In an email to PC Magazine, he'd said "Two days after launch is beyond the window usually used for predicting launch-related risks.")

But "The coordination that Nicolls cited is becoming more and more important," notes Space.com, since "Earth orbit is getting more and more crowded." In 2020, for example, fewer than 3,400 functional satellites were whizzing around our planet. Just five years later, that number has soared to about 13,000, and more spacecraft are going up all the time. Most of them belong to SpaceX. The company currently operates nearly 9,300 Starlink satellites, more than 3,000 of which have launched this year alone.

Starlink satellites avoid potential collisions autonomously, maneuvering themselves away from conjunctions predicted by available tracking data. And this sort of evasive action is quite common: Starlink spacecraft performed about 145,000 avoidance maneuvers in the first six months of 2025, which works out to around four maneuvers per satellite per month. That's an impressive record. But many other spacecraft aren't quite so capable, and even Starlink satellites can be blindsided by spacecraft whose operators don't share their trajectory data, as Nicolls noted.
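The per-satellite figure above can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, assuming an average active fleet of roughly 6,200 Starlink satellites over the first half of 2025 (a hypothetical figure, since the constellation grew toward the ~9,300 cited for year's end):

```python
# Rough check of the "around four maneuvers per satellite per month" claim.
# Assumption: ~6,200 active Starlink satellites on average in H1 2025
# (hypothetical; the fleet grew toward ~9,300 by late 2025).
maneuvers = 145_000   # avoidance maneuvers, Jan-Jun 2025 (from the article)
avg_fleet = 6_200     # assumed average number of active satellites
months = 6

per_sat_per_month = maneuvers / avg_fleet / months
print(f"{per_sat_per_month:.1f} maneuvers per satellite per month")
```

Under that assumed fleet size the result lands near 3.9, consistent with the "around four" figure; a larger assumed fleet would pull the rate lower.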

And even a single collision — between two satellites, or involving pieces of space junk, which are plentiful in Earth orbit as well — could spawn a huge cloud of debris, which could cause further collisions. Indeed, the nightmare scenario, known as the Kessler syndrome, is a debris cascade that makes it difficult or impossible to operate satellites in parts of the final frontier.

Businesses

Cisco Stock Hits New All-Time High, 25 Years After the Dotcom Bubble Burst (ft.com) 29

Cisco's stock price touched $80.25 on Wednesday, finally eclipsing its dotcom-era peak of $80.06 set on March 27, 2000 -- when the networking giant briefly surpassed Microsoft to become the world's most valuable company. The journey back took 25 years, eight months and 13 days. The company's fundamentals improved dramatically over that period, of course. Revenues have nearly quintupled since 1999, profits have quadrupled, earnings per share have grown eightfold, and margins have remained healthy throughout. Investors who bought at the peak still lost money to inflation for a generation.
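The inflation point can be made concrete with a quick calculation. A minimal sketch, assuming cumulative US CPI inflation of roughly 1.9x between March 2000 and late 2025 (an approximation) and ignoring dividends:

```python
# A share bought at the 2000 peak and sold at the new nominal high is
# still worth far less in real (March-2000) dollars.
# Assumption: ~1.9x cumulative CPI inflation over the period (approximate).
peak_2000 = 80.06   # dotcom-era peak, March 27, 2000
high_2025 = 80.25   # new all-time high
cpi_factor = 1.9    # assumed cumulative inflation multiplier

real_value = high_2025 / cpi_factor  # today's price in 2000 dollars
print(f"${real_value:.2f} in 2000 dollars vs. the ${peak_2000:.2f} peak")
```

Under those assumptions the new high is worth only about $42 in 2000 dollars, roughly half the original peak, which is why a buy-at-the-top investor still lost purchasing power despite the nominal record.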

Cisco's trajectory draws obvious comparisons to Nvidia, today's dominant "picks and shovels" supplier for the AI boom. Nvidia trades at a price-to-earnings ratio above 45 and an enterprise value-to-sales ratio near 24. At its 2000 peak, Cisco traded at a P/E above 200 and EV/sales of 31.

Security

New OpenAI Models Likely Pose 'High' Cybersecurity Risk, Company Says (axios.com) 32

An anonymous reader quotes a report from Axios: OpenAI said Wednesday that the cyber capabilities of its frontier AI models are accelerating, and warned that upcoming models are likely to pose a "high" risk, according to a report shared first with Axios. The models' growing capabilities could significantly expand the number of people able to carry out cyberattacks. OpenAI said it has already seen a significant increase in capabilities in recent releases, particularly as models are able to operate autonomously for longer periods, paving the way for brute force attacks.

The company notes that GPT-5 scored 27% on a capture-the-flag exercise in August, while GPT-5.1-Codex-Max scored 76% last month. "We expect that upcoming AI models will continue on this trajectory," the company says in the report. "In preparation, we are planning and evaluating as though each new model could reach 'high' levels of cybersecurity capability as measured by our Preparedness Framework." "High" is the second-highest level, below the "critical" level at which models are unsafe to be released publicly.
"What I would explicitly call out as the forcing function for this is the model's ability to work for extended periods of time," said OpenAI's Fouad Matin.

Space

LandSpace Could Become China's First Company To Land a Reusable Rocket (arstechnica.com) 21

China's private launch firm LandSpace is preparing the debut flight of its Zhuque-3 rocket, aiming to become the country's first company to land a reusable orbital-class booster using a Falcon 9-style return profile. Ars Technica reports: Liftoff could happen around 11 pm EST tonight (04:00 UTC Wednesday), or noon local time at the Jiuquan Satellite Launch Center in northwestern China. Airspace warning notices advising pilots to steer clear of the rocket's flight path suggest LandSpace has a launch window of about two hours. When it lifts off, the Zhuque-3 (Vermillion Bird-3) rocket will become the largest commercial launch vehicle ever flown in China. What's more, LandSpace will become the first Chinese launch provider to attempt a landing of its first stage booster, using the same tried-and-true return method pioneered by SpaceX and, more recently, Blue Origin in the United States.

Construction crews recently finished a landing pad in the remote Gobi Desert, some 240 miles (390 kilometers) southeast of the launch site at Jiuquan. Unlike US spaceports, the Jiuquan launch base is located in China's interior, with rockets flying over land as they climb into space. When the Zhuque-3 booster finishes its job of sending the rocket toward orbit, it will follow an arcing trajectory toward the recovery zone, firing its engines to slow for landing about eight-and-a-half minutes after liftoff. At least, that's what is supposed to happen. LandSpace officials have not made any public statements about the odds of a successful landing -- or, for that matter, a successful launch...
UPDATE: Chinese Reusable Booster Explodes During First Orbital Test

China

The Growing Problem With China's Unreliable Numbers (ft.com) 42

Chinese economist Gao Shanwen told a Washington panel in December that China's real GDP growth might be around 2% rather than the official figure near 5%. By January, Gao was no longer chief economist at SDIC Securities and went silent for almost a year. As FT points out in a long piece, China does not publish quarterly GDP breakdowns showing consumption, investment and net exports. Every other major economy produces these figures.

The IMF in 2024 gave China a C grade for national accounts. The rating puts China on par with India and below Vietnam. Fixed asset investment data showed negative growth in 2025 for only the second time in decades. Property investment has fallen consistently since 2022. But official GDP investment data shows no signs of declining.

The National Bureau of Statistics stopped publishing sectoral breakdowns of fixed asset investment in 2018. It discontinued a price series in 2021 and a land sales series in 2023. Beijing has restricted researcher access rather than addressing longstanding questions about data quality. China says it disagrees with the IMF's C rating. The government argued its production-side GDP approach is appropriate.

Why does it matter? China is too large and too interconnected with the global economy for unreliable data to be a purely domestic issue. The lack of transparency creates problems for everyone trying to make decisions based on understanding China's economic trajectory. As Eswar Prasad, a professor at Cornell University and former IMF official, told FT: China is one of the two biggest economies in the world. "It would be nice to know what is really going on."
