Submission + - Interview with Charles Forsyth of Plan 9 (www.livinginthefuture.rocks)

kyusaku writes: An interview with Charles Forsyth about the history of Plan 9 and Inferno, including how the PowerPC port came to be and some of the main features of the two OSes. A good introduction to both operating systems and their innovations.

Submission + - Study Finds A Third of New Websites are AI-Generated (404media.co)

alternative_right writes: Researchers working with data from the Internet Archive have discovered that a third of websites created since 2022 are AI-generated. The team of researchers, which includes people from Stanford, Imperial College London, and the Internet Archive, published their findings online in a paper titled “The Impact of AI-Generated Text on the Internet.” The research also found that all this AI-generated text is making the web more cheery and less verbose.

Submission + - Is AI Cannibalizing Human Intelligence? (wsj.com)

destinyland writes: "For the AI industry, a key design question has gone largely unasked: Is the product building human capacity or consuming it?" That's according to neuroscientist and cognitive scientist Vivienne Ming, who just published a book called “Robot-Proof: When Machines Have All the Answers, Build Better People.” Writing in the Wall Street Journal, she describes which group performed best at predicting real-world events (benchmarked against forecasters on the prediction market Polymarket): AI, humans, or human-AI hybrid teams.

The human groups performed poorly, relying on instinct or whatever information had come across their feeds that morning. The large AI models—ChatGPT and Gemini, in this case—performed considerably better, though still short of the market itself. But when we combined AI with humans, things got more interesting. Most hybrid teams used AI for the answer and submitted it as their own, performing no better than the AI alone. Others fed their own predictions into AI and asked it to come up with supporting evidence. These “validators” had stumbled into a classic confirmation-bias loop: the sycophancy that leads chatbots to tell you what you want to hear, even if it isn’t true. They ended up performing worse than an AI working solo.

But in roughly 5% to 10% of teams, something different emerged. The AI became a sparring partner. The teams pushed back, demanding evidence and interrogating assumptions. When the AI expressed high confidence, the humans questioned it. When the humans felt strongly about an intuition, they asked the AI to come up with a counterargument... These teams reached insightful conclusions that neither a human nor a machine could have produced on its own. They were the only group to consistently rival the prediction market’s accuracy. On certain questions, they even outperformed it...

We are building AI systems specifically designed to give us the answer before we feel the discomfort of not having it. What my experiment suggests is that the human qualities most likely to matter are not the feel-good ones. They’re the uncomfortable ones: the capacity to be wrong in public and stay curious; to sit with a question your phone could answer in three seconds and resist the urge to reach for it. To read a confident, fluent response from an AI and ask yourself, “What’s missing?” rather than default to “Great, that’s done.” To disagree with something that sounds authoritative and to trust your instinct enough to follow it. We don’t build these capacities by avoiding discomfort. We build them by choosing it, repeatedly, in small ways: the student who struggles through a problem before checking the answer; the person who asks a follow-up question in a conversation; the reader who sits with a difficult idea long enough for it to actually change their mind. Most AI chatbots today default to easy answers, which is hurting our ability to think critically.

I call this the Information-Exploration Paradox. As the cost of information approaches zero, human exploration collapses. We see it in students who perform better on AI-assisted tasks and worse on everything afterward. We see it in developers shipping more code and understanding it less. We are, in ways that feel like progress, slowly optimizing ourselves out of the loop.

Submission + - Europe Demolishes Russian Soyuz Launch Pad in French Guiana (united24media.com)

Geoffrey.landis writes: Up until 2022, the Russians had an agreement with the European Space Agency to launch their Soyuz rockets from the Kourou launch site in French Guiana. The 15-year cooperation program between ESA and Roscosmos conducted 26 successful launches before being suspended after Russia’s full-scale invasion of Ukraine. The Kourou launch site's near-equatorial location is advantageous for commercial launches due to the additional velocity rockets gain from Earth’s rotation. The demolition of the Russian launch pad at Kourou included a controlled explosion of a 52-meter mobile service tower. The remaining infrastructure at the site—including the assembly and testing complex, railway lines, liquid oxygen storage facilities, and fueling systems—will be transferred to MaiaSpace, a French startup affiliated with Arianespace. The company plans to reuse up to 80% of the existing infrastructure for its own launch vehicle program.

Submission + - No sex please, we're on Mars! Inside the simulated red planet mission (telegraph.co.uk)

fjo3 writes: No sex, no alcohol, no daylight, no fruit or vegetables, and no eye contact with your captors for 100 days.

It might sound like a hellish prison sentence, but these are the conditions for the European Space Agency’s (ESA) latest experiment to learn how humans cope in social isolation, before a mission to Mars.

On Thursday, six participants entered a sealed, simulated space station in Cologne, Germany, and will not be allowed out until August – unless something goes seriously wrong.

The trial, named Solis100, hopes to answer the question: What happens to a small team of humans who spend months isolated in a confined environment, without friends or family, under strict rules, cut off from the outside world?

Submission + - Ultra-Processed Foods Can Wreak Havoc On Your Attention Span (studyfinds.com)

fjo3 writes: For every 10% increase in the share of calories coming from ultra-processed sources, attention scores dropped by a small but measurable amount (about 0.05 points on the study’s scale), and a score used to estimate future dementia risk ticked upward. Both associations held up even after accounting for how closely participants followed a Mediterranean-style diet, widely considered the gold standard for brain-healthy eating. That detail matters because it suggests something about the processing itself may be driving the effect, not simply the absence of better food choices.

Published in Alzheimer’s & Dementia: Diagnosis, Assessment & Disease Monitoring, the study doesn’t prove that ultra-processed foods directly cause cognitive problems. It captured a single snapshot in time rather than tracking people over years.

Submission + - Is "Outsourcing Our Thinking to AI" a Bug or a Feature?

theodp writes: In a year-end podcast, GeekWire noted that Microsoft President Brad Smith offered his own evidence to investors that AI is real at Microsoft's Annual Shareholder Meeting in December. Smith explained that he had relied on the memory of Copilot’s Researcher Agent (YouTube, audio) earlier that day to recall and explain an issue Microsoft faced seven or eight years ago, to help company leaders deal with a similar problem they now faced. The agent generated a 25-page report with 100 citations that so wowed his colleagues that they clamored for him to share the prompt he used to produce it, so they too could learn how to use AI so effectively. While Smith shared neither the report nor the prompt with investors in the webcast, the anecdote alone left his fellow Microsoft execs nodding and smiling in amazement (GeekWire couldn't resist wondering aloud how many of the recipients used their AI agents to summarize the 25-page report rather than actually reading it).

Reminiscing about Def Leppard in her weekly Ed-Tech and AI newsletter Second Breakfast, watchdog Audrey Watters on Friday painted a much bleaker picture of the what-me-worry-about-thinking AI utopia presented to Microsoft investors, cautioning: "Our understanding of the world — knowledge, memories, skills — are never, as are the versions of these things fixed in print or in the machine, inert. And importantly, the more we know, the more we practice knowing — thinking, reading, writing, imagining, talking to one another — the more we strengthen our ability to know. And the inverse is true too: the less we practice, the weaker our cognitive powers. The more superficial and scattered our mental activities – skimming, clicking — the more shallow our thinking. The more we 'outsource our thinking' to 'AI' (hell, to the computer or the Web), the more we might find ourselves unable to think deeply at all. [...] There's a product, but there is no process for you, the user. No discernment, no contemplation. No recollection or consolidation of earlier thoughts and ideas and memories. No cognitive effort through which you will think or learn or know or grow or ever remember any of this."

Sharing Watters' concerns, The New Yorker's Jessica Winter asks, What Will It Take to Get A.I. Out of Schools? "The tech world assumes that A.I.-aided education is necessary and inevitable. A growing number of parents, educators, and cognitive scientists say the opposite," Winter begins. She closes with a reminder that "Nowhere is it written that a multinational conglomerate with a market cap of roughly four trillion dollars is fated to command our public schools, or to grant fellowships to the leaders of those schools, or to monetize the inefficient children who attend them. Another item in the Student Tech Bill of Rights, in fact, is the 'right to a learning environment that is free from undue corporate influence.'"

Submission + - Rectal cancer deaths rising rapidly among millennials (nbcnews.com) 2

fjo3 writes: “The rate of rectal cancer seems to be increasing more than two to three times compared to colon cancer,” said Mythili Menon Pathiyil, lead author of a new study and a gastroenterology fellow at SUNY Upstate Medical University in Syracuse, New York.

If the trend continues, rectal cancer deaths will exceed the number of colon cancer deaths — already the nation’s No. 1 cause of cancer death in people under age 50 — by 2035.

Submission + - US government ramps up mass surveillance (theconversation.com) 2

sinij writes:

People have little choice when buying devices, using apps or opening accounts but to agree to lengthy terms that include consent for companies to collect and sell their personal data. This “consent” allows their data to end up in the largely unregulated commercial data market. The government claims it can lawfully purchase this data from data brokers. But in buying your data in bulk on the commercial market, the government is circumventing the Constitution, Supreme Court decisions and federal laws designed to protect your privacy from unwarranted government overreach.

Still nothing to hide?

Submission + - Billionaire backer sues Trump family's crypto firm over alleged extortion (bbc.co.uk)

Alain Williams writes: The Trump family's World Liberty crypto venture is being sued by one of its billionaire backers over allegations of extortion.

Justin Sun has accused World Liberty of an "illegal scheme" to seize his WLFI tokens, a cryptocurrency issued by the company.

Sun alleges the firm, co-founded by US President Donald Trump and his son Eric Trump, has "frozen" all of his tokens and stripped him of his right to vote on governance issues.

Submission + - Mozilla Firefox uses AI to hunt bugs and suddenly zero days do not feel so untouchable (nerds.xyz)

BrianFagioli writes: Mozilla says it used an AI model from Anthropic to comb through Firefox’s code, and the results were hard to ignore. In Firefox 150, the team fixed 271 vulnerabilities identified during this effort, a number that would have been unthinkable not long ago. Instead of relying only on fuzzing or human review, the AI was able to reason through code and surface issues that typically require highly specialized expertise.

The bigger implication is less about one release and more about where this is heading. Security has long favored attackers, since they only need to find a single flaw while defenders have to protect everything. If AI can scale vulnerability discovery for defenders, that dynamic could start to shift. It does not mean zero days disappear overnight, but it suggests a future where bugs are found and fixed faster than attackers can weaponize them.

Submission + - Sun sets on Japanese pacifism with lifting of military trade ban (telegraph.co.uk)

fjo3 writes: Japan has lifted a post-war ban on weapons exports as it moves away from a pacifist stance that has defined its defence policy since the end of the Second World War.

Sanae Takaichi, Japan’s prime minister, announced the plans after a cabinet meeting on Tuesday, writing on X that the change was necessary given the “increasingly challenging security environment”.

Comment Re:Sounds about right (Score 1) 175

Reducing the population by 50% also means that the need for farmland and other similar natural resources will be reduced by about 50%: say hello to reforestation! Forests are very efficient at absorbing CO2.

Besides, I am European. I already use heat pumps and district heating, my gas stove runs on renewable gas (and it is better than an induction stove on the margin). EVs aren't controversial here either.
