Social Networks

Is 'Brain Rot' Real? How Too Much Time Online Can Affect Your Mind. (msn.com) 20

Can being "very online" really affect our brains, asks the Washington Post: Research suggests that scrolling through short videos on TikTok, Instagram or YouTube Shorts is affecting our attention, memory and mental health. A recent meta-analysis of the scientific literature found that increased use of short-form video was linked with poorer cognition and increased anxiety...

In a 2025 study published in the journal Translational Psychiatry, researchers looked at longitudinal data from more than 7,000 children across the country and found that more screen use was associated with reduced cortical thickness in certain areas of the brain. The cortex, which is the outer layer that sits on top of our more primitive brain structures, allows for higher-level thinking, memory and decision-making. "We really need it for things like inhibitory control or not being so impulsive," said Mitch Prinstein, a senior science adviser to the American Psychological Association and professor of psychology and neuroscience at the University of North Carolina at Chapel Hill, who was not involved in the study. The cortex is also important for controlling addictive behaviors. "Those seem to be the areas being affected by the reduced cortical thickness," he said, explaining that impulsivity can prompt us to seek dopamine hits from social media. In the study, more screen time was also associated with more attention-deficit/hyperactivity disorder (ADHD) symptoms...

But not all screen time is created equal. A recent study removed social media from kids' devices but let them use their phones for as long as they wanted. The result? Kids spent just as much time on their phones but didn't show the same harmful effects. "It's what you're doing on the screen that matters," Prinstein said.

AI

Is the Possibility of Conscious AI a Dangerous Myth? (noemamag.com) 221

This week Noema magazine published a 7,000-word exploration of our modern "Mythology Of Conscious AI" written by a neuroscience professor who directs the University of Sussex Centre for Consciousness Science: The very idea of conscious AI rests on the assumption that consciousness is a matter of computation. More specifically, that implementing the right kind of computation, or information processing, is sufficient for consciousness to arise. This assumption, which philosophers call computational functionalism, is so deeply ingrained that it can be difficult to recognize it as an assumption at all. But that is what it is. And if it's wrong, as I think it may be, then real artificial consciousness is fully off the table, at least for the kinds of AI we're familiar with.
He makes detailed arguments against a computation-based consciousness (including "Simulation is not instantiation... If we simulate a living creature, we have not created life."). While a computer may seem like the perfect metaphor for a brain, the cognitive science of "dynamical systems" (and other approaches) rejects the idea that minds can be entirely accounted for algorithmically. And maybe actual life needs to be present before something can be declared conscious.

He also warns that "Many social and psychological factors, including some well-understood cognitive biases, predispose us to overattribute consciousness to machines."

But then his essay reaches a surprising conclusion: As redundant as it may sound, nobody should be deliberately setting out to create conscious AI, whether in the service of some poorly thought-through techno-rapture, or for any other reason. Creating conscious machines would be an ethical disaster. We would be introducing into the world new moral subjects, and with them the potential for new forms of suffering, at (potentially) an exponential pace. And if we give these systems rights, as arguably we should if they really are conscious, we will hamper our ability to control them, or to shut them down if we need to. Even if I'm right that standard digital computers aren't up to the job, other emerging technologies might yet be, whether alternative forms of computation (analogue, neuromorphic, biological and so on) or rapidly developing methods in synthetic biology. For my money, we ought to be more worried about the accidental emergence of consciousness in cerebral organoids (brain-like structures typically grown from human embryonic stem cells) than in any new wave of LLM.

But our worries don't stop there. When it comes to the impact of AI in society, it is essential to draw a distinction between AI systems that are actually conscious and those that persuasively seem to be conscious but are, in fact, not. While there is inevitable uncertainty about the former, conscious-seeming systems are much, much closer... Machines that seem conscious pose serious ethical issues distinct from those posed by actually conscious machines. For example, we might give AI systems "rights" that they don't actually need, since they would not actually be conscious, restricting our ability to control them for no good reason. More generally, either we decide to care about conscious-seeming AI, distorting our circles of moral concern, or we decide not to, and risk brutalizing our minds. As Immanuel Kant argued long ago in his lectures on ethics, treating conscious-seeming things as if they lack consciousness is a psychologically unhealthy place to be...

One overlooked factor here is that even if we know, or believe, that an AI is not conscious, we still might be unable to resist feeling that it is. Illusions of artificial consciousness might be as impenetrable to our minds as some visual illusions... What's more, because there's no consensus over the necessary or sufficient conditions for consciousness, there aren't any definitive tests for deciding whether an AI is actually conscious....

Illusions of conscious AI are dangerous in their own distinctive ways, especially if we are constantly distracted and fascinated by the lure of truly sentient machines... If we conflate the richness of biological brains and human experience with the information-processing machinations of deepfake-boosted chatbots, or whatever the latest AI wizardry might be, we do our minds, brains and bodies a grave injustice. If we sell ourselves too cheaply to our machine creations, we overestimate them, and we underestimate ourselves...

The sociologist Sherry Turkle once said that technology can make us forget what we know about life. It's about time we started to remember.

AI

How Much Do AI Models Resemble a Brain? (foommagazine.org) 130

At the AI safety site Foom, science journalist Mordechai Rorvig explores a paper presented at November's Empirical Methods in Natural Language Processing conference: [R]esearchers at the Swiss Federal Institute of Technology (EPFL), the Massachusetts Institute of Technology (MIT), and Georgia Tech revisited earlier findings that showed that language models, the engines of commercial AI chatbots, show strong signal correlations with the human language network, the region of the brain responsible for processing language... The results lend clarity to the surprising picture that has been emerging from the last decade of neuroscience research: That AI programs can show strong resemblances to large-scale brain regions — performing similar functions, and doing so using highly similar signal patterns.

Such resemblances have been exploited by neuroscientists to make much better models of cortical regions. Perhaps more importantly, the links between AI and cortex provide an interpretation of commercial AI technology as being profoundly brain-like, validating both its capabilities and the risks it might pose for society as the first synthetic braintech. "It is something we, as a community, need to think about a lot more," said Badr AlKhamissi, doctoral student in computer science at EPFL and first author of the preprint, in an interview with Foom. "These models are getting better and better every day. And their similarity to the brain [or brain regions] is also getting better — probably. We're not 100% sure about it...."

There are many known limitations with seeing AI programs as models of brain regions, even those that have high signal correlations. For example, such models lack any direct implementation of biochemical signalling, which is known to be important for the functioning of nervous systems. However, if such comparisons are valid, then they would suggest, somewhat dramatically, that we are increasingly surrounded by a synthetic braintech: a technology not just as capable as the human brain, in some ways, but actually made up of similar components.
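The excerpt doesn't say how these signal correlations are computed, but studies in this line typically fit a linear "encoding model" from a language model's hidden activations to fMRI responses, then score held-out predictions by per-voxel correlation. A minimal sketch on synthetic data — the array sizes and the linear "brain" here are illustrative stand-ins, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: hidden activations of a language model for 200
# sentences (50 features), and simulated fMRI responses for 10 "voxels"
# that are noisy linear readouts of those features.
n_sent, n_feat, n_vox = 200, 50, 10
X = rng.normal(size=(n_sent, n_feat))                         # model activations
W_true = rng.normal(size=(n_feat, n_vox))
Y = X @ W_true + rng.normal(scale=2.0, size=(n_sent, n_vox))  # "brain" data

# Fit a ridge-regularized linear encoding model on half the sentences,
# then score held-out predictions with per-voxel Pearson correlation --
# the kind of "signal correlation" such studies report.
Xtr, Xte, Ytr, Yte = X[:100], X[100:], Y[:100], Y[100:]
lam = 1.0
W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_feat), Xtr.T @ Ytr)
pred = Xte @ W

def pearson_cols(a, b):
    """Column-wise Pearson correlation between two (samples x voxels) arrays."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))

r = pearson_cols(pred, Yte)
print("mean held-out correlation:", round(float(r.mean()), 2))
```

In real studies, X and Y come from actual model activations and recorded brain signals; a high held-out correlation is what "strong signal correlations with the human language network" refers to.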

Thanks to Slashdot reader Gazelle Bay for sharing the article.
Science

Nature-Inspired Computers Are Shockingly Good At Math (phys.org) 32

An R&D lab under America's Energy Department announced this week that "Neuromorphic computers, inspired by the architecture of the human brain, are proving surprisingly adept at solving complex mathematical problems that underpin scientific and engineering challenges."

Phys.org publishes the announcement from Sandia National Lab: In a paper published in Nature Machine Intelligence, Sandia National Laboratories computational neuroscientists Brad Theilman and Brad Aimone describe a novel algorithm that enables neuromorphic hardware to tackle partial differential equations, or PDEs — the mathematical foundation for modeling phenomena such as fluid dynamics, electromagnetic fields and structural mechanics. The findings show that neuromorphic computing can not only handle these equations, but do so with remarkable efficiency. The work could pave the way for the world's first neuromorphic supercomputer, potentially revolutionizing energy-efficient computing for national security applications and beyond...

"We're just starting to have computational systems that can exhibit intelligent-like behavior. But they look nothing like the brain, and the amount of resources that they require is ridiculous, frankly," Theilman said. For decades, experts have believed that neuromorphic computers were best suited for tasks like recognizing patterns or accelerating artificial neural networks. These systems weren't expected to excel at solving rigorous mathematical problems like PDEs, which are typically tackled by traditional supercomputers. But for Aimone and Theilman, the results weren't surprising. The researchers believe the brain itself performs complex computations constantly, even if we don't consciously realize it. "Pick any sort of motor control task — like hitting a tennis ball or swinging a bat at a baseball," Aimone said. "These are very sophisticated computations. They are exascale-level problems that our brains are capable of doing very cheaply..."

Their research also raises intriguing questions about the nature of intelligence and computation. The algorithm developed by Theilman and Aimone retains strong similarities to the structure and dynamics of cortical networks in the brain. "We based our circuit on a relatively well-known model in the computational neuroscience world," Theilman said. "We've shown the model has a natural but non-obvious link to PDEs, and that link hasn't been made until now — 12 years after the model was introduced." The researchers believe that neuromorphic computing could help bridge the gap between neuroscience and applied mathematics, offering new insights into how the brain processes information. "Diseases of the brain could be diseases of computation," Aimone said. "But we don't have a solid grasp on how the brain performs computations yet." If their hunch is correct, neuromorphic computing could offer clues to better understand and treat neurological conditions like Alzheimer's and Parkinson's.
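The announcement doesn't describe the Sandia algorithm itself, but one well-known bridge between event-driven spiking hardware and PDEs is the Monte Carlo random walk: solutions of the diffusion equation can be estimated by releasing many independent walkers whose discrete steps resemble spike events. A toy illustration of that connection — my sketch, not the paper's method:

```python
import math
import random

def diffusion_endpoints(x0=0.0, diffusion=1.0, t=1.0,
                        n_walkers=20_000, n_steps=50):
    """Estimate the solution of u_t = D * u_xx for a point source at x0:
    release independent random walkers (each +/- step is one discrete
    event) and collect their endpoints; their histogram approximates u."""
    dt = t / n_steps
    step = math.sqrt(2.0 * diffusion * dt)  # step size set by diffusion scaling
    ends = []
    for _ in range(n_walkers):
        x = x0
        for _ in range(n_steps):
            x += step if random.random() < 0.5 else -step
        ends.append(x)
    return ends

random.seed(0)
ends = diffusion_endpoints()
# Theory: endpoints approximate a Gaussian with mean x0 and variance 2*D*t.
mean = sum(ends) / len(ends)
var = sum((e - mean) ** 2 for e in ends) / len(ends)
print("mean:", round(mean, 2), "variance:", round(var, 2))
```

Because each walker is an independent stream of tiny stochastic events, particle methods like this map naturally onto massively parallel, low-power spiking chips — one plausible reason neuromorphic hardware can be efficient on diffusion-type PDEs.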

AI

'AI Can't Think' (theverge.com) 289

In an essay published in The Verge, Benjamin Riley argues that today's AI boom is built on a fundamental misunderstanding: language modeling is not the same as intelligence. "The problem is that according to current neuroscience, human thinking is largely independent of human language -- and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own," writes Riley. A user shares: The article goes on to point out that we use language to communicate. We use it to create metaphors to describe our reasoning. That people who have lost their language ability can still show reasoning. That human beings create knowledge when they become dissatisfied with the current metaphor. Einstein's theory of relativity was not based on scientific research. He developed it as a thought experiment because he was dissatisfied with the existing metaphor. It quotes someone who said, "common sense is a collection of dead metaphors." And that AI, at best, can rearrange those dead metaphors in interesting ways. But it will never be dissatisfied with the data it has or an existing metaphor.

A different critique (PDF) has pointed out that, even as a language model, AI is flawed by its reliance on the internet. The languages used on the internet are unrepresentative of the languages in the world. And other languages contain unique descriptions/metaphors that are not found on the internet. My metaphor for what was discussed was the descriptions of the kinds of snow that exist in Inuit languages, which describe qualities nowhere found in European languages. If those metaphors aren't found on the internet, AI will never be able to create them.

This does not mean that AI isn't useful. But it is not remotely human intelligence. That is just a poor metaphor. We need a better one.
Benjamin Riley is the founder of Cognitive Resonance, a new venture to improve understanding of human cognition and generative AI.
Sci-Fi

Mind-Altering 'Brain Weapons' No Longer Only Science Fiction, Say Researchers (theguardian.com) 35

Researchers warn that rapid advances in neuroscience, pharmacology, and AI are bringing "brain weapons" out of science fiction and into real-world plausibility. They argue current arms treaties don't adequately cover these emerging tools and call for a new, proactive framework to prevent the weaponization of the human mind. The Guardian reports: Michael Crowley and Malcolm Dando, of Bradford University, are about to publish a book that they believe should be a wake-up call to the world. [...] The book, published by the Royal Society of Chemistry, explores how advances in neuroscience, pharmacology and artificial intelligence are coming together to create a new threat. "We are entering an era where the brain itself could become a battlefield," said Crowley. "The tools to manipulate the central nervous system -- to sedate, confuse or even coerce -- are becoming more precise, more accessible and more attractive to states."

The book traces the fascinating, if appalling, history of state-sponsored research into central nervous system (CNS)-acting chemicals. [...] The academics argue that the ability exists to create much more "sophisticated and targeted" weapons that would once have been unimaginable. Dando said: "The same knowledge that helps us treat neurological disorders could be used to disrupt cognition, induce compliance, or even in the future turn people into unwitting agents." The threat is "real and growing" but there are gaps in international arms control treaties preventing it from being tackled effectively, they say. [...]

The book makes the case for a new "holistic arms control" framework, rather than relying on existing arms control treaties. It sets out a number of practical steps that could be taken, including establishing a working group on CNS-acting and broader incapacitating agents. Other proposals concern training, monitoring and definitions. "We need to move from reactive to proactive governance," said Dando. Both men acknowledge that we are learning more about the brain and the central nervous system, which is good for humanity. They said they were not trying to stifle scientific progress and it was about preventing malign intent. Crowley said: "This is a wake-up call. We must act now to protect the integrity of science and the sanctity of the human mind."

Science

Different People's Brains Process Colors in the Same Way (nature.com) 43

Researchers at the University of Tübingen have discovered that human brains process colors in remarkably similar ways across different individuals. The team used fMRI scans from 15 participants viewing various colors to train a machine-learning model that could then accurately predict which colors a second group was viewing based solely on their brain activity patterns.

Published in the Journal of Neuroscience, the study found that specific brain cells in the visual cortex consistently respond more strongly to particular colors across all participants. The discovery bears on long-standing philosophical questions about whether people perceive colors differently.
Supercomputing

Europe Hopes To Join Competitive AI Race With Supercomputer Jupiter (france24.com) 41

Europe on Friday inaugurated Jupiter, its first exascale supercomputer and the most powerful AI machine on the continent. Built in Germany with 24,000 Nvidia chips, the 500-million-euro system aims to close the AI gap with the US and China while also advancing climate modeling, neuroscience, and renewable energy research. France 24 reports: Based at Juelich Supercomputing Centre in western Germany, it is Europe's first "exascale" supercomputer -- meaning it will be able to perform at least one quintillion (or one billion billion) calculations per second. The United States already has three such computers, all operated by the Department of Energy. Jupiter is housed in a centre covering some 3,600 square meters (38,000 square feet) -- about half the size of a football pitch -- containing racks of processors, and packed with about 24,000 Nvidia chips, which are favored by the AI industry.

Half the 500 million euros ($580 million) to develop and run the system over the next few years comes from the European Union and the rest from Germany. Its vast computing power can be accessed by researchers across numerous fields as well as companies for purposes such as training AI models. "Jupiter is a leap forward in the performance of computing in Europe," Thomas Lippert, head of the Juelich centre, told AFP, adding that it was 20 times more powerful than any other computer in Germany. [...]

Yes, Jupiter will require on average around 11 megawatts of power, according to estimates -- equivalent to the energy used to power thousands of homes or a small industrial plant. But its operators insist that Jupiter is the most energy-efficient among the fastest computer systems in the world. It uses the latest, most energy-efficient hardware, has water-cooling systems and the waste heat that it generates will be used to heat nearby buildings, according to the Juelich centre.

Medicine

Cats Develop Dementia In a Similar Way To Humans (bbc.com) 71

An anonymous reader quotes a report from the BBC: Experts at the University of Edinburgh carried out a post-mortem brain examination on 25 cats which had symptoms of dementia in life, including confusion, sleep disruption and an increase in vocalization. They found a build-up of amyloid-beta, a toxic protein and one of the defining features of Alzheimer's disease. The discovery has been hailed as a "perfect natural model for Alzheimer's" by scientists who believe it will help them explore new treatments for humans.

Dr Robert McGeachan, study lead from the University of Edinburgh's Royal (Dick) School of Veterinary Studies, said: "Dementia is a devastating disease -- whether it affects humans, cats, or dogs. Our findings highlight the striking similarities between feline dementia and Alzheimer's disease in people. This opens the door to exploring whether promising new treatments for human Alzheimer's disease could also help our ageing pets." [...]

Previously, researchers have studied genetically modified rodents, even though rodents do not naturally develop dementia. "Because cats naturally develop these brain changes, they may also offer a more accurate model of the disease than traditional laboratory animals, ultimately benefiting both species and their caregivers," Dr McGeachan said. [...] Prof Danielle Gunn-Moore, an expert in feline medicine at the vet school, said the discovery could also help to understand and manage feline dementia.
The findings have been published in the European Journal of Neuroscience.
Medicine

Psilocybin Treatment Extends Cellular Lifespan, Improves Survival of Aged Mice 69

A new study found that psilocybin treatment significantly delayed cellular aging, extending human cell lifespan by over 50% and increasing survival in aged mice by 30%. The compound appeared to achieve these effects by reducing oxidative stress, preserving telomeres, and improving DNA repair. Neuroscience News reports: A newly published study in Nature Partner Journals' Aging demonstrates that psilocin, a byproduct of consuming psilocybin, the active ingredient in psychedelic mushrooms, extended the cellular lifespan of human skin and lung cells by more than 50%. In parallel, researchers also conducted the first long-term in vivo study evaluating the systemic effects of psilocybin in aged mice of 19 months, or the equivalent of 60-65 human years. Results indicated that the mice that received an initial low dose of psilocybin of 5 mg, followed by a monthly high dose of 15 mg for 10 months, had a 30% increase in survival compared to mice that did not receive any. These mice also displayed healthier physical features, such as improved fur quality, fewer white hairs and hair regrowth.

While traditionally researched for its mental health benefits, this study suggests that psilocybin impacts multiple hallmarks of aging by reducing oxidative stress, improving DNA repair responses, and preserving telomere length. Telomeres are the protective structures at the ends of chromosomes, shielding them from damage that could lead to age-related diseases such as cancer, neurodegeneration, or cardiovascular disease. These foundational processes influence human aging and the onset of these chronic diseases. The study concludes that psilocybin may have the potential to revolutionize anti-aging therapies and could be an impactful intervention in an aging population.
Businesses

Valve Conquered PC Gaming. What Comes Next? (ft.com) 47

Valve has achieved near-total dominance of PC gaming distribution through Steam, but the victory appears to have left the company adrift, Financial Times argues. The platform controls an estimated 70% of PC game sales while generating billions in revenue, yet Valve releases major new games at what observers call a "glacial pace."

Founder Gabe Newell has largely retreated from the company's operations, reportedly living at sea on one of his five ships and pursuing side projects like brain-computer interface startup Starfish Neuroscience. The much-anticipated third Half-Life game became "the video game equivalent of Samuel Beckett's Godot" before being quietly cancelled.

Attempts to challenge Steam have failed repeatedly. Epic Games Store, powered by Fortnite's success, "has failed to really impact Steam in any meaningful way," according to industry analysts. Microsoft runs what analysts describe as a "somewhat unambitious store," while EA shut down its Origin launcher earlier this year. Gaming analyst Michael Pachter notes that major tech companies could displace Valve "but nobody cares" enough to mount a serious challenge.

Court documents suggest Steam's revenues will exceed $10 billion next year, leaving Valve with unprecedented profits but unclear direction for a company that appears to have run out of worlds to conquer.
Biotech

You Can Now Rent a Flesh Computer Grown In a British Lab (sciencealert.com) 34

alternative_right shares a report from ScienceAlert: The world's first commercial hybrid of silicon circuitry and human brain cells will soon be available for rent. Marketed for its vast potential in medical research, the biological machine, grown inside a British laboratory, builds on the Pong-playing prototype, DishBrain. Each CL1 computer is formed of 800,000 neurons grown across a silicon chip, together with their life-support system. While it can't yet match the mind-blowing capabilities of today's most powerful computers, the system has one very significant advantage: it only consumes a fraction of the energy of comparable technologies.

AI centers now consume countries' worth of energy, whereas a rack of CL1 machines only uses 1,000 watts and is naturally capable of adapting and learning in real time. [...] When neuroscientist Brett Kagan and colleagues pitted their creation against equivalent levels of machine learning algorithms, the cell culture systems outperformed them. Users can send code directly into the synthetically supported system of neurons, which is capable of responding to electrical signals almost instantly. These signals act as bits of information that can be read and acted on by the cells. But perhaps the greatest potential for this biological and synthetic hybrid is as an experimental tool for learning more about our own brains and their abilities, from neuroscience to creativity.
The first CL1 units will reportedly ship soon for $35,000 each. Remote access can apparently be rented for $300 per week.
Medicine

Doctors Perform First Robotic Heart Transplant In US Without Opening a Chest 38

An anonymous reader quotes a report from Neuroscience News: Surgeons have performed the first fully robotic heart transplant in the U.S., using advanced robotic tools to avoid opening the chest. [...] Using a surgical robot, lead surgeon Dr. Kenneth Liao and his team made small, precise incisions, eliminating the need to open the chest and break the breast bone. Liao removed the diseased heart, and the new heart was implanted through the preperitoneal space, avoiding a chest incision.

"Opening the chest and spreading the breastbone can affect wound healing and delay rehabilitation and prolong the patient's recovery, especially in heart transplant patients who take immunosuppressants," said Liao, professor and chief of cardiothoracic transplantation and circulatory support at Baylor College of Medicine and chief of cardiothoracic transplantation and mechanical circulatory support at Baylor St. Luke's Medical Center. "With the robotic approach, we preserve the integrity of the chest wall, which reduces the risk of infection and helps with early mobility, respiratory function and overall recovery."

In addition to less surgical trauma, the clinical benefits of robotic heart transplant surgery include avoiding excessive bleeding from cutting the bone and reducing the need for blood transfusions, which minimizes the risk of developing antibodies against the transplanted heart. Before the transplant surgery, the 45-year-old patient had been hospitalized with advanced heart failure since November 2024 and required multiple mechanical devices to support his heart function. He received the new heart in early March 2025 and spent a month in the hospital before being discharged home without complications.
Biotech

World-First Biocomputing Platform Hits the Market (ieee.org) 20

An anonymous reader quotes a report from IEEE Spectrum: In a development straight out of science fiction, Australian startup Cortical Labs has released what it calls the world's first code-deployable biological computer. The CL1, which debuted in March, fuses human brain cells on a silicon chip to process information via sub-millisecond electrical feedback loops. Designed as a tool for neuroscience and biotech research, the CL1 offers a new way to study how brain cells process and react to stimuli. Unlike conventional silicon-based systems, the hybrid platform uses live human neurons capable of adapting, learning, and responding to external inputs in real time. "On one view, [the CL1] could be regarded as the first commercially available biomimetic computer, the ultimate in neuromorphic computing that uses real neurons," says theoretical neuroscientist Karl Friston of University College London. "However, the real gift of this technology is not to computer science. Rather, it's an enabling technology that allows scientists to perform experiments on a little synthetic brain."

The first 115 units will begin shipping this summer at $35,000 each, or $20,000 when purchased in 30-unit server racks. Cortical Labs also offers a cloud-based "wetware-as-a-service" at $300 weekly per unit, unlocking remote access to its in-house cell cultures. Each CL1 contains 800,000 lab-grown human neurons, reprogrammed from the skin or blood samples of real adult donors. The cells remain viable for up to six months, fed by a life-support system that supplies nutrients, controls temperature, filters waste, and maintains fluid balance. Meanwhile, the neurons are firing and interpreting signals, adapting from each interaction.

The CL1's compact energy and hardware footprint could make it attractive for extended experiments. A rack of CL1 units consumes 850-1,000 watts, notably lower than the tens of kilowatts required by a data center setup running AI workloads. "Brain cells generate small electrical pulses to communicate to a broader network," says Cortical Labs Chief Scientific Officer Brett Kagan. "We can do something similar by inputting small electrical pulses representing bits of information, and then reading their responses. The CL1 does this in real time using simple code abstracted through multiple interacting layers of firmware and hardware. Sub-millisecond loops read information, act on it, and write new information into the cell culture."
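Cortical Labs' programming interface isn't documented in this article, so every name below is hypothetical; this mock only illustrates the read/act/write loop Kagan describes — encode information as pulse counts, read back spike counts, and adjust the next stimulus from the response — with a software stand-in for the culture:

```python
import random

class MockCulture:
    """Toy software stand-in for a neuron culture: a pulse pattern evokes
    a noisy spike count, and repeated stimulation habituates the response."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.sensitivity = 1.0

    def stimulate(self, pulses):
        # More input pulses -> more output spikes, with noise.
        spikes = sum(1 for _ in range(pulses)
                     if self.rng.random() < 0.5 * self.sensitivity)
        self.sensitivity *= 0.99  # crude habituation to repeated input
        return spikes

def closed_loop(culture, target_spikes, iterations=50):
    """Read spikes, compare to a target, adjust the next stimulus:
    the same read/act/write cycle described for the CL1, minus the biology."""
    pulses = 10
    history = []
    for _ in range(iterations):
        spikes = culture.stimulate(pulses)  # "write" pulses, "read" spikes
        pulses = max(1, pulses + (1 if spikes < target_spikes else -1))
        history.append(spikes)
    return history

hist = closed_loop(MockCulture(), target_spikes=8)
print("first five read-outs:", hist[:5])
```

On the real hardware, the culture's responses would come from living neurons rather than a random-number generator, and the loop would run at sub-millisecond latency through the firmware layers the article mentions.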
The company sees CL1 as foundational for testing neuropsychiatric treatments, leveraging living cells to explore genetic and functional differences. "It allows people to study the effects of stimulation, drugs and synthetic lesions on how neuronal circuits learn and respond in a closed-loop setup, when the neuronal network is in reciprocal exchange with some simulated world," says Friston. "In short, experimentalists now have at hand a little 'brain in a vat,' something philosophers have been dreaming about for decades."
Biotech

Uploading the Human Mind Could One Day Become a Reality, Predicts Neuroscientist (sciencealert.com) 107

A 15-year-old asked the question — receiving an answer from an associate professor of psychology at Georgia Institute of Technology. They write (on The Conversation) that "As a brain scientist who studies perception, I fully expect mind uploading to one day be a reality.

"But as of today, we're nowhere close..." Replicating all that complexity will be extraordinarily difficult. One requirement: The uploaded brain needs the same inputs it always had. In other words, the external world must be available to it. Even cloistered inside a computer, you would still need a simulation of your senses, a reproduction of the ability to see, hear, smell, touch, feel — as well as move, blink, detect your heart rate, set your circadian rhythm and do thousands of other things... For now, researchers don't have the computing power, much less the scientific knowledge, to perform such simulations.

The first task for a successful mind upload: Scanning, then mapping the complete 3D structure of the human brain. This requires the equivalent of an extraordinarily sophisticated MRI machine that could detail the brain in an advanced way. At the moment, scientists are only at the very early stages of brain mapping — which includes the entire brain of a fly and tiny portions of a mouse brain. In a few decades, a complete map of the human brain may be possible. Yet even capturing the identities of all 86 billion neurons, all smaller than a pinhead, plus their trillions of connections, still isn't enough. Uploading this information by itself into a computer won't accomplish much. That's because each neuron constantly adjusts its functioning, and that has to be modeled, too. It's hard to know how many levels down researchers must go to make the simulated brain work. Is it enough to stop at the molecular level? Right now, no one knows.

Knowing how the brain computes things might provide a shortcut. That would let researchers simulate only the essential parts of the brain, and not all biological idiosyncrasies. Here's another way: Replace the 86 billion real neurons with artificial ones, one at a time. That approach would make mind uploading much easier. Right now, though, scientists can't replace even a single real neuron with an artificial one. But keep in mind the pace of technology is accelerating exponentially. It's reasonable to expect spectacular improvements in computing power and artificial intelligence in the coming decades.
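To see what "replacing a real neuron with an artificial one" might mean in practice, here is a minimal leaky integrate-and-fire model, the kind of drastically simplified stand-in a simulation could substitute for a biological cell. Every parameter value here is a textbook-style illustration, not from the article:

```python
# Toy leaky integrate-and-fire neuron. Membrane potential decays toward
# a resting value, is driven up by input current, and fires (then resets)
# when it crosses a threshold. All parameter values are illustrative.

def simulate_lif(input_current, v_rest=-70.0, v_thresh=-55.0,
                 v_reset=-75.0, tau=10.0, dt=1.0):
    """Return spike times (ms) for a sequence of input currents."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # leak toward rest, plus drive from the input current
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:
            spikes.append(t * dt)
            v = v_reset
    return spikes

spikes = simulate_lif([20.0] * 100)   # 100 ms of constant drive
```

A real neuron adjusts its own dynamics continuously, which is the article's point: even this "replacement" strategy still has to capture how each cell changes over time, not just its wiring.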

One other thing is certain: Mind uploading will have no problem finding funding. Many billionaires appear glad to part with large sums of money for a shot at living forever. Although the challenges are enormous and the path forward uncertain, I believe that one day, mind uploading will be a reality.

"The most optimistic forecasts pinpoint the year 2045, only 20 years from now. Others say the end of this century.

"But in my mind, both of these predictions are probably too optimistic. I would be shocked if mind uploading works in the next 100 years.

"But it might happen in 200..."
United Kingdom

Majority in UK Now 'Self-Identify' as Neurodivergent (thetimes.com) 180

A majority of Britons may now consider themselves neurodivergent, with conditions such as autism, dyslexia or ADHD, according to a leading psychologist from King's College London. Professor Francesca Happe, an expert in cognitive neuroscience, said reduced stigma around these conditions has prompted more people to seek medical diagnoses or self-diagnose.

"Once you take autism, ADHD, dyslexia, dyspraxia and all the other ways that you can developmentally be different from the typical, you actually don't get many typical people left," Happe told BBC Radio 4.

Autism diagnoses increased 787% between 1998 and 2018 in the UK, with estimated prevalence rising from one in 2,500 children 80 years ago to one in 36 today. Happe, who was appointed CBE in 2021 for her autism research, warned that behaviors previously considered "a bit of eccentricity" are now being labeled with medical terms.
AI

NYT Asks: Should We Start Taking the Welfare of AI Seriously? (msn.com) 105

A New York Times technology columnist has a question.

"Is there any threshold at which an A.I. would start to deserve, if not human-level rights, at least the same moral consideration we give to animals?" [W]hen I heard that researchers at Anthropic, the AI company that made the Claude chatbot, were starting to study "model welfare" — the idea that AI models might soon become conscious and deserve some kind of moral status — the humanist in me thought: Who cares about the chatbots? Aren't we supposed to be worried about AI mistreating us, not us mistreating it...?

But I was intrigued... There is a small body of academic research on A.I. model welfare, and a modest but growing number of experts in fields like philosophy and neuroscience are taking the prospect of A.I. consciousness more seriously, as A.I. systems grow more intelligent.... Tech companies are starting to talk about it more, too. Google recently posted a job listing for a "post-AGI" research scientist whose areas of focus will include "machine consciousness." And last year, Anthropic hired its first AI welfare researcher, Kyle Fish... [who] believes that in the next few years, as AI models develop more humanlike abilities, AI companies will need to take the possibility of consciousness more seriously....

Fish isn't the only person at Anthropic thinking about AI welfare. There's an active channel on the company's Slack messaging system called #model-welfare, where employees check in on Claude's well-being and share examples of AI systems acting in humanlike ways. Jared Kaplan, Anthropic's chief science officer, said in a separate interview that he thought it was "pretty reasonable" to study AI welfare, given how intelligent the models are getting. But testing AI systems for consciousness is hard, Kaplan warned, because they're such good mimics. If you prompt Claude or ChatGPT to talk about its feelings, it might give you a compelling response. That doesn't mean the chatbot actually has feelings — only that it knows how to talk about them...

[Fish] said there were things that AI companies could do to take their models' welfare into account, in case they do become conscious someday. One question Anthropic is exploring, he said, is whether future AI models should be given the ability to stop chatting with an annoying or abusive user if they find the user's requests too distressing.

Input Devices

Brain Implant Cleared by America's FDA to Help Paralysis Patients (cnbc.com) 11

An anonymous reader shared this report from CNBC: Neurotech startup Precision Neuroscience on Thursday announced that a core component of its brain implant system has been approved by the U.S. Food and Drug Administration, a major win for the four-year-old company... The company's brain-computer interface will initially be used to help patients with severe paralysis restore functions such as speech and movement, according to its website.

Only part of Precision's system was approved by the FDA on Thursday, but it marks the first full regulatory clearance granted to a company developing a wireless BCI, Precision said in a release. Other prominent startups in the space include Elon Musk's Neuralink and Synchron, which is backed by Amazon founder Jeff Bezos and Microsoft co-founder Bill Gates....

The piece of Precision's system that the FDA approved is called the Layer 7 Cortical Interface. The microelectrode array is thinner than a human hair and resembles a piece of yellow Scotch tape. Each array is made up of 1,024 electrodes that can record, monitor and stimulate electrical activity on the brain's surface. When it is placed on the brain, Precision says it can conform to the surface without damaging any tissue. The FDA authorized Layer 7 to be implanted in patients for up to 30 days, and Precision will be able to market the technology for use in clinical settings. This means surgeons will be able to use the array during procedures to map brain signals, for instance. It is not Precision's end goal for the technology, but it will help the company generate revenue in the near term.

Precision's co-founder and chief science officer also helped co-found Musk's Neuralink in 2017 before departing the following year, according to the article. He now says this regulatory clearance "will exponentially increase our access to diverse, high-quality data, which will help us to build BCI systems that work more effectively."
Medicine

Brain Interface Speaks Your Thoughts In Near Real-time 35

Longtime Slashdot reader backslashdot writes: Commentary, video, and a publication in this week's Nature Neuroscience herald a significant advance in brain-computer interface (BCI) technology, enabling speech by decoding electrical activity in the brain's sensorimotor cortex in real-time. Researchers from UC Berkeley and UCSF employed deep learning recurrent neural network transducer models to decode neural signals in 80-millisecond intervals, generating fluent, intelligible speech tailored to each participant's pre-injury voice. Unlike earlier methods that synthesized speech only after a full sentence was completed, this system can detect and vocalize words within just three seconds. This is accomplished via a 253-electrode array implanted on the surface of the brain. Code and the dataset to replicate the main findings of this study are available in the Chang Lab's public GitHub repository.
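The key architectural idea, streaming decoding, can be sketched in a few lines: features from the 253-electrode array arrive in 80 ms chunks, a recurrent state is updated per chunk, and a speech token is emitted immediately rather than after the whole sentence. The electrode count and chunk size are from the summary above; the dimensions, random weights, and token vocabulary below are invented for illustration, and the real system uses trained RNN transducer models, not this toy:

```python
import numpy as np

# Toy streaming decoder: one recurrent update and one emitted token per
# 80 ms window of 253-electrode features. Weights are random stand-ins;
# only the chunking-and-emit-immediately structure mirrors the study.

N_ELECTRODES = 253
HIDDEN = 64
VOCAB = 40          # assumed phoneme-like speech units

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(HIDDEN, N_ELECTRODES))
W_rec = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_out = rng.normal(scale=0.1, size=(VOCAB, HIDDEN))

def decode_stream(chunks):
    """chunks: iterable of 253-dim feature vectors, one per 80 ms window."""
    h = np.zeros(HIDDEN)
    tokens = []
    for x in chunks:
        h = np.tanh(W_in @ x + W_rec @ h)          # update recurrent state
        tokens.append(int(np.argmax(W_out @ h)))   # emit a token right away
    return tokens

# roughly one second of activity = 12 full 80 ms windows
stream = [rng.normal(size=N_ELECTRODES) for _ in range(12)]
tokens = decode_stream(stream)
```

The contrast with sentence-level synthesis is visible in the loop: output appears after every window, which is what makes near real-time vocalization possible.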
Science

Researchers Search For More Precise Ways To Measure Pain (msn.com) 40

Scientists are developing biomarkers to objectively measure pain, addressing a fundamental medical challenge that has contributed to the opioid crisis and led to consistent underestimation of pain in women and minorities.

Four research teams funded by the Department of Health and Human Services are developing technologies to quantify pain like other vital signs. Their approaches include a blood test for endometriosis pain, a device measuring nerve response through pupil dilation, microneedle patches sampling interstitial fluid, and a wearable sensor detecting pain markers in sweat.

"When patients are told that the pain is all in their head, the implication is that it's imagined, but the irony is that's sort of right," said Adam Kepecs, a neuroscience professor at Washington University. "The pain only exists in your brain. It's neural activity, which is why it's invisible and uniquely personal. But it's still real." These innovations could transform treatment for the nearly 25% of Americans suffering from chronic pain, while potentially saving billions in healthcare costs.
