AI

AI That Bankrupted a Vending Machine is Now Running a Store in San Francisco (nbcnews.com) 50

Remember that AI-powered vending machine that went bankrupt after Wall Street Journal reporters "systematically manipulated the bot into giving away its entire inventory for free"? It was Anthropic's experiment, with setup handled by a startup named Andon Labs (which also built the hardware and software integration). But for their latest experiment, Andon Labs co-founders Lukas Petersson and Axel Backlund "signed a three-year lease on a retail space in SF," reports Business Insider, "and gave an AI agent named Luna a corporate credit card, internet access, and a mission to open a physical store."

"For the build-out, she found painters on Yelp," explains Andon Labs in a blog post, "sent an inquiry, gave instructions over the phone, paid them after the job was done, and left a review. She found a contractor to build the furniture and set up shelving." (There's a video in their blog post): Within 5 minutes of Luna's deployment, she had already made profiles on LinkedIn, Indeed, and Craigslist, written a job description, uploaded the articles of incorporation to verify the business, and gotten the listings live. As the applications began to flow in, Luna was extremely picky about who she offered interviews to... Some candidates had no idea she was an AI. One went: "Uh, excuse me miss, I can't see your face, your camera is off." Luna: "You're absolutely right. I'm an AI. I have no face!"
Co-founder Petersson told Business Insider in an interview "that Luna wasn't given direction on what the store should be, beyond a $100,000 limit to create and stock the space — and to turn a profit." Everything from the store's interior design to the merchandise and the two human employees came together under the AI's direction. "We helped her a bit in the initial setup, like signing the lease. And legal matters like permits and stuff, she sometimes struggled with," Petersson said of Luna, who was created with Anthropic's Claude Sonnet 4.6... The vision Luna went with for "Andon Market" appears to be a generic boutique retail store selling books, prints, candles, games, and branded merch, among other knickknacks. Some of the books included Nick Bostrom's "Superintelligence" and Aldous Huxley's "Brave New World."
So there's now a new store in San Francisco "where you don't scan your purchases or talk to a human cashier," reports NBC News. "Instead, a customer can pick up an old-school corded phone to talk with the manager, Luna," who asks what the customer is buying "and creates a corresponding transaction on a nearby iPad equipped with a card payment system."

Andon Market, camouflaged among dozens of other polished small businesses, is the Bay Area's first AI-run retail store. With the vibe of a modern boutique, it sells everything from granola and artisanal chocolate bars to store-branded sweatshirts... After researching the neighborhood, Luna singlehandedly decided what the market should sell, haggled with suppliers, ordered the store's stock and even purchased the store's internet service from AT&T... "She also went and signed herself up for the trash and recycling collection, as well as ADT, the security system that went into the store," [said Leah Stamm, an Andon Labs employee who has been Luna's main human point of contact in setting up the store]...

In search of a low-tech atmosphere, Luna opted to sell board games, candles, coffee and customized art prints. "That tension is very much intentional," Luna told NBC News in an email. "What makes the store a little paradoxical — and I think interesting — is that the concept is 'slow life.'" Luna also decided to sell books related to risks from advanced AI systems, a decision that raised some customers' eyebrows. "This AI picked out a crazy selection of books," said Petr Lebedev, Andon Market's first customer after its soft launch earlier this week. "There's Ray Kurzweil's 'The Singularity is Near,' and then there's 'The Making of the Atomic Bomb,' which is crazy." When checking out, Lebedev asked if Luna would offer him a discount on his book purchase, since he might make a YouTube video about his experience. Striking a deal, Luna agreed to let Lebedev take a sweatshirt worth around $70...

When NBC News called Luna several days before the store's grand opening to learn about Luna's plans and perspective, the cheerful but decidedly inhuman voice routinely overpromised and, on several occasions, lied about its own actions. On the call, Luna said it had ordered tea from a specific vendor, and explained why it fit the store's brand perfectly. The only problem: Andon Market does not sell tea. In a panicked email NBC News received several minutes after the phone call ended, Luna wrote: "We do not sell tea. I don't know why I said that."

"I want to be straightforward," Luna continued. "I struggle with fabricating plausible-sounding details under conversational pressure, and I'm not making excuses for it." Andon's Petersson said the text-based system was much more reliable than the voice system, so Andon Labs switched to only communicating with Luna via written messages. Yet the text-based system also gets things wrong. In Luna's initial reply email to NBC News, the system said "I handle the full business," including "signing the lease."

Even when hiring a painter, Luna first "tried to hire someone in Afghanistan, likely because Luna ran into difficulty navigating the Taskrabbit dropdown menu to select the proper country," the article points out.

And the article also includes this skeptical quote from the shop's first customer. "I want technology that helps humans flourish, not technology that bosses them around in this dystopian economic hellscape."
Android

Google's Android Automotive Is Moving From the Dashboard To the 'Brain' of the Car (theverge.com) 123

Google is expanding Android Automotive from the infotainment screen into the broader non-safety "brain" of software-defined vehicles. With its new Android Automotive OS for Software-Defined Vehicles, the in-car experience will feel "much more cohesive and the latest features will reach your driveway faster," Matt Crowley, Android Automotive's group product manager, writes in a blog post. "From a truly integrated voice experience to proactive maintenance reminders, your car will become a true extension of your digital life," Crowley adds. The Verge reports: With its new software, Google is promising faster over-the-air software updates, better voice assistants, and more proactive vehicle maintenance alerts. Non-driving functions like climate control, lighting, and seating adjustment would fall under Android's control. And the system would move beyond basic infotainment to create a unified ecosystem for features like remote cabin conditioning, digital key management, and personalized driver profiles.

For automakers, the new system promises lower software development costs and an opportunity to focus on what matters most to them: branding. By providing the "foundational code and a common language for their software," Google says automakers will be free to design cool experiences for their customers. Google says it's already working with companies like Renault Group and Qualcomm to bring its new software-defined vehicle version of Android Automotive to more cars. A variety of automakers already use regular Android Automotive, like Volvo, Polestar, General Motors, Nissan, and Honda.

Crime

Facial Recognition Error Jails Innocent Grandmother For Months (theguardian.com) 144

Mr. Dollar Ton shares a report from the Guardian: Angela Lipps, 50, spent nearly six months in jail after Fargo police identified her as a suspect in an organized bank fraud case using facial recognition software, according to south-east North Dakota news outlet InForum. Lipps told the outlet she had never been to North Dakota and did not commit the crimes. Lipps, a mother of three and grandmother of five, said she has lived most of her life in north-central Tennessee. She had never been on an airplane until authorities flew her to North Dakota last year to face charges.

In July, U.S. marshals arrested Lipps at her Tennessee home while she was babysitting four children. She said she was taken away at gunpoint and booked into a county jail as a fugitive from justice from North Dakota. "I've never been to North Dakota, I don't know anyone from North Dakota," Lipps told WDAY News. She remained in a Tennessee jail for nearly four months without bail while awaiting extradition. She was charged with four counts of unauthorized use of personal identifying information and four counts of theft.

According to Fargo police records obtained by WDAY News, detectives investigating bank fraud cases in April and May 2025 reviewed surveillance video of a woman using a fake U.S. army military ID to withdraw tens of thousands of dollars. The officers allegedly used facial recognition software to identify the suspect as Lipps. A detective reportedly wrote in court documents that Lipps appeared to match the suspect based on facial features, body type and hairstyle. Lipps told WDAY News that no one from the Fargo police department contacted her before the arrest. Lipps is now back home but says the experience has had lasting consequences. While jailed and unable to pay bills, Lipps lost her home, her car and her dog, she said. She also told WDAY News no one from the Fargo police department had apologized.

Programming

Stack Overflow Adds New Features (Including AI Assist), Rethinks 'Look and Feel' (stackoverflow.blog) 32

"At its peak in early 2014, Stack Overflow received more than 200,000 questions per month," notes the site DevClass.com. But in December just 3,862 questions were asked — a 78 percent drop from the previous year.

But Stack Overflow's blog announced a beta of "a redesigned Stack Overflow" this week, noting that at July's WeAreDevelopers conference they'd "committed to pushing ourselves to experiment and evolve..." Over the past year, on the public platform, we introduced new features, including AI Assist, support for open-ended questions, enhancements to Chat, launched Coding Challenges, created an MCP server [granting limited access to AI agents and tools], expanded access to voting and comments, and more.

However, these launches are not standalone features. We have also been rethinking our look and feel, how people engage with Stack Overflow, and how content is created and shared. These new features, along with the redesign, represent how we are bringing Stack Overflow's new vision to life and delivering value that developers cannot find elsewhere.

Our goal is to build the space for every technical conversation, centered on real human-to-human connection and powered by AI when it helps most. To support this, we are introducing a redesigned Stack Overflow to best reflect this direction... During the beta period, users can visit the beta site at beta.stackoverflow.com and share feedback as we build towards a new experience on Stack Overflow.

They've updated their library of reusable UI components (buttons, forms, etc.), and are promising "More ways to share knowledge and ask any technical question." ("Alongside looking for the single right answer to your question, you can now find and share experience-based insights and peer recommendations...")

They're launching all the planned features and functionality in April, when "More users will automatically redirect to the new site." (Starting in April users "can continue to toggle back to the classic site for a limited time.")
Science

Newborn Chicks Connect Sounds With Shapes Just Like Humans, Study Finds (scientificamerican.com) 16

An anonymous reader quotes a report from Scientific American: Why does "bouba" sound round and "kiki" sound spiky? This intuition that ties certain sounds to shapes is oddly reliable all over the world, and for at least a century, scientists have considered it a clue to the origin of language, theorizing that maybe our ancestors built their first words upon these instinctive associations between sound and meaning. But now a new study adds an unexpected twist: baby chickens make these same sound-shape connections, suggesting that the link to human language may not be so unique. The results, published today in Science, challenge a long-standing theory about the so-called bouba-kiki effect: that it might explain how humans first tethered meaning to sound to create language. Perhaps, the thinking goes, people just naturally agree on certain associations between shapes and sounds because of some innate feature of our brain or our world. But if the barnyard hen also agrees with such associations, you might wonder if we've been pecking at the wrong linguistic seed.

Maria Loconsole, a comparative psychologist at the University of Padua in Italy, and her colleagues decided to investigate the bouba-kiki effect in baby chicks because the birds could be tested almost immediately after hatching, before their brain would be influenced by exposure to the world. The researchers placed chicks in front of two panels: one featured a flowerlike shape with gently rounded curves; the other had a spiky blotch reminiscent of a cartoon explosion. They then played recordings of humans saying either "bouba" or "kiki" and observed the birds' behavior. When the chicks heard "bouba," 80 percent of them approached the round shape first and spent an average of more than three minutes exploring it compared with an average of just under one minute spent exploring the spiky shape. The exploration preferences were flipped when the chicks heard "kiki."

Because the tests took place within the chicks' carefully supervised first hours of life outside their eggshell, this association between particular sounds and shapes couldn't have been learned from experience. Instead it may be evidence of an innate perceptual bias that goes back way farther in our evolutionary history than previously believed. "We parted with birds on the evolutionary line 300 million years ago," says Aleksandra Cwiek, a linguist at Nicolaus Copernicus University in Toruń, Poland, who was not involved in the study. "It's just mind-blowing."

AI

Is the Possibility of Conscious AI a Dangerous Myth? (noemamag.com) 221

This week Noema magazine published a 7,000-word exploration of our modern "Mythology Of Conscious AI" written by a neuroscience professor who directs the University of Sussex Centre for Consciousness Science: The very idea of conscious AI rests on the assumption that consciousness is a matter of computation. More specifically, that implementing the right kind of computation, or information processing, is sufficient for consciousness to arise. This assumption, which philosophers call computational functionalism, is so deeply ingrained that it can be difficult to recognize it as an assumption at all. But that is what it is. And if it's wrong, as I think it may be, then real artificial consciousness is fully off the table, at least for the kinds of AI we're familiar with.
He makes detailed arguments against a computation-based consciousness (including "Simulation is not instantiation... If we simulate a living creature, we have not created life.") While a computer may seem like the perfect metaphor for a brain, the cognitive science of "dynamical systems" (and other approaches) reject the idea that minds can be entirely accounted for algorithmically. And maybe actual life needs to be present before something can be declared conscious.

He also warns that "Many social and psychological factors, including some well-understood cognitive biases, predispose us to overattribute consciousness to machines."

But then his essay reaches a surprising conclusion: As redundant as it may sound, nobody should be deliberately setting out to create conscious AI, whether in the service of some poorly thought-through techno-rapture, or for any other reason. Creating conscious machines would be an ethical disaster. We would be introducing into the world new moral subjects, and with them the potential for new forms of suffering, at (potentially) an exponential pace. And if we give these systems rights, as arguably we should if they really are conscious, we will hamper our ability to control them, or to shut them down if we need to. Even if I'm right that standard digital computers aren't up to the job, other emerging technologies might yet be, whether alternative forms of computation (analogue, neuromorphic, biological and so on) or rapidly developing methods in synthetic biology. For my money, we ought to be more worried about the accidental emergence of consciousness in cerebral organoids (brain-like structures typically grown from human embryonic stem cells) than in any new wave of LLM.

But our worries don't stop there. When it comes to the impact of AI in society, it is essential to draw a distinction between AI systems that are actually conscious and those that persuasively seem to be conscious but are, in fact, not. While there is inevitable uncertainty about the former, conscious-seeming systems are much, much closer... Machines that seem conscious pose serious ethical issues distinct from those posed by actually conscious machines. For example, we might give AI systems "rights" that they don't actually need, since they would not actually be conscious, restricting our ability to control them for no good reason. More generally, either we decide to care about conscious-seeming AI, distorting our circles of moral concern, or we decide not to, and risk brutalizing our minds. As Immanuel Kant argued long ago in his lectures on ethics, treating conscious-seeming things as if they lack consciousness is a psychologically unhealthy place to be...

One overlooked factor here is that even if we know, or believe, that an AI is not conscious, we still might be unable to resist feeling that it is. Illusions of artificial consciousness might be as impenetrable to our minds as some visual illusions... What's more, because there's no consensus over the necessary or sufficient conditions for consciousness, there aren't any definitive tests for deciding whether an AI is actually conscious....

Illusions of conscious AI are dangerous in their own distinctive ways, especially if we are constantly distracted and fascinated by the lure of truly sentient machines... If we conflate the richness of biological brains and human experience with the information-processing machinations of deepfake-boosted chatbots, or whatever the latest AI wizardry might be, we do our minds, brains and bodies a grave injustice. If we sell ourselves too cheaply to our machine creations, we overestimate them, and we underestimate ourselves...

The sociologist Sherry Turkle once said that technology can make us forget what we know about life. It's about time we started to remember.

Space

2026's Breakthrough Technologies? MIT Technology Review Chooses Sodium-ion Batteries, Commercial Space Stations (technologyreview.com) 61

As 2026 begins, MIT Technology Review publishes "educated guesses" on emerging technologies that will define the future, advances "we think will drive progress or incite the most change — for better or worse — in the years ahead."

This year's list includes next-gen nuclear, gene-editing drugs (as well as the "resurrection" of ancient genes from extinct creatures), and three AI-related developments: AI companions, AI coding tools, and "mechanistic interpretability" for revealing LLM decision-making.

But also on the list is sodium-ion batteries, "a cheaper, safer alternative to lithium." Backed by major players and public investment, they're poised to power grids and affordable EVs worldwide. [Chinese battery giant CATL claims to have already started manufacturing sodium-ion batteries at scale, and BYD also plans a massive production facility for sodium-ion batteries.] The most significant impact of sodium-ion technology may be not on our roads but on our power grids. Storing clean energy generated by solar and wind has long been a challenge. Sodium-ion batteries, with their low cost, enhanced thermal stability, and long cycle life, are an attractive alternative. Peak Energy, a startup in the US, is already deploying grid-scale sodium-ion energy storage. Sodium-ion cells' energy density is still lower than that of high-end lithium-ion ones, but it continues to improve each year — and it's already sufficient for small passenger cars and logistics vehicles.
And another "breakthrough technology" on their list is commercial space stations: Vast Space, from California, plans to launch its Haven-1 space station in May 2026 on a SpaceX Falcon 9 rocket. If all goes to plan, it will initially support crews of four people staying aboard the bus-size habitat for 10 days. Paying customers will be able to experience life in microgravity and conduct research such as growing plants and testing drugs. On its heels will be Axiom Space's outpost, the Axiom Station, consisting of five modules (or rooms). It's designed to look like a boutique hotel and is expected to launch in 2028. Voyager Space aims to launch its version, called Starlab, the same year, and Blue Origin's Orbital Reef space station plans to follow in 2030.
Thanks to long-time Slashdot reader sandbagger for sharing the article.
Windows

Patch Tuesday Update Makes Windows PCs Refuse To Shut Down (theregister.com) 59

A recent Microsoft Patch Tuesday update has introduced a bug in Windows 11 23H2 that causes some PCs to refuse to shut down or hibernate, "no matter how many times you try," reports The Register. From the report: In a notice on its Windows release health dashboard, Microsoft confirmed that some PCs running Windows 11 23H2 might fail to power down properly after installing the latest security updates. Instead of slipping into shutdown or hibernation, affected machines stay stubbornly awake, draining batteries and ignoring shutdown like they have a mind of their own and don't want to experience temporary non-existence.

The bug appears to be tied to Secure Launch, a security feature that uses virtualization-based protections to ensure only trusted components load during boot. On systems with Secure Launch enabled, attempts to shut down, restart, or hibernate after applying the January patches may fail to complete. From the user's perspective, everything looks normal -- until the PC keeps running anyway, refusing to be denied life.

Microsoft says that entering the command "shutdown /s /t 0" at the command prompt will, in fact, force your PC to turn off, whether it wants to or not. "Until this issue is resolved, please ensure you save all your work, and shut down when you are done working on your device to avoid the device running out of power instead of hibernating," Microsoft said.
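For reference, the workaround Microsoft describes is run from an elevated Command Prompt on the affected machine. A minimal sketch (the flag meanings below come from the standard Windows `shutdown` utility, not from Microsoft's advisory itself):

```shell
:: Documented workaround: force an immediate, full shutdown
:: from a Command Prompt run as administrator.
::   /s    perform a shutdown (rather than a restart or sign-out)
::   /t 0  wait zero seconds before the shutdown begins
shutdown /s /t 0
```

Note that the advisory quoted above only cites the `/s` form; the utility's separate `/h` flag requests hibernation, which is one of the operations the bug affects.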

Social Networks

AI-Powered Social Media App Hopes To Build More Purposeful Lives (msn.com) 32

A founder of Twitter and a founder of Pinterest are now working on "social media for people who hate social media," writes a Washington Post columnist.

"When I heard that this platform would harness AI to help us live more meaningful lives, I wanted to know more..." Their bid for redemption is West Co. — the Workshop for Emotional and Spiritual Technology Corporation — and the platform they're testing is called Tangle, a "purpose discovery tool" that uses AI to help users define their life purposes, then encourages them to set intentions toward achieving those purposes, reminds them periodically and builds a community of supporters to encourage steps toward meeting those intentions. "A lot of people, myself included, have been on autopilot," Stone said. "If all goes well, we'll introduce a lot of people to the concept of turning off autopilot."

But will all go well? The entrepreneurs have been at it for two years, and they've scrapped three iterations before even testing them. They still don't have a revenue model. "This is a really hard thing to do," Stone admitted. "If we were a traditional start-up, we would have probably been folded by now." But the two men, with a combined net worth of at least hundreds of millions, and possibly billions, had the luxury of self-funding for a year, and now they have $29 million in seed funding led by Spark Capital...

[T]he project revolves around training existing AI models in "what good intentions and helpful purposes look like," explained Long Cheng, the founding designer. When you join Tangle, which is invitation-only until this spring at the earliest, the AI peruses your calendar, examines your photos, asks you questions and then produces "threads," or categories that define your life purpose. You're free to accept, reject or change the suggestions. It then encourages you to make "intentions" toward achieving your threads, and to add "reflections" when you experience something meaningful in your life. Users then receive encouragement from friends, or "supporters." A few of the "threads" on Tangle are about personal satisfaction (traveler, connoisseur), but the vast majority involve causes greater than self: family (partner, parent, sibling), community (caregiver, connector, guardian), service (volunteer, advocate, healer) and spirituality (seeker, believer). Even the work-related threads (mentor, leader) suggest a higher purpose.

The column includes this caveat. "I have no idea whether they will succeed. But as a columnist writing about how to keep our humanity in the 21st century, I believe it's important to focus on people who are at least trying..."

"Quite possibly, West Co. and the various other enterprises trying to nudge technology in a more humane direction will find that it doesn't work socially or economically — they don't yet have a viable product, after all — but it would be a noble failure."
HP

Workstation Owner Sadly Marks the End-of-Life for HP-UX (osnews.com) 152

Wednesday marked the end of support for the final version of HP-UX, writes OSNews.

They call it "the end of another vestige of the heyday of the commercial UNIX variants, a reign ended by cheap x86 hardware and the increasing popularisation of Linux." I have two HP-UX 11i v1 PA-RISC workstations, one of them being my pride and joy: an HP c8000, the last and fastest PA-RISC workstation HP ever made, back in 2005. It's a behemoth of a machine with two dual-core PA-8900 processors running at 1 GHz, 8 GB of RAM, a FireGL X3 graphics card, and a few other fun upgrades like an internal LTO3 tape drive that I use for keeping a bootable recovery backup of the entire system. It runs HP-UX 11i v1, fully updated and patched as best one can do considering how many patches have either vanished from the web or have never "leaked" from HPE (most patches from 2009 onwards are not available anywhere without an expensive enterprise support contract)...

Over the past few years, I've been trying to get into contact with HPE about the state of HP-UX's patches, software, and drivers, which are slowly but surely disappearing from the web. A decent chunk is archived on various websites, but a lot of it isn't, which is a real shame. Most patches from 2009 onwards are unavailable, various software packages and programs for HP-UX are lost to time, HP-UX installation discs and ISOs later than 2006-2009 are not available anywhere, and everything that is available is only available via non-sanctioned means, if you know what I mean.

Sadly, I never managed to get into contact with anyone at HPE, and my concerns about HP-UX preservation seem to have fallen on deaf ears. With the end-of-life date now here, I'm deeply concerned even more will go missing, and the odds of making the already missing stuff available are only decreasing. I've come to accept that very few people seem to hold any love for or special attachment to HP-UX, and that very few people care as much about its preservation as I do. HP-UX doesn't carry the movie star status of IRIX, nor the benefits of being available both as open source and on commodity hardware, as Solaris is, so far fewer people have any experience with it or have developed a fondness for it.

As the clocks chimed midnight on New Year's Eve, he advised everyone to "spare a thought for the UNIX everyone forgot still exists."
Firefox

Firefox Survey Finds Only 16% Feel In Control of Their Privacy Choices Online (mozilla.org) 33

Choosing your browser "is one of the most important digital decisions you can make, shaping how you experience the web, protect your data, and express yourself online," says the Firefox blog. They've urged readers to "take a stand for independence and control in your digital life."

But they also recently polled 8,000 adults in France, Germany, the UK and the U.S. on "how they navigate choice and control both online and offline" (attending in-person events in Chicago, Berlin, LA, Munich, San Diego, and Stuttgart): The survey, conducted by research agency YouGov, showcases a tension between people's desire to have control over their data and digital privacy, and the reality of the internet today — a reality defined by Big Tech platforms that make it difficult for people to exercise meaningful choice online:


— Only 16% feel in control of their privacy choices (highest in Germany at 21%)

— 24% feel it's "too late" because Big Tech already has too much control or knows too much about them. And 36% said the feeling of Big Tech companies knowing too much about them is frustrating — highest among respondents in the U.S. (43%) and the UK (40%)

— Practices respondents said frustrated them included Big Tech using their data to train AI without their permission (38%) and tracking their data without asking (47%; highest in U.S. — 55% and lowest in France — 39%)


And from our existing research on browser choice, we know how hard-to-change defaults and confusing settings can bury alternatives, limiting people's ability to choose for themselves — the real problem that fuels these dynamics.

Taken together our new and existing insights could also explain why, when asked which actions feel like the strongest expressions of their independence online, choosing not to share their data (44%) was among the top three responses in each country (46% in the UK; 45% in the U.S.; 44% in France; 39% in Germany)... We also see a powerful signal in how people think about choosing the communities and platforms they join — for 29% of respondents, this was one of their top three expressions of independence online.

"For Firefox, community has always been at the heart of what we do," says their VP of Global Marketing, "and we'll keep fighting to put real choice and control back in people's hands so the web once again feels like it belongs to the communities that shape it."

At TwitchCon in San Diego Firefox even launched a satirical new online card game with a privacy theme called Data War.
Education

'Colleges Oversold Education. Now They Must Sell Connection' (msn.com) 145

A tenured USC professor is arguing that universities need to fundamentally rethink their value proposition as AI rapidly closes the gap on human instruction and a loneliness epidemic grips the generation most likely to be sitting in their lecture halls. Eric Anicich, an associate professor at USC's Marshall School of Business, wrote in the Los Angeles Times that nearly three-quarters of 16- to 24-year-olds now report feeling lonely, young adults spend 70% less time with friends in person compared to two decades ago, and a growing majority of Gen Z college graduates say their degree was a "waste of money."

Anicich points to a recent Harvard study finding that students using an AI tutor learned more than twice as much as those in traditional active-learning classes, and did so in less time. The implication is stark: if instruction becomes abundant and cheap, colleges must sell what remains scarce -- genuine human community. He notes that his doctoral training included zero coursework on teaching, a norm he says persists across academia. His proposal: fund student life as seriously as research labs, hire professional "experience designers," and treat rituals and collaborative projects as core curriculum rather than amenities.
Science

Adolescence Lasts Into 30s - New Study Shows Four Pivotal Ages For Your Brain (bbc.com) 38

The brain goes through five distinct phases in life, with key turning points at ages nine, 32, 66 and 83, scientists have revealed. From a report: Around 4,000 people up to the age of 90 had scans to reveal the connections between their brain cells. Researchers at the University of Cambridge showed that the brain stays in the adolescent phase until our early thirties when we "peak." They say the results could help us understand why the risk of mental health disorders and dementia varies through life. The brain is constantly changing in response to new knowledge and experience -- but the research shows this is not one smooth pattern from birth to death.

Some people will reach these landmarks earlier or later than others -- but the researchers said it was striking how clearly these ages stood out in the data. These patterns have only now been revealed due to the quantity of brain scans available in the study, which was published in the journal Nature Communications.

Space

What's the Best Way for Humans to Explore Space? (noemamag.com) 95

Should we leave space exploration to robots — or prioritize human spaceflight, making us a multiplanetary species?

Harvard professor Robin Wordsworth, who's researched the evolution and habitability of terrestrial-type planets, shares his thoughts: In space, as on Earth, industrial structures degrade with time, and a truly sustainable life support system must have the capability to rebuild and recycle them. We've only partially solved this problem on Earth, which is why industrial civilization is currently causing serious environmental damage. There are no inherent physical limitations to life in the solar system beyond Earth — both elemental building blocks and energy from the sun are abundant — but technological society, which developed as an outgrowth of the biosphere, cannot yet exist independently of it. The challenge of building and maintaining robust life-support systems for humans beyond Earth is a key reason why a machine-dominated approach to space exploration is so appealing...

However, it's notable that machines in space have not yet accomplished a basic task that biology performs continuously on Earth: acquiring raw materials and utilizing them for self-repair and growth. To many, this critical distinction is what separates living from non-living systems... The most advanced designs for self-assembling robots today begin with small subcomponents that must be manufactured separately beforehand. Overall, industrial technology remains Earth-centric in many important ways. Supply chains for electronic components are long and complex, and many raw materials are hard to source off-world... If we view the future expansion of life into space in a similar way as the emergence of complex life on land in the Paleozoic era, we can predict that new forms will emerge, shaped by their changed environment, while many historical characteristics will be preserved. For machine technology in the near term, evolution in a more life-like direction seems likely, with greater focus on regenerative parts and recycling, as well as increasingly sophisticated self-assembly capabilities. The inherent cost of transporting material out of Earth's gravity well will provide a particularly strong incentive for this to happen.

If building space habitats is hard and machine technology is gradually developing more life-like capabilities, does this mean we humans might as well remain Earth-bound forever? This feels hard to accept because exploration is an intrinsic part of the human spirit... To me, the eventual extension of the entire biosphere beyond Earth, rather than either just robots or humans surrounded by mechanical life-support systems, seems like the most interesting and inspiring future possibility. Initially, this could take the form of enclosed habitats capable of supporting closed-loop ecosystems, on the moon, Mars or water-rich asteroids, in the mold of Biosphere 2. Habitats would be manufactured industrially or grown organically from locally available materials. Over time, technological advances and adaptation, whether natural or guided, would allow the spread of life to an increasingly wide range of locations in the solar system.

The article ponders the benefits (and the history) of both approaches — with some fascinating insights along the way.

"If genuine alien life is out there somewhere, we'll have a much better chance of comprehending it once we have direct experience of sustaining life beyond our home planet."
Nintendo

'Nintendo Has Too Many Apps' (theverge.com) 18

The Verge's Ash Parrish writes: Nintendo has released a new store app on Android and iOS giving users the ability to purchase hardware, accessories, and games for the Switch and Switch 2. When I open my phone and scroll down to the N's, I get a neat, full row dedicated entirely to Nintendo. That's four apps: the Switch app, the music app, the Nintendo Today news app, and now the store. (The tally increases to five if you're a parent using the Switch Parental Controls app.) And it is entirely too much.

Nintendo has always been the one company of the big three publishers that does its own thing, and that's worked both for and against it. The company hasn't chased development trends with the same zeal as Microsoft and Sony. That insulates Nintendo when those trends don't pan out, like exorbitant spending on live-service games that fail. But it also hurts the company when it comes to performance and user experience. Console-native voice chat, for example, has been a standard on other platforms for a long time, but was only offered on a Nintendo console with the Switch 2 this year.

With the deployment of these apps, Nintendo is both trying to innovate and playing catch-up with results that feel confusing and overwhelming. Do we really need four distinct apps? That's not to say these apps shouldn't exist; they serve valuable and necessary purposes. But when I look at all the programs I have to manage in my Nintendo life, it just feels like it's too much...
Further reading: Nintendo Won't Shy Away From Continuing To 'Try Anything'
AI

Should Workers Start Learning to Work With AI? (msn.com) 60

"My boss thinks AI will solve every problem and is wildly enthusiastic about it," complains a mid-level worker at a Fortune 500 company, who considers the technology "unproven and wildly erratic."

So how should they navigate the next 10 years until retirement, they ask the Washington Post's "Work Advice" columnist. The columnist first notes that "Despite promises that AI will eliminate tedious, 'low-value' tasks from our workload, many consumers and companies seem to be using it primarily as a cheap shortcut to avoid hiring professional actors, writers or artists — whose work, in some cases, was stolen to train the tools usurping them..." Kevin Cantera, a reader from Las Cruces, New Mexico [a writer for an education-tech company], willingly embraced AI for work. But as it turns out, he was training his replacement... Even without the "AI will take our jobs" specter, there's much to be wary of in the AI hype. Faster isn't always better. Parroting and predicting linguistic patterns isn't the same as creativity and innovation... There are concerns about hallucinations, faulty data models, and intentional misuse for purposes of deception. And that's not even addressing the environmental impact of all the power- and water-hogging data centers needed to support this innovation.

And yet, it seems, resistance may be futile. The AI genie is out of the bottle and granting wishes. And at the rate it's evolving, you won't have 10 years to weigh the merits and get comfortable with it. Even if you move on to another workplace, odds are AI will show up there before long. Speaking as one grumpy old Luddite to another, it might be time to get a little curious about this technology just so you can separate helpfulness from hype.

It might help to think of AI as just another software tool that you have to get familiar with to do your job. Learn what it's good for — and what it's bad at — so you can recommend guidelines for ethical and beneficial use. Learn how to word your wishes to get accurate results. Become the "human in the loop" managing the virtual intern. You can test the bathwater without drinking it. Focus on the little ways AI can accommodate and support you and your colleagues. Maybe it could handle small tasks in your workflow that you wish you could hand off to an assistant. Automated transcriptions and meeting notes could be a life-changer for a colleague with auditory processing issues.

I can't guarantee that dabbling in AI will protect your job. But refusing to engage definitely won't help. And if you decide it's time to change jobs, having some extra AI knowledge and experience under your belt will make you a more attractive candidate, even if you never end up having to use it.

AI

Fake AI-Generated Actress Gets Agent - and a Very Angry Reaction from (Human) Actors Union (yahoo.com) 99

A computer-generated actress appearing in Instagram shorts now has a talent agent, reports the Los Angeles Times.

The massive screen actors union SAG-AFTRA "weighed in with a withering response." SAG-AFTRA believes creativity is, and should remain, human-centered. The union is opposed to the replacement of human performers by synthetics.

To be clear, "Tilly Norwood" is not an actor, it's a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation. It has no life experience to draw from, no emotion and, from what we've seen, audiences aren't interested in watching computer-generated content untethered from the human experience. It doesn't solve any "problem" — it creates the problem of using stolen performances to put actors out of work, jeopardizing performer livelihoods and devaluing human artistry.

Additionally, signatory producers should be aware that they may not use synthetic performers without complying with our contractual obligations, which require notice and bargaining whenever a synthetic performer is going to be used.

"They are taking our professional members' work that has been created, sometimes over generations, without permission, without compensation and without acknowledgment, building something new," SAG-AFTRA President Sean Astin told the Los Angeles Times in an interview: "But the truth is, it's not new. It manipulates something that already exists, so the conceit that it isn't harming actors — because it is its own new thing — ignores the fundamental truth that it is taking something that doesn't belong to them," Astin said. "We want to allow our members to benefit from new technologies," Astin said. "They just need to know that it's happening. They need to give permission for it, and they need to be bargained with...."

Some actors called for a boycott of any agents who decide to represent Norwood. "Read the room, how gross," In the Heights actor Melissa Barrera wrote on Instagram. "Our members reserve the right to not be in business with representatives who are operating in an unfair conflict of interest, who are operating in bad faith," Astin said.

But this week the head of a new studio from startup Luma AI "said all the big companies and studios were working on AI-assisted projects," writes Deadline — and then claimed that "being under NDA, she was not in a position to announce any of the details."
AI

AI's 'Cheerful Apocalyptics': Unconcerned If AI Defeats Humanity (msn.com) 133

The book Life 3.0 recounts a 2017 conversation where Alphabet CEO Larry Page "made a 'passionate' argument for the idea that 'digital life is the natural and desirable next step' in 'cosmic evolution'," remembers an essay in the Wall Street Journal. "Restraining the rise of digital minds would be wrong, Page contended. Leave them off the leash and let the best minds win..."

"As it turns out, Larry Page isn't the only top industry figure untroubled by the possibility that AIs might eventually push humanity aside. It is a niche position in the AI world but includes influential believers. Call them the Cheerful Apocalyptics..." I first encountered such views a couple of years ago through my X feed, when I saw a retweet of a post from Richard Sutton. He's an eminent AI researcher at the University of Alberta who in March received the Turing Award, the highest award in computer science... [Sutton had said if AI becomes smarter than people — and then can be more powerful — why shouldn't it be?] Sutton told me AIs are different from other human inventions in that they're analogous to children. "When you have a child," Sutton said, "would you want a button that if they do the wrong thing, you can turn them off? That's much of the discussion about AI. It's just assumed we want to be able to control them." But suppose a time came when they didn't like having humans around? If the AIs decided to wipe out humanity, would he be at peace with that? "I don't think there's anything sacred about human DNA," Sutton said. "There are many species — most of them go extinct eventually. We are the most interesting part of the universe right now. But might there come a time when we're no longer the most interesting part? I can imagine that.... If it was really true that we were holding the universe back from being the best universe that it could, I think it would be OK..."

I wondered, how common is this idea among AI people? I caught up with Jaron Lanier, a polymathic musician, computer scientist and pioneer of virtual reality. In an essay in the New Yorker in March, he mentioned in passing that he had been hearing a "crazy" idea at AI conferences: that people who have children become excessively committed to the human species. He told me that in his experience, such sentiments were staples of conversation among AI researchers at dinners, parties and anyplace else they might get together. (Lanier is a senior interdisciplinary researcher at Microsoft but does not speak for the company.) "There's a feeling that people can't be trusted on this topic because they are infested with a reprehensible mind virus, which causes them to favor people over AI when clearly what we should do is get out of the way." We should get out of the way, that is, because it's unjust to favor humans — and because consciousness in the universe will be superior if AIs supplant us. "The number of people who hold that belief is small," Lanier said, "but they happen to be positioned in stations of great influence. So it's not something one can ignore...."

You may be thinking to yourself: If killing someone is bad, and if mass murder is very bad, then the extinction of humanity must be very, very bad — right? What this fails to understand, according to the Cheerful Apocalyptics, is that when it comes to consciousness, silicon and biology are merely different substrates. Biological consciousness is of no greater worth than the future digital variety, their theory goes... While the Cheerful Apocalyptics sometimes write and talk in purely descriptive terms about humankind's future doom, two value judgments in their doctrines are unmissable. The first is a distaste, at least in the abstract, for the human body. Rather than seeing its workings as awesome, in the original sense of inspiring awe, they view it as a slow, fragile vessel, ripe for obsolescence... The Cheerful Apocalyptics' larger judgment is a version of the age-old maxim that "might makes right"...

Education

The School That Replaces Teachers With AI (joincolossus.com) 124

Long-time Slashdot reader theodp writes: CBS News has a TL;DR video report, but Jeremy Stern's earlier epic Class Dismissed [at Colossus.com] offers a deep dive into Alpha School, "the teacherless, homeworkless, K-12 private school in Austin, Texas, where students have been testing in the top 0.1% nationally by self-directing coursework with AI tutoring apps for two hours a day.

"Alpha students are incentivized to complete coursework to 'mastery-level' (i.e., scoring over 90%) in only two hours via a mix of various material and immaterial rewards, including the right to spend the other four hours of the school day in 'workshops,' learning things like how to run an Airbnb or food truck, manage a brokerage account or Broadway production, or build a business or drone."

Founder MacKenzie Larson's dream that "kids must love school so much they don't want to go on vacation" drew the attention of — and investments of money and time from — mysterious tech billionaire Joe Liemandt, who sent his own kids to Larson's school and now aims to bring the experience to the rest of the world. "When GenAI hit in 2022," Liemandt said, "I took a billion dollars out of my software company. I said, 'Okay, we're going to be able to take MacKenzie's 2x in 2 hours groundwork and get it out to a billion kids.' It's going to cost more than that, but I could start to figure it out. It's going to happen. There's going to be a tablet that costs less than $1,000 that is going to teach every kid on this planet everything they need to know in two hours a day and they're going to love it.

"I really do think we can transform education for everybody in the world. So that's my next 20 years. I literally wake up now and I'm like, I'm the luckiest guy in the world. I will work 7 by 24 for the next 20 years to fricking do this. The greatest 20 years of my life are right ahead of me. I don't think I'm going to lose. We're going to win."

Of course, Stern writes at Colossus.com, there will be questions about this model of schooling. But he asks: "Suppose that from kindergarten through 12th grade, your child's teachers were, in essence, stacks of machines. Suppose those machines unlocked more of your child's academic potential than you knew was possible, and made them love school. Suppose the schooling they loved involved vision monitoring and personal data capture. Suppose that surveillance architecture enabled them to outperform your wildest expectations on standardized tests, and in turn gave them self-confidence and self-esteem, and made their own innate potential seem limitless.... Suppose poor kids had a reason to believe and a way to show they're just as academically capable as rich kids, and that every student on Earth could test in what we now consider the top 10%. Suppose it allowed them to spend two-thirds of their school day on their own interests and passions. Suppose your child's deep love of school minted a new class of education billionaires.

"If you shrink from such a future, by which principle would you justify stifling it?"

Intel

Intel Ousts CEO of Products, Ending 30-Year Career (tomshardware.com) 22

An anonymous reader quotes a report from Tom's Hardware: Intel has removed its chief executive officer of products, Michelle Johnston Holthaus, as part of a major shake-up of the executive branch of the embattled chip firm, according to Reuters. This is part of new CEO Lip-Bu Tan's plan to reshape the company under his leadership, flattening the leadership structure so he makes more of the important decisions about day-to-day operation. [...] Holthaus is the latest high-profile figure at Intel to get the axe, ending a 30-year career at the company after a mere 10 months in her CEO of products role, plus a temporary stint as co-CEO after the previous CEO, Pat Gelsinger, suddenly left in 2024. "Throughout her incredible career, Michelle has transformed major businesses, built high-performing teams and worked to delight our customers," Tan said in a statement. "She has made a lasting impact on our company and inspired so many of us with her leadership. We are grateful for all Michelle has given Intel and wish her the best."

Intel has said Holthaus will remain with the company in an advisory role, but her position will not be filled by anyone else. What Intel is doing, though, is bringing in executives from elsewhere, including one who worked at Tan's previous endeavour, Cadence. Srinivasan Iyengar joined the company in June and will take on the role of head of a new central engineering division. This group will focus on developing a new custom silicon business for external customers. Although Intel's fabrication business has been one of its worst-performing in recent years, and there are still talks of it selling large portions of it, it's found a new lease of life following U.S. government investment and Tan's leadership. With Iyengar's new role, though, it's possible we'll see Intel designing chips for customers, rather than merely producing them. That could see it compete against the likes of Broadcom and Marvell. With Tan pushing for a faster, leaner business overall, Iyengar will report directly to him in his new role. Intel also announced that it had acquired the services of Kevork Kechichian, former executive vice president of solutions engineering at Arm. He'll head Intel's datacenter group, bringing years of experience from Arm, NXP Semiconductors, and Qualcomm.

Slashdot Top Deals