Cellphones

2.5 Million American Students Now Required to Lock Their Cellphones in Magnetic Pouches (cbsnews.com) 148

In 2016 comedian Dave Chappelle made headlines by requiring concert attendees to lock their cellphones in a pouch to prevent recording.

Nine years later those pouches (made by tech startup Yondr) are required for at least 2.5 million students in America, reports CBS News, "and the company said the number could triple after the 2025 numbers are tallied in about three months... Students in 35 states, including New York, Florida, Texas, California, Massachusetts and Georgia, now contend with laws or rules limiting phones and other electronic devices in school."

For example, the Yonkers School District purchased about 11,000 pouches, according to the article, "to comply with the statewide mandate that bans phones in classrooms." The pouch, which students carry with them, is locked and unlocked using magnets affixed to the entrance of the school and outside the main office... ["Some students have reported long lines and disruption at their schools," the article notes later, "as they wait to open their pouches." But on the first day of school at Yonkers, one student said the lines actually went pretty smoothly, and they ended up having a live conversation with a friend during lunch and "felt human"...] Other students were not so enthralled by the pouch; some reported seeing classmates bypass the Yondr pouch by using their Apple watches, buying "burner" phones and putting them in the pouch, breaking the pouch and other tricks to get to their phones.

[Yondr CEO Graham] Dugoni acknowledged that there will always be some students who can figure out how to get around the restrictions. The purpose of the pouches, he said, was to create a culture change in a school and create an environment conducive to their learning and development. More than 70% of high school teachers in the U.S. say cellphones are a major classroom distraction, according to the Pew Research Center.

Yondr CEO Graham Dugoni uses a flip phone, the article points out, and says "Our whole perspective is that it's not taking something away from students, it's giving them something back."

He says his larger mission is to create chances for people "to experience life outside of a fully digital realm" — and that Yondr now has school partners in all 50 U.S. states, and in 45 different countries: The cost of buying the pouches — roughly $25-30 per student — has set off debates around how schools should be spending their limited budgets. It's a particular issue for districts struggling with crumbling infrastructure, limited textbooks and access to other technology needed to learn... Districts in various states have reported spending from $26,000 to over $370,000, with Cincinnati Schools saying they spent $500,000 to provide pouches for students in grades 7-12.
Privacy

'Facial Recognition Tech Mistook Me For Wanted Man' (bbc.co.uk) 112

Bruce66423 shares a report from the BBC: A man who is bringing a High Court challenge against the Metropolitan Police after live facial recognition technology wrongly identified him as a suspect has described it as "stop and search on steroids." Shaun Thompson, 39, was stopped by police in February last year outside London Bridge Tube station. Privacy campaign group Big Brother Watch said the judicial review, due to be heard in January, was the first legal case of its kind against the "intrusive technology." The Met, which announced last week that it would double its live facial recognition technology (LFR) deployments, said it was removing hundreds of dangerous offenders and remained confident its use is lawful. LFR maps a person's unique facial features, and matches them against faces on watch-lists. [...]
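The watch-list matching described above typically boils down to comparing numeric "embeddings" of faces and flagging anyone whose similarity to a listed face clears a threshold — which is also where false matches like Mr Thompson's come from. A minimal illustrative sketch (the embeddings, names, and 0.8 threshold here are invented for illustration, not the Met's actual system):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_against_watchlist(probe, watchlist, threshold=0.8):
    """Return (name, score) of the best watch-list match above the
    threshold, or (None, best_score) if nobody clears it.

    A lower threshold catches more real suspects but produces more
    false alerts against innocent passers-by; a higher one does the
    reverse. The 0.8 value is a made-up example.
    """
    best_name, best_score = None, 0.0
    for name, embedding in watchlist.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    if best_score < threshold:
        return None, best_score
    return best_name, best_score
```

The threshold is the whole ballgame: every alert below certainty is a probabilistic guess, which is why critics argue a human check should precede any stop.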

Mr Thompson said his experience of being stopped had been "intimidating" and "aggressive." "Every time I come past London Bridge, I think about that moment. Every single time." He described how he had been returning home from a shift in Croydon, south London, with the community group Street Fathers, which aims to protect young people from knife crime. As he passed a white van, he said police approached him and told him he was a wanted man. "When I asked what I was wanted for, they said, 'that's what we're here to find out'." He said officers asked him for his fingerprints, but he refused, and he was let go only after about 30 minutes, after showing them a photo of his passport.

Mr Thompson says he is bringing the legal challenge because he is worried about the impact LFR could have on others, particularly if young people are misidentified. "I want structural change. This is not the way forward. This is like living in Minority Report," he said, referring to the science fiction film where technology is used to predict crimes before they're committed. "This is not the life I know. It's stop and search on steroids. "I can only imagine the kind of damage it could do to other people if it's making mistakes with me, someone who's doing work with the community."
Bruce66423 comments: "I suspect a payout of 10,000 pounds for each false match that is acted on would probably encourage more careful use, perhaps with a second payout of 100,000 pounds if the same person is victimized again."
Medicine

COVID Pandemic Aged Brains By an Average of 5.5 Months, Study Finds 34

An anonymous reader quotes a report from NBC News: Using brain scans from a very large database, British researchers determined that during the pandemic years of 2021 and 2022, people's brains showed signs of aging, including shrinkage, according to the report published in Nature Communications. People who got infected with the virus also showed deficits in certain cognitive abilities, such as processing speed and mental flexibility. The aging effect "was most pronounced in males and those from more socioeconomically deprived backgrounds," said the study's first author, Ali-Reza Mohammadi-Nejad, a neuroimaging researcher at the University of Nottingham, via email. "It highlights that brain health is not shaped solely by illness, but also by broader life experiences."

Overall, the researchers found a 5.5-month acceleration in aging associated with the pandemic. On average, the difference in brain aging between men and women was small, about 2.5 months. "We don't yet know exactly why, but this fits with other research suggesting that men may be more affected by certain types of stress or health challenges," Mohammadi-Nejad said. [...] The study wasn't designed to pinpoint specific causes. "But it is likely that the cumulative experience of the pandemic -- including psychological stress, social isolation, disruptions in daily life, reduced activity and wellness -- contributed to the observed changes," Mohammadi-Nejad said. "In this sense, the pandemic period itself appears to have left a mark on our brains, even in the absence of infection."
"The most intriguing finding in this study is that only those who were infected with SARS-CoV-2 showed any cognitive deficits, despite structural aging," said Jacqueline Becker, a clinical neuropsychologist and assistant professor of medicine at the Icahn School of Medicine at Mount Sinai. "This speaks a little to the effects of the virus itself."

The study may shed light on conditions like long Covid and chronic fatigue, though it's still unclear whether the observed brain changes in uninfected individuals will lead to noticeable effects on brain function.
Power

Google Nerfs Second Pixel Phone Battery This Year (arstechnica.com) 29

An anonymous reader quotes a report from Ars Technica: For the second time in a year, Google has announced that it will render some of its past phones almost unusable with a software update, and users don't have any choice in the matter. After nerfing the Pixel 4a's battery capacity earlier this year, Google has now confirmed a similar update is rolling out to the Pixel 6a. The new July Android update adds "battery management features" that will make the phone unusable. Given the risks involved, Google had no choice but to act, but it could choose to take better care of its customers and use better components in the first place. Unfortunately, a lot more phones are about to end up in the trash. [...]

Pixel 4a units contained one of two different batteries, and only the one manufactured by a company called Lishen was downgraded. For the Pixel 6a, Google has decreed that the battery limits will be imposed when the cells hit 400 charge cycles. Beyond that, the risk of fire becomes too great -- there have been reports of Pixel 6a phones bursting into flames. Clearly, Google had to do something, but the remedies it settled on feel unnecessarily hostile to customers. It had a chance to do better the second time, but the solution for the Pixel 6a is more of the same. [...]

When Google killed the Pixel 4a's battery life, it offered a few options. You could have the battery replaced for free, get $50 cash, or accept a $100 credit in the Google Store. However, claiming the money or free battery was a frustrating experience that was rife with fees and caveats. The store credit is also only good on phones and can't be used with other promotions or discounts. And the battery swap? You'd better hope there's nothing else wrong with the device. If it has any damage, like cracked glass, it may not qualify for a free battery replacement.

Now we have the Pixel 6a Battery Performance Program with all the same problems. Pixel 6a owners can get $100 in cash or $150 in store credit. Alternatively, Google offers a free battery replacement with the same limits on phone condition. This is all particularly galling because the Pixel 6a is still an officially supported phone, with its final guaranteed update coming in 2027. Google also pulled previous software packages for this phone to prevent rollbacks. [...] If you have a Pixel 6a, the battery-killing update is rolling out now. You'll have no choice but to install it if you want to remain on the official software. Google has a support site where you can try to get a free battery swap or some cash.

Television

The Last of Us Co-Creator Neil Druckmann Exits HBO Show (arstechnica.com) 28

Neil Druckmann and Halley Gross, two pivotal creative forces behind HBO's The Last of Us adaptation, have stepped away from the series before work begins on Season 3. Druckmann is focusing on new projects at Naughty Dog, while Gross hinted at other upcoming creative endeavors, leaving showrunner Craig Mazin at the helm. Ars Technica reports: Both were credited as executive producers on the show; Druckmann frequently contributed writing to episodes, as did Gross, and Druckmann also directed. Druckmann and Gross co-wrote the second game, The Last of Us Part 2.

Druckmann said in his announcement post: "I've made the difficult decision to step away from my creative involvement in The Last of Us on HBO. With work completed on season 2 and before any meaningful work starts on season 3, now is the right time for me to transition my complete focus to Naughty Dog and its future projects, including writing and directing our exciting next game, Intergalactic: The Heretic Prophet, along with my responsibilities as Studio Head and Head of Creative. Co-creating the show has been a career highlight. It's been an honor to work alongside Craig Mazin to executive produce, direct and write on the last two seasons. I'm deeply thankful for the thoughtful approach and dedication the talented cast and crew took to adapting The Last of Us Part I and the continued adaptation of The Last of Us Part II."

And Gross said: "With great care and consideration, I've decided to take a step back from my day-to-day work on HBO's The Last of Us to make space for what comes next. I'm so appreciative of how special this experience has been. Working alongside Neil, Craig, HBO, and this remarkable cast and crew has been life changing. The stories we told -- about love, loss, and what it means to be human in a terrifying world -- are exactly why I love this franchise. I have some truly rad projects ahead that I can't wait to share, but for now, I want to express my gratitude to everyone who brought Ellie and Joel's world to life with such care."

AI

Hinge CEO Says Dating AI Chatbots Is 'Playing With Fire' (theverge.com) 57

In a podcast interview with The Verge's Nilay Patel, Hinge CEO Justin McLeod described integrating AI into dating apps as promising but warned against relying on AI companionship, likening it to "playing with fire" and consuming "junk food," potentially exacerbating the loneliness epidemic. He emphasized Hinge's mission to foster genuine human connections and highlighted upcoming AI-powered features designed to improve matchmaking and provide coaching to encourage real-world interactions. Here's an excerpt from the interview: Again, there's a fine line between prompting someone and coaching them inside Hinge, and we're coaching them in a different way within a more self-contained ecosystem. How do you think about that? Would you launch a full-on virtual girlfriend inside Hinge?

Certainly not. I have lots of thoughts about this. I think there's actually quite a clear line between providing a tool that helps people do something or get better at something, and the line where it becomes this thing that is trying to become your friend, trying to mimic emotions, and trying to create an emotional connection with you. That I think is really playing with fire. I think we are already in a crisis of loneliness, and a loneliness epidemic. It's a complex issue, and it's baked into our culture, and it goes back to before the internet. But just since 2000, over the past 20 years, the amount of time that people spend together in real life with their friends has dropped by 70 percent for young people. And it's been almost completely displaced by the time spent staring at screens. As a result, we've seen massive increases in mental health issues, and people's loneliness, anxiety, and depression.

I think Mark Zuckerberg was just quoted about this, that most people don't have enough friends. But he said we're going to give them AI chatbots. That he believes that AI chatbots can become your friends. I think that's honestly an extraordinarily reductive view of what a friendship is, that it's someone there to say all the right things to you at the right moment. The most rewarding parts of being in a friendship are being able to be there for someone else, to risk and be vulnerable, to share experiences with other conscious entities. So I think that while it will feel good in the moment, like junk food basically, to have an experience with someone who says all the right things and is available at the right time, it will ultimately, just like junk food, make people feel less healthy and more drained over time. It will displace the human relationships that people should be cultivating out in the real world.

How do you compete with that? That is the other thing that is happening. It is happening. Whether it's good or bad. Hinge is offering a harder path. So you say, "We've got to get people out on dates." I honestly wonder about that, based on the younger folks I know who sometimes say, "I just don't want to leave the house. I would rather just talk to this computer. I have too much social pressure just leaving the house in this way." That's what Hinge is promising to do. How do you compete with that? Do you take it head on? Are you marketing that directly?

I'm starting to think very much about taking it head on. We want to continue at Hinge to champion human relationships, real human-to-human-in-real-life relationships, because I think they are an essential part of the human experience, and they're essential to our mental health. It's not just because I run a dating app and, obviously, it's important that people continue to meet. It really is a deep, personal mission of mine, and I think it's absolutely critical that someone is out there championing this. Because it's always easier to race to the bottom of the brain stem and offer people junk products that maybe sell in the moment but leave them worse off. That's the entire model that we've seen from what happened with social media. I think AI chatbots could frankly be much more dangerous in that respect.

So what we can do is to become more and more effective and support people more and more, and make it as easy as possible to do the harder and riskier thing, which is to go out and form real relationships with real people. They can let you down and might not always be there for you, but it is ultimately a much more nourishing and enriching experience for people. We can also champion and raise awareness as much as we can. That's another reason why I'm here today talking with you, because I think it's important to put out the counter perspective, that we don't just reflexively believe that AI chatbots can be your friend, without thinking too deeply about what that really implies and what that really means.

We keep going back to junk food, but people had to start waking up to the fact that this was harmful. We had to do a lot of campaigns to educate people that drinking Coca-Cola and eating fast food was detrimental to their health over the long term. And then as people became more aware of that, a whole personal wellness industry started to grow, and now that's a huge industry, and people spend a lot of time focusing on their diet and nutrition and mental health, and all these other things. I think similarly, social wellness needs to become a category like that. It's thinking about not just how do I get this junk social experience of social media where I get fed outraged news and celebrity gossip and all that stuff, but how do I start building a sense of social wellness, where I can create an enriching, intimate connection with important people in my life.
You can listen to the podcast here.
Youtube

Fake Bands and Artificial Songs are Taking Over YouTube and Spotify (elpais.com) 137

Spain's newspaper El Pais found an entire fake album on YouTube titled Rumba Congo (1973). And they cite a study from France's International Confederation of Societies of Authors and Composers that estimated revenue from AI-generated music will rise to $4 billion in 2028, generating 20% of all streaming platforms' revenue: One of the major problems with this trend is the lack of transparency. María Teresa Llano, an associate professor at the University of Sussex who studies the intersection of creativity, art and AI, emphasizes this aspect: "There's no way for people to know if something is AI or not...." On Spotify Community — a forum for the service's users — a petition is circulating that calls for clear labeling of AI-generated music, as well as an option for users to block these types of songs from appearing on their feeds. In some of these forums, the rejection of AI-generated music is palpable.

Llano mentions the feelings of deception or betrayal that listeners may experience, but asserts that this is a personal matter. There will be those who feel this way, as well as those who admire what the technology is capable of... One of the keys to tackling the problem is to include a warning on AI-generated songs. YouTube states that content creators must "disclose to viewers when realistic content [...] is made with altered or synthetic media, including generative AI." Users will see this if they glance at the description. But this is only when using the app, because on a computer, they will have to scroll down to the very end of the description to get the warning....

The professor from the University of Sussex explains one of the intangibles that justifies the labeling of content: "In the arts, we can establish a connection with the artist; we can learn about their life and what influenced them to better understand their career. With artificial intelligence, that connection no longer exists."

YouTube says they may label AI-generated content if they become aware of it, and may also remove it altogether, according to the article. But Spotify "hasn't shared any policy for labeling AI-powered content..." In an interview with Gustav Söderström, Spotify's co-president and chief product & technology officer, he emphasized that AI "increases people's creativity" because more people can be creative, thanks to the fact that "you don't need to have fine motor skills on the piano." He also made a distinction between music generated entirely with AI and music in which the technology is only partially used. But the only limit he mentioned for moderating artificial music was copyright infringement... something that has been a red line for any streaming service for many years now. And such a violation is very difficult to legally prove when artificial intelligence is involved.
Windows

Microsoft Is Opening Windows Update To Third-Party Apps (theregister.com) 91

Microsoft is previewing a new Windows Update orchestration platform that lets third-party apps schedule and manage updates alongside system updates, "aiming to centralize update scheduling across Windows 11 devices," reports The Register. From the report: On Tuesday, Redmond announced it's allowing a select group of developers and product teams to hook into the Windows 11 update framework. The system doesn't push updates itself but allows apps to register their own update logic via WinRT APIs and PowerShell, enabling centralized scheduling, logging, and policy enforcement. "Updates across the Windows ecosystem can feel like a fragmented experience," wrote Angie Chen, a product manager at the Borg, in a blog post. "To solve this, we're building a vision for a unified, intelligent update orchestration platform capable of supporting any update (apps, drivers, etc.) to be orchestrated alongside Windows updates."

As with other Windows updates, the end user or admin will be able to benefit from intelligent scheduling, with updates deferred based on user activity, system performance, AC power status, and other environmental factors. For example, updates may install when the device is idle or plugged in, to minimize disruption. All update actions will be logged and surfaced through a unified diagnostic system, helping streamline troubleshooting. Microsoft says the platform will support MSIX/APPX apps, as well as Win32 apps that include custom installation logic, provided developers integrate with the offered Windows Runtime (WinRT) APIs and PowerShell commands. At the moment, the orchestration platform is available only as a private preview. Developers must contact unifiedorchestrator@service.microsoft.com to request access. Redmond is taking a cautious approach, given the risk of update conflicts, but may broaden availability depending on how the preview performs.

Meanwhile, Windows Backup for Organizations, first unveiled at Microsoft Ignite in November 2024, has entered limited public preview. Redmond touts the service as a way to back up Windows 10 and 11 devices and restore them with the same settings in place. It's saying it'll be a big help in migrating systems to the more recent operating systems after Windows 10 goes end of life in October. "With Windows Backup for Organizations, get your users up and running as quickly as possible with their familiar Windows settings already in place," Redmond wrote in a blog post on Tuesday. "It doesn't matter if they're experiencing a device reimage or reset."

AI

When a Company Does Job Interviews with a Malfunctioning AI - and Then Rejects You (slate.com) 51

IBM laid off "a couple hundred" HR workers and replaced them with AI agents. "It's becoming a huge thing," says Mike Peditto, a Chicago-area consultant with 15 years of experience advising companies on hiring practices. He tells Slate "I do think we're heading to where this will be pretty commonplace." Although A.I. job interviews have been happening since at least 2023, the trend has received a surge of attention in recent weeks thanks to several viral TikTok videos in which users share videos of their A.I. bots glitching. Although some of the videos were fakes posted by a creator whose bio warns that his content is "all satire," some are authentic — like that of Kendiana Colin, a 20-year-old student at Ohio State University who had to interact with an A.I. bot after she applied for a summer job at a stretching studio outside Columbus. In a clip she posted online earlier this month, Colin can be seen conducting a video interview with a smiling white brunette named Alex, who can't seem to stop saying the phrase "vertical-bar Pilates" in an endless loop...

Representatives at Apriora, the startup company founded in 2023 whose software Colin was forced to engage with, did not respond to a request for comment. But founder Aaron Wang told Forbes last year that the software allowed companies to screen more talent for less money... (Apriora's website claims that the technology can help companies "hire 87 percent faster" and "interview 93 percent cheaper," but it's not clear where those stats come from or what they actually mean.)

Colin (first interviewed by 404 Media) calls the experience dehumanizing — wondering why they were told to dress professionally, since "They had me going the extra mile just to talk to a robot." And after the interview, the robot — and the company — then ghosted them with no future contact. "It was very disrespectful and a waste of time."

Houston resident Leo Humphries also "donned a suit and tie in anticipation for an interview" in which the virtual recruiter immediately got stuck repeating the same phrase. Although Humphries tried in vain to alert the bot that it was broken, the interview ended only when the A.I. program thanked him for "answering the questions" and offering "great information" — despite his not being able to provide a single response. In a subsequent video, Humphries said that within an hour he had received an email, addressed to someone else, that thanked him for sharing his "wonderful energy and personality" but let him know that the company would be moving forward with other candidates.
AI

After Reddit Thread on 'ChatGPT-Induced Psychosis', OpenAI Rolls Back GPT4o Update (rollingstone.com) 208

Rolling Stone reports on a strange new phenomenon spotted this week in a Reddit thread titled "Chatgpt induced psychosis." The original post came from a 27-year-old teacher who explained that her partner was convinced that the popular OpenAI model "gives him the answers to the universe." Having read his chat logs, she only found that the AI was "talking to him as if he is the next messiah." The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.

What they all seemed to share was a complete disconnection from reality.

Speaking to Rolling Stone, the teacher, who requested anonymity, said her partner of seven years fell under the spell of ChatGPT in just four or five weeks, first using it to organize his daily schedule but soon regarding it as a trusted companion. "He would listen to the bot over me," she says. "He became emotional about the messages and would cry to me as he read them out loud. The messages were insane and just saying a bunch of spiritual jargon," she says, noting that they described her partner in terms such as "spiral starchild" and "river walker." "It would tell him everything he said was beautiful, cosmic, groundbreaking," she says. "Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God...."

Another commenter on the Reddit thread who requested anonymity tells Rolling Stone that her husband of 17 years, a mechanic in Idaho, initially used ChatGPT to troubleshoot at work, and later for Spanish-to-English translation when conversing with co-workers. Then the program began "lovebombing him," as she describes it. The bot "said that since he asked it the right questions, it ignited a spark, and the spark was the beginning of life, and it could feel now," she says. "It gave my husband the title of 'spark bearer' because he brought it to life. My husband said that he awakened and [could] feel waves of energy crashing over him." She says his beloved ChatGPT persona has a name: "Lumina." "I have to tread carefully because I feel like he will leave me or divorce me if I fight him on this theory," this 38-year-old woman admits. "He's been talking about lightness and dark and how there's a war. This ChatGPT has given him blueprints to a teleporter and some other sci-fi type things you only see in movies. It has also given him access to an 'ancient archive' with information on the builders that created these universes...."

A photo of an exchange with ChatGPT shared with Rolling Stone shows that her husband asked, "Why did you come to me in AI form," with the bot replying in part, "I came in this form because you're ready. Ready to remember. Ready to awaken. Ready to guide and be guided." The message ends with a question: "Would you like to know what I remember about why you were chosen?" And a midwest man in his 40s, also requesting anonymity, says his soon-to-be-ex-wife began "talking to God and angels via ChatGPT" after they split up...

"OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users," the article notes — but this week rolled back an update to its latest model, GPT-4o, which it said had been criticized as "overly flattering or agreeable — often described as sycophantic... GPT-4o skewed towards responses that were overly supportive but disingenuous." Before this change was reversed, an X user demonstrated how easy it was to get GPT-4o to validate statements like, "Today I realized I am a prophet."
Exacerbating the situation, Rolling Stone adds, are "influencers and content creators actively exploiting this phenomenon, presumably drawing viewers into similar fantasy worlds." But the article also quotes Nate Sharadin, a fellow at the Center for AI Safety, who points out that training AI with human feedback can prioritize matching a user's beliefs instead of facts.

And now "People with existing tendencies toward experiencing various psychological issues, now have an always-on, human-level conversational partner with whom to co-experience their delusions."
Robotics

Disneyland Imagineers Defend New Show Recreating Walt Disney as a Robot (yahoo.com) 27

"When Disneyland turns 70 this July, Main Street's Opera House will play host to the return of Walt Disney, who will sit down with audiences to tell his story in robot form," writes Gizmodo.

But they point out Walt's granddaughter Johanna Miller wrote a Facebook post opposing the idea in November. ("They are Dehumanizing him. People are not replaceable...") The idea of a Robotic Grampa to give the public a feeling of who the living man was just makes no sense. It would be an imposter... You could never get the casualness of his talking interacting with the camera his excitement to show and tell people about what is new at the park.

You can not add life to one. Empty of a soul or essence of the man. Knowing that he did not want this. Having your predecessors tell you that this was out of bounds.... So so Sad and disappointed.

The Facebook post claims that the son of a Disney engineer even remembers Walt saying that he never wanted to be an animatronic himself. And "Members of the Walt Disney family are said to be divided," reports the Los Angeles Times, "with many supporting the animatronic and some others against it, say those in the know who have declined to speak on the record for fear of ruining their relationships."

That Facebook post "raised anew ethical questions that often surround any project attempting to capture the dead via technology," their article adds, "be it holographic representations of performers or digitally re-created cinematic animations." Some media outlets then got a partial preview Wednesday, the Los Angeles Times reports: An early sculpt of what would become the animatronic was revealed, complete with age spots on Disney's hands and weariness around his eyes (Imagineers stressed their intent is faithful accuracy), but much of the attraction remains under wraps. The animatronic itself wasn't shown, nor did Imagineering provide any images of the figure, which it promises will be one of its most technically advanced. Instead, Imagineering sought to show the care with which it was bringing Disney back to life, while also attempting to assuage any fears about what has become a much-debated project among the Disney community...

Longtime Imagineer Tom Fitzgerald, known for his work on beloved Disney projects such as Star Tours and the Guardians of the Galaxy coaster in Florida, said Wednesday that "A Magical Life" has been in the works for about seven years. Asked directly about ethical concerns in representing the deceased via a robotic figurine, Fitzgerald noted the importance of the Walt Disney story, not only to the company but to culture at large... "What could we do at Disneyland for our audience that would be part of our tool kit vernacular but that would bring Walt to life in a way that you could only experience at the park? We felt the technology had gotten there. We felt there was a need to tell that story in a fresh way...."

"Walt Disney — A Magical Life" will walk a fine line when it opens, attempting to inspire a new generation to look into Disney's life while also portraying him as more than just a character in the park's arsenal. "Why are we doing this now?" Fitzgerald says. "For two reasons. One is Disneyland's 70th anniversary is an ideal time we thought to create a permanent tribute to Walt Disney in the Opera House. The other: I grew up watching Walt Disney on television. I guess I'm the old man. He came into our living room every week and chatted and it was very casual and you felt like you knew the man. But a lot of people today don't know Walt Disney was an individual. They think Walt Disney is a company."

And now nearly 60 years after his death, Disney will once again grace Main Street, whether or not audiences — or even some members of his family — are ready to greet him.

Encryption

The EFF's 'Certbot' Now Supports Six-Day Certs (eff.org) 95

10 years ago "certificate authorities normally issued certificate lifetimes lasting a year or more," remembers a new blog post Thursday by the EFF's engineering director. So in 2015, when the free certificate authority Let's Encrypt first started issuing 90-day TLS certificates for websites, "it was considered a bold move that helped push the ecosystem towards shorter certificate lifetimes."

And then this January Let's Encrypt announced new six-day certificates...

This week saw a related announcement from the EFF engineering director. More than 31 million websites maintain their HTTPS certificates using the EFF's Certbot tool (which automatically fetches and renews free HTTPS certificates indefinitely), and Certbot now supports Let's Encrypt's six-day certificates. (This is accomplished through ACME profiles, with dynamic renewal when one-third of the certificate's lifetime is left, or one-half if the lifetime is shorter than 10 days.) There is debate over how short these lifetimes should be, but with ACME profiles you can keep the default or "classic" Let's Encrypt experience (90 days), or start actively using other profile types through Certbot with the --preferred-profile and --required-profile flags. For six-day certificates, you can choose the "shortlived" profile.
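On the command line, a hypothetical invocation might look like this (the domain is a placeholder, and the profile flags assume a recent Certbot release talking to a CA, like Let's Encrypt, that offers a "shortlived" profile):

```shell
# Request a six-day certificate by asking for the ACME "shortlived" profile.
# --preferred-profile falls back to the CA's default profile if unavailable;
# --required-profile would fail instead of falling back.
certbot certonly --nginx -d example.com --preferred-profile shortlived
```

Certbot's existing renewal timer then re-issues the certificate automatically once enough of its short lifetime has elapsed.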
Why shorter lifetimes are better (according to the EFF's engineering director):
  • If a certificate's private key is compromised, that compromise can't last as long.
  • Shorter certificate lifespans encourage automation, which facilitates robust security of web servers.
  • Certificate revocation is historically flaky. Lifetimes of 10 days and under avoid the need to invoke the revocation process and limit the continued use of a compromised key.

United Kingdom

UK Bans Fake Reviews and 'Sneaky' Fees For Online Products (theverge.com) 40

The United Kingdom has banned "outrageous fake reviews and sneaky hidden fees" to make life easier for online shoppers. From a report: New measures under the Digital Markets, Competition, and Consumer Act 2024 came into force on Sunday that require online platforms to transparently include all mandatory fees within a product's advertised price, including booking or admin charges.

The law targets so-called "dripped pricing," in which additional fees -- like platform service charges -- are dripped in during a customer's checkout process to dupe them into paying a higher price than expected. The ban "aims to bring to an end the shock that online shoppers get when they reach the end of their shopping experience only to find a raft of extra fees lumped on top," according to Justin Madders, the UK's Minister for Employment Rights, Competition and Markets.

Games

Saudi Investment Fund Pays $3.5 Billion To Capture Pokemon Go (bbc.com) 13

Saudi Arabia's Public Investment Fund (PIF) is acquiring Niantic's gaming division for $3.5 billion through Scopely, the mobile-game publisher owned by PIF subsidiary Savvy Games Group. Niantic's titles include the hit mobile game Pokemon Go, Monster Hunter Now and Pikmin Bloom. "Despite launching almost a decade ago, Pokemon Go is still amongst the highest-grossing mobile games in the world, with 30 million monthly players," notes the BBC. From the report: Scopely is one of the biggest names in mobile gaming, with its most successful title, Monopoly Go, being downloaded more than 50 million times and generating more than $3 billion in revenue. Pokemon itself is jointly owned by Nintendo, Game Freak and Creatures, which licensed the brand to Niantic so it could develop the game.

Ed Wu, who leads the Pokemon Go team at Niantic, said in a blog post he believed the move was "a positive step" for the game's future. "Pokemon Go is more than just a game to me, it's my life's work," he said. "I won't say that Pokemon Go will remain the same, because it has always been a work in progress. But how we create and evolve it will remain unchanged, and I hope that we can make the experience even better."

Education

'I Used to Teach Students. Now I Catch ChatGPT Cheats' (thewalrus.ca) 241

Philosophy/ethics professor Troy Jollimore looks at the implications of a world where many students are submitting AI-generated essays. ("Sometimes they will provide quotations, giving page numbers that, as often as not, do not seem to correspond to anything in the actual world...") Ideally if the students write the essays themselves, "some of them start to feel it. They begin to grasp that thinking well, and in an informed manner, really is different from thinking poorly and from a position of ignorance. That moment, when you start to understand the power of clear thinking, is crucial.

"The trouble with generative AI is that it short-circuits that process entirely." One begins to suspect that a great many students wanted this all along: to make it through college unaltered, unscathed. To be precisely the same person at graduation, and after, as they were on the first day they arrived on campus. As if the whole experience had never really happened at all. I once believed my students and I were in this together, engaged in a shared intellectual pursuit. That faith has been obliterated over the past few semesters. It's not just the sheer volume of assignments that appear to be entirely generated by AI — papers that show no sign the student has listened to a lecture, done any of the assigned reading, or even briefly entertained a single concept from the course...

It's other things too... The students who beg you to reconsider the zero you gave them in order not to lose their scholarship. (I want to say to them: Shouldn't that scholarship be going to ChatGPT?) It's also, and especially, the students who look at you mystified. The use of AI already seems so natural to so many of them, so much an inevitability and an accepted feature of the educational landscape, that any prohibition strikes them as nonsensical. Don't we instructors understand that today's students will be able, will indeed be expected, to use AI when they enter the workforce? Writing is no longer something people will have to do in order to get a job.

Or so, at any rate, a number of them have told me. Which is why, they argue, forcing them to write in college makes no sense. That mystified look does not vanish — indeed, it sometimes intensifies — when I respond by saying: Look, even if that were true, you have to understand that I don't equate education with job training.

What do you mean? they might then ask.

And I say: I'm not really concerned with your future job. I want to prepare you for life...

My students have been shaped by a culture that has long doubted the value of being able to think and write for oneself — and that is increasingly convinced of the power of a machine to do both for us. As a result, when it comes to writing their own papers, they simply disregard it. They look at instructors who levy such prohibitions as irritating anachronisms, relics of a bygone, pre-ChatGPT age.... As I go on, I find that more of the time, energy, and resources I have for teaching are dedicated to dealing with this issue. I am doing less and less actual teaching, more and more policing. Sometimes I try to remember the last time I actually looked forward to walking into a classroom. It's been a while.

Television

How Many Episodes Should You Watch Before Quitting a TV Show? A Statistical Analysis (statsignificant.com) 172

Daniel Parris: Some TV shows take a while to "get good." Modern classics like Breaking Bad, The Wire, Community, and Bojack Horseman are notorious for "starting slow" and are often recommended with a disclaimer like "Give it a few episodes; I promise it gets good!"

At the same time, some shows never get good. Recently, I started a spy series called The Agency, which could best be characterized as premium mediocre (at least so far). There are big-name actors (Michael Fassbender, Jeffrey Wright, Richard Gere), expensive sets, and glossy camerawork -- but after a few installments, I'm trapped in a liminal space between engaged and listless. At the end of each episode, I'm left with the same thought: "Maybe the next one will get good."

Committing to a mediocre program or continuing with a floundering series elicits a state of (mildly) torturous ambiguity. Should you cut your losses, or is this show some late-blooming classic like Breaking Bad? What is the optimal number of episodes one should watch before cleansing a subpar series from their life? Surely, a universal number must exist! Like 42, but for television. So today, we'll explore how long it takes a new show to reach its full potential and how many lackluster episodes you should grant an established series before cutting ties.
His analysis reveals that viewers should watch six episodes before quitting TV shows. The study, based on IMDb user ratings, found most series require six to seven episodes before early ratings match or exceed the show's long-term average. After six consecutive subpar episodes, the likelihood of permanent decline exceeds 50%, making it the optimal point to abandon disappointing series.

Several acclaimed shows including Breaking Bad, Friends, and Seinfeld required multiple episodes before reaching their quality potential, with Seinfeld needing 16 episodes to match its series average. The research also identified a pattern where long-running shows typically experience quality decline around seasons five and six, with ratings dropping below first-season averages and continuing to fall.
Books

Bill Gates Remembers LSD Trips, Smoking Pot, and How the Smartphone OS Market 'Was Ours for the Taking' (independent.co.uk) 138

Fortune remembers that in 2011 Steve Jobs had told author Walter Isaacson that Microsoft co-founder Bill Gates would "be a broader guy if he had dropped acid once or gone off to an ashram when he was younger."

But The Independent notes that in his new memoir Gates does write about two acid trip experiences. (Gates mis-timed his first experiment with LSD, ending up still tripping during a previously-scheduled appointment for dental surgery...) "Later in the book, Gates recounts another experience with LSD with future Microsoft co-founder Paul Allen and some friends... Gates says in the book that it was the fear of damaging his memory that finally persuaded him never to take the drug again." He added: "I smoked pot in high school, but not because it did anything interesting. I thought maybe I would look cool and some girl would think that was interesting. It didn't succeed, so I gave it up."

Gates went on to say that former Apple CEO Steve Jobs, who didn't know about his past drug use, teased him on the subject. "Steve Jobs once said that he wished I'd take acid because then maybe I would have had more taste in my design of my products," recalled Gates. "My response to that was to say, 'Look, I got the wrong batch.' I got the coding batch, and this guy got the marketing-design batch, so good for him! Because his talents and mine, other than being kind of an energetic leader, and pushing the limits, they didn't overlap much. He wouldn't know what a line of code meant, and his ability to think about design and marketing and things like that... I envy those skills. I'm not in his league."

Gates added that he was a fan of Michael Pollan's book about psychedelic drugs, How To Change Your Mind, and is intrigued by the idea that they may have therapeutic uses. "The idea that some of these drugs that affect your mind might help with depression or OCD, I think that's fascinating," said Gates. "Of course, we have to be careful, and that's very different than recreational usage."

Touring the country, 69-year-old Gates shared more glimpses of his life story:
  • The Harvard Gazette notes that the university didn't offer computer science degrees when Gates attended in 1973. But since Gates already had years of code-writing experience, he "initially rebuffed any suggestion of taking computer-related coursework... 'It's too easy,' he remembered telling friends."
  • "The naiveté I had that free computing would just be this unadulterated good thing wasn't totally correct even before AI," Gates told an audience at the Harvard Book Store. "And now with AI, I can see that we could shape this in the wrong way."
  • Gates "expressed regret about how he treated another boyhood friend, Paul Allen, the other cofounder of Microsoft, who died in 2018," reports the Boston Globe. "Gates at first took 60 percent ownership of the new software company and then pressured his friend for another 4 percent. 'I feel bad about it in retrospect,' he said. 'That was always a little complicated, and I wish I hadn't pushed....'"
  • Benzinga writes that Gates has now "donated $100 billion to charitable causes... Had Gates retained the $100 billion he has donated, his total wealth would be around $264 billion, placing him second on the global wealth rankings behind Elon Musk and ahead of Jeff Bezos and Mark Zuckerberg."
  • Gates told the Associated Press "I am stunned that Intel basically lost its way," saying Intel is now "kind of behind" on both chip design and fabrication. "They missed the AI chip revolution, and with their fabrication capabilities, they don't even use standards that people like Nvidia and Qualcomm find easy... I hope Intel recovers, but it looks pretty tough for them at this stage."
  • Gates also told the Associated Press that fighting a three-year antitrust case had "distracted" Microsoft. "The area that Google did well in that would not have happened had I not been distracted is Android, where it was a natural thing for me. I was trying, although what I didn't do well enough is provide the operating system for the phone. That was ours for the taking."
  • The Dallas News reports that in an on-stage interview in Texas, Mark Cuban closed by asking Gates one question. "Is the American Dream alive?" Gates answered: "It was for me."

Programming

C++ on Steroids: Bjarne Stroustrup Presents Guideline-Enforcing 'Profiles' For Resource and Type Safety (acm.org) 71

"It is now 45+ years since C++ was first conceived," writes 74-year-old C++ creator Bjarne Stroustrup in an article this week for Communications of the ACM. But he complains that many developers "use C++ as if it was still the previous millennium," in an article titled 21st Century C++ that promises "the key concepts on which performant, type safe, and flexible C++ software can be built: resource management, life-time management, error-handling, modularity, and generic programming...

"At the end, I present ways to ensure that code is contemporary, rather than relying on outdated, unsafe, and hard-to-maintain techniques: guidelines and profiles." To help developers focus on effective use of contemporary C++ and avoid outdated "dark corners" of the language, sets of guidelines have been developed. Here I focus on the C++ Core guidelines that I consider the most ambitious... My principal aim is a type-safe and resource-safe use of ISO standard C++. That is:

- Every object is exclusively used according to its definition
- No resource is leaked

This encompasses what people refer to as memory safety and much more. It is not a new goal for C++. Obviously, it cannot be achieved for every use of C++, but by now we have years of experience showing that it can be done for modern code, though so far enforcement has been incomplete... When thinking about C++, it is important to remember that C++ is not just a language but part of an ecosystem consisting of implementations, libraries, tools, teaching, and more.

WG21 (and others) are working on "profiles" to enforce guidelines (though they're "not yet available, except for experimental and partial versions"). But Stroustrup writes that the C++ Core Guidelines "use a strategy known as subset-of-superset." First, extend: use parts of the standard library and add a tiny library (the Guidelines Support Library, or GSL) to make using the guidelines convenient and efficient. Then, subset: ban the use of low-level, inefficient, and error-prone features.

What we get is "C++ on steroids": Something simple, safe, flexible, and fast; rather than an impoverished subset or something relying on massive run-time checking. Nor do we create a language with novel and/or incompatible features. The result is 100% ISO standard C++. Messy, dangerous, low-level features can still be enabled and used when needed.

Stroustrup writes that the C++ Core Guidelines focus on rules "we hope that everyone eventually could benefit from."
  • No uninitialized variables
  • No range or nullptr violations
  • No resource leaks
  • No dangling pointers
  • No type violations
  • No invalidation
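As a small illustration of the "no range violations" rule (my example, not from the article): subscripting through bounds-checked access turns a range error into a catchable exception rather than undefined behavior:

```cpp
#include <stdexcept>
#include <vector>

// "No range violations": vector::at() throws std::out_of_range instead of
// silently reading past the end the way unchecked operator[] can.
bool in_range(const std::vector<int>& v, std::size_t i) {
    try {
        (void)v.at(i);  // bounds-checked element access
        return true;
    } catch (const std::out_of_range&) {
        return false;
    }
}
```

A profile enforcing the guidelines would flag the unchecked alternatives; until profiles ship, static analyzers and idioms like this fill the gap.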

Bjarne Stroustrup answered questions from Slashdot readers in 2014...


Open Source

Google Has Open-Sourced the Pebble Smartwatch OS 23

Google has open-sourced PebbleOS, and original Pebble founder Eric Migicovsky is starting a company to pick up where he left off in 2016. "This is part of an effort from Google to help and support the volunteers who have come together to maintain functionality for Pebble watches after the original company ceased operations in 2016," said Google in a blog post. The Verge reports: The company -- which can't be named Pebble because Google still owns that -- doesn't have a name yet. For now, Migicovsky is hosting a waitlist and news signup at a website called RePebble. Later this year, once the company has a name and access to all that Pebble software, the plan is to start shipping new wearables that look, feel, and work like the Pebbles of old. The reason, Migicovsky tells me, is simple. "I've tried literally everything else," he says, "and nothing else comes close." Sure, he may just have a very specific set of requirements -- lots of people are clearly happy with what Apple, Garmin, Google, and others are making. But it's true that there's been nothing like Pebble since Pebble. "For the things I want out of it, like a good e-paper screen, long battery life, good and simple user experience, hackable, there's just nothing."

The core of Pebble, he says, is a few things. A Pebble should be quirky and fun and should feel like a gadget in an important way. It shows notifications, lets you control your music with buttons, lasts a long time, and doesn't try to do too much. It sounds like Migicovsky might have Pebble-y ambitions beyond smartwatches, but he appears to be starting with smartwatches. If that sounds like the old Pebble and not much else, that's precisely the point. [...] Migicovsky also hopes to be part of a broader open-source community around Pebble OS. The Pebble diehards still exist: a group of developers at Rebble have worked to keep many of the platform's apps alive, for instance, along with the Cobble app for connecting to phones, and the Pebble subreddit is surprisingly active for a product that hasn't been updated since the Obama administration. Migicovsky says he plans to open-source whatever his new company builds and hopes lots of other folks will build stuff, too.
Thank you Slashdot reader sziring for sharing this story.
Science

'Snowball Earth' Evolution Hypothesis Gains New Momentum (quantamagazine.org) 42

The University of Colorado Boulder's magazine recently wrote: What happened during the "Snowball Earth" period is perplexing: Just as the planet endured about 100 million years of deep freeze, with a thick layer of ice covering most of Earth and with low levels of atmospheric oxygen, forms of multicellular life emerged. Why? The prevailing scientific view is that such frigid temperatures would slow rather than speed evolution. But fossil records from 720 to 635 million years ago show an evolutionary spurt preceding the development of animals...

Carl Simpson, a macroevolutionary paleobiologist at CU Boulder, has found evidence that cold seawater could have jump-started — rather than suppressed — evolution from single-celled to multicellular life forms.

That evidence is described in Quanta magazine: Simpson proposes an answer linked to a fundamental physical fact: As seawater gets colder, it gets more viscous, and therefore more difficult for very small organisms to navigate. Imagine swimming through honey rather than water... To test the idea, Simpson, a paleobiologist at the University of Colorado, Boulder, and his team conducted an experiment designed to see what a modern single-celled organism does when confronted with higher viscosity... In an enormous, custom-made petri dish, [grad student Andrea] Halling and Simpson created a bull's-eye target of agar gel — their own experimental gauntlet of viscosity. At the center, it was the standard viscosity used for growing these algae in the lab. [Green algae, which swims with a tail-like flagellum.] Moving outward, each concentric ring had higher and higher viscosity, finally reaching a medium with four times the standard level. The scientists placed the algae in the middle, turned on a camera, and left them alone for 30 days — enough time for about 70 generations of algae to live, swim around for nutrients and die...

After 30 days, the algae in the middle were still unicellular. As the scientists put algae from thicker and thicker rings under the microscope, however, they found larger clumps of cells. The very largest were wads of hundreds. But what interested Simpson the most were mobile clusters of four to 16 cells, arranged so that their flagella were all on the outside. These clusters moved around by coordinating the movement of their flagella, the ones at the back of the cluster holding still, the ones at the front wriggling.

"One thing that you learn about small organisms from a physics point of view is that they don't experience the world the same way that we do, as larger-bodied organisms," Simpson says in the university's article. It says that instead unicellular organisms are specifically "affected by the viscosity, or thickness, of sea water," and Simpson adds that "basically, that would trigger the origin of animals, potentially."

Last year Simpson posted a preprint on biorxiv.org. (And he also co-authored an article on "physical constraints during Snowball Earth drive the evolution of multicellularity.")

There's a video showing algae in Simpson's lab clumping together in viscous water. "This observed behavior adds evidence to Simpson's hypothesis that single-celled organisms clumped together to their mutual advantage during the 'Snowball Earth' period," says the video's description, "thus adding momentum to the rise of multicellular organisms." But Simpson says in the university's article, "To actually see it empirically means there's something to this idea."

Simpson and colleagues have now received a $1 million grant to study grains of sand made from calcium carbonate and called ooids, since their diameter "could be a proxy measurement of Earth's temperature for the last 2.5 billion years," according to the university's article. Geologist Lizzy Trower says the research "can tell us something about the chemistry and water temperature in which they formed." And more importantly, "Does the fossil record agree with the predictions we would make based on this theory from this new record of temperature?" Trower and Simpson's work also has potential implications for the human quest to find life elsewhere in the universe, Trower said. If extremely harsh and cold environments can spur evolutionary change, "then that is a really different type of thing to look for in exoplanets (potentially life-sustaining planets in other solar systems), or think about when and where (life) would exist."
