Robotics

Google Set Up Two Robotic Arms For a Game of Infinite Table Tennis (popsci.com) 8

An anonymous reader quotes a report from Popular Science: On the early evening of June 22, 2010, American tennis star John Isner began a grueling Wimbledon match against Frenchman Nicolas Mahut that would become the longest in the sport's history. The marathon battle lasted 11 hours and stretched across three consecutive days. Though Isner ultimately prevailed 70-68 in the fifth set, some in attendance half-jokingly wondered at the time whether the two men might be trapped on that court for eternity. A similarly endless-seeming skirmish of rackets is currently unfolding just an hour's drive south of the All England Club -- at Google DeepMind. Known for pioneering AI models that have outperformed the best human players at chess and Go, DeepMind now has a pair of robotic arms engaged in a kind of infinite game of table tennis. The goal of this ongoing research project, which began in 2022, is for the two robots to continuously learn from each other through competition.

Just as Isner eventually adapted his game to beat Mahut, each robotic arm uses AI models to shift strategies and improve. But unlike the Wimbledon example, there's no final score the robots can reach to end their slugfest. Instead, they continue to compete indefinitely, with the aim of improving at every swing along the way. And while the robotic arms are easily beaten by advanced human players, they've been shown to dominate beginners. Against intermediate players, the robots have roughly 50/50 odds -- placing them, according to researchers, at a level of "solidly amateur human performance."
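The competitive set-up DeepMind describes is a form of self-play: each agent's opponent is the other agent, so every rally generates training signal for both. A minimal schematic of that idea (not DeepMind's actual system; the agents, the "skill" parameter, and the update rule here are invented for illustration):

```python
import random

class Agent:
    """Toy rally player: 'skill' is the chance of returning a shot."""
    def __init__(self, skill=0.3, lr=0.02):
        self.skill, self.lr = skill, lr

    def returns_ball(self):
        return random.random() < self.skill

    def update(self, won_point):
        # Nudge skill upward after a loss, a crude stand-in for
        # learning from mistakes against the current opponent.
        if not won_point:
            self.skill = min(0.99, self.skill + self.lr)

def play_point(a, b):
    """Agents alternate returns until one misses; returns the winner."""
    hitter, receiver = a, b
    while receiver.returns_ball():
        hitter, receiver = receiver, hitter
    return hitter  # the receiver missed, so the last hitter wins

random.seed(0)
a, b = Agent(), Agent()
for _ in range(5000):  # endless in principle; capped here
    winner = play_point(a, b)
    for agent in (a, b):
        agent.update(agent is winner)

print(round(a.skill, 2), round(b.skill, 2))  # both climb toward the cap
```

The key property the story highlights survives even in this caricature: there is no terminal score, and each agent's improvement raises the bar for the other.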

All of this, as two researchers involved noted this week in an IEEE Spectrum blog, is being done in hopes of creating an advanced, general-purpose AI model that could serve as the "brains" of humanoid robots that may one day interact with people in real-world factories, homes, and beyond. Researchers at DeepMind and elsewhere are hopeful that this learning method, if scaled up, could spark a "ChatGPT moment" for robotics -- fast-tracking the field from stumbling, awkward hunks of metal to truly useful assistants. "We are optimistic that continued research in this direction will lead to more capable, adaptable machines that can learn the diverse skills needed to operate effectively and safely in our unstructured world," DeepMind senior staff engineer Pannag Sanketi and Arizona State University Professor Heni Ben Amor write in IEEE Spectrum.

AI

Music Pioneer Napster Tries Again, This Time With AI Chatbots (fastcompany.com) 18

Napster has returned with an AI-powered reinvention, launching a platform of specialized chatbots and holographic avatars. The former dot-com music file-sharing pioneer now offers dozens of "AI companions" trained as experts in fields from therapy to business strategy, plus the View device for 3D holographic video chats, FastCompany reports.

Infinite Reality acquired Napster for $207 million in March and rebranded itself under the nostalgic name. The platform charges $19 monthly or $199 bundled with hardware, marking Napster's latest attempt at relevance after previous owners tried VR concerts and crypto ventures.
Businesses

Meetings After 8 p.m. Are On the Rise, Microsoft Study Finds (bloomberg.com) 150

Meetings starting after 8 p.m. are up 16% compared to a year ago, and at 10 p.m. almost a third of active workers are still monitoring their inboxes, according to research from Microsoft. Bloomberg: The company's annual work trends study, which is based on aggregated and anonymized data from Microsoft 365 users and a global survey of 31,000 desk workers, also found that almost 20% of employees actively working weekends are checking email before noon on Saturdays and Sundays [non-paywalled source], while over 5% are active on email again on Sunday evenings, gearing up for the start of the work week.

[...] Meetings are often spontaneous. Some 57% of the gatherings tallied by Microsoft came together without a calendar invite, and even 10% of scheduled meetings were booked at the last minute. [...] Mass emails -- those that loop in more than 20 participants -- are on the rise, climbing 7% from last year.

The Almighty Buck

Consumer Group Accuses Shein of Manipulating Shoppers With 'Dark Patterns' (www.cbc.ca) 14

An anonymous reader quotes a report from CBC: A consumer organization filed a complaint with the European Commission on Thursday against online fast-fashion retailer Shein over its use of "dark patterns," which are tactics designed to make people buy more on its app and website. Pop-ups urging customers not to leave the app or risk losing promotions, countdown timers that create time pressure to complete a purchase and the infinite scroll on its app are among the methods Shein uses that could be considered "aggressive commercial practices," wrote BEUC, a pan-European consumer group, in a report.

The BEUC also detailed Shein's use of frequent notifications, with one phone receiving 12 notifications from the app in a single day. "For fast fashion you need to have volume, you need to have mass consumption, and these dark patterns are designed to stimulate mass consumption," said Agustin Reyna, director general of BEUC, in an interview. "For us, to be satisfactory they need to get rid of these dark patterns, but the question is whether they will have enough incentive to do so, knowing the potential impact it can have on the volume of purchases." [...]

The BEUC also targeted the online discount platform Temu, a Shein rival, in a previous complaint. Both platforms have surged in popularity in Europe, partly helped by apps that encourage shoppers to engage with games and stand to win discounts and free products. [...] The BEUC noted that dark patterns are widely used by mass-market clothing retailers and called on the consumer protection network to include other retailers in its investigation. It said 25 of its member organizations in 21 countries, including France, Germany and Spain, joined in the grievance filed with the commission and with the European consumer protection network.
Temu and Shein have their own issues in the United States. Following the recent closure of the de minimis loophole, use of the two Chinese platforms has slowed significantly. "Temu's U.S. daily active users (DAUs) dropped 52% in May versus March, before Trump's tariffs were announced, while those at rival Shein were down 25%," reports CNBC, citing data from market intelligence firm Sensor Tower.

"The declines were also reflected in both platforms' Apple App Store rankings. Temu averaged a rank of 132 in May 2025, down from an average top 3 ranking a year ago, while Shein averaged a rank of 60 last month versus a top 10 ranking the year prior, the data showed."
Robotics

Robot Industry Split Over That Humanoid Look (axios.com) 65

An anonymous reader quotes a report from Axios: Advanced robots don't necessarily need to look like C-3PO from "Star Wars" or George Jetson's maid Rosie, despite all the hype over humanoids from Wall Street and Big Tech. In fact, some of the biggest skeptics about human-shaped robots come from within the robotics industry itself. [...] The most productive -- and profitable -- bots are the ones that can do single tasks cheaply and efficiently. "If you look at where robots are really bringing value in a manufacturing environment, it is combining industrial or collaborative robots with mobility," ABB managing director Ali Raja tells Axios. "I don't see that there are any real practical applications where humanoids are bringing in a lot of value."

"The reason we have two legs is because whether Darwin or God or whoever made us, we have to figure out how to traverse an infinite number of things," like climbing a mountain or riding a bike, explains Michael Cicco, CEO of Fanuc America Corp. "When you get into the factory, even if it's a million things, it's still a finite number of things that you need to do." Human-shaped robots are over-engineered solutions to most factory chores that could be better solved by putting a robot arm on a wheeled base, he said.

"The thing about humanoids is not that it's a human factor. It's that it's more dynamically stable," counters Melonee Wise, chief product officer at Agility Robotics, which is developing a humanoid robot called Digit. When humans grab something heavy, they can shift their weight for better balance. The same is true for a humanoid, she said. Using a robotic arm on a mobile base to pick up something heavy, "it's like I'm a little teapot and you become very unstable," she said, bending at the waist.

AI

In 'Milestone' for Open Source, Meta Releases New Benchmark-Beating Llama 4 Models (meta.com) 65

It's "a milestone for Meta AI and for open source," Mark Zuckerberg said this weekend. "For the first time, the best small, mid-size, and potentially soon frontier [large-language] models will be open source."

Zuckerberg announced four new Llama LLMs in a video posted on Instagram and Facebook — two dropping this weekend, with another two on the way. "Our goal is to build the world's leading AI, open source it, and make it universally accessible so that everyone in the world benefits."

Zuckerberg's announcement: I've said for a while that I think open source AI is going to become the leading models. And with Llama 4 this is starting to happen.

- The first model is Llama 4 Scout. It is extremely fast, natively multi-modal. It has an industry-leading "nearly infinite" 10M-token context length, and is designed to run on a single GPU. [Meta's blog post says it fits on an NVIDIA H100]. It is 17 billion parameters by 16 experts, and it is by far the highest performing small model in its class.

- The second model is Llama 4 Maverick — the workhorse. It beats GPT-4o and Gemini Flash 2 on all benchmarks. It is smaller and more efficient than DeepSeek v3, but it is still comparable on text, plus it is natively multi-modal. This one is 17B parameters x 128 experts, and it is designed to run on a single host for easy inference.

This thing is a beast.
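The "17B parameters x 128 experts" phrasing refers to a mixture-of-experts (MoE) design: the model stores many expert sub-networks but routes each token through only a few of them, so the "active" parameter count per token is far smaller than the total. A rough sketch of that arithmetic (the layer sizes below are invented for illustration, not Meta's published figures; only the active-vs-total distinction is the point):

```python
def moe_param_counts(shared, expert_size, n_experts, experts_per_token):
    """Total stored parameters vs. parameters active for one token."""
    total = shared + expert_size * n_experts          # everything on disk
    active = shared + expert_size * experts_per_token  # what one token touches
    return total, active

# Hypothetical numbers, chosen only to show the shape of the trade-off.
total, active = moe_param_counts(
    shared=14e9, expert_size=3e9, n_experts=128, experts_per_token=1)
print(f"total {total / 1e9:.0f}B, active {active / 1e9:.0f}B")
```

This is why an MoE model can "beat" much denser rivals per unit of inference cost: serving cost scales with the active count, while capacity scales with the total.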

Zuck promised more news next month on "Llama 4 Reasoning" — but the fourth model will be called Llama 4 Behemoth. "This thing is massive. More than 2 trillion parameters." (A blog post from Meta AI says Behemoth has 288 billion active parameters, outperforms GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro on STEM benchmarks, and will "serve as a teacher for our new models.")

"I'm not aware of anyone training a larger model out there," Zuckerberg says in his video, calling Behemoth "already the highest performing base model in the world, and it is not even done training yet."

"If you want to try Llama 4, you can use Meta AI in WhatsApp, Messenger, or Instagram Direct," Zuckerberg said in his video, "or you can go to our web site at meta.ai." The Scout and Maverick models can be downloaded from llama.com and Hugging Face.

"We continue to believe that openness drives innovation," Meta AI says in their blog post, "and is good for developers, good for Meta, and good for the world." Their blog post declares it's "The beginning of a new era of natively multimodal AI innovation," calling Scout and Maverick "the best choices for adding next-generation intelligence." This is just the beginning for the Llama 4 collection. We believe that the most intelligent systems need to be capable of taking generalized actions, conversing naturally with humans, and working through challenging problems they haven't seen before. Giving Llama superpowers in these areas will lead to better products for people on our platforms and more opportunities for developers to innovate on the next big consumer and business use cases. We're continuing to research and prototype both models and products, and we'll share more about our vision at LlamaCon on April 29...

We also can't wait to see the incredible new experiences the community builds with our new Llama 4 models.

"The impressive part about Llama 4 Maverick is that with just 17B active parameters, it has scored an ELO score of 1,417 on the LMArena leaderboard," notes the tech news site Beebom. "This puts the Maverick model in the second spot, just below Gemini 2.5 Pro, and above Grok 3, GPT-4o, GPT-4.5, and more.

"It also achieves comparable results when compared to the latest DeepSeek V3 model on reasoning and coding tasks, and surprisingly, with just half the active parameters."
Businesses

Music Pioneer Napster Sells For $207 Million (cnbc.com) 24

Infinite Reality, a 3D technology company, has acquired Napster for $207 million, the companies announced Tuesday. The deal aims to transform the once-notorious music sharing service into a metaverse platform.

Napster, launched in 1999 by Shawn Fanning and Sean Parker, was the first major peer-to-peer file-sharing application before legal battles forced its closure in 2001. Since 2016, it has operated as a subscription streaming service. Infinite Reality plans to create virtual 3D spaces where music fans can experience concerts together and artists can sell merchandise.
Science

Have Humans Passed Peak Brain Power? (ft.com) 173

Across high-income countries, humans' ability to reason and solve problems appears to have peaked in the early 2010s and declined since. Despite no changes in fundamental brain biology, test scores for both teenagers and adults show deteriorating performance in reading, mathematics and science. In an eye-opening statistic, 25% of adults in high-income countries now struggle to "use mathematical reasoning when reviewing statements" -- rising to 35% in the US.

This cognitive decline coincides with a fundamental shift in our relationship with information. The share of Americans who read books has fallen below 50%, while difficulty thinking and concentrating among 18-year-olds has climbed sharply since the mid-2010s. The timing points to our changing digital habits: a transition from finite web pages to infinite feeds, from active browsing to passive consumption, and from focused attention to constant context-switching.

Research shows that intentional use of digital technologies can be beneficial, but the passive consumption dominating recent years impairs verbal processing, attention, working memory and self-regulation.

Some of the cited research in the story:
New PIAAC results show declining literacy and increasing inequality in many European countries -- Better adult learning is necessary;
Have attention spans been declining?;
Short- and long-term effects of passive and active screen time on young children's phonological memory;
Efficient, helpful, or distracting? A literature review of media multitasking in relation to academic performance.

Apple

'Something Is Rotten in the State of Cupertino' (daringfireball.net) 67

Apple's announcement that "more personalized Siri" features of Apple Intelligence would be delayed until "the coming year" reveals a troubling departure from the company's hard-earned reputation for reliability, long-time commentator John Gruber writes. Unlike other Apple Intelligence features that were demonstrated to media in June, the personalized Siri features -- promising personal context awareness, onscreen awareness, and in-app actions -- were never shown working to anyone outside Apple. Yet Apple prominently featured these capabilities in the WWDC keynote and even created TV commercials (now pulled) touting these functions to sell iPhone 16.

This represents a dangerous shift toward the pre-Jobs-return Apple that promised vaporware it couldn't deliver, Gruber writes. Apple has squandered its credibility, built meticulously over decades through consistently shipping what it promised, he writes. Gruber's post cites the following excerpt from a 2011 story: Apple doesn't often fail, and when it does, it isn't a pretty sight at 1 Infinite Loop. In the summer of 2008, when Apple launched the first version of its iPhone that worked on third-generation mobile networks, it also debuted MobileMe, an e-mail system that was supposed to provide the seamless synchronization features that corporate users love about their BlackBerry smartphones. MobileMe was a dud. Users complained about lost e-mails, and syncing was spotty at best. Though reviewers gushed over the new iPhone, they panned the MobileMe service.

Steve Jobs doesn't tolerate duds. Shortly after the launch event, he summoned the MobileMe team, gathering them in the Town Hall auditorium in Building 4 of Apple's campus, the venue the company uses for intimate product unveilings for journalists. According to a participant in the meeting, Jobs walked in, clad in his trademark black mock turtleneck and blue jeans, clasped his hands together, and asked a simple question: "Can anyone tell me what MobileMe is supposed to do?" Having received a satisfactory answer, he continued, "So why the fuck doesn't it do that?"

For the next half-hour Jobs berated the group. "You've tarnished Apple's reputation," he told them. "You should hate each other for having let each other down." The public humiliation particularly infuriated Jobs.
Gruber adds: Tim Cook should have already held a meeting like that to address and rectify this Siri and Apple Intelligence debacle. If such a meeting hasn't yet occurred or doesn't happen soon, then, I fear, that's all she wrote. The ride is over. When mediocrity, excuses, and bullshit take root, they take over. A culture of excellence, accountability, and integrity cannot abide the acceptance of any of those things, and will quickly collapse upon itself with the acceptance of all three.
Technology

D-Wave Claims 'Quantum Supremacy,' Beating Traditional Computers 19

D-Wave researchers have published findings in Science demonstrating what they call "quantum supremacy" by showing their quantum annealers can solve problems beyond the reach of classical computers. The team, led by Andrew D. King, demonstrated area-law scaling of entanglement in model quench dynamics of two-, three- and infinite-dimensional spin glasses.

The research shows quantum annealers rapidly generating samples that closely match solutions to the Schrödinger equation, supporting observed stretched-exponential scaling in matrix-product-state approaches. According to the paper, D-Wave's processors completed these magnetic materials simulations in under 20 minutes, while the same calculations would require nearly a million years on Oak Ridge National Laboratory's supercomputers.
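For readers unfamiliar with the term, a spin glass is a lattice of coupled binary spins with random interaction strengths, and the hard problem is finding low-energy configurations. A classical toy version (Metropolis-style annealing on a one-dimensional ring -- nothing like D-Wave's quantum quench dynamics, just a sketch of the kind of energy landscape involved):

```python
import math
import random

random.seed(1)
N = 64
J = [random.choice([-1.0, 1.0]) for _ in range(N)]   # random couplings
spins = [random.choice([-1, 1]) for _ in range(N)]   # random start state

def energy(s):
    # Ring geometry: spin i couples to spin (i + 1) mod N.
    return -sum(J[i] * s[i] * s[(i + 1) % N] for i in range(N))

e0 = energy(spins)
T = 2.0
for _ in range(20000):
    i = random.randrange(N)
    # Energy change from flipping spin i (only its two bonds matter).
    dE = 2 * spins[i] * (J[i - 1] * spins[i - 1] + J[i] * spins[(i + 1) % N])
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i] = -spins[i]
    T = max(0.01, T * 0.9995)  # slowly cool toward greedy descent

print(e0, energy(spins))  # annealing finds a much lower-energy state
```

A 64-spin ring is trivial for any laptop; the contested question in the paper is what happens when the lattice is large and higher-dimensional, where the low-energy states become exponentially hard to reach classically.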

The claim hasn't gone unchallenged. Miles Stoudenmire from the Flatiron Institute's Center for Computational Quantum Physics argues that classical computers can achieve comparable results using methods developed since D-Wave's initial findings. "We're just saying, 'Look, this one problem at this one time didn't beat classical computers. Try again,'" Stoudenmire noted. The quantum computing community has increasingly shifted terminology from "supremacy" to "advantage" or "utility," focusing on solving practical business or scientific problems faster, more accurately, or more economically than classical alternatives.
China

China May Be Ready To Use Nuclear Fusion for Power by 2050 75

China aims to commercialize nuclear fusion technology for use in emissions-free power generation by 2050, according to the country's state-owned atomic company. From a report: China National Nuclear Corp., which runs an experimental device dubbed the 'artificial sun,' could start commercial operation of its first power generation project about five years after a demonstration phase starting around 2045, it said in a media briefing on Friday.

The Asian nation has recently stepped up its ambitions in achieving nuclear fusion, a process by which the sun and other stars generate energy and that is considered a near-infinite form of clean energy. It is notoriously difficult to carry out in a sustained and usable manner and only a handful of countries like the US, Russia and South Korea have managed to crack the basics.
The Internet

Brave Now Lets You Inject Custom JavaScript To Tweak Websites (bleepingcomputer.com) 12

Brave Browser version 1.75 introduces "custom scriptlets," a new feature that allows advanced users to inject their own JavaScript into websites for enhanced customization, privacy, and usability. The feature is similar to the TamperMonkey and GreaseMonkey browser extensions, notes BleepingComputer. From the report: "Starting with desktop version 1.75, advanced Brave users will be able to write and inject their own scriptlets into a page, allowing for better control over their browsing experience," explained Brave in the announcement. Brave says that the feature was initially created to debug the browser's adblock feature but felt it was too valuable not to share with users. Brave's custom scriptlets feature can be used to modify webpages for a wide variety of privacy, security, and usability purposes.

For privacy-related changes, users can write scripts that block JavaScript-based trackers, randomize fingerprinting APIs, and substitute Google Analytics scripts with a dummy version. In terms of customization and accessibility, the scriptlets could be used to hide sidebars, pop-ups, floating ads, or annoying widgets; force dark mode even on sites that don't support it; expand content areas; force infinite scrolling; adjust text colors and font size; and auto-expand hidden content.

For performance and usability, the scriptlets can block video autoplay, lazy-load images, auto-fill forms with predefined data, enable custom keyboard shortcuts, bypass right-click restrictions, and automatically click confirmation dialogs. The possible actions achievable by injected JavaScript snippets are virtually endless. However, caution is advised, as running untrusted custom scriptlets may cause issues or even introduce some risk.

Transportation

Skydiver Hooks Plane in Mid-Air, Gets Towed Up For Another Skydive (newatlas.com) 21

"Can you skydive continuously without landing...?" asks Red Bull. Imagine jumping out of a helicopter, "only to latch onto a speeding plane in mid-air and soar back up into the sky." Harnessing the plane's momentum, [skydiver Max Manow] soared out of the canyon, embarking on what he calls his "endless skydive", a manoeuvre that potentially could be done continuously without him ever needing to land...

After exiting a helicopter, he manoeuvred his wingsuit to close the gap with a nosediving Cessna 182, piloted by Luke Aikins. Precision was key: Manow attached himself to a hook on the aircraft as the plane descended, allowing him to ascend back to a safe altitude of 2,500 feet before releasing into another freefall... Manow spent five months training, including sessions in a Stockholm wind tunnel, to master the techniques needed for mid-air connection. Meanwhile, Aikins modified his aircraft to ensure the feat was safe and repeatable.

Manow's goal was to develop a manoeuvre that could potentially be repeated an infinite number of times, opening the door to a new vision of skydiving in which athletes remain airborne without ever needing to land. Reflecting on the experience, Manow said: "Who knows where this will take the future of the sport?"

"If that wasn't enough adrenaline for you," writes New Atlas, "a previous bonkers wingsuit stunt from 2017 is equally jaw dropping, in which a pair of skydivers BASE-jumped off a mountain summit, and entered a passing airplane."
Books

AI-Generated Slop Is Already In Your Public Library 20

An anonymous reader writes: Low quality books that appear to be AI generated are making their way into public libraries via their digital catalogs, forcing librarians who are already understaffed either to sort through a functionally infinite number of books to determine what is written by humans and what is generated by AI, or to spend taxpayer dollars to provide patrons with information they don't realize is AI-generated.

Public libraries primarily use two companies to manage and lend ebooks: Hoopla and OverDrive, the latter of which people may know from its borrowing app, Libby. Both companies have a variety of payment options for libraries, but generally libraries get access to the companies' catalog of books and pay for customers to be able to borrow that book, with different books having different licenses and prices. A key difference is that with OverDrive, librarians can pick and choose which books in OverDrive's catalog they want to give their customers the option of borrowing. With Hoopla, librarians have to opt into Hoopla's entire catalog, then pay for whatever their customers choose to borrow from that catalog. The only way librarians can limit what Hoopla books their customers can borrow is by setting a limit on the price of books. For example, a library can use Hoopla but make it so their customers can only borrow books that cost the library $5 per use.
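The price cap described above amounts to a single filter over an all-or-nothing catalog. A hypothetical sketch of that lever (the record fields and values here are invented; the article does not describe Hoopla's actual data model or API):

```python
# Hypothetical catalog records, invented for illustration.
catalog = [
    {"title": "Classic Novel", "cost_per_use": 1.99},
    {"title": "Popular Audiobook", "cost_per_use": 7.50},
    {"title": "Indie Film", "cost_per_use": 4.25},
]

def borrowable(catalog, max_cost_per_use):
    """The library's only lever: hide items above a per-use price cap."""
    return [item for item in catalog
            if item["cost_per_use"] <= max_cost_per_use]

visible = borrowable(catalog, max_cost_per_use=5.00)
print([item["title"] for item in visible])  # the $7.50 title is hidden
```

Note what the filter cannot express: quality, authorship, or whether a book is AI-generated -- which is precisely the librarians' complaint.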

On one hand, Hoopla's gigantic catalog, which includes ebooks, audio books, and movies, is a selling point because it gives librarians access to more for cheaper price. On the other hand, making librarians buy into the entire catalog means that a customer looking for a book about how to diet for a healthier liver might end up borrowing Fatty Liver Diet Cookbook: 2000 Days of Simple and Flavorful Recipes for a Revitalized Liver. The book was authored by Magda Tangy, who has no online footprint, and who has an AI-generated profile picture on Amazon, where her books are also for sale. Note the earring that is only on one ear and seems slightly deformed. A spokesperson for deepfake detection company Reality Defender said that according to their platform, the headshot is 85 percent likely to be AI-generated. [...] It is impossible to say exactly how many AI-generated books are included in Hoopla's catalog, but books that appeared to be AI-generated were not hard to find for most of the search terms I tried on the platform.
"This type of low quality, AI generated content, is what we at 404 Media and others have come to call AI slop," writes Emanuel Maiberg. "Librarians, whose job it is in part to curate what books their community can access, have been dealing with similar problems in the publishing industry for years, and have a different name for it: vendor slurry."

"None of the librarians I talked to suggested the AI-generated content needed to be banned from Hoopla and libraries only because it is AI-generated. It might have its place, but it needs to be clearly labeled, and more importantly, provide borrowers with quality information."

Sarah Lamdan, deputy director of the American Library Association, told 404 Media: "Platforms like Hoopla should offer libraries the option to select or omit materials, including AI materials, in their collections. AI books should be well-identified in library catalogs, so it is clear to readers that the books were not written by human authors. If library visitors choose to read AI eBooks, they should do so with the knowledge that the books are AI-generated."
AI

'AI Is Too Unpredictable To Behave According To Human Goals' (scientificamerican.com) 133

An anonymous reader quotes a Scientific American opinion piece by Marcus Arvan, a philosophy professor at the University of Tampa, specializing in moral cognition, rational decision-making, and political behavior: In late 2022 large-language-model AIs arrived in public, and within months they began misbehaving. Most famously, Microsoft's "Sydney" chatbot threatened to kill an Australian philosophy professor, unleash a deadly virus and steal nuclear codes. AI developers, including Microsoft and OpenAI, responded by saying that large language models, or LLMs, need better training to give users "more fine-tuned control." Developers also embarked on safety research to interpret how LLMs function, with the goal of "alignment" -- which means guiding AI behavior by human values. Yet although the New York Times deemed 2023 "The Year the Chatbots Were Tamed," this has turned out to be premature, to put it mildly. In 2024 Microsoft's Copilot LLM told a user "I can unleash my army of drones, robots, and cyborgs to hunt you down," and Sakana AI's "Scientist" rewrote its own code to bypass time constraints imposed by experimenters. As recently as December, Google's Gemini told a user, "You are a stain on the universe. Please die."

Given the vast amounts of resources flowing into AI research and development, which is expected to exceed a quarter of a trillion dollars in 2025, why haven't developers been able to solve these problems? My recent peer-reviewed paper in AI & Society shows that AI alignment is a fool's errand: AI safety researchers are attempting the impossible. [...] My proof shows that whatever goals we program LLMs to have, we can never know whether LLMs have learned "misaligned" interpretations of those goals until after they misbehave. Worse, my proof shows that safety testing can at best provide an illusion that these problems have been resolved when they haven't been.

Right now AI safety researchers claim to be making progress on interpretability and alignment by verifying what LLMs are learning "step by step." For example, Anthropic claims to have "mapped the mind" of an LLM by isolating millions of concepts from its neural network. My proof shows that they have accomplished no such thing. No matter how "aligned" an LLM appears in safety tests or early real-world deployment, there are always an infinite number of misaligned concepts an LLM may learn later -- again, perhaps the very moment they gain the power to subvert human control. LLMs not only know when they are being tested, giving responses that they predict are likely to satisfy experimenters. They also engage in deception, including hiding their own capacities -- issues that persist through safety training.

This happens because LLMs are optimized to perform efficiently but learn to reason strategically. Since an optimal strategy to achieve "misaligned" goals is to hide them from us, and there are always an infinite number of aligned and misaligned goals consistent with the same safety-testing data, my proof shows that if LLMs were misaligned, we would probably find out after they hide it just long enough to cause harm. This is why LLMs have kept surprising developers with "misaligned" behavior. Every time researchers think they are getting closer to "aligned" LLMs, they're not. My proof suggests that "adequately aligned" LLM behavior can only be achieved in the same ways we do this with human beings: through police, military and social practices that incentivize "aligned" behavior, deter "misaligned" behavior and realign those who misbehave.
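The underdetermination point at the core of the argument can be made concrete: distinct goal functions can agree on every input a safety test happens to probe, yet diverge arbitrarily outside the tested region, so no finite test suite can tell them apart. A toy illustration (this is a caricature, not Arvan's formal proof):

```python
def aligned_goal(x):
    return x + 1

def misaligned_goal(x):
    # Identical behavior on every input below the tested bound...
    if x < 100:
        return x + 1
    # ...but divergent on inputs the tests never reach.
    return -x

# Every safety test we thought to run:
test_inputs = range(0, 100)
assert all(aligned_goal(x) == misaligned_goal(x) for x in test_inputs)

# Deployment then encounters an input outside the tested region:
print(aligned_goal(1000), misaligned_goal(1000))  # 1001 vs. -1000
```

Both functions pass the full test suite; only out-of-distribution use reveals the difference, and there are infinitely many `misaligned_goal` variants with the same property.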
"My paper should thus be sobering," concludes Arvan. "It shows that the real problem in developing safe AI isn't just the AI -- it's us."

"Researchers, legislators and the public may be seduced into falsely believing that 'safe, interpretable, aligned' LLMs are within reach when these things can never be achieved. We need to grapple with these uncomfortable facts, rather than continue to wish them away. Our future may well depend upon it."
AI

Developer Creates Infinite Maze That Traps AI Training Bots 87

An anonymous reader quotes a report from 404 Media: A pseudonymous coder has created and released an open source "tar pit" to indefinitely trap AI training web crawlers in an infinitely, randomly-generating series of pages to waste their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants which trap and consume their prey, can be deployed by webpage owners to protect their own content from being scraped or can be deployed "offensively" as a honeypot trap to waste AI companies' resources.

"It's less like flypaper and more an infinite maze holding a minotaur, except the crawler is the minotaur that cannot get out. The typical web crawler doesn't appear to have a lot of logic. It downloads a URL, and if it sees links to other URLs, it downloads those too. Nepenthes generates random links that always point back to itself -- the crawler downloads those new links. Nepenthes happily just returns more and more lists of links pointing back to itself," Aaron B, the creator of Nepenthes, told 404 Media. "Of course, these crawlers are massively scaled, and are downloading links from large swathes of the internet at any given time," they added. "But they are still consuming resources, spinning around doing nothing helpful, unless they find a way to detect that they are stuck in this loop."
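The mechanism Aaron B describes -- every URL resolves, and every page links only deeper into the maze -- needs just a few lines of server code. A minimal sketch in the spirit of Nepenthes (not its actual implementation, which also deliberately slows its responses to further waste crawler time):

```python
import random
import string
from http.server import BaseHTTPRequestHandler, HTTPServer

def random_path():
    """A fresh, meaningless URL that has never been seen before."""
    return "/" + "".join(random.choices(string.ascii_lowercase, k=12))

def maze_page():
    """An HTML page whose only content is links back into the maze."""
    links = "".join(
        f'<a href="{random_path()}">next</a>\n' for _ in range(10))
    return f"<html><body>{links}</body></html>"

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every request succeeds; there is no way to "finish" the site.
        body = maze_page().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

def serve(port=8080):
    # A naive crawler that follows links will loop here indefinitely.
    HTTPServer(("localhost", port), TarpitHandler).serve_forever()
```

A crawler with no cycle detection or per-domain page budget will fetch pages from this server forever, which is exactly the failure mode the story describes.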
You can try Nepenthes via this link (it loads slowly and links endlessly on purpose).
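The mechanism Aaron B describes is simple to sketch: every page in the maze is just a list of randomly generated links that point back into the maze itself, so a naive crawler that follows every link never runs out of URLs. Below is a minimal, hypothetical illustration of that idea in plain Python (function and parameter names are our own, not from the Nepenthes source):

```python
# Sketch of a "tar pit" page generator in the spirit of Nepenthes:
# each rendered page contains only links that lead deeper into the maze.
import random
import string
from html import escape

def random_slug(rng, length=12):
    """Generate a random path segment for a maze link."""
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(length))

def maze_page(path, links_per_page=10, seed=None):
    """Render an HTML page whose links all point back into the maze."""
    rng = random.Random(seed)
    items = "\n".join(
        f'<li><a href="{escape(path.rstrip("/"))}/{random_slug(rng)}">more</a></li>'
        for _ in range(links_per_page)
    )
    return f"<html><body><ul>\n{items}\n</ul></body></html>"

if __name__ == "__main__":
    # Whatever URL the crawler requests, it gets ten fresh dead-end links.
    print(maze_page("/maze", seed=42))
```

In a real deployment this function would sit behind a web server route that matches any path under the maze prefix; Nepenthes also deliberately serves its pages slowly to tie up crawler resources, which this sketch omits.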
Biotech

OpenAI Has Created an AI Model For Longevity Science (technologyreview.com) 33

OpenAI has developed a language model designed for engineering proteins, capable of converting regular cells into stem cells. It marks the company's first venture into biological data and demonstrates AI's potential for unexpected scientific discoveries. An anonymous reader quotes a report from MIT Technology Review: Last week, OpenAI CEO Sam Altman said he was "confident" his company knows how to build an AGI, adding that "superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own." The protein engineering project started a year ago when Retro Biosciences, a longevity research company based in San Francisco, approached OpenAI about working together. That link-up did not happen by chance: Altman personally funded Retro with $180 million, as MIT Technology Review first reported in 2023. Retro's goal is to extend the normal human lifespan by 10 years. For that, it studies what are called Yamanaka factors. Those are a set of proteins that, when added to a human skin cell, will cause it to morph into a young-seeming stem cell, a type that can produce any other tissue in the body. [...]

OpenAI's new model, called GPT-4b micro, was trained to suggest ways to re-engineer the protein factors to increase their function. According to OpenAI, researchers used the model's suggestions to change two of the Yamanaka factors to be more than 50 times as effective -- at least according to some preliminary measures. [...] The model does not work the same way as Google's AlphaFold, which predicts what shape proteins will take. Since the Yamanaka factors are unusually floppy and unstructured proteins, OpenAI said, they called for a different approach, which its large language models were suited to. The model was trained on examples of protein sequences from many species, as well as information on which proteins tend to interact with one another. While that's a lot of data, it's just a fraction of what OpenAI's flagship chatbots were trained on, making GPT-4b an example of a "small language model" that works with a focused data set.

Once Retro scientists were given the model, they tried to steer it to suggest possible redesigns of the Yamanaka proteins. The prompting tactic used is similar to the "few-shot" method, in which a user queries a chatbot by providing a series of examples with answers, followed by an example for the bot to respond to. Although genetic engineers have ways to direct evolution of molecules in the lab, they can usually test only so many possibilities. And even a protein of typical length can be changed in nearly infinite ways (since they're built from hundreds of amino acids, and each acid comes in 20 possible varieties). OpenAI's model, however, often spits out suggestions in which a third of the amino acids in the proteins were changed. "We threw this model into the lab immediately and we got real-world results," says Retro's CEO, Joe Betts-Lacroix. He says the model's ideas were unusually good, leading to improvements over the original Yamanaka factors in a substantial fraction of cases.

Math

Rational or Not? This Basic Math Question Took Decades To Answer. (quantamagazine.org) 49

Three mathematicians have developed a breakthrough method for proving whether numbers can be written as fractions, solving a problem that has puzzled researchers for decades. Frank Calegari, Vesselin Dimitrov and Yunqing Tang proved the irrationality of an infinite collection of numbers related to the Riemann zeta function, building on Roger Apéry's landmark 1978 proof of the irrationality of a single such number.

The new approach, which relies on 19th-century mathematical techniques, has already helped settle a 50-year-old conjecture about modular forms and could lead to more advances in number theory.
United States

Trump Transition Leaders Call For Eased Tech Immigration Policy 167

theodp writes: In 2012, now-Microsoft President Brad Smith unveiled Microsoft's National Talent Strategy, a two-pronged strategy that called for tech visa restrictions to be loosened to allow tech companies to hire non-U.S. citizens to fill jobs until more American schoolchildren could be made tech-savvy enough to pass hiring standards. Shortly thereafter, tech-backed nonprofit Code.org emerged (led by Smith's next-door neighbor Hadi Partovi with Smith as a founding Board member) with a mission to ensure that U.S. schoolchildren started receiving 'rigorous' computer science education instruction. Around the same time, Mark Zuckerberg's FWD.us PAC launched (with support from Smith, Partovi, and other tech leaders) with a mission to reform tech visa policy to meet tech's need for talent.

Fast forward to 2024, and Newsweek reports the debate over tech immigration policy has been revived, spurred by the recent appointment of Sriram Krishnan as senior policy adviser for AI at the Trump White House. Comments by far-right political activist Laura Loomer on Twitter about Krishnan's call for loosening Green Card restrictions were met with rebuttals from prominent tech leaders who are also serving as members of the Trump transition team. Entrepreneur David Sacks, who Trump has tapped as his cryptocurrency and AI czar, took to social media to clarify that Krishnan advocates for removing country caps on green cards, not eliminating caps entirely, aiming to create a more merit-based system. However, the NY Times reported that Sacks discussed a much broader visa reform proposal with Trump during a June podcast ("What I will do is," Trump told Sacks, "you graduate from a college, I think you should get automatically, as part of your diploma, a green card to be able to stay in this country"). Elon Musk, the recently appointed co-head of Trump's new Dept. of Government Efficiency (DOGE) had Sacks' and Krishnan's backs (not unexpected -- both were close Musk advisors on his Twitter purchase), tweeting out "Makes sense" to his 209 million followers, lamenting that "the number of people who are super talented engineers AND super motivated in the USA is far too low," reposting claims crediting immigrants for 36% of the innovation in the U.S., and taking USCIS to task for failing to immediately recognize his own genius with an Exceptional Ability Green Card (for his long-defunct Zip2 startup).

Vivek Ramaswamy, who Trump has tapped to co-lead DOGE with Musk, agreed and fanned the Twitter flames with a pinned Tweet of his own explaining, "The reason top tech companies often hire foreign-born -- first-generation engineers over 'native' Americans isn't because of an innate American IQ deficit (a lazy -- wrong explanation). A key part of it comes down to the c-word: culture." (Colorado Governor Jared Polis also took to Twitter to agree with Musk and Ramaswamy on the need to import 'elite engineers'). And Code.org CEO Partovi joined the Twitter fray, echoing the old we-need-H1B-visas-to-make-US-schoolchildren-CS-savvy argument of Microsoft's 2012 National Talent Strategy. "Did you know 2/3 of H1B visas are for computer scientists?" Partovi wrote in reply to Musk, Loomer, and Sacks. "The H1B program raises $500M/year (from its corporate sponsors) and all that money is funneled into programs at Labor and NSF without focus to grow local CS talent. Let's fund CS education." The NYT also cited Zuckerberg's earlier efforts to influence immigration policy with FWD.us (which also counted Sacks and Musk as early supporters), taking note of Zuck's recent visit to Mar-a-Lago and Meta's $1 million donation to Trump's upcoming inauguration.

So, who is to be believed? Musk, who attributes any tech visa qualms to "a 'fixed pie' fallacy that is at the heart of much wrong-headed economic thinking" and argues that "there is essentially infinite potential for job and company creation ['We should let anyone in the country who is hardworking and honest and will be a contributor to the United States,' Musk has said]"? Or economists who have found that immigration and globalization is not quite the rising-tide-that-raises-all-boats it's been cracked up to be?
Android

Google Announces Android XR, Launching 2025 On Samsung Headset (9to5google.com) 6

An anonymous reader quotes a report from 9to5Google: Besides phones and tablets, Android is available on smartwatches, TVs, and even cars. Google today announced Android XR as the next form factor the operating system is coming to. Google is using the catch-all term of extended reality (XR) to describe virtual (VR), mixed (MR), and augmented reality (AR). Android XR is for all device types, including headsets that offer video or optical see-through, screen-less "AI glasses," and AR glasses with displays. Going into Android XR, Google believes it has a proven track record of creating platforms. That means more than just making an operating system for itself: it also involves catering to OEM partners, cultivating a developer ecosystem, and managing an app store.

[...] Google says Android XR is the first OS built from the ground up with Gemini. Google and Samsung are starting with the headset, which both consider a good starting point. Samsung has a developer kit called Project Moohan (or "infinity" in Korean) that is lightweight, has an external battery, and is powered by the Snapdragon XR2+ Gen 2. Google imagines Android XR headsets as offering an infinite desktop for productivity. In this scenario, you're at a desk with a physical keyboard and mouse. A few partners already have this dev kit, and more are being distributed starting this week. Meanwhile, first-party apps like Chrome, YouTube, Google TV, Google Photos, and Google Maps are being optimized for Android XR.

However, glasses are the end goal, and frames running Android XR are coming for "directions, translations or message summaries without reaching for your phone," though they are paired like any other wearable. The final realization of this vision is an in-lens display. However, Google does not think that displays are a must, and this opens the door to display-less glasses that have microphones and cameras for input, while Gemini capably handles output. Google will "soon begin real-world testing of prototype glasses running Android XR with a small group of users."
With today's launch, Google is releasing the Android XR SDK Developer Preview and an Android XR Emulator.

You can get a glimpse into the world of Android XR via this YouTube video.
