Space

Rubin Observatory Has Started Paging Astronomers 800,000 Times a Night (scientificamerican.com) 21

On February 24th, the Vera C. Rubin Observatory activated its automated alert system, sending out roughly 800,000 real-time notifications flagging asteroids, supernovae, flaring black holes and "other transient celestial events," reports Scientific American. And this is only the beginning -- that number is projected to climb into the millions as it continues scanning the ever-changing sky. From the report: The astronomical observatory equipped with the world's largest camera hit a key milestone on February 24, when a complex data-processing system pushed hundreds of thousands of alerts out to scientists eager to pore over its most exciting sightings. The Vera C. Rubin Observatory began operations last year, capturing stunning, panoramic time-lapse views of the cosmos with ease. Rubin's first images, based on just 10 hours of observations, let space fans zoom seemingly forever into an overwhelmingly starry sky. But watchful astronomers were always awaiting the next step: the system that would automatically alert them to the most promising activity in the sky overhead amid the 1,000 or so enormous images that Rubin's telescope captures every night.

"We can detect everything that changes, moves and appears," said Yusra AlSayyad, an astronomer at Princeton University and Rubin's deputy associate director for data management, to Scientific American last summer. "It's way too much for one person to manually sift through and filter and monitor themselves." So even as they were designing and building the Rubin Observatory itself, scientists were also designing an alert system to help astronomers navigate the flood of data. As soon as the telescope began observations, the team started constructing a static reference image of the entire sky in impeccable detail.

Now the data processing systems that support the observatory are starting to automatically compare every new Rubin image to the corresponding section of that background template. The systems identify all of the differences, each of which is individually flagged. The algorithms can also distinguish between a potential supernova and a possible newfound asteroid, for example. Alerting the scientific community is the final, crucial step. Astronomers -- as well as members of the public -- can sign up for notifications based on the type of sighting they're interested in and the brightness of the observation in question. And now that the alerts system has gone live, users receive a tiny, fuzzy image with some astronomical metadata of each observation that fits their criteria -- all just a couple of minutes after Rubin captures the original image.
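The technique described here is difference imaging: subtract a deep reference template from each new exposure and flag whatever remains above the noise. A minimal sketch of the idea, with hypothetical function names and a toy subscription filter (the actual Rubin pipeline is vastly more sophisticated, handling image alignment, point-spread-function matching and source classification):

```python
import numpy as np

def find_transients(new_image, template, threshold=5.0):
    """Flag pixels that differ significantly from the reference template.

    Both inputs are 2-D arrays covering the same patch of sky; `threshold`
    is in units of the difference image's standard deviation.
    """
    diff = new_image - template
    sigma = diff.std()
    # Anything far above the noise floor is a candidate transient.
    return np.argwhere(np.abs(diff) > threshold * sigma)

def matches_subscription(alert, wanted_types, max_magnitude):
    """Keep only alerts of a subscribed type that are bright enough.

    (Astronomical magnitudes run backwards: lower means brighter.)
    """
    return alert["type"] in wanted_types and alert["magnitude"] <= max_magnitude

# Toy example: a flat template plus one new bright source in the exposure.
template = np.zeros((100, 100))
new = template.copy()
rng = np.random.default_rng(0)
new += rng.normal(0, 1.0, new.shape)   # background noise
new[40, 60] += 50.0                    # a "transient" appears
candidates = find_transients(new, template)
```

Each flagged difference would then be packaged with a cutout image and metadata and streamed to subscribers, which is the step that went live on February 24.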

Moon

Asteroid 2024 YR4 Has a 4% Chance of Hitting the Moon (universetoday.com) 31

An anonymous reader quotes a report from Universe Today: There's a bright side to every situation. In 2032, the Moon itself might have a particularly bright side if it is blasted by a 60-meter-wide asteroid. The chances of such an event are still relatively small (only around 4%), but non-negligible. And scientists are starting to prepare both for the bad (massive risks to satellites and huge meteors raining down on a large portion of the planet) and the good (a once-in-a-lifetime chance to study the geology, seismology, and chemical makeup of our nearest neighbor). A new paper from Yifan He of Tsinghua University and co-authors, released in pre-print form on arXiv, looks at the bright side of all of the potential interesting science we can do if a collision does, indeed, happen. If Asteroid 2024 YR4 were to hit the Moon, researchers would be able to watch a large lunar impact unfold in real time and collect data on extreme collisions that usually exist only in computer models. Telescopes could follow how a newly formed crater and its pool of molten rock cool and solidify, while the resulting moonquake would send seismic waves through the Moon, offering a clearer picture of its internal structure.

Furthermore, researchers could compare the fresh crater to older ones to improve our understanding of the Moon's long history of impacts. Debris blasted off the surface could even deliver small lunar samples to Earth.

Altogether, it would be a once-in-a-generation chance to learn more about how the Moon and other rocky worlds respond to powerful impacts.
Biotech

Microsoft Says AI Can Create 'Zero Day' Threats In Biology (technologyreview.com) 29

An anonymous reader quotes a report from MIT Technology Review: A team at Microsoft says it used artificial intelligence to discover a "zero day" vulnerability in the biosecurity systems used to prevent the misuse of DNA. These screening systems are designed to stop people from purchasing genetic sequences that could be used to create deadly toxins or pathogens. But now researchers led by Microsoft's chief scientist, Eric Horvitz, say they have figured out how to bypass the protections in a way previously unknown to defenders. The team described its work today in the journal Science.

Horvitz and his team focused on generative AI algorithms that propose new protein shapes. These types of programs are already fueling the hunt for new drugs at well-funded startups like Generate Biomedicines and Isomorphic Labs, a spinout of Google. The problem is that such systems are potentially "dual use." They can use their training sets to generate both beneficial molecules and harmful ones. Microsoft says it began a "red-teaming" test of AI's dual-use potential in 2023 in order to determine whether "adversarial AI protein design" could help bioterrorists manufacture harmful proteins.

The safeguard that Microsoft attacked is what's known as biosecurity screening software. To manufacture a protein, researchers typically need to order a corresponding DNA sequence from a commercial vendor, which they can then install in a cell. Those vendors use screening software to compare incoming orders with known toxins or pathogens. A close match will set off an alert. To design its attack, Microsoft used several generative protein models (including its own, called EvoDiff) to redesign toxins -- changing their structure in a way that let them slip past screening software but was predicted to keep their deadly function intact.
"This finding, combined with rapid advances in AI-enabled biological modeling, demonstrates the clear and urgent need for enhanced nucleic acid synthesis screening procedures coupled with a reliable enforcement and verification mechanism," says Dean Ball, a fellow at the Foundation for American Innovation, a think tank in San Francisco.
Music

Spotify Peeved After 10,000 Users Sold Data To Build AI Tools (arstechnica.com) 17

An anonymous reader quotes a report from Ars Technica: For millions of Spotify users, the "Wrapped" feature -- which crunches the numbers on their annual listening habits -- is a highlight of every year's end, ever since it debuted in 2015. NPR once broke down exactly why our brains find the feature so "irresistible," while Cosmopolitan last year declared that sharing Wrapped screenshots of top artists and songs had by now become "the ultimate status symbol" for tens of millions of music fans. It's no surprise then that, after a decade, some Spotify users who are especially eager to see Wrapped evolve are no longer willing to wait to see if Spotify will ever deliver the more creative streaming insights they crave.

With the help of AI, these users expect that their data can be more quickly analyzed to potentially uncover overlooked or never-considered patterns that could offer even more insights into what their listening habits say about them. Imagine, for example, accessing a music recap that encapsulates a user's full listening history -- not just their top songs and artists. With that unlocked, users could track emotional patterns, analyzing how their music tastes reflected their moods over time and perhaps helping them adjust their listening habits to better cope with stress or major life events. And for users particularly intrigued by their own data, there's even the potential to use AI to cross data streams from different platforms and perhaps understand even more about how their music choices impact their lives and tastes more broadly.

Likely just as appealing as gleaning deeper personal insights, though, users could also potentially build AI tools to compare listening habits with their friends. That could lead to nearly endless fun for the most invested music fans, where AI could be tapped to assess all kinds of random data points, like whose breakup playlists are more intense or who really spends the most time listening to a shared favorite artist. In pursuit of supporting developers offering novel insights like these, more than 18,000 Spotify users have joined "Unwrapped," a collective launched in February that allows them to pool and monetize their data.

Voting as a group through the decentralized data platform Vana -- which Wired profiled earlier this year -- these users can elect to sell their dataset to developers who are building AI tools offering fresh ways for users to analyze streaming data in ways that Spotify likely couldn't or wouldn't. In June, the group made its first sale, with 99.5 percent of members voting yes. Vana co-founder Anna Kazlauskas told Ars that the collective -- at the time about 10,000 members strong -- sold a "small portion" of its data (users' artist preferences) for $55,000 to Solo AI. While each Spotify user only earned about $5 in cryptocurrency tokens -- which Kazlauskas suggested was not "ideal," wishing the users had earned about "a hundred times" more -- she said the deal was "meaningful" in showing Spotify users that their data "is actually worth something."
Spotify responded to the collective by citing both trademark and policy violations. The company sent a letter to Unwrapped developers, warning that the project's name may infringe on Spotify's Wrapped branding, and that Unwrapped breaches developer terms. Specifically, Spotify objects to Unwrapped's use of platform data for AI/ML training and facilitating user data sales.

"Spotify honors our users' privacy rights, including the right of portability," Spotify's spokesperson said. "All of our users can receive a copy of their personal data to use as they see fit. That said, UnwrappedData.org is in violation of our Developer Terms which prohibit the collection, aggregation, and sale of Spotify user data to third parties."

Unwrapped says it plans to defend users' right to "access, control, and benefit from their own data," while providing reassurances that it will "respect Spotify's position as a global music leader."
Supercomputing

Startup Puts a Logical Qubit In a Single Piece of Hardware (arstechnica.com) 5

Startup Nord Quantique has demonstrated that a single piece of hardware can host an error-detecting logical qubit by using two quantum frequencies within one resonator. The breakthrough has the potential to slash the hardware demands for quantum error correction and deliver more compact and efficient quantum computing architectures. Ars Technica reports: The company did two experiments with this new hardware. First, it ran multiple rounds of error detection on data stored in the logical qubit, essentially testing its ability to act like a quantum memory and retain the information stored there. Without correcting errors, the system rapidly decayed, with an error probability in each round of measurement of about 12 percent. By the time the system reached the 25th measurement, almost every instance had already encountered an error. The second time through, the company repeated the process, discarding any instances in which an error occurred. In almost every instance, that meant the results were discarded long before they got through two dozen rounds of measurement. But at these later stages, none of the remaining instances were in an erroneous state. That indicates that a successful correction of the errors -- something the team didn't try -- would be able to fix all the detected problems.
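The post-selection numbers can be sanity-checked with a short simulation. The 12 percent per-round error probability comes from the article; everything else below is illustrative:

```python
import random

ERROR_PER_ROUND = 0.12   # per-measurement error probability from the article
ROUNDS = 25
TRIALS = 100_000

random.seed(42)
survivors = 0
for _ in range(TRIALS):
    # An instance "survives" only if no round flags an error.
    if all(random.random() > ERROR_PER_ROUND for _ in range(ROUNDS)):
        survivors += 1

# Analytically, the survival probability is 0.88**25, about 4 percent,
# consistent with "almost every instance had already encountered an error"
# by the 25th measurement.
print(survivors / TRIALS)
```

This is why discarding errored instances throws away nearly all the data: only a few percent of runs make it through two dozen clean rounds of measurement.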

Several other companies have already performed experiments in which errors were detected -- and corrected. In a few instances, companies have even performed operations with logical qubits, although these were not sophisticated calculations. Nord Quantique, in contrast, is only showing the operation of a single logical qubit, so it's not even possible to test a two-qubit gate operation using the hardware it has described so far. So simply being able to identify the occurrence of errors is not on the cutting edge. Why is this notable?

All the other companies require multiple hardware qubits to host a single logical qubit. Since building many hardware qubits has been an ongoing challenge, most researchers have plans to minimize the number of hardware qubits needed to support a logical qubit -- some combination of high-quality hardware, a clever error correction scheme, and/or a hardware-specific feature that catches the most common errors. You can view Nord Quantique's approach as being at the extreme end of the spectrum of solutions, where the number of hardware qubits required is simply one. From Nord Quantique's perspective, that's significant because it means that its hardware will ultimately occupy less space and have lower power and cooling requirements than some of its competitors. (Other hardware, like neutral atoms, requires lots of lasers and a high vacuum, so the needs are difficult to compare.) But it also means that, should it become technically difficult to get large numbers of qubits to operate as a coherent whole, Nord Quantique's approach may ultimately help us overcome some of these limits.

Google

Google Is Rolling Out AI Mode To Everyone In the US (engadget.com) 44

Google has unveiled a major overhaul of its search engine with the introduction of AI Mode -- a new feature that works like a chatbot, enabling users to ask follow-up questions and receive detailed, conversational answers. Announced at the I/O 2025 conference, the feature is now being rolled out to all Search users in the U.S. Engadget reports: Google first began previewing AI Mode with testers in its Labs program at the start of March. Since then, it has been gradually rolling out the feature to more people, including in recent weeks regular Search users. At its keynote today, Google shared a number of updates coming to AI Mode as well, including new tools for shopping, the ability to compare ticket prices for you, and custom charts and graphs for queries on finance and sports.

For the uninitiated, AI Mode is a chatbot built directly into Google Search. It lives in a separate tab, and was designed by the company to tackle more complicated queries than people have historically used its search engine to answer. For instance, you can use AI Mode to generate a comparison between different fitness trackers. Before today, the chatbot was powered by Gemini 2.0. Now it's running a custom version of Gemini 2.5. What's more, Google plans to bring many of AI Mode's capabilities to other parts of the Search experience.

Looking to the future, Google plans to bring Deep Search, an offshoot of its Deep Research mode, to AI Mode. [...] Another new feature that's coming to AI Mode builds on the work Google did with Project Mariner, the web-surfing AI agent the company began previewing with "trusted testers" at the end of last year. This addition gives AI Mode the ability to complete tasks for you on the web. For example, you can ask it to find two affordable tickets for the next MLB game in your city. AI Mode will compare "hundreds of potential" tickets for you and return with a few of the best options. From there, you can complete a purchase without having done the comparison work yourself. [...] All of the new AI Mode features Google previewed today will be available to Labs users first before they roll out more broadly.

NASA

Is There New Evidence for a 9th Planet - Planet X? (discovermagazine.com) 145

This week Discover magazine looks at evidence — both old and new — for a ninth planet in our solar system: "Orbits of the most distant small bodies — comets or asteroids — seem to be clustered on one half or one side of the solar system," says Amir Siraj [an astrophysicist with Princeton University]. "That's very weird and something that can't be explained by our current understanding of the solar system." A 2014 study in Nature first noted these orbits. A 2021 study in The Astronomical Journal examined the clustering in the orbit and concluded that "Planet Nine" was likely closer and brighter than expected.

Astrophysicists don't agree whether the clustering in the orbit is a real effect. Some have argued it is biased because the view that scientists currently have is limited, Siraj says. "This debate for the last decade has a lot of scientists confused, including myself. I decided to look at the problem from scratch," he says.

In a 2024 paper, Siraj and his co-authors ran simulations of the solar system, including an extra planet beyond Neptune. "We did it 300 times, about 2.5 times more than what was done previously," Siraj says. "In each simulation, you try different parameters for the extra planet. A different mass, a different tilt, a different shape of the orbit. You run these for millions of years, and then you compare the distribution to what we see in our solar system...." They found that the parameters for this possible planet were different than what has been previously discussed in the scientific literature, and they supported the possibility of an unseen planet beyond Neptune.

Scientists hope a new telescope will have the potential to see deeper into the solar system. In 2025, the Vera C. Rubin Observatory, on Cerro Pachón, a mountain in Chile, is expected to go online. The observatory boasts that in the time it takes a person to open up their phone and pose for a selfie, its new telescope will be able to snap an image of 100,000 galaxies, many of which have never been seen by scientists. The telescope will have the largest digital camera ever built, the LSST Camera. Siraj says he expects it will take "the deepest, all-sky survey that humanity has ever conducted." So, what might the Rubin Observatory find past Neptune? Based on the current literature, Siraj sees a few possibilities. One is that the Rubin Observatory, with its increased capabilities, might be able to see a planet beyond Neptune... "Next year is going to be an enormous year for solar system science," he says.

NASA points out that the Hawaii-based Keck and Subaru telescopes are also searching for Planet X, while "a NASA-funded citizen science project called Backyard Worlds: Planet 9 encourages the public to help search using images captured by NASA's Wide-field Infrared Survey Explorer (WISE) mission."

And starting next year the Rubin Observatory will also "search for more Kuiper Belt objects. If the orbits of these objects are systematically aligned with each other, it may give more evidence for the existence of Planet X (Planet Nine), or at least help astronomers know where to search for it.

"Another possibility is that Planet X (Planet Nine) does not exist at all. Some researchers suggest the unusual orbit of those Kuiper Belt objects can be explained by their random distribution."

Thanks to long-time Slashdot reader Tablizer for sharing the news.
Python

Python Foundation Nonprofit Fixes Bylaw Loophole That Left 'Virtually Unlimited' Financial Liability (blogspot.com) 16

The Python Software Foundation's board "was alerted to a defect in our bylaws that exposes the Foundation to an unbounded financial liability," according to a blog post Friday: Specifically, Bylaws Article XIII as originally written compels the Python Software Foundation to extend indemnity coverage to individual Members (including our thousands of "Basic Members") in certain cases, and to advance legal defense expenses to individual Members with surprisingly few restrictions. Further, the Bylaws compel the Foundation to take out insurance to cover these requirements; however, insurance of this nature is not actually available for 501(c)(3) nonprofit corporations such as the Python Software Foundation to purchase, and thus it is impossible in practice to comply with this requirement.

In the unlikely but not impossible event of the Foundation being called upon to advance such expenses, the potential financial burden would be virtually unlimited, and there would be no recourse to insurance. As this is an existential threat to the Foundation, the Board has agreed that it must immediately reduce the Foundation's exposure, and has opted to exercise its ability to amend the Bylaws by a majority vote of the Board directors, rather than by putting it to a vote of the membership, as allowed by Bylaws Article XI.

Acting on legal advice, the full Board has voted unanimously to amend its Bylaws to no longer extend an offer to indemnify, advance legal expenses, or insure Members when they are not serving at the request of the Foundation. The amended Bylaws still allow for indemnification of a much smaller set of individuals acting on behalf of the PSF such as Board Members and officers, which is in line with standard nonprofit governance practices and for which we already hold appropriate insurance.

Another blog post notes "the recent slew of conversations, initially kicked off in response to a bylaws change proposal, has been pretty alienating for many members of our community."

- After the conversation on PSF-Vote had gotten pretty ugly, forty-five people out of ~1000 unsubscribed. (That list has since been put on announce-only)

- We received a lot of Code of Conduct reports or moderation requests about the PSF-vote mailing list and the discuss.python.org message board conversations. (Several reports have already been acted on or closed and the rest will be soon).

- PSF staff received private feedback that the blanket statements about "neurodiverse people", the bizarre motives ascribed to the people in charge of the PSF and various volunteers and the sideways comments about the kinds of people making reports were also very off-putting.

Cloud

Could We Lower The Carbon Footprint of Data Centers By Launching Them Into Space? (cnbc.com) 114

The Wall Street Journal reports that a European initiative studying the feasibility of data centers in space "has found that the project could be economically viable" — while reducing the data center's carbon footprint.

And they add that according to coordinator Thales Alenia Space, the project "could also generate a return on investment of several billion euros between now and 2050." The study — dubbed Ascend, short for Advanced Space Cloud for European Net zero emission and Data sovereignty — was funded by the European Union and sought to compare the environmental impacts of space-based and Earth-based data centers, the company said. Moving forward, the company plans to consolidate and optimize its results. Space data centers would be powered by solar energy outside the Earth's atmosphere, aiming to contribute to the European Union's goal of achieving carbon neutrality by 2050, the project coordinator said... Space data centers wouldn't require water to cool them, the company said.
The 16-month study came to a "very encouraging" conclusion, project manager Damien Dumestier told CNBC. With some caveats... The facilities that the study explored launching into space would orbit at an altitude of around 1,400 kilometers (869.9 miles) — about three times the altitude of the International Space Station. Dumestier explained that ASCEND would aim to deploy 13 space data center building blocks with a total capacity of 10 megawatts in 2036, in order to achieve the starting point for cloud service commercialization... The study found that, in order to significantly reduce CO2 emissions, a new type of launcher that is 10 times less emissive would need to be developed. ArianeGroup, one of the 12 companies participating in the study, is working to speed up the development of such reusable and eco-friendly launchers. The target is to have the first eco-launcher ready by 2035 and then to allow for 15 years of deployment in order to have the huge capacity required to make the project feasible, said Dumestier...

Michael Winterson, managing director of the European Data Centre Association, acknowledges that a space data center would benefit from increased efficiency from solar power without the interruption of weather patterns — but the center would require significant amounts of rocket fuel to keep it in orbit. Winterson estimates that even a small 1 megawatt center in low earth orbit would need around 280,000 kilograms of rocket fuel per year at a cost of around $140 million in 2030 — a calculation based on a significant decrease in launch costs, which has yet to take place. "There will be specialist services that will be suited to this idea, but it will in no way be a market replacement," said Winterson. "Applications that might be well served would be very specific, such as military/surveillance, broadcasting, telecommunications and financial trading services. All other services would not competitively run from space," he added in emailed comments.
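As a quick sanity check on Winterson's estimate, his figures imply a launch cost of roughly $500 per kilogram of station-keeping fuel (the variable names below are just for illustration):

```python
# Winterson's estimate for keeping a 1 MW data center in low Earth orbit
fuel_kg_per_year = 280_000
annual_cost_usd = 140_000_000

cost_per_kg = annual_cost_usd / fuel_kg_per_year
print(cost_per_kg)  # 500.0 — i.e. the estimate assumes ~$500/kg to orbit
```

That $500/kg figure is well below today's launch prices, which is why he flags the calculation as depending on "a significant decrease in launch costs, which has yet to take place."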

[Merima Dzanic, head of strategy and operations at the Danish Data Center Industry Association] also signaled some skepticism around security risks, noting, "Space is being increasingly politicised and weaponized amongst the different countries. So obviously, there is a security implications on what type of data you send out there."

It's not the only study looking at the potential of orbital data centers, notes CNBC. "Microsoft, which has previously trialed the use of a subsea data center that was positioned 117 feet deep on the seafloor, is collaborating with companies such as Loft Orbital to explore the challenges in executing AI and computing in space."

The article also points out that the total global electricity consumption from data centers could exceed 1,000 terawatt-hours in 2026. "That's roughly equivalent to the electricity consumption of Japan, according to the International Energy Agency."
Transportation

Feds Probe Waymo Driverless Cars Hitting Parked Cars, Drifting Into Traffic (arstechnica.com) 57

An anonymous reader quotes a report from Ars Technica: Crashing into parked cars, drifting over into oncoming traffic, intruding into construction zones -- all this "unexpected behavior" from Waymo's self-driving vehicles may be violating traffic laws, the US National Highway Traffic Safety Administration (NHTSA) said (PDF) Monday. To better understand Waymo's potential safety risks, NHTSA's Office of Defects Investigation (ODI) is now looking into 22 incident reports involving cars equipped with Waymo's fifth-generation automated driving system. Seventeen incidents involved collisions, but none involved injuries.

Some of the reports came directly from Waymo, while others "were identified based on publicly available reports," NHTSA said. The reports document single-party crashes into "stationary and semi-stationary objects such as gates and chains" as well as instances in which Waymo cars "appeared to disobey traffic safety control devices." The ODI plans to compare notes between incidents to decide if Waymo cars pose a safety risk or require updates to prevent malfunctioning. There is already evidence from the ODI's initial evaluation showing that Waymo's automated driving systems (ADS) were either "engaged throughout the incident" or abruptly "disengaged in the moments just before an incident occurred," NHTSA said. The probe is the first step before NHTSA can issue a potential recall, Reuters reported.
A Waymo spokesperson said the company currently serves "over 50,000 weekly trips for our riders in some of the most challenging and complex environments." When a collision occurs, Waymo reviews each case and continually updates the ADS software to enhance performance.

"We are proud of our performance and safety record over tens of millions of autonomous miles driven, as well as our demonstrated commitment to safety transparency," Waymo's spokesperson said, confirming that Waymo would "continue to work" with the ODI to enhance ADS safety.
AI

Stanford Releases AI Index Report 2024 26

Top takeaways from Stanford's new AI Index Report [PDF]:
1. AI beats humans on some tasks, but not on all. AI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning and planning.
2. Industry continues to dominate frontier AI research. In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.
3. Frontier models get way more expensive. According to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI's GPT-4 used an estimated $78 million worth of compute to train, while Google's Gemini Ultra cost $191 million for compute.
4. The United States leads China, the EU, and the U.K. as the leading source of top AI models. In 2023, 61 notable AI models originated from U.S.-based institutions, far outpacing the European Union's 21 and China's 15.
5. Robust and standardized evaluations for LLM responsibility are seriously lacking. New research from the AI Index reveals a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models.
6. Generative AI investment skyrockets. Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.
7. The data is in: AI makes workers more productive and leads to higher quality work. In 2023, several studies assessed AI's impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. These studies also demonstrated AI's potential to bridge the skill gap between low- and high-skilled workers. Still, other studies caution that using AI without proper oversight can lead to diminished performance.
8. Scientific progress accelerates even further, thanks to AI. In 2022, AI began to advance scientific discovery. 2023, however, saw the launch of even more significant science-related AI applications -- from AlphaDev, which makes algorithmic sorting more efficient, to GNoME, which facilitates the process of materials discovery.
9. The number of AI regulations in the United States sharply increases. The number of AI-related regulations in the U.S. has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%.
10. People across the globe are more cognizant of AI's potential impact -- and more nervous. A survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022. In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 37% in 2022.
Technology

'Cory Doctorow Has a Plan To Wipe Away the Enshittification of Tech' (theregister.com) 206

In an interview with The Register, author and activist Cory Doctorow offers potential solutions to stop "enshittification," an age-old phenomenon that has become endemic in the tech industry. It's when a platform that was once highly regarded and user-friendly gradually deteriorates in quality, becoming less appealing and more monetized over time. Then, it dies. Here's an excerpt from the interview, conducted by The Register's Iain Thomson: [...] Doctorow explained that the reasons for enshittification are complex, and not necessarily directly malicious -- but a product of the current business environment and the state of regulation. He thinks the way to flush enshittification is enforcing effective competition. "We need to have prohibition and regulation that prohibits the capital markets from funding predatory pricing," he explained. "It's very hard to enter the market when people are selling things below cost. We need to prohibit predatory acquisitions. Look at Facebook: buying Instagram, and Mark Zuckerberg sending an email saying we're buying Instagram because people don't like Facebook and they're moving to Instagram, and we just don't want them to have anywhere else to go."

The frustrating part of this is that the laws needed to break up the big tech monopolies that allow enshittification, and encourage competition, are already on the books. Doctorow lamented that those laws haven't been enforced. In the US, the Clayton Act, the Federal Trade Commission Act, and the Sherman Act are all valid, but have either not been enforced or are being questioned in the courts. However, in the last few years that appears to be changing. Recent actions by increasingly muscular regulatory agencies like the FTC and FCC are starting to move against the big tech monopolies, as well as those in other industry sectors. What's more, Doctorow pointed out, these are not just springing from the Democratic administration but are being actively supported by an increasing number of Republicans. He cited Lina Khan, appointed as chair of the FTC in part thanks to the support of Republican politicians seeking change (although the GOP now regularly criticizes her positions).

The concentration of the largest tech companies certainly gives them an advantage in cases like these, Doctorow opined, since unity can matter more than sheer size, as we saw more than 20 years ago. "Think back to the Napster era, and compare tech and entertainment. Entertainment was very concentrated into about seven big firms and they had total unity and message discipline," Doctorow recalled. "Tech was a couple of hundred firms, and they were much larger -- like an order of magnitude larger in aggregate than entertainment. But their messages were all over the place, and they were contradicting each other. And so they just lost, and they lost very badly."
Doctorow discusses the detrimental effects of mega-companies on innovation and security, noting how growth strategies focused on raising costs and reducing value can lead to vulnerabilities and employee demoralization. "Remember when tech workers dreamed of working for a big company before striking out on their own to put that big company out of business? Then that dream shrank to working for a few years, quitting and doing a fake startup to get hired back by your old boss in the world's most inefficient way to get a raise," he told the Def Con crowd last August. "Next it shrank even further. You're working for a tech giant your whole life but you get free kombucha and massages. And now that dream is over and all that's left is work with a tech giant until they fire your ass -- like those 12,000 Googlers who got fired six months after a stock buyback that would have paid their salaries for the next 27 years. We deserve better than this."

Additionally, Doctorow emphasizes the growing movement toward labor organizing in the tech industry, which could be a pivotal factor in reversing the trend of enshittification. "We're so much closer to tech unionization than we were just a few years ago. Yeah, it's still nascent, and yes, it's easy to double small numbers, but the strength is doubling very quickly and in a very heartening way," Doctorow told The Register. "We're really at a turning point. And some of it is coming from the kind of solidarity like you see with warehouse workers and tech workers."

Ultimately, Doctorow argues it should be possible to reintroduce a more competitive and innovative tech industry environment, where the interests of users, employees, and investors are better balanced.
Medicine

Common Energy Drink Ingredient Taurine 'May Slow Aging Process' 140

Scientists are calling for a major clinical trial to investigate the potential benefits of supplementation with taurine, a substance commonly found in energy drinks. Animal studies have shown that restoring taurine to more youthful levels can slow the aging process, improve health, and even extend lifespan in mice. The Guardian reports: Prof Henning Wackerhage, a molecular exercise physiologist on the team at the Technical University of Munich, said a trial would compare how humans fared after taking daily taurine or placebo supplements. "It will probably be very difficult to look at whether they live longer, but at least we can check if they live healthier for longer, and that of course is the goal for medicine."

Vijay Yadav's team at Columbia University homed in on taurine as a potential driver of the ageing process in 2012 when an analysis of blood compounds found that levels of the amino acid dropped dramatically with age in mice, monkeys and humans. By the age of 60, taurine levels in a typical person slumped to one-third of those seen in five-year-olds, they found. The discovery prompted the team to test the impact of extra taurine on middle-aged mice. "Whatever we checked, taurine-supplemented mice were healthier and appeared younger than the control mice," Yadav said, noting they had denser bones, stronger muscles, better memory and younger looking immune systems. "Taurine made animals live healthier and longer lives by affecting all the major hallmarks of ageing." Beyond improving health, mice on taurine lived longer -- on average an extra 10% for males and 12% for females, amounting to an additional three to four months, the equivalent of seven or eight human years. A comparable dose for humans would be three to six grams a day.

The scientists next looked at whether boosting taurine benefited animals that were much closer biologically to humans. A six-month trial in middle-aged macaques found that a daily taurine pill appeared to boost health by preventing weight gain, lowering blood glucose and improving bone density and the immune system. Other evidence suggests taurine supplementation may have some effect in humans. Yadav and his team analysed medical data from 12,000 Europeans aged 60 and over. Those with higher taurine levels had less obesity, type 2 diabetes and high blood pressure, and lower levels of inflammation. Strenuous sessions on an exercise bike were found to boost taurine levels, the researchers report in Science.

Without a major trial to demonstrate the safety or any benefits of taurine supplements, the scientists are not recommending people boost their intake through pills, energy drinks or dietary changes. Taurine is made naturally in the body and is found in meat and shellfish, though the healthiest diets are largely plant-based. Some energy drinks contain taurine, but the scientists warn they also contain other substances that may not be safe to consume at high levels.
AI

Microsoft's New AI Can Simulate Anyone's Voice With 3 Seconds of Audio (arstechnica.com) 71

An anonymous reader quotes a report from ArsTechnica: On Thursday, Microsoft researchers announced a new text-to-speech AI model called VALL-E that can closely simulate a person's voice when given a three-second audio sample. Once it learns a specific voice, VALL-E can synthesize audio of that person saying anything -- and do it in a way that attempts to preserve the speaker's emotional tone. Its creators speculate that VALL-E could be used for high-quality text-to-speech applications, speech editing where a recording of a person could be edited and changed from a text transcript (making them say something they originally didn't), and audio content creation when combined with other generative AI models like GPT-3.

Microsoft calls VALL-E a "neural codec language model," and it builds off of a technology called EnCodec, which Meta announced in October 2022. Unlike other text-to-speech methods that typically synthesize speech by manipulating waveforms, VALL-E generates discrete audio codec codes from text and acoustic prompts. It basically analyzes how a person sounds, breaks that information into discrete components (called "tokens") thanks to EnCodec, and uses training data to match what it "knows" about how that voice would sound if it spoke other phrases outside of the three-second sample. Or, as Microsoft puts it in the VALL-E paper (PDF): "To synthesize personalized speech (e.g., zero-shot TTS), VALL-E generates the corresponding acoustic tokens conditioned on the acoustic tokens of the 3-second enrolled recording and the phoneme prompt, which constrain the speaker and content information respectively. Finally, the generated acoustic tokens are used to synthesize the final waveform with the corresponding neural codec decoder."
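The core idea -- turning continuous audio into discrete tokens that a language model can then predict -- can be illustrated with a toy vector quantizer. The codebook and sample values below are invented for illustration only; EnCodec learns its codebooks from data and operates on short audio frames, not individual samples.

```python
# Toy illustration of the discrete-token idea behind neural audio codecs.
# The codebook and "waveform" are invented; real codecs like EnCodec learn
# codebooks over short audio frames rather than single samples.

CODEBOOK = [-0.8, -0.3, 0.0, 0.4, 0.9]  # hypothetical learned code values

def encode(samples):
    """Map each sample to the index of the nearest codebook entry."""
    return [min(range(len(CODEBOOK)), key=lambda i: abs(CODEBOOK[i] - s))
            for s in samples]

def decode(tokens):
    """Map token indices back to codebook values (a lossy reconstruction)."""
    return [CODEBOOK[t] for t in tokens]

waveform = [0.35, -0.75, 0.05, 0.88]
tokens = encode(waveform)           # discrete tokens a language model can predict
reconstruction = decode(tokens)
print(tokens)           # [3, 0, 2, 4]
print(reconstruction)   # [0.4, -0.8, 0.0, 0.9]
```

Once audio is in this discrete form, "speak this text in that voice" becomes a sequence-prediction problem over tokens, which is what lets VALL-E borrow techniques from text language models.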

[...] While using VALL-E to generate those results, the researchers only fed the three-second "Speaker Prompt" sample and a text string (what they wanted the voice to say) into VALL-E. So compare the "Ground Truth" sample to the "VALL-E" sample. In some cases, the two samples are very close. Some VALL-E results seem computer-generated, but others could potentially be mistaken for a human's speech, which is the goal of the model. In addition to preserving a speaker's vocal timbre and emotional tone, VALL-E can also imitate the "acoustic environment" of the sample audio. For example, if the sample came from a telephone call, the audio output will simulate the acoustic and frequency properties of a telephone call in its synthesized output (that's a fancy way of saying it will sound like a telephone call, too). And Microsoft's samples (in the "Synthesis of Diversity" section) demonstrate that VALL-E can generate variations in voice tone by changing the random seed used in the generation process.
Microsoft has not provided VALL-E code for others to experiment with, likely to avoid fueling misinformation and deception.

In conclusion, the researchers write: "Since VALL-E could synthesize speech that maintains speaker identity, it may carry potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker. To mitigate such risks, it is possible to build a detection model to discriminate whether an audio clip was synthesized by VALL-E. We will also put Microsoft AI Principles into practice when further developing the models."
Science

This E-Nose Sniffs Out the Good Whiskey (ieee.org) 10

Slashdot reader Hmmmmmm quotes IEEE Spectrum: A whiskey connoisseur can take a whiff of a dram and know exactly the brand, region, and style of whiskey in hand. But how do our human noses compare to electronic noses in distinguishing the qualities of a whiskey? A study published 1 February in IEEE Sensors Journal describes a new e-nose that is surprisingly accurate at analyzing whiskies — and can identify the brand of whiskey with more than 95 percent accuracy after just one "whiff."

E-noses have been gaining in popularity over recent years thanks to their range of valuable applications, from sensing when crops are ready for harvest to identifying food products on the cusp of expiring. It is perhaps unsurprising that many e-noses have also been developed to analyze alcoholic beverages, including whiskey, which had an estimated international market worth US $58 billion in 2018 alone. "This lucrative industry has the potential to be a target of fraudulent activities such as mislabeling and adulteration," explains Steven Su, an associate professor at the Faculty of Engineering and IT, University of Technology Sydney....

Su and his colleagues sought to adapt one of their e-noses so that it could analyze some key qualities of whiskey. Their original e-nose was designed to detect illegal animal parts sold on the black market, such as rhino horns, and they have since also adapted their e-nose for breath analysis and assessing food quality. Their newest, whiskey-sniffing e-nose, called Nos.e, contains a little vial where the whiskey sample is added. The scent of the whiskey is injected into a gas sensor chamber, which detects the various odors and sends the data to a computer for analysis. The most important scent features are then extracted and analyzed by machine learning algorithms designed to recognize the brand, region, and style of whiskey.
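As a rough illustration of that final classification step, here is a minimal nearest-centroid classifier over made-up sensor features. The brands, feature names, and numbers are all hypothetical; the actual Nos.e system uses trained machine-learning models over features extracted from gas-sensor response curves.

```python
# Hypothetical sketch of e-nose classification: assign a new "sniff" to the
# brand whose average feature vector is closest. All data below is invented.

def centroid(rows):
    """Mean feature vector of a list of equal-length readings."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

# Toy training data: (peak sensor response, recovery time) per sniff.
training = {
    "BrandA": [[0.90, 12.0], [0.85, 11.0]],
    "BrandB": [[0.30, 30.0], [0.35, 28.0]],
}
centroids = {brand: centroid(rows) for brand, rows in training.items()}

def classify(sample):
    """Return the brand with the nearest centroid (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(sample, c))
    return min(centroids, key=lambda brand: dist(centroids[brand]))

print(classify([0.88, 11.5]))  # BrandA
```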

Apple

Handwritten Apple 1 Serial Number Mystery Finally Solved By Forensic Analysis (9to5mac.com) 36

An anonymous reader quotes a report from 9to5Mac: Apple geeks may be aware of the mystery of the handwritten Apple 1 serial number present on some of the surviving machines. Namely, no one knew how they got there. Steve Wozniak said that he didn't write them. Steve Jobs said the same. Daniel Kottke, who assembled and tested some of the boards, said it wasn't him. Likewise for Byte Shop owner Paul Terrell, who bought a batch of 50 of them... Achim Baque, who maintains the Apple-1 Registry (a listing of all Apple 1 computers), finally decided to try to solve the mystery. This, it turned out, would not be a trivial task.

Despite Steve Jobs' denial, the handwriting on the boards did seem to match his. However, since Steve rarely signed autographs, making his signature and handwriting especially valuable, the potential impact on the value of the machines with serial numbers meant that as much certainty as possible was needed. Baque asked one of the world's leading handwriting authentication services to compare the serial numbers on two of the Apple 1 boards with known samples of Steve's writing. California-based PSA said that they could do it, but photos wouldn't be sufficient -- they would need to carry out a physical examination of both the boards and the handwriting samples. The company's analysis would include the slant, flow, pen pressure, letter size, and other characteristics.

Daniel Kottke, who was a close friend of Steve, provided a number of letters and postcards written by Steve. Helpfully, these documents include a number of handwritten numbers. Baque then personally transported two of the boards, and the handwriting samples, to California for examination by PSA. The company took three months to perform the analysis, also studying many photos, before authenticating the handwriting on the boards as that of Steve Jobs. Finally, the mystery is solved! Steve clearly just didn't recall doing it.
The full story has been reported at the Apple-1 Registry.
Medicine

COVID-19 Vaccines With 'Minor Side Effects' Could Still Be Pretty Bad (wired.com) 243

"The risk of nasty side effects in the Moderna and Oxford trials should be made clear now, before it ends up as fodder for the skeptics," argues Hilda Bastian, a former consumer health care advocate and a Ph.D. student at Bond University who studies evidence-based medicine. An anonymous reader shares an excerpt from her article via Wired: On Monday, vaccine researchers from Oxford University and the pharmaceutical company AstraZeneca announced results from a "Phase 1/2 trial," suggesting their product might be able to generate immunity without causing serious harm. Similar, but smaller-scale results, were posted just last week for another candidate vaccine produced by the biotech firm Moderna, in collaboration with the U.S. National Institutes of Health. [...] Back in May, a CNN report described the Oxford group as being "the most aggressive in painting the rosiest picture" of its product, so let's start with them. Just how rosy is the Oxford picture really? It's certainly true that this week's news shows the vaccine has the potential to provide protection from Covid-19. But there are flies in the ointment. After the first clinical trial for this vaccine began in April, for example, the researchers added new study arms in which people got acetaminophen every six hours for 24 hours after the injection. That's not featured in their marketing, of course, and I saw no discussion of this unusual step in media coverage in early summer. Newspapers only said the vaccine had been proven "safe with rhesus monkeys," and did not cause any adverse effects in those animal tests. It was a worrying signal though: How rough a ride were people having with this vaccine? Was the acetaminophen meant to keep down fever, headaches, malaise -- or all of the above?

The press release for Monday's publication of results from the Oxford vaccine trials described an increased frequency of "minor side effects" among participants. A look at the actual paper, though, reveals this to be a marketing spin that has since been parroted in media reports. Yes, mild reactions were far more common than worse ones. But moderate or severe harms -- defined as being bad enough to interfere with daily life or needing medical care -- were common, too. Around one-third of people vaccinated with the Covid-19 vaccine without acetaminophen experienced moderate or severe chills, fatigue, headache, malaise, and/or feverishness. Close to 10 percent had a fever of at least 100.4 degrees and just over one-fourth developed moderate or severe muscle aches. That's a lot, in a young and healthy group of people -- and the acetaminophen didn't help much for most of those problems. The paper's authors designated the vaccine as "acceptable" and "tolerated," but we don't yet know how acceptable this will be to most people.

There is another red flag. Clinical trials for other Covid-19 vaccines have placebo groups, where participants receive saline injections. Only one of the Oxford vaccine trials is taking this approach, however; the others instead compare the experimental treatment to an injected meningococcal vaccine. There can be good reasons to do this: Non-placebo injections may mimic telltale signs that you've received an active vaccine, such as a skin reaction, making the trial more truly "blind." But their use also opens the door to doubt-sowing claims that any harms of the new vaccine are getting buried among the harms already caused by the control-group, "old" vaccines.
What about the Moderna vaccine? "According to the press release from May, there were no serious adverse events for the people in that particular dosage group," reports Wired. "But last week's paper shows the full results: By the time they'd had two doses, every single one was showing signs of headaches, chills or fatigue; and for at least 80 percent this could have been enough to interfere with their normal activities. A participant who had a severe reaction to a particularly high dose has talked in detail about how bad it was: If reactions even half as bad as this were to be common for some of these vaccines, they will be hard sells once they reach the community -- and there could be a lot of people who are reluctant to get the second injection."

UPDATE 7/27/20: Slashdot interviewed Oxford Vaccine Trial participant Jennifer Riggins and asked what her reaction was to Wired's article. Riggins is an American technology journalist and marketer who's self-employed in London. Here's what she said:

"I think the article is a poorly written, poorly researched opinion piece. It says offering acetaminophen or paracetamol is unusual with vaccines. I'm a working mom with a three-year-old, and you are told to give them acetaminophen or paracetamol before all live vaccines as they can cause discomfort and fever for the first 24 to 48 hours.

"I'm actually surprised this article was in Wired that tends to be reputable. It seems to be written by a vaccine skeptic at best who knows little about them. This is a dangerous message because we most likely won't have a widely distributed vaccine til 2021 at earliest. Even longer if you consider, like the chicken pox vaccine, it needs a booster for efficacy. This flu season is going to be awful and then combined with this coronavirus. Add to that less kids are getting vaccinated or at least are delayed during the pandemic. Any antivaxxer message is incredibly dangerous. We won't be able to have herd immunity for Covid-19 by winter but we could for the flu which will save so many lives."
Communications

Scientists Create Quantum Sensor That Covers Entire Radio Frequency Spectrum (phys.org) 64

A quantum sensor could give Soldiers a way to detect communication signals over the entire radio frequency spectrum, from 0 to 100 GHz, said researchers from the Army. Such wide spectral coverage by a single antenna is impossible with a traditional receiver system, and would require multiple systems of individual antennas, amplifiers and other components. Phys.Org reports: In 2018, Army scientists were the first in the world to create a quantum receiver that uses highly excited, super-sensitive atoms -- known as Rydberg atoms -- to detect communications signals, said David Meyer, a scientist at the U.S. Army Combat Capabilities Development Command's Army Research Laboratory. The researchers calculated the receiver's channel capacity, or rate of data transmission, based on fundamental principles, and then achieved that performance experimentally in their lab -- improving on other groups' results by orders of magnitude, Meyer said.

"These new sensors can be very small and virtually undetectable, giving Soldiers a disruptive advantage," Meyer said. "Rydberg-atom based sensors have only recently been considered for general electric field sensing applications, including as a communications receiver. While Rydberg atoms are known to be broadly sensitive, a quantitative description of the sensitivity over the entire operational range has never been done." To assess potential applications, Army scientists conducted an analysis of the Rydberg sensor's sensitivity to oscillating electric fields over an enormous range of frequencies -- from 0 to 1012 Hertz. The results show that the Rydberg sensor can reliably detect signals over the entire spectrum and compare favorably with other established electric field sensor technologies, such as electro-optic crystals and dipole antenna-coupled passive electronics.
The findings have been published in the Journal of Physics B: Atomic, Molecular and Optical Physics.
Education

UCLA Abandons Plans To Use Facial Recognition After Backlash (vice.com) 19

Ahead of a national day of action led by digital rights group Fight for the Future, UCLA has abandoned its plans to become the first university in the United States to adopt facial recognition technology. From a report: In a statement shared with Fight for the Future's Deputy Director Evan Greer, UCLA's Administrative Vice Chancellor Michael Beck said the university "determined that the potential benefits are limited and are vastly outweighed by the concerns of the campus community." Since last year, UCLA has been considering using the university's security cameras to implement a facial recognition surveillance system.

These plans have been dogged by student criticism, culminating in an editorial in the Daily Bruin, UCLA's student newspaper, that argued the system would "present a major breach of students' privacy" while creating "a more hostile campus environment" by "collecting invasive amounts of data on [UCLA's] population of over 45,000 students and 50,000 employees." In an attempt to highlight the risks of using facial recognition on UCLA's campus, Fight for the Future used Amazon's facial recognition software, Rekognition, to scan public photos of UCLA's athletes and faculty, then compare the photos to a mugshot database. Over 400 photos were scanned, 58 of which were false positives for mugshot images -- the software often gave back matches with "100% confidence" for individuals "who had almost nothing in common beyond their race."
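Taking the reported figures at face value, the false-positive rate works out as follows (since "over 400" photos were scanned, using 400 gives an upper bound):

```python
# Figures as reported: over 400 photos scanned, 58 incorrect mugshot matches.
# Using exactly 400 photos yields an upper bound on the false-positive rate.
scanned = 400
false_positives = 58
rate = false_positives / scanned
print(f"{rate:.1%}")  # 14.5%
```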

Earth

Is Air Travel Really Bad For the Climate? (thebulletin.org) 171

"The best way to get oneself somewhere with the least impact on the climate is a lot more complex than it may seem at first glance," writes Slashdot reader Dan Drollette (who is also the deputy editor of The Bulletin of the Atomic Scientists).

Slashdot reader Lasrick also submitted their report. A few excerpts:

- For a short distance, taking a train may be better than flying, but there is some ambiguity for long-distance travel. No matter what mode of travel we choose, though, the distance traveled strongly determines emissions.

- Trends suggest that ground transportation is increasingly being electrified (with the potential for using renewable sources). However, there is likely no such technological breakthrough on the horizon for planes. Thus, flying less is an important long-term commitment because it helps to make sure there are more alternative transportation options, and shows where we want government and industry to prioritize efforts toward efficiency and transit... [I]f you choose to drive because it is more climate-friendly than flying short-haul, you are adding an extra car on the road while the plane would have flown anyway. However, in the long run, if many people choose to drive (hopefully in a full car), it is likely there will be fewer short-haul flights.

Obviously, fewer passengers per vehicle will also increase the per-passenger carbon count, and right now short economy flights "generally have higher occupancy and lighter fuel loads," placing them just below a U.S. grid-powered electric car. And mode of transportation is still less important than distance traveled, though very short flights less than 1000 kilometres (621 miles) are more carbon intensive than longer flights "as they spend little time cruising, and are often not very direct."

Energy sources also matter, since trains in Europe are largely electrified, while North America's trains burn fossil fuels. "In Europe, trains are by far the best choice in terms of climate benefits, even if that's not as true elsewhere." Thus the three worst choices right now are a large car (getting 15 miles per gallon), followed by a long (non-economy) business flight, and a "medium" car (getting 25 miles per gallon), while the three best choices are a solar-powered electric car (#3), a crowded U.S. school bus, and Eurostar rail.
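The occupancy effect running through these comparisons is simple arithmetic: per-passenger emissions scale with distance and the vehicle's emission rate, divided by the number of occupants. The emission factor below is an illustrative round number, not a figure from the Bulletin article.

```python
# Per-passenger emissions: distance x per-vehicle emission rate / occupancy.
# 192 g CO2 per vehicle-km is an illustrative value for a mid-size gasoline
# car, not a figure from the Bulletin article.

def per_passenger_g(distance_km, g_per_vehicle_km, passengers):
    """Grams of CO2 attributed to each traveler for one trip."""
    return distance_km * g_per_vehicle_km / passengers

trip_km = 800
solo = per_passenger_g(trip_km, 192, 1)   # driving alone
full = per_passenger_g(trip_km, 192, 4)   # same car with four occupants
print(solo, full)  # 153600.0 38400.0
```

Filling the car cuts each traveler's share fourfold, which is why a full car and a crowded bus fare so well in these rankings.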

But it's important to remember that the majority of people don't fly, Dan Drollette reminds us: "And we should not be so focused on the carbon contributions of air travel (which only account for 2 percent of all carbon emissions) that we take our eyes off the causes of the other 98 percent of carbon emissions."
